Robust Programming

I was perusing some job descriptions recently, and ran across the interesting phrase "robust programming".

The way it was used in the job description suggested it meant something more than my immediate take on the term. Taking "robust" to mean sturdy and able to withstand change, I read it as a form of fail-safe programming: the idea that you program to handle errors gracefully and properly, and try to write programs in a way that makes them difficult to break. Being curious, I went out to that great big research resource (aka The Internet) and ran a couple of searches to see if I could find more information.

Of course, I did.

First stop, Wikipedia:

In computing terms, robustness is the resilience of the system under stress or when confronted with invalid input. It is the ability of the software system to maintain function even with the changes in internal structure or external environment. For example, an operating system is considered robust if it operates correctly when it is starved of memory or disk storage space, or when confronted with an application that has bugs or is behaving in an "illegal" manner, such as trying to access memory or storage belonging to other tasks in a multitasking system.
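To make that a little more concrete, here's a small sketch of my own (not from the article) of what handling invalid input gracefully can look like in C: check everything, and fail cleanly instead of returning garbage the way atoi() happily will.

/* A toy example: parse an integer from a string and report failure on
 * malformed or out-of-range input instead of crashing or guessing. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns 0 on success and stores the value in *out; returns -1 otherwise. */
int parse_int(const char *s, long *out)
{
    char *end = NULL;

    if (s == NULL || out == NULL)   /* reject NULL rather than crash */
        return -1;

    errno = 0;
    long val = strtol(s, &end, 10);

    if (end == s || *end != '\0')   /* no digits, or trailing junk */
        return -1;
    if (errno == ERANGE)            /* overflow or underflow */
        return -1;

    *out = val;
    return 0;
}

int main(void)
{
    long n;
    const char *samples[] = { "42", "12abc", "999999999999999999999", "" };

    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        if (parse_int(samples[i], &n) == 0)
            printf("\"%s\" -> %ld\n", samples[i], n);
        else
            printf("\"%s\" -> rejected\n", samples[i]);
    }
    return 0;
}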

Ages ago, when I was learning object-oriented programming for the first time, I recall learning about Parnas' Principle, which states:

  • The developer of a software component must provide the intended user with all the information needed to make effective use of the services provided by the component, and should provide no other information.
  • The developer of a software component must be provided with all the information necessary to carry out the given responsibilities assigned to the component, and should be provided with no other information.
So both sides of an object, a function, a method, a procedure, a program, and so on should give the other side all the information it needs to take the expected action, and only that information. This fits very well with security models: tell a component only what it needs to know to do its job, and accept only the information that is necessary for the action.
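Here's a rough sketch, in C, of what that kind of information hiding can look like in practice (my own toy example, not Parnas' code): the caller gets an opaque handle and a handful of functions, and learns nothing about the struct's insides.

#include <stdio.h>
#include <stdlib.h>

/* --- What a header (say, counter.h) would expose: an incomplete type. --- */
typedef struct counter Counter;     /* callers can hold a Counter*, nothing more */
Counter *counter_new(void);
int      counter_increment(Counter *c);
long     counter_value(const Counter *c);
void     counter_free(Counter *c);

/* --- What stays in the implementation file, invisible to callers. --- */
struct counter {
    long value;
};

Counter *counter_new(void)
{
    Counter *c = malloc(sizeof *c);
    if (c != NULL)
        c->value = 0;
    return c;
}

int counter_increment(Counter *c)
{
    if (c == NULL)
        return -1;                  /* refuse to act on a bad handle */
    c->value++;
    return 0;
}

long counter_value(const Counter *c)
{
    return (c != NULL) ? c->value : -1;
}

void counter_free(Counter *c)
{
    free(c);
}

int main(void)
{
    Counter *c = counter_new();
    counter_increment(c);
    counter_increment(c);
    printf("count = %ld\n", counter_value(c));   /* prints: count = 2 */
    counter_free(c);
    return 0;
}

The caller gets exactly what it needs to use the counter and nothing it could use to poke at the internals.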

In my searching, I ran into what looks like a very thorough treatment of the topic of robust programming by Matt Bishop at UC Davis.

It's interesting reading, and it makes you realize how fragile typical code really is. One thing I hadn't thought about before is how much you can mangle a data structure you get as part of a library's interface by filling it with inappropriate values, producing 'unexpected results' that can be used to your advantage.
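For illustration, here's a toy example of my own (loosely in the spirit of Bishop's notes, not his code) of a library routine that refuses to trust the values a caller has stuffed into a shared structure:

#include <stdio.h>
#include <string.h>

#define BUF_MAX 64

struct request {
    char   buf[BUF_MAX];
    size_t len;         /* caller claims this many bytes are valid */
};

/* Copy the request's payload out safely; return -1 instead of trusting
 * a length the caller may have mangled. */
int copy_payload(const struct request *req, char *dst, size_t dstlen)
{
    if (req == NULL || dst == NULL)
        return -1;
    if (req->len > BUF_MAX)         /* claimed length exceeds the buffer */
        return -1;
    if (req->len >= dstlen)         /* would overflow the destination */
        return -1;

    memcpy(dst, req->buf, req->len);
    dst[req->len] = '\0';
    return 0;
}

int main(void)
{
    char out[16];
    struct request good = { "hello", 5 };
    struct request bad  = { "hello", 5000 };   /* lying about the length */

    printf("good: %d\n", copy_payload(&good, out, sizeof out));  /* 0  */
    printf("bad:  %d\n", copy_payload(&bad,  out, sizeof out));  /* -1 */
    return 0;
}

A fragile version of copy_payload would just believe req->len and hand an attacker a buffer overflow.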

Hopefully, with more use of test-driven development, pair programming, robust programming, and people focusing on writing bomb-proof code, we will see fewer security issues in software.

Honestly, I'm not holding my breath, because everyone seems to think their code is either invulnerable or not important enough for anyone to care how secure it is.