About Me

I ramble about a number of things - but travel experiences, movies and music feature prominently. See my label cloud for a better idea. All comments and opinions on this blog are my own, and do not in any way reflect the opinions/position of my employer (past/current/future).

16 August 2007

Software Liability

The recent furore about the quality of Chinese-made toys has brought an interesting subject to light: product liability. In the most recent case, Mattel, the world’s largest toy maker, has recalled entire product lines because of two separate issues. The biggest issue (which has happened twice in two weeks) was the use of lead-based paint. As the Wikipedia article on lead explains, long-term exposure to lead can damage the nervous system and cause other problems, and thus the use of lead-based paint has been discontinued in most industrial nations. Here, the issue of product liability has arisen out of the manufacturing process. The second issue was the recall of certain magnetic toys, which are dangerous if swallowed – and despite certain news reports, this is not really the fault of the Chinese manufacturers. Rather, it is a design problem: the toy designers designed a faulty product, and these toys are being recalled so that they can be fixed.

This is my roundabout way of getting to the topic – software liability. Unlike the makers of almost every other product in the world, software development houses (the vendors) are effectively immune from product liability. One of the biggest factors behind security issues in computer systems is that products are not designed and developed (manufactured) correctly – leading to a multitude of security problems. But unlike Mattel, vendors do not have to recall software because of bad design or development. In fact, while many vendors do provide patches, there is no obligation for them to do so (though it does make business sense for them to do so, most of the time).

Many people, most notably Bruce Schneier, have argued that product liability must also extend to software, and that it is the only way to get more reliable, secure software. His basic argument is that, at the moment, there is no incentive for a vendor to make secure software; instead, it is the end user who is forced to spend extra money in an attempt to make his computer more secure through the use of firewalls, anti-virus software etc. Not surprisingly, the majority of vendors are opposed to software liability. They contend that software is too complex, and that there will always be bugs. Furthermore, it is not necessarily just one piece of software that is at fault, but the combination of software applications that are used together.

In the recent report (PDF) by the Science and Technology Committee of the UK’s House of Lords (a really good read), Prof. Mark Handley from University College London sums it up very well:
“If your PC, for example, gets compromised at the moment there is no real liability for the software vendors or the person who sold them the PC or anything else. The question then is: did the person who sold you that software or the person who wrote that software or whatever actually do the best job industry knows how to do in writing that software? If they did then I really do not think they should be liable, but if they did not then I think some liability ought to be there.”


And that is exactly how product liability in other disciplines works: if a bridge falls down, it is only the fault of the construction company if they deviated from accepted standard practices, for example by taking shortcuts or building the bridge with poor quality materials. Likewise, it is the designer's fault if they design a bridge in an area known for earthquakes without considering earthquakes in their design. And it is for the same reason that Mattel is recalling some magnetic toys – because the designers did not consider what would happen if children swallowed those magnets.

Software should be the same. There needs to be some degree of accountability. During the design phase, considerations such as security, reliability and stability must be taken into account. And there are tools out there to conduct rigorous testing of software designs: for example (citing a tool that I know very well), Petri nets can be used to prove whether a process is bounded or not – and an unbounded process is a good indication that the implementation could experience buffer overflows or similar issues.
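To make the Petri net point concrete, here is a minimal sketch of a boundedness check (the function names are my own, not from any library): it explores reachable markings and flags the net as unbounded whenever a marking strictly dominates an earlier marking on the same firing path. This is a depth-capped approximation of the standard coverability idea, not a full Karp–Miller construction.

```python
def can_fire(marking, consume):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in consume.items())

def fire(marking, consume, produce):
    """Fire a transition: remove consumed tokens, add produced ones."""
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

def dominates(a, b):
    """Marking a covers b everywhere and strictly exceeds it somewhere."""
    return (all(a.get(p, 0) >= n for p, n in b.items())
            and any(a.get(p, 0) > n for p, n in b.items()))

def is_bounded(initial, transitions, max_markings=10_000):
    """transitions: list of (consume, produce) dicts keyed by place name."""
    stack = [(initial, [initial])]
    seen = set()
    while stack:
        marking, path = stack.pop()
        key = tuple(sorted(marking.items()))
        if key in seen:
            continue
        seen.add(key)
        if len(seen) > max_markings:
            return True  # exploration cap reached: bounded as far as we looked
        for consume, produce in transitions:
            if can_fire(marking, consume):
                nxt = fire(marking, consume, produce)
                if any(dominates(nxt, m) for m in path):
                    return False  # tokens can grow without limit
                stack.append((nxt, path + [nxt]))
    return True

# A single-slot handshake (the producer waits for the slot to empty): bounded.
handshake = [({"empty": 1}, {"full": 1}), ({"full": 1}, {"empty": 1})]
print(is_bounded({"empty": 1, "full": 0}, handshake))   # True

# A producer that keeps adding tokens to a buffer that nothing drains:
# unbounded -- the Petri-net analogue of a queue that can overflow.
runaway = [({"idle": 1}, {"idle": 1, "buffer": 1})]
print(is_bounded({"idle": 1, "buffer": 0}, runaway))    # False
```

For a real design one would use a proper coverability tool, but the dominance test above is the essence of why an unbounded place signals a potential overflow in the implemented system.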

Similarly, development also needs some degree of accountability. A buffer overflow caused by a missing check on whether the input is of the correct size is not usually the fault of the designer. It is the fault of the programmers who did not bother to check for it, and of the QA people who did not test for it. Yes, programming is still a human process, and unlike robotic assembly lines, it cannot be relied upon to provide perfect results all the time. But all programmers should have a reasonable grounding that enables them to deliver a certain level of quality. It’s the least that should be expected.
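To illustrate the kind of missing check I mean, here is a small Python sketch (the function names are my own, purely for illustration). Python will not corrupt memory the way C does, but the unchecked slice assignment silently grows a fixed-size buffer past its declared bound – the same programmer error that becomes a real overflow in C – while the checked version validates the input size first:

```python
BUF_SIZE = 16

def copy_unchecked(buf: bytearray, data: bytes) -> None:
    # No size check: slice assignment silently grows the buffer past its
    # declared size -- the Python analogue of writing past a C array.
    buf[:len(data)] = data

def copy_checked(buf: bytearray, data: bytes) -> bool:
    # The check the paragraph calls for: reject oversized input up front.
    if len(data) > len(buf):
        return False
    buf[:len(data)] = data
    return True

buf = bytearray(BUF_SIZE)
print(copy_checked(buf, b"hello"))    # True: input fits within the bound
print(copy_checked(buf, b"A" * 64))   # False: rejected, buffer untouched
print(len(buf))                       # 16: still its declared size
copy_unchecked(buf, b"A" * 64)
print(len(buf))                       # 64: the buffer "overflowed"
```

In C, copy_unchecked would be a strcpy with no length check – exactly the class of bug (CWE-120, buffer copy without checking size of input) that liability for negligent development would target.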

Of course, this does not mean that patches will no longer be required. But, hopefully, patches will be used to fix vulnerabilities and bugs that are beyond the basic assurances. And this would still mean that users have to take care of and maintain their computers and software – just like every other product. And just like with every other product, the vendors should inform users of the correct way to use and maintain software. The idea of a “computer driving license” has often been discussed in passing … maybe it is high time that it was taken seriously.

There is of course the case of open source. In open source software, there is often no one to sue (for liability). But I think Bruce Schneier provides the perfect middle ground: open source software that is freely distributed, installed and maintained by the user (with help from online communities) should not offer any liability protection; after all, the software cost nothing to begin with. However, vendors that package and support open source software (such as Red Hat) should be liable. In the end it is about assurance: from a vendor like Red Hat you are getting assurance that a specific set of open source products is secure and stable.

I think it is inevitable that software liability will happen; it is just a matter of when. In their recommendations, the Science and Technology Committee of the UK’s House of Lords state:
“We therefore recommend that the Government explore, at European level, the introduction of the principle of vendor liability within the IT industry. In the short term we recommend that such liability should be imposed on vendors (that is, software and hardware manufacturers), notwithstanding end user licensing agreements, circumstances where negligence can be demonstrated. In the longer term, as the industry matures, a comprehensive framework of vendor liability and consumer protection should be introduced.”

5 comments:

Unknown said...

Well, I'm not sure whether I would agree 100% with what you said. Imposing more stringent rules on software liability by law appears problematic to me, as, if wrongly implemented, it could do damage to the industry. It reminds me of the recent discussions we had in Europe about software patents, which in the US have led to a ridiculous flood of trivial patents and legal conflicts, like with Amazon's One-Click technology. Enforcing software liability by law has the potential to cause similar problems.

I think there are several factors that are directly influenced by software liability, which might be described as follows: product prices, product features, competitive advantages. These factors almost appear to me as the corners of a cloth on a table, where the table represents customer satisfaction. If you pull on one side, the table is subsequently covered by less cloth on the other side. It's not difficult to see why that is. Imposing higher software liability by law means that more time has to be spent during the development process to ensure that the liability requirements will be met by the product. That means that either the product will be more expensive because of a longer development period, or the product will have fewer features. Either way, domestic software companies in countries which have to guarantee software liability by law might have a competitive disadvantage against foreign companies that do not have to fear being sued.

Of course, a high degree of software liability can also be a source of competitive advantage. But certainly not for the large majority of software products. Let's be honest, most of the software we use every day works astonishingly well 90% of the time.
Undoubtedly, there are areas like aerospace or life-supporting systems where high reliability is absolutely essential. But the development process for software in life-critical systems is very different to the usual one, and ridiculously expensive. In these sectors software companies are already held responsible by contract (and, I'm very sure, in many countries also by law) for a breakdown of their system. For developing life-critical systems, formal methods like Petri nets and model checking are used to ensure that the software works correctly not 90% but 98% of the time. But those 8% are only reached with a ridiculous amount of effort.

So, what I'm saying is that I would rather not see a regulation of software liability imposed by law, as I'm not sure whether it will actually benefit the user in the end. Let the market decide. If a company wants bullet-proof software, then let them pay for it!

Like Steve Jobs used to say: "You can't just ask customers what they want and then try to give that to them. By the time you get it built, they'll want something new."

And you know what? I'm one of them :) I'd rather companies spend their money on cool features and new stuff instead of making them go the other 8% to make their software bullet-proof :)

Anonymous said...

I'm with Peter on this one. It's just too complex – for one, simply because of the end user. It's just too damn difficult to satisfy users, and in the process of trying to maintain some balance comes the 8% Peter is talking about.

I feel that the fact that a non-performing product will suffer business-wise is a good enough trade-off (of course this excludes the life-critical products). More stringent rules would just be an added complexity in this regard.

Anonymous said...

I think it depends on the type of software one is referring to. If one is referring to COTS (commercial off-the-shelf) software – most operating systems fall under this category – then I feel that the software vendor should be held accountable for ensuring software quality. This is because, for this category, the vendor has total ownership of the requirements analysis process. But in the case of most custom-built applications there are two parties involved: the software vendor and the customer. In this scenario the customer needs to draw up requirements and submit them to the vendor for development. My suggestion for this category of software is that the customer and the vendor should adopt a collaborative approach and both be held accountable for the software quality. If you want to take it further, one can identify the root cause of the problem and deal with the relevant party. The point I am trying to make is that most software quality issues relate to requirements definition.

phathu said...

I think the problem with enforcing liability in software also comes from the complexity of the products involved. A car is made by one company, and they take responsibility for the whole car. But in IT we sometimes use a combination of products that could lead to an unforeseen state. When that happens, it's not anyone's fault.

These guys don't spend time doing Petri nets etc. to validate their designs; being first to market is the key. They'd rather get to market first and fix the errors as they go.

I also agree with Hans in that I am happy with a product that works most of the time (given that it is not a critical system) instead of a bullet-proof product that sacrifices innovation. Because the more testing you do, the longer things will take before getting to market.

Anonymous said...

Having spent longer than I'd care to admit testing software, I have to agree with Phathu and Hans-Peter. The return on investment to the user for that last 8% (or even more! Let's go with the traditional 80/20 split here)... ok, for that last 20%, is just not worthwhile. The perceived value to the user of improved quality is grossly outweighed by the value they get from extra features and faster turn-around times.

I'd like to put two counterpoints to some of your ideas here:

The first is the driving licence for software. While I agree with Douglas Adams, who said "never underestimate stupid people in large groups", I am inclined to believe that half of the support calls fielded by helpdesks around the world are a result of poor design and a misunderstanding of how ordinary users think, not because the users are stupid. Do we REALLY understand how normal people view technology? I know I don't. To continue the transport analogy, if licences are to be imposed for the use of such common technology as PCs, then there must be the equivalent of public transport available – or bicycles at least! And there is no such facility. Denying people access to technology in the absence of an alternative is tantamount to discrimination at worst, or technologically regressive at best... food for thought, yes?

The second is the notion of quality you have addressed: in particular, the idea that software companies should be held accountable for building software to a reasonable expectation of quality as per industry standards. I would argue that this notion of industry standards is what is flawed, and that the fundamental failure is at the training level.

If you are a civil engineer, you are known to be accredited to a certain standard; you have been trained to build a bridge in an earthquake area. And no one in their right mind would hire someone else for the job. Software, on the other hand, is taught in garages, through manuals, in fly-by-night courses... I'd even go as far as to question the thoroughness of the UCT Computer Science undergraduate course. There sure are some holes in my education, notwithstanding my undergrad idleness.

Can we hold companies liable if the staff they are getting have not been trained to the industry 'standard'? What is the industry 'standard'? The quality of the graduates we employ varies vastly. Some self-taught developers are head and shoulders above Computer Science graduates! I think the problem should be addressed at this level first before commercial liability can be enforced in any way.