11 January 2006

Zero Day Exploit

A recent problem with an ancient file type found in Microsoft products, the Windows Metafile (WMF) format, raises a host of questions about how ready we are to face computer attacks. This vulnerability was described as a zero-day exploit: that is, it was discovered and ready for exploitation well before it could be patched. As it happened, the vulnerability was made public on December 26, a third-party patch (written by a single developer, Ilfak Guilfanov) was issued on December 31, and Microsoft issued its own patch on January 4, six days before the next set of patches was scheduled to be issued.
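To make the flawed mechanism concrete: the hole was in the playback of a metafile record that invokes GDI's Escape function with the SETABORTPROC sub-code, an old feature for registering a callback. What follows is a minimal sketch in C, not a working exploit; the struct name MetaRecord is mine, though its layout mirrors the documented METARECORD structure, and the two constants are the real values from wingdi.h.

    #include <stdint.h>

    /* Layout of a classic Windows metafile record (METARECORD in
       wingdi.h): a size in 16-bit words, a GDI function code, and
       a variable-length parameter array. */
    typedef struct {
        uint32_t rdSize;      /* record size, in words */
        uint16_t rdFunction;  /* which GDI call to replay */
        uint16_t rdParm[1];   /* parameters for that call */
    } MetaRecord;

    #define META_ESCAPE  0x0626  /* record type: invoke GDI Escape() */
    #define SETABORTPROC 0x0009  /* Escape sub-code: register a callback */

    /* The dangerous pattern: a META_ESCAPE record whose first parameter
       is SETABORTPROC. When the metafile is played back, GDI treats
       bytes supplied by the file as an "abort procedure" and will call
       them, so merely rendering a crafted image runs attacker code. */

The point to notice is that the file format itself, by design, carries instructions to be replayed; the security problem is inherited from that ancient design, not from a simple coding slip.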

I wrote about one question all this raises in a response to a post on the SecuriTeam blog:

Microsoft has institutional problems with fixes that good will, excellent design, and technical acuity cannot solve.

Consider the recent WMF vulnerability. Microsoft put 200 people to work to find the fix. Once it was found, it had to be tested extensively. Once approved, the documentation for it had to be translated into more than 20 languages.

Microsoft's customers require all this. They have also preferred to have all fixes come on a predictable schedule.

Given that all this is required, and that it takes time, Microsoft showed remarkable flexibility and speed. To ask them to react as quickly as Ilfak Guilfanov, who wrote and issued a patch in a matter of hours, would be to ask a supertanker to turn on a dime.

This is neither simply to praise Microsoft nor to offer one more argument for abandoning IE. It is to outline a problem that affects all of us when the millions of users who rely on Microsoft get attacked.

There must be a way for Microsoft to respond more quickly. Interestingly, Microsoft's OneCare program told its customers that the problem was solved several days before the patch was issued. But other questions arise as well. Can we, and should we, rely on third-party patches like the one Guilfanov created? At about the time his patch came out, the Metasploit project issued a way of exploiting the vulnerability. They argued that it gave the good guys a way to test their own systems' exposure. But the bad guys latched onto it to create additional exploits. Did Metasploit issue their exploit code too soon? Or are they really allies of the bad guys?
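For a sense of what a third-party patch like Guilfanov's actually does: as it was described at the time, it injected a small hook into processes using GDI and disabled the SETABORTPROC escape inside gdi32.dll. Here is a rough sketch of that core idea in C; the injection and hook-installation machinery is omitted, and the names RealEscape and HookedEscape are mine, with RealEscape assumed to be filled in by the installer.

    #include <windows.h>

    /* Pointer to the real gdi32!Escape, saved by the (omitted)
       hook-installation code. */
    static int (WINAPI *RealEscape)(HDC, int, int, LPCSTR, LPVOID);

    /* Interposed Escape: pass every request through except
       SETABORTPROC, the one sub-code the exploit depends on.
       Normal printing is unaffected, since modern programs register
       abort callbacks via the SetAbortProc API, not this escape. */
    int WINAPI HookedEscape(HDC hdc, int nEscape, int cbInput,
                            LPCSTR lpvInData, LPVOID lpvOutData)
    {
        if (nEscape == SETABORTPROC)
            return 0;  /* refuse: report failure, register nothing */
        return RealEscape(hdc, nEscape, cbInput, lpvInData, lpvOutData);
    }

What made this feasible for a single developer in hours is how narrow it is: rather than fixing the metafile parser, it simply refuses the one operation the exploit needs. That narrowness is also why a third-party patch cannot substitute for the vendor's fix; it blocks one route in, not the underlying flaw.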

In more general terms: what should we do about old code that was not written with security in mind? Do we really have to keep it? If not, who is responsible for the problems it causes? After all, updating any software has costs, both monetary and in the time it takes to learn the new version.

It was a fascinating series of events. You can expect a paper about them.