25 November 2007

Subprime Risk

How could some of the brightest minds in business create the subprime mortgage mess? Most went to business school and had decades of experience in their industry. All are highly paid, with their pay, supposedly, reflecting their talent, education and experience. The market valued their skills highly. Yet their foolishness has triggered a plunge in the stock market, with financial stocks--the stocks of their own companies--leading the way, and a recession assuredly to come (it may already have begun).

It is not the first time that the brightest in business have eagerly touched a tar-baby that left them stuck when things went bad. Enron and the Internet bubble are but two other recent examples. What they have in common is a failure to realistically consider risk. They also showed a willingness to abandon some of the simple, cardinal virtues of conducting business.

The focus here will be on risk. That has strong implications for much of what happens in public life these days, and for information security, my chosen field, in particular. What happened with these great financial schemes is that their perpetrators ignored risks that, predictably, became reality. With subprime mortgages, it was predictable that interest rates, which could hardly have been lower, would rise in the face of inflationary pressures caused by federal budget deficits and the falling dollar, among other things. When those rates rose, the poor souls whose credit-worthiness was questionable were bound to default. How could that not be seen? Similar blindness afflicted the moguls at Enron and the visionaries of the dot-com era.

It has also afflicted the Bush administration, which entered Iraq without a clear view of the risks of insurgency and civil war. And--a central point here--it afflicts many of those who decide how to allocate time and effort on information technology (IT). There are two ways that risk is dealt with poorly in IT.

First, the simple fact of risk is ignored. The recent loss of the records of 25 million people in Britain is a case in point. An employee was allowed to put those records on a laptop, unencrypted and protected only with a password. The thought of loss seems to have escaped those who established policies and procedures for the agency that allowed it. I imagine that they believed that such things rarely happen, so why protect against an occurrence that would probably never happen? The potential cost if it did was not considered. Nor were the--low--costs of countermeasures.

Another example comes from my own experience of several years ago. A network I worked on was unprotected by anti-virus software. We knew we needed it, but the project manager told us to get licenses from HQ, which had extras, already paid for, that we could have. They dawdled; so did we (and, to be fair, I did not push the issue--a lesson for me). Then the Klez virus struck, big time. The network went down. Suddenly, no price was too high for AV. We got it. It took us several days to clean up and install the anti-virus package.

Risk is also poorly analyzed all too often. One agency I worked at decided that a particular piece of malware was a major threat because the unit chief saw that it was covered prominently on CNN. Other, more severe threats were ignored. This agency, like many organizations, never did even a preliminary analysis of the risks it faced and the most effective means of countering them. Consequently, at least some of the means allocated to IT security--which will always be limited--were misallocated. These organizations are prepared to stop something, but not what threatens them most.
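To make the point concrete, here is a minimal sketch of the kind of preliminary analysis such an agency could do: rank threats by annualized loss expectancy (ALE), the standard formula ALE = SLE x ARO, where SLE is the single loss expectancy (the cost of one incident) and ARO is the annualized rate of occurrence. The threat names and dollar figures below are invented purely for illustration.

```python
# Hypothetical threat list: (name, single loss expectancy in $,
# expected incidents per year). Figures are made up for illustration.
threats = [
    ("Malware featured on CNN", 50_000, 0.1),
    ("Lost unencrypted laptop", 200_000, 0.5),
    ("Phishing-driven account takeover", 30_000, 4.0),
]

def ale(sle, aro):
    """Annualized loss expectancy: expected yearly cost of a threat."""
    return sle * aro

# Rank threats by expected yearly cost, highest first.
ranked = sorted(threats, key=lambda t: ale(t[1], t[2]), reverse=True)

for name, sle, aro in ranked:
    print(f"{name}: ALE = ${ale(sle, aro):,.0f}/year")
```

Even this back-of-the-envelope exercise puts the mundane, frequent threats at the top of the list and the CNN-famous malware at the bottom--the opposite of how the agency actually spent its effort.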

None of that would matter if risks did not become reality. Yet, as the executives at Citibank, Countrywide, Washington Mutual, and other firms have found, they do.
