A house of cards

The failures in risk management that allowed the global financial crisis to develop had their sources more in management than in technology. How can such problems be avoided in the future?

The question that inevitably follows the emergence of a crisis in banking and financial systems such as the current one is as simple as it is obvious: how did this happen? How did the people who are meant to be masters of risk management get it so catastrophically wrong and drag global financial systems to the brink?

How was it that some of the most respected long-standing names of the financial world, like Lehman Brothers, could fall in a heap almost overnight, brought down by millions of sub-prime mortgages that everyone now concedes should never have been written in the first place?

There's no single reason. It's now painfully obvious that financial institutions took enormous risks. Regulation of the financial sector was lacking, especially in the United States. But it's also clear that alarm bells, which were supposed to be part and parcel of the system, didn't ring.

Technology designed to prevent crisis failed to fire and - as is often the case - those failures resulted from management rather than the systems themselves. Rules were bypassed and data fudged as people did things well outside what they should have been allowed to do.

Part of the problem was the extreme complexity of some of the financial instruments being traded. What had been simple, if risky, residential mortgages were bundled up, on-sold, and sliced and diced. The risks were obscured by the intricate financial instruments and a web of transactions. A number of reports have shown that some of the traded securities were so complex that alarms were not sounded simply because authorities didn't know what they were dealing with.

But it also appears that, riding what seemed like an unstoppable wave of growth, many financial institutions didn't want alarm bells to ring, lest they spoil the party. Saul Hansell wrote in The New York Times that some people on Wall Street "continued to trade complex securities concocted by their most creative bankers even though their risk-management systems weren't able to understand the details of what they owned".

Traders sliced up extremely complex securities, he says, and pumped them into systems as though they were simple bonds - a little bit of corner-cutting that no one noticed while the good times rolled. "But once the mortgage market started to deteriorate, the computers were not able to identify all the parts of the portfolio that might be hurt," he wrote. Wall Street essentially lied to its systems, he says.

It was revealed in March that lenders at US bank JPMorgan Chase had been using a "cheats and tricks" sheet to fudge entries in the bank's Zippy automated mortgage-processing system. With a barely noticeable change here and there - like bumping up a borrower's salary by as little as $US500 ($749) - lenders could write new mortgages for people that the business rules said shouldn't be getting one. System safeguards were bypassed simply by doing enough to get the borrower over the line.
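
The mechanics are easy to picture. Below is a minimal sketch, in Python, of why a hard approval threshold invites exactly this kind of fudging; the 30 per cent repayment-to-income rule, the function name and the figures are hypothetical illustrations, not the actual Zippy logic.

# Hypothetical approval rule: a hard cutoff that a $500 salary bump can flip.
def approve_mortgage(annual_income: float, annual_repayments: float) -> bool:
    """Approve only if repayments stay within 30% of income (illustrative)."""
    return annual_repayments / annual_income <= 0.30

# A borrower just over the line is declined...
print(approve_mortgage(59_000, 17_800))   # False: 17,800 / 59,000 is about 30.2%
# ...but nudging the salary up by $500 gets them across it.
print(approve_mortgage(59_500, 17_800))   # True: 17,800 / 59,500 is about 29.9%

Any safeguard expressed as a single sharp threshold fails this way: the rule still "passes", but only because the input was massaged to pass it.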

But the biggest warning that things were amiss came in January - and not from Wall Street, but from the La Défense financial district of Paris. Société Générale trader Jérôme Kerviel was found to have exceeded his trading limit, and his superiors duly investigated. What they uncovered was horrific. Not only had Kerviel gone out on a limb, but by taking on an astonishing €50 billion ($96 billion) in unauthorised positions over the past year he had taken the European financial system out with him.

Kerviel covered his tracks using systems access privileges gained in more lowly positions and concocted a labyrinth of fake transactions and emails. But he told police after his arrest that his position would have been uncovered earlier had anyone bothered to look properly. Poor controls and procedures, combined with his own deceit, meant Kerviel was able to operate light years beyond his limited authority.
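
One lesson commonly drawn from the case is that access rights must be revoked, not merely accumulated, as people change jobs. As a rough illustration, with hypothetical role names and entitlements, a simple reconciliation pass can flag rights left over from earlier positions:

# Hypothetical roles and entitlements; a real bank would pull these
# from an identity-management directory.
ROLE_ENTITLEMENTS = {
    "middle_office": {"view_trades", "confirm_trades"},
    "trader": {"view_trades", "enter_trades"},
}

def excess_entitlements(current_role, granted):
    """Rights the user holds beyond what the current role justifies."""
    return granted - ROLE_ENTITLEMENTS[current_role]

# A trader promoted from the middle office who kept confirm_trades can
# both enter and confirm positions, so the separation of duties is gone.
print(excess_entitlements("trader", {"view_trades", "enter_trades", "confirm_trades"}))
# {'confirm_trades'}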

Managing director of S2 Intelligence Bruce McCabe says the complexity of modern banking systems and the speed at which they operate make them increasingly difficult to police. "The key word here is complexity," he says. "If you're looking at it from a bank perspective, securing yourself against these sorts of situations is getting harder and harder, because of the complexity of the systems [involved]."

In today's hothouse market, trading problems are amplified. Automation and the need to trade as quickly as possible have had an impact on the ability of information chiefs to patrol the use of their systems, McCabe says.

"We're able to do more transactions, very quickly," he says. "Computer

systems are enabling the faster transfers. In particular that

manifests itself in trading.

"They're in the business of decreasing execution time and reducing

latency, in particular, latency responding to the market. There's a

world of technology being invested in to execute trades automatically.

These are based on textual news feeds, on mining live CNN news feeds,

because the first trade is more important than the best trade. It's

more about reacting faster than others."

Technology is changing the way people look at markets, McCabe says. "There's actually a lot of debate in computer science circles about financial markets growing more chaotic."

It's not only banks that have faced problems. At all levels of government, access control is a seemingly never-ending issue. Auditors-general's reports into the information technology systems of government departments and agencies routinely give public servants a hiding for failing to implement better access controls.

In January, Victoria's auditors became fed up with continuing access-control problems. In a report to parliament, they pointed out that even though problems had been identified, no one was fixing them. Their frustration was obvious.

"Many of the weaknesses identified during this audit cycle have been

previously identified and reported, either specifically to the

management of each agency, or generally through this report," they

said. "It is disappointing, therefore, that these weaknesses remain,

particularly given the potential exposures that can arise from poor

security, poor change-management practices and poor continuity

planning."

The Australian Financial Review recently reported that in the 2007-08 financial year, the Australian Taxation Office, Centrelink and the federal Child Support Agency reported 140 instances of "browsing" (unauthorised internal access to private and personal information), leading to 17 resignations at Centrelink and five dismissals at the Tax Office. Several matters were referred to the Commonwealth Director of Public Prosecutions.

Former chief information security officer for the Commonwealth Bank, Sarv Girn, says the founding principles of control were put in place more than 30 years ago. "Principles on access rights and management really go back to the old mainframe days," he says. "If you apply the same principles to [today's] systems, you're actually quite sound."

Girn, who was speaking before his recent appointment as Westpac chief technology officer, talks of a hybrid model, which employs a mixture of automated procedures and manual checks to ensure people only have access to the information they need and cannot operate outside their authorised sphere. "In a large organisation, I think you need a certain amount of automation to embed the control of information," he says. "The way we've done that is to have a level of automation and work flow when we create access to key systems. That automation allows us to maintain the integrity of the information."

This is then checked manually. "Without that kind of hybrid approach, with manual and automation, it becomes difficult in any large organisation," he says.

Girn describes a life-cycle approach to access rights. This process starts with defining the right job role and profile for staff. "We have a mechanism [we use] to allocate that to individuals, which then provides them with the access rights," he says. "That automation is backed up by reconciliation mechanisms to make sure what you've set out to do has been done. We have tools and facilities that do that for key systems. On top of that, we conduct regular reviews to ensure the applications and infrastructure have the right level of access and integrity of information."
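
Girn doesn't detail the mechanism, but the reconciliation step he describes can be pictured as a comparison between the access each job profile should confer and what the systems actually grant. A sketch, with hypothetical profiles and grants:

# What each job profile is supposed to confer (hypothetical).
PROFILE_ACCESS = {
    "teller": {"customer_lookup"},
    "loan_officer": {"customer_lookup", "loan_origination"},
}

# What the systems actually grant each person (hypothetical).
ACTUAL_GRANTS = {
    "alice": ("teller", {"customer_lookup"}),
    "bob": ("loan_officer", {"customer_lookup", "loan_origination", "rate_override"}),
}

def reconcile():
    """Report, per user, any grants not justified by the user's profile."""
    drift = {}
    for user, (profile, granted) in ACTUAL_GRANTS.items():
        excess = granted - PROFILE_ACCESS[profile]
        if excess:
            drift[user] = excess
    return drift

print(reconcile())  # {'bob': {'rate_override'}}

Anything the report surfaces is exactly the gap between what was set out to be done and what was actually done.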

S2's McCabe argues that companies and other large organisations are better off acknowledging that, given the size and complexity of their systems, somewhere along the line something is going to go wrong.

After the Société Générale debacle, those responsible for reviewing the company's systems didn't know where to begin looking. "Where do you start?" he asks. "How do you audit them? How do you put procedures in place that can't be worked around? It's a bit like the concept of trusted computing. Mathematically, you can't secure yourself; it's just too complicated."

He says organisations are better off keeping their data in such a way that one piece can't give away the keys to the kingdom. "The principles of all computer security now are getting into this other keyword - compartmentalisation," McCabe says.

Government systems, like the ones found to be irresistible by browsers at Centrelink and the ATO, provide good examples. Put all data in one place and the problems only worsen. "Governments have taken a long time to realise that it's a really bad idea to provide centralised citizen records rather than federated records that can be brought together on demand," he says.

"The main principle is to compartmentalise that information so you can

limit the damage and quickly find the people without a disaster

occurring. If you look at securing citizen information, with

smartcards and national ID cards, it's a principle for limiting

damage, because there'll always be people abusing the privilege."

The same ideas apply to finance, McCabe says. Systems need to be designed in a compartmentalised way to firewall any damage that can be caused by a breach in any one area.

"That's probably your overriding concern," he says. "When you have

people with access to very, very large financial trading capabilities

in one place, that's where you run into big problems. The exposures

are inevitable. The issue shouldn't be trying to prevent them

completely - because that's impossible. It should be about limiting

the damage through compartmentalisation."
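
In code terms, compartmentalisation means a credential unlocks one slice of the data and nothing else, with the full picture assembled only on demand. A toy sketch, with hypothetical compartments and identifiers:

# Each compartment holds one slice of a citizen's record (hypothetical data).
COMPARTMENTS = {
    "tax": {"c123": {"taxable_income": 72_000}},
    "health": {"c123": {"blood_type": "O+"}},
}

def read(compartment, citizen, credential_scope):
    """A credential is scoped to a single compartment; a breach stops there."""
    if credential_scope != compartment:
        raise PermissionError("credential scoped to " + credential_scope)
    return COMPARTMENTS[compartment][citizen]

def assemble_on_demand(citizen, scopes):
    """The federated view exists only transiently, for a caller holding every scope."""
    record = {}
    for scope in scopes:
        record.update(read(scope, citizen, scope))
    return record

print(read("tax", "c123", "tax"))                     # one slice only
print(assemble_on_demand("c123", ["tax", "health"]))  # full view, built on request

A leaked tax credential then exposes income figures, but not health records: the damage is limited by construction.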

But Girn says that with the right investment and strategy, as well as a big dose of diligence, problems can be managed appropriately. He doesn't necessarily agree with the federated data approach, at least from a security standpoint.

"You have to apply the same principles whether you're looking at a

piece of the data or you're looking at the whole lot," Girn says. "The

crooks can piece together individual pieces; it can be valuable

information for criminals. Even if the data is held in multiple

locations, you still have to protect that, because in its own right it

could be confidential. It's the same standard regardless of whether

it's the federated or centralised [model].

"Sometimes what drives you towards a federated approach is not so much

an information security perspective. It's more performance, because

you're trying to put the data closer to the consumer, whether it's

your own internal network or elsewhere."

Girn says the bank has controls at a database level, for example, which complement the access rights established at a transaction level. "Whether it's the database, or the server," he says, "we have controlled access rights, monitoring and logging to ensure that we can comfortably provide a confident assertion that the data is sound. But you need that multiple layering in order to be confident."

The layered approach also extends to business rules, which form one of the main defences against cheat-sheet data fudging. "But those kinds of rules are also implemented out in the different parts of the architecture," Girn says. "So sometimes business rules are in the application. Others can be database rules - still business rules - but [operating] in the database. That tiering ensures you are protected."

Critical rules have to be made non-negotiable and difficult to change, he says. "Having them built into your system, and having the key high-risk ones as being non-negotiable, is the approach we have taken," he says. "It really makes life easier when you're doing risk-management and market-risk analysis later on."
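
The tiering Girn describes can be sketched as two layers of checks: a negotiable application-level rule that can be waived with approval, and a hard data-level rule that fires regardless of which path wrote the trade. The limits, names and figures below are hypothetical, not the bank's actual rules:

HARD_LIMIT = 1_000_000  # hypothetical non-negotiable trading limit

def application_checks(trade, override_approved=False):
    """Application-tier rule: negotiable with explicit approval."""
    if trade["notional"] > 500_000 and not override_approved:
        raise ValueError("desk-level limit exceeded; approval required")

def data_layer_checks(trade):
    """Data-tier rule: non-negotiable, enforced no matter who books the trade."""
    if trade["notional"] > HARD_LIMIT:
        raise ValueError("hard trading limit breached; rejected")

def book_trade(trade, override_approved=False):
    application_checks(trade, override_approved)
    data_layer_checks(trade)  # runs even when the application tier is waived
    print("trade booked:", trade)

book_trade({"notional": 750_000}, override_approved=True)    # negotiable rule waived
try:
    book_trade({"notional": 2_000_000}, override_approved=True)
except ValueError as err:
    print("rejected:", err)                                  # hard rule still fires

Because the non-negotiable check sits in the lower tier, a cheat sheet that games the application layer still cannot push a breach past it.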

But McCabe says Société Générale showed that, without a push for better regulation, many banks and other financial institutions will continue to be unaware of problems they are sitting on.

"I suspect there are a lot of banks that just wouldn't know if they

had those exposures," he says. "They've built systems on systems and

it turns out that there's one more person that has quite a lot more

access than they really should have."
