Part 6: Monitoring the institutions in the Scheme

The Treasury: Implementing and managing the Crown Retail Deposit Guarantee Scheme.

6.1 In this Part, we discuss:

  • the objectives of monitoring institutions;
  • how the monitoring was carried out; and
  • the outcome of the monitoring.

In summary, the Treasury knew when the Scheme was introduced that it would be important to monitor the institutions covered by the guarantee. The monitoring framework that the Treasury implemented was, for the most part, effective. The Treasury identified all the institutions that triggered the guarantee as having a high risk of failure before they failed. However, it took five months for the Treasury to begin monitoring using information received under its agreement with the Reserve Bank. The Treasury had just started monitoring when the first failure under the Scheme occurred.

In our view, the Treasury was waiting for monitoring data to arrive from the Reserve Bank rather than planning for its arrival and for the next stage in the monitoring process (that is, what it would do with the information once it arrived). The Treasury should have prepared a monitoring work stream to run concurrently with the application process.

The Treasury monitored to prepare for potential payouts rather than to ensure that institutions did not fail. When monitoring showed that deposits with finance companies were increasing, the Treasury did not ask those finance companies what they were doing with the money or take measures to prevent finance companies from engaging in riskier investments.

When South Canterbury Finance was accepted into the Scheme, the Treasury could not know how vulnerable that finance company really was. However, South Canterbury Finance’s deposit base increased by 25% in the four months immediately after the Scheme was introduced. Its loan book also increased in the early months of 2009, with many loans made to property developers, often with capitalised interest or secured by second mortgages, increasing South Canterbury Finance’s risk profile.

Although we cannot say for certain, closer monitoring of South Canterbury Finance earlier in the Scheme might have helped identify it as a problem institution and allowed the Treasury to consider earlier whether it needed to take steps to limit the Crown’s liability.

The Treasury’s monthly financial statements did not include any provisions for payouts under the Scheme until June 2009, when the provision was estimated at $0.8 billion. The Treasury knew before June 2009 that further failures of finance companies were likely, so this information should have been better reflected in the monthly financial statements earlier than June 2009. Once a Provisioning Working Group was set up in May 2009, it worked very well – but lower-level staff were instrumental in setting up some of the processes that should have been set up by senior management in late 2008.

Objectives of monitoring institutions

Issuing a deposit guarantee meant that the Treasury needed to closely monitor the financial institutions covered by the Scheme to effectively manage the Crown’s liability at the same time as maintaining depositor and public confidence. Monitoring of financial institutions was important because the conduct and financial health of an institution covered by the Scheme determined:

  • whether the Crown would be required to pay out under the guarantee; and
  • how much the Crown would be required to pay.

The Treasury recognised very early the importance of monitoring. In its 13 October 2008 report, the Treasury highlighted ongoing monitoring as a “practical detail” that the Treasury and the Reserve Bank needed to work through. In those early days, the Treasury thought that monitoring would include receiving a report after 12 months to assess aggregate indebtedness and liability for the fees charged under the Scheme. The report also indicated that additional monitoring could be imposed under the contract, such as quarterly reporting or copies of the standard reports provided to the trustees. At the time, the Treasury saw the role of the Reserve Bank as helping to verify the information received and monitoring registered banks. It was clear that monitoring financial institutions was an unfamiliar activity for the Treasury. The Treasury did not have staff with the necessary skills and, we were told, began a process to recruit people with the necessary skills in December 2008.

In our view, the Treasury’s objectives for monitoring financial institutions were to:

  • assess the effect that the Scheme was having on depositor confidence and any unintended consequences of the Scheme (see Part 5);
  • ensure compliance with the guarantee deed (which included a requirement to comply with the terms of the trust deed);
  • identify any activities by financial institutions that did not align with the Government’s intention for the Scheme (including activities to take undue advantage of the Scheme by taking deposits under the guarantee that the institutions would not otherwise have taken); and
  • assess the financial position and anticipated failure of institutions to:
    • provide accurate provisions in the Government’s financial statements for likely payouts under the Scheme;
    • prepare to pay out depositors; and
    • take any other action before or after an institution failed.

The Treasury adopted a minimal intervention approach to monitoring financial institutions. This meant that there was no objective of taking steps to prevent a guaranteed institution from failing or to minimise the costs of failure for the Crown.

In our view, the objectives of monitoring financial institutions should have been clearly documented, along with the monitoring tasks designed to meet these objectives, including the role of the Reserve Bank as well as the Treasury’s own role. There is a useful document prepared by the Treasury that sets out the NBDT monitoring process, including a flowchart detailing the information flows and relevant responsibilities. We understand that this flowchart was produced in June 2010 to help with the provisioning process, but this was more than a year after the start of the Treasury’s monitoring work. A document of this nature (but relating to the broader monitoring role) should have been prepared by the Treasury in late 2008, along with clear monitoring objectives.

How the monitoring was carried out

The role played by the Reserve Bank

Initially, the intention was for the Reserve Bank to monitor only banks covered by the Scheme. Over time, however, the Treasury and the Reserve Bank determined that the Reserve Bank was also best placed to monitor NBDTs. This was because the Reserve Bank’s formal relationship with the trustees enabled it to collect prudential information from individual NBDTs. The Reserve Bank had to recruit additional resources to do this monitoring.

The Reserve Bank used its new powers under Part 5D of the Reserve Bank Act to request regular information from trustees, which were the supervisors of the institutions. The information gathered was not audited and was based on management accounts. Although slow, this process was preferable to having the Treasury, which early in the Scheme had limited experience in monitoring financial institutions, collect this information directly from trustees.

The framework for NBDT monitoring

The Reserve Bank prepared the monitoring framework for NBDTs in consultation with the Treasury and some trustees. The Scheme was announced on 12 October 2008, and the Treasury received its first risk ranking report based on this information at the end of March 2009. (The information was from January 2009.) We have been told that some financial data for individual institutions, as well as sector information, were collected by the Reserve Bank before the introduction of the Scheme and were provided to the Treasury. We have not seen any evidence that the data and information were used in the Treasury’s monitoring.

The Scheme was in place for five months before the Treasury began monitoring NBDTs through its agreement with the Reserve Bank. The reasons for this delay included the need to hire new staff and the time that individual institutions required to implement systems to facilitate the flow of data from the trustees. The December and January holiday period added to the delay. Nevertheless, the timing worked well from the Treasury’s perspective: the Treasury had almost completed assessing all the applications and had hired some staff (who started in February and March 2009) with experience in analysing financial institutions.

The Reserve Bank prepared a monthly template that was completed by the institutions, sent to the trustee for review, and then forwarded to the Reserve Bank. Institutions had six weeks to submit the data to the Reserve Bank through the trustee. This was later reduced to four weeks. The information collected was a typical suite of prudential statistics covering the balance sheet, asset quality, performance, and related-party activity. This template was drafted in January 2009 and refined by the Reserve Bank by mid-2009. The Reserve Bank also reviewed additional data, such as financial accounts and prospectuses.

The Reserve Bank gave the Treasury:

  • a monthly report that ranked the riskiness of the NBDTs relative to each other (but not the risk of them failing);
  • monthly reports providing a sector overview for the finance company and savings institution sectors;
  • detailed individual monthly analysis for institutions on the watch-list (those that were ranked as high or medium risk, based on the relative risk ranking report and/or those with higher potential losses); and
  • weekly liquidity reports for higher-risk institutions (which began in September 2009).

The Reserve Bank prepared models to analyse the data and to estimate the relative risk of the institutions and the losses that would occur if institutions failed. The Treasury used one of these models (the estimated loss model) as the basis for provisioning estimates (that is, estimates of how much the Crown needed to include in its financial statements for expected losses under the Scheme). Provisioning estimates also required estimates of the probability of failure. These were prepared by the Treasury’s Provisioning Working Group rather than by the Reserve Bank.
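The provisioning arithmetic described above can be sketched as follows. This is an illustrative outline only, not the Treasury’s actual model; the institution mix and all figures are hypothetical.

```python
# Illustrative sketch of a provisioning estimate: for each institution,
# the expected Crown loss is the probability of failure multiplied by
# the estimated loss if the institution fails. All figures hypothetical.

def provision_estimate(institutions):
    """Sum expected losses ($ million) across guaranteed institutions."""
    return sum(p_fail * loss_given_failure
               for p_fail, loss_given_failure in institutions)

# (probability of failure, estimated loss given failure in $ million)
hypothetical_portfolio = [
    (0.60, 600.0),   # large high-risk finance company
    (0.25, 150.0),   # medium-risk institution
    (0.05, 40.0),    # low-risk institution
]

total_provision = provision_estimate(hypothetical_portfolio)
print(f"Total provision: ${total_provision:.1f} million")
```

In this sketch, the loss-given-failure estimates correspond to the Reserve Bank’s estimated loss model, while the failure probabilities correspond to the judgements made by the Treasury’s Provisioning Working Group.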

The relative risk ranking model used a spreadsheet to combine measures of liquidity, asset quality (that is, the quality of the institution’s lending), income margins, capital sufficiency, and related-party exposures (added in December 2009) into a single riskiness measure. The measure was then used to rank each institution. The Treasury relied substantially on the output of this model, which was developed by the Reserve Bank. Changes made by the Reserve Bank to the relative risk ranking model included judgement-based overrides of output (for example, increasing or decreasing the risk ranking of institutions). Ultimately, the Treasury decided what the relative risk ranking should look like and what the Treasury’s responses should be. Accordingly, the Treasury adjusted the risk ranking of institutions based on all of the information that it had available.
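A minimal sketch of how a relative risk ranking of this kind might work is set out below. The indicator weights, scores, and override are assumptions for illustration only, not the Reserve Bank’s actual model.

```python
# Illustrative relative risk ranking: combine indicator scores (higher =
# riskier) into a weighted composite, allow a judgement-based override,
# then rank institutions. All weights and scores are hypothetical.

# Assumed weights for each prudential indicator (sum to 1.0).
WEIGHTS = {
    "liquidity": 0.25,
    "asset_quality": 0.30,
    "income_margin": 0.15,
    "capital": 0.20,
    "related_party": 0.10,   # this measure was added in December 2009
}

def composite_risk(scores, override=0.0):
    """Weighted riskiness measure, plus any judgement-based adjustment."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS) + override

institutions = {
    "Finance Co A": composite_risk(
        {"liquidity": 8, "asset_quality": 9, "income_margin": 6,
         "capital": 7, "related_party": 9}),
    "Finance Co B": composite_risk(
        {"liquidity": 4, "asset_quality": 3, "income_margin": 5,
         "capital": 4, "related_party": 2},
        override=1.0),  # e.g. adverse market intelligence
}

# Rank from riskiest to least risky.
ranking = sorted(institutions, key=institutions.get, reverse=True)
print(ranking)
```

The override parameter reflects the judgement-based adjustments described above, which both the Reserve Bank and the Treasury applied on top of the model’s output.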

The Reserve Bank also provided qualitative input based on its knowledge of the institutions and on market intelligence (such as feedback from other regulators, including the Securities Commission and Companies Office, as well as trustees and other industry participants).

In November 2009, the Treasury requested that the Reserve Bank provide more detailed monthly monitoring (watch-list reports) for more institutions. Our evidence from interviews is that, although the Treasury had “a good feel” for the risks in larger entities, it needed additional detail for some smaller institutions.

As well as the reports received from the Reserve Bank, the Treasury also reviewed other external information. This included financial accounts, prospectuses, reports from ratings agencies, information provided as part of the application process, and general “intelligence” from other market participants. The Treasury also received updates from the Securities Commission and the Companies Office in the form of regulator meetings and informal information exchanges. Information was also exchanged at the monthly financial system issues meetings, although these exchanges tended to focus on general system-wide information rather than on information about the circumstances of individual institutions.

The Treasury and the Reserve Bank worked collaboratively and, at the operational level, were in frequent contact. The Treasury and the Reserve Bank also shared information from time to time with the Securities Commission and the Companies Office. Information sharing between the Treasury and the Reserve Bank appeared to work well. Information sharing with other agencies appeared to be less frequent, structured, and open, and, until the Extended Scheme, was not even possible under the requirements of the deeds.

The Treasury had broad powers to gather information under the guarantee deed. The Treasury could request additional details directly from institutions as well as details from third parties such as trustees, auditors, bankers, the Securities Commission, the Companies Office, and ratings agencies. The Treasury used these powers when it required additional details. From February 2009, the Treasury began to request further details from a number of higher-risk institutions (including data on liquidity positions and loan portfolios) and was in regular telephone contact with a number of institutions. The directors of institutions were also individually asked to provide directors’ certificates to support the financial position of the institution.

Appointing inspectors

Because of weaknesses in the information management and reporting systems of some finance companies, the only way to be certain of the accuracy of information was to appoint an inspector. The Treasury had the broad power under the guarantee deed to appoint an inspector at any time. It would use this option if it had concerns about the information that it had received or if it required additional information.

The Treasury determined that the best method for appointing inspectors was to set up a panel of potential inspectors that could be called on at short notice. The Treasury set up this panel using the Crown’s procurement process. This took time because it required detailed price and contract negotiations and scoping discussions. The panel was in place by the end of 2009 and comprised eight inspectors with a range of skills and expertise.

While this tender process was under way, the Treasury sought approval to appoint a number of inspectors outside the panel. The Treasury decided to do this because the need was urgent, which was consistent with Crown procurement guidelines. The urgency appeared to stem mostly from the need to quantify the Crown’s potential exposure and to provide provisioning estimates for the Government’s financial statements. Other factors contributing to the urgency included:

  • the high-risk nature and deteriorating financial position of a number of institutions;
  • the growth in guaranteed deposits of these institutions;
  • a potential review of the institutions’ credit ratings; and
  • market intelligence about these institutions.

Because of the cost of inspecting, the scope of each inspection needed to be carefully considered. We were told by those we interviewed that, for most inspections, the Treasury knew what aspects it wanted the inspectors to focus on, based on its knowledge of the individual institutions. Some inspections were generic. Others were targeted at specific concerns in a financial institution. In all instances, the Treasury sought additional information and analysis on asset quality, liquidity and funding, and business practices.

The Treasury kept in close contact with inspectors during inspections. The inspectors provided regular (often daily) updates, and there was frequent interaction. The inspectors often made interim presentations, with the focus of the inspections sometimes changing based on preliminary findings. In a number of instances, the reports uncovered other matters that needed further investigation.

During the Scheme, 12 institutions were inspected. For some of these institutions, there were multiple inspections for a range of issues. The first comprehensive inspection started in June 2009. There was an earlier shorter inspection in March 2009 because of the potential withdrawal of the guarantee. Overall, six inspections were begun in June and July 2009 before the inspection panel was operating. These inspections provided the Treasury with new information and greater insight into the risk of failure of an individual institution and the timing of a potential failure (which helped in planning for responses, in payout processes, and with the provisioning recommendations). The Treasury provided copies of the inspection reports to the Reserve Bank. We understand that the Treasury was reluctant to provide the reports to other regulatory agencies.

The outcome of the monitoring

The Treasury’s response to the results of monitoring

The Treasury had a range of tools at its disposal for responding to risky institutions. These included withdrawing the guarantee, restricting or prohibiting certain transactions, and requiring certain undertakings from directors.

The most significant of the Treasury’s explicit tools was withdrawing the guarantee. The Treasury did not use this tool lightly. Any withdrawal of the guarantee needed to be disclosed to the public and could well cause an institution to fail because of the resulting loss of depositor confidence, triggering the guarantee for all existing deposits held by the institution. A withdrawal affected only new deposits; all deposits made up to the date of withdrawal continued to be covered by the existing guarantee.

Therefore, the effect of any withdrawal could be limited. The Crown’s potential payout would be roughly the same at the time of withdrawal as it would be if the institution eventually failed, unless the institution’s deposit book grew or the quality of its assets deteriorated. There was a chance that letting the institution resolve its issues could lead to a more favourable outcome for the Crown.
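This trade-off can be illustrated with a simple comparison of the Crown’s position if an institution fails at the time of withdrawal against its position at a later failure. All figures here are hypothetical.

```python
# Hypothetical comparison of the Crown's exposure if an institution
# fails at the time of a guarantee withdrawal versus at a later failure.
# The net payout is guaranteed deposits less recoveries from assets.

def crown_loss(guaranteed_deposits, recovery_rate):
    """Net Crown payout ($ million) if the institution fails."""
    return guaranteed_deposits * (1.0 - recovery_rate)

# At withdrawal: $100m of guaranteed deposits, 60% expected recovery.
loss_now = crown_loss(100.0, 0.60)

# A year later: the guaranteed deposit book has grown to $125m (which a
# withdrawal would have capped) and asset quality has deteriorated,
# reducing the expected recovery to 45%.
loss_later = crown_loss(125.0, 0.45)

print(f"Loss if failure at withdrawal: ${loss_now:.1f}m")
print(f"Loss if failure a year later: ${loss_later:.1f}m")
```

Under these assumed figures, letting the institution continue increases the Crown’s loss; with a stable deposit book and asset quality, or a private sector solution, continuing could instead produce the more favourable outcome described above.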

Three institutions had their guarantees withdrawn. The first withdrawal was Viaduct Capital Limited (Viaduct) on 20 April 2009. Its guarantee was withdrawn on the basis that the business operations were being conducted in a manner inconsistent with the intentions of the Crown (see Figure 14). The institution continued to operate for another year but eventually failed and triggered the guarantee on 14 May 2010. The payout covered deposits made up to 20 April 2009. In this instance, the Treasury’s monitoring successfully identified activities within an institution that were not consistent with the intentions of the Crown in extending the guarantee.

Figure 14
Viaduct Capital Limited (formerly Priority Finance Limited)

Priority Finance Limited (Priority) was a commercial and property lender with total assets at 31 March 2008 of $4.8 million. It raised funds through secured and unsecured term deposits.

Priority applied to join the Scheme on 23 October 2008 (its letter was dated 17 October 2008). The trustee (Prince and Partners Trustee Company Limited) provided a letter dated 31 October 2008 confirming that:
  • Priority held a current prospectus with eligible securities on issue (registered on 10 October 2008);
  • there was no breach of trust deed covenants; and
  • the trustee was not aware of any information about Priority not being able to pay its
    debts or being insolvent.

The Reserve Bank provided a letter to the Treasury on 6 November 2008 stating that Priority met the eligibility criteria and that there was no reason not to offer the guarantee. The Treasury considered this advice, the trustee’s confirmation, the size of the institution, and other factors to determine that its failure would undermine confidence. Therefore, it was necessary and expedient in the public interest to grant a guarantee. The guarantee was approved on 13 November 2008, with a supplemental deed signed on 19 November 2008 (because of minor drafting errors in the original guarantee deed).

On 13 February 2009, ownership of Priority changed when another entity purchased Priority through several complex transactions. As part of these transactions, Priority was renamed Viaduct Capital Limited (Viaduct). These transactions gave rise to concerns within the Treasury that Viaduct could be in breach of a number of obligations in its guarantee deed (including breaches of the deed, the arm’s-length nature of transactions, and business conduct). After consulting with the Reserve Bank, the Treasury told Viaduct on 16 March 2009 that it was planning to appoint an inspector to investigate the sale and associated transactions.

In response to the inspection report and its own analysis, the Treasury concluded that Viaduct conducted its business and affairs in a manner that was inconsistent with the Crown’s intentions in entering into the guarantee and that Viaduct extended the benefit of the guarantee to people who were not intended to receive that benefit. On 20 April 2009, the Treasury withdrew the guarantee with immediate effect. The Treasury did this after extensive internal consideration of the possible effects of withdrawal, as well as extensive consultation with Crown Law. The withdrawal applied only to deposits made after 20 April 2009. All deposits made up to 20 April 2009 were guaranteed. Viaduct strongly disagreed with the conclusions the Treasury drew.

Viaduct had issued a prospectus on 3 March 2009, seeking $50 million in additional funds. The prospectus was amended on 24 June 2009 to disclose the guarantee withdrawal. Another prospectus was registered on 9 October 2009, seeking deposits that would not have the benefit of the guarantee.

The Treasury continued to monitor Viaduct closely, despite the withdrawal of the guarantee, as the deposits up to the date of the guarantee withdrawal were still covered by the Scheme. The Treasury requested additional information, and Viaduct was subject to detailed monthly reporting from November 2009, as well as the weekly liquidity analysis that applied to higher-risk entities. From November 2009, it became apparent that Viaduct was selling off its good residential mortgage assets, leaving poorer-quality property development loans on its books. This could increase the Crown’s loss if the guarantee was triggered.

On 14 May 2010, receivers were appointed to Viaduct, triggering the guarantee for the $7.5 million in deposits that were covered by the Scheme.

In our view, the Treasury was thorough and timely in its analysis and response to the Viaduct transactions. The Treasury quickly appointed an inspector once it was aware of the transactions, and the Treasury sought specific advice from the Reserve Bank and Crown Law. The guarantee was withdrawn relatively quickly on the basis of “inappropriate activity”. In withdrawing the guarantee, the Treasury considered the intentions of the guarantee, the requirements of the guarantee deed, and the implications for the Crown’s liability if the guarantee was not withdrawn.

The second guarantee withdrawal was less contentious. The guarantee extended to FAI Money Limited was withdrawn on 7 May 2010. The withdrawal came after the Crown was notified that all debenture holders had been repaid, FAI Money Limited had ceased taking deposits from the public, and FAI Money Limited had no outstanding debt securities on issue. The third guarantee withdrawal was from PGG Wrightson Finance Limited in September 2011, when it was acquired by Heartland Building Society.

As well as the withdrawal of the guarantee and the appointment of inspectors, the Treasury’s response to monitoring results included:

  • requesting additional information from institutions directly (either due to increased risk of the institution or in response to business plans or proposed transactions);
  • requesting additional details where growth in deposits was higher or lower than expected;
  • requesting undertakings from directors about compliance with the deed, the institution’s position, or particular transactions;
  • notifying auditors and the trustee about possible breaches of the guarantee deed and requesting additional details from them;
  • requesting assurance that certain transactions were conducted on an arm’s length basis; and
  • advising the Provisioning Working Group about the probability of an institution failing and the appropriate level of provisioning for specific institutions.

Growth in deposits could signal that a financial institution was misusing the system. Some institutions experienced strong deposit growth by offering attractive interest rates under a government guarantee; in itself, this was a legitimate and expected practice. The challenge for the Treasury was knowing how those deposits were being applied.

South Canterbury Finance Limited

The Treasury’s actions in response to a financial institution in difficulty are well illustrated by how it dealt with South Canterbury Finance, which failed while under the Revised Scheme. The Treasury did not issue directions or attempt to influence South Canterbury Finance’s operations directly or through another agency before it failed. Figure 15 sets out some of the main aspects of the Treasury’s response to South Canterbury Finance.

Figure 15
South Canterbury Finance Limited

South Canterbury Finance Limited (South Canterbury Finance) was a large and diversified finance company based in Timaru. It was placed in receivership on 31 August 2010 with assets recorded at $1.6 billion. South Canterbury Finance’s failure triggered the largest payout under the Scheme.

The Treasury received South Canterbury Finance’s application to join the Scheme on 15 October 2008. At the time of its application, the reputation of South Canterbury Finance was unquestioned and it had a BBB- rating based on its “sound business profile”. (It had maintained this rating since 2006.) Leading up to the start of the Scheme, South Canterbury Finance had been growing strongly with solid results. It had a strong local support base and the support of a highly regarded individual in Allan Hubbard.

As with other applications, the Treasury considered the advice of the trustee (on 15 October 2008) and the Reserve Bank (on 6 November 2008). Neither had any concerns. Other than following up on some additional documentation, the processing of South Canterbury Finance’s application was straightforward.

South Canterbury Finance was a sizeable company, and its failure could have had a significant effect on public confidence in the financial system and on depositor confidence generally. After the Treasury concluded that it was necessary and expedient in the public interest to grant a guarantee, South Canterbury Finance was accepted into the Scheme on 19 November 2008.

South Canterbury Finance’s deposit base increased by 25% in the four months immediately after the Scheme was introduced.

The Treasury received the first Reserve Bank assessments of individual finance companies at the end of March 2009. The potential deficiencies in South Canterbury Finance were recognised at this time, including concerns about corporate governance, asset quality, and related-party exposures.

In April 2009, the Reserve Bank alerted the Treasury to a possible breach of the guarantee deed for two related-party transactions (in December 2008 and January 2009). South Canterbury Finance had not sought Crown consent to the transactions and had not provided an independent expert’s written opinion about whether the transactions were on an “arm’s length” basis. The Treasury asked South Canterbury Finance for more information on 21 April 2009 about these transactions and, more generally, about South Canterbury Finance’s financial position, including liquidity, arrears, and loan portfolio. The Treasury also asked for directors’ certificates.

South Canterbury Finance responded that it was an oversight to not seek the Crown’s consent – one transaction was a security-sharing agreement (not a loan) and the other a reclassification of investments. South Canterbury Finance was to obtain an expert opinion. We do not know whether an expert opinion was obtained.

Based on this response, the Treasury decided that an inspector needed to be appointed. The Treasury told South Canterbury Finance on 12 May 2009 that the Treasury would appoint an inspector to examine the affairs of South Canterbury Finance. The Treasury received the inspector’s report on 17 July 2009. The report reaffirmed the seriousness of the risk factors suspected with the books and management of South Canterbury Finance.

From April to August 2009, the Treasury investigated the affairs of South Canterbury Finance extensively. On 12 August 2009, the Treasury made a provision for the estimated loss if South Canterbury Finance failed. This provision reflected the Treasury’s judgement that South Canterbury Finance was more likely than not to fail. The provision was made with the benefit of the inspector’s report.

The Treasury’s main tool was the power to withdraw the guarantee for new deposits raised by South Canterbury Finance. The announcement of such a decision would almost certainly have resulted in a run on the deposits of South Canterbury Finance and its early failure, triggering the Crown’s liability under the guarantee.

The Treasury explored many options for South Canterbury Finance and kept the Minister well informed. In August 2009, the Treasury provided a report setting out the options available for dealing with South Canterbury Finance. The options included a Crown equity injection or other government support, but the report recommended against them. Other options were receivership or statutory management. The report set out the comparative costs to the Crown of these options. In October 2009, the Treasury also looked at the option of providing a short-term bridging loan, which was thought to potentially increase the probability of South Canterbury Finance surviving. The Treasury’s advice was that the Minister should proceed with the loan as a last resort if he thought the precedent could be adequately managed. The loan was never required.

South Canterbury Finance applied to join the Extended Scheme on 19 January 2010. The Treasury carefully considered the application, obtaining advice during March 2010 from the trustee, the Reserve Bank, the Companies Office, the Securities Commission, South Canterbury Finance’s directors, and its auditors. Although South Canterbury Finance had made significant progress in addressing some of its main challenges from October 2009 to March 2010, the Treasury was well aware that it was more than likely that South Canterbury Finance would fail and trigger the guarantee.

The Treasury could have denied South Canterbury Finance entry to the Extended Scheme. By early 2010, the Treasury knew from the inspector’s report that South Canterbury Finance’s risk management and governance systems were inadequate for such a large company. The Treasury knew that South Canterbury Finance had little hope of meeting the Reserve Bank’s new prudential requirements for finance companies, which were to become effective later that year. However, the Treasury accepted South Canterbury Finance into the Extended Scheme because excluding the company would likely have resulted in its immediate failure.

The Treasury was of the view that, from mid-2009 to August 2010, the Crown’s liability if South Canterbury Finance failed was not increasing and that, while South Canterbury Finance continued to operate, there was a chance that a private sector solution would emerge that reduced the Crown’s liability. The Treasury decided to accept South Canterbury Finance into the Extended Scheme (on 1 April 2010), to provide South Canterbury Finance an opportunity to achieve a solution to its funding and capital challenges.

South Canterbury Finance’s deposit base did not increase from mid-2009, when the Treasury first made a provision against South Canterbury Finance’s likely failure. However, the Crown’s liability was determined not only by the volume of deposits guaranteed but also by the net loss on default, which depended on the quality of South Canterbury Finance’s asset portfolio. The Treasury’s estimate of this loss was $655 million in August 2010.

In the end, no private sector solution eventuated and a receiver was appointed to South Canterbury Finance on 31 August 2010. The receiver for South Canterbury Finance found that the state of its impaired assets was worse than expected. Our understanding is that related-party transactions accounted for much of these unanticipated losses.

Evidence from interviews specifically about South Canterbury Finance, together with our review of documentation, confirms that the Treasury was monitoring South Canterbury Finance’s activities from April 2009.

Although we cannot say for certain, closer monitoring of South Canterbury Finance earlier in the Scheme might have helped identify its problems and allowed the Treasury to take earlier steps to constrain the Crown’s liability. Closer investigation of South Canterbury Finance’s specific circumstances would be needed to determine which, if any, tools might have been appropriate.

Although the Treasury did not use any such tools, its analysis and research during the period were comprehensive and thorough. We consider that much of this good work could usefully inform a framework for dealing with distressed institutions.

Recommendation 4
We recommend that the Treasury and the Reserve Bank of New Zealand document the analysis and thinking by the Treasury during its consideration of how to deal with South Canterbury Finance Limited. This could take the form of a framework for dealing with distressed institutions. The distressed institutions framework could set out possible courses of action for dealing with an institution, including deterrent processes, actions to take in the event of failure, the roles and responsibilities of regulatory agencies, and the communications that need to occur between agencies.

Provisioning for institutions’ failure

Measuring and disclosing the potential cost to the Crown

An important reason for monitoring was to determine the value of the potential Crown liability for the Scheme for the purpose of the Government’s financial statements.

In March 2009, the Treasury began to work out how it could quantify the Crown’s potential loss. The Treasury made a provision for losses based on the likelihood of an institution failing and the expected loss if that failure were to occur (taking into account asset recoveries as part of the receivership process).

In October 2008, the Treasury had prepared a report for the Minister that discussed fiscal risk and estimated possible payout amounts by the Crown. At that time, the Treasury’s worst-case estimate of loss was $945 million; the best-case and mid-case estimates were $462 million and $704 million respectively. These estimates drew on work by the Reserve Bank and had several cautionary notes about the uncertainty of the underlying data and assumptions. We have not seen evidence that the Treasury updated these amounts between October 2008 and March 2009, nor that it provided any further reporting internally or to the Minister.

The same report noted that the Government’s financial statements must disclose information to enable users to evaluate the nature and extent of risks that the Crown is exposed to from the Scheme. The report noted that the Treasury was required, under New Zealand financial reporting standards, to recognise as a liability any risk exposures where a payout was probable. It also had to disclose the Crown’s objectives, policies, and processes for managing the risk, the method used to measure the risk, and any changes. The report noted that, as long as the likelihood of the guarantee being called was remote, a provision for the amount was not necessary. However, if a payment under the guarantee became likely or eventuated, then the Treasury would need to estimate the likely expenditure and include an expense and a provision in the financial statements.

The October 2008 monthly financial statements mentioned the Scheme. However, the statement notes indicated that, because the likelihood that the guarantee would be invoked was considered remote, the guarantee did not meet the definition of a contingent liability and was excluded from the statement of contingent liabilities and assets. The Treasury made similar disclosures in later months, including the February 2009 financial statements – just before the failure of Mascot Finance. In March and April 2009, the notes to the financial statements disclosed the failures by Mascot Finance and another financial institution, Strata Finance Limited. The notes also indicated that no further failures under the Scheme were likely.

The end-of-year financial statements for 30 June 2009 were, unlike the monthly statements, subject to audit and included a liability under the Scheme for the first time. The liability recognised the likelihood of future Scheme payouts, estimated at the time to be $0.8 billion. The 2009 end-of-year financial statements also included disclosures about the nature of the Crown’s exposure to risk under the Scheme.

In our view, the Treasury should have recognised the Crown’s obligations under the Scheme as a liability and contingent liability and provided additional disclosures about the nature of the Crown’s exposure to risk in its monthly financial statements earlier than it did. We have drawn this conclusion based on several factors, including:

  • the Treasury’s October 2008 estimates of possible losses;
  • the Treasury’s acknowledgement that, under New Zealand financial reporting standards, it was required to recognise a liability and disclose details of the Scheme in its financial statements should a payout become likely; and
  • the Treasury’s internal communications about the health of some financial institutions in the first half of 2009 and its general view about the likelihood of failures in the NBDT sector.

The Treasury started to estimate Crown losses more thoroughly after March 2009, once the monitoring process became fully operational. The information and methodology for these estimates were drawn from the monitoring reports provided by the Reserve Bank.

In May 2009, the Treasury established a Provisioning Working Group and began estimating potential Crown exposure and reporting the amount within the Treasury. The provisioning analysis improved with each monthly meeting. The Provisioning Working Group appeared to work well, especially from September 2009 when the June 2009 provisions were reviewed. This analysis was comprehensive. It was well organised, with main discussion points documented and findings pursued. The output of Provisioning Working Group meetings appeared to have been circulated broadly within the Treasury. Although set up primarily to consider the amount of the liability for the financial accounts, the Provisioning Working Group considered a range of matters about individual financial institutions under the Scheme.

In our view, the Provisioning Working Group should have been set up earlier than seven months after the Scheme was introduced. It should have operated within a formal monitoring, escalation, and reporting framework.

After March 2009, the Treasury assessed the need to provide for losses under the Scheme on a monthly basis. A recommendation to provide for losses was made when an institution was assessed as having a greater than 50% chance of failing. The provisioning recommendation took into account the estimated loss if the failure were to occur, the interest that would have to be paid to depositors after the failure, and the interest effect of the timing of all payments (to depositors and from the receiver’s distributions). Under the Extended Scheme, the Crown did not have to pay interest to depositors.
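The provisioning calculation described above (an expected net cost recognised only when failure is assessed as more likely than not, allowing for depositor interest and for the timing of receivership recoveries) can be sketched as follows. This is a minimal illustration under assumed inputs: the function, the 50% threshold mechanics, and all figures and rates are invented for the example and are not the Treasury’s actual model or data.

```python
def provision(prob_failure, deposits, recovery_rate,
              depositor_interest, discount_rate, years_to_recovery):
    """Illustrative provision for one guaranteed institution ($million).

    A provision is recognised only when failure is assessed as
    more likely than not (probability above 50%).
    """
    if prob_failure <= 0.5:
        return 0.0
    # Gross payout: guaranteed deposits plus interest owed to depositors
    gross_payout = deposits + depositor_interest
    # Recoveries from the receivership arrive later, so discount them
    # back to today to capture the interest effect of payment timing
    recoveries = deposits * recovery_rate
    pv_recoveries = recoveries / (1 + discount_rate) ** years_to_recovery
    # Expected net cost to the Crown
    return prob_failure * (gross_payout - pv_recoveries)
```

For example, with an assumed 80% probability of failure, $1,000 million of guaranteed deposits, a 50% expected recovery, $30 million of depositor interest, a 5% discount rate, and recoveries arriving after two years, the sketch gives an expected net cost of about $461 million; the same inputs with a 40% probability of failure give no provision at all.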

The results of the monitoring allowed the Treasury to assess which institutions were the most and least likely to fail. However, the Treasury needed to be more precise in its assessment of possible failure. To make a provision in the Crown’s financial statements, the Treasury needed to decide whether a financial institution had a higher than 50% chance of failure. This was beyond “normal prudential assessment ability” but was required by accounting standards. It was also the main reason why the Treasury initiated a series of inspections of NBDTs. In our view, these inspections were consistent with the activities of many prudential supervisors that carry out intensive on-site examinations of regulated institutions to assess the quality of assets and risk management.

Figure 16 sets out the final provisions recognised in the Government’s financial statements during the Scheme.

Figure 16
Provisions for liability under the Crown Retail Deposit Guarantee Scheme

Date (as at)          Provision for net cost of          Total Crown liability if
                      institutions failing under         all institutions failed
                      the Scheme ($million)              ($billion)
31 October 2008       0                                  15.6
30 November 2008      0                                  122.0
31 December 2008      0                                  126.0
31 January 2009       0                                  126.0
28 February 2009      0                                  126.1
31 March 2009         0                                  126.1
30 April 2009         0                                  126.3
31 May 2009           0                                  126.3
30 June 2009          831                                124.2
31 July 2009          Monthly reports not required       Monthly reports not required
31 August 2009        Monthly reports not required       Monthly reports not required
30 September 2009     866                                124.3
31 October 2009       899                                133.1
30 November 2009      899                                133.1
31 December 2009      776                                133.0
31 January 2010       771                                133.0
28 February 2010      849                                133.0
31 March 2010         881                                133.0
30 April 2010         880                                133.0
31 May 2010           887                                133.0
30 June 2010          748                                133.0
31 July 2010          Monthly reports not required       Monthly reports not required
31 August 2010        Monthly reports not required       Monthly reports not required
30 September 2010     0                                  133.0
Expiry of initial guarantee Scheme
31 October 2010       0                                  2.3
30 April 2011         0                                  1.9

The Crown considered it unlikely that any of the four institutions then remaining in the Scheme would fail. Therefore, as at 30 April 2011, the Crown had not made any provision for the amount guaranteed under the Extended Scheme.

As already noted, discussions about levels of provisions started in March 2009. The first meeting of the Provisioning Working Group in May 2009 discussed the provisioning process and decisions about two entities, including South Canterbury Finance. The Group then held monthly meetings. In our view, the provisioning process, once started, was thorough. High-risk entities were considered closely by the Provisioning Working Group, which carefully deliberated the probability of each institution failing and the associated level of provisioning required. The Group was given detailed information about the individual institutions and tracked significant changes from month to month and the reasons for those changes. There were good governance procedures, with clear documentation of the process, the recommendations, and the decisions each month.

The information that the Treasury assembled through the Reserve Bank, the inspectors, and directly from the institutions allowed it to judge, in August 2009, the likelihood of failure of several institutions under the Scheme. In mid-2009, we said (in our role as the Treasury’s auditor) that this information needed to improve. As more inspectors’ reports became available, the quality and understanding of the information improved.

Our views on the monitoring framework

The monitoring framework that the Treasury implemented – which included reporting by the Reserve Bank, inspections by the Treasury, and the Treasury’s analysis of information from other sources – was, for the most part, effective. It provided the Treasury with enough financial details on individual institutions to assess which institutions should be asked for additional information.

The Treasury identified most of the institutions that triggered the guarantee as having a high risk of failure at least three months, and often more, before they failed. Most of the institutions that failed were the subject of more detailed “watch-list” reports and were being monitored reasonably closely. The two exceptions were Mascot Finance and Strata Finance Limited, which failed before the monitoring system began. All the other institutions that failed had been inspected (except for one small institution where there was no expected loss if it failed).

From March 2009, the Treasury was proactive in its analysis and review of the institutions and its search for additional evidence. The Treasury used a wide range of information sources and did not depend only on the Reserve Bank’s monitoring reports. The Treasury’s use of inspectors was effective, and the Treasury closely interacted with inspectors to ensure valuable outcomes.

However, we consider that the monitoring of financial institutions started later than it should have. The first monitoring information was received by the Treasury on 30 March 2009 (for data as at 31 January 2009), five months after the start of the Scheme. The Reserve Bank began planning for its role in the monitoring process in November 2008. It worked from November 2008 to March 2009 to ensure that it was collecting the required information, building the templates and models to analyse the data, and analysing the first data collected.

In our view, the Treasury was waiting for monitoring data to arrive from the Reserve Bank, rather than planning for its arrival and for the next stage in the monitoring process (that is, what it would do with the information once it arrived).

We consider that the Treasury should have prepared a monitoring work stream to run concurrently with the application process. The people involved with this work stream could have worked with the Reserve Bank to prepare a monitoring framework and gather information more quickly.

From a practical viewpoint, there were many reasons why quicker monitoring did not take place, including:

  • Part 5D of the Reserve Bank Act was enacted in September 2008, so the formal regulation of the NBDTs was new to the Reserve Bank.
  • The monitoring contract between the Treasury and the Reserve Bank was effective from 1 December 2008.
  • During October and November 2008, the Treasury focused mainly on processing applications and on broader concerns associated with the financial crisis.
  • There were not enough skilled staff to conduct this monitoring, and other staff were occupied with processing applications. It takes time to recruit new staff and wait for them to start (and recruitment was further delayed by the summer holidays).
  • Until applications were processed, the Treasury did not know how many institutions would be in the Scheme and would need to be monitored.
  • There were delays in receiving data from the trustees. The trustee supervisory model posed some challenges because monthly data went from the institutions to the trustees to the Reserve Bank. The Reserve Bank then analysed the data before sending it to the Treasury. Although the institutions were already reporting to the trustee, the data collected for the Scheme was in a different format. It took time for the institutions to change their systems to compile this data. It also took time for information to start flowing through to the trustee and then on to the Reserve Bank.

Despite these issues, we consider that a monitoring process could have been in place before the end of 2008. There was a long history of finance company failures, so further failures were a real possibility. In our view, planning for thorough monitoring, for managing any failures, and for provisioning should have been a high priority.

The Treasury recognised early that appointing inspectors would be an essential part of the monitoring process. If the monitoring had started earlier, inspectors might have been appointed early in 2009 rather than in June and July 2009. It is by no means clear that appointing inspectors any earlier would have led to any significant savings to the Crown. However, a number of higher-risk institutions, such as South Canterbury Finance, had experienced strong deposit growth during the first six months of the Scheme. It is possible that earlier inspections might have identified issues with the higher-risk entities and allowed for early withdrawal of the guarantee or other intervention to restrict deposit growth and limit the potential cost to the Crown.

One of the risks associated with appointing an inspector is the possible market and media speculation about the appointment and the consequent loss of public confidence in that institution. There is evidence that the Treasury’s concern about this risk might have contributed to initial delays in appointing inspectors.

The effectiveness of monitoring was also affected by data accuracy issues and by the need to send in inspectors to fully understand an institution’s position. Data provided to the Reserve Bank from the trustees was not audited and was based only on management accounts. Requesting audited data would have significantly slowed the reporting process. In many instances, the data turned out to be inaccurate, particularly about loan classification. Many of these data inaccuracies could not be overcome until an inspector was appointed to conduct a detailed review of the loan book.

In our view, the Treasury was well aware of these issues and acted to appoint inspectors as early as possible, once monitoring began. Six inspections were conducted before the panel of inspectors was appointed. In some instances, the Treasury reconciled the data provided by the Reserve Bank with data from other sources.

In a 2007/08 report to Parliament, the Registrar of Companies expressed concern about the rigour with which some failed finance companies were audited by smaller firms that may not have had enough capability and experience to conduct these audits (especially given the complex business structures that many of the finance companies operated). As a result, the reliability of some of the financial information might have been questionable.24

The delays in receiving data also affected the performance of the monitoring framework. Because of the path of the information flows (dictated by powers under the relevant reporting obligations) and the amount of information required to be analysed initially, data in the reports was two months old when the Treasury received the reports from the Reserve Bank. This improved as the Scheme progressed (with the reporting time frame shortened from six weeks to four weeks) and as the Reserve Bank’s analysts became more familiar with the individual entities and their issues. We do not consider that the Treasury could have acted differently to alter these reporting delays.

The Treasury relied on the models developed by the Reserve Bank (including the risk ranking model and the estimated loss model). The relative risk ranking and estimated loss models were important tools the Treasury used to target the institutions that warranted closer monitoring (and eventual inspection). The models also fed into the provisioning process.

These models proved to be very accurate. They provided the basis for the Treasury identifying the institutions that triggered the guarantee as having a high risk of failure at least three months, and often more, before they failed.

We recognise that these complex models were developed quickly. However, given the Treasury’s reliance on these models, we consider that much stronger processes should have governed their use within the Treasury. These should have included comprehensive documentation, robust override and change control processes, and independent reviews and validations.

The Treasury carried out its own checks and validation (by feeding data from failed institutions back through the model to estimate losses). In July 2009, as part of the annual audit of the Treasury, we reviewed at a high level the estimated loss model used for provisioning in the financial statements. Around that time, the Treasury significantly increased the amount and accuracy of the estimated loss.

There was a particular risk in the monitoring process. Responsibility for monitoring sat solely with two Treasury officials (supported by officials in other teams, such as legal, policy, and communications). This risk was compounded by the apparent lack of senior management oversight of the monitoring process. There is evidence that these two staff provided relatively frequent written updates to senior management. These updates appeared to be provided when needed. We understand that there were also senior leadership team meetings about the Scheme and that matters were raised with higher management as required (we did not see evidence about these meetings). Much of the information flow and reporting that we have reviewed was “bottom-up” rather than “top-down”. In other words, it was not prepared in response to information requests from senior management within the Treasury.

In our view, it would have been useful and sensible to set up a steering committee with senior management directing and reviewing the monitoring process. We understand that a steering committee was set up when South Canterbury Finance failed, with very senior representation. A group was also formed to deal with extending the Scheme, which was an important policy decision for the Government on advice from the Treasury.

The early months of the Scheme were undoubtedly busy. However, that was also the time when governance frameworks, escalation procedures, and strategic management were needed most.

24: See the Report of the Commerce Committee, 2007/08 Financial Review of the Ministry of Economic Development, Appendix B.
