Call and response: an open letter to the Basel Committee
By Ben De Prisco, Yijun Jiang, Alex Kreinin, Diane Reynolds, Michal Trebacz, and Michael Zerbs | January 2009
Topics: Insurance, Trading, Enterprise risk management, Treasury management, Capital markets, Regulatory compliance and reporting, Capital management, Integrated risk management & strategic planning, Banking
Algorithmics was acquired by IBM in 2012. This document was published by Algorithmics prior to Algorithmics’ acquisition by IBM. Please see the notice at the end of this document.
Recent market turmoil has led the Basel Committee on Banking Supervision to publish a revised draft of the Guidelines for Computing Capital for Incremental Risk. Within this consultative document, the Committee invited feedback to help strengthen the guidelines prior to their required implementation date of January 1, 2010. Algorithmics responded in detail, supporting its comments with explanatory annexes and research papers.
While the Committee requested specific feedback on several topics, many questions were intimately connected to the “constant level of risk” approach. From the authors’ perspective, one of the most important areas of concern is the interaction of the measurement approach with the capital horizon and confidence level. What follows is an introduction to, and selected excerpts from, feedback to the Committee by Algorithmics regarding the adoption of a “constant level of risk” approach to capital measurement.
The full text of our submission addresses several issues beyond the constant level of risk approach.
The incremental risk charge (IRC) is a new requirement for banks that model specific risk to measure and hold capital against default and migration risks, credit spread risks, and equity price risks that are incremental to the risk captured in the bank’s value-at-risk (VaR) model for the trading book.
The constant level of risk approach (CLRA) for calculating IRC assumes that a bank would rebalance, or roll over, its trading positions over the one-year capital horizon in a manner that maintains the initial risk level, as indicated by a metric such as VaR or established portfolio concentration limits.
We believe that the general direction of the revised proposal is sound as it provides a more comprehensive and risk-sensitive capitalization standard for the trading book. The decision to expand the scope of the capital charge to capture not only price changes due to defaults, but also other sources of price risk, such as credit migrations, significant moves of credit spreads and equity prices, or the loss of liquidity, recognizes that seemingly different risks are interconnected. This decision, which encourages an integrated and holistic outlook, has been central to Algorithmics’ approach to risk management since the company’s inception. We further commend the Committee for its rapid response to events in the credit markets and its attempt to articulate attainable standards of practice for international financial institutions.
Our main concern with the Guidelines is the option of adopting a constant level of risk approach to the calculation of incremental risks. We feel that this option is counterproductive: it adds complexity, reduces the usefulness of results for management purposes, clouds transparency and provides capital relief only through a subtle manipulation of input correlations. We presume the Committee initially adopted this approach to redress industry concerns over the one-year horizon and 99.9% quantile. However, our studies show that capital relief provided by the approach, if any, is due to an artificial downplaying of correlations. In some cases, we found the constant level of risk assumption created a higher capital requirement than that based on a simple one-year holding period.
The lack of transparency in the constant level of risk approach, and problems translating its measures into actions, are of pressing concern. Specifically, the virtual issuers created implicitly when modeling the rollover at each liquidity horizon prevent risk managers from attributing the final capital requirements to specific, known names. For example, standard risk attribution methods would attribute capital to an issuer such as Ford Motor Company as part of the current portfolio, when in fact this capital must support both Ford Motor Company and an unnamed, unknown substitute created within the model. Such opaqueness complicates business decisions, particularly hedging.
We have learned during the current market turmoil that a clear understanding of the major assumptions underpinning valuation and risk models is essential to their use in capital adequacy assessment and decision-making. As an industry, we must acknowledge that assumptions can have a material impact on model outcomes. It is also important to note that “the tendency to overly formalize arcane aspects of an analysis can often detract from an understanding of the bigger picture implications. Analytical detail must not be allowed to overwhelm users of the data.” 1 Opaque, complex assumptions, such as the constant level of risk approach, make it much harder to achieve these critical objectives.
In light of the lack of actionable and transparent information, the discrepancies between the methodologies, and the complexities of implementation and oversight, we strongly urged the Committee to consider removing the option of adopting a constant level of risk approach. A single, shorter liquidity horizon, specified directly by the Committee, would both provide a more even playing field across institutions and relieve the implicit regulatory burden, without compromising realistic capitalization standards.
5. Capital horizon and confidence level
The proposal stipulates that an IRC model incorporate a one-year capital horizon, a 99.9 percent confidence level, and a liquidity horizon appropriate for each trading position. The Committee recognizes that such an approach could present considerable practical challenges, including the need for data to calibrate key parameters.
(a) What alternative guidelines would achieve the Committee’s objectives, but in a manner that would be less costly or difficult to implement?
The use of an integrated model across a single, common horizon, without the complexity of roll-overs to create a constant level of risk, would be a more feasible alternative. Under this approach, data collection would be less costly to implement because the practical challenges around the assignment of a liquidity horizon, giving due consideration to hedging and position-level issues, would be avoided.
Such an alternative would also simplify the model and the estimation of model parameters, in particular the mechanics of correlation assumptions. In an environment where liquidity horizons vary (primarily) by product type, it is difficult to combine positions for a single name and assess incremental risks for the entire name. It is also difficult to correlate across names. While enticing, the concept of roll-overs adds greatly to the complexity of modeling correlations appropriately. This complexity is of critical concern because correlations are usually a key driver of capital requirements at the 99.9% level.
The analysis presented in Annex A [of the full response] was facilitated by the simplifying assumption that the liquidity horizon for all positions was one month. Using the constant level of risk approach with a consistent rollover assumption meant we could calculate the one-month distribution and then convolve it twelve times to obtain the one-year distribution. While this process reduces correlations, the mechanics of allowing for different liquidity horizons would effectively eliminate any remaining vestiges of the estimated correlation structure.
For instance, suppose we estimated capital requirements for each segment (i.e., product type) of the book, as characterized by its liquidity horizon, separately and then aggregated after the fact: the one-month segment’s distribution would be convolved twelve times, the three-month segment’s four times, and the one-year segment’s used outright. The resulting distributions could then be aggregated to obtain the final distribution. However, not only does this significantly reduce correlation between issuers, it also ignores correlation between instruments referencing a single issuer.
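The convolution mechanics described above can be sketched with a small Monte Carlo experiment. The one-month loss distribution used here (a Student-t rescaled to unit variance) is purely illustrative and not calibrated to any real book:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative heavy-tailed one-month loss distribution (Student-t with 4
# degrees of freedom, rescaled to unit variance -- not calibrated to any book).
monthly = rng.standard_t(df=4, size=(n, 12)) / np.sqrt(2.0)

# "Convolving" the one-month distribution twelve times amounts to summing
# twelve independent monthly draws -- the rollover mechanic described above.
annual_rollover = monthly.sum(axis=1)

# For comparison: a single one-year draw with the same total variance.
annual_single = rng.standard_t(df=4, size=n) * np.sqrt(12.0 / 2.0)

# Independence across the twelve rollover periods thins the tail of the
# rolled-over distribution relative to the single one-year shock.
print(np.percentile(annual_rollover, 99.9))
print(np.percentile(annual_single, 99.9))
```

This diversification-over-time effect is the same mechanism that dilutes the estimated correlation structure when segments with different liquidity horizons are convolved different numbers of times.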
Another implication of rollovers is the creation of a short (one-month) horizon for many asset classes. One month is problematic for the accurate estimation of probabilities of default, which typically become exceedingly small on this horizon; a longer horizon is better suited to their effective estimation. We presume that a single, common measurement horizon would exceed one month for more fundamental reasons, thereby mitigating this concern.
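To illustrate the orders of magnitude involved, consider mapping an annual default probability to a one-month horizon under a constant-hazard assumption (an illustrative convention, not one prescribed by the Guidelines); the 30 bp annual PD below is a hypothetical figure:

```python
# Constant-hazard mapping from an annual PD to a monthly PD:
#   pd_1m = 1 - (1 - pd_1y) ** (1 / 12)
annual_pd = 0.0030  # 30 bp per year: a hypothetical, broadly investment-grade PD
monthly_pd = 1.0 - (1.0 - annual_pd) ** (1.0 / 12.0)
print(f"monthly PD ~ {monthly_pd * 1e4:.2f} bp")  # roughly 2.5 bp
```

Probabilities of this size are difficult to estimate from historical data and essentially impossible to back-test, which is the concern with a one-month horizon.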
A final thought on the constant level of risk approach relates specifically to equities: it produces higher capital requirements than an approach based on a single horizon for equity portfolios, counter to the stated objective of the Guidelines. We refer to the details provided in Annex B [of the full response].
(b) Given the current state of risk modelling, is it feasible to estimate the portfolio loss distribution (excluding non-IRC market factors) over a one-year capital horizon at a 99.9 percent confidence level?
This question can be answered in three parts.
First, the issues around estimating incremental risk, excluding non-IRC market factors, are similar in scope to those in the estimation of credit risk capital. In our experience, such calculations are done quite readily today for default and migration risks. The inclusion of equity and spread risk in an integrated model with default and migration risks adds to overall model complexity. However, many financial institutions today are already able to do this type of calculation using sophisticated, publicly available approaches.
While we see no new quantitative modeling issues in measuring these risks over a one-year capital horizon at a 99.9% confidence level, long-standing issues remain. For example, value-at-risk is not a coherent risk measure (in the mathematical sense), and as a result other industries, such as insurance, have moved to conditional tail expectations. Further, while modeling is possible at the one-year, 99.9% level, other processes are less developed: back-testing and model parameter estimation, for example, remain particularly challenging in this extreme part of the tail.
The inclusion of varying liquidity horizons creates the need for complex, multi-level roll-over schemes. This in turn leads to many issues in both the modeling and interpretation of results. Many financial institutions are struggling with these concepts, and there is a risk that inappropriate or ill-considered models and practices may develop in the pursuit of perceived capital relief. By mandating a single measurement horizon, the Committee can encourage firms to focus on real issues such as calibration, coherence, back-testing and implementation, without the distraction of complications from the proposed roll-over options.
(c) Would it be worthwhile to allow banks to use a single horizon for all covered positions (eg three months) and a lower confidence level (eg 99 percent), together with a supervisory scaling factor that was calibrated to achieve broad comparability with the IRB Framework for the banking book? Would such an approach be as useful for internal risk management purposes as the proposed IRC?
We feel that such a measure, provided the supervisory scaling factor is well considered, will prove more effective than the currently proposed IRC. Having a scaling factor to adjust for the measurement horizon would be beneficial in assuring appropriate calibration across banking and trading activities. However, given the nature of the risks to be measured, we suggest that it would prove more informative to estimate the 99.9% quantile itself, rather than rescaling from a lower quantile.
As discussed in our opening remarks, the constant level of risk approach presents difficulties in interpretation of results. For IRC to be successfully adopted as a risk measure for internal use, it must be both transparent and actionable. The constant level of risk approach neither transparently identifies sources of risk, nor promotes positive risk management action.
The feedback in the letter was founded on analytical results and mathematical derivations comparing the constant position approach (CPA) to the constant level of risk approach (CLRA). An excerpt from Annex B of the full response illustrates the level of detail in the analysis.
Traditionally, equity prices are modeled through time as a Geometric Brownian Motion (GBM) process, which implies that equity prices are log-normally distributed and ensures that the equity price can never fall below zero. In other words, an investor can never lose more than the value of the equity, regardless of the time horizon or quantile of the risk measure. This model is consistent with reality: by definition, equity investors cannot lose more than their investment.
However, under CLRA the implicit floor of zero for equity prices disappears. The repeated rolling-over of the equity position, combined with capital support to reset back to the original investment, creates the situation where the investor can lose much more than the original investment. When the recapitalization is done at a high quantile and the equity is sufficiently volatile, CLRA implies that banks continue to re-invest heavily in markets after repeated extreme movements.
The question becomes: For the 99.9% quantile mandated for IRC, what is a sufficiently high volatility to create a significant problem? The answer, unfortunately, is that a realistic volatility such as 35% p.a. is sufficient. Figure B.2 plots the relationship between VaR computed using the CLRA (vertical axis) and equity price volatility (horizontal axis). The horizontal blue line depicts the original equity price of 100.
Based on this analysis of a single equity, the Guidelines imply the possibility of requiring investors to hold capital in excess of 100% of their initial investment. While this might be reasonable for some derivative positions (e.g., forwards, short positions), it is unlikely to be realistic for the direct equity position analyzed in this example.
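The single-equity comparison can be sketched with a small Monte Carlo experiment. The zero drift, seed, and sample size below are illustrative choices; only the 35% p.a. volatility and the 99.9% quantile come from the discussion above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
s0, sigma, dt = 100.0, 0.35, 1.0 / 12.0  # 35% p.a. volatility, monthly steps

# Monthly GBM log-returns (zero drift for simplicity).
r = rng.normal(-0.5 * sigma**2 * dt, sigma * np.sqrt(dt), size=(n, 12))

# Constant position approach (CPA): buy and hold for one year.
# The loss is mechanically floored at the initial investment of 100.
loss_cpa = s0 - s0 * np.exp(r.sum(axis=1))

# Constant level of risk approach (CLRA): reset the position to 100 at the
# start of every month, so monthly losses accumulate and the floor disappears.
loss_clra = (s0 - s0 * np.exp(r)).sum(axis=1)

var_cpa = np.percentile(loss_cpa, 99.9)
var_clra = np.percentile(loss_clra, 99.9)
print(var_cpa, var_clra)  # CPA VaR is bounded by 100; CLRA VaR can exceed it
```

In this stylized setup, the CLRA figure sits well above the CPA figure at the 99.9% level, consistent with the pattern shown in Figures B.2 and B.3.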
We extend the analysis above to an equally weighted portfolio of fifty equities. Once again, numerical experiments indicate that the CLRA always produces VaR(99.9%) that is significantly greater than the corresponding CPA result. Perhaps surprisingly, this observation is independent of the assumed correlation between the equities, although the extent of the excess does increase with correlation, as illustrated in Figure B.3.
We expect that as the Committee reviews responses and publishes updated Guidelines, significant changes will take place. We hope that among these changes is a reconsideration of the constant level of risk approach. It is our view that integrated measures of a comprehensive set of risk types provide for the best representation of reality without introducing artificial boundaries that create concerns (e.g., double counting of risks, precise categorization of risks for quantification). As ISDA et al. noted in their response to the Committee, “the constant level of risk assumption that was considered appropriate for the incremental default risk charge… delivers minor (if any) capital reductions.”
Data and validation issues are very real, but can be addressed with time and creativity. For management purposes, using scaling factors to extend and shrink time horizons may be an acceptable approximation, but this is unlikely to be the case across quantiles. Stress testing, trend analysis, relative value comparisons and reconciliation to a base model are most likely to provide the assurances sought in model validation exercises.
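Why scaling across quantiles is problematic can be illustrated by comparing the ratio of the 99.9% to the 99% quantile under a thin-tailed and a heavy-tailed model; the Student-t with 4 degrees of freedom is an illustrative stand-in for a fat-tailed loss distribution:

```python
import numpy as np
from statistics import NormalDist

# Under a normal model, a single fixed factor converts a 99% figure
# into a 99.9% figure...
ratio_normal = NormalDist().inv_cdf(0.999) / NormalDist().inv_cdf(0.99)

# ...but under a heavier-tailed model the required factor is much larger,
# so a scaling factor calibrated on one model misstates the other.
rng = np.random.default_rng(2)
x = rng.standard_t(df=4, size=2_000_000)
ratio_t = np.percentile(x, 99.9) / np.percentile(x, 99)

print(round(ratio_normal, 2))  # about 1.33
print(round(ratio_t, 2))       # noticeably larger
```

A scaling factor calibrated on the distribution's bulk therefore says little about its extreme tail, which is precisely where the 99.9% IRC quantile lives.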
The main issues we encountered with the Guidelines lie (1) in the language around the risks to be measured and (2) in the perhaps unexpected implications of measuring those risks as specified under the constant level of risk approach. Specifically, we encourage a reconsideration of the latter approach.
1 Counterparty Risk Management Policy Group (2008), “Containing Systemic Risk: The Road to Reform. The Report of the CRMPG III.”
Algorithmics was acquired by IBM in 2012. This document was published by Algorithmics prior to Algorithmics’ acquisition by IBM.
The information contained in this documentation is provided for informational purposes only.
ALTHOUGH EFFORTS WERE MADE TO VERIFY THE COMPLETENESS AND ACCURACY OF THE INFORMATION CONTAINED IN THIS DOCUMENT, IT IS PROVIDED “AS-IS” WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.
In addition, this information is based on IBM’s current product plans and strategy, which are subject to change by IBM without notice.
IBM will not be responsible for any damages arising out of the use of, or otherwise related to, this document or any other materials. Nothing contained in this document is intended to, or shall have the effect of creating any warranty or representation from IBM (or its affiliates or their suppliers and/or licensors); or altering the terms and conditions of the applicable license agreement governing the use of IBM software. References in this publication to IBM products or services do not imply that IBM intends to make them available in all countries in which IBM operates.
The software program can be used to help the customer meet compliance obligations, which may be based on laws, regulations, standards or practices. Any directions, suggested usage, or guidance provided by the software program, or any related materials, does not constitute legal, accounting, or other professional advice, and the customer is cautioned to obtain its own legal or other expert counsel. The customer is solely responsible for ensuring that the customer, and the customer’s activities, applications and systems, comply with all applicable laws, regulations, standards and practices. Use of the software program, or any related materials, does not guarantee compliance with any law, regulation, standard or practice.
Any information regarding potential future products and/or services is intended to outline IBM’s general product and service direction and it should not be relied on in making a purchasing decision. Any information mentioned regarding potential future products and services is not a commitment, promise, or legal obligation to deliver any material, code, functionality or service. Any information about potential future products and services may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for IBM’s products or services remains at IBM’s sole discretion.
Copyright IBM 2013.
Somers, NY 10589
All rights reserved.
IBM, the IBM logo, [and Algorithmics] are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the web at “Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.
Other company, product, and service names may be trademarks or service marks of others.