A recent survey of bank CIOs shows that increased regulatory pressure arising from new and changing regulations, along with the need to improve reporting, is expected to be a major driver of increased IT spending at banks. The survey, conducted by Bank Systems and Technology magazine, asked members of its Reader Advisory Board about plans for IT spending in 2011. According to the report, “most banks will undertake major systems upgrades in the next 12 months.” The projects include compliance, core systems and mobile banking initiatives.
Paul Johnson, CIO of Winston-Salem, N.C.-based BB&T, stated “responding to regulatory requirements [and] expectations” will be a top priority. Jerry Hermes, CIO of Navy Federal Credit Union added, “The regulatory changes we have to watch for next year will be huge.”
The scope of recent rulemaking such as Dodd-Frank suggests that we’re going to be in a very dynamic regulatory environment for a long time. If you’re evaluating regulatory compliance solutions to help with this dynamic environment of regulatory change, you would do well to require one that can help you put in place a programmatic framework for communicating changes and managing the internal regulatory change process, as well as managing interactions with external regulators. Lastly, make sure the information architecture can adapt to change over time.
Did you happen to see where Daniel Bouton, head of French bank Société Générale, admitted in an interview published on the French Internet site Mediapart that the bank’s internal control systems had faults?
Bouton said: "The controls were carried out in accordance with the rules for each area concerned” … [but] "a horizontal method for assessing the risk of fraud, [and] a pooling of the information, was missing. It was the lack of this method that allowed Jérôme Kerviel to play on the different deficiencies, which his experience in the back office had enabled him to see."
Bouton is referring to the lack of an end-to-end process view that spans different functional organizations. Kerviel’s experience in back office positions and his knowledge of how risk and controls systems worked allowed him to circumvent and override the bank’s systems/processes to carry out his fraudulent activities.
It sounds simple enough, but I wonder whether Bouton is guilty of what Nassim Taleb (author of The Black Swan) calls the “narrative fallacy,” where a story is created post hoc so that an event will seem to have a cause. In fact, the auditing firm PwC wrote a scathing report for Société Générale that described a flawed “general environment” that enabled Kerviel to rack up the record-breaking losses. The report pointed to a number of specific problems in the design and implementation of the bank’s internal control system.
Since I haven’t read the report, I will put on my Monday morning quarterbacking hat and speculate about why the largest event of its kind went on for so long at an institution that had a reputation for being “well controlled.” My top ten list for why Jérôme Kerviel was able to perpetrate the fraudulent activities at Soc Gen:
10. Warning signs were not heeded: complaints that Kerviel was not following proper policies and procedures, was in breach of limits, etc. were ignored because he was deemed to be a star trader and a money-making engine.
9. Management inaction: management was informed about the problem but they did not react or escalate the issue; they also failed “to question above-market returns.” Kerviel’s management chain was reluctant to bring these problems to senior management since they did not want to be seen as being counter-productive to profit making.
8. Failure to set/enforce proper limits: There are trading environments that have a “no tolerance” rule when it comes to breach of limits and there are trading environments that treat limits as permeable. The fluid approach to such breaches can be especially risky during times of high market volatility when exposures and limit breaks can grow quickly and exponentially. In Soc Gen’s case, limits were not strictly enforced.
7. Risk taking environment (culture): Rogue traders such as Kerviel often flourish in environments where risk taking and idolization of traders go hand-in-hand. In these environments, a breach of limits is seen as tolerable and at times implicitly encouraged.
6. Gambling persona: Similar to gamblers, traders are risk takers. A trader without the appetite to take on risk will be ineffective in the job. Kerviel is a risk taker, and when he sustained losses he tried to trade himself back to profitability. The result was a pattern of escalating losses, more rogue trading behavior, and still more losses.
5. Failure to reconcile daily cash flows: The volume of certain products, such as over-the-counter derivatives, creates challenges in reconciling trades and cash flows. There are important operational risk issues associated with the high volume of certain trading areas and the lag time between execution, settlement and reconciliation of the books. A rogue trader such as Kerviel, who understands the system and how it works, can exploit the lag time between these activities to avoid detection.
4. Failure to comply with internal policies and procedures: Daniel Bouton stated that there were adequate policies and procedures in place designed to prevent unauthorized trading events. But no firm wants to operate in an environment where controls are so rigid and inflexible that it is not possible to be creative and profitable. What happens over time is that an organization drifts away from following internal policies and procedures and becomes “fluid” in response to business demands. There are organizations with “no tolerance” policies for breaking control limits, and there are others that treat it as a part of doing business. Soc Gen appears to have been one of the latter.
3. Failure to supervise: At the heart of unauthorized trading events are often supervisory issues at a multitude of levels. This covers the obvious “failure to manage,” but also includes supervisors who may be caught up in a direct report’s scheme to increase profits or bring in outsized returns. At Soc Gen there was a clear lack of supervision, and there may even have been two layers of misconduct.
2. Swiss cheese effect: The event attributes in a case such as Soc Gen often occur in conjunction with a series of control failings. The largest unauthorized trading events contain a number of control breakdowns that occur in clusters. Think of the controls as slices of Swiss cheese lined up next to each other; the holes in the cheese are potential control failures. The rogue trader can see a clear path through the slices where the holes line up, and the misdeeds pass through the openings without being halted by operating controls. Had even one of these controls been functioning properly, the misdeed might never have happened. For example, if someone had escalated concerns to management and management had acted, the event might not have occurred or at a minimum would have been much less severe.
1. Lack of dual control and lack of proper segregation of duties: The “four eyes” tenet is a basic one in risk management and after the history of large events such as Barings (1995) it is difficult to imagine any institution that allows traders to confirm their own trades. Kerviel was able to break into Soc Gen’s trading system to assume the identity of someone else and effectively confirm his own trades. The breakdown of dual controls in this area was perhaps the most egregious failure of the internal control environment at Soc Gen.
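The Swiss cheese effect in item 2 lends itself to a back-of-the-envelope calculation: if control layers fail independently, their failure probabilities multiply, so a loss slips through only when every layer fails at once. Here's a minimal illustrative sketch in Python; the control layers and failure rates are hypothetical, not Soc Gen's actual figures:

```python
def prob_all_controls_fail(failure_probs):
    """Probability that independent control layers all fail simultaneously."""
    p = 1.0
    for fp in failure_probs:
        p *= fp
    return p

# Four hypothetical layers: supervision, limit checks,
# trade confirmation, daily reconciliation
layers = [0.10, 0.05, 0.02, 0.20]

single_layer = max(layers)                    # weakest layer alone
all_layers = prob_all_controls_fail(layers)   # all holes lined up

# Even mediocre controls, stacked, make a clean path through the
# "holes" rare, which is why the largest events involve clusters
# of simultaneous breakdowns rather than one isolated failure.
```

The point of the sketch is that the multiplication only holds while the layers are independent; a trader who can override confirmations, or a management chain that ignores warnings, collapses several slices into one.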
So Daniel Bouton admitted that the bank’s internal control systems had faults – no kidding!
If you’re involved with compliance, you must know that the SEC issued its final rules on whistleblowing. The original proposal was hugely contentious, with serious concern that employees would bypass companies’ internal reporting channels – established as part of comprehensive compliance programs instituted and enhanced over recent years – and instead run directly to the SEC for a lottery-size payday.
The SEC’s director of enforcement initially said that the agency would be “mindful of competing interests” as it shaped regulations around the new law. Well, there are changes from the proposed rules to the final, but compliance officers and their companies are disappointed, understandably so. Unfortunately, as with the proposed rules, reporting first internally is not required. Among the changes is a provision for employees to report internally and then, within 120 days (rather than the proposed 90 days), go to the SEC and still maintain a “place in line” for a major payday by the regulator. Also, certain specified personnel are excluded from being paid by the government, generally including lawyers, auditors, compliance personnel, and those themselves involved in the misconduct – although there are exceptions to the exceptions. And interestingly, when a whistleblower reports to the SEC, related information subsequently provided by the company to the SEC is attributed to the whistleblower. Officials from the Association of Corporate Counsel have said the rule will result in “gutting” compliance systems, and the U.S. Chamber of Commerce continues to be up in arms. The final rule passed in a 3-2 vote, with two of the five commissioners voting against it. In a survey of directors, 67% said this is the most detrimental part of Dodd-Frank.
Suffice it to say here that the modifications from the proposed rules to the final are such that compliance and other corporate officers continue to believe their past efforts in establishing internal whistleblower protocols are being undermined, and they will need to work hard and be creative in encouraging employees to work within internal reporting systems. One law firm says the best line of defense is to have robust internal compliance and audit procedures designed to proactively uncover potential wrongdoing and, where misconduct is found, to promptly address and remediate it aggressively before a whistleblower surfaces. Easy to say, challenging to do. Clearly, there’s a lot of work ahead for compliance officers, general counsels and their colleagues.
OpRisk Europe 2011 – now in its 13th year – commenced today at the historic Waldorf Hotel in the West End of London. It is somewhat ironic that a risk management conference is taking place in the stylish hotel whose interior is said to have inspired the designers of the “unsinkable” Titanic – itself a classic case study in risk management.
In one breakout session, Andrew Sheen of the FSA’s risk frameworks and governance team discussed recent developments from the BIS and their impact on operational risk. Citing the Basel Committee’s recently updated “Sound Practices for the Management of Operational Risk” paper, Sheen emphasized several key considerations for the board of directors and the senior management team. In particular, he emphasized the need for the board to set the tone at the top in order to promote a strong risk management culture, and that banks should “develop, implement and maintain an operational risk management framework that is fully integrated into the bank’s overall risk management processes.” He also provided guidance for senior management. In particular, he noted that senior management should:
“Develop for approval by the board a clear, effective and robust governance structure
Be held responsible for implementing policies, processes and systems for managing operational risk and ones that are consistent with risk appetite and tolerance
Implement an approval process for all new products, activities, processes and systems that fully assesses operational risk; and
Regularly monitor operational risk profiles and material exposures to losses”
There was a lot of great content on Day One of OpRisk Europe; we’re looking forward to tomorrow’s panel discussion on “The Impact of New Regulation on Operational Risk Management.”
In a recent OpRisk and Compliance Webinar, John Whittaker, group operational risk director at Barclays, described how his team achieved AMA (Advanced Measurement Approach) status by embedding the Operational Risk Framework. He described how the framework provides a single control infrastructure and common risk language across the Group and supports the effective measurement and management of operational risk.
In support of this initiative, Barclays implemented an operational risk system (built on OpenPages) which replaced separate systems used for Sarbanes Oxley and Operational Risk Management. The OpenPages solution provides Barclays a single repository for data and supports the harmonization of Operational Risk and Sarbanes Oxley risk and control assessment methodologies.
John was joined in the Webinar by Operational Riskdata eXchange Association (ORX) executive director Simon Wills, who discussed operational loss data analysis and trends in the banking sector. To hear this informative and substantive discussion, check out the archived Webinar, presented in its entirety.
At the recent OpenPages User Symposium (OPUS) 2010 held in Boston, Chris Haines, Vice President, Operational Risk Management Group at American Express, presented an informative and well-attended session on how American Express has effectively leveraged the OpenPages technology in its efforts to converge risk management disciplines and best practices across the enterprise. In his session, Chris described how the Operational Risk Model employed by American Express gives management greater visibility into risk and empowers it to make strategic business decisions based on a broader understanding of the company’s risk profile.
I caught up with Chris after his presentation and discussed his experience at OPUS as well as how American Express utilizes the OpenPages technology to create an integrated and converged risk and compliance management program that can streamline and improve its risk management processes.
With over $400 billion in assets under management and 57,000 employees in 38 countries, Old Mutual is a Fortune Global 500 company (#225) with an operational footprint that spans all seven continents. Now based in London and listed on the FTSE 100, Old Mutual was founded in South Africa in 1845 as the 166-member Mutual Life Association of Cape of Good Hope.
While steeped in history and tradition, Old Mutual has a progressive approach to risk management which includes a ‘risk governance framework’ based on a ‘three lines of defense’ model:
functions owning and managing risk
functions overseeing the management of risk; and
functions providing independent assurance.
Old Mutual recently adopted OpenPages Operational Risk Management (ORM) to improve its enterprise-wide risk management efforts. OpenPages ORM is being used by numerous global organizations like Old Mutual to manage risk through self-assessments, end-user surveys, automated workflow and executive dashboards that provide management with the visibility, control and decision support required to understand and manage risks throughout the organization.
Prior to the onset of the Basel II Accord and its resulting loss event category structure, there were no existing or suggested standards for how financial institutions should classify loss events and risks. The reality was that there was no need for a standard, as companies were not particularly focused on tracking loss events and identifying operational risks within a formal structure. As banks were nudged along the operational and enterprise risk management path by the regulators and Basel II, a need for guidance became evident, and the Basel II loss event category structure emerged to meet it. Many financial institutions embraced the new standard and began to implement their programs. Although the Basel II category structure was largely designed for the classification of loss events, many institutions have been leveraging the taxonomy as a risk classification structure as well.
As financial institutions gained experience in operational risk management and the implementation of such risk programs within their organizations, they began to question the business alignment and validity of the Basel II loss event category model. Various consortiums, industry associations, consultants, academic researchers and analysts began to study the structure and to poke holes in its loss-type basis, and alternative classification models began to emerge. The RMA joined with RiskBusiness to coordinate an effort with banks to establish standards for Key Risk Indicators, which resulted in a risk classification structure that is gaining popularity. The Operational Riskdata eXchange Association (ORX) formed as a consortium to provide a platform for the exchange of operational loss data, and in due course developed standards and a classification structure for its member financial institutions. The BITS organization is also looking at loss and risk classification structures, and many articles have been written on the topic.
The article speaks of the importance of organizing data in a sound and clear-cut manner, and concludes that the Basel II loss event category structure falls short, with too much allowance for inconsistency. Dr. Alvarez proposes a classification schema based on causes, as opposed to types of loss events, which leads to a more structured and consistent classification of loss events. I encourage you to read this article, as it represents the current thinking in the industry: the causes of an event are important to identify and understand, and when an organization captures its loss data and views risks from the causal point of view, it is better enabled to analyze the data and more effectively manage and mitigate risks, thereby being more successful in lowering operational losses and increasing operating efficiencies.
There will likely be more debate and thought put into loss event and risk taxonomies over the next few years, and the industry’s need for an effective and consistent standard that could enable benchmarking of operational risk will help drive convergence to a widely accepted loss event and risk data classification schema.
The name is Kweku Adoboli, and you’ll be hearing a lot more about him. He works – or rather, worked – at UBS, Switzerland’s largest bank. He graduated from the well-regarded University of Nottingham, and moved up at UBS. What did he do? Well, UBS executives say he engaged in unauthorized investment trades, and cost the bank $2 billion! We’re no longer talking in terms of millions, or even hundreds of millions – but billions of dollars – enough to wipe out the bank’s profit for the entire quarter and send its stock price tumbling.
So, presuming the charges prove true, we can add Adoboli’s name to the likes of such infamous “rogue traders” as Jerome Kerviel, who cost Societe Generale about $7 billion; Nicholas Leeson, who lost more than $1 billion, enough to bring down Barings Bank; and Joseph Jett, who reportedly cost Kidder, Peabody $350 million – a substantial sum back in 1994. They may be among the most well known, though there were many others: John Rusnak reportedly cost Allfirst Financial $700 million, Yasuo Hamanaka cost Sumitomo $2.6 billion, and Toshihide Iguchi cost Daiwa over $1 billion.
Adoboli, a director in exchange-traded funds working on the bank’s Delta One desk, was arrested in London, where he worked. We can expect more information to be forthcoming as to exactly what was done, and how. For now, though, it’s worth noting that, not surprisingly, Adoboli is called a “rogue trader,” indicating that he did this by going outside established protocols – with the unspoken implication that the actions of such a rogue were unavoidable. Well, even without knowing exactly what went wrong, we can surmise that there was something terribly wrong with UBS’ risk management and internal control practices. The risk of unauthorized trading is certainly well known in the banking industry, and with the amounts of money at stake, one would think that sufficient controls would be firmly in place to prevent, or detect in a timely manner, unauthorized trading – certainly well before a loss reached anything in the neighborhood of $2 billion! Isn’t it cost-effective to spend a relatively few dollars to avoid losses in the millions or billions? How difficult is it to ensure the right people, processes and technology are in place?
We can only wonder why adequate controls weren’t there – and when and where the next “rogue trader” will surface, which hopefully will be before serious damage has already been done.
Some months ago I came across an article co-authored by a colleague of mine on enterprise risk management. It’s aimed at boards of directors, providing needed insight into difficulties companies have experienced implementing ERM, and puts forth principles for its effective use.
What I found particularly interesting is reference to principles outlined by “legendary management thinker” Peter F. Drucker, and the authors’ description of how those principles can be applied to ERM:
There’s no indication in the article that Peter Drucker ever spoke to ERM specifically, and to my knowledge he never did (if any readers know otherwise, please let us know). I had the great pleasure of knowing and spending some quality time with Mr. Drucker when, after my stint at the Wharton School, I was doing graduate work at NYU’s Graduate School of Business, where I was fortunate to have him as a professor in an advanced management seminar. It was evident to me even then, as a still wet-behind-the-ears student, that Peter Drucker was indeed someone extraordinarily special. He had an amazing ability to identify and articulate valuable truths about business which, while obvious after he spoke them, were previously hidden from everyone else’s view.
With that said, I’d like to take the liberty of guessing what Peter Drucker, if he were still with us, might put forth as simple truths about enterprise risk management:
Forget “risk assessments” – they have little to do with ERM
ERM must be embedded throughout the entirety of an enterprise
ERM isn’t done by a staff function – it must be incorporated into the soul of every manager in the company
It must encompass clear responsibilities and accountability, with open and rapid communication up and down the organization
And it needs to become an integral part of daily business, enhancing judgments and decision-making at every level – it’s not an add-on, but rather how business is conducted throughout the organization
Mr. Drucker, if somehow you’re listening, I hope you’re smiling at what you hear.
This week OpenPages is sponsoring the RMA Operational Risk Management Discussion Group being held at The Federal Reserve Bank of Philadelphia. The two-day forum was kicked off by Victoria Garrity, Senior Quantitative Analyst from the Boston Federal Reserve. Victoria’s session, titled “Regulatory Perspective on Scenarios: Challenges and Issues,” was well attended and sparked a number of conversations on potential forthcoming regulations. Other interesting sessions included a discussion moderated by Michael Fenn of DTCC and Patrick McDermott of Freddie Mac on the evolution of ORM assessments, and a roundtable facilitated by Kathy Miller of KeyCorp on “Recent Experiences with Regulators,” in which the discussion focused on operational risk examinations and emerging guidance from the regulatory environment.
Overall a very timely and thought-provoking forum attended by some of the leading operational risk practitioners.
We know the banks and related mortgage servicing organizations have been under fire for their role in the financial system’s near meltdown and the ensuing foreclosure fiasco. JPMorgan Chase’s CEO Jamie Dimon reportedly owned up to taking some responsibility, saying “Some of the mistakes were egregious, and they’re embarrassing . . . but we made a mistake, and we’re going to pay for that mistake.” The 50 state attorneys general and the SEC, among others, are pushing for changes in how the banks and servicers operate, and there’s little doubt changes are coming.
In the interim, a report emanating from investigations by the Office of the Comptroller of the Currency, the Federal Reserve Board, the Office of Thrift Supervision and the Federal Deposit Insurance Corporation is expected to form the basis for a settlement in which the financial institutions would make fundamental changes in operations and controls. The banks and other servicers would, for instance, have to:
Set up a single contact point within the organization, enabling homeowners to avoid what’s often a maze of different departments
Take steps to ensure there will be no action to foreclose while borrowers are pursuing loan modifications
Improve training of staff handling foreclosures
Establish more layers of management oversight over the process
Engage an independent consultant to review foreclosures over the past two years, and compensate homeowners who were treated improperly.
One wonders why adequate business process design and basics of internal control weren’t in place long ago, even though the volume of foreclosures wasn’t anticipated. The sloppiness has caused tremendous problems for both the banks and servicers on the one hand and their customers on the other – and executives should know by now that if a large swath of consumers is damaged, then laws and regulations will surely follow.
This of course is not the end for the banks and servicers – not by a long shot. They still need to deal with the state attorneys general and other regulators, and we can expect more required changes to be forthcoming, along with large financial payments for past misdeeds. Oh, if only the risks had been identified earlier and better managed, with appropriately designed business processes, and basic and supervisory controls and compliance in place.
Two recent events involving hurricanes provide insight into what risk management is about. Many of us who live on the East Coast of the U.S. know all too well the damage wrought by Irene. And many in Florida are dealing with damage to the University of Miami “Hurricanes” football team.
Let’s begin with Miami, where student athletes are said to have taken gifts from a fan – against NCAA rules. The University has already suspended a number of players. But what could be coming is worse, when the NCAA completes its investigation and decides on such sanctions as loss of scholarships, ability to play in bowl games, and the like. The impact on the football team and indeed the University are seen by some as potentially devastating. Miami’s President seems to be taking an appropriate course in saying the University will take action to be sure this kind of thing doesn’t happen again. Kind of sounds like what many senior business executives say when they suffer a major mistake. But, wait a minute – haven’t many, many other university football programs suffered the same kind of misconduct and paid a very high price? Since the answer is a resounding “yes,” then why wouldn’t a university like Miami, which treasures its football program, have long ago recognized the risks and taken action to prevent, or early on detect, any such kind of misconduct?
As for Hurricane Irene, let’s take a look at the plight of homeowners. Certainly those residing in the Carolinas know well the paths of past hurricanes. And while the Northeast has fewer, it is by no means unfamiliar with hurricanes, nor’easters, and the like. Whether or not they’re in some level of denial, people residing in flood zones aren’t ignorant of the risks, and others are aware of the possibility of wind damage, loss of power and the like. Certainly storms can’t be prevented, but their impact can be mitigated, through storm shutters or plywood boards, generators, and insurance coverage, among other actions. Yes there’s a cost-benefit relationship, but the other side is the cost of being emotionally and financially devastated. Yes, as we see the news coverage our hearts go out to those who have suffered, and we recognize that some simply can’t afford even basic protections. But we can wonder whether sufficient advance thought was given to managing the risks.
A key learning point from this is that risk management can be viewed as having several “tiers”: identifying what has not yet occurred but could occur, seeing what has happened to others, and knowing what harm has already hit home. The last two tiers are by far the easiest to recognize and analyze in terms of potential impact, while the first takes more thought and analysis though still cannot be ignored. In the cases of Irene and Miami, these events clearly have occurred previously, and the inherent risks were well known and needed to be managed. The same holds true for businesses looking to survive and prosper in a dangerous economic and competitive environment. It’s well known that supply chains can be interrupted, product quality compromised, IT systems hacked, and company personnel can do bad things. In all likelihood, risks have materialized in one’s own company or at a competitor, and are well known and can be managed cost-effectively. It takes identification and analysis, along with the right tools and technology to ensure appropriate attention, accountability and communication – all critical to making better business decisions.
My sense is that as a reader of this blog, you already have a good handle on what’s involved here. But hopefully it will prove useful if you’re striving to influence and convince others in your organizations of what risk management is about, and why it needs to be taken seriously.
My last posting spoke to one of COSO’s two recently issued guidance reports on enterprise risk management. The first provides approaches for getting started on an ERM initiative, and while it’s based on good intentions and provides useful information, especially to smaller companies, in Olympic games terms with only two entrants, that report gets the silver. The second report, Developing Key Risk Indicators to Strengthen Enterprise Risk Management – How Key Risk Indicators Can Sharpen Focus on Emerging Risk wins the gold – by a good margin.
COSO’s ERM report Application Techniques volume touches on the topic of key risk indicators, use of which was not commonplace at the time. Since then, along with key performance indicators, which focus primarily on past performance, more organizations have incorporated forward-looking key risk indicators into their ERM processes, further enhancing risk management effectiveness. This new report does a good job of explaining KRIs and how they can be of benefit. A couple of simple examples:
For customer credit, where a common KPI tracks customer delinquencies and write-offs, KRIs are developed to help anticipate future collection issues. They might focus, for example, on the reported financial results of a company’s 25 largest customers, or on general collection challenges throughout the industry, to spot emerging trends among customers that could signal difficulties with collection efforts going forward.
Management of a chain of family-style restaurants sought to avoid a negative earnings event that could arise from unexpected market conditions. Recognizing that restaurant traffic is directly affected by customers’ discretionary income – as discretionary income falls, customers are less likely to dine out – management established the average price customers pay at the gasoline pump as a KRI. The premise is that when gasoline prices rise, discretionary income for the individuals and families representing the core customer base decreases, and customer traffic begins to drop.
As such, KRIs enable management to take quicker action in dealing with the risks. In the latter example, management is positioned to adjust marketing and promotion events to reduce the impact of the risk.
The report explains how KRIs are most effective when closest to the ultimate root cause of the risk event, providing more time for management to act proactively. And multiple KRIs can provide still more relevant information, keeping in mind that a close relationship between the KRI and the risk, and the accuracy of the information used, are both critical. Another benefit is the ability to readily track trend lines with dashboards or exception reports, quickly and easily communicating where action may be needed.
With KRIs continuing to gain recognition as important elements of enterprise risk management, this COSO report provides readily usable information and is definitely worth the read.
Financial services firms, pharmaceutical companies and other heavily regulated organizations have long devoted significant resources to a compliance office, typically with a chief compliance officer and strong support staff. Multinationals have embedded part of the compliance function locally, typically with reporting to both the central compliance office and local management. But companies not facing heavy regulation, even large ones, have struggled in deciding whether a full time compliance office is needed.
Well, now there are clear indications that a full-time role is becoming more common. Compliance Week recently reported on two studies saying just that. One is from the Open Compliance and Ethics Group (OCEG), whose survey shows that 75% of the 365 respondents have a chief ethics and compliance officer or similar title with “top-level oversight of compliance.” And 40% said the compliance chief has no other role in their company; for companies with over $1 billion in revenue, the number is 55%. Where the title is shared, it’s with the company’s legal department 23% of the time. The other survey was conducted by the Society of Corporate Compliance & Ethics, showing that of 560 respondents, 97% have a designated compliance or ethics officer, with 36% having no other title. Of those with another role in the company, 20% share responsibilities in the legal department. As with the OCEG study, other shared roles include the chief audit executive, CFO and head of human resources, among others.
Also telling about the relative importance of the compliance officer role are the reporting relationships. The SCCE study, for instance, shows the chief compliance officer reporting directly to the CEO in 55% of the organizations. And the compliance officer reports to the board of directors or a board committee, both in writing and face-to-face, in 80% of the companies. And with a more senior role comes higher pay. The OCEG study shows the most common level of compensation (36%) is between $150,000 and $250,000, with 20% reporting pay at $350,000 and above, not counting bonuses, stock options or other forms of pay. As we might expect, pay in larger companies is at the higher end: among companies with more than $1 billion in revenue, 23% report total compensation at the $450,000 level or higher.
Certainly, if you’re directly or tangentially involved with compliance, these numbers probably aren’t surprising. With the regulatory spotlight shining brightly and companies struggling to keep costs from soaring out of control and to enhance compliance program effectiveness, companies are looking to strengthen the role of their chief compliance officer.