Monday 31 August 2015

Macroprudential Follies and Monetary Policy


In a recent article in Project Syndicate, Barry Eichengreen appears to criticize former Fed Chair Alan Greenspan for expressing doubt that policymakers can reliably identify bubbles, and for central bankers’ general unease about managing asset prices. He writes:
To be sure, central bankers cannot know for sure when asset prices have reached unsustainable heights. But they cannot know for sure when inflation is about to take off, either. Monetary policy is an art, not a science; it is the art of taking one’s best guess. And, as the 2008-2009 crisis demonstrated, merely cleaning up after the bubbles burst is very costly and inefficient.
Eichengreen’s argument is, in fact, part of a post-global-financial-crisis discourse on macroprudential policy. Many interventionists argue for financial regulation specifically designed to mitigate systemic risks to the financial system as a whole. According to macroprudential regulation’s proponents, monetary policy has historically failed because its instruments are blunt: monetary policy actions in either direction amount to a broad, sweeping measure for the whole economy that does not properly address the issues specifically feeding financial instability. They argue, further, that the recent financial crisis was created by a supervisory gap, as different sectors of the financial system often fall under the responsibility of different authorities, making it difficult to conduct a thorough analysis of systemic risk. As a result of these debates, a number of new institutions have been set up in recent years to preserve financial stability, such as the European Systemic Risk Board in the EU and the Financial Stability Oversight Council in the US.

At the same time, central banks are now assuming an important role in this regard, to the extent that in a recent speech Governor Daniel Tarullo, a member of the Federal Reserve Board, stated: “I feel secure in observing that we are all macroprudentialists now. The imperative of fashioning a regulatory regime that focuses on the financial system as a whole, and not just the well-being of individual firms, is now quite broadly accepted.” Mario Draghi, President of the European Central Bank, has also argued that “As you know, the [Single Supervisory Mechanism] Regulation gives the ECB the power to apply stricter macroprudential measures than the national authorities if it deems them necessary. We can also advise on the calibration of instruments. This goes some way towards insuring against an inaction bias at the national level, thus improving the prospects for a more stable euro area financial system.” The Bank of England, likewise, has been assigned full responsibility for macroprudential policy.

In 2008, the chairman of the House Committee on Oversight and Government Reform, Henry A. Waxman of California, asked Mr. Greenspan: “You had the authority to prevent irresponsible lending practices that led to the subprime mortgage crisis. You were advised to do so by many others. Do you feel that your ideology pushed you to make decisions that you wish you had not made?” Mr. Greenspan conceded:
“Yes, I’ve found a flaw. I don’t know how significant or permanent it is. But I’ve been very distressed by that fact. (…) Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity, myself included, are in a state of shocked disbelief.”
Unfortunately, it appears that this testimony has done irreparable damage to confidence in the market’s self-equilibrating potential. Instead, authorities have espoused a preference for the risk-management and credit-allocation skills of a few central bank officials. Macroprudential regulation has been justified by arguing that Chairman Greenspan’s 1994 hypothesis, put forward before the House Subcommittee on Telecommunications and Finance, to the effect that “There is nothing involved in federal regulation per se which makes it superior to market regulation,” has been proved wrong. On the same reasoning, his argument for the removal of the legislative barriers that prohibited the straightforward integration of banking, insurance and securities activities is also deemed misguided; he had concluded that:
“In virtually every other industry, Congress would not be asked to address issues such as these, which are associated with technological and market developments; the market would force the necessary institutional adjustments. Arguably, this difference reflects the painful experience that has taught us that developments in our banking system can have profound effects on the stability of our whole economy, rather than the limited impact we perceive from difficulties in most other industries.”
In fact, governments across the world have begun to introduce macroprudential regulations in the form of more stringent capital requirements, more conservative asset valuation, larger liquidity buffers, constraints on risk-taking, more stable funding requirements and improved provisioning against bad and toxic loans. According to Christian Noyer, Governor of the Banque de France, there is a consensus over the broad outlines of macroprudential regulation:
First, it involves adding a macroeconomic perspective to the supervision of the financial system, which up till now has only really been addressed from a “micro” standpoint. As the crisis has shown, financial stability does not depend solely on the soundness of the individual components that make up the financial system; it also depends on complex interactions and interdependencies between these components.
Implicit in Noyer’s argument is the unsubstantiated claim that even economies whose financial systems were composed of sound micro components suffered from financial instability. This is not true. Canada is perhaps the only country that can legitimately claim that its financial sector was robust at the micro level, and, as would be expected, its economy fared quite well during the global financial crisis. In the words of Mark Carney, the governor of the Bank of Canada at the time, “the core lesson we learned from those difficult years was the importance of coherent, principle-based policy frameworks.” These frameworks discipline policy-makers and enhance credibility. In the words of his predecessor David Dodge:
Canadian financial institutions took a more cautious approach to financial innovation at some cost to their short-term growth and profits relative to more leveraged foreign competitors, relied relatively less on wholesale funding and kept relatively more liquidity. In part, this stemmed from more stringent, coordinated and effective regulation and supervision in Canada, which provided the right incentives to financial institutions. (…)
Our system of principles-based regulation should continue to serve us well, even more so in a context where most national regulators elsewhere will not conform to the detailed, uniform international standards. What is required here in Canada is a high degree of cooperation between regulators and financial institutions to achieve stability goals. In the past, such cooperation in designing principles-based regulation has strengthened the Canadian system. We should not lose that advantage as we move forward.
It is important to note that in principle-based policy frameworks, monetary policy would concern itself with monetary policy goals and would not allow macroprudential considerations to contaminate the transmission mechanism and distort the economic structure. Mr. Noyer’s second characteristic of macroprudential policy is that:
“it is preventive. Its aim is precisely to prevent the formation of financial imbalances, procyclical phenomena or systemic risks by limiting excessive growth in credit and in economic agents’ debt levels, and increasing the shock‑absorbing capacity of financial institutions or structures ex ante.”
The argument again rests on a number of untenable implicit assumptions: that authorities possess reliable measures of excess or systemic risk; that macroprudential policymakers are themselves expert in detecting and interpreting economic signals; that the lag structure of policy impacts is known and stable; and that policy actions can be calibrated precisely enough to damp excesses without unnecessarily reducing well-underwritten credit flows in the economy. In this regard, the Bank of Spain’s assessment of the Spanish macroprudential experience shows that virtually none of these conditions is satisfied. That assessment reads:
Dynamic provisioning is not the macro-prudential panacea, since the lending cycle is too complicated to be dealt with using only loan loss provision policies. Indeed the Spanish experience shows that even well targeted and calibrated instruments cannot cope perfectly with the narrow objective for which they are designed, among other things because the required size to fully achieve its goals would have inhibited and distorted financial and banking activity.
Of course, no proof exists that government regulators are better than private investors at predicting which individual investments are justified and which are folly. The cost of macroprudential regulation in the name of financial stability has been a confused monetary policy that has delayed the return to equilibrium, increased uncertainty and slowed economic growth.

In a world in which agents can innovate to take advantage of arbitrage opportunities, there is no reason to believe that macroprudential policies can have any lasting impact on financial stability. Indeed, if regulations were of any use, the old Soviet Union would have been a success story, or today’s Chinese financial markets would be the most stable in the world. The arbitrage possibilities generated by these regulations lead to financial innovations that work against the policies, annulling their impact. The evidence already presents itself in the form of a shift of financial activities toward the less regulated shadow banking system.

Moreover, to assume that macroprudential policies have any impact, one must postulate that the economic system can be represented by a stable model and that the authorities have already discovered that model. Otherwise, in a fast-changing world in which the parameters of taste and technology respond to new scientific and digital advances at ever-increasing speed, no financial authority can have any clear picture of the evolving transmission mechanism, the lag structure or the specification of an adequate model.

As Paul Kupiec has argued, macroprudential policies will not have a significant impact and thus will not succeed. In fact, his study co-authored with Yan Lee of the Federal Deposit Insurance Corp. and Claire Rosenfeld found that increasing a bank’s minimum capital requirement by 1% will decrease bank lending growth by a paltry six one-hundredths of a percent. As he reported in the Wall Street Journal: “There is not much evidence that these policies prevent financial bubbles. But there is great risk in allowing a small group of unelected technocrats to determine the allocation of credit in the U.S. economy.”
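To put that estimate in perspective, here is a back-of-the-envelope calculation; the baseline growth rate and the size of the hypothetical capital hike below are my own illustrative assumptions, not figures from the study:

```python
# Illustrative arithmetic for the Kupiec-Lee-Rosenfeld estimate: each
# 1-point rise in the minimum capital requirement shaves roughly
# 0.06 percentage points off bank lending growth.
# The baseline growth rate and the hike size below are hypothetical.

baseline_growth = 5.00      # assumed annual lending growth, in percent
effect_per_point = -0.06    # estimated effect per 1-point capital increase
capital_hike = 2.0          # hypothetical 2-point rise in requirements

new_growth = baseline_growth + effect_per_point * capital_hike
print(f"lending growth: {baseline_growth:.2f}% -> {new_growth:.2f}%")
# lending growth: 5.00% -> 4.88%
```

Even a hypothetical two-point hike barely dents lending growth, which is precisely Kupiec’s point about the limited traction of such tools.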

Furthermore, as many analysts have noted, the propaganda surrounding macroprudential regulation or supervision creates a false sense of security and stability. A central bank that plays too large a role in the economy, attempting to stabilize the industrial, construction and other goods and services sectors as well as the labour market, will have more difficulty communicating its monetary policy stance. It is hard to believe that central banks’ models and expertise are adequate for pursuing such varied goals when the issues in each of these sectors are complex and shaped by diverse technological and competitive factors. The lines between discretion and rules for the two sets of monetary and macroprudential policies would criss-cross, creating a very confusing lag structure for any signal-extraction model.

There is no reason to believe that the principal-agent problem is not applicable here. In other words, it can be hypothesized that central banks, as agents, may not be supportive of letting market mechanisms stabilise the system independently. A regulated environment maximises the returns to the agents in this framework, as it entails more secure job prospects: intense monitoring of capital and liquidity ratios, continuous inspection of the impact of various restrictions on banking practices, and the conduct of periodic stress tests. The increased uncertainty will cause a surge in speculative activity in capital-asset markets, which will exacerbate the situation and boost the demand for macroprudential regulation, perpetuating the favourable job prospects of the agents.

Wednesday 19 August 2015

Will the Fed raise rates in September?


In my estimation the Fed cannot risk raising rates at such a critical time, and therefore it won’t. A rate rise under current market conditions, one month before October (a month historically associated with stock market corrections), could be the psychological trigger that disturbs the current fragile local equilibrium, pushing the US and the whole global system along a path towards instability and a full-fledged financial crisis, exhibiting a collapse of investment and debt deflation, and thus leading to insolvent debtors and a weaker banking system. That would be 1937 all over again!

I am, of course, no fan of the current zero-interest-rate policies, and I believe these policies have distorted not only the US and European economies but also the global economy. The Fed has indeed created a catch-22: higher rates are badly needed, but any action towards raising them would be extremely destabilizing. This is why I have been calling for an emergency global finance conference similar to the Brussels conference that took place between the 24th of September and the 8th of October 1920. That international conference was called

“with a view to studying the financial crisis and looking for the means of remedying it and mitigating the dangerous consequences arising from it.”

Such a sharp focus on the financial crisis is needed for any new conference dealing with the current situation, in order to find a sustainable long-term solution. In other words, none of the unrelated questions, such as geopolitical crises, human rights or environmental concerns, should be discussed at this conference; its sole purpose should be the search for a restructuring of global finance. To arrive at an accurate assessment, the Brussels conference secretariat asked the participating countries and their financial institutions to submit the latest data on currency, public finance, international trade, inflation and so on, an obvious prerequisite for any such exercise.
       
As Gustav Cassel, the great Swedish economist, argued, the main responsibility for the 1920s financial crisis, like the current one, lay with the policy actions of various countries, and it could only be remedied by an internationally coordinated return to stable currencies. It is worth noting that such a return, in Cassel’s framework, was not predicated on a return to the gold standard. He also strongly dismissed the possibility of arbitrarily fixing exchange rates and instead advocated a global exchange rate regime based on the theory of Purchasing Power Parity (PPP), which would have linked fluctuating exchange rates to the prices paid for a common basket of goods and services in the regions that participated in international trade, such that the same price level would have been maintained for that common basket in every region.
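In modern notation (mine, not Cassel’s), the mechanics can be sketched as follows: absolute PPP ties the level of the exchange rate to relative price levels for the common basket, while relative PPP ties exchange-rate changes to inflation differentials.

```latex
% E_t : home-currency price of one unit of foreign currency
% P_t, P_t^* : home and foreign price levels for the common basket
% \pi_t, \pi_t^* : home and foreign inflation rates
\[
  \text{Absolute PPP: } E_t = \frac{P_t}{P_t^{*}},
  \qquad
  \text{Relative PPP: } \frac{\Delta E_t}{E_{t-1}} \;\approx\; \pi_t - \pi_t^{*}
\]
```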
     
While Cassel correctly diagnosed the dangers of deflationary policies for the future prospects of economic growth and social stability, he also warned that:

“As the internal value of a currency exclusively depends upon its purchasing power over commodities, a stabilization of this value can clearly only be attained by an adequate restriction of the supply of means of payment. The character of this restriction depends, of course, on the character of the means of payment used in the country. If they are supplied by the State as a paper money issued by the Government directly or indirectly for covering their expenses, the stabilization of the monetary standard clearly requires the stopping of further arbitrary creation of such money. This is so obvious that it is not necessary to waste many words on it”.


Unfortunately, under today’s “currency war” conditions, with the slowdown in China and Europe’s debt crisis, as well as the huge debt build-up by consumers and states, the normalization of the money supply in any single country, as an isolated and uncoordinated action, would be a recipe for disaster.

Thursday 13 August 2015

UK Competitiveness Outlook is Gloomy!

In its first budget, the new Government of Prime Minister David Cameron has eased markedly the intended austerity measures that had been pencilled in by the previous Coalition. However, the new, relatively less intense tightening is still financed by welfare cuts, net tax increases and three years of higher government borrowing. Chancellor Osborne has delayed the expected return to a budget surplus by a year to 2019-20, sugar-coating this delay by promising a slightly bigger surplus in the medium term. The Chancellor’s introduction, from April next year, of a £7.20-an-hour National Living Wage, rising to £9 an hour by 2020, outshone the opposition’s election pledge of an £8-an-hour minimum wage by 2020.

Does this budget change the trajectory of the British economy towards a more dynamic and competitive path? To explore this question, let us look at the current state of the economy. The UK’s independent Office for Budget Responsibility (OBR) estimates the margin of spare capacity in the economy at 0.6 per cent of potential output in 2015-16 and expects this ‘output gap’ to close in 2018-19. However, these estimates may hide the fact that, because businesses are drawing on contingent capacity, the gap has been underestimated. In planning for capacity during uncertain times, businesses usually postpone the irreversible component of their investment and rely on intensive-margin production processes. As a result of this focus on short-term capacity corresponding to the existing cost structure, the longer-term capacity signals remain hidden. This reading is validated by the Bank of England’s August Inflation Report, which states:
Companies using their existing capital and labour more intensively will increase measured productivity but there is a limit to how far companies can do this without putting excessive upward pressure on their costs. Survey measures suggest that, having increased since 2013, capacity utilisation picked up a little in 2015 Q2, and is close to or perhaps slightly above past average levels.
Consistent with Ben Bernanke’s option value of waiting, it would be quite rational for businesses to postpone their strategic investment plans at times of currency wars and global volatility and to focus instead on their contingent capacity limits. Thus, instead of picking up reports of capacity utilization relative to the long-term capacity associated with a firm’s minimum long-run average costs, business surveys would detect signals of capacity tightening caused by delays in implementing the irreversible phases of investment. This observation can also be validated by indicators such as the investment profile and productivity growth. Note that productivity growth, defined as the rate of change of output minus the rate of change of hours worked, rises when investors invest to expand the production possibility frontier, which usually reduces their cost structure through the adoption of new, innovative technologies. The fact that UK productivity growth has been subdued over the past eight years is a clear indication that British investors are still quite hesitant to invest strategically to enhance competitiveness.
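The postponement logic can be made concrete with a stylized two-period example. All numbers below are my own illustrative assumptions, not figures from the OBR or the Bank: when investment is irreversible and demand is uncertain, waiting for the uncertainty to resolve can dominate investing now, even though investing now has a positive expected net present value.

```python
# Stylized two-period illustration of the "option value of waiting":
# an irreversible project, uncertain demand next period.
cost = 100.0                             # irreversible investment outlay
payoff_good, payoff_bad = 180.0, 60.0    # PV of profits in each demand state
p_good = 0.5                             # probability demand turns out strong
discount = 0.95                          # one-period discount factor

# Invest today: committed before the uncertainty resolves.
npv_now = p_good * payoff_good + (1 - p_good) * payoff_bad - cost

# Wait one period: invest only if the good state is revealed.
npv_wait = discount * p_good * (payoff_good - cost)

print(f"invest now:  NPV = {npv_now:.1f}")    # 20.0
print(f"wait a year: NPV = {npv_wait:.1f}")   # 38.0 -> waiting dominates
```

The option to avoid the bad state is worth more than the year of forgone profits, which is why surveys would record firms running existing capacity harder rather than building new capacity.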

The chart below, based on OECD data, shows the widening gap in capital formation between the UK and the United States, particularly since the recent deep recession, which helps explain why productivity growth in Britain has been so low.


As the following chart shows, the OBR predicts that investment as a share of GDP, which has hovered around 11% in recent quarters, will increase to about 13% by 2018. However, the latest data show that business investment growth slowed in the second half of 2014, so against a backdrop of heightened uncertainty the OBR prediction may prove rather optimistic. Even if the prediction comes to pass, however, that amount of investment would not be sufficient to remedy the loss of competitiveness of British industries, which are in need of drastic restructuring in response to the imperatives of new technological advances such as the internet of things, mobility, 3-D technologies and smart raw materials, to name a few.


Moreover, the Bank of England’s Agents’ Summary, depicted in the following chart, indicates only moderate investment growth, and it is not clear whether the new investment will aim at expanding the production frontier and increasing competitiveness, or will remain focused on tactical, reversible investment, such as repair and marginal upgrading of existing technology, while pursuing a path of intensive-margin production.


As already mentioned, such moderate investment growth would not be sufficient for the needed restructuring and the crucially necessary enhancement of British competitiveness. The Chancellor’s strategy of rejuvenating manufacturing and exports through new trade deals cannot succeed in the absence of investment geared toward enhancing competitiveness. His fast-track visa system for wealthy Chinese investors would be ineffective if the new investments simply move into real estate instead of new technology. To halt the persistent decline of the UK’s export market share, depicted in the following chart, a significant rise in investment would be a prerequisite.



Participation in the current currency war, even though the British pound has appreciated 20% on a trade-weighted basis since March 2013, would not be an option: it would either worsen public sector net borrowing (depicted in the chart below) or further reduce the effectiveness of monetary policy and exacerbate households’ high level of debt (the next chart below). One should also recall that the Bank of England has maintained the stock of purchased assets financed by the issuance of central bank reserves at £375 billion, and will reinvest the £16.9 billion of cash flows associated with the redemption of the September 2015 gilt held in the Asset Purchase Facility. At the same time, the Government’s spending is expected to be £83.3 billion higher in total over the current Parliament relative to the previous Coalition budget. Thus, more easing will only add to the distorting imbalances.



Sunday 2 August 2015

Alpha, Beta, and Beyond -- A comment on "smart beta"

 



In a recent Project Syndicate article, Dr. Roubini argues:
 [M]y economic research firm has a quantitative model, updated every three months, that ranks 174 countries on more than 200 economic, financial, political, and other factors to derive a measure or score of these countries’ medium-term attractiveness to investors. This approach provides strong signals concerning which countries will perform poorly or experience crises and which will achieve superior economic and financial results. 
Weeding out the bad and the ugly based on these scores, and thus picking more of the good apples, has been shown to provide higher returns with lower risk than actively managed alpha or passive beta funds. And, as the rankings change over time to reflect countries’ improving or worsening fundamentals, the equity markets that “smart beta” investors choose change accordingly.

The claim goes beyond the pale and is absolutely stunning. It is hard to imagine that alphas and betas are not time-varying parameters. In fact, studies by Blume; Hawawini, Michel and Corhay; Levy; and others have shown that stock betas can change drastically between two succeeding periods, and some have argued that linear estimators of beta are unrealistic. Thus, one wonders about the validity of any “smart” (or “enhanced”) beta strategy claiming to pick up the true betas in any specific period. It has always been a puzzle to many how some serious people regard betas and alphas as ex ante criteria for portfolio selection. Is it not more reasonable to believe that the intrinsic value of any stock derives from the firm’s competitiveness characteristics and the market fundamentals for the underlying goods or services that the stock represents?
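The instability is easy to demonstrate on simulated data. The sketch below is entirely synthetic: the drift in beta is imposed by construction, not estimated from any real market. It shows how a full-sample regression can report a single, stable-looking beta while the beta actually in force wanders over a wide range.

```python
# Rolling-window OLS betas versus the full-sample beta, on simulated
# returns whose true beta drifts from 0.5 to 1.5 over the sample.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
market = rng.normal(0.0, 0.01, T)            # simulated market returns
true_beta = np.linspace(0.5, 1.5, T)         # beta drifts over time
stock = true_beta * market + rng.normal(0.0, 0.01, T)

def ols_beta(y, x):
    """Slope of the OLS regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

full_sample = ols_beta(stock, market)
window = 120                                  # ~ six months of daily data
rolling = [ols_beta(stock[t - window:t], market[t - window:t])
           for t in range(window, T + 1, window)]

print(f"full-sample beta: {full_sample:.2f}")           # near 1.0
print("rolling betas:", ", ".join(f"{b:.2f}" for b in rolling))
# The rolling estimates wander between ~0.5 and ~1.5 even though the
# full-sample regression reports one stable-looking number.
```

Any “smart beta” weight computed from the full-sample estimate is therefore anchored to a parameter that may no longer describe the stock at all.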

Based on ex post data, any econometric technique can always identify some alphas and betas that appear to have superior characteristics, supported by an array of statistical measures attesting to the explanatory power of the regression. Such models may capture part of the impact of real market fundamentals, say a rightward shift of the demand curve for the underlying goods and services, or a shift in the cost structure of the firm producing them, and so on. The data may also contain some memory, due to various lags, that can be captured by the estimated equations. However, if a portfolio manager shows you a selection of statistics (and there are hundreds of them: R-squared, p-values, LM, DW, BP and F, to name a few) that appear to suggest superior predictive information content, then one really needs to ask why the investment manager is prepared to share such valuable information for a small fee, instead of attempting to corner the market!
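A quick simulation makes the point. It is fully synthetic: the “returns” below are pure noise by construction, so no factor can genuinely have explanatory power, yet the best in-sample fit still looks respectable.

```python
# Data-mining demonstration: with enough candidate factors, in-sample
# regressions on past returns will surface "significant" alphas and
# betas purely by chance.
import numpy as np

rng = np.random.default_rng(42)
T, n_factors = 60, 500                   # 5 years of monthly data, 500 factors
returns = rng.normal(0.0, 0.04, T)       # pure noise: no factor truly matters
factors = rng.normal(0.0, 0.04, (n_factors, T))

def r_squared(y, X):
    """In-sample R^2 of an OLS regression of y on the columns of X."""
    X = np.column_stack([np.ones(len(y)), X])
    fitted = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)

# Rank the useless factors by their individual in-sample fit,
# then build a "model" from the five best.
scores = np.array([r_squared(returns, f) for f in factors])
top5 = factors[np.argsort(scores)[-5:]]

print(f"best single-factor R^2: {scores.max():.2f}")                # ~0.15
print(f"top-5 'model' R^2:      {r_squared(returns, top5.T):.2f}")  # ~0.5
# Ex post, the five-factor "model" looks like a discovery; ex ante it
# has no predictive content whatsoever, since the returns are noise.
```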

This is neither a rehash of the efficient market hypothesis nor an argument derived from the possibility of black swans. It is a subtle recognition of the nature of risk and uncertainty in the Knightian framework. In other words, the expected return on a portfolio is not of the same nature as the expected return from casting a number of fair dice. The distribution of outcomes from casting a die can be learned through a repetitive casting process, and so the volatility of returns can be formulated as Knightian risk. This is not the case for the expected return on a stock, because each return derives from a specific demand-supply configuration for the goods and services underlying the stock and from the position of the short-run average cost curve of the company producing them at a particular time. This does not lend itself to repeated sampling. Moreover, we are seldom in the idealized case of perfect competition; in reality, various strategic pricing and capacity decisions, together with logistical constraints, also play important roles. Thus, the underlying distributions of the expected betas are unknown: Knightian uncertainty. Can one resort to time-series analysis of, say, the cointegration type? Because of the unavailability of long enough data series (i.e., a degrees-of-freedom constraint), difficulties in detecting the order of integration, and a host of other technical issues well known to practitioners, that option too would be impractical.
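The distinction can be illustrated in a few lines. This is a toy sketch: the “stock return” parameters are invented for illustration, and they are exactly what a Knightian investor cannot know.

```python
# Knightian risk vs. Knightian uncertainty in miniature.
import numpy as np

rng = np.random.default_rng(7)

# Knightian risk: cast a fair die many times and the empirical
# frequencies converge to the true distribution (1/6 per face).
rolls = rng.integers(1, 7, size=100_000)
freqs = np.bincount(rolls, minlength=7)[1:] / rolls.size
print("empirical die frequencies:", np.round(freqs, 3))   # all near 0.167

# Knightian uncertainty: a stock's return over the next year is one draw
# from a distribution that shifts with demand, costs and strategy, and
# the "experiment" cannot be rerun. The single observation below reveals
# nothing about its parent distribution (whose parameters are invented
# here purely so the script runs).
one_off_return = rng.normal(0.05, 0.20)
print(f"one-off realized return: {one_off_return:.2%}")
```

Repetition is what turns frequencies into a distribution; for the stock there is no repetition, so no amount of past sampling pins the distribution down.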
