Matt Levine on shareholder value

Matt Levine discusses:

  • Traditional and progressive views of the role of corporations
  • How a narrow focus on raising shareholder value keeps frauds from exploiting employees and investors with appeals to higher causes
  • Compartmentalizing your job from your personhood as a necessary convenience

https://www.bloomberg.com/view/articles/2018-09-27/shareholder-value-could-be-worse



Primacy of Corporate Profits

Friedman, along with Michael Jensen and William Meckling, is probably the person most associated with the theory that—as his famous article put it—“The Social Responsibility of Business Is to Increase its Profits.” In this theory, managers of a corporation have a singular duty to the shareholders to maximize their economic return as far as possible (while complying with the law), and if managers pursue any other objective—treating workers well or being good environmental stewards or standing up for what they believe in—at the expense of shareholder value, then they are misbehaving. Of course, there is plenty of room to argue that pursuing those other objectives actually enhances shareholder value. And there is much to be said for Friedman’s view—which is also after all Adam Smith’s view—that by focusing on economic profits, a company will maximize the amount of social good that it does, simply because the normal way to maximize profits is to figure out what people want and then sell it to them.

Including More Stakeholders

It is popular, these days, to criticize that view. The corporation is a political construct, embedded in a society; it has many stakeholders whose interests it needs to consider, not just shareholders. You see this criticism everywhere, from attacks on stock buybacks to Elizabeth Warren’s call for “accountable capitalism.” In its more extreme form, you can see shareholder-profit-maximizing corporations compared to science-fiction robot villains, or to psychopaths: If you value only profit and nothing else, then there is something inhuman about you.

Compartmentalizing

That probably overstates matters. If you come to work and focus on maximizing the profits of your company, that probably doesn’t mean that you’re a psychopath. It probably just means that you have a job. You compartmentalize things a bit; your work does not contain the entirety of your personhood; it’s a thing that you do because you need to make a living. In this sense, a company whose philosophy is “we will sell products that people want for more than it costs us to make them so that we can make a profit and increase our share price” is rather psychologically healthy. That is a good goal to work on during business hours Monday through Friday, and then leave. It is a modest, reasonable, businesslike goal. Obviously there are large contested margins, and you shouldn’t do psychopathic things to pursue that goal, and some people do and that’s bad, but for the most part “shareholder value” is the sort of mission that inspires people more or less the right amount. If you go around murdering people to maximize shareholder value then, yes, you are a psychopath, but most people aren’t.

The Difficulty of Accounting for Intentions

But there are other goals. Those goals are bigger, and you can wrap your whole personhood up in them, and you can believe that those goals are so important that they can justify anything. If Facebook’s goal is to maximize revenue by selling targeted ads to clothing companies, and you find out that it has features that enable genocide, then you shut down those features because the ads just aren’t worth it. If Facebook is about the “noble mission” of “connecting people,” then the tradeoffs are murkier. If “Facebook is truly the only company that’s singularly about people,” then … what even … how do you measure how about-people it is being? If you’re the singular company whose focus is people, then whatever you do is sort of necessarily good; your end is so vague and noble that it can justify any means. And for all that Facebook’s meddling with Instagram and WhatsApp seems to be driven by straightforward ad-revenue-maximization considerations, it’s worth saying that Facebook isn’t really answerable to shareholders and that its explicit ideology rejects shareholder value as a goal. “Facebook was not originally created to be a company,” Mark Zuckerberg wrote when it went public. “It was built to accomplish a social mission.” Okay!

Grand Visions Can Be Weaponized, But Shareholder Value? Not So Much.

I am late to it, but I just finished reading John Carreyrou’s “Bad Blood,” the story of the fraud at Theranos Inc. and his work to uncover it. Theranos—in Carreyrou’s view, and the view of federal prosecutors—issued tens of thousands of blood-test results to real patients using technology that it knew didn’t work, endangering those patients’ lives. There are a lot of passages in the book about Theranos founder Elizabeth Holmes inspiring and cajoling her employees to work harder, to get with the mission, to override their moral objections to faking the technology and push ahead. None of those passages mention shareholder value or profit maximization. They mention Holmes’s vision of revolutionizing health care to save lives and treat cancer patients. If you want to inspire people to do terrible things, it is very useful to sell them on a grand vision, a higher purpose, a noble mission. Shareholder value is nobody’s idea of an inspiring mission. That’s what’s good about it!

How much to wager when you have edge? (Hint: median not mean outcomes!)

Link: Rational Decision-Making under Uncertainty: Observed Betting Patterns on a Biased Coin


  • Optimal bet size as a fraction of bankroll is 2p-1 where p is the probability of winning. You will recognize this as the edge per trial reported as a percent. So a 60% coin has a 20% expected return or edge
  • The formula is the solution to a proportional betting system which implicitly assumes the gambler has log utility of wealth (a one-line derivation is sketched below)
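A quick sketch of where 2p-1 comes from under that log-utility assumption (standard Kelly algebra, stated here for reference rather than quoted from the paper):

```latex
g(f) = p\,\ln(1+f) + (1-p)\,\ln(1-f)
\qquad\Longrightarrow\qquad
g'(f) = \frac{p}{1+f} - \frac{1-p}{1-f} = 0
\;\Longrightarrow\;
f^{*} = 2p - 1
```

For p = 0.60 this gives f* = 0.20, i.e., bet 20% of the bankroll per toss; the median (not the mean) per-toss growth factor is exp(g(f*)) ≈ 1.02.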

Imagine tossing a 60% coin 100x and starting with a $25 bankroll

Arithmetic Mean Land

The mean of one flip is 20% positive expectancy

Optimal bet size is 20% of bankroll since you have .20 expectancy per toss

Increase in wealth per toss betting a Kelly fraction: 20% of bankroll x .20 expectancy = 4%

Expected (mean) value of game after 100 flips betting 20% of your wealth each time

$25 * (1+.04) ^ 100 = $1,262

Median Land

The median of one flip betting a Kelly fraction is (1.2^.60 * .8^.40 – 1) or 2%

Median value of game after 100 flips betting 20% of your wealth each time

$25 * (1.2^60) * (0.8^40) = $187.25!

Things to note

  • The median outcome by definition is the increase in utility since Kelly betting implicitly assumes the gambler has log utility
  • After 100 flips, the median outcome is only about 1/10 of the mean outcome! The median outcome gives an idea of how much to discount the mean payoff. If your utility function is not a log function (ie does quadrupling your wealth make you twice as happy?) then a different Kelly fraction should be used (see the simulation sketch below)
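A minimal Monte Carlo sketch of the game above (my own illustrative code, assuming numpy; the simulated numbers will wobble around the $1,262 mean and $187 median worked out above):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_paths=200_000, n_flips=100, p=0.60, fraction=0.20, bankroll=25.0):
    """Bet a fixed fraction of bankroll on each flip of a biased coin."""
    wins = rng.random((n_paths, n_flips)) < p             # heads with probability p
    growth = np.where(wins, 1 + fraction, 1 - fraction)   # per-flip wealth multiplier
    return bankroll * growth.prod(axis=1)                 # terminal wealth per path

wealth = simulate()
print(f"mean terminal wealth:   ${wealth.mean():,.0f}")      # close to 25 * 1.04**100 ~ $1,262
print(f"median terminal wealth: ${np.median(wealth):,.0f}")  # close to 25 * 1.2**60 * 0.8**40 ~ $187
```

The long right tail (a handful of paths ending up enormously wealthy) is what drags the mean an order of magnitude above the median.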

Refactored “When the Culture War Comes for the Kids”

This is a 10:1 compression and refactoring of George Packer’s When the Culture War Comes for the Kids


There is a deep sense of inequality prevailing in America. 

The parents on the fortunate ledge of this chasm gaze down, vertigo stuns them. Far below they see a dim world of processed food, obesity, divorce, addiction, online-education scams, stagnant wages, outsourcing, rising morbidity rates—and they pledge to do whatever they can to keep their children from falling…By kindergarten, the children of elite professionals are already a full two years ahead of middle-class children, and the achievement gap is almost unbridgeable.

The need for equality and the role of merit.

The claim of democracy doesn’t negate meritocracy, but they’re in tension. One values equality and openness, the other achievement and security. Neither can answer every need. To lose sight of either makes life poorer. The essential task is to bring meritocracy and democracy into a relation where they can coexist and even flourish.

In 2014 the front line of social advocacy hardened

This new mood was progressive but not hopeful…At the heart of the new progressivism was indignation, sometimes rage, about ongoing injustice against groups of Americans who had always been relegated to the outskirts of power and dignity…Over time the new mood took on the substance and hard edges of a radically egalitarian ideology…its biggest influence came in realms more inchoate than policy: the private spaces where we think and imagine and talk and write, and the public spaces where institutions shape the contours of our culture and guard its perimeter…You could almost believe they spoke for a majority—but you would be wrong…The new progressivism was a limited, mainly elite phenomenon.

“For better or worse, it’s all identity now.”

The battleground of the new progressivism is identity…progressive politics meant thinking in groups. In politics, identity is an appeal to authority—the moral authority of the oppressed: I am what I am, which explains my view and makes it the truth. The politics of identity starts out with the universal principles of equality, dignity, and freedom, but in practice it becomes an end in itself—often a dead end, a trap from which there’s no easy escape and maybe no desire for escape. Instead of equality, it sets up a new hierarchy that inverts the old, discredited one—a new moral caste system that ranks people by the oppression of their group identity. It makes race, which is a dubious and sinister social construct, an essence that defines individuals regardless of agency or circumstance—as when Representative Ayanna Pressley said, “We don’t need any more brown faces that don’t want to be a brown voice; we don’t need black faces that don’t want to be a black voice.”

De Blasio’s schools chancellor, Richard Carranza, has answered critics of the diversity initiative by calling them out for racism…Carranza has mandated anti-bias training…One training slide was titled “White Supremacy Culture.” It included “Perfectionism,” “Individualism,” “Objectivity,” and “Worship of the Written Word” among the white-supremacist values that need to be disrupted.

Witchhunt?

At times the new progressivism, for all its up-to-the-minuteness, carries a whiff of the 17th century, with heresy hunts and denunciations of sin and displays of self-mortification. The atmosphere of mental constriction in progressive milieus, the self-censorship and fear of public shaming, the intolerance of dissent—these are qualities of an illiberal politics.

[In Jared Dillian’s Daily Dirtnap, he recounts “I’m not sure if you heard about the spectacle at the Des Moines Register, but here goes. Guy goes to a football game and holds up a sign asking for beer money over Venmo. He gets some money. He gets more money. Then more money. He has $1 million! He donates it to a children’s hospital. Reporter at the Des Moines Register digs up old racist tweets from him when he was 16. The guy who saved the children. Outrage mob forms, digs up old racist and homophobic tweets on the reporter. Pandemonium ensues. Thousands of people canceling subscriptions from the newspaper. Newspaper editors stonewalling. Now death threats. This is where we are. Every day is worse than the last. Tomorrow will be worse than today. Yesterday was worse than the day before. This will continue for the next 20-30 years.”]

[Dave Chappelle, in his Netflix special Sticks and Stones, calls out the prevailing ‘cancel culture’: “If you do anything wrong in your life and I find out about it, I’m gonna try to take everything away from you!”]

It struck me that this would punish kids whom the movement was supposed to protect…

In the name of equality, disadvantaged kids were likelier to falter and disappear behind a mist of togetherness and self-deception. Banishing tests seemed like a way to let everyone off the hook. This was the price of dismissing meritocracy.

The middle-school scramble subjected 10- and 11-year-olds to the dictates of meritocracy and democracy at the same time: a furiously competitive contest and a heavy-handed ideology. The two systems don’t coexist so much as drive children simultaneously toward opposite extremes, realms that are equally inhospitable to the delicate, complex organism of a child’s mind. Wokeness prettifies the success race, making contestants feel better about the heartless world into which they’re pushing their children. Constantly checking your privilege is one way of not having to give it up.

Our goal shouldn’t be to tell children what to think. The point is to teach them how to think so they can grow up to find their own answers.

There is no answer to this

I can imagine the retort—the rebuke to everything I’ve written here: Your privilege has spared them. There’s no answer to that—which is why it’s a potent weapon—except to say that identity alone should neither uphold nor invalidate an idea.

“The legacy of racism, together with a false meritocracy in America today that keeps children trapped where they are, is the root cause of the inequalities in the city’s schools. But calling out racism and getting rid of objective standards won’t create real equality or close the achievement gap, and might have the perverse effect of making it worse by driving out families of all races who cling to an idea of education based on real merit. If integration is a necessary condition for equality, it isn’t sufficient. Equality is too important to be left to an ideology that rejects universal values.”


Tradeoffs in tax policy

Excerpts from Howard Marks’s memo: It’s All Very Taxing

The concept of “paying a fair share” is nebulous at best. It’s very contentious because everyone has a horse in the race. Marks’s memo is my favorite reference for the complexities and competing goals when designing tax policy.

Marks published the memo in November 2011, so the specifics are outdated, but this doesn’t negate the reasoning. In fact, the changes actually highlight how our tax code evolves to punish or incentivize certain behavior.


Intro

We have a progressive system of taxation, meaning that higher earners don’t merely pay more in terms of dollars; they generally pay a higher percentage of their incomes in taxes. Most people agree that this is fair. But is it? Why should success be penalized through greater taxation? And if the tax rate for those who earn more should be higher, how much higher?

Under the U.S. system, people in higher income brackets pay tax at higher rates. The question of fairness largely comes down to whether the higher rates are high enough. Talk about “the eye of the beholder.” There’s evidence on both sides of this debate: The top 1% of U.S. taxpayers pay 38% of all individual federal taxes.

A breakdown of the numbers

The top 10% pay 70% of all taxes, the top 25% pay 86%, and the top 50% pay 97%. The bottom 50% of all taxpayers pay only 3% of the total.

About half of Americans pay no federal income tax, and almost 25% pay no federal taxes at all. The average federal income tax rate for the top 1% of Americans is 23% (and for the top half it’s 14%), while the average rate for the bottom half is 3%.

They pay at lower rates than they used to and it seems progressivity has declined. . . . the effective federal tax rate, including payroll taxes, for the wealthiest 0.01 percent of earners fell to 31.5 percent in 2005, from 42.9 percent in 1979 [for a decline of 26.6%], according to data from the Congressional Budget Office. Over the same time, effective rates for taxpayers in the center of the range fell to 14.2 percent, a decrease of just 4 percentage points [or 22.0%]. (The New York Times, September 21, 2011). Total revenues from income taxes have declined in the U.S. – they “are at a historic low of 15.3 percent of the gross domestic product, compared with a postwar average of 18.5 percent” (Financial Times, September 25) – and they’ve declined more for top earners than for the rest. This is because of both specific rate cuts that have been enacted and the fact that the rates applied to dividends and capital gains – which clearly flow more to people in the upper-income brackets – have declined relative to the rates on salaries and wages.

  • On average, higher earners absolutely do pay a higher percentage than those who earn less.
  • But the decision as to whether the differential is just right, too little or too great is highly subjective and certainly a valid topic for debate.

A non-exhaustive list of trade-offs to consider

  • Are some forms of income more desirable to society and thus deserving of taxation at lower rates?

A discussion about investment vs wage income

    • Long-term capital gains are taxed at reduced rates because of a judgment that long-term investment in things like securities, companies and real estate is beneficial for the economy and should be encouraged. Right now, the top tax rate on long-term investment profit is less than half that on short-term gains and ordinary income.
    • What about interest? Why are dividends taxed at preferential rates and interest at ordinary rates? The explanation may lie in the fact that interest is deductible for corporations, while dividends aren’t. Interest is paid out of pretax income, while in theory dividends are paid out of after-tax income – although the existence of corporate deductions and credits means dividends may, in fact, be paid out of income that hasn’t been taxed by the U.S. Alternatively, the difference in tax treatment may be the result of a desire to encourage investment in “risky” equities rather than “safe” debt. But some companies’ dividends are no doubt safer than some other companies’ interest payments, so this distinction is questionable. If the goal is to encourage risk-bearing, is dividend versus interest the right criterion?
    • While on the subject of gains from investments, it’s interesting to note that, not long ago, dividends were included with interest under the rubric “unearned income.” And it was taxed more heavily than wages.
    • But now things have turned 180 degrees, and returns on capital are taxed at lower rates than wages. It’s worth noting that the Democrats – commonly considered the party of labor – controlled the government for much of the period 1928 to 1980, when earned income was favored. On the other hand, the Republicans – the party of those with capital to invest – have been in control more of the time since 1980, and the taxation of returns on capital has declined in relative terms. The definition of virtuous income that should be encouraged through lower taxes clearly is subjective, impermanent and subject to change with the winds of politics.
  • Should we encourage certain expenditures by making them deductible from taxable income?

The drafters called them deductions: provisions that reduce the net income on which taxes are levied. Critics call them loopholes, suggesting there’s something underhanded about those provisions. And politicians use the laudatory-sounding term tax incentives to describe tax code provisions that reduce tax revenues in order to encourage certain behavior. It all depends on your point of view.

Interest on mortgages

    • For as long as I can remember, interest on home mortgages has been treated as a desirable expenditure that should be encouraged. Because homeownership is considered part of the American dream, the tax code subsidizes it by reducing the after-tax cost for those who borrow to buy homes (and are able to itemize rather than take the standard deduction). While everything else may be arguable, certainly this seems fair. But is it? Are homeowners more virtuous than renters? If mortgage interest is deductible but rent isn’t, we’re requiring renters to subsidize owners. Is that appropriate?
    • On average, homeowners are from the middle and upper-income brackets. Is it fair that poorer renters provide a benefit for richer owners?
    • And is it desirable that those able to buy more expensive homes should get more of a subsidy than those consigned to cheaper ones?
    • As with the taxation of dividends, judgments on these matters change over time. Until 1987, there was no limit on the amount of mortgage interest that could be deducted. If you could afford to own ten homes with multiple million-dollar mortgages on each one, taxpayers would collectively share the cost by reducing your income taxes due. Today interest is deductible on only a maximum of $1.1 million of debt, and only on first and second mortgages, and only on a primary residence and a second home. So the tax treatment of owners of many homes and more expensive homes has become less generous. But it’s still better than that of renters. Is that proper?

Charitable deductions

    • As I travel the world visiting with clients, I see that two things about the U.S. are quite uncommon: (a) Americans give a lot of money to charity and (b) donations to charity are deductible in calculating taxable income. Everyone tells me the latter is the main reason for the former. In particular, these things are part of the explanation for the existence of the many private, non-state-supported colleges and universities in the U.S.
    • Part of this is true because legislators decided at some point to subsidize non-profits by encouraging contributions through the tax code. That’s certainly understandable. And yet, changes were made in recent years to limit upper-bracket taxpayers’ use of deductions in order to ensure that they pay some minimum tax rate. What about the unevenness of the subsidy?
      • The cost of giving $1 to charity is reduced by the amount of taxes it saves the donor, which is equal to $1 times the person’s tax rate. So today, speaking simplistically, it costs a top-bracket taxpayer 65 cents to give a dollar to charity, while it costs a bottom-bracket taxpayer 85 cents. Is that fair? Should the bigger earner receive a greater reward for a dollar of philanthropy than someone who can afford it less easily?
      • And should those who aren’t inclined to give to charity be required to subsidize those who are?

State and local deductions (SALT)

    • Deductibility on the federal tax return somewhat evens out the burden and ensures that (a) the states get first crack at taxing income and (b) the federal government can only tax what’s left, in line with federalist principles.
    • This raises a number of questions. Is the deductibility of state and local taxes fair? As with other deductions, the key question is “fair to whom?” Some people pay more state and local taxes than others, meaning they get greater deductions than others. As a result, while a person with a given income who lives in a high-tax state pays higher total taxes, he or she pays less federal tax than someone in a low-tax state. Is that fair?
    • Should the federal government subsidize spending on the part of high-tax states? That is, should residents in low-tax states bear part of the expenses of high-tax states?
    • While it is the source of an exemption rather than a deduction, what about interest on “municipal bonds” issued by states, counties, cities and local agencies? This is exempt from federal taxation, under the legal doctrine that the federal government mustn’t tax the operations of the states. But here again, we’re talking about a federal benefit (in the form of a lower cost of capital) for the biggest-spending local governments and their citizens, and a tax break for people who lend to them.

Property and sales taxes

    • Property taxes are deductible without limitation. Thus the owner of a mansion – or ten mansions – receives more of a tax benefit than a low-income earner. And it’s another subsidy for homeowners versus renters. Is this right, or should it be changed?
    • Sales tax used to be deductible, too (meaning the buyer of a Rolls Royce got assistance from the federal government). Now it’s not. More fair?

The biggest exclusions of all: employer-provided health care and the deferral of taxation of contributions to pension plans

    • In both cases, those receiving these employer-paid benefits enjoy a substantial benefit not shared by those not fortunate enough to participate. For instance, is it fair that many better-paid workers get thousands of dollars a year in untaxed health-care benefits, while other workers enjoy no such subsidy?
  • Just think of how complicated the argument is on “fair” ways to raise taxes. There’s an argument that for the deficit solution to be equitable, all citizens should contribute to it. Though some government spending benefits all citizens alike, such as national defense, national parks and the administration of justice, much spending disproportionately benefits lower earners, in the form of public education and transportation (which are supported by the federal government), unemployment insurance, food stamps, Medicare and Medicaid, etc. Thus the effect of the coming spending cuts will fall more heavily on the poor. Some argue that since they receive less in benefits and are therefore less likely to experience their loss, the wealthy should share the burden of reducing the deficit through increased tax payments.
  • Keeping taxes low in general
    • Reduce wastefulness
    • Laffer curve
    • Encourage growth

Notes on OSAM’s Factors from Scratch

http://osam.com/Commentary/factors-from-scratch

How does the ‘value’ factor generate excess return?

  • Method for decomposing the return into earnings growth and multiple expansion
    • Limit to large caps (conservative, since the factor is weakest here)
    • June 1964 – Oct 2017
    • Cheapest quintile of stocks rebalanced annually at the end of June
      • Rebalance averages about 38% turnover
      • Turnover requires technical adjustments to normalize the decomposition: “rebalancing growth” and “unrebalanced valuation change” [details in the paper]
  • Findings
    • Value stock fundamentals do deteriorate in the holding period. This is reasonably expected since these are the cheapest stocks. But the prices turn out to be overly pessimistic, as the multiple expands during the holding period more than enough to compensate for the decline in earnings (a toy decomposition illustrating this follows after this list).
    • The excess return becomes highly diluted after 1 year as the bulk of the market’s re-rating of the stock occurs within the first year. 1 year maximizes the “gain to time invested ratio” and also strikes a reasonable balance with the costs to rebalance
    • The market’s re-rating of the stock higher proves to be vindicated as fundamentals do stabilize. The value factor is capturing the ‘overreaction’ discount and the rebalance sells the re-rated names into the market’s stabilizing bid.
    • Value is difficult or impossible to time since it only correlates reliably with future returns at market extremes (ie dot com era).
  • ‘Value’ in recent context
    • Value has underperformed since the financial crisis although the underperformance is not unprecedented
      • Value is even cheaper today on a relative basis compared to the overall market, but both value and the market are about 50% more expensive than historical averages
      • Value is not currently stretched despite underperformance
      • Value may be underperforming because cheap stocks’ realized fundamentals turned out to be even worse than the underperformance their prices implied.
      • Likely that this is bad luck as opposed to the market having become better at handicapping future performance.
      • From Asness: ‘In stock selection, value is still not super cheap (i.e., super-cheap would be if the cheap stocks were way cheaper versus the expensive ones than normal). It would be fair to wonder why not, especially given the poor long-term value returns. Well, with any strategy, you can lose because either prices or fundamentals move against you. Unfortunately, more of this current drawdown has been about fundamentals. Value, at least using the behavioral explanations, is a bet that prices over-extrapolate current prospects. The better companies deserve to be priced higher versus fundamentals, but even so, they’re priced too high (and vice versa). However, sometimes prices are correctly reflecting this information, and sometimes they are actually underreacting to it (meaning what looks expensive is actually an ex ante good deal). Prices may over-extrapolate on average, that’s why value works long-term, but not all of the time. Value wins more than it loses, but when price differentials underdo it (meaning, unlike most of the time, cheap companies aren’t actually cheap enough versus the expensive ones) is a time that value fails. Importantly, we find no signal from this analysis for timing value going forward. Value is not predictably bad or good following periods where fundamentals move against it.’
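As a rough illustration of the decomposition idea (my own toy numbers and a simplified identity, not OSAM's exact "rebalancing growth" adjustments described in the paper), a holding-period return factors into an earnings-growth piece and a multiple-change piece:

```python
def decompose_return(pe_start, pe_end, eps_start, eps_end, dividends=0.0):
    """Split a holding-period return into earnings growth and P/E (multiple) change.

    Since price = P/E * EPS, the price return factors cleanly into the two pieces.
    """
    price_start = pe_start * eps_start
    price_end = pe_end * eps_end
    total = (price_end + dividends) / price_start - 1
    earnings_growth = eps_end / eps_start - 1
    multiple_change = pe_end / pe_start - 1
    return total, earnings_growth, multiple_change

# Hypothetical cheap stock: earnings slip 5% over the year, but the market re-rates it from 8x to 10x.
total, growth, rerating = decompose_return(pe_start=8, pe_end=10, eps_start=5.00, eps_end=4.75)
print(f"total {total:+.1%} = earnings {growth:+.1%} combined with multiple {rerating:+.1%}")
# total +18.8%, i.e., (1 - 0.05) * (1 + 0.25) - 1: the re-rating more than offsets the earnings decline
```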

How does the ‘momentum’ factor generate excess return?

  • Momentum factor constructs an equal-weighted basket of the top quintile of names based on prior 6-month returns
  • Findings
    • Recent returns are a better predictor of earnings growth than simple expensiveness
    • Decomposing returns, we find that the resultant earnings growth exceeds the size of the multiple contraction
    • Unlike ‘value’, which leads to excess returns for many years (albeit at a declining rate after year 1), the momentum strategy is mean-reverting. The excess return actually overshoots in year one and subsequent years show underperformance.

Combining ‘value’ and ‘momentum’

  • Value stocks converge to fair value after an initial overreaction leaves them overly offered; momentum stocks diverge above fair value at the tail end of the holding period as shares become overly bid.
    • This makes them complementary timing-wise over the 1 year holding period
    • They work best in opposite market environments

Digging further into factors

https://www.osam.com/Commentary/alpha-within-factors

The above describes how, in general, these factors work (distilled to their essence: on average you are betting against overdone price declines in companies facing headwinds or trouble). [The strategy is innately convergent and supplies liquidity.] However, when digging into this general dynamic further, OSAM finds lots of dispersion under the hood. Since that is the case, it makes sense to look for what differentiates the names which are favorably re-priced vs those which continue to underperform their price outlook.

To illustrate they show AAPL vs IBM from 2014-2018. Both stocks were in the cheapest quintile of P/E in 2014.

IBM’s earnings declined, AAPL’s grew, and the stocks were predictably punished and rewarded respectively. They validate that this is not an anomaly by looking at names historically that are priced similarly and then looking at their performance as a function of earnings growth over the next year. The names with faster-growing earnings outperform those with slower-growing or declining earnings, and the effect is amplified by the degree of growth (the fastest-growing names perform better than merely faster-growing ones on average, etc.). The paper’s appendix goes further by decomposing the stocks’ returns into contributions from multiple expansion and earnings growth.

These findings unsurprisingly apply to the stock market in general — companies whose earnings grow faster have share prices that grow faster. However, they find this dynamic to be much stronger in cheap (aka value) stocks, meaning the rewards for being able to predict earnings growth are higher in the value arena.

This chart shows the ‘excess return vs historical average’ binned by rank of earnings growth. While the names with fastest growing earnings are the relative best performers, we can see how much volatility there is in the entire value factor with periods like the most recent 5 years and periods in the late 1980s being notably poor for the factor.

Zooming in:

How to capture this return with info available at the time?

“Is it possible to reliably identify the top-growing stocks in the value factor using presently-available information? The answer is surely no, especially if presently-available information is limited to price and financial statement data. The forces that determine future earnings outcomes in businesses arise out of complex, idiosyncratic chains of causality that are not fully captured in that data.”

A more reasonable and still very valuable goal is to “tilt” exposure towards the names which are more likely to be indicative of real value vs the ‘value traps’ which bring the value factor’s average down.

  1. Minimize selection bias by using a composite of measures to identify value
    • For example, P/E will be understated if a company takes a one-time gain (for example if it sold a balance sheet item for much higher than its accounting value). Measures of value are vulnerable to any accounting variable which is not reflective of the ongoing business. By using a composite of measures, the risk of a single accounting aberration having undue influence is mitigated.
  2. Addition by subtraction by removing the value-traps
    • Momentum: a measure of trailing total return; higher is better.
    • Growth: a measure of trailing change in earnings; higher is better.
    • Earnings Quality: a measure of accruals; lower is better.
    • Financial Strength: a measure of leverage; lower is better.

Because low scores on these measures have a disproportionately large impact, they choose to cut off, say, the bottom 10% as the best trade-off between the desire to reliably avoid the worst outcomes and the desire to maintain a large enough universe of names and diversification. Asymmetrical filter: scoring poorly on these measures is a better predictor of poor performance than good scores are predictors of positive performance.

  3. Create an equal-weighted portfolio of the remaining top half of names: the “value leaders portfolio” (a toy sketch of this screen follows below)
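A toy sketch of that three-step screen (my own simplification: the column names, composite, and cutoffs below are illustrative assumptions, not OSAM's actual factor definitions), assuming a pandas DataFrame with one row per stock:

```python
import pandas as pd

def value_leaders(df: pd.DataFrame, trap_cutoff=0.10) -> pd.DataFrame:
    """Composite value, drop likely value traps, equal-weight the remaining top half.

    Expected columns (illustrative): ep, bp, fcf_yield (higher = cheaper),
    momentum, growth (higher is better), accruals, leverage (lower is better).
    """
    # 1. Composite value score: average percentile rank across several cheapness measures,
    #    so a single distorted accounting item cannot dominate the ranking.
    value_score = df[["ep", "bp", "fcf_yield"]].rank(pct=True).mean(axis=1)
    cheap = df[value_score >= 0.80]  # cheapest quintile

    # 2. Addition by subtraction: drop names in the worst decile of each trap signal.
    keep = (
        (cheap["momentum"].rank(pct=True) > trap_cutoff)
        & (cheap["growth"].rank(pct=True) > trap_cutoff)
        & (cheap["accruals"].rank(pct=True) < 1 - trap_cutoff)  # high accruals are a red flag
        & (cheap["leverage"].rank(pct=True) < 1 - trap_cutoff)  # high leverage is a red flag
    )
    survivors = cheap[keep]

    # 3. Equal-weight the top half of survivors by composite value: the "value leaders".
    leaders = survivors[value_score[survivors.index].rank(pct=True) >= 0.50].copy()
    leaders["weight"] = 1.0 / len(leaders)
    return leaders
```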

Summary

The results as the portfolio becomes tilted to higher quality value names:

When the process above is used to filter value-traps and we further narrow the universe to the ‘value leaders’, we find that our equally weighted portfolio had much higher exposure to the faster-growing names than a strategy ranked according to simple cheapness (ie “Value: Top Quintile”)

This is the excess return to the traditional value factor in different historical periods:

This table shows how the tilt to the top quintile exceeded 20% in every period

This table shows the returns to strategies decomposed to multiple expansion (ie the point spread re-rating) vs earnings growth

The leaders strategy improves the generic value strategy by eliminating the names which drag on EPS growth (at the lesser expense of having less pronounced average multiple expansion)

The outperformance of the value leaders strategy is notable for three reasons:

  • First, it requires only a modest amount of intervention. The percentage of original value stocks retained in the final strategy–38%–is relatively large. Moreover, the strategy is rebalanced annually, rather than quarterly or monthly. These characteristics suggest that the strategy is able to accomplish more with less.
  • Second, it’s occurring entirely in the large cap space, a space in which factor signals are comparatively weak and, according to some, non-existent.
  • Third, it’s associated with a significant shift in allocation towards the value factor’s top growth bins, a shift that we know is efficacious, given the extreme levels of outperformance produced by stocks in those bins.”

OSAM notes that “In practice, we therefore use methods that are more focused and refined. We also take advantage of the benefits of concentration and size: factor investing is more powerful when applied in a concentrated manner and when used outside of the large cap space.”

Notes on Philosophical Economics: Predicting equity returns from supply/demand not valuation

http://www.philosophicaleconomics.com/2013/12/the-single-greatest-predictor-of-future-stock-market-returns/

Premise

If there is too much supply of a given asset relative to the amount that investors want to hold in their portfolios, then the market price of the asset will fall, and therefore the supply will fall. If there is too little supply of a given asset relative to the amount that investors want to hold in their portfolios, then the market price will rise, and therefore the supply will rise. Obviously, since the market price of cash is always unity, $1 for $1, its supply can only change in relative terms, relative to the supply of other assets.

Aggregate investor allocation to equities is the best predictor of future returns. This equals:

Market Value of All Stocks / (Market Value of All Stocks + Total Liabilities of All Real Economic Borrowers)
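As a sketch, the ratio itself is trivial once the two aggregates are in hand (the inputs below are placeholder numbers; the post describes the actual Fed flow-of-funds series it uses):

```python
def investor_equity_allocation(equity_market_value: float, real_borrower_liabilities: float) -> float:
    """Aggregate investor allocation to equities, per the post's definition."""
    return equity_market_value / (equity_market_value + real_borrower_liabilities)

# Illustrative numbers only (say, $ trillions): a reading of ~0.43 would mean investors in aggregate
# hold ~43% of their financial assets in stocks, the variable the post correlates with 10-year returns.
print(f"{investor_equity_allocation(30.0, 40.0):.2%}")  # 42.86%
```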

A description of various economic data used to calculate the total liabilities

His constructed chart

Assumptions

  • The rest of the world’s holdings of domestic assets cancel out US investors’ holdings of foreign assets.
  • The supply of cash and bonds that investors in an economy must hold perpetually increases with the economy’s growth. The cash and bonds in investor portfolios are literally “made from” the liabilities that real economic borrowers take on to fund investment–the fuel of growth. Chart shows how this grows at 5-15% per year.
  • For investors’ allocation to equities to stay constant as a proportion of total assets while the supply of cash and bonds grows, equity prices must rise commensurately or new share issuance must fill the gap. Equity issuance has actually been declining since the 1980s. Thus stock prices must levitate if the investor preference for equities is unchanged while the economy grows (which can only happen via cash and debt growth)

Price is a supremely important determinant of return!

Price balances supply/demand of allocators and “there’s absolutely nothing that says that this process has to equilibrate at any specific valuation. History confirms that it can equilibrate at a wide range of different valuations. For perspective, the average value of the P/E ratio for the U.S. stock market going back to 1871 is 15.50. But the standard deviation of that average is a whopping 8.4, more than 50% of the mean. One standard deviation in each direction is worth 243% in total return, or 13% per year over 10 years.”

There is no explicit link which mandates price and value must be sensibly related which highlights the risk of owning equities.

“Consider the classic buy-and-hold allocation recommendation: 60% to stocks, 40% to bonds (or cash). What rule says that there has to be a sufficient supply of equity, at a “fair” or “reasonable” valuation, for everyone to be able to allocate their portfolios in this ratio? There is no rule.”

Markets contain both ‘mechanical’ and ‘active’ allocators. Active allocators vary allocations based on perceived risk and expected returns, whereas mechanical allocators are systematic investors who simply allocate on a pre-defined or regular basis, typically without regard to price. They are a minority but significant part of the market.

Decomposing drivers of return:

Mostly price change, not dividends. The price change is a function of multiple changes and earnings changes.

Return = Change in price + dividend return

…but decomposing the price return:

Change in price = Price Return from Change in Aggregate Investor Allocation to Stocks + Price Return from Increase in Cash-Bond Supply (Realized if Aggregate Investor Allocation to Stocks Were to Stay Constant)

So the mechanism of return conceptually is reframed:

“In the previous way of thinking, the earnings grow normally as the economy grows. If the multiple stays the same, the price has to rise–this price rise produces a return. When the multiple increases alongside the process, the return is boosted. When it decreases, the return is attenuated. The multiple is said to be mean-reverting, and therefore when you buy at a low multiple, you tend to get higher returns (because of the boost of subsequent multiple expansion), and when you buy at a high multiple, you tend to get lower returns (because of the drag of subsequent multiple contractions).

In this new way of thinking, the supply of cash and bonds grows normally as the economy grows. If the preferred allocation to stocks stays the same, the price has to rise (that is the only way for the supply of stocks to keep up with the rising supply of cash and bonds–recall that the corporate sector is not issuing sufficient new shares of equity to help out). That price rise produces a return. When the preferred allocation to equities increases alongside this process, it boosts the return (price has to rise to keep the supply equal to the rising portfolio demand). When the preferred allocation to equities falls, it subtracts from the return (price has to fall to keep the supply equal to the falling portfolio demand)”

“If you buy in periods where the investor allocation to equities is high, you will get the dividend return plus the price return necessary to keep the portfolio equity allocation constant in the presence of a rising supply of cash and bonds, but then you will have to subtract the negative price return that will occur when equity allocation preferences fall back to more normal levels. This is what happened to investors in the 2001-2003 bear market. This way of thinking about stock market returns accounts for relevant supply-demand dynamics that pure valuation models leave out. That may be one of the reasons why it better correlates with actual historical outcomes than pure valuation models. ”
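A toy numeric sketch of that reframing (my own made-up inputs; note the two components compound multiplicatively here, whereas the quoted decomposition writes them as a sum):

```python
def implied_price_return(alloc_start, alloc_end, cash_bond_growth):
    """Price return implied by an allocation shift plus growth in the cash/bond supply.

    Uses equities = alloc / (1 - alloc) * (cash + bonds), ignoring net new share issuance.
    """
    supply_component = 1 + cash_bond_growth
    allocation_component = (alloc_end / (1 - alloc_end)) / (alloc_start / (1 - alloc_start))
    return supply_component * allocation_component - 1, supply_component - 1, allocation_component - 1

# Cash/bond supply grows 7% while the aggregate equity allocation drifts from 40% to 42%:
total, from_supply, from_alloc = implied_price_return(0.40, 0.42, 0.07)
print(f"implied price return {total:+.1%} (supply {from_supply:+.1%}, allocation shift {from_alloc:+.1%})")
# implied price return +16.2% (supply +7.0%, allocation shift +8.6%)
```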

How can this explain the earningless bull market of the 1980s?

It can explain, for example, the earningless bull market of the 1980s. Unbeknownst to many, earnings were not rising in the 1980s bull market. They actually fell slightly over the period–which is unusual. But prices didn’t care–they skyrocketed. The P/E ratio ended up rising well above 20, despite interest rates near 10%–a then unprecedented valuation disparity. Valuation purists can’t explain this move–they have to postulate that the “common sense” rules of valuation were temporarily suspended in favor of investor craziness.

But if we look at what investor allocations were back then, we will see that investors were already dramatically underinvested in equities. If prices hadn’t risen, if investors had instead respected the rules of “valuation” and refrained from jacking up the P/E multiple, the extreme underallocation to equities would have had to have grown even more extreme. It would have had to have fallen from a record low of 25% to an absurd 13% (see blue line in the chart below, which shows how the allocation would have evolved if the P/E multiple had not risen). Obviously, investors were not about to cut their equity allocations in half in the middle of a healthy, vibrant, inflation-free economic expansion–a period when things were clearly on the up. And so the multiple exploded.

This framework has a much higher correlation with future returns than any of the popular valuation based models

This table shows the R-squared stats for different methods

Excerpts from “Breaking Smart”

https://breakingsmart.com/en/season-1/

Original article by Venkat Rao.

Summary

An in-depth exploration of Marc Andreessen’s “software is eating the world” thesis. Peak centralization is placed around 1974; since then, agile, software-style tinkering and thinking has been winning the way forward. The essays discuss this engine of change and the discontented establishment, which pines for an entrenched pastoral way of life and projects the future as a higher-technology version of things as they are, under-appreciating and resisting the fact that the evolution of the technology, done optimally, opens up possibilities that were previously unconsidered.


Henry Ford: “My customers would have asked for faster horses”

Steve Jobs: “They’ll learn” – his response to whether the masses will adopt touchscreens and abandon keyboards


There is a certain inevitability to technological evolution, and a certain naivete to certain patterns of resistance. Technological evolution is path-dependent in the short term, but not in the long term.


Pastoral visions are a direct result of Promethean periods of rapid evolution. Pastoral utopias are where the victors of particular historical finite games hope to secure their gains and rest indefinitely on their laurels. When pastoral fantasies start to collapse under the weight of their own internal contradictions, long-repressed energies are unleashed. The result is a societal condition marked by widespread lifestyle experimentation based on previously repressed values.

Power is zero-sum because it involves control over other people. Innovation can in fact be defined as ongoing moral progress achieved by driving directly towards the regimes of greatest moral ambiguity, where our collective demons lurk. These are also the regimes where technology finds its maximal expressions, and it is no accident that the two coincide. Genuine progress feels like onrushing obscenity and profanity, and also requires new technological capabilities to drive it. The subjective psychological feel of this evolutionary process is what Marshall McLuhan described in terms of a rear-view mirror effect: “we see the world through a rear-view mirror. We march backwards into the future.” Today, our collective rear-view mirror is packed with seeming profanity, in the form of multiple paths of descent into hell. Among the major ones that occupy our minds are the following:

  1. Technological Unemployment: The debate around technological unemployment and the concern that “this time it is different” with AI and robots “eating all the jobs.”
  2. Inequality: The rising concern around persistent inequality and the fear that software, unlike previous technologies, does not offer much opportunity outside of an emerging intellectual elite of programmers and financiers.
  3. “Real” Problems: The idea that “real” problems such as climate change, collapsing biodiversity, healthcare, water scarcity and energy security are being neglected, while talent and energy are being frivolously expended on “trivial” photo-sharing apps.
  4. “Real” Innovation: The idea that “real” innovation in areas such as space exploration, flying cars and jetpacks has stagnated.
  5. National Competitiveness: The idea that software eating the world threatens national competitiveness based on manufacturing prowess and student performance on standardized tests.
  6. Cultural Decline: The idea that social networks, and seemingly “low-quality” new media and online education are destroying intellectual culture.
  7. Cybersecurity: The concern that vast new powers of repression are being gained by authoritarian forces, threatening freedom everywhere: Surveillance and cyberwarfare technologies (the latter ranging from worms like Stuxnet created by intelligence agencies, to drone strikes) beyond the reach of average citizens.
  8. The End of the Internet: The concern that new developments due to commercial interests pose a deep and existential threat to the freedoms and possibilities that we have come to associate with the Internet.

The basic answer to the non-question of “inequality, surveillance and everything” is this: the best way through it is through it. It is an answer similar in spirit to the stoic principle that “the obstacle is the way” and the Finnish concept of sisu: meeting adversity head-on by cultivating a capacity for managing stress, rather than figuring out schemes to get around it. The mechanisms we need for working through are the generative, pluralist ones we have been refining over the last century: liberal democracy, innovation, entrepreneurship, functional markets. It is crucial to limit ourselves and avoid the temptation of reactionary paths suggested by utopian or dystopian visions, especially those that appear in futurist guises. The idea that forward is backward and sacred is profane will never feel natural or intuitive.

Authoritarian goal-driven problem-solving follows naturally from the politician’s syllogism: we must do something; this is something; we must do this. Such goals usually follow from gaps between reality and utopian visions. Solutions are driven by the deterministic form-follows-function principle, which emerged with authoritarian high-modernism in the early twentieth century. At its simplest, the process looks roughly like this:

  1. Problem selection: Choose a clear and important problem
  2. Resourcing: Capture resources by promising to solve it
  3. Solution: Solve the problem within promised constraints

This model is so familiar that it seems tautologically equivalent to “problem solving”. It is hard to see how problem-solving could work any other way. This model is also an authoritarian territorial claim in disguise. A problem scope defines a boundary of claimed authority. Acquiring resources means engaging in zero-sum competition to bring them into your boundary, as captive resources. Solving the problem generally means achieving promised effects within the boundary without regard to what happens outside. This means that unpleasant unintended consequences — what economists call social costs — are typically ignored, especially those which impact the least powerful.

Choosing a problem based on “importance” means uncritically accepting pastoral problem frames and priorities. Constraining the solution with an alluring “vision” of success means limiting creative possibilities for those who come later. Innovation is severely limited: You cannot act on unexpected ideas that solve different problems with the given resources, let alone pursue the direction of maximal interestingness indefinitely. This means unseen opportunity costs can be higher than visible benefits. You also cannot easily pursue solutions that require different (and possibly much cheaper) resources than the ones you competed for. This is not a process that tolerates uncertainty or ambiguity well, let alone thrives on it. Even positive uncertainty becomes a problem: an unexpected budget surplus must be hurriedly used up, often in wasteful ways, otherwise the budget might shrink next year. Unexpected new information and ideas, especially from novel perspectives — the fuel of innovation — are by definition a negative, to be dealt with like unwanted interruptions.


Contrast this to the networked approach. It does not begin with utopian goals or resources captured through specific promises or threats. Instead it begins with open-ended, pragmatic tinkering that thrives on the unexpected. The process is not even recognizable as a problem-solving mechanism at first glance:

  1. Immersion in relevant streams of ideas, people and free capabilities
  2. Experimentation to uncover new possibilities through trial and error
  3. Leverage to double down on whatever works unexpectedly well

Tinkering can look like play or procrastination but is actually the primary way to stay sensitized to developing opportunities or threats. The diversity of individual perspectives couples with the law of large numbers (the statistical idea that rare events can become highly probable if there are enough trials going on): if an increasing number of highly diverse individuals operate this way, the chances of any given problem getting solved via a serendipitous new idea slowly rise. This is the luck of networks. Serendipitous solutions are not just cheaper than goal-directed ones. They are typically more creative and elegant, and require much less conflict. Sometimes they are so creative, the fact that they even solve a particular problem becomes hard to recognize. For example, telecommuting and video-conferencing do more to “solve” the problem of fossil-fuel dependence than many alternative energy technologies, but are usually understood as technologies for flex-work rather than energy savings.

Ideas born of tinkering are not targeted solutions aimed at specific problems, such as “climate change” or “save the middle class,” so they can be applied more broadly. As a result, not only do current problems get solved in unexpected ways, but new value is created through surplus and spillover. The clearest early sign of such serendipity at work is unexpectedly rapid growth in the adoption of a new capability. This indicates that it is being used in many unanticipated ways, solving both seen and unseen problems, by both design and “luck”. From the inside, serendipitous problem solving feels like the most natural thing in the world. From the perspective of goal-driven problem solvers, however, it can look indistinguishable from waste and immoral priorities.


Organizational structures follow from which of the above strategies they were born from. Where a goal-driven strategy succeeds, the temporary scope of the original problem hardens into an enduring and policed organizational boundary. Temporary and specific claims on societal resources transform into indefinite and general captive property rights for the victors of specific political, cultural or military wars.


We form extractive institutions designed not just to solve a specific problem and secure the gains, but to continue extracting wealth indefinitely. Whatever the broader environmental conditions, ideally wealth, harmony and order accumulate inside the victor’s boundaries, while waste, social costs, and strife accumulate outside, to be dealt with by the losers of resource conflicts.


Where extractive institutions start to form, it becomes progressively harder to solve future problems in goal-driven ways. Each new problem-solving effort has more entrenched boundaries to deal with. Solving new problems usually means taking on increasingly expensive conflict to redraw boundaries as a first step. In the developed world, energy, healthcare and education are examples of sectors where problem-solving has slowed to a crawl due to a maze of regulatory and other boundaries. The result has been escalating costs and declining innovation — what economist William Baumol has labeled the “cost disease.”


The cost disease is an example of how, in their terminal state, goal-driven problem solving cultures exhaust themselves. Without open-ended innovation, the growing complexity of boundary redrawing makes most problems seem impossible.  This is the zero-sum logic of mercantile economic organization, and dates to the sixteenth century. In fact, because some value is lost through conflict, in the absence of open-ended innovation, it can be worse than zero-sum: what decision theorists call negative-sum (the ultimate example of which is of course war). By the early twentieth century, mercantilist economic logic had led to the world being completely carved up in terms of inflexible land, water, air, mineral and — perhaps most relevant today — spectrum rights. Rights that could not be freely traded or renegotiated in light of changing circumstances. This is a grim reality we have a tendency to romanticize. As the etymology of words like organization and corporation suggests, we tend to view our social containers through anthropomorphic metaphors.

We extend metaphoric and legal fictions of identity, personality, birth and death far beyond the point of diminishing marginal utility. We assume the “life” of these entities to be self-evidently worth extending into immortality. We even mourn them when they do occasionally enter irreversible decline. Companies like Kodak and Radio Shack, for example, evoke such strong positive memories for many Americans that their decline seems truly tragic to many, despite the obvious irrelevance of the business models that originally fueled their rise. We assume that the fates of actual living humans are irreversibly tied to the fates of the artificial organisms they inhabit.

The dark side of such anthropomorphic romanticization is what we might call geographic dualism: a stable planet-wide separation of local utopian zones secured for a privileged few and increasingly dystopian zones for many, maintained through policed boundaries. The greater the degree of geographic dualism, the clearer the divides between slums and high-rises, home owners and home renters, developing and developed nations, wrong and right sides of the tracks, regions with landfills and regions with rent-controlled housing. And perhaps the most glaring divide: secure jobs in regulated sectors with guaranteed lifelong benefits for some, at the cost of needlessly heightened precarity in a rapidly changing world for others. In a changing environment, organizational stability valued for its own sake becomes a kind of immorality. Seeking such stability means allowing the winners of historic conflicts to enjoy the steady, fixed benefits of stability by imposing increasing adaptation costs on the losers.

The antidote to extractive institutions is enabled by the idea that speech and people are free from the narrow control of authorities and owners. This allows the flourishing of pluralist institutions which are open, inclusive and capable of creating wealth in non-zero-sum ways. If the three most desirable things in a world defined by organizations are location, location and location, in the networked world they are connections, connections and connections. Networks are comprised of streams such as the streets of a city, the Silk Road from Europe to Asia, and cafes: permissionless access to others’ ideas. Digital streams are the newest iteration; where geography once dominated the streams, the relationship is now inverted, with streams dominating geography. In the past you needed proximity to Silicon Valley, but now access can be achieved through GitHub. What makes streams ideal contexts for open-ended innovation through tinkering is that they constantly present unrelated people, ideas and resources in unexpected juxtapositions.

This happens because streams emerge as the intersection of multiple networks.  As a result of such unexpected juxtapositions, you might “solve” problems you didn’t realize existed and do things that nobody realized were worth doing. For example, seeing a particular college friend and a particular coworker in the same stream might suggest a possibility for a high-value introduction: a small act of social bricolage. Because you are seen by many others from different perspectives, you might find people solving problems for you without any effort on your part. A common experience on Twitter, for example, is a Twitter-only friend tweeting an obscure but important news item, which you might otherwise have missed, just for your benefit.


By contrast…when you are sitting in a traditional office, working with a laptop configured exclusively for work use by an IT department, you receive updates only from one context, and can only view them against the backdrop of a single, exclusive and totalizing context. Despite the modernity of the tools deployed, the architecture of information is not very different from the paperware world. If information from other contexts leaks in, it is generally treated as a containment breach: a cause for disciplinary action in the most old-fashioned businesses. People you meet have pre-determined relationships with you, as defined by the organization chart. If you relate to a coworker in more than one way (as both a team member and a tennis buddy), that weakens the authority of the organization. The same is true of resources and ideas. Every resource is committed to a specific “official” function, and every idea is viewed from a fixed default perspective and has a fixed “official” interpretation: the organization’s “party line” or “policy.”

This has a radical consequence. When organizations work well and there are no streams, we view reality in what behavioral psychologists call functionally fixed ways: people, ideas and things have fixed, single meanings. This makes them less capable of solving new problems in creative ways. In a dystopian stream-free world, the most valuable places are the innermost sanctums: these are typically the oldest organizations, most insulated from new information. But they are also the locus of the most wealth, and offer the most freedom for occupants. In China, for instance, the innermost recesses of the Communist Party are still the best place to be. In a Fortune 500 company, the best place to be is still the senior executive floor. When streams work well on the other hand, reality becomes increasingly intertwingled (a portmanteau of intertwined and tangled), as Ted Nelson evocatively labeled the phenomenon.

People, ideas and things can have multiple, fluid meanings depending on what else appears in juxtaposition with them. Creative possibilities rapidly multiply, with every new network feeding into the stream. The most interesting place to be is usually the very edge, rather than the innermost sanctums. In the United States, being a young and talented person in Silicon Valley can be more valuable and interesting than being a senior staffer in the White House. Being the founder of the fastest growing startup may offer more actual leverage than being President of the United States.We instinctively understand the difference between the two kinds of context. In an organization, if conflicting realities leak in, we view them as distractions or interruptions, and react by trying to seal them out better. In a stream, if things get too homogeneous and non-pluralistic, we complain that things are getting boring, predictable, and turning into an echo chamber. We react by trying to open things up, so that more unexpected things can happen.