The Case For Digital Moats

Over the last few decades, an interesting phenomenon has been puzzling top economists and analysts – the profit margins of a few large and successful companies seem to be defying gravity, soaring above the rest. One would expect the powerful forces of mean reversion and competition to have eliminated this anomaly within a decade or so; after all, that is how capitalism is supposed to work. Yet, year after year of super-normal margins for the elite few demands a new explanation of what might be going on in the economy.

The excellent Philosophical Economics blog comments on this phenomenon in an intriguing post, Profit Margins in a “Winner Take All” Economy. It links to an interesting post by Patrick O’Shaughnessy that presents the following graph, showing all US stocks with market capitalizations above $200 million sorted into five quintiles by their initial profit margins. A clear trend is visible over more than fifty years – from 1963 to 2015 – the top quintile of companies remains at the top; the bottom quintile remains at the bottom. Moreover, the margins of already high-margin businesses are expanding, whereas the margins of low-margin businesses are falling even further.

[Figure: profit margins by initial-margin quintile, 1963–2015. Source: http://investorfieldguide.com/the-rich-are-getting-richer]

Furthermore, O’Shaughnessy shows that this profit margin gap has been widening over the last two decades, most dramatically in technology and finance (see the following graph), but also in the health care and consumer staples sectors.

[Figure: widening profit margin gap by sector. Source: http://investorfieldguide.com/the-rich-are-getting-richer]

Coming at this anomaly from a different perspective, a recent labor productivity study conducted by the OECD found that the productivity of the top manufacturers – those at the “global productivity frontier” – grew at double the speed of the average manufacturing firm over the decade after 2000. The gap was even more extreme in services: firms on the productivity frontier grew productivity at a 5% rate, 16 times the 0.3% average rate. And, once again, the gap between the most productive firms and the rest has been widening over time (see the graph below).

[Figure: productivity growth at frontier firms vs. the rest. Source: https://hbr.org/2015/08/productivity-is-soaring-at-top-firms-and-sluggish-everywhere-else]

So what kind of companies might be occupying the productivity frontier?

I think it is highly likely that the gap between those on the frontier and the rest comes down to their strategic positioning vis-à-vis the ongoing digital disruption. In a previous post I have written about the investing implications of this phenomenon, dubbed “software is eating the world” by venture capitalist Marc Andreessen. Companies that are doing the “eating” tend to have increasing profit margins. Moreover, the very nature of digital technology tends to create a “winner-take-all” dynamic, where moats from network effects, patents, regulation, and scale economies allow a concentrated group at the top to earn increasingly higher profit margins. We have already pointed out that the profit margins of the various quintiles moved roughly in parallel until the late 1990s, after which the profit margins of the top quintile proceeded to rise rapidly. This time-frame aligns nicely with the rise of the internet-based economy.

Conversely, it is notable that companies in the older sectors of the economy – such as energy and materials – do not show this rising-margins effect. A recent study by the chairman of the president’s Council of Economic Advisers also found that the biggest gains in profits have been among technology and drug manufacturers; their profits come not from tangible assets like factories and land, but from intangible assets such as software, standards, patents, and network effects. This also might explain the otherwise puzzling absence of strong capital expenditures even as the economy continues its steady recovery from the great recession – the modern digital economy simply does not require as much spending on heavy equipment as the industrial economy did in recoveries from previous recessions.

The following diagram is my attempt to depict the ongoing battle of companies in the age of digital disruption. The companies in the inner-most ring are former giants that are already dead – killed by the software-advantaged company with the arrow pointing at them. In their heyday we bought physical books at Borders, music records at Tower Records, rented movie DVDs at Blockbuster, and bought photographic film from Kodak. These were the large companies of their time that failed to anticipate and adapt to the digital tsunami that has led us to read e-books, stream music and videos, and take digital pictures with our smartphones. The middle ring depicts companies that are not yet dead but whose profit margins are under severe pressure from their software-armed foes. Again, these are giants of our times, such as the venerable New York Times (and all physical newspapers), Time (and all other magazines), the big-four TV networks, medallion-based taxi companies, hotels, travel agencies, and physical retailers like Macy’s and Walmart.

[Figure: concentric rings of digital disruption – dead incumbents (inner ring), pressured incumbents (middle ring), and digital disruptors (outer ring)]

I was particularly struck by the fact that, even as they were losing to their digital competition, the companies in the inner ring (Borders, Blockbuster, etc.) actually looked quite attractive on traditional value investing metrics like the Price/Earnings ratio – right up to the very end!

Summarizing:

  • There is an elite set of global companies with high and increasing labor productivity as well as high and increasing profit margins.
  • It is vitally important for a company to be strategically aligned with the powerful forces of digital disruption to avoid being “eaten”; ideally, it should be the one doing the eating.
  • Value-investing grandmasters like Buffett and Munger have virtually proven the vital importance of investing in companies with long-term-sustainable competitive advantage – moats – since such companies can maintain and compound their earnings over long periods of time.

“Digital moats” are precisely those companies that sit at the intersection of these concepts: they are at the productivity frontier and moated in a way that benefits from the disruption caused by powerful digital trends. Examples are shown in the outer ring of the previous diagram (Amazon, Google, Facebook, Uber, Airbnb, Pandora, Netflix, etc.).

Market capitalization indexes such as the S&P 500 must, by their very definition, contain all companies that qualify by their capitalization – whether they are moated or not, and companies doing the disrupting as well as those being disrupted. I think one can get an edge over the market by concentrating the portfolio around the elite few – the crème-de-la-crème – of companies at the productivity frontier. Of course, a great company does not automatically equate to a great investment – valuation always matters, and a myriad of company-specific issues must be carefully taken into account before including a stock in the portfolio. Having said that, as more and more people from all over the globe get a smartphone connected to the cloud of digital services, I do think it reasonably likely that digital moats will continue outperforming the market over the next few years.


 

Full Disclosure: As of the time of writing this post, I am long many stocks mentioned in this post, including Amazon, Google, Facebook, Twitter. This post is not meant to be and should not be construed as investment advice of any sort. Investing is extremely difficult, the risk of permanent loss is high, and past results are meaningless in the future. 

The Outsiders: Seven Other CEOs With Buffett-class Returns

Buffett’s 50th Anniversary letter was a fascinating read, as expected. I have now read every single annual letter he has written, including those from his partnership days pre-Berkshire, and still find something new and valuable in each new letter! This time, Buffett’s (as well as Munger’s) comments on conglomerates — both their features and their bugs — clarified some thoughts that have been lingering in my mind ever since I wrote last year about the amazing returns generated by seven other CEOs, besides Buffett, with an unusual management style that is characterized by:

Highly decentralized portfolio of autonomous operating units combined with a CEO focus on the centralized allocation of capital.

The following is an updated version of my thinking on this topic.

Buffett, along with seven others, is profiled in an interesting book, The Outsiders by William Thorndike, about CEOs who have delivered the highest long-term compounded returns to their shareholders in the last half century or so.

Written with the help and advice of Charlie Munger, the book is the result of a research project that screened the records of thousands of CEOs and conducted over a thousand in-person interviews. The author then selects and describes eight “best of the best” – CEOs who compounded their companies’ stocks at truly astonishing rates when compared to the market over the same interval.

Not surprisingly, Buffett is present in the list, as is his protégée Katharine Graham, but many of the other names are not that well-known, even though they all delivered similarly fantastic returns:

  • Warren Buffett (Berkshire): 21.6% per year over 50 years (vs. 9.9% for S&P 500).
  • John Malone (TCI Cable): 30.3% per year over 25 years (vs. 14.3%).
  • Henry Singleton (Teledyne): 20.3% per year over 27 years (vs. 8.0%).
  • Tom Murphy (Capital Cities): 19.9% per year over 29 years (vs. 10.1%).
  • Katharine Graham (Washington Post): 22.3% per year over 22 years (vs. 7.4%).
  • Bill Anders (General Dynamics): 23.3% per year over 17 years (vs. 8.9%).
  • Bill Stiritz (Ralston Purina): 20.0% per year over 19 years (vs. 14.7%).
  • Dick Smith (General Cinema): 16.1% per year over 43 years (vs. 9.0%).

These results raise an obvious question: do these CEOs share some kind of common pattern that might explain their off-the-charts performance?
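Before digging in, it is worth pausing on what such compounding rates actually imply. Here is a quick illustrative calculation (my own sketch in Python; the rates and periods are the ones quoted in the list above, the rest is plain arithmetic):

```python
# Illustrative: the wealth multiple implied by a compound annual growth rate.
def wealth_multiple(cagr: float, years: int) -> float:
    """Terminal value of $1 compounded at `cagr` for `years` years."""
    return (1 + cagr) ** years

# (name, CEO's CAGR, years, index CAGR) -- figures quoted in the list above
ceos = [
    ("Buffett (Berkshire)",  0.216, 50, 0.099),
    ("Malone (TCI)",         0.303, 25, 0.143),
    ("Singleton (Teledyne)", 0.203, 27, 0.080),
]

for name, cagr, years, index_cagr in ceos:
    m, idx = wealth_multiple(cagr, years), wealth_multiple(index_cagr, years)
    print(f"{name}: $1 -> ${m:,.0f} vs ${idx:,.0f} for the index")
```

Buffett’s 21.6% over 50 years turns $1 into roughly $17,000, versus about $110 for the index; a roughly 2x difference in the annual rate compounds into more than a 100x difference in terminal wealth.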

Let us begin by zooming in on one of these CEOs, to get a flavor of their background and style.

Buffett has described Henry Singleton, the co-founder and former CEO of Teledyne, as having the “best operating and capital deployment record in American business.”

Yet Singleton did not have any formal background in business; he was, instead, an electrical engineer by training, with Bachelor’s, Master’s, and Doctoral degrees from MIT.

During Singleton’s tenure as CEO, Teledyne’s board members included Prof. Claude Shannon of MIT (considered the father of information theory, and one of the seminal figures of our digital age) and Arthur Rock, the legendary VC who funded Intel and Apple, among others!  Imagine being a fly on the wall during those board discussions!

It turns out that Singleton was not the only Outsider CEO with a technical background — four of the eight had technical backgrounds, including a nuclear engineer and a Ph.D. in Operations Research (John Malone, some of whose moves I have described in a previous post). In fact, only one of the other seven besides Buffett had the expected MBA. Most of these CEOs are also described in the book as being introverted in style, preferring to avoid the media limelight.

In view of their technical backgrounds, the most surprising pattern shared by the Outsider CEOs is that, like Buffett, they all chose to view their job primarily as that of a portfolio manager of their various operating units, focusing on capital allocation rather than actually running the operations of their company.

Of course, every CEO has to fulfill both essential functions – operating the company as well as allocating the capital generated from operations. It seems most CEOs are neither too interested in, nor very good at, the capital allocation process. Yet, since excess cash-flow has to be dealt with, one way or another, the majority of a company’s productive resources will soon enough be the result of a CEO’s capital allocation decisions. Poor allocations will, therefore,  soon lead to mediocre cash flows in the future, reducing the rate at which the company compounds the capital invested in it. Perhaps this is why the best performers are exactly those CEOs who excel at this vital but oft-neglected function.

Most Outsider CEOs had a trusted “right hand” as their operations head (e.g., a COO or President) to whom they completely delegated the operational aspects of their job.

This allowed the CEOs the time and energy to focus on what they did best, which was capital allocation. Many, like Buffett, chose to structure their company into independent, self-contained operating units, with only a bare-bones staff at the corporate level. They all preferred to push operating decisions down to the lowest, most local levels in their organizations. Perhaps this unusual pattern of decentralized operations and centralized capital allocation allowed these CEOs to more dispassionately view their operating assets as portfolio constituents, and spin off or sell the operating units as and when the opportunity presented itself, thus concentrating capital around their most efficient, highest-returning units.

All Outsider CEOs were unusually proactive in shrinking their company’s assets – people as well as capital – when this was the right thing to do, returning excess capital to their shareholders rather than expanding into mediocre businesses.

It is quite rare for a CEO to sell a division or business without pressure; most are “empire builders” who would much rather grow their employee base and revenues. However, since the various operating units of any company usually earn varying returns, the overall return can easily get dragged down by inefficient units as the company gets bigger. Thus it makes sense that only the most radically rational CEOs – those who (1) carefully identified their best operating units, (2) pared down to concentrate capital on these outperforming units, and (3) returned any excess capital left over – were able to compile the best returns over time.

Most of the Outsider CEOs seemed very comfortable with volatility of earnings, and did not bother to smooth them in any way.

Their cash-flows were often “lumpy” over the short term as a result of taking infrequent but bold capital actions. They all tended to focus on the long term growth of cash flow per share rather than managing short-term quarterly earnings expectations to please the analysts.

They were also willing to buy their stock back aggressively (as much as 90% in Singleton’s case!) when they thought it to be significantly undervalued.

There is a catch, however. Such buybacks can create tremendous returns only when the shares are indeed undervalued compared to the company’s intrinsic value. As Buffett observes in his latest letter: “Berkshire’s directors will only authorize repurchases at a price they believe to be well below intrinsic value. (In our view, that is an essential criterion for repurchases that is often ignored by other managements.)”

Notice that all “outsider” CEOs, including Buffett, are “insiders” as far as their own company’s prospects are concerned! And since Mr. Market is prone to irrational fits of optimism and pessimism, a valuation-sensitive CEO can take advantage by suitably buying his own company’s shares back when they are under-priced, and using the company’s over-priced shares to buy other businesses. The usual oscillation of stock prices around their company’s intrinsic value thus provides such CEOs the opportunity to increase their compounding rate much faster than their underlying operational growth.
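A toy example of this buyback arithmetic (all the numbers below are made up, purely for illustration):

```python
# Made-up numbers: repurchases below intrinsic value accrete value to the
# remaining shares; repurchases above it destroy value.
intrinsic_value = 10_000.0            # intrinsic value of the whole business ($)
shares = 100.0                        # shares outstanding
per_share = intrinsic_value / shares  # $100 of intrinsic value per share

for price in (50.0, 150.0):           # buy back 20 shares, under- vs over-priced
    bought = 20.0
    remaining_value = intrinsic_value - bought * price
    new_per_share = remaining_value / (shares - bought)
    print(f"repurchase at ${price:.0f}: value/share ${per_share:.0f} -> ${new_per_share:.2f}")
```

Buying back 20% of the shares at half of intrinsic value lifts each remaining share’s intrinsic value from $100 to $112.50; doing the same at 1.5x intrinsic value drops it to $87.50. The same mechanism, run in reverse, is why issuing over-priced shares to buy other businesses is accretive.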

Outsiders may well be those few CEOs who understand valuation as well as have the temperament required to actually benefit from the price pendulum.

Most Outsiders were quite comfortable with high levels of leverage from time to time.

They all enjoyed unusually strong operating cash-flows which enabled them to carry debt. This allowed them to avoid diluting their outstanding shares when they needed more capital (to buy a company or to invest in growth).

Even Buffett, who publicly eschews debt in its explicit form, benefits hugely from the implicit leverage (estimated by a recent study at 1.6x) provided by the negative-cost float generated by his insurance units.
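Here is a stylized sketch of that arithmetic (the 1.6x figure is from the study; the asset-return and funding-cost numbers are my own illustrative assumptions):

```python
# Stylized leverage arithmetic: return on equity at leverage L and funding cost c
#   r_equity = r_assets + (L - 1) * (r_assets - c)
def levered_return(r_assets: float, leverage: float, funding_cost: float) -> float:
    return r_assets + (leverage - 1) * (r_assets - funding_cost)

r = 0.10  # assumed unlevered asset return
print(round(levered_return(r, 1.6, funding_cost=0.03), 3))   # 0.142 with ordinary 3% debt
print(round(levered_return(r, 1.6, funding_cost=-0.02), 3))  # 0.172 with negative-cost float
```

With float that costs less than nothing, the same 1.6x of leverage adds noticeably more to returns than ordinary borrowing would.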

Debt leverage may well have boosted the returns of the Outsiders beyond their intrinsic operating rate of returns. However, the fact that they were able to do so for decades suggests that this may not have been as risky as it sounds (e.g., Buffett minimizes leverage risk by demanding an ultra-conservative underwriting discipline so that the debt is virtually “free”).

It is also possible, though, that these particular eight just got lucky, and that we are looking at the survivors.

Finally, many of the Outsider CEOs self-report being opportunistic as opposed to strategic.

I think this should be understood as a contrast between an adaptive style and a central-planning style. As Jim Barksdale, the former COO of FedEx and CEO of Netscape, has said: “In a fight between a bear and an alligator, it is the terrain which determines who wins.”

The aphorism neatly captures the Achilles heel of long-term strategic planning: the terrain changes with time, making any top-down plan likely to be suboptimal. It is worth remembering that Darwinian evolution is such a successful “designer” of species precisely because it is adaptive, without any grand strategy. The “best-fit” progeny amongst the many produced by an individual organism simply gets an edge in proliferating its genes. After a series of such opportunistic choices, each successful species occupies a finely-tuned niche in its ecology.

Similarly, in the context of an economy, an operating unit of a business may have to make many unexpected adaptations before it finds a successful niche (its “moat”). Such non-linearly contingent outcomes would be virtually impossible to arrive at by the usual corporate strategic planning process. Maybe the Outsider approach of viewing their various operating units as a portfolio allows them to more objectively select the better “fits” in a given economic terrain, trading away the others in order to concentrate capital around the more successful units. This opportunistic approach may well be superior at quickly finding and developing moated business niches, leading to a powerful chain of economic compounding.

Dr. Singleton, for example, a winner of the Putnam mathematics competition and a master-level chess player, was presumably as capable of being strategic as anyone on the planet; he nevertheless said: “I like to steer the boat each day rather than plan ahead way into the future … we’re subject to a tremendous number of outside influences and the vast majority of them cannot be predicted … So my idea is to stay flexible.”

He used this kind of “flexibility” to buy or sell more than 130 (!) companies for Teledyne, opportunistically using its high-priced stock. And later, when the market for such conglomerates collapsed, he turned around and massively bought back his own now under-priced stock — 90% of it! These valuation-sensitive capital allocation moves allowed him to compound his company’s stock at 20.3% per year over 27 years; each $1 invested in Teledyne turned into $160!

Singleton essentially pioneered this type of capital decision – such moves were highly unusual at the time, and still are for the most part. It probably required unconventional thinkers like Shannon and Rock on his board to back him up!

And this brings me to the lingering doubts I mentioned at the beginning: the history of conglomerates during the 1960s is notorious for its “ponzi” aspects. As George Soros noted in his book The Alchemy of Finance, the boom in conglomerates was followed by a massive bust.

Buffett discusses how the conglomerate ponzi scheme worked in his latest letter:

“The drill for conglomerate CEOs then was simple: By personality, promotion or dubious accounting – and often by all three – these managers drove a fledgling conglomerate’s stock to, say, 20 times earnings and then issued shares as fast as possible to acquire another business selling at ten-or-so times earnings. They immediately applied “pooling” accounting to the acquisition, which – with not a dime’s worth of change in the underlying businesses – automatically increased per-share earnings, and used the rise as proof of managerial genius. They next explained to investors that this sort of talent justified the maintenance, or even the enhancement, of the acquirer’s p/e multiple. And, finally, they promised to endlessly repeat this procedure and thereby create ever-increasing per-share earnings.”

He then explains how this kind of financial engineering came to its inglorious end:

“Since the per-share earnings gains of an expanding conglomerate came from exploiting p/e differences, its CEO had to search for businesses selling at low multiples of earnings. These, of course, were characteristically mediocre businesses with poor long-term prospects. This incentive to bottom-fish usually led to a conglomerate’s collection of underlying businesses becoming more and more junky… Eventually, however, the clock struck twelve, and everything turned to pumpkins and mice.”

But Buffett then goes on to explain how the judicious use of the conglomerate structure allows him to allocate capital in a way that is more efficient than the market:

“One of the heralded virtues of capitalism is that it efficiently allocates funds. The argument is that markets will direct investment to promising businesses and deny it to those destined to wither. That is true: With all its excesses, market-driven allocation of capital is usually far superior to any alternative.

Nevertheless, there are often obstacles to the rational movement of capital … A CEO with capital employed in a declining operation seldom elects to massively redeploy that capital into unrelated activities. A move of that kind would usually require that long-time associates be fired and mistakes be admitted. Moreover, it’s unlikely that CEO would be the manager you would wish to handle the redeployment job even if he or she was inclined to undertake it.

At the shareholder level, taxes and frictional costs weigh heavily on individual investors when they attempt to reallocate capital among businesses and industries. Even tax-free institutional investors face major costs as they move capital because they usually need intermediaries to do this job. A lot of mouths with expensive tastes then clamor to be fed – among them investment bankers, accountants, consultants, lawyers and such capital-reallocators as leveraged buyout operators. Money-shufflers don’t come cheap.

In contrast, a conglomerate such as Berkshire is perfectly positioned to allocate capital rationally and at minimal cost … At Berkshire, we can – without incurring taxes or much in the way of other costs – move huge sums from businesses that have limited opportunities for incremental investment to other sectors with greater promise. Moreover, we are free of historical biases created by lifelong association with a given industry and are not subject to pressures from colleagues having a vested interest in maintaining the status quo. That’s important: If horses had controlled investment decisions, there would have been no auto industry.”

I want to end with some caveats:

  • The Outsiders book does not really cover hands-on visionaries like Steve Jobs; perhaps there is no template that fits such one-of-a-kind geniuses; or perhaps his record does not hold up quite so well once his initial tenure at Apple, which was not so great, is mixed in (compounded average returns are spoiled rather easily by even a few bad years).
  • After all is said and done, there remains the possibility that we are looking at the survivorship and selection biases that often lurk in such history-based records. The high levels of leverage are a clear red flag that luck may well have played a huge role in some of these outliers: the risks they took did not materialize even though they could have, so survivorship bias really does matter.

Having said that, it is hard not to be impressed by the power of a pattern — decentralized operations combined with centralized capital allocation in the hands of a CEO who understands valuation — that is capable of 20%-type annualized returns over decades!

 


Full Disclosure: As of the time of writing this post, I am long Liberty Global (LBTYA), whose largest holder is John Malone, one of the eight Outsider CEOs. I am not connected with William Thorndike or the Outsiders book in any way.

This post is not meant to be and should not be construed as investment advice of any sort. Investing is extremely difficult, the risk of permanent loss is high, and past results are meaningless in the future. 

It Is Changes in Abundance and Scarcity That Drive Disruption

[Figure: the scarcity loop]

A key question in business (and hence in investing) is: what drives change? Why do dominant businesses get disrupted so frequently by challengers? I posit in this post that most of this disruption is a consequence of a shift in economic scarcity, mainly caused by technological advances.

Most businesses can be conceptualized as offering a product or service bundle of value to their customers. The bundle is made up of various modules that combine to provide the customer a valuable offering. I suggest that advances in technology cause changes in the relative scarcity or abundance in the underlying economics of these modules, and it is these changes in economics that create an opening for a challenger to topple a dominant business.

Consider an example given recently by Marc Andreessen:

“And so the newspaper bundle, the idea of this slug of news and sports scores and classifieds and stock quotes that arrives once a day was a consequence of the printing plant. Of the metro area printing plant, of the distribution network for newspapers using trucks and newsstands and newspaper vending machines and the famous newspaper delivery boy. That newspaper bundle was based on the distribution technology of a time and place.

When the distribution technology changed with the internet, there was going to be the great unwind, and then the great rebundle, in the form of Google and Facebook and Twitter and all these new bundles.

I think music is a great example of that. It made sense in the LP and CD era to [bundle] eight or 10 or 12 or 15 songs on a disc and press the disc and ship it out and have it sit in storage until somebody came along and bought it.

But, when you have the ability online to download or stream individual tracks, then all of a sudden that bundle just doesn’t make sense. So it [got unbundled] into individual MP3s.

And I think now it makes sense that it’s kind of re-bundling into streaming services like Pandora and Spotify.”

In general, once the deck of economic value has been shuffled by a shift in scarcity, an opening is created for an entrepreneur to start from scratch by targeting a key module of the old bundle that is now relatively scarce — hence valuable — and leveraging the newly created abundance. The Moore’s-law-driven plunge in the price of communications, for instance, is enabling a lot of startups to rethink existing business bundles by exploiting the “free” distribution available on the internet, just as iTunes did to unbundle CDs and Pandora is doing to iTunes in the example above.

Once the challenger has won, it is fairly easy for the winner to bundle more and more features around the core module, to increase its value and capture incremental market share. Of course, this process eventually sets up the bloated bundle to become a target for the next new challenger on the block, as technology changes the point of scarcity again!

Jim Barksdale, the former CEO of Netscape, captured this cycle of unbundling followed by re-bundling in his observation (hbr.org/2014/06/how-to-succeed-in-business-by-bundling-and-unbundling): “there’s only two ways I know of to make money: bundling and unbundling.” But he does not really explain why this should be so. Andreessen, in a recent tweetstorm, has provided a detailed example of this phenomenon of bundling and unbundling (twitter.com/pmarca/status/481554165454209027).

Thus the key driver of all the disruption and unbundling is technology-driven changes in economic scarcity. A particularly powerful example of such a technology driver is the virtuous cycle of semiconductor and software advances feeding into each other, diagrammed below (I have previously written about this loop here: arunsplace.com/2014/09/22/the-moore-andreessen-feedback-loop).

I think it is vital for a disruptor to succeed that it be better aligned with this loop than the product it is challenging!

This is why I think the Christensen model of disruption, while insightful, is not complete. It comes in two flavors: low-end disruption and new-market disruption. Neither is fully satisfying as an adequate model of disruption – a counter-example to Christensen’s framework is the fact that the expensive, richly-featured iPhone managed to completely disrupt the cheaper, less functional feature-phone business (e.g., Nokia) — the exact opposite of what his model predicts! In my framework, by contrast, the key driver (powered by Moore’s law) was the relative abundance, and hence cheapness, of the internet. This allowed the iPhone to feature internet-enabled apps as the main attraction, rather than phone calls (in fact, the initial iPhone was not all that great at making calls!). Thus the shift in scarcity/abundance created an opening for Apple to target internet connectivity as the core offering. This, I claim, is a better framework for explaining why the iPhone succeeded in disrupting plain old cellphones despite being much more expensive; it was clearly not an attack from the bottom. (To be sure, smartphones also disrupted PCs, and that fact can be explained as an attack from the bottom as well as an unbundling of PCs due to a shift in scarcity/abundance; I chose the disruption of dumb-phones in this example precisely because it cannot be adequately explained by either of Christensen’s two flavors.)

To be sure, there can be other kinds of technology changes that are not related to semiconductors. But it is really hard to find other examples of something that can grow 40% per year for nearly 50 years! Moore’s law is likely unique in this respect, which is why I think it plays such a crucial part in the persistence of the disruption phenomenon we have been experiencing in the last few decades.
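For a sense of scale, a quick back-of-the-envelope calculation (my own arithmetic on the 40%-per-year figure quoted above):

```python
# What ~40% annual improvement compounds to over ~50 years.
growth, years = 0.40, 50
print(f"{(1 + growth) ** years:,.0f}x")  # roughly a 20-million-fold improvement
```

No other input to the economy has improved by anything like seven orders of magnitude within a single working lifetime.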

Why Blockchain Should Be Unchained from Bitcoins – Two Analogies

Like many, I see great value in the de-centralized blockchain protocol that enables the trusted transfer of a digital token. (Briefly: since the token can represent the ownership rights to just about anything in the digital world, or for that matter, in the physical world, the blockchain technology can make transactions much lighter weight and much cheaper, creating potentially enormous economic value.)

However, I do not share the widespread optimism about the fate of Bitcoin as a global currency; in fact, I argue in this post that Bitcoin represents the Achilles heel of blockchain technology.

Of course, the reason why Bitcoins and blockchains are so tightly coupled together is that the blockchain protocol needs incentives for all those decentralized computers busily engaged in the reliable transfer of trust. And the current solution is to get this task done as a side-effect of the distributed systems trying to “mine” Bitcoins. In other words, the mining of Bitcoins incentivizes the distributed computing needed to implement any blockchain’s trusted ledger of transactions.

But unlike the obvious value in decentralized blockchain transactions, I think there are fundamental economic problems with the very notion of a currency that is not supported by a central bank of some kind.

To see why, consider the following analogy to build up some intuition.

Circulating blood is the medium of exchange of energy in our bodies, just as circulating currency (whether digital or physical) is the medium of exchange of goods and services in an economy. As the body grows larger, the total amount of blood needed to fulfill its exchange-of-energy role grows in proportion to the body’s size. If there is too little blood, the energy exchange will be deficient; we need to transfuse blood in extreme cases of blood loss. By analogy, as the global GDP level grows, as it does every year, the amount of currency in circulation must also rise, roughly in proportion!
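In monetary-economics terms, this is just the textbook quantity theory of money (a standard identity, not something specific to Bitcoin):

```latex
% Quantity theory of money (exchange identity):
%   M = money supply, V = velocity of circulation,
%   P = price level,  Q = real output (real GDP)
M \cdot V = P \cdot Q
\quad\Longrightarrow\quad
M \propto Q \;\;\text{when } V \text{ and } P \text{ are roughly stable}
```

If real output Q keeps growing while the money supply M is fixed, the identity forces the price level P downward; in other words, a fixed-supply currency is structurally deflationary.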

The Bitcoin algorithm, however, has no relation to the level or growth of global GDP. This may well prove to be a fatal flaw in its design.

To further deepen intuition, I recommend the brilliant essay Paul Krugman wrote for Slate magazine back when he was a professor at MIT. In it, he explains the irreplaceable role played by the central bank in maintaining a stable currency — one that is neither inflationary nor deflationary — using an insightful baby-sitting analogy based on a real-world case study. I suggest pausing here to read that brief (one-page) essay, mentally substituting Bitcoins for the baby-sitting scrip in his example: “Baby-Sitting the Economy.”

A strong implication of the currency experiment described in that essay is that any form of currency — including Bitcoin — just cannot become a stable, mainstream currency without a central bank function that acts to stabilize its value from time to time and keeps the circulating base of currency proportional to the size of the economy. One could, perhaps, imagine an algorithmic replacement for a human central banker — say a Taylor-type rule — implemented in a decentralized manner, although it would still be a central policy decision, in essence, since it needs to control the aggregate amount of the currency in circulation. However, all that is beside the point, since the current algorithm behind Bitcoins is completely decoupled from the size of the economy: there are going to be exactly 21 million Bitcoins, a fixed parameter in its very design. This is a major problem that I think will prevent Bitcoins from becoming a global currency of any major standing (as envisioned by its enthusiasts).
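For concreteness, this is the sort of “Taylor-type rule” I have in mind — the textbook Taylor (1993) formula for setting a policy interest rate; applying anything like it to a cryptocurrency’s issuance schedule is purely my hypothetical:

```python
# Classic Taylor rule: i = r* + pi + 0.5*(pi - pi_target) + 0.5*output_gap
# where r* is the neutral real rate and pi is current inflation.
# A decentralized currency could, hypothetically, tune its issuance with an
# analogous formula tied to measured economic activity.
def taylor_rate(inflation: float, output_gap: float,
                r_star: float = 0.02, pi_target: float = 0.02) -> float:
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

print(round(taylor_rate(inflation=0.03, output_gap=-0.01), 3))  # 0.05, a 5% policy rate
```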

Perhaps, however, it can survive as a limited sort of transient exchange medium, its value always secured at both ends by currencies that are stably backed by central banking. Indeed, this is exactly how it seems to be working these days. But I am dubious of the long term fate of this as well, since the point about the need to be proportional to the size of the economic “body” still stands, even for transient exchanges; after all, the number of simultaneous exchanges will surely rise in line with the growth of the global economy, thus raising the need for more Bitcoins in circulation.

More importantly, if all Bitcoins do is transmit money, then their role as a store of value becomes problematic. Every successful currency must perform both functions (store of value and medium of exchange) simultaneously. On Bitcoin’s ability to hold any intrinsic value as a mere transmitter of money, Warren Buffett has this to say:

“It’s a method of transmitting money. It’s a very effective way of transmitting money and you can do it anonymously and all that. A check is a way of transmitting money, too. Are checks worth a whole lot of money just because they can transmit money? Are money orders? You can transmit money by money orders. People do it. I hope bitcoin becomes a better way of doing it, but you can replicate it a bunch of different ways and it will be. The idea that it has some huge intrinsic value is just a joke in my view.” (Buffett, as quoted by CNBC)

In response to this, Marc Andreessen, the venture capitalist behind many Bitcoin-related investments, has tweeted:

“Warren has gone out of his way for decades to avoid understanding new technology. Not a surprising result.”

Much as I admire Andreessen, I think this completely misses Buffett’s point, which is not about technology at all. Buffett understands moats, and he is saying that there is no moat in any technology whose role is to transmit money. As he says, there have been many technologies to transmit money in the past, but this carrier function has never managed to hold much value, primarily due to competition. (Incidentally, Buffett is a student of business moats and has written in the past about the lack of moats in many innovative technological breakthroughs — airplanes and cars being two examples that immediately jump to mind.)

Buffett’s point about money transmitters not being moated implies that the motivation of miners will surely be affected if the value of what they are mining is subject to competitive erosion over time. Note that Buffett is carefully not saying anything about the value of the blockchain protocol itself. Most articles on Bitcoin end up laying out the future possibilities of blockchains, rather than Bitcoins. I actually agree that the blockchain protocol has great potential; but I find the conceptual foundation behind Bitcoin not quite up to the task.

At least for now the two are tightly joined at the hip, and that is a bug, not a feature, in my opinion!

My argument so far has been about the long-term issues with Bitcoins. But even over the short term, the recent plunge in Bitcoin prices is troublesome — in fact, it has been worse than the notable crashes in both the ruble and oil prices.

This matters, since it will affect the motivation of Bitcoin miners. As explained in NYT’s DealBook blog:

Bitcoin miners are computers that run Bitcoin’s open-source program and perform complex algorithms. If they find the solution before other miners, they are rewarded with a block of 25 Bitcoins — essentially “unearthing” new Bitcoins from the digital currency’s decentralized network. Such mining operations, though potentially lucrative, are also expensive, requiring huge amounts of equipment and electricity.
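To make those “complex algorithms” a bit more concrete, here is a toy version of the hash puzzle at the heart of proof-of-work mining (a drastically simplified sketch; real Bitcoin mining hashes binary block headers against a far harder, dynamically adjusted target):

```python
import hashlib

# Toy proof-of-work: find a nonce such that the SHA-256 hash of the block
# data plus the nonce starts with a required number of zero hex digits.
def mine(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the first miner to find this wins the block reward
        nonce += 1

print(mine("toy block of transactions"))  # ~16^4 = 65,536 hashes on average
```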

It is vital for the integrity of the blockchain protocol that Bitcoin miners continue to mine, since blockchain maintenance is a side-effect of Bitcoin mining! Any plunge in the motivation level, and hence the capital investment level, of Bitcoin miners clearly slows things down. If I am right about this (I am not a professional economist), the conceptual deficiency behind Bitcoin could really undermine the long-term prospects of blockchains.

The key question: Can the blockchain protocol be decoupled from Bitcoins in some way?

If only there was some other way to incentivize all those miners performing the distributed computations needed to maintain the integrity of blockchains …


Disclosure: I have no Bitcoin related investments at the time this post was written.

The Strange Economics Of Information

I argued in my last post, Value Investing As Software Eats the World, that value investors these days can no longer avoid coming to grips with the subtle differences between information-based and physical goods that might affect moats and valuations. As Hal Varian (now chief economist at Google) and Carl Shapiro explain in their excellent (albeit slightly dated) book, Information Rules: A Strategic Guide to the Network Economy, while the classic principles of economics remain valid in concept, they have some interesting implications due to the unusual cost structure of information-based goods. In this post I would like to highlight some of these implications that seem particularly relevant from an investor’s point of view.

Perhaps the most important of these differences is that information goods usually have high fixed costs of initial production but practically zero costs of reproduction. Think of the creation and production of a new song, a book, a movie, or a new software program. All the costs have to be incurred upfront, before the first item is sold. But after that initial item is shipped, reproducing it is just a matter of copying the digital bits and transmitting them over the internet, for hardly a few cents per copy these days; and even that low cost is exponentially approaching zero! Since the marginal cost of an information good is nearly zero, age-old economic principles confidently predict that its price will also head towards zero in the presence of any significant competition. This implies that information goods cannot really be priced on the usual “cost of production plus profit” basis of physical goods — their cost of production is practically zero.
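A tiny illustration of this cost structure (all numbers made up for the example):

```python
# Made-up numbers: a big fixed cost to create the first copy of a piece of
# software, a few cents to reproduce each additional copy.
fixed_cost, marginal_cost = 50_000_000, 0.05

for units in (1_000, 1_000_000, 100_000_000):
    average_cost = (fixed_cost + marginal_cost * units) / units
    print(f"{units:>11,} copies: average cost ${average_cost:,.2f} per copy")
```

Average cost collapses from $50,000 per copy toward the five-cent marginal cost as volume grows, which is why “cost plus profit” gives almost no guidance on price.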

Another unusual aspect is that information goods often have to be experienced in order to be valued. Consider, for instance: how much would you pay for a news article? If it turns out that you already knew the news in it, it has no value to you; but you cannot know this until you have actually read it! Similarly, a piece of music has to be heard before one can properly put a value on it. Producers of information goods have to solve this conundrum in pricing; this often requires giving away some version of the good for free and investing heavily in building a reputation for quality. The high quality of past editions of, say, The New York Times is the reason we agree to pay for today’s newspaper, even before knowing whether it contains anything of value.

A more consequential implication of near-zero variable costs is that an information-based product can be reproduced in billions of units with very little additional investment: a successful product can scale to massive volumes with surprising speed and efficiency. Value investors of the old school, used to the slow and steady expansion needed to scale physical goods, are often blindsided and tend to “miss” moated companies such as Google and Facebook as they go from huge losses (due to the high fixed costs of producing the first item) to billions in profits within a matter of just a few years (due to the minimal costs of scaling up).

Zero incremental costs also explain the differential pricing often found in information-based products, where the same core product is sold at vastly different prices in multiple versions with superficial variations, including free versions. This abundance of digital copies must, however, contend with the harsh reality of that ultimate scarcity — human attention. Metrics related to the consequent “battle for eyeballs” are often used as a proxy for competitive advantage in the early stages of a new digital product. Such metrics are often scorned by traditional value investors, who prefer to measure actual cash earnings, but I think they are missing the point; in those early stages, metrics of attention capture are perfectly rational measures of value for information-based goods, due to the unusual economics of networks that applies to them.

The economics of networks

Notice that information-based goods have to interconnect with many complementary products in order to function: a browser has to work with an operating system; a smartphone with a mobile carrier; an app with an app store; a TV show with a cable network; and so on. Such interconnections eventually lead to entire networks; and once a product is entrenched in the fabric of a customer’s network, it is very difficult to persuade that customer to switch to a newer product, even if it is quite a bit better or cheaper than the older one. These high network-driven switching costs create a strong moat against competition once an information product has achieved a critical mass of acceptance.

The implication is clear: to really understand the information economy, it is necessary to master the economics of networks.

Think of buying the very first fax machine – it was useless by itself because it could talk to no other fax machine, but as soon as other businesses also purchased fax machines, it became more and more valuable. Similarly, the first user of Facebook had nobody to communicate with, but the value of the Facebook network goes up with every additional Facebook user that joins — the benefit to the nth user rises in proportion to (n-1), the number of previous users. The inventor of Ethernet, Robert Metcalfe, was one of the first to point out a mathematical implication of this: the total value of a network is proportional to the square of the number of connected users in the network [n*(n-1)]. This suggests that, all other things being equal, as Facebook grew its user base 10 times, its value grew not 10 but 100 times! (Of course, all things are not really equal – a user in a poor economy is not as valuable as a user from a rich one; nevertheless, the value of a network rises to very large numbers surprisingly quickly.)
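The arithmetic is worth seeing explicitly (a trivial sketch of Metcalfe’s formula):

```python
# Metcalfe's law: total network value scales as n * (n - 1), roughly n^2.
def network_value(n: int) -> int:
    return n * (n - 1)

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} users: relative value {network_value(n):>12,}")
# Each 10x growth in users yields roughly a 100x growth in total value.
```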

The term “network” does not mean that wires or wireless connections have to exist between the nodes. The term is used metaphorically whenever there are interdependencies between nodes, which is almost always the case for information-based goods. For example, as more and more office workers adopted Microsoft Word in the 1990s as their editor of choice, the more valuable it became for a new user to adopt it, and the value of the “Word network” kept rising faster than the size of the installed base. Network effects from office workers dependent on Word, Excel, and PowerPoint explain why even a fading Microsoft still commands over $300 billion in market value. Any challenger product for processing a document or spreadsheet immediately runs into the problem that no one else in the company can work with it; it is extremely difficult to persuade everyone to move simultaneously to a new kind of document or spreadsheet editor, even if it is somewhat better than Word or Excel – a classic example of the powerful moat created by network effects.

Traditional value investors repeatedly underestimate this powerful arithmetic of networks. The market value of such companies can grow exponentially even as their P/E ratio shrinks arithmetically; they can change from apparently insanely expensive multiples to quite reasonable valuations all too quickly.

Tipping points and positive feedback loops – the dynamics of networks

Producers of physical goods usually face diminishing returns from the physical and bureaucratic costs of being too big, and the demand for such products is usually not much influenced by the number of units previously sold. Information-based goods, on the other hand, usually have decreasing supply costs as their fixed costs are amortized over larger volumes; in addition, they benefit from increasing demand with every incremental user, due to the network effects noted earlier. These double-barreled positive feedback loops can create runaway winners, given the extreme economies of scale involved. As noted by Geoffrey Moore in his prescient book, Inside The Tornado, such companies have an unusual growth pattern: they grow slowly at first, before reaching a “tipping point” of acceptance; products that cross this point can show explosive growth afterwards. The demand for such products is also subject to self-fulfilling loops: a product that is expected to win often obtains more and more followers, in a herd-like dynamic of adoption.

The economics of such increasing returns to scale has long been discussed, but only recently have elegant models developed by economists such as Paul Krugman, Paul Romer, and Brian Arthur enabled it to be tackled mathematically. Taken to the extreme, increasing returns can lead to a winner-takes-all monopoly for some products – famously Microsoft’s Windows and Intel’s x86 chips; Facebook’s social network is showing similar adoption dynamics these days. A generative model for how such networks grow has been described by the physicist Albert-Laszlo Barabasi in his insightful book, Linked: The New Science of Networks. In Barabasi’s “preferential attachment” model, even a slight preference of a new node for joining one of the bigger hubs of an existing network causes the big hubs to get even bigger. Such a growth process can explain many of the “power laws” often found in a variety of network-like phenomena, from the size of cities to the popularity of blogs. Barabasi also found that a second factor, intrinsic quality, can explain the occasional exception to this “big get bigger” pattern, provided that the challenging product’s quality is an order of magnitude better than the existing product’s. A similar notion was first posited by Intel’s legendary CEO, Andy Grove, who observed that a new technology that is, say, only 10% better often fails to make a dent in a well-established incumbent, but one that is 10 times better stands a good chance.
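A minimal simulation of the preferential-attachment process (my own simplified sketch, just to show the “big get bigger” dynamic, not Barabasi’s exact model):

```python
import random

# Preferential attachment: each new node links to an existing node chosen
# with probability proportional to that node's current degree. The result
# is a heavy-tailed ("power law") degree distribution dominated by a few hubs.
def preferential_attachment(num_nodes: int, seed: int = 0) -> list[int]:
    random.seed(seed)
    degrees = [1, 1]  # start with two nodes joined by one link
    for _ in range(num_nodes - 2):
        target = random.choices(range(len(degrees)), weights=degrees)[0]
        degrees[target] += 1   # the chosen hub gets bigger
        degrees.append(1)      # the newcomer arrives with a single link
    return degrees

degrees = preferential_attachment(10_000)
print("five biggest hubs:", sorted(degrees, reverse=True)[:5])
print("median node:", sorted(degrees)[len(degrees) // 2])
```

Run it and the top hubs accumulate a hundred or more links while the median node still has just one, even though every node starts identically; only the order of arrival and the rich-get-richer rule differ.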

In light of such non-linear growth dynamics, it is easy to see why information-based companies spend so heavily in their early stages, sometimes even giving away their initial product in order to establish the impression that they have already won. Growing to critical mass quickly is often crucial to success; monetization can wait, since an established network of users is much easier to monetize later on. Clear examples of this strategy are Google and Facebook: both patiently waited for many years before monetizing their growing base of users, and both were quickly and hugely profitable once they turned on their monetization engines. Similarly, newer products such as Twitter, Instagram, and Whatsapp are still growing their user networks, deliberately delaying monetization for now. While many value investors roll their eyes at the absence of current profits, these delay strategies are rational given the strange economics of networks.

Invading the network moat

If there are such strong positive feedback loops as I have described above, how could Facebook topple Myspace, the earlier social network? In general, how can a challenger win in the face of the tremendous moat created by the high switching costs of networks?

The co-founder of PayPal and an early investor in Facebook, Peter Thiel, has shared some fascinating insights on this topic in his recent book, Zero to One. He thinks it is crucial for the new network to start by targeting a small universe of users that it can hope to dominate. Facebook, for example, initially targeted only Harvard University students, and quickly came to be perceived as the social network on campus. Growth then involves expanding this concentric circle of users, all the while remaining dominant within it. Facebook expanded by growing into the adjacent markets of other universities; only much later did it expand to the general set of users outside its initial university focus. By then its tremendous momentum had a sense of inevitability that helped it topple Myspace and seize the lead to become the dominant network of today.

Similarly, Amazon initially targeted just selling books online, and quickly came to dominate this niche. It then expanded into other media; only much later did it take on the much bigger eCommerce market. Consequently, it has enjoyed the benefits of self-fulfilling expectations — being perceived as the inevitable winner throughout.

It seems to me that another way a challenger can win the network battle against a well-established incumbent is by linking to an even larger network. Microsoft successfully toppled Netscape’s web browser by tightly linking its Internet Explorer to Windows, thus leveraging the massive network of the Windows installed base against Netscape’s then-dominant browser network.

This appears to be happening these days to eBay’s dominant network of third-party sellers. Amazon is trying to leverage its massive base of 250 million first-party customers into its smaller but rapidly growing third-party marketplace. It has opened up its existing (first-party) product catalog and its massive shipping infrastructure (“Fulfillment by Amazon”) to third-party sellers. I predict that it is only a matter of time before this battle tips decisively towards Amazon and against eBay.

A series of monopolies

Perhaps due to the diminishing returns involved in scaling the production of physical goods, the older industrial economy has been characterized by a set of stable oligopolies (a few large companies dominate the production of cars, airlines, chemicals, mining, oil, paper, etc.). In contrast, the presence of positive feedback loops in demand as well as supply often lets the winner establish a winner-take-all, monopoly-like position in the economy of information-based goods. However, such monopolies do not necessarily last forever. The demise of MySpace and Netscape shows that a determined challenger with a 10x better product, or a clever linkage to an even bigger installed base, can overcome an existing network and establish a new monopoly.

I expect the information age will likely be governed by a series of monopolies, each successive disruptor toppling the older incumbent. This is an important pattern to note, since Buffett has shown during the past fifty years how monopolies can lead to amazing compounded investment returns. I think, however, that digital monopolies will be a lot more volatile — positive feedback can lead to vicious cycles as well, if a challenger succeeds in unbundling a monopoly incumbent. Value investors need to be very cognizant of the powerful forces of disruption that can destroy an existing monopoly and create a new one in its stead.


Disclosure: As of the time of writing this post, I am long many of the information economy stocks mentioned in this post, including Amazon, Google, Facebook, Twitter, Intel, and Microsoft. This post is not meant to be and should not be construed as investment advice of any sort. Investing is extremely difficult, the risk of permanent loss is high, and past results are meaningless in the future. 

Value Investing As Software Eats the World

The principles of value investing, as described by its best practitioners from Benjamin Graham to Warren Buffett, are widely held to be both eternal across time and universal in their application to all kinds of investments. But are they really so? Or, is there something about technology companies that makes them particularly tricky to value? What should we make of the fact that some of the best value investors, most famously Buffett, go out of their way to avoid technology stocks?

I look for businesses in which I think I can predict what they’re going to look like in ten or 15 or 20 years. That means businesses that will look more or less as they do today, except that they’ll be larger and doing more business internationally. So I focus on an absence of change. When I look at the Internet, for example, I try and figure out how an industry or a company can be hurt or changed by it, and then I avoid it … Take Wrigley’s chewing gum. I don’t think the Internet is going to change how people are going to chew gum… I don’t think it’s going to change the fact that Coke will be the drink of preference and will gain in per capita consumption around the world; I don’t think it will change whether people shave or how they shave.

Warren Buffett

This question is of vital interest to me since:

  • I have a technology background (degrees in Engineering and Computer Science; worked in technology companies for many years; co-founded a software company).
  • I am convinced, after two decades of experience, that value investing is the right approach to active investing.

But beyond my personal concerns, I will argue in this post that the consequences of technology cannot really be avoided by any investor in this day and age.

Value Investing: a very brief introduction

The term “value investing” probably means different things to different people, but its main principles can be summarized quite briefly. The central idea, articulated most forcefully by Benjamin Graham, is that a stock should be thought of as a share of the underlying business rather than a piece of paper, or, these days, a set of bits in a brokerage account. It has an intrinsic value, which is the present value of all the future cash generated by that business. Since the future is always uncertain, this intrinsic value cannot be determined with any real precision; the price of a stock fluctuates from day to day as various classes of investors and speculators make informed — or uninformed! — guesses about this unknown intrinsic value. Graham advised investors to ignore this daily distraction and concentrate, instead, on the long-term fundamentals of the underlying business, arguing that the stock price ultimately approaches the intrinsic value (“in the short run, the market is a voting machine, but in the long run, it is a weighing machine”). A value investor should patiently wait until the occasionally bipolar “Mr Market,” in one of his depressed moods, offers an opportunity to buy a stock below its intrinsic worth. Graham also suggested leaving a margin of safety between the price paid and the intrinsic value, in case our assumptions turn out to be too optimistic. And when Mr Market, in one of his manic moods, is overvaluing a stock we hold, Graham advises us to sell it. In the memorable words of his most famous student, Warren Buffett, a value investor has to “be fearful when others are greedy and greedy when others are fearful”!
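In formula terms, intrinsic value is just a discounted sum of future cash flows (a standard DCF sketch with made-up numbers, not Graham's own notation):

```python
# Intrinsic value = sum of future cash flows, each discounted back to today:
#   V = sum over t of CF_t / (1 + r)^t
def intrinsic_value(cash_flows: list[float], discount_rate: float) -> float:
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# E.g., a business expected to throw off $100/year for 10 years, at an 8% rate:
print(round(intrinsic_value([100.0] * 10, 0.08), 2))  # ~671.01
```

Every term in that sum is a guess about the future, which is exactly why intrinsic value is an estimate rather than a precise number.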

Buffett has applied these principles to great fame and fortune, further improving them with the help of his talented partner Charlie Munger. He has described his approach in his highly readable annual letters, and in speeches and interviews, over his six decades of investing. Among the many seminal ideas contributed by Buffett and Munger is that the future cash profits of a business, the source of its intrinsic value, depend crucially on its ability to withstand profit-eroding competition: its “moat”. Businesses that have a moat against competitors are worth a lot more than the average business, since they can usually compound their earnings over time, thereby multiplying their intrinsic value. Investing in moated companies has led to outlier returns for Buffett and Munger, as they held on to them for decades, letting the magic of compounding work over time. Finding and holding on to such businesses through the rough and tumble of market mayhem is not easy, however; otherwise everyone who followed them would have become a billionaire!

Buffett and Munger also emphasize that an investor should stay well within his or her “circle of competence” in order to have the courage of conviction to stick with an investment when the market disagrees and its price plunges, as happens from time to time. They are fond of citing:

I’m no genius. I’m smart in spots – but I stay around those spots.

— Thomas J. Watson Sr., legendary CEO of IBM

Finally, Buffett and Munger recognize that the market is generally quite efficient, with lots of smart investors competing to outdo each other for high stakes. They suggest, therefore, that investors concentrate their portfolios around a few companies within their circle of competence that they truly understand better than the average investor.

Broadly speaking, that completes our brief sketch of the central pillars of value investing.

It all sounds very logical and straightforward. What else could there be but this approach? Yet there are many different investing philosophies which have nothing in common with what is described above.

Perhaps the most influential is the academic idea, forcefully argued by Nobel-winning economist Eugene Fama, that the market is so efficient that the price of any stock is always right. A direct consequence of this theory is that the central notion of value investing – that intrinsic value can diverge from market price – makes no sense. This school of thought, therefore, recommends buying a piece of every company trading in the market, without discriminating based on the underlying business value, in strict proportion to their market values (this is called “index investing”). There are many equally distinguished economists who vehemently disagree with Fama’s efficient market hypothesis, among them Robert Shiller (who shared the Nobel with Fama) and Andrei Shleifer.

The market is also full of all kinds of momentum and technical analysis strategies that do not pay any attention to the underlying business. There are also investors who specialize in art and antiques, as well as in precious metals like gold and silver; these are valued entirely for their scarcity, since they do not generate any cash flow that can form the basis for intrinsic value. Finally, there are the so-called “noise” traders, pure speculators, devoid of any reliable information whatsoever!

Learning from more than twenty years of my own experiments in the markets, I am fully persuaded that value investing is the best approach to active investing (with passive indexing being the wise default for most people who do not have the time or inclination to participate actively in the market). It accommodates both the rational and irrational aspects of human nature in a consistent and practical way. Although simple and elegant in concept, value investing is extremely hard in practice. This is mainly due to our own temperament getting in the way — it is highly stressful for social, herd-evolved animals like us to be counter-cyclically fearful when others are greedy and greedy when others are fearful!

Temperament aside, my technology background poses an additional problem when it comes to value investing.

A personal dilemma

As already noted, one of the key pillars of value investing is that one should stay within one’s circle of competence in order to have any hope of beating the near-efficient market.

Knowing the edge of your circle of competence is one of the most difficult things for a human being to do … You have to strike the right balance between competency on the one hand and gumption on the other. Too much competency and no gumption is no good. And if you don’t know your circle of competence, then too much gumption will get you killed.

— Charlie Munger

This rings true to me since the “gumption” needed to buy more of a good stock when it goes down depends upon the degree of conviction one has in the fundamental soundness of the underlying company. And such contrarian conviction comes from deep knowledge that can only be had within one’s circle of competence, almost by definition of the latter.

But this poses a practical dilemma: given my background, the center of my circle of competence seems to be precisely those types of technology companies that are so judiciously avoided by nearly all the great value investors of the past!

In theory, there is no difference between theory and practice. In practice, there is!                 

— Yogi Berra

Perhaps the key to resolving this dilemma lies in understanding exactly what it is about technology that makes it such anathema to the great value investors. As Buffett suggests in the quote cited at the beginning, the rapid pace of technological change means that today’s dominant company will likely become tomorrow’s lunch for the next new start-up in Silicon Valley. Experienced value investors prefer a durable moat against competition; fast-changing technology, however, makes maintaining such a moat for any length of time difficult, if not impossible.

To get a sense of why this might be the case, let us turn next to the technology in the very eye of the raging storm of change — the ubiquitous smartphone.

The great disruptor

Consider just a few of the hardware gadgets that have been replaced by software on our smartphones. Besides being a phone, of course, as well as a texting and emailing device, the smartphone contains software apps that replace what used to be physical calendars, contact books, and scratch pads; apps that replace the compass, map, and GPS-based navigator; apps that replace the camera, video-camera, and calculator; apps that convert the phone into a credit card reader. Various apps now monitor our health and keep track of our activities. Software on the phone has made it our social networking machine, our personal shopper, and our travel manager; searching and browsing software has replaced physical dictionaries, encyclopedias, yellow pages, and newspapers. Software like Google Now is even beginning to intelligently anticipate our needs — it informs us of important upcoming events without our asking (“leave for the airport now!”), acting like a personal assistant. The list of hardware and human functions at least partially replaced by software seems to expand daily.

And it’s not just the smartphone. Software is encroaching on heavy industrial equipment too. General Electric now puts chips in its turbines that track all rotations cumulatively, so that its software can predict upcoming failures and schedule maintenance safely in time. Cars already have dozens of sensors and microprocessors monitoring both the internal and external environment, from engine temperature and oil quality to the distance from other cars; software apps in cars now control the reversing camera, gas gauge, service alerts, GPS maps, radio, music players, and other devices. Taxis are being scheduled, and their approach monitored, by smartphone apps. There are sensors in the generators at the electrical utility plant, along the transmission lines, and at the meter at home, making the whole grid smarter about distributing electricity. Intel chips are already monitoring the “flow” at “smart urinals” at Heathrow airport, to help schedule maintenance during low-use periods. Software can even “print” a prosthetic bone replacement customized to fit an individual elbow or knee — software is entering our very bones!

Venture capitalist Marc Andreessen’s trenchant phrase — “Software is Eating the World” — evokes the reach and power of this pervasive and powerful phenomenon.

Software by itself, of course, could not have done all this transformation and disruption. In a previous post, I described the “Moore-Andreessen” positive feedback loop between semiconductors and software that drives this disruptively-creative force. As semiconductors get cheaper and faster every year, both computing and communications become more capable, enabling software to “eat” more and more functions performed by people or hardware today. This extended reach expands the total market for semiconductors, enabling yet more investment in the advances that make semiconductors even faster and cheaper — completing the self-reinforcing loop.

Moore-Andreessen Loop

There are serious investing implications of this loop. The following is just a small sample of companies that once dominated their niches but are now either gone or fundamentally transformed by the encroachment of software:

  • The bookstore chain Borders got “eaten” by software-based Amazon.
  • The music store chain Tower Records got “eaten” by iTunes software.
  • Apple’s iTunes itself is getting “eaten” by Pandora and Spotify streaming software.
  • The video chain Blockbuster got “eaten” by Netflix software.
  • Newspapers and magazines got “eaten” by websites, blogs, and online ads.
  • Yellow Pages got “eaten” by Google software.
  • Kodak got “eaten” by digital photos and smartphone cameras.
  • AT&T and Vodafone are under attack from Skype, WhatsApp, FaceTime, and Facebook.
  • Retailing giants like Sears, Target, Walmart, and Tesco are being “eaten” by Amazon.
  • Bank clerks got “eaten” by ATMs.
  • Human brokers were “eaten” by online brokerage sites.
  • Travel agencies are being “eaten” by Expedia, Travelocity, and the like.
  • Recruiters are being “eaten” by LinkedIn and other social networks.
  • Insurance underwriters and actuaries are being “eaten” by “big data” analytic software.
  • … the list grows every day.

This should be of critical interest to all value investors since companies in the process of being eaten alive can often seem attractive to investors — inexpensive on the basis of the usual valuation ratios — right until their very end. New kinds of “value traps” lie in wait for investors unaware of the disruptive power of software.

Even more surprisingly, some companies that seem to be perpetually unprofitable on the usual earnings basis can sometimes turn out to be quite robustly moated! Consider TCI, the formidable cable monopoly assembled by John Malone, a veritable grandmaster of the information economy. As reported by WSJ’s Mark Robichaux in Cable Cowboy: John Malone and the Rise of the Modern Cable Business, TCI, remarkably, never reported positive earnings on a GAAP basis over its entire three decades of existence and yet was eventually bought by AT&T for 48 billion dollars!
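
To see how that is possible, consider a toy income statement (all numbers hypothetical, not from the book) for a cable operator that, like TCI, front-loads depreciation on its plant and finances aggressively with debt:

```python
# Hypothetical numbers: heavy depreciation and interest can push GAAP
# earnings negative while the underlying cash generation stays healthy.
revenue      = 1000
cash_costs   = 600
depreciation = 350   # accelerated write-down of cable plant
interest     = 120   # aggressive debt financing

gaap_earnings = revenue - cash_costs - depreciation - interest
pre_tax_cash  = revenue - cash_costs - interest
print(f"GAAP earnings: {gaap_earnings}, pre-tax cash flow: {pre_tax_cash}")
# GAAP earnings: -70, pre-tax cash flow: 280
```

The reported loss also shelters the cash flow from taxes — a benefit Malone, as we will see below, exploited systematically.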

Incidentally, even some of Buffett’s most cherished moats have not escaped this sort of software disruption. One of his favorite investments, the Washington Post, eventually lost its advertising moat to Google and was recently sold to Jeff Bezos, the CEO of Amazon and another master of the information economy. And long-time Buffett holdings like Walmart and Tesco are under attack from Amazon these days and seem to be slowly losing their vaunted moats.

In other words: Buffett clearly ignores software, but will software ignore Buffett!?

Embracing my spots

Reading an earlier draft of this post, my wife, a practicing physician, raised an interesting question at this point — “Surely technological change is not a new phenomenon, and value investors seem to have successfully avoided it so far; what is so different now?”

I think the answer lies in the exponential nature of the semiconductor-software feedback loop. Disruptive change started within the limited circle of technology companies at first, but the relentless arithmetic of exponential growth (for instance, semiconductors are billions of times more powerful and cheaper now than just a few decades ago) means that this circle grows concentrically every year, and is beginning to intersect virtually every aspect of our world. To get a sense of this, just consider the fact that there are now more than a billion smartphones in the world, up from zero just seven years ago. Furthermore, phone companies are projecting that the remaining four billion people carrying ordinary feature phones today will also convert to smartphones soon, as prices continue to plunge exponentially due to the power of the hardware-software loop described above (some Indian smartphones cost less than 35 dollars today). And, as the earlier examples of hardware-eating apps suggest, a smartphone acts like a Trojan horse for proliferating software, spreading it all over the world.

The unavoidable conclusion: avoiding technology will no longer be a viable investing strategy.

Compelled by this line of reasoning, and since my center of competence seemed to be within the concentrically expanding circle of technology, I thought it wiser to heed Jacobi’s classic maxim:

Invert, Always Invert!          

— Carl Jacobi, 19th-century mathematician, on solving difficult problems

Instead of avoiding technology, I decided to invert, and fully embrace the dreaded intersection of value investing and technology as an area of competitive advantage — a personal niche — in the world of investing.

This, however, is much easier said than done, since the rapid pace of technological change really does make durable moats hard to establish and maintain — and moats are critical to compounding value.

It’s not supposed to be easy… Everything that is important in investing is counter-intuitive, and everything that’s obvious is wrong!

— Charlie Munger

I have to confess at this point, though, that the counter-intuitive aspect of investing is exactly what attracts me to it in the first place! I think it is somewhat analogous to the “aha” pleasure of scientific discovery. One has to be deeply and thoroughly informed of the existing state of all the relevant knowledge. While necessary, however, such knowledge is not sufficient, in either science or investing. The crucial next step is to have a non-obvious hypothesis about what is currently unknown (in science) or mispriced (in investing) and be proven right over time! Just as no scientist is likely to get any credit for rediscovering something already known, no investor is likely to be rewarded for investing in companies that are already well appreciated by the market. The equivalent of that deadly phrase “already published” in science is “already priced into the stock” in the world of investing.

The strange economics of information

Given this conclusion, it seems that serious investors can no longer avoid coming to grips with all the subtle differences between information-based and physical goods that might affect their moats and valuations. I will save a fuller discussion of the differences for a future post (update: see The Strange Economics of Information); here is a sampler:

  • Information goods usually have high fixed costs of initial production but virtually zero costs of reproduction. Since the marginal cost of an information good is nearly zero, its price also tends to head towards zero in the presence of competition (see the sketch after this list).
  • Information goods often have to be experienced in order to be valued. Producers of information often have to give away some version of it for free and invest heavily in building a reputation for quality.
  • Near-zero variable costs imply that an information-based product can be reproduced in billions of units with very little additional investment: hence a successful product can scale to massive volumes with surprising speed and efficiency.
  • Zero incremental costs cause the unusual pricing patterns often found in information-based products: the same product is sold at different prices in multiple versions with superficial variations. Even free versions sometimes make economic sense!
  • Information-based goods have to interconnect with many complementary products in order to work: such interconnections eventually lead to network effects and high switching costs.
  • Information-based networks often obey the non-intuitive economics of increasing returns. They are fast changing due to the presence of positive feedback loops: virtuous and vicious cycles with tipping-point adoption dynamics.
  • Information-based goods usually have decreasing supply costs, as their fixed costs are amortized over larger volumes; in addition, they benefit from increasing demand with every incremental user, due to network effects.
  • The information age will likely be governed by a series of monopolies, each successive disruptor toppling the older incumbent.
  • New kinds of moats exist in the information-based economy.
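
The first and seventh points above can be made concrete with a toy cost model (the numbers are hypothetical): the average cost per copy collapses toward the near-zero marginal cost as volume scales.

```python
# Toy illustration: average cost of an information good falls toward
# its (near-zero) marginal cost as fixed costs are amortized.
FIXED_COST = 50_000_000   # e.g., writing the software, producing the content
MARGINAL_COST = 0.01      # serving one more copy (bandwidth, storage)

for units in (1_000, 100_000, 10_000_000, 1_000_000_000):
    avg = (FIXED_COST + MARGINAL_COST * units) / units
    print(f"{units:>13,} units -> ${avg:,.2f} per unit")
```

No physical good behaves this way; a factory’s millionth widget costs nearly as much to make as its thousandth.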

Information is the new oil

Value investors like to invest in companies with tangible book value, which often comes from owning “real” things such as fields full of oil or gold. Buffett, to his credit, was one of the first to discover the merits of owning intangible assets, like the rights to produce an addictive product like Coca-Cola or the rights to a popular brand like Gillette or American Express. Since information, by its very nature, is intangible, all value investors will soon have to deal with this new type of company, whose intangible assets can appreciate due to network effects rather than depreciate like physical plant and machinery.

The proliferation of sensors and computers and wireless chips embedded in physical things is likely to lead to a vast amount of data. Every time we buy a cup of coffee, search the web, send an email, or go for a run, data is collected and sent to the “cloud”. In fact, every rotation of an airplane turbine, every trip a car makes, every tune played on the radio, generates a burst of raw data. Companies that can extract meaningful patterns from this incoming data will have created a powerful new kind of moat, one that grows with every new bit of information flowing their way. Google’s search engine, for example, gets just a tiny bit more valuable every time we search, and its formidable moat gets incrementally wider, as it mines the information buried in that search. No wonder the market valuation of Google has reached that of the industrial-age godzilla Exxon Mobil within just a decade of going public (both are worth about 400 billion dollars as I write this). As the incoming CEO of IBM recently remarked, data is the “vast new natural resource for the next century.”

Eventually, the competitive advantages of superior information processing start showing up in the earnings numbers. Munger, who recently celebrated his 90th birthday, was one of the first of the traditional value investors to note:

Google has a huge new moat. In fact, I’ve probably never seen such a wide moat.

— Charlie Munger

Information economy is more than the technology sector

Finally, many companies that seem to be outside the technology sector may have information processing at the core of their value creation. For example, Express Scripts is a company in the healthcare sector that processes one out of every three prescriptions in the country; every time a patient presents a prescription to the pharmacist, an internet transaction is triggered requesting that the insurance company approve payment for fulfilling the script. This flow of information between doctor, patient, pharmacy, Express Scripts, and insurer is the key to analyzing its value (What are the network effects? Is the feedback loop virtuous or vicious?) as a part of the information economy we have been discussing.

Similarly, many traditional value investors mistakenly analyze Amazon as a retailer. However, as described in his annual letters to investors, Bezos is deliberately and methodically trying to transform the economics of retail — with its high variable costs — into those of a software company, with high fixed but low variable costs! To fully understand the implications of this, and to evaluate the strength of Amazon’s moat, it must be viewed primarily as an information processor rather than as a retailer (though it is that as well).

Rediscovering Value

It seems clear, at least to me, that the old value investing strategy of avoiding technology stocks is no longer tenable as software keeps eating more and more of the world. This does not mean, however, that we can afford to throw away the hard-won insights of value investing. It is worth remembering that the Nasdaq crash of 2000 was led by those “dotcom” information economy stocks! It is all too easy to get carried away by “new era” stories spun by today’s version of castles-in-the-air companies like Pets.com, Homegrocer.com, MySpace, and the like.

There is a lot to be learned from the great value investors over the past century. A stock will always be nothing but a piece of a business; its value will derive entirely from that business’s ability to withstand competitive erosion of its profits, i.e. its moat. Valuation always matters, and it helps to have a margin of safety when purchasing a stock. Long term returns are dominated by the ability of the underlying company to compound its earnings over the years. These principles are indeed eternal as well as universal; they apply with equal force in the information economy.

However, the economics of information goods is unusual and non-intuitive enough to merit a re-conceptualization, ab initio, of competitive advantage, moats, and the basics of valuation.

In the end I feel that this creative frisson is exactly what makes it an exciting time to live at the center of the ever-growing intersection of value investing and the information economy. Living on the fault line may be tricky, but the future looks promising – the world is coming our way!


Disclosure: As of the time of writing this post, I am long many of the information economy stocks mentioned in this post, including Amazon, Google, Facebook, Twitter, Express Scripts, Intel, and Microsoft. This post is not meant to be and should not be construed as investment advice of any sort. Investing is extremely difficult, the risk of permanent loss is high, and past results are meaningless in the future. 

The Moore-Andreessen Disruptive Creation Loop

Moore-Andreessen Loop

This post combines two famous but separate observations about technology advances, made 46 years apart, into a single integrated feedback loop that I think has been the major source of disruptive creation in the world over the last fifty years; it continues, unabated, today.

  1. Moore’s Law: This is not really a law in the sense of the scientific laws of nature; rather, it is a prescient observation made in 1965 by Intel co-founder Gordon Moore about the exponential pace of change in the semiconductor industry. He noticed that the transistor count in semiconductors seemed to double every couple of years. Surprisingly, his observation continues to be valid nearly fifty years after he first made it. This relentless advance has been like a giant clock that drives the entire technology industry forward every year.
  2. Venture capitalist Marc Andreessen’s observation (first made, I think, in the pages of the WSJ in 2011) that “software is eating the world” trenchantly captures the fact that software replaces more and more functions formerly performed by humans or physical machines with every passing year. This, too, is a relentless advance that feels like a force of nature in its power to disrupt.

Moore’s Law is the result of human ingenuity and hard work, fueled by ever-rising capital expenditure and R&D on semiconductor fabrication processes. Each advance has required major breakthroughs in material science, industrial processes, and applied quantum physics. Amazingly, such advances have indeed been made just in time to preserve the cadence of the “law” for over four decades. The following graphic captures the fact that the number of transistors on a chip has been doubling every two years — i.e., growing at more than 40% per year!

Transistor_Count_and_Moore's_Law_-_2011.svg
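
The “more than 40%” figure follows directly from the two-year doubling; a quick back-of-the-envelope check:

```python
# Doubling every two years implies an annual growth rate of 2**(1/2) - 1.
annual_rate = 2 ** 0.5 - 1
print(f"Annual rate: {annual_rate:.1%}")                  # ~41.4% per year

# Compounded for a decade, that is 2**5 = 32x.
print(f"Ten-year multiple: {(1 + annual_rate) ** 10:.0f}x")
```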

But the “ante” in capex and R&D dollars needed for such complex advances rises every year. Intel, Samsung, Taiwan Semiconductor, et al. nowadays spend ten-plus billion dollars every year to keep the semiconductor manufacturing process advances coming. So where are the returns needed to fund these investments coming from?

This is where the crucial role played by software comes in. As software becomes more and more capable, it can do more and more (a tautology!), thus expanding the market for semiconductors. And the expanding semiconductor market, in turn, justifies an increase in the capex and R&D investments required to drive Moore’s Law forward.

There are two parallel channels through which software becomes more capable as a result of better semiconductors: better and cheaper computing, and better and cheaper communications. Think of the speed and cost of today’s internet and smartphones compared to the slow modem lines and clunky computers of even a decade ago. These advances are synergistic, and both are critical enablers of the advances in programming techniques and tools needed for software to eat more of the world.

As software captures more functions, it expands the market for computers and communications. Nearly 2 billion people already carry the powerful supercomputers in their pockets known as smartphones, and serious projections suggest the possibility of 4 to 5 billion people having smartphones communicating with each other and running powerful apps in the next few years.

Every part of the loop sketched at the top of this post is important to, and mutually supportive of, all the other parts:

  • Software, by itself, cannot advance at 40%-per-year rates without improved hardware to run on and improved networking speeds. The major advances in software tools — from assembly coding to higher-level languages to object-oriented languages, from waterfall to Agile programming techniques — are all fundamentally enabled by the faster computing speeds provided by the underlying hardware.
  • Computers, by themselves, would be starved of data to compute if the speed and reach of the communication pipes linking them together did not advance along with them. Imagine connecting your super-powerful, instant-on tablet to one of the slow dial-up modems of the old days!
  • The massive capex in semiconductors needed to improve both computers and communications cannot be justified without the expanded market provided by software eating more of the world.
  • And so on; all components of this positive feedback loop have to advance together, mutually reinforcing each other, to keep it going.

Scaling the Power of the Loop

The scale of this phenomenon is not that easy to grasp. In personal conversations, people nod knowingly, thinking that all I am saying is that technology progresses every year — and of course it does. But that is not my point at all. There is probably no other phenomenon, natural or artificial, that has shown this kind of exponentially fast growth over such an extended period of time. I can safely claim this because of the unusual power of compounding: if anything else had been compounding at this high a rate for so many decades, we would already see it nearly everywhere!

To appreciate the stupefying scale of the phenomenon, let me contrast it to another technological revolution.

During the Industrial Revolution, that supernova of all changes in the human condition, growth in Western Europe accelerated by a puny 1% per year! Even this tiny amount, though, when steadily compounded for a hundred years (most historians date the Industrial Revolution roughly from 1750 to 1850), resulted in a massive improvement in living standards compared to the complete lack of growth for the thousands of years before it (see the zero net growth in income per person during the so-called “Malthusian Trap” period, to the left of the Industrial Revolution in the graphic below; source: figure 1.1 from a recent book on this subject by economic historian Gregory Clark).

great-divergence-graph Clark

So if the Industrial Revolution changed the world so dramatically with a mere 1% growth compounded for a hundred years, consider the relative impact of Moore’s Law, which has compounded at over 40% per year for nearly 50 years!
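
Putting numbers on that contrast makes the point vivid:

```python
# 1% per year for a century (Industrial Revolution) versus a doubling
# every two years for fifty years (Moore's Law).
industrial = 1.01 ** 100    # ~2.7x over a hundred years
moore      = 2 ** (50 / 2)  # 2**25, roughly 33.5 million-fold

print(f"Industrial Revolution: {industrial:.1f}x")
print(f"Moore's Law:           {moore:,.0f}x")
```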

Anything growing at this blistering pace becomes millions of times better (faster and cheaper) in only a few decades, and indeed our computers show exactly such improvement – that smartphone in our pocket really is more than a million times more powerful than the most powerful computer on the planet from fifty years ago. It is hard to think of many other examples that show such an astounding growth pattern; and the examples that do come to mind seem to be, in one way or another, intersected and transformed by the power of the Moore-Andreessen loop. Try thinking of a counter-example!

Shift in Scarcity

This sort of massive change can cause a fundamental shift in what is scarce and what is abundant every few years. In my opinion, it is this frequent shift in the point of scarcity that has been the real driver of the constant disruption that characterizes our information age. Like an avatar of the ancient god Shiva of Hindu mythology, who is said to destroy in order to create anew, this loop is a powerful force of disruptive creation: companies that are better aligned with it can — and often do — replace incumbents that are slower to adapt to its consequences.

A corollary: SoCs are eating the surrounding chips

A phenomenon strikingly similar to “software eating the world” seems to be happening right on the motherboards of our smartphones, tablets, and laptops. A “System on a Chip” (abbreviated SOC or SoC), as the name implies, contains a whole system of modules on one silicon chip, absorbing more and more functions formerly performed by independent chips. Functions such as graphics, modems, DSP, and various kinds of memory have already been absorbed.

I think the abandonment of the baseband chip market by Texas Instruments and Broadcom is the latest example of this corollary phenomenon. The baseband function now forms just a piece of a larger SoC produced by companies like Qualcomm, Mediatek, and Intel. The economics of producing the baseband as part of an integrated SoC became increasingly favorable compared to the standalone baseband chips made by TI and Broadcom, until the latter finally got eaten up.

I see this as a trend powered by essentially the same kind of loop as diagrammed above: Moore’s Law shrinks semiconductors and interconnection technologies with every new generation, and this combines with the ever-growing power of chip-design software. The expanded market (e.g., billions of smartphones and perhaps tens of billions of “internet of things” devices) enabled by the resulting SoCs fuels the necessary investments in capex and R&D, completing the loop.


Disclosure: I am long Qualcomm and Intel in various portfolios at the time this post was written.

Malone’s moves: a chess analogy

John Malone, one of the eight CEOs mentioned in my post about the Outsiders book, is featured in Cable Cowboy, a book that tries to describe how he built his cable empire and, in the process, compounded the stock of his cable company TCI by an astounding 30.3% per year for over 25 years.
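
To appreciate what that rate means, a one-line check (the only input is the book’s 30.3% figure) shows the overall multiple:

```python
# Compounding at 30.3% per year for 25 years:
print(f"Each dollar became roughly ${1.303 ** 25:,.0f}")   # ~$750
```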

Written by Mark Robichaux, a Wall Street journalist who covered several of Malone’s deals over the years, the book provides some behind-the-scenes color around his myriad cable and content deals.

But it does not, at least to my satisfaction, explain exactly what Malone did that was so different from other CEOs. He was clearly very smart and well educated – he graduated from Yale University with a B.A. in Electrical Engineering and Economics as a Phi Beta Kappa and National Merit scholar, and obtained an M.S. in Electrical Engineering from an NYU program at Bell Labs as well as a Ph.D. in Operations Research from Johns Hopkins.

And the book describes how Malone learned about cable directly from some of its early pioneers. He was clearly good at financial engineering and pioneered many of the techniques used by private equity firms today, such as the aggressive use of debt leverage. He could do this because he realized early that cable revenues were reliably recurring, like a utility’s (but unregulated!), so they could support a lot of inexpensive debt. He also systematically maximized the tax benefits of financing his cable assets in this manner. I think he may have been somewhat lucky not to get wiped out at various points in his career while operating with such a high level of leverage.

But beyond these operational strengths, I think he was particularly good at multiplying value via his deal-making. He was perpetually buying and selling various cable and content assets, but it’s not obvious how all that wheeling and dealing actually created value.

An analogy struck me while reading about some of his deals: the concept of trading small advantages in chess.

Let’s say you start with a pawn sacrifice in the opening to get a move advantage. As time passes, unless one plays forcefully, this temporal advantage can quickly dissipate. So good players often convert it into a positional advantage if the opportunity presents itself. A positional advantage is more structural and hence more robust. Later, in the end-game, it can in turn be converted into another kind of advantage – a passed pawn, or perhaps a sacrifice to get an attack on the opponent’s king. Thus there is a constant trading of advantages, from transient to more permanent ones, depending upon the board situation. And a skilled player can usually translate this kind of trading of small advantages into a win.

I get the sense that Malone was very good at doing something equivalent in his business deals.

Using his deep knowledge of the cable industry, he could sense when a cable asset became available at an attractive price. He had a good sense of the intrinsic value of the cash flows of cable assets. He would opportunistically buy such a mispriced asset even if it was not what he ultimately wanted (e.g., not in a region where he was building a roll-up of cable assets). Just as in chess, you collect a small advantage when you can, even when it’s not a mating attack on the opponent’s king. You do that to get something to trade with.

Then he would patiently wait, sometimes for years, before an opportunity came to sell this asset, which would usually have appreciated by then (since he bought it when it was distressed). He would use the cash from this sale to then turn around and buy an asset that he really wanted all along. Or buy back shares in TCI if they were undervalued.

Thus he avoided overpaying for premium assets – the downfall of many of his competitors, who were pursuing so-called “strategic M&A” on the advice of their investment bankers.

An example of his patience was evident in how he waited to buy content assets (programming) until he had built enough scale by rolling up a bunch of regional cable providers. Once he had enough scale in cable distribution, he was in a strong negotiating position to acquire content assets on favorable terms.

And, since he could distribute the content to more subscribers than his competition, he was able to net more cash flow from his content assets. He would then leverage this additional cash-flow by raising more debt and buying more cable subscribers. And so on. This kind of virtuous cycle (more subscribers -> more content -> more subscribers) with increasing returns to scale can indeed explain compounded returns of 30% per year for more than two decades.

He was able to get to this point by systematically trading one advantageous deal into another, like a master chess player, thus multiplying overall value (in other words, by multiplying what economists call gains from trade).

The book ends triumphantly by describing how he crowned his career by selling TCI to AT&T, once again opportunistically, when he judged that they were paying an attractive price for it.

And then he is supposed to have retired. 

Except he did not!

I think Malone is still playing this game, only this time in Europe, even in his seventies!

A company he chairs and controls, Liberty Global, is well along the way to owning a cable franchise that dominates Europe. There are significant economies of scale in doing so in such a dense and contiguous geographical area – just think of a cable truck being able to efficiently serve neighboring regions vs. one that services installations scattered all over the map.

And yet Liberty Global also owns some assets in Chile, completely disconnected from Europe! This fact puzzled me when I was initially analyzing the company, until the chess analogy came to mind. I now suspect Malone bought the Chilean cable assets opportunistically, when they were available cheap, knowing full well that he would trade them later for what he really wanted – assets in the dense areas of Europe.

Indeed, the Liberty Global CEO is now talking about selling the Chilean cable assets, and the company is in the process of buying more cable in the Netherlands, contiguous with its other European cable units.

In another repeat of the TCI playbook, Liberty Global is only now going about acquiring content in Europe. It has begun by making some small investments recently (e.g., a small position in ITV), but clearly waited until it had rolled up enough distribution muscle before doing so. At this point it is already the largest cable company in Europe (by number of subscribers) and thus can clearly get very attractive terms from any content producer there. And just like TCI, it can then monetize the acquired content better than others, since it has the largest number of subscribers.

As Yogi Berra said, it’s déjà vu all over again.

 

Disclosure: I am long Liberty Global (LBTYA) in various personal and professional portfolios.

The Disruption Controversy

There has been a lot of hubbub on social media around a recent New Yorker article by Harvard historian Jill Lepore sharply disputing the famous “disruptive innovation” model of Clayton Christensen, who teaches at the business school of the same university. Christensen has responded, with apparent disappointment and anger, in an interview he recently gave to BusinessWeek. Many Silicon Valley VCs have come out in support of Christensen’s theory on Twitter, claiming that they practically live and breathe it in their daily hunt for disruptive startups.

A second, and ostensibly unrelated, meme has also recently gone viral on Twitter. I think this was due to über-VC Marc Andreessen, who devoted one of his famous tweetstorms to an offhand remark by his buddy Jim Barksdale, the former CEO of Netscape:

… there’s only two ways I know of to make money: bundling and unbundling.

I think these two powerful ideas are actually related, and underlying both is that ultimate force of technological change – Moore’s Law. I will use this (rather long and meandering) post to try and clarify my own thoughts on the topic, since it is of high interest to me as an investor in technology-related moated companies.

Christensen’s theory of Disruptive Innovation

A provocative analysis of this idea can be found in the work of blogger Ben Thompson. He has illustrated the key idea in one of his distinctive sketches. The path of disruption looks something like the orange line in this:

Adapted from Figure 5-1 in The Innovator’s Solution, Christensen and Raynor

The key thing to notice is that products improve more rapidly than consumer needs expand. This means that while the incumbent product may once have been subpar, over time it becomes “too good” for most customers, offering features they don’t need yet charging for them anyway. Meanwhile, the new entrant has an inferior product, but at a much lower price, and as its product improves – again, more rapidly than consumer needs – it begins to peel away customers from the incumbent by virtue of its lower price. Eventually it becomes good enough for nearly all of the consumers, leaving the incumbent high and dry.
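
A toy numeric version of that sketch (all rates hypothetical) shows how quickly the crossover arrives once the entrant improves faster than needs grow:

```python
# Toy disruption model: the entrant improves at 8 "quality units"/year,
# but customer needs grow at only 2/year, so the cheap entrant catches up.
need, entrant = 50.0, 20.0
for year in range(1, 11):
    need += 2
    entrant += 8
    if entrant >= need:
        print(f"Year {year}: entrant is good enough ({entrant:.0f} vs {need:.0f})")
        break
```

In this made-up example the entrant crosses the “good enough” line in year five; from then on, its lower price wins.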

In an interesting post on this topic that predated the firestorm caused by Lepore, Thompson suggested that there are actually two types of disruption theories articulated by Christensen over time; he argued that one of them is flawed (emphasis in the following excerpt is mine):

The original theory of disruption, now known as new market disruption, was detailed in Christensen’s seminal paper Disruptive Technologies: Catching the Wave and expanded on in the classic book The Innovator’s Dilemma. Based primarily on a detailed study of the disk drive industry, the theory of new market disruption describes how incumbent companies ignore new technologies that don’t serve the needs of their customers or fit within their existing business models. However, as the new technology, which excels on completely different attributes than the incumbent’s product, continues to mature, it eventually takes over the market.

This remains an incredibly elegant and powerful theory, and I fully subscribe to it. We are, in fact, seeing it in action with Windows – the incumbent – and the iPad and other tablets; new technology that is inferior on attributes that matter to Windows’ best customers, but superior on other attributes that matter to many others.

And:

It is Christensen’s second theory of disruption – low-end disruption – that I believe is flawed … Briefly, an integrated approach wins at the beginning of a new market, because it produces a superior product that customers are willing to pay for. However, as a product category matures, even modular products become “good enough” – customers may know that the integrated product has superior features or specs, but they aren’t willing to pay more, and thus the low-priced providers, who build a product from parts with prices ground down by competition, come to own the market.

Thompson then goes on to argue that Christensen has been badly and repeatedly wrong in his predictions of the demise of Apple products based on this second, low-end disruption version of his theory. He correctly points out that, in fact, Apple’s hit iPhone at its initial launch was clearly much more expensive than many of the products it successfully “disrupted”: iPods, dumb cellphones, PDAs, GPS navigators, etc. And the iPhone launched with a rich superset of the features of the disrupted devices rather than a cheaper subset. This was not supposed to happen according to the “low-end” version of disruption theory.

But in a recent post on the success of Chromebooks, Thompson himself seems to be using this low-end version of the disruption hypothesis, contradicting his earlier assertion! Apparently, even the low-end disruption idea does work at times.

So the questions remain: Exactly when does which theory predict successfully? Under what circumstances?

Barksdale-Andreessen theory of Bundling and Unbundling

As HBR’s blog explains, Barksdale, a veteran of IBM, FedEx, and McCaw Cellular, “was brought on a few months after Netscape’s founding to provide adult supervision as its CEO”. He made his bundling and unbundling comment at the end of an investor roadshow, in answer to Microsoft’s decision to bundle Internet Explorer with Windows. In the HBR interview he elaborates:

I had worked for several businesses during my career by that time that had become conglomerates, some fairly large, and then had divested themselves of various businesses. I’m on the board of Time Warner, we have just parsed off our third major part — our original company, Time Inc., which is the publishing arm of Time Warner. We [already had] divested ourselves of Time Warner Cable as well as AOL. So, it’s not uncommon to add a bunch of companies together, much less software products, and then divest yourself of them as the shareholders think they have more value standing alone than standing together. You do it to get your stock price up.

… It’s easier to do in the digital age. It’s easier to bundle and unbundle digital products …

In his tweetstorm, Andreessen gives an example of how this process works:

1/A story of unbundling in the tech industry: 20 years of consumer Internet evolution —

2/Once upon a time there was AOL, which was a completely integrated Internet access/information/communication service.

3/Then Yahoo came along and unbundled the information/communication parts like email/IM/sports-scores/stock-quotes from the access service.

4/One of the things you could do on Yahoo was search, then Google came along and unbundled that.

5/You can search for anything on Google, including people; Facebook came along with a much better way to just search for people.

6/Three things you can do on Facebook are messaging, photo sharing, and status updates; therefore Whatsapp, Instagram, and Twitter.

7/And yes, Yo unbundles the creation & existence of a message from the contents of a message, unbundling Whatsapp and Twitter :-).

8/Ev Williams () is the modern genius of this concept–playing out in our industry continuously since the 1950’s.

9/The part people often miss is that you can get extremely powerful second/third order effects at each step with this pattern.

10/The entrepreneurs generally have a pretty good sense of this when they’re doing it, but it doesn’t become clear to others until later.

11/This is a pattern that we love to fund: unbundle X from Y, but then use the liberation of X as leverage to do amazing new things with X.

12/And the howls of press and analyst outrage at the apparent stupidity of each unbundling are very helpful for keeping valuations down :-).

1/The flip side of unbundling: Later on, the unbundlers tend to try to rebundle in the image of whatever they unbundled.

2/So Yahoo adds an ISP (), and Google adds email/IM/sports-scores/stock-quotes.

3/Twitter changes its user profile page to look more like Facebook :-).

4/Sun unbundled DEC with commodity components, then re-bundled into a proprietary computing stack just like DEC w/Solaris, Sparc, etc.

5/Microsoft likewise unbundled DEC minicomputers w/PC OS + tools, then rebundled into DEC-like integrated stack now including hardware (!).

6/Paraphrasing Harvey Dent: “You either die a hero or you live long enough to see yourself become the company you first competed with.”

7/And then sometimes the rebundlers realize what they’re doing and try to reverse course. E.g. Microsoft building apps for iOS & Android.

8/And thus the cycle of life repeats with yet more unbundling :-).

The key driver underneath all this is technological change – the bundles emerge as a consequence:

And so the newspaper bundle, the idea of this slug of news and sports scores and classifieds and stock quotes that arrives once a day was a consequence of the printing plant. Of the metro area printing plant, of the distribution network for newspapers using trucks and newsstands and newspaper vending machines and the famous newspaper delivery boy. That newspaper bundle was based on the distribution technology of a time and place.

When the distribution technology changed with the internet, there was going to be the great unwind, and then the great rebundle, in the form of Google and Facebook and Twitter and all these new bundles.

I think music is a great example of that. It made sense in the LP and CD era to [bundle] eight or 10 or 12 or 15 songs on a disc and press the disc and ship it out and have it sit in storage until somebody came along and bought it.

But, when you have the ability online to download or stream individual tracks, then all of a sudden that bundle just doesn’t make sense. So it [unbundled] into individual MP3s.

And I think now it makes sense that it’s kind of re-bundling into streaming services like Pandora and Spotify.

And the bundling or unbundling of the product actually directly affects the bundling or unbundling of the business:
 
So one of the other things you see happening in music now is actually the music industry getting reconfigured and being split out. There are now companies that are entirely online record labels that have started from scratch. Or there are companies that are entirely focused on merchandise sales. There are companies entirely focused on touring. And the old record labels that are still bundled businesses corresponding to a bundled product offering are struggling to adapt to this new world with lots of new competitors that are effectively unbundled.
 
Andreessen goes on to explain how this pattern helps him identify a promising startup:
 

Often, a key characteristic of large incumbents in any industry is, they have a bundle that is accumulated over time, for the reasons that Jim described [“because it’s an effective growth strategy. Once you try to grow the business, it’s an easier out to stay focused on your core and then add things to it.  And you become a big bundle again”].

And then what we look for is for something to have changed in the underlying technology. The arrival of the Internet was a big one. The arrival of mobile distribution. The arrival of social networks. The arrival of Bitcoin is a current example.

So, we look for something to change in the underlying technology, and then basically say, “Well, you know, gee, if you were to sit down today with a clean sheet of paper, and you knew that the technology was changing, then what would be the proper form of the product, if you were starting from scratch?

That’s the question that’s always the hardest for an incumbent to ask, because that’s the classic innovators dilemma. And that’s the question that’s the easiest for the startup to ask, because the startup literally is somebody sitting down with a clean sheet of paper. All they have is the ability to think from first principles, think from scratch.

I would say we look actively for the pattern of large incumbent, established industry, bundled product or service offering, coupled with underlying technology change, coupled with idea for unbundled product that the customer might prefer, and then of course coupled with an entrepreneur who can actually build a business around that. I think that’s a fairly common pattern.

Andy Grove’s theory of vertical-to-horizontal transition

But even before Christensen and Netscape, way back in the early 1980s, Intel’s legendary former CEO Andy Grove had already commented on the remarkable transition from vertical silos to horizontal modules that was then completely disrupting the landscape of the computer industry:
 
 
This change is obviously related to both the bundling/unbundling theory and the disruption theory. But I think the key underlying driver in Grove’s model is the all-powerful Moore’s Law (the observation that silicon chips roughly double in performance at the same price point every couple of years).
 
My Take: Both disruption and bundling/unbundling arise from shifts in the point of scarcity
 
I think both these phenomena are really just long-term consequences of Moore’s Law. Let me explain:
 
  • Every few years, like clockwork, the incessant, exponential rise in price/performance of silicon hardware alters what is scarce and what is abundant in the value chain of any product that depends upon silicon.
  • Economic value shifts to whatever is scarce. The newly abundant becomes a commodity and loses its value.
  • It is precisely this constant shift in the point of scarcity over time that ultimately drives all the disruptive bundling, unbundling and rebundling that goes on in so many industries.
 
A particularly powerful consequence of Moore’s Law is that software can do more and more things every few years. In other words, by exploiting cheaper and more powerful hardware, software “eats the world,” in Andreessen’s evocative phrase, encroaching on a wider and wider circle of companies and industries. As pointed out by Grove, this can change vertical value chains into horizontal ones. And then new value chains form around the winners.
 
Here is how I think the process works:
 
  • Every few years, this scarcity shift causes a specific “module” of a value chain bundle to become vastly more valuable than the surrounding pieces.
  • This creates an opportunity for value creation by unbundling. As Andreessen has observed, this can be exploited by a competitor (or entrepreneur) starting from scratch and offering just that valuable point of scarcity as an attractive product.
  • This unbundling disrupts the existing value chain incumbents. Some nimble companies may be able to co-opt this process and quickly adapt; those who cannot decline. The devil is in the details, as Lepore has pointed out. Of course, predicting the winners and losers in this transition cannot be easy, otherwise everyone would be a great investor! It is probably helpful to conduct the mental exercise suggested by Andreessen: what would be the proper form of the product if you were starting from scratch today? This is useful since some things can be done very differently – perhaps more simply (hence cheaply) – today due to the ongoing scarcity shift. For example, high-speed internet and smartphones make digital distribution nearly a free commodity today. Many businesses that have spent a lot of money on distribution infrastructure will find that many parts of their overall value proposition have become a commodity. Imagining building an existing product or service all over from scratch can make this fact pop out in our minds, leading to better predictions.
  • Once the unbundled product has succeeded in capturing its market, it’s fairly easy to rebundle more and more features around it to increase its value (as Barksdale has observed).
  • This goes on until Moore’s Law changes the game – the point of scarcity – again, causing the disruptive cycle to repeat!
 
I find this a clearer and deeper explanation of the observed disruptions than the simplistic model graphed at the beginning of this article. And I will venture to guess that if we removed from the various Christensen case studies all the examples that can be better explained by such a (Moore’s-law-driven) shift in scarcity, not much would be left that escapes Lepore’s valid criticism. However, I have not actually done this exercise, so I cannot be sure about this. Perhaps there will remain some types of disruption that are better explained by his model – it would be useful to know.
 
Update:
 
A conversation with my friend Alex led to a provocative question: surely there are other drivers of change than just Moore’s law, so why focus so much on that one force?
 
And, to be sure, there are many varieties of improvement that companies go through in raising their price/performance metrics over time. These could include various kinds of process learning (at an individual as well as a team level), as well as efficiencies that come from pumping a larger volume through manufacturing, for example.
 
It seems to me, though, that Moore’s Law is perhaps unique in having compounded at such a high rate for many decades – most processes have not done that. Compounding for that long leads to improvements by factors of millions and billions, and most processes simply cannot improve that much. So it’s particularly important to be on the right side of Moore’s Law; otherwise one risks being disrupted by its unique ability to shift the point of scarcity every few years, like clockwork.
 
Having said that, I think the point about the shift in the point of scarcity remains valid and useful even if there are other drivers of rapid change – Moore’s law is just one of many potential drivers of such change.
 
 
 

Why is this “mate in 3” so hard?

I like chess puzzles, and if you are like me, you know that a “mate in 3” can have only a limited number of solutions and can usually be solved within, say, 10 to 15 minutes (master-level players will of course be much faster). However, the following puzzle turned out to be much trickier, at least for me (I am only a club-level player). Before going on, give it a try:

Black to move and mate in 3 moves:

[chess diagram: Black to move and mate in 3]

If you can’t wait for the answer, you can see it fully described here by Joe Wiesenthal, the prolific economics editor of Business Insider. As he says:

“the eminent chess player and commenter Susan Polgar posted on her blog the following:

Black to move and checkmate in 3. Please no computer analysis. This is a very cool checkmate. Try to find it for yourself.

Now chess problems where you’re asked to just mate in 3 moves aren’t typically all that hard, so my curiosity was piqued by the fact that Polgar said not to use a computer. Obviously it couldn’t be that simple if you’re tempted to use a computer to solve a 3 move chess problem.

And it wasn’t! In fact I spent the evening looking at it yesterday without getting it.”

But now I am intrigued – exactly why is this elegant little puzzle so surprisingly hard?

And that I find to be an even more interesting puzzle, one of human psychology. I think a hint can be obtained by trying to solve an entirely different, almost trivial, problem in arithmetic. Try this:

A bat and a ball cost $1.10. The bat costs $1.00 more than the ball.

How much does the ball cost?

If you answered, like most people, 10 cents, then you are wrong!

The correct answer, which will be immediately obvious upon reflection, is 5 cents (since $1.05-$0.05 = $1.00).
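For those who like to see the algebra spelled out, here is a tiny symbolic check (my own illustration; the sympy library is an assumption of this sketch, not something used in Frederick’s paper):

```python
# Solve the bat-and-ball system symbolically:
#   bat + ball = 1.10  and  bat - ball = 1.00
from sympy import Eq, Rational, solve, symbols

bat, ball = symbols("bat ball")
solution = solve(
    [Eq(bat + ball, Rational(110, 100)),  # together they cost $1.10
     Eq(bat - ball, 1)],                  # the bat costs $1.00 more
    [bat, ball],
)
print(solution)  # {bat: 21/20, ball: 1/20}, i.e. $1.05 and $0.05
```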

This problem comes from the research of Shane Frederick, a collaborator of Daniel Kahneman, one of the world’s top cognitive psychologists. Frederick’s paper, “Cognitive Reflection and Decision Making” (Journal of Economic Perspectives, Volume 19, Number 4, 2005, pp. 25–42), describes what might be happening in such problems.

Here are two more arithmetic problems from the paper:

If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

If you said 100 minutes for the widgets or 24 days for the lake, then once again you got the wrong answer. But you are not alone – a surprising majority of students at elite universities, including math and physics types, get these elementary problems wrong (data in the paper cited above and in various follow-up papers).
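(The correct answers, for the record, are 5 minutes and 47 days.) A couple of one-line sanity checks make the reasoning explicit – again just my own sketch:

```python
# Widgets: each machine makes one widget in 5 minutes, so 100 machines
# working in parallel make 100 widgets in those same 5 minutes.
rate = 5 / (5 * 5)         # widgets per machine per minute = 0.2
print(100 / (100 * rate))  # 5.0 minutes, not 100

# Lily pads: the patch doubles daily and fills the lake on day 48,
# so it was half full exactly one doubling earlier.
print(48 - 1)              # 47 days, not 24
```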

So what is going on?

Our brains can be thought of as consisting of two separate but interacting systems. As Kahneman explains in his brilliant Nobel lecture:

“The operations of System 1 are fast, automatic, effortless, associative, and difficult to control or modify.

The operations of System 2 are slower, serial, effortful, and deliberately controlled; they are also relatively flexible and potentially rule-governed.”

System 1 can be thought of as the intuitive system and System 2 as the reflective system – what we normally call “thinking”. Obviously, neurons are firing in both cases, but System 1 feels so effortless that most people don’t realize the massive extent of neural processing involved in, say, seeing that we are looking at a chair, since such acts of perception are accomplished by the fast System 1.

It is likely that System 1 has “hardwired” the critical processing that our ancestors needed frequently, like perceiving objects and making very quick (“intuitive”) judgments. It is basically a pattern recognizer. But it can also learn new things after sufficient repetitions – it is where our habits reside.

System 2 is more flexible and algorithmic – like a computer in its style, albeit a very slow computer. Overall executive control is also part of System 2.

As the Frederick paper explains, the three arithmetic problems are

“easy” in the sense that their solution is easily understood when explained, yet reaching the correct answer often requires the suppression of an erroneous answer that springs “impulsively” to mind.

And that is exactly what I guess is going on with the chess puzzle above.

Every chess puzzle lover “knows” that a discovered-check pattern is often at the heart of many a pretty mating sequence. And sure enough, there is a very seductive discovered (and, indeed, double!) check available on the second move, after the obvious (and correct) rook check on the first move.

So our intuitive pattern detector jumps to the conclusion that this discovered check just has to be part of any solution. It leads us down the proverbial garden path, and we tend to waste a lot of time on this dead end.

The solution to the chess puzzle finally emerges only after we have somehow managed to suppress this discovered-check idea. Once that happened for me – hours later – I finally focused on the fact that the white king is completely locked in after the rook check on the first move. And after this “aha” moment, finding the two moves by the black bishop that deliver the coup de grâce was not too hard.

Of course, a computer would solve this problem in milliseconds, since the search tree is so small. And the computer, in a sense, is all logical System 2 with no intuitive System 1 to mislead it!
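To make that concrete, here is a minimal brute-force mate searcher (my own sketch, assuming the open-source python-chess library; the puzzle’s actual position is in the linked post, so the FEN below is left as a placeholder for you to paste in):

```python
import chess  # the python-chess library


def mate_in(board, n):
    """Return a move that forces checkmate within n moves, else None."""
    for move in board.legal_moves:
        board.push(move)
        wins = board.is_checkmate() or (n > 1 and opponent_is_lost(board, n - 1))
        board.pop()
        if wins:
            return move
    return None


def opponent_is_lost(board, n):
    """True if every legal reply still allows a mate within n of our moves."""
    replies = list(board.legal_moves)
    if not replies:
        return False  # stalemate is an escape, not a win
    for reply in replies:
        board.push(reply)
        escaped = mate_in(board, n) is None
        board.pop()
        if escaped:
            return False
    return True


# Usage (fen is a placeholder -- paste the puzzle's position here):
# board = chess.Board(fen)
# print(mate_in(board, 3))  # the mating first move
```

Even this naive exhaustive search visits a tree that is tiny by computer standards for a mate in 3, which is why the machine, untroubled by seductive patterns, finds the answer almost instantly.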

Unlike the arithmetic problems, there has been no research on this chess puzzle to my knowledge, so this explanation of why such a simple-looking mate is so hard for humans is just my best guess at this point.

Just in case this discussion makes System 1 appear dumb, it is worth keeping in mind that artificial intelligence programs still cannot come anywhere close to perceiving patterns that are trivial for humans. System 1 is also the likely source of creative insights, and the two systems together are responsible for all the glories of human achievement.

Incidentally, if you enjoy this sort of stuff, Kahneman’s Thinking, Fast and Slow is a true masterpiece and easily one of the best books I have read in the last few years.