It Is Changes in Abundance and Scarcity That Drive Disruption

[Diagram: the scarcity loop]

A key question in business (and hence in investing) is: what drives change? Why do dominant businesses get disrupted so frequently by challengers? I posit in this post that most of this disruption is a consequence of a shift in economic scarcity, mainly caused by technological advances.

Most businesses can be conceptualized as offering a product or service bundle of value to their customers. The bundle is made up of various modules that combine to provide the customer a valuable offering. I suggest that advances in technology cause changes in the relative scarcity or abundance in the underlying economics of these modules, and it is these changes in economics that create an opening for a challenger to topple a dominant business.

Consider an example given recently by Marc Andreessen:

“And so the newspaper bundle, the idea of this slug of news and sports scores and classifieds and stock quotes that arrives once a day was a consequence of the printing plant. Of the metro area printing plant, of the distribution network for newspapers using trucks and newsstands and newspaper vending machines and the famous newspaper delivery boy. That newspaper bundle was based on the distribution technology of a time and place.

When the distribution technology changed with the internet, there was going to be the great unwind, and then the great rebundle, in the form of Google and Facebook and Twitter and all these new bundles.

I think music is a great example of that. It made sense in the LP and CD era to [bundle] eight or 10 or 12 or 15 songs on a disc and press the disc and ship it out and have it sit in storage until somebody came along and bought it.

But, when you have the ability online to download or stream individual tracks, then all of a sudden that bundle just doesn’t make sense. So it [got unbundled] into individual MP3s.

And I think now it makes sense that it’s kind of re-bundling into streaming services like Pandora and Spotify.”

In general, once the deck of economic value has been shuffled by the shift in scarcity, it can create an opening for an entrepreneur to start from scratch by targeting a key module of the old bundle that is now relatively scarce (hence valuable) and leveraging the newly created abundance. The Moore's-law-driven plunge in the price of communications, for instance, is enabling a lot of startups to rethink existing business bundles by exploiting the “free” distribution available on the internet, just as iTunes did to unbundle CDs and Pandora is doing to iTunes in the example.

Once the challenger has won, it is fairly easy for the winner to bundle more and more features around the core module, to increase its value and to capture incremental market share. Of course, this process eventually sets up the bloated bundle to become a target for the next new challenger on the block, as technology changes the point of scarcity again!

The former CEO of Netscape, Jim Barksdale, captured this cycle of unbundling followed by bundling with his observation that “there’s only two ways I know of to make money: bundling and unbundling”; but he does not really explain why this should be so. Andreessen, in a recent tweetstorm, has provided a detailed example of this phenomenon of bundling and unbundling.

Thus the key driver of all the disruption and unbundling is technology-driven change in economic scarcity. A particularly powerful example of such a technology driver is the virtuous cycle of semiconductor and software advances feeding into each other, diagrammed below (I have previously written about this loop).

I think it is vital for a disruptor to succeed that it be better aligned with this loop than the product it is challenging!

This is why I think the Christensen model of disruption, while insightful, is not complete. It comes in two flavors: low-end disruption and new-market disruption. Neither is fully satisfying as an adequate model of disruption. A counter-example to Christensen’s framework is the fact that the expensive, richly-featured iPhone managed to completely disrupt the cheaper, less functional feature-phone business (e.g., Nokia), the exact opposite of what his model predicts! In my framework, by contrast, the key driver (powered by Moore’s law) was the relative abundance, and hence cheapness, of the internet. This allowed the iPhone to feature internet-enabled apps as the main attraction, rather than phone calls (in fact the initial iPhone was not all that great at making calls!). Thus the shift in scarcity/abundance created an opening for Apple to target internet connectivity as the core offering. This, I claim, is a better framework for explaining why Apple succeeded in disrupting plain old cellphones despite being much more expensive; it was clearly not an attack from the bottom. (To be sure, smartphones also disrupted PCs, and that fact can be explained as an attack from the bottom as well as an unbundling of PCs due to a shift in scarcity/abundance; I chose the disruption of dumb-phones in this example since it cannot be adequately explained by either of Christensen’s two frameworks.)

To be sure, there can be other kinds of technology changes that are not related to semiconductors. But it is really hard to find other examples of something that can grow 40% per year for nearly 50 years! Moore’s law is likely unique in this respect, which is why I think it plays such a crucial part in the persistence of the disruption phenomenon of the kind we have been experiencing in the last few decades.

The Moore-Andreessen Disruptive Creation Loop

[Diagram: the Moore-Andreessen Loop]

This post combines two famous but separate observations, made 46 years apart, about technology advances into a single integrated feedback loop that I think has been the major source of disruptive creation in the world over the last fifty years; it still continues, unabated, today.

  1. Moore’s Law: This is not really a law in the sense of scientific laws of nature; rather, it is a prescient observation made in 1965 by Intel’s co-founder Gordon Moore on the exponential pace of change in the semiconductor industry. He noticed that the transistor count in semiconductors seemed to double every couple of years. Surprisingly, his observation continues to be valid more than 49 years after he first made it. This relentless advance has been like a giant clock that drives the entire technology industry forward every year.
  2. Venture capitalist Marc Andreessen’s observation (first made, I think, on the pages of WSJ in 2011) that “software is eating the world” trenchantly captures the fact that software replaces more and more functions that were formerly performed by humans or physical machines with every passing year. This is also a relentless advance that feels like a force of nature in its power to disrupt.

Moore’s Law is the result of human ingenuity and hard work fueled by ever rising capital expenditure and R&D on semiconductor fabrication processes. Each advance has required major breakthroughs at the level of material science, industrial processes, and applied quantum physics. Amazingly, such advances have indeed been made just in time to preserve the cadence of the “law” for over four decades. The following graphic captures the fact that the number of transistors found on a chip has been doubling every two years — i.e., growing at more than 40% per year!
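
As a quick sanity check on that growth figure, here is the back-of-the-envelope arithmetic (my own illustration, not data from the post):

```python
# Doubling every two years implies an annual growth rate of 2**(1/2) - 1,
# i.e. about 41.4% per year -- which is where "more than 40%" comes from.
annual_rate = 2 ** (1 / 2) - 1
print(f"annual growth rate: {annual_rate:.1%}")

# Sustained for four decades, that cadence compounds to 2**(40/2) = 2**20,
# roughly a million-fold increase in transistor count.
forty_year_factor = 2 ** (40 / 2)
print(f"40-year improvement factor: {forty_year_factor:,.0f}")
```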


But the “ante” in terms of capex and R&D dollars needed for such complex advances rises every year. Intel, Samsung, Taiwan Semiconductor, et al. are nowadays spending ten-plus billion dollars every year to keep the semiconductor manufacturing process advances coming. So where are the returns needed to fund these investments coming from?

This is where the crucial role played by software comes in. As software becomes more and more capable, it can do more and more (a tautology!), thus expanding the market for semiconductors. And the expanding semiconductor market, in turn, justifies the increased capex and R&D investments required for driving Moore’s law forward.

There are two parallel channels that allow software to become more and more capable as a result of better semiconductors: better and cheaper computing, and better and cheaper communications. Think of the speed and cost of today’s internet and today’s smartphones compared to the slow modem lines and clunky computers of even a decade ago. Both these advances are synergistic and are critical enablers of the advances in programming techniques and tools needed for software to eat more of the world.

As software captures more functions, it expands the market for computers and communications. Nearly 2 billion people already carry in their pockets the powerful supercomputers known as smartphones, and serious projections suggest the possibility of 4 to 5 billion people having smartphones communicating with each other and running powerful apps within the next few years.

Every part of the loop sketched at the top of this post is important, and mutually supportive, to all the other parts:

  • Software, by itself, cannot advance at rates of around 40% per year without improved hardware to run on and improved networking speeds. The major advances in software tools — from assembly coding to higher-level languages to object-oriented languages, from waterfall to Agile programming techniques — are all fundamentally enabled by the faster computing speeds provided by the underlying hardware.
  • Computers by themselves would be completely starved of data to compute if the speed and reach of the communication pipes linking them together did not advance along with them. Imagine connecting your super-powerful, instant-on tablet to one of the slow dial-up modems of the old days!
  • The massive capex in semiconductors needed for improving both computers and communications cannot be justified without the expanded market provided by software eating more of the world.
  • And so on; all components of this positive feedback loop have to advance together, mutually reinforcing each other, to keep it going.
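
The mutual dependence of the components can be made concrete with a toy simulation of the loop (my own illustrative sketch; the 40% growth rate echoes the text, but the model structure and the cap are arbitrary assumptions): each component grows only as fast as the components feeding it allow, so capping any one of them, say hardware, eventually stalls the whole loop.

```python
def simulate(years, hardware_cap=float("inf")):
    """Toy model of the Moore-Andreessen loop; all units are arbitrary."""
    hardware, software, market = 1.0, 1.0, 1.0
    for _ in range(years):
        # Capex/R&D funded by the current market keeps Moore's law going
        # (unless progress hits the hypothetical cap)...
        hardware = min(hardware * 1.4, hardware_cap)
        # ...better chips enable more capable software, which cannot
        # outrun the hardware it runs on...
        software = min(software * 1.4, hardware)
        # ...and more capable software "eats" more of the world,
        # expanding the market that funds the next round of investment.
        market = software
    return market

print(simulate(20))                     # compounds to roughly 1.4**20, ~837x
print(simulate(20, hardware_cap=10.0))  # hardware stalls at 10x; so does everything else
```

The second call is the point: stall any one component and the entire loop flatlines, which is why every part of the diagram has to keep advancing together.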

Scaling the Power of the Loop

The scale of this phenomenon is not that easy to grasp. In personal conversations, people nod knowingly, thinking that all I am saying is that technology progresses every year, and of course it does. However, that is not my point at all. There is probably no other phenomenon, natural or artificial, that has shown this kind of exponentially fast growth for such an extended period of time. I can safely claim this because of the unusual power of compounding: if anything else had been compounding at this high a rate for so many decades, we would see it nearly everywhere today!

To appreciate the stupefying scale of the phenomenon, let me contrast it to another technological revolution.

During the Industrial Revolution, that supernova of all changes in the human condition, growth in Western Europe accelerated by a puny 1% per year! Even this tiny amount, though, when steadily compounded for a hundred years (most historians date the Industrial Revolution roughly between 1750 and 1850), resulted in a massive improvement in living standards compared to the complete lack of growth for the thousands of years before it (see the zero net growth in income per person to the left of the Industrial Revolution, during the so-called “Malthusian Trap” period, in the graphic below; source: figure 1.1 from a recent book on this subject by economic historian Gregory Clark).

[Graph: the Great Divergence (Clark, figure 1.1)]

So if the Industrial Revolution changed the world so dramatically with a mere 1% growth compounded for a hundred years, consider the relative impact on the world of Moore’s Law, which has compounded at over 40% per year for nearly 50 years!
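
The contrast can be made vivid with two lines of arithmetic (my own illustration; the 1% figure and the two-year doubling come from the text above):

```python
# 1% per year compounded over the ~100 years of the Industrial Revolution:
industrial = 1.01 ** 100
print(f"Industrial Revolution: about {industrial:.1f}x growth")

# Doubling every two years (Moore's law) compounded over 50 years:
moore = 2 ** (50 / 2)
print(f"Moore's law: about {moore:,.0f}x growth")
```

A ~2.7x improvement transformed the world; a ~33-million-fold improvement is of an entirely different character.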

Anything growing at this blistering pace should become millions of times better (faster/cheaper) in only a few decades, and indeed our computers do show exactly such improvement – that smartphone in our pocket really is more than a million times more powerful than the most powerful computer on the planet from fifty years ago. It is hard to think of many other examples that show such an astounding growth pattern; and the examples that do come to mind seem to be, in one way or another, intersected and transformed by the power of the Moore-Andreessen loop. Try thinking of a counter-example!

Shift in Scarcity

This sort of massive change can cause a fundamental shift in what is scarce and what is abundant every few years. In my opinion, it is this frequent shift in the point of scarcity that has been the real driver of the constant disruption that characterizes our information age. As if an avatar of the ancient god Shiva of Hindu mythology, who is said to destroy in order to create anew, this loop is a powerful force of disruptive-creation: companies that are better aligned with this loop can — and often do — replace incumbent companies that are slower in adapting to its consequences.

A corollary: SoCs are eating the surrounding chips

A phenomenon that is strikingly similar to “software eating the world” seems to be happening right within the motherboard of all the smartphones and tablets and laptops. A “System on a Chip” (abbreviated SOC or SoC), as the name implies, contains a whole system of modules on one silicon chip by absorbing more and more functions that were formerly performed by independent chips. Functions such as graphics, modems, DSP, and various kinds of memory have already been absorbed.

I think the abandonment of the baseband chip market by Texas Instruments and Broadcom is the latest example of this corollary phenomenon. The baseband function now forms just one piece of a larger SoC produced by companies like Qualcomm, Mediatek, and Intel. The economics of producing the function as part of an integrated SoC became increasingly tilted against the standalone baseband chips made by TI and Broadcom, until the latter finally got eaten up.

I see this as a trend powered by essentially the same kind of loop as diagrammed above: Moore’s law shrinks transistors and interconnection technologies with every new generation, and this combines with the ever-growing power of chip design software. The expanded market (e.g., billions of smartphones and perhaps tens of billions of “internet of things” devices) enabled by the resulting SoCs fuels the necessary investments in capex and R&D, completing the loop.

Disclosure: I am long Qualcomm and Intel in various portfolios at the time this post was written.

The Disruption Controversy

There has been a lot of hubbub in the social media around a recent New Yorker article by Harvard historian Jill Lepore sharply disputing the famous “disruptive innovation” model of Clayton Christensen, who teaches at the famous business school of the same university. Christensen has responded, with apparent disappointment and anger, in an interview he recently gave to BusinessWeek. Many Silicon Valley VCs seem to have come out in support of Christensen’s theory on Twitter, claiming that they practically live and breathe it in their daily hunt for disruptive startups.

A second, and ostensibly unrelated, meme has also recently gone viral on Twitter. I think this was due to über-VC Marc Andreessen, who devoted one of his famous tweetstorms to an offhand remark by his buddy, the former CEO of Netscape, Jim Barksdale:

… there’s only two ways I know of to make money: bundling and unbundling.

I think these two powerful ideas are actually related, and underlying both is that ultimate force of technological change – Moore’s law. I will use this (rather long and meandering) post to try and clarify my own thoughts on this topic, since it is of high interest to me as an investor in technology-related moated companies.

Christensen’s theory of Disruptive Innovation

A provocative analysis of this idea can be found in the work of blogger Ben Thompson. He has illustrated the key idea in one of his distinctive sketches. The path of disruption looks something like the orange line in this sketch:

Adapted from Figure 5-1 in The Innovator's Solution, Christensen & Raynor

The key thing to notice is that products improve more rapidly than consumer needs expand. This means that while the incumbent product may have once been subpar, over time it becomes “too good” for most customers, offering features they don’t need yet charging for them anyway. Meanwhile, the new entrant has an inferior product, but at a much lower price, and as its product improves – again, more rapidly than consumer needs – it begins to peel away customers from the incumbent by virtue of its lower price. Eventually it becomes good enough for nearly all of the consumers, leaving the incumbent high and dry.

In an interesting post on this topic that predated the firestorm caused by Lepore, Thompson suggested that there are actually two types of disruption theories articulated by Christensen over time; he argued that one of them is flawed (emphasis in the following excerpt is mine):

The original theory of disruption, now known as new market disruption, was detailed in Christensen’s seminal paper Disruptive Technologies: Catching the Wave and expanded on in the classic book The Innovator’s Dilemma. Based primarily on a detailed study of the disk drive industry, the theory of new market disruption describes how incumbent companies ignore new technologies that don’t serve the needs of their customers or fit within their existing business models. However, as the new technology, which excels on completely different attributes than the incumbent’s product, continues to mature, it eventually takes over the market.

This remains an incredibly elegant and powerful theory, and I fully subscribe to it. We are, in fact, seeing it in action with Windows – the incumbent – and the iPad and other tablets; new technology that is inferior on attributes that matter to Windows’ best customers, but superior on other attributes that matter to many others.


It is Christensen’s second theory of disruption – low-end disruption – that I believe is flawed … Briefly, an integrated approach wins at the beginning of a new market, because it produces a superior product that customers are willing to pay for. However, as a product category matures, even modular products become “good enough” – customers may know that the integrated product has superior features or specs, but they aren’t willing to pay more, and thus the low-priced providers, who build a product from parts with prices ground down by competition, come to own the market.

Thompson then goes on to argue that Christensen has been badly and repeatedly wrong in predicting the demise of Apple products based on this second, low-end disruption version of his theory. He correctly points out that, in fact, Apple’s hit iPhone at its initial launch was clearly much more expensive than many of the products it successfully “disrupted”: iPods, dumb cellphones, PDAs, GPS navigators, etc. And the iPhone launched with a rich superset of the features of the disrupted devices, rather than a cheaper subset. This was not supposed to happen according to the “low-end” disruption theory.

But in a recent post on the success of Chromebooks, Thompson himself seems to use this low-end version of the disruption hypothesis, contradicting his earlier assertion! Apparently, even the low-end disruption idea does work at times.

So the questions remain: Exactly when does which theory predict successfully? Under what circumstances?

Barksdale-Andreessen theory of Bundling and Unbundling

As HBR’s blog explains, Barksdale, a veteran of IBM, FedEx, and McCaw Cellular, “was brought on a few months after Netscape’s founding to provide adult supervision as its CEO”. He made his bundling-and-unbundling comment at the end of an investor roadshow, in answer to Microsoft’s decision to bundle Internet Explorer with Windows. In the HBR interview he elaborates:

I had worked for several businesses during my career by that time that had become conglomerates, some fairly large, and then had divested themselves of various businesses. I’m on the board of Time Warner, we have just parsed off our third major part — our original company, Time Inc., which is the publishing arm of Time Warner. We [already had] divested ourselves of Time Warner Cable as well as AOL. So, it’s not uncommon to add a bunch of companies together, much less software products, and then divest yourself of them as the shareholders think they have more value standing alone than standing together. You do it to get your stock price up.

… It’s easier to do in the digital age. It’s easier to bundle and unbundle digital products …

In his tweetstorm, Andreessen gives an example of how this process works:

1/A story of unbundling in the tech industry: 20 years of consumer Internet evolution —

2/Once upon a time there was AOL, which was a completely integrated Internet access/information/communication service.

3/Then Yahoo came along and unbundled the information/communication parts like email/IM/sports-scores/stock-quotes from the access service.

4/One of the things you could do on Yahoo was search, then Google came along and unbundled that.

5/You can search for anything on Google, including people; Facebook came along with a much better way to just search for people.

6/Three things you can do on Facebook are messaging, photo sharing, and status updates; therefore Whatsapp, Instagram, and Twitter.

7/And yes, Yo unbundles the creation & existence of a message from the contents of a message, unbundling Whatsapp and Twitter :-).

8/Ev Williams is the modern genius of this concept–playing out in our industry continuously since the 1950’s.

9/The part people often miss is that you can get extremely powerful second/third order effects at each step with this pattern.

10/The entrepreneurs generally have a pretty good sense of this when they’re doing it, but it doesn’t become clear to others until later.

11/This is a pattern we love to fund: unbundle X from Y, but then use the liberation of X as leverage to do amazing new things with X.

12/And the howls of press and analyst outrage at the apparent stupidity of each unbundling are very helpful for keeping valuations down :-).

1/The flip side of unbundling: Later on, the unbundlers tend to try to rebundle in the image of whatever they unbundled.

2/So Yahoo adds an ISP, and Google adds email/IM/sports-scores/stock-quotes.

3/Twitter changes its user profile page to look more like Facebook :-).

4/Sun unbundled DEC with commodity components, then re-bundled into a proprietary computing stack just like DEC w/Solaris, Sparc, etc.

5/Microsoft likewise unbundled DEC minicomputers w/PC OS + tools, then rebundled into DEC-like integrated stack now including hardware (!).

6/Paraphrasing Harvey Dent: “You either die a hero or you live long enough to see yourself become the company you first competed with.”

7/And then sometimes the rebundlers realize what they’re doing and try to reverse course. E.g. Microsoft building apps for iOS & Android.

8/And thus the cycle of life repeats with yet more unbundling :-).

The key driver underneath all this is technology change – the bundles emerge as a consequence:

And so the newspaper bundle, the idea of this slug of news and sports scores and classifieds and stock quotes that arrives once a day was a consequence of the printing plant. Of the metro area printing plant, of the distribution network for newspapers using trucks and newsstands and newspaper vending machines and the famous newspaper delivery boy. That newspaper bundle was based on the distribution technology of a time and place.

When the distribution technology changed with the internet, there was going to be the great unwind, and then the great rebundle, in the form of Google and Facebook and Twitter and all these new bundles.

I think music is a great example of that. It made sense in the LP and CD era to [bundle] eight or 10 or 12 or 15 songs on a disc and press the disc and ship it out and have it sit in storage until somebody came along and bought it.

But, when you have the ability online to download or stream individual tracks, then all of a sudden that bundle just doesn’t make sense. So it [unbundled] into individual MP3s.

And I think now it makes sense that it’s kind of re-bundling into streaming services like Pandora and Spotify.

And the bundling or unbundling of the product actually directly affects the bundling or unbundling of the business:

So one of the other things you see happening in music now is actually the music industry getting reconfigured and being split out. There are now companies that are entirely online record labels that have started from scratch. Or there are companies that are entirely focused on merchandise sales. There are companies entirely focused on touring. And the old record labels that are still bundled businesses corresponding to a bundled product offering are struggling to adapt to this new world with lots of new competitors that are effectively unbundled.

Andreessen goes on to explain how this pattern helps him identify a promising startup:

Often, a key characteristic of large incumbents in any industry is, they have a bundle that is accumulated over time, for the reasons that Jim described [“because it’s an effective growth strategy. Once you try to grow the business, it’s an easier out to stay focused on your core and then add things to it.  And you become a big bundle again”].

And then what we look for is for something to have changed in the underlying technology. The arrival of the Internet was a big one. The arrival of mobile distribution. The arrival of social networks. The arrival of Bitcoin is a current example.

So, we look for something to change in the underlying technology, and then basically say, “Well, you know, gee, if you were to sit down today with a clean sheet of paper, and you knew that the technology was changing, then what would be the proper form of the product, if you were starting from scratch?”

That’s the question that’s always the hardest for an incumbent to ask, because that’s the classic innovator’s dilemma. And that’s the question that’s the easiest for the startup to ask, because the startup literally is somebody sitting down with a clean sheet of paper. All they have is the ability to think from first principles, think from scratch.

I would say we look actively for the pattern of large incumbent, established industry, bundled product or service offering, coupled with underlying technology change, coupled with idea for unbundled product that the customer might prefer, and then of course coupled with an entrepreneur who can actually build a business around that. I think that’s a fairly common pattern.

Andy Grove’s theory of vertical-to-horizontal transition

But even before Christensen and Netscape, way back in the early 1980s, Intel’s legendary former CEO Andy Grove had already commented upon the remarkable transition from vertical silos to horizontal modules that was then completely disrupting the landscape of the computer industry.

This change is obviously related to both the bundling/unbundling theory and the disruption theory. But I think the key underlying driver in Grove’s model is the all-powerful Moore’s law (the observation that silicon chips roughly doubled in performance at the same price point every couple of years).

My Take: Both disruption and bundling/unbundling arise from shifts in the point of scarcity

I think both these phenomena are really just long-term consequences of Moore’s law. Let me explain:
  • Every few years, like clockwork, the incessant, exponential rise in price/performance of silicon hardware alters what is scarce and what is abundant in the value chain of any product that depends upon silicon.
  • Economic value shifts to whatever is scarce. The newly abundant becomes a commodity and loses its value.
  • It is precisely this constant shift in the point of scarcity over time that ultimately drives all the disruptive bundling, unbundling and rebundling that goes on in so many industries.

A particularly powerful consequence of Moore’s law is that software can do more and more things every few years. In other words, by exploiting the cheaper and more powerful hardware, software “eats the world,” in Andreessen’s evocative phrase, encroaching on a wider and wider circle of companies and industries. As pointed out by Grove, this can change vertical value chains into horizontal ones. And then new value chains form around the winners.

Here is how I think the process works:
  • Every few years, this scarcity shift causes a specific “module” of a value chain bundle to become vastly more valuable than the surrounding pieces.
  • This creates an opportunity for value creation by unbundling. As Andreessen has observed, this can be exploited by a competitor (or entrepreneur) starting from scratch and offering just that valuable point of scarcity as an attractive product offering.
  • This unbundling disrupts the existing value chain incumbents. Some nimble companies may be able to co-opt this process and quickly adapt; those that cannot decline. The devil is in the details, as Lepore has pointed out. Of course, predicting the winners and losers in this transition cannot be easy; otherwise everyone would be a great investor! It is probably helpful to conduct the mental exercise suggested by Andreessen: what would be the proper form of the product if you were starting from scratch today? This is useful since some things can be done very differently – perhaps more simply (hence cheaply) – today due to the ongoing scarcity shift. For example, high-speed internet and smartphones make digital distribution nearly a free commodity today. Many businesses that have spent a lot of money on distribution infrastructure will find that many parts of their overall value proposition have become a commodity. Imagining building an existing product or service all over from scratch can cause this fact to pop out in our minds, leading to better predictions.
  • Once the unbundled product has succeeded in capturing its market, it is fairly easy to rebundle more and more features around it to increase its value (as Barksdale has observed).
  • This goes on until Moore’s law changes the game – the point of scarcity – again, causing the disruptive cycle to repeat!

I find this a clearer and deeper explanation of the observed disruptions as compared to the simplistic model graphed at the beginning of this article. And I will venture to guess that if we remove all examples that can be better explained by such a (Moore’s law driven) shift in scarcity from the various Christensen case studies, not much will be left that escapes Lepore’s valid criticism. However, I have not actually done this exercise so I cannot be sure about this. Perhaps there will remain some type of disruptions that are better explained by his model – it will be useful to know.

A conversation with my friend Alex led to a provocative question: surely there are other drivers of change than just Moore’s law, so why focus so much on that one force?

And, to be sure, there are many varieties of improvements that companies go through in improving their price/performance metrics over time. These could include various kinds of process learning (at an individual as well as team level), as well as efficiencies that come from pumping a larger volume through manufacturing, for example.

It seems to me, though, that Moore’s law is perhaps unique in having compounded at such a high rate for many decades – most processes have not done that. Compounding for that long leads to improvements by factors of millions and billions, and most processes just cannot improve that much. So it is particularly important to be on the right side of Moore’s law; otherwise one risks being disrupted by its unique ability to shift the point of scarcity every few years, like clockwork.

Having said that, I think the point about the shift in the point of scarcity remains valid and useful even if there are other drivers of rapid change – Moore’s law is just one of many potential drivers of such change.