Why Cloudflare Is a Game Changer

Ben Fathi
17 min read · Oct 25, 2021

An architectural approach to cloud scalability

“Whatever your mind can conceive and believe, it can achieve.” — Napoleon Hill.

“Of course the market fluctuates. I just got fluctuated out of four thousand bucks!” — Jerry Seinfeld.

Full disclosure: I’m a stockholder.

Five or six years ago, right before I retired, I worked at Cloudflare, running the engineering and cloud operations teams, a position that afforded me an intimate view into the architecture and implementation of the platform.

Cloudflare was still a tiny startup in San Francisco with roughly two hundred employees, including a hundred or so engineers and about twenty Operations folks. Even back then, long before anyone had heard of the scrappy little company, it was already processing as much as 15% of all internet requests on a daily basis.

Let me repeat that because I don’t think you heard me. A 200-person startup was handling 15% of all internet requests on a daily basis as recently as five years ago; and most people outside the industry had probably never even heard of it before the stock started shooting skyward recently.

As I write this, we’re all waiting for Cloudflare’s quarterly announcement. The stock has more than tripled over the past year, gaining 60% just over the past month. There’s a chance it’ll move lower post announcement as the naysayers come out of the woodwork, pointing out that the quarterly revenue and profit numbers don’t justify the valuation. [I say this based on no inside information, having left the company over five years ago.] Fear not, for I fully expect the company to continue growing at an exponential rate for the foreseeable future, defying all critics as it delivers on its promise of a “better internet.”

Most of the coverage on Cloudflare has been positive but a few have gone on the record as being negative on the stock. Here is a recent example: “Without Profits, Cloudflare Stock Is Too Expensive No Matter What”:

“My take on the matter is that NET stock now is too overvalued; a dip would not make it a bargain. Some investors may focus on technical analysis and argue that Cloudflare shows strong momentum. This is true, but the stock is also at extreme levels based on technical analysis indicators such as the popular RSI(14) indicator, which sits at 74.68. Anything above a 70 is considered overbought.”

These so-called “analysts” just don’t get it. They look at their spreadsheets and see scary numbers and balk at the high valuation. RSI-14 (Relative Strength Index) is simply a measure of how much momentum a stock has. 74 is higher than 70, an arbitrary bar that someone once defined, so it must be overpriced. You don’t need to know anything else about the company: its strategy, its architecture, its competitors, its total addressable market, etc.

Note that the author even admits as much: “Some investors may focus on technical analysis [but I don’t understand that so I’m going to look at a single number (accurate to two decimal points, no less!) instead and run away simply because the company is performing better than most others!]” Imagine someone telling you Michael Jordan or Roger Federer were overhyped early in their respective careers because they seemed too good when compared to other players and you’ll immediately see the fallacy in this logic.

Here’s another one: “Cloudflare (NET) Dips More Than Broader Markets: What You Should Know”: “Cloudflare (NET) closed the most recent trading day at $181.35, moving -0.36% from the previous trading session. This change lagged the S&P 500’s daily loss of 0.11%.”

And here’s my translation: “We used an AI algorithm to write this article, using lots of numbers (also accurate to within two decimal points!) to fill in formulaic sentences and create scary headlines as clickbait. In the process, we failed to notice (or tell our readers) that a 0.25% differential in daily closing price from the average of 500 companies in a multitude of industries is statistically insignificant and totally meaningless. You’re welcome.”

These are the same analysts that recommended Fastly over Cloudflare a year ago when both companies were valued at around 20 billion dollars. Fastly is now worth one tenth of Cloudflare ($6B vs. $60B) but they’re digging in their heels, telling you to avoid Cloudflare because it’s too expensive and to buy Fastly instead because it’s “a more timely purchase right now.”

I want to say to all of them: My friend, you don’t understand what you’re talking about. You don’t even understand what this company has built, what it’s doing, or what it’s about to do. You have no business telling anyone what stock to buy, just like you didn’t have any business telling them last year when you recommended a stock that went on to lose 70% of its value. “It’s cheaper so it’s a good buy” is not a recommendation. It’s sheer folly!

I wish I could shake these “analysts” by the lapels and yell in their ears: Take your head out of your spreadsheets, stop looking at quarter over quarter results (by definition a myopic view of the business) and take some time to understand the architecture, the strategy, and the solutions being offered. Look at the roadmap and the competition instead. And stop describing Cloudflare as a CDN (Content Delivery Network) or a security company. That would be akin to mistaking Amazon for an online bookstore.

Stop comparing it to niche players in those small markets. Compare it, instead, to its real competitors: major cloud providers such as Amazon AWS, Google Cloud, and Microsoft Azure. Perhaps then you’ll be looking at the correct TAM (Total Addressable Market) and will notice that, if anything, the stock is still a huge bargain.

Sigh. Instead, I guess I’ll just tell them to go away and come back in a year or two when the stock is at $500 or $1000 or more. Maybe they’ll get it then. And don’t blame me if I say “I told you so.” I can’t help it if they can’t see what’s right in front of their eyes.

Matthew Prince, the CEO of Cloudflare, got mad in an email exchange recently when I mentioned, offhandedly and as a compliment, what a great platform he’d built for the edge of the cloud:

I really dislike the term “edge.” Everyone uses it, including hardware companies like Cisco and IoT firms. Becomes meaningless. What we are is a network. Increasingly hauling packets across our own backbone end-to-end. Certainly not just “the edge.”

He was right, of course. It’s a mistake to call what Cloudflare has built a platform for the edge of the cloud. It’s a fully fledged cloud that just happens to be implemented primarily at the edge, where it’s “closest to the eyeballs” and most responsive to end users like you and me.

By comparison, all other major cloud providers put the vast majority of their servers in a few huge data centers and force the developers to pick from several “availability zones” in order to avoid catastrophic failures. Cloudflare achieves the same effect simply through its massively distributed architecture with no single points of failure or bottlenecks.

You may have noticed that Cloudflare recently publicly challenged Amazon on their egress pricing and even started offering its own cloud storage service for significantly less. This is not a gimmick. Matthew is not interested in just competing at the edge. He’s going after the big guys and he’s doing it with a more naturally distributed and scalable system architecture than any of them have. If I were in their shoes, I’d be shaking in my boots.

Did I mention the US Government has quietly adopted Cloudflare’s Zero Trust architecture while Amazon and Microsoft were busy fighting over the $10B JEDI contract? Did I mention Cloudflare has a strategic partnership that extends its platform reach to hundreds of data centers and over a billion users in China? Talk about a global footprint. Talk about scale. Talk about five and six nines of service availability. Talk about Cloudflare as the water tap to your internet services, implemented in a naturally scalable model due to its distributed architecture.

Did I mention that over 7.5 million websites (and fully a third of the top 10,000 sites in the world) use Cloudflare? Did I mention over 250 data centers in 100 countries? What Cloudflare has built, right under our noses and without most of you ever even hearing about it, is “a better internet.” With our full consent and for the benefit of all of us.

You may have missed the recent announcement of a new service offered by Cloudflare. I mention it here only as an example of what the company can do. Cloudflare TV has been running non-stop for months now. It offers live high definition video with broadcast level quality that could be used to compete head-on with YouTube and Zoom or as a platform for other companies to deliver such services in various countries around the world. Check it out. I think you’ll be surprised at its maturity, performance, and reliability. I’m glad to see the company religiously “dogfooding” the platform, a practice I evangelized to the team back when I worked there.

I won’t even mention all the network and security services like DDoS protection, VPN, intelligent routing, load balancing and failover, gateway, firewall, serverless computing, Cloudflare for Teams, etc. They’ve all been described in detail on the great Cloudflare blog site. Did I mention its DNS services are three times faster than Google’s, averaging less than five milliseconds per request across the globe? Did I mention that most of these services are available for free?

What we have here is a major cloud platform; one that is already mature, implemented at a staggering global scale, and already handling a large chunk of all internet traffic. And it’s doing so without even breaking a sweat exactly because at its heart is a simple architectural idea that Lee Holloway recognized and fully utilized. His use of Anycast routing, clumsily explained by me below, turned an unused but broadly implemented feature of standard networking protocols into the enabling foundation for the most scalable cloud yet. The minute Matthew explained it to me the first time we met, I knew it was a game changer.

Cloudflare co-founders Michelle Zatlyn, Lee Holloway, and Matthew Prince

The story of Lee Holloway, the kid genius who built the Cloudflare software architecture from the bottom up, is a tragic one which has already been told ably in Wired Magazine: “The Devastating Decline of a Brilliant Young Coder.” I had the pleasure of working with Lee closely (he reported to me in my role as head of engineering) as he struggled with what, at the time, none of us knew was a debilitating brain disorder: frontotemporal dementia. I don’t want to make this post any longer than it needs to be by delving into Lee’s story. Here, again, others have done a better job than I could dare to attempt. Please do take the time to read the Wired article mentioned above.

Suffice it to say that what Lee dreamt up as a response to Matthew’s challenge to “build a better internet,” his architectural solution, is so simple and elegant at heart that it has stood the test of time. More than a decade after its initial implementation, it continues to grow exponentially and has done so without even breaking a sweat. And, guess what, it hasn’t even gotten close to its architectural potential yet. As Matthew might say: “Scale matters.”

If you’re going to build a better internet, make sure you bet on a protocol that scales seamlessly and does not become an architectural bottleneck. And, oh yeah, we’re not talking about now. We’re talking about a decade or two from now. And we’re not looking for a few thousand or even a few million users. We’re talking billions of users around the globe. Building platforms is hard. Building operating systems that are not just relevant but critical to the internet a decade or two into the future is really hard.

What most people don’t seem to understand is that Cloudflare has implemented the next step function in the evolution of computing, the future of the cloud. Let me try to explain in language that, hopefully, a non-technical person can follow.

Back in the early 1980s, when I was a young engineer just out of school, I used to work on designing operating systems for UNIX workstations. These typically contained a single CPU so, by definition, they could only execute one instruction from one program at any given point in time. We created the illusion of multitasking by quickly switching between applications (dozens of times per second) so you could run multiple apps at the same time.

But this was just an illusion; there was only one CPU in the system and it was busy executing a single program at any given point in time. It was the software layer above, the one closest to the hardware, the operating system, that created the illusion of virtual worlds inhabited by virtual applications and virtual users.
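The switching trick above is easy to sketch. Here is a purely illustrative Python toy (the program names and step counts are made up, and this is nothing like a real OS scheduler): one “CPU” executes a single instruction at a time, and a tiny round-robin scheduler switches between programs so quickly that they appear to run simultaneously.

```python
def program(name, steps):
    """A 'program' that yields control back after each instruction."""
    for i in range(steps):
        yield f"{name}: instruction {i}"

def round_robin(programs):
    """The 'operating system': run one instruction from each program in turn.

    Only one instruction executes at any moment, yet the interleaved
    trace makes it look as if all programs ran at once.
    """
    trace = []
    while programs:
        prog = programs.pop(0)
        try:
            trace.append(next(prog))   # execute one instruction
            programs.append(prog)      # back of the queue
        except StopIteration:
            pass                       # this program has finished
    return trace

trace = round_robin([program("editor", 2), program("compiler", 2)])
# The trace alternates: editor, compiler, editor, compiler
```

One single-threaded loop produces an interleaved execution history, which is exactly the illusion the paragraph above describes, minus the hardware timer interrupts a real OS relies on.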

The view that the operating system presents of the hardware beneath it, to the applications running above it, to the developers programming it, and to the users accessing it, is completely virtual: made up. It exists nowhere but in the minds of men, in logic gates etched in silicon, in bits residing on magnetic disks rotating in dimly lit data centers.

Later on, I worked on multiprocessor systems that combined 2, 4, 8, 16, 32, 64, 128, even as many as 1024 of these processors together into a single system. We humans, being who we are, weren’t satisfied with just 2x performance improvement every year or two as promised by Moore’s law. We were too impatient and tried to stuff dozens or hundreds of these CPUs into a single machine so we could run even faster!

Architecturally, these machines were complex and they were hard to program. The processors often shared storage and memory but could execute (mostly) independently of each other and run programs to completion more quickly. They were also very expensive, often costing many millions of dollars.

Lo and behold, the PC revolution happened and people found out they could replace these large expensive computers with a closet full of “personal” computers if they could just break down the problem space instead of trying to use brute force to solve problems. If you are generating 3D graphics for a Hollywood movie, you can do so on a single expensive computer or break a two-hour movie into 24 five-minute chunks and run each segment on a cheap PC. Doing oil and gas exploration? You can do it on a Cray supercomputer or break the map down into hundreds of pieces, each one square mile, and run the algorithm on a bunch of cheap PCs instead. You get the point.
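The movie arithmetic above can be sketched in a few lines of Python. This is a hypothetical illustration of the decomposition trick, not real rendering code: the render function is a trivial stand-in that just counts frames, and the thread pool stands in for that closet full of cheap PCs.

```python
from concurrent.futures import ThreadPoolExecutor

def render(chunk):
    # Stand-in for the real per-frame rendering work:
    # just report how many frames this worker "processed".
    return len(chunk)

FPS = 24
movie = list(range(2 * 60 * 60 * FPS))   # two hours of frames at 24 fps
chunk_len = 5 * 60 * FPS                 # five-minute chunks

# Split the job into independent pieces; none depends on any other.
chunks = [movie[i:i + chunk_len] for i in range(0, len(movie), chunk_len)]

# Four workers stand in for a room full of cheap machines.
with ThreadPoolExecutor(max_workers=4) as pool:
    rendered = list(pool.map(render, chunks))
```

A two-hour movie at 24 frames per second is 172,800 frames, which splits into exactly 24 five-minute chunks; because the chunks are independent, adding cheap workers speeds the job up almost linearly, which is the whole point of the paragraph above.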

OK, great. So PCs bring in a new era of scalable computing — not by being faster or better but by democratizing access: by being cheaper and more easily available to end users and by re-thinking the problem space. Then, of course, the cloud happened. Suddenly, the internet was everywhere. Well, it really took decades to build but network speeds finally caught up, WiFi became ubiquitous, and you could offer client-server computing to the masses, where the client is a PC (later mobile device) on your lap or in your hand and the server is a bunch of machines in a data center.

Roll in Amazon, Google, and Microsoft who build massive data centers with hundreds of thousands of servers so they can offer scalable services globally. Of course they also needed resiliency and disaster protection so they built a few data centers around the world and used them to “back each other up” in case of disasters.

The era of the cloud was upon us. We stopped making operating systems for single computers and, instead, started building operating systems to run entire data centers. But it was almost as if we’d forgotten the lesson of the PC revolution, creating huge pools of infrastructure that basically acted like supercomputers, all in a single centralized location.

Now comes along a company like Cloudflare: Let’s build a better internet. While we’re at it, why don’t we do things a little differently? Instead of a few large data centers, why don’t we move our servers to a hundred or two hundred or even a thousand locations around the globe? By doing so, we move the computing and storage closer to the end users, building a distributed system that scales much more naturally, is more resilient to failures, and offers lower latency to the end users.

The Anycast routing that Lee Holloway used is at the core of this architecture and lends itself gracefully to the distributed nature of the implementation. Anycast was always there in the standards, right alongside the unicast and multicast routing mechanisms that every other application in the world uses for point-to-point and one-to-many messaging between machines.

Source: GlobalDots.com

Anycast just says: let many servers around the world announce the same IP address. When a client sends out a request, the network’s routing tables automatically deliver it to whichever of those servers is closest, and the client connects to that nearest server without ever knowing the others exist. It’s a one-to-one mechanism perfectly suited to a world of millions of ubiquitously accessible (and fungible) servers and billions of clients spread around the globe.

Not so coincidentally, the same mechanism enables Cloudflare to deflect even the largest Distributed Denial of Service (DDoS) attacks effortlessly. When thousands of bots attempt to bring a target to its knees, each bot’s packets are routed to the data center nearest that bot, so the flood is naturally absorbed across many locations instead of concentrating on any single node.
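A toy simulation makes both points at once. This Python sketch uses made-up region names and hop counts; it is not how Cloudflare’s network is actually configured, just an illustration of the principle: every PoP announces the same address, the network routes each client to the lowest-cost PoP, and a geographically scattered botnet therefore spreads its own flood across many locations.

```python
# Hypothetical routing costs (hop counts) from three client regions
# to three PoPs that all announce the same Anycast IP address.
COSTS = {
    "client_za": {"Johannesburg": 3, "Amsterdam": 40, "Ashburn": 55},
    "client_nl": {"Johannesburg": 40, "Amsterdam": 2, "Ashburn": 30},
    "client_us": {"Johannesburg": 55, "Amsterdam": 30, "Ashburn": 4},
}

def anycast_route(client):
    """The network's job under Anycast: the lowest-cost path wins."""
    paths = COSTS[client]
    return min(paths, key=paths.get)

# A botnet scattered across regions never converges on one PoP:
# each bot's traffic lands on the PoP nearest to that bot.
attack = ["client_za", "client_nl", "client_us"] * 1000
load = {}
for bot in attack:
    pop = anycast_route(bot)
    load[pop] = load.get(pop, 0) + 1
# 'load' ends up evenly spread across all three PoPs.
```

The same routing decision that gives ordinary users the nearest (fastest) server also shards attack traffic across the whole fleet, with no coordination logic required on Cloudflare’s side: the network itself does the load spreading.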

Kudos to the networking pioneers who thought of the use case and built it into the standards. Kudos to Lee for finding the perfect match for what Cloudflare was building. And the beauty was that all computer systems in the world already spoke the dialect, so there was no barrier to entry. Everyone could already use the Cloudflare cloud.

This is a huge oversimplification but it makes the point. Google and Amazon and Microsoft will correctly point out that they have their own edge servers and offer some of the same solutions. That’s not the point. The point is that their “edge” is just a thin layer in front of their few (and huge) backend data centers where most of the heavy lifting still happens.

In the case of Cloudflare, the edge is the cloud. If a user is sitting in South Africa, there’s no reason their internet requests should go to Amsterdam or Virginia for processing when there’s a perfectly good data center right down the street or, at worst, in a nearby city. As governments start passing data sovereignty laws, Cloudflare is best positioned to deliver services locally within their jurisdictions.

When developers write applications for the cloud, they configure them to run in an “Availability Zone” such as Eastern US or Europe, with backup or failover instances running in one or more locations in case something goes wrong. This “clustering” architecture harkens back to the old days and by necessity impedes scalability, requiring a pool of (often unused) servers sitting in a secondary location “just in case something goes wrong.”

The Cloudflare cloud, by comparison, has no such single point of potential failure or bottleneck because all services can and do run on all servers in all data centers. Workloads simply (and naturally) run where it makes the most sense, where they’re needed, closest to the end users utilizing them and at their behest. It’s simply a more scalable distributed architecture than those offered by any of the other major cloud providers. That’s why they can offer DNS services that are three times faster than Google’s.

As IoT devices proliferate, latency becomes even more crucial. Ask yourself whether you want the time-critical network packets from your autonomous vehicle being processed by servers at the closest data center to where you’re driving right now or if you’d rather have them travel halfway around the globe while you wait.

The scale achieved by a shared memory NUMA architecture in the early ’90s (with a bit of Hollywood magic thrown in by Silicon Graphics) eventually brought down the mighty Cray Research and ended the supercomputing era. And itself fell victim, only a few years later, to a cluster of PCs at one tenth the cost. The PCs, in turn, would be superseded by rows upon rows of servers sitting in hermetically sealed data centers “in the cloud.” Now augmented by, and soon to be superseded by, thousands and thousands of Cloudflare servers living in every country on the planet. It’s just a natural evolution of the cloud architecture. Scale matters and latency matters; especially when you’re building distributed operating systems and platforms.

Another fact that most people don’t seem to realize is that a successful startup will take at least a decade, if not longer, to build. A failed startup takes much less time but is also not likely to make a dent in history. Cloudflare was founded in 2009, fully twelve years ago. Thousands of people have worked there over the years and they all deserve credit for the great work they’ve done. If you ask Matthew, he might say something like “Think big or go home.” He could have easily carved out a niche for himself as an edge computing company or a security company or a CDN company but he’d rather challenge the big guys. More power to him.

What Matthew, Michelle, and the team have managed to do is deliver high value services (everything from security to performance to networking to runtime services like key-value stores and storage subsystems and serverless computing), all implemented at scale, already at over 250 data centers across a hundred countries.

Architectural redundancy is at the core of the Cloudflare architecture, from its reliance on the Anycast networking protocol to storage and compute services now being offered — implemented at the edge of the cloud, closest to human eyeballs. Fastest. Most responsive. Most scalable. Architecturally superior. Better positioned to deliver international and local solutions than Google, Amazon, or Microsoft.

What’s stopping the incumbents from building a better cloud platform at the edge, from beating Cloudflare at its own game, you may ask. In a nutshell, “Innovator’s dilemma.” They’ve already invested hundreds of billions of dollars each in their current implementation, one fundamentally designed around a few centralized data centers; and they’re each paying tens of thousands of engineers to come to work every day and make small incremental changes to those existing platforms to implement new features. That’s always so much easier for incumbents than starting from a clean sheet of paper. Unfortunately, it also makes them vulnerable to architectural disruption. Cloudflare will do to AWS what the iPhone did to PCs.

The world still doesn’t understand what Cloudflare can do. I’ve only touched on a few parts above. You just wait until they finally figure it out. Then there’ll be a stampede to their doors — and to their stock.

Needless to say, I’m a big fan and have every reason to be a proud stockholder. And please stop telling me 74 on the RSI-14 scale is higher than 70, so we should stop buying. You, sir, don’t know what you’re talking about. That just means you (or more likely, your computer) have noticed an unusual trend in the stock. It says nothing about why that trend is happening. You probably also told people Amazon at $300 was too expensive and they should avoid it because you couldn’t figure out what the big deal was with “just an online bookstore.” You were busy looking down at your toes when all the action was happening right in front of you, on the path ahead. And you missed the whole thing.

As far as I’m concerned, the recent upward swing in the stock price is just the knee in an exponential curve. This is just the beginning. Hang on tight. You ain’t seen nothin’ yet.

I’ve deleted all my social media accounts and now depend exclusively on the kindness of strangers to pass the word around about my blog posts. Please share this post if you liked it. Thanks.


Ben Fathi

Former {CTO at VMware, VP at Microsoft, SVP at Cisco, Head of Eng & Cloud Ops at Cloudflare}. Recovering distance runner, avid cyclist, newly minted grandpa.