Enterprise

How IBM lost the cloud

Insiders say marketing missteps and duplicated development efforts meant IBM Cloud was doomed from the start. Eight years after it set out to launch its own public cloud, the future of that effort is in dire straits.

Beset by marketing missteps and conflicting development priorities, IBM Cloud was doomed from the start.

Image: Christopher T. Fong / Protocol

The words stunned IBM's cloud executives in November 2013. Former CEO Ginni Rometty had just told them that Watson, IBM's much-hyped crown jewel, should run on the company's own Power chips inside SoftLayer, IBM's recently acquired cloud-computing division.

There was one big problem: SoftLayer, like all major cloud efforts at that point, only used x86 chips from Intel and AMD.

What came next can only be described as a scramble, according to sources who worked for IBM at the time. After throwing together a barely working demo for IBM's Pulse conference in February 2014, where Rometty publicly announced the news, executives quickly convened in Texas, home to SoftLayer. They realized fulfilling Rometty's pledge would be daunting: They would have to rewrite parts of the Watson code base for the cloud, and quickly find, and then configure, enough Power servers to run alongside the all-x86 SoftLayer environment.

So began IBM's experiments with cloud computing, imperiled from the start by a maniacal focus on selling Watson at the height of its public awareness and doting obedience to a customer base that still didn't trust the cloud.

IBM was once — and still is, for people whose main sources of information about technology are television ads during sporting events — an American innovation icon, a company that literally created what we now think of as information technology. Its fortunes have risen and fallen with broader trends in computing, but around the time of that meeting in late 2013, its business and technology reputation began a steady decline that it has yet to reverse.

Today, Rometty is gone, replaced by Arvind Krishna, the first technologist to hold the top seat at IBM since the 1970s. But IBM finds itself almost entirely dependent on its $34 billion purchase of Red Hat in order to stay relevant among modern IT buyers, and IBM executives don't really talk about its own public cloud division these days.

"They've given up on the idea of, 'we're going to be a major contender in the public cloud space,'" said Tracy Woo of Forrester Research. "Everyone is trying to win with edge [computing] in some way, and trying to create the most compelling story."

IBM CEO Arvind Krishna. Photo: Brian Ach/Getty Images for Wired

The opportunity was there for IBM: Longtime rival Microsoft successfully executed a pivot to cloud computing following the appointment of Satya Nadella in 2014, and while Azure was several years old at that point, it had only started offering Linux virtual machines, the lingua franca of the cloud era, the year before IBM's SoftLayer acquisition.

Now, IBM considers itself a "hybrid cloud" company, according to its executive talking points and commercials. But IBM's use of such tech buzzwords is a familiar strategy to those who have followed the company over the last decade: It's trying to convince longtime customers to stick with the partner that brought them to the dance despite there being a plethora of interesting alternatives.

"IBM is all-in on hybrid cloud and AI, determining years ago that our clients' only feasible path to rapid digital transformation is through a hybrid cloud strategy. Public cloud is an integral piece of that strategy," IBM said in a statement.

So how did IBM miss the cloud? Interviews with more than a dozen current and former IBM executives and employees painted a picture of a company caught moving in two directions: a group that correctly understood how the cloud was going to play an enormous role in the future of enterprise computing, matched up against a sales-driven culture that prioritized the custom needs of its large customers over the work required to catch up with AWS.

The SoftLayer bulletin

It was an AWS deal with the CIA that made IBM think differently about the cloud.

In 2013, the now-dominant cloud provider won a contract to build the next-generation enterprise-tech infrastructure for the country's spies. That forced IBM to acknowledge not only that the cloud era had arrived, but also that it was losing, according to multiple sources who worked for the company at the time. And as it lodged an ultimately unsuccessful protest against the decision to award AWS the contract, IBM announced in June 2013 that it had acquired SoftLayer.

The takeover was problematic almost from the start, according to multiple sources who worked for IBM at the time. At the outset, IBM was content to let SoftLayer continue to grow with a decent degree of autonomy, but the two companies looked at the world from different vantage points.

SoftLayer was built for small and medium-sized businesses, and its leadership team continued to design its infrastructure strategy around that market. Those customers were mostly concerned with cost and less concerned with features and availability, and SoftLayer designed its cloud services accordingly.

A SoftLayer data center in Dallas, Texas. Photo: Bloomberg / Contributor

SoftLayer operated 13 data centers when IBM acquired the company, but those data centers used relatively simple designs and were built almost exclusively around off-the-shelf servers from Supermicro, according to sources. There's nothing inherently wrong with that approach, but the major cloud vendors were already designing, and still design, their own servers to strict enterprise-grade performance and reliability criteria.

After a few years, IBM salespeople were eager to sell cloud services alongside a package of IBM's more traditional enterprise software, yet quickly found that SoftLayer didn't offer many of the services that huge corporations needed to embrace the cloud, according to the sources. Its data centers lacked some of the resiliency features that were table stakes at AWS, such as availability zones, and the servers weren't powerful enough to support large application deployments, they said.

And one of the biggest obstacles was SoftLayer's lack of support for virtual private cloud technology, which lets customers carve out a logically isolated virtual network, with their own address ranges and subnets, inside a provider's shared infrastructure. AWS introduced such a service in 2009, but IBM Cloud didn't get what one source called a "true" virtual private cloud service until 2019.
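To make the concept concrete, here is a minimal Python sketch using AWS's boto3 SDK (AWS being the provider named above) that carves out a virtual private cloud and a pair of subnets. The region, CIDR blocks and availability zone names are placeholder assumptions for illustration, not details drawn from IBM's or SoftLayer's systems.

import boto3

# A VPC is a logically isolated, customer-defined network inside the
# provider's shared infrastructure; the address range is a placeholder.
ec2 = boto3.client("ec2", region_name="us-east-1")
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Subnets pin workloads to specific availability zones, the kind of
# resiliency feature the sources say SoftLayer's data centers lacked.
for i, zone in enumerate(["us-east-1a", "us-east-1b"]):
    ec2.create_subnet(
        VpcId=vpc_id,
        CidrBlock=f"10.0.{i}.0/24",
        AvailabilityZone=zone,
    )

print(f"Created VPC {vpc_id} with two subnets")

The point is less the specific calls than the control they imply: customers define their own address space and placement, which is exactly what IBM Cloud couldn't offer until 2019.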

Some of these problems were understood at the time of the SoftLayer acquisition, and IBM tech executives thought they could fix them in short order, according to sources. But IBM's culture during those years proved too much of a roadblock.

If there's one common thread through the experiences of multiple current and former IBM employees, including those who didn't work for the cloud division, it's the power that current customers had over everything IBM did.

Over and over again during the last decade, IBM engineers were asked to build special one-off projects for key clients at the expense of their road maps for building the types of cross-customer cloud services offered by the major clouds. Top executives at some of the largest companies in the country — the biggest banks, airlines and insurance companies — knew they could call IBM management and get what they wanted because the company was so eager to retain their business, the sources said.

This practice, which delayed work on key infrastructure services for months or even years, was still happening inside IBM as recently as last year, according to one source.

"To the extent IBM is a public cloud provider, they do so as it adds to their broader orientation as a hybrid cloud platform provider," said Melanie Posey, an analyst with S&P Global Market Intelligence. "And the stuff that's on IBM's hybrid cloud platform includes IBM's public cloud, which some of their traditional long-standing IT enterprise customers prefer, like, 'let's keep it all in the family.'"

Build it once, build it twice

Just a few years after acquiring SoftLayer, IBM's top executives knew their cloud strategy as designed was not going to work. Convinced they needed fresh eyes, they hired several executives from Verizon's cloud services business — which IBM would later acquire — to rebuild IBM Cloud.

John Considine became general manager of IBM Cloud Infrastructure in November 2016 and was given the leeway to install a brand-new cloud infrastructure architecture to replace SoftLayer's approach. He began work on a project internally code-named "Genesis," an ambitious attempt to build an enterprise-grade cloud system from scratch.

Before too long, however, IBM began to realize that Genesis was unlikely to scale well enough to be a competitive threat to AWS or Microsoft. A decision to use Intel's Red Rock Canyon networking chip proved particularly troublesome, according to sources, as it caused IBM to rank very poorly on a key (if not exactly workaday) test used by Gartner to rate cloud vendors: launching 1,000 virtual machines at the same time.
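For readers unfamiliar with that benchmark, the rough Python sketch below, assuming AWS's boto3 SDK, shows what a burst-provisioning test in that spirit might look like. It is not Gartner's actual methodology, and the image ID and instance type are placeholder assumptions; in practice, account quotas would usually cap such a burst.

import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask the API to provision 1,000 virtual machines in a single request and
# time how long the request takes to be accepted. A fuller test would also
# wait until every instance reports "running."
start = time.monotonic()
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1000,
    MaxCount=1000,
)
elapsed = time.monotonic() - start

print(f"API accepted {len(resp['Instances'])} instances in {elapsed:.1f}s")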

And at first, Genesis still lacked support for the key virtual private cloud technology that both engineers and salespeople had identified as important to most prospective cloud buyers.

This caused a split inside IBM Cloud: A group headed by the former Verizon executives continued to work on the Genesis project, while another group, persuaded by a team from IBM Research that concluded Genesis would never work, began designing a separate infrastructure architecture called GC that would achieve the scaling goals and include the virtual private cloud technology using the original SoftLayer infrastructure design.

Genesis would never ship. It was scrapped in 2017, and that team began work on its own new architecture project, internally called NG, that ran in parallel to the GC effort.

For almost two years, two teams inside IBM Cloud worked on two completely different cloud infrastructure designs, which led to turf fights, resource constraints and internal confusion over the direction of the division. The cancellation of Genesis forced IBM to write off nearly $250 million in Dell servers (a bitter irony, given that IBM had sold its own x86 server business shortly after acquiring SoftLayer) that had been purchased for that project, according to one source.

And the two architectures — which IBM had intended to be compatible but which, due to subtle design differences, were not — became generally available within four months of each other in 2019. IBM continued to maintain two different cloud architectures until earlier this year, according to one source, when the GC effort was scrapped.

Presented with a detailed account of what this story would contain, IBM declined to dispute any of the facts, and sent over the following statement:

We spent more than two years evolving IBM's cloud to be the industry's most secure, enterprise-grade cloud built on a foundation of open source software – offering our clients choice, instead of locking them in. We have integrated key capabilities from across the IBM portfolio – from Software and AI to System Z and Power to our Services offerings. And we continue to invest in our global cloud footprint, making IBM Cloud the right choice for clients in highly regulated industries such as Financial Services, Government, and Telco – where it's essential to balance modernization with data privacy and compliance requirements.

Too late to the game

But by the time IBM finally shipped not one, but two different next-generation cloud infrastructure designs with support for virtual private cloud technology in 2019, it was too late.

Right around the time the parallel development efforts kicked off, many of IBM's longtime clients in heavily regulated industries like banking had begun to understand how they could operate safely on cloud services, and were looking for options. Buying enterprise technology is a lot like hiring a contractor for a home-improvement job: The only sensible thing to do is get a few bids.

Most major companies considering cloud services in 2017 (and today) would get a bid from AWS, given its leadership position in the market and track record of stability. In most cases, however, they would only get two additional bids: Microsoft and either IBM or Google Cloud.

And it was at this point that IBM Cloud staff began to realize that they had lost the opportunity to win that business, according to sources. When big businesses make an IT decision, they're deciding which technology they are going to use for a significant number of years; it can take up to two years just to move complex operations to cloud services, and AWS started encouraging potential customers to sign multiyear contracts in exchange for pricing discounts around this time.

"I think the realization [for IBM] was, do we really want to do this?" Posey said. "Does it really make a whole lot of sense for us to build up all of this infrastructure to be sort of a general-purpose cloud or is there a better way to go?"

IBM Cloud simply wasn't competitive. Genesis was an attempt to move beyond SoftLayer's reputation as a hosting provider for small businesses. But it didn't work, and cost the company years before it rolled out a feature-competitive cloud service with the coveted — but by then table stakes — VPC technology.

Thanks to years of delays and mismanagement, IBM will never be a major public cloud player. Image: Christopher T. Fong / Protocol

There is some small hope for IBM more broadly to cling to in this story, though. One of Krishna's first acts when he took over IBM Cloud in January 2019, before he became CEO, was to end the double-track infrastructure design strategy and get the team to focus on a single approach going forward, sources said. That gave employees familiar with the saga, and with his leadership, confidence that he might yet be able to turn the company around.

But, thanks to years of delays and mismanagement, IBM will never be a major cloud player. It's not entirely clear how committed the company is to its public cloud service, which still has thousands of customers. In the past year it has suffered several major outages that have gone virtually unnoticed by the broader internet community, which is using services built on other clouds.

Sources were evenly divided on the long-term prospects of the group, although a steady decline in IBM's capital expenditures this year does not bode well for a capital-intensive business like cloud computing.

And while it has become clear even to AWS that hybrid and multicloud strategies will be popular for the foreseeable future, which does bode well for Red Hat's software business, cloud computing is growing at around 35% a year and generating enormous profits for its top two contenders.

IBM had everything in place to become a major cloud provider. But technology shifts like cloud computing don't come along every decade, and while IBM has survived every shift in technology since the 1930s, its inability to capitalize on that historic shift was a huge strategic oversight — and one that has left its status as an American technology icon hanging in the balance.

Correction: An earlier version of this story contained a misspelled version of Ginni Rometty's name. This story was updated on Sept. 30, 2021.
