Part One: Data is the new currency


By Curt Hopkins

Neither Airbnb nor Uber, two of the most prominent tech firms in the US, owns any capital resources to speak of in its sector. As companies, they are ensembles of other people's capital assets, ad hoc participants, and edge-to-cloud infrastructure. The unrecognized asset of these ensembles, maintains Kirk Bresniker, chief architect of Hewlett Packard Labs, is the data produced by their operation, data that was at one point considered a liability. Data is an asset both in terms of how such a company can use it internally to increase cash flow (by targeting buyers more precisely) and how it can make money from the data externally (say, through advertising or by selling customer data to third parties).

Among the implications of this new kind of company are almost operatic breaches of security and failures of corporate values, such as the 87 million Facebook users whose data was harvested by Cambridge Analytica. But there are many more implications of this new model, and both researchers and the C-suite need to get hip before they get overwhelmed.

Data has become the new gold that backs the value of companies, though it is a gold that companies must locate, mine, and refine before spending. Business has shifted from making things to knowing things, so extracting usable data is the order of the day if a company wants to remain profitable. What technology and processes will be needed to secure that data, and how will they differ from what we have inherited? And how likely are companies like Uber and Airbnb to prove to be the model for future companies?

[Chart: data-growth model developed by the San Diego Supercomputer Center]

The massless company

Stories and observations are fine, but if we’re talking about data, we might as well use some.

“If you assume an average growth rate of around 40 percent a year,” says Jim Short, lead scientist at the San Diego Supercomputer Center, “which is a ballpark estimate that isn’t a good number but is not an awful one, then in 2018 the rough number (CAGR of 40% for 10 periods) would be on the order of around 2 petabytes of data processed annually per company.”
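
To see the compounding behind that ballpark, the sketch below works backward from the figures Short cites. The 40 percent rate, the 10 periods, and the roughly 2-petabyte endpoint are his; the implied starting point is derived from them and is only an illustration, not a number he gives.

```python
# Back out the starting point implied by Short's ballpark figures:
# ~2 PB processed per company in 2018, after 10 periods of 40% compound growth.

growth_rate = 0.40      # Short's ballpark CAGR
periods = 10            # 10 annual periods ending in 2018
endpoint_pb = 2.0       # ~2 petabytes per company in 2018

multiplier = (1 + growth_rate) ** periods       # ~28.9x growth over the decade
implied_start_pb = endpoint_pb / multiplier     # ~0.07 PB, i.e. roughly 70 TB

print(f"Growth multiplier over {periods} periods: {multiplier:.1f}x")
print(f"Implied starting point: {implied_start_pb * 1000:.0f} TB per company")
```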

In 2015, PricewaterhouseCoopers estimated that the value of commercialized data would grow to $300 billion by this year. And according to Gartner, by 2022, 75% of enterprise-generated data will be created and processed outside the traditional, centralized data center or cloud as a result of digital business projects, up from less than 10% today.

According to DataAge 2025, a 2017 report by global market research firm IDC, the amount of data the world produces annually is likely to grow from 16.3 zettabytes to 163 zettabytes by 2025.
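
The same back-of-the-envelope arithmetic can be applied to the IDC projection. The 16.3 and 163 zettabyte figures are from the report as cited above; the base year is not stated here, so the sketch below treats it as an assumption and shows the implied annual growth rate for a couple of plausible choices.

```python
# Implied annual growth rate behind the DataAge 2025 projection:
# a tenfold rise from 16.3 ZB to 163 ZB of data produced per year by 2025.
# The base year is an assumption (the report dates from 2017), so try two.

start_zb, end_zb = 16.3, 163.0

for base_year in (2016, 2017):
    years = 2025 - base_year
    cagr = (end_zb / start_zb) ** (1 / years) - 1
    print(f"Base year {base_year}: ~{cagr:.0%} annual growth over {years} years")
```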

HPE’s whole strategic research vision centers on responding to these changes by reorienting a company’s posture toward data from triage to opportunity.

If data is the new gold backing the value of companies – and the movement of bellwether companies from Uber to Ford to Google certainly seems to indicate it is – then we have to disregard our old idea that data is a waste product for a company. It is at least a strategic edge and at most the product itself.

As we struggle with both excitement and anxiety through what Short calls a “period of discovery” regarding how to value and use data – if we hope to master it and avoid having it enslave us – we need to create a new infrastructure.

Data used to be the cost. Now it’s the most interesting potential.

– Kirk Bresniker, HPE

“But how do you get to it?” he asks. “How do you turn it into economic value?” These are the questions that should determine how we remake our IT and business systems so that they create or retain profitability.

In Bresniker’s opinion, the key is as much mindset as it is restructuring. Or rather, one affects the other. If you retool your company such that it touches the ground lightly, if you make it into what he calls a “massless company,” then you’ll be able to change directions effortlessly.

“Inertia-free companies can transform information into economic activity with little risk,” he says. “This is the model you want. The moment you start to efficiently turn information into product, you’ll find more opportunities to do so. It’s a positive feedback loop, where you’ll see data everywhere and see opportunities everywhere.” Mindset, then tech.

This philosophical reorientation seems germane to the IT industry, but its relevance to the vast majority of companies outside that industry is harder to see. It's there, though.

“Think about McDonald’s,” he says. “McDonald’s is not in the burger business; it’s a real estate company.” It makes its money, in other words, from its franchises, not from its menu. One way or another, there is no company that does not use data as an element of its business. So all of them should pay attention and stop thinking of themselves as passive recipients, or victims, of the data flood.

Memory-Driven Computing is an architecture that allows us to build anything, and to build it with less energy expenditure (and therefore more cheaply). It lets us tackle latency and respond in real time to an increasingly fast world, not just in terms of sheer speed but in terms of how quickly we will need to react to gain, regain, or retain profitability.

The takeaway

When the barrier to entry for a new business is lowered as profoundly for the majority of entities in an industry as it has been in those dependent on information, such as financial services, logistics, and communications, capital assets cease to be a major issue.

These days, in our switch to machine learning and composable infrastructure – and eventually to machine-written code – even code ceases to be an asset. In the end, we’re left with the enterprise completely described by the data that it accumulates. 

So the value of an undertaking, and whether it will be successful, is no longer a matter of traditional economic cost barriers. Instead, it becomes a question of whether it will be worth the effort to refine the raw data into actionable intelligence.

As Short reminds us, we are in a discovery phase. That means more mistakes than successes – a reality no one is keen to embrace. But embrace it we must. Without a willingness to make mistakes, we can't bring ourselves to try something new. And if we're willing to try new things, we might learn not just to surf the data flood, but to channel it into turbines that produce a whole new level of energy for this human experiment.