Top 12 Cloud Trends Of 2012 (InformationWeek)

Five years into the cloud computing phenomenon, we're much more aware of the limitations and consequences. Here are 12 trends to watch in the coming year, starting with numbers 7 to 12. 

Only a few years ago, cloud computing didn't exist. Or rather, it existed by a dozen other names--such as virtualization, managed hosting, or simply The Internet. Today, it's the must-have feature of every product or service, from mobile phones to cameras to TVs.

Nobody knows this better than enterprise IT professionals, who have to deal with a rising tide of hyperbole and insatiable consumer expectations even as their budgets shrink and the role of technology in business grows. What nobody disputes, however, is that on-demand IT is here to stay.

While companies have been relying on software as a service and third-party tools for decades, it has been roughly five years since clouds entered the enterprise IT psyche, introduced by public providers such as Amazon, Google, and Salesforce.com and via private stacks from VMware, Microsoft, and Citrix. Five years is plenty of time to mature. We're much more aware of the limitations and consequences of utility computing. Here, then, are a dozen insights into what the next year will bring, nearly half a decade into the cloud era. In Part I of this two-part series, I'll cover trends seven to 12, in reverse order. In Part II, I'll cover cloud trends one to six.

Cloud Trend No. 12: Infrastructure, Code, And Data Are Intertwined.

We talk about writing code, storing data, and managing infrastructure, but these three things will soon be one and the same.

While much of the emphasis around cloud computing has been on virtual machines, it's really about data.

-- Compared to the cost of moving bytes around, nearly every other part of computing is free, according to research done by Microsoft nearly a decade ago.

-- Data is what we're worried will leak out. The reason the analogy between clouds and the electrical grid falls apart is that when someone steals your electrons, they don't have your corporate secrets.

-- Availability is a data problem. I can have 50 instances of an application running around the world. That's easy. But getting them to cooperate on sharing and updating a single user record is the hard part. The more copies we make, the more the data can be corrupted, get out of sync, and so on (a small sketch of this follows the list).

-- With the ability to scale out horizontally, we can make applications fast for millions of users. Scaling the data, however, is an entirely different matter. Ask any architect where the bottleneck is, and more often than not they'll point to data and I/O.
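
To make the "out of sync" problem concrete, here is a minimal sketch (the record, the replicas, and the reconciliation rule are all hypothetical, not tied to any particular product) of how two copies that each accept a write can silently lose an update under naive last-write-wins merging:

    import copy

    # Two replicas of the same user record, each accepting writes locally.
    replica_a = {"user": "jdoe", "email": "jdoe@old.example", "version": 1}
    replica_b = copy.deepcopy(replica_a)

    # Concurrent updates arrive at different data centers.
    replica_a["email"] = "jdoe@new.example"
    replica_a["version"] += 1

    replica_b["phone"] = "555-0100"
    replica_b["version"] += 1

    # Naive "last write wins" reconciliation: both replicas now claim
    # version 2, the tie-break is arbitrary, and whichever copy loses
    # has its update silently discarded.
    merged = replica_a if replica_a["version"] >= replica_b["version"] else replica_b
    print(merged)  # one of the two updates is gone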

Tomorrow's applications will include three kinds of code: instructions for the business process itself; instructions for how to handle the data needed; and instructions for how to manage growth, shrinkage, and failure.

Consider, for example, an application used by call center operators that includes sensitive fields such as social security numbers. When the application is running in a trusted on-premises environment, the operator has access to all of the data and can, after properly verifying a caller's identity, make changes to it. But when the application is running in a different environment--such as a public cloud used as part of a disaster recovery plan--the application can't access the social security data.

To accomplish this task, we need to encrypt the information not at the device or file level, but at the table or field level. The application needs to run with different permissions depending on its circumstances. It also needs to be smart enough to tell the operator what's happening, so that the operator can explain the situation to a caller.
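
As a rough illustration of that idea (the field names, the environment flag, and the key handling are all assumptions for the sketch; real key management would live outside the application), a record might carry an encrypted field that the application only decrypts when it believes it's running in a trusted environment:

    import os
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    # In practice the key would be held by an on-premises key service and
    # simply would not be available to a copy running in a public cloud.
    key = Fernet.generate_key()
    field_cipher = Fernet(key)

    record = {
        "name": "Jane Doe",
        "ssn": field_cipher.encrypt(b"078-05-1120"),  # encrypted at the field level
    }

    def ssn_for_operator(record, environment):
        """Reveal the SSN only in a trusted environment; otherwise explain why not."""
        if environment == "on-premises":
            return field_cipher.decrypt(record["ssn"]).decode()
        return "not available while running in a recovery environment"

    print(ssn_for_operator(record, os.environ.get("APP_ENVIRONMENT", "public-cloud")))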

Similarly, if a data center has a problem, the application can re-launch in another data center. But as machines and programs come online, they need to adapt to the new environment: different unique names, addresses, latencies, and so on. We do this through DevOps practices and configuration and orchestration tools like Chef, Puppet, and Pallet.

When a machine moves to a new location, it needs to take with it the data required to run. The more data it takes along, the better it will perform once it arrives. But the more it takes, the longer the move will take--and the more it will cost. As a result, there's a tradeoff to be made when moving a workload: take just enough metadata and application logic to function, but not so much that things slow down.

There are nascent standards that let programmers declare how data should be handled, so that workloads can move about the world efficiently, adapting to changing circumstances as they go.
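
No specific standard is named here, so purely as a hypothetical sketch, such a declaration might tag each field with how it may be handled when a workload relocates:

    # Hypothetical, informal policy declaration -- not an actual standard.
    DATA_POLICY = {
        "ssn":       {"may_leave_region": False, "replicate": False},
        "email":     {"may_leave_region": True,  "replicate": True},
        "audit_log": {"may_leave_region": True,  "replicate": True},
    }

    def fields_to_ship(policy, destination_is_foreign_region):
        """Decide which fields travel with a relocating workload."""
        shipped = []
        for field, rules in policy.items():
            if destination_is_foreign_region and not rules["may_leave_region"]:
                continue  # leave the field behind in its home region
            if not rules["replicate"]:
                continue  # keep a single authoritative copy instead of moving it
            shipped.append(field)
        return shipped

    print(fields_to_ship(DATA_POLICY, destination_is_foreign_region=True))
    # -> ['email', 'audit_log']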

Cloud Trend No. 11: What Can't I Put In The Cloud?

Several weeks ago, I was at a doctor's office with 10 physicians and two assistants. One wall of the waiting room was lined with manila file folders, each emblazoned with colored stickers and numbers.

I spent a half hour waiting to see the doctor, and in that time, I saw at least three data errors. In one case, a doctor picked up the wrong folder, opened it, and then realized her mistake. In another, an assistant dropped a folder, spilling patient records across the floor. And in a third, an assistant couldn't find a patient's record because it had been misfiled.

The benefits of electronic health records are huge. In addition to overcoming these kinds of errors, health practitioners can work together on a patient, transferring information from specialist to specialist. And researchers can mine the information to understand the efficacy of a cure or the spread of a disease.


Today, we're concerned about putting data in the cloud. For large organizations, that might be a real concern, but for small organizations like doctors' offices, police precincts, and schools--all of which deal with regulated data--leaving information out of the cloud could be a huge mistake.

We criticize the cloud, but we don't compare apples to apples. We don't really understand the costs of paper medical records, evidence stored on analog tape, or student information saved in a single spreadsheet. In 2012, we'll start to do a real comparison of on- and off-cloud solutions, and realize that, for many businesses, the real question is what can't be done better in a cloud.

Cloud Trend No. 10: Inception, The Brain In The Vat, And Hardware.

An ever-increasing percentage of our enterprise applications run in virtual environments. We no longer use virtualization solely for increased utilization--that is, putting several virtual machines on one physical one in order to make the best use of its processing capacity. We also do it for operational efficiency, because it's easier to work with virtual bits than physical atoms.

Between the virtual machine and the bare metal on which it runs is a hypervisor, a piece of code whose core function is to trick the operating system into thinking it's running on bare metal. In some cases, companies add another layer beneath the hypervisor, to further streamline operations.

As we go down through increasingly nested layers of virtualization, how do we know when we've reached the real, physical, bare-metal machine? Philosopher René Descartes began his famous "I think, therefore I am" reasoning with a thought experiment, later recast by philosophers as the "brain in a vat": what if we're a disembodied brain being fed a perfect set of sensory information, tricked into thinking we're in the real world?
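
One small, concrete way a Linux guest can peek at its own jar is the CPUID "hypervisor" flag, which the kernel surfaces in /proc/cpuinfo. Here's a sketch (Linux-specific, and it only reveals that some hypervisor is underneath, not how many layers deep, which is exactly the point):

    def probably_virtualized():
        """Return True if the Linux kernel reports the CPUID 'hypervisor' flag.

        True means at least one hypervisor sits below us. False does not
        prove bare metal: a nested setup can still hide itself.
        """
        try:
            with open("/proc/cpuinfo") as f:
                return any("hypervisor" in line.split()
                           for line in f if line.startswith("flags"))
        except OSError:
            return False  # not Linux, or /proc unavailable

    print("running under a hypervisor?", probably_virtualized())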

Decades of science fiction, from The Matrix to Inception to Vanilla Sky, have tackled the notion that we might not be in the world we experience but instead are living within a simulation.

This has important consequences for hardware makers. When we don't know which virtualization layer we're at, the jar--the physical machine at the bottom of the stack--is what matters most, because it's the only thing that truly knows what's real. The bare metal has an important role to play because it establishes trust: it thwarts trickery, accelerates security, and dedicates resources.

New hardware takes time to find its way into the wild. But the latest chipsets have features that distinguish them from the hypervisors running on top of them, and in a virtual world where no machine knows whether it's just a brain in a jar, the jar is critical. In 2012, we'll start to expect more of the bare metal, because it's the only thing we can really trust.

Cloud Trend No. 9: The Rise Of Real Brokerages.

Enterprises use dozens of clouds already. Those bills are adding up, not just in terms of cost, but in terms of complexity. Some providers bill by machine; others by CPU cycle; others by user, megabyte, or request.
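
To see how quickly that heterogeneity adds up, here's a toy normalization pass (the providers, rates, and usage figures are all made up) of the sort a billing tool or brokerage might perform to get every contract onto a single comparable monthly number:

    # Hypothetical providers, each billing in a different unit.
    usage = [
        {"provider": "CloudA", "unit": "machine-month", "rate": 70.00,     "quantity": 12},
        {"provider": "CloudB", "unit": "gigabyte",      "rate": 0.12,      "quantity": 5_000},
        {"provider": "CloudC", "unit": "request",       "rate": 0.0000004, "quantity": 2_000_000_000},
        {"provider": "CloudD", "unit": "user",          "rate": 8.00,      "quantity": 250},
    ]

    def monthly_spend(items):
        """Collapse every billing scheme into one comparable monthly figure."""
        return {item["provider"]: item["rate"] * item["quantity"] for item in items}

    for provider, cost in monthly_spend(usage).items():
        print(f"{provider}: ${cost:,.2f} per month")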

Having several providers is useful, because it offers the customer some degree of independence and negotiating leverage. But managing myriad cloud offerings will soon turn enterprise IT professionals into procurement officers and contract negotiators, handling varying terms and conditions, payment schemes, and disputes.

In a market with many buyers and sellers, brokerages inevitably emerge. They simplify and standardize transactions. They perform "bulk breaking"--the sharing of a good across many buyers--and assortment. And they find pricing efficiencies.

We're already seeing the start of cloud brokerages. Spot markets are an early indicator of the market liquidity necessary for a brokerage. Cross-cloud platform-as-a-service offerings like OpenShift and Cloud Foundry encourage workloads to move from cloud to cloud. And brokers like Cloudability aim to streamline billing and management of multiple contracts.

In 2012, expect to see the first real cloud brokerage offerings, as enterprise IT organizations look to team with other companies to procure and manage commodity cloud capacity.

Cloud Trend No. 8: An SLA Detente.

One of the biggest enterprise IT complaints is that the cloud offers bad service level agreements. Here's why that complaint doesn't hold up.

I drive a Volkswagen, but I don't get my insurance from that company. I get it from one that specializes in amortizing risk across clients. My insurance company knows the chances I'll get into an accident, as well as how safe my car is. It spends a lot of time reviewing safety features of cars and understanding regulations and quality checks by governments.

Cloud providers are no more in the business of amortizing risk than carmakers are in the business of selling insurance. If you want to amortize the risk, you'll find an insurer or certifier of some kind that can inspect the cloud provider on your behalf and understand its reliability.

Now consider hardware. We don't ask hardware makers to guarantee their equipment. We ask how likely it is to fail--the mean time between failures--and use that baseline to create an architecture that will give us the reliability we need. We build this architecture out of resilient tiers that can fail gracefully: DNS, load balancers, and so on.
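
As a back-of-the-envelope example of architecting to a reliability target instead of demanding a guarantee (the numbers are illustrative, and the assumption that instances fail independently is a generous one), redundancy turns a modest per-instance availability into a much stronger composite figure:

    def composite_availability(per_instance, redundant_copies):
        """Availability of N redundant instances, assuming independent failures."""
        return 1 - (1 - per_instance) ** redundant_copies

    single = 0.99  # one instance at 99%: roughly 3.7 days of downtime a year
    for n in (1, 2, 3):
        a = composite_availability(single, n)
        downtime_hours = (1 - a) * 365 * 24
        print(f"{n} instance(s): {a:.6f} available, about {downtime_hours:.2f} hours/year down")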

Clouds offer availability zones, CDN front-ends, shared storage, message queues, and dozens of other building blocks with which an architect can create applications of unprecedented scale and resiliency.

And that's the second problem with the complaints about cloud SLAs: The best SLA is the one you architect for yourself.

A combination of certifications and amortization from insurance companies will assuage some of the enterprise SLA concerns, by giving risk a price. The remaining concerns will be addressed by better architecture. In 2012, we'll realize that the providers have been trying to tell us something: You can have any SLA you want, as long as you code it yourself and find a way to turn risk into economic value.

Cloud Trend No. 7: Disaster Recovery And Scaling Are The New Drivers.

The first thing we virtualized was the print server. When virtualization first emerged, IT used it as a way to cut costs by consolidating otherwise idle machines running mundane tasks: print, email, and intranet servers--things that weren't mission critical but were taking up space.

After a few years, virtualization found its way into test and development, where the rate of change was high enough that ease of deployment was paramount. Consolidation was good, but what really helped test and dev was the ability to quickly clone, copy, spin up, and tear down machines as QA needed them.

Today, we're using virtualization for production applications, and we know that many virtual machines, running on commodity hardware, properly clustered and architected, can actually be more reliable than standalone high-end servers.

That's an important shift--from non-mission-critical applications to really critical ones. Cloud computing is undergoing a similar shift. Early cloud use was for experimentation, throwaway applications, and spiky, batch computing jobs. But now companies are realizing that highly available, cross-geography deployments can help them survive outages better than machines they own. On-demand computing changes the economics of disaster recovery significantly.
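
A crude comparison (every figure here is hypothetical) shows why the economics shift: an on-demand standby environment costs very little until the moment it's actually needed.

    # Hypothetical monthly costs for a warm disaster recovery environment.
    owned_secondary_site = 40_000.0   # fully provisioned, always-on second data center

    cloud_hourly_rate = 25.0          # on-demand rate to run the full recovery stack
    standby_storage   = 1_500.0       # replicated data kept warm in the cloud
    hours_active      = 8             # a monthly DR test plus one short outage

    on_demand_dr = standby_storage + cloud_hourly_rate * hours_active
    print(f"owned secondary site: ${owned_secondary_site:,.0f}/month")
    print(f"on-demand recovery:   ${on_demand_dr:,.0f}/month")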

Moreover, the ability to scale up and down according to the user experience we want to deliver makes cloud computing attractive for time-sensitive applications, and as we learn to code elastic applications, clouds look like the right place to run them.
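
Here's a minimal sketch of the "scale to the experience you want to deliver" idea: a control loop (the target, thresholds, and instance limits are placeholders, and a real system would call a provider API instead of printing) that adds or removes instances to hold a response-time goal.

    TARGET_LATENCY_MS = 250            # the user experience we want to deliver
    MIN_INSTANCES, MAX_INSTANCES = 2, 50

    def desired_instance_count(current, observed_latency_ms):
        """Grow when slower than target, shrink when comfortably faster."""
        if observed_latency_ms > TARGET_LATENCY_MS:
            current += 1
        elif observed_latency_ms < 0.6 * TARGET_LATENCY_MS:
            current -= 1
        return max(MIN_INSTANCES, min(MAX_INSTANCES, current))

    # A few ticks of the loop with made-up latency measurements.
    instances = 4
    for latency_ms in (310, 290, 240, 130, 120):
        instances = desired_instance_count(instances, latency_ms)
        print(f"observed {latency_ms} ms -> run {instances} instances")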

This means that in 2012, disaster recovery and elastic scaling will replace cost savings and convenience as the big reasons for enterprises to adopt the cloud.

There are six more predictions to go to round out the top 12. You can find them here.

Alistair Croll, founder of analyst firm Bitcurrent, is conference chair of the Cloud Connect events. Cloud Connect will take place in Santa Clara, Calif., from Feb. 13 to 16.

The full article can be found here: http://www.informationweek.com/news/cloud-computing/infrastructure/232301203

 
