The Future of Data Centers

Posted Dec 9, 2011 in Industry Trends, Innovation, Leadership

Everything Evolves

I still remember playing with my Commodore VIC-20 and thinking that 3K of memory was plenty.  But of course, within a couple of months, 3K of memory wasn’t enough and I was already entertaining the idea of getting a Commodore 64.  While both the VIC-20 and the Commodore 64 hooked up to your TV set, the Commodore 64 had color – and it had 64K of memory, more memory than I could use in a lifetime – or so I thought.  Nowadays, my watch has more computing power.

Data centers have evolved too.  To begin with, computers were housed in “computer rooms.”  These computers were so large that an entire room was dedicated to one computer.

As computers evolved, “computer rooms” became “data processing rooms.”  Then entire floors were devoted to data processing.  Gradually, entire facilities became “data centers.”  Now we build whole server farms, as Facebook’s data centers show.

Predictable Patterns

Throughout history, the delivery of services has followed rather predictable patterns.  Take the supply of electricity.  Electricity wasn’t always delivered via large, interconnected grids.  Initially, electrical power was generated locally because the technology did not exist to distribute it more than a few miles.  Soon towns, cities, and industrial complexes developed their own generating stations and distributed power according to their needs and abilities.  Driven largely by demand and the enormous expense involved, governmental agencies and large corporations then began centralizing electrical power generation.  Power distribution took the same path, starting locally and eventually growing into large, interconnected distribution grids.  The typical setup today is a large coal, hydro, or nuclear power plant connected to a large distribution system, then to smaller sub-systems, and finally to your home.

Economics will always drive efficiencies in the market.  In the electricity market, that push led power suppliers to build smaller generating stations in greater numbers – many of them mixed-fuel turbines and some wind or solar.  These smaller stations supply peaking power: capacity that comes online when demand temporarily exceeds what the grid’s base load, the large generating stations, can deliver.

Today we see further evolution – a transition to micro-generation (wind, solar, fuel cells, micro-turbines, and the like) even at the individual-home level.  These “micro-generators” can feed power back to the main distribution grid, causing the meter to “run backwards,” as it were.  Many futurists imagine that each home or building will have its own solar, wind, or other generating capacity, reducing our reliance on major generating stations and the huge capital outlays they require.  The economy and the service both become more efficient through the aggregation of resources.

Note the pattern:

  1. Small, localized supply; localized distribution; localized control
  2. Large, centralized supply; wide-area distribution; wide-area control
  3. Mixed, large and mid-sized supply; extensive, interconnected distribution; wide-area regional control
  4. Mixed, large, mid-sized, and micro supply; fully interconnected, extensive distribution; mixed, wide-area regional and localized control

The last phase of the evolution allows for maximum flexibility, matching supply to demand with precise capacity and control, and it is made possible by technological improvements.

Data Center Evolution

The actual services provided by computers (data processing and storage) are also following this predictable pattern.  We first saw local supply and local distribution in the computer room.  Next, universities and governmental agencies developed networked computing facilities (ARPANET, CYCLADES, Telenet, et cetera), setting the stage for wide distribution.  We’re currently in the third phase of the evolution pattern where large- to mid-sized facilities with extensive interconnected distribution and regional control have been developed (Internet exchange points and peering facilities).

So what does the fourth phase look like?  In this phase, micro supply, fully interconnected global distribution, and mixed levels of control are introduced to the market.

Death of the Mega Data Center

Just as it has become cost-prohibitive to build huge power plants and dams, data centers will become smaller and more geographically distributed in the future.  Instead of 400,000 sq ft behemoths, 10,000- to 30,000 sq ft designs will prove more economical to build.  The availability of power and the demands on local infrastructure are already driving design and siting decisions in this direction.  Furthermore, smaller, localized data centers can reduce latency to the end user.

Evolutionary phase changes are driven by, and dependent on, technological advances.  With the advent of truly geography-independent processing (cloud, fabric, et cetera), the location of a server will no longer be an issue.  Controlling software will aggregate server resources wherever they may be and assign processing tasks to them.  Data storage demands will likewise be handled by distributed libraries of data that continuously replicate across geographically diverse resources.
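
As a rough illustration of that storage idea, here is a minimal sketch of region-diverse replica placement.  The site names, region labels, and the one-replica-per-region rule are all invented for illustration; real systems weigh many more factors.

    # Illustrative sketch: spreading copies of a data object across
    # geographically diverse sites.  Site names, regions, and the
    # one-replica-per-region rule are hypothetical.
    SITES = [
        ("us-east-1", "north-america"),
        ("us-west-2", "north-america"),
        ("eu-central", "europe"),
        ("ap-south", "asia"),
    ]

    def place_replicas(sites, copies=3):
        """Pick one site per region until we have enough copies, so no
        single regional outage can take out every replica."""
        chosen, seen_regions = [], set()
        for site, region in sites:
            if region in seen_regions:
                continue
            chosen.append(site)
            seen_regions.add(region)
            if len(chosen) == copies:
                break
        return chosen

    print(place_replicas(SITES))  # -> ['us-east-1', 'eu-central', 'ap-south']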

This development will usher in a new paradigm for how processing is done.  With the restriction of resource (server) location eliminated, data processing could “follow the moon,” meaning that tasks can be assigned to whichever regions offer the cheapest, most available power – and the most free cooling – at any given moment.
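
To make that concrete, here is a minimal sketch of a follow-the-moon placement rule.  The regions, prices, and the free-cooling discount are hypothetical numbers chosen purely for illustration; a real scheduler would also weigh latency, capacity, and data locality.

    # Illustrative sketch: a toy "follow the moon" placement rule.
    # All regions, prices, and the 30% free-cooling discount are
    # hypothetical values for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        power_cost: float    # $ per kWh right now (hypothetical)
        free_cooling: bool   # can outside air cool the facility right now?

    def effective_cost(region):
        """Score a region: cheap power wins, free cooling makes it cheaper."""
        return region.power_cost * (0.7 if region.free_cooling else 1.0)

    def pick_region(regions):
        """Send the next batch of work wherever it is cheapest to run."""
        return min(regions, key=effective_cost)

    regions = [
        Region("us-east", power_cost=0.12, free_cooling=False),
        Region("nordics", power_cost=0.08, free_cooling=True),   # cold and dark
        Region("asia-pac", power_cost=0.15, free_cooling=False),
    ]

    print(pick_region(regions).name)  # -> nordics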

Micro Supply

When control software can be developed that has the capability to aggregate/secure any server anywhere, then individuals everywhere could enlist the “resources” of their computer(s) that are attached to the Internet to an aggregator network to do processing work.  They could even be paid every month for the amount of processing/storage that their computer(s) did for the aggregator.   In essence, individuals will become micro-data centers.  Just as banks compete for your savings dollars (another resource that is aggregated), aggregators will compete for your computing and storage resources.  In reality, small businesses with excess computing capacity during non-business hours will most likely be the first to take advantage of this new market.
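
A minimal sketch of how such an aggregator might credit its contributors, assuming made-up rates per CPU-hour and per GB-month of storage (every name and number here is hypothetical):

    # Illustrative sketch: a toy ledger for a hypothetical compute
    # aggregator that credits contributors for the work their machines do.
    # The rates and identifiers are invented for illustration only.
    from collections import defaultdict

    CPU_RATE_PER_HOUR = 0.02     # $ credited per CPU-hour (hypothetical)
    STORAGE_RATE_PER_GB = 0.001  # $ credited per GB-month (hypothetical)

    class Aggregator:
        def __init__(self):
            self.cpu_hours = defaultdict(float)
            self.gb_months = defaultdict(float)

        def record_work(self, owner, cpu_hours, gb_months=0.0):
            """Log the processing and storage a contributor's machine supplied."""
            self.cpu_hours[owner] += cpu_hours
            self.gb_months[owner] += gb_months

        def monthly_payout(self, owner):
            """What the aggregator owes this contributor for the month."""
            return (self.cpu_hours[owner] * CPU_RATE_PER_HOUR
                    + self.gb_months[owner] * STORAGE_RATE_PER_GB)

    agg = Aggregator()
    agg.record_work("home-pc-42", cpu_hours=300, gb_months=50)   # idle desktop
    agg.record_work("smb-server-7", cpu_hours=1200)              # nights/weekends
    print(f"${agg.monthly_payout('home-pc-42'):.2f}")  # -> $6.05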

Like power, data processing and storage are following the pattern.  Even now, some “power” companies own only the distribution and control, not the capability to generate power.  I expect data processing and storage services to continue to evolve along this pattern.  Companies will morph into distribution and control, leaving the “commodity” of data processing and storage to smaller, more efficient companies and solutions.  Providers of the data-processing commodity would then be rated on industry-accepted standards of reliability, capacity, and cost.

While all of this may not be entirely possible today, the issues around data security, workflow control, and storage aggregation will be solved in the foreseeable future.  I’m not sure who the next Google, Apple, or Microsoft is going to be, but I believe companies that work to solve these issues stand to profit tremendously as we move into this next evolutionary phase.
