Unavoidable Change in the Data Center and Information Technology Industry
We hear constant emphasis on “availability”: how to build more availability into a single data center site or facility, and the N+1, N+2, 2N and 2N+1 guidelines, all relevant to site and facilities infrastructure. But are we correctly addressing the issue of availability? Are we even defining availability correctly? What are we trying to achieve with so much emphasis on such narrow and incomplete aspects of a bigger picture? Are the notions of resilience and availability truly all that matter?
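To pin down what that shorthand actually buys, here is a minimal back-of-the-envelope sketch; it assumes N units are required to carry the load, that N+1 and N+2 add shared spares, that 2N duplicates the whole system as an independent path, and that units fail independently, with an invented per-unit availability:

# A rough reading of the redundancy shorthand in availability terms.
# Assumptions: independent failures; N units required; N+k adds k
# shared spares; 2N means two independent full paths.
from math import comb

def at_least(required: int, total: int, unit: float) -> float:
    """P(at least `required` of `total` independent units are up)."""
    return sum(comb(total, up) * unit**up * (1 - unit)**(total - up)
               for up in range(required, total + 1))

unit = 0.99  # assumed availability of one UPS module
N = 4        # modules needed to carry the full load

print(f"N   : {at_least(N, N, unit):.5%}")      # no spare
print(f"N+1 : {at_least(N, N + 1, unit):.5%}")  # one shared spare
print(f"N+2 : {at_least(N, N + 2, unit):.5%}")  # two shared spares
print(f"2N  : {1 - (1 - unit**N)**2:.5%}")      # two independent paths

Even this toy model shows why the shorthand alone says little: with these numbers, N+2 spares in a shared pool outperform a fully duplicated 2N system, yet 2N is usually the costlier design.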
Every time I meet colleagues, business people, engineers or technicians, the discussion centers on the benefits of cloud computing, yet the legacy standards don’t fully capture its efficiencies and fail to integrate them into a meaningful bigger picture.
We still see data center facilities
being certified as 99.995% or 99.982% available by legacy certification bodies,
and those metrics are good to know. But is it sufficient to only evaluate and
measure a facility’s infrastructure design and capabilities without any
indication of how that facility fits into the overall topology, or how well
its design is aligned with application requirements? Do such certifications inform the business as to whether that facility’s design is the best option among the alternatives, or can they only evaluate the facility at hand?
While a particular high-availability facility’s architecture may serve the
application sufficiently, perhaps a topology of facilities - each designed to a
lower availability specification - would have produced the same or better
overall result at a lower cost and with less operational complexity. Our
existing toolsets have been ill-equipped to provide such guidance.
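As a hedged illustration of that trade-off (the site availabilities and the active-active assumption below are mine, not drawn from any certification scheme), compare one highly rated facility with a pair of cheaper sites, either of which can serve the application on its own:

# Illustrative comparison: one high-availability site versus an
# active-active pair of lower-spec sites. Assumes independent
# failures and that either site alone can carry the application.

def downtime_minutes_per_year(availability: float) -> float:
    """Convert an availability fraction into expected annual downtime."""
    return (1 - availability) * 365 * 24 * 60

single = 0.99995                # one highly certified facility
site = 0.999                    # each of two lower-spec facilities
paired = 1 - (1 - site) ** 2    # down only when both sites are down

print(f"single site: {single:.6%} -> {downtime_minutes_per_year(single):5.1f} min/yr")
print(f"two sites  : {paired:.6%} -> {downtime_minutes_per_year(paired):5.1f} min/yr")

Under those assumptions, the pair of 99.9% sites yields roughly half a minute of expected downtime per year versus about 26 minutes for the single 99.995% site; this is precisely the kind of comparison the legacy certifications cannot express.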
Further, the high-availability site that is rated with all those shiny nines, with its golden fault-tolerant UPSs and generators, may have deficiencies in operational characteristics such as SLAs, SOPs, documentation, governance and personnel competency that will negatively affect its availability. Yet none of the current standards can comprehensively evaluate all of those important characteristics, both at the component level and as a holistic system, when looking at an infrastructure. The same applies to deficiencies in cyber-security, in an application’s robustness and resilience, and in the many other factors that can introduce extreme vulnerabilities into the application delivery architecture and infrastructure.
Where are the metrics that identify, evaluate, calculate, weigh and score an entity’s application delivery capabilities, globally?
Isn’t it just as important to be redundant yet remain efficient? What about sustaining capacity? How can a facility be considered available, whatever its redundancies, if it lacks the required capacity? What about safety and security? What are the prerequisites for designing a 99.995% available data center “node”?
These are
questions that industry professionals are still searching for the answers to.
There are many standards, each setting metrics for a particular technical domain, but there is no single universal standard that brings all domains under one umbrella, with all their inherent interdependencies, to create a balanced and quantified macro-view of the data center, infrastructure, information technology and cloud platforms, converging on a unified grading scheme. Such a scheme would finally let us compare apples to apples, not apples to oranges!
Often, we hear about PUE and how some organizations are driving it down to fantastic values. But how can we benchmark the PUE of a site in Alaska against one in the tropical climate of the Amazon? Aren’t such benchmarks useless for comparison purposes? What about the data center facilities that use renewable energy and innovative power generation? How do current efficiency metrics account for such approaches?
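To make the distortion concrete, here is a minimal sketch (PUE is total facility energy divided by IT equipment energy; every figure below is invented for illustration):

# Why raw PUE comparisons can mislead: identical IT loads,
# different climates, different cooling overheads.

def pue(it_kwh: float, overhead_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy."""
    return (it_kwh + overhead_kwh) / it_kwh

it_load = 1_000.0  # kWh of IT load, identical at both sites

alaska = pue(it_load, overhead_kwh=100.0)   # mostly free cooling
tropics = pue(it_load, overhead_kwh=450.0)  # chillers running year-round

print(f"Alaska-like site: PUE {alaska:.2f}")
print(f"Tropical site   : PUE {tropics:.2f}")

Identical IT loads, radically different PUE values, and nothing in the number tells you which team engineered the better plant.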
The mindset that infrastructure availability is everything has changed. The mindset that PUE comprehensively measures the efficiency of a data center, and that all we must do to save energy costs is bring our PUE down, is misleading. The mindset that power and cooling are the sole infrastructure elements is completely misguided.
We are now living in the era of modern information technology, the era of software-defined networks, virtualization and cloud computing, where everyone is focusing more and more on the end goal: the delivery of the “APPLICATION”.
Think about it!