A Standards Framework for the Applications Ecosystem – The Infinity Paradigm

The Infinity Paradigm Standards Framework from the International Data Center Authority (IDC-A.org) is the latest, and I believe the most complete, standards set for IT. Many of us have participated in efforts to create methods for measuring the value, effectiveness, and efficiency of our IT infrastructure, but I believe this to be the most complete by far. Early efforts to improve infrastructure efficiency came from The Green Grid with PUE, followed by WUE, the Data Center Sustainability Metric, and the Data Center Maturity Model. There have been other efforts to improve communication and resiliency, such as the Data Center Pulse “Data Center Stack,” and most recently Infrastructure Masons released their data center rating system. These efforts helped focus attention on, and provide the measures and metrics needed for, improved management and ownership of critical and costly IT infrastructure. In the case of the Infinity Paradigm, perhaps the most important factor is the notion of treating your environment as an “Application Ecosystem®”.
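As a reminder of what the earliest of these metrics actually measures, PUE is simply total facility energy divided by IT equipment energy, so a value of 1.0 would mean every watt reaches the IT load. The sketch below uses made-up meter readings purely for illustration:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    1.0 is the theoretical ideal (all energy goes to IT); real facilities
    land somewhere above that due to cooling, power conversion, lighting, etc.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings (kWh), not real facility data
print(pue(1_500_000, 1_000_000))  # → 1.5
```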

Why is the Infinity Paradigm® Important?

I’ve written over the years about the need to drive greater value from IT infrastructure, and more recently about the trends likely to create a forcing function for enterprises to take a closer look at best practices in how they operate that infrastructure. Before we go into how current and future trends might cause you to think twice about your infrastructure strategies, though, let’s consider the current state.

IT Infrastructure Capacity Should be Treated like Manufacturing Capacity

Today’s infrastructure operator faces a never-ending barrage of requirements: automation, more cloud, multi-cloud, hybrid cloud, data sovereignty, scale, elasticity, cost management, and even being green. These demands mean our existing ability to effectively manage, improve, and report on our infrastructure use is failing our businesses. How many of us can accurately say that we’re using only the amount and type of cooling that we need? How many of us can argue effectively that the resiliency applied to a specific area of the infrastructure or application code is appropriate based on real risk? One key source of complexity is that much of our infrastructure will likely be outsourced in the coming years. Outsourcing, however, often adds complexity of its own: you still need to understand what you’re using, how well you’re using it, and whether it’s cost effective.
As our ability to create more capacity in disparate locations continues to improve, we must have a greater capability to understand what we have and why we need it.

Current and Future Trends Impacting Infrastructure Demand

In a blog that I republished on LinkedIn in March 2016, about the Need for 400 Million Servers by 2020, I postulated that trends like IoT, among others, would create new business models driving network and infrastructure demand well past any historical linear growth trend. In a recent blog, James Hamilton of Amazon argues very effectively that demand is likely to explode beyond anything we’ve planned for, as a direct result of artificial intelligence and machine learning.

What Does This Demand Mean to Me?

The world is digitizing in ways that we still can’t easily comprehend, and this digitization is being enabled by IT that has become much easier to consume (IoT, cloud, AI/ML, VR, etc.). As IT is more efficiently acquired and more discretely priced, we find more unique ways to use it. The IT deployment strategies of most enterprises have historically tracked independently of the big web players like Facebook, Google, Microsoft, and others. As an enterprise IT organization, you could afford to just add another body, or put in a little automation here or there, but otherwise keep building IT roughly the same way we have for over a decade. The web players knew that their rapid growth and massive scale meant they needed to do things very differently, and that throwing more bodies at the problem wasn’t the answer. Over the next five years, the average enterprise will need to build and manage IT capacity in a comparable fashion to the big web players, or the business won’t survive. Having the appropriate solutions in place to allow for greater scale and more effective and efficient management, combined with a holistic approach to ownership and measurement, will be critical to your success.
The following is an excerpt from the Infinity Paradigm that provides a quick view into the assumption of need and the opportunity of use:
The Infinity Paradigm® bridges existing industry gaps by providing a comprehensive open standards framework capable of handling the complexity of the post-legacy data center age. It maintains a focus on certain unique core values: comprehensiveness, effectiveness, practicality, and adaptability. This framework will grow and evolve over time through the input and collaboration of its users around the world, continuously redefining the very essence of what a Data Center is and does. It integrates legacy standards and guidelines, remaining fully aware of their strengths and limitations, in a forward-looking and practical way. As a result, the Infinity Paradigm® provides data center designers, planners, builders, operators and end-users with a holistic, constructive approach, addressing the problems of today and challenges of tomorrow.
There’s also a very well-thought-out grading system (G0–G4) for your data center and IT infrastructure included in the Infinity Paradigm. This grading system provides end-users with a mechanism for establishing performance and availability metrics that mean more to the C-suite.

Excerpt from the Framework

Grade Levels, or “Gs,” are the method of performance classification within the various Application Ecosystem™ layers. Gs range from G4 to G0, with G4 representing the maximum allowable level of operational vulnerabilities (such as probability of failure, security risks, inefficiencies, operational lags, capacity insufficiencies, and lack of resilience), while G0 essentially mandates the total elimination of all such vulnerabilities. G0 thus represents the highest Grade Level that can be achieved at any layer of the application ecosystem.
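One way to picture the scale: if each layer’s operational vulnerabilities were rolled up into a normalized score, grades would map from G4 (the maximum allowable level tolerated) down to G0 (total elimination). The thresholds in this sketch are invented for illustration only; the framework itself defines the actual assessment criteria:

```python
def grade_level(vulnerability_score: float) -> str:
    """Map a normalized vulnerability score (0.0 = none, 1.0 = maximum
    allowable) to an illustrative G4..G0 grade.

    The evenly spaced bands below are hypothetical; they are NOT the
    Infinity Paradigm's actual criteria, just a way to visualize the scale.
    """
    if not 0.0 <= vulnerability_score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if vulnerability_score == 0.0:
        return "G0"  # total elimination of vulnerabilities: highest grade
    for grade, upper in (("G1", 0.25), ("G2", 0.50), ("G3", 0.75)):
        if vulnerability_score <= upper:
            return grade
    return "G4"  # maximum allowable level of vulnerabilities

print(grade_level(0.0))  # → G0
print(grade_level(0.9))  # → G4
```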
The Infinity Paradigm is a critical and comprehensive strategy that couldn’t come at a more important time in the evolution of IT demand and delivery. I highly recommend that you take a look, introduce the concept to your organization, and take the reins of your IT environments in a way that drives real business benefit now, while laying a solid foundation from which to scale into what looks to be a very dynamic future for IT.
