During the latter half of my career I’ve spent a lot of time working with disruptive application technologies, so I know firsthand just how dynamic and unpredictable new business workloads can be from an infrastructure-utilization perspective. Yet IT staffs are largely trying to support this new breed of applications with data center technologies, processes and procedures originally developed to manage highly repetitive, predictable transactions. The tension between twenty-first-century workloads and twentieth-century IT is almost palpable, and the answer, according to some, will be something called the software-defined environment (SDE).
*IBM Fellow Jeff Frey talking SDE*
So what’s the problem?
According to Jeff, even in our current age of massive virtualization—where the physical resources of a computer system can be stretched to support hundreds, or even thousands, of virtual images—those virtual resources are still allocated to workloads on a largely manual basis.
Does your web site need new web server instances to handle unexpected demand?
Those virtual servers have to be explicitly defined, provisioned and assigned.
Today’s automation tools can help, but they don’t fully solve the problem. Data centers need intelligent automation and optimization that uses virtualization as a foundation to deliver greater operational efficiency, flexibility and responsiveness at lower cost.
Jeff believes the answer lies in expressing the virtualized resources of an IT infrastructure as “software.” When you are able to represent hardware as software, you make the entire IT ecosystem programmable, dynamic and subject to far higher degrees of management automation than can currently be achieved.
He says that today most people view virtualization solely as a means to drive higher levels of utilization of their physical resources. We need to take the concept of virtualization a step further and treat it as a mechanism for gaining full operational control over the data center by making the infrastructure programmable.
The goal of SDE is to do just that. Jeff’s view is that SDE provides the means by which compute, storage and network resources are expressed as software and thereby made programmable. A programmable infrastructure enables these precious resources to be placed in catalogs, commissioned and decommissioned, repurposed and repositioned automatically, and augmented with “intelligence” that provides deeper resource optimization. It opens the door to unprecedented levels of agility, efficiency and alignment with service level objectives.
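To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names; no real SDE product exposes exactly this interface) of what it means to express resources as software: a catalog object through which compute, storage and network resources are commissioned, assigned to workloads and decommissioned purely by program calls.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    """A virtualized resource expressed as a software object."""
    name: str
    kind: str                       # "compute", "storage" or "network"
    assigned_to: Optional[str] = None

class ResourceCatalog:
    """Hypothetical catalog: resources are commissioned, assigned and
    decommissioned entirely through program calls, not manual steps."""

    def __init__(self) -> None:
        self._resources: dict = {}

    def commission(self, name: str, kind: str) -> Resource:
        res = Resource(name, kind)
        self._resources[name] = res
        return res

    def assign(self, name: str, workload: str) -> None:
        self._resources[name].assigned_to = workload

    def decommission(self, name: str) -> None:
        del self._resources[name]

    def available(self, kind: str) -> list:
        return [r for r in self._resources.values()
                if r.kind == kind and r.assigned_to is None]

# Commission two compute resources and assign one to a web workload
catalog = ResourceCatalog()
catalog.commission("vm-01", "compute")
catalog.commission("vm-02", "compute")
catalog.assign("vm-01", "web-frontend")
print([r.name for r in catalog.available("compute")])  # prints ['vm-02']
```

Because every operation is an API call, a management layer can drive the same catalog automatically: scaling logic, policies and audit tooling all become ordinary programs.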
Now I have yet to find a data center that is entirely standardized on a single virtualization technology. Looking at compute virtualization alone, one is likely to encounter some mix of IBM PowerVM, z/VM, KVM, VMware ESX server, MS Hyper-V and perhaps a handful of other technologies in any given data center. How is it possible to create a programming layer across such a diverse set of server virtualization technologies?
The answer, according to Jeff, comes from the OpenStack project. The IT industry, including IBM, is settling on OpenStack as the software-defined infrastructure (SDI) for SDE. The OpenStack SDI spans these virtualized compute environments and will enable the federation of heterogeneous compute, storage and network environments into a single, programmable infrastructure.
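One way OpenStack achieves this is by hiding hypervisor differences behind a common API: a client addresses a named cloud, not a specific virtualization technology. For illustration only, a `clouds.yaml` entry for a hypothetical z/VM-backed OpenStack cloud might look like this (all endpoints and credentials are invented):

```yaml
clouds:
  zvm-cloud:
    auth:
      auth_url: https://zvm.example.com:5000/v3
      username: admin
      password: secret
      project_name: production
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

The same client configuration shape would point at a KVM or ESX-backed cloud; only the entry's contents change, which is exactly the federation Jeff describes.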
In some ways this is not a new idea; it is the formalization of strategies that a number of industry leaders have been pursuing. For example, VMware’s vision for SDE is being marketed as the Software-Defined Data Center. However, the “data center” in VMware’s strategy is strictly built around x86 resources. Is x86 the only architecture in your data center?
IBM takes a broader and more inclusive view of SDE, driving SDE technology into all of its enterprise systems (such as IBM zEnterprise System), expert integrated systems (such as IBM PureApplication System) and modular and blade (such as IBM Flex System) to deliver an SDE experience that more closely matches the realities of today’s data centers.
How does SDE relate to cloud computing? Stay tuned—I’ll cover that in Part 2. In the meantime, I’ll look forward to hearing your thoughts.
Leave your comments below or connect with me on Twitter.
In the first part of this series, I interviewed Jeff Frey, an IBM Fellow and Chief Technology Officer of the System z platform, about the software-defined environment (SDE), what it is and how it can help. In this second part, we dive deeper into the SDE conversation.

How does SDE relate to cloud computing?
I have some background in cloud computing, and at this point in the conversation I was thinking that SDE was just another way to express the cloud concept of infrastructure as a service (IaaS). So I asked if the two were one and the same.
Jeff says the difference is subtle but important, and he gave a great example: “Suppose I wanted to host my new web business on Amazon’s EC2. The interfaces that Amazon exposes to me that let me choose what I need, give me access, create my account and manage my billing—that’s IaaS. Under the covers, how Amazon actually provides the resources to me that fulfill our contract—this is where SDE would help.”
In other words, SDE is the engineering and management tooling that makes it easier for an infrastructure provider to organize resources and operate efficiently; IaaS builds the business layer on top of this dynamic infrastructure.
Jeff was also quick to point out that SDE is not at all limited to cloud environments. Any IT shop that wanted to improve its business responsiveness would benefit from SDE.
What’s the role of the mainframe in SDE?
I work for the System z brand, so of course everything I do—and every blog I write—needs to tie back to IBM’s flagship zEnterprise system. So how does SDE relate to the mainframe?
The goal of SDE is to make IT infrastructure dynamic and responsive. For this to happen, the servers, hypervisors and virtualization managers that make up that infrastructure need to be able to exhibit certain characteristics, such as: multitenancy, rapid resource provisioning, elastic scaling, policy-driven resource management, shared infrastructure, instrumentation, accounting and audit trails and so on—and express these capabilities through application programming interfaces (APIs) that enable high degrees of dynamic and intelligent automation. If these characteristics don’t exist, they will have to be built in or added on.
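Policy-driven resource management and elastic scaling, two of the characteristics above, can be sketched in a few lines. The rule below is an illustrative proportional-scaling policy (the function name, target and bounds are my own assumptions, not any vendor's algorithm): given the current instance count and observed CPU utilization, it computes how many instances an SDE controller should keep provisioned to hold utilization near a service-level target.

```python
import math

def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.60,
                      min_n: int = 1, max_n: int = 10) -> int:
    """Proportional scaling rule: size the pool so average CPU
    utilization lands near the policy target, within bounds."""
    if cpu_utilization <= 0:
        return min_n
    wanted = math.ceil(current * cpu_utilization / target)
    return max(min_n, min(max_n, wanted))

# Four instances running hot at 90% CPU against a 60% target
print(desired_instances(current=4, cpu_utilization=0.90))  # prints 6
```

A controller would evaluate such a policy continuously and call the infrastructure's provisioning APIs to commission or decommission instances, which is precisely the kind of dynamic, intelligent automation those APIs exist to enable.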
The modern virtualized mainframe stack (zEnterprise and z/VM with SMAPI, for example) already exhibits many of these characteristics. All that’s needed is OpenStack integration for System z to be the cornerstone of any SDE.
There are no limits!
SDE addresses the needs of IT organizations, allowing them to manage their compute, storage and network assets programmatically and simplifying the creation of services that run on this virtualized infrastructure. Many enterprises, especially small and medium-sized firms, would like this concept of programmatic automation to extend further up the stack, into the middleware and software service layers.
IBM has lots of experience in this area, most recently exhibited in the PureApplication System line. In this recent eWeek article, IBM Distinguished Engineer Matt Hogstrom—CTO for IBM’s software-defined environments efforts—talked about how IBM intends to focus on delivering turnkey, application-ready solutions that deploy software as a service (SaaS) on top of an SDE.
There really is no limit to the automation of a data center built on SDE technologies. A software-defined environment can help IT shops bring order to chaos, allowing virtualized IT resources to be dynamically and programmatically assigned based on the characteristics of the workload requesting them. Twenty-first century applications running on twenty-first century infrastructure!
What do you think—is SDE an imminent reality or marketing hype? Will it help you deliver better value to your business? Add your comments below or engage me on Twitter, @PaulD360.
Paul DiMarzio has over 25 years of experience with IBM focused on bringing new and emerging technologies to the mainframe. He is currently part of the System z Growth business line, with specific focus on cross-industry business analytics offerings and the mainframe strategy for the insurance industry. You can reach Paul on Twitter: @PaulD360