From Private to Multi to Hybrid Cloud in the Enterprise - What, Why & How by James Kelly

According to recent RightScale and Gartner surveys on cloud, hybrid cloud is the goal for about 75% of enterprises, with growing emphasis on near-seamless compatibility with the rapidly expanding public cloud side. Let's break down the differences between private cloud, multi-cloud, and hybrid cloud, so we can define them, investigate why hybrid is the destination, and see how to climb to such hybrid cloud heights.

Private cloud: In this case we're looking at a single in-house data center, or a single site of data centers, orchestrated as one internal private cloud. This is a building block for hybrid and multi-cloud.

Although my blog title begins with private, private clouds are less common than you may think, because IT shops often puff up their chests, so to speak, and inflate their cloud egos by calling their virtualized data centers private clouds. Knowing that enterprises and vendors do "cloud wash," mixing virtualized and software-defined data center messaging with cloud messaging, we probably ought to define our cloud in less "cloudy" terms. The NIST definition of cloud computing offers a concrete definition that includes the characteristics of on-demand self-service, broad network access, multi-tenant resource pooling, rapid elasticity, and measured service (SDN-era side note: notice how the values behind these traits align with the values of SDN; I guess that's a separate blog, though). While cloud service and deployment models differ, these traits do not. Put differently, one can think of the characteristics of prominent public clouds like AWS and ask whether one's private data center IT meets those same principles. On the journey to truly achieve these cloud characteristics, different enterprises are at different places and moving forward at different speeds.

I often liken the journey to climbing Mount Everest. There are a number of basecamp milestones that one usually passes en route to the peak. Most enterprises are taking these milestones as steps in turn from their history of being entrenched in the valleys of IT silos. Some have the luxury of little legacy IT and are able to leapfrog to higher basecamps, just like some climbers take a helicopter ride as far as possible up the mountain. In any case, the path is neither simple nor straightforward, and without proper preparation and future-proofing for the final destination, what gets you part of the way, as in climbing, may not get you to the summit.

Multi-cloud: Multi-cloud is the notion of multiple private clouds, interconnected and operated with unified monitoring, management, and automation across globally distributed sites.

Hybrid cloud: A lot of enterprises (at least those not constrained by internal data-locality regulations) are already using public IaaS and PaaS cloud resource elasticity, or virtual private clouds managed or hosted by a public provider. To elevate their game to hybrid cloud, they need a better way to integrate that management, or in other words to hybridize it, with their multiple private clouds. Like the multi-cloud notion above, they seek to unify and orchestrate applications, software platforms, and infrastructure resources with a centralized system working across private and public clouds.

Why Hybrid?

The purely private, in-house multi-cloud model described above necessitates a capacity reserve and capital expenditure beyond the current needs of the organization. Furthermore, private clouds don't offer the public cloud's as-a-service economics of choosing OpEx over CapEx. Beyond economics, the behemoth size and multitude of public cloud providers offer enterprises of any size a way to increase the resiliency and globalization of their IT footprint.

Personally, I relate to hybrid cloud like my credit cards: they give me the elasticity to buy whatever I need on credit rather than from savings, which can be a lifesaver in an emergency. For a business, which invests rather than merely buys, such elasticity is crucial to the agility needed to seize a business opportunity. As they say, "luck favors the prepared." By fully realizing hybrid cloud, enterprise IT can prepare, by design, for near-infinite scale.

Getting to the summit: Realizing Hybrid

Getting to real hybrid cloud requires embracing the open source ecosystem. I realize that’s a bold statement. I don’t mean enterprises need to partake in open source projects, but they do need to heed them. Let me explain.

There is a saying going around that I like: "cloud is the new computer." You'll recall the terminal-mainframe model, then the client-server model, and now the app-cloud model. Cloud is the new application backend.

We know that COTS servers and storage are the assumed basis for cloud hardware. The network hardware is best served by providing a standard (ubiquitous, like COTS) IP fabric, as long as it is scalable. Don't read this as me saying there's no innovation happening in cloud hardware. Quite the opposite is true, but the interface it provides is increasingly homogeneous.

The software side of cloud infrastructure is shaping up much the same way, whereby a homogeneous platform will benefit applications (the ultimate driver of IT value) and devops (which drives application value faster). Remember the server software battle between Windows Server, the UNIX variants, and Linux? When Linux became the clear winner, it provided an obvious platform of choice for application developers. Much the same thing is happening today in cloud. If cloud is the new computer, ask yourself what will be the new cloud operating system.

The only cloud management platform (aka cloud OS) gaining real momentum this year is OpenStack. Just past its fourth birthday, it now has many robust distributions. I think there is a parallel to be drawn between the previous battle and this one for the cloud operating system: the OS with the largest open source following won. The reasons why open source wins this battle are many, and nowhere is openness more important than for cloud infrastructure, on which enterprises require application, application-platform, and devops-automation portability across multiple private clouds and the public clouds. For enterprise cloud, the choice of an OpenStack-based cloud-OS foundation clearly aligns them for hybrid cloud success because, by design, the OpenStack API hybridizes well with the leading public clouds, namely AWS, GAE/GCE, and increasingly Azure. Furthermore, OpenStack is increasingly the cloud OS of choice for the other public, managed, and hosted cloud providers like Rackspace.

To summarize what Randy Bias at Cloudscaling covers in his Hybrid IT webinar: running OpenStack for the private cloud is the simplest path to hybrid cloud, which, for the portability described above, requires approximate parity between clouds along two axes of a standard infrastructure and interface: functional and non-functional. Functional compatibility is required in APIs, in infrastructure (e.g. object storage, block storage, compute, VPC networking), and in other behavior (e.g. configuration semantics and default settings). Non-functional compatibility centers on availability, performance, and QoS. A third axis is economics: public and private clouds must be roughly comparable in cost, so that neither is prohibitively expensive to use. OpenStack doesn't guarantee these parities, but it is the best place to start, because a number of distributions and integrating technologies (even from VMware) are focused on them, understanding that hybrid cloud is the best objective for customers and that these parities drive sales.
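As a small illustration of what functional API parity buys you in practice, here is a hypothetical openstacksdk-style clouds.yaml sketch (all names, endpoints, and project details below are made up): the same OpenStack client tooling and automation can target the in-house cloud or an OpenStack-based hosted provider simply by selecting a different cloud entry.

```yaml
# Hypothetical clouds.yaml: one config file, multiple OpenStack-compatible clouds.
clouds:
  private-dc:                       # in-house OpenStack private cloud
    auth:
      auth_url: https://keystone.example.internal:5000/v3
      project_name: webapps
      username: deployer
    region_name: RegionOne
  hosted-provider:                  # OpenStack-based hosted/public cloud
    auth:
      auth_url: https://identity.provider.example.com:5000/v3
      project_name: webapps
      username: deployer
    region_name: US-East
```

Because both endpoints speak the same OpenStack API, a command like `openstack --os-cloud private-dc server list` differs from its hosted-provider counterpart only in the cloud name; that is exactly the functional parity the portability argument depends on.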

I have been saying for a while that cloud infrastructure is simply too important and ubiquitous not to innovate in open source, with open standards and open interfaces, but I must say, I love how Jim Whitehurst at Red Hat draws an analogy for executives from the national interstate highway system to talk about standardization.

So, back to my original statement: "hybrid cloud requires embracing the open source ecosystem." Why "requires"? After all, couldn't we have a highway system that isn't all freeways, but instead all express toll routes? I wouldn't want to, but I suppose we could. Such highways wouldn't be as popular as they are, and they wouldn't have revolutionized the logistics industry and our travel and commute accessibility.

As I indicated above, the cloud transformation of enterprise IT is done for the sake of a few things. Firstly, it's motivated by devops automation, which is built out of programmability, works best with open source, and whose portability is hampered by heterogeneous APIs. Secondly, it's motivated by applications and application platforms, many of which are themselves increasingly open source and designed for cloud-native scale-out. Those open source developers will align with communities of their open source kin when choosing cloud OS APIs, so as to optimize their applications for cloud. Furthermore, choosing enterprise cloud infrastructure is vastly more important than choosing something like, say, a smartphone. We know that open-source Android is a great model for application portability across phone vendors, and it has dwarfed Apple iOS in worldwide popularity. Still, there are a lot of iPhones out there, and some people have happily bought into, and locked themselves into, Apple's ecosystem. Getting bought and locked into closed cloud infrastructure is prohibitive for an enterprise on a far more massive scale of investment, and it's especially unwise if you're betting against the public cloud providers building their clouds on open source-based infrastructure.

Please leave your comments and thoughts below on what you think will rise to the top as the new cloud operating system. I know it is a journey of a thousand miles, and some enterprises are just taking the first step. Heck, some are still looking at the map as the technology landscape evolves! Whatever the case, it is a journey that will demand the best of our abilities as we strive to plant our enterprise's flag at the summit amongst the clouds.

This was originally published on the Juniper Networks blog and ported to my new blog here for posterity.

What is Open Good For? by James Kelly

Last week, in a conversation with a customer, I was asked about the differences between ODL and OpenContrail, and whether NorthStar was going open source as well. That discussion seemed like interesting material for a blog on approaching "open" networking, especially given that we've just come through the Open Networking Summit earlier this month.

Open can manifest in a lot of ways: chiefly, in my mind, as open source, open standards, and open interfaces. Personally, "open interfaces (APIs)" seems like a misnomer to me, but it seems too late to change that now.

Open standards have always been the underpinning of network component interoperability. The degree to which networks are interoperable is the degree to which they can peer and grow into larger networks, so this is vital for networks in a world with explosive desire for connectedness of people and things all the time and everywhere.

In developing OpenContrail and NorthStar, you'll notice the implementation bias toward proven, existing open standard protocols like MBGP, rather than toward new protocols like OpenFlow, for example. This is important because we're building platforms AND products. Customers buying a product care about investment protection and an evolution from where they are today to the future they will realize as they roll the product out more and more. Notice how this is at odds with pure technologists' desire to optimize the technology, regardless of whether reinvention is actually necessary. When building products, we simply must incorporate a bias toward evolution over revolution.

Open source is another topic, with its own virtues and problems. The ecosystem and community built around a cause are wonderful for innovation. Projects like Linux and OpenStack offer customers a greater choice of vendor for commercial support, with some flexibility to change between vendors and to leverage innovations across them more easily than with wholly closed-source vendor solutions. On the challenges side, open source ecosystems tend to be driven by technologists, and they don't necessarily rally around specific customer business problems. It takes vendors to focus on the customer, and with that does come some lock-in. Open source doesn't mitigate all vendor lock-in, as some would tout, but open (common) interfaces, a frequent byproduct of open source projects, do help in this regard.

Open interfaces help to create isolation between layers of the software stack; thus, we get the option to switch out and recompose the layers, and that option truly lowers vendor lock-in. Interfaces effectively decouple. Decoupling can also be built into systems by choice. For example, Juniper designed OpenContrail to rely on nothing from the physical network except IP reachability. That creates true freedom for the customer, breaking open an area where lock-in may otherwise set in.
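To sketch that decoupling idea in code (a minimal, hypothetical example; the interface and class names here are mine, not from any real SDK), the application logic below is written against a small interface, so either backend can be swapped in without touching that logic:

```python
from abc import ABC, abstractmethod

class CloudCompute(ABC):
    """The open interface: application code depends only on this contract."""
    @abstractmethod
    def launch(self, name: str) -> str: ...

class PrivateCloud(CloudCompute):
    def launch(self, name: str) -> str:
        # In a real system this would call the private cloud's API.
        return f"private:{name}"

class PublicCloud(CloudCompute):
    def launch(self, name: str) -> str:
        # In a real system this would call a public provider's API.
        return f"public:{name}"

def deploy_app(cloud: CloudCompute) -> str:
    # Application logic is decoupled from any one provider's implementation.
    return cloud.launch("web-tier")

# The same deployment code runs unchanged on either backend.
print(deploy_app(PrivateCloud()))  # private:web-tier
print(deploy_app(PublicCloud()))   # public:web-tier
```

The lock-in-lowering option is visible in the last two lines: switching providers means swapping the object passed in, not rewriting `deploy_app`.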

Are all these axes of openness needed or best applied universally? Probably not. When you look at cloud infrastructure, open source is trending strongly. The ETSI NFV ISG is even building its reference architectures on open source infrastructure, and in general, keeping IT up to speed with the public clouds, Web 2.0 companies, and hyperscale enterprises demands the pace of innovation that an open source ecosystem provides. OpenContrail fills this demand perfectly, coming to the open source cloud infrastructure party with a lot to offer because it was incubated to maturity before launching. When it comes to applying SDN paradigms in the WAN (also read: backbone/core), this open source party simply doesn't exist. Therefore, I don't see a need to open source Juniper's NorthStar controller. By incubating it in-house until its later release, we will ensure we align all of our internal technologists (Juniper is known for wickedly smart and passionate engineers) in the direction of customer needs.

I know I still haven't answered with respect to OpenDaylight (ODL) versus OpenContrail and NorthStar. There are a bunch of things to say, like how Juniper Contrail uses the OpenContrail code exactly (not just as a basis to hack into a product), but my favorite part is my car analogy. OpenDaylight is positioned as a platform for SDN and innovation, but these are very broad goals. SDN and innovation span networking domains, but for totally separate solution areas, like high-IQ WAN networks on one hand and cloud infrastructure for virtualized data centers and the service-enabled SP edge on the other, we are best served by separate but complementary products. As I said to our customer, "If you had an F1 race on the track and also an off-road race on a course with obstacles, then you would take separate cars, that is, if you want to win."

This was originally published on the Juniper Networks blog and ported to my new blog here for posterity.