The Best SDN for OpenStack, now for Kubernetes by James Kelly

In between two weeks of events as I write this, I stopped at home and in the office for only a day, but I was excited by what I saw on Friday afternoon… The best SDN and network automation solution out there, OpenContrail, is now set to rule the Kubernetes stage too. With the title of “#1 SDN for OpenStack” secured for the last two years, the OpenContrail community is now ready for Kubernetes (K8s), the hottest stack out there, and a passion of mine for which I’ve been advocating more and more attention around the office at Juniper.

It couldn’t be better timing, heading into KubeCon / CloudNativeCon / OpenShift Commons in Berlin this week. For those that know me, you know I love the (b)leading edge of cloud and devops, and last year, when I built an AWS cluster for OpenContrail + OpenShift / Kubernetes and then shared it in my Getting to #GIFEE blog, github repo, and demo video, those were early days for OpenContrail in the K8s space. It was work pioneered by an OpenContrail co-founder and a few star engineers to help me cut through the off-piste mank and uncharted territory. It was also inspired by my friends at tcp cloud (now Mirantis), who presented on smart cities / IoT with K8s, OpenContrail, and Raspberry Pi at the last KubeCon EU. Today, OpenContrail, the all-capability open SDN, is ready to kill it in the Kubernetes and OpenShift spaces, elevating it to precisely what I recently demo-shopped for in skis: a true all-mountain top performer (BTW, my vote goes to the Rosi Exp. HD 88). Now you know what I was doing when not blogging recently… Skiing aside, our demos are ready for the KubeCon audience this week at our Juniper booth, and I’ll be there talking about bringing a ton of extra value to the Kubernetes v1.6 networking stack.

What sets OpenContrail apart from the other networking options for Kubernetes and OpenShift is that it brings an arsenal of networking and security features to bear on your stack, developed over years by the performance-obsessed folks at Juniper and many other engineers in the community. In this space, OpenContrail often competes with smaller SDN solutions offered by startups, whose purported advantage is that they are nimbler and simpler. This has been particularly true of their easy, integrated installation with K8s (coming for OpenContrail very soon), but on the whole, their advantage, across the likes of Contiv, Calico, Flannel, Weave, etc., boils down to two things. Let’s have a look…

First, it is easy to be simpler when primitive. This isn’t a knock on the little guys. They are very good for some pointed use cases, and indeed they are simple to use. They do have real limits, however. Simple operation can always come from fewer knobs and fewer capabilities, but I believe an important goal of SDN, self-driving infrastructure, and cognitive / AI in tech is abstraction; in other words, simplicity needn’t come from stripping away functionality. We need only start with a good model for excellent performance at scale and design for elegance and ease of use. It’s not easy to do, but have you noticed that Kubernetes itself is the perfect example of this? It was super easy two years ago, still is, but at the same time there’s a lot more to it today. It’s built on solid concepts and architecture, from a lot of wisdom at Google. Abstraction is the new black. It’s open, layered, and transparent when you need to peel it back, and it is the way to manage the complexity that arises from features and concepts that aren’t for everyone. General arguments aside, if you look at something like K8s ingress or services networking / load balancing, none of these SDN solutions cover that, except for very pointed solutions like GKE or nginx, where you’d still need one or more of the smaller SDN tools. Furthermore, when you start to demand network performance on the control (protocol/API) and data (traffic feeds and speeds) planes at scale, along many other K8s metrics, the benefits that you get for free with OpenContrail really stand apart from the de facto K8s networking components and these niche offerings. Better still, you get it all as one OpenContrail solution that is portable across public and private clouds.
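
To make the services point concrete, here is a minimal sketch (using the official Kubernetes Python client; the name and ports are hypothetical) of the LoadBalancer Service object that any networking layer must somehow implement. The API object is the same everywhere; what differs is whether kube-proxy, a cloud balancer, or an SDN like OpenContrail (vRouter ECMP plus HAProxy) answers for it.

```python
from kubernetes import client, config  # official Kubernetes Python client

config.load_kube_config()  # assumes a working kubeconfig for your cluster
core = client.CoreV1Api()

# A plain Service of type LoadBalancer; "web" and the ports are made up.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},              # pods this service fronts
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="LoadBalancer",                  # implemented by the SDN/cloud
    ),
)
core.create_namespaced_service(namespace="default", body=svc)
```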

Second, other solutions are often focused solely on K8s or CaaS stacks. OpenContrail is an SDN solution that isn’t just for Kubernetes, nor just for containers. It’s one ring to rule them all. It works in adjacent CaaS stack integrations with OpenShift, Mesos, etc., but it’s equally if not more time-tested in VMware and OpenStack stacks, and even in bare-metal or DIY orchestration of mixed runtimes. You might have heard of publicized customer cases for CaaS, VMware, and OpenStack across hybrid clouds, continuing with the same OpenContrail networking irrespective of the PaaS and CaaS stacked upon public/private IaaS or metal (here are a few 1, 2, 3). It’s one solution unifying your stack integration, network automation, and policy needs across everything. In other words, the only-one-you-need all-mountain ski. The only competitor that comes close to this breadth is Nuage Networks, but they get knocked out quickly for not being open source (without tackling other important performance and scale detail here). If the right ski for the conditions sounds more fun to you, like the right tool for the right job, then wait for my next blog on hybrid cloud… and for what I spoke about at IBM Interconnect (sorry, no recording). In short, while having lots of skis may appeal for different conditions, having lots of networking solutions is a recipe for IT disaster, because the network foundation is so pervasive and needs to connect everything.

With OpenContrail developers now making it dead easy to deploy with K8s (think kubeadm, Helm, Ansible…) and seamless to operate with K8s, it’s going to do for Kubernetes networking, very quickly, what it did for OpenStack networking. I’m really excited to be a part of it.
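
For a feel of that deployment flow, here is a hedged sketch of the kubeadm path; the pod CIDR and the manifest URL below are placeholders of my own, not real OpenContrail artifact locations.

```python
import subprocess

# Bootstrap a control plane with kubeadm (the CIDR is a placeholder choice).
subprocess.run(["kubeadm", "init", "--pod-network-cidr=10.32.0.0/12"],
               check=True)

# Then apply the SDN's manifest; this URL is hypothetical, for shape only.
subprocess.run(["kubectl", "apply", "-f",
                "https://example.com/opencontrail-k8s.yaml"],
               check=True)
```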

You can visit the Juniper booth to find me and hear more. As you may know, Juniper is the leading contributor to OpenContrail, but we’re also putting K8s to work in the Juniper customer service workbench and the CSO product (our NFV/CPE offering). Moreover, and extremely complementary to Juniper’s Contrail Networking (our OpenContrail offering), Juniper acquired AppFormix just four months ago. AppFormix is smart operations management software that will make your OpenStack or K8s cluster ops akin to a self-driving Tesla (a must if your ops experience is about as much fun as a traffic jam). Juniper’s AppFormix will give you visibility into the state of workloads, both apps and software-defined infrastructure stack components alike, and it applies big data and cognitive machine learning to automate and adapt policies, alarms, chargebacks, and more. AppFormix is my new jam (with Contrail PB)… it gathers and turns your cloud ops data into insight, and the better and faster you can do that, the better your competitive advantage. As you can tell, KubeCon is going to be fun this week!

DEMOS DEMOS DEMOS

+update 3/28
Thanks to the Juniper marketing folks for a list of the demos that we'll be sharing at KubeCon:

  1. Contrail Networking integrated with a Kubernetes cluster (fully v1.6 compatible):
    • Namespaces/RBAC/Network Policy security improved with OpenContrail virtual networks, VPC/projects, and security groups (YouTube demo video; see the NetworkPolicy sketch after this list)
    • Ingress and Services load balancing with OpenContrail HAProxy, vRouter ECMP, and floating IP addresses, improving performance (removing kube-proxy) and consolidating the SDN implementation (YouTube demo video)
    • Connecting multiple hybrid clouds with simple network federation and unified policy
    • Consolidating stacked CaaS on IaaS SDN: optimized OpenContrail vRouter networking "passthru" for Kubernetes nodes on top of OpenStack VMs
  2. Contrail Networking integrated with an OpenShift cluster (YouTube demo video)
  3. Contrail Networking containerized, with easy deployment in Kubernetes
  4. AppFormix monitoring and analytics for Kubernetes and workload re-placement automation
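
As a taste of the first demo, here is a minimal sketch of a NetworkPolicy via the official Kubernetes Python client; the labels are hypothetical, and note that in the v1.6 era this API still lived in extensions/v1beta1 rather than the networking.k8s.io/v1 used below.

```python
from kubernetes import client, config  # official Kubernetes Python client

config.load_kube_config()
netv1 = client.NetworkingV1Api()

# Allow ingress to "web" pods only from pods labeled role=frontend; the
# SDN (e.g. OpenContrail security groups) enforces this in the dataplane.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="web-allow-frontend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(
                    match_labels={"role": "frontend"}),
            )],
        )],
    ),
)
netv1.create_namespaced_network_policy(namespace="default", body=policy)
```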

I'll light these up with links to YouTube videos as I get the URLs.

The Road to Intent-Driven Infrastructure Is Paved with Confusion by James Kelly

Winding down 2016, I’ve participated in a few conferences, KubeCon / CloudNativeCon and the Gartner Data Center Infrastructure conference. I’ve also worked on the acquisition of AppFormix by Juniper to give us a leg up in cloud analytics based on big data and machine learning. The sum of my learning and conversations has sharpened my analysis of, and skepticism about, some commonly encountered and complementary hyped topics:

  • Intent-driven infrastructure/networking
  • Machine/deep learning-based autonomous infrastructure
  • Self-driving infrastructure/networking
  • Declarative infrastructure and infrastructure as code

With more freshness and luster than the long-in-the-tooth SDN, NFV, cloud, and XaaS, all of these topics are already well traveled, having racked up millions of blog and PowerPoint miles in 2016, but I often see these topics and terms conflated or interchanged as if they were the same thing.

Based on my recent discussions at the Gartner conference and some of the networking pundits I read, the most anticipated and highest form of infrastructure enlightenment is intent-driven infrastructure. This is mostly covered as intent-based or intent-driven networking in my circles, but it certainly can be applied more broadly as an approach to systems. Unfortunately, “intent…,” in my observation, is the term with the most offenders misusing it, abusing it, substituting it or mixing it with terms like automated, policy-based, declarative, or smart/intelligent infrastructure.

Embellishments and lofty labeling aside, I believe the term “intent” is easily and often abused because it is a precarious business to understand intent at all. With the way I normally use the word “intent,” doing something based on intent seems like it could be an exercise in mind reading.

If you’ve seen that and scratched your head, then you’re not alone. Seeking to better understand this myself, I undertook a journey of reading and research, looking especially at intent in its truer form as studied in psychology. In this blog, I suss out a few of the difficult aspects of understanding intent, searching for key tenets and considerations for constructing an intent-based and intent-driven technical system. However, instead of keeping it all technical, I look for support for these tenets by reflecting on, and drawing inspiration from, intent in its traditional sense.

What is intent?

A logical place to start is in defining and really understanding the terms.

When we read about intent, we find the dictionary definition: one’s purpose or meaning. Have you ever been on the bad end of a miscommunication where the intent didn’t come across properly? It happens all the time, because intent is generally communicated as much by what we don’t say and don’t need to consider as by what we do express. Miscommunications and mistaken assumptions won’t do for technical applications, unfortunately.

So can we apply intent or user intent to technical applications? Certainly not in the general way that we define intent, because intent is never outwardly expressed. Rather, it is the internal precursor to what we say or do, or even more generally who we are. Nevertheless, technocrats are well practiced at making inspired metaphors, so let’s go with it and find out what we can usefully borrow from what we know of intentions in their true forms.

Studying intent a little deeper with the help of psychological research, we find out where intentions come from and get some description of the unspoken context that makes up intention. Here’s a snippet lifted from Wikipedia, where there is copiously more detail if you’re interested.

“intentions to perform the behavior appear to derive from attitudes, subjective norms, and perceived behavioral control.”

From this we can say a few things about the context surrounding intentions:

  • Our intentions are shaped by the state of our attitudes (which may be capricious)
  • We don’t usually intend to act outside of social norms, especially not outside of the law
  • We don’t intend to act beyond our perceived capabilities and knowledge

What is intent-based and intent-driven?

To apply the idea of intent to a technical system, these two terms come up in recent language: intent-based and intent-driven.

When I hear these, I think of configuration in different regards.

With respect to intent-based, we can say that the basis of the system (that is, its foundational, more static configuration) should be in the form of user intent.

With respect to intent-driven, we can say that ongoing execution and more-dynamic control should be in accordance with user intent and the intent of interfacing systems that help “drive” the system. For technical applications, I take this to mean that the way a system interprets and orients its own APIs and self-feedback loops is itself a form of reconfiguration, one that must stay aligned with the intent-based user configuration and with the intent of other system actors.

Of the two terms, let’s continue with intent-driven because it encompasses intent-based.

Expressing Intent to Technical Applications

Using the abovementioned definition and understanding of intent, we can translate some of the findings to come up with key tenets for intent-driven systems and their configurations.

If intent is a purpose or meaning behind a behavior, then it isn’t the behavior itself (words, actions…). It’s the meta behavior. It’s inherently abstract in the what, and doesn’t really consider many details of the how or the motives of the why, but it will certainly have contextual requirements for the why, what, and how. I believe the when, where, and who are usually abstract considerations as well, if not explicit in our intentions, but this varies.

The first thing that falls out of this, in many people’s observations, is the notion of abstraction.

Since in human communication our words and actions closely approximate the intent, while we abstract away unnecessary detail, we can say that abstraction is an important aspect to translate to technical applications of intent.

Abstraction isn’t a new desirable tenet of technical systems either. We talked about it for years as a desirable aspect of SDN systems in the networking industry. Abstraction is a double-edged sword, however.

Abstraction allows us to encapsulate and hide detail, which provides flexibility. Losing specificity, however, introduces unpredictability. Unpredictability and variability in the way we act out our own intentions is exactly why miscommunications happen. It is also why many of us never achieve our intents. Another interesting point borrowed from the psychology of personal development is that a lazy man has intents, while an achiever has goals. When it comes to goals, the more clarity, specificity, and measurability we have, the more likely we are to follow through and stay on course.

The difference with technical systems is that there is no question of follow-through or execution; systems execute, and don’t have lazy days. The value abstraction provides is that, for a given set of abstract user input, we have many ways to achieve a result without changing the input or its form. This flexibility is important because it allows us to avoid lock-in and to innovate, creating new ways to get the same result. New and different results that still meet the abstract intention, however, may or may not be so welcome, so we need to strike the right balance of abstraction and very likely ascend to grander abstractions in layers… more on this below.

Moving on: if we have to express the meta behavior, the intent, directly, then we should not specify the how, but rather the where, when, who, and especially what. An intent can certainly embody these things, and of course systems deal in these too. This important aspect points us to adopt the virtues of a configuration that is declarative for technical applications, as opposed to imperative directions that control the detail of the how.
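
To make the declarative / imperative distinction concrete, here is a toy sketch of my own (not any particular product’s code) of the control-loop pattern declarative systems like Kubernetes use: the user declares the what, and a reconciler owns the how.

```python
import time

# Declarative intent: the user states *what* they want, not how to get it.
desired = {"replicas": 3}

# Toy stand-in for the observed state of the world.
actual = {"replicas": 1}

def reconcile(actual, desired):
    """The system owns the 'how': converge observed state toward intent."""
    diff = desired["replicas"] - actual["replicas"]
    if diff > 0:
        print(f"scaling up: starting {diff} instance(s)")
    elif diff < 0:
        print(f"scaling down: stopping {-diff} instance(s)")
    actual["replicas"] = desired["replicas"]

# A control loop re-evaluates intent continuously instead of running a
# one-shot imperative script, so drift gets corrected automatically.
for _ in range(3):
    reconcile(actual, desired)
    time.sleep(1)
```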

We also learned that our own intentions are guided by subjective norms and law. If we had to expressly communicate those for a technical application, they would translate to policy-oriented configuration constraints. Subjective norms may also change based on the environment, time, actors, etc. It would be useful if these variable norms could also be policy adjustments resulting from the insight of self-learning technology, with or without human verification.
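
A minimal sketch of what such policy-oriented constraints could look like in code; the policy fields and limits here are entirely hypothetical, for illustration only.

```python
# Hypothetical policy standing in for "subjective norms": limits the
# system must honor no matter what intent a user declares.
POLICY = {"max_replicas": 10, "allowed_zones": {"us-east", "us-west"}}

def admit(intent):
    """Reject intents that fall outside the policy envelope."""
    if intent["replicas"] > POLICY["max_replicas"]:
        raise ValueError("replica count exceeds policy limit")
    if intent["zone"] not in POLICY["allowed_zones"]:
        raise ValueError("zone not permitted by policy")
    return intent

print(admit({"replicas": 3, "zone": "us-east"}))  # accepted
```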

When it comes to human intentions being restricted to what is possible and within our capabilities, I think the technical translation easily points to what the system is programmed and resourced to do. I don’t think this is a key tenet by any means, because it is so obvious. Interestingly, some technical systems can grow new capabilities through extensibility or training; those new capabilities are basically new programming, and we would likely have to reshape the form of the input intent to take advantage of them. This is similar to how we change our own intentions based on new knowledge and capability.

What about our intents being shaped by our attitudes?

Just Deal with It

How could an attitude bias inspire a technical translation? Attitudes are our internal states, moods, and feelings. Systems have internal states at least. This aspect could simply mean that when a system executes to realize an intent, it takes its own internal state into consideration.

Have you ever had to do something when you were tired, sad, or frustrated? Sometimes humans also intend to achieve things that are in conflict with our own attitudes, and that even require us to change our attitudes. This is kind of like a system dealing with its own internal state and feeding that back to itself to carry out the intention in spite of its state. To me this reveals an interesting tenet of the inner workings of an intent-driven system. Such systems should be self-aware of their ability to meet their purpose, using real-time telemetry and analytics about their own performance, scale, and productivity. They should automatically pace, scale, and heal themselves in accordance with the input intent they accept.

Architecting Intent-Driven Systems

Above, we covered where, when, who, what, and how, but not the why of an intent, and yet an important aspect of an intent is that it also embodies the why, the purpose or meaning behind it. It’s not something we are as used to thinking about for technical applications. When we look at the why of any real intention we have, we get another intention, and we can recursively ask why again until we climb up to our core worldview, morals, values, and beliefs. Taking inspiration from that points us in the direction I hinted at above when I mentioned layers. A system that is acting on an intent is really just focused on a single purpose, but often in service of a larger purpose. This points us to architecting our systems with composability, decomposing systems into services, subservices, and microservices insofar as we can still clearly define small, purposed systems without getting into the minutiae.

Where to Start

There is a lot of intent-driven research and development happening in networking, but that is really a bad place to start thinking about intent overall because, as above, when we ask why for an intent-driven network system, we ascend to a higher level. There is no network built out of the intent to build a network, except as a pure science project. Intent never describes a network; intent describes connectivity, with explicit or implicit criteria like security norms, latency, and bandwidth.

Of course that intent is in service of a larger intent. If we keep asking why, we will ascend to what is really of value. If we take an enterprise as an example, the value a business gets from technology is derived from applications. Infrastructure and network connectivity sit farther down the chain of value and intent, past the apps and the development and orchestration systems that manage the infrastructure and connections those apps require.

In short, looking up is the best place to start. If a system is down the value chain, we need to consider what will encompass it. Generally, that points to the application stack first, and second to the ITSM or I&O management tools that should take business policy and governance intent as additional constraints.

In Summary

After a closer scientific look at intention, I’m now a lot clearer about intent-driven systems. We’ve extracted a few key tenets about intent-oriented system configurations and intent-driven systems themselves.

While there is value in each one of these aspects individually, it is only when they are applied collectively that we can really say “intent-driven.” So if we go and make intent-driven systems, are customers going to be clamoring for them? Is it worth all of the hype? I don’t think so. The systems developers will get value in organizing this way, but users are never going to get out of bed in the morning and say to themselves, “I’ve got to get an intent-driven X.” Users and businesses are always going to be sold on meeting their own higher intentions.

Meet the Fockers! What the Fork? by James Kelly

In the past week, all the rage (actual hacker rage in some forums) in the already hot area of containers has been about forking the Docker code, and I have to say, “what the fork?”

What the Fork?

Really!? Maybe it is because I work in IT infrastructure, but I believe standards and consolidation are good and useful...especially “down in the weeds,” so that we can focus on the bigger picture of new tech problems and the apps that directly add value. This blog installment summarizes Docker’s problems of the past year or so, introduces you to the “Fockers” as I call them, and points out an obvious solution that, strangely, very few folks are mentioning.

This reminds me of the debate about overlay networking protocols a few years back – only then we didn’t blow up Twitter doing it. This is a bigger deal than overlay protocols, however, so my friends in networking may relate it to deciding upon the packet standard (a laugh about the joy of standards). We have IP today, but it wasn’t always so pervasive.

Docker is actually decently pervasive today as far as containers go, so if you’re new to this topic, you might wonder where the problem is. Quick recap… The first thing you should know is that Docker is not one thing, even though we techies talk about it colloquially as if it were. There’s Docker Inc., Docker’s open source GitHub code and community, and the Docker products. Most notably, Docker Engine is usually just “Docker,” and even that is actually a CLI tool and a daemon (background process). The core tech that made Docker famous is a packaging, distribution, and execution wrapper over Linux containers that is very developer and ops friendly. It was genius, open source, and everyone fell in love. There were cheers of “lightning in a bottle” and “VMs are the new legacy!” Even Microsoft has had a whale of a time joining the Docker ecosystem more and more.
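
To ground the CLI-plus-daemon point, here is a minimal sketch using docker-py, the Python client for the Docker Engine API, assuming a local daemon is running; the docker CLI is just another client of this same API.

```python
import docker  # docker-py, a client for the Docker Engine API

# Connect to the local daemon (dockerd), typically over its UNIX socket.
client = docker.from_env()

# One call pulls the image, then creates, starts, and waits on a container.
output = client.containers.run("alpine:latest", ["echo", "hello, container"])
print(output.decode())
```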

Containers, and the good tech around them, have absolutely let tech move faster, transforming CI/CD, devops, cloud computing, and NFV. Well… it hasn’t quite hit NFV absolutely yet, but that’s a different blog.

Docker wasn’t perfect, but when they delivered v1.0 (summer 2014), it was very successful. But then something happened. Our darling started to make promises she didn’t keep and, worse, she got fat. No, not literally, but Docker received a lot more VC funding and grew, organically and with a few acquisitions. This product growth took them beyond just running and wrapping containers, into orchestration, analytics, networking, etc. (Hmm. Reminds me of VMware actually. Was that successful for them? :/) Growing adjacent to or outside of core competencies is always difficult for businesses, let alone doing it under the pressure of having taken VC funding and being expected to multiply it. At the same time, as Docker started releasing more features more frequently, quality and performance suffered.

By and large, I expect that most folks using Docker weren’t using it with support from Docker Inc. or anyone else, but nonetheless, with Docker dissatisfaction, boycotts, and breakups starting to surface, in came some alternatives: fairly new entrants CoreOS and Rancher, with more support from the traditional Linux vendors Red Hat (Project Atomic) and Canonical (LXD).

Since containers are fundamentally enabled by the Linux kernel, all Linux vendors have some stake in seeing that containers remain successful and proliferate. Enter OCP – wait, that name is taken – OCI, I mean OCI. OCI aims to standardize a container specification: what goes in the container “packet,” both as an on-disk and portable file format, and what it means to a runtime. If we can all agree on that, then we can innovate around it. There is also a reference implementation, runC, donated by Docker, and I believe that as of Docker Engine v1.11 it adheres to the specification. However, according to the OCI site, the spec is still in progress, which I interpret as a moving target.
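
For a feel of what runC actually consumes, here is a hedged sketch; it assumes runC is installed and that a root filesystem has already been unpacked under the bundle directory (the /tmp/bundle path is just an example of my own).

```python
import subprocess

bundle = "/tmp/bundle"  # example path; must contain an unpacked rootfs/

# 'runc spec' writes a default OCI config.json into the bundle directory.
subprocess.run(["runc", "spec"], cwd=bundle, check=True)

# 'runc run' creates and starts a container from that OCI bundle.
subprocess.run(["runc", "run", "demo-container"], cwd=bundle, check=True)
```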

You can imagine the differing opinions at the OCI table (and thrown across blogs and Twitter – more below if you’re in the mood for some tech drama). Those differences, and otherwise just the slow progress, have some folks wanting to fork Docker so we can quickly get to a standard.

But wait, there’s more! In Docker v1.12, announced at DockerCon in June 2016 and nearing release, Docker Inc. refactored its largest tool, called Swarm, into the Docker Engine core product. When Docker made acquisitions like SocketPlane that competed with the container ecosystem, it stepped on the toes of other vendors, but they have a marketing mantra to prove they’re not at all malevolent: “Batteries included, but replaceable.” OK, fine, we’ll overlook some indiscretions. But what about bundling the Swarm orchestration tools with Docker Engine? Those aren’t batteries. That’s bundling an airline with the sale of the airplanes.

Swarm is competing with Kubernetes, Mesosphere, and Nomad for container orchestration (BTW, Kubernetes and Kubernetes-based PaaSes are the most popular currently), and this Docker move appears to be a ploy to force-feed Swarm to Docker users whether they want it or not. Of course they don’t have to use it, but it is grossly excessive to have around if not used, not to mention detrimental to quality, time-to-deploy, and security. For many techies, aside from being too smart to have the Swarm pulled over their eyes, this was the last straw for another technical reason: the philosophy that decomposition into simple building blocks is good, and that solutions should be loosely coupled integrations of these building blocks with clean, ideally standardized, interfaces.

Meet the Fockers!

So, what do you do when you are using Docker, but you’re increasingly at odds with the way Docker Inc. uses its open source project as a development ground for a monolithic product, a product you mostly don’t want? Fork them!

Especially among hard-core open source infrastructure users, people have a distaste for proprietary appliances, and Docker Engine is starting to look like one. All-in-ones are usually not so all-in-wonderful. That’s because functions built as decomposed units can proceed on their own innovation paths with more focus and thus, more often than not, beat the same function in an all-in-one. Of course, I’m not saying all-in-ones aren’t right for some, but they’re not for those that demand the best, nor for those that want choice to change components instead of getting locked into a monolith.

Taking all this into account, all the talk of forking Docker seems bothersome to me for a few reasons.

First of all, there are already over 10,000 forks of Docker. Hello! Forking on GitHub is click-button easy, and many have done it. As many have said, forking = fragmentation, and fragmentation makes life harder for those that need to use and support multiple forks or test integrations with them, aka the container ecosystem of users and vendors.

Second, in creating a fork, someone presumably wants to change their fork (BTW, non-techies, that’s the new copy of the code that you now own – thanks for reading). I haven’t seen anybody actually comment on what they would change if they forked the Docker code. All this discussion and no prescription :( Presumably you would fix bugs that affect you or try to migrate fixes that others have developed, including Docker Inc., who will obviously continue to develop the code. What else would you do? Probably scrap Swarm, because most of you (in the discussions) seem to be Kubernetes fans (me too). Still, in truth, you would have to remove and split out a lot more if you really wanted to decompose this into its basic functions. That’s not trivial.

Third, let’s say there is “the” fork. Not my fork or yours, but the royal fork. Who starts this? Who do you think should own it? Maybe it would be best for Docker if they just did this! They could re-split out Swarm and other tools too.

Don’t fork, make love.

Finally, my preferred solution: don’t fork, make love. In seriousness, what I mean is two things:

1. There is a standards body, the OCI. Make sure Docker has a seat at the table, and get on with it! If they’re not willing to play ball, then let it be widely known, and move OCI forward without them. I would think they have the most to lose here, so they would cooperate. The backlash if they didn’t might be unforgiving.

2. There is already a great compliant alternative to Docker today that few are talking about: rkt (pronounced Rocket). It was pioneered by CoreOS and is now supported on many Linux operating systems. rkt is very lightweight. It has no accompanying daemon. It is a single, small, simple function that helps start containers; therefore, it passes the test of a small decomposed function (see the sketch below). Even better, the most likely Fockers are probably not Swarm users, and all the other orchestration systems support rkt: Kubernetes’s rktnetes, Mesos’s unified containerizer, and Nomad’s rkt driver. We all have a vote, with our wallets and free choices, so maybe it is time to rock it on over to rkt.
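
Here is that sketch: a hedged illustration of rkt’s daemonless invocation, assuming rkt is installed; the flags (fetching a Docker image without signature verification) follow the rkt CLI conventions as I understand them.

```python
import subprocess

# rkt has no daemon: the pod runs as a child of this very process tree.
# '--insecure-options=image' skips signature checks for the Docker image.
subprocess.run(
    ["rkt", "run", "--insecure-options=image", "docker://alpine",
     "--exec", "/bin/echo", "--", "hello from rkt"],
    check=True)
```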

 

In closing, I offer some more ecosystem observations:

In spite of this criticism (much of it is not mine, in fact, but that of others to which I’m witness), I’m a big fan of Docker and its people. I’m not a fan of the recent strategy to fold Swarm into the core, however. I believe it goes against the generally accepted principles of computer science and Unix. IMO, Docker Engine was already too big pre-v1.12. Take it out, compete on the battleground of orchestration on your own merits, and find new business models (like Docker Cloud).

I feel like the orchestration piece is important, and the momentum is with Kubernetes. I see the orchestration race a little like OpenStack vs. CloudStack vs. Eucalyptus five years ago, when the momentum was with OpenStack; today it’s with Kubernetes, but for different reasons. It has well-designed and cleanly defined interfaces and abstractions. It is very easy to try and to learn. To those who say it is hard to learn, set up, and use: what was your experience with OpenStack? Moreover, there are tons of vanilla Kubernetes users; you can’t say that about OpenStack. In the OpenStack space, vendors add value with support and with getting to day 1, but in the Kubernetes space, vendors are adding features on top of Kubernetes, moving toward PaaS. That’s a good sign that the Kubernetes scope is about right.

I’ve seen 99% Docker love and 1% anti-Docker sentiment in my own experience. My employer, Juniper, is a Docker partner and uses Docker Engine. We have SDN and switching integrations for Docker Engine, and so, personally, my hope is that Docker finds a way through this difficulty by re-evaluating how it is using open source and business together. Open source is not a vehicle to drive business or shareholder value, which is inherently competitive. Open source is a way to drive value to the IT community at large in a way that is intentionally cooperative.

Related:

http://thenewstack.io/docker-fork-talk-split-now-table/

https://medium.com/@bob_48171/an-ode-to-boring-creating-open-and-stable-container-world-4a7a39971443#.yco4sgiu2

http://thenewstack.io/container-format-dispute-twitter-shows-disparities-docker-community/

https://www.linkedin.com/pulse/forking-docker-daniel-riek