Fond Memories of 2016 by James Kelly

2016 was a year full of fun and adventure. To change up this blog installment, here is the video Linh and I shared on Facebook with our friends and families.

Some more quick fun facts about my 2016…

  • Linh and I got engaged on October 12 in Half Moon Bay
  • I have a new niece, Felicity, whom I adore; I got to visit her in Jersey and can’t wait to see her again soon
  • I posted 165 times on Facebook (I’m sure that’s a record for me, but that’s what you get when you hang around with a Facebook star like Linh)
  • Linh and I went to Canada together on an extended business/pleasure adventure. It was her first time in Canada, her first time meeting some of my family (there and elsewhere this year), and the fall colours ;) were beautiful
  • Along that theme, we went to a hockey game. Linh’s first, my umpteenth, but it was a good match, deep in the playoffs
  • We built a garden that we love at our apartment home in San Jose
  • Linh and I both re-started painting. Although Linh was more productive in making many pieces, it has been wonderful to get back into it for us both. It’s fun to paint pieces for our friends and home
  • We ate countless oysters at many fine establishments, including *** Per Se
  • We went horseback riding on the beach
  • I traveled to a new country for me, Korea, and countless other places, mostly with Linh… well, I can count quite a few. Let me see: New York, Las Vegas, Los Angeles, Portland and Dundee/wine country, Oregon, Marin, Napa (twice), Seattle (twice), Chicago (3 times), Austin, Atlanta, Miami and Orlando, Washington DC, Yosemite National Park and Tioga Pass, Lake Tahoe, Paso Robles, SLO, Morro Bay, Arizona, New Jersey, (old / UK) Jersey, England (London, Manchester, Cambridge, Oxford, Nantwich…), and on the trip to Canada: Niagara-on-the-Lake, Ontario wine country, Toronto, Montreal, Ottawa and Gatineau Park, Greater Vancouver, and Roberts Creek on the BC Sunshine Coast.




The Road to Intent-Driven Infrastructure Is Paved with Confusion by James Kelly

Winding down 2016, I’ve participated in a few conferences: KubeCon / CloudNativeCon and the Gartner Data Center Infrastructure conference. I’ve also worked on the acquisition of AppFormix by Juniper, to give us a leg up in cloud analytics based on big data and machine learning. The sum of my learning and conversations sharpened my analysis of, and skepticism toward, some commonly encountered and complementary hyped topics:

  • Intent-driven infrastructure/networking
  • Machine/deep learning-based autonomous infrastructure
  • Self-driving infrastructure/networking
  • Declarative infrastructure and infrastructure as code

With more freshness and luster than the long-in-the-tooth SDN, NFV, cloud, and XaaS, all of these topics are already well-traveled and have racked up millions of blog and PowerPoint miles in 2016, but I often see these topics and terms conflated or interchanged as if they were the same thing.

Based on my recent discussions at the Gartner conference and some of the networking pundits I read, the most anticipated and highest form of infrastructure enlightenment is intent-driven infrastructure. This is mostly covered as intent-based or intent-driven networking in my circles, but it can certainly be applied more broadly as an approach to systems. Unfortunately, “intent…,” in my observation, is the term with the most offenders misusing it, abusing it, substituting it, or mixing it with terms like automated, policy-based, declarative, or smart/intelligent infrastructure.

Embellishments and lofty labeling aside, I believe the term “intent” is easily and often abused because it is a precarious business to understand intent at all. Given the way I normally use the word “intent,” doing something based on intent seems like it could be an exercise in mind reading.

If you’ve seen that and scratched your head, then you’re not alone. Seeking to better understand this myself, I undertook a journey of reading and research, especially looking at intent in its truer form as studied in psychology. In this blog, I suss out a few of the difficult aspects of understanding intent, searching for key tenets and considerations for constructing an intent-based and intent-driven technical system. However, instead of keeping it all technical, I look for support for these traits through reflection on, and inspiration from, the traditional intention of intent.

What is intent?

A logical place to start is in defining and really understanding the terms.

When we read about intent, we find the dictionary definition: one’s purpose or meaning. Have you ever been on the bad end of a miscommunication where the intent didn’t come across properly? It happens all the time because intent is generally communicated as much by what we don’t say and don’t need to spell out as by what we do express. Miscommunications and mistaken assumptions won’t do for technical applications, unfortunately.

So can we apply intent or user intent to technical applications? Certainly not in the general way that we define intent, because intent is never outwardly expressed. Rather, it is the internal precursor to what we say or do or, even more generally, who we are. Nevertheless, technocrats are well practiced at making inspired metaphors, so let’s go with it and find out what we can usefully borrow from what we know of intentions in their true forms.

Studying intent a little deeper with the help of psychological research, we find out where intentions come from and get some description of the unspoken context that makes up intention. Here’s a snippet lifted from Wikipedia, where there is much more detail if you’re interested.

“intentions to perform the behavior appear to derive from attitudes, subjective norms, and perceived behavioral control.”

From this we can say a few things about the context surrounding intentions:

  • Our intentions are shaped by the state of our attitudes (which may be capricious)
  • We don’t usually intend to act outside of social norms, especially not outside of the law
  • We don’t intend to act beyond our perceived capabilities and knowledge

What is intent-based and intent-driven?

To apply the idea of intent to a technical system, these two terms come up in recent language: intent-based and intent-driven.

When I hear these, I think of configuration in two different regards.

With respect to intent-based, we can say that the basis of the system (that is, its foundational, more static configuration) should be expressed in the form of user intent.

With respect to intent-driven, we can say that ongoing execution and more dynamic control should be in accordance with user intent and with the intent of interfacing systems that help “drive” the system. For technical applications, I take this to mean that inputs arriving through a system’s own APIs and self-feedback loops are interpreted as re-configurations that stay aligned with the intent-based user configuration and with the intent of other system actors.

Of the two terms, let’s continue with intent-driven because it encompasses intent-based.

Expressing Intent to Technical Applications

Using the abovementioned definition and understanding of intent, we can translate some of the findings to come up with key tenets for intent-driven systems and their configurations.

If intent is a purpose or meaning behind a behavior, then it isn’t the behavior itself (words, actions…). It’s the meta-behavior. It’s inherently abstract in the what, and doesn’t really consider many details of the how or the motives of the why, but it will certainly have contextual requirements for the why, what, and how. I believe the when, where, and who are usually abstract considerations as well, when not explicit in our intentions, but this varies.

The first thing that falls out of this, in many people’s observations, is the notion of something abstract.

Since, in human communication, our words and actions closely approximate the intent while abstracting away unnecessary detail, we can say that abstraction is an important aspect to translate to technical applications of intent.

Abstraction isn’t a new desirable tenet of technical systems either. We have talked about it for years as a desirable aspect of SDN systems in the networking industry. Abstraction is, however, a double-edged sword.

Abstraction allows us to encapsulate and hide detail, which provides flexibility. Losing specificity, however, introduces unpredictability. Unpredictability and variability in how we act out our own intentions is exactly why miscommunications happen. It is also why many of us never achieve our intents. Another interesting point borrowed from the psychology of personal development is that a lazy man has intents, while an achiever has goals. When it comes to goals, the more clarity, specificity, and measurability they have, the more likely we are to follow through and stay on course.

The difference with technical systems is that there is no question of follow-through (execution, in other words). Systems execute, and they don’t have lazy days. The value abstraction provides is that, for a given set of abstract user input, we have many ways to achieve a result without changing the input or its form. This flexibility is important because it allows us to avoid lock-in and to innovate, creating new ways to get the same result. New and different results that still meet the abstract intention, however, may or may not be so welcome, so we need to strike the right balance of abstraction and very likely ascend to grander abstractions in layers… more on this below.

Moving on: if we have to express the meta-behavior, the intent, directly, then we should not specify the how, but rather the where, when, who, and especially the what. An intent can certainly embody these things, and of course systems deal in these too. This important aspect points us to adopt the virtues of declarative configuration for technical applications, as opposed to imperative directions that control the detail of the how.
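
To make the contrast concrete, here is a minimal Python sketch of the same request expressed both ways; every operation and field name in it is hypothetical, for illustration only, and not any particular product’s API.

```python
# A minimal sketch contrasting the two configuration styles.
# All operation and field names are made up for illustration.

# Imperative configuration: the user dictates the "how", step by step.
imperative_steps = [
    ("create_vm", "web-1"),
    ("create_vm", "web-2"),
    ("create_lb", "web-lb"),
    ("attach", ("web-lb", "web-1")),
    ("attach", ("web-lb", "web-2")),
    ("open_port", ("web-lb", 443)),
]

# Declarative configuration: the user states the "what" (plus where/when/who
# as needed); the system derives steps like the ones above, and is free to
# derive different ones tomorrow, as long as the result still meets the intent.
declarative_intent = {
    "service": "web",          # what
    "replicas": 2,             # what
    "expose": {"port": 443},   # what, not how
    "region": "us-west",       # where
}
```

The imperative steps lock in one way of getting there; the declarative intent leaves the system free to find new ways, which is exactly the flexibility, and the unpredictability, discussed above.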

We also learned that our own intentions are guided by subjective norms and law. If we had to expressly communicate those for a technical application, they translate to policy-oriented configuration constraints. Subjective norms may also change based on the environment, time, actors, etc. It would be useful if these variable norms could also be policy adjustments resulting from the insight of self-learning technology, with or without human verification.
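
As a rough illustration (again with made-up names and thresholds), such policy constraints can be modeled as predicates that every submitted intent must pass before the system acts on it:

```python
# A hypothetical sketch of policy-oriented constraints: the "norms and laws"
# that any submitted intent must satisfy before the system acts on it.

def within_quota(intent):
    # A resource "law": never exceed the tenant's replica quota.
    return intent.get("replicas", 1) <= 10

def approved_region(intent):
    # A compliance "norm": only deploy where governance allows.
    return intent.get("region") in {"us-west", "us-east"}

POLICIES = [within_quota, approved_region]

def admit(intent):
    """Reject any intent that falls outside the configured norms."""
    violations = [p.__name__ for p in POLICIES if not p(intent)]
    if violations:
        raise ValueError("intent violates policies: %s" % violations)
    return intent

admit({"service": "web", "replicas": 2, "region": "us-west"})  # passes
```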

When it comes to human intentions being restricted by what is possible and what we are capable of, I think the technical translation easily points to what is programmed and resourced for the system. I don’t think this is a key tenet by any means, because it is so obvious. Interestingly, some technical systems can grow new capabilities through extensibility or training; those new capabilities are basically new programming, and we would likely have to reshape the form of the input intent to take advantage of them. This is similar to how we change our own intentions based on new knowledge and capability.

What about our intents being shaped by our attitudes?

Just Deal with It

How could an attitude bias inspire a technical translation? Attitudes are our internal states, moods, feelings. Systems have internal states at least. This aspect could simply mean that, when a system executes to realize an intent, it considers its own internal state.

Have you ever had to do something when you were tired, sad, or frustrated? Sometimes we humans intend to achieve things that are in conflict with our own attitudes, and that even require us to change our attitudes. This is kind of like a system dealing with its own internal state and feeding that back to itself to carry out the intention in spite of its state. To me this reveals an interesting tenet of the inner workings of an intent-driven system: such systems should be self-aware of their ability to meet their purpose, using real-time telemetry and analytics about their own performance, scale, and productivity. They should automatically pace, scale, and heal themselves in accordance with the input intent they accept.
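
A toy sketch of that inner working might look like the following reconciliation loop, where observe() and actuate() are stand-in hooks for real telemetry and orchestration rather than any actual API:

```python
import time

# A toy reconciliation loop: the system watches its own telemetry and keeps
# nudging its actual state toward the declared intent, healing and scaling
# as needed. observe() and actuate() are hypothetical stand-ins.

def reconcile(intent, observe, actuate, interval=5.0):
    while True:
        state = observe()  # real-time telemetry about the system itself
        if state["healthy_replicas"] < intent["replicas"]:
            actuate("scale_out")   # heal or scale up toward the intent
        elif state["healthy_replicas"] > intent["replicas"]:
            actuate("scale_in")    # trim excess to stay efficient
        time.sleep(interval)       # pace itself
```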

Architecting Intent-Driven Systems

Above, we covered the where, when, who, what, and how, but not the why of an intent. Yet an important aspect of an intent is that it also embodies the why, the purpose or meaning behind it. That’s not something we are as used to thinking about for technical applications. When we look at the why of any real intention we have, we can get another intention and recursively ask why again, until we climb up to our core worldview, morals, values, and beliefs. Taking inspiration from that points us in the direction I hinted at above when I mentioned layers. A system that is acting on an intent is really just focused on a single purpose, but often in service of a larger purpose. This points us to architecting our systems with composability, decomposing systems into services, subservices, and microservices insofar as we can still clearly define small, purposed systems without getting into the minutiae.

Where to Start

There is a lot of intent-driven research and development happening in networking, but really that is a bad place to start thinking about intent overall because, as above, when we ask why of an intent-driven network system, we ascend to a higher level. There is no network built out of the intent to build a network, except as a pure science project. Intent never describes a network; intent describes the desired connectivity, with explicit or implicit criteria like security norms, latency, and bandwidth.

Of course that intent is in service of a larger intent. If we keep asking why, we ascend to what is really of value. If we take an enterprise as an example, the value a business gets from technology is derived from applications. Infrastructure and network connectivity are further down the chain of value and intent, past the apps and the development and orchestration systems that manage the infrastructure and connections the apps require.

In short, looking up is the best place to start. If a system is down the value chain, we need to consider what will encompass it. Generally, that points first to the application stack, and second to the ITSM or I&O management tools that should take business policy and governance intent as additional constraints.

In Summary

After a closer scientific look at intention, I’m now a lot clearer about intent-driven systems. We’ve extracted a few key tenets about intent-oriented system configurations and intent-driven systems themselves.

While there is value in each one of these aspects individually, it is only when they are applied collectively that we can really say intent-driven. So if we go and make intent-driven systems, are customers going to be clamoring for them? Is it worth all of the hype? I don’t think so. The systems’ developers will get value in organizing this way, but users are never going to get out of bed in the morning and say to themselves, “I’ve got to get an intent-driven X.” Users and businesses are always going to be sold on meeting their own higher intentions.

Meet the Fockers! What the Fork? by James Kelly

In the past week, all the rage (actual hacker rage in some forums) in the already hot area of containers is about forking Docker code, and I have to say, “what the fork?”

What the Fork?

Really!? Maybe it is because I work in IT infrastructure, but I believe standards and consolidation are good and useful… especially “down in the weeds,” so that we can focus on the bigger picture of new tech problems and the apps that directly add value. This blog installment summarizes Docker’s problems of the past year or so, introduces you to the “Fockers,” as I call them, and points out an obvious solution that, strangely, very few folks are mentioning.

This reminds me of the debate about overlay networking protocols a few years back – only then we didn’t blow up Twitter doing it. This is a bigger deal than overlay protocols, however, so my friends in networking may relate it to deciding upon the packet standard (a laugh about the joy of standards). We have IP today, but it wasn’t always so pervasive.

Docker is actually decently pervasive today as far as containers go, so if you’re new to this topic, you might wonder where the problem is. Quick recap… The first thing you should know is that Docker is not one thing, even though we techies talk about it colloquially as if it were. There’s Docker Inc., there’s Docker’s open source GitHub code and its community, and there are Docker products. Most notably, Docker Engine is usually just “Docker,” and even that is actually a CLI tool plus a daemon (background process). The core tech that made Docker famous is a packaging, distribution, and execution wrapper over Linux containers that is very developer- and ops-friendly. It was genius, open source, and everyone fell in love. There were cheers of “lightning in a bottle” and “VMs are the new legacy!” Even Microsoft has had a whale of a time joining the Docker ecosystem more and more.
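
If the pieces are new to you, that core workflow amounts to three verbs. Here is a sketch of it driven from Python for convenience; build, push, and run are the real docker CLI verbs, while the image name is made up:

```python
import subprocess

# The workflow that made Docker famous, in three verbs. Assumes the docker
# CLI is installed and talking to its local daemon; image name is illustrative.
subprocess.run(["docker", "build", "-t", "example/app:1.0", "."], check=True)  # package
subprocess.run(["docker", "push", "example/app:1.0"], check=True)              # distribute
subprocess.run(["docker", "run", "--rm", "example/app:1.0"], check=True)       # execute
```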

Containers, and the good tech around them, have absolutely let tech move faster, transforming CI/CD, devops, cloud computing, and NFV… well, it hasn’t quite hit NFV yet, but that’s a different blog.

Docker wasn’t perfect, but when they delivered v1.0 (summer 2014), it was very successful. But then something happened. Our darling started to make promises she didn’t keep and, worse, she got fat. No, not literally, but Docker received a lot more VC funding and grew, organically and with a few acquisitions. This product growth took them beyond just running and wrapping containers, into orchestration, analytics, networking, etc. (Hmm. Reminds me of VMware actually. Was that successful for them? :/) Growing adjacent to or outside of core competencies is always difficult for businesses, let alone doing it under the pressure of having taken VC funding that you are expected to multiply. At the same time, as Docker started releasing more features more frequently, quality and performance suffered.

By and large, I expect that most folks using Docker weren’t using it with Docker Inc.’s nor anyone else’s support, but nonetheless, with Docker dissatisfaction, boycotts, and breakups starting to surface, in came some alternatives: fairly new entrants CoreOS and Rancher, with more support from traditional Linux vendors Red Hat (Project Atomic) and Canonical (LXD).

Because containers are fundamentally enabled by the Linux kernel, all Linux vendors have some stake in seeing that containers remain successful and proliferate. Enter OCP – wait, that name is taken – OCI, I mean OCI. OCI aims to standardize a container specification: what goes in the container “packet,” both as an on-disk and portable file format, and what it means to a runtime. If we can all agree on that, then we can innovate around it. There is also a reference implementation, runC, donated by Docker, and I believe that as of Docker Engine v1.11 it adheres to the specification. However, according to the OCI site, the spec is still in progress, which I interpret as a moving target.
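
To give a feel for what is being standardized, here is a simplified, abbreviated illustration of the kind of runtime config the spec describes. The field names loosely follow the spec as of this writing, but given that it is a moving target, check the OCI site for the current schema:

```python
import json

# A simplified, abbreviated illustration of an OCI-style runtime config.
# Field names loosely follow the in-progress spec; not a complete example.
config = {
    "ociVersion": "1.0.0-rc1",
    "root": {"path": "rootfs", "readonly": True},  # the on-disk filesystem
    "process": {                                   # what the runtime executes
        "args": ["/bin/sh"],
        "cwd": "/",
        "env": ["PATH=/usr/bin:/bin"],
    },
    "hostname": "example",
}

# A runtime like runC expects this as <bundle>/config.json beside rootfs/.
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```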

You may imagine that the differing opinions at the OCI table (and thrown across blogs and Twitter – more below if you’re in the mood for some tech drama), and otherwise just the slow progress, have some folks wanting to fork Docker so we can quickly get to a standard.

But wait, there’s more! In Docker v1.12, announced at DockerCon in June 2016 and nearly released, Docker Inc. refactored their largest tool, called Swarm, into the Docker Engine core product. When Docker made acquisitions like SocketPlane that competed with the container ecosystem, it stepped on the toes of other vendors, but they have a marketing mantra to prove they’re not at all malevolent: “Batteries included, but replaceable.” OK, fine, we’ll overlook some indiscretions. But what about bundling the Swarm orchestration tools with Docker Engine? Those aren’t batteries. That’s bundling an airline with the sale of the airplanes.

Swarm is competing with Kubernetes, Mesosphere, and Nomad for container orchestration (BTW, Kubernetes and Kubernetes-based PaaSes are currently the most popular), and this Docker move appears to be a ploy to force-feed Swarm to Docker users whether they want it or not. Of course they don’t have to use it, but it is grossly excessive to have around if unused, not to mention detrimental to quality, time-to-deploy, and security. For many techies, aside from being too smart to have the Swarm pulled over their eyes, this was the last straw for another technical reason: the philosophy that decomposition into simple building blocks is good, and that solutions should be loosely coupled integrations of these building blocks with clean, ideally standardized, interfaces.

Meet the Fockers!

So, what do you do when you are using Docker, but you’re increasingly at odds with the way that Docker Inc. uses its open source project as a development ground for a monolithic product, a product you mostly don’t want? Fork them!

Especially among hard-core open source infrastructure users, people have a distaste for proprietary appliances, and Docker Engine is starting to look like one. All-in-ones are usually not so all-in-wonderful. That’s because functions built as decomposed units can proceed on their own innovation paths with more focus and thus, more often than not, beat the same function in an all-in-one. Of course, I’m not saying all-in-ones aren’t right for some, but they’re not for those who demand the best, nor for those who want the choice to change components instead of getting locked into a monolith.


Taking all this into account, all the talk of forking Docker seems bothersome to me for a few reasons.

First of all, there are already over 10,000 forks of Docker. Hello! Forking on GitHub is click-button easy, and many have done it. As many have said, forking = fragmentation, and it makes life harder for those who need to use and support multiple forks or test integrations with them, aka the container ecosystem of users and vendors.

Second, in creating a fork, someone presumably wants to change their fork (BTW, non-techies, that’s the new copy that you now own – thanks for reading). I haven’t seen anybody actually comment on what they would change if they forked the Docker code. All this discussion and no prescription :( Presumably you would fix bugs that affect you or try to migrate fixes that others have developed, including Docker Inc., who will obviously continue development. What else would you do? Probably scrap Swarm, because most of you (in discussions) seem to be Kubernetes fans (me too). Still, in truth, you would have to remove and split out a lot more if you really wanted to decompose this into its basic functions. That’s not trivial.

Third, let’s say there is “the” fork. Not my fork or yours, but the royal fork. Who starts this? Who do you think should own it? Maybe it would be best for Docker if they just did this! They could re-split out Swarm and other tools too.

Don’t fork, make love.

Finally, my preferred solution. In seriousness, what I mean is two things:

1. There is a standards body, the OCI. Make sure Docker has a seat at the table, and get on with it! If they’re not willing to play ball, then let it be widely known, and move OCI forward without them. I would think they have the most to lose here, so they would cooperate. The backlash if they didn’t might be unforgiving.

2. There is already a great compliant alternative to Docker today that few are talking about: rkt (pronounced Rocket). It was pioneered by CoreOS and is now supported on many Linux operating systems. rkt is very lightweight. It has no accompanying daemon. It is a single, small, simple function that helps start containers. Therefore, it passes the test of a small decomposed function. Even better, the most likely Fockers are probably not Swarm users, and all the other orchestration systems support rkt: Kubernetes’s rktnetes, Mesos’s unified containerizer, and Nomad’s rkt driver. We all have a vote, with our wallets and free choices, so maybe it is time to rock it on over to rkt.
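
For a taste of how small that surface is, starting a container with rkt is a one-liner. This is an illustrative invocation, again sketched from Python; depending on your rkt version, Docker-format images may need an image-verification flag such as --insecure-options=image:

```python
import subprocess

# rkt in one line: no daemon to manage, just a process that runs a container.
# Illustrative invocation; flags vary by rkt version (e.g., image verification).
subprocess.run(["rkt", "run", "docker://nginx"], check=True)
```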


In closing, I offer some more ecosystem observations:

In spite of this criticism (much of it, in fact, is not mine, but that of others to which I’m witness), I’m a big fan of Docker and its people. I’m not a fan of the recent strategy to include Swarm in the core, however. I believe it goes against the generally accepted principles of computer science and Unix. IMO, Docker Engine was already too big pre-v1.12. Take it out, compete on the battleground of orchestration on your own merits, and find new business models (like Docker Cloud).

I feel like the orchestration piece is important, and the momentum is with Kubernetes. I see the orchestration race a little like OpenStack vs. CloudStack vs. Eucalyptus five years ago, where the momentum then was with OpenStack; today it’s with Kubernetes, but for different reasons. It has well-designed and cleanly defined interfaces and abstractions. It is very easy to try and learn. Those who say it is hard to learn, set up, and use: what was your experience with OpenStack? Moreover, there are tons of vanilla Kubernetes users; you can’t say that about OpenStack. In the OpenStack space, vendors add value with support and with getting to day 1, but in the Kubernetes space, vendors are adding features on top of Kubernetes, moving toward PaaS. That’s a good sign that the Kubernetes scope is about right.

In my own experience, I’ve seen 99% Docker love and 1% anti-Docker sentiment. My employer, Juniper, is a Docker partner and uses Docker Engine. We have SDN and switching integrations for Docker Engine, and so, personally, my hope is that Docker finds a way through this difficulty by re-evaluating how it is using open source and business together. Open source is not a vehicle to drive business or shareholder value, which is inherently competitive. Open source is a way to drive value to the IT community at large in a way that is intentionally cooperative.

Related:

http://thenewstack.io/docker-fork-talk-split-now-table/

https://medium.com/@bob_48171/an-ode-to-boring-creating-open-and-stable-container-world-4a7a39971443#.yco4sgiu2

http://thenewstack.io/container-format-dispute-twitter-shows-disparities-docker-community/

https://www.linkedin.com/pulse/forking-docker-daniel-riek