
Micro-services: Knock, Knock, Knockin’ on DevNetOps’ Door by James Kelly


A version of this article was published on October 13, 2017 at TheNewStack https://thenewstack.io/microservices-knock-knock-knockin-devnetops-door/

“You’ve got to ask yourself one question: ‘Do I feel lucky?’ Well, do ya, punk?” – Dirty Harry

Imagine putting this famous question to IT deployments: the answer points to IT and even business performance with scary accuracy. High performers – “The most powerful guns in the world,” to borrow Harry’s words – pull the trigger on deployments with high confidence, while deployment dread is a surefire sign of lower performers, advises the State of DevOps report.

In his talks, Gene Kim, author of The DevOps Handbook, corroborates this: deployment anxiety marks hapless businesses that are half as likely to exceed profitability, productivity and market-share goals, and that evidently see lower market-cap growth.

We may conclude that high-performing teams wear the badge of confidence because they’re among the ranks in the academy of agile, deploying more often – orders of magnitude more often. Their speed comes from taking lots of little steps, so they have regular experience with change.

And feeling luckier is also an effect of actually being luckier. Data show high performers break things less often; and when they do, the forensics are more clear-cut. They post better mean time to failure (MTTF) and mean time to recovery (MTTR) than lower performers’ old-fashioned police work because investigation and patching proceed quickly from the last small step, instead of digging for clues in bigger deliveries peppered with many modifications.

Less, more often, is better than more, less often

Imagine conducting IT on two technology axes: time and space. If it’s clearly healthier to automate faster, smaller steps with respect to the timeline or pipeline, then consider space, or architecture. Optimizing architecture design and orchestration to recruit nimble pipeline outputs, what stands out in today’s lineup of characters? Affirmative, ace: micro-services.

The principles of DevOps have been around a while and in emerging practice for more than a decade, but the pivotal technologies that cracked DevOps wide open were containers and micro-services orchestration systems like Kubernetes. Looking back, it’s not so surprising that smaller boundaries and enforced packaging from developers, preserved through the continuous integration and delivery pipeline, make more reliable cases for deployment.

A micro-services architecture isn’t foolproof, but it’s the best partner today for the speed and agility of frequent or continuous deployment.

Micro-services networks and networking as micro-services

In a technical trial of service meshes versus SDN, there are three key positions networking takes in today’s micro-services scene:

  1. In a micro-services design, the pieces become smaller and the intercellular space – the network – gets bigger, busier, and hence, vital. Also, beyond zero-trust-style protection of the micro-services themselves, it’s important to have this network locked down.
  2. Service discovery, service/API gateways, service advertising with DNS, and service scale-out or -in with load balancing are all players in networking’s jurisdiction.
  3. Beyond micro-services, any state replication, backup, or analytics over an API, a volume, or a disk, also rides on the network.

Given the importance of networking to the success of micro-services, it’s ironic that networking components are mostly monolithic. Worse, deployment anxiety is epidemic: network operators have lengthy change controls, infrequent maintenance windows, and new code versions are held for questioning for 6-18 months and several revisions after availability.

A seminal piece on DevNetOps cites five things we can borrow from the department of DevOps to remedy network ops in time and space, starting with code, pipelines and architecture.

Small steps for DevNetOps

Starting into DevNetOps is possible today with Spinnaker-esque orchestration of operational stages: a network-as-code model and repository feeding a CI/CD pipeline for all configuration, template, code and software-image artifacts. With new processes and skills training for networking teams – like coding and review logistics, as well as testing and staging simulation – we could foil small-time CLI-push-to-production joyrides and rehabilitate seriously automation-addled “masterminds” who might playbook-push-to-production bigger mistakes.

The path to corrections and confidence on the technology time axis begins with automating the ops timeline as a pipeline whose steps are micro modifications.
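To make the micro-modification idea concrete, here is a minimal, hypothetical sketch of one such pipeline gate in Python: it sizes the diff of a proposed change in a Git-backed network-as-code repository and rejects anything over a small-change budget. The repository layout, base branch and 100-line budget are illustrative assumptions, not a prescribed standard.

    #!/usr/bin/env python3
    """CI gate: reject network-as-code changes too big to review safely.

    Hypothetical sketch -- the threshold and repo layout are illustrative.
    """
    import subprocess
    import sys

    MAX_CHANGED_LINES = 100  # assumed "micro modification" budget

    def changed_lines(base: str = "origin/main") -> int:
        """Sum added+removed lines between the base branch and HEAD."""
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base}...HEAD"],
            check=True, capture_output=True, text=True,
        ).stdout
        total = 0
        for line in out.splitlines():
            added, removed, _path = line.split("\t", 2)
            # numstat shows "-" for binary files; treat any binary change
            # as over budget rather than guessing its size.
            if added == "-" or removed == "-":
                return MAX_CHANGED_LINES + 1
            total += int(added) + int(removed)
        return total

    if __name__ == "__main__":
        n = changed_lines()
        if n > MAX_CHANGED_LINES:
            sys.exit(f"FAIL: {n} changed lines exceeds the "
                     f"{MAX_CHANGED_LINES}-line micro-change budget; "
                     "split this into smaller deployments.")
        print(f"OK: {n} changed lines within budget.")

A gate like this would sit alongside linting, template rendering, and testing or staging-simulation stages in the same pipeline.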

Old indicted ops practices can be reinvented by the user community with vendor and open-source help for tooling, but when it comes to architecture, the vendors need to lead. Vendors are the chief “Dev” partner in the DevNetOps force, and the motive to build a case for micro-services is clearer than ever.

Small pieces for DevNetOps: Micro-services

After years of bigger, badder network devices – producing some monolithic proportions so colossal they don’t fit through doors – vendors can’t ignore the flashing lights and sirens of cloud, containers and micro-services.

It’s clear that for DevNetOps, as for DevOps, micro-sized artifacts are perfectly sized bullets for the chamber of an agile pipeline. But while there’s evidence of progress in the networking industry, there’s still a ways to go.

Some good leads toward a solution include Arista supporting patch packages separate from their main EOS delivery; the OCP popularizing software and hardware disaggregation in its networking project; and Juniper Networks building on disaggregation by supporting node slicing and universal chassis for finer-grained modularity and management boundaries. Furthermore, data center designs of resilient scale-out Clos network fabrics built from pizza-box-sized devices are increasingly favored over large aggregation devices. And in software-defined networking, projects like OpenContrail are now dispatched as containers.

In the world of DevOps, we know that nothing does wonders for deployment quality like developers facing the prospect of a page at 2am. But for DevNetOps that poetic justice is missing, and 24/7 support across the vendor-customer wall hardly subdues operator angst when committing a change to roll out a deployment. Moreover, the longer the time between a flawed vendor code change and the moment it’s caught, the more muddled the evidence gets and the tougher it is to pin down.

The best line of defense against these challenges is smaller, more-frequent vendor deliveries, user tests and deployments. Drawing inspiration from how micro-services bolstered DevOps, imagine if, while we salute continuous and agile DevNetOps operations today, we compel vendors and architectural commissioners to uphold designs for finer-grained, felicitous micro-services, devices and networks for a luckier tomorrow.


For more information on defining DevNetOps and DevSecOps, see this article and my short slideshare.

Good Habits to Make the Multi-Cloud Work For You – Part 2 of 2 by James Kelly


This article was originally published on September 12, 2017 at Data Center Knowledge http://www.datacenterknowledge.com/industry-perspectives/good-habits-make-multi-cloud-work-you-part-2

In my previous article, I talked about the state of infatuation with hybrid and multi-cloud environments. Would you be surprised that, in the stresses and mania surrounding IT cloud strategy, some folks fixate more on the playing field than the game itself? You probably already know that you’ve got to get your head in the game in this unforgiving age, and a winning strategy for digitally speeding and feeding the business across the multi-cloud is not “taste the rainbow”; it’s choosing and consuming cloud wisely.

Too bad that how you do so isn’t obvious, and as if it weren’t difficult enough to anticipate the technology turns ahead, there are many captivating cloud services that might lead you down treacherous roads to traps and debt. But there are also well-known tactics emerging that you can model to ready and steady your organization for change and success. Like most, if your journey has already begun, you’re picking these up along the way and adjusting your habits as you go.

You know how bad habits are easy to form and hard to live with? Similarly, it’s very easy to jump into multi-cloud, or unwittingly let it happen to you. At this precipice, the warning signs and early stories of cloud lock-in, overwhelming multiple-cloud context switches, runaway expenses and situational blindness are hopefully enough to grab your attention. Multi-cloud is inevitable; these fatalities are not.

A multi-cloud platform is a powerful environment, and it requires proper preparation so you can control it, instead of it controlling you. With that, here are four of the best preparations I’ve seen, like good habits that are hard to form, but easy to live with.

1. Unify Your Toolchain

In the eternal deluge and disruption of new tech tooling and systems, remember those good old-fashioned IT values of standardization and consolidation? Don’t throw those babies out with the ITIL bathwater.

As you embrace cloud and bimodal IT with new and improved tools, you might loosen the reins on your traditional values, using public cloud and building private cloud infrastructure alongside your physical and virtualized data centers. In loosening the reins or spinning out agile side projects, just watch out for the trap of hasty developers rolling their own stack or going stackless/serverless, only to get caught in a web of proprietary cloud services.

Don’t rush into an obstinate knee-jerk reaction to block this, either. Think of a unified toolchain effort as one undertaken with the developers to rationalize a base devops pipeline, cluster and middleware stack that could serve 80 percent of projects.

  • Your tools need to work on any cloud infrastructure, and if they can work with your legacy infrastructure, even better.
  • Free yourself from lock-in in cluster and pipeline orchestration tools and in infrastructure-as-code lifecycle management: keep them untethered from any specific underlying IaaS with portable shims like Terraform (see the sketch after this list).
  • While you don’t want to throttle developers back from using services outside of your stack – they’ll go around you anyway – encourage managed open-source-based services. Then incorporate such services into your middleware toolchain as it matures. Tools like Helm make it easier than ever to manage services yourself.
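Here is that sketch: a minimal, hypothetical illustration of the portable-shim idea in Python rather than Terraform’s HCL. The toolchain codes against one provisioning interface while per-cloud implementations absorb the differences underneath; all class and method names are invented for illustration.

    """Hypothetical sketch of a portable IaaS shim -- names invented."""
    from abc import ABC, abstractmethod

    class CloudProvider(ABC):
        """One interface the toolchain codes against, whatever the IaaS."""

        @abstractmethod
        def create_instance(self, name: str, size: str) -> str:
            """Provision a VM and return its provider-specific ID."""

    class AwsProvider(CloudProvider):
        def create_instance(self, name: str, size: str) -> str:
            # Would call the AWS API (e.g., via boto3) in a real shim.
            return f"aws:i-{name}-{size}"

    class GcpProvider(CloudProvider):
        def create_instance(self, name: str, size: str) -> str:
            # Would call the GCP API in a real shim.
            return f"gcp:{name}-{size}"

    def provision_build_agents(cloud: CloudProvider, count: int) -> list[str]:
        """Pipeline tooling stays cloud-agnostic: it sees only the shim."""
        return [cloud.create_instance(f"agent-{i}", "small") for i in range(count)]

    if __name__ == "__main__":
        # Swapping clouds is a one-line change at the edge, not a rewrite.
        for provider in (AwsProvider(), GcpProvider()):
            print(provision_build_agents(provider, 2))

Terraform achieves the same separation declaratively: the desired resources live in portable code while provider plugins handle the per-cloud specifics.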

If you’re a lean IT shop, let’s face it, following this to the letter may keep you from getting to market ASAP. Maybe you’re a startup or in that mode? You don’t just want, but need, to focus on developing your core competitive technology, not a portable multi-cloud toolchain.

How do you balance moving fast, employing low-hanging SaaS, against the concern of vendor and architectural lock-in?

If a tool is a competitive differentiator, then you should probably build it. Otherwise, remember there are a lot of open-source tools glued together with reference implementations of other open-source tools: large projects like Kubernetes and Spinnaker are easy to adopt with a bunch of pre-canned, sensible defaults. Another option is to choose managed open-source services, which are more easily insourced later or offered by multiple cloud vendors.

Finally, software design is probably the most important and challenging factor of all. Architecting for scale is obvious, but flexibility enables business agility, so consider not only today’s lock-in, but also getting locked out of a competitive advantage tomorrow. Assembling API-driven (capital ‘S’) Services from micro-services is a well-established pattern for doing this, and I’d recommend software alchemists investigate evolutionary architecture from ThoughtWorks for more wisdom.
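As a toy illustration of that pattern, consider this hypothetical sketch: a capital-‘S’ Service exposes one stable API while composing two micro-services behind it, so either piece can be swapped, rewritten or relocated to another cloud without breaking consumers. All names and stand-in functions are invented.

    """Hypothetical sketch: a Service facade composing two micro-services."""

    # Stand-ins for independently deployable micro-services; in production
    # these would be HTTP/gRPC calls, not in-process functions.
    def catalog_lookup(sku: str) -> dict:
        return {"sku": sku, "name": "widget"}

    def pricing_quote(sku: str) -> float:
        return 9.99

    class ProductService:
        """The stable, API-driven Service boundary consumers depend on."""

        def get_product(self, sku: str) -> dict:
            item = catalog_lookup(sku)          # micro-service #1: catalog
            item["price"] = pricing_quote(sku)  # micro-service #2: pricing
            return item

    if __name__ == "__main__":
        print(ProductService().get_product("sku-42"))

Because consumers bind only to the Service’s contract, the composition behind it is free to evolve, which is the essence of the evolutionary-architecture advice above.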

2. Connect Your Clouds

Connection was a given in the world of hybrid cloud. That still holds true. However, cloud bursting, the most bombastic of all hybrid cloud use cases, is the least common. Multiple clouds need to be connected for many more realistic and common use cases:

  • Imagine pipeline automation that includes environments or steps split across clouds. Dev/test can happen anywhere, but you may have higher requirements for staging/production.
  • Secure data replication for warehousing or distributed applications, and backups for disaster recovery and avoidance.
  • Split application tiers, where different non-functional requirements for the various tiers – sovereignty, security, scale, performance, etc. – must be met in various geographies or optimized with split economics. Some applications may be split for functional requirements too, because certain clouds have unique advantages that others can’t reproduce.

Such cloud interconnections demand higher security than the open internet provides, and often clouds simply require a secure connection back to your enterprise staff or users. Beyond security, unique routing and legacy layer-2 unicast or even multicast connectivity could be an application requirement. While there are cloud-vendor tools for basic security and networking, you can also seek out software-defined and virtualized security and networking solutions that are agnostic to any cloud infrastructure, unifying this toolchain too and incorporating it into your infrastructure-as-code policies.
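Treating those interconnections as code also makes the security requirement checkable. Below is a minimal, hypothetical sketch of such a policy gate: it scans declared links and fails the pipeline if any crosses a cloud boundary without an encrypted transport. The link schema is invented for illustration.

    """Hypothetical policy-as-code gate for inter-cloud links (schema invented)."""
    import sys

    # In practice these would be parsed from the infrastructure-as-code repo;
    # hard-coded here as sample data, including one deliberate violation.
    LINKS = [
        {"name": "dc-to-aws", "a": "onprem", "b": "aws", "transport": "ipsec"},
        {"name": "aws-to-gcp", "a": "aws", "b": "gcp", "transport": "plain"},
    ]

    ENCRYPTED = {"ipsec", "wireguard", "tls"}

    def violations(links):
        """Yield links that cross cloud boundaries without encryption."""
        for link in links:
            if link["a"] != link["b"] and link["transport"] not in ENCRYPTED:
                yield link["name"]

    if __name__ == "__main__":
        bad = list(violations(LINKS))
        if bad:
            sys.exit(f"FAIL: unencrypted inter-cloud links: {', '.join(bad)}")
        print("OK: all inter-cloud links encrypted.")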

3. Harmonize, Unify and Simplify Policy

If you have software deployed or scaled across multiple cloud locations, the configuration, monitoring and automatic-response systems may get unwieldy unless you elevate your orchestration across clouds. Of course, there are cloud management platforms for this. With or without them, you can also do some multi-cloud management with your own centrally harmonized configuration and management as code. A further step might unify configuration and management with global controllers, but given the track record of humans causing most errors, be careful with your blast radius for a fat-finger typo.
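One common guard for that blast radius is a staged rollout: apply a harmonized change to one cloud at a time, canary first, and halt at the first failed health check. Here is a minimal, hypothetical sketch; the deploy and health-check functions are placeholders for real tooling.

    """Hypothetical staged rollout limiting the blast radius of a change."""
    import sys

    CLOUDS = ["staging", "gcp-us", "aws-eu", "aws-us"]  # canary first

    def deploy(cloud: str, change: str) -> None:
        print(f"applying {change!r} to {cloud}")  # placeholder for real tooling

    def healthy(cloud: str) -> bool:
        return True  # placeholder: probe monitoring/SLO endpoints here

    def rollout(change: str) -> None:
        for cloud in CLOUDS:
            deploy(cloud, change)
            if not healthy(cloud):
                # Stop the spread: only one cloud sees the bad change.
                sys.exit(f"FAIL: {cloud} unhealthy after {change!r}; halting.")
        print("rollout complete")

    if __name__ == "__main__":
        rollout("dns-ttl=60")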

Another trend in provisioning models and APIs is abstraction, which can be at many levels like multi-cloud orchestration, individual stack, pipeline or application. By making things more intuitive and concise for humans and leaving the execution to your software machinery and machine learning, you’re likely to improve the lives of your operators, your applications and your application users.

4. Hold Up Before You Speed Up

Cloud will move you faster, and if that’s not enough cause for care, even with no IT strategy, you’ll still end up with multi-cloud in no time: multiple owners, vendors, regions, and availability zones. The increased danger is that multi-cloud can multiply messes and mistakes. Preparation in building a platform is key, and like many things that take a bit of time upfront, it’s worth the effort in the long run.

The consciousness to hold up isolated quick gains – short-term one-offs that generally beget debt down the road – is the critical gambit that will return long-term payouts in adaptability and speed atop a united multi-cloud platform.

IT leaders know that digital transformation is a journey, not a destination. With continuous learning, the first of all healthy continuous IT practices, mastering the tactics and good habits for structuring your multi-cloud platform and using the ins and outs of devops atop it can be fun and rewarding. It allows safe acceleration and agility for IT, and it’s essential to sustainably advance the speeds and smarts of your business.

Multi-Cloud Is a Reality, Not a Strategy – Part 1 of 2 by James Kelly


This article was originally posted on August 25th, 2017 on Data Center Knowledge at http://www.datacenterknowledge.com/archives/2017/08/25/multi-cloud-reality-not-strategy-part-1/

So you’re doing cloud, and there’s no sign of slowing down. Maybe your IT strategies are measured, maybe you’re following the wisdom of the crowd, maybe you’re under the gun, impetuous or oblivious. Maybe all of the above. In any case, like all businesses, you’ve realized the future belongs to the fast and that cloud is the vehicle for your newly dubbed software-defined enterprise: a definition carrying what I call onerous “daft pressures” for harder, better, faster, stronger IT.

You may as well be solving the climate-change crisis because to have a fighting chance today, it feels like you have to do everything all at once.

Enduring such exploding forces and pressures, most crapulent enterprises simultaneously devour cloud left, right and center; the name for said zeitgeist: multi-cloud. If you’ve not kept abreast, overwhelming evidence – like years of the State of the Cloud Report and other analyst research – shows that multi-cloud is the present direction for most enterprises. But the name “multi-cloud” would suggest that using multiple clouds is all there is to it. Thus, in the time it takes to read this article, any technocrat can have their own multi-cloud. It sounds too easy, and as they say, “when life looks like Easy Street, danger is at the door.”

Apparition du Jour

While some cloud pundits forecasting the future of IT infrastructure shift from hybrid cloud to multi-cloud, or go round and round in sanctimonious circles, the perceptive have realized that multi-cloud pretense is merely a catch-all moniker rather than an IT pattern to model. As with many terms washed over too much, “multi-cloud” has quickly washed out. It’s delusional as a strategy, and isn’t particularly useful other than as le mot du jour to attract attention. How’s that for hypocrisy?

Just as you wouldn’t last on only one food, it’s obvious that you choose multiple options at the cloud buffet. How you choose and how you consume is what makes the difference between perishing, surviving and thriving. Choosing and consuming wisely and with discipline is a strategy.

Is the Strategy Hybrid Cloud?

Whether you want to renew your vows to hybrid cloud or divorce the term, somewhere along the way it got confused with bimodal, hybrid IT and ops. Nonetheless, removing the public-plus-private qualification, there was real value in the model of uniting multiple clouds for one purpose. It’s precisely that aspect that would also add civility to the barbarism of blatantly pursuing multi-cloud. However, precipitating hybrid clouds for a single application with, say, the firecracker whiz-bang of the lofty cloud-bursting use case is really a distraction. The greater goal of using hybrid or multiple clouds should be a unified, elastic IT infrastructure platform exploiting the best of many environments.

Opportunities for Ops

Whether public or private, true clouds (which are far from every data center, even well-virtualized ones) ubiquitously offer elastic, API-controllable and observable infrastructure atop which devops can flourish to enable the speeds and feeds of your business. An obvious opportunity and challenge for IT operations professionals lies in building true cloud infrastructure, but if that’s not in the cards for your enterprise, there is still greater opportunity in managing and executing the strategy of choosing and consuming cloud with wisdom and discipline.

In the new generation of IT, ops discipline doesn’t mean being the “no” person, it means shaping the path for developers, with developers. This is where devops teaches us that new fun exists to engineer, integrate and operate things. It’s just that they are stacks, pipelines and services, not servers. Of course, to ride this elevator up, traditional infrastructure and operations pros need to elevate their skills, practicing devops kung-fu across the multi-cloud and possibly applying it in building private clouds too.

Imagine what would happen if we exalted our trite multi-cloud environment with strategies and tactics to master it. We can indeed extract the most value from each cloud of choice, avoid cloud lock-in, and ensure evolvability, but you probably wouldn’t be surprised to learn that there are also shortcuts with new technical-debt spirals lurking ahead.

To secure the multi-cloud as a platform serving us, and not the other way around, some care, commitment, time and work are needed toward forming the right skills and habits, and toward using or building tooling, services and middleware. There is indeed a maturing shared map leading to portability, efficiency, situational awareness and orchestration. In my next article, we’ll examine some of the journey and important strategies to use the multi-cloud for speed with sustainability. Stay tuned.
