Micro-services: Knock, Knock, Knockin’ on DevNetOps’ Door by James Kelly


A version of this article was published on October 13, 2017 at TheNewStack https://thenewstack.io/microservices-knock-knock-knockin-devnetops-door/

“You’ve got to ask yourself one question: ‘Do I feel lucky?’ Well, do ya, punk?” – Dirty Harry

Imagine putting this famous question to the sentiment of IT deployments: it points to IT and even business performance with scary accuracy. High performers – “The most powerful guns in the world,” to borrow Harry’s words – pull the trigger on deployments with high confidence, while deployment dread is a surefire sign of lower performers, advises the State of DevOps report.

In his talks, Gene Kim, author of The DevOps Handbook, corroborates this: deployment anxiety is associated with hapless businesses that are half as likely to exceed profitability, productivity and market-share goals, and, evidently, with lower market-cap growth.

We may conclude high-performing teams wear the badge of confidence because they’re among the ranks in the academy of agile, deploying more often – orders of magnitude more often. Their speed is in taking lots of little steps, so they have regular experience with change.

And feeling luckier is also an effect of actually being luckier. Data show high performers break things less often, and when they do, the forensics are more clear-cut. They bring in better MTTF and MTTR (mean time to failure and mean time to recover) than lower performers’ old-fashioned police work, because investigation and patching proceed quickly from the last small step, instead of digging for clues in bigger deliveries peppered with many modifications.

Less, more often, is better than more, less often

Imagine conducting IT on two technology axes: time and space. If it’s clearly healthier to automate faster, smaller steps with respect to the timeline or pipeline, then consider space, or architecture. When optimizing architecture design and orchestration to recruit nimble pipeline outputs, what stands out in today’s lineup of characters? Affirmative, ace: micro-services.

The principles of DevOps have been around a while and in emerging practice for more than a decade, but the pivotal technologies that cracked DevOps wide open were containers and micro-services orchestration systems like Kubernetes. Looking back, it’s not so surprising that smaller boundaries and enforced packaging from developers, preserved through the continuous integration and delivery pipeline, make more reliable cases for deployment.

A micro-services architecture isn’t foolproof, but it’s the best partner today for the speed and agility of frequent or continuous deployment.

Micro-services networks and networking as micro-services

In a technical trial of service meshes versus SDN, there are three key positions networking takes in today’s micro-services scene:

  1. In a micro-services design, the pieces become smaller and the intercellular space – the network – gets bigger, busier, and hence, vital. Also, beyond zero-trust-style protection of the micro-services themselves, it’s important to have this network locked down.
  2. Service discovery, service/API gateways, service advertising with DNS, and service scale-out or -in with load balancing are all players in networking’s jurisdiction (see the sketch after this list).
  3. Beyond micro-services, any state replication, backup, or analytics over an API, a volume, or a disk, also rides on the network.
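
As a concrete illustration of the second point, here is a minimal sketch, not from the article, of two of those networking duties done in application code: DNS-based service discovery and naive client-side round-robin load balancing. The service name and port are hypothetical; on a Kubernetes-style platform they would be whatever DNS name the orchestrator advertises for a service.

```python
# Minimal sketch of two networking duties named above: DNS-based service
# discovery and client-side round-robin load balancing. The service name
# "payments.default.svc.cluster.local" is hypothetical; substitute any DNS
# name your platform advertises for a service.
import itertools
import socket


def discover(service_name: str, port: int) -> list[tuple[str, int]]:
    """Resolve a service name to its set of (address, port) endpoints."""
    infos = socket.getaddrinfo(service_name, port, proto=socket.IPPROTO_TCP)
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # sockaddr[0] is the IP address and sockaddr[1] is the port.
    return sorted({(info[4][0], info[4][1]) for info in infos})


def round_robin(endpoints: list[tuple[str, int]]):
    """Yield endpoints forever in rotation -- the simplest load balancer."""
    yield from itertools.cycle(endpoints)


if __name__ == "__main__":
    endpoints = discover("payments.default.svc.cluster.local", 8080)
    balancer = round_robin(endpoints)
    for _ in range(3):
        print("next endpoint:", next(balancer))
```

In practice a service mesh or platform load balancer takes over both duties, which is exactly why this layer of the network becomes vital as the pieces get smaller.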

Given the importance of networking to the success of micro-services, it’s ironic that networking components are mostly monolithic. Worse, deployment anxiety is epidemic: network operators have lengthy change controls, infrequent maintenance windows, and new code versions are held for questioning for 6-18 months and several revisions after availability.

A primal piece on DevNetOps cites five things we can borrow from the department of DevOps to remedy network ops in time and space, starting with code, pipelines and architecture.

Small steps for DevNetOps

Getting started with DevNetOps is possible today with Spinnaker-esque orchestration of operational stages: a network-as-code model and repository would feed into a CI/CD pipeline for all configuration, template, code and software-image artifacts. With new processes and skills training for networking teams – like coding and review logistics as well as testing and staging simulation – we could foil small-time CLI-push-to-production joyrides and rehabilitate the seriously automation-addled “masterminds” who might playbook-push bigger mistakes to production.
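
As a small, hedged illustration of what one stage of such a pipeline could look like, the sketch below lints every network configuration artifact in a repository before anything is promoted toward staging or production. The repository layout (configs/*.yaml) and the required keys are assumptions made up for this example, not a vendor schema or anything prescribed by the article; the only point is a failing gate that blocks the deploy.

```python
# A minimal sketch of one CI/CD gate in a network-as-code pipeline: lint every
# device configuration artifact in the repository before anything is pushed
# toward staging or production. The repo layout ("configs/*.yaml") and the
# REQUIRED_KEYS schema are hypothetical, for illustration only.
import sys
from pathlib import Path

import yaml  # PyYAML

REQUIRED_KEYS = {"hostname", "role", "interfaces"}  # hypothetical schema


def lint_config(path: Path) -> list[str]:
    """Return a list of human-readable problems found in one config artifact."""
    try:
        doc = yaml.safe_load(path.read_text())
    except yaml.YAMLError as exc:
        return [f"{path}: not valid YAML ({exc})"]
    if not isinstance(doc, dict):
        return [f"{path}: expected a mapping at the top level"]
    missing = REQUIRED_KEYS - doc.keys()
    return [f"{path}: missing keys {sorted(missing)}"] if missing else []


def main(repo_root: str = "configs") -> int:
    problems = []
    for path in sorted(Path(repo_root).glob("*.yaml")):
        problems.extend(lint_config(path))
    for problem in problems:
        print("FAIL:", problem)
    # A non-zero exit fails the pipeline stage, blocking the merge or deploy.
    return 1 if problems else 0


if __name__ == "__main__":
    sys.exit(main())
```

A stage like this would sit early in the pipeline, with simulation, staging and canary steps following it, so that each micro modification is exercised before it reaches a production device.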

The path to corrections and confidence on the technology time axis begins with automating the ops timeline as a pipeline with steps of micro-modifications.

Old indicted ops practices can be reinvented by the user community with vendor and open-source help for tooling, but when it comes to architecture, the vendors need to lead. Vendors are the chief “Dev” partner in the DevNetOps force, and motives are clearer than ever to build a case to pursue micro-services.

Small pieces for DevNetOps: Micro-services

After years of bigger badder network devices – producing some monolithic proportions so colossal they don’t fit through doors – vendors can’t ignore the flashing lights and sirens of cloud, containers and micro-services.

It’s clear that for DevNetOps, as for DevOps, micro-sized artifacts are perfectly sized bullets for the chamber of an agile pipeline. But while there’s evidence of progress in the networking industry, there’s still a ways to go.

Some good leads toward a solution include Arista supporting patch packages separate from their main EOS delivery; the Open Compute Project (OCP) popularizing software and hardware disaggregation in its networking project; and Juniper Networks building on disaggregation by supporting node slicing and universal chassis for finer-grained modularity and management boundaries. Furthermore, data center designs of resilient, scale-out Clos network fabrics built from pizza-box-sized devices are gradually being favored over large aggregation devices. And in software-defined networking, projects like OpenContrail are now dispatched as containers.

In the world of DevOps, we know that nothing does wonders for deployment quality like developers threatened with the prospect of a page at 2am. But for DevNetOps that poetic justice is missing, and 24/7 support across the vendor-customer wall hardly subdues operator angst when committing a change to roll out a deployment. Moreover, the longer the time between a flawed vendor code change and the moment it’s caught, the more muddled the trail gets and the tougher the flaw is to pin down.

The best line of defense against these challenges is smaller, more frequent vendor deliveries, user tests and deployments. Drawing inspiration from how micro-services bolstered the success of DevOps, imagine if, while we salute continuous and agile DevNetOps operations today, we also compel vendors and architectural commissioners to uphold designs of finer-grained, felicitous micro-services, devices and networks for a luckier tomorrow.


For more information on defining DevNetOps and DevSecOps, see this article and my short SlideShare:

New Heroes in the DevOps Saga: DevSecOps and DevNetOps by James Kelly


This article was originally published on September 26 at DevOps.com https://devops.com/devsecops-devnetops-new-heroes-devops-saga/

The evolution of DevOps is by no means done, but it’s safe to say that there is enough agreement and acceptance to declare it a hero. DevOps has helped glorify IT to the point where it’s no longer a preventer of the business, nor merely a provider to or partner of the business.

Often IT is the business, or its vanguard for competitive disruption and differentiation.

Splintering the success of this portmanteau hero, we now hear more and more of two trusty sidekicks: DevSecOps and DevNetOps. Still in their adolescence, these tots are frequently misunderstood, still forming their identities, and still in need of a lot of development if they’re to enter the IT hall of fame like their forerunner.

Just as the terms look, DevSecOps and DevNetOps are often assumed to be about wrapping DevOps principles around security and networking: operators hope to assuage technical debt and drudgery by automating in proficiency and resiliency. For networking, I’ve covered how there is a lot more to that than coding, but to be sure, these sidekicks certainly espouse operators learning how to develop, while DevOps was equally, if not more, about developers learning to operate.

The Shift Left: SecDevOps and NetDevOps

As if it wasn’t hard enough to tell what DevSecOps and DevNetOps want to be when they grow up, we’ve gone and given them alter egos: SecDevOps (aka “rugged” DevOps) and NetDevOps. Think about them exactly as the words look – it’s about the shift to the left. Left of what?

Traditional DevOps practices focus on development of business-specific applications. The development timeline is known as concept to cash, and with all the superpowers of DevOps we try to cut down our enemy – the lead time between code and cash – with repeatable processes.

Security and building infrastructure – like networks – were supporting tasks, neither revenue-generating nor a competitive advantage. Thus, security and networking sat far to the right on the timeline, with concerns of operational scale, performance and protection.

Today’s shift left propels security and infrastructure considerations earlier on the timeline, into coding, architecture and pre-production systems. It’s a palpable penny-drop amid daily news of security breaches and infrastructure outages causing technology-defined establishments to bleed money and brand equity.

Fill the bucket with cash, but don’t forget to forestall the leaks!

DevOps and Infrastructure: Challenge and Opportunity

Automation sparks have flown over the proverbial wall into the camp of infrastructure and operations (I&O) pros. Operators trading physical for virtual, macro for micro, converged for composed, and configuration for code is proof that the fire has caught in security and networking. Controlling the burn now is key, so that healthier skills and structures arise in place of the I&O dogma and duff. Fortunately, this is precisely the destiny of our newfound heroes, DevSecOps and DevNetOps.

However, in doing DevSecOps and DevNetOps – embracing security and networks as code – we mustn’t be so credulous as to forget the formidable DevOps practices and patterns that need transforming along the ultimate automation journey. Testability, immutability, upgradability, traceability, auditability, reliability and the other -abilities are not straightforward to achieve.

Setting aside “aaS” technology consumed as a service, a fundamental challenge to innovating SecOps and NetOps, compared to application ops, is that applications are crafted and built, while security and networking solutions are mostly still bought and assembled.

Security and network infrastructure as code is something that needs to be co-created with the vendors. Other than in the cloud, it will take a while before security and networking systems are driven API-first, and are redesigned and broken down to offer simulation, composition and orchestration with scale and resilience.

While this will land first in software-defined infrastructure, there is still a ways to go to manage most software-defined security and networking systems with continuous practices of artifact integration, testing, and deployment. Hardware and embedded software will be even more challenging.

Finding Strength in Challenge

So on one hand, DevOps is evolving with security and networking shifting left. On the other hand, traditional security and networking ops are transforming with DevOps principles.

Is the ultimate innovation to squeeze out those traditional operations altogether? Does NetDevOps + DevNetOps = DevOps?

There is a parallel train of thought and debate here, with success on both sides. Purist teams cut out operations with the “you build it, you run it” attitude. Other companies, like Google, have dedicated operations-specialist teams of SREs (site reliability engineers). While the SRE reporting structure is separate, SRE jobs are deeply integrated with those of the development teams. It’s easy to imagine the purist approach subsuming security and networking into DevOps practices, but only if we assume the presence of cloud infrastructure and services as a platform. Even then, there is still substantiation for the SRE.

Layers below, however, somebody still needs to build the foundations of the cloud IaaS and data center hardware. As they say, “Even serverless computing is not actually serverless.”

Underpinning the clouds are data centers. And then there’s transport, IoT, mobile or other secure networks to and between clouds. In these areas, it’s obvious there is a niche for our two trusty sidekicks, DevSecOps and DevNetOps, to shake up ops culture and principles. These two heroes can rescue software-defined and physical infrastructure from the clutches of so many anti-pattern evils, like maintenance windows and change controls (ahem, it’s called a “commit”).

We may not require rapid experimentation in our infrastructure, but we would warmly welcome automated deployments, automated updates, failure and attack testing drills, and intent-driven continuous response. They will boost resiliency and optimization for the business and peace of mind for the builders.

Teams operating security, networks, and especially clouds, need to honor and elevate DevSecOps and DevNetOps, so that on the journey now afoot, our teams and our new heroes may realize their potential.


Good Habits to Make the Multi-Cloud Work For You – Part 2 of 2 by James Kelly


This article was originally published on September 12 at Data Center Knowledge http://www.datacenterknowledge.com/industry-perspectives/good-habits-make-multi-cloud-work-you-part-2

In my previous article, I talked about the state of infatuation with hybrid and multi-cloud environments. Would you be surprised that in the stresses and mania surrounding IT cloud strategy, some folks fixate more on the playing field than the game itself? You probably already know that you’ve got to get your head in the game in this unforgiving age, and that a winning strategy for digitally speeding and feeding the business across the multi-cloud is not “taste the rainbow”; it’s choosing and consuming cloud wisely.

Too bad that how you do so isn’t obvious, and as if it wasn’t difficult enough to anticipate technology turns ahead, there are so many captivating cloud services that might lead you down treacherous roads to traps and debt. But there are also well-known tactics emerging that you can model to ready and steady your organization for change and success. Like most, if your journey has already begun, you’re picking these up along the way and adjusting your habits as you go.

You know how bad habits are easy to form and hard to live with? Similarly, it’s very easy to jump into multi-cloud or unwittingly let it happen to you. At this precipice, the warning signs and early stories of cloud lock-in, overwhelming multiple-cloud context switches, runaway expenses, and situational blindness are hopefully enough to grab your attention. Multi-cloud is inevitable; these fatalities are not.

A multi-cloud platform is a powerful environment, and it requires proper preparation so you can control it, instead of it controlling you. With that, here are four of the best preparations I’ve seen, like good habits that are hard to form, but easy to live with.

1. Unify Your Toolchain

In the eternal deluge and disruption of new tech tooling and systems, remember those good old-fashioned IT values of standardization and consolidation? Don’t throw those babies out with the ITIL bathwater.

As you embrace cloud and bimodal IT with new and improved tools, you might loosen the reins on your traditional values, using public cloud and building private cloud infrastructure alongside your physical and virtualized data centers. In loosening the reins or spinning out agile side projects, just watch out for the trap of hasty developers rolling their own stack or going stackless/serverless, only to get caught in a web of proprietary cloud services.

Don’t rush to an obstinate, knee-jerk block of this, either. Think of a unified toolchain effort as one undertaken with the developers to rationalize a base devops pipeline, cluster, and middleware stack that could serve 80 percent of projects.

  • Your tools need to work on any cloud infrastructure, and if they can work with your legacy infrastructure, even better.
  • Free yourself from lock-in of cluster and pipeline orchestration tools and infrastructure-as-code lifecycle management: keep them untethered from any specific underlying IaaS with portable shims like Terraform.
  • While you don’t want to throttle developers back from using services outside of your stack – they’ll go around you anyway – encourage managed open-source-based services. Then incorporate such services into your middleware toolchain as it matures. Tools like Helm make it easier than ever to manage services yourself (see the sketch after this list).
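
To make the toolchain point concrete, here is a minimal sketch, under stated assumptions, of driving the same infrastructure-as-code module and the same middleware install across several clouds from one wrapper. The workspace names, variable files and chart are hypothetical; the terraform and helm subcommands shown are standard CLI usage, though your pipeline tool would normally run them for you.

```python
# A minimal sketch of a cloud-agnostic toolchain driven from one script: the
# same Terraform module provisions the IaaS of your choice via a per-cloud
# variable file, and Helm installs the shared middleware stack on the
# resulting cluster. Workspace names, tfvars files, kube contexts and the
# chart are hypothetical examples.
import subprocess


def run(*cmd: str) -> None:
    """Run a CLI command, echoing it first; raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def provision(cloud: str) -> None:
    """Apply the same infrastructure-as-code module against one cloud."""
    run("terraform", "init")
    run("terraform", "workspace", "select", cloud)
    run("terraform", "apply", "-auto-approve", f"-var-file={cloud}.tfvars")


def install_middleware(kube_context: str) -> None:
    """Install the common middleware stack (example: an ingress controller)."""
    run("helm", "upgrade", "--install", "ingress", "ingress-nginx/ingress-nginx",
        "--kube-context", kube_context)


if __name__ == "__main__":
    for cloud in ("aws", "gcp", "azure"):  # hypothetical workspace names
        provision(cloud)
        install_middleware(f"{cloud}-cluster")
```

The point isn’t the wrapper itself; it’s that because the stack stays untethered from any one IaaS, the per-cloud differences shrink to a variable file.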

If you’re a lean IT shop, let’s face it, following this to the letter may take you away from getting to market ASAP. Maybe you’re a startup or in that mode? You don’t just want, but need, to focus on developing your core competitive technology, not a portable multi-cloud toolchain.

How do you balance moving fast, employing low-hanging SaaS, with the concern of vendor and architectural lock-in?

If a tool is a competitive differentiator, then you should probably build it. Otherwise, remember there are a lot of open-source tools that are glued together with reference implementations of other open-source tools: large projects like Kubernetes and Spinnaker are easy to adopt with a bunch of pre-canned, sensible defaults. Another option is to choose managed open-source services that are more easily insourced later or offered by multiple cloud vendors.

Finally, software design is probably the most important and challenging factor of all. Architecting for scale is obvious, but flexibility enables business agility; so consider not only today’s lock-in, but also getting locked out of a competitive advantage tomorrow. Assembling API-driven (capital ‘S’) Services from micro-services is a well-established pattern to do this, and I’d recommend software alchemists investigate evolutionary architecture from ThoughtWorks for more wisdom.
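
As a small, hypothetical sketch of that pattern, the composite Service below aggregates two backing micro-services behind its own stable contract; the endpoint URLs and JSON fields are invented for illustration only.

```python
# A minimal sketch of a capital-"S" Service composed from two smaller
# micro-services behind its own stable API. The endpoint URLs and JSON fields
# are hypothetical; swapping a backing micro-service later doesn't change the
# Service's outward contract.
import json
from urllib import request

CATALOG_URL = "http://catalog.internal:8080/items/{item_id}"  # hypothetical
PRICING_URL = "http://pricing.internal:8080/price/{item_id}"  # hypothetical


def _get_json(url: str) -> dict:
    with request.urlopen(url, timeout=2) as resp:
        return json.load(resp)


def get_product(item_id: str) -> dict:
    """The composite Service call: aggregate catalog and pricing micro-services."""
    item = _get_json(CATALOG_URL.format(item_id=item_id))
    price = _get_json(PRICING_URL.format(item_id=item_id))
    # The composed response is the Service's contract; callers never see the
    # individual micro-services, so those can evolve or be replaced freely.
    return {"id": item_id, "name": item.get("name"), "price": price.get("amount")}


if __name__ == "__main__":
    print(get_product("sku-123"))  # requires the two hypothetical backends
```

Because callers only ever see the composed response, either backing micro-service can be replaced – with an in-house rewrite or a different cloud’s managed offering – without locking the Service, or the business, out of tomorrow’s better option.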

2. Connect Your Clouds

Connection was a given in the world of hybrid cloud. That still holds true. However, cloud bursting, the most bombastic of all hybrid-cloud use cases, is the least common. Multiple clouds need to be connected for many more realistic and common use cases:

  • Imagine pipeline automation that includes environments or steps split across clouds. Dev/test can happen anywhere, but you may have higher requirements for staging/production.
  • Secure data replication for warehousing or distributed applications, and backups for disaster recovery and avoidance.
  • Split application tiers, where different tiers have different non-functional requirements – sovereignty, security, scale, performance, and so on – that must be met in various geographies or optimized with split economics. Some applications may be split for functional requirements too, because certain clouds have unique advantages that others can’t reproduce.

Such cloud interconnections demand higher security than using the internet, and often clouds simply require a secure connection back to your enterprise staff or users. Beyond security, unique routing and legacy layer-2 unicast or even multicast connectivity could be an application requirement. While there are cloud-vendor tools for basic security and networking, you can also search for your own software-defined and virtualized security and networking solutions that are agnostic to any cloud infrastructure, unifying this toolchain too and incorporating it into your infrastructure-as-code policies.
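
One way to keep those interconnections inside your infrastructure-as-code policies is to declare them as data and render a plan from that declaration. The sketch below is a minimal, hypothetical model – the site names, address ranges and even the Interconnect class are invented for illustration – but it shows the shape of treating cloud-to-cloud connectivity as code rather than tickets.

```python
# A minimal sketch of cloud interconnects as code: declare the tunnels you
# want between clouds in one place, then render a plan you can feed to
# whatever provisioning tool you use. All names, CIDRs and the Interconnect
# model itself are hypothetical, for illustration only.
from dataclasses import dataclass
from itertools import combinations


@dataclass(frozen=True)
class Site:
    name: str   # e.g. "aws-us-east-1", "gcp-europe-west1", "on-prem-dc1"
    cidr: str   # address range reachable behind this site


@dataclass(frozen=True)
class Interconnect:
    a: Site
    b: Site
    encrypted: bool = True  # default to encrypted tunnels over the internet


def full_mesh(sites: list[Site]) -> list[Interconnect]:
    """Connect every pair of sites -- fine for a handful of clouds."""
    return [Interconnect(a, b) for a, b in combinations(sites, 2)]


if __name__ == "__main__":
    sites = [
        Site("on-prem-dc1", "10.0.0.0/16"),
        Site("aws-us-east-1", "10.1.0.0/16"),
        Site("gcp-europe-west1", "10.2.0.0/16"),
    ]
    for link in full_mesh(sites):
        print(f"{link.a.name} <-> {link.b.name} "
              f"(routes {link.a.cidr} and {link.b.cidr}, encrypted={link.encrypted})")
```

The rendered plan would then feed whichever provisioning tool actually builds the tunnels, keeping the declaration itself cloud-agnostic.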

3. Harmonize, Unify and Simplify Policy

If you have software deployed or scaled across multiple cloud locations, the configuration, monitoring and automatic-response systems may get unwieldy unless you seek to elevate your orchestration across clouds. Of course, there are cloud management platforms for this. With or without them, you can also do some multi-cloud management with your own centrally harmonized configurations and management as code (see the sketch below). A further step might unify configuration and management with global controllers, but with the track record of humans causing most errors, be careful with your blast radius for a fat-finger typo.
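
Here is a minimal sketch of what “centrally harmonized configurations as code” could look like: one base policy, small per-cloud overrides, and a merge that renders the effective configuration per cloud. The keys and cloud names are hypothetical; rendering and deploying one cloud at a time is also a simple way to cap the blast radius of a fat-fingered change.

```python
# A minimal sketch of centrally harmonized configuration as code: one base
# policy, per-cloud overrides, and a deep merge that renders the effective
# configuration per cloud. The keys shown are hypothetical.
import copy

BASE = {
    "logging": {"level": "info", "export": True},
    "firewall": {"default": "deny", "allow": ["443/tcp"]},
}

OVERRIDES = {
    "aws": {"logging": {"level": "debug"}},
    "gcp": {"firewall": {"allow": ["443/tcp", "8443/tcp"]}},
}


def deep_merge(base: dict, override: dict) -> dict:
    """Return base with override applied, recursing into nested mappings."""
    merged = copy.deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


if __name__ == "__main__":
    # Rendering (and deploying) one cloud at a time limits the blast radius.
    for cloud, override in OVERRIDES.items():
        print(cloud, "->", deep_merge(BASE, override))
```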

Another trend in provisioning models and APIs is abstraction, which can be at many levels like multi-cloud orchestration, individual stack, pipeline or application. By making things more intuitive and concise for humans and leaving the execution to your software machinery and machine learning, you’re likely to improve the lives of your operators, your applications and your application users.

4. Hold Up Before You Speed Up

Cloud will move you faster, and if that’s not enough cause for care, even with no IT strategy, you’ll still end up with multi-cloud in no time: multiple owners, vendors, regions, and availability zones. The increased danger is that multi-cloud can multiply messes and mistakes. Preparation in building a platform is key, and like many things that take a bit of time upfront, it’s worth the effort in the long run.

The consciousness to hold up isolated quick gains – recognizing them as short-term one-offs that generally beget debt down the road – is the critical gambit that will return long-term payouts in adaptability and speed atop a united multi-cloud platform.

IT leaders know that digital transformation is a journey, not a destination. With continuous learning, the first of all healthy continuous IT practices, mastering the tactics and good habits for structuring your multi-cloud platform and using the ins and outs of devops atop it can be fun and rewarding. It allows safe acceleration and agility for IT, and it’s essential to sustainably advance the speeds and smarts of your business.