Say Hello to Our Newest Branch Platforms for Secure SD-WAN by James Kelly

The announcement of Juniper's newest hardware additions for the AI-driven enterprise makes our CPE portfolio the most extensive for secure SD-WAN across branches and campuses of all sizes. There's no denying the growing importance of SD-WAN for providing secure and efficient connectivity from remote sites to the cloud. Even more important is enterprises' need to drive operational simplicity and uniformity across the branch and campus in today's multicloud environment. For SD-WAN to be successful, the key is to satisfy the needs of today while preparing for those of tomorrow and beyond.

One of the core needs of increasing importance for SD-WAN is security. Traditional security solutions don't cut it when it comes to performance, interconnectivity and flexibility; meanwhile, SD-WAN-centric solutions may offer only elementary security features that ultimately put the business at risk. The industry is at a crossroads where SD-WAN features and advanced threat protection need to be designed hand-in-hand to safeguard users, applications and infrastructure. This has been the exact focus of our SD-WAN solution and, to that end, we've now expanded our range of CPE hardware in the WAN edge portfolio to include:

Wi-Fi Mini Physical Interface Module (mPIM): An enterprise-grade Wi-Fi card for compact locations, used with our SRX Series Services Gateways. It provides dual-radio support for the 2.4GHz and 5GHz frequencies, along with 802.11ac Wave 2 support and backward compatibility with the 802.11n standard. The module is suited for remote offices, guest Wi-Fi, small offices, IoT connectivity or kiosks, and it is an ideal branch-in-a-box solution where one access point is sufficient.


This mPIM can be managed via the CLI, J-Web or Juniper Sky Enterprise. It also supports ZTP and management through the Contrail Service Orchestration interface, as part of Juniper's cloud-managed or on-premises Contrail SD-WAN solution.
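For teams that script against the CLI, the same set-style configuration can also be pushed programmatically. The sketch below uses Juniper's PyEZ library to load an illustrative SSID configuration; the device address, credentials, interface name and the wlan configuration hierarchy are assumptions to verify against the SRX documentation for your Junos release.

```python
# Minimal sketch: push an illustrative Wi-Fi mPIM SSID config to an SRX via PyEZ.
# The 'wlan' hierarchy, interface name and option names are assumptions --
# confirm them against the Junos documentation for your release before use.
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

# Hypothetical management address and credentials.
with Device(host="192.0.2.10", user="admin", password="secret") as dev:
    cfg = Config(dev)
    cfg.load(
        "set wlan access-point branch-ap interface wl-1/0/0\n"
        "set wlan access-point branch-ap radio 1 virtual-access-point 0 ssid guest-wifi",
        format="set",
    )
    cfg.pdiff()   # show the candidate configuration diff
    cfg.commit()  # apply the change
```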

SRX380: For larger branches, the SRX380 is the fastest-performing CPE platform in the branch SRX300 product line. Leading features include high port density with 10G options for extensive on-board connectivity, increased PoE+ port density for IoT devices, AES256 MACsec encryption, dual power supplies and up to four mPIM card slots for wired or wireless connectivity.

The SRX380 can be deployed as a secure SD-WAN device or a next-generation firewall. Users can add advanced threat prevention services on top of the native next-generation firewall and UTM capabilities, IPS, and AppSecure application visibility and policy controls.
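As a rough illustration of how those services layer onto an ordinary firewall rule, the sketch below again uses PyEZ to attach a UTM policy to a permit rule. The zone, policy and profile names are hypothetical, and the application-services options available vary by Junos release, so treat this as a starting point rather than a recipe.

```python
# Minimal sketch: attach UTM services to a permit rule on an SRX380.
# Zone, policy and profile names are hypothetical; verify the
# application-services options supported by your Junos release.
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

POLICY = "set security policies from-zone trust to-zone untrust policy allow-web"

with Device(host="192.0.2.20", user="admin", password="secret") as dev:
    cfg = Config(dev)
    cfg.load(
        f"{POLICY} match source-address any destination-address any application junos-http\n"
        f"{POLICY} then permit application-services utm-policy branch-utm",
        format="set",
    )
    cfg.commit(comment="enable UTM on the allow-web policy")
```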


NFX350: The NFX350 is a high-end universal CPE platform in the NFX Series for large branch site deployments. Built on the next generation of Intel processors, Skylake, it offers up to 7.5 Gbps IPsec performance for higher SD-WAN scale and performance, while redundant power supplies provide greater platform resiliency. It includes 8x1Gbps and 8xSFP/SFP+ ports with AES256 MACsec support for high network connectivity and WAN interfaces for LTE, DSL and SFP. Support for multiple Juniper and third-party VNFs enables customers to accelerate application deployment in an automated and scalable fashion.

The NFX350 universal CPE platform fits the bill as a secure router, SD-WAN device or next-generation firewall. Consistent with the NFX Series, users reap the many benefits of SD-WAN, but most importantly, the simplicity of automation and consolidation with the reliability of smarter security and SDN.


These new products cover both ends of the branch and campus spectrum – the SRX Wi-Fi mPIM for compact spaces and the SRX380 and NFX350 as top-of-the-line branch CPEs. We're proud of our extensive SD-WAN solution and have plenty more to share about it in our Toolkit Tuesday webinars. Be sure to tune in or test drive Contrail SD-WAN for free.

Kubernetes: The Modern-day Kernel by James Kelly


In the lead up to KubeCon + CloudNativeCon North America 2019, we are posting a series of blogs covering what we expect to be some of the most prevalent topics at the event. In our first post, we walked through the journey from the monolithic to the microservices-based application, outlining the benefits of speed and scale that microservices bring, with Kubernetes as the orchestrator of it all.

Kubernetes may appear to be the new software-defined, a panacea like software-defined networking (SDN), famously personified at Juniper by JKitty—the rainbow-butterfly-unicorn kitten. But you know what they say about the butterfly effect. When the Kubernetes kitty flaps its wings…there’s a storm coming. Kubernetes is indeed amazing—but not yet amazingly easy. Along with a shift to a microservices architecture, Kubernetes may create as many challenges as it solves.

Breaking applications into microservices means security and networking are critical because the network is how microservices communicate with each other to integrate into a larger application. Get it wrong, and it’s a storm indeed. 

This week at KubeCon, Juniper is showcasing solutions in the areas where we’re best known for engineering prowess: at-scale networking and security.

As luck would have it, these coincide perfectly with the challenges that Kubernetes creates. Let's take a closer look at some of them.

The Opportunities and Challenges of Microservices and Kubernetes

A well-architected microservices-based application can shrug off the loss of a single container or server node, as long as an orchestration platform is in place to ensure enough instances of the right services are active for the application to meet demand. Microservices-based applications are, after all, designed to add and remove individual service instances on an as-needed basis.

This sort of scale-out, fault-tolerant orchestration is what Kubernetes does with many additional constraint-based scheduling features. For this reason, Kubernetes is often called the operating system kernel for the cloud. In many ways, it’s a powerful process scheduler for a cluster of distributed microservices-based applications. But a kernel isn't everything that an application needs to function.
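As a concrete, if simplified, illustration of that scheduling role, the sketch below uses the official Python client for Kubernetes to declare a small Deployment with a replica count, resource requests and a node-selector constraint. The image, labels, node label and namespace are placeholders, and this shows the general API rather than a recommended production setup.

```python
# Minimal sketch: ask Kubernetes to keep three replicas of a service running,
# with resource requests and a node-selector constraint guiding the scheduler.
# Image name, labels, node label and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

container = client.V1Container(
    name="web",
    image="example.com/shop/web:1.0",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the scheduler keeps three instances alive across the cluster
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[container],
                node_selector={"disktype": "ssd"},  # a simple placement constraint
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```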

In true Juniper form of solving the hardest problems in our industry, we are engineering simplicity — tackling challenges in storage, security, networking and monitoring. We know from our customers and our own experience that if you're managing your own Kubernetes deployments, these challenges need to be squarely addressed in order to successfully manage this new IT platform.

One way to tackle problems is to outsource them. Kubernetes as a service is one approach to simplicity, but it also comes with trade-offs in cost and multicloud portability and uniformity. Therefore, operating Kubernetes clusters will be in the cards for most enterprises—and Juniper is here to help.

Don’t Limit Your Challenges, Challenge Your Limits

Kubernetes operators often deal with its challenges by limiting the size of clusters. 

Smaller clusters restrict the security and reliability blast radius, and their modest demands on monitoring, storage and networking look easier to solve than those of one larger shared cluster. In some cases, each application team deploys its own cluster. In other cases, each development lifecycle phase—dev, test, staging and production—has its own Kubernetes cluster. All variants aside, many Kubernetes operators are deploying small clusters of, for example, 10-20 nodes with a few applications in each one. Yet Kubernetes can scale a couple of orders of magnitude beyond this.

The result of many small clusters already has a name: Kube sprawl.

While there may be benefits to a small-cluster design, this approach quickly introduces new challenges, not containing complexity but merely shifting it around. Managing many clusters also means juggling more Kubernetes versions in flight, upgrades and patching, not to mention more engineers to do all of this, or the added task of building your own automation to do so.

Moreover, there is the obvious drawback that, since each server or VM node can belong to only one Kubernetes cluster, the efficiencies a larger shared cluster would afford are lost. Resource efficiency and economies of scale come when there is a great variety of applications, teams, batches and peak usage times.

If a small cluster is running just a few apps, it's unlikely that those apps will be diverse enough to steadily use its whole pool of resources, so there will be times of waste. Operators can try to make the number of cluster nodes itself elastic, like the containerized applications, but this orchestration is difficult to automate and build for the unique demands of the applications inside each small cluster. And given that each Kubernetes cluster needs at least three server nodes for high availability, that overhead is replicated across every cluster maintained (ten small clusters, for example, means thirty such nodes instead of three).

Many clusters create new challenges for developers. While cloud-native tools, such as microservices tracing, exist to benefit developers, these and other middleware services are generally designed to work within, not across, clusters. Other new tools like service meshes can be more complex when federated across clusters.

Applications Span Edge Clouds and Multicloud, So Too Must Cloud-native Infrastructure

Kubernetes is good at managing a single cluster of servers and the containers that run on them. But some applications will run in multiple clusters to span multiple fault domains or availability zones (AZs). This isn't about multiple small clusters inside the same data center AZ, but rather about spanning data centers for additional availability, better global coverage and lower user latency.

Solving security, networking, monitoring and storage per cluster is a first step, but strong solutions should deal with the challenges of federating multiple clusters across AZs, regions, edge cloud and multicloud. Here again, Juniper has some market leadership to show off at KubeCon.

Security and networking become more complex in this scenario, as they play a more important role in the global application architecture. The network must link together microservices as well as multicloud. Defense in depth must protect each cluster, but defense in breadth is also required, enforcing policy end-to-end as well as top-to-bottom.

You Can’t Always Leave the Legacy Behind

In the real world of enterprise, the services used by cloud-native applications will range from "that bit still runs on a mainframe" to "this bit can be properly called a microservice." So how can organizations provide a secure, performant and scalable software-defined infrastructure for an application made out of radically different types of services, each running on multiple different kinds of infrastructures and possibly from multiple providers located all around the world?

Here again, solving security, networking, monitoring and storage for Kubernetes is all well and good, but what about managing the legacy in VMs and on bare metal? Many newfangled tools for Kubernetes end where Kubernetes does, but applications don't. Neither do operations, and yet another tool for ops teams to learn and deploy creates additional burden instead of reducing it. Technology and tooling today must solve new cloud-native problems, but the best tools will solve the hardest problem today: operational simplicity. This can only happen by spanning the boundaries of new and old, and maintaining evolvability for the future.

Solutions for Operational Simplicity

By this point, it’s easy to imagine what’s coming next. Juniper has been successfully building high-performance, scalable systems for a long time.

Stay tuned for our next blog, which will explore how Juniper brings operational simplicity to Kubernetes users and beyond. In the meantime, find out more about Juniper's cloud-native solutions at juniper.net/cloud-native.

Bringing Operational Simplicity to Data Centers with Contrail Insights by James Kelly


Data centers are the epitome of infrastructure automation, and their modern manifestation—cloud—provides an almost magical platform for its users. To construct clouds, separation of concerns into layers of abstraction, like network overlays and service API encapsulations, helps enable service agility and innovation. But do these layers curb complexity, or merely mask it?

The truth is, it’s a struggle to understand how all the magic happens behind the curtain of cloud infrastructure. Willfully blind reliability can be a house of cards, with applications stacked upon services, stacked upon a cloud platform, stacked upon data center infrastructure. If the foundation of the cloud architecture—the network—wobbles, or doesn’t live up to its SLA metrics, then issues reverberate all the way up the stack.

Demystifying this magic to identify root causes is a deeply complex problem faced by all data center operators. To thwart and unmask such complexity in the data center, we have engineered a solution that will shine a light on some of the most elusive troubleshooting and analytics issues faced today.

Introducing Contrail Insights

Juniper’s Contrail Insights simplifies multicloud operations with monitoring, troubleshooting and optimizing functions based on telemetry collection, policy rules, artificial intelligence and an intuitive user interface for analysis and observability. It works with VMware, OpenStack, Kubernetes and public cloud environments, as well as private cloud data center infrastructure, and it provides visibility across the network, servers and workloads.

Contrail Insights is available standalone or as part of Contrail Enterprise Multicloud, where it's combined with Contrail Networking and Contrail Security, and it's also available with Contrail Cloud for service providers. This next evolution of AppFormix has now been fully merged into the Contrail Command user interface and Contrail APIs, completing the trifecta of Contrail Networking, Security and Insights.

In our new release of Contrail Insights, we have greatly expanded the analytics and observability features well beyond what AppFormix previously offered.

Seeing below the tip of the infrastructure iceberg

As revealed in Juniper's 2019 State of Network Automation Report, monitoring is the most time-consuming day-to-day task for network and security operations teams. As teams automate more, monitoring increasingly becomes the cornerstone of operations, since there are fewer changes to perform manually.

Contrail Insights is now doing for monitoring and troubleshooting what Contrail Networking and Contrail Security did for data center, cloud and cloud-native orchestration. Its instrumentation sees through the layers of entangled automation, making monitoring and troubleshooting possible.

Let’s take a closer look at some of the other new features:

An intuitive tour of topology

A good place to start is the topology view. Contrail Insights shows the fabric of spine and leaf switches and links, all the way down to the servers and their hosted workloads.

The topology viewer works well for data centers of any size. There are smart arrangement presets as well as the ability to customize the display through an intuitive user interface: the user can drag and drop, select and move groups of nodes and links at a time, and zoom in and out when dealing with large topologies, improving broad visibility while quickly focusing that visibility as needed.

Visualize and analyze with a heatmap

Switch, link and server resources show up in the topology view with a configurable heat map. Heat maps can be based on switch resource usage, server resource usage or link usage. For example, the indicators can show the heat-map color scale based on link bytes, packets or relative utilization.

The right panel provides further controls. Analysis can be done contextually through the topology: mouse-over tooltips and clicking on resources present detailed, customizable analytics in the right panel as charts, graphs and tables. Contrail Insights provides statistics both in real time and on a historical basis. Using the calendar to navigate back to a certain period is immensely valuable for troubleshooting past issues like microbursts or intermittent hot spots.

Powerful querying and root-cause analysis with drill downs

The top-N view features a larger-scale bar chart or table in the main panel of the user interface, allowing operators to explore multi-parameter queries in more detail and to sort the top N results.

To build a query in this mode, the right panel is where the query filters are set. All query fields are populated with drop-down results, so that the user doesn’t have to guess or remember the resource names. This makes it easy to find traffic in a virtual network or between two points. This is also made easy by selecting points or links in the topology view and then clicking the top-N button from that view to enter the top-N interface with filters preset to what was selected in the topology.

Using the drill-down button in the table of results will recurse through the search results. This can aid in sifting through traffic volumes to find an exact flow or traffic group that may be an issue.  For example, if a link is running hot, the user can query between the source and destination nodes and then drill down through the traffic volumes of culprit overlays, protocols, and flows.

Troubleshooting with path finder

For each search result row in the top-N view, there’s also a simple find-path button to jump into the Contrail Insights path finder tool. This is a simple way to get into the path finder interface with the right-panel filters all preset to match the context of the row in the previous view, but you can also build path finder queries from scratch. 

The path finder tool is ideal for troubleshooting in a visual way, with the heatmap-contextual topology. It displays the path through the network topology for a specific flow or particular set of traffic parameters, and it presents an elegant solution to the problems of overlay-underlay correlation. 

Traffic groups—for example a given 3-tuple of source IP, destination IP, and protocol—can be balanced across multiple paths in a data center fabric. Path finder highlights the breakdown of the amount of traffic per path. In the right panel, paths are broken down across a bar chart, showing their relative share and allowing selection of those bars for individual subgroups of the traffic taking one path through the network. Also in the right panel, there is a line graph showing the traffic bandwidth over the selected time window.
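For readers less familiar with how a fabric pins a traffic group to one of several equal-cost paths, the toy sketch below shows the general flow-hashing idea. It is a generic illustration with made-up addresses, not Contrail's or Junos' actual hashing algorithm.

```python
# Toy illustration of flow-based ECMP path selection: every packet of a given
# traffic group (here a 3-tuple) hashes to the same path, so per-path traffic
# shares are what a tool like path finder breaks down. Not the real fabric hash.
import hashlib

def pick_path(src_ip: str, dst_ip: str, proto: str, num_paths: int) -> int:
    """Deterministically map a flow 3-tuple to one of num_paths equal-cost paths."""
    key = f"{src_ip}|{dst_ip}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

flows = [
    ("10.1.1.10", "10.2.2.20", "tcp"),
    ("10.1.1.11", "10.2.2.20", "tcp"),
    ("10.1.1.12", "10.2.2.21", "udp"),
]
for flow in flows:
    print(flow, "-> path", pick_path(*flow, num_paths=4))
```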

Overlay to underlay correlation

Imagine troubleshooting an incident when an application team is experiencing issues. In such cases, the network engineer knows only the overlay information, such as the workload endpoints (i.e., source and destination IP addresses), and needs to find the path through the underlay network. This requires correlating the overlay networks with the physical underlay data center network.

Path finder shows the topology with the link path highlighted for the end-to-end path, workload to workload, traversing the server hosts and switching fabric. Because Contrail is overlay and underlay aware, it has all the context to filter on the appropriate domain, tenant, or virtual network, as well as the source and destination of the workload IPs. This is easily filtered in the path finder right panel to reveal the path through the fabric topology shown in the main panel. When the pressure is on for network engineers to show network innocence or find a problem, path finder is a leap forward in troubleshooting.

Underlay to overlay correlation

In the reverse scenario to the one above, imagine the NetOps team must determine which applications are using the most bandwidth between two points in the physical network. The network engineer knows only the underlay physical switch IP addresses or interfaces and would like to know the top workloads whose overlay traffic uses the path between those two points.

From the top-N view, the user can select the overlay source and destination, along with other fields of interest, to present in the results. Then, in the right panel as query parameters, the user sets the filter to match the underlay source and destination switches or interfaces. The table view, or particularly its bar-chart view, shows the distribution of top overlay flows between the two switches. Now, to illustrate the result in a topology view, the user simply clicks the flow result’s find-path button in the given row. Presto! Contrail will render the path finder view for the end-to-end flow, clearly illustrating which part of the switch fabric the traffic is taking.

Resource consumption for a given tenant’s virtual network


In this use case, a data center operator wants to know the server and network resource consumption across a tenant, or to drill down into more specific consumption at particular points.

Starting in the topology view, the user can set up the heatmap configuration for a given time range and filter on just one tenant at the level of the source virtual network field. The topology heatmap highlights will activate for all links, servers and network nodes participating in that virtual network. Hovering the mouse over the highlighted resources shows a quick tooltip view of the resource consumption in the context of that single virtual network. To drill down further, simply click on any resource in the map and the right panel will present the default charts and tables, which can be reconfigured to suit the search. For more detailed analysis, the user can contextually launch into the top-N and path finder tools.

Incredible. Insightful. 

By now, this illustrative blog has given you a good taste of the power of Contrail Insights.

If you’re joining us at NXTWORK this week, be sure to check out the breakout session on “Insights and Operational Simplicity” and the demos in Enterprise Multicloud kiosks. You can also binge Contrail demos to your heart’s content in our YouTube playlist on Contrail Enterprise Multicloud. When you’re ready to judge for yourself, ask your Juniper account team or partner for a demo.