Virtual Thoughts

Virtualisation, Storage and various other ramblings.


Introducing VMware Kubernetes Engine

On the 26th of June 2018, VMware publicly announced VKE – VMware Kubernetes Engine – in beta (with GA planned for later this year). For me, the development of this solution flew under the radar, and its subsequent release came as quite a surprise – albeit quite a good one. So, where exactly does this solution fit with other Kubernetes-based solutions that currently exist?

VKE Overview

VKE sits within VMware’s portfolio of cloud-native solutions and is pitched as a fully managed, Kubernetes-as-a-service offering. We therefore have multiple ways to consume Kubernetes resources from the VMware ecosystem, depicted in the diagram below.

 

This prompts some customers to ask: why should I pick VKE over PKS, or vice versa? From a high level, some of the differences are listed below:

                           PKS                                   VKE
Management Responsibility  Customer/Enterprise                   Fully Managed
Consumption Model          Install, Configure, Manage, Consume   Consume
Residence                  Public and Private Cloud              Public Cloud only

 

What we can ascertain here is that VKE is designed to abstract away all the infrastructure components that are required for an operational Kubernetes deployment. As a reminder, PKS is composed of:

  • PKS Control Plane
  • Kubernetes core
  • BOSH
  • Harbor
  • GCP Service Broker
  • NSX-T

This is quite a lot to manage and maintain. VKE, however, takes away the requirement for us, the customers, to manage such entities, and simply provides a Kubernetes endpoint for us to consume. Networking, storage and other aspects are abstracted. Of course, there are use cases for both VKE and PKS; VKE is not looking to be a replacement for PKS.

 

How does it work?

Under the hood, VKE is deployed on top of AWS (which recently announced EKS, Amazon’s own managed Kubernetes-as-a-service platform) but, in fitting with VMware’s ethos of “Any app, any cloud”, this is likely to extend to other cloud platforms – notably Azure. In addition to simply leveraging the AWS backend, VKE adds a few new features:

  • VMware Smart Cluster – Essentially, this is a layer of resource management, designed to automate the allocation of compute resources for maximum efficiency and cost saving, as well as automatic remediation of nodes.
  • Full end-to-end encryption – Designed so that all data, be it in transit or at rest is encrypted by default.
  • Role based access control – Map enterprise users to clusters.
  • Integration with Amazon Services – EC2, Lambda, S3, ES, Machine learning, the list is extensive.
  • Integration with VMware cloud services – Log Insight, Wavefront, Cost Insight, etc.

 

Why shouldn’t I just use EKS if I wanted an AWS-backed Kubernetes instance?

To be honest, this is a good question. If you aren’t currently using any VMware services, then it makes sense to go with EKS. However, existing VMware customers can potentially gain a consistent operational experience across both on-premises and cloud-based resources using familiar tools. Plus, when VKE opens up to other cloud providers, this will add tremendous agility to the placement of Kubernetes workloads, facilitating a true multi-cloud experience.

This service is obviously very new, and no doubt will change a bit up to GA, but it’s definitely worth keeping an eye on, considering the growing adoption of Kubernetes in general.

Hybrid Cloud monitoring with VMware vRealize Operations

Applications and the underlying infrastructure, be it public, private or hybrid cloud, are becoming increasingly sophisticated, and the tools we use to monitor and observe these environments must keep pace. In this blog post, we look at vRealize Operations and how it can be a facilitator of true hybrid cloud monitoring.

What is vRealize Operations?

vRealize Operations forms part of the overall vRealize suite from VMware – a collection of products targeted at cloud management and automation. In particular, vRealize Operations, as the name implies, primarily caters to operations management, with full visibility across physical, virtual and cloud-based environments. The anatomy of vRealize Operations is depicted below.

 

Integrated Cloud Operations Console – A single, unified frontend to access, modify and view all related vRealize Operations components.

Integrated Management Disciplines – vRealize Operations has built-in intelligence to assimilate, dissect and report back on a number of key operational metrics pertaining to performance, capacity, planning and more. Essentially, vRealize Operations “learns” about your environment and is able to make recommendations, predictions and much more based on your specific workloads.

Platform Services – vRealize Operations is able to perform a number of platform management disciplines based on your specific environment. As an example, vRealize Operations can automate the addition of virtual machine memory based on monitored load, therefore proactively addressing potential issues before they surface.

Extensibility – Available from the VMware Marketplace, Management Packs extend the functionality of vRealize Operations. Examples include:

  • Microsoft Azure Management Pack from Blue Medora
  • AWS Management Pack from VMware
  • Docker Management Pack from Blue Medora
  • Dell | EMC Management Pack from Blue Medora
  • vRealize Operations Compliance Pack for PCI from VMware

The examples above demonstrate vRealize Operations’ capability to monitor AWS and Azure environments in addition to on-premises workloads, making vRealize Operations a true platform for hybrid cloud monitoring and operations management.

Practical Example – Cluster Monitoring / Troubleshooting

In this example, we leverage one of vRealize Operations’ built-in dashboards to check the performance of a specific cluster. A dashboard, in vRealize Operations terminology, is a collection of objects and their state, represented in a visual fashion.

 

One of the ways vRealize Operations understands the underlying environment is by establishing and mapping dependencies in a logical manner. In this example, we have a top-level datacentre object (ISH) with child objects (clusters and hosts) as its descendants. This dashboard identifies key aspects of the cluster on a single page:

  • Cluster activity / utilisation
  • Health state of associated objects
  • CPU contention information
  • Memory contention information
  • Disk latency information

Without vRealize Operations, it would be common for an administrator to try and collate these metrics manually, looking at individual performance charts, DRS scheduling information and vCenter health alarms. With vRealize Operations, however, this data is collected and centralised for easy consumption.

 

Practical Example – Workload Planning

In this example, we have an upcoming project that we want to forecast into our environment, particularly around disk space demand. We facilitate this by creating a “Project” in vRealize Operations, but before that, let’s look at the project UI in a bit more detail:

 

We can access this section by navigating to Environment > vSphere Object, at which point we can select the resource we’re interested in forecasting against. The chart in the middle projects the disk space demand for this specific vSphere object (a cluster, in this example). Note the upward trend in disk space demand, which is typical of a production environment; however, we are within capacity for the time period specified (90 days).

To add a project, we click the green “plus” icon below the chart:

 

Next, we fill in details pertaining to the demand. In this case, I’m adding demand in the form of 5 virtual machines, and I’m populating the specification of these VMs based on an existing VM in my environment, with an implementation date of June 19th.

 

 

If we add this project to the forecast chart, the chart changes to accommodate this change in our environment:

 

 

By adding this project we have obviously created more demand; consequently, the date on which our disk space will be exhausted has moved closer.

By having this knowledge, we can plan our capacity requirements ahead of time. In this example, I decide to add another project to add resources prior to the commissioning of the aforementioned VMs:

Because we can combine projects into a single chart, we can see based on observed metrics what effect adding demand and capacity to our environment has.

This is one of a vast number of features in vRealize Operations. It can be an incredibly useful tool for a number of reasons: its intelligent analytics, breadth of extensibility options and unified experience make it a compelling platform for modern cloud-based operations.

 

GCP Kubernetes & VMware Wavefront – a practical demonstration

Wavefront

Back in 2017, VMware acquired Wavefront – a US-based company which focuses predominantly on real-time metrics and monitoring across a vast array of platforms and technologies. We have technologies that aid in adopting and promoting cloud-native implementations, but monitoring, in some people’s eyes, can be a bit of an afterthought. Wavefront to the rescue. Having developed some Kubernetes and Docker knowledge myself, it seemed rather fitting to get an example going.

GCP – Creating our Kubernetes cluster

To begin with, we need a Google Cloud project. Log into your GCP account and create one:

Access the Kubernetes Engine:

You may have to wait a few minutes for the Kubernetes engine to initialise. Once initialised, create a new Kubernetes cluster:

 

We have a number of options to define when we create a new Kubernetes cluster:

Note: You are not charged for the master nodes, nor are you responsible for deploying and maintaining them. As this is a hosted solution, Google takes care of this for us. As for the cluster options, we have the following base options to get us up and running, all of which should be pretty self-explanatory.

Name – The name for the cluster.
Description – Optional value.
Location – Determines whether our cluster’s master VMs are localised within a single zone or spread across multiple zones in one region.
Zone/Region – Determines where our cluster’s worker VMs are localised.
Cluster Version – The version of Kubernetes to be deployed in this cluster.
Node Image – We have two choices, either Container-Optimised OS (cos) or Ubuntu.
Size – Number of nodes in our cluster.

One aspect of this wizard I really like is the ability to extract the corresponding REST or CLI command to create the Kubernetes cluster based on the options selected:

 

Click “Create” to initialise the Kubernetes cluster.
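
For reference, the gcloud equivalent that the wizard generates looks something like the following (cluster name, zone and version here are illustrative – yours will reflect whatever was selected in the wizard):

    # Create a 3-node GKE cluster using the Container-Optimised OS image
    gcloud container clusters create demo-cluster \
        --zone europe-west2-a \
        --cluster-version 1.10.5-gke.0 \
        --image-type COS \
        --num-nodes 3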

 

GCP – Deploying a simple application

After waiting a few minutes, our Kubernetes cluster has been created:

To connect to it, we can click the “Connect” button which will give us two options:

At this stage you can deploy your own application but, for me, I deployed a simple application following the instructions located at https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
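
From memory, the tutorial boils down to something like the following (the image path assumes your own GCP project ID, and the cluster name/zone match whatever was created earlier):

    # Fetch kubectl credentials - this is what the "Connect" button generates
    gcloud container clusters get-credentials demo-cluster --zone europe-west2-a

    # Run the sample app and expose it via a load balancer
    kubectl run hello-web --image=gcr.io/my-project/hello-app:v1 --port 8080
    kubectl expose deployment hello-web --type=LoadBalancer --port 80 --target-port 8080

    # Grab the external IP once the load balancer has been provisioned
    kubectl get service hello-web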

 

Wavefront and Kubernetes integration

To get started, we need to deploy the following:

  • Wavefront Proxy
  • Wavefront Proxy Service
  • Heapster (Collector Agent)

The YAML files are located at the following URL: https://longboard.wavefront.com/integration/kubernetes/setup

Note that you’ll need a login to access the above URL. Also, and very cleverly, the generated YAML files contain tokens specific to your account. Therefore, after deploying the YAML files, Wavefront will automagically start collecting stats:
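
For completeness, the deployment itself is just a handful of kubectl commands (the file names below are illustrative – use whichever files the setup page generates for you, as those copies embed your API token):

    # Deploy the proxy, its service, and the Heapster collector
    kubectl create -f wavefront-proxy.yaml
    kubectl create -f wavefront-proxy-service.yaml
    kubectl create -f heapster.yaml

    # Sanity check - the proxy and collector pods should be running
    kubectl get pods --all-namespaces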

 

 

Thoughts on Wavefront

Once I got everything up and running I was pretty much in awe of the sheer depth of what Wavefront has visibility of. From my tiny, insignificant environment I’m able to get extremely detailed metrics and content pertaining to:

  • Clusters
  • Namespaces
  • Nodes
  • Pods
  • Pod Containers

In particular, I was very impressed by how easy it is to get Wavefront to ingest data from the likes of GCP-hosted Kubernetes.

Introducing the vSAN Driver for Docker

The persistent storage requirement for the container ecosystem

When we talk about containers, we generally think about microservices and all things ephemeral. But does this mean that we can’t facilitate stateful workloads leveraging persistent storage? Absolutely not.

In the Docker world, we choose a storage “driver” to back our persistent storage onto. The driver we choose is based on a number of requirements and on which operating system our Docker hosts run. The table below lists the recommended out-of-the-box drivers for Docker Community Edition.

Most of the above are battle-hardened, well-documented drivers, and you can check which one a given host is using directly from the CLI.
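
A minimal sanity check, assuming a Linux host running Docker CE:

    # Query the storage driver currently in use on this Docker host
    docker info --format '{{.Driver}}'
    # Typically prints "overlay2" on modern Linux distributions

But what if we’re running a vSphere-based environment and want to integrate with some vSphere resources?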

vSAN Storage Driver

Docker introduced the Docker Volume Plugin framework. This extended the integration options between Docker and other storage platforms including (but not limited to):

  • Amazon EBS
  • EMC ScaleIO
  • NFS
  • Azure File Services
  • iSCSI
  • VMware based storage
    • vSAN, VMFS

 

The vSAN Storage Driver for Docker has two components:

vSphere Data Volume Driver

This is installed on the ESXi host and primarily handles the VMDK creation that is requested by the underlying container ecosystem. It also keeps track of the mapping between these entities.

 

vSphere Docker Volume Plugin

This is installed on the Docker host and primarily acts as the northbound interface that facilitates requests from users / API / CLI to create persistent storage volumes to be used by containers.

From an architectural perspective it looks like this:

 

Step 1 – The user instantiates a new Docker volume, specifying the appropriate driver (i.e. the vSphere/VMDK driver).

Step 2 – The vSphere Data Volume Driver accepts the request and communicates via the ESXi host to the underlying storage, which can be vSAN, VMFS or a mounted NFS share.
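
In practice, consuming this from the Docker host is just the standard volume workflow. A minimal sketch, based on my reading of the vSphere Docker Volume Service documentation (volume name and size are illustrative):

    # Create a 10GB volume backed by vSphere storage
    # (vSAN, VMFS or NFS, depending on the underlying datastore)
    docker volume create --driver=vsphere --name=MyVolume -o size=10gb

    # Mount it into a container - the backing VMDK is created and
    # attached behind the scenes
    docker run --rm -it -v MyVolume:/data busybox sh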

Why use this?

A distinct advantage of leveraging vSphere-backed storage for containers is how we can utilise native capabilities of the underlying storage infrastructure. For example, if we use vSAN as the backend storage for containers we can leverage:

  • Deduplication
  • Compression
  • Encryption
  • Erasure Coding

Serverless and Containers – from a former ops guy

Post-AWS Summit 2018 Thoughts on Serverless and Containers

I was lucky enough to attend the AWS summit in London in May 2018. It was a first for me, and the experience was pretty awesome. With a veritable smorgasbord of chalk talks, instructor-led demos and vendor presence, there was something for everyone. I gravitated towards the Docker/Lambda sessions, as I had recently picked up learning container technology, which got me thinking: from my (previously ops-centric) perspective, how does container technology compare to the likes of serverless? When would you use one over the other? Whilst on the train home from London, I decided to jot down my notes into this post.

Primer

I’m not a dev, but I have some development background. I got acquainted with C# in the past and wrote a number of applications – probably the most complicated one being a remote data collector for Windows-based machines that extracted data from the WMI (Windows Management Instrumentation) database and presented it in an ASP.NET page. But I’m fully aware things have moved on a lot since then. My career history has predominantly been based on the design, implementation and monitoring of infrastructure.

 

What I like about containers

  • Flexibility – You can pretty much take any existing application and package it into a container image. At this point, it’s portable, lightweight and may not require any change to the app itself.
  • Control – You have extensive control over the platform in which your containers are running, as well as the runtime itself.
  • Scale – Container environments can scale tremendously well and cater for the complete n-tier architecture.
  • Self-Contained – Excuse the pun, but you can encapsulate an application, its microservices, and its dependencies within a single ecosystem.
  • No Vendor Lock-in – Don’t like a particular way a cloud provider is hosting your containers? Simply move them elsewhere.

What I don’t like about containers

  • Can be complex – Orchestration tools such as Kubernetes can generate a bit of a learning curve, especially for non-devs.
  • Requires a change in mindset – Containers should be short-lived and ephemeral – treat them like cattle, not pets. Those who are used to nurturing, patching and tweaking individual VMs will experience a bit of a mindset change.
  • Microsoft has some catching up to do – The smallest Linux container image is a few MB, whereas the smallest Windows image is a cool 1GB or more.

What I like about serverless

  • Abstraction – Zero touch on the infrastructure or runtime.
  • Cost – Can be significantly cheaper than running applications/services within VM’s.
  • Auto Scale – Increase resources with demand, scale back when not required.
  • Quicker time to deployment – Implement services quickly and efficiently.

What I don’t like about serverless

  • At the mercy of the provider – For example, with Lambda you’re at their mercy when it comes to changes or outages with the service.
  • Runtime Limits – A Lambda function has a maximum execution time of 5 minutes, between 128 MB and 3008 MB of memory, and 512 MB of ephemeral disk space. This means that functions that are particularly CPU-intensive may not be well suited.
  • Language Limits – You are limited to writing code for specific runtimes supported by Lambda. For example, the latest version of Node.js that’s supported is 8.10, whereas newer versions have been released. To take advantage of additional features or bug fixes, you have to wait for the provider (AWS in this case) to update accordingly.
  • Latency – Expect invocation latency for functions that have not been executed for a while. This can yield unpredictable time to execute. Therefore, if you have services that are latency-sensitive, serverless may not be the best option.
  • The name – “Serverless” is not server-less. It runs on servers, including containers (!). Personally, I find the naming a misnomer.

 

So, which one is “better”?

I’ve read a lot of blog posts that compare the two – personally, I don’t think they can be compared. There are workloads you can run in containers but not in serverless, and vice versa – they solve different issues and have their own advantages and disadvantages. The deciding factor has to be influenced by exactly what you need to do/run. Ultimately though, from my perspective, it boils down to whether or not you need absolute control over, and access to, the runtime environment. If you don’t, serverless technologies from the likes of Lambda are great. If you need greater control and visibility over how and where your code runs, and in what language/runtime, containers may be better.

Container ecosystems can be pretty self-encapsulated. Lambda, however, works best by acting as a “glue” to bring together other features and resources from the AWS ecosystem into the bigger picture.

It’s probably worth mentioning that when you invoke a Lambda function, behind the scenes a container is spun up to execute your code, adding further weight to the reasoning behind not doing a direct comparison. Lambda actually needs containers to run.

vSphere and Containers part 1 – VIC (VMware Integrated Containers)

In this multi-part series, we evaluate the options available to vSphere users/customers wishing to deploy a native container service into an existing vSphere environment.

Part 1 – VIC (VMware Integrated Containers).

Part 2 – PKS (Pivotal Container Service).

Why should we care about containers?

Containers change the way we fundamentally look at application deployment and development. There was a huge shift in the way we managed platforms when server virtualisation came around – all of a sudden we had greater levels of flexibility, elasticity and redundancy compared to physical implementations. Consequently, the way in which applications were developed and deployed changed. And here we are again with the next step of innovation: technology that is making waves in the industry, changing the way we consume resources.

 

What is VIC?

VIC (or vSphere Integrated Containers) is a native extension to the vSphere platform that facilitates container technology. Because of this tight integration, we’re able to perform actions and activities using the vSphere client and integrate with auxiliary services. VIC is developed in such a way that it presents a Docker-compatible API endpoint, so Ops/Dev staff already familiar with Docker can leverage VIC using the same tools and commands they’re already familiar with.

VIC is a culmination of three technologies:

 

The container engine is the core runtime technology that facilitates containerised applications in a vSphere environment. As previously mentioned, this engine presents a Docker-compatible API for consumption. Tight integration between it and vSphere enables vSphere admins to manage container and VM workloads in a consistent way.

 

 

Harbor is an enterprise-level facilitator of Docker-based image retrieval and distribution. It’s considered an extension of the open-source Docker Distribution, adding features and constructs that are beneficial to the enterprise, including but not limited to: LDAP support, role-based access control, GUI control and much more.

 

 

Admiral is a scalable and lightweight container management platform for managing containers and associated applications. Its primary responsibilities are the automated deployment and lifecycle management of containers.

How VIC works

The management plane of VIC is facilitated by an OVA appliance. Rather than going through the installation steps here, I will simply point in the direction of the (excellent) documentation located at https://vmware.github.io/vic-product/#documentation. At the core, though, we have the following constructs:

  • VIC Appliance – Management plane.
  • Virtual Container Hosts – Infrastructure resource with a Docker endpoint.
  • Registry – Location for Docker-compatible images.

 

Which, from a logical view looks like this:

 

Key observations are:

  • The VCH (Virtual Container Host) isn’t a virtual machine – it’s actually a resource pool. Therefore, I think the best way to describe a VCH is as a logical representation of a pool of resources, including clustering, scheduling, vMotion, HA and other features.
  • When a VCH is created, a VM is created that facilitates the Docker-compatible API endpoint, meaning a standard Docker client can talk to it directly – as sketched below.
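
To illustrate, once a VCH is up, consuming it from a workstation is no different from consuming any other Docker endpoint. A minimal sketch, assuming a hypothetical VCH named vch01 (2376 being the usual Docker TLS port):

    # Point a standard Docker client at the VCH endpoint
    export DOCKER_HOST=tcp://vch01.lab.local:2376

    # Familiar Docker commands now execute against vSphere resources
    docker --tls info
    docker --tls run -d -p 80:80 nginx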

 

Advantages of VIC

So why would any of us consider VIC instead of, for example, standard Docker hosts? Here are a few points I’ve come across:

  1. Native integration into vSphere.
  2. Administrators can secure and manage VM and Container resources in the same way.
  3. Easy integration into other VMware products.
    1. NSX.
    2. VSAN.
    3. vRealize Network Insight.
    4. vRealize Orchestrator.
    5. vRealize Automation.
  4. Eases adoption.
  5. Eases security.
  6. Eases management.

Conclusion

VIC helps bridge the gap between Developers and Administrators when it comes to the world of containers. I would say VIC is still in its infancy in terms of development, but it’s being backed by a great team and I think it’s going to make a compelling option for vSphere customers/users looking to embrace the container world, whilst maintaining a predictable, consistent security and management model.

My Technology Hitlist for 2018

It’s time for me to set a few objectives around which technologies I want to learn more about in 2018. As a reminder to myself, and to try not to lose focus, I’ve compiled them into a blog post.

 

vRealize Automation / Orchestration

I’ve always dabbled in automation and orchestration but never really gone “full on”. I used to write a lot of scripts for migrations and such “back in the day”, so I’ll be looking forward to getting my hands dirty again and messing around with automating and orchestrating… all the things.

 

Docker / Containers

My career pretty much started with mainstream x86 server virtualisation. P2Vs were rife, and everyone was losing their minds over technologies such as vMotion. The industry has changed now, and I personally feel the next wave of change in the way we manage applications and the underlying operating system libraries is containers. Docker is leading the charge, and this ties nicely with automation and orchestration. VMware Integrated Containers intrigues me as well, as it bridges the gap for ops teams that are used to and familiar with vSphere, but are experiencing ever more demand to provision, manage, secure and automate containers.

Containers are nothing new and are currently loved by the likes of developers, but from what I’ve read/heard/seen, typical enterprises are approaching with caution.

Kubernetes

Pretty much a follow on from containers. This, in my opinion, is the key driver to take [insert container engine of choice] to prime time, typical enterprise consumption. We all know the likes of Netflix, Facebook, Google are already using containers en masse and with an eye-watering amount of orchestration behind it, but I personally think the uptake from typical enterprises is a lot slower, particularly outside of Dev/Test but we’re getting there.

 

NSX-T

Fitting in with VMware’s ethos of any cloud, any device, anywhere, NSX-T sits as the hypervisor-agnostic SDN solution. Having already dabbled quite a lot in NSX-V, I’d like to know more about NSX-T and the auxiliary technologies.

 

Google Cloud

Although some would consider it late to the game, I think Google Cloud has potential. I’m already familiar with AWS, and I think it’s a good idea to learn another cloud technology, so GCP it is.

Why not Azure? I’m just not a Microsoft person anymore. Years ago (before Azure was even a thing) I used to be all over Windows Server, AD, Exchange, Group Policy, IIS, DHCP, WSUS and the like, and took the exams. After years of managing this ecosystem, I lost my enthusiasm for it. Bodged Windows updates, Windows Server “quirks” and the like – I couldn’t deal with it. Therefore, I stay away.

“%20” free space on my NIC card? Good job, Windows.

Heck, if work would permit it, I’d run Linux on my company laptop.

Certifications?

Ideally this year I’d like to get:

  • VCIX-NV
  • Docker Certified Associate
  • Certified Kubernetes Administrator
  • VCP7-CMA

 

Understanding Data Center traffic flow using NSX-V capabilities

The defining characteristic of the Software Defined Data Center (SDDC), as the name implies, is to bring the intelligence and operations of various datacenter functions into software. This type of integration provides us with the ability to gain insights and analytics in a much more controlled, tightly integrated fashion.

VMware NSX is the market leader in network virtualisation. In this post, we have a look at a selection of tools which come with NSX, enabling a greater understanding of exactly what is transpiring in our NSX environment.

 

What we do now

Before diving into NSX-V traffic flow capabilities, let’s take a step back into how some organisations may approach identifying traffic flows currently by taking a simple example issue:

“Server A can’t talk to Server B on port 8443”

In this example, we assume that Server B is listening on port 8443.

Here are a few tools/methods that can be used to help identify the root cause:

 

What these tools/methods have in common are:

 

  • Disjointed – Treated as separate, discrete exercises.
  • Isolated – Requires specific tools/skillsets.
  • Decentralised – Analysis requires output to be cross-referenced and analysed manually.

 

How NSX-V native tools can help

NSX-V provides us with a number of tools to help us gain a deeper understanding of our network environment, as well as providing accelerated troubleshooting and root cause analysis. These can be found via the vCenter client:

 

Flow Monitoring

Flow Monitoring is one of the traffic analysis tools that provides a detailed view of the traffic originating from and terminating at virtual machines. One example use case is determining, in real time, the traffic flows originating from a virtual machine, as the example below demonstrates. No agent or VM configuration is needed, unlike with Wireshark – NSX does this all natively, without any modifications to the VM:

 

The VM in the example above has an IP of 172.16.201.10. We can see that it is making DNS calls out to 8.8.8.8, as well as communicating with another machine (172.16.200.10) over port 8443.

Endpoint Monitoring

 

Endpoint Monitoring enables us to map specific processes inside a guest operating system to network connections that are facilitating this traffic. This is helpful for gaining insight into application-layer details. The examples shown below demonstrate NSX’s ability to identify:

  • The source of the flow (process or application)
  • The source VM
  • The destination (can be any destination)
  • Traffic Type

 

 

 

Traceflow

Traceflow acts as a very useful diagnostic tool. Compared to Flow Monitoring, which takes a real-time view of network traffic, Traceflow allows us to simulate traffic by synthetically “injecting” it into our environment and monitoring the data path. In this example, a test was executed for connectivity from a web server to an app server over port 8443:

 

NSX has informed us that this packet was dropped due to a firewall rule – it also gives us the Rule ID in question. We can click on this link to get more information about the rule:

 

Once this rule has been modified, we can re-run the test, which shows the traffic being successfully delivered to the target VM.

Traceflow also gives us an idea of the journey our packet has travelled. From the above output, we can see that this packet traversed two logical switches, two ESXi hosts and one distributed logical router, and was forwarded through the distributed firewall running on the vNICs of two VMs:

 

 

Packet Capture

The Packet Capture feature in NSX-V enables us to generate packet traces from physical ESXi hosts should we wish to perform any troubleshooting at that level.

These captures are done on a per-host level and we can specify to gather packet captures from one of the following interface types:

  • Physical NIC
  • VMKernel Interface
  • vNIC
  • vDR Port

Or from one of the respective filter types. Once started, NSX will begin gathering packet data. Once the session has been stopped, the captures can be downloaded as .pcap files, which can be opened with a tool such as Wireshark.
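
If you prefer working at that level directly, ESXi’s native pktcap-uw utility offers similar per-interface captures from the host shell; a couple of illustrative invocations (interface names are from my lab):

    # Capture traffic on a physical uplink and write it to a .pcap file
    pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap

    # Capture on a VMkernel interface instead
    pktcap-uw --vmk vmk0 -o /tmp/vmk0.pcap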

 

Conclusion

As organisations are adopting software-defined technologies, the tools and processes we use must also change. Thankfully, NSX-V has a plethora of native capabilities to observe, identify and troubleshoot software-defined networks.

vRealize Log insight – Frequently Overlooked Centralised Log Management

Log analysis has always been a standardised practice for activities such as root cause analysis or advanced troubleshooting. However, ingesting and analysing these logs from different devices, types, locations and formats can be a challenge. In this post, we have a look at vRealize Log Insight and what it can deliver.

 

What is it?

vRealize Log Insight is a product in the vRealize suite specifically designed for heterogeneous and scalable log management across physical, virtual and cloud-based environments. It is designed to be agnostic as to what it ingests logs from, and is therefore a valid candidate in a lot of deployments.

Additionally, any customer with a vCenter Server Standard or above license is entitled to a free 25-OSI pack. OSI stands for “Operating System Instance” and is broadly defined as a managed entity capable of generating logs. For example, a 25-OSI pack license can be used to cover a vCenter server, a number of ESXi hosts and other devices supported either natively or via VMware content packs (with the exception of custom and third-party content packs – standalone vRealize Log Insight is required for this feature).

 

Current Challenges

Modern datacenters and cloud environments are rarely consumed by homogeneous solutions. Customers use a number of different technologies from different vendors and operating systems. With this comes a number of challenges:

 

  • The inconsistent format of log types – vCenter/ESXi uses syslog for logging, Windows has a bespoke method, applications may simply write data to a file in a specific format. This can require a number of tools/skills to read, interpret and action from this data.
  • Silos of information – The decentralised nature of dispersed logging causes this information to be siloed in different areas. This can have an impact on resolution times for incidents and accuracy of root cause analysis.
  • Manual analysis – Simply logging information can be helpful, but the reason why this is required is to perform the analysis. In some environments, this is a manual process performed by a systems administrator.
  • Not scalable – As environments grow larger and more complex, having silos of differing logging types and formats becomes unwieldy to manage.
  • Cost – Man hours used to perform manual analysis can be costly.
  • No Correlation – Siloed logs don’t cater for any correlation of events/activities across an environment. This can greatly impede efforts in performing activities such as root cause analysis.

 

Addressing Challenges With vRealize Log Insight

Below are examples of how vRealize Log Insight can address the aforementioned challenges.

 

  • Create structure from unstructured data – Collected data is automatically analysed and structured for ease of reporting.
  • Centralised logging – vRealize Log Insight centrally collates logs from a number of sources, which can then be accessed through a single management interface (see the ESXi example after this list).
  • Automatic analysis – Logs are collected in near real-time and alerts can be configured to inform users of potential issues and unexpected events.
  • Scalable – Advanced licenses of vRealize Log Insight include additional features such as clustering, high availability, event forwarding and archiving to facilitate a highly scalable, centralised log management solution. vRealize Log Insight is also designed to analyse massive amounts of log data.
  • Cost – Automatic analysis of logs and alerting can assist with reducing man-hours spent manually analysing logs, freeing up IT staff to perform other tasks.
  • Log Correlation – Because logs are centralised and structured events across multiple devices/services can be correlated to identify trends and patterns.
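
As a small illustration of the centralised collection mentioned above, pointing an ESXi host’s syslog output at a Log Insight instance takes a couple of commands (the hostname is illustrative; 514/udp is the conventional syslog port):

    # Forward this host's logs to a central Log Insight collector
    esxcli system syslog config set --loghost='udp://loginsight.lab.local:514'

    # Reload the syslog service to apply the change
    esxcli system syslog reload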

 

Extensibility

vRealize Log Insight’s capabilities can be extended by the use of content packs. Content packs are available from the VMware marketplace (https://marketplace.vmware.com/vsx/?contentType=2)

Content packs are published either by VMware directly or from vendors to support their own devices/solutions. Examples include:

  • Apache Web Service
  • Brocade Devices
  • Cisco Devices
  • Dell | EMC Devices
  • F5 Devices
  • Juniper Devices
  • Microsoft Active Directory
  • Nimble Devices
  • VMware SRM

 

Closing Thoughts

It’s surprising how underused vRealize Log Insight is, considering it comes bundled with any valid vCenter Server Standard or above license. The modular design of the solution, allowing third-party content packs, adds a massive degree of flexibility that is not common amongst other centralised logging tools.

Homelab Networking Refresh

Adios, Netgear router

In hindsight, I shouldn’t have bought a Netgear D7000 router. The reviews were good, but after about 6 months of ownership it decided to exhibit some pretty awful symptoms. One of these was to completely and indiscriminately drop all wireless clients, regardless of device type, range, band or frequency. A reconnect to the wireless network would, weirdly, prompt for the passphrase again – and even after putting in the passphrase (again), it wouldn’t connect. The only way to rectify this was to physically reboot the router.

Netgear support was pretty poor too. The support representative wanted me to downgrade firmware versions just to “see if it helps” despite confirming that this issue is not known in any of the published firmware versions.

Netgear support also suggested I change the 2.4GHz network band. Simply put, they weren’t listening, or couldn’t comprehend what I was saying.

Anyway, rant over. Amazon refunded me the £130 for the Netgear router after I explained the situation regarding Netgear’s poor support. Amazing service, really.

Hola, Ubiquiti

I’ve been eyeing up Ubiquiti for a while now, but never had a reason to get any of their kit until now. With me predominantly working from home when I’m not on the road, and my other half running a business from home, stable connectivity is pretty important to both of us.

The EdgeMAX range from Ubiquiti looked like it fit the bill. I’d say it sits above the consumer-level stuff from the likes of Netgear, Asus, TP-Link etc and just below enterprise level kit from the likes of Juniper, Cisco, etc. Apart from the usual array of features found on devices of this type I particularly wanted to mess around with BGP/OSPF from my homelab when creating networks in VMware NSX.

With that in mind, I cracked open Visio and started diagramming, eventually ending up with the following:

 

I noted the following observations:

  • Ubiquiti EdgeRouters do not have a built-in VDSL modem; therefore, for connections such as mine, a separate modem is required.
  • The EdgeRouter Lite has no hardware switching module, so it should be used purely as a router (makes sense).
  • The EdgeRouter X has a hardware switching module with routing capabilities (but a lower total pps (packets per second)).

Verdict

I managed to set up the pictured environment over the weekend fairly easily. The Ubiquiti software is very modern, slick, easy to use and responsive. Leaps and bounds from what I’ve found on consumer-grade equipment.

I have but one criticism of the Ubiquiti routers: not everything is easily configurable through the UI (yet). From what I’ve read, Ubiquiti is making good progress on this, but I had to resort to the CLI to finish my OSPF peering configuration, as sketched below.
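
For the curious, the CLI is Vyatta-derived and pleasant to work with. A rough sketch of the kind of OSPF configuration involved (router ID and networks are from my lab and purely illustrative):

    configure

    # Advertise the lab networks into OSPF area 0
    set protocols ospf parameters router-id 192.168.1.1
    set protocols ospf area 0 network 192.168.1.0/24
    set protocols ospf area 0 network 172.16.0.0/16

    commit
    save
    exit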

The wireless access point is decent: good coverage, and the ability to provision an isolated guest network with a custom portal is a very nice touch.

Considering the EdgeRouter Lite costs about £80, I personally think it represents good value for money given the feature set it provides. I wouldn’t recommend it for everyday casual network users, but then again, that isn’t Ubiquiti’s market.

The Ubiquiti community is active and very helpful as well.

 

 

 

 
