Virtual Thoughts

Virtualisation, Storage and various other ramblings.

PKS, Harbor and the importance of container registries

What are container registries and why do we need them?

A lot of the time, particularly when individuals and organisations are evaluating, testing and experimenting with containers, they will use public container registries such as Docker Hub. These public registries provide an easy-to-use, simple way to access images. As developers, application owners, system admins and others gain familiarity and experience, additional operational considerations need to be explored, such as:

  • Organisation – How can we organise container images in a meaningful way? Such as by environment state (Prod/Dev/Test) and application type?
  • RBAC – How can we implement role-based access control to a container registry?
  • Vulnerability Scanning – How can we scan container images for known vulnerabilities?
  • Efficiency – How can we centrally manage all our container images and deploy an application from them?
  • Security – Some images need to be kept under lock and key, rather than being hosted on an external service like Docker Hub.

Introducing VMware Harbor Registry

VMware Harbor Registry has been designed to address these considerations as an enterprise-class container registry solution with integration into PKS. In this post, we'll have a quick primer on getting up and running with Harbor in PKS and explore some of its features. To begin, we need to download the Harbor tile for PKS from the Pivotal site and import it into Ops Manager.

After which, the tile will be added (when doing this for the first time it will have an orange bar at the bottom; click the tile to configure it).

The following need to be defined with parameters applicable to your environment:

  • Availability Zone and Networks – This is where the Harbor VM will reside, and the respective configuration will be dependent on your setup.
  • General – Hostname and IP address settings
  • Certificate – Generate a self-signed certificate, or BYOC (bring your own certificate)
  • Credentials – Define the local admin password
  • Authentication – Choose between
    • Internal
    • LDAP
    • UAA in PKS
    • UAA in PAS
  • Container Registry store – Choose where to store container images pushed to Harbor
    • Local file system
    • NFS Server
    • S3 Bucket
    • Google Cloud Storage
  • Clair Proxy Settings
  • Notary settings
  • Resource Config

VMware Harbor Registry – Organisation

Harbor employs the concept of “projects”. Projects are a way of collecting images for a specific application or service. When images are pushed to Harbor, they reside within a project:
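
As a minimal sketch (the registry hostname and project name below are examples), pushing an image into a project follows the standard Docker workflow, with the project name forming part of the image path:

# Log in to the Harbor registry (hostname is an example)
docker login harbor.mydomain.local

# Tag a local image so it targets the "webapp-prod" project, then push it
docker tag nginx:latest harbor.mydomain.local/webapp-prod/nginx:1.0
docker push harbor.mydomain.local/webapp-prod/nginx:1.0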

 

Projects can either be private or public and can be configured during, or after, project creation:

A project comprises a number of components:

 

VMware Harbor Registry – RBAC

In Harbor, we have three role types we can assign to projects:

 


Image source: https://github.com/goharbor/harbor/blob/master/docs/user_guide.md#managing-projects

  • Guest – Read-only access, can pull images
  • Developer – Read/write access, can pull and push images
  • Admin – Read/Write access, as well as project-level activities, such as modifying parameters and permissions.

As a practical example, AD groups can be created to facilitate these roles:

And these AD groups can be mapped to respective permissions within the project

 

This facilitates RBAC within our Harbor environment. Pretty handy.

VMware Harbor Registry – Vulnerability Scanning

The ability to identify, evaluate and remediate vulnerabilities is a standard operation in modern software development and deployment. Thankfully, Harbor addresses this through integration with Clair – an open-source project that addresses the identification, categorisation and analysis of vulnerabilities within containers. As a demonstration, we first need to push an image to Harbor:

After initiating a scan, Harbor can inform us of which vulnerabilities exist within this container image:

We can then explore more details about these vulnerabilities, including when they were fixed:

 

Conclusion

Harbor provides us with an enterprise-level container registry solution. This blog post has only scratched the surface, and with constant development being invested in the project, expect more features and improvements.

 

vRealize Log Insight + PKS Integration

Introduction

In this blog post, we take a look at the integration between PKS and vRealize Log Insight and how this integration benefits the enterprise. As a bit of a recap:

PKS – PKS is a purpose-built, enterprise-level container solution leveraging the capabilities of Kubernetes, BOSH, VMware NSX-T, Harbor and more to deliver a highly available, highly flexible container runtime that operates on a number of cloud platforms, both private and public, including vSphere, AWS, Azure and GCP.

VMware also released VMware Cloud PKS, a fully managed service that combines the technical capabilities of AWS, PKS and Kubernetes, and which can be consumed in a similar fashion to other cloud services.

vRealize Log Insight – vRealize Log Insight is a log management system that’s designed to operate within heterogeneous environments, however, it’s much more than a simple aggregator of logging information. vRealize Log Insight has analytical and trend-identification capabilities which allow operators to gain invaluable insight into the state, health, and events which are transpiring in the environment. vRealize Log Insight works across physical, virtual and cloud environments.

Containers and Coexistence with VMs

VMs have existed for a long time now. Consequently, there are very mature, battle-hardened tools and software which can be used to monitor a plethora of operating systems, software, components and more. Containers, on the other hand, are relatively new in the enterprise. Although there is an overlap, there are significant differences in the way we monitor and collect logs from VMs and containers. How can this be addressed?

There are a number of ways to monitor a container-based environment. Prometheus and Wavefront come to mind, but for environments that already leverage vRealize Log Insight, we can integrate PKS with it to facilitate a single pane of glass view of logging information from VMs and their underlying infrastructure, as well as containers and their underlying infrastructure.

 

What can we expect PKS to send to Log Insight?

At a high level, the Integration between PKS and vRLI will facilitate the propagation of the following logs:

  • BOSH jobs
  • Core Kubernetes processes & nodes
  • Core BOSH processes
  • Kubernetes event logs
  • Individual Pod stdout and stderr

I’ve highlighted the last one as I can see real value in this. Imagine centralising all stdout and stderr from pods, in combination with the analytics and trend-identification capabilities of vRLI – pretty interesting. Of course, we’re not that interested in what individual pods are logging, but if some new code has been pushed out and tens, hundreds or thousands of pods start logging errors, we can identify, categorise and analyse these pretty easily with vRLI.

 

PKS and vRealize Log Insight in action

Talk is cheap, so let’s crack on.

Log into Ops Manager and select the PKS tile

 

Select “Logging” from the left and select “yes” under vRLI integration:

Enter the host and SSL settings where applicable in your environment:

Apply the changes:

If you keep an eye on the logs, references to the vRLI configuration will be shown:

- fluentd_vrli_ca_cert: "<redacted>"
- fluentd_vrli_host: "<redacted>"
+ fluentd_vrli_host: "<redacted>"
- fluentd_vrli_rate_limit_msec: "<redacted>"
+ fluentd_vrli_rate_limit_msec: "<redacted>"
- fluentd_vrli_skip_cert_verify: "<redacted>"
+ fluentd_vrli_skip_cert_verify: "<redacted>"
- fluentd_vrli_use_ssl: "<redacted>"
+ fluentd_vrli_use_ssl: "<redacted>"

Next, deploy a cluster in PKS:
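
For reference, this is roughly what creating a cluster with the PKS CLI looks like (the API endpoint, cluster name, hostname and plan below are placeholders):

# Log in to the PKS API and create a Kubernetes cluster (all values are placeholders)
pks login -a pks-api.mydomain.local -u admin -k
pks create-cluster k8s-cluster-01 --external-hostname k8s-cluster-01.mydomain.local --plan small

# Check the provisioning status of the new cluster
pks cluster k8s-cluster-01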

After which, the following “hosts” can be observed, which, in essence, are a reflection of the services within our Kubernetes cluster:

 

I also created an individual pod, named nginx-sleep. Below are the logs that were ingested for this event:

To validate the stdout capturing, create a pod that writes to stdout:
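
A minimal sketch of such a pod using kubectl (the name, image and message are arbitrary):

# Create a pod that writes a message to stdout every few seconds
kubectl run stdout-test --image=busybox --restart=Never -- /bin/sh -c "while true; do echo hello-from-stdout-test; sleep 5; done"

# Confirm the output locally before checking Log Insight
kubectl logs stdout-test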

 

And check the logs from the pod:

 

And also from Log Insight:

 

Conclusion

vRealize Log Insight provides a compelling platform for log ingestion, and its flexibility to ingest, analyse and interpret logs from physical, virtual and container-based solutions makes it an extremely versatile tool in any admin's repertoire.

vSphere and Containers part 1 – VIC (vSphere Integrated Containers)

In this multi-part series, we evaluate the options available to vSphere users/customers wishing to deploy a native container service into an existing vSphere environment.

Part 1 – VIC (VMware Integrated Containers).

Part 2 – PKS (Pivotal Container Service).

Why should we care about containers?

Containers change the way we fundamentally look at application deployment and development. There was a huge shift in the way we managed platforms when server virtualisation came around – all of a sudden we had greater levels of flexibility, elasticity and redundancy compared to physical implementations. Consequently, the way in which applications were developed and deployed changed. And here we are again, with the next step of innovation: technology that is making waves in the industry and changing the way we consume resources.

 

What is VIC?

VIC (or vSphere Integrated Containers) is a native extension to the vSphere platform that facilitates container technology. Because of this tight integration, we're able to perform actions and activities using the vSphere client and integrate it with auxiliary services. VIC is developed in such a way that it presents a Docker-compatible API endpoint, so Ops/Dev staff already familiar with Docker can leverage VIC using the same tools and commands they're already familiar with.

VIC is a culmination of three technologies:

 

The container engine is the core runtime technology that facilitates containerised applications in a vSphere environment. As previously mentioned, this engine presents a Docker-compatible API for consumption. Tight integration between this and vSphere enables vSphere admins to manage container and VM workloads in a consistent way.

 

 

Harbor is an enterprise-level facilitator of Docker-based image retrieval and distribution. It's considered an extension of the open-source Docker Distribution project, adding features and constructs that are beneficial to the enterprise, including but not limited to: LDAP support, role-based access control, GUI control and much more.

 

 

Admiral is a scalable and lightweight container management platform for managing containers and associated applications. Its primary responsibilities are around the automated deployment and lifecycle management of containers.

How VIC works

The management plane of VIC is facilitated by an OVA appliance. Rather than going through the installation steps here, I will simply point in the direction of the (excellent) documentation located at https://vmware.github.io/vic-product/#documentation. At the core, though, we have the following constructs:

  • VIC Appliance – Management plane.
  • Virtual Container Hosts – Infrastructure resource with a Docker endpoint.
  • Registry – Location for Docker-compatible images.

 

Which, from a logical view looks like this:

 

Key observations are:

  • The VCH (Virtual Container Host) isn't a virtual machine; it's actually a resource pool. Therefore, I think the best way to describe a VCH is as a logical representation of a pool of resources, including clustering, scheduling, vMotion, HA, and other features.
  • When a VCH is created, a VM is created that facilitates the Docker-compatible API endpoint.
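
Because the VCH exposes a Docker-compatible API, consuming it is simply a case of pointing a standard Docker client at that endpoint. A rough sketch (the VCH address and port are examples, and the TLS options will depend on how the VCH was deployed):

# Query the VCH endpoint with a standard Docker client (address/port are examples)
docker -H vch01.mydomain.local:2376 --tls info

# Run a container through the VCH, exactly as you would against a regular Docker host
docker -H vch01.mydomain.local:2376 --tls run -d --name web nginx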

 

Advantages of VIC

So why would any of us consider VIC instead of, for example, standard Docker hosts? Here are a few points I’ve come across:

  1. Native integration into vSphere.
  2. Administrators can secure and manage VM and container resources in the same way.
  3. Easy integration into other VMware products.
    1. NSX.
    2. VSAN.
    3. vRealize Network Insight.
    4. vRealize Orchestrator.
    5. vRealize Automation.
  4. Eases adoption.
  5. Eases security.
  6. Eases management.

Conclusion

VIC helps bridge the gap between Developers and Administrators when it comes to the world of containers. I would say VIC is still in its infancy in terms of development, but it’s being backed by a great team and I think it’s going to make a compelling option for vSphere customers/users looking to embrace the container world, whilst maintaining a predictable, consistent security and management model.

vRealize Log Insight – Frequently Overlooked Centralised Log Management

Log analysis has long been standard practice for activities such as root cause analysis and advanced troubleshooting. However, ingesting and analysing these logs from different devices, types, locations and formats can be a challenge. In this post, we have a look at vRealize Log Insight and what it can deliver.

 

What is it?

vRealize Log Insight is a product in the vRealize suite specifically designed for heterogeneous and scalable log management across physical, virtual and cloud-based environments. It is designed to be agnostic about what it can ingest logs from and is therefore a valid candidate in a lot of deployments.
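
As a simple example of that flexibility, pointing an ESXi host's syslog output at a Log Insight instance takes only a few lines of PowerCLI (the FQDNs below are examples; Log Insight also offers agents and content-pack-specific collection methods):

# Forward each host's syslog stream to Log Insight (FQDNs are examples)
Connect-VIServer -Server vcenter.mydomain.local

foreach ($esx in Get-VMHost)
{
    Set-VMHostSysLogServer -VMHost $esx -SysLogServer "loginsight.mydomain.local" -SysLogServerPort 514
    # Ensure the outbound syslog firewall rule is enabled on the host
    Get-VMHostFirewallException -VMHost $esx -Name "syslog" | Set-VMHostFirewallException -Enabled $true
}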

Additionally, any customer with a vCenter Server Standard or above license is entitled to a free 25 OSI pack. OSI is known as “Operating System Instance” and is broadly defined as a managed entity which is capable of generating logs. For example, a 25 OSI pack license can be used to cover a vCenter server, a number of ESXi hosts and other devices covered either natively or via VMware Content Packs (with the exception of Custom and 3rd party content packs – standalone vRealize Log Insight is required for this feature).

 

Current Challenges

Modern datacenters and cloud environments are rarely consumed by homogeneous solutions. Customers use a number of different technologies from different vendors and operating systems. With this comes a number of challenges:

 

  • The inconsistent format of log types – vCenter/ESXi use syslog for logging, Windows has a bespoke method, and applications may simply write data to a file in a specific format. This can require a number of tools/skills to read, interpret and action this data.
  • Silos of information – The decentralised nature of dispersed logging causes this information to be siloed in different areas. This can have an impact on resolution times for incidents and accuracy of root cause analysis.
  • Manual analysis – Simply logging information can be helpful, but the reason it is collected is to perform analysis on it. In some environments, this is a manual process performed by a systems administrator.
  • Not scalable – As environments grow larger and more complex, having silos of differing logging types and formats becomes unwieldy to manage.
  • Cost – Man hours used to perform manual analysis can be costly.
  • No correlation – Siloed logs don't cater for any correlation of events/activities across an environment. This can greatly impede efforts in performing activities such as root cause analysis.

 

Addressing Challenges With vRealize Log Insight

Below are examples of how vRealize Log Insight can address the aforementioned challenges.

 

  • Create structure from unstructured data – Collected data is automatically analysed and structured for ease of reporting.
  • Centralised logging – vRealize Log Insight centrally collates logs from a number of sources which can then be accessed through a single management interface.
  • Automatic analysis – Logs are collected in near real-time and alerts can be configured to inform users of potential issues and unexpected events.
  • Scalable – Advanced licenses of vRealize Log Insight include additional features such as clustering, high availability, event forwarding and archiving to facilitate a highly scalable, centralised log management solution. vRealize Log Insight is also designed to analyse massive amounts of log data.
  • Cost – Automatic analysis of logs and alerting can assist with reducing man-hours spent manually analysing logs, freeing up IT staff to perform other tasks.
  • Log correlation – Because logs are centralised and structured, events across multiple devices/services can be correlated to identify trends and patterns.

 

Extensibility

vRealize Log Insight's capabilities can be extended by the use of content packs. Content packs are available from the VMware marketplace (https://marketplace.vmware.com/vsx/?contentType=2).

Content packs are published either by VMware directly or from vendors to support their own devices/solutions. Examples include:

  • Apache Web Service
  • Brocade Devices
  • Cisco Devices
  • Dell | EMC Devices
  • F5 Devices
  • Juniper Devices
  • Microsoft Active Directory
  • Nimble Devices
  • VMware SRM

 

Closing Thoughts

It's surprising how underused vRealize Log Insight is, considering the entitlement that comes bundled with any valid vCenter Server Standard or above license. The modular design of the solution, allowing third-party content packs, adds a massive degree of flexibility which is not common amongst other centralised logging tools.

Homelab Networking Refresh

Adios, Netgear router

In hindsight, I shouldn't have bought a Netgear D7000 router. The reviews were good, but after about 6 months of ownership it decided to exhibit some pretty awful symptoms, one of which was completely and indiscriminately dropping all wireless clients, regardless of device type, range, band or frequency. A reconnect to the wireless network would, weirdly, prompt for the passphrase again. Even after putting in the passphrase (again) it wouldn't connect. The only way to rectify this was to physically reboot the router.

Netgear support was pretty poor too. The support representative wanted me to downgrade firmware versions just to "see if it helps", despite confirming that the issue was not known in any of the published firmware versions.

Netgear support also suggested I change the 2.4GHz network band. Simply put, they weren't listening or couldn't comprehend what I was saying.

Anyway, rant over. Amazon refunded me the £130 for the Netgear router after I explained the situation regarding Netgear's poor support. Amazing service, really.

Hola, Ubiquiti

I've been eyeing up Ubiquiti for a while now but never had a reason to get any of their kit until now. With me predominantly working from home when I'm not on the road, and my other half running a business from home, stable connectivity is pretty important to both of us.

The EdgeMAX range from Ubiquiti looked like it fit the bill. I'd say it sits above the consumer-level stuff from the likes of Netgear, Asus, TP-Link etc. and just below enterprise-level kit from the likes of Juniper, Cisco, etc. Apart from the usual array of features found on devices of this type, I particularly wanted to mess around with BGP/OSPF from my homelab when creating networks in VMware NSX.

With that in mind, I cracked open Visio and started diagramming, eventually ending up with the following:

 

I noted the following observations:

  • Ubiquiti EdgeRouters do not have a built-in VDSL modem; therefore, for connections such as mine, a separate modem is required.
  • The EdgeRouter Lite has no hardware switching module, therefore it should be used purely as a router (makes sense)
  • The EdgeRouter X has a hardware switching module with routing capabilities (but a lower total PPS (packets per second))

Verdict

I managed to set up the pictured environment over the weekend fairly easily. The Ubiquiti software is very modern, slick, easy to use and responsive. Leaps and bounds from what I’ve found on consumer-grade equipment.

I have but one criticism of the Ubiquiti routers, and that is that not everything is easily configurable through the UI (yet). From what I've read, Ubiquiti are making good progress with this, but I had to resort to the CLI to finish my OSPF peering configuration.

The wireless access point is decent: good coverage, and the ability to provision an isolated guest network with a custom portal is a very nice touch.

Considering the EdgeRouter Lite costs about £80, I personally think it represents good value for money given the feature set it provides. I wouldn't recommend it for everyday, casual network users, but then again, that isn't Ubiquiti's market.

The Ubiquiti community is active and very helpful as well.

Embracing the SDDC with NSX-V automation

The Software Defined Data Center (SDDC for short) has become a widely adopted and embraced model for modern datacentre implementations. Conveying the benefits of the SDDC, particularly the non-technical aspects, can be a challenge. In this blog post, we take a practical example of a single activity we can automate in NSX and the benefits that come from it, both technical and non-technical.

The NSX API

An API (Application Programming Interface), in simple terms, is an intermediary that allows two applications to communicate with each other via a common syntax. Although we may not be aware of it, it's likely we use APIs every day when we use applications such as Facebook, LinkedIn, vSphere and countless others. For example, when you create a logical switch in the vSphere web client, behind the scenes an API call is made to the NSX manager to facilitate that request.

The NSX API is based on REST which leverages HTTPS requests to GET, PUT, POST and DELETE data from the NSX ecosystem:

  • GET – Retrieve an entity
  • PUT – Update an entity
  • POST – Create an entity
  • DELETE – Remove an entity

An entity can be a variety of NSX objects such as Logical Switches, Distributed Routers, Edge Gateways, Firewall rules, etc.
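
As a rough sketch of what a raw call looks like (the NSX Manager address, credentials and transport zone ID below are placeholders, and the exact URI may vary between NSX-V versions), creating a logical switch is a POST of a small XML payload:

# Create a logical switch via the NSX-V REST API (manager address, credentials and scope ID are placeholders)
$nsxManager = "nsxmanager.mydomain.local"
$authBytes  = [Text.Encoding]::ASCII.GetBytes("admin:SomePassword")
$headers    = @{ Authorization = "Basic " + [Convert]::ToBase64String($authBytes) }

$body = @"
<virtualWireCreateSpec>
  <name>Web-Tier-LS</name>
  <description>Created via the NSX REST API</description>
  <tenantId>virtual wire tenant</tenantId>
</virtualWireCreateSpec>
"@

Invoke-RestMethod -Uri "https://$nsxManager/api/2.0/vdn/scopes/vdnscope-1/virtualwires" -Method Post -Headers $headers -ContentType "application/xml" -Body $body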

 

Options for working with the NSX API

Several avenues exist for working with the REST API, each with its own advantages and disadvantages:

  • Direct API calls via REST client – These can be made via clients such as Postman. These calls are static and are therefore suitable for one-off requests.

 

 

  • PowerNSX – PowerNSX is a PowerShell module that enables the consumption of the API via PowerShell cmdlets (see the sketch after this list). It's an open-source community project but is not supported by VMware. Also, not all API calls are currently exposed as cmdlets.
  • API calls via code – API calls can be made from a variety of programming libraries (PowerShell, C#, Java, etc.) which add functionality through an element of dynamic input. We use this as an example in this blog post.
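
For comparison, a minimal PowerNSX sketch of the same logical switch creation (the server, transport zone and switch names are examples, and cmdlet parameters may differ between PowerNSX releases):

# Connect PowerNSX to NSX Manager via the registered vCenter (names are examples)
Connect-NsxServer -vCenterServer vcenter.mydomain.local

# Create a logical switch in an existing transport zone
Get-NsxTransportZone -Name "TZ-01" | New-NsxLogicalSwitch -Name "Web-Tier-LS" -Description "Created via PowerNSX"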

 

Practical example – Creating new networks in a legacy virtualised compute environment

To illustrate the power of automating NSX, let's take an example activity and break it down into its respective tasks. In this example, we want to create an N-tier network (i.e. a network comprising Web, App and DB tiers which are routable and sit behind a perimeter firewall).

 

Depending on factors such as the number of vendors used and the structure of the IT team, we can see that executing a relatively simple task of creating an N-Tier routable, secure network for the purposes of consumption could:

  • Involve multiple teams (vSphere admin/network admin/security admin)
  • Involve multiple tools (in this example tools from vSphere, Cisco, Juniper and Sonicwall)

This operational complexity can hinder the speed and agility of a business due to factors such as:

  • Multiple teams need to collaborate. Collaboration between vSphere / Network / Security teams can be time-consuming
  • Multiple tools/skillsets required. In the example above skills pertaining to Sonicwall, Juniper, Cisco and vSphere are required to create a secure network topology

 

Practical example – Automating in NSX

To demonstrate the automation capabilities designed to address this example, a PowerShell script was created to make API calls directly to NSX. The advantages of doing this are:

  • API calls are supported by VMware.
  • The entire API ecosystem is exposed for consumption.
  • PowerShell can prompt the user for information, which is then used to dynamically populate API requests.
  • All tiers of the network are created and managed by a single management plane.

 

This script starts with the layer 2 logical switches and then moves up the networking stack configuring the layer 3 and perimeter elements of this network:

 

For each logical network we prompt the user for the following:

  • Name – What we want to call the logical network
  • Network Range – The intended network range for this network. This is used to determine the DLR’s interface on it
  • Network Description – What we provide as the description
  • Network Type – Simply put, Uplinks are used for peering (North/South) traffic. We need one uplink network to facilitate the peering between the DLR and ESG

 

Once the user has put in the required networks, API calls are executed from the Powershell script to create the networks:

Next, the user is prompted for the DLR and ESG names:

 

This information is used to construct the Distributed Logical Router (DLR) and Edge Services Gateway (ESG) devices via API calls:

At this stage, the following has been created:

 

 

At which point, the script outputs the total amount of time elapsed to construct this topology in NSX (including the time taken for the user to input the data).

In this example it took 291.7 seconds (4.9 minutes) to construct the following:

  • Create 3 internal logical switches (for VM traffic)
  • Create 1 uplink logical switch (for BGP peering)
  • Create 1 DLR and configure interfaces on each internal logical switch (default gateway)
  • Create 1 ESG and configure interface for BGP peering
  • Configure BGP dynamic routing

Not bad at all.

To validate the routing, we can simply log on to the ESG and check its routing table:

We can see the ESG has learnt (by BGP) the networks that reside on our DLR.

This is one of the almost endless examples of exposing and leveraging the NSX API.

For anyone interested in the Powershell script – I intend to upload the code once I’ve added some decent input validation.

VMware Cloud on AWS

Perhaps one of VMware's most significant announcements made in recent times is the partnership with Amazon Web Services (AWS), including the ability to leverage AWS's infrastructure to provision vSphere-managed resources. What exactly does this mean, and what benefits could this bring to the enterprise?

 

Collaboration of Two Giants

To understand and appreciate the significance of this partnership, we must acknowledge the position and perspective of each.

VMware:
  • Market leader in private cloud offerings
  • Deep roots and history in virtualisation
  • Expanding portfolio

AWS:
  • Market leader in public cloud offerings
  • Broad and expanding range of services
  • Global scale

 

VMware has a significant presence in the on-premise datacentre, in contrast to AWS, which focuses entirely on the public cloud space. VMware Cloud on AWS sits in the middle as a true hybrid cloud solution, leveraging the established, industry-leading technologies and software developed by VMware together with the infrastructure capabilities provided by AWS.

 

How it Works

In a typical setup, an established vSphere private cloud already exists. Customers can then provision an AWS-backed vSphere environment using a modern HTML5-based client. The environment created by AWS leverages the following technologies:

  • ESXi on bare metal servers
  • vSphere management
  • vSAN
  • NSX

 

The connection between the on-premise and AWS hosted vSphere environments is facilitated by Hybrid Linked Mode. This allows customers to manage both on-premise and AWS hosted environments through a single management interface. This also allows us to, for example, migrate and manage workloads between the two.

Advantages

Existing vSphere customers may already be leveraging AWS resources in a different way; however, there are significant advantages associated with implementing VMware Cloud on AWS, such as:

Delivered as a service from VMware – The entire ecosystem of this hybrid cloud solution is sold, delivered and supported by VMware. This simplifies support, management and billing, amongst other activities such as patching and updates.

Consistent operational model – Existing private cloud users use the same tools, processes and technologies to manage the solution. This includes integration with other VMware products included in the vRealize product suite.

Enterprise-grade capabilities – This solution leverages the extensive AWS hardware capabilities, which include the latest in low-latency IO storage based on solid-state drive technology and high-performance networking.

Access to native AWS resources – This solution can be further expanded to access and consume native AWS technologies pertaining to databases, AI, analytics and more.

Use Cases

VMware Cloud on AWS has several applications, including (but not limited to) the following:

 

Datacenter Extension

 

Because of how rapidly an AWS-backed software-defined datacenter can be provisioned, expanding an on-premise environment becomes a trivial task. Once completed, these additional resources can be consumed to meet various business and technical demands.

 

 

 

Dev / Test

 

Adding additional capabilities to an existing private cloud environment enables the division of duties and responsibilities, allowing organisations to separate out specific environments for the purposes of security, delegation and management.

Application Migration

 

 

VMware Cloud on AWS enables us to migrate N-tier applications to an AWS-backed vSphere environment without the need to re-architect or convert our virtual machine/compute and storage constructs. This is because we're using the same software-defined data centre technologies across our entire estate (vSphere, NSX and vSAN).

Conclusion

There are a number of viable applications for VMware Cloud on AWS, and it's a very strong offering considering the pedigree of both VMware and AWS. Combining the strengths of each creates a very compelling option for anyone considering a hybrid cloud adoption strategy.

To learn more about VMware Cloud on AWS please review the following:

https://aws.amazon.com/vmware/

https://cloud.vmware.com/vmc-aws

 

Homelab – Nested ESXi with NSX and vSAN

The Rebuild

I decided to trash and rebuild my nested homelab to include both NSX and vSAN. When I attempted to prepare the hosts for NSX, I received the following message:

 

 

I've not had this issue before, so I conducted some research. I found a lot of blog posts, comments and KB articles linking this issue to VUM. For example: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2053782

However, after following the instructions, I couldn't set the "bypassVumEnabled" setting. Nor could I manually install the NSX VIBs; I was presented with the following:

 

[root@ESXi4:~] esxcli software vib install -v /vmfs/volumes/vsanDatastore/VIB/vib20/esx-nsxv/VMware_bootbank_esx-nsxv_6.5.0-0.0.6244264.vib --force
[LiveInstallationError]
Error in running ['/etc/init.d/vShield-Stateful-Firewall', 'start', 'install']:
Return code: 1
Output: vShield-Stateful-Firewall is not running
watchdog-dfwpktlogs: PID file /var/run/vmware/watchdog-dfwpktlogs.PID does not exist
watchdog-dfwpktlogs: Unable to terminate watchdog: No running watchdog process for dfwpktlogs
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Set memory minlimit for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to set memory reservation for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' released.
Resource pool creation failed. Not starting vShield-Stateful-Firewall

It is not safe to continue. Please reboot the host immediately to discard the unfinished update.
Please refer to the log file for more details.
[root@ESXi4:~]

In particular, I was intrigued by the "Failed to release memory reservation for vsfwd" message. I decided to increase the memory configuration of my ESXi VMs from 6GB to 8GB, and I was then able to prepare the hosts from the UI.

TL;DR – If you're running ESXi 6.5, NSX 6.3.3 and vSAN 6.6.1 and are experiencing issues preparing hosts for NSX, increase the ESXi memory configuration to at least 8GB.
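
For a nested lab, that change is quick to make with PowerCLI while the nested ESXi VMs are powered off (the VM name filter below is an example, so adjust it to match your own naming):

# With the nested ESXi VMs powered off, bump their memory to 8GB (name filter is an example)
Connect-VIServer -Server vcenter.mydomain.local
Get-VM -Name "ESXi*" | Set-VM -MemoryGB 8 -Confirm:$false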

vDS to vSS and back again

Overview

I was recently tasked with migrating a selection of ESXi 5.5 hosts into a new vSphere 6.5 environment. These hosts leveraged Fibre Channel HBAs for block storage and 2x 10GbE interfaces for all other traffic types. I assumed that doing a vDS detach and resync was not the correct approach, even though some people reported success doing it this way. The /r/vmware Reddit community agreed, and I later found a VMware KB article that backs the more widely accepted solution of moving everything to a vSphere Standard Switch first.

Automating the process

There are already several resources on how to do vDS -> vSS migrations, but I fancied trying it myself. I used Virtually Ghetto's script as a foundation for my own but wanted to add a few changes applicable to my specific environment. These included:

  • Populating a vSS dynamically by probing the vDS the host was attached to, including VLAN ID tags
    • Additionally, add a prefix to differentiate between the vSS and vDS portgroups
  • Automating the migration of VM port groups from the vDS to a vSS in a way that would result in no downtime.

Script process

This script performs the migration on a specific host, defined in $vmhost.

  1. Connect to vCenter Server
  2. Create a vSS on the host called “vSwitch_Migration”
  3. Iterate through the vDS port groups and recreate them on the vSS like-for-like, including VLAN ID tagging (where appropriate).
  4. Acquire a list of VMKernel adapters
  5. Move vmnic0 from the vDS to the vSS; at the same time, migrate the VMKernel interfaces
  6. Iterate through all the VMs on the host, reconfiguring each port group so it resides on the vSS
  7. Once all the VMs have migrated, add the second (and final, in my environment) vmnic to the vSS
  8. At this point, nothing specific to this host resides on the vDS, so remove the vDS from the host

If you plan to run these scripts in your environment, test them first in a non-production environment.


Write-Host "Connecting to vCenter Server" -foregroundcolor Green
Connect-VIServer -Server "vCenterServer" -User administrator@vsphere.local -Pass "somepassword" | Out-Null

# Individual ESXi host to migrate from vDS to VSS
$vmhost = "192.168.1.20"
Write-Host "Host selected: " $vmhost -foregroundcolor Green

# Create a new vSS on the host
$vss_name = New-VirtualSwitch -VMHost $vmhost -Name vSwitch_Migration
Write-Host "Created new vSS on host" $vmhost "named" "vSwitch_Migration" -foregroundcolor Green

#VDS to migrate from
$vds_name = "MyvDS"
$vds = Get-VDSwitch -Name $vds_name

#Probe the VDS, get port groups and re-create on VSS
$vds_portgroups = Get-VDPortGroup -VDSwitch $vds_name
foreach ($vds_portgroup in $vds_portgroups)
{
    if ([string]::IsNullOrEmpty($vds_portgroup.vlanconfiguration.vlanid))
    {
        Write-Host "No VLAN Config for" $vds_portgroup.name "found" -foregroundcolor Green
        $PortgroupName = $vds_portgroup.Name
        New-VirtualPortGroup -virtualSwitch $vss_name -name "VSS_$PortgroupName" | Out-Null
    }
    else
    {
        Write-Host "VLAN config present for" $vds_portgroup.name -foregroundcolor Green
        $PortgroupName = $vds_portgroup.Name
        New-VirtualPortGroup -virtualSwitch $vss_name -name "VSS_$PortgroupName" -VLanId $vds_portgroup.vlanconfiguration.vlanid | Out-Null
    }
}

#Create a list of VMKernel adapters
$management_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmotion1_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
$vmotion2_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2"
$vmkernel_list = @($management_vmkernel,$vmotion1_vmkernel,$vmotion2_vmkernel)

#Create mapping for VMKernel -> vss Port Group
$management_vmkernel_portgroup = Get-VirtualPortGroup -name "VSS_Mgmt" -Host $vmhost
$vmotion1_vmkernel_portgroup = Get-VirtualPortGroup -name "VSS_vMotion1" -Host $vmhost
$vmotion2_vmkernel_portgroup = Get-VirtualPortGroup -name "VSS_vMotion2" -Host $vmhost
$pg_array = @($management_vmkernel_portgroup,$vmotion1_vmkernel_portgroup,$vmotion2_vmkernel_portgroup)

#Move 1 uplink to the vss, also move over vmkernel interfaces
Write-Host "Moving vmnic0 from the vDS to VSS including vmkernel interfaces" -foregroundcolor Green
Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic0" -VMHost $vmhost) -VirtualSwitch $vss_name -VMHostVirtualNic $vmkernel_list -VirtualNicPortgroup $pg_array -Confirm:$false

#Moving VMs from vDS to VSS
$vmlist = Get-VM | Where-Object {$_.VMHost.name -eq $vmhost}

foreach ($vm in $vmlist)
{
    # VMs may have more than one NIC
    $vmniclist = Get-NetworkAdapter -vm $vm
    foreach ($vmnic in $vmniclist)
    {
        $newportgroup = "VSS_" + $vmnic.NetworkName
        Write-Host "Changing port group for" $vm.name "from" $vmnic.NetworkName "to" $newportgroup -foregroundcolor Green
        Set-NetworkAdapter -NetworkAdapter $vmnic -NetworkName $newportgroup -Confirm:$false | Out-Null
    }
}

#Moving additional vmnic to vss
Write-Host "All VM's migrated, adding second vmnic to vss" -foregroundcolor Green
Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic1" -VMHost $vmhost) -VirtualSwitch $vss_name -Confirm:$false

#Tidyup - Remove DVS from this host
Write-Host "Removing host from vDS" -foregroundcolor Green
$vds | Remove-VDSwitchVMHost -VMHost $vmhost -Confirm:$false

 

 

The reverse

Although vSphere has some handy tools to migrate hosts, portgroups and networking to a vDS, scripting the reverse didn’t require many changes to the original script:


Write-Host "Connecting to vCenter Server" -foregroundcolor Green
Connect-VIServer -Server "vCenterServer" -User administrator@vsphere.local -Pass "somepassword" | Out-Null

# Individual ESXi host to migrate from vSS to vDS
$vmhost = "192.168.1.20"
Write-Host "Host selected: " $vmhost -foregroundcolor Green

#VDS to migrate to
$vds_name = "MyvDS"
$vds = Get-VDSwitch -Name $vds_name

#Vss to migrate from
$vss_name = "vSwitch_Migration"
$vss = Get-VirtualSwitch -Name $vss_name -VMHost $vmhost

#Add host to vDS but don't add uplinks yet
Write-Host "Adding host to vDS without uplinks" -foregroundcolor Green
Add-VDSwitchVMHost -VMHost $vmhost -VDSwitch $vds

#Create a list of VMKernel adapters
$management_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmotion1_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
$vmotion2_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2"
$vmkernel_list = @($management_vmkernel,$vmotion1_vmkernel,$vmotion2_vmkernel)

#Create mapping for VMKernel -> vds Port Group
$management_vmkernel_portgroup = Get-VDPortgroup -name "Mgmt" -VDSwitch $vds_name
$vmotion1_vmkernel_portgroup = Get-VDPortgroup -name "vMotion0" -VDSwitch $vds_name
$vmotion2_vmkernel_portgroup = Get-VDPortgroup -name "vMotion1" -VDSwitch $vds_name
$vmkernel_portgroup_list = @($management_vmkernel_portgroup,$vmotion1_vmkernel_portgroup,$vmotion2_vmkernel_portgroup)

#Move 1 uplink to the vDS, also move over vmkernel interfaces
Write-Host "Moving vmnic0 from the vSS to vDS including vmkernel interfaces" -foregroundcolor Green
Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic0" -VMHost $vmhost) -DistributedSwitch $vds_name -VMHostVirtualNic $vmkernel_list -VirtualNicPortgroup $vmkernel_portgroup_list -Confirm:$false

#Moving VMs from VSS to vDS
$vmlist = Get-VM | Where-Object {$_.VMHost.name -eq $vmhost}

foreach ($vm in $vmlist)
{
    # VMs may have more than one NIC
    $vmniclist = Get-NetworkAdapter -vm $vm
    foreach ($vmnic in $vmniclist)
    {
        $newportgroup = $vmnic.NetworkName.Replace("VSS_","")
        Write-Host "Changing port group for" $vm.name "from" $vmnic.NetworkName "to" $newportgroup -foregroundcolor Green
        Set-NetworkAdapter -NetworkAdapter $vmnic -Portgroup $newportgroup -Confirm:$false | Out-Null
    }
}

#Moving additional vmnic to vds
Write-Host "All VM's migrated, adding second vmnic to vDS" -foregroundcolor Green
Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic1" -VMHost $vmhost) -DistributedSwitch $vds_name -Confirm:$false

#Tidyup - Remove vSS from this host
Write-Host "Removing VSS from host" -foregroundcolor Green
Remove-VirtualSwitch -VirtualSwitch $vss -Confirm:$false

Intel Skylake/Kaby Lake processors: broken hyper-threading

Overview

Source : https://lists.debian.org/debian-devel/2017/06/msg00308.html

It appears some Intel Xeon CPUs are susceptible to a recently discovered Hyper-Threading bug. However, these are limited to E3 v5/v6-based Xeon systems, which are found mostly in entry-level servers with single-socket implementations. Dual-socket systems currently leverage E5-based Xeons, which don't appear to be affected.

Currently, the easiest way to mitigate this bug is to simply disable Hyper-Threading. The bug also appears to be OS-agnostic.
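
If you manage hosts with PowerCLI, a quick way to see which CPUs you're running and whether hyper-threading is active is something along these lines (a sketch rather than a definitive audit):

# Report CPU model and hyper-threading state for each host in the connected vCenter
Connect-VIServer -Server vcenter.mydomain.local
Get-VMHost | Select-Object Name, ProcessorType, HyperthreadingActive | Format-Table -AutoSize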

Just Servers?

The focus on social media has predominantly been around run-of-the-mill servers; ones you typically purchase from the likes of Dell, HP, etc. However, there could be many bespoke devices that leverage susceptible processors, such as NAS/SAN heads. In the event you find such a device, it is unlikely that HT can simply be disabled, but it is something to be aware of.

List of Intel processors code-named “Skylake”
List of Intel processors code-named “Kaby Lake”

© 2019 Virtual Thoughts
