Virtual Thoughts

Virtualisation, Storage and various other ramblings.


Embracing the SDDC with NSX-V automation

The Software Defined Data Center (SDDC for short) has become a widely adopted model for modern datacentre implementations. Conveying the benefits of the SDDC, particularly the non-technical ones, can be a challenge. In this blog post we take a single activity we can automate in NSX as a practical example and look at the benefits that come from it, both technical and non-technical.

The NSX API

An API (Application Programming Interface) is, in simple terms, an intermediary that allows two applications to communicate with each other via a common syntax. Although we may not be aware of it, we likely use APIs every day through applications such as Facebook, LinkedIn, vSphere and countless others. For example, when you create a logical switch in the vSphere Web Client, behind the scenes an API call is made to the NSX Manager to fulfil that request.

The NSX API is based on REST, which leverages HTTPS requests to GET, POST, PUT and DELETE data from the NSX ecosystem:

  • GET – Retrieve an entity
  • POST – Create an entity
  • PUT – Update an entity
  • DELETE – Remove an entity

An entity can be a variety of NSX objects such as Logical Switches, Distributed Routers, Edge Gateways, Firewall rules, etc.
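To make this tangible, here is a minimal sketch of a raw API call from PowerShell using Invoke-RestMethod. The NSX Manager address is a placeholder from my lab, and the endpoint shown (/api/2.0/vdn/virtualwires, which lists logical switches in NSX-V 6.x) should be checked against the API guide for your version:

# Lab-only sketch: query the NSX-V API for all logical switches
$nsxManager = "nsxmanager.lab.local"          # placeholder NSX Manager FQDN
$cred = Get-Credential                        # NSX admin credentials
$pair = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# Lab NSX Managers typically present a self-signed certificate
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# GET - retrieve all logical switches (returned as XML)
[xml]$response = Invoke-RestMethod -Uri "https://$nsxManager/api/2.0/vdn/virtualwires" -Method Get -Headers $headers
$response.virtualWires.dataPage.virtualWire | Select-Object name, objectId   # property path may vary by NSX version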

 

Options for working with the NSX API

Several avenues exist for working with the REST API, each having its own advantages and disadvantages:

  • Direct API calls via a REST client – These can be made via clients such as Postman. These calls are static and are therefore suited to one-off requests.
  • PowerNSX – PowerNSX is a PowerShell module that enables the consumption of API calls via PowerShell cmdlets. It’s an open source community project and is not supported by VMware; also, not all API calls are currently exposed as cmdlets (a short sketch follows this list).
  • API calls via code – API calls can be made from a variety of programming languages (PowerShell, C#, Java, etc.), which adds flexibility through an element of dynamic input. We use this approach as the example in this blog post.
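As a quick illustration of the PowerNSX route, creating a logical switch looks roughly like the following (cmdlet names as per the PowerNSX project documentation; treat this as a sketch rather than a tested script, and the vCenter name is a placeholder):

# Install the community module from the PowerShell Gallery and connect via vCenter
Install-Module -Name PowerNSX -Scope CurrentUser
Connect-NsxServer -vCenterServer vcsa.lab.local     # placeholder vCenter; prompts for credentials

# Create a logical switch in the first transport zone found
Get-NsxTransportZone | Select-Object -First 1 | New-NsxLogicalSwitch -Name "Web-LS" -Description "Web tier"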

 

Practical example – Creating new networks in a legacy virtualised compute environment

To illustrate the power of automating NSX via its API, let’s take an example activity and break it down into its respective tasks. In this example, we want to create an N-tier network (i.e. a network comprising Web, App and DB tiers, which are routable and sit behind a perimeter firewall).

 

Depending on factors such as the number of vendors used and the structure of the IT team, we can see that executing the relatively simple task of creating a routable, secure N-tier network ready for consumption could:

  • Involve multiple teams (vSphere admin / network admin / security admin)
  • Involve multiple tools (in this example tools from vSphere, Cisco, Juniper and Sonicwall)

This operational complexity can hinder the speed and agility of a business due to factors such as:

  • Multiple teams need to collaborate. Collaboration between vSphere / Network / Security teams can be time-consuming
  • Multiple tools/skillsets required. In the example above skills pertaining to Sonicwall, Juniper, Cisco and vSphere are required to create a secure network topology

 

Practical example – Automating in NSX

To demonstrate the automation capabilities that address this example, a PowerShell script was created to make API calls directly to NSX. The advantages of doing this are:

  • API calls are supported by VMware.
  • The entire API ecosystem is exposed for consumption.
  • PowerShell can prompt the user for information, which is then used to dynamically populate API requests.
  • All tiers of the network are created and managed by a single management plane.

 

The script starts with the layer 2 logical switches and then moves up the networking stack, configuring the layer 3 and perimeter elements of the network.

 

For each logical network we prompt the user for the following:

  • Name – What we want to call the logical network
  • Network Range – The intended network range for this network. This is used to determine the address of the DLR’s interface on it
  • Network Description – What we provide as the description
  • Network Type – Simply put, uplinks are used for peering (North/South) traffic. We need one uplink network to facilitate the peering between the DLR and the ESG

 

Once the user has entered the required networks, API calls are executed from the PowerShell script to create them.
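The pattern used for each network is essentially prompt, build the XML payload, then POST it. A condensed sketch (not the full script) is below; it re-uses the $nsxManager and $headers variables from the earlier sketch and assumes a transport zone with the ID vdnscope-1, which you would normally look up first:

# Prompt the user for the logical network details
$lsName = Read-Host "Logical network name"
$lsDesc = Read-Host "Logical network description"

# Build the creation payload (NSX-V virtualWireCreateSpec)
$body = @"
<virtualWireCreateSpec>
  <name>$lsName</name>
  <description>$lsDesc</description>
  <tenantId>lab</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
"@

# POST - create the logical switch in the transport zone; the response contains the new object ID
$uri = "https://$nsxManager/api/2.0/vdn/scopes/vdnscope-1/virtualwires"
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $body -ContentType "application/xml"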

Next, the user is prompted for the DLR and ESG names.

 

This information is used to construct the Distributed Logical Router (DLR) and Edge Services Gateway (ESG) devices via API calls.
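Both the DLR and the ESG are deployed through the same edges endpoint in the NSX-V API; the XML payload differs mainly by the <type> element (distributedRouter versus gatewayServices) and also carries appliance placement, interfaces and credentials, so it is too long to reproduce here. The sketch below just shows the call itself, re-using $nsxManager and $headers and a hypothetical pre-built template file:

# Hypothetical XML template with the user-supplied names/addresses already substituted in
$dlrSpec = Get-Content -Path .\dlr-spec.xml -Raw

# POST - deploy the edge device (DLR or ESG, depending on the <type> in the spec)
Invoke-RestMethod -Uri "https://$nsxManager/api/4.0/edges" -Method Post -Headers $headers -Body $dlrSpec -ContentType "application/xml"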

At this stage the logical switches, DLR and ESG have all been created, and the script outputs the total time elapsed to construct the topology in NSX (including the time taken for the user to input the data).

In this example it took 291.7 seconds (roughly 4.9 minutes) to:

  • Create 3 internal logical switches (for VM traffic)
  • Create 1 uplink logical switch (for BGP peering)
  • Create 1 DLR and configure an interface on each internal logical switch (acting as the default gateway)
  • Create 1 ESG and configure an interface for BGP peering
  • Configure BGP dynamic routing

Not bad at all.

To validate the routing, we can simply log on to the ESG and check its routing table:
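The commands below (standard NSX Edge CLI) are what you would run from the ESG console or an SSH session to do this:

show ip route          (routing table; routes flagged “B” were learned via BGP)
show ip bgp neighbors  (state of the BGP peering with the DLR)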

We can see the ESG has learnt, via BGP, the networks that reside behind our DLR.

This is just one of almost endless examples of leveraging the NSX API.

For anyone interested in the PowerShell script: I intend to upload the code once I’ve added some decent input validation.

VMware Cloud on AWS

Perhaps one of VMware’s most significant announcements made in recent times is the partnership with Amazon Web Services (AWS), including the ability to leverage AWS’s infrastructure to provision vSphere managed resources. What exactly does this mean and what benefits could this bring to the enterprise?

 

Collaboration of Two Giants

To understand and appreciate the significance of this partnership we must acknowledge the position and perspective of each.

VMware:

  • Market leader in private cloud offerings
  • Deep roots and history in virtualisation
  • Expanding portfolio

AWS:

  • Market leader in public cloud offerings
  • Broad and expanding range of services
  • Global scale

VMware has a significant presence in the on-premise datacentre, in contrast to AWS, which focuses entirely on the public cloud space. VMware Cloud on AWS sits in the middle as a true hybrid cloud solution, leveraging the established, industry-leading technologies and software developed by VMware together with the infrastructure capabilities provided by AWS.

 

How it Works

In a typical setup, an established vSphere private cloud already exists. Customers can then provision an AWS-backed vSphere environment using a modern HTML5-based client. The environment created in AWS leverages the following technologies:

  • ESXi on bare metal servers
  • vSphere management
  • vSAN
  • NSX

 

The connection between the on-premise and AWS-hosted vSphere environments is facilitated by Hybrid Linked Mode. This allows customers to manage both environments through a single management interface and, for example, migrate and manage workloads between the two.

Advantages

Existing vSphere customers may already be leveraging AWS resources in other ways; however, there are significant advantages associated with implementing VMware Cloud on AWS, such as:

Delivered as a service from VMware – The entire ecosystem of this hybrid cloud solution is sold, delivered and supported by VMware. This simplifies support, management and billing, amongst other activities such as patching and updates.

Consistent operational model – Existing private cloud users use the same tools, processes and technologies to manage the solution. This includes integration with other VMware products included in the vRealize product suite.

Enterprise-grade capabilities – This solution leverages the extensive AWS hardware capabilities, which include the latest in low-latency, SSD-based storage and high-performance networking.

Access to native AWS resources – This solution can be further expanded to access and consume native AWS technologies pertaining to databases, AI, analytics and more.

Use Cases

VMware Cloud on AWS has several applications, including (but not limited to) the following:

 

Datacenter Extension

Because of how rapidly an AWS-backed software-defined datacenter can be provisioned, expanding an on-premise environment becomes a trivial task. Once completed, these additional resources can be consumed to meet various business and technical demands.

Dev / Test

Adding additional capabilities to an existing private cloud environment enables the division of duties and responsibilities, allowing organisations to separate out specific environments for the purposes of security, delegation and management.

Application Migration

VMware Cloud on AWS enables us to migrate N-tier applications to an AWS-backed vSphere environment without the need to re-architect or convert our virtual machine, compute and storage constructs. This is because we’re using the same software-defined data centre technologies across our entire estate (vSphere, NSX and vSAN).

Conclusion

There are a number of viable applications for VMware Cloud on AWS, and it’s a very strong offering given the pedigree of both VMware and AWS. Combining the strengths of each creates a very compelling option for anyone considering a hybrid cloud adoption strategy.

To learn more about VMware Cloud on AWS please review the following:

https://aws.amazon.com/vmware/

https://cloud.vmware.com/vmc-aws

 

NSX Livefire Course

 

Recently I was lucky enough to attend an NSX Livefire course hosted at the VMware EMEA HQ in Staines. It’s designed to facilitate an intensive knowledge transfer of NSX-related subject matter. All participants are bound by NDA; however, most of the information is GA, with the exception of roadmap information.

 

Day One

Day one focused on introducing all the participants, laying a foundation for the course objectives and providing some background info on NSX. In addition, the following topics were covered:

  • Lab intro
  • Dynamic routing and operations
  • Integrating NSX with physical infrastructure

Day Two

We covered:

  • Security
  • Multi-site implementations
  • Business continuity and disaster recovery

Day Three

We covered:

  • Operations and Troubleshooting
  • Cloud management integration

Day Four

We covered:

  • VDI
  • Best practice

Overall, it was a very packed few days but an extremely valuable and positive experience. I would strongly recommend attending if given the chance.

 

Homelab – Nested ESXi with NSX and vSAN

The Rebuild

I decided to trash and rebuild my nested homelab to include both NSX and vSAN. However, when I attempted to prepare the hosts for NSX, host preparation failed.

I’ve not had this issue before, so I did some research and found a lot of blog posts, comments and KB articles linking this issue to VUM, for example: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2053782

However, after following the instructions I couldn’t set the “bypassVumEnabled” setting. Nor could I manually install the NSX VIBs; attempting to do so presented the following:

[root@ESXi4:~] esxcli software vib install -v /vmfs/volumes/vsanDatastore/VIB/vib20/esx-nsxv/VMware_bootbank_esx-nsxv_6.5.0-0.0.6244264.vib --force
[LiveInstallationError]
Error in running ['/etc/init.d/vShield-Stateful-Firewall', 'start', 'install']:
Return code: 1
Output: vShield-Stateful-Firewall is not running
watchdog-dfwpktlogs: PID file /var/run/vmware/watchdog-dfwpktlogs.PID does not exist
watchdog-dfwpktlogs: Unable to terminate watchdog: No running watchdog process for dfwpktlogs
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Set memory minlimit for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to set memory reservation for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' released.
Resource pool creation failed. Not starting vShield-Stateful-Firewall

It is not safe to continue. Please reboot the host immediately to discard the unfinished update.
Please refer to the log file for more details.
[root@ESXi4:~]

In particular, I was intrigued by the “Failed to release memory reservation for vsfwd” message. I decided to increase the memory configuration of my nested ESXi VMs from 6GB to 8GB, after which I was able to prepare the hosts from the UI.
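For anyone wanting to script that change, a quick PowerCLI sketch is below. It assumes the nested hosts are VMs named ESXi1-4 on the physical host (placeholder names) and that they are in maintenance mode and powered off, since memory can’t normally be changed on a running VM:

# Connect to the physical host running the nested ESXi VMs (placeholder name)
Connect-VIServer -Server physical-esxi.lab.local

# Bump each nested ESXi VM from 6GB to 8GB and power them back on
Get-VM -Name "ESXi*" | Set-VM -MemoryGB 8 -Confirm:$false
Get-VM -Name "ESXi*" | Start-VM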

TL;DR: If you’re running ESXi 6.5, NSX 6.3.3 and vSAN 6.6.1 and experiencing issues preparing hosts for NSX, increase the nested ESXi memory configuration to at least 8GB.

Homelab v2 – Part 1

Out with the old

My previous homelab, although functional, was starting to hit the limits of its 32GB of RAM, particularly when running vCenter, vSAN, NSX, etc. concurrently.

A family member had a use for my old lab, so I decided to sell it and get a replacement whitebox.

 

Requirements

  • Quiet – As this would live in my office and be powered on pretty much 24/7, it needed to run near-silently
  • Power efficient – I’d rather not rack up the electricity bill
  • 64GB RAM support

 

Nice to have

  • 10GbE
  • IPMI / Remote Access
  • Mini-ITX

Order List

I’ve had an interest in the Xeon-D boards for quite some time: the low power footprint, SR-IOV support, integrated 10GbE, IPMI and 128GB RAM support make them an attractive offering. I spotted a good deal and decided to take the plunge on a Supermicro X10SDV-4C+-TLN4F.

 

As for a complete list:

Motherboard – Supermicro X10SDV-4C+-TLN4F

RAM – 64GB (4x16GB) ADATA DDR4

Case – TBC; undecided between a Supermicro 1U case and a standard desktop ITX case

Network – Existing gigabit switch. 10GbE switches are still quite expensive, but it’s nice to have future compatibility for them on the motherboard.

I’ve yet to take delivery of all the components; part 2 will cover assembly.

My Nested NSX Home Lab

With the ever-growing popularity of SDDC solutions, I’ve decided to invest some time in learning VMware NSX and sitting the VCP6-NV exam. For this I’ve repurposed my existing homelab and configured it for NSX. I have a fairly simple setup consisting of a single whitebox “server” that accommodates nested ESXi hypervisors, plus an HP MicroServer acting as an iSCSI target.

Whitebox specs:

Motherboard: MSI B85M-E45 Socket 1150

CPU: Intel Core i7 4785T 35W TDP

RAM: 32GB Corsair DDR3 Vengeance

PSU: 300W be quiet! 80plus bronze

Case: Thermaltake Core v21 Micro ATX

Switch: 8-port Netgear GS108T Smart Switch

Cooler: Akasa AK-CC7108EP01

NAS/SAN: HP MicroServer N54L, 12GB RAM, 480GB SSD, 500GB mechanical.

 

ESXi is installed on the physical host, with additional nested ESXi VMs created on top so I can play around with DRS/HA features too.

From a networking perspective I have separate port groups on my physical host for Management, VM, iSCSI, vMotion, etc., and my nested ESXi hosts have vNICs in these port groups. Due to the nature of nesting ESXi hosts, promiscuous mode has to be enabled on the physical host’s port groups for this to work (the management network doubles as the VXLAN transport network).
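A quick PowerCLI sketch of that security policy change on the physical host is below (the port group names are from my lab, and enabling forged transmits alongside promiscuous mode is also commonly recommended for nested ESXi):

# Connect to the physical host (placeholder name)
Connect-VIServer -Server physical-esxi.lab.local

# Allow promiscuous mode (and forged transmits) on the relevant standard port groups
Get-VirtualPortGroup -Name "Management","VM","iSCSI","vMotion" |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true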

 

The actual installation of NSX is already well covered elsewhere, but this covers the basics of what I needed to do.

VCP6-NV Passed!

Motivation

SDDC solutions are becoming increasingly popular, and although I’m probably a little biased, I would say that NSX is leading the software-defined networking front. Following on from my previous post about my nested NSX homelab, I sat (and thankfully passed) the VCP6-NV exam.

 

Study Materials

The official cert guide – https://www.amazon.co.uk/VCP6-NV-Official-2V0-641-Vmware-Certification/dp/0789754800/ref=sr_1_1?ie=UTF8&qid=1493819639&sr=8-1&keywords=vcp6-NV

Practice (non exam) questions – http://www.elasticsky.co.uk/practice-questions/

Blueprint – https://mylearn.vmware.com/mgrReg/plan.cfm?plan=95141&ui=www_cert

Pluralsight NSX videos

Homelab

Lots of time

 

Experience

This was a particularly tough exam; as I would describe myself as somewhat inexperienced in NSX compared to vSphere, I found it challenging. I’ve noticed that VMware have recently revised the exam, reducing the number of questions from 85 to 77, which is nice.

 
