Virtual Thoughts

Virtualisation, Storage and various other ramblings.

Hybrid Cloud monitoring with VMware vRealize Operations

Applications and the underlying infrastructure, be it public, private or hybrid cloud, are becoming increasingly sophisticated. Consequently, the tools we use to monitor and observe these environments must become more sophisticated too. In this blog post, we look at vRealize Operations and how it can facilitate true hybrid cloud monitoring.

What is vRealize Operations?

vRealize Operations forms part of the overall vRealize suite from VMware – a collection of products targeted at cloud management and automation. In particular, vRealize Operations, as the name implies, primarily caters to operations management, with full visibility across physical, virtual and cloud-based environments. The anatomy of vRealize Operations is depicted below:

 

Integrated Cloud Operations Console – A single, unified frontend to access, modify and view all related vRealize Operations components.

Integrated Management Disciplines – vRealize Operations has built-in intelligence to assimilate, dissect and report back on a number of key operational metrics pertaining to performance, capacity, planning and more. Essentially, vRealize Operations “learns” about your environment and is able to make recommendations, predictions and much more based on your specific workloads.

Platform Services – vRealize Operations is able to perform a number of automated platform management actions based on your specific environment. As an example, vRealize Operations can automate the addition of virtual machine memory based on monitored load, proactively addressing potential issues before they surface.

Extensibility – Available from the VMware Marketplace, Management Packs extend the functionality of vRealize Operations. Examples include:

  • Microsoft Azure Management Pack from Blue Medora
  • AWS Management Pack from VMware
  • Docker Management Pack from Blue Medora
  • Dell | EMC Management Pack from Blue Medora
  • vRealize Operations Compliance Pack for PCI from VMware

The examples above demonstrate vRealize Operations’ capability to monitor AWS and Azure environments in addition to on-premises workloads, making vRealize Operations a true platform for hybrid cloud monitoring and operations management.

Practical Example – Cluster Monitoring / Troubleshooting

In this example, we leverage one of vRealize Operations’ built-in dashboards to check the performance of a specific cluster. A dashboard, in vRealize Operations terminology, is a collection of objects and their state, represented in a visual fashion.

 

One of the ways vRealize Operations understands the underlying environment is by establishing and mapping dependencies in a logical manner. In this example, we have a top-level datacentre object (ISH) with child objects (clusters and hosts) descending from it. This dashboard identifies key aspects of this cluster on a single page:

  • Cluster activity / utilisation
  • Health state of associated objects
  • CPU contention information
  • Memory contention information
  • Disk latency information

Without vRealize Operations, an administrator would typically collate these metrics manually, looking at individual performance charts, DRS scheduling information and vCenter health alarms. With vRealize Operations, this data is collected and centralised automatically, making it easy to consume.

 

Practical Example – Workload Planning

In this example, we have an upcoming project that we want to forecast into our environment, particularly around disk space demand. We facilitate this by creating a “Project” in vRealize Operations, but before that, let’s look at the project UI in a bit more detail:

 

We can access this section by navigating to Environment > vSphere Object, at which point we can select the resource we’re interested in forecasting into. The chart in the middle projects the disk space demand for this specific vSphere object (a cluster, in this example). Note the incline in disk space demand, which is typical of a production environment; however, we remain within capacity for the time period specified (90 days).

To add a project, we click the green “plus” icon below the chart:

 

Next, we fill in details pertaining to the demand. In this case, I’m adding demand in the form of five virtual machines, populating their specification from an existing VM in my environment, with an implementation date of June 19th.

 

 

If we add this project to the forecast chart, the chart changes to accommodate this change in our environment:

 

 

By adding this project we have obviously created more demand; consequently, the date on which our disk space resources will be exhausted has moved forward.

With this knowledge, we can plan our capacity requirements ahead of time. In this example, I decide to add another project that introduces additional resources prior to the commissioning of the aforementioned VMs:

Because we can combine projects into a single chart, we can see, based on observed metrics, what effect adding both demand and capacity to our environment has.

This is just one of a vast number of features in vRealize Operations. Its intelligent analytics, breadth of extensibility options and unified experience make it a compelling choice for modern cloud-based operations.

 

Understanding Data Center traffic flow using NSX-V capabilities

The defining characteristic of the Software Defined Data Center (SDDC), as the name implies, is to bring the intelligence and operations of various datacenter functions into software. This type of integration provides us with the ability to gain insights and analytics in a much more controlled, tightly integrated fashion.

VMware NSX is the market leader in network virtualisation. In this post, we have a look at a selection of tools which come with NSX, enabling a greater understanding of exactly what is transpiring in our NSX environment.

 

What we do now

Before diving into NSX-V traffic flow capabilities, let’s take a step back and consider how some organisations currently approach identifying traffic flows, using a simple example issue:

“Server A can’t talk to Server B on port 8443”

In this example, we assume that Server B is listening on port 8443.

Here are a few tools/methods that can be used to help identify the root cause:

 

What these tools/methods have in common is that they are:

 

  • Disjointed – Treated as separate, discrete exercises.
  • Isolated – Requires specific tools/skillsets.
  • Decentralised – Analysis requires output to be cross-referenced and analysed manually.

 

How NSX-V native tools can help

NSX-V provides us with a number of tools to help us gain a deeper understanding of our network environment, as well as accelerating troubleshooting and root cause analysis. These can be found via the vCenter client:

 

Flow Monitoring

Flow Monitoring is a traffic analysis tool that provides a detailed view of the traffic originating from and terminating at virtual machines. One example use case is determining, in real time, the traffic flows originating from a virtual machine, as the example below demonstrates. Unlike a tool such as Wireshark, no agent or in-guest configuration is needed – NSX does this natively, without any modification to the VM:

 

The VM in the example above has an IP of 172.16.201.10. We can see that it is making DNS calls out to 8.8.8.8, as well as communicating with another machine (172.16.200.10) over port 8443.

Endpoint Monitoring

 

Endpoint Monitoring enables us to map specific processes inside a guest operating system to network connections that are facilitating this traffic. This is helpful for gaining insight into application-layer details. The examples shown below demonstrate NSX’s ability to identify:

  • The source of the flow (process or application)
  • The source VM
  • The destination (can be any destination)
  • Traffic Type

 

 

 

Traceflow

Traceflow acts as a very useful diagnostic tool. Compared to Flow Monitoring, which takes a real-time view of network traffic, Traceflow allows us to simulate traffic by synthetically “injecting” it into our environment and monitoring the data path. In this example, a test was executed for connectivity from a web server to an app server over port 8443:

 

NSX has informed us that this packet was dropped due to a firewall rule – it also gives us the Rule ID in question. We can click on this link to get more information about the rule:

 

Once this rule has been modified, we can re-run the test, which shows the traffic being successfully delivered to the target VM.

Traceflow also gives us an idea of the journey our packet has travelled. From the above output we can see that this packet has traversed two logical switches, two ESXi hosts and one distributed logical router, and has been forwarded through the distributed firewall running on the vNICs of two VMs:

 

 

Packet Capture

The Packet Capture feature in NSX-V enables us to generate packet traces from physical ESXi hosts should we wish to perform any troubleshooting at that level.

These captures are done on a per-host level and we can specify to gather packet captures from one of the following interface types:

  • Physical NIC
  • VMKernel Interface
  • vNIC
  • vDR Port

Alternatively, captures can be taken from one of the respective filter types. Once started, NSX will begin gathering packet data. Once the session has been stopped, the results can be downloaded as .pcap files, which can be opened with a tool such as Wireshark.
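
If you prefer to capture directly on an ESXi host outside of NSX, the built-in pktcap-uw utility covers the same interface types. A couple of illustrative invocations are shown below as a minimal sketch; interface names and output paths are placeholders:

# Capture traffic on a physical uplink and write it to a .pcap file
pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap

# Capture traffic on a VMkernel interface
pktcap-uw --vmk vmk0 -o /tmp/vmk0.pcap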

 

Conclusion

As organisations are adopting software-defined technologies, the tools and processes we use must also change. Thankfully, NSX-V has a plethora of native capabilities to observe, identify and troubleshoot software-defined networks.

vRealize Log Insight – Frequently Overlooked Centralised Log Management

Log analysis has long been standard practice for activities such as root cause analysis and advanced troubleshooting. However, ingesting and analysing logs from different devices, types, locations and formats can be a challenge. In this post, we take a look at vRealize Log Insight and what it can deliver.

 

What is it?

vRealize Log Insight is a product in the vRealize suite specifically designed for heterogeneous and scalable log management across physical, virtual and cloud-based environments. It is designed to be agnostic about what it can ingest logs from and is therefore a valid candidate in a lot of deployments.

Additionally, any customer with a vCenter Server Standard or above license is entitled to a free 25-OSI pack. OSI stands for “Operating System Instance” and is broadly defined as a managed entity capable of generating logs. For example, a 25-OSI pack license can be used to cover a vCenter Server, a number of ESXi hosts and other devices supported either natively or via VMware content packs (with the exception of custom and third-party content packs – standalone vRealize Log Insight is required for this feature).

 

Current Challenges

Modern datacenters and cloud environments are rarely built from homogeneous solutions. Customers use a number of different technologies from different vendors and operating systems, and with this come a number of challenges:

 

  • Inconsistent log formats – vCenter/ESXi uses syslog for logging, Windows has a bespoke method, and applications may simply write data to a file in a specific format. Reading, interpreting and acting on this data can require a number of tools and skills.
  • Silos of information – The decentralised nature of dispersed logging causes this information to be siloed in different areas. This can have an impact on resolution times for incidents and the accuracy of root cause analysis.
  • Manual analysis – Simply logging information can be helpful, but the reason for doing so is to perform analysis. In some environments this is a manual process performed by a systems administrator.
  • Not scalable – As environments grow larger and more complex, silos of differing logging types and formats become unwieldy to manage.
  • Cost – Man-hours used to perform manual analysis can be costly.
  • No correlation – Siloed logs don’t cater for any correlation of events/activities across an environment. This can greatly impede activities such as root cause analysis.

 

Addressing Challenges With vRealize Log Insight

Below are examples of how vRealize Log Insight can address the aforementioned challenges.

 

  • Create structure from unstructured data – Collected data is automatically analysed and structured for ease of reporting.
  • Centralised logging – vRealize Log Insight centrally collates logs from a number of sources, which can then be accessed through a single management interface (a simple example follows this list).
  • Automatic analysis – Logs are collected in near real-time, and alerts can be configured to inform users of potential issues and unexpected events.
  • Scalable – Advanced licenses of vRealize Log Insight include additional features such as clustering, high availability, event forwarding and archiving to facilitate a highly scalable, centralised log management solution. vRealize Log Insight is also designed to analyse massive amounts of log data.
  • Cost – Automatic analysis of logs and alerting can reduce the man-hours spent manually analysing logs, freeing up IT staff to perform other tasks.
  • Log correlation – Because logs are centralised and structured, events across multiple devices/services can be correlated to identify trends and patterns.
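
As a simple illustration of the centralised logging point above, an ESXi host can be pointed at a Log Insight instance using its standard syslog settings. The hostname and port below are placeholders for your own collector:

# Point the host's syslog output at the Log Insight collector, then reload the syslog service
esxcli system syslog config set --loghost='udp://loginsight.lab.local:514'
esxcli system syslog reload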

 

Extensibility

vRealize Log Insight’s capabilities can be extended through the use of content packs, which are available from the VMware Marketplace (https://marketplace.vmware.com/vsx/?contentType=2).

Content packs are published either by VMware directly or from vendors to support their own devices/solutions. Examples include:

  • Apache Web Service
  • Brocade Devices
  • Cisco Devices
  • Dell | EMC Devices
  • F5 Devices
  • Juniper Devices
  • Microsoft Active Directory
  • Nimble Devices
  • VMware SRM

 

Closing Thoughts

It’s surprising how underused vRealize Log Insight is, considering it comes bundled with any valid vCenter Server Standard or above license. The modular design of the solution, allowing third-party content packs, adds a massive degree of flexibility which is not common amongst other centralised logging tools.

Homelab Networking Refresh

Adios, Netgear router

In hindsight, I shouldn’t have bought a Netgear D7000 router. The reviews were good, but after about six months of ownership it started to exhibit some pretty awful symptoms. One of these was completely and indiscriminately dropping all wireless clients, regardless of device type, range, band or frequency. A reconnect to the wireless network would, weirdly, prompt for the passphrase again, and even after re-entering the passphrase it wouldn’t connect. The only way to rectify this was to physically reboot the router.

Netgear support was pretty poor too. The support representative wanted me to downgrade firmware versions just to “see if it helps” despite confirming that this issue is not known in any of the published firmware versions.

Netgear support also suggested I change the 2.4GHz network band. Simply put, they weren’t listening, or couldn’t comprehend what I was saying.

Anyway, rant over. Amazon refunded me the £130 for the Netgear router after I explained the situation regarding Netgear’s poor support. Amazing service, really.

Hola, Ubiquiti

I’ve been eyeing up Ubiquiti for a while now but never had a reason to get any of their kit until now.  With me predominantly working from home when I’m not on the road and my other half running a business from home, stable connectivity is pretty important to both of us.

The EdgeMAX range from Ubiquiti looked like it fit the bill. I’d say it sits above the consumer-level kit from the likes of Netgear, Asus and TP-Link, and just below enterprise-level kit from the likes of Juniper and Cisco. Apart from the usual array of features found on devices of this type, I particularly wanted to mess around with BGP/OSPF from my homelab when creating networks in VMware NSX.

With that in mind, I cracked open Visio and started diagramming, eventually ending up with the following:

 

I noted the following observations:

  • Ubiquiti EdgeRouters do not have a built-in VDSL modem; therefore, for connections such as mine, a separate modem is required.
  • The EdgeRouter Lite has no hardware switching module, therefore it should be used purely as a router (which makes sense).
  • The EdgeRouter X has a hardware switching module with routing capabilities (but a lower total pps (packets per second)).

Verdict

I managed to set up the pictured environment over the weekend fairly easily. The Ubiquiti software is modern, slick, responsive and easy to use; leaps and bounds ahead of what I’ve found on consumer-grade equipment.

I have but one criticism of the Ubiquiti routers: not everything is easily configurable through the UI (yet). From what I’ve read, Ubiquiti are making good progress with this, but I had to resort to the CLI to finish my OSPF peering configuration.
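
For anyone curious, the CLI side of that OSPF configuration is broadly along the lines of the sketch below. The router ID, area and network values are placeholders, and exact syntax may vary between EdgeOS versions:

configure
set protocols ospf parameters router-id 10.0.0.1
set protocols ospf area 0 network 10.0.0.0/30
set protocols ospf redistribute connected
commit
save
exit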

The wireless access point is decent: good coverage, and the ability to provision an isolated guest network with a custom portal is a very nice touch.

At around £80, the EdgeRouter Lite represents good value for money given the feature set it provides. I wouldn’t recommend it for everyday casual network users, but then again, that isn’t Ubiquiti’s market.

The Ubiquiti community is active and very helpful as well.

Embracing the SDDC with NSX-V automation

The Software Defined Data Center (SDDC for short) has become a widely adopted and embraced model for modern datacentre implementations. Conveying the benefits of the SDDC, particularly the non-technical aspects, can be a challenge. In this blog post, we take a practical example of a single activity we can automate in NSX and look at the benefits that come from it, both technical and non-technical.

The NSX API

An API (Application Programming Interface), in simple terms, is an intermediary that allows two applications to communicate with each other via a common syntax. Although we may not be aware of it, it’s likely we use APIs every day when we use applications such as Facebook, LinkedIn, vSphere and countless others. For example, when you create a logical switch in the vSphere web client, behind the scenes an API call is made to the NSX Manager to facilitate that request.

The NSX API is based on REST, leveraging HTTPS requests to GET, POST, PUT and DELETE data within the NSX ecosystem:

  • GET – Retrieve an entity
  • POST – Create an entity
  • PUT – Update an entity
  • DELETE – Remove an entity

An entity can be a variety of NSX objects such as Logical Switches, Distributed Routers, Edge Gateways, Firewall rules, etc.
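
As a minimal sketch of what a raw call looks like, the PowerShell snippet below retrieves the transport zones from NSX Manager. The hostname and credentials are placeholders, the /api/2.0/vdn/scopes path is the NSX-V endpoint for transport zones, and the exact XML structure of the response may vary slightly between NSX versions (in a lab you may also need to relax certificate validation for the self-signed NSX Manager certificate):

# Build a Basic authentication header for the NSX Manager API
$nsxManager = "nsxmgr.lab.local"                       # placeholder NSX Manager FQDN
$credential = Get-Credential                           # NSX Manager admin account
$password   = $credential.GetNetworkCredential().Password
$authBytes  = [Text.Encoding]::UTF8.GetBytes("$($credential.UserName):$password")
$headers    = @{ Authorization = "Basic " + [Convert]::ToBase64String($authBytes) }

# GET the transport zone (vdn scope) inventory - the response is XML
$uri      = "https://$nsxManager/api/2.0/vdn/scopes"
$response = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers

# List transport zone names and object IDs
$response.vdnScopes.vdnScope | Select-Object name, objectId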

 

Options for working with the NSX API

Several avenues exist for working with the REST API, each with its own advantages and disadvantages:

  • Direct API calls via REST client – These can be made via clients such as Postman. These calls are static and are therefore suitable for one-off requests.

 

 

  • PowerNSX – PowerNSX is a PowerShell module that exposes API calls as PowerShell cmdlets (a short example follows this list). It’s an open-source community project and is not officially supported by VMware; additionally, not all API calls are currently exposed as cmdlets.
  • API calls via code – API calls can be made from a variety of programming languages (PowerShell, C#, Java, etc.), which adds an element of dynamic input. We use this approach as the example in this blog post.
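
To illustrate the PowerNSX option mentioned above, here is a minimal sketch. It assumes the PowerNSX module is installed and that a transport zone named “TZ-Local” already exists; cmdlet and parameter names are taken from the PowerNSX project documentation and may vary between versions:

# Connect to NSX via vCenter (PowerNSX resolves the registered NSX Manager)
Import-Module PowerNSX
Connect-NsxServer -vCenterServer vcenter.lab.local

# Create a logical switch in the chosen transport zone
Get-NsxTransportZone -Name "TZ-Local" | New-NsxLogicalSwitch -Name "Web-LS"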

 

Practical example – Creating new networks in a legacy virtualised compute environment

To illustrate the power of automating NSX, let’s take an example activity and break it down into its respective tasks. In this example, we want to create an N-tier network (i.e. a network comprising Web, App and DB tiers which are routable and sit behind a perimeter firewall).

 

Depending on factors such as the number of vendors used and the structure of the IT team, we can see that executing the relatively simple task of creating a routable, secure N-tier network for consumption could:

  • Involve multiple teams (vSphere admin / network admin / security admin)
  • Involve multiple tools (in this example tools from vSphere, Cisco, Juniper and Sonicwall)

This operational complexity can hinder the speed and agility of a business due to factors such as:

  • Multiple teams need to collaborate. Collaboration between vSphere / Network / Security teams can be time-consuming
  • Multiple tools/skillsets required. In the example above skills pertaining to Sonicwall, Juniper, Cisco and vSphere are required to create a secure network topology

 

Practical example – Automating in NSX

To demonstrate the automation capabilities that address the example above, a PowerShell script was created to make API calls directly to NSX. The advantages of doing this are:

  • API calls are supported by VMware.
  • The entire API ecosystem is exposed for consumption.
  • Powershell can prompt the user for information, which is then used to dynamically populate API requests.
  • All tiers of the network are created and managed by a single management plane.

 

This script starts with the layer 2 logical switches and then moves up the networking stack, configuring the layer 3 and perimeter elements of the network:

 

For each logical network we prompt the user for the following:

  • Name – What we want to call the logical network
  • Network Range – The intended network range for this network. This is used to determine the DLR’s interface address on this network
  • Network Description – What we provide as the description
  • Network Type – Simply put, uplinks are used for peering (North/South) traffic. We need one uplink network to facilitate the peering between the DLR and ESG.

 

Once the user has entered the required networks, the PowerShell script executes API calls to create them:
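
As a rough illustration of what one of these calls looks like, the sketch below creates a single logical switch by POSTing an XML body to the virtualwires endpoint of a transport zone. The names and scope ID are placeholders, and $nsxManager and $headers are assumed to have been built as in the earlier REST example:

# Create one logical switch via the NSX-V API (names and IDs are placeholders)
$scopeId = "vdnscope-1"                                # target transport zone ID
$body = @"
<virtualWireCreateSpec>
  <name>Web-LS</name>
  <description>Web tier logical switch</description>
  <tenantId>virtual wire tenant</tenantId>
</virtualWireCreateSpec>
"@

$uri = "https://$nsxManager/api/2.0/vdn/scopes/$scopeId/virtualwires"
# A successful POST returns the ID of the new logical switch (e.g. virtualwire-10)
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $body -ContentType "application/xml"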

Next, the user is prompted for the DLR and ESG names:

 

This information is used to construct the Distributed Logical Router (DLR) and Edge Services Gateway (ESG) devices via API calls:

At this stage, the following has been created:

 

 

At this point, the script outputs the total time elapsed to construct this topology in NSX (including the time taken for the user to input the data).

In this example it took 291.7 seconds (4.9 minutes) to construct the following:

  • Create 3 internal logical switches (for VM traffic)
  • Create 1 uplink logical switch (for BGP peering)
  • Create 1 DLR and configure interfaces on each internal logical switch (default gateway)
  • Create 1 ESG and configure interface for BGP peering
  • Configure BGP dynamic routing

Not bad at all.

To validate the routing, we can simply log on to the ESG and check its routing table:

We can see the ESG has learnt (by BGP) the networks that reside on our DLR.
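
For reference, this check is typically performed from the ESG command line: show ip route lists the routing table (BGP-learnt routes are flagged with “B”), and show ip bgp neighbors confirms that the peering session with the DLR is established.

show ip route
show ip bgp neighbors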

This is just one of countless examples of exposing and leveraging the NSX API.

For anyone interested in the PowerShell script – I intend to upload the code once I’ve added some decent input validation.

VMware Cloud on AWS

Perhaps one of VMware’s most significant announcements made in recent times is the partnership with Amazon Web Services (AWS), including the ability to leverage AWS’s infrastructure to provision vSphere managed resources. What exactly does this mean and what benefits could this bring to the enterprise?

 

Collaboration of Two Giants

To understand and appreciate the significance of this partnership we must acknowledge the position and perspective of each.

VMware:
  • Market leader in private cloud offerings
  • Deep roots and history in virtualisation
  • Expanding portfolio

AWS:
  • Market leader in public cloud offerings
  • Broad and expanding range of services
  • Global scale

 

VMware has a significant presence in the on-premise datacentre, in contrast to AWS, which focuses entirely on the public cloud space. VMware Cloud on AWS sits in the middle as a true hybrid cloud solution, leveraging the established, industry-leading technologies and software developed by VMware together with the infrastructure capabilities provided by AWS.

 

How it Works

In a typical setup, an established vSphere private cloud already exists. Customers can then provision an AWS-backed vSphere environment using a modern HTML5 based client. The environment created by AWS leverages the following technologies:

  • ESXi on bare metal servers
  • vSphere management
  • vSAN
  • NSX

 

The connection between the on-premise and AWS hosted vSphere environments is facilitated by Hybrid Linked Mode. This allows customers to manage both on-premise and AWS hosted environments through a single management interface. This also allows us to, for example, migrate and manage workloads between the two.

Advantages

Existing vSphere customers may already be leveraging AWS resources in a different way, however, there are significant advantages associated with implementing VMware cloud on AWS, such as:

Delivered as a service from VMware – The entire ecosystem of this hybrid cloud solution is sold, delivered and supported by VMware. This simplifies support, management and billing, amongst other activities such as patching and updates.

Consistent operational model – Existing private cloud users use the same tools, processes and technologies to manage the solution. This includes integration with other VMware products included in the vRealize product suite.

Enterprise-grade capabilities – This solution leverages AWS’s extensive hardware capabilities, which include the latest in low-latency, SSD-based storage and high-performance networking.

Access to native AWS resources – This solution can be further expanded to access and consume native AWS technologies pertaining to databases, AI, analytics and more.

Use Cases

VMware Cloud on AWS has several applications, including (but not limited to) the following:

 

Datacenter Extension

 

Because of how rapidly an AWS-backed software-defined datacenter can be provisioned, expanding an on-premise environment becomes a trivial task. Once completed, these additional resources can be consumed to meet various business and technical demands.

 

 

 

Dev / Test

 

Adding additional capabilities to an existing private cloud environment enables the division of duties/responsibilities. This enables organisations to separate out specific environments for the purposes of security, delegation and management.

Application Migration

 

 

VMware cloud on AWS enables us to migrate N-tier applications to an AWS backed vSphere environment without the need to re-architect or convert our virtual machine/compute and storage constructs. This is because we’re using the same software-defined data centre technologies across our entire estate (vSphere, NSX and vSAN).

Conclusion

There are a number of viable applications for VMware Cloud on AWS and it’s a very strong offering considering the pedigree of both VMware and AWS. Combining the strengths from each creates a very compelling option for anyone considering a hybrid cloud adoption strategy.

To learn more about VMware Cloud on AWS please review the following:

https://aws.amazon.com/vmware/

https://cloud.vmware.com/vmc-aws

 

NSX Livefire Course

 

Recently I was lucky enough to attend an NSX Livefire course hosted at the VMware EMEA HQ in Staines. It’s designed to facilitate an intensive knowledge transfer of NSX-related subject matter. All participants are bound by NDA; however, most of the information is generally available, with the exception of roadmap information.

 

Day One

Day one was focused on introducing all the participants and laying a foundation for the course objectives, as well as some background info on NSX. In addition, the following topics were covered:

  • Lab intro
  • Dynamic routing and operations
  • Integrating NSX with physical infrastructure

Day Two

We covered:

  • Security
  • Multi-site implementations
  • Business continuity and disaster recovery

Day Three

We covered:

  • Operations and Troubleshooting
  • Cloud management integration

Day Four

We covered:

  • VDI
  • Best practice

Overall, it was a very packed few days but an extremely valuable and positive experience. I would strongly recommend attending if given the chance.

 

Homelab – Nested ESXi with NSX and vSAN

The Rebuild

I decided to trash and rebuild my nested homelab to include both NSX and vSAN. When I attempted to prepare the hosts for NSX I received the following message:

 

 

I’ve not had this issue before, so I conducted some research. I found a lot of blog posts, comments and KB articles linking this issue to VUM, for example: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2053782

However, after following the instructions I couldn’t set the “bypassVumEnabled” setting, nor could I manually install the NSX VIBs – I was presented with the following:

 

[root@ESXi4:~] esxcli software vib install -v /vmfs/volumes/vsanDatastore/VIB/vib20/esx-nsxv/VMware_bootbank_esx-nsxv_6.5.0-0.0.6244264.vib --force
[LiveInstallationError]
Error in running ['/etc/init.d/vShield-Stateful-Firewall', 'start', 'install']:
Return code: 1
Output: vShield-Stateful-Firewall is not running
watchdog-dfwpktlogs: PID file /var/run/vmware/watchdog-dfwpktlogs.PID does not exist
watchdog-dfwpktlogs: Unable to terminate watchdog: No running watchdog process for dfwpktlogs
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Set memory minlimit for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to set memory reservation for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' released.
Resource pool creation failed. Not starting vShield-Stateful-Firewall

It is not safe to continue. Please reboot the host immediately to discard the unfinished update.
Please refer to the log file for more details.
[root@ESXi4:~]

In particular, I was intrigued by the “Failed to release memory reservation for vsfwd” message. I decided to increase the memory configuration of my ESXi VMs from 6GB to 8GB, and I was then able to prepare the hosts from the UI.

TL;DR: If you’re running ESXi 6.5, NSX 6.3.3 and vSAN 6.6.1 and experiencing issues preparing hosts for NSX, increase the ESXi memory configuration to at least 8GB.
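
If the nested hosts run as VMs on another vSphere layer, one quick way to make this change is PowerCLI. This is a minimal sketch assuming the nested ESXi VMs are named ESXi1-4 and are powered off; host and VM names are placeholders:

# Connect to the physical host (or vCenter) managing the nested lab
Connect-VIServer -Server physical-esxi.lab.local

# Increase the memory of the nested ESXi VMs to 8GB (the VMs must be powered off)
Get-VM -Name "ESXi*" | Set-VM -MemoryGB 8 -Confirm:$false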

Homelab v2 – Part 1

Out with the old

My previous homelab, although functional, was starting to hit the limits of its 32GB of RAM, particularly when running vCenter, vSAN, NSX etc. concurrently.

A family member had a use for my old lab, so I decided to sell it and get a replacement whitebox.

 

Requirements

  • Quiet – As this would live in my office and be powered on pretty much 24/7, it needed to be a silent-running machine
  • Power efficient – I’d rather not rack up the electric bill.
  • 64GB Ram Support

 

Nice to have

  • 10GbE
  • IPMI / Remote Access
  • Mini-ITX

Order List

I’ve had an interest in the Xeon-D boards for quite some time; the low power footprint, SR-IOV support, integrated 10GbE, IPMI and 128GB RAM support make them an attractive offering. I spotted a good deal and decided to take the plunge on a Supermicro X10SDV-4C+-TLN4F.

 

As for a complete list:

Motherboard – Supermicro X10SDV-4C+-TLN4F

RAM – 64GB (4x16GB) ADATA DDR4

Case – TBC; undecided between a Supermicro 1U case and a standard desktop ITX case

Network – Existing gigabit switch. 10GbE Switches are still quite expensive, but it’s nice to have future compatibility on the motherboard for it.

I’ve yet to take delivery of all the components; part 2 will cover assembly.

My Nested NSX Home Lab

With the ever-growing popularity of SDDC solutions, I’ve decided to invest some time in learning VMware NSX and sit the VCP6-NV exam. For this I’ve re-purposed my existing homelab and configured it for NSX. I have a fairly simple setup consisting of a single whitebox “server” that accommodates nested ESXi hypervisors and an HP MicroServer acting as an iSCSI target.

Whitebox specs:

Motherboard: MSI B85M-E45 Socket 1150

CPU: Intel Core i7 4785T 35W TDP

RAM: 32GB Corsair DDR3 Vengeance

PSU: 300W be quiet! 80plus bronze

Case: Thermaltake Core v21 Micro ATX

Switch: 8 Port Netgear GS 108-T Smart Switch

Cooler: Akasa AK-CC7108EP01

NAS/SAN: HP MicroServer N54L, 12GB RAM, 480GB SSD, 500GB mechanical.

 

ESXi is installed on the physical host, with additional ESXi VMs created so I can play around with DRS/HA features too. The end result looks like this:

NSXLAB

From a networking perspective, I have separate port groups on my physical host for Management, VM, iSCSI, vMotion etc., and my nested ESXi hosts have vNICs in these port groups. Due to the nature of nesting ESXi hosts, promiscuous mode has to be enabled on the port groups of the physical host for this to work (the management network doubles as the VXLAN transport network).
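
As a convenience, the same change can be scripted with PowerCLI against the physical host. This is a minimal sketch; the host name and port group names are placeholders, and forged transmits is also enabled, as it is commonly required for nested ESXi labs:

# Connect to the physical ESXi host
Connect-VIServer -Server physical-esxi.lab.local

# Enable promiscuous mode and forged transmits on the relevant standard port groups
Get-VirtualPortGroup -Name "Management","VM Network","iSCSI","vMotion" |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true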

vSwitch

 

The actual installation of NSX is already well covered elsewhere, but this covers the basics of what I needed to do.
