Virtual Thoughts

Virtualisation, Storage and various other ramblings.

VMware Cloud on AWS

Perhaps one of VMware’s most significant announcements in recent times is its partnership with Amazon Web Services (AWS), including the ability to leverage AWS’s infrastructure to provision vSphere-managed resources. What exactly does this mean, and what benefits could it bring to the enterprise?

Collaboration of Two Giants

To understand and appreciate the significance of this partnership, we must acknowledge the position and perspective of each.

VMware:

  • Market leader in private cloud offerings
  • Deep roots and history in virtualisation
  • Expanding portfolio

AWS:

  • Market leader in public cloud offerings
  • Broad and expanding range of services
  • Global scale

VMware has a significant presence in the on-premises datacentre; in contrast, AWS focuses entirely on the public cloud space. VMware Cloud on AWS sits in the middle as a true hybrid cloud solution, leveraging the established, industry-leading technologies and software developed by VMware together with the infrastructure capabilities provided by AWS.

How it Works

In a typical setup, an established vSphere private cloud already exists. Customers can then provision an AWS-backed vSphere environment using a modern HTML5-based client. The environment created by AWS leverages the following technologies:

  • ESXi on bare metal servers
  • vSphere management
  • vSAN
  • NSX

The connection between the on-premises and AWS-hosted vSphere environments is facilitated by Hybrid Linked Mode. This allows customers to manage both environments through a single management interface and, for example, to migrate and manage workloads between the two.
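
As a rough illustration of that single-pane-of-glass model, here’s a hedged PowerCLI sketch of connecting to both vCenters and moving a workload between them (the server, VM, host and datastore names are hypothetical; Move-VM has supported cross-vCenter migrations since PowerCLI 6.5):

# Connect to both the on-premises and cloud vCenters in one session
$onprem = Connect-VIServer -Server "vcenter.onprem.local"
$cloud  = Connect-VIServer -Server "vcenter.sddc.example.com"

# Migrate a workload from on-premises to the cloud SDDC
$vm        = Get-VM -Name "App01" -Server $onprem
$destHost  = Get-VMHost -Name "esx-01.sddc.example.com" -Server $cloud
$destStore = Get-Datastore -Name "WorkloadDatastore" -Server $cloud

# A -PortGroup mapping may also be needed if network names differ between sites
Move-VM -VM $vm -Destination $destHost -Datastore $destStore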

Advantages

Existing vSphere customers may already be leveraging AWS resources in a different way; however, there are significant advantages associated with implementing VMware Cloud on AWS, such as:

Delivered as a service from VMware – The entire ecosystem of this hybrid cloud solution is sold, delivered and supported by VMware. This simplifies support, management and billing, amongst other activities such as patching and updates.

Consistent operational model – Existing private cloud users use the same tools, processes and technologies to manage the solution. This includes integration with other VMware products, such as those in the vRealize suite.

Enterprise-grade capabilities – This solution leverages AWS’s extensive hardware capabilities, including the latest in low-latency, SSD-based storage and high-performance networking.

Access to native AWS resources – This solution can be further expanded to access and consume native AWS technologies pertaining to databases, AI, analytics and more.

Use Cases

VMware Cloud on AWS has several applications, including (but not limited to) the following:

Datacenter Extension

Because of how rapidly an AWS-backed software-defined datacenter can be provisioned, expanding an on-premises environment becomes a trivial task. Once provisioned, these additional resources can be consumed to meet various business and technical demands.

Dev / Test

Adding additional capacity to an existing private cloud environment enables the division of duties and responsibilities. Organisations can separate out specific environments for the purposes of security, delegation and management.

Application Migration

VMware Cloud on AWS enables us to migrate N-tier applications to an AWS-backed vSphere environment without the need to re-architect or convert our virtual machine, compute and storage constructs. This is because we’re using the same software-defined datacenter technologies across our entire estate (vSphere, NSX and vSAN).

Conclusion

There are a number of viable applications for VMware Cloud on AWS and it’s a very strong offering considering the pedigree of both VMware and AWS. Combining the strengths from each creates a very compelling option for anyone considering a hybrid cloud adoption strategy.

To learn more about VMware Cloud on AWS please review the following:

https://aws.amazon.com/vmware/

https://cloud.vmware.com/vmc-aws

Joining the Insight Team

As of this week, I started a new position at Insight as a VMware/SDDC Solutions Architect/Evangelist. Exciting times!

I’ll be fortunate to work with the likes of established community contributors and experts in the field such as vJenner and Chan.

Why Insight?

The IT landscape is constantly changing, and with it we as IT professionals must adapt accordingly. I wanted a new challenge: to expand my horizons and delve deeper into the areas I’ve already gained experience in. Insight is a place that will allow me to do this. My new boss described it quite eloquently: “We sell everything to everyone”. This doesn’t mean that Insight will push subpar products though – part of the philosophy here is that we’re transparent, flexible and agnostic. Leading solutions are evaluated and assessed to address the plethora of challenges presented by both existing and new customers. Multiple vendors, multiple products, private/public/hybrid cloud and everything in-between are considered as part of the product/solutions/services portfolio.

I will continue to focus primarily on VMware-based solutions with a bit of AWS on top, together with complementary technologies (e.g. storage, networking, containers, automation and scripting).

VMware vRealize Operations 2017 Specialist Exam (2VB-602)

“Specialist Exams”… Wait, what?

I have a requirement to get more up to speed with vRealize Operations Manager. As I was digging through some of the reading material, I came across the specialist exam, the details for which can be found here.

Until this point, I wasn’t aware that VMware offers specialist exams. At the time of writing, vRealize Operations and vSAN are the only two specialist certifications you can take.

I can understand the logic behind it – vRealize is becoming a very comprehensive suite of applications, and with the VCP7-CMA certification primarily focused on vRealize Automation, it makes sense to separate out certain technologies into their own curriculum.

2VB-602 (vRealize Operations)

For a couple of weeks or so I’ve been messing around with, reading up on and watching videos of vRealize Operations, primarily focused on 6.6, without even knowing about the certification. The exam, however, is based on 6.0–6.5, and 6.6 brings some rather substantial changes, so don’t expect to see 6.6-related questions in the exam.

Resources

Although I wasn’t specifically focused on passing this test, here’s what I’ve used so far in an attempt to get up to speed:

Pluralsight’s training course on vRealize Operations (created March 2017) – https://www.pluralsight.com/courses/vmware-vrealize-operations-manager

VMware’s documentation center – https://docs.vmware.com/en/vRealize-Operations-Manager/index.html

vApp Deployment and Configuration Guide – https://docs.vmware.com/en/vRealize-Operations-Manager/6.6/vrealize-operations-manager-66-vapp-deploy-guide.pdf

VMware training videos – http://players.brightcove.net/1534342432001/S1xUFpuYwx_default/index.html?playlistId=5446534362001

Exam Experience

The exam can be taken anywhere, unlike the VCP or VCAP exams, which require you to attend a training centre. The questions were pretty tough, but that may have come down to my lack of experience with the product.

Overall, it was an interesting experience. I probably would have preferred vRealize Operations to have its own proctored, VCP-level exam; as it stands, the certification is a nice-to-have. I still have a lot to learn about vRealize Operations, but it’s given me some confidence that I’ve understood the fundamentals.

NSX Livefire Course

Recently I was lucky enough to attend an NSX Livefire course hosted at the VMware EMEA HQ in Staines. It’s designed to facilitate an intensive knowledge transfer of NSX-related subject matter. All participants are bound by NDA, however most of the information covered is GA, with the exception of roadmap information.

Day One

Day one focused on introducing all the participants, laying a foundation for the course objectives, and covering some background info on NSX. In addition, the following topics were covered:

  • Lab intro
  • Dynamic routing and operations
  • Integrating NSX with physical infrastructure

Day Two

We covered:

  • Security
  • Multi-site implementations
  • Business continuity and disaster recovery

Day Three

We covered:

  • Operations and Troubleshooting
  • Cloud management integration

Day Four

We covered:

  • VDI
  • Best practice

Overall, it was a very packed few days but an extremely valuable and positive experience. I would strongly recommend attending if given the chance.

Homelab – Nested ESXi with NSX and vSAN

The Rebuild

I decided to trash and rebuild my nested homelab to include both NSX and vSAN. When I attempted to prepare the hosts for NSX, host preparation failed with an error.

I’ve not had this issue before, so I conducted some research and found a lot of blog posts, comments and KB articles linking this issue to VUM, for example: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2053782

However, after following the instructions I couldn’t set the “bypassVumEnabled” setting, nor could I manually install the NSX VIBs; attempting to do so presented the following:

[root@ESXi4:~] esxcli software vib install -v /vmfs/volumes/vsanDatastore/VIB/vib20/esx-nsxv/VMware_bootbank_esx-nsxv_6.5.0-0.0.6244264.vib --force
[LiveInstallationError]
Error in running ['/etc/init.d/vShield-Stateful-Firewall', 'start', 'install']:
Return code: 1
Output: vShield-Stateful-Firewall is not running
watchdog-dfwpktlogs: PID file /var/run/vmware/watchdog-dfwpktlogs.PID does not exist
watchdog-dfwpktlogs: Unable to terminate watchdog: No running watchdog process for dfwpktlogs
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Set memory minlimit for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to set memory reservation for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' released.
Resource pool creation failed. Not starting vShield-Stateful-Firewall

It is not safe to continue. Please reboot the host immediately to discard the unfinished update.
Please refer to the log file for more details.
[root@ESXi4:~]

In particular, I was intrigued by the “Failed to release memory reservation for vsfwd” message. I decided to increase the memory configuration of my nested ESXi VMs from 6GB to 8GB, and I was then able to prepare the hosts from the UI.

TL;DR: If you’re running ESXi 6.5, NSX 6.3.3 and vSAN 6.6.1 and are experiencing issues preparing hosts for NSX, increase the nested ESXi memory configuration to at least 8GB.
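
For anyone else hitting this, a quick PowerCLI sketch of the fix; the “ESXi*” VM name pattern is an assumption based on my lab’s naming, so adjust to suit:

# Bump each nested ESXi VM to 8GB (VMs must be powered off to change memory)
foreach ($nested in Get-VM -Name "ESXi*") {
    Stop-VM -VM $nested -Confirm:$false      # hard power-off; evacuate workloads first
    Set-VM -VM $nested -MemoryGB 8 -Confirm:$false
    Start-VM -VM $nested
}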

vDS to vSS and back again

Overview

I was recently tasked with migrating a selection of ESXi 5.5 hosts into a new vSphere 6.5 environment. These hosts leveraged Fibre Channel HBAs for block storage and 2x 10GbE interfaces for all other traffic types. I suspected that a vDS detach and resync was not the correct approach, even though some people reported success doing it this way. The /r/vmware Reddit community agreed, and I later found a VMware KB article backing the more widely accepted solution: moving everything to a vSphere Standard Switch first.

Automating the process

There are already several resources on how to do vDS -> vSS migrations, but I fancied trying it myself. I used Virtually Ghetto’s script as a foundation for my own, but wanted to add a few changes applicable to my specific environment. These included:

  • Populating a vSS dynamically by probing the vDS the host was attached to, including VLAN ID tags
    • Additionally, add a prefix to differentiate between the vSS and vDS portgroups
  • Automating the migration of VM port groups from the vDS to a vSS in a way that would result in no downtime.

Script process

This script performs the migration on a specific host, defined in $vmhost.

  1. Connect to vCenter Server
  2. Create a vSS on the host called “vSwitch_Migration”
  3. Iterate through the vDS port groups and recreate them like-for-like on the vSS, including VLAN ID tagging (where appropriate)
  4. Acquire a list of VMkernel adapters
  5. Move vmnic0 from the vDS to the vSS, migrating the VMkernel interfaces at the same time
  6. Iterate through all the VMs on the host, reconfiguring each VM’s port group so it resides on the vSS
  7. Once all the VMs have migrated, add the second (and final, in my environment) vmnic to the vSS
  8. At this point nothing specific to this host resides on the vDS, so remove the vDS from the host

If you plan to run these scripts in your environment, test first in a non-production environment.


Write-Host "Connecting to vCenter Server" -foregroundcolor Green
Connect-VIServer -Server "vCenterServer" -User administrator@vsphere.local -Pass "somepassword" | Out-Null

# Individual ESXi host to migrate from vDS to VSS
$vmhost = "192.168.1.20"
Write-Host "Host selected: " $vmhost -foregroundcolor Green

# Create a new vSS on the host
$vss_name = New-VirtualSwitch -VMHost $vmhost -Name vSwitch_Migration
Write-Host "Created new vSS on host" $vmhost "named" "vSwitch_Migration" -foregroundcolor Green

#VDS to migrate from
$vds_name = "MyvDS"
$vds = Get-VDSwitch -Name $vds_name

#Probe the VDS, get port groups and re-create on VSS
$vds_portgroups = Get-VDPortGroup -VDSwitch $vds_name
foreach ($vds_portgroup in $vds_portgroups)
{
if([string]::IsNullOrEmpty($vds_portgroup.vlanconfiguration.vlanid))
{
Write-Host "No VLAN Config for" $vds_portgroup.name "found" -foregroundcolor Green
$PortgroupName = $vds_portgroup.Name
New-VirtualPortGroup -virtualSwitch $vss_name -name "VSS_$PortgroupName" | Out-Null
}

else

{
Write-Host "VLAN config present for" $vds_portgroup.name -foregroundcolor Green
$PortgroupName = $vds_portgroup.Name
New-VirtualPortGroup -virtualSwitch $vss_name -name "VSS_$PortgroupName" -VLanId $vds_portgroup.vlanconfiguration.vlanid | Out-Null
}
}

#Create a list of VMKernel adapters
$management_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmotion1_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
$vmotion2_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2"
$vmkernel_list = @($management_vmkernel,$vmotion1_vmkernel,$vmotion2_vmkernel)

#Create mapping for VMKernel -> vss Port Group
$management_vmkernel_portgroup = Get-VirtualPortGroup -name "VSS_Mgmt" -Host $vmhost
$vmotion1_vmkernel_portgroup = Get-VirtualPortGroup -name "VSS_vMotion1" -Host $vmhost
$vmotion2_vmkernel_portgroup = Get-VirtualPortGroup -name "VSS_vMotion2" -Host $vmhost
$pg_array = @($management_vmkernel_portgroup,$vmotion1_vmkernel_portgroup,$vmotion2_vmkernel_portgroup)

#Move 1 uplink to the vss, also move over vmkernel interfaces
Write-Host "Moving vmnic0 from the vDS to VSS including vmkernel interfaces" -foregroundcolor Green
Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic0" -VMHost $vmhost) -VirtualSwitch $vss_name -VMHostVirtualNic $vmkernel_list -VirtualNicPortgroup $pg_array -Confirm:$false

#Moving VM's from vDS to VSS
$vmlist = Get-VM | Where-Object {$_.VMHost.name -eq $vmhost}

foreach ($vm in $vmlist)
{
#VM's may have more that one nic
$vmniclist = Get-NetworkAdapter -vm $vm
foreach ($vmnic in $vmniclist)
{
$newportgroup = "VSS_" + $vmnic.NetworkName
Write-Host "Changing port group for" $vm.name "from" $vmnic.NetworkName "to " $newportgroup -foregroundcolor Green
Set-NetworkAdapter -NetworkAdapter $vmnic -NetworkName $newportgroup -Confirm:$false | Out-Null
}
}

#Moving additional vmnic to vss
Write-Host "All VM's migrated, adding second vmnic to vss" -foregroundcolor Green
Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic1" -VMHost $vmhost) -VirtualSwitch $vss_name -Confirm:$false

#Tidyup - Remove DVS from this host
Write-Host "Removing host from vDS" -foregroundcolor Green
$vds | Remove-VDSwitchVMHost -VMHost $vmhost -Confirm:$false

The reverse

Although vSphere has some handy tools to migrate hosts, portgroups and networking to a vDS, scripting the reverse didn’t require many changes to the original script:


Write-Host "Connecting to vCenter Server" -foregroundcolor Green
Connect-VIServer -Server "vCenterServer" -User administrator@vsphere.local -Pass "somepassword" | Out-Null

# Individual ESXi host to migrate from vDS to VSS
$vmhost = "192.168.1.20"
Write-Host "Host selected: " $vmhost -foregroundcolor Green

#VDS to migrate to
$vds_name = "MyvDS"
$vds = Get-VDSwitch -Name $vds_name

#Vss to migrate from
$vss_name = "vSwitch_Migration"
$vss = Get-VirtualSwitch -Name $vss_name -VMHost $vmhost

#Add host to vDS but don't add uplinks yet
Write-Host "Adding host to vDS without uplinks" -foregroundcolor Green
Add-VDSwitchVMHost -VMHost $vmhost -VDSwitch $vds

#Create a list of VMKernel adaptors
$management_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmotion1_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
$vmotion2_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2"
$vmkernel_list = @($management_vmkernel,$vmotion1_vmkernel,$vmotion2_vmkernel)

#Create mapping for VMKernel -> vds Port Group
$management_vmkernel_portgroup = Get-VDPortgroup -name "Mgmt" -VDSwitch $vds_name
$vmotion1_vmkernel_portgroup = Get-VDPortgroup -name "vMotion0" -VDSwitch $vds_name
$vmotion2_vmkernel_portgroup = Get-VDPortgroup -name "vMotion1" -VDSwitch $vds_name
$vmkernel_portgroup_list = @($management_vmkernel_portgroup,$vmotion1_vmkernel_portgroup,$vmotion2_vmkernel_portgroup)

#Move 1 uplink to the vDS, also move over vmkernel interfaces
Write-Host "Moving vmnic0 from the vSS to vDS including vmkernel interfaces" -foregroundcolor Green
Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic0" -VMHost $vmhost) -DistributedSwitch $vds_name -VMHostVirtualNic $vmkernel_list -VirtualNicPortgroup $vmkernel_portgroup_list -Confirm:$false

#Moving VM's from VSS to vDS
$vmlist = Get-VM | Where-Object {$_.VMHost.name -eq $vmhost}

foreach ($vm in $vmlist)
{
#VM's may have more that one nic
$vmniclist = Get-NetworkAdapter -vm $vm
foreach ($vmnic in $vmniclist)
{
$newportgroup = $vmnic.NetworkName.Replace("VSS_","")
Write-Host "Changing port group for" $vm.name "from" $vmnic.NetworkName "to " $newportgroup -foregroundcolor Green
Set-NetworkAdapter -NetworkAdapter $vmnic -Portgroup $newportgroup -Confirm:$false | Out-Null
}
}

#Moving additional vmnic to vds
Write-Host "All VM's migrated, adding second vmnic to vDS" -foregroundcolor Green
Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic1" -VMHost $vmhost) -DistributedSwitch $vds_name -Confirm:$false

#Tidyup - Remove vSS from this host
Write-Host "Removing VSS from host" -foregroundcolor Green
Remove-VirtualSwitch -VirtualSwitch $vss -Confirm:$false

Intel Skylake/Kaby Lake processors: broken hyper-threading

Overview

Source: https://lists.debian.org/debian-devel/2017/06/msg00308.html

It appears some Intel Xeon CPUs are susceptible to a recently discovered Hyper-Threading bug. However, these are limited to E3 v5/v6-based Xeon systems, which are mostly found in entry-level, single-socket servers. Dual-socket systems currently leverage E5-based Xeons, which don’t appear to be affected.

Currently, the easiest way to mitigate this bug is to simply disable Hyper-Threading. The bug also appears to be OS-agnostic.
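
For ESXi specifically, this can be done without a trip to the BIOS; a minimal PowerCLI sketch, assuming a hypothetical host name (the change takes effect after a reboot):

# Disable Hyper-Threading via the VMkernel.Boot.hyperthreading advanced option
$esx = Get-VMHost -Name "esxi01.lab.local"
Get-AdvancedSetting -Entity $esx -Name "VMkernel.Boot.hyperthreading" |
    Set-AdvancedSetting -Value $false -Confirm:$false
# Reboot the host for the setting to take effect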

Just Servers?

The focus on social media has predominantly been around run-of-the-mill servers: the kind you typically purchase from the likes of Dell, HP, etc. However, there could be many bespoke devices that leverage susceptible processors, such as NAS/SAN heads. If you find such a device, it is unlikely that HT can simply be disabled, but it is something to be aware of.

List of Intel processors code-named “Skylake”
List of Intel processors code-named “Kaby Lake”

Homelab v2 – Part 1

Out with the old

My previous homelab, although functional, was starting to hit the limits of its 32GB of RAM, particularly when running vCenter, vSAN, NSX, etc. concurrently.

A family member had use for my old lab so I decided to sell it and get a replacement whitebox.

Requirements

  • Quiet – as this would live in my office and be powered on pretty much 24/7, it needed to run silently
  • Power efficient – I’d rather not rack up the electric bill
  • 64GB RAM support

Nice to have

  • 10GbE
  • IPMI / Remote Access
  • Mini-ITX

Order List

I’ve had an interest in the Xeon-D boards for quite some time; the low power footprint, SR-IOV support, integrated 10GbE, IPMI and 128GB RAM support make them an attractive offering. I spotted a good deal and decided to take the plunge on a Supermicro X10SDV-4C+-TLN4F.

As for a complete list:

Motherboard – Supermicro X10SDV-4C+-TLN4F

RAM – 64GB (4x16GB) ADATA DDR4

Case – TBC; undecided between a Supermicro 1U case and a standard desktop ITX case

Network – Existing gigabit switch. 10GbE switches are still quite expensive, but it’s nice to have future compatibility for 10GbE on the motherboard.

I’ve yet to take delivery of all the components; part 2 will cover the assembly.

VCAP6 Deploy Passed

Now I can rest…

I decided around mid-December to make passing the VCAP6-DCV Deploy exam a target. Today I can tick that objective off. As I have previously passed the VCAP5-DCD exam, this should entitle me to the VCIX-DCV certification, but I may need to wait a bit for that.

My Experience

Precisely this time last year I passed the VCAP5-DCD exam; by sheer coincidence, I picked exactly 365 days later to take the Deploy exam on v6. I was quite nervous, as I’d never done a VMware deploy lab exam before. The lab itself was reasonably well laid out, but the response times and general feel of the environment were a bit sluggish. Then again, my home lab resides on SSD storage, so perhaps I’m just used to a snappy interface.

Tips based on my own prep

  • The study guide from vJenner is an absolute goldmine: http://www.vjenner.com/vcap6-dcv-deployment-study-guide/
  • As with all VMware exams, the blueprint is your main reference. If you’re comfortable with most of the objectives you should be good to go.
    • Additionally, there is a lot to cover. Like me, you’re most likely going to have weak and strong areas; don’t get too hung up on (for example) trying to commit the entire esxcli namespace to memory.
  • If you’re finding it difficult to remember esxcli commands in their entirety, remember there are --help and --example flags (a PowerCLI equivalent is sketched after this list).
  • Use a VMware HOL (Hands-on Lab) to get acquainted with the UI.
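
On that esxcli point, the same command help is also reachable from PowerCLI; a small sketch, assuming a connected session and a hypothetical host name:

# Get the esxcli v2 interface for a host (PowerCLI 6.3 R1 and later)
$esxcli = Get-EsxCli -VMHost "esxi01.lab.local" -V2

# Equivalent of running 'esxcli network ip interface list --help' on the host
$esxcli.network.ip.interface.list.Help()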

Good Luck!

My Nested NSX Home Lab

With the ever-growing popularity of SDDC solutions, I’ve decided to invest some time in learning VMware NSX and sit the VCP6-NV exam. For this I’ve re-purposed my existing homelab and configured it for NSX. I have a fairly simple setup consisting of a single whitebox “server” that accommodates nested ESXi hypervisors, and an HP Microserver acting as an iSCSI target.

Whitebox specs:

Motherboard: MSI B85M-E45 Socket 1150

CPU: Intel Core i7 4785T 35W TDP

RAM: 32GB Corsair DDR3 Vengeance

PSU: 300W be quiet! 80plus bronze

Case: Thermaltake Core v21 Micro ATX

Switch: 8-port Netgear GS108T Smart Switch

Cooler: Akasa AK-CC7108EP01

NAS/SAN: HP Microserver N54L, 12GB RAM, 480GB SSD, 500GB mechanical.

ESXi is installed on the physical host, with additional ESXi VMs created so I can play around with DRS/HA features too. The end result looks like this:

[Diagram: NSXLAB topology]

From a networking perspective, I have separate port groups on my physical host for Management, VM, iSCSI, vMotion and so on, and my nested ESXi hosts have vNICs in these port groups. Because the ESXi hosts are nested, promiscuous mode has to be enabled on the physical host’s port groups for this to work (the management network doubles as the VXLAN transport).
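
For reference, a hedged PowerCLI sketch of enabling this on the physical host’s port groups (the physical host name is hypothetical; the port group names are the ones above). Nested ESXi typically needs Forged Transmits enabled alongside Promiscuous Mode:

# Relax the security policy on each port group backing the nested hosts
foreach ($pgName in "Management", "VM", "iSCSI", "vMotion") {
    Get-VirtualPortGroup -VMHost "physical-esxi.lab.local" -Name $pgName |
        Get-SecurityPolicy |
        Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true
}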

[Screenshot: vSwitch configuration]

The actual installation of NSX is already well covered elsewhere, but this covers the basics of what I needed to do.
