Virtual Thoughts

Virtualisation, Storage and various other ramblings.


NSX Livefire Course


Recently I was lucky enough to attend an NSX Livefire course hosted at the VMware EMEA HQ in Staines. It’s designed to facilitate an intensive knowledge transfer of NSX-related subject matter. All participants are bound by NDA, however most of the information is GA, with the exception of roadmap information.


Day One

Day one focused on introducing all the participants, laying a foundation for the course objectives, and covering some background on NSX. In addition, the following topics were covered:

  • Lab intro
  • Dynamic routing and operations
  • Integrating NSX with physical infrastructure

Day Two

We covered:

  • Security
  • Multi site implementations
  • Business continuity and disaster recovery

Day Three

We covered:

  • Operations and Troubleshooting
  • Cloud management integration

Day Four

We covered:

  • VDI
  • Best practice

Overall, it was a very packed few days but an extremely valuable and positive experience. I would strongly recommend attending if given the chance.


Homelab – Nested ESXi with NSX and vSAN

The Rebuild

I decided to trash and rebuild my nested homelab to include both NSX and vSAN. When I attempted to prepare the hosts for NSX I received the following message:



I’ve not had this issue before, so I conducted some research and found a lot of blog posts, comments and KB articles linking this issue to VUM.

However, after following the instructions I couldn’t set the “bypassVumEnabled” setting, nor could I manually install the NSX VIBs; attempting to do so presented the following:


[root@ESXi4:~] esxcli software vib install -v /vmfs/volumes/vsanDatastore/VIB/vib20/esx-nsxv/VMware_bootbank_esx-nsxv_6.5.0-0.0.6244264.vib --force
Error in running ['/etc/init.d/vShield-Stateful-Firewall', 'start', 'install']:
Return code: 1
Output: vShield-Stateful-Firewall is not running
watchdog-dfwpktlogs: PID file /var/run/vmware/watchdog-dfwpktlogs.PID does not exist
watchdog-dfwpktlogs: Unable to terminate watchdog: No running watchdog process for dfwpktlogs
ERROR: object '/lib/' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Set memory minlimit for vsfwd to 256MB
ERROR: object '/lib/' from LD_PRELOAD cannot be preloaded: ignored.
Failed to set memory reservation for vsfwd to 256MB
ERROR: object '/lib/' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' released.
Resource pool creation failed. Not starting vShield-Stateful-Firewall

It is not safe to continue. Please reboot the host immediately to discard the unfinished update.
Please refer to the log file for more details.

In particular, I was intrigued by the “Failed to release memory reservation for vsfwd” message. I decided to increase the memory configuration of my nested ESXi VMs from 6GB to 8GB, after which I was able to prepare the hosts from the UI.

TL;DR: If you’re running ESXi 6.5, NSX 6.3.3 and vSAN 6.6.1 and experiencing issues preparing hosts for NSX, increase the ESXi VM memory configuration to at least 8GB.
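The fix itself can be scripted. A minimal PowerCLI sketch, assuming the nested hosts are VMs named “ESXi*” on the physical host (adjust the filter for your own lab), and that they can be shut down cleanly first:

```powershell
# Assumption: the nested ESXi hosts are VMs whose names start with "ESXi"
$nested = Get-VM -Name "ESXi*"

# Shut the guests down cleanly before reconfiguring
$nested | Shutdown-VMGuest -Confirm:$false

# Wait until all nested hosts report powered off
while (($nested | Get-VM).PowerState -contains "PoweredOn") { Start-Sleep -Seconds 5 }

# Raise memory to 8GB and power the nested hosts back on
$nested | Set-VM -MemoryGB 8 -Confirm:$false
$nested | Start-VM
```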

vDS to vSS and back again


I was recently tasked with migrating a selection of ESXi 5.5 hosts into a new vSphere 6.5 environment. These hosts leveraged Fibre Channel HBAs for block storage and 2x 10GbE interfaces for all other traffic types. I assumed that doing a vDS detach and resync was not the correct approach, even though some people reported success doing it this way. The /r/vmware Reddit community agreed, and I later found a VMware KB article that backs the more widely accepted solution of moving everything to a vSphere Standard Switch first.

Automating the process

There are already several resources covering vDS -> vSS migrations, but I fancied trying it myself. I used Virtually Ghetto’s script as a foundation for my own, but wanted to add a few changes applicable to my specific environment. These included:

  • Populating a vSS dynamically by probing the vDS the host was attached to, including VLAN ID tags
    • Additionally, add a prefix to differentiate between the vSS and vDS portgroups
  • Automating the migration of VM port groups from the vDS to a vSS in a way that would result in no downtime.

Script process

This script performs the migration on a specific host, defined in $vmhost.

  1. Connect to vCenter Server
  2. Create a vSS on the host called “vSwitch_Migration”
  3. Iterate through the vDS port groups and recreate them on the vSS like-for-like, including VLAN ID tagging (where appropriate)
  4. Acquire a list of VMkernel adapters
  5. Move vmnic0 from the vDS to the vSS, migrating the VMkernel interfaces at the same time
  6. Iterate through all the VMs on the host, reconfiguring each port group so it resides on the vSS
  7. Once all the VMs have migrated, add the second (and, in my environment, final) vmnic to the vSS
  8. At this point nothing specific to this host resides on the vDS, so remove the vDS from the host

If you plan to run these scripts in your environment, test first in a non-production environment.

Write-Host "Connecting to vCenter Server" -ForegroundColor Green
Connect-VIServer -Server "vCenterServer" -User administrator@vsphere.local -Pass "somepassword" | Out-Null

# Individual ESXi host to migrate from vDS to vSS
$vmhost = ""
Write-Host "Host selected: " $vmhost -ForegroundColor Green

# Create a new vSS on the host
$vss_name = New-VirtualSwitch -VMHost $vmhost -Name vSwitch_Migration
Write-Host "Created new vSS on host" $vmhost "named vSwitch_Migration" -ForegroundColor Green

# vDS to migrate from
$vds_name = "MyvDS"
$vds = Get-VDSwitch -Name $vds_name

# Probe the vDS, get the port groups and re-create them on the vSS
$vds_portgroups = Get-VDPortgroup -VDSwitch $vds_name
foreach ($vds_portgroup in $vds_portgroups) {
    $PortgroupName = $vds_portgroup.Name
    if ($null -eq $vds_portgroup.VlanConfiguration) {
        Write-Host "No VLAN config found for" $PortgroupName -ForegroundColor Green
        New-VirtualPortGroup -VirtualSwitch $vss_name -Name "VSS_$PortgroupName" | Out-Null
    }
    else {
        Write-Host "VLAN config present for" $PortgroupName -ForegroundColor Green
        New-VirtualPortGroup -VirtualSwitch $vss_name -Name "VSS_$PortgroupName" -VLanId $vds_portgroup.VlanConfiguration.VlanId | Out-Null
    }
}

# Create a list of VMkernel adapters
$management_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmotion1_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
$vmotion2_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2"
$vmkernel_list = @($management_vmkernel,$vmotion1_vmkernel,$vmotion2_vmkernel)

# Create a mapping for VMkernel -> vSS port group
$management_vmkernel_portgroup = Get-VirtualPortGroup -Name "VSS_Mgmt" -VMHost $vmhost
$vmotion1_vmkernel_portgroup = Get-VirtualPortGroup -Name "VSS_vMotion1" -VMHost $vmhost
$vmotion2_vmkernel_portgroup = Get-VirtualPortGroup -Name "VSS_vMotion2" -VMHost $vmhost
$pg_array = @($management_vmkernel_portgroup,$vmotion1_vmkernel_portgroup,$vmotion2_vmkernel_portgroup)

# Move one uplink to the vSS, also moving over the VMkernel interfaces
Write-Host "Moving vmnic0 from the vDS to the vSS, including VMkernel interfaces" -ForegroundColor Green
Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic0" -VMHost $vmhost) -VirtualSwitch $vss_name -VMHostVirtualNic $vmkernel_list -VirtualNicPortgroup $pg_array -Confirm:$false

# Move VMs from the vDS to the vSS
$vmlist = Get-VM | Where-Object {$_.VMHost.Name -eq $vmhost}

foreach ($vm in $vmlist) {
    # VMs may have more than one NIC
    $vmniclist = Get-NetworkAdapter -VM $vm
    foreach ($vmnic in $vmniclist) {
        $newportgroup = "VSS_" + $vmnic.NetworkName
        Write-Host "Changing port group for" $vm.Name "from" $vmnic.NetworkName "to" $newportgroup -ForegroundColor Green
        Set-NetworkAdapter -NetworkAdapter $vmnic -NetworkName $newportgroup -Confirm:$false | Out-Null
    }
}

# Move the additional vmnic to the vSS
Write-Host "All VMs migrated, adding the second vmnic to the vSS" -ForegroundColor Green
Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic1" -VMHost $vmhost) -VirtualSwitch $vss_name -Confirm:$false

# Tidy up - remove the vDS from this host
Write-Host "Removing host from vDS" -ForegroundColor Green
$vds | Remove-VDSwitchVMHost -VMHost $vmhost -Confirm:$false



The reverse

Although vSphere has some handy tools to migrate hosts, portgroups and networking to a vDS, scripting the reverse didn’t require many changes to the original script:

Write-Host "Connecting to vCenter Server" -ForegroundColor Green
Connect-VIServer -Server "vCenterServer" -User administrator@vsphere.local -Pass "somepassword" | Out-Null

# Individual ESXi host to migrate from vSS to vDS
$vmhost = ""
Write-Host "Host selected: " $vmhost -ForegroundColor Green

# vDS to migrate to
$vds_name = "MyvDS"
$vds = Get-VDSwitch -Name $vds_name

# vSS to migrate from
$vss_name = "vSwitch_Migration"
$vss = Get-VirtualSwitch -Name $vss_name -VMHost $vmhost

# Add the host to the vDS but don't add uplinks yet
Write-Host "Adding host to vDS without uplinks" -ForegroundColor Green
Add-VDSwitchVMHost -VMHost $vmhost -VDSwitch $vds

# Create a list of VMkernel adapters
$management_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmotion1_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
$vmotion2_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2"
$vmkernel_list = @($management_vmkernel,$vmotion1_vmkernel,$vmotion2_vmkernel)

# Create a mapping for VMkernel -> vDS port group
$management_vmkernel_portgroup = Get-VDPortgroup -Name "Mgmt" -VDSwitch $vds_name
$vmotion1_vmkernel_portgroup = Get-VDPortgroup -Name "vMotion0" -VDSwitch $vds_name
$vmotion2_vmkernel_portgroup = Get-VDPortgroup -Name "vMotion1" -VDSwitch $vds_name
$vmkernel_portgroup_list = @($management_vmkernel_portgroup,$vmotion1_vmkernel_portgroup,$vmotion2_vmkernel_portgroup)

# Move one uplink to the vDS, also moving over the VMkernel interfaces
Write-Host "Moving vmnic0 from the vSS to the vDS, including VMkernel interfaces" -ForegroundColor Green
Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic0" -VMHost $vmhost) -DistributedSwitch $vds_name -VMHostVirtualNic $vmkernel_list -VirtualNicPortgroup $vmkernel_portgroup_list -Confirm:$false

# Move VMs from the vSS to the vDS
$vmlist = Get-VM | Where-Object {$_.VMHost.Name -eq $vmhost}

foreach ($vm in $vmlist) {
    # VMs may have more than one NIC
    $vmniclist = Get-NetworkAdapter -VM $vm
    foreach ($vmnic in $vmniclist) {
        $newportgroup = $vmnic.NetworkName.Replace("VSS_","")
        Write-Host "Changing port group for" $vm.Name "from" $vmnic.NetworkName "to" $newportgroup -ForegroundColor Green
        Set-NetworkAdapter -NetworkAdapter $vmnic -Portgroup (Get-VDPortgroup -Name $newportgroup -VDSwitch $vds_name) -Confirm:$false | Out-Null
    }
}

# Move the additional vmnic to the vDS
Write-Host "All VMs migrated, adding the second vmnic to the vDS" -ForegroundColor Green
Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic1" -VMHost $vmhost) -DistributedSwitch $vds_name -Confirm:$false

# Tidy up - remove the vSS from this host
Write-Host "Removing vSS from host" -ForegroundColor Green
Remove-VirtualSwitch -VirtualSwitch $vss -Confirm:$false
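After either migration, a quick sanity check is to list every VM network adapter on the host alongside its current port group. A short sketch (assuming $vmhost is still set from the script above):

```powershell
# Show each VM NIC on the migrated host with the port group it now resides in
Get-VM | Where-Object {$_.VMHost.Name -eq $vmhost} |
    Get-NetworkAdapter |
    Select-Object Parent, Name, NetworkName
```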

Intel Skylake/Kaby Lake processors: broken hyper-threading



It appears some Intel Xeon CPUs are susceptible to a recently discovered Hyper-Threading bug. However, these are limited to E3 v5/v6 based Xeon systems, which are found mostly in entry-level, single-socket servers. Dual-socket systems currently leverage E5-based Xeons, which don’t appear to be affected.

Currently, the easiest way to mitigate this bug is simply to disable Hyper-Threading. The bug also appears to be OS-agnostic.
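For a vSphere estate, this can be audited and applied centrally. A hedged PowerCLI sketch (the host name is hypothetical; on ESXi the `VMkernel.Boot.hyperthreading` advanced setting governs HT, and a reboot is required for the change to take effect):

```powershell
# Report Hyper-Threading status for every host in the inventory
Get-VMHost | Select-Object Name, HyperthreadingActive

# Disable HT on a specific host (takes effect after the next reboot)
$esx = Get-VMHost -Name "esxi01.lab.local"   # hypothetical host name
Get-AdvancedSetting -Entity $esx -Name "VMkernel.Boot.hyperthreading" |
    Set-AdvancedSetting -Value $false -Confirm:$false
```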

Just Servers?

The focus on social media has predominantly been on run-of-the-mill servers: the kind you typically purchase from the likes of Dell, HP, etc. However, there could be many bespoke devices that leverage susceptible processors, such as NAS/SAN heads. In the event you find such a device, it is unlikely HT can simply be disabled, but it is something to be aware of.

List of Intel processors code-named “Skylake”
List of Intel processors code-named “Kaby Lake”

Homelab v2 – Part 1

Out with the old

My previous homelab, although functional, was starting to hit the limits of its 32GB of RAM, particularly when running vCenter, vSAN, NSX, etc. concurrently.

A family member had use for my old lab so I decided to sell it and get a replacement whitebox.



Requirements

  • Quiet – as this would live in my office and be powered on pretty much 24/7, it needed to be a silent-running machine
  • Power efficient – I’d rather not rack up the electricity bill
  • 64GB RAM support


Nice to have

  • 10GbE
  • IPMI / Remote Access
  • Mini-ITX

Order List

I’ve had an interest in the Xeon-D boards for quite some time; the low power footprint, SR-IOV support, integrated 10GbE, IPMI and 128GB RAM support make them an attractive offering. I spotted a good deal and decided to take the plunge on a Supermicro X10SDV-4C+-TLN4F.


As for a complete list:

Motherboard – Supermicro X10SDV-4C+-TLN4F

RAM – 64GB (4x16GB) ADATA DDR4

Case – TBC; undecided between a Supermicro 1U case and a standard desktop ITX case

Network – Existing gigabit switch. 10GbE Switches are still quite expensive, but it’s nice to have future compatibility on the motherboard for it.

I’ve yet to take delivery of all the components, part 2 will include assembly.

VCAP6 Deploy Passed

Now I can rest…

I decided around mid-December to make passing the VCAP6-DCV Deploy exam a target. Today I can tick that objective off. As I have previously passed the VCAP5-DCD exam, this should entitle me to the VCIX-DCV certification, but I may need to wait a bit for that.

My Experience

Precisely this time last year I passed the VCAP5-DCD exam. By sheer coincidence I picked exactly 365 days later to sit the Deploy exam on v6. I was quite nervous, as I had never done a VMware deploy lab exam before. The lab itself was reasonably well laid out, but the response times and general feel of the environment were a bit sluggish; then again, my home lab resides on SSD storage, so perhaps I’m used to a snappy interface.

Tips based on my own prep

  • The study guide from vJenner is an absolute goldmine
  • As with all VMware exams, the blueprint is your main reference. If you’re comfortable with most of the objectives you should be good to go.
    • Additionally, there is a lot to cover. Like myself, you will most likely have weak and strong areas. Don’t get too hung up on (for example) committing the entire esxcli namespace to memory.
  • If you find it difficult to remember esxcli commands in their entirety, remember there are --help and --example flags.
  • Use a VMware HOL (Hands on Lab) to get acquainted with the UI
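For instance, rather than memorising a full namespace, you can walk it interactively from an ESXi shell (a sketch; the exact sub-namespaces shown are just examples):

```shell
# List the sub-namespaces and commands under "network"
esxcli network --help

# Drill down until you reach the command you need
esxcli network ip interface --help
esxcli network ip interface list
```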

Good Luck!


My Nested NSX Home Lab

With the ever-growing popularity of SDDC solutions, I’ve decided to invest some time in learning VMware NSX and sit the VCP6-NV exam. For this I’ve re-purposed my existing homelab and configured it for NSX. I have a fairly simple setup consisting of a single whitebox “server” that accommodates nested ESXi hypervisors, and an HP Microserver acting as an iSCSI target.

Whitebox specs:

Motherboard: MSI B85M-E45 Socket 1150

CPU: Intel Core i7 4785T 35W TDP

RAM: 32GB Corsair DDR3 Vengeance

PSU: 300W be quiet! 80plus bronze

Case: Thermaltake Core v21 Micro ATX

Switch: 8 Port Netgear GS 108-T Smart Switch

Cooler: Akasa AK-CC7108EP01

NAS/SAN: HP Microserver N54L, 12GB RAM, 480GB SSD, 500GB mechanical.


ESXi is installed on the physical host, with additional ESXi VMs created so I can play around with DRS/HA features too. The end result looks like this:


From a networking perspective, I have separate port groups on my physical host for Management, VM, iSCSI, vMotion etc., and my nested ESXi hosts have vNICs in these port groups. Due to the nature of nesting ESXi hosts, promiscuous mode has to be enabled on the port groups of the physical host for this to work (management doubles as the VXLAN transport).
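Enabling this can be scripted too. A minimal PowerCLI sketch, in which the port group names and physical host name are assumptions for my lab; forged transmits are commonly enabled alongside promiscuous mode for nested ESXi:

```powershell
# Allow promiscuous mode (and forged transmits) on the nested-lab port groups
foreach ($pg_name in @("Management","VM","iSCSI","vMotion")) {
    Get-VirtualPortGroup -Name $pg_name -VMHost "physical-esxi.lab.local" |
        Get-SecurityPolicy |
        Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true
}
```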



The actual installation of NSX is already well covered elsewhere, but this covers the basics of what I needed to do.

VCP6-NV Study Materials

NSX is a very exciting technology and I’ve made it a personal goal to sit and (hopefully) pass the VCP6-NV Exam. I hear it’s a tough exam, so it should provide a good challenge.

In preparation for this, my list of resources is as follows:

More resources will be added as I find them. To anyone else wishing to pass this exam – Good luck!

Achievement Unlocked : Dell Compellent Certified Deployment Professional

I’ve only recently started focusing more on developing my storage skills which I personally believe to be a good complement to my existing VMware knowledge. I’ve been working with Compellent systems for a few months now and thought it was a good time to get officially certified.

The exam itself put me a little out of my comfort zone, as in the past my storage level knowledge was limited to administrator level on EqualLogic setups. This exam was tough but rewarding.

Now I get to enjoy a week-long holiday and relax.

My return will start my VCAP6-Deploy exam prep…




An introduction to vSphere Metro Storage Cluster with Compellent Live Volume


VMware vSphere Metro Storage Cluster (vMSC) is a suite of infrastructure configurations that facilitate a stretched cluster setup. It’s not a feature like HA/DRS that we can simply switch on; it requires architectural design decisions that specifically contribute to this configuration. The foundation is a stretched cluster and, with regard to the Compellent suite of solutions, Live Volume.

Stretched Cluster

Stretched clusters are pretty much self-explanatory. In contrast to configurations where compute clusters reside within the same physical room, stretched clusters spread the compute capacity over more than one physical location. This can still be internal (different server rooms within the same building) or further apart over geographically dispersed sites.

Having stretched clusters gives us greater flexibility, and potentially better RPO/RTO for mission-critical workloads when implemented correctly, because risk and load are spread across more than one location. Failover scenarios can be further enhanced with the automatic failover features that come with solutions like Compellent Live Volume.

From a networking perspective, ideally we have a stretched, trunked layer 2 network across both sites facilitated by redundant connections. I will touch on the requirements later in this post.

What is Live Volume?

Live Volume is a specific feature of Dell Compellent Storage Centers. Broadly speaking, Live Volume virtualizes the volume presentation, separating it from the disk and RAID groups within each storage system. This virtualization decouples the volume presented to the host from its physical location on a particular storage array. As a result, promoting the secondary storage array to primary status is transparent to the hosts, and can be done automatically with auto-failover. Why is this important for vMSC? Because in certain failure scenarios we can fail over between both sites automatically and gracefully.





Specifically, regarding the Dell Compellent solution, the requirements are:

  • SCOS 6.7 or newer
  • High Bandwidth, low latency link between two sites
    • Latency must be no greater than 10ms; 5ms or less is recommended
    • Bandwidth is dependent on load; it is not uncommon to see redundant 10Gb/40Gb links between sites
  • Uniform or non-uniform presentation
  • Fixed or Round Robin path selection
  • No support for physical mode RDMs
    • Very important when considering traditional MSCS
  • For auto-failover, a third site is required with the Enterprise Manager software installed to act as a tiebreaker
    • Maximum latency to both Storage Center networks must not exceed 200ms RTT
  • Redundant vMotion network supporting a minimum throughput of 250Mbps
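As a rough way to sanity-check the latency figures above from a Windows management host, something like the following sketch works (the target host names are hypothetical):

```powershell
# Average RTT to the remote site and the tiebreaker over 20 pings
foreach ($target in @("storagecenter-siteb.lab.local","tiebreaker.lab.local")) {
    $avg = (Test-Connection -ComputerName $target -Count 20 |
        Measure-Object -Property ResponseTime -Average).Average
    Write-Host "$target average RTT: $avg ms"
}
```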


Presentation Modes – Uniform

For vMSC we have two options for presenting our storage: uniform and non-uniform. Below is a diagram depicting a traditional uniform configuration. Uniform configurations are commonly referred to as “mesh” topologies because of how the compute layer has access to primary and secondary storage both locally and via the inter-site link.




Key considerations about uniform presentation:

  • Both primary and secondary Live Volumes are presented on active paths to all ESXi hosts.
  • Typically used in environments where both sites are in close proximity.
  • Greater pressure/dependency on the inter-site link compared to non-uniform.
  • Different reactions to failure scenarios compared to non-uniform, because of storage paths and how Live Volume works.
  • Attention needs to be paid to IO paths. For example, write requests received by the Storage Center holding the secondary volume are simply proxied over the replication network to the Storage Center holding the primary volume, which adds delay. Under some conditions, Live Volume is intelligent enough to swap the roles for a volume when all of its I/O requests come from one site.


Presentation Modes – Non Uniform

Non-uniform presentation restricts primary volume access to the confines of the local site. The key differences and observations are around how vCenter/ESXi will react to certain failure scenarios. It could be argued that non-uniform presentation isn’t as resilient as uniform, but this depends on the implementation.

Key considerations about Non-uniform presentation:

  • Primary and secondary Live Volumes are presented via active paths to ESXi hosts within their local site only
  • Typically used in environments where both sites are not in close proximity
  • Less pressure/dependency on inter-site connectivity
  • A path/storage failure would invoke an “All Paths Down” condition; consequently, affected VMs will be rebooted on the secondary site. With uniform presentation they would not be, because paths would still be active.


Synchronous Replication Types

With Dell Compellent storage centers we have two methods of achieving synchronous replication:

  • High Consistency
    • Rigidly follows storage industry specifications pertaining to synchronous replication.
    • Guarantees data consistency between replication source and target.
    • Sensitive to latency.
    • If writes cannot be committed to the destination target, they will not be committed at the source. Consequently the IO will appear as failed to the OS.
  • High Availability
    • Adopts a level of flexibility when adhering to industry specifications.
    • Under normal conditions, behaves the same as High Consistency.
    • If the replication link or the destination storage becomes unavailable, or exceeds a latency threshold, Storage Center will automatically remove the dual write-committal requirement at the destination volume.
    • IO is then journaled at the source.
    • When the destination volume returns healthy, the journaled IO is flushed to the destination.

Most people tend to opt for High Availability mode for its flexibility, unless they have specific internal or external regulatory requirements.


Are there HA/DRS considerations?

Short answer: yes. Long answer: it depends on the storage vendor, but as this is a Compellent-centric post I wanted to discuss a (really cool) feature that can potentially alleviate some headaches. It doesn’t absolve all HA/DRS considerations, as these are still valid design factors.



In this example we have a Live Volume configured on two SANs, leveraging synchronous replication in a uniform presentation.

If, for any reason, a VM is migrated to the second site, where the secondary volume resides, we will observe IO requests being proxied over to the Storage Center that currently holds the primary Live Volume.

However, Live Volume is intelligent enough to identify this and, under these conditions, will perform an automatic role swap in an attempt to make all IO as efficient as possible.

I really like this feature, but it will only be efficient if a VM has its own volume, or if the VMs residing on one volume are grouped together. If Live Volume sees IO from both sites to the same Live Volume, it will not perform a role swap. Prior to this feature, and under different design considerations, we would need to leverage DRS affinity rules (should, not must) to place VMs optimally for the shortest IO path.

Other considerations include, but are not limited to:

  • Admission Control
    • Set to 50% to allow enough resources for a complete site failover
  • Isolation Addresses
    • Specify two: one for each physical site
  • Datastore Heartbeating
    • Increase the number of heartbeat datastores from two to four in a stretched cluster: two datastores per site
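Some of this can be sketched in PowerCLI; the cluster name and isolation addresses below are hypothetical, and the percentage-based admission control policy and heartbeat datastore selection are usually finished in the UI:

```powershell
# Enable HA with admission control on the stretched cluster
$cluster = Get-Cluster -Name "Stretched-Cluster"
$cluster | Set-Cluster -HAEnabled $true -HAAdmissionControlEnabled $true -Confirm:$false

# One isolation address per physical site; don't rely on the default gateway alone
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.isolationaddress0" -Value "10.1.0.1" -Confirm:$false
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.isolationaddress1" -Value "10.2.0.1" -Confirm:$false
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.usedefaultisolationaddress" -Value "false" -Confirm:$false
```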


Why we need a third site

Live Volume can work without a third site, but you won’t get automatic failover. The third site is integral for establishing quorum during unplanned outages and for preventing split-brain conditions during network partitioning. With Compellent, we just need to install Enterprise Manager at a third site that has connectivity to both Storage Centers with <200ms latency; it can be a physical or virtual Windows machine.



As you can imagine, a lot of care and attention is required when designing and implementing a vMSC solution. Compellent has some very useful features to facilitate it, and with advancements in network technology there is a growing trend towards stretched clusters.


© 2019 Virtual Thoughts

