VMware NSX Setup

Posted on 13th December 2018 in VMware

This tutorial will explain how to set up VMware NSX. NSX 6.4.0 was used for this tutorial; other NSX versions follow the same setup, but you may notice some differences as you go along.

1. Deploy the NSX Manager OVA using vCenter (it doesn’t have to be deployed in the same vCenter you will be using it in).

Select the usual compute (folder and cluster), storage & network (for management of NSX Manager) settings for the VM.

The “Customize template” step requires the relevant info to be entered, such as the hostname, IP, DNS, NTP & password for the NSX Manager.

2. Once deployed, open the NSX Manager web page (the appliance FQDN) to configure the vCenter relationship

The username is admin and the password is the one you specified in step 1 above.

NSX Manager and vCenter have a 1:1 relationship.

Use the vCenter Server that has the vDS and VMs you want to benefit from NSX.

The “Lookup Service URL” should be configured as https://{vCenter_FQDN}:443/lookupservice/sdk, e.g. https://vc01.company.net:443/lookupservice/sdk

The “vCenter Server” should be configured as {vCenter_FQDN}, e.g. vc01.company.net

Both the “Lookup Service URL” & “vCenter Server” should authenticate with the administrator@vsphere.local credentials.

Once done the NSX Manager vCenter connection should look like below:
NSX Manager vCenter Connection
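
If you want to sanity-check the registration outside the UI, the NSX REST API exposes it. A minimal curl sketch (the FQDN and password are examples – use your NSX Manager’s; -k skips certificate validation):

curl -k -u admin:'yourpassword' https://nsxmgr01.company.net/api/2.0/services/vcconfig/status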

3. Login to vCenter with administrator@vsphere.local credentials

You will now see the “Networking & Security” menu option like below:
vCenter Home Screen

4. Configure NSX user permissions

Under “System -> Users and Domains” you can add a user or group that exists in vCenter (local or domain) and assign it an NSX role.

5. Build NSX Controller Cluster

Under “Networking & Security -> Installation and Upgrade -> Management” you can add a controller node.

You need to create 3 controller nodes (ideally built on separate hosts).

Clicking the + button brings up the below settings box:
Add NSX Controller

Enter a unique friendly name & select the relevant datacenter/cluster/host/datastore.

“Connected To” should be the network you want the controllers to use to communicate; usually this is the same as the NSX Manager management network.

“IP Pool” should be a unique pool of IP addresses reserved for the NSX Controllers. You can create the IP Pool when creating the first controller.

When creating the first controller you will also be asked for a password; this password applies to the whole controller cluster.

You can only build one controller at a time, so prepare yourself for some waiting ;)

Once all 3 controllers have been built the “Management” tab should look like below:
NSX Controllers
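
As a quick check, the controller cluster can also be queried via the NSX REST API (FQDN and credentials are examples); each controller should report as running:

curl -k -u admin:'yourpassword' https://nsxmgr01.company.net/api/2.0/vdn/controller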

6. Prepare Clusters & Hosts for NSX

Under “Networking & Security -> Installation and Upgrade -> Host Preparation” select the cluster (rather than a host) you want to enable for NSX and click “Install”.

The vSphere hosts do NOT need to be in Maintenance mode for this installation.

During this installation the vSphere hosts will have the NSX VIBs installed to handle VXLAN (Virtual Extensible LAN), DLR (Distributed Logical Router) and DFW (Distributed Firewall).

Once complete the “Installation Status” will change to show the NSX version installed and “Firewall” will have status “Enabled”.
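
You can double-check the VIBs from an ESXi host’s shell. A small sketch (NSX 6.4 ships a single esx-nsxv VIB; older 6.x releases used esx-vsip and esx-vxlan):

esxcli software vib list | grep -E 'esx-(nsxv|vsip|vxlan)'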

Click “Configure” under “VXLAN” (Virtual Extensible LAN).

Select the required vDS (vSphere Distributed Switch), enter the VLAN ID to use for the VXLAN VMkernel interface and set the MTU to 1600 (VXLAN encapsulation adds roughly 50 bytes per frame, so the MTU must be larger than the default 1500).

“VMKNic IP Addressing” should use a unique pool of IP addresses for the specified VLAN ID (just like the NSX Controllers). Usually this network/VLAN is different from the NSX Manager/vSphere host management network(s).

Specify the required “VMKNic Teaming Policy” and click “OK”.

During this configuration the vSphere hosts will get a new VMkernel port for VTEP (VXLAN Tunnel Endpoint).

Once complete the “VXLAN” will change to status “Enabled”.

Once complete the “Host Preparation” tab should look like below:
Host Preparation
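
A good way to prove the VXLAN transport end to end is a VTEP-to-VTEP ping with a large, non-fragmentable payload, run from one prepared host’s shell (the target below is an example VTEP address of another host; 1572 = 1600 MTU minus the 20-byte IP and 8-byte ICMP headers):

vmkping ++netstack=vxlan -d -s 1572 192.168.50.12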

7. Logical Network Preparation

Under “Networking & Security -> Installation and Upgrade -> Logical Network Preparation” three more tabs are exposed.

“VXLAN Transport”: This tab just shows the VXLAN configuration you created in the previous step.

“Segment ID”: Edit to specify a range of IDs, e.g. 5000-8000 (NSX segment IDs start at 5000, above the VLAN ID range).

A Logical Switch (virtual wire) will use the Segment ID as the VNI (VXLAN Network Identifier). Therefore you must specify enough IDs for the Logical Switches you want to be able to create.

“Transport Zones”: Create a transport zone for Logical Switches (virtual wires) to be part of.

A Transport Zone defines which clusters of hosts will be able to see and use the Logical Switches within the zone.

When creating a Transport Zone specify a name, optional description and clusters to be part of the zone. Specify “Unicast” for “Replication Mode” to make the NSX Controllers responsible for replication.

That concludes the “Setup” for NSX. The next jobs are to create NSX Edges – both ESGs (Edge Services Gateways) and DLRs (Distributed Logical Routers), Logical Switches (virtual wires) and DFW (Distributed Firewall) rules.

Check out my other blog posts on these topics:
NSX Edges
Logical Switches
DFW Rules


vRealize Automation (vRA) 7.x NSX XaaS Resource Actions Issue

Posted on 15th November 2018 in VMware

I experienced this issue when working on a particular platform. It was a weird one and took some trial and error to fix, so I thought I would document it.

The issue:
When trying to view a provisioned NSX Resource Action item, it would just display a blank white page. Also, if a day 2 action was attempted on the item, a red ‘internal error’ box would be displayed and the day 2 action wouldn’t load.

The fix:
The fix was a multi-stage process…

1. Go to the vRO Endpoint in the vRA portal, test connectivity and save
2. In vRO Control Center (the service restarts can also be done from the shell – see below):
(a) Disable the NSX Plugin
(b) Restart the vRO Service
(c) Enable the NSX Plugin
(d) Restart the vRO Service
3. You may also need to reboot the vRO appliance
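
For reference, a shell sketch of the vRO service restart (service names are from vRO 7.x appliances – verify on your build; the second command restarts Control Center itself and is only needed if the Control Center UI misbehaves):

service vco-server restart
service vco-configurator restart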

After that you should find that you can view the provisioned NSX Resource Action item details and the day 2 actions will load again as normal.

I would be interested to hear if anyone else has experienced this issue (please write a comment). I would also be curious to hear if anyone has found the root cause.


vRA 7.2 Fix Icons vRO Package

Posted on 9th November 2018 in vRO Packages

This vRO (vRealize Orchestrator) package contains a workflow which will run a postgres query on the vRA appliance to fix the vRA (vRealize Automation) 7.2 XaaS Resource Action issue outlined in VMware KB2149050.

Usage:
Download this: postgresiconsfix zip file
Extract the zip file to find the “org.cis.postgresiconsfix.package” file
Import this “.package” file into vRO (vRealize Orchestrator)

Note: This package was exported from vRO (vRealize Orchestrator) v7.3 but should work on any version of vRO


Hide Domain Dropdown vRO Package

Posted on 9th November 2018 in vRO Packages

This vRO (vRealize Orchestrator) package contains a workflow which will run a postgres query on the vRA appliance to hide the domain dropdown for a tenant login page.

vRA (vRealize Automation) tenant login page before:
tenant login page before

vRA (vRealize Automation) tenant login page after:
tenant login page after

Usage:
Download this: domaindropdown zip file
Extract the zip file to find the “org.cis.domaindropdown.package” file
Import this “.package” file into vRO (vRealize Orchestrator)

Note: This package was exported from vRO (vRealize Orchestrator) v7.3 but should work on any version of vRO


Generate Password vRO Package

Posted on 25th October 2018 in vRO Packages

This vRO (vRealize Orchestrator) package contains an action which will generate a random secure password.

Usage:
Download this: generatepassword zip file
Extract the zip file to find the “org.cis.generatepassword.package” file
Import this “.package” file into vRO (vRealize Orchestrator)

Note: This package was exported from vRO (vRealize Orchestrator) v7.3 but should work on any version of vRO


vRealize Orchestrator Packages

Posted on 25th October 2018 in vRO Packages

I have added a “vRO Packages” category to my blog so I can start to share some vRO (vRealize Orchestrator) packages containing useful code and features.


Sysprep Fails on Windows 2008 R2 with PowerShell 5.0 Installed

Posted on 29th December 2017 in VMware, Windows OS

I experienced this issue when working on a particular platform. It was a weird one, so I thought I would document it.

The issue:
I tried to deploy a VMware VM from a Windows 2008 R2 Template which had PowerShell 5.0 installed – Windows Management Framework (WMF) 5.0 – however OS customisation would not complete. I tried to run sysprep manually within Windows too, but that failed. The sysprep logs showed the error…

“Sysprep_Generalize_MiStreamProv: **** [gle=0x00000002]”

Thanks goes out to Ioan Popovici at sccm-zone.com for finding and documenting:
Fix Sysprep Error on Windows 2008 R2

The fix:
The fix was adding a registry entry to the Windows 2008 R2 Template.

Open regedit and navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\StreamProvider.

Create a DWORD named LastFullPayloadTime with the value 0.
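
If you would rather script it (e.g. as part of the template build), the same registry entry can be created from an elevated command prompt:

reg add "HKLM\SOFTWARE\Microsoft\Windows\StreamProvider" /v LastFullPayloadTime /t REG_DWORD /d 0 /f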


vRealize Automation (vRA) 7.x Remote Console Issue

Posted on 10th July 2017 in VMware

I experienced this issue when working on a particular platform. It was a weird one, so I thought I would document it.

The issue:
When trying to remote console to a VM using the vRealize Automation (vRA) web portal it would fail with the message…

Cannot establish a remote console connection. Verify that the machine is powered on. If the server has a self-signed certificate, you might need to accept the certificate, then close and retry the connection.

This was a weird one as it was not an issue in any of the lab environments I was running on the same version, nor was it an issue on the existing vRealize Automation (vRA) 6.x in the same environment. It appears to affect any vRA 7.x version – 7.0, 7.1, 7.2 and 7.3.

Multiple steps had been taken to diagnose the issue, including putting everything on the same VLAN as the vSphere hosts (bypassing firewalls & load balancers etc), but no matter what I did the issue remained.

The fix:
The fix was an undocumented timeout setting provided by the VMware engineering team; the default timeout is 10 seconds (10000 ms).

Edit the /etc/vcac/security.properties file on the vRealize Automation appliance(s).

Add the below line to the end of the file and save.

consoleproxy.timeout.connectionInitMs=20000

Then restart the vcac service: service vcac-server restart
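
The whole fix as a quick shell sketch on the appliance (appending the setting assumes it is not already present in the file):

echo 'consoleproxy.timeout.connectionInitMs=20000' >> /etc/vcac/security.properties
service vcac-server restart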


Linux DRBD Setup

Posted on 9th January 2014 in Linux OS

This tutorial will explain how to create a 2-node DRBD cluster; additional nodes can be added easily.

For the purpose of this tutorial (and because I believe it to be the easiest distribution) I will be using Ubuntu Server 12.04 LTS. This will work on other Linux distributions but this guide is written specifically for Ubuntu. This tutorial will assume you are using the root user.

For the purposes of this tutorial I will use the below IP config:
Server 1 (drbd01) = 192.168.0.111
Server 2 (drbd02) = 192.168.0.112

1. Install the DRBD and OCFS2 packages from the repository using the command "apt-get install drbd8-utils ocfs2-tools".

2. Edit the "hosts" file (use the "vim /etc/hosts" command) and add the hostnames & IP addresses of all the DRBD nodes

3. Create a "{resource name}.res" resource file in the "/etc/drbd.d/" directory (use the "vim /etc/drbd.d/{resource name}.res" command) and populate it with the below (feel free to edit as required). Please note anything between a set of *** is a comment, e.g. ***blah blah***

resource {resource name} { ***name the resource what you want, use the same as the filename***
    protocol C;
    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
        #become-primary-on both; ***allows drbd to go primary/primary on startup, un-comment if you want to use but it is recommended to get drbd set up and working fully first***
    }

    net {
        cram-hmac-alg sha1;
        shared-secret "secret"; ***set a password***
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        allow-two-primaries; ***set this if you want to allow a primary/primary setup***
    }

    on drbd01 {
        device /dev/drbd0; ***if using multiple resources change this as required***
        disk /dev/sdb1; ***set the physical disk/partition to use for drbd***
        address 192.168.0.111:7788;
        meta-disk internal;
    }

    on drbd02 {
        device /dev/drbd0; ***if using multiple resources change this as required***
        disk /dev/sdb1; ***set the physical disk/partition to use for drbd***
        address 192.168.0.112:7788;
        meta-disk internal;
    }
}

4. Copy the "{resource name}.res" file to the other node(s) (use the "scp /etc/drbd.d/{resource name}.res drbd02:/etc/drbd.d/" command)

5. Run the command "drbdadm create-md {resource name}" to initialise the metadata storage. This command should be run on all nodes

6. Create a "cluster.conf" file in the "/etc/ocfs2/" directory (use the "vim /etc/ocfs2/cluster.conf" command) and populate it with the below (feel free to edit as required). Please note anything between a set of *** is a comment, e.g. ***blah blah***

cluster:
    node_count = 2 ***set number of nodes***
    name = {cluster name} ***set required cluster name***

node:
    ip_port = 7777
    ip_address = 192.168.0.111
    number = 1
    name = drbd01
    cluster = {cluster name}

node:
    ip_port = 7777
    ip_address = 192.168.0.112
    number = 2
    name = drbd02
    cluster = {cluster name}

7. Start DRBD (use the "service drbd start" command)

8. Copy the "cluster.conf" file to the other node(s) (use the "scp /etc/ocfs2/cluster.conf drbd02:/etc/ocfs2/" command)

9. Run the command "dpkg-reconfigure ocfs2-tools" and follow the on-screen prompts to configure OCFS2 & the cluster

10. Run the command "drbdadm -- --overwrite-data-of-peer primary {resource name}" to start the data sync. This command should be run from drbd01 only. You can watch progress by running the command "watch drbd-overview". To stop watching the output press Ctrl+C

11. Run the command "mkfs.ocfs2 /dev/drbd0" on drbd01 to create the file system on the DRBD device

12. If required you can now un-comment the "become-primary-on both" line in the resource file to allow primary/primary on startup. Restarting DRBD will put both nodes into primary mode ("service drbd restart")

13. You are now free to mount the DRBD device wherever you want, e.g. "mount /dev/drbd0 /srv"

14. If you want the DRBD device mounted automatically at boot then run "echo '/dev/drbd0 /srv ocfs2 _netdev,defaults 0 0' >> /etc/fstab" to populate fstab (remember to change the mount point from "/srv" to whatever you want)
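
You can sanity-check the cluster at any point with the below commands:
"cat /proc/drbd" – connection state and sync progress
"drbd-overview" – summary including each node's Primary/Secondary role
"mount | grep drbd" – confirm the OCFS2 mount is in place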

Thanks goes out to the below websites that I used to collect information:
Ubuntu Documentation for DRBD
Linbit DRBD User’s Guide


Linux LVS Load Balancer Setup

Posted on 21st February 2013 in Linux OS

This tutorial will explain how to create a software load balancer using the Linux kernel's LVS (Linux Virtual Server) load balancing.

For the purpose of this tutorial (and because I believe it to be the easiest distribution) I will be using Ubuntu Server 12.04 LTS. This will work on other Linux distributions but this guide is written specifically for Ubuntu. This tutorial will assume you are using the root user.

For the purposes of this tutorial I will use the below IP config:
VIP (Virtual IP, load balanced IP) = 192.168.0.110
RIP1 (Real IP 1, real server 1 IP) = 192.168.0.111
RIP2 (Real IP 2, real server 2 IP) = 192.168.0.112

1. Install the keepalived package from the repository using the command "apt-get install keepalived".

2. Edit the "sysctl.conf" file (use the "vim /etc/sysctl.conf" command), find "#net.ipv4.ip_forward = 1" and remove the # (uncomment)

3. Run the "sysctl -p" command to apply the change

4. Edit the "hosts" file (use the "vim /etc/hosts" command) and add the hostnames & IP addresses of all load balancers and real servers

5. Set the loopback on all real servers to the VIP, as shown in the Linux sketch below (see the end of this post for Windows loopback info)
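
On a Linux real server, a minimal sketch of step 5 is below (the VIP matches this tutorial's addressing; the two sysctl settings stop the real server answering ARP for the VIP, which is required for DR mode):

ip addr add 192.168.0.110/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

Add equivalent entries to /etc/network/interfaces and /etc/sysctl.conf to make these persist across reboots.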

6. Create a "keepalived.conf" file in the "/etc/keepalived/" directory (use the "vim /etc/keepalived/keepalived.conf" command) and populate it with the below (feel free to edit as required). Please note anything between a set of *** is a comment and must be removed before saving the file, e.g. ***blah blah***

vrrp_sync_group VG1 {
    group {
        VI_1
    }
}

vrrp_instance VI_1 {
    state MASTER ***Must be "BACKUP" on failover load balancer***
    interface eth0
    virtual_router_id 1
    priority 100 ***Must be less than 100 on failover load balancer, 50 is best***
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.110
    }
}

#Load Balance Port 80 - Standard Web Port
virtual_server 192.168.0.110 80 {
    delay_loop 15
    lb_algo wlc
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.0.111 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 192.168.0.112 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

#Load Balance Port 443 - Standard SSL Web Port
virtual_server 192.168.0.110 443 {
    delay_loop 15
    lb_algo wlc
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.0.111 443 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 192.168.0.112 443 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

7. Start keepalived on all load balancers using the command "/etc/init.d/keepalived start"

You can then check the ipvsadm configuration and see which load balancer currently holds the VIP using the below commands:
"ipvsadm" – show the virtual server table and real servers
"ipvsadm -lc" – list current connections
"ip addr list" (or "ip a s") – see which load balancer holds the VIP

If you are load balancing a Windows server then you will need to do the following on the Windows real server:
1. Install the Windows Loopback adapter and set its IP to the VIP with a 255.255.255.255 subnet mask
2. Rename the adapters to "lan" and "loopback" respectively
3. Run the below commands in a command prompt:
netsh interface ipv4 set interface "lan" weakhostreceive=enabled
netsh interface ipv4 set interface "loopback" weakhostreceive=enabled
netsh interface ipv4 set interface "loopback" weakhostsend=enabled

Thanks goes out to the below websites that I used to collect information:
Ubuntu Manual for ipvsadm
gcharriere.com
www.linuxvirtualserver.org
