When you need another brain


Powershell script to permanently enable ssh on all ESXi hosts and suppress SSH warning


A small script to permanently enable ssh on all ESXi hosts and suppress the SSH warning.

$vcenter = "vc"
Connect-VIServer -Server $vcenter

$host_regexp = "esx0[1-2]"
$key = "TSM-SSH"
$policy = "on"

$hosts = Get-View -ViewType "HostSystem" -Property Name |
    Select-Object -ExpandProperty name | where {$_ -match $host_regexp} | sort

foreach ($h in $hosts) {
  $service = Get-VMHostService -VMHost $h | where {$_.Key -eq $key}
  $_this = Set-VMHostService -HostService $service -Policy $policy
  $_this = Start-VMHostService -HostService $service
  $_this = Set-VMHostAdvancedConfiguration -VMHost $h UserVars.SuppressShellWarning 1
  Write-Host "enabled" $service.Key "on" $h
}


Written by vmsysadmin

May 16, 2012 at 6:57 am

Posted in Powershell, vSphere

Adding vmkernel interfaces to Nexus 1000v distributed switch with VMware powercli


Recently I faced a task where I needed to add vmotion and nfs vmkernel interfaces to a large number of ESXi 5 hosts that were attached to the Cisco Nexus 1000v distributed switch. In the field, due to the lack of time, we resorted to programmatically creating vmkernel interfaces on the standard virtual switch and then manually migrated them to the Nexus 1000v. Later I decided to create a powershell script that would streamline this task and create vmk interfaces on the Nexus 1000v directly.

This rather simple script takes a number of variables in the header, then creates the vmk interface (it automatically assigns the next available vmk number; for example, if you already have vmk0 on the host, it will create vmk1). The script can enable vmotion on the vmknic and set the MTU. The $hosts_regexp variable lets you narrow down the list of hosts to run the script against (for example, we had management hosts that did not need the new interfaces). Set the switch name, the portgroup name, the first octets of the IP, and the starting last octet for the new interfaces.

# Variable header. The original post only preserved $netmask; the rest are
# reconstructed from the script body below -- adjust the example values.
$vc = "vc"                   # vCenter server
$hosts_regexp = "esx"        # regexp to select hosts
$switch = "n1kv"             # Nexus 1000v dvSwitch name
$portgroup = "vmk-pg"        # dvPortgroup for the new interface
$ip = "10.0.0."              # first three octets of the vmk IP
$lastoct = 10                # starting last octet
$netmask = ""                # fill in your subnet mask
$mtu = 1500
$vmotion = $true             # enable vmotion on the new vmknic

$conn = Connect-VIServer $vc

$hosts = get-vmhost
$hosts = $hosts -match $hosts_regexp | sort

foreach ($h in $hosts) {

$vmhost = Get-VMHost $h
$netsystem = Get-View $vmhost.Extensiondata.ConfigManager.networkSystem
$vnicmanager = Get-View $vmhost.Extensiondata.ConfigManager.virtualNicManager
$switchuuid = ($netsystem.NetworkInfo.ProxySwitch | where {$_.DvsName -eq $switch}).DvsUuid
$dvportgroupkey = (Get-VirtualPortGroup $vmhost | where {$_.Name -eq $portgroup}).Key

$nic = New-Object VMware.Vim.HostVirtualNicSpec
$nic.ip = New-Object VMware.Vim.HostIpConfig
$nic.ip.dhcp = $false
$nic.ip.ipAddress = $ip + $lastoct
$nic.ip.subnetMask = $netmask
$nic.mtu = $mtu
$nic.distributedVirtualPort = New-Object VMware.Vim.DistributedVirtualSwitchPortConnection
$nic.distributedVirtualPort.switchUuid = $switchuuid
$nic.distributedVirtualPort.portgroupKey = $dvportgroupkey

$vmk = $netsystem.AddVirtualNic("", $nic)
if ($vmotion) {$vnicmanager.SelectVnicForNicType("vmotion", $vmk)}

Write-Host $vmhost,"added",$vmk,$nic.ip.ipAddress,$nic.ip.subnetMask,"mtu",$nic.mtu,"vmotion",$vmotion

$lastoct++   # next host gets the next IP
}

The second script will remove the vmk interface, in case you need to change something or start from scratch.


# The vmk interface to remove. The original post did not preserve this
# assignment or the removal call -- adjust to the interface you created.
$vmk = "vmk1"

$conn = Connect-VIServer $vc

$hosts = get-vmhost
$hosts = $hosts -match $hosts_regexp | sort

foreach ($h in $hosts) {

$vmhost = Get-VMHost -Name $h
$netsystem = Get-View $vmhost.Extensiondata.ConfigManager.networkSystem
$netsystem.RemoveVirtualNic($vmk)
Write-Host $vmhost,"removed",$vmk
}

Written by vmsysadmin

April 30, 2012 at 11:42 pm

Posted in Powershell, vSphere

Resolving Linux boot issues after P2V with VMware Converter


Recently I had to deal with migrating SuSE Enterprise Linux servers from an old environment to the Vblock. The customer had a number of physical servers and Xen instances that needed to be moved to the new vSphere environment.

While VMware Converter 5 does a good job live-cloning existing Linux servers, sometimes you have to use the Cold Clone boot CD if the physical server uses software RAID (and then boot the VM from the broken mirror side), and sometimes the resulting VM does not boot because of bootloader issues.

For most migrations I've used VMware Converter 5.0. One thing you need to do right away is modify the VMware Converter config to enable the root login and keep the failed helper VM: enable the useSourcePasswordInHelperVm flag and disable powerOffHelperVm. See http://kb.vmware.com/kb/1008209 for details.
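According to that KB, both flags live in converter-worker.xml on the machine running the Converter server. A rough sketch of the edit (the surrounding element nesting varies by Converter build, so treat this as an assumption and follow the KB):

```xml
<!-- converter-worker.xml: keep the failed helper VM and allow root login -->
<useSourcePasswordInHelperVm>true</useSourcePasswordInHelperVm>
<powerOffHelperVm>false</powerOffHelperVm>
```

Restart the VMware Converter worker service after saving the file.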

Now the result of your failed conversion is preserved, and you should be able to boot the VM to check why the failure occurred.


There are a few reasons why the conversion can fail at 99% while reconfiguring the OS: the disk paths may have changed, kernel modules may be missing, or GRUB cannot find its configuration.

Originally my Xen conversions were failing at 99% with FAILED: An error occurred during the conversion: 'GrubInstaller::InstallGrub: Failed to read GRUB configuration from /mnt/p2v-src-root/boot/grub/menu.lst'. This happens because Xen instances do not have /boot/grub/menu.lst in place.

To fix this or any other bootloader issue, grab your favorite Linux rescue disk and boot the VM from it. Since I was converting SLES, I used the SLES 10 SP2 boot CD and booted into "Rescue System". Alternatively, you can attach the converted vmdk to an existing Linux VM.

Once booted into the rescue system, check the present disks with "fdisk -l". Most likely your devices now show up as /dev/sdaX, since the disk controller was changed to VMware LSI Logic Parallel.


Mount your new /dev/sda2 partition as /mnt, add /dev, and chroot into it.

# mount /dev/sda2 /mnt
# mount --bind /dev /mnt/dev
# cd /mnt
# chroot .

Once chrooted into your old environment, fix /etc/fstab to use /dev/sdaX for your boot and swap partitions, instead of whatever paths are there (the Xen instances were booting from /dev/xvda2).
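As a hypothetical before/after (the device names here are illustrative; match them against your "fdisk -l" output), the fstab change looks like this:

```
# /etc/fstab -- before (Xen device paths)
/dev/xvda2   /      ext3   defaults   1 1
/dev/xvda1   swap   swap   defaults   0 0

# /etc/fstab -- after (VMware LSI Logic Parallel paths)
/dev/sda2    /      ext3   defaults   1 1
/dev/sda1    swap   swap   defaults   0 0
```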

The next step is to make sure that the kernel modules required for VMware virtual machine support are loaded from the ramdisk. If the piix and mptspi modules are missing, you will get a "Waiting on /dev/sda2 device to appear..." message at boot.

On SLES, the ramdisk kernel modules are defined in the /etc/sysconfig/kernel file.

The following modules should be present (remove unused stuff like "xenblk"):
INITRD_MODULES="piix mptspi processor thermal fan jbd ext3 edd"

Once the INITRD_MODULES line is fixed, run “mkinitrd” to re-create the ramdisk.

# mkinitrd

In case the bootloader's /boot/grub/menu.lst file is missing, you can specify the kernel and initrd parameters at boot time (use TAB to complete the filenames for your smp kernel).

grub> kernel /boot/vmlinuz- root=/dev/sda2
grub> initrd /boot/initrd-
grub> boot

Alternatively, use the menu.lst file below.

###YaST update: removed default
default 0
timeout 8
##YaST - generic_mbr
gfxmenu (hd0,1)/boot/message
##YaST - activate

###Don't change this comment - YaST2 identifier: Original name: linux###
title SUSE Linux Enterprise Server 10 SP2
kernel (hd0,1)/boot/vmlinuz- root=/dev/sda2 repair=1 resume=/dev/sda1 splash=silent showopts vga=0x314
initrd (hd0,1)/boot/initrd-

###Don't change this comment - YaST2 identifier: Original name: failsafe###
title Failsafe -- SUSE Linux Enterprise Server 10 SP2
kernel (hd0,1)/boot/vmlinuz- root=/dev/sda2 showopts ide=nodma apm=off acpi=off noresume edd=off 3 vga=normal
initrd (hd0,1)/boot/initrd-

VM should be able to boot now.

Check the 30-net_persistent_names.rules file in the /etc/udev/rules.d directory for network adapter changes that may be needed. It is likely that udev detected the network hardware change and added eth1 to this file. Remove the duplicate entries, leaving only the last one, and change eth1 to eth0.
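On SLES 10 the duplicated rules look roughly like this (the MAC addresses are made up for illustration, and the exact rule format varies by release):

```
# old physical NIC -- remove this line
SUBSYSTEM=="net", ACTION=="add", SYSFS{address}=="00:11:22:33:44:55", IMPORT="/lib/udev/rename_netiface %k eth0"
# new VMware NIC that udev added as eth1 -- keep it, changing eth1 to eth0
SUBSYSTEM=="net", ACTION=="add", SYSFS{address}=="00:50:56:aa:bb:cc", IMPORT="/lib/udev/rename_netiface %k eth1"
```

After the fix, the single surviving line should end in eth0.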

Don’t forget to install VMware Tools once the VM is up and running.

Written by vmsysadmin

February 10, 2012 at 6:06 am

Posted in vSphere


Using Perc 5i with ESXi 5


The Perc 5i is Dell's rebranded LSI SAS1078 RAID-on-Chip (ROC). You can use LSI firmware on it, since it is essentially the same card as the MegaRAID SAS 8480E.

Installing MegaCli and LSI SIM adapter in ESXi 5

Go to http://www.lsi.com => "Support" => "Support Downloads by Product" and search for 8480E. On the 8480E page, search for "MegaCLI". Download the MegaCli package and copy the executable from the VMware folder to your host's persistent storage. Use your local vmfs datastore; do not use the root partition, which uses tmpfs to minimize writes to the flash media, so the file would disappear after a reboot.

Get the firmware from the same page (search for "Firmware"). 7.0.1-0083 was current as of Dec. 08, 2011. Extract mr1068fw.rom to your host's persistent storage.

Search http://www.lsi.com for "SMIS Provider" and download "SAS MegaRAID VMWare SMIS Provider VIB (Certified) for ESXi 5.0", then extract the vib file to your host's persistent storage.

Install the vib from SMIS Provider package:

# esxcli software vib install -v ./vmware-esx-provider-LSIProvider.vib

Try to run MegaCli:

# ./MegaCli -AdpAllInfo -aALL | more

If it complains about libstorelib.so missing, you need to copy it from an ESX 4 host and place it in the same directory as MegaCli. See http://communities.vmware.com/thread/330535 for details. Alternatively, you can get it here: http://db.tt/aEYFL6CQ.

Shut down all VMs on the host and flash the controller's firmware:

# ./MegaCli -adpfwflash -f mr1068fw.rom -a0

Adapter 0: PERC 5/i Adapter
Vendor ID: 0x1028, Device ID: 0x0015

Package version on the controller: 7.0.1-0075
Package version of the image file: 7.0.1-0083
Download Completed.
Flashing image to adapter…
Adapter 0: Flash Completed.

Exit Code: 0x00

Reboot the host.

After the reboot, you should see the LSIProvider vib is installed:

# esxcli software vib list | grep -i lsi
LSIProvider 500.04.V0.24-261033 LSI VMwareAccepted 2011-11-30

From the 8480E product page, you can download the "MegaRAID Storage Manager" for your preferred OS to connect to the SIM adapter and manage the controller. The version I tried was "Windows - 4.6", build 11.06.00-03 from Aug 11, 2011. It failed to discover the adapter.

I then tried the MSM from the MegaRAID SAS 9285-8e page (Windows - 5.0 - 10M12, Version: 9.00-01 from Mar 11, 2011), and it was able to discover and connect to the controller. The Storage Management software is rather slow, buggy, and crash-prone on Windows 7, but I was able to create a logical drive and change the caching policies on the existing VDs.

MSM uses TCP port 5989, so be sure to allow it through the ESXi firewall (just enable the CIM secure server ruleset).

If you have a problem discovering the ESXi host in MSM, watch http://www.youtube.com/watch?feature=player_detailpage&v=mEBwt6Q_diU#t=479s. You need to stop the local discovery, then go to "Configure Host" and select "Display all the systems in the network of local server".

Written by vmsysadmin

December 9, 2011 at 9:58 pm

Posted in vSphere