VMSysAdmin

When you need another brain

How to enable vRealize Automation 7 Orchestrator Control Center service

When you install vRealize Automation 7, the main web interface greets you with a bunch of links, one of which is “vRealize Orchestrator Control Center (the service is stopped by default)”. There is absolutely no documentation from VMware on how to start the service.

To start the Control Center, you need to ssh into the appliance, log in as root, and run “/etc/init.d/vco-configurator start”.
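If you also want the Control Center to survive appliance reboots, you can enable the init script at boot. This is a sketch: the chkconfig invocation below assumes the standard SLES service tooling on the vRA appliance, so verify it with --list (and after the next reboot):

```
# /etc/init.d/vco-configurator start
# chkconfig vco-configurator on
# chkconfig --list vco-configurator
```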

Written by vmsysadmin

January 7, 2016 at 5:20 pm

Posted in Uncategorized

Powershell script to permanently enable ssh on all ESXi hosts and suppress SSH warning

A small script to permanently enable ssh on all ESXi hosts and suppress the SSH warning.

$vcenter = "vc"
Connect-VIServer -Server $vcenter

$host_regexp = "esx0[1-2]"   # limit the run to matching hosts
$key = "TSM-SSH"             # key of the SSH service on the host
$policy = "on"               # start the service with the host

$hosts = Get-View -ViewType "HostSystem" -Property Name |
    Select-Object -ExpandProperty Name | where {$_ -match $host_regexp} | sort

foreach ($h in $hosts) {
  $service = Get-VMHostService -VMHost $h | where {$_.Key -eq $key}
  $_this = Set-VMHostService -HostService $service -Policy $policy
  $_this = Start-VMHostService -HostService $service
  $_this = Set-VMHostAdvancedConfiguration -VMHost $h UserVars.SuppressShellWarning 1
  Write-Host "enabled" $service.Key "on" $h
}

Written by vmsysadmin

May 16, 2012 at 6:57 am

Posted in Powershell, vSphere

Adding vmkernel interfaces to Nexus 1000v distributed switch with VMware powercli

Recently I faced a task where I needed to add vmotion and nfs vmkernel interfaces to a large number of ESXi 5 hosts that were attached to the Cisco Nexus 1000v distributed switch. In the field, due to the lack of time, we resorted to programmatically creating the vmkernel interfaces on the standard virtual switch and then manually migrating them to the Nexus 1000v. Later I decided to create a powershell script that would streamline this task and create the vmk interfaces on the Nexus 1000v directly.

This rather simple script takes a number of variables in the header, then creates the vmk interface (it automatically assigns the next available vmk number; i.e. if you already have vmk0 on the host, it will create vmk1). The script can enable vmotion on the vmknic and set the mtu. The $hosts_regexp variable allows you to narrow down the list of hosts to run the script against (for example, we had management hosts that did not need the new interfaces). Set the switch name, the portgroup name, the IP prefix, and the starting last octet for the first new interface.

$vc = "vc"
$switch = "n1000v"
$portgroup = "data-uplink"
$ip = "10.0.0."
$lastoct = 44
$netmask = "255.255.255.0"
$mtu = 1500
$hosts_regexp = [regex]"esx0(0[1-9]|1[0-4])"
$vmotion = $false

$conn = Connect-VIServer $vc

$hosts = get-vmhost
$hosts = $hosts -match $hosts_regexp | sort

foreach ($h in $hosts) {

    $vmhost = Get-VMHost $h
    $netsystem = Get-View $vmhost.Extensiondata.ConfigManager.networkSystem
    $vnicmanager = Get-View $vmhost.Extensiondata.ConfigManager.virtualNicManager
    $switchuuid = ($netsystem.NetworkInfo.ProxySwitch | where {$_.DvsName -eq $switch}).DvsUuid
    $dvportgroupkey = (Get-VirtualPortGroup $vmhost | where {$_.Name -eq $portgroup}).Key

    $nic = New-Object VMware.Vim.HostVirtualNicSpec
    $nic.ip = New-Object VMware.Vim.HostIpConfig
    $nic.ip.dhcp = $false
    $nic.ip.ipAddress = $ip + $lastoct
    $nic.ip.subnetMask = $netmask
    $nic.mtu = $mtu
    $nic.distributedVirtualPort = New-Object VMware.Vim.DistributedVirtualSwitchPortConnection
    $nic.distributedVirtualPort.switchUuid = $switchuuid
    $nic.distributedVirtualPort.portgroupKey = $dvportgroupkey

    $vmk = $netsystem.AddVirtualNic("", $nic)
    if ($vmotion) {$vnicmanager.SelectVnicForNicType("vmotion", $vmk)}

    Write-Host $vmhost,"added",$vmk,$nic.ip.ipAddress,$nic.ip.subnetMask,"mtu",$nic.mtu,"vmotion",$vmotion

    $lastoct++
}

The second script will remove the vmk interface, in case you need to change something or start from scratch.

$vc = "vc"
$vmk = "vmk1"
$hosts_regexp = [regex]"esx0(0[1-9]|1[0-4])"

$conn = Connect-VIServer $vc

$hosts = get-vmhost
$hosts = $hosts -match $hosts_regexp | sort

foreach ($h in $hosts) {

    $vmhost = Get-VMHost -Name $h
    $netsystem = Get-View $vmhost.Extensiondata.ConfigManager.networkSystem
    $netsystem.RemoveVirtualNic($vmk)
    Write-Host $vmhost,"removed",$vmk
}

Written by vmsysadmin

April 30, 2012 at 11:42 pm

Posted in Powershell, vSphere

Resolving Linux boot issues after P2V with VMware Converter

Recently I had to deal with migrating SuSE Enterprise Linux servers from the old environment to the Vblock. The customer had a number of physical servers and XEN instances that needed to be moved to the new vSphere environment.

While VMware Converter 5 does a good job of live-cloning existing Linux servers, sometimes you have to use the Cold Clone boot CD if the physical server uses software RAID (and then boot the VM from the broken mirror side), and sometimes the resulting VM does not boot because of bootloader issues.

For most migrations I’ve used VMware Converter 5.0. One thing you need to do right away is modify the VMware Converter config to enable the root login and keep the failed helper VM: enable the useSourcePasswordInHelperVm flag and disable powerOffHelperVm. See http://kb.vmware.com/kb/1008209 for details.
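For reference, the two flags live in converter-worker.xml on the machine running the Converter server. The sketch below shows only the relevant elements (the exact nesting varies by Converter version, so follow the KB above), and the Converter Worker service must be restarted afterwards:

```
<!-- converter-worker.xml: keep the failed helper VM and reuse the source root password -->
<useSourcePasswordInHelperVm>true</useSourcePasswordInHelperVm>
<powerOffHelperVm>false</powerOffHelperVm>
```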

Now you should have the result of your failed conversion preserved, and you should be able to boot the VM to check why the failure occurred.

There are a few reasons why the conversion can fail at 99% while reconfiguring the OS. It could be because the disk path has changed, kernel modules are missing, or GRUB cannot find its configuration.

Originally my XEN conversions were failing at 99% with FAILED: An error occurred during the conversion: ‘GrubInstaller::InstallGrub: Failed to read GRUB configuration from /mnt/p2v-src-root/boot/grub/menu.lst’. This problem is related to the fact that XEN instances do not have /boot/grub/menu.lst in place.

To fix this or any other bootloader issue, grab your favorite Linux rescue disk and boot the VM from it. Since I was converting SLES, I used the SLES 10 SP2 boot CD and booted into “Rescue System”. Alternatively, you can attach the converted vmdk to an existing Linux VM.

Once booted into the rescue system, check the present disks with “fdisk -l”. Most likely, your devices now show up as /dev/sdaX, since the disk controller was changed to VMware LSI Logic Parallel.

Mount your new /dev/sda2 partition as /mnt, add /dev, and chroot into it.

# mount /dev/sda2 /mnt
# mount --bind /dev /mnt/dev
# cd /mnt
# chroot .

Once chrooted into your old environment, fix /etc/fstab to use /dev/sdaX for your boot and swap partitions, instead of whatever paths are there (my XEN instances were booting from /dev/xvda2).
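As an illustration, a fixed /etc/fstab might look like the sketch below; the device names and filesystem types here are assumptions for the example, so match them to what “fdisk -l” showed on your VM:

```
/dev/sda1    swap    swap    defaults          0 0
/dev/sda2    /       ext3    acl,user_xattr    1 1
```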

The next step is to make sure that the kernel modules required for VMware virtual machine support are loaded from the ramdisk. If the piix and mptspi modules are missing, you will get a “Waiting on /dev/sda2 device to appear…” message at boot.

On SLES, the place to define the ramdisk kernel modules is the /etc/sysconfig/kernel file.

The following modules should be present (remove unused stuff like “xenblk”):
INITRD_MODULES="piix mptspi processor thermal fan jbd ext3 edd"

Once the INITRD_MODULES line is fixed, run “mkinitrd” to re-create the ramdisk.

# mkinitrd

In case the bootloader’s /boot/grub/menu.lst file is missing, you can specify the kernel and initrd parameters at boot time (use TAB to complete the filenames for your smp kernel).

grub> kernel /boot/vmlinuz-2.6.16.60-0.21-smp root=/dev/sda2
grub> initrd /boot/initrd-2.6.16.60-0.21-smp
grub> boot

Alternatively, use the menu.lst file below.

###YaST update: removed default
default 0
timeout 8
##YaST – generic_mbr
gfxmenu (hd0,1)/boot/message
##YaST – activate

###Don’t change this comment – YaST2 identifier: Original name: linux###
title SUSE Linux Enterprise Server 10 SP2
kernel (hd0,1)/boot/vmlinuz-2.6.16.60-0.21-smp root=/dev/sda2 repair=1 resume=/dev/sda1 splash=silent showopts vga=0x314
initrd (hd0,1)/boot/initrd-2.6.16.60-0.21-smp

###Don’t change this comment – YaST2 identifier: Original name: failsafe###
title Failsafe — SUSE Linux Enterprise Server 10 SP2
kernel (hd0,1)/boot/vmlinuz-2.6.16.60-0.21-smp root=/dev/sda2 showopts ide=nodma apm=off acpi=off noresume edd=off 3 vga=normal
initrd (hd0,1)/boot/initrd-2.6.16.60-0.21-smp

The VM should be able to boot now.

Check the 30-net_persistent_names.rules file in the /etc/udev/rules.d directory for any network adapter changes that might need to be made. It is likely that udev detected the network hardware change and added an eth1 entry to this file. Remove the duplicate entries, leaving only the last one, and change eth1 to eth0.
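As a sketch of that cleanup (the file contents and MAC addresses below are made up for illustration; check your actual rules file first), the following keeps only the last entry and renames eth1 back to eth0, writing to a new file so you can inspect the result before moving it into place:

```shell
# Work on a copy of the rules file; the contents below are a made-up example
# of what udev writes after the hardware change (old NIC first, VMware NIC last).
cat > /tmp/30-net_persistent_names.rules <<'EOF'
SUBSYSTEM=="net", ACTION=="add", SYSFS{address}=="00:16:3e:aa:bb:cc", IMPORT="/lib/udev/rename_netiface %k eth0"
SUBSYSTEM=="net", ACTION=="add", SYSFS{address}=="00:50:56:11:22:33", IMPORT="/lib/udev/rename_netiface %k eth1"
EOF

# Keep only the last entry (the VMware NIC) and rename eth1 back to eth0
tail -n 1 /tmp/30-net_persistent_names.rules | sed 's/eth1/eth0/' \
    > /tmp/30-net_persistent_names.rules.fixed
cat /tmp/30-net_persistent_names.rules.fixed
```

Once the output looks right, move the fixed file over /etc/udev/rules.d/30-net_persistent_names.rules and reboot.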

Don’t forget to install VMware Tools once the VM is up and running.

Written by vmsysadmin

February 10, 2012 at 6:06 am

Posted in vSphere

Using Perc 5i with ESXi 5

The Perc 5i is Dell’s rebranded LSI SAS1078 RAID on Chip (ROC). You can use LSI firmware on it, since it is essentially the same card as the MegaRAID SAS 8480E.

Installing MegaCli and LSI SIM adapter in ESXi 5

Go to http://www.lsi.com => “Support” => “Support Downloads by Product” and search for 8480E. On the 8480E page, search for “MegaCLI”. Download the MegaCli package and copy the executable from its VMware folder to your host’s persistent storage. Use your local vmfs datastore; do not use the root partition, which uses tmpfs to minimize writes to the flash media, so the file would disappear after a reboot.

Get the firmware from the same page (search for “Firmware”). 7.0.1-0083 was current as of Dec. 08 2011. Extract mr1068fw.rom to your host’s persistent storage.

Search http://www.lsi.com for “SMIS Provider” and download “SAS MegaRAID VMWare SMIS Provider VIB (Certified) for ESXi 5.0”, extract the vib file to your host’s persistent storage.

Install the vib from SMIS Provider package:

# esxcli software vib install -v ./vmware-esx-provider-LSIProvider.vib

Try to run MegaCli:

# ./MegaCli -AdpAllInfo -aALL | more

If it complains about libstorelib.so missing, you need to copy it from an ESX 4 host and place it in the same directory as MegaCli. See http://communities.vmware.com/thread/330535 for details. Alternatively, you can get it here: http://db.tt/aEYFL6CQ.

Shut down all VMs on the host and flush the controller’s firmware:

# ./MegaCli -adpfwflash -f mr1068fw.rom -a0

Adapter 0: PERC 5/i Adapter
Vendor ID: 0x1028, Device ID: 0x0015

Package version on the controller: 7.0.1-0075
Package version of the image file: 7.0.1-0083
Download Completed.
Flashing image to adapter…
Adapter 0: Flash Completed.

Exit Code: 0x00

Reboot the host.

After the reboot, you should see the LSIProvider vib is installed:

# esxcli software vib list | grep -i lsi
LSIProvider 500.04.V0.24-261033 LSI VMwareAccepted 2011-11-30

From the 8480E product page, you can download the “MegaRAID Storage Manager” for your preferred OS to connect to the SIM adapter and manage the controller. The version I tried was “Windows – 4.6”, build 11.06.00-03 from Aug 11, 2011. It failed to discover the adapter.

I then tried the MSM from MegaRAID SAS 9285-8e page (Windows – 5.0 – 10M12, Version: 9.00-01 from Mar 11, 2011), and it was able to discover and connect to the controller. The Storage Management software is rather slow, buggy, and crash-prone on Windows 7, but I was able to create a logical drive and change the caching policies on the existing VDs.

MSM uses TCP port 5989, so be sure to allow that in your ESXi firewall (just enable CIM secure server).
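From the ESXi shell this can be done with esxcli. The ruleset name CIMHttpsServer is what I see on ESXi 5, but treat it as an assumption and confirm it in the ruleset list on your host first:

```
# esxcli network firewall ruleset list | grep -i cim
# esxcli network firewall ruleset set --ruleset-id CIMHttpsServer --enabled true
```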

If you have a problem discovering the ESXi host in MSM, watch http://www.youtube.com/watch?feature=player_detailpage&v=mEBwt6Q_diU#t=479s. You need to stop the local discovery, then go to “Configure Host” and select “Display all the systems in the network of local server”.

Written by vmsysadmin

December 9, 2011 at 9:58 pm

Posted in vSphere

Using zfs ACLs to protect CIFS shares on OpenSolaris

So you have created your cifs share and joined the AD domain. Now you want your domain users to have write access to the share.

This is what worked for me:

I’ve created smbusers group on Solaris:

# groupadd smbusers

Set up idmap to map your domain users to the smbusers unix group, and map the administrator user to the root user (so administrator has full control). I’ve also added the line to map existing windows users to unix users; otherwise the system will use ephemeral UIDs.

idmap add winuser:*@vmsysadmin.com unixuser:*
idmap add winuser:administrator@vmsysadmin.com unixuser:root
idmap add "wingroup:Domain Users@vmsysadmin.com" unixgroup:smbusers

Now nuke the permissions on the cifs share; we will create our own:
# chmod A- /spool/cifs1

# ls -vd /spool/cifs1
drwxr-xr-x 2 root root 2 Jan 18 11:37 /spool/cifs1
0:owner@::deny
1:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
/append_data/write_xattr/execute/write_attributes/write_acl
/write_owner:allow
2:group@:add_file/write_data/add_subdirectory/append_data:deny
3:group@:list_directory/read_data/execute:allow
4:everyone@:add_file/write_data/add_subdirectory/append_data/write_xattr
/write_attributes/write_acl/write_owner:deny
5:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
/read_acl/synchronize:allow

Note that root (and the domain administrator) has full access, but your smbusers do not have permission to write or read the directories created by the domain admin. If you want everyone to read the directories and files created by administrator, change the #5 ACL entry to propagate the permissions to the lower levels:

# chmod A5=everyone@:list_directory/read_data/read_xattr/execute/read_attributes/read_acl/synchronize:file_inherit/dir_inherit:allow /spool/cifs1

Now your users are able to read everything in /spool/cifs1. If you want your users to create and delete each other’s files, simply change the group ownership and allow group write:

# chgrp smbusers /spool/cifs1
# chmod g+w /spool/cifs1

I’d like to have more precise permission control, though: users can delete only their own files but can read everything, and administrator can delete everything.

So we will not allow a simple group write, but instead will use ACLs for finer control.

Clear existing extended permissions
# chmod A- /spool/cifs1

# ls -vd /spool/cifs1
drwxr-xr-x 3 root root 4 Jan 18 13:40 /spool/cifs1
0:owner@::deny
1:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
/append_data/write_xattr/execute/write_attributes/write_acl
/write_owner:allow
2:group@:add_file/write_data/add_subdirectory/append_data:deny
3:group@:list_directory/read_data/execute:allow
4:everyone@:add_file/write_data/add_subdirectory/append_data/write_xattr
/write_attributes/write_acl/write_owner:deny
5:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
/read_acl/synchronize:allow

Allow owner to create/delete objects and propagate the inheritance of owner’s ACLs
# chmod A1=owner@:list_directory/read_data/add_file/write_data/add_subdirectory/append_data/write_xattr/execute/delete_child/write_attributes/delete/write_acl/write_owner:file_inherit/dir_inherit:allow /spool/cifs1

Allow smbusers group (mapped to Domain Users) to write to the top cifs share.
# chmod A2+group:smbusers:add_file/write_data/add_subdirectory/append_data:allow /spool/cifs1

Deny smbusers permissions to delete other users’ data
# chmod A2+group:smbusers:delete_child/delete:file_inherit/dir_inherit:deny /spool/cifs1

Allow everyone to read everything.
# chmod A7=everyone@:list_directory/read_data/read_xattr/execute/read_attributes/read_acl/synchronize:file_inherit/dir_inherit:allow /spool/cifs1

Your directory should look like this:
# ls -vd /spool/cifs1
drwxr-xr-x+ 2 root root 2 Jan 18 14:13 /spool/cifs1
0:owner@::deny
1:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
/append_data/write_xattr/execute/delete_child/write_attributes
/delete/write_acl/write_owner:file_inherit/dir_inherit:allow
2:group:smbusers:delete_child/delete:file_inherit/dir_inherit:deny
3:group:smbusers:add_file/write_data/add_subdirectory/append_data:allow
4:group@:add_file/write_data/add_subdirectory/append_data:deny
5:group@:list_directory/read_data/execute:allow
6:everyone@:add_file/write_data/add_subdirectory/append_data/write_xattr
/write_attributes/write_acl/write_owner:deny
7:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
/read_acl/synchronize:file_inherit/dir_inherit:allow

That’s it! Now your users can write, but cannot delete each other’s data. Domain Administrator login has full access to the share and can delete any user’s data.

Written by vmsysadmin

January 18, 2009 at 9:02 pm

Posted in OpenSolaris

Tagged with , , ,

CIFS in OpenSolaris – Domain mode, idmap, and ACLs

I’ve created a share on OpenSolaris snv_104:

# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=name=cifs1 spool/cifs1

I’ve followed the instructions on setting up CIFS on OpenSolaris in Domain mode from http://blogs.sun.com/timthomas/entry/configuring_the_opensolaris_cifs_server to join the domain.

Also a good blog entry on the subject: http://jmlittle.blogspot.com/2008/03/step-by-step-cifs-server-setup-with.html

I also wanted to have my domain users mapped to the unix accounts on the Solaris side. What I ended up with:
1) created a unix group “smbusers”
2) created unix accounts for domain users I want to grant access to cifs share and added them to the group smbusers
3) configured idmap

# idmap add 'winuser:*' 'unixuser:*'
# idmap add 'wingroup:Domain Users' 'unixgroup:smbusers'

You must restart smb and idmap for the settings to take effect:

# svcadm restart smb/server; svcadm restart idmap

Now when the domain user creates a file on the share, the file is created with correct unix user/group attributes, mapped by idmap.

If you need to figure out what group your domain users are in, you can use “idmap dump -n” and grep for the numbers from “ls -l”. Once the mapping is set and the services restarted, you should see the correct user ids in the directory listing:

# cd /spool/cifs1
# mkdir test
# chgrp smbusers test
# chmod g+w test
# ls -ld test/New\ Folder/
d---------+  2 user01   smbusers        2 Jan 17 21:53 test/New Folder/

Now I want to set up the right zfs ACLs to prevent other domain users in the smbusers group from deleting your files. This, however, appears to be more difficult than I thought. No matter what ACLs I set on the directory created by one user, the other smb user was able to remove it. If someone has made it work, please let me know.

Update: after a few hours in zfs ACL land, I’ve figured it out. See my next post: https://vmsysadmin.wordpress.com/2009/01/18/using-zfs-acls-to-protect-cifs-shares-on-opensolaris/

Written by vmsysadmin

January 18, 2009 at 4:44 am

Posted in OpenSolaris
