Power off the VM.
Modify the VMX file to add keyboard.typematicMinDelay = "2000000" (sets the keyboard repeat delay to 2 seconds; the value is in microseconds).
(Either edit the file with a text editor, or use Edit Settings > Options > General > Configuration Parameters > Add Row.)
Power on the VM.
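If you prefer not to edit the VMX by hand, the same parameter can be pushed from PowerCLI; a minimal sketch, assuming a reasonably recent PowerCLI session already connected with Connect-VIServer and a hypothetical VM name:
# Hypothetical VM name; the VM should be powered off, as above
$vm = Get-VM -Name "TestVM01"
# Add the VMX parameter (use -Force to overwrite an existing entry of the same name)
New-AdvancedSetting -Entity $vm -Name "keyboard.typematicMinDelay" -Value "2000000" -Confirm:$false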
Tuesday, November 15, 2011
Tuesday, November 8, 2011
"NMI - Undetermined Source" when patching esxi 38XXXX to 50XXXXX on HP DL380 G6
Recieved an NMI - Undetermined Source on 2 hosts while patching our ESXi Servers.
This affected 2 of our DL380 G6s with Xeon 5550 CPUs.
An NMI is a non-maskable interrupt, so when the first host failed I immediately assumed it was a hardware issue;
when the second host failed it seemed like too much of a coincidence.
As the hardware was all on the HCL for ESXi, I knew it wasn't a case of unsupported hardware.
I noticed the BIOS firmware on both 380 G6s was the same very out-of-date (late 2009) version. I downloaded the current version (06/07/2011), flashed the BIOS, and this resolved the problem.
Friday, November 4, 2011
PowerShell script to find VM disk status
# List every virtual hard disk on every VM along with its disk mode (persistent, independent, etc.)
foreach ($vm in Get-VM) {
  $vm.ExtensionData.Config.Hardware.Device |
    Where-Object {$_.DeviceInfo.Label -like "Hard disk*"} |
    Select-Object @{N="VMname"; E={$vm.Name}},
                  @{N="HD";     E={$_.DeviceInfo.Label}},
                  @{N="Mode";   E={$_.Backing.DiskMode}}
}
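A hedged variation on the same loop that reports only disks whose mode is not plain persistent (e.g. independent-persistent or independent-nonpersistent), which is usually what you are hunting for when snapshots or backups misbehave:
foreach ($vm in Get-VM) {
  $vm.ExtensionData.Config.Hardware.Device |
    Where-Object {$_.DeviceInfo.Label -like "Hard disk*" -and $_.Backing.DiskMode -ne "persistent"} |
    Select-Object @{N="VMname"; E={$vm.Name}},
                  @{N="HD";     E={$_.DeviceInfo.Label}},
                  @{N="Mode";   E={$_.Backing.DiskMode}}
}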
Monday, October 3, 2011
Virtualising ESX on ESX
To avoid confusion and for brevity I will use the following terms in this document:
pESX - physical ESX
vESX - virtual ESX
A very basic network / logical layout of the pESX/vESX infrastructure
DRAFT
I decided to create a vESX cluster within a pESX server for the purpose of going over material for the VCAP. I needed to be able to break things without worrying about impacting the production environment, and in any case change control and making changes purely to test configurations for VCAP study material are mutually exclusive.
I had read a few articles on the subject previously but never bothered to absorb any of the information. I had a few hours free and a pESX server sitting there doing nothing, so I decided to see what would happen if I tried to install ESX on ESX, and whether I could get a fully working ESX cluster without referring to somebody else's work.
One very important consideration was that the vESX traffic could not touch the production network.
I will briefly describe the steps I took to build and configure the environment and the challenges I encountered.
The architecture:
Physical
HP BL460 G1
4 x 3GHz CPUs
32 GB RAM
4 x 146GB HDDs (2 x RAID 1 volumes, 131GB and 143GB usable storage)
3 x standard switches mapped to no physical uplinks
Virtual
2 vESX servers (2 pseudo-physical CPUs, 2GB RAM)
2 vESXi servers (2 pseudo-physical CPUs, 2GB RAM)
1 OpenFiler with 140GB of storage configured over 2 LUNs
1 W2K8 R2 server acting as DC and VC
1 router (Zeroshell)
IP Addressing
ESX01
SC - 192.168.100.1
VmotionFTvmKernel - 192.168.100.5
vswif1 - 192.168.150.21
vmk0 - 192.168.150.1
ESX02
SC - 192.168.100.2
VmotionFTvmKernel - 192.168.100.5
vswif1 - 192.168.150.21
vmk0 - 192.168.150.1
ESXi03
VMK0 - 192.168.100.3
VMK1 - 192.168.150.3
ESXi04
VMK0 - 192.168.100.4
VMK1 - 192.168.150.4
TLOpenFiler
Management IP - 192.168.100.11
iSCSI teamed IF - 192.168.150.13
TLFWrouter
192.168.100.254
192.168.150.254
192.168.200.254
TLDCVS01
192.168.200.100
Networking
In keeping with best practice I decided to split out the networking by traffic type.
On the pESX host I created 3 standard switches and did not bind them to any physical adapter.
They were labelled as follows:
"TL ESX vmK - isolated" 192.168.100.0/24 -> VMkernel and management traffic
"TL ESX iSCSI - isolated" 192.168.150.0/24 -> iSCSI traffic
"TL ESX VMs - isolated" 192.168.200.0/24 -> virtual machine traffic
Promiscuous mode must be enabled on the virtual switches on the pESX host. The reason for accepting this security risk is that a nested ESX host would otherwise only see the MAC addresses of virtual machines placed on its own virtual network; with promiscuous mode enabled, all vESX servers see all MAC addresses.
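For reference, promiscuous mode can also be switched on from PowerCLI rather than through the vSphere client; a rough sketch, assuming PowerCLI 5.1 or later (for Get-SecurityPolicy/Set-SecurityPolicy) and a placeholder pESX host name:
# Placeholder pESX host name
$pesx = Get-VMHost -Name "pesx01.example.com"
# Enable promiscuous mode on the three isolated standard switches
Get-VirtualSwitch -VMHost $pesx -Standard |
  Where-Object { $_.Name -like "TL ESX*" } |
  Get-SecurityPolicy |
  Set-SecurityPolicy -AllowPromiscuous $true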
Routing
As simple as putting a Zeroshell router on the pESX host and giving it a NIC + an IP on each subnet.
VLANs
Not yet
Storage layout
Storage available on the pESX host was 4 x 146GB HDDs split into 2 volumes. ESXi was installed on one of the volumes, so there was 132GB available on one volume (Datastore01) and 140GB (esx07localvol1) available on the second.
In order to separate storage by function (i.e. keep the virtual VMs on separate physical spindles from the vESX and management/infrastructure systems), the vESX servers, DC/VC, OpenFiler and router were all installed on Datastore01.
To present storage to the OpenFiler I opened VC, pointed it at the pESX server, and added a number of disks to the "TLOpenfiler" virtual machine. (The only reason for adding several smaller disks rather than one big disk was to be able to build a LUN within OpenFiler out of multiple disks, as I felt this was more representative of a near-production environment.)
Openfiler configuration
The OpenFiler was installed as a VM directly on the pESX host, storage was allocated to it as above, and the VM was powered on and configured with /boot, / (root) and swap all on the same disk (we don't care about performance in this setup); defaults were followed for the remainder.
The OpenFiler was initially configured with one interface, which was on the "TL ESX vmK - isolated" network.
I added two more interfaces on the "TL ESX iSCSI - isolated" network and created a bonded interface, assigning it the 192.168.150.13 address.
Create the storage as per http://greg.porter.name/wiki/HowTo:Openfiler
Create the LUN
Create the iSCSI target
vESX server build
As the ESX service console is simply an instance of Red Hat Linux, I created the new virtual machine on the pESX host, set the OS type to Red Hat Enterprise 64-bit, the NIC type to E1000 and the disk controller to parallel SAS.
After the VM had been created I mounted the ESX media and started the VM, following the defaults to get a good install.
ESXi does not have a service console; instead it uses BusyBox to issue commands to the VMkernel. In this case I took a chance and created the vESXi machine as a generic 64-bit version of Linux; again the NIC type was set to E1000 and the disk controller to parallel SAS.
The ESXi image was mounted and the installation was kicked off.
vESX Networking
On the pESX server, for each vESX VM, add vNICs mapped to the uplink-less vSwitches created earlier on the pESX server; this gives the vESX server its "physical" adapters. In each case I added 2
NICs for each traffic type to more accurately reflect real-world configurations.
Attaching to iSCSI storage
On each vESX server (connecting to each one with the vSphere client), the following steps must be performed in order to access the iSCSI LUNs (which have already been created):
On the vESX host, add a service console port on the iSCSI vSwitch and assign an IP, subnet mask, etc.
Open the iSCSI (3260) service console firewall port under the security configuration.
In the vSphere client attached to the vESX server, go to Configuration, Storage Adapters, iSCSI, set it to Configured (Enabled) and finally click OK.
Now ESX is ready to attach to the iSCSI LUN created on the OpenFiler.
On the iSCSI software initiator, select the Static Discovery tab, enter the IP and port of the OpenFiler on the storage subnet, and enter the target name copied from the OpenFiler config page.
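The same enable-and-add-target steps can be scripted from PowerCLI instead of clicking through each host; a minimal sketch with placeholder names (the IQN shown is made up and must be replaced with the target name from the OpenFiler):
# Placeholder vESX host name
$vesx = Get-VMHost -Name "esx01.example.com"
# Enable the software iSCSI initiator
Get-VMHostStorage -VMHost $vesx | Set-VMHostStorage -SoftwareIScsiEnabled $true
# Add the OpenFiler as a static target on the software iSCSI HBA
$hba = Get-VMHostHba -VMHost $vesx -Type IScsi | Where-Object { $_.Model -like "*Software*" }
New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.150.13" -Port 3260 -Type Static -IScsiName "iqn.2006-01.com.openfiler:tsn.example"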
MPIOing the storage
To enable redundant multipathing (MPIO) for the storage, I added a second VMkernel port on the vSwitch,
named it "TL ESX iSCSI Pathb - Isolated" and gave it an IP on the 150 subnet.
To enable the redundancy, I made NIC1 the active adapter and NIC2 the standby adapter on the "TL ESX iSCSI - Isolated" VMkernel port,
and reversed the NIC order on "TL ESX iSCSI Pathb - Isolated".
Then on the ESX host I ran the following commands to bind both VMkernel ports to the software iSCSI adapter:
esxcli swiscsi nic add -n vmk0 -d vmhba33
esxcli swiscsi nic add -n vmk1 -d vmhba33
RDM
I also created an RDM for FileServer01. To do this I added a virtual machine port group to the iSCSI vSwitch, naming it "TL ESX iSCSI RDM - Isolated", added a vNIC bound to that network to the VM, installed the iSCSI initiator and accessed the TLiSCSI1 LUN... except I couldn't. I found that W2K8 does not seem to be compatible with the version of OpenFiler that I installed (2.3). I chose not to use CHAP authentication on the OpenFiler (in the real world I would not use CHAP either; instead I would rely on the non-routed, segregated iSCSI network and switch port security - CHAP is cleartext and very easy to intercept).
Issues
Power on of a VM fails with an error about running ESX as a virtual machine.
Cannot virtualise vCenter on a vESX (64-bit problem).
vCenter install fails saying you can't install on this version of 64-bit Windows
(this is actually an issue with ADAM being installed on a DC and trying to listen on 389).
Cannot join ESXi hosts to an HA cluster that already contains ESX hosts
(this is to do with the VMkernel and service console ports having different names and types; create das.allownetwork entries on the cluster).
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006541
Cannot start the vMA within the vESX infrastructure
(a 64-bit OS cannot start in a vESX).
Tuesday, September 27, 2011
VCAP
http://www.rayheffer.com/1132/vmware-vcap-dca-study-resources/#more-1132
http://damiankarlson.com/vcap-dca4-exam/
http://kendrickcoleman.com/index.php?/Tech-Blog/vcap-datacenter-administration-exam-landing-page-vdca410.html
http://www.vmwarevideos.com/vcap
Thursday, September 15, 2011
Batch file to identify missing computers when they come back online
We have an estate of approximately 2000 laptops; these would sometimes go missing for whatever reason.
They could be under a developer's desk, they may have been put in a cupboard and forgotten about, or given to a user who took the machine home and "forgot" about it. I decided to create a simple batch file that would ping a list of computer names and, when it received a reply, send a mail to a user or a distribution list. Here is the content of the batch file
(obviously blat is required for this):
@echo off
for /f "tokens=1" %%d in (computerlist.txt) do call :Pingthelost %%d
goto :eof

:Pingthelost
set state=found
ping -n 1 %1 >nul
if errorlevel 1 set state=dead
echo %1 is %state%
if %state%==found call "c:\windows\system32\blat.exe" "c:\null.txt" -t Darragh.mcavinue@work.com -s "%1 is a missing computer which has appeared on the network"
goto :eof
Simple!
UPDATE
We had what looked like a few false positives; the assets manager said that although he was being alerted that some computers were appearing on the network, he was still unable to reach them.
I pinged one of the suspected false positives and got a reply from the router with a TTL expired. Any computer on our remote access subnet that was being pinged was generating this error, which I suspect was down to the firewall policy for machines on that subnet.
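For what it's worth, the same idea translates to a short PowerShell sketch (addresses and paths are placeholders; the StatusCode check relies on Test-Connection returning Win32_PingStatus objects, i.e. Windows PowerShell rather than PowerShell 7). Checking StatusCode explicitly guards against counting anything other than a genuine echo reply - such as the TTL-expired router replies described above - as "found":
$computers = Get-Content C:\scripts\computerlist.txt
foreach ($name in $computers) {
    # StatusCode 0 means a genuine echo reply; a TTL-expired reply reports a different code
    $reply = Test-Connection -ComputerName $name -Count 1 -ErrorAction SilentlyContinue
    if ($reply -and $reply.StatusCode -eq 0) {
        Send-MailMessage -To "assets@example.com" -From "ping-watch@example.com" -SmtpServer "smtp.example.com" `
            -Subject "$name is a missing computer which has appeared on the network"
    }
}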
Wednesday, September 14, 2011
Beep ping batch file
This is a very simple batch file which will alert you when a system is up or down, based on values passed to it, by playing a sound. I created it because a colleague told me his MacBook did this by default, and it seemed like a handy tool if you were waiting for a server to go down or come up.
I called the batch file bping.bat and this is an example of a command line to run the tool.
I also added the option of an inter-packet delay so that I could use it to test high-latency WAN links for dropped packets:
bping.bat 192.168.100.100 1000 up
This will ping 192.168.100.100 with an inter-packet delay of 1000 milliseconds and will play a sound when the device responds to ping.
@echo off
cls
echo you are pinging %1 with a delay of %2 milliseconds between pings
echo and will be notified if packets are %3
pause
cls
if /I "%3"=="up" goto :PingUP
if /I "%3"=="down" goto :PingDown

:PingUP
ping %1 -n 1 | find "TTL=" >nul
set pingresult=%errorlevel%
echo on
ping %1 -n 1
echo off
sleep -m %2
if %pingresult% EQU 1 goto :PingUP
start "" "C:\Program Files\Combined Community Codec Pack\MPC\Mplayer.exe" /play /close "C:\Windows\Media\Notify.wav"
goto :PingUP

:PingDown
ping %1 -n 1 | find "TTL=" >nul
set pingresult=%errorlevel%
echo on
ping %1 -n 1
echo off
sleep -m %2
if %pingresult% EQU 0 goto :PingDown
start "" "C:\Program Files\Combined Community Codec Pack\MPC\Mplayer.exe" /play /close "C:\Windows\Media\Windows Exclamation.wav"
goto :PingDown
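A rough PowerShell equivalent of bping.bat, for machines where batch plus sleep.exe isn't available; it takes the same three values (target, delay in milliseconds, "up" or "down") and beeps through the console instead of launching a media player - a sketch only:
param(
    [string]$Target,
    [int]$DelayMs = 1000,
    [ValidateSet("up","down")][string]$NotifyOn = "up"
)
while ($true) {
    $alive = Test-Connection -ComputerName $Target -Count 1 -Quiet
    Write-Host "$(Get-Date -Format T)  $Target is $(if ($alive) {'up'} else {'down'})"
    # Beep when the state we are waiting for is seen
    if (($alive -and $NotifyOn -eq "up") -or (-not $alive -and $NotifyOn -eq "down")) {
        [console]::Beep(1000, 300)
    }
    Start-Sleep -Milliseconds $DelayMs
}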
Thursday, September 1, 2011
V2P - for when you need to make the virtual - physical
Virtual to physical
Ref: http://www.vmware.com/support/v2p/doc/V2P_TechNote.pdf (this has been tested on W2K3 SP1 Enterprise Edition with 2 partitions)
Requirements:
HP Smartstart CD
HP PSP
Sysprep (http://download.microsoft.com/download/a/2/6/a267c18e-32d7-4e59-81e7-816c3b23cc29/WindowsServer2003-KB892778-SP1-DeployTools-x86-ENU.cab)
Acronis "True Image" (IEDUBESXHMLUNSW/Acronisboot/boot.iso)
Storage drivers for the RAID controller card in the target physical server
Sufficient storage to capture an image of the virtual machine
Step 1 – Preparing the target physical machine
• Create target partitions to match the virtual machine: boot from SmartStart and create partitions identically sized to the source virtual machine.
• Ensure the server is available on the network.
• Boot the server into Acronis True Image Enterprise Server.
Step 2 – preparing the virtual machine
(Take a VM snapshot before proceeding.)
• Create a directory on the C drive called c:\drivers\storage and copy the extracted storage drivers to this directory.
• Create a directory on the C drive called c:\sysprep and extract the sysprep files to this directory.
• Remove the VMware Tools (Add/Remove Programs - remove VMware Tools).
Step 3 – Performing a sysprep
• Run c:\sysprep\setupmgr: "Create New" → "Sysprep Setup" → select the relevant OS → "Yes, ...". Enter the correct info in the following screens, including the product key. Under "Additional Commands" enter the command "c:\sysprep\sysprep -clean", then Next → Finish → OK.
• Add a new path to the storage driver locations: open HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion and append ";%systemdrive%\drivers\storage" to the DevicePath entry (see the PowerShell sketch after this step).
• Modify the c:\sysprep\Sysprep.inf file to include the storage drivers Open the sysprep.inf file add the following section to the bottom of the file
[SysprepMassStorage]
PCI\VEN_103C&DEV_3230&SUBSYS_3234103C = "c:\drivers\storage\hpcissx2.inf", "\", "Smart Array SAS/SATA Windows Driver Diskette", "txtsetup.oem"
PCI\VEN_103C&DEV_3230&SUBSYS_3235103C = "c:\drivers\storage\hpcissx2.inf", "\", "Smart Array SAS/SATA Windows Driver Diskette", "txtsetup.oem"
This config is applicable only if the destination system has a P400 RAID controller; if the system has a different controller you will obviously have to replace the hardware ID and the driver name. Save the sysprep.inf file.
P410i Controller uses the same driver set but requires the following line to be added to the bottom
of the sysprep.inf file
[SysprepMassStorage]
PCI\VEN_103C&DEV_323A&SUBSYS_3245103C = "c:\drivers\storage\hpcissx2.inf", "\", "Smart Array SAS/SATA Windows Driver Diskette", "txtsetup.oem"
5300 / 6400 controllers use the following config in the sysprep.inf file (obviously using the relevant 5300/6400 drivers)
[SysprepMassStorage]
PCI\VEN_0E11&DEV_B060 = "C:\Drivers\Storage\cpqcissm.inf", "\", "Smart Array 5x and 6x Driver Diskette", "\TXTSETUP.OEM"
PCI\VEN_0E11&DEV_B178 = "C:\Drivers\Storage\cpqcissm.inf", "\", "Smart Array 5x and 6x Driver Diskette", "\TXTSETUP.OEM"
PCI\VEN_0E11&DEV_0046 = "C:\Drivers\Storage\cpqcissm.inf", "\", "Smart Array 5x and 6x Driver Diskette", "\TXTSETUP.OEM"
(If you haven't done it already, take a snapshot.) Open a command prompt on the virtual machine and run the following command: c:\sysprep\sysprep -pnp -forceshutdown, then OK → "Reseal" → OK.
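The DevicePath registry edit in step 3 can also be done from PowerShell if it is available on the source VM (this approach keeps the value's REG_EXPAND_SZ type, which a plain Set-ItemProperty would not); a hedged sketch - otherwise just make the change in regedit:
# Open the key writable and read DevicePath without expanding %SystemRoot% etc.
$key = [Microsoft.Win32.Registry]::LocalMachine.OpenSubKey("SOFTWARE\Microsoft\Windows\CurrentVersion", $true)
$current = $key.GetValue("DevicePath", "", [Microsoft.Win32.RegistryValueOptions]::DoNotExpandEnvironmentNames)
# Append the extra driver location if it is not already present
if ($current -notlike "*%SystemDrive%\drivers\storage*") {
    $key.SetValue("DevicePath", "$current;%systemdrive%\drivers\storage", [Microsoft.Win32.RegistryValueKind]::ExpandString)
}
$key.Close()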
Step 4 – Moving the image
Once sysprep has done its magic and the VM is powered down, we need to take an image using Acronis.
- Edit the VM settings to point the CD drive to IEDUBESXHMLUNSW/Acronisboot/boot.iso (make sure the device is set to "connect at power on").
- Also in the VM settings, click the "Options" tab, "Boot Options", and set the delay to 10000 ms.
- Power on the virtual machine from within the console (you need to see the VM boot-up sequence because you have to force it to boot from CD).
- In the Acronis startup manager, select "Acronis True Image Echo Enterprise Server (full version)".
- Once the application has started up, select "Backup" → "My Computer", select all of the available disks and click "Next". Do not exclude anything.
- Select the network location where you want to save the image (e.g. \\iedubsrv56\servername$\servername.tib).
- Next → Next → Next → Proceed
Step 5 – deploying the image to the physical machine
Assuming step 1 has been completed, select Recovery and browse to the location where you captured the image.
Click Next, "Restore disks or partitions". Select one disk at a time, putting each disk on the corresponding physical storage. If there is more than 1 disk, you need to select "Yes, I want to..." at the end of the first disk import; repeat until all disks are selected and imported. Next → Next → Proceed.
Remove the CD and boot up the server, then follow the installation procedure (although the server is supposed to join the domain, it will most likely fail as the correct drivers will not be installed). Once the server has booted up, run the latest PSP and install the drivers. Reboot, join the domain, reserve the IP in DHCP and assign it statically.
ESX41i Scripted deployment
ESX41i deployment
Useful Pages
http://www.van-lieshout.com/2009/10/uda2-0-test-drive/
http://www.rtfm-ed.co.uk/docs/vmwdocs/uda20-beta.pdf
http://www.virtuallyghetto.com/2010/09/automating-esxi-41-kickstart-tips.html
http://www.kendrickcoleman.com/index.php?/Tech-Blog/esxi-41-kickstart-install-wip.html
PXE deployment of ESX41i to hosts
Although there are only 24 ESX hosts in the firm, and deploying them manually would not be a
significant undertaking (the install per machine would take approx 30 minutes), the benefits
in our case of doing the deployment via a scripted PXE install are:
Identical repeatable builds - apply known good working configurations.
Easily cope with hardware failures - failed array controller, CPUs etc; swap out the blade or server and kick off an install.
Perform upgrades of the OS by rolling out a new install rather than patching - point the PXE server at the new OS and deploy. Our ESX servers have been gradually diverging in terms of configuration, and this would be an ideal way to homogenise the builds.
I downloaded and configured a copy of the Ultimate Deployment Appliance from
http://www.ultimatedeployment.org/
I deployed this to a VM within the low-resource-usage pool, started it up and configured
the appliance: gave it an IP address, hostname etc., disabling DHCP as I did not want
the appliance handing out IPs on the server VLAN.
Once the machine was deployed, the rest of the configuration was handled through the web-based interface.
From vSphere I presented the ESX41i ISO over the CD drive, and from the web interface of the UDA I mounted the CD drive.
Then click the OS tab and click New, and enter the following:
Flavor Name: ESX41i
Operating system: VMware ESX...... then click Next.
Browse to the image location and click Finish.
This imports the deployment image source files to the UDA.
Now the UDA has the OS source files and is ready to hand out the base image; indeed we could
use the UDA as a store for unconfigured images in this state.
As we want to automate the deployment there are a number of things we need to do:
we need to secure the appliance and allow the Windows DHCP servers to hand out the PXE host IP and the image name.
I will be allowing all servers on the server VLAN to potentially start up and unconditionally format and deploy ESX, so I have set the appliance to request a password; this is to prevent accidental deployment of ESX41 to Windows boxes.
To deploy the configured image successfully I need to customise it for our infrastructure. This requires
a kickstart script which contains settings for, amongst others, the source for the deployment media, the root password,
disk partitioning, networking configuration, port groups, the vMotion port group etc.
(Some of the following code is plagiarised, specifically the AD-joining part.)
Within the Templates tab, select Advanced and paste the kickstart script in here:
# Location script
# Standard install arguments
vmaccepteula
rootpw password
install url http://[UDA_IPADDR]/esx41i/ESX41i/
autopart --firstdisk --overwritevmfs
network --bootproto=static --device=vmnic0 --ip=[IPADDR] --gateway=10.128.134.229 --nameserver=10.128.100.100 --netmask=255.255.255.0 --hostname=[HOSTNAME].domainname.com --addvmportgroup=0
# We will configure some basic things during the first boot from the commandline before we add the host to vCenter
%firstboot --unsupported --interpreter=busybox
# Add an extra nic to vSwitch0 and a VLAN ID
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -v 26 -p 'Service Console' vSwitch0
# Add Server LAN and Deployment LAN portgroups to vSwitch 0
esxcfg-vswitch -A 'Server LAN' vSwitch0
esxcfg-vswitch -v 0 -p 'Server LAN' vSwitch0
esxcfg-vswitch -A 'win7 Deployment SP' vSwitch0
esxcfg-vswitch -v 29 -p 'win7 Deployment SP' vSwitch0
# Add new vSwitch for VM traffic (vmnic2 and vmnic3)
esxcfg-vswitch -a vSwitch1
# Add NICs to the new vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
# Add iSCSI Service Console and vmotion portgroups to vSwitch 1
esxcfg-vswitch -A 'iSCSI Service Console' vSwitch1
esxcfg-vswitch -v 27 -p 'iSCSI Service Console' vSwitch1
# Assign an IP address to the 'iSCSI Service Console' VMkernel port
esxcfg-vmknic -a -i [iSCSIsVCiP] -n 255.255.255.0 'iSCSI Service Console'
# Add 'RDM Switch' port group to vSwitch 1
esxcfg-vswitch -A 'RDM Switch' vSwitch1
esxcfg-vswitch -v 27 -p 'RDM Switch' vSwitch1
esxcfg-vswitch -A 'iSCSI-Vmotion' vSwitch1
esxcfg-vswitch -v 27 -p 'iSCSI-Vmotion' vSwitch1
# Assign an IP address to the vMotion VMkernel and a VLAN ID to the port group
esxcfg-vmknic -a -i [vMotioniSCSIiP] -n 255.255.255.0 'iSCSI-Vmotion'
# Enable vMotion on the newly created VMkernel vmk0
vim-cmd hostsvc/vmotion/vnic_set vmk0
# Try to Add NFS datastores
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storageswap Storageswap
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storagesw Storagesw
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storagepage Storagepage
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storageprod Storageprod
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storagesrmph Storagesrmph
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storagevol0 Storagevol0
esxcfg-advcfg -s 30 /Net/TcpipHeapSize
esxcfg-advcfg -s 120 /Net/TcpipHeapMax
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 5 /NFS/HeartbeatTimeout
esxcfg-advcfg -s 64 /NFS/MaxVolumes
vim-cmd hostsvc/net/refresh
There were a number of issues with the above script - well, the script itself was fine, but it highlighted a few issues with the coexistence of ESX and ESXi in our organisation.
Once the host was deployed everything worked as expected, except HA. This appears to have been due to a mismatch between the service console and the VMkernel port:
ESXi no longer has a "service console"; it does however have a VMkernel port.
With our ESX setup the service console was transmitting the HA heartbeats, and obviously it was looking for an identically named port on all other hosts to transmit to.
One workaround was to modify the HA settings and add das.allowNetwork0 "NetworkName" and das.allowNetwork1 "NetworkName" entries; the effect of this would have been
to permit other networks to send/receive HA heartbeats.
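For reference, those das.allowNetwork entries can be pushed from PowerCLI rather than typed into the cluster's advanced options by hand; a hedged sketch, assuming a PowerCLI version with New-AdvancedSetting and using placeholder cluster and port group names:
$cluster = Get-Cluster -Name "Production"
# One entry per port group that is allowed to carry HA heartbeats
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.allowNetwork0" -Value "Service Console" -Confirm:$false
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.allowNetwork1" -Value "Management Network" -Confirm:$false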
The above workaround did the job... but once I attempted to reset the HA config by re-enabling HA on the cluster, it failed for all hosts.
The fix in the end turned out to be a lot simpler and tidier. When we initially built the hosts we followed VMware best practice and
segregated the traffic at the port group level by putting the service console and the VMkernel (NFS/iSCSI) traffic on different port groups, but on closer examination
this was segregation in name only, as both port groups were going out the same uplinks and both were on the same VLAN.
I cleaned up the config by putting all traffic out the same port group (still following best practice for ESXi); this resolved the HA issue, as all HA heartbeat traffic was now going over a
port group with the same name.
The script below is the updated version, which also joins the ESX host to the domain and adds just one port group to the vSwitch:
# Stokes Place script
# Standard install arguments
# Add DNS servers and NTP config
#
vmaccepteula
rootpw Pa55worD
install url http://[UDA_IPADDR]/esx41i/HPESX41i/
autopart --firstdisk --overwritevmfs
network --bootproto=static --device=vmnic0 --ip=[IPADDR] --gateway=10.128.134.229 --nameserver=10.128.100.100 --netmask=255.255.255.0 --hostname=[HOSTNAME].Domainname.com --addvmportgroup=0
# We will configure some basic things during the first boot from the commandline before we add the host to vCenter
%firstboot --unsupported --interpreter=busybox
# Add an extra nic to vSwitch0 and a VLAN ID
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -v 26 -p 'Management Network' vSwitch0
# Add Server LAN and Deployment LAN portgroups to vSwitch 0
esxcfg-vswitch -A 'Server LAN' vSwitch0
esxcfg-vswitch -v 0 -p 'Server LAN' vSwitch0
esxcfg-vswitch -A 'GDv5 Deployment SP' vSwitch0
esxcfg-vswitch -v 29 -p 'GDv5 Deployment SP' vSwitch0
# Add new vSwitch for NFS/iSCSI traffic (vmnic2 and vmnic3)
esxcfg-vswitch -a vSwitch1
# Add NICs to the new vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
# Add iSCSI Service Console and vmotion portgroups to vSwitch 1
esxcfg-vswitch -A 'iSCSI Service Console' vSwitch1
esxcfg-vswitch -v 27 -p 'iSCSI Service Console' vSwitch1
# Assign an IP address to the 'iSCSI Service Console' VMkernel port
esxcfg-vmknic -a -i [vMotioniSCSIiP] -n 255.255.255.0 'iSCSI Service Console'
# Add 'RDM Switch' port group to vSwitch 1
esxcfg-vswitch -A 'RDM Switch' vSwitch1
esxcfg-vswitch -v 27 -p 'RDM Switch' vSwitch1
# Enable vMotion on the newly created VMkernel vmk1
vim-cmd hostsvc/vmotion/vnic_set vmk1
# Add DNS servers
vim-cmd hostsvc/net/dns_set --ip-addresses=10.128.100.100,10.128.100.3
# Try to configure NTP
echo restrict default kod nomodify notrap noquery nopeer > /etc/ntp.conf
echo restrict 127.0.0.1 >> /etc/ntp.conf
echo server 10.128.100.100 >> /etc/ntp.conf
echo driftfile /var/lib/ntp/drift >> /etc/ntp.conf
/etc/init.d/ntpd stop
/etc/init.d/ntpd start
# Try to Add NFS datastores
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storageswap Storageswap
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storagesw Storagesw
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storagepage Storagepage
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storageprod Storageprod
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storagesrmph Storagesrmph
esxcfg-nas -a -o 192.168.100.100 -s /vol/Storagevol0 Storagevol0
esxcfg-advcfg -s 30 /Net/TcpipHeapSize
esxcfg-advcfg -s 120 /Net/TcpipHeapMax
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 5 /NFS/HeartbeatTimeout
esxcfg-advcfg -s 64 /NFS/MaxVolumes
# Try to join the Domain
cat > /tmp/joinActiveDirectory.py << JOIN_AD
import sys,re,os,urllib,urllib2,base64
# mob url -- the joinDomain method on the host's Managed Object Browser
# (the url line was missing here; this value is assumed from the referenced virtuallyghetto script)
url = "https://localhost/mob/?moid=ha-adAuthenticationManager&method=joinDomain"
# mob login credentials -- use password = "" for build scripting
username = "root"
password = ""
password = ""
# which domain to join, and associated OU
# e.g.
#   "primp-industries.com"
#   "primp-industries.com/VMware Server OU"
domainname = "Domain.com/ou=esxhosts,ou=servers,ou=OU,dc=domain,dc=com"
# active directory credentials using encoded base64 password
ad_username = "username"
encodedpassword = "XXXXXXXXXXXXXXXXXX"
ad_password = base64.b64decode(encodedpassword)
# Create global variables
global passman,authhandler,opener,req,page,page_content,nonce,headers,cookie,params,e_params
# Code to build opener with HTTP Basic Authentication
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None,url,username,password)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
# Code to capture required page data and cookie required for post back to meet CSRF requirements ###
req = urllib2.Request(url)
page = urllib2.urlopen(req)
page_content= page.read()
# regex to get the vmware-session-nonce value from the hidden form entry
reg = re.compile('name="vmware-session-nonce" type="hidden" value="?([^\s^"]+)"')
nonce = reg.search(page_content).group(1)
# get the page headers to capture the cookie
headers = page.info()
cookie = headers.get("Set-Cookie")
# Code to join the domain
params = {'vmware-session-nonce':nonce,'domainName':domainname,'userName':ad_username,'password':ad_password}
e_params = urllib.urlencode(params)
req = urllib2.Request(url, e_params, headers={"Cookie":cookie})
page = urllib2.urlopen(req).read()
JOIN_AD
#execute python script to Join AD
python /tmp/joinActiveDirectory.py
vim-cmd hostsvc/net/refresh
halt
There are a number of variables in [brackets]; the purpose of these is to pipe values defined in the Subtemplates tab into the kickstart script as it runs.
1. Boot from NIC
2. Request DHCP and PXE
3. Receive DHCP offer, PXE address and boot file name
4. Boot into UDA; the user is prompted for the OS deployment type and for the password
5. User is prompted for a subtemplate
6. OS deployment starts; ESX41i is installed to the host
7. Kickstart script runs
8. Machine is available
Friday, August 12, 2011
Adding NIC Drivers to ESXi
I had to build a few ESXi servers to host a few machines which were going to do some data munging.
The application itself was a single-threaded app;
some pretty high-end desktops had been purchased before this fact came to light, so the desktops were to be reprovisioned as ESXi hosts and 4 Windows 7 images deployed, with one pCPU allocated to each. We purchased an SSD on which to host the 4 VMs.
Once I deployed ESXi, the first issue I encountered was that the onboard NIC was not on the HCL and ESXi refused to install. As a temporary fix I installed a quad-port NIC which was on the HCL; this got the host up and running.
As this is ESXi, I had to use the vSphere Management Assistant (which I had installed only a few weeks earlier, as I am doing an ESX 4.1 to ESXi 4.1 migration).
The Intel 82578DM drivers were obtained from
http://downloads.vmware.com/d/details/esx_esxi40_intel_82575_82576_dt/ZHcqYmR0QGpidGR3
and the image was downloaded and mounted on the vMA.
(One CentOS root password recovery later...)
The CD device now needs to be mounted on the vMA:
sudo mkdir /mnt/cdrom
sudo mount -t iso9660 -o ro /dev/cdrom /mnt/cdrom
cd /mnt/cdrom
cd offline-bundle
vihostupdate --server XX.XX.XX.XX --username XXXXX --password XXXXX --install --bundle INT-Intel............zip
The host must be in maintenance mode in order to complete the install of the NIC
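Putting the host into (and out of) maintenance mode can be done from PowerCLI as well as the client; a small sketch with a placeholder host name, assuming an existing Connect-VIServer session:
$esx = Get-VMHost -Name "esxi-desktop01.example.com"
# Enter maintenance mode before running vihostupdate from the vMA
Set-VMHost -VMHost $esx -State Maintenance
# ...install the offline bundle, then bring the host back
Set-VMHost -VMHost $esx -State Connected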
Friday, June 17, 2011
ESXi - Install from USB and To a USB
Install from USB
Instructions
Format USB drive as FAT32.
Download Syslinux. http://www.kernel.org/pub/linux/utils/boot/syslinux/
Run win32/syslinux -f -m -a usbdriveletter: or win64/syslinux64 -f -m -a usbdriveletter:
Copy contents of ESXi ISO to USB drive.
Rename isolinux.bin and isolinux.cfg to syslinux.bin and syslinux.cfg
Create file on USB drive called ks.cfg. A simple one is listed below. For more information on this see pg. 36 of the documentation here: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_i_vc_setup_guide.pdf
start ks.cfg
vmaccepteula
rootpw password
autopart --firstdisk --overwritevmfs
install usb
network --device=vmnic0 --bootproto=dhcp
end ks.cfg
Edit syslinux.cfg - under the ESXi Installer line, add "ks=usb" after vmkboot.gz in the "append" line, like so:
append vmkboot.gz ks=usb --- vmkernel.gz --- sys.vgz --- cim.vgz --- ienviron.vgz --- install.vgz
You need to do these two things if you downloaded the latest version of Syslinux, something past version 4. Otherwise you will get an error to the effect of "not a valid com32 image" or something when you try to boot.
Copy %syslinux%/com32/mboot/mboot.c32 to USB drive (overwrite)
Copy %syslinux%/com32/menu/menu.c32 to USB drive (overwrite)
Install to USB
http://www.techhead.co.uk/installing-vmware-esxi-4-0-on-a-usb-memory-stick-the-official-way
Thursday, June 16, 2011
VMware VMA (Management Assistant for ESXi) and adding hosts and VMA to AD - resxtop limitations
http://www.virtuallyghetto.com/2010/05/getting-started-with-vma.html - (Getting started guide)
http://www.simonlong.co.uk/blog/2010/05/28/using-vma-as-your-esxi-syslog-server/ - Setting up a syslog server
Bulk add hosts to the vMA http://www.virtuallyghetto.com/p/vmware-vma-vima.html
name the vMA "domainjoin-cli name <Computername>"
Join the Domain "domainjoin-cli join <username>"
Add the user to the list of sudoers on the vMA
sudo nano /etc/sudoers
at the bottom of the file
add the following
%domainname\\domain^admins ALL=(ALL) ALL
This allows the users within the domain admins group to sudo within the VMA
To allow domain admins to log on locally and act as root on the ESX servers , the group ESX Admins must be created in AD , add Domain Admins to this group.
The ESX servers periodically check for the existence of this group; if it is present it is added to the administrators group on the ESX server.
We can now log on to the VMA box with our AD account and the ESX servers with our AD account.
Unfortunately, within the vMA the ESX servers are still configured to use fastpass authentication
( vifp listservers -l )
Run the following for each of the hosts to change them to use adauth instead of fpauth
(fast pass authentication)
vifp reconfigure esxhost.domainname --authpolicy adauth
Each time you log on to the vMA, set the target as the vCenter server and you will not be prompted for your credentials when running commands against the hosts:
vifptarget --set <vcentreserver>
Your prompt should be as follows
[domain\username@vmahostname ~][vcenter.domainname]$
You should be able to issue vicfg-nics -l --vihost esxserver and not be prompted for credentials.
Resxtop
A limitation of resxtop is that each time you want to switch between servers you will need to re-enter your credentials; there is no secure way around this.
It is possible to pipe your password in clear text
echo "password" | resxtop --server xxx --username user -b -d 15 -n 9 | ....
Sunday, March 20, 2011
Tuesday, February 1, 2011
MS PowerShell
http://www.quest.com/powershell/activeroles-server.aspx - Quest AD PowerShell extensions
http://www.computerperformance.co.uk/powershell/powershell_qad_user.htm user guide for quest powershell QadUser
Some Snippets
Update AD attributes
Takes a list of users in the IT OU and sets the Roomnumber attribute to XXXX
Displays a list of all users affected
<code powershell>
$OU = "OU=IT,OU=GDv4,OU=Users,OU=IE,DC=Domain,DC=domain,DC=Domain,DC=com"
$users = get-qaduser -searchroot $ou
$Users | foreach { set-Qaduser $_.name -objectAttributes @{roomNumber='XXXX'} }
$Users | foreach { write-host $_.Name }
</code>
Thursday, January 13, 2011
General VMware stuff
http://www.yellow-bricks.com/ - General VMware blog
http://communities.vmware.com/servlet/JiveServlet/downloadBody/14905-102-1-17952/vsphere41-performance-troubleshooting.pdf - Troubleshooting guide
http://www.rtfm-ed.co.uk - Another general VMware blog
http://virtualgeek.typepad.com/virtual_geek/2009/12/whats-what-in-vmware-view-and-vdi-land.html
http://www.techhead.co.uk/vmware-esx-tools
http://vmwaredevotee.com/
http://www.simonlong.co.uk/blog/
http://vmware-land.com/
Thursday, January 6, 2011
Other Stuff
http://matadornetwork.com/nights/25-essential-jazz-albums/ - Jazz albums
http://hyperboleandahalf.blogspot.com/ - Funny blog
http://www.googleartproject.com/ - Museums by Google
http://www.truecrypt.org/docs/?s=hidden-operating-system
Why are you closed ... WHY? http://www.youtube.com/watch?v=KqRPOEa3P44
VMware command line
http://www.vmware.com/pdf/vsphere4/r40_u1/vsp_40_u1_vcli.pdf - Command line reference
Wednesday, January 5, 2011
VMware Powershell stuff
http://www.virtu-al.net/tag/powershell/ - Links for PowerShell commands, including a reference for the VMware View cmdlets
http://www.virtu-al.net/script-list/ - List of handy scripts
http://communities.vmware.com/thread/281904?tstart=0 - PowerCLI quick ref
Handy CMDlets
List the vSphere PowerCLI commands
Get-VICommand
Connect to an ESX or VirtualCenter instance
connect-viserver -server %server%
List the currently available datastores
Get-Datastore | sort
List the currently available datastores filtered and sorted
Get-Datastore | where {$_.Name -like '*pr*'} | sort
Find the VMs attached to one or more datastores
foreach ($prodDatastore in $prodDatastores) { write-output $prodDatastore.Name; get-vm -datastore $proddatastore; write-output ''}
Get a Virtual Machine
$vm = get-vm -name '%vm%'
Get the virtual harddisk for the specified VMs
Get-HardDisk -vm $vm
Move a virtual machine to another container
Move-VM -Destination $prodApps -VM $vm
Update the VM description for a list of CSV entries
foreach ($virtualServer in $virtualservers) {$arr = $virtualServer.split(","); $desc = $arr[1]; $vmName = $arr[0]; write-output $vmName; $desc; $vm = get-vm -name $vmName; Set-VM -VM $vm -description $desc}
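The one-liner above assumes $virtualservers already holds one "vmname,description" line per VM; a minimal way to populate it (the file path and layout are assumptions) might be:
<code powershell>
# Assumed input file: one "vmname,description" line per VM
$virtualservers = Get-Content -Path c:\temp\vm_descriptions.csv
</code>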
Query for a list of VMs and output in ASCII format
get-vm | sort-object | format-table -property Name | out-file -encoding ASCII -filepath c:\temp\vms_20090625.txt
Find VMware machine performance statistics
get-stat -entity $vm -disk -start 01/01/2009 -finish ([DateTime]::Now.ToString("dd/MM/yyyy"))
For a group of VMs, report performance statistics and save to file
foreach ($vm in $devVMs) {get-stat -entity $vm -disk -start 01/01/2009 -finish ([DateTime]::Now.ToString("dd/MM/yyyy")) | out-file -filepath ("c:\temp\" + $vm.Name + "DiskPerformance.txt")}
Find VM datastore disk usage
$devVMs = get-vm -name '*dv*'; foreach ($vm in $devvms) {$vm.harddisks}
Find VM datastore disk usage for VMs in the Test folder
$testVMs = Get-VM -Location (get-folder -name "Test") ;foreach ($vm in $testVMs) {$vm.harddisks | format-table -hideTableHeaders -wrap -autosize | findstr /i /c:per}
Find SCSI devices attached to an ESX server
get-scsilun -vmhost (Get-VMHost -Location "cluster")[0]
Rescan HBAs on an ESX server
get-VMHostStorage -VMHost (Get-VMHost -Location "cluster")[0] -RescanAllHba
Storage vMotion a virtual machine to a new datastore
Move-VM -vm "vmName" -datastore "NewDatastore"
Storage vMotion a group of machines from a CSV input file
$servers = get-content -path inputfile.txt; foreach ($server in $servers) {move-vm -vm $server.split(",")[0] -datastore $server.split(",")[1]}
Remove a snapshot and child snapshots, reporting how long the operation took
measure-command -expression {remove-snapshot -snapshot $snapshots[0] -removechildren}
Find datastore space, usage and number of VMs per datastore
$datastores = get-datastore | sort-object; write-output "Name,Size,Used,Free,% Used,#VMs"; foreach ($datastore in $datastores) { write-output ($datastore.Name + "," + [math]::round($datastore.CapacityMB/1024) + "," + [math]::round(($datastore.CapacityMB/1024)-($datastore.FreeSpaceMB/1024)) + "," + [math]::round($datastore.FreeSpaceMB/1024) + "," + [math]::round(((($datastore.CapacityMB/1024) - ($datastore.FreeSpaceMB/1024)) / ($datastore.CapacityMB/1024)) * 100) + "," + (get-vm -datastore $datastore).count)}
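The same report can also be built as objects rather than comma-joined strings, which makes it easier to sort or pipe to Export-Csv; this is only a sketch, and the property names and output path are my own choices:
<code powershell>
# Sketch: datastore report as objects, then exported to CSV
Get-Datastore | Sort-Object Name | ForEach-Object {
    $capGB  = [math]::Round($_.CapacityMB / 1024)
    $freeGB = [math]::Round($_.FreeSpaceMB / 1024)
    New-Object PSObject -Property @{
        Name    = $_.Name
        SizeGB  = $capGB
        UsedGB  = $capGB - $freeGB
        FreeGB  = $freeGB
        PctUsed = [math]::Round((($capGB - $freeGB) / $capGB) * 100)
        VMs     = (Get-VM -Datastore $_).Count
    }
} | Export-Csv -Path c:\temp\datastores.csv -NoTypeInformation
</code>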
From a set of VMs, find which have snapshots
foreach ($testvm in $testvms) {if (get-snapshot -vm $testvm){write-output $testvm.Name}}
Find the size of the first hard disk in each VM
foreach ($vm in $vms) {$vm.harddisks[0] | format-table -hideTableHeaders -wrap -autosize | findstr /i /c:per }
Find disk information for VMs in the specified datastore
$VMs = Get-VM ;foreach ($vm in $VMs) {$vm.harddisks | where {$_.FileName -like '*clusterpr*'} | format-table -hideTableHeaders -wrap -autosize | findstr /i /c:per}
Find VMs in the specified datastore
$VMs = Get-VM | where {$_.harddisks[0].FileName -like '*clusterpr*'}
Get VM guest information, including guest OS
get-vm | get-vmguest | format-table -wrap -autosize
Find virtual machines and their description/notes
$vms = get-vm ; $vms | format-table -wrap -autosize -property Name,Description
Create an associative array containing VM names and descriptions
$vmdesc = @{}; foreach ($vm in $vms) {$vmdesc.add($vm.Name, $vm.Description)}
Migrate a virtual machine to another host in a VMware ESX cluster
move-vm -vm %vmName% -destination %hostname%
Find the host a VM is currently located on
get-vmhost -vm %vmName%
Add a new 20 GB hard disk to a virtual machine (20971520 KB)
New-HardDisk -vm %vmName% -CapacityKB 20971520
Retrieve details on the resource pools from the currently connected datacenter
Get-ResourcePool | format-table -wrap -autosize -property Name,Id,CpuExpandableReservation,CpuLimitMHz,CpuReservationMHz,CpuSharesLevel,CustomFields,MemExpandableReservation,MemLimitMB,MemReservationMB,MemSharesLevel,Name,NumCpuShares,NumMemShares
Find virtual machines and if they have a CD-ROM
get-vm | format-table -wrap -autosize -property Name,CDDrives
Find the last 100 events that aren't alarm related
$events = Get-VIEvent -MaxSamples 100 | where {$_.fullFormattedMessage -notmatch "Alarm*"}
Find all events for machine deployments from templates
$events = Get-VIEvent | where {$_.fullFormattedMessage -match "Deploying (.*) on host (.*) in (.*) from template (.*)"}
Create a resource pool with high CPU and memory shares
New-ResourcePool -location (get-cluster -name 'cluster') -Name ResPool1 -CpuSharesLevel [VMware.VimAutomation.Types.SharesLevel]::High -MemSharesLevel [VMware.VimAutomation.Types.SharesLevel]::High
Create a folder from the root of the tree
New-Folder -Name Workstations -location (get-folder -name 'vm')
Move one or more VMs to a resource pool (or other destination)
$vms = get-vm -name vmNames*; move-vm -vm $vms -destination (Get-ResourcePool -name 'ResPool1')
Get an OS customization specification, and list the properties in wide format
Get-OSCustomizationSpec -name "SpecName" | format-list
Take a snapshot of a virtual machine
New-Snapshot -Name "Snapshot 01" -description "Snapshot description" -vm vmName -Quiesce:$true
Convert a virtual machine to a template
$vmView = get-vm -name vm01 | Get-View; $vmView.MarkAsTemplate()
Find Datastore usage (custom written function)
get-datastoreusage
Get an ESX log bundle using PowerCLI
Get-Log -VMHost esxhostname -Bundle -DestinationPath c:\temp
Query for snapshots
Get-VM | Get-Snapshot | export-csv -path c:\temp\VMsnapshots.csv
Query for snapshot information
Get-VM | Get-Snapshot | foreach-object {$out= $_.VM.Name + "," + $_.Name + "," + $_.Description + "," + $_.PowerState; $out}