Upgrading ESXi to 4.1

Finally got around to upgrading VC to 4.1, and it all went very smoothly for a change. There was just the one error on one ESX host about a log file being full; I cleared the log file and all was well again.
Next up is upgrading the hosts, which is something that requires a bit of juggling. Our dev environment still contains three ESX hosts while we sort out moving our SQL servers to our chunkier BL620 G7 blades. Those blades currently have just a single server on them for testing, and we were waiting for the 4.1 upgrade before moving anything serious over.
So the plan is to do the three BL620s first, then move some VMs over to free up room to upgrade the rest of production, which in turn frees up a couple of blades for dev so we can finally shut down the ESX DL380 servers.

Step one: the BL620 upgrade. The first two blades went without a hitch, upgrading as expected through Update Manager. Hardcore VM admins would probably use some command-line tool, but why make your life difficult? I have seen previous upgrades go a bit slowly on the reboot afterwards because of timeouts on the storage scanning. Let it boot up, even if it takes overnight, and then change the timeouts; which ones depends on your configuration, and there are plenty of documents on the web, so just Google whatever is on the console when it seems to be frozen.
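For reference, this is roughly what checking and changing one of those advanced options looks like from the vMA once the host has finally booted. The option path and host name below are placeholders only, since which timeout actually matters depends on your configuration, and the commands will prompt for credentials unless vi-fastpass is set up.

# Placeholder example - swap /Disk/SomeBootTimeout for whichever advanced
# option the document you Googled tells you to change.
vicfg-advcfg --server esx-bl620-01 --get /Disk/SomeBootTimeout
vicfg-advcfg --server esx-bl620-01 --set 60 /Disk/SomeBootTimeout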
The third and last one would not upgrade. I left it running overnight and it was still at 25% in the morning, so I rebooted and had what appeared to be an ESXi host running 4.1, but when I ran the check for updates against it, it showed it still needed the 4.1 update and applying it failed. I thought I would rebuild it as a 4.0 host and try the upgrade again, as that was simpler than rewriting my config script for 4.1, but I got the same result, so I decided to bite the bullet and go for 4.1 and a rewrite. I downloaded the latest 4.1 U2 install from the HP site (not the easiest thing to find, and it requires registration: https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=HPVM06), booted from it and noticed there was a repair option. This seemed like a good idea, as it should save me having to rewrite my configuration script to account for the changes VMware made. I set it going, and when the ESXi host rebooted there was no configuration on the host at all: no VLAN setup, no IP addressing, nothing. So VMware's idea of a repair is basically to do a fresh install of everything, which is pretty pointless really.

Anyway, after rewriting my setup script for the 4.1 vMA and getting everything set up, I was checking all was well and noticed I had four additional iSCSI connectors visible under storage adapters. The BL620 blades come with four built-in 10Gb Flexconnect NICs, and in the BIOS you can set these to be either FCoE or iSCSI, but you cannot turn them off and use them as just plain NICs, even if they are connected to 1GbE switches and are used for the management network rather than storage. I guess this was a new driver that was picked up as part of the 4.1 install from HP.
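For anyone curious what that setup script covers, here is a cut-down sketch of the sort of thing mine does from the vMA. It is not my actual script; the host name, vSwitch and portgroup names, VLAN ID and addresses are all invented for illustration.

#!/bin/bash
# Cut-down sketch of a vMA host config script - all names and addresses
# below are made up for illustration, not the real environment.
HOST=esx-bl620-03

# Second vSwitch for VM traffic with its own uplink and a tagged portgroup
vicfg-vswitch --server $HOST --add vSwitch1
vicfg-vswitch --server $HOST --link vmnic1 vSwitch1
vicfg-vswitch --server $HOST --add-pg "VM Network 100" vSwitch1
vicfg-vswitch --server $HOST --vlan 100 --pg "VM Network 100" vSwitch1

# vMotion portgroup and vmkernel port
vicfg-vswitch --server $HOST --add-pg vMotion vSwitch1
vicfg-vmknic --server $HOST --add --ip 192.168.50.13 --netmask 255.255.255.0 vMotion

# Point the host at an NTP server
vicfg-ntp --server $HOST --add uk.pool.ntp.org

Each command prompts for credentials unless the host has already been added as a vi-fastpass target.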
It should be safe enough to ignore, but just to keep everything in check I guess I will need to run a repair on the other hosts to ensure all the drivers are the same and on the latest version, or something is bound to go wrong.
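A quick and dirty way to do that comparison from the vMA, rather than clicking through each host, is something like the loop below; the host names are placeholders and, again, each call prompts for credentials unless vi-fastpass is in play.

# Placeholder host names - compare installed bulletins and storage adapters
for HOST in esx-bl620-01 esx-bl620-02 esx-bl620-03; do
    echo "== $HOST =="
    vihostupdate --server $HOST --query      # installed patch bulletins
    vicfg-scsidevs --server $HOST --hbas     # storage adapters, including the extra iSCSI ones
done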
