ESXi & Installation Media

Feb 10, 2022 | How to, News

Howdy everyone, Matt from Data Center Therapy here again.

Here at IVOXY Consulting, one of our biggest advantages as consultants is that we design and implement solutions across a super wide range of environments. We see issues that are very niche, and we also come across issues that affect a huge swath of our customer base. Today, we're touching on the latter: VMware customers, listen up.

VMware ESXi is the standard in enterprise hypervisors. Sure, there's Xen, there's Hyper-V, there's KVM, but by and large, VMware vSphere is the brand of tomato you're making your sauce with, and for good reason. Going way back, it wasn't unusual to see full-blown hard drives in your ESXi hosts, consumed by an ESXi install of a few hundred megabytes. Maybe you used the leftover space for some ISOs or backups, but by and large, it was ESX and that was that. Then SD cards and USB drives came on strong, and they made a ton of sense. First of all, these are cheap types of storage, no doubt about it; the savings versus buying internal drives were a huge factor. Furthermore, certain server vendors let you mirror two SD cards in a RAID 1 for redundancy. More on that a bit later. Anyway, this became a widely used, often primary, approach for ESXi installs.

What’s the problem?

Remember how I said these options were cheap? Yeah. THESE THINGS FAIL. A lot. How many of you have had one of these fail on you, and why did it have to be yesterday?

Well, VMware has finally tired of it as well, and in the process has thrown us all a bit of a brushback pitch with vSphere 7 Update 3. If you're booting from just a USB or SD card, you're now considered to be in a degraded situation. vCenter will yell at you, and while ESXi will still run, you'll be running unsupported by VMware should things go sideways. I wouldn't be surprised to see this method removed entirely from the next major version of vSphere.
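If you want a quick inventory of which hosts vCenter is yelling at, here's a minimal sketch using pyVmomi (VMware's Python SDK for the vSphere API). Everything in it is illustrative: the vCenter address and credentials are placeholders, and it simply prints each host's active configIssue entries, which is where warnings like the boot-media one generally surface.

```python
# Minimal pyVmomi sketch: list active configuration warnings on every host.
# Placeholders: vcenter.example.com and the credentials below.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # configIssue holds each entity's active configuration warnings
        for issue in host.configIssue:
            print(f"{host.name}: {issue.fullFormattedMessage}")
    view.Destroy()
finally:
    Disconnect(si)
```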

What are your options?

Well, there are really two main answers here, but I can only recommend one of them. And in that case, you have a few more options. Still with me? No? Good.

Option 1 – the not-favored option – Add a local disk or disks to supplement the USB or SD card. This is a supported configuration: the ESXi OS install remains on the SD card, but higher-I/O data like the scratch partition, logs, and other bits of vSphere 7 move to the persistent storage media. Like I said, it's supported, but it's also considered legacy at this point, which means you're staring down the barrel of an ESXi reinstall soon enough. Which brings me to my preferred option…
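Before we get there, though: if you do live with Option 1 for a while, the knob that matters is the ScratchConfig.ConfiguredScratchLocation advanced setting, which is what points scratch at persistent storage. Here's a hedged sketch of checking and repointing it with pyVmomi; host is a vim.HostSystem from a connection like the one above, and the datastore path in the docstring is a made-up example.

```python
from pyVmomi import vim

def get_scratch_location(host):
    """Read the configured scratch location on an ESXi host."""
    opt_mgr = host.configManager.advancedOption
    opts = opt_mgr.QueryOptions("ScratchConfig.ConfiguredScratchLocation")
    return opts[0].value

def set_scratch_location(host, path):
    """Point scratch at persistent storage.

    Example path (hypothetical): /vmfs/volumes/local-ds-esx01/.locker
    The change takes effect after the host reboots.
    """
    opt_mgr = host.configManager.advancedOption
    opt_mgr.UpdateOptions(changedValue=[
        vim.option.OptionValue(
            key="ScratchConfig.ConfiguredScratchLocation",
            value=path)])
```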

Option 2 – Add a local disk or disks, migrate your install off the USB or SD cards, and dump the cards. OK, so you don't technically have to dump the cards, but we won't be using them for anything in this scenario. We reinstall ESXi on the disks, migrate the configuration, and now we're in VMware's (and our) recommended configuration, with the assurance of long-term support.
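That "migrate the configuration" step has a few moving parts, but the vSphere API will hand you a backup bundle of the host's configuration before you wipe the boot media. Here's a sketch using HostFirmwareSystem.BackupFirmwareConfiguration, with the usual caveats: host is a vim.HostSystem as above, and this is illustrative rather than a complete migration tool.

```python
import ssl
import urllib.request

def backup_host_config(host, out_path):
    """Download a host's configuration bundle (.tgz) before reinstalling.

    Same idea as running `vim-cmd hostsvc/firmware/backup_config`
    from the ESXi shell.
    """
    fw = host.configManager.firmwareSystem
    url = fw.BackupFirmwareConfiguration()
    # The API returns the download URL with '*' standing in for the host name
    url = url.replace("*", host.name)
    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    with urllib.request.urlopen(url, context=ctx) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
    print(f"Saved config bundle for {host.name} to {out_path}")
```

After the fresh install on local disk, the matching RestoreFirmwareConfiguration call (or vim-cmd hostsvc/firmware/restore_config from the shell) puts the configuration back; as I understand it, the host needs to be in maintenance mode and on the same version and build for the restore to take.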

Now, remember how I said there were a few flavors of my preferred option, and also how I said I’d be talking about RAID1 a bit later? Later is now. Now is…later…Hmmm…Now & Laters. Candy. … Sorry.

So the options for the "local disk" are plentiful. You can go with a single SSD or a traditional hard disk drive. A more modern solution would be an M.2 SATA boot device tailor-made for this sort of application. Or you could go down complex alley over there and set up some LUNs for booting from SAN. (Shakes head.) Any one of these will work, but if you decide to go with local disk, you'll have to make a decision regarding RAID. Just throwing a single disk into each ESXi host is a cost, but it's a modest one. Start adding two drives, plus the requisite RAID controller to set up a RAID 1, then multiply by how many hosts you have? Yeah, it could get expensive. I'd be happy to chat with anyone about my thoughts on this (and yes, I have some), but it comes down to how much value you place on putting a RAID 1 behind an OS that boots into RAM and runs from there. It's not an availability concern, so that RAID group really just protects you against having to reinstall the OS in case of a drive failure. That's about it. It depends on how much you value the management-overhead savings. Operationally? Little difference.

That's the long and short of it, and unfortunately, this issue needs to be addressed now. vSphere 7 has been around for almost two years, folks still running 6.5 and 6.7 are most likely looking at the upgrade process, and this is a new wrinkle in the traditional vSphere upgrade cycle.

I’d encourage everyone to reach out to your favorite consultant here at IVOXY Consulting to chat about next steps. As I’ve said, we’ve helped a bunch of folks out already with this process, so we’re well aware of the options and are happy to discuss what your situation looks like, to ensure you’re ready and supported for vSphere 7 and beyond.


🔗 Resources 🔗


Need help preparing for your vSphere 7 upgrade? Let us know!