Log into your PVE server over SSH. Download the OVA appliance from the Broadcom Flings portal and unzip/untar it:
wget https://higherlogicdownload.s3.amazonaws.com/BROADCOM/092f2b51-ca4c-4dca-abc0-070f25ade760/UploadedImages/Flings_Content/Nested_ESXi8_0u3c_Appliance_Template_v1_ova-dl.zip
unzip Nested_ESXi8_0u3c_Appliance_Template_v1_ova-dl.zip
tar xvf Nested_ESXi8.0u3c_Appliance_Template_v1.ova
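The OVA also ships a manifest (.mf) listing a checksum for each file, so you can optionally verify the extraction before importing. A minimal sketch, assuming the manifest uses SHA256(file)= entries (older appliances may use SHA1); it runs against a dummy file here since the exact filenames depend on the release:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for an extracted appliance file and its manifest entry
echo "dummy disk contents" > demo-disk1.vmdk
printf 'SHA256(demo-disk1.vmdk)= %s\n' \
    "$(sha256sum demo-disk1.vmdk | cut -d' ' -f1)" > demo.mf

# Convert "SHA256(file)= hash" lines into "hash  file" and check them
sed -n 's/^SHA256(\(.*\))= \(.*\)$/\2  \1/p' demo.mf | sha256sum -c -

cd /
rm -rf "$tmp"
```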
Then open vi or your favorite text editor, paste in the following shell script, save it, and run it.
The script is based on the one found at https://iriarte.it/homelab/2023/09/05/esxi-on-proxmox-as-nested-hypervisor.html with some customizations for my environment: I removed the extra vSAN VLAN, removed --hugepages 1024, and set the RAM to 32GB.
#!/bin/bash
VLANLAB=700
# Bridge interface to use (VLAN aware)
BRIDGE=vmbr0
# Storage destination for the appliance to live in
STORAGE=local-lvm
# Source OVF file, descriptor of the appliance
OVF=Nested_ESXi8.0u3c_Appliance_Template_v1.ovf
# Template/appliance name as defined in the OVF file, used later to look up and rename the VM.
TMPL_NAME=$(grep "<Name>" $OVF| cut -f 2 -d ">" |cut -f 1 -d "<" |sed 's/_//g')
# We need at least the name of the VM to create as script parameter
if [ $# -lt 1 ] || [ $# -gt 2 ]; then
    echo "usage: $0 <vm name> [vm id]"
    exit 1
else
    VMNAME=$1
    if [ $# -eq 2 ]; then # use VMID if specified as second positional param
        VMID=$2
        # Basic VM creation and disk files import
        qm importovf ${VMID} ${OVF} ${STORAGE} -format qcow2
    else # get the next VMID if not specified
        # Basic VM creation and disk files import
        qm importovf $(pvesh get /cluster/nextid) ${OVF} ${STORAGE} -format qcow2
    fi
fi
# We get the ID of the created VM
NEWVM=$(qm list|grep ${TMPL_NAME}|awk '{ print $1 }')
# We define initial configuration. In this case, 12 cores in total, 2 sockets & 32GB of RAM. EFI boot.
qm set $NEWVM --name $VMNAME --bios ovmf --machine q35 \
--numa 1 --sockets 2 --cores 6 --cpu cputype=host \
--scsihw pvscsi \
--memory 32768 \
--efidisk0 ${STORAGE}:0,efitype=4m,format=raw
# We add network interfaces with the correct VLAN mapping
qm set $NEWVM \
--net0 model=vmxnet3,bridge=${BRIDGE},firewall=0,tag=${VLANLAB} #\
# --net1 model=vmxnet3,bridge=${BRIDGE},firewall=0,tag=${VLANVSAN}
# The import command attaches the disks to a SCSI controller, but unfortunately
# the appliance doesn't recognize them on the VMware PVSCSI controller.
# Detach the disks from that controller first.
for DISK in $(qm config $NEWVM |grep ^scsi|grep $NEWVM|cut -f 1 -d ":")
do
    qm set $NEWVM --delete $DISK
done
# Reattach the disks to a SATA controller, which the appliance is happier with.
n=0
for DISK in $(qm config $NEWVM|grep ^unused|awk '{ print $2 }')
do
    qm set $NEWVM -sata${n} $DISK
    n=$(( $n + 1 ))
done
# Define boot disk
qm set $NEWVM --boot order=sata0
# Start VM
qm start $NEWVM
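The TMPL_NAME extraction near the top of the script is worth unpacking: it pulls the appliance name out of the OVF's <Name> element and strips the underscores, so that the later grep of qm list output matches the name Proxmox assigns to the imported VM. Here's a runnable sketch of that pipeline against a stand-in OVF fragment (the real input is the extracted .ovf descriptor):

```shell
#!/bin/sh
# Stand-in for one line of the extracted .ovf descriptor
OVF=/tmp/demo.ovf
printf '    <Name>Nested_ESXi8.0u3c_Appliance_Template_v1</Name>\n' > "$OVF"

# Same pipeline as the script: isolate the text inside <Name>...</Name>,
# then drop underscores to match the VM name Proxmox generates
TMPL_NAME=$(grep "<Name>" "$OVF" | cut -f 2 -d ">" | cut -f 1 -d "<" | sed 's/_//g')
echo "$TMPL_NAME"   # prints NestedESXi8.0u3cApplianceTemplatev1

rm -f "$OVF"
```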
You can now open up the console and log in using the default root password of VMware1! to configure the network and hostname, and change the password to something else.
First try - storage issue (Nov 2024)
I’ve left the notes from my first attempt at building out this lab below for reference!
I followed the link on William Lam’s website to the new Broadcom Flings portal.
There I downloaded the latest release (8.0U3) under Nested ESXi Virtual Appliance, which gave me the file Nested_ESXi8_0u3b_Appliance_Template_v1_ova-dl.zip.
I scp’d the file over to my Proxmox server: scp ~/Downloads/Nested_ESXi8_0u3b_Appliance_Template_v1_ova-dl.zip root@cmb1-pve:~/
Then I installed unzip, and unzipped & untarred the OVA files:
root@cmb1-pve:~# apt install unzip
root@cmb1-pve:~# unzip Nested_ESXi8_0u3b_Appliance_Template_v1_ova-dl.zip
root@cmb1-pve:~# tar xvf Nested_ESXi8.0u3b_Appliance_Template_v1.ova
Nested_ESXi8.0u3b_Appliance_Template_v1.ovf
Nested_ESXi8.0u3b_Appliance_Template_v1.mf
Nested_ESXi8.0u3b_Appliance_Template_v1-disk1.vmdk
Nested_ESXi8.0u3b_Appliance_Template_v1-disk2.vmdk
Nested_ESXi8.0u3b_Appliance_Template_v1-disk3.vmdk
You can see the tar command extracted the OVF (VM configuration), MF (checksum manifest), and VMDK (disk) files contained inside the OVA.
Next we’ll import the VM configuration using the qm command. I tried importing the disks in qcow2 format, but it didn’t work and they were imported as raw instead. This is expected, because I’m using LVM block storage, which doesn’t support qcow2.
root@cmb1-pve:~# qm importovf 201 ./Nested_ESXi8.0u3b_Appliance_Template_v1.ovf local-lvm --format qcow2
format 'qcow2' is not supported by the target storage - using 'raw' instead
Logical volume "vm-201-disk-0" created.
transferred 0.0 B of 16.0 GiB (0.00%)
transferred 163.8 MiB of 16.0 GiB (1.00%)
transferred 327.7 MiB of 16.0 GiB (2.00%)
...
cut!
...
transferred 16.0 GiB of 16.0 GiB (100.00%)
format 'qcow2' is not supported by the target storage - using 'raw' instead
Logical volume "vm-201-disk-1" created.
transferred 0.0 B of 4.0 GiB (0.00%)
transferred 4.0 GiB of 4.0 GiB (100.00%)
format 'qcow2' is not supported by the target storage - using 'raw' instead
Logical volume "vm-201-disk-2" created.
transferred 0.0 B of 8.0 GiB (0.00%)
transferred 8.0 GiB of 8.0 GiB (100.00%)
Then I could see that the VM appeared in my Proxmox: Virtual Machine 201 (NestedESXi8.0u3bApplianceTemplate) on node ‘cmb1-pve’
I added a VMXNET3 NIC, as I assumed that would be the most compatible.
Then I powered it up, and immediately got an error message:
VMB: 737:
Unsupported CPU: Intel family 0x0f, model 0x06, stepping 0x1
Common KVM processor
See http://www.vmware.com/resources/compatibility
I then found this article (which is awesome! definitely check it out for more info on nested ESXi on Proxmox) https://iriarte.it/homelab/2023/09/05/esxi-on-proxmox-as-nested-hypervisor.html and based on their settings I made these changes:
- Changed the CPU type on the VM from “Default (kvm64)” to “host”
- Changed the SCSI Controller from “Default (LSI 53C895A)” to “VMware PVSCSI”
And then everything booted up fine! (The default password for root is VMware1!)
The resulting ESXi hosts had no storage, so I needed to figure out a way of providing them storage. I decided on setting up a FreeNAS box to supply NFS shared storage to each host.
My Nested ESXi lab worked fine for some time. However, when I eventually rebooted the VMs, I saw that my ESXi systems had lost their management configuration (IP address, hostname, root password).
I logged into the ESXi install over SSH, and noticed that the systems appeared to have no local file systems, so ESXi wasn’t able to save any changes made to the configuration.
[root@cmb1-esxi3:~] df -h
Filesystem Size Used Available Use% Mounted on
After taking another look at the article I linked above, I noticed that the script they used had a step to reattach the disks that were imported with the OVF, because otherwise they weren’t recognized properly.
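That reattach step boils down to parsing qm config output: after the scsiN entries are deleted, the orphaned disks show up as unusedN lines, which the script maps onto sataN slots. Here's a sketch of that parsing against a mocked config dump (the mock_config function and VMID 201 are stand-ins; on a real host you'd feed it `qm config <vmid>` and actually run the `qm set` commands):

```shell
#!/bin/sh
# mock_config stands in for `qm config <vmid>` output after the scsiN
# entries have been deleted and the disks show up as unusedN
mock_config() {
cat <<'EOF'
boot: order=sata0
scsihw: pvscsi
unused0: local-lvm:vm-201-disk-0
unused1: local-lvm:vm-201-disk-1
unused2: local-lvm:vm-201-disk-2
EOF
}

n=0
mock_config | grep ^unused | awk '{ print $2 }' | while read -r DISK; do
    # On a real host this would be: qm set <vmid> -sata${n} "$DISK"
    echo "would run: qm set 201 -sata${n} ${DISK}"
    n=$(( n + 1 ))
done
```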