Migrate VMware ESXi VMs to Proxmox VE

I’ve just migrated my virtual machines (Debian and Ubuntu servers) from VMware ESXi to Proxmox VE. I already have a Proxmox VE server, so I decided to migrate the VMs to that host first, install Proxmox VE on my main Intel NUC homelab server, and then move the VMs back to it. In this post, I’ll share what I’ve done, along with a few notes.

Automatic import

Proxmox VE has a built-in VM importer that uses the storage plugin system, so it’s all integrated into the web-based user interface. It was really straightforward to migrate my ESXi VMs as a whole this way.

You can do a live import, but you need a high-bandwidth network between the ESXi and the PVE host. Live import is also risky: if it fails, you could lose data that has been written since the import started. It’s best to migrate VMs while they’re powered off, which also speeds up the process.

Info: Currently only VMware ESXi is supported as an import source, but there are plans to add support for importing OVAs/OVFs in the future.


Preparation

  • Verified that Proxmox VE was updated to version 8 and applied the latest available updates. This ensures I have the most stable environment for the migration.
  • Removed all VM snapshots on ESXi. Snapshots can significantly increase the time it takes to migrate VMs.
  • Checked that all VM backups were working correctly. This is part of my regular maintenance routine, but for a major migration, it’s critical to have reliable backups in case something goes wrong.
  • Uninstalled VMware ESXi guest tools (open-vm-tools) from the VMs before starting the migration. Removing them post-migration can become tricky.
  • For each migrated VM, I configured a static network interface name using a systemd .link file. This step ensures that network interfaces retain consistent names across hosts and reboots.

A short explanation of why I created a systemd link file for the network interface name: once the migration is done, the VM will probably come up without a working network, because I’m using static IP settings and the interfaces will get new names based on the “predictable network interface device naming” scheme. This scheme replaced the familiar names like eth0, eth1, etc. in newer versions of Debian and Ubuntu.

I ended up writing a bash script that creates a systemd network link file. It defines a specific network interface name for a specific network device’s MAC address, which was ideal for me as I’m not changing the MAC. The script should work on any system where interface naming is handled by systemd, such as Debian or Ubuntu (including netplan setups).

Here are a few other possible ways to handle the network interface name:

udev rule

If you’re working with older systems, or ones that don’t use systemd-based networking, you can also set up a udev rule to assign a persistent network interface name. Example: /etc/udev/rules.d/70-persistent-net.rules

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="xx:xx:xx:xx:xx:xx", NAME="ens192"
  • Reload the udev rules or restart the VM
    sudo udevadm control --reload-rules
  • Trigger the rules
    sudo udevadm trigger

Disable predictable network interface names

The traditional naming scheme (eth0, eth1, etc.) can be reactivated. This can be achieved using the following method:

  1. Open the file /etc/default/grub and search for the line GRUB_CMDLINE_LINUX.
  2. Add the kernel parameter net.ifnames=0:
    GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
  3. Update the Grub bootloader:
    sudo update-grub
  4. Restart the VM.

The network interface should then have the traditional name eth0 again. Adjust the network configuration accordingly.
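
On a Debian VM that still uses ifupdown, that adjustment simply means referring to eth0 again; the addresses below are placeholders for your own static settings:

/etc/network/interfaces

auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1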

DHCP

Switch the VMs to DHCP while they are still on the ESXi host and create DHCP reservations, so that they obtain their IP addresses via DHCP instead of using static IP settings.
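
If a VM runs Ubuntu with netplan, switching it to DHCP is a small change. A minimal sketch; the file name and the interface name ens192 are assumptions:

/etc/netplan/01-dhcp.yaml

network:
  version: 2
  ethernets:
    ens192:
      dhcp4: true

Apply it with:

    sudo netplan apply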

And here is the systemd network link file solution that I used. I simply created the following file with the VM’s MAC address and interface name:

/etc/systemd/network/10-persistent-net.link

[Match]
MACAddress=$MAC

[Link]
Name=$INTERFACE
EOF"

To automate the whole process, I created a bash script that searches for a network interface with an IP address in the 192.168.0.X range and retrieves its MAC address. It then creates a systemd link file to ensure persistent interface naming for the identified network interface. I tested it with Debian 11 and 12 and Ubuntu 22.04 and 24.04. A simplified sketch of the script follows the list below.

  • Bash script on GitLab
  • Make it executable
    chmod +x set-network-inf-name-systemd.bash
  • Run the script
    ./set-network-inf-name-systemd.bash
  • Restart the systemd-networkd service
    sudo systemctl restart systemd-networkd
  • Check IP settings
    ip a
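
For reference, here is a simplified sketch of what the script does. The 192.168.0. prefix matches my network, and the file path and error handling shown here are just an approximation of the full script on GitLab:

#!/usr/bin/env bash
# Simplified sketch: pin the current interface name to its MAC address.
# Run as root (or with sudo); adjust SUBNET for your network.
set -euo pipefail

SUBNET="192.168.0."
LINK_FILE="/etc/systemd/network/10-persistent-net.link"

# Find the interface that currently holds an address in the given range
INTERFACE=$(ip -o -4 addr show | awk -v net="$SUBNET" 'index($4, net) == 1 {print $2; exit}')
[ -n "$INTERFACE" ] || { echo "No interface with an IP in ${SUBNET}x found" >&2; exit 1; }

# Read its MAC address
MAC=$(cat "/sys/class/net/${INTERFACE}/address")

# Write the systemd .link file that pins the current name to the MAC address
cat > "$LINK_FILE" <<EOF
[Match]
MACAddress=$MAC

[Link]
Name=$INTERFACE
EOF

echo "Created $LINK_FILE: $INTERFACE will keep its name (MAC $MAC)"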

Once the VM has been migrated to Proxmox, you can see in ip a that the name the system would have assigned on the new virtual hardware now only appears as an altname, while the pinned name stays in place:

ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000  
   link/ether 00:0c:29:95:0f:cd brd ff:ff:ff:ff:ff:ff   
   altname ens18  
   inet 192.168.0.10/24 brd 192.168.0.255 scope global ens192

Virtual Machine Migration

Add ESXi to Proxmox VE

  • Add an “ESXi” import-source storage, through the Datacenter -> Storage -> Add menu
    • Enter the domain or IP address
    • Enter admin account credentials
    • Check Skip Certificate Verification checkbox, if needed
    • Note: While it is possible to import through a vCenter instance, doing so will dramatically reduce performance
  • In the PVE node in the tree on the left, you will now see the VMware ESXi host at the bottom
    • All available VMs are listed here

VM Import

  • Shut down the source VMs via the VMware ESXi web UI or SSH
  • In the Proxmox web UI, select the VMware ESXi storage to list all VMs
  • Select the VM you wish to import and click the Import button at the top
  • You can set the VM ID, sockets, cores, memory, name, CPU type, OS type, version, default storage and default bridge

A few notes on some of the points:

  • Virtual Disk Type
    • The import uses the VMware PVSCSI controller by default. VirtIO SCSI would be better, as it uses the more efficient VirtIO SCSI driver and also allows IO threads. For this to work, the guest must support VirtIO.
  • Network Adapter
    • VirtIO has the least overhead and is preferred. Other NIC models can be selected for older operating systems that do not have drivers for the paravirtualised VirtIO NIC.
  • Use the Advanced tab for finer selection
    • e.g. CD-ROM drive ISO, different storage target for each disk, configure different network settings for multiple devices, disable importing some disks
    • Note: You can edit and extend even more options and details of the VM hardware after creation (see the CLI example after this list).
  • Check the ‘Resulting Config’ for the full list of key/value pairs that will be used to create the VM
  • Start the actual import in Proxmox by clicking on Import
  • Boot up the VM and check that the network is working and everything looks good.
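
As noted above, most of these settings can also be adjusted after the VM has been created; if you prefer the command line, qm set covers them. A sketch, where the VM ID 100, the bridge vmbr0 and the MAC address are placeholders:

# switch the disk controller to VirtIO SCSI (single controller, allows IO threads)
qm set 100 --scsihw virtio-scsi-single
# switch the NIC to VirtIO, but keep the original MAC so the systemd .link file still matches
qm set 100 --net0 virtio,bridge=vmbr0,macaddr=xx:xx:xx:xx:xx:xx
# optionally expose the host CPU type to the guest
qm set 100 --cpu host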

Post Migration

It is recommended to install the QEMU guest agent in the VMs to improve communication between the host and guest and to allow commands to be executed in the guest. More information can be found in the Proxmox VE documentation.
sudo apt install qemu-guest-agent
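
After installing the agent, make sure it is running in the guest and that the agent option is enabled on the Proxmox side (the VM ID 100 is a placeholder); the VM needs a full stop and start for the agent device to appear:

# inside the guest
sudo systemctl enable --now qemu-guest-agent
# on the Proxmox host
qm set 100 --agent enabled=1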


Additional Notes

Names and datastores
You may not be able to import from a datastore that contains special characters such as a ‘+’. Renaming the datastore may be a workaround.

vSAN
If your VM’s disks are on vSAN storage, move them to another datastore first. The importer doesn’t work with VMs backed by vSAN yet.

BIOS / UEFI
This depends on the VM in question: if it boots in legacy BIOS mode, set the Proxmox VM to SeaBIOS; if it boots in UEFI mode, set it to OVMF.

Some operating systems will not set up the default boot path in UEFI mode (/EFI/BOOT/BOOTX64.EFI) but only their own custom one. In such a situation, the VM will not boot even when the BIOS is set correctly. You will have to enter the UEFI BIOS and add the custom boot entry. An EFI Disk needs to be configured in the Hardware panel of the VM to persist these settings.
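
For completeness, the firmware and the EFI disk can also be set from the CLI. A sketch, assuming VM ID 100 and a storage called local-lvm:

# set the firmware to UEFI (use seabios for legacy BIOS VMs)
qm set 100 --bios ovmf
# add an EFI disk so UEFI settings and custom boot entries persist
qm set 100 --efidisk0 local-lvm:1,efitype=4m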

Windows VMs
I haven’t migrated any Windows VMs yet, but they’ll likely have trouble booting after the migration, so the hard disk controller may need to be adapted.

vTPM
You can’t currently migrate a virtual Trusted Platform Module (vTPM). vTPM support is needed for full disk encryption (FDE), cryptographic verification, and Windows 11. If your VM has a vTPM, it might be worth working around its absence before the first boot (see below).
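
One way to deal with this is to add a fresh TPM state disk in Proxmox VE before the first boot; keys sealed in the old vTPM won’t carry over, so encrypted volumes will typically ask for their recovery key. A sketch, again assuming VM ID 100 and a storage called local-lvm:

qm set 100 --tpmstate0 local-lvm:1,version=v2.0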


Resources