I recently configured a Windows 11 guest virtual machine on libvirt with the VirtIO drivers. This post is a collection of my notes on how to configure the host and guest. Most are applicable to any recent version of Windows.

For the impatient, just use my libvirt domain XML.

Host Configuration

Hyper-threading/Simultaneous Multithreading (SMT)

Many configuration guides recommend disabling hyper-threading on Intel processors before Sandy Bridge for performance reasons. Additionally, if the VM may run untrusted code, it is recommended to disable SMT on processors vulnerable to Microarchitectural Data Sampling (MDS).

RTC Synchronization

To keep RTC time in the guest accurate across suspend/resume, it is advisable to set SYNC_TIME=1 in /etc/default/libvirt-guests, which calls virsh domtime --sync after the guest is resumed. This causes the QEMU Guest Agent to call w32tm /resync /nowait in the guest, which synchronizes the clock with the configured w32time provider (usually NTP, although VMICTimeProvider could be used to sync with the Hyper-V host). Ignore the comment in older libvirt versions stating that SYNC_TIME is not supported on Windows; this was fixed in qemu/qemu@105fad6bb22.

Wayland Keyboard Inhibit

To send keyboard shortcuts (i.e. key combinations) to the virtual machine viewer that has focus, rather than to the Wayland compositor, the compositor must support the Wayland keyboard shortcut inhibition protocol. For example, Sway gained support for this protocol in version 1.5 (swaywm/sway#5021). When using Sway 1.4 or earlier in the default configuration, pressing Win + d would invoke dmenu rather than display or hide the desktop in the focused Windows VM.

Guest Configuration

BIOS vs UEFI (with SecureBoot)

There are trade-offs to consider when choosing between BIOS and UEFI:

If UEFI is selected, an image must be chosen for the pflash device firmware. I recommend OVMF_CODE_4M.ms.fd (paired with OVMF_VARS_4M.ms.fd, which enables Secure Boot and includes the Microsoft keys in KEK/DB), or OVMF_CODE_4M.fd if Secure Boot is not desired. See ovmf.README.Debian for details.
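
For illustration, the corresponding <os> fragment might look roughly like the following, assuming the Debian firmware paths and a Q35 machine; adjust the paths, machine version, and NVRAM file location (the NVRAM file name here is hypothetical):

    <os>
      <type arch='x86_64' machine='pc-q35-6.1'>hvm</type>
      <!-- Secure Boot variant; also requires <smm state='on'/> under <features> -->
      <loader readonly='yes' secure='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE_4M.ms.fd</loader>
      <nvram template='/usr/share/OVMF/OVMF_VARS_4M.ms.fd'>/var/lib/libvirt/qemu/nvram/win11_VARS.fd</nvram>
    </os>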

Note: UEFI does not support the QEMU -boot order= option, but it does support bootindex properties. For example, to boot from win10.iso, use -drive id=drive0,file=win10.iso,format=raw,if=none,media=cdrom,readonly=on -device ide-cd,drive=drive0,bootindex=1 instead of -cdrom win10.iso -boot order=d.

CPU Model

It may be preferable to choose a CPU model which satisfies the Windows Processor Requirements for the Windows edition which will be installed on the guest. As of this writing, the choices are Skylake, Cascadelake, Icelake, Snowridge, Cooperlake, and EPYC.

If the VM may be migrated to a different machine, consider setting check='full' on <cpu/> so enforce will be added to the QEMU -cpu option and the domain will not start if the created vCPU doesn’t match the requested configuration. This is not currently set by default. (Bug 822148)
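
For example, a <cpu> element along these lines requests a named model with strict checking; the model shown (Skylake-Client-IBRS) is only an example and must be one the host can provide:

    <cpu mode='custom' match='exact' check='full'>
      <model fallback='forbid'>Skylake-Client-IBRS</model>
    </cpu>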

CPU Topology

If topology is not specified, libvirt instructs QEMU to add a socket for each vCPU (e.g. <vcpu placement="static">4</vcpu> results in -smp 4,sockets=4,cores=1,threads=1). It may be preferable to change this for several reasons:

First, as Jared Epp pointed out to me via email, for licensing reasons Windows 10 Home and Pro are limited to 2 CPUs (sockets), while Pro for Workstations and Enterprise are limited to 4 (possibly requiring build 1903 or later to use more than 2). Similarly, Windows 11 Home is limited to 1 CPU while 11 Pro is limited to 2. Therefore, limiting sockets to 1 or 2 on these systems is strongly recommended.
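
For example, to present 4 vCPUs as a single socket with 2 cores and 2 threads rather than 4 sockets, add a <topology> element to the <cpu> element (a sketch; the values are illustrative):

    <vcpu placement='static'>4</vcpu>
    <cpu>
      <topology sockets='1' cores='2' threads='2'/>
    </cpu>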

Additionally, it may be useful, particularly on a NUMA system, to specify a topology matching (a subset of) the host and pin vCPUs to the matching elements (e.g. virtual cores on physical cores). See KVM Windows 10 Guest - CPU Pinning Recommended? on Reddit and PCI passthrough via OVMF: CPU pinning on ArchWiki. Be aware that, on my single-socket i5-3320M system, the matching configurations I tried performed worse than the default. Some expertise is likely required to get this right.

It may be possible to reduce jitter by pinning vCPUs to host cores, pinning the emulator and I/O threads to other host cores, and using a hook script with cset shield to ensure host processes don’t run on the vCPU cores. See Performance of your gaming VM.
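
A hedged sketch of what the pinning part might look like in domain XML; the host CPU numbers are hypothetical and must be chosen to match your host topology (the cset shield/hook script is separate):

    <iothreads>1</iothreads>
    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
      <vcpupin vcpu='1' cpuset='3'/>
      <vcpupin vcpu='2' cpuset='4'/>
      <vcpupin vcpu='3' cpuset='5'/>
      <emulatorpin cpuset='0-1'/>
      <iothreadpin iothread='1' cpuset='0-1'/>
    </cputune>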

Note that it is possible to set max CPUs in excess of current CPUs for CPU hotplug. See Linux KVM – How to add /Remove vCPU to Guest on fly ? Part 9.
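
In domain XML this is expressed with the current attribute of <vcpu>; for example, to boot with 2 vCPUs online and allow hotplugging up to 4:

    <vcpu placement='static' current='2'>4</vcpu>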

Hyper-V Enlightenments

QEMU supports several Hyper-V Enlightenments for Windows guests. virt-manager/virt-install enables some Hyper-V Enlightenments by default, but is missing several useful recent additions (virt-manager/virt-manager#154). I recommend editing the libvirt domain XML to enable the Hyper-V enlightenments which are not described as “nested specific”. In particular, hv_stimer reduces CPU usage when the guest is mostly idle.
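
As a sketch, the <hyperv> feature block might look roughly like this; drop any element your QEMU/libvirt versions reject (evmcs is omitted as nested-specific):

    <features>
      <hyperv>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
        <vpindex state='on'/>
        <runtime state='on'/>
        <synic state='on'/>
        <stimer state='on'>
          <direct state='on'/>
        </stimer>
        <reset state='on'/>
        <frequencies state='on'/>
        <tlbflush state='on'/>
        <ipi state='on'/>
      </hyperv>
    </features>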

Memory Size

When configuring the memory size, be aware of the system requirements (4 GB for Windows 11; 1 GB for 32-bit Windows 10 and 2 GB for 64-bit Windows 10) and the Memory Limits for Windows and Windows Server Releases, which vary by edition.

Memory Backing

If shared memory will be used (e.g. for virtio-fs discussed below), define a (virtual) NUMA zone and memory backing. The memory can be backed by files (which are flexible, but can have performance issues if not on hugetlbfs/tmpfs) or by memfd (since QEMU 4.0, libvirt 4.10.0). The memory can use Huge Pages (which have lower overhead, but can’t be swapped) or regular pages. (Note: If hugepages are not configured, Transparent Hugepages may still be used, if THP is enabled system-wide on the host system. This may be advantageous, since it reduces translation overhead for merged pages while still allowing swapping. Alternatively, it may be disadvantageous due to increased CPU use for defrag/compact/reclaim operations.)
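
For example, memfd-backed shared memory can be configured with a fragment like this (a sketch; add <hugepages/> inside <memoryBacking> to use huge pages instead of regular pages):

    <memoryBacking>
      <source type='memfd'/>
      <access mode='shared'/>
    </memoryBacking>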

Memory Ballooning

If memory ballooning will be used, set current memory to the initial amount and max memory to the upper limit. Be aware that the balloon size is not automatically managed by KVM. There was an Automatic Ballooning project which has not been merged. Unless a separate tool, such as oVirt Memory Overcommitment Manager, is used, the balloon size must be changed manually (e.g. using virsh setmem or virsh qemu-monitor-command --hmp "balloon $size") for the guest to use more than “current memory”. Also be aware that when the balloon is inflated, the guest shows the memory as “in use”, which may be counter-intuitive.
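
For example (a sketch; the sizes are illustrative), to boot with 4 GiB and allow the guest to grow to 8 GiB by deflating the balloon:

    <memory unit='GiB'>8</memory>                  <!-- max memory -->
    <currentMemory unit='GiB'>4</currentMemory>    <!-- initial ("current") memory -->
    <!-- within <devices>: -->
    <memballoon model='virtio'/>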

Machine Type

The Q35 Machine Type adds support for PCI-E, AHCI, PCI hotplug, and probably many other features, while removing legacy features such as the ISA bus. Historically it may have been preferable to use i440FX for stability and bug avoidance, but my experience is that it’s generally preferable to use the latest Q35 version (e.g. pc-q35-6.1 for QEMU 6.1).

Storage Controller

Paravirtualized storage can be implemented either as SCSI using virtio-scsi with the vioscsi driver, or as block storage using virtio-blk with the viostor driver. The choice is not obvious. In general, virtio-blk may be faster while virtio-scsi supports more features (e.g. pass-through, multiple LUNs, CD-ROMs, more than 28 disks). Citations:
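
For reference, a virtio-scsi disk might be declared roughly as follows (the image path is hypothetical); for virtio-blk, drop the controller and use <target dev='vda' bus='virtio'/> instead:

    <controller type='scsi' index='0' model='virtio-scsi'/>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/win11.qcow2'/>
      <target dev='sda' bus='scsi'/>
    </disk>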

Virtual Disk

Format

When choosing a format for the virtual disk, note that qcow2 supports snapshots while raw does not. However, raw is likely to have better performance due to lower overhead.

Alberto Garcia added support for Subcluster allocation for qcow2 images in QEMU 5.2. When using 5.2 or later, it may be prudent to create qcow2 disk images with extended_l2=on,cluster_size=128k to reduce wasted space and write amplification. Note that extended L2 always uses 32 sub-clusters, so cluster_size should be 32 times the filesystem cluster size (4k for NTFS created by the Windows installer).

Discard

I find it generally preferable to set discard to unmap so that guest discard/trim requests are passed through to the disk image on the host filesystem, reducing its size. For Windows guests, discard/trim requests are normally only issued when Defragment and Optimize Drives is run. It is scheduled to run weekly by default.
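
In the domain XML, this is the discard attribute on the disk’s <driver> element, e.g.:

    <driver name='qemu' type='qcow2' discard='unmap'/>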

I do not recommend enabling detect_zeroes to detect write requests with all zero bytes and optionally unmap the zeroed areas in the disk image. As the libvirt docs note: “enabling the detection is a compute intensive operation, but can save file space and/or time on slow media”.

Discard Granularity or SSD

Jared Epp also informed me of an incompatibility between the virtio drivers and defrag in Windows 10 and 11 (virtio-win/kvm-guest-drivers-windows#666) which causes defragment and optimize to take a long time and write a lot of data. There are two workarounds suggested:

  1. Use a recent version of the virtio-win drivers (0.1.225-1 or later?) which includes virtio-win/kvm-guest-drivers-windows#824 and set discard_granularity to a large value (Hyper-V uses 32M).

    For libvirt, discard_granularity can be set using <qemu:property> on libvirt 8.2 and later, or <qemu:arg> on earlier versions, as demonstrated by Pau Rodriguez-Estivill. Note: There was a patch to add discard_granularity to <blockio> but it was never merged, as far as I can tell.

  2. Emulate an SSD rather than a Thin Volume, as suggested by Pau Rodriguez-Estivill by setting rotation_rate=1 (for SSD detection) and discard_granularity=0 (to change the MODE PAGE POLICY to “Obsolete”?). These settings were inferred from QEMU behavior. It’s not clear to me why this avoids the slowness issue.

    For libvirt, rotation_rate can be set on <target> of <disk>. As above, discard_granularity can be set using <qemu:property> on libvirt 8.2 and later, or <qemu:arg> on earlier versions.

I am unsure which approach to recommend, although I am currently using discard_granularity=32M. Stewart Bright noted some differences between SSD and Thin Provisioning behavior in Windows guests. In particular, I’m curious how slabs and slab consolidation behave. Interested readers are encouraged to investigate further and report their findings.
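
For reference, a sketch of workaround 1 using <qemu:property> (libvirt 8.2 or later); ua-disk0 is a hypothetical user-assigned alias that must also be set on the corresponding <disk> via <alias name='ua-disk0'/>:

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      ...
      <qemu:override>
        <qemu:device alias='ua-disk0'>
          <qemu:frontend>
            <!-- 33554432 bytes = 32M discard granularity -->
            <qemu:property name='discard_granularity' type='unsigned' value='33554432'/>
          </qemu:frontend>
        </qemu:device>
      </qemu:override>
    </domain>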

Video

There are several options for graphics cards. VGA and other display devices in qemu by Gerd Hoffmann has practical descriptions and recommendations (kraxel’s news is great for following progress). virtio-win drivers 0.1.208 and later include the viogpudo driver for virtio-vga. (Bug 1861229) Unfortunately, it has some limitations:

However, unless the above limitations are critical for a particular use case, I would recommend virtio-vga over QXL based on the understanding that it is a better and more promising approach on technical grounds and that it is where most current development effort is directed.

Warning: When using BIOS firmware, the video device should be connected to the PCI Express Root Complex (i.e. <address type='pci' bus='0x00'>) in order to access the VESA BIOS Extensions (VBE) registers. Without VBE modes, the Windows installer is limited to grayscale at 640x480, which is not pleasant.

Note that QEMU and libvirt connect video devices to the Root Complex by default, so no additional configuration is required. However, if a second video device is added using virt-manager or virt-xml, it is connected to a Root Port or PCIe-to-PCI bridge, which creates problems if the first device is removed (virt-manager/virt-manager#402).
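
For example, a virtio-vga device placed directly on bus 0 might look like this (the slot number is illustrative and must not conflict with other devices):

    <video>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>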

Note: If 3D acceleration is enabled for virtio-vga, the VM must have a Spice display device with OpenGL enabled to avoid an “opengl is not available” error when the VM is started. Since the viogpudo driver does not support 3D acceleration, I recommend disabling both.

Keyboard and Mouse

I recommend adding a “Virtio Keyboard” and “Virtio Tablet” device in addition to the default USB or PS/2 Keyboard and Mouse devices. These are “basically sending linux evdev events over virtio”, which can be useful for a keyboard or mouse with special features (e.g. keys/buttons not supported by PS/2). There may also be a latency or performance advantage. Note that it is not necessary to remove the USB or PS/2 devices: QEMU routes input events to the virtio-input devices once the guest has initialized them, and since virtio input devices do not work without drivers, removing the PS/2 devices can make setup and recovery more difficult.
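
These devices are declared in the domain XML as:

    <input type='keyboard' bus='virtio'/>
    <input type='tablet' bus='virtio'/>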

TPM

Windows 11 requires TPM 2.0. Therefore, I recommend adding a QEMU TPM Device to provide one. Either TIS or CRB can be used. “TPM CRB interface is a simpler interface than the TPM TIS and is only available for TPM 2.” If emulated, swtpm must be installed and configured on the host. Note: swtpm was packaged for Debian in 2022 (Bug 941199), so it is not available in Debian 11 (Bullseye) or earlier releases.
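
For an emulated TPM 2.0 using the CRB interface, the domain XML is roughly:

    <tpm model='tpm-crb'>
      <backend type='emulator' version='2.0'/>
    </tpm>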

RNG

It may be useful to add a virtio-rng device to provide entropy to the guest. This is particularly true if the vCPU does not support the RDRAND instruction or if RDRAND is not trusted.
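
For example, a virtio-rng device fed from the host’s /dev/urandom:

    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
    </rng>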

File/Folder Sharing

There are several options for sharing files between the host and guest with various trade-offs. Some common options are discussed below. My recommendation is to use SMB/CIFS unless you need the features or performance offered by virtio-fs (and like living on the bleeding edge).

Virtio-fs

Libvirt supports sharing virtual filesystems using a protocol similar to FUSE over virtio. It is a great option if the host and guest can support it (QEMU 5.0, libvirt 6.2, Linux 5.4, Windows virtio-win drivers 0.1.187). It has very high performance and supports many of the filesystem features and behaviors of a local filesystem. Unfortunately, it has several significant issues, including configuration difficulty, lack of support for migration or snapshots, and Windows driver issues, each explained below:

Virtio-fs requires shared memory between the host and guest, which in turn requires configuring a (virtual) NUMA topology with shared memory backing. See Sharing files with Virtio-FS. Also ensure you are using a version of libvirt which includes the apparmor policy patch to allow libvirtd to call virtiofsd (6.7.0 or later).
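
Once shared memory is configured, the export itself is a <filesystem> device; a sketch (the host directory and target tag are hypothetical):

    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs'/>
      <source dir='/srv/share'/>
      <target dir='hostshare'/>
    </filesystem>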

Migration with virtiofs device is not supported by libvirt, which also prevents saving and creating snapshots while the VM is running. This is difficult to work around since live detach of device ‘filesystem’ is not supported by libvirt for QEMU.

The Windows driver has been released with several severe known bugs, such as:

My offer to assist with adding tests (virtio-win/kvm-guest-drivers-windows#531) has seen very little interest or action. It’s not clear to me who’s working on virtio-fs and how much interest it has at the moment.

Virtio-9p

Although it is not an option for Windows guests due to lack of a driver (virtio-win/kvm-guest-drivers-windows#126), it’s worth noting that virtio-9p is similar to virtio-fs except that it uses the 9P distributed file system protocol, which is supported by older versions of Linux and QEMU and has the advantage of being used and supported outside of virtualization contexts. For a comparison of virtio-fs and virtio-9p, see the virtio-fs patchset on LKML.

SPICE Folder Sharing (WebDAV)

SPICE Folder Sharing is a relatively easy way to share directories from the host to the guest using the WebDAV protocol over the org.spice-space.webdav.0 virtio channel. Many libvirt viewers (remote-viewer, virt-viewer, Gnome Boxes) provide built-in support. Although virt-manager does not (virt-manager/virt-manager#156), it can be used to configure folder sharing (by adding an org.spice-space.webdav.0 channel), with another viewer used to run the VM and serve files. Note that users have reported performance is not great and the SPICE WebDAV Daemon must be installed in the guest to share files.

SMB/CIFS

Since Windows supports SMB/CIFS (aka “Windows File Sharing Protocol”) natively, it is relatively easy to share files between the host and guest if networking is configured on the guest. Either the host (with Samba or KSMBD) or the guest can act as the server. For a Linux server, see Setting up Samba as a Standalone Server. For Windows, see File sharing over a network in Windows 10. Be aware that, depending on the network topology, file shares may be exposed to other hosts on the network. Be sure to adjust the server configuration and add firewall rules as appropriate.

Channels

I recommend adding the following Channel Devices:

  • com.redhat.spice.0 (spicevmc) for the SPICE Agent
  • org.qemu.guest_agent.0 (unix) for the QEMU Guest Agent
  • org.spice-space.webdav.0 (spiceport) for SPICE Folder Sharing (WebDAV), if using.
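
In domain XML, these channels look roughly like the following:

    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
    </channel>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
    </channel>
    <channel type='spiceport'>
      <source channel='org.spice-space.webdav.0'/>
      <target type='virtio' name='org.spice-space.webdav.0'/>
    </channel>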

Notes

There are some differences between the “legacy” 0.9/0.95 version of the virtio protocol and the “modern” 1.0 version. Recent versions (post-2016) of QEMU and libvirt use 1.0 by default. For older versions, it may be necessary to specify disable-legacy=on,disable-modern=off to force the modern version. For details and steps to confirm which version is being used, see Virtio 1.0 and Windows Guests.

Guest OS Installation

I recommend configuring the guest with two SATA CD-ROM devices during installation: One for the Windows 10 ISO or Windows 11 ISO, and one for the virtio-win ISO. At the “Where would you like to install Windows?” screen, click “Load Driver” then select the appropriate driver as described in How to Install virtio Drivers on KVM-QEMU Windows Virtual Machines.
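
For example, the two CD-ROM devices might be declared as follows (the ISO paths are hypothetical):

    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/Win11.iso'/>
      <target dev='sdb' bus='sata'/>
      <readonly/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/virtio-win.iso'/>
      <target dev='sdc' bus='sata'/>
      <readonly/>
    </disk>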

Bypass Hardware Checks

If the guest does not satisfy the Windows 11 System Requirements, you can bypass the checks by:

  1. Press Shift-F10 to open Command Prompt.
  2. Run regedit.
  3. Create key HKEY_LOCAL_MACHINE\SYSTEM\Setup\LabConfig with one or more of the following DWORD values:
    • BypassRAMCheck set to 1 to skip memory size checks.
    • BypassSecureBootCheck set to 1 to skip SecureBoot checks.
    • BypassTPMCheck set to 1 to skip TPM 2.0 checks.
  4. Close regedit.
  5. If the “This PC can’t run Windows 11” screen is displayed, press the back button.
  6. Proceed with installation as normal.

Be aware that Windows 11 is not supported in this scenario and doing so may prevent some features from working.

virtio-win Drivers

Drivers for VirtIO devices can be installed by running the virtio-win-drivers-installer, virtio-win-gt-x64.msi (Source, available on the virtio-win ISO), or by using Device Manager to search for device drivers on the virtio-win ISO.

The memory ballooning service is installed by virtio-win-drivers-installer. To install it manually (for troubleshooting or other purposes):

  1. Copy blnsrv.exe from virtio-win.iso to somewhere permanent (since the install command registers the service using the current location of the exe).
  2. Run blnsrv.exe -i as Administrator.
  3. Reboot (necessary, per Bug 612801).

Note that the virtio-win-drivers-installer does not currently support Windows 11/Server 2022 (Bug 1995479). However, it appears to work correctly for me. It also does not support Windows 7 and earlier (#9). For these systems, the drivers must be installed manually.

virtio-fs

To use virtio-fs for file sharing, in addition to installing the viofs driver, complete the following steps (based on a comment by @FailSpy):

  1. Install WinFSP.
  2. Copy winfsp-x64.dll from C:\Program Files (x86)\WinFSP\bin to C:\Program Files\Virtio-Win\VioFS.
  3. Ensure the VirtioFSService created by virtio-win-drivers-installer is stopped and has Startup Type: Manual or Disabled. (Enabling this service would work, but would make shared files only accessible to elevated processes).
  4. Create a scheduled task to run virtiofs.exe at logon using the following PowerShell:
    $action = New-ScheduledTaskAction -Execute 'C:\Program Files\Virtio-Win\VioFS\virtiofs.exe'
    $trigger = New-ScheduledTaskTrigger -AtLogon
    $principal = New-ScheduledTaskPrincipal 'NT AUTHORITY\SYSTEM'
    $settings = New-ScheduledTaskSettingsSet -AllowStartIfOnBatteries -ExecutionTimeLimit 0
    $task = New-ScheduledTask -Action $action -Principal $principal -Trigger $trigger -Settings $settings
    Register-ScheduledTask Virtio-FS -InputObject $task
    

QEMU Guest Agent

The QEMU Guest Agent can be used to coordinate snapshot, suspend, and shutdown operations with the guest, including post-resume RTC synchronization. Install it by running qemu-ga-x86_64.msi (available in the guest-agent directory of the virtio-win ISO).

QXL Driver

If the virtual machine is configured with QXL graphics instead of virtio-vga, as discussed in the Video section, a QXL driver should be installed. For Windows 8 and later, install the QXL-WDDM-DOD driver (Source). On Windows 7 and earlier, the QXL driver (Source) can be used. The driver can be installed from the linked MSI, or from the qxldod/qxl directory of the virtio-win ISO.

SPICE Agent

For clipboard sharing and display size changes, install the SPICE Agent (Source).

Note: Some users have reported problems on Windows 11 (spice/win32#11). However, it has been working without issue for me.

SPICE WebDAV Daemon

To use SPICE folder sharing, install the SPICE WebDAV daemon (Source).

SPICE Guest Tools

Instead of installing the drivers/agents separately, you may prefer to install the SPICE Guest Tools (Source) which bundles the virtio-win Drivers, QXL Driver, and SPICE Agent into a single installer.

Warning: It does not include the QEMU Guest Agent and is several years out of date at the time of this writing (last updated on 2018-01-04 as of 2021-12-05).

QEMU Guest Tools

Another alternative to installing drivers/agents separately is to install the QEMU Guest Tools (Source) which bundles the virtio-win Drivers, QXL Driver, SPICE Agent, and QEMU Guest Agent into a single installer. virtio-win-guest-tools.exe is available in the virtio-win ISO.

Post-Installation Tasks

Remove CD-ROMs

Once Windows is installed, one or both CD-ROM drives can be removed. If both are removed, the SATA Controller may also be removed.

virtio-scsi CD-ROM

For a low-overhead CD-ROM drive, a virtio-scsi drive can be added by adding a VirtIO SCSI controller (if one is not already present) then a CD-ROM on the SCSI bus.

Defragment and Optimize Drives

If discard was enabled for the virtual disk, Defragment and Optimize Drives in the Windows guest should show the drive with media type “Thin provisioned drive” (or “SSD”, if configured with rotation_rate=1). It may be useful to configure a disk optimization schedule to trim/discard unused space in the disk image.

Additional Resources

ChangeLog

2022-10-22

2022-06-12

2022-06-02

  • Add warning to Video about missing VBE modes when the video device is connected to a PCIe Root Port rather than the Root Complex.
  • Improve discussion of UEFI firmware images.
  • Add note about -boot order=, bootindex, and UEFI.

2022-05-09

2022-05-06

  • Discuss Windows licensing limits on sockets in CPU Topology section, thanks to Jared Epp.
  • Discuss slow operation and excessive writes performed by defrag on Windows 10 and 11, also thanks to Jared Epp.
  • Add Memory Size section to note minimum and maximum size limits for different Windows editions.
  • Add quote from Paolo Bonzini about virtio-blk use for high-performance.

2022-03-19

  • Fix broken link to my example libvirt domain XML. Thanks to Peter Greenwood for notifying me.
  • Rewrite the “Wayland Keyboard Inhibit” section to improve clarity.

2022-01-13

  • Recommend virtio-vga with the viogpudo driver instead of QXL with the qxldod or qxl driver.