Late 2021 edit: Nvidia finally removed this check from their driver, so these cards can now be used in passthrough setups with the binary drivers.

It appears that the Nvidia GPU drivers (both the Windows and Linux ones, after a certain point) don’t particularly want to be run under a hypervisor. They check the hypervisor vendor signature reported by CPUID and refuse to initialize when it matches one of the following:

  • Microsoft Hv (the Hyper-V vendor id)
  • VMwareVMware (VMware)
  • XenVMMXenVMM (Xen)
  • KVMKVMKVM (KVM)
  • Parallels

In addition, some of the model specific registers that KVM exposes (such as the KVM wallclock) also anger their drivers:
#define MSR_KVM_WALL_CLOCK_NEW      0x4b564d00
#define MSR_KVM_SYSTEM_TIME_NEW     0x4b564d01
#define MSR_KVM_ASYNC_PF_EN         0x4b564d02
#define MSR_KVM_STEAL_TIME          0x4b564d03
#define MSR_KVM_PV_EOI_EN           0x4b564d04

Luckily, parameters exist on KVM to modify the CPUID and the Hyper-V vendor id, and to disable the KVM model specific registers. Xen, on the other hand, has a somewhat obtuse-looking option for modifying its CPUID (sure, let me just type in the entire leaf in binary) and no current ability to modify the Hyper-V vendor id. I guess the quick and dirty way of working around that would be either to disable Hyper-V, as one used to do on KVM, or to modify these lines to change the ‘Microsoft Hv’ signature to something else.
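On KVM with libvirt, the relevant knobs look roughly like this domain XML fragment (a sketch; the spoofed vendor_id value is arbitrary, anything up to 12 characters works):

```xml
<features>
  <hyperv>
    <!-- spoof the 'Microsoft Hv' signature with an arbitrary string -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- hide the 'KVMKVMKVM' CPUID signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```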

Why are Nvidia’s drivers so hostile to their cards running under hypervisors? One merely has to look at the pricing of their Quadro line, which they have graciously allowed to run under a hypervisor, to figure out their motivations.