** DRAFT ** 

Introduction

In the future, computer operating systems and hardware will be smart enough to allow apps to run in an operating-system-agnostic way. To me this means that a computer could run a Windows app, a Mac app, and a Linux app (or BeOS, or FreeBSD, or Plan9, or Android, or anything, really…) side by side, with performance as if each were running on bare-metal hardware.

In order to support that in the most efficient (read: least overhead) and most secure way possible, there must be hardware to assist with this level of compartmentalization. Anyone working in technology has likely heard the industry buzzwords: virtualization and containerization.

This type of technology is not new in enterprise businesses; virtualization came into vogue over five years ago. The migration of heavy computing workloads from the CPU to GPUs (and other specialized PCIe add-in cards) has driven the need for virtual machines and containers to access non-virtual resources at “bare-metal” speeds.

This article is about leveraging some of those technologies to set up Fedora 26 on a Ryzen 5 or 7 system to be able to boot a Windows virtual machine that has direct access to a real “bare-metal” hardware graphics card. We will “pass through” a real graphics card to a virtual machine (VM).

The first guide on this that I put together was based on Arch Linux and the Intel “Skylake” series CPUs. The Skylake CPUs were wicked fast, but limited to only four computing cores. With AMD’s Ryzen, we’ve finally got an affordable desktop CPU with more than four cores.

It is even possible to set up a single installation of Windows that can be booted both on bare-metal hardware and under virtualization, without too much headache. It is more efficient and less wasteful to maintain a single installation of Windows that can be used in either scenario.

One of the things I have covered in past videos, and one of the first things that is almost necessary, is verifying that one's hardware properly supports this type of setup.

Getting Started

Before getting started, make sure the hardware one has (or the hardware one plans to have) properly supports this. This guide covers installing Windows directly to NVMe or SSD and then booting that device inside a VM under Linux.

One will need two GPUs to follow along with this guide, though it is possible with some GPUs to do this with a single graphics card.

Based on my experiences, I recommend the ASRock X370 Taichi or the Gigabyte Gaming 5 and they can be purchased here:

(TODO)

If one does not have one of these motherboards, that is okay. With the AGESA 1006 update, many Ryzen AM4 motherboards support proper IOMMU grouping, which is necessary for efficient (and reliable) PCIe device isolation, the crucial technology that allows a virtual machine to access the real hardware of a secondary graphics card. There is some variability in support: I have not tested this on the Asus Crosshair VI Hero (as I do not yet have access to one), but I am aware from helping some of our forum members that the built-in secondary SATA controller on the CH6 is not in an isolated IOMMU group as it is on the ASRock Taichi.

It is not necessary to have an isolated SATA controller, but one can speed things up a bit by passing the SATA controller through to the virtual machine (the same way we will pass through the graphics card). This I/O inefficiency is largely due to Ryzen being a new platform, and I am confident it will improve over time.

So, if one is not sure about one’s IOMMU groups, run this script and examine one’s IOMMU situation.

#!/bin/bash
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done
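
To run it, save the script to a file (iommu.sh is just a placeholder name, not something from the guide) and invoke it with bash; one is looking for the secondary graphics card and its HDMI audio function to be in a group of their own:

# bash iommu.sh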

 

One will need to ensure that IOMMU and SVM are enabled in the UEFI firmware settings; this is covered in the video for the Gigabyte Gaming 5 and ASRock X370 Taichi.

With all of that in a good spot, one can start by doing an install of Windows to a dedicated storage device. I would recommend removing the other storage devices first. This guide supports installing Windows to NVMe or SSD.

(TODO)

If one happens to be using an SSD and has picked the ASRock X370 Taichi (or another board that has a SATA controller in an isolated IOMMU group), one can run the SATA SSD from that controller and pass the entire controller through to the VM if the I/O performance is unsatisfactory.

Installing Fedora 26

Install Fedora 26 according to one’s liking. I would suggest removing the Windows drive until after Fedora has been installed and updated.

# dnf update
# After rebooting to deal with any kernel updates, 
# dnf install @virtualization
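
The @virtualization group pulls in libvirt and virt-manager. Depending on how the system was installed, the libvirt daemon may not be enabled yet; if so, turn it on before continuing:

# systemctl enable --now libvirtd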

 

Configuring Fedora for this type of project is a much more straightforward process than it has been in years gone by. At this point it is one of the easiest experiences I have ever had setting up this type of thing, and there are really only a few key steps to get the system going.

In order to prevent the Nvidia or AMD drivers from grabbing all the graphics cards in the system at boot time, it is necessary to use a “stub” driver. The stub driver has changed somewhat over the years and there is more than one. We will be using the VFIO driver to “capture” our secondary graphics card and prevent the normal driver from loading, so that the virtual machine can load its own (Windows) driver for the video card.

First, configure GRUB to enable the IOMMU and to load the vfio-pci driver at boot time (it is not loaded by default):

# Editing: /etc/default/grub 
# On the GRUB_CMDLINE_LINUX line add:
# iommu=1 amd_iommu=on rd.driver.pre=vfio-pci
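
As a sketch of the end result, the finished line might look something like the following; the rhgb quiet options shown are only a typical Fedora default, so keep whatever is already on the line and append the new parameters:

GRUB_CMDLINE_LINUX="rhgb quiet iommu=1 amd_iommu=on rd.driver.pre=vfio-pci"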

 

Next go to /etc/modprobe.d/ and edit kvm.conf:

Make sure the kvm_amd and kvm_intel nested=1 lines are commented out, keeping AVIC and nested page tables disabled, since they currently have performance/stability issues.
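
For reference, the stock kvm.conf shipped with Fedora typically looks something like the lines below; the important part is simply that the options lines remain commented out:

# /etc/modprobe.d/kvm.conf
#options kvm_intel nested=1
#options kvm_amd nested=1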

Edit vfio.conf and add options to specify the vendor and device IDs of one’s graphics card that one wishes to pass through to the virtual machine:
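
As a sketch, the file ends up looking like the following; the IDs shown are placeholders for a hypothetical Nvidia card and its HDMI audio function, and must be replaced with the IDs of one's own devices:

# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:13c2,10de:0fbb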

 

Note that if one has two identical graphics cards, it is necessary to do something a bit different here. TODO.

If one is not sure what the vendor and device ID are of one’s graphics card, run

# lspci -nn

 

…and one should see a listing of PCIe devices on the system, along with their vendor and device IDs (the numbers in brackets).
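
Since the full listing is long, it can help to filter it down to the display and audio devices; the GPU's HDMI audio function usually needs to be passed through alongside the GPU itself:

# lspci -nn | grep -iE 'vga|audio'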

Because of the way the boot sequence works on Linux, we now have to update the initial ram disk on Fedora. The initial ram disk contains drivers required for essential system operations, including video drivers and the VFIO driver. Fedora can be asked to examine our configuration and regenerate the initial ramdisk based on our new vfio parameters with this command:

# dracut -f --kver `uname -r`

 

This does not also update GRUB, the Linux boot loader, so it is necessary to regenerate its configuration as well:

# grub2-mkconfig -o /etc/grub2-efi.cfg
# this is just a symlink to somewhere in the labyrinth that is /boot on EFI
# systems. Non-EFI systems have a different symlink...

 

Once this has completed without errors, reboot once again. Then shut down the system, reattach the Windows storage device, and go into the UEFI to make sure the system is still configured to boot from the Linux block device.
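
Before building the VM, it is worth confirming that vfio-pci actually claimed the secondary card after the reboot; lspci can report which kernel driver is bound to each device:

# lspci -nnk | grep -iA3 vga
# the secondary card should report "Kernel driver in use: vfio-pci"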

Back in Fedora, the virtual machine can be configured in the virt-manager GUI. One generally has only to:

Configure the VM;

Add the hardware (graphics card, USB devices such as keyboard and mouse);

Remove the Spice or VNC “virtual” graphics hardware.

In so doing, one generates a configuration XML file for this virtual machine under /etc/libvirt/qemu/ that can be edited to fine-tune some things.

Because of performance issues, and because we want to pass through the Windows storage device, one can manually edit the virtual machine xml file directly:
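
One way to do that is with virsh edit, which opens the machine's XML in an editor and validates it on save; editing the file under /etc/libvirt/qemu/ directly also works, but then the changes must be loaded back in with virsh define as shown later. Here vmname stands for whatever the machine was called in virt-manager:

# virsh edit vmname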

 

Here are the full changes to the xml file:

# Inside the <domain> section, after the </memoryBacking> closing tag
<vcpu placement='static'>8</vcpu>
  <iothreads>4</iothreads>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
    <iothreadpin iothread='1' cpuset='0-1'/>
    <iothreadpin iothread='2' cpuset='2-3'/>
    <iothreadpin iothread='3' cpuset='4-5'/>
    <iothreadpin iothread='4' cpuset='6-7'/>
  </cputune>
 
# inside the <features> </features> tag add
<hyperv>
      <relaxed state='on'/>
</hyperv>
# or add relaxed state on if the hyperv section is already present.
 
# if you have an nvidia card add: 
<kvm>
      <hidden state='on'/>
</kvm>
# to resolve “Error Code 43” problems 
 
# I recommend changing the cpu mode section to
<cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='2'/>
</cpu>
 
# Also reported to work well:
<cpu mode='host-passthrough'>
    <topology sockets='1' cores='8' threads='1'/>
</cpu>
 
# finally, modify the hard disk block device to use a real
# device instead of a hard drive image file. Before
# modifications (for example):
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/home/w/win10/turds.qcow2'/>
      <target dev='sda' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
 </disk>
# after modifications to a block device appropriate for your
# system (e.g. learned from lsblk as in the video) 
<disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sda'/>
      <target dev='vdb' bus='virtio'/>
 </disk>
# note that the old <address type='drive' .../> element should be removed when
# switching to the virtio bus; libvirt will generate a suitable address on its own
# if you didn’t install the virtio drivers, or they aren’t
# working, bus='sata' works fine too
 
Once one has tuned the XML to one’s liking, feel free to post it here so that others can benefit.

Now one must load the edited XML back into the virtual machine manager with a command such as:

# virsh define vmname.xml  

 

Where vmname is whatever one has named the virtual machine.
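
If one prefers the command line, the machine can also be started with virsh rather than from the GUI:

# virsh start vmname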

It should now be possible to turn on the virtual machine from the virtual machine manager and see display outputs from the monitor output ports of the secondary graphics card.

One should be greeted by the TianoCore open-source UEFI before booting into Windows.

 

Next up: Game benchmarks on the VM vs Bare Metal!