TL;DR: tell me if this is a waste of time before I spend forever tinkering on something that will always be janky.
I want to run multiple OSs on one machine, including Linux, Windows, and maybe OSX, from a host with multiple GPUs + an iGPU. I know there are multiple solutions, but I’m looking for advice, opinions, and experience. I know I can google the how-to, but is this worth pursuing?
I currently dual boot Bazzite and Ubuntu, for gaming and development respectively. I love Bazzite’s ease of updates, and Ubuntu is where it’s at for testing and building frontier AI/ML tools.
What if I kept my computer running a thin hypervisor 24/7 and switched VMs based on my working context? I could pass through hardware as needed.
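To make that concrete, here’s a rough sketch of what the context switch could look like with libvirt’s Python bindings. This is untested on my end and the domain names are made up:

```python
# A sketch of "switching context" by swapping which VM owns the GPU.
# Assumes two pre-defined libvirt domains (names are made up) whose XML
# both reference the same passed-through GPU.
import libvirt

GAMING_VM = "bazzite-gaming"  # hypothetical domain name
WORK_VM = "ubuntu-dev"        # hypothetical domain name

def switch_to(conn: libvirt.virConnect, start: str, stop: str) -> None:
    old = conn.lookupByName(stop)
    if old.isActive():
        old.shutdown()  # ask the guest to power off and release the GPU
    # In reality you'd poll until the old domain is really off, since a
    # PCI device can't be owned by two guests at once.
    conn.lookupByName(start).create()  # boot the other context

conn = libvirt.open("qemu:///system")
switch_to(conn, start=WORK_VM, stop=GAMING_VM)
```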
Proxmox? XCP-ng? Debian + QEMU? Anyone living with these as their daily-driver machines (not homelabs/server hosts)?
This is inspired by Chris Titus’s (YouTube) setup on Arch, but 1) I don’t know Arch, 2) I have a fairly beefy i7 265K 192GB build while he’s on an enterprise Xeon DDR5 build, so a different power class, and 3) I have a heterogeneous mix of graphics cards I’m hoping to pass through depending on workload.
Use cases:
- Bazzite + 1 gpu for gaming
- Ubuntu + 1 or more GPUs for work
- Windows + 0 or more GPUs for music production (paid VSTis) and kernel-level anti-cheat games (GTA V, etc.)
- OSX? Lightroom? GPU?
Edit: Thank you all for your thoughts and contributions
Edit: what I’ve learned
- this is viable but might be a pain
- a Windows VM for getting around anti-cheat in games defeats the purpose. I’d need a dual boot for that use case
- Hyper-V is a no. Qubes, QEMU, libvirt: yes
- may want to just put everything on separate disks and boot / VM into them as needed
Edit: distrobox/docker work great but don’t fit all my needs because I can’t install kernel modules in them (AFAIK)
Windows VMs for beating kernel-level anti-cheat take a lot of work to prevent detection. I recommend dual booting instead for that use case.
For the Linux environments I’d recommend containers (Podman/Docker), systemd-nspawn, or libvirt. All three run on the host kernel (with KVM as the hypervisor in libvirt’s case) and don’t require much setup.
Containers can also share the GPU with the host easily.
Your setup would be Hardware > Windows | Bazzite > Ubuntu (container) | OSX (libvirtd)
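For example, with Podman and the NVIDIA container toolkit, GPU sharing is a one-liner. A minimal sketch, assuming a CDI spec has already been generated (`nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml`); image and device names are the stock examples:

```python
# A sketch: run nvidia-smi in a throwaway container that shares the host
# GPU. Assumes Podman plus nvidia-container-toolkit with CDI set up.
import subprocess

subprocess.run(
    ["podman", "run", "--rm",
     "--device", "nvidia.com/gpu=all",  # CDI device name from the spec
     "docker.io/library/ubuntu", "nvidia-smi"],
    check=True,
)
```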
Edit: You can also triple boot with Windows, Bazzite, and Ubuntu, or add a Proxmox/whatever hypervisor disk and try it out without touching your working system.
I’ve read through the thread and your edit sounds like the best option for me. It gives direct hardware access and gets everything working right away but allows me to try out a hypervisor solution.
I love and use containers/Distrobox all the time and it all works great, except that I do run into problems with firmware and kernel modules, because you can’t containerize those (or I haven’t figured out how yet).
My solution to this problem was to buy a $180 Dell workstation off eBay and install Ubuntu on that as my main workstation. My gaming desktop is now in the basement and runs Sunshine. Moonlight over LAN is basically native, and it solves the annoying reboot-to-switch-tasks scenario.
This might/probably will be the end game for me as well. Sunshine for remote gaming and RustDesk/SSH for work.
Watch out regarding local streaming like Moonlight/Sunshine for kernel-level anti-cheat games. I remember testing it earlier this year, and Valorant wouldn’t accept my mouse input when using Moonlight/Sunshine from a Windows PC into another machine (which happened to be running Linux Mint, though I’m not sure that had any relevance).
My wife and I have both been using this setup for over a year and we’ve never looked back
I hacked together a similar setup back when, for logistical reasons, I had to squeeze everything on one machine.
It’s doable, but be prepared for some challenges.
You will probably have a much better time than I did given the abundance of RAM. Getting graphics to work properly is the most arduous part; I never really figured that part out. Having to forward USB accessories also got really tedious.
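One thing that would have helped me: libvirt at least lets you script the USB juggling. A sketch with the Python bindings; the IDs and the domain name are placeholders:

```python
# A sketch: hot-attach a USB device (e.g. an audio interface) to a running
# guest instead of re-plugging by hand. Vendor/product IDs are placeholders;
# find the real ones with `lsusb`. Domain name is made up.
import libvirt

USB_HOSTDEV = """
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("ubuntu-dev")  # hypothetical domain name
dom.attachDevice(USB_HOSTDEV)  # detachDevice() with the same XML undoes it
```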
It’s very complex and overkill, but it’s not a bad idea if you want to do it. There’s a YouTuber, SomeOrdinaryGamers, who runs a Linux base OS with guest OSs on top via GPU passthrough. He has some videos explaining his setup, how it all works, and how to configure it.
You could also take a look at this Linux distro designed for exactly what you want to do: https://www.qubes-os.org/.
I will look into this. I’ve heard of Qubes but only thought of it for security/privacy use cases.
I use VMs a lot for desktops doing development work. It’s certainly possible, with some caveats. Here’s a bit of a knowledge-dump that may be helpful.
Multiple displays can be tricky. If you just want a single monitor you’re fine, but hypervisors have varying support for more than one display, and some do it better than others. VirtualBox had the absolute best support for it. Hyper-V has “support” for it, but only really if you use their preferred Ubuntu install. I’m not sure how libvirt handles it. Remote display protocols have varying support and can be options: RustDesk seems to support it very nicely (one window per display, like VirtualBox), but I’ve had lots of keyboard issues with it.
Running VMs is less efficient; you’ll need more memory in the base system to handle each VM, but it sounds like you’re pretty decked out there.
Access to hardware can be tricky. Hyper-V sucks at this; don’t use it if you can help it. I’ve never quite done gaming in a VM, but I suspect this would be the most problematic part. I have done libvirt passthrough of an NVIDIA card, but only for video encoding/decoding with Jellyfin, which does work just fine. But VM displays tend not to be well accelerated, so I would expect other issues. As an example, my Hyper-V Linux guest can’t play video above a small resolution without the audio skipping and losing sync.
I’d avoid Hyper-V entirely. It absolutely sucks as a desktop environment. It relies heavily on RDP “enhanced mode” for everything, has shitty support for hardware pass-through, etc. Just a complete fail.
VirtualBox was really quite good for a desktop environment, but Oracle is more law firm than software company these days. There are hidden license issues with using any of the extensions (e.g., accelerated displays).
Libvirt I don’t have as much experience with unfortunately. I’ve usually emulated Linux on Windows not the other way 'round.
I think I’m sold on NOT using Hyper-V. I will also keep it to one monitor because that’s my current setup.
I work four days a week on a remote Windows VM. It has everything I need, and I remote from /that/ onto whatever other VM I might need. I connect over a VPN using, well, anything. As you’ve pointed out, the local machine doesn’t need much in the way of specs, although in my case I have three monitors, all given over to the remote. It’s a clean way to separate work’s environment and network from my own, and it’s a very common work pattern. The hypervisor there is VMware, but that doesn’t matter.
But… gaming is a different story. There is latency over the connection, and audio/graphics lag would make FPS and GPU-heavy games particularly poor. I don’t know of a way to totally overcome that, although game-streaming services exist, so presumably it is possible.
I work on remote and local machines and I feel comfortable on both. For this project, I have the advantage of all of my hardware being in my house. Latency wouldn’t be an issue over my wired network. Sunshine would allow me to game locally for sure.
Conceptually, not a problem. Windows 11 runs on top of Hyper-V with no performance issues. In reality, I think you will spend a lot of time and hit lots of weird edge cases and performance issues, especially trying to get the Linux and Windows guests to coexist nicely.
That said, I’d love to watch you try :)
I will try to post some updates as I Frankenstein this build
You will need to continue dual booting or you will need to use two computers. I promise there is no hope in virtualization for your use case. I have personally considered myself the exception to this rule and discovered everyone else was right all along. Save yourself the time.
Pro tip: you will not find a consumer CPU + chipset that supports 2 GPUs at full speed. If you think you have, read the specs closer. You may throw away your M.2 bandwidth as well. If you can afford an enterprise CPU with enough PCIe lanes, you should just build two computers.
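You can check what your cards actually negotiated straight from sysfs. A quick sketch that reads the standard PCIe link attributes:

```python
# A sketch: print what link each GPU actually trained at. A card sold as
# x16 often runs x8 or x4 once a second GPU and the M.2 slots carve up
# the CPU's lanes.
from pathlib import Path

for dev in Path("/sys/bus/pci/devices").iterdir():
    if not (dev / "class").read_text().startswith("0x03"):  # display class
        continue
    link = dev / "current_link_width"
    if not link.exists():  # e.g. some iGPUs expose no PCIe link info
        continue
    width = link.read_text().strip()
    max_width = (dev / "max_link_width").read_text().strip()
    speed = (dev / "current_link_speed").read_text().strip()
    print(f"{dev.name}: x{width} (max x{max_width}) @ {speed}")
```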
I am very aware of the PCIe lane circus you have to go through on consumer hardware. I considered building a used/refurb EPYC system and even borrowed one from work for a bit. It was nice, but my build would have put me somewhere near $6000 at best. Hahahahh 😭
This only works for simple workstation stuff, not for power users trying to get max performance. My recommendation is dual/triple boot or separate computers.
> This only works for simple workstation stuff, not for power users trying to get max performance.
This is quite untrue. I do professional development work in a hypervisor VM. It’s not “as fast” as native but it’s more than adequate.
I’m going to try both ways. For most of my computing, I’d be happy to get 80% of the performance if I get the convenience of not having to reboot across OSs and the ability to snapshot a VM state.
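The snapshot part at least is easy to script with libvirt. A sketch with the Python bindings; the names are made up, and as far as I know libvirt refuses internal snapshots while a PCI device is passed through, so the GPU would have to be detached first:

```python
# A sketch: snapshot the dev VM before risky tinkering. Internal
# snapshots need qcow2-backed storage. Names are hypothetical.
import libvirt

SNAP_XML = "<domainsnapshot><name>pre-update</name></domainsnapshot>"

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("ubuntu-dev")  # hypothetical domain name
snap = dom.snapshotCreateXML(SNAP_XML, 0)
# Roll back later with dom.revertToSnapshot(snap)
```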
Multiboot is annoying if you multitask. Separate computers are better, but then they take space and power. I wish there was a perfect solution… I’m switching to Linux but still have long-running Windows projects I have to develop. I’ve settled for running Linux bare and Windows in virt-manager. It seems to be working for me so far.
Can’t comment on everything, but given you mentioned audio production: a couple of years ago I tried to get ASIO working from within a VM on a Linux host OS and didn’t have a whole lot of luck.
I think I read somewhere that someone had come up with a special ASIO driver to send the audio directly into the host Linux OS audio subsystem, but I’ve not tried that or measured latency yet.
I hadn’t considered this. I was thinking that if I passed an audio device directly to the guest OS, I would bypass ASIO headaches. I now realize that’s naïve of me.
By all means try it out; it might have been something down to the drivers for my audio interface (a Focusrite Scarlett) at the time.
If you have better luck than I did, I’d love to know!
Honestly, I think you’re better off trying to figure out how to get it all done in one distro. Two other VMs hogging resources in the background is not something that can be optimized well, but two applications in the background can be optimized excellently by the OS.
But that said, keep us posted on what you do and how it works out…
I’ll give it a try!
It’d be an interesting project, but it seems like overkill and overcomplication when the simplest solution is dual booting and giving each OS complete access to the hardware. Hypervisors for all your systems would be a lot of configuration, plus some constant overhead you can’t escape, for potentially minimal convenience gain?
Are you hoping to run these OSs at the same time and switch between them? If so, I’m not sure the pain of the setup is worth it for a little less time switching between OSs to switch tasks. If you’re hoping to run one task in one machine (like video editing) while gaming in another, it makes more sense, but you’re still running a single i7 chip, so it’ll still be a bottleneck even with all the GPUs and that RAM. Sure, you can share out the cores, but you won’t achieve the performance of one chip and chipset dedicated to one machine that a server stack gives (and which hypervisors can make good use of).
I’d also question how good the performance would be on a desktop motherboard with multiple GPUs assigned to different tasks. It’s doubtful you’d hit data-transfer bottlenecks, but it’s still asking a lot of hardware not designed for that purpose, I think?
If you intend to run the systems one at a time, then you might as well dual boot and not share system resources with an otherwise unneeded host OS/hypervisor.
I think if you wanted to do this and run the machines in parallel, then a server stack or enterprise-level hardware would probably be better. I think it’s a case of “just because you can do something doesn’t mean you should”? Unless it’s just a for-fun project and you don’t mind the downsides? Then I can see the lure.
But if I were in your position and wanted the “best” solution, I’d probably go for a dual boot with Linux and Windows. In Linux I’d run games natively in the host OS and use QEMU for a development virtual machine (passing through one of the GPUs for AI work). The good thing about this setup is that you can back up your whole development machine’s disk and restore it easily if you make big changes to the host Linux. Windows I’d use for kernel anti-cheat games and just boot into it when I wanted.
Personally, I dual boot Linux and Windows. I barely use Windows now, but in Linux I do use QEMU and have multiple virtual machines. I have a few test environments for Linux because I like to tinker, plus a Docker server stack that I use for testing before deploying to a separate homelab device. I do have a Win11 VM, barely used; it doesn’t have a discrete GPU and it’s sluggish. If you’re gaming, I’d dual boot and give it access to the best GPU as and when you need it.
And if you want the best performance, triple boot. Storage is cheap and you could easily have separate drives for separate OSs. I have one NVMe for Linux and another NVMe for Windows, for example. You could easily have two separate Linux installs and a Windows install. In some ways it may be best, as you’d separate Linux gaming from Linux working and reduce distractions.
If you are fine with having things on the same OS, look into Distrobox. It would let you set up an Ubuntu environment/container on top of your Bazzite install (there’s a quick sketch at the end of this comment). You could also use something like OSX-KVM for macOS with GPU passthrough (assuming you use a compatible GPU), which would simplify your setup greatly. That way you could technically have all three environments on one OS with one set of hardware, and the only thing being virtualized is macOS.
(You could also dual-boot with macOS if you wanted, and it would be slightly faster than a VM but also more of a headache to set up.)
Edit: Missed that you mentioned Windows, but the setup for that would be pretty much exactly the same as for macOS, except that getting GPU passthrough to work on Windows is easier (again, same limitations as macOS, though, and games with anti-cheat would be able to tell that Windows is in a VM).
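To make the Distrobox part concrete, here’s a minimal sketch (the container name and image tag are just examples):

```python
# A sketch: an Ubuntu dev environment as a Distrobox container on Bazzite.
# It shares the host kernel, which is exactly why kernel modules can't be
# installed inside it. Names and image tag are just examples.
import subprocess

subprocess.run(
    ["distrobox", "create", "--name", "ubuntu-dev",
     "--image", "ubuntu:24.04"],
    check=True,
)
# Drops you into a shell; your home directory, devices, and display are
# shared with the host.
subprocess.run(["distrobox", "enter", "ubuntu-dev"], check=True)
```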
The OSX idea is very much an edge case for me. I’ve heard of it, but it’s not something I know much about.
I think that’s possible. Some people regularly do their work in virtualized environments: some developers do, and some people do it for security. And some companies have their employees run everything over the network via a thin client / VNC.
It’ll be more complex, and you’ll probably spend some time setting it up and dealing with edge cases and unforeseen annoyances. You’ll spread your data over several (virtual) computers and probably need a network share or file sync. Whether dynamically assigning GPUs works depends on the exact circumstances; Linux has a few tricks available to reset GPUs, mess with the firmware, and reassign or pass through devices, but the last time I tried, that was a lot of manual work (see the sketch below). Audio production also takes work if you need real time. And I think the “ease of updates” will be overshadowed a bit by now having five times as many operating systems to keep up to date. I don’t know much about anti-cheat; I usually skip those games altogether, and the rest runs fine on my main distro.
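For reference, the “reassign devices” trick boils down to sysfs writes like these (a sketch; needs root, an otherwise idle GPU, and the PCI address is a placeholder):

```python
# A sketch: rebind a GPU from its host driver to vfio-pci so a guest can
# claim it. The PCI address below is a placeholder (see `lspci -D`).
from pathlib import Path

ADDR = "0000:01:00.0"  # placeholder PCI address
dev = Path("/sys/bus/pci/devices") / ADDR

if (dev / "driver").exists():
    (dev / "driver" / "unbind").write_text(ADDR)  # release from host driver
(dev / "driver_override").write_text("vfio-pci")  # pin which driver binds next
Path("/sys/bus/pci/drivers_probe").write_text(ADDR)  # trigger the rebind
```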
Other possibilities: You could just use one main operating system and install some virtualization software there. And for development and ML you could also use something like Distrobox.