For the past two years I have used a virtual machine as my main personal system, running on my personal PowerEdge R720. The reason for this was, of course, to keep work and personal use separate, and it means I can access a powerful system from any computer or tablet on the network.
It is nice to bring up my system on any machine at home, big or small. One thing I noticed while using a remote console was the latency. While not horrific, it was noticeable with video, even if only a slight delay. It was decent, but I did not feel like I was on a real local system. I started looking around at possibilities. The first was to give the VM a dedicated GPU.
There is not much information out there for this specific combination. These servers were designed for either compute or VDI style configurations, using Nvidia GRID and Tesla cards to provide vGPUs. While that is great for CAD and compute operations, it really is not designed for normal everyday computer use, including the occasional gaming.
For a proof of concept I took an old 256 MB ATI PCIe video card from a Dell workstation. I was surprised to see a difference in video responsiveness. I could even game with Steam streaming. Cool!! So I started looking at low-latency remote console solutions and found one called Parsec. Bummer, I needed an Nvidia card.
Unlike a normal PC, with a PowerEdge system you have to take power draw, airflow/cooling, and stability into consideration.
Many people have grabbed GeForce cards with mixed results. For the longest time Nvidia made it hard or impossible to pass through consumer-grade video cards to a virtual machine. Note: new beta drivers have supposedly unlocked this.
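If you do go the GeForce route anyway, the long-standing workaround was to hide the hypervisor from the guest so the driver would load (the infamous Code 43 error). Here is a minimal sketch of setting that VMX option with pyVmomi; the host address, credentials, and VM name are placeholders, and you can just as easily add the same key under the VM's advanced settings in the UI.

```python
# Minimal pyVmomi sketch (not my exact setup): add hypervisor.cpuid.v0 = FALSE
# to a VM's advanced settings so the Nvidia consumer driver does not see the
# hypervisor. Host, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab host with a self-signed cert
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)

# On a standalone host there is a single datacenter; find the VM by name
dc = si.content.rootFolder.childEntity[0]
vm = next(v for v in dc.vmFolder.childEntity
          if isinstance(v, vim.VirtualMachine) and v.name == "personal-desktop")

spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="hypervisor.cpuid.v0", value="FALSE")
])
vm.ReconfigVM_Task(spec)                     # apply while the VM is powered off
Disconnect(si)
```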
So I wanted to find a card that was as close to supported as possible at a reasonable price. The R720 officially supports up to an Nvidia K4000 as a passthrough card. I looked into it and set my eyes on an Nvidia K4200.
Dell's official recommendation is that you need dual 1100-watt PSUs and the GPU enablement kit before using any GPUs in the R720. The reasoning is that in most applications you are going to be running multiple GPUs serving many VDIs, as with GRID cards and VMware Horizon. The point of the GPU enablement kit is to not rely on the PCIe slot's power alone; instead a cable connects to the riser to provide proper power delivery for the GPU.
My worry was that something in the system firmware would prevent the card from working, or would complain that I was using 750-watt power supplies instead of 1100-watt ones. There was not a lot of information regarding the K4200 and the R720. I talked to a bunch of my PowerEdge colleagues at work; they all said it should work. I then talked with some people in the home lab community and finally found someone who had done it. So I decided to pull the trigger and picked up the Nvidia Quadro K4200 for $150. Yes, the card is almost 8 years old, but I figured why not.
I got the card in and decided to give it a go. About 30 minutes later, after almost dropping my server off the L-rails, the card was installed. Note to self: save up for some real ReadyRails.
I booted up the server; it came up, then restarted. I quickly went into my iDRAC and breathed a sigh of relief that there were no errors in the iDRAC Lifecycle log. I did find this message, which was cool to see; I had never seen it before.
OK, so I saw no amber lights and the system was booting. I ran upstairs to check whether ESXi had booted. It did.
For those who may not be familiar: how do we now make the VM see and use the card? This is, as mentioned, what we call passthrough. I will show it directly on the ESXi host client, because many people are not using vCenter and may be using the free ESXi hypervisor.
First, once the host is booted, go to the Navigator pane, then Manage, then Hardware. If everything went right you should see your video card listed. At this point, check the card, select Toggle passthrough, then reboot the host.
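If you prefer to script it, the same toggle can be done through the API. Below is a minimal pyVmomi sketch, assuming a standalone host; the host address, credentials, and PCI address are placeholders, and you would look up the card's real address in the PCI Devices list first.

```python
# Minimal pyVmomi sketch: enable passthrough for a PCI device on a standalone
# ESXi host. Host address, credentials, and the PCI address are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)

# Standalone host: single datacenter, single compute resource, single host
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
pci_sys = host.configManager.pciPassthruSystem

# List the devices that are capable of passthrough to find the GPU's address
for info in pci_sys.pciPassthruInfo:
    if info.passthruCapable:
        print(info.id, "enabled:", info.passthruEnabled)

# Enable passthrough for the GPU (address is an example), then reboot the host
cfg = vim.host.PciPassthruConfig(id="0000:42:00.0", passthruEnabled=True)
pci_sys.UpdatePassthruConfig([cfg])
Disconnect(si)
```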
Once you have your dummy plug and your remote console of choice installed and running, let's pass the PCI video card through to the VM. To do this, go into the VM configuration, select Add other device, choose PCI device, and select the card. After this, power on the VM.
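This step can also be scripted. Here is a minimal pyVmomi sketch, assuming a powered-off VM named personal-desktop (a placeholder) and picking the K4200 out of the host's passthrough device list; note that the VM's memory reservation has to be locked to maximum for passthrough to work.

```python
# Minimal pyVmomi sketch: attach a passthrough PCI device (the GPU) to a VM.
# Host details and the VM name are placeholders; the VM must be powered off.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)

dc = si.content.rootFolder.childEntity[0]
vm = next(v for v in dc.vmFolder.childEntity
          if isinstance(v, vim.VirtualMachine) and v.name == "personal-desktop")

# Ask the host which devices are available for passthrough and pick the GPU
target = vm.environmentBrowser.QueryConfigTarget(host=None)
gpu = next(p for p in target.pciPassthrough if "K4200" in p.pciDevice.deviceName)

backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
    id=gpu.pciDevice.id,
    deviceId=format(gpu.pciDevice.deviceId % 2**16, "x"),
    systemId=gpu.systemId,
    vendorId=gpu.pciDevice.vendorId,
)
dev_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualPCIPassthrough(backing=backing),
)
spec = vim.vm.ConfigSpec(deviceChange=[dev_spec],
                         memoryReservationLockedToMax=True)
vm.ReconfigVM_Task(spec)
Disconnect(si)
```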
I will create another post on gaming with this setup.
Very neat, Brian! Just curious, what are your thoughts in general about VMware effectively relegating GPU functions to the CPU? I know that's not totally accurate, but considering the apples-to-oranges differences in purpose and functionality, I've always wondered why it's the case… Would incorporating a more robust GPU capability, either onboard or aftermarket, be worth it in terms of making better use of the host resources?
It all really depends on your use case. The software GPU option in ESXi works OK. It will get you past some things, like Photoshop requiring a GPU for all of its functions, but do not expect it to work with real 3D applications. ESXi is not really designed, out of the box, for 3D-type operations; it is more designed around server-type workloads. In the past few years the use of GPUs, or more specifically vGPUs, in VDI environments has become more prevalent. The idea is for, say, normal office workers: you can partition out a different level of resources to each user. Even with vGPUs, most of the time you are limited to a certain number of users per system. I think the 13G systems can support two GRID cards, as they are dual-slot cards. However, they are not passthrough, so it is not limited to one user per card.

I have a lower-end VDI with my company. It is great for remote file transfers and web applications when I need to move files faster, since it sits in the corporate data center. Depending on needs, different tiers can be allocated out. Now, what I found with the software-only GPU is that when you cut down the resolution of the system, and the same with video, you get better audio sync on videos. However, once you go back toward a normal resolution, that goes out the window.

The big key I have found is the remote console latency. While a dedicated GPU improves things, it seems to mostly help with resolution; in the end you still have console latency. I tested Horizon to my VDI at work and it works OK. RDP works meh; AnyDesk and NoMachine are closer to what you would expect, although still with occasional audio lag. What I found out is that you really need a remote console that leverages your GPU. Many require a specific GPU, say Nvidia. These were originally designed around cloud gaming, but Parsec, for example, can be used for a desktop as well. I will make a post on gaming coming up.
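For anyone wanting to try the software GPU path first, it comes down to the VM's virtual video card settings (the same as ticking Enable 3D support in the UI). A minimal pyVmomi sketch, with the host details and VM name as placeholders, run against a powered-off VM:

```python
# Minimal pyVmomi sketch: enable software 3D on a VM's virtual video card.
# Host, credentials, and VM name are placeholders; run against a powered-off VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)

dc = si.content.rootFolder.childEntity[0]
vm = next(v for v in dc.vmFolder.childEntity
          if isinstance(v, vim.VirtualMachine) and v.name == "personal-desktop")

# Find the existing virtual video card and turn on 3D with the software renderer
video = next(d for d in vm.config.hardware.device
             if isinstance(d, vim.vm.device.VirtualVideoCard))
video.enable3DSupport = True
video.use3dRenderer = "software"

edit = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=video,
)
vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[edit]))
Disconnect(si)
```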