DO Ideas 2

Support GPU on droplet

  • sarbjit singh
  • Sep 11 2018
  • Peter Brooks commented
    September 11, 2018 16:27

    It would be very useful to be able to access the GPU on the droplet - and, maybe, more GPUs.

  • Anonymous commented
    September 11, 2018 16:27

    GPU support would be a really nice feature.

  • Anonymous commented
    September 11, 2018 16:27

    Sell droplets with GPUs

  • Anonymous commented
    September 11, 2018 16:27

    Oh yes, this is a pretty brilliant idea, because the only provider I can think of right now is Google.

  • Anonymous commented
    September 11, 2018 16:27

    Please add GPU support, both virtualized and bare metal. Cheaper tiers could use Bitfusion or something similar.

  • Matthew Krueger commented
    September 11, 2018 16:27

    The only issue I see with this is that the DigitalOcean team would have to completely rewrite their custom hypervisor to support hardware mapping. Thanks to Intel's VT-x, exposing the CPU at the hardware level requires almost no code, but neither Nvidia nor ATI offers an equivalent for their GPUs yet. They could use something like Unraid, but that would need to be reworked to run on multi-thousand-core clusters and to be managed automatically.

  • Jason Livesay commented
    September 11, 2018 16:27

    I think you are going to see GPU computing become a new standard aspect of cloud/VPS services. Here is why.

    Other programmers who pay attention to technology trends, watching Hacker News and so on, are, like me, seeing tons of articles about deep learning successes and applications, and now open-source libraries such as TensorFlow and even pre-trained networks.

    Deep learning has applications in almost every field, and we now have libraries that are free and come with useful capabilities out of the box. I could literally follow an online tutorial and, within probably 2-4 months of self-study and work, build a deep learning system for stock prediction that outperforms anything professionals were doing 15 years ago, even without really understanding the core math. I say that because I was just reading a tutorial for doing exactly that, and even though I didn't understand some of the core math, it could be applied almost unchanged to many similar applications.

    So, there is a way to do it: PCI passthrough via OVMF. The Titan X hardware seems way overpriced right now, but maybe consider asking vendors to come up with a reasonably priced server design so that you could fit at least 8 (ideally 10-16) low-to-medium-powered GPUs in one rack server.

    Obviously this is not easy to do and wouldn't make sense for the least expensive servers. But I believe internet application programmers will shortly find deep learning capabilities being requested by business managers, since it has applications in so many fields, and, as I mentioned, there are open-source tools available now that can be applied without a deep mathematics background (a minimal sketch of how little code that takes follows this comment). Certainly not everyone can do it -- as I said, I think I would need 3-6 months of study to apply a well-known solution -- but I believe there are tons of programmers learning that now.

    GPUs are just one or more orders of magnitude more efficient for this kind of work. So I expect that within 1-4 years the number of votes on this idea will go up dramatically, and reasonably priced data-center-oriented systems will appear.
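
    To make the "off-the-shelf" point concrete, a minimal sketch, assuming TensorFlow 2.x with its Keras API is installed (the data and model here are toy placeholders), trains a tiny network and reports whether a GPU is visible; the same script runs unchanged, only faster, once a GPU is present:

        # Minimal sketch: train a toy Keras model and report GPU visibility.
        # Assumes TensorFlow 2.x; the data and model are placeholders.
        import numpy as np
        import tensorflow as tf

        # TensorFlow places ops on a GPU automatically if one is visible.
        print("GPUs visible:", tf.config.list_physical_devices("GPU"))

        # Toy regression data standing in for a real dataset.
        x = np.random.rand(1024, 8).astype("float32")
        y = x.sum(axis=1, keepdims=True)

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(8,)),
            tf.keras.layers.Dense(32, activation="relu"),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(x, y, epochs=5, batch_size=64, verbose=1)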

  • Nicolás Alvarez commented
    September 11, 2018 16:27

    Kamen, your explanation of why it can't be done doesn't hold up if you consider that AWS EC2 already offers VMs with GPUs. And asking for GPUs doesn't mean I want the screen contents transferred back to me over the network; I just want GPGPU computing (a minimal sketch of what that means follows this comment).
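
    As a concrete illustration of headless GPGPU work, a minimal sketch, assuming Numba and a CUDA-capable GPU are available inside the VM (the kernel and data are arbitrary placeholders), runs entirely on the compute side with no display involved:

        # Minimal GPGPU sketch: a CUDA kernel launched headlessly via Numba.
        # Assumes numba and a CUDA-capable GPU; no display output is involved.
        import numpy as np
        from numba import cuda

        @cuda.jit
        def scale(vec, factor, out):
            i = cuda.grid(1)                 # global thread index
            if i < vec.size:
                out[i] = vec[i] * factor

        data = np.arange(1_000_000, dtype=np.float32)
        d_in = cuda.to_device(data)          # copy host -> device
        d_out = cuda.device_array_like(d_in)

        threads = 256
        blocks = (data.size + threads - 1) // threads
        scale[blocks, threads](d_in, 2.0, d_out)

        # Results come back over PCIe as plain data, not as screen contents.
        print(d_out.copy_to_host()[:5])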

  • Kamen Angelov commented
    September 11, 2018 16:27

    This is not possible at the moment. Allow me to explain.

    I was looking into this for some VMs I am running here, and this is what I found. The way the x86 architecture evolved from the early '80s until today makes emulating a GPU extremely difficult: first video was memory-mapped, then it was port-mapped, then it was a combination of the two, and with 3D acceleration the GPU is effectively an embedded computer within your computer, complete with dedicated memory banks and an assembly language of its own. All of that is designed to be backwards compatible so your computer's BIOS can initialize it. Emulating it would require one of those GPUs to be reverse-engineered, but doing so nowadays is a sure way to get served and dragged into court for copyright infringement. After all, if you can do that to, say, an Nvidia chip, what's to stop you from manufacturing cheaper clones? To my knowledge the best GPU ever reverse-engineered in this way is... drumroll... the Cirrus VGA. I think you just *might* be able to run Windows 95 on that.

    Nor is this problem unique to GPUs.

    Back in the early 2000s the x86 architecture itself had the same issue: your CPU's instruction set is backwards compatible all the way to the 8086 of the early 1980s, with special "trap" mechanisms to enable the newer extensions as your computer boots a modern OS. Reverse-engineering the x86 architecture was extremely difficult, virtual machines were super slow and used to crash a lot, and early hypervisors were not really designed to emulate the x86 architecture at all. x86 virtualization only became practical relatively recently, in 2005, when Intel introduced the virtualization extensions (VT-x), which allow an Intel CPU to virtualize itself in hardware. Overnight there was no need to emulate an Intel CPU anymore. There is still no real equivalent for a GPU, though: the best I've seen today is the so-called Intel VT-d extension, which lets an Intel platform virtualize its PCI bus and thereby expose a physical, dedicated GPU to a virtual machine (a small sketch for checking that host-side prerequisite follows this comment). Needless to say, you need a physical GPU for that, and many popular GPUs don't even work with it.

    When you run software such as VMware Workstation/Fusion or Parallels Desktop, you are running a hypervisor that only emulates an old, slow GPU, really just enough to complete the install of your guest OS. Beyond that, you install special "bridge" drivers which funnel your guest OS's display APIs into your host OS's display APIs by some means. This is what all the hypervisors do, without exception! It is the only practical way the GPU-emulation problem is sidestepped so you get to enjoy decent video on a virtual machine.

    This, unfortunately, does not extend to the data center: a server motherboard meant for a rack-mountable enclosure often has no GPU to speak of, or if it does, it is something very simple, only meant to drive the rare console caddy at VGA resolution. Even if that weren't the case, there is no standard transport mechanism capable of ferrying the tens of gigabytes of data a GPU produces every second back to your screen; the longest DisplayPort cable is 50 feet, I think. You are asking DigitalOcean to do the impossible. Don't hold your breath.

    Hope this helps.
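
    On the VT-d point above: before a host can pass a GPU through to a guest, the IOMMU has to be enabled and the card has to land in a usable IOMMU group. A minimal sketch, assuming a Linux host with sysfs (none of this is DigitalOcean-specific), lists the IOMMU groups and the PCI devices in each:

        # Minimal sketch: list IOMMU groups and their PCI devices on a Linux host.
        # An empty or missing listing usually means VT-d/AMD-Vi is disabled,
        # so PCI passthrough of a GPU will not work.
        from pathlib import Path

        groups = Path("/sys/kernel/iommu_groups")
        if not groups.is_dir():
            print("No IOMMU groups found - is VT-d/AMD-Vi enabled in firmware and kernel?")
        else:
            for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
                devices = sorted(d.name for d in (group / "devices").iterdir())
                print(f"group {group.name}: {', '.join(devices)}")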

  • Anonymous commented
    September 11, 2018 16:27

    Where is the GPU support?

  • buffonomics commented
    September 11, 2018 16:27

    @Sebastian really cool stuff. Thanks man

  • Sebastian commented
    September 11, 2018 16:27

    @buff, it's a bit slow without a GPU, but I'm getting decent rendering performance on the mid-level droplets (8-16 GB RAM).

    I doubt you happen to be using Blender, but I wrote up the steps for how I did it there:

    https://docs.google.com/document/d/1gJq8rKKFEKEdQQReIubdaSLSV-BzJoCGV1QKPI2pUG0/edit?usp=sharing

    You could take parts of it and apply them to any engine (a rough sketch of the headless-render step follows this comment).
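
    A rough sketch of that headless-render step, assuming Blender is installed on the droplet and a scene file has been uploaded (the file name, output path, and frame number below are placeholders, not taken from the linked write-up):

        # Rough sketch: kick off a headless (CPU-only) Blender render on a droplet.
        # Assumes Blender is installed; scene.blend and the paths are placeholders.
        import subprocess

        subprocess.run(
            [
                "blender",
                "-b", "scene.blend",        # run in background, no GUI required
                "-E", "CYCLES",             # Cycles renders on the CPU when no GPU exists
                "-o", "/tmp/render_####",   # output path pattern
                "-F", "PNG",                # output image format
                "-f", "1",                  # render frame 1
            ],
            check=True,
        )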

  • Moisey Uretsky commented
    September 11, 2018 16:27

    Don't think so, not anytime soon anyway. The hardware needs to be standardized to allow for efficiencies in inventory management, so this is not being considered currently.

    Thanks

  • buffonomics commented
    September 11, 2018 16:27

    I would really love to throw a small render farm up here :)

  • buffonomics commented
    September 11, 2018 16:27

    Will GPUs ever make it onto the roadmap for the purpose of server-side rendering?