DO Ideas 2

Give users the necessary information to avoid underprovisioning

Context: http://www.xaprb.com/blog/2010/06/01/under-provisioning-the-curse-of-the-cloud/

If given the specs of the host machine (number of cores, whether those cores have hyperthreading, amount of RAM, number and size of SSDs), an intelligent user can determine how big a share of that machine she is getting. Some indication of current CPU and I/O load for the machine hosting a given droplet may also be helpful.
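The kind of calculation the request describes can be sketched in a few lines. This is only an illustration under assumed numbers: the host and droplet specs below are hypothetical, not actual DigitalOcean hardware.

```python
# Sketch: given hypothetical host specs and a droplet's allocation,
# estimate what share of the physical machine the droplet represents.
# All numbers here are illustrative assumptions, not real provider specs.
HOST = {"cores": 24, "hyperthreading": True, "ram_gb": 128, "ssd_gb": 4 * 960}
DROPLET = {"vcpus": 2, "ram_gb": 4, "ssd_gb": 80}

def share(host, droplet):
    # With hyperthreading, each physical core exposes two logical CPUs,
    # so one vCPU maps to half a core rather than a whole one.
    logical_cpus = host["cores"] * (2 if host["hyperthreading"] else 1)
    return {
        "cpu": droplet["vcpus"] / logical_cpus,
        "ram": droplet["ram_gb"] / host["ram_gb"],
        "ssd": droplet["ssd_gb"] / host["ssd_gb"],
    }

print(share(HOST, DROPLET))
```

With these assumed specs the droplet holds 2 of 48 logical CPUs (about 4.2%), 4 of 128 GB RAM, and 80 GB of roughly 3.8 TB of SSD; the same arithmetic would apply to whatever real specs a provider published.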

  • Matt Campbell
  • Sep 11 2018
  • Moisey Uretsky commented
    September 11, 2018 19:26

    The VMs are automatically distributed across the hypervisors; I don't think I've seen anything really conclusive that one is preferred over the other.

    Really comes down to the workload of the underlying virtual server and how it mixes with the rest on the system.

  • Ryan commented
    September 11, 2018 19:26

    Moisey, in the example I'm thinking of, the company uses one host per node size (so they can say that if you have a 1GB RAM VM, you're on a host with only 50 VMs, all of them 1GB, for example).

  • Moisey Uretsky commented
    September 11, 2018 19:26

    Hi Ryan,

    In our case that would be hard to do because we do not put particular sized VMs on a particular hypervisor, but if there are other providers out there doing that I would be interested to find out who they are.

    To the best of my knowledge the largest competitors in the space do not do that.

    Thanks

  • Ryan commented
    September 11, 2018 19:26

    I've seen some VPS providers that say how many nodes are running per host - something like that would be sufficient for me.

  • Moisey Uretsky commented
    September 11, 2018 19:26

    That kind of goes against the idea of the cloud, which is that when resources are available you are able to use more of them, and when they aren't, you operate within your limits.

    There are too many factors to produce any kind of real-world calculation unless you pin each virtual machine's logical CPUs directly to physical cores, or hard limits are set for everyone. In that case you would be receiving such a small percentage of CPU that on average you would run much slower than in a freer, more uniform environment.

  • Santhan commented
    September 11, 2018 19:26

    Agreed. One of our concerns with using cloud virtual machines is that we find it difficult to measure the true performance of our application within the context of our actual purchased plan. How do we know whether our droplet has been provisioned on a server using only 10% of its resources? Perhaps a way to impose a hard limit on CPU usage and memory via the droplet UI would allow us to temporarily test our application under the constraints of the plan we purchase.
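Something like the hard memory limit Santhan describes can be approximated locally today, without any droplet UI support. Below is a minimal Python sketch (assuming a POSIX system and a hypothetical 1 GB plan) that runs a command in a child process whose address space is capped at the plan's RAM, so allocations beyond the plan fail roughly as they would under memory pressure on a small droplet.

```python
import resource
import subprocess
import sys

# Hypothetical 1 GB plan; adjust to match the plan being tested.
PLAN_RAM_BYTES = 1 * 1024 ** 3

def run_under_plan_limits(cmd):
    """Run cmd in a child process whose virtual address space is
    capped at the plan's RAM (POSIX only, via RLIMIT_AS), and
    return the child's exit code."""
    def apply_limits():
        # Applied in the child between fork and exec.
        resource.setrlimit(resource.RLIMIT_AS,
                           (PLAN_RAM_BYTES, PLAN_RAM_BYTES))
    return subprocess.run(cmd, preexec_fn=apply_limits).returncode

if __name__ == "__main__":
    # Trying to allocate 2 GB inside the 1 GB cap should fail.
    code = run_under_plan_limits(
        [sys.executable, "-c", "x = bytearray(2 * 1024 ** 3)"])
    print("exit code:", code)
```

Note this only covers memory: an rlimit cannot throttle CPU *rate*, so approximating a plan's CPU share would need something like cgroups (e.g. `systemd-run --property=CPUQuota=...` or `docker run --cpus=...`) rather than this sketch.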