Dedicated vs Cloud for performance and security

Hello ^_^

Many times users are torn between a dedicated server and a cloud server.

I will try to share some info here so it will be easier to make a selection according to their needs ^_^

While comparing different server providers, you have probably come across mentions of different hosting models such as virtual private servers (VPS), dedicated servers, and cloud servers. While any of these will undoubtedly get you started, it is important to choose the service that best suits your needs. The wisdom in your choice comes from knowing what each of these server types provides.

1) Dedicated servers offer close-to-metal performance with little overhead, and they have traditionally been the go-to solution for demanding, high-performance tasks. As the name implies, each server is dedicated privately to one client. The customer receives access to a physical server with the agreed-upon hardware specification: processing and storage, all in one unit. Since the dedicated server is yours and yours alone, it can be built to your specification to ensure it performs well under the conditions you need it to. All your data is in one place, and you know exactly where it is.

2) Cloud servers are often confused with VPS, as both are based on virtualization and come with many of the same advantages. Much of the definition, however, depends on the particular hosting provider. With cloud servers you only ever pay for the exact amount of server resources you use, typically billed hourly, and you get the benefit of great flexibility. You can scale resources and server specifications up or down depending on demand, meaning you can avoid paying for idle infrastructure when demand is low. Your data lives somewhere out on the provider's cloud.


Cloud servers provide advantages such as easy scalability. So if you need scalability and you only want to pay for what you use, then cloud is the best solution!

Two main reasons why cloud servers are NOT the best option for pure performance (there are more):

Info from VMware below:



1) CPU Virtualization: Virtual CPU Halt/Wake-Up

Like other resources such as NICs, CPUs also need to be virtualized. VMs do not exclusively own physical CPUs; instead, they are configured with virtual CPUs (VCPUs) provided by the VM hardware. VMkernel's proportional-share-based scheduler allocates CPU time to VCPUs based on their entitlement [4]. This is an extra step in addition to the guest OS scheduling work. Any scheduling latency in the VMkernel's CPU scheduler is therefore added to the response-time overhead.

One common characteristic of RR (request-response) workloads is that the VCPU becomes idle after a packet is sent out, because it must wait for a subsequent request from the client. Whenever the VCPU becomes idle and enters a halted state, it is removed from the scheduler because it does not need CPU time anymore. When a new packet comes in, the descheduled VCPU must be rescheduled when the CPU scheduler runs to perform the operations necessary to make the VCPU ready again. This descheduling and rescheduling of a VCPU at each transaction incurs a nonnegligible latency that is added to the constant overhead in the RR workload's response time. The VCPU halting (and descheduling) cost itself may have less impact on response-time overhead, because the VCPU needs to wait for the response from the client anyway.
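One practical way to see this scheduling effect from inside a guest is the "steal" counter in `/proc/stat` on Linux: it counts jiffies during which the VCPU was runnable but the hypervisor was running someone else. Below is a minimal sketch (the helper name `steal_fraction` is my own, not from the quoted text) that parses the aggregate `cpu` line. A persistently nonzero value on a cloud VM hints at exactly the contention described above; on a dedicated server it should read zero.

```python
# Hypothetical helper: report what fraction of elapsed CPU jiffies were
# "steal" time -- cycles the hypervisor spent elsewhere while this guest's
# VCPU was runnable. Field order in the /proc/stat "cpu" line is:
#   user nice system idle iowait irq softirq steal guest guest_nice
def steal_fraction(cpu_line: str) -> float:
    fields = [int(x) for x in cpu_line.split()[1:]]
    total = sum(fields)
    steal = fields[7] if len(fields) > 7 else 0  # older kernels omit steal
    return steal / total if total else 0.0

if __name__ == "__main__":
    with open("/proc/stat") as f:
        first = f.readline()  # first line is the aggregate "cpu" row
    print(f"steal time fraction since boot: {steal_fraction(first):.4f}")
```

For a live view, sample the line twice a second apart and compute the fraction over the delta, since the boot-relative number averages away short bursts of contention.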




2) Constant Response-Time Overhead

Even with the same number of transactions executed (i.e., one transaction), the response-time overhead of RR workloads in a virtualized setup is not constant. Instead, it consists of the combination of a constant overhead and a variable part that increases with the workload's transaction response time (i.e., the longer the transaction runs, the larger the overhead becomes).

Sending and receiving packets requires access to physical resources such as CPUs and network interface cards (NICs). In a virtualized environment, however, the guest cannot directly touch the physical resources in the host; there are extra virtual layers to traverse, and the guest sees only virtualized resources (unless the VM is specifically configured with direct access to a physical resource). This implies that handling one packet-transmission-and-reception pair per transaction requires a certain amount of effort by the virtualization layer, thereby incurring an overhead in response time.

Instead of directly accessing physical NICs (PNICs), a VM can be configured with one or more virtual NICs (VNICs) that are exported by the VM hardware [3]. Accessing VNICs incurs overhead because it requires intervention by the virtualization layer. Further, to properly connect VMs to the physical networks (i.e., route packets between VNICs and PNICs), or to connect them to one another (i.e., route packets between VNICs and VNICs), a switching layer is necessary, which essentially forms virtual networks. Network I/O virtualization therefore requires extra layers of network packet processing that perform two important operations: NIC virtualization and virtual switching. Packet sending and receiving both involve these two operations of accessing VNICs and going through the virtual switching layer, the cost of which is directly added to the response time—a cost that does not show up in a native setup.
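Because this per-transaction cost is constant, the simplest way to expose it is a ping-pong microbenchmark: run the same tiny request-response loop on a dedicated box and on a cloud VM, and compare the mean per-transaction latency. Here is a minimal, self-contained sketch over TCP loopback (my own illustration, not code from the quoted paper); over loopback it measures only the OS and scheduling path, so the native-vs-VM difference is what reveals the virtualization overhead.

```python
import socket
import threading
import time

def echo_server(listener: socket.socket) -> None:
    """Accept one connection and echo everything back until EOF."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def measure_rr_latency(n: int = 1000) -> float:
    """Return mean seconds per request-response transaction over loopback."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.create_connection(srv.getsockname())
    # Disable Nagle so each tiny request is sent immediately,
    # mimicking a latency-sensitive RR workload.
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    start = time.perf_counter()
    for _ in range(n):
        cli.sendall(b"ping")
        cli.recv(64)  # block until the echo comes back
    elapsed = time.perf_counter() - start

    cli.close()
    srv.close()
    return elapsed / n

if __name__ == "__main__":
    print(f"mean RR latency: {measure_rr_latency() * 1e6:.1f} us")
</```

On typical hardware each transaction is tens of microseconds natively; under virtualization the VNIC, virtual-switch, and VCPU wake-up steps described above add to every single round trip.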



Dedicated servers provide advantages such as raw performance and physical isolation. So for pure performance and security, dedicated servers are faster and more secure!

Enjoy ^_^




