
Multi server setup



Our board runs on a VPS, and for various reasons it's easier to get a second VPS than to upgrade to a higher plan or move to another provider. We are thinking of adding an extra web server to ease the load on the current one.

The current setup is one VPS running MySQL (Percona), Varnish + nginx + PHP-FPM + memcached. The second VPS would run just nginx + PHP, with the first one routing requests to it via Varnish.

Is there a guide on how to do this properly, with specifics related to IPB? It's a bit unclear to me how uploads are handled. Would it be enough to mount the uploads folder from the first server on the second over SSH? Are there any other settings we should be aware of?

Thanks!


"various reasons it's easier to get a second vps than to upgrade to a higher plan or to move away to another provider"
This reasoning sounds flawed, IMO. Running the forum on a single dedicated machine is a LOT easier than trying to manage a multi-system setup, for ANY application.

"varnish + nginx"
Sounds redundant, and like useless baggage.


What you're asking for is much more complicated than it sounds... and I foresee that it won't really happen... but I'll answer anyway.

The simplest form of multi-system setup is to divide up the service types, not to load-balance a single service. For example, you put MySQL on one server and PHP on another. This is the most common first route anyone would take. The MySQL server will naturally run on a high-IO system (such as SSD or SAS), whereas your PHP server no longer needs such high-IOPS disks. The PHP server asks the MySQL server whenever it needs data. Similarly, you can put memcached on yet another server. None of this requires any additional work from IPB, as IPB is designed to support this expansion. All you need to do is update the config file accordingly by changing the MySQL IP.
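Assuming an IPB 3.x-style conf_global.php (the keys below are from that era, and the IP is a hypothetical internal address), "changing the MySQL IP" is just a matter of editing the SQL host entry:

```php
<?php
// conf_global.php (fragment) -- only the DB-related keys shown
$INFO['sql_host']     = '10.0.0.2';  // hypothetical internal IP of the MySQL VPS
$INFO['sql_database'] = 'forum';     // unchanged
$INFO['sql_user']     = 'forum';     // unchanged
$INFO['sql_pass']     = 'secret';    // unchanged
```

Make sure the MySQL box only listens on the internal network, and that the forum user is granted access from the PHP server's internal IP rather than localhost.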
Note that when you get into a multi-system setup, your failure rate skyrockets, because failures compound multiplicatively. If each server has an uptime of 99%, then with 1 PHP, 1 memcached, and 1 MySQL server you now have 0.99 ^ 3 ≈ 0.97, which is a terrible, terrible uptime. (Without the memcached server the board will actually still work, just extremely slowly, as it times out trying to reach memcached.) To put the scale in perspective: you probably only need a separate memcached server once you have something like 10+ servers each doing one thing. Everyone connects to your PHP server, and your PHP server connects internally (very important) to the other servers. You also now have to manage three servers instead of one. Because of the reliability issue, a multi-server setup often forces you to upgrade to a high-availability setup, where you literally need to double up EVERYTHING; then read the next section.
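The compounding-uptime claim above is simple arithmetic; a quick sketch:

```python
def composite_uptime(*uptimes):
    """Availability of a chain of single points of failure: every
    server must be up simultaneously, so the probabilities multiply."""
    total = 1.0
    for u in uptimes:
        total *= u
    return total

# 1 PHP + 1 memcached + 1 MySQL box, each with 99% uptime:
print(round(composite_uptime(0.99, 0.99, 0.99), 4))  # prints 0.9703
```

Doubling up for HA flips the arithmetic: with two redundant boxes per role, a role only fails when both of its boxes fail at once.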
This is a logical first step. The notion of a "server" is really a logical thing, so all you have really done here is say the server is over there instead of over here.

What you suggest, with multiple front-end servers serving PHP, requires load balancing. (This is described in terms of 2+ PHP front ends and 1+ MySQL back end, but it still applies to you, because the "back end" and "front end" servers are logical roles.) At the high end, you buy load balancers (these cost a pretty penny). On a low budget, you usually go with DNS round robin (which costs virtually nothing). The problem with DNS round robin is that if one server goes down, people keep being sent to the failed server until the DNS record expires. If you set up DNS with automatic failure detection (increasing the price) and a low TTL (more $$), then clients query DNS far too frequently, which slows down their browsing experience (unhappy people). DNS load balancing is NOT a high-availability (HA) solution, despite misleading advertising (though it is better than nothing). There is also datacenter-level IP rerouting, which allows a failed server's IP to be pointed at another machine. Availability issues aside, on to the setup.
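The difference between plain DNS round robin and a real balancer is the health check. A minimal sketch (the internal IPs are hypothetical) of round robin that skips dead backends:

```python
from itertools import cycle

def make_balancer(servers):
    """Round-robin over backends, skipping any that fail a health check.
    Plain DNS round robin is this loop WITHOUT the is_alive() test,
    which is why clients keep landing on a dead box."""
    rotation = cycle(servers)

    def pick(is_alive):
        # Try each backend at most once per request.
        for _ in range(len(servers)):
            server = next(rotation)
            if is_alive(server):
                return server
        raise RuntimeError("no backend is up")

    return pick

# Hypothetical internal IPs of the two PHP front ends.
pick = make_balancer(["10.0.0.11", "10.0.0.12"])
print(pick(lambda s: True))              # 10.0.0.11
print(pick(lambda s: s != "10.0.0.11"))  # 10.0.0.12
```

In real deployments `is_alive` would be an HTTP or TCP probe with a timeout, and the rotation state lives in the balancer, not the client.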
The setup is sometimes messy, but it's really your choice how to arrange it. You can have a master-slave relationship, or you can have a network relationship.

  • Master-slave is easier, but less redundant. A network relationship is more redundant, and therefore more resilient, but more complicated. Going with master-slave first: you set up a master server that all slaves sync from. Syncing can be done with a program like rsync (via a daemon or cron) constantly syncing the slaves against the master. You typically want to hide the master from the public, unless there are only two servers and you're on a budget, of course. If the master dies, syncing stops, and until the master is back up the slaves will drift out of sync with each other; they keep functioning, though, as long as the back-end MySQL server is still up and running.
  • A network relationship is a lot more complicated, as it too can be set up in many ways. Twitter, for example, boasted about having a torrent-like setup for synchronizing their systems; supposedly this improved sync performance by something like 70x over master-slave, because they had SO many slaves and multiple tiers of masters. Another approach, simpler to implement, is a broadcast method: each node broadcasts its changes and the others accept them. This is very common where the amount of data to transfer is small. Yet another simple way is to act as if everyone is both master and slave: every node constantly syncs from the others. This is highly viable in a two-computer setup; all it needs is rsync running on each machine pulling from the other. But as the node count increases, you will really start DDoSing yourself. Network-type systems genuinely are better, but often require a custom solution.
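The master-slave rsync approach above can be as simple as a cron entry on each slave. A sketch, where the master hostname and paths are placeholders:

```shell
# crontab fragment on each slave: pull the uploads directory from the
# master every minute (master.internal and both paths are placeholders).
* * * * *  www-data  rsync -az --delete master.internal:/var/www/forum/uploads/ /var/www/forum/uploads/
```

The `--delete` flag keeps slaves identical to the master, which also means an accidental deletion on the master propagates everywhere; drop it if that worries you.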

Multi-MySQL servers... (I started writing this, then realized you never asked for it.)

=======================

If you just get a bigger server, you migrate your data, run the same services, and you're done.

Archived

This topic is now archived and is closed to further replies.
