icedream Posted March 9, 2015

Hi guys,

Sometimes my site becomes very slow, and I see many httpd processes via top and ps_mem. I suspect a DDoS attack and want to see what httpd and PHP are doing. Could anyone suggest how to find out why there are so many httpd connections?

I am running CentOS 6.3 + nginx 1.6.2 + Apache 2.2.15 + MariaDB 5.5.42 + PHP 5.4.38, with nginx as a front proxy for httpd. The httpd and nginx configurations are attached. Apache is in prefork mode with this configuration:

StartServers 2
MinSpareServers 3
MaxSpareServers 5
ServerLimit 128
MaxClients 64
MaxRequestsPerChild 500

Thanks in advance.

Attachments: httpd.conf, nginx.conf
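For reference, this is roughly what I ran to see the process count and memory (ps_mem is the well-known per-program memory script; the grep pattern assumes the stock process name httpd):

top -c                            # live view of the processes, sorted by CPU
ps aux | grep [h]ttpd | wc -l     # count the running httpd children
python ps_mem.py                  # per-program RAM totals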
Makoto Posted March 9, 2015

I'm on my Kindle right now so I can't offer an in-depth review, but I can see your primary issue: your server is running out of memory and swapping. This is why you are seeing severely degraded performance. Regardless of whether the traffic is legitimate, this is likely indicative of an improper server configuration.

Once a server has to use swap, it can easily fall into a spiraling cycle of performance degradation. On a busy server this can make the entire system unresponsive and require a hard reset.

If this were actually the result of a DDoS attack, your server would almost certainly be completely inaccessible right now.
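You can confirm the swapping with two stock commands (nothing here is specific to your setup):

free -m       # a non-trivial "used" figure in the Swap row means you have spilled into swap
vmstat 1 5    # non-zero si/so columns mean the box is actively swapping right now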
icedream Posted March 9, 2015 (Author)

(Quoting Makoto's reply above.)

So I should optimize the httpd configuration? Could you please advise where to start?
Grumpy Posted March 9, 2015

If you think you're under DDoS, run this and reply with the results, please:

netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

Also: why did you come to the conclusion of DDoS? And is this a VPS?

To put out the immediate fire first, in httpd.conf:

MaxClients 64 <-- It seems you raised this from 10, as I see 10 in the comments. Put it back down. (Amount of memory each process consumes) x (MaxClients) should be strictly less than available RAM. Roughly guessing, your Apache seems to be taking ~2 GB of memory including swap. Is your ps_mem.py only looking at RAM and not swap? Because there's a big gap in the memory usage here.

Timeout 300 <-- Lower it to 10 seconds, in case you have runaway scripts.

I'm going to take a wild guess: in your php.ini (which you did not provide) there's a setting for max memory, and it's probably too high. If it is, that probably means you raised it to satisfy some scripts, which in turn means some script of yours needs optimizing.

Restart Apache.
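To put numbers on that rule, you can measure the resident size of the httpd children directly (a sketch; it assumes the stock process name httpd, and note that RSS double-counts pages shared between children, so it overestimates a little compared with ps_mem):

ps -C httpd -o rss= | awk '{sum+=$1; n++} END {printf "%d children, avg %.1f MB, total %.1f MB\n", n, sum/n/1024, sum/1024}'

For example, if each child averages 30 MB, MaxClients 64 allows 64 x 30 MB, or roughly 1.9 GB, for Apache alone, which is more than this 1.5 GB machine has.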
icedream Posted March 9, 2015 (Author)

(Quoting Grumpy's reply above.)

Very useful info, many thanks. Yes, I need to upload large files (1 GB), so I set the max memory high and the timeout quite long. Maybe there are better choices? I have uploaded my php.ini, too.

As for the "amount of memory each process consumes", could you please advise how I can get this?

I have changed MaxClients to 10 now, and check_httpd_limit shows:

OK: AllProcsTotalMem (513.88 MB) fits within MemTotal (1512.07 MB).

I feel the memory isn't being used fully. Should I try increasing MaxClients to a larger value, such as 20?

Attachment: php.ini
Grumpy Posted March 9, 2015

If 10 children are giving you ~500 MB total, then yes, you can practically raise it to 20, since you'll rarely reach the maximum. But since you have other services running there too, I wouldn't fill the full 1.5 GB. And if memory_limit is 128 MB, each process can theoretically take up to 128 MB (+ overhead), which is well over 1 GB even with just 10 children. So if, on some blue-moon day, everything heavy ran at once, you'd run into swap problems again. Do be aware that you are increasing the risk with 20 (albeit probably only slightly).

In your php.ini:

max_execution_time = 600 <-- This is not how long the upload takes; it's how long the PHP execution may run. The upload is handled at the nginx/apache layer, and only once it is complete is the request passed to the PHP layer, so until then PHP doesn't even realize you're uploading something. Unless you actually process the file within PHP (which is a terrible thing to do; you should be forking that work out), you don't need to raise this. If all you do is a simple move of the uploaded file, it's not related at all.

max_input_time = 600 <-- Same thing.

memory_limit = 128M <-- Same thing... by the way, 128M is actually the suggested value for IPB.

post_max_size = 1024M
upload_max_filesize = 1024M <-- If just one person actually uploads a 1 GB file, that can consume 1 GB of your RAM, leaving no room for the rest of your processes. Pick your limits: ask yourself whether you really need to support 1 GB uploads. Either run a lower total number of Apache children, or refuse such large uploads. And if you do want to enforce this limit, you need to set it at the outermost layer, nginx, because once the request is passed to PHP it's already too late.
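Enforcing it at the nginx layer looks like this (a sketch; 512m is only an example value, pick whatever limit you settle on):

# nginx.conf, inside the http {} or server {} block
client_max_body_size 512m;    # larger request bodies are rejected with a 413 before Apache/PHP ever see them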
icedream Posted March 9, 2015 (Author)

(Quoting Grumpy's reply above.)

Thanks. I have lowered max_execution_time to 30, max_input_time to 60, and the post/upload max sizes to 512M, but honestly I have no idea how these should be tuned. I searched and found check_httpd_limit for httpd optimization, mysql_tuning for MySQL, and so on, but I failed to find any articles or tools about PHP optimization. Do you have any related guides?
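For reference, the relevant php.ini lines now read (memory_limit stays at the 128M that was suggested for IPB):

max_execution_time = 30
max_input_time = 60
memory_limit = 128M
post_max_size = 512M
upload_max_filesize = 512M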
icedream Posted March 9, 2015 (Author)

After changing MaxClients to 20, I found the following error in the httpd log:

[Mon Mar 09 18:44:07 2015] [error] server reached MaxClients setting, consider raising the MaxClients setting

Should I ignore this, or is there something more to tweak?
Makoto Posted March 9, 2015

(Quoting Grumpy on the 1 GB upload limits.)

Uploaded files should be written to tmp, so unless tmp is mounted as tmpfs (which, with only 1.5 GB of memory available on your server, you shouldn't be doing), it shouldn't be relevant. (Though for non-multipart POSTs this could be a large opening for DoS exploitation.)

(Quoting icedream's MaxClients error above.)

I would consider upgrading your available memory if you can. I don't know your current traffic requirements, but 1.5 GB is very minimal and will only be enough to sustain a small-scale PHP application website. You're already hitting the maximum number of client connections you can support, and as you are seeing, there's a complete lack of wiggle/breathing room in general.

Granted, I don't know for certain whether this is legitimate traffic, but if it weren't, I find it very hard to believe your server would have been responsive at all earlier when you were swapping; a flood of illegitimate traffic on top of swapping should bring your server to a grinding halt, not just make it slow. I'd need more information, or to look into your needs personally, to offer a better estimate, but at this point I would really just consider upgrading your server's available memory if it fits comfortably in your budget.
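You can check where uploads actually land with standard tools (upload_tmp_dir falls back to the system temp directory when it's unset):

df -hT /tmp                    # "tmpfs" in the Type column means /tmp lives in RAM
php -i | grep upload_tmp_dir   # the directory PHP spools in-progress uploads to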
Grumpy Posted March 9, 2015

(Quoting icedream's MaxClients error above.)

As long as you aren't going to upgrade your hardware, yes, you should ignore it. But as Kirito already mentioned, if it's due to illegitimate traffic, it should be cut off, and if it's due to rogue scripts, they need to be fixed. The latter, though, is something I can't help you with over a forum.

(Quoting Makoto on uploads being written to tmp.)

I mean before it has a chance to write to tmp, while the upload is still active. Everything is written to pages first, after all. But I just looked up the nginx docs, and it seems nginx is better than I thought: it breaks the upload into chunks and makes multiple small writes so as not to hog memory, then joins them together afterwards.
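That behaviour maps onto nginx's request-body directives (a sketch; the temp path shown is just a common packaged default, not necessarily yours):

# nginx.conf -- request-body buffering
client_body_buffer_size 16k;                          # bodies above this are spooled to disk instead of held in RAM
client_body_temp_path /var/cache/nginx/client_temp;   # where the spooled chunks are written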
RevengeFNF Posted March 9, 2015

I also use nginx as a front end to Apache. I have 4 GB of RAM, and these are the values I reached to maximize performance. Every server is different, but here are my values anyway:

ServerTokens OS
ServerRoot "/etc/httpd"
PidFile run/httpd.pid
Timeout 30
KeepAlive Off
MaxKeepAliveRequests 70
KeepAliveTimeout 6
<IfModule prefork.c>
    StartServers 8
    MinSpareServers 5
    MaxSpareServers 20
    ServerLimit 70
    MaxClients 70
    MaxRequestsPerChild 4000
</IfModule>
icedream Posted March 9, 2015 (Author)

(Quoting Kirito on tmp and tmpfs.)

Thanks, Kirito. Does this mean I can set these settings to any value less than my memory size? Well, I don't have enough money for an upgrade...

(Quoting Grumpy's reply above.)

If I understand correctly, my nginx only serves static files, while httpd deals with all PHP requests. So if I upload a large file, isn't it PHP that receives the upload?
icedream Posted March 9, 2015 (Author)

(Quoting RevengeFNF's configuration above.)

Wow... I cannot afford 4 GB of RAM...
RevengeFNF Posted March 9, 2015

And 4 GB is very low these days. I'm going to upgrade to 8 GB in the near future.
Grumpy Posted March 9, 2015

(Quoting icedream's question about nginx and PHP uploads.)

No. nginx, being your front end, is solely responsible for ALL communication between the client and the server in the context of HTTP. It passes certain things, like CGI, along internally, but it is still the one doing the passing. Once the passed request is complete, the backend sends the data back to nginx, and nginx sends it to the client. Rather than thinking of nginx as only serving static files, it's better to say that nginx serves everything: it serves static files directly, and for things it doesn't know how to handle, it asks something else to process them so that nginx can serve the result.

To give an example, if you set the max upload size in nginx to 10 MB and a person starts uploading a 100 MB file, nginx will cut off the connection after 10 MB. Your Apache and PHP will never have received anything from nginx regarding this upload and won't even know it was attempted.

nginx, Apache, etc. are servers designed to handle communication. PHP, in essence, is the raw code in front of you when you open your forum's index.php.
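A minimal sketch of that layering (the backend address and paths here are assumptions; your attached nginx.conf has the real values):

# nginx.conf -- everything enters through nginx; PHP requests are handed to Apache
server {
    listen 80;
    client_max_body_size 10m;               # enforced here, before Apache/PHP ever see the upload

    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;   # assumed Apache backend address
    }
    location / {
        root /var/www/html;                 # assumed docroot; static files served by nginx directly
    }
}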