
Clickfinity

Members
  • Posts

    2,038
  • Joined

  • Last visited

  • Days Won

    2

Reputation Activity

  1. Like
    Clickfinity reacted to AutoItScript in mySQL Tuning Tips   
    Background:
    I'm running CentOS 5.3 (Linux) with mysql 5.0.58. My IPB site has around 500 users online and 700,000 posts. I've spent the best part of a week learning about tuning mysql instead of blindly using my-huge.cnf or randomly plugging in values. I've written my findings here as there appears to be a load of anecdotes and misinformation out there on this topic. I'm still a newbie though, so if you find that something I've said is utterly wrong then let me know!

    For the purposes of this guide I'll assume you have a simple dedicated server with the web server and mysql on the same box, and that the main database mysql is running is IPB. That said, I think most of the concepts are applicable to other systems.

    Ok, so when tweaking my.cnf the first "easy win" is to turn on query caching (which is off by default) with these lines:


    query_cache_type=1
    query_cache_limit=1M
    query_cache_size=32M

    This just turns caching on, uses 32M for the cache, and allows the biggest single entry to be 1M. You can adjust this later when we monitor the server (below), but these values are pretty good for most.

    The next most important settings are these:

    key_buffer - used for caching table indexes
    sort_buffer_size - used when performing sorts
    read_rnd_buffer_size - used after a sort for ORDER BY operations
    read_buffer_size - used for table scans

    I've read loads of articles where "general wisdom" is given for these, but after reading all of them I've come to the conclusion that they are all talking rubbish. The best method is to monitor what's going on and then tweak.
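    Incidentally, you can sanity-check whether the query cache is actually being used straight from the mysql client before touching the scripts below. This is just a rough sketch using the standard Qcache_* status counters (the hit-rate formula is my own back-of-the-envelope version, not something the tuning scripts print):

    -- Show the query cache settings currently in effect
    SHOW VARIABLES LIKE 'query_cache%';

    -- Show how the cache is being used
    SHOW GLOBAL STATUS LIKE 'Qcache%';

    -- Rough hit rate: Qcache_hits / (Qcache_hits + Com_select)
    -- e.g. 80,000 hits vs 20,000 selects ~= 80% of reads served from the cache
    SHOW GLOBAL STATUS LIKE 'Com_select';

    If Qcache_lowmem_prunes keeps climbing, the 32M cache is too small for your workload; if Qcache_free_memory stays huge, you can give some of that RAM back to the OS.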

    wget http://www.day32.com/MySQL/tuning-primer.sh wget http://mysqltuner.com/mysqltuner.pl chmod 700 tuning-primer.sh chmod 700 mysqltuner.pl These scripts will give you easy access to some monitoring statistics and even try to give advice. Even if you ignore the advice they are very useful to give easy access to stats like total memory that could be used by mysql, cache hit rates, etc. Both give slightly different output so I've been using both as required. key_buffer (or key_buffer_size) - (Global) This is a global value - it is shared between all databases. There are two widely accepted comments about how to set this: 1. Set between 25% and 50% of RAM 2. Try and set as large as the total of all your myISAM index files (*.MYI files) (mysqltuner.pl shows this size this neatly) Option 1 - Setting as an amount of RAM is crazy. My board has 380M of .MYI files setting key_buffer larger than this does NOTHING apart from totally and utterly waste the RAM. What about Option 2 - Setting key_buffer large enough to contain all the .MYI files? This is a bit better but even still it probably won't ever use the entire amount unless every single index is read. And (this applies to the other memory settings as well) mysql is only caching index lookups, when the actual data is read from disk it is the OS that will be doing the majority of the caching. The more memory you needlessly waste in mysql the worse the OS caching will perform if you get low on RAM. So the best thing to do is to montior the key_buffer hit rate and aim for a 99% ratio (that is 99% of index lookups are done from RAM). After your server has been running for a while (the tools suggest 2 days, but you can do that when fine tuning). Run mysqltuner.pl and see what the key buffer ratio is. My server has 380M of index files and 2GB of RAM so conventional wisdom is to set it to either 512M-1024M (option 1) or 380M (option 2). Well, here is the mysqltuner.pl excerpt from my server:
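    If you want to see the raw numbers behind the hit rate yourself, the two status counters below are what the scripts work from (a minimal sketch; the percentage arithmetic is mine, not the scripts' output):

    -- Key_read_requests = index lookups asked for, Key_reads = lookups that had to hit disk
    SHOW GLOBAL STATUS LIKE 'Key_read%';

    -- Hit rate = (1 - Key_reads / Key_read_requests) * 100
    -- e.g. 1,000,000 requests with 7,000 disk reads = 99.3% hit rate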

    Well, here is the mysqltuner.pl excerpt from my server:

    [OK] Key buffer size / total MyISAM indexes: 64.0M/379.7M
    [OK] Key buffer hit rate: 99.3%

    What the heck?! I'm only using a 64M key_buffer and yet I have a 99% hit rate! Is it worth increasing it to 390M just to get 99.9%? Maybe if you have more RAM than sense - I've only got a 2GB server, so that extra RAM is much better utilized by the OS for general caching. My recommendation would be to start the key_buffer low, monitor it and increase it by a small amount each time until you hit a 99% hit rate.

    Conventional wisdom size: 512M
    My server size: 64M

    sort_buffer_size - (Per-connection)
    This is allocated per-connection - if you have 100 connections then you use 100 * sort_buffer_size. As with key_buffer there is a lot of "allocate 1M per GB of RAM" advice. Again I would ignore that and instead just monitor how the buffer is used. On my server I started with 1M and then upped it by 1M at a time until 99% of sorts were done in RAM and only 1% needed a temporary table. On my server that required a sort_buffer_size of 4M.
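    The underlying counters for this are also easy to watch between runs of the scripts (a sketch; the Sort_* names are stock status counters, and as far as I can tell the "sorts requiring temporary tables" figure is derived from Sort_merge_passes):

    -- Sort_merge_passes > 0 means sorts spilled out of sort_buffer_size and had to merge on disk
    -- Sort_range + Sort_scan is the total number of sorts performed
    SHOW GLOBAL STATUS LIKE 'Sort%';

    Bump sort_buffer_size in small steps and watch whether Sort_merge_passes stops growing.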

    Here's the mysqltuner.pl output:

    Sorts requiring temporary tables: 1%

    For my 2GB server the "wisdom" setting of 2M was too small and I had about 15% of sorts needing to use temporary tables.

    Conventional wisdom size: 2M
    My server size: 4M

    read_buffer_size - (Per-connection)
    This is allocated per-connection - if you have 100 connections then you use 100 * read_buffer_size. It is used to buffer table scans. In theory, as long as threads are cached (see below) then bigger is better; I've seen values from 2M to 16M suggested in articles, and our old friend "1M per GB of RAM" is the most common. That's probably ok UNLESS YOU ARE RUNNING LINUX. Yep, it's bugged. In most versions of mysql 5 (and certainly in my server version, 5.0.58) any read_buffer_size above 256K will actually make performance worse! The bug report (and repro steps) can be found at http://bugs.mysql.com/bug.php?id=44723 - I followed the steps and on my server a 2M buffer was TWICE as slow as a 256K buffer. The more I increased the buffer the worse it got. I highly recommend running the repro steps on your linux server to see if you are affected before going above 256K.

    Conventional wisdom size: 2M
    My server size: 256K

    read_rnd_buffer_size - (Per-connection)
    This is allocated per-connection - if you have 100 connections then you use 100 * read_rnd_buffer_size. It is used after a sort operation for certain queries like ORDER BY. It's used quite a bit by IPB but I couldn't really detect any performance differences for values greater than 256K.

    Conventional wisdom size: 2M
    My server size: 256K

    Those are some of the most "controversial" values. As said before, it's better to use buffers that are correctly sized rather than just using a percentage of RAM, and to leave as much memory as possible for the OS to do general caching.

    Per-connection memory use is: sort_buffer_size + read_buffer_size + read_rnd_buffer_size + join_buffer_size (join_buffer_size defaults to 128K and I've left it low as IPB doesn't use queries that require it). Ensure that your max_connections value doesn't actually use more RAM than you have!
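    To put real numbers on that formula, you can pull the per-connection buffers and max_connections straight out of the running server and do the multiplication yourself (just a sketch using my values; mysqltuner.pl also counts a couple of extra small per-thread allocations such as thread_stack, which is presumably why it reports a slightly higher per-thread figure below):

    -- Per-connection buffers currently in effect
    SHOW VARIABLES LIKE '%buffer_size';
    SHOW VARIABLES LIKE 'max_connections';

    -- Rough worst case per connection on my settings:
    --   4M + 256K + 256K + 128K ~= 4.6M
    -- Multiplied out for 50 connections: ~230M on top of the global buffers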

    mysqltuner.pl shows this information neatly:

    [--] Total buffers: 178.0M global + 4.9M per thread (50 max threads)
    [OK] Maximum possible memory usage: 421.8M (21% of installed RAM)

    The "global" value is essentially key_buffer + tmp_table_size.

    tmp_table_size - (Global)
    This is a global value - it is shared between all databases. When mysql needs to create temp tables, this value decides whether the table is created in RAM or on disk. First, you must set max_heap_table_size and tmp_table_size to the same value, as the lower of the two is used. The defaults are 16M and 32M respectively, which essentially means that the default tmp_table_size is 16M. I currently have mine set to 32M, and even when I set it to 512M the number of tmp tables created on disk was the same as when the value was 32M. Further research shows that temporary tables containing TEXT and BLOB columns cannot be created in RAM at all. If you are running IPB only, you are probably in the same position and won't see benefit above 32M either.

    So, what are the differences between the conventional wisdom sizing and my "just right" sizing for my server, for 100 connections?

    Wisdom: 512M + 32M + (100 * (2M + 2M + 2M + 128K)) = 1156M
    My server tweaked: 64M + 32M + (100 * (4M + 256K + 256K + 128K)) = 558M

    Quite a saving... half a gig extra for the OS to use for other things :)

    Ok, some more important variables:

    thread_cache_size - during normal operation you don't really want to be creating threads (expensive memory allocations, especially with large buffers). Use mysqltuner.pl to monitor and tune.
    table_cache - the number of opened tables that are cached. Use mysqltuner.pl to monitor and tune. For a single IPB forum my value was 500.
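    The counters to watch for those last items are also plain status variables, if you want to check them without the scripts (a sketch; all of these names are standard MySQL 5.0 status counters):

    -- Ratio of disk temp tables to all temp tables - a high ratio means queries touching
    -- TEXT/BLOB columns or a tmp_table_size that is genuinely too small
    SHOW GLOBAL STATUS LIKE 'Created_tmp%';

    -- Threads_created should stay almost flat once thread_cache_size is big enough
    SHOW GLOBAL STATUS LIKE 'Threads_created';

    -- Opened_tables climbing steadily means table_cache is too small
    SHOW GLOBAL STATUS LIKE 'Opened_tables';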

    I also found some advice on a vBulletin forum for improving mysql searching which I'm still testing and researching (http://dev.mysql.com/doc/refman/5.0/en/midpoint-insertion.html):

    low_priority_updates=1
    concurrent_insert=2
    key_cache_division_limit=40
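    One handy thing about these three (as far as I can tell they are all dynamic variables in 5.0) is that you can trial them at runtime from the mysql client before committing them to my.cnf - a sketch, using the same values as above:

    SET GLOBAL low_priority_updates=1;
    SET GLOBAL concurrent_insert=2;
    -- key_cache_division_limit applies to the default key cache
    SET GLOBAL key_cache_division_limit=40;

    -- Confirm they took effect
    SHOW GLOBAL VARIABLES LIKE 'low_priority_updates';
    SHOW GLOBAL VARIABLES LIKE 'concurrent_insert';
    SHOW GLOBAL VARIABLES LIKE 'key_cache_division_limit';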

    Ok, that's it, I hope it's been useful. Please give feedback on anything that is completely wrong - I'm still learning :) Oh, and here is my my.cnf for reference for my IPB.

    [mysqld]
    set-variable=local-infile=0
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    # Default to using old password format for compatibility with mysql 3.x
    # clients (those using the mysqlclient10 compatibility package).
    old_passwords=1
    skip-networking
    skip-locking
    skip-bdb
    query_cache_type=1
    query_cache_limit=1M
    query_cache_size=32M
    # Set to combined size of all .MYI files, or between .25 and .5 of total RAM
    # Turns out this is nonsense, just monitor the key buffers and go for 99% hit rate
    key_buffer=64M
    # The following are created for each connection, so are important for sizing
    # per thread/connection memory = sort_buffer_size + read_buffer_size + read_rnd_buffer_size + join_buffer_size
    # Defaults:
    # sort_buffer_size=1M
    # read_buffer_size=128K
    # read_rnd_buffer_size=256K
    # join_buffer_size=128K
    #
    # If running on linux there is a bug that anything over read_buffer_size=256K will actually
    # harm performance. See http://bugs.mysql.com/bug.php?id=44723.
    # I've verified on CentOS 5.3 with mysql 5.0.58
    sort_buffer_size=4M
    read_buffer_size=256K
    read_rnd_buffer_size=256K
    join_buffer_size=128K
    # Ensure you only use the number of connections you can support in memory (see above for per thread memory):
    # Memory used = key_buffer + tmp_table_size + ((per thread mem)*max_connections)
    max_connections=50
    # Timeouts, set so idle connections don't hang around - the defaults are huge
    interactive_timeout=90
    wait_timeout=90
    connect_timeout=10
    # Check opened tables and adjust accordingly after running for a while.
    open_files_limit=1500
    table_cache=500
    # Check created_tmp_disk_tables, also needs heap table size as uses the smallest value
    # Have tried 512M for IPB but still same number of temp tables, might as well keep small
    tmp_table_size=32M
    max_heap_table_size=32M
    # thread_concurrency = 2 * (no. of CPU) - BUT only works on solaris so don't need to bother
    thread_concurrency=4
    thread_cache_size=8
    # Stuff that can help with mysql searches
    # For key_cache_division_limit see http://dev.mysql.com/doc/refman/5.0/en/midpoint-insertion.html
    low_priority_updates=1
    concurrent_insert=2
    key_cache_division_limit=40
    # Log slow queries
    log_slow_queries=/var/log/mysqld.slow.log
    long_query_time=5
    expire_logs_days=7
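    If you drop this into /etc/my.cnf, it's worth spot-checking after the restart that the values actually took - a trivial sketch, nothing IPB-specific about it:

    SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';
    SHOW GLOBAL VARIABLES LIKE 'sort_buffer_size';
    SHOW GLOBAL VARIABLES LIKE 'query_cache_size';
    SHOW GLOBAL VARIABLES LIKE 'max_connections';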