
Zend Opcache (Optimizer) - My hero


Recommended Posts

Posted

Well, I'm not one to often make ecstatic titles, but I'm having difficulty containing my joy right now.

I used to be a proponent of APC. Until tonight. Zend OPcache is kind of new and wasn't really in my top consideration. After all, APC and XCache have been around for quite some time and are fairly stable. I know people have stories of how terrible APC or XCache has been for them, but it's rare and I didn't pay too much attention. APC is what most people recommend, and quite frankly I didn't see a reason to recommend anything else either. (Though with PHP 5.5 coming, we have few options.) But I'm trying Zend OPcache right now... and boy... it's different.

My overwhelming joy comes from a long, long hunt for a mysterious ailment that's been lingering on my server. Despite having multiple identical web servers with almost the same specifications, running the same services at the same versions, one of them was always weird. It would be fine, then suddenly get slower and slower until it eventually crashed, started itself back up, and pretended nothing ever happened. And then it runs fine! No logs of any issues, nothing. The interval seemed random too: it wasn't tied to high or low traffic, it just seemed to start at some arbitrary time, and quite often. I had to temporarily alleviate the issue by setting up a cronjob to restart php-fpm every so often. I've been hunting for the cause for a month now. It's been extremely frustrating: no leads, just tons of guesswork leading to nothing fruitful.

But tonight, I came upon a post on Server Fault (a site I frequent) that seemed to describe a similar issue to mine. Busy site, php-fpm starts to slow down, crashes, cronjob as a temporary fix. The problem looked the same. So I scrolled down and read the answers... yeah, I'd tried all that. Didn't help. Then I noticed an edit at the end of his post saying he had APC. Well, I have APC too. Then he states he removed APC and the problem went away... wait... what? And then someone else recommends Zend OPcache instead.

Okay, I had nothing to lose. As I'm using one of the newer APC versions, I had the option to disable APC's opcode cache. So I disabled that and enabled Zend OPcache. Restart... wait a few seconds...

And then it hit me. The server was now using less than half the CPU. It went from a steady 55~70% to a steady 20~30%. What? Zend OPcache wasn't even tweaked; it was straight out of the box and I only turned it on. That can't be possible... So I installed it on another server, which was a bit busier. And it went from 70% to 30%. WHAT?!? And here I was, pondering just earlier in the day whether I should order one or two more dedicated servers to handle the load during peak.


It went from a 150ms response time to 50ms! Using less than half the CPU! It feels like I went from not having an opcode cache to having one! So much power saved...

Though, I don't know if I've actually caught the elusive bug I've been chasing. And as I get more data while writing this long journal(?), I feel as though I haven't. But saving this much power... it's truly a sight for sore eyes.

P.S. I'm still using APC for some user caching. I only disabled its opcode cache, since Zend OPcache is just an opcode cache.
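If anyone wants to double-check which caches are actually active, something like this works as a quick sanity check (a rough sketch, not what I actually ran; note that the CLI can load a different ini than PHP-FPM, so verify both):

php -m | grep -iE 'apc|opcache'
php -i | grep -iE 'opcache\.enable|apc\.enabled'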

Posted

This looks interesting. Thanks for sharing your results. From what limited research I've done, Zend Opcache seems to show promising results all around.

I think I might upgrade to PHP 5.5 on one of my boxes later today, throw Zend OPcache and APCu on it, and see how it goes.

Posted

I was running APC 3.1.15 from remi. I think they stated it's slightly different from the official latest dev build. But APC development seems kind of dead/dying now.

---------------------

I'm really surprised it's still holding fast and steady at a low load, even during my high-traffic periods.

My elusive bug is clearly still there. But as opposed to restarting almost every 2~30 minutes (my shell script attempts to detect the problem and restarts php-fpm dynamically), it now seems to restart every 6 hours or so. That's surely an improvement.
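The watchdog itself is nothing fancy; roughly this idea, run from cron (a sketch only - the URL, threshold, and paths below are placeholders, not my exact script):

#!/bin/bash
# Hit a local page and restart php-fpm if it responds too slowly (placeholder values).
URL="http://127.0.0.1/"
MAX_SECONDS="2.0"

t=$(curl -o /dev/null -s -w '%{time_total}' --max-time 10 "$URL")
# An empty or timed-out result also triggers the restart below.
if awk -v t="${t:-999}" -v max="$MAX_SECONDS" 'BEGIN { exit !(t > max) }'; then
    logger "php-fpm watchdog: request took ${t}s, restarting php-fpm"
    service php-fpm restart
fi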

Posted

Did you add any parameters to conf_global.php?

$INFO['use_apc']= '1';
or equivalent?

This post is entirely in the context of the opcode cache, so that's rather unrelated. Answering the question is complicated, because I changed IPB's user cache handling for my own setup.

Posted

This post is entirely in the context of the opcode cache, so that's rather unrelated. Answering the question is complicated, because I changed IPB's user cache handling for my own setup.

Sorry, I missed your "P.S. I'm still using APC for some user caching. I only disabled its opcode cache, since Zend OPcache is just an opcode cache."

I thought you had removed all APC traces.

Out of curiosity, what app are you using to make the graphs? It looks a lot nicer than Cacti.

Posted

After reading this thread, I thought I would do the same and try Zend OPcache (I will call it ZOP).

I am running a reasonably powerful dedicated server here in the rack - with plenty of grunt!

Running CentOS 6.5 with Nginx 1.4.7, PHP 5.3.3, PHP-FPM and MariaDB 10.0.

Dell PowerEdge R710

  • 2 x Xeon X5690 CPUs (24 threads)
  • 64GB ECC DDR3 1333MHz RAM
  • 2.5" SSD rated at about 2.3GB/s write (see screenshot below):

13570920103_684eab2fb7_o.png

So really, we have plenty to play with when it comes to running IPB with ease. We've never really had any issues running Nginx/PHP-FPM/MariaDB, and have never had an issue with CPU - ever.

Our usual processing time was as follows:

PHP processing time = 40.9ms

DB processing time = 3.1ms

Apdex score of 0.99 (Excellent), with 99% of our members satisfied and 1% tolerating.

We thought that this was as good as it gets, until we changed over to ZOP.

Wow did I get a shock!

Our processing time now is:

PHP processing time = 14.1ms

DB processing time = 1.2ms

Apdex score of 1.00 (Excellent), with 100% of our members now being satisfied.

Here are the results (Graph by New Relic):

13570711625_f4eea23b36_o.png

So I am sticking with ZOP for the future. I will be re-enabling APC for user caching later today. I installed ZOP manually, using the guide below.

Thanks Grumpy, your post gets a 'Like This' from me :thumbsup:


How did I install on CentOS 6.5 with PHP 5.3.3? (I basically followed this guide, published by Danijel Krmar.)

1. Download the latest version of Zend OPcache from:

http://pecl.php.net/package/ZendOpcache

2. Then execute these commands at the CLI:

$ cd /usr/src
$ wget http://pecl.php.net/get/zendopcache-7.0.3.tgz
$ tar xvf zendopcache-7.0.3.tgz
$ cd zendopcache-7.0.3
$ phpize
$ ./configure
$ make
$ make install

3. Add the following to the beginning of your php.ini file (or on the second line if you have ionCube installed on the first line):

zend_extension=/full/path/to/opcache.so

You will see the full path after executing make install; it should be something like: /usr/lib64/php/modules/opcache.so

4. Restart PHP-FPM and Nginx.

service nginx restart
service php-fpm restart

or

/etc/init.d/nginx restart
/etc/init.d/php-fpm restart
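
(Optional, not part of the original guide:) to confirm the extension actually loaded, you can check from the CLI:

php -v                     # should print a "with Zend OPcache v7.0.3 ..." line
php -m | grep -i opcache   # should list "Zend OPcache"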

My configuration is as follows (in my php.ini); this will override the default ZOP config.

[opcache]
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=4000
opcache.max_wasted_percentage=5
opcache.use_cwd=1
opcache.validate_timestamps=0
opcache.fast_shutdown=1
IMPORTANT NOTE

Setting opcache.validate_timestamps to 0 (disabling it) will increase performance, especially when your application has a lot of files, but it also means you have to reset the OPcache manually whenever you alter the application files.

If not fully understood, opcache.validate_timestamps=0 can break your application or cause hard-to-find issues. The configuration above is for benchmarking only. For production environments, please use opcache.validate_timestamps=1.
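Not from the guide above, just for reference: with validate_timestamps=0 you have to clear the cache yourself after changing application files. Two options (the web-root path below is only an example):

# 1) Restart PHP-FPM (clears the whole cache; brief interruption):
service php-fpm restart

# 2) Call opcache_reset() under the same SAPI as your site. Running it from the
#    CLI will NOT clear PHP-FPM's cache, so drop a throwaway script into the web
#    root, request it over HTTP once, then delete it:
cat > /full/path/to/your/web/root/reset.php <<'EOF'
<?php var_dump(opcache_reset()); // prints bool(true) on success
EOF
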
Posted

Thanks for posting your results Aussie, definitely useful to see.

But one thing that nags me is that you're using /dev/zero to benchmark your SSD in a way that won't provide accurate results. For example, this is what I get when using the command you're running:

root@redact:~# dd if=/dev/zero of=test bs=1048576 count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 1.13941 s, 1.9 GB/s

Write caching can make your SSD seem unrealistically fast when using dd like that. For more accurate benchmarking results, I recommend following the instructions provided here.

Here's what I get,

root@redact:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.35927 s, 200 MB/s
root@redact:~# echo 3 > /proc/sys/vm/drop_caches
root@redact:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.94136 s, 272 MB/s
root@redact:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.13112 s, 8.2 GB/s
root@redact:~# rm tempfile 
root@redact:~# 
root@redact:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.55632 s, 193 MB/s
root@redact:~# echo 3 > /proc/sys/vm/drop_caches
root@redact:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.94454 s, 272 MB/s
root@redact:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.13129 s, 8.2 GB/s

So roughly ~200MB/s write, ~270MB/s read. Huge difference. You're not going to get a SATA based SSD that writes anywhere near 2.3GB/s :tongue:

Maybe a PCI based SSD device, but never a 2.5" SSD.

I also like to use hdparm for benchmarking, though that's for read speeds only. This is what I get on my server, with a 2x128GB Samsung MZ7PC128HAFU-000DA SSD software RAID 1 array (OS/database) and a hardware 4x2TB enterprise-grade SATA II RAID 5 array (on my less active dev server).

root@redact:~# hdparm -tT /dev/md0
 
/dev/md0:
 Timing cached reads:   29296 MB in  2.00 seconds = 14663.33 MB/sec
 Timing buffered disk reads: 188 MB in  0.73 seconds = 256.41 MB/sec
root@redact:~# hdparm -tT /dev/md0
 
/dev/md0:
 Timing cached reads:   30178 MB in  2.00 seconds = 15105.11 MB/sec
 Timing buffered disk reads: 188 MB in  0.73 seconds = 256.50 MB/sec
root@redact:~# hdparm -tT /dev/sda1
 
/dev/sda1:
 Timing cached reads:   30296 MB in  2.00 seconds = 15164.60 MB/sec
 Timing buffered disk reads: 820 MB in  3.00 seconds = 273.18 MB/sec
root@redact:~# hdparm -tT /dev/sda1
 
/dev/sda1:
 Timing cached reads:   29680 MB in  2.00 seconds = 14855.07 MB/sec
 Timing buffered disk reads: 780 MB in  3.00 seconds = 259.57 MB/sec

(Sorry for derailing the topic a bit)

Posted

Out of curiosity, what app are you using to make the graphs? It looks a lot nicer than Cacti.

It's New Relic. The free version is great because it's free and provides good monitoring. The paid version is extremely awesome, but extremely expensive... :/

-----------------------

On the topic of installation...

If you enable the remi repo for CentOS, you can just do "yum install php-pecl-zendopcache".
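Roughly like this (a sketch; double-check the remi-release URL and package availability for your CentOS/PHP version before running it):

rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
yum --enablerepo=remi install php-pecl-zendopcache
service php-fpm restart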

Posted

Thanks for posting your results Aussie, definitely useful to see.

But one thing that nags me is that you're using /dev/zero to benchmark your SSD in a way that won't provide accurate results. For example, this is what I get when using the command you're running.

Sure, I agree, you're right :thumbsup: I should have tested the way you described. I will retest using the posted link and post the results in a bit.

I suspect we'll get the same type of results as you posted.

Posted

Sure, I agree, you're right :thumbsup: I should have tested the way you described. I will retest using the posted link and post the results in a bit.

I suspect we'll get the same type of results as you posted.

It's no problem, I'm just trying to provide useful information so you can really know how well your server's hardware performs.

I'm sure your host would be happy leaving you to think they're giving you SSD hardware capable of writing at raw speeds of 2.3GB/s, but that's sadly not even possible on a SATA 2 or even a SATA 3.0 device :tongue:

The fastest 2.5" SATA-based SSD I've seen benchmarks at just under 600MB/s, not that you'd ever need that kind of raw power anyway, heh. SSDs are amazing because of their low latency, fast access times, and things of that sort.

Now, PCI-based SSDs? Those things are beasts when it comes to raw read/write speeds, and they're extremely expensive. I imagine that some time in the distant future, SATA-based SSDs will be phased out by PCI-based devices, or by whatever technology phases out PCI by then. After all, 2.5" SATA SSDs basically just emulate old HDD technology; SATA isn't even needed anymore.

Okay I'm really going to shut up and stop derailing the topic now, sorry. I'm in one of my overly talkative moods.

Posted

Indeed, and I am always adding these things to my folder for practical use. I am very grateful and I thank you for that :thumbsup:

I would be really annoyed if someone did not show me the correct way to benchmark my SSDs (I thought I had) - or anything else. Always learning new things helps not only me but other people, so all is good.

Yes we have looked at the Fusion-io ioDrives, and if the cost comes down, we will be looking at those for our DB. Or something similar.

Anyway, here are the updated tests. I basically used what you posted above, but I will keep that URL for future reference:

root@5TH6TFC:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.22834 s, 210 MB/s
root@5TH6TFC:~# echo 3 > /proc/sys/vm/drop_caches
root@5TH6TFC:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.79524 s, 282 MB/s

I will run some other tests in my spare time over the weekend and if I feel the need to post, I shall :ph34r:

Posted

So I wanted to update how ZOP has gone for me over a period of 24 hours. Including before and after screenshots.

Before using ZOP (Nginx/PHP/PHP-FPM/MariaDB), the average PHP response time was sitting at 59ms. Please ignore the dark green part of the graph; that's my external web requests and I forgot to click on it to remove it from the graph:

13590019294_491a805c55_o.png

After using ZOP (Nginx/PHP/PHP-FPM/MariaDB), the average PHP response time is now around 16ms. The dark green series is removed from this graph :tongue:

13589692313_9be34d6609_o.png

So from 59ms to 16ms in PHP processing time is HUGE! I am still using APC for user caching.

I have noticed that my TTFB has also improved, from a green B to a green A. Not too shabby, if I do say so myself:

13590252593_2deb903b52_o.png


While researching ZOP I wanted to find a web control panel, like the simple APC control panel, so here is what I came up with (screenshots included!).

Other members may be interested in one of the following control panels, so here is a simple tutorial to install them individually.

1. ocp.php by ck-on on github

It has a nice way to analyse/explore the file-based cache and looks similar to the APC control panel:

13590225623_b879105c51_o.png

Installation:

cd /full/path/to/your/web/root/
wget https://gist.github.com/ck-on/4959032/raw/0b871b345fd6cfcd6d2be030c1f33d1ad6a475cb/ocp.php

To access:

http://your.tld/ocp.php
  • Make sure this is protected by some type of security layer.

2. Opcache-Status by Rasmus Lerdorf on github

Very nice simple UI:

13590544154_8212fe54fc_o.png

Installation:

cd /full/path/to/your/web/root/
wget https://raw.github.com/rlerdorf/opcache-status/master/opcache.php

To access:

http://your.tld/opcache.php
  • Make sure this is protected by some type of security layer.

3. opcache-gui by amnuts on github

I really think this is the best, most feature-rich UI I have seen to date.

13590162675_03c14a3ed1_o.png

Installation:

cd /full/path/to/your/web/root/
wget https://raw.github.com/amnuts/opcache-gui/master/index.php -O op.php

To access:

http://your.tld/op.php
  • Make sure this is protected by some type of security layer.

I personally will be protecting all of these with password authentication, or some other security layer, for my own security. I suggest that everyone does the same :thumbsup:
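For example, one way to do it with nginx is HTTP basic auth (a sketch only, not my actual config; the socket path and file locations are placeholders to adjust for your own PHP-FPM setup):

# Create the password file (htpasswd comes from the httpd-tools package on CentOS):
htpasswd -c /etc/nginx/.htpasswd youradminuser

# Then add something like this inside the relevant server {} block and reload nginx:
#
#   location ~ ^/(ocp|opcache|op)\.php$ {
#       auth_basic           "Restricted";
#       auth_basic_user_file /etc/nginx/.htpasswd;
#       include              fastcgi_params;
#       fastcgi_param        SCRIPT_FILENAME $document_root$fastcgi_script_name;
#       fastcgi_pass         unix:/var/run/php-fpm.sock;
#   }
#
service nginx reload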

Posted

I don't have any real benchmarks, just seat-of-the-pants feel and watching loads, but with Zend OPcache and XCache I seem to be running well. I can have a couple hundred people online across multiple forums/articles and the load usually runs around 0.02 or so. What I noticed is that the lower load is consistent, with very few spikes, and the few I saw seemed to be related to other domains I run that aren't using caching.

I have not done any real tuning on it, but out of the box it seemed to really help.

Posted

It's New Relic. The free version is great because it's free and provides good monitoring. The paid version is extremely awesome, but extremely expensive... :/

It is magical software, I fully agree. Expensive, but it gives you great insight into everything.

Other than that, there are some really interesting findings here; I will give this a try on our test installations as well.

  • 6 months later...
Posted

What is the config_global.php IP.Board setting for Zend Opcache?

Is it?

$INFO['opcache']         =     '1';

It's an opcode cache; there's nothing to add to the IP.Board configuration.

Posted

@Frontpage

You don't need to use:

$INFO['opcache']         =     '1';

in the global config...

It will work without it.

Use the global config edit for data caching like Memcache, as Zend OPcache is used only as an opcode cacher.
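For example (only the use_apc key is confirmed earlier in this thread; the Memcache key names below are from memory and may differ between IPB versions, so verify against your own conf_global.php and the IPS docs), data caching is switched on in conf_global.php along these lines:

$INFO['use_apc'] = '1';   // APC user/data caching (key shown earlier in this thread)

// Or, for Memcache (key names are an assumption - verify for your IPB version):
$INFO['use_memcache']      = '1';
$INFO['memcache_server_1'] = '127.0.0.1';
$INFO['memcache_port_1']   = '11211';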

Archived

This topic is now archived and is closed to further replies.
