
PHP 7 or above with new update



I wish there was more explanation of how the datastore and the caching interact. I have my datastore set to the file system, so in /datastore/ I have a whole bunch of PHP files with settings in them. I also have caching turned on, so in redis-cli I see all those filenames represented as keys. So I am assuming that with caching turned on it just moves all that data into Redis (or whatever) and serves it from there instead of the flat files.

The thing is that the flat files will be picked up by OPcache and would then be far faster than a Redis query, since they're already stored in memory and you don't have any latency from the call to Redis or the time needed to unserialize the data from the key store. So in a sense it would be faster to turn *off* caching and just rely on OPcache.

But then of course you have to make a database call for any cached pages for guests, which sort of defeats the point of trying to reduce database queries.
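To make the comparison concrete, here is a rough sketch of the two lookup paths I mean. The file name, Redis key and payload format are just placeholders (not the actual IPS datastore layout), and the Redis part assumes the phpredis extension:

<?php
// Illustrative only: file name, key name and payload format are hypothetical.

// Path 1: flat-file datastore. After the first request the compiled file
// sits in OPcache, so the include is answered from shared memory.
$settings = include __DIR__ . '/datastore/settings.php'; // file returns an array

// Path 2: Redis-backed cache. Every lookup is a network round trip plus an
// unserialize() of the stored payload.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$settings = unserialize($redis->get('settings'));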

Apache uses one of the following MPMs (Multi-Processing Modules) to handle incoming requests and process them. Each one works differently. Below are some basic details about each MPM and how it works.

Prefork MPM:-

Prefork MPM launches multiple child processes. Each child process handles one connection at a time.

Prefork uses more memory than the worker MPM. It was traditionally the default MPM for the Apache 2 server, and it is still used when the non-thread-safe mod_php is loaded. Prefork always keeps a minimum number of spare processes running (MinSpareServers), so new requests do not need to wait for a new process to start.
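For reference, the prefork knobs mentioned above look roughly like this (the numbers are only illustrative; on Debian/Ubuntu-style layouts this block lives in mods-available/mpm_prefork.conf):

<IfModule mpm_prefork_module>
    StartServers             5     # child processes created at startup
    MinSpareServers          5     # idle processes always kept ready
    MaxSpareServers         10
    MaxRequestWorkers      150     # upper limit on simultaneous connections
    MaxConnectionsPerChild   0     # 0 = never recycle a child process
</IfModule>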

Worker MPM:-

Worker MPM spawns multiple child processes, similar to prefork. Each child process runs many threads, and each thread handles one connection at a time.

In short, the worker MPM implements a hybrid multi-process, multi-threaded server. The worker MPM uses less memory than the prefork MPM.

Event MPM:-

Event MPM was introduced in Apache 2.4. It is pretty similar to the worker MPM, but it is designed for managing high loads.

This MPM allows more requests to be served simultaneously by passing off some of the processing work to supporting threads. With it, Apache tries to fix the ‘keep-alive problem’ faced by the other MPMs: once a client completes its first request, it can keep the connection open and send further requests over the same socket, which reduces connection overhead. In prefork and worker, such an idle keep-alive connection ties up a whole process or thread; the event MPM hands idle connections off to a dedicated listener thread, so worker threads stay free to serve active requests.
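For comparison, a typical event MPM block looks something like this (again, the numbers are only illustrative; each child process can serve up to ThreadsPerChild connections, and idle keep-alive sockets are parked on the listener thread rather than tying up a worker):

<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25     # threads (connections) per child process
    MaxRequestWorkers      150     # total simultaneous request threads
    MaxConnectionsPerChild   0
</IfModule>

KeepAlive On
KeepAliveTimeout 5                 # seconds an idle keep-alive connection is held open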

 

So the event MPM seems to handle high loads better :)

It can't compare to Nginx's power, but that's fine :)
