
How to speed up the Background Processes?


sound

Recommended Posts

  • Replies 175

I have just upgraded and wondered: is there any way to pause these background processes for a few hours?

My server load is at 5 with normal activity, but it has 4 million posts to process. If I'm expecting an increase in activity, I would rather pause for those few hours, as I can see the server exploding.


If running manually, just close the browser window it is running in.

If using a cron/web service, disable it in your ACP (set the task method back to "Run Automatically with Traffic (Default)"), and then also disable the cron job at the server level, or the web service at whatever service you are using.

 


On 7/8/2016 at 6:58 AM, Charles said:

I replied in your ticket but for everyone's info:

Do not run the queue tasks both in the browser (using the manual "run now" link) and via the cron at the same time. The queue task locks when it is running (so only one thing has control at a time), and when you do both the cron and the browser, only the browser task will execute, and it is much slower than the cron task.

When you do both of them, the cron is useless and does nothing because the browser keeps the queue task locked. The cron will execute, see the browser has the queue locked, and instantly exit, therefore doing nothing. Looping via the browser is much, much slower than letting the cron take care of it.
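For anyone curious what that lock-and-exit behaviour looks like in practice, here is a minimal Python sketch (names and timings are invented for illustration; this is nothing to do with IPS's actual PHP code): whichever runner grabs the lock first keeps it, and any other runner that fires while the lock is held exits immediately instead of queueing up.

```python
import threading
import time

queue_lock = threading.Lock()

def run_queue(runner_name, work_seconds):
    # acquire(blocking=False) mirrors "see the queue is locked, instantly exit"
    if not queue_lock.acquire(blocking=False):
        return f"{runner_name}: queue locked, exiting"
    try:
        time.sleep(work_seconds)  # stand-in for processing queue items
        return f"{runner_name}: processed a batch"
    finally:
        queue_lock.release()

# A "browser" runner holds the lock while a "cron" runner fires:
browser = threading.Thread(target=lambda: print(run_queue("browser", 0.2)))
browser.start()
time.sleep(0.05)               # let the browser grab the lock first
print(run_queue("cron", 0.0))  # the cron run finds the lock held and exits
browser.join()
```

While the slow "browser" thread holds the lock, the "cron" call does no work at all, which is exactly why running both at once wastes the cron.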

 

That's *really* important information that I was unaware of. Maybe put a note in the software saying "Don't use this if you're using cron"?


  • Management
2 hours ago, marklcfc said:

I have just upgraded and wondered: is there any way to pause these background processes for a few hours?

My server load is at 5 with normal activity, but it has 4 million posts to process. If I'm expecting an increase in activity, I would rather pause for those few hours, as I can see the server exploding.

There's no way to pause it, but the queue task has built-in logic to handle such a situation. When it executes, it may only do 100 posts per cycle, but it keeps looping those cycles, doing as many as it can for a set period of time. If your server is not too busy and is powerful, it will do many loops before it stops that cycle. If your server is busy, it may only do a few loops before it runs out of time. Either way, after a period it will always stop, exit, and then re-execute on the next batch run. We do it this way so the queue does not take over your entire server in a mad rush to process tasks.
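As a rough illustration of that time-boxed looping (a Python sketch with a made-up batch size and time budget, not the actual implementation): each run processes small batches in a loop until its budget is spent, then exits and leaves the remainder for the next scheduled run.

```python
import time

BATCH_SIZE = 100    # items processed per loop, e.g. posts rebuilt
TIME_BUDGET = 0.5   # seconds per run; the real task has its own limits

def run_cycle(process_batch, remaining):
    """Loop over batches until the budget is spent or the work is done.
    Returns how many items are left for the next run."""
    deadline = time.monotonic() + TIME_BUDGET
    while remaining > 0 and time.monotonic() < deadline:
        remaining -= process_batch(min(BATCH_SIZE, remaining))
    return remaining

# A fast, idle server finishes many loops inside the budget; on a busy
# server each batch is slow, so the deadline hits with work still queued.
```

This is why the queue never monopolises the server: however slow the batches are, the run always gives up after its budget and resumes later.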


  • 9 months later...
On 07/04/2015 at 3:09 PM, bfarber said:

We have been testing a command line script that effectively does the same thing as the ACP "run background processes manually" command

Was this ever released, @bfarber?

We just upgraded from IPB 3.x to IPB 4.x and the background processes, on a VPS, are taking forever. We have had a browser tab open to run them manually, but I wondered if there is a series of command-line scripts we could execute instead? We are comfortable using the Linux command line (we used it for the upgrade, for example).


14 hours ago, bigPaws said:

Was this ever released, @bfarber?

We just upgraded from IPB 3.x to IPB 4.x and the background processes, on a VPS, are taking forever. We have had a browser tab open to run them manually, but I wondered if there is a series of command-line scripts we could execute instead? We are comfortable using the Linux command line (we used it for the upgrade, for example).

No, instead we improved how the cron job runs tasks, so that it can run as much as possible in a loop. We recommend using cron jobs to run your tasks for larger sites.


19 minutes ago, bfarber said:

No, instead we improved how the cron job runs tasks, so that it can run as much as possible in a loop. We recommend using cron jobs to run your tasks for larger sites.

Ok thanks, just to clarify - is cron somehow quicker than running them manually in a browser window all day?


4 minutes ago, bigPaws said:

Ah-ha, thanks @Marc S (there are two of you on @, btw?). I guess I was looking for the usual * * * * type format, or something at least in bold. Maybe they'll make a small tweak after this feedback for poorly sighted people like me! :)

I can assure you there is only one of me, though. The world couldn't cope with two :D 


  • 2 weeks later...

We have forums in a cluster: 8 app instances, 2 physical DB hosts (master-slave), Ceph for storage, and a CDN. Currently the cron task is active on only one app instance. Is there any chance of being able to run background jobs on any instance? Right now our performance bottleneck is the vCPU of that one app instance, and of course we want to be able to scale. I am not sure about speeding up a single queue (though in special situations we could create the correct logic), but I am sure jobs could be separated per instance: if one instance takes a job, the others see its status as 'taken'. Very simple and clear, I think.


More about the RebuildPosts background job. In my environment this task, run by cron, works for less than 5 seconds and then waits for the next minute's cron start. This is because the variable $rebuild is hardcoded to 100, so a lot of the time (~90%) is spent waiting for the next minute. After I changed this setting to 4000, the task works for 45-55 seconds. That's great: no more waiting out most of each 60-second interval, the next cron iteration starts very quickly, each run does 40x more data, and the whole job takes roughly 1/40th of the time.

Maybe add some logic that tracks the average working time and raises the $rebuild value when the last run finished quickly, and lowers it when the working time goes over 60 seconds. The logic should aim for 50-55 seconds of work per run; with that, the speed is guaranteed.

Update: maybe an average working time of 5 minutes would be good too? Why not? With that duration we get even less idle time than with 60 seconds.
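The adaptive idea sketched above could look something like this (purely illustrative Python; `tune_batch` and the thresholds are invented, not part of the suite): grow the per-run batch size when a run finished quickly, shrink it when the run overshot, aiming for the 50-55 second window.

```python
TARGET_LOW, TARGET_HIGH = 50.0, 55.0   # desired seconds of work per run
MIN_BATCH, MAX_BATCH = 100, 10000      # clamp to keep the value sane

def tune_batch(batch, elapsed):
    """Return the batch size for the next run, scaled by how far the
    last run's elapsed time fell outside the target window."""
    if elapsed <= 0:
        return batch
    if elapsed < TARGET_LOW:
        batch = int(batch * TARGET_LOW / elapsed)    # too fast: do more
    elif elapsed > TARGET_HIGH:
        batch = int(batch * TARGET_HIGH / elapsed)   # too slow: do less
    return max(MIN_BATCH, min(MAX_BATCH, batch))

# A run of 100 items that took only 5 s scales up toward the target:
print(tune_batch(100, 5.0))   # 100 * 50 / 5 = 1000
```

With proportional scaling like this, the hardcoded 100 would only be the starting point, and each cron minute would settle toward nearly full utilisation on its own.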


  • 1 month later...

One suggestion I would make to the developers is to split the larger queue tasks (e.g. rebuild posts) into sets of no more than ~250,000 items and tweak the task runner to allow multiple parallel manual task workers. Nothing has to change for the standard user; the built-in task manager still chews through one task at a time. But if advanced customers know the task number or queue ID, we can address those additional "chunks" via cron task or manual invocation to suit our environment.

We're living in a multithreaded world now.
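A hedged sketch of that chunking idea in Python (the ~250,000 chunk size comes from the suggestion above; the function names and worker count are invented for illustration): split the total work into fixed-size offset ranges so that several workers can each claim a chunk and process it in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 250_000

def split_into_chunks(total_items, chunk_size=CHUNK_SIZE):
    """Return (start, end) offset pairs covering total_items."""
    return [(start, min(start + chunk_size, total_items))
            for start in range(0, total_items, chunk_size)]

def rebuild_chunk(bounds):
    start, end = bounds
    return end - start  # stand-in for rebuilding posts start..end

# Four workers chew through a million-post rebuild in parallel:
chunks = split_into_chunks(1_000_000)
with ThreadPoolExecutor(max_workers=4) as pool:
    done = sum(pool.map(rebuild_chunk, chunks))
```

Because each chunk is an independent offset range, nothing changes for the single-worker default: one worker simply processes the chunks in order.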


  • 1 month later...
On 5/18/2017 at 4:21 AM, Upgradeovec said:

More about the RebuildPosts background job. In my environment this task, run by cron, works for less than 5 seconds and then waits for the next minute's cron start. This is because the variable $rebuild is hardcoded to 100, so a lot of the time (~90%) is spent waiting for the next minute. After I changed this setting to 4000, the task works for 45-55 seconds. That's great: no more waiting out most of each 60-second interval, the next cron iteration starts very quickly, each run does 40x more data, and the whole job takes roughly 1/40th of the time.

Maybe add some logic that tracks the average working time and raises the $rebuild value when the last run finished quickly, and lowers it when the working time goes over 60 seconds. The logic should aim for 50-55 seconds of work per run; with that, the speed is guaranteed.

Update: maybe an average working time of 5 minutes would be good too? Why not? With that duration we get even less idle time than with 60 seconds.

Where did you change that? I have plenty of processor and RAM sitting idle, and upgrading my test forum has been a pain. My database is less than 2GB and it's taking hours; the first run took 24-48 hours. That's damn slow.


8 hours ago, AlexJ said:

Where did you change that? I have plenty of processor and RAM sitting idle, and upgrading my test forum has been a pain. My database is less than 2GB and it's taking hours; the first run took 24-48 hours. That's damn slow.

Background tasks already run as much as they can until the process allowances (time limit or memory usage) are exhausted. Changing rebuild values is not recommended and won't make any difference, because the process will run more than once anyway.
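A loose Python analogue of "run until process allowances are exhausted" (the thresholds and function names here are invented for illustration; the real task uses its own limits): each run keeps working until either a wall-clock limit or a memory ceiling is hit, then hands the remainder to the next run.

```python
import resource
import sys
import time

TIME_LIMIT = 1.0                   # seconds of work per run
MEMORY_LIMIT = 512 * 1024 * 1024   # bytes of peak resident memory

def allowances_exhausted(started):
    """True once this run should stop, exit, and defer to the next run."""
    if time.monotonic() - started >= TIME_LIMIT:
        return True
    # ru_maxrss is KiB on Linux but bytes on macOS, so scale accordingly
    scale = 1 if sys.platform == "darwin" else 1024
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * scale
    return peak >= MEMORY_LIMIT

def run_until_exhausted(process_batch, remaining):
    """Process batches until an allowance runs out; return what's left."""
    started = time.monotonic()
    while remaining > 0 and not allowances_exhausted(started):
        remaining -= process_batch(min(100, remaining))
    return remaining
```

Under this model, raising the per-batch rebuild value only changes how the same time budget is sliced, which is why tweaking it does not change the overall throughput of the run.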


10 hours ago, Stuart Silvester said:

Background tasks already run as much as they can until the process allowances (time limit or memory usage) are exhausted. Changing rebuild values is not recommended and won't make any difference, because the process will run more than once anyway.

I simply do not understand how this can be true. It seems many others can't understand it either, since this question keeps popping up. Surely that should tell you something isn't being communicated as it should be.

If my server is idling (CPU, RAM and disk read/write usage are all low), then how can the tasks be running as much as they can? I and many others have stated that our servers' hardware could handle more. What resource on the server is the bottleneck?

If you had told me that my server's CPU was at 100%, I would understand, but I have yet to find out what exactly is slowing down the background task execution.


15 hours ago, Stuart Silvester said:

Background tasks already run as much as they can until the process allowances (time limit or memory usage) are exhausted. Changing rebuild values is not recommended and won't make any difference, because the process will run more than once anyway.

How can I increase the memory usage? I didn't find any documentation for it. My current usage:

(screenshot: 07.26.2017-22.06.31)

 

More or less, I keep seeing this in the dashboard, and I am not sure why:

The following tasks appear to be locking frequently: queue.
Please run them manually. If you require assistance with any errors shown please contact technical support.

 


9 hours ago, Charles said:

If the tasks are locking then that is a problem support can assist you on.

I have a ticket, and here is the response I got. Can you please look into it? When I run it manually, it throws a PHP error. It's been locked since last Tuesday/Wednesday/Thursday. If that's how long it's going to take, my users will be unhappy, since right now even search doesn't work.

Ticket - #985293

Quote

 

Hello,

Tasks are run with traffic by default. As it's a test installation, there is no traffic, therefore they don't run. I would suggest running these manually in a new tab, using the link provided.

Kind Regards,
Marc Stridgen

 

 


Archived

This topic is now archived and is closed to further replies.
