
Posted

Hi everyone,

Just wondering if anybody has any experience, or potential "DO NOT DO THIS UNDER ANY CIRCUMSTANCES" warnings, with using an auto-scaling network of identical web servers to host Invision: a load balancer directing traffic to one or more servers behind it, with the number of servers changing depending on the traffic volume. I'll be using AWS, but I guess the same would be feasible on other cloud platforms. I think I'd put the codebase on a shared volume mounted on all of the servers, with assets served from S3/CloudFront.

Any suggestions or ideas welcome!

Posted

We do this with our Community In The Cloud offerings, so I can assure you it's doable. You'll want to store datastore data either in the database or in Redis, and use S3 to store your uploaded files, but otherwise it's pretty straightforward.

  • 4 months later...
Posted
On 8/29/2020 at 3:32 PM, sobrenome said:

The cron job was set up in the instance that generated the image used by the auto-scaling group to launch the other instances automatically.

The workaround I found for that - as it's possible you'll end up with race conditions if the cron executes simultaneously on multiple instances - was to use the Web Service option and write a Lambda function to make the request every minute.

Posted (edited)
4 hours ago, Steve Grant_189967 said:

The workaround I found for that - as it's possible you'll end up with race conditions if the cron executes simultaneously on multiple instances - was to use the Web Service option and write a Lambda function to make the request every minute.

How much do you pay monthly to run the cron job on Lambda?

Edited by sobrenome
Posted
On 9/1/2020 at 5:50 PM, sobrenome said:

How much do you pay monthly to run the cron job on Lambda?

Nothing - it's Free Tier eligible. As an indication of what happens once that first year has elapsed: last month I was billed for 16,000 GB-seconds (the free tier limit is 400,000) and made 7,718 requests, as I set it to run only every 5 minutes rather than every minute, which seemed wholly unnecessary.

Posted
1 hour ago, Steve Grant_189967 said:

Nothing - it's Free Tier eligible. As an indication of what happens once that first year has elapsed: last month I was billed for 16,000 GB-seconds (the free tier limit is 400,000) and made 7,718 requests, as I set it to run only every 5 minutes rather than every minute, which seemed wholly unnecessary.

Is it free for 12 months or forever, like SES with up to 62,000 emails?

Posted
3 hours ago, Steve Grant_189967 said:

Good question - just checked, and it looks like it's always free, up to 1m requests and 3.2m seconds of compute time per month, which is obviously more than enough. 👍

That’s awesome! Have you written a Python script to run the IPS cron job? Could you share it here? 😃
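As a sanity check, the always-free limits and last month's usage quoted above line up arithmetically (a sketch; 128 MB is assumed here as the function's memory setting, the smallest Lambda offers):

```javascript
// Lambda's always-free tier: 1,000,000 requests and 400,000 GB-seconds per month.
const freeGbSeconds = 400000;
const freeRequests = 1000000;

// At an assumed 128 MB memory setting, the compute allowance stretches to
// 3.2 million seconds - roughly 37 days of continuous execution per month.
const memoryGb = 128 / 1024; // 0.125 GB
console.log(freeGbSeconds / memoryGb); // 3200000

// Last month's usage quoted above sits well inside both limits.
console.log((16000 / freeGbSeconds) * 100); // 4 (percent of the compute allowance)
console.log((7718 / freeRequests) * 100 < 1); // true - under 1% of the request allowance
```

And if the function only fires every five minutes, that's under 9,000 invocations in a 31-day month, so the request limit is never in danger either.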

Posted
On 9/4/2020 at 3:46 PM, sobrenome said:

That’s awesome! Have you written a Python script to run the IPS cron job? Could you share it here? 😃

It's written in Node.js 12.x

const axios = require('axios');

// Task URL for the IPS "run tasks as a web service" option.
const url = 'https://###SITE_URL###/applications/core/interface/task/web.php?key=680cb24a3d18c05c1d5d35e169cb9a4a';

async function getTaskUrl(url) {
    try {
        const response = await axios.get(url);
        console.log(response);
    }
    catch (error) {
        console.error(error);
        throw error;
    }
}

exports.handler = async (event) => {
    console.log('getting url');
    await getTaskUrl(url);
    console.log('exiting...');
};

I should also add that I've since reverted to using cron. While there is a risk of encountering a race condition with the cron running on multiple servers simultaneously if you're running multiple instances, I've found that my site currently hasn't needed the auto-scaling capability - I'm generally getting average CPU usage of about 20%. My bottleneck is the database, and the schema is baffling, to be honest - using InnoDB as it's best for relational tables and yet using denormalised tables 🤔
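One common way to tame that race condition while keeping cron on every instance is a short-lived distributed lock. This is a sketch, not anything IPS ships: `runTaskOnce` and `fakeRedis` are hypothetical names, and the `SET key value NX EX ttl` call mirrors the standard Redis locking pattern, shown here against a minimal in-memory stand-in so the example is self-contained.

```javascript
// Acquire a short-lived lock so only one instance runs the task per interval.
// With a real Redis client, set(key, value, 'NX', 'EX', ttl) is atomic across servers.
async function runTaskOnce(client, taskFn, lockKey = 'ips:cron:lock', ttlSeconds = 55) {
  const acquired = await client.set(lockKey, 'locked', 'NX', 'EX', ttlSeconds);
  if (acquired !== 'OK') return false; // another instance already ran this interval
  await taskFn();
  return true;
}

// Minimal in-memory stand-in for a Redis client, for illustration only.
function fakeRedis() {
  const store = new Map();
  return {
    async set(key, value, nx, ex, ttl) {
      if (nx === 'NX' && store.has(key)) return null; // NX: only set if absent
      store.set(key, value);
      setTimeout(() => store.delete(key), ttl * 1000).unref(); // EX: expire after ttl
      return 'OK';
    },
  };
}

// Only the first of two "instances" sharing the lock store runs the task.
(async () => {
  const client = fakeRedis();
  let runs = 0;
  const task = async () => { runs += 1; };
  console.log(await runTaskOnce(client, task)); // true
  console.log(await runTaskOnce(client, task)); // false
  console.log(runs); // 1
})();
```

The TTL is kept just under the cron interval so the lock always expires before the next minute's run, even if an instance dies mid-task.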

Posted
2 hours ago, Steve Grant_189967 said:

there is a risk of encountering a race condition with the cron running on multiple servers simultaneously

I haven't seen any issues so far. The same issue could occur with many instances running even without cron, with tasks triggered by members visiting the website instead, like IPS in the Cloud.

2 hours ago, Steve Grant_189967 said:

generally getting average CPU usage of about 20%

For IPS with low to moderate traffic, 2 micro instances are fine if you use S3, Elasticsearch (I limited search to 250 results), Redis (underused on 4.4) and RDS services in the cloud (nano instances of these services are fine). Remember that it's good to have a load balancer and more than one EC2 instance in different availability zones to ensure that your community is always online.

2 hours ago, Steve Grant_189967 said:

using InnoDB as it's best for relational tables

Yes, I use InnoDB on MariaDB on RDS. Sometimes I feel it could be a little faster for writing; for reading it is very fast.
