Everything posted by Terry Ellison

  1. @Marc Stridgen, just an update. The live cut-over went smoothly. One extra step that I did take was to revalidate the licence key immediately after relocating to the new server. I also upped the REBUILD constants by 5× in the config (see the sketch below). Likewise I paused the cron task and used a bash loop on my FPM container:

     for ((i=1; i<1000; i=i+1)); do
         ((t=SECONDS))
         php -d memory_limit=-1 -d max_execution_time=0 \
             /var/www/ipb/applications/core/interface/task/task.php \
             9e0ff925b30af547068652c6ee9929c7
         ((t=SECONDS-t))        # elapsed seconds for this task.php invocation
         echo -n "$t:"
         sleep 10
     done

     This gave the timings:

     179:86:85:85:87:91:88:86:89:89:84:78:76:78:160:82:77:91:80:79:80:84:87:84:86:81:80:166:83:84:83:82:82:74:74:86:181:74:71:24:0:0:...

     You can see the PHP task script running solidly, with no time-metering back-off. Still about 15× faster than my earlier trials.
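     For anyone wanting to replicate that 5× bump, here is a hedged sketch only: the REBUILD_* batch-size constant names are taken from IPS4's init.php, but check the names and shipped defaults in your own release before copying this (I am assuming defaults of 50/250/500 here).

     printf '%s\n' \
         "define( 'REBUILD_SLOW', 250 );" \
         "define( 'REBUILD_NORMAL', 1250 );" \
         "define( 'REBUILD_QUICK', 2500 );" >> /var/www/ipb/constants.php   # 5x the assumed defaults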
  2. @Marc Stridgen, I just did another dress rehearsal of my migration from v4.4.9 to v4.6.10. My host server is a 6-core VPS with 16 GB RAM, of which 4 GB is allocated to the InnoDB cache. The migration took about 2 hrs 46 mins, and the average CPU utilisation was under 1% for most of this period. The I/O rate was under 10 MB/s of writes which, given that the FS is on NVMe, is maybe 20× less than the maximum throughput. After about 30 mins of the conversion seeming to be stalled, I temporarily replaced the crontab job with a custom poll loop to execute task.php:

     for ((i=1; i<1000; i=i+1)); do
         declare -i start=$SECONDS
         php $opts $tasker $TASK_KEY    # $opts, $tasker and $TASK_KEY hold the php options, task.php path and task key
         declare -i t=$((SECONDS - start))
         echo -n "$t:"
         sleep 10
     done

     This was an ad hoc scripting hack, but it exposed the throttling behaviour. This was the output during the conversion (with CRs added for formatting):

     0:0:1:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:1:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:
     0:0:1:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:1:0:2:0:0:0:0:0:0:0:1:1:0:0:0:0:0:0:
     0:0:0:0:0:0:0:0:0:87:90:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:1:0:0:0:0:0:0:0:
     0:0:0:0:0:0:0:0:0:0:0:1:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:1:0:0:0:0:0:0:0:0:0:
     0:0:0:0:0:0:0:0:0:0:0:0:1:0:0:88:89:89:98:86:89:90:177:87:87:89:65:0:0:0:

     In other words, even though there was a conversion queue stacked up for the cron job, task.php typically completed in under 1 sec elapsed, only doing any real processing every 2-3 mins. The exceptions were a short burst of 3 mins of batch processing maybe 45 mins in, and then a solid 19 min burst to complete the conversion another 45 mins later. This started about a third of the way through the post re-indexing, so the step-up didn't occur on a functional boundary. BTW, this is why you really don't want to click the "manually run them now" option, because it seems to lock you into this trickle mode for the entire conversion. I do have to wonder if your developers have ever done timed conversions on representative customer databases. I am going to draw a line under this because a ~3 hr migration downtime is good enough for us. Better than the ~30 hrs in our early trials on a test laptop. Thanks for your attention, Marc 😊
  3. @Marc Stridgen Let me properly instrument my next trial conversion run for you. Any feedback needs to be supported by clear, hard data for it to be informative for you. 😊
  4. To follow up on this point after configuring Redis: the phrase Data Storage Method is very misleading here, because even with the Redis Data Storage Method ticked, MySQL is still used as the relational data store for all of the user, topic, post, etc. related data. Redis is only used as a memory cache for essentially ephemeral data, very much in the way that memcached was previously used (see the quick check below).
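     You can verify this yourself with shell access to both services; a hedged sketch (the ibf_ table prefix is illustrative only — use whatever prefix and database name your install was created with):

     redis-cli --scan | head -20                              # only ephemeral cache/session-style keys here
     mysql -e 'SELECT COUNT(*) FROM ibf_forums_posts' ipb     # the real relational content stays in MySQL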
  5. Sorry Marc, I am not sure what and where your "there" refers to, though I will happily respond to any Qs that the IC team asks of me. As I explained in my Migration Notes wiki page, migrating from 4.4 to 4.6 on a new server is complicated because of LAMP stack version and caching option incompatibilities between these two IC versions. This all gets messy if you have a native LAMP stack on the hosting server, because the standard (Ubuntu) packages don't comfortably support multiple versions of this stack being installed, or worse running, at the same time without a lot of faffing around in the configuration, so you end up doing serial installs and removals of packages, and these tend to leave garbage in /etc. Using Docker addresses this whole issue, as it is trivial to spin up and then bring down separate Docker-specific stacks (see the sketch below). I realise that IC-hosted forums are the commercially preferred option for IC, but our community and forum are UK-based so we want a UK-hosted option; we also can't afford a five-fold increase in hosting fees on our no-advertising funding model (even though that would be a fair commercial price for such a managed service). I am sure that many self-hosting customers have similar drivers, which is why I added this Self-Hosting topic. After quite a few trial migrations, I think it entirely fair to describe your upgrade process as both unduly long and, worse, fragile in terms of runtime. For our forum, the best I've achieved is a 3½ hr elapsed conversion time on a dedicated 6-core VPS with NVMe SSD, and that is working to a tight script. Do the wrong click in one of the ACP screens and this can go up three-fold. Using a Docker approach under GitHub makes the whole end-to-end process more deterministic and maintainable.
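     To illustrate why Docker sidesteps the packaging problem, a minimal sketch, assuming Docker is already installed; the image tags and mount paths are illustrative only (IC 4.4 and 4.6 target different PHP versions):

     docker run -d --name php72 -v /srv/ipb-4.4:/var/www/html php:7.2-fpm   # old-version stack
     docker run -d --name php80 -v /srv/ipb-4.6:/var/www/html php:8.0-fpm   # new-version stack
     # ... run the trial migration, then throw both away without touching /etc:
     docker rm -f php72 php80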
  6. @Stuart Silvester you might be interested in the following:
  7. I am the SysAdmin for a self-hosted UK forum for self-builders. We are currently doing a platform refresh from a 4-core to a 6-core VPS (mainly because the latter bundle rebaselines to a currently supported Ubuntu LTS version and doubles the SSD storage available). I am moving to using Docker for both production and test on the new server. Here is the announcement topic: BuildHub – Moving to New Server and Github. Here is the GitHub repo for the project: https://github.com/TerryE/docker-buildhub. I don't have any assistance to request, ATM. The project is pretty much self-documented. You just need a host server with docker-compose, docker.io and git installed (I have a local RPi4 with a 256 GB NVMe SSD running Raspbian 64-bit Lite and it runs fine on it). Set up your project directory and git init it, then git pull this repo. Set up your environment and secrets according to the env.default-template. Run extensions/build-all to build the extension images, then run docker-compose to build and bring the forum up (see the sketch below). This will bring up 5 containers for the stack: httpd, php, mysql, redis and hk (cron-based housekeeping). Note that the build supports two separate instance stacks: forum (on 80 and 443) and test (on 8080 and 4443). This is a work-in-progress, but about 90% complete. Anyone is free to copy / clone / raise issues through the GitHub issues system. Enjoy. 😊
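     The steps above as a hedged end-to-end sketch; the template filename and build script come from my description, but double-check them and the .env destination against the repo's current README:

     sudo apt install docker-compose docker.io git
     mkdir buildhub && cd buildhub && git init
     git pull https://github.com/TerryE/docker-buildhub.git
     cp env.default-template .env        # destination name assumed; edit in your environment and secrets
     extensions/build-all                # build the extension images
     docker-compose up -d                # brings up httpd, php, mysql, redis and hk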
  8. @Matt I can see two clear advantages of caching for complex apps such as yours, MediaWiki, WordPress, etc., as this typically gives a 2× or better throughput improvement. I can also see the advantage from your PoV as a vendor in dropping APCu, memcache, etc. for a single supported cache: Redis. What confuses me is that selecting Redis as a cache seems to state that use of Redis forces a swap away from the MySQL DB as the primary data store: ACP > Advanced Configuration > Data Storage Method is switched from MySQL to Redis. If so, this has huge data security / integrity / backup implications. If not, then this menu option needs rewording to clarify. Adding some decent configuration documentation would also help. ATM, all I can do is reverse-engineer the implications on a test instance. 😒
  9. @Stuart Silvester I have my forum running on a 4-core VPS at v4.4 and am about to cut over to a 6-core VPS at the current 4.6.10. My reason for getting stuck at 4.4 was historic (one of the then mods had customised our default template in a way that was dropped with 4.4, so we would have a look-and-feel bump). I want to minimise the actual cut-over downtime. The forum isn't that large (< ½M posts), but we are still looking at a ~1 day downtime. My forum LAMP stack is all in Docker containers, so spinning up various test rigs is easy. I am currently trial-testing on the 6-core VPS and on a 4-core Ubuntu laptop which I use as a local dev machine. Your application dashboard includes the following statement in the Background Processes pane when doing the conversion: "These processes are performed in the background in batches and may take a long time to complete. Alternatively, you can manually run them now and wait until they all complete." This wording implies that running manually will help speed the conversion; but so long as the cron job has been correctly configured, this is terrible advice, as doing so will slow down the conversion maybe 10×. It needs rewording. Even when correctly configured, the conversion process is neither CPU nor I/O bound. There seems to be some back-off time throttling within the task.php architecture. The forum will be offline during the conversion, presenting a denial of service to the user community, so why not do as MediaWiki does and offer a batch script to do this, for example a "run at full rate until complete" option on task.php (see the sketch below)?
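     To be concrete about what I mean by a "run at full rate" option, a hedged outline only: this reuses the task.php invocation from my timing loops and assumes a near-instant return means the queue has drained.

     while :; do
         t=$SECONDS
         php -d memory_limit=-1 -d max_execution_time=0 \
             /var/www/ipb/applications/core/interface/task/task.php "$TASK_KEY"
         (( SECONDS - t < 2 )) && break   # back-to-back runs, no sleep; stop once runs return immediately
     done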
  10. @Marc Stridgen, I am referring to the old InvisionPower website notes circa 2016 where you gave test URL volatility advice. (My possibly flawed memory only; I can't be bothered to start trawling the Wayback Machine for the exact content.) However, I accept that in our fast-moving world, statements made 5 years ago are ancient history. At last the penny drops: how you set the test URL seems to have changed. Your Manage Purchase > Change Licensed URL function doesn't allow you to set or modify your test instance. (I am not sure what it does.) You implement setting the test URL by the customer doing a clean install, or by executing the AdminCP > System > License Key > Change License Key function in the test instance with -TESTINSTALL appended to your already registered key, and this registers your test instance. Hey presto! Thank you! Note that you don't actually say this anywhere in your License Keys & URLs documentation. Perhaps you expect customers to work this out by inspiration or divine intervention? 😋 A simple para in plain English in your documentation would really help other customers avoid this hassle and reduce nugatory support agent time. How about supplementing the paras "In addition to the main live URL..." with a cross-reference to the Install and Upgrade page for more details? Also, it would be really helpful if this Install and Upgrade section could spell out the details a little more explicitly:
     • The section should contain an explicit statement of the business constraints of use for any shared test instance: access should be limited to testers for the sole purpose of testing, and access control mechanisms should be in place to ensure that the test service is not accessible to the public or general community members. Localhost test instances are for the sole use of the developer using the workstation.
     • As well as developer testing of customised skins and add-ons, the second most common test use case is where a sysadmin wants to do a full dress rehearsal of a live upgrade for timing and volumetric validation, as well as ensuring the upgrade doesn't screw up any extensions and add-ons. This latter case can only be meaningfully done on a full clone of live, and hence will typically not be a fresh install, but more usually a (tweaked) deep copy of the database and filestore, with a test licence key applied through the ACP > Change License Key function updating the key to a test install (and not via the somewhat misnamed Manage Purchase > Change Licensed URL function).
     • If the customer needs to change the live or test URL for a valid business reason, then this can be discussed by raising a case through the standard Support Request mechanism. Once the relevant field has been cleared, a new live or test URL can be registered through the ACP > Change License Key function.
  11. My production server has a subdomain forum and my test subdomain is test. I submitted a ticket to change it from https://test.domain/ to https://test.domain:4443/. Your agent (Marc?) responded by clearing the test URL and suggesting that I do this myself — which I can't. Check the date of the change. I really can't understand why you feel that using a non-standard port for test could represent an abuse of the testing facility. If anything, switching to a non-standard port would surely make such abuse more difficult, as end users would be unlikely to use non-standard ports. @Marc Stridgen, IIRC you have a limitation of up to two test URL changes per year. This is my first such change request in ~5 years, and all I am asking is to add a non-standard port. In your email response you state: If I click on the Change Licensed URL function to add a test URL as you suggest, I get the error: So I am simply asking you to reinstate my test URL but with the non-standard port 4443, as your maintenance function doesn't allow me to do so myself. This is compliant with RFC 2818 (see para 2.3). As I explained, I want to bring my test instance online because we are running out of space on our current production VPS and I want to do a full-scale test of the migration. The production VPS doesn't have the capacity to run the test instance, so I want to move it to another server where we use the convention of port 4443 for test instances. The alternative is to procure another VPS because Invision Community doesn't seem to support RFC 2818. If this is a hard constraint then I will have to investigate alternatives. I am just really confused as to why you feel using a non-standard port for test instances is such an "odd case", when it is really quite common practice in the industry.
  12. @Marc Stridgen, thanks for getting back to me. Can you please enlighten me as to the business rationale for asking your customers to remove / decommission a production website in order to change the test URL? Through your customer support request function, I logged a call to change my test URL from https://xxx/ to https://xxx:4443/ and your agent responded by removing the test URL. However, I still can't set a new test URL of https://xxx:4443/ without getting stuck on that tedious "you must first remove the existing installation" error. In our case, the virtual server hosting our production service also used to host our shared test instance. However, over the years our uploads hierarchy has grown so large that we need to migrate to a new, larger VS; incidentally, our production server can no longer carry two complete instances: prod + test. Clearly, I still want to do full dress rehearsals before the move; however, because of port-forwarding constraints, it was just a lot easier to ask our testers to use https://xxx:4443/, since we only have a few testers. Simple, I thought. Surely this is a pretty standard use case? Nope. Yes, I agree that it is entirely reasonable that you ask customers to decommission an old test site before doing a change to the test URL, but in my case there hasn't been a test forum offered at the old test URL for over a year. IMO, the flaw in your CRUD validation is that you don't treat the prod and test URLs as separate entities, but instead blur them into a composite. If the production site URL is unchanged, then surely you shouldn't require that the production site be decommissioned in order to change the test URL: only check the subdomain that is being changed. Also make sure that your reporting / licence check API treats hostname and hostname:port keys consistently; in other words, if you treat xxx:4443 and xxx(:443) as separate installations, then an install on xxx:4443 is not evidence of an install on xxx:443. Lastly, if your customer asks to change a test URL to XXXX and your existing maintenance validation rules (invalidly) prevent what is a reasonable request from a business perspective, then please don't just reset the URL and ask them to try again; just do what they ask.
  13. The current algos in init.c:licenseKey() and init.c:checkLicenseKey() allow sloppy compares of the registered and actual site URLs, effectively ignoring http vs https. However, http://site/ is a synonym for http://site:80/ and https://site/ is a synonym for https://site:443/, yet if you (sometimes) use the explicit ports, the key checks will fail (see the sketch below for the canonicalisation that would fix this). I picked this up because I need to use a non-default https port for my test instance and I was debugging why this was raising a licence error. More annoyingly in this case, the Manage Purchase > Change Licensed URL option does not allow you to change the test URL to add an explicit port, because: "Before you can change the URL, you must first remove the existing installation." So by this logic I've got to bring my production forum down and offline for some unspecified time just to update the port used for my test environment. Well, some business analyst really thought through the specification of that update function! 😅🤣
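     For illustration only, here is the sort of canonicalisation the checks could apply before comparing URLs, sketched in bash (the real checks are PHP; this just demonstrates the rule that the default ports 80 and 443 are redundant):

     canonicalise() {
         local url=${1%/}               # drop any trailing slash
         case $url in
             http://*:80)   url=${url%:80} ;;    # :80 is implicit for http
             https://*:443) url=${url%:443} ;;   # :443 is implicit for https
         esac
         echo "$url"
     }
     # https://site/ and https://site:443/ now compare equal:
     [ "$(canonicalise https://site:443/)" = "$(canonicalise https://site/)" ] && echo match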
  14. We currently use MySQL + memcached, but I am currently doing a test upgrade to v4.6.7 and would be happy to switch to MySQL + a Redis cache. However, in this version of the ACP > Advanced Configuration > Data Storage page, the qualifying paras and interlock are now quite explicit: it's an either/or: MySQL + no caching, or "Enabling Redis will use Redis for storage and caching". This withdrawal of a RAM cache option for MySQL users seems to be a fundamental change to the storage architecture at a minor dot release, without being reflected in the current online documentation or raised in the release notes. What is the maintainers' position on this?