ASTRAPI

Members
  • Posts: 1,638
  • Joined
  • Last visited
  • Days Won: 6

ASTRAPI last won the day on December 31 2019

ASTRAPI had the most liked content!

Recent Profile Visitors

17,049 profile views
  1. Hello @growroom. At the moment I am not available 😞 I am really sorry...
  2. Hello

     There is a vulnerability that has been discovered in the popular Java logging library Log4j 2 which may allow attackers to run code remotely on your servers. Apache Log4j 2 is bundled with and used in many Java applications, including Elasticsearch, so if you are using Elasticsearch you may be vulnerable.

     Vulnerability info: https://nvd.nist.gov/vuln/detail/CVE-2021-44228

     As there are no official patches out yet and exploitation of the vulnerability has already started, you may want to apply a workaround until an official patch is released.

     For Elasticsearch version 6.4 and up: edit your jvm.options configuration file, usually located at /etc/elasticsearch/jvm.options, and add this line at the end:

     -Dlog4j2.formatMsgNoLookups=true

     Then restart Elasticsearch using something like:

     systemctl restart elasticsearch

     (A quick way to verify that the flag was picked up after the restart is sketched below, after the last post.)

     If you are using Elasticsearch version 6.3 or any earlier version, please upgrade asap to the latest version supported by Invision. The 6.3 and earlier versions use an old version of Log4j, which means the above workaround will not work!

     Also update your JDK: when running on older JDKs, an attacker is able to inject and execute a remote Java class. On recent JDKs the attack is limited to a potential DoS (causing data ingestion to temporarily stop) and information leakage, but no remote code execution attack vectors are known.

     Keep your servers secured!!!!

     Thanks
  3. Thanks for your kind words 🙂
  4. Not yet, as it is not needed 🙂
  5. Cloudflare limits upload size (HTTP POST request size) per plan type:
     • Free and Pro: 100 MB
     • Business: 200 MB
     • Enterprise: 500 MB by default (contact Customer Support to request a limit increase)
  6. If you have enabled Cloudflare then you should not have any direct downloads from Wasabi, as all requested files will go through Cloudflare, where there is no such limit.
  7. Hello Circo, I think the manual installation of addons issue is not related to this topic. It would be better to open a new topic about it....
  8. Hello 🙂 In my opinion it is better to optimize and scale as a solution rather than archiving. It is better to have a proper solution so that when the database grows later you are ready for it and don't have to start archiving again. Archiving limits your users' interaction...
  9. I also prefer MariaDB 🙂
  10. And this one for Wasabi and Cloudflare 🙂
  11. It is a plan that you must make, either you or your server admin or both 🙂 Install CentOS 7 for the next 4 years and then, when things are more mature, you can migrate. You may also be able to adjust your existing installation and start following another CentOS-like distribution... so no need to reinstall 🙂 Or you can go ahead with Ubuntu, for example. But first check that your control panel (if you use any) supports it, and also check that any scripts you will use are compatible.

      IPS doesn't care whether it is on top of CentOS or Ubuntu... It just needs the web server, PHP, MySQL, and other related software like Redis or Elasticsearch to work. IPS will perform better on the most minimal installation of the OS and with a well optimized network, kernel, software, etc.
  12. Yes, this is the best option: Intel Xeon-E 2288G - 8c / 16t - 3.7 GHz / 5 GHz. But adding a better CPU is only one part of the performance picture in general... Optimizing the OS, the network, and software like Nginx, PHP-FPM, MySQL, Redis, etc. must also be done to improve your server's performance overall. If you have already optimized them, then adding resources will help 🙂
  13. Hello Gauravk

      CentOS 8 is dead! To be more specific, it will be dead in 1 year from now. CentOS 7 also has 4 years left before its end. We are in a transition where a new CentOS-like system will take over, like Rocky Linux or the CloudLinux option. In the next few months we will see. You can wait a bit, or get an alternative like Ubuntu or Debian, etc., or get CentOS 7, take your time to decide (4 years), and then migrate. CentOS Stream's rolling releases don't seem to be the best and most stable option for server environments. As I personally prefer CentOS, I would pick the most supported alternative, which at the moment seems to be Rocky Linux; its founder is one of the creators of the original CentOS... The name comes from his partner, who co-built CentOS and is no longer with us 😞

      For the CPU it is a combination of both. A core with a high clock will help on single-core tasks like backing up a database (if you use the traditional way to back it up), but it also depends on how new the CPU is and which instruction sets it has. Let us know the exact CPU models and we will let you know which one is better 🙂
  14. I am not saying that the Invision platform should monitor the server's disk space. I just had the idea to warn the user that the specific Invision task will not work and will kill the server/forum in the specific scenario above. Thanks @Charles
  15. Hello

      I think it is very important to add a check for the server's free disk space, and also check the total size of the files in S3 Cloud, before starting the transfer from S3 back to the server. If the available free disk space on the server is not enough for the files coming back from the S3 cloud, the disk will fill up and the server will die. Nothing works with 0 free space, and the user will be in a situation where he cannot revert that task; the only solution will be to ask the data center to add extra hard disks to the server. Until then the forum will be down...

      It doesn't seem hard to add that very useful check for the user when he tries to run that task. From a quick look I found:

      https://www.php.net/manual/en/function.disk-free-space.php

      For Linux hosts:

      $df = round(disk_free_space("/") / 1024 / 1024 / 1024);
      print("Free space: $df GB");

      Or in your case it sounds like you're running on Windows, so:

      $df = round(disk_free_space("C:") / 1024 / 1024 / 1024);
      print("Free space: $df GB");

      Some related info on how to get the size of the bucket as well:

      https://stackoverflow.com/questions/3910071/check-file-size-on-s3-without-downloading
      https://www.cloudsavvyit.com/1755/how-to-get-the-size-of-an-amazon-s3-bucket/
      https://serverfault.com/questions/84815/how-can-i-get-the-size-of-an-amazon-s3-bucket

      But I think that you know how to do it anyway 🙂 (A rough sketch combining the two checks follows below, after the last post.)

      Thanks!!!! Please add this asap as it is very important!
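
A rough, hedged sketch of the combined check described in post 15: disk_free_space() together with the official AWS SDK for PHP (getPaginator / ListObjectsV2) to total up the bucket size before the transfer starts. The bucket name, region, and credential setup here are illustrative assumptions, not Invision's actual implementation:

<?php
// Hypothetical pre-flight check before moving files from S3 back to local storage.
// Assumes the aws/aws-sdk-php package is installed and credentials are already configured.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = 'my-community-uploads';   // assumption: the bucket holding your uploads
$client = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

// Sum the size of every object in the bucket (paginated, so large buckets work too).
$bucketBytes = 0;
foreach ($client->getPaginator('ListObjectsV2', ['Bucket' => $bucket]) as $page) {
    foreach ($page['Contents'] ?? [] as $object) {
        $bucketBytes += $object['Size'];
    }
}

$freeBytes = disk_free_space('/');  // use a drive letter such as "C:" on Windows hosts

if ($bucketBytes >= $freeBytes) {
    // Refuse to start the transfer instead of filling the disk and killing the server.
    die(sprintf("Not enough free disk space: need %.1f GB, only %.1f GB free.\n",
        $bucketBytes / 1024 ** 3, $freeBytes / 1024 ** 3));
}

echo "Enough free disk space, safe to start moving files from S3 back to the server.\n";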
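
And for the Log4j workaround in post 2, a quick way to confirm that the new JVM flag was actually picked up after restarting Elasticsearch, assuming the node listens on localhost:9200 without authentication; the command prints the flag if it appears in the node's JVM input arguments and prints nothing otherwise:

curl -s localhost:9200/_nodes/jvm | grep -o 'formatMsgNoLookups=true'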