
Skillshot

Clients
  • Posts: 12
  • Joined
  • Last visited


Skillshot's Achievements

  1. Besides the PHP version check, everything is alright (as far as the script is concerned). This happens on our current server (PHP 7.4) and our soon-to-be server (PHP 8.1). It looks like we have been attacked: there are more than 3,000 calendar entries with content in various fields that looks like SQL injection attempts and other garbage. I'll clean this up (a read-only sketch for spotting such rows follows the post list) and we'll see if everything is fine again.
  2. We are encountering various exceptions like:

         Exception: DateTime::__construct(): Failed to parse time string (-1-9-1) at position 4 (-): Double timezone specification

     The specific error varies, and I cannot say exactly when this started. In addition, we see an ever-rising number of "Submitting Data to IndexNow" tasks (a sketch for summarising the background queue follows the post list). Starting the cron script from the command line yields:

         Exception: Unknown or bad format (P-5D)
         in /srv/www/sites/forum/htdocs/applications/calendar/sources/Event/Event.php:764
         Stack trace:
         #0 /srv/www/sites/forum/htdocs/applications/calendar/sources/Event/Event.php(764): DateInterval->__construct()
         #1 /srv/www/sites/forum/htdocs/applications/calendar/sources/Event/Event.php(363): IPS\calendar\_Event::_findOccurances()
         #2 /srv/www/sites/forum/htdocs/applications/communitymap/extensions/communitymap/Mapmarkers/Calendar.php(224): IPS\calendar\_Event->findOccurrences()
         #3 /srv/www/sites/forum/htdocs/applications/communitymap/extensions/core/Queue/RebuildCache.php(119): IPS\communitymap\extensions\communitymap\Mapmarkers\_Calendar->getLocations()
         #4 /srv/www/sites/forum/htdocs/system/Task/Task.php(47): IPS\communitymap\extensions\core\Queue\_RebuildCache->run()
         #5 /srv/www/sites/forum/htdocs/applications/core/tasks/queue.php(43): IPS\_Task::runQueue()
         #6 /srv/www/sites/forum/htdocs/system/Task/Task.php(375): IPS\core\tasks\_queue->IPS\core\tasks\{closure}()
         #7 /srv/www/sites/forum/htdocs/applications/core/tasks/queue.php(55): IPS\_Task->runUntilTimeout()
         #8 /srv/www/sites/forum/htdocs/system/Task/Task.php(274): IPS\core\tasks\_queue->execute()
         #9 /srv/www/sites/forum/htdocs/system/Task/Task.php(237): IPS\_Task->run()
         #10 /srv/www/sites/forum/htdocs/applications/core/interface/task/task.php(58): IPS\_Task->runAndLog()

     As we are right in the middle of a planned migration, this is a real bummer; any help would be appreciated while we search for the cause ourselves. Thank you! (A minimal reproduction of the DateInterval error also follows the post list.)
  3. It seems that we had the exact same problem. We only had two entries in core_stream_subscriptions, which led to exactly the same odd behaviour. I saved the entries for further reference, but after removing them the forum seems to be back to speed.
  4. After several different problems we finally managed to complete the utf8mb4 upgrade and the upgrade to the latest board version. What still remains is that the scheduled maintenance tasks (via cron) hog the CPU and have very long runtimes:

         top - 19:18:22 up 30 days,  2:28,  9 users,  load average: 10.80, 10.14, 9.90
         Tasks: 397 total,   6 running, 301 sleeping,   1 stopped,   0 zombie
         %Cpu(s): 21.9 us,  4.6 sy,  0.7 ni, 62.8 id,  9.3 wa,  0.0 hi,  0.7 si,  0.0 st
         KiB Mem : 32799228 total,   251304 free, 22990724 used,  9557200 buff/cache
         KiB Swap: 33554428 total, 33036028 free,   518400 used.  9034408 avail Mem

           PID USER      PR  NI    VIRT    RES   SHR S  %CPU %MEM     TIME+ COMMAND
         18689 mysql     20   0 25.397g 0.019t 15632 S 156.2 62.4 149:39.61 /usr/sbin/mysqld
          6997 www-data  20   0  544948 222060 25340 S  75.0  0.7  59:21.98 /usr/bin/php -d memory_limit=-1 -d max_execution_time=0 /srv/www+
          7028 www-data  20   0  389092  71196 25256 R  75.0  0.2  59:20.20 /usr/bin/php -d memory_limit=-1 -d max_execution_time=0 /srv/www+
         12347 www-data  20   0  605152  60776 36040 R  75.0  0.2   0:03.24 /usr/sbin/apache2 -k start
         11783 www-data  39  19  417828 100796 25368 S  68.8  0.3   6:53.39 /usr/bin/php -d memory_limit=-1 -d max_execution_time=0 /srv/www+
         12058 www-data  39  19  362404  45348 25176 S  68.8  0.1   5:33.87 /usr/bin/php -d memory_limit=-1 -d max_execution_time=0 /srv/www+
         11607 www-data  39  19  362404  45660 25204 R  62.5  0.1  10:11.00 /usr/bin/php -d memory_limit=-1 -d max_execution_time=0 /srv/www+
         12655 www-data  20   0  592528  43996 31820 S  12.5  0.1   0:00.33 /usr/sbin/apache2 -k start

     Our board is getting unusable, and we see the same behaviour on newly provisioned nodes with NVMe disks. Any idea how to nail down the culprit (see the queue-summary sketch after the post list)? It looks like a lot of this has to do with stream subscriptions ...
  5. I will revert the changes for now. But this is clearly a usability flaw you should consider fixing, as you allow creating different storage configurations that use the same bucket with different bucket paths. The resulting configurations cannot be told apart in the selection drop-down, because the display name only uses the bucket name itself, not the path. Please consider the following patch:

         --- a/system/File/Amazon.php    2022-03-08 13:02:43.745541144 +0100
         +++ b/system/File/Amazon.php    2022-03-08 13:02:14.809124470 +0100
         @@ -153,7 +153,7 @@
           */
          public static function displayName( $settings )
          {
         -    return \IPS\Member::loggedIn()->language()->addToStack( 'filehandler_display_name', FALSE, array( 'sprintf' => array( \IPS\Member::loggedIn()->language()->addToStack('filehandler__Amazon'), $settings['bucket'] ) ) );
         +    return \IPS\Member::loggedIn()->language()->addToStack( 'filehandler_display_name', FALSE, array( 'sprintf' => array( \IPS\Member::loggedIn()->language()->addToStack('filehandler__Amazon'), $settings['bucket'] . ' - ' . $settings['bucket_path'] ) ) );
          }

          /* !File Handling */
  6. As stated in my earlier reply: the change in Amazon.php simply enables us to select the appropriate destination, because with your default of using just the bucket name as the display name it is impossible to distinguish storage configurations that use the same bucket but different paths within it. As for your other point: before we decided to give S3 a try, we had already tested locally with moving (in this case the calendar items) to a separate directory. Even back then we were forced to create a support ticket because the move got stuck several times. And for the final move of the data we are talking about nearly 1 TB; there should be some kind of CLI tool to facilitate moves that large. The connection errors are under investigation; nevertheless, timeouts on the destination should in no way break the whole process. There should simply be proper error handling and retry logic (a retry sketch follows the post list). Otherwise it will be nearly impossible to transfer that amount of small files if even a simple timeout aborts the move.
  7. The primary storage location is currently used for all the different kinds of assets (I'm not quite sure whether there were distinct locations when the board started), and there is no way to tell which files belong to what besides reading code and digging through the database. As the file-movement logic has to know which files to move, it should be possible to provide the end user with some kind of CLI to assist in moving large amounts of files manually (which would need a list of the files belonging to a certain plugin/extension). As a side note: we patched the display code for the storage configuration page, because with the default it is not possible to distinguish storage configurations that use the same S3 bucket with different bucket paths; the display name only uses the bucket name, so there is no way to tell the storage locations apart in the drop-down menus. We patched it so that the bucket path is part of the storage configuration's display name.
  8. The connections are purely local ones. There are no peaks in webserver access. If I follow the traffic via tcpdump, everything originates from localhost, so there is no direct external access to Elasticsearch.
  9. It seems like something is opening nearly 30,000 connections to the Elasticsearch backend without reusing existing connections (leaving a lot of connections in TIME_WAIT), peaking at around 500 req/s (a connection-reuse sketch follows the post list). The queries all look like:

         POST /content/_search HTTP/1.1
         Host: 127.0.0.1:9200
         User-Agent: Invision Community 4
         Accept: */*
         Content-Type: application/json
         Content-Length: 910

         {"query":{"bool":{"must":[],"must_not":[],"filter":[{"bool":{"should":[{"terms":{"index_class":["IPS\\core\\Statuses\\Status","IPS\\core\\Statuses\\Reply"]}},{"terms":{"index_class":["IPS\\forums\\Topic\\Post"]}},{"terms":{"index_class":["IPS\\calendar\\Event","IPS\\calendar\\Event\\Comment","IPS\\calendar\\Event\\Review"]}},{"terms":{"index_class":["IPS\\nexus\\Package\\Item","IPS\\nexus\\Package\\Review"]}},{"terms":{"index_class":["IPS\\cms\\Pages\\PageItem"]}},{"terms":{"index_class":["IPS\\cms\\Records1","IPS\\cms\\Records\\Comment1","IPS\\cms\\Records\\Review1"]}},{"terms":{"index_class":["IPS\\communitymap\\Markers","IPS\\communitymap\\Markers\\Comment","IPS\\communitymap\\Markers\\Review"]}}]}},{"match_none":{}},{"range":{"index_date_created":{"gt":0}}},{"terms":{"index_permissions":[3,"m96298","*"]}},{"term":{"index_hidden":0}}]}},"sort":[{"index_date_created":"desc"}],"from":0,"size":11}

     In the same interval, MySQL is peaking at 1,400 qps.
  10. As far as I can see, the errors are mostly from the stream subscription task(s):

          #0 /srv/www/sites/www.germanscooterforum.de/htdocs/system/Http/Request/Curl.php(422): IPS\Http\Request\_Curl->_execute()
          #1 /srv/www/sites/www.germanscooterforum.de/htdocs/system/Http/Request/Curl.php(298): IPS\Http\Request\_Curl->_executeAndFollowRedirects()
          #2 /srv/www/sites/www.germanscooterforum.de/htdocs/system/Content/Search/Elastic/Query.php(1235): IPS\Http\Request\_Curl->get()
          #3 /srv/www/sites/www.germanscooterforum.de/htdocs/applications/core/sources/Stream/Subscription.php(145): IPS\Content\Search\Elastic\_Query->search()
          #4 /srv/www/sites/www.germanscooterforum.de/htdocs/applications/core/sources/Stream/Subscription.php(90): IPS\core\Stream\_Subscription->getContentForStream()
          #5 /srv/www/sites/www.germanscooterforum.de/htdocs/applications/core/tasks/weeklyStreamSubscriptions.php(40): IPS\core\Stream\_Subscription::sendBatch()
          #6 /srv/www/sites/www.germanscooterforum.de/htdocs/system/Task/Task.php(367): IPS\core\tasks\_weeklyStreamSubscriptions->IPS\core\tasks\{closure}()
          #7 /srv/www/sites/www.germanscooterforum.de/htdocs/applications/core/tasks/weeklyStreamSubscriptions.php(41): IPS\_Task->runUntilTimeout()
          #8 /srv/www/sites/www.germanscooterforum.de/htdocs/system/Task/Task.php(266): IPS\core\tasks\_weeklyStreamSubscriptions->execute()
          #9 /srv/www/sites/www.germanscooterforum.de/htdocs/system/Task/Task.php(229): IPS\_Task->run()
          #10 /srv/www/sites/www.germanscooterforum.de/htdocs/applications/core/interface/task/task.php(58): IPS\_Task->runAndLog()
          #11 {main}
  11. We first did a migration for the calendar events, which went OK-ish (with some minor problems). That migration had completed and there were no tasks pending. When we then switched the storage location for custom emojis, including the "move data" option, an error was displayed stating that another migration was still in progress and that it was therefore not possible to switch locations. Despite that, the location was switched for new posts, leading to broken images because the files had not been transferred to the new location. Is there a way to manually re-trigger migrations in such a case? Is there some kind of CLI tooling available to run such tasks manually without involving the frontend? Is there an easy way, other than poking around directly in the database, to generate a list of files (and locations) that a specific move would touch?
  12. There are no errors in the Elasticsearch log, the Elasticsearch instance has status "green", and querying /content/_search from the command line works flawlessly. I don't see any reason why the IPB instance should not be able to talk to the local Elasticsearch instance. If I run tcpdump I can see tons of these queries:

          {"query":{"bool":{"must":[],"must_not":[],"filter":[{"bool":{"should":[{"terms":{"index_class":["IPS\\core\\Statuses\\Status","IPS\\core\\Statuses\\Reply"]}},{"terms":{"index_class":["IPS\\forums\\Topic\\Post"]}},{"terms":{"index_class":["IPS\\calendar\\Event","IPS\\calendar\\Event\\Comment","IPS\\calendar\\Event\\Review"]}},{"terms":{"index_class":["IPS\\nexus\\Package\\Item","IPS\\nexus\\Package\\Review"]}},{"terms":{"index_class":["IPS\\cms\\Pages\\PageItem"]}},{"terms":{"index_class":["IPS\\cms\\Records1","IPS\\cms\\Records\\Comment1","IPS\\cms\\Records\\Review1"]}},{"terms":{"index_class":["IPS\\communitymap\\Markers","IPS\\communitymap\\Markers\\Comment","IPS\\communitymap\\Markers\\Review"]}}]}},{"match_none":{}},{"range":{"index_date_created":{"gt":0}}},{"terms":{"index_permissions":[3,"m96298","*"]}},{"term":{"index_hidden":0}}]}},"sort":[{"index_date_created":"desc"}],"from":0,"size":11}

      with an answer of HTTP/1.1 200 OK and a body of

          {"took":0,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":0,"relation":"eq"},"max_score":null,"hits":[]}}

      so the Elasticsearch instance is clearly working.
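
Regarding post 1: a minimal, read-only sketch for spotting injected calendar rows before cleaning them up. The table and column names (calendar_events, event_id, event_title, event_content), the DSN/credentials and the marker strings are assumptions for a default IPS 4 schema without a table prefix; verify them against your own install before relying on the output.

    <?php
    // Read-only: list calendar rows whose content contains typical injection
    // markers so they can be reviewed before deletion.
    // ASSUMPTIONS: default IPS 4 table/column names (calendar_events, event_id,
    // event_title, event_content), no table prefix, placeholder credentials.
    $pdo = new PDO('mysql:host=localhost;dbname=forum;charset=utf8mb4', 'user', 'pass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $markers = ['UNION SELECT', 'information_schema', 'sleep(', '<script', 'base64_decode'];

    $sql = 'SELECT event_id, event_title FROM calendar_events WHERE '
         . implode(' OR ', array_fill(0, count($markers), 'event_content LIKE ?'));

    $stmt = $pdo->prepare($sql);
    $stmt->execute(array_map(fn ($m) => '%' . $m . '%', $markers));

    foreach ($stmt as $row) {
        printf("%d\t%s\n", $row['event_id'], $row['event_title']);
    }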
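
Regarding post 2: a minimal sketch reproducing the "Unknown or bad format (P-5D)" exception and guarding against it. ISO 8601 durations do not allow a sign inside the designators, so a corrupted recurrence value such as P-5D makes the DateInterval constructor throw. The makeInterval() helper and its validation regex are purely illustrative and not part of the suite.

    <?php
    // Why the task crashes: a sign inside an ISO 8601 duration is invalid,
    // so a corrupted value like "P-5D" throws immediately.
    try {
        new DateInterval('P-5D');
    } catch (Exception $e) {
        echo $e->getMessage(), PHP_EOL; // reports "Unknown or bad format (P-5D)"
    }

    // Illustrative guard (NOT suite code): validate a duration string coming
    // from untrusted event data before constructing the interval.
    function makeInterval(string $spec): ?DateInterval
    {
        // Rough ISO 8601 duration check; adjust to the formats you actually expect.
        if ($spec === 'P' || !preg_match('/^P(\d+Y)?(\d+M)?(\d+W)?(\d+D)?(T(?=\d)(\d+H)?(\d+M)?(\d+S)?)?$/', $spec)) {
            return null; // reject garbage such as "P-5D" instead of throwing
        }
        return new DateInterval($spec);
    }

    var_dump(makeInterval('P5D'));  // DateInterval object
    var_dump(makeInterval('P-5D')); // NULL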
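
Regarding posts 2 and 4: a sketch that summarises what is sitting in the background queue, to see which task (for example the IndexNow submissions or a cache rebuild) keeps piling up. The core_queue table with app and key columns, the DSN/credentials and the absence of a table prefix are assumptions about a default IPS 4 install; adjust them to your schema.

    <?php
    // Summarise the background queue: how many pending entries each app/task
    // key has, to see what keeps being re-queued.
    // ASSUMPTIONS: IPS 4 core_queue table with app and key columns, no table
    // prefix, placeholder credentials.
    $pdo = new PDO('mysql:host=localhost;dbname=forum;charset=utf8mb4', 'user', 'pass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $rows = $pdo->query('SELECT app, `key`, COUNT(*) AS entries FROM core_queue GROUP BY app, `key` ORDER BY entries DESC');

    foreach ($rows as $row) {
        printf("%-15s %-40s %d\n", $row['app'], $row['key'], $row['entries']);
    }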
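
Regarding post 6: a generic sketch of the retry behaviour asked for there, retrying a single file transfer a few times with a growing delay instead of aborting the whole move on the first timeout. The $transfer callable and the exception type are hypothetical stand-ins; the real move logic lives inside the suite's storage handlers.

    <?php
    // Generic retry-with-backoff wrapper: retry one file transfer a few times
    // instead of letting a single timeout abort the whole storage move.
    // $transfer is a hypothetical callable supplied by the caller.
    function transferWithRetry(callable $transfer, int $attempts = 5, int $baseDelaySeconds = 2): void
    {
        for ($try = 1; $try <= $attempts; $try++) {
            try {
                $transfer();
                return; // success, stop retrying
            } catch (RuntimeException $e) {
                if ($try === $attempts) {
                    throw $e; // give up only after the final attempt
                }
                sleep($baseDelaySeconds * $try); // simple linear backoff
            }
        }
    }

    // Usage sketch: wrap the per-file copy in the retry helper.
    transferWithRetry(function (): void {
        // copy one file to the destination here; throw RuntimeException on timeout
    });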
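
Regarding post 9: a sketch of the connection-reuse point. Issuing many searches over one curl handle keeps a single TCP connection to Elasticsearch alive via HTTP keep-alive, while creating a fresh handle per query leaves a new socket in TIME_WAIT every time. The host and index match the captured traffic; the query body is just a placeholder, not the suite's real query.

    <?php
    // One reused curl handle: repeated searches travel over the same TCP
    // connection instead of opening a new socket (and a new TIME_WAIT entry)
    // for every request.
    $ch = curl_init();
    curl_setopt_array($ch, [
        CURLOPT_URL            => 'http://127.0.0.1:9200/content/_search',
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
    ]);

    for ($i = 0; $i < 100; $i++) {
        curl_setopt($ch, CURLOPT_POSTFIELDS, '{"query":{"match_all":{}},"size":1}');
        curl_exec($ch); // same handle -> connection is reused across requests
    }
    curl_close($ch);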