
imJexs

Clients
  • Posts

    83
  • Joined

  • Last visited

Profile Information

  • Gender
    Male
  • Location
    Austin, Texas, USA

Recent Profile Visitors

6,823 profile views

imJexs's Achievements

  1. Stumbled upon this while looking for this exact feature. In a world where companies are charging hundreds if not thousands of $$$ a month for authentication, having this supported by IPS would be amazing. Even if it's a standalone fee/service, I would love to see it supported at some point. Surely the groundwork is mostly done with OAuth2 implemented already. So many of the open-source/self-hosted alternatives are dated or way too complicated.
  2. I think it just comes down to how the replication is configured and what method is being used. I'm certainly no SME on database replication, but based on some searches it does appear that "no replication without a primary key" is indeed a thing in some configurations. It's a shame that DO chose to go that route when there are better solutions. Ultimately we chose not to modify IPS at all, as I'm sure it would cause more headaches than it's worth in the long run. Instead we moved our database to AWS Aurora and are running our IPS containers in AWS ECS.
  3. It was the cookie path that was being set incorrectly. Nginx was configured with server_name _;, which was then used by $_SERVER['SERVER_NAME'] in \IPS\Request::getCookiePath(), causing the cookie path to be set to "//domain.com" instead of "/". I was under the impression that Nginx automatically set the server name to the request's host when using "_", but that is incorrect. Simply setting the correct server_name, or manually passing the request's $host through as the server name, fixes it.
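     For anyone who hits the same thing, the relevant part of the config ends up looking roughly like this (the server name, socket path, and PHP version are placeholders for whatever your setup actually uses):

         server {
             listen 80;
             # Either set a real server_name instead of the catch-all "_" ...
             server_name domain.com;

             location ~ \.php$ {
                 include fastcgi_params;
                 # ... or explicitly pass the request's Host header through, so
                 # $_SERVER['SERVER_NAME'] matches what the browser sent and
                 # getCookiePath() builds "/" instead of "//domain.com".
                 fastcgi_param SERVER_NAME $host;
                 fastcgi_pass unix:/run/php/php8.0-fpm.sock;
             }
         }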
  4. I'm a doofus and didn't see the Advanced Self Hosting Support. If any mod sees this, please move this to https://invisioncommunity.com/forums/forum/524-advanced-self-hosting-assistance/
  5. This might be a bit of a long post as I want to include as much detail as possible. However, I'd greatly appreciate it if you took the time, as I'm completely stumped by this one. I know nginx isn't officially supported by IPS (even though it 100% most certainly should be - it's 2022), which is why this is in community support, but I'd love any ideas from IPS guys on this as well.
     Setup: I am in the middle of moving our website from AWS Elastic Beanstalk to Digital Ocean, running inside a Docker Swarm (maybe soon to be k8s, doesn't matter). On AWS I am running Apache/PHP 7.3.30, and on DO I'm running Nginx/PHP 8.0.13. Both web servers are now talking to the same managed MySQL database (8.0.26) and are using the same Redis cluster for caching as well as PHP sessions. Yes, I have triple-checked that PHP sessions and caching are working properly on both web servers.
     Problem: Whenever I am using the PHP 7.3.30 server, everything works exactly as expected. Been using this setup for ages. No issues. Whenever I switch over to Nginx/PHP 8.0.13, everything works as expected EXCEPT I cannot log in to the front end. However, logins to the ACP work fine (like, what?). Our site uses the Steam login handler with the Standard login method disabled. Even when I turn on the Standard login and try to use user/pass, I get redirected to website.com/?_fromLogin=1 but the user state doesn't actually change. The userbar still shows "Login or Sign In" and if I refresh I am not actually logged in.
     Investigation: I did some more investigating, and to make things even weirder, even just requesting the website root (mywebsite.com/) increments "core_members.failed_login_count" and adds a timestamp to the "core_members.failed_logins" column. So just refreshing the site does that. Then, after 1-2 login attempts, it locks the user's account with "too many failed login attempts".
     Conclusion: Exact same code base, exact same 3rd-party apps enabled (all written by me and PHP 8 compatible), same MySQL server, same Redis cache, same PHP session handler, but I can't sign in on PHP 8. I've pretty much determined it has to be either an nginx configuration error, like some sort of headers thing (grasping at straws here), or an issue with this IPS release and that version of PHP. I am VERY open to suggestions as I'm completely out of ideas. I've traced through the login process' code and didn't find anything that would lead to the symptoms I saw. 😕 Best, Jexs
  6. Hey, was wondering if you had any updates on getting private files working with DO Spaces? I'm in the middle of moving from S3 to DO Spaces and would love to be able to move our Downloads files (which must stay private) to Spaces as well. From my very brief investigation it seems like DO is mostly compatible with S3: it accepts v4 signatures, pre-signed URLs, etc. Seems like the only issue is that IPS uses "bucket-owner-read" for the "X-Amz-Acl" header, whereas DO only accepts "private". I'm curious if it could be as simple as intercepting those requests and swapping the header value out if it's a request to DO. I'll probably do some more testing later this week but was curious if you had any more insight.
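     Something along these lines is what I'm imagining - just a rough, untested sketch; the helper name is made up (it isn't part of IPS), and the swap would have to happen before the v4 signature is computed or the signature would no longer match:

         <?php
         // Hypothetical helper: relax the ACL header for DigitalOcean Spaces.
         // Spaces rejects "bucket-owner-read", so requests headed to
         // *.digitaloceanspaces.com get "private" instead.
         function adjustAclForSpaces( string $host, array $headers ): array
         {
             if ( str_ends_with( $host, 'digitaloceanspaces.com' )
                 && ( $headers['X-Amz-Acl'] ?? null ) === 'bucket-owner-read' )
             {
                 $headers['X-Amz-Acl'] = 'private';
             }
             return $headers;
         }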
  7. Yeah, it's certainly not ideal. Unfortunately this particular cloud provider requires primary keys for their replication. Maybe we'll revisit the planning board and see if there's a better option.
  8. Context: I am currently migrating our community to a new provider who requires a primary key on all tables for binary log replication purposes. It's a little frustrating and was very much unforeseen when planning this migration. Nonetheless, we're stuck with it and I've got to make it work.
     Question: There are currently 23 tables in Core, 3 in CMS, 1 in Forums, and 4 in Nexus that do not have a primary key set. From what I've seen looking through the source, it appears it would be safe to just add an auto-increment primary key column to all of these tables, but I'm curious if anyone has any experience with this or if there are any issues I'm not seeing right now.
     Future Request/Suggestion: Ideally, it would be great to see this updated in an official IPS release and set as a standard moving forward to require a primary key. I know it doesn't completely make sense for simple mapping tables, but it does help with replication and I think is justified. If I add a key/column to each of these tables, I'd prefer not to have to monitor each release for new tables without a key. 😅
     Tables without a PRIMARY or UNIQUE key:
       cms_database_fields_reciprocal_map
       cms_url_store
       core_acp_search_index
       core_attachments_map
       core_googleauth_used_codes
       core_search_index_tags
       core_sys_social_group_members
       core_tasks_log
       core_theme_settings_values
       core_view_updates
     Tables without a PRIMARY but with a UNIQUE key (could set the primary the same as the unique):
       cms_page_widget_areas
       core_acp_tab_order
       core_automatic_moderation_pending
       core_cache
       core_follow_count_cache
       core_item_markers
       core_item_member_map
       core_members_logins
       core_oauth_server_access_tokens
       core_oauth_server_authorization_codes
       core_post_before_registering
       core_reputation_leaderboard_history
       core_search_index_item_map
       core_security_answers
       core_tags_cache
       core_tags_perms
       forums_view_method
       nexus_customer_spend
       nexus_package_filters_map
       nexus_package_filters_values
       nexus_support_staff_dpt_order
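     What I have in mind is something like this - an untested sketch, where the added column name, the index name, and the column list are placeholders rather than the actual schema:

         -- Tables with no key at all: add a surrogate auto-increment primary key
         -- (the "map_id" name is arbitrary and assumes no existing column uses it).
         ALTER TABLE core_attachments_map
             ADD COLUMN map_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;

         -- Tables that already have a UNIQUE key: promote the existing unique index
         -- instead ("some_unique_index" and the columns stand in for whatever the
         -- table actually defines).
         ALTER TABLE core_item_member_map
             DROP INDEX some_unique_index,
             ADD PRIMARY KEY (col_a, col_b);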
  9. That makes sense. I'll just play it on the safe/recommended side and run an external service to trigger it remotely every minute. Thanks!
  10. Hey there! I'm looking for information regarding the frequency of the cronjob/task. For context, we're moving our site from AWS Elastic Beanstalk, where I was able to force the cronjob to only run on one server in the auto-scaling group. However, we're moving to a Docker Swarm infrastructure where this isn't quite as easy. So the question is: if I have, let's say, 6 instances of IPS running in the swarm, is it bad if they are all hitting the cron every minute? Should the cron only be called once a minute, or is once a minute the maximum recommended frequency? In this setup, we would be hitting the cronjob 6 times/minute (or really however many instances/replicas of the web server we have spun up). Obviously our alternative is to just set up a separate job/container that calls "core/interface/task/web.php" remotely. Maybe that's just the smarter thing to do regardless. Any insight into this would be appreciated. Cheers!
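      For reference, the external trigger I'm describing could be as simple as a one-line cron entry in a single separate container/box - the URL and key below are placeholders for whatever the ACP actually shows for the web-cron option:

          # Runs on ONE scheduler only, instead of a cron inside every replica.
          * * * * * curl -fsS "https://example.com/applications/core/interface/task/web.php?key=WEBCRON_KEY" > /dev/null 2>&1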
  11. Awesome! I can't wait for full release. 🙂 Cheers.
  12. Does the new `downloads/files/{id}/download` endpoint work with S3 storage? Currently, any files uploaded to Downloads that are stored in S3 need an access token created before they can be downloaded by a user. The existing endpoint `GET downloads/files/{id}` does not provide a proper URL to download from, as S3 gives "AccessDenied". Will the new endpoint provide a proper URL to actually be able to download the file from?
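      For clarity, by a "proper URL" I mean a pre-signed one that works even though the object itself is private - roughly this with the AWS SDK for PHP (bucket, key, region, and expiry are placeholders):

          <?php
          require 'vendor/autoload.php';

          $s3 = new Aws\S3\S3Client([
              'version' => 'latest',
              'region'  => 'us-east-1',
          ]);

          // Build a GetObject request for the private file...
          $cmd = $s3->getCommand('GetObject', [
              'Bucket' => 'my-downloads-bucket',
              'Key'    => 'downloads/files/some_file.zip',
          ]);

          // ...and sign it. The resulting URL downloads the file for 15 minutes,
          // after which S3 goes back to returning AccessDenied.
          $url = (string) $s3->createPresignedRequest($cmd, '+15 minutes')->getUri();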
  13. Hey @ehren. , I just wanted to check whether you have any sort of ETA for the 4.3 updates and whether you're already working on them. I know 4.3 is still in beta, but I'd love to be able to upgrade ASAP, which won't be possible while the search functionality is broken on the theme. Thank you for any updates. Love your work!
  14. This should be a standard feature, but I see it's not. Applications can add their own, but there is no way of editing, adding, or removing any of them? Why has this not been added before? Seems rather simple to me. Please add it for IPB 3.0.4! Especially since CCS allows you to have custom pages, just no way of getting to them, nice. ;)