One of my communities is disappearing from Google


Solved by usmf


I've got two IPS communities; one is about four years old and the other is fifteen years old. Starting in the last week or two, the older community has not been showing up in Google searches. No one has been doing any kind of AdminCP or server maintenance or anything like that -- is this a known bug with the SEO in the software at this time?

This is what happens when searching to see what is indexed on the site:

https://www.google.com/search?q=site%3Ausmilitariaforum.com&source=hp&ei=11ZLYvazOMCZptQPu-2sQA&iflsig=AHkkrS4AAAAAYktk5zWAUz1WxvrPre0Tl7PEssQchcq6&ved=0ahUKEwi2-4Cvofv2AhXAjIkEHbs2CwgQ4dUDCAk&uact=5&oq=site%3Ausmilitariaforum.com&gs_lcp=Cgdnd3Mtd2l6EAM6CAgAELEDEIMBOhEILhCABBCxAxCDARDHARDRAzoLCAAQgAQQsQMQgwE6BQgAEIAEOg4ILhCABBCxAxDHARDRAzoICAAQgAQQsQM6CAguELEDEIMBOgUILhCABDoICC4QgAQQsQM6CwguEIAEELEDEIMBOhEILhCABBCxAxCDARDHARCjAjoLCC4QgAQQxwEQ0QM6CwguEIAEEMcBEK8BOg4ILhCABBCxAxDHARCjAjoRCC4QgAQQsQMQxwEQ0QMQ1AJQAFj3B2C-CmgAcAB4AIABbIgBxQSSAQMzLjOYAQCgAQGgAQI&sclient=gws-wiz

 

Images seem to be showing fine, but no pages.

Bing.com seems to be working fine for both communities.


Are you using the optimized robots.txt setting in the ACP? Have you confirmed your sitemap is configured and submitted to Google?

What does Google Search Console say?

Also… recent versions of the IPB software have improved SEO by telling Google not to crawl certain low-value pages, such as profiles. That way, more crawl budget can be spent on valuable content such as topics.
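For anyone curious, this is roughly the kind of thing an optimized robots.txt does -- a hypothetical sketch with illustrative paths and a placeholder domain, not the exact file the software generates:

    User-agent: *
    # Keep crawlers away from low-value, near-duplicate pages so the
    # crawl budget goes to topics instead.
    Disallow: /profile/
    Disallow: /search/
    Disallow: /login/

    # Point crawlers at the sitemap (URL is a placeholder).
    Sitemap: https://example.com/sitemap.php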


Google can remove pages from time to time based on their algorithm. There is not a bug in the software itself which would cause all pages to be dropped from Google, or we would have some pretty angry customers on our hands here 🙂.

Looking at your community, I am not seeing anything obvious. I would suggest taking a look at Google Webmasters to see if there is any insight into what you're seeing. I do see that your robots.txt file is blocking, from your root, a user agent I am unfamiliar with; that shouldn't be an issue, but you'll want to test it in Google Webmasters as well.
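If you want a quick local sanity check before reaching for Google's tools, Python's standard urllib.robotparser can report whether a given user agent is allowed to fetch a URL. A minimal sketch (the domain and paths are placeholders, and Google's own parser groups rules slightly differently, so treat Google's tester as the final word):

    from urllib.robotparser import RobotFileParser

    # Load the live robots.txt file.
    rp = RobotFileParser("https://www.example.com/robots.txt")
    rp.read()

    # Ask whether Googlebot may fetch a couple of representative URLs.
    for path in ("/", "/topic/123-example/"):
        url = "https://www.example.com" + path
        verdict = "allowed" if rp.can_fetch("Googlebot", url) else "BLOCKED"
        print(path, "->", verdict)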


I am checking in now with the server manager about the Webmaster Console, as he's the only one handling that.

The Crawl Management setting in the AdminCP is set to "Invision Community optimized."

Thanks so much! I thought the SEO in IPS was so much better with some of the recent updates, which is why I wanted to check in on this.


  • Solution

Following up on this -- it looks like the site was hit with some hacking on April 1st, which seems to have included an altered robots.txt file. It's all been replaced now and everything seems fine. Thanks for the advice!


What was changed in your robots.txt file?

We had a similar issue in Google this week, but our robots.txt file hasn't changed at all. It looks like the way Google interprets it has, though, so all of a sudden Google believed it was being blocked.

I think it's because we had a User-agent: * rule to set Crawl-delay: 1. Because Google ignores the Crawl-delay directive, it skipped that line, which meant the User-agent: * line was grouped with a rule further down that blocks one particular bot, so Google assumed it was meant to be blocked as well.

I've confirmed this behaviour in Google's robots.txt tester. I can only assume the way Google parses robots.txt files has changed recently and this behaviour has started occurring. It's likely to affect only people with a particular robots.txt format.
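To illustrate the grouping (a hypothetical reconstruction, not our exact file -- "SomeBadBot" is a placeholder): because Google drops directives it doesn't recognise, a layout like this collapses into a single group in its parser:

    User-agent: *
    Crawl-delay: 1
    # Google ignores Crawl-delay, so to Google this group has no rules yet...

    User-agent: SomeBadBot
    Disallow: /
    # ...and the two User-agent lines are merged into one group, so the
    # Disallow: / applies to * (i.e. Googlebot) as well.

One possible fix is to give the * group a rule Google does recognise, e.g. an explicit empty Disallow, so the groups stay separate:

    User-agent: *
    Crawl-delay: 1
    Disallow:

    User-agent: SomeBadBot
    Disallow: /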


2 hours ago, Stargazers Lounge said:

What was changed in your robots.txt file?

Some of that may have been happening, though it wasn't the main problem in the end. We do have a crawl delay, but we always have. Actually, lines had been added to block certain robots. I'm not the one who handles that part of the website, but the file did have certain names blocked. All of that was removed and the file was put back to its normal lines (which have been in place for over ten years), and after three days everything is going back to normal. Whew!

