
Matt
Management
Everything posted by Matt
-
How to create a French translation?
We often have what look to be duplicate strings, but the context can be very different, and in the past we have had requests to ensure that separate translations for each context are available. What makes sense for us in English, German and French doesn't in Korean, Chinese, etc.
-
Community Buzz: November.2021.1
🟢 Scaling your community requires overcoming many barriers and learning new ways of working with your community. Rosie explores this in her blog: How we are at the small scale is who we are at the large scale. "In community, we often say to do things that don't scale. To start small. To get the foundations right. To trust that how we are and what we do is what the community becomes, on a larger scale. Our behaviour, our intentions, our alignment, and our goals all influence what the community can become."
🧠 What we think: There is no right or wrong way to scale your community from its humble beginnings, and it can be a lot of hard work, but that doesn't mean we should change our core values and how we approach helping others.
🟢 Should you respond to questions before your members? is a question explored by Richard at Feverbee. "If you (the community manager) respond to a question in a community, other members are less likely to respond. This makes it harder for top members to earn points and feel a sense of influence. But if you don’t respond to a question in a community, it can linger and look bad. It also means the person asking a question is waiting for a response and becoming increasingly frustrated."
🧠 What we think: There are certain areas where you need your team to lead. Right here on this forum we want to provide the best service for our customers, so our support team are active and quick to reply to all questions. There are other community-led sections that definitely benefit from allowing time for other members to reply and share their knowledge. It's a good feeling helping others.
🟢 CMX explores how to move your community online. Much of this is great advice for anyone considering moving platform (to Invision Community, right?). "Christiana recommends viewing community migration as a process that requires patience: “this is not a race meant to be run fast. We are changing the mindset of the people in our ecosystem”."
🧠 What we think: Patience is definitely key when moving platforms. The sooner you start engaging with your own community and explaining the reasons for the move and the benefits it'll bring, the easier it will be.
🟢 Michelle can't find the bathroom when at a party, which inspires a blog on 5 secrets to community onboarding. "Walking into a party without your host can feel confusing, alienating, and frustrating. And for your customers, joining a new community without onboarding is just as bad."
🧠 What we think: Onboarding is critical to your community's success. New members can often feel lost and unsure where to start. It can be intimidating in real life to enter a room full of people that know each other, and this is true in the online space too.
🎧 Podcast: What makes a community a home? Patrick explores this by interviewing members of his own community, which opened 20 years ago and is still going strong.
🧠 What we think: We love hearing about long-established communities that are still thriving, and hearing how those early online relationships shaped people's lives.
-
Hump Day: Facebook name-change may be on the horizon
-
Pages (CMS) > Field > Upload > file size limit
Yes, that's a sensible idea. I'll make a note.
-
Gallery album thumbs in Steam results and pic ordering
Appreciate the feedback, I hear you loud and clear.
-
Improvements to Badges, Ranks and Rules
Appreciate the ideas, thanks!
-
Google Core web vitals
Yes, we do monitor Core Web Vitals and have plans to improve performance.
-
Hump Day: we're on 4.6.8 Beta 1 🎉
-
4.6.8
Our November release contains over one hundred bug fixes and improvements, including:
SEO improvements with improved crawl efficiency
New achievement actions for Commerce and Downloads
Achievement ranks and points added to the member CSV export
Achievement filters added for bulk mail and group promotion
New REST API endpoints for reporting and reacting to content
Audio files now play in-browser
New emails for when a new rank or badge is earned
JSON-LD improvement for Pages and Gallery
New statistic graphs for many areas including: moderator activity, deleted content, reports, warnings, follows, member preferences, spam defense, QA topics, solved topics by forum and achievement badges by member or member group.
-
Pages Block. Refresh one time per day. How?
It really depends on how you have structured your database and code. There are areas of the software where we manage caching independently of the rest of the suite, and when we do this, we typically store the data like so:

\IPS\Data\Store::i()->yourDataKey = [ 'time' => time(), 'data' => [ ... your data here ... ] ];

Then when reading, you can check 'time' against the current time and make a decision as to whether to use the data or refresh it. For an example of this, check out /system/Widget/Widget.php, around line 879.
-
Pages Block. Refresh one time per day. How?
You'll need to do the caching/refresh logic inside the block itself if you want more control over how long the item is cached for.
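As a rough sketch of the pattern described above, a block's PHP could cache its data and rebuild it at most once per day along these lines. The store key myBlockData, the buildExpensiveData() helper and the one-day lifetime are all invented for this example; only the \IPS\Data\Store 'time'/'data' convention comes from the replies above.

<?php
// Hypothetical sketch: refresh a block's cached data at most once per day.
// 'myBlockData' and buildExpensiveData() are placeholders, not part of the suite.

$lifetime = 86400; // one day, in seconds

if ( isset( \IPS\Data\Store::i()->myBlockData ) AND ( time() - \IPS\Data\Store::i()->myBlockData['time'] ) < $lifetime )
{
	/* The stored copy is less than a day old, so reuse it */
	$data = \IPS\Data\Store::i()->myBlockData['data'];
}
else
{
	/* No stored copy, or it is older than a day: rebuild and store it with a fresh timestamp */
	$data = buildExpensiveData();
	\IPS\Data\Store::i()->myBlockData = [ 'time' => time(), 'data' => $data ];
}

This is the same check-the-timestamp idea used by /system/Widget/Widget.php mentioned above, just applied inside the block itself.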
-
Robots.txt suggestions
I think if you have tags disabled, it's fine to disallow those. Notifications, checkout and subscription purchases are all fine to omit. I'm unclear what Google would index with 'index.php?*', but this is likely fine as long as you have rewritten friendly URLs turned on. I'd need to audit to see what 'app=*' would disallow; generally it should be fine, but there may be some functionality that doesn't have a friendly URL. do=download and do=email are both fine to disallow. I'll tweak the robots.txt file to include some of those.
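Purely as an illustration of what those additions could look like, a fragment such as the one below might be added to the existing User-Agent: * group of the file. The exact paths depend on your friendly URL configuration, so treat this as a sketch rather than a recommendation.

# Illustrative additions only - adjust to match your own URL structure
Disallow: /*?do=download
Disallow: /*?do=email
# Only worth adding if tags are disabled on your community
Disallow: /tags/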
-
Hump Day: Facebook name-change may be on the horizon
Same as my 13-year-old. He thinks of Facebook as some ancient relic washed up on the shore. He is either in PlayStation Chat, TikTok or WhatsApp.
-
Hump Day: Facebook name-change may be on the horizon
I agree. My eldest son is 13 and he doesn't want anything to do with it. It probably has less than a decade left in its current form.
-
Hump Day: Facebook name-change may be on the horizon
I hope Facebook do something interesting or at least stimulate a new market. I don't think Facebook IS evil, I just think the platform is not fit for the number of active users they have.
-
Hump Day: Facebook name-change may be on the horizon
This is similar to when Google created Alphabet to enable them to expand into other markets. Clearly Facebook understand that it is past its peak and can now only decline, so it makes sense for them to look at where they can go next. I'm not overly keen on living in a Facebook universe though.
-
seo suggestion - removal of &do=getLastComment
As Stuart mentions, a permalink has to be dynamic to account for issues arising when deleting posts (the page may change) or merging topics, etc. However, these links are of little use to guests, so they have been removed in November's release.
-
Google - Invalid object type for field "itemReviewed"
This should be fixed in our November release.
-
New notification when content is approved in 4.6
Hi! Apologies for the delay. This is a broader issue than just this one notification. Essentially, once you have set your notifications up, any new notification types are not picked up by your existing settings unless you reset them in the Admin CP. We are working on a solution, but it's not a simple fix and we are still working out the best route to take.
-
Push notifications not really working
Have you given your browsers permission to use push notifications? There is a little message at the footer of your notification settings page.
-
Bug with UI.Dialog
Without seeing the surrounding code, it's hard to diagnose. It's worth noting that when you close a dialog, it just gets hidden. You will need to call dialog.remove() if you want to remove it from the DOM. This may be the issue you are seeing.
-
SEO: Improving crawling efficiency
No matter how good your content is, how accurate your keywords are or how precise your microdata is, inefficient crawling reduces the number of pages Google will read and store from your site.

Search engines need to look at and store as many of the pages that exist on the internet as possible. There are currently an estimated 4.5 billion active web pages. That's a lot of work for Google. It cannot look at and store every page, so it needs to decide what to keep and how long it will spend on your site indexing pages.

Right now, Invision Community is not very good at helping Google understand what is important and how to get there quickly. This blog article runs through the changes we've made to improve crawling efficiency dramatically, starting with Invision Community 4.6.8, our November release.

The short version
This entry will get a little technical. The short version is that we remove a lot of pages from Google's view, including user profiles and the filters that create faceted pages, and we remove a lot of redirect links, reducing the crawl depth and the volume of thin content of little value. Instead, we want Google to focus wholly on topics, posts and other key user-generated content.

Let's now take a deep dive into what crawl budget is, the current problem, the solution, and finally look at a before and after analysis. Note, I use the terms "Google" and "search engines" interchangeably. I know that there are many wonderful search engines available, but most people understand what Google is and does.

Crawl depth and budget
In terms of crawl efficiency, there are two metrics to think about: crawl depth and crawl budget. The crawl budget is the number of links Google (and other search engines) will spider per day. The time spent on your site and the number of links examined depend on multiple factors, including site age, site freshness and more. For example, Google may choose to look at fewer than 100 links per day from your site, whereas Twitter may see hundreds of thousands of links indexed per day.

Crawl depth is essentially how many links Google has to follow to index a page. The fewer links needed to get to a page, the better. Generally speaking, Google will reduce indexing of links that are more than 5 to 6 clicks deep.

The current problem #1: Crawl depth
A community generates a lot of linked content. Many of these links, such as permalinks to specific posts and redirects that scroll to new posts in a topic, are very useful for logged-in members but less so for spiders. These links are easy to spot; just look for "&do=getNewComment" or "&do=getLastComment" in the URL. Indeed, even guests would struggle to use these convenience links given the lack of unread tracking until logged in. Although they offer no clear advantage to guests and search engines, they are prolific, and following them results in a redirect, which increases the crawl depth for content such as topics.

The current problem #2: Crawl budget and faceted content
A single user profile page can have around 150 redirect links to existing content. User profiles are linked from many pages. A single page of a topic will have around 25 links to user profiles. That's potentially 3,750 links Google has to crawl before deciding if any of it should be stored. Even sites with a healthy crawl budget will see a lot of their budget eaten up by links that add nothing new to the search index. These links are also very deep into the site, adding to the overall average crawl depth, which can signal search engines to reduce your crawl budget.
Filters are a valuable tool for sorting lists of data in particular ways. For example, when viewing a list of topics, you can filter by the number of replies or when the topic was created. Unfortunately, these filters are a problem for search engines as they create faceted navigation, which creates duplicate pages.

The solution
There is a straightforward solution to all of the problems outlined above. We can ask that Google avoids indexing certain pages. We can help by using a mix of hints and directives to ensure pages without valuable content are ignored, and by reducing the number of links needed to get to the content.

We have used "noindex" in the past, but this still eats up the crawl budget, as Google has to crawl the page to learn we do not want it stored in the index. Fortunately, Google has a hint directive called "nofollow", which you can apply in the <a href> code that wraps a link (a short illustrative example of this markup appears at the end of this article). This sends a strong hint that the link should not be read at all. However, Google may wish to follow it anyway, which means that we need to use a special file that contains firm instructions for Google on what to follow and index.

This file is called robots.txt. We can use this file to write rules to ensure search engines don't waste their valuable time looking at links that do not have valuable content, that create faceted navigational issues or that lead to a redirect. Invision Community will now create a dynamic robots.txt file with rules optimised for your community, or you can create custom rules if you prefer.

The new robots.txt generator in Invision Community

Analysis: Before and after
I took a benchmark crawl using a popular SEO site audit tool of my test community, which has 50 members and around 20,000 posts, most of which were populated from RSS feeds, so they have actual content, including links, etc. There are approximately 5,000 topics visible to guests. Once I had implemented the "nofollow" changes, removed a lot of the redirect links for guests and added an optimised robots.txt file, I completed another crawl.

Let's compare the data from before and after. First up, the raw numbers show a stark difference. Before our changes, the audit tool crawled 176,175 links, of which nearly 23% were redirect links. After, just 6,389 links were crawled, with only 0.4% being redirection links. This is a dramatic reduction in both crawl budget and crawl depth. Simply by guiding Google away from thin content like profiles, leaderboards, online lists and redirect links, we can ask it to focus on content such as topics and posts.

Note: You may notice a large drop in "Blocked by Robots.txt" in the 'after' crawl despite using a robots.txt for the first time. The calculation here also includes sharer images and other external links, which are blocked by those sites' robots.txt files. I added nofollow to the external links for the 'after' crawl so they were not fetched and then blocked externally.

As we can see in the 'before' crawl, the crawl depth has a low peak between 5 and 7 levels deep, with a strong peak at 10+. After, the peak crawl depth is just 3. This will send a strong signal to Google that your site is optimised and worth crawling more often.

Let's look at a crawl visualisation from before we made these changes. It's easy to see how most content was found via table filters, which led to a redirect (the red dots), dramatically increasing crawl depth and reducing crawl efficiency.
Compare that with the after, which shows a much more ordered crawl, with all content discoverable as expected and without any red dots indicating redirects.

Conclusion
SEO is a multi-faceted discipline. In the past, we have focused on ensuring we send the correct headers, use the correct microdata such as JSON-LD and optimise meta tags. These are all vital parts of ensuring your site is optimised for crawling. However, as we can see in this blog, without focusing on the crawl budget and crawl efficiency, even the most accurately presented content is wasted if it is not discovered and added to the search index.

These simple changes will offer considerable advantages to how Google and other search engines spider your site. The features and changes outlined in this blog will be available in our November release, which will be Invision Community 4.6.8.
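As the purely illustrative example promised above, a convenience redirect link carrying the "nofollow" hint could look like the markup below. The topic URL is invented for this sketch; only the ?do=getLastComment parameter and the rel="nofollow" attribute itself come from the article.

<!-- Illustrative only: hint to search engines not to follow this convenience redirect -->
<a href="/topic/123-example-topic/?do=getLastComment" rel="nofollow">Go to the last comment</a>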
-
Moderate a user's publications?
This effectively stops them from posting at all.
-
(Pages) How to use page title
The bug here is that no. 4 should use the page title. When you choose not to use categories, it will use the page details in both the H1 and the page title. I do agree that it's a bit scrappy, and we will clarify this in the UI at a later date.
-
SEO - Robots.txt
What is a robots.txt file?
When Google or other search engines come to your site to read and store the content in their search index, they will look for a special file called robots.txt. This file is a set of instructions telling search engines where they can and cannot crawl content. We can use these rules to ensure that search engines don't waste their time looking at links that do not have valuable content and avoid links that produce faceted content.

Why is this important?
Search engines need to look at and store as many of the pages that exist on the internet as possible. There are currently an estimated 4.5 billion active web pages. That's a lot of work for Google. It cannot look at and store every single page, so it needs to decide what to keep and how long it will spend on your site indexing pages. This is called a crawl budget. How many pages a day Google will index depends on many factors, including how fresh the site is, how much content you have and how popular your site is. Some websites will have Google index as few as 30 links a day. We want every link to count and not waste Google's time.

What does the suggested Robots.txt file do?
The Invision Community optimised rules exclude site areas that contain no unique content but instead redirect to existing content, such as the leaderboard and the default activity stream. Also excluded are areas such as the privacy policy, cookie policy, log in and register pages and so on. Submit buttons and filters are also excluded to prevent faceted pages. Finally, user profiles are excluded, as these offer little valuable content for Google but contain around 150 redirect links. Given that Google has mere seconds on your site, these links to content that exists elsewhere eat up your crawl budget quickly.

What is the suggested Robots.txt file?
Here is the content of the suggested Robots.txt file. Depending on your configuration, Invision Community can serve this automatically. If your community is inside a directory, you will need to apply it to the root of your site manually. So, for example, if your community was at /home/site/public_html/community/, you would need to create this robots.txt file and add it to /home/site/public_html. The Admin CP will guide you through this.

# Rules for Invision Community (https://invisioncommunity.com)
User-Agent: *

# Block pages with no unique content
Disallow: /startTopic/
Disallow: /discover/unread/
Disallow: /markallread/
Disallow: /staff/
Disallow: /cookie/
Disallow: /online/
Disallow: /discover/
Disallow: /leaderboard/
Disallow: /search/
Disallow: /tags/
Disallow: /*?advancedSearchForm=
Disallow: /register/
Disallow: /lostpassword/
Disallow: /login/

# Block faceted pages and 301 redirect pages
Disallow: /*?sortby=
Disallow: /*?filter=
Disallow: /*?tab=
Disallow: /*?do=
Disallow: /*ref=
Disallow: /*?forumId*
Disallow: /*?&controller=embed

# Sitemap URL
Sitemap: https://www.yourURLHere.com/sitemap.php

*Note: if you are copying this file, you may need to add the path name and correct the sitemap URL.
-
4.6.8