Invision Community AI can now automatically detect and hold images not suitable for your community.
Our new Image Scanner and Discoverability tool, built into the newly launched Smart Community section of the ACP for clients on select standard plans, has immense moderation power.
This new AI feature scans images uploaded by a member, detects what the image contains, and then decides whether it is appropriate to share within the community.
If the AI believes the image contains anything adult, suggestive or racy, visually disturbing, or violent, it will either hold the image for moderation or reject it altogether.
Should the image meet the approval requirements and get posted, it is labeled with what it “could contain.” These terms can optionally be shown when hovering over the image.
These keywords also allow the image to appear in search results.
In this example, I searched for the word “apple,” and results included a photo that @Matt posted of an apple.
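As a rough illustration of how this could work, here is a minimal Python sketch in which each approved image keeps its detected labels, and a search query matches any image whose labels include the query term. The record structure and the search_images helper are hypothetical, purely for illustration, and not Invision Community's actual search implementation.

```python
# Hypothetical records: each approved image keeps the labels the scanner returned.
images = [
    {"file": "orchard.jpg", "labels": ["apple", "fruit", "tree"]},
    {"file": "beach.jpg", "labels": ["sand", "sea", "sky"]},
]

def search_images(query: str, records: list[dict]) -> list[str]:
    """Return filenames whose detected labels include the search term."""
    term = query.strip().lower()
    return [
        record["file"]
        for record in records
        if term in (label.lower() for label in record["labels"])
    ]

# Searching for "apple" surfaces the photo labeled with "apple".
print(search_images("apple", images))  # -> ['orchard.jpg']
```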
Score thresholds
Each uploaded image is assigned a score, essentially a gatekeeper for what is deemed appropriate (and what isn’t).
How does the AI determine this score?
For each category, a score is returned indicating how confident the service is that the image matches that category. Depending on your threshold percentages, you can choose to either hold the post for approval or reject the image.
If the content being posted cannot be held for approval (for example, inside a personal conversation), the image will be rejected at either threshold.
When choosing your percentages, keep in mind that the higher the percentage, the more confident the AI must be about what an image contains before it holds or rejects that image.
For example, say the hold threshold for adult content is 75% and the reject threshold is 85%. If the scan is 75% or more confident that an image contains adult content, the platform holds it for moderator approval; if it is 85% or more confident, it rejects the image outright.
If you want to hold more images, resulting in more moderator oversight, you would keep your percentages low.
For example, say the hold threshold for visually disturbing content is 40% and the reject threshold is 75%. If the scan is 40% or more confident that an image contains visually disturbing content, the image is held for moderator approval; if it is 75% or more confident, the image is rejected.
The same applies to the suggestive and racy / violence and gore categories.
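To make the hold/reject behavior concrete, here is a minimal sketch of the decision logic in Python. The category names, the decide helper, and the suggestive/violence threshold values are hypothetical illustrations rather than the platform's actual code; the adult and visually disturbing values simply mirror the examples above, and in practice the scores come from the scanning service and the thresholds from your ACP settings.

```python
# Per-category thresholds: hold for moderation at the lower confidence,
# reject outright at the higher one. The adult and visually disturbing
# numbers mirror the examples above; the other two are placeholders.
THRESHOLDS = {
    "adult":               {"hold": 0.75, "reject": 0.85},
    "suggestive_racy":     {"hold": 0.60, "reject": 0.80},
    "visually_disturbing": {"hold": 0.40, "reject": 0.75},
    "violence_gore":       {"hold": 0.50, "reject": 0.80},
}

def decide(scores: dict[str, float], can_hold: bool = True) -> str:
    """Return 'reject', 'hold', or 'approve' for a scanned image.

    `scores` maps each category to the service's confidence (0.0 to 1.0).
    `can_hold` is False where content cannot be held for approval (for
    example, inside a personal conversation); there, crossing either
    threshold rejects the image.
    """
    decision = "approve"
    for category, score in scores.items():
        limits = THRESHOLDS[category]
        if score >= limits["reject"]:
            return "reject"
        if score >= limits["hold"]:
            decision = "hold" if can_hold else "reject"
    return decision

# 78% confidence of adult content: held for approval in normal posts...
print(decide({"adult": 0.78}))                  # -> 'hold'
# ...but rejected where the content cannot be held, e.g. personal conversations.
print(decide({"adult": 0.78}, can_hold=False))  # -> 'reject'
# At 85% or greater confidence the image is rejected everywhere.
print(decide({"adult": 0.86}))                  # -> 'reject'
```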
Here are a few more real-life examples:
Example 1: A sneaky troll decides to disrupt a corporate brand community by posting NSFW images. The image detection can automatically enforce the rules set by the administrator and stop those photos from seeing the light of day in the community.
Example 2: A travel company has a community where people share vacation experiences and information with others. Someone innocently posts a photo of themselves in a bikini during a trip to the beach; however, scantily clad images go against the community’s terms. The photo is therefore automatically either held for moderation or hidden from view.
The Image Scanner and Discoverability feature is available now on select standard plans.
ACP -> System -> Smart Community -> Features -> Image Scanner
Please note the video above uses a beta version of the Image Scanner; the screenshots in this post reflect the most up-to-date interface. However, the logic remains the same. 😀
Interested in moving to a plan with the Image Scanner feature? Please feel free to reach out to us.
Questions? Comments? Let us know what you think about the feature in the replies.