The Image Content Analysis (ICA) add-on for Web Security protects employees and IT environments from inappropriate ‘Not Safe For Work’ (NSFW) images – including adult, offensive and extremist content.
ICA uses AI and deep learning technology to deliver extremely high accuracy, virtually eliminating false positives.
ICA enables powerful Acceptable Use Policy enforcement – without the need for human moderation – whilst protecting employees, avoiding legal liability, and maintaining company culture and brand image.
- Prevent NSFW imagery from entering the environment
- Identify users who may be misusing Internet (web) access
- Detect and manage high-risk image and video content
- Enforce policy proactively and automatically as required – with an audit trail to demonstrate the action taken
ICA is fully integrated within the Web Security service as an optional add-on to the Censornet Cloud Security Platform. ICA automatically scans all images and videos accessed online, replacing any inappropriate content with a ‘safe’ symbol.
In most countries employers can be held vicariously liable for the actions of their employees – unless they can demonstrate they have taken all reasonable steps to protect their people from a hostile working environment.
ICA protects organisations from harassment and offensive content claims – or even criminal charges if illegal content is involved – and ensures a safe workplace for everyone.
Companies invest significant time and money building a compelling brand and positive brand image. Negative press coverage can damage an organisation’s brand almost instantaneously, and that damage can take years to repair. ICA mitigates the risk of unwanted publicity.
Image Content Analysis is also available for CASB Inline Mode – scanning the content of files uploaded to cloud storage applications including Box, Dropbox, Google Drive, and OneDrive.
Image Content Analysis is also available as an add-on for Email Security, scanning images in email messages and attachments.