Create AI CM:nsfw task
This algorithm quickly detects inappropriate content, classifying it as either NSFW ("Not Safe For Work") or normal. For general information about all capabilities and limits, see the generic "Content Moderation" method.
What is “Not Safe For Work”?
Content is flagged as NSFW when the algorithm recognizes inappropriate material in a video that may not be suitable for viewing in public places. The solution returns a confidence level (as a percentage) indicating how sure it is that the content is NSFW; otherwise, the content most likely does not contain any sexual or similar material.
Unlike soft-nudity-detection and hard-nudity-detection, this model only checks for sensitive material that can be considered not safe for work.
How to use?
Frames within the specified video are analyzed.
The response contains only the frames for which the nsfw class is detected with a confidence above 50%.
Example of a response with NSFW detected:
{
  "nsfw_detected": true,
  "detection_results": ["nsfw"],
  "frames": [
    {
      "label": "nsfw",
      "confidence": 0.93,
      "frame_number": 1
    },
    ...
  ]
}
Example of a response when no inappropriate content is detected:
{
  "nsfw_detected": false,
  "detection_results": [],
  "frames": []
}
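For clarity, here is a minimal Python sketch of how a client might interpret these responses. The summarize_nsfw_result helper is hypothetical; only the nsfw_detected, frames, label, confidence, and frame_number fields come from the examples above.

# A minimal sketch of handling the responses shown above.
# summarize_nsfw_result is an illustrative helper, not part of the API.
def summarize_nsfw_result(result: dict) -> None:
    if not result.get("nsfw_detected"):
        print("No inappropriate content detected.")
        return
    # Only frames above the 50% confidence threshold appear in the
    # response, so every frame listed here is already flagged as nsfw.
    for frame in result.get("frames", []):
        print(f"Frame {frame['frame_number']}: "
              f"{frame['label']} ({frame['confidence']:.0%} confidence)")

summarize_nsfw_result({
    "nsfw_detected": True,
    "detection_results": ["nsfw"],
    "frames": [{"label": "nsfw", "confidence": 0.93, "frame_number": 1}],
})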
Please note that the API only returns data (JSON) about the detected objects; no video is generated. The demo video above was created from the API's JSON output specifically to demonstrate and visualize the capabilities.
Authorizations
API key for authentication.
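As an illustration, a task-creation request might look like the Python sketch below. The endpoint URL, authorization header format, and body fields are assumptions made for this example; consult the full API reference for the exact values.

# A hedged sketch of creating a CM:nsfw task with the requests library.
# The endpoint path, header format, and body fields are assumptions.
import requests

API_KEY = "your-api-key"  # the API key described above

resp = requests.post(
    "https://api.example.com/ai/tasks",  # hypothetical endpoint
    headers={"Authorization": f"APIKey {API_KEY}"},  # assumed header format
    json={  # assumed body fields, for illustration only
        "task_name": "content-moderation",
        "category": "nsfw",
        "url": "https://example.com/video.mp4",
    },
)
resp.raise_for_status()
task_id = resp.json().get("task_id")  # assumed field name; see Response below
print("Created AI task:", task_id)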
Body
Response
The response returns the ID of the created AI task. Using this task ID, you can check the status and get the video processing result via the GET /ai/results method.
The response is of type object.
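As a rough illustration of this flow, the sketch below polls for the result using the returned task ID. The base URL, query parameter name, and status field are assumptions; only the GET /ai/results method itself is taken from this section.

# A minimal polling sketch, assuming GET /ai/results accepts the task ID.
# The base URL, parameter name, and status values are assumptions.
import time
import requests

def wait_for_result(task_id: str, api_key: str) -> dict:
    while True:
        resp = requests.get(
            "https://api.example.com/ai/results",  # hypothetical base URL
            headers={"Authorization": f"APIKey {api_key}"},
            params={"task_id": task_id},  # assumed parameter name
        )
        resp.raise_for_status()
        body = resp.json()
        if body.get("status") != "processing":  # assumed status field
            return body  # contains nsfw_detected, frames, etc.
        time.sleep(5)  # poll every few seconds until processing finishes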