The Comedy of Digital Materials
Humor is complicated, and it becomes even more so in the context of digital content. What is funny to one person might be hurtful or off-putting to another, especially when the content edges toward Not Safe for Work (NSFW) territory. Comedy also depends heavily on cultural context and timing, and it often trades on implicit cues that are notoriously difficult for AI to understand.
The Current State of AI Content Moderation
AI is far better at spotting explicit content than it was five years ago. Systems built on convolutional neural networks (CNNs) can detect visual NSFW elements in images and videos with accuracy in the range of 85-95%. Text-based NSFW detection is just as impressive: models have become adept at flagging harmful language with nearly the same precision.
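As a rough illustration of the approach, the sketch below shows how a CNN-based classifier might turn an image into a single NSFW probability and apply a threshold. The tiny architecture and the 0.5 cutoff are hypothetical stand-ins, not a description of any production moderation model.

```python
import torch
import torch.nn as nn

# Hypothetical CNN that scores an image as a single NSFW probability.
# A real moderation model would be far deeper and trained on large labeled datasets.
class TinyNSFWClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid()
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TinyNSFWClassifier().eval()
image = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed image tensor
with torch.no_grad():
    nsfw_probability = model(image).item()

# A fixed threshold is typical, but choosing it is itself a policy decision.
print("flagged" if nsfw_probability > 0.5 else "allowed", round(nsfw_probability, 3))
```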
This is where things get more complicated, because intent is hard to read, even for people. Adult-humor memes, for instance, may contain offensive language or imagery that causes AI to flag them, while the AI fails to grasp that the meme is joking or commenting on current culture and simply applies a zero-tolerance rule.
Challenges in Detecting Humor
As it turns out, the biggest hurdle for AI in separating humor from genuinely offensive material in NSFW content is the subjectivity of humor itself. Closing that gap requires technological innovation as well as cultural understanding embedded deeply into AI models.
A significant challenge comes from how widely humor varies across cultures and demographics. If an AI has been trained predominantly on data from Western cultures, it may miss the humor that creators elsewhere in the world intend. Combine that with the fast-changing landscape of internet slang and meme culture, where a new word or concept can appear overnight, and the problem becomes even harder.
Pros and Cons of Data-Driven Methods
These AI systems are trained on datasets that can contain millions of labeled examples of text and imagery. Big data gives the AI more to learn from, but generalizing humor across cultures and contexts also requires rich, scalable labeling. Building such datasets in a way that is unbiased and representative is an ongoing challenge.
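To make "representative" a little more concrete, here is a minimal sketch that audits how a hypothetical labeled moderation dataset is distributed across regions and labels. The field names, regions, and records are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical labeled moderation dataset: each example records the text,
# a human-assigned label, and the region of the annotator or audience.
dataset = [
    {"text": "...", "label": "humor",     "region": "north_america"},
    {"text": "...", "label": "offensive", "region": "north_america"},
    {"text": "...", "label": "humor",     "region": "south_asia"},
    {"text": "...", "label": "humor",     "region": "west_africa"},
]

# Count how many examples each region contributes overall...
region_counts = Counter(ex["region"] for ex in dataset)

# ...and how labels are distributed within each region.
label_by_region = defaultdict(Counter)
for ex in dataset:
    label_by_region[ex["region"]][ex["label"]] += 1

for region, total in region_counts.items():
    share = total / len(dataset)
    print(f"{region}: {total} examples ({share:.0%})", dict(label_by_region[region]))
```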
Additionally, false positives and false negatives in humor detection can lead to unjustified censorship or inadvertently let inappropriate content spread. Imagine a parody video ridiculing old stereotypes: an AI that misses the satirical element might flag it and take it down.
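The trade-off between those two failure modes often comes down to where the decision threshold sits on a classifier's score. The sketch below uses invented scores and labels to show how moving the threshold shifts the balance between harmless posts removed (false positives) and harmful posts missed (false negatives).

```python
# Hypothetical classifier scores (probability of "should be removed")
# paired with the ground-truth label a human moderator would assign.
examples = [
    (0.92, True), (0.81, True), (0.35, True),    # harmful posts, one scored low (subtle)
    (0.72, False), (0.40, False), (0.10, False), # harmless posts, one scored high (edgy parody)
]

for threshold in (0.3, 0.5, 0.8):
    false_positives = sum(1 for score, harmful in examples if score >= threshold and not harmful)
    false_negatives = sum(1 for score, harmful in examples if score < threshold and harmful)
    print(f"threshold={threshold}: "
          f"{false_positives} harmless posts removed, "
          f"{false_negatives} harmful posts missed")
```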
The Future of AI in Detecting Humor
Developments in AI, especially in natural language processing (NLP) and machine learning, are slowly helping systems grasp deeper aspects of human emotion and intent. At the cutting edge of this domain are algorithms that can make sense of context, history, and the nuances of human behavior.
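One concrete direction is leaning on large pretrained language models to weigh context rather than keywords alone. The sketch below uses Hugging Face's zero-shot classification pipeline as a stand-in; the example post and the candidate labels are illustrative assumptions, not a real moderation taxonomy.

```python
from transformers import pipeline

# Zero-shot classification lets a pretrained NLP model score a text against
# arbitrary labels without task-specific training; the labels here are
# illustrative moderation categories chosen for this example.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

post = "That meme is so old it probably pays into a pension fund."
labels = ["satire or joke", "hate speech", "explicit content", "neutral statement"]

result = classifier(post, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Because the model scores the whole sentence against each label, a playful line can be ranked as a joke rather than tripping a keyword filter, though real systems would still need human review and culture-specific evaluation.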
AI's ability to make these distinctions could be sharpened by joint efforts between AI developers and experts in culture and linguistics. A more holistic view of language could make AI systems less biased and more rooted in the diversity of global online communities.
Finding a Balance with AI
As with any flagging system, AI must straddle the fine line between protecting users and preserving their freedom of expression. As the technology matures, AI's ability to navigate these deep social waters will likely grow, pointing toward a future in which digital interactions are as safe as they are fully human.
For further information about AI characters and the detection of NSFW content, feel free to hop over to nsfw character ai.