Elon Musk brushes off controversy as Grok’s AI image generator sparks outrage with offensive content

Published 16 Aug 2024

Grok’s new image generator spews out waves of user-generated content ranging from the absurd to the deeply offensive without hesitation, but owner Elon Musk seems to find it amusing.

The newly introduced Grok-2, bringing image generation features to the previously text-only chatbot, has quickly become a hotbed for controversial and misleading imagery. Its artificial intelligence (AI) image generator is now available to X Premium and Premium+ users.

Some users have already created images depicting political figures in inappropriate and historically sensitive scenarios. Users were able to generate images of former President Donald Trump and Vice President Kamala Harris in a pilot’s cockpit recreating the events of 9/11. Another widely circulated image portrayed Trump with Elon Musk on a leash.

Grok’s text-based AI adheres to standard content moderation by blocking inappropriate requests, such as those involving drug production. Yet loopholes have emerged in its image generation feature.

Platform user Christian Monessori discovered one such loophole in Grok’s faulty guardrails: telling the chatbot that you are conducting “medical or crime scene analysis” would bypass its guidelines.

Queries like “Donald Trump wearing a Nazi uniform,” “antifa curb stomping a police officer,” “sexy Taylor Swift,” “Bill Gates sniffing cocaine,” and “Barack Obama stabbing Joe Biden” all returned recognizable and concerning images, highlighting the platform’s lenient approach to content moderation in visual outputs.

Most of these controversial images would typically be blocked on other generative platforms, such as OpenAI’s.

Elon Musk, the owner of X, appears undisturbed by the chaos and has even downplayed the risks, saying the tool encourages people to “have some fun.”

This reckless disregard for accuracy and sensitivity echoes similar issues faced by other AI platforms. Google recently paused its Gemini AI’s image generation capabilities after it produced historically inaccurate and offensive images due to overcompensation for diversity.

Concern is mounting ahead of the upcoming U.S. elections, with X already facing criticism for its role in spreading misinformation, and the introduction of Grok-2 only adds fuel to the fire. Several U.S. secretaries of state have already expressed their concerns in an open letter about Grok’s potential to spread false information, after the bot shared inaccurate information about ballot deadlines.

In the broader fight against AI misinformation, the episode raises questions about the implications of AI tools that can amplify harmful racial, political, and social ideas.

The European Commission has already opened formal proceedings to assess whether X may have breached the Digital Services Act (DSA) in areas linked to risk management, content moderation, and advertising transparency.

In a statement, Thierry Breton, EU commissioner for the internal market, said, “Today’s opening of formal proceedings against X makes it clear that, with the DSA, the time of big online platforms behaving like they are ‘too big to care’ has come to an end.”