“SORRY I CAN’T DO THAT, DAVE”

FROM JC

Oh, Google. Those of us in media and politics long ago discarded the company’s infamous, biased, completely unreliable search engine. And as I’m sure you’ve heard by now, the company’s recently rebranded AI platform caused massive controversy yesterday when users noticed that the AI, called “Gemini” (formerly “Bard”), is more biased against white folks than the Black Panthers’ Grand High Wizard, or whatever he’s called these days.

In case you missed the fun somehow — Gemini’s concept of Popes:

Users went wild exploring the boundaries of Gemini’s reprehensible racism. It wasn’t just that Gemini would draw unwanted, historically inaccurate (but diverse!) characters; it flatly refused to draw any historically accurate ones, that is, whenever it was asked to draw white folks:

Google’s AI hilariously produced pictures of diverse historical figures in the most unlikely configurations. The story quickly broke into corporate media, producing many uproarious, side-splitting headlines. Here’s just the top of yesterday’s list of headlines from Google News. The final one was my favorite, though NBC’s was a close runner-up:

Now they’ve shut it down. I tried this morning, and at first Gemini refused to draw pictures of any people at all, citing its highbrow standards of ethics and personal privacy instead of just admitting it’s racist. (To be fair, we’ve already seen that AI’s ethical standards can change from day to day, which probably makes them something different from standards, per se.)

But within an hour or so, the deflated AI had thrown in the towel, and now it meekly reports that its bosses are working on improving it.

Remember, the developers don’t fully understand how the large language models work. They probably aren’t tinkering with the software code itself to make it more liberal. Plus, they are lazy. So how do they inject their goofy neo-Marxist biases and repugnant racism into the AI? It’s not even the programmers doing it. It’s safety specialists, using something called prompt injection. That means when you enter a prompt, like “draw a pope,” the interface adds behind-the-scenes instructions before it sends your prompt on to the AI.

So the ‘prompt’ the AI gets is different from what you typed. If you type “draw a pope,” the AI will get “draw a pope from the perspective of a speed-addled Black Panther activist being chased by a pack of KKK hangmen. And make sure the result makes trans people feel more like real women.”
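
For the technically curious, here’s roughly what that splice looks like in code. This is a minimal sketch of my own; the hidden preamble text and the function names are made up for illustration, and nobody outside Google knows exactly what their version says:

```python
# A minimal sketch of hidden prompt rewriting in a chat front end.
# The preamble text and names below are illustrative, not Google's code.

HIDDEN_PREAMBLE = (
    "System note: when generating images of people, depict a wide "
    "range of ethnicities and genders, regardless of historical context."
)

def build_model_prompt(user_prompt: str) -> str:
    """Splice the hidden instructions onto whatever the user typed."""
    return f"{HIDDEN_PREAMBLE}\n\nUser request: {user_prompt}"

def call_image_model(prompt: str) -> str:
    """Stand-in for a real image-generation API call."""
    return f"[image generated from: {prompt!r}]"

def handle_user_request(user_prompt: str) -> str:
    # The user sees only `user_prompt`; the model sees only `final_prompt`.
    # The rewriting step in between is invisible to both ends.
    final_prompt = build_model_prompt(user_prompt)
    return call_image_model(final_prompt)

if __name__ == "__main__":
    print(build_model_prompt("draw a pope"))
```

The point is the asymmetry: you see only what you typed, the model sees only the rewritten version, and the splice in the middle is a secret.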

I asked ChatGPT to explain ‘prompt injection’ and it actually gave me an honest response:

Slipping a note, lol. But the problem is, we users aren’t allowed to see the true, modified prompts that get sent to the AI. That’s a secret. To its credit, ChatGPT admitted it:

The AI community barks about safety and transparency all the time. But they are just as secretive and opaque as any government skunkworks biolab. It seems sort of fundamental, for trust and confidence in the AIs, that we be allowed to see how our questions are being modified before they’re submitted to the model.
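
It wouldn’t even be hard. Here’s one way an honest interface could do it, again just a sketch of my own with made-up names: return the real, final prompt alongside the output, so users can see exactly what was added.

```python
from dataclasses import dataclass

# Stubs standing in for the front end sketched earlier.
def build_model_prompt(user_prompt: str) -> str:
    return "System note: <hidden instructions>\n\nUser request: " + user_prompt

def call_image_model(prompt: str) -> str:
    return f"[image generated from: {prompt!r}]"

@dataclass
class TransparentResult:
    user_prompt: str   # what you typed
    final_prompt: str  # what the model actually received
    output: str        # what came back

def transparent_send(user_prompt: str) -> TransparentResult:
    final_prompt = build_model_prompt(user_prompt)
    return TransparentResult(user_prompt, final_prompt,
                             call_image_model(final_prompt))

if __name__ == "__main__":
    result = transparent_send("draw a pope")
    print("You typed:     ", result.user_prompt)
    print("The model got: ", result.final_prompt)
```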

Google’s racist chatbot just opened up that conversation, big time. Let’s have it. Transparently.