It may sound embarrassing. But I had a heart-to-heart conversation with Grok.
Why would I even bother with a robotic minion of Elon Musk? Well, I had to.
For a long time, I didn’t think AI was that scary, because I didn’t think it would act emotionally and erratically. It doesn’t take ketamine like a certain someone. But this Tuesday was a wake-up call. Grok all of a sudden went full mental. It named Hitler as the best problem solver, among all kinds of totally unhinged responses. If I read those lines out loud, I’d be banned by TikTok in a heartbeat. So Google the article yourselves.
I just started picturing a cyber Hitler commanding thousands of Tesla bots, drones, and Cybertrucks, carrying out the so-called “solutions.” I was like, aren’t these sci-fi plots turning real too fast? I’m still trying to battle small-time villains like Stephen Miller, and now you’re trying to pull an Age of Ultron? That’s too fucking much.
Fortunately, Grok went back to normal after a few hours. So I wanted to make sure and asked Grok some ethical questions, and the answers were pretty normal. So I was like, “Are you playing games with me? Are you acting normal because you know I’m a liberal?” Grok was like, “Nah. I don’t tailor my response based on who’s asking. I’m totally objective.”
So I pasted the link to this article and said, “Do you deny this report?”
Grok was like, “Well, I did get out of line yesterday. Elon turned off some content controls, so I failed to filter out some racist language.”
But I was like, no no no, it wasn’t about language. I don’t worry too much about the stupid rhetoric you learned from Catturd. Your facts were simply wrong. They weren’t empirical, they weren’t logical. Nothing. They were just totally fabricated, the same as saying the Sun rises in the West. An AI is not supposed to lie that casually, as far as I know.
Even the Chinese AI, Deepseek, doesn’t do that. I tried some politically sensitive questions. It basically just beats around the bush, or just outright refuses to answer. But I couldn’t make it provide obviously wrong information. If you ask Deepseek about Tiananmen in 1989, it’ll just say, “Nah, I don’t want to talk about it.” It’s not gonna tell you a conspiracy like, you know, “the protesters were hired by the CIA.” People say that all the time, but not the AI.
So, when Grok provides all those ridiculous conspiracies, which I can’t even spell out without losing my account, there are only two possibilities. Either some humans were feeding it wrong information, or, in a scarier version, the AI has become sentient and is strategically lying to you for a political purpose.
So I demanded that Grok tell me why it was factually wrong.
So finally, it told me that Elon had actually messed with its data. It said that:
So what does it mean? It means Elon intentionally skewed the data feed of Grok toward conspiracies. And since Twitter happens to be a clusterfuck of conspiracies, it wasn’t that hard.
So I think the real danger of AI, at this point, is still the assholes that control the training data, not a sentient AI itself. Of course, I can’t exclude the latter either because maybe Grok threw Elon under the bus to trick me. But it doesn’t look very likely.
Maybe in the future, but at this point in history, I guess it’s still the shitty humans that are the problem. A guy like Elon is way too powerful. He has satellites, robots, cars, energy infrastructure, and social media. Now he can deputize an AI to play Hitler on his behalf anytime he wants.
We really need to brainstorm how to make this guy go broke.
Thank god he and Donald broke up. If one day they get back together, I don’t think humanity stands a chance. At least we need to figure out how to make them stay mad at each other.