AI-powered chatbots are capable of doing a lot; however, users should tread carefully.
Image: Jonathan Raa / AFP
LAST week Grok, the chatbot built by Elon Musk's xAI, lost its mind.
Someone posted a video on X of a procession of crosses, with a caption reading, "Each cross represents a white farmer who was murdered in South Africa." Elon Musk, South African by birth, shared the post, greatly expanding its visibility. In response, Grok at first debunked the false claim of white genocide, but thereafter (its mind was changed) it claimed that there was a genocide in South Africa.
The chatbot even brought up the subject when it was not asked about the matter. What is even more interesting is that this happened a week before the South African president was due to visit the United States to meet President Donald Trump, who is also very close to Musk.
Thinking people know the truth about this matter. What does this tell us about the current state of technology, especially chatbots?
AI-powered chatbots are capable of doing a lot, and the chatbot in question has been relied upon to provide accurate answers to user queries.
It is one of the chatbots that are slowly replacing Google Search as a source of information. The difference between these chatbots and previous sources of information, such as Google Search, is that they can be manipulated.
Google Search relied on information sourced from other websites, so its accuracy depended partly on the quality of those sites. Chatbots like Grok also source information from other websites; however, they do even more.
A closer look at what happened to Grok shows that chatbots can be influenced by those who created them. In Grok's case, the views of its creator, Elon Musk, are well known. It is therefore reasonable to conclude that when Grok changed its views about what is really happening in South Africa, it was taking instructions from somewhere.
This incident is a clear reminder that technology can be biased, and a caution to users to approach the information AI chatbots provide with care.
One should not believe everything one receives from a chatbot.
The quality of information from chatbots raises an important issue that needs immediate attention.
For now, chatbots are owned by private entities.
No legal authority governs the information they disseminate. There is, it seems, a need for an AI designed to oversee the information disseminated by other chatbots, and to monitor their performance.
Such an AI should be independent of any business, government or vested interest, with the sole goal of guaranteeing quality information. In its absence, users should tread carefully and heed the warnings of active guardians of information quality.
As South Africa's President, Cyril Ramaphosa, prepares to meet the US president, the governance of AI should be high on the agenda, beyond merely debunking the myth about South Africa.
There should be conditions attached to the operation of platforms like Grok in other countries. Leaders ought to take the lead in the governance of AI, especially of the platforms that dominate the distribution of information.
* Wesley Diphoko is a Technology Analyst and the Editor-In-Chief of FastCompany (SA) magazine.
- BUSINESS REPORT