Google is making plans to deal with its AI picture bot’s issues
Historically inaccurate image generation became a point of contention this week for Gemini, Google’s new AI chatbot competing with ChatGPT. In response, the tech giant said on Thursday that it has paused the bot’s ability to create images of humans. The chatbot, which debuted this month as a rebrand of Bard, uses advanced artificial intelligence to answer queries on a wide range of topics, generate text, and create images from prompts. The image feature adds visual representations to Gemini’s responses. However, user testing revealed that certain image prompts connected to historical figures and events yielded erroneous or misleading results.
Images Diverged from Historical Accuracy
When asked for pictures of the leaders who led America to independence, Gemini produced images of women and people of color rather than the white men who actually held those roles. Likewise, when prompted for a U.S. senator from the 1800s, Gemini depicted Black and Indigenous women, even though history records that the first female senator was a white woman who took office in the 1920s. The AI appeared to replace the actual race and gender of historical figures with a broader, modern mix.
Allegations of Bias Against White Individuals
Mike Wacker, a former Google employee, took to social media to highlight some of Gemini’s less-than-desirable outputs, questioning its depiction of the Founding Fathers as well as its refusal to honor his request for pictures relating to the 1989 incident at Tiananmen Square in China. Wacker’s posts quickly gained traction and sparked a massive outcry, with critics saying that Google’s AI technology is biased against white people and selectively excludes information that contradicts certain ideological positions.
Google Acknowledges Mistakes and Disables Functionality
Google found itself at the center of a controversy gaining speed in right-wing digital spaces. The company addressed the issue head-on on Wednesday, acknowledging problems with its image results. “We’re directly dealing with these problems to improve our pictures’ precision,” a company representative said. “Gemini’s AI, which creates a variety of images, tends to be good since it serves global users. However, it’s not doing its job correctly now.”
Thursday brought another update. Google announced on its platform that Gemini would not generate pictures of humans for the time being, as the feature needs fixes first. Improvements are underway and the capability will return, but the company has not provided a timeline.
AI Continues to Encounter Substantial Limitations
AI chatbots are advanced, but they can falter at understanding details and context when generating text or images. Because their training data skews toward recent material, Gemini struggled to accurately depict the demographic makeup of the Founding Fathers’ era, producing modern-looking images instead.
This illustrates why it is difficult to trust AI technology to consistently present accurate narratives, especially on sensitive topics like identity and representation. By pausing image generation, Google signaled its intent to improve Gemini and stop misinformation from spreading.
What Does This Mean for AI?
AI tools like ChatGPT and Gemini are widely popular, but developers must weigh important considerations: the tools need to perform well and must not spread false information, especially those capable of generating images.
Gemini’s early stumbles with image generation show that AI still needs human guidance. The quality of AI output must be checked; these new technologies cannot be expected to work perfectly without oversight.
While image generation is paused, Google can treat the setback as an opportunity to better train the AI to be truthful, ethical, and responsible, ultimately making the tool better for everyone who uses it.