Google has implemented restrictions on its AI chatbot, Gemini, preventing it from responding to election-related queries in countries where elections are taking place. This move aims to address concerns about the potential misuse and dissemination of inaccurate or misleading information during the election process. The restrictions have been rolled out in the U.S., India, and other major countries with upcoming elections.

Queries related to political parties, candidates, and politicians now prompt a pre-set message from Gemini, stating, “I’m still learning how to answer this question. In the meantime, try Google Search.” Despite these restrictions, queries containing typos can sometimes still draw a response from the chatbot, highlighting the ongoing challenge of fine-tuning these guardrails.

Google acknowledged the importance of providing accurate information on election-related queries and emphasized its commitment to improving protections. The move in India coincided with an advisory from the government requiring tech firms to obtain government permission before launching new AI models. The advisory initially faced criticism, but the government clarified that it primarily applied to significant tech companies rather than start-ups.

The update follows an earlier incident in which Gemini’s response to a query about whether Indian Prime Minister Narendra Modi is a fascist drew criticism and accusations that Google had violated India’s IT Rules, 2021. Google also suspended Gemini’s ability to generate images of people last month after it produced historically inaccurate results, pledging to release an improved version.

It remains unclear whether Google will lift the restrictions on Gemini for election-related queries after the elections conclude. TechCrunch is awaiting Google’s response for more information on the countries where the update is live. The company emphasized its commitment to continuously enhancing protections for election-related queries on the Gemini AI chatbot.
