
Lok Sabha Elections 2024: Ahead of general elections later this summer, the Indian Ministry of Electronics and Information Technology has told companies that own Artificial Intelligence platforms that their services must not generate responses that "threaten the integrity of the electoral process".
The advisory was sent to companies that own generative Artificial Intelligence platforms, such as Google and OpenAI, as well as to those that run similar platforms.
Platforms that currently offer "under-testing/unreliable" AI systems or Large Language Models (LLMs) to Indian users must also label the "possible and inherent fallibility or unreliability of the output generated".
Google's AI platform Gemini recently came under fire over responses it generated to a question about Prime Minister Narendra Modi.
Minister of State for Electronics and IT Rajeev Chandrasekhar said that the advisory is a “signal to the future course of legislative action that India will undertake to rein in generative AI platforms”.
Chandrasekhar, who has been named the BJP's Lok Sabha candidate for the 2024 General Elections from Thiruvananthapuram in southern India, said that the government may seek a demonstration of the companies' AI platforms, including the consent architecture they follow.
The companies have been asked to submit an action taken report within 15 days.
"The use of under-testing / unreliable Artificial Intelligence model(s)/ LLM /Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with the explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated. Further, the 'consent popup' mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated," the advisory said.
The government has further asked the companies to label AI-generated responses with a permanent unique identifier so that the creator or first originator of any misinformation or deepfake can be identified.
"Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake… is labeled or embedded with a permanent unique metadata or identifier… (to) identify the user of the software," the advisory added.
"All intermediaries or platforms to ensure that their computer resource do not permit any bias or discrimination or threaten the integrity of the electoral process including via the use of Artificial Intelligence model(s)/ LLM/ Generative AI, software(s) or algorithm(s)," it said.
(With inputs from agencies)