
India is reversing a recent AI advisory after facing backlash from local and global entrepreneurs and investors.
The Ministry of Electronics and IT shared an updated AI advisory with industry stakeholders on Friday, eliminating the requirement for government approval before launching or deploying an AI model in the South Asian market. Instead, firms are now advised to label under-tested and unreliable AI models to inform users of their potential fallibility or unreliability.
The revision follows severe criticism of India’s IT ministry earlier this month, with figures such as Andreessen Horowitz’s Martin Casado calling the original advisory “a travesty.” That earlier advisory had itself marked a reversal from India’s previously hands-off approach to AI regulation, reflecting the government’s growing view of the sector as strategically important.
The newly revised advisory is not publicly available online, but it has been reviewed by TechCrunch. The ministry stated that while the advisory is not legally binding, it signals the direction of future regulation and that the government expects stakeholders to comply.
The advisory states that AI models must not be used to share content that is unlawful under Indian law, nor should they permit bias, discrimination, or threats to the integrity of the electoral process. Intermediaries are advised to use mechanisms such as consent popups to explicitly inform users about the potential unreliability of AI-generated output.
Furthermore, the ministry continues to emphasize that deepfakes and misinformation should be easily identifiable, advising intermediaries to label content or embed it with unique metadata or identifiers. However, the requirement that intermediaries identify the “originator” of any particular message has been dropped from the guidelines.