In a historic move for the Indian digital landscape, Google has launched Gemini AI Kids—a kid-friendly version of its generative AI chatbot—for children under the age of thirteen through supervised accounts in the Family Link app. This is a crucial step in embedding artificial intelligence in early learning, but it also refocuses attention on the legal and regulatory frameworks in India that govern AI’s interaction with minors.
The introduction of AI products for children has raised pressing issues around child safety, data privacy, psychological harm, and the adequacy of existing regulatory mechanisms. India’s primary legislation in this regard, the Digital Personal Data Protection Act, 2023 (DPDP Act), requires data fiduciaries to obtain verifiable parental consent before processing any personal data of children under the age of eighteen. The Act also prohibits behavioral monitoring of, and targeted advertising directed at, children, protecting their well-being not only in a physical sense but also in a moral, ethical, and emotional capacity.
Despite these provisions, experts agree that the DPDP Act does not constitute a comprehensive framework for addressing the risks associated with AI. There is no explicit age-gating requirement, and many of the safeguard mechanisms adopted by platforms are self-regulatory. Disclaimers issued by AI platforms frequently acknowledge the likelihood that children may be exposed to mature content, underscoring how easily such protective measures can be undermined in practice.
Any entity processing large volumes of sensitive data relating to minors, such as an AI chatbot company, may be designated a Significant Data Fiduciary under the DPDP Act, subjecting it to more stringent compliance requirements. Nevertheless, the Act and its draft rules focus primarily on data consent and do not adequately address the behavioral, psychological, and emotional impacts of generative AI on minors.
Regulators across the world are adopting stronger governance measures for child-targeted AI: California’s proposed Leading Ethical AI Development for Kids Act and the EU’s AI Act both classify child-targeted AI as high-risk and require transparency and safety audits. India’s regulatory approach, by contrast, is more diffuse. While certain provisions, such as those under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and the Bharatiya Nyaya Sanhita, 2023, offer some protections, a coherent approach to AI risks is absent.
As generative AI tools become more accessible to children, legal experts continue to push for a strong, child-centric regulatory scheme in India, one proportionate to the urgent need to safeguard children’s digital rights, psychological safety, and well-being. Children are part of this digital ecosystem, and they must not be left vulnerable in the age of artificial intelligence as digital conditions continue to advance.
Source: Lakshmikumaran & Sridharan Attorneys