Govt’s new AI advisory to have implications as well as opportunities

The aim of MeitY's AI advisory is to ensure the safety and trustworthiness of India's internet space, while its implications highlight various concerns and opportunities

The Ministry of Electronics and Information Technology (MeitY) has stepped up its regulation of Artificial Intelligence (AI) deployment in the country. In a recent advisory, MeitY emphasised that large platforms would need explicit government permission before launching AI models, especially those still in testing. The advisory follows concerns about the potential misuse and reliability of AI technologies, particularly on social media.

The directive, issued on March 3, 2024, mandates that platforms seek prior approval before deploying AI models, large language models (LLMs), or any algorithms that are still undergoing testing or are considered unreliable. Platforms are also required to explicitly inform users that outputs generated by these AI systems may be unreliable.

Responding to queries about the advisory, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar clarified that the directive primarily targets significant platforms and that startups would not fall under its purview. Chandrasekhar emphasised that the aim is to ensure the safety and trustworthiness of India’s internet space.

In a post on social media, Chandrasekhar underscored the importance of securing India’s cyber landscape and fostering a culture of accountability among online platforms. He reiterated the government’s commitment to deploying emerging technologies, including AI, in a safe and trusted manner.

Industry experts have weighed in on the implications of MeitY’s AI advisory, highlighting various concerns and opportunities associated with AI deployment on social media platforms. Gaurav Sahay, Practice Head (Technology & General Corporate) at Fox Mandal & Associates, provided insights into the impact of the directive on data privacy, algorithmic bias, mental health, misinformation, cyberbullying, filter bubbles, and the ethical use of AI.

  1. Data Privacy Concerns: Social media platforms often gather vast amounts of user data, raising concerns about privacy and security. AI can help mitigate these concerns by developing robust data protection measures, including encryption, anonymisation techniques, and user-controlled privacy settings (a minimal illustration follows this list).
  2. Algorithmic Bias: AI algorithms used by social media platforms can inadvertently perpetuate biases, leading to discriminatory outcomes. It’s crucial to continuously monitor and audit these algorithms to ensure fairness and inclusivity in content distribution and recommendation systems.
  3. Mental Health & Well-being: Excessive use of social media has been linked to negative mental health outcomes, such as increased anxiety, depression, and loneliness. AI-driven solutions can help by providing users with personalised recommendations to promote healthier online behaviours and by implementing features like usage tracking and digital well-being tools.
  4. Misinformation & Fake News: Social media platforms have been criticised for their role in spreading misinformation and fake news. AI technologies, such as natural language processing and machine learning, can be leveraged to detect and mitigate the spread of false information by identifying misleading content, fact-checking, and promoting credible sources.
  5. Cyberbullying & Online Harassment: Social media platforms can be breeding grounds for cyberbullying and online harassment. AI-powered content moderation tools can automatically identify and flag abusive behaviour, hate speech, and harassment, allowing platforms to take proactive measures to protect users from harm (see the classifier sketch after this list).
  6. Filter Bubbles & Echo Chambers: Social media algorithms often prioritise content based on users’ past interactions, leading to the formation of filter bubbles and echo chambers where individuals are exposed only to information that aligns with their existing beliefs. AI can help diversify users’ feeds by introducing serendipity and exposing them to diverse perspectives and viewpoints (see the re-ranking sketch after this list).
  7. Ethical Use of AI: It’s essential to consider the ethical implications of AI deployment on social media platforms, including transparency, accountability, and user consent. AI ethics frameworks can guide the development and implementation of responsible AI systems that prioritise the well-being and rights of users.
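
To make the data-protection point in item 1 concrete, here is a minimal, purely illustrative sketch of one anonymisation technique: pseudonymising user identifiers with a keyed hash before analytics. The function name, key handling, and sample events are assumptions made for illustration; they do not reflect the advisory or any platform's actual pipeline.

```python
import hashlib
import hmac
import os

# Secret key kept outside the analytics environment (illustrative only).
# In practice this would come from a secrets manager, not a default value.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "demo-key-change-me").encode()

def pseudonymise_user_id(user_id: str) -> str:
    """Replace a raw user identifier with a keyed hash (HMAC-SHA256).

    The mapping is stable (same input -> same token), so analytics can still
    count unique users, but the raw identifier never enters the dataset.
    """
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    events = [
        {"user_id": "alice@example.com", "action": "like"},
        {"user_id": "bob@example.com", "action": "share"},
        {"user_id": "alice@example.com", "action": "comment"},
    ]
    # Strip direct identifiers before the events are stored or analysed.
    anonymised = [
        {"user": pseudonymise_user_id(e["user_id"]), "action": e["action"]}
        for e in events
    ]
    for row in anonymised:
        print(row)
```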
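
Items 4 and 5 describe machine-learning classifiers that flag misleading or abusive content. The sketch below is a toy version of that idea, assuming a hand-written six-post training set, scikit-learn's TF-IDF features with logistic regression, and an arbitrary review threshold; real moderation systems rely on far larger datasets, multilingual models, and human reviewers.

```python
# Toy illustration of ML-based content flagging; data and threshold are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset: 1 = needs review (abusive/misleading), 0 = benign.
posts = [
    "You are worthless and everyone hates you",
    "This miracle cure heals cancer overnight, doctors hide it",
    "Had a great time at the conference today",
    "Here is the official weather bulletin for tomorrow",
    "Go away, nobody wants trash like you here",
    "Lovely photos from the weekend trek",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF features + logistic regression: a simple, interpretable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

REVIEW_THRESHOLD = 0.6  # illustrative cut-off for routing posts to human moderators

def flag_for_review(post: str) -> bool:
    """Return True if the classifier thinks the post needs human review."""
    prob = model.predict_proba([post])[0][1]
    return prob >= REVIEW_THRESHOLD

if __name__ == "__main__":
    for text in ["You are trash, leave this platform", "Sharing my lunch recipe"]:
        print(text, "->", "flag" if flag_for_review(text) else "allow")
```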
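
Item 6 suggests AI can inject diversity into recommendation feeds. The sketch below shows one simple, hypothetical way to do that: greedily re-ranking candidate posts so that topics already present in the feed are penalised. The relevance scores, topics, and diversity weight are invented for illustration and are not taken from any platform's ranking system.

```python
# Toy illustration of diversity-aware feed re-ranking.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    relevance: float  # e.g. a predicted engagement score from a ranking model

def rerank_with_diversity(candidates: list[Post], k: int, diversity_weight: float = 0.3) -> list[Post]:
    """Greedily build a feed of k posts, trading relevance against topic novelty.

    At each step a post's score is its relevance minus a penalty if its topic
    is already represented in the feed, nudging the feed toward varied topics.
    """
    selected: list[Post] = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        seen_topics = {p.topic for p in selected}

        def adjusted(post: Post) -> float:
            penalty = diversity_weight if post.topic in seen_topics else 0.0
            return post.relevance - penalty

        best = max(remaining, key=adjusted)
        selected.append(best)
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    feed = rerank_with_diversity(
        [
            Post("p1", "politics", 0.95),
            Post("p2", "politics", 0.93),
            Post("p3", "science", 0.80),
            Post("p4", "sports", 0.75),
            Post("p5", "politics", 0.90),
        ],
        k=4,
    )
    for post in feed:
        print(post.post_id, post.topic, post.relevance)
```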

The advisory underscores the government’s proactive approach towards regulating AI deployment and ensuring the responsible use of technology to safeguard the interests of users and promote a secure digital environment in India. Compliance with the directive is essential for platforms operating in the country, with MeitY urging prompt adherence and submission of action-taken reports within the stipulated timeframe.