
Women in AI: Urvashi Aneja is researching the social impact of AI in India

To highlight the contributions of AI-focused women academics and others, TechCrunch is launching a series of interviews focusing on remarkable women in the AI revolution. Check out more profiles here.

Urvashi Aneja is the founding director of Digital Futures Lab, an interdisciplinary research effort focusing on the interaction between technology and society in the Global South. She is also an associate fellow at the Asia Pacific program at Chatham House in London.

Aneja’s research centers on the societal impact of algorithmic decision-making systems in India and platform governance. She recently conducted a study on AI use in India across sectors like policing and agriculture.

Q&A

How did you start in AI and what attracted you to the field?

I began my career in research and policy engagement in the humanitarian sector, focusing on the use of digital technologies in low-resource crises. This experience made me concerned about the narrative surrounding AI’s potential in India and the lack of critical discourse.

What work are you most proud of in the AI field?

I am proud of shedding light on the political economy of AI production and its broader implications for social justice, labor relations, and environmental sustainability. We have also translated these concerns into concrete policy and regulation.

How do you navigate the challenges of the male-dominated tech industry?

I let my work speak for itself and constantly question the status quo.

What advice would you give to women entering the AI field?

Develop deep expertise in AI while studying various fields to understand AI as a socio-technical system shaped by history and culture.

What are some pressing issues facing the evolution of AI?

The concentration of power in tech companies, the environmental impacts of AI, and the techno-solutionist narratives around AI in socio-economic development are critical issues to address.

What are some issues AI users should be aware of?

Users should understand that AI is not magic but a computational tool based on historical patterns. They should also be cautious of shifting responsibility onto end-users in low-resource contexts.

How can AI be responsibly built?

Start with assessing the need for AI, re-center domain knowledge in AI development, and prioritize participation, inclusive teams, and labor rights.

How can investors push for responsible AI?

Investors should consider the entire life cycle of AI production, including labor practices, environmental impacts, and business models, and should demand better evidence of AI's benefits.
