To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Amba Kak is the executive director of the AI Now Institute, where she helps create policy recommendations to address AI concerns. She was also a senior AI advisor at the Federal Trade Commission, and she previously worked as a global policy advisor at Mozilla and as a legal advisor to India’s telecom regulator on net neutrality.

Briefly, how did you get your start in AI? What attracted you to the field?

It’s not a straightforward question, because “AI” is a term that’s in vogue to describe practices and systems that have been evolving for a long time now. I’ve been working on technology policy for over a decade, in multiple parts of the world, and I witnessed the period when everything was about “big data” and then everything became about “AI.” But the core issues we were concerned with — how data-driven technologies and economies impact society — remain the same.

I was drawn to these questions early on in law school in India, where, amid a sea of decades-old, sometimes century-old precedent, I found it motivating to work in an area where the “pre-policy” questions (what world do we want? what role should technology play in it?) remain open-ended and contestable. Globally, at the time, the big debate was whether the internet could be regulated at the national level at all (the answer now seems like a very obvious yes!), and in India, there were heated debates about whether a biometric ID database of the entire population was creating a dangerous vector of social control. In the face of narratives of inevitability around AI and technology, I think regulation and advocacy can be a powerful tool to shape the trajectories of tech to serve public interests rather than the bottom lines of companies or just the interests of those who hold power in society. Of course, over the years, I’ve also learned that regulation is often entirely co-opted by these interests too and can function to maintain the status quo rather than challenge it. So that’s the work!

What work are you most proud of (in the AI field)?

Our 2023 AI Landscape report, released in April amid a crescendo of ChatGPT-fueled AI buzz, was part diagnosis of what should keep us up at night about the AI economy and part action-oriented manifesto aimed at the broader civil society community. It met the moment: a moment when both the diagnosis and what to do about it were sorely missing, and in their place were narratives about AI’s omniscience and inevitability. We underscored that the AI boom was further entrenching the concentration of power within a very narrow section of the tech industry, and I think we successfully pierced through the hype to reorient attention to AI’s impacts on society and on the economy… and to not assume any of this was inevitable.

Later in the year, we were able to bring this argument to a room full of government leaders and top AI executives at the UK AI Safety Summit, where I was one of only three civil society voices representing the public interest. It’s been a lesson in realizing the power of a compelling counter-narrative that refocuses attention when it’s easy to get swept up in curated and often self-serving narratives from the tech industry.

I’m also really proud of a lot of the work I did during my term as Senior Advisor to the Federal Trade Commission on AI, working on emerging technology issues and some of the key enforcement actions in that domain. It was an incredible team to be a part of, and I also learned the crucial lesson that even one person in the right room at the right time really can make a difference in influencing policymaking.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

The tech industry, and AI in particular, remains overwhelmingly white and male and geographically concentrated in very wealthy urban bubbles. But I like to reframe the conversation away from AI’s “white dude problem,” not just because it’s now well known, but also because that framing can create the illusion of quick fixes or diversity theater that, on their own, won’t solve the structural inequalities and power imbalances embedded in how the tech industry currently operates. Nor does it solve the deep-rooted “solutionism” that’s responsible for many harmful or exploitative uses of tech.

The real issue we need to contend with is the creation of a small group of companies and, within those, a handful of individuals who have accumulated unprecedented access to capital, networks, and power, reaping the rewards of the surveillance business model that powered the last decade of the internet. And this concentration of power is tipped to get much, much worse with AI. These individuals act with impunity, even as the platforms and infrastructures they control have enormous social and economic impacts.

How do we navigate this? By exposing the power dynamics that the tech industry tries very hard to conceal. We talk about the incentives, infrastructures, labor markets, and environment that power these waves of technology and shape the direction they will take. This is what we’ve been doing at AI Now for close to a decade, and when we do this well, creating counter-narratives and alternative imaginations for the appropriate role of technology within society, we make it difficult for policymakers and the public to look away.

What advice would you give to women seeking to enter the AI field?

For women, but also for other minoritized identities or perspectives seeking to make critiques from outside the AI industry, the best advice I could give is to stand your ground. This is a field that will routinely and systematically attempt to discredit critique, especially when it comes from people without traditional STEM backgrounds, and that’s easy to do given that AI is such an opaque industry, one that can make you feel like you’re always pushing back from the outside. Even when you’ve been in the field for decades, as I have, powerful voices in the industry will try to undermine you and your valid critique simply because you are challenging the status quo.

You and I have as much of a say in the future of AI as Sam Altman does, since the technologies will impact us all and will potentially disproportionately impact people of minoritized identities in harmful ways. Right now, we’re in a fight over who gets to claim expertise and authority on matters of technology within society… so we really need to claim that space and hold our ground.


