Mutale Nkonde’s nonprofit is working to make AI less biased


To give AI-focused female academics and others their deserved – and overdue – time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. As the AI boom continues, we’ll be publishing several pieces throughout the year, highlighting key work that often goes unrecognized. Read more profiles here.

Mutale Nkonde is the founding CEO of the nonprofit AI for the People (AFP), which seeks to increase the number of Black voices in technology. Previously, she helped introduce the Algorithmic Accountability Act and the DEEPFAKES Accountability Act in the US House of Representatives, in addition to the No Biometric Barriers to Housing Act. She is currently a Visiting Policy Fellow at the Oxford Internet Institute.

Briefly, how did you get your start in AI? What attracted you to this field?

When a friend of mine posted that Google Photos had labeled two Black people as gorillas in 2015, I got curious about how social media works. I was involved in many “Black in tech” circles, and we were outraged, but it wasn’t until the publication of “Weapons of Math Destruction” in 2016 that I understood this was due to algorithmic bias. That inspired me to start applying for fellowships where I could study the issue further, and I ended up co-authoring a report called “Advancing Racial Literacy in Tech,” which was published in 2019. This was noticed by people at the MacArthur Foundation and launched the current phase of my career.

I was attracted to questions about racism and technology because they seemed under-researched and counterintuitive. I like to do things that other people don’t, so it seemed like a lot of fun to learn more and spread this knowledge within Silicon Valley. I’ve since started a nonprofit, AI for the People, which focuses on advocating for policies and practices to reduce the expression of algorithmic bias.

What work (in the AI field) are you most proud of?

I’m really proud to be a leading advocate of the Algorithmic Accountability Act, which was first introduced in the House of Representatives in 2019. It established AI for the People as a leading thinker on how to develop protocols for the design, deployment, and governance of AI systems that comply with local non-discrimination laws. This has led to us being included in the Schumer AI Insight Forums, serving as part of an advisory group for various federal agencies, and some exciting upcoming work on the Hill.

How do you deal with the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

In fact, I have more problems with academic gatekeepers. Most of the people I work with at tech companies have been charged with developing systems for use by Black and other non-white populations, so they are much easier to work with, mainly because I am acting as an outside expert who can either validate or challenge existing practices.

What advice would you give to women wanting to enter the AI field?

Find a niche and then become one of the best in the world at it. I had two things that helped me build credibility. The first was that I was advocating for policies to reduce algorithmic bias while people in academia were just beginning to discuss the issue. This gave me a first-mover advantage in the “solution space” and made AI for the People an authority on the Hill five years before the executive order. The second thing I would say is to look at your shortcomings and address them. AI for the People is four years old, and I’m pursuing academic credentials to ensure I’m not forced out of the position of thought leader. I can’t wait to graduate from Columbia in May and hope to continue research in this area.

What are some of the most pressing issues facing AI during its development?

I have been thinking deeply about strategies that can be adopted to include more Black people and other people of color in the building, testing, and annotation of foundation models. Technologies are only as good as their training data, so how can we create inclusive datasets at a time when DEI is under attack, Black venture funds are being sued for targeting Black and female founders, and Black academics are being publicly attacked? Who will do this work in the industry?

What issues should AI users be aware of?

I think we need to think about AI development as a geopolitical issue and about how the United States can become a leader in truly scalable AI by creating products that have high efficacy rates for people in every demographic group. China is the only other large-scale AI producer, but it is building products for a largely homogeneous population, even though it has a large footprint in Africa. The US tech sector could dominate that market if it invested aggressively in the development of anti-bias technologies.

What’s the best way to create AI responsibly?

A multi-pronged approach is needed, but one thing to consider is pursuing research questions that focus on marginalized people. The easiest way to do this is to pay attention to cultural trends and then consider how those trends affect technological development. For example, ask questions like: how do we design scalable biometric technologies for a society in which more people are identifying as trans or nonbinary?

How can investors better push for responsible AI?

Investors should look at demographic trends and then ask themselves: will these companies be able to sell to a population that is increasingly Black and brown because of falling birth rates in European populations around the world? This should prompt them to ask questions about algorithmic bias during the due-diligence process, as it will increasingly become an issue for consumers.

At a time when AI systems are performing low-risk, labor-saving tasks, much work remains to be done on reskilling our workforce. How can we ensure that people living on the margins of our society are included in these programs? What insights can they give us about how AI systems do and don’t work, and how can we use these insights to ensure that AI really works for people?
