To give AI-focused female academics and others their deserved – and overdue – time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. As the AI boom continues, we’ll be publishing several pieces throughout the year, highlighting key work that often goes unrecognized. Read more profiles here.
Rashida Richardson is Senior Counsel at Mastercard, where her scope covers legal issues related to privacy and data security in addition to AI.
Formerly the director of policy research at the AI Now Institute, a research institute studying the social implications of AI, and a senior policy advisor for data and democracy in the White House Office of Science and Technology Policy, Richardson has been an adjunct professor of law and political science at Northeastern University since 2021. There, she specializes in race and emerging technologies.
Rashida Richardson, Senior Counsel, AI at Mastercard
Briefly, how did you get your start in AI? What attracted you to this field?
My background is in civil rights law, where I worked on a wide range of issues including privacy, surveillance, school desegregation, fair housing, and criminal justice reform. While working on these issues, I witnessed the early stages of government adoption and experimentation with AI-based technologies. In some cases, the risks and concerns were clear, and I helped lead several technology policy efforts in New York State and City to create greater oversight, evaluation, or other safeguards. In other cases, I was naturally skeptical of claims about the benefits or efficacy of AI-related solutions, especially those marketed to solve or mitigate structural issues like school desegregation or fair housing.
My prior experience also made me very aware of existing policy and regulatory shortcomings. I immediately saw that there were few people with my background and experience in the AI field or offering the analysis and potential interventions I was developing in my policy advocacy and academic work. So I realized that this is an area and place where I can contribute meaningfully and build on my past experience in unique ways.
I decided to focus both my legal practice and academic work on policy and legal issues related to AI, specifically their development and use.
What work in the AI field are you most proud of?
I am glad that this issue is finally getting more attention from all stakeholders, especially policymakers. There is a long history in the United States of legislation failing to capture or adequately address technology policy issues, and five or six years ago it felt like that might be the fate of AI. I remember my early engagements with policymakers, in formal settings like US Senate hearings as well as academic forums, and most policymakers treated the issue as arcane or as something that did not require urgent action despite the rapid adoption of AI across all sectors. Yet over the past year or so there has been a significant shift: AI is now a consistent feature of public discussion, and policymakers better understand the risks and the need for informed action. I also think that stakeholders across all sectors, including industry, recognize that AI creates unique benefits and risks that cannot be resolved through traditional practices, so there is greater acceptance – or at least appreciation – of policy interventions.
How do you deal with the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
As a Black woman, I’m used to being in the minority in many spaces, and while the AI and tech industries are extremely homogeneous fields, they are not novel or different from other fields of immense power and wealth, like finance and the legal profession. So I think my previous work and life experience helped prepare me for this industry, because I’m very aware of the preconceptions I may have to overcome and the challenging dynamics I will likely face. I rely on my experience to navigate, because I have a unique background and perspective, having worked on AI across all sectors – academia, industry, government, and civil society.
What issues should AI users be aware of?
AI users should be aware of two key issues: (1) the actual capabilities and limitations of various AI applications and models, and (2) how current and forthcoming laws may conflict with or resolve certain concerns, since there remains great uncertainty about how the law applies to AI use.
On the first point, there is an imbalance in public discussion and understanding between the claimed benefits and possibilities of AI applications and their actual capabilities and limitations. This problem is compounded by the fact that AI users may not appreciate the differences between AI applications and models. Public awareness of AI grew with the release of ChatGPT and other commercially available generative AI systems, but those AI models are different from other types of AI models that consumers have engaged with for years, such as recommendation systems. When the conversation about AI gets muddled – where the technology is treated as monolithic – it distorts public understanding of what each type of application or model can actually do, and of the risks associated with their limitations or shortcomings.
On the second point, law and policy regarding AI development and use is evolving. Although a variety of laws (e.g., civil rights, consumer protection, competition, fair lending) already apply to the use of AI, we are in the early stages of seeing how these laws will be enforced and interpreted. We are also in the early stages of policy development specifically tailored to AI – but what I have seen in both legal practice and my research is that there are areas left unresolved by this legal patchwork, and they will only be resolved through more litigation involving AI development and use. In general, I don’t think there is a very good understanding of the current state of law and AI, and legal uncertainty around key issues such as liability may mean that some risks and disputes remain unsettled until years of litigation between businesses, or between regulators and companies, produce legal precedent that provides some clarity.
What’s the best way to create AI responsibly?
The challenge in building AI responsibly is that many of the underlying pillars of responsible AI, such as fairness and safety, are based on normative values – with no shared definition or understanding of these concepts. So one could act responsibly and still cause harm, or one could act maliciously and exploit the absence of shared norms to claim good-faith action. Until there is a global standard or some shared framework for building responsible AI, the best way to pursue this goal is to have clear principles, policies, guidance, and standards for responsible AI development and use that are enforced through internal oversight, benchmarking, and other governance practices.
How can investors better push for responsible AI?
Investors could do a better job of defining, or at least clarifying, what constitutes responsible AI development or use, and taking action when an AI actor’s practices do not align. Currently, “responsible” and “trustworthy” AI are effectively marketing terms because there are no clear standards for evaluating AI actor practices. While some emerging regulation, like the EU AI Act, will establish governance and oversight requirements, there are still areas where investors can encourage AI actors to develop better practices that center human values or social well-being. However, if investors are unwilling to take action when there is evidence of misalignment or bad actors, there will be little incentive to adjust behavior or practices.