To give AI-focused women academics and others their deserved – and overdue – time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year, highlighting key work that often goes unrecognized. Read more profiles here.
Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to the release of GPT-2, a predecessor of ChatGPT. After serving as an AI policy manager at Zillow for about a year, she joined Hugging Face as head of global policy. Her responsibilities there range from building and leading the company's AI policy globally to conducting socio-technical research.
Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electronics engineering, on AI issues, and is a recognized AI expert at the intergovernmental Organisation for Economic Co-operation and Development (OECD).
Irene Solaiman, Head of Global Policy at Hugging Face
Briefly, how did you get your start in AI? What attracted you to this field?
Completely non-linear career paths are common in AI. My budding interest began the same way many teens with awkward social skills discover their passion: through sci-fi media. I originally studied human rights policy and then took computer science courses, because I saw AI as a means to work on human rights and build a better future. Being able to conduct technical research and lead policy in an area with so many unanswered questions and unexplored paths is what keeps my work exciting.
What work (in the AI field) are you most proud of?
I am most proud when my expertise resonates with people across the AI field, especially my writing on release considerations in the complex landscape of AI system releases and openness. Seeing my paper on an AI release gradient prompt rapid discussion among scientists and appear in government reports is affirming – and a good sign that I'm working in the right direction! Personally, some of the work I'm most motivated by is on cultural value alignment, which is dedicated to ensuring that systems work best for the cultures in which they're deployed. With my incredible co-author and now dear friend, Christy Dennison, working on a Process for Adapting Language Models to Society was a whole-of-heart (and many-debugging-hours) project that has shaped safety and alignment work today.
How do you deal with the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I have found, and am still finding, my people – from working with incredible company leadership who care deeply about the same issues I prioritize, to great research co-authors with whom I can start every working session with a mini therapy session. Affinity groups are extremely helpful for building community and sharing tips. It is important to highlight intersectionality here; my communities of Muslim and BIPOC researchers are continually inspiring.
What advice would you give to women wanting to enter the AI field?
Create a support group whose success is your success. In youth terms, I believe this is called a "girl's girl." The same women and colleagues I entered this field with are my favorite coffee dates and late-night panicked calls ahead of deadlines. One of the best pieces of career advice I've ever read was from Arvind Narayanan on the platform formerly known as Twitter, establishing the "Liam Neeson principle" of not being the smartest, but having a particular set of skills.
What are some of the most pressing issues facing AI during its development?
The most pressing issues themselves evolve, so the meta answer is: international coordination for safer systems for all peoples. People who use and are affected by these systems, even within the same country, have differing preferences and ideas about what is safest for them. And the issues that arise will depend not only on how AI evolves, but also on the environment into which systems are deployed; safety priorities and our definitions of capability vary regionally, such that more digitalized economies face greater risk of cyberattacks on critical infrastructure.
What issues should AI users be aware of?
Technical solutions rarely address risks and harms holistically. While there are steps users can take to increase their AI literacy, it's important for them to invest in a multitude of safeguards as risks evolve. For example, I'm excited about more research into watermarking as a technical tool, and we also need coordinated policymaker guidance on the distribution of generated content, especially on social media platforms.
What’s the best way to create AI responsibly?
With constant reevaluation, both with the people affected and of our methods for evaluating and implementing safety techniques. Both beneficial applications and potential harms continue to evolve and require iterative feedback. The means by which we improve AI safety should be examined collectively as a field. The most popular evaluations for models in 2024 are much more robust than those I was running in 2019. Today, I'm more optimistic about technical evaluations than about red-teaming. I find human evaluations extremely useful, but as more evidence emerges of the mental burden and disparate costs of human feedback, I'm increasingly optimistic about standardizing evaluations.
How can investors better push for responsible AI?
They already are! I am pleased to see many investors and venture capital firms actively engaging in the safety and policy conversation, including through open letters and congressional testimony. I'm eager to learn more from investors' expertise about what drives smaller businesses across sectors, especially as we see greater use of AI in fields outside the core tech industries.