To give AI-focused female academics and others their deserved – and overdue – time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. As the AI boom continues, we’ll be publishing several pieces throughout the year, highlighting key work that often goes unrecognized. Read more profiles here.
As an AI expert at the Organization for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI), an international initiative to promote responsible AI use, Tiedrich develops approaches to evaluating and managing AI risk that align law and science with policy and practice. She has served on the faculty of Duke University, advised several companies, and was a longtime partner at the law firm Covington & Burling LLP.
Tiedrich, a technology transactions and intellectual property attorney, also served on the Biden campaign policy committee and is registered to practice before the United States Patent and Trademark Office (USPTO).
Lee Tiedrich, Global Partnership on AI
Briefly, how did you get your start in AI? What attracted you to this field?
I have been working at the intersection of technology, law, and policy for decades, starting with cellular, then the internet and e-commerce, through today. This prepared me to help organizations optimize the benefits of emerging technologies and mitigate their risks in a rapidly changing and complex legal environment. I had been working on AI matters for years, long before AI was in the spotlight, starting when I was a partner at Covington & Burling LLP. In 2018, as business AI use and legal challenges grew, I became co-chair of Covington’s global and multidisciplinary Artificial Intelligence Initiative and focused much of my practice on AI, including AI governance, compliance, transactions, and government affairs.
What work (in the AI field) are you most proud of?
Global and multidisciplinary solutions are needed to unlock the benefits of AI and mitigate its risks. I am proud of our broad work that unites diverse disciplines, geographies, and cultures to help solve these serious challenges. This work began at Covington, where I collaborated with clients’ lawyers, engineers, and business teams on AI governance and other matters. More recently, as a member of both the Organization for Economic Co-operation and Development (OECD) AI and the Global Partnership on AI (GPAI) expert groups, I have been working on a series of multidisciplinary AI projects, including AI governance, responsible AI data and model sharing, and how to address climate, intellectual property, and privacy matters in an AI-driven world. I co-lead both the GPAI Intellectual Property Committee and the Responsible AI Strategy for the Environment (RAISE) Committee. My multidisciplinary work also extends to Duke, where I designed and taught a course that brought together graduate students from different programs to work on real-world responsible technology matters with the OECD, corporations, and others. It is deeply gratifying to help prepare the next generation of AI leaders to solve multidisciplinary AI challenges.
How do you deal with the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I have worked in male-dominated fields for most of my life, starting as a Duke undergraduate, where I was one of the few female electrical engineering students. I was also the 22nd woman elected to the Covington partnership, and my practice focused on technology.
Advancing in male-dominated industries starts with creating great, innovative work and promoting it with confidence. This increases demand for your work and generally leads to more opportunities. Women should also focus on building good relationships within the AI ecosystem, which helps in securing important advisors and sponsors as well as clients. I also encourage women to use their networks to actively pursue opportunities to expand their knowledge, profile, and experience, which may include participating in industry associations and other activities.
Finally, I urge women to invest in themselves. There are many resources and networks that can help women grow and advance in AI and other industries. Women must set goals and identify and use resources that can help them achieve these goals.
What advice would you give to women wanting to enter the AI field?
There are plenty of opportunities in the AI field for engineers, data scientists, lawyers, economists, and business and government affairs experts. I encourage women to find and pursue an aspect of the AI field they are passionate about. People often excel more when they work on matters they care about.
Women should also invest in developing and promoting their expertise. This may include joining professional associations, attending networking events, writing articles, giving public speeches, or obtaining continuing legal education. Given the wide range of innovative and challenging issues AI presents, there are many opportunities for young professionals to become experts quickly, and women should actively take advantage of these opportunities to build expertise and a strong professional network.
What are some of the most pressing issues facing AI during its development?
AI holds great promise for advancing global prosperity, security, and social well-being, including helping to address climate change and achieve the United Nations Sustainable Development Goals. However, if AI is not developed or used properly, it may introduce safety and other risks, including risks to individuals and the environment. Society faces the huge challenge of developing frameworks that unlock the benefits of AI and mitigate the risks. This requires multidisciplinary collaboration, as laws and policies need to take into account relevant technologies as well as market and social realities. Since technology transcends borders, international coordination is also important. Standards and other tools can help advance international harmonization, particularly when legal frameworks differ across jurisdictions.
What issues should AI users be aware of?
I recently published a piece with the OECD calling for a global AI learning campaign. Users urgently need to be aware of the benefits and risks of the AI applications they wish to use. This knowledge will empower them to make better decisions about whether and how to use AI applications, including how to mitigate risks.
Additionally, AI users should be aware that AI has become increasingly regulated and litigated. Government AI enforcement is also expanding, and AI users may be liable for harms caused by the AI systems they deploy, including systems provided by third-party vendors. To minimize potential liability and other risks, AI users should establish proactive AI governance and compliance programs to manage their AI deployments. They should also conduct due diligence on third-party AI systems before agreeing to use them.
What’s the best way to create AI responsibly?
Creating and deploying AI responsibly requires several key steps. It starts with publicly adopting and upholding good responsible AI values, such as those embodied in the OECD AI Principles, to serve as a North Star. Given the complexities of AI, it is also necessary to develop and implement AI governance frameworks that apply across the AI system lifecycle and foster multidisciplinary collaboration among technical, legal, business, sustainability, and other experts. The governance framework should incorporate the NIST AI Risk Management Framework and other important guidance, including measures to ensure compliance with applicable laws. Because the AI legal and technology landscape changes rapidly, governance frameworks must enable the organization to respond deftly to new developments.
How can investors better push for responsible AI?
Investors generally have several ways to pursue responsible AI in their portfolio companies. For starters, they should adopt responsible AI as an investment priority. Besides being the right thing to do, it’s also good for business: demand for responsible AI is increasing in the market, which should increase portfolio company profitability. Furthermore, in our increasingly regulated and litigated AI world, responsible AI practices should minimize the risk of litigation and the reputational damage that can result from poorly designed AI.
Investors can also advance responsible AI by exercising oversight through their corporate board appointments, as boards are increasingly expanding their oversight of technology matters. Investors should also consider structuring investments to include other monitoring mechanisms.
Additionally, even if not addressed in investment agreements, investors can introduce portfolio companies to potential responsible AI appointees or advisors and encourage and support their participation in the ever-growing responsible AI ecosystem.