Reluctance to regulate AI in a meaningful manner is a “knee-jerk reaction by companies and investors to evade oversight. Regulation can actually spur a more competitive market for businesses to operate in.”
This is the view of Dr. Brandie M. Nonnecke, Director of the Center for Information Technology Research in the Interest of Society (CITRIS) Policy Lab at UC Berkeley and Associate Adjunct Professor at the Goldman School of Public Policy.
Nonnecke is the guest on the latest episode of “Understanding IP Matters,” the popular podcast with more than 11,000 downloads.
Nonnecke is faculty co-director of the Berkeley Center for Law and Technology at Berkeley Law, where she leads the Project on Artificial Intelligence, Platforms, and Society. She also co-directs the UC Berkeley AI Policy Hub, an interdisciplinary initiative training researchers to develop effective AI governance and policy frameworks. In addition, she hosts TecHype, a monthly podcast on AI and regulation.
Among the topics Nonnecke and UIPM host Bruce Berman discuss:
- The status of AI regulation: “The European Union passed the most comprehensive AI-related legislation to date, the EU AI Act, which is now in force… the United States is a whole other story.”
- Why U.S. legislators deserve credit on AI: “U.S. legislators have put a lot of effort into learning more about [AI] technology and especially having their staffers learn more… I myself have hosted briefings before members of Congress, essentially debunking the misunderstandings around what the technology can actually do.”
For the full episode, visit IPWatchdog to listen there, or download it to listen later on the platform of your choice.
- Even though we “do not have a comprehensive law governing AI, let’s not forget that we have established laws that apply. If you develop an AI system that causes harm, as a developer, you could be sued under product liability tort, or if you develop an AI tool for reviewing resumes and it’s discriminatory, well, you can be sued under non-discrimination laws.”
- How some countries may be favoring open-source AI models, as opposed to proprietary closed models, because “they won’t be able to keep pace with the United States. So I’m definitely seeing European countries are pushing more for open.”
Responsible AI
“What’s most important about responsible AI development,” Nonnecke tells Berman, “is taking stock of this careful balance between the benefits and risks and ensuring that we’re putting in place appropriate either technical interventions or governance interventions to ensure that the balance leans in favor of the benefits.”
High Valuations
Nonnecke believes that valuations for AI companies are “wildly high,” and that “the bubble will burst at some point, just like we had the dot-com boom. I guarantee it.”
Listen to or download “AI needs the right balance of innovation and regulation to thrive.”
Image source: CIPU; understandingip.org
