AI Regulation and Human Rights: Building Trust through Multi-Stakeholder Collaboration
As digital technologies, especially AI, become more powerful and pervasive, safeguarding human rights online is becoming increasingly crucial. The EU Delegation, in collaboration with the Office of the High Commissioner for Human Rights (OHCHR), the Global Network Initiative (GNI), and Humane Intelligence, organized a pivotal event to address this issue. Over 70 experts from international organizations, diplomatic missions, private tech companies, and NGOs convened to explore the intersection of human rights and technology.
The event underscored the importance of establishing regulatory frameworks that not only address the potential harms of AI but also harness its capabilities to empower individuals. Ambassador Lotte Knudsen, Head of the EU Delegation, emphasized, “It’s through this multi-stakeholder approach that we can most effectively not just address the potential harm of these new technologies, but also make sure that they truly empower individuals. We heard today how important it is to establish AI guardrails, and that we don’t have to choose between safety and innovation. They should go hand in hand! Only when society trusts AI and other new technologies, can these be scaled up.”
The EU’s Digital Services Act (DSA) and the newly adopted EU AI Act are at the forefront of these regulatory efforts. The DSA focuses on risk assessment, mitigation, auditing, and data transparency to hold large digital services accountable while protecting fundamental rights. The EU AI Act, the world’s first comprehensive legal framework on AI, aims to ensure that AI systems respect fundamental rights, safety, and ethical principles by addressing the risks posed by powerful AI models.
Similar regulatory initiatives are emerging globally. Latin American countries are preparing their own AI regulations, and the African Union Commission is actively working on AI governance. These efforts are expected to build on voluntary practices like transparency reporting, human rights risk assessments, and auditing developed under the UN Guiding Principles on Business and Human Rights (UNGPs).
However, the path to effective implementation of these regulations is fraught with challenges. There is a need for detailed guidance on how companies and assessors can implement risk assessments and auditing mechanisms aligned with the UNGPs. Additionally, meaningful engagement from civil society and academia is crucial for these processes to be robust and comprehensive.
The UN Human Rights B-Tech project, in collaboration with BSR, GNI, and Shift, has developed several papers to guide approaches to risk management related to generative AI. These documents emphasize the need for business and human rights practices to inform AI risk assessments, especially in the context of regulations like the DSA and the EU AI Act. There is also a pressing need to engage the technical community on these implications.
The event delved into key questions surrounding AI and human rights, including:
- What are the key global trends regarding regulation requiring tech companies to assess human rights risks?
- How can stakeholders, including engineers, encourage comparable AI risk assessment and auditing benchmarks?
- What might appropriate methodologies for AI auditing look like, and what data is needed to perform accountable AI audits?
- What is the role of enforcing/supervisory mechanisms?
- How can civil society and academia most meaningfully engage around these processes?
- How can AI risk assessments and audits be used by companies and external stakeholders to ensure accountability and catalyze change?
Notable speakers at the event included Juha Heikkila, Adviser for AI in the European Commission Directorate-General for Communications Networks, Content and Technology (CNECT); Rumman Chowdhury, CEO of Humane Intelligence; Lene Wendland, Chief of Business and Human Rights at the UN Human Rights Office (OHCHR); Mariana Valente, Deputy Director of Internet Lab Brazil and Professor of Law at the University of St. Gallen; Alex Walden, Global Head of Human Rights at Google; and Jason Pielemeier, Executive Director of the Global Network Initiative.
Source: EEAS
Jun 3, 2024 by CPI