Costs, Benefits and Efficiency: An Economic Perspective on the Proposed EU AI Act
-Sajjad Momin
Introduction
The rapid advancement of Artificial Intelligence (AI) has spurred global interest in regulatory frameworks to harness its potential while ensuring ethical and economic considerations. This analysis delves into the economic implications of the proposed EU AI Act, a significant step in regulating AI technologies within the European Union. The study employs a systematic approach to evaluate the Act's potential effects on economic efficiency, innovation, competition, and labour markets. By offering actionable insights, this study aims to contribute to the ongoing discourse surrounding the proposed EU AI Act and its multifaceted impact on economies, industries, and society at large.
Baseline Scenario
The EU, like every other jurisdiction in the world, currently lacks comprehensive regulation tailored specifically to AI systems and their associated risks. Existing rules under product safety, data protection, and sector-specific legislation provide only limited governance.
Major technology providers such as Microsoft, Amazon, Google, and IBM, along with numerous startups, are actively developing AI within the EU. The main economic benefits so far stem from efficiency gains in manufacturing, logistics, agriculture, healthcare, and financial services. Total global corporate investment in AI has also expanded rapidly, reaching approximately $92 billion in 2022.
However, public opinion reflects some scepticism about data privacy, job impacts, the lack of transparency in AI systems, and the absence of clear accountability.
Recent European initiatives, such as the European AI Strategy, have outlined loose ethical principles for AI development, but compliance is voluntary. The only binding EU instrument applicable here is the GDPR, which provides some controls around data processing and automated decision-making. Many have therefore called for tailored, mandatory regulation to govern the full lifecycle of AI systems, ensure accountability, and build public trust.
The current scenario lacks a systematic approach to identifying and mitigating AI risks across sectors. This creates uncertainty for providers, users, and society alike. The proposed AI Act aims to address these gaps through harmonised legislation focused on mandatory conformity assessments and proportional risk management for high-risk applications.
The Legislation
The objective of the proposed AI Act is to bolster Europe's global standing as a centre of AI excellence, ensuring that AI technologies deployed within Europe respect fundamental rights and European values, while harnessing AI's potential for industrial applications.
Central to the proposed AI Act is a classification system that gauges the potential risk an AI technology might pose to an individual’s well-being, safety, or fundamental rights. This framework comprises four risk tiers: unacceptable, high, limited, and minimal.
AI systems categorized as having limited or minimal risk, such as spam filters or video games, can be used with minimal requirements, mainly centered around transparency obligations. Conversely, AI systems categorized as posing an unacceptable risk, such as government social scoring or real-time biometric identification in public spaces, are almost entirely prohibited. High-risk AI systems, like autonomous vehicles and medical devices, are allowed but subject to rigorous testing, meticulous data quality documentation, and an accountability framework with human oversight.
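Purely as an illustration, the tier structure can be thought of as a lookup from system category to regulatory obligation. The Python sketch below is a simplification under that assumption: the tier assignments follow the examples above, and the obligation descriptions are informal paraphrases, not the Act's legal text.

```python
# Illustrative sketch only: the AI Act's four-tier risk taxonomy as a
# simple lookup. Tier assignments mirror the examples in the text;
# obligation strings are informal paraphrases, not legal language.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to testing, documentation, and human oversight"
    LIMITED = "permitted, subject mainly to transparency obligations"
    MINIMAL = "permitted with minimal requirements"

EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "real-time public biometric identification": RiskTier.UNACCEPTABLE,
    "autonomous vehicle": RiskTier.HIGH,
    "medical device": RiskTier.HIGH,
    "spam filter": RiskTier.LIMITED,
    "video game": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

In practice, classification under the Act turns on detailed annexes and a system's intended use, not on a fixed list of product categories.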
The legislation also sets out rules for general-purpose AI: versatile systems that can serve many functions and carry varying levels of risk, such as large language model generative AI systems like ChatGPT.
Kay Firth-Butterfield, Executive Director of the Centre for Trustworthy Technology, affiliated with the World Economic Forum’s Fourth Industrial Revolution Network, commended the EU’s effort in making AI systems future-ready and aligned with human aspirations.
Cost-Benefit Analysis
The European Union's proposed Artificial Intelligence Act aims to establish clear rules and accountability mechanisms for AI systems in order to protect fundamental rights, health, and safety. However, the regulation also imposes costs. According to estimates by the Center for Data Innovation, compliance costs imposed on businesses by the Act could total €31 billion between now and 2025, exceeding €10 billion annually by that point. This increased regulatory burden may disproportionately affect smaller companies with more limited resources.
These compliance costs stem from new requirements around risk assessment audits, detailed documentation, transparency obligations, and restrictions on certain types of AI uses. Administrative costs will also be incurred by public authorities to develop standards, provide oversight, and carry out market surveillance under the Act. By one estimate, these compliance and administrative burdens could reduce private sector investment in European AI development by 20% over the next few years, hampering innovation.
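As a rough consistency check, an annual compliance-cost path that ramps up to about €10 billion by 2025 does sum to roughly €31 billion over a five-year window. The arithmetic below is back-of-the-envelope only; the linear ramp and its starting value are assumptions for illustration, not part of the Center for Data Innovation's methodology.

```python
# Back-of-the-envelope check: a linear ramp of annual compliance costs
# that ends near EUR 10bn in 2025 and sums to ~EUR 31bn over five years.
# The ramp shape and starting value are assumptions for illustration.

years = list(range(2021, 2026))
start, end = 2.4, 10.0  # EUR billions (assumed endpoints)
step = (end - start) / (len(years) - 1)
annual = [start + i * step for i in range(len(years))]

for year, cost in zip(years, annual):
    print(f"{year}: EUR {cost:.1f}bn")
print(f"cumulative: EUR {sum(annual):.1f}bn")  # ~EUR 31bn
```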
However, these costs must be weighed against the Act's substantial expected benefits. Tighter regulation of high-risk AI applications will likely prevent privacy violations, algorithmic biases, and physical or psychological harms. Prohibiting certain manipulative and exploitative AI practices will protect vulnerable populations. Increased transparency and accountability will build public and consumer trust in AI systems. Reduced use of unsafe or unethical AI applications will limit negative externalities.
Enhanced safety and ethical requirements may also encourage responsible innovation in the long run. The Act could level the playing field between big tech firms and smaller startups if compliance overhead costs are proportionately less for the latter. It may accelerate research into creative technical solutions for trustworthy and rights-respecting AI. Standardizing auditing and documentation protocols could make costs predictable for businesses while supporting innovation.
Analysis Using the Kaldor-Hicks Efficiency Criterion
The EU AI Act is expected to generate significant benefits for EU citizens through enhanced privacy, security, and protection from potentially harmful AI applications. However, AI developers and companies, especially smaller firms, may face considerable compliance costs.
Applying the Kaldor-Hicks efficiency framework, we can ask whether those who benefit from the proposed AI Act could theoretically compensate the losers while still leaving a net welfare gain. Citizens and consumer groups are likely to place a high value on the Act's curbs on invasive data collection and unsafe AI systems. Their potential willingness to pay for these protections creates scope for compensating affected companies through transfers.
AI firms facing compliance costs are the potential losers under the Act. But compensation could come from passing on part of the citizens' welfare gains through mechanisms such as tax credits, subsidies, or lighter-touch requirements for smaller players, offsetting compliance overheads that would otherwise hamper innovation.
However, appropriate compensation mechanisms must be designed. Citizens could pay a small levy on AI purchases that is transferred to companies as relief, or governments could tax AI firms' revenues above a threshold to subsidise compliance by smaller firms. Either way, with around 500 million EU inhabitants, a compensation pool of €7.75 billion implies a willingness to pay of roughly €15.50 per person, which seems a reasonable price for adequate data protections.
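The per-person figure is simple division, and the underlying Kaldor-Hicks test is just a comparison of aggregate gains and losses. The sketch below reproduces that arithmetic, taking the €7.75 billion pool and the 500 million population from the text; everything else is generic.

```python
# The compensation arithmetic from the text: EUR 7.75bn spread across
# roughly 500 million EU inhabitants. Pool and population figures are
# taken from the discussion above.

def kaldor_hicks_improvement(total_benefits: float, total_costs: float) -> bool:
    """A policy passes the Kaldor-Hicks test if the winners could fully
    compensate the losers and still come out ahead."""
    return total_benefits > total_costs

eu_population = 500_000_000
compensation_pool_eur = 7.75e9

per_person = compensation_pool_eur / eu_population
print(f"implied willingness to pay: EUR {per_person:.2f} per person")  # EUR 15.50
```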
The Act’s constraints on high-risk applications will also reduce negative externalities imposed on society, like privacy violations, biases, and exploitative practices. The value of mitigating such externalities accrues to citizens, which further enhances their willingness to pay to compensate affected firms.
Policy Recommendations
To foster responsible and ethical AI practices, policymakers should prioritise support for small and medium-sized enterprises (SMEs) in adhering to stringent AI regulations. SMEs should receive guidance and resources that help them comply without stifling their creativity and capacity to innovate. Additionally, global collaboration is crucial for establishing consistent AI rules that uphold ethics and principles across borders, promoting the responsible adoption of AI on a global scale.
Ensuring the adaptability of AI regulations is essential. Regular feedback from diverse stakeholders, including industry and academia, should inform rule adjustments to keep pace with ever-evolving technology. Investing in research to comprehend AI’s long-term impact on society, employment, and the economy is pivotal in guiding policy adjustments based on valuable insights.
Flexibility remains a key principle. AI regulations should be designed to adapt in response to real-world experiences and technological advancements. Proportionality in conformity assessments is necessary to align oversight with risk levels, promoting accountability while minimizing compliance costs. Support for smaller companies through financial incentives can level the playing field, fostering competition and innovation. Lastly, a forward-looking strategy must address job displacement due to AI adoption, emphasizing the transition and retraining of the workforce to maintain Europe’s competitiveness and equality in the face of AI advancements.
Conclusion
In summary, the proposed AI Act by the EU reflects a pivotal stride toward cultivating a harmonious synergy between technological progress and societal values. By introducing a well-defined classification system for AI risks and implementing stringent regulations, the Act signifies Europe’s proactive commitment to responsible AI development. This legislative initiative, coupled with the establishment of the AI Governance Alliance, exemplifies a concerted global effort to ensure the ethical and transparent evolution of artificial intelligence. As the EU takes the lead, its comprehensive framework offers a blueprint for fostering innovation while safeguarding fundamental rights and safety in the rapidly advancing AI landscape.
The author is a student of NALSAR University of Law, Hyderabad.