Professor of Labour Law, University of Bristol
I am honoured to be invited as a discussant – and to contribute to these discussions on the AI Act Proposal.
I should start by confessing that I come to this at a two-fold disadvantage.
Firstly, I am no expert on AI or digitalisation – I do not have a great deal to add to the technical analysis offered by the experts speaking today. I expected to learn a great deal and I have.
Instead, my aim is to consider how these brilliant presentations could be placed in a wider context: especially in terms of the tension between economic and social objectives at play here – but also the relationship between ‘risks’ and ‘trust’ – as well as the relevance of the language of sustainability and just transitions.
My second disadvantage (if I can put it that way) is that I live and work in the UK – and so, post-Brexit, might seem less invested in the outcome of an EU proposal. Here, at least, I can say that, while any Regulation adopted may not have direct effect in the UK, the significance of the EU in regulating the field of AI is likely to be considerable.
It will be watched.
As Rosanna Fanni’s co-authored Study for the Commission explained, when considering other initiatives, most countries are only at a nascent stage of recognising the influence of AI.
In the UK, as her Study observes (see esp. p. 90 onwards), it was only in August 2019 that the UK Office for Artificial Intelligence published a ‘Guide [not a law] on using AI in the public sector’.
This followed the use of AI in ‘Risk-Based Verification’ (RBV), used by local public authorities to determine an individual’s eligibility for housing and council tax benefits (see pp. 46–47).
In the UK, there is now also (since 2020) Guidance on the AI Auditing Framework, a self-regulatory voluntary framework, but one which will be a source of reference for the enforcement of our data protection laws (still based, of course, on the General Data Protection Regulation (EU) 2016/679 (GDPR)).
Despite interest in AI as one of the ‘grand challenges’ identified in the UK’s Industrial Strategy, the UK, like many other countries, is some way off introducing laws expressly tackling the use of AI in an employment context, such as recruitment, promotion, selection for redundancies and so on.
In that sense, there is likely to be a ‘role on the global stage’ for the EU as a social actor in the field of AI – stressed in the Porto Declaration of 8 May 2021.
We find similar assertions of a desire to act as a global leader in social policy in the 2021 European Pillar of Social Rights Action Plan (pp. 38–39).
It could be that the level playing field provisions in the EU-UK TCA relating to social protection prompt the UK to pay attention to what the EU does. [See, for example, Article 411 regarding the conditions for rebalancing where significant divergences impacting trade and investment arise.]
But actually I wonder whether this EU initiative is more likely to attract attention by virtue of its strategic positioning as a smart regulatory move – one which the UK may wish to imitate.
I suspect there will be ‘mimetic pressure’, which occurs when States seek solutions to an uncertain situation by imitating those adopted by others, as a way to establish legitimacy for their actions.
In organisational theory, it is well understood that in an uncertain environment, strategies are often selected with reference to known alternatives whatever their limitations – promoting homogeneity at the expense of effectiveness, efficiency and indeed other values.
(See L. Hayes, T. Novitz and P. Herzfeld Olsson, ‘Migrant workers and collective bargaining: institutional isomorphism and legitimacy in a resocialized Europe’ in N. Countouris and M. Freedland (eds), Resocialising Europe in a Time of Crisis (Cambridge University Press, 2013), pp. 448–465.)
There may also be, under the Johnson administration, an attempt to undercut any regulatory costs; China and the US do not have a monopoly on that.
Let’s see what the upcoming National Artificial Intelligence Strategy holds for the UK. But I do worry that this EU initiative sets a precedent for treating employment issues as one among many AI issues to be lightly regulated. I also suspect that the failure to consider issues such as how transparency juxtaposes with trade secrets (raised in the first session) is the kind of fudge that may appeal to the UK’s current Conservative government.
As you know, the context for the AI initiative (entailing a coordinated plan alongside a regulatory framework) is the green and digital transitions which the European Commission stressed in its brief Communication on ‘A Strong Social Europe for Just Transitions’, issued on 14 January 2020 before the pandemic struck (COM(2020) 14 final, 14.1.2020).
We can see echoes of this reference to digital transitions in the Commission Communication (COM(2021) 205 final) which accompanies the actual Commission Proposal (COM(2021) 206 final).
Here the Commission Communication is explicit about the potential economic (and social) advantages of the rapid technological development of AI and states a desire to ‘harness the many opportunities and address the challenges’ (p. 1).
The ‘global leadership’ as described at pp. 7 – 9 of the Commission Communication seems to be concerned with both:
- First – achieving a global competitive advantage (‘to provide European industry with a competitive edge’ at p. 9) – and this is explicitly at a time when there is a need to boost a post-Covid economic recovery and
- Second – establishing an example of what the terms of fair competition should be. In this way, and I quote – ‘EU action can facilitate the adoption of EU standards for trustworthy AI globally and ensure that the development, uptake and dissemination of AI is sustainable’… (p. 9)
This is both about EU competitive success and ethical leadership on the global stage.
An interesting question arising from all the presentations we have heard today is whether these competitive and ethical objectives can so easily be reconciled.
We are told (and this point was made at the outset by Guido Smorto) that the proposed legal framework on AI must ‘intervene only where this is strictly needed and in a way that minimises the burden for economic operators’ (Commission Communication, p. 6).
This may tell us something about where any balance between the economic and the social is expected to lie. So too may the narrow remit of the exceptions to which Guido pointed, and the uncertainty as to what counts as an acceptable risk and what is proportionate. And, as Vincenzo Pietrogiovanni said at the outset, ‘what is NOT forbidden and therefore left unregulated is HUGE’.
Of course, it is part of the language of sustainability that economic and social goals can be reconciled.
This is after all a facet of the 2030 Agenda adopted by a UNGA Resolution in 2015 setting out the 17 Sustainable Development Goals (or SDGs) which are described as ‘integrated and indivisible’.
If they are reconciled – and ‘risks’ of the use of AI can be tackled appropriately – then there can be ‘trust’, which facilitates economic prosperity and social peace. (Possibly some environmental benefits as well…)
Much then turns on the methods by which this reconciliation takes place, which have been the subject of the analysis provided by our speakers today.
It is interesting that the chief regulatory tool is one of risk identification, grading and management. That approach and its dynamics were set out brilliantly in Guido Smorto’s presentation. There are, of course, to be measures to promote transparency, and ex ante, ongoing (iterative) and post facto risk assessments, including the potential for ‘human oversight’ (which, as Valerio de Stefano has pointed out, seems unprotected). The Commission’s Communication COM(2021) 205 does seem to envisage significant funding for investment and perhaps enforcement, although whether this will meet the needs that Frank Pasquale has highlighted is less clear. Less space has been made for the specifics of collective worker representation (and its protection) in these processes.
As Aude Cefaliello has pointed out when explaining the links between the proposed AI Act and OSH, we have become very familiar with risk regulation as labour lawyers analysing Covid-19 safety and health measures (managed under the Framework Directive 89/391/EEC on the health and safety of workers, to which Aude referred).
As she notes, the design is different, making their interaction complex. This is also, I think, an important comparison to make, especially given the failure of risk assessment processes and procedures in Covid times.
For those interested, Peter Andersson and I have written on this as part of a Swedish Research Council project, and our article on risk assessment should appear in a special issue of the Comparative Labour and Social Security Law Journal later this year. One of the failures that we found was the presumption of safety (the ‘trust’) that followed risk assessment processes – for example, in the UK, the label ‘Covid-secure’, which employers could use after a risk assessment, created a false sense of confidence in the often formalistic measures taken by those employers. Trade union voice challenging the assessment of safety was disregarded – at considerable cost to lives and health.
One would not want the same failings to apply in an AI context where, as Valerio has said, there are considerable risks associated with employment. These are of course reflected in the European Commission’s acknowledgement of those high risks in the Proposed Regulation through Annex III – linked to Article 6(2) – and Aude’s point that information and consultation (I&C) needs to be built in through a complementary reading with OSH. But is that sufficient?
That seems unlikely, given the ‘product safety’ paradigm on which Michael Veale has spoken and the potential trade-off between the respective obligations that arise for providers and users of AI systems, to which Miriam Kullmann has pointed. This gives scope for the regulatory arbitrage which Frank Pasquale identified. It poses problems for OSH too.
Overall, I share the concerns that Valerio has voiced. I too worry about the legal base for the measure and its implications. While certain fundamental labour rights may see forms of protection – and the inclusion of platform workers (stressed in para. 26 of the preamble) is welcome – I worry about the scope for social dialogue here.
Social dialogue was so recently stressed as vital to a fair digital transition – as a facet of sustainability – in the Porto Declaration, but does not seem to be given sufficient space in this Proposal.
And, as Valerio has said, this obviously has implications for the reconciliation of the Regulation with existing information and consultation obligations, but also for the potential scope for collective bargaining on such issues at national level.
I’ve argued elsewhere that sustainability entails – in ideal terms:
- 1st, durable (longer term) policy solutions
- 2nd, a holistic treatment of economic, social and environmental objectives at interlinked global, regional and national levels and
- 3rd, a dynamic participatory inclusive approach (as the route to achieve these).
On the basis of the presentations given today, I am not yet convinced that the proposed Regulation laying down harmonised rules on artificial intelligence (the AI Act) will actually be sustainable.