AI in education

The AI revolution is underway. It will affect everyone. Policymakers must make the right decisions now to safeguard our future.

The provisional agreement on the European Union’s Artificial Intelligence Act (AI Act) by the European Parliament and Council on 9 December 2023 triggered varied reactions – from governments expressing an eagerness to replicate the model to big tech firms expressing their concerns about the AI Act’s potential to limit innovation in the European Union.

Under the new legislation, AI systems are assessed and classified according to the risks they pose to their users’ lives: unacceptable, high, or low risk.

Education, vocational training, and employment are considered high-risk areas. But what does this mean in concrete terms? To find out, we had the privilege of interviewing Dragoş Tudorache, a Member of the European Parliament who was heavily involved in shaping the AI Act.

AI is known for its rapid evolution. New algorithms, models, and applications are constantly developed. Is it possible to regulate something that evolves so quickly?

We tried to develop a future-proof AI Act. First, the text was written in a technology-neutral manner, with no direct link to any concrete technology or tool. As AI evolves at an extraordinary pace, a technology-neutral AI Act will remain relevant regardless of the size or complexity of future AI algorithms. Second, even though the text is technology-neutral, parts of it will still have to evolve along with the technology, so the future AI regulator will be given a specific mandate to adapt the AI Act accordingly.

Who contributed to developing the AI Act? 

An extensive consultation process was carried out, involving the European Commission, various stakeholders, and the private sector. We basically listened to everyone with a stake: not only industry, but also civil society, academia, trade unions, and all sorts of associations.

What do you tell big tech representatives who argue that the AI Act could limit innovation?

Well, they will have to hire people to ensure that their tools and technologies comply with EU regulation, and I’m certain they have the resources to do that. They are concerned that we are breaking some of their business models. But what we are doing is safeguarding the privacy of EU citizens, who must know that interacting with AI-powered tools will not have unexpected consequences for them.

AI-powered tools are already used in recruitment. What does the AI Act foresee in this field?

Recruitment is classified as high-risk in the AI Act. That doesn’t mean AI is banned; it can be used, but with guarantees of transparency, data governance, accuracy, and so on. AI-driven recruitment must be free of any type of bias or discrimination. AI is going to be used ever more widely in the labour market, so it needs extra safeguards. In practice, this means that companies developing AI recruitment tools will have to undergo a thorough compliance check. In fact, we’ve included in the AI Act the condition that a human being will always oversee the process: no decision will be taken without human control.

The ETF works on education and training. What will be the impact of the AI Act on education?

Two parts of the AI Act are relevant to the education sector. The first is a prohibition. For example, it is prohibited to use emotion recognition in educational environments. Educational institutions, whether schools, universities, or kindergartens, cannot buy or deploy AI-powered tools that use emotion recognition, as these are considered too intrusive on pupils’ privacy. There is one exception: emotion recognition used in a medical context. This technology has proven very effective for children with special needs, such as learners with autism.

The second part covers AI-powered tools that can bring benefits to education; their use should not be prohibited, but we need to be aware of the risks. So, if an education ministry decides to use an AI algorithm to allocate children to schools, and therefore allows an AI-powered system to make decisions that directly affect pupils’ lives, its compliance needs to be checked.

In terms of education provision, AI is going to become an essential tool. I believe it is going to revolutionise the way education is carried out, with good and bad consequences. And our societies must have a proper debate about it.

Do you think that policymakers are aware of the revolutionary impact of AI on education?

I don't think there is enough awareness. When we started working on the AI legislation a couple of years ago, you could literally count on the fingers of one hand the policymakers who were even remotely interested. When I said, at the beginning of 2019, that I wanted to make AI regulation my priority, people would look at me and say: “Why would you do that? Who cares about AI? It's so technical.” Then, only a few years later, most people realised what an enormous impact AI would have on our societies.

But I think that policymakers still need to fully understand the consequences of AI and to realise what they must do. Having the AI Act is not enough; it addresses the big concerns, the big risks, but the reality is that AI is going to be everywhere. And we must start talking about it: to our societies, our education systems, our labour market players, trade unions, employers, everyone. AI is going to be part of almost everything we do. And for education systems, which tend to be rather conservative in accepting or dealing with change rapidly, this is even more important. Children are already interacting with AI technology at home, and most schools are playing catch-up. Banning AI tools such as ChatGPT just because we don't understand how to integrate them into our education systems is not a solution. We must integrate AI-powered tools into the education process while safeguarding all the rules.

What’s your advice to education policymakers in view of the AI revolution?

First, policymakers should urgently launch a consultation or a debate with all the relevant stakeholders in the education sector, including industry representatives, since they know the reality of AI and the direction it may take.

Then they should think about the competences that will be needed in tomorrow’s AI-driven economy. We need to prepare people for future, AI-transformed jobs. Whereas the industrial revolution mainly affected low-skilled jobs, the AI revolution will mainly transform intellectual work. Everyone will be affected, from air traffic controllers to lawyers, from doctors to teachers. We will have to learn to live in a world with AI.

This joint reflection should enable policymakers to understand the implications of this transformation, gauge its impact on learning methodologies, and take informed decisions.
