Author: Stephan Geering, Deputy General Counsel and Trustworthy AI & Global Privacy Officer at Anthology.
Artificial Intelligence (AI) is rapidly transforming the education sector, unlocking vast potential while introducing complex ethical and regulatory challenges. As higher education institutions harness AI’s capabilities, ensuring its responsible and ethical integration into academic environments is crucial.
With the adoption of the EU AI Act, it will be critical for edtech companies, educational institutions, and other stakeholders to work towards compliance with this key legislation.
The Act applies to both public and private entities that market, deploy, or provide AI-related services within the European Union. Its primary objectives are to safeguard fundamental rights, including privacy, non-discrimination, and freedom of expression, while simultaneously fostering innovation. The Act aims to provide clear legal frameworks that support the development and use of AI systems that are not only safe and ethical but also aligned with societal values and the broader public interest.
Staying in Control of AI Systems
A core principle of responsible AI deployment is maintaining human oversight throughout implementation. Educational institutions must establish a robust governance framework, with a senior leader designated to oversee the ethical use of AI.
Edtech companies can further support this approach by making new AI features optional, so that institutions have to actively (and deliberately) opt in to generative AI capabilities. This enables educators to evaluate and adopt AI technologies in line with their internal policies and objectives.
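To make the opt-in model concrete, here is a minimal sketch in Python of a default-off feature flag that records a deliberate institutional decision. All names (InstitutionSettings, enable_generative_ai) are hypothetical illustrations, not the API of any actual product.

```python
from dataclasses import dataclass

# Hypothetical sketch: generative AI features ship disabled and are only
# enabled by an explicit, recorded institutional decision. All names are
# invented for illustration and do not describe any specific product.

@dataclass
class InstitutionSettings:
    institution_id: str
    generative_ai_enabled: bool = False  # off by default
    opted_in_by: str | None = None       # who approved the opt-in
    opted_in_on: str | None = None       # when (ISO date)

def enable_generative_ai(settings: InstitutionSettings, approver: str, date: str) -> None:
    """Record a deliberate opt-in rather than silently flipping a default."""
    settings.generative_ai_enabled = True
    settings.opted_in_by = approver
    settings.opted_in_on = date

settings = InstitutionSettings(institution_id="example-university")
assert not settings.generative_ai_enabled  # nothing is enabled until the institution acts
enable_generative_ai(settings, approver="AI Governance Council", date="2025-01-15")
```

The point of the default is governance: the flag only changes state through a decision that leaves an audit trail.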
Promoting Transparency and Inclusion
Inclusion is at the heart of ethical AI deployment. Institutions must ensure that diverse individuals – including faculty, staff, and students – are involved in shaping AI policies. This can be done by establishing councils and advisory bodies that bring together multi-disciplinary perspectives to guide AI strategies.
Keeping all parties informed, particularly the end users of AI systems, is essential. Institutions should produce and regularly update comprehensive materials, such as policy statements and explanations of specific AI tools. Edtech companies should support this with detailed information about their AI systems. These resources ensure that everyone is well-informed as AI tools rapidly evolve.
Training and Awareness: Strengthening AI Literacy
To use AI effectively and responsibly, institutions must prioritise ongoing training and awareness courses for both staff and students; indeed, the EU AI Act itself requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff. Users must understand not only the technical basics and the opportunities AI presents but also the associated risks, such as bias and inaccuracy.
Institutions can offer targeted, role-specific training that covers AI risk and ethical AI principles. Beyond focusing on students and faculty, institutions should also consider specialist training for teams working in critical support areas, such as IT and security.
Data Privacy and Security Considerations
As the use of AI becomes increasingly integral to the daily tasks of both students and staff, it's crucial for institutions not only to have clear guidelines on permitted and prohibited AI tools but also to consider enterprise versions of those tools. Enterprise versions generally offer enhanced privacy features and often come with contractual commitments not to use institutional data to train AI models.
In addition, many security tools now include monitoring capabilities that allow institutions to track AI usage and, when necessary, restrict access to external generative AI tools that are not approved for use.
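As a simple illustration of what such restriction might look like under the hood, here is a hedged Python sketch of an allow-list check of the kind a web proxy or security gateway could apply to outbound traffic; the domains and function names are invented for this example.

```python
from urllib.parse import urlparse

# Hypothetical sketch: permit traffic only to generative AI services the
# institution has approved. Domains and names are invented examples.
APPROVED_AI_DOMAINS = {
    "ai.example-university.edu",  # institution-approved enterprise tool
}

def is_request_allowed(url: str) -> bool:
    """Return True only for approved generative AI endpoints."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

print(is_request_allowed("https://ai.example-university.edu/chat"))    # True
print(is_request_allowed("https://unapproved-genai.example.com/api"))  # False
```

A real deployment would sit in a proxy or endpoint agent and log usage for monitoring, but the underlying decision is this simple membership check.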
Strategic Steps for Institutions to Comply with the EU AI Act
Institutions should start by developing a comprehensive Trustworthy AI programme and policy framework. This provides a strong foundation for aligning trustworthy AI practices with the regulatory standards of the EU AI Act.
The next step is to conduct a thorough inventory and classification of AI systems across the organisation. Identifying where AI is being used and determining which risk category each system falls into is essential for understanding compliance requirements.
Once systems are categorised, institutions can go on to conduct a gap assessment to identify the necessary steps for compliance, particularly for “high-risk” AI systems. This includes creating action plans to address specific regulatory obligations, such as quality management systems and conformity assessments.
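To illustrate the classification and gap-assessment steps together, here is a minimal Python sketch of an internal inventory that tags each system with one of the Act's broad risk tiers and flags the high-risk entries for follow-up. The systems listed are hypothetical, though under the Act certain educational uses, such as systems used to determine access or admission to educational institutions, are designated high-risk.

```python
from dataclasses import dataclass
from enum import Enum

# Sketch of an AI-system inventory keyed to the EU AI Act's broad risk tiers.
# The systems below are hypothetical examples for illustration only.

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

inventory = [
    AISystem("Admissions scoring model", "evaluating applicants", RiskTier.HIGH),
    AISystem("Campus FAQ chatbot", "student self-service", RiskTier.LIMITED),
    AISystem("Email spam filter", "message triage", RiskTier.MINIMAL),
]

# High-risk systems drive the gap assessment: quality management systems,
# conformity assessments, documentation, and related obligations.
for system in (s for s in inventory if s.tier is RiskTier.HIGH):
    print(f"Gap assessment required: {system.name} ({system.purpose})")
```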
Another key step is updating vendor risk management processes. Given that many institutions rely on third-party AI systems, it’s important to review procurement practices to assess potential risks associated with these external AI tools as part of vendor due diligence.
Finally, institutions must stay informed about ongoing regulatory developments. As the EU AI Act will be complemented by a raft of new guidelines and standards, institutions need to designate a team – such as a legal, privacy or compliance team – to track these changes and ensure the organisation remains aligned with the latest requirements.
Looking Ahead
AI holds immense promise for higher education, but as it becomes more integrated into institutional practices, maintaining ethical oversight and adhering to regulatory requirements are paramount.