In focus

Generative AI: putting people first

Upholding our reputation for trusted content

Cambridge University Press & Assessment is an integral part of the University of Cambridge, actively contributing to the community it serves. Our learner- and researcher-first position is clear in our approach to the challenges raised by generative AI.

Our reputation for trusted content truly matters in a changing world of both AI and disinformation. Whatever the subject, opinion or debate, upholding trust in the accuracy and authenticity of scholarly communications protects our core values of freedom of thought and expression.

Unlocking the potential of AI

AI is now a well-established part of our lives, making a difference in classrooms, laboratories and businesses; we are working to unlock the potential of AI in a safe, legal and ethical way. We believe that AI is a tool, and one that humans can, and must, deploy well; it is people, not machines, that come first.

AI with humans at the centre

We see AI as a huge opportunity if it is done right: the development of trustworthy AI systems depends on first-rate human-created content and an approach that focuses on the needs of teachers and learners. We are researching how AI can enhance the validity and efficiency of assessment and we are developing AI tools designed to assist educators with tasks such as lesson planning and generating insights.

Our approach to licensing for generative AI

We have established our approach to generative AI licensing, designed to uphold academic integrity while embracing innovation that aligns with our mission. Licensing matters because it protects authors’ rights, increases the discoverability and impact of research, and makes high-quality information available to improve AI tools.

Embracing AI and protecting academic integrity

We responded to the UK government’s consultation on AI and copyright with a clear statement that we back innovation in AI, responsible licensing and collaboration with the tech industry. In our response, we also warned of the risks of an ‘opt out’ approach, which would allow tech companies to scrape copyrighted material without sufficient protection.

Majority of authors choose to ‘opt in’

After we consulted more than 40,000 authors about AI use of their content, we signed a significant revenue-generating deal with a customer to use a subset of our published material in AI initiatives. The agreement is a tangible example of licensing as the fair, ethical and legal route for high-quality content to be used in the training of large language models (LLMs).

Opposing piracy

Cambridge and the UK Publishers Association spoke out when it was revealed that Meta had turned to piracy to harvest content for its AI development, including books and journals from Cambridge authors. Subsequent developments, such as the Bartz v. Anthropic case in the United States, have further underlined the critical importance of responsible AI training.

Developing AI tools to assist educators and learners

We are developing AI tools designed to help educators, freeing up their time to focus on delivering engaging and supportive learning experiences. In 2025, the English team launched Teacher’s Hub, a new platform featuring generative AI technology. It draws on our extensive AI experience and university research partnerships to provide educators with high-quality, agile and reliable support. We are also researching how AI can enhance the fairness, validity and efficiency of assessment.

© Cambridge University Press & Assessment, 2025