AI at Loughborough – Building confidence, capability and responsibility
Co-authored by Prof. Nick Jennings and Vipin Ahlawat.
Artificial Intelligence (AI) is transforming how we live, learn, and work – and Loughborough is taking a thoughtful, people-centred approach to this rapidly changing landscape. From supporting learning and research to enhancing everyday productivity, AI has the potential to make a positive difference to all of us. We do, however, need to make sure we use AI in a responsible way.
We’re responding to AI opportunities with purpose: building the right foundations, supporting our community, and ensuring that innovation goes hand-in-hand with responsibility. One of us (Nick) has written about these issues for universities in general in a previous blog post, but today we want to focus on Loughborough and challenge you all to think about this technology and how you can use it.
Our foundations: Responsible, inclusive and human-centred
Over the past year, we’ve been putting in place a strong framework to guide how AI is explored and adopted across the University. This includes:
- AI Governance Model – overseen by the IT Governance Committee (ITGC), ensuring AI projects are well-managed, risks are identified early, and activity aligns with our University strategy.
- AI Principles for Loughborough – a set of commitments approved by ITGC that guide how we design, deploy and use AI tools. These principles focus on:
  - Human-centred design – keeping people at the heart of every AI decision.
  - Academic integrity – ensuring AI supports, not undermines, assessment and research standards.
  - Skills and confidence – helping staff and students build their AI literacy.
  - Access to tools – promoting fair, inclusive access to secure AI technologies.
  - Safety and ethics – encouraging responsible and transparent use.
  - Sustainability – considering the environmental and social impact of AI.
  - Ongoing review – keeping our approach under regular evaluation as technology and expectations evolve.
Putting principles into practice
AI is becoming part of how we all work and learn. This is why we have made Microsoft 365 Copilot Chat available to all staff and students, as well as providing everyone with free credits for Adobe Firefly for AI image generation and editing. Other recent developments include:
- Secure tools – promoting trusted options like Microsoft 365 Copilot Chat, integrated into the University’s Microsoft environment to keep AI use secure and compliant with data protection policies.
- Ethical guidance for learning and assessment – including AI responsible use declarations, staff training, and a three-tier assessment model to ensure transparency and fairness in how AI is used in coursework and exams.
- AI Communities of Practice – over 300 colleagues from across academic and professional services collaborating to explore AI opportunities and share best practice.
- Pilot projects – departments are identifying pilot use cases for AI in both education and operations, supported by a structured AI Pilot Framework to manage risk and learning. For example, we are exploring how we can use Copilot to automatically create meeting summaries, as well as AI-powered tools like Studystash for more personalised, adaptive learning.
A challenge for everyone
We know AI will have profound implications for the future, but it’s also changing how we work right now.
We both use AI systems regularly and think you should as well. So here’s our challenge to you:
Try using AI in your daily or weekly routine. And if you’re not using it yet — ask yourself why not?
Even small tasks can benefit from AI support. You could try using:
- Copilot Chat to summarise documents or reports.
- Copilot or Firefly to generate ideas or create visuals for social media.
- Copilot in Word or Outlook to polish or draft documents and emails.
- Copilot in Excel to analyse data or spot trends.
These tools are available to everyone, and every experiment helps you learn how to use AI effectively and responsibly.
Building AI literacy
Understanding how to use AI responsibly is as important as the tools themselves. Over the coming months, we’ll be focusing on AI literacy for staff and students, including:
- Practical sessions on using generative AI effectively and responsibly.
- Further guidance on when (and when not) to use AI tools in learning and research.
- Case studies showcasing staff who are using AI to save time, improve workflows, or spark creativity.
- Guidance on making more informed and sustainable choices when using AI.
If you’re new to AI, the Responsible AI Guidance site is a great place to start – it offers guidance, “dos and don’ts”, and links to useful training resources.
Get involved
AI is a collective opportunity. Whether you’re a colleague or student, there are many ways to engage:
- Visit: the Responsible AI Guidance site.
- Join: one of our AI-focused Communities of Practice.
- Try: approved AI tools such as Microsoft 365 Copilot Chat and Adobe Firefly, available to all staff and students.
- Learn: find out more about AI and look out for more guidance on responsible AI use coming soon.
Our goal is simple:
To harness the potential of AI in ways that enhance human creativity, uphold academic integrity, and strengthen our community.
Together, we can make AI work for people, with purpose, and guided by our values.
Vice-Chancellor's Communications
Opinions and comment from the Vice-Chancellor, Professor Nick Jennings