Artificial intelligence in education

This section describes how artificial intelligence (AI) can be used in education. This page offers an introduction to generative AI, and the subpages provide more detailed guidance on using AI to support teaching and on the principles governing its use. As these guidelines have been written from the perspective of Tampere Universities, please check the policies that apply within your own organisation by consulting a reliable source, such as your IT services unit.

AI can offer valuable support to teaching staff in planning and delivering instruction. Students can also make use of AI in many ways during their studies, and they must be aware of its potential as they transition into the world of work. All users should be familiar with the basic principles that underpin the responsible and effective use of AI.

All AI users should have a general understanding of how Copilot Chat, ChatGPT and other conversational AI applications work, be aware of the purposes for which they can be used, and recognise both the opportunities and challenges associated with their use. However, each user employs AI applications in ways that best meet their individual needs. 

As a general rule, students at Tampere University and Tampere University of Applied Sciences are permitted to use AI applications to support their learning. To use AI effectively, teachers must have a basic understanding of how AI applications work, how to use them safely and how to assess the reliability of AI‑generated content. Teachers should also be familiar with AI tools that can support their work. In addition to technical competence, the effective use of AI requires a positive attitude towards AI technologies, resilience, and a critical and responsible approach to implementing AI in practice. 

What is generative AI? 

In everyday discussion, the term AI typically refers to generative AI applications – such as Microsoft Copilot Chat, ChatGPT or Google Gemini – that produce responses based on the data used to train their underlying language models. 

In addition to text‑based systems, AI applications can recognise sounds or images, enhance image or audio quality, analyse data according to predefined rules, or translate text. In the context of teaching, AI generally refers to generative AI tools that create text, images or other content in response to a user’s written prompts by using patterns they have learned during training. 

Language model = the engine behind generative AI. A language model learns by analysing vast collections of text and images and generates new content by predicting the most likely next word or token in a given context. Language models process images as combinations of pixels, colours and shapes to infer meaning and identify statistical patterns, which they then use to create new images.
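
To make the idea of predicting the most likely next word concrete, the sketch below counts which word most often follows another word in a tiny sample text and then uses those counts to "predict" the next word. This is a deliberate simplification for illustration only: real language models are large neural networks trained on vast datasets, not simple word-pair counters.

  # A deliberately simplified illustration of "predict the most likely next word".
  # Real language models are neural networks trained on vast datasets; this toy
  # word-pair counter only makes the basic idea of statistical prediction concrete.
  from collections import Counter, defaultdict

  corpus = (
      "students use ai to support learning . "
      "teachers use ai to plan teaching . "
      "students use ai to draft text ."
  ).split()

  # Count how often each word follows each preceding word in the sample text.
  next_word_counts = defaultdict(Counter)
  for current, following in zip(corpus, corpus[1:]):
      next_word_counts[current][following] += 1

  def predict_next(word):
      """Return the most frequent follower of 'word' in the toy corpus."""
      candidates = next_word_counts.get(word)
      return candidates.most_common(1)[0][0] if candidates else None

  print(predict_next("use"))  # -> "ai"
  print(predict_next("ai"))   # -> "to"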

Users interact with generative AI applications by conversing with them or giving instructions. This process is known as prompting. The AI responds to a prompt by drawing on the models and data on which it has been trained. 

The clarity and precision of a prompt largely determine the quality of the response: a vague prompt may produce an answer that is too broad to be useful, while a well‑crafted prompt is more likely to generate a specific and useful outcome. 

Prompt = the input or instruction provided by the user to the language model in order to elicit a particular response. A prompt may include a question, task, example or any other content that guides the language model’s operation. 
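
As a sketch of how a prompt steers a language model, the example below contrasts a vague prompt with a more precise one and sends the precise prompt to a chat model. It uses the OpenAI Python SDK purely to illustrate the general pattern; Copilot Chat and similar tools are normally used through their own chat interface, and the model name and API key configuration shown here are assumptions.

  # A minimal sketch of passing a prompt to a language model in code. The OpenAI
  # Python SDK is used only as an example of the general pattern; the model name
  # is an assumption, and an API key is assumed to be configured in the environment.
  from openai import OpenAI

  client = OpenAI()

  # Kept only for comparison with the precise prompt below.
  vague_prompt = "Tell me about assessment."

  precise_prompt = (
      "Act as a university lecturer planning a five-credit online course for "
      "40 students. Suggest three formative assessment methods and explain in "
      "two or three sentences how each one supports learning."
  )

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # assumed model name; use whichever model is available
      messages=[{"role": "user", "content": precise_prompt}],
  )
  print(response.choices[0].message.content)

The precise prompt specifies a role, a task and the expected format, so the response is far more likely to be specific and useful than the answer to the vague prompt.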

Generative AI is especially well suited to producing answers, solving problems and creating content. It can be used, for example, when preparing texts, emails, reports, presentations and drafts. AI is also effective at generating summaries: it can sum up lengthy documents and identify the key points in a report or an email. In addition, users can ask AI to answer specific questions based on extensive source materials, such as reports or articles containing dozens of pages. When translating between languages, generative AI is better at understanding context than traditional translation software. AI is also a useful tool for analysing large volumes of data and presenting information. 

The limitations of AI relate to tasks that require human judgement. AI cannot make decisions that rely on human intuition, because it does not have all the information needed to make ethical choices. In addition, AI does not create original ideas, because its responses are always based on existing data. All AI‑generated content must be reviewed by a human, as it may contain inaccuracies or lead to incorrect conclusions. 

AI tools may struggle with understanding context, especially when the subject matter is very narrow or detailed. As a result, AI may be unable to provide information that is sufficiently precise. In addition, AI does not understand human emotions and cannot take them into account in its responses. 

Where does AI get its information? 

The language models that power AI applications learn to understand and generate text by processing vast amounts of training data. This data includes, for example:  

  • openly accessible books, articles and websites; 
  • online discussion threads, instructions and technical documentation; 
  • publicly available resources, such as Wikipedia and open-access databases. 

Language models are trained to learn linguistic structures, vocabulary, writing styles and contextual awareness. They are not trained to recall specific sources but to generalise how language is used in different situations. AI applications are increasingly designed to provide sources that support their responses, but these sources should always be reviewed critically. A further issue concerns training data, which may include copyright‑protected material that has been used without authorisation. This has led to legal disputes, as authors, illustrators, software developers and other stakeholders have brought lawsuits against AI companies. 

How should teachers engage with AI? 

It is important that teachers promote the responsible and transparent use of AI among students. Teachers must clearly inform students of the principles governing the use of AI, either for entire courses or for specific assignments. The use of AI may also be prohibited for justified reasons. Teachers must have a sufficient understanding of the opportunities and limitations of AI so that they can guide students in using it responsibly. When using AI to support their learning, students must comply with legal requirements concerning copyright, the processing of personal data and confidential information, as well as any applicable institutional policies and guidelines. Since AI-generated text may contain inaccuracies and fabricated references, it is important to emphasise the critical evaluation of sources.

Teachers are encouraged to share their experiences of using AI with colleagues, so that effective practices can be widely implemented. This also supports the collegial development of AI use. 

Like many higher education institutions in Finland, Tampere Universities have adopted the AI traffic light model developed by the Rectors’ Conference of Finnish Universities of Applied Sciences (Arene). The model illustrates the permitted and prohibited uses of AI and helps teachers advise students on how to acknowledge the use of AI in assignments, including the required level of detail. For example, teachers can ask students to complete a form which, at its simplest, specifies the purpose for which an AI tool was used in an assignment and briefly describes how the tool was used.

What AI applications can be safely used? 

AI applications can be valuable tools for designing teaching activities and supporting brainstorming. However, their use must always be pedagogically justified and aligned with the intended learning outcomes. Using a variety of AI applications can also strengthen individual teachers’ AI competence. 

However, it is important to choose secure AI applications that are appropriate for educational purposes. At Tampere Universities, staff and students are expected to use Microsoft Copilot Chat as their primary tool, logged in with their TUNI username and password. Logging in to Copilot with an organisational account provides enhanced data protection and security compared with freely available AI chat services. Even when you are logged in with an organisational account, personal data and confidential information must be processed carefully and in accordance with the organisation’s data classification model.

AI applications available at Tampere Universities: 

  • Copilot Chat: An AI chat system similar to ChatGPT, with a built‑in ability to generate images. 
  • Microsoft 365 Copilot: AI features integrated into Microsoft 365 applications, such as Word, OneNote, Outlook and Teams (available to staff only). 
  • Scopus AI: Combines generative AI with the Scopus abstract and citation database to help users find information, search for references and create summaries. 

Other AI-powered applications: 

  • Teams (Class workspace): AI can assist in creating assignments and assessment criteria. 
  • Clipchamp: A video editor offering several AI-powered features, such as text-to-speech. 

Many other digital tools also include AI features; the applications listed above are provided only as examples. 

Can we trust AI? 

Teachers are responsible for promoting source criticism and copyright awareness among students. In addition to reminding students that they must acknowledge any use of AI during a course, teachers must also make clear that students remain fully responsible for their submissions and for the accuracy of all content. 

AI may generate text that contains false information, and it may produce citations for sources that do not exist. The importance of source criticism and the critical evaluation of AI-generated content must be emphasised. AI is a useful resource when it is used to generate text and ideas on a topic with which students are already reasonably familiar and therefore able to assess the reliability of the responses. 

Although AI applications based on large language models (LLMs) are often used to retrieve information in a way that resembles a traditional Google search, they are not search engines. Even when referencing conventions are appropriately followed, the use of AI as a source must always be evaluated critically, as AI may generate fake citations. When AI applications are used in a university context, it is important to ensure adherence to academic standards, especially referencing conventions and the ethical principles governing the use of AI. For related guidance, please visit the Tampere University Library’s website: Information searching and AI.