Generative AI in communications at ETH Zurich
These guidelines serve as a foundation for the responsible use of generative artificial intelligence (AI) in ETH Zurich communications.
Guiding principles
We perceive AI as an opportunity
AI opens up new possibilities for editing and creating text, image, audio and video content. We regard this potential as an opportunity for our work. We use AI primarily as a tool that makes our work easier or creates added editorial value. At the same time, we are well aware of the risks associated with the use of AI; we minimise these risks through responsible and prudent use.
We protect and promote the credibility of ETH Zurich
Maintaining the credibility of ETH Zurich and strengthening the trust of our target groups in our institution and our communication is a crucial objective of our communication. This premise also applies to the use of AI. The content we publish is relevant, accurate and meets our quality aspirations and standards – regardless of whether AI support was used or not.
We use AI responsibly
Anyone who uses AI must be capable of assessing and critically questioning the generated results, and must be aware of the associated responsibility. No content is published without being checked by a competent person; automated translations that are identified as such are the only exception. We only use AI tools in processes that allow us to control the final results – it must be possible to correct or change content at any time. This responsibility covers four areas:
- Awareness of typical AI weaknesses
Anyone using AI must have a basic understanding of how it works and the risks involved – for example, possible hallucinations, errors, distortions, bias and AI-typical style.
- Fact checking
AI is used with a critical eye, and the accuracy of its results is carefully verified.
- Credibility
AI-generated content must not mislead the audience or jeopardise the reputation and credibility of ETH Zurich.
- Transparency
If we publish texts, images, videos or other works that have been created or significantly modified by AI, we make this transparent.
Data handling
When using AI, we pay attention to data security. Non-public data may only be entered into AI tools that do not use such data for training, or if the data remains in a protected virtual ETH environment. The guidelines for users of AI tools at ETH Zurich, published by Cyber and Information Security, apply here. Please note:
We only enter internal or unpublished information, such as manuscripts of unpublished scientific publications, into an AI tool if the data is not used for training the system or if it remains in a protected virtual ETH environment.
ChatGPT: In the settings, deactivate the “Improve the model for everyone” option (setting: Off) to ensure that your data is not used to train the model.
Microsoft 365 Copilot: Use it with an ETH login; this ensures that the data remains in a virtual ETH environment.
Information classified as strictly confidential, as well as particularly sensitive personal data or other data requiring protection, must never be entered into AI systems.
Application for texts
AI can be used to assist in the creation of editorial texts. AI, however, must not be the “author” of an entire article.
AI tools are well suited for:
- Research support, such as summarising scientific papers to grasp their content quickly, or preparing for a research interview
- Transcribing audio files and translating raw material
- Generating ideas for titles, leads and summaries
- Initial editing, condensing and optimising of a draft text
- Checking draft texts for compliance with our writing guidelines and appropriate style (AI as a sparring partner, e.g. via custom AI agents or other means)
- Summarising and preparing existing content for other uses or for specific target groups (e.g. creating social media posts and summary teasers)
- General automation of repetitive tasks (custom AI agents)
Labelling
In the above cases, the use of AI does not need to be labelled.
If entire editorial articles are written predominantly by AI – that is, the author has used AI for more than mere support – this must be labelled.
Labelling recommendation: This article was created with the help of generative AI and checked for accuracy by our team.
Please note:
As always with generative AI: what seems to be correct may in fact be nonsense. Content researched using AI tools must always be checked for factual accuracy. We never rely solely on AI summaries to understand or evaluate research findings or other information.
We do not publish any texts without revising and examining them. The final version of all our editorial texts is created by humans, editorially approved and accounted for. We do not publish AI-generated texts under the generic author attribution “Editorial team”. All texts must meet our quality criteria in terms of content and style, and we avoid AI-typical tonality and style as far as possible.
Applications for images
AI offers a wide range of possibilities for creating and editing images (photos, illustrations and photo montages). At the same time, authenticity remains an important criterion for images – one that AI-generated material cannot fulfil.
AI tools are well suited for:
- Creating symbolic images to visualise complex topics or to illustrate the topic of AI
- Optimising existing images without changing their message (e.g. reducing image noise)
- Generative enhancement of photos (e.g. adding backgrounds)
- Removal of irrelevant objects from images
Labelling
All images generated or edited using AI must be labelled as such.
Exceptions: moderate and careful extension of the background to adjust the image format, provided the image content is not altered; and minor edits such as colour corrections, noise reduction, minor retouching and cropping that do not significantly alter the authenticity of the image.
Labelling recommendation:
- Image created with AI: Creators / ETH Zurich
- Image edited with AI: Creators / ETH Zurich
Please note:
There is a high risk of deception, especially with photorealistic representations of persons (students, employees), living beings, rooms, places or real situations in general. Such images are therefore only used in exceptional cases – and exclusively with transparent labelling. Instead, as a rule, we use authentic photographs or conventional agency images (e.g. from Adobe Stock).
AI-generated images must not be used in communications as a substitute for scientific images, especially if they give the impression of being authentic scientific images.
Application for audio and video content
Numerous AI tools support the production of audio and video. However, not all of them allow users to edit the final result and thereby retain control over the material.
AI tools are well suited for the following audio tasks:
- Transcription and translation of audio
- Creating jingles, intros or music for podcasts
- Improving sound recordings without changing the content (e.g. noise reduction)
AI tools are well suited for the following tasks in the video area:
- Automated editing to increase efficiency
- Automated generation of subtitles
- AI-generated animations in explanatory videos
- Abstract AI videos for social media teasers
Labelling
All AI-generated audio and video content must be labelled as such.
In particular, the following content must be clearly labelled if it is AI-generated: voices (voice output), music and audio quotes, and video sequences (e.g. B-roll, animations). If AI material is used in a video, the labelling must inform the audience of the extent to which AI was used and in which scenes. The relevant scenes must carry a caption stating “created with AI” or “edited with AI”.
If the video is embedded on a website, an additional note must be placed directly below it: “Video created with AI” or “Video contains scenes created with AI”. Sounds and subtitles are not subject to labelling requirements.
If audio or video content is edited with AI, labelling is also required when the editing results in significant changes. Editing purely to improve quality is not subject to labelling requirements.
Labelling recommendation:
- Audio created with AI
- Video created with AI
- Audio edited with AI
- Video edited with AI
Please note:
Audio or video files generated with an AI tool that does not allow the final results to be edited (e.g. by changing the script) must not be published or distributed. NotebookLM is an example of an AI tool that is unsuitable for audio and video production for this reason.
Videos generated or edited with AI must not be used in communications as a substitute for scientific videos, especially if they give the impression of being authentic scientific videos.
Voice imitation (AI voices that mimic real people) and deepfakes (videos showing real people in speeches or actions they did not actually perform) are not permitted. The only exception is use in a clearly recognisable illustrative context, and even then extreme caution is required: animating the voice or image of a real person without their consent (e.g. making them speak, sing or dance) may violate their personal rights. The person depicted must therefore expressly consent to AI animation, and such content must be clearly labelled. If it has the potential to damage the reputation of ETH Zurich, it must not be published.
Avatars and AI voices must not become the standard: we do not use them across the board in videos or podcasts.
- ChatGPT
- Microsoft 365 Copilot
- Custom AI agents in ChatGPT or Microsoft 365 Copilot
- Sonix
- DeepL Write
- Custom AI agents in ChatGPT or Microsoft 365 Copilot
- DeepL Translate
- Google Translate
- Le Chat Mistral
- Adobe Firefly
- Adobe Photoshop
- Adobe Illustrator
- Microsoft 365 Copilot
- Cleanvoice
- Adobe Podcast
- Premiere Pro
- Sonix