Creating accessible video and audio
You must provide an accessible alternative way of presenting the information in a video.
In a video, information is presented in a variety of ways, including animation, text or graphics, the setting and background, and the actions and expressions of people and animals. Video-only content is inaccessible to people who are blind and to some people who have low vision.
People who are deaf or have limited hearing cannot hear the audio, so it is important to provide alternatives.
This page highlights some general guidance on video alternative options.
If you need workflow guidance for creating videos for learning and teaching, such as online lectures, go to our Video standards for online teaching and learning page.
Further down this page you can find specific College video platform guidance and links to how to use the platforms.
A great text alternative for video is captions. Captions must be provided for all video content in synchronised media where possible. They allow users who are deaf to access real-time broadcasts, and they help people who simply have trouble hearing the dialogue or the sounds.
Captions should not be confused with subtitles. Subtitles only provide text of the dialogue and do not include important sounds, for example, musical cues.
There are two different types of captioning:
Closed captions (soft)
Closed captions will appear as separate selectable tracks in a video. These can be enabled and disabled as required by the user. They can also, on some platforms, be translated to any language automatically.
Open captions (hard burned)
Open captions mean the text is written on top of the image permanently. This is sometimes referred to as hard burned. They cannot be turned on or off and they only support one language per video.
There are accessibility issues surrounding open captions. Open captions on a video are the equivalent of saving a Word file as a JPEG image: the text cannot be interacted with once rendered and cannot be identified by accessibility tools. It cannot be automatically translated either, which restricts international audiences who prefer or require captions in an alternative language.
We recommend using closed captions for online videos wherever possible, since they are more accessible and their language can be changed.
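To illustrate why closed captions are more flexible, a closed-caption track is typically a separate plain-text file in a format such as WebVTT, containing timed cues. This is a minimal sketch; the timings and wording are invented for illustration. Note that, unlike subtitles, the cues also describe important non-speech sounds:

```
WEBVTT

00:00:01.000 --> 00:00:04.000
[gentle piano music]

00:00:05.000 --> 00:00:08.500
PRESENTER: Welcome to this introduction
to accessible video.

00:00:09.000 --> 00:00:11.000
[door closes]
```

Because the text lives in a separate file rather than being rendered into the video frames, players can switch it on or off, restyle it for readability, and translate it automatically.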
You can view a good example of an accessible video that uses closed captions and has a transcript available on the RNIB website.
What must you do?
To present the same information in an accessible form, you must use one of these techniques:
- In-built captions - Where possible, switch on the audio tracks, captions or subtitles within the video platform to produce spoken versions or audio descriptions. For example, use a control within the content itself, or select a control or preference in the media player or operating system. Then play the synchronised media and check that the captions or audio tracks convey the right information.
- Static text - For videos that have no important visual details, for example talking-head videos, audio description is not necessary because there is no time-based visual information that is important to understanding the content. All that is necessary in this case is a static text alternative containing a general description of the context of the environment, any opening or closing credits, any on-screen text such as the name of the speaker, and other basic information that appears on screen but cannot be heard in the audio. This can be added as general content on the web page or as an accessible document. This technique does not apply where there are multiple speakers and the identity of each new speaker is not evident in the audio track but is identified on screen with visual text as they speak; in that case, audio description should be used.
- Alternative versions - Provide a second version of the video with audio descriptions OR extended audio descriptions. The second version can be added to another webpage, or check the platform you are using, as it may provide this feature.
- Audio description - A narration added to the soundtrack to describe important visual details that cannot be understood from the main soundtrack alone.
- Extended audio description - A narration added to an audiovisual presentation by pausing the video so that there is time for additional description.
If you cannot do the above, you must, as a minimum, provide:
- Transcripts - Create a transcript document that tells the same story and presents the same information as the pre-recorded video-only content. The accessible document serves as a long description for the content and includes all of the important information as well as descriptions of scenery, actions, expressions, etc. that are part of the video.
- Audio tracks - Provide an audio track describing the information in the video. The audio should be in a common internet format, such as MP3, and linked to in the content near the video.
Live videos are live streamed content, for example, a streamed lecture or a broadcast event.
Although live video captioning does not fall under the legal obligations at this time, you should try to provide captions or subtitles whenever you can switch them on within the platform you are live streaming or broadcasting from.
Many live broadcasting platforms now have built-in functionality that you can switch on to create automatic open or closed captions. If closed captioning support exists for the video platform, please use it.
Not all systems accurately reproduce the spoken word, as it is hard to transcribe audio that flows too quickly. If you are using live captioning, ask your guest speakers to speak clearly and slowly when presenting, as this will help with the captioning.
Remember, once your live video has streamed, it can be uploaded to a website or watched again within the video platform, so you should treat it like a pre-recorded video. After your broadcast has streamed, you can go back into most platforms and post-edit the captions to better reflect what was said.
If the video is to be used post recording, for example as a study aid or training material, and the platform you are using does not provide any automatic captioning functionality, you must provide a transcript. (Some users may request a transcript from you, so it is good to have this ready.)
What video systems are available at the College?
Panopto allows for simultaneous capture of audio, video and software applications. It can be used by Imperial staff to record lectures and presentations. These recordings can then be shared via the Panopto platform or embedded in a website.
Panopto does not provide auto captions or subtitles for live streaming / broadcasting videos.
Panopto does provide auto captions for video recordings once uploaded to Panopto.
From the academic year 2020-21 onwards, captions will be generated and made available automatically for all teaching videos stored in module folders. For other recordings, captions can be made viewable subject to the approval of the recording owner.
Remember, auto captioning should not be relied upon entirely, as it is not 100% accurate. You are advised to post-edit your captions to ensure accuracy.
If the accuracy of the captions auto-generated by Panopto is very poor, which may be the case for highly technical content, the Verbit captioning service can be used to provide more accurate captioning.
YouTube uses speech recognition technology to automatically create captions for your videos.
If you are producing a video and you do not have any budget for professional subtitles or a transcript, the subtitles and closed captioning available through YouTube are free and relatively straightforward. You can add your own subtitles and closed captions or upload your own transcript.
Read the Accessibility for social media page to find out more about videos on social media.
MS Teams can be used to create a live stream broadcast of an event or to record a meeting.
Please be aware that live captioning is not always reliable and misspellings will occur.
Live events broadcasting
Live captions and subtitles are currently a preview feature in Microsoft Teams.
Live event attendees can view automatic live captions and subtitles in up to six languages in addition to the language being spoken. Event organisers can select the languages from a list of over 50.
In meetings, live captions are a preview feature in Microsoft Teams and are only available in English (US) for now. Teams can detect what is said in a meeting and present real-time captions.
Live captions can make your meeting more inclusive to participants who are deaf or hard-of-hearing, people with different levels of language proficiency, and participants in loud places by giving them another way to follow along.
You can publish a video using Office 365 Microsoft Stream by uploading the video directly.
Publishing to Microsoft Stream gives you the added benefit of having a closed-caption file automatically created for you.
You can also review and edit the captions, and download them as a transcript if someone requests it.
You can embed your video with captions into a presentation or a webpage to make it accessible for all.
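If you are embedding a video into a webpage you control, closed captions can be attached with the standard HTML track element. This is a minimal sketch; the file names are placeholders for your own video and caption files:

```html
<!-- Video player with a selectable closed-caption track -->
<video controls width="640">
  <source src="lecture.mp4" type="video/mp4">
  <!-- kind="captions" marks the track as captions (dialogue plus important sounds);
       the user can turn it on or off in the player -->
  <track src="lecture-captions.vtt" kind="captions" srclang="en" label="English" default>
</video>
```

Because the captions are supplied as a separate track rather than burned into the picture, assistive technologies and the browser's own player controls can work with them.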
If you need further help on accessibility for videos contact your Faculty Web Officer.
You can find out more about making your digital content accessible on our Web Guide.
Find out more about the College accessibility framework project.
Contact the accessibility team for further queries.