WebEx, Zoom, and other video conferencing tools have automatic transcription—live or retroactive.

Automated transcription converts the spoken language in a recorded meeting into text. The generated transcripts usually come with minute-by-minute breakdowns of what was said and, where supported, who was speaking.

This is a wonderful feature for anyone with a memory-related disability, for people who are deaf or hard of hearing, and for the myriad of people who benefit from it for various other reasons.

Transcription for non-spoken languages

I'd like to see transcription cover the breadth of sign languages. Just as people who communicate verbally get their audio transcribed, people who use sign language should get the same benefit.

If two or more people are in a meeting and all parties are using a signed language to communicate, it seems obvious they too would want to have their meeting transcribed for later reference.

With the sophistication of hand-tracking technology built into products today, there's no reason we can't generate transcription for at least some of the roughly 300 sign languages used worldwide.
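To give a sense of how far along the building blocks already are, here's a minimal sketch (in Python, using Google's open-source MediaPipe Hands) that pulls per-frame hand landmarks out of a recorded video. The file name is made up, and landmarks are only the first step; an actual sign language model would have to interpret sequences of them, along with facial expressions and body movement, to produce anything resembling a transcript.

```python
# Minimal sketch: extract per-frame hand landmarks from a recorded lecture.
# "asl_lecture.mp4" is a hypothetical file name.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
cap = cv2.VideoCapture("asl_lecture.mp4")

with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    frame_index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV decodes frames as BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # 21 (x, y, z) landmarks per detected hand. A sign language
                # recognizer would consume sequences of these across frames,
                # not single snapshots.
                print(frame_index, [(lm.x, lm.y, lm.z) for lm in hand.landmark])
        frame_index += 1

cap.release()
```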

Catalyst

This idea likely came to me because I'm taking ASL classes over Zoom. The lectures are recorded, but since 95% of the class is conducted without spoken language, there's almost nothing for audio transcription to pick up. If the signs themselves were transcribed, I would know which signs are being taught when I rewatch the lecture.

With sign language transcription, if I wanted to search for a word we learned during a Zoom call, it would appear in the transcript and I could scrub to the relevant part of the lecture.
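Here's a rough sketch of what that lookup could look like, assuming the transcript is nothing more than a list of timestamped entries. Everything in it, from the structure to the glosses, is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TranscriptEntry:
    start_seconds: float  # offset into the recording
    speaker: str          # who was signing, if the tool can tell
    text: str             # gloss of what was signed

def find_sign(transcript: list[TranscriptEntry], query: str) -> list[float]:
    """Return the timestamp of every entry mentioning the queried sign."""
    return [e.start_seconds for e in transcript if query.lower() in e.text.lower()]

# Hypothetical lecture: jump to wherever the sign for STORE came up.
lecture = [
    TranscriptEntry(64.0, "Instructor", "HELLO NICE MEET-YOU"),
    TranscriptEntry(312.5, "Instructor", "STORE GO-TO YESTERDAY"),
]
print(find_sign(lecture, "STORE"))  # [312.5]
```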

Pessimism

Despite my excitement for this idea, my optimism is cautious.

If this functionality ever came to pass, I have a funny feeling the applications that introduce it will be sure to bugger it up. Not majorly, but just enough that it's difficult to get the gist of the conversation, just as it is for transcripts of spoken language today. Now that's an equitable experience!