On September 10, 2020, EEG Video hosted a webinar to educate media professionals on our closed captioning solutions built specifically for the needs of municipalities.
Closed Captioning for Municipalities • September 10, 2020
In this 30-minute webinar, Matt Mello, Sales Associate for EEG, and Daniell Krawczyk, Founder of Municipal Captioning, explained best practices for making meetings, events, and forums more accessible with EEG closed captioning and subtitling solutions.
Topics covered included:
- All about EEG’s closed captioning solutions for municipalities
- How to use EEG products for municipal captioning needs and workflows
- The latest closed captioning advancements at EEG
- A live Q&A session
To find out about upcoming EEG webinars, as well as all previously streamed installments, visit the EEG website.
Regina: Hi everyone and thank you so much for tuning in to this webinar about closed captioning for municipalities! I'm glad you all could join. My name is Regina Vilenskaya and I'm the Director of Marketing here at EEG. I'll be your moderator today.
With me on this webinar are Matt Mello, Sales Associate at EEG, and Daniell Krawczyk, Founder at Municipal Captioning. For the Q&A portion at the end of today's event, we'll be joined by Bill McLaughlin, VP of Product Development at EEG.
For today's webinar, Matt and Daniell will be sharing their experience helping municipalities get accessible. You'll find out about EEG's closed captioning solutions and how you can use our products for your captioning needs and workflows.
I'm now going to welcome Matt and Daniell to kick off the webinar Closed Captioning for Municipalities. Welcome, Matt and Daniell!
Daniell: Thank you.
Regina: Thank you for being part of this webinar. Could you please start by introducing yourselves and tell us a little bit about what your company does? Daniell, we'll start with you.
Daniell: Thanks! So my name is Daniell Krawczyk. I'm the founder of Municipal Captioning, Inc. I've been working with local government TV channels since 2001. I worked at a couple stations and then I sold playback systems and transmission systems, and a couple years ago I started Municipal Captioning, Inc.
What we do is we help cities and counties and other local government examine all of the options, from human captioning to traditional ways to using the new AI tools, and we give them all the options, let them compare them, and help them implement them so that they can become compliant with ADA regulations quickly, as opposed to taking a long time to compare and test and implement. So that's what we do and I've been doing it, like I said, about three years.
Matt: Hi all! My name is Matt Mello. I'm with the sales team here at EEG. For those who aren't familiar with EEG we can be thought of as sort of a one-stop shop for all things captioning. We've been a leading manufacturer of broadcast video equipment for decades, helping customers from nearly all industries get compliant with our products.
Our equipment and services can be found in most, if not all, major broadcasting facilities around the country, as well as many levels of government agency, ranging from local town halls all the way into entire state legislatures.
EEG has a wide range of solutions for SDI, IP, and RTMP live stream encoding, as well as live stream caption data creation. If you're creating any video content, we have the captioning solutions and expertise to ensure that your content is accessible and compliant.
Regina: Thank you. So the first thing that we're going to be discussing is municipalities and broadcasts. And the first question I have for the both of you is, How have you seen the nature of broadcasting workflows change because of COVID-19? Daniell, let's start with you.
Daniell: Okay, thanks. So, you know, I work with a lot of local TV stations, some of them very small. They may have been broadcasting only a subset of their meetings, the ones that are put on TV, but there are usually a lot of other meetings happening that maybe weren't being broadcast.
And those folks seem to have been doing not just the same amount of work, but sometimes double or triple the amount of work, because they're helping to then get those other meetings on Zoom or Google Meet or whichever ways they're virtualizing it, and then often putting those on TV and streaming.
So what I've seen is that COVID-19 has left the folks who are actually running the channels, putting the meetings on air, and dealing with that workflow with even more work to do, and a larger audience, because more people are watching those meetings.
So when I'm hearing from folks reaching out about captioning, it's because they're being asked to do even more meetings for a larger audience, and now the concern is greater. So I've noticed that.
Matt: Broadcasting kind of as a whole has not really changed so much as an industry because of the virus, but definitely what has changed is broadcasting in relation to municipalities, because oftentimes these meetings are held in-person and we can't do that anymore.
So how are we going to do that? Oftentimes it's now through Zoom, and we're changing our workflow entirely just to be able to go out to live streams or broadcasts over, you know, a virtual network.
Regina: Thank you. And Daniell, when adding captioning into broadcasting workflows, what decisions or adjustments are you noticing that municipalities often have to make?
Daniell: I mean, first off they're often working within a pretty constrained budget, so if they were doing human captioning for one core meeting, they're now trying to figure out how they can budget to cover a larger amount of meetings and they have to determine, are they going to be able to do 30 hours in a month or 45 hours in a month or 60 hours in a month?
Or are they going to be able to cover everything that they're doing on their local channel? So they're really looking at how much they have to do. And then there are, you know, factors on whether they want to do it locally or in the cloud, but they're usually starting out by trying to figure out how much they have to cover to be compliant.
Matt: Sure, sure. So one of the big things I've noticed you have to look at when you're starting to consider adding captioning is, where do you want the captions to be embedded? At what point in the signal flow do you want them to be added? You can do it downstream of the RTMP encoding, or upstream, or, you know, at the broadcast level.
So you really want to think about where you would like these captions embedded before going anywhere else. And you've got to think of things like, is it going to fit into what I have currently? Do I need to add or replace any hardware or software in this workflow for captions to be added at all?
Regina: And another question for you: what should be considered when researching solutions? Daniell?
Daniell: So obviously the things that need to be considered are what your local needs are and how those compare, so things like quality. Obviously you want the highest quality you can get, and to get that higher quality you need to have word models where you can put in the names of your local councilor or your mayor, or acronyms, so that those key things are accurate. So those are things to look at.
Also, you know, looking at whether it's going to be hosted locally or in the cloud, whether your data is going to be something that's only in your hands or something that's out there, if that's a concern. But really, the big thing I think it comes down to for folks is trying to figure out what they need to do to stay compliant with ADA regulations, and how they can best accomplish that while meeting the quality expectations of their local community.
Matt: Sure, so I had some of the same points as Daniell here. Going into this, you have to consider: are you looking for an AI captioning solution or are you looking for a live captioner? And about how many hours per month of captioning are you looking to do?
Just so you have an idea going into it, so you can come to us with, you know, a good idea of what we can give you on pricing and things like that. All these things are good to keep in mind when looking into adding live captions. After the webinar, I'll be happy to discuss any pricing information that you're looking for, for any of these solutions.
So we're going to start off this webinar with a general overview of what the captioning process looks like. You're going to see a few slides with this sort of breakdown throughout the presentation, so it's a good idea to get familiar with this sort of workflow.
This particular chart uses the HD492 hardware encoder or Alta IP encoder as an example. To get captions added, the video is passed through an encoder, the audio is sent over our caption delivery network called iCap, and the audio is received by either Lexi or a captioning agency of your choice. Most major captioning agencies have access to iCap, and if they don't, they can feel free to contact us for a download of the necessary software. Once the caption data is created, it can then be sent back to the encoder to be embedded into the signal.
So after the last slide, you're probably wondering what Lexi is and how it can be used as an alternative to a human captioner. Lexi is EEG's automatic speech recognition system capable of creating caption data in English, Spanish, and French. This webinar is actually being captioned by Lexi if you'd like to see it in action.
Lexi is a cloud-hosted service and can be accessed by any of EEG's encoders with very little setup. Lexi has very low latency, with captions appearing approximately 2-3 seconds after the words are spoken on stream. Again, it can be accessed from any EEG encoder, so Lexi will work with any encoder we mention today. The base accuracy of Lexi starts at approximately 90%, and this can be increased by having a single speaker with clear audio, and also by using Lexi's Topic Models feature.
So Topic Models are a great tool that can help Lexi understand what is being said and can help improve overall accuracy from the base of 90% during your programs. Lexi's Topic Models are especially useful in cases where names, industry-specific jargon, and terms that may not be found in the dictionary are spoken. Common examples for municipalities are the name of the local mayor or a nearby park.
Users can select EEG-developed Topic Models or generate their own by supplying Lexi with word libraries or relevant URLs. This feature enables Lexi to recognize topics, immerse itself in distinctive vocabulary, and observe context through the absorption of relevant web data.
Lexi on its own is available in English, Spanish, and French, but if you're looking to expand your reach, iCap Translate can translate caption data to and from eight major languages at very affordable rates. Lexi Leash is a free tool that gives you more control over your Lexi jobs. We'll talk about that in just a minute.
Here we can see a screenshot of the dashboard for Lexi's Topic Model Manager. This is where you can manage all of your custom Topic Models and choose a Core Model.
EEG has several Core Models that can be selected as a topical base vocabulary, which can give Lexi a greater understanding of the discussion at hand. For any municipalities watching, we have a Core Model with over 1,000 entities and phrases geared towards government and legislative purposes that you might find particularly useful. From there, you can import your own words and URLs for Lexi to learn from.
A great use of this feature is to upload names of nearby towns, names of commonly mentioned people, and any other local information that may be referenced often. Lexi's Topic Models are a very simple way to customize Lexi into what you need to get out of captioning.
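[Editor's note] The word-library idea described above can be sketched in a few lines. This is an illustration of assembling a local vocabulary file of the kind you might supply as a Topic Model word library; the file format, the helper name, and the specific terms are all assumptions for the example, not EEG's documented interface.

```python
# Hypothetical local terms a municipality might want Lexi to recognize.
local_terms = [
    "Mayor Jane Doe",          # local officials (made-up names)
    "Councilor John Smith",
    "Riverside Park",          # frequently referenced local places
    "Maple Street Bridge",
    "CDBG",                    # common municipal acronyms
    "RFP",
]

def write_word_library(terms, path):
    """Write one term per line, skipping blanks and duplicates."""
    seen = set()
    lines = []
    for term in terms:
        term = term.strip()
        if term and term.lower() not in seen:
            seen.add(term.lower())
            lines.append(term)
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
    return lines

written = write_word_library(local_terms, "municipal_terms.txt")
```

Keeping a file like this under version control makes it easy to update the vocabulary as council membership or local place names change.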
iCap Translate is an easy and affordable way to bring greater accessibility to a larger audience. This service will allow you to take captions created by either Lexi or a captioning agency and translate them to or from English, Spanish, French, Italian, Portuguese, German, Danish, and Maori. This means that viewers of broadcasts or live streams can follow along with their content, even without understanding the program language, allowing for a broader audience, which can also help increase viewership.
As mentioned before, here's a screen cap of the Lexi Leash interface. Lexi Leash is a free Windows application that makes managing Lexi jobs easier. This tool was created with organizations in mind that may not be staffed with experts in the captioning and production field, so it may be very beneficial for them to easily track Lexi usage quotas, monitor current jobs, restart similar previous jobs, and prevent accidental overage charges. So now let's move into on-premise solutions.
Regina: This question is for Matt. Could you tell us the main differences and similarities between broadcast use and in-room meetings?
Matt: Sure. So much of the workflow between live captioning and broadcast for in-room presentations is actually very similar, but the equipment used can be certainly different depending on what you're looking to accomplish. Oftentimes, if you're looking to display captions in the presentation room, you'll be searching for a caption decoder rather than an encoder.
Our HD492 encoder also has the ability to decode captions on the video output for open caption display, but we also have more specialized equipment that allows for much more customization, which we'll get to in just a few minutes.
Regina: And question for the both of you: How do you recommend municipalities determine whether they should adopt cloud or on-premises solutions? And as a follow-up, what workflows do you recommend for each? Matt, what are your thoughts?
Matt: So cloud solutions versus on-premise solutions are going to come down to a couple of key things. Are you able to configure your current setup to allow for new inbound and outbound connections? What type of licensing works best for your company?
Do you need your caption content to have the utmost security? Things like that are all things we'll ask you when you're debating going cloud versus on-premise.
Regina: And Daniell?
Daniell: Sure. So one way I help people look at their workflow is, are you looking at something where there's a person whose job it is to start and stop for the meetings? And in those cases you're going to want to use, let's say, a cloud solution where you're paying for a certain number of hours.
But if you're looking for something that's just running 24/7, just like your actual broadcast transmitter is, where everything going in is getting captioned on the way out so that it's no one's actual responsibility to start and stop, that might cost more upfront in an on-prem solution, but then you have less operating cost in somebody's staff hours. And in some places there's only one person, or one and a half people's worth of time.
So one thing I ask people to consider is, do you want to make that an all-the-time, 24/7 captioning setup, which leads towards doing it on-premises, or only a fixed number of hours in the cloud, which is a different way of doing it?
And both are valid, just depends on what your city or county, and really what the attorneys feel is going to best cover your liabilities under the ADA. A lot of stations felt like because they didn't fall under the FCC size requirements, they were safe, they didn't have to deal with captioning because the FCC wasn't pushing them. But the ADA is something that they do have to, you know, provide effective civic communication, and captions are one of the most popular ways to do that.
So what I really recommend before they decide exactly what to do is to look at their budget, but also look at what the attorneys for the city and the county feel is the most effective way to cover their liabilities.
Regina: Great response. And Daniell, what - what are municipalities' main concerns when captioning in-room meetings?
Daniell: Oh sure. For in-room meetings, latency is huge. They don't want to have it displayed 45 seconds later, 35 seconds later. I know in television, a lot of times we might be used to seeing a lot of latency when, you know, you're watching a professional sports game or news and it's not right along, but with in-room meetings, definitely having a short latency is really crucial so that people who are there following along in-person can follow along as best as possible.
Matt: Sure. So my biggest thing for their concerns has been, where are you going to actually place the captions? Because oftentimes when you have, you know, a room with one screen in it, that screen is actually going to be used for a presentation or something similar. So that's a big thing: how am I actually going to add captions to an in-room meeting at all?
And another common concern is, how is everybody going to be able to see the captions, and in what size room? They want to make sure that everybody has the same ability to see the captions equally throughout the same room space.
Regina: And Matt, how do you think visuals play a role for municipalities?
Matt: Sure. So visuals in a meeting provide greater understanding of the topic being discussed, because many people are actually visual learners and can benefit from having a presentation to follow along to.
Additionally, for people who are deaf or hard of hearing, visuals can be an invaluable tool for understanding the topic being discussed. Captioning also falls into this category of understanding for visual learners and people who are hard of hearing.
So that's going to bring us into Lexi Local. One of EEG's biggest priorities lately has been adding accessibility options to local settings and meetings. As everybody begins to move back into the office, we're going to start seeing a resurgence of in-person meetings at every level of government.
If you're looking to create a completely on-premise solution for broadcast, live stream, or in-room productions, then Lexi Local can be an ideal choice. Lexi Local offers the same performance and accuracy as the cloud-based version of Lexi, but it's completely on-premise and works without any cloud connectivity.
A big advantage of Lexi Local is that it provides complete internal control of data flow, and data never touches the cloud, which is perfect for any content featuring classified or sensitive information. This is a rack-mount unit that can be placed into your facility and used on an unlimited basis.
If you're captioning a large amount of content and are worried about the month-to-month billing of other captioning methods, then Lexi Local may be the perfect solution, with its annual unlimited licensing model.
Here you can see a captioning workflow, including a Lexi Local unit. You'll notice that Lexi Local replaces the captioner or cloud version of Lexi, and the HD492 will receive caption data directly from the Lexi Local server that it's plugged into.
To say it again, there's no internet connection required to add captions with Lexi Local. This example also shows that, from here, you can send the caption video to both a broadcast stream and to the AV610 caption decoder for an in-room display.
So everything you need to operate a basic captioning system is included on the Lexi Local server. You can connect multiple encoders to Lexi Local and, if you need to run more than one channel of live captioning, there are additional costs to run multiple channels simultaneously, but you don't need any extra pieces of hardware. You can also connect the system to custom Topic Model data, which is stored entirely locally.
To connect to external captioners, you can also use dial-in cards or a customer-supplied VPN. So as I've been alluding to in the last few slides, EEG has a product that we've created specifically for in-room presentations called the AV610 CaptionPort.
The AV610 is a caption decoder that allows you to add open captions over your video output, as well as simply display captions in the presentation room. It also has the ability to scale any input video down by 15%, allowing for a dedicated space to place captions without interfering with any images. The AV610 is compatible with any source of iCap, including Lexi, and accepts character sets not supported by a standard caption encoder. It can also be configured to receive captions over RS-232, telnet, and an optional modem.
So one way of using the AV610 in an in-room setting is with the video scaler, which allows you to add captions without interfering with the video. The AV610 can be configured to scale the video and allow for room above or below the presentation space, depending on which works better for the environment it's in. It takes SDI in and outputs SDI, which can be converted to HDMI for screen displays if necessary.
Another workflow for the AV610 is the ability to make a larger text display with a static image so you don't need to - you don't need an input source of SDI video at all. You'd upload the image before the event–a logo of your organization or conference, for example–and the AV610 will generate its own output video signal from its internal processor. This is a simple way to add accessibility options to any meeting. And that brings us into live streaming.
Regina: Daniell, what are some unique ways you've seen municipalities adapt to the need to move meetings and events online?
Daniell: Sure, I mean, one big thing I've seen is that they've often had to use multiple platforms. They've had folks who may be already using Zoom and now they're - some are using Zoom and then they're trying other things in Google Meet and other platforms. And then they're also working to add captioning because they want to have that also be compliant.
So, you know, the folks running these channels, these unsung heroes, have had to not only find ways to get these stream meetings onto their channels, they also need to look for ways to get captioning into those communities.
Matt: Sure. So there's obviously been a huge push to move everything virtual due to COVID, and municipalities have been no exception.
Many municipalities that had been doing everything in person have been quickly forced to adapt to the online world. Most meetings are now held on Zoom and other platforms, with some of these meetings actually being pushed out to broadcast as well, as a replacement for standard board meetings.
Adaptation of streaming has become more widespread due to COVID, and many of the organizations who have not had to use any live streaming before may now have a better understanding of it moving forward and how they can utilize it.
Regina: And this question's for you both. What are a few reasons municipalities don't take advantage of adding closed captioning to their streams? Matt, let's hear from you first.
Matt: Sure. So I would imagine that the biggest factor would be unfamiliarity with adding captioning to a live stream, because it might seem like a daunting task if you've never done live streaming at all. However, adding captions to a live stream is actually not as difficult as some might think.
Also, some people might be under the impression that their video player of choice doesn't support closed captions, but we actually have a solution that will work with the majority of popular streaming platforms.
Regina: And Daniell?
Daniell: Sure, so one thing I hear a lot when I'm talking to folks who run the local channel, the public access channels, or the school or government channels, is that they think they're not required to because they're below the size at which the FCC limits really apply. And that's a way that people have kind of made themselves feel better about the fact that it seemed too expensive, too complicated, just out of reach.
But what I think they start to realize, as the city attorneys and county attorneys talk to their peers and find out about other cities and counties that have been sued, and where very expensive ADA lawsuits have started because of the lack of captioning, is that all of them are required under the ADA to do this. If it's a city channel or a school channel, it's directly required; if it's a public access channel working on behalf of the city and putting the city's meetings up, then it's required, you know, to do that on behalf of the city.
So I think the main thing is people aren't realizing how much liability they have - they have to address, because as they figure that out, that really prompts them to reach out and say, "Hey, I need to figure out a solution or what that costs."
Regina: And another question for you, Daniell. Which destinations are you most often seeing municipalities streaming to?
Daniell: So a lot of folks are streaming to the big tech destinations, you know, they're streaming to YouTube or Google, or they're streaming to Facebook. And then they've got their own platforms; you know, the city channel might go out through the playback system, it might go straight to a Wowza server, or it may go to Vimeo.
So there's - there's a range of different platforms. What I am seeing mostly is it's no longer something where they say, "Well, we only stream to this one place." They almost all are using two or three platforms, or at least testing different platforms.
Matt: Yeah, same here. It's pretty much been the big platforms that you see regularly. You'll see Facebook Live, you'll see YouTube Live, Vimeo, and even Twitch nowadays. So really, how do you go about adding captions to these live streams?
So we actually have a streaming platform called Falcon, which is a perfect solution to this problem. Falcon acts as the middleman between your streaming start point and the live streaming platform of your choice. You point your stream to us, add captions, and then point Falcon to the content delivery network, or CDN. Falcon is compatible with most major streaming platforms, such as YouTube Live, Facebook Live, Twitch, Vimeo, Vbrick, like we mentioned, and many more.
We've seen a lot of new interest in Falcon lately, as the majority of caption agencies recognize that Falcon is the easiest and most seamless way to caption live streams. Falcon can be purchased and managed completely through our cloud site at eegcloud.tv.
Here's another very similar looking diagram that I'm sure we're all getting familiar with. In this example, Falcon sits between the streaming media encoder and streaming platform.
The live video source and program audio are uplinked to the cloud using RTMP through a streaming encoder, whether that's AWS Elemental, Telestream Wirecast, OBS, or similar. You could also use a hardware streaming encoder if you'd like, such as the AJA HELO.
Your program audio is then captioned, again, either by a live captioner or by Lexi, and live caption data is returned to Falcon right away and embedded into the stream. So it's very similar to any of our hardware-based methods, only without any additional hardware required to add captions.
We also have an HTTP version of Falcon available for any platforms that have a separate HTTP uplink specifically for captions. Let's take Zoom, for example. Instead of sending an RTMP stream through Falcon and having captions added to it, you have a separate HTTP link that only the caption text is sent to.
So if you're the presenter or organizer for the meeting or event, you can get the link by going to the CC display at the bottom of the window and retrieving that URL, which is typically unique for each Zoom meeting. So with this method, you can have caption data added directly within a Zoom chat, again, with no additional hardware required. So Falcon is our solution to live captions. Let's move into post-production content.
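[Editor's note] The HTTP caption uplink described above can be sketched as follows. Zoom's third-party closed caption interface accepts each caption line as an HTTP POST to the meeting's unique caption URL with an incrementing sequence number; the exact parameter names (`seq`, `lang`) and the sample URL here are assumptions for illustration, so check Zoom's documentation before relying on them.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_caption_post_url(caption_url, seq, lang="en-US"):
    """Append the sequence number and language to the per-meeting URL."""
    sep = "&" if "?" in caption_url else "?"
    return caption_url + sep + urlencode({"seq": seq, "lang": lang})

def send_caption(caption_url, seq, text):
    """POST one line of caption text; the body is the plain caption text."""
    req = Request(
        build_caption_post_url(caption_url, seq),
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain; charset=utf-8"},
        method="POST",
    )
    return urlopen(req)  # real network call; not exercised in this sketch

# Hypothetical URL of the kind retrieved from Zoom's CC button:
url = build_caption_post_url(
    "https://wmcc.zoom.us/closedcaption?id=123&ns=abc", seq=1
)
```

The sender increments `seq` for every line so the platform can order captions correctly even if requests arrive out of order.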
Regina: Matt, what has been your experience helping municipalities who had a backlog of videos and needed to get accessible as quickly as possible?
Matt: Sure, absolutely. So captioning VOD and backlog content is an entirely different concept from captioning live streams, and it's important to note that we have very different products for each.
But there have been times where we've been asked what the best way to add captioning to older content is, and of course we do have products that quickly and automatically add captions in a post-production format as well. Generally when we run into these situations, it's people who have a lot of older content to caption and they need something simple and quick to do the job.
Regina: And Daniell, once an organization has actually caught up on captioning their backlogged content, what solutions or methods do you recommend that they adopt to ensure that they continue to stay compliant?
Daniell: Yeah, again, I mean, the biggest thing I think is to make sure that attorneys from the city or the ADA commissioner or–depending on the size of the local entity, they have different folks who are looking at this–but that they work with them and determine that yes, we need to do all the meetings, or at least all the public facing meetings, or we need to have all the meetings and every civic promotional video or every press release video.
Some cities have been very consistent at having ASL interpreters at every COVID-related press event. Others have struggled to have that. Some are thinking that they want to have captioning at every single event that the city holds because they want to be covered. Others feel that might not be necessary. So really it's making sure that they see all the opportunities and that they're identifying them.
Matt: So EEG's solution to captioning VOD and backlog content is called Scribe. Scribe is a Windows-based post-production application which can make getting caught up both easy and affordable. For VOD content, you can upload transcripts or automatically generate transcripts with Lexi and create a timed caption file output in a fraction of the real-time video length.
You can also edit any existing captions alongside the video on a timeline, allowing for easy and quick fixes. We also have a product that can work in conjunction with Scribe called CCPlay FilePro, which gives you the ability to stitch the caption file to the video once it's created.
Once your caption files are created, you can submit them to the EEG Cloud website for a full QC check against the video asset, which checks for spelling errors, frame rate mismatches, caption timing, and more. Issues found are highlighted for you so you can perform a quick fix before publishing. Along with all of our other software and cloud-based products, Scribe can be demoed for free from our website. And feel free to contact me after the webinar to get started here.
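[Editor's note] To make the caption-timing QC idea concrete, here is a minimal sketch of one kind of check such a pass performs: detecting overlapping or reversed cue timings in an SRT file. This illustrates the concept only; it is not Scribe's implementation, and the function names are invented for the example.

```python
import re

TIME = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")

def to_ms(stamp):
    """Convert an SRT timestamp like 00:00:01,000 to milliseconds."""
    h, m, s, ms = (int(x) for x in TIME.match(stamp).groups())
    return ((h * 60 + m) * 60 + s) * 1000 + ms

def timing_issues(srt_text):
    """Return a list of (cue_number, problem) tuples found in the file."""
    issues = []
    cues = []
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = (p.strip() for p in lines[1].split("-->"))
        cues.append((to_ms(start), to_ms(end)))
    for i, (start, end) in enumerate(cues, 1):
        if end <= start:
            issues.append((i, "end before start"))
        if i > 1 and start < cues[i - 2][1]:
            issues.append((i, "overlaps previous cue"))
    return issues

sample = """1
00:00:01,000 --> 00:00:03,000
Welcome to the meeting.

2
00:00:02,500 --> 00:00:04,000
First agenda item."""

problems = timing_issues(sample)  # cue 2 starts before cue 1 ends
```

A real QC pass layers many such checks (spelling, frame rate, reading speed) and points you at the offending cues, which is what makes the pre-publish fix quick.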
Here is a screenshot of the Scribe interface so you can get an idea of what working in this environment looks like. Note the text bucket at the bottom, as well as the Lexi ASR button at the top. The Lexi ASR feature within Scribe works very quickly, creating captions from the program audio in approximately one third of the program's length.
In addition to Lexi, Scribe also allows you to import transcripts to be aligned, QC'd, and exported as a caption file or stitched to the video with the CCPlay FilePro bundle. Scribe is very simple to use and also provides a seamless workflow for anybody looking to add captions in a post-production setting. So that about wraps up our webinar portion. I believe we have time for a Q&A now, Regina?
Regina: Yeah, so we have now reached the Q&A portion of the webinar, and we will also be introducing Bill McLaughlin, the VP of Product Development for EEG. So if you have any questions and you haven't already done so, please enter them into the Q&A tool at the bottom of your Zoom window.
So the very first question that we have is, can you choose to use Lexi Local for one event but a live captioner via iCap for a different event?
Matt: Yeah, so you can do that with Lexi Local. And then if you also have your HD492 encoder, they can connect via iCap to the HD492 encoder, or you can do it through the Lexi Local unit with a VPN, I believe. Bill?
Bill: Yeah, if you want to switch the encoder's connection between the local-only network and a remote network with iCap, that's an encoder configuration that you would switch on a per-event basis.
But yeah, you can basically use - you can either use a captioner that's in the network, you know, through the Lexi Local box without going out to the cloud, or you can reconfigure the encoder to talk to the cloud and use a captioner through the cloud.
Regina: We have another question regarding translation, asking if we provide ASL-to-captioning. A use case would be if the speaker is deaf and presenting in ASL, for example.
Bill: Oh, that would be cool. No, the technology, I don't think, is really there yet to kind of take that - you know, take a video, say, of a sign language speaker and to really be able to understand that and to translate into words. I mean, that would certainly be a very cool technology.
You know, it's interesting to see generally how sign language and captioning, you know, for some members of the audience, these are going to overlap in functionality. Some people, especially, you know, obviously deaf individuals who have been doing ASL for a long time are going to prefer that to captioning. On the other hand, there's a lot of individuals and audiences who really don't know ASL well or at all and who are going to benefit more from captioning.
So sometimes it presents as a little bit of an either/or, a question of which one are we going to do, but I think it's important to understand that they both have their unique benefits. There are times when both is the right answer. You know, it's a question of exactly who the audience is, and what percentage of the audience that wants the captioning is going to understand sign language or be better served by a sign language interpreter. I don't know if you want to add to that, Daniell.
Regina: So those are all the questions that we have received. I just want to give a huge thanks to Matt, Daniell, and Bill for sharing your insight today, and thank you so much to all the attendees for joining us for Closed Captioning for Municipalities.
If you have any questions about EEG, Municipal Captioning, or any of the topics that we discussed today, you can reach out to Matt, Daniell, or me. Within the next few days you will receive an email from me with a link to the recording of today's event, as well as information about upcoming webinars. So thank you all again and have a great rest of your week!
Daniell: Thank you!
Bill: You guys did a great job. Thank you.