EEG is now an Ai-Media company, bringing leading, vertically-integrated solutions and services to customers around the world. Read more here.

October 26, 2021

Webinar Replay: Ask Us Anything About Captioning

EEG

EEG Video and Ai-Media keep bringing the show to you with our NAB Virtual Event Webinar Series: We’re exploring the most important developments in closed captioning with information-packed online events.

The second show in this set of live webinars is now available. Originally held on Thursday, October 14th, Ask Us Anything About Captioning was an interactive webinar that fielded attendee questions on all things captioning, from live stream capabilities to the future of ASR. A panel of Ai-Media and EEG experts was on hand to provide insights to broadcasters, educators and content creators. Bill McLaughlin, CTO at EEG; Phil Hyssong, Chief Customer Officer at Ai-Media; Jared Janssen, General Manager, Key Accounts - Americas at Ai-Media; and Matt Mello, Technical Sales Engineer at EEG shared their knowledge as attendees got the chance to ask their questions about captioning.

Ask Us Anything About Captioning • October 14, 2021

The combined strengths of EEG Video and Ai-Media form a one-stop-shop resource for captioning, translation and transcription solutions. Look to us for the latest closed captioning news, tips, and advanced techniques.

Visit here to see Ask Us Anything About Captioning and all previous webinars!


Transcript

Regina Vilenskaya: On behalf of Ai-Media and EEG, I'm very happy to welcome you to the NAB virtual event webinar series. I'm excited to introduce Tony Abrahams, CEO of Ai-Media, and Phil McLaughlin, CEO of EEG. Tony?

Tony Abrahams: Thanks very much, Regina. My name is Tony Abrahams, I'm the co-founder and CEO of Ai-Media, and I'm devastated that we can't all be together at NAB in person this year. But we've tried to do the next best thing which is to make all of our great content available to all of you virtually. And I'm delighted to be joined by a person who's very well known to almost all of you I'm sure, the CEO of EEG, Phil McLaughlin. Phil, how are you?

Phil McLaughlin: Oh, very good, Tony. Glad to be with you.

Tony Abrahams: And I think looking at one thing that's obviously changed since the last time we appeared at NAB is that Ai-Media and EEG have joined forces. And Phil, after running that business for 25 (UNKNOWN) years, what made you decide to join forces with Ai-Media? I guess two questions: why now, and why Ai-Media?

Phil McLaughlin: This has been very heavy on my mind the last couple of years: we're the company that's built this level of excellence and momentum in the US market, so where can we move from here? And two things happened. We needed more international exposure as a company, and also, with our movement into captioning services with our Lexi automatic captioning product, we had become a significant service provider in the US market. From that standpoint, for the last five years we've had the pleasure of dealing with Ai-Media as a trusted partner, a partner with customers in the U.S., Canada, and all around the world. And we thought what could happen here, which is very much coming to fruition now, is that we could join together with Ai-Media and combine our products, where we have almost no overlap but a tremendous amount of compatibility. And we could bring these products, products and services, I should say, both to the U.S. market and around the world.

Tony Abrahams: And I think also what we've been able to offer that's been really compelling, only in the last few months, we only completed this acquisition in May, is really offering that true one-stop shop right around the world: both the fully automated captioning through EEG's leading Lexi system, and the traditional premium quality captioning that Ai-Media has been known for, for many, many years, delivering accuracy of over 99.5%. But also, there's this really interesting spot in the middle with Smart Lexi, which Ai-Media invented by kind of moving down from premium, and that you'd moved into by moving up from Lexi, and it's this happy place where we meet with Smart Lexi. So hopefully that's something for everyone in this great product (UNKNOWN). So Phil, thanks very much for joining us. And Regina, I'll hand back to you.

Phil McLaughlin: It was a pleasure. Thank you, Tony.

Regina Vilenskaya: Thanks, Tony and Phil. If you would like to hear about any of these topics or products in more detail, please join us for the NAB virtual event webinar series. We look forward to having you join us. Thank you.

Hello, everyone, and thank you for joining us for Ask Us Anything About Captioning. This is the second webinar of EEG and Ai-Media's NAB Virtual Event Webinar Series. My name is Regina Vilenskaya, and I'm the Marketing Lead at EEG. With me on this webinar are panelists from EEG and Ai-Media: Bill McLaughlin, Phil Hyssong, Jared Janssen and Matt Mello. Today this panel will be answering your questions about captioning. We've already received many questions from people who have signed up for this event, and if there are any questions that you would like us to answer, please submit them in the Q&A tool at the bottom of your Zoom window. With that, I'm now going to welcome our featured speakers to kick off Ask Us Anything About Captioning. Welcome, everyone.

Matt Mello: Thanks, Regina. I appreciate the welcome here. Hi, Bill. Hi, Jared. Hi, Phil. We're happy to start answering questions for you guys, and I do see that we have a ton of them rolling in as we get started here. So, let's just get right into the first question, which is: what are the different types of captioning? Well, I take this as how do you display captions, how would captions be presented on a video? The two different types that I think of immediately are closed captions, which the viewer at home can enable or disable as they want to, or open captions, which means the text is actually part of the video image itself and can't be turned on or off. That's my take on it. Does anyone else have thoughts on what different types of captioning there are?

Bill McLaughlin: Sure, I mean, you could talk about recorded versus live, and whether, for a live caption, you're going to use a stenographer or respeaker, a skilled real-time transcriptionist, or else real-time automatic captioning, compared to when you have clips or programs, any sitcom, anything that's pre-prepared. Those captions a lot of times will be held to a somewhat different quality standard. You can really get all the timing, positioning, speaker labels, everything exactly right; it's like a video editing process. Compare that to live captioning, where typically it's mostly just the verbal content, maybe some music or simple sound effect indicators, and it typically scrolls through without being blocks of captions.

Phil Hyssong: I'm gonna take it just a little bit simpler, you guys. I'm gonna say that there's good captions, and there's bad captions. You know, we've discussed a number of different display types and uses, and you guys have done an excellent job on that. But I want to say that there are really just about as many different types of captioning as there are programs. It really becomes an individualized process, and that's one of the things that's kind of cool about Ai-Media and EEG, in my opinion: when we work with a customer, we talk with them about what their program is and what they need, and we solve that specific problem. So, there's good captions, bad captions, and they're all displayed in any number of formats. Good job, guys.

Matt Mello: Alright. Thanks, guys. Next question that we have here is: how do I choose between human and automatic captioning? That's a very good question. So there are two very different types of captioning as far as how the captions can be generated: you can get them from humans sitting there listening to the audio and creating captions, or from an automatic speech engine, like Lexi, our speech recognition service. So, how you choose between them... this kind of seems like a question to start off with Bill here.

Bill McLaughlin: Yeah, I mean, there's a lot of things to look at. I'm a technical guy, so I'll say a basic angle is the quantitative one: what is the accuracy of the captions in both cases? You could use a simple metric, like a word accuracy rate, or a more complicated one, like NER, which conveys a little bit more of whether the meaning of something is preserved. NER is often a little bit friendlier to captions that are paraphrased but that are extremely clear in their meaning, whereas a word accuracy rate is just verbatim: what was said is what's typed. So that's the quantitative angle, and it bears measuring, because it can be very specific to a specific kind of content. How many speakers you have, what they sound like, what they're talking about, how fast they're talking: these things can make a pretty large difference. And then there are questions about budget and your audience expectations that need to be balanced with trying to get a certain result, trying to measure that result, and trying to make sure that it's something that fits your overall goals.

How much material do you have where audiences can benefit from captioning? And, you know, what's the budget right now? Realistically, everyone operates under some constraints. And sometimes one of the most important things is getting as much material captioned as possible. So, you know, I think all of those can be factors.
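For readers curious what the word accuracy rate Bill mentions actually measures, here is a minimal sketch of the generic metric (word error rate over a Levenshtein alignment). This is an illustration only, not EEG's or Ai-Media's scoring tool, and the sample sentences are made up:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """(substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

ref = "the quick brown fox jumps over the lazy dog"
hyp = "the quick brown box jumps over lazy dog"   # one substitution, one deletion
print(f"word accuracy: {1 - word_error_rate(ref, hyp):.1%}")
```

A word accuracy rate is simply one minus this error rate; NER-style scoring differs in that errors are weighted by how badly they damage meaning.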

Jared Janssen: You know, I'll actually add to that: budget is one element, but also timeframe. Some events come up last minute. Perhaps a premium human delivery is preferred, but we're only an hour out, two hours out. So the amount of lead time can play a factor in choosing human delivery versus ASR as well.

Matt Mello: Yeah, and I can even add on to that and say that with something like a human captioner, you might have to schedule that beforehand, whereas with ASR, you can kind of just start and stop it as you need it. But that comes with the negative that you have to start and stop it yourself rather than having someone do it for you. So again, there are pros and cons to each, and there are differences between the two, but luckily you can choose between the two as you need to.

OK. So, the next question that we have here is: how do I caption Microsoft Teams meetings? This has been a large discussion within EEG and Ai-Media recently, because I think this is actually a new development for all of us. Before, I think there was only the option to use Microsoft's built-in auto captions, but we've been working with them recently. Someone probably has more input on this than I do. I think, Bill, you've been working pretty directly with Microsoft on this project, if I recall correctly.

Bill McLaughlin: Yeah, our product team was asked to help pilot the solution with Microsoft and have some of the accessibility experts in our company really look at what they've done and look at the formatting. And I mean, that was an honor. It's allowed us to get some advance work on the solution, which I think has only been rolling out to Microsoft accounts within the past month. You do need to make sure it's enabled through the administrators of your Teams account; I don't believe it's enabled by default yet. But essentially, once it's enabled, it works similar to CART features that you might be familiar with already in Zoom or on YouTube, where you can post the captions either through EEG Falcon or through some other software integrations directly from a real-time captioner, and they'll show up embedded in the screen. And, to talk a little bit more about the human versus automatic issue, you can see through Microsoft putting considerable investment into this that there really are a lot of customers out there looking for solutions both ways. Microsoft Teams has had an automatic captioning feature.

That automatic caption feature is often quite good, but it doesn't have a lot of flexibility to customize, and it isn't always as good as a client wants it to be. They listened to the user feedback and said, we should give customers an opportunity to put in a higher quality caption option than this out-of-the-box option. And to their credit, I think it looks really good, and it's up. So, it's certainly something we can help you with.

Matt Mello: Yeah, it's a very exciting new development for all of us. It is very new; up until now we haven't been able to integrate with Teams, so this is definitely new for us also.

Bill McLaughlin: And if you have a platform that you can't put the captioning built in on, it's interesting to note, if people haven't used it, that a solution like Ai-Live can be used as kind of a sidecar. At that point it's not quite as optimal, because you're looking at two different windows: one for the main meeting and a separate browser window for viewing the caption transcript. But the caption transcript in Ai-Live is very resizable, and you can choose how much history you want to view. So, it is a pretty good caption experience; it's just outside the box for a meeting platform that doesn't support a built-in CART option.

Phil Hyssong: This is Phil. What we're talking about here with Teams is very similar to what we have going on right now in Zoom, correct? I'm watching the captions in Zoom, and Ai-Media and EEG have created an interface here so that we're able to see the live captioning directly in our Zoom meeting. It's just like what we're watching today, correct?

Bill McLaughlin: Yeah, yeah, absolutely. And, yeah, Teams now supports that in a very similar way.

Matt Mello: Yeah. Excellent. Alright. So, the next question that we have here is: do you have, or are you creating, an application to work live with Zoom? My understanding is that we actually have this already. I think that captioners can tap directly into a Zoom chat and caption directly in it as is, but EEG also has a solution for this with our Falcon encoder, where it can listen in to the audio in a Zoom meeting and create captions to put in there too. Phil, do you have some thoughts on this, maybe?

Phil Hyssong: Well, that's what I had just said, Matt. That's what I was thinking, that we have this solution already in place, and obviously we do. I was just trying to point that out for those who are watching: if you click on the caption button at the bottom of your screen and view them, you can see how the interface is already working. And kind of picking up on what Bill has already said, there are a lot of platforms out there, and so we have been working at building interfaces and integration processes with different platforms. So, if someone who is a part of our webinar today has a particular platform that they would like to see us integrate with, by all means let us know, and we can certainly look into it. We can't promise that, because everybody has to cooperate in that process, but we can certainly look at providing that service.

Matt Mello: Yeah, absolutely. We are always looking to make everything as accessible as possible, so integrating with these platforms is a huge step towards that goal.

Alright. So, the next question is: I want the ability to add English, Spanish and Japanese to my live streams; can you help? So, yeah, that's a fair question. We do have the ability to do that, in somewhat of a roundabout way, and it does require a little bit of manipulation with your stream as far as the endpoint and how you get the feed into a stream. Our Falcon solution does allow you to encode captions in up to four to six languages depending on your workflow, English and Spanish being pretty much the easiest way to do that; Japanese becomes a little bit more technical. Bill, do you want to touch on that a little bit?

Bill McLaughlin: Yeah. Well, the hard part is the Japanese language in the proposed workflow, because for the English and Spanish you're going to be able to use Falcon to embed captions in an RTMP stream and pass that through to any platform that supports embedded captions. Or you'll be able to do it with a hardware encoder too, and then an SDI-to-streaming media converter, an Elemental for example, or other products like that. So, the English and Spanish is going to be simple. What's hard about the Japanese is that the embedded caption standards really only support English and other European-character languages. So, the Latin-character alphabet, with some accented characters like you see in Spanish, French, German, etc. But they don't support East Asian, South Asian, Middle Eastern, etc. So, to really get captions in those languages you need to not use the embedded caption option, which is the most interoperable option. But we can do it either as open captions in Falcon, which means the language is overlaid on the screen and all the viewers will see it, as opposed to a CC button.

Or we can do it by originating a direct HLS stream which you can put in your player. That'll actually support up to six languages, which really can be any world language, and you'll be able to see that in a player; it's just not going to be transmitted through to, say, a third-party platform like YouTube, Facebook, etc. So, that's what that refers to, I think, with the complexity: you might be looking at having one platform that would be a primary platform for social and for VOD in the future and different kinds of applications like that, and a secondary platform, which would be, hey, here's a link where you can see a whole array of other languages. And we provide machine translation into Japanese, or we can provide human interpreter services out of our APAC services team in Australia and Asia. So, yeah, there are definitely some options for how to get the transcript and how to get it to the viewers. You're going to have a really easy time with multi-language in English and European languages.

You might have to think a little bit harder to go outside those character sets but it certainly can be done.
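As a concrete illustration of the HLS route Bill describes, a master playlist can advertise WebVTT subtitle renditions in any world language. The sketch below just prints such a playlist; the file URIs, group name and bandwidth are hypothetical placeholders, not Falcon's actual output:

```python
# Sketch: an HLS master playlist advertising subtitle tracks in several
# languages alongside one video variant. All URIs here are placeholders.
LANGUAGES = [("en", "English"), ("es", "Español"), ("ja", "日本語")]

def subtitle_renditions(languages):
    """One #EXT-X-MEDIA line per subtitle language."""
    lines = []
    for code, name in languages:
        lines.append(
            '#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",'
            f'NAME="{name}",LANGUAGE="{code}",URI="subs_{code}.m3u8"'
        )
    return lines

playlist = ["#EXTM3U"] + subtitle_renditions(LANGUAGES) + [
    '#EXT-X-STREAM-INF:BANDWIDTH=3000000,SUBTITLES="subs"',
    "video.m3u8",
]
print("\n".join(playlist))
```

The player reads the LANGUAGE attributes and offers a language picker, which is why this route is not limited to Latin character sets the way embedded CEA-608 captions are.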

Matt Mello: Yeah, like Bill was saying, it's a little bit tricky sometimes with the non-European character sets, but it certainly can be done if you're willing to work with a workflow that's a bit different from the standard right now.

Jared Janssen: Yeah, and Matt, actually I'll add on really quickly. Events can really vary in complexity when adding multiple languages. If it's English audio, you can add Spanish and Japanese machine-translated languages pretty easily. Where we have seen some more complex events is where perhaps human captioning in different languages is preferred, as opposed to machine translation. We had one come up recently where Russian was requested and they did not want machine translation, so we solutioned Russian audio interpretation as an alternative. So, really, we can provide options: whether it's human delivered, whether it's machine translation, whether it's audio interpretation, we can really come together depending on your platform and depending on preferences to make something accessible on the multi-language front.

Phil Hyssong: And I'm going to jump in on that and share as well one of the things that has made Ai-Media successful at this. Many folks know we have a pretty extensive division of our company in Europe and provide for a lot of languages there. And we've developed, we kind of call it some of our secret sauce, a way to combine human captioning and machine translation to provide a very accurate foreign-language output. So, in some cases where the machine translation is not sufficient, we use human captioners and are able to provide a translation that is accurately translated. So, again: problem solving, listening to what the customer wants and what the customer needs, and then being able to build solutions that provide effective service.

Matt Mello: Yeah. Thank you, Phil. Everything that everybody said is perfectly accurate; that's all great input. We are a solutions company, so if you come to us with anything that you might need for your events, or whatever it is that you're trying to caption, we're more than willing to work with you to figure out what the solution could be.

So, with that, let's go into the next question, which is: how accurate are your automatic captioning products? This actually ties in nicely to the last question about translation. EEG's automatic captioning product is called Lexi, and it will essentially take the source language that's being spoken and turn that into caption data. We generally quote that at anywhere from 96 to 98% accurate, depending on the quality of the audio that's being received as well as how clear the speaker is. Things like that will determine the accuracy of Lexi. And then you also have our automatic translation products, which are an automatic process as well.

And that again is just going to vary depending on the source-language captioning it's being created from. So, it does vary quite a bit, and it's a topic that's often discussed, because the number one question with automatic captioning is: how accurate is it compared to a human? It's getting there, and it's an active work in progress. I'm sure that there are plenty of thoughts on this from the rest of the panel here too.

Bill McLaughlin: Yeah, I would almost turn the question around and say: where would we recommend using automatic captioning? I would say that, from our experience with customers' subjective experience, the discussion starts when you can hit maybe 90%, and to really get high satisfaction you want to be more in the middle 90s: the 94, 95, 96 or higher kind of range. And that can depend a lot on what kind of programming you're looking at. We're in that place for news content; we're in that place for a lot of pretty polished corporate business video, conferences, presentations, and for a lot of styles of town meetings. For something like a dance contest or an action-adventure movie, if you were trying to live caption that, I don't think you'd get there. And that's really the challenge of the question, because quite frankly, in the entire universe of video content you can use the same product and see a range of accuracy where the ceiling might be 98% and the floor is going to be almost zero.

I mean, you can find a video that has occasional shouted dialog interspersed with all kinds of music and other activity, and the accuracy is just going to be close to zero. So, there's no single number that I think defines the product across all use cases, but where you want to be is in that low to mid 90s range; that's when you get a good result. And for live captioning to air, if you're in the mid to high 90s range, that's about the best you're going to do with any method.

Matt Mello: Yeah, absolutely. So, that actually ties into the next question as well, which is: what is the difference between Lexi and Smart Lexi? I'm actually going to toss this one over to Jared, if he feels like he can answer this, maybe.

Jared Janssen: Yeah, thanks, Matt. Tony and Phil touched on this in the beginning video a little bit, with Smart Lexi being really that middle offering. Lexi has been around for a bit, and it's a wonderful product with great applications, and it's self-service, whereas Smart Lexi is full service. It's a recent rollout from Ai-Media and EEG. Really, what Smart Lexi adds is that layer of dictionary creation, that human curation. Just like with any event, we love to get prep: PowerPoints, agendas, speaker names, acronyms, anything we can get our hands on. We love to have that for a really high-quality output. So with Smart Lexi, it goes to our team of experts to curate those dictionaries and then apply that to the captioning output. And our team can deliver Smart Lexi whether it's on an Ai-Media Falcon account or a client's encoder or Falcon account. Really, Smart Lexi falls into that full-service category, where we can deliver that next layer in a full-service way.

Bill McLaughlin: Yeah, there's a question of where, if you want more accuracy on automatic captions, you're losing the accuracy. If it's something having to do with the specific topics being discussed, the names of people and places that are specialized to your content, that's where you're really going to see the advantage of that extra layer of preparation and curation and model building. For issues like background noise, Smart Lexi so far is not going to do that much for you, but for anything that involves specialized vocabulary, it can really do a lot.

Matt Mello: Yeah, it definitely falls right in the middle ground between the premium human captioning service and our Lexi service, kind of where they both meet. So definitely, it's a new offering from all of us, and it's something that we're looking forward to rolling out and getting into the market.

So, the next question is: what is the best and simplest way to add live captioning to Facebook Live and YouTube when live streaming from StreamYard? I'm actually not too familiar with StreamYard as a whole, but as long as you can stream from StreamYard to Facebook Live and YouTube regularly, you can add Falcon there as a middleman. Falcon essentially allows you to add live captioning embedded into the stream as closed caption data. So that would probably be my suggestion: Falcon is definitely the go-to among our streaming products, with no additional hardware needed.

Bill McLaughlin: Yeah. And to look at where Falcon fits into that workflow: StreamYard is kind of a production studio that can put out an RTMP output. It's in the cloud, but not so different from what's shown here as Wirecast, which would be a tool that runs on your own computer. Basically, you take sources, you get an RTMP stream on the output, and that RTMP stream goes to Falcon, where you meet the captions. Then Falcon can do the same type of multiple-destinations feature that the original streaming encoders could do. So you would send one output from your studio to Falcon, and then Falcon would put in the captions and address those to both Facebook and YouTube, if you're streaming to Facebook and YouTube.

Matt Mello: Yeah, and to put that on the diagram too, I don't know if you can see my mouse, but essentially this would split into two here, and you could do Facebook and YouTube at the same time from Falcon.

Bill McLaughlin: And that's free with the Falcon license.

Matt Mello: Yeah.

Bill McLaughlin: You can do one caption stream, and you can actually put that same caption stream out to as many RTMP destinations as you want.
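Falcon's fan-out is a hosted feature, but the same one-input, many-outputs pattern can be illustrated with ffmpeg's tee muxer. This snippet only builds the command line; the destination URLs and stream keys are placeholders, not real endpoints, and this is not how Falcon itself is driven:

```python
# Illustration only: copy one RTMP input to several RTMP outputs, the
# pattern Falcon implements as a hosted service. Keys are placeholders.
DESTINATIONS = [
    "rtmp://live-api-s.facebook.com:443/rtmp/FB_STREAM_KEY",
    "rtmp://a.rtmp.youtube.com/live2/YT_STREAM_KEY",
]

def tee_command(source, destinations):
    """Build an ffmpeg command using the tee muxer for multi-destination output."""
    # The tee muxer takes a '|'-separated output list; [f=flv] forces the
    # FLV container that RTMP destinations expect.
    outputs = "|".join(f"[f=flv]{url}" for url in destinations)
    return ["ffmpeg", "-i", source,
            "-c", "copy",    # pass video/audio (and embedded captions) through untouched
            "-map", "0",
            "-f", "tee", outputs]

print(" ".join(tee_command("rtmp://localhost/live/studio", DESTINATIONS)))
```

The `-c copy` is the important part for captions: the stream, including any embedded caption data, passes through to every destination without re-encoding.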

Matt Mello: Alright. OK, so next question is we have a lot of videos on Vimeo that are embedded into our LMS. What is the best way to get these captioned? I'm not familiar with the acronym LMS.

Bill McLaughlin: I think it's the Learning Management System.

Matt Mello: So would this be like a post-production type of setting, I'm guessing then?

Bill McLaughlin: Well, it would. I'm imagining prerecorded videos on the Vimeo platform, and you have one of these systems like Coursera or something like that, where essentially the videos are embedded with lecture notes and slides and quizzes and almost anything else you could imagine on some of these platforms. So probably the key would be getting it integrated through Vimeo. We can quote batches: we have a recorded media services department that will add captions to videos on almost any platform, and with a lot of platforms we have integrations where, if you have a specific way of labeling which videos you want, or you want to use the API to send them to us, we can actually find those videos and download them out of your account, with your permission, through the integration. Basically, those would get captioned, and you have a variety of quality and turnaround time options on the recorded media, and the captions will be delivered back to you through our web platform, or through an email, or in some cases, and I don't know if Jared knows this.

I'm not sure if we're directly integrated into Vimeo for delivery; a number of web platforms, we are. If we're not directly integrated for delivery, it just means you're going to receive a caption file, a VTT-style file being the most common one, and then you would upload that into the ingest system for Vimeo or the other platform, if it's not a complete end-to-end integration.
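The VTT-style file Bill mentions is plain text, so it's easy to see what you'd be uploading to a platform's caption ingest. Here is a tiny sketch that writes a minimal WebVTT file; the cue text and timings are invented for the example:

```python
# Minimal sketch of a sidecar caption file in WebVTT format, the kind
# you would upload to a platform like Vimeo. Cues below are made up.
cues = [
    ("00:00:01.000", "00:00:04.000", "Welcome to the lecture."),
    ("00:00:04.500", "00:00:07.000", "Today we cover closed captioning."),
]

def to_vtt(cues):
    """Render (start, end, text) cues as a WebVTT document."""
    lines = ["WEBVTT", ""]                 # required header, then a blank line
    for start, end, text in cues:
        lines += [f"{start} --> {end}", text, ""]
    return "\n".join(lines)

print(to_vtt(cues))
```

Each cue is just a timing line and its caption text; platforms that aren't integrated end to end generally accept exactly this kind of file through their upload UI.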

Matt Mello: Yeah, that was a great question, thank you. Next question is: do you offer live stream ASR captions that can be fed to a live stream CDN like Vimeo or IBM as CEA-608 closed captions? Is live stream ASR captioning available as an online service that doesn't require the purchase of hardware? This sounds like a perfect pitch for our Falcon product, which is a cloud encoding option that allows you to add live captions to your live stream. You would just be sending your live stream from a source like OBS, or any live streaming video source, in RTMP to our cloud service running Falcon, and then you can add live closed captioning from any source of closed caption creation, which could be a live human captioner or AI through Lexi. So, yeah, this is a softball for our Falcon product. The answer is yes.

Bill McLaughlin: And it's worth noting that the encoding capability and the live captioning capability are sold separately. Don't worry, it's very affordable. But they're sold separately, so you can use Falcon with human captioning as well, interchangeably, and the Falcon channel is billed on a monthly basis, like a subscription, whereas the Lexi service is billed hourly, only when you use it. So you have a mix of the two costs, and if you ever wanted to mix in more traditional steno captioning options, you could do that with Falcon as well.

Matt Mello: Next question is: we're hosting an in-person event that must be accessible for all attendees. Some will attend at the venue and others will tune in virtually. Do you have a product that would work for everybody who joins? Again, the answer here is yes, but there are a multitude of different ways of achieving this. For anyone who wants to tune in virtually, we have a variety of solutions for adding closed captions in any format. The question that comes in then is, of course, the attendees part: the people sitting in the room. We do have hardware options that would allow you to display just the closed captioning in the room, and that same closed captioning can then be used for the live stream as well. But again, there are different ways of achieving this, so it can kind of be a solutions-type thing where you could come to us and say, here's what we have, here's what we're looking to do, and we'll say, here are the solutions that we have. Jared, any thoughts on that, maybe?

Jared Janssen: Yeah. Thanks, Matt. So really what we're looking at is a hybrid event, and in the corporate environment we're seeing this become a more common occurrence. We have a full advanced events team that is specialized to handle these types of conferences or events. We're seeing what maybe used to be a large in-person conference scaled down in size with a bigger virtual presence, and sometimes that could be a week later; it could happen concurrently as well. Our advanced events team works out whether it's on-site capture versus remote. Really, it goes back to solutions: it depends on what platform you're going to use and how the event is set up on site, and we work to find that balance, whether it's hardware, whether it's IP, or it could be as simple as an iframe that works. So really, it's just working to a solution based on the overall scope of that particular event or conference.

Matt Mello: Yeah. Perfect, thank you, Jared. OK, the next question we have is: in which languages do you provide captioning? This is a very broad question, and again it would take a very specific scenario to give you a very solid answer. But we do provide live closed captioning in most languages. It just depends on what you're looking to do with the event, the workflow, where you're decoding the captions, and all of that type of stuff. But the answer is: a lot of them, depending on what you're looking to do.

Bill McLaughlin: Yeah. And there are some technological distinctions: what kind of alphabet you're looking for in the captions, whether it's something that works in traditional embedded television captions, or whether it's a language that wouldn't work in those and you're looking for a different streaming solution. As we said, you can use open captions, you can use AI live, you can use Falcon with the HLS stream. And in terms of the services we can leverage, we have a few different providers for machine translation in, I think, over 100 different languages. There are probably even more that we can offer human interpreter resources in. We have a pretty global web of employees, freelancers, even subcontractors, and given a reasonable amount of notice for an event, you'd probably be surprised how many languages we could actually support with humans as well.

Jared Janssen: Yeah, even... Sorry, Matt.

Matt Mello: No, go ahead.

Jared Janssen: So even on the on-demand side, too, if it doesn't work live, we do have a lot of options post-event, where we could do a Portuguese SRT file or a Spanish SRT file. So we have live availability, and then the on-demand side as well, which opens up a number of languages that maybe don't work for that live setting.
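For readers unfamiliar with the format Jared mentions: an SRT file is just a sequence of numbered cues with start/end timestamps and caption text. A minimal sketch of producing one in Python (the cue text below is invented for illustration):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

def srt_cue(index: int, start: float, end: float, text: str) -> str:
    """Build one SRT cue block: index line, time range, text, blank line."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n"

# Two hypothetical cues, e.g. from a Spanish post-event translation
srt_file = srt_cue(1, 0.0, 2.5, "Bienvenidos al evento.") + srt_cue(2, 2.5, 5.0, "Gracias por acompañarnos.")
print(srt_file)
```

A translated deliverable like the Portuguese or Spanish file Jared describes is simply this same structure with the cue text swapped out, which is why the timing can be reused across languages.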

Matt Mello: Yeah, one of the coolest things that I'm seeing with the joining of Ai-Media and EEG is that we have solutions for pretty much everything closed captioning related now. So pretty much no matter what the inquiry is, we have some division or department that's able to handle it, even if it's not me directly, you know what I mean? It's been a really cool joining of forces. So: a lot of languages.

So the next question here is: sometimes what's spoken isn't captioned correctly. How can I solve this issue? That's a great question, and I'm sure that's a pain point for a lot of people. The answer again is going to depend on where those closed captions were created and how they were generated. Was it a human? Was it AI? It kind of just depends. Live closed captioning is going to have its faults at some point because of the limitations of it being live: there's not much time to correct it and make it look good before sending it out for live air. So sometimes that may or may not happen.

But the lucky part is that you can sometimes fix this with the prep work that I think Jared was touching on before: actually telling your captioner, if it's a human, what they should be expecting to hear, what the topic of discussion is, or maybe some keywords or place names beforehand, so that they have an idea of what they should be listening for, and if they hear it, put it out in the captioning. And the same thing goes for Lexi and Smart Lexi as well. You can train it beforehand, or tell the person you're coordinating the Smart Lexi with beforehand, so that when that word pops up, Lexi is more likely than not to get it correct, just by having the phonetics: how you would spell it out, how it should be pronounced, and all of that. So the number one way I'd say you can fix it is to train beforehand and have some prep work put into the process.

Bill McLaughlin: Yeah, nothing's perfect, but you want to get the best possible results, and if it's more of a Zoom meeting or classroom or other, what I'll call an amateur presentation, as opposed to something that's been set up by audio professionals, it's important not to neglect the basics: use an external microphone, have it at a good distance from your mouth, step a little bit farther away from the air conditioner, try to enunciate. These things sound silly, but in a lot of environments those are pain points for the captioning, and they can make a difference.

Matt Mello: The next question is: I am a CART provider and coordinator for a university. Everyone got used to excellent audio with remote classes; as we transition to in-person learning, what audio options do you recommend to continue providing the best experience? That actually ties in pretty nicely with what Bill just said. As far as microphones go, a good-quality audio setup is probably the best way to get the audio out to everybody in the best manner possible, and it also helps the captioner hear clearly. Phil or Jared, do you have any thoughts on best practices for audio?

Phil Hyssong: Yeah. You know, Matt, I think you and Bill really answered that quite well in the last statement. It's a refresher for professors: talk with your captioning provider about the microphone that's being used, and you need to set some expectations within the class. One of the beauties of Zoom, as the person writing this said, was the excellent audio, because you could hear everyone via Zoom. Everyone has the option of miking in. When you're in a typical classroom and a student in the back corner asks a question, if the professor is the only one with a microphone, it's going to be really tough to hear that question. So it really is simply a matter of re-educating our professors to restate questions, to speak clearly, to not put their lavalier microphone underneath their jewelry or underneath their tie, but rather keep it exposed, keep it near their mouth, that kind of thing. But there are just limitations with that kind of situation, and we have to work with them.

Jared Janssen: I'll even add internet speeds. Are you on Wi-Fi? How is your Wi-Fi? Are you going to have buffering? Is the audio cutting in and out, or are you directly plugged into really nice, fast internet? There are a lot of little things that come into the greater equation, like Phil touched on, to make your audio, and really the whole experience in the classroom, the best it can be.

Matt Mello: Yeah, absolutely. Thanks, guys. The next question we have is: we provide live captions for in-person and virtual college-related events. We'd like more information on the possibility of toggling captioning between live captioners and automatic captioning solutions. Yes. This comes down to our encoding solutions: EEG's encoding solutions always allow you to select either one, so you're not limited to just automatic or human. So yeah, that's a great question, but there are options for either. Phil, were you going to comment on that?

Phil Hyssong: You know what, I do, Matt, because we're just now launching a study on the use of ASR in classrooms: is this a good idea? Is this something that schools are interested in, and so forth? Because what we're seeing within the industry is that people are being given choices, and that's what I'm hearing in this question: we want the choice of which direction to go for which event. So we are actively looking at creating solutions at Ai-Media and EEG so that consumers and users can say, "hey, I want ASR for this; hey, I want a live captioner for this event; I need high-level services for this event." We want to give people options so that they can easily toggle through, if you will, what they need in any given situation. So we're on this. This individual is obviously connected to a college, and if they'd like to be part of the testing, please let Regina know. We're looking for some test cases that we can practice with to develop the best-practice concept, and I'd be happy to talk with them further.

Matt Mello: Yeah, and it's important to note, too, that we do currently have solutions for in-person and virtual college events; there are definitely ways you could do this right now. We're just looking to make it the most seamless process possible. So if you need something like next week, feel free to reach out to us and we can get something set up, but there's also the long term of how we can make this seamless for everybody going forward, too. Thank you, Phil. Next question: how can I time multilingual captions in the original source language, then simply drop the translation in its place without having to re-time? I think this is a little bit out of my scope of normal questions. Bill, do you have an answer for this one?

Bill McLaughlin: Well, it looks like we've got a recorded media process: say we have captions in one language, how would we get those translated, in kind of a do-it-yourself workflow? I know we provide that as a service. Typically there's one rate for original-language captions, and you might pay a second rate to also have it translated. If you already had the original-language captions, you wouldn't have to pay for a fresh transcription, so to get it done as a service you would definitely have savings there. If the question is looking more for a tool, I don't know that I have a tool in our line that I could specifically recommend for this case. The Scribe editor from the EEG product line is a post-production recorded-clip caption editor that installs on a Windows desktop. You can use it to edit captions and preserve the original timing, but then you'd have to thread that through a translation process.

It would definitely depend whether there's an operator sitting down and translating this off the top of their head, or whether there's an automatic translation process going through. So yeah, there are some options you could take with this, but of course, as happens with a lot of things, when you're changing caption files it can unfortunately chew up your time a lot more quickly than you were hoping.

Matt Mello: Thank you, Bill. Can users add their own content-specific vocabulary to both Lexi and Smart Lexi? That's a great question, and it's what we were touching on before with the differences between Lexi and Smart Lexi. Lexi does allow you to import your own words, places, proper nouns, whatever it may be, into its dictionary, so that it's more likely to get them correct when it hears those words being said. Where Smart Lexi fits in is that the dictionary is part of a managed service that Ai-Media and EEG now handle for you, so it takes some of the load off whoever may have been managing that before. We can offer that as part of the process.
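To make the idea of a custom vocabulary concrete: such a list is essentially a set of terms the ASR should favor, optionally with pronunciation hints. The field names and structure below are invented for illustration; they are not Lexi's actual import format or API, just a sketch of what a word list like this tends to look like as JSON:

```python
import json

# Hypothetical custom-vocabulary entries. Each entry names a term the
# speech recognizer should prefer; "sounds_like" is an illustrative
# phonetic hint of the kind Matt describes giving a human captioner.
custom_vocab = [
    {"term": "Hyssong", "sounds_like": ["HISS-ong"]},
    {"term": "CEA-608", "sounds_like": ["C E A six oh eight"]},
    {"term": "Falcon"},
]

payload = json.dumps({"vocabulary": custom_vocab}, indent=2)
print(payload)
```

With plain Lexi, maintaining a list like this is the customer's job; the Smart Lexi distinction Matt draws is that curating and updating the list becomes part of the managed service.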

So let's see, the next question is: could you please discuss inserting captions as 608/708 data into live video for TV/OTT/online streaming? Inserting caption data as 608/708 is essentially what EEG's caption encoders do; it's what we've been doing for the last 30-odd years, so that's the specialty where our encoding solutions lie: adding captions as 608/708 closed caption data. Now, for TV, OTT and online streaming, those are all cases we have dedicated options for, some more geared toward one than the others. For example, for TV, I'd probably suggest one of our hardware encoding solutions like the HD492. For OTT, for IP video, MPEG transport streams, things like that, that's our Alta solution, a more recent encoding option that we've added. And then for online streaming, you could do that with any of the encoders we have, depending on the workflow as a whole, but our Falcon solution is perfect for online streaming without any additional hardware required. So yeah, we have options for all of these, and 608/708 is the specialty of what we do.
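For the technically curious, one small detail of 608/708 insertion: every CEA-608 byte carries 7 data bits plus an odd-parity bit in the most significant bit, which is one of the things a caption encoder computes when embedding data. This is the published CEA-608 convention, not EEG's implementation; a minimal sketch:

```python
def cea608_parity(byte7: int) -> int:
    """Return the 8-bit CEA-608 byte: 7 data bits plus an odd-parity MSB."""
    if not 0 <= byte7 <= 0x7F:
        raise ValueError("CEA-608 carries 7-bit values")
    ones = bin(byte7).count("1")
    # Set bit 7 when needed so the total count of set bits is odd.
    return byte7 | 0x80 if ones % 2 == 0 else byte7

# 'H' (0x48) has two set bits, so the parity bit is set: 0xC8 on the wire.
print(hex(cea608_parity(ord("H"))))
```

Decoders use the parity bit to detect transmission errors in the caption data stream, which is why a hardware encoder has to regenerate it for every byte it inserts.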

So the next question we have is: what are the best microphones/configurations for CART writers in classroom, lecture or group discussion environments? What are tips for maximizing these experiences? I'm not personally familiar with the microphone options. Does anyone have a good answer for this?

Phil Hyssong: I do not. I'd be more than happy to research that and find out, Matt, but I'm sorry, I am not...

Bill McLaughlin: I think we already covered a lot of it, and I don't know that I have a personal review of a brand name or a specific product. But typically, I think you do want something like a lav or lapel mic, something that clips to your vest or your shirt, positioned so it's not rubbing against your body and at a reasonable distance from your mouth. Check the audio if you're bringing it into your computer: make sure the level isn't in the clipping range and isn't all the way down near zero, but is in a healthy middle of the range while you're speaking at a normal voice level. For example, if you're speaking in front of an audience, you also have to consider that you don't want to test the gain at one level while you're whispering and then go up in front of the room and speak at a completely different level and really project; then your test isn't a very realistic test.

So a basic, realistic test is going to help in setting the level. A good mic is a little more directional, in that it doesn't pick up as much ambient sound from around you, but it's still good to be conscious of what's around you. Moving away from a fan, an air conditioner, a server rack, anything that makes a bunch of humming noise like that: a little bit of distance can make a pretty big difference. So I think those are the basic things you'd want somebody to know in one minute or less.
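Bill's "healthy middle of the range" advice can be sketched numerically. Assuming audio samples normalized to [-1.0, 1.0], the peak level in dBFS is 20·log10(peak); the thresholds below are rough rules of thumb for a mic check, not a product specification:

```python
import math

def peak_dbfs(samples):
    """Peak level in dBFS for samples normalized to [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def level_check(samples, low=-30.0, high=-3.0):
    """Rough mic check: flag levels near clipping or near silence."""
    db = peak_dbfs(samples)
    if db >= high:
        return "too hot (risk of clipping)"
    if db <= low:
        return "too quiet (raise the gain or move the mic closer)"
    return "healthy"

print(level_check([0.25, -0.2, 0.1]))  # a moderate test level
```

The point of the realistic-test advice is that `samples` should come from speech at the level you'll actually use: a whispered test that reads "healthy" can land in the clipping range once you project to a room.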

Matt Mello: All right, so it looks like that was our last question for today. Thanks, everyone, for putting in your questions, and I'm going to pass it back to Regina.

Regina Vilenskaya: Thank you, Matt, and thanks to all our panelists for participating. And thank you, everybody, for joining us today and asking us anything about captioning. We did receive a lot of questions, so if we didn't get to yours, we will follow up with you soon after the webinar. If you have any questions and would like to get in touch, please reach out to sales@eegent.com or sales@ai-media.tv. Within the next few days, everybody who signed up will receive a link to the recording. Thank you again to everybody who joined us today, and we hope to see you at our next event.

Bill McLaughlin: Yeah, thank you for coming. These are always a lot of fun, so thanks for coming and thanks for bringing the questions.