EEG is now an Ai-Media company, bringing leading, vertically-integrated solutions and services to customers around the world. Read more here.

October 27, 2021

Webinar Replay: Closed Captioning 101: Live Video Solutions

EEG

The NAB Virtual Event Webinar Series is shedding new light on accessibility. Presented by EEG Video and Ai-Media, this trio of online events informed content creators on key closed captioning advances.

Presented on Thursday, October 21st, Closed Captioning 101: Live Video Solutions was an essential introduction for captioning newcomers. Ai-Media and EEG experts walked attendees through the fundamentals of improved accessibility for live video with closed captioning. Optimal captioning solutions were also discussed.

Closed Captioning 101: Live Video Solutions • October 21, 2021

The combined strengths of EEG Video and Ai-Media form a one-stop-shop resource for captioning, translation and transcription solutions. Look to us for the latest closed captioning news, tips, and advanced techniques.

Visit here to see Closed Captioning 101: Live Video Solutions and all previous webinars!

Transcript

Regina Vilenskaya: On behalf of Ai-Media and EEG, I'm very happy to welcome you to the NAB Virtual Event Webinar Series. I'm excited to introduce Tony Abrahams, CEO of Ai-Media, and Phil McLaughlin, CEO of EEG. Tony?

Tony Abrahams: Thanks very much, Regina. My name is Tony Abrahams, I'm the co-founder and CEO of Ai-Media, and I'm devastated that we can't all be together at NAB in person this year. But we've tried to do the next best thing which is to make all of our great content available to all of you virtually. And I'm delighted to be joined by a person who's very well known to almost all of you I'm sure, the CEO of EEG, Phil McLaughlin. Phil, how are you?

Phil McLaughlin: Oh, very good, Tony. Glad to be with you.

Tony Abrahams: And I think one thing that's obviously changed since the last time we appeared at NAB is that Ai-Media and EEG have joined forces. And Phil, after running that business for 25-odd years, what made you decide to join forces with Ai-Media? I guess two questions: why now, and why Ai-Media?

Phil McLaughlin: This has been very heavy on my mind the last couple of years: we're the company that's built this level of excellence and momentum in the US market, so where can we move from here? And two things happened. We needed more international exposure as a company, and also, with our movement into captioning services with our Lexi automatic captioning product, we had become a significant service provider in the US market. From that standpoint, for the last five years, we've had the pleasure of dealing with Ai-Media as a trusted partner, a partner with customers in the U.S., Canada, and all around the world. And we thought what would happen here, which is very much coming to fruition now, is that we could join together with Ai-Media and combine our products, where we have almost no overlap but a tremendous amount of compatibility. And we could bring these products, products and services, I should say, both to the U.S. market and around the world.

Tony Abrahams: And I think also what we've been able to offer that's been really compelling, only in the last few months, we only completed this acquisition in May, is really that true one-stop shop right around the world, where we offer both the fully automated captioning through EEG's leading Lexi system and the traditional premium-quality captioning that Ai-Media has been known for, for many, many years, delivering accuracy of over 99.5%. But there's also this really interesting spot in the middle, isn't there, with Smart Lexi, which Ai-Media invented by moving down from premium and that you moved into by moving up from Lexi, and it's this happy place where we meet with Smart Lexi. So hopefully that's something for everyone in this great product suite. So Phil, thanks very much for joining us. And Regina, I'll hand back to you.

Phil McLaughlin: It was a pleasure. Thank you, Tony.

Regina Vilenskaya: Thanks, Tony and Phil. If you would like to hear about any of these topics or products in more detail, please join us for the NAB Virtual Event Webinar Series. We look forward to having you join us. So thank you.

Hi, everyone, and thank you so much for joining us today for Closed Captioning 101: Live Video Solutions. This is the final webinar of EEG and Ai-Media's NAB Virtual Event Webinar Series. My name is Regina Vilenskaya, and I'm the Marketing Lead at EEG. The featured speakers for this event are Bill McLaughlin and Kyle Phillips. Bill is the CTO at EEG, and Kyle is the General Manager of Business Development at Ai-Media. Today, Bill and Kyle will walk through the basics of captioning and share the essentials for making live streams, meetings and more accessible. I'm now going to pass it over to Bill and Kyle to kick off Closed Captioning 101: Live Video Solutions. Welcome.

Bill McLaughlin: Hi, welcome everyone. Kyle, you're muted, we need you here.

Kyle Phillips: Love the technology. Everybody has to do that to start one of these, right? Thanks so much for joining. We're really excited to walk you through this Live Captioning 101. I mean, first off, what is closed captioning? Closed captioning is the process of displaying text on a screen, as, you know, some of you who have enabled the live captions this morning are able to see. It's really the transcription of any audio portion of a program as it occurs. That can happen in real time, as is happening now, or it can happen on recorded videos and content. Think, you know, Netflix or other programs where you enable captions to follow along. There are also subtitles. People often ask about the difference between closed captions and subtitles, and the way that we look at it, we define subtitles as just translated closed captions. So again, I'm thinking about Netflix and maybe watching some shows that are in languages other than English and turning on the subtitles.

In the last 16 months, we've seen a big, big shift to online events and meetings where live captions have played a really big role, and we get lots of questions, you know, about how do we deliver live captions. And, you know, one of the first questions we might get from people is, why caption at all? Why do we need to caption? And you know, in some cases, you do need to caption. That point at the bottom, maintaining compliance with FCC or ADA regulations: depending on the event and depending on your situation, you may be required to provide captioning. So that's one reason. But you know, the better reason is it's the right thing to do from an accessibility standpoint, making sure that those attendees who are deaf or hard of hearing have access to your events and to your content; it's inclusive. Another reason, though, is to reach a larger audience, a global audience. We're working with so many different event companies and corporations who are having global events now, enabled by technology that we use every single day.

And they're looking at ways of, you know, localizing their content and translating what's being spoken in English or other languages to multiple languages simultaneously. So, you can really drive the impact of your event by bringing in people from other countries and letting them participate in your content and in your event. And then finally, for boosting overall comprehension and contextual understanding. Sometimes seeing the words on the screen backs up what you're hearing. And again, for those with cognitive processing issues, or those who can just benefit from turning down the volume and following along, it can really increase that comprehension. Now, needs to consider: what do I need to consider when I'm planning an event that needs to be captioned? You know, lots of things to consider. What type of events are you hoping to caption? And, you know, where does your audience exist? Where are they following along? Are they on a social channel like YouTube or Twitch or Vimeo?

Are they meeting at an event platform like Hopin or Brandlive or Pathable? You know, are we considering other languages? And, you know, how much is this gonna cost me? What is my budget for this type of event? Now, these are the types of questions that we, when you work with Ai-Media, will walk you through and sort of help you navigate. These are typical ones that our clients are asking themselves before they make the move and before they go ahead and book captions. But regardless of, you know, how you answer those questions, as Tony mentioned at the outset in the video, we are really a one-stop shop for all your captioning, transcription and translation solutions. So, whether you're using automatic speech captions or having your event live captioned by a professionally trained human captioner, as we are today, we have options for you at different price points and accuracy levels. We also have different options for how we're delivering the captions and how those are appearing on your screen.

The technology from EEG that allows the easy display of captions in your live events, whether they're in-person or virtual, is a big part of that one-stop-shop model. And so when we think about, you know, the different options and the different questions that you might ask yourself, this slide, I think, gives you an idea of the types of events that we support and the formats that support them. So, you know, in-person events, virtual events, hybrid events; you hear that word a lot recently because so many events are doing a combination of an in-person event, maybe a stripped-down, smaller in-person one these days, and then a larger virtual conference. And sometimes those are happening simultaneously. Recorded content, broadcasting, sporting events, audio-only events. And then captioning formats like, you know, human captions, subtitles, automatic speech, even American Sign Language interpretation. And then, you know, multilingual, supporting the different languages that you see on the screen.

These are some of the ones that we typically translate on a daily basis, but there are many, many more. But, you know, what it really boils down to are two questions that you ask yourself when you see all this text on the screen. Who's performing the transcription? Is it a person? Is it a machine? Is it a combination? And then, how is the captioning reaching the audience? And at the bottom here, you sort of see the different ways that we're delivering captions to an audience using the technology in place. Now, as I said, one of the ways that we are delivering captions into events is through live streams. And I'm gonna turn it over to Bill because he's really our live stream guru, (CHUCKLES), as our chief technology officer, coming from EEG and having developed a lot of this technology. Bill, tell us more about delivering captions into a live stream.

Bill McLaughlin: Yeah, absolutely. So, that's one of the first places where, you know, a lot of newcomers to captioning are coming from, right? Because, you know, in the broadcast world, it's typically something where captioning has been required in a lot of places for a long time. But there are, you know, a lot of new live events, and companies are trying to up their game and add captioning and translation to live events really all the time. And Falcon provides a pretty low-cost, easy-to-onboard way to just start doing it for, you know, a single event or for a series of events, and, you know, you can move up to a permanent subscription channel on it if you're doing regular events as you get comfortable with the workflow. So, Falcon works by delivering a video stream to platforms that are basically one-to-many broadcast platforms, you know, like Facebook, YouTube, Twitch, Vimeo or Livestream. These types of platforms are not quite conferencing, but more of a broadcast, and will include features like social integration, search and audio recording.

And once you get the captions on there, they should stay on there. So, on the next slide, we have a bit of a block diagram to show you how that really works. If you're producing a streaming live event, what you'll be used to is that you have a place where your stream is originating. That could be a software or virtual studio program, like Telestream Wirecast or OBS, an open-source platform, or, you know, Livestream or a bunch of other brands. Or it could be hardware: you could have SDI or NDI or other video formats going into a hardware encoder that, you know, prepares an RTMP stream, typically to go to a live platform, again, like Facebook or YouTube. So, you're used to putting the Facebook or YouTube stream key and URL into that streaming encoder and sending the video out there. And if you wanna add captions, all you need to do is add our Falcon technology in the middle, essentially, and it'll only delay the stream by about an extra second.

And what you're going to do is you have an account on eegcloud.tv, which is easy to sign up for, and then you can pay for events either with a credit card or by arrangement with our sales team. But you will receive a stream key and an ingest URL from Falcon, the same as you would from the end platform. And so then it's possible to direct your streaming encoder at the event site to Falcon. The captions will be injected in a standards-compliant way that's compatible with, you know, dozens of different video platforms, internally to Falcon. And you can apply Lexi captions to that through a self-service workflow on EEG Cloud, or you can book human captioning from Ai-Media or even from another preferred provider that uses the iCap network. And you'll be able to get those captions injected into the video stream in real time with Falcon. And then they're going to pass on to the end platform, which is responsible for, you know, again, everything that that platform would typically do for your stream: compatibility with different devices, integration with your social channels, even things like advertising insertion.

It's really all the same, except you've added Falcon in order to add the captions before you've linked in to your final destination.
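The hand-off described here amounts to swapping the platform's ingest target for Falcon's at the streaming encoder. A minimal sketch, using purely hypothetical placeholder values (the real ingest URL and stream key are issued by your eegcloud.tv account, not shown here):

```shell
# Hypothetical Falcon ingest details (placeholders, not real endpoints);
# the actual values come from your eegcloud.tv account after booking.
FALCON_INGEST_URL="rtmp://falcon.example.com/live"
FALCON_STREAM_KEY="abcd-1234"

# Point the streaming encoder at Falcon instead of YouTube/Facebook;
# Falcon injects the captions and relays the stream onward.
TARGET="${FALCON_INGEST_URL}/${FALCON_STREAM_KEY}"
echo "$TARGET"

# With ffmpeg as the encoder, the send step would look like:
#   ffmpeg -re -i event.mp4 -c copy -f flv "$TARGET"
```

In practice most streaming encoders expose the ingest URL and stream key as two fields in their UI, so no command line is needed; the only change from a direct-to-platform setup is which URL and key you paste in.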

Kyle Phillips: So, what I'm hearing, Bill, is the big difference here is we're just sticking Falcon in the middle of that. I don't need to have any sort of hardware. I don't need to, you know, configure any physical unit. Instead of sending directly to YouTube or Vimeo or other platforms, I'm sending to Falcon, and then Falcon is sending along that stream with either the automatic... (CROSSTALK)

Bill McLaughlin: So, this can be a fully virtualized workflow. You also have the option, of course, of using a more traditional hardware closed caption encoder at your event site, in your video facility, and then converting those captions through the streaming encoder. A lot of streaming encoders support a feature like that if you have a hardware caption encoder. That's a good option, but especially for smaller, more introductory productions, there might be little or no physical or on-prem infrastructure. So we see productions that could be as simple as, you know, a camera hooked up to a laptop, where the laptop runs a virtual studio and sends video to the cloud. So, yeah, it's fully virtual and fully compatible with that kind of software system.

Kyle Phillips: It's a very, very low barrier, a very easy entry into adding captions into a stream there. I think we had a customer, Eventage, that sort of illustrates some of the things that they were looking to do, recently transitioning from in-person to virtual events. Can you tell us about that?

Bill McLaughlin: Yeah. This was, you know, an EEG case study we did with one of our great partners. Eventage does live and in-person event production for a pretty wide variety of different customers. You know, they started finding, as a lot of people in production have found, that clients asking for captioning can go over time from being just one client that has this and you need to do some research, to something where you realize, oh, a lot of our clients are interested in learning more about this. And actually, if you're in the production space, it's a solution that's pretty easy to develop an expertise in, I think, and then be able to make a value add on your service: one of the things that you can provide is the connections and know-how to do captioning. And, you know, Eventage did a really nice job on this. We did about a dozen streams in the event we worked on together for the case study, and it was a mix of Falcon and Lexi.

The Lexi Automatic Captioning can really do a very high-quality job for a lot of the types of business events that get produced. And yeah, it's a great company. And, you know, sometimes honestly, at EEG, with our equipment, we work with some of the biggest broadcasting names and channels that, you know, tens of millions of people are watching. It's also really exciting to do more local-scale events and corporate events and really get to bring more captions to people kind of right where they live, and, you know, not just on the biggest-budget television broadcasting.

Kyle Phillips: Yeah, it's been very exciting. I know, particularly over the last year and a half, a number of event companies have been doing exciting virtual work and have been able to get a solution quickly for their clients and their events. Talk to me a bit, though, about when we're doing the in-person and the virtual streaming at the same time, because if I'm streaming, there's gonna be a bit of a lag. There's always a lag, whether we're inserting captions or not. How do I make sure that the captions in venue match the speaker and aren't waiting 20 seconds for YouTube to follow along with the captions?

Bill McLaughlin: Right, exactly. So, when you're doing on-site captions, you'll wanna use a somewhat different solution than something like Falcon, which involves taking the video out to the cloud and then playing it on a platform like Facebook that's in the cloud. You know, the delays on this kind of build up to the point where, if someone's watching that on their device, there is a significant amount of discord between what's happening in front of your eyes and what's happening on the screen. So it's not really a good way to do accessibility. And so for in-person events, what's good is that it's easy to take a hardware encoder that can do SDI video and either use that instead of Falcon in most cases, or, even when there is an important use case for both, you can chain them both through the same iCap access code, which means that the same captioner can access the Falcon and the hardware encoder, and essentially you'll get the on-site and the cloud captions.

And, you know, that will have the same service cost and the same service set-up. It won't really be any more difficult. So, the dedicated product we have for the AV and event space is the AV610. The way that it's a little bit different from a broadcast closed caption encoder is that it specializes in open captions, putting the captions directly on the screen. And it can do some things with the captions on the screen that you can't do with the traditional broadcast closed caption set-up. So, you can create effects with the video at the producer side: you can pull the video back a bit and scale it to leave space for the captioning, where the captioning won't interfere with anything on the slide, the way you see in this picture. If you had the captions overlaying, it would interfere with your ability to read the slide. So you bring the captions down into a separate space on the reinforcement monitor to give them their own space. The product also supports outputs where you don't even have to have an input video.

And if you just wanted to put on a background image related to your event, make the text extra large and then put this out to an HDMI monitor, you can basically have a standalone caption screen, which in a lot of cases you might put off to the side of the stage, kind of away from the speaker. You know, obviously, too far in the peripheral vision is not perfect, but essentially that gives you an ability to have captions there. Anybody that's interested in using the captions to help them understand the presentation will be able to look to the left or right when they need to and see the presentation screen. Or you can put it behind the speaker in a case where you want everyone's eyes up, front and center, and that's not gonna interfere with other visuals.

Kyle Phillips: Yeah, we have so many different use-case scenarios where there's both live streams and the in-person. And what I'm hearing here is lots of different options, because depending on whether this is a commencement or convocation ceremony for a college, or whether it's a conference that's both in-person and virtual, there's going to be an option with captions that are timed perfectly and visible to people, even if there are multiple PowerPoint slides and different speakers coming in with different visuals. I know there was a recent one where we delivered both a live stream and about 120 different rooms. Wonder if you could... (CROSSTALK)

Bill McLaughlin: I mean, you say recent, and this is one of the last kind of big case-study events we honestly did pre-COVID. And hopefully we'll get those big events rolling back soon. But yeah, this conference was essentially a technology conference with 120 simultaneous live breakout rooms, and basically, with a combination of the on-screen, in-room captions at each breakout room and captions going out to an API on the cloud video system that was running the conference, we were really able to do a lot of cool things at a pretty enormous scale. For example, when you walk into the hall, there are videos of all the different breakout rooms put up on a big multi-monitor, actually with the closed captions turned on. And this actually means that you can kind of go into the lobby, see a lot of things happening and read the captions. I mean, not all 120 of them in one view, obviously. But you can kind of say, "Oh, what are the different sessions in this hall zone like?

What are they up to? Is this something of interest? Where do I wanna go next?" So, it's an example in a way of really using the captions, not just for accessibility and compliance, but as a kind of directory or even a search feature that's helping people get to the content that they wanna get to and get the most out of the conference.

Kyle Phillips: That's very cool. So, it's like drawing them in: they can see keywords and it's like, "Oh, I wanna go into that room, it's interesting what they're saying."

Bill McLaughlin: Yeah, if you've ever been, you don't wanna go to a conference and choose sessions only by the speaker's face, because you'll wind up in sessions that aren't of interest at all.

Kyle Phillips: Right. No, I think that's really cool. And again, cost effectiveness shows up there. I know we'll talk a little bit about the different options that we have, and, you know, that's always a big question in people's minds. But when we talk about cost effectiveness here, it's having, again, the in-person and hybrid potentially using, you know, in this case, one captioner or two to listen in to one event, not having to bring in one captioner for the live stream and then another captioner for in-person. You're talking about significant cost savings and, you know, just making it that much easier for people. So, you know, in-person streams, both of those, a lot of use cases; as Bill said, that was pre-COVID. What's happening these days and how are we captioning into virtual meetings? Well, I'd say we're delivering hundreds of hours per day with Lexi and Smart Lexi, probably thousands of hours of captioning into virtual meetings every single day now.

You know, some of the common meeting rooms or meeting technologies are Zoom, as we're using today, and Webex and Adobe Connect and some others, as well as now Microsoft Teams. Teams never had third-party captioning integration, but that is something that's coming in November; we've been part of helping to develop that. So, we can deliver captions into virtual meetings. But even in cases where there isn't a dedicated captioning component, like Google Meet, for example, where live human captions aren't able to be integrated into the session, into the platform, we can still deliver captions through our dedicated Ai-Live Caption Viewer or through StreamText. Simply, our captioners will join those meetings, listen to the audio and then deliver those captions. Now, when we talk about the different options, automatic speech, human captioning, quality levels, accuracy levels, the cheapest is gonna be just out-of-the-box ASR. That's free and, you know, free is free.

A lot of people like free, and free can be good. Free can give you a good result if you have good audio, depending on what's being talked about, the subject matter. You can get to maybe a 90%, 91% accuracy with YouTube automatic captions or automatic captions in Zoom or Teams or other technology. But as soon as you start to get into cases where, you know, the speaker names are important to differentiate, or in instances where you need to have brand names or there's technical jargon (think about engineering and medical conferences and technical conferences), automatic speech just doesn't deliver, and the quality degrades significantly. I mean, even in a 90% accuracy scenario, every 10th word is gonna be incorrect. Now, maybe that's fine for your audience, and that's great. But, you know, if you're looking to scale to something which is closer to accurate and still affordable, there are some ASR options that we offer, one of which is Lexi, which has been used for a number of years. Lexi uses topic models, where clients can build out the terminology and jargon and speaker names that are used in their sessions, to get to a better 95%, 96% accuracy level on the captions.

We also now have Smart Lexi, something Tony talked about at the beginning of the webinar. Smart Lexi builds on the technology of Lexi to go even further, to about a 97%, 98% accuracy, by using human-curated custom dictionaries. So, the same captioning staff who work with us on delivering premium captioning are also able to take their technical know-how and help build out Smart Lexi to deliver at a higher rate of accuracy. And then there are premium live captions delivered by professionally trained human captioners, whether stenographers or re-speakers, delivering at the highest level of accuracy, 99% or above in many cases. So, which one works for you? It really depends on what your needs are. Some of those conferences, depending on your budget and what you need, because they are highly visible and high stakes, will always use human captioners. And again, we've got ways to deliver and provide that for you. Others will look to a combination of Lexi, Smart Lexi and human.

But this is why we talk about a one-stop shop and having options for every budget. A good example in virtual meetings, a client of ours that uses us to support employees in their meetings, is Microsoft, which has been using our human captioning for a number of years now to provide accessible options for their employees, as well as for their internal events and client-facing events; we caption those as well. Those are all taking place virtually. They've been taking place in Teams for a number of years now, and we've been delivering the captions into our caption viewer for people to follow along with. As I mentioned earlier, though, rolling out this fall, I think it's next month, Microsoft Teams will have live captioning available right within the Teams interface. And that's something that we are excited about being able to deliver now, because so many companies use Microsoft as their platform and collaboration tool. So that's a really nice use case for us. Another use case in the virtual space is in the classroom.

You know, for many classrooms over the last year and a bit, those classrooms have been Zoom or other online meeting platforms. As schools this fall started to transition back to either full-time in the classroom or a combination of in-the-classroom and virtual, we're still providing services to those students virtually. The way that works is the student has their laptop and a microphone picking up the audio from the room, or we're tapping into the learning management system or the technology that's in place at that institution. One of our captioners is listening in remotely and providing the captions back to the student so they can follow along in real time. It's usually a three to four second delay to deliver to that platform. And you can see, you know, sort of what the platform looks like here, with students having real agency in terms of how they view the captions in their class, the size of the text and the font, and how quickly the captions are delivered. You know, that's a really big deliverable here for the way that we're bringing captions into education.

And a couple of examples of that. I was gonna pick just one, but I picked three that came to my head yesterday when I was going through and thinking about some of the colleges in different parts of the country (and Ryerson is in Canada), different colleges that are using live captioning in the classroom or even to support students outside of the classroom. So, we will provide live captioning to students for their actual lectures and their tutorials. But we also do a ton of captioning for orientations and events that take place on campus: staff trainings, teaching and learning summits, convocation ceremonies, which have been a growing segment for the last few years. And even as, you know, there's been that pivot to online-only graduation ceremonies in the last year, we've been seeing a lot of push for and expansion into live captioning, so that not just the students but their families are supported and can follow along with the ceremonies in real time using the captions.

I'd also say for those convocation ceremonies, we're seeing a lot of that multilingual translation come up for families for whom English is maybe not a first language, who can follow along with the ceremony simply by adding that translation. So, another use case there. I wanted to get your questions and wanted to hear from you. But before we do, I just wanted to say in conclusion that, you know, we do have a number of captioning options for any need, no matter the budget. If there's a need we haven't heard about, we'll sit down and talk to you about it. Bill and I have been on a number of calls in the last few weeks with customers looking at a different way of delivering captions or even booking captions. And so, you know, we're always excited to sit down and talk to you. The other thing I would say is that we are not just booking in captioners or providing you with technology to deliver captioning solutions; we're there to help you deliver. We have a dedicated team of service people who will walk you through whatever you need to know about setting up Falcon, or setting up one of your hardware encoders, or making sure that your captioners are in the sessions and getting all the prep materials that they need.

We're there to help, and that's the core of what we do. And Bill, I don't know if there's anything you wanted to add to that.

Bill McLaughlin: No, that's excellent, Kyle. You did a great job. I think this webinar was intended, especially for the enterprise, university and corporate video spaces, to give a really good basic introduction to what's going on with closed captioning and what's possible. Based on some feedback, we tried to go a little less into the deep technical details of different products than some of our webinars do, and instead look at the range of different outcomes you could be looking to drive. So, definitely happy to take more questions about specifics. Or, you know, email us, visit our website. There is a lot of really detailed information there, but sometimes I know we speak to people who say all this detailed information is overwhelming: "Please, please, please talk to me like I haven't heard about captioning before, and let's figure out what we can do here." So it's great to have an opportunity to do this.

Kyle Phillips: Yeah.

Bill McLaughlin: And I saw in the chat we've had viewers in the webinar from as far away as Kenya and Colombia. So, that's pretty awesome, too. Thanks, guys.

Kyle Phillips: Yeah, that's really cool. And that's a really good point: sometimes people just want to talk to a human being. There's great information online, and hopefully our websites give you the answers you might need, but at some point you just want to talk to somebody. There's an easy way to reach us. I'm going to turn it back to Regina, who I know has been fielding some questions that have come in. Regina?

Regina Vilenskaya: Yes, thank you. So, if you have any questions about any of the topics or solutions discussed today, please feel free to pop those into the Q&A tool at the bottom of your Zoom window. We're going to go ahead and get started with the Q&A session. A question came in from somebody who had signed up for the webinar: they said that they just need closed captioning occasionally, on Adobe Connect, Zoom and Google Classroom. So I'm curious to get your thoughts on what it looks like to only occasionally need captioned content.

Kyle Phillips: Yeah, I mean, we typically don't make somebody sign a contract to work with us. So, if this is something where you just need captions every once in a while, we have a lot of every-once-in-a-while customers who have the occasional webinar, or a meeting that might come up where somebody requests captions. We can do that. We simply book in what you need, when you need it, and you pay as you go. We book it in, we deliver it, and then we send you an invoice afterwards. So, it's really, really simple.

Bill McLaughlin: Yeah, and typically we can staff events with about three days of notice. You know, if you're setting up an account for the first time, you probably should ask a little sooner than that. But yeah, it doesn't have to be planned very far in advance.

Regina Vilenskaya: Someone says that they have had some issues with human captioners, which has pushed their customers toward relying on Lexi much more than human captioning. So the question is, do you see Smart Lexi pushing out human captioning, given its increased accuracy and cost savings?

Kyle Phillips: I don't think so. I mean...

Bill McLaughlin: I hope the human captioners they had issues with weren't Ai-Media captioners. But, you know, I guess there are sometimes issues everywhere. Obviously, with automatic captioning being so available, when we're supplying human captions it's really important to be providing an experience that's head and shoulders above the automatic captions. That's the quality of the captions, their accuracy and their timing, but it's also got to be the service, the on-time performance, every part of that. I think there is a tendency to move to automatic when you feel that getting a human captioner involved is too cumbersome: you have to book it too far in advance, you have to provide too many details. We talk a lot about accuracy, we talk a lot about cost, but that ease of doing business is something where automatic solutions, including the ones we offer, like Lexi, do have an advantage. It just starts happening completely on a whim from the customer's standpoint.

You know, you don't really need any advance notice at all. And that's a cool benefit of the solution. But for those events where you're going to want that premium accuracy, and where you're also going to want a real human being to work with you on the event set-up and the issues, it can be a double-edged sword, right? Sometimes we go for convenience and just press that button. But other times you really want a human to work with you. So, I think that both types of captioning are going to continue to exist for a long time.

Kyle Phillips: Yeah, and especially in the last year, we've seen such a huge increase in demand for human captioners. In some cases we couldn't keep up with the demand, and last fall in particular we had to say no to certain events if we didn't get enough notice. So now, I think we do feel that we have something where we can deliver highly accurate captions in those instances, but we are certainly not going to be putting any captioners out of work. There's a huge demand and a huge place for the incredible work that they do. It's just nice to have options for last-minute requests, for other use cases, and for other budgets.

Regina Vilenskaya: Someone asks about Falcon, saying that we describe it as affordable, but they want to know what exactly that means. How is the price calculated? And, for example, what would an average one-hour live stream event cost?

Kyle Phillips: $10 million. No. What? No, no.

Bill McLaughlin: Sounds like there must have been a lot of prep time on that stream. Yeah, there are a couple of components to the cost, and that's maybe part of why, looking at the websites for EEG or Ai-Media, it's not always totally transparent exactly what something's going to cost. For Falcon, the lowest-cost Falcon-plus-Lexi event package would be to buy the monthly subscription on Falcon as a standalone, which is currently a $399 monthly package. Then there's the question of the services, and whether that's going to be booked through an Ai-Media captioner or using Lexi. Those have different hourly costs, and it can depend. We joked about the prep, but is your event one and done within an hour? Would you want the captioner to attend rehearsals? Do you need to do technical qualification of the Falcon solution that's going to require you to use it for more than just one day?

So, there are some different factors, including whether you own the technology already. For example, you can rent a hardware closed caption encoder from EEG in most places in the United States for about $1,000 for a two-week period. If you own one of those, then you're not paying anything extra for it for the event. So, it can depend a lot on what technologies are in use.

Kyle Phillips: And typically, whether we're talking about costs for human captioning or the machine solutions, it's a function of time and hours. So, taking the hours and all the variables that Bill talked about, we would quote you on an hourly basis, and then on a pro-rated per-minute basis after that. Once we know all of the particulars, we can get you a quote pretty quickly and walk you through the different options for what you need, because it does vary depending on what you're after. But it can be a very easy and low price point to get into, whether it's human captioning or Lexi and Smart Lexi.
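The "hourly, then pro-rated per-minute" pricing shape Kyle describes can be illustrated with a short sketch. All rates here, including the hourly figure, are invented placeholders for illustration only, not actual EEG or Ai-Media pricing; only the $399 Falcon monthly subscription comes from the discussion above.

```python
def estimate_event_cost(minutes, hourly_rate=120.0, falcon_monthly=399.0,
                        include_falcon=False):
    """Rough cost model: full hours billed at the hourly rate, any
    remaining minutes pro-rated at 1/60 of that rate.
    The hourly_rate default is a made-up placeholder."""
    full_hours, extra_minutes = divmod(minutes, 60)
    service = full_hours * hourly_rate + extra_minutes * (hourly_rate / 60)
    platform = falcon_monthly if include_falcon else 0.0
    return round(service + platform, 2)

# A 90-minute event at the placeholder $120/hour rate:
print(estimate_event_cost(90))                        # 180.0 (service only)
print(estimate_event_cost(90, include_falcon=True))   # 579.0 (plus a Falcon month)
```

The point of the model is the shape, not the numbers: once the particulars are known, the quote is just hours plus a pro-rated remainder, plus any platform subscription or rental that applies.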

Regina Vilenskaya: And someone in the chat asked a great question: how has COVID-19 affected the uptake of Ai-Media and EEG solutions, specifically for universities? And how long will it take before things become normal in terms of captioning, especially with the vaccine rollouts?

Kyle Phillips: Yeah, certainly in the last year we've seen a sharp increase in demand for live captioning in the classroom. It's still very much a demand for human captioning, because of ADA requirements, the equity issues around providing the same access to content, and the high accuracy that goes into that. There are potential ways we may be able to alleviate some of the burden on human captioning with Smart Lexi in the classroom, and we're looking at ways to deliver that soon. We're also seeing some events where people are making use of automatic speech technology to deliver captions. But even with the shift back to the classroom in many parts of the country, there's still a very high demand for live captioning of some kind. So, for us at Ai-Media, it's about having options to deliver captioning where we just don't have a human captioner available. It hasn't been the case recently, but certainly last fall, at the height of COVID, any last-minute requests that came in were often very difficult to fill.

Now, we do have a backup option for people where we can say, OK, if we don't get coverage, can we bring in Lexi or can we set up a Smart Lexi instance? And that's a viable option. But I see the demand is still very strong from the university and college partners out there right now.

Regina Vilenskaya: Are there any plans for EEG to host their Falcon engines on multiple server boxes? Currently, Falcon engines are reached on the same RTMP endpoint but with different stream keys. He's indicating that the engines are being run on the same physical server. I believe this is a Bill question.

Bill McLaughlin: Yeah, that's not 100% true. Essentially, Falcon is currently available in, I think, five different geographic regions. In each of those regions, you're definitely streaming to different infrastructure that's local to the region, and that's important for stream quality. You can have fully redundant streams by linking two channels together in iCAP in separate regions. And of course, this works best when you choose two regions that are both relatively close to where you're working. For example, in the United States, we have East Coast and West Coast channels. Honestly, both of those locations work pretty well for normal five-to-ten-megabit-per-second video on a good internet connection. If you're anywhere in the United States, you're typically not going to have a problem streaming to either the East or the West Coast. So you can get redundancy across those two servers, those two regions, and a significant amount of the network path by using an East and a West Coast server.

And if you have two internet connections out of your site, you can have almost complete A/B redundancy on Falcon. So, there are some options for that. Even within a region, there's load balancing to different worker servers once you get behind that initial entry point. But if you're looking to make sure that you have two streams on independent infrastructure, then using the multi-region capability is the best option.
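The multi-region setup Bill describes, two linked channels in separate regions, each reached with its own stream key, can be sketched roughly as follows. The hostnames, URL path format, and region names here are hypothetical placeholders, not EEG's actual Falcon ingest endpoints:

```python
# Hypothetical builder for a redundant pair of RTMP ingest URLs,
# one per region, each with its own stream key.
def redundant_ingest_urls(stream_keys, regions=("us-east", "us-west"),
                          url_template="rtmp://{region}.falcon.example.com/live/{key}"):
    if len(stream_keys) != len(regions):
        raise ValueError("one stream key per region")
    return [url_template.format(region=region, key=key)
            for region, key in zip(regions, stream_keys)]

urls = redundant_ingest_urls(["abc123", "def456"])
print(urls[0])  # rtmp://us-east.falcon.example.com/live/abc123
print(urls[1])  # rtmp://us-west.falcon.example.com/live/def456
```

In an A/B setup, the encoder would push the same program to both URLs, ideally over two separate internet connections, so that losing one region or one network path does not interrupt captioning.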

Regina Vilenskaya: Great. I hope that I am paraphrasing this question as the attendee intended. This question is about the Lexi automatic captioning service: can it be used with different engines, through Google or Microsoft for example, to offer French-language captioning? And does it work differently depending on what platform you're using it with?

Bill McLaughlin: Yeah, we do have the ability through Lexi to leverage technology from several different backend suppliers to provide captions. Part of the reason is that the language lists differ, and the language quality differs, between these providers. For French, we have at least compatibility with the IBM and Amazon backends right now, and we may have some other backends for French coming online soon. Typically with Lexi, the customer is just going to choose a language, and we'll use the engines, technologies and combinations that have tested best for us. But when you have a specific supplier in mind, or when you want to test suppliers against each other to make sure you're getting a supplier that's right for your language and your content, we can certainly work with you on that as well.
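A toy sketch of the per-language engine selection Bill mentions: default to the backend that has tested best for the language, unless the customer requests a specific supplier. The backend names for French follow the discussion above; the rankings themselves, and the entries for other languages, are invented for illustration:

```python
# Hypothetical ranking of ASR backends per language, best-tested first.
# Only the French IBM/Amazon compatibility is mentioned above; the
# ordering and the English entry are made up for this sketch.
TESTED_BACKENDS = {
    "en": ["amazon", "ibm", "google"],
    "fr": ["ibm", "amazon"],
}

def choose_backend(language, preferred=None):
    candidates = TESTED_BACKENDS.get(language, [])
    if not candidates:
        raise ValueError(f"no tested backend for language {language!r}")
    if preferred is not None:
        if preferred not in candidates:
            raise ValueError(f"{preferred!r} not available for {language!r}")
        return preferred
    return candidates[0]  # default to the best-tested engine

print(choose_backend("fr"))                      # ibm
print(choose_backend("fr", preferred="amazon"))  # amazon
```

The design point is that the customer normally only picks a language; the supplier choice stays an internal default that can be overridden when someone wants to compare engines for their content.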

Regina Vilenskaya: Great. So, it looks like we have time for just one more question: what's the best way to schedule captioning for a multi-day virtual conference, given that our client prefers human captioning?

Kyle Phillips: I mean, the best way is to reach out to us directly. We will sit down, jump on a Zoom meeting or a call with you, and look at your run of show. We'll make sure we understand exactly what you need, and we can then schedule that into our system for you. We can set up a test time ahead of the event to make sure we're delivering the way that you and your audience expect, and that we've answered all of your questions. We do a lot of multi-day conferences, particularly in the fall and in the spring; it's a big conference season for us, and we do a lot of these consultations every day. They can be very quick. We can schedule them in for whenever you need them, and we can get you booked in and up to speed pretty quickly.

Regina Vilenskaya: Thanks for that. And that brings us to the end of the webinar. So thank you, everybody who could join us today for Closed Captioning 101: Live Video Solutions. And a big thank you to Bill and Kyle and the captioning team behind the scenes. So, if we didn't get to your questions, we will follow up with you soon after the webinar. And if you have any questions or would like to learn more, please reach out to sales@eegent.com or sales@ai-media.tv. Thank you all again and have a great rest of your week.

Bill McLaughlin: Thank you.

Kyle Phillips: Thanks, everybody.