EEG Video’s series of live, free-to-attend webinars has begun for 2021!
On February 9, 2021, we hosted Captioning with EEG in 2021, featuring Bill McLaughlin, VP of Product Development for EEG, and Caleb McKerley, EEG Customer Success Manager. With a focus on how to caption more accurately and securely in 2021, this online event brought EEG’s industry-leading captioning expertise to our audience.
Captioning with EEG in 2021 • February 4, 2021
Topics covered included:
- Our new customer success program
- The latest Lexi 2.0 release: higher accuracy, workflow enhancements, and more
- Product updates coming in 2021
- A live Q&A session
Bill and Caleb discussed these EEG closed captioning solutions in depth:
- Lexi Automatic Captioning
- HD492 iCap Encoder
- Alta Software Caption Encoder
Look to EEG Video for the latest closed captioning news, tips, and advanced techniques. To find out about upcoming EEG webinars, as well as all other previously streamed installments, visit here!
Bill: Thanks everyone for coming. I'm really excited to see so many people here at our first EEG webinar of the new year. We had taken a little break since doing a whole bunch of presentations in the fall of last year. So welcome back, and if you haven't been to one of our events before, I certainly hope you enjoy it.
So today speaking will be me. I am Bill McLaughlin, the VP of Product Development at EEG, and I'll be talking about our technical roadmap, some of the great things our engineering team is working with customers on, and, you know, new products coming out on our roadmap now. And I'll be joined by Caleb McKerley, who is new to EEG, having come on in December, and Caleb is our new Customer Success Manager. Caleb is going to be responsible for a new program, particularly in the area of Lexi, Falcon, and our EEG Cloud services, where we're going to try and get customers having a more regular engagement with us about what they can do with their subscription, you know, be that training, which in the past we haven't really offered as an organized program, and we're looking to do both a lot more webinars and also personalized training as needed for Lexi and Falcon users.
And also, Caleb is going to be, you know, beginning to take over from our Sales and Support teams some of the generalized role of sharing our product roadmap with you, and maybe build a regular relationship where you can talk about what you're doing in captioning, what your needs are in captioning, and how we can make sure that the EEG products are continuing to work for you, and really make that as collaborative a relationship as possible, you know, to get all that great video that everyone is producing captioned.
In addition to talking about Lexi and Falcon today–and Caleb's going to present that part beginning just in a minute–we'll talk about some of EEG's other products too. AI captioning has become a bigger and bigger part of what we've been doing in recent years. Customers are really strongly pulling things in that direction, as well as the technology becoming just, you know, very easily accessible, very affordable, and very high-quality for a lot of content. You know, that being said, we still do a lot of the same work that we've always done with SDI and IP video encoding and adding closed captions through expert human transcribers, expert human translators, and doing the video processing for the captioning even from pre-recorded sources and any other form of time code and metadata.
So I'm going to share some stuff with you after Caleb speaks about Lexi, about our roadmap on some of the encoders and, you know, we have some interesting new announcements regarding the 492 SDI encoder, as well as the Alta product, that will be coming out in this quarter. So with that, I'm going to turn it over to Caleb, and Caleb is going to walk you through some of our newest features for Lexi and help you start to understand our training programs on Lexi and where he's hoping to take the customer success program at EEG with your help. So, Caleb, please take it away.
Caleb: Yeah, thank you Bill, I appreciate that. So, I have to introduce myself, I am Caleb McKerley. I'm really excited to kind of go over Lexi today, maybe touch a little bit on the EEG Cloud as a whole, and then, as Bill said, I'll kind of explain a little bit of what our goals are and what we're hoping to offer with this customer success program at EEG.
So just to get started, I'm sure we've kind of got a range of guests here today that are anywhere from very familiar with Lexi to just now hearing about it for the first time, so I'm going to try and give kind of an overview.
This is a bit of a sample of what some of our training sessions have looked like in these early days of this success program. But I kind of want to give some feature updates on how we've improved Lexi over the past few months and also give some best practices going forward.
So, to get started, one of the things that we're really excited to announce is the introduction of Lexi's Core Models. To kind of speak a little bit about what these are, what they're used for, and how you can use them: a Core Model is an EEG-curated model that you can use to start off your Topic Model. So I know I'm kind of talking about a number of different models here but, you know, the Topic Model feature has been something that's been available to our Lexi users for some time now. We have continued to improve it to make it much easier to refine Lexi, to make sure it fits your needs and you get the top accuracy you want.
So right now, we're offering these Core Models. These are some basic, case-specific vocabulary models that EEG curates and updates, and they allow you to get started, to get your foot in the door of what a Topic Model should look like. We do have some categories listed here: Headline News, Sports, Christian Broadcasting, etc. These are a great way to kind of get yourself going in the Topic Model world. We really recommend folks that don't fall into any of these specific categories stick to the Headline News Core Model when creating a Topic Model, primarily because it's going to have the most up-to-date pop culture terms and political terms, primarily around the Coronavirus and COVID, a lot of things that really weren't in our language over a year ago. We want to make sure that we can kind of get you on the right foot there. So, we're really excited to introduce those Core Models.
So, this kind of brings me into Topic Models. Once you've created your Topic Model by using one of these new Core Models that, like I said, we're working on for you, you'll come to a new screen that some of you may not be familiar with if you haven't used this feature in a little while. One thing that's exciting is we did launch Lexi 2.0. You'll notice when you go to that Topic Model page that Lexi 1.0 and 2.0 are both displayed and available for you. Really, this is a level of redundancy so that you know Lexi is always going to be available for you. We have stood by the uptime of this product, and we know when it comes to events, uptime is very important for you, so we do have some redundancy there.
Now you're going to be brought to our Vocab Control tab, and I'm going to go over that in just a second in more detail. Many of you that have used Lexi prior are probably more familiar with the Learning Sources tab, which is where you could maybe connect to a website and have Lexi scan a web page, or upload a text file; that's still available for you. You can still select the Learning Sources tab and use it, but to get the most accurate, most refined Topic Models, we now are encouraging our users to use this Vocab Control section, and so I want to talk a little bit more about that so that you can understand what it is that it offers.
Under Vocab Control is where you're going to be able to control the spelling of terms, capitalization, primarily with, you know, proper nouns and names, and then you can also edit the phonetic pronunciation of these terms. So a good example of this, highlighted there below, is Beyonce. We know the word, it's common in our language, but sometimes the pronunciation of that may confuse the automatic captioning, so we want to be specific: bee-on-say. Now, a lot of names are probably going to fall into this category, spelled one way, pronounced another, as well as regional terms, right? You know, for most of the country, Houston is going to need to be pronounced Houston, but for our New York folks we may want to throw in the house-ton pronunciation. So this allows you to kind of go in and make sure that if Lexi's maybe not hitting exactly the names that you're looking for, you can come in here, refine it, and work a little tighter.
Now, you may not be able to see this screen because it's kind of shrunk in a little bit, but you can do these one word at a time, or you can create a CSV file with the word in the left column and the phonetic pronunciation in the right column, and you can upload multiple terms at one time. So, this is a really great new addition.
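As a rough sketch, a file like that can be generated with a few lines of Python. The two-column layout (word on the left, phonetic pronunciation on the right) comes straight from the description above, but treat the exact file format Lexi expects, and the sample entries, as assumptions:

```python
import csv

# Hypothetical vocabulary entries: word in the left column,
# phonetic pronunciation in the right column.
entries = [
    ("Beyonce", "bee-on-say"),
    ("Houston", "house-ton"),   # regional pronunciation for the NYC street
    ("McKerley", "muh-ker-lee"),
]

# Write one term per row, two columns, no header.
with open("lexi_vocab.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for word, pronunciation in entries:
        writer.writerow([word, pronunciation])
```

Uploading one CSV like this is faster than entering terms one at a time once you have more than a handful of names.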
To kind of give you a summary of what I just went through, you can kind of think of the models and the Topic Models with this sentence, right? So if we were to say, "Up next, WEEG will have Dr. McKerley on to discuss COVID, Anthony Fauci, and vaccine updates," just to kind of explain what it is that we've just gone through. All the text there that's in black is common language, right? "Vaccine," "discuss," "doctor," these are all terms that Lexi is going to know with or without a model. The words in blue, COVID or Anthony Fauci, are more up-to-date terms that seem to have entered our common lexicon but, you know, more than a year ago, they probably were things that were foreign to us. So EEG's Core Models are going to contain that type of language. And then finally, our green, that's what's been entered into the Topic Model: a specific name, and then also maybe the specific call letters of our station, or anything else that you would like to put through. So that's kind of a brief overview of what we just went through.
Now with Lexi 2.0, we're seeing really excellent improved accuracy, a reduction in word errors of 30 to 50 percent, and we're seeing a great improvement with punctuation and recognition of those proper nouns; especially when you start using those Topic Models, the feature just gets better and better. And then finally, you know, we did kind of get some feedback that a fast talker, or maybe some hard-to-hear dialogue, or some background noise coming in the audio could affect Lexi's captioning. With Lexi 2.0, we're seeing a lot of changes there.
So, I've kind of explained the accuracy, I've kind of explained the science behind that. I'd like to go into some more features that we've brought into the EEG Cloud along with Lexi to help you kind of use this to its fullest. So one thing that you can do with Lexi once you have a Topic Model created: we actually now have what we call Lexi Instances; these are reusable templates of Lexi jobs. So if you're going to be using a very similar setup for running Lexi, right, we're going to use the same Topic Model, the same iCap Access Code, and we're going to be sending it to the same place, we can pretty much create a single Lexi Instance and allow it to run as many times as we need to. This is a really excellent tool.
With your Lexi subscription, you'll be able to create as many Lexi Instances as you want because, like I said, these are reusable templates. It's just best to build it out to where you're going to be most familiar with it, and then you can set those jobs to run either by toggling them on or off from the Lexi Instance tab or by scheduling them out ahead of time using the Lexi Scheduling tool.
So, the Lexi Scheduling tool will help you schedule the Instances that contain the Topic Models. You can kind of get a feel of how we're building up to the features that Lexi provides. Our new Scheduling tool allows you to, you know, set as many Lexi jobs ahead into the future as you want to. You can set the start time, the stop time, you can have the event email you before it starts and email you once it's over, and you can access all of your transcripts as well. So this is a really robust Scheduling tool that we're really proud of, and we really think that it could help folks that are going to have really consistent, regular jobs that are going to be run; that way you kind of take off that added responsibility and just let the system do its job, and you can just come in and monitor as you need to.
So that's a brief overview of Lexi. We can go into a lot more detail on all of that. What we just went over is available on the EEG Cloud site along with Falcon, our cloud encoder, iCap Translate, and really there's a lot more that we could go into; we would take up a lot more than the time that's scheduled today so, you know, we won't do it just now. But if you're interested in learning a little bit about what we have on the Cloud, if you'd, you know, like to get any training, if you'd like to get some pricing info, well, that leads us into our customer success program, which is what we're trying to do here.
Customer Success Program
So, I was brought in in December and I've been getting up to speed ever since, trying to learn as much as I can, but as Bill said, I'm going to be working really closely with our EEG Cloud users. I'm happy to speak to any of our hardware users as well, and those who are going to be using a little bit of both, because I want to learn as much about our clients as possible. So we have some goals, right? This is why I was brought in, this is why we're creating this program. Our goal is to learn as much about our users as possible and allow me to advocate for our users' needs to EEG itself, right?
So, my hope is that I'm going to be able to develop relationships with a number of you. I want to work hard to know what your business is, I want to be able to understand your needs, and I want to make sure that we're fulfilling those, right? I also want to see how it is that you're using our tools, and maybe I can find a way to get more value out of them for you, so, you know, that's really my number one goal. The best way for me to do that is going to be to meet with you, do some trainings, hear a little bit about how you're using it, and then get any product or EEG feedback that you may have, so my door is always going to be open, and I'm probably going to be reaching out a little bit to you as well to try and get a little bit of that information.
Now, one of the ways that you can take advantage of this new customer success position is reaching out to me to schedule any training, right? We could do a much more in-depth overview of the Cloud, like the one I just went through, or you could let me know if you're having any issues with your Topic Models or kind of want some help curating what you already have; I'm happy to take the time to do that for you.
If we need to set up a recurring call to go over your status and, you know, maybe set some goals to hit some improvements when it comes to your captioning needs, I am happy to set that up for you as well. And then I'll also probably be reaching out to you whenever we have new feature updates, whenever we've got some ideas to get you some more value out of, you know, what you're already subscribed to. So, feel free to be on the lookout for communication from me.
And last but not least, how can you get in touch with me? If you're an existing Lexi customer, I do have a Calendly that's set up. I'll make sure that this information is available to you all. You can always just book a time on my calendar and we can talk about whatever you need to. Kind of consider it open office hours. Other than that, I do have my extension there below, I've got my email address, and my door is going to be open to you. I am here to listen to our customers. I want to know what you like, I want to know what you don't like, and I want to see if I can do my best to help make you happy.
So that's a brief overview. With that, I think I'm going to hand it back to Bill to take over the rest of the Q&A.
Bill: Thanks so much, Caleb. Yeah, that's a really good summary of what we're doing with this program and, I mean, I really hope that everyone who's involved in this, who's currently using Lexi, is gonna see that that could have some value, because honestly, I think even our most technically sophisticated users are going to benefit from something where you can talk to someone at EEG a little bit more regularly about the newest features we have coming down the pipe and the newest recommendations that we're going to make, because it's just definitely a very rapidly evolving product area. And, you know, we're already seeing that, you know, for customers who have been on the system for a handful of years, a lot of these things like the models can benefit from an update, you know, can benefit from a re-examination.
At the time we installed this, there were only a couple of ways to schedule new jobs and, you know, now there are new ways like using the Calendar system, new GPI features or features on the hardware encoder to monitor whether there's any captions upstream and create captions intelligently and automatically in response to that. So, you know, I think everyone knows that with automatic captioning the goal is to continue improving and make a better and better automatic captioning system as we go forward and, you know, I think I think staying in touch and, you know, having the conversation with Caleb is really going to help a lot of customers.
HD492 iCap Encoder
So to pivot a little bit to the rest of the program today, we're going to talk about two important updates to the 492 iCap encoder. One of these is a software system update that's a field upgrade for existing customers and we're also announcing a new product. It's kind of a variant of the 492 that will work with UHD 4K video.
So beginning with the software update, you know, as a little bit of background, the 492s have been our flagship SDI encoder for the past six years now, and it's the encoder that, you know, we have the most of out in the field. It's been a great product for EEG. I think it's powered an awful lot of captioning out there. And what we're seeing, you know, is that the hardware that's in the field, even if it's six years old, is still in great shape, but from a security standpoint, we're actually realizing that, you know, the operating system on this needs a refresher for customers to continue getting use out of it.
In a lot of environments now, you know, you think of SolarWinds and all of that, I mean, you know, a lot of our enterprise customers are very sensitive to the idea that you can't have things on the network that, you know, haven't been updated in 5 or 10 years, or have a system that is no longer updatable, just because that's kind of a vulnerability to, you know, malware getting into the network, and it can affect all kinds of things. So what we're doing to keep those 492s operating well in the field is we have a major new software system update coming out, and emails are going to start going out about that at the end of this month. It's going to put in, you know, a new version of the underlying operating system that runs the web server and runs the encryption for, you know, sending iCap and Lexi data to the Cloud. And this is going to be fully backward compatible with any 492.
It might take a little bit longer to run than some of the past updates you've installed, but it's a fully backward-compatible update, and I think with getting this installed, from a security perspective, you're going to be able to keep running these units, you know, for another five years or something hopefully and get the maximum life out of the product. Because obviously that's something that's always been a strong point of the EEG hardware products, and even as these products have more and more sophisticated software, we certainly want you to get the maximum number of years of use. The update is also going to contain... where's my other slide?
The update's also going to contain all of the new software systems for the 492. So depending on how recently a unit's been updated, that could be a lot of things involving, you know, anything from iCap security to modules like CCPlay and CCRecord. But one of the big items that's new on this update is updates to the Lexi controller, and the Lexi controller on the hardware box gives you an alternate way to access a lot of the features that Caleb talked about on eegcloud.tv, as well as some features that are specific to your hardware box, like being able to run Lexi jobs based on a GPI trigger or based on what's in the upstream video and audio content.
So that'll be in the new build, and you'll basically be up to date on everything. So I would strongly encourage, you know, everybody who's on our support list for the 492s who has one of these units to look out for the email. If you have one of these units and you think you might not be on the email list, if it's something where it was bought from a dealer or perhaps, you know, inherited from another group in your company, it's definitely a good idea to email firstname.lastname@example.org or email Caleb, and we'll get you on the right list. And that way you'll know about the update when it comes out and it's ready for your unit.
So the other thing I want to announce about the 492s that we're very excited about is we're going to be introducing a new hardware version of the 492 that's going to be capable of doing 12 gigabits per second SDI, and that enables you to do 4K video on a single SDI cable. We've gotten a lot of requests for a product like that, especially from customers doing, you know, A/V, and, you know, pre-pandemic obviously, there was a lot of build-up with stadium video boards and things like that.
Some of that's been on pause but, you know, as live events are, you know, hopefully starting to come back over the summer, that's I think going to be a really big thing. You know, there's going to be very, very little new-build stuff I think that's going to go in 1.5 or 3 gig SDI, and probably customers are going to be wanting UHD captioning either with 12 gigabit per second SDI or with IP. So the new 492 UHD boards for new units are actually going to be capable of doing the 12 gig SDI. So if you're using SDI, you're covered. And if you're using IP, you're also covered with EEG caption encoding, and that's our Alta product.
The Alta product does IP video caption insertion either in MPEG transport streams or in SMPTE 2110. And it's also interoperable with kind of integrated master control systems; it can communicate through Telnet and several other traditional protocols. So for Alta, we've seen a lot of new installations of that, and I think, you know, 2020 really strengthened the argument a lot in some ways for remote production and flexible cloud production.
So in addition to the basic feature updates on the Alta encoder that have been coming out at a pretty rapid pace, I mean, you know, pretty much every month for both the TS and the 2110 product, we've also focused a lot on the deployment options. And so one of the things I want to focus on here, that, you know, we're just starting to put a lot of new information on our website about and get some customers involved with, is a more organized ability to run Alta directly on the public cloud.
In particular, most customers are using Amazon Web Services, and we have an AMI for both the transport stream and the 2110 version of the Alta product. The AMI can be dropped directly into an AWS account controlled by the customer. So we just share the AMI over to you, and you're able at that point to launch your own, you know, completely ready-to-go version of the Alta VM. And we've found that, you know, for a lot of customers, that's actually substantially easier than the process of importing Alta into an on-prem VM system.
We also provide turnkey kind of server configurations and hardware for Alta, but when you have this up in Amazon, it can be a lot easier to use with certain workflows. So we've put a lot of compatibility work into making sure that most of the common settings on the AWS MediaLive encoder are supported and that we're able to work with MediaConnect gateways, which is, you know, the AWS service for bringing Zixi, SRT, and other kinds of encrypted streams in and out of different VPCs, meaning, you know, the private clouds on Amazon.
So with Alta on AWS, you're able to really, I think, have an easy deployment of Alta and be able to run captions, usually in a transport stream, and you'll be able to run this between different Amazon accounts through MediaConnect and really kind of recreate the kind of production chains that you would do, you know, on-prem or over satellite links and things like that for contribution, now increasingly, actually, completely in the public cloud.
We also have moved a step further with this. You know, moving on from customers who host Alta on-prem, and customers who actually are going to host Alta in their own AWS account, we now also have a fully hosted offer on Alta, where we will host Alta, you know, in a space in EEG's account dedicated to a given customer or a given event. And what happens with that is you're able to ingest a stream from a contribution encoder, either on-prem or in any AWS region around the world, and you'll be able to import that to your dedicated link, and we can provide captions on that either through Lexi automatic captioning or through the human caption partners that make iCap happen, and you'll be able to get an output stream, again, using any of the formats supported by MediaConnect, either to another AWS account for more processing or distribution, or really anywhere on the internet into an on-prem workflow.
So I think this is a way that's going to really enable customers who have use cases like, for example, short-running sports events or corporate conferences, to use Alta the way we've seen customers use our Falcon RTMP product. And Falcon is really, really flexible, has really, really good pricing for short-term events, but the RTMP format is something that's a little bit better suited to smaller-scale streaming events and, you know, a little bit less towards, really, a large professional OTT kind of production, and with this hosted Alta offer, I think we're offering that. Really, it's a good price, it's easy to set up, and it really kind of splits the difference between the two to provide a good combination for a lot of these events. So I'm very excited about being able to work with some of you on that.
So that brings me to the end of what we've prepared for today in terms of our roadmap. I can definitely answer questions about really any of the other products, whether Lexi, Alta, Falcon, or any of the SDI encoding products. Caleb and I will be able to stick around for a few minutes to answer questions. I want to thank Regina Vilenskaya and the EEG marketing team for doing a great job organizing this and getting a lot of people to show. I think this has been better attended than any of our webinars in 2020, which is amazing. And thanks everyone for coming. We'll start to just take some questions out of the chat, so leave your questions and stick around, or, you know, head out when you're done for the day. Have a great day!
Is the Lexi Cloud captioning service FedRAMP-certified? And yeah, we've looked into that program; the current short answer is no. It's a program that I think we have a lot of interest in getting involved in. The number of places in the federal government using our captioning products is definitely not insignificant. FedRAMP is a federal government IT security and data security program with a formal certification relating to that, and, you know, that is something that I think we're looking to pursue in the medium-term future.
You know, the other thing that I'd kind of like to point out on that is that when data security issues or, you know, limited-cloud or no-cloud policies are an issue at your work, I think one of the things that we a lot of times work through with customers is the Lexi Local product. And the Lexi Local product, which is also usable with local human captioners, basically offers most of the same features, and it offers them in an on-prem box. So basically, it's a no-cloud box that enables routing connectivity from local encoders on your network to local captioners, and it also has a built-in, kind of flexibly scalable license for doing automatic captioning, actually locally, built into that box.
So the customer becomes a bit more responsible for really making sure the vocabulary is updated regularly, since the box is not communicating back to EEG, but it can help you a lot when you have a concern where, okay, we have, you know, data that we have a lot of restrictions on putting out into the cloud at all. So the Lexi Local product is something I'd recommend looking into for that as well.
How is this webinar being captioned? And the answer is we are using Lexi and Falcon. Essentially, if you think about the block diagram of this for a Zoom webinar, we use our iCap Webcast program, which comes with Falcon and is a Windows program that allows you to feed audio from an external source, like a microphone or another program on your computer, to iCap for captioning.
We're using Lexi for the webinar and, you know, we have definitely put some time into putting the vocabulary that we typically use in an EEG webinar into the Topic Model. I think my biggest complaint about the captioning on the Zoom webinars, which is kind of difficult for us to control, is that a lot of times you see, like, just kind of one row of captioning at a time, and obviously, like, a more roll-up kind of TV display would, I think, be a lot nicer. And I think there's some work going on with that, I assume, but we have what we have for now. But, you know, it's pretty good, and it's definitely something that we've been using in our webinars since the beginning, and we have a lot of customers using that as well.
What is the accuracy rate percentage for Lexi? Very common question, you know, we get that a lot. It's a question that I hesitate to even try to answer by just saying a number, because the real answer is, it depends a lot on what the content is and what the audio quality is. You know, for what we think of as probably the strongest core use case, doing, like, you know, a local broadcast news show, we've measured in a number of tests numbers from about 94% to 96% or 97% on Lexi, which is really pretty good, you know.
My impression is that most people, when they see captioning that's at about 95% word accuracy and above, usually say, you know, hey, that's very good. If you see about 98% or above, you're pretty much like, you know, oh, that's perfect; you'll probably have to watch a while to see anything that you don't like about it. And something like about 90%, which honestly might be closer to what we get on a webinar like this, where we have a more informal speaker and, you know, less-than-professional micing, you know, 90% will have a noticeable number of errors.
But, you know, it's almost always really clear what the speaker is saying, what the basic meaning is, so you're still offering, I think, a very valuable service. If you're below about 80% on your ASR captioning, which might start happening when you have just piles of unfamiliar vocabulary that isn't trained well, or if you have speakers who are difficult, say, where a human would subjectively say that person's sort of difficult to understand right now, then that's when you usually have to really say, oh, we need to do something about this; this isn't conveying the meaning in a consistent way.
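To make those percentages concrete, word accuracy is generally reported as 1 minus the word error rate (WER): the word-level edit distance between a reference transcript and the ASR output, divided by the reference word count. This is a sketch of the generic industry metric, not EEG's internal measurement, and the sample sentences are made up:

```python
def word_error_rate(reference, hypothesis):
    """Standard WER: (substitutions + insertions + deletions) / reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit-distance table over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

ref = "up next we will have doctor mckerley on to discuss vaccine updates"
hyp = "up next we will have doctor mcnerney on to discuss vaccine updates"
wer = word_error_rate(ref, hyp)
print(f"WER: {wer:.1%}, word accuracy: {1 - wer:.1%}")
# prints: WER: 8.3%, word accuracy: 91.7%
```

One wrong name in a twelve-word sentence already drops accuracy to about 92%, which is why proper-noun handling via Topic Models moves the needle so much.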
So you definitely want to stay out of that bad zone, but I think the key is to understand what your content is like, and really to test drive the Lexi captioning on the content that you actually want to use. Frankly, a vendor's representation of the percentage on test data that doesn't resemble your content isn't going to be very valuable for your production.
So we do offer, in most cases, a significant amount of free test driving of Lexi for these kinds of questions, to see how it works. I would definitely just fire it up and look at some evaluations, and we can help with that quantitatively as well.
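To make those percentages concrete, word accuracy for captioning is commonly computed as one minus the word error rate, where the error count is the word-level edit distance (substitutions, insertions, and deletions) against a reference transcript. Below is a minimal illustrative sketch of that metric; EEG's internal evaluation methodology isn't described in the webinar, so this is the generic textbook calculation, not necessarily how Lexi is scored.

```python
# Minimal word-accuracy sketch: 1 - WER, where WER is the word-level
# edit distance (substitutions + insertions + deletions) divided by
# the number of words in the reference transcript.

def word_accuracy(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    wer = d[len(ref)][len(hyp)] / max(len(ref), 1)
    return 1.0 - wer

ref = "the city council approved the new budget on tuesday"
hyp = "the city counsel approved a new budget tuesday"
print(f"word accuracy: {word_accuracy(ref, hyp):.0%}")
```

One caveat worth noting when comparing vendor numbers: this simple metric weights every word equally, so a dropped "the" costs as much as a wrong proper noun, which is part of why testing on your own content matters more than a headline percentage.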
Can we deploy Lexi and Falcon on a private cloud that isn't owned by EEG? With the Lexi Local product you can do that with Lexi, and with Alta you certainly can. Falcon does not currently have an implementation off of eegcloud.tv. If anybody wants to get in touch to talk about that more, it's something our team has looked at, but right now you can do Alta, Lexi, and 492s in an on-prem type of implementation; Falcon for RTMP is not currently included in that package.
Can multiple Learned Sources be included in a Lexi Topic Model? Yes, there are a couple of ways you can do that. We've made some recent changes to the Lexi Topic Model that let you manage it all by uploading and downloading CSV files, and that's one of the ways that works well for customers doing more sophisticated, personalized management of the Topic Model. For example, imagine you run a sports league and you want to get the rosters for the two teams playing each given game into the system. You want to do that while keeping the model relatively small, because the smaller and more focused the model is, the better the chance that everything in it is always right.
I think one of the best ways to manage that is to maintain a model for each team, and then before a given game, just upload the two CSVs for the two teams that are playing. You get a merged model with relatively little effort that way, and you can also automate it if you're handy with anything that can make an HTTP request.
But basically that's what I'd suggest if you're thinking about a problem where you have multiple layers of learning that you want to combine in different ways. If you just have a single model for a television station or a series of company events or anything like that, then you can put it all in one big model. It can be a mix of material imported from web pages, CSV files, and entries you punch up with the pronunciation directly in the web page, like Caleb showed before, and you can combine all those sources of learning.
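The two-roster workflow Bill describes could be scripted roughly as below. Note that the CSV column layout and the idea of a scripted upload are illustrative assumptions on my part; the actual file format and upload interface are whatever the Lexi Topic Model page and its documentation specify.

```python
# Hypothetical sketch: merge two per-team Topic Model CSVs into one
# game-day model before a broadcast. The CSV layout here is an
# illustrative assumption, not EEG's documented format.
import csv

def merge_rosters(paths):
    """Concatenate roster CSVs, dropping exact duplicate rows."""
    seen, merged = set(), []
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.reader(f):
                key = tuple(row)
                if row and key not in seen:
                    seen.add(key)
                    merged.append(row)
    return merged

def write_model(rows, out_path):
    """Write the merged rows back out as a single model CSV."""
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

# Before the game: combine the two teams' rosters into one small,
# focused model, then upload game_model.csv through the Lexi Topic
# Model web page (or an HTTP client, if you automate the workflow).
#   game_model = merge_rosters(["home_team.csv", "away_team.csv"])
#   write_model(game_model, "game_model.csv")
```

Deduplicating on the way through keeps the merged model small, which matches the advice above: the more focused the model, the better the odds everything in it fires correctly.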
Are any of these updates going to affect the 490 or 491 encoders? That's a good question. The security update that's coming out now is for the 492 encoder. The 490 and 491 encoders, which go even further back, are on somewhat limited support now, and I would have to check whether there's a plan to get those onto the latest operating systems and whether that's even possible. Those units are older now, but there are definitely still some of them out in the field, and we're not trying to leave you high and dry, so hopefully we can get something done on that.
There's a question about the encryption on the iCap admin site. If you have older links that go to plain eegicap.com, we're aware that people with newer browsers get a complaint, because that goes through a gateway to the site that only uses TLS 1.0 (or it might be 1.1) instead of TLS 1.2.
What you can do about that is go to the site through the newer gateway, which is what our newer documentation points to: accounts.eegicap.com, instead of the plain DNS name, which is still used for the real-time service, a different protocol. The short answer is accounts.eegicap.com, and that should solve the problem for you.
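The browser complaint comes from modern clients refusing to negotiate below TLS 1.2. You can emulate the same policy with Python's standard ssl module; the sketch below is generic and just shows the version floor, it isn't EEG tooling.

```python
# Sketch: why modern clients reject a TLS 1.0/1.1-only gateway.
# Browsers enforce TLS 1.2 as a floor; a context configured the same
# way will fail the handshake against a legacy endpoint.
import ssl

def strict_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    # Refuse anything older than TLS 1.2, mirroring current browsers.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

# Connecting with strict_context() to a TLS 1.0-only gateway raises
# ssl.SSLError during the handshake, which is effectively the warning
# browsers surface on the old eegicap.com gateway.
```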
How can we increase ASR accuracy for people who speak with an accent or speak culturally specific English? That's a good question. A lot of that comes down to the base ASR training and all the transcription and audio that has gone into it historically. We could try some different things; it depends on what people are looking at. It's not so much a matter of a totally generic solution as this: the broader the training base an AI model has, the more coverage you're going to get. And I think clever things can be done in the models to make the same training go a little farther.
But basically, it's an issue that comes down to training, and for different accents we do sometimes have different recommendations for how to tune these models, and we'll work with customers on that. It really comes down to the specific accent: do we have a model with good training there, and of the models we have, which offers the best fit.
Is the Alta AWS image compatible with AWS GovCloud? That is an interesting question, and honestly, I'm not sure yet. I'm not sure what the requirements are for being on GovCloud. The Alta image is really pretty much just an EC2 instance; it doesn't reach out to a lot of other AWS services, so in that sense it might work okay.
Now, to get captions on that from a cloud service, you'll need to be reaching out to eegicap.com or eegcloud.tv, so it depends to what extent this is part of a fully local or on-prem system in something like GovCloud. I know GovCloud definitely has certain restrictions about reach-outs and being self-contained, with all the different resources deployed in GovCloud. We're relatively new to the AWS program for these AMIs and I'm not sure if that works yet, but we should definitely get in touch, and I'm sure we'll be doing work on that.
We got a request for an overview of the Falcon service. We didn't talk that much about Falcon in this webinar, but Falcon is a caption encoder in the cloud, hosted by EEG, that takes an RTMP stream in and puts an RTMP or HLS stream out. A typical use case: say you're streaming from something like Telestream Wirecast or OBS software, or any number of hardware encoders from brands like Matrox or Elemental, and you're going either directly to a social media site like YouTube or Facebook Live, or to a server like a Wowza server, Brightcove, or Kaltura. Falcon sits between the encoder that uplinks your video and that site or service, and injects caption data in embedded form into the RTMP stream.
It's available from EEG, hosted by EEG, for a monthly service fee, which gets you one channel of the product that you can use all day and all night, on different links as needed, but basically one simultaneous video. It's compatible with everything the other iCap encoders are compatible with: you can have human captioners write captioning to it, you can use Lexi, and you can use iCap Translate to put captions in different languages. All of that then typically goes out to some kind of hosting platform that actually hosts your videos and needs to be able to display closed captions; the compatibility for English and other European languages is usually very good.
Sometimes compatibility on these sites for Asian or other non-Roman character languages can be a pain point, but some of the platforms are starting to realize the importance of getting there as well.
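To make the routing concrete: with Falcon you point your contribution encoder at the Falcon ingest instead of directly at the platform. The sketch below builds the equivalent ffmpeg push commands; the Falcon ingest URL and both stream keys are invented placeholders, since the real ingest address comes from your eegcloud.tv account.

```python
# Sketch of repointing an RTMP contribution feed at a Falcon ingest.
# The ingest URLs and keys below are invented placeholders; the real
# Falcon ingest address comes from your eegcloud.tv account.

def build_push_command(source, ingest_url, stream_key):
    """Build an ffmpeg command that relays `source` to an RTMP ingest."""
    return [
        "ffmpeg", "-re", "-i", source,
        "-c", "copy",            # pass video/audio through untouched
        "-f", "flv",             # RTMP carries FLV-muxed streams
        f"{ingest_url}/{stream_key}",
    ]

# Without Falcon: encoder -> platform directly.
direct = build_push_command(
    "event.mp4", "rtmp://platform.example.com/live", "PLATFORM-KEY")

# With Falcon: encoder -> Falcon ingest; Falcon embeds the caption
# data, then forwards the captioned stream on to the platform you
# configure on the eegcloud.tv side.
via_falcon = build_push_command(
    "event.mp4", "rtmp://falcon.example.eegcloud.tv/ingest", "FALCON-KEY")

print(" ".join(via_falcon))
```

The point is that the switch is a one-line change on the encoder side: only the destination URL moves, and the platform still receives a normal RTMP stream, now with captions embedded.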
We have a question about whether we can use Lexi captioning with NDI. We do not currently have a direct integration with NDI; you'd have to convert into one of the other streams that we do support. Actually, when NDI first came out, it didn't really have a real closed caption feature, and I'm not sure if that's changed.
Honestly, some of the products that do something like captioning on NDI really just use a screen overlay, so it works well for a live event, although it isn't closed captioning that stays invisible until someone chooses to see it. We've definitely had customers use products like the AV610, which is an SDI caption decoder that makes a very pretty caption overlay, take that out as a zone of the screen, and cut it in with other videos coming from NDI using a mixer.
So that's the kind of workaround you could use. I'm definitely interested in having us explore NDI because, aside from 2110, which we've had a lot of involvement in and which has also been moving into the A/V space with the IPMX program, NDI still has a bigger deployed base in a lot of these kinds of projects, and I'd like to be working a bit more in that space. Hopefully we'll have a good announcement at the next webinar.
If we didn't get to some questions, especially any very complicated ones, Regina, Caleb, and I are definitely interested in getting in touch if you want to talk a bit more after the show. But I think we're coming up on an hour here, so thanks for coming, and thanks to everyone who stayed this long. I hope to see you at another EEG event soon, or maybe even at a trade show someday. Have a good day, everyone. Goodbye and thank you.