EEG Video continued its popular series of Zoom Webinars in May, educating media professionals about our full range of closed captioning innovations.
During this well-attended online event, John Voorheis, Director of Sales for EEG, took the audience through a guided tour of EEG solutions built for broadcasting, entertainment media, and more. This essential webinar covers:
All about EEG’s closed captioning solutions for live events and more
How you can use our products for your captioning needs and workflows
The latest closed captioning advancements at EEG
A live Q&A session
Featured EEG solutions included:
This was our second webinar to cover A/V, Live Events and Online Communications. Visit here to experience the first installment on this topic, which originally streamed on April 28, 2020.
To find out about upcoming EEG webinars, as well as all other previously streamed installments, visit here!
Regina: Hi everyone and thank you so much for joining us for today's webinar, Closed Captioning Solutions for A/V, Live Events, and Online Communications. Hope you're all doing well today. My name is Regina Vilenskaya and I'm the marketing content specialist here at EEG. Today we'll be joined by John Voorheis, the director of sales. John has been with EEG for four years now and has extensive experience in technical sales. For the Q&A portion of the webinar, we will be joined by Bill McLaughlin, the VP of product development.
For today's webinar, John will be going over who we are, what EEG does, the EEG solutions built for A/V, live events, and online communications, and the latest products and features. During the webinar you will see some chat notifications pop up at the bottom of your window, and that's just me providing some links to product information that you might find useful.
If you have any questions during today's webinar, feel free to enter those into the Q&A tool at the bottom of your Zoom window and at the end of the webinar, we will get to as many questions as possible. So that about covers it, so now I'm going to welcome John to kick off the webinar Closed Captioning Solutions for A/V, Live Events, and Online Communications. John, over to you!
John: Hey everybody. Thanks so much for joining us today on this Tuesday. Hope everybody had a healthy, fun, and safe socially-distanced Memorial Day Weekend and so glad you guys are all here to discuss closed captioning for A/V today.
John: Yeah, so just a little bit of background for those people who aren't already familiar with EEG. We have been a company focused on accessibility since the early 1980s, when we were formed as Electrical Engineering Group, and that was, of course, around the time we worked with PBS to help develop the North American 608 closed captioning standard.
You know, today we're going to be talking about our Falcon RTMP streaming media encoder, a software encoder for captioning live streams, as well as our Lexi and Lexi Local automated services, and our AV610 CaptionPort decoder, which is an SDI decoder for open captions, not to be confused with an encoder. So we'll talk a little bit about that today.
You know, recently, over these past several months, we've been getting an unbelievable number of inquiries about how to caption different online events due to the public health situation. There's been a real rise in interest in that sort of solution, simply because of all the remote work and more and more people looking to make content accessible while socially distancing. So we definitely have some solutions for that. We've been working with these solutions for quite some time now, with this increased amount of content online, but it's really exciting to see these new applications.
So definitely, if you do have any interesting stories about how you are captioning because of current events, feel free to share them in the group chat. We're always interested in doing case studies with people who are using our solutions in unique ways, so that's a great way to generate some public interest around the work that you might be doing. It also helps us share our customer stories and showcase how they're able to succeed with accessibility, so we're definitely always interested to hear about that. So let Regina or myself know, in the chat box or via email, if you have any interest in participating in something like that.
So today we're really going to be talking about, as I said, our Falcon fully virtual RTMP software closed captioning encoder, our Lexi Local on-premises automatic captioning server, as well as our AV610 CaptionPort decoder for captioning live presentations.
So, streaming captioned video to remote viewers, which I believe we are captioning right now, so you can see our captioning in action during this presentation using Falcon and Lexi. Since we are in a Zoom meeting, we're using Falcon to encode the captions and Lexi, our automated captioning service, to generate them. So you're able to see it in action right now if you activate the captions on your device.
So Falcon is EEG's live RTMP streaming closed caption encoder, and it's a pretty mature solution. It's been out for as long as I've been with EEG, so since early-to-mid-2016, and even before then.
And what Falcon is, is just a software closed caption encoder that has an RTMP input and RTMP output. So you can caption the output from your streaming media encoder using iCap and Falcon.
Live Streaming Setup (without Falcon-Sourced Captions)
So right here is a live streaming setup without Falcon closed captions. On the right is depicted the interface for a streaming media encoder, such as a Sony, TriCaster, or Teradek Cube, where you can convert your video to RTMP format and then push that out as a live stream broadcast to a platform like Facebook Live or YouTube.
Now here's that same workflow if you were to caption with Falcon. You'd send your video source to your streaming media encoder, convert it to an RTMP stream, and then Falcon would just become a midway point between your CDN (your content delivery network) and your streaming media encoder.
So what Falcon does is - just like any iCap encoder, it extracts the program audio, sends it securely through iCap to either your contracted transcriptionist or to our Lexi automated captioning service, the captions are generated, and that data's sent back to Falcon, where it's encoded as 608 data into the RTMP stream and you send that out to your CDN or multiple CDNs.
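The 608 data John mentions frames each caption character as a 7-bit code plus an odd parity bit before it's carried in the stream. A minimal Python sketch of that framing (simplified: it handles plain ASCII only, not the full CEA-608 character map or doubled control codes):

```python
def with_odd_parity(byte7: int) -> int:
    """Set bit 7 so the total number of 1 bits in the byte is odd,
    as CEA-608 requires for each transmitted character."""
    ones = bin(byte7 & 0x7F).count("1")
    return (byte7 & 0x7F) | (0x80 if ones % 2 == 0 else 0x00)

def encode_608_pairs(text: str) -> list[tuple[int, int]]:
    """Pack caption text into the two-byte pairs CEA-608 carries per
    video frame, padding an odd-length string with a 0x00 null."""
    codes = [with_odd_parity(ord(c)) for c in text]
    if len(codes) % 2:
        codes.append(with_odd_parity(0x00))
    return list(zip(codes[0::2], codes[1::2]))
```

For example, `encode_608_pairs("Hi!")` yields two byte pairs, each byte carrying odd parity, which is why decoders can detect single-bit transmission errors.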
And for those of you who follow EEG or attended last Thursday's webinar, again, this is that same diagram, more or less, that you've seen time and again, because it leverages that same iCap architecture. It's really simple, or we try to make it that way. You know, in practice it may not seem so simple sometimes, but we do our best to make it as simple as possible, and Falcon really does do that.
So here's a screenshot of the Falcon interface, where you take the RTMP ingest URL, you enter that into your streaming media encoder to send the RTMP stream to Falcon, and then you input the stream key. And here you can see your input, with the video that's going into Falcon prior to the audio being extracted and sent to the captioner, and then you see the streaming output with the closed captions. So you can see everything that's going on right through that Falcon interface.
And here's a screenshot of where you configure the output. You can have multiple outputs to multiple destinations, such as YouTube, Twitch, and Facebook, at the same time. You just select your stream type, and you share the Access Code with whatever caption agency you have retained to transcribe your live stream. So again, very similar to the interface for our hardware encoders.
Using HTTP Falcon to Send Captions
Now, there's an important distinction here with Falcon and how it operates. There's actually a second version of Falcon you can use, as a number of services, including Zoom, take in captions through a separate stream. You go within that service's interface and post to the HTTP link where it has a caption source linked. That's how you'd do Zoom, YouTube, or something like that. So we are equipped to handle that; it's just that the captions are sent as a separate stream, not as part of the RTMP stream, so we can accommodate HTTP.
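To make the HTTP-delivery idea concrete, here is a hedged sketch of what posting captions to a service-provided caption URL can look like. Zoom's third-party caption integration works roughly this way (the organizer copies a per-meeting token URL, and each caption line is POSTed with an incrementing sequence number), but the exact parameter names and URL below are illustrative assumptions, not EEG's or Zoom's documented API:

```python
from urllib.parse import urlencode

def build_caption_post(api_token_url: str, text: str, seq: int,
                       lang: str = "en-US"):
    """Build the (url, body) for one HTTP POST to a Zoom-style
    third-party caption endpoint. The caption text is the request
    body; a monotonically increasing `seq` query parameter lets the
    service order and de-duplicate lines. Parameter names are
    assumptions for illustration."""
    sep = "&" if "?" in api_token_url else "?"
    url = api_token_url + sep + urlencode({"seq": seq, "lang": lang})
    return url, text.encode("utf-8")

# Hypothetical per-meeting URL, as an organizer might copy it:
url, body = build_caption_post(
    "https://example.invalid/closedcaption?id=123", "Hello, world", 1)
```

In a real integration, Falcon (or any caption source) would send each `(url, body)` pair with an HTTP POST as lines arrive from the captioner.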
Using the HD492 iCap Encoder to Stream Content
Now, you can also use an HD492 hardware encoder to caption streaming content, and this may be ideal in certain circumstances where you're already set up for SDI video, you have a lot of equipment configured for that, and you'd like to caption upstream from your streaming media encoder. Some other considerations: you may want something like open captions in-venue once we return to live gatherings of that nature, and with a 492, you'd be equipped to do that with a decode output that displays open captions over video. Then you can send your caption-embedded SDI video feed through your streaming media encoder, which would pass the captions to your CDN or CDNs. So that's an alternative workflow, and there are some other ways Falcon can address in-venue captions in conjunction with our CaptionCast tool, for example.
So really what I always tell people when they call me and ask about captioning live streaming is, "Do you want to embed your captions upstream or downstream from your streaming media encoder? And do you need captions in-venue?" If you need captions in-venue as well as streaming, the 492's a great solution because of that decode output, but if you still want to use Falcon, we have a solution known as CaptionCast where you can actually pull the iCap data and display it at a web URL.
So you can put that web URL up on monitors in-venue for, you know, participants in the audience to see, or they can pull those captions up on their smart phones and that's - that's, again, with the CaptionCast solution. We don't really talk about that much in this presentation, but there are a couple of ways to achieve that and, of course, you can caption for streaming with a hardware encoder; that's another option, but, you know, the focus today will be on Falcon.
Falcon Features and Advancements
So just some advancements for Falcon: we're looking to add an HLS output. As I mentioned earlier, Falcon currently encodes captions as 608 data. HLS will allow it to further support non-European language character sets, and you can also output VTT or TTML caption tracks to further enable that support.
You know, we've been getting a lot of inquiries, especially for some of the Asian character sets: Cantonese, Mandarin, Korean, things of that nature. We also get a lot of requests for other non-European character sets, be it Russian, Arabic, etc. So this HLS output expansion will really enable us to work with a lot more of those, which is a really exciting advancement. Again, that should be coming in Q3, or around the time that IBC would ordinarily be scheduled.
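The reason HLS helps with non-European scripts is that HLS can carry captions as sidecar WebVTT text, which is plain UTF-8, rather than the restricted 608 character set. A minimal sketch of formatting caption cues as a WebVTT file body (standard WebVTT syntax; the cue contents are made up for illustration):

```python
def vtt_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT HH:MM:SS.mmm timestamp."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def make_vtt(cues) -> str:
    """cues: iterable of (start_sec, end_sec, text) triples.
    Returns a complete WebVTT document as a string."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{vtt_timestamp(start)} --> {vtt_timestamp(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)
```

Because the cue text is arbitrary UTF-8, the same pipeline serves Chinese, Korean, Arabic, or Cyrillic captions without any character-set gymnastics.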
Lexi Automatic Captioning
So moving right along, talking about caption delivery with EEG and how that can be done. One of the great advantages of Falcon is it leverages iCap, which is EEG's protocol for taking audio, sending it to live captioners, and then receiving that caption data. As you've probably heard me mention before, we caption about 5 million minutes a month through iCap.
It is a free software download we provide to captioners, so if any of you are using an EEG encoder and you have a captioner who you've been with for a number of years and want to keep using, we can provide iCap to them at no cost, and generally speaking, captioners are really thrilled to get on iCap. It's a really simple, reliable way to connect and send captions for whatever broadcast or stream you're captioning. And the other beauty of iCap is that whatever EEG encoder you're using, whether it's Falcon, the HD492, or the CaptionPort decoder we'll talk about later in this presentation, it all looks the same to the captioner. So I just want to touch on that a little bit.
Now, sometimes it's not always feasible to use live captioners. You may be captioning a live stream where captions aren't mandated, or there may be availability constraints or even budget constraints. And automatic captioning is actually quite good at this point.
With our Lexi cloud-based captioning service, we see in excess of 90% accuracy in situations with clean, single-speaker audio. Without people talking over one another, as you can see right now as I'm talking, the accuracy is pretty good; but again, there is no background noise, no background music, and no one talking over me. So it can work quite well. You can set it up instantly, activating it through an EEG Cloud account alongside Falcon, so it requires no additional equipment to begin captioning. You can really have a fully virtual captioning workflow using Lexi in conjunction with Falcon.
Now, a little bit more on Lexi: we state 90% accuracy out of the box, which would be the example of what you're seeing today if you have those closed captions turned on.
Lexi also features custom Topic Models, which are our way of programming it with vocabulary. You can upload lists of names, places, acronyms, things of that nature, to improve the accuracy, and you can actually phonetically spell out different terms. With all this done, investing some time in configuring custom Topic Models, we regularly see accuracy in excess of 95%. So you have a really robust end-to-end captioning solution with Falcon and Lexi together.
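EEG's actual Topic Model format isn't described here, but the idea of seeding a recognizer with domain vocabulary can be illustrated with a simple post-correction pass: a word list mapping the strings a generic speech model tends to produce to the proper names and acronyms you care about. Everything below (the vocabulary, the approach) is a hypothetical sketch, not Lexi's implementation:

```python
import re

# Hypothetical vocabulary list: domain term spellings paired with the
# lowercase forms a generic speech model might emit for them.
VOCAB = {
    "eeg": "EEG",
    "icap": "iCap",
    "lexi": "Lexi",
    "caption port": "CaptionPort",
}

def apply_vocabulary(transcript: str, vocab=VOCAB) -> str:
    """Post-correct a raw ASR transcript with a custom word list,
    analogous in spirit to uploading names/acronyms to a topic model."""
    out = transcript
    for heard, term in vocab.items():
        out = re.sub(re.escape(heard), term, out, flags=re.IGNORECASE)
    return out
```

A real topic model biases recognition itself rather than patching text afterward, which is why it also helps with words the base model would never emit at all; this sketch only captures the vocabulary-list idea.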
Now, we also offer Lexi Local, which is automatic captioning without cloud connectivity. It requires no internet to operate, so it's great if you're discussing highly sensitive information. Some examples of things I think would be very sensitive: the minutes of a corporate board meeting where you go in-depth into, say, next quarter's earnings; a product release, such as a new iPhone; or if you're with some of the folks who work to keep us safe, like the three-letter agencies.
This can be a great solution. Because it doesn't require internet connectivity to caption, you can periodically bring the device online to download updates and so on, but actual captioning needs no connection, so it's a great failover and great for high security. At the same time, the usage is not metered; we license it per simultaneous stream that it's captioning.
And you can set up a single Lexi Local unit to caption multiple video feeds simultaneously; that's a licensing cost, but each license entitles you to caption one video feed 24/7. If you have questions about how that cost model works and whether it might be beneficial to you from a licensing or caption cost perspective, I'm definitely happy to take you through that and your options for cloud-based Lexi versus offline. So I can take you through some scenarios in which it might be more beneficial for you to use Lexi Local, from a cost perspective or also from a security perspective.
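The flat-license-versus-metered trade-off John describes is just break-even arithmetic. A sketch with entirely made-up numbers (EEG's actual pricing isn't stated in this webinar):

```python
def breakeven_hours_per_month(annual_license_cost: float,
                              metered_rate_per_hour: float) -> float:
    """Monthly captioning hours above which a flat annual license
    beats a pay-per-use cloud rate. All rates are hypothetical."""
    return annual_license_cost / 12 / metered_rate_per_hour

# Example with illustrative numbers: a $12,000/yr license vs. a
# $10/hour metered rate breaks even at 100 hours of captioning/month.
hours = breakeven_hours_per_month(12_000, 10.0)
```

This is consistent with John's rule of thumb that the local appliance starts paying off somewhere in the hundreds of hours per month, though the actual crossover depends on real pricing.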
Lexi Local Workflow
And again here's the workflow with Lexi Local. You just have a video source going into your iCap encoder. The program audio goes to Lexi Local and the captions are encoded in real-time, so you have a captioned video output; really very similar to the iCap workflow you're used to seeing, except no cloud connectivity's required. So again, kind of went through the benefits of that.
If you're doing a high volume of captioning, there can be some cost benefits and, of course, tremendous amount of security benefit if you just have a no-cloud policy. You know, while iCap is extremely secure and proven in that regard, you know, for some scenarios people just do not want that information going out of the facility, out of their office, what have you. That's where Lexi Local can come in.
Here you see Lexi Local within the web interface, right next to your other options for cloud-based Lexi, recent Lexi sessions, etc.
Lexi vs Lexi Local
This just breaks down the differences between Lexi and Lexi Local. They both feature custom Topic Models, the ability to access archived caption jobs, the ability to deliver highly accurate closed captions automatically using speech recognition, and the ability to build custom vocabulary.
With Lexi Local, no internet connection is required. It is a hardware device, so you do need to install the appliance, but again, it's very much a plug-and-play device. I could probably install it on one of my better days, and I wouldn't consider myself the most technically savvy in terms of setting up broadcast equipment by any means.
And it's also not a pay-per-use model; it's a fixed annual cost. So, as I've said, in addition to security, there can be some cost benefits to going with the local appliance over cloud-based Lexi if you're doing a significant amount of live captioning, somewhere in the hundreds of hours per month. So definitely get in touch if you have questions about what might be best from a cost perspective, and we can answer those offline with the sales department.
So, displaying live captions for live events: live display captioning. This is where it's important to make the distinction between closed captions and open captions. Closed captions are what you're used to seeing on television, what you've activated today if you hit the CC button in your Zoom window to see all our beautiful Lexi captions, and what you might see on a YouTube player if you hit the CC button. Those are all closed captions. When you hit the CC button, the player, whether your television or your web player, knows to decode closed captions. Something else important to consider when setting up a streaming workflow: you always want to make sure you have a web player that supports live closed captions. But those are closed captions.
Open captions are burned into the video, similar to what you might think of as subtitles. If you're watching a foreign film or you've turned on subtitles in an alternate language, the caption data is actually burned into the video; it can't be disabled by the user. This can be useful in live events where you have mixed accessibility needs: the captions are present for everyone, most specifically for those who need them, and those who don't necessarily need them can follow along if they choose. But that's the distinction.
It's really pretty simple; open captions are, of course, burned into the video and cannot be turned off. So the AV610 CaptionPort is an iCap decoder in that it doesn't encode closed captions; it burns in captions. We call it the CaptionPort decoder: it sends the audio via iCap to the captioner or to Lexi, takes in the caption data, and displays it over video as open captions.
You can also receive captions via serial port or via telnet from an on-site captioner. If you have someone on your internal network, that might be a scenario you consider, or, if for whatever reason you wanted to go the POTS route, you can use the optional modem, so there are a lot of options there. This is essentially very similar to a 492 in how it connects to live captioners, but it doesn't encode captions; it burns in open captions, and we covered the distinction on that.
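A telnet-style caption input is, at bottom, just text arriving over a plain TCP connection. Here is a generic sketch of the sending side; the host, port, and CRLF line framing are illustrative assumptions, since the actual command syntax a given encoder or decoder expects on that port is device-specific:

```python
import socket

def send_caption_lines(host: str, port: int, lines) -> None:
    """Send caption text line-by-line over a plain TCP connection,
    the transport a telnet-style caption input typically uses.
    Each line is terminated with CRLF; real devices may expect
    additional control commands, which are omitted here."""
    with socket.create_connection((host, port), timeout=5) as sock:
        for line in lines:
            sock.sendall(line.encode("utf-8") + b"\r\n")
```

An on-site captioner's software would keep such a connection open for the duration of the event, streaming text as it is written.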
So it's really good, and what else it does is enable you to scale video, as you see there. This is really designed for a presentation scenario where you want open captions but might not want to obstruct the presentation, because it's important content; there may be some histograms, some pie charts, what have you. You don't want captions covering up important data; you don't want accessibility to cause inaccessibility. That's where the CaptionPort comes in.
AV610 Workflow (Scaled Video)
So this is what a workflow looks like for a CaptionPort, and you'll see this is nearly identical to that workflow we showed with the 492, and very similar to Falcon: the encoder extracts the audio, sends it via iCap to your contracted transcriptionist or to Lexi, the caption data is returned, and you have a scaled video output with open captions. You can configure those to appear above or below the video. We typically show them above, because that's most successful in case you have somebody wearing their 10-gallon hat in front of you, what have you; you can still see the open captions. But you can do both with the CaptionPort.
It enables you to scale video down by 15% and place the caption data outside of the presentation area. So it's an ideal solution for meetings, conferences, events, anything in-person where you want captions available for all without obstructing your video.
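The 15% scale-down is simple arithmetic: shrink the program frame and give the freed band to caption rows. A sketch (the 0.85 factor matches the 15% John cites; the layout details are otherwise illustrative):

```python
def scaled_layout(width: int, height: int, scale: float = 0.85):
    """Scale the program video to `scale` of the frame and report the
    caption band left over. Returns (video_w, video_h, band_h),
    where band_h is the vertical space available above or below the
    scaled picture for burned-in caption rows."""
    video_w = round(width * scale)
    video_h = round(height * scale)
    band_h = height - video_h
    return video_w, video_h, band_h

# A 1920x1080 frame scaled by 15% leaves a 162-pixel caption band.
layout = scaled_layout(1920, 1080)
```

Placing captions in that band, rather than overlaying them, is what keeps pie charts and slide text fully visible.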
Here's an example of what it might look like in a meeting, in a corporate application. Again, this is a great solution if you have a large number of conference rooms where you want to provide accessibility on-site for meetings, things like that.
AV610 Workflow (Text with Background)
Here's an alternative workflow where you can use the CaptionPort decoder to display captions on-site with audio only. You can just have a static image go through the CaptionPort decoder. Program audio is passed either to your captioner or to Lexi via iCap, and captions are returned in real time to the CaptionPort, where they're displayed as open captions overlaid on an image of your choice. So you can use this to caption audio-only content as well, which is actually a pretty common question we get, so it is possible with the AV610 CaptionPort decoder.
And this just shows how you configure the AV610 through a web interface. You can choose whether you have video or a presentation going through, or use a static image. This is where you choose the text position and alignment, and even choose the font and color. So there are a lot of ways to make your captions not only accessible but also appealing with the AV610. Since they're open captions, you have a bit more flexibility than with typical closed captions, where the caption appearance is somewhat restricted by the decoder.
AV610 CaptionPort Features and Advancements
So with the AV610, really, a lot of our work this year and this release cycle has been toward increased accessibility globally. We've expanded the non-Latin character languages supported by the AV610 CaptionPort, and that now includes any language with a font set supported by the AV610, including Arabic, Hebrew, Japanese, Korean, and much more.
If you have specific questions about what languages we support, please ask and we will get back to you as to whether they're currently supported or what the time frame is to support them. You might be surprised at some of the languages we've supported over the years, so if you have any questions regarding that, please let us know. There are no silly questions, so please ask.
We will be updating the AV610 to support 4K; that should be shipping sometime this summer. It displays overlay captions in native-resolution 4K video with a 12 Gigabit SDI connection, and it's designed for large-screen live events. If you're using that standard for any large-screen live presentations once those return, please reach out to discuss how the AV610 CaptionPort decoder can help.
So just to recap today: there are a lot of solutions. We've really seen some interesting developments, to say the least, these past few months, and you see a lot more people changing the way they consume content and the way a lot of business is done. Fortunately, EEG has been working for a number of years with these technologies that we're now seeing become part of our day-to-day lives, and we're fully prepared to support accessibility in this new normal for however long it lasts, with Falcon providing a fully virtual closed captioning workflow for any streaming that you're doing, including with services like Zoom and Facebook Live.
We have Lexi Local, you know, to caption on-premises for any highly sensitive information with the license model that may be favorable to those of you who are captioning a high volume of content. And finally, we have the AV610 for captioning, you know, on-site events with open captions or, you know, captioning audio-only feeds with open captions or captioning presentations.
So that really concludes my portion of today's presentation. You know, as I said, kind of covered quite a bit here. Happy to answer any questions you guys have. I know we have Bill McLaughlin, our VP of product, standing by who lives and breathes this stuff on a day-to-day basis, much as I do but, you know, kind of a little bit more technical level. Definitely interested to hear of any, you know, kind of interesting use cases you guys have all had come up here in these past few months, because I've certainly heard a lot of interesting use cases, you know, as you see - yeah, I mean, there's - there's been a lot of interesting stuff going on.
You know, one that came up recently was, you know, there was someone whose City Council meetings had moved to Zoom meetings. They were actually sending Zoom out to broadcast because, of course, City Council meetings are part of the public access channel. So that was something interesting that came up recently, so definitely interested to hear any stories you guys have, and also, you know, answer any questions.
So with that I'm gonna go ahead and open this up to questions. And thanks so much guys for attending! This has been great. I really enjoyed talking with you guys all, but thanks so much for coming out to these webinars. I hope they're beneficial to you and, like I said, happy to answer any questions. Thanks so much.
Regina: Thank you, John. Yeah, so we've now reached the question-and-answer portion of the webinar. So if you have any questions and haven't already done so, you can enter those into the Q&A tool at the bottom of the Zoom window.
So the first question that we have is, Nicolas is asking what languages are handled by Lexi and whether French-Canadian is supported.
Bill: French is supported. I believe that the model is not exclusive to French-Canadian versus French-French. We have had some good feedback that it's worked OK from, actually, primarily French-Canadian users, so I would encourage you to try that. We would certainly be interested in feedback on whether it works better in certain dialects and regions than others, but there is a French model.
Regina: Can Lexi link to third-party CC encoders?
Bill: Yeah, so if you want to deliver any iCap source, whether it's human captioning over iCap or Lexi, and you want to link that to third-party encoders: there are some devices that have third-party support for iCap built directly into them, for example, Imagine Versio and Pebble Beach encoders; there are a few different kinds of devices that actually have partner libraries. If you have an older closed caption encoder that doesn't have an iCap library but can be connected to through standard telnet or serial port, then you can use a converter product, which will basically take the audio out of the program and send it to iCap or Lexi, since that's necessary with iCap, and something like telnet or a serial port doesn't actually have an audio delivery capability.
So you'll use a card that takes in SDI, or software that can take in, say, an MPEG transport stream that could even be audio-only, and those converters can put out telnet or serial port data that goes to an existing inserter, if you still want to keep an older device plugged in that doesn't support iCap connections. In not all cases is that really a long-term economical choice, keeping that thing in and having a separate thing do the communications, but where it's probably most useful is if you have something like an Evertz Overture system, a brand new IP system with a lot of great features, but one that for closed captioning really only provides a telnet-style input. So if you want to get Lexi and iCap into that, our Alta software is probably the most widely-used way to do it.
Regina: One question that keeps on coming up is asking how the captions are being populated in the - in the webinar, so if you could just explain if there's a Zoom API and describe the signal flow for that.
Bill: Yeah, so there is - there is an SDK for Zoom on that. You know, caption - third-party caption partners can send their text into Zoom and we're doing that through an integration with Lexi and with our Falcon software, which actually delivers the captions to Zoom.
If you're the event organizer, basically you can click in the closed caption menu and you'll - you'll see a link, and that link is unique for each meeting, and the link is where you send the closed captions to. So you enter that into Falcon as your caption destination and Falcon can then send that and you can use - you can use, again, any source of iCap captioning with that, and Falcon serves as the connector over Zoom.
Regina: For a potential client who is a small market television station, which product or products would be best suited to support closed captioning services for them?
Bill: Most -
John: Sure, I can -
Bill: Yeah, yeah sure, John, go.
John: Yeah, I mean, definitely it would be one of our SDI encoders. If you're looking at a pure automated captioning workflow, we have an encoder specifically designed for that, which can be outfitted with an optional modem and does feature TCP/IP telnet connectivity. It's fully compatible with Lexi; that's really what it was primarily designed for, so it definitely provides some cost savings versus the HD492, and it's kind of an apples-to-apples encoder to any other non-EEG, non-iCap encoder you might see. It's roughly half the cost of an HD492, but it's just a single input/output SDI encoder. Then you have the HD492, our flagship iCap encoder, which gives you the flexibility of using iCap to connect to live captioners as well as to Lexi, and that's a dual input/dual output SDI encoder with a decode output.
So definitely a little bit more robust piece of gear there. In terms of what's right for you, you know, there's a couple of things to take in consideration with that, you know, cost, kind of the flexibility to connect to a live captioner, you know, where else you might be sourcing your captions from. Both of those units I mentioned, the EN537 and the HD492, they can take in captions via serial ports. If you're working with a teleprompter feed or something like that, those would both - those would both work for you. They're both compatible with Lexi.
So it's really do you need the flexibility of iCap and, you know, will there be some scenarios where you could use that secondary output. So, you know, those would kind of be the two best options and, you know, I'm happy to have a conversation offline about, you know, some specifics of what you're doing and what your objectives are for, you know, getting a new closed caption encoder or adding a closed caption encoder. I hope that answers the question.
Regina: Jason is asking if Vimeo is a good tool to use for closed captioning, saying that there are sometimes issues with the hardware. Could you speak to using Vimeo for closed captioning?
Bill: I've worked on that with a number of clients. I'm assuming the question refers primarily to the Livestream product, which is now essentially the live component of Vimeo, so that's what I'll speak to. We've had success with it, but one of the challenges we've seen - and I may stumble a bit describing another company's tiers and what they're called - is that the very basic Livestream plan is more of a closed ecosystem. You can use the Livestream software to upload directly to Livestream, but if you want to use something like Falcon, there's no RTMP pass-through for a third-party source. And if you want to bring in SDI video with ancillary data, you have to make sure you have the SDI product that uploads directly to Vimeo.
A lot of those restrictions go away when you have what I believe they call the enterprise account; at that point you can do a lot more in terms of taking in third-party RTMP streams and using third-party hardware. Once you've gotten the captions in there and you have the right integrations, the playback of the captions has been good in my experience, so I think it's a pretty good system. The caveat is that users on the lower-priced accounts may have far fewer options for how they can use captioning.
Regina: Ellen says there seems to be a lot of conflicting information from the FCC around what it means to be ADA compliant, for things like live events, internet distribution, etc. For closed captioning, what absolutely must be there, as opposed to what's just nice to have for viewers?
Bill: Yeah, that's a great question, and it can be very confusing - and I'll note this is a technical webinar, not a legal one. But broadly, there is video programming that is under the mandate of the FCC in the United States: basically anything on over-the-air TV or cable TV, plus content presented online that is a simulcast or replay of something aired on television in the conventional sense.
The FCC has fairly specific requirements for what captioning should be like, broken down into rules for live programming and rules for post-produced content like episodic shows and movies. The rules are quite specific: they tell you to be accurate, they give rules for positioning and timing, and they say that essentially all of the material has to be captioned, including sound effects. It's specific guidance with defined exceptions, so it's a comparatively easy field to understand. When you move beyond FCC-regulated programming into the broader area of the ADA - for events and things like that - what's necessary can become very case-specific.
There are guidelines from many organizations that vary state by state. For example, if you're a public entity - a publicly funded school, or a government or municipal agency - there is often specific guidance for doing captions. If you're a private company, especially a small or medium-sized one, there's often very little specific guidance, and it becomes important to understand your audience's needs and work with them on what's practical, because there really isn't a single document or source of law with a one-size-fits-all approach to captioning and accessibility for every business situation, every meeting of every size, every type of scenario.
So the ADA side isn't really covered by an existing body of administrative law. It comes down more to this: when people are disadvantaged by a lack of services, the ultimate threat is that they will sue, or threaten to sue, the organization - and clearly you want to head that off before it reaches that point.
There's a concept in the ADA of reasonable access, and what's reasonable for a given situation can obviously span a very broad range, so it's a complicated subject. We work with clients to talk through what we've seen other organizations of a similar size do, and in the end it sometimes makes sense to get your counsel involved - asking what they know about the local laws in your situation and what they're comfortable going forward with.
Regina: Can you overlay the captioning over the stills or video on the AV610, as opposed to placing it above or below?
Bill: Yes. If you set the picture to full screen, you can still position the captions at the top or bottom, over the full-screen picture. Presumably in that case you'd want a still image with a protected area for the captions, so they aren't blocking a part of the image you want people to see.
But you can use it that way, and of course you can also use it with an input video signal, like a more conventional closed caption decoder: just run the input video through and the captions will overlay the video without using the scaling function. That's another option.
Regina: Alright, that looks like all of the questions we've gotten today, so I'd like to thank everybody again for attending today's webinar, Closed Captioning Solutions for A/V, Live Events, and Online Communications. If you have any questions, you can reach out to me at firstname.lastname@example.org, or to John at email@example.com. Thank you very much and take care.
Bill: Thank you.
John: Thank you, guys. Everybody, just one more thing to add: if you're curious about using Falcon, you can register for an eegcloud.tv account yourself and sign up for a trial. The output will be watermarked, but you can use Falcon that way if you're just interested in playing around with it. So be sure to check that out.
I'm typing my email right after Regina's here in the chat box - I actually sent the previous one to all panelists, which didn't do you any good. But here are the relevant emails, and like I said, if you're doing anything interesting with EEG right now that has changed because of the current situation, we'd love to hear about it. We'd love to put some eyeballs on the things you're doing for accessibility and highlight those in the context of EEG. We're always excited to hear those stories, so please let us know.
Everybody have a great week. Stay safe. Thanks so much for joining us!