Captioning FAQ

What is Closed Captioning?

Closed captions (sometimes simply called “captions”) are a textual representation of a video’s audio content. They are critical for viewers who are Deaf or have hearing loss, and they are also a great tool for improving the reading and listening skills of others. In addition to dialogue, they include audio cues such as sirens, doors slamming, or phones ringing, among many other things.

If you upload video to the web, and that video includes sound, you should always include a text alternative, such as captions. As an added bonus, since most captioning for the web relies on text, providing captions for your videos will ensure that they are indexed by search engines more quickly and accurately, meaning your video will reach more people. This text-based information about your video is usually called metadata.
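To make the idea of a text alternative concrete, here is a minimal sketch that builds a caption file in the WebVTT format commonly used for web video. The cue times and sample lines are purely illustrative, not taken from any real program:

```python
# A minimal sketch of generating a WebVTT caption file for a web video.
# Cue times and caption text below are illustrative examples only.

def format_timestamp(seconds: float) -> str:
    """Format a time in seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    hours, rem = divmod(int(seconds), 3600)
    minutes, secs = divmod(rem, 60)
    millis = int(round((seconds - int(seconds)) * 1000))
    return f"{hours:02d}:{minutes:02d}:{secs:02d}.{millis:03d}"

def build_webvtt(cues):
    """Build WebVTT text from a list of (start_seconds, end_seconds, text) cues."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{format_timestamp(start)} --> {format_timestamp(end)}")
        lines.append(text)
        lines.append("")  # blank line separates cues
    return "\n".join(lines)

# Note the bracketed audio cue, as described elsewhere in this FAQ.
captions = [
    (0.0, 2.5, "[phone ringing]"),
    (2.5, 5.0, "Are you going to answer that?"),
]
print(build_webvtt(captions))
```

Because the resulting file is plain text, search engines can index its contents alongside the video, which is the metadata benefit described above.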

What is the difference between subtitles and closed captions?

Closed captions are hidden in the video signal and have to be turned on to be seen, while subtitles are always visible. Closed captions also contain all the audible information (i.e., sound) necessary to fully understand a video’s content. Subtitles generally display only the spoken words, as when dialogue is translated into a language other than the video’s original language.

What is the difference between open and closed captions?

Open captions are part of the video itself and cannot be turned off. Closed captions can be activated or deactivated at the viewer’s discretion.

Why are there so many names for essentially the same thing?

People generally refer to any writing at the bottom of their screen as captions. However, it is important to note the differences and options, because the standards required by the FCC may differ from those required for compliance with the Americans with Disabilities Act, from those of a distributor such as Netflix, and from those of other countries where you may plan to distribute your content.

What are the different styles of closed captioning?

There are two major styles of captions currently being used in the industry: pop-on and roll-up.

Pop-on: Pop-on captions are usually one or two lines of captions that appear onscreen and remain visible for one to several seconds before they disappear. A few frames of media are left without captions before the next line(s) of captions “pop on.” Pop-on captions also use placement to help indicate speaker changes, and are generally formatted as complete sentences or thoughts.

Roll-up: Roll-up captions scroll up the screen and are generally used with live programming. Double chevrons (>>) precede captions to indicate a change of speaker, while triple chevrons (>>>) usually indicate a topic change. Captions “roll up” through a window of about three lines: the top line disappears as a new bottom line is added, allowing new lines of captions to scroll up continuously.
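The roll-up behavior described above can be sketched as a simple three-line buffer. The chevron conventions follow this FAQ’s description; the sample newscast dialogue is invented for illustration:

```python
# A minimal sketch of a roll-up caption window, assuming the three-line
# display and chevron conventions described above. Dialogue is illustrative.

from collections import deque

class RollUpCaptions:
    def __init__(self, max_lines: int = 3):
        # Only the newest `max_lines` lines remain on screen at once.
        self.window = deque(maxlen=max_lines)

    def add_line(self, text: str, new_speaker: bool = False, new_topic: bool = False):
        if new_topic:
            text = ">>> " + text   # triple chevron: topic change
        elif new_speaker:
            text = ">> " + text    # double chevron: speaker change
        self.window.append(text)   # oldest line "rolls" off the top
        return list(self.window)

caps = RollUpCaptions()
caps.add_line("Good evening.", new_speaker=True)
caps.add_line("Here is tonight's top story.")
caps.add_line("Thanks, Dan.", new_speaker=True)
print(caps.add_line("In weather news...", new_topic=True))
# The first line has rolled off; only the three newest lines remain.
```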

What are the benefits of having captions?

  • Captions afford viewers who are Deaf or have hearing loss greater access to televised programming, while offering the producer a much larger viewing audience. There are currently over one million Deaf people in the United States, and over 28 million people affected by hearing loss.
  • Captions help children with word identification, meaning, acquisition, and retention.
  • Reading captions motivates viewers to read more and read more often.
  • Captions can help children establish a systematic link between the written word and the spoken word.
  • Pre-readers, by becoming familiar with captions, will have familiar signposts when they begin reading print-based material.
  • Captioning has been related to higher comprehension skills when compared to viewers watching the same media without captions.
  • Children who have a positive experience in reading will want to read; reading captions provides such an experience.
  • Reading is a skill that requires practice, and practice in reading captions is practice with authentic text.
  • Captions provide missing information for individuals who have difficulty processing speech and auditory components of the visual media (regardless of whether this difficulty is due to a hearing loss or a cognitive delay).
  • Students often need assistance in learning content-relevant vocabulary (in biology, history, literature, and other subjects), and with captions they see both the terminology (printed word) and the visual image.
  • Captioning is essential for children who are deaf and hard of hearing, can be very beneficial to those learning English as a second language, can help those with reading and literacy problems, and can help those who are learning to read.

How are closed captions incorporated in your program?

Closed captioning information is encoded within the video signal, in line 21 of the vertical blanking interval (VBI). The text only becomes visible with the use of a decoder, which is built into your television set or available as a set-top box for older tube televisions. In general, an onscreen menu on newer televisions allows you to turn closed captioning on or off, and most newer TV remotes have a dedicated closed captioning button.

Most programs are captioned in advance of transmission, but the nature of some programs, such as live news broadcasts, requires real-time captioning. For real-time captioning, a real-time captioner listens to the broadcast and either types the show on a stenograph machine, like a court reporter, or re-speaks the audio using special software. That signal is sent to the television station’s closed captioning encoder, where it becomes embedded in the video of the broadcast. That is why there is usually a bit of a delay in live broadcasts between the captions and the program.

According to the Television Decoder Circuitry Act of 1990, all televisions made in the United States since 1993 must have a built-in caption decoder if their picture tubes are larger than 13 inches. In July 2000, the Federal Communications Commission (FCC) incorporated sections of industry standard EIA-708-B, “Digital Television (DTV) Closed Captioning,” into its broadcast regulations. These rules make it possible for users to select the size, color, and font of their captions and to select among multiple caption streams, choosing, for example, a particular language.

What is the difference between Offline/Post production and Real-time Captioning?

The difference is essentially live vs. pre-recorded. Real-time captioning is performed at the same time the broadcast is aired: a captioner is linked directly to the station, and words are captioned as they are spoken. Offline captioning is done after the media is recorded; the captions are meticulously edited before they are used to create a new captioned master, which then goes to air.

What are the guidelines for closed captioning?

It is important that the captions be (1) synchronized, appearing at approximately the same time as the audio; (2) verbatim when time allows, or as close as possible; (3) equivalent and equal in content; and (4) accessible and readily available to those who need or want them. These are standards the FCC requires. For more information about why these standards were put in place, you can view the FCC’s Report and Order that established the guidelines, released on February 24, 2014. Page 21 has the breakdown of why each area matters.

The most important thing about captions is that, when they appear on the screen, they are in an easy-to-read format, commonly referred to as “readability.” Good captions adhere to the following guidelines when possible:

  • Captions appear on-screen long enough to be read.
  • It is preferable to limit on-screen captions to no more than two lines.
  • Captions are synchronized with spoken words.
  • Speakers are identified, through line placement or speaker identifiers, when more than one person is on-screen or when the speaker is not visible.
  • Punctuation is used to clarify meaning.
  • Spelling is correct throughout the production.
  • Sound effects are written when they add to understanding.
  • All words are captioned as spoken, regardless of language, dialect, or grammatical accuracy.
  • Use of slang and accent is preserved and identified.

What are the current laws regarding closed captioning?

The principal laws mandating closed captioning in America are the Telecommunications Act of 1996 (the Telecomm Act) and Section 508 of the Rehabilitation Act.

Following the Telecomm Act, the Federal Communications Commission (FCC) issued regulations requiring video program distributors (broadcasters, cable operators and satellite distributors) to gradually phase in closed captioning of their television programs. The schedule set by the FCC distinguishes between “new” programming (analog programming first shown after January 1, 1998, and digital programming first shown after July 1, 2002), and “pre-rule” programming (analog and digital programming first shown prior to such dates).

In addition, there is a separate schedule for Spanish language programming. Currently, 75% of all new, and 30% of all pre-rule, English-language programming must be captioned. The caption requirement for English-language programming increases to 100% on January 1, 2006 for new programming, and to 75% on January 1, 2008 for pre-rule programming. Spanish-language programming is being phased in on a later schedule with 50% of the new, and 30% of pre-rule, programming currently required to be captioned. Captioning of new Spanish-language programming increases to 75% in 2007, and to 100% in 2010, while captioning of pre-rule programming increases to 75% in 2012.

The FCC has exempted certain programming from their captioning requirements entirely, including most programming shown between 2:00 a.m. and 6:00 a.m., advertisements under five minutes in length, public service announcements shorter than 10 minutes (unless they are federally-funded or produced), and programming provided by distributors with less than $3 million in annual gross revenues.

For more detailed information on the FCC’s regulations governing captioning, please visit: http://www.fcc.gov/cgb/consumerfacts/closedcaption.html

Section 508 of the Rehabilitation Act, coupled with the Workforce Investment Act of 1998, requires all electronic and information technology provided by Federal agencies to be accessible to people with disabilities, including employees and the general public. This means that all informational and training videos and other multimedia productions developed, procured, maintained, or used by any Federal agency must be open or closed captioned to provide access to the deaf and hard-of-hearing.

For more detailed information on Section 508 please visit: http://www.section508.gov

How are sound effects and music identified in a captioned program?

Descriptions of non-verbal sound effects and music can greatly enhance the narrative of a captioned program. Because these are not words contained in the audio, they are generally distinguished by brackets or parentheses. A sound-effect caption can also indicate the source of a sound, or describe the way in which something is spoken, by its placement in front of regular captioned text. These audio cues are crucial: a phone ringing or a doorbell sounding can each prompt a person onscreen to say, “Are you going to answer that?”

Why are Real-time captions delayed?

Part of the FCC Caption Quality Best Practices is that closed captioning must be synchronous with the program audio, but must also be on screen long enough to be read completely. With real-time captioning, captions are usually 5–9 seconds behind: the captioner takes time to listen and “write” what they’re hearing (2–3 seconds), captions are transmitted to the networks (1 second), and captions are encoded into the video transmission signal (4–5 seconds). Offline/post-production captioning shouldn’t have any delay at all, and should appear onscreen synchronously with program audio. If you’re noticing a delay in closed captioning significant enough to hinder your understanding of the program, this could be a transmission issue with your video programming distributor: your cable provider, broadcaster, or satellite provider. Per FCC rules, distributors must pass captions through and make sure they are passing through correctly. If you are experiencing closed captioning issues, such as garbled captions or delays, you can contact your TV provider or the TV station to work with them on why you are experiencing this. The FCC outlines this process here.