FAQ

 

Captions (sometimes called “subtitles”) are the textual representation of a video’s soundtrack. They are critical for viewers who are deaf or hard of hearing, and they are also a great tool for improving the reading and listening skills of others.

If you upload video to the Web, and that video includes sound, you should always include a text alternative, such as captions. As an added bonus, because most captioning for the Web is text-based, providing captions helps search engines index your videos more quickly and accurately, so your video will reach more people.
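One widely used text format for Web captions is WebVTT, which browsers load through the HTML `<track>` element. As a minimal sketch (the cue text and timings below are invented for illustration, and `to_webvtt` is a hypothetical helper, not part of any standard library), a caption file can be generated like this:

```python
def to_webvtt(cues):
    """Render (start, end, text) cues, with times in seconds, as WebVTT text."""
    def ts(seconds):
        # WebVTT timestamps use HH:MM:SS.mmm
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{ts(start)} --> {ts(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

vtt = to_webvtt([
    (0.0, 2.5, "Welcome to the show."),
    (2.5, 5.0, "[audience applauding]"),
])
print(vtt)
```

The resulting file would then be referenced from the page, e.g. with `<track kind="captions" src="captions.vtt" srclang="en">` inside a `<video>` element.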


  • Captions afford Deaf and Hard-of-Hearing viewers greater access to televised programming, while offering the producer a much larger viewing audience. There are currently over one million Deaf people in the United States, and over 28 million people affected by hearing loss.
  • Captions help children with word identification, meaning, acquisition, and retention.
  • Reading captions motivates viewers to read more and read more often.
  • Captions can help children establish a systematic link between the written word and the spoken word.
  • Pre-readers, by becoming familiar with captions, will have familiar signposts when they begin reading print-based material.
  • Captioned viewing has been linked to higher comprehension, compared with watching the same media without captions.
  • Children who have a positive experience in reading will want to read; reading captions provides such an experience.
  • Reading is a skill that requires practice, and practice in reading captions is practice with authentic text.
  • Captions provide missing information for individuals who have difficulty processing speech and auditory components of the visual media (regardless of whether this difficulty is due to a hearing loss or a cognitive delay).
  • Students often need assistance in learning content-relevant vocabulary (in biology, history, literature, and other subjects), and with captions they see both the terminology (printed word) and the visual image.
  • Captioning is essential for children who are deaf and hard of hearing, can be very beneficial to those learning English as a second language, can help those with reading and literacy problems, and can help those who are learning to read.


Closed captioning information is encoded within the video signal, in line 21 of the vertical blanking interval (VBI). The text only becomes visible with the use of a decoder, which may be built into a television set or available as a set-top box. In general, an onscreen menu on newer televisions allows you to turn closed captioning on or off.
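The line-21 data stream carries two bytes per video field, each a 7-bit character value protected by an odd-parity bit in the most significant position (per the CEA-608 standard). A decoder validates the parity and strips that bit; a minimal sketch of that step in Python (the function name is my own, not from any standard API):

```python
def decode_line21_byte(byte):
    """Return the 7-bit character value if the byte has valid odd parity, else None."""
    if bin(byte).count("1") % 2 != 1:
        return None  # parity error: a decoder typically blanks or ignores the byte
    return byte & 0x7F  # strip the parity bit

# 'H' (0x48) has an even number of set bits, so it is sent with the parity bit set: 0xC8
assert chr(decode_line21_byte(0xC8)) == "H"
# 'I' (0x49) already has an odd number of set bits, so it is sent as-is
assert chr(decode_line21_byte(0x49)) == "I"
```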

Most programs are captioned in advance of transmission, but the nature of some programs, such as live news broadcasts, requires real-time captioning. For real-time captioning, a stenographer listens to the broadcast and types a shorthand version into a program that converts the shorthand into captions and adds that data to the television signal.
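Conceptually, the conversion program maps each shorthand stroke to English text through a steno dictionary. The strokes and dictionary below are invented toy placeholders, not real steno theory, but they illustrate the lookup step:

```python
# Hypothetical stroke-to-text dictionary; a real steno dictionary maps chorded
# key combinations to words and holds tens of thousands of entries.
steno_dict = {
    "STROKE-1": "breaking",
    "STROKE-2": "news",
    "STROKE-3": "tonight",
}

def translate(strokes):
    """Convert a sequence of strokes into caption text, flagging unknown strokes."""
    return " ".join(steno_dict.get(s, "[?]") for s in strokes)

print(translate(["STROKE-1", "STROKE-2", "STROKE-3"]))  # breaking news tonight
```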

According to the Television Decoder Circuitry Act of 1990, all televisions made in the United States since 1993 must have a built-in caption decoder if their picture tubes are larger than 13 inches. In July 2000, the Federal Communications Commission (FCC) incorporated sections of the industry standard EIA-708-B, “Digital Television (DTV) Closed Captioning,” into its broadcast regulations. The new rules make it possible for users to select the size, color, and font of their captions and to choose among multiple caption streams, selecting, for example, a particular language.


Open captions are an integral part of a transmission that cannot be turned off by the viewer.


There are two major styles of captions currently being used in the industry: pop-on and roll-up.

Pop-on

Pop-on captions are usually one or two lines of captions that appear onscreen and remain visible for one to several seconds before they disappear. A few frames of media are left without captions before the next line(s) of captions “pop-on.”

Roll-up

Roll-up captions are usually verbatim and synchronized. Captions follow double chevrons (which look like “greater than” symbols: >>), used to indicate a change of speaker. The captions occupy up to about three lines: the top line disappears as a new bottom line is added, allowing new lines of captions to roll up continuously.
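The roll-up behavior described above amounts to a fixed-depth window over the incoming caption lines. A sketch of that mechanic, using a three-line window (the caption text here is invented):

```python
from collections import deque

def roll_up(lines, depth=3):
    """Yield the visible caption window after each new line rolls up."""
    window = deque(maxlen=depth)
    for line in lines:
        window.append(line)  # the new bottom line pushes the oldest top line out
        yield list(window)

states = list(roll_up([
    ">> Good evening.",
    "Our top story tonight:",
    "heavy rain is expected",
    "across the region.",
]))
print(states[-1])  # the last three lines remain visible
```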


Although closed captions (CCs) and subtitles look similar, they’re designed for two different purposes. Subtitles provide a text alternative for the dialogue of video footage – the spoken words of characters, narrators and other vocal participants.

Closed captions, on the other hand, transcribe not only the dialogue but also other relevant parts of the soundtrack – describing background noises, phones ringing, and other audio cues that need describing.

Essentially, subtitles assume an audience can hear the audio, but need the dialogue provided in text form as well. Meanwhile, closed captioning assumes an audience cannot hear the audio and needs a text description of what they would otherwise be hearing.


The difference is essentially live vs. pre-recorded. Real-time captioning is performed at the same time the broadcast is aired. A captioner is linked directly to the station and words are captioned as they are spoken. Offline captioning is done after the media is recorded, and meticulously edited before it is used to create a new captioned master, which then goes to air.


It is important that the captions be (1) synchronized and appear at approximately the same time as the audio is available; (2) verbatim when time allows, or as close as possible; (3) equivalent and equal in content; and (4) accessible and readily available to those who need or want them.

The most important thing about captions is that, when they appear on the screen, they are in an easy-to-read format. Good captions adhere to the following guidelines when possible:

  • Captions appear on-screen long enough to be read.
  • It is preferable to limit on-screen captions to no more than two lines.
  • Captions are synchronized with spoken words.
  • Speakers are identified, using line placement or speaker identifiers, when more than one person is on-screen or when the speaker is not visible.
  • Punctuation is used to clarify meaning.
  • Spelling is correct throughout the production.
  • Sound effects are written when they add to understanding.
  • All actual words are captioned, regardless of language or dialect.
  • Use of slang and accent is preserved and identified.
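Several of these guidelines can be checked mechanically. A sketch of such a check (the two-line limit and the roughly 20 characters-per-second reading rate are illustrative thresholds I have chosen, not official rules):

```python
def caption_issues(text, start, end, max_lines=2, max_cps=20.0):
    """Flag cues that break the line-count or reading-speed guidelines."""
    issues = []
    if len(text.split("\n")) > max_lines:
        issues.append("too many lines")
    duration = end - start
    if duration <= 0 or len(text) / duration > max_cps:
        issues.append("on screen too briefly to be read")
    return issues

print(caption_issues("Hello there.", 0.0, 1.0))            # []
print(caption_issues("A long caption line here", 0.0, 0.5))  # flagged as too brief
```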


The principal laws mandating closed captioning in America are the Telecommunications Act of 1996 (the Telecomm Act) and Section 508 of the Rehabilitation Act.

Following the Telecomm Act, the Federal Communications Commission (FCC) issued regulations requiring video program distributors (broadcasters, cable operators and satellite distributors) to gradually phase in closed captioning of their television programs. The schedule set by the FCC distinguishes between “new” programming (analog programming first shown after January 1, 1998, and digital programming first shown after July 1, 2002), and “pre-rule” programming (analog and digital programming first shown prior to such dates).

In addition, there is a separate schedule for Spanish language programming. Currently, 75% of all new, and 30% of all pre-rule, English-language programming must be captioned. The caption requirement for English-language programming increases to 100% on January 1, 2006 for new programming, and to 75% on January 1, 2008 for pre-rule programming. Spanish-language programming is being phased in on a later schedule with 50% of the new, and 30% of pre-rule, programming currently required to be captioned. Captioning of new Spanish-language programming increases to 75% in 2007, and to 100% in 2010, while captioning of pre-rule programming increases to 75% in 2012.

The FCC has exempted certain programming from their captioning requirements entirely, including most programming shown between 2:00 a.m. and 6:00 a.m., advertisements under five minutes in length, public service announcements shorter than 10 minutes (unless they are federally-funded or produced), and programming provided by distributors with less than $3 million in annual gross revenues.

For more detailed information on the FCC’s regulations governing captioning, please visit: http://www.fcc.gov/cgb/consumerfacts/closedcaption.html

Section 508 of the Rehabilitation Act, coupled with the Workforce Investment Act of 1998, requires all electronic and information technology provided by Federal agencies to be accessible to people with disabilities, including employees and the general public. This means that all informational and training videos and other multimedia productions developed, procured, maintained, or used by any Federal agency must be open or closed captioned to provide access to the deaf and hard-of-hearing.

For more detailed information on Section 508 please visit: http://www.section508.gov


Descriptions of non-verbal sound effects and music can greatly enhance the narrative of a captioned program. Because these are not words contained in the audio, they are generally distinguished by brackets or parentheses. A sound-effect caption can also indicate the source of a sound, or describe the way in which something is spoken, by being placed in front of the regular captioned text.


Part of the FCC Caption Quality Best Practices is that closed captioning must be synchronous with the program audio, but must also be on screen long enough to be read completely. With real-time captioning, captions usually run 5-9 seconds behind the audio: the captioner needs time to listen to and “write” what they hear before the captions are transmitted to the networks and encoded into the video transmission signal. Offline/post-production captioning shouldn’t have any delay at all, and should appear onscreen in sync with the program audio. If you notice a delay significant enough to hinder your understanding of the program, it could be a transmission issue with your video programming distributor: your cable provider, broadcaster, or satellite provider. Per the FCC, distributors must pass captions through, and must make sure they are passing through correctly.
