1     Scope of Television Media Analysis

2     Reference Model of Television Media Analysis

3     I/O Data of Television Media Analysis

4     Functions of AI Modules of Television Media Analysis

5     I/O Data of AI Modules of Television Media Analysis

6     AIWs, AIMs, and JSON Metadata of Television Media Analysis

1      Scope of Television Media Analysis

Television Media Analysis (OSD-TMA) produces Audio-Visual Event Descriptors in the form of a set of significant Audio-Visual Scene Descriptors that include Audio, Visual, or Audio-Visual scene changes, the IDs of speakers and faces with their spatial positions, and the text of the utterances of a video program provided as input. The set of Audio-Visual Scene Descriptors is packaged in Audio-Visual Event Descriptors.
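
An informative sketch of this packaging is given below; all field names and values are illustrative assumptions made for the example, not the normative JSON schema.

```python
import json

# All field names below are assumptions made for this example; the
# normative JSON schema is defined elsewhere in the specification.
scene_descriptor = {
    "sceneId": "scene-0001",
    "startTime": "00:00:12.040",
    "endTime": "00:00:27.800",
    "faces": [{"faceId": "face-03", "boundingBox": [120, 64, 320, 280]}],
    "speakers": [{"speakerId": "speaker-02", "text": "Good evening and welcome."}],
}

audio_visual_event_descriptors = {
    "program": {"title": "Evening News", "date": "2024-05-01"},
    "scenes": [scene_descriptor],  # the packaged set of Scene Descriptors
}

print(json.dumps(audio_visual_event_descriptors, indent=2))
```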

2      Reference Model of Television Media Analysis

Figure 1 depicts the Reference Model of TV Media Analysis.

Figure 1 – Reference Model of OSD-TMA

3      I/O Data of Television Media Analysis

Table 1 provides the input and output data of the TV Media Analysis Use Case:

Table 1 – I/O Data of Television Media Analysis

Input	Descriptions
Audio-Video	Audio-Video to be analysed.
Output	Descriptions
Audio-Visual Event Descriptors	Resulting analysis of the Input Audio-Video.

4      Functions of AI Modules of Television Media Analysis

Table 2 provides the functions of the AI Modules of the TV Media Analysis Use Case. Note that processing proceeds asynchronously: e.g., the TV Splitter separates audio and video for the entire duration of the file and then passes on the complete audio and video files.

Table 2 – Functions of AI Modules of Television Media Analysis

AIM Function
TV Splitter 1.     Receives Audio-Video:

a.     An audio-video file.

b.     Metadata (e.g., title, date).

2.     Produces:

a.     Video file.

b.     Audio file.

3.     When the files covering the full duration of the video are ready, the TV Splitter informs the following AIMs.
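
An informative sketch of the TV Splitter function, assuming ffmpeg is installed and on PATH; file names and codec choices are illustrative.

```python
import subprocess

def split_av(input_path: str) -> tuple[str, str]:
    """Demux one audio-video file into a video file and an audio file
    covering their entire duration, using ffmpeg."""
    video_out, audio_out = "video.mp4", "audio.aac"
    # -an drops the audio track, -vn drops the video track;
    # -c copy keeps the original codecs (the .aac name assumes AAC audio).
    subprocess.run(["ffmpeg", "-y", "-i", input_path, "-an", "-c", "copy", video_out], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", input_path, "-vn", "-c", "copy", audio_out], check=True)
    return video_out, audio_out  # downstream AIMs are informed once both exist
```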

Visual Change Detection 1.     Receives Video file.

2.     Iteratively

a.     Looks for a video frame whose scene has changed from the preceding scene (detection depends on a threshold).

b.     Assigns a video clip identifier to the video clip.

c.     Produces a set of images with StartTime and EndTime, each comprising:

i.     An image.

ii.     A time stamp.
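
An informative sketch of threshold-based visual change detection, assuming OpenCV; the histogram-correlation criterion and the threshold value are illustrative choices, not the specified algorithm.

```python
import cv2

def detect_scene_changes(video_path: str, threshold: float = 0.6):
    """Flag frames whose colour histogram correlates poorly with the
    preceding frame's; each flagged frame starts a new video clip."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    changes, prev_hist, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Detection depends on this threshold, as noted in step 2.a above.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                changes.append((frame_idx, frame_idx / fps))
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return changes  # (frame index, start time in seconds) of each new clip
```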

TV Diarisation 1.     Receives Audio file.

2.     Iteratively detects speaker change.

a.     For each audio segment (from one change to the next):

i.     Detects the presence of speech.

ii.     Assigns a speech segment ID and an anonymous Speaker ID (i.e., the identity is unknown) to the segment.

iii.     Decides whether:

1.     The existing speaker has stopped.

2.     A new speaker has started a speech segment.

iv.     If a speaker has started a speech segment:

1.     Assigns a new speech segment ID.

2.     Checks whether the speaker is new or already present in the session.

3.     If already present, retains the old anonymous Speaker ID.

4.     If new, assigns a new anonymous Speaker ID.

b.     Produces a series of audio sequences each of which contains:

i.     A speech segment.

ii.     Start and end time.

iii.     Anonymous Speaker ID.

iv.     Overlay information.
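
An informative example of how such anonymous speaker labels can be obtained off the shelf, assuming the pyannote.audio package and access to its pretrained pipeline:

```python
# Requires the pyannote.audio package and a Hugging Face access token
# granting access to the pretrained pipeline named below.
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
diarization = pipeline("audio.aac")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    # speaker is an anonymous label such as "SPEAKER_00": the identity
    # is unknown, as in step 2.a.ii above.
    print(f"{turn.start:.2f}s - {turn.end:.2f}s: {speaker}")
```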

Face Identity Recognition 1.     Receives a set of images per video clip.

2.     For each image, identifies the bounding boxes of the faces.

3.     Extracts faces from the bounding boxes.

4.     Extracts the embeddings that represent a face.

5.     Compares the embeddings with those stored in the face recognition database.

6.     Associates the embeddings with a Face ID.
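
An informative sketch of steps 4 to 6, assuming face embeddings have already been extracted by an upstream model; the cosine-similarity criterion and threshold are illustrative.

```python
import numpy as np

def identify_face(embedding: np.ndarray, database: dict[str, np.ndarray],
                  threshold: float = 0.6) -> str | None:
    """Compare a face embedding against the face recognition database by
    cosine similarity; return the best-matching Face ID, or None if
    nothing in the database is close enough."""
    embedding = embedding / np.linalg.norm(embedding)
    best_id, best_sim = None, threshold
    for face_id, reference in database.items():
        sim = float(np.dot(embedding, reference / np.linalg.norm(reference)))
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    return best_id
```

The same matching logic, applied to voice embeddings and a speaker recognition database, equally illustrates the Speaker Identity Recognition AIM below.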

Speaker Identity Recognition 1.     Receives a speech segment and Overlay information.

2.     Extracts the embeddings that represent the speech segment.

3.     Compares the embeddings with those stored in the speaker recognition database.

4.     Associates the embeddings with a Speaker ID.

Audio-Visual Alignment 1.     Receives:

a.     Face ID.

b.     Bounding Box.

c.     Face Time.

d.     Speaker ID.

e.     Speaker Time.

2.     Associates Speaker ID and Face ID.
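
An informative sketch of the association, assuming it is driven by the temporal overlap between on-screen face intervals and speech segments; the data layout and threshold are illustrative.

```python
def associate_ids(face_tracks, speech_segments, min_overlap=0.5):
    """Associate each Speaker ID with the Face ID whose on-screen interval
    overlaps the speech segment the most.
    face_tracks: [(face_id, start, end)];
    speech_segments: [(speaker_id, start, end)]."""
    pairs = {}
    for speaker_id, s_start, s_end in speech_segments:
        best_face, best_overlap = None, 0.0
        for face_id, f_start, f_end in face_tracks:
            overlap = max(0.0, min(s_end, f_end) - max(s_start, f_start))
            if overlap > best_overlap:
                best_face, best_overlap = face_id, overlap
        # Only accept the association if the face covers enough of the speech.
        if best_face is not None and best_overlap / (s_end - s_start) >= min_overlap:
            pairs[speaker_id] = best_face
    return pairs  # e.g. {"speaker-02": "face-03"}
```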

Automatic Speech Recognition 1.     Receives speech segments.

2.     Produces the transcription of the speech payload.

3.     Attaches time stamps to specific portions of the transcription.
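
An informative example of time-stamped transcription, assuming the openai-whisper package; the model size and file name are illustrative.

```python
# Requires the openai-whisper package; "base" and "audio.aac" are examples.
import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.aac")
for segment in result["segments"]:
    # Each portion of the transcription carries its own time stamps.
    print(f'{segment["start"]:.2f}s - {segment["end"]:.2f}s: {segment["text"]}')
```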

Audio-Visual Scene Description 1.     Receives:

a.     Bounding box coordinates, Face ID, and time stamps.

b.     Speaker ID and time stamps.

c.     Text and time stamps.

2.     Reconciles Face ID and Speaker ID.

3.     Produces Audio-Visual Scene Descriptors as a JSON multiplexing the input data.
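
An informative sketch of the multiplexing step with assumed, non-normative field names; id_pairs stands for the Speaker ID to Face ID association produced by Audio-Visual Alignment.

```python
import json

def build_scene_descriptor(scene_id, faces, speakers, transcript, id_pairs):
    """Multiplex the received inputs into one JSON object. id_pairs maps
    Speaker IDs to Face IDs as produced by Audio-Visual Alignment."""
    for speaker in speakers:
        # Reconcile the audio and visual identities of the same person.
        speaker["faceId"] = id_pairs.get(speaker["speakerId"])
    descriptor = {
        "sceneId": scene_id,
        "faces": faces,            # bounding boxes, Face IDs, time stamps
        "speakers": speakers,      # Speaker IDs and time stamps
        "transcript": transcript,  # text and time stamps
    }
    return json.dumps(descriptor)
```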

Audio-Visual Event Description 1.     Receives Audio-Visual Scene Descriptors.

2.     Produces Audio-Visual Event Descriptors.

5      I/O Data of AI Modules of Television Media Analysis

Table 3 provides the I/O Data of the AI Modules of the TV Media Analysis Use Case.

Table 3 – I/O Data of AI Modules of Television Media Analysis

AIM	Receives	Produces
TV Splitter	Audio-Video	Audio, Video
Visual Change Detection	Video	Image
TV Diarisation	Audio	Speech Segments, Overlay
Face Identity Recognition	Image	Face ID, Bounding Box
Speaker Identity Recognition	Speech Segments	Speaker ID
Automatic Speech Recognition	Speech Segments, Overlay	Text
Audio-Visual Scene Description	Bounding box coordinates, Face ID, scene time stamps, Speaker ID, speech time stamps, recognised Text with time stamps	AV Scene Descriptors with reconciled Face and Speaker IDs
Audio-Visual Event Description	Audio-Visual Scene Descriptors	AV Event Descriptors

6      AIWs, AIMs, and JSON Metadata of Television Media Analysis

Table 4 – AIWs, AIMs, and JSON Metadata

AIW	AIM	Name	JSON
OSD-TMA		Television Media Analysis	X
	OSD-TVS	Television Splitter	X
	OSD-VCD	Visual Change Detection	X
	MMC-TVD	TV Diarisation	X
	PAF-FIR	Face Identity Recognition	X
	MMC-SIR	Speaker Identity Recognition	X
	OSD-AVA	Audio-Visual Alignment	X
	MMC-ASR	Automatic Speech Recognition	X
	OSD-AVE	Audio-Visual Event Description	X
	OSD-AVS	Audio-Visual Scene Description	X