
REPUBLIQUE DU CAMEROUN - REPUBLIC OF CAMEROON
PAIX TRAVAIL PATRIE - PEACE WORK FATHERLAND
MINISTERE DE L'ENSEIGNEMENT SUPERIEUR - MINISTRY OF HIGHER EDUCATION
INSTITUT SUPERIEUR DE SCIENCES ET TECHNOLOGY NANFAH - NANFAH HIGHER INSTITUTE OF SCIENCE AND TECHNOLOGY

SOUNDS AS A MULTIMEDIA TOOL

PRESENTED BY:
 DJOUKENG MOUNGANG LYSIE M.

 DONGMO DIFFO LANDRY O.

 TIOMOU LECKEU FRANCK R. J

 WEBING YETCHOM JEANNE

OPTION: NETWORK AND SECURITY, AND SOFTWARE ENGINEERING

LECTURER: Mr NKENFACK N. AURIOL


I.S SECURITY ENGINEER
SOUNDS

I. INTRODUCING SOUND AS A MULTIMEDIA TOOL


Multimedia elements combine more than one type of medium, typically in digital form, on devices such as computers, smartphones, audio players and other technology. These elements help the reader use sight, hearing and other senses to experience what they are reading.
Sound is meaningful speech in any language, from a whisper to a scream. It can provide the listening pleasure of music, the startling accent of special effects or the ambiance of a mood-setting background.
Audio is the digitized form of sound. It is one of the five basic elements in multimedia. It helps in multimedia applications, as it eases the delivery of their content by supplementing animation, images and videos.
The audible range of the human ear is 20 Hz to 20 kHz.
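As a brief illustrative aside, this audible range is why common digital audio sampling rates sit above 40 kHz: the Nyquist criterion requires sampling at more than twice the highest frequency to be reproduced. The short Python sketch below is only a back-of-envelope check of that relationship; the numbers are standard reference values, not measurements from this document.

# Back-of-envelope check of the sampling rate needed to cover human hearing.
# Nyquist criterion: the sampling rate must exceed twice the highest frequency.
highest_audible_hz = 20_000                 # upper limit of human hearing (20 kHz)
minimum_rate_hz = 2 * highest_audible_hz    # 40,000 Hz
print(f"Minimum sampling rate: {minimum_rate_hz} Hz")
print("CD audio uses 44,100 Hz, comfortably above this minimum.")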
II. Digital audio

1. Preparing a digital audio file


When preparing a digital audio file for distribution, whether it is a music release, a voice-over or any other form of audio content, there are several steps to consider:

 Format and quality selection: choose the appropriate audio format and quality settings based on the intended use of the audio file.
 File name: assign a clear and descriptive file name to the audio file, providing information about the content and relevant identifiers.
 Normalization and peak level adjustment: consider normalizing the audio to optimize its overall level and balance.
 Quality assurance and checks: perform comprehensive quality checks on the audio file to catch any technical issues such as clicks or pops.
 Exporting in the desired format: export the digital audio file in the desired format and quality settings.
 Backup and archiving: create a backup of the final audio file, storing it in a secure location. Consider archiving the project files and associated assets for future reference and potential re-edits.
 Documentation and distribution: document the specifics of the audio file, including its technical details, usage rights and any relevant information.
By following these steps, you can effectively prepare a digital audio file for distribution. Each step is critical in optimizing the audio for its intended use and ensuring that it maintains high quality throughout its lifecycle.
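As a minimal sketch of the preparation steps above (normalization, exporting in the desired format, and basic documentation), the Python example below uses the third-party pydub library; it assumes pydub and FFmpeg are installed, and the file names, bitrate and tags are hypothetical.

# Sketch: normalize a mix and export it for distribution (pydub + FFmpeg assumed).
from pydub import AudioSegment
from pydub.effects import normalize

audio = AudioSegment.from_file("final_mix.wav")    # hypothetical source file

# Normalization and peak level adjustment: bring the loudest peak near 0 dBFS,
# leaving about 1 dB of headroom.
normalized = normalize(audio, headroom=1.0)

# Export in the desired format and quality settings, with basic documentation tags.
normalized.export(
    "final_mix_master.mp3",                        # clear, descriptive file name
    format="mp3",
    bitrate="192k",
    tags={"title": "Final Mix", "artist": "Example Artist"},
)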
2. Types of digital audio file formats
The most popular digital audio file formats are AAC (Advanced Audio Coding), MP3 (an acronym for MPEG Audio Layer 3), WAV (Waveform Audio File Format), FLAC (Free Lossless Audio Codec) and WMA (Windows Media Audio). The two most common audio file formats are the MP3 format and the WAV format, and each has a valuable role to play in the world of digital audio. MP3 compresses audio while keeping a high perceived sound quality, whereas WAV is used for storing uncompressed audio data on Windows and in audio recording and processing. Each type has its own advantages and disadvantages; for example, MP3 files are smaller in size and therefore take up less space on your hard drive, but they are not as high quality as WAV files.
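To illustrate the size trade-off between WAV and MP3 described above, the sketch below converts a WAV file to MP3 and compares the resulting file sizes. It again assumes the third-party pydub library and FFmpeg are available; the file names are hypothetical and the exact ratio depends on the chosen bitrate.

import os
from pydub import AudioSegment

wav_path = "voiceover.wav"    # hypothetical uncompressed recording
mp3_path = "voiceover.mp3"

# Convert the uncompressed WAV to a lossy MP3 at a common bitrate.
AudioSegment.from_wav(wav_path).export(mp3_path, format="mp3", bitrate="128k")

wav_mb = os.path.getsize(wav_path) / 1_000_000
mp3_mb = os.path.getsize(mp3_path) / 1_000_000
print(f"WAV: {wav_mb:.1f} MB, MP3: {mp3_mb:.1f} MB")
# The MP3 is typically a fraction of the WAV size, at the cost of some quality.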
3. Editing digital recording
Digital recording is the process of converting sound or images into numbers. Audio editing is the process of altering recorded sound to create the desired effect. The process of audio editing generally involves editing the length of the audio file, adjusting the volume and making sure the different sound elements are balanced to suit your desired result.
When it comes to editing digital recordings, whether it is a podcast, music, a voice-over or any other form of audio content, below are some steps to consider; a short example sketch follows the list.

 Organizing your files: before you begin the editing process, ensure that all of your digital audio files are properly organized.
 Importing the audio: using a digital audio workstation such as Adobe Audition or Pro Tools, import the audio files into the software, creating individual tracks for each source or part of the recording.
 Trimming and cutting: this might involve removing long pauses, background noise or any segments that are irrelevant to the final content.
 Noise reduction and restoration: utilize noise reduction tools to eliminate background noise and any unwanted artifacts, such as clicks or pops.
 Effects and processing: if needed, add effects and additional processing to the audio to enhance its character.
 Review and quality check: after making all the necessary edits, review the entire composition to ensure that it flows smoothly without any jarring transitions.
 Exporting the final product: when you are satisfied with the editing, export the final audio in the appropriate format and quality settings for publishing or for use in a different context.
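As a rough sketch of the trimming, volume adjustment and smoothing steps in the list above, the Python example below again assumes the pydub library is installed; the file names, time ranges and gain values are arbitrary illustrations rather than recommendations.

from pydub import AudioSegment

recording = AudioSegment.from_file("podcast_raw.wav")   # hypothetical raw recording

# Trimming and cutting: keep only the segment from 5 s to 20 s (times in milliseconds).
clip = recording[5_000:20_000]

# Adjust the volume (+3 dB) and add short fades to avoid jarring transitions.
edited = (clip + 3).fade_in(500).fade_out(1_000)

# Export the final product in the desired format.
edited.export("podcast_edited.wav", format="wav")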

III. MIDI Audio


MIDI stands for Musical Instrument Digital Interface.
It is a protocol that allows electronic musical instruments, computers and other devices to communicate with and control one another. It does not transmit audio directly; rather, it transmits data that represents musical notes, timing and other performance parameters.
MIDI messages are typically generated by MIDI controllers such as keyboards, drum pads or electronic wind instruments. These messages are then sent to MIDI synthesizers or software-based virtual instruments, which interpret the data and produce audio based on it.
When a MIDI message is received, the synthesizer or virtual instrument uses the information to generate sounds based on the assigned instrument or sound bank. This allows a wide variety of sounds and effects to be produced, as the MIDI data can be used to control parameters such as pitch, velocity, duration and modulation.
MIDI offers several advantages over audio recording or playback, which include:
 It allows for precise control and manipulation of musical elements, such as pitch correction, quantization and tempo adjustments.
 MIDI data is also highly compact compared to audio files, making it easy to store, edit and transmit.

To convert MIDI data into audio, you need a device or software that can receive the MIDI messages and generate sound based on them. This can be a hardware synthesizer, or a MIDI-compatible keyboard connected to a computer running music production software with a software synthesizer.
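As a minimal sketch of how MIDI carries note and performance data rather than audio, the Python example below writes a one-note MIDI file using the third-party mido library (assumed installed); a hardware synthesizer or virtual instrument would later turn these messages into actual sound.

from mido import Message, MidiFile, MidiTrack

mid = MidiFile()                 # default resolution of 480 ticks per beat
track = MidiTrack()
mid.tracks.append(track)

# Choose an instrument sound (program 0 = acoustic grand piano in General MIDI).
track.append(Message('program_change', program=0, time=0))

# One note: middle C (note 60), moderate velocity, lasting one beat (480 ticks).
track.append(Message('note_on', note=60, velocity=64, time=0))
track.append(Message('note_off', note=60, velocity=64, time=480))

mid.save('one_note.mid')         # a tiny file of event data, with no audio samples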
A. Advantages of MIDI
Below are some advantages of MIDI:
 Versatility: MIDI allows for communication and control between a wide range of electronic musical instruments, computers and devices.
 Compactness: MIDI data is very small in size compared to audio files, making it easy to store, edit and transmit.
 Editability: MIDI data can be easily edited, allowing for precise control and manipulation of musical elements such as pitch, timing and dynamics.
 Non-destructive: MIDI allows for non-destructive editing, meaning you can change and refine your musical performance without affecting the original source material.
 Flexibility: MIDI allows for flexible and dynamic arrangements, as you can easily change the instrument sounds, adjust tempos and modify other performance parameters.
 Integration: MIDI can be seamlessly integrated with computer-based music production software, allowing for extensive control and automation options.
 Real-time control: MIDI controllers provide real-time control over various parameters, allowing for expressive performances and live improvisation.
 Virtual instruments: MIDI can be used to drive software-based virtual instruments, providing access to a wide range of high-quality sounds and effects.
 Synchronization: MIDI allows for synchronization between different devices, ensuring that multiple instruments and devices play in perfect time.
 Standardization: MIDI is a widely accepted industry standard, ensuring compatibility between different MIDI devices and software.

These advantages make MIDI a powerful tool for music production, performance and composition, offering flexibility, control and creative possibilities.

B. Disadvantages of MIDI
One of the main disadvantages of MIDI is that it depends on the quality and compatibility of the sound source and the playback device.

IV. Audio file formats


Audio file formats are the different ways of storing digital audio data on a computer system. The bit layout of the audio data is called the audio coding format; it can be uncompressed, or compressed to reduce the file size, often using lossy compression. An audio codec performs the encoding and decoding of the raw audio data, while this encoded form is usually stored in a container file.
There are three major groups of audio file formats: uncompressed formats, lossless compressed formats and lossy compressed formats.
 Uncompressed file formats: these encode both sound and silence with the same number of bits per unit of time. One major uncompressed audio format, LPCM, is the variety of PCM used in compact disc digital audio and is the format most commonly accepted by low-level audio APIs and D/A converter hardware. Although LPCM can be stored on a computer as a raw audio file, it is usually stored in a .WAV file on Windows or in an .AIFF file on macOS. Because WAV and AIFF are widely supported and can store LPCM, they are suitable formats for storing and archiving an original recording. Broadcast Wave Format (BWF) was brought in as a successor to WAV; it allows more robust metadata to be stored in the file. It is the primary recording format used in many professional audio workstations in the television and film industry.
 Lossless compressed formats: these store data in less space without losing any information; the original uncompressed data can be recreated from the compressed version. In a lossless compressed format the music will occupy a smaller file than in an uncompressed format, and the silence will take up almost no space. Lossless compression formats include FLAC, WavPack, Monkey's Audio and ALAC (Apple Lossless). They provide a compression ratio of about 2:1, meaning their files take up roughly half the space of PCM. Development in lossless compression formats aims to reduce processing time while maintaining a good compression ratio.
 Lossy compressed audio formats: these enable a greater reduction of file size by removing some of the audio information and simplifying the data, which leads to a reduction in audio quality. A variety of techniques are used to remove the parts of the sound that have the least effect on perceived quality and to minimize the amount of audible noise added during the process. The most popular are the MP3 and AAC formats. Most formats offer a range of degrees of compression, generally measured in bit rate; the lower the bit rate, the smaller the file and the more significant the quality loss.
Open file formats: these are file formats that are published and freely available for anyone to use.
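As a back-of-envelope illustration of the three groups above, the arithmetic below estimates the size of one minute of uncompressed CD-quality LPCM and the approximate sizes after lossless (roughly 2:1) and lossy (128 kbit/s) compression; these are rough estimates, not measurements.

# One minute of uncompressed CD-quality LPCM: 44,100 samples/s, 16 bits, 2 channels.
sample_rate = 44_100
bit_depth = 16
channels = 2
seconds = 60

pcm_bytes = sample_rate * (bit_depth // 8) * channels * seconds
print(f"Uncompressed LPCM: {pcm_bytes / 1_000_000:.1f} MB")      # ~10.6 MB

# Lossless compression (FLAC, ALAC, ...) at roughly 2:1.
print(f"Lossless (~2:1):   {pcm_bytes / 2 / 1_000_000:.1f} MB")  # ~5.3 MB

# Lossy compression at a 128 kbit/s bit rate (e.g., MP3 or AAC).
lossy_bytes = 128_000 / 8 * seconds
print(f"Lossy at 128 kbps: {lossy_bytes / 1_000_000:.1f} MB")    # ~1.0 MB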

V. MIDI versus digital audio


 MIDI is a protocol for representing musical information in a digital format, while digital audio refers to the reproduction and transmission of sound stored in a digital format.
 MIDI is an abstract representation of musical sound and sound effects, while digital audio is a digital representation of physical sound waves.
 MIDI comprises a series of commands that represent musical notes, volume and other musical parameters, while digital audio comprises analog sound waves that are converted into a series of 0s and 1s.
 No actual sound is stored in a MIDI file, while actual sound is stored in a digital audio file.
 MIDI files are small and compact, while digital audio files are large.
 In MIDI the quality of sound is not in proportion to the file size, while in digital audio the quality of sound is in proportion to the file size.
 MIDI may sound a little different from the original performance, while digital audio reproduces the exact sound in a digital format.
 MIDI is used for creating and controlling electronic music, such as with synthesizers and drum machines, while digital audio is used for recording and playback of music, sound effects and voice-overs.
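To put the small-versus-large contrast above into rough numbers, the sketch below estimates the size of a three-minute piece stored as MIDI events versus CD-quality digital audio; the note count and bytes-per-note figures are rough assumptions for illustration only.

# Rough size estimate for a 3-minute piece: MIDI events vs CD-quality audio.
minutes = 3

# MIDI: assume about 1,000 notes, each taking roughly 8 bytes of note_on/note_off
# and timing data (rough assumption).
midi_bytes = 1_000 * 8
print(f"MIDI (event data only):   ~{midi_bytes / 1_000:.0f} KB")       # ~8 KB

# Digital audio: 44.1 kHz, 16-bit, stereo LPCM for the same 3 minutes.
audio_bytes = 44_100 * 2 * 2 * 60 * minutes
print(f"CD-quality digital audio: ~{audio_bytes / 1_000_000:.1f} MB")  # ~31.8 MB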

VI. Audio CD playback


Audio CD playback refers to the process of playing audio content from a compact disc (CD). Audio CDs are a popular medium for distributing music and other audio recordings.
To play an audio CD, you typically need a device that supports CD playback, such as a standalone CD player, a computer with a CD/DVD drive or a dedicated portable CD player.
Here are the general steps to play an audio CD:
1) Insert the audio CD: place the audio CD into the CD player or the CD/DVD drive of your computer. The CD should be inserted with the label side facing up.
2) Start playback: depending on the device or software you are using, the CD playback may start automatically, or you may need to initiate it manually. If it doesn't start automatically, open the media player software on your computer or press the play button on your CD player.
3) Control playback: once the CD starts playing, you can control the playback using the available controls. These typically include play, pause, stop, skip backward and volume adjustment. Use the appropriate buttons or options to control the playback according to your preferences.
4) Eject the CD: after you finish listening to the CD, you can eject it from the CD player or the CD/DVD drive, using the eject option in the media player software to safely remove the CD.
It's important to note that with the rise of digital music and streaming services, physical audio CDs have become less common in recent years. However, many devices and software still support CD playback for those who have collections of CDs or prefer the audio quality and experience provided by CDs.

VII. Audio recording

1. Types of storage
Audio recording is the process by which sound information is captured onto a storage medium such as magnetic tape or an optical disc. Digital audio can be stored on a variety of storage media, including compact discs, audio DVDs or computer files. There are two types of storage, primary and secondary, with primary storage acting as a computer's short-term memory and secondary storage as its long-term memory. Some examples of storage devices are the hard disk, the magnetic disk and the SD card. Audio recording is typically done to a storage device such as a hard drive or SSD (solid-state drive), and recordings can also be stored on a DVD (digital versatile disc). Here are the main types of storage commonly used in the field of audio recording:

 Hard disk drives: they are commonly used for storing recorded audio data in professional recording setups. They provide large storage capacities at a relatively affordable cost.
 Solid-state drives: they offer faster data access and are increasingly popular for audio recording applications. They provide swift write and read speeds, making them ideal for capturing high-fidelity audio.
 Digital audio workstations: they often use specialized storage systems optimized for handling real-time audio streaming and recording.
 Optical discs: they are used for long-term storage and backup of audio recordings, providing an additional layer of redundancy and data preservation.

2. Audio recording guidelines


To create a quality audio recording, there are several factors to consider, whether you are recording a voice-over, music or any other type of audio content. The guidelines to follow while creating audio recordings are:

 Soundproofing: choose a quiet and controlled environment to minimize background noise.
 Microphone selection: invest in a quality microphone appropriate for your specific recording needs.
 Microphone placement: position the microphone correctly to capture the intended sound.
 Audio interface: use a reliable audio interface to convert the microphone signal into digital audio.

Whether you are recording in a professional studio or in a home setup, paying attention to these few factors can significantly enhance the quality of your audio recording.

VIII. Voice recognition and response


Voice recognition and response, also known as voice recognition technology or speech recognition, refers to the ability of a system or software to understand spoken language and respond accordingly. It involves converting spoken words into written text or commands and then processing that information to provide an appropriate response.
Voice recognition and response systems use various techniques and technologies to achieve accurate and reliable results.
1. Using Microphone
Voice recognition and response using a microphone involves capturing audio input through a microphone and processing it to understand the spoken words and generate appropriate responses. Here is a general overview of how the process works; a short code sketch follows these steps.
1) Audio input: the system or application receives audio input from a microphone. The microphone captures the user's voice and converts it into an electrical signal.
2) Pre-processing: the captured audio signal may undergo pre-processing to remove noise, adjust volume levels or apply other techniques to enhance the quality of the audio.
3) Speech recognition: the pre-processed audio signal is then passed through a speech recognition system or algorithm. The algorithm analyzes the audio input, identifies individual words or phonemes and converts them into written text. This step is known as speech-to-text conversion or automatic speech recognition (ASR).
4) Text analysis: the resulting text from the speech recognition step is analyzed to understand its meaning and intent. Natural language processing (NLP) techniques may be applied to parse the text, extract keywords and determine the user's intent or query.
5) Response generation: based on the analyzed text, a response is generated. This can be in the form of spoken words, text displayed on a screen or actions performed by the system.
6) Response output: the generated response is delivered to the user through a speaker or a text-to-speech (TTS) engine that converts the text into spoken words. Alternatively, the response can be displayed as text on a screen or other output devices.
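As a minimal sketch of the microphone-to-response pipeline above (audio input, pre-processing, speech-to-text and a simple response), the Python example below uses the third-party SpeechRecognition package with its Google Web Speech backend; it assumes SpeechRecognition and PyAudio are installed and a network connection is available, and the keyword handling is a deliberately simple placeholder.

import speech_recognition as sr

recognizer = sr.Recognizer()

# Steps 1-2) Audio input and pre-processing: capture from the microphone and
# adapt to the ambient background noise level.
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Say something...")
    audio = recognizer.listen(source)

# Step 3) Speech recognition (speech-to-text) via the Google Web Speech API.
try:
    text = recognizer.recognize_google(audio)
except (sr.UnknownValueError, sr.RequestError):
    text = ""

# Steps 4-6) Very simple text analysis and response output (printed text here, not TTS).
if "hello" in text.lower():
    print("Response: Hello! How can I help you?")
elif text:
    print(f"Response: You said '{text}', but this sketch only understands greetings.")
else:
    print("Response: Sorry, I could not understand the audio.")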

It's important to note that the accuracy of voice recognition and response systems can vary depending on factors such as the quality of the speech recognition and natural language processing algorithms being used. Continuous advancements in technology have significantly improved the accuracy and reliability of these systems, making them increasingly prevalent in various applications and devices.

2. Performance features
Voice recognition and response systems can have various performance features that contribute to their accuracy, reliability and user experience. Here are some important performance features to consider.
1) Accuracy: Accuracy refers to the system’s ability to correctly recognize and interpret
spoken words.
2) Language support: The system’s language support determines the range of language
and dialects it can effectively recognize and respond to.
3) Noise cancellation: Noise cancellation technologies help filter out background noise,
such as ambient sounds or echoes, to improve speech recognition accuracy.
4) Speaker adaptation: Speaker adaptation allows the system to adapt and recognize
the unique voice characteristics of individual users.
5) Response time: Response time refers to the speed at which the system provides a response after receiving an input.
6) Continuous speech recognition: Continuous speech recognition enables the system to process and interpret speech in real time, without requiring users to pause between words or phrases.
7) Contextual understanding: Contextual understanding involves the system’s ability
to interpret the meaning of spoken words within the context of the conversation or
user’s previous interactions.
8) Error handling and correction: Effective error handling and correction mechanisms help the system recover from recognition errors or ambiguous inputs.
9) Personalization: Personalization features allow the system to learn from user interactions and adapt its responses over time.
10) Integration capabilities: Integration capabilities enable the voice recognition and response system to integrate seamlessly with other applications, platforms or devices.
It's important to note that the performance features of voice recognition and response systems can vary depending on the specific software or platform being used. Different systems may prioritize and excel in different aspects, so it's essential to consider the specific requirements and goals when evaluating and selecting a voice recognition and response solution.
