
A MAJOR PROJECT REPORT ON

Guest Greeting and Guiding Robot for Institution Purpose


using Raspberry Pi
Bachelor of Technology
In
Electronics & Communication Engineering
Submitted by
Paripelli Vamshi – 19671A0440
Sana Naga Venkata Subhash Naidu - 19671A0445
Chinthalapeta Raja Reddy - 20675A0406

Under the Esteemed Guidance of


Dr. Towheed Sultana
Prof. ECE Dept

Department of Electronics & Communication Engineering


J.B. INSTITUTE OF ENGINEERING AND TECHNOLOGY
(UGC Autonomous)
(Accredited by NBA & NAAC, Approved by AICTE & Affiliated to JNTU,
Hyderabad)
Yenkapally, Moinabad
J.B. INSTITUTE OF ENGINEERING AND TECHNOLOGY
(UGC Autonomous)
(Accredited by NBA & NAAC, Approved by AICTE & Affiliated to JNTU,
Hyderabad)

CERTIFICATE

This is to certify that the dissertation work entitled “Guest Greeting and
Guiding Robot for Institution Purpose using Raspberry Pi” is carried out
by P. Vamshi – 19671A0440, Sana Naga Venkata Subhash Naidu -
19671A0445 and Chinthalapeta Raja Reddy - 20675A0406, in partial
fulfillment of the requirements for the degree of Bachelor of Technology in
Electronics and Communication Engineering of the J.B. Institute of
Engineering and Technology, Hyderabad, during the academic year 2022-23.

Internal Guide: Dr. Towheed Sultana, Professor, ECE Dept.

HOD-ECE: Dr. Towheed Sultana, Professor


DECLARATION

We (P. Vamshi, S.N.V. Subhash Naidu, Ch. Raja Reddy) hereby declare that the major
project entitled “Guest Greeting and Guiding Robot for Institution Purpose using
Raspberry Pi”, done by us under the guidance of Dr. Towheed Sultana (Professor) at “J.B.
Institute of Engineering and Technology”, is submitted in partial fulfilment of the
requirements for the award of a degree in Electronics and Communication Engineering.

ACKNOWLEDGEMENTS

This is an acknowledgement of the intensive drive and technical competence
of many individuals who have contributed to the success of our dissertation.

We would like to sincerely thank our internal guide, Dr. Towheed Sultana,
Professor, who stimulated many thoughts for this project, and the staff members of the
Department of ECE for their goodwill gestures towards us.

We are very grateful to the HOD, Dr. Towheed Sultana, Professor, who has not
only shown the utmost patience, but has been fertile in suggestions, vigilant in pointing
out errors, and infinitely helpful.

We wish to express our deepest gratitude and thanks to the Principal,
Dr. P. Krishnamachari, for his constant support and encouragement in providing
all the facilities in the college to do the project work.

P. Vamshi – 19671A0440
S.N.V. Subhash Naidu - 19671A0445
Ch. Raja Reddy - 20675A0406

ABSTRACT

In the new world of technology, robots are playing a critical role. At many tourist places and
institutions, we observe that guides work mechanically, receiving guests and helping them explore the
place. This project performs such repetitive work more effectively by receiving guests, greeting them
and helping them with path finding. The robot receives the command for the destination path through
an LED display. The proposed model can avoid obstacles by using ultrasonic sensors. By using the
proposed model, an organization can make its premises easy to explore. It can be used in
institutions, malls, airports, museums and restaurants. The proposed model allows guests to locate and
explore the area. With the help of programmed sensors and a Raspberry Pi, we can accomplish this.

Index Terms - Raspberry Pi, sensors, guest greeting, line path follower, face recognition.

TABLE OF CONTENTS
TOPICS
CERTIFICATES

DECLARATION ................................................................................................................................................... 1

ACKNOWLEDGEMENTS ................................................................................................................................... 2

ABSTRACT ........................................................................................................................................................... 3

TABLE OF CONTENTS ....................................................................................................................................... 4

List of Figures ........................................................................................................................................................ 6

List of Tables ..........................................................................................................................................................8

CHAPTER 1: INTRODUCTION .......................................................................................................................... 9

1.1. Introduction of the Project ...................................................................................................................... 9

1.2. Objective ................................................................................................................................................. 9

1.3. Literature Survey .....................................................................................................................................9

1.4. Advantages and Applications ................................................................................................................10

CHAPTER 2: EMBEDDED SYSTEMS ............................................................................................................. 12

2.1. Embedded Systems ............................................................................................................................... 12

2.2. Explanation of Embedded Systems .......................................................................................................13

2.2.1. Software Architecture ......................................................................................................................13

2.2.2. Stand Alone Embedded System ...................................................................................................... 15

2.2.3. Real-time Embedded Systems .........................................................................................................15

2.2.4. Network communication embedded systems .................................................................................. 16

CHAPTER 3: HARDWARE DESCRIPTION .................................................................................................... 17

3.1. Introduction to Hardware ...................................................................................................................... 17

3.2. Rechargeable Battery Power Supply ..................................................................... 17

3.3. Raspberry pi3 ........................................................................................................................................ 18

3.4. D.C. Motor ............................................................................................................................................ 19

3.5. PIC Microcontrollers .............................................................................................................................20

3.6. DC Motor Driver ...................................................................................................................................23

3.7. Pi camera ...............................................................................................................................................24

3.8. Ultrasonic Sensor .................................................................................................................................. 26

3.9. IR Sensor ...............................................................................................................................................27

3.10. SPEAKERS ...........................................................................................................................................30

3.11. Push buttons .......................................................................................................................................... 30

CHAPTER 4: LINUX .......................................................................................................................................... 33

4.1. Introduction to Linux ............................................................................................ 33

4.2. Components of Linux System ...............................................................................................................34

4.3. Basic Features ....................................................................................................................................... 35

4.4. Architecture of Linux ............................................................................................36

4.5. Programming on Linux ......................................................................................................................... 36

4.6. Linux Basic Commands ........................................................................................................................ 37

4.7. Current application of Linux systems ................................................................................................... 38

4.8. Basic Commands of Linux ....................................................................................................................38

4.9. Properties of Linux ................................................................................................................................41

CHAPTER 5: SOFTWARE DESCRIPTION ...................................................................................................... 44

5.1. Raspbian OS ..........................................................................................................................................44

5.2. Setting up the Raspberry Pi and camera interface ................................................................................ 45

5.3. Installing the dependencies ................................................................................................................... 49

5.4. Python Compiler ................................................................................................................................... 50

5.5. OpenCV ................................................................................................................................................ 50

5.6. SIFT in OpenCV ................................................................................................................................... 54

CHAPTER 6: PROJECT DESCRIPTION ...........................................................................................................56

CHAPTER 7: SUMMARY OF PROJECT AND FUTURE SCOPE ...................................................59

REFERENCES .....................................................................................................................................................60

List of Figures

Figure 2.1.1 A modern example of embedded system .......................................................................... 13

Figure 3.1.1 Block diagram of Guest Greeting and Guiding Robot ......................................................17

Figure 3.3.1 Raspberry pi3 B+ .............................................................................................................. 18

Figure 3.5.1 Architecture of CPU ......................................................................................................... 22

Figure 3.5.2 Relation between instruction cycles and clock cycles for PIC micro controllers ............23

Figure 3.6.1 L293D IC .......................................................................................................................... 23

Figure 3.7.1 Pi Camera ..........................................................................................................................24

Figure 3.8.1 Ultrasonic Sensor .............................................................................................................. 26

Figure 3.9.1 IR Sensor module .............................................................................................................27

Figure 3.9.2 IR Sensor module pinout ................................................................................................. 28

Figure 3.10.1 Speakers .......................................................................................................................... 30

Figure 4.2.1 Linux Operating System ................................................................................................... 35

Figure 4.4.1 Architecture of Linux ........................................................................................................36

Figure 5.2.1 Interfacing options ............................................................................................................ 46

Figure 5.2.2 Enabling Pi camera ............................................................................................................ 47

Figure 5.2.3 Interfacing camera with Raspberry pi ...............................................................................48

Figure 5.3.1 Installing the dependencies ................................................................................................ 49

Figure 5.5.1 working of SIFT Algorithm .............................................................................................. 51

Figure 5.5.2 Gaussian and DOG ........................................................................................................... 52

Figure 5.5.3 Image with key points .......................................................................................................52

Figure 5.6.5 SIFT based key point detection ........................................................................................55

Figure 6.1 Schematic diagram of Guest greeting and guiding robot ....................................................56

Figure 6.2 Front view of Robot ............................................................................................................. 57

Figure 6.3 Side view of Robot ...............................................................................................................57


Figure 6.4 Top view of Robot ............................................................................................................... 58

List of Tables

Table 3.8.1 Pin Configuration of IR sensor module ..............................................................................28

Table 5.8.1 Basic commands of Linux ..................................................................................................39

Table 6.1 Experimental Data .................................................................................................................56

CHAPTER 1: INTRODUCTION

1.1. Introduction of the Project

Several great works have been done to develop robots for greeting [1], navigation [2] and human-
robot interaction [3][5][6][8], built on face recognition and other concepts. With the
advancement of technology in recent years, many companies, libraries, educational
institutions and restaurants are choosing autonomous robots as greeters, guides and receptionists.
A few studies have used robots in social places to analyse human reactions towards robot
conversation and behaviour [10]. Most of the existing models were about designing and implementing
a humanoid robot with no structural design, which cannot give stability to the robot, and with no proper
communication ability, such as taking long pauses in between interactions. This work is oriented
towards providing a design for a guest greeting and guiding robot using a Raspberry Pi, and it
offers simple, informative interactions between human and robot. Mindfulness of time is very
important, as people have very little forbearance for lengthy interruptions in discussion. The device is
suitable for executing the task with the help of the Raspberry Pi processor. To execute this task,
embedded Linux is used to program the Raspberry Pi processor.

1.2. Objective

The main objective of this project is to reduce mechanical work and stress on humans, and to
provide simple human-robot interaction with no long pauses.

1.3. Literature Survey

Haruka Okada, Kenji Suzuki, Toshiaki Uchiyama and Yadong Pan [1], in the year 2013, conducted
various experiments using four robots. The first two, the single robots Palro and Nao, were used to
locate guests individually and greet them; for contrast, the same task was carried out by hotel staff.
The next two, the twin robots Gemini and a pair of Naos, were engaged in interaction regarding the
hotel while a recorded video of the interaction was displayed on a TV. Cui Ning, Jing Qingquan and
Yin Xunhe [2], in the year 2020, found an optimal autonomous path to be followed by the robot using
an ant colony algorithm. The simulation model can visit guests individually along the optimal path in
the shortest time and can dispatch the respective information and data in accordance with the
predetermined method. A. Holroyd [3], in the year 2011, theorized about engagement using connection
events and showed that connection events can be created by a humanoid robot. Boiney, L. [4], in the
year 2005, revealed how challenging the critical functions of human-technology interaction have been
for operators bombarded with information, how existing technology hinders key activities, and how
teams adapt their procedures or technologies to meet real-time demands. B. Mutlu, H. Ishiguro,
T. Kanda, T. Shiwa and N. Hagita [5], in the year 2009, designed a robot that can take the part of an
informative partner via cues. They came up with gaze behaviours for the robot to signal the
participants' roles. A. Holroyd, B. Ponsler, C. Sidner and C. Rich [6], in the year 2010, designed,
developed and implemented a computational model for recognizing interaction between robot and
human. The model can recognize four types of events, including gesture and speech. John A. Groeger
and David T. Field [7], in the year 2004, studied the processing of information in short-term memory
procedures. They reported four experiments: the first dealt with increasing set size in a pitch memory
task, two further experiments examined the interaction between short-term memory requirements and
temporal production, and in the final experiment subjects performed temporal production while
remembering the durations of sets of tones. D. Bohus and E. Horvitz [8], in the year 2009, proposed an
approach that, via machine learning, can recognize the desire for interaction with a false-positive rate
of 2 to 4%. M. A. Goodrich and J. W. Crandall [9], in the year 2002, presented a framework for
evaluating the efficiency of an interaction in terms of sensitivity to neglect and environmental
complexity. They performed a case study on interaction efficiency and compared a shared-control
teleoperation algorithm with direct teleoperation. D. Sakamoto, H. Ishiguro, K. Hayashi, M. Shiomi,
N. Hagita, S. Koizumi, T. Kanda and T. Ogasawara [10], in the year 2007, hypothesized three cases.
Hypothesis one addressed how people feel when a robot interacts with them. Hypothesis two showed
that limited-realism interaction makes people lose interest in the information. Hypothesis three showed
how people react to a conversation between two robots.

The main purpose of the existing models was to develop human-robot interaction and autonomous guiding
robots. The aim of our proposed project is to use a single robot for guest greeting and guiding that is
simple and economical, built around a Raspberry Pi, able to greet people and navigate the guest to
their selected destination, such as a classroom or lab.

1.4. Advantages and Applications

Advantages:

 Design of a real-time robot which guides and greets visitors through voice.
 Automatic obstacle-sensing robot.
 Automatic line-following system.
 Use of image processing to detect known and unknown persons.
 Highly efficient and low-cost design.
 This robot eliminates the need for manpower.
 Low power consumption.

Applications:

1) Colleges

2) Universities

CHAPTER 2: EMBEDDED SYSTEMS

2.1. Embedded Systems

An embedded system is a computer system designed to perform one or a few dedicated


functions often with real-time computing constraints. It is embedded as part of a complete device
often including hardware and mechanical parts. By contrast, a general-purpose computer, such as a
personal computer (PC), is designed to be flexible and to meet a wide range of end-user needs.
Embedded systems control many devices in common use today. Embedded systems are controlled by
one or more main processing cores that are typically either micro controllers or digital signal
processors (DSP). The key characteristic, however, is being dedicated to handle a particular task,
which may require very powerful processors. For example, air traffic control systems may usefully
be viewed as embedded, even though they involve mainframe computers and dedicated regional and
national networks between airports and radar sites. (Each radar probably includes one or more
embedded systems of its own.)
Since the embedded system is dedicated to specific tasks, design engineers can optimize it to
reduce the size and cost of the product and increase the reliability and performance. Some embedded
systems are mass-produced, benefiting from economies of scale. Physically embedded systems range
from portable devices such as digital watches and MP3 players, to large stationary installations like
traffic lights, factory controllers, or the systems controlling nuclear power plants. Complexity varies
from low, with a single micro controller chip, to very high with multiple units, peripherals and
networks mounted inside a large chassis or enclosure.
In general, "embedded system" is not a strictly definable term, as most systems have some
element of extensibility or programmability. For example, handheld computers share some elements
with embedded systems such as the operating systems and microprocessors which power them, but
they allow different applications to be loaded and peripherals to be connected. Moreover, even
systems which don't expose programmability as a primary feature generally need to support software
updates. On a continuum from "general purpose" to "embedded", large application systems will have
subcomponents at most points even if the system as a whole is "designed to perform one or a few
dedicated functions", and is thus appropriate to call "embedded". A modern example of embedded
system is shown in fig: 2.1.

Figure 2.1.1 A modern example of embedded system

Labeled parts include the microprocessor (4), RAM (6) and flash memory (7). Embedded systems
programming is not like normal PC programming. In many ways, programming for an embedded
system is like programming a PC 15 years ago. The hardware for the system is usually chosen to make
the device as cheap as possible. Spending an extra dollar a unit in order to make things easier to
program can cost millions. Hiring a programmer for an extra month is cheap in comparison. This
means the programmer must make do with slow processors and low memory, while at the same time
battling a need for efficiency not seen in most PC applications.
2.2. Explanation of Embedded Systems

2.2.1. Software Architecture

There are several different types of software architecture in common use.


 Simple Control Loop
In this design, the software simply has a loop. The loop calls subroutines, each of which
manages a part of the hardware or software.
 Interrupt Controlled System
Some embedded systems are predominantly interrupt controlled. This means that tasks
performed by the system are triggered by different kinds of events. An interrupt could be generated
for example by a timer at a predefined frequency, or by a serial port controller receiving a byte.
These kinds of systems are used if event handlers need low latency and the event handlers are short
and simple.
Usually these kinds of systems run a simple task in a main loop also, but this task is not very
sensitive to unexpected delays. Sometimes the interrupt handler will add longer tasks to a queue
structure. Later, after the interrupt handler has finished, these tasks are executed by the main loop.
This method brings the system close to a multitasking kernel with discrete processes.
 Cooperative Multitasking:
A non-preemptive multitasking system is very similar to the simple control loop scheme,
except that the loop is hidden in an API. The programmer defines a series of tasks, and each task gets
its own environment to “run” in. When a task is idle, it calls an idle routine, usually called “pause”,
“wait”, “yield”, “nop” (which stands for no operation), etc. The advantages and disadvantages are very
similar to the control loop, except that adding new software is easier, by simply writing a new task
or adding it to the queue interpreter.
 Primitive Multitasking
In this type of system, a low-level piece of code switches between tasks or threads based on a
timer (connected to an interrupt). This is the level at which the system is generally considered to have
an "operating system" kernel. Depending on how much functionality is required, it introduces more or
less of the complexities of managing multiple tasks running conceptually in parallel.
As any code can potentially damage the data of another task (except in larger systems using
an MMU) programs must be carefully designed and tested, and access to shared data must be
controlled by some synchronization strategy, such as message queues, semaphores or a non-blocking
synchronization scheme.
Because of these complexities, it is common for organizations to buy a real-time operating
system, allowing the application programmers to concentrate on device functionality rather than
operating system services, at least for large systems; smaller systems often cannot afford the
overhead associated with a generic real-time system, due to constraints regarding
memory size, performance, and/or battery life.
 Microkernels And Exokernels
A micro kernel is a logical step up from a real-time OS. The usual arrangement is that the
operating system kernel allocates memory and switches the CPU to different threads of execution.
User mode processes implement major functions such as file systems, network interfaces, etc.

In general, microkernels succeed when the task switching and intertask communication is fast,
and fail when they are slow. Exokernels communicate efficiently by normal subroutine calls. The
hardware and all the software in the system are available to, and extensible by, application
programmers. Based on performance, functionality and requirements, embedded systems are divided
into three categories.

2.2.2. Stand Alone Embedded System

These systems take input in the form of electrical signals from transducers or commands
from human beings, such as the press of a button, process them and produce the desired output. This
entire process of taking input, processing it and giving output is done in standalone mode. Such
embedded systems come under stand-alone embedded systems.
Eg: microwave ovens, air conditioners, etc.

2.2.3. Real-time Embedded Systems

Embedded systems which are used to perform a specific task or operation within a specific time
period are called real-time embedded systems. There are two types of real-time
embedded systems.
 Hard real-time embedded systems
These embedded systems follow an absolute deadline, i.e., if the task is not
done within a particular time period then it may cause damage to the entire equipment.
Eg: consider a system in which we have to open a valve within 30 milliseconds. If this valve is not
opened within 30 ms, it may cause damage to the entire equipment. In such cases we use embedded
systems for automatic operations.
 Soft real-time embedded systems
These embedded systems follow a relative deadline, i.e., if the task is not done within a
particular time, it will not cause damage to the equipment.
Eg: consider a TV remote control system; if the remote control responds with a delay of a few
milliseconds, it will not cause damage either to the TV or to the remote control. Systems which do not
cause damage when they are not operated within the expected time period come under soft real-time
embedded systems.

2.2.4. Network communication embedded systems

A wide range of network interfacing communication is provided by using embedded systems.


Eg: consider a web camera that is connected to a computer with internet; it can be used to send
pictures, images, videos, etc., to another computer with an internet connection anywhere in the
world.
 Consider a web camera connected at the door lock. Whenever a person comes near the door,
it captures the image of the person and sends it to the desktop of your computer, which is connected
to the internet. This gives an alert message with the image on the desktop of your computer, and then
you can open the door lock just by clicking the mouse.

CHAPTER 3: HARDWARE DESCRIPTION

3.1. Introduction to Hardware

In this chapter, the block diagram of the project and the design aspects of the independent modules are
considered. The block diagram is shown in Fig. 3.1.1.

Figure 3.1.1 Block diagram of Guest Greeting and Guiding Robot

The main blocks of this project are


• Battery Power supply.
• Raspberry pi3.
• SD card.
• Pi camera.
• Push buttons.
• IR sensors.
• Ultrasonic sensor.
• DC motor with l293d motor driver.
• Speaker.

3.2. Rechargeable Battery Power Supply

A rechargeable battery, storage battery, secondary cell, or accumulator is a type of electrical
battery which can be charged, discharged into a load, and recharged many times, as opposed to a
disposable or primary battery, which is supplied fully charged and discarded after use. It is composed
of one or more electrochemical cells. The term "accumulator" is used as it accumulates and stores
energy through a reversible electrochemical reaction. Rechargeable batteries are produced in many
different shapes and sizes, ranging from button cells to megawatt systems connected to stabilize
an electrical distribution network.
Battery Power Supply
A 12 V, 2 A rechargeable battery is used as the power supply for the project.

3.3. Raspberry pi3

Figure 3.3.1 Raspberry pi3 B+

The Raspberry Pi 3 is the third generation Raspberry Pi. It replaced the Raspberry Pi 2 Model
B in February 2016. Compared to the Raspberry Pi 2 it has:
a 1.2 GHz 64-bit quad-core ARMv8 CPU, 802.11n wireless LAN, Bluetooth 4.1 and Bluetooth
Low Energy (BLE). Like the Pi 2, it also has 1 GB RAM, 4 USB ports, 40 GPIO pins, a full-size HDMI
port, an Ethernet port, a combined 3.5 mm audio jack and composite video, a camera interface (CSI),
a display interface (DSI), a micro SD card slot (now push-pull rather than push-push) and a VideoCore IV
3D graphics core.
The Raspberry Pi 3 has an identical form factor to the previous Pi 2 (and Pi 1 Model B+) and
has complete compatibility with Raspberry Pi 1 and 2. We recommend the Raspberry Pi 3 Model B
for use in schools, or for any general use. Those wishing to embed their Pi in a project may prefer
the Pi Zero or Model A+, which are more useful for embedded projects, and projects which require
very low power.

3.4. D.C. Motor

A dc motor uses electrical energy to produce mechanical energy, very typically through the
interaction of magnetic fields and current-carrying conductors. The reverse process, producing
electrical energy from mechanical energy, is accomplished by an alternator, generator or dynamo.
Many types of electric motors can be run as generators, and vice versa. The input of a DC motor is
current/voltage and its output is torque (speed).
The DC motor has two basic parts: the rotating part, called the armature, and the
stationary part, which includes coils of wire called the field coils. The stationary part is also called
the stator. The armature is made of coils of wire wrapped around a core, and the core has an extended
shaft that rotates on bearings. The ends of each coil of wire on the armature are terminated at one end
of the armature. The termination points are called the commutator, and this is where the brushes make
electrical contact to bring electrical current from the stationary part to the rotating part of the machine.
Operation
The DC motor you will find in modern industrial applications operates very similarly to a simple
DC motor. In the electrical diagram of a simple DC motor, the DC voltage is applied directly to the
field winding and the brushes. The armature and the field are both shown as a coil of wire. In practice,
a field resistor can be added in series with the field to control the motor speed. When voltage is
applied to the motor, current
begins to flow through the field coil from the negative terminal to the positive terminal. This sets up a
strong magnetic field in the field winding. Current also begins to flow through the brushes into a
commutator segment and then through an armature coil. The current continues to flow through the
coil back to the brush that is attached to other end of the coil and returns to the DC power source. The
current flowing in the armature coil sets up a strong magnetic field in the armature.
The magnetic field in the armature and field coil causes the armature to begin to rotate. This
occurs by the unlike magnetic poles attracting each other and the like magnetic poles repelling each
other. As the armature begins to rotate, the commutator segments will also begin to move under the
brushes. As an individual commutator segment moves under the brush connected to positive voltage,
it will become positive, and when it moves under a brush connected to negative voltage it will become

negative. In this way, the commutator segments continually change polarity from positive to negative.
Since the commutator segments are connected to the ends of the wires that make up the field winding
in the armature, it causes the magnetic field in the armature to change polarity continually from north
pole to south pole. The commutator segments and brushes are aligned in such a way that the switch in
polarity of the armature coincides with the location of the armature's magnetic field and the field
winding's magnetic field. The switching action is timed so that the armature will not lock up
magnetically with the field. Instead the magnetic fields tend to build on each other and provide
additional torque to keep the motor shaft rotating.
When the voltage is de-energized to the motor, the magnetic fields in the armature and the
field winding will quickly diminish and the armature shaft's speed will begin to drop to zero. If
voltage is applied to the motor again, the magnetic fields will strengthen and the armature will begin
to rotate again.

3.5. PIC Microcontrollers

PIC stands for Peripheral Interface Controller, a name given by Microchip Technology to identify its
single-chip microcontrollers. These devices have been very successful among 8-bit microcontrollers. The
main reason is that Microchip Technology has continuously upgraded the device architecture and
added needed peripherals to the micro controller to suit customers' requirements. The development
tools such as assembler and simulator are freely available on the internet at www.microchip.com.

 Low - end PIC Architectures

Microchip PIC microcontrollers are available in various types. When the PIC microcontroller
(MCU) was first available from General Instrument in the early 1980s, the microcontroller consisted of a
simple processor executing 12-bit wide instructions with basic I/O functions. These devices are
known as low-end architectures. They have limited program memory and are meant for applications
requiring simple interface functions and small program & data memories. Some of the low-end device
numbers are

12C5XX
16C5X
16C505
 Mid range PIC Architectures

Mid-range PIC architectures are built by upgrading low-end architectures with a greater number of
peripherals, a greater number of registers and more data/program memory. Some of the mid-range
devices are

16C6X
16C7X
16F87X

Program memory type is indicated by an alphabet.

C = EPROM

F = Flash

RC = Mask ROM

Popularity of the PIC microcontrollers is due to the following factors.

1. Speed: Harvard Architecture, RISC architecture, 1 instruction cycle = 4 clock cycles.

2. Instruction set simplicity: The instruction set consists of just 35 instructions (as opposed to 111
instructions for 8051).

3. Power-on-reset and brown-out reset. Brown-out-reset means when the power supply goes below a
specified voltage (say 4V), it causes PIC to reset; hence malfunction is avoided.

4. A watchdog timer (user programmable) resets the processor if the software/program ever
malfunctions and deviates from its normal operation.

5. The PIC microcontroller has four optional clock sources:

 Low power crystal

 Mid range crystal

 High range crystal

 RC oscillator (low cost).

6. Programmable timers and on-chip ADC.

7. Up to 12 independent interrupt sources.

8. Powerful output pin control (25 mA maximum current sourcing capability per pin).

9. EPROM/OTP/ROM/Flash memory option.

10. I/O port expansion capability.

Free assembler and simulator support from Microchip at www.microchip.com.

 CPU Architecture:

The CPU uses Harvard architecture with separate Program and Variable (data) memory interface.
This facilitates instruction fetch and the operation on data/accessing of variables simultaneously.

Figure 3.5.1 Architecture of CPU

PIC Micro controller Clock

Most PIC microcontrollers can operate at up to 20 MHz. One instruction cycle (machine cycle) consists
of four clock cycles.

Figure 3.5.2 Relation between instruction cycles and clock cycles for PIC micro controllers

Instructions that do not require modification of program counter content get executed in one
instruction cycle.
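For example, with the 20 MHz maximum clock quoted above, one instruction cycle lasts 4 / 20 MHz = 0.2 µs (200 ns), so single-cycle instructions can execute at a rate of up to 5 million instructions per second.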

3.6. DC Motor Driver

The L293 and L293D are quadruple high-current half-H drivers. The L293 is designed to
provide bidirectional drive currents of up to 1 A at voltages from 4.5 V to 36 V. The L293D is
designed to provide bidirectional drive currents of up to 600-mA at voltages from 4.5 V to 36 V. Both
devices are designed to drive inductive loads such as relays, solenoids, dc and bipolar stepping motors,
as well as other high-current/high-voltage loads in positive-supply applications.
All inputs are TTL compatible. Each output is a complete totem-pole drive circuit, with a
Darlington transistor sink and a pseudo-Darlington source. Drivers are enabled in pairs, with drivers 1
and 2 enabled by 1,2EN and drivers 3 and 4 enabled by 3,4EN. When an enable input is high, the
associated drivers are enabled and their outputs are active and in phase with their inputs. When the
enable input is low, those drivers are disabled and their outputs are off and in the high-impedance
state. With the proper data inputs, each pair of drivers forms a full-H (or bridge) reversible drive
suitable for solenoid or motor applications. On the L293, external high-speed output clamp diodes
should be used for inductive transient suppression. A VCC1 terminal, separate from VCC2, is
provided for the logic inputs to minimize device power dissipation. The L293 and L293D are
characterized for operation from 0°C to 70°C.

Figure 3.6.1 L293D IC

Features of L293D
 600mA Output current capability per channel
 1.2A Peak output current (non repetitive) per channel
 Enable facility
 Over temperature protection
 Logical “0” input voltage up to 1.5 V

 High noise immunity


 Internal clamp diodes
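As a rough illustration of how the Raspberry Pi's GPIO pins can drive one channel pair of the L293D, the following Python sketch (using the RPi.GPIO library available on Raspbian) runs a motor forward, then in reverse, and then stops it. The BCM pin numbers are assumptions chosen for this example, not the actual wiring of the robot.

import time
import RPi.GPIO as GPIO

IN1, IN2, EN = 17, 27, 22              # assumed BCM pins for inputs 1A, 2A and enable 1,2EN

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, EN], GPIO.OUT)

GPIO.output(EN, GPIO.HIGH)             # enable drivers 1 and 2

GPIO.output(IN1, GPIO.HIGH)            # IN1 high, IN2 low: motor runs forward
GPIO.output(IN2, GPIO.LOW)
time.sleep(2)

GPIO.output(IN1, GPIO.LOW)             # swap the inputs to reverse the motor
GPIO.output(IN2, GPIO.HIGH)
time.sleep(2)

GPIO.output(EN, GPIO.LOW)              # disable the drivers: outputs go to the off state
GPIO.cleanup()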
3.7. Pi camera

The camera consists of a small (25mm by 20mm by 9mm) circuit board, which connects to
the Raspberry Pi's Camera Serial Interface (CSI) bus connector via a flexible ribbon cable. The
camera's image sensor has a native resolution of five megapixels and has a fixed focus lens. The
software for the camera supports full resolution still images up to 2592x1944 and video resolutions
of 1080p30, 720p60 and 640x480p60/90. The camera module is shown below:

Figure 3.7.1 Pi Camera

The OV5647 has an image array capable of operating at up to 15 fps in 2592x1944 resolution,
with user control of image quality, data transfer and camera functions through the SCCB interface.
The OV5647 uses OmniBSI technology to improve sensor performance without physical and optical
trade-offs.
 Features
 1.4 µm x 1.4 µm pixel with OmniBSI technology for high performance (high sensitivity, low crosstalk, low noise)
 optical size of 1/4"
 automatic image control functions: automatic exposure control (AEC), automatic white balance (AWB), automatic band filter (ABF), automatic 50/60 Hz luminance detection, and automatic black level calibration (ABLC)
 programmable controls for frame rate, AEC/AGC 16-zone size/position/weight control, mirror and flip, cropping, windowing, and panning
 image quality controls: lens correction, defective pixel canceling
 support for output formats: 8-/10-bit raw RGB data
 support for video or snapshot operations
 support for LED and flash strobe mode
 support for internal and external frame synchronization for frame exposure mode
 support for horizontal and vertical sub-sampling
 standard serial SCCB interface
 digital video port (DVP) parallel output interface
 MIPI interface (two lanes)
 32 bytes of embedded one-time programmable (OTP) memory
 on-chip phase lock loop (PLL)
 embedded 1.5V regulator for core power
 programmable I/O drive capability, I/O tri-state configurability
 support for black sun cancellation
 Specifications
 active array size: 2592 x 1944
 power supply: core: 1.5V ± 5% (with embedded 1.5V regulator); analog: 2.6 ~ 3.0V (2.8V typical); I/O: 1.7V ~ 3.0V
 power requirements: active: TBD; standby: TBD
 temperature range: operating: -30°C to 70°C; stable image: 0°C to 50°C
 output formats: 8-/10-bit RGB RAW output
 lens size: 1/4"
 lens chief ray angle: 24
 input clock frequency: 6~27 MHz
 S/N ratio: TBD
 dynamic range: TBD
 maximum image transfer rate: QSXGA (2592 x 1944): 15 fps; 1080p: 30 fps; 960p: 45 fps; 720p: 60 fps; VGA (640 x 480): 90 fps; QVGA (320 x 240): 120 fps
 sensitivity: TBD
 shutter: rolling shutter / global shutter
 maximum exposure interval: 1968 x tROW
 pixel size: 1.4 µm x 1.4 µm
 well capacity: TBD
 dark current: TBD
 fixed pattern noise (FPN): TBD
 image area: 3673.6 µm x 2738.4 µm
 die dimensions: 5520 µm x 4700 µm
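A minimal sketch of capturing a still image from this camera module on Raspbian using the picamera Python library is shown below; the resolution and output path are illustrative assumptions, not values taken from the finished robot.

from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1280, 720)        # any supported still resolution up to 2592x1944

camera.start_preview()
sleep(2)                               # give the auto-exposure time to settle
camera.capture('/home/pi/guest.jpg')   # illustrative output path on the SD card
camera.stop_preview()
camera.close()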

3.8. Ultrasonic Sensor

HC-SR04 Ultrasonic Sensor - Working

As shown above, the HC-SR04 ultrasonic (US) sensor is a 4-pin module whose pins are
Vcc, Trigger, Echo and Ground. This sensor is very popular in many
applications where measuring distance or sensing objects is required. The module has two eye-like
projections on the front which form the ultrasonic transmitter and receiver. The sensor works with the
simple high-school formula
Distance = Speed × Time
The ultrasonic transmitter transmits an ultrasonic wave; this wave travels through the air, and when it
is obstructed by any material it gets reflected back toward the sensor. This reflected wave is observed
by the ultrasonic receiver module, as shown in the picture below.

Figure 3.8.1 Ultrasonic Sensor

Now, to calculate the distance using the above formula, we should know the speed and the time.
Since we are using an ultrasonic wave, we know the speed of the US wave at room conditions,
which is about 330 m/s. The circuitry built into the module measures the time taken for the US wave to
come back and holds the Echo pin high for that same amount of time; this way we can
also know the time taken. Because the wave travels to the object and back, the one-way distance is
(Speed × Time) / 2, which can be calculated with a microcontroller or microprocessor.
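A hypothetical Python sketch of this time-of-flight measurement on the Raspberry Pi is given below; the Trigger and Echo pin numbers are assumptions for illustration, and the round-trip time is halved to obtain the one-way distance.

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                    # assumed BCM pins for this example

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def measure_distance_cm():
    GPIO.output(TRIG, GPIO.HIGH)       # 10 microsecond trigger pulse starts a reading
    time.sleep(0.00001)
    GPIO.output(TRIG, GPIO.LOW)

    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO) == 0:       # wait for the echo pulse to begin
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:       # the pin stays high for the round-trip time
        pulse_end = time.time()

    round_trip = pulse_end - pulse_start
    return round_trip * 33000 / 2      # speed of sound ~330 m/s = 33000 cm/s, halved

print("Distance: %.1f cm" % measure_distance_cm())
GPIO.cleanup()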

HC-SR04 Sensor Features

 Operating voltage: +5V


 Theoretical Measuring Distance: 2cm to 450cm
 Practical Measuring Distance: 2cm to 80cm
 Accuracy: 3mm
 Measuring angle covered: <15°
 Operating Current: <15mA
 Operating Frequency: 40 kHz

Applications

 Used to avoid and detect obstacles with robots like biped robot, obstacle avoider robot, path
finding robot etc.
 Used to measure the distance within a wide range of 2cm to 400cm.
 Can be used to map the objects surrounding the sensor by rotating it.
 Depth of certain places like wells, pits etc can be measured since the waves can penetrate through
water.

3.9. IR Sensor

Figure 3.9.1 IR Sensor module

Figure 3.9.2 IR Sensor module pinout

Table 3.8.1 Pin Configuration of IR sensor module

Pin Name Description

VCC Power Supply Input

GND Power Supply Ground

OUT Active High Output

IR Sensor Module Features

 5VDC Operating voltage


 I/O pins are 5V and 3.3V compliant
 Range: Up to 20cm

 Adjustable Sensing range
 Built-in Ambient Light Sensor
 20mA supply current
 Mounting hole
Brief about IR Sensor Module

In brief, the IR sensor module consists mainly of an IR transmitter and receiver, an op-amp, a variable
resistor (trimmer pot) and an output LED.
IR LED Transmitter

The IR LED emits light in the infrared frequency range. IR light is invisible to us as its
wavelength (700 nm – 1 mm) is longer than that of the visible light range. IR LEDs have a light-emitting
angle of approximately 20-60 degrees and a range of approximately a few centimetres to several feet;
it depends upon the type of IR transmitter and the manufacturer. Some transmitters have a range in
kilometres. The IR LED is white or transparent in colour, so it can give out the maximum amount of light.
Photodiode Receiver

A photodiode acts as the IR receiver, as it conducts when light falls on it. A photodiode is a
semiconductor with a P-N junction operated in reverse bias, meaning it starts conducting
current in the reverse direction when light falls on it, and the amount of current flow is proportional to
the amount of light. This property makes it useful for IR detection. A photodiode looks like an LED,
with a black coating on its outer side; the black colour absorbs the highest amount of light.
LM358 Opamp

The LM358 is an operational amplifier (op-amp) used as a voltage comparator in the IR sensor.


The comparator compares the threshold voltage, set using the preset (pin 2), with the photodiode’s
series resistor voltage (pin 3).
Photodiode’s series resistor voltage drop > threshold voltage: op-amp output is high.
Photodiode’s series resistor voltage drop < threshold voltage: op-amp output is low.
When the op-amp's output is high, the LED at the op-amp output terminal turns ON (indicating the
detection of an object).
Variable Resistor
The variable resistor used here is a preset. It is used to calibrate the distance range at which
object should be detected.
Applications

 Obstacle Detection

 Industrial safety devices

 Wheel encoder
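Because the module's OUT pin is a plain active-high digital output, the Raspberry Pi can poll it directly. The sketch below is a simplified, hypothetical illustration of how two such sensors could feed the line-following decision; the pin numbers and the polarity of the readings (which depend on the module, the preset setting and the track colour) are assumptions for this example.

import time
import RPi.GPIO as GPIO

LEFT_IR, RIGHT_IR = 5, 6               # assumed BCM pins for the two line sensors

GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_IR, RIGHT_IR], GPIO.IN)

try:
    while True:
        left = GPIO.input(LEFT_IR)     # assumed HIGH when the sensor sees the surface
        right = GPIO.input(RIGHT_IR)

        if left and right:
            action = "forward"         # both sensors over the surface: stay on course
        elif left and not right:
            action = "turn right"      # line drifting right: steer back towards it
        elif right and not left:
            action = "turn left"       # line drifting left: steer back towards it
        else:
            action = "stop"            # line lost or a junction reached

        print(left, right, action)
        time.sleep(0.05)
finally:
    GPIO.cleanup()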

3.10. SPEAKERS

Figure 3.10.1 Speakers


USB Powered Speakers Features:
 Fully Compatible with the Raspberry Pi
 100 Hz - 18 KHz frequency response
 Power requirements: 5VDC 1A peak (at max volume output)
 4 Ohm impedance speakers, 3W each
 Speaker Dimensions: (Height x Width x Depth) 80mm x 70mm x 70mm
 Cable Length: 1m.
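Since the speakers simply reproduce the Pi's audio output, the greeting can be triggered from Python by invoking the ALSA command-line player that ships with Raspbian; the file name below is a hypothetical pre-recorded welcome message, not an actual file from this project.

import subprocess

# 'aplay' is part of alsa-utils on Raspbian; greeting.wav is an assumed,
# pre-recorded welcome message stored alongside the program.
subprocess.run(["aplay", "greeting.wav"], check=True)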

3.11. Push buttons

PUSH BUTTON/ CONTROL SWITCH

A push-button (also spelled push button) (press-button in the UK) or simply button is a simple
switch mechanism for controlling some aspect of a machine or a process. Buttons are typically made
out of hard material, usually plastic or metal. The surface is usually flat or shaped to accommodate the
human finger or hand, so as to be easily depressed or pushed. Buttons are most often biased switches,
though even many un-biased buttons (due to their physical nature) require a spring to return to their
un-pushed state. Different people use different terms for the "pushing" of the button, such as press,
depress, mash, and punch.
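In this project the push buttons are what a guest presses to choose a destination. A minimal Python sketch of reading one such button through the Pi's internal pull-up resistor is shown below; the pin number is an assumption for illustration.

import time
import RPi.GPIO as GPIO

BUTTON = 21                            # assumed BCM pin for one destination button

GPIO.setmode(GPIO.BCM)
# The internal pull-up keeps the input HIGH; pressing the button pulls it to ground.
GPIO.setup(BUTTON, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        if GPIO.input(BUTTON) == GPIO.LOW:     # button pressed
            print("Destination selected")
            time.sleep(0.3)                    # crude debounce
        time.sleep(0.01)
finally:
    GPIO.cleanup()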
Uses
The "push-button" has been utilized in calculators, push-button telephones, kitchen appliances,
and various other mechanical and electronic devices, home and commercial.
In industrial and commercial applications, push buttons can be linked together by a
mechanical linkage so that the act of pushing one button causes the other button to be released. In this
way, a stop button can "force" a start button to be released. This method of linkage is used in simple
manual operations in which the machine or process has no electrical circuits for control.
Push buttons are often color-coded to associate them with their function so that the operator
will not push the wrong button in error. Commonly used colors are red for stopping the machine or
process and green for starting the machine or process.
Red push buttons can also have large heads (called mushroom heads) for easy operation and to
facilitate the stopping of a machine. These push buttons are called emergency stop buttons and are
mandated by the electrical code in many jurisdictions for increased safety. This large mushroom shape
can also be found in buttons for use with operators who need to wear gloves for their work and could
not actuate a regular flush-mounted push button. As an aid for operators and users in industrial or
commercial applications, a pilot light is commonly added to draw the attention of the user and to
provide feedback if the button is pushed. Typically this light is included into the center of the push
button and a lens replaces the push button hard center disk. The source of the energy to illuminate the
light is not directly tied to the contacts on the back of the push button but to the action the push button
controls. In this way a start button when pushed will cause the process or machine operation to be
started and a secondary contact designed into the operation or process will close to turn on the pilot
light and signify the action of pushing the button caused the resultant process or action to start.
In popular culture, the phrase "the button" (sometimes capitalized) refers to a (usually fictional)
button that a military or government leader could press to launch nuclear weapons.
A Load control switch is a remotely controlled relay that is placed on home appliances which
consume large amounts of electricity, such as air conditioner units and electric water heaters.
Most load control switches consist of a communication module and the relay switch and can
be used as part of a demand response energy efficiency system such as a smart grid. Such a switch
operates similarly to a pager, receiving signals from the power company or electrical frequency shift
to turn off or reduce power to the appliance during times of peak electrical demand. Usually, the
device has a timer that will automatically reset the switch back on after a preset time. Some operation
intolerant appliances, such as dryers, use switches that can reduce or shut off power to their heating
coils yet still tumble until signaled to resume full power.

CHAPTER 4: LINUX

4.1. Introduction to Linux

Linux is a Unix-like and mostly POSIX-compliant computer operating system assembled


under the model of free and open source software development and distribution. The defining
component of Linux is the Linux kernel, an operating system kernel first released on 5 October 1991
by Linus Torvalds. The Free Software Foundation uses the name GNU/Linux, which has led to some
controversy.

The Linux Standard Base (LSB) is a joint project by several Linux distributions under the
organizational structure of the Linux Foundation to standardize the software system structure,
including the file system hierarchy used in the GNU/Linux operating system. The LSB is based on
the POSIX specification, the Single UNIX Specification, and several other open standards, but
extends them in certain areas.

 According to the LSB:

The goal of the LSB is to develop and promote a set of open standards that will increase
compatibility among Linux distributions and enable software applications to run on any compliant
system even in binary form. In addition, the LSB will help coordinate efforts to recruit software
vendors to port and write products for Linux Operating Systems. The LSB compliance may be
certified for a product by a certification procedure.

The LSB specifies for example: standard libraries, a number of commands and utilities that extend
the POSIX standard, the layout of the file system hierarchy, run levels, the printing system,
including spoolers such as CUPS and tools like Foomatic and several extensions to the X Window
System.

The command lsb_release -a is available in many systems to get the LSB version details, or can
be made available by installing lsb-release.

Linux was originally developed as a free operating system for Intel x86-based personal
computers. It has since been ported to more computer hardware platforms than any other operating
system. It is a leading operating system on servers and other big iron systems such as mainframe
computers and supercomputers. As of June 2013, more than 95% of the world's 500 fastest
supercomputers run some variant of Linux, including all the 44 fastest. Linux also runs on embedded

systems, which are devices whose operating system is typically built into the firmware and is highly
tailored to the system; this includes mobile phones, tablet computers, network routers, facility
automation controls, televisions and video game consoles. Android, which is a widely used operating
system for mobile devices, is built on top of the Linux kernel.

The development of Linux is one of the most prominent examples of free and open source
software collaboration. The underlying source code may be used, modified, and distributed—
commercially or non-commercially—by anyone under licenses such as the GNU General Public
License. Typically, Linux is packaged in a format known as a Linux distribution for desktop and
server use. Some popular mainstream Linux distributions include Debian, Ubuntu, Linux
Mint, Fedora,Arch Linux, and the commercial Red Hat Enterprise Linux and SUSE Linux Enterprise
Server. Linux distributions include the Linux kernel, supporting utilities and libraries and usually a
large amount of application software to fulfill the distribution's intended use.

A distribution oriented toward desktop use will typically include X11 or Wayland as
the windowing system, and an accompanying desktop environment such as GNOME or the KDE
Software Compilation. Some such distributions may include a less resource intensive desktop such
as LXDE or Xfce, for use on older or less powerful computers. A distribution intended to run as a
server may omit all graphical environments from the standard install, and instead include other
software to set up and operate a solution stack such as LAMP. Because Linux is freely redistributable,
anyone may create a distribution for any intended use.

Linux is one of the popular versions of the UNIX operating system. It is open source, as its source code
is freely available, and it is free to use. Linux was designed considering UNIX compatibility; its
functionality list is quite similar to that of UNIX.

4.2. Components of Linux System

Linux Operating System has primarily three components

 Kernel - The kernel is the core part of Linux. It is responsible for all major activities of this operating
system. It consists of various modules and it interacts directly with the underlying hardware. The kernel
provides the required abstraction to hide low-level hardware details from system or application programs.
 System Library - System libraries are special functions or programs using which application
programs or system utilities access the kernel's features. These libraries implement most of the
functionality of the operating system and do not require the kernel module's code access rights.
 System Utility - System utility programs are responsible for doing specialized, individual-level tasks.

Figure 4.2.1 Linux Operating System

4.3. Basic Features


 Portable - Portability means software can work on different types of hardware in the same way. The
Linux kernel and application programs support installation on any kind of hardware platform.
 Open Source - Linux source code is freely available and it is a community-based development project.
Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is
continuously evolving.
 Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like
memory, RAM and application programs at the same time.
 Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at
the same time.
 Hierarchical File System - Linux provides a standard file structure in which system files and user files
are arranged.
 Shell - Linux provides a special interpreter program which can be used to execute commands of the
operating system. It can be used to do various types of operations, call application programs, etc.
 Security - Linux provides user security using authentication features like password protection,
controlled access to specific files and encryption of data.

4.4. Architecture of Linux

Figure 4.4.1 Architecture of Linux

The Linux system architecture consists of the following layers:

• Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).
• Kernel - The core component of the operating system; it interacts directly with the hardware and
provides low-level services to the upper-layer components.
• Shell - An interface to the kernel that hides the complexity of the kernel's functions from users. It takes
commands from the user and executes the kernel's functions.
• Utilities - Utility programs that give the user most of the functionalities of an operating system.

4.5. Programming on Linux


Most Linux distributions support dozens of programming languages. The original
development tools used for building both Linux applications and operating system programs are found
within the GNU toolchain, which includes the GNU Compiler Collection (GCC) and the GNU build
system. Amongst others, GCC provides compilers for Ada, C, C++, Java, and Fortran. First released
in 2003, the LLVM project provides an alternative open-source compiler for many languages.
Proprietary compilers for Linux include the Intel C++ Compiler, Sun Studio, and the IBM
XL C/C++ Compiler. BASIC is supported in forms such as Gambas, FreeBASIC, and XBasic, and, for
terminal programming in the style of QuickBASIC or Turbo BASIC, in the form of QB64.

A common feature of Unix-like systems, Linux includes traditional specific-purpose
programming languages targeted at scripting, text processing and system configuration and
management in general. Linux distributions support shell scripts, awk, sed and make. Many programs
also have an embedded programming language to support configuring or programming themselves.
For example, regular expressions are supported in programs like grep and locate, while advanced text
editors like GNU Emacs have a complete Lisp interpreter built in.

Most distributions also include support for PHP, Perl, Ruby, Python and other dynamic
languages. While not as common, Linux also supports C# (via Mono), Vala, and Scheme. A number
of Java Virtual Machines and development kits run on Linux, including the original Sun Microsystems
JVM (HotSpot) and IBM's J2SE RE, as well as many open-source projects
like Kaffe and JikesRVM.

GNOME and KDE are popular desktop environments and provide a framework for developing
applications. These projects are based on the GTK+ and Qt widget toolkits, respectively, which can
also be used independently of the larger framework. Both support a wide variety of languages. There
are a number of integrated development environments available,
including Anjuta, Code::Blocks, CodeLite, Eclipse, Geany, ActiveState
Komodo, KDevelop, Lazarus, MonoDevelop, NetBeans, and Qt Creator, while the long-established
editors Vim, nano and Emacs remain popular.

4.6. Linux Basic Commands

The Linux operating system has a beautiful graphical interface which most of us will be using. It
is still good to learn the basic commands in order to work interactively with the Linux operating
system. Linux has a back-end access known as the shell. You can control and activate all the processes in
Linux from the shell, so it is very important to learn a few basic commands to work with the Linux
operating system.
First we will learn how to log in for shell access. There are 7 terminals in Linux: 6 terminals
are non-GUI and 1 terminal is for GUI access. You can log in to each terminal using Alt + Ctrl + F1,
F2, ... F7. Each terminal will request your username and password for login. If you want to use the
shell in the graphical interface (GUI), press Alt + F2 and type "konsole". As a user you will have
permission to access only your /home/user directory and the other directories in it.
*Note: user denotes the username.

4.7. Current application of Linux systems

Today Linux has joined the desktop market. Linux developers concentrated on networking and
services in the beginning, and office applications have been the last barrier to be taken down. We
don't like to admit that Microsoft is ruling this market, so plenty of alternatives have been started over
the last couple of years to make Linux an acceptable choice as a workstation, providing an easy user
interface and MS compatible office applications like word processors, spreadsheets, presentations and
the like.

On the server side, Linux is well-known as a stable and reliable platform, providing database
and trading services for companies like Amazon, the well-known online bookshop, US Post Office,
the German army and many others. Internet service providers in particular have grown
fond of Linux as a firewall, proxy and web server, and you will find a Linux box within reach of
every UNIX system administrator who appreciates a comfortable management station. Clusters of
Linux machines are used in the creation of movies such as "Titanic", "Shrek" and others. In post
offices, they are the nerve centers that route mail, and in large search engines, clusters are used to
perform internet searches. These are only a few of the thousands of heavy-duty jobs that Linux is
performing day-to-day across the world.

It is also worth noting that modern Linux not only runs on workstations and mid- and high-end
servers, but also on "gadgets" like PDAs, mobiles, a shipload of embedded applications and even on
experimental wristwatches. This makes Linux the only operating system in the world covering such a
wide range of hardware.

4.8. Basic Commands of Linux

Below is a listing of some common Unix and Linux commands and a brief explanation of what each
command does. This is a general listing, which means not all of the commands below will work with
your distribution, and some may also not work because of your privileges.

Table 4.8.1 Basic commands of Linux

Command Description

a2p Creates a Perl script from an awk script.

ac Prints statistics about user connection time.

access A system function which checks real user's permissions to access a file.

addgroup Adds a new group to the system.

adduser Adds a new user to the system.

agrep Version of the grep utility which also matches approximate patterns.

alias Creates another name for a command or command string.

apropos Searches the manual pages for a keyword or regular expression.

apt-cache Queries the APT software package cache.

apt-get Command line tool for working with APT software packages.

aptitude Text-based front-end for the APT package management system.

ar Creates, modifies, and extracts from archives.

arch Displays the architecture of the current host.

arp Manipulate the system ARP cache.

as An assembler.

aspell Interactive spell checker.

at Command scheduler.

awk Awk script processing program.

basename Deletes any specified prefix from a string.

bash Bourne Again shell command interpreter.

bc Calculator.

bdiff Compare large files.

bfs Editor for large files.

bg Continues a program running in the background.

biff Enable and disable incoming mail notifications.

break Break out of while, for, foreach, or until loop.

bs Battleship game.

bye Alias often used for the exit command.

4.9. Properties of Linux

Linux Pros

A lot of the advantages of Linux are a consequence of Linux's origins, deeply rooted in UNIX, except
for the first advantage, of course:

• Linux is free:

As in free beer, they say. If you want to spend absolutely nothing, you don't even have to pay
the price of a CD. Linux can be downloaded in its entirety from the Internet completely for free. No
registration fees, no costs per user, free updates, and freely available source code in case you want to
change the behavior of your system.

1. Most of all, Linux is free as in free speech:

The license commonly used is the GNU Public License (GPL). The license says that anybody
who may want to do so, has the right to change Linux and eventually to redistribute a changed version,
on the one condition that the code is still available after redistribution. In practice, you are free to grab
a kernel image, for instance to add support for teletransportation machines or time travel and sell your
new code, as long as your customers can still have a copy of that code.

2. Linux is portable to any hardware platform:

A vendor who wants to sell a new type of computer and who doesn't know what kind of OS
his new machine will run (say the CPU in your car or washing machine), can take a Linux kernel and
make it work on his hardware, because documentation related to this activity is freely available.

3. Linux was made to keep on running:

As with UNIX, a Linux system expects to run without rebooting all the time. That is why a lot
of tasks are being executed at night or scheduled automatically for other calm moments, resulting in
higher availability during busier periods and a more balanced use of the hardware. This property
allows for Linux to be applicable also in environments where people don't have the time or the
possibility to control their systems night and day.

4. Linux is secure and versatile:

The security model used in Linux is based on the UNIX idea of security, which is known to be
robust and of proven quality. But Linux is not only fit for use as a fort against enemy attacks from the
Internet: it will adapt equally to other situations, utilizing the same high standards for security. Your
development machine or control station will be as secure as your firewall.

5. Linux is scalable:

From a Palmtop with 2 MB of memory to a petabyte storage cluster with hundreds of nodes:
add or remove the appropriate packages and Linux fits all. You don't need a supercomputer anymore,
because you can use Linux to do big things using the building blocks provided with the system. If you
want to do little things, such as making an operating system for an embedded processor or just
recycling your old 486, Linux will do that as well.

6. The Linux OS and most Linux applications have very short debug-times:

Because Linux has been developed and tested by thousands of people, both errors and people
to fix them are usually found rather quickly. It sometimes happens that there are only a couple of
hours between discovery and fixing of a bug.

Linux Cons

7. There are far too many different distributions:

"Quot capites, tot rationes", as the Romans already said: the more people, the more opinions.
At first glance, the amount of Linux distributions can be frightening, or ridiculous, depending on your
point of view. But it also means that everyone will find what he or she needs. You don't need to be an
expert to find a suitable release.

When asked, generally every Linux user will say that the best distribution is the specific
version he is using. So which one should you choose? Don't worry too much about that: all releases
contain more or less the same set of basic packages. On top of the basics, special third party software
is added making, for example, TurboLinux more suitable for the small and medium enterprise,
RedHat for servers and SuSE for workstations. However, the differences are likely to be very
superficial. The best strategy is to test a couple of distributions; unfortunately not everybody has the
time for this. Luckily, there is plenty of advice on the subject of choosing your Linux. A quick search
on Google, using the keywords "choosing your distribution", brings up tens of links to good advice.
The Installation HOWTO also discusses choosing your distribution.

8. Linux is not very user-friendly and can be confusing for beginners:

It must be said that Linux, at least the core system, is less user-friendly than MS
Windows and certainly more difficult than MacOS, but... In light of its popularity, considerable effort
has been made to make Linux even easier to use, especially for new users. More information is being
released daily, such as this guide, to help fill the gap in documentation available to users at all levels.

9. Is an Open Source product trustworthy?

How can something that is free also be reliable? Linux users have the choice whether to use
Linux or not, which gives them an enormous advantage compared to users of proprietary software,
who don't have that kind of freedom. After long periods of testing, most Linux users come to the
conclusion that Linux is not only as good, but in many cases better and faster than the traditional
solutions. If Linux were not trustworthy, it would have been long gone, never knowing the popularity
it has now, with millions of users. Now users can influence their systems and share their remarks with
the community, so the system gets better and better every day. It is a project that is never finished,
that is true, but in an ever changing environment, Linux is also a project that continues to strive for
perfection.

CHAPTER 5: SOFTWARE DESCRIPTION

5.1. Raspbian OS

Raspbian is a Debian-based computer operating system for Raspberry Pi. There are several
versions of Raspbian including Raspbian Buster and Raspbian Stretch. Since 2015 it has been
officially provided by the Raspberry Pi Foundation as the primary operating system for the family of
Raspberry Pi single-board computers. Raspbian was created by Mike Thompson and Peter Green as an
independent project. The initial build was completed in June 2012. The operating system is still under
active development. Raspbian is highly optimized for the Raspberry Pi line's low-performance ARM
CPUs.
Raspbian uses PIXEL, Pi Improved X-Window Environment, Lightweight as its main desktop
environment as of the latest update. It is composed of a modified LXDE desktop environment and the
Openbox stacking window manager with a new theme and few other changes. The distribution is
shipped with a copy of computer algebra program Mathematica and a version of Minecraft called
Minecraft Pi as well as a lightweight version of Chromium as of the latest version.
There are different types of Raspbian OS; Raspbian Jessie is the one used in our
project.
• Raspbian Jessie

Jessie is the official operating system for Raspberry Pi. As compared to Windows 10 IoT,
Raspbian Jessie is a full desktop operating system where you can perform lots of tasks just like any
PC. A lot of tools are available out of the box, like LibreOffice Suite, Java development environment
etc., and you can browse the internet, configure your mails, play games etc. in this operating system.
Along with these common uses, you can use this OS in your home automation projects as well.
In a bid to make the Raspberry Pi not just a cheap computer for education, but a cheap computer
in its own right, the Foundation made some small changes to make it feel more like a 'real' PC. For
example, the LibreOffice suite and Claws Mail were installed as standard so users could use word
processors, create spreadsheets and manage their email from within Raspbian. Also, instead of
booting to a Linux command line, the Raspberry Pi booted to the Raspbian desktop GUI by default for
the first time, as a result of an update to the distro.
Along with the regular security patches and under-the-hood improvements, Jessie brought
some more noticeable features in too.

In September 2016, Raspbian Jessie with PIXEL was made available to those who wanted a
GUI desktop. The forcibly acronymed PIXEL (Pi Improved Xwindow Environment, Lightweight)
desktop was the first time the OS received a GUI desktop, where before it was just a Linux
command-line screen - it even received a boot splash page like a proper OS too.
Performance indicators were also added. For example, when the Pi was being overworked in
older versions, red and yellow pixels would appear on the screen. This was redesigned to show a
lightning bolt to indicate under-voltage or a thermometer for over-temperature.

5.2. Setting up the Raspberry Pi and camera interface

To get started, we will need a Raspberry Pi board already loaded with Raspbian OS and a Raspberry
Pi camera board module.
Step 1. Connect the camera module to the Raspberry Pi.
Step 2. Enable the camera module.
a. Open the terminal and run "sudo raspi-config".
pi@raspberry:~$ sudo raspi-config
This will open a window as shown below.

Figure 5.2.1 Interfacing options

b. Go to the “Interfacing Options” and select “Camera”.

Figure 5.2.2 Enabling the Pi camera

c. Enable the camera module.

Figure 5.2.3 Interfacing camera with Raspberry pi

Now the camera interface is enabled. To test it, open the terminal and execute
"raspistill -o image.jpg". This will activate the camera module, display a preview of the image, and
then capture the image and save it in the current working directory as "image.jpg".

pi@raspberry:~$ mkdir test

pi@raspberry:~$ cd test/

pi@raspberry:~/test$ raspistill -o image.jpg

pi@raspberry:~/test$ ls

image.jpg
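
The same capture can also be done programmatically from Python instead of the raspistill utility.
Below is a minimal sketch; it assumes the picamera library that ships with Raspbian is available, and
the output path is only a placeholder.

from picamera import PiCamera
from time import sleep

camera = PiCamera()                           # open the camera module
camera.start_preview()                        # show a live preview on the attached display
sleep(2)                                      # give the sensor time to adjust exposure
camera.capture('/home/pi/test/image.jpg')     # placeholder output path
camera.stop_preview()
camera.close()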

5.3. Installing the dependencies
Now, we will install OpenCV for image processing and tflite_runtime to load our
face_mask_detection model. The remaining dependencies can be installed directly via pip or pip3.
Step 1. Installing tflite_runtime.

a. Go to https://www.tensorflow.org/lite/guide/python.

Figure 5.3.1 Installing the Dependencies

b. Select the appropriate package for the Python version installed on your Raspberry Pi (in case you
don't know it, open the terminal and execute "python3 --version").
c. Copy the URL and install the package using pip3. For example, if you have a Raspberry Pi that's
running Raspbian Buster (which has Python 3.7), install the Python wheel by executing:
pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl
This will install the tflite_runtime on your Raspberry Pi. You can always check the installed packages
by running “pip3 list” in the terminal.
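
As a quick check that the runtime works, a minimal sketch of loading a TFLite model and running one
inference is shown below. The file name face_mask_model.tflite is only a placeholder for whatever
model file the project actually uses.

import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path='face_mask_model.tflite')  # placeholder file name
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input of the expected shape and run one inference
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))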

5.4. Python Compiler

The Python compiler package is a tool for analyzing Python source code and generating
Python byte code. The compiler contains libraries to generate an abstract syntax tree from Python
source code and to generate Python byte code from the tree. The compiler package is a Python source
to byte code translator written in Python. It uses the built-in parser and standard parser module to
generate a concrete syntax tree. This tree is used to generate an abstract syntax tree (AST) and then
Python byte code. The full functionality of the package duplicates the built-in compiler provided with
the Python interpreter. It is intended to match its behavior almost exactly. Why implement another
compiler that does the same thing? The package is useful for a variety of purposes. It can be modified
more easily than the built-in compiler. The AST it generates is useful for analyzing Python source
code.
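
On the Python 3 releases that ship with current Raspbian, the standard ast module together with the
built-in compile() function covers the same ground as the compiler package described above; a
minimal sketch is given here.

import ast

source = "x = 1 + 2\nprint(x)"

tree = ast.parse(source)                              # source code -> abstract syntax tree
print(ast.dump(tree))                                 # inspect the tree structure

code = compile(tree, filename="<demo>", mode="exec")  # AST -> Python byte code
exec(code)                                            # runs the compiled code and prints 3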

5.5. OpenCV

Introduction to SIFT (Scale-Invariant Feature Transform)


Goal
In this chapter,
• We will learn about the concepts of the SIFT algorithm.
• We will learn to find SIFT key points and descriptors.

Theory
In the last couple of chapters, we saw some corner detectors like Harris etc. They are rotation-
invariant, which means that even if the image is rotated, we can find the same corners. That is obvious,
because corners remain corners in the rotated image also. But what about scaling? A corner may not be
a corner if the image is scaled. For example, check the simple image below: a corner in a small image
within a small window is flat when it is zoomed in the same window. So the Harris corner detector is
not scale invariant.

Figure 5.5.1 working of SIFT Algorithm

So, in 2004, D. Lowe of the University of British Columbia came up with a new algorithm, Scale
Invariant Feature Transform (SIFT), in his paper Distinctive Image Features from Scale-Invariant
Keypoints, which extracts key points and computes their descriptors. (This paper is easy to understand
and considered to be the best material available on SIFT, so this explanation is just a short summary of
that paper.)
There are mainly four steps involved in SIFT algorithm. We will see them one-by-one.

A. Scale-space Extrema Detection

From the image above, it is obvious that we can't use the same window to detect key points
at different scales. It is OK for a small corner, but to detect larger corners we need larger windows.
For this, scale-space filtering is used. In it, the Laplacian of Gaussian (LoG) is found for the image with
various σ values. LoG acts as a blob detector which detects blobs of various sizes due to the change in σ.
In short, σ acts as a scaling parameter. For example, in the above image, a Gaussian kernel with low σ
gives a high value for the small corner, while a Gaussian kernel with high σ fits well for the larger
corner. So, we can find the local maxima across scale and space, which gives us a list of (x, y, σ) values,
meaning there is a potential key point at (x, y) at scale σ.

But this LoG is a little costly, so the SIFT algorithm uses the Difference of Gaussians (DoG), which is
an approximation of LoG. The Difference of Gaussians is obtained as the difference of the Gaussian
blurring of an image with two different σ values, say σ and kσ. This process is done for different
octaves of the image in the Gaussian pyramid. It is represented in the image below:

Figure 5.5.2 Gaussian and DOG
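
In symbols (following Lowe's paper), the scale space L and the Difference of Gaussians D are:

L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)
D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma)
               = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y)

where G is a Gaussian kernel, I is the input image and * denotes convolution.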

Once these DoG images are found, they are searched for local extrema over scale and space. For
example, one pixel in an image is compared with its 8 neighbours as well as 9 pixels in the next scale
and 9 pixels in the previous scale. If it is a local extremum, it is a potential key point. It basically means
that the key point is best represented at that scale. It is shown in the image below:

Figure 5.5.3 Image with key points

Regarding the different parameters, the paper gives some empirical data which can be summarized
as: number of octaves = 4, number of scale levels = 5, initial σ = 1.6 and k = √2 as optimal values.

B. Key point Localization

Once potential key point locations are found, they have to be refined to get more accurate
results. The authors used a Taylor series expansion of the scale space to get a more accurate location of
each extremum, and if the intensity at this extremum is less than a threshold value (0.03 as per the
paper), it is rejected. This threshold is called contrastThreshold in OpenCV. DoG has a higher response
for edges, so edges also need to be removed. For this, a concept similar to the Harris corner detector is
used: a 2x2 Hessian matrix (H) is used to compute the principal curvature. We know from the Harris
corner detector that for edges, one eigenvalue is larger than the other, so a simple function of the ratio
of the eigenvalues is used. If this ratio is greater than a threshold, called edgeThreshold in OpenCV,
that key point is discarded. The threshold is given as 10 in the paper.
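
Following the paper, the edge check uses the trace and determinant of the 2x2 Hessian H:

\mathrm{Tr}(H) = D_{xx} + D_{yy}, \qquad \mathrm{Det}(H) = D_{xx} D_{yy} - (D_{xy})^{2}

\frac{\mathrm{Tr}(H)^{2}}{\mathrm{Det}(H)} < \frac{(r+1)^{2}}{r}, \quad r = 10

A key point is kept only if this inequality holds (and it is rejected outright if Det(H) is negative).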
This step eliminates low-contrast key points and edge key points, and what remains are strong
interest points.

C. Orientation Assignment

Now an orientation is assigned to each key point to achieve invariance to image rotation. A
neighbourhood is taken around the key point location depending on the scale, and the gradient
magnitude and direction are calculated in that region. An orientation histogram with 36 bins covering
360 degrees is created (it is weighted by the gradient magnitude and by a Gaussian-weighted circular
window with σ equal to 1.5 times the scale of the key point). The highest peak in the histogram is
taken, and any peak above 80% of it is also considered when calculating the orientation. This creates
key points with the same location and scale, but different directions, which contributes to the stability
of matching.
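
For reference, the gradient magnitude m and orientation θ used to build this histogram are computed
from the smoothed image L as:

m(x, y) = \sqrt{\big(L(x+1, y) - L(x-1, y)\big)^{2} + \big(L(x, y+1) - L(x, y-1)\big)^{2}}

\theta(x, y) = \tan^{-1}\!\left(\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}\right)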

D. Key point Descriptor

Now the key point descriptor is created. A 16x16 neighbourhood around the key point is taken and
divided into 16 sub-blocks of 4x4 size. For each sub-block, an 8-bin orientation histogram is created,
so a total of 128 bin values are available. These are represented as a vector to form the key point
descriptor. In addition to this, several measures are taken to achieve robustness against illumination
changes, rotation etc.

E. Key point Matching

Key points between two images are matched by identifying their nearest neighbours. In some cases,
however, the second-closest match may be very near to the first, which may happen due to noise or
some other reason. In that case, the ratio of the closest distance to the second-closest distance is taken;
if it is greater than 0.8, the match is rejected. This eliminates around 90% of false matches while
discarding only 5% of correct matches, as per the paper.
So this is a summary of the SIFT algorithm. For more details and deeper understanding, reading the
original paper is highly recommended. Remember one thing: this algorithm is patented, so it is
included in the opencv_contrib repository.

5.6. SIFT in OpenCV

So now let's see the SIFT functionalities available in OpenCV. Let's start with key point detection
and drawing the key points. First we have to construct a SIFT object. We can pass different parameters
to it; they are optional and are well explained in the docs.

import numpy as np
import cv2 as cv

# Load the image and convert it to grayscale for feature detection
img = cv.imread('home.jpg')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Construct the SIFT detector, detect key points and draw them on the image
sift = cv.xfeatures2d.SIFT_create()
kp = sift.detect(gray, None)
img = cv.drawKeypoints(gray, kp, img)
cv.imwrite('sift_keypoints.jpg', img)
The sift.detect() function finds the key points in the image. You can pass a mask if you want to search
only a part of the image. Each key point is a special structure which has many attributes, like its (x, y)
coordinates, the size of the meaningful neighbourhood, the angle which specifies its orientation, the
response that specifies the strength of the key point, etc.
OpenCV also provides the cv.drawKeypoints() function, which draws small circles at the
locations of the key points. If you pass it the flag
cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS, it will draw a circle with the size of the
key point and will even show its orientation. See the example below.

img = cv.drawKeypoints(gray, kp, img, flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv.imwrite('sift_keypoints.jpg', img)
See the two results below:

Figure 5.6.5 SIFT based key point detection

Now, to calculate the descriptor, OpenCV provides two methods.

1) Since you already found the key points, you can call sift.compute(), which computes the descriptors
from the key points we have found. E.g.: kp, des = sift.compute(gray, kp)
2) If you didn't find key points, directly find key points and descriptors in a single step with the
function sift.detectAndCompute().
We will see the second method:

sift = cv.xfeatures2d.SIFT_create()
kp, des = sift.detectAndCompute(gray, None)

Here kp is a list of key points and des is a numpy array of shape Number_of_Keypoints × 128.
So we have key points, descriptors etc. Now we want to see how to match key points in different
images. That we will learn in the coming chapters.
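
As a brief preview, a minimal sketch of the ratio-test matching described in part E above is given
here; it assumes OpenCV's brute-force matcher and two hypothetical image files.

import cv2 as cv

img1 = cv.imread('query.jpg', cv.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv.imread('train.jpg', cv.IMREAD_GRAYSCALE)

sift = cv.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

bf = cv.BFMatcher()                                  # brute-force matcher (L2 norm by default)
matches = bf.knnMatch(des1, des2, k=2)               # two nearest neighbours per descriptor

# Lowe's ratio test: keep a match only if it is clearly better than the second best
good = [m for m, n in matches if m.distance < 0.8 * n.distance]
print(len(good), "matches passed the ratio test")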

CHAPTER 6: PROJECT DESCRIPTION

In this chapter, the schematic diagram and the interfacing of the Raspberry Pi 3 processor with each
module are considered.

Figure 6.1 Schematic diagram of Guest greeting and guiding robot

The above schematic diagram of the Guest Greeting and Guiding Robot explains the interfacing
of each component with the Raspberry Pi 3.

Table 6.1 Experimental Data

Sl. No | Volunteer photo | Recorded face samples (first time) | Confidence rate (%) | Time taken to recognize face
1      | (image)         | (image)                            | 70%                 | 2 sec
2      | (image)         | (image)                            | 85%                 | 1.5 sec
3      | (image)         | (image)                            | 90%                 | 1 sec

Figure 6.2 Front view of Robot

Figure 6.3 Side view of Robot

Figure 6.4 Top view of Robot

This chapter is oriented towards providing a design for the guest greeting and guiding robot using a
Raspberry Pi (Figure 3), and it offers simple, informative interaction between the human and the robot.
Mindfulness of time is very important, as people have very low tolerance for lengthy interruptions in
conversation. The device is suitable for executing the task with the help of the Raspberry Pi processor.
To execute this task, embedded Linux is used to program the Raspberry Pi processor.

CHAPTER 7: SUMMARY OF PROJECT AND FUTURE SCOPE

The project "Guest Greeting and Guiding Robot" was realized as a line-following robot
which can greet and guide visitors to their destination. Whenever a person enters the college, the robot
captures that person's image through the Pi camera and verifies the image against the database using
image processing. When the person's image matches the stored images, the robot greets the person
through voice; when the person's image does not match the stored images, it asks the person to select a
destination through voice. Using push buttons, the person enters the destination, and after the
destination is selected the robot guides the person by following the line. After completing the task, the
robot returns to the starting point automatically.
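
The flow described above can be summarized in the short sketch below. All the helper functions here
(capture_image, recognize_face, speak, read_destination_buttons, follow_line_to, return_to_start) are
hypothetical placeholders for the project's camera, face-recognition, audio, push-button and motor
routines, and the 70% threshold is only an assumption taken from Table 6.1.

def main_loop():
    while True:
        frame = capture_image()                        # grab a frame from the Pi camera
        name, confidence = recognize_face(frame)       # compare against the stored database
        if name is not None and confidence >= 70:      # assumed threshold (see Table 6.1)
            speak("Welcome back, " + name)             # known visitor: greet by voice
        else:
            speak("Welcome! Please select your destination.")
            destination = read_destination_buttons()   # destination chosen via push buttons
            follow_line_to(destination)                # guide the visitor along the line
            return_to_start()                          # return to the reception point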

Conclusion:
The guest greeting and guiding robot was successfully developed and designed by integrating the
features of all the hardware parts. The presence of each and every module has been justified, and each
is placed precisely, thereby presenting the finest operating unit. With the usage of the latest ICs
provided by developing technology, this design has been successfully designed, tested and executed.
The robot developed will stop automatically if it detects any obstacle. Further work can extend the
robot to navigate around obstacles.

Future Scope:
• A book reading system can be added.
• A fingerprint module can be added for exam authentication.

REFERENCES
[1] A. Holroyd. Generating engagement behaviors in human-robot interaction. Master's thesis,
Worcester Polytechnic Institute, Worcester, Mass., USA, December 2010.
[2] A. Holroyd, C. Rich, C. Sidner, and B. Ponsler. Generating connection events for human-robot
collaboration. Submitted to ACM Conf. on Human-Robot Interaction, 2011.
[3] L. Boiney. Team decision making in time-sensitive environments. In Proceedings, Command
and Control Research and Technology, McLean, VA, June 2005.
[4] B. Mutlu, T. Shiwa, T. Kanda, H. Ishiguro, and N. Hagita. Footing in human-robot conversations:
How robots might shape participant roles using gaze cues. In Proc. ACM Conf. on Human-Robot
Interaction, San Diego, Calif., USA, 2009.
[5] C. Rich, B. Ponsler, A. Holroyd, and C. Sidner. Recognizing engagement in human-robot
interaction. In Proc. ACM Conf. on Human-Robot Interaction, Osaka, Japan, March 2010.
[6] D. T. Field and J. A. Groeger. Temporal interval production and short-term memory. University of
Surrey, Guildford, England; Psychonomic Society, Inc., 2004.
[7] D. Bohus and E. Horvitz. Learning to predict engagement with a spoken dialog in open-world
settings. In Proceedings of the SIGDIAL 2009 Conference, London, UK, September 2009.
Association for Computational Linguistics, pp. 244-252.
[8] D. Bohus and E. Horvitz. Models for multiparty engagement in open-world dialog. In
Proceedings of the SIGDIAL 2009 Conference, London, UK, September 2009. Association for
Computational Linguistics, pp. 225-234.
[9] J. W. Crandall and M. A. Goodrich. Principles of adjustable interactions. In Proceedings of
the 2002 AAAI Fall Symposium Human-Robot Interaction Workshop, North Falmouth, MA, USA,
November 15-17, 2002.
[10] K. Hayashi, D. Sakamoto, T. Kanda, M. Shiomi, S. Koizumi, H. Ishiguro, T. Ogasawara, and N.
Hagita. Humanoid robots as a passive social medium - A field experiment at a train station. In
Proc. 2nd ACM/IEEE Int. Conf. Hum.-Robot Interact., 2007, pp. 137-144.
