
VIRTUAL TELEPRESENCE ROBOT

DE-41 (MTS)
Abbas, Farhan, Ans

COLLEGE OF
ELECTRICAL AND MECHANICAL ENGINEERING
NATIONAL UNIVERSITY OF SCIENCES AND
TECHNOLOGY RAWALPINDI
2023

DE-41 MTS
PROJECT REPORT

VIRTUAL TELEPRESENCE ROBOT

Submitted to the Department of Mechatronics Engineering
in partial fulfillment of the requirements
for the degree of
Bachelor of Engineering
in
Mechatronics
2023
Sponsoring DS:
AP. Kanwal Naveed
Col. Dr. Kunwar Faraz Ahmad Khan Kambojan

Submitted By:
PC Muhammad Abbas Naseer
NC Farhan Khalid
NC Muhammad Ans

DECLARATION

By signing this form, we certify that no portion of the work presented in this project has been used to support an application for a degree or other qualification at any other university. Should any act of plagiarism be discovered, we accept full responsibility for any disciplinary action taken against us, according to the severity of the confirmed offence.

ACKNOWLEDGEMENTS

Alhamdulillah, our project has been successfully completed, and we are grateful to Allah for giving us the courage and motivation to keep moving forward and for assisting us along the way. We thank our supervisors, Assistant Professor Kanwal Naveed and Dr. Kunwar Faraz, who helped us tremendously on every issue; their help and guidance became a source of strong determination for us. We also want to express our gratitude to our parents and friends, because without their unwavering encouragement and support we might not have been able to finish our project. We will always be grateful for the extraordinary part they played in our journey. Thanks to their unwavering support we accomplished more than we could have imagined, and when we had lost all hope in ourselves, they gave us fresh hope.

ABSTRACT

A Virtual Telepresence Robot allows users to remotely control a robot through a virtual environment. Virtual reality technology relies on computer technology to provide an immersive experience to the user, which improves spatial awareness and increases productivity. Users can experience a 3D environment rather than relying on a 2D view of the environment on a screen. The primary component of a VR system is a headset that offers a stereoscopic view of the virtual environment. A feedback system, such as game controllers, is also incorporated into VR systems, allowing users to interact with their surroundings. The robot is controlled using a VR headset and its controllers, and the user's movements are mapped to the robot's movements in the virtual environment. It can be used for a variety of applications, such as education, healthcare, and business.

TABLE OF CONTENTS

DECLARATION..........................................................................................................................................iii
ACKNOWLEDGMENTS.............................................................................................................................iv
ABSTRACT…...............................................................................................................................................v
TABLE OF CONTENTS...............................................................................................................................vi
LIST OF FIGURES.......................................................................................................................................ix
LIST OF TABLES.........................................................................................................................................xi
LIST OF SYMBOLS....................................................................................................................................xii
Chapter 1 – INTRODUCTION...................................................................................................................1
1.1 Introduction....................................................................................................................................1
1.2 Problem Statement.........................................................................................................................1
1.3 Solution..........................................................................................................................................1
1.4 Scope..............................................................................................................................................2
1.5 Deliverables....................................................................................................................................2
1.6 Structure.........................................................................................................................................2
Chapter 2 – BACKGROUND AND LITERATURE REVIEW...............................................................3
2.1 Background.................................................................................................................................3
2.2 Existing Models of Telepresence robot.........................................................................................3
2.2.1 Virtual reality teleoperation robot......................................................................................3
2.2.1.1 Control Software.....................................................................................................6
2.2.1.2 Vehicle specifications.............................................................................................7
2.2.2 Omnidirectional Telepresence Robot................................................................................9
2.2.3 Real-Time Telepresence robot with 360 degree view.....................................................14
Chapter 3 – DESIGNING AND MANUFACTURING...........................................................................17
3.1 Overview....................................................................................................................................17
3.2 Development in Game Engine....................................................................................................18
3.3 Industry Standards.......................................................................................................................18
3.3.1 Unreal Engine...................................................................................................................19
3.3.2 Unity................................................................................................................................20
3.3.3 Unity Features and Usage Details…...............................................................................20
3.3.3.1 2D and 3D Graphics............................................................................................20
3.3.3.2 Physics Engine…................................................................................................21

3.3.3.3 Character Control................................................................................................21
3.3.3.4 Collision.............................................................................................................22
3.3.3.5 Joint.....................................................................................................................22
3.3.3.6 Supported Languages for Scripting.....................................................................23
3.3.3.7 Audio and Video.................................................................................................24
3.3.3.8 Animations…......................................................................................................25
3.3.3.9 Animator Controller............................................................................................27
3.3.3.10 Animation State Machines................................................................................27
3.3.3.11 User Interface....................................................................................................28
3.3.3.12 Auto Layout......................................................................................................29
3.3.3.13 Navigation System............................................................................................30
3.3.3.13.1 NavMesh Agent...............................................................................31
3.3.3.13.2 Off-Mesh Link.................................................................................32
3.3.4 Unity Platforms Supported…...........................................................................................33
3.4 CAD SOFTWARE....................................................................................................33
3.4.1 SolidWorks........................................................................................................................33
3.4.2 Fusion 360.........................................................................................................................34
3.5 HARDWARE COMPONENTS...............................................................................................34
3.5.1 Oculus Quest 2.................................................................................................................34
3.5.2 Raspberry Pi.....................................................................................................................35
3.5.3 Gimbal..............................................................................................................................36
3.5.3.1 Tilt Motion..........................................................................................................37
3.5.3.2 Pan Motion…......................................................................................................37
3.5.3.3 Roll Motion…....................................................................................................37
3.5.3.4 Gimbal Design…...............................................................................................38
3.5.4 Robot Base…....................................................................................................38
Chapter 4 - RESULTS..................................................................................................................................41

4.1 Results of VR Headset movement...................................................................................41

4.2 Results of controller movement.......................................................................................42

4.3 Head Mounted Display Orientation system.................................................................... 44

4.4 Video of Raspberry Pi camera........................................................................................ 44
Chapter 5 - CONCLUSION AND FUTURE WORKS..............................................................................46
5.1 Conclusion....................................................................................................................................46

5.2 Marketing Strategy..........................................................................................................46


5.3 Challenges to Virtual Reality.......................................................................................................46

5.4 Benefits of Virtual Telepresence Robot..........................................................................47

5.4.1 Medical Field….................................................................................................47

5.4.2 Surveillance.......................................................................................................47

5.4.3 Entertainment…............................................................................................... 47

5.4.4 Education….......................................................................................................48
5.5 Future Work.................................................................................................................................48
REFERENCES..............................................................................................................................................49

LIST OF FIGURES

Figure 1. System’s overview..........................................................................................................................................4


Figure 2. Overview of the System and Connections......................................................................................5
Figure 3. Oculus Quest 2 Headset..................................................................................................................................5
Figure 4. Test Screenshot...............................................................................................................................................6
Figure 5. Exterior of Robot............................................................................................................................................7
Figure 6. Wiring Schematic of System..........................................................................................................................8
Figure 7. Close view of Robotic Hardware....................................................................................................................9
Figure 8. Structure of the method................................................................................................................................10
Figure 9. HoloLens device...........................................................................................................................................11
Figure 10. Signals of human hand...............................................................................................................................11
Figure 11. Flow of process..........................................................................................................................................12
Figure 12. Output of robot...........................................................................................................................................12
Figure 13. Output of sEMG.........................................................................................................................................13
Figure 14. Output of OMR..........................................................................................................................................13
Figure 15. Flowchart of system...................................................................................................................................14
Figure 16. 2D video conversion into 3D.....................................................................................................................15
Figure 17. Result of false joining.................................................................................................................15
Figure 18. Working Diagram........................................................................................................................17
Figure 19. System Requirements of Modeling Software..............................................................18
Figure 20. Logo of Development Engines....................................................................................................19
Figure 21. Specifications for Unreal Engine................................................................................................19
Figure 22. Specifications for Unity Engine..................................................................................................20
Figure 23. Character Controller....................................................................................................................21
Figure 24. Movement Axes System..............................................................................................................23
Figure 25. Signal Transformation.................................................................................................................24
Figure 26. Supported Formats of Audio Files..............................................................................................25
Figure 27. Animator Window in Unity.........................................................................................................26
Figure 28. Animator View............................................................................................................................27
Figure 29. State Machine Flow.....................................................................................................................28
Figure 30. Toolbar of UI System..................................................................................................................28
Figure 31. Layout Elements..........................................................................................................................29
Figure 32. Path Finding................................................................................................................................30
Figure 33. Global and Local.........................................................................................................................31
Figure 34. NavMesh Agent...........................................................................................................................32
Figure 35. Off-Mesh Link Connection Status..............................................................................................32
Figure 36. Unity Supported Platforms..........................................................................................................33
Figure 37. VR headset- Oculus Quest 2.......................................................................................................33
Figure 38. Raspberry Pi................................................................................................................................34
Figure 39. SolidWorks Logo........................................................................................................................35
Figure 40. Fusion 360 Logo..........................................................................................................................36
Figure 41. Axes of Rotation..........................................................................................................................37
Figure 42. Gimbal Design.............................................................................................................................38
Figure 43. Robotic Base...............................................................................................................................39
Figure 44. Top View of Robot......................................................................................................................39
Figure 45. Front View of Robot....................................................................................................................40
Figure 46. Data of VR headset movement....................................................................................................42
Figure 47. Data of VR controller movement................................................................................................43

Figure 48. HMD Orientation axes................................................................................................................44
Figure 49. Video of Raspberry Pi camera.....................................................................................................45

LIST OF TABLES

Table 1. Values of IMU sensor...................................................................................................................................16


Table 2. Values gathered by Binocular Stereo.............................................................................................16
Table 3. Controller movements....................................................................................................................43

LIST OF SYMBOLS

VTR Virtual Telepresence Robot


VR Virtual Reality
IP Internet Protocol
ESCs Electronic Speed Controllers
sEMG Surface Electromyography
OMR Omnidirectional Mobile Robot
MYO armband Myoelectric signals acquisition device
PCA Principal Component Analysis
HMD Head Mounted Display
FOV Field of View

Chapter 1 – INTRODUCTION

1.1 Introduction
Artificial intelligence (AI) chatbots use large language models (LLMs) to produce responses that resemble those of a human. These LLMs are complex algorithms built to handle enormous volumes of data, using machine learning methods to understand the information. Limited only by their training parameters and the size of their datasets, they can generate responses that closely resemble human speech and converse naturally on a broad variety of topics.

The field of home automation is growing because it uses cloud computing to create networks between devices so that they can be controlled remotely without requiring human interaction. With centralized control interfaces that can be accessed from computers, tablets, or smartphones, this technology gives users the ability to regulate a variety of home features, including lighting, temperature, and security systems.

1.2 Problem Statement


During the peak of the COVID-19 pandemic in 2020, hospitals worldwide faced a
critical shortage of personnel to attend to the increasing number of patients, exacerbating the
strain on healthcare systems. With the number of COVID-19 cases soaring into the millions
globally, hospitals struggled to cope with the overwhelming demand for medical care. For
instance, in some regions, hospitals were operating at over 150% of their normal capacity, with
medical staff working extended shifts to manage the influx of patients. This underscored the
urgent need for a solution that could effectively monitor patients while ensuring their isolation to
prevent further transmission of the virus. Concurrently, patients experienced heightened feelings
of isolation due to restricted visitation policies, which further harmed their psychological well-being. Additionally, the need for assistance with basic tasks, such as controlling home appliances, became more pronounced as patients struggled with limited mobility and resources.

1.3 Solution
Our project is the development of an online assistant to meet the needs of patients and
medical professionals. The project aims to integrate the revolutionary potential of Large
Language Models (LLMs) and Internet of Things (IoT) technologies to offer support and
companionship to individuals. The envisioned system will mitigate the adverse psychological
effects experienced by isolated patients, while the IoT services embedded within the application
will grant patients a sense of independence and liberty. Moreover, the project encompasses a
monitoring system allowing caregivers to oversee and address the needs of multiple patients
simultaneously, thereby alleviating the burden on the healthcare system, particularly evident
during the COVID-19 pandemic.
1.4 Deliverables
Following are the deliverables for the AI chatbot for home appliance control:

1.5 Structure
The structure of the final year project report is:
 Chapter-2 mainly deals with the background and literature review, including different existing models and available solutions.
 Chapter-3 covers the methodology we adopted to design and manufacture our project and explains how it differs from existing products.
 Chapter-4 presents the results of the working project at different stages.
 Chapter-5 concludes the report and explores future possibilities and directions in which the project can be taken.

Chapter 2 – BACKGROUND AND LITERATURE REVIEW

2.1 Background
The evolution of chatbots in home appliance control dates back to the late 20th century,
with early experiments in home automation systems. In the 1990s, pioneering projects explored
remote appliance control via text-based interfaces, laying the groundwork for modern chatbots.
Simple command-line interfaces emerged as precursors to contemporary chatbots, requiring
users to understand basic programming. With advancements in technology, particularly
smartphones and smart home devices, chatbots became more sophisticated and accessible to
users. Integration of artificial intelligence (AI) and natural language processing (NLP) further
enhanced chatbots' capabilities, enabling them to understand complex commands. Today,
chatbots are integral to smart home ecosystems, offering seamless integration and enabling users
to control appliances via voice commands or text messages, making them indispensable tools for
modern home automation.

2.2 Existing Models of Telepresence robot

Different models of Virtual Telepresence Robot have been developed over the years.
Some of them are given below:

2.2.1 Virtual Reality Teleoperation Robot

Alexis et al. (2020) [2] linked a small automobile with a virtual reality headset. The robot moved like a radio-controlled vehicle. The user needs to activate the application while wearing the headset to view the footage from the vehicle's cameras. The robot was guided to its destination by a remote control and moved in response to input from the camera. Two primary connections run across the entire system: the VR camera link and the vehicle control link. The system's goal was to give people the ability to experience the environment as if they were present in the car. With the use of an external controller and a camera feed, the user can steer the vehicle and explore its surroundings. The following describes how the project was implemented; the figure below shows the mobility system, virtual reality system, and control system.

Figure 1 System’s overview

The main system controller was a Raspberry Pi 4, located inside the car, which handled most of the connection logic. The Raspberry Pi 4 compiles the camera feed and streams it online so that the VR system can read it properly. Additionally, it decodes Bluetooth signals from the controller that manages the vehicle's motors. Most of the debugging was done on the Pi because it serves as the primary connection for all system components. The system is depicted in more depth in Figure 2, where the colors of the arrows signify critical data flows: the orange arrows depict the transfer of data from the user to the vehicle's motors, while the green arrows show the data that gives the user video feedback.

Figure 2 Overview of the System and Connections

The Oculus Quest 2 headset [3] was chosen because it was readily available during the COVID-19 outbreak. An illustration of the VR technology employed in this system is shown in Figure 3.

Figure 3 Oculus Quest 2 Headset

The Raspberry Pi ran Raspberry Pi OS and was set up with a camera interface. A script creates a server on the Raspberry Pi's IP address, receives frames from the Pi camera, and pushes them to a URL. Using this method, the Raspberry Pi can provide a real-time, low-latency video stream that, given the correct IP address, can be viewed in any web browser. The stream's IP address was added to the VR system using a C# Unity script. A screenshot of a Unity scene playing a camera feed is shown in Figure 4. The Unity environment's functioning can be tested with any camera stream; the tests shown here used the Turkish test streams from [4].

Figure 4 Test Screenshot

The Oculus Quest 2 headset was then used to run the Unity program as a system test. To set up this test, connect a cable from the laptop to the VR headset, then run the Unity project to see the finished VR experience.
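As an illustrative sketch only (not the code of [2]), a Unity script along the following lines can display such a stream, assuming the Raspberry Pi exposes its latest camera frame as a JPEG at a known URL; the endpoint address, refresh rate, and use of a recent Unity version are assumptions on our part.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Illustrative sketch: periodically downloads the latest JPEG frame from a
// Raspberry Pi web server and shows it on the renderer this script is attached to.
public class CameraFeedDisplay : MonoBehaviour
{
    // Hypothetical endpoint serving the most recent camera frame as a JPEG.
    public string frameUrl = "http://192.168.1.50:8000/frame.jpg";
    public float refreshInterval = 0.05f;   // roughly 20 frames per second

    private Renderer screenRenderer;
    private Texture2D currentFrame;

    private void Start()
    {
        screenRenderer = GetComponent<Renderer>();
        StartCoroutine(FetchFrames());
    }

    private IEnumerator FetchFrames()
    {
        while (true)
        {
            using (UnityWebRequest request = UnityWebRequestTexture.GetTexture(frameUrl))
            {
                yield return request.SendWebRequest();

                if (request.result == UnityWebRequest.Result.Success)
                {
                    // Swap in the new frame and release the previous one to avoid leaking textures.
                    Texture2D newFrame = DownloadHandlerTexture.GetContent(request);
                    screenRenderer.material.mainTexture = newFrame;
                    if (currentFrame != null)
                    {
                        Destroy(currentFrame);
                    }
                    currentFrame = newFrame;
                }
            }
            yield return new WaitForSeconds(refreshInterval);
        }
    }
}

Attaching a script like this to a quad placed in front of the VR camera turns the quad into an in-scene display screen.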

2.2.1.1 Control Software:


A control system managed the vehicle's direction and speed. The system consists of a PS4 controller, a Raspberry Pi, electronic speed controllers (ESCs), motors, and a few signal converters. A Python script running on the Raspberry Pi served as the vehicle's control program. To generate the output signal for the motor controller and regulate the system's speed, the code opens a connection over Bluetooth [5] and evaluates the controller input. The lightweight PS4Controller module [6] was used; it reads an input stream and segments it into events as they occur.

The Python script maps the PS4 controller's input variables to the motors' speed-control output signals. The output signal was a PWM (Pulse Width Modulation) signal that can define 35 different speeds, i.e., up to 35 different duty cycles. The output duty cycle increases as the joystick is pushed further, causing the speed to increase.
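The original control program was a Python script on the Raspberry Pi; the snippet below is only a conceptual restatement of the quantization step described above, and the assumption that the joystick axis is normalized to the range -1 to 1 is ours.

using System;

// Conceptual sketch of the joystick-to-PWM mapping described above.
// The 35 discrete speed steps follow the text; the -1..1 joystick range is assumed.
public static class ThrottleMapper
{
    private const int SpeedSteps = 35;

    // Converts a joystick Y-axis reading (0 = idle, +1 = full forward)
    // into a discrete duty-cycle step for the ESC's PWM input.
    public static int ToDutyCycleStep(double joystickY)
    {
        // Only forward throttle is considered here: clamp to the 0..1 range.
        double forward = Math.Max(0.0, Math.Min(1.0, joystickY));

        // Quantize into one of 35 steps; pushing the stick further raises the
        // duty cycle and therefore the wheel speed.
        return (int)Math.Round(forward * (SpeedSteps - 1));
    }
}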

2.2.1.2 Vehicle Specifications


The car is built like a tricycle, with two drive motors at the front for two-wheel drive and a third steering wheel at the back for stability. Each motor had an individual ESC, allowing each wheel to run at a distinct speed. The Bluetooth controller's joystick Y axes were used to input the throttle: the left joystick's position determined the speed of the left wheel, and the right joystick's position determined the speed of the right wheel. Figure 5 displays a picture of the finished product.

Figure 5 Exterior of Robot

The chassis was made of plastic components. Initial experiments with a steel body revealed issues with demagnetization inside the motors, which led to this choice. All the components were mounted to the chassis with 3D-printed brackets that can be obtained at [7]. Two 6S LiPo batteries, totaling 6600 mAh, were linked in parallel to power the system; the nominal battery voltage was 22.2 volts and the motor rating was 190 KV. Figure 6 displays the system wiring diagram.

Figure 6 Wiring Schematic of System

A DC-DC converter powered by the 22.2-volt battery supplied 5 volts to the fans and level shifters. A cooling fan kept the Raspberry Pi from shutting down due to thermal problems. Because the 5-volt supply powers the Raspberry Pi independently of the 22.2-volt system, the Pi can be used without powering the ESCs. To prevent the power-system wiring harness from melting, an 80-amp circuit breaker was employed as the main power switch for the 22.2-volt system. Figure 7 displays a picture of the interior of the car, which houses most of the parts.

Figure 7 Close view of Robotic Hardware

2.2.2 Omnidirectional Telepresence Robot


Mulun Wu et al. (2018) wrote this work primarily to describe wider applications of remote control processes and interactions [8]. To make two devices, the HoloLens and the MYO armband, operate together, omnidirectional mobile robot (OMR) control was integrated with virtual reality and surface electromyography (sEMG) signals. The HoloLens user's primary means of controlling the OMR's mobility was through gestures or gaze. Wearers of the MYO wristband produced various sEMG signals by making different motions [9] to regulate the speed of the OMR. The complete movement of the OMR can be controlled by combining the two devices.

The setup used a laptop, Microsoft's HoloLens mixed reality headset, the OMR, the MYO bracelet (a wearable device that records myoelectric impulses), and several other gadgets. The MYO bracelet and HoloLens control applications were created on this laptop. Four omni wheels (OW) on the OMR allowed unrestricted movement in any direction, and it was very easy to execute and program commands [10]. Figure 8 depicts the system configuration for this technique.

Figure 8 Structure of the method

Figure 9 depicts the head-mounted Windows 10 device known as the HoloLens. The device's holographic processing unit (HPU), graphics processing unit, and central processing unit were all cutting-edge. Its sensors generated a lot of data that the numerous chips inside the device could process immediately. An infrared depth camera and a visible-light camera were used to map the space swiftly and precisely [11]. The Windows 10 SDK for Visual Studio 2017, Unity3D, and the HoloToolKit development package were all necessary for creating a HoloLens application.

Figure 9 HoloLens device

The forearm muscles emit various sEMG signals depending on the gestures the user makes while wearing the MYO bracelet on the forearm. The bracelet's highly sensitive sensors collect these signals, which are then analyzed by built-in algorithms to identify the gesture, and the corresponding commands are transmitted over Bluetooth. At this point, MYO can identify various user motions, such as spreading the hand open, clenching a fist, and waving the hand back and forth. A specific gesture is displayed in Figure 10.

Figure 10 Signals of human hand

Features were extracted after signal processing according to the various gestures depicted in [12]. The MYO bracelet has eight sensors from which it can collect data, but there are too many features and too much data to process them fully. As a result, principal component analysis was applied to reduce the dimensionality of the data and compress it. The flowchart is displayed in Figure 11.

Figure 11 Flow of process

The running condition of the OMR is shown in graphical form in Figure 12.

Figure 12 Output of robot

The acceleration and deceleration states were represented by processing the sEMG data. The resulting velocity values are added to the velocity values in the Unity3D project and passed to the omnidirectional mobile robot's (OMR) ROS system, which uses them to direct the robot's motion. Figure 13 shows the regulating sEMG signal and Figure 14 shows the OMR speed; comparing the two plots shows how the sEMG signal instantly regulates the OMR velocity. As the operator clenched his fingers, the omnidirectional mobile robot accelerated; on spreading the fingers apart, the speed decreased, while the speed in the normal state stayed constant.

Figure 13 Output of sEMG

Figure 14 Output of OMR

2.2.3 Real Time Telepresence robot with 360-degree view
To transmit live 360-degree video, Dan Vincent G. Alcabaza et al. (2019) proposed connecting a Ricoh Theta S to an Android smartphone mounted in a BoboVR headset [13]. The suggested method uses a 360-degree camera to record an all-encompassing scene and an app that feeds the recorded 360-degree runtime footage from the server to the client's Android smartphone. The proper viewport is selected from the surroundings by the phone's IMU. Depth can be perceived because of the difference between the base and focal-length values. The block diagram in Figure 15 depicts the flow of the suggested system.

Figure 15 Flowchart of system

Live footage was transmitted over the Internet from a PC to an Android smartphone. A section of a sphere was projected onto a planar image in an equirectangular layout, with longitude and latitude serving as its horizontal and vertical coordinates, respectively. The generated video was then mapped onto a 3D spherical model. Figure 16 illustrates how the 3D spherical model was projected from the 2D equirectangular footage.

Figure 16 2D video conversion into 3D
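The mapping from the 2D equirectangular frame onto the 3D sphere can be sketched as follows; the normalized image coordinates and axis conventions here are illustrative assumptions rather than the exact convention used in [13].

using UnityEngine;

// Sketch of the equirectangular-to-sphere mapping described above.
// u and v are normalized image coordinates in [0, 1].
public static class Equirectangular
{
    // Returns the point on the unit sphere that a pixel at (u, v) maps to.
    public static Vector3 ToSphere(float u, float v)
    {
        float longitude = (u - 0.5f) * 2f * Mathf.PI;   // -pi .. +pi across the image width
        float latitude  = (v - 0.5f) * Mathf.PI;        // -pi/2 .. +pi/2 across the image height

        float cosLat = Mathf.Cos(latitude);
        return new Vector3(
            cosLat * Mathf.Sin(longitude),   // x
            Mathf.Sin(latitude),             // y (up)
            cosLat * Mathf.Cos(longitude));  // z (forward)
    }
}

Each vertex of the sphere mesh samples the video texture at the (u, v) that maps to its direction, which is how the flat footage wraps around the viewer.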

There is a stitching mismatch between the two pictures taken by the Ricoh Theta S, because some parts of the spherical environment are not completely stitched together (see Figure 17).

Figure 17 Result of false joining.

The data in Table 1 relates to a view in a particular direction that is easily visible in VR.
Table 1 Values of IMU sensor

Table 2 displays the data gathered to evaluate the geometric realism of binocular stereoscopy, considering the base and focal lengths.
Table 2 Values gathered by Binocular Stereo

Chapter 3 – METHODOLOGY

3.1 Overview
A real-time video feed from a Raspberry Pi camera mounted on a robot at a remote location would be broadcast to a web server over the local IP. A Python script would process the individual frames of the camera stream at a desired framerate and push them onto the server. The video stream would be integrated into the Unity-based Android application for the VR headset. A C# script would load frames of the video stream from the web server and project them onto a flat game object acting as a display screen within the immersive 3D environment of the Unity application. The VR headset can be used remotely to access this feedback.

The control inputs from the game controllers of the VR headset will be fetched in the Unity application. A C# script will fetch these inputs from the game controllers and transmit them to the Raspberry Pi on the mobile robot using web sockets over a local IP. The Raspberry Pi on the robot will bind itself to the web socket port of the local IP and access these control inputs using a script written in Python. The algorithm in the script will make decisions regarding the movement of the robot according to the received inputs and change the pin states of the RPi's GPIOs for the drive circuit of the robot motors.

The VR headset has a built-in 6-DOF IMU (Inertial Measurement Unit) to monitor the orientation of the VR headset. The values for the axes are fetched by the Tracked Pose Driver in Unity XR. The axis values are then converted into their corresponding Euler angles and sent to the Raspberry Pi using web sockets through a C# script. The RPi will receive these Euler angles and control the movement of the servos in the camera gimbal, ensuring that the gimbal orientation stays in sync with the HMD orientation of the VR headset.

Figure 18 Working Diagram
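A minimal sketch of the Unity-side sender described above is given below. It assumes a recent Unity version with the XR input API available and uses the .NET ClientWebSocket class; the Pi's address, the text message format, and the send rate are placeholders rather than the project's exact protocol.

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using UnityEngine;
using UnityEngine.XR;

// Sketch: packs controller axis values and the headset's Euler angles into a
// simple text message and pushes it to the Raspberry Pi over a web socket.
public class RobotCommandSender : MonoBehaviour
{
    public string serverUri = "ws://192.168.1.50:8765";  // hypothetical Pi address
    public float sendInterval = 0.05f;

    private ClientWebSocket socket;
    private float timer;

    private async void Start()
    {
        socket = new ClientWebSocket();
        await socket.ConnectAsync(new Uri(serverUri), CancellationToken.None);
    }

    private async void Update()
    {
        if (socket == null || socket.State != WebSocketState.Open) return;

        timer += Time.deltaTime;
        if (timer < sendInterval) return;
        timer = 0f;

        // Thumbstick of the right-hand controller (drives the robot base).
        Vector2 stick = Vector2.zero;
        InputDevice rightHand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        rightHand.TryGetFeatureValue(CommonUsages.primary2DAxis, out stick);

        // HMD orientation as Euler angles (drives the camera gimbal).
        Vector3 hmdEuler = Camera.main.transform.rotation.eulerAngles;

        // Hypothetical message format: "x,y;pitch,yaw,roll".
        string message = $"{stick.x:F2},{stick.y:F2};{hmdEuler.x:F1},{hmdEuler.y:F1},{hmdEuler.z:F1}";
        byte[] payload = Encoding.UTF8.GetBytes(message);
        await socket.SendAsync(new ArraySegment<byte>(payload),
                               WebSocketMessageType.Text, true, CancellationToken.None);
    }
}

On the robot side, the Python script described above would parse each message, drive the GPIO pins of the motor circuit from the thumbstick values, and forward the Euler angles to the gimbal servos.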

3.2 Development in Game Engine
Game development engines allow us to create high-quality games and apps. A game engine hides much of the complexity behind APIs, libraries, and plugins, which lets the developer focus more on creativity; developers can make instant changes and test their code, which reduces development time. There are many game engines: some focus on only 2D or only 3D graphics, while others, such as Unity and Unreal Engine, can create both 2D and 3D games. Game engines are also used in scientific research for different purposes, mainly because of their 3D navigation and coordinate systems; some uses are discussed in [14]. In our project we focus on Unity because it provides realism and great rendering quality in VR, and users report that Unreal Engine has a somewhat higher learning curve than Unity, which is easier to dive into and start producing with.

Figure 19 System Requirements of Modeling Software

3.3 Industry Standards


There are many game engines [15] in the world right now, but only a few are recognized worldwide and used for all kinds of game development and visual effects. The two most recognized are Unity and Unreal Engine. Both are used in countless industries for games, visual effects, virtual reality, film production, and other research purposes. These engines were initially developed for entertainment, but after extensive research and testing this powerful software was also used for the training of troops. [16] discusses how, in October 2002, a staff sergeant stated that they were using the game America's Army for training. He explained that new recruits who were having trouble with the rifles on the training range were ordered to spend time playing this game, and their performance improved considerably. After that, extensive research went into creating such games and applications to facilitate the military. This shows that this powerful software can be used not only for entertainment but also for research, military, surveillance, and the medical field.

Figure 20 Logo of Development Engines

3.3.1 Unreal Engine


Unreal Engine is the industry standard for creating large games and applications that require highly realistic graphics. It allows users to program in C++ and also supports node-based visual scripting. Unreal Engine is well known for realistic graphics, high-fidelity visuals, and realistic lighting effects; most notable is real-time ray tracing, which provides life-like graphics. Unreal Engine provides good support for cinematics and for virtual reality. Its optimized rendering and good support for motion controllers make it a preferred software for VR apps.

Figure 21 Specifications for Unreal Engine

3.3.2 Unity
Unity is a well-known game development engine that provides good features to make development easy. It uses a component-based architecture, so scripts can be attached to different components, which takes reusability to another level. It allows users to work with different programming languages such as C#, JavaScript, and Boo. Its what-you-see-is-what-you-get viewport makes game design and graphics easy to calibrate. We are using Unity for our project to design the app.

Figure 22 Specifications for Unity Engine

3.3.3 Unity Features and Usage Details


The first version of Unity was released for Mac OS in 2005. It is an all-in-one destination that provides a full range of tools and capabilities, making it suitable for producing games of all kinds, from 2D platformers to 3D action games. The engine was first created to produce animations and interactive web content, but it quickly grew to support several platforms, including desktop computers, mobile phones, and gaming consoles. Several features of the Unity game engine are designed to assist with the creation, distribution, and monetization of video games. Some of these features are listed below [17].

3.3.3.1 2D and 3D Graphics


In 3D games, textures and materials are used to give items the appearance of being positioned in solid space. The games also use a camera to impart perspective: the closer an object is to the camera, the larger it appears. 2D games, in contrast, use flat graphics and have no camera perspective. They can still use 3D graphics to add aesthetic depth to their 2D experiences.

3.3.3.2 Physics Engine
The physics engine in Unity helps programmers make sure that objects accelerate and move accurately. Collisions, gravity, and other natural forces are handled by it. Unity supports two physics engines [18]: the object-oriented physics engine, which utilizes a single thread and core by default, and the data-oriented technology stack, which has a newer internal design optimized for speed, lightness, and multi-threading.

3.3.3.3 Character Control


For the character in a first- or third-person game to avoid falling through the ground or walking through walls, collision-based physics is typically required. Because the character's movement and acceleration are not physically precise, in many applications the character can speed up, slow down, and change direction almost instantly without being affected by momentum. This kind of behavior in 3D physics can be created with the Character Controller. This component gives the character a simple capsule collider that always stays upright. The controller has its own special ability to change the speed and direction of the object; no rigid body is needed, and momentum effects are not simulated [19]. Character Controllers follow floors and are blocked by walls because they cannot traverse static colliders in a scene. While moving, the controller can push rigid-body objects away, but it will not be accelerated by incoming collisions. The Character Controller is primarily used for player control in first- or third-person perspectives without the use of rigid-body physics.

Figure 23 Character controller

The Character Controller does not respond to forces on its own, nor does it move rigid bodies out of the way. The OnControllerColliderHit() function in scripting allows you to apply forces to any object that the Character Controller collides with, in order to push rigid bodies or other objects. However, using a Rigidbody rather than a Character Controller may be preferable if you want your player character to be affected by physics.
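The behaviour described above can be sketched as follows; the speed and push values are arbitrary, and the input axes are Unity's default Horizontal/Vertical bindings.

using UnityEngine;

// Minimal sketch: a Character Controller moved from input, pushing any rigid
// bodies it bumps into via OnControllerColliderHit().
[RequireComponent(typeof(CharacterController))]
public class SimplePlayerController : MonoBehaviour
{
    public float speed = 3f;
    public float pushPower = 2f;

    private CharacterController controller;

    private void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    private void Update()
    {
        // Horizontal/vertical axes map to planar movement; gravity keeps the
        // controller grounded (it follows floors and is blocked by walls).
        Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        move = transform.TransformDirection(move) * speed;
        move.y = Physics.gravity.y;
        controller.Move(move * Time.deltaTime);
    }

    // Called when the controller hits a collider while moving; used here to
    // give rigid bodies a push, since the controller has no physics of its own.
    private void OnControllerColliderHit(ControllerColliderHit hit)
    {
        Rigidbody body = hit.collider.attachedRigidbody;
        if (body == null || body.isKinematic) return;

        Vector3 pushDirection = new Vector3(hit.moveDirection.x, 0f, hit.moveDirection.z);
        body.velocity = pushDirection * pushPower;
    }
}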

3.3.3.4 Collision
In Unity, colliders must be used to set up collisions between GameObjects. A collider defines a GameObject's shape for the purpose of physical collisions, and the resulting collision events can then be handled in scripts. Because a collider is invisible, it does not need to have the same shape as the GameObject's mesh [20]; in games, a rough approximation of the mesh is often more efficient and undetectable in play. Primitive collider types are the simplest kinds of colliders.
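A short sketch of handling collision events in a script (the log messages are only illustrative):

using UnityEngine;

// Sketch of collision handling: a GameObject with a Collider (and a Rigidbody
// on at least one side of the collision) receives these events.
public class CollisionLogger : MonoBehaviour
{
    private void OnCollisionEnter(Collision collision)
    {
        // The Collision object describes what was hit and where.
        Debug.Log($"Hit {collision.gameObject.name} at {collision.contacts[0].point}");
    }

    private void OnTriggerEnter(Collider other)
    {
        // Trigger colliders report overlaps without producing a physical response.
        Debug.Log($"Entered trigger volume of {other.name}");
    }
}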

3.3.3.5 Joint
A joint connects one rigidbody to another. Joint limits restrict movement by applying forces to the rigid bodies, and joints give rigidbodies additional, constrained degrees of freedom. Character joints are mostly employed for ragdoll effects; they are extended ball-socket joints that let you limit the joint along each axis [21]. The twist axis gives you the most control over the limits, as you can define a lower and upper limit in degrees; for example, the rotation around the twist axis can be restricted to a range of -30 to 60 degrees.

Figure 24 Movement axes system

To limit the joint's strength, use the break force and break torque properties. If these are set to less than infinity and the object is subjected to a force or torque larger than these limits, the joint will be destroyed and the object will no longer be constrained by it.
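The limits and break thresholds discussed above can also be configured from a script, as in this sketch; the connected body and the numeric thresholds are illustrative values.

using UnityEngine;

// Sketch: limits a CharacterJoint's twist axis to the -30..60 degree range
// mentioned above and sets break thresholds so the joint is destroyed if overloaded.
[RequireComponent(typeof(Rigidbody))]
public class LimitedJointSetup : MonoBehaviour
{
    public Rigidbody connectedBody;   // the other rigidbody this one is joined to

    private void Start()
    {
        CharacterJoint joint = gameObject.AddComponent<CharacterJoint>();
        joint.connectedBody = connectedBody;

        // Lower and upper limits around the twist axis, in degrees.
        SoftJointLimit lower = joint.lowTwistLimit;
        lower.limit = -30f;
        joint.lowTwistLimit = lower;

        SoftJointLimit upper = joint.highTwistLimit;
        upper.limit = 60f;
        joint.highTwistLimit = upper;

        // If the joint is subjected to a larger force or torque than this,
        // it breaks and the bodies are no longer constrained.
        joint.breakForce = 500f;
        joint.breakTorque = 500f;
    }
}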

3.3.3.6 Supported Languages for Scripting


Unity supports several scripting languages, including C#, Boo, and JavaScript. Due to its performance and usability, C# is frequently chosen by professional game developers and is used extensively in Unity. Script code controls the object hierarchy and schedules events so that they take place when they should. Scripts also produce visual effects, regulate the physical behavior of objects, and manage the AI of the characters in a game or experience [22].

3.3.3.7 Audio and Video
Unity has tools for mixing and augmenting sound with predefined effects and supports 3D audio. Additionally, it offers a video component that enables video to be incorporated into the experience. Developers and artists can produce audio and visual assets outside of Unity and then add them to their Unity projects.

Figure 25 Signal Transformation

AIFF, WAV, MP3, and Ogg audio files can be imported into Unity by dragging them into the Project window. After importing an audio file, the resulting audio clip can be assigned to an Audio Source or played from a script. Unity also supports music tracker modules, in which short audio samples are used as "instruments" that are combined to play songs; tracker modules in the .mod, .it, .s3m, and .xm formats can be imported and used in the same way as conventional audio clips. Unity can also record audio directly from a computer microphone through scripts: the Microphone class has a straightforward API that makes it simple to find and use available microphones.

Figure 26 Supported formats of audio files.
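A small sketch of the Microphone API mentioned above, recording a short clip from the default device and playing it back; the clip length and sample rate are arbitrary choices.

using UnityEngine;

// Sketch: records 10 seconds from the first available microphone and plays it back.
[RequireComponent(typeof(AudioSource))]
public class MicrophoneCapture : MonoBehaviour
{
    private void Start()
    {
        if (Microphone.devices.Length == 0)
        {
            Debug.LogWarning("No microphone found.");
            return;
        }

        // Record 10 seconds of audio at 44.1 kHz from the default device.
        string device = Microphone.devices[0];
        AudioClip clip = Microphone.Start(device, false, 10, 44100);

        AudioSource source = GetComponent<AudioSource>();
        source.clip = clip;

        // Wait until recording has actually started before playback
        // (a simple blocking wait, acceptable for a short sketch).
        while (Microphone.GetPosition(device) <= 0) { }
        source.Play();
    }
}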

3.3.3.8 Animations
When discussing the Unity animation system, the term "Mecanim" is often used. The name reflects the system's user-friendly approach to specifying and setting up object and character animations. To animate a character, body parts can also be given coded logic. Multiple body parts can be animated using a single animation script, which makes it simpler to reuse animation code and saves time by eliminating the need to write different pieces of code for every part. Animations produced both inside and outside of Unity can be blended and incorporated into a single game [23].

Figure 27 Animator Window in Unity

Animation clips can be produced by outside parties: animators or artists can create them in third-party programs like Max or Maya, or they can be purchased from motion capture firms. The animation clips are then arranged in a diagram-like hierarchy in the Animator Controller. Numerous humanoid animations can be used to simulate the primary activities in a more complex animation controller, which can also smoothly blend between multiple clips at once to give the player smooth mobility inside the scene.
You can retarget humanoid animations from any source to your own character models using Unity's animation system. Many elements of Unity's animation system are designed explicitly for humanoid models. By mapping anthropomorphic characters onto a conventional internal structure, Unity's Avatar system can take advantage of these distinctive characteristics. All of these pieces (animation clips, Animator Controllers, and Avatars) are combined on a single GameObject by the Animator component.

3.3.3.9 Animator Controller
Animation Controllers let you organize and manage collections of animations for characters and other animated game objects. Controllers contain references to the animation clips they use and can arrange different animations using state machines (also thought of as flow charts, or simple programs built in Unity's visual programming language).
A state machine describes how a controller manages different animation states and the transitions between them; it is like a diagram. The main portion of the Animator window shows a dark grey grid in the layout area. Right-click on the grid to add a new state. To move the view, press and hold the middle mouse button or use Alt/Option drag. Click and drag the state buttons to rearrange the layout of your state machine.

Figure 28 Animator view

3.3.3.10 Animation State Machines


State machines are made up of states, transitions, and events, and smaller sub-state machines can be included in bigger machines as components. Using Unity's Animation State Machines, it is possible to inspect all the animation clips associated with a specific character and to set different animations to be triggered by various game events, such as user interaction. A character or other animated game object frequently has a variety of animations that match the many actions it might carry out in the game. The current state can be compared with the markers set on the nodes, and the only way to switch to another node is to follow one of the transitions. State machines are valuable for animation because they can be created and modified with little to no coding. Every state has a corresponding motion that starts playing whenever the machine is in that state. Without needing to worry about coding, an animator or designer can specify the possible sequences of character actions and animations.

Figure 29 State Machine Flow
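Driving such a state machine from a script reduces to setting the controller's parameters; in this sketch, "Speed" and "Jump" are assumed parameter names defined in the Animator Controller rather than built-in names.

using UnityEngine;

// Sketch: sets Animator parameters so the state machine blends and transitions
// between the animation states defined in the controller.
[RequireComponent(typeof(Animator))]
public class CharacterAnimationDriver : MonoBehaviour
{
    private Animator animator;

    private void Start()
    {
        animator = GetComponent<Animator>();
    }

    private void Update()
    {
        // A float parameter can blend between idle/walk/run states.
        float speed = Mathf.Abs(Input.GetAxis("Vertical"));
        animator.SetFloat("Speed", speed);

        // A trigger parameter fires a one-off transition, e.g. into a jump state.
        if (Input.GetButtonDown("Jump"))
        {
            animator.SetTrigger("Jump");
        }
    }
}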

3.3.3.11 User Interface


You can quickly and easily develop user interfaces using the UI system. For layout purposes, each UI element is represented by a rectangle, which can be adjusted in the Scene View using the Rect Tool in the toolbar. The Rect Tool is used for the UI and Unity's 2D features, and it can also be used for 3D objects. UI elements can be moved, resized, and rotated with the Rect Tool. Once a UI element has been selected, it can be moved by clicking anywhere inside the rectangle and dragging [24]. It can be resized by clicking and dragging the corners or edges. The element can be rotated by moving the mouse pointer slightly away from the corners until the pointer resembles a rotation symbol, then dragging.

Figure 30 Toolbar of UI system

When the Rect Tool is used to alter an object's size, it typically modifies the object's local scale for both 3D objects and 2D sprites. When applied to an object that has a Rect Transform, however, it alters the width and height while leaving the local scale alone. This resizing does not affect font sizes, the borders of sliced images, and similar elements.

3.3.3.12 Auto Layout


The Rect Transform layout system provides completely freeform placement of elements and is adaptable enough to handle a wide variety of layouts. Occasionally, however, a little more organization may be required. The auto layout system offers ways of placing elements in nested layout groups, such as grids, vertical groups, or horizontal groups. It also enables elements to automatically adjust their size to the content they contain; for instance, a button can be resized dynamically to exactly match its text content plus some padding. The auto layout system is built on top of the basic Rect Transform layout system and can be applied optionally to some or all elements.
By adding a Layout Element component to a GameObject, you can override its minimum, preferred, or flexible size. The Layout Element component lets you change the values of one or more layout properties: check the box for the property you wish to change, then enter the value you want to override it with.

Figure 31 Layout elements

It is not recommended to manually alter the sizes and placements of individual UI elements in the Inspector or Scene View when a layout controller in the auto layout system is controlling them, because the layout controller would simply reset any modified values during the next layout calculation. Some layout components are built into the auto layout system, but it is also possible to design additional components that control layouts in custom ways. This is accomplished by having a component implement interfaces that the auto layout system recognizes.
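As a sketch of the Layout Element override described above, the same properties can also be set from a script; the pixel values here are arbitrary.

using UnityEngine;
using UnityEngine.UI;

// Sketch: adds a LayoutElement to a UI GameObject and overrides its sizing so
// the auto layout system reserves at least that much space for it.
public class LayoutOverride : MonoBehaviour
{
    private void Start()
    {
        LayoutElement element = gameObject.AddComponent<LayoutElement>();
        element.minWidth = 80f;        // never shrink below 80 pixels wide
        element.minHeight = 30f;       // never shrink below 30 pixels tall
        element.preferredWidth = 160f; // grow to 160 pixels when space allows
    }
}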

3.3.3.13 Navigation System


Unity's navigation system makes it possible to create characters that can move about the game world. It uses navigation meshes that are created automatically from the scene geometry, so characters can be given intelligent movement capabilities. It also features dynamic obstacles, which can alter a character's path at runtime, and off-mesh links, which allow actions such as opening doors or jumping down from ledges.

Figure 32 Path finding.

Understanding the distinction between global and local navigation is one of the most important navigation concepts. Global navigation is used to find corridors across the world; finding a path across the world is expensive in memory and computing power. The linear list of polygons describing the path is a flexible data format for steering, and it can be locally adjusted as the agent's position changes. The goal of local navigation is to determine the most effective path toward the next corner without running into any other agents or moving objects.

Figure 33 Global and local

Many navigation applications require other forms of obstacles beyond other agents; these might be vehicles, or the typical boxes and barrels found in shooter games. Obstacles can be handled with local obstacle avoidance or global pathfinding. Local obstacle avoidance is the best strategy for moving obstacles: the agent can anticipate the obstacle and avoid it. When an obstacle becomes stationary and can be regarded as blocking the path of all agents, it should affect global navigation. Altering the NavMesh in this way is called carving: the process detects where parts of the obstacle touch the NavMesh and cuts holes in it. Local collision avoidance is sometimes also used to steer clear of sparsely spaced obstacles; however, since the algorithm is local, it only considers upcoming collisions and cannot avoid traps or handle situations where an obstacle blocks a path. Such cases can be resolved with carving.

3.3.3.13.1 NavMesh Agent


Characters can be created using NavMesh Agent components, which avoid one another as they move toward their objective. Agents use the NavMesh to reason about the game world and can avoid other agents as well as other moving obstacles. Pathfinding and spatial reasoning are handled through the NavMesh Agent's scripting API. The agent is defined by an upright cylinder whose size is specified by the Radius and Height properties. The cylinder follows the object as it turns but always remains upright, and it is used to detect and respond to collisions with other agents and obstacles. If the GameObject's anchor point is not at the base of the cylinder, use the Base Offset property to compensate.

Figure 34 NavMesh Agent

The height and radius of the cylinder are specified in two places: the properties of the
individual agent and the NavMesh bake settings. The agent's size is normally configured to
the same values in both.
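A minimal sketch of that scripting API is shown below. It is a generic example rather than code
from this project; the target field is assumed to be assigned in the Inspector.

using UnityEngine;
using UnityEngine.AI;

// Generic sketch: continuously drive a NavMeshAgent towards a target transform.
public class MoveToTarget : MonoBehaviour
{
    public Transform target;   // assumed to be assigned in the Inspector
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        if (target != null)
            agent.SetDestination(target.position);  // global path is replanned as needed
    }
}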

3.3.3.13.2 OffMesh Link


The OffMeshLink component provides navigation shortcuts that cannot be represented by a
walkable surface. If an agent does not traverse an Off-Mesh link, verify that both end points
are properly connected: a correctly connected end point shows a circle around its access
point.
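For reference, an agent's traversal of Off-Mesh links can also be monitored from a script
through the NavMeshAgent API. The snippet below is a generic sketch, not code from this project.

using UnityEngine;
using UnityEngine.AI;

// Generic sketch: log whenever the agent starts crossing an Off-Mesh link.
public class OffMeshLinkLogger : MonoBehaviour
{
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.autoTraverseOffMeshLink = true;  // let the agent cross links automatically
    }

    void Update()
    {
        if (agent.isOnOffMeshLink)
            Debug.Log("Agent is crossing an Off-Mesh link");
    }
}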

Figure 35 Off-Mesh Link Connection Status

3.3.4 Unity Platforms Supported
Unity is one of the few engines that is compatible with multiple platforms [25]. Although the
Unity game engine now supports building games for more than 19 different platforms,
including mobile, desktop, consoles, and virtual reality (VR), the Unity editor itself is only
available for Windows, macOS, and Linux. The platforms that Unity officially supports are
listed below.

Figure 36 Unity Supported Platforms

3.4 CAD SOFTWARES

3.4.1 SolidWorks
SolidWorks is a Microsoft Windows-based program for computer-aided design (CAD) and
computer-aided engineering (CAE) [27].

Figure 37 SolidWorks Logo

3.4.2 Fusion 360
Fusion 360 is a cloud-based platform of 3D CAD, CAM, CAE, and PCB modelling tools used
for product design and manufacturing. It offers more integrated capabilities than SolidWorks,
which is why we are using it. In addition to Windows, macOS, and web browsers, simplified
apps are available for Android and iOS, and a free, feature-limited personal edition is offered
for home, non-commercial use [28]. Fusion 360 includes sheet metal, simulation,
documentation, and 3D modelling, and it can drive manufacturing processes such as milling,
turning, machining, and additive manufacturing. It also provides electronic design automation
(EDA) features, including component management, PCB design, and schematic design, and it
can be used for a variety of complex simulation tasks (FEA), rendering, animation, and
generative design.

Figure 38 Fusion 360 Logo

3.5 HARDWARE COMPONENTS

3.5.1 Oculus Quest 2


With Unity VR, projects can target virtual reality devices without any extra external plugins.
In our project, we are using the Oculus Quest 2 [29]. Unity VR offers a straightforward API
and a cross-platform feature set, and it provides forward compatibility with future devices
and software. The scope of the VR API is deliberately limited but will expand as VR
advances.

Figure 39 VR headset- Oculus Quest 2

Several things happen automatically when Unity's VR feature is enabled: every Camera in
the scene renders directly to the head-mounted display (HMD), and field of view, head
tracking, and positional tracking are all taken into account automatically when the view and
projection matrices are adjusted. Head tracking and the headset's field of view are applied to
the camera as soon as the device is worn. The camera's values cannot be changed directly,
although the field of view can be manually overridden with a specific value.
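As a small illustration of what the engine handles automatically, the sketch below reads the
tracked head rotation through Unity's generic XR input API. It is a generic, hedged example
rather than the project's own code; the project itself reads the head pose from a tracked
GameObject's transform, as shown in Annex A.

using UnityEngine;
using UnityEngine.XR;

// Generic sketch: read the tracked HMD rotation each frame once a headset is active.
public class HeadPoseLogger : MonoBehaviour
{
    void Update()
    {
        InputDevice hmd = InputDevices.GetDeviceAtXRNode(XRNode.Head);
        if (!hmd.isValid)
            return;  // no headset connected yet

        if (hmd.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rotation))
            Debug.Log("HMD rotation: " + rotation.eulerAngles);
    }
}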

3.5.2 Raspberry pi
The Raspberry Pi is a compact, affordable, pocket-sized computer that can be connected to a
monitor or TV and operated with a standard keyboard and mouse. This capable little device
lets anyone learn to code, and it can do most of what a desktop computer does: play
high-definition video, browse the web, create spreadsheets and documents, play games, and
more [30]. The series has seen many upgrades from the Pi 1 to the Pi 4; we use a Raspberry
Pi 3B in our project.

Figure 40 Raspberry Pi

C, C++, and Python are the languages most frequently used on the Raspberry Pi. We use
Python for our Raspberry Pi code.

3.5.3 Gimbal
A gimbal is a device that keeps the camera steady while the robot is moving so that stable
video can be captured. The two most common configurations are 3-axis and 2-axis gimbals.
A 3-axis stabilizer compensates for all three rotations: up/down (pitch), left/right (yaw), and
roll about the forward axis. In our case, a 2-axis gimbal is used because it is adequate for the
job.

Figure 41 Axes of rotation

3.5.3.1 Tilt Motion


Tilt refers to up-and-down movement and is also known as the 'pitch' axis. It would be used,
for example, to follow someone climbing stairs or jumping on a trampoline.

3.5.3.2 Pan Motion


Pan moves the camera from side to side, most often from left to right, about the 'yaw' axis
located beneath the camera. It might be used to follow a subject moving from one place to
another, such as a person on a bike or someone walking along the street.

3.5.3.3 Roll Motion


Roll rotates the camera about its forward (lens) axis. It is mainly used to keep the horizon
level, so the shot stays steady when the platform tilts from side to side while following a
moving subject.

3.5.3.4 Gimbal Design
We designed our gimbal in Fusion 360 and finalized it after several iterations, since earlier
designs produced shaky footage. The finalized gimbal design is shown in the figure below.

Figure 42 Gimbal Design

3.5.4 Robot Base


The robot used in our project is 3D printed, except for the two-motor base, which we bought
off the shelf and assembled to suit our requirements. All the other parts needed for the project
were 3D printed.

Figure 43 Robotic Base

After completing the design according to our requirements, the finalized form looked like
the figures attached.

Figure 44 Top View of robot

An LCD is attached to the robot to display readings such as the battery percentage and other
required values, and a button turns the power on and off.

Figure 45 Front view of robot

Chapter 4 – RESULTS

The Virtual Telepresence Robot is complete and fulfils all the requirements committed in the
deliverables of the project. We receive a live video feed from the camera mounted on the robot,
which is placed at a distant location where we are not present. The person wearing the VR
headset can successfully control the robot using the movement of the Oculus Quest 2 and its
controllers. The video feed has some latency and the gimbal shows slight vibrations, both of
which can be improved. The prototype consists of a Raspberry Pi camera mounted on a gimbal
that uses two servo motors for the pitch and yaw movements of the camera. An Arduino-style
16x2 character LCD is attached to the front of the robot to display values such as the battery
charge percentage, the commands currently being received from the Oculus, and the
head-mounted display (HMD) orientation coordinates.

4.1 Results of VR Headset movement

The orientation of a VR headset changes as the user rotates their head while wearing it. The
sensors in the headset track this change in orientation in three dimensions and use it to update
the user's view. As the user moves their head, the HMD coordinate values change
continuously; these values can be shown on the HMD's screen as well as on a laptop, so the
user can observe how their head motions affect the virtual environment.

HMD coordinates are expressed in a coordinate system centred on the user's head: the X-axis
points to the right, the Y-axis points upwards, and the Z-axis points forwards. These
coordinates are a useful resource for VR headset users: they let the user control their
viewpoint and interact with objects in the virtual environment, and they show how head
movements change that environment. The data capturing this change is shown in the figure
below.
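The headset reports each Euler angle in the 0-360 degree range; to obtain the signed values
that are displayed and streamed to the robot, the project's Unity script in Annex A shifts any
angle above 180 degrees down by 360. A condensed sketch of that mapping follows (the helper
class name is ours; the logic is taken from the annexed HMDOrientationTransmit script).

using UnityEngine;

// Condensed from the HMDOrientationTransmit script in Annex A: map a 0..360 degree
// Euler angle onto the signed -180..180 range that is sent to the Raspberry Pi.
public static class AngleUtil
{
    public static float ToSignedAngle(float angle)
    {
        return (angle <= 180f) ? angle : angle - 360f;
    }
}

// Example: AngleUtil.ToSignedAngle(350f) returns -10f, so a head turned slightly past
// the zero position reads as a small negative angle instead of 350.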

Figure 46 Data of VR headset movement

4.2 Results of Controller movement

When any button on the controller is pressed, a signal is transmitted to the Raspberry Pi. The
Raspberry Pi interprets the signal and sends a command to the servos, and the servos rotate
the gimbal accordingly. Since the camera is mounted on the gimbal, the camera moves with
it, so the user can look in any direction simply by pressing a button on the controller. The
values sent from the controller to the Raspberry Pi are depicted in the figure below: the
Y-axis denotes up-down motion, while the X-axis denotes the left-right direction.

The Raspberry Pi determines the new gimbal position from the data it receives: the new
position is obtained by adding the received value to the gimbal's current position. Whenever
the user adjusts the controller, the Raspberry Pi recalculates the position and sends the
corresponding command to the servos, which then move the gimbal and camera to the new
location.

Figure 47 Data of controller movement

The data transferred from the VR controllers to the Raspberry Pi contains values ranging
from -1 to 1, which indicate the action to perform. These values are sent to the Raspberry Pi
as a continuous stream. The table below lists the VR controller data.
Table 3 Controller movements

4.3 Head Mounted Display Orientation

The Head Mounted Display orientation axes track the position and direction of the HMD
while data is exchanged between the camera and the VR headset. Three axes, depicted in
red, green, and blue, serve as the HMD's reference orientation axes: a right-pointing X-axis,
an upward-pointing Y-axis, and a forward-pointing Z-axis. The HMD's sensors,
accelerometers, gyroscopes, and magnetometers, track the device's movement, and their data
is combined to determine the HMD's position and orientation. This pose is then passed to the
VR headset, which uses it to update the virtual environment the user is viewing.

Figure 48 HMD Orientation axes

4.4 Video of Raspberry Pi camera


When connected to Wi-Fi, the VR headset shows the real-time video footage of the camera
placed at a distant location. An image of the RPi camera's viewpoint, captured while steering
it with the Oculus Quest 2 controllers, is attached; it shows our virtual telepresence robot
working in real time. Obtaining camera footage that follows the VR headset's movement is
the main result and deliverable of our project. The robot can now be controlled from a safe
distance to serve society in areas such as remote education, telemedicine, entertainment
(playing video games and watching movies in a three-dimensional view), surveillance,
restaurants, tourist guidance, and many other fields. With this technology, exploring
dangerous or inaccessible environments, providing remote assistance to people in need,
conducting security operations, and delivering presentations and virtual training become
much easier. An image of our robot's video footage is shown below.

Figure 49 Video of Raspberry Pi camera

Chapter 5 – CONCLUSION AND FUTURE WORK

5.1 Conclusion
Our final year project has been manufactured and fulfils all the requirements; the objectives
set at the start have been met. In the medical field, it enables doctors to attend to patients in
remote locations: the doctor wears the VR headset while the robot is placed at a fixed spot
that is easily accessible to patients, the whole procedure runs over Wi-Fi, and the doctor can
examine patients and prescribe medicine. It can also be used for surveillance by providing
video access to remote locations that are dangerous for humans to visit. For entertainment,
we have installed some games that can serve as a source of relief from a hectic routine. It can
serve educational purposes as well, by giving VR video access to the classroom when a
student is ill or cannot attend class physically.

5.2 Marketing Strategy

We can market our project by targeting the widest possible audience in every field,
highlighting the broad range of uses of the Virtual Telepresence Robot.

 The ability to work from any location due to the virtual telepresence robot can increase
productivity. This can reduce travel costs and save time.
 The robot may be controlled by the user from a secure distance, which is useful in
hazardous or dangerous circumstances.
 The virtual telepresence robot can help people to communicate more effectively by
allowing them to see and hear each other in real time. This can promote connection
development and improve teamwork.
 The ability to access data and resources from anywhere in the world can help users save
time and money.
 The ability to engage with virtual settings and objects provided by the virtual
telepresence robot can aid in improving learning outcomes. By doing so, learning may
become more interesting and memorable.
 By enabling people to work from home or other relaxing areas, the virtual telepresence
robot can help in stress reduction. The risk of burnout can be decreased, and the work-life
balance can be improved.

5.3 Challenges to Virtual Telepresence Robot

A virtual telepresence robot that uses a VR headset to explore remote locations is a new
technology, so it may face some challenges.

 The technology is still being improved, so there are some technical limitations such as
shakiness in the gimbal and latency in the video feed.
 Its adoption may be constrained in places where human labour is cheaply available, as it
is in most countries, making the robot comparatively expensive.
 Since many people are unfamiliar with the technology and find it new, acceptance may be
challenging.

5.4 Benefits of Virtual Telepresence Robot

Few of the benefits of virtual telepresence robot are mentioned below in different fields.

5.4.1 Medical Field

 By allowing doctors to treat patients who are in remote locations, it can help extend
access to care.
 It can help to raise the standard of care by enabling doctors to offer consultations and
evaluations in real-time.
 It can lower expenses by lowering the need for travel and lodging.

5.4.2 Surveillance

 The 24/7 surveillance of distant places made possible by the virtual telepresence robot
can help to boost security.
 By allowing security officers to work from a safe distance, the virtual telepresence
robot can help lower the danger of injury.
 By letting security staff watch over many places at once, it can increase efficiency.

5.4.3 Entertainment

 The virtual telepresence robot can boost participation by giving users a more
engaging experience.
 The ability to connect with people who are located elsewhere is provided by the
virtual telepresence robot, which can help to lessen isolation.
 The virtual telepresence robot can enhance wellbeing by giving people a means of
relaxing and de-stressing.

5.4.4 Education

 By enabling students to attend classes from distant locations, the virtual telepresence
robot can contribute to improving access to higher education.
 To provide students with a more dynamic and interesting learning environment, the
virtual telepresence robot can help in enhancing learning results.
 By eliminating the need for travel and lodging, the virtual telepresence robot can aid
in cost reduction.
5.5 Future Work
The project has great prospects for the future. Even as a complete product, there are multiple
ways to make it better and to extend the development work. Many of the planned
improvements could not be implemented due to time constraints. We hope these
recommendations will be received positively and pursued with great zeal.

 Machine learning and AI methods can be used to train it to interpret images and find
pathways based on previously stored images. This would make the virtual telepresence robot
better able to navigate its surroundings, even in unfamiliar areas.
 SLAM (Simultaneous Localization and Mapping) techniques can be used to explore an
environment without constant human supervision, so the virtual telepresence robot could
explore its surroundings without needing regular human input.
 Robot energy efficiency can be increased by investigating other power sources, such as
solar electricity, which will be more advantageous when used for monitoring. This
would increase the sustainability of the virtual telepresence robot and enable it to
function for longer periods of time without requiring recharging.
 Sensors can be added to measure temperature, humidity, and other important parameters.
As a result, the virtual telepresence robot could be employed for tasks such as monitoring
patients in hospitals or monitoring the environment in hazardous locations.
 The virtual telepresence robot can be strengthened by employing more robust materials or
by adding safety precautions. By doing this, the robot would be less likely to sustain
harm while in operation.
 By mass producing robots or using less expensive components, the virtual telepresence
robot can be made more affordable.

REFERENCES

[1] Nielsen C W, Goodrich M A, Ricks R W. Ecological Interfaces for Improving Mobile


Robot Teleoperation[J]. IEEE Transactions on Robotics, 2007, 23(5):927-941.
[2] https://docslib.org/doc/7371580/virtual-reality-teleoperation-robot-alexis-koopmann-
kressa-fox-phillip-raich-jenna-webster
[3] K. Castle. (2020, 10) Oculus Quest 2 Review. [Online]. Available:
https://www.rockpapershotgun.com/2020/10/13/oculus-quest-2-review/
[4] “Telipito es karbantarto.” [Online]. Available: http://mail.bekescsaba.hu:8080/mjpg/video.mjpg
[5] R. H. Espinosa, “Joystick API Documentation,” 8 1998. [Online]. Available:
https://www.kernel.org/doc/Documentation/input/joystick-api.txt
[6] A. Spirin, “pyPS4Controller.” [Online]. Available: https://pypi.org/project/pyPS4Controller
[7] “Project Site.” [Online]. Available: https://uofutah.sharepoint.com/sites/SeniorProject2020
[8] https://www.semanticscholar.org/paper/Omnidirectional-Mobile-Robot-Control-based-
on-Mixed-Wu-Xu/13cbecd65547839af3b2b74782739ba037f62d8a
[9] Ray G C, Guha S K. Relationship between the surface e.m.g. and muscular force[J].
Medical & Biological Engineering & Computing, 1983, 21(5):579-586.
[10] Li W, Yang C, Jiang Y, et al. Motion Planning for Omnidirectional Wheeled Mobile
Robot by Potential Field Method[J]. Journal of Advanced Transportation, 2017,
2017(3):1- 11.
[11] Gottmer M L. Merging Reality and Virtuality with Microsoft HoloLens[J]. 2015.
[12] Kim J, Mastnik S. EMG-based hand gesture recognition for realtime biosignal
interfacing[C]// International Conference on Intelligent User Interfaces, January 13-16,
2008, Gran Canaria, Canary Islands, Spain. DBLP, 2008:30-39.
[13] https://www.semanticscholar.org/paper/Camera-and-a-Virtual-Reality-Box-Alcabaza-
Legaspi/a25f962fdefa7e24d031ad16b09433523eec3ad8

[14] M. Lewis and J. Jacobson, “Game engines,” Communications of the ACM, vol. 45,
no. 1, p. 27, 2002.
[15] Lewis, M. and J. Jacobson. 2002. Game Engines in Scientific Research.
Communications of the ACM. 45(1): p. 27--31.
[16] M. Zyda, “From visual simulation to virtual reality to games,” Computer, vol. 38,
no. 9, pp. 25–32, 2005.
[17] https://docs.unity3d.com/Manual/index.html
[18] Seugling, A., Rölin, M.: Evaluation of physics engines and implementation of a
physics module in a 3d-authoring tool. Master’s thesis, Umea University Department of
Computing Science Sweden (2006)
[19] Kim S L 2014 Using Unity 3D to facilitate mobile augmented reality game
development 2014 IEEE World Forum on the Internet of Things (WF-IoT). IEEE
[20] Janus Liang, "Generation of a virtual reality-based automotive driving training
system for CAD education", Computer Applications in Engineering Education, vol. 17,
no. 2, pp. 148-166, 2009.
[21] Chung, S., Cho, S., & Kim, S. (2018). Interaction using Wearable Motion Capture
for the Virtual Reality Game. Journal of The Korean Society for Computer Game, 31(3),
81– 89. https://doi.org/10.22819/kscg.2018.31.3.010
[22] Halpern, J. (2019). Developing 2D Games with Unity. In Developing 2D Games
with Unity. https://doi.org/10.1007/978-1-4842-3772-4
[23] Yang Haoran, Shen Jing, Cheng Tufeng, Lai Haodong and Chen Jie, "Design and
implementation of virtual reality electronic building blocks based on unity3D
[J]", Computer Knowledge and Technology, vol. 17, no. 24, pp. 126-128, 2021.
[24] Gonzalez, J.: Pilot training next, air force ROTC partner for distance learning
instruction. Air Education and Training Command Public Affairs (2020)
[25] Lee, W.T., et al.: High-resolution 360 video foveated stitching for real-time VR.
Computer Graphics Forum. 36, 115–123 (2017)
[27] Cekus D, Posiadała B and Waryś P 2014 Integration of modeling in Solidworks and
Matlab/Simulink environments Arch. Mech. Eng.

[28] Timmis, H. (2021). Modeling with Fusion 360. In: Practical Arduino Engineering.
Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-6852-0_3
[29] Carnevale, A.; Mannocchi, I.; Sassi, M.S.H.; Carli, M.; De Luca, G.; Longo, U.G.;
Denaro, V.; Schena, E. Virtual Reality for Shoulder Rehabilitation: Accuracy Evaluation
of Oculus Quest 2. Sensors 2022, 22, 5511. https://doi.org/10.3390/s22155511
[30] Gareth Mitchell, The Raspberry Pi single-board computer will revolutionize
computer science teaching [For & against], vol. 7, no. 3, pp. 26, 2012.

ANNEX A

Unity Code
MJPEGStreamDecoder
using System.IO;
using System.Threading;
using System.Collections;
using System.Collections.Generic;
using System.Net;
using UnityEngine;
public class MJPEGStreamDecoder : MonoBehaviour
{
[SerializeField] bool tryOnStart = false;
[SerializeField] RenderTexture renderTexture;  // render target that receives each decoded frame
float RETRY_DELAY = 5f;
int MAX_RETRIES = 3;
int retryCount = 0;
public static string rpiIP;
byte[] nextFrame = null;
Thread worker;
int threadID = 0;

static System.Random randu;


List<BufferedStream> trackedBuffers = new List<BufferedStream>();
void Start()
{
/*string strHostName = "raspberrypi.local";
IPHostEntry ipEntry = System.Net.Dns.GetHostEntry(strHostName);

IPAddress[] addr = ipEntry.AddressList;
string ip = addr[1].ToString();*/
string ip = "192.168.43.22";
rpiIP = ip;
string ipString = "http://" + ip +":8000/stream.mjpg";
Debug.Log(ipString);
randu = new System.Random(Random.Range(0, 65536));
if (tryOnStart)
StartStream(ipString);
}
private void Update()
{
if (nextFrame != null)
{
SendFrame(nextFrame);
nextFrame = null;
}

}
private void OnDestroy()
{
foreach (var b in trackedBuffers)
{
if (b != null)
b.Close();
}
}
public void StartStream(string url)

{
retryCount = 0;
StopAllCoroutines();
foreach (var b in trackedBuffers)
b.Close();
worker = new Thread(() => ReadMJPEGStreamWorker(threadID = randu.Next(65536), url));
worker.Start();
}
void ReadMJPEGStreamWorker(int id, string url)
{
var webRequest = WebRequest.Create(url);
webRequest.Method = "GET";
List<byte> frameBuffer = new List<byte>();
int lastByte = 0x00;
bool addToBuffer = false;
BufferedStream buffer = null;
try
{

Stream stream = webRequest.GetResponse().GetResponseStream();


buffer = new BufferedStream(stream);
trackedBuffers.Add(buffer);
}
catch (System.Exception ex)
{
Debug.LogError(ex);
}
int newByte;

while (buffer != null)
{
if (threadID != id) return;
if (!buffer.CanRead)
{
Debug.LogError("Can't read buffer!");
break;
}
newByte = -1;
try
{
newByte = buffer.ReadByte();
}
catch
{
break;
}
if (newByte < 0)
{
continue; // End of data
}
if (addToBuffer)
frameBuffer.Add((byte)newByte);
if (lastByte == 0xFF)
{
if (!addToBuffer)
{

if (IsStartOfImage(newByte))
{
addToBuffer = true;
frameBuffer.Add((byte)lastByte);
frameBuffer.Add((byte)newByte);
}
}
else
{
if (newByte == 0xD9)
{
frameBuffer.Add((byte)newByte);
addToBuffer = false;
nextFrame = frameBuffer.ToArray();
frameBuffer.Clear();
}
}
}
lastByte = newByte;
}
if (retryCount < MAX_RETRIES)
{
retryCount++;
Debug.LogFormat("[{0}] Retrying Connection {1}...", id, retryCount);
foreach (var b in trackedBuffers)
b.Dispose();
trackedBuffers.Clear();

worker = new Thread(() => ReadMJPEGStreamWorker(threadID = randu.Next(65536), url));
worker.Start();
}
}
bool IsStartOfImage(int command)
{
switch (command)
{
case 0x8D:
Debug.Log("Command SOI");
return true;
case 0xC0:
Debug.Log("Command SOF0");
return true;
case 0xC2:
Debug.Log("Command SOF2");
return true;
case 0xC4:
Debug.Log("Command DHT");
break;
case 0xD8:
//Debug.Log("Command DQT");
return true;
case 0xDD:
Debug.Log("Command DRI");
break;
case 0xDA:

Debug.Log("Command SOS");
break;
case 0xFE:
Debug.Log("Command COM");
break;
case 0xD9:
Debug.Log("Command EOI");
break;
}

return false;
}
void SendFrame(byte[] bytes)
{
Texture2D texture2D = new Texture2D(2, 2);
texture2D.LoadImage(bytes);
if (texture2D.width == 2)
return; // Failure!
Graphics.Blit(texture2D, renderTexture);
Destroy(texture2D);
}
}

ControllerCommandTransmit

using UnityEngine;
using System.Collections;
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

public class ControllerCommandTransmit : MonoBehaviour {


bool socketReady = false;
TcpClient mySocket;
public NetworkStream theStream;
StreamWriter theWriter;
StreamReader theReader;
public String Host = "192.168.43.22";
public Int32 Port = 5001;
public bool lightStatus;
private OVRManager manager;
float time = 0f;
float timeDelay = 0.15f;
string HMDOrientation = "";
//[SerializeField] Rotation axesvals;
void Start() {
/*string strHostName = "raspberrypi.local";
IPHostEntry ipEntry = System.Net.Dns.GetHostEntry(strHostName);
IPAddress[] addr = ipEntry.AddressList;
Host = addr[1].ToString();

Debug.Log(Host);*/
Host = MJPEGStreamDecoder.rpiIP;
setupSocket ();
}
void Update() {
OVRInput.Update();
var primaryThumbstick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);

var secondaryThumbstick = OVRInput.Get(OVRInput.Axis2D.SecondaryThumbstick);


float primaryXaxis = Mathf.Round(primaryThumbstick.x * 100.0f) * 0.01f;
float primaryYaxis = Mathf.Round(primaryThumbstick.y * 100.0f) * 0.01f;
float secondaryXaxis = secondaryThumbstick.x;
float secondaryYaxis = secondaryThumbstick.y;
time = time + 1f*Time.deltaTime;
if (time >= timeDelay)
{ time = 0f;
GameObject HMDOrientationBody = GameObject.Find("HMDOrientationBody");
HMDOrientationTransmit HMDOrientationTransmit =
HMDOrientationBody.GetComponent<HMDOrientationTransmit>();
HMDOrientation = "aX=" + HMDOrientationTransmit.FinalRotationx.ToString() + "bY=" +
HMDOrientationTransmit.FinalRotationy.ToString() + "c";
writeSocket(HMDOrientation);
Debug.Log(HMDOrientation);
/*
writeSocket("X=" + HMDOrientationTransmit.FinalRotationx.ToString());
Debug.Log("X=" + HMDOrientationTransmit.FinalRotationx.ToString());
writeSocket("Y=" + HMDOrientationTransmit.FinalRotationy.ToString());
Debug.Log("Y=" + HMDOrientationTransmit.FinalRotationy.ToString()); */
if (OVRInput.Get(OVRInput.Button.Two)){

writeSocket("BBB");
Debug.Log("BBB");
}
if(OVRInput.Get(OVRInput.Button.One)){
writeSocket("AAA");
Debug.Log("AAA");
}
if(OVRInput.Get(OVRInput.Button.Three)){
writeSocket("XXX");
Debug.Log("XXX");
}
if(OVRInput.Get(OVRInput.Button.Four)){
writeSocket("YYY");
Debug.Log("YYY");
}
if(OVRInput.Get(OVRInput.Button.Start)){
writeSocket("STR");
Debug.Log("STR");
}
if(OVRInput.Get(OVRInput.Button.PrimaryThumbstick)){
writeSocket("LTS");
Debug.Log("LTS");
}

if(OVRInput.Get(OVRInput.Button.SecondaryThumbstick)){
writeSocket("RTS");
Debug.Log("RTS");

}
if(OVRInput.Get(OVRInput.Touch.PrimaryThumbstick)){
if(primaryXaxis > 0.4f){
writeSocket("PXF");
Debug.Log("PXF");
}else if(primaryXaxis < -0.4f){
writeSocket("PXR");
Debug.Log("PXR");
}else{ writeSocket("
PX0");
Debug.Log("PX0");
}
if(primaryYaxis > 0.4f){
writeSocket("PYF");
Debug.Log("PYF");
}else if(primaryYaxis < -0.4f){
writeSocket("PYR");
Debug.Log("PYR");
}else{ writeSocket("
PY0");
Debug.Log("PY0");
}

/*if(OVRInput.Get(OVRInput.Touch.SecondaryThumbstick)){
writeSocket("PX:" + primaryXaxis.ToString() + "," + "PY:" + primaryYaxis.ToString());
Debug.Log("PX:" + primaryXaxis.ToString() + "," + "PY:" + primaryYaxis.ToString());
writeSocket("SX:" + secondaryXaxis.ToString() + "," + "SY:" + secondaryYaxis.ToString());

Debug.Log("SX:" + secondaryXaxis.ToString() + "," + "SY:" + secondaryYaxis.ToString());
}else{
writeSocket("PX:" + primaryXaxis.ToString() + "," + "PY:" +
primaryYaxis.ToString()); Debug.Log("PX:" + primaryXaxis.ToString() + "," + "PY:" +
primaryYaxis.ToString());
}*/
}
if(OVRInput.Get(OVRInput.Touch.SecondaryThumbstick)){
if(secondaryXaxis > 0.4f){
writeSocket("SXF");
Debug.Log("SXF");
}else if(secondaryXaxis < -0.4f){
writeSocket("SXR");
Debug.Log("SXR");
}else{ writeSocket("
SX0");
Debug.Log("SX0");
}

if(secondaryYaxis > 0.4f){


writeSocket("SYF");
Debug.Log("SYF");
}else if(secondaryYaxis < -0.4f){
writeSocket("SYR");
Debug.Log("SYR");
}else{ writeSocket("
SY0");
Debug.Log("SY0");
}
/*writeSocket("SX:" + secondaryXaxis.ToString() + "," + "SY:" + secondaryYaxis.ToString());
Debug.Log("SX:" + secondaryXaxis.ToString() + "," + "SY:" + secondaryYaxis.ToString());*/
}
}
}
public void setupSocket() {
try {
mySocket = new TcpClient(Host, Port);
theStream = mySocket.GetStream();
theWriter = new StreamWriter(theStream);
theReader = new StreamReader(theStream);
socketReady = true;
}
catch (Exception e) {
Debug.Log("Socket error:" +
e);
}
}

public void writeSocket(string theLine) {


if (!socketReady)
return;
String tmpString = theLine;
theWriter.Write(tmpString);
theWriter.Flush();
}
public String readSocket() {
if (!socketReady)
return "";
if (theStream.DataAvailable)
return theReader.ReadLine();
return "NoData";
}
public void closeSocket() {
if (!socketReady)
return;
theWriter.Close();
theReader.Close();
mySocket.Close();
socketReady = false;
}
public void maintainConnection(){
if(!theStream.CanRead) {
setupSocket();
}

}
}

HMDOrientationTransmit

using UnityEngine;
using System.Collections;
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

public class HMDOrientationTransmit : MonoBehaviour {


public GameObject body;
public string Rotation = "";
float time = 0f;
float timeDelay = 0.1f;
public float FinalRotationx;
public float FinalRotationy;
public float FinalRotationz;
void Start() {
}

void Update() {
float Rotationx;
float Rotationy;
float Rotationz;
time = time + 1f*Time.deltaTime;
if (time >= timeDelay) {
time = 0f;
if(body.transform.localRotation.eulerAngles.x <= 180f)
{

Rotationx = body.transform.localRotation.eulerAngles.x;
}
else
{
Rotationx = body.transform.localRotation.eulerAngles.x - 360f;
}
if(body.transform.localRotation.eulerAngles.y <= 180f)
{
Rotationy = body.transform.localRotation.eulerAngles.y;
}
else
{
Rotationy = body.transform.localRotation.eulerAngles.y - 360f;
}
if(body.transform.localRotation.eulerAngles.z <= 180f)
{
Rotationz = body.transform.localRotation.eulerAngles.z;
}
else
{
Rotationz = body.transform.localRotation.eulerAngles.z - 360f;
}
FinalRotationx = Mathf.Round(Rotationx * 1.0f) * 1f;
FinalRotationy = Mathf.Round(Rotationy * 1.0f) * 1f;
FinalRotationz = Mathf.Round(Rotationz * 1.0f) * 1f;
Rotation = "X= " + Rotationx.ToString() + "Y= " + Rotationy.ToString();
//Debug.Log(Rotation);
}
}
}

Python Script for Motors

import socket
from gpiozero import Servo
from time import sleep
from gpiozero.pins.pigpio import PiGPIOFactory
import RPi.GPIO as GPIO
# Create a PiGPIOFactory instance
factory = PiGPIOFactory()
# Set GPIO mode
GPIO.setmode(GPIO.BCM)
# Define motor pins
motor1_enable_pin = 17
motor1_in1_pin = 27
motor1_in2_pin = 22
motor2_enable_pin = 18
motor2_in1_pin = 23
motor2_in2_pin = 24
# Set up motor pins as outputs
GPIO.setup(motor1_enable_pin, GPIO.OUT)
GPIO.setup(motor1_in1_pin, GPIO.OUT)
GPIO.setup(motor1_in2_pin, GPIO.OUT)
GPIO.setup(motor2_enable_pin, GPIO.OUT)
GPIO.setup(motor2_in1_pin, GPIO.OUT)
GPIO.setup(motor2_in2_pin, GPIO.OUT)
# Create servo instances

servoX = Servo(12, min_pulse_width=0.5/1000, max_pulse_width=2.5/1000,
pin_factory=factory)
servoY = Servo(13, min_pulse_width=0.5/1000, max_pulse_width=2.5/1000,
pin_factory=factory)
xVal = ''
yVal = ''
motorsCmnd = ''
backlog = 1
size = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('192.168.43.22', 5001))
s.listen(backlog)
def parseData():
    # Parse messages of the form "aX=<x>bY=<y>c" plus any trailing command characters.
    global xVal, yVal, motorsCmnd
    xVal = ''
    yVal = ''
    for i in range(0, len(data)):
        if data[i] == 'a':
            xVal = data[i + 3:data.find('b')]
        elif data[i] == 'Y':
            yVal = data[i + 2:data.find('c')]
        else:
            motorsCmnd = data[i]
def servosControl():
print("Running Servos")
print(xVal)
print(yVal)

servoX.value = int(xVal) / 90
servoY.value = int(yVal) / 90
def motorsControl():
print('motors')

# Block to move the robot forward
if motorsCmnd == 'SXF':
GPIO.output(motor1_in1_pin, GPIO.HIGH)
GPIO.output(motor1_in2_pin, GPIO.LOW)
GPIO.output(motor2_in1_pin, GPIO.HIGH)
GPIO.output(motor2_in2_pin, GPIO.LOW)
GPIO.output(motor1_enable_pin, GPIO.HIGH)
GPIO.output(motor2_enable_pin, GPIO.HIGH)
# Block to move the robot backward
elif motorsCmnd == 'SXR':
GPIO.output(motor1_in1_pin, GPIO.LOW)
GPIO.output(motor1_in2_pin, GPIO.HIGH)
GPIO.output(motor2_in1_pin, GPIO.LOW)
GPIO.output(motor2_in2_pin, GPIO.HIGH)
GPIO.output(motor1_enable_pin, GPIO.HIGH)
GPIO.output(motor2_enable_pin, GPIO.HIGH)
# Block to turn the robot left
elif motorsCmnd == 'SXL':
GPIO.output(motor1_in1_pin, GPIO.LOW)
GPIO.output(motor1_in2_pin, GPIO.HIGH)
GPIO.output(motor2_in1_pin, GPIO.HIGH)
GPIO.output(motor2_in2_pin, GPIO.LOW)
GPIO.output(motor1_enable_pin, GPIO.HIGH)
GPIO.output(motor2_enable_pin, GPIO.HIGH)
# Block to turn the robot right
elif motorsCmnd == 'SXR':

GPIO.output(motor1_in1_pin, GPIO.HIGH)
GPIO.output(motor1_in2_pin, GPIO.LOW)
GPIO.output(motor2_in1_pin, GPIO.LOW)
GPIO.output(motor2_in2_pin, GPIO.HIGH)
GPIO.output(motor1_enable_pin, GPIO.HIGH)
GPIO.output(motor2_enable_pin, GPIO.HIGH)
# Block to stop the robot
elif motorsCmnd == 'S':
GPIO.output(motor1_enable_pin, GPIO.LOW)
GPIO.output(motor2_enable_pin, GPIO.LOW)
# Main code
try:
    print("is waiting")
    client, address = s.accept()
    while 1:
        data = client.recv(size)
        if data:
            data = data.decode()   # decode the received bytes into a string
            print(data)
            parseData()
            if xVal and yVal:      # only drive the servos when both angles were parsed
                servosControl()
            motorsControl()
except:
    print("closing socket")
    client.close()
    s.close()

Camera Code

import io
import picamera
import logging
import socketserver
from threading import Condition
from http import server
PAGE = """\
<html>
<head>
<title>Raspberry Pi - Surveillance Camera</title>
</head>
<body>
<center><h1>Raspberry Pi - Surveillance Camera</h1></center>
<center><img src="stream.mjpg" width="640" height="480"></center>
</body>
</html>
"""
class StreamingOutput(object):
def __init__(self):
self.frame = None
self.buffer = io.BytesIO()
self.condition = Condition()
def write(self, buf):
if buf.startswith(b'\xff\xd8'):
# New frame, copy the existing buffer's content and notify all

# clients it's available
self.buffer.truncate()
with self.condition:
self.frame = self.buffer.getvalue()
self.condition.notify_all()
self.buffer.seek(0)
return self.buffer.write(buf)

class StreamingHandler(server.BaseHTTPRequestHandler):
def do_GET(self):
if self.path == '/':
# Redirect root URL to index.html
self.send_response(301)
self.send_header('Location', '/index.html')
self.end_headers()
elif self.path == '/index.html':
# Serve the HTML page with the video stream
content = PAGE.encode('utf-8')
self.send_response(200)
self.send_header('Content-Type', 'text/html')
self.send_header('Content-Length', len(content))
self.end_headers()
self.wfile.write(content)

elif self.path == '/stream.mjpg':


# Serve the video stream
self.send_response(200)
self.send_header('Age', 0)
self.send_header('Cache-Control', 'no-cache, private')

self.send_header('Pragma', 'no-cache')
self.send_header('Content-Type', 'multipart/x-mixed-replace; boundary=FRAME')
self.end_headers()
try:
while True:
with output_stream.condition:
output_stream.condition.wait()
frame = output_stream.frame
self.wfile.write(b'--FRAME\r\n')
self.send_header('Content-Type', 'image/jpeg')
self.send_header('Content-Length', len(frame))
self.end_headers()
self.wfile.write(frame)
self.wfile.write(b'\r\n')
except Exception as e:
logging.warning(
'Removed streaming client %s: %s',
self.client_address, str(e))
else:
# Handle 404 Not Found
self.send_error(404)
self.end_headers()

class StreamingServer(socketserver.ThreadingMixIn, server.HTTPServer):


allow_reuse_address = True
daemon_threads = True

with picamera.PiCamera(resolution='640x480', framerate=24) as cam:
output_stream = StreamingOutput()
# Uncomment the next line to change your Pi's Camera rotation (in degrees)
# cam.rotation = 90
cam.start_recording(output_stream, format='mjpeg')
try:
address = ('', 8000)

streaming_server = StreamingServer(address, StreamingHandler)


streaming_server.serve_forever()
finally:
cam.stop_recording()
