Thesis Telepresence Robot
DE-41 (MTS)
Abbas,
Farhan,
Ans
COLLEGE OF
ELECTRICAL AND MECHANICAL ENGINEERING
NATIONAL UNIVERSITY OF SCIENCES AND
TECHNOLOGY RAWALPINDI
2023
DE-41 MTS
PROJECT REPORT
DECLARATION
By signing this form, we certify that no portion of the work presented in this project has ever been used to support an application for another degree or qualification at any university. If any act of plagiarism is discovered, we accept full accountability for any disciplinary action taken against us, depending on the severity of the confirmed offence.
ACKNOWLEDGEMENTS
Alhamdulillah, our project has been successfully completed, and we are grateful to Allah for giving us the courage and motivation to keep moving forward and for assisting us along the way. We thank our supervisors, Assistant Professor Kanwal Naveed and Dr. Kanwar Faraz, who helped us tremendously with every issue; their help and guidance became a source of strong determination for us. We also want to express our gratitude to our parents and friends, because without their unwavering encouragement and support we might not have been able to finish our project. We will always be grateful for the extraordinary part they played in our journey: we accomplished more than we could have imagined thanks to their support, and when we had lost all hope, they gave us fresh hope.
ABSTRACT
The Virtual Telepresence Robot allows users to remotely control a robot from within a virtual environment. Virtual reality technology uses computer hardware and software to provide an immersive experience, giving the user greater spatial awareness and increased productivity. Users experience a 3D environment rather than relying on a 2D view of the scene on a screen. The primary component of a VR system is a headset that offers a stereoscopic view of the virtual environment. VR systems also incorporate input devices, such as game controllers, that allow users to interact with their surroundings. The robot is controlled using a VR headset and its controllers, and the user's movements are mapped to the robot's movements in the virtual environment. The system can be used for a variety of applications, including education, healthcare, and business.
TABLE OF CONTENTS
DECLARATION..........................................................................................................................................iii
ACKNOWLEDGEMENTS...........................................................................................................iv
ABSTRACT…...............................................................................................................................................v
TABLE OF CONTENTS...............................................................................................................................vi
LIST OF FIGURES.......................................................................................................................................ix
LIST OF TABLES.........................................................................................................................................xi
LIST OF SYMBOLS....................................................................................................................................xii
Chapter 1 – INTRODUCTION...................................................................................................................1
1.1 Introduction....................................................................................................................................1
1.2 Problem Statement.........................................................................................................................1
1.3 Solution..........................................................................................................................................1
1.4 Scope..............................................................................................................................................2
1.5 Deliverables....................................................................................................................................2
1.6 Structure.........................................................................................................................................2
Chapter 2 – BACKGROUND AND LITERATURE REVIEW...............................................................3
2.1 Background.................................................................................................................................3
2.2 Existing Models of Telepresence robot.........................................................................................3
2.2.1 Virtual reality teleoperation robot......................................................................................3
2.2.1.1 Control Software.....................................................................................................6
2.2.1.2 Vehicle specifications.............................................................................................7
2.2.2 Omnidirectional Telepresence Robot................................................................................9
2.2.3 Real-Time Telepresence robot with 360 degree view.....................................................14
Chapter 3 – DESIGNING AND MANUFACTURING...........................................................................17
3.1 Overview....................................................................................................................................17
3.2 Development in Game Engine....................................................................................................18
3.3 Industry Standards.......................................................................................................................18
3.3.1 Unreal Engine...................................................................................................................19
3.3.2 Unity................................................................................................................................20
3.3.3 Unity Features and Usage Details…...............................................................................20
3.3.3.1 2D and 3D Graphics............................................................................................20
3.3.3.2 Physics Engine…................................................................................................21
3.3.3.3 Character Control................................................................................................21
3.3.3.4 Collision.............................................................................................................22
3.3.3.5 Joint.....................................................................................................................22
3.3.3.6 Supported Languages for Scripting.....................................................................23
3.3.3.7 Audio and Video.................................................................................................24
3.3.3.8 Animations…......................................................................................................25
3.3.3.9 Animator Controller............................................................................................27
3.3.3.10 Animation State Machines................................................................................27
3.3.3.11 User Interface....................................................................................................28
3.3.3.12 Auto Layout......................................................................................................29
3.3.3.13 Navigation System............................................................................................30
3.3.3.13.1 NavMesh Agent...............................................................................31
3.3.3.13.2 Off-Mesh Link.................................................................................32
3.3.4 Unity Platforms Supported…...........................................................................................33
3.4 CAD SOFTWARE.....................................................................................................33
3.4.1 SolidWorks........................................................................................................................33
3.4.2 Fusion 360.........................................................................................................................34
3.5 HARDWARE COMPONENTS...............................................................................................34
3.5.1 Oculus Quest 2.................................................................................................................34
3.5.2 Raspberry Pi.....................................................................................................................35
3.5.3 Gimbal..............................................................................................................................36
3.5.3.1 Tilt Motion..........................................................................................................37
3.5.3.2 Pan Motion…......................................................................................................37
3.5.3.3 Roll Motion…....................................................................................................37
3.5.3.4 Gimbal Design…...............................................................................................38
3.6.3 Robot Base…....................................................................................................................38
Chapter 4 – RESULTS.....................................................................................................................41
4.4 Video of Raspberry Pi camera........................................................................................ 44
Chapter 5 - CONCLUSION AND FUTURE WORKS..............................................................................46
5.1 Conclusion....................................................................................................................................46
5.4.2 Surveillance.......................................................................................................47
5.4.3 Entertainment…............................................................................................... 47
5.4.4 Education….......................................................................................................48
5.3 Future Work.................................................................................................................................48
REFERENCES..............................................................................................................................................49
LIST OF FIGURES
Figure 48. HMD Orientation axes................................................................................................................44
Figure 49. Video of Raspberry Pi camera.....................................................................................................45
LIST OF TABLES
LIST OF SYMBOLS
Chapter 1 – INTRODUCTION
1.1 Introduction
Artificial intelligence (AI) chatbots use large language models (LLMs) to produce responses that resemble those of a human. These LLMs are complex algorithms built to handle enormous volumes of data, using machine learning methods to understand the information. Limited now only by their training parameters and the size of their datasets, they can generate responses that closely resemble human speech and converse naturally on a broad variety of topics.
The field of home automation is growing because it uses cloud computing to create networks between devices so that they can be controlled remotely without human intervention. With centralized control interfaces that can be accessed from computers, tablets, or smartphones, this technology gives users the ability to regulate a variety of home features, including lighting, temperature, and security systems.
1.3 Solution
Our project is the development of an online assistant to serve the needs of patients and
medical professionals. The project aims to integrate the revolutionary potential of Large
Language Models (LLMs) and Internet of Things (IoT) technologies to offer support and
companionship to individuals. The envisioned system will mitigate the adverse psychological
effects experienced by isolated patients, while the IoT services embedded within the application
will grant patients a sense of independence and liberty. Moreover, the project encompasses a
monitoring system allowing caregivers to oversee and address the needs of multiple patients
simultaneously, thereby alleviating the burden on the healthcare system, particularly evident
during the COVID-19 pandemic.
1.4 Deliverables
Following are the deliverables for the AI chatbot for home appliance control:
1.5 Structure
The structure of the final year project report is as follows:
Chapter 2 deals with the background and literature review, including different existing models and available solutions.
Chapter 3 covers the methodology we adopted to design and manufacture our project and explains how the project differs from existing products.
Chapter 4 deals with the results of the working project at different stages.
Chapter 5 concludes the report and explores future possibilities and directions in which the project can be taken.
Chapter 2 – BACKGROUND AND LITERATURE REVIEW
2.1 Background
The evolution of chatbots in home appliance control dates back to the late 20th century,
with early experiments in home automation systems. In the 1990s, pioneering projects explored
remote appliance control via text-based interfaces, laying the groundwork for modern chatbots.
Simple command-line interfaces emerged as precursors to contemporary chatbots, requiring
users to understand basic programming. With advancements in technology, particularly
smartphones and smart home devices, chatbots became more sophisticated and accessible to
users. Integration of artificial intelligence (AI) and natural language processing (NLP) further
enhanced chatbots' capabilities, enabling them to understand complex commands. Today,
chatbots are integral to smart home ecosystems, offering seamless integration and enabling users
to control appliances via voice commands or text messages, making them indispensable tools for
modern home automation.
Different models of Virtual Telepresence Robot have been developed over the years.
Some of them are given below:
Alexis et al. (2020) [2] linked a small automobile to a virtual reality display headset. The robot moved similarly to a radio-controlled vehicle. The user activates the application while wearing the headset to view the footage from the vehicle's cameras. The robot was guided to its destination by a remote control and moved in response to input from the camera. Two primary links run across the entire system: the VR camera link and the vehicle controller link. The system's goal was to give people the ability to experience the environment as if they were present in the car. Using an external controller and a camera feed, the user can steer the vehicle and explore its surroundings. The following section describes how the project was implemented. The figure below shows the mobility system, virtual reality system, and control system.
Figure 1 System’s overview
The main system controller was a Raspberry Pi 4, located inside the car, which handles most of the connection logic. The Raspberry Pi 4 compiles the camera feed and streams it online so that the VR system can consume it. Additionally, it decodes Bluetooth signals from the controller that manages the vehicle's motor. Most of the debugging was done on board the Pi because it serves as the primary connection point for all system components. Figure 2 depicts the system in more depth. The colors of the arrows signify critical data flows: the orange arrows depict the transfer of data from the user to the vehicle's motor, while the green arrows show the data that provides the user with video feedback.
Figure 2 Overview of the System and Connections
Because it was readily available during the COVID-19 outbreak, the Oculus Quest 2
headset [3] was chosen. An illustration of the VR technology employed in this system is
shown in Figure 3.
The Raspberry Pi ran Raspberry Pi OS and was set up with a camera interface. A Python script creates a server on the Raspberry Pi's IP address, receives frames from the Pi's camera, and serves them at a URL. Using this method, the RPi can provide a real-time, low-latency video stream that, given the correct IP address, can be viewed in any web browser. The stream's IP address was added to the VR system using a C# Unity script. Figure 1.4 shows a screenshot of a Unity scene playing a camera feed. Any camera stream can be used to test the Unity environment; the tests listed here used the Turkish scripts from [4].
The Oculus Quest 2 headset was then used to run the Unity application as a system test. To set up this test, a cable is connected from the laptop to the VR headset; the Unity project is then run to see the finished VR experience.
The Python script links the PS4 controller's input variables to the motors' speed-control output signals. The output signal was a PWM (Pulse Width Modulation) signal, which can define up to 35 different speeds via 35 different duty cycles. As the joystick is pushed further, the output duty cycle increases, causing the speed to increase.
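As a rough illustration of this speed-mapping step, quantizing a joystick reading into 35 discrete duty cycles could be sketched as follows. This is a minimal sketch, not the paper's code; the function name and the 0–100 % duty range are assumptions:

```python
def joystick_to_duty_cycle(axis_value, steps=35, max_duty=100.0):
    """Map a joystick axis value in [0.0, 1.0] to one of `steps`
    discrete PWM duty cycles between 0% and `max_duty`%."""
    # Clamp the input so noisy controller readings stay in range.
    axis_value = max(0.0, min(1.0, axis_value))
    # Quantize into `steps` levels: 0.0 maps to step 0, 1.0 to the top step.
    step = round(axis_value * (steps - 1))
    return step * max_duty / (steps - 1)
```

The quantized value would then be handed to the PWM output that drives the electronic speed controller.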
Figure 6 Wiring Schematic of System
A DC-DC converter powered by a 22.2-volt battery supplied 5 volts to the fans and level shifters. A cooling fan kept the Raspberry Pi from shutting down due to thermal problems. Because the 5-volt supply powers the Raspberry Pi independently of the 22.2-volt system, the Pi can be used without powering the ESC. To prevent the power system's wiring harness from melting, an 80-amp circuit breaker was employed as the main power switch for the 22.2-volt system. Figure 7 displays a picture of the interior of the car, which houses most of the parts.
Figure 7 Close view of Robotic Hardware
A laptop, Microsoft's HoloLens mixed reality headset, the OMR, the MYO bracelet (a wearable device that records myoelectric impulses), and several other devices were used. The HoloLens control application and the MYO bracelet software were developed on this laptop. Four omni wheels (OW) on the OMR allowed unrestricted movement in any direction, and commands were easy to program and execute [10]. Figure 8 depicts the system configuration for this technique.
Figure 9 HoloLens device
Depending on the gestures the user makes while wearing the MYO bracelet on their forearm, the forearm muscles emit different sEMG signals. The MYO bracelet's highly sensitive sensors collect these signals, which are then analyzed by built-in algorithms to identify the gesture and transmit the corresponding command over Bluetooth. At this point the MYO can identify various user motions, e.g., spreading the fingers, clenching the fist, and waving the hand back and forth. A specific gesture is displayed in Figure 10.
Features were extracted after signal processing according to the various gestures depicted in [12]. The MYO bracelet collects data from eight sensors, but the resulting feature set is too large to process in full. As a result, principal component analysis was applied to reduce the dimensionality of the data and compress it. The flowchart is displayed in Figure 11.
Figure 11 Flow of process
The sEMG data were processed to represent acceleration and deceleration states. The resulting velocity values are combined in the Unity3D project to direct the motion state of the omnidirectional mobile robot (OMR), whose ROS system receives this data. Figure 13 shows the system under sEMG signal regulation, and Figure 14 shows the OMR speed. Comparing the two figures shows how the sEMG signal regulates the OMR velocity in real time: when the operator clenched his fist, the robot accelerated; when he spread his fingers apart, it decelerated; and in the neutral state the speed stayed constant.
2.2.3 Real-Time Telepresence Robot with 360-Degree View
To transmit live 360-degree video, Dan Vincent G. Alcabaza et al. (2019) proposed connecting a Ricoh Theta S to an Android smartphone mounted on a BoboVR headset [13]. The suggested method uses a 360-degree camera to record an all-encompassing scene and an app that feeds the recorded 360-degree run-time footage from the server to the client's Android smartphone. The phone's IMU selects the proper viewport from the surrounding scene. Depth can be perceived because of the difference between the baseline and the focal length. The block diagram in Figure 15 depicts the flow of the proposed system.
Over the Internet, live footage was transmitted from a PC to an Android smartphone. A section of a sphere was projected onto a planar image (an equirectangular projection), with longitude and latitude serving as its horizontal and vertical coordinates, respectively. The generated videos were then mapped onto a 3D spherical model. Figure 16 illustrates how the 3D spherical model was projected from the 2D equirectangular footage.
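The longitude/latitude mapping described above can be sketched as a short function. This is an illustrative reconstruction, not the authors' code; the coordinate conventions (u, v normalized to [0, 1], a y-up axis system as used in Unity) are assumptions:

```python
import math

def equirect_to_sphere(u, v, radius=1.0):
    """Map normalized equirectangular coordinates (u, v) in [0, 1]
    to a 3D point on a sphere.  u spans longitude -pi..pi and
    v spans latitude -pi/2..pi/2, matching the planar projection."""
    lon = (u - 0.5) * 2.0 * math.pi      # horizontal axis -> longitude
    lat = (v - 0.5) * math.pi            # vertical axis   -> latitude
    x = radius * math.cos(lat) * math.sin(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Texturing a sphere mesh with the equirectangular video is the inverse of this mapping, applied per vertex.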
There is a stitching mismatch between the two pictures taken by the Ricoh Theta S, because some parts of the spherical environment are not completely stitched together (see Figure 17).
The data in Table 1 relate to the view in a particular direction that is easily visible in VR.
Table 1 Values of IMU sensor
Table 2 displays the data gathered to evaluate the geometric realism of binocular stereoscopy, considering the baseline and focal lengths.
Table 2 Values gathered by Binocular Stereo
Chapter 3 – METHODOLOGY
3.1 Overview
A real-time video feed from a Raspberry Pi camera, mounted on a robot in a remote location, is broadcast to a web server over the local IP. A Python script processes the individual frames of the camera stream at the desired framerate and pushes them onto the server. The video stream is integrated into the Unity-based Android application for the VR headset: a C# script loads frames of the video stream from the web server and projects them onto a flat game object acting as a display screen within the immersive 3D environment of the Unity application. The VR headset can be used remotely to access this feed.
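A minimal sketch of the frame-serving idea is given below, using Python's standard http.server rather than the project's actual script (whose framework and URL layout are not specified here); publish_frame and start_server are illustrative names:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

latest_frame = b""          # most recent JPEG bytes, updated by the capture loop
frame_lock = threading.Lock()

def publish_frame(jpeg_bytes):
    """Called by the camera capture loop each time a new frame is ready."""
    global latest_frame
    with frame_lock:
        latest_frame = jpeg_bytes

class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the most recent frame as a plain JPEG response.
        with frame_lock:
            frame = latest_frame
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.send_header("Content-Length", str(len(frame)))
        self.end_headers()
        self.wfile.write(frame)

    def log_message(self, *args):
        pass  # suppress per-request console logging

def start_server(port=0):
    """Start the frame server in a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), FrameHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In the real system a camera loop would call publish_frame at the desired framerate, and the Unity client would poll the URL for frames.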
The control inputs from the VR headset's game controllers are fetched in the Unity application. A C# script fetches these inputs from the game controllers and transmits them to the Raspberry Pi on the mobile robot using web sockets over a local IP. The Raspberry Pi on the robot binds itself to the web socket port of the local IP and accesses these control inputs using a script written in Python. The algorithm in the script makes decisions regarding the movement of the robot according to the received inputs and changes the states of the RPi GPIO pins that feed the drive circuit of the robot motors.
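The decision step on the Raspberry Pi could look roughly like the following pure function, which maps thumbstick axes to GPIO pin states. The pin numbers, dead-zone value, and two-motor layout are assumptions for illustration; the real drive circuit may differ:

```python
# Motor driver pins (hypothetical BCM numbering for a two-motor drive circuit).
LEFT_FWD, LEFT_REV, RIGHT_FWD, RIGHT_REV = 17, 18, 22, 23

def decide_pin_states(x, y, dead_zone=0.2):
    """Translate thumbstick axes (x: turn, y: forward/back), each in
    [-1, 1], into high/low states for the motor driver pins."""
    states = {LEFT_FWD: 0, LEFT_REV: 0, RIGHT_FWD: 0, RIGHT_REV: 0}
    if abs(y) >= dead_zone:               # forward / backward dominates
        fwd = y > 0
        states[LEFT_FWD] = states[RIGHT_FWD] = int(fwd)
        states[LEFT_REV] = states[RIGHT_REV] = int(not fwd)
    elif abs(x) >= dead_zone:             # spin turn in place
        if x > 0:                         # turn right
            states[LEFT_FWD] = states[RIGHT_REV] = 1
        else:                             # turn left
            states[RIGHT_FWD] = states[LEFT_REV] = 1
    return states                         # all zeros -> stop
```

The returned dictionary would then be written to the GPIO outputs on each received web socket message.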
The VR headset has a built-in 6-DOF IMU (Inertial Measurement Unit) to monitor the orientation of the headset. The axis values are fetched by the Tracked Pose Driver in Unity XR, converted into their corresponding Euler angles, and sent to the Raspberry Pi over web sockets through a C# script. The RPi receives these Euler angles and controls the movement of the servos in the camera gimbal, ensuring the gimbal's orientation stays in sync with the HMD orientation of the VR headset.
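On the RPi side, mapping a received Euler angle to a servo PWM duty cycle might be sketched as below. The 2.5–12.5 % duty range at 50 Hz is a typical hobby-servo convention and an assumption here, not a value taken from the report:

```python
def euler_to_servo_duty(angle_deg, min_angle=-90.0, max_angle=90.0):
    """Convert a headset Euler angle (degrees) into a hobby-servo PWM
    duty cycle, assuming a 50 Hz servo where 2.5% duty = -90 deg and
    12.5% duty = +90 deg (typical SG90-style values)."""
    # Clamp so out-of-range head motion cannot over-drive the servo.
    angle_deg = max(min_angle, min(max_angle, angle_deg))
    span = max_angle - min_angle
    return 2.5 + (angle_deg - min_angle) / span * 10.0
```

The resulting duty cycle would be applied to the pan or tilt servo channel each time a new orientation message arrives.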
3.2 Development in Game Engine
Game development engines let us create high-quality games and apps. A game engine hides much of the underlying complexity behind APIs, libraries, and plugins, which allows the developer to focus more on creativity; developers can make instant changes and test their code, which reduces development time. There are many game engines: some focus on only 2D or only 3D graphics, while others, such as Unity and Unreal Engine, can be used to create both 2D and 3D games. Game engines are also used in scientific research, mainly because of their 3D navigation and coordinate systems; some uses are discussed in [14]. In our project we chose Unity because it provides realism and great rendering quality in VR, and users report that Unreal Engine has a somewhat higher learning curve than Unity, which is an easier platform to dive into and start producing on.
3.3.2 Unity
Unity is a well-known game development engine that provides features that make development easy. It uses a component-based architecture, so scripts can be attached to different components, which takes re-usability to another level. It allows users to work with different programming languages such as C#, JavaScript, and Boo. Its viewport follows the what-you-see-is-what-you-get approach, making game design and graphics easy to calibrate. We are using Unity for our project to design an app.
3.3.3.2 Physics Engine
The physics engine in Unity helps programmers ensure that objects accelerate and move accurately; collisions, gravity, and other natural forces are handled by it. Unity supports two physics engines [18]: the object-oriented physics engine, which utilizes a single thread and core by default, and the data-oriented technology stack, a newer internal design optimized for speed, lightness, and multi-threading.
The Character Controller does not respond to forces on its own, nor does it move rigidbodies out of the way. The OnControllerColliderHit() function in scripting allows you to apply forces to any object the Character Controller collides with, in order to push rigidbodies or other objects. However, using a rigidbody rather than a Character Controller may be preferable if you want your player character to be affected by physics.
3.3.3.4 Collision
In Unity, colliders must be used to set up collisions between GameObjects. A collider, attached to a GameObject, defines the GameObject's shape for the purpose of physical collisions, and collision events can then be handled through it. Since a collider is invisible, it does not need to match the GameObject's mesh exactly [20]; in games, a rough approximation of the mesh is frequently more efficient and indistinguishable in practice. Primitive collider types are the simplest kinds of colliders.
3.3.3.5 Joint
A joint connects one rigidbody to another. Joint limits restrict movement by applying forces to the rigid bodies, and joints give rigidbodies additional constrained degrees of freedom. Character joints are mostly employed for ragdoll effects; they are extended ball-socket joints that let you limit the joint along each axis [21]. The twist axis offers the most control over the limits, as a lower and an upper limit can be defined in degrees; for example, rotation around the twist axis can be restricted to a range of -30 to 60 degrees.
Figure 24 Movement axes system
To limit the joint's strength, use the break force and break torque properties. If these are less than infinity and the object is subjected to a force or torque larger than these limits, the Fixed Joint will be destroyed and the object will no longer be constrained.
3.3.3.7 Audio and Video
Unity has tools for blending and augmenting sound with predefined effects and supports 3D audio. It also offers a video component that enables video to be incorporated into the experience. Developers and artists can produce audio and visual assets outside of Unity and then add them to their Unity projects.
AIFF, WAV, MP3, and Ogg audio files can be imported into Unity by dragging them into the Project window. After importing an audio file, an audio clip can be added to an audio source or called from a script. Unity's music tracking plug-ins use short audio samples as "instruments", which are then combined to play songs; tracker modules in the .mod, .it, .s3m, and .xm formats can be imported and used the same way as conventional audio clips. Unity can also use scripts to record audio tracks directly from a computer microphone; the Microphone class has a straightforward API that makes it simple to find available microphones.
Figure 26 Supported formats of audio files.
3.3.3.8 Animations
The Unity animation system is sometimes referred to as "Mecanim", a name that reflects the system's user-friendly approach to specifying and setting up object and character animations. Body parts can also be given coded logic to animate the character. Multiple body parts can be animated using a single animation script, which makes it simpler to reuse animation code and saves time by eliminating the need to write separate code for every part. Animations produced both inside and outside of Unity can be blended and incorporated into a single game [23].
Figure 27 Animator window in Unity
3.3.3.9 Animator Controller
Animator Controllers let you organize and manage collections of animations for characters and other animated GameObjects. Controllers contain references to the animation clips used and can sequence them using state machines (think of them as flow charts, or simple programs built in Unity's visual programming language).
A "state machine" describes how a controller manages different animation states and the transitions between them; it resembles a diagram. The major portion of the layout section shows a dark grey grid. Right-click on the grid to add a new state node. To pan the view, press and hold the middle mouse button or use Alt/Option-drag. Click and drag the state nodes to rearrange your state machine layout.
When the Rect Tool is used to alter an object's size, it typically modifies the object's local scale, for both 3D objects and 2D Sprites in the 2D system. When applied to an object that has a Rect Transform, however, it alters the width and height while leaving the local scale alone. This resizing does not affect font sizes, the borders of sliced images, or similar elements.
Understanding the distinction between global and local navigation is among the most crucial navigation concepts. Global navigation is used to find paths across the world; it is expensive, consuming a lot of memory and computing power. The linear list of polygons describing the path is a versatile data format for steering, and it can be locally altered as the agent's position changes. The goal of local navigation is to determine the most effective path to the next corner without running into other agents or moving objects.
Many navigation applications need to handle obstacles other than agents, such as
vehicles or the typical boxes and barrels found in shooter games. Obstacles can be
dealt with through local obstacle avoidance or through global pathfinding. Local
obstacle avoidance is the best strategy for moving obstacles, since it lets the
agent anticipate and steer around them. When an obstacle becomes stationary and
blocks the passage of all agents, it should instead affect global navigation.
Altering the NavMesh in this way is called carving: the procedure detects where
portions of the obstacle touch the NavMesh and cuts holes in it. Local collision
avoidance is also sometimes employed to steer clear of sparsely placed obstacles,
but because the algorithm is local, it only considers imminent collisions; it
cannot avoid traps or handle an obstacle that fully blocks a path. Carving can be
used to resolve these situations.
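The relationship between global pathfinding and carving can be sketched with a toy example, assuming a plain 2D grid in place of a real NavMesh (the cell coordinates and the obstacle below are invented for illustration):

```python
# Global navigation as breadth-first search over walkable cells of a toy
# grid. "Carving" an obstacle removes cells, so the planner routes around
# the hole instead of through it.
from collections import deque

def find_path(walkable, start, goal):
    """Return a list of (x, y) cells from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # passage fully blocked

grid = {(x, y) for x in range(5) for y in range(3)}
carved = grid - {(2, 0), (2, 1)}          # obstacle carves holes in the mesh
path = find_path(carved, (0, 0), (4, 0))  # routes around the carved cells
```

A real NavMesh plans over polygons rather than grid cells, but the principle is the same: carving changes the global search space, while local avoidance only reacts to nearby agents.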
Figure 34 NavMesh Agent
3.3.4 Unity Platforms Supported
Unity is one of the few engines compatible with this many platforms [25].
The Unity game engine now enables making games for over 19 platforms, including
mobile, desktop, consoles, and virtual reality (VR), although the Unity editor
itself is only available for Windows, macOS, and Linux.
3.4.1 SolidWorks
SolidWorks is a Microsoft Windows-based program for computer-aided
design (CAD) and computer-aided engineering (CAE) [27].
3.4.2 Fusion 360
Fusion 360 is a cloud-based platform of 3D CAD, CAM, CAE, and PCB
modelling tools used for product design and manufacturing. We chose it over
SolidWorks because of its additional advantages. Alongside Windows, macOS, and
web browsers, simplified apps are available for Android and iOS, and a free,
feature-restricted personal edition is offered for home-based, non-commercial
use [28]. Fusion 360 includes sheet metal, simulation, documentation, and 3D
modelling, and can control production procedures including milling, turning,
machining, and additive manufacturing. It also provides electronic design
automation (EDA) functions including component management, PCB design, and
schematic design, and can be used for a variety of complex simulation tasks
(FEA), rendering, animation, and generative design.
Figure 39 VR headset- Oculus Quest 2
Several actions are taken automatically when Unity's VR support is enabled: every
Camera in the scene renders directly to the head-mounted display (HMD), and field
of view, head tracking, and positional tracking are all applied automatically when
the view and projection matrices are adjusted. While the HMD drives the camera,
its values cannot be changed directly, although the field of view can still be
set manually to a specific value.
3.5.2 Raspberry Pi
The Raspberry Pi is a compact, affordable, pocket-sized personal computer
that connects to a computer monitor or TV and is operated with a standard keyboard
and mouse. This capable little device lets anyone learn to program and do
everything a desktop computer can, such as watching high-definition video,
browsing the web, creating spreadsheets and Word documents, and playing games
[30]. From the Pi 1 to the Pi 4, the Raspberry Pi series has seen many upgrades;
we use a Raspberry Pi 3B for our project.
Figure 40 Raspberry Pi
The languages most frequently used on the Raspberry Pi are C, C++, and Python.
We use Python on the Raspberry Pi for this project.
3.5.3 Gimbal
A gimbal is a device that keeps the camera steady while the robot is
moving so that it can capture stable video. The two most popular configurations
are 2-axis and 3-axis gimbals. A 3-axis stabilizer compensates for all three
rotations: yaw (left/right), pitch (up/down), and roll (side-to-side tilt). In our
situation, a 2-axis gimbal is used because it is adequate for the job.
Figure 41 Axes of rotation
3.5.3.4 Gimbal Design
We designed our gimbal in Fusion 360 and finalized it after several
iterations, since earlier designs produced shaky footage. The finalized design
of the gimbal is attached.
Figure 43 Robotic Base
After completing the design according to our requirements, the finalized form is
shown in the attached figures.
An LCD is attached to the robot to display readings such as the battery percentage
and other required values, and a button turns the power on and off.
Chapter 4 – RESULTS
The Virtual Telepresence Robot is complete and fulfils all the requirements committed in
the project deliverables. We receive a live video feed from the camera attached to the robot,
which is placed at a distant location where we are not present. The person wearing the VR
headset can successfully control the robot using the movement of the Oculus Quest 2 and its
controllers. The video feed has some latency and the gimbal has slight vibrations, both of
which can be improved. The prototype consists of a Raspberry Pi camera attached to the
gimbal, which has two servo motors for the pitch and yaw movements of the camera. An
Arduino 16x2 LCD is attached to the front of the robot, as visible, to display values such as
the battery charge percentage, the commands currently arriving from the Oculus, and the
head-mounted display (HMD) orientation coordinates.
The orientation of a VR headset changes as the user rotates their head while wearing it.
Sensors in the headset monitor this change in orientation in three dimensions and use it to
refresh the user's view. As the user moves their head, the HMD coordinate values change
continuously; these values can be shown on the HMD's screen as well as on a laptop, so the
user can observe how their head motions affect the virtual environment.
HMD coordinates are expressed in a coordinate system centred on the user's head: the
X-axis points to the right, the Y-axis points up, and the Z-axis points forward. HMD
coordinates are a useful resource for VR headset users: they let users control their
perspective and interact with objects in the virtual environment, and they show how head
movements change the environment. The data showing this change is shown in the figure.
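Unity reports Euler angles in the 0–360° range, so the signed, head-centred coordinates described above require a small remapping, which the annexed HMDOrientationTransmit script performs per axis. A minimal sketch:

```python
# Sketch of the Euler-angle remapping used when transmitting HMD
# orientation: Unity reports each axis in [0, 360), but a signed
# range is more convenient for describing head movement and for
# driving the gimbal.
def to_signed_degrees(angle):
    """Map an angle in [0, 360) to the equivalent value in (-180, 180]."""
    return angle if angle <= 180 else angle - 360

print(to_signed_degrees(90))   # 90  (head turned one way stays positive)
print(to_signed_degrees(350))  # -10 (just past centre becomes negative)
```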
Figure 46 Data of VR headset movement
A signal is transmitted to the Raspberry Pi when any button of the controller is pressed.
After interpreting the signal, the Raspberry Pi commands the servos, and the gimbal rotates
in response. Since the camera is mounted on the gimbal, the two move together, so the user
can look in any direction simply by pressing a button on the controller. The figure below
depicts the values sent from the controller to the Raspberry Pi; the Y-axis denotes up-down
motion, while the X-axis denotes left-right motion.
The Raspberry Pi determines the new location of the gimbal using the data provided: the new
position is the gimbal's present position plus the received value. The gimbal and camera
constantly shift in response to the user's input. Whenever the user adjusts the controller,
the Raspberry Pi calculates the new position of the gimbal and sends the appropriate
command to the servos, which then move the camera and gimbal to the new location.
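The incremental update described above can be sketched as follows; the step size and the ±90° servo travel are assumed values for illustration, not figures from the report:

```python
# Sketch of the gimbal position update: add the scaled controller input
# to the current angle and clamp to the servo's travel.
def update_gimbal(current_deg, axis_value, step=5.0, limit=90.0):
    """axis_value is the thumbstick reading in [-1, 1]; returns the
    new gimbal angle in degrees, clamped to [-limit, limit]."""
    new_deg = current_deg + axis_value * step
    return max(-limit, min(limit, new_deg))

yaw = 0.0
yaw = update_gimbal(yaw, 1.0)   # full deflection one way -> 5.0 degrees
yaw = update_gimbal(yaw, -0.5)  # half deflection back    -> 2.5 degrees
```

Clamping matters in practice: without it, holding the stick would eventually command an angle beyond what the servo can reach.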
Figure 47 Data of controller movement
The data transferred from the VR headset to the Raspberry Pi contains values ranging from
-1 to 1 that indicate which action to perform. These values are sent to the Raspberry Pi in
a continuous stream. Below is the table containing the data of the VR controllers.
Table 3 Controller movements
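The thresholding that turns a continuous axis value into one of the discrete command strings can be sketched as below; the 0.4 deadzone matches the controller script in Annex A, while the function name is ours:

```python
# Map a thumbstick axis in [-1, 1] to a 3-character command string:
# forward past the deadzone, reverse past it the other way, else rest.
def axis_to_command(value, prefix="PX", deadzone=0.4):
    if value > deadzone:
        return prefix + "F"
    if value < -deadzone:
        return prefix + "R"
    return prefix + "0"

print(axis_to_command(0.7))   # PXF
print(axis_to_command(-0.9))  # PXR
print(axis_to_command(0.1))   # PX0
```

The deadzone keeps small, unintentional stick movements from generating spurious commands.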
4.3 Head Mounted Display Orientation
The head-mounted display orientation axes track the position and direction of the
HMD while data travels between the camera and the VR headset. Three axes, depicted in red,
green, and blue, serve as the HMD's reference orientation: a right-pointing X-axis, an
upward Y-axis, and a forward-pointing Z-axis. Accelerometers, gyroscopes, and magnetometers
in the HMD track the device's movement, and their data is used to determine the HMD's
position and orientation. This information is then used to update the virtual environment
the user is viewing.
Chapter 5 – CONCLUSION AND FUTURE WORK
5.1 Conclusion
Our final year project has been manufactured and fulfils all of its requirements;
the objectives set at the start have been met. In the medical field it enables doctors to
attend to patients at remote locations: the doctor wears the VR headset while the robot is
stationed at a fixed spot that is easily accessible to patients, and over Wi-Fi the doctor
can examine patients and prescribe medicine. It can be used for surveillance purposes as
well, by providing video access to remote locations where sending humans is dangerous. For
entertainment purposes we have installed some games, which can serve as a source of relief
in a hectic routine. It can serve educational purposes too, by giving a student video access
to the classroom in VR when they are ill or cannot attend class physically.
We can market our project by focusing on the widest audience in every field and
highlighting the broad usage of the Virtual Telepresence Robot.
• The ability to work from any location can increase productivity, reduce travel costs,
and save time.
• The robot can be controlled by the user from a safe distance, which is useful in
hazardous or dangerous circumstances.
• The robot can help people communicate more effectively by allowing them to see and hear
each other in real time, which promotes connection and improves teamwork.
• The ability to access data and resources from anywhere in the world can help users save
time and money.
• The ability to engage with virtual settings and objects can help improve learning
outcomes, making learning more interesting and memorable.
• By enabling people to work from home or other relaxing places, the robot can help reduce
stress, decrease the risk of burnout, and improve work-life balance.
The technology is still in a phase of improvement, so some limitations remain:
• There are technical limitations such as shakiness in the gimbal and latency in the video.
• Adoption may be constrained by the cost advantage of human labor, which is inexpensively
available in most countries.
• Since many individuals are unfamiliar with the technology and find it new, acceptance may
be challenging.
A few of the benefits of the virtual telepresence robot in different fields are listed below.
5.4.1 Medical
• By allowing doctors to treat patients who are in remote locations, it can help extend
access to care.
• It can help raise the standard of care by enabling doctors to offer consultations and
evaluations in real time.
• It can lower expenses by reducing the need for travel and lodging.
5.4.2 Surveillance
• The 24/7 surveillance of distant places made possible by the virtual telepresence robot
can help boost security.
• By allowing security officers to work from a safe distance, the virtual telepresence
robot can help lower the risk of injury.
• By letting security staff watch over many places at once, it can increase efficiency.
5.4.3 Entertainment
• The virtual telepresence robot can boost participation by giving users a more engaging
experience.
• It provides the ability to connect with people who are located elsewhere, which can help
lessen isolation.
• It can enhance wellbeing by giving people a means of relaxing and de-stressing.
5.4.4 Education
• By enabling students to attend classes from distant locations, the virtual telepresence
robot can improve access to higher education.
• It can help enhance learning outcomes by providing students with a more dynamic and
interesting learning environment.
• By eliminating the need for travel and lodging, it can aid in cost reduction.
5.5 Future Work
The project has great prospects for the future. As a complete product there are multiple
ways to make it even better and to extend the development work. Many of the planned
improvements could not be implemented due to time constraints; we hope these
recommendations will be received positively and pursued with great zeal.
• Machine learning and AI methods can be used to train the robot to interpret images and
find paths based on previously stored images. The virtual telepresence robot would then be
better able to navigate its surroundings, even in unfamiliar areas.
• SLAM (Simultaneous Localization and Mapping) techniques can be used to explore an
environment without constant human supervision, so the robot would not need regular human
input while exploring.
• Energy efficiency can be increased by investigating other power sources, such as solar
power, which would be especially advantageous for monitoring applications. This would make
the virtual telepresence robot more sustainable and enable it to function for longer
periods without recharging.
• Sensors can detect temperature, humidity, and other vital signs, so the robot could be
employed for several tasks, such as monitoring patients in hospitals or the environment in
hazardous locations.
• The robot can be strengthened by employing more robust materials or by adding safety
features, making it less likely to sustain damage during operation.
• By mass-producing robots or using less expensive components, the virtual telepresence
robot can be made more affordable.
REFERENCES
[14] M. Lewis and J. Jacobson, “Game engines,” Communications of the ACM, vol. 45,
no. 1, p. 27, 2002.
[15] M. Lewis and J. Jacobson, “Game engines in scientific research,” Communications
of the ACM, vol. 45, no. 1, pp. 27–31, 2002.
[16] M. Zyda, “From visual simulation to virtual reality to games,” Computer, vol. 38,
no. 9, pp. 25–32, 2005.
[17] Unity Technologies, Unity User Manual, https://docs.unity3d.com/Manual/index.html
[18] A. Seugling and M. Rölin, “Evaluation of physics engines and implementation of a
physics module in a 3D-authoring tool,” Master’s thesis, Department of Computing
Science, Umeå University, Sweden, 2006.
[19] S. L. Kim, “Using Unity 3D to facilitate mobile augmented reality game
development,” in 2014 IEEE World Forum on Internet of Things (WF-IoT), IEEE, 2014.
[20] J. Liang, “Generation of a virtual reality-based automotive driving training
system for CAD education,” Computer Applications in Engineering Education, vol. 17,
no. 2, pp. 148–166, 2009.
[21] S. Chung, S. Cho, and S. Kim, “Interaction using wearable motion capture for the
virtual reality game,” Journal of The Korean Society for Computer Game, vol. 31, no. 3,
pp. 81–89, 2018. https://doi.org/10.22819/kscg.2018.31.3.010
[22] J. Halpern, Developing 2D Games with Unity, Apress, 2019.
https://doi.org/10.1007/978-1-4842-3772-4
[23] H. Yang, J. Shen, T. Cheng, H. Lai, and J. Chen, “Design and implementation of
virtual reality electronic building blocks based on Unity3D,” Computer Knowledge and
Technology, vol. 17, no. 24, pp. 126–128, 2021.
[24] J. Gonzalez, “Pilot training next, air force ROTC partner for distance learning
instruction,” Air Education and Training Command Public Affairs, 2020.
[25] W. T. Lee et al., “High-resolution 360 video foveated stitching for real-time VR,”
Computer Graphics Forum, vol. 36, pp. 115–123, 2017.
[27] D. Cekus, B. Posiadała, and P. Waryś, “Integration of modeling in SolidWorks and
Matlab/Simulink environments,” Archive of Mechanical Engineering, 2014.
[28] H. Timmis, “Modeling with Fusion 360,” in Practical Arduino Engineering, Apress,
Berkeley, CA, 2021. https://doi.org/10.1007/978-1-4842-6852-0_3
[29] A. Carnevale, I. Mannocchi, M. S. H. Sassi, M. Carli, G. De Luca, U. G. Longo,
V. Denaro, and E. Schena, “Virtual reality for shoulder rehabilitation: Accuracy
evaluation of Oculus Quest 2,” Sensors, vol. 22, 5511, 2022.
https://doi.org/10.3390/s22155511
[30] G. Mitchell, “The Raspberry Pi single-board computer will revolutionize computer
science teaching [For & against],” vol. 7, no. 3, p. 26, 2012.
ANNEX A
Unity Code
MJPEGStreamDecoder
using System.IO;
using System.Threading;
using System.Collections;
using System.Collections.Generic;
using System.Net;
using UnityEngine;
public class MJPEGStreamDecoder : MonoBehaviour
{
[SerializeField] bool tryOnStart = false;
float RETRY_DELAY = 5f;
int MAX_RETRIES = 3;
int retryCount = 0;
public static string rpiIP;
byte[] nextFrame = null;
Thread worker;
int threadID = 0;
// Fields used by the stream reader below
System.Random randu;
public RenderTexture renderTexture;
List<BufferedStream> trackedBuffers = new List<BufferedStream>();
void Start()
{
/* IPAddress[] addr = ipEntry.AddressList;
string ip = addr[1].ToString();*/
string ip = "192.168.43.22";
rpiIP = ip;
string ipString = "http://" + ip +":8000/stream.mjpg";
Debug.Log(ipString);
randu = new System.Random(Random.Range(0, 65536));
if (tryOnStart)
StartStream(ipString);
}
private void Update()
{
if (nextFrame != null)
{
SendFrame(nextFrame);
nextFrame = null;
}
}
private void OnDestroy()
{
foreach (var b in trackedBuffers)
{
if (b != null)
b.Close();
}
}
public void StartStream(string url)
{
retryCount = 0;
StopAllCoroutines();
foreach (var b in trackedBuffers)
b.Close();
worker = new Thread(() => ReadMJPEGStreamWorker(threadID = randu.Next(65536), url));
worker.Start();
}
void ReadMJPEGStreamWorker(int id, string url)
{
var webRequest = WebRequest.Create(url);
webRequest.Method = "GET";
List<byte> frameBuffer = new List<byte>();
int lastByte = 0x00;
bool addToBuffer = false;
BufferedStream buffer = null;
try
{
buffer = new BufferedStream(webRequest.GetResponse().GetResponseStream());
trackedBuffers.Add(buffer);
}
catch (System.Exception ex)
{
Debug.LogError(ex);
}
int newByte;
while (buffer != null)
{
if (threadID != id) return;
if (!buffer.CanRead)
{
Debug.LogError("Can't read buffer!");
break;
}
newByte = -1;
try
{
newByte = buffer.ReadByte();
}
catch
{
break;
}
if (newByte < 0)
{
continue; // End of data
}
if (addToBuffer)
frameBuffer.Add((byte)newByte);
if (lastByte == 0xFF)
{
if (!addToBuffer)
{
if (IsStartOfImage(newByte))
{
addToBuffer = true;
frameBuffer.Add((byte)lastByte);
frameBuffer.Add((byte)newByte);
}
}
else
{
if (newByte == 0xD9)
{
frameBuffer.Add((byte)newByte);
addToBuffer = false;
nextFrame = frameBuffer.ToArray();
frameBuffer.Clear();
}
}
}
lastByte = newByte;
}
if (retryCount < MAX_RETRIES)
{
retryCount++;
Debug.LogFormat("[{0}] Retrying Connection {1}...", id, retryCount);
foreach (var b in trackedBuffers)
b.Dispose();
trackedBuffers.Clear();
worker = new Thread(() => ReadMJPEGStreamWorker(threadID = randu.Next(65536), url));
worker.Start();
}
}
bool IsStartOfImage(int command)
{
switch (command)
{
case 0xD8:
Debug.Log("Command SOI"); // JPEG Start Of Image marker (0xFFD8)
return true;
case 0xC0:
Debug.Log("Command SOF0");
return true;
case 0xC2:
Debug.Log("Command SOF2");
return true;
case 0xC4:
Debug.Log("Command DHT");
break;
case 0xDB:
Debug.Log("Command DQT");
break;
case 0xDD:
Debug.Log("Command DRI");
break;
case 0xDA:
Debug.Log("Command SOS");
break;
case 0xFE:
Debug.Log("Command COM");
break;
case 0xD9:
Debug.Log("Command EOI");
break;
}
return false;
}
void SendFrame(byte[] bytes)
{
Texture2D texture2D = new Texture2D(2, 2);
texture2D.LoadImage(bytes);
if (texture2D.width == 2)
return; // Failure!
Graphics.Blit(texture2D, renderTexture);
Destroy(texture2D);
}
}
ControllerCommandTransmit
using UnityEngine;
using System.Collections;
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
public class ControllerCommandTransmit : MonoBehaviour {
public static string Host;
int Port = 5001;
TcpClient mySocket;
NetworkStream theStream;
StreamWriter theWriter;
// Minimal socket helpers (reconstructed assumption): open a TCP
// connection to the Pi and push command strings to it.
void setupSocket () {
mySocket = new TcpClient(Host, Port);
theStream = mySocket.GetStream();
theWriter = new StreamWriter(theStream);
}
void writeSocket (string line) {
theWriter.Write(line);
theWriter.Flush();
}
void Start () {
Host = MJPEGStreamDecoder.rpiIP;
/* Debug.Log(Host); */
setupSocket ();
}
void Update() {
OVRInput.Update();
var primaryThumbstick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);
var secondaryThumbstick = OVRInput.Get(OVRInput.Axis2D.SecondaryThumbstick);
float primaryXaxis = primaryThumbstick.x;
float primaryYaxis = primaryThumbstick.y;
float secondaryXaxis = secondaryThumbstick.x;
float secondaryYaxis = secondaryThumbstick.y;
if(OVRInput.Get(OVRInput.Button.Two)){
writeSocket("BBB");
Debug.Log("BBB");
}
if(OVRInput.Get(OVRInput.Button.One)){
writeSocket("AAA");
Debug.Log("AAA");
}
if(OVRInput.Get(OVRInput.Button.Three)){
writeSocket("XXX");
Debug.Log("XXX");
}
if(OVRInput.Get(OVRInput.Button.Four)){
writeSocket("YYY");
Debug.Log("YYY");
}
if(OVRInput.Get(OVRInput.Button.Start)){
writeSocket("STR");
Debug.Log("STR");
}
if(OVRInput.Get(OVRInput.Button.PrimaryThumbstick)){
writeSocket("LTS");
Debug.Log("LTS");
}
if(OVRInput.Get(OVRInput.Button.SecondaryThumbstick)){
writeSocket("RTS");
Debug.Log("RTS");
}
if(OVRInput.Get(OVRInput.Touch.PrimaryThumbstick)){
if(primaryXaxis > 0.4f){
writeSocket("PXF");
Debug.Log("PXF");
}else if(primaryXaxis < -0.4f){
writeSocket("PXR");
Debug.Log("PXR");
}else{
writeSocket("PX0");
Debug.Log("PX0");
}
if(primaryYaxis > 0.4f){
writeSocket("PYF");
Debug.Log("PYF");
}else if(primaryYaxis < -0.4f){
writeSocket("PYR");
Debug.Log("PYR");
}else{
writeSocket("PY0");
Debug.Log("PY0");
}
/*if(OVRInput.Get(OVRInput.Touch.SecondaryThumbstick)){
writeSocket("PX:" + primaryXaxis.ToString() + "," + "PY:" + primaryYaxis.ToString());
Debug.Log("PX:" + primaryXaxis.ToString() + "," + "PY:" + primaryYaxis.ToString());
writeSocket("SX:" + secondaryXaxis.ToString() + "," + "SY:" + secondaryYaxis.ToString());
Debug.Log("SX:" + secondaryXaxis.ToString() + "," + "SY:" + secondaryYaxis.ToString());
}else{
writeSocket("PX:" + primaryXaxis.ToString() + "," + "PY:" +
primaryYaxis.ToString()); Debug.Log("PX:" + primaryXaxis.ToString() + "," + "PY:" +
primaryYaxis.ToString());
}*/
}
if(OVRInput.Get(OVRInput.Touch.SecondaryThumbstick)){
if(secondaryXaxis > 0.4f){
writeSocket("SXF");
Debug.Log("SXF");
}else if(secondaryXaxis < -0.4f){
writeSocket("SXR");
Debug.Log("SXR");
}else{
writeSocket("SX0");
Debug.Log("SX0");
}
}
}
}
HMDOrientationTransmit
using UnityEngine;
using System.Collections;
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
public class HMDOrientationTransmit : MonoBehaviour {
public GameObject body; // HMD / centre-eye transform
float time = 0f;
float timeDelay = 0.05f; // transmit interval in seconds (assumed value)
float FinalRotationx, FinalRotationy, FinalRotationz;
string Rotation;
void Update() {
float Rotationx;
float Rotationy;
float Rotationz;
time = time + 1f*Time.deltaTime;
if (time >= timeDelay) {
time = 0f;
if(body.transform.localRotation.eulerAngles.x <= 180f)
{
Rotationx = body.transform.localRotation.eulerAngles.x;
}
else
{
Rotationx = body.transform.localRotation.eulerAngles.x - 360f;
}
if(body.transform.localRotation.eulerAngles.y <= 180f)
{
Rotationy = body.transform.localRotation.eulerAngles.y;
}
else
{
Rotationy = body.transform.localRotation.eulerAngles.y - 360f;
}
if(body.transform.localRotation.eulerAngles.z <= 180f)
{
Rotationz = body.transform.localRotation.eulerAngles.z;
}
else
{
Rotationz = body.transform.localRotation.eulerAngles.z - 360f;
}
FinalRotationx = Mathf.Round(Rotationx * 1.0f) * 1f;
FinalRotationy = Mathf.Round(Rotationy * 1.0f) * 1f;
FinalRotationz = Mathf.Round(Rotationz * 1.0f) * 1f;
Rotation = "X= " + Rotationx.ToString() + " Y= " + Rotationy.ToString();
//Debug.Log(Rotation);
}
}
}
Python Script for Motors
import socket
from gpiozero import Servo
from time import sleep
from gpiozero.pins.pigpio import PiGPIOFactory
import RPi.GPIO as GPIO
# Create a PiGPIOFactory instance
factory = PiGPIOFactory()
# Set GPIO mode
GPIO.setmode(GPIO.BCM)
# Define motor pins
motor1_enable_pin = 17
motor1_in1_pin = 27
motor1_in2_pin = 22
motor2_enable_pin = 18
motor2_in1_pin = 23
motor2_in2_pin = 24
# Set up motor pins as outputs
GPIO.setup(motor1_enable_pin, GPIO.OUT)
GPIO.setup(motor1_in1_pin, GPIO.OUT)
GPIO.setup(motor1_in2_pin, GPIO.OUT)
GPIO.setup(motor2_enable_pin, GPIO.OUT)
GPIO.setup(motor2_in1_pin, GPIO.OUT)
GPIO.setup(motor2_in2_pin, GPIO.OUT)
# Create servo instances
servoX = Servo(12, min_pulse_width=0.5/1000, max_pulse_width=2.5/1000, pin_factory=factory)
servoY = Servo(13, min_pulse_width=0.5/1000, max_pulse_width=2.5/1000, pin_factory=factory)
xVal = ''
yVal = ''
motorsCmnd = ''
backlog = 1
size = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('192.168.43.22', 5001))
s.listen(backlog)
def parseData():
    global xVal, yVal, motorsCmnd
    for i in range(0, len(data)):
        if data[i] == 'a':
            xVal = data[i + 3:data.find('b')]
        elif data[i] == 'Y':
            yVal = data[i + 2:data.find('c')]
        else:
            # motor commands arrive as 3-character codes such as 'SXF'
            motorsCmnd = data[i:i + 3]
def servosControl():
    print("Running Servos")
    print(xVal)
    print(yVal)
    servoX.value = int(xVal) / 90
    servoY.value = int(yVal) / 90
def motorsControl():
    print('motors')
    # Command strings assumed: SYF/SYR = forward/backward from the
    # secondary thumbstick's Y axis, SXF/SXR = turn right/left from
    # its X axis, SX0/S = stop.
    # Block to move the robot forward
    if motorsCmnd == 'SYF':
        GPIO.output(motor1_in1_pin, GPIO.HIGH)
        GPIO.output(motor1_in2_pin, GPIO.LOW)
        GPIO.output(motor2_in1_pin, GPIO.HIGH)
        GPIO.output(motor2_in2_pin, GPIO.LOW)
        GPIO.output(motor1_enable_pin, GPIO.HIGH)
        GPIO.output(motor2_enable_pin, GPIO.HIGH)
    # Block to move the robot backward
    elif motorsCmnd == 'SYR':
        GPIO.output(motor1_in1_pin, GPIO.LOW)
        GPIO.output(motor1_in2_pin, GPIO.HIGH)
        GPIO.output(motor2_in1_pin, GPIO.LOW)
        GPIO.output(motor2_in2_pin, GPIO.HIGH)
        GPIO.output(motor1_enable_pin, GPIO.HIGH)
        GPIO.output(motor2_enable_pin, GPIO.HIGH)
    # Block to turn the robot left
    elif motorsCmnd == 'SXR':
        GPIO.output(motor1_in1_pin, GPIO.LOW)
        GPIO.output(motor1_in2_pin, GPIO.HIGH)
        GPIO.output(motor2_in1_pin, GPIO.HIGH)
        GPIO.output(motor2_in2_pin, GPIO.LOW)
        GPIO.output(motor1_enable_pin, GPIO.HIGH)
        GPIO.output(motor2_enable_pin, GPIO.HIGH)
    # Block to turn the robot right
    elif motorsCmnd == 'SXF':
        GPIO.output(motor1_in1_pin, GPIO.HIGH)
        GPIO.output(motor1_in2_pin, GPIO.LOW)
        GPIO.output(motor2_in1_pin, GPIO.LOW)
        GPIO.output(motor2_in2_pin, GPIO.HIGH)
        GPIO.output(motor1_enable_pin, GPIO.HIGH)
        GPIO.output(motor2_enable_pin, GPIO.HIGH)
    # Block to stop the robot
    elif motorsCmnd in ('SX0', 'S'):
        GPIO.output(motor1_enable_pin, GPIO.LOW)
        GPIO.output(motor2_enable_pin, GPIO.LOW)
# Main code
try:
    print("is waiting")
    client, address = s.accept()
    while 1:
        data = client.recv(size)
        if data:
            data = data.decode()
            print(data)
            parseData()
            servosControl()
            motorsControl()
except:
    print("closing socket")
    client.close()
    s.close()
Camera Code
import io
import picamera
import logging
import socketserver
from threading import Condition
from http import server
PAGE = """\
<html>
<head>
<title>Raspberry Pi - Surveillance Camera</title>
</head>
<body>
<center><h1>Raspberry Pi - Surveillance Camera</h1></center>
<center><img src="stream.mjpg" width="640" height="480"></center>
</body>
</html>
"""
class StreamingOutput(object):
    def __init__(self):
        self.frame = None
        self.buffer = io.BytesIO()
        self.condition = Condition()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # New frame: copy the existing buffer's content and notify
            # all clients that it is available
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)
class StreamingHandler(server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            # Redirect root URL to index.html
            self.send_response(301)
            self.send_header('Location', '/index.html')
            self.end_headers()
        elif self.path == '/index.html':
            # Serve the HTML page with the video stream
            content = PAGE.encode('utf-8')
            self.send_response(200)
            self.send_header('Content-Type', 'text/html')
            self.send_header('Content-Length', len(content))
            self.end_headers()
            self.wfile.write(content)
        elif self.path == '/stream.mjpg':
            # Serve the MJPEG stream itself
            self.send_response(200)
            self.send_header('Age', 0)
            self.send_header('Cache-Control', 'no-cache, private')
            self.send_header('Pragma', 'no-cache')
            self.send_header('Content-Type', 'multipart/x-mixed-replace; boundary=FRAME')
            self.end_headers()
            try:
                while True:
                    with output_stream.condition:
                        output_stream.condition.wait()
                        frame = output_stream.frame
                    self.wfile.write(b'--FRAME\r\n')
                    self.send_header('Content-Type', 'image/jpeg')
                    self.send_header('Content-Length', len(frame))
                    self.end_headers()
                    self.wfile.write(frame)
                    self.wfile.write(b'\r\n')
            except Exception as e:
                logging.warning(
                    'Removed streaming client %s: %s',
                    self.client_address, str(e))
        else:
            # Handle 404 Not Found
            self.send_error(404)
            self.end_headers()
class StreamingServer(socketserver.ThreadingMixIn, server.HTTPServer):
    allow_reuse_address = True
    daemon_threads = True
with picamera.PiCamera(resolution='640x480', framerate=24) as cam:
    output_stream = StreamingOutput()
    # Uncomment the next line to change your Pi's camera rotation (in degrees)
    # cam.rotation = 90
    cam.start_recording(output_stream, format='mjpeg')
    try:
        address = ('', 8000)
        server = StreamingServer(address, StreamingHandler)
        server.serve_forever()
    finally:
        cam.stop_recording()