A Project Report on
Hardware Equipment Virtualization for Training
Submitted in partial fulfillment of the requirement for the award of the degree of
BACHELOR OF ENGINEERING
IN
INFORMATION SCIENCE AND ENGINEERING
By
Kartik Saini 1NT19IS068
Minchan Bopaiah 1NT19IS081
Priyojit Paul 1NT19IS119
Sanidhya Agarwal 1NT19IS140
Dr. Mohan S G
Prof. Balachandra A
Department of Information Science and Engineering
Nitte Meenakshi Institute of Technology, Bengaluru - 560064
CERTIFICATE
Certified that the project work entitled "Hardware Equipment Virtualization for Training", carried out by Kartik Saini (1NT19IS068), Minchan Bopaiah (1NT19IS081), Priyojit Paul (1NT19IS119), and Sanidhya Agarwal (1NT19IS140), bonafide students of Information Science and Engineering, NMIT, is in partial fulfillment for the award of Bachelor of Engineering in Information Science and Engineering of the Visvesvaraya Technological University, Belgaum, during the year 2021-23. It is certified that all corrections and suggestions indicated for Internal Assessment have been incorporated in the report deposited in the departmental library. The project report has been approved as it satisfies the academic requirements in respect of the project work prescribed for the said degree.
External Viva
1.
2.
NITTE MEENAKSHI INSTITUTE OF TECHNOLOGY
(AN AUTONOMOUS INSTITUTION, AFFILIATED TO VTU, BELGAUM)
DECLARATION
This project simulates various hardware equipment through augmented reality for use in real-world training of students in the labs of various colleges, which lowers the cost of purchasing hardware equipment and reduces damage to the tools. It also enables students to visualize the complex working of the equipment, and organizations can simulate equipment according to their needs.
ACKNOWLEDGMENT
The satisfaction and euphoria that accompany the successful completion of any task would be incomplete without mention of the people who made it possible, whose constant guidance and encouragement crowned our efforts with success. We express our sincere gratitude to our Principal, Dr. H. C. Nagaraj, Nitte Meenakshi Institute of Technology, for providing the facilities.
We wish to thank our HOD, Dr. Mohan S. G., for the excellent environment created to further educational growth in our college. We also thank him for the invaluable guidance provided, which has helped in the creation of a better project.
We thank our guide, Dr. Mohan S. G., Department of Information Science & Engineering, for his periodic inspection and timely evaluation of the project, and for his help in bringing the project to its present form.
Thanks to our departmental project coordinators. We also thank all our friends and the teaching and non-teaching staff at NMIT, Bangalore, for all the direct and indirect help provided in the completion of the project.
CONTENTS
1 Introduction 1
2 Literature Review 2
3 Objective 3
4 Requirement Specification 4
5 System Design 5
6 Implementation 6
7 Conclusion 24
8 Bibliography 25
List of Figures
1. System Design
2. Marker Based AR
3. Markerless Based AR
4. Projection Based AR
5. Location Based Markerless AR
6. SLAM System
7. Object Detection Using Core ML
8. Flow Chart
Chapter 1
Introduction
Augmented Reality (AR) technology has evolved to become a significant part of education, healthcare, entertainment, engineering, and much more. It combines real and virtual objects in a real environment, runs interactively and in real time, and registers (aligns) real and virtual objects with each other. AR relies on a collection of computer hardware, including mobile devices, personal computers, and Head-Mounted Displays (HMDs), used primarily to bring all users onto the same shared computational platform.
Our research is aimed at assessing the suitability of AR/VR for representing client user interfaces in remote labs. Students can carry out an engineering experiment represented by real elements, components, and equipment overlaid with virtual objects. Educational engineering labs are an essential part of engineering education because they provide practical knowledge for students. Unfortunately, these labs, equipped with costly instruments, are available only for short and limited periods of time to a huge number of students.
Chapter 2
Literature Review
TeamViewer
Drawbacks:
• Users often face difficulties and several errors before connecting to the servers.
• In many cases, users are not allowed to change the resolution.
Masters of Pie
Drawbacks:
• Superfast computers are needed to implement the various AR/VR modes.
Chapter 3
Objective
The objectives of this project are:
• To create an adaptive 3D virtual environment that meets the requirements of college labs, using appropriate development applications.
• To test and evaluate the performance of the simulated hardware equipment.
• To identify new techniques and approaches to design, build, and evaluate virtual and augmented reality systems.
Chapter 4
Requirement Specification
Hardware Requirements: -
• Camera Sensor
• Image Processor
Software Requirements: -
• Python Programming Language
• HTML
• Unity 3D
Functional Requirements: -
• Virtual Demonstration – A video tutorial will be available for reference
• Interaction with Virtual Tools – Students can easily interact with and manipulate the tools
Non-Functional Requirements: -
• Reliability
4
Chapter 5
System Design
Figure 1
Chapter 6
Implementation
6.1 Types of AR
6.1.1 Marker Based AR: -
Marker-based AR applications use target images (markers) to position objects in a space. These markers specify where the 3D digital content will be displayed within the user's field of view. To superimpose the 3D virtual object on a particular physical picture-pattern marker in a real-world context, the application is bound to that marker. The camera must therefore continuously scan the input and detect the marker so that the image pattern can be recognized and its geometry created. This is a straightforward and low-cost way to build filters through a program that recognizes patterns through a camera.
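In practice, placing flat content on a detected marker reduces to estimating the homography between the marker's reference image and its detected corners in the camera frame. The sketch below is an illustration in plain NumPy, not code from this project; the corner coordinates are made up. It solves for the homography with the direct linear transform (DLT) and uses it to anchor content at the marker's centre:

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from four
    point correspondences, using the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value).
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Map a 2D point through the homography."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Unit square of the reference marker image ...
marker = [(0, 0), (1, 0), (1, 1), (0, 1)]
# ... and where its corners were detected in the camera frame (made up).
detected = [(120, 80), (300, 90), (310, 260), (110, 250)]

H = find_homography(marker, detected)
# The marker centre maps to the middle of the detected quad,
# which is where the virtual content would be anchored.
center = project(H, (0.5, 0.5))
```

Production AR SDKs perform the detection itself and recover a full 3D pose rather than a 2D homography, but the anchoring principle is the same.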
Figure 2
6.1.2 Markerless Based AR: -
Markerless augmented reality does not rely on predefined markers: it scans the physical world and overlays digital content on recognized features, such as flat surfaces. The digital pieces are thus positioned according to scene geometry rather than being fixed to a marker. Markerless AR is particularly popular in video games like Pokémon Go, where characters may wander around the environment, and it is also frequently used for virtual product placement and live events.
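Detecting a flat surface, at its core, is a plane fit to the 3D points the device reconstructs. A minimal least-squares sketch (illustrative only; the synthetic point cloud stands in for real sensor data):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cloud: returns a point on
    the plane (the centroid) and the unit normal."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The normal is the direction of least variance: the singular vector
    # of the centred points with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal

# Noisy samples of the floor plane z = 0 (synthetic stand-in for sensor data).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = rng.normal(0, 0.01, size=200)
cloud = np.column_stack([xy, z])

anchor, n = fit_plane(cloud)
# Content is placed at `anchor` and oriented along `n` (here close to [0, 0, 1]).
```

Real frameworks run this kind of fit continuously over clustered feature points and grow or merge the detected planes as more of the scene is observed.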
Figure 3
6.1.3 Projection Based AR
This approach is used to deliver digital data in a stationary setting. With a fixed projector and a tracking camera positioned in a defined location, projection-based AR lets the user move freely through the surroundings. By projecting artificial light onto real flat surfaces, this technique primarily serves to produce illusions of the depth, position, and orientation of an object. Because instructions can be presented in a specific region, projection-based augmented reality is, for instance, excellent for streamlining complex activities in commerce or industry while reducing the need for dedicated computers. The system can also provide feedback to improve digital identification procedures in production cycles.
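Deciding which pixel the projector must light up to hit a given 3D point in the room is a standard pinhole projection. A sketch with made-up numbers (the focal length, principal point, and pose below are hypothetical):

```python
import numpy as np

def project_point(K, R, t, X):
    """Pinhole projection of world point X into pixel coordinates, given
    intrinsics K and the extrinsic pose (R, t) of the projector."""
    x_cam = R @ np.asarray(X, dtype=float) + t
    u, v, w = K @ x_cam
    return np.array([u / w, v / w])

# Hypothetical projector: focal length 800 px, principal point (640, 360),
# placed at the origin looking down the +Z axis.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)

# A point 2 m in front of the projector, 0.5 m to the right.
pixel = project_point(K, R, t, [0.5, 0.0, 2.0])
```

In a real installation K, R, and t come from calibrating the projector against the tracking camera; the same formula then warps whole overlay images onto the work surface.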
Figure 4
6.1.4 Location Based Markerless AR
Location-based markerless AR strives to merge 3D virtual items into the real-world environment in which the user is situated. This technology places the virtual object at the appropriate spot or area of interest using the location and sensors of a smart device. Pokémon GO, a location-based markerless augmented reality app for smartphones, is an example of this kind of augmented reality. By reading data in real time from the camera, GPS, compass, and accelerometer, this form of AR ties the virtual image to a particular location. Because it is based on markerless AR, no image target is required for its operation; it tracks the user's movement to match the data in real time with the user's location. This type of AR lets users attach interactive and practical digital information to interesting places, which is highly advantageous for tourists visiting a particular location because it makes the environment more understandable through 3D virtual objects or movies.
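The placement logic behind such apps can be sketched with two standard geodesy formulas: the great-circle distance and the initial bearing from the device's GPS fix to the point of interest; the device's compass heading is then compared against that bearing to decide where on screen the object appears. The coordinates below are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees clockwise from north) from the
    first fix to the second."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360

# Hypothetical fixes: the user, and a point of interest roughly due east.
user = (12.9716, 77.5946)
poi = (12.9716, 77.6046)
dist = haversine_m(*user, *poi)
brg = bearing_deg(*user, *poi)
```

The virtual object is rendered only when the heading reported by the compass is close to `brg`, and its on-screen size can be scaled by `dist`.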
Figure 5
6.2 SLAM Algorithm for Markerless AR
ORB-SLAM is a feature-based monocular SLAM system that operates in real time, in small and large, indoor and outdoor environments. The system is robust to severe motion clutter, allows wide-baseline loop closing and relocalization, and includes fully automatic initialization. Building on excellent algorithms of recent years, it is a novel system designed from scratch that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival-of-the-fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation.
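The core geometric operation of the mapping thread, triangulating a new map point from two views, can be sketched in NumPy. This is an illustration, not the project's code: normalized image coordinates are used (camera intrinsics factored out), and the cameras and point are synthetic.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are the normalized
    image coordinates of the point in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # Euclidean coordinates

# Two synthetic cameras: one at the origin, one translated 1 unit along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known point into both views, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]

X = triangulate(P1, P2, x1, x2)
```

The C++ listing that follows performs the same SVD step (the `cv::SVD::compute` call in `CreateNewMapPoints`) on real feature matches.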
Figure 6
Code for the SLAM algorithm (LocalMapping, from ORB-SLAM):

#include "LocalMapping.h"
#include "LoopClosing.h"
#include "ORBmatcher.h"
#include "Optimizer.h"

#include <ros/ros.h>

namespace ORB_SLAM
{

void LocalMapping::Run()
{
    ros::Rate r(500);
    while (ros::ok())
    {
        // Check if there are keyframes in the queue
        if (CheckNewKeyFrames())
        {
            // Tracking will see that Local Mapping is busy
            SetAcceptKeyFrames(false);

            // Check recent MapPoints
            MapPointCulling();

            // Triangulate new MapPoints
            CreateNewMapPoints();

            // ... (intermediate steps omitted in the report)

            Optimizer::LocalBundleAdjustment(mpCurrentKeyFrame, &mbAbortBA);
            mpMap->SetFlagAfterBA();
            mpLoopCloser->InsertKeyFrame(mpCurrentKeyFrame);
        }
        SetAcceptKeyFrames(true);

        ResetIfRequested();
        r.sleep();
    }
}
bool LocalMapping::CheckNewKeyFrames()
{
    boost::mutex::scoped_lock lock(mMutexNewKFs);
    return (!mlNewKeyFrames.empty());
}

void LocalMapping::ProcessNewKeyFrame()
{
    {
        boost::mutex::scoped_lock lock(mMutexNewKFs);
        mpCurrentKeyFrame = mlNewKeyFrames.front();
        mlNewKeyFrames.pop_front();
    }

    // Compute Bags of Words structures
    mpCurrentKeyFrame->ComputeBoW();

    if (mpCurrentKeyFrame->mnId == 0)
        return;

    // Associate MapPoints to the new keyframe and update normal and descriptor
    vector<MapPoint *> vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();

    if (mpCurrentKeyFrame->mnId > 1) // These operations are already done in the tracking for the first two keyframes
    {
        for (size_t i = 0; i < vpMapPointMatches.size(); i++)
        {
            MapPoint *pMP = vpMapPointMatches[i];
            if (pMP)
            {
                if (!pMP->isBad())
                {
                    pMP->AddObservation(mpCurrentKeyFrame, i);
                    pMP->UpdateNormalAndDepth();
                    pMP->ComputeDistinctiveDescriptors();
                }
            }
        }
    }

    if (mpCurrentKeyFrame->mnId == 1)
    {
        for (size_t i = 0; i < vpMapPointMatches.size(); i++)
        {
            MapPoint *pMP = vpMapPointMatches[i];
            if (pMP)
            {
                mlpRecentAddedMapPoints.push_back(pMP);
            }
        }
    }

    // ... (remaining keyframe processing omitted in the report)
}
void LocalMapping::MapPointCulling()
{
    // Check Recent Added MapPoints
    list<MapPoint *>::iterator lit = mlpRecentAddedMapPoints.begin();
    const unsigned long int nCurrentKFid = mpCurrentKeyFrame->mnId;
    while (lit != mlpRecentAddedMapPoints.end())
    {
        MapPoint *pMP = *lit;
        if (pMP->isBad())
        {
            lit = mlpRecentAddedMapPoints.erase(lit);
        }
        else if (pMP->GetFoundRatio() < 0.25f)
        {
            pMP->SetBadFlag();
            lit = mlpRecentAddedMapPoints.erase(lit);
        }
        else if ((nCurrentKFid - pMP->mnFirstKFid) >= 2 && pMP->Observations() <= 2)
        {
            pMP->SetBadFlag();
            lit = mlpRecentAddedMapPoints.erase(lit);
        }
        else if ((nCurrentKFid - pMP->mnFirstKFid) >= 3)
            lit = mlpRecentAddedMapPoints.erase(lit);
        else
            lit++;
    }
}
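The retention rules in MapPointCulling can be restated as a small predicate (a sketch; note that the final >= 3 branch in the C++ merely stops tracking the point as "recent" rather than deleting it, so it is not part of the discard test):

```python
def should_discard(found_ratio, kfs_since_creation, n_observations):
    """Discard a recently created map point if it is matched in under 25%
    of the frames where it is predicted to be visible, or if two or more
    keyframes have passed since creation and it is still observed from at
    most two keyframes. Thresholds mirror the C++ listing."""
    if found_ratio < 0.25:
        return True
    if kfs_since_creation >= 2 and n_observations <= 2:
        return True
    return False
```

This is the "survival of the fittest" selection described in Section 6.2: weak points are removed early, keeping the map compact and trackable.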
void LocalMapping::CreateNewMapPoints()
{
    // Take neighbor keyframes in covisibility graph
    vector<KeyFrame *> vpNeighKFs = mpCurrentKeyFrame->GetBestCovisibilityKeyFrames(20);

    // ... (candidate-match search over the neighbor keyframes omitted in the report)

    if (ratioBaselineDepth < 0.01)
        continue;

    // Linear Triangulation Method
    cv::Mat A(4, 4, CV_32F);
    A.row(0) = xn1.at<float>(0) * Tcw1.row(2) - Tcw1.row(0);
    // ... (remaining rows of A omitted in the report)

    cv::SVD::compute(A, w, u, vt, cv::SVD::MODIFY_A | cv::SVD::FULL_UV);

    if (x3D.at<float>(3) == 0)
        continue;

    // Euclidean coordinates
    x3D = x3D.rowRange(0, 3) / x3D.at<float>(3);
    cv::Mat x3Dt = x3D.t();

    if (dist1 == 0 || dist2 == 0)
        continue;

    float ratioOctave = mpCurrentKeyFrame->GetScaleFactor(kp1.octave) / pKF2->GetScaleFactor(kp2.octave);
    if (ratioDist * ratioFactor < ratioOctave || ratioDist > ratioOctave * ratioFactor)
        continue;

    // Triangulation is successful
    MapPoint *pMP = new MapPoint(x3D, mpCurrentKeyFrame, mpMap);
    pMP->AddObservation(pKF2, idx2);
    pMP->AddObservation(mpCurrentKeyFrame, idx1);
    mpCurrentKeyFrame->AddMapPoint(pMP, idx1);
    pKF2->AddMapPoint(pMP, idx2);
    pMP->ComputeDistinctiveDescriptors();
    pMP->UpdateNormalAndDepth();
    mpMap->AddMapPoint(pMP);
    mlpRecentAddedMapPoints.push_back(pMP);
        } // (enclosing match loops partially elided in the report)
    }
}
void LocalMapping::SearchInNeighbors()
{
    // Retrieve neighbor keyframes
    vector<KeyFrame *> vpNeighKFs = mpCurrentKeyFrame->GetBestCovisibilityKeyFrames(20);
    vector<KeyFrame *> vpTargetKFs;
    for (vector<KeyFrame *>::iterator vit = vpNeighKFs.begin(), vend = vpNeighKFs.end(); vit != vend; vit++)
    {
        KeyFrame *pKFi = *vit;
        if (pKFi->isBad() || pKFi->mnFuseTargetForKF == mpCurrentKeyFrame->mnId)
            continue;
        vpTargetKFs.push_back(pKFi);
        pKFi->mnFuseTargetForKF = mpCurrentKeyFrame->mnId;

        // ... (fuse step partially omitted in the report)
        matcher.Fuse(pKFi, vpMapPointMatches);
    }
    matcher.Fuse(mpCurrentKeyFrame, vpFuseCandidates);

    // Update points
    vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();
    for (size_t i = 0, iend = vpMapPointMatches.size(); i < iend; i++)
    {
        MapPoint *pMP = vpMapPointMatches[i];
        if (pMP)
        {
            if (!pMP->isBad())
            {
                pMP->ComputeDistinctiveDescriptors();
                pMP->UpdateNormalAndDepth();
            }
        }
    }
}

// Fragment computing the relative pose between two keyframes:
cv::Mat R12 = R1w * R2w.t();
cv::Mat t12 = -R1w * R2w.t() * t2w + t1w;
cv::Mat K1 = pKF1->GetCalibrationMatrix();
cv::Mat K2 = pKF2->GetCalibrationMatrix();
void LocalMapping::RequestStop()
{
    boost::mutex::scoped_lock lock(mMutexStop);
    mbStopRequested = true;
    boost::mutex::scoped_lock lock2(mMutexNewKFs);
    mbAbortBA = true;
}
void LocalMapping::Stop()
{
    boost::mutex::scoped_lock lock(mMutexStop);
    mbStopped = true;
}

bool LocalMapping::isStopped()
{
    boost::mutex::scoped_lock lock(mMutexStop);
    return mbStopped;
}

bool LocalMapping::stopRequested()
{
    boost::mutex::scoped_lock lock(mMutexStop);
    return mbStopRequested;
}

void LocalMapping::Release()
{
    boost::mutex::scoped_lock lock(mMutexStop);
    mbStopped = false;
    mbStopRequested = false;
    for (list<KeyFrame *>::iterator lit = mlNewKeyFrames.begin(), lend = mlNewKeyFrames.end(); lit != lend; lit++)
        delete *lit;
    mlNewKeyFrames.clear();
}

bool LocalMapping::AcceptKeyFrames()
{
    boost::mutex::scoped_lock lock(mMutexAccept);
    return mbAcceptKeyFrames;
}

void LocalMapping::SetAcceptKeyFrames(bool flag)
{
    boost::mutex::scoped_lock lock(mMutexAccept);
    mbAcceptKeyFrames = flag;
}

void LocalMapping::InterruptBA()
{
    mbAbortBA = true;
}
void LocalMapping::KeyFrameCulling()
{
    // Check redundant keyframes (only local keyframes)
    // A keyframe is considered redundant if 90% of the MapPoints it sees are seen
    // in at least 3 other keyframes (at the same or finer scale)
    vector<KeyFrame *> vpLocalKeyFrames = mpCurrentKeyFrame->GetVectorCovisibleKeyFrames();

    // ... (for each local keyframe pKF, with vpMapPoints = pKF->GetMapPointMatches():)

    int nRedundantObservations = 0;
    int nMPs = 0;
    for (size_t i = 0, iend = vpMapPoints.size(); i < iend; i++)
    {
        MapPoint *pMP = vpMapPoints[i];
        if (pMP)
        {
            if (!pMP->isBad())
            {
                nMPs++;
                if (pMP->Observations() > 3)
                {
                    int scaleLevel = pKF->GetKeyPointUn(i).octave;
                    map<KeyFrame *, size_t> observations = pMP->GetObservations();
                    int nObs = 0;
                    for (map<KeyFrame *, size_t>::iterator mit = observations.begin(), mend = observations.end(); mit != mend; mit++)
                    {
                        KeyFrame *pKFi = mit->first;
                        if (pKFi == pKF)
                            continue;
                        int scaleLeveli = pKFi->GetKeyPointUn(mit->second).octave;
                        if (scaleLeveli <= scaleLevel + 1)
                        {
                            nObs++;
                            if (nObs >= 3)
                                break;
                        }
                    }
                    if (nObs >= 3)
                    {
                        nRedundantObservations++;
                    }
                }
            }
        }
    }
    // ... (the keyframe is flagged redundant when nRedundantObservations
    // exceeds 90% of nMPs; omitted in the report)
}
void LocalMapping::RequestReset()
{
    {
        boost::mutex::scoped_lock lock(mMutexReset);
        mbResetRequested = true;
    }
    ros::Rate r(500);
    while (ros::ok())
    {
        {
            boost::mutex::scoped_lock lock2(mMutexReset);
            if (!mbResetRequested)
                break;
        }
        r.sleep();
    }
}

void LocalMapping::ResetIfRequested()
{
    boost::mutex::scoped_lock lock(mMutexReset);
    if (mbResetRequested)
    {
        mlNewKeyFrames.clear();
        mlpRecentAddedMapPoints.clear();
        mbResetRequested = false;
    }
}

} // namespace ORB_SLAM
6.3 Object detection using Core-ML
Step 1: The app captures an image of the object using the camera of a phone or tablet, and stores feature descriptors that help identify the reference image.
Step 2: AR software recognizes the object within the real-world environment through feature points. To
recognize an object, the camera finds matches between the reference and frame images.
Step 3: The object is recognized through an identifiable constellation of points, then the digital model is placed
accordingly. Learners can then interact with and manipulate the 3D digital object.
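The matching in Step 2 is typically a nearest-neighbour search over feature descriptors with a ratio test: a frame feature is accepted only when its best reference match is clearly better than the second best. A self-contained sketch on synthetic binary descriptors (illustrative only; real pipelines use ORB or SIFT descriptors from a vision library):

```python
import numpy as np

def match_descriptors(query, ref, ratio=0.8):
    """Brute-force nearest-neighbour matching of binary descriptors with
    a ratio test: keep a match only if the best Hamming distance is
    clearly smaller than the second best."""
    matches = []
    for i, d in enumerate(query):
        # Hamming distance to every reference descriptor.
        dists = np.count_nonzero(ref != d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(1)
# Synthetic 256-bit binary descriptors of the reference image features.
ref = rng.integers(0, 2, size=(50, 256), dtype=np.uint8)
# Frame features: the first 10 are the same points with a few flipped bits.
query = ref[:10].copy()
flip = rng.integers(0, 256, size=(10, 5))
for r, cols in enumerate(flip):
    query[r, cols] ^= 1

good = match_descriptors(query, ref)
```

Once enough matches survive the ratio test, their 2D positions form the "constellation of points" mentioned in Step 3, from which the object's pose is estimated and the 3D model placed.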
Figure 7
6.4 Flow Chart
Figure 8
Chapter 7
CONCLUSION
More specifically, the results showed that AR technology is not yet widespread at the social and educational levels: little diffusion of AR in teaching is observed, and some teachers who had used AR applications did not know that they were AR applications. However, a rising, if slow, diffusion of AR technology is noticeable. The main idea we have tried to convey is that virtualization should never be seen as a simple solution to one specific problem; it is a principle, a technology applicable to a very large range of different solutions. It is also a buzzword that attracts investment nowadays, heralded by many companies as "the next best thing in IT". A common conclusion is that augmented reality (AR) applications can enhance the learning process, learning motivation, and effectiveness. Despite the positive results, more research is necessary.
BIBLIOGRAPHY
[1] S. Dormido, H. Vargas, J. Sánchez, N. Duro, R. Dormido, S. Dormido-Canto, F. Esquembre, "Using Web-Based Laboratories for Control Engineering Education," International Conference on Engineering Education, Coimbra, Portugal, September 2007.
[2] Z. Nedic, J. Machotka, A. Nafalski, "Remote laboratories versus virtual and real laboratories," Proc. 33rd ASEE/IEEE Frontiers in Education Conference, Boulder, Colorado, USA, November 2003.
[3] Z. Nedic, J. Machotka, A. Nafalski, "Remote Laboratory NetLab for Effective Interaction with Real Equipment over the Internet," Proc. 2008 IEEE Conference on Human System Interaction (HSI), Krakow, Poland, pp. 846-851, May 2008.
[4] L. D. Feisel, A. J. Rosa, "The Role of the Laboratory in Undergraduate Engineering Education," Journal of Engineering Education, Vol. 94, pp. 121-130, January 2005.
[5] J. Y. Ma, J. S. Choi, "The Virtuality and Reality of Augmented Reality," Journal of Multimedia, Vol. 2, No. 1, pp. 32-37, February 2007.
[6] M. Sairio, Augmented Reality, Helsinki University of Technology, 2001.