
AN AUTONOMOUS INSTITUTION AFFILIATED TO VTU, BELGAUM, ACCREDITED by NAAC (‘A+’ Grade), YELAHANKA,

BANGALORE-560064

A Project Report on
Hardware Equipment Virtualization for Training
Submitted in partial fulfillment of the requirement for the award of the degree of

BACHELOR OF ENGINEERING
IN
INFORMATION SCIENCE AND ENGINEERING
By
Kartik Saini 1NT19IS068
Minchan Bopaiah 1NT19IS081
Priyojit Paul 1NT19IS119
Sanidhya Agarwal 1NT19IS140

Under the Guidance of

Dr. Mohan S G

Prof. Balachandra A
Department of Information Science and Engineering
Nitte Meenakshi Institute of Technology, Bengaluru - 560064

DEPARTMENT OF INFORMATION SCIENCE AND ENGINEERING


(Accredited by NBA Tier-1)
2021-23
NITTE MEENAKSHI INSTITUTE OF TECHNOLOGY
(AN AUTONOMOUS INSTITUTION, AFFILIATED TO VTU, BELGAUM)

DEPARTMENT OF INFORMATION SCIENCE AND ENGINEERING

Accredited by NBA Tier-1

CERTIFICATE

Certified that the project work entitled "Hardware Equipment Virtualization for Training" carried out by Kartik Saini-
1NT19IS068, Minchan Bopaiah-1NT19IS081, Priyojit Paul-1NT19IS119, and Sanidhya Agarwal-1NT19IS140, bonafide
students of Information Science and Engineering - NMIT, in partial fulfillment for the award of Bachelor of Engineering
in Information Science and Engineering of the Visvesvaraya Technological University, Belgaum, during the year 2021-23.
It is certified that all corrections/suggestions indicated for Internal Assessment have been incorporated in the report
deposited in the departmental library. The project report has been approved as it satisfies the academic requirements in
respect of project work prescribed for the said degree.

External Viva

Name of the examiners Signature with date

1.

2.
NITTE MEENAKSHI INSTITUTE OF TECHNOLOGY
(AN AUTONOMOUS INSTITUTION, AFFILIATED TO VTU, BELGAUM)

DEPARTMENT OF INFORMATION SCIENCE AND ENGINEERING

Accredited by NBA Tier-1

DECLARATION

We, Kartik Saini-1NT19IS068, Minchan Bopaiah-1NT19IS081, Priyojit Paul-1NT19IS119, and Sanidhya Agarwal-1NT19IS140,

bonafide students of Nitte Meenakshi Institute of Technology, hereby declare that the project entitled "Hardware
Equipment Virtualization for Training", submitted in partial fulfillment for the award of BE in Information Science
and Engineering of the Visvesvaraya Technological University, Belgaum, during the year 2021-2023, is our original work
and that the project has not formed the basis for the award of any other degree, fellowship or any other similar titles.

Signature of the Students with Date

Place: Bangalore Date: 17-12-2022


ABSTRACT

We aim to simulate various hardware equipment through augmented reality for use in real-world training in the laboratories of
various colleges, which will lower the cost of purchasing hardware equipment and reduce damage to the tools. It will also enable
students to visualize the complex working of the equipment. Organizations can simulate equipment according to their needs.

ACKNOWLEDGMENT

The satisfaction and euphoria that accompany the successful completion of any task would be incomplete without mentioning
the people who made it possible, whose constant guidance and encouragement crowned our efforts with success. We express our
sincere gratitude to our Principal, Dr. H. C. Nagaraj, Nitte Meenakshi Institute of Technology, for providing the facilities.
We wish to thank our HOD, Dr. Mohan S. G., for the excellent environment created to further educational growth in our college.
We also thank him for the invaluable guidance provided, which has helped in the creation of a better project.
We would also like to thank our guide, Dr. Mohan S. G., Department of Information Science & Engineering, for his periodic inspection
and timely evaluation of the project, which helped bring the project to its present form.
Thanks to our departmental project coordinators. We also thank all our friends and the teaching and non-teaching staff at NMIT,
Bangalore, for all the direct and indirect help provided in the completion of the project.

CONTENTS

1 Introduction 1

2 Literature Review 2

3 Problem Statement and Objective 3

4 Requirement Specification 4

5 Framework and System Design 5

6 Implementation 6

7 Conclusion 24

8 Bibliography 25

List of Figures

1. System design

2. Marker Based AR

3. Markerless Based AR

4. Location Based Markerless AR

5. SLAM algorithm for markerless AR

6. Object detection using Core-ML

7. Flow Chart

Chapter 1

Introduction

Augmented Reality (AR) technology has evolved to become a significant part of education, healthcare, entertainment,
engineering, and much more. It combines real and virtual objects in a real environment, runs interactively and in real time,
and registers (aligns) real and virtual objects with each other. AR relies on a collection of computer hardware, including
mobile devices, personal computers, and Head-Mounted Displays (HMDs), used primarily to bring all users onto the same shared
computational platform.

Our research is aimed at assessing and demonstrating the suitability of AR/VR for representing client user interfaces in remote
labs. Students can carry out an engineering experiment represented by real and virtual elements, components and equipment
overlaid with virtual objects. Educational engineering labs are an essential part of engineering education because they provide
practical knowledge to students. Unfortunately, these labs, equipped with costly instruments, are available only for limited
periods of time to a huge number of students.

Chapter 2

Literature Review

TeamViewer

TeamViewer was founded by Tilo Rossmanith in 2005.


TeamViewer Assist AR is a remote support solution that provides easy, fast, and secure augmented reality-powered visual
assistance to identify and solve problems from anywhere in the world.
Here an expert in the relevant field guides the user remotely, and the user works according to the AR overlay being displayed.

Drawbacks: -
Users often face difficulties and several errors before connecting to the servers.
In many cases, users are not allowed to change the resolution.

Masters of Pie

Masters of Pie was founded in 2011 by its CEO, Karl Maddix.


Masters of Pie provides a collaboration solution that brings people, data and immersive tools together to get work done. Companies
from the automotive, space, aviation, healthcare, defense, and manufacturing sectors around the world use the Radical software
development kit to connect their teams and drive productivity gains.
It supports real-time synchronization, which provides a lower-quality version of the model that can be used by the users.
The company develops software to enable immersive collaboration. The software framework, called Radical, integrates natively with
enterprise applications to support the seamless and secure sharing, in real time, of large and complex 2D and 3D data across
AR/VR, desktop and mobile devices.

Drawbacks: -
Very fast computers are needed to run the various AR/VR modes.

Chapter 3

Problem Statement and Objective

3.1 Problem Statement


To simulate various hardware equipment and tools needed for training in the labs of various schools and colleges, e.g., a screw
gauge in a physics lab, simple pendulum-based experiments, etc.

3.2 Objective
To create an adaptive 3D virtual environment that meets the requirements of college labs using appropriate development
applications.
To test and evaluate the performance of simulated hardware equipment.
To identify new techniques and approaches to design, build and evaluate virtual and augmented reality systems.

Chapter 4

Requirement Specification
Hardware Requirements: -
• Camera Sensor

• Image Processor

• Central Processing Unit

• Graphics Processing Unit

Software Requirements: -
• Python Programming Language

• HTML

• Vuforia Developer Portal

• Unity 3D

Functional Requirements: -
• Virtual Demonstration – A video tutorial is provided for reference

• Equipment Display – 3D representation of the equipment is available

• Interaction with Virtual Tools – Makes it easy for students to interact with the virtual tools

Non-Functional Requirements: -
• Reliability

• Accuracy – Accuracy of Equipment Model

• Efficiency – Response Time

• Interoperability – Works on both Android and iOS

• Usability – Ease of Learning

Chapter 5

Framework and System Design

Figure 1

Chapter 6

Implementation

6.1 Types of AR

6.1.1. Marker Based AR: -

Target images (markers) are used in marker-based AR applications to position items in a space. These markers
specify where the 3D digital content will be displayed inside the user's field of view. The application ties a 3D
virtual object to a particular physical image-pattern marker in the real-world context and superimposes the object
on it. The camera must therefore continuously scan the input and locate the marker through image pattern
recognition in order to establish its geometry. This is a straightforward and low-cost method, implemented in a
dedicated program that recognizes patterns through the camera.
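To make the detection step concrete, the following is a minimal sketch of marker detection and pose estimation, assuming OpenCV built with the aruco contrib module; the camera index, marker dictionary, marker size and intrinsics below are illustrative assumptions rather than values from this project.

// Minimal marker-based AR sketch (assumes OpenCV with the aruco contrib module).
// Detects ArUco markers in each camera frame and estimates their pose, which is the
// anchor where a 3D model would be rendered.
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);                       // default camera (assumption)
    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

    // Placeholder intrinsics; a real application would load calibration data.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    cv::Mat frame;
    while (cap.read(frame))
    {
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(frame, dict, corners, ids);

        if (!ids.empty())
        {
            std::vector<cv::Vec3d> rvecs, tvecs;
            // 0.05 m marker side length is an assumed value.
            cv::aruco::estimatePoseSingleMarkers(corners, 0.05, K, distCoeffs, rvecs, tvecs);
            cv::aruco::drawDetectedMarkers(frame, corners, ids);
            // rvecs/tvecs give the marker pose; the virtual object is drawn at this pose.
        }
        cv::imshow("marker-based AR", frame);
        if (cv::waitKey(1) == 27) break;           // Esc to quit
    }
    return 0;
}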

Figure 2

6.1.2 Markerless Based AR: -

Markerless augmented reality does not make use of markers. Instead, it scans the physical world and overlays digital
content on recognized features, such as flat surfaces. The digital elements are thus positioned based on the scene's
geometry rather than being fixed to a marker. Markerless augmented reality is particularly popular in video games like
Pokémon Go, where characters may wander around the environment. Additionally, it is frequently used for virtual product
placement and live events.
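A core step in markerless placement is intersecting a ray from the camera with a detected surface, for example a horizontal plane, to decide where the virtual object should sit. The sketch below shows that geometry with hand-rolled vector types; the plane and ray values are illustrative assumptions, not project data.

// Ray-plane intersection used for markerless content placement: given a plane detected
// by the AR tracker and a ray from the camera through the user's tap point, compute
// the 3D anchor where the virtual object is placed. Values below are illustrative.
#include <cmath>
#include <iostream>
#include <optional>

struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersection of the ray (origin + t*dir, t >= 0) with the plane defined by a point
// and a normal, if one exists.
std::optional<Vec3> intersect(Vec3 origin, Vec3 dir, Vec3 planePoint, Vec3 planeNormal)
{
    double denom = dot(dir, planeNormal);
    if (std::fabs(denom) < 1e-9)       // ray parallel to the plane
        return std::nullopt;
    double t = dot(planePoint - origin, planeNormal) / denom;
    if (t < 0)                         // plane is behind the camera
        return std::nullopt;
    return origin + dir * t;
}

int main()
{
    Vec3 cameraPos{0.0, 1.5, 0.0};        // camera 1.5 m above the floor (assumed)
    Vec3 rayDir{0.0, -0.6, 1.0};          // direction through the tapped pixel (assumed)
    Vec3 floorPoint{0.0, 0.0, 0.0};       // a point on the detected floor plane
    Vec3 floorNormal{0.0, 1.0, 0.0};      // plane normal pointing up

    if (auto hit = intersect(cameraPos, rayDir, floorPoint, floorNormal))
        std::cout << "anchor at (" << hit->x << ", " << hit->y << ", " << hit->z << ")\n";
    return 0;
}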

Figure 3

6.1.3 Projection Based AR

This approach is used to deliver digital data in a stationary setting. A fixed projector and a tracking camera are
positioned in a defined location, and the augmented reality allows the user to move freely around the surroundings.
By projecting artificial light onto real flat surfaces, this technique primarily serves to produce illusions of the
depth, position, and orientation of an object. Because instructions can be presented in a specific region,
projection-based augmented reality is, for instance, excellent for streamlining complex activities in commerce or
industry and reducing the need for separate computer screens. Such a system can also provide feedback to improve
digital identification procedures in production cycles.
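Projecting content onto a flat surface essentially amounts to warping the content image with a homography that maps its corners to the corners of the target surface as seen by the projector. The following OpenCV sketch shows that warp; the file name and corner coordinates are made-up example values.

// Projection-based AR sketch: warp a content image onto a flat target surface using a
// homography. "instructions.png" and the corner coordinates are placeholder values.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat content = cv::imread("instructions.png");   // hypothetical content image
    if (content.empty())
        return 1;

    // Corners of the content image (source) and of the surface in projector space (destination).
    std::vector<cv::Point2f> src = {
        {0.f, 0.f}, {(float)content.cols, 0.f},
        {(float)content.cols, (float)content.rows}, {0.f, (float)content.rows}};
    std::vector<cv::Point2f> dst = {
        {220.f, 140.f}, {1040.f, 180.f}, {1010.f, 620.f}, {250.f, 600.f}};

    cv::Mat H = cv::findHomography(src, dst);            // maps content corners to surface corners
    cv::Mat projectorFrame(720, 1280, content.type(), cv::Scalar::all(0));
    cv::warpPerspective(content, projectorFrame, H, projectorFrame.size(),
                        cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);

    cv::imwrite("projector_frame.png", projectorFrame);  // frame sent to the projector
    return 0;
}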

Figure 4

6.1.4 Location Based Marker less AR

Location-based markerless AR strives to merge 3D virtual items into the real-world environment in which the
user is situated. This technology places the virtual object at the appropriate spot or area of interest using the
location and sensors of a smart device. Pokémon GO, a location-based, markerless augmented reality app for
smartphones, is an example of this kind of augmented reality. By reading data in real time from the camera, GPS,
compass, and accelerometer, this type of augmented reality ties the virtual image to a particular location. Also, as it is
based on markerless AR, no image target is required for its operation, since it can predict the user's approach by
matching the data in real time with the user's location. This type of AR gives users the possibility to add interactive
and practical digital information to interesting geographies, which is highly advantageous for tourists visiting a
particular location because it makes the environment more understandable through 3D virtual objects or movies.
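Placing a geo-anchored object relative to the user boils down to computing the distance and bearing from the device's GPS fix to the point of interest. The sketch below uses the haversine formula; the coordinates are arbitrary example values, not locations used in this project.

// Location-based AR sketch: compute distance and bearing from the device's GPS position
// to a point of interest, which tells the renderer where to place the virtual object.
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;
constexpr double kEarthRadiusM = 6371000.0;
constexpr double kDegToRad = kPi / 180.0;

// Great-circle distance in metres between two latitude/longitude pairs (haversine formula).
double distanceMetres(double lat1, double lon1, double lat2, double lon2)
{
    double dLat = (lat2 - lat1) * kDegToRad;
    double dLon = (lon2 - lon1) * kDegToRad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(lat1 * kDegToRad) * std::cos(lat2 * kDegToRad) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * kEarthRadiusM * std::asin(std::sqrt(a));
}

// Initial bearing in degrees from point 1 to point 2 (0 = north, 90 = east).
double bearingDegrees(double lat1, double lon1, double lat2, double lon2)
{
    double dLon = (lon2 - lon1) * kDegToRad;
    double y = std::sin(dLon) * std::cos(lat2 * kDegToRad);
    double x = std::cos(lat1 * kDegToRad) * std::sin(lat2 * kDegToRad) -
               std::sin(lat1 * kDegToRad) * std::cos(lat2 * kDegToRad) * std::cos(dLon);
    double deg = std::atan2(y, x) / kDegToRad;
    return std::fmod(deg + 360.0, 360.0);
}

int main()
{
    double userLat = 13.1287, userLon = 77.5870;   // example device fix
    double poiLat = 13.1300, poiLon = 77.5900;     // example point of interest
    std::printf("distance: %.1f m, bearing: %.1f deg\n",
                distanceMetres(userLat, userLon, poiLat, poiLon),
                bearingDegrees(userLat, userLon, poiLat, poiLon));
    return 0;
}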

Figure 5

6.2 Slam algorithm for markerless AR

The implementation uses ORB-SLAM, a feature-based monocular SLAM system that operates in real time, in small and large,
indoor and outdoor environments. The system is robust to severe motion clutter, allows wide-baseline loop closing and
relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, it uses
the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival-of-the-fittest
strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a
compact and trackable map that only grows if the scene content changes, allowing lifelong operation.

Figure 6

Code for the SLAM algorithm (LocalMapping module of ORB-SLAM):
#include "LocalMapping.h"
#include "LoopClosing.h"
#include "ORBmatcher.h"
#include "Optimizer.h"

#include <ros/ros.h>

namespace ORB_SLAM
{

LocalMapping::LocalMapping(Map *pMap) : mbResetRequested(false), mpMap(pMap), mbAbortBA(false),


mbStopped(false), mbStopRequested(false), mbAcceptKeyFrames(true)
{
}

void LocalMapping::SetLoopCloser(LoopClosing *pLoopCloser)


{
mpLoopCloser = pLoopCloser;
}

void LocalMapping::SetTracker(Tracking *pTracker)


{
mpTracker = pTracker;
}

void LocalMapping::Run()
{

ros::Rate r(500);
while (ros::ok())
{
// Check if there are keyframes in the queue
if (CheckNewKeyFrames())
{
// Tracking will see that Local Mapping is busy
SetAcceptKeyFrames(false);

// BoW conversion and insertion in Map
ProcessNewKeyFrame();

// Check recent MapPoints
MapPointCulling();

// Triangulate new MapPoints
CreateNewMapPoints();

// Find more matches in neighbor keyframes and fuse point duplications
SearchInNeighbors();
Optimizer::LocalBundleAdjustment(mpCurrentKeyFrame, &mbAbortBA);

// Check redundant local Keyframes
KeyFrameCulling();

mpMap->SetFlagAfterBA();

// Tracking will see Local Mapping idle


if (!CheckNewKeyFrames())
    SetAcceptKeyFrames(true);

mpLoopCloser->InsertKeyFrame(mpCurrentKeyFrame);
}

// Safe area to stop


if (stopRequested())
{
Stop();
ros::Rate r2(1000);
while (isStopped() && ros::ok())
{
r2.sleep();
}

SetAcceptKeyFrames(true);
}

ResetIfRequested();
r.sleep();
}
}

void LocalMapping::InsertKeyFrame(KeyFrame *pKF)


{
boost::mutex::scoped_lock lock(mMutexNewKFs);
mlNewKeyFrames.push_back(pKF);
mbAbortBA = true; SetAcceptKeyFrames(false);
}

bool LocalMapping::CheckNewKeyFrames()
{
boost::mutex::scoped_lock lock(mMutexNewKFs);
return (!mlNewKeyFrames.empty());
}

void LocalMapping::ProcessNewKeyFrame()
{
{
boost::mutex::scoped_lock lock(mMutexNewKFs);
mpCurrentKeyFrame = mlNewKeyFrames.front();
mlNewKeyFrames.pop_front();
}
// Compute Bags of Words structures
mpCurrentKeyFrame->ComputeBoW();

if (mpCurrentKeyFrame->mnId == 0)
return;

// Associate MapPoints to the new keyframe and update normal and descriptor
vector<MapPoint *> vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();
if (mpCurrentKeyFrame->mnId > 1) // These operations are already done in the tracking for the first two keyframes
{
for (size_t i = 0; i < vpMapPointMatches.size(); i++)
{
MapPoint *pMP = vpMapPointMatches[i];
if (pMP)
{
if (!pMP->isBad())
{
pMP->AddObservation(mpCurrentKeyFrame, i);
pMP->UpdateNormalAndDepth();
pMP->ComputeDistinctiveDescriptors();
}
}
}
}

if (mpCurrentKeyFrame->mnId == 1)
{
for (size_t i = 0; i < vpMapPointMatches.size(); i++)
{
MapPoint *pMP = vpMapPointMatches[i];
if (pMP)
{
mlpRecentAddedMapPoints.push_back(pMP);
}
}
}

// Update links in the Covisibility Graph


mpCurrentKeyFrame->UpdateConnections();

// Insert Keyframe in Map


mpMap->AddKeyFrame(mpCurrentKeyFrame);
}

void LocalMapping::MapPointCulling()
{
// Check Recent Added MapPoints
list<MapPoint *>::iterator lit = mlpRecentAddedMapPoints.begin();
const unsigned long int nCurrentKFid = mpCurrentKeyFrame->mnId;
while (lit != mlpRecentAddedMapPoints.end())
{
MapPoint *pMP = *lit;
if (pMP->isBad())
{
    lit = mlpRecentAddedMapPoints.erase(lit);
}
else if (pMP->GetFoundRatio() < 0.25f)
{
pMP->SetBadFlag();
lit = mlpRecentAddedMapPoints.erase(lit);
}
else if ((nCurrentKFid - pMP->mnFirstKFid) >= 2 && pMP->Observations() <= 2)
{

pMP->SetBadFlag();
lit = mlpRecentAddedMapPoints.erase(lit);
}
else if ((nCurrentKFid - pMP->mnFirstKFid) >= 3)
    lit = mlpRecentAddedMapPoints.erase(lit);
else
lit++;
}
}

void LocalMapping::CreateNewMapPoints()
{
// Take neighbor keyframes in covisibility graph
vector<KeyFrame *> vpNeighKFs = mpCurrentKeyFrame->GetBestCovisibilityKeyFrames(20);

ORBmatcher matcher(0.6, false);

cv::Mat Rcw1 = mpCurrentKeyFrame->GetRotation();


cv::Mat Rwc1 = Rcw1.t();
cv::Mat tcw1 = mpCurrentKeyFrame->GetTranslation();
cv::Mat Tcw1(3, 4, CV_32F);
Rcw1.copyTo(Tcw1.colRange(0, 3));
tcw1.copyTo(Tcw1.col(3));
cv::Mat Ow1 = mpCurrentKeyFrame->GetCameraCenter();

const float fx1 = mpCurrentKeyFrame->fx;


const float fy1 = mpCurrentKeyFrame->fy;
const float cx1 = mpCurrentKeyFrame->cx;
const float cy1 = mpCurrentKeyFrame->cy;
const float invfx1 = 1.0f / fx1;
const float invfy1 = 1.0f / fy1;

const float ratioFactor = 1.5f * mpCurrentKeyFrame->GetScaleFactor();

// Search matches with epipolar restriction and triangulate
for (size_t i = 0; i < vpNeighKFs.size(); i++)
{
KeyFrame *pKF2 = vpNeighKFs[i];

// Check first that baseline is not too short


// Small translation errors for short baseline keyframes make scale to diverge
cv::Mat Ow2 = pKF2->GetCameraCenter();
cv::Mat vBaseline = Ow2 - Ow1;
const float baseline = cv::norm(vBaseline);
const float medianDepthKF2 = pKF2->ComputeSceneMedianDepth(2);
const float ratioBaselineDepth = baseline / medianDepthKF2;

if (ratioBaselineDepth < 0.01)
    continue;

// Compute Fundamental Matrix


cv::Mat F12 = ComputeF12(mpCurrentKeyFrame, pKF2);

// Search matches that fulfil epipolar constraint
vector<cv::KeyPoint> vMatchedKeysUn1;
vector<cv::KeyPoint> vMatchedKeysUn2;
vector<pair<size_t, size_t>> vMatchedIndices;
matcher.SearchForTriangulation(mpCurrentKeyFrame, pKF2, F12, vMatchedKeysUn1, vMatchedKeysUn2, vMatchedIndices);

cv::Mat Rcw2 = pKF2->GetRotation();


cv::Mat Rwc2 = Rcw2.t();
cv::Mat tcw2 = pKF2->GetTranslation();
cv::Mat Tcw2(3, 4, CV_32F);
Rcw2.copyTo(Tcw2.colRange(0, 3));
tcw2.copyTo(Tcw2.col(3));

const float fx2 = pKF2->fx;


const float fy2 = pKF2->fy;
const float cx2 = pKF2->cx;
const float cy2 = pKF2->cy;
const float invfx2 = 1.0f / fx2;
const float invfy2 = 1.0f / fy2;

// Triangulate each match


for (size_t ikp = 0, iendkp = vMatchedKeysUn1.size(); ikp < iendkp; ikp++)
{
const int idx1 = vMatchedIndices[ikp].first;
const int idx2 = vMatchedIndices[ikp].second;

const cv::KeyPoint &kp1 = vMatchedKeysUn1[ikp];


const cv::KeyPoint &kp2 = vMatchedKeysUn2[ikp];

// Check parallax between rays


cv::Mat xn1 = (cv::Mat_<float>(3, 1) << (kp1.pt.x - cx1) * invfx1, (kp1.pt.y - cy1) * invfy1, 1.0);
cv::Mat ray1 = Rwc1 * xn1;
cv::Mat xn2 = (cv::Mat_<float>(3, 1) << (kp2.pt.x - cx2) * invfx2, (kp2.pt.y - cy2) * invfy2, 1.0);
cv::Mat ray2 = Rwc2 * xn2;
const float cosParallaxRays = ray1.dot(ray2) / (cv::norm(ray1) * cv::norm(ray2));

if (cosParallaxRays < 0 || cosParallaxRays > 0.9998)


continue;

// Linear Triangulation Method
cv::Mat A(4, 4, CV_32F);
A.row(0) = xn1.at<float>(0) * Tcw1.row(2) - Tcw1.row(0);
A.row(1) = xn1.at<float>(1) * Tcw1.row(2) - Tcw1.row(1);
A.row(2) = xn2.at<float>(0) * Tcw2.row(2) - Tcw2.row(0);
A.row(3) = xn2.at<float>(1) * Tcw2.row(2) - Tcw2.row(1);

cv::Mat w, u, vt;
cv::SVD::compute(A, w, u, vt, cv::SVD::MODIFY_A | cv::SVD::FULL_UV);

cv::Mat x3D = vt.row(3).t();

if (x3D.at<float>(3) == 0)
continue;

// Euclidean coordinates
x3D = x3D.rowRange(0, 3) / x3D.at<float>(3);
cv::Mat x3Dt = x3D.t();

// Check triangulation in front of cameras


float z1 = Rcw1.row(2).dot(x3Dt) + tcw1.at<float>(2);
if (z1 <= 0)
    continue;

float z2 = Rcw2.row(2).dot(x3Dt) + tcw2.at<float>(2);
if (z2 <= 0)
    continue;

// Check reprojection error in first keyframe


float sigmaSquare1 = mpCurrentKeyFrame->GetSigma2(kp1.octave);
float x1 = Rcw1.row(0).dot(x3Dt) + tcw1.at<float>(0);
float y1 = Rcw1.row(1).dot(x3Dt) + tcw1.at<float>(1);
float invz1 = 1.0 / z1;
float u1 = fx1 * x1 * invz1 + cx1;
float v1 = fy1 * y1 * invz1 + cy1;
float errX1 = u1 - kp1.pt.x;
float errY1 = v1 - kp1.pt.y;
if ((errX1 * errX1 + errY1 * errY1) > 5.991 * sigmaSquare1)
continue;

// Check reprojection error in second keyframe


float sigmaSquare2 = pKF2->GetSigma2(kp2.octave);
float x2 = Rcw2.row(0).dot(x3Dt) + tcw2.at<float>(0);
float y2 = Rcw2.row(1).dot(x3Dt) + tcw2.at<float>(1);
float invz2 = 1.0 / z2;
float u2 = fx2 * x2 * invz2 + cx2;
float v2 = fy2 * y2 * invz2 + cy2;
float errX2 = u2 - kp2.pt.x;
float errY2 = v2 - kp2.pt.y;
if ((errX2 * errX2 + errY2 * errY2) > 5.991 * sigmaSquare2)
continue;

// Check scale consistency


cv::Mat normal1 = x3D - Ow1;
float dist1 = cv::norm(normal1);

cv::Mat normal2 = x3D - Ow2;


float dist2 = cv::norm(normal2);

if (dist1 == 0 || dist2 == 0)
    continue;

const float ratioDist = dist1 / dist2;
float ratioOctave = mpCurrentKeyFrame->GetScaleFactor(kp1.octave) / pKF2->GetScaleFactor(kp2.octave);
if (ratioDist * ratioFactor < ratioOctave || ratioDist > ratioOctave * ratioFactor)
    continue;

// Triangulation is successful
MapPoint *pMP = new MapPoint(x3D, mpCurrentKeyFrame, mpMap);

pMP->AddObservation(pKF2, idx2);
pMP->AddObservation(mpCurrentKeyFrame, idx1);

mpCurrentKeyFrame->AddMapPoint(pMP, idx1);
pKF2->AddMapPoint(pMP, idx2);

pMP->ComputeDistinctiveDescriptors();

pMP->UpdateNormalAndDepth();

mpMap->AddMapPoint(pMP);
mlpRecentAddedMapPoints.push_back(pMP);
}
}
}

void LocalMapping::SearchInNeighbors()
{
// Retrieve neighbor keyframes
vector<KeyFrame *> vpNeighKFs = mpCurrentKeyFrame->GetBestCovisibilityKeyFrames(20);
vector<KeyFrame *> vpTargetKFs;
for (vector<KeyFrame *>::iterator vit = vpNeighKFs.begin(), vend = vpNeighKFs.end(); vit != vend; vit++)
{
KeyFrame *pKFi = *vit;
if (pKFi->isBad() || pKFi->mnFuseTargetForKF == mpCurrentKeyFrame->mnId)
continue;
vpTargetKFs.push_back(pKFi);
pKFi->mnFuseTargetForKF = mpCurrentKeyFrame->mnId;

// Extend to some second neighbors


vector<KeyFrame *> vpSecondNeighKFs = pKFi->GetBestCovisibilityKeyFrames(5);
for (vector<KeyFrame *>::iterator vit2 = vpSecondNeighKFs.begin(), vend2 = vpSecondNeighKFs.end(); vit2 !=vend2;
vit2++)
{
KeyFrame *pKFi2 = *vit2;
if (pKFi2->isBad() || pKFi2->mnFuseTargetForKF == mpCurrentKeyFrame->mnId || pKFi2->mnId ==
mpCurrentKeyFrame->mnId)
continue;
vpTargetKFs.push_back(pKFi2);
}
}

// Search matches by projection from current KF in target KFs
ORBmatcher matcher(0.6);
vector<MapPoint *> vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();
for (vector<KeyFrame *>::iterator vit = vpTargetKFs.begin(), vend = vpTargetKFs.end(); vit != vend; vit++)
{
    KeyFrame *pKFi = *vit;
    matcher.Fuse(pKFi, vpMapPointMatches);
}

// Search matches by projection from target KFs in current KF


vector<MapPoint *> vpFuseCandidates;
vpFuseCandidates.reserve(vpTargetKFs.size() * vpMapPointMatches.size());

for (vector<KeyFrame *>::iterator vitKF = vpTargetKFs.begin(), vendKF = vpTargetKFs.end(); vitKF != vendKF;vitKF++)


{
KeyFrame *pKFi = *vitKF;

vector<MapPoint *> vpMapPointsKFi = pKFi->GetMapPointMatches();

for (vector<MapPoint *>::iterator vitMP = vpMapPointsKFi.begin(), vendMP = vpMapPointsKFi.end(); vitMP != vendMP; vitMP++)
{
MapPoint *pMP = *vitMP;
if (!pMP)
continue;
if (pMP->isBad() || pMP->mnFuseCandidateForKF == mpCurrentKeyFrame->mnId)
continue;
pMP->mnFuseCandidateForKF = mpCurrentKeyFrame->mnId;
vpFuseCandidates.push_back(pMP);
}
}

matcher.Fuse(mpCurrentKeyFrame, vpFuseCandidates);

// Update points
vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();
for (size_t i = 0, iend = vpMapPointMatches.size(); i < iend; i++)
{
MapPoint *pMP = vpMapPointMatches[i];
if (pMP)
{
if (!pMP->isBad())
{
pMP->ComputeDistinctiveDescriptors();
pMP->UpdateNormalAndDepth();
}
}
}

// Update connections in covisibility graph


mpCurrentKeyFrame->UpdateConnections();
}

cv::Mat LocalMapping::ComputeF12(KeyFrame *&pKF1, KeyFrame *&pKF2)


{
cv::Mat R1w = pKF1->GetRotation();
cv::Mat t1w = pKF1->GetTranslation();
cv::Mat R2w = pKF2->GetRotation();
cv::Mat t2w = pKF2->GetTranslation();

cv::Mat R12 = R1w * R2w.t();
cv::Mat t12 = -R1w * R2w.t() * t2w + t1w;

cv::Mat t12x = SkewSymmetricMatrix(t12);

cv::Mat K1 = pKF1->GetCalibrationMatrix();
cv::Mat K2 = pKF2->GetCalibrationMatrix();

return K1.t().inv() * t12x * R12 * K2.inv();


}

void LocalMapping::RequestStop()
{
boost::mutex::scoped_lock lock(mMutexStop);
mbStopRequested = true;
boost::mutex::scoped_lock lock2(mMutexNewKFs);
mbAbortBA = true;
}
void LocalMapping::Stop()
{
boost::mutex::scoped_lock lock(mMutexStop);
mbStopped = true;
}

bool LocalMapping::isStopped()
{
boost::mutex::scoped_lock lock(mMutexStop);
return mbStopped;
}

bool LocalMapping::stopRequested()
{
boost::mutex::scoped_lock lock(mMutexStop);
return mbStopRequested;
}

void LocalMapping::Release()
{
boost::mutex::scoped_lock lock(mMutexStop);
mbStopped = false;
mbStopRequested = false;
for (list<KeyFrame *>::iterator lit = mlNewKeyFrames.begin(), lend = mlNewKeyFrames.end(); lit != lend; lit++)
delete *lit;
mlNewKeyFrames.clear();
}

bool LocalMapping::AcceptKeyFrames()
{
boost::mutex::scoped_lock lock(mMutexAccept);
return mbAcceptKeyFrames;
}
void LocalMapping::SetAcceptKeyFrames(bool flag)
{
boost::mutex::scoped_lock lock(mMutexAccept);
mbAcceptKeyFrames = flag;
}

void LocalMapping::InterruptBA()
{
mbAbortBA = true;
}

void LocalMapping::KeyFrameCulling()
{
// Check redundant keyframes (only local keyframes)
// A keyframe is considered redundant if the 90% of the MapPoints it sees, are seen
// in at least other 3 keyframes (in the same or finer scale)
vector<KeyFrame *> vpLocalKeyFrames = mpCurrentKeyFrame->GetVectorCovisibleKeyFrames();

for (vector<KeyFrame *>::iterator vit = vpLocalKeyFrames.begin(), vend = vpLocalKeyFrames.end(); vit != vend;vit++)


{
KeyFrame *pKF = *vit;
if (pKF->mnId == 0)
continue;
vector<MapPoint *> vpMapPoints = pKF->GetMapPointMatches();

int nRedundantObservations = 0;
int nMPs = 0;
for (size_t i = 0, iend = vpMapPoints.size(); i < iend; i++)
{
MapPoint *pMP = vpMapPoints[i];
if (pMP)
{
if (!pMP->isBad())
{
nMPs++;
if (pMP->Observations() > 3)
{
int scaleLevel = pKF->GetKeyPointUn(i).octave;
map<KeyFrame *, size_t> observations = pMP->GetObservations();
int nObs = 0;
for (map<KeyFrame *, size_t>::iterator mit = observations.begin(), mend = observations.end(); mit !=
mend; mit++)
{
KeyFrame *pKFi = mit->first;
if (pKFi == pKF)
    continue;
int scaleLeveli = pKFi->GetKeyPointUn(mit->second).octave;
if (scaleLeveli <= scaleLevel + 1)
{
nObs++;
if (nObs >= 3)
    break;
}
}
if (nObs >= 3)
{
nRedundantObservations++;
}
}
}
}
}

if (nRedundantObservations > 0.9 * nMPs)
    pKF->SetBadFlag();
}
}

cv::Mat LocalMapping::SkewSymmetricMatrix(const cv::Mat &v)


{
return (cv::Mat_<float>(3, 3) << 0, -v.at<float>(2), v.at<float>(1),
v.at<float>(2), 0, -v.at<float>(0),
-v.at<float>(1), v.at<float>(0), 0);
}

void LocalMapping::RequestReset()
{
{
boost::mutex::scoped_lock lock(mMutexReset);
mbResetRequested = true;
}

ros::Rate r(500);
while (ros::ok())
{
{
boost::mutex::scoped_lock lock2(mMutexReset);
if (!mbResetRequested)
break;
}
r.sleep();
}
}

void LocalMapping::ResetIfRequested()
{
boost::mutex::scoped_lock lock(mMutexReset);
if (mbResetRequested)
{
mlNewKeyFrames.clear();
mlpRecentAddedMapPoints.clear();
mbResetRequested = false;
}
}

} // namespace ORB_SLAM
6.3 Object detection using Core-ML

Step 1: The app captures an image of the object using the camera of a phone or tablet. The app stores
feature descriptors that help identify the reference image.

Step 2: AR software recognizes the object within the real-world environment through feature points. To
recognize an object, the camera finds matches between the reference and frame images.

Step 3: The object is recognized through an identifiable constellation of points, then the digital model is placed
accordingly. Learners can then interact with and manipulate the 3D digital object.
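The report targets Core ML on iOS for this recognition step; as a platform-neutral sketch of the same idea (matching feature descriptors between a stored reference image and the current camera frame), the following uses OpenCV's ORB features. The file names and thresholds are example assumptions, not project values.

// Feature-matching sketch: detect ORB keypoints in a reference image and a camera frame,
// match their descriptors, and keep only good matches. Enough good matches means the
// reference object has been recognized and a 3D model can be anchored to it.
// "reference.png" and "frame.png" are placeholder file names.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::Mat reference = cv::imread("reference.png", cv::IMREAD_GRAYSCALE);
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (reference.empty() || frame.empty())
        return 1;

    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
    std::vector<cv::KeyPoint> kpRef, kpFrame;
    cv::Mat descRef, descFrame;
    orb->detectAndCompute(reference, cv::noArray(), kpRef, descRef);
    orb->detectAndCompute(frame, cv::noArray(), kpFrame, descFrame);
    if (descRef.empty() || descFrame.empty())
        return 1;

    // Hamming distance is appropriate for ORB's binary descriptors.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descRef, descFrame, matches);

    // Keep matches with a small descriptor distance (threshold is an assumed value).
    int good = 0;
    for (const cv::DMatch &m : matches)
        if (m.distance < 40)
            good++;

    bool recognized = good > 30;   // assumed recognition threshold
    std::printf("good matches: %d, recognized: %s\n", good, recognized ? "yes" : "no");
    return 0;
}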

Figure 7

6.4 Flow Chart

Figure 8

Chapter 7

CONCLUSION
The results showed that AR technology is not yet widespread at the social and educational levels: little diffusion of AR in
teaching is observed, and some teachers who had used AR applications did not know that they were AR applications. However, a
slow but rising diffusion of AR technology is noticeable. Virtualization should never be seen as a simple solution to a specific
problem; that is the main idea we have tried to convey. It is a principle, a technology that is applicable to a very large range
of different solutions. It is also a buzzword used to attract investment nowadays, heralded by many companies as "the next best
thing in IT". A common conclusion is that augmented reality (AR) applications can enhance the learning process, learning
motivation, and effectiveness. Despite the positive results, more research is necessary.

BIBLIOGRAPHY

[1] S. Dormido, H. Vargas, J. Sánchez, N. Duro, R. Dormido, S. Dormido-Canto, F. Esquembre, "Using Web-Based Laboratories for Control
Engineering Education," International Conference on Engineering Education, Coimbra, Portugal, September 2007.
[2] Z. Nedic, J. Machotka, A. Nafalski, "Remote laboratories versus virtual and real laboratories," Proc. 33rd ASEE/IEEE Frontiers in
Education Conference, Boulder, Colorado, USA, November 2003.
[3] Z. Nedic, J. Machotka, A. Nafalski, "Remote Laboratory NetLab for Effective Interaction with Real Equipment over the
Internet," Proc. 2008 IEEE Conference on Human System Interaction (HSI), Krakow, Poland, pp. 846-851, May 2008.
[4] L. D. Feisel, A. J. Rosa, "The Role of the Laboratory in Undergraduate Engineering Education," Journal of Engineering Education,
Vol. 94, pp. 121-130, January 2005.
[5] J. Y. Ma, J. S. Choi, "The Virtuality and Reality of Augmented Reality," Journal of Multimedia, Vol. 2, No. 1, pp. 32-37, February
2007.
[6] M. Sairio, Augmented Reality, Helsinki University of Technology, 2001.
