DEGREE PROJECT IN TECHNOLOGY,
FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2021

A Proposal for Augmented Reality Integration in Navigation

DANIEL MUNENE NYEKO MOINI

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE
Abstract

In today's society, approximately half of the world population uses their smartphone for everyday tasks such as handling scheduled events, financial inquiries and general social life, according to Ash Turner's statistical study on the topic. Smartphones have been an amazing tool for helping people handle these tasks easily, conveniently and swiftly, all in the palm of our hand. This also includes traveling and navigation. Even for traveling by foot, smartphones possess features that make them a flexible option. As such, navigation apps have seen heavy use. They do have some problems, however, such as lacking information on more recent paths or lesser-known areas that have not been added to the database.

Hence, Neava AB had an interest in exploring this field and finding a solution using Augmented Reality (AR). This thesis documents the process, from the creation of the development plan, through the development process, to the final result and discussion. It also showcases other applications of AR technology in similar and completely different fields, along with future potential usage.

The final prototype is a small program that allows the user to be guided from a start position to a destination with visual aid in the form of an arrow that points the user towards the destination. It guides the user to a series of checkpoints between the two points. When the user has reached a checkpoint, they press the button on the screen to be guided to the next checkpoint, accompanied by an audio clip informing them which direction to turn. This is repeated until the destination is reached. At that point, pressing the button again brings up text. This is currently dummy text, but it is meant to contain information about the destination in the final build.

This thesis also analyses the reliability of the GPS and gyroscope components within a smartphone, by request of the examiner. The test is done by walking from a starting position towards a set location while recording the current coordinates and the time since departure. These values are recorded in a text file. The program is then terminated and the process is repeated back to the starting position. The data is then visualised and analysed to determine the stability.

In the end, the prototype showed promise in becoming a commercial product given more development time. Many features did not make it into the prototype, such as a virtual trail for improved visual guidance and having voice lines and checkpoints change automatically. The GPS tests showed great promise in reliability and stability, showcasing small deviations which could be attributed to factors other than the component itself. The gyroscope was somewhat less reliable, but that can be addressed in software.

Keywords

Navigation, AR, GPS, Unity, Gyroscope, Component Research
Sammanfattning

In today's society, roughly half of the world's population uses smartphones for their everyday tasks. This can include handling scheduled events, financial needs and leisure life, according to Ash Turner's statistical research. Smartphones have become a practical means of handling these tasks easily, conveniently and quickly. This also includes travel and navigation. Even when traveling by foot, smartphones are a more practical choice of navigation aid than traditional maps. Because of this, navigation apps have seen heavy use. However, they are not without their drawbacks, especially in lesser-known areas.

Neava AB has had an interest in investigating this problem and creating a solution to it. They therefore requested a smartphone prototype that makes use of Augmented Reality (AR). This report documents the entire process, from the creation of the development plan to the development process, the final result and the discussion. The report also goes through earlier applications of AR in different industries and future applications.

The final prototype became a small program that lets the user be guided from Point A to Point B with both an arrow and audio support. Between these points there are intermediate points connecting Point A to Point B. The user starts by being guided to the first intermediate point. When it has been reached, the user presses the button and is guided to the next intermediate point, with the arrow aimed at it and an audio clip stating which direction to take. This process is repeated until Point B is reached, at which point pressing the button makes text information appear on the screen.

The report also investigates the reliability of the GPS and gyroscope components within a smartphone. The investigation is carried out through a test where the program is started and the phone is taken from a starting position, along a specific route, to a destination. While moving towards the destination, the time and current coordinates are saved every second in a text file. When the destination has been reached, the program is terminated, restarted, and the same process is repeated back to the starting position. All data from the text file is then visualised and analysed.

The prototype has a good chance of becoming a commercial product if built upon further. There were several functions and ideas that could not be implemented in the prototype due to time. Some of these were better visual guidance (drawing a virtual line from the user to the upcoming intermediate point) and letting audio clips and targets switch automatically when an intermediate point is reached, instead of the user having to do it manually. The results from the GPS tests show that it is stable and reliable when not influenced by other factors. The gyroscope turned out to be less reliable, but that can be solved with software that corrects it.

Keywords

Navigation, AR, GPS, Unity, Gyroscope, Component Reliability
Foreword

First, I would like to thank Staffan Johansson, the supervisor from Neava AB, for all his help and assistance during the course of this project: for guiding me through Unity, aiding me with recommended lessons, and being of great help during all the meetings. It was great discussing ideas with you, and it always felt that every meeting resulted in positive progress. I would also like to thank Mira Kajko-Mattson and my examiner Anders Sjögren for all the guidelines on how to assemble a report. Lastly, I would like to thank Johan Montelius for all the extra ideas and viewpoints given to me during our few meetings over the course of the project.
Contents

1 Introduction
  1.1 Background
  1.2 Problem
  1.3 Purpose
  1.4 Goal
  1.5 Method
  1.6 Limitation and Scope
  1.7 Ethical Considerations
  1.8 Outline

2 Extended Background
  2.1 Augmented Reality
  2.2 Google
  2.3 GPS
  2.4 Gyroscope
  2.5 Camera
  2.6 Unity

3 Method
  3.1 Research Process
  3.2 Project Strategy
  3.3 Technical Method
  3.4 Hardware Research

4 System Architecture and Implementation
  4.1 System Architecture Diagram
  4.2 Final Prototype
  4.3 Component Research

5 Results
  5.1 Sub Question 1: Previous Applications of AR Navigation
  5.2 Sub Question 2: Future Developments in AR
  5.3 Sub Question 3: Final Prototype
  5.4 Sub Question 4: Hardware Research

6 Discussion
  6.1 Previous Applications of AR Navigation
  6.2 Future Developments in AR
  6.3 Final Prototype
  6.4 Hardware Research
  6.5 Sub Question 5: Validity Threats

7 Conclusion
  7.1 Conclusion
  7.2 Future Plans

References
List of Figures

1 Google LiveView
2 Movements that the gyroscope component tracks
3 The Unity Development Environment
4 Screenshot of Pokemon GO, a popular game built in Unity
5 Graphical representation of Bunge's research method, adapted for this project. Original was provided by Anders Sjögren[1]
6 Visualisation of a Triple Constraint
7 Visualisation of the modified waterfall model for this project, seen as steps 4-8 in Bunge's research method
8 Visualization of the path
9 Visual Example of a class[2]
10 Visual example of Inheritance[2]
11 System Architecture Diagram for the Prototype (AR Functionality is coded in red). All the classes were created in Unity
12 An Actor[3]
13 A use case[3]
14 An Actor and Use Case connected via a relation[3]
15 Example of a Use Case Diagram[4]
16 Use Case Diagram for the Prototype
17 The Final Prototype
18 Latitude values towards Destination. Orange line denotes the overall average
19 Longitude values towards Destination. Orange line denotes the overall average
20 Latitude values returning back. Orange line denotes the overall average
21 Longitude values returning back. Orange line denotes the overall average
22 Gyroscope values on the X-Axis when heading towards the destination
23 Gyroscope values on the Y-Axis when heading towards the destination
24 Gyroscope values on the Z-Axis when heading towards the destination
25 Gyroscope values on the X-Axis when returning back
26 Gyroscope values on the Y-Axis when returning back
27 Gyroscope values on the Z-Axis when returning back
1 Introduction

This chapter introduces the main purpose of the thesis and gives context to some terminology and concepts which will be discussed over the course of the thesis. Section 1.1 gives a general background, Section 1.2 describes the problem the thesis is centered around, Section 1.3 details the purpose of this thesis, Section 1.4 details the goal of the thesis, Section 1.5 briefly explains the method of how to solve the problem, Section 1.6 states the limitations and scope the thesis has to abide by, Section 1.7 details a few ethical concerns that can arise, and Section 1.8 shows an outline of the thesis.
1.1 Background

In the current world, around half the earth's population uses smartphones[5]. Smartphones are most commonly used for completing financial transactions, as a news source, for keeping connections with one's social circle and more[6]. Another major use case for smartphones is navigation[7]. About two-thirds of smartphone users use navigation apps regularly[8]. In those situations, users tend to use Google Maps as their main tool to guide them to their destination[9].

It is possible to represent navigation on a map, but projecting it onto the camera view on the phone screen has not been going smoothly. Google attempted this with their LiveView[10] feature, but it is currently clunky in its functionality.

Neava AB, an IT consultancy, has been interested in exploring this field for quite some time. They specialize in 3D software to assist the everyday life of average users. One of their main projects is Arvue Design Studio, a piece of software that makes use of Augmented Reality (henceforth referred to as AR) to allow the user to see their potential new home in real-time 3D in 360°.
1.2 Problem

Neava wishes to enter the navigation market with an application that guides the user between two locations using AR technology. Neava lacks experience in the field and, as such, requested a prototype.
1.3 Purpose

The purpose is to showcase the potential of AR technology in a navigation system. This also allows Neava to get a head start in terms of development without allocating resources.
1.4 Goal

The goal is to realize the advantages and benefits of AR technology and showcase them for Neava AB and anyone else interested in pursuing the AR field, with a prepared framework.
1.5 Method

The challenge of this thesis is to determine some of the possibilities AR offers for navigation. This will be showcased by creating a prototype that:

• Guides the user between two geographical points using AR.
  – Provides visual assistance for directions via AR (an arrow that points at Point B at all times).
  – Provides audio assistance for directions via AR (audio clips that indicate how to turn).
  – Provides information about Point B for the user to learn.

This will be accomplished by:

• Identifying the features to be implemented in terms of priority and estimated time
• Implementing the features in iterations
• Testing the stability of the components in a smartphone

These points will be expanded upon in the Method chapter.
1.6 Limitation and Scope

Due to limited time, the developed app will not be a full-fledged app but only a prototype. This means that the user interface and general codebase will not be completely refined and ready for market, as the goal is only to showcase the idea. The app will be developed in Unity specifically, as it is the environment used by Neava AB.
1.7 Ethical Considerations

When dealing with navigation apps, the common ethical questions are questions of privacy. In order for navigation apps to work efficiently, they need to be aware of the user's current location[11]. This raises questions in regards to privacy. For example:

• Should private companies be able to know your current position at all times?
• What can these companies do with this information in terms of monetary gain (selling it to advertisers and other companies)?

These questions will not be discussed in this thesis as they are not the current focus, but they are acknowledged regardless.
1.8 Outline

The thesis is structured with the following chapters:

• Chapter 2 consists of an extended background. This chapter covers the tools used to develop the application. It also goes into detail on the components required for it to function.
• Chapter 3 discusses the development strategy used for the project and how it applies to the project. It also describes the tests performed on the smartphone components.
• Chapter 4 showcases the system architecture and implementation of the prototype, going into detail on how the components build it up.
• Chapter 5 contains the results regarding the research on AR and showcases the prototype.
• Chapter 6 discusses the results along with the validity threats.
• Chapter 7 concludes the research and discusses future plans.
2 Extended Background

In this chapter, AR in general will be explained, Google's approach to navigation will be delved into along with their current attempt at AR navigation, and the components the application utilizes will be explored further. At the end, the development environment will be delved into. Section 2.1 delves into Augmented Reality, how it works and some use cases. Section 2.2 covers Google's approach to navigation, with Subsection 2.2.1 covering LiveView, Google's attempt at AR navigation. Sections 2.3-2.5 consist of an exploration of the GPS, gyroscope and camera. The final Section 2.6 discusses the Unity environment in terms of functionality.
2.1 Augmented Reality

AR refers to a technology that allows one to experience the real world with added digital elements that make use of the five human senses[12]. Digital elements here refers to information which cannot be physically interacted with. These elements can represent any type of data, from text to sounds and 3D-rendered models. AR has been utilized in multiple fields, including navigation, retailing and entertainment[12]. In retailing, it has been utilized in multiple ways, for example to showcase additional information about a product in stock aside from what is on the price tag[12], through an app affiliated with the shop[12]. IKEA also utilizes AR in order to allow customers to visualise how furniture would look in their room[13].
2.2 Google

Google Maps utilizes a few techniques to give the best user experience. In order to obtain the most up-to-date geographical data, Google cooperates with third-party companies and governments[14], as well as obtaining their own data with multiple vehicles traversing all over the world, taking 360° photographs of every possible street and pathway. For signboards and important hoardings in cities, Google utilizes AI technology to parse them[15]. This, combined with Google Street View and base map data, allows Google Maps to generate the maps known today. Google has been experimenting with improving their navigation experience with more visual assistance, which will be discussed next[16].
2.2.1 LiveView

Google LiveView is Google's attempt at utilizing AR technology for Google Maps. It makes use of AR technology and Street View, a Google service on Maps which allows users a 360° horizontal and 290° vertical panoramic image of specific areas on Google Maps[15]. With these two technologies, it is able to overlay geographical information on your screen as you travel. What information is to be shown is determined by an AI and the Global Positioning System (GPS)[17]. This makes sure that the information is displayed in the correct position on your phone[18]. It has a mini-map at the bottom of the screen that the user can utilize as an extra tool for navigating. If the user travels in the wrong direction, it will make use of voice messages to direct the user towards the correct direction[18]. To initialize LiveView, the user scans their path and then it overlays the directions on the screen. If the path cannot be found on Street View, one will have to scan around until Street View can recognize it.

Figure 1: Google LiveView
2.3 GPS

As mentioned earlier, the GPS is one factor in securing that information is displayed in the correct position. The GPS is a physical component that allows the smartphone to pinpoint its current location in real-time in the form of latitude (a measurement of location north or south of the Equator[19]) and longitude (a measurement of location east or west of the prime meridian located in Greenwich, England, which is considered the base point to measure from[19]). When an application requests location services, it requests the information that the GPS component has computed, which is then manipulated to perform various tasks such as navigation or unlocking certain features.

The GPS makes use of three sub-components in order to fulfill its task. These are:

• Satellites. A set of satellites that orbit around the Earth. Four satellites are required for a GPS device to calculate its location. Three of them generate the location information via calculation of location, velocity and elevation. The fourth satellite validates this result before it is sent back to the GPS device. This process is known as trilateration[17].
• Ground Control. Consists of monitor stations located on almost every continent[17]. Their role is to track and monitor the satellites and the transmissions to GPS devices.
• User Control. These are GPS components on user devices, such as smartphones and smartwatches[17].
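As a rough illustration of the principle behind trilateration (a sketch of our own, not spelled out in the cited source): each satellite broadcasts the time at which it sent its signal, and the receiver measures the travel time Δt_i and converts it into a distance d_i = c · Δt_i, where c is the speed of light. Each distance constrains the receiver to a sphere around satellite i,

(x − x_i)² + (y − y_i)² + (z − z_i)² = d_i²,

and intersecting three such spheres narrows the position (x, y, z) down to a point, while the fourth measurement compensates for the receiver's imprecise clock.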
2.4 Gyroscope

The gyroscope is another component used in multiple devices, including smartphones. Its main purpose is to sense motion occurring on the device and translate it into numerical values[20]. This information can then be used to trigger desired commands within the device's software. Prominent use cases for gyro technology include mobile games featuring motion controls, such as tilting the device like a steering wheel in racing games, or shaking the phone as a game objective. It is also used in cameras for image stabilization, a technique that addresses hand trembling when snapping a photo, resulting in a less shaky image[20]. In electrical devices, gyroscopes are installed in the form of microelectromechanical systems (MEMS)[21]. The gyroscope makes use of the angular velocity, a measurement of the speed of rotation of an object[22]. This measurement can be defined either in degrees per second (°/s) or revolutions per second (RPS), and can be used to determine how far the object is from a neutral position.

Figure 2: Movements that the gyroscope component tracks
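As a minimal illustration of how an application can read these values, the following sketch uses Unity's gyroscope API, where Input.gyro.rotationRate reports angular velocity in radians per second; the class name GyroReader is illustrative and not part of the prototype.

using UnityEngine;

public class GyroReader : MonoBehaviour
{
    void Start()
    {
        // The gyroscope must be enabled explicitly before it reports values.
        Input.gyro.enabled = true;
    }

    void Update()
    {
        // Angular velocity around the device's x, y and z axes, in radians per second.
        Vector3 rate = Input.gyro.rotationRate;
        Debug.Log("Rotation rate (rad/s): " + rate);
    }
}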
2.5 Camera

The camera component allows the phone to capture visual information from the outside and project it onto a display for the user. The camera has been a stable component within phones for decades, and has been closing the gap between it and digital cameras in terms of quality, thanks to improvements in both hardware and software. The camera component consists of multiple small parts that work together in order to achieve its goal.

To start off, one of the components is the lens. This is the part of the camera that is visible to the naked eye from the outside. It consists of a transparent material that can be of glass or plastic[23]. The goal of the lens is to make sure that light enters the sensor correctly[23]. Without it, the sensor would not be able to gain enough information to recreate an image of the outside world, if at all.

The last major component is the sensor itself. This component acts as a translator between the natural light and your device[23]. The device receives the light as a set of signals, and those signals are then converted into viewable images on the display of the device. This is done by the Image Signal Processor (ISP)[23]. Modern ISPs also handle color correction, repair damaged pixels and even perform AI correction[23].
2.6 Unity

Unity is the development environment in which the application is developed. It was chosen as it is the development environment Neava AB utilizes for developing their products, so their experience is of value for any possible challenges that can arise. It consists of a simplified view that abstracts low-level code into toggles and menu options, making it possible for creators without much programming experience to develop full games or programs. Unity started off as a game engine released in 2005[24]. Factors that have contributed to its growth and success include:

• Support for both 3D and 2D graphics[24]: This grants it flexibility and accessibility, as developers are not limited in which dimension they have to utilize for development.
• Easy to understand: The Unity Editor allows creators with little to no knowledge to create events/scripts, as everything has been simplified to options in the editor. One does not have to write much code to create a function in Unity; instead, one can make use of the Editor, which will translate into C# code when building the complete application.
• Large user community: Unity allows users to upload any of their created assets to the Unity Asset Store, an online platform that hosts them. From there, anyone can download these assets (free or paid) and import them into their own project for instant use.

Figure 3: The Unity Development Environment

Every new Unity project starts off with a scene. This is a landscape where all the assets are placed. How these assets are viewed by the user is determined by the positioning of the camera for that scene. Whatever the camera points at becomes the screen for the user. A project can consist of multiple scenes, allowing one to jump between them seamlessly if allowed by the developer.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    // Start is called before the first frame update
    void Start()
    {
    }

    // Update is called once per frame
    void Update()
    {
    }
}

Listing 1: An unaltered script file
Code in Unity is handled in files known as scripts. Scripts have a fundamental structure that can be built upon to fit any task a developer would like to tackle. A standard script with no modifications has two function names already given to you, called Start and Update. The Start function takes the instructions within it and runs them only once at run-time. Here, the developer adds the preparation of variables and values before anything else in the script is executed. The Update function runs every frame, a unit of time, and executes the instructions within it during that frame.

As code is categorized into scripts and these scripts work together to make up the entire program, the low-level building blocks that create the programs work in an object-oriented way[25]. To see how, let us define what Object-Oriented Programming (OOP) is. The structure of OOP consists of four factors:

• Classes. General data types that hold information and methods to be manipulated within said classes[25]. In Unity, scripts allow one to create data types akin to classes.
• Objects. Copies of a class which are created on command with initial data inserted, known as instancing[25]. Any changes made to an object do not affect the class itself, allowing one to make unique copies from the same source. This is also possible in Unity with scripts.
• Attributes. Fields within a class that hold data[25]. This includes types such as int, char or string. When an object is instantiated, every attribute is given data that applies to that object only. This data can then be manipulated to fulfill any task. Scripts handle data in this way.
• Methods. Operations that manipulate the data within a class[25]. They can be called by referencing the chosen object. With this, one can make use of the method multiple times without having to copy-paste it. This is possible in Unity.

There are also some properties and rules one has to follow in order to be considered OOP. These are:

• Encapsulation. The idea here is that objects are not allowed to access attributes/methods that are not their own, unless an object explicitly allows for it[25]. This is handled with the private and public properties. Setting any attribute/method to public allows other objects to access it; setting it to private denies access from any other objects. This is achievable in Unity, as private and public are present.
• Abstraction. The concept of only showcasing what is necessary. This means that an object hides information that other objects have no need of utilizing[25]. It makes it easier for developers to understand which properties of an object can be used by other objects, since only the required information is shown. This is possible in Unity.
• Inheritance. This allows a new class to build upon an existing class, reusing its attributes and methods[25]. With this, developers will not have to rewrite code if they want to reuse a class. When changes are made to the class being inherited, all inheriting classes receive the same changes, removing the risk of one class's inherited code becoming outdated. This property is available in Unity.
• Polymorphism. This means that an object can take the form of multiple types of data, decided by a parent class[25]. This is useful for when an object needs to be defined as multiple things which already have classes. One can make use of inheritance to achieve this, which is possible in Unity.

We can see that Unity can achieve all the aspects mentioned above, and can therefore conclude that it is object-oriented. A short example is sketched below.
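The following sketch (our own illustration, not code from the prototype) shows how these properties surface in ordinary Unity scripts: encapsulation through private and public fields, a method operating on the class's own data, and inheritance from another class.

using UnityEngine;

public class Vehicle : MonoBehaviour
{
    public float speed = 5f;    // public attribute: accessible from other objects
    private float fuel = 100f;  // private attribute: encapsulated within Vehicle

    // A method that manipulates the class's own data.
    public void Drive(float seconds)
    {
        fuel -= seconds; // only Vehicle itself can modify fuel
        transform.Translate(Vector3.forward * speed * seconds);
    }
}

// Inheritance: Car reuses all attributes and methods of Vehicle.
public class Car : Vehicle
{
    void Start()
    {
        Drive(2f); // the inherited method is used without rewriting it
    }
}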
Figure 4: Screenshot of Pokemon GO, a popular game built in Unity
Unity has grown to the point of not only being used for game development, but for general app development, especially for smartphones. It is also the preferred option for independent (indie) game development studios for its low upfront cost and cross-platform support (one can build and deploy to multiple platforms without much hassle).
3 Method

In this chapter, the development plan and research strategy are discussed, along with any other related tasks. Section 3.1 focuses on the research process, with Subsection 3.1.1 going into the research question and Subsection 3.1.2 into the research strategy. Section 3.2 covers the project strategy, with Subsection 3.2.1 explaining the concept of the Triple Constraint and how it is applied to this project, Subsection 3.2.2 MoSCoW and Subsection 3.2.3 the waterfall model. Section 3.3 discusses the technical method and Section 3.4 discusses how the hardware is researched.
3.1 Research Process

To establish a scientific work that showcases the validity of the results presented, a proven research method will be utilized. For this thesis, Bunge's research method was deemed fitting, as it is designed for iterative processes of designing a product[26], which applies to the prototype being built for this project. Bunge's research method is a practice that breaks down the construction of a research project into ten steps. These steps are the following:

1. How can one come to a conclusion on the established research question?
2. How can a technology/product that solves the problem efficiently be developed?
3. What base knowledge is available and required to develop the technology/product?
4. Based on the information gathered in step 3, develop the technology/product. If the technology/product is deemed adequate, continue to step 6.
5. Attempt again with a new technology/product.
6. Make a model/simulation of the proposed technology/product.
7. What consequences does the model/simulation in step 6 entail?
8. Test the application based on the model/simulation. If the result is not satisfactory go to step 9, otherwise go to step 10.
9. Identify and address faults in the model/simulation. If everything suffices, continue development.
10. Evaluate the final result in comparison to existing knowledge and practice, along with identifying new problems that could be researched further.

By following these steps, one should have a complete, legitimate research project[26]. How these steps are covered within this research project will be described in the subsequent sub-chapters.

Figure 5: Graphical representation of Bunge's research method, adapted for this project. Original was provided by Anders Sjögren[1]
3.1.1 Research Question

For the first step in Bunge's research method, the initial research question and any possible follow-up questions are showcased[26]. The initial research question is defined as:

How can AR technology enhance the navigation experience?

There are a few methods to answer this question. One possible method is to gather opinions from a sample set of people, analyse the results and reach a conclusion. For this thesis, a prototype program will instead be created to test the utility of AR, along with testing the main components the technology will utilize, the GPS and the gyroscope. These components will also be tested for their stability during use. The entire process, from planning to completion, will be documented step-by-step, along with which tools were used and how they were used.

There is also a set of sub-questions that will need to be addressed to solidify the authenticity of the research. These sub-questions are:

1. Are there any previous implementations of this idea already available?
2. Are there any unexplored avenues of AR that could be explored now?
3. How will the application be designed and developed? What tools and strategies will be used?
4. How reliable are the hardware sensors for the goal of this project?
5. What is required for this research to be scientifically proven?

These questions will be explored and answered during this thesis, with the following sub-chapters detailing the process of discovering the answers and reaching a solid conclusion.
3.1.2 Research Strategy

In order to tackle the questions presented earlier, a qualitative research strategy will be established. The research strategy details the process of uncovering the answers to the questions stated in the previous sub-chapter. For the first sub-question, previous utilizations of AR will be discussed in terms of their purpose, how they work and how they improve the general experience. This means discussing other companies' usage of the technology and how it improved their products. This is how the second and third steps in Bunge's research method will be tackled.

For the second sub-question, research regarding advancements and potential use cases will be discussed. This entails listing future advancements that are not market-ready but in current development, including any developments that companies have publicly disclosed and any research papers that have been officially published. These developments will be discussed in terms of their possibilities, positive and negative.

The third sub-question is discussed in its entirety in the Project Strategy sub-chapter, detailing the project method and development environment. It also covers steps 4, 5, 6 and 7 in Bunge's research method.

As for the fourth sub-question, tests will be done to confirm the stability of the hardware sensors. The method of this test is described in the Hardware Research sub-chapter.

For the final sub-question, the method and results will be checked against a set of validity threats: criteria that, if not followed, deem one's research not scientifically trustworthy as qualitative research[27]. These criteria were proposed by the author Guba to improve the legitimacy of qualitative research[28], since that kind of research analyses non-numerical data, such as opinions and interviews, to reach a conclusion[29]. The validity threats are as follows:

• Credibility. This criterion entails that the research process and result are legitimate and can be verified by unaffiliated persons. There are multiple methods of meeting this criterion[28]. For this project it will be defined by how the work compares to previous findings within the same field[28]: earlier studies on AR will be found and discussed.
• Transferability. This criterion states that the results of a research project can be utilized in different time periods and even in different fields from those in which the research took place[28]. This means that this research should be applicable for anyone interested in AR, regardless of how much time has passed since publication.
• Dependability. This criterion states that the research should be reproducible, achieving similar results for anyone following the research process[28]. Guba defined this as extensive documentation of the implementation and any data gathered over the course of the research[28]. This has already been addressed earlier in this chapter.
• Confirmability. This criterion states that the research should be carried out in good faith and objectively[28]. That means that the researcher or any other party should not interfere in order to gain a favorable result. For this research, all the data and results found will be displayed without edits made to them, along with both the positives and the negatives of the results.

How the research abides by these validity threats will be discussed in the Discussion chapter.
3.2 Project Strategy

3.2.1 Triple Constraint

When constructing a project plan, there are three major factors that have to be taken into consideration. These factors are known as the Triple Constraint[30] and consist of:

• Time: The time required to finish the project[30].
• Cost: How many resources are required to finish the project, be it manpower or budget[30].
• Scope: The final result of the project; how much has to be in the project in order for it to be considered completed[30].

Figure 6: Visualisation of a Triple Constraint

These factors have to be balanced during the planning of the project to make sure the process runs steadily. However, trying to fully satisfy all three will only be a detriment to the quality of the final result, which is why one has to decide on two factors to focus on. The more one focuses on a factor, the less margin one has for it[30].

For the development of the prototype, time is the most important factor, as there is a hard deadline for it to be completed. Cost is not a huge factor, since it is a one-man team and more people cannot be hired. Scope is not particularly large, as it is a prototype and does not have to be fully featured. To address the time factor, a list of tasks considered for the prototype was created. These features were then grouped based on priority, and each feature is implemented in the form of iterations. This list is known as a MoSCoW list[31], which will be delved into in the next sub-chapter.
3.2.2 MoSCoW

As mentioned earlier, all tasks were listed and grouped in terms of priority. The reason for this is to determine the order in which tasks are to be completed within the deadline. The groups within a MoSCoW list are:

• Must-Have: The essentials of the project. Without these, the project will not be able to fulfill the requirements set on it and is considered unfinished[31].
• Should-Have: The second-most significant tasks for the project. These tasks do not have to be completed for the project to be considered finished, but improve it if they are[31].
• Could-Have: The least significant tasks. These tasks add little to the overall project and are usually small extras which are added at the end of the project if the budget allows. Not finishing these tasks has little effect on the overall functionality of the project[31].
• Won't-Have: These are tasks that will not be done within the project[31]. There can be multiple reasons for tasks not to be addressed, be it lack of budget, time or manpower within the development team.

By grouping tasks in this way, it becomes clear which tasks to start with first. One can also move tasks into different priority groups depending on the parties' wishes, and unforeseen factors may force changes in order to reach the deadline. For this project, the MoSCoW list turned out like this:

• Must-Have
  – GPS functionality
  – Some sort of graphical AR utilization for navigation
  – Overlay AR on the real world through the camera
• Should-Have
  – Additional AR utilization through audio
  – Information about the destination when reached
• Could-Have
  – Information regarding the surrounding area during the navigation
• Won't-Have
  – Make use of a public API for navigation information
  – Completed user interface

With the list completed, tasks can now be tackled, starting from the top going down.
3.2.3 Waterfall Model

The waterfall model is a development framework that makes use of predetermined phases. One phase is worked on at a time, and one can only move on to the next phase when all tasks in the current phase are completed[32]. A distinct factor of waterfall is that developers are not allowed to go back up to any completed phase[32]; when a phase is deemed complete, it is locked in that state for the rest of the development cycle. A variant of the model that bypasses this rule is also possible, known as the modified waterfall model, which allows one to go back to previous phases for verification and validation[32]. The development cycle for this project made use of the modified waterfall model. Below is a visual representation of the modified waterfall model along with explanations of each phase.

Figure 7: Visualisation of the modified waterfall model for this project, seen as steps 4-8 in Bunge's research method.

• Brainstorm/Validation: This is the part of the project where all parties express their desired features for the final prototype. All ideas are gathered and the impossible ones are filtered out, resulting in a set of decided features the prototype will possess when completed.
• Project Outline/Validation: A description that shows the features to be implemented, their importance and the time required for implementation. The required tools and development environment are also stated here.
• Implementation/Verification: Here is where the implementation happens. Code is written, implementing and testing the features stated in the outline that combined create the complete prototype.
• Customer Feedback/Evaluation: Every week, a meeting with the customer (which is Neava AB) is scheduled to update them on the progress of the prototype: what has been implemented, whether there are any possible changes to be made or new features to be added, etc.
• Testing/Verification: When the prototype is deemed feature-complete, a set of tests is done to determine if it fulfills the assigned objective.
• Deployment/Evaluation: The prototype is deemed complete and deployed to the customer.
3.3 Technical Method

The development environment used for this project is the Unity Engine, specifically version 2019.4.23f1. As previously discussed, this is the development environment used by Neava AB and as such will be used for building this prototype. Unity is a framework which allows a developer to create applications without the need to build many of the tools themselves. This is done with APIs within Unity that can be used in many combinations and even be adapted within one's application to achieve the desired outcomes. Since Unity uses C# for its code, all code will be written in C#.
3.4 Hardware Research

The examiner had an interest in the reliability of the GPS and gyroscope within smartphones. Thus, it was requested to research the accuracy of the values generated and analyse them. The device used is a OnePlus 5 running Android 10. For the actual testing, the application will be started and the device will be taken along a certain route until it reaches a destination. This route consists of:

• a straight path
• a right turn down a slope
• a small descending path
• a right turn
• a straight descending path to the destination

Figure 8: Visualization of the path

When the destination is reached, the program will be terminated and restarted. After the restart, the same route will be taken back to the starting point, following the same process as before. This data will then be visualized in a graph along with the average for both paths. A sketch of such a logging script is shown below.
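The following is a minimal sketch of how such a logging script could look in Unity; the class name GpsLogger and the file name gps_log.txt are illustrative assumptions, not taken from the prototype's source code. It assumes the location service has already been started elsewhere (as in the GPS script shown in Chapter 4).

using System.Collections;
using System.IO;
using UnityEngine;

public class GpsLogger : MonoBehaviour
{
    private StreamWriter writer;
    private float startTime;

    private IEnumerator Start()
    {
        // persistentDataPath is writable on Android and survives app restarts.
        writer = new StreamWriter(Path.Combine(Application.persistentDataPath, "gps_log.txt"));
        startTime = Time.time;
        while (true)
        {
            // Record seconds since departure together with the current coordinates.
            float t = Time.time - startTime;
            writer.WriteLine(t + "\t" + Input.location.lastData.latitude + "\t" + Input.location.lastData.longitude);
            writer.Flush();
            yield return new WaitForSeconds(1f); // one sample per second, as in the test
        }
    }

    private void OnDestroy()
    {
        if (writer != null) writer.Close();
    }
}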
4 System Architecture and Implementation

4.1 System Architecture Diagram

The system architecture diagram is a visual abstraction of the structure of a program, detailing how every component within the program interacts with the others and their purpose[33]. Such diagrams are constructed with:

• Class - A representation of a component within a program[2]. Classes are built of:
  – Name - What the component is identified as[2].
  – Attributes - The variables and their types within the component[2].
  – Operations - The methods and their return types within the component[2].

Figure 9: Visual Example of a class[2]
• Inheritance - A child class that can also use attributes and operations from a parent class[2]. This is indicated with an arrow from the child class to the parent class. In the following image, User and Coach are able to use the attributes and operations of Person within their own classes.

Figure 10: Visual example of Inheritance[2]
Thanks to the high-level abstraction, it is possible for someone with little to no programming knowledge to understand the general structure of the program and the goal of each component. It also helps other non-affiliated programmers, since they might not have experience with the codebase. The diagram for the prototype is showcased below:

Figure 11: System Architecture Diagram for the Prototype (AR Functionality is coded in red). All the classes were created in Unity.
As we can see from the diagram, the CalculateDistance component is the main component: it handles the major functionality and allows the result it generates to be used by other components. For example, the UpdateDistance component takes the value from the result variable, which holds the remaining distance to the next checkpoint, and writes it out as text. For the other connected components:

• The Audio component takes the value from the POI variable in CalculateDistance to understand which checkpoint the user has reached and plays the appropriate sound clip.
• The AssetGyro component uses the value from directionAngle to calculate the correct rotation for the 3D arrow.
• The DestinationInfo component uses CalculateDistance to confirm that the user has reached the destination. If so, it displays information about the location.

CalculateDistance makes use of one component, the GPS component, for obtaining the coordinates for the distance calculations. It also makes use of the PermissionGPS component to allow the program to utilize the GPS hardware on the Android device.

The last component which makes use of other components is the CameraScript component. Its goal is to utilize the camera within the Android device and gain access to the video feed. The components connected to it are CameraPermission and GyroscopeScript. The former is similar to PermissionGPS but for the camera hardware; the latter applies gyroscope functionality to the camera, allowing one to twist and turn the device while the video feed remains stable.
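As a rough sketch of how such a dependency can look in Unity (the component names and the result field follow the diagram above, but the types shown here are our assumptions, not the repository's exact code), UpdateDistance can read the distance computed by CalculateDistance and write it to a UI text element:

using UnityEngine;
using UnityEngine.UI;

// Minimal stand-in for the real component; assumed to expose the computed distance.
public class CalculateDistance : MonoBehaviour
{
    public float result; // remaining distance to the next checkpoint, in metres
}

public class UpdateDistance : MonoBehaviour
{
    public CalculateDistance calculateDistance; // assigned in the Unity Editor
    public Text distanceText;

    void Update()
    {
        // Mirror the computed distance onto the screen every frame.
        distanceText.text = calculateDistance.result.ToString("F0") + " m";
    }
}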
4.1.1 Use Case Diagram

Next is to showcase a real-world scenario for this prototype. For this we will make use of a Use Case Diagram, a high-level abstraction of how a user interacts with a system[34]. It describes what the user is meant to achieve when interacting with the system and how it is achieved in the background. It also shows the relations between the parts of the system[4], but does not explain in detail how the parts interact with each other. The major aspects of a Use Case Diagram are:

• Actors - Representations of the users that will interact with the system[34].

Figure 12: An Actor[3]

• Use Case - A function/task performed by the system[34]. A group of use cases is placed inside a system boundary box to visualise the scope of the program[4].

Figure 13: A use case[3]

• Relations - A line that signifies the relation between use cases and actors[34]. Use cases can have relations between each other as well, signifying how they work together. This does not apply to actors.

Figure 14: An Actor and Use Case connected via a relation[3]

Figure 15: Example of a Use Case Diagram[4]
In the example diagram, we can see there are five actors and four use cases within the system. The Customer has a relation to three use cases: they would want to log in to the shopping site as a member for bonuses or member discounts, view the items the shop has to offer, and make purchases if interested. The PayPal actor handles the transactions between the user and the shop, hence its relation to the "Complete Checkout" use case; this also applies to the Credit Payment Service actor. The Identity Provider actor makes sure the Customer gets the correct items via the database and that the transactions are correct. The Authentication actor is a service that checks that the login information is correct for the user to gain access to their account, along with checking everything else in regards to items and purchases.

For this project, the use case diagram became like this:

Figure 16: Use Case Diagram for the Prototype
4.2 Final Prototype

Figure 17: The Final Prototype

The final prototype has a layout with three elements:

• A button that allows the user to change Point of Interest, which will be explained shortly
• A pointer arrow to showcase which direction to head towards
• A distance counter, indicating how close one is to the Point of Interest

In the source code, a list of GPS coordinates has been hard-coded. This list consists of a path from the start location to the destination, with points in between to assist the guiding. The supervisor called these points Points of Interest (POI), which is also why the button on the screen is named "Next Point of Interest". When the program has started, a voice clip plays, indicating to the user that the journey has begun. They will then be guided to the first POI. When the distance meter reads close to 0 m, the user presses the "Next Point Of Interest" button, and the pointer will now point to the next POI. A voice clip is also played to indicate which direction to take to reach the next POI. This process is repeated until the final POI, the destination, is reached. When the button is pressed there, a new scene with text appears. This is supposed to contain text that describes the destination, but for this prototype, dummy text was used to present the idea. The prototype is built with a combination of APIs available in the Unity framework and self-written code tailored for this project. Within this chapter, important bits of the code that create each component will be discussed. If one is interested in accessing the full code, a GitHub link is provided here: GitHub Link[35]
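As a hypothetical illustration of how such a hard-coded list can be expressed (the coordinates and the class name below are placeholders, not the values used in the prototype):

using UnityEngine;

public class PoiList : MonoBehaviour
{
    // Latitude/longitude pairs from the start, through each checkpoint, to the destination.
    public Vector2[] pointsOfInterest =
    {
        new Vector2(59.3293f, 18.0686f), // start (placeholder coordinates)
        new Vector2(59.3295f, 18.0690f), // intermediate POI
        new Vector2(59.3298f, 18.0695f)  // destination
    };

    public int currentIndex = 0;

    // Hooked up to the "Next Point Of Interest" button.
    public void NextPoi()
    {
        if (currentIndex < pointsOfInterest.Length - 1)
            currentIndex++;
    }
}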
4.2.1 Camera

For the camera, a combination of Unity's Camera API and regular assets allowed a live video feed to be displayed. First of all, in order to be granted access to the camera, permission has to be obtained from the device via a check. This is done with a CameraPermission script, which checks on bootup whether the program has been granted camera access by Android. If not, the permission will be requested and the user will need to confirm that the application is allowed to use the camera. When confirmed, the application will be able to utilize the camera to gain a video feed. This confirmation only has to be done on the first boot if accepted; otherwise the request will reappear on each subsequent boot. A sketch of such a check is shown below, followed by the actual camera script.
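The following is a minimal sketch of the permission check described above, assuming Unity's Android Permission API; the actual CameraPermission script in the prototype may differ.

using UnityEngine;
using UnityEngine.Android;

public class CameraPermission : MonoBehaviour
{
    void Start()
    {
        // On bootup, ask Android for camera access if it has not been granted yet.
        if (!Permission.HasUserAuthorizedPermission(Permission.Camera))
        {
            Permission.RequestUserPermission(Permission.Camera);
        }
    }
}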
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class CameraScript : MonoBehaviour
{
    int currentCamIndex = 0;
    public RawImage display;
    public Text startStopText;
    public Quaternion baseRotation;
    WebCamTexture tex;

    private void Start() {
        WebCamDevice device = WebCamTexture.devices[currentCamIndex];
        tex = new WebCamTexture(device.name);
        display.texture = tex;
        tex.Play();
    }

Listing 2: Activating the Camera
Next, in order to grab the camera feed, a script is created. This script has two global variables: currentCamIndex, an int that indexes which camera to use (every camera has a pointer that is stored within an array); this index is set to 0, as that points to the back camera on our test device. The other variable is of the RawImage class, which is a canvas the video stream is overlaid on. When the script starts, a WebCamDevice object, which allows control of a camera module in the hardware, is created. For hardware with multiple cameras, it is required to specify the specific camera to be used, which is what the currentCamIndex variable is for. After that, an instance of the WebCamTexture class called tex acts like a background for the video feed to overlay on[36]. This texture is then overlaid onto the canvas of the display object by changing the texture property of the class. Afterwards, the Play function within the WebCamTexture class is called, activating the camera and showing the video feed.

Now the camera feed will be shown on the screen, but the orientation will be incorrect. This is corrected in the Update function. Here we make use of the orient variable, which is the negative value of the current camera angle, giving the correct orientation. The feed is then turned to the correct angle via the localEulerAngles property.
    void Update() {
        int orient = -tex.videoRotationAngle;
        display.rectTransform.localEulerAngles = new Vector3(0, 0, orient);
    }
}

Listing 3: Correcting the Orientation
4.2.2 GPS

To gain access to the GPS component, just like for the camera, permission has to be granted. The same process as for the camera applies here, but a different script, called PermissionGPS, is used, as the permissions required are for the GPS. Afterwards, in a different script called GPS, a few global variables are created. Two of these hold the values extracted from the GPS, and the last one is an instance of the class itself that can be accessed by other classes. They are latitude and longitude (floats that hold the respective values received from the GPS component) and a static GPS Instance. When the script runs for the first frame, the instance is initialized and the DontDestroyOnLoad function is called on it, to make sure nothing happens to the asset holding this script when shifting between scenes[37].
public class GPS : MonoBehaviour
{
    public static GPS Instance { set; get; }
    public float latitude;
    public float longitude;

    private void Start() {
        Instance = this;
        DontDestroyOnLoad(gameObject);
        StartCoroutine(StartLocationService());
    }

    private IEnumerator StartLocationService()
    {
        if (!Input.location.isEnabledByUser) {
            //Debug.Log("User has not enabled GPS Permissions");
            yield break;
        }

        Input.location.Start();

        int maxwait = 1;
        while (Input.location.status == LocationServiceStatus.Initializing && maxwait > 0) {
            yield return new WaitForSeconds(1);
            maxwait--;
        }

        // Give up only if the service is still initializing after the wait.
        if (maxwait <= 0 && Input.location.status == LocationServiceStatus.Initializing) {
            //Debug.Log("Timed Out");
            yield break;
        }

        if (Input.location.status == LocationServiceStatus.Failed) {
            //Debug.Log("Unable to determine device location");
            yield break;
        }

        latitude = Input.location.lastData.latitude;
        longitude = Input.location.lastData.longitude;
        digitalFilterGPS();
        yield break;
    }

Listing 4: Obtaining GPS Coordinates
    private void digitalFilterGPS() {
        int counter = 0;
        int maxcount = 10;
        float d = 0.5f;
        while (counter < maxcount)
        {
            float newLatitude = Input.location.lastData.latitude;
            float newLongitude = Input.location.lastData.longitude;
            float filteredLatitude = latitude + d * (newLatitude - latitude);
            float filteredLongitude = longitude + d * (newLongitude - longitude);
            latitude = filteredLatitude;
            longitude = filteredLongitude;
            counter++; // advance the counter so the loop terminates after maxcount iterations
        }
    }

Listing 5: Filtering the GPS Coordinates
After that, a function called "StartLocationService" is called. This function is
the main backbone for accessing the GPS and extracting information from it.
The function waits for up to 1 second for the location service to initialize. Once it
has, it collects the current coordinates in the form of latitude and longitude. These
values are then stored separately in the previously created variables. To make sure
the values are not influenced by any interference caused by sudden movements of the
device, they are passed through another function called digitalFilterGPS. This function
was conceived by the supervisor. It repeatedly extracts the next set of values to come
from the GPS and makes use of the d variable (which has to be less than 1) multiplied
by the difference between the new value and the old value. Subtracting the values gives
the difference between the readings, and multiplying by d makes that step smaller. This
is done a number of times in a loop in order to make the difference as small as possible.
One of the main uses for these values is to calculate the remaining distance between
the user and the destination. This is done within the self-created CalculateDistance
script, by interpreting the start and the destination as two points on a grid. With
that idea in mind, the path can be seen as the hypotenuse of a right triangle, which
allows us to make use of known geometry such as the Pythagorean theorem and the
distance formula. Utilizing these formulas with radian values (working with degrees
led to incorrect results) also allows one to calculate the angle between the user
and the destination. This value is then sent to the arrow pointer, making sure it
always points towards the destination. A sketch of this calculation is given below.
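The following is a minimal sketch of the distance and angle calculation described above, assuming a flat-grid (equirectangular) approximation that is reasonable over walking distances; the class and method names are illustrative, and the actual CalculateDistance script may differ in its details.

using UnityEngine;

// Illustrative sketch only; not the thesis' actual CalculateDistance script.
public static class DistanceSketch
{
    const float EarthRadius = 6371000f; // metres

    // Convert the coordinate differences to radians and treat them as the
    // legs of a right triangle; the hypotenuse is the remaining distance.
    public static float Distance(float lat1, float lon1, float lat2, float lon2)
    {
        float dLat = (lat2 - lat1) * Mathf.Deg2Rad;
        float dLon = (lon2 - lon1) * Mathf.Deg2Rad * Mathf.Cos(lat1 * Mathf.Deg2Rad);
        return EarthRadius * Mathf.Sqrt(dLat * dLat + dLon * dLon);
    }

    // The same triangle gives the angle towards the destination, converted
    // back to degrees so it can be sent to the arrow pointer.
    public static float Angle(float lat1, float lon1, float lat2, float lon2)
    {
        float dLat = (lat2 - lat1) * Mathf.Deg2Rad;
        float dLon = (lon2 - lon1) * Mathf.Deg2Rad * Mathf.Cos(lat1 * Mathf.Deg2Rad);
        return Mathf.Atan2(dLon, dLat) * Mathf.Rad2Deg;
    }
}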
4.2.3 Gyroscope
public class GyroScopeScript : MonoBehaviour
{
    public static GyroScopeScript Instance { get; set; }
    private bool gyroEnabled;
    public Gyroscope gyro;
    public string gyroStats;
    private GameObject cameraContainer;
    private Quaternion rot;

    private void Start() {
        cameraContainer = new GameObject("Camera Container");
        cameraContainer.transform.position = transform.position;
        transform.SetParent(cameraContainer.transform);
        Instance = this;
        gyroEnabled = EnableGyro();
    }

    private bool EnableGyro() {
        if (SystemInfo.supportsGyroscope) {
            gyro = Input.gyro;
            gyro.enabled = true;
            // Align the container so the gyroscope attitude maps onto Unity's axes.
            cameraContainer.transform.rotation = Quaternion.Euler(90f, 90f, 0f);
            rot = new Quaternion(0, 0, 1, 0);
            return true;
        }
        return false;
    }

    private void Update() {
        if (gyroEnabled) {
            transform.localRotation = gyro.attitude * rot;
            gyroStats = gyro.attitude.eulerAngles.ToString();
        }
    }
}
Listing 6: Activating the Gyroscope
As for the gyroscope, no permissions are required to gain access to it, so no
permission script was created for it. Just two scripts are used for the gyroscope
functionality of this application. The first script, simply called GyroScopeScript,
handles the main gyroscope functionality on the device. The script checks if
the device has gyroscope support and, if so, enables it through the API and
adjusts the rotation so that it is oriented correctly with the user. This rotation
is then constantly updated in the Update function.

The other script is called AssetGyro. This script applies the gyroscope readings
to the arrow pointer, allowing the user to move the device and have the arrow
adjust itself based on the orientation of the phone. This is what allows the arrow
to always point at the destination regardless of how the phone is positioned. To
achieve this, the path from start to finish is imagined as a triangle, with the
start representing point A and the destination representing point B. From
there, one can calculate the hypotenuse between the two points and, from that,
the angle. A sketch of this idea follows.
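Since the AssetGyro source is not listed in this report, the following is only a minimal sketch of the idea under stated assumptions: the arrow is held at the world-space bearing computed by the distance calculation, so the device's own rotation (already applied to the camera by GyroScopeScript) does not drag the arrow with it. All names here are illustrative.

using UnityEngine;

// Illustrative sketch; the actual AssetGyro script may differ.
public class AssetGyroSketch : MonoBehaviour
{
    public Transform arrow;        // the 3D arrow pointer asset
    public float bearingToTarget;  // degrees towards the destination

    void Update()
    {
        // Fix the arrow's rotation in world space at the target bearing, so
        // rotating the phone (and thus the camera) leaves the arrow pointing
        // at the destination.
        arrow.rotation = Quaternion.Euler(0f, bearingToTarget, 0f);
    }
}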
4.2.4 Audio
public class Audio : MonoBehaviour
{
    public static Audio Instance { get; set; }
    public AudioSource source;
    public AudioClip clip;
    public AudioClip straight;
    public AudioClip left;
    public AudioClip right;
    public AudioClip start;
    public AudioClip soon;
    public AudioClip goal;
    public bool played;

    void Start() {
        played = false;
        playSound(start);
    }

    public void playSound(AudioClip status) {
        source.PlayOneShot(status);
    }

    void Update() {
        // Play the clip matching the current checkpoint (POI) exactly once.
        if (CalculateDistance.Instance.POI == 1 && CalculateDistance.Instance.played == false) {
            playSound(straight);
            CalculateDistance.Instance.played = true;
        }
        if (CalculateDistance.Instance.POI == 2 && CalculateDistance.Instance.played == false) {
            playSound(right);
            CalculateDistance.Instance.played = true;
        }
        if (CalculateDistance.Instance.POI == 3 && CalculateDistance.Instance.played == false) {
            playSound(left);
            CalculateDistance.Instance.played = true;
        }
        if (CalculateDistance.Instance.POI == 4 && CalculateDistance.Instance.played == false) {
            playSound(soon);
            CalculateDistance.Instance.played = true;
        }
        if (CalculateDistance.Instance.POI == 5 && CalculateDistance.Instance.played == false) {
            playSound(goal);
            CalculateDistance.Instance.played = true;
        }
    }
}
Listing 7: Handling Audio
For the audio clips, only one simple script is used. The script, simply called
Audio, handles which audio clip to play to assist in directing the user.
This script holds a global variable for every voice clip and a bool indicating
whether the clip for the current checkpoint has been played. The script starts
by setting played to false and playing the starting voice line. For every
checkpoint the user reaches, a voice clip is played to indicate which direction
to head next. When the destination has been reached, the voice clip announcing
that the destination is reached is played.
4.3 Component Research

In order to gather data on the GPS and gyroscope components, a script was
created that continuously grabs the time elapsed since application boot-up and
the data from the components. This is written into a text file every second
until the application terminates.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using System.IO;
using System;

public class WriteToTextFile : MonoBehaviour
{
    string filePath;
    public DateTime startTime;

    private void Start() {
        InvokeRepeating("writeToFile", 0.98f, 1.0f);
        startTime = DateTime.Now;
    }

    void writeToFile() {
        filePath = Application.persistentDataPath + "/" + "data.txt";
        // The second argument enables append mode so lines accumulate.
        StreamWriter writer = new StreamWriter(filePath, true);
        TimeSpan currentTime = System.DateTime.Now - startTime;
        writer.WriteLine(currentTime.Minutes.ToString() + ":" + currentTime.Seconds.ToString() + ";" +
            GPS.Instance.latitude.ToString() + ";" + GPS.Instance.longitude.ToString() + ";" +
            GyroScopeScript.Instance.gyro.attitude.ToString());
        writer.Close();
    }
}
Listing 8: Gathering Data
The script starts by creating two variables, one holding the file path for the
text file and the other holding the current time and date.
In the Start function, the writeToFile function is scheduled to be called 0.98
seconds after startup and then repeatedly every second, and the startTime variable
is set to the current time and date.

The writeToFile function is the main function in the script and does all
the work. It starts by assigning the filePath variable to a set directory within
the Android filesystem, with the appended data.txt defining the filename for
the text file. Afterwards, a new instance of a StreamWriter is instantiated.
This class allows us to write text to the text file. The class takes two
parameters, one identifying which file to write to and the other enabling append
mode so the file can be written to continuously. To find out how long the
application has been running, a currentTime variable is created which holds the
difference between the current time (always counting) and startTime (a set value).
Lastly, the function does a WriteLine call that writes the current time, latitude,
longitude and gyroscope values to the text file.
5 Results

This chapter showcases the results of the findings based on the main
and sub questions in Chapter 3. It also showcases the final prototype and
briefly explains how it works. Section 5.1 covers the previous applications of
AR navigation, Section 5.2 the future development of AR, Section 5.3 the final
prototype, and Section 5.4 the results of the hardware research.
5.1 Sub Question 1: Previous Applications of AR Navigation

Utilizing AR for navigation has been done in the automotive industry. In the
past few years from the date this is written, car manufacturers such as Mercedes,
GMC and Jaguar [38], along with navigation companies such as Sygic [39], have
applied the technology to overlay navigation information in the view of the driver.
Mercedes first implemented the technology in their cars with the 2020 Mercedes-Benz
GLE450. Jaguar has been exploring the technology since 2014, and GMC has
applied it to their 2020 Sierra HD to allow the driver to see behind the trailer.
5.2 Sub Question 2: Future Developments in AR

There has been research on applying AR to various fields and how it can
assist in the success within those fields. One research paper proposed possible
future applications of AR within the medical field. It states the possibility of
utilizing AR for patients with an impaired sense by strengthening their remaining
functional senses [40]. The possible scenarios proposed were visual aid to
strengthen aural information for hearing-impaired patients, and vice versa for
vision-impaired patients (strengthening visual information with aural aid).
5.3 Sub Question 3: Final Prototype

A final prototype was developed that is able to showcase the practicality of
Neava's proposal. It is able to guide the user from their current position to a
preset destination through a set of checkpoints, guiding the user bit by bit.
When the user has reached the destination, text pops up that details the
location. It makes use of GPS to track the whereabouts of the user and determine
how to guide them to the next checkpoint, along with a 3D arrow to visually
showcase where to head next.
5.4 Sub Question 4: Hardware Research

Earlier in the report, the path and the data-measuring process for the test were
detailed. The test was done as planned and the gathered data was processed by
finding the averages and comparing them for both traversals. A possible fault
with this approach is recording inconsistent values, as the device was held by
hand. This can result in somewhat shaky values, since the hand goes through small
movements during the traversal. This is addressed by also showcasing the average
of the recorded values, mitigating the impact possible false values have on the
final result. Attached below are the results of the tests conducted to measure
the reliability of the hardware, for both the GPS and the gyroscope. Unity stores
information about the state of the gyroscope within the X, Y and Z axes.
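Before the figures, a rough illustration of the averaging step: the following standalone sketch averages the latitude and longitude columns of the data.txt log produced by WriteToTextFile. It assumes the semicolon-separated "time;latitude;longitude;gyro" line format shown in Listing 8 and is not part of the prototype itself.

using System;
using System.Globalization;
using System.IO;
using System.Linq;

// Illustrative post-processing sketch, run off-device on the collected log.
public static class AverageFromLog
{
    public static void Main()
    {
        var lines = File.ReadAllLines("data.txt");
        // Column 1 holds latitude, column 2 longitude (column 0 is the time).
        var latitudes = lines.Select(l => float.Parse(l.Split(';')[1], CultureInfo.InvariantCulture));
        var longitudes = lines.Select(l => float.Parse(l.Split(';')[2], CultureInfo.InvariantCulture));
        Console.WriteLine($"Average latitude:  {latitudes.Average()}");
        Console.WriteLine($"Average longitude: {longitudes.Average()}");
    }
}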
Figure 18: Latitude values towards Destination. Orange line denotes the overall average.
Figure 19: Longitude values towards Destination. Orange line denotes the overall average.
Figure 20: Latitude values returning back. Orange line denotes the overall average.
Figure 21: Longitude values returning back. Orange line denotes the overall average.
Figure 22: Gyroscope values on the X-Axis when heading towards the destination
Figure 23: Gyroscope values on the Y-Axis when heading towards the destination
Figure 24: Gyroscope values on the Z-Axis when heading towards the destination
Figure 25: Gyroscope values on the X-Axis when returning back
Figure 26: Gyroscope values on the Y-Axis when returning back
Figure 27: Gyroscope values on the Z-Axis when returning back
6 Discussion

This chapter consists of the discussion of all aspects of the research and
shows how it passes the validity threats. Section 6.1 discusses how AR has
been utilized previously, Section 6.2 discusses future applications where AR can
potentially be utilized in other fields, Section 6.3 discusses thoughts about the
final prototype regarding its development, Section 6.4 discusses the results of
the hardware research, and Section 6.5 discusses the validity threats and how
they are met, with Sections 6.5.1-6.5.4 going through each validity threat.
6.1 Previous Applications of AR Navigation

This is an amazing development in terms of navigation in vehicles. It can bring
great advantages for users without them having to resort to external tools, easing
accessibility for customers interested in the feature. Drivers are also able to
dedicate more focus to the road ahead of them instead of looking to the side
from time to time at a traditional navigation system. It could even be expanded
further by showcasing information regarding objects and buildings in the view
of the driver, such as wildlife if one is driving in a nature-filled area or historical
buildings within a city. One would have to account for the driver's safety and
not take attention away from the road ahead; such information should be
presented via audio clips.
6.2 Future Developments in AR

It is interesting to see how the medical field has been attempting to utilize AR
technology in assistance tools for disabled patients. Being able to augment certain
parts of their reality to the point that their disability is less of a hindrance
in their daily lives is a gigantic feat, and it could possibly lead to practical
replacements for patients that have gone through traumatic events and can no
longer recover a lost sense. There are many potential applications for AR in the
medical field that have not been explored yet, and those possibilities
could make a breakthrough within the field.
6.3 Final Prototype
The prototype was an interesting application to work on over the course of the
project. Having to work in a completely new environment with, in terms of
experience, new technology was somewhat terrifying at first, but thanks to Neava AB
providing an introductory class for Unity, the transition to Unity was a smoother
experience. Writing in C# for the first time was thankfully not difficult to get
into, due to its syntax being similar to languages used before. Testing builds on
the target device was also quite easy, only requiring a build and deploy while the
phone was connected to the computer via USB.
The most difficult part of building the prototype was utilizing the hardware
components. In order to gain access to the GPS and camera, one has to gain
permission to access and manipulate data from these components by sending a
request to the Android OS which the user accepts. This part is easy and only has to
be done once. However, making sure the components did their tasks as desired
and returned the wanted results was more than cumbersome. A lot of research
online had to be done to figure out how to even output the camera feed onto the
display of the phone. The GPS component had to be researched online as well,
to figure out how to access the coordinate values generated and pass them to other
components. For the 3D arrow, having it realign correctly in accordance with the
phone's position was a lot of work, as the starting position of the arrow was
inconsistent when the program booted, but this was eventually fixed.
When testing the prototype during development, utilizing the 3D arrow for
direction, overlaid on the real world, made it quite clear where to head.
There was never any doubt about when to turn or continue straight ahead. It was
also clear when the destination had been reached. With the 3D arrow and the
voice clips working in tandem, the experience felt more solid.
Overall, the project development had many hardships and challenges that
had to be overcome for it to reach completion. Many hours consisted of pure
trial and error with some components, making sure the right values were extracted
and that all the components worked together correctly, as in one component's
actions triggering an action from another component in the desired scenarios.
6.4 Hardware Research

The results for both routes were somewhat identical but mirrored in where they
intersect with the average, which is to be expected if the GPS is consistent.
The latitude graphs intersect the average line twice, about 1/3 and 2/3 of the
way into the graph for both routes. This shows that we get similar values
regardless of which point we begin from, start or destination. The same applies
to the longitude values, where both intersect their average lines at about the
half-way point. There are small deviations in the values between the two routes
as well. These can be attributed to small changes in the path back, but overall
the GPS component seems to be quite reliable in tracking one's location.

As for the gyroscope, the results were a lot less sporadic on the return compared
to when heading towards the destination. This is interesting, as the same path was
taken for both measurements. The X and W axes in particular do not have large
deviations from the average, while the Y and Z axes have deviations similar to
all the deviations seen when heading towards the destination. This could be the
result of the component being sensitive to the small hand movements that happen
while one is walking. The fact that the two sets of graphs are quite different
from each other overall indicates a small lack of reliability in the component,
but it could be possible to address this in software with code that filters out
such movements, as sketched below.
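A minimal sketch of such filtering, assuming a simple exponential smoothing of the gyroscope attitude; the smoothing factor and class name are illustrative rather than a tested solution.

using UnityEngine;

// Illustrative sketch of damping small hand movements in software.
public class GyroSmoothingSketch : MonoBehaviour
{
    [Range(0f, 1f)]
    public float smoothing = 0.1f; // lower values damp hand shake harder
    private Quaternion filtered = Quaternion.identity;

    void Start()
    {
        Input.gyro.enabled = true;
    }

    void Update()
    {
        // Move only a fraction of the way towards the newest reading, so
        // quick jitters average out while deliberate turns still register.
        filtered = Quaternion.Slerp(filtered, Input.gyro.attitude, smoothing);
        transform.localRotation = filtered;
    }
}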
6.5 Sub Question 5: Validity Threats

As explained earlier, in order to strengthen the legitimacy of the research
process, it has to pass a set of criteria. These criteria are listed below along
with explanations of how each criterion is met.
6.5.1 Credibility

This criterion states that a research process has to be verifiable by those
unaffiliated with the research, by finding similar results in previous studies
within the same field. This means that the research process and results are
acknowledged if one can find similar results in other studies. In this case,
there have been documented applications of AR that resulted in improvements
within their fields, some of which were covered within this thesis.
6.5.2 Transferability

This criterion states that the research results should be usable at other points
in time and in other fields than the one the research was conducted in. The
results from this research can be used as a stepping stone for those wishing to
apply AR to their product or idea, regardless of when that time comes. This
research could also be used by researchers or developers in fields that are not
related to navigation, giving them a general overview of the technology and how
it was applied to navigation among other fields.
6.5.3 Dependability

This criterion states that anyone should be able to recreate the research steps
and achieve similar results. In order for this to be possible, every step of the
research process has been described extensively along with the motivations for the
chosen approach. The prototype is also extensively documented, having a dedicated
chapter which details every method used within it along with source code.
6.5.4 Confirmability

The final criterion states that the research should be carried out objectively and
in good faith, meaning that the process and results should not be tampered with
in order to reach a certain conclusion. The goal of this research is to find out
if AR can be used to improve the user experience. This means that regardless of
whether the results argue for or against AR, the thesis will still serve as a
useful source for anyone interested in looking into the technology for any reason.
7 Conclusion

7.1 Conclusion

The goal of this thesis was to explore the possibility of making use of AR
to enhance the user experience, at the request of Neava AB. To test this, a
prototype of a possible future application was created. This prototype was
created with the main functionality requested by Neava AB implemented, which
includes navigation from point A to point B with visual and audio assistance.
Previous implementations of AR, along with future applications currently being
researched, were looked into as well, including how AR was utilized in those
scenarios.
7.1.1 Research Question

In conclusion, utilizing AR in different fields has been shown to have a positive
outcome in terms of effectiveness. It has helped in allowing access to more
information at one time without having to compromise the user's current practices.
It could even possibly be used to alleviate handicaps people have previously not
been able to recover from. It seems that with the evolution of AR technology,
its benefits will grow stronger and stronger, to the point where one's everyday
life could be surrounded by AR.
7.1.2 Final Prototype

The final prototype presented the possibility for such an application to be of
commercial value. Companies such as Google are experimenting with the technology,
and great development in the field is possible in the years to come. Google also
has a huge advantage with their backlog of geographical information from Google
Maps and Google Earth. For them, getting the correct geographical information is
a lot easier due to their database. This can become an issue for smaller companies
wanting to achieve the same scope.
There are features that were to be added to the prototype itself but had to be
cut due to time constraints. One of them was for checkpoints to update
automatically and have the voice lines play accordingly. This could not happen
due to inexperience with the framework. Another was a virtual trail on the device
screen. This trail would show the path the user should take in order to reach the
destination. It would be immensely helpful for the experience, so it was
unfortunate that it could not be implemented. When this idea is expanded into a
full-fledged application, the previously mentioned features are ideas one could
try to get working. The GPS has also proven to be reliable when no external forces
can interfere. This bodes well for its initial reliability, if one wants to use
it for any sort of development. Sadly, the same could not be said for the
gyroscope, as it was shown to be a bit more inconsistent.
7.2 Future Plans

For the future of this app, it would be interesting to continue working on it
with Neava AB after this thesis is complete, but at the same time it would also
be fun to take the experiences gained from this project and work on different
applications with a similar base. There are many ideas out there which have not
materialized yet, so being able to take those opportunities would be exciting.
References

[1] A. Sjögren. (2021) Plantumlspecbunge. [Online]. Available: https://kth-my.sharepoint.com/personal/as_ug_kth_se/_layouts/15/onedrive.aspx?id=%2Fpersonal%2Fas%5Fug%5Fkth%5Fse%2FDocuments%2FExjobb%2FExamensarbete%20TIDAB%2FPlantUML%2FplantumlSpecBunge%2Epng&parent=%2Fpersonal%2Fas%5Fug%5Fkth%5Fse%2FDocuments%2FExjobb%2FExamensarbete%20TIDAB%2FPlantUML&originalPath=aHR0cHM6Ly9rdGgtbXkuc2hhcmVwb2ludC5jb20vOmk6L2cvcGVyc29uYWwvYXNfdWdfa3RoX3NlL0VYSG5FcnRpbWU9endmdWl1LWEyVWc
[2] K. Fergusson. (2018) Uml class diagrams in draw.io. [Online]. Available: https://drawio-app.com/uml-class-diagrams-in-draw-io/
[3] S. Systems. (2021) Use case diagram. [Online]. Available: https://sparxsystems.com/enterprise_architect_user_guide/14.0/model_domains/usecasediagram.html
[4] Lucidchart. (2021) Uml use case diagram tutorial. [Online]. Available: https://www.lucidchart.com/pages/uml-use-case-diagram
[5] A. Turner. (2021) How many smartphones are in the world? march 2021 mobile user statistics: Discover the number of phones in the world and smartphone penetration by country or region. [Online]. Available: https://www.bankmycell.com/blog/how-many-phones-are-in-the-world#part-1
[6] A. Wireless. (2020) Importance of smartphones in daily life. [Online]. Available: https://www.assistwireless.com/importance-of-smartphones-in-daily-life/
[7] R. Panko, "The popularity of google maps: Trends in navigation apps in 2018," 2018. [Online]. Available: https://themanifest.com/mobile-apps/popularity-google-maps-trends-navigation-apps-2018#:~:text=Over%20three%2Dfourths%20(77%25)%20of%20smartphone%20owners%20use%20navigation,comes%20in%20second%20at%2012%25
[8] A. He. (2019) People continue to rely on maps and navigational apps. [Online]. Available: https://www.emarketer.com/content/people-continue-to-rely-on-maps-and-navigational-apps-emarketer-forecasts-show
[9] S. R. Department. (2018) Most popular mapping apps in the united states as of april 2018, by monthly users (in millions). [Online]. Available: https://www.statista.com/statistics/865413/most-popular-us-mapping-apps-ranked-by-audience/
[10] Google. (2021) Use live view on google maps. [Online]. Available: https://support.google.com/maps/answer/9332056?co=GENIE.Platform%3DAndroid&hl=en
[11] I. Tkachenko. (2020) How to create a location-based app for android and ios. [Online]. Available: https://theappsolutions.com/blog/development/develop-app-with-geolocation/
[12] A. Hayes. (2018) Augmented reality. [Online]. Available: https://www.investopedia.com/terms/a/augmented-reality.asp
[13] T. F. Institute. What is augmented reality?
[14] R. Nightingale. (2017) How does google maps work? [Online]. Available: https://www.makeuseof.com/tag/technology-explained-google-maps-work/
[15] P. Pawar. (2020) This is how google maps works. [Online]. Available: https://www.priteshpawar.com/how-google-maps-works/technology-explained/priteshpawar/
[16] H. Stark, "Since you asked, here's how google maps really works," 2017. [Online]. Available: https://www.forbes.com/sites/haroldstark/2017/04/26/since-you-asked-heres-how-google-maps-really-works/?sh=3e4570ea4dbe
[17] GeotabTeam. (2020) What is gps? [Online]. Available: https://www.geotab.com/blog/what-is-gps/
[18] C. Hall. (2021) What is google maps ar navigation and live view and how do you use it? [Online]. Available: https://www.pocket-lint.com/apps/news/google/147956-what-is-google-maps-ar-navigation-and-how-do-you-use-it
[19] T. E. of Encyclopaedia Britannica. (2021) Latitude and longitude. [Online]. Available: https://www.britannica.com/science/latitude
[20] techahead. (2021) How does a gyroscope sensor work in your smartphone? [Online]. Available: https://www.techaheadcorp.com/knowledge-center/how-gyroscope-sensor-work-in-smartphone/
[21] D. B. Yan Michalevsky. (2014) Gyrophone: Recognizing speech from gyroscope signals. [Online]. Available: https://www.usenix.org/conference/usenixsecurity14/technical-sessions/presentation/michalevsky
[22] Member23999. (2013) Gyroscope. [Online]. Available: https://learn.sparkfun.com/tutorials/gyroscope/all
[23] M. Berry. (2021) What's inside my smartphone camera: Hardware-software unity explained. [Online]. Available: https://fossbytes.com/whats-inside-a-smartphone-camera-components-explained/
[24] L. Schardon. (2021) Making your dream games: What is unity? [Online]. Available: https://gamedevacademy.org/what-is-unity/#What_is_Unity
[25] S. L. Alexander S. Gillis. (2021) Definition: object-oriented programming (oop). [Online]. Available: https://www.techtarget.com/searchapparchitecture/definition/object-oriented-programming-OOP
[26] N. Andersson and A. Ekholm, "Vetenskaplighet - utvärdering av tre implementeringsprojekt inom it bygg & fastighet 2002," 2002.
[27] A. K. Shenton. (2004) Strategies for ensuring trustworthiness in qualitative research projects. [Online]. Available: https://www.researchgate.net/publication/228708239_Strategies_for_Ensuring_Trustworthiness_in_Qualitative_Research_Projects
[28] D. Pickell. (2013) Qualitative vs quantitative data - what's the difference? [Online]. Available: https://learn.g2.com/qualitative-vs-quantitative-data
[29] P. Bhandari. (2020) An introduction to qualitative research. [Online]. Available: https://www.scribbr.com/methodology/qualitative-research/
[30] L. Bogdevic. (2018) Project management: What is the 'triple constraint' model? [Online]. Available: https://www.skillpoint.uk.com/triple-constraint-model/
[31] A Guide to the Business Analysis Body of Knowledge. International Institute of Business Analysis, 2009. [Online]. Available: https://www.academia.edu/6555031/A_Guide_to_the_Business_Analysis_Body_of_Knowledge_BABOK_Guide_Version_2_0
[32] E. Conrad, S. Misenar, and J. Feldman, "Chapter 4 - domain 4: Software development security," in Eleventh Hour CISSP (Second Edition), second edition ed., E. Conrad, S. Misenar, and J. Feldman, Eds. Boston: Syngress, 2014, pp. 63-76. [Online]. Available: https://www.sciencedirect.com/science/article/pii/B9780124171428000042
[33] J. Freeman. (2021) Complete guide to architecture diagrams. [Online]. Available: https://www.edrawsoft.com/architecture-diagram.html
[34] IBM. (2021) Use cases. [Online]. Available: https://www.ibm.com/docs/en/rational-soft-arch/9.6.1?topic=diagrams-use-cases
[35] D. N. Moini. (2021) Codebase for the prototype. [Online]. Available: https://github.com/danielzx2/AR-Navigation-Bachelor-Project
[36] U. Team. (2021) Webcamtexture. [Online]. Available: https://docs.unity3d.com/ScriptReference/WebCamTexture.html
[37] ——. (2021) Dontdestroyonload. [Online]. Available: https://docs.unity3d.com/ScriptReference/Object.DontDestroyOnLoad.html
[38] B. Cooley. Augmented reality is coming to your car. [Online]. Available: https://www.cnet.com/roadshow/news/augmented-reality-is-coming-to-your-car/
[39] M. Altaweel. (2021) Augmented reality and computer vision in navigation. [Online]. Available: https://www.gislounge.com/augmented-reality-and-computer-vision-in-navigation/
[40] A. O. Alkhamisi, S. Arabia, M. M. Monowar et al., "Rise of augmented reality: Current and future application areas," International Journal of Internet and Distributed Systems, 2013.
TRITA-EECS-EX-2022:35
www.kth.se