
A

PROJECT REPORT
ON
A Face Recognition system for Android Mobile Phones
Submitted in partial fulfilment for the award of the degree of

BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE & ENGINEERING
By

JANGITI PINKY - 17Q91A05M0

SAI SUMANTH - 16Q91A0577

DINESH PALVANCHA - 16E31A0594

SHRADDHA BISWAS - 15Q91A05L3

Under the guidance of


Mr. SYED RASHEED UDDIN
Assistant Professor, Dept. of CSE

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


MALLA REDDY COLLEGE OF ENGINEERING
(Approved by AICTE-Permanently Affiliated to JNTU-Hyderabad)
Accredited by NBA & NAAC, Recognized under section 2(f) & 12(B) of UGC New Delhi, ISO
9001:2015 certified Institution
Maisammaguda, Dhulapally (Post via Kompally), Secunderabad- 500100
2020 - 2021
MALLA REDDY COLLEGE OF ENGINEERING
(MALLA REDDY GROUP OF INSTITUTIONS)
(Approved by AICTE- Permanently Affiliated to JNTU Hyderabad) Accredited by NBA & NAAC,
Recognized under section 2(f) &12(B) of UGC New Delhi: ISO 9001:2015 certified Institution
Maisammaguda, Dhulapally (Post via Kompally), Secunderabad- 500100

CERTIFICATE
This is to certify that the major project report on “A FACE RECOGNITION SYSTEM FOR
ANDROID MOBILE PHONES” is successfully done by the following students of the
Department of Computer Science and Engineering of our college in partial fulfillment of the
requirement for the award of the B.Tech degree in the year 2020-21. The results embodied in this
report have not been submitted to any other University for the award of any diploma or degree.

JANGITI PINKY - 17Q91A05M0


SAI SUMANTH - 16Q91A0577
DINESH PALVANCHA - 16E31A0594
SHRADDHA BISWAS - 15Q91A05L3

INTERNAL GUIDE                HOD                          PRINCIPAL

Mr. Syed Rasheed Uddin        Ms. Ch. Vijaya Kumari        Dr. M. Sreedhar Reddy
Asst. Professor               Assoc. Professor             Professor

Submitted for the viva voce examination held on

Internal Examiner                                          External Examiner

DECLARATION

We, Jangiti Pinky, Sai Sumanth, Dinesh Palvancha, and Shraddha Biswas, bearing Reg. Nos.
17Q91A05M0, 16Q91A0577, 16E31A0594, and 15Q91A05L3, hereby declare that the
major project report entitled “A FACE RECOGNITION SYSTEM FOR ANDROID MOBILE
PHONES” has been done by us under the guidance of Mr. Syed Rasheed Uddin, Assistant
Professor, Department of CSE, and is submitted in partial fulfillment of the requirements for the
award of the degree of BACHELOR OF TECHNOLOGY in COMPUTER SCIENCE AND
ENGINEERING.

Signature of the Candidate

Jangiti Pinky 17Q91A05M0

Sai Sumanth 16Q91A0577


Dinesh Palvancha 16E31A0594

Shraddha Biswas 15Q91A05L3

PLACE: Maisammaguda
DATE:

ACKNOWLEDGEMENT

First and foremost, we would like to express our immense gratitude towards our
institution, Malla Reddy College of Engineering, which helped us attain profound technical
skills in the field of Computer Science & Engineering, thereby fulfilling our most cherished
goal.

We are pleased to thank Sri Ch. Malla Reddy, our Founder and Chairman, MRGI, and Sri
Ch. Mahender Reddy, Secretary, MRGI, for providing this opportunity and support
throughout the course.

It gives us immense pleasure to acknowledge the perennial inspiration of
Dr. M. Sreedhar Reddy, our beloved Principal, for his kind co-operation and encouragement in
bringing out this task.

We would like to thank Dr. T. V. Reddy, our Vice Principal, and Ms. Ch. Vijaya Kumari,
HOD, CSE Department, for their inspiration, adroit guidance and constructive criticism towards
the successful completion of our degree.

We convey our gratitude to Mr. Gladson Mario Britto, R&D Dean, and Mr. Syed
Rasheed Uddin & Mr. Ch. Vengaiah, Assistant Professors, our project coordinators, for
their valuable guidance.

We would like to thank Mr. Syed Rasheed Uddin, Assistant Professor, our internal
guide, for his valuable suggestions and guidance during the execution and completion of this
project.

Finally, we avail this opportunity to express our deep gratitude to all the staff who have
contributed their valuable assistance and support in making our project a success.

JANGITI PINKY (17Q91A05M0)


SAI SUMANTH (16Q91A0577)
DINESH PALVANCHA (16E31A0594)
SHRADDHA BISWAS (15Q91A05L3)

ABSTRACT

More and more personal information is stored in the smartphone, especially photos and
pictures. It is important to manage this rapidly growing number of photos on the smartphone.
In this project, we develop an intelligent photo management system for Android phones. Its
key functions include basic classification based on time and location, quick searching, and
intelligent classification based on face recognition. The core technology of face recognition
consists of three steps: face detection, face comparison and face searching.

Experiments show that the system works well and is efficient for mobile photo management.

INDEX

LIST OF CONTENTS PAGE NO


CERTIFICATE i
DECLARATION ii
ACKNOWLEDGEMENT iii
ABSTRACT iv
TABLE OF CONTENTS v
LIST OF FIGURES vi
LIST OF SCREENSHOTS viii

NAME OF THE TOPIC PAGE NO

CHAPTER 1: INTRODUCTION 1
1.1 Introduction 2

CHAPTER 2: LITERATURE SURVEY 3


2.1 Literature Survey 4
2.2 Existing System 4
2.3 Disadvantages of Existing System 5
2.4 Proposed system 5
2.5 Advantages of proposed System 5
CHAPTER 3: SYSTEM ANALYSIS 6
3.1 Software Requirements 7
3.2 Hardware Requirements 7
3.3 Functional Requirements 7
3.4 Non-Functional Requirements 7
CHAPTER 4: SYSTEM DESIGN 9
4.1 Architecture of Proposed System 10

4.2 Modules 10
CHAPTER 5: DOMAIN SPECIFICATION AND EXPLANATION 11
5.1 Java Introduction 12
5.1.1 Java Server Pages (JSP) 16
5.1.2 Servlet-Front end 18
5.1.3 Java Database Connectivity (JDBC) 22
5.2 Introduction to Android 23
5.2.1 Android Versions 28
5.2.2 Create your first Android app 31
5.2.3 Understanding the Build Process 41
5.2.4 Text and Scrolling views 51
5.3 Source Code 61
CHAPTER 6: SCREENSHOTS 79
CHAPTER 7: TESTING 82
7.1 Testing Methodologies 83

CHAPTER 8: CONCLUSION 87
REFERENCES 88

LIST OF FIGURES

Figure No    Name of the Figure    Page No.

1 Architecture of Proposed System 10

2 An overview of the software development process. 13


3 Java Platform 14
4 The API and Java Virtual Machine insulate the program from the 15
underlying hardware.
5 Architecture of JSP 17
5.1.1 Architecture of JSP 18
6 2 Tier Processing Model 22
7 3 Tier Processing Model 23

8 Android Mobile Phones 24
9 Platform for Android Mobile phone 25
10 Touch Screen User Interface 25
11 Android Architecture 26
12 Distribution options in Android 28
13 Android Versions 29
14 Building for a multi-screen world 30
15 The development process 31
16 Choosing target devices and the minimum SDK 33
17 Choosing a template 34
18 Android Studio window panes 34
19 Exploring a project 35
20 Viewing the Android Manifest 36
21 Viewing and editing Java code 40
22 Viewing and editing layouts 40
23 Build process 41
24 Syncing your project 45
25 Creating a virtual device 46
26 Select Hardware 47
27 Run Pane 47
28 Viewing your log messages 51
29 Scroll view 54
30 Relative Layout 55
31 ScrollView with a LinearLayout 55
32 Homepage 57
33 Android Studio page 58
34 Browse samples 60
35 The app project 60
36 Using activity templates 61

LIST OF SCREENSHOTS

Fig No Screenshot Name Page No


37 Face Detection 80
38 Adding Face along with their Names 80
39 OUTPUT – Face Recognition 81

CHAPTER 1
INTRODUCTION

1. INTRODUCTION

1.1 Introduction

The intelligent mobile phone, as a high-technology product, has gained more powerful
functions with the development of the times. The improvement of hardware, such as
high-resolution cameras, fast processors and larger memory, requires a more efficient and
intelligent system and software. With more photos stored on personal devices, there is a
growing need for photo tools that can empower users to organize and manage personal
photographs. A great album app does not just scan the pictures; what is more important is the
convenience of the user's lookup, which leads developers to start thinking about how to make
it easier to find a photo. Motivated by this, our goal is to develop an app which helps users
quickly find the photos that they want.

Generally, a photo on a mobile phone is taken to record an important time or thing.
To achieve our goal, face detection is essential to this function. Although Face++ [1]
offers online face detection technology, if all the photos of the user are sent to the server for
online face detection, it results in a time-consuming networking task and consumes a lot of
data traffic, which is obviously unacceptable for users. First, the user has little patience to
wait for the end of the long process. Second, data traffic is also charged highly by network
operators. To solve this problem, we start a service in the background to complete the
time-consuming task, invisible to the user. The service launches itself after the phone is
turned on. In other words, the background service works at any time to classify the photos.

CHAPTER 2

LITERATURE SURVEY

2. LITERATURE SURVEY

2.1 LITERATURE SURVEY


Besides the photo albums provided by the mobile phone system,
there is also much research on photo management. Pixelsior provided a
unified view of photos to all apps and eliminated the complexity of metadata and photo
editing management. Lu et al. studied the overlapped users on different photo-sharing
websites. The authors explored the use of photos for autobiographical purposes.
Photo4W developed a photo management system including face annotation and scene
classification. Since the iOS system has the first mobile application, Leafsnap, for plant
leaf identification, Zhao et al. developed an Android-based plant image retrieval system,
namely ApLeaf. In a related work, the authors presented a segmentation algorithm. That is
also our motivation: iOS 10.3 performs better than other photo applications in that it can
classify intelligently by face and time, but not by location. Choi et al.
proposed a photo editing system and live wallpaper based on the Android system. Others
developed a real-time image monitoring system based on Android mobile terminals.

2.2 EXISTING SYSTEM


The existing approach addresses the identification of objects in an image. This process would
probably start with image processing techniques such as noise removal, followed by
(low-level) feature extraction to locate lines, regions and possibly areas with certain
textures. The clever bit is to interpret collections of these shapes as single objects, e.g.
cars on a road, boxes on a conveyor belt or cancerous cells on a microscope slide. One
reason this is an AI problem is that an object can appear very different when viewed
from different angles or under different lighting. Another problem is deciding what
features belong to what object and which are background or shadows etc. The human
visual system performs these tasks mostly unconsciously, but a computer requires skilful
programming and lots of processing power to approach human performance.
Image processing is the manipulation of data in the form of an image through several
possible techniques. An image is usually interpreted as a two-dimensional array of brightness
values, and is most familiarly represented by such patterns as those of a photographic print,
slide, television screen, or movie screen. An image can be processed optically or digitally
with a computer.

2.3 DISADVANTAGES OF EXISTING SYSTEM


➢ It fails to detect multiple faces at a time.
➢ The existing system does not provide accurate results.

2.4 PROPOSED SYSTEM

Our proposed system presents an Android smartphone based application which can
detect faces and provide the names of people after performing face recognition. The Android
app also has the additional feature of registering and training a face on the mobile phone.
This allows the user to register new faces on his local device without the need to perform
training on another device and import the model. The registration process takes the
image of the new person as input and trains a model for the new person. If the face to be
added is already registered, the face recognition model for that person is improved;
otherwise the new person is added to the face recognition model.
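The register-or-update decision described above can be sketched in plain Java. This is an illustrative sketch only: the class name, the Map of feature vectors standing in for a trained model, and the method names are assumptions, not code from the actual app.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the registration flow: if a face is already
// registered, add the new sample to improve that person's model;
// otherwise create a brand-new entry for the person.
public class FaceRegistry {
    private final Map<String, List<float[]>> models = new HashMap<>();

    /** Returns true if this call improved an existing model,
     *  false if a new person was added. */
    public boolean register(String name, float[] features) {
        boolean existed = models.containsKey(name);
        models.computeIfAbsent(name, k -> new ArrayList<>()).add(features);
        return existed;
    }

    public int sampleCount(String name) {
        return models.getOrDefault(name, new ArrayList<>()).size();
    }

    public static void main(String[] args) {
        FaceRegistry reg = new FaceRegistry();
        System.out.println(reg.register("alice", new float[]{0.1f, 0.9f})); // false: new person
        System.out.println(reg.register("alice", new float[]{0.2f, 0.8f})); // true: model improved
        System.out.println(reg.sampleCount("alice"));                       // 2
    }
}
```

In the real app the stored samples would feed a training step rather than a plain list, but the branch structure is the same.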

2.5 ADVANTAGES OF PROPOSED SYSTEM

➢ It is free and easily available. Any person having an Android based smartphone can
download the app and use it.
➢ Detects multiple faces at a time.
➢ Provides accurate face detection.

CHAPTER 3

SYSTEM ANALYSIS

3. SYSTEM ANALYSIS

3.1 SOFTWARE REQUIREMENTS


➢ Android studio
➢ JDK
➢ Firebase

3.2 HARDWARE REQUIREMENTS


➢ OS – Windows 8
➢ RAM – 8 GB
➢ 64-bit configuration
➢ Android mobile phone

3.3 Functional Requirements

➢ 3.3.1 Function 1: User Registration


➢ Purpose: The purpose of registration is to attach all the images of the moves of
an unauthorized intruder.
➢ Inputs: The user will enter details in the registration form according to the required fields;
some of the fields include username, password, confirm password, first name, etc.
➢ Processing: On the server side a servlet is used to capture the details of the user and
then store the details in the database.
➢ Output: After registration the user will be directed to the main home page.

➢ Error Handling: If an error is encountered, it is handled with a suitable
exception handling method.
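The server-side check a registration servlet would perform on the fields above can be sketched in plain Java. The field names (username, password, confirm) and the helper class are illustrative assumptions; a real servlet would read them from the HttpServletRequest before storing the details in the database.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the server-side validation step of user registration;
// a servlet would run this before inserting the user into the database.
public class RegistrationCheck {
    /** Returns null when the form is valid, or an error message otherwise. */
    public static String validate(Map<String, String> form) {
        String user    = form.getOrDefault("username", "");
        String pass    = form.getOrDefault("password", "");
        String confirm = form.getOrDefault("confirm", "");
        if (user.isEmpty()) return "username is required";
        if (pass.isEmpty()) return "password is required";
        if (!pass.equals(confirm)) return "passwords do not match";
        return null; // valid: the servlet would now store the details
    }

    public static void main(String[] args) {
        Map<String, String> form = new HashMap<>();
        form.put("username", "pinky");
        form.put("password", "secret");
        form.put("confirm", "secret");
        System.out.println(validate(form)); // null -> the form is valid
    }
}
```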

3.4 Non-Functional Requirements

➢ Some of the non-functional requirements are as follows:


➢ 1. Performance – high, as processing happens only when necessary.
➢ 2. Reliability – high, as only registered users will get the intruder's images.
➢ 3. Security – it provides security to the data by authentication; only those users who
are registered can access it.
➢ 4. Portability – the system is a real-time project which can be implemented on any
system which supports the requirements.
➢ 5. Logical Database Requirements – a database is needed for storing the details of the
users and images.

CHAPTER 4
SYSTEM DESIGN

4. SYSTEM DESIGN
4.1 ARCHITECTURE OF PROPOSED SYSTEM

Image is taken as input → Photo filter → Detecting the face → Compare the face → Return data to phone

Fig:1 Architecture of Proposed System

4.2 MODULES
FACE DETECTION

Face detection technology uses the face API. By sending photos to the server, we can
get a rectangle around the face, a unique identification of a human face (the face_token)
and multiple attributes of the human face. The face_token is very useful to us. The face
API matches one face against another, but a photo may contain multiple faces. There are
two solutions to this problem. The first is to crop all the faces in the picture, save the
faces to files, and then perform a comparison of the faces. The second is a direct
comparison of the features of each face. Obviously, the second option is better than the
first one, both in running time and in the probability of abnormal situations. The first
option would also generate additional file data that would cause more unwanted image
files in memory.
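The second option, comparing face features directly instead of cropping and saving each face to a file, can be illustrated with a cosine-similarity check over feature vectors. This is a sketch under assumptions: the short vectors here stand in for whatever features the face API actually returns, and cosine similarity stands in for the API's own scoring.

```java
// Sketch: compare two faces by their feature vectors directly,
// with no intermediate cropped image files written to storage.
public class FaceCompare {
    /** Cosine similarity in [-1, 1]; closer to 1 means more similar. */
    public static double cosineSimilarity(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        float[] face1 = {0.9f, 0.1f, 0.3f};
        float[] face2 = {0.88f, 0.12f, 0.31f}; // likely the same person
        float[] face3 = {0.1f, 0.9f, 0.2f};    // a different person
        System.out.printf("same:      %.3f%n", cosineSimilarity(face1, face2));
        System.out.printf("different: %.3f%n", cosineSimilarity(face1, face3));
    }
}
```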

FACE RECOGNITION

Face recognition is the core technology of the app. Face recognition includes face
detection, which checks the exact location of the face in a picture; face comparison (face
matching), which needs at least two faces and tests their similarity; and face search,
which, given a face, finds the most similar faces in a face set.
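Face search, finding the most similar face in a face set, can be sketched as a nearest-neighbour scan over stored feature vectors. The names, the map-based face set, and the similarity measure are illustrative assumptions, not the app's actual implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of face search: given a query face's features, scan a face set
// and return the name whose stored features are most similar.
public class FaceSearch {
    static double similarity(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    /** Returns the best-matching name in the face set, or null if the set is empty. */
    public static String search(Map<String, float[]> faceSet, float[] query) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, float[]> e : faceSet.entrySet()) {
            double s = similarity(e.getValue(), query);
            if (s > bestScore) { bestScore = s; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, float[]> faceSet = new LinkedHashMap<>();
        faceSet.put("alice", new float[]{0.9f, 0.1f});
        faceSet.put("bob",   new float[]{0.1f, 0.9f});
        System.out.println(search(faceSet, new float[]{0.85f, 0.15f})); // alice
    }
}
```

A production system would additionally apply a similarity threshold, so that an unknown face is reported as "no match" rather than as the least-bad candidate.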

CHAPTER 5

DOMAIN SPECIFICATION AND EXPLANATION

5.1 JAVA INTRODUCTION
HISTORY

The JAVA language was created by James Gosling in June 1991 for
use in a set-top box project. The language was initially called Oak, after an oak tree
that stood outside Gosling's office - and also went by the name Green - and ended up
later being renamed to Java, from a list of random words. Gosling's goals were to
implement a virtual machine and a language that had a familiar C/C++ style of
notation. The first public implementation was Java 1.0 in 1995. It promised "Write
Once, Run Anywhere" (WORA), providing no-cost runtimes on popular platforms. It
was fairly secure and its security was configurable, allowing network and file access
to be restricted. Major web browsers soon incorporated the ability to run secure Java
applets within web pages. Java quickly became popular. With the advent of Java 2,
new versions had multiple configurations built for different types of platforms. For
example, J2EE was for enterprise applications and the greatly stripped down version
J2ME was for mobile applications. J2SE was the designation for the Standard Edition.
In 2006, for marketing purposes, new J2 versions were renamed Java EE, Java ME,
and Java SE, respectively.

In 1997, Sun Microsystems approached the ISO/IEC JTC1 standards body and
later Ecma International to formalize Java, but it soon withdrew from the process.
Java remains a standard that is controlled through the Java Community Process. At
one time, Sun made most of its Java implementations available without charge
although they were proprietary software. Sun's revenue from Java was generated by
the selling of licenses for specialized products such as the Java Enterprise System.
Sun distinguishes between its Software Development Kit (SDK) and Runtime
Environment (JRE), which is a subset of the SDK, the primary distinction being that in
the JRE, the compiler, utility programs, and many necessary header files are not
present.

On 13 November 2006, Sun released much of Java as free software under the
terms of the GNU General Public License (GPL). On 8 May 2007, Sun finished the
process, making all of Java's core code open source, aside from a small portion of
code to which Sun did not hold the copyright.

Primary goals

There were five primary goals in the creation of the Java language:

• It should use the object-oriented programming methodology.


• It should allow the same program to be executed on multiple operating systems.
• It should contain built-in support for using computer networks.
• It should be designed to execute code from remote sources securely.
• It should be easy to use by selecting what were considered the good parts of other
object-oriented languages.
The Java Programming Language:

The Java programming language is a high-level language that can be


characterized by all of the following buzzwords:

• Simple
• Architecture neutral
• Object oriented
• Portable
• Distributed
• High performance

Each of the preceding buzzwords is explained in The Java Language
Environment, a white paper written by James Gosling and Henry McGilton. In the
Java programming language, all source code is first written in plain text files ending
with the .java extension. Those source files are then compiled into .class files by
the javac compiler. A .class file does not contain code that is native to your
processor; it instead contains byte codes — the machine language of the Java Virtual
Machine (Java VM). The java launcher tool then runs your application with an
instance of the Java Virtual Machine.

Fig:2 An overview of the software development process.
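The pipeline just described, a .java source file compiled by javac into .class byte codes and then executed by the java launcher on the Java VM, can be seen with the smallest possible program:

```java
// Saved as Hello.java, compiled with `javac Hello.java` into Hello.class
// (byte codes for the Java VM), and run with `java Hello`.
public class Hello {
    static String greeting() {
        return "Hello from the Java VM";
    }

    public static void main(String[] args) {
        System.out.println(greeting()); // prints "Hello from the Java VM"
    }
}
```

The same Hello.class file runs unchanged on Windows, Solaris OS, Linux or Mac OS, because it targets the Java VM rather than any one processor.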

Because the Java VM is available on many different operating systems, the
same .class files are capable of running on Microsoft Windows, the Solaris
Operating System (Solaris OS), Linux, or Mac OS. Some virtual machines, such as the
Java HotSpot virtual machine, perform additional steps at runtime to give your
application a performance boost. This includes various tasks such as finding
performance bottlenecks and recompiling (to native code) frequently used sections of
code.

Fig:3 Java Platform

Through the Java VM, the same application is capable of running on multiple
platforms.

The Java Platform


A platform is the hardware or software environment in which a program runs.
We've already mentioned some of the most popular platforms like Microsoft Windows,
Linux, Solaris OS, and Mac OS. Most platforms can be described as a combination of
the operating system and underlying hardware. The Java platform differs from most
other platforms in that it's a software-only platform that runs on top of other hardware-
based platforms.

The Java platform has two components:

The Java Virtual Machine


The Java Application Programming Interface (API)
You've already been introduced to the Java Virtual Machine; it's the base for the
Java platform and is ported onto various hardware-based platforms.

The API is a large collection of ready-made software components that provide
many useful capabilities. It is grouped into libraries of related classes and interfaces;
these libraries are known as packages. The next section, What Can Java Technology
Do?, highlights some of the functionality provided by the API.

Fig:4 The API and Java Virtual Machine insulate the program from the
underlying hardware.

As a platform-independent environment, the Java platform can be a bit slower


than native code. However, advances in compiler and virtual machine technologies are
bringing performance close to that of native code without threatening portability.

Java Runtime Environment

The Java Runtime Environment, or JRE, is the software required to run any
application deployed on the Java Platform. End-users commonly use a JRE in software
packages and as a Web browser plug-in. Sun also distributes a superset of the JRE called the
Java 2 SDK (more commonly known as the JDK), which includes development tools
such as the Java compiler, Javadoc, Jar and debugger.

One of the unique advantages of the concept of a runtime engine is that errors
(exceptions) should not 'crash' the system. Moreover, in runtime engine environments
such as Java there exist tools that attach to the runtime engine and every time that an
exception of interest occurs they record debugging information that existed in memory
at the time the exception was thrown (stack and heap values). These Automated
Exception Handling tools provide 'root-cause' information for exceptions in Java
programs that run in production, testing or development environments.

Uses OF JAVA

Blue is a smart card enabled with the secure, cross-platform, object-oriented
Java Card API and technology. Blue contains an actual on-card processing chip,
allowing for enhanceable and multiple functionality within a single card. Applets that
comply with the Java Card API specification can run on any third-party vendor card
that provides the necessary Java Card Application Environment (JCAE). Not only can
multiple applet programs run on a single card, but new applets and functionality can be
added after the card is issued to the customer.
• Java can be used in chemistry.
• Java is used at NASA.
• Java is used in 2D and 3D applications.
• Java is used in graphics programming.
• Java is used in animations.
• Java is used in online and Web applications.

5.1.1 JSP:

JavaServer Pages (JSP) is a Java technology that allows software developers
to dynamically generate HTML, XML or other types of documents in response to a Web
client request. The technology allows Java code and certain pre-defined actions to be
embedded into static content.

The JSP syntax adds additional XML-like tags, called JSP actions, to be used to
invoke built-in functionality. Additionally, the technology allows for the creation of JSP
tag libraries that act as extensions to the standard HTML or XML tags. Tag libraries
provide a platform independent way of extending the capabilities of a Web server.

JSPs are compiled into Java Servlets by a JSP compiler. A JSP compiler may
generate a servlet in Java code that is then compiled by the Java compiler, or it may
generate byte code for the servlet directly. JSPs can also be interpreted on the fly,
reducing the time taken to reload changes.

JavaServer Pages (JSP) technology provides a simplified, fast way to create


dynamic web content. JSP technology enables rapid development of web-based
applications that are server and platform-independent.

Architecture OF JSP

Fig:5 Architecture OF JSP

The Advantages of JSP


Active Server Pages (ASP). ASP is a similar technology from Microsoft. The
advantages of JSP are twofold. First, the dynamic part is written in Java, not Visual Basic
or another MS-specific language, so it is more powerful and easier to use. Second, it is
portable to other operating systems and non-Microsoft Web servers.

Pure Servlets. JSP doesn't give you anything that you couldn't in principle do with a
Servlet. But it is more convenient to write (and to modify!) regular HTML than to have a
zillion println statements that generate the HTML. Plus, by separating the look from the
content you can put different people on different tasks: your Web page design experts can
build the HTML, leaving places for your Servlet programmers to insert the dynamic content.

Server-Side Includes (SSI). SSI is a widely-supported technology for including
externally-defined pieces into a static Web page. JSP is better because it lets you use
Servlets instead of a separate program to generate that dynamic part. Besides, SSI is really
only intended for simple inclusions, not for "real" programs that use form data, make
database connections, and the like.

JavaScript. JavaScript can generate HTML dynamically on the client. This is a useful
capability, but only handles situations where the dynamic information is based on the
client's environment. With the exception of cookies, HTTP and form submission data is
not available to JavaScript. And, since it runs on the client, JavaScript can't access
server-side resources like databases, catalogs, pricing information, and the like.

Static HTML. Regular HTML, of course, cannot contain dynamic information. JSP is
so easy and convenient that it is quite feasible to augment HTML pages that only benefit
marginally by the insertion of small amounts of dynamic data. Previously, the cost of
using dynamic data would preclude its use in all but the most valuable instances.

ARCHITECTURE OF JSP

Fig:5.1.1 Architecture OF JSP

• The browser sends a request to a JSP page.


• The JSP page communicates with a Java bean.
• The Java bean is connected to a database.
• The JSP page responds to the browser.

5.1.2 SERVLETS – FRONT END

The Java Servlet API allows a software developer to add dynamic content to a
Web server using the Java platform. The generated content is commonly HTML, but may
be other data such as XML. Servlets are the Java counterpart to non-Java dynamic Web
content technologies such as PHP, CGI and ASP.NET. Servlets can maintain state across
many server transactions by using HTTP cookies, session variables or URL rewriting.

The Servlet API, contained in the Java package hierarchy javax.servlet, defines
the expected interactions of a Web container and a Servlet. A Web container is essentially
the component of a Web server that interacts with the Servlet. The Web container is
responsible for managing the lifecycle of Servlets, mapping a URL to a particular Servlet
and ensuring that the URL requester has the correct access rights. A Servlet is an object
that receives a request and generates a response based on that request. The basic Servlet
package defines Java objects to represent Servlet requests and responses, as well as
objects to reflect the Servlet configuration parameters and execution environment. The
package javax.servlet.http defines HTTP-specific subclasses of the generic Servlet
elements, including session management objects that track multiple requests and
responses between the Web server and a client. Servlets may be packaged in a WAR file
as a Web application. Servlets are Java technology's answer to CGI programming. They
are programs that run on a Web server and build Web pages. Building Web pages on the
fly is useful (and commonly done) for a number of reasons:

The Web page is based on data submitted by the user. For example, the results
pages from search engines are generated this way, and programs that process orders for
e-commerce sites do this as well. The data changes frequently. For example, a weather
report or news headlines page might build the page dynamically, perhaps returning a
previously built page if it is still up to date. The Web page uses information from
corporate databases or other such sources. For example, you would use this for making
a Web page at an on-line store that lists current prices and number of items in stock.

The Servlet Run-time Environment


A Servlet is a Java class and therefore needs to be executed in a Java VM by a
service we call a Servlet engine. The Servlet engine loads the servlet class the first time
the Servlet is requested, or optionally already when the Servlet engine is started. The
Servlet then stays loaded to handle multiple requests until it is explicitly unloaded or the
Servlet engine is shut down.

Some Web servers, such as Sun's Java Web Server (JWS), W3C's Jigsaw and
Gefion Software's Lite Web Server (LWS) are implemented in Java and have a built-in
Servlet engine. Other Web servers, such as Netscape's Enterprise Server, Microsoft's
Internet Information Server (IIS) and the Apache Group's Apache, require a Servlet
engine add-on module. The add-on intercepts all requests for Servlet, executes them and
returns the response through the Web server to the client. Examples of Servlet engine
add-ons are Gefion Software's WAI Cool Runner, IBM's Web Sphere, Live Software's
JRun and New Atlanta's Servlet Exec.

All Servlet API classes and a simple Servlet-enabled Web server are combined
into the Java Servlet Development Kit (JSDK), available for download at Sun's official
Servlet site. To get started with Servlets I recommend that you download the JSDK and
play around with the sample Servlets.

Life Cycle OF Servlet

The Servlet lifecycle consists of the following steps:

• The Servlet class is loaded by the container during start-up.
• The container calls the init() method. This method initializes the Servlet and must
be called before the Servlet can service any requests. In the entire life of a Servlet, the
init() method is called only once. After initialization, the Servlet can service client
requests.
• Each request is serviced in its own separate thread. The container calls the
service() method of the Servlet for every request. The service() method determines the
kind of request being made and dispatches it to an appropriate method to handle the
request. The developer of the Servlet must provide an implementation for these methods.
If a request for a method that is not implemented by the Servlet is made, the method of
the parent class is called, typically resulting in an error being returned to the requester.
• Finally, the container calls the destroy() method, which takes the Servlet out of
service. The destroy() method, like init(), is called only once in the lifecycle of a Servlet.
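The ordering guarantees above, init() exactly once, service() once per request, destroy() exactly once, can be mimicked in plain Java without a real container. This is a toy driver for illustration, not the javax.servlet API: the class and the run() loop standing in for the container are assumptions.

```java
// Toy sketch of the servlet lifecycle ordering: init() exactly once,
// service() once per request, destroy() exactly once at shutdown.
public class LifecycleDemo {
    int initCalls = 0, serviceCalls = 0, destroyCalls = 0;

    void init()    { initCalls++; }     // one-time initialization
    void service() { serviceCalls++; }  // per-request handling
    void destroy() { destroyCalls++; }  // one-time cleanup

    /** Simulates a container handling n requests over the servlet's life. */
    public void run(int requests) {
        init();                                       // before any request
        for (int i = 0; i < requests; i++) service(); // one thread per request in a real container
        destroy();                                    // at unload/shutdown
    }

    public static void main(String[] args) {
        LifecycleDemo d = new LifecycleDemo();
        d.run(3);
        System.out.println(d.initCalls + " " + d.serviceCalls + " " + d.destroyCalls); // 1 3 1
    }
}
```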

Request and Response Objects


The doGet method has two interesting parameters: HttpServletRequest and
HttpServletResponse. These two objects give you full access to all information about the
request and let you control the output sent to the client as the response to the request.
With CGI you read environment variables and stdin to get information about the request,
but the names of the environment variables may vary between implementations and some
are not provided by all Web servers.

The HttpServletRequest object provides the same information as the CGI
environment variables, plus more, in a standardized way. It also provides methods for
extracting HTTP parameters from the query string or the request body depending on the
type of request (GET or POST). As a Servlet developer you access parameters the same
way for both types of requests. Other methods give you access to all request headers and
help you parse date and cookie headers.

Instead of writing the response to stdout as you do with CGI, you get an
OutputStream or a PrintWriter from the HttpServletResponse. The OutputStream is
intended for binary data, such as a GIF or JPEG image, and the PrintWriter for text
output. You can also set all response headers and the status code, without having to rely
on special Web server CGI configurations such as Non Parsed Headers (NPH). This
makes your Servlet easier to install.

ServletConfig and ServletContext

There is only one ServletContext in every application. This object can be used by
all the Servlets to obtain application-level information or container details. Every Servlet,
on the other hand, gets its own ServletConfig object. This object provides initialization
parameters for a servlet. A developer can obtain the reference to ServletContext using
either the ServletConfig object or ServletRequest object.

All servlets belong to one servlet context. In implementations of the 1.0 and 2.0
versions of the Servlet API, all servlets on one host belong to the same context, but with
the 2.1 version of the API the context becomes more powerful and can be seen as the
humble beginnings of an Application concept. Future versions of the API will make this
even more pronounced.
Many servlet engines implementing the Servlet 2.1 API let you group a set of
servlets into one context and support more than one context on the same host. The
ServletContext in the 2.1 API is responsible for the state of its servlets and knows about
resources and attributes available to the servlets in the context. Here we will only look at
how ServletContext attributes can be used to share information among a group of
servlets.
There are three ServletContext methods dealing with context attributes:
getAttribute, setAttribute and removeAttribute. In addition the servlet engine may
provide ways to configure a servlet context with initial attribute values. This serves as a
welcome addition to the servlet initialization arguments for configuration information
used by a group of servlets, for instance the database identifier we talked about above, a
style sheet URL for an application, the name of a mail server, etc.
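The three attribute methods can be pictured with a small pure-Java stand-in. SharedContext below is hypothetical; it only mimics the attribute portion of the real javax.servlet.ServletContext interface, using a concurrent map since several servlets may touch the context at once.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal stand-in for the attribute portion of ServletContext.
// The real interface is javax.servlet.ServletContext; this sketch only
// mirrors the three attribute methods discussed above.
public class SharedContext {
    private final Map<String, Object> attributes = new ConcurrentHashMap<>();

    public Object getAttribute(String name) {
        return attributes.get(name);
    }

    public void setAttribute(String name, Object value) {
        attributes.put(name, value);
    }

    public void removeAttribute(String name) {
        attributes.remove(name);
    }

    public static void main(String[] args) {
        SharedContext ctx = new SharedContext();
        // One servlet stores the database identifier ...
        ctx.setAttribute("db.id", "inventoryDB");
        // ... and any other servlet in the same context can read it.
        System.out.println(ctx.getAttribute("db.id")); // prints "inventoryDB"
        ctx.removeAttribute("db.id");
        System.out.println(ctx.getAttribute("db.id")); // prints "null"
    }
}
```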

5.1.3 JDBC

Java Database Connectivity (JDBC) is a programming framework for Java


developers writing programs that access information stored in databases, spreadsheets,
and flat files. JDBC is commonly used to connect a user program to a "behind the scenes"
database, regardless of what database management software is used to control the
database. In this way, JDBC is cross-platform. This section provides an introduction
and sample code that demonstrates database access from Java programs that use the
classes of the JDBC API, which is available for free download from Sun's site.

A database that another program links to is called a data source. Many data
sources, including products produced by Microsoft and Oracle, already use a standard
called Open Database Connectivity (ODBC). Many legacy C and Perl programs use
ODBC to connect to data sources. ODBC consolidated much of the commonality
between database management systems. JDBC builds on this feature, and increases the
level of abstraction. JDBC-ODBC bridges have been created to allow Java programs to
connect to ODBC-enabled database software.

JDBC Architecture
Two-tier and Three-tier Processing Models

The JDBC API supports both two-tier and three-tier processing models for
database access.

Fig:6 Two-tier processing model

In the two-tier model, a Java applet or application talks directly to the data source.
This requires a JDBC driver that can communicate with the particular data source being

accessed. A user's commands are delivered to the database or other data source, and the
results of those statements are sent back to the user. The data source may be located on
another machine to which the user is connected via a network. This is referred to as a
client/server configuration, with the user's machine as the client, and the machine housing
the data source as the server. The network can be an intranet, which, for example,
connects employees within a corporation, or it can be the Internet.

In the three-tier model, commands are sent to a "middle tier" of services, which
then sends the commands to the data source. The data source processes the commands
and sends the results back to the middle tier, which then sends them to the user.

Fig:7 Three-tier processing model

Until recently, the middle tier has often been written in languages such as C or
C++, which offer fast performance. However, with the introduction of optimizing
compilers that translate Java byte code into efficient machine-specific code and
technologies such as Enterprise JavaBeans™, the Java platform is fast becoming the
standard platform for middle-tier development. This is a big plus, making it possible to
take advantage of Java's robustness, multithreading, and security features. With
enterprises increasingly using the Java programming language for writing server code, the
JDBC API is being used more and more in the middle tier of a three-tier architecture.
Some of the features that make JDBC a server technology are its support for connection
pooling, distributed transactions, and disconnected rowsets. The JDBC API is also what
allows access to a data source from a Java middle tier.
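A minimal two-tier sketch using the core java.sql API is shown below. The JDBC URL, credentials, and table name are placeholders, and a driver for the actual database must be on the classpath; without one, the connection attempt fails with an SQLException.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcDemo {

    // Builds the SQL text; separated out so the query logic is visible.
    static String buildQuery(String table) {
        return "SELECT id, name FROM " + table;
    }

    // Two-tier access: the program talks to the data source directly.
    // The URL below is a placeholder; substitute your database's JDBC URL.
    static void printRows(String url, String user, String password) throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, password);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(buildQuery("employees"))) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }
    }

    public static void main(String[] args) {
        try {
            printRows("jdbc:yourdb://localhost/sample", "user", "secret");
        } catch (SQLException e) {
            // Without a matching driver on the classpath, execution ends here.
            System.err.println("Could not connect: " + e.getMessage());
        }
    }
}
```

In the three-tier model the same java.sql calls would typically live in the middle tier (for example, inside a servlet), with the client sending commands to that tier instead of to the database directly.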

5.2: Introduction to Android

Android

Android is an operating system and programming platform developed by Google for
smartphones and other mobile devices (such as tablets). It can run on many different
devices from many different manufacturers. Android includes a software development kit
for writing original code and assembling software modules to create apps for Android
users. It also provides a marketplace to distribute apps. Altogether, Android represents
an ecosystem for mobile apps.

Fig:8 Android mobile phones

Why develop apps for Android?

Apps are developed for a variety of reasons: addressing business requirements, building
new services, creating new businesses, and providing games and other types of content
for users. Developers choose to develop for Android in order to reach the majority of
mobile device users.

Most popular platform for mobile apps

As the world's most popular mobile platform, Android powers hundreds of millions of
mobile devices in more than 190 countries around the world. It has the largest installed
base of any mobile platform and is still growing fast. Every day another million users
power up their Android devices for the first time and start looking for apps, games, and
other digital content.

Fig:9 Platform for mobile apps

Best experience for app users

Android provides a touch-screen user interface (UI) for interacting with apps. Android's
user interface is mainly based on direct manipulation, using touch gestures such as
swiping, tapping, and pinching to manipulate on-screen objects. For text input, there is a
customizable virtual keyboard. Android can also support game controllers and full-size
physical keyboards connected by Bluetooth or USB.

Fig:10 touch-screen user interface (UI)


The Android home screen can contain several pages of app icons, which launch the
associated apps, and widgets, which display live, auto-updating content such as the
weather, the user's email inbox or a news ticker. Android can also play multimedia
content such as music, animation, and video. The figure above shows app icons on the
home screen (left), playing music (center), and displaying widgets (right). Along the top
of the screen is a status bar, showing information about the device and its connectivity.

The Android home screen may be made up of several pages, between which the user can
swipe back and forth.

Easy to develop apps

Use the Android software development kit (SDK) to develop apps that take advantage of
the Android operating system and UI. The SDK includes a comprehensive set of
development tools including a debugger, software libraries of prewritten code, a device
emulator, documentation, sample code, and tutorials. Use these tools to create apps that
look great and take advantage of the hardware capabilities available on each device. To
develop apps using the SDK, use the Java programming language for developing the app
and Extensible Markup Language (XML) files for describing data resources. By writing
the code in Java and creating a single app binary, you will have an app that can run on
both phone and tablet form factors. You can declare your UI in lightweight sets of XML
resources, one set for parts of the UI that are common to all form factors, and other sets
for features specific to phones or tablets. At runtime, Android applies the correct resource
sets based on the device's screen size, density, locale, and so on. To help you develop
your apps efficiently, Google offers a full Java Integrated Development Environment
(IDE) called Android Studio, with advanced features for developing, debugging, and
packaging Android apps. Using Android Studio, you can develop on any available
Android device, or create virtual devices that emulate any hardware configuration.
Android provides a rich development architecture. You don't need to know much about the components of this
architecture, but it is useful to know what is available in the system for your app to use.
The following diagram shows the major components of the Android stack — the
operating system and development architecture.

Fig:11 Android Architecture


In the figure above:

Apps: Your apps live at this level, along with core system apps for email, SMS
messaging, calendars, Internetbrowsing, or contacts.
Java API Framework: All features of Android are available to developers through
application programming interfaces(APIs) written in the Java language. You don't need
to know the details of all of the APIs to learn how to develop Android apps, but you can
learn more about the following APIs, which are useful for creating apps:

View System: used to build an app's UI, including lists, buttons, and menus.
Resource Manager: used to access non-code resources such as localized strings,
graphics, and layout files.
Notification Manager: used to display custom alerts in the status bar.
Activity Manager: manages the lifecycle of apps.
Content Providers: enable apps to access data from other apps.
All framework APIs that Android system apps use.

Libraries and Android Runtime: Each app runs in its own process and with its own
instance of the Android Runtime,which enables multiple virtual machines on low-
memory devices. Android also includes a set of core runtime libraries that provide most
of the functionality of the Java programming language, including some Java 8 language
features that the Java API framework uses. Many core Android system components and
services are built from native code and require native libraries written in C and C++.
These native libraries are available to apps through the Java API framework.
Hardware Abstraction Layer (HAL): This layer provides standard interfaces that
expose device hardware capabilitiesto the higher-level Java API framework. The HAL
consists of multiple library modules, each of which implements an interface for a specific
type of hardware component, such as the camera or Bluetooth module.
Linux Kernel: The foundation of the Android platform is the Linux kernel. The above
layers rely on the Linux kernel forunderlying functionalities such as threading and low-
level memory management. Using a Linux kernel enables Android to take advantage of
key security features and allows device manufacturers to develop hardware drivers for a
well-known kernel.

Many distribution options

You can distribute your Android app in many different ways: email, a website, or an app
marketplace such as Google Play. Android users download billions of apps and games
from the Google Play store each month (shown in the figure below). Google Play is a
digital distribution service, operated and developed by Google, that serves as the official
app store for Android, allowing consumers to browse and download apps developed with
the Android SDK and published through Google.

Fig:12 Distribution options in Android

Development Tools

The Android SDK includes a variety of custom tools that help you develop mobile applications
on the Android platform. The most significant tools are:
⮚ Android Emulator – A virtual mobile device that runs on your computer; used to design,
debug, and test applications in an actual Android run-time environment.
⮚ Android Development Tools Plugin – For the Eclipse IDE; adds powerful extensions to the
Eclipse integrated environment.
⮚ Dalvik Debug Monitor Service (DDMS) – Integrated with Dalvik; this tool lets you manage
processes on an emulator and assists in debugging.
⮚ Android Asset Packaging Tool (AAPT) – Constructs the distributable Android package files
(.apk).
⮚ Android Debug Bridge (ADB) – Provides a link to a running emulator; can copy files to the
emulator, install .apk files, and run commands.

5.2.1 Android versions

Google provides major incremental upgrades to the Android operating system every six
to nine months, using confectionery-themed names. The latest major release is Android
7.0 "Nougat".

Code name            Version number   Initial release date   API level
N/A                  1.0              23 September 2008      1
N/A                  1.1              9 February 2009        2
Cupcake              1.5              27 April 2009          3
Donut                1.6              15 September 2009      4
Eclair               2.0 – 2.1        26 October 2009        5–7
Froyo                2.2 – 2.2.3      20 May 2010            8
Gingerbread          2.3 – 2.3.7      6 December 2010        9–10
Honeycomb            3.0 – 3.2.6      22 February 2011       11–13
Ice Cream Sandwich   4.0 – 4.0.4      18 October 2011        14–15
Jelly Bean           4.1 – 4.3.1      9 July 2012            16–18
KitKat               4.4 – 4.4.4      31 October 2013        19–20
Lollipop             5.0 – 5.1.1      12 November 2014       21–22
Marshmallow          6.0 – 6.0.1      5 October 2015         23
Nougat               7.0              22 August 2016         24

Fig:13 Android versions

See previous versions and their features at The Android Story.


The Dashboard for Platform Versions is updated regularly to show the distribution of
active devices running each version of Android, based on the number of devices that visit
the Google Play Store. It's a good practice to support about 90% of the active devices,
while targeting your app to the latest version. The Android Support Library allows your
app to use recent platform APIs on older devices.

The challenges of Android app development

While the Android platform provides rich functionality for app development, there are still
a number of challenges you need to address, such as:

• Building for a multi-screen world


• Getting performance right
• Keeping your code and your users secure
• Remaining compatible with older platform versions
• Understanding the market and the user.

Building for a multi-screen world

Android runs on billions of handheld devices around the world, and supports various form
factors including wearable devices and televisions. Devices can come in different sizes
and shapes that affect the screen designs for UI elements in your apps.

fig:14 Building for a multi-screen world

In addition, device manufacturers may add their own UI elements, styles, and colors to
differentiate their products. Each manufacturer offers different features with respect to
keyboard forms, screen size, or camera buttons. An app running on one device may look
a bit different on another. The challenge for many developers is to design UI elements that
can work on all devices. It is also the developer's responsibility to provide an app's
resources such as icons, logos, other graphics, and text styles to maintain uniformity of
appearance across different devices.

Maximizing app performance

An app's performance—how fast it runs, how easily it connects to the network, and how well it
manages battery and memory usage—is affected by factors such as battery life, multimedia
content, and Internet access. You must be aware of these limitations and write code in such a way
that the resource utilization is balanced and distributed optimally. For example, you will have to
balance the background services by enabling them only when necessary; this will save battery life
of the user’s device.

Keeping your code and your users secure

You need to take precautions to secure your code and the user’s experience when using your app.
Use tools such as ProGuard (provided in Android Studio), which detects and removes unused
classes, fields, methods, and attributes, and encrypt all of your app's code and resources while
packaging the app. To protect your user's critical information such as logins and passwords, you
must secure the communication channel to protect data in transit (across the Internet) as well as
data at rest (on the device).

Remaining compatible with older platform versions

Consider how to add new Android platform version features to an app, while ensuring that the
app can still run on devices with older platform versions. It is impractical to focus only on the
most recent Android version, as not all users may have upgraded or may be able to upgrade their
devices.

5.2.2 Create Your First Android App

This chapter describes how to develop applications using the Android Studio Integrated
Development Environment (IDE).

The development process

An Android app project begins with an idea and a definition of the requirements necessary to
realize that idea. As the project progresses, it goes through design, development, and testing.

Fig:15 The development process

The above diagram is a high-level picture of the development process, with the following steps:
Defining the idea and its requirements: Most apps start with an idea of what the app
should do, bolstered by market and user research. During this stage the app's
requirements are defined.
Prototyping the user interface: Use drawings, mock-ups, and prototypes to show what
the user interface would look like, and how it would work.
Developing and testing the app: An app consists of one or more activities. For each
activity you can use Android Studio to do the following, in no particular order:

Create the layout: Place UI elements on the screen in a layout, and assign string
resources and menu items, using the Extensible Markup Language (XML).
Write the Java code: Create source code for components and tests, and use testing
and debugging tools.
Register the activity: Declare the activity in the manifest file.
Define the build: Use the default build configuration or create custom builds for
different versions of your app.
Publishing the app: Assemble the final APK (package file) and distribute it through
channels such as Google Play.

Using Android Studio

Android Studio provides tools for the testing and publishing phases of the development
process, and a unified development environment for creating apps for all Android devices. The
development environment includes code templates with sample code for common app features,
extensive testing tools and frameworks, and a flexible build system.

Starting an Android Studio project

After you have successfully installed the Android Studio IDE, double-click the Android
Studio application icon to start it. Choose Start a new Android Studio project in the
Welcome window, and name the project the same name that you want to use for the app.

When choosing a unique Company Domain, keep in mind that apps published to the
Google Play must have a unique package name. Since domains are unique, prepending
the app's name with your name, or your company's domain name, should provide an
adequately unique package name. If you are not planning to publish the app, you can
accept the default example domain. Be aware that changing the package name later is
extra work.
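The reversed-domain convention behind unique package names can be sketched as a small helper method; the class and method names here are hypothetical, for illustration only.

```java
public class PackageNameDemo {
    // Builds a reversed-domain package name from a company domain and an
    // app name, e.g. "example.com" + "HelloWorld" -> "com.example.helloworld".
    static String packageName(String domain, String appName) {
        String[] parts = domain.split("\\.");
        StringBuilder sb = new StringBuilder();
        // Reverse the domain components: "example.com" becomes "com.example.".
        for (int i = parts.length - 1; i >= 0; i--) {
            sb.append(parts[i]).append('.');
        }
        return sb.append(appName.toLowerCase()).toString();
    }

    public static void main(String[] args) {
        System.out.println(packageName("example.com", "HelloWorld"));
        // prints "com.example.helloworld"
    }
}
```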

Choosing target devices and the minimum SDK

When choosing Target Android Devices, Phone and Tablet are selected by default, as shown in the figure
below. The choice shown in the figure for the Minimum SDK — API 15: Android 4.0.3

(Ice Cream Sandwich) — makes your app compatible with 97% of Android devices active on the Google
Play Store.

Fig:16 Choosing target devices and the minimum SDK

Different devices run different versions of the Android system, such as Android 4.0.3 or Android
4.4. Each successive version often adds new APIs not available in the previous version. To
indicate which set of APIs are available, each version specifies an API level. For instance,
Android 1.0 is API level 1 and Android 4.0.3 is API level 15.

The Minimum SDK declares the minimum Android version for your app. Each successive version
of Android provides compatibility for apps that were built using the APIs from previous versions,
so your app should always be compatible with future versions of Android while using the
documented Android APIs.

Choosing a template

Android Studio pre-populates your project with minimal code for an activity and a screen layout
based on a template. A variety of templates are available, ranging from a virtually blank template
(Add No Activity) to various types of activities.
You can customize the activity after choosing your template. For example, the Empty Activity
template provides a single activity accompanied by a single layout resource for the screen. You
can choose to accept the commonly used name for the activity (such as MainActivity) or change
the name on the Customize the Activity screen. Also, if you use the Empty Activity template, be
sure to check the following if they are not already checked:
Generate Layout file: Leave this checked to create the layout resource connected to this activity,
which is usually named activity_main.xml. The layout defines the user interface for the activity.
Backwards Compatibility (AppCompat): Leave this checked to include the AppCompat library
so that the app is compatible with previous versions of Android even if it uses features found
only in newer versions.

Fig:17 Choosing a template

Android Studio creates a folder for the newly created project in the AndroidStudioProjects folder
on your computer.

Android Studio window panes

The Android Studio main window is made up of several logical areas, or panes, as shown in the
figure below.

Fig:18 Android Studio window panes

In the above figure:

The Toolbar. The toolbar carries out a wide range of actions, including running the Android app
and launching Android tools.
The Navigation Bar. The navigation bar allows navigation through the project and open files for
editing. It provides a more compact view of the project structure.
The Editor Pane. This pane shows the contents of a selected file in the project. For example, after
selecting a layout (as shown in the figure), this pane shows the layout editor with tools to edit the
layout. After selecting a Java code file, this pane shows the code with tools for editing the code.
The Status Bar. The status bar displays the status of the project and Android Studio itself, as well
as any warnings or messages. You can watch the build progress in the status bar.
The Project Pane. The project pane shows the project files and project hierarchy.

The Monitor Pane. The monitor pane offers access to the TODO list for managing tasks, the
Android Monitor for monitoring app execution (shown in the figure), the logcat for viewing log
messages, and the Terminal application for performing Terminal activities.

Exploring a project

Each project in Android Studio contains the AndroidManifest.xml file, component source-code
files, and associated resource files. By default, Android Studio organizes your project files
based on the file type, and displays them within the Project: Android view in the left tool pane,
as shown below. The view provides quick access to your project's key files.

To switch back to this view from another view, click the vertical Project tab in the far left
column of the Project pane, and choose Android from the pop-up menu at the top of the Project
pane, as shown in the figure below.

Fig:19 Exploring a project

In the figure above:

The Project tab. Click to show the project view.

The Android selection in the project drop-down menu.

The AndroidManifest.xml file. Used for specifying information about the app for the Android
runtime environment. The template you choose creates this file.
The java folder. This folder includes activities, tests, and other components in Java source code.
Every activity, service, and other component is defined as a Java class, usually in its own file. The

name of the first activity (screen) the user sees, which also initializes app-wide resources, is
customarily MainActivity.
The res folder. This folder holds resources, such as XML layouts, UI strings, and images. An
activity usually is associated with an XML resource file that specifies the layout of its views. This
file is usually named after its activity or function.
The build.gradle (Module: App) file. This file specifies the module's build configuration. The
template you choose creates this file, which defines the build configuration, including the
minSdkVersion attribute that declares the minimum version for the app, and the targetSdkVersion
attribute that declares the highest (newest) version for which the app has been optimized. This file
also includes a list of dependencies, which are libraries required by the code — such as the
AppCompat library for supporting a wide range of Android versions.

Viewing the Android Manifest

Before the Android system can start an app component, the system must know that the component
exists by reading the app's AndroidManifest.xml file. The app must declare all its components in
this file, which must be at the root of the app project directory.
To view this file, expand the manifests folder in the Project: Android view, and double-click the
file (AndroidManifest.xml).

Its contents appear in the editing pane as shown in the figure below.

Fig:20 Viewing the Android Manifest

Android namespace and application tag

The Android Manifest is coded in XML and always uses the Android namespace:

xmlns:android="http://schemas.android.com/apk/res/android"

package="com.example.android.helloworld">

The package expression shows the unique package name of the new app. Do not change this
once the app is published.
<application

...

</application>

The <application> tag, with its closing </application> tag, defines the manifest settings for the
entire app.

Automatic backup
The android:allowBackup attribute enables automatic app data backup:

...

android:allowBackup="true"

...
Setting the android:allowBackup attribute to true enables the app to be backed up automatically
and restored as needed. Users invest time and effort to configure apps. Switching to a new
device can cancel out all that careful configuration. The system performs this automatic backup
for nearly all app data by default, and does so without the developer having to write any
additional app code.
For apps whose target SDK version is Android 6.0 (API level 23) and higher, devices running
Android 6.0 and higher automatically create backups of app data to the cloud because the
android:allowBackup attribute defaults to true if omitted. For apps targeting API level 22 or
lower, you have to explicitly add the android:allowBackup attribute and set it to true.

The app icon


The android:icon attribute sets the icon for the app:

android:allowBackup="true"

android:icon="@mipmap/ic_launcher"

...

The android:icon attribute assigns an icon in the mipmap folder (inside the res folder in
Project: Android view) to the app.

The icon appears in the Launcher for launching the app. The icon is also used as the default icon
for app components.

App label and string resources

As you can see in the previous figure, the android:label attribute shows the string "Hello World"
highlighted. If you click on this string, it changes to show the string resource @string/app_name
...

android:label="@string/app_name"

...

After opening the strings.xml file, you can see that the string name app_name is set to
Hello World. You can change the app name by changing the Hello World string to something else.
String resources are described in a separate lesson.

The app theme

The android:theme attribute sets the app's theme, which defines the appearance of user interface
elements such as text:

...

android:theme="@style/AppTheme">

...

The theme attribute is set to the standard theme AppTheme. Themes are described in a separate
lesson.

Declaring the Android version

Different devices may run different versions of the Android system, such as Android 4.0 or Android
4.4. Each successive version can add new APIs not available in the previous version. To indicate
which set of APIs are available, each version specifies an API level. For instance, Android 1.0 is
API level 1 and Android 4.4 is API level 19.

The API level allows a developer to declare the minimum version with which the app is
compatible, using the <uses-sdk> manifest tag and its minSdkVersion attribute. For example, the
Calendar Provider APIs were added in Android 4.0 (API level 14). If your app can't function
without these APIs, declare API level 14 as the app's minimum supported version like this:
<manifest ... >

<uses-sdk android:minSdkVersion="14" android:targetSdkVersion="19" />

...

</manifest>

The minSdkVersion attribute declares the minimum version for the app, and the targetSdkVersion
attribute declares the highest (newest) version for which the app has been optimized. Each
successive version of Android provides compatibility for apps that were built using the APIs from
previous versions, so the app should always be compatible with future versions of Android while
using the documented Android APIs.
The targetSdkVersion attribute does not prevent an app from being installed on Android versions
that are higher (newer) than the specified value, but it is important because it indicates to the
system whether the app should inherit behavior changes in newer versions. If you don't update the
targetSdkVersion to the latest version, the system assumes that your app requires some backward-
compatibility behaviors when running on the latest version. For example, among the behavior
changes in Android 4.4, alarms created with the AlarmManager APIs are now inexact by default
so that the system can batch app alarms and preserve system power, but the system will retain the
previous API behavior for an app if your target API level is lower than "19".
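The install and compatibility rules above can be sketched as plain logic. The helper methods below are hypothetical, not real Android APIs; the system applies these checks itself at install time and at runtime.

```java
public class SdkVersionCheck {

    // An app can be installed only if the device's API level is at least
    // the app's declared minSdkVersion.
    static boolean canInstall(int deviceApiLevel, int minSdkVersion) {
        return deviceApiLevel >= minSdkVersion;
    }

    // The system applies backward-compatibility behaviors when the app's
    // targetSdkVersion is lower than the API level it is running on.
    static boolean usesCompatBehavior(int deviceApiLevel, int targetSdkVersion) {
        return targetSdkVersion < deviceApiLevel;
    }

    public static void main(String[] args) {
        // minSdkVersion 14 (Android 4.0): installable on API 19, not on API 13.
        System.out.println(canInstall(19, 14)); // prints "true"
        System.out.println(canInstall(13, 14)); // prints "false"
        // Targeting 19 while running on a 19 device: no compat shims needed.
        System.out.println(usesCompatBehavior(19, 19)); // prints "false"
    }
}
```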

Viewing and editing Java code

Components are written in Java and listed within module folders in the java folder in the Project: Android
view. Each module name begins with the domain name (such as com.example.android) and includes the
app name.
The following example shows an activity component:

Click the module folder to expand it and show the MainActivity file for the activity written in Java (the
MainActivity class).

Double-click MainActivity to see the source file in the editing pane, as shown in the figure below.

Fig:21 Viewing and editing Java code

At the very top of the MainActivity.java file is a package statement that defines the app
package. This is followed by an import block, condensed in the above figure with "...".
Click the dots to expand the block to view it. The import statements import libraries
needed for the app, such as the following, which imports the AppCompatActivity library:

import android.support.v7.app.AppCompatActivity;
Each activity in an app is implemented as a Java class. The following class declaration
extends the AppCompatActivity class to implement features in a way that is backward-
compatible with previous versions of Android:
public class MainActivity extends AppCompatActivity {

...

Viewing and editing layouts

Layout resources are written in XML and listed within the layout folder in the res folder
in the Project: Android view. Click res > layout and then double-click activity_main.xml
to see the layout file in the editing pane.
Android Studio shows the Design view of the layout, as shown in the figure below. This
view provides a Palette pane of user interface elements, and a grid showing the screen
layout.

Fig:22 viewing and editing layouts

5.2.3 Understanding the build process

The Android application package (APK) is the package file format for distributing and installing
Android mobile apps. The build process involves tools and processes that automatically convert
each project into an APK.
Android Studio uses Gradle as the foundation of the build system, with more Android-specific
capabilities provided by the Android Plugin for Gradle. This build system runs as an integrated
tool from the Android Studio menu.

Understanding build.gradle files

When you create a project, Android Studio automatically generates the necessary build files in the
Gradle Scripts folder in
Project: Android view. Android Studio build files are named build.gradle as shown below:

Fig:23 Build process

Each project has the following:

build.gradle (Project: apptitle)

This is the top-level build file for the entire project, located in the root project directory, which
defines build configurations that apply to all modules in your project. This file, generated by
Android Studio, should not be edited to include app dependencies.

build.gradle (Module: app)

Android Studio creates separate build.gradle (Module: app) files for each module. You can edit
the build settings to provide custom packaging options for each module, such as additional build
types and product flavors, and to override settings in the manifest or top-level build.gradle file.
This file is most often the file to edit when changing app-level configurations, such as declaring
dependencies in the dependencies section. The following shows the contents of a project's build.gradle (Module: app) file:

apply plugin: 'com.android.application'

android {

    compileSdkVersion 24

    buildToolsVersion "24.0.1"

    defaultConfig {
        applicationId "com.example.android.helloworld2"
        minSdkVersion 15
        targetSdkVersion 24
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
        exclude group: 'com.android.support', module: 'support-annotations'
    })
    compile 'com.android.support:appcompat-v7:24.2.1'
    testCompile 'junit:junit:4.12'
}

The build.gradle files use Gradle syntax. Gradle is a Domain Specific Language (DSL) for
describing and manipulating the build logic using Groovy, which is a dynamic language for the Java
Virtual Machine (JVM). You don't need to learn Groovy to make changes, because the Android
Plugin for Gradle introduces most of the DSL elements you need.
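For example, a line such as compileSdkVersion 24 is ordinary Groovy: a DSL method call with the parentheses omitted. The sketch below only illustrates this syntax; the values are taken from the sample file above:

```groovy
android {
    // Groovy shorthand: parentheses omitted.
    compileSdkVersion 24
    // The same call could be written explicitly as:
    // compileSdkVersion(24)
}
```

Because both spellings invoke the same method, you can read any build.gradle entry as "method name, then arguments".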

Plugin and Android blocks

In the above build.gradle (Module: app) file, the first statement applies the Android-
specific Gradle plug-in build tasks:

apply plugin: 'com.android.application'

android {

...

The android { } block specifies the following for the build:


The target SDK version for compiling the code:

compileSdkVersion 24
The version of the build tools to use for building the app:

buildToolsVersion "24.0.1"

The defaultConfig block

Core settings and entries for the app are specified in a defaultConfig { } block within the
android { } block:
...
defaultConfig {
applicationId "com.example.hello.helloworld"
minSdkVersion 15
targetSdkVersion 23
versionCode 1
versionName "1.0"
testInstrumentationRunner
"android.support.test.runner.AndroidJUnitRunner"
}
...
The minSdkVersion and targetSdkVersion settings override any AndroidManifest.xml settings for
the minimum SDK version and the target SDK version. See "Declaring the Android version"
previously in this chapter for background information on these settings.

The testInstrumentationRunner statement adds the instrumentation support for testing the user
interface with Espresso and UIAutomator. These are described in a separate lesson.

Build types

Build types for the app are specified in a buildTypes { } block, which controls how the app is
built and packaged.
...

buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'),
'proguard-rules.pro'
}
}

The build type specified is release for the app's release. Another common build type is
debug . Configuring build types is described in a separate lesson.
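As a sketch (this is not part of the project's actual configuration), a debug build type could be declared alongside release like this; the ".debug" suffix is a hypothetical choice that lets debug and release builds be installed side by side:

```groovy
buildTypes {
    release {
        minifyEnabled false
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    }
    debug {
        // Hypothetical: changes the applicationId to e.g. com.example.app.debug
        applicationIdSuffix ".debug"
        debuggable true
    }
}
```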

Dependencies

Dependencies for the app are defined in the dependencies { } block, which is the part of the
build.gradle file that is most likely to change as you start developing code that depends on other
libraries. The block is part of the standard Gradle API and belongs outside the android { } block.
...
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
exclude group: 'com.android.support', module: 'support-annotations' })
compile 'com.android.support:appcompat-v7:24.2.0'
testCompile 'junit:junit:4.12'
}
In the above snippet, the statement compile fileTree(dir: 'libs', include: ['*.jar']) adds a dependency
of all ".jar" files inside the libs directory. The compile configuration compiles the main application
— everything in it is added to the compilation classpath, and also packaged into the final APK.

Syncing your project

When you make changes to the build configuration files in a project, Android Studio requires that
you sync the project files so that it can import the build configuration changes and run some checks
to make sure the configuration won't create build errors.

To sync the project files, click Sync Now in the notification bar that appears when making a change,
or click Sync Project from the menu bar. If Android Studio notices any errors with the
configuration — for example, if the source code uses API features that are only available in an API
level higher than the compileSdkVersion — the Messages window appears to describe the issue.

Fig:24 Syncing your project

Running the app on an emulator or a device

With virtual device emulators, you can test an app on different devices such as tablets or
smartphones — with different API levels for different Android versions — to make sure it
looks good and works for most users. Although testing on real hardware is a good idea, you
don't have to depend on having a physical device available for app development.

The Android Virtual Device (AVD) manager creates a virtual device or emulator that
simulates the configuration for a particular type of Android device. Use the AVD Manager
to define the hardware characteristics of a device and its API level, and to save it as a
virtual device configuration. When you start the Android emulator, it reads a specified
configuration and creates an emulated device on your computer that behaves exactly like
a physical version of that device.

Creating a virtual device

To run an emulator on your computer, use the AVD Manager to create a configuration that
describes the virtual device.

Select Tools > Android > AVD Manager, or click the AVD Manager icon in the toolbar.

The "Your Virtual Devices" screen appears, showing all of the virtual devices created previously. Click the
+ Create Virtual Device button to create a new virtual device.

Fig:25 Creating a virtual device

You can select a device from a list of predefined hardware devices. For each device, the table
shows its diagonal display size (Size), screen resolution in pixels (Resolution), and pixel density
(Density). For example, the pixel density of the Nexus 5 device is xxhdpi , which means the app

uses the icons in the xxhdpi folder of the mipmap folder. Likewise, the app will use layouts and
drawables from folders defined for that density as well.

Fig:26 Select Hardware

You also choose the version of the Android system for the device. The Recommended tab shows
the recommended systems for the device. More versions are available under the x86 Images and
Other Images tabs.

Running the app on the virtual device

To run the app on the virtual device you created in the previous section, follow these steps:

In Android Studio, select Run > Run app or click the Run icon in the toolbar.
In the Select Deployment Target window, under Available Emulators, select the virtual device
you created, and click OK. The emulator starts and boots just like a physical device. Depending
on the speed of your computer, this may take a while. The app builds, and once the emulator is
ready, Android Studio uploads the app to the emulator and runs it. You should see the app created
from the Empty Activity template ("Hello World") as shown in the following figure, which also
shows Android Studio's Run pane, displaying the actions performed to run the app on the
emulator.

Fig:27 Run Pane

In the above figure:


The Emulator running the app.
The Run Pane. This shows the actions taken to install and run the app.

Running the app on a physical device

Always test your apps on a physical device, because users will use the app on physical devices.
While emulators are quite good, they can't show all possible device states, such as what
happens if an incoming call occurs while the app is running. To run the app on a physical
device, you need the following:

• An Android device such as a smartphone or tablet.


• A data cable to connect the Android device to your computer via the USB
port.
• If you are using Linux or Windows, it may be necessary to perform additional
steps to run the app on a hardware device. Check the Using Hardware
Devices documentation. On Windows, you may need to install the appropriate
USB driver for the device. See OEM USB Drivers.
To let Android Studio communicate with a device, turn on USB Debugging on the Android
device. On Android version 4.2 and newer, the Developer options screen is hidden by default.
Follow these steps to turn on USB Debugging:

1. On the physical device, open Settings and choose About phone at the bottom of the Settings
screen.
2. Tap the Build number information seven times. You read that correctly: tap it seven times.
3. Return to the previous screen (Settings). Developer options now appears at the bottom of the
screen. Tap Developer options.
4. Choose USB Debugging.
Now, connect the device and run the app from Android Studio.
Troubleshooting the device connection
If Android Studio does not recognize the device, try the following:
Disconnect the device from your computer, and then reconnect it.
Restart Android Studio.
If your computer still does not find the device or declares it "unauthorized":
Disconnect the device from your computer.
On the device, choose Settings > Developer Options.
Tap Revoke USB Debugging authorizations.

Reconnect the device to your computer.
When prompted, grant authorizations.
You may need to install the appropriate USB driver for the device; see the Using Hardware
Devices documentation. Check the latest documentation, programming forums, or get help
from your instructors.

Using the log

The log is a powerful debugging tool you can use to look at values, execution paths, and
exceptions. After you add logging statements to an app, your log messages appear along
with general log messages in the logcat tab of the Android Monitor pane of Android
Studio.
To see the Android Monitor pane, click the Android Monitor button at the bottom of
the Android Studio main window. The Android Monitor offers two tabs:

The logcat tab. The logcat tab displays log messages about the app as it is
running. If you add logging statements to the app, your log messages from
these statements appear with the other log messages under this tab.
The Monitors tab. The Monitors tab monitors the performance of the app,
which can be helpful for debugging and tuning your code.

Adding logging statements to your app

Logging statements add whatever messages you specify to the log. Adding logging
statements at certain points in the code allows the developer to look at values, execution
paths, and exceptions.
For example, the following logging statement adds "MainActivity" and "Hello World" to the
log:
Log.d("MainActivity", "Hello World");
The following are the elements of this statement:

Log: The Log class is the API for sending log messages.

d: You assign a log level so that you can filter the log messages using the drop-down
menu in the center of the logcat tab pane. The following are log levels you can assign:

d: Choose Debug or Verbose to see these messages.

e: Choose Error or Verbose to see these messages.

w: Choose Warning or Verbose to see these messages.

i: Choose Info or Verbose to see these messages.


"MainActivity": The first argument is a log tag which can be used to filter messages
under the logcat tab. This is commonly the name of the activity from which the message
originates. However, you can make this anything that is useful to you for debugging the
app. The best practice is to use a constant as a log tag, as follows:

1. Define the log tag as a constant before using it in logging statements:


private static final String LOG_TAG =

MainActivity.class.getSimpleName();

2. Use the constant in the logging statements:


Log.d(LOG_TAG, "Hello World");
"Hello World" : The second argument is the actual message that appears after the log
level and log tag under the logcat tab.
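The tag-constant pattern itself is plain Java and can be sketched outside Android. In the sketch below, logD is a hypothetical stand-in for android.util.Log.d that returns the message formatted roughly the way logcat displays it:

```java
public class LogSketch {
    // Best practice: derive the log tag from the class name once, as a constant.
    private static final String LOG_TAG = LogSketch.class.getSimpleName();

    // Hypothetical stand-in for Log.d(tag, msg): returns the formatted log line.
    static String logD(String tag, String msg) {
        return "D/" + tag + ": " + msg;
    }

    public static void main(String[] args) {
        // Prints: D/LogSketch: Hello World
        System.out.println(logD(LOG_TAG, "Hello World"));
    }
}
```

On a device, the same two-step pattern (constant tag, then Log.d(LOG_TAG, ...)) produces entries you can filter by tag in the logcat tab.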

Viewing your log messages

The Run pane appears in place of the Android Monitor pane when you run the app on
an emulator or a device. After starting to run the app, click the Android Monitor button
at the bottom of the main window, and then click the logcat tab in the Android Monitor
pane if it is not already selected.

Fig:28 Viewing your log messages

In the above figure:


The logging statement in the onCreate() method of MainActivity .
Android Monitor pane showing logcat log messages, including the message from the
logging statement.
By default, the log display is set to Verbose in the drop-down menu at the top of the logcat
display to show all messages. You can change this to Debug to see messages that start with
Log.d , or change it to Error to see messages that start with Log.e , and so on.

5.2.4 Text and Scrolling Views

This chapter describes one of the most often used views in apps: the TextView, which shows
textual content on the screen. A TextView can be used to show a message, a response from a
database, or even entire magazine-style articles that users can scroll. This chapter also shows how
you can create a scrolling view of text and other elements.

TextView

One view you may use often is the TextView class, which is a subclass of the View class
that displays text on the screen. You can use TextView for a view of any size, from a
single character or word to a full screen of text. You can add a resource id to the TextView,
and control how the text appears using attributes in the XML layout file.
You can refer to a TextView view in your Java code by using its resource id , and update
the text from your code. If you want to allow users to edit the text, use EditText, a subclass
of TextView that allows text input and editing. You learn all about EditText in another
chapter.

TextView attributes

You can use XML attributes to control:


Where the TextView is positioned in a layout (like any other view)
How the view itself appears, such as with a background color
What the text looks like within the view, such as the initial text and its style, size, and
color
For example, to set the width, height, and position within a LinearLayout:
<TextView
...
android:layout_width="match_parent"
android:layout_height="wrap_content"
… />
To set the initial text value of the view, use the android:text attribute:
android:text="Hello World!"
You can extract the text string into a string resource (perhaps called hello_world ) that's
easier to maintain for multiple-language versions of the app, or if you need to change the
string in the future. After extracting the string, use the string resource name with @string/
to specify the text:

android:text="@string/hello_world"

Using embedded tags in text

In an app that accesses magazine or newspaper articles, the articles that appear would
probably come from an online source or might be saved in advance in a database on the
device. You can also create text as a single long string in the strings.xml resource.
In either case, the text may contain embedded HTML tags or other text formatting codes.
To properly display in a text view, text must be formatted following these rules:
If you have an apostrophe (') in your text, you must escape it by preceding it with a
backslash (\'). If you have a double-quote in your text, you must also escape it (\"). You
must also escape any other non-ASCII characters. See the "Formatting and Styling"

section of String Resources for more details. The TextView ignores all HTML tags
except the following:
Use the HTML <b> and </b> tags around words that should be in bold.
Use the HTML <i> and </i> tags around words that should be in italics. Note, however, that
if you use curled apostrophes within an italic phrase, you should replace them with
straight apostrophes. You can combine bold and italics by combining the tags, as in
<b><i>... words...</i></b>.

To create a long string of text in the strings.xml file, enclose the entire text within
<string name="your_string_name"></string> in the strings.xml file
(your_string_name is the name you provide for the string resource, such as article_text).
Text lines in the strings.xml file don't wrap around to the next line — they extend beyond
the right margin. This is the correct behavior. Each new line of text starting at the left
margin represents an entire paragraph.
Enter \n to represent the end of a line, and another \n to represent a blank line. If you
don't add end-of-line characters, the paragraphs will run into each other when displayed
on the screen.
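For instance, a strings.xml entry combining an escaped apostrophe, a bold tag, and \n paragraph breaks might look like the sketch below; the string name article_text matches the example used later in this chapter, but the text itself is hypothetical:

```xml
<resources>
    <!-- \' escapes the apostrophe; <b></b> marks bold; \n\n ends a paragraph. -->
    <string name="article_text">It\'s a <b>scrollable</b> article.\n\nA second paragraph starts here.</string>
</resources>
```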

Referring to a TextView in code

To refer to a TextView in your Java code, use its resource id . For example, to update a
TextView with new text, you would:
Find the TextView (with the id show_count ) and assign it to a variable. You use the
findViewById() method of the View class, and refer to the view you want to find using
this format:
R.id.view_id

In which view_id is the resource identifier for the view:


mShowCount = (TextView) findViewById(R.id.show_count);

After retrieving the view as a TextView member variable, you can then set the text of
the text view to the new text using the setText()method of the TextView class:
mShowCount.setText(mCount_text);

Scrolling views
If the information you want to show in your app is larger than the device's display, you
can create a scrolling view that the user can scroll vertically by swiping up or down, or
horizontally by swiping right or left.
You would typically use a scrolling view for news stories, articles, or any lengthy text that
doesn't completely fit on the display. You can also use a scrolling view to combine views
(such as a TextView and a Button) within a scrolling view.

Creating a layout with a ScrollView


The ScrollView class provides the layout for a vertical scrolling view. (For horizontal
scrolling, you would use HorizontalScrollView.) ScrollView is a subclass of FrameLayout,
which means that you can place only one view as a child within it; that child contains
the entire contents to scroll.

Fig:29 Scroll view-1


Even though you can place only one child view inside a ScrollView, the child view could
be a view group with a hierarchy of child views, such as a LinearLayout. A good choice for
a view within a ScrollView is a LinearLayout that is arranged in a
vertical orientation.

Fig:29.1 Scroll view-2


ScrollView with a TextView

To show a scrollable magazine article on the screen, you might use a RelativeLayout for
the screen that includes a separate TextView for the article heading, another for the article
subheading, and a third TextView for the scrolling article text (see figure below), set within

a ScrollView. The only part of the screen that would scroll would be the ScrollView with
the article text.

Fig:30 Relative Layout

ScrollView with a LinearLayout

The ScrollView view group can contain only one view; however, that view can be a view
group that contains views, such as LinearLayout. You can nest a view group such as
LinearLayout within the ScrollView view group, thereby scrolling everything that is inside
the LinearLayout.

Fig:31 ScrollView with a LinearLayout

When adding a LinearLayout inside a ScrollView, use match_parent for the
LinearLayout's android:layout_width attribute to match the width of the parent view group
(the ScrollView), and use wrap_content for the LinearLayout's android:layout_height
attribute to make the view group only big enough to enclose its contents and padding.
Since ScrollView only supports vertical scrolling, you must set the LinearLayout
orientation to vertical (by using the android:orientation="vertical" attribute), so that the
entire LinearLayout will scroll vertically. For example, the following XML layout scrolls
the article TextView along with the article_subheading TextView:

<ScrollView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_below="@id/article_heading">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical">

        <TextView
            android:id="@+id/article_subheading"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:padding="@dimen/padding_regular"
            android:text="@string/article_subtitle"
            android:textAppearance="@android:style/TextAppearance" />

        <TextView
            android:id="@+id/article"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:autoLink="web"
            android:lineSpacingExtra="@dimen/line_spacing"
            android:text="@string/article_text" />
    </LinearLayout>
</ScrollView>

Exploring Android developer documentation

The best place to learn about Android development and to keep informed about the
newest Android development tools is to browse the official Android developer
documentation.

developer.android.com

Home page

Fig:32 Homepage

This documentation contains a wealth of information kept current by Google. To start
exploring, click the following links on the home page:

Get Android Studio: Download Android Studio, the official integrated development
environment (IDE) for building Android apps.

Browse sample code: Browse the sample code library in GitHub to learn how to build
different components for your apps. Click the categories in the left column to browse the
available samples. Each sample is a fully functioning Android app. You can browse the
resources and source files, and see the overall project structure. Copy and paste the code
you need, and if you want to share a link to a specific line you can double-click it to
get the URL. For more sample code, see "Exploring code samples in the Android SDK"
in this chapter.
Watch stories: Learn about other Android developers, their apps, and their successes with
Android and Google Play. The page offers videos and articles with the newest stories about
Android development, such as how developers improved their users' experiences, and how
to increase user engagement with apps.

The home page also offers links for Android developers to preview their apps for the
newest version of Android, and to join the Google Play developer program:

Developer Console: The Google Play store is Google's digital distribution system for apps
developed with the Android SDK. On the Google Play Developer Console page you can
accept the Developer Agreement, pay the registration fee, and complete your account
details in order to join the Google Play developer program.

Preview: Go to the preview page for the newest version of Android to test your apps for
compatibility, and to take advantage of new features like app shortcuts, image keyboard
support, circular icons, and more.

Android, Wear, TV, and Auto: Learn about the newest versions of Android for
smartphones and tablets, wearable devices, television, and automobiles.

Android Studio page

Fig:33 Android Studio page

After clicking Get Android Studio on the home page, the Android Studio page, shown
above, appears with the following useful links:

Download Android Studio: Download Android Studio for the computer operating
system you are currently using.
Read the docs: Browse the Android Studio documentation.
See the release notes: Read the release notes for the newest version of Android
Studio.
Features: Learn about the features of the newest version of Android Studio.
Latest: Read news about Android Studio.
Resources: Read articles about using Android Studio, including a basic introduction.
Videos: Watch video tutorials about using Android Studio.
Download Options: Download a version of Android Studio for a different operating
system than the one you are using.

Design, Develop, Distribute, and Preview

The Android documentation is accessible through the following links from the home
page:

Design: This section covers Material Design, which is a conceptual design philosophy
that outlines how apps should look and work on mobile devices. Use the following links
to learn more:
Introducing material design: An introduction to the material design philosophy.
Downloads for designers: Download color palettes for compatibility with the
material design specification.
Articles: Read articles and news about Android design.
Scroll down the Design page for links to resources such as videos, templates,
fonts, and color palettes.
Develop: This section is where you can find application programming interface (API)
information, reference documentation, tutorials, tool guides, and code samples, and gain
insights into Android's tools and libraries to speed your development. You can use the site
navigation links in the left column, or search to find what you need. The following are
popular links into the Develop section that are useful for this training:

Installing offline documentation

To access the documentation even when you are not connected to the internet, install the
Software Development Kit (SDK) documentation using the SDK Manager. Follow these steps:
1. Choose Tools > Android > SDK Manager.
2. In the left column, click Android SDK.
3. Select and copy the path for the Android SDK Location at the top of the screen, as you
will need it to locate the documentation on your computer.
4. Click the SDK Tools tab. You can install additional SDK tools that are not installed by
default, as well as an offline version of the Android developer documentation.
5. Click the checkbox for "Documentation for Android SDK" if it is not already installed,
and click Apply.
6. When the installation finishes, click Finish.
7. Navigate to the sdk directory you copied above, and open the docs directory.
8. Find index.html and open it.
Exploring code samples in the Android SDK
You can explore hundreds of code samples directly in Android Studio. Choose Import
an Android code sample from the Android Studio welcome screen, or choose File >

New > Import Sample if you have already opened a project. The Browse Samples
window appears as shown below.

Fig:34 Browse samples

Choose a sample and click Next. Accept or edit the Application name and Project
location, and click Finish. The app project appears as shown below, and you can run the
app in the emulator provided with Android Studio, or on a connected device.

Fig:35 The app project


Using activity templates

Android Studio provides templates for common and recommended activity designs.
Using templates saves time, and helps you follow best practices for developing activities.

Each template incorporates a skeleton activity and user interface. You choose an activity
template for the main activity when starting an app project. You can also add an activity
template to an existing project: right-click the java folder in the Project: Android view
and choose New > Activity > Gallery.

Fig:36 Using activity templates


5.3 Source Code :
Front end:

<?xml version="1.0" encoding="utf-8"?>


<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".LoginActivity">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"

android:layout_marginLeft="32dp"
android:layout_marginRight="32dp"
android:orientation="vertical">
<com.google.android.material.textfield.TextInputLayout
android:layout_width="match_parent"
android:layout_height="56dp">
<EditText
android:id="@+id/email_edt_text"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:hint="Email-ID"/>
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:layout_width="match_parent"
android:layout_height="56dp">
<EditText
android:id="@+id/pass_edt_text"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:hint="Password"
android:inputType="textPassword"/>
</com.google.android.material.textfield.TextInputLayout>
<Button
android:id="@+id/login_btn"
android:layout_width="match_parent"
android:layout_height="56dp"
android:layout_marginTop="16dp"
android:text="LOGIN" />
</LinearLayout>
</androidx.constraintlayout.widget.ConstraintLayout>

Backend:

package org.tensorflow.lite.examples.detection;

import android.Manifest;
import android.app.Fragment;
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.hardware.Camera;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.media.Image;
import android.media.Image.Plane;
import android.media.ImageReader;
import android.media.ImageReader.OnImageAvailableListener;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Trace;
import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.appcompat.widget.SwitchCompat;
import androidx.appcompat.widget.Toolbar;
import android.util.Size;
import android.view.Surface;
import android.view.View;
import android.view.ViewTreeObserver;
import android.view.WindowManager;
import android.widget.CompoundButton;
import android.widget.ImageView;
import android.widget.LinearLayout;
import android.widget.TextView;
import android.widget.Toast;

import com.google.android.material.bottomsheet.BottomSheetBehavior;
import com.google.android.material.floatingactionbutton.FloatingActionButton;
import java.nio.ByteBuffer;
import org.tensorflow.lite.examples.detection.env.ImageUtils;
import org.tensorflow.lite.examples.detection.env.Logger;

public abstract class CameraActivity extends AppCompatActivity


implements OnImageAvailableListener,
Camera.PreviewCallback,
CompoundButton.OnCheckedChangeListener,
View.OnClickListener {
private static final Logger LOGGER = new Logger();
private static final int PERMISSIONS_REQUEST = 1;
private static final String PERMISSION_CAMERA = Manifest.permission.CAMERA;
protected int previewWidth = 0;
protected int previewHeight = 0;
private boolean debug = false;
private Handler handler;
private HandlerThread handlerThread;
private boolean useCamera2API;
private boolean isProcessingFrame = false;
private byte[][] yuvBytes = new byte[3][];
private int[] rgbBytes = null;
private int yRowStride;
private Runnable postInferenceCallback;
private Runnable imageConverter;
private LinearLayout bottomSheetLayout;
private LinearLayout gestureLayout;
private BottomSheetBehavior<LinearLayout> sheetBehavior;
protected TextView frameValueTextView, cropValueTextView, inferenceTimeTextView;
protected ImageView bottomSheetArrowImageView;
private ImageView plusImageView, minusImageView;
private SwitchCompat apiSwitchCompat;
private TextView threadsTextView;

private FloatingActionButton btnSwitchCam;
private static final String KEY_USE_FACING = "use_facing";
private Integer useFacing = null;
private String cameraId = null;
protected Integer getCameraFacing() {
return useFacing;
}
@Override
protected void onCreate(final Bundle savedInstanceState) {
LOGGER.d("onCreate " + this);
super.onCreate(null);
Intent intent = getIntent();
//useFacing = intent.getIntExtra(KEY_USE_FACING, CameraCharacteristics.LENS_FACING_FRONT);
useFacing = intent.getIntExtra(KEY_USE_FACING, CameraCharacteristics.LENS_FACING_BACK);
getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
setContentView(R.layout.tfe_od_activity_camera);
Toolbar toolbar = findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
getSupportActionBar().setDisplayShowTitleEnabled(false);
if (hasPermission()) {
setFragment();
} else {
requestPermission();
}
threadsTextView = findViewById(R.id.threads);
plusImageView = findViewById(R.id.plus);
minusImageView = findViewById(R.id.minus);
apiSwitchCompat = findViewById(R.id.api_info_switch);
bottomSheetLayout = findViewById(R.id.bottom_sheet_layout);
gestureLayout = findViewById(R.id.gesture_layout);
sheetBehavior = BottomSheetBehavior.from(bottomSheetLayout);
bottomSheetArrowImageView = findViewById(R.id.bottom_sheet_arrow);

btnSwitchCam = findViewById(R.id.fab_switchcam);
ViewTreeObserver vto = gestureLayout.getViewTreeObserver();
vto.addOnGlobalLayoutListener(
new ViewTreeObserver.OnGlobalLayoutListener() {
@Override
public void onGlobalLayout() {
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.JELLY_BEAN) {
gestureLayout.getViewTreeObserver().removeGlobalOnLayoutListener(this);
} else {
gestureLayout.getViewTreeObserver().removeOnGlobalLayoutListener(this);
}
// int width = bottomSheetLayout.getMeasuredWidth();
int height = gestureLayout.getMeasuredHeight();
sheetBehavior.setPeekHeight(height);
}
});
sheetBehavior.setHideable(false);
sheetBehavior.setBottomSheetCallback(
new BottomSheetBehavior.BottomSheetCallback() {
@Override
public void onStateChanged(@NonNull View bottomSheet, int newState) {
switch (newState) {
case BottomSheetBehavior.STATE_HIDDEN:
break;
case BottomSheetBehavior.STATE_EXPANDED:
{
bottomSheetArrowImageView.setImageResource(R.drawable.icn_chevron_down);
}
break;
case BottomSheetBehavior.STATE_COLLAPSED:
{
bottomSheetArrowImageView.setImageResource(R.drawable.icn_chevron_up);
}
break;

case BottomSheetBehavior.STATE_DRAGGING:
break;
case BottomSheetBehavior.STATE_SETTLING:
bottomSheetArrowImageView.setImageResource(R.drawable.icn_chevron_up);
break;
}
}
@Override
public void onSlide(@NonNull View bottomSheet, float slideOffset) {}
});
frameValueTextView = findViewById(R.id.frame_info);
cropValueTextView = findViewById(R.id.crop_info);
inferenceTimeTextView = findViewById(R.id.inference_info);
apiSwitchCompat.setOnCheckedChangeListener(this);
plusImageView.setOnClickListener(this);
minusImageView.setOnClickListener(this);
btnSwitchCam.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
onSwitchCamClick();
}
});
}

private void onSwitchCamClick() {
switchCamera();
}
public void switchCamera() {
Intent intent = getIntent();

if (useFacing == CameraCharacteristics.LENS_FACING_FRONT) {
useFacing = CameraCharacteristics.LENS_FACING_BACK;
} else {

useFacing = CameraCharacteristics.LENS_FACING_FRONT;
}
intent.putExtra(KEY_USE_FACING, useFacing);
intent.addFlags(Intent.FLAG_ACTIVITY_NO_ANIMATION);
restartWith(intent);
}
private void restartWith(Intent intent) {
finish();
overridePendingTransition(0, 0);
startActivity(intent);
overridePendingTransition(0, 0);
}
protected int[] getRgbBytes() {
imageConverter.run();
return rgbBytes;
}
protected int getLuminanceStride() {
return yRowStride;
}
protected byte[] getLuminance() {
return yuvBytes[0];
}
/** Callback for android.hardware.Camera API */
@Override
public void onPreviewFrame(final byte[] bytes, final Camera camera) {
if (isProcessingFrame) {
LOGGER.w("Dropping frame!");
return;
}
try {
// Initialize the storage bitmaps once when the resolution is known.
if (rgbBytes == null) {
Camera.Size previewSize = camera.getParameters().getPreviewSize();
previewHeight = previewSize.height;

previewWidth = previewSize.width;
//rgbBytes = new int[previewWidth * previewHeight];
//onPreviewSizeChosen(new Size(previewSize.width, previewSize.height), 90);
rgbBytes = new int[previewWidth * previewHeight];
int rotation = 90;
if (useFacing == CameraCharacteristics.LENS_FACING_FRONT) {
rotation = 270;
}
onPreviewSizeChosen(new Size(previewSize.width, previewSize.height), rotation);
}
} catch (final Exception e) {
LOGGER.e(e, "Exception!");
return;
}
isProcessingFrame = true;
yuvBytes[0] = bytes;
yRowStride = previewWidth;
imageConverter =
new Runnable() {
@Override
public void run() {
ImageUtils.convertYUV420SPToARGB8888(bytes, previewWidth, previewHeight,
rgbBytes);
}
};
postInferenceCallback =
new Runnable() {
@Override
public void run() {
camera.addCallbackBuffer(bytes);
isProcessingFrame = false;
}
};
processImage();

}
/** Callback for Camera2 API */
@Override
public void onImageAvailable(final ImageReader reader) {
// We need wait until we have some size from onPreviewSizeChosen
if (previewWidth == 0 || previewHeight == 0) {
return;
}
if (rgbBytes == null) {
rgbBytes = new int[previewWidth * previewHeight];
}
try {
final Image image = reader.acquireLatestImage();
if (image == null) {
return;
}
if (isProcessingFrame) {
image.close();
return;
}
isProcessingFrame = true;
Trace.beginSection("imageAvailable");
final Plane[] planes = image.getPlanes();
fillBytes(planes, yuvBytes);
yRowStride = planes[0].getRowStride();
final int uvRowStride = planes[1].getRowStride();
final int uvPixelStride = planes[1].getPixelStride();
imageConverter =
new Runnable() {
@Override
public void run() {
ImageUtils.convertYUV420ToARGB8888(
yuvBytes[0],
yuvBytes[1],

yuvBytes[2],
previewWidth,
previewHeight,
yRowStride,
uvRowStride,
uvPixelStride,
rgbBytes);
}
};
postInferenceCallback =
new Runnable() {
@Override
public void run() {
image.close();
isProcessingFrame = false;
}
};
processImage();
} catch (final Exception e) {
LOGGER.e(e, "Exception!");
Trace.endSection();
return;
}
Trace.endSection();
}
@Override
public synchronized void onStart() {
LOGGER.d("onStart " + this);
super.onStart();
}
@Override
public synchronized void onResume() {
LOGGER.d("onResume " + this);
super.onResume();

handlerThread = new HandlerThread("inference");
handlerThread.start();
handler = new Handler(handlerThread.getLooper());
}
@Override
public synchronized void onPause() {
LOGGER.d("onPause " + this);
handlerThread.quitSafely();
try {
handlerThread.join();
handlerThread = null;
handler = null;
} catch (final InterruptedException e) {
LOGGER.e(e, "Exception!");
}
super.onPause();
}
@Override
public synchronized void onStop() {
LOGGER.d("onStop " + this);
super.onStop();
}
@Override
public synchronized void onDestroy() {
LOGGER.d("onDestroy " + this);
super.onDestroy();
}

protected synchronized void runInBackground(final Runnable r) {


if (handler != null) {
handler.post(r);
}
}
@Override

public void onRequestPermissionsResult(
final int requestCode, final String[] permissions, final int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (requestCode == PERMISSIONS_REQUEST) {
if (allPermissionsGranted(grantResults)) {
setFragment();
} else {
requestPermission();
}
}
}
private static boolean allPermissionsGranted(final int[] grantResults) {
for (int result : grantResults) {
if (result != PackageManager.PERMISSION_GRANTED) {
return false;
}
}
return true;
}
private boolean hasPermission() {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
return checkSelfPermission(PERMISSION_CAMERA) ==
PackageManager.PERMISSION_GRANTED;
} else {
return true;
}
}
private void requestPermission() {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
if (shouldShowRequestPermissionRationale(PERMISSION_CAMERA)) {
Toast.makeText(
CameraActivity.this,
"Camera permission is required for this demo",
Toast.LENGTH_LONG)

.show();
}
requestPermissions(new String[] {PERMISSION_CAMERA},
PERMISSIONS_REQUEST);
}
}
// Returns true if the device supports the required hardware level, or better.
private boolean isHardwareLevelSupported(
CameraCharacteristics characteristics, int requiredLevel) {
int deviceLevel =
characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
if (deviceLevel ==
CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
return requiredLevel == deviceLevel;
}
// deviceLevel is not LEGACY, can use numerical sort
return requiredLevel <= deviceLevel;
}
private String chooseCamera() {
final CameraManager manager = (CameraManager)
getSystemService(Context.CAMERA_SERVICE);

try {
for (final String cameraId : manager.getCameraIdList()) {
final CameraCharacteristics characteristics =
manager.getCameraCharacteristics(cameraId);
final StreamConfigurationMap map =
characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
if (map == null) {
continue;
}
// Fallback to camera1 API for internal cameras that don't have full support.
// This should help with legacy situations where using the camera2 API causes
// distorted or otherwise broken previews.

//final int facing =
//(facing == CameraCharacteristics.LENS_FACING_EXTERNAL)
// if (!facing.equals(useFacing)) {
// continue;
// }
final Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
if (useFacing != null &&
facing != null &&
!facing.equals(useFacing)
){
continue;
}
useCamera2API = (facing ==
CameraCharacteristics.LENS_FACING_EXTERNAL)
|| isHardwareLevelSupported(
characteristics,
CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL);
LOGGER.i("Camera API lv2?: %s", useCamera2API);
return cameraId;
}
} catch (CameraAccessException e) {
LOGGER.e(e, "Not allowed to access camera");
}

return null;
}
protected void setFragment() {
this.cameraId = chooseCamera();
Fragment fragment;
if (useCamera2API) {
CameraConnectionFragment camera2Fragment =
CameraConnectionFragment.newInstance(
new CameraConnectionFragment.ConnectionCallback() {
@Override

public void onPreviewSizeChosen(final Size size, final int rotation) {
previewHeight = size.getHeight();
previewWidth = size.getWidth();
CameraActivity.this.onPreviewSizeChosen(size, rotation);
}
},
this,
getLayoutId(),
getDesiredPreviewFrameSize());
camera2Fragment.setCamera(cameraId);
fragment = camera2Fragment;
} else {
int facing = (useFacing == CameraCharacteristics.LENS_FACING_BACK) ?
Camera.CameraInfo.CAMERA_FACING_BACK :
Camera.CameraInfo.CAMERA_FACING_FRONT;
LegacyCameraConnectionFragment frag = new LegacyCameraConnectionFragment(this,
getLayoutId(),
getDesiredPreviewFrameSize(), facing);
fragment = frag;

}
getFragmentManager().beginTransaction().replace(R.id.container, fragment).commit();
}
protected void fillBytes(final Plane[] planes, final byte[][] yuvBytes) {
// Because of the variable row stride it's not possible to know in
// advance the actual necessary dimensions of the yuv planes.
for (int i = 0; i < planes.length; ++i) {
final ByteBuffer buffer = planes[i].getBuffer();
if (yuvBytes[i] == null) {
LOGGER.d("Initializing buffer %d at size %d", i, buffer.capacity());
yuvBytes[i] = new byte[buffer.capacity()];
}
buffer.get(yuvBytes[i]);
}

}
public boolean isDebug() {
return debug;
}
protected void readyForNextImage() {
if (postInferenceCallback != null) {
postInferenceCallback.run();
}
}
protected int getScreenOrientation() {
switch (getWindowManager().getDefaultDisplay().getRotation()) {
case Surface.ROTATION_270:
return 270;
case Surface.ROTATION_180:
return 180;
case Surface.ROTATION_90:
return 90;
default:
return 0;
}
}
@Override
public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
setUseNNAPI(isChecked);
if (isChecked) apiSwitchCompat.setText("NNAPI");
else apiSwitchCompat.setText("TFLITE");
}
@Override
public void onClick(View v) {
if (v.getId() == R.id.plus) {
String threads = threadsTextView.getText().toString().trim();
int numThreads = Integer.parseInt(threads);
if (numThreads >= 9) return;
numThreads++;

threadsTextView.setText(String.valueOf(numThreads));
setNumThreads(numThreads);
} else if (v.getId() == R.id.minus) {
String threads = threadsTextView.getText().toString().trim();
int numThreads = Integer.parseInt(threads);
if (numThreads == 1) {
return;
}
numThreads--;
threadsTextView.setText(String.valueOf(numThreads));
setNumThreads(numThreads);
}
}
protected void showFrameInfo(String frameInfo) {
frameValueTextView.setText(frameInfo);
}
protected void showCropInfo(String cropInfo) {
cropValueTextView.setText(cropInfo);
}
protected void showInference(String inferenceTime) {
inferenceTimeTextView.setText(inferenceTime);
}
protected abstract void processImage();
protected abstract void onPreviewSizeChosen(final Size size, final int rotation);
protected abstract int getLayoutId();
protected abstract Size getDesiredPreviewFrameSize();
protected abstract void setNumThreads(int numThreads);
protected abstract void setUseNNAPI(boolean isChecked);
}

CHAPTER-6

SCREENSHOTS

6. SCREENSHOTS

Fig:37 Face Detection

Fig:38 Adding Faces along with their Names

OUTPUT:

Fig:39 Face Recognition

CHAPTER-7

TESTING

7. TESTING

INTRODUCTION TO TESTING

Software testing is an investigation conducted to provide stakeholders with
information about the quality of the software product or service under test. Software testing
can also provide an objective, independent view of the software to allow the business to
appreciate and understand the risks of software implementation. Test techniques include the
process of executing a program or application with the intent of finding software bugs (errors
or other defects), and verifying that the software product is fit for use.

7.1 DIFFERENT TYPES OF TESTING METHODOLOGIES

UNIT TESTING

Unit testing is a level of software testing where individual units/components of a
software system are tested. The purpose is to validate that each unit of the software performs as
designed. A unit is the smallest testable part of any software; it usually has one or a few inputs
and usually a single output.
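As an illustration, the permission check used by CameraActivity (`allPermissionsGranted`) is a natural unit to test in isolation. The sketch below re-implements it as a pure helper so it can be exercised without an Android device; the integer constants are stand-ins for the real `PackageManager` values, and the plain `check` helper stands in for a JUnit runner.

```java
// Unit-test sketch for the permission check used by CameraActivity.
// GRANTED/DENIED are stand-ins for PackageManager.PERMISSION_GRANTED / _DENIED.
public class PermissionCheckTest {
    static final int GRANTED = 0;
    static final int DENIED = -1;

    // Unit under test: true only when every result is GRANTED.
    static boolean allPermissionsGranted(int[] grantResults) {
        for (int result : grantResults) {
            if (result != GRANTED) {
                return false;
            }
        }
        return true;
    }

    static void check(boolean condition, String name) {
        if (!condition) {
            throw new AssertionError("failed: " + name);
        }
    }

    public static void main(String[] args) {
        check(allPermissionsGranted(new int[] {GRANTED, GRANTED}), "all granted");
        check(!allPermissionsGranted(new int[] {GRANTED, DENIED}), "one denied");
        check(allPermissionsGranted(new int[] {}), "empty is vacuously true");
        System.out.println("unit checks passed");
    }
}
```

Because the helper has one input (the results array) and one output (a boolean), each test case simply pairs an input with its expected output, which is exactly the shape of a unit test.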

INTEGRATION TESTING

Integration Testing is a level of software testing where individual units are combined
and tested as a group. The purpose of this level of testing is to expose faults in the interaction
between integrated units. Test drivers and test stubs are used to assist in Integration Testing.
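For example, the decision to attach the camera fragment depends on the Android permission check; a test stub can stand in for that dependency so the surrounding logic is exercised together with it. The `PermissionGate` class below is hypothetical (it is not part of the project code) and exists only to show the stub-injection idea.

```java
import java.util.function.Predicate;

// Integration-test sketch: a hypothetical PermissionGate wired to a stubbed
// permission checker, so the two units are tested together without Android.
public class PermissionGateIntegrationTest {
    static class PermissionGate {
        private final Predicate<String> checker; // stub injection point
        PermissionGate(Predicate<String> checker) {
            this.checker = checker;
        }
        boolean mayAttachFragment(String permission) {
            return checker.test(permission);
        }
    }

    public static void main(String[] args) {
        // Stubs standing in for Context.checkSelfPermission
        PermissionGate granted = new PermissionGate(p -> true);
        PermissionGate denied = new PermissionGate(p -> false);
        if (!granted.mayAttachFragment("android.permission.CAMERA")
                || denied.mayAttachFragment("android.permission.CAMERA")) {
            throw new AssertionError("integration check failed");
        }
        System.out.println("integration checks passed");
    }
}
```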

FUNCTIONAL TESTING

Functional Testing is a type of software testing whereby the system is tested against
the functional requirements/specifications. Functions (or features) are tested by feeding them
input and examining the output. Functional testing ensures that the requirements are properly
satisfied by the application.
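The rotation-to-degrees mapping in CameraActivity's `getScreenOrientation()` is a small example of a function that can be tested this way: feed in each input and examine the output, without looking at the implementation. The sketch re-implements the mapping as a pure function; the constants mirror the values of `android.view.Surface.ROTATION_*`.

```java
// Functional (black-box style) test sketch for the rotation mapping used by
// getScreenOrientation(); constants mirror android.view.Surface.ROTATION_*.
public class RotationFunctionalTest {
    static final int ROTATION_0 = 0;
    static final int ROTATION_90 = 1;
    static final int ROTATION_180 = 2;
    static final int ROTATION_270 = 3;

    static int toDegrees(int rotation) {
        switch (rotation) {
            case ROTATION_270: return 270;
            case ROTATION_180: return 180;
            case ROTATION_90: return 90;
            default: return 0;
        }
    }

    public static void main(String[] args) {
        // Each case pairs an input with its expected output.
        int[][] cases = {
            {ROTATION_0, 0}, {ROTATION_90, 90},
            {ROTATION_180, 180}, {ROTATION_270, 270}
        };
        for (int[] c : cases) {
            if (toDegrees(c[0]) != c[1]) {
                throw new AssertionError("rotation " + c[0] + " mapped wrongly");
            }
        }
        System.out.println("functional checks passed");
    }
}
```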

TESTING METHODS

White Box Testing

White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box
Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a software
testing method in which the internal structure/design/implementation of the item being tested
is known to the tester. The tester chooses inputs to exercise paths through the code and
determines the appropriate outputs. Programming know-how and implementation knowledge
are essential. White box testing goes beyond the user interface and into the nitty-gritty of a
system.

Black Box Testing

Black-box testing is a method of software testing that examines the functionality of
an application without peering into its internal structures or workings. This method of testing
can be applied at virtually every level of software testing: unit, integration, system and
acceptance. It is sometimes referred to as specification-based testing.

Performance testing
In software engineering, performance testing is, in general, testing performed to
determine how a system performs in terms of responsiveness and stability under a particular
workload. It can also serve to investigate, measure, validate or verify other quality attributes of
the system, such as scalability, reliability and resource usage.

Performance testing is a subset of performance engineering, an emerging computer
science practice which strives to build performance into the implementation, design and
architecture of a system.
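A minimal sketch of this idea is to time a per-frame workload over repeated invocations. The loop below uses a dummy computation as a stand-in for the app's YUV-to-ARGB conversion step; the buffer size and iteration count are illustrative, not a benchmark of the actual app.

```java
// Performance-test sketch: average latency of a dummy per-frame workload
// over repeated invocations (stand-in for the YUV->ARGB conversion step).
public class FrameLatencyBench {
    static void convertFrame(int[] dst) { // dummy workload
        for (int i = 0; i < dst.length; i++) {
            dst[i] = i * 31;
        }
    }

    public static void main(String[] args) {
        int[] frame = new int[640 * 480]; // one preview-sized buffer
        int iterations = 100;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            convertFrame(frame);
        }
        long avgMicros = (System.nanoTime() - start) / iterations / 1_000;
        System.out.println("average frame time: " + avgMicros + " us");
    }
}
```

In a real run, the measured average would be compared against a responsiveness target (for example, a per-frame budget derived from the desired preview frame rate).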

Verification and validation


Verification and validation are independent procedures that are used together to
check that a product, service, or system meets requirements and specifications and fulfills
its intended purpose. They are critical components of a quality management system such as
ISO 9000. The words "verification" and "validation" are sometimes preceded by
"independent" (as in IV&V), indicating that the verification and validation are to be
performed by a disinterested third party.

System testing
System testing of software or hardware is testing conducted on a complete, integrated
system to evaluate the system's compliance with its specified requirements. System testing
falls within the scope of black box testing, and as such, should require no knowledge of the
inner design of the code or logic.

Structure Testing:
It is concerned with exercising the internal logic of a program and traversing particular
execution paths.
Output Testing:
● The output of test cases is compared with the expected results created during the design of
the test cases.
● The output generated or displayed by the system under consideration is tested by asking the
users about the format they require.
● Here, the output format is considered in two ways: one is on-screen and the other is the
printed format.
● The on-screen output is found to be correct, as the format was designed in the system design
phase according to user needs.
● The printed output matches the user's specified requirements.

User acceptance Testing:


● This is the final stage before handing the system over to the customer; it is usually carried
out by the customer, with the test cases executed on actual data.
● The system under consideration is tested for user acceptance by keeping in constant touch
with the prospective system users during development and making changes whenever
required.
● It involves planning and executing various types of tests in order to demonstrate that the
implemented software system satisfies the requirements stated in the requirements document.

CHAPTER 8

CONCLUSION

8. CONCLUSION

Using face recognition technology for photo album management is the development trend of
future photo album apps. When managing a large number of photos, the variety of sorting
options makes searching more convenient. The result is that the pattern recognition approach
can dramatically increase the efficiency of image organization and of the user's access to
information.

REFERENCES

[1] S. Chang et al., "Histogram of the Oriented Gradient for Face Recognition," vol. 16, pp.
216-224, Apr. 2011.
[2] T. T. Chen et al., "Detection of Psychological Stress Using a Hyperspectral Imaging," vol.
5, pp. 391-405, 2014.
[3] T. Chunlin et al., "SWF-SIFT Approach for Infrared Face Recognition," vol. 15, pp.
357-362, 2010.
[4] Y. Chen et al., "Dictionary-Based Face and Person Recognition from Unconstrained
Video," vol. 3, pp. 1783-1798, 2015.
[5] S. Yan et al., "A Face Detection Method Combining Improved AdaBoost Algorithm and
Template Matching in Video Sequence," 8th International Conference on Intelligent
Human-Machine Systems and Cybernetics (IHMSC), vol. 02, pp. 231-235, 2016.
[6] Y. Ma and X. Ding, "Robust real-time face detection based on cost-sensitive AdaBoost
method," vol. 2, pp. 465-468, 2003.
[7] P. I. Rani and K. Muneeswaran, "Robust real time face detection automatically from
video sequence based on Haar features," International Conference on Communication and
Network Technologies, pp. 276-280, 2014.
[8] G. I. Hapsari et al., "Face recognition smart cane using haar-like features and eigenfaces,"
TELKOMNIKA Telecommunication, Computing, Electronics and Control, vol. 17.
[9] E. Winarno et al., "Asymmetrical Half-join Method on Dual Vision Face Recognition,"
International Journal of Electrical and Computer Engineering (IJECE), vol. 7.
[10] M. Fachrurrozi et al., "Real-time Multi-object Face Recognition Using Content Based
Image Retrieval (CBIR)," International Journal of Electrical and Computer Engineering
(IJECE), vol. 8.

