
TECHNICAL SEMINAR REPORT ON Authentication

By

[07E41A1211]

k.swarupini

Department of Information Technology


Sree Dattha Institute of Engineering and Science
Sagar Road, Sheriguda, Ibrahimpatnam, R.R. Dist. - 501510 (Approved by AICTE & Affiliated to JNT University Hyderabad) (2010-2011)

CERTIFICATE

This is to certify that the Technical Seminar entitled "Authentication" is being submitted by K. Swarupini [07E41A1211], B.Tech (IT), and is a bona fide work carried out by her during the academic year 2010-2011.

Supervisor

Head of the department

ABSTRACT
Authentication is the act of establishing or confirming something as authentic, that is, that claims made by or about the subject are true. This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claims to be, or assuring that a computer program is a trusted one.

In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain famous person, or was produced in a certain place or period of history. There are two types of techniques for doing this. The first is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist might use carbon dating to verify the age of an artifact, do a chemical analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. Attribute comparison may be vulnerable to forgery. In general, it relies on the fact that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, or that the amount of effort required to do so is considerably greater than the amount of money that can be gained by selling the forgery. Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught. The second relies on documentation or other external affirmations; such external records have their own problems of forgery and perjury, and are also vulnerable to being separated from the artifact and lost.

Currency and other financial instruments commonly use the first type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for receivers to verify. Consumer goods such as pharmaceuticals, perfume, and fashion clothing can use either type of authentication method to prevent counterfeit goods from taking advantage of a popular brand's reputation. A trademark is a legally protected marking or other identifying feature which aids consumers in the identification of genuine brand-name goods.

Index

1. Introduction
2. Authentication
2.1 Authentication factors and identity
2.2 History and information content
3. Authentication vs. authorization
4. Access control
5. Brief history of authentication
6. Cookies
7. Session tokens
8. Smart card
9. Fingerprint technology
10. Facial scan technology
11. Information security
12. Conclusion

INTRODUCTION
Authentication is the act of establishing or confirming that claims made by or about the subject are true. This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claims to be, or assuring that a computer program is a trusted one. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain famous person, or was produced in a certain place or period of history. There are two types of techniques for doing this. The first is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist might use carbon dating to verify the age of an artifact, do a chemical analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Attribute comparison may be vulnerable to forgery. In general, it relies on the fact that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, or that the amount of effort required to do so is considerably greater than the amount of money that can be gained by selling the forgery. In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art forger, forged the work of his father and provided a certificate for its provenance as well; see the article on Jacques van Meegeren. Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.

Authentication factors and identity


The ways in which someone may be authenticated fall into three categories, based on what are known as the factors of authentication: something you know, something you have, or something you are. Each authentication factor covers a range of elements used to authenticate or verify a person's identity prior to being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority. Security research has determined that for a positive identification, elements from at least two, and preferably all three, factors should be verified. The three factors (classes) and some of the elements of each factor are:

The ownership factors: something the user has (e.g., wrist band, ID card, security token, software token, phone, or cell phone)

The knowledge factors: something the user knows (e.g., a password, pass phrase, or personal identification number (PIN), or a challenge response (the user must answer a question))

The inherence factors: something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifier).

Two-factor authentication
When elements representing two factors are required for identification, the term two-factor authentication is applied, e.g., a bank card (something the user has) and a PIN (something the user knows). Business networks may require users to provide a password (knowledge factor) and a random number from a security token (ownership factor). Access to a very high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still two-factor authentication.
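To make the idea concrete, here is a minimal Python sketch of checking a knowledge factor and an ownership factor together. The function names, the PBKDF2 parameters, and the way the token code is obtained are illustrative assumptions, not a prescribed implementation.

import hashlib
import hmac

def verify_knowledge_factor(entered_pin: str, stored_hash: bytes, salt: bytes) -> bool:
    # Knowledge factor: something the user knows (a PIN), stored only as a salted hash.
    candidate = hashlib.pbkdf2_hmac("sha256", entered_pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def verify_ownership_factor(entered_code: str, expected_code: str) -> bool:
    # Ownership factor: something the user has (a code displayed by a security token).
    # A real deployment would compute expected_code from a time-based one-time password.
    return hmac.compare_digest(entered_code, expected_code)

def two_factor_ok(pin, token_code, stored_hash, salt, expected_code) -> bool:
    # Both factors must pass; failing either one denies access.
    return (verify_knowledge_factor(pin, stored_hash, salt)
            and verify_ownership_factor(token_code, expected_code))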

PRODUCT AUTHENTICATION

A security hologram label on an electronics box is one form of product authentication. Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods such as electronics, music, apparel, and counterfeit medications have been sold as being legitimate. Efforts to control the supply chain and educate consumers to evaluate the packaging and labeling help ensure that authentic products are sold and used.

Information content
The authentication of information can pose special problems (especially man-in-the-middle attacks), and is often wrapped up with authenticating identity. Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging - anything from a box to email headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message

originated from or was relayed by them. These involve authentication factors like:

A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.

A shared secret, such as a pass phrase, in the content of the message.

An electronic signature; public key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key.
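As a simple illustration of the shared-secret factor described above, the following Python sketch appends a keyed hash (HMAC) to a message so that a reader who knows the same secret can check that the message came from someone who also knows it. The secret value shown is a placeholder; this is an assumed example, not a method prescribed by the text.

import hashlib
import hmac

SHARED_SECRET = b"placeholder pass phrase"   # known only to author and reader

def tag_message(message: bytes) -> str:
    # The tag can only be produced by someone who knows the shared secret.
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_message(message: bytes, tag: str) -> bool:
    expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

Note that this authenticates origin with a shared secret only; an electronic signature backed by public key infrastructure, as mentioned above, additionally lets anyone verify the message without knowing the signing key.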

The opposite problem is detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.

Factual verification

Determining the truth or factual accuracy of information in a message is generally considered a separate problem from authentication. A wide range of techniques, from detective work to fact checking in journalism to scientific experiment, might be employed.

Video authentication

It is sometimes necessary to authenticate the veracity of video recordings used as evidence in judicial proceedings. Proper chain-of-custody records and secure storage facilities can help ensure the admissibility of digital or analog recordings by the court.

Historically, fingerprints have been used as the most authoritative method of authentication, but recent court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as well, fingerprints have been shown to be easily spoofable, with British Telecom's top computer-security official noting that "few" fingerprint readers have not already been tricked by one spoof or another. Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB device. In a computer data context, cryptographic methods have been developed (see digital signature and challenge-response authentication) which are currently not spoofable if, and only if, the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. It is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in the future. If that were to occur, it might call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.

Authentication vs. authorization


The process of authorization is sometimes mistakenly thought to be the same as authentication; many widely adopted standard security protocols, obligatory regulations, and even statutes make this error. However, authentication is the process of verifying a claim made by a subject that it should be allowed to act on behalf of a given principal (person, computer, process, etc.). Authorization, on the other hand, involves verifying that an authenticated subject has permission to perform certain operations or access specific resources. Authentication, therefore, must precede authorization. For example, when you show proper identification credentials to a bank teller, you are asking to be authenticated to act on behalf of the account holder. If your authentication request is approved, you become authorized to access the accounts of that account holder, but no others. Even though authorization cannot occur without authentication, the former term is sometimes used to mean the combination of both. To distinguish "authentication" from the closely related "authorization", the short-hand notations A1 (authentication) and A2 (authorization), as well as AuthN / AuthZ (AuthR) or Au / Az, are used in some communities. Delegation has normally been considered part of the authorization domain, but recently authentication has also been used for various types of delegation tasks. Delegation in IT networks is a new but evolving field.
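The distinction can be made concrete with a small Python sketch; the names, the credential store, and the permission table below are hypothetical, and a real system would verify a salted password hash rather than compare strings.

USERS = {"alice": "correct horse battery staple"}            # hypothetical credential store
PERMISSIONS = {"alice": {"view_account", "transfer_funds"}}  # hypothetical permission table

def authenticate(username: str, password: str) -> bool:
    # Authentication: verify the claim "I am this principal".
    return USERS.get(username) == password

def authorize(username: str, operation: str) -> bool:
    # Authorization: verify that the authenticated principal may perform the operation.
    return operation in PERMISSIONS.get(username, set())

def handle_request(username: str, password: str, operation: str) -> str:
    if not authenticate(username, password):   # authentication must come first
        return "authentication failed"
    if not authorize(username, operation):     # authorization only after authentication
        return "not authorized"
    return "operation allowed"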

Access control

One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, thence granting those privileges as may be authorized to that identity. Common examples of access control involving authentication include:

A captcha, a means of asserting that a user is a human being and not a computer program
A computer program using a blind credential to authenticate to another program
Entering a country with a passport
Logging in to a computer
Using a confirmation e-mail to verify ownership of an e-mail address
Using an Internet banking system
Withdrawing cash from an ATM

In some cases, ease of access is balanced against the strictness of access checks. For example, the credit card network does not require a personal identification number for authentication of the claimed identity; and a small transaction usually does not even require a signature of the authenticated person for proof of authorization of the transaction. The security of the system is maintained by limiting distribution of credit card numbers, and by the threat of punishment for fraud.

Security experts argue that it is impossible to prove the identity of a computer user with absolute certainty. It is only possible to apply one or more tests which, if passed, have been previously declared to be sufficient to proceed. The problem is to determine which tests are sufficient, and many such are inadequate. Any given test can be spoofed one way or another, with varying degrees of difficulty.

A Brief History of Authentication


In cryptography, a password-authenticated key agreement method is an interactive method for two or more parties to establish cryptographic keys based on one or more party's knowledge of a password. Password-authenticated key agreement generally encompasses methods such as:

Balanced password-authenticated key exchange
Augmented password-authenticated key exchange
Password-authenticated key retrieval
Multi-server methods
Multi-party methods

In the most stringent password-only security models, there is no requirement for the user of the method to remember any secret or public data other than the password. Password-authenticated key exchange (PAKE) is where two or more parties, based only on their knowledge of a password, establish a cryptographic key using an exchange of messages, such that an unauthorized party (one who controls the communication channel but does not possess the password) cannot participate in the method and is constrained as much as possible from guessing the password. (The optimal case yields exactly one guess per run exchange.) Two forms of PAKE are Balanced and Augmented methods.

Balanced PAKE allows parties that use the same password to negotiate and authenticate a shared key. Examples of these are:

Encrypted Key Exchange (EKE)
PAK and PPK
SPEKE (Simple Password Exponential Key Exchange)
J-PAKE (Password Authenticated Key Exchange by Juggling)

Augmented PAKE is a variation applicable to client/server scenarios, in which an attacker must perform a successful brute-force attack in order to masquerade as the client using stolen server data. Examples of these are:

AMP
Augmented-EKE
B-SPEKE
PAK-Z
SRP

Password-authenticated key retrieval is a process in which a client obtains a static key in a password-based negotiation with a server that knows data associated with the password, such as the Ford and Kaliski methods. In the most stringent setting, one party uses only a password in conjunction with two or more (N) servers to retrieve a static key, in a way that protects the password (and key) even if any N-1 of the servers are completely compromised.

The first successful password-authenticated key agreement methods were Encrypted Key Exchange methods described by Steven M. Bellovin and Michael Merritt in 1992. Although several of the first methods were flawed, the surviving and enhanced forms of EKE effectively amplify a shared password into a shared key, which can then be used for encryption and/or message authentication. The first provably-secure PAKE protocols were given in work by M. Bellare, D. Pointcheval, and P. Rogaway (Eurocrypt 2000) and V. Boyko, P. MacKenzie, and S. Patel (Eurocrypt 2000). These protocols were proven secure in the so-called random oracle model (or even stronger variants), and the first protocols proven secure under standard assumptions were those of O. Goldreich and Y. Lindell (Crypto 2001) and J. Katz, R. Ostrovsky, and M. Yung (Eurocrypt 2001). The first password-authenticated key retrieval methods were described by Ford and Kaliski in 2000. A considerable number of refinements, alternatives, variations, and security proofs have been proposed in this growing class of password-authenticated key agreement methods.

Cookies
Love 'em or loathe them, cookies are now a requisite for use of many online banking and e-commerce sites. Cookies were never designed to store usernames and passwords or any sensitive information. Keeping this design decision in mind is helpful in understanding how to use them correctly. Cookies were originally introduced by Netscape and are now specified in RFC 2965 (which supersedes RFC 2109), with RFC 2964 and BCP 44 offering guidance on best practice. There are two attributes of cookies (secure or non-secure, and persistent or non-persistent), giving four individual cookie types:

Persistent and Secure
Persistent and Non-Secure
Non-Persistent and Secure
Non-Persistent and Non-Secure

Persistent vs. Non-Persistent


Persistent cookies are stored in a text file (cookies.txt under Netscape and multiple *.txt files for Internet Explorer) on the client and are valid for as long as the expiry date is set for (see below). Non-Persistent cookies are stored in RAM on the client and are destroyed when the browser is closed or the cookie is explicitly killed by a log-off script.

Secure vs. Non-Secure


Secure cookies can only be sent over HTTPS (SSL). Non-Secure cookies can be sent over HTTPS or regular HTTP. The title of secure is somewhat misleading. It only provides transport security. Any


data sent to the client should be considered under the total control of the end user, regardless of the transport mechanism in use.

How do Cookies work?


Cookies can be set using two main methods: HTTP headers and JavaScript. JavaScript is becoming a popular way to set and read cookies, as some proxies will filter cookies set as part of an HTTP response header. Cookies enable a server and browser to pass information among themselves between sessions. Remembering that HTTP is stateless, this may simply be between requests for documents in the same session, or even when a user requests an image embedded in a page. It is rather like a server stamping a client and saying "show this to me next time you come in". Cookies cannot be shared (read or written) across DNS domains. In correct client operation, Domain A cannot read Domain B's cookies, but there have been many vulnerabilities in popular web clients which have allowed exactly this. Under HTTP, the server responds to a request with an extra header. This header tells the client to add this information to the client's cookies file or store the information in RAM. After this, all requests to that URL from the browser will include the cookie information as an extra header in the request.
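A minimal sketch of the server side of this exchange, using Python's standard http.cookies module, is shown below; the cookie name, domain, and dates are illustrative only.

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["path"] = "/"
cookie["session"]["domain"] = "www.example.com"
cookie["session"]["secure"] = True                               # only sent over HTTPS
cookie["session"]["expires"] = "Thu, 27 Jul 2006 19:39:15 GMT"   # omit to make it non-persistent

# Emits the Set-Cookie header the server would add to its HTTP response;
# the browser then returns the cookie as a Cookie header on later requests.
print(cookie.output())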

What's in a cookie?
A typical cookie used to store a session token (for redhat.com, for example) has the following fields:

Domain: The website domain that created and that can read the variable.
Flag: A TRUE/FALSE value indicating whether all machines within a given domain can access the variable.
Path: The path attribute supplies a URL range for which the cookie is valid. If path is set to /reference, the cookie will be sent for URLs in /reference as well as sub-directories such as /reference/web protocols. A pathname of "/" indicates that the cookie will be used for all URLs at the site from which the cookie originated.
Secure: A TRUE/FALSE value indicating if an SSL connection with the domain is needed to access the variable.
Expiration: The Unix time that the variable will expire on. Unix time is defined as the number of seconds since 00:00:00 GMT on Jan 1, 1970. Omitting the expiration date signals to the browser to store the cookie only in memory; it will be erased when the browser is closed.
Name: The name of the variable (in this case Apache).

So the above cookie value of Apache equals 64.3.40.151.16018996349247480 and is set to expire on July 27, 2006, for the website domain at http://www.redhat.com. The website sets the cookie in the user's browser in plaintext in the HTTP stream like this:

Set-Cookie: Apache="64.3.40.151.16018996349247480"; path="/"; domain="www.redhat.com"; path spec; expires="2006-07-27 19:39:15Z"; version=0

Session Tokens
Cryptographic Algorithms for Session Tokens
All session tokens (independent of the state mechanisms) should be user unique, non-predictable, and resistant to reverse engineering. A trusted source of randomness should be used to create the token (like a pseudo-random number generator, Yarrow, EGADS, etc.). Additionally, for more security, session tokens should be tied in some way to a specific HTTP client instance to prevent hijacking and replay attacks. Examples of mechanisms for enforcing this restriction may be the use of page tokens which are unique for any generated page and may be tied to session tokens on the server. In general, a session token algorithm should never be based on or use as variables any user personal information (user name, password, home address, etc.)
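A minimal sketch of such a token generator, using Python's standard secrets module as the trusted source of randomness (one possible choice among the generators mentioned above), might look like this:

import secrets

def new_session_token(nbytes: int = 32) -> str:
    # 32 bytes drawn from the operating system's CSPRNG give 256 bits of
    # entropy; the token encodes no user information and is not predictable.
    return secrets.token_urlsafe(nbytes)

token = new_session_token()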

Appropriate Key Space


Even the most cryptographically strong algorithm still allows an active session token to be easily determined if the key space of the token is not sufficiently large. Attackers can essentially "grind" through most possibilities in the token's key space with automated brute-force scripts. A token's key space should be sufficiently large to prevent these types of brute-force attacks, keeping in mind that increases in computation and bandwidth capacity will make these numbers insufficient over time.
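The arithmetic behind this warning is easy to check. A rough, illustrative calculation in Python (the guessing rate is an assumption, not a measured figure):

keyspace = 2 ** 128                      # possible values of a 128-bit token, about 3.4e38
guesses_per_second = 10_000_000_000      # assume a very fast attacker: 10 billion guesses/s
seconds_per_year = 60 * 60 * 24 * 365
years_to_exhaust = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years_to_exhaust:.2e} years")   # on the order of 1e21 years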


Session Management Schemes

Session Time-out


Session tokens that do not expire on the HTTP server can allow an attacker unlimited time to guess or brute force a valid authenticated session token. An example is the "Remember Me" option on many retail websites. If a user's cookie file is captured or brute-forced, then an attacker can use these static session tokens to gain access to that user's web accounts. Additionally, session tokens can potentially be logged and cached in proxy servers which, if broken into by an attacker, may expose valid session tokens.

Regeneration of Session Tokens


To prevent Session Hijacking and Brute Force attacks from occurring to an active session, the HTTP server can seamlessly expire and regenerate tokens to give an attacker a smaller window of time for replay exploitation of each legitimate token. Token expiration can be performed based on number of requests or time.
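A small Python sketch combining the time-out and regeneration ideas from the last two subsections is given below. The lifetime and rotation values are arbitrary illustrations, and a production system would persist sessions somewhere more durable than a dictionary.

import secrets
import time

SESSION_LIFETIME = 15 * 60         # absolute lifetime in seconds (illustrative)
ROTATE_AFTER_REQUESTS = 20         # regenerate the token after this many requests

sessions = {}                      # token -> {"user": ..., "created": ..., "requests": ...}

def create_session(user):
    token = secrets.token_urlsafe(32)
    sessions[token] = {"user": user, "created": time.time(), "requests": 0}
    return token

def touch_session(token):
    record = sessions.pop(token, None)
    if record is None or time.time() - record["created"] > SESSION_LIFETIME:
        return None                            # expired or unknown: force re-login
    record["requests"] += 1
    if record["requests"] >= ROTATE_AFTER_REQUESTS:
        record["requests"] = 0                 # retire the old token, keep the session age
        token = secrets.token_urlsafe(32)      # seamlessly issue a fresh token
    sessions[token] = record
    return token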

Session Forging/Brute-Forcing Detection and/or Lockout


Many websites have prohibitions against unrestrained password guessing (e.g., they can temporarily lock the account or stop listening to the IP address). With regard to session token brute-force attacks, an attacker can probably try hundreds or thousands of session tokens embedded in a legitimate URL or cookie, for example, without a single complaint from the HTTP server. Many intrusion-detection systems do not actively look for this type of attack; penetration tests also often overlook this weakness in web e-commerce systems. Designers can use "booby trapped" session tokens that never actually get assigned but will detect if an attacker is trying to brute force a range of tokens. Resulting actions can either ban the originating IP address (everyone behind a proxy will be affected) or lock out the account (a potential DoS). Anomaly/misuse detection hooks can also be built in to detect if an authenticated user tries to manipulate their token to gain elevated privileges.
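One simple way to implement the detection side is to count invalid-token requests per source address inside a sliding window, as in this illustrative Python sketch (the thresholds are arbitrary, and, as noted above, banning by IP address will affect everyone behind a shared proxy):

import time
from collections import defaultdict

MAX_BAD_TOKENS = 20          # invalid session tokens tolerated per source address
WINDOW_SECONDS = 60          # size of the sliding window (illustrative values)

bad_attempts = defaultdict(list)   # source address -> timestamps of invalid-token requests

def note_invalid_token(source_address: str) -> bool:
    now = time.time()
    recent = [t for t in bad_attempts[source_address] if now - t < WINDOW_SECONDS]
    recent.append(now)
    bad_attempts[source_address] = recent
    # True means the threshold was crossed: ban the address or raise an alert.
    return len(recent) > MAX_BAD_TOKENS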

Session Re-Authentication
Critical user actions such as money transfer or significant purchase decisions should require the user to re-authenticate or be reissued another session token immediately prior to the significant action. Developers can also somewhat segment data and user actions to the extent where re-authentication is required upon crossing certain "boundaries", to prevent some types of cross-site scripting attacks that exploit user accounts.

Session Token Transmission


If a session token is captured in transit through network interception, a web application account is then trivially prone to a replay or hijacking attack. Typical web encryption technologies used to safeguard the state mechanism token include, but are not limited to, the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLS v1) protocols.

Session Tokens on Logout


With the popularity of Internet Kiosks and shared computing environments on the rise, session tokens take on a new risk. A browser only destroys session cookies when the browser thread is torn down. Most Internet kiosks maintain the same browser thread. It is therefore a good idea to overwrite session cookies when the user logs out of the application.
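A sketch of what "overwriting" means in practice: on logout the application sends a Set-Cookie header that replaces the session cookie with an empty value that expires in the past, so a kiosk browser no longer holds a usable token. The cookie name is an assumption.

from http.cookies import SimpleCookie

def logout_cookie_header() -> str:
    cookie = SimpleCookie()
    cookie["session"] = ""                                           # blank out the value
    cookie["session"]["path"] = "/"
    cookie["session"]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"   # a date in the past
    return cookie.output()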

Page Tokens
Page-specific tokens or "nonces" may be used in conjunction with session-specific tokens to provide a measure of authenticity when dealing with client requests. Used in conjunction with transport-layer security mechanisms, page tokens can aid in ensuring that the client on the other end of the session is indeed the same client which requested the last page in a given session. Page tokens are often stored in cookies or query strings and should be completely random. It is possible to avoid sending session token information to the client entirely through the use of page tokens, by creating a mapping between them on the server side; this technique should further increase the difficulty of brute forcing session authentication tokens.
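A rough sketch of the server-side mapping described above, with hypothetical names, might look like this:

import secrets

page_tokens = {}        # server-side map: page token (nonce) -> session token

def issue_page_token(session_token: str) -> str:
    # A fresh random nonce for each generated page, tied to the session on the server.
    nonce = secrets.token_urlsafe(16)
    page_tokens[nonce] = session_token
    return nonce

def check_page_token(nonce: str, session_token: str) -> bool:
    # Valid only once, and only for the session that received the page.
    return page_tokens.pop(nonce, None) == session_token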

SSL and TLS


The Secure Socket Layer protocol, or SSL, was designed by Netscape and included in the Netscape Communicator browser. SSL is probably the most widely used security protocol in the world and is built into all commercial web browsers and web servers. The current version is version 3. As the original version of SSL designed by Netscape is technically a proprietary protocol, the Internet Engineering Task Force (IETF) took over responsibility for upgrading SSL and has renamed it TLS, or Transport Layer Security. The first version of TLS is effectively SSL version 3.1 and has only minor changes from the original specification. SSL can provide three security services for the transport of data to and from web services. Those are:

Authentication
Confidentiality
Integrity

Contrary to the unfounded claims of many marketing campaigns, SSL alone does not secure a web application! The phrase "this site is 100% secure, we use SSL" can be misleading! SSL only provides the services listed above. SSL/TLS provides no additional security once data has left the IP stack on either end of a connection. All flaws in execution environments which use SSL for session transport are in no way addressed or mitigated through the use of SSL. SSL uses both public key and symmetric cryptography. You will often hear SSL certificates mentioned. SSL certificates are X.509 certificates. A certificate is a public key that is signed by another trusted user (with some additional information to validate that trust). For the purpose of simplicity we are going to refer to both SSL and TLS as SSL in this section. A more complete treatment of these protocols can be found in Stephen Thomas's "SSL and TLS Essentials". When the server also authenticates the client, the additional handshake steps are:

1. The server sends a certificate request after sending its own certificate.
2. The client provides its certificate.
3. The client sends a certificate verify message in which it encrypts a known piece of plaintext using its private key. The server uses the client certificate to decrypt, therefore ascertaining that the client has the private key.
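On the server side, requiring a client certificate can be expressed with Python's standard ssl module roughly as follows; the certificate file names are placeholders and the surrounding socket handling is omitted.

import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
context.verify_mode = ssl.CERT_REQUIRED        # reject clients without a valid certificate
context.load_verify_locations(cafile="trusted-client-ca.pem")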

Access control mechanisms are a necessary and crucial design element of any application's security. In general, a web application should protect front-end and back-end data and system resources by implementing access control restrictions on what users can do, which resources they have access to, and what functions they are allowed to perform on the data. Ideally, an access control scheme should protect against the unauthorized viewing, modification, or copying of data. Additionally, access control mechanisms can also help limit malicious code execution, or unauthorized actions through an attacker exploiting infrastructure dependencies (DNS server, ACE server, etc.).

Smart card:
A smart card, chip card, or integrated circuit card (ICC) is any pocket-sized card with embedded integrated circuits. There are two broad categories of ICCs. Memory cards contain only non-volatile memory storage components, and perhaps dedicated security logic. Microprocessor cards contain volatile memory and microprocessor components. The card is made of plastic, generally polyvinyl chloride, but sometimes acrylonitrile butadiene styrene or polycarbonate. Smart cards may also provide strong security authentication for single sign-on (SSO) within large organizations.

Overview
A smart card may have the following generic characteristics:

Dimensions similar to those of a credit card. ID-1 of the ISO/IEC 7810 standard defines cards as nominally 85.60 by 53.98 millimetres (3.370 in × 2.125 in). Another popular size is ID-000, which is nominally 25 by 15 millimetres (0.984 in × 0.591 in), commonly used in SIM cards. Both are 0.76 millimetres (0.030 in) thick.

Contains a tamper-resistant security system (for example a secure cryptoprocessor and a secure file system) and provides security services (e.g., protects in-memory information).


Managed by an administration system which securely interchanges information and configuration settings with the card, controlling card blacklisting and application-data updates.

Communicates with external services via card-reading devices, such as ticket readers, ATMs, etc.

Benefits
Smart cards can provide identification, authentication, data storage and application processing. The benefits of smart cards are directly related to the volume of information and applications that are programmed for use on a card. A single contact/contactless smart card can be programmed with multiple banking credentials, medical entitlement, driver's license/public transport entitlement, loyalty programs and club memberships, to name just a few. Multi-factor and proximity authentication can be, and has been, embedded into smart cards to increase the security of all services on the card. For example, a smart card can be programmed to only allow a contactless transaction if it is also within range of another device, like a uniquely paired mobile phone. This can significantly increase the security of the smart card. Governments gain a significant enhancement to the provision of publicly funded services through the increased security offered by smart cards. These savings are passed on to society through a reduction in the necessary funding or through enhanced public services. Individuals gain increased security and convenience when using smart cards designed for interoperability between services. For example, consumers only need to replace one card if their wallet is lost or stolen. Additionally, the data storage available on a card could contain medical information that is critical in an emergency, should the card holder allow access to it.


RFID
Radio-frequency identification (RFID) is a technology that uses communication via radio waves to exchange data between a reader and an electronic tag attached to an object, for the purpose of identification and tracking. It is possible that in the near future RFID technology will continue to proliferate in our daily lives the way that bar code technology did over the forty years leading up to the turn of the 21st century, bringing unobtrusive but remarkable changes. RFID makes it possible to give each product in a grocery store its own unique identifying number, and to provide assets, people, work in process, medical devices, etc. all with individual unique identifiers, like the license plate on a car but for every item in the world. This is a vast improvement over paper-and-pencil tracking or bar code tracking that has been used since the 1970s. With bar codes, it is only possible to identify the brand and type of package in a grocery store, for instance. Furthermore, passive RFID tags (those without a battery) can be read if passed within close enough proximity to an RFID reader. It is not necessary to "show" them to it, as with a bar code. In other words, RFID does not require line of sight to "see" a tag; the tag can be read inside a case, carton, box or other container, and unlike bar codes, RFID tags can be read hundreds at a time, whereas bar codes can only be read one at a time. Some RFID tags can be read from several meters away and beyond the line of sight of the reader. The application of bulk reading enables an almost-parallel reading of tags. Radio-frequency identification involves hardware known as interrogators (also known as readers) and tags (also known as labels), as well as RFID software or RFID middleware.


Most RFID tags contain at least two parts: one is an integrated circuit for storing and processing information, modulating and demodulating a radio-frequency (RF) signal, and other specialized functions; the other is an antenna for receiving and transmitting the signal. RFID tags can be either passive (using no battery), active (with an on-board battery that always broadcasts or beacons its signal) or battery-assisted passive (BAP), which has a small battery on board that is activated when in the presence of an RFID reader. Passive tags in 2011 start at $0.05 each, and special tags meant to be mounted on metal or to withstand gamma sterilization go up to $5. Active tags for tracking containers, medical assets, or monitoring environmental conditions in data centers all start at $50 and can go up over $100 each. BAP tags are in the $3-10 range and also have sensor capability, such as temperature and humidity. The term RFID refers to the technology. The tags should properly be called "RFID tags", not "RFIDs".

Fixed RFID and Mobile RFID:


Depending on mobility, RFID readers are classified into two different types: fixed RFID and mobile RFID. If the reader reads tags in a stationary position, it is called fixed RFID. These fixed readers set up specific interrogation zones and create a "bubble" of RF energy that can be tightly controlled if the physics is well engineered. This allows a very definitive reading area for when tags go in and out of the interrogation zone. On the other hand, if the reader is mobile when it reads tags, it is called mobile RFID. Mobile readers include handhelds, carts and vehicle-mounted RFID readers from manufacturers such as Motorola, Intermec, Impinj, Sirit, etc. There are three types of RFID tags: passive RFID tags, which have no power source and require an external electromagnetic field to initiate a signal transmission; active RFID tags, which contain a battery and can transmit signals once an external source ('Interrogator') has been successfully identified; and battery-assisted passive (BAP) RFID tags, which require an external source to wake up but have significantly higher forward link capability, providing greater range. There are a variety of groups defining standards and regulating the use of RFID, including the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), ASTM International, the DASH7 Alliance and EPCglobal. There are also several specific industries that have set guidelines, including the Financial Services Technology Consortium (FSTC), which has set a standard for tracking IT assets with RFID; the Computing Technology Industry Association (CompTIA), which has set a standard for certifying RFID engineers; and the International Air Transport Association (IATA), which set tagging guidelines for luggage in airports. RFID has many applications; for example, it is used in enterprise supply chain management to improve the efficiency of inventory tracking and management. The healthcare industry has used RFID to create tremendous productivity increases by eliminating "parasitic" roles that don't add value to an organization, such as counting, looking for things, or auditing items. Many financial institutions use RFID to track key assets and automate Sarbanes-Oxley (SOX) compliance. Also, with recent advances in social media, RFID is being used to tie the physical world with the virtual world. RFID in social media first came to light in 2010 with Facebook's annual conference (f8). RFIDs are easy to conceal or incorporate in other items. For example, in 2009 researchers at Bristol University successfully glued RFID micro-transponders to live ants in order to study their behavior. This trend towards increasingly miniaturized RFIDs is likely to continue as technology advances. Hitachi holds the record for the smallest RFID chip, at 0.05 mm x 0.05 mm. This is 1/64th the size of the previous record holder, the mu-chip. Manufacture is enabled by using the silicon-on-insulator (SOI) process. These dust-sized chips can store 38-digit numbers using 128-bit Read Only Memory (ROM). A major challenge is the attachment of the antennas, thus limiting read range to only millimeters.


Potential alternatives to the radio frequencies used (0.125-0.1342, 0.140-0.1485, 13.56, and 840-960 MHz) are seen in optical RFID (or OPID) at 333 THz (900 nm), 380 THz (788 nm), and 750 THz (400 nm). The awkward antennas of RFID can be replaced with photovoltaic components and IR-LEDs on the ICs.

PHYSICAL BIOMETRICS:
Fingerprint technology
Fingerprint recognition or fingerprint authentication refers to the automated method of verifying a match between two human fingerprints. Fingerprints are one of many forms of biometrics used to identify individuals and verify their identity. This section touches on two major classes of algorithms (minutiae and pattern) and four sensor designs (optical, ultrasonic, passive capacitance, and active capacitance).

Patterns
The three basic patterns of fingerprint ridges are the arch, loop, and whorl. An arch is a pattern where the ridges enter from one side of the finger, rise in the center forming an arc, and then exit the other side of the finger. The loop is a pattern where the ridges enter from one side of a finger, form a curve, and tend to exit from the same side they enter. In the whorl pattern, ridges form circularly around a central point on the finger. Scientists have found that family members often share the same general fingerprint patterns, leading to the belief that these patterns are inherited.

Minutia features
The major minutiae features of fingerprint ridges are: ridge ending, bifurcation, and short ridge (or dot). The ridge ending is the point at which a ridge terminates. Bifurcations are points at which a single ridge splits into two ridges. Short ridges (or dots) are ridges which are significantly shorter than the average ridge length on the
fingerprint. Minutiae and patterns are very important in the analysis of fingerprints since no two fingers have been shown to be identical.


Fingerprint scanner

A fingerprint sensor is an electronic device used to capture a digital image of the fingerprint pattern. The captured image is called a live scan. This live scan is digitally processed to create a biometric template (a collection of extracted features) which is stored and used for matching. This is an overview of some of the more commonly used fingerprint sensor technologies.

Optical
Optical fingerprint imaging involves capturing a digital image of the print using visible light. This type of sensor is, in essence, a specialized digital camera. The top layer of the sensor, where the finger is placed, is known as the touch surface. Beneath this layer is a light-emitting phosphor layer which illuminates the surface of the finger. The light reflected from the finger passes through the phosphor layer to an array of solid state pixels (a charge-coupled device) which captures a visual image of the fingerprint. A scratched or dirty touch surface can cause a bad image of the fingerprint. A disadvantage of this type of sensor is the fact that the imaging capabilities are affected by the quality of skin on the finger. For instance, a dirty or marked finger is difficult to image properly. Also, it is possible for an individual to erode the outer layer of skin on the fingertips to the point where the fingerprint is no longer visible. It can also be easily fooled by an image of a fingerprint if not coupled with a "live finger" detector. However, unlike capacitive sensors, this sensor technology is not susceptible to electrostatic discharge damage.

Ultrasonic
Ultrasonic sensors make use of the principles of medical ultrasonography in order to create visual images of the fingerprint. Unlike optical imaging, ultrasonic sensors use very high frequency sound waves to penetrate the epidermal layer of skin. The sound waves are generated using piezoelectric transducers and reflected energy is also measured using piezoelectric materials. Since the dermal skin layer exhibits the same characteristic pattern of the fingerprint, the reflected wave measurements can be used to form an image of the fingerprint. This eliminates the need for clean, undamaged epidermal skin and a clean sensing surface.[5]

Capacitance
Capacitance sensors utilize the principles associated with capacitance in order to form fingerprint images. In this method of imaging, the sensor array pixels each act as one plate of a parallel-plate capacitor, the dermal layer (which is electrically conductive) acts as the other plate, and the nonconductive epidermal layer acts as a dielectric.

Passive capacitance
A passive capacitance sensor uses the principle outlined above to form an image of the fingerprint patterns on the dermal layer of skin. Each sensor pixel is used to measure the capacitance at that point of the array. The capacitance varies between the ridges and valleys of the fingerprint due to the fact that the volume between the dermal layer and sensing element in valleys contains an air gap. The dielectric constant of the epidermis and the area of the sensing element are known values. The measured capacitance values are then used to distinguish between fingerprint ridges and valleys. Matching algorithms are used to compare previously stored templates of fingerprints against candidate fingerprints for authentication purposes. In order to do this either the original image must be directly compared with the candidate image or certain features must be compared.

Pattern-based (or image-based) algorithms


Pattern based algorithms compare the basic fingerprint patterns (arch, whorl, and loop) between a previously stored template and a candidate fingerprint. This requires that the images be aligned in the same orientation. To do this, the algorithm finds a central point in the fingerprint image and centers on that. In a pattern-based algorithm, the template contains the type, size, and orientation of patterns within the aligned fingerprint image. The candidate fingerprint image is graphically compared with the template to determine the degree to which they match.
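As a toy illustration of "graphically comparing" two aligned images, the Python sketch below scores their similarity with a normalized cross-correlation; this is an assumed, simplified stand-in for the proprietary matchers used in real products.

import numpy as np

def pattern_match_score(template: np.ndarray, candidate: np.ndarray) -> float:
    # Both arrays are aligned, same-size grayscale fingerprint images.
    t = (template - template.mean()) / (template.std() + 1e-9)
    c = (candidate - candidate.mean()) / (candidate.std() + 1e-9)
    return float((t * c).mean())   # close to 1.0 indicates a strong match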


Facial recognition:
A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems.

Traditional
Some facial recognition algorithms identify faces by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face detection. A probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation. Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, or photometric, which is a statistical approach that distills an image into values and compares the values with templates to eliminate variances. Popular recognition algorithms include Principal Component Analysis with eigenfaces, Linear Discriminant Analysis, Elastic Bunch Graph Matching using the Fisherface algorithm, the Hidden Markov model, and the neuronal motivated dynamic link matching.
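To give a flavor of the Principal Component Analysis (eigenface) approach mentioned above, here is a compressed numpy sketch; image loading, alignment, and the choice of the number of components are assumptions left out for brevity.

import numpy as np

def eigenfaces(faces: np.ndarray, k: int):
    # faces: shape (n_images, height*width), each row a flattened, aligned grayscale face
    mean = faces.mean(axis=0)
    centered = faces - mean
    # The top right-singular vectors of the centered data are the "eigenfaces".
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face: np.ndarray, mean: np.ndarray, components: np.ndarray) -> np.ndarray:
    # A face is represented by its coefficients in eigenface space;
    # recognition compares these coefficient vectors (e.g., by Euclidean distance).
    return components @ (face - mean)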

3-D

A newly emerging trend, claimed to achieve previously unseen accuracies, is three-dimensional face recognition. This technique uses 3-D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin. One advantage of 3-D facial recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view.

Skin texture analysis


Another emerging trend uses the visual details of the skin, as captured in standard digital or scanned images. This technique, called skin texture analysis, turns the unique lines, patterns, and spots apparent in a person's skin into a mathematical space.

Additional uses
In addition to being used for security systems, authorities have found a number of other applications for facial recognition systems. While earlier post-9/11 deployments were well-publicized trials, more recent deployments are rarely written about due to their covert nature. At Super Bowl XXXV in January 2001, police in Tampa Bay, Florida, used Identix's facial recognition software, FaceIt, to search for potential criminals and terrorists in attendance at the event (it found 19 people with pending arrest warrants). In the 2000 presidential election, the Mexican government employed facial recognition software to prevent voter fraud. Some individuals had been registering to vote under several different names, in an attempt to place multiple votes. By comparing new facial images to those already in the voter database, authorities were able to reduce duplicate registrations. Similar technologies are being used in the United States to prevent people from obtaining fake identification cards and driver's licenses.


There are also a number of potential uses for facial recognition that are currently being developed. For example, the technology could be used as a security measure at ATMs; instead of using a bank card or personal identification number, the ATM would capture an image of your face and compare it to your photo in the bank database to confirm your identity. This same concept could also be applied to computers; by using a webcam to capture a digital image of yourself, your face could replace your password as a means to log in. As part of the investigation of the disappearance of Madeleine McCann, the British police are calling on visitors to the Ocean Club Resort, Praia da Luz in Portugal, or the surrounding areas in the two weeks leading up to the child's disappearance on Thursday 3 May 2007, to provide copies of any photographs of people taken during their stay, in an attempt to identify the abductor using a biometric facial recognition application. Also, in addition to biometric usages, modern digital cameras often incorporate a facial detection system that allows the camera to focus and measure exposure on the face of the subject, thus guaranteeing a focused portrait of the person being photographed. Some cameras, in addition, incorporate a smile shutter, or automatically take a second picture if someone closed their eyes during exposure.

Weaknesses
Face recognition is not perfect and struggles to perform under certain conditions. Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute, describes one obstacle related to the viewing angle of the face: "Face recognition has been getting pretty good at full frontal faces and 20 degrees off, but as soon as you go towards profile, there've been problems." Other conditions where face recognition does not work well include poor lighting, sunglasses, long hair or other objects partially covering the subject's face, and low-resolution images. Another serious disadvantage is that many systems are less effective if facial expressions vary. Even a big smile can render the system less effective. For instance, Canada now allows only neutral facial expressions in passport photos.

Privacy concerns


Many citizens are concerned that their privacy will be invaded. Some fear that it could lead to a total surveillance society, with the government and other authorities having the ability to know where you are, and what you are doing, at all times. This is not a concept to be underestimated, as history has shown that states have typically abused such access before.

Effectiveness

In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face scans, and iris images were used in the tests. The results indicated that the new algorithms are 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins. Low-resolution images of faces can be enhanced using face hallucination. Further improvements in high-resolution, megapixel cameras in the last few years have helped to resolve the issue of insufficient resolution.


IRIS RECOGNITION
Iris recognition is a method of biometric authentication that uses pattern-recognition techniques based on high-resolution images of the irides of an individual's eyes. Not to be confused with another, less prevalent, ocular-based technology, retina scanning, iris recognition uses camera technology, with subtle infrared illumination reducing specular reflection from the convex cornea, to create images of the detail-rich, intricate structures of the iris. Converted into digital templates, these images provide mathematical representations of the iris that yield unambiguous positive identification of an individual. Iris recognition efficacy is rarely impeded by glasses or contact lenses. Iris technology has the smallest outlier (those who cannot use/enroll) group of all biometric technologies. Because of its speed of comparison, iris recognition is the only biometric technology well-suited for one-to-many identification. A key advantage of iris recognition is its stability, or template longevity: barring trauma, a single enrollment can last a lifetime. Breakthrough work to create the iris-recognition algorithms required for image acquisition and one-to-many matching was pioneered by John G. Daugman, Ph.D., OBE (University of Cambridge Computer Laboratory). These were utilized to effectively debut commercialization of the technology in conjunction with an early version of the IrisAccess system designed and manufactured by Korea's LG Electronics. Daugman's algorithms are the basis of almost all currently (as of 2006) commercially deployed iris-recognition systems. (In tests where the matching thresholds are, for better comparability, changed from their default settings to allow a false-accept rate in the region of 10^-3 to 10^-4, the IrisCode false-reject rates are comparable to those of the most accurate single-finger fingerprint matchers.)


Operating principle

An iris-recognition algorithm first has to identify the approximately concentric circular outer boundaries of the iris and the pupil in a photo of an eye. The set of pixels covering only the iris is then transformed into a bit pattern that preserves the information that is essential for a statistically meaningful comparison between two iris images. The mathematical methods used resemble those of modern lossy compression algorithms for photographic images. In the case of Daugman's algorithms, a Gabor wavelet transform is used in order to extract the spatial frequency range that contains a good signal-to-noise ratio considering the focus quality of available cameras. The result is a set of complex numbers that carry local amplitude and phase information for the iris image. In Daugman's algorithms, all amplitude information is discarded, and the resulting 2048 bits that represent an iris consist only of the complex sign bits of the Gabor-domain representation of the iris image. Discarding the amplitude information ensures that the template remains largely unaffected by changes in illumination and virtually negligibly affected by iris color, which contributes significantly to the long-term stability of the biometric template. To authenticate via identification (one-to-many template matching) or verification (one-to-one template matching), a template created by imaging the iris is compared to a stored template in a database. If the Hamming distance is below the decision threshold, a positive identification has effectively been made. A practical problem of iris recognition is that the iris is usually partially covered by eyelids and eyelashes. In order to reduce the false-reject risk in such cases, additional algorithms are needed to identify the locations of eyelids and eyelashes and to exclude the bits in the resulting code from the comparison operation.
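The final comparison step can be written down compactly. The Python sketch below computes the fractional Hamming distance between two bit templates while masking out bits hidden by eyelids or eyelashes; the array layout and the decision threshold are assumptions, not Daugman's exact formulation.

import numpy as np

def iris_distance(code_a, code_b, mask_a, mask_b) -> float:
    # code_*: boolean arrays of template bits; mask_*: True where the bit is usable.
    usable = mask_a & mask_b                  # ignore bits covered by eyelids/eyelashes
    disagreeing = (code_a ^ code_b) & usable
    return disagreeing.sum() / usable.sum()   # fraction of usable bits that differ

def is_match(distance: float, threshold: float = 0.33) -> bool:
    # Below the decision threshold counts as a positive identification;
    # 0.33 is an illustrative value, not a prescribed one.
    return distance < threshold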

Advantages
The iris of the eye has been described as the ideal part of the human body for biometric identification for several reasons: It is an internal organ that is well protected against damage and wear by a highly transparent and sensitive membrane (the cornea). This distinguishes it from fingerprints, which can be difficult to recognize after years of certain types of manual labor.

The iris is mostly flat, and its geometric configuration is only controlled by two complementary muscles (the sphincter pupillae and dilator pupillae) that control the diameter of the pupil. This makes the iris shape far more predictable than, for instance, that of the face.

The iris has a fine texture thatlike fingerprintsis determined randomly during embryonic gestation. Even genetically identical individuals have completely independent iris textures, whereas DNA (genetic "fingerprinting") is not unique for the about 0.2% of the human population who have a genetically identical twin.

An iris scan is similar to taking a photograph and can be performed from about 10 cm to a few meters away. There is no need for the person to be identified to touch any equipment that has recently been touched by a stranger, thereby eliminating an objection that has been raised in some cultures against fingerprint scanners, where a finger has to touch a surface, or


retinal scanning, where the eye must be brought very close to a lens (like looking into a microscope lens). Some argue that a focused digital photograph with an iris diameter of about 200 pixels contains much more long-term stable information than fingerprints.

The originally commercially deployed iris-recognition algorithm, John Daugman's IrisCode, has an unprecedented false match rate (better than 10^-11).

While there are some medical and surgical procedures that can affect the colour and overall shape of the iris, the fine texture remains remarkably stable over many decades. Some iris identifications have succeeded over a period of about 30 years.

Disadvantages
Iris scanning is a relatively new technology and is incompatible with the very substantial investment that the law enforcement and immigration authorities of some countries have already made in fingerprint recognition.

Iris recognition is very difficult to perform at a distance of more than a few meters, or if the person to be identified is not cooperating by holding the head still and looking into the camera. However, several academic institutions and biometric vendors are developing products that claim to be able to identify subjects at distances of up to 10 meters ("standoff iris" or "iris at a distance").

As with other photographic biometric technologies, iris recognition is susceptible to poor image quality, with associated failure-to-enroll rates.

As with other identification infrastructure (national residents databases, ID cards, etc.), civil rights activists have voiced concerns that iris-recognition technology might help governments to track individuals beyond their will.


Information security
Information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording, or destruction. The terms information security, computer security and information assurance are frequently, and incorrectly, used interchangeably. These fields are often interrelated and share the common goals of protecting the confidentiality, integrity and availability of information; however, there are some subtle differences between them. These differences lie primarily in the approach to the subject, the methodologies used, and the areas of concentration. Information security is concerned with the confidentiality, integrity and availability of data regardless of the form the data may take: electronic, print, or other forms. Computer security can focus on ensuring the availability and correct operation of a computer system without concern for the information stored or processed by the computer.

Governments, military, corporations, financial institutions, hospitals, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status. Most of this information is now collected, processed and stored on electronic computers and transmitted across networks to other computers. Should confidential information about a business' customers, finances, or new product line fall into the hands of a competitor, such a breach of security could lead to lost business, lawsuits, or even bankruptcy of the business. Protecting confidential information is a business requirement, and in many cases also an ethical and legal requirement. For the individual, information security has a significant effect on privacy, which is viewed very differently in different cultures.

The field of information security has grown and evolved significantly in recent years. There are many ways of gaining entry into the field as a career, and it offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, and digital forensic science. This section presents a general overview of information security and its core concepts.

Business continuity is the mechanism by which an organization continues to operate its critical business units, during planned or unplanned disruptions that affect normal business operations, by invoking planned and managed procedures. Contrary to what many people think, business continuity is not necessarily an IT system or process; it is about the business as a whole. Disasters and disruptions to business are a reality today. Whether a disaster is natural or man-made (TIME magazine maintains a list of the top ten), it affects normal life and therefore business. So why is planning so important? The reality is that all businesses recover in some fashion, whether they planned for recovery or not, simply because business is about earning money for survival. Planning means being better prepared to face a disruption, knowing full well that even the best plans may fail. Planning helps to reduce the cost of recovery and operational overheads and, most importantly, allows a business to sail through smaller disruptions effortlessly. To create effective plans, businesses need to focus on the following key questions. Most of these are common knowledge, and anyone can draft a BCP.

Should a disaster strike, what are the first few things I should do? Should I call people to find out whether they are OK, or call the bank to check that my money is safe? This is Emergency Response. Emergency Response services take the first hit when the disaster strikes, and if the disaster is serious enough, the Emergency Response teams need to quickly put a Crisis Management team in place.

What parts of my business should I recover first? The one that brings in the most money, the one where I spend the most, or the one that will ensure sustained future growth? The identified sections are the critical business units. There is no magic bullet here, and no single answer satisfies all; businesses need to find answers that meet their own requirements.

How soon should I target to recover my critical business units? In BCP jargon this is called the Recovery Time Objective, or RTO. This objective determines what the business will need to spend to recover from a disruption. For example, it is cheaper to recover a business in one day than in one hour.

What do I need to recover the business? IT, machinery, records, food, water, people: there are many aspects to dwell upon, and the cost factor becomes clearer now. Business leaders need to drive business continuity. But wait: my IT manager spent $200,000 last month and created a DRP (Disaster Recovery Plan); whatever happened to that? A DRP is about continuing an IT system and is only one section of a comprehensive Business Continuity Plan.

And where do I recover my business from? Will the business center give me space to work, or will it be flooded by many people queuing up for the same reasons I am? Once I do recover from the disaster and work at reduced production capacity, because my main operational sites are unavailable, how long can this go on? How long can I do without my original sites, systems, and people? This defines the amount of business resilience a business may have.

Now that I know how to recover my business, how do I make sure my plan works? Most BCP practitioners recommend testing the plan at least once a year, reviewing it for adequacy, and rewriting or updating it either annually or whenever the business changes.


Conclusion

Information security is the ongoing process of exercising due care and due diligence to protect information and information systems from unauthorized access, use, disclosure, destruction, modification, disruption, or distribution. This never-ending process involves ongoing training, assessment, protection, monitoring and detection, incident response and repair, documentation, and review. This makes information security an indispensable part of business operations across all domains.

Change management is a formal process for directing and controlling alterations to the information processing environment. This includes alterations to desktop computers, the network, servers, and software. The objectives of change management are to reduce the risks posed by changes to the information processing environment and to improve the stability and reliability of that environment as changes are made. It is not the objective of change management to prevent or hinder necessary changes from being implemented.

Any change to the information processing environment introduces an element of risk. Even apparently simple changes can have unexpected effects. One of management's many responsibilities is the management of risk, and change management is a tool for managing the risks introduced by changes to the information processing environment. Part of the change management process ensures that changes are not implemented at inopportune times, when they may disrupt critical business processes or interfere with other changes being implemented.

