Mobile COMPUTING Notes
UNIT-1
INTRODUCTION:
The frequencies used vary according to the cellular network technology implemented.
For GSM, the 890-915 MHz range is used for transmission and 935-960 MHz for
reception. The DCS technology uses frequencies in the 1800 MHz range, while PCS
uses the 1900 MHz range.
Each cell has a number of channels associated with it. These are assigned to subscribers
on demand. When a Mobile Station (MS) becomes 'active' it registers with the nearest
BS. The corresponding MSC stores the information about that MS and its position. This
information is used to direct incoming calls to the MS.
If, during a call, the MS moves to an adjacent cell, then a change of frequency must
occur, since adjacent cells never use the same channels. This procedure is
called handover and is key to mobile communications. As the MS is approaching
the edge of a cell, the BS monitors the decrease in signal power. The strength of the
signal is compared with adjacent cells and the call is handed over to the cell with the
strongest signal.
During the switch, the line is lost for about 400 ms. When the MS moves from one
area to another, it registers itself with the new MSC. Its location information is updated,
thus allowing MSs to be used outside their 'home' areas.
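The handover decision described above can be sketched as a simple comparison of received signal strengths. The dBm values and the threshold below are illustrative assumptions, not GSM-specified figures:

```python
def choose_handover_cell(serving_cell, signal_dbm, threshold_dbm=-95):
    """Return the cell to hand over to, or None to stay on the serving cell.

    signal_dbm maps cell id -> measured signal strength in dBm.
    A handover is triggered only when the serving cell's signal drops
    below the threshold and a neighbouring cell is stronger.
    """
    serving = signal_dbm[serving_cell]
    if serving >= threshold_dbm:
        return None  # signal still acceptable: no handover
    best = max(signal_dbm, key=signal_dbm.get)
    return best if best != serving_cell else None

# Example: the MS nears the cell edge and cell "B" is now strongest.
target = choose_handover_cell("A", {"A": -101, "B": -88, "C": -97})  # -> "B"
```

In a real network the BS averages measurements over time before deciding, but the core comparison is the one shown here.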
DATA COMMUNICATIONS
Data Communications is the exchange of data using existing communication networks.
The term data covers a wide range of applications including File Transfer (FT),
interconnection between Wide-Area-Networks (WAN), facsimile (fax), electronic mail,
access to the internet and the World Wide Web (WWW).
Data communications have been achieved using a variety of networks such as the
PSTN, leased lines and, more recently, ISDN (Integrated Services Digital Network)
and ATM (Asynchronous Transfer Mode)/Frame Relay. These networks are partly or
totally analogue or digital, using technologies such as circuit switching and packet
switching.
Circuit switching implies that data from one user (the sender) to another (the receiver)
has to follow a prespecified path. If a link on that path is busy, the message cannot be
redirected, a property which causes many delays.
Packet switching is an attempt to make better utilization of the existing network by
splitting the message to be sent into packets. Each packet contains information about
the sender, the receiver, the position of the packet in the message as well as part of the
actual message. There are many protocols defining the way packets can be sent from
the sender to the receiver. The most widely used are the virtual circuit-switching
system, which implies that packets have to be sent through the same path, and the
datagram system, which allows packets to be sent along various paths depending on
network availability. Packet switching requires more equipment at the receiver, where
the message has to be reconstructed.
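The datagram idea above can be sketched in a few lines: a message is split into packets carrying sender, receiver, and position, and the receiver reassembles them even when they arrive out of order. This is a toy illustration, not any real protocol's packet format:

```python
def split_message(sender, receiver, message, size):
    """Split a message into packets, each tagged with its position (seq)."""
    return [
        {"src": sender, "dst": receiver, "seq": i, "data": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

def reassemble(packets):
    """Rebuild the message from packets that may arrive in any order."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = split_message("A", "B", "hello world", 4)
packets.reverse()  # simulate out-of-order delivery over different paths
assert reassemble(packets) == "hello world"
```

The position field (`seq`) is exactly what lets the datagram system tolerate packets taking different paths.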
The introduction of mobility in data communications required a move from the Public
Switched Data Network (PSDN) to other networks like the ones used by mobile
phones. PCSI has come up with an idea called CDPD (Cellular Digital Packet Data)
technology which uses the existing mobile network (frequencies used for mobile
telephony).
Mobility in data communications differs significantly from voice communications.
Mobile phones allow the user to move around and talk at the same time; the loss of
the connection for 400 ms during handover is undetectable by the user. When it
comes to data, 400 ms is not only detectable but causes huge distortion to the
message. Therefore data can be transmitted from a mobile station only under the
assumption that it remains stationary or within the same cell.
• In courts
Defense counsels can take mobile computers into court. When the opposing counsel
references a case with which they are not familiar, they can use the computer to get direct,
real-time access to on-line legal database services, where they can gather information
on the case and related precedents. Therefore mobile computers allow immediate
access to a wealth of information, making people better informed and prepared.
• In companies
Managers can use mobile computers in, say, critical presentations to major
customers. They can access the latest market share information. At a short recess, they
can revise the presentation to take advantage of this information. They can
communicate with the office about possible new offers and call meetings to discuss
responses to the new proposals. Therefore, mobile computers can leverage competitive
advantages.
• Stock Information Collation/Control
In environments where access to stock is very limited, e.g. factory warehouses, the use
of small portable electronic databases accessed via a mobile computer would be ideal.
Data collated could be written directly to a central database, via a CDPD network,
which holds all stock information, so there is no need to transfer the data to the central
computer at a later date. This ensures that from the time that a stock
count is completed, there is no inconsistency between the data input on the portable
computers and the central database.
• Credit Card Verification
At Point of Sale (POS) terminals in shops and supermarkets, when customers use credit
cards for transactions, the intercommunication required between the bank central
computer and the POS terminal, in order to effect verification of the card usage, can
take place quickly and securely over cellular channels using a mobile computer unit.
This can speed up the transaction process and relieve congestion at the POS terminals.
• Taxi/Truck Dispatch
Using the idea of a centrally controlled dispatcher with several mobile units (taxis),
mobile computing allows the taxis to be given full details of the dispatched job as well
as allowing the taxis to communicate information about their whereabouts back to the
central dispatch office. This system is also extremely useful for secure deliveries,
allowing a central computer to track and receive status
information from all of its mobile secure delivery vans. Again, the security and
reliability properties of the CDPD system shine through.
• Electronic Mail/Paging
Usage of a mobile unit to send and read emails is a very useful asset for any business
individual, as it allows him/her to keep in touch with any colleagues as well as any
urgent developments that may affect their work. Access to the Internet, using mobile
computing technology, allows the individual to have vast arrays of knowledge at his/her
fingertips.
Paging is also achievable here, giving even more intercommunication capability
between individuals, using a single mobile computer device.
THE FUTURE
With the rapid technological advancements in Artificial Intelligence, Integrated
Circuitry and increases in Computer Processor speeds, the future of mobile computing
looks increasingly exciting.
With the emphasis increasingly on compact, small mobile computers, it may also be
possible to have all the practicality of a mobile computer in the size of a hand held
organizer or even smaller.
Use of Artificial Intelligence may allow mobile units to be the ultimate in personal
secretaries, which can receive emails and paging messages, understand what they are
about, and change the individual’s personal schedule according to the message. This
can then be checked by the individual to plan his/her day.
The working lifestyle will change, with the majority of people working from home,
rather than commuting. This may be beneficial to the environment as less transportation
will be utilised. This mobility aspect may be carried further in that, even in social
spheres, people will interact via mobile stations, eliminating the need to venture outside
of the house.
This scary concept of a world full of inanimate zombies sitting, locked to their mobile
stations, accessing every sphere of their lives via the computer screen becomes ever
more real as technology, especially in the field of mobile data communications, rapidly
improves; trends are very much towards ubiquitous or mobile computing.
The GSM network can be divided into three broad parts. The Mobile Station is carried
by the subscriber; the Base Station Subsystem controls the radio link with the Mobile
Station. The Network Subsystem, the main part of which is the Mobile services
Switching Center, performs the switching of calls between the mobile and other fixed
or mobile network users, as well as management of mobile services, such as
authentication. Not shown is the Operations and Maintenance center, which oversees
the proper operation and setup of the network. The Mobile Station and the Base Station
Subsystem communicate across the Um interface, also known as the air interface or
radio link. The Base Station Subsystem communicates with the Mobile service
Switching Center across the A interface.
Mobile Station
The mobile station (MS) consists of the physical equipment, such as the radio
transceiver, display and digital signal processors, and a smart card called the Subscriber
Identity Module (SIM). The SIM provides personal mobility, so that the user can have
access to all subscribed services irrespective of both the location of the terminal and the
use of a specific terminal. By inserting the SIM card into another GSM cellular phone,
the user is able to receive calls at that phone, make calls from that phone, or receive
other subscribed services.
The mobile equipment is uniquely identified by the International Mobile Equipment
Identity (IMEI). The SIM card contains the International Mobile Subscriber Identity
(IMSI), identifying the subscriber, a secret key for authentication, and other user
information. The IMEI and the IMSI are independent, thereby providing personal
mobility. The SIM card may be protected against unauthorized use by a password or
personal identity number.
The Base Station Subsystem is composed of two parts, the Base Transceiver Station
(BTS) and the Base Station Controller (BSC). These communicate across the specified
Abis interface, allowing (as in the rest of the system) operation between components
made by different suppliers.
The Base Transceiver Station houses the radio transceivers that define a cell and handles
the radio-link protocols with the Mobile Station. In a large urban area, there will
potentially be a large number of BTSs deployed. The requirements for a BTS are
ruggedness, reliability, portability, and minimum cost.
The Base Station Controller manages the radio resources for one or more BTSs. It
handles radio-channel setup, frequency hopping, and handovers, as described below.
The BSC is the connection between the mobile and the Mobile service Switching
Center (MSC). The BSC also translates the 13 kbps voice channel used over the radio
link to the standard 64 kbps channel used by the Public Switched Telephone Network
or ISDN.
Network Subsystem
The central component of the Network Subsystem is the Mobile services Switching
Center (MSC). It acts like a normal switching node of the PSTN or ISDN, and in
addition provides all the functionality needed to handle a mobile subscriber, such as
registration, authentication, location updating, handovers, and call routing to a roaming
subscriber. These services are provided in conjunction with several functional entities,
which together form the Network Subsystem. The MSC provides the connection to the
public fixed network (PSTN or ISDN), and signalling between functional entities uses
the ITU-T Signaling System Number 7 (SS7), used in ISDN and widely used in current
public networks.
The Home Location Register (HLR) and Visitor Location Register (VLR), together
with the MSC, provide the call routing and (possibly international) roaming capabilities
of GSM. The HLR contains all the administrative information of each subscriber
registered in the corresponding GSM network, along with the current location of the
mobile. The current location of the mobile is in the form of a Mobile Station Roaming
Number (MSRN) which is a regular ISDN number used to route a call to the MSC
where the mobile is currently located. There is logically one HLR per GSM network,
although it may be implemented as a distributed database.
The Visitor Location Register contains selected administrative information from the
HLR, necessary for call control and provision of the subscribed services, for each
mobile currently located in the geographical area controlled by the VLR. Although
each functional entity can be implemented as an independent unit, most manufacturers
of switching equipment implement one VLR together with one MSC, so that the
geographical area controlled by the MSC corresponds to that controlled by the VLR,
simplifying the signaling required. Note that the MSC contains no information about
particular mobile stations - this information is stored in the location registers.
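The HLR lookup described above can be sketched as a record keyed by subscriber identity, with the MSRN simply being the number used to route the call onward. The identifiers and phone numbers below are invented for illustration:

```python
# Hypothetical HLR: one record per subscriber, holding the roaming
# number (MSRN) that points at the MSC currently serving the mobile.
hlr = {
    "imsi-262-01-0001": {"msisdn": "+491700000001", "msrn": "+441310000042"},
}

def route_incoming_call(imsi):
    """Return the regular ISDN number (MSRN) used to reach the serving MSC."""
    return hlr[imsi]["msrn"]

def update_location(imsi, new_msrn):
    """Location update: the HLR always tracks the mobile's current MSC."""
    hlr[imsi]["msrn"] = new_msrn

# The mobile roams to a new MSC; subsequent calls are routed there.
update_location("imsi-262-01-0001", "+331000000007")
assert route_incoming_call("imsi-262-01-0001") == "+331000000007"
```

This also makes the division of labour visible: per-subscriber state lives in the location registers, not in the MSC itself.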
The other two registers are used for authentication and security purposes. The
Equipment Identity Register (EIR) is a database that contains a list of all valid mobile
equipment on the network, where each mobile station is identified by its International
Mobile Equipment Identity (IMEI). An IMEI is marked as invalid if it has been
reported stolen or is not type approved. The Authentication Center is a protected
database that stores a copy of the secret key stored in each subscriber's SIM card, which
is used for authentication and ciphering of the radio channel.
GPRS data transfer is typically charged per megabyte of traffic transferred, while data
communication via traditional circuit switching is billed per minute of connection time,
independent of whether the user actually is using the capacity or is in an idle state.
GPRS is a best-effort packet switched service, as opposed to circuit switching, where a
certain quality of service (QoS) is guaranteed during the connection for non-mobile
users.
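The billing difference can be made concrete with a toy calculation; the per-minute and per-megabyte rates below are invented for illustration:

```python
def circuit_cost(connect_minutes, rate_per_minute):
    """Circuit switching: billed for connection time, even when idle."""
    return connect_minutes * rate_per_minute

def gprs_cost(megabytes, rate_per_mb):
    """GPRS: billed only for traffic actually transferred."""
    return megabytes * rate_per_mb

# A user stays attached for 60 minutes but transfers only 2 MB.
assert circuit_cost(60, 0.25) == 15.0
assert gprs_cost(2, 0.5) == 1.0
```

For bursty data traffic with long idle periods, per-traffic charging is clearly the better fit.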
2G cellular systems combined with GPRS are often described as 2.5G, that is, a
technology between the second (2G) and third (3G) generations of mobile telephony. It
provides moderate speed data transfer, by using unused time division multiple access
(TDMA) channels in, for example, the GSM system. Originally there was some thought
to extend GPRS to cover other standards, but instead those networks are being
converted to use the GSM standard, so that GSM is the only kind of network where
GPRS is in use. GPRS is integrated into GSM Release 97 and newer releases. It was
originally standardized by European Telecommunications Standards Institute (ETSI),
but now by the 3rd Generation Partnership Project (3GPP).
GPRS was developed as a GSM response to the earlier CDPD and i-mode packet
switched cellular technologies.
UNIT-2
Wireless Networking:
Wireless network refers to any type of computer network that is wireless, and is
commonly associated with a telecommunications network whose interconnections
between nodes are implemented without the use of wires.[1] Wireless telecommunications
networks are generally implemented with some type of remote information
transmission system that uses electromagnetic waves, such as radio waves, for the
carrier and this implementation usually takes place at the physical level or "layer" of the
network.
Types
Wireless PAN
Wireless Personal Area Network (WPAN) is a type of wireless network that
interconnects devices within a relatively small area, generally within reach of a person.
For example, Bluetooth provides a WPAN for interconnecting a headset to a laptop.
ZigBee also supports WPAN applications.[3]
Wireless LAN
Wireless Local Area Network (WLAN) is a wireless alternative to a computer Local
Area Network (LAN) that uses radio instead of wires to transmit data back and forth
between computers in a small area such as a home, office, or school. Wireless LANs
are standardized under the IEEE 802.11 series.
IEEE 802.11 is a set of standards for wireless local area network (WLAN)
computer communication in the 2.4, 3.6 and 5 GHz frequency bands. They are
implemented by the IEEE LAN/MAN Standards Committee (IEEE 802).
Protocols
802.11
The original version of the standard IEEE 802.11 was released in 1997 and clarified in
1999, but is today obsolete. It specified two net bit rates of 1 or 2 megabits per second
(Mbit/s), plus forward error correction code. It specified three alternative physical layer
technologies: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread
spectrum operating at 1 Mbit/s or 2 Mbit/s; and direct-sequence spread spectrum
operating at 1 Mbit/s or 2 Mbit/s. The latter two radio technologies used microwave
transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some
earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM
band.
Legacy 802.11 with direct-sequence spread spectrum was rapidly supplemented and
popularized by 802.11b.
802.11a
The 802.11a standard uses the same data link layer protocol and frame format as the
original standard, but an OFDM based air interface (physical layer). It operates in the 5
GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which
yields realistic net achievable throughput in the mid-20 Mbit/s.
Since the 2.4 GHz band is heavily used to the point of being crowded, using the
relatively unused 5 GHz band gives 802.11a a significant advantage. However, this
high carrier frequency also brings a disadvantage: The effective overall range of
802.11a is less than that of 802.11b/g; and in theory 802.11a signals cannot penetrate as
far as those for 802.11b because they are absorbed more readily by walls and other
solid objects in their path due to their smaller wavelength. In practice 802.11b typically
has a higher distance range at low speeds (802.11b will reduce speed to 5 Mbit/s or
even 1 Mbit/s at low signal strengths). However, at higher speeds, 802.11a typically has
the same or higher range due to less interference.
802.11b
802.11b has a maximum raw data rate of 11 Mbit/s and uses the same media access
method defined in the original standard. 802.11b products appeared on the market in
early 2000, since 802.11b is a direct extension of the modulation technique defined in
the original standard. The dramatic increase in throughput of 802.11b (compared to the
original standard) along with simultaneous substantial price reductions led to the rapid
acceptance of 802.11b as the definitive wireless LAN technology.
802.11b devices suffer interference from other products operating in the 2.4 GHz band.
Devices operating in the 2.4 GHz range include: microwave ovens, Bluetooth devices,
baby monitors and cordless telephones.
802.11g
In June 2003, a third modulation standard was ratified: 802.11g. This works in the 2.4
GHz band (like 802.11b), but uses the same OFDM based transmission scheme as
802.11a. It operates at a maximum physical layer bit rate of 54 Mbit/s exclusive of
forward error correction codes, or about 22 Mbit/s average throughput.[4] 802.11g
hardware is fully backwards compatible with 802.11b hardware and therefore is
encumbered with legacy issues that reduce throughput when compared to 802.11a by
~21%.
The then-proposed 802.11g standard was rapidly adopted by consumers starting in
January 2003, well before ratification, due to the desire for higher data rates, and
reductions in manufacturing costs. By summer 2003, most dual-band 802.11a/b
products became dual-band/tri-mode, supporting a and b/g in a single mobile adapter
card or access point. Details of making b and g work well together occupied much of
the lingering technical process; in an 802.11g network, however, activity of an 802.11b
participant will reduce the data rate of the overall 802.11g network.
Like 802.11b, 802.11g devices suffer interference from other products operating in the
2.4 GHz band.
Bluetooth
Bluetooth is an open wireless protocol for exchanging data over short distances from
fixed and mobile devices, creating personal area networks (PANs). It was originally
conceived as a wireless alternative to RS-232 data cables. It can connect several
devices, overcoming problems of synchronization.
Bluetooth Implementation
Bluetooth uses a radio technology called frequency-hopping spread spectrum, which
chops up the data being sent and transmits chunks of it on up to 79 frequencies. In its
basic mode, the modulation is Gaussian frequency-shift keying (GFSK). It can achieve
a gross data rate of 1 Mb/s. Bluetooth provides a way to connect and exchange
information between devices such as mobile phones, telephones, laptops, personal
computers, printers, Global Positioning System (GPS) receivers, digital cameras, and
video game consoles through a secure, globally unlicensed Industrial, Scientific and
Medical (ISM) 2.4 GHz short-range radio frequency bandwidth. The Bluetooth
specifications are developed and licensed by the Bluetooth Special Interest Group
(SIG). The Bluetooth SIG consists of companies in the areas of telecommunication,
computing, networking, and consumer electronics.
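The frequency-hopping idea can be illustrated with a toy hop-sequence generator over the 79 Bluetooth channels. In real Bluetooth the hop selection is derived from the master's address and clock; this sketch just uses a shared pseudo-random seed to show why both ends stay in step:

```python
import random

BLUETOOTH_CHANNELS = 79  # 1 MHz channels in the 2.4 GHz ISM band

def hop_sequence(seed, hops):
    """Generate a pseudo-random channel hop sequence.

    Both ends seeded with the same value (in real Bluetooth, derived
    from the master's address and clock) produce the same sequence,
    so master and slave hop together.
    """
    rng = random.Random(seed)
    return [rng.randrange(BLUETOOTH_CHANNELS) for _ in range(hops)]

master = hop_sequence(seed=0xC0FFEE, hops=10)
slave = hop_sequence(seed=0xC0FFEE, hops=10)
assert master == slave                      # both ends stay in step
assert all(0 <= ch < BLUETOOTH_CHANNELS for ch in master)
```

Hopping across many frequencies is what makes Bluetooth robust against narrowband interference in the crowded 2.4 GHz band.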
Data broadcasting
The data broadcast approach is an efficient technique for disseminating data in mobile
computing environments. To reduce the response time and the power consumption of
the data broadcast approach, a mobile client may store frequently accessed data items in
its cache. When a cached data item becomes out of date, the mobile client has to
reaccess the new content of the data item from the broadcast channel. Reaccessing a
cached data item may incur significant power consumption and suffer from a long
delay. One proposed data reaccess scheme enables a mobile client to efficiently
reaccess a cached data item. Its strength lies in its capability to allow a mobile client
to correctly reaccess its cached data items while the server inserts data items into or
deletes data items from the broadcast structure in the course of data broadcasting.
Experiments show that the scheme significantly reduces the tuning time required in
reaccessing a cached data item.
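The caching behaviour described above can be sketched as follows; the version numbers stand in for whatever invalidation information the server broadcasts, and this is a simplified illustration rather than the actual proposed scheme:

```python
class BroadcastClient:
    """Mobile client that caches broadcast items and reaccesses an item
    from the channel only when its cached copy is stale."""

    def __init__(self):
        self.cache = {}  # item id -> (version, value)

    def read(self, item_id, broadcast):
        """broadcast maps item id -> (version, value) as currently on air."""
        version, value = broadcast[item_id]
        cached = self.cache.get(item_id)
        if cached and cached[0] == version:
            return cached[1], "cache"           # answered locally: no tuning cost
        self.cache[item_id] = (version, value)  # reaccess from the channel
        return value, "channel"

client = BroadcastClient()
channel = {"x": (1, "old")}
assert client.read("x", channel) == ("old", "channel")
assert client.read("x", channel) == ("old", "cache")
channel["x"] = (2, "new")  # server updates the item on the broadcast
assert client.read("x", channel) == ("new", "channel")
```

Every read answered from the cache saves the client from tuning into the channel, which is where the response-time and power savings come from.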
Introduction to Mobile IP
IP version 4 assumes that a node’s IP address uniquely identifies its physical
attachment to the Internet. Therefore, when a correspondent host (CH) tries to send a
packet to a mobile node (MN), that packet is routed to the MN's home network,
independently of the current attachment of that MN (this is because CHs do not have
any knowledge of mobility).
When the MN is on its home network, and a CH sends packets to the mobile node, the
Mobile Node obtains those packets and answers them as a normal host (this is one
important requirement in Mobile IP), but if the MN is away from its home network, it
needs an agent to work on its behalf. That agent is called the Home Agent (HA). This
agent must be able to communicate with the MN all the time that it is "on-line",
independently of the current position of the MN. So, the HA must know the MN's
physical location.
In order to do that, when the MN is away from home, it must get a temporary address
(which is called care-of address), which will be communicated to the HA to tell its
current point of attachment. This care-of address can be obtained by several ways, but
the most typical one is that the MN gets that address from an agent. In this case, this
agent is called Foreign Agent (FA).
Therefore, when an MN is away from home and connected to a foreign network, it
detects that it is on a different network and sends a registration request through the FA
to the HA, requesting mobile capabilities for a period of time. The HA sends a
registration reply back to the MN (through the FA), allowing or denying that
registration. This is the case when the Mobile Node is using a Foreign Agent for the
registration. If the Mobile Node obtains the care-of address by other means, that step
(registration through the FA) is not necessary.
If the HA allows that registration, it will work as a proxy of the MN. When MN’s home
network receives packets addressed to the MN, HA intercepts those packets (using
Proxy ARP), encapsulates them, and sends them to the care-of address, which is one of
the addresses of the FA. The FA will decapsulate those packets and forward
them to the MN (because it knows exactly where the MN is).
So, when the MN is on a foreign network, it uses its home agent to tunnel encapsulated
packets to itself via FA. This occurs until the lifetime expires (or the MN moves away).
When this happens (timeout), the MN must register again with its HA through the FA
(if the MN obtains its care-of address by other means, it acts as its own FA).
When the MN moves to another network and it detects so, it sends a new registration
request through (one more time) the new FA. In this case, HA will change MN’s care-
of address and it will forward encapsulated packets to that new care-of address (which,
usually, belongs to the FA). Some extensions of Mobile IP allow an MN to have
several care-of addresses. Then, the HA will send the same information to all the
care-of addresses. This is particularly useful when the MN is at the edges of cells in a
wireless environment and is moving constantly.
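The home agent's behaviour described above amounts to maintaining a binding table from home address to care-of address(es) and forwarding encapsulated packets accordingly. This is a simplified sketch; real Mobile IP uses Proxy ARP, IP-in-IP encapsulation, and registration lifetimes, and all addresses here are made up:

```python
class HomeAgent:
    """Keeps the binding between a mobile node's home address and its
    current care-of address(es), and tunnels intercepted packets to them."""

    def __init__(self):
        self.bindings = {}  # home address -> list of care-of addresses

    def register(self, home_addr, care_of_addrs):
        self.bindings[home_addr] = list(care_of_addrs)

    def deregister(self, home_addr):
        self.bindings.pop(home_addr, None)  # MN is back home: stop tunneling

    def intercept(self, packet):
        """Return encapsulated packets to tunnel, one per care-of address,
        or the packet unchanged if the MN is at home."""
        care_of = self.bindings.get(packet["dst"])
        if not care_of:
            return [packet]
        return [{"outer_dst": coa, "inner": packet} for coa in care_of]

ha = HomeAgent()
ha.register("10.0.0.5", ["192.168.7.1"])  # FA's address as the care-of address
out = ha.intercept({"src": "8.8.8.8", "dst": "10.0.0.5"})
assert out[0]["outer_dst"] == "192.168.7.1"
ha.deregister("10.0.0.5")  # MN returned home: packets delivered normally
```

Registering a list of care-of addresses also covers the multiple-care-of-address extension mentioned above: the HA simply tunnels a copy to each.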
The MN detects movement primarily by listening to the periodic advertisements that
the FA (and HA) send to their local networks. Those messages are an extension of the
ICMP Router Discovery messages and are called Agent Advertisements (because they
advertise a valid agent for Mobile Nodes). There are two methods of movement
detection:
a) The first method is based on network prefixes. For further information look at
Mobile IP RFC 2002 (section 2.4.2.2, page 22). This method is not included in our
current implementation.
b) The second method is based upon the Lifetime field within the main body of the
ICMP Router Advertisement portion of the Agent Advertisement. The mobile node
keeps track of that Lifetime; if it expires, the node sends an Agent Solicitation (asking
for a new Agent Advertisement) and presumes that it has moved.
When the MN returns to its home network, it does not require mobility capabilities, so
it sends a deregistration request to the HA, telling it that it is at home (just to deactivate
tunneling and to remove the previous care-of address(es)).
At this point, MN does not have to (de)register again, until it moves away from its
network. The detection of the movement is based on the same method explained
before.
Wireless Application Protocol (commonly referred to as WAP)
A WAP browser provides all of the basic services of a computer based web browser but
simplified to operate within the restrictions of a mobile phone, such as its smaller view
screen. WAP sites are websites written in, or dynamically converted to, WML
(Wireless Markup Language) and accessed via the WAP browser.
Before the introduction of WAP, service providers had extremely limited opportunities
to offer interactive data services. Interactive data applications are required to support
activities that are now commonplace.
Technical specifications:
• The WAP standard describes a protocol suite that allows the interoperability of
WAP equipment and software with many different network technologies. The
rationale for this was to build a single platform for competing network
technologies such as GSM and IS-95 (also known as CDMA) networks.
• The bottom-most protocol in the suite is the WAP Datagram Protocol (WDP),
which is an adaptation layer that makes every data network look a bit like UDP
to the upper layers by providing unreliable transport of data with two 16-bit port
numbers (origin and destination). WDP is considered by all the upper layers as
one and the same protocol, which has several "technical realizations" on top of
other "data bearers" such as SMS, USSD, etc. On native IP bearers such as
GPRS, UMTS packet-radio service, or PPP on top of a circuit-switched data
connection, WDP is in fact exactly UDP.
• WTLS provides a public-key cryptography-based security mechanism similar to
TLS. Its use is optional.
• WTP provides transaction support (reliable request/response) that is adapted to
the wireless world. WTP supports more effectively than TCP the problem of
packet loss, which is common in 2G wireless technologies in most radio
conditions, but is misinterpreted by TCP as network congestion.
• Finally, WSP is best thought of on first approach as a compressed version of
HTTP.
This protocol suite allows a terminal to emit requests that have an HTTP or HTTPS
equivalent to a WAP gateway; the gateway translates requests into plain HTTP.
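Since on native IP bearers WDP is exactly UDP, the "unreliable transport with two 16-bit port numbers" can be shown with plain UDP sockets. The loopback addresses here are just for the demonstration:

```python
import socket

# On an IP bearer, a WDP datagram is a UDP datagram: unreliable
# delivery plus 16-bit origin and destination port numbers.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
dst_port = receiver.getsockname()[1]     # the 16-bit destination port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"wsp-request", ("127.0.0.1", dst_port))

data, (addr, origin_port) = receiver.recvfrom(1024)
assert data == b"wsp-request"
assert 0 < origin_port < 65536           # the 16-bit origin port
sender.close()
receiver.close()
```

On non-IP bearers such as SMS or USSD, WDP provides this same datagram abstraction through other "technical realizations", which is what lets the upper WAP layers ignore the bearer entirely.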
Wireless Application Environment (WAE)
In this space, application-specific markup languages are defined.
For WAP version 1.X, the primary language of the WAE is WML, which has been
designed from scratch for handheld devices with phone-specific features. In WAP 2.0,
the primary markup language is XHTML Mobile Profile.
TCP over wireless network
TCP is the most common transport protocol used on the Internet. It was designed
primarily for wired networks. The characteristics of wireless links are very different
from those of wired links, particularly in terms of loss behaviour.
Introduction
Wireless links have fundamentally different characteristics from wired links. They are
characterized by low bandwidth and error rates that are high, bursty and time-varying.
This difference in error characteristics causes significant degradation in the
performance of TCP, which was originally developed for wired networks. For
example, TCP misinterprets packet loss over a wireless link as a sign of network
congestion, causing poor throughput.
Nanda et al. have suggested introducing reliability at the link layer for wireless links
using a finite number of retransmissions. This approach does not completely shield the
source transport layer from all losses on the wireless link. Also, studies have shown
that link-layer retransmissions may interfere with TCP's end-to-end retransmissions,
leading to degraded performance.
Bakre et al. and Yavatkar et al. advocate splitting the end-to-end TCP connection into
two separate TCP connections: one over the wired network and the other over the
wireless link, with the base station serving as the common point of the two
connections. This approach isolates the transport layer from the erratic behavior of the
wireless link. However, it does not preserve the semantics of TCP. Also, every packet
incurs the overhead of going through TCP protocol processing twice at the base
station. Further, it requires the base station to maintain state for every TCP connection
passing through it.
Another approach adds a module called a snoop agent to the routing code at the base
station. This agent monitors packets flowing on a TCP connection. It maintains a cache
of unacknowledged TCP packets on a per-connection basis and performs local
retransmission when it detects a packet loss. This approach does not completely shield
the sender from losses on a wireless link. Also, it assumes that the wireless link is the
last hop in the network path. It also requires the base station to maintain state
information and a cache of unacknowledged packets for every TCP connection passing
through it.
All the proposed solutions are based on the assumption that no changes can be made
to existing TCP implementations. However, it is possible for newer hosts to run
modified TCP implementations. We need only ensure that the modifications are
backward-compatible and that older hosts do not notice a degradation in performance.
The scheme described here augments TCP to respond to control signals sent from
wireless gateway nodes. The control signals are such that they are unambiguously
discarded by receiving hosts running older versions of TCP. Simulation results show
that the scheme provides substantial performance improvement with low overhead.
A Simple Scheme
2.1 Description of the Simple Scheme
In the proposed scheme, the base station generates an ICMP-DEFER message when
the first attempt at transmitting a packet on the wireless link is unsuccessful. This
policy ensures that, within one round-trip time, TCP will receive either an
acknowledgment or an ICMP message, so that end-to-end retransmissions do not start
while link-layer retransmissions may be going on. A lack of both signals a congestion
loss. Thus, TCP can distinguish between the two kinds of losses. The control
information consists of the TCP and IP headers (a typical ICMP message). This is
enough for the source TCP to decide which particular packet was lost on the wireless
link. ICMP-DEFER messages are not retransmitted, so the overhead on the network is
minimal.
TCP performs the following actions on receipt of an ICMP-DEFER message. If the retransmit
timer is set for the segment indicated in the ICMP message, it postpones the timer's expiry
by the current estimate of the retransmission timeout (RTO). This avoids conflict between
link-layer retransmissions and end-to-end retransmissions. We assume that one RTO is
sufficient for the base station to exhaust all local retransmission attempts for a packet.
If the wireless link remains in the error state for a longer duration, TCP will stop
transmitting because there will be no acknowledgments, and thus the send window will not
move.
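The sender-side timer behavior described above can be sketched as follows. This is a simplified model under the paper's assumptions; the class name and interface are hypothetical, and a real TCP keeps its timers inside the kernel's retransmission machinery.

```python
class DeferAwareRetransmitTimer:
    """Sketch of the proposed sender-side behavior (assumed interface).

    On an ICMP-DEFER for a segment whose retransmit timer is pending, the
    sender postpones expiry by the current RTO estimate, giving the base
    station one RTO to finish its link-layer retransmission attempts.
    """

    def __init__(self, rto):
        self.rto = rto        # current retransmission timeout estimate
        self.expiry = {}      # seq -> absolute expiry time

    def arm(self, seq, now):
        # Timer armed when the segment is (re)transmitted.
        self.expiry[seq] = now + self.rto

    def on_icmp_defer(self, seq):
        # Postpone only if a timer is actually set for this segment.
        if seq in self.expiry:
            self.expiry[seq] += self.rto

    def expired(self, seq, now):
        # An expiry with no DEFER and no ACK indicates congestion loss.
        return seq in self.expiry and now >= self.expiry[seq]
```

A timeout that fires despite the postponement then triggers the normal end-to-end congestion response, which matches the text: lack of both an ACK and an ICMP-DEFER signals a congestion loss.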
Issues
Wireless Medium (continued)
• Variant Connectivity: Wireless technologies vary in the degree of bandwidth and
reliability they provide.
• Broadcast Facility: There is a high-bandwidth broadcast channel from the base station to
all mobile clients in its cell.
• Tariffs: For some networks (e.g., cellular telephones), network access is charged per
connection time, while for others (e.g., packet radio), it is charged per message (packet).
Issues
Portability of Mobile Elements
• Mobile elements are resource-poor compared to static elements.
Mobile elements must be light and small to be easily carried around. Such considerations,
in conjunction with a given cost and level of technology, will keep mobile elements having
fewer resources than static elements, including memory, screen size and disk capacity.
This results in an asymmetry between static and mobile elements.
• Mobile elements rely on batteries.
Even with advances in battery technology, this concern will not cease to exist. Concern for
power consumption must span various levels in hardware and software design.
• Mobile elements are more easily damaged, stolen, or lost.
Thus, they are less secure and reliable than static elements.
Introduction
Mobile computing refers to computing using devices that are not attached to a specific
location; instead, their position (network or geographical) changes. Mobile computing can
be traced back to file systems and the need for disconnected operation at the end of the
1980s. With the rapid growth of mobile technologies and the cost-effectiveness of deploying
wireless networks in the 1990s, the goal of mobile computing became to support AAA
(anytime, anywhere, any-form) access to data by users from their portable computers and
mobile phones, devices with small displays and limited resources. This led to research in
mobile data management, including transaction processing, query processing and data
dissemination [22]. A key characteristic of all these research efforts was the great
emphasis on the mobile computing challenges, including:
• Intermittent Connectivity: Computation must proceed despite short or long periods of
network unavailability.
• Scarcity of Resources: Due to the small size of portable devices, there are implicit
restrictions on the available storage and computation capacity and, most of all, on energy.
• Mobility: The implications of mobility vary. First, mobility introduces a number of
technical challenges at the networking layer. It also offers a number of opportunities at
the higher layers for explicitly using location information, either at the semantic level
(for instance, for providing personalization) or at the performance level (for instance,
for data prefetching). Finally, it amplifies heterogeneity.
In general, one can distinguish between single-hop and multi-hop underlying
infrastructures. In single-hop infrastructures, each mobile device communicates with a
stationary host, which corresponds to its point of attachment to the network. The rest of
the routing is the responsibility of a stationary infrastructure, e.g., the Internet. On
the other hand, in multi-hop wireless communication, an ad-hoc wireless network is formed
in which wireless hosts participate in routing messages among each other. In both
infrastructures, the hosts between the source (or sources) that holds the data and the
requester of the data (the data sink) form a dissemination tree. The hosts (mobile or
stationary) that form the dissemination tree may store data and take part in the
computation towards achieving in-network processing (e.g., aggregation). Locally caching or
replicating data at the wireless host or at the intermediate nodes of the dissemination
tree is important for improving system performance and availability.
Caching and replication generally attempt to guarantee that most data requests are for data
held in main memory or local storage, negating the need to perform I/O or remote data
retrieval. Hence, appropriate caching and replication schemes have traditionally been used
to improve performance and reduce service time. In mobile environments, the performance
considerations go beyond simple speedups and data-retrieval delays. In this article, we
examine how caching and replication have been utilized in mobile data management, and more
specifically in the first infrastructure, where data are cached at the mobile device in
order to avoid excessive energy consumption and to cope with intermittent connectivity.
Consistency Levels
We consider the case in which a mobile computing device (such as a portable computer or
cellular phone) is connected to the rest of the network, typically through a wireless link.
Wireless communication has a double impact on the mobile device: the limited bandwidth of
wireless links increases the response times for accessing remote data from a mobile host,
and both transmitting and receiving data are high-energy-consumption operations.
The principal goal is to store appropriate pieces of data locally at the mobile device so
that it can operate on its own data, thus reducing the need for communication that
consumes both energy and bandwidth. At some point, operations performed at the
mobile device must be synchronized with operations performed at other sites. The
complexity of this synchronization depends greatly on whether updates are allowed at
the mobile device. The main reason for allowing such updates is to sustain network
disconnections. When there are no local updates, the important issue is disseminating
updates from the rest of the network to the mobile device.
Synchronization depends on the level at which correctness is sought. This can be
roughly categorized as replica-level correctness and transaction-level correctness. At
the replica level, correctness or coherency requirements are expressed per item in terms
of the allowable divergence among the values of the copies of each item. There are
many ways to characterize the divergence among copies of an item.
For example, with quasi copies, the coherency or freshness requirements between a
cached copy of an item and its primary at the server are specified by limiting (a) the
number of updates (versions) between them, (b) their distance in time, or (c) the
difference between their values. At the transaction level, the strictest form of
correctness is achieved through global serializability that requires the execution of all
transactions running at mobile and stationary hosts to be equivalent to some serial
execution of the same transactions. In case of replication, one copy serializability
provides equivalence with a serial execution on a one-copy database. One-copy
serializability does not allow any divergence among copies.
A large number of correctness criteria have been proposed besides serializability. A
practical such criterion is snapshot isolation [5]. With snapshot isolation, a transaction
is allowed to read data from any database snapshot taken at a time earlier than its start
time. Also central are criteria that treat read-only transactions, i.e., transactions with
no update operations, differently. Consistency of read-only transactions is achieved by
ensuring that transactions read a database instance that does not violate any integrity
constraints (as, for example, with snapshot isolation), while freshness of read-only
transactions refers to the freshness of the values read.
Finally, relaxed-currency serializability allows update transactions to read out-of-date
values as long as they satisfy some freshness constraints specified by the users.
There are two basic ways of propagating updates. Eager replication synchronizes all copies
of an item within a single transaction, whereas with lazy replication, the transactions
that keep replicas coherent run as separate, independent database transactions after the
original transaction. One-copy serializability, as well as other forms of correctness, can
be achieved through either eager or lazy update propagation.
3 Two-tier Caching
In this section, we assume that data can be updated at the mobile device. The main
motivation is support for disconnected operation. Disconnected operation refers to the
autonomous operation of a mobile client when network connectivity becomes unavailable (for
instance, due to physical constraints) or undesirable (for example, for reducing power
consumption). Preloading or prefetching data to sustain a forthcoming disconnection is
often termed hoarding. The content to be prefetched may be determined (a) automatically by
the system by utilizing implicit information, most often based on the past history of data
references, or (b) by instructions given explicitly by the users, as in profile-driven data
prefetching, where a simple profile language is introduced for specifying the items to be
prefetched along with their relative importance. Additional information, such as a set of
allowable operations or a characterization of the required data quality, may also be cached
along with the data. For example, in the Pro-Motion infrastructure the unit of caching and
replication is a compact: an object that encapsulates the cached data, operations for
accessing the cached data, state information (such as the number of accesses to the
object), consistency rules that must be followed to guarantee consistency, and obligations
(such as deadlines).
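The compact described above can be sketched as a simple data structure. The field names and layout here are assumptions for illustration; Pro-Motion's actual compact representation is not specified in this text.

```python
from dataclasses import dataclass, field


@dataclass
class Compact:
    """Sketch of Pro-Motion's unit of caching (field names are assumed).

    A compact bundles the cached data with the operations allowed on it,
    access-state information, consistency rules, and obligations such as
    deadlines, so the mobile client can operate on it autonomously.
    """
    data: dict
    operations: dict                 # operation name -> callable on the data
    access_count: int = 0            # state information
    consistency_rules: list = field(default_factory=list)
    obligations: dict = field(default_factory=dict)  # e.g. {"deadline": t}

    def invoke(self, op_name, *args):
        # Every access updates the compact's state information, so the
        # client can report usage back when it reconnects.
        self.access_count += 1
        return self.operations[op_name](self.data, *args)
```

Encapsulating the allowed operations with the data is what lets the disconnected client enforce the server's rules locally instead of contacting the server for each access.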
To allow concurrent operation at both the mobile client and other sites during
disconnection, optimistic approaches to consistency control are typically deployed.
Optimistic consistency-maintenance protocols allow data to be accessed concurrently at
multiple sites without a priori synchronization between the sites, potentially resulting in
short-term inconsistencies. Such protocols trade off quality of data for improved quality
of service. Optimistic replication has been extensively studied as a means for consistency
maintenance in distributed systems. In this paper, we present some representative examples
of optimistic protocols in the context of mobile computing.
Consistent operation during disconnection has also been extensively addressed in the
context of network partitioning. In this context, a network failure partitions the sites of
a distributed database system into disconnected clusters, and various approaches have been
proposed and reviewed in the literature. In general, protocols for network partition follow
peer-to-peer models where transactions executed in any partition are of equal importance,
whereas the related protocols in mobile computing most often consider transactions at the
mobile host as second-class, for instance by considering their updates as tentative.
Furthermore, disconnections in mobile computing are common, and some of them may be
considered foreseeable.
Disconnections correspond to the extreme case of a total lack of connectivity. Other
connectivity constraints, such as weak or intermittent connectivity, also affect the
protocols for enforcing consistency. In general, weak connectivity is handled by
appropriately revising those operations whose deployment involves the network. For
instance, the frequency with which updates performed on the local data are propagated to
the server may depend on connectivity conditions.
In early research in mobile computing, a general concern has been whether issues such as
connectivity or mobility should be transparent, i.e., hidden from the users. In this
respect, adapting the level of transaction or replica correctness to system conditions,
such as the availability of connectivity or the quality of the network connection, and
providing applications with less-than-strict notions of correctness can be seen as making
such conditions visible to the users. This is also achieved by explicitly extending queries
with quality-of-data specifications, for example for constraining the divergence between
copies.
Some common characteristics of protocols for consistency in two-tier caching are:
• The propagation of updates performed at the mobile site generally follows lazy protocols.
• Reads are allowed on the local data, while updates of local data are tentative in the
sense that they need to be further validated before commitment.
• For integrating operations at the mobile hosts with transactions at other sites, in the
case of replica-level consistency, copies of an item are reconciled following some
conflict-resolution protocol. At the transaction level, local transactions are validated
against some application- or system-level criterion. If the criterion is met, the
transaction is committed; otherwise, the execution of the transaction is aborted,
reconciled or compensated. Such actions may have cascading effects on other tentatively
committed transactions that have seen the results of the transaction.
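The validate-then-commit-or-reconcile flow common to these protocols can be sketched generically. Both callbacks below are assumptions standing in for the "application or system level criterion" and the conflict-resolution action; this is an illustrative skeleton, not any specific protocol.

```python
def integrate_local_transactions(pending, criterion, reconcile):
    """Sketch of reintegration after reconnection (illustrative flow).

    Each tentatively committed local transaction is validated against an
    application- or system-level `criterion`; transactions that pass are
    committed, and the rest are handed to `reconcile` (which may abort,
    reconcile, or compensate them).
    """
    committed, rejected = [], []
    for txn in pending:
        if criterion(txn):
            committed.append(txn)
        else:
            # A real system would also track dependencies, since rejecting
            # this transaction may cascade to tentatively committed
            # transactions that read its results.
            rejected.append(reconcile(txn))
    return committed, rejected
```

The specific protocols presented next (Coda IOTs, two-tier replication, two-layer transactions, Bayou) all instantiate this skeleton with different validation criteria and reconciliation actions.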
Next, we present a number of consistency protocols that have been proposed for mobile
computing.
Isolation-Only Transactions in Coda
Coda is one of the first file systems designed to support disconnections and weak
connectivity. Coda introduced isolation-only transactions (IOTs) in file systems. An IOT is
a sequence of file access operations. A transaction T is called a first-class transaction
if it has no partitioned file access, i.e., the mobile host maintains a connection for
every file it has accessed; otherwise, T is called a second-class transaction. Whereas the
result of a first-class transaction is immediately committed, a second-class transaction
remains in the pending state until connectivity is restored. The result of a second-class
transaction is held in the local cache and is visible only to subsequent accesses on the
same host. Second-class transactions are guaranteed to be locally serializable among
themselves. A first-class transaction is guaranteed to be serializable with all
transactions that were previously resolved or committed at the fixed host. Upon
reconnection, a second-class transaction T is validated against one of the following two
serialization constraints. The first is global serializability, which means that if a
pending transaction's local result were written to the fixed host as is, it would be
serializable with all previously committed or resolved transactions. The second is a
stronger consistency criterion called global certifiability (GC), which requires a pending
transaction to be globally serializable not only with, but also after, all previously
committed or resolved transactions.
Two-Tier Replication
With two-tier replication, replicated data have two versions at mobile nodes: master and
tentative versions. A master version records the most recent value received while the site
was connected. A tentative version records local updates. There are two types of
transactions, analogous to second- and first-class IOTs: tentative and base transactions. A
tentative transaction works on local tentative data and produces tentative data. A base
transaction works only on master data and produces master data. Base transactions involve
only connected sites. Upon reconnection, tentative transactions are reprocessed as base
transactions. If they fail to meet some application-specific acceptance criteria, they are
aborted.
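Reintegration under two-tier replication can be sketched as replaying the tentative log against master data. This is a toy model under stated assumptions: transactions are pure functions of state, and `acceptance` stands in for the application-specific acceptance criteria.

```python
def reconnect(tentative_log, master_state, acceptance):
    """Sketch of two-tier replication reintegration (assumed interfaces).

    Each tentative transaction recorded while disconnected is re-run as a
    base transaction against the master data; if its result fails the
    application-specific `acceptance` test, it is aborted.
    """
    aborted = []
    for txn in tentative_log:
        new_state = txn(master_state)            # re-run on master data
        if acceptance(master_state, new_state):
            master_state = new_state             # result becomes master data
        else:
            aborted.append(txn)                  # fails acceptance: aborted
    return master_state, aborted
```

For example, with an account balance as master data and "balance stays non-negative" as the acceptance criterion, a tentative withdrawal that overdraws the reconciled balance is aborted on replay even though it succeeded locally.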
Two-Layer Transactions
With two-layer transactions [17], transactions that run solely at the mobile host are
called weak, while the rest are called strict. A distinction is drawn between weak copies
and strict copies. In contrast to strict copies, weak copies are only tentatively committed
and may hold obsolete values. Weak transactions update weak copies, while strict
transactions access strict copies located at any site. Weak copies are integrated with
strict copies either when connectivity improves or when an application-defined freshness
limit on the allowable deviation between weak and strict copies is passed. Before
reconciliation, the results of weak transactions are visible only to weak transactions at
the same site. Strict transactions are slower than weak transactions, since they involve
the wireless link, but they guarantee permanence of updates and currency of reads.
During disconnection, applications can only use weak transactions; in this case, weak
transactions have semantics similar to second-class IOTs and tentative transactions.
Adaptability is achieved by restricting the number of strict transactions depending on the
available connectivity and by adjusting the application-defined degree of divergence among
copies.
Bayou
Bayou is built on a peer-to-peer architecture with a number of replicated servers weakly
connected to each other. Bayou does not support full-fledged transactions. A user
application can read any and write any available copy. Writes are propagated to other
servers during pair-wise contacts called anti-entropy sessions. When a write is accepted by
a Bayou server, it is initially deemed tentative. As in two-tier replication, each server
maintains two views of the database: a copy that only reflects committed data and another
full copy that also reflects the tentative writes currently known to the server.
Eventually, each write is committed using a primary-commit scheme; that is, one server
designated as the primary takes responsibility for committing updates. Because servers may
receive writes from clients and other servers in different orders, servers may need to undo
the effects of some previous tentative execution of a write operation and re-apply it. The
Bayou system provides dependency checks for automatic conflict detection and merge
procedures for resolution. Instead of transactions, Bayou supports sessions. A session is
an abstraction for a sequence of read and write operations performed during the execution
of an application. Session guarantees are enforced to avoid inconsistencies when accessing
copies at different servers; for example, a session guarantee may be that read operations
reflect previous writes, or that writes are propagated after writes that logically precede
them. Different degrees of connectivity are supported by individually selectable session
guarantees, choices of committed or tentative data, and by placing an age parameter on
reads. Arbitrary disconnections among Bayou's servers are also supported, since Bayou
relies only on pair-wise communication; thus, groups of servers may be disconnected from
the rest of the system yet remain connected to each other.
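The two views and the undo/re-apply behavior can be modeled in a few lines. This is a toy sketch under simplifying assumptions (writes as pure functions, plain timestamps as the canonical order), not the Bayou implementation.

```python
class BayouServer:
    """Toy model of Bayou's two views of the database (illustrative).

    The committed view reflects only writes finalized by the primary; the
    full view also reflects tentative writes. Because tentative writes may
    arrive in different orders at different servers, the full view is
    recomputed by (conceptually) undoing and re-applying them in a
    canonical timestamp order.
    """

    def __init__(self):
        self.committed_writes = []   # writes in primary commit order
        self.tentative = []          # (timestamp, write) pairs

    def receive_tentative(self, timestamp, write):
        self.tentative.append((timestamp, write))

    def commit(self, timestamp):
        # The primary commits a tentative write; it moves to the
        # committed log in commit order.
        for pair in list(self.tentative):
            if pair[0] == timestamp:
                self.tentative.remove(pair)
                self.committed_writes.append(pair[1])

    def committed_view(self, initial):
        state = initial
        for w in self.committed_writes:
            state = w(state)
        return state

    def full_view(self, initial):
        # Committed prefix first, then tentative writes re-applied in
        # canonical timestamp order (the undo + redo step).
        state = self.committed_view(initial)
        for _, w in sorted(self.tentative, key=lambda p: p[0]):
            state = w(state)
        return state
```

Notice that a tentative write arriving "late" with an earlier timestamp changes the full view's order, which is exactly why applications may prefer the committed view or a session guarantee.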
4 Update Dissemination
In this section, we consider data at the mobile device to be read-only, as in traditional
client-server caching. In this case, the main issue is developing efficient protocols for
disseminating server updates to mobile clients. Most cache-invalidation protocols developed
for mobile computing focus on the case in which a large number of clients is attached to a
single server. Often, the server is equipped with an efficient broadcast facility that
allows it to propagate updates to all of its clients. Different assumptions are made about
whether or not the server maintains information about which clients it is serving, what the
contents of their caches are, and when each cache was last validated. Servers that hold
such information are called stateful, while servers that do not are called stateless.
Another issue pertinent to mobile computing is, again, handling disconnections; in
particular, ensuring that cache invalidations are received by clients despite any temporary
network unavailability.
Update propagation may be either synchronous or asynchronous. In synchronous methods, the
server broadcasts an invalidation report periodically. A client must listen for the report
first to decide whether its cache is valid or not; thus, each client is confident of the
validity of its cache only as of the last invalidation report. This adds some latency to
query processing, since to answer a query a client has to wait for the next invalidation
report. In the case of disconnections, synchronous methods surpass asynchronous ones in
that clients need only periodically tune in to read the invalidation report instead of
continuously listening to the broadcast. However, if the client remains inactive longer
than the broadcast period, the entire cache must be discarded unless special checking is
deployed.
Invalidation protocols vary in the type of information they convey to the clients. In case
of replica level correctness, it suffices that single read operations access current data. In
this case, invalidation may include just a list of the updated items or in addition to this,
their updated values. Including the updated values may be wasteful of bandwidth
especially when the corresponding items are cached at only a few clients. On the other
hand, if the values are not included, the client must either discard the item from its
cache or communicate with the server to receive the updated value. The reports can
provide information for individual items or aggregate information for sets of items. In
case of transaction level correctness, invalidation reports must include additional
information regarding server transactions.
The efficiency of an update-dissemination protocol depends on the connectivity behavior of
the mobile clients. Clients that are often connected are called workaholics, while clients
that are often disconnected are called sleepers. Three synchronous strategies for stateless
servers are considered. In the broadcasting timestamps (TS) strategy, the invalidation
report contains the timestamps of the latest change for items that have had updates in the
last w seconds. In the amnesic terminals (AT) strategy, the server only broadcasts the
identifiers of the items that changed since the last invalidation report. In the signatures
strategy, signatures are broadcast. A signature is a checksum computed over the values of a
number of items by applying data-compression techniques similar to those used for file
comparison. Each of these strategies is shown to be effective for different types of
clients. Signatures are best for long sleepers, that is, when the period of disconnection
is long and hard to predict. The AT method is best for workaholics. Finally, TS is shown to
be advantageous when the rate of queries is greater than the rate of updates, provided that
the clients are not workaholics.
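Client-side processing of a TS report can be sketched as follows. The data layout (cache as item -> (timestamp, value), report as item -> last-change timestamp) is an assumption for illustration.

```python
def process_ts_report(cache, report, window, last_heard, now):
    """Sketch of the broadcasting-timestamps (TS) strategy at a client.

    `report` maps item ids to the timestamp of their latest change and
    covers items updated in the last `window` seconds. A client that
    slept longer than the window cannot trust the report and must drop
    its whole cache; otherwise it drops only items whose last-change
    timestamp is newer than its cached copy's.
    """
    if now - last_heard > window:
        return {}  # slept past the report window: discard entire cache
    return {
        item: (cached_ts, value)
        for item, (cached_ts, value) in cache.items()
        if report.get(item, 0) <= cached_ts  # keep items not updated since
    }
```

This also illustrates the trade-off stated above: a sleeper that misses one window pays with its whole cache, which is why signatures suit long sleepers better.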
Another model of operation in the context of mobile databases is that of a broadcast or
push model. In this model, the server (periodically) broadcasts data to a set of mobile
clients. Clients monitor the broadcast and retrieve the data items they need as they
arrive. Data of interest may also be cached locally at the client.
When clients read data from the broadcast, a number of different replica-level correctness
models are reasonable. For example, if clients do not cache data, the server always
broadcasts the most recent values, and there is no backchannel for on-demand data delivery,
then the latest-value model arises naturally. In this model, clients read the most recent
value of a data item. Several methods for enforcing transaction-level correctness have been
proposed. With the invalidation method, the server broadcasts an invalidation report with
the data items that have been updated since the broadcast of the previous report;
transactions that read obsolete items are aborted. With the serialization graph testing
(SGT) method, the server broadcasts control information related to conflicting operations;
clients use this information to ensure that their read-only transactions are serializable
with the server transactions. With multiversion broadcast, multiple versions of each item
are broadcast, so that client transactions always read a consistent database snapshot.
A general theory of correctness for broadcast databases, as well as the fundamental
properties underlying the techniques for enforcing it, has also been developed. Correctness
characterizes the freshness of the values seen by the clients with regard to the values at
the server, as well as the temporal discrepancy among the values read by the same
transaction.
More recently, the concept of materialized views was extended in the context of mobile
databases to operate in a fashion similar to data caches supporting local query processing.
As in traditional databases, materialized views in mobile databases provide a means to
present different portions of the database based on users' perspectives and, similar to
data warehouses, materialized views provide a means to support personalized information
gathering from multiple data sources. Personalization is expressed in the form of
view-maintenance options for recomputation and incremental maintenance. These offer a finer
grain of control and a balance between data availability and currency, the amount of
wireless communication, and the cost of maintaining consistency. In order to better
characterize these personalizations, recomputational consistency was introduced, and
materialized-view consistency was enhanced with new levels that correspond to specific
view-currency customizations.
UNIT-4
Mobile Agent:
Threats to security generally fall into three main classes: disclosure of information,
denial of service, and corruption of information. There are a variety of ways to examine
these classes of threats in greater detail as they apply to agent systems. Here, we use the
components of an agent system to categorize the threats as a way to identify the possible
points of attack.
(Note: Certain computer software and hardware products and standards are identified in
this paper for illustration purposes. Such identification is not intended to imply
recommendation or endorsement by the National Institute of Standards and Technology, nor is
it intended to imply that the computer software and hardware products and standards
identified are necessarily the best available.)
It is important to note that many of the threats discussed here have counterparts in
conventional client-server systems and have always existed in some form in the past (e.g.,
executing code from an unknown source, whether downloaded from a network or supplied on a
floppy disk). Mobile agents simply offer a greater opportunity for abuse and misuse,
broadening the scale of threats significantly.
Four threat categories are identified: threats stemming from an agent attacking an agent
platform, an agent platform attacking an agent, an agent attacking another agent on the
agent platform, and other entities attacking the agent system. The last category covers the
cases of an agent attacking an agent on another agent platform, and of an agent platform
attacking another platform, since these attacks are primarily focused on the communications
capability of the platform to exploit potential vulnerabilities. The last category also
includes more conventional attacks against the underlying operating system of the agent
platform.
2.1. Agent-to-Platform
The agent-to-platform category represents the set of threats in which agents exploit
security weaknesses of an agent platform or launch attacks against an agent platform. This
set of threats includes masquerading, denial of service and unauthorized access.
Masquerading
Denial of Service
Mobile agents can launch denial-of-service attacks by consuming an excessive amount of the
agent platform's computing resources. These attacks can be launched intentionally, by
running attack scripts to exploit system vulnerabilities, or unintentionally, through
programming errors. Program testing, configuration management, design reviews, independent
testing, and other software engineering practices have been proposed to help reduce the
risk of programmers intentionally, or unintentionally, introducing malicious code into an
organization's computer systems.
Unauthorized Access
Access control mechanisms are used to prevent unauthorized users or processes from
accessing services and resources for which they have not been granted permissions and
privileges as specified by a security policy. Each agent visiting a platform must be
subject to the platform's security policy. Applying the proper access control mechanisms
requires the platform or agent to first authenticate a mobile agent's identity before it is
instantiated on the platform. An agent that has access to a platform and its services
without the proper authorization can harm other agents and the platform itself. A platform
that hosts agents representing various users and organizations must ensure that agents do
not have read or write access to data for which they have no authorization, including
access to residual data that may be stored in a cache or other temporary storage.
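The authenticate-then-authorize flow described above can be sketched minimally. All names here are illustrative assumptions, not a real agent-platform API, and the credential check is a stand-in for a proper authentication protocol.

```python
class AgentPlatform:
    """Sketch of admitting a mobile agent under a security policy.

    An arriving agent's claimed identity is authenticated first; only
    then is the platform's security policy consulted, and only the
    rights the policy allows are granted before instantiation.
    """

    def __init__(self, policy, known_identities):
        self.policy = policy            # agent identity -> set of rights
        self.known = known_identities   # identity -> expected credential

    def admit(self, agent_id, credential, requested_rights):
        # 1. Authenticate the agent's claimed identity before anything else.
        if self.known.get(agent_id) != credential:
            return None                 # reject: identity cannot be verified
        # 2. Authorize: grant only the rights the security policy allows.
        return self.policy.get(agent_id, set()) & set(requested_rights)
```

The intersection in step 2 reflects the text's point: an agent may request broad access, but it receives only what the platform's policy grants for its authenticated identity.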
Agent-to-Agent
The agent-to-agent category represents the set of threats in which agents exploit security
weaknesses of other agents or launch attacks against other agents. This set of threats
includes masquerading, unauthorized access, denial of service and repudiation. Many agent
platform components are also agents themselves. These platform agents provide system-level
services such as directory services and inter-platform communication services. Some agent
platforms allow direct inter-platform agent-to-agent communication, while others require
all incoming and outgoing messages to go through a platform communication agent. These
architectural decisions intertwine agent-to-agent and agent-to-platform security. This
section addresses agent-to-agent security threats and leaves the discussion of
platform-related threats to sections 2.1 and 2.3.
Masquerade
Agent-to-agent communication can take place directly between two agents or may require the
participation of the underlying platform and the agent services it provides. In either
case, an agent may attempt to disguise its identity in an effort to deceive the agent with
which it is communicating. An agent may pose as a well-known vendor of goods and services,
for example, and try to convince another unsuspecting agent to provide it with credit card
numbers, bank account information, some form of digital cash, or other private information.
Masquerading as another agent harms both the agent that is being deceived and the agent
whose identity has been assumed, especially in agent societies where reputation is valued
and used as a means to establish trust.
. Denial of Service
In addition to launching denial of service attacks on an agent platform, agents can also
launch denial of service attacks against other agents. For example, repeatedly sending
messages to another agent, or spamming agents with messages, may place undue
burden
on the message handling routines of the recipient. Agents that are being spammed may
choose to block messages from unauthorized agents, but even this task requires some
processing by the agent or its communication proxy. If an agent is charged by the
number
of CPU cycles it consumes on a platform, spamming an agent may cause the spammed
agent to have to pay a monetary cost in addition to a performance cost. Agent
communication languages and conversation policies must ensure that a malicious agent
doesn't engage another agent in an infinite conversation loop or engage the agent in
elaborate conversations with the sole purpose of tying up the agent's resources.
Malicious
agents can also intentionally distribute false or useless information to prevent other
agents
from completing their tasks correctly or in a timely manner.
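One common defense against message spamming is to make the recipient's acceptance of messages cheap and bounded. The following is a minimal sketch, not part of any particular agent framework, of a token-bucket rate limiter that an agent or its communication proxy could apply to inbound messages; the class name and limits are illustrative assumptions.

```python
import time

# Illustrative sketch: a token-bucket rate limiter for inbound agent
# messages. A spamming agent exhausts its token budget instead of tying
# up the recipient's message-handling resources.
class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Replenish tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False    # message dropped: sender exceeded its budget

bucket = TokenBucket(rate=5.0, capacity=10)
accepted = sum(1 for _ in range(100) if bucket.allow())
# Roughly the initial burst of 10 messages is accepted immediately;
# the rest are rejected until tokens replenish.
```

Checking each message still costs a few operations, which is why, as noted above, even blocking unauthorized agents requires some processing by the recipient or its proxy.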
Repudiation
Repudiation occurs when an agent, participating in a transaction or communication, later
claims that the transaction or communication never took place. Whether the cause is a
deliberate act or an accident, repudiation can lead to serious disputes that may not be
easily resolved unless the proper countermeasures are in place.
Unauthorized Access
If the agent platform has weak or no control mechanisms in place, an agent can directly
interfere with another agent by invoking its public methods (e.g., attempting a buffer
overflow, resetting it to its initial state, etc.), or by accessing and modifying the agent's
data or code. Modification of an agent's code is a particularly insidious form of attack,
since it can radically change the agent's behavior (e.g., turning a trusted agent into a
malicious one).
An
agent may also gain information about other agents’ activities by using platform
services
to eavesdrop on their communications.
Platform-to-Agent
The platform-to-agent category represents the set of threats in which platforms
compromise the security of agents. This set of threats includes masquerading, denial of
service, eavesdropping, and alteration.
Masquerade
One agent platform can masquerade as another platform in an effort to deceive a mobile
agent as to its true destination and corresponding security domain. An agent platform
masquerading as a trusted third party may be able to lure unsuspecting agents to the
platform and extract sensitive information from these agents. The masquerading
platform
can harm both the visiting agent and the platform whose identity it has assumed.
Denial of Service
When an agent arrives at an agent platform, it expects the platform to execute the
agent's
requests faithfully, provide fair allocation of resources, and abide by quality of service
agreements. A malicious agent platform, however, may ignore agent service requests,
introduce unacceptable delays for critical tasks such as placing market orders in a stock
market, simply not execute the agent’s code, or even terminate the agent without
notification.
Eavesdropping
The classical eavesdropping threat involves the interception and monitoring of secret
communications. The threat of eavesdropping, however, is further exacerbated in
mobile
agent systems because the agent platform can not only monitor communications, but
also
can monitor every instruction executed by the agent, all the unencrypted or public data
it
brings to the platform, and all the subsequent data generated on the platform. Since the
platform has access to the agent’s code, state, and data, the visiting agent must be wary
of
the fact that it may be exposing proprietary algorithms, trade secrets, negotiation
strategies, or other sensitive information.
Alteration
When an agent arrives at an agent platform it is exposing its code, state, and data to the
platform. Since an agent may visit several platforms under various security domains
throughout its lifetime, mechanisms must be in place to ensure the integrity of the
agent's
code, state, and data. A compromised or malicious platform must be prevented from
modifying an agent's code, state, or data without being detected. Modification of an
agent's code, and thus the subsequent behavior of the agent on other platforms, can be
detected by having the original author digitally sign the agent's code.
Agent platforms can also tamper with agent communications. Tampering with agent
communications, for example, could include deliberately changing data fields in
financial
transactions or even changing a "sell" message to a "buy" message. This type of
goal-oriented alteration of the data is more difficult than simply corrupting a message, but the
attacker clearly has a greater incentive and reward, if successful, in a goal-oriented
alteration attack.
Other-to-Agent Platform
The other-to-agent platform category represents the set of threats in which external
entities, including agents and agent platforms, threaten the security of an agent
platform.
This set of threats includes masquerading, denial of service, unauthorized access, and
copy
and replay.
Masquerade
Agents can request platform services both remotely and locally. An agent on a remote
platform can masquerade as another agent and request services and resources for which
it
is not authorized. Agents masquerading as other agents may act in conjunction with a
malicious platform to help deceive another remote platform or they may act alone. A
remote platform can also masquerade as another platform and mislead unsuspecting
platforms or agents about its true identity.
Unauthorized Access
Remote users, processes, and agents may request resources for which they are not
authorized. Remote access to the platform and the host machine itself must be carefully
protected, since conventional attack scripts freely available on the Internet can be used
to
subvert the operating system and directly gain control of all resources. Remote
administration of the platform's attributes or security policy may be desirable for an
administrator that is responsible for several distributed platforms, but allowing remote
administration may make the system administrator’s account or session the target of an
attack.
Denial of Service
Agent platform services can be accessed both remotely and locally. The agent services
offered by the platform and inter-platform communications can be disrupted by common
denial of service attacks. Agent platforms are also susceptible to all the conventional
denial of service attacks aimed at the underlying operating system or communication
protocols. These attacks are tracked by organizations such as the Computer Emergency
Response Team (CERT) at Carnegie Mellon University and the Federal Computer
Incident Response Capability (FedCIRC).
Copy and Replay
Every time a mobile agent moves from one platform to another it increases its exposure
to
security threats. A party that intercepts an agent, or agent message, in transit can
attempt
to copy the agent, or agent message, and clone or retransmit it. For example, the
interceptor can capture an agent’s "buy order" and replay it several times, having the
agent
buy more than the original agent had intended. The interceptor may copy and replay an
agent message or a complete agent.
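A standard defense against the "buy order" replay described above is to bind each message to a unique nonce and have the receiving platform reject duplicates. The sketch below is a hedged illustration; the field names ("nonce", "body") are assumptions, not part of any specific agent system.

```python
import hashlib

# Illustrative replay detection: the platform remembers a fingerprint of
# each message it has processed and rejects any exact repeat.
seen_fingerprints: set[str] = set()

def accept(message: dict) -> bool:
    """Return True the first time a message is seen, False on replay."""
    fingerprint = hashlib.sha256(
        f"{message['nonce']}|{message['body']}".encode()
    ).hexdigest()
    if fingerprint in seen_fingerprints:
        return False                 # replayed copy: reject
    seen_fingerprints.add(fingerprint)
    return True

order = {"nonce": "7f3a", "body": "buy 10 shares"}
assert accept(order) is True         # original buy order accepted
assert accept(order) is False        # replayed copy is rejected
```

In practice the nonce itself must also be protected by a signature; otherwise the interceptor could simply change the nonce and resubmit the order as a "new" message.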
Security Requirements
The users of networked computer systems have four main security requirements:
confidentiality, integrity, availability, and accountability. The users of agent and mobile
agent frameworks also have these same security requirements. This section provides a
brief overview of these security requirements and how they apply to agent frameworks.
Confidentiality
Any private data stored on a platform or carried by an agent must remain confidential.
Agent frameworks must be able to ensure that their intra- and inter-platform
communications remain confidential. Eavesdroppers can gather information about an
agent's activities not only from the content of the messages exchanged, but also from
the
message flow from one agent to another agent or agents. Monitoring the message flow
may allow other agents to infer useful information without having access to the actual
message content. A burst of messages from one agent to another, for example, may be
an
indication that an agent is in the market for a particular set of services offered by the
other
agent.
Integrity
The agent platform must protect agents from unauthorized modification of their code,
state, and data and ensure that only authorized agents or processes carry out any
modification of shared data. The agent itself cannot prevent a malicious agent platform
from tampering with its code, state, or data, but the agent can take measures to detect
this
tampering.
Accountability
Each process, human user, or agent on a given platform must be held accountable for
their
actions. In order to be held accountable each process, human user, or agent must be
uniquely identified, authenticated, and audited. Example actions for which they must be
held accountable include: access to an object, such as a file, or making administrative
changes to a platform security mechanism.
Audit logs keep users and processes accountable for their actions. Mobile agents create
audit trails across several platforms, with each platform logging separate events and
possibly auditing different views of the same event (e.g., registration within a remote
directory). In the case where an agent may require access to information distributed
across
different platforms it may be difficult to reconstruct a sequence of events. Platforms
that
keep distributed audit logs must be able to maintain a concept of global time or
ordering
of events.
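One way to maintain the ordering of distributed audit events without synchronized wall clocks is a logical clock. The sketch below uses Lamport clocks; the platform names and event labels are illustrative assumptions, and a real system would also need signatures on the log entries.

```python
# Sketch: Lamport logical clocks give distributed audit logs an
# ordering consistent with causality, so merged logs can be
# reconstructed into a plausible sequence of events.
class Platform:
    def __init__(self, name: str):
        self.name = name
        self.clock = 0
        self.audit_log: list[tuple[int, str]] = []

    def local_event(self, event: str) -> None:
        self.clock += 1
        self.audit_log.append((self.clock, event))

    def send(self, event: str) -> int:
        self.local_event(event)
        return self.clock            # timestamp travels with the agent

    def receive(self, sent_at: int, event: str) -> None:
        # Advance past the sender's clock so cause precedes effect.
        self.clock = max(self.clock, sent_at)
        self.local_event(event)

a, b = Platform("A"), Platform("B")
a.local_event("agent registered")
t = a.send("agent dispatched to B")
b.receive(t, "agent arrived from A")

# Merging the logs and sorting by logical time preserves causal order.
merged = sorted(a.audit_log + b.audit_log)
```

Lamport clocks only guarantee that causally related events are ordered correctly; concurrent events on different platforms may still be interleaved arbitrarily, which is one reason reconstructing a global view of a mobile agent's activity remains difficult.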
Accountability is also essential for building trust among agents and agent platforms.
An
authenticated agent may comply with the security policies of the agent platform, but
still
exhibit malicious behavior by lying or deliberately spreading false information.
Additional
auditing may be helpful in proving the malicious agent's attempts at deception. For
example, the communication acts of an ACL conversation could be logged, but this
results
in costly overhead. If it is assumed that the malicious agent does not value its reputation
within an agent community, then it would be difficult to prevent malicious agents from
lying.
Availability
The agent platform must be able to ensure the availability of both data and services to
local and remote agents. The agent platform must be able to provide controlled
concurrency, support for simultaneous access, deadlock management, and exclusive
access as required. Shared data must be available in a usable form, capacity must be
available to meet service needs, and provisions for the fair allocation of resources and
timeliness of service must be made.
Anonymity
The agent platform may need to balance an agent's need for privacy with the platform's
need to hold the agent accountable for its actions. The platform may be able to keep the
agent's identity secret from other agents and still maintain a form of reversible
anonymity
where it can determine the agent’s identity if necessary and legal.
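The reversible-anonymity idea above can be sketched as a pseudonym service: the platform issues random aliases to agents but keeps an escrow table that maps an alias back to a real identity when disclosure is authorized. All names and identifiers below are hypothetical.

```python
import secrets
from typing import Optional

# Minimal sketch of reversible anonymity: other agents only ever see a
# pseudonym, while the platform retains the mapping in escrow.
class PseudonymService:
    def __init__(self):
        self._escrow: dict[str, str] = {}   # pseudonym -> real identity

    def issue(self, real_identity: str) -> str:
        pseudonym = "anon-" + secrets.token_hex(4)
        self._escrow[pseudonym] = real_identity
        return pseudonym

    def reveal(self, pseudonym: str, authorized: bool) -> Optional[str]:
        # Reversal is gated on an authorization decision (e.g., a legal
        # request); unauthorized callers learn nothing.
        return self._escrow.get(pseudonym) if authorized else None

svc = PseudonymService()
alias = svc.issue("agent://alice@home-platform")
assert svc.reveal(alias, authorized=False) is None
assert svc.reveal(alias, authorized=True) == "agent://alice@home-platform"
```

This keeps the agent accountable (the mapping exists) while preserving its privacy against other agents, matching the balance the platform must strike.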
Countermeasures
One of the main concerns with an agent system implementation is ensuring that agents
are
not able to interfere with one another or with the underlying agent platform. One
common
approach for accomplishing this is to establish separate isolated domains for each agent
and the platform, and control all inter-domain access. In traditional terms this concept is
referred to as a reference monitor. An implementation of a reference monitor has the
following characteristics:
• It is always invoked and non-bypassable, mediating every access attempt,
• It is tamperproof, and
• It is small enough to be analyzed and tested for correctness.
Implementations of the reference monitor concept have been around since the early 1980s
and employ a number of conventional security techniques, which are applicable to the
agent environment. Such conventional techniques include the following:
• Mechanisms to isolate processes from one another and from the control process,
• Mechanisms to control access to computational resources,
• Cryptographic methods to encipher information exchanges,
• Cryptographic methods to identify and authenticate users, agents, and platforms,
and
• Mechanisms to audit security-relevant events occurring at the agent platform.
More recently developed techniques aimed at mobile code and mobile agent security
have
for the most part evolved along these traditional lines. Techniques devised for
protecting
the agent platform include the following:
Signed Code
A fundamental technique for protecting an agent system is signing code or other objects
with a digital signature. A digital signature serves as a means of confirming the
authenticity of an object, its origin, and its integrity. Typically the code signer is either the
creator of the agent, the user of the agent, or some entity that has reviewed the agent.
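The verification step can be illustrated with a small sketch. In a real deployment the signer would use a public-key signature (e.g., RSA or DSA) so anyone can verify; here an HMAC over the agent's code stands in for the signature purely to keep the example self-contained, and the key is a hypothetical placeholder.

```python
import hashlib
import hmac

# Simplified stand-in for code signing: a keyed tag over the agent's
# code lets a platform detect any modification since signing. A real
# system would use an asymmetric signature instead of an HMAC.
SIGNING_KEY = b"shared-secret-key"   # hypothetical key, for illustration

def sign_code(code: bytes) -> bytes:
    """Produce a tag binding the agent's code to the signer."""
    return hmac.new(SIGNING_KEY, code, hashlib.sha256).digest()

def verify_code(code: bytes, tag: bytes) -> bool:
    """Return True only if the code is unmodified since signing."""
    return hmac.compare_digest(sign_code(code), tag)

agent_code = b"def run(): return 'buy 10 shares'"
tag = sign_code(agent_code)

assert verify_code(agent_code, tag)             # untampered code verifies
assert not verify_code(agent_code + b"#", tag)  # any alteration is detected
```

As the section on alteration noted, this detects modification of the agent's code but not of its dynamically changing state, which requires separate mechanisms.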
State Appraisal
The goal of State Appraisal is to ensure that an agent has not been somehow
subverted due to alterations of its state information. The success of the technique relies
on
the extent to which harmful alterations to an agent's state can be predicted, and
countermeasures, in the form of appraisal functions, can be prepared before using the
agent.
Path Histories
The basic idea behind Path Histories is to maintain an authentic table record of
the prior platforms visited by an agent, so that a newly visited platform can determine
whether to process the agent and what resource constraints to apply.
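A simplified sketch of such a record is a chain of entries, each hashing over its predecessor, so a receiving platform can detect truncation or reordering of the recorded path. This illustrates only the chaining idea; in the actual technique each platform also signs its entry, and the platform names below are hypothetical.

```python
import hashlib

# Illustrative path history: each entry's digest covers the previous
# entry, so tampering anywhere breaks the chain.
def append_entry(history: list[dict], platform: str) -> None:
    prev = history[-1]["digest"] if history else ""
    digest = hashlib.sha256(f"{prev}|{platform}".encode()).hexdigest()
    history.append({"platform": platform, "digest": digest})

def verify_history(history: list[dict]) -> bool:
    prev = ""
    for entry in history:
        expected = hashlib.sha256(
            f"{prev}|{entry['platform']}".encode()).hexdigest()
        if entry["digest"] != expected:
            return False
        prev = expected
    return True

history: list[dict] = []
for platform in ["home", "broker-A", "market-B"]:
    append_entry(history, platform)

assert verify_history(history)
history[1]["platform"] = "evil-C"    # tamper with the recorded path
assert not verify_history(history)
```

Hash chaining alone does not authenticate who added an entry; the per-platform signatures of the real technique supply that.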
Proof Carrying Code
The approach taken by Proof Carrying Code obligates the code producer (e.g., the
author of an agent) to formally prove that the program possesses safety properties
previously stipulated by the code consumer (e.g., security policy of the agent platform).
Proof Carrying Code is a prevention technique, while code signing is an authenticity
and
identification technique used to deter, but not prevent the execution of unsafe code. The
code and proof are sent together to the code consumer where the safety properties can
be
verified.
Protecting Agents
One approach used to detect tampering by malicious hosts is to encapsulate the results
of
an agent’s actions, at each platform visited, for subsequent verification, either when the
agent returns to the point of origin or possibly at intermediate points as well.
Encapsulation may be done for different purposes with different mechanisms, such as
providing confidentiality using encryption, or for integrity and accountability using
digital
signature.
One interesting variation of Path Histories is a general scheme for allowing an agent's
itinerary to be recorded and tracked by another cooperating agent and vice versa in a
mutually supportive arrangement. When moving between agent platforms, an agent
conveys the last platform, current platform, and next platform information to the
cooperating peer through an authenticated channel.
A faulty agent platform can behave similar to a malicious one. Therefore, applying fault
tolerant capabilities to this environment should help counter the effects of malicious
platforms.
Execution Tracing
Execution Tracing is a technique for detecting unauthorized modifications of an agent
through the faithful recording of the agent's behavior during its execution on each
platform. The technique requires each platform involved to create and retain a
non-repudiable trace of the operations performed by the agent while resident there.
Obfuscated Code
The strategy behind Obfuscated Code is to scramble an agent's code in such a way that
no one is able to gain a complete understanding of its function, or to modify the resulting
code without detection.
Mobile agent technology is beginning to make its way out of research labs and is
finding
its way into many commercial applications areas. The following section takes a look at
these application areas and discusses relevant security issues for typical scenarios.
Electronic Commerce
Mobile agent-based electronic commerce applications have been proposed and are
being
developed for a number of diverse business areas, including contract negotiations, service
brokering, auctions, and stock trading.
Network Management
Mobile agents are also well suited for network management applications such as remote
network management, software distribution, and adaptive response to network events.
Most of the current network management software is based on the Simple Network
Management Protocol (SNMP).
Personal Digital Assistants (PDA)
Manufacturers of cell phones, personal organizers, car radios, and other consumer
electronic devices are introducing more and more functionality into their products and
are
becoming the focus of agent developers.
A number of advantages of using mobile code and mobile agent computing paradigms
have been proposed. These advantages include: overcoming network latency, reducing
network load, executing asynchronously and autonomously, adapting dynamically,
operating in heterogeneous environments, and having robust and fault-tolerant behavior.
Mobile agent solutions have been proposed for critical systems that need to respond to
changes in their environments in real time. An example of such an application is the use
of
mobile agents to control robots employed in distributed manufacturing processes.
Mobile
agents have been offered as a solution, since they can be dispatched from a central
controller to act locally and directly execute the controller's instructions.
Mobile agents are well suited for search and analysis problems involving multiple
distributed resources that require specialized tasks that aren't supported by the data
server.
A mobile agent-based search and data analysis approach can help decrease
network traffic resulting from the transfer of large amounts of data across a network for
local processing.
A lot of attention is being focused on the use of mobile agents with mobile devices such
as
cellular phones, personal digital assistants, automotive electronics, and military
equipment
[4]. Their asynchronous execution and autonomy makes them well-suited for
applications
that use fragile or expensive network connections. A mobile agent can be launched and
continue to operate even after the machine that launched it is no longer connected to the
network.
Adapting Dynamically
Mobile agents have the ability to sense their execution environment and autonomously
react to changes.
The ability of mobile agents to react dynamically to unfavorable situations and events
makes it easier to build robust and fault-tolerant distributed systems.