
What is mobile computing? Explain in detail.

Mobile computing refers to the technology and practices that enable users to access
and utilize computing resources while on the move. It encompasses a broad range of
elements, including:

Devices: Mobile devices such as smartphones, tablets, laptops, and wearables are the
primary tools for mobile computing. These devices are designed to be portable and
have features that support wireless connectivity, data processing, and user interaction.

Networks: Mobile computing relies on various wireless networks for connectivity, including Wi-Fi, cellular networks (3G, 4G, 5G), and satellite networks. These networks
allow mobile devices to communicate with each other and access the internet.

Software: Mobile operating systems such as Android, iOS, and Windows Phone are
specifically designed for mobile devices and provide the platform for running
applications. Mobile applications are designed to be optimized for use on mobile
devices and offer various functionalities, such as communication, entertainment,
productivity, and information access.

Services: Mobile computing services enable users to access information and resources
remotely. Examples include email, social media, cloud storage, mobile banking, and
streaming services.

Protocols: Mobile computing relies on various protocols for communication, data transfer, and security. These protocols ensure reliable and secure communication
between mobile devices and other systems.
Discuss registration area-based location management.

Registration area-based location management is a core aspect of mobile network technology, enabling efficient tracking of mobile devices while minimizing signaling
overhead. It allows mobile devices to roam freely within a defined area without needing
to constantly re-register with the network.

Here's a detailed breakdown of the concept:

Components:

• Tracking Area (TA): The network is divided into smaller sub-regions called
Tracking Areas. Each TA has a unique identifier and covers a specific
geographical area.
• Registration Area (RA): A Registration Area comprises one or more Tracking
Areas. When entering a new RA, a mobile device registers its presence with the
network.
• Visitor Location Register (VLR): This database stores the location information of
mobile devices currently roaming within its coverage area.
• Home Location Register (HLR): This database stores the permanent information
of all mobile devices belonging to the network.

Process:

1. Registration: When a mobile device enters a new Registration Area, it sends a registration request to the network.
2. Location Update: The network updates the VLR with the device's new location
information.
3. Tracking: The network continues to track the device's location within the RA
without requiring further registration.
4. Movement: If the device moves to a different RA, it repeats the registration
process with the new VLR.
5. Deregistration: When the device leaves the network coverage area, it sends a
deregistration message.
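
The following minimal Python sketch illustrates steps 1-4 above. It is illustrative only: the dictionary-based HLR/VLR, the Tracking Area to Registration Area mapping, and all identifiers are assumptions, not part of any real signalling protocol.

```python
# Illustrative sketch of registration-area-based location management.
# RA identifiers, device IDs, and the dict-based HLR/VLR are assumptions.

class Network:
    def __init__(self):
        self.hlr = {}            # device_id -> current RA (permanent register)
        self.vlr = {}            # ra_id -> set of visiting device_ids
        self.signaling_msgs = 0

    def register(self, device_id, ra_id):
        """Called only when a device enters a new Registration Area."""
        self.signaling_msgs += 1
        old_ra = self.hlr.get(device_id)
        if old_ra is not None:
            self.vlr.setdefault(old_ra, set()).discard(device_id)  # leave old RA
        self.vlr.setdefault(ra_id, set()).add(device_id)           # join new RA
        self.hlr[device_id] = ra_id                                 # location update


class MobileDevice:
    def __init__(self, device_id, network):
        self.device_id = device_id
        self.network = network
        self.current_ra = None

    def move_to_tracking_area(self, ta_id, ra_of_ta):
        """Moving between TAs inside the same RA needs no registration."""
        ra_id = ra_of_ta[ta_id]
        if ra_id != self.current_ra:
            self.network.register(self.device_id, ra_id)  # steps 1-2
            self.current_ra = ra_id
        # else: tracked within the RA without further signaling (step 3)


if __name__ == "__main__":
    ra_of_ta = {"TA1": "RA1", "TA2": "RA1", "TA3": "RA2"}  # assumed mapping
    net = Network()
    phone = MobileDevice("IMSI-001", net)
    for ta in ["TA1", "TA2", "TA3"]:       # TA1 -> TA2 stays in RA1, TA3 is RA2
        phone.move_to_tracking_area(ta, ra_of_ta)
    print("registrations sent:", net.signaling_msgs)  # 2, not 3
```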
Explain in detail the PCS location management scheme.

Location management is a vital aspect of Personal Communication Services (PCS) networks and mobile computing, enabling mobile devices to seamlessly roam across a network while maintaining connectivity and allowing services
to be delivered efficiently. Here's a breakdown of key concepts and technologies
involved:

Components:

• Mobile Device: The user's mobile phone, smartphone, or other portable device
equipped with a radio transceiver and a unique identifier (IMSI).
• Base Station (BS): An antenna tower that provides radio coverage for a specific
geographic area, also known as a cell.
• Mobile Switching Center (MSC): A central network element responsible for
routing calls and messages, performing handovers, and managing location
information.
• Home Location Register (HLR): A central database that stores the permanent
location information and subscription data for each mobile device on the network.
• Visitor Location Register (VLR): A temporary database within a specific MSC that
stores location information for mobile devices currently roaming within its
coverage area.

Location Management Techniques:

• Cell-based Location Management: This basic scheme updates the network every
time a mobile device moves to a new cell. It's simple but generates significant
signaling overhead.
• Location Area-based Location Management (LA): The network is divided into
larger areas called Location Areas, each containing multiple cells. Devices
register upon entering a new LA, reducing signaling overhead.
• Hierarchical Location Management: Combines cell and LA-based
approaches, offering a balance between signaling overhead and location
accuracy.
• Pointer-based Location Management: When a device moves, the old VLR keeps a
forwarding pointer to the new VLR instead of updating the HLR on every move,
reducing HLR update traffic at the cost of longer lookups at call setup.
• Movement-based Location Management: Tracks the direction and speed of the
device, predicting its future location and reducing update frequency.
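
To see why coarser registration areas reduce signaling, the short sketch below counts location updates for the same movement path under cell-based and LA-based schemes; the path and the cell-to-LA mapping are invented values.

```python
# Compare update counts for cell-based vs. location-area-based schemes.
# The movement path and the cell -> LA mapping below are invented examples.

def count_updates(path, la_of_cell=None):
    """Return how many location updates a device sends along `path`.

    With la_of_cell=None every cell change triggers an update (cell-based);
    otherwise an update is sent only when the Location Area changes.
    """
    updates = 0
    last = None
    for cell in path:
        region = cell if la_of_cell is None else la_of_cell[cell]
        if region != last:
            updates += 1
            last = region
    return updates


if __name__ == "__main__":
    path = ["C1", "C2", "C3", "C4", "C5", "C6"]
    la_of_cell = {"C1": "LA1", "C2": "LA1", "C3": "LA1",
                  "C4": "LA2", "C5": "LA2", "C6": "LA2"}
    print("cell-based updates:", count_updates(path))              # 6
    print("LA-based updates:  ", count_updates(path, la_of_cell))  # 2
```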
What is a data dissemination and management scheme?

A data dissemination and management scheme refers to a set of procedures and technologies used for efficiently distributing and managing data within a network or
system. It involves several key functions:

1. Data Acquisition: Collecting and gathering data from various sources, including sensors, databases, and user inputs.
2. Data Processing: Cleaning, transforming, and formatting the data into a usable format for further analysis and dissemination.
3. Data Storage: Archiving and storing data in a secure and reliable manner, ensuring its accessibility and long-term preservation.
4. Data Access: Providing authorized users with controlled access to specific data based on their roles and permissions.
5. Data Dissemination: Distributing data to relevant recipients or systems in a timely and efficient manner, considering factors like data format, security, and network bandwidth.
6. Data Management: Maintaining the data throughout its lifecycle, including updating, version control, and ensuring data integrity and consistency.

Types of Data Dissemination and Management Schemes:

• Push-based: Data is proactively sent to designated recipients based on predefined
schedules or triggers.
• Pull-based: Users request specific data themselves by querying the system.
• Hybrid: Combines both push and pull mechanisms to provide flexibility and
control over data delivery.
• Broadcast: Data is simultaneously transmitted to all interested parties within a
network.
• Multicast: Data is efficiently delivered to a specific group of recipients
simultaneously.
• Unicast: Data is sent directly from one source to a single recipient.
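
A minimal sketch of the push and pull styles, assuming a dictionary-backed server and callback-based subscribers (both are illustrative choices, not a specific product's API):

```python
# Minimal sketch of push- vs. pull-based dissemination.
# The data keys, client callbacks, and update trigger are assumptions.

class DataServer:
    def __init__(self):
        self.data = {}
        self.subscribers = {}          # key -> list of callbacks (push)

    def subscribe(self, key, callback):
        self.subscribers.setdefault(key, []).append(callback)

    def update(self, key, value):
        self.data[key] = value
        for cb in self.subscribers.get(key, []):   # push: proactive delivery
            cb(key, value)

    def query(self, key):
        return self.data.get(key)                  # pull: client-initiated


if __name__ == "__main__":
    server = DataServer()
    server.subscribe("weather", lambda k, v: print("pushed:", k, v))
    server.update("weather", "sunny")              # triggers push to subscriber
    print("pulled:", server.query("weather"))      # explicit pull by a client
```

A hybrid scheme simply combines both paths: subscribers receive pushed updates, while other clients pull on demand.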

Factors to Consider when Choosing a Data Dissemination and Management Scheme:

• Data volume and frequency of updates: Large data volumes or frequent updates
may require more robust and scalable solutions.
• Data sensitivity and security requirements: Sensitive data may require additional
security measures like encryption and access control.
• Network infrastructure and bandwidth limitations: The scheme should be
compatible with the available network resources and bandwidth.
• Latency and real-time requirements: Some applications may require immediate
data delivery with minimal latency.
Give the taxonomy of cache maintenance schemes.

Cache maintenance schemes are crucial for ensuring efficient data access and
optimizing performance in distributed systems. These schemes can be categorized
based on various criteria, including:

1. Write Policy:

• Write-through: All updates are immediately written to both the cache and the
main memory.
• Write-back: Updated data remains in the cache until it is replaced or explicitly
flushed to the main memory.
• Write-around: Writes go directly to the main memory, bypassing the cache.
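
A small sketch of the write-through and write-back policies, assuming a Python dictionary stands in for main memory:

```python
# Sketch of write-through vs. write-back caching over a dict "main memory".
# The memory model and key/value types are simplified assumptions.

class Cache:
    def __init__(self, memory, write_back=False):
        self.memory = memory
        self.write_back = write_back
        self.store = {}        # cached entries
        self.dirty = set()     # keys modified but not yet flushed (write-back)

    def write(self, key, value):
        self.store[key] = value
        if self.write_back:
            self.dirty.add(key)            # defer the memory update
        else:
            self.memory[key] = value       # write-through: update memory now

    def flush(self):
        for key in self.dirty:             # write-back: flush on demand/eviction
            self.memory[key] = self.store[key]
        self.dirty.clear()


if __name__ == "__main__":
    memory = {}
    cache = Cache(memory, write_back=True)
    cache.write("x", 1)
    print(memory)      # {} -- main memory not updated yet
    cache.flush()
    print(memory)      # {'x': 1}
```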

2. Invalidation Strategy:

• Write-invalidate: When a data item is updated in the main memory, the
corresponding entry in the cache is invalidated and must be re-fetched on the
next access.
• Write-update: When a data item is updated in the main memory, the
corresponding entry in the cache is also updated with the new value.
• No-write allocate: On a write miss, the data is written directly to main memory
without loading the corresponding block into the cache.

3. Consistency Model:

• Strong consistency: Guarantees that all reads and writes are immediately
reflected in all caches and the main memory.
• Weak consistency: Allows temporary inconsistencies among caches and the
main memory, achieving better performance but potentially sacrificing data
integrity.
• Eventual consistency: Guarantees that all replicas will eventually converge to the
same state, but inconsistencies may exist for a period of time.

4. Replication Strategy:

• Read-only: Caches are used for read operations only, with updates directed to
the main memory.
Briefly discuss enumeration-based, role-based, and context-aware computing.

Enumeration-based computing is an approach that uses a set of pre-defined rules or instructions to solve problems. These rules are typically applied to a
large number of potential solutions in order to find the best one. Enumeration-based
computing is often used for problems that have a large number of possible solutions,
such as scheduling problems, optimization problems, and search problems.

• Pros:
o Well-defined rules make it easy to understand and implement.
o Can be very efficient for problems with a small number of solutions.
• Cons:
o Can be very inefficient for problems with a large number of solutions.
o May not be able to find the best solution if the rules are not well-defined.

Role Based Computing

Role-based computing is a type of access control that assigns different levels of access
to users based on their roles within an organization. Roles are typically defined based
on a user's job function, department, or other factors. Role-based computing can be
used to improve security and compliance by ensuring that users only have access to the
data and resources they need to do their jobs.

• Pros:
o Improves security and compliance by restricting access to data and
resources.
o Makes it easier to manage user access by grouping users into roles.
• Cons:
o Can be complex to set up and maintain.
o May not be flexible enough for organizations with rapidly changing user
needs.
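
A minimal role-based access check might look like the sketch below; the roles, permissions, and user-to-role assignments are invented for illustration.

```python
# Minimal role-based access control check; roles and permissions are invented.

ROLE_PERMISSIONS = {
    "admin":   {"read_reports", "edit_reports", "manage_users"},
    "analyst": {"read_reports", "edit_reports"},
    "viewer":  {"read_reports"},
}

USER_ROLES = {"alice": "admin", "bob": "viewer"}   # assumed user -> role mapping


def is_allowed(user, permission):
    """Grant access only if the user's role includes the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())


if __name__ == "__main__":
    print(is_allowed("alice", "manage_users"))  # True
    print(is_allowed("bob", "edit_reports"))    # False
```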

Context Aware Computing

Context-aware computing is a type of computing that uses information about the user's
environment to adapt the behavior of the system. This information can include the user's
location, time of day, and other factors. Context-aware computing can be used to
improve the user experience by making the system more responsive and personalized.
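
As a simple illustration, the sketch below picks a notification profile from the user's location and the time of day; the contexts and rules are made-up examples, not a standard context model.

```python
# Sketch of context-aware adaptation: behavior depends on location and time.
# The locations, time windows, and profiles below are invented examples.

from datetime import time


def choose_profile(location, now):
    """Pick a notification profile from the user's current context."""
    if location == "office" and time(9) <= now <= time(17):
        return "silent"          # suppress notifications during work hours
    if location == "home" and now >= time(22):
        return "do_not_disturb"  # quiet hours at home in the evening
    return "normal"


if __name__ == "__main__":
    print(choose_profile("office", time(10, 30)))   # silent
    print(choose_profile("home", time(23, 0)))      # do_not_disturb
    print(choose_profile("outdoors", time(12, 0)))  # normal
```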
What is mobile middleware? Briefly explain adaptation, agents, and service discovery.
Mobile middleware is software that provides a layer of abstraction between mobile
applications and the underlying operating system and network. It allows developers to
create applications that are independent of the specific device or network they are
running on. Some of the key features of mobile middleware include:

Adaptation
Adaptation middleware helps applications to adapt to the changing conditions of the
mobile environment. This can be done by:

• Monitoring the environment: Adaptation middleware can monitor the mobile
environment for changes in network connectivity, battery life, and other factors.
• Taking actions: When the environment changes, adaptation middleware can take
actions to adapt the application's behavior. For example, it can reduce the
application's bandwidth usage if the network connectivity is poor.
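
A minimal sketch of this monitor-and-adapt loop, where the thresholds, quality levels, and environment fields are assumptions chosen for illustration:

```python
# Sketch of adaptation middleware reacting to monitored conditions.
# The thresholds, quality levels, and Environment fields are assumptions.

from dataclasses import dataclass


@dataclass
class Environment:
    bandwidth_kbps: int
    battery_percent: int


def adapt(env):
    """Return application settings adapted to the current environment."""
    settings = {"video_quality": "high", "background_sync": True}
    if env.bandwidth_kbps < 500:          # poor connectivity -> save bandwidth
        settings["video_quality"] = "low"
    if env.battery_percent < 20:          # low battery -> reduce activity
        settings["background_sync"] = False
    return settings


if __name__ == "__main__":
    print(adapt(Environment(bandwidth_kbps=4000, battery_percent=80)))
    print(adapt(Environment(bandwidth_kbps=200, battery_percent=15)))
```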

Agents
Mobile agents are autonomous pieces of code that can move between devices and
networks. They can be used to perform a variety of tasks, such as:

• Data collection: Mobile agents can be used to collect data from mobile devices
and send it back to a central server.
• Task execution: Mobile agents can be used to execute tasks on mobile
devices, such as downloading and installing applications.
• Service discovery: Mobile agents can be used to discover and access services
that are available on the network.

There are three main types of mobile agents:

• Data agents: These agents are designed to collect and transport data.
• Code agents: These agents are designed to execute code on mobile devices.
• Service agents: These agents are designed to discover and access services on
the network.

Service Discovery
Service discovery middleware allows applications to discover and access services that
are available on the network. This is done by:

• Registering services: Service providers can register their services with a service
discovery server.
• Publishing service descriptions: Service providers can publish descriptions of
their services that include information about the service's name, description, and
location.
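
A toy registry capturing this register-and-lookup interaction is sketched below; the service names, descriptions, and locations are invented examples, and real discovery systems add leases, scopes, and richer queries.

```python
# Sketch of service-discovery middleware: providers register service
# descriptions, clients look them up by name. All service details are invented.

class ServiceRegistry:
    def __init__(self):
        self.services = {}      # name -> description dict

    def register(self, name, description, location):
        self.services[name] = {"description": description,
                               "location": location}

    def lookup(self, name):
        """Return the service description, or None if not registered."""
        return self.services.get(name)


if __name__ == "__main__":
    registry = ServiceRegistry()
    registry.register("print-service", "duplex laser printer",
                      "printer.local:631")
    print(registry.lookup("print-service"))
    print(registry.lookup("backup-service"))   # None: not advertised
```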

Elaborate on Discovery and advertisement protocols

Discovery and advertisement protocols play a crucial role in various networks, enabling
devices and services to find and communicate with each other efficiently. Here's an in-
depth explanation of these protocols and their functions:

Discovery Protocols:

These protocols allow devices to find and learn about available services or resources
within a network. They typically involve the following functionalities:

• Sending probes or requests: Devices broadcast or unicast messages asking for
specific services or resources.
• Responding to inquiries: Services or resource providers respond to probes with
information about their capabilities and availability.
• Filtering and selection: Devices can filter and select the most suitable service or
resource based on their specific needs and the received information.

Examples of Discovery Protocols:

• DNS (Domain Name System): Translates human-readable domain names into
numeric IP addresses, allowing devices to find websites and other online
resources.
• DHCP (Dynamic Host Configuration Protocol): Assigns IP addresses and other
network configuration settings to devices automatically.
• Bonjour (Zeroconf): Enables automatic discovery and configuration of devices
and services on local networks without a central server.
• Service Discovery Protocol (SDP): Used in Bluetooth networks to discover
nearby devices and services.

Advertisement Protocols:

These protocols allow services and resource providers to announce their availability and
capabilities to devices within a network. They often involve:

• Broadcasting or unicasting messages: Services advertise their presence and
information about themselves to nearby devices.
• Information inclusion: Advertisements typically include details like service
name, description, location, and access protocols.
• Filtering and subscription: Devices can filter advertisements based on their needs
and subscribe to receive updates from specific services.

Write a short note on garbage collection and eventing.

Garbage collection is a process in computer science that automatically identifies and removes unused objects from computer memory. This helps to prevent memory leaks and improve the overall performance of the system. There are two main types of garbage collection: reference counting, which frees an object as soon as no references to it remain, and tracing (e.g., mark-and-sweep), which periodically finds and reclaims objects that are no longer reachable.
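
To make the reference-counting idea concrete, here is a toy sketch; the manual incref/decref calls and object names are illustrative only, since real runtimes maintain these counts (or trace reachability) automatically.

```python
# Toy reference-counting sketch: an object is reclaimed when its count hits 0.
# Manual incref/decref calls are for illustration only.

class RefCounted:
    def __init__(self, name):
        self.name = name
        self.refcount = 0

    def incref(self):
        self.refcount += 1

    def decref(self):
        self.refcount -= 1
        if self.refcount == 0:
            print(f"collecting unused object: {self.name}")


if __name__ == "__main__":
    obj = RefCounted("session-cache")
    obj.incref()   # first reference created
    obj.incref()   # second reference created
    obj.decref()   # one reference dropped
    obj.decref()   # last reference dropped -> object collected
```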

Eventing:

Eventing is a programming paradigm in which objects communicate by sending and receiving events. Events are notifications that something has happened, and they can
be used to trigger specific actions or reactions in other objects. Eventing can be
implemented in different ways, but some common approaches include:

• Publish-subscribe: Objects can subscribe to events published by other
objects. When an event is published, all subscribed objects are notified.
• Topic-based: Events are published to specific topics, and objects can subscribe
to specific topics. This allows for more granular control over event delivery.
• Direct communication: Objects can directly send events to each other. This can
be useful for situations where objects need to communicate directly with each
other.
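
A minimal topic-based publish-subscribe sketch, with invented topic names and handlers:

```python
# Sketch of topic-based publish-subscribe eventing.
# Topic names and handler functions are invented for illustration.

class EventBus:
    def __init__(self):
        self.handlers = {}      # topic -> list of callbacks

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self.handlers.get(topic, []):   # notify all subscribers
            handler(event)


if __name__ == "__main__":
    bus = EventBus()
    bus.subscribe("battery/low", lambda e: print("dim screen:", e))
    bus.subscribe("battery/low", lambda e: print("pause sync:", e))
    bus.publish("battery/low", {"level": 15})   # both subscribers are notified
    bus.publish("battery/ok", {"level": 80})    # no subscribers, nothing happens
```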

Benefits of Garbage Collection:


• Improves performance: By preventing memory leaks, garbage collection can help
to improve the performance of a system.
• Reduces memory usage: By removing unused objects from memory, garbage
collection can help to reduce memory usage.
• Simplifies memory management: By automating memory management, garbage
collection can simplify the task of developing software.

Discuss mobile and wireless security issues.

1. Data Security:

• Interception: Wireless transmissions are susceptible to eavesdropping, allowing
attackers to steal sensitive data like passwords, financial information, and
personal communication.
• Data Leakage: Malicious applications can steal data stored on the device or
access sensitive information through vulnerabilities in the operating system or
applications.
• Unsecured Public Wi-Fi: Public Wi-Fi networks are often unencrypted, making
them vulnerable to attacks where users' data can be intercepted.

2. Device Security:

• Operating System vulnerabilities: Mobile operating systems are complex and
constantly evolving, making them susceptible to vulnerabilities that attackers can
exploit.
• App vulnerabilities: Malicious apps can exploit vulnerabilities in legitimate apps to
gain access to sensitive data and functionality.
• Physical theft: Mobile devices are easily lost or stolen, exposing sensitive data
stored on the device if not adequately protected.

3. Network Security:

• Weak authentication: Weak authentication mechanisms for accessing mobile
networks and services can be exploited by attackers to gain unauthorized
access.
• Man-in-the-middle attacks: Attackers can intercept communication between
devices and networks, eavesdropping on conversations and manipulating data.
What are integrity codes? Differentiate between a checksum and a cryptographic hash.

Integrity codes are digital fingerprints used to verify the data's integrity and authenticity. They
are generated by applying a mathematical algorithm to the data, resulting in a unique value. Any
alteration to the data will lead to a different value, indicating data corruption or tampering.

1. Checksums:

• Function: Checksums are simple algorithms designed to detect accidental data errors
during transmission or storage.
• Characteristics:
o Fast and efficient to compute.
o Collision-prone: Two different data sets can produce the same checksum, leading
to false negatives.
o Not secure: Offer minimal protection against intentional data manipulation.
• Common examples:
o CRC (Cyclic Redundancy Check): Used in file transfers, network protocols, and
storage media.
o Fletcher's checksum: Used in some networking protocols (for example, OSPF link-state advertisements).

2. Cryptographic Hashes:

• Function: Cryptographic hashes are more complex algorithms designed to
ensure data integrity and authenticity, providing stronger protection against
intentional data manipulation.
• Characteristics:
o Computationally expensive: Requires more processing power to compute
compared to checksums.
o Collision-resistant: Highly unlikely for two different data sets to produce
the same hash value.
o Secure: Offer stronger protection against malicious attacks due to their
one-way nature, making it difficult to reverse the hash and reconstruct the
original data.
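
The difference is easy to see with Python's standard library: zlib.crc32 computes a fast checksum, hashlib.sha256 a cryptographic hash. The messages below are made-up examples.

```python
# Compare a simple checksum (CRC-32) with a cryptographic hash (SHA-256)
# on a small message; both values change when the data is altered.

import hashlib
import zlib

data = b"transfer $100 to account 42"
tampered = b"transfer $900 to account 42"

print("CRC-32  :", zlib.crc32(data))
print("CRC-32' :", zlib.crc32(tampered))                  # differs, but easy to forge
print("SHA-256 :", hashlib.sha256(data).hexdigest())
print("SHA-256':", hashlib.sha256(tampered).hexdigest())  # collision-resistant
```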
Explain in detail Message Authentication Code (MAC).

A Message Authentication Code (MAC), also sometimes referred to as a tag, is a cryptographic checksum appended to a message to ensure its authenticity and integrity.
It serves two main purposes:

1. Data integrity: Guarantees that the message has not been altered during transmission or storage. Any modification to the message will result in a different MAC value, indicating data tampering.
2. Data origin authentication: Verifies the message's origin and prevents unauthorized entities from impersonating the legitimate sender.

How it works:

1. MAC generation: The sender generates a MAC using a secret key shared with
the intended recipient. This secret key is not included in the message itself and
remains confidential.
2. Message transmission: The sender transmits the original message along with the
generated MAC to the recipient.
3. MAC verification: Upon receiving the message, the recipient applies the same
MAC algorithm using the shared secret key to generate a new MAC value.
4. Comparison: The recipient compares the generated MAC value with the received
MAC value. If both values match, it confirms the message's integrity and
authenticity.

Types of MACs:

• HMAC (Hash-based Message Authentication Code): The most widely used MAC
algorithm, based on cryptographic hash functions like SHA-256 and SHA-3.
• CBC-MAC (Cipher Block Chaining Message Authentication Code): A MAC
algorithm based on block cipher encryption algorithms like AES.
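
For example, HMAC generation and verification with Python's standard hmac module (the key and messages are placeholder values):

```python
# Generate and verify an HMAC-SHA256 tag; the key and messages are examples.

import hashlib
import hmac

key = b"shared-secret-key"          # known only to sender and receiver
message = b"pay 50 to bob"

tag = hmac.new(key, message, hashlib.sha256).digest()   # sender side

# Receiver recomputes the tag and compares in constant time.
received_ok = hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).digest())
tampered_ok = hmac.compare_digest(
    tag, hmac.new(key, b"pay 5000 to bob", hashlib.sha256).digest())

print("genuine message verifies:", received_ok)   # True
print("tampered message verifies:", tampered_ok)  # False
```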

Benefits of using MACs:

• Enhanced security: Provides stronger protection against data tampering and
forgery compared to simple checksums.
• Data origin assurance: Confirms the message came from a holder of the shared
secret key; note that, unlike digital signatures, a MAC cannot provide
non-repudiation, because both parties share the same key.
• Improved message integrity: Guarantees the message received is identical to the
one sent, even in the presence of transmission errors.
What is Wired Equivalent Privacy? Give the goals of WEP.

Wired Equivalent Privacy (WEP) is a security protocol designed to provide data confidentiality for wireless networks, aiming to offer a level of security comparable to
that of a wired local area network (LAN). It was introduced in the original IEEE 802.11
standard ratified in 1997 and was the primary security solution for Wi-Fi networks for
several years.

Goals of WEP:

• Data confidentiality: WEP aimed to encrypt data transmitted over the wireless
network, preventing unauthorized users from eavesdropping and stealing
sensitive information.
• Message integrity: WEP was designed to ensure the integrity of data
transmissions, preventing unauthorized modifications to messages and
guaranteeing that data received is identical to the data sent.
• Access control: WEP implemented basic access control mechanisms, allowing
network administrators to restrict access to authorized users only.
• Authentication: WEP incorporated authentication mechanisms to verify the
identity of users attempting to connect to the network and prevent unauthorized
access.
• Interoperability: WEP targeted interoperability across different Wi-Fi devices and
network implementations.

However, WEP was plagued by several weaknesses and vulnerabilities:

• Weak encryption: WEP used the RC4 stream cipher algorithm, which was found
to be insecure and susceptible to various cryptanalysis attacks.
• Limited key management: WEP employed static encryption keys, making them
vulnerable to cracking and reuse attacks.
• Vulnerable initialization vectors (IVs): WEP used short and predictable
IVs, leading to vulnerabilities that allowed attackers to decrypt messages and
forge new ones.
• Weak integrity protection: WEP's CRC-32 Integrity Check Value is linear rather
than cryptographic, so attackers can modify frames and recompute a valid ICV,
enabling data manipulation attacks.
Explain WEP data frame

A WEP data frame is the basic unit of information transmitted over a wireless network
using the Wired Equivalent Privacy (WEP) protocol. It encapsulates the data payload
and additional information necessary for secure communication.

Structure of a WEP data frame:

• Frame header: Contains information about the frame type, source and
destination addresses, and length.
• WEP header: Includes the IV (Initialization Vector), Key ID, and other control
flags.
• WEP data: The actual data payload encrypted with the RC4 stream cipher using
a shared secret key and the IV.
• ICV (Integrity Check Value): A CRC-32 checksum computed over the plaintext
data and encrypted along with it, used to detect data corruption.
• Trailer: Marks the end of the frame.

Key components of a WEP data frame:

• IV (Initialization Vector): A 24-bit value combined with the shared secret key
for each frame, so that the same key does not produce the same RC4 keystream
for every message.
• Key ID: Identifies the specific encryption key used to encrypt the data.
• RC4 stream cipher: A symmetric encryption algorithm used to encrypt the data
payload.
• ICV (Integrity Check Value): A CRC-32 checksum calculated over the data to
verify its integrity; it is not a cryptographic hash, which is one of WEP's
weaknesses.
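
The sketch below shows, in simplified form, how a WEP frame body is produced from these components: a CRC-32 ICV is appended to the payload, and the result is RC4-encrypted under IV || key. The key, IV, and payload are arbitrary example values, and the code is for illustration only; WEP should not be used in practice.

```python
# Educational sketch of WEP frame-body construction: CRC-32 ICV appended to
# the payload, then RC4-encrypted with the 24-bit IV prepended to the shared
# key. All values below are arbitrary examples.

import struct
import zlib


def rc4(key, data):
    """Plain RC4 stream cipher applied to `data` (encrypt == decrypt)."""
    s = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm (KSA)
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)


def wep_encrypt(shared_key, iv, payload):
    icv = struct.pack("<I", zlib.crc32(payload))      # 4-byte CRC-32 ICV
    return iv + rc4(iv + shared_key, payload + icv)   # IV is sent in the clear


if __name__ == "__main__":
    key = bytes.fromhex("0badc0ffee")      # 40-bit WEP key (example value)
    iv = bytes.fromhex("a1b2c3")           # 24-bit IV (example value)
    frame_body = wep_encrypt(key, iv, b"hello over the air")
    print(frame_body.hex())
```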

Vulnerabilities of WEP data frames:

• Weak key management: Static keys used in WEP are vulnerable to cracking and
reuse attacks.
• Weak IVs: Short and predictable IVs make WEP data frames susceptible to
cryptanalysis attacks.
• Weak integrity protection: The CRC-32 ICV is linear rather than cryptographic,
so frames can be modified and the ICV recalculated, enabling data manipulation
attacks.
Briefly discuss wireless generations 1G to 4G.

1G (First Generation):

• Introduced in the early 1980s, 1G was primarily used for voice calls.
• Analog technology offered limited data transmission capabilities.
• Limited coverage and poor call quality were common issues.
• Examples: AMPS (Advanced Mobile Phone System), NMT (Nordic Mobile
Telephony).

2G (Second Generation):

• Emerged in the early 1990s, offering significant improvements over 1G.
• Introduced digital technology, allowing for higher data transfer rates.
• Enabled basic data services such as text messaging (SMS).
• Examples: GSM (Global System for Mobile Communications), CDMA (Code
Division Multiple Access).

3G (Third Generation):

• Launched in the early 2000s, 3G marked a significant leap forward in mobile
technology.
• Provided significantly faster data speeds, enabling mobile internet access and
multimedia applications.
• Supported video calls, mobile gaming, and basic streaming services.
• Examples: UMTS (Universal Mobile Telecommunications System), CDMA2000.

4G (Fourth Generation):

• Introduced in the late 2000s, 4G revolutionized mobile communication with its
unprecedented data speeds.
• Enabled high-definition video streaming, online gaming, and mobile cloud
services.
• Paved the way for the development of new applications and services reliant on
high-speed mobile internet.
• Examples: LTE (Long-Term Evolution), WiMAX (Worldwide Interoperability for
Microwave Access).
