MC-1
Mobile computing refers to the technology and practices that enable users to access
and utilize computing resources while on the move. It encompasses a broad range of
elements, including:
Devices: Mobile devices such as smartphones, tablets, laptops, and wearables are the
primary tools for mobile computing. These devices are designed to be portable and
have features that support wireless connectivity, data processing, and user interaction.
Software: Mobile operating systems such as Android, iOS, and Windows Phone are
specifically designed for mobile devices and provide the platform for running
applications. Mobile applications are designed to be optimized for use on mobile
devices and offer various functionalities, such as communication, entertainment,
productivity, and information access.
Services: Mobile computing services enable users to access information and resources
remotely. Examples include email, social media, cloud storage, mobile banking, and
streaming services.
Components:
• Tracking Area (TA): The network is divided into smaller sub-regions called
Tracking Areas. Each TA has a unique identifier and covers a specific
geographical area.
• Registration Area (RA): A Registration Area comprises one or more Tracking
Areas. When entering a new RA, a mobile device registers its presence with the
network.
• Visitor Location Register (VLR): This database stores the location information of
mobile devices currently roaming within its coverage area.
• Home Location Register (HLR): This database stores the permanent information
of all mobile devices belonging to the network.
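The interplay between the HLR and VLR can be sketched as a pair of registries: when a device enters a new registration area, the local VLR records it and points the HLR at itself. This is a minimal illustrative sketch, not a real signaling protocol; the class and method names are made up.

```python
class HLR:
    """Permanent registry: maps each device ID (IMSI) to its current VLR."""
    def __init__(self):
        self.current_vlr = {}

    def update_location(self, imsi, vlr):
        self.current_vlr[imsi] = vlr


class VLR:
    """Temporary registry for devices roaming in this registration area."""
    def __init__(self, area, hlr):
        self.area = area
        self.hlr = hlr
        self.visitors = set()

    def register(self, imsi):
        self.visitors.add(imsi)
        self.hlr.update_location(imsi, self)  # point the HLR at this VLR


hlr = HLR()
vlr_a, vlr_b = VLR("RA-A", hlr), VLR("RA-B", hlr)

vlr_a.register("IMSI-001")           # device enters RA-A
assert hlr.current_vlr["IMSI-001"] is vlr_a

vlr_b.register("IMSI-001")           # device moves to RA-B
vlr_a.visitors.discard("IMSI-001")   # old VLR purges its stale entry
assert hlr.current_vlr["IMSI-001"] is vlr_b
```

An incoming call would first consult the HLR to find the serving VLR, then page the device within that area.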
Process:
Components:
• Mobile Device: The user's mobile phone, smartphone, or other portable device
equipped with a radio transceiver and a unique identifier (IMSI).
• Base Station (BS): An antenna tower that provides radio coverage for a specific
geographic area, also known as a cell.
• Mobile Switching Center (MSC): A central network element responsible for
routing calls and messages, performing handovers, and managing location
information.
• Home Location Register (HLR): A central database that stores the permanent
location information and subscription data for each mobile device on the network.
• Visitor Location Register (VLR): A temporary database within a specific MSC that
stores location information for mobile devices currently roaming within its
coverage area.
• Cell-based Location Management: This basic scheme updates the network every
time a mobile device moves to a new cell. It's simple but generates significant
signaling overhead.
• Location Area-based Location Management (LA): The network is divided into
larger areas called Location Areas, each containing multiple cells. Devices
register upon entering a new LA, reducing signaling overhead.
• Hierarchical Location Management: Combines cell and LA-based
approaches, offering a balance between signaling overhead and location
accuracy.
• Pointer-based Location Management: Stores pointers to VLRs in the
HLR, reducing data storage but requiring additional processing.
• Movement-based Location Management: Tracks the direction and speed of the
device, predicting its future location and reducing update frequency.
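The saving from Location Area-based management over the cell-based scheme can be shown with a short sketch: the device signals the network only when it crosses an LA boundary, not on every cell change. The cell-to-LA mapping and the movement path below are invented for illustration.

```python
# Hypothetical mapping of cells to Location Areas.
cell_to_la = {"c1": "LA1", "c2": "LA1", "c3": "LA2", "c4": "LA2"}

def location_updates(path):
    """Return the registration messages sent along a path of cells."""
    updates, current_la = [], None
    for cell in path:
        la = cell_to_la[cell]
        if la != current_la:          # crossed an LA boundary
            updates.append(la)
            current_la = la
    return updates

# Four cell changes, but only two registration messages:
print(location_updates(["c1", "c2", "c3", "c4"]))  # ['LA1', 'LA2']
```

A pure cell-based scheme would have sent four updates on the same path; the trade-off is that an incoming call must now page every cell in the last registered LA.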
What is a data dissemination and management scheme?
1. Data Acquisition: Collecting and gathering data from various sources, including sensors, databases, and user inputs.
2. Data Processing: Cleaning, transforming, and formatting the data into a usable format for further analysis and dissemination.
3. Data Storage: Archiving and storing data in a secure and reliable manner, ensuring its accessibility and long-term preservation.
4. Data Access: Providing authorized users with controlled access to specific data based on their roles and permissions.
5. Data Dissemination: Distributing data to relevant recipients or systems in a timely and efficient manner, considering factors like data format, security, and network bandwidth.
6. Data Management: Maintaining the data throughout its lifecycle, including updating, version control, and ensuring data integrity and consistency.
Factors to consider when choosing a data dissemination and management scheme include:
• Data volume and frequency of updates: Large data volumes or frequent updates
may require more robust and scalable solutions.
• Data sensitivity and security requirements: Sensitive data may require additional
security measures like encryption and access control.
• Network infrastructure and bandwidth limitations: The scheme should be
compatible with the available network resources and bandwidth.
• Latency and real-time requirements: Some applications may require immediate
data delivery with minimal latency.
Give the taxonomy of cache maintenance schemes.
Cache maintenance schemes are crucial for ensuring efficient data access and
optimizing performance in distributed systems. These schemes can be categorized
based on various criteria, including:
1. Write Policy:
• Write-through: All updates are immediately written to both the cache and the
main memory.
• Write-back: Updated data remains in the cache until it is replaced or explicitly
flushed to the main memory.
• Write-around: Writes are directed straight to the main memory, bypassing the
cache.
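The difference between write-through and write-back can be captured in a toy cache; the class below is an illustrative sketch, not a real cache implementation.

```python
class Cache:
    """Toy cache demonstrating write-through vs write-back policies."""
    def __init__(self, policy):
        self.policy = policy            # "write-through" or "write-back"
        self.cache, self.memory = {}, {}
        self.dirty = set()

    def write(self, key, value):
        self.cache[key] = value
        if self.policy == "write-through":
            self.memory[key] = value    # memory updated immediately
        else:
            self.dirty.add(key)         # memory updated later, on flush

    def flush(self):
        """Write back all dirty entries (as on eviction or shutdown)."""
        for key in self.dirty:
            self.memory[key] = self.cache[key]
        self.dirty.clear()


wt = Cache("write-through")
wt.write("x", 1)
assert wt.memory["x"] == 1              # already in main memory

wb = Cache("write-back")
wb.write("x", 1)
assert "x" not in wb.memory             # memory is stale until flush
wb.flush()
assert wb.memory["x"] == 1
```

Write-back batches memory traffic at the cost of a window where the cache and memory disagree — which is exactly what the consistency models below describe.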
2. Invalidation Strategy:
3. Consistency Model:
• Strong consistency: Guarantees that all reads and writes are immediately
reflected in all caches and the main memory.
• Weak consistency: Allows temporary inconsistencies among caches and the
main memory, achieving better performance but potentially sacrificing data
integrity.
• Eventual consistency: Guarantees that all replicas will eventually converge to the
same state, but inconsistencies may exist for a period of time.
4. Replication Strategy:
• Read-only: Caches are used for read operations only, with updates directed to
the main memory.
Briefly discuss enumeration-based, role-based, and context-aware computing.
Enumeration-based computing solves a problem by systematically listing (enumerating) candidate solutions and testing each against well-defined rules until a valid one is found.
• Pros:
o Well-defined rules make it easy to understand and implement.
o Can be very efficient for problems with a small number of solutions.
• Cons:
o Can be very inefficient for problems with a large number of solutions.
o May not be able to find the best solution if the rules are not well-defined.
Role-based computing is a type of access control that assigns different levels of access
to users based on their roles within an organization. Roles are typically defined based
on a user's job function, department, or other factors. Role-based computing can be
used to improve security and compliance by ensuring that users only have access to the
data and resources they need to do their jobs.
• Pros:
o Improves security and compliance by restricting access to data and
resources.
o Makes it easier to manage user access by grouping users into roles.
• Cons:
o Can be complex to set up and maintain.
o May not be flexible enough for organizations with rapidly changing user
needs.
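A role-based access check reduces to a two-step lookup: user to role, then role to permission set. The roles, users, and permissions below are examples only.

```python
# Hypothetical role and user tables for an RBAC check.
ROLE_PERMISSIONS = {
    "accountant": {"read_ledger", "write_ledger"},
    "auditor":    {"read_ledger"},
}

USER_ROLES = {"alice": "accountant", "bob": "auditor"}

def can(user, permission):
    """Return True if the user's role grants the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can("alice", "write_ledger")        # accountants may write
assert not can("bob", "write_ledger")      # auditors may only read
assert not can("mallory", "read_ledger")   # unknown users get nothing
```

Grouping users into roles means a job change is a one-line update to USER_ROLES rather than an audit of per-user permissions.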
Context-aware computing is a type of computing that uses information about the user's
environment to adapt the behavior of the system. This information can include the user's
location, time of day, and other factors. Context-aware computing can be used to
improve the user experience by making the system more responsive and personalized.
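Context-aware adaptation can be as simple as a rule table over the current context; the rules and context fields below are invented for illustration.

```python
def notification_mode(context):
    """Pick a notification mode from the user's current context (toy rules)."""
    if context.get("location") == "meeting_room":
        return "silent"                  # never ring in a meeting
    hour = context.get("hour", 12)
    if hour >= 22 or hour < 7:
        return "do_not_disturb"          # quiet hours at night
    return "ring"

assert notification_mode({"location": "meeting_room", "hour": 14}) == "silent"
assert notification_mode({"location": "home", "hour": 23}) == "do_not_disturb"
assert notification_mode({"location": "office", "hour": 10}) == "ring"
```

Real context-aware systems feed such rules (or learned models) with sensor data — GPS, clock, accelerometer, calendar — rather than hand-supplied dictionaries.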
What is mobile middleware? Briefly explain adaptation, agents, and service discovery.
Mobile middleware is software that provides a layer of abstraction between mobile
applications and the underlying operating system and network. It allows developers to
create applications that are independent of the specific device or network they are
running on. Some of the key features of mobile middleware include:
Adaptation
Adaptation middleware helps applications adapt to the changing conditions of the mobile environment, such as fluctuating bandwidth, intermittent connectivity, and constrained battery and memory.
Agents
Mobile agents are autonomous pieces of code that can move between devices and
networks. They can be used to perform a variety of tasks, such as:
• Data collection: Mobile agents can be used to collect data from mobile devices
and send it back to a central server.
• Task execution: Mobile agents can be used to execute tasks on mobile
devices, such as downloading and installing applications.
• Service discovery: Mobile agents can be used to discover and access services
that are available on the network.
Types of mobile agents:
• Data agents: These agents are designed to collect and transport data.
• Code agents: These agents are designed to execute code on mobile devices.
• Service agents: These agents are designed to discover and access services on
the network.
Service Discovery
Service discovery middleware allows applications to discover and access services that
are available on the network. This is done by:
• Registering services: Service providers can register their services with a service
discovery server.
• Publishing service descriptions: Service providers can publish descriptions of
their services that include information about the service's name, description, and
location.
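The register/publish/discover cycle above can be sketched as a tiny in-memory registry; the service names, fields, and addresses are illustrative, not a real discovery protocol such as SSDP or DNS-SD.

```python
class ServiceRegistry:
    """Toy service discovery registry: register, then look up by name."""
    def __init__(self):
        self.services = {}

    def register(self, name, description, location):
        # A provider publishes its service description.
        self.services[name] = {"description": description,
                               "location": location}

    def discover(self, name):
        # A client looks a service up; None if nothing is registered.
        return self.services.get(name)


registry = ServiceRegistry()
registry.register("print", "network printer", "tcp://10.0.0.5:9100")

svc = registry.discover("print")
assert svc["location"] == "tcp://10.0.0.5:9100"
assert registry.discover("scan") is None   # unregistered service
```

Real protocols distribute this registry (or replace it with multicast queries) so that it survives the registry host disappearing from the network.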
Discovery and advertisement protocols play a crucial role in various networks, enabling
devices and services to find and communicate with each other efficiently. Here's an in-
depth explanation of these protocols and their functions:
Discovery Protocols:
These protocols allow devices to find and learn about available services or resources within a network, typically by querying the network and resolving where and how a service can be reached.
Advertisement Protocols:
These protocols allow services and resource providers to announce their availability and capabilities to devices within a network, often through periodic broadcast or multicast announcements of service descriptions.
Eventing:
1. Data Security:
2. Device Security:
3. Network Security:
Integrity codes are digital fingerprints used to verify the data's integrity and authenticity. They
are generated by applying a mathematical algorithm to the data, resulting in a unique value. Any
alteration to the data will lead to a different value, indicating data corruption or tampering.
1. Checksums:
• Function: Checksums are simple algorithms designed to detect accidental data errors
during transmission or storage.
• Characteristics:
o Fast and efficient to compute.
o Collision-prone: Two different data sets can produce the same checksum, leading
to false negatives.
o Not secure: Offer minimal protection against intentional data manipulation.
• Common examples:
o CRC (Cyclic Redundancy Check): Used in file transfers, network protocols, and
storage media.
o Fletcher's checksum: Used in networking protocols such as OSPF (TCP itself uses the simpler Internet checksum).
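Fletcher's checksum illustrates the checksum idea well: it keeps two running sums, the second of which makes the result sensitive to byte order, giving better error detection than a plain additive sum at almost the same cost.

```python
def fletcher16(data: bytes) -> int:
    """Fletcher-16 checksum over a byte string."""
    sum1 = sum2 = 0
    for byte in data:
        sum1 = (sum1 + byte) % 255   # simple running sum of bytes
        sum2 = (sum2 + sum1) % 255   # sum of sums: position-sensitive
    return (sum2 << 8) | sum1

print(hex(fletcher16(b"abcde")))     # 0xc8f0 (published test vector)

# A single changed byte changes the checksum...
assert fletcher16(b"abcde") != fletcher16(b"abcdf")
# ...but unlike a plain sum, so does swapping two bytes:
assert fletcher16(b"ab") != fletcher16(b"ba")
```

Even so, unrelated inputs can collide, and the value is trivial to forge — which is why intentional tampering requires the cryptographic hashes and MACs discussed next.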
2. Cryptographic Hashes:
A Message Authentication Code (MAC) is a short tag computed over a message with a shared secret key. It provides:
1. Data integrity: Guarantees that the message has not been altered during transmission or storage. Any modification to the message will result in a different MAC value, indicating data tampering.
2. Data origin authentication: Verifies the message's origin and prevents unauthorized entities from impersonating the legitimate sender.
How it works:
1. MAC generation: The sender generates a MAC using a secret key shared with
the intended recipient. This secret key is not included in the message itself and
remains confidential.
2. Message transmission: The sender transmits the original message along with the
generated MAC to the recipient.
3. MAC verification: Upon receiving the message, the recipient applies the same
MAC algorithm using the shared secret key to generate a new MAC value.
4. Comparison: The recipient compares the generated MAC value with the received
MAC value. If both values match, it confirms the message's integrity and
authenticity.
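The four steps above can be sketched with Python's standard hmac module; the key and messages are examples only.

```python
import hashlib
import hmac

secret = b"shared-secret-key"                # example shared key
message = b"transfer 100 to account 42"

# 1-2. Sender generates the MAC and transmits (message, tag).
tag = hmac.new(secret, message, hashlib.sha256).digest()

# 3. Recipient recomputes the MAC with the same shared key.
expected = hmac.new(secret, message, hashlib.sha256).digest()

# 4. Constant-time comparison confirms integrity and origin.
assert hmac.compare_digest(tag, expected)

# A tampered message yields a different MAC and fails verification:
tampered = b"transfer 999 to account 13"
bad = hmac.new(secret, tampered, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, bad)
```

hmac.compare_digest is used instead of == so that verification time does not leak how many leading bytes of a forged tag were correct.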
Types of MACs:
• HMAC (Hash-based Message Authentication Code): The most widely used MAC
algorithm, based on cryptographic hash functions like SHA-256 and SHA-3.
• CBC-MAC (Cipher Block Chaining Message Authentication Code): A MAC
algorithm based on block cipher encryption algorithms like AES.
Goals of WEP:
• Data confidentiality: WEP aimed to encrypt data transmitted over the wireless
network, preventing unauthorized users from eavesdropping and stealing
sensitive information.
• Message integrity: WEP was designed to ensure the integrity of data
transmissions, preventing unauthorized modifications to messages and
guaranteeing that data received is identical to the data sent.
• Access control: WEP implemented basic access control mechanisms, allowing
network administrators to restrict access to authorized users only.
• Authentication: WEP incorporated authentication mechanisms to verify the
identity of users attempting to connect to the network and prevent unauthorized
access.
• Interoperability: WEP targeted interoperability across different Wi-Fi devices and
network implementations.
Weaknesses of WEP:
• Weak encryption: WEP used the RC4 stream cipher algorithm, which was found
to be insecure and susceptible to various cryptanalysis attacks.
• Limited key management: WEP employed static encryption keys, making them
vulnerable to cracking and reuse attacks.
• Vulnerable initialization vectors (IVs): WEP used short and predictable
IVs, leading to vulnerabilities that allowed attackers to decrypt messages and
forge new ones.
• Lack of integrity protection: Early versions of WEP lacked mechanisms to ensure
data integrity, making them vulnerable to data manipulation attacks.
Explain the WEP data frame.
A WEP data frame is the basic unit of information transmitted over a wireless network
using the Wired Equivalent Privacy (WEP) protocol. It encapsulates the data payload
and the additional information necessary for secure communication. Its main fields are:
• Frame header: Contains information about the frame type, source and
destination addresses, and length.
• WEP header: Includes the IV (Initialization Vector), Key ID, and other control
flags.
• WEP data: The actual data payload encrypted with the RC4 stream cipher using
a shared secret key and the IV.
• ICV (Integrity Check Value): A CRC-32 checksum computed over the plaintext
data and encrypted along with it, used to detect corruption or tampering.
• Trailer: Marks the end of the frame.
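The encryption of the data and ICV fields can be sketched as follows. This is an illustrative simplification under assumed parameters (a 5-byte key, a 3-byte IV, no frame header), not a standards-accurate 802.11 implementation — and WEP's combination of RC4 with CRC-32 is precisely what makes it insecure.

```python
import struct
import zlib

def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 stream cipher (encryption and decryption are the same op)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(shared_key: bytes, iv: bytes, payload: bytes) -> bytes:
    icv = struct.pack("<I", zlib.crc32(payload))   # CRC-32 integrity value
    return rc4(iv + shared_key, payload + icv)     # per-frame key = IV || key

def wep_decrypt(shared_key: bytes, iv: bytes, frame: bytes):
    plain = rc4(iv + shared_key, frame)
    payload, icv = plain[:-4], plain[-4:]
    ok = icv == struct.pack("<I", zlib.crc32(payload))
    return payload, ok

key, iv = b"\x01\x02\x03\x04\x05", b"\xaa\xbb\xcc"
frame = wep_encrypt(key, iv, b"hello")
payload, ok = wep_decrypt(key, iv, frame)
assert payload == b"hello" and ok
```

Because the IV is sent in the clear and is only 24 bits, IVs repeat quickly, letting an attacker collect frames encrypted under the same keystream — one of the attacks listed below.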
Weaknesses of the WEP data frame:
• Weak key management: Static keys used in WEP are vulnerable to cracking and
reuse attacks.
• Weak IVs: Short and predictable IVs make WEP data frames susceptible to
cryptanalysis attacks.
• Lack of integrity protection: Early versions of WEP did not offer data integrity
protection, making them vulnerable to data manipulation attacks.
Briefly discuss wireless generations 1G to 4G.
1G (First Generation):
• Introduced in the early 1980s, 1G was primarily used for voice calls.
• Analog technology offered limited data transmission capabilities.
• Limited coverage and poor call quality were common issues.
• Examples: AMPS (Advanced Mobile Phone System), NMT (Nordic Mobile
Telephony).
2G (Second Generation):
3G (Third Generation):
4G (Fourth Generation):