Distributed File Systems
Pablo de la Fuente
Departamento de Informática
E.T.S. de Ingeniería Informática
Campus Miguel Delibes
47011 Valladolid (Spain)
pfuente@infor.uva.es
Organization
General concepts.
NFS.
AFS.
Coda.
Enhancements to NFS
Introduction (1)
Method Comment
UNIX semantics Every operation on a file is instantly visible to all processes
Session semantics No changes are visible to other processes until the file is closed (see the sketch below)
Immutable files No updates are possible; simplifies sharing and replication
Transactions All changes occur atomically
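The difference between UNIX and session semantics can be made concrete with a small, single-process simulation. The sketch below is purely illustrative (plain C, no distributed file system involved): each "client" works on a private session copy that is written back to the "server" only on close, so a reader does not see a writer's change until the writer closes the file.

#include <stdio.h>
#include <string.h>

/* Toy model: the "server" holds the authoritative contents; each client
 * session works on a private copy (session semantics). */
static char server_copy[64] = "old contents";

struct session {
    char local_copy[64];   /* client-side copy taken at open() time */
};

static void session_open(struct session *s)  { strcpy(s->local_copy, server_copy); }
static void session_write(struct session *s, const char *data) { strcpy(s->local_copy, data); }
static void session_close(struct session *s) { strcpy(server_copy, s->local_copy); }

int main(void) {
    struct session writer, reader;

    session_open(&writer);
    session_open(&reader);

    session_write(&writer, "new contents");

    /* Session semantics: the reader still sees the old data ... */
    printf("reader sees: %s\n", reader.local_copy);   /* "old contents" */

    session_close(&writer);                           /* change published here */

    /* ... and only a session opened after the close sees the update
     * (under UNIX semantics the write would have been visible at once). */
    session_open(&reader);
    printf("reader sees: %s\n", reader.local_copy);   /* "new contents" */
    return 0;
}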
Caching (3)
[Figure: cache placement alternatives for file data – no caching, cache located in the client's disk, cache located in the client's main memory (client's disk versus server's disk)]
Modification propagation
The aim is to keep file data cached at multiple client nodes consistent.
There are several approaches, which concern:
1. When to propagate modifications made to cached data to the corresponding file server (see the sketch below)
2. How to verify the validity of cached data
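Two common answers to question 1 are write-through (propagate every modification to the server immediately) and delayed-write (mark the cached data dirty and propagate later, e.g. periodically or on close). The sketch below contrasts the two; send_to_server() is a hypothetical stand-in for the real client-to-server transfer.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct cached_block {
    char   data[4096];
    size_t len;
    bool   dirty;          /* set when the cached copy differs from the server's */
};

/* Hypothetical stand-in for the real client -> file server transfer. */
static void send_to_server(const struct cached_block *b) { (void)b; }

/* Write-through: the server is updated on every write, so other clients that
 * ask the server always get fresh data (at the cost of one request per write). */
static void write_through(struct cached_block *b, const char *buf, size_t n) {
    memcpy(b->data, buf, n);
    b->len = n;
    send_to_server(b);
}

/* Delayed-write: only the cache is updated now; the dirty block is pushed
 * later (periodic flush, on close, or on eviction), reducing traffic but
 * widening the window during which other clients may read stale data. */
static void delayed_write(struct cached_block *b, const char *buf, size_t n) {
    memcpy(b->data, buf, n);
    b->len = n;
    b->dirty = true;
}

static void flush_if_dirty(struct cached_block *b) {
    if (b->dirty) { send_to_server(b); b->dirty = false; }
}

int main(void) {
    struct cached_block blk = { .len = 0, .dirty = false };
    write_through(&blk, "hello", 5);   /* visible at the server immediately */
    delayed_write(&blk, "world", 5);   /* visible only after the next flush */
    flush_if_dirty(&blk);
    return 0;
}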
1. Increased availability
2. Increased reliability
3. Improved response time
4. Reduced network traffic
5. Improved system throughput
6. Better scalability
7. Autonomous operation
[Figure: example client read request – Read(filename, 100, 100, buf)]
Organization
General concepts.
NFS.
AFS.
Coda.
Enhancements to NFS
NFS. Introduction
[Figure: NFS architecture (client and server computers) – on the client, the UNIX file system, other file systems and the NFS client module sit below the applications; the NFS client communicates with the NFS server module over the NFS protocol (remote operations). Source: Coulouris, Distributed Systems]
NFS Architecture (4)
• Stateless server, so the user's identity and access rights must be checked
by the server on each request.
– In the local file system they are checked only on open()
• Every client request is accompanied by the userID and groupID (see the sketch below)
• Server is exposed to imposter attacks unless the userID and groupID are
protected by encryption
• Kerberos has been integrated with NFS to provide a stronger and more
comprehensive security solution
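Because the server is stateless, every request must carry the caller's identity. The sketch below shows what such per-request credentials might look like on the client side; it is modelled loosely on the classic AUTH_UNIX/AUTH_SYS credential (uid, gid, supplementary gids), but the struct and field layout are illustrative, not the real RPC encoding.

#include <stdint.h>
#include <stdio.h>

/* Illustrative per-request credentials: without encryption (or Kerberos),
 * nothing stops an imposter from filling in somebody else's uid/gid. */
struct nfs_credentials {
    uint32_t uid;           /* numeric user id of the caller */
    uint32_t gid;           /* primary group id              */
    uint32_t gids[16];      /* supplementary groups          */
    uint32_t ngids;
};

struct nfs_read_request {
    struct nfs_credentials cred;   /* attached to EVERY request       */
    uint64_t file_handle;          /* opaque handle, simplified here  */
    uint64_t offset;
    uint32_t count;
};

int main(void) {
    struct nfs_read_request req = {
        .cred = { .uid = 1000, .gid = 100, .ngids = 0 },
        .file_handle = 42, .offset = 100, .count = 100,
    };
    /* The server re-checks access rights against req.cred on each call,
     * unlike a local file system, which checks only at open(). */
    printf("READ fh=%llu off=%llu len=%u as uid=%u\n",
           (unsigned long long)req.file_handle,
           (unsigned long long)req.offset, req.count, req.cred.uid);
    return 0;
}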
• Mount operation:
  mount(remotehost, remotedirectory, localdirectory)
• Server maintains a table of clients who have mounted filesystems at that server
• Each client maintains a table of mounted file systems holding:
  <IP address, port number, file handle> (see the sketch below)
• Hard versus soft mounts
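A client-side mount table entry such as the <IP address, port number, file handle> triple above might be represented as follows. Field names and sizes are assumptions for illustration, not the actual NFS client structures.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FH_SIZE 32   /* opaque NFS file handle; real size depends on the protocol version */

/* One entry of the client's table of mounted remote file systems:
 * where the server is, and the handle of the remote directory that
 * was mounted at local_dir. */
struct mount_entry {
    char          server_ip[16];        /* dotted-quad IPv4 address             */
    uint16_t      port;                 /* server port                          */
    unsigned char file_handle[FH_SIZE]; /* handle returned by the mount service */
    char          remote_dir[256];
    char          local_dir[256];       /* mount point in the local name space  */
};

int main(void) {
    struct mount_entry e;
    memset(&e, 0, sizeof e);
    strcpy(e.server_ip, "192.168.1.10");
    e.port = 2049;
    strcpy(e.remote_dir, "/export/home");
    strcpy(e.local_dir, "/usr/students");
    /* mount(remotehost, remotedirectory, localdirectory) would fill in
     * e.file_handle with the handle obtained from the server's mount service. */
    printf("%s:%s mounted on %s\n", e.server_ip, e.remote_dir, e.local_dir);
    return 0;
}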
NFS. Naming (1)
NFS client catches attempts to access 'empty' mount points and routes them to the Automounter
– Automounter has a table of mount points and multiple candidate servers for each
– it sends a probe message to each candidate server and then uses the mount service to mount the filesystem at the first server to respond (see the sketch below)
• Keeps the mount table small
• Provides a simple form of replication for read-only filesystems
– E.g. if there are several servers with identical copies of /usr/lib, then each server will have a chance of being mounted at some clients.
NFS. Automounting (2)
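A minimal sketch of the probe-and-mount step described above. The candidate list, probe() and do_mount() functions are hypothetical stand-ins for the real automounter, which probes each candidate server and mounts the file system at the first one that replies.

#include <stdbool.h>
#include <stdio.h>

#define MAX_CANDIDATES 4

/* Hypothetical stand-in: returns true if the server answered the probe. */
static bool probe(const char *server) { (void)server; return true; }

/* Hypothetical stand-in for the mount-service call. */
static bool do_mount(const char *server, const char *remote_dir,
                     const char *mount_point) {
    printf("mounting %s:%s on %s\n", server, remote_dir, mount_point);
    return true;
}

/* For an 'empty' mount point, try each candidate server in turn and
 * mount the file system at the first one that responds. */
static bool automount(const char *mount_point, const char *remote_dir,
                      const char *candidates[], int n) {
    for (int i = 0; i < n; i++)
        if (probe(candidates[i]))
            return do_mount(candidates[i], remote_dir, mount_point);
    return false;   /* no candidate responded */
}

int main(void) {
    const char *servers[MAX_CANDIDATES] = { "srvA", "srvB", "srvC" };
    automount("/usr/lib", "/export/lib", servers, 3);
    return 0;
}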
NFS. File attributes
Mandatory attributes:
Attribute Description
TYPE The type of the file (regular, directory, symbolic link)
SIZE The length of the file in bytes
CHANGE Indicator for a client to see if and/or when the file has changed
FSID Server-unique identifier of the file's file system
Recommended attributes:
Attribute Description
ACL An access control list associated with the file
FILEHANDLE The server-provided file handle of this file
FILEID A file-system unique identifier for this file
FS_LOCATIONS Locations in the network where this file system may be found
OWNER The character-string name of the file's owner
TIME_ACCESS Time when the file data were last accessed
TIME_MODIFY Time when the file data were last modified
TIME_CREATE Time when the file was created
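On the client side, the attributes above might be collected into a structure like the one below once they have been fetched from the server. This is only an illustration: the real NFSv4 protocol transfers attributes as a bitmap-selected list, not as a fixed struct, and the types chosen here are assumptions.

#include <stdint.h>
#include <time.h>

enum nfs_ftype { NF_REGULAR, NF_DIRECTORY, NF_SYMLINK };

/* Illustrative container for the attributes listed above (names follow
 * the table; layout and types are assumptions, not the wire format). */
struct nfs_file_attributes {
    enum nfs_ftype  type;          /* TYPE: regular, directory, symbolic link  */
    uint64_t        size;          /* SIZE: length in bytes                    */
    uint64_t        change;        /* CHANGE: bumped whenever the file changes */
    uint64_t        fsid;          /* FSID: id of the file's file system       */
    uint64_t        fileid;        /* FILEID: unique within the file system    */
    char            owner[64];     /* OWNER: character-string owner name       */
    struct timespec time_access;   /* TIME_ACCESS */
    struct timespec time_modify;   /* TIME_MODIFY */
    struct timespec time_create;   /* TIME_CREATE */
};

int main(void) {
    struct nfs_file_attributes a = { .type = NF_REGULAR, .size = 4096 };
    /* A client can compare the cached 'change' value with a fresh one to
     * decide whether its cached data for the file is still valid. */
    return (int)(a.change == 0 ? 0 : 1);
}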
NFS. File locking operations
Operation Description
Lock Creates a lock for a range of bytes
Lockt Tests whether a conflicting lock has been granted
Locku Removes a lock from a range of bytes
Renew Renews the lease on a specified lock
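The sketch below shows how a client might use these operations: acquire a byte-range lock, renew its lease while working, and finally release it. Lock/Lockt/Locku/Renew are written here as hypothetical local functions; in reality they are protocol operations sent to the server.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the locking operations in the table above. */
static bool Lock (uint64_t fh, uint64_t off, uint64_t len) { (void)fh; (void)off; (void)len; return true; }
static bool Lockt(uint64_t fh, uint64_t off, uint64_t len) { (void)fh; (void)off; (void)len; return false; }
static void Locku(uint64_t fh, uint64_t off, uint64_t len) { (void)fh; (void)off; (void)len; }
static void Renew(uint64_t lock_id)                        { (void)lock_id; }

int main(void) {
    uint64_t fh = 7;                       /* file handle (simplified) */

    /* Optionally test first whether a conflicting lock exists. */
    if (Lockt(fh, 0, 512)) {
        printf("conflicting lock present, giving up\n");
        return 1;
    }
    if (!Lock(fh, 0, 512)) return 1;       /* lock bytes [0, 512) */

    for (int i = 0; i < 3; i++) {
        /* ... work on the locked range ... */
        Renew(7);                          /* keep the lease from expiring */
    }

    Locku(fh, 0, 512);                     /* remove the lock */
    return 0;
}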
Organization
General concepts.
NFS.
AFS.
Coda.
Enhancements to NFS
AFS. Introduction
• A distributed computing environment under development since 1983 at
Carnegie-Mellon University.
• Andrew is highly scalable; the system is targeted to span over 5000
workstations.
• Andrew distinguishes between client machines (workstations) and dedicated server machines. Servers and clients run the 4.2BSD UNIX OS and are interconnected by an internetwork of LANs.
• NFS compatible.
Design characteristics:
Whole-file serving. (In AFS-3, files larger than 64 kbytes are transferred in 64-kbyte chunks.)
Whole-file caching. The copy or chunk transferred to the client is stored in a cache on the local disk. The cache is permanent. On an open request, local copies are preferred to remote copies (see the sketch below).
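A minimal sketch of the whole-file caching idea: on open, the client fetches the entire file into its local disk cache and then works on the local copy; the modified copy is shipped back when the file is closed. fetch_whole_file() and store_whole_file() are hypothetical stand-ins for the client-server transfer, and the cache path is made up for illustration.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the client <-> server whole-file transfers. */
static void fetch_whole_file(const char *remote, const char *cache_path) {
    printf("fetching %s into cache file %s\n", remote, cache_path);
}
static void store_whole_file(const char *cache_path, const char *remote) {
    printf("storing %s back to %s\n", cache_path, remote);
}
static bool in_cache(const char *cache_path) { (void)cache_path; return false; }

/* open(): prefer the local cached copy; fetch the whole file only on a miss. */
static FILE *afs_like_open(const char *remote, const char *cache_path) {
    if (!in_cache(cache_path))
        fetch_whole_file(remote, cache_path);
    return fopen(cache_path, "r+");        /* all reads/writes are local from now on */
}

/* close(): if the local copy was modified, send the whole file back. */
static void afs_like_close(FILE *f, const char *cache_path,
                           const char *remote, bool modified) {
    fclose(f);
    if (modified)
        store_whole_file(cache_path, remote);
}

int main(void) {
    FILE *f = afs_like_open("/shared/doc.txt", "/cache/0001");
    if (f) afs_like_close(f, "/cache/0001", "/shared/doc.txt", true);
    return 0;
}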
AFS. Initial considerations
• most files are small--transfer files rather than disk blocks?
• reading more common than writing
• most access is sequential
• most files have a short lifetime--lots of applications generate
temporary files (such as a compiler).
• file sharing is unusual (in terms of reads and writes)--argues for
client caching
• processes use few files
• files can be divided into classes--handle “system” files and “user”
files differently.
AFS. Characteristics
• Clients are presented with a partitioned space of file names: a local name
space and a shared name space.
• Dedicated servers, called Vice, present the shared name space to the clients as a homogeneous, identical, and location-transparent file hierarchy.
• The local name space is the root file system of a workstation, from which
the shared name space descends.
• Workstations run the Virtue (Venus) protocol to communicate with Vice, and
are required to have local disks where they store their local name space.
• Servers collectively are responsible for the storage and management of the
shared name space.
• Clients and servers are structured in clusters interconnected by a backbone
LAN.
• A cluster consists of a collection of workstations and a cluster server and is
connected to the backbone by a router.
• A key mechanism selected for remote file operations is whole file caching.
Opening a file causes it to be cached, in its entirety, on the local disk.
AFS. Processes distribution
[Figure: distribution of processes in AFS – each workstation runs user programs and a Venus process on top of the UNIX kernel; dedicated servers run Vice; workstations and servers communicate over the network]
[Figure: file system call interception in a workstation – UNIX file system calls issued by user programs are handled by the UNIX kernel and the local UNIX file system (local disk); non-local file operations are passed to Venus]
AFS. Vice service interface
Operation Description
Fetch(fid) -> attr, data Returns the attributes (status) and, optionally, the contents of the file identified by fid and records a callback promise on it.
Store(fid, attr, data) Updates the attributes and (optionally) the contents of a specified file.
Create() -> fid Creates a new file and records a callback promise on it.
Remove(fid) Deletes the specified file.
SetLock(fid, mode) Sets a lock on the specified file or directory. The mode of the lock may be shared or exclusive. Locks that are not removed expire after 30 minutes.
ReleaseLock(fid) Unlocks the specified file or directory.
RemoveCallback(fid) Informs the server that a Venus process has flushed a file from its cache.
BreakCallback(fid) This call is made by a Vice server to a Venus process. It cancels the callback promise on the relevant file.
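The sketch below illustrates, in a much simplified form, what a server does with callback promises when a file is stored: every other client that still holds a callback promise for that file is sent a BreakCallback, so its cached copy is known to be stale. The data structures and the send_break_callback() call are illustrative assumptions, not Vice internals.

#include <stdio.h>

#define MAX_CLIENTS 8

/* Per-file record of which clients hold a valid callback promise. */
struct callback_list {
    int fid;
    int promised[MAX_CLIENTS];   /* promised[c] != 0: client c holds a promise */
};

/* Hypothetical stand-in for the Vice -> Venus BreakCallback call. */
static void send_break_callback(int client, int fid) {
    printf("BreakCallback(fid=%d) -> client %d\n", fid, client);
}

/* Store(fid, ...): after updating the file, cancel every other client's
 * callback promise so that stale cached copies are invalidated. */
static void store_and_break(struct callback_list *cb, int writer_client) {
    for (int c = 0; c < MAX_CLIENTS; c++) {
        if (c != writer_client && cb->promised[c]) {
            send_break_callback(c, cb->fid);
            cb->promised[c] = 0;             /* promise is now cancelled */
        }
    }
}

int main(void) {
    struct callback_list cb = { .fid = 17, .promised = { [0] = 1, [2] = 1, [3] = 1 } };
    store_and_break(&cb, 2);   /* client 2 stores the file; clients 0 and 3 get callbacks */
    return 0;
}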
• How does AFS gain control when an open or close system call
referring to a file in the shared file space is issued by a client?
• How is the server holding the required file located?
• What space is allocated for cached files in workstations?
• How does AFS ensure that the cached copies are up to date when files may be updated by several clients?
One of the file partitions on the local disk of each workstation is used as a cache, holding cached copies of files from the shared space. Venus manages the cache. The workstation cache is usually large enough to accommodate several hundred average-sized files. If the users do not modify the cached files, the workstations are largely independent of the Vice servers (see the sketch below).
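Tying the last two questions together: on an open of a shared file, Venus can use the cached copy on the local disk partition as long as the callback promise recorded for it is still valid; otherwise it must Fetch the file again, which also records a fresh promise. The sketch below is an illustrative outline of that decision, not Venus code.

#include <stdbool.h>
#include <stdio.h>

enum promise_state { PROMISE_VALID, PROMISE_CANCELLED, NOT_CACHED };

struct cache_entry {
    int  fid;
    enum promise_state state;   /* cancelled when a BreakCallback arrives */
};

/* Hypothetical stand-in for the Fetch call: returns the file and records
 * a new callback promise for it. */
static void fetch(struct cache_entry *e) {
    printf("Fetch(fid=%d)\n", e->fid);
    e->state = PROMISE_VALID;
}

/* Decision taken by Venus when a client program opens a shared file. */
static void venus_open(struct cache_entry *e) {
    if (e->state == PROMISE_VALID) {
        printf("fid=%d: using cached copy, no server contact needed\n", e->fid);
    } else {
        fetch(e);    /* cache miss or cancelled promise: go to the Vice server */
    }
}

int main(void) {
    struct cache_entry a = { .fid = 1, .state = PROMISE_VALID     };
    struct cache_entry b = { .fid = 2, .state = PROMISE_CANCELLED };
    venus_open(&a);   /* served entirely from the local disk cache */
    venus_open(&b);   /* must contact the server again             */
    return 0;
}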
Organization
General concepts.
NFS.
AFS.
Coda.
Enhancements to NFS
Coda. Introduction (I)
• Salient features:
– Support for disconnected operation
• Desirable for mobile users
– Support for a large number of users
Disconnected operation
– a temporary deviation from normal operation as a client of a
shared repository
Why?
– enhance availability
How?
– data cache
Coda. Design overview (1)
• AFS guarantees
– open: result of last close anywhere
– close: immediate propagation everywhere
– failure: server or network failure
• Coda guarantees
– open: result of last close in the accessible universe
– close: immediate propagation to the accessible universe; eventual propagation everywhere
– failure: cache miss when disconnected
Coda. Hoarding
• Prioritized cache management
– Hoard profiles specify user interest (directories allowed)
– Recent usage
– Hoard priority is based on the above two (see the sketch below)
• Hoard walking
– Since priority is based on recent usage, the cache contents must be re-evaluated every once in a while so that they reflect the current priorities
– 10-minute default interval; can be changed
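A sketch of how the priority of a cached object could be computed from the two inputs named above: the priority given in the user's hoard profile and how recently the object was referenced. The weighting and the decay function are illustrative assumptions; Coda's actual priority function differs in detail.

#include <stdio.h>
#include <time.h>

/* Illustrative combination of the two inputs mentioned above:
 * a user-assigned hoard priority and a recency-of-use component. */
struct cached_object {
    const char *name;
    double hoard_priority;     /* from the hoard profile, e.g. 0..1000 */
    time_t last_reference;     /* when the object was last used        */
};

static double priority(const struct cached_object *o, time_t now, double alpha) {
    double age_minutes = difftime(now, o->last_reference) / 60.0;
    double recency = 1000.0 / (1.0 + age_minutes);   /* decays with age (assumed form) */
    return alpha * o->hoard_priority + (1.0 - alpha) * recency;
}

int main(void) {
    time_t now = time(NULL);
    struct cached_object a = { "hoarded, rarely used", 900.0, now - 3600 };
    struct cached_object b = { "not hoarded, just used", 0.0, now };

    /* A periodic hoard walk would recompute these priorities and evict
     * the lowest-priority objects when the cache is full. */
    printf("%s: %.1f\n", a.name, priority(&a, now, 0.7));
    printf("%s: %.1f\n", b.name, priority(&b, now, 0.7));
    return 0;
}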
Coda. Emulation
• Allows updating without contacting the file server
• All updates are logged in a per-volume "replay log" (see the sketch below)
• Log optimizations reduce the log size
• Persistence is achieved using Recoverable Virtual Memory (RVM)
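A sketch of a per-volume replay log and of one typical log optimization: when a file is stored several times while disconnected, only the last store has to be replayed at reintegration, so earlier store records for the same file can be dropped. The record layout and the optimization rule are simplified illustrations, not Coda's actual log format.

#include <stdio.h>

#define LOG_CAPACITY 64

enum op_kind { OP_STORE, OP_CREATE, OP_REMOVE };

struct log_record {
    enum op_kind op;
    int fid;         /* file the operation applies to           */
    int live;        /* 0 after being cancelled by optimization */
};

struct replay_log {
    struct log_record rec[LOG_CAPACITY];
    int n;
};

static void log_append(struct replay_log *log, enum op_kind op, int fid) {
    if (op == OP_STORE) {
        /* Optimization: a new store of the same file supersedes any
         * earlier store still in the log. */
        for (int i = 0; i < log->n; i++)
            if (log->rec[i].live && log->rec[i].op == OP_STORE && log->rec[i].fid == fid)
                log->rec[i].live = 0;
    }
    if (log->n < LOG_CAPACITY)
        log->rec[log->n++] = (struct log_record){ op, fid, 1 };
}

int main(void) {
    struct replay_log log = { .n = 0 };
    log_append(&log, OP_STORE, 5);   /* first disconnected update of file 5 */
    log_append(&log, OP_STORE, 5);   /* second update cancels the first     */
    for (int i = 0; i < log.n; i++)
        if (log.rec[i].live)
            printf("replay: op=%d fid=%d\n", log.rec[i].op, log.rec[i].fid);
    return 0;
}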
Coda. Reintegration
Conflict resolution
• An unresolved conflict is represented as a dangling symbolic link
Organization
General concepts.
NFS.
AFS.
Coda.
Enhancements to NFS
NFS enhancement - Spritely NFS