
IMPLEMENTATION SUPPORT
 This chapter discusses the programming support that is
provided for the implementation of an interactive system.
 The detailed design specification gives the programmer
instructions: what the interactive application must do.
 The programmer must translate the design specification into
machine-executable instructions: how that will be achieved on
the available hardware devices.
 The programmer therefore works down to the level of the
software that runs the hardware devices.
 This software provides the ability to read events
from various input devices and write primitive graphics
commands to a display.
 Programming at this level is very tedious and highly error prone.
 The first important feature of a windowing system is its ability to
provide programmer independence from the specifics of the
hardware devices.
 A typical workstation involves a visual display screen, a keyboard
and some pointing device, such as a mouse – the basis of the
WIMP interface.
 Any variety of these hardware devices can be used in any interactive
system and they are all different in terms of the data they
communicate and the commands that are used to instruct them.
 It is imperative to be able to program an application that will run on
a wide range of devices.
 To do this, the programmer directs commands to an abstract
terminal, which understands a more generic language that can be
translated into the language of many different specific devices;
any application program can then be written for the abstract terminal.
 Interaction based on windows, icons, menus and pointers – the
WIMP interface
 Windowing systems provide the ability to support several separate
user tasks simultaneously by sharing the resources of a single
hardware configuration with several copies of an abstract
terminal.
 Each abstract terminal will behave as an independent process and
the windowing system will coordinate the control of the
concurrent processes.
 The window system must also provide a means of displaying the
separate applications, and this is accomplished by dedicating a
region of the display screen to each active abstract terminal.
 The coordination task then involves resolving display conflicts
when the visible screen regions of two abstract terminals overlap.
 In summary, a windowing system provides independence
from the specifics of programming separate hardware devices,
and management of multiple, independent but simultaneously
active applications.
 Three possible software architectures can be used to
implement the management role:
◦ all assume the device driver is separate; they differ in how
multiple-application management is implemented
1. Each application manages all processes
– every application must worry about
synchronization conflicts over the shared hardware
devices
– reduces portability of applications
2. Management role within the kernel of the operating
system
– applications are tied to that operating system
3. Management role as a separate application
– maximum portability
– this is the client–server architecture
 The X Window System is based on a pixel imaging model with some
pointing mechanism.
 The X protocol defines the server–client communication, making X
more device independent.
 Each client of the X11 server is associated to an abstract
terminal or main window.
 The X server performs the following tasks:
 allows (or denies) access to the display from multiple client
applications;
 interprets requests from clients to perform screen operations or
provide other information;
 demultiplexes the stream of physical input events from the user
and passes them to the appropriate client;
 minimizes the traffic along the network by relieving the clients
from having to keep track of certain display information, such as
fonts, which is held in complex data structures on the server that
the clients can access by ID numbers
 A separate client – the window manager – enforces policies to
resolve conflicting input and output requests to and from the
other clients.
 There are several different window managers which can be
used in X, and they adopt different policies.
 For example,
 the window manager would decide how the user can change
the focus of his input from one application to another.
 One option is for the user to nominate one window as the
active one to which all subsequent input is directed.
 The other option is for the active window to be implicitly
nominated by the position of the pointing device.
 Whenever the pointer is in the display space of a window, all
input is directed to it. Once the pointer is moved to a position
inside another window, that window becomes active and
receives subsequent input.
 Another example of window manager policy is whether
visible screen images of the client windows can overlap or
must be non-overlapping (called tiling).
 The client applications can define their own hierarchy of
subwindows, each of which is constrained to the coordinate
space of its parent window; this allows the programmer to
manage the input and output for a single application in much
the way the window manager does for the whole display.
 To aid in the design of specific window managers, the X
Consortium has produced the Inter-Client Communication
Conventions Manual (ICCCM), which addresses various policy
issues; these policies include:
 rules for transferring data between clients;
 methods for selecting the active client for input focus;
 layout schemes for overlapping/tiled windows as screen
regions.
 Two programming paradigms are used to organize
the flow of control within the application.
 The windowing system does not necessarily
determine which of these two paradigms is to be
followed.
 Read-Evaluation loop
 Programming on the Macintosh follows this
paradigm.
 The server sends user inputs as structured events
to the client application.
 The client application is programmed to read any
event passed to it and determine all of the
application-specific behavior that results as a
response to it.
Read-evaluation loop programming paradigm: logical flow
and pseudo-code for the client application. The application has
complete control over the processing of events that it receives.

repeat
    read-event(myevent)
    case myevent.type
        type_1:
            do type_1 processing
        type_2:
            do type_2 processing
        ...
        type_n:
            do type_n processing
    end case
end repeat
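
As a concrete illustration (not from the original slides), here is a minimal sketch of the read-evaluation paradigm using the C Xlib API of the X Window System; the window geometry and the responses in each case arm are placeholder assumptions.

/* Read-evaluation loop with Xlib: the client reads each event from the
   X server and decides itself how to respond.
   Compile with: cc demo.c -lX11 */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);   /* connect to the X server */
    if (!dpy) return 1;
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 200, 100, 1,
                                     BlackPixel(dpy, 0), WhitePixel(dpy, 0));
    /* tell the server which event types this client wants to receive */
    XSelectInput(dpy, win, ExposureMask | ButtonPressMask | KeyPressMask);
    XMapWindow(dpy, win);

    int running = 1;
    while (running) {                    /* repeat ... end repeat */
        XEvent ev;
        XNextEvent(dpy, &ev);            /* read-event(myevent) */
        switch (ev.type) {               /* case myevent.type */
        case Expose:      printf("redraw window contents\n"); break;
        case ButtonPress: printf("mouse button pressed\n");   break;
        case KeyPress:    running = 0;   /* any key ends the loop */
        }
    }
    XCloseDisplay(dpy);
    return 0;
}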
 Notification-based programming paradigm
 The main control loop for event processing does not
reside within the application.
 The application program informs the notifier what
events are of interest to it, and for each event declares
one of its own procedures as a callback before turning
control over to the notifier.
 When the notifier receives an event from the window
system, it sees if that event was identified by the
application program and, if so, passes the event and
control over to the callback procedure that was
registered for the event.
 After processing, the callback procedure returns
control to the notifier, either telling it to continue
receiving events or requesting termination
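
To make this division of control concrete, here is a minimal sketch of a notifier in plain C; all of the names (register_callback, notifier_main_loop, the event types) are hypothetical and not part of any real windowing system. The application registers callbacks for the events it cares about and then surrenders control.

#include <stdio.h>

/* Hypothetical event types and the callback signature. */
enum { EV_BUTTON, EV_KEY, EV_QUIT, EV_COUNT };
typedef int (*callback_t)(int event_type);   /* return 0 to request termination */

static callback_t handlers[EV_COUNT];        /* the notifier's registration table */

void register_callback(int event_type, callback_t cb) {
    handlers[event_type] = cb;
}

/* The notifier owns the main loop: it reads events (simulated here by an
   array) and passes control to whichever callback the application registered. */
void notifier_main_loop(const int *events, int n) {
    for (int i = 0; i < n; i++) {
        callback_t cb = handlers[events[i]];
        if (cb && cb(events[i]) == 0)
            return;                          /* callback requested termination */
    }
}

/* Application code: declare callbacks, register them, hand over control. */
static int on_button(int e) { printf("button clicked\n"); return 1; }
static int on_quit(int e)   { printf("shutting down\n");  return 0; }

int main(void) {
    register_callback(EV_BUTTON, on_button);
    register_callback(EV_QUIT, on_quit);
    int simulated[] = { EV_BUTTON, EV_KEY, EV_QUIT };  /* EV_KEY has no handler */
    notifier_main_loop(simulated, 3);
    return 0;
}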
 Suppose the application program detects an error; it produces a
pre-emptive dialog box, wanting to obtain confirmation from the
user before proceeding.
 This dialog discards all subsequent user actions except a selection
made by the user inside a certain region of the screen.
 To do this in the read–evaluation paradigm is fairly straightforward;
suppose the error condition is detected while processing an event of
type type_2.
 Once the error condition is recognized, the application begins
another read–evaluation loop contained within that branch of the
case statement.
 Within that loop, all non-relevant events can be received and
discarded.

repeat
    read-event(myevent)
    case myevent.type
        type_1:
            do type_1 processing
        type_2:
            ...
            if (error-condition) then
                repeat
                    read-event(myevent2)
                    case myevent2.type
                        type_1:
                            ...
                        type_n:
                    end case
                until (end-condition2)
            end if
            ...
        type_n:
            do type_n processing
    end case
until (end-condition)
 Interaction objects
◦ input and output behaviours are intrinsically linked

[Figure: mouse events – move, press, release, move – as the pointer interacts with a button]

 This is an example of how input and output are combined for
interaction with a button object.
 As the user moves the mouse cursor over the button, the cursor
changes to a finger to suggest that the user can push the button.
 Pressing the mouse button causes the on-screen button to be
highlighted; releasing the mouse button unhighlights it, and moving
the mouse off the button changes the cursor back to its initial shape,
indicating that the user is no longer over the active area of the button.
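
This fused input/output behaviour can be viewed as a small state machine inside the button object. The following is a hypothetical C sketch of the move-press-release-move sequence above; the states, events and feedback messages are illustrative assumptions.

#include <stdio.h>

/* Hypothetical button interaction object: mouse input events drive
   output feedback (cursor shape, highlighting) in one state machine. */
typedef enum { OUTSIDE, OVER, PRESSED } ButtonState;
typedef enum { MOVE_IN, PRESS, RELEASE, MOVE_OUT } MouseEvent;

ButtonState button_event(ButtonState s, MouseEvent e) {
    switch (s) {
    case OUTSIDE:
        if (e == MOVE_IN)  { printf("cursor -> finger\n");         return OVER; }
        break;
    case OVER:
        if (e == PRESS)    { printf("highlight button\n");         return PRESSED; }
        if (e == MOVE_OUT) { printf("cursor -> arrow\n");          return OUTSIDE; }
        break;
    case PRESSED:
        if (e == RELEASE)  { printf("unhighlight, fire action\n"); return OVER; }
        break;
    }
    return s;   /* irrelevant events are ignored */
}

int main(void) {
    /* the move, press, release, move sequence from the figure */
    ButtonState s = OUTSIDE;
    MouseEvent seq[] = { MOVE_IN, PRESS, RELEASE, MOVE_OUT };
    for (int i = 0; i < 4; i++) s = button_event(s, seq[i]);
    return 0;
}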
 To aid the programmer in fusing input and output
behaviors, another level of abstraction is placed on top
of the window system – the toolkit.
 A toolkit provides the programmer with a set of
ready-made interaction objects – gadgets or widgets –
which she can use to create her application programs.
 Toolkits provide this level of abstraction
◦ programming with interaction objects (or techniques,
widgets, gadgets)
◦ promote consistency and generalizability through a
similar look and feel (creating the illusion of the
interaction object)
◦ amenable to object-oriented programming
 Toolkits provide only a limited range of interaction
objects, limiting the kinds of interactive behavior
allowed between user and system.
 Toolkits are expensive to create and are still very
difficult to use by non-programmers.
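
For instance, with a toolkit such as GTK 3 the programmer obtains all of the button behaviour described above from one ready-made widget. A minimal sketch (the label and the handler body are placeholder assumptions):

/* Toolkit-level programming with GTK 3.
   Compile with: cc demo.c `pkg-config --cflags --libs gtk+-3.0` */
#include <gtk/gtk.h>

static void on_click(GtkWidget *w, gpointer data) {
    g_print("button clicked\n");   /* only application-specific behaviour here */
}

int main(int argc, char *argv[]) {
    gtk_init(&argc, &argv);
    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    /* The ready-made interaction object: cursor feedback, highlighting and
       event dispatch all come from the toolkit, not from the application. */
    GtkWidget *button = gtk_button_new_with_label("Push me");
    g_signal_connect(button, "clicked", G_CALLBACK(on_click), NULL);
    gtk_container_add(GTK_CONTAINER(win), button);

    gtk_widget_show_all(win);
    gtk_main();                    /* control passes to the toolkit's notifier */
    return 0;
}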
 UIMS (user interface management systems) add another level above toolkits
◦ toolkits too difficult for non-programmers
◦ alternatively:
 UI development system (UIDS)
 UI development environment (UIDE)
 The main concerns of a UIMS:
 a conceptual architecture for the structure of
an interactive system which concentrates on a
separation between application semantics and
presentation;
 techniques for implementing a separated
application and presentation whilst preserving
the intended connection between them;
 support techniques for managing,
implementing and evaluating a run-time
interactive environment.
 As a conceptual architecture
◦ separation between the semantics of the application and the interface
provided for the user to make use of that semantics.
◦ There are many good arguments to support this separation of concerns:
 Portability To allow the same application to be used on different systems
 Reusability components can be reused in order to cut development costs.
 Multiple interfaces To enhance the interactive flexibility of an application,
several different interfaces can be developed to access the same functionality.
 Customization The interface can be customized by both the designer and the
user to increase its effectiveness without having to alter the underlying application.
 The logical components of a UIMS were identified as:
 Presentation The component responsible for the appearance of the interface,
including what output and input is available to the user.
 Dialog control The component which regulates the communication between
the presentation and the application.
 Application interface The view of the application semantics that is provided
as the interface.
 Another concern not addressed by the model is how to build
large and complex interactive systems from smaller
components.

 One of the earliest examples was the model–view–controller paradigm
– MVC for short – suggested in the Smalltalk programming
environment.
 Smalltalk was one of the earliest successful object-oriented
programming systems whose main feature was the ability to
build new interactive systems based on existing ones.
 The model represents the application
semantics; the view manages the graphical
and/or textual output of the application; and
the controller manages the input.
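
A minimal sketch of the MVC triad in C may help; the counter application and all names in it are hypothetical, chosen only to show the division of responsibilities.

#include <stdio.h>

/* Model: application semantics only, no input or output. */
typedef struct { int count; } Model;
void model_increment(Model *m) { m->count++; }

/* View: renders the model's state as output. */
void view_render(const Model *m) { printf("count = %d\n", m->count); }

/* Controller: interprets raw input and drives model and view. */
void controller_handle(Model *m, char key) {
    if (key == '+') {
        model_increment(m);   /* update application semantics */
        view_render(m);       /* keeping view and model consistent is left
                                 to the programmer -- MVC assigns it to no
                                 single component */
    }
}

int main(void) {
    Model m = { 0 };
    controller_handle(&m, '+');   /* simulated keystrokes */
    controller_handle(&m, '+');
    return 0;
}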
 Another multi-agent architecture for interactive
systems is presentation–abstraction–control (PAC).
 PAC is also based on a collection of triads, with
application semantics represented by the abstraction
component; input and output combined in one
presentation component; and an explicit control
component to manage the dialog and the correspondence
between application and presentation.
 There are three important differences between PAC and
MVC.
 First, PAC groups input and output together, whereas
MVC separates them.
 Secondly, PAC provides an explicit component whose
duty it is to see that abstraction and presentation are kept
consistent with each other, whereas MVC does not assign
this important task to any one component, leaving it to
the programmer/designer.
 Finally, PAC is not linked to any programming
environment and is more of a conceptual architecture than
MVC, because it is less implementation dependent.
 Techniques for dialog control
◦ Menu networks:
◦ The communication between application and
presentation is modeled as a network of menus
and submenus.
◦ To control the dialog, the programmer must
simply encode the levels of menus and the
connections between one menu and the next
submenu or an action.
◦ The menu is used to embody all possible user
inputs at any one point in time.
◦ Links between menu items and the next
displayed menu model the application response to
previous input
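
A menu network of this kind can be represented as a simple linked data structure; the following C sketch is hypothetical, with placeholder menus and actions.

#include <stdio.h>

/* Each menu item either opens a submenu or triggers an application
   action, so the dialog is modeled as a network of linked menus. */
typedef struct Menu Menu;
typedef struct {
    const char *label;
    Menu       *submenu;         /* next menu to display, or NULL */
    void      (*action)(void);   /* application response, or NULL */
} MenuItem;
struct Menu { const char *title; int n_items; MenuItem *items; };

static void do_open(void) { printf("open file...\n"); }

int main(void) {
    MenuItem file_items[] = { { "Open", NULL, do_open } };
    Menu     file_menu    = { "File", 1, file_items };
    MenuItem top_items[]  = { { "File", &file_menu, NULL } };
    Menu     top          = { "Main", 1, top_items };

    /* The dialog controller simply follows the links between menus. */
    MenuItem *chosen = &top.items[0];
    if (chosen->submenu) {
        printf("display menu: %s\n", chosen->submenu->title);
        MenuItem *next = &chosen->submenu->items[0];
        if (next->action) next->action();   /* leaf item: run the action */
    }
    return 0;
}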
Grammar notations:
 The dialog between application and presentation can be
treated as a grammar of actions and responses, expressed in
a formal context-free grammar notation.
 With this technique it is difficult to model communication of values.
State transition diagrams:
 used as a graphical means of expressing dialog.
 The difficulty with these notations lies in linking dialog
events with corresponding presentation or application
events.
 Also, it is not clear how communication between
application and presentation is represented.
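
A state transition diagram maps naturally onto a table-driven dispatcher. The following C sketch is hypothetical: a small dialog for drawing a line, with made-up states and events.

#include <stdio.h>

/* State transition table: state x input event -> next state. */
enum { IDLE, FIRST_POINT, DONE, N_STATES };
enum { CLICK, DOUBLE_CLICK, N_EVENTS };

static const int next_state[N_STATES][N_EVENTS] = {
    /* CLICK        DOUBLE_CLICK */
    {  FIRST_POINT, IDLE },   /* IDLE: first click anchors the line  */
    {  FIRST_POINT, DONE },   /* FIRST_POINT: double click finishes  */
    {  DONE,        DONE },   /* DONE: dialog over, input is ignored */
};

int main(void) {
    int state = IDLE;
    int events[] = { CLICK, CLICK, DOUBLE_CLICK };   /* simulated input */
    for (int i = 0; i < 3; i++) {
        state = next_state[state][events[i]];
        /* linking each transition to presentation or application events
           is exactly the part these notations leave unclear */
        printf("event %d -> state %d\n", events[i], state);
    }
    return 0;
}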
Event languages:
◦ good for describing localized input–output behavior
in terms of production rules.
◦ A production rule is activated when input is
received and it results in some output responses.
◦ It is, however, more difficult to model the overall flow of
the dialog.
◦ Declarative languages:
◦ All of the above techniques (except for menu
networks) are poor at describing how information
flows between application and presentation.
◦ Declarative languages instead describe what should
result from the communication between application
and presentation, rather than how it is achieved.
◦ Constraints: a special subset of declarative
languages,
◦ used to make explicit the connection between
otherwise independent information in the presentation
and the application (see the sketch after this list).
◦ Graphical specification: these techniques allow the
dialog specification to be programmed
graphically,
◦ a form of programming by demonstration, since the
programmer builds up the interaction dialog
directly in terms of the actual graphical
interaction objects that the user will see, instead
of indirectly by means of some textual
specification language that must still be linked
with the presentation objects.
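
As promised above, here is a hypothetical C sketch of a one-way constraint linking presentation and application state; the temperature/slider pairing and all names are illustrative assumptions.

#include <stdio.h>

/* A one-way constraint: the slider position (presentation) is declared
   to follow the temperature (application), clamped to 0..100. */
typedef struct {
    double temperature;   /* application state */
    int    slider_pos;    /* presentation state, 0..100 */
} Linked;

void enforce_constraint(Linked *l) {
    int pos = (int)l->temperature;
    l->slider_pos = pos < 0 ? 0 : pos > 100 ? 100 : pos;
}

void set_temperature(Linked *l, double t) {
    l->temperature = t;
    enforce_constraint(l);   /* propagation is automatic, not hand-wired */
}

int main(void) {
    Linked l = { 0.0, 0 };
    set_temperature(&l, 42.5);
    printf("slider now at %d\n", l.slider_pos);   /* prints: slider now at 42 */
    return 0;
}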
