Centralized Automation - DCS
TRAINING MANUAL
Course EXP-MN-SI110
Revision 0
Operations Training
Instrumentation
Centralized Automation - DCS
INSTRUMENTATION
CENTRALIZED AUTOMATION - DCS
CONTENTS
1. OBJECTIVES .................................................................................................................. 7
2. INTRODUCTION ............................................................................................................. 8
2.1. HISTORY OF PROCESS CONTROL ....................................................................... 8
2.2. BEGINNINGS OF LOCAL CONTROL PANELS........................................................ 8
3. EVOLUTION OF PLANT PROCESSING CONTROL .................................................... 10
3.1. MORE SOPHISTICATED CONTROL ROOMS ....................................................... 10
3.2. CENTRALIZED CONTROL BY THE CENTRAL COMPUTER .............................. 10
3.2.1. Direct Digital Control (DDC) ............................................................................ 10
3.2.2. Digitally directed analog control (DDAC) ........................................................ 11
3.3. DISTRIBUTED PROCESS CONTROL ................................................................... 13
3.4. DISTRIBUTED PROCESSOR SYSTEMS .............................................................. 15
3.5. PROGRAMMABLE LOGIC CONTROLLERS (PLC) AND PROCESS CONTROL .. 16
3.6. DCS VS. PLC COMPARISON: EASE OF SETUP .................................................. 18
3.6.1. Typical configuration of a PLC system ............................................................ 19
3.6.2. Typical DCS configuration ............................................................................... 19
3.7. SCADA SYSTEM .................................................................................................... 20
3.8. INCREASED ROLE OF PERSONAL COMPUTERS (PCs) ..................................... 21
4. WHAT IS A DCS? .......................................................................................................... 22
5. WHAT IS THE DIFFERENCE FROM A PLC? ................................................................ 23
6. THE HARDWARE PART: STRUCTURE OF A DCS ..................................................... 24
6.1. THE BASE .............................................................................................................. 24
6.2. THE POWER SUPPLY ............................................................................................ 25
6.3. INPUT/OUTPUT CARDS ........................................................................................ 26
6.3.1. The logical input card ...................................................................................... 26
6.3.2. The Logic Output Board .................................................................................. 26
6.3.3. The analog input board.................................................................................... 27
6.3.4. The analog output board ................................................................................. 27
6.3.5. The Microprocessor ......................................................................................... 28
6.3.6. The communication card ................................................................................. 29
6.4. HARDWARE STRUCTURE OF CONTROLLERS................................................... 30
6.4.1. Classic Process Controller .............................................................................. 30
6.4.2. Controller Architectures ................................................................................... 31
6.5. CONTROLLER SOFTWARE STRUCTURE ........................................................... 34
6.5.1. Programming ................................................................................................... 34
6.5.2. Organization of execution time for control actions ........................................... 35
6.5.3. Progress in software structure ......................................................................... 36
6.5.4. Programming vs. configuration. ....................................................................... 36
6.5.5. Function blocks ............................................................................................... 37
6.5.6. Connecting the blocks ..................................................................................... 38
6.6. CONTROLLER REDUNDANCY ............................................................................... 41
6.6.1. The myth of single-loop integrity ..................................................................... 41
1. OBJECTIVES
The aim of this course is to enable a future instrument technician to know the
instrumentation located in hazardous areas and its various markings on an industrial site
dominated by oil activities.
At the end of the course, in the field of standards and symbols in instrumentation, the
participant should be able to:
We'll also look at some arbitrary distinctions between DCSs, PLCs, and PCs.
These will only be incomplete comparisons, of course, due to the many creations and
innovations of suppliers, but for the sake of understanding, we will go through
generalizations.
In the first processing plants, process control most often required several operators. They
had to constantly monitor each process unit, observe large measuring instruments
installed on site and manipulate valves. The entire operation of the plant therefore
commonly required operators to come and "visit" the plant, clipboard in hand, to record many
essential parameters. At the end of their first pass, appropriate calculations had to be
made, in preparation for the next visit, to adjust the valves, dampers, drives and other final
elements.
This implied that each operator had to develop his or her own sensitivity to the process, an
art if ever there was one. One of the challenges of such plant management was to
coordinate the many operators so that they could manage the flow of product from one
end of the plant to the other in a consistent manner. Because of the subjectivity of this
"feeling" of the operation, the results of the plant could vary according to the different
operators and their different emotional states. Lead times and other resulting inefficiencies
were the factors limiting the plant's productivity.
With technological advances, it has become possible to transmit pneumatic signals. The
control room was introduced in the most important factories and the large measuring
devices were therefore placed in one place, with a few control devices that transmitted
the signals back to the nearest valves in the field.
This allowed different operators to record their readings in a log and make some
adjustments to the operating processes without having to visit the sites as frequently as
before. Naturally, it was still necessary to visit the factory to adjust the most distant valves,
dampers and other final elements.
A concept had just been born: it was now a question of bringing the factory to the
operators rather than the other way around.
Because most of the information operators needed was now provided to them, the time
taken to make decisions on process results was greatly reduced. This made it easier and
faster to identify interactions between different portions of the process.
All this was achieved by control-command and supervision, thanks to direct wiring and
analog signals. The advantage was that there was no need for too much wiring (or pipes,
in the case of pneumatic installations).
But the disadvantage was that there was only a small margin for control, supervision and
alarm.
After the Second World War, electrical controls became more robust and practical for use
in industrial environments.
More measurements then became possible due to the lower cost of sensors.
In addition, new types of sensors were beginning to exist to measure parameters that were
previously impossible to measure. In addition, it became possible to measure a greater
number of parameters online, rather than taking laboratory samples.
The controllers were smaller in size, so that more of them could fit on the same board and
in a smaller area. All of this led to a more complex control room, hence the need for more
cabling to this location.
As technological advances have brought down the prices of computers, they have become
more common, on larger and more complex installations. This has led to the further
development of centralized control rooms.
Although these computers could now process all this new data, they were still designed
primarily for business applications. During the 60s and 70s, two types of process control
computers appeared:
Benefits:
Sophisticated control
Flexible control
Disadvantages:
Computer reliability
Expensiveness
Benefits:
High reliability
Sophisticated control
Full redundancy
Disadvantages:
Expensiveness
This central control room thus offered a much more accurate picture of the overall
operation of the plant. However, once all the remote parts of the factory were connected to
this single room, the following elements became very expensive:
Engineering design
A serious problem also appeared: a failure on the computer could then cause the entire
factory to shut down! To solve this, backup controllers were often introduced into the
computer system.
In order to ensure better reliability of the system, it was frequently necessary to duplicate
the control systems (i.e. 2 sets of control for each element). This redundancy often
involved the use of analog instruments to keep the plant running.
The operators had to be able to operate the computers and also to know the process
control. This made it difficult to find qualified staff, whose salaries were therefore
particularly high.
The use of a control computer to manage set points and other parameters on analog
controllers avoided losing the signals to the final elements when the computer was shut
down. This did not remove the need for the duplicated control system, but at least the
operators could avoid having to learn to run the process through the computer. If the
control computer was properly implemented, operators could even forget its presence.
Benefits:
Disadvantages:
Lots of wiring
Limited scalability.
Indeed, it was difficult to extend the solutions without reprogramming the entire computer.
Correcting these drawbacks was expensive.
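The supervisory arrangement described above (a computer adjusting set points on local analog controllers, which keep regulating on their last set point if the computer fails) can be sketched as follows. The class and tag names are purely illustrative, not from any real system:

```python
# Sketch of supervisory set-point control with analog fallback.
# All names here (AnalogController, SupervisoryComputer, "TIC-101")
# are hypothetical, chosen only for illustration.

class AnalogController:
    """Local controller: it holds its own set point, so it keeps
    regulating even when the supervisory computer is offline."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def output(self, measurement, gain=2.0):
        # Simple proportional action on the local loop.
        return gain * (self.setpoint - measurement)

class SupervisoryComputer:
    def __init__(self, controllers):
        self.controllers = controllers
        self.online = True

    def update_setpoints(self, new_setpoints):
        # If the computer fails, set points are simply left unchanged:
        # the analog loops carry on, which is the fallback described above.
        if not self.online:
            return
        for name, sp in new_setpoints.items():
            self.controllers[name].setpoint = sp

tic = AnalogController(setpoint=50.0)
sup = SupervisoryComputer({"TIC-101": tic})
sup.update_setpoints({"TIC-101": 55.0})
print(tic.output(50.0))   # controller acts on the new set point

sup.online = False        # computer shutdown
sup.update_setpoints({"TIC-101": 80.0})
print(tic.setpoint)       # still 55.0: local control survives
```

The key design point is that the set point lives in the field controller, not in the computer, so a computer failure degrades supervision but never stops control.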
The high cost of mainframe control meant that computers were used only for operations
large enough, or processes critical enough, to justify the effort such automation required.
A computer-type control system has since become more essential. Indeed, as it matures,
each industry must optimize its processing methods. The cost of raw materials, waste,
pollution and compliance with national regulations has become an increasingly important
element in the efficiency of industrial operations.
The start of distributed control became possible thanks to the capabilities of video display
technologies, which could present data and even let the operator initiate control actions
"by video".
The central control room gathered information without all processing being located in one
place, which spread the risk.
The cost and complexity of the cabling could also be reduced by the use of a digital signal
through a simple cable serving as a communication network (information highway), thus
connecting the different parts of the plant. The secret of all these signals is actually an old
technology: the telegraph.
The use of Morse code was indeed a digital way of communicating information that would
otherwise travel as an analog signal (voice-type information, as with radio).
The distributed process architecture allows tasks to be divided functionally between
different processors, reducing the risk of an overall failure. As methods of reducing ground
loops emerged, physical distribution also became possible. These developments began to
open up the possibility of keeping control local at the sites where it was crucial, while
information remained centralized.
This view of plant operations from the central control room provides the operator with a
single window into the entire process. And operators no longer have to go around the
factory.
They now make their rounds at their fingertips, viewing each controller or group of
controllers on screen to supervise the progress of their process.
If necessary, they can easily make set-point and control changes from their keyboard, as
well as manage all alarms in the event of a process alert.
In addition, if necessary, a plant can have several operator stations on this network. A
local operator station can be located on a specific part of the plant, directly on the same
information highway or directly wired to a set of control loops.
On the plus side, distributed control meant fewer cable runs, no cabling between
controllers and control room, less risk of breakdowns, and a more scalable system, in case
the system needed to be expanded without too many replacement costs.
On the downside, these distributed control systems always had sensors and end elements
connected to control cabinets, and connections between components from different
manufacturers could present a number of difficulties. This is what digital I/O or fieldbus is
all about.
Each assigned to a function and working in conjunction with the others, this combination of
computers constitutes a distributed microprocessor system.
Some purists will be unhappy with this somewhat vague use of definitions. That said,
these purists never agree with each other!
If the assembled components of such systems still perform the usual tasks, they do so
today with very different methods.
As seen above, these hardware advancements are inherent in the smaller size, lower
price, and increased reliability of the technology of these components. The controller
architecture evolved from a single central computer that performed all control tasks and
offered display, processing, and communication, with inputs and outputs, to a processor
architecture distributed across the system.
The intermediate step between these two stages was the evolution of these specific
microcomputers into proprietary controllers and operator stations.
The hardware was proprietary because in the 1960s, 1970s and 1980s, normal computers
lacked the processing speed and memory capacity to operate in real time, i.e. to respond
immediately to process actions and inform the operator and tell him what to do next.
Proprietary systems: As each manufacturer had to find a way to get the data as quickly as
needed, each had to change their original technology, hence the different proprietary
systems. The latter were an opportunity to provide customers with a functional digital
system that was ahead of the standards as quickly and cheaply as possible.
While this is not typical of traditional processing plants, some operations can also harness
the extremely powerful capabilities of a PLC.
Today's PLCs can be more efficient than ever for sequencing, control, and interlocking
operations. Real-time control for interlocking motors and related equipment is a very
practical use of PLCs in the world of process control.
A good example of this is the control of batch processes using process management
functions, which are configured by a personal computer (PC) or PC-type operator
workstation. The newer PLCs have been used for distributed computing, through an
operation involving multiple PLCs on the networks.
These networks are sometimes, but not always, P2P: for example, one PLC can talk to
another directly, without going through any intermediary device.
Decentralized control is now available on most current PLC systems, via intelligent remote
I/O.
Typical applications include equipment stop/start and safety interlocks, filtering, simple
batching, packaging, bottling and equipment handling. PLCs are most often a cheap
alternative to DCSs when no sophisticated loop-control strategies are required.
Very inexpensive, which allows them to be adapted to the functions of the product
Tolerance of hostile environments; they do not require a clean room like many traditional
computers and DCSs (a non-corrosive atmosphere is still needed, however)
Need for the host computer or PC to interface with process controls and other more
complex operations
The available user interfaces do not always have the capability of those
accompanying distributed control; the use of human-machine interfaces (HMI) from
other manufacturers limits the capabilities of PLCs (a nice presentation is not
enough)
PLC suppliers and distributors often lack process expertise, which means turning to
the services, and costs, of an independent integrator.
Everyone regularly asks for a comparison between DCSs, PLCs, mainframes, PCs and PCs
with PLCs, but the products are constantly changing. Each supplier offers a number of
different features to meet as many specific situations as possible. These comparisons are
all generalizations! In reality, most PLCs are found in the manufacturing world and in the
packaging part of processing plants. Sophisticated (multi-loop) control typically cannot be
done with PLCs, for a host of different reasons.
PCs, DCS, and PLCs all suffer from the trade-offs between cost and performance, even if
they are not the same. Arguments for or against all the elements of comparison that
appear here or in almost every available magazine article can easily be found. As with
anything, there is no ideal solution for making the right decision... Otherwise, you wouldn't
be here trying to find out how to buy the right product for your plant (there would be no
more need for engineers; a purchase order would be enough to buy everything).
To find the right system, all parameters must be taken into account. As when choosing
between two brands, in the decision between PLCs, DCSs or general-purpose computers
(PCs), the required functions are the key parameter. The buyer must always keep an
understanding of the process in mind. (Indeed, most buyers know the process but do not
understand it. Understanding may come from gathering the information a new system
could offer.)
There was no standard in this area, but most of these systems were rather simple to set
up. In principle, it was also the supplier who designed the workstation, ensuring that the
operator interface matched these "instruments". In addition to this simplicity, the user was
sure to have real-time operation and ran no risk of inadvertently "polluting" the software.
In addition, PLCs were in principle used to replace relays and were programmed either by
relay logic or, if a more complex command and control was required, in one of the
advanced languages such as Pascal, Basic or a mixture of the two.
A much simpler operator interface then existed, with a very minimal level of
communication and a very limited need for real-time communication between devices.
Each PLC must be configured separately, and you have to be very organized to avoid
duplicating process tags, etc. Complex strategies are normally reserved for individual
DCSs.
The PC needs to be set up to communicate with each PLC to find the specific variables,
then set up for views, then histories, then trends, etc. PLC systems typically have multiple
databases for configuration and matching.
(Although used for data acquisition and control, this type of system is not really a SCADA
system. SCADA is a term that has been used for more than 40 years to refer to systems
that switch equipment beyond the factory site, using traditional telephone, microwave or
satellite connections, and that require specific communications techniques to ensure
integrity under conditions beyond the user's control.)
Configuration is usually done from a workstation designed for the particular system. All
controllers share a kind of common database for peer-to-peer communication in complex
strategies.
The database can fit on a simple workstation, with copy to the controllers. Sometimes
downloads are required to back up redundant controllers. Views, histories and trends will
also need to be configured.
Some may present a common database, depending on the provider (and the age of the
system).
Companion PCs, most often, must have individual links configured for command-control,
views, etc., on an independent database, as with PLC systems.
DCS systems typically have a single database and do not need to be regularly
synchronized.
In recent years, the term SCADA (Supervisory Control And Data Acquisition) has very
often been used to refer to systems that are actually data acquisition systems which today
also provide control.
But that has not been the definition of the term for the last 50 years. On a large scale, true
SCADA systems were used for remote control and information collection actions from the
factory.
These SCADA systems were not normally used for process control, but rather for the start-
up and shutdown of remote units, such as remote power transformers or water or gas
pumps on pipelines.
Most often, the connection is not by cable but by radio transmission, telephone line, or
even satellite. The delays on these SCADA systems meant that detailed control of the
process itself could not be handled remotely.
The control part was only meant to switch particular units on or off, or to bypass units that
had suffered damage, for example as a result of a storm or an accident. Energy
distribution companies always have to deal with events of this type.
All communications in such remote transmissions, as with a SCADA system, must tolerate
long delays between the request for action and the occurrence of the action. Another
cause for concern is the frequent unplanned interruptions of a transmitted signal.
In principle, this prohibits any continuous action process, which requires a better response
time. For decades, proprietary technologies have been implemented to counteract these
limitations of control, such as the extremely rigorous "check before execution" routines on
any data transmission.
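A "check before execution" routine of the kind mentioned above can be sketched in a few lines. The message format here is hypothetical: the idea is simply that the master sends a framed command, the remote echoes it back, and the command is executed only if the echo is intact and identical to the original:

```python
# Illustrative "check before execution" routine for a SCADA link with
# long delays and possible corruption. The frame layout (payload +
# CRC32 trailer) is an assumption for this sketch, not a real protocol.

import zlib

def frame(command: str) -> bytes:
    """Append a CRC32 so corruption in transit can be detected."""
    payload = command.encode()
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify(msg: bytes):
    """Return the command text if the CRC checks out, else None."""
    payload, crc = msg[:-4], int.from_bytes(msg[-4:], "big")
    return payload.decode() if zlib.crc32(payload) == crc else None

def execute_if_confirmed(sent: bytes, echoed: bytes) -> bool:
    # Execute only when the echoed frame is intact AND identical
    # to what was originally sent.
    return verify(echoed) is not None and echoed == sent

cmd = frame("OPEN BREAKER 7")
print(execute_if_confirmed(cmd, cmd))            # True: clean echo
corrupted = cmd[:-1] + bytes([cmd[-1] ^ 0xFF])   # one byte flipped
print(execute_if_confirmed(cmd, corrupted))      # False: rejected
```

The round trip doubles the transmission delay, which is exactly why this discipline suits on/off switching of remote units but not continuous process control.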
Like it or not, the mid-90s became the era of Bill Gates' Microsoft, which is increasingly
influential on all of our technology.
The volume of Microsoft products and the volume of compatible products created
unbeatable de facto standards and prices. Within a few short years, this filtered into all
process control product lines, in addition to commercial products and the resulting
professional practices.
The first part of the system architecture in which the PC appears to be the device most
requested by users is the workstation. UNIX has been the workhorse of processing power
and stability. Over the years, it became the undisputed foundation of reliable operation for
critical applications. However, Windows NT and its successors would overtake it.
Price, power and universality are what generate the demand from users, who then also
want robustness for these products.
The process controls industry, on the other hand, has never had the volume advantage for
most of the products it uses.
Unlike commercial systems or, even more so, consumer products. Just compare the
number of actual plants that world economic demand can support with the number of TV
sets, microwaves and video game consoles (the list is endless) that benefit from such
volumes. Prices fall as volumes grow. Research into new technologies will always target
markets that can support these investments. Other users just have to wait for the results
and then adapt the technology for their own use.
There are a few exceptions. In 1970, Honeywell Industrial Division (Fort Washington,
Pennsylvania – USA) funded the development of General Instrument's first 16-bit
microprocessor to create the TDC2000, the first commercially successful distributed
control system. The costs had to be amortized over 8 control loops to justify the expense,
while remaining competitive with single-loop controllers (8 bits would not have been
enough). And this was 10 years before the first 16-bit PCs!
4. WHAT IS A DCS?
A Distributed Control System is, first and foremost, a specialized computer, with:
A power supply,
One (or more) communication cards using RS232 or RS422/RS485 serial links,
Classic I/O cards (analog inputs, analog outputs, logic inputs, logic outputs).
DCS stands for Distributed Control System; in Europe it is more often referred to
as SNCC (Système Numérique de Contrôle-Commande, i.e. digital control system).
A DCS system is therefore simply intended for the adjustment of control loops, the
recording of all the measurements we need, the monitoring of all the parameters
necessary for the process.
There is still a difference in the scanning cycles, which are generally faster on a PLC than
on a DCS (which is normal, since a PLC scans mainly logic inputs and outputs!).
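The scan cycle mentioned here is the same on both families of equipment: read all inputs into an image table, execute the logic once, write all outputs, and repeat. A minimal sketch, with invented names, looks like this:

```python
# Minimal sketch of a cyclic PLC/DCS scan: snapshot inputs, run the
# user program, write outputs at the end of the scan. All names
# (scan_cycle, start_pb, trip, motor) are illustrative only.

def scan_cycle(inputs, logic, outputs):
    """One scan: freeze inputs, run the program, update outputs."""
    image = dict(inputs)          # input image table (frozen snapshot)
    results = logic(image)        # user program works on the snapshot
    outputs.update(results)       # output image written at end of scan
    return outputs

def logic(image):
    # Motor runs when the start button is pressed and no trip is active.
    return {"motor": image["start_pb"] and not image["trip"]}

outputs = {}
scan_cycle({"start_pb": True, "trip": False}, logic, outputs)
print(outputs)   # {'motor': True}
scan_cycle({"start_pb": True, "trip": True}, logic, outputs)
print(outputs)   # {'motor': False}
```

Working on a frozen snapshot of the inputs is what makes each scan deterministic: the logic sees one consistent picture of the process per cycle.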
In fact, the distinction between DCS and PLC is more and more a historical (or usual)
question and less and less of a profound reality.
Moreover, DCS and PLC use the same programming languages defined by the
IEC 61131-3 standard.
As for the supervisor, it is external to the DCS (or the PLC). It is a program in an
external computer that is responsible for creating the interface necessary for the human
control of industrial processes.
Generally, PLCs are supplied without a supervisor (SCADA) in the basic configuration,
while DCSs are supplied with their dedicated supervisor, hence some common confusion
between DCS and supervisor.
The base is simply the rack into which you plug all your I/O cards, the power board,
the CPU card (microprocessor associated with its memory) and finally your
communication card.
The base is very practical because the PLC power supply is distributed to all the modules
on the base via the backplane, so only one power supply is needed (through the power
board).
Figure 9: Base
Training Manual EXP-MN-SI110-FR
Last Revision: 08/04/2009 Page 24 of 249
In this type of cabinet, you will most often now have two
servers installed and connected to the DCS network.
6.2. THE POWER SUPPLY
The logic input card will allow us to monitor all the following types
of logic inputs (among others):
Push button,
Motor feedback,
Pressure switch,
Thermostat,
Level switch,
They are equipped with LEDs that allow us to indicate the logical
state of the affected inputs on the board.
The logic output board will allow us to control all the usual types of on/off actuators
(among others).
The analog input board will allow us to measure all the following types of parameters
(among others):
Pressure,
Flow rate,
Temperature,
Level,
The analog output board will allow us to regulate all of the following
actuators:
Control valve,
Variable speed drive,
Be careful, because the analog input and output cards look similar: it is best to check the
card references, which are often printed on each card.
RUN-P: program execution; all PG (programming console) functions are allowed.
RUN: program execution; only PG read functions are allowed.
General erase: this function erases all user data from the CPU. It must be done once
before programming starts (at the beginning of the project, for example).
It is also used by the maintenance technician, who can connect a laptop to it to
check whether it is working properly.
You can see in the photo that there are two Ethernet ports; these are what allow you to
connect your PCs with an RJ45 network cable.
Various vendors have used different approaches to design their products, starting from
how they envisioned the role of their analog counterpart.
Let's not forget that the programmable logic controllers (PLCs) used in automation in
companies were first developed to replace relay batteries, with, for any operator interface,
on/off buttons to initiate actions and lamps to monitor the progress of an operation and
notify the operator of its completion. The origins of process control are quite different.
The first process controllers were physically part of the operator panel. They not only had
a process variable (PV) indicator on a calibrated scale, but also a set point (SP) on the
same scale, as well as a control signal output indicator.
On some instruments, this indicator showed not the controller's output but the actual
position of the final element (valve, motor drive unit, etc.), taken from a feedback
signal.
In the rack (or card cage) of the distributed controller, each board slot came to serve
several loops sharing a processor (Figure below).
As before, there were cables to the sensors and end elements, but this controller rack no
longer had direct cables or traditional connections.
Both had to be converted to digital. Alarm values in a loop could now trigger cascading actions.
In addition to the not-so-obvious fact that an "infinite" number of cables can be connected
to a digital point without electrical "load"...
In the past, the latter idea led to a severe restriction on the ability to develop different
control strategies.
These electronic devices called controllers frequently featured specific function boards in
the early stages of their design. There was a special board, dedicated specifically to inputs
and outputs, as well as a board to store all the algorithms (or function blocks) used in this
system.
In the mid-90s, two very general controller architectures appeared and took over most of
the DCSs still in operation, among the tens of thousands existing worldwide. Both types
have influenced the approach taken in the most recent designs, and it's best to understand
how they work, especially if your company uses a few of them.
Both versions are based on distributed processing, but in a specific way. For the shared-
function version, all loops arriving at that particular controller shared multiple
microprocessor boards (Figure below).
They could be recognized by their dedicated cards: an output card (or input card, or
storage station), a signal-conditioning card, a database card, an algorithm card, an
external communication card. These cards were usually accompanied by other necessary
cards, such as a power card or an information-highway card.
The main advantage of the Shared Function Controller is that all controllers have the
same hardware/software sets, making it easy to order, install, modify orders, train,
maintain and store parts.
The second classical type of construction was based on individual microprocessor boards
for each loop or set of loops (Figure below).
Some boards were dedicated to loop control and others to logic control. The output, input,
signal-conditioning and similar functions were provided on each card by the same
processor. There were also other boards dedicated to programming, using the vendor's
standard set of algorithms. Often these cards were said to be multifunction,
multitasking..., all of them communicating with one another. The systems of the time were
most often Bailey, Fisher Controls, Foxboro IA, Rosemount, Taylor, and Toshiba.
The main advantage of the single-loop controller approach is that the loss of a processor
only affects a loop or a small group of loops.
Later designs began to accommodate these unclassifiable ideas, taking advantage of the
ever-increasing power of processors and memory capacities. As a result of these two
initial designs, a more common design was born, where all functions are embedded on the
same board or module.
This possibility does away with the objective of single-loop integrity, which is almost
impossible to meet anyway, regardless of the interlocking control strategy chosen.
The only protection for today's control strategies is the redundant controller, which is now
more convenient and cheaper than before.
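The redundant-controller idea is simple to state: a primary and a hot standby share the same configuration, and on primary failure the standby takes over, so the loss of one processor stops no loops. A hedged sketch, with invented class names:

```python
# Illustrative sketch of controller redundancy (failover to a hot
# standby). Class and controller names are hypothetical.

class Controller:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

class RedundantPair:
    """Primary/standby pair sharing one configuration."""
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby

    def active(self):
        # Failover: switch to the standby when the primary is lost.
        return self.primary if self.primary.healthy else self.standby

pair = RedundantPair(Controller("CPU-A"), Controller("CPU-B"))
print(pair.active().name)   # CPU-A
pair.primary.healthy = False
print(pair.active().name)   # CPU-B takes over
```

Real systems add database synchronization between the two processors so the standby takes over with current loop states, not stale ones; this sketch shows only the switching decision.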
Four of these languages are function blocks for continuous control, ladder logic for
discrete logic, sequential function charts for sequencing and batch control, and
structured text for complex calculations. (The fifth, instruction list, is equivalent to
assembler and not very user-friendly for process configuration.)
One process vendor has used multiple configuration languages on the same module
since 1992 and offers the possibility of mixing all of these languages within the same
graphical configuration technique. Most other vendors are starting to use one or more of
the IEC languages for their control configuration.
Early analog controllers influenced early hardware designs as well as software. The latter
was to perform the actual functions of the controllers, but new functions have arisen from
the exceptional possibilities of software.
More precisely, it is new combinations of functions that have become possible and
continue to enable new solutions, at a lower cost. This is the exciting field of development,
which is changing and greatly evolving the entire field of control and command.
6.5.1. Programming
In the PC field, we often talk about hardware, software and firmware. Hardware is the
physical equipment itself. Firmware, on the other hand, is software burned into PROM; it
remains fixed and makes certain routines always run in the same way, such as the
algorithm for a three-mode (PID) controller, a multiplier, or a divider.
Firmware programming must be comprehensive and very specific. The microprocessors
of process controllers have typically been programmed to perform a number of basic
command routines, much like the normal operating procedure of a military camp.
The routines themselves are inaccessible: the corresponding bit patterns are stored in
read-only memory (ROM), out of the user's reach. A very basic routine tells the CPU to
fetch an instruction from a particular register, perform the command that the instruction
defines, and then move on to the next instruction. In the absence of an instruction, it
waits and polls periodically until another instruction appears, which it then processes.
To generalize a bit: the instructions cause information stored at a specific address in
memory to be supplied to the microprocessor and placed in a data register. The data are
then transmitted to the arithmetic logic unit (ALU), where they undergo the arithmetic or
logical operations.
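The fetch-execute cycle just described can be sketched as a toy interpreter. The three-instruction machine below (LOAD, SUB, STORE) and its register names are invented purely for illustration:

```python
# Toy fetch-decode-execute loop illustrating the cycle described above.
# The instruction set (LOAD, SUB, STORE) is invented for illustration.

def run(program, memory):
    acc = 0          # accumulator register A
    pc = 0           # program counter
    while pc < len(program):
        op, arg = program[pc]      # fetch the instruction at the current address
        if op == "LOAD":           # fetch a memory value into the data register
            acc = memory[arg]
        elif op == "SUB":          # the ALU performs the arithmetic operation
            acc -= arg
        elif op == "STORE":        # write the processed result back to RAM
            memory[arg] = acc
        pc += 1                    # move on to the next instruction
    return memory

mem = run([("LOAD", "TEMP"), ("SUB", 150), ("STORE", "OUT")], {"TEMP": 175})
print(mem["OUT"])   # 25
```

The processed results land in RAM (the `memory` dictionary here), which is the part accessible to the programmer.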
This allows specific tasks to be carried out. The memory units in which this processed
information is stored are not located in the read-only memory (ROM) but in the random
access memory (RAM), which is accessible to the programmer.
The programmer can combine the commands that the microprocessor can execute in a
specific consecutive order that will meet his needs. In our analogy with the military camp,
this program is comparable to the series of activities carried out by recruits, aimed at
making them soldiers.
Depending on the vendor, the software works with different characteristics, depending on
the design chosen. As with hardware, software design can vary, for the same function.
Advances in distributed systems have also greatly benefited from the new approaches that
are being taken and make it possible to configure computer systems without being a
programmer.
The objective is for the plant engineer in charge of developing process control systems
to be able to operate his control system without being a programmer and without having
to master the underlying programming.
That said, the elements necessary for control have evolved in a few years, from
programming to configuration.
You may know the details of how a bulb works, but it is not essential to use the light
produced. You just need to be able to turn the knob and be lit enough to work. The design
of a control system is based on this same principle.
Subtract 150 from the temperature value stored in accumulator A and place the result
in A.
If the value of the accumulator is less than zero, go to the LOOP1 statement;
otherwise:
Store the contents of accumulator A in the MOTOR slot (this starts the motor fan).
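In a high-level language, the three accumulator-level steps above collapse into a single readable test. A minimal sketch, with invented names (`check_fan`, `THRESHOLD`):

```python
# High-level equivalent of the three accumulator instructions above.
# The names THRESHOLD and check_fan are invented for illustration.
THRESHOLD = 150

def check_fan(temperature):
    # subtract, test, and jump collapse into one conditional
    if temperature - THRESHOLD < 0:
        return "LOOP1"       # below threshold: loop back and keep scanning
    return "MOTOR_ON"        # store to the MOTOR slot: start the motor fan

print(check_fan(175))   # MOTOR_ON
```

This is the kind of abstraction that later configuration languages push even further, hiding the accumulator entirely.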
Ladder logic originated from relay hardware and made it easier for North American
electricians to understand solid-state (semiconductor) logic. Programming complex
functions such as PID with it is very cumbersome, however, and engineers often prefer
Boolean logic to express the same programming information.
Increases in memory and processing power have made it possible to use advanced
languages, which remain, however, quite impenetrable to non-programmers. These
languages are often oriented toward specific missions: science (Pascal), business,
machine tools (APT), simulation, circuit analysis, and so on.
All these languages have played a role in the history of control programming methods,
whether logical control or distributed control of programming. All of them have evolved
over time.
Process control vendors found most of these languages too complicated for users who
needed tools that were easy to configure rather than program. This led these vendors to
develop function blocks embodying control strategies, linked together in different ways
by virtual wiring.
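A control strategy built from function blocks and virtual wiring can be sketched in a few lines; the block names and wiring style below are invented for illustration and do not reflect any vendor's actual configuration language:

```python
# Minimal function-block sketch: blocks are pre-tested units that the
# configurator "wires" together virtually instead of programming.
# All class names and the wiring API are invented for illustration.

class PV:                      # process-variable input block
    def __init__(self, value): self.out = value

class Subtract:                # error block: setpoint minus measurement
    def __init__(self, a, b): self.a, self.b = a, b
    @property
    def out(self): return self.a.out - self.b.out

class Gain:                    # proportional-only "controller" block
    def __init__(self, src, k): self.src, self.k = src, k
    @property
    def out(self): return self.k * self.src.out

# Virtual wiring: setpoint and measurement feed an error block, then a gain.
sp, meas = PV(50.0), PV(46.0)
output = Gain(Subtract(sp, meas), k=2.0)
print(output.out)   # 8.0
```

The point of the sketch is that the configurator connects outputs to inputs; the internals of each block are fixed and pre-tested.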
The first vendor may offer large, very complete and powerful function blocks, some of
which include appropriate alarms and on-board digital status links. If you later need to
add alarms or link the status of a new function, you will have to connect to what already
exists, without being able to change the scan times or find the space needed to slip the
new function(s) into the configuration.
In general, vendors that have used the fixed time interval approach have implemented
large function blocks.
The second vendor may offer several smaller function blocks. Multiple function blocks
may be required to create an operation that a single block from the first vendor performs.
(I remember a vendor that required 42 small function blocks to perform a cascade
control action that one of its competitors provided with only 2 of its function blocks.)
Smaller function blocks clearly offer greater flexibility, but any changes will likely involve a
greater reorganization of the control strategy, a tedious recalculation of scan times, and
consideration of other interactions. This sometimes requires a check of the order
necessary for computer code.
In general, vendors that have chosen the variable time interval offer smaller function
blocks.
Vendors have created the equivalent of several types of instrument hardware by creating
function blocks performing the same calculations.
All of these are, in principle, firmware: software thought of as hardware, a bit like
vaporware, a product sold before it actually exists.
These standardized function blocks typically vary from vendor to vendor, and their
design prevents the user from modifying their fundamental actions. However, they offer
a number of adjustments and terminals, just like the equipment they replace.
Thus, the "programming" of these functions does not require systematic testing every
time they are used. The person in charge of the configuration can therefore rely on them
always working in the same way, without having to resort to what programmers call linking.
Imagine you are making yourself an omelette one morning and a 4-year-old is helping
you. Anyone who has children knows that at this age they are desperate to help but
aren't experienced enough to actually do so. You expect this experience of helping to
bear fruit later.
"Give me two eggs quickly," you tell him. The child's reaction is to throw them at you, didn't
you say quickly?
They obviously break. You say "no, bring them to me and put them in a safe place". The
child thinks about putting them in his pockets, where they break again. Since you want
whole eggs, you modify your query again. The child finds a bag and delicately places the 2
new eggs in it. Between the length of his arm and the size of the bag, the eggs smash
again, this time on the ground (they were safe in the bag!), at the moment when the happy
child comes to you with his bag.
That said, each of your instructions was correct and each response was plausible, but
the eggs never arrived whole. This is what programmers call coordination. We can all
give instructions to a computer, but only real experience with the application tells us,
above all, everything not to do!
For example:
A ramp generator function block could vary the gain as a function of a variable
or the difference between two variables.
Etc. The only limiting factor is the imagination of the process engineer!
A few years ago, this was only possible through a single mainframe, through vulnerable
and slow communication links.
With hardwired equipment, it was probably not possible at all or, to say the least,
impractical. It is from such configuration possibilities that the changes appear in the
process techniques that drive productivity.
Productivity was the real reason your boss told you to look into the DCS.
On the left and right of the following figure, you can see the physical connections of a
controller to the real world of sensors and final elements.
Between the vertical bars are function blocks connected by virtual wiring, establishing a
control strategy.
The example here is an industrial boiler, with analog control of fuel and air. These
controls are interconnected, so that if the air controller set point is changed, control links
and circuits ensure the correct fuel level.
The operator does not need to manage them independently, with imperfect results.
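Fuel/air interconnection of this kind is classically implemented as a cross-limiting scheme. The sketch below is a simplified illustration under assumed percent-of-range signals, not the actual configuration of the figure:

```python
# Sketch of a classic cross-limiting combustion strategy: on a load increase
# air leads fuel, on a decrease fuel leads air, so the mixture can never go
# fuel-rich. All signals are assumed to be in percent of range.

def cross_limit(demand, fuel_flow, air_flow):
    fuel_sp = min(demand, air_flow)   # fuel may only rise once air is there
    air_sp = max(demand, fuel_flow)   # air must rise before fuel can follow
    return fuel_sp, air_sp

# Load step up: the air setpoint jumps at once, fuel is held back by actual air.
print(cross_limit(demand=70, fuel_flow=50, air_flow=50))   # (50, 70)
```

On a load decrease the selectors reverse roles: fuel drops immediately while air is held up by the measured fuel flow.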
You will also notice the discrete trips in case the flame goes out, the oil level is too low,
etc. They are also connected to logic control circuits, so that the boiler can be shut down
if conditions require it.
In addition, note the impact of steam pressure, oil flow, column pressure, and airflow.
There is even an equation for the efficiency of the boiler here. Of course, a boiler for a
power installation would be much more complex, but could still be implemented by function
blocks and virtual wiring.
In the good old days of single-loop control hardware, it was quite rare for a process loop to
be doubled by another loop.
If the process had any interlocking control, even as simple as a cascade control, such a
backup solution proved to be very difficult and expensive.
If interlock relays were needed, redundancy became very complicated because of the
additional components, which could eventually make the strategy less reliable!
There are undoubtedly redundant control possibilities for a single process loop.
However, one can argue that more has been done than necessary: most
microprocessor-based controllers serve process units with more than one simple
variable, such as temperature alone.
Often, this loop is related to pressure, flow, and an analyzer or other devices. In this
case, the loop is no longer independent and, if it goes wrong, the entire unit stops.
To avoid this, the whole strategy must be redundant or the single loop must be doubled
with a large number of vulnerable relays and circuit breakers, which, as we have already
mentioned before, are more likely to fail than the control loop itself.
The vogue for single-loop integrity began in the early days of distributed systems, when
shared loop control was the only affordable way to use microprocessors. In a difficult
business climate, a vendor without a DCS faced a vendor that offered one, and lost
business.
To change the cost/benefit ratio of the DCS vendor, the second vendor mounted a
campaign to write specifications on the need to offer backups for each loop,
independently of the others.
Redundancy was in its infancy, and the DCS vendor had developed a scheme to back up
its multi-loop controller cards cost-effectively, by sharing up to 8 controller cards with a
single backup "director" connected to a single empty spare card.
Because of its cost, this scheme only became economical when 4 or more controller
cards shared it. The second vendor, which was struggling with conventional single-loop
controllers, had to develop an advertising program hammering home that only one-for-
one single-loop backup was reliable.
Still, for a very long time after these commercial battles, vendors continued to encounter
customers demanding single-loop integrity in their specifications, if only because they
were unaware that the real issue is making the entire control strategy redundant.
When an error appeared on the first station (when, not if), the swap routine had to
compare each line of code before allowing the changeover. This took a considerable
amount of time, sometimes hours of work.
This would be completely unacceptable in most process control applications. Keep in mind
that most of our IT legacy came from commercial computers, for which loss of function
was not as vital as product loss on a production line.
With the advent of the first microprocessor controllers, the first designs were shared-
loop controllers, for cost reasons.
There were 8 loops in the same case. It was recognized that these digital versions of
process loops residing in the same package could effectively transfer information between
loops and enable complex control strategies. As a result, it was then assumed that this
was the best back-up for the overall control strategy.
Through the control unit and the digital communication link between it and the
controllers, a controller held in reserve could take over the same inputs/outputs without
any physical switching, maintaining the configured software control strategy.
When the control unit detected poor diagnostics on one of these 8 controllers, it could
immediately switch the database of the failed controller to a backup controller, redirect
the inputs and outputs, and continue operation from this new controller as the primary.
All of this took place within the failure time frame of the deficient controller.
In an architecture where all hardware/software sets are identical, no restriction applies
to the control strategies assigned to these controllers. A 1-for-N backup is possible
because all the control strategies are duplicated. The redundancy is transparent to the
process and to the operator display (except, of course, for the diagnostic reporting the
initial failure).
The system relies on the statistically low probability that a second controller will fail
before the primary controller is returned to service. This was a relatively safe backup
method, but naturally the idea of moving a database out of a failing controller frightened
many people. I think that this fear was unfounded in practice.
There are only a few of these versions on the market, but there are tens of thousands of
them in operation in factories around the world.
Most vendors load the backup with the same configuration as the primary. A few
vendors allow the backup to be loaded with a different configuration and speak of the
benefit of allowing hot reconfiguration.
This can be dangerous, as it requires rigorous and systematic discipline on the part of
each of the users of the system, in order to ensure that the correct configuration is saved
in case a failure occurs. If not... Some very surprising things could happen on the process!
Some vendors offer a shared form of backup, 1-for-N, in which several identical
(hardware) multifunction cards are backed up by a "director" on a single backup card.
Here, as with shared controller racks, each configuration strategy must be unique.
The I/O is swapped from the failed card to the redundant card. On some of the shared
versions, a specific memory card retains all the configurations (but not the current values)
of all the cards in that rack and downloads the appropriate one as needed. Others retrieve
the configuration adapted to the controller, from a workstation.
Both types (shared and one-for-one) have redundant adapters on the same backplane,
and some use redundant circuit paths on that backplane.
Some vendors employ redundant bus paths on the backplane for added security, even
though the backplane, being passive, is hardly prone to failure. However, many of them
do not bother to place the redundant card on a separate power supply.
It is also possible to have an independent power source, in case the failure also affects
this power supply.
Some systems allow rack-by-rack backup, with the cabinets fed from different power
sources, so that each of these cards can have a different power path.
This is what happens if you need to reload the configuration onto a replacement module
from another location. For the control of a heat treatment furnace, a pharmaceutical
plant, or a batch process in the chemical industry, this could be a problem. Some
vendors have therefore designed the redundant controller as a hot spare, continuously
scanning the current values as well as any changes to the primary controller's
configuration, both online and offline. This makes the swap smoother and allows the
process to continue almost without interruption.
It is also possible to select shared or redundant I/O, and in some cases both, on each
controller.
But not all providers offer this match. In this case, the controllers can be located in different
cabinets, with different power supplies, if this level of data integrity is required.
These different solutions are proposed so that the user can play between price and
redundancy, according to the needs on the same system.
We have previously established that a power interruption often means that the sequential
control strategy resumes its routine at the beginning. This can also be the case on several
PLC models.
Most often, this is not a problem for industrial automation situations, but for process control
it could compromise the process itself, depending on the complexity of the system.
While some controllers still need to reload their configuration after a power outage,
many have internal batteries to protect the RAM that holds the configuration.
The charging of these batteries is often monitored and a significant alert time is provided
by internal diagnostics. This alarm, which is usually triggered several weeks in advance, is
displayed on the controller itself (LED or alphanumeric code) as well as on the diagnostic
video display for operators and others.
The controller's two main sources of input and output are the process interface and the
operator interface. This chapter simply presents the hardware involved. The process
interface must handle all signals to and from the process, whether they come directly
from sensors or are conditioned by transmitters.
Technology is changing this whole sequence very quickly, not only on digital fieldbus
networks, but also the very nature of the sensors themselves. The other critical interface is
with the human operator.
In most control systems, the control station on a network manages all inputs and outputs
as well as the control functions. In the process control loop, the path from the sensor to
the controller and back to the final control element must be unobstructed.
This implies that all paths to the controller's inputs and outputs must be as direct as
possible, even if they are shared among different inputs and outputs.
The inputs and outputs of a controller usually divide into analog and discrete.
A few pulse inputs have been made available over time for devices such as frequency
meters and pulse outputs for some motor drive units or equivalent.
Signal conversion usually takes place on the controller itself, with analog-to-digital
conversion in digital controllers.
Connecting I/O to the real world also involves all the tricky issues such as ground loops,
radio frequency disturbance (RFI), electromagnetic interference (EMI), surge protection,
protection against hazardous atmospheres, etc.
Although technological changes have an impact on this part of the system, this document
(which seems to us to be sufficiently long) will not deal further with the many issues of
standard instrumentation wiring and suggests that you consult other publications on this
subject.
There are now several versions of I/O modules connected to most controller modules,
typically offered for discrete and analog values.
Other inputs can also be received for the many signals from sensors, analyzers, and
other devices (such as low-level signals from thermocouples, thermistors, and
resistance temperature detectors [RTD], or digital signals from chromatographs).
The choice and combination of these modules, in addition to the number of terminals
available on each of them, can make the difference in terms of cabinet space needed for a
particular project.
Rather than using controllers, the I/O modules themselves increasingly tend to feature
microprocessors for analog-to-digital conversion (and vice versa), linearization, and signal
pre-processing.
Different kinds of signals arrive at the controller, for example from discrete switches and
relays, light, temperature, flow and pressure. Digital communication signals can even
come from typing a keyboard.
All these signals, whether discrete or analog, must then be converted to digital. In
process control, the analog signals are typically 4 to 20 milliamperes (mA) DC.
Once these digital signals have been manipulated within the controller, they must be
converted back into discrete and analog outputs. The analog output signals are usually
4-20 mA DC (or the equivalent 1-5 V DC across a load resistor) in order to drive
actuators, valve positioners, or motor drives.
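The live-zero scaling of a 4-20 mA signal can be sketched as follows; the 0-150 degC engineering range is an invented example:

```python
# Sketch of the usual 4-20 mA live-zero scaling: 4 mA = 0 % of span,
# 20 mA = 100 %. The 0-150 degC engineering range is invented for illustration.

def ma_to_engineering(ma, lo=0.0, hi=150.0):
    if not 4.0 <= ma <= 20.0:
        # below 4 mA usually means a broken wire or a transmitter fault
        raise ValueError("signal outside 4-20 mA range")
    return lo + (ma - 4.0) / 16.0 * (hi - lo)

print(ma_to_engineering(12.0))   # 75.0  (mid-scale)
```

The live zero at 4 mA is what lets the system distinguish a true zero reading from a dead loop.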
Discrete output signals drive solenoids, brakes, relays, indicator lights, and more. Digital
communications are also used to transmit data to various displays, operators and
printers.
There are several techniques for converting between analog and digital signals. The
following figure shows the principle of changing a digital bit each time the signal crosses
a threshold. The greater the number of thresholds, the better the conformity between
the actual signal and the detected signal.
As the analog signal passes through different thresholds, all are recorded in bits. In this
example, it is a 3-bit conversion, when the signal goes from 0 to +4V and then goes down
to –4V.
Regular clock bits are read from left to right, and at each clock interval a step is formed
toward the corresponding point of the curve. The conversion is based on the 2^3 = 8
combinations of 1s and 0s.
Conformity is the measure of how closely these steps represent the curve. The taller the
steps and the steeper the slope, the worse the fit to the actual curve. Conversely, with
small steps and a gentler slope, the tops of the steps track the actual curve more
accurately.
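The staircase conversion described above can be sketched as a simple 3-bit quantizer over the ±4 V range of the example; the function name and interface are illustrative:

```python
# 3-bit quantization of the ±4 V signal from the example: 2**3 = 8 levels,
# so each staircase step spans 1 V. More bits mean finer steps and better
# conformity with the actual curve.

def quantize(volts, bits=3, vmin=-4.0, vmax=4.0):
    levels = 2 ** bits
    step = (vmax - vmin) / levels        # height of one staircase step
    code = int((volts - vmin) / step)    # which threshold was crossed
    return min(code, levels - 1)         # clamp the full-scale value

print([quantize(v) for v in (-4.0, -1.5, 0.0, 3.9)])   # [0, 2, 4, 7]
```

Raising `bits` from 3 to 14 shrinks the step from 1 V to about 0.5 mV over the same span, which is exactly the resolution improvement the next paragraphs quantify.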
This increase in the number of steps and risers improves resolution. Thus, the more bits
there are to represent the shapes of the analog curve, the better the resolution. Given the
current technological advancement, there are typically 14-bit resolutions for inputs and 10-
bit for outputs.
To illustrate the improvement, a 13-bit input represents a resolution of 1 in 8,192 and a
14-bit input a resolution of 1 in 16,384, a significant improvement in conformity.
1-bit resolution = 1 in 2
2-bit resolution = 1 in 4
3-bit resolution = 1 in 8
4-bit resolution = 1 in 16
5-bit resolution = 1 in 32
6-bit resolution = 1 in 64
7-bit resolution = 1 in 128
8-bit resolution = 1 in 256
9-bit resolution = 1 in 512
10-bit resolution = 1 in 1,024
11-bit resolution = 1 in 2,048
12-bit resolution = 1 in 4,096
13-bit resolution = 1 in 8,192
14-bit resolution = 1 in 16,384
15-bit resolution = 1 in 32,768
16-bit resolution = 1 in 65,536
17-bit resolution = 1 in 131,072
18-bit resolution = 1 in 262,144
19-bit resolution = 1 in 524,288
20-bit resolution = 1 in 1,048,576
(Note: the inputs have a better resolution than the outputs to allow for rounding in the
calculations. The resolution of real-world sensors and final elements rarely requires this,
but the accumulated rounding in these calculations does.)
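The whole table follows from the fact that an n-bit converter resolves 1 part in 2^n; a couple of lines regenerate it:

```python
# An n-bit converter resolves 1 part in 2**n of the span; this reproduces
# the resolution table above for any bit count.

def resolution(bits):
    return 2 ** bits

for n in (13, 14):
    # 1/2**14 of span is about 0.006 %, far finer than most field sensors
    print(f"{n}-bit resolution = 1 in {resolution(n):,}")
```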
Of course, with technological advances, these limitations are likely to change significantly,
especially as sensors and terminals themselves become digital and require less and less
conversion to analog signals. Digital fieldbus I/O communication is becoming the norm.
Remote I/O is becoming more and more common (often with signal processing). It has
become the digital communication subnet of the controller, and most often the
connection is serial rather than parallel, although this is not mandatory.
It all depends on the nature of the supplier's communication link and how quickly process
information can enter and leave the controller.
The considerable savings come not only from fewer terminals but also from savings on
the cost of wiring each terminal point, on expensive thermocouple extension cables, and
the like.
The use of optical fiber can increase these savings further, because it requires no
protection against electromagnetic interference, radio-frequency interference, lightning,
or ground loops.
Optical fiber:
At the end of the 1990s, an ever-increasing number of sensors and elements could
transmit data via digital field communication networks (next figure).
HART (Highway Addressable Remote Transducer) emerged as the protocol for
Rosemount's "smart" transmitter. Rosemount then chose to open up the protocol so that
others could adopt it. Other suppliers took up HART, which became a de facto standard.
There is now a HART committee, made up of a number of suppliers and users who
oversee the use and definitions of the standard. There are advantages to following fieldbus
standards in single drop mode, such as the interchangeability of sensor and end-element
hardware from different vendors for the same function. Multi-drop adds to these
advantages a saving in cabling costs.
There is considerable pressure from vendors and users to develop a digital link to the
sensors and final control elements, rather than the traditional 4 to 20 mA analog signal.
This digital link not only provides simple input from a sensor or input to the control
element, but also viable, bi-directional communication between these elements.
On most systems today, the connection to the process itself is still analog. There is,
however, the possibility of a digital link, which can carry more information, be more
accurate, and be less vulnerable to signal conversion errors.
Such a link also induces less risk, allows for greater fault tolerance, and enables more
direct links between distributed control systems (DCS) and interconnections with PLCs.
These advantages create new opportunities to supervise the control mode of a factory.
Proper use of fieldbuses will thus offer the benefits of installation with less wiring, improved
remote configuration, more automated documentation, and compatibility with multiple
vendors.
In addition, they will offer the operational benefits of improved accuracy and precision,
better and safer control, more information, improved reliability and safety, and less
downtime. This also leads to maintenance gains, including increased reliability, low-cost
equipment replacements, automated documentation, and remote diagnostics.
The most important thing is that they require a common standard for a large number of
people and companies whose interests are not the same. As a result, there are political
problems between providers, each of whom often has several different types of information
to pass through the communication system.
For example, a supplier of a simple thermocouple input would have to bear the cost of
very expensive communication software just to coexist on the same network with an
intelligent transmitter. A complete communication system requires many software
function blocks to allow such a variety of sophisticated products to coexist. With current
technology, this can make a simple temperature transmitter expensive.
There are many other problems. So, how much information will have to pass on this
fieldbus?
Some providers and their users need to exchange considerable amounts of information,
which increases the costs for those with fewer needs.
Other providers feel that they only need a small amount of information, which reduces
costs but also flexibility. All these questions appear in committee and, as is often the case,
there are many discussions and feedback before arriving at a common and unified global
idea (which would apply in Asia, Africa, Europe and the Americas).
A few controllers can connect directly to PCs to allow for less expensive configuration of
I/O and operator interfaces. Some even have a PC in the same card rack as the I/O
modules (next figure).
In the late 1990s, some of the major vendors distributed PC-based products, which are
claimed to be open when in fact they do not allow the migration path to later versions that
one might expect, due to their design technology, nor do they offer sufficient connectivity
to their existing large DCSs.
The next step in the migration would be the ability to link a PC to the controller(s)
backplane bus for local operation of a larger unit processing.
Beyond that, we could imagine a proprietary network on a more local control room, with
several PCs (next figure).
IEEE 802.3 - Ethernet LAN standard (10 Mbps); standard physical layer, CSMA/CD
access method over a bus-structured LAN.
IEEE 802.4 - token-bus LAN standard physical layer; almost identical to the MAP
protocol.
IEEE 802.6 – the metropolitan area network (or high-speed local area network) standard.
More sophisticated features can be derived from more powerful (and more expensive)
workstations already built from one of the many UNIX variants (figure below).
Windows NT is starting to take its place here, but UNIX is not about to be left behind!
Since it is possible to use X terminals, more displays can be added in the control room,
which allows for more meaningful backups. X terminals are essentially displays whose
processing runs on the workstation server.
Server - A processor that provides the network with a specific service, such as a routing
function, and acts as a common source of data, memory, or functions to be shared with
multiple devices that request them.
DCS systems communicate over an Ethernet network with the computers used by the
operators, often referred to as 'OS' (Operator Systems), all communicating via gateways
and network cards.
Each PLC communication card and gateway port are addressed with an IP address.
7.1. FIELDBUS
7.1.1. Architecture
In this architecture, which is increasingly used, you can see that all the measuring
instruments and actuators are wired to remote inputs/outputs (in this example, ABB
remote I/O).
These remote I/O modules exist either in a standard version or in a version for Ex
hazardous areas. They make it possible to combine all the networked instrumentation
with a DCS, which saves considerable wiring time.
These Profibus PA I/O modules are interconnected by a Profibus cable (see the
accessory courses in instrumentation). Above all, do not forget to install a termination
resistor at the end of the line; otherwise your DCS will cyclically report a network outage,
and none of the I/O wired on that network bus will work.
This type of coupler is interconnected on the bus in RS485 using 9-pin D-SUB
connectors. The same principle applies to networking on/off inputs with your control
system; the only difference is that a DP/DP coupler is used.
For long distances between remote I/O module and the DCS system, it is preferable to
install fiber optic cables because in Profibus you are limited to a maximum distance of 300
meters.
On the same network, you can also add I/O in case of revamping, but be careful with the
limit on the number of I/O per network bus, which depends on the supplier (you may
also have to add RS485 repeaters).
But you must be thinking, how do I view all this information in the control room?
7.2.1. Introduction
A DCS control system is necessarily associated with industrial computers that allow the
visualization and control of all the installations of an oil site. Two servers are involved:
the Operator System, known as 'OS' (the industrial PCs that operators use to
manage the proper functioning of their production facilities);
the Engineering System, known as 'ES' (the industrial PCs used to program the
DCS).
These are two highly performing industrial PCs with dual rack hard drives, this allows in
case one hard drive fails that the other takes over.
They are generally located in the instrumentation technical rooms. They are fully redundant and connected to the microprocessor cards of each DCS controller through their network cards (which are identified by MAC address, for security and to avoid conflicts with other cards).
As you can see in the photo, the server PCs of a DCS system are 19-inch rack units, often installed in technical-room cabinets with a screen, keyboard and mouse so that you can work on the servers directly.
These servers store all the histories, all the alarms, all the DCS programs and all the synoptic views. They therefore serve all the OS client PCs as well as the ES workstations.
Most of the time, you have a DVD burner that will allow you to save the entire configuration
of your driving system. It is important to make backups regularly.
Figure 48: Example of a PC used by operators (QWERTY keyboard)
On this kind of PC, you only have access to synoptic views, alarm lists, histories, control loops with controller faceplates, etc.
Client OS stations are usually located in the control room. They are interconnected on an Ethernet network associated with the DCS system.
The client OS PCs run graphical software installed with an appropriate license. They actually fetch all the synoptic views saved on the servers, so you can conclude that if your servers stop working, the operator will no longer be able to see anything on the screen.
The client OSes fetch all the information necessary for their operation from the master server; if the master server goes down, they automatically fail over to the slave server, with a brief interruption of the synoptic views and all associated information during the failover.
The Engineering System PCs have the same characteristics as the client OS PCs, except that from an ES station you can make modifications or complete projects. It is a configuration PC that allows you to modify programs, add synoptic views, etc.
If you work on a project, you will certainly have the opportunity to use the graphical interface; depending on the manufacturer, well-developed libraries are available nowadays. Almost all the symbols of the ISA standard are integrated into the library, along with small useful tools added by each manufacturer. Here are some examples of tools from the SIEMENS graphics designer software library:
The ES PC also allows you to compile and load all your changes into each of the associated servers. Do not forget this step: if you only save your changes on the ES PC and not on the servers, then when a problem occurs on the servers, all your changes will be lost.
On this synoptic view you can see in orange and yellow the process piping as well as
symbols of control valves, on/off valves, flow, pressure, level measurements, etc.....
Most of the time, valves are shown in green when open and in red when closed.
The banner at the top of the synoptic view, with buttons 1, 2, 3 and 4 in this example, is used to chain views: when you click on one of these 4 buttons, you go directly to the desired view.
The black banner at the bottom of the synoptic view is most often a banner used to display
all alarms. Depending on the severity of the alarm (priority), you can acknowledge them.
As we just said, the alarms are displayed in the banner at the bottom of the synoptic view. You can also see that buttons 1 and 2 in the top banner are yellow: this means that views 1 and 2 contain priority alarms corresponding to the yellow color.
As you may have guessed, control loops also appear on a synoptic view. Depending on the settings, you just have to click on a control valve or on a displayed measurement for the corresponding controller to appear, as in the figure below.
Here, clicking on the bar graph of the separator level measurement displays the controller, and you can then do what you want, for example:
Switch the controller to manual and set the controller output to 50% to open the valve and thus reduce the level in the separator.
Display the actual trend of the measurement together with the controller valve output to check the PID actions of the controller.
Etc.
You can also start or stop a pump remotely as shown in the figure below.
Also, thanks to this faceplate, you can see the faults that prevent you from starting the pump (for example: pressure switch, limit switch, closed valve, etc.).
You can control an entire industrial process with the help of the DCS, which is becoming
more and more scalable.
I advise you to take a training course with the supplier of your DCS: it will really help you master many details of the DCS and intervene quickly in case of problems.
The process controllers, PLCs, Fire and gas and ESD are interconnected on switches via
their communication board and then connected to the servers.
7.3.1. Example architecture of the DCS Freelance 2000 from the supplier ABB
Profibus-DP
Modbus
Fieldbus Foundation
The control system uses the fieldbus network to connect the measuring instruments to the controller's input/output boards.
7.3.3. Conclusion
There are a large number of types of field networks. A distinction must be made between fairly generalist, classic networks, such as Profibus DP with optical fiber or an RS485 link as the medium, and networks more specifically dedicated to instruments in explosion-hazard environments under the IEC 1158-2 standard.
It should not be thought that a field network is tied to a particular DCS; on the contrary, you can generally install several different interface cards on the same DCS.
Each manufacturer naturally has its own preferences, but changes are rapid.
For example, in 2000, Endress+Hauser, a Profibus user, began offering Fieldbus Foundation interfaces for its instruments; similarly, Fisher-Rosemount, a Fieldbus Foundation user for its "Delta V" system, is marketing a Profibus-format card.
We can mention two very well-known networks, which we will briefly study in the following chapters: HPIB (IEEE 488) and RS 232. However, as the growing performance of electronics drove the development of computing as we know it, these networks quickly became outdated; although the RS 232 and HPIB standards are still used today, their field of use should be put into perspective.
ETHERNET is the most widely used network in the industrial sector. However, there are
many applications where ETHERNET is unusable, for example, the simple transfer of
information in real time between PLCs and computers, or even worse between sensors
and a PLC. It is here that the new local industrial networks appear.
When using a network, the aim is generally to find a simple and transparent approach to
communications, a reduction in manufacturing costs (as cabling decreases and
development costs are often reduced) and, above all, a standard that is available for all
industrial manufacturing applications.
However, this is where the main problem with local networks lies, the multiplication of
standards often makes the selection complex, only the application and the available
hardware count. Indeed, PLC manufacturers are often also local network manufacturers,
hence the perfect compatibility of the machines, and by extension the inadequacy for the
machines of other brands.
Fortunately, some systems seem to be becoming de facto standards, i.e. standards by force of circumstance. In this little game there will be few winners, but it seems that CAN has become the standard for embedded networks (in the automotive and aeronautics industries). The fact remains that the future is unclear.
It is therefore to be hoped that, as in computing, the next few years will see the appearance of not one, but a set of standardized networks accessible to all manufacturers.
DATABUS: networks responsible for transferring large amounts of data over long
distances (+1000km), without any notion of real time.
DEVICEBUS: local networks that transmit small streams of data over short distances
(100m) in real time.
SENSORBUS: local networks that only transmit events over short distances
(from 10 to 100m) in real time.
As we have seen, there are many problems in transmitting information. So we tend to use
proven methods, inherited from telecommunications. But technological evolution has
severely challenged this uninnovative vision of things.
Before studying the methods used to transmit a signal in baseband or in shifted band (we
will see these terms later), it is necessary to specify how a link is made between two
machines, near or far.
As we have seen previously, noise is a great enemy of transmissions. We will see in the
following chapter that it is neither the only enemy, nor irreversible.
Connecting two machines together seems simple at first. There are three ways to make the connection:
Connection using a single wire,
In the case of a single-wire transmission, we rely on the fact that the ground is the same for both machines, which can only be true over extremely short distances (on the same electronic board, for example).
This type of link cannot be used over long distances. For example, considering that ground and earth are connected, if such wiring were built between Europe and the United States, the potential difference between the two earths could exceed 100 V.
The immediate temptation, to solve the problem of single-wire links, is to add a second connection, to ground, both to eliminate the potential difference between the two ground connections and to protect the signal, for example by acting as a shield.
This technique has advantages (e.g. data protection) and many disadvantages such as
ground interference (when the link is long), the difference in intercontinental potential
which can induce strong currents in the wires or simply the risk of lightning on the line
which could destroy both the transmitter and the receiver.
In the event of a disturbance, the two wires of the link are affected by the disturbance
almost at the same time and with almost the same noise power.
If at the start of the line we have an information signal of amplitude 2A, then at the end of the line we have, on the first wire, the data "half signal" (A) plus noise (B), and on the other wire the same "half signal" but inverted (−A) plus the same noise (B).
Subtracting the signals of the two wires, we get the output S:
S = (A + B) − (−A + B) = A + B + A − B = 2A
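The subtraction above can be sketched in a few lines of Python; the noise amplitude chosen here is an arbitrary illustration:

```python
# Sketch of differential (two-wire) noise rejection: the transmitter
# drives +A on one wire and -A on the other; noise B couples almost
# identically onto both wires; the receiver subtracts the two wires.
import random

A = 1.0                                # half-amplitude of the signal
for _ in range(5):
    B = random.uniform(-0.5, 0.5)      # common-mode noise on both wires
    wire1 = +A + B
    wire2 = -A + B
    S = wire1 - wire2                  # receiver-side subtraction
    assert abs(S - 2 * A) < 1e-12      # the noise cancels: S = 2A
print("recovered amplitude:", S)
```

Whatever the noise sample, the common-mode term B disappears in the subtraction and only the 2A information term remains.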
As the ground is no longer transmitted, the information is floating, without a reference. At the receiving end, the subtraction of the two signals is carried out with the local ground reference, which fixes the potential of the signal at its final destination. There is no longer any problem of potential difference between grounds.
This type of link is used in a wide range of medium to large scale networks (ETHERNET) and in networks subject to extremely noisy environments (CAN).
Now that we know the techniques for interconnecting two machines for transmission, we
need to look at the changes that need to be made to the data to allow them to flow on the
line.
We have seen that differential transmissions use transformers. It is therefore essential that
the signals transmitted have a zero continuous component, which is not the case with
classical "binary" signals. It will therefore be necessary to modify these signals in order to
be able to carry out these transmissions, but also to limit the spectrum of signals and thus
try to increase the number of channels in a link.
To transmit several signals in a line, analog methods have long been used: modulations.
Now that computers and digital systems are operating at high speeds, there has been a
natural turn to digital transmission techniques.
On this basis, systems capable of compacting the signal into increasingly small frequency ranges have been studied; these methods are grouped under the name of baseband transmission.
The principle of baseband transmission is to modify the signal spectrum without shifting it into another frequency domain, by playing with amplitude parameters or by combining different signals.
8.1.2.1. Polarities
A code is said to be unipolar when the coding of the information involves an electrical level
(in addition to ground). A code is said to be bipolar when the coding of the information
involves two electrical levels to encode the information (the ground can be used as a third level). Bipolar codes usually eliminate the DC component of the transmitted signal.
Examples:
Transmission systems also play on the way a signal is encoded in terms of evolution over
time, rather than voltage level. The code we call the binary ('0' = 0V and '1' = +5V) is
actually a unipolar code (the voltage level), NRZ (the organization in time).
NRZ stands for Non Return to Zero, i.e.: which does not change state during the duration
of a bit.
This is actually the opposite of the RZ (Return to Zero) code which automatically creates a
return to zero state for the duration of a bit.
Example:
We can also talk about NRZI (Non Return to Zero Inverted) codes. These codes change the level of the encoder's output signal in response to a given level of the incoming signal. For example, the NRZI-S (Non Return to Zero Inverted on Space) code changes the output signal state whenever the bit presented at the encoder input is '0' (zero being considered a space).
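The NRZ, RZ and NRZI-S behaviours described above can be sketched as small encoders. Two samples per bit (so the mid-bit return of RZ is visible) and a toggle-at-the-start-of-the-bit convention for NRZI-S are assumptions of this illustration:

```python
# Illustrative line-code encoders, two output samples per input bit.

def nrz(bits):                 # NRZ: level held for the whole bit
    return [s for bit in bits for s in (bit, bit)]

def rz(bits):                  # RZ: '1' = pulse, then return to zero
    return [s for bit in bits for s in (bit, 0)]

def nrzi_s(bits):              # NRZI-S: toggle output on every '0' (space)
    level, out = 0, []
    for bit in bits:
        if bit == 0:
            level ^= 1         # state change when the input bit is '0'
        out += [level, level]
    return out

data = [1, 0, 1, 1, 0]
print(nrz(data))
print(rz(data))
print(nrzi_s(data))
```

Note that the NRZI-S output carries information in its transitions, not its levels, so inverting the whole line does not corrupt the data.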
The main problem with asynchronous codes is precisely present in their names, and that is
the lack of a clock.
The respective clocks of the transmitter and receiver are by definition not the same, they
beat at similar but different rates. It is therefore necessary to succeed in resynchronizing
the two machines regularly.
It is the edges of the data signal that are used to realign the receiver's clock with that of the transmitter. However, the longer the data transmission, the greater the probability of a long "blank", i.e. an absence of edges.
This is why it is extremely rare to use asynchronous codes without having small messages
to transmit (even if there are many small successive messages). As soon as the
transmission requires the transfer of a large amount of data, we tend to use synchronous
codes.
The principle of synchronous codes lies in the mixing of clock information with the data
signal to compose a signal that has both an easily usable clock and from which the data
can be easily extracted.
This code is widely used in networks; it is a biphase, bipolar code. The principle of Manchester coding is to translate a '1' as a falling edge and a '0' as a rising edge. This is done using an exclusive-NOR (XNOR) logic function between the clock and the data signal.
This signal may seem quite complicated to analyze, but it is actually very simple. At every odd multiple of T/2 (T/2, 3T/2, 5T/2, ...), there is systematically an edge. If this edge is rising, it is a logical '0'; if it is falling, it is a logical '1'. Occasionally, there are edges after an even number of half-periods. This happens when two consecutive bits have the same value: indeed, in order to have two falling edges, there must be a rising edge between them.
The clock can only be regenerated by eliminating the transitions at multiples of T and keeping only the edges at T/2. Originally, a system based on non-retriggerable monostables was used (before digital techniques replaced it) to eliminate the edges at T. A PLL was then used to rebuild, from the signal extracted from the monostables, a clock of the same frequency and in phase with that of the transmitter. This clock was then used for data recovery.
The "slowness" of the PLL synchronization then required long bursts of data to allow the
receiver to stall. For Ethernet (which uses Manchester II encoding), there are 7 bytes used
as a burst.
The Miller code is derived from the classic biphase code; it responds to the need to transmit a signal in a narrow band. To produce a Miller code, the biphase code is passed through a flip-flop wired as a divide-by-two.
The HDB3 code is the code used in telephony (not at the subscriber level, but at the exchange and international level). HDB3 is a bipolar, return-to-zero, alternating code (bits at '1' are alternately positive and negative), and asynchronous. It allows extremely long frames to be transmitted without any desynchronization between sender and receiver.
The principle used here is called bit stuffing, it consists (in the case of HDB3 code) of
substituting bits at '0' by bits fictitiously at '1'. This is called substitute bit stuffing. In the
case of other networks, there is no substitution, but the addition of a bit to '1'. This is called
additive bit stuffing.
In the HDB3 code, if a series of more than three consecutive zeros appears on the data line (hence the 3 in HDB3), the fourth '0' is deliberately substituted by a '1' called the violation bit.
So that a violation bit can be immediately distinguished from a genuine '1' bit, the violation bit always has the same polarity as the last '1' bit (whether a true '1' or a stuffing bit). This bit violates the alternation rule.
However, this non-compliance with alternation could, in the case of a very long series of '0's, lead to the appearance of a DC voltage (the violation bits all having the same polarity). To compensate, so-called balancing bits are introduced to force alternation.
The balancing bits follow the alternation rule. They are inserted into the transmission if the total number of '1' bits since the last violation bit is even. Thus, in the case of a long series of '0's, the alternating balancing bits eliminate the DC component.
Decoding this system is simple: identify the violation bits (easy, since they do not respect the alternation rule), then eliminate the stuffing bits, knowing that a violation bit is preceded by a series of three '0's...
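The substitution rules above (a violation bit in place of the fourth '0', plus a balancing bit when the number of pulses since the last violation is even) can be sketched as an encoder. The initial polarity and counter state are assumed conventions here, and real HDB3 equipment may differ in those details:

```python
# Sketch of an HDB3 encoder: output symbols are +1, -1 or 0.

def hdb3(bits):
    out = []
    last = -1      # polarity of the last pulse sent ('1', B or V)
    pulses = 0     # pulses sent since the last violation bit
    zeros = 0      # current run of zeros
    for b in bits:
        if b == 1:
            last = -last              # AMI rule: alternate polarity
            out.append(last)
            pulses += 1
            zeros = 0
        else:
            zeros += 1
            if zeros == 4:            # fourth '0': substitute
                if pulses % 2 == 0:   # even pulse count: B00V
                    out[-3:] = [-last, 0, 0, -last]
                    last = -last      # B becomes the last pulse; V copies it
                else:                 # odd pulse count: 000V
                    out[-3:] = [0, 0, 0, last]
                pulses = 0
                zeros = 0
            else:
                out.append(0)
    return out

print(hdb3([1, 0, 0, 0, 0, 1, 0, 0, 0, 0]))
```

Successive violation bits come out with alternating polarities, which is exactly how the balancing rule removes the DC component described above.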
Shifted band transmission consists of processing information to change its frequency and
thus move its spectrum to a given location. This operation is called modulation.
Indeed, if several people "speak" at the same time in the same place, it is quite difficult to extract the words of one from the words of another. In radio, a large number of transmitters "speak" at the same time but, fortunately for our ears, thanks to modulation, not on the same frequencies.
In addition, the voice does not transmit well over long distances (the environment tends to
attenuate it very quickly). Modulation techniques therefore make it possible both to find the
frequency where the medium is most conducive to transmission, and at the same time to
allow several transmitters to speak at the same time.
Although relatively little used in the world of wired networks, modulations have recently
made a remarkable entry into the world of networks with the appearance of wireless
technologies.
There are two families of analog modulations. One causes the amplitude of a high-
frequency signal to evolve as a function of another signal containing the information. This
is called amplitude modulation.
The other no longer varies the amplitude, but the phase, or frequency, of the carrier signal
as a function of the informative signal. This is called angular modulation.
Amplitude modulations are generally unsuitable for logic signals because they do not transmit an edge properly (we have seen that the fundamental element of a digital signal transmission is the edge of the data signal). In addition, amplitude modulations are very sensitive to noise.
Angular modulation is much more complex to implement, but it offers, on the other hand,
many advantages such as good noise immunity or the ability to transmit fairly steep edges.
They are therefore much more widely used than amplitude modulations to transmit digital
signals.
Amplitude modulations are mainly used in so-called telluric wave ranges (frequencies that
propagate by ground effect over very long distances).
On the one hand, there is an informative signal called the modulating signal (f) and a "high frequency" signal called the carrier (p). At the output of the modulator, a modulated signal (s) is retrieved.
Mathematical expressions:
f(t) = F·cos(ωf·t), p(t) = cos(ωp·t), s(t) = [A + f(t)]·cos(ωp·t)
Knowing that:
cos(a)·cos(b) = ½·[cos(a − b) + cos(a + b)]
and defining the modulation index as m = A / F, the spectrum of s contains the carrier frequency ωp and the two side frequencies ωp − ωf and ωp + ωf.
If we now consider that the informative signal is no longer a sinusoid but a polychromatic signal (made up of a spectrum of several lines), and that the Fourier transform of the signal f(t) is F(ω), then we find the expression for S(ω):
S(ω) = π·A·[δ(ω − ωp) + δ(ω + ωp)] + ½·[F(ω − ωp) + F(ω + ωp)]
Amplitude modulation with carrier allows the informative signal to be transmitted while
maintaining an image of the carrier. The spectrum of amplitude modulation with carrier is
as follows:
Amplitude modulation with a carrier can be achieved by inserting a DC component into the information signal.
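The role of the index can be checked numerically; this minimal sketch assumes the envelope of the modulated signal is A + F·cos(ωf·t) (A the inserted DC component, F the amplitude of the informative signal) and the index convention m = A/F used in this text, so the envelope minimum is A − F:

```python
# Envelope check for AM with carrier: with m = A/F, the envelope
# A + F*cos(wf*t) stays positive exactly when m > 1, so envelope
# detection works; below 1, synchronous detection is required.

def envelope_min(A, F):
    return A - F          # minimum of A + F*cos(...), reached at cos = -1

for A, F in [(2.0, 1.0), (0.5, 1.0)]:
    m = A / F
    verdict = "envelope detection OK" if m > 1 else "synchronous detection needed"
    print(f"m = {m}: envelope minimum = {envelope_min(A, F)} -> {verdict}")
```

With m = 2 the envelope never crosses zero; with m = 0.5 it dips below zero and the envelope detector would rectify the dip, distorting the recovered signal.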
Now considering the absolute value of m, we can describe three types of modulation:
When performing carrier modulation, only modulation indices m greater than 1 are used.
Below 1, the informative signal falls below zero volts, envelope detection can no longer be
used for signal demodulation and only synchronous detection can demodulate the signal.
However, this type of demodulation is also the one used for carrierless modulations; in that case, why consume energy transmitting a carrier that is not physically needed?
The temporal representation of the signals is then variable, depending on m. In the case
of over-modulation (index of m greater than 1), the informative signal is always greater
than 0.
However, while the main advantage of this type of modulation lies in the simplicity of
demodulation by envelope detection, their main drawback lies in their very low
transmission efficiency (generally less than 50%). As soon as a coherent demodulation is
necessary, amplitude modulations without carriers are preferred.
Carrierless modulations save the power otherwise supplied to the carrier, power which carries no information and greatly reduces transmission efficiency. In a carrierless amplitude modulation, however, we are faced with the need to reconstruct a carrier in order to demodulate the signal.
Moreover, a reversal of the carrier phase occurs regularly (each time the informative signal passes through zero volts). The difference between modulations with and without carrier can easily be seen in the time-domain representation.
The additional cost generated by the demodulation of this signal places this modulation in
the same price range as angular modulations, which are, as we will see later, much more
resistant to disturbances. There is therefore a tendency not to use carrierless amplitude
modulation, in favor of angular modulations.
Reduced-band modulations are fairly recent transmission techniques. The best known is single-sideband modulation (SSB, "BLU" in French). It has a modulation efficiency of 100% (all the energy is used by the information signal).
Amplitude modulations are completely absent from the world of industrial local networks, as the edges of digital signals are poorly suited to amplitude modulation. Their use is generally limited to low-quality or very long-distance transmissions. In addition, their high sensitivity to external disturbances makes them risky.
Indeed, they do not protect the transmitted signals, and even make them more sensitive to noise: the slightest noise can distort the envelope of the modulated signal. Angular modulation is therefore generally used when digital data must be transmitted. It should also be noted that in the field of industrial local networks (ILRs), the use of modulation (angular or amplitude) is extremely rare.
The other method to shift the spectrum of the informative signal is to change the instantaneous
phase or frequency of a carrier signal according to the amplitude of the informative signal.
Angular modulations are widely used for digital signals because they allow good edge
transmission as well as good noise immunity (the amplitude of the transmitted signal does
not contain any information).
The FSK
FSK (Frequency Shift Keying) acts on the frequency of the carrier. Its modulation index can be written m = (f2 − f1) / R, where R is the maximum frequency of the data signal and f1 and f2 are the carrier frequencies for a logical '0' and a logical '1' respectively.
The aim is generally to produce modulation indices fairly close to 0.66 (optimum spectral
occupancy).
However, the transmission channel used is often quite narrow, which means that f1 and f2 are imposed and often close to each other. We therefore play with the maximum frequency of the information signal to obtain the desired index.
It should be noted that the spacing between f1 and f2 should neither be reduced too far, because the two values become difficult to discriminate, nor increased too far (when possible), to avoid excessive spectral occupancy.
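Assuming the index is defined as m = (f2 − f1)/R, as above, choosing R to hit the 0.66 target can be sketched as follows (the channel frequencies are hypothetical values used only for illustration):

```python
# FSK modulation index: m = (f2 - f1) / R, aiming for roughly 0.66.
# With f1 and f2 imposed by the channel, R is the free parameter.

def fsk_index(f1, f2, R):
    return abs(f2 - f1) / R

f1, f2 = 1200.0, 2200.0        # hypothetical channel frequencies
R = (f2 - f1) / 0.66           # choose R to reach the target index
print(round(fsk_index(f1, f2, R), 2))
```

This mirrors the reasoning in the text: since the channel fixes f1 and f2, only the maximum data-signal frequency R can be adjusted to reach the optimum spectral occupancy.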
Here again, we must distinguish between two families of FSKs, on the one hand the
continuous phase FSKs, i.e. without phase jumps when moving from f1 to f2 (and vice
versa), or the discontinuous phase FSKs where phase jumps can take place.
As shown in the previous examples, continuous-phase FSKs have a much better spectrum
(reduced width) than their discontinuous-phase equivalents. Of course, continuous-phase
FSKs are much more complex to perform than discontinuous-phase FSKs.
The PSK
PSK (Phase Shift Keying) is another form of angular modulation. It acts by phase jumping.
Again, there are a large number of PSKs that use different techniques to translate the
transmitted numerical code. Some standard PSKs encode the logical '1' as a signal with a
phase of 180° and the '0' as a signal with a zero phase. These are absolute phase values.
Other PSKs (DPSK, for differential PSK) encode the phase relative to the previous value. Thus a logical '1' and a logical '0' correspond to phase jumps of +90° and −90° respectively with respect to the previous phase of the signal.
There are also PSKs that encode several bits at the same time. For example, with two bits,
we will establish four phase shifts according to the value of the two bits. For example, the
"00" corresponds to a zero phase shift, the "01" to a 90° phase shift, etc.
Be careful: the more bits the PSK encodes at a time (thereby increasing the link throughput), the harder it is to discriminate between two phases, and therefore the more complicated the demodulator is to build.
Below, after the clock, we will find another form of modulation, PWM; as it is not used in the transmissions discussed here, we will not dwell on it.
In this chapter, we will continue to study the framework of data transmission: how data is prepared for transmission, and how the quality, speed or quantity of the data transmitted can be improved by appropriate coding.
Information coding is used in transmissions, both baseband and shifted-band, to guard against transmission errors. Since an error is never acceptable, it is necessary to provide, in addition to the data, a code that allows errors to be detected in the worst case and corrected in the best case.
The simplest code for transferring information is parity coding. This method, the most
basic, was one of the first to be used. Its principle is quite simple, it involves adding an
extra bit to the data transmitted. In the case of an even parity, this bit allows the message
to be composed of an always even number of bits at '1', and in the case of an odd parity,
an always odd number of bits at '1'. This encoding is achieved using the exclusive-OR (XOR) function for even parity and the exclusive-NOR (XNOR) function for odd parity.
During decoding, the same logic function as for encoding is used to validate the message. For example, with even parity, an XOR is computed over the n bits of the message and the parity bit. If the result is '0', the message is considered valid; if it is '1', it is deemed false. This result must be put into perspective, however, as a parity code can only detect an odd number of errors: it cannot detect two errors. Simple parity codes are therefore considered to detect only one error.
We can also look at the number of control bits transmitted per message. In the case of a
simple parity code, it is generally considered dangerous to exceed a ratio of 1/8, i.e. one
control bit for eight bits of data.
At reception, the transmitted message is retrieved and a longitudinal and transverse parity check is applied to it once again. If an error has crept into the code, one longitudinal parity and one transverse parity are invariably set to '1'. This means that there was an error in the transmission.
In our example, when we check the check values, we see that two check codes are invalid: one on a row, the other on a column. We therefore know the exact location of the error. This code makes it possible to correct the error, since it lies at the intersection of the faulty row and the faulty column. However, if there are two or three errors on reception, there will still always be at least one transverse or longitudinal parity bit at '1'. Up to three errors can therefore be detected, but only one error can be corrected.
The ratio of control bits to data bits transmitted increases (in our example) from 1/5 (0.2) for simple parity to 11/25 (0.44) for compound parity.
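The row/column location mechanism can be sketched on a small block of data; the block contents here are arbitrary:

```python
# Longitudinal (row) + transverse (column) parity: a single flipped bit
# changes exactly one row check and one column check, which locates it.

def parity_2d(rows):
    row_par = [sum(r) % 2 for r in rows]
    col_par = [sum(c) % 2 for c in zip(*rows)]
    return row_par, col_par

block = [[1, 0, 1, 1],
         [0, 1, 1, 0],
         [1, 1, 0, 0]]
rp, cp = parity_2d(block)               # checks computed at transmission

block[1][2] ^= 1                        # inject one error in transit
rp2, cp2 = parity_2d(block)             # checks recomputed at reception
bad_row = [i for i, (a, b) in enumerate(zip(rp, rp2)) if a != b]
bad_col = [j for j, (a, b) in enumerate(zip(cp, cp2)) if a != b]
print("error at row", bad_row[0], "column", bad_col[0])
block[bad_row[0]][bad_col[0]] ^= 1      # corrected
```

One mismatching row check and one mismatching column check intersect at the faulty bit, which can then simply be inverted.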
Redundant encodings increase the number of bits in the message to allow possible errors to be detected and possibly corrected. The most famous is the HAMMING code; although practically abandoned today, it was the code of the first digital transmissions.
Example:
Let's imagine that we are trying to transmit a 4-bit word. The HAMMING code tells us that
for n bits of information, we need k control bits to correct an error.
We can express k in terms of n using the following formula: 2^k ≥ n + k + 1
By iteration, we can solve this equation. For n = 4, we have k = 3. This means that we will transmit 7 bits (m = k + n).
Now we have to encode the k bits for transmission. To do this, we tabulate all the error possibilities in the message. For a message of m bits, there are m + 1 error possibilities. This table has k (= 3) columns; hence the previous formula (m + 1 = n + k + 1, and 2^k is the number of rows identifiable with k columns).

Error on bit | e3 e2 e1
None         | 0  0  0
1st bit (m1) | 0  0  1
2nd bit (m2) | 0  1  0
3rd bit (m3) | 0  1  1
4th bit (m4) | 1  0  0
5th bit (m5) | 1  0  1
6th bit (m6) | 1  1  0
7th bit (m7) | 1  1  1

We now extract the equations for e1, e2 and e3, and we find:
e1 = m1 ⊕ m3 ⊕ m5 ⊕ m7
e2 = m2 ⊕ m3 ⊕ m6 ⊕ m7
e3 = m4 ⊕ m5 ⊕ m6 ⊕ m7
If we look closely, we realize that the terms m1, m2 and m4 appear only once in the
equations. We can therefore make them (to simplify the equations) the Hamming coding
bits.
m7 m6 m5 m4 m3 m2 m1
n4 n3 n2 k3 n1 k2 k1
To finish our encoding, we still have to fill in the table, i.e. find the values of k1, k2 and k3
which make e1, e2 and e3 zero. Indeed, as the message is not yet sent, it is not supposed
to contain any errors.
k1 = m1 = m3 ⊕ m5 ⊕ m7
k2 = m2 = m3 ⊕ m6 ⊕ m7
k3 = m4 = m5 ⊕ m6 ⊕ m7
m7 m6 m5 m4 m3 m2 m1 | n  | k
0  0  0  0  0  0  0  | 0  | 0
0  0  0  0  1  1  1  | 1  | 3
0  0  1  1  0  0  1  | 2  | 5
0  0  1  1  1  1  0  | 3  | 6
0  1  0  1  0  1  0  | 4  | 6
0  1  0  1  1  0  1  | 5  | 5
0  1  1  0  0  1  1  | 6  | 3
0  1  1  0  1  0  0  | 7  | 0
1  0  0  1  0  1  1  | 8  | 7
1  0  0  1  1  0  0  | 9  | 4
1  0  1  0  0  1  0  | 10 | 2
1  0  1  0  1  0  1  | 11 | 1
1  1  0  0  0  0  1  | 12 | 1
1  1  0  0  1  1  0  | 13 | 2
1  1  1  1  0  0  0  | 14 | 4
1  1  1  1  1  1  1  | 15 | 7
On reception, one simply recalculates the values of e1, e2 and e3 to determine whether there was an error and where it is. We are then able to transmit a message while detecting errors and correcting them.

Suppose, for example, that the received code is 1100101 (read m7 ... m1):

e1 = m1 ⊕ m3 ⊕ m5 ⊕ m7 = 1 ⊕ 1 ⊕ 0 ⊕ 1 = 1
e2 = m2 ⊕ m3 ⊕ m6 ⊕ m7 = 0 ⊕ 1 ⊕ 1 ⊕ 1 = 1
e3 = m4 ⊕ m5 ⊕ m6 ⊕ m7 = 0 ⊕ 0 ⊕ 1 ⊕ 1 = 0
We therefore know where the error is: e3 e2 e1 = 011 indicates an error on the 3rd bit. We can thus correct the received code, write that the error-free code is 1100001, and conclude that the message is 1100.
With this Hamming code, we know how to correct an error on one bit, but we are also able, without correcting them, to detect up to two errors.
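As an illustration, the whole encode/decode cycle of the example above can be sketched in a few lines of Python (the function names are ours; the bit layout m1 ... m7 is the one used in the tables):

```python
def hamming_encode(n4, n3, n2, n1):
    """Encode 4 data bits into a 7-bit Hamming codeword.
    Data bits occupy m7, m6, m5, m3; parity bits k1..k3 occupy m1, m2, m4."""
    m7, m6, m5, m3 = n4, n3, n2, n1
    m1 = m3 ^ m5 ^ m7          # k1, chosen so that e1 = 0
    m2 = m3 ^ m6 ^ m7          # k2, chosen so that e2 = 0
    m4 = m5 ^ m6 ^ m7          # k3, chosen so that e3 = 0
    return [m1, m2, m3, m4, m5, m6, m7]   # index 0 holds m1

def hamming_decode(word):
    m1, m2, m3, m4, m5, m6, m7 = word
    e1 = m1 ^ m3 ^ m5 ^ m7
    e2 = m2 ^ m3 ^ m6 ^ m7
    e3 = m4 ^ m5 ^ m6 ^ m7
    pos = 4 * e3 + 2 * e2 + e1        # binary e3 e2 e1 = position of the wrong bit
    if pos:
        word = list(word)
        word[pos - 1] ^= 1            # flip the erroneous bit back
    return [word[6], word[5], word[4], word[2]], pos   # data n4 n3 n2 n1

word = hamming_encode(1, 1, 0, 0)     # message 1100 -> codeword 1100001 (m7..m1)
word[2] ^= 1                          # corrupt the 3rd bit, m3
data, pos = hamming_decode(word)      # the syndrome points at bit 3
```

Running this recovers the message 1100 and reports the error position 3, exactly as in the worked example.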
As for the proportion of control bits, we can see that the larger n is, the more k increases, but not linearly. For example, while 3 control bits are needed to protect 4 data bits, 7 control bits are enough for 112 data bits.
Be careful, remember that the probability of error increases linearly with the number of
bits transmitted. For a 119-bit transmission, the risk of error is therefore 17 times
greater than for a 7-bit transmission. Also, for such a large amount of data, it would be
reasonable to use a more powerful Hamming code, i.e. capable of detecting more
errors.
It all starts with the elaboration of the polynomial form of the binary message. For example, 110100, i.e. 1·2^5 + 1·2^4 + 0·2^3 + 1·2^2 + 0·2^1 + 0·2^0, is written in the polynomial form x^5 + x^4 + x^2.
The principle is then to divide the message polynomial, multiplied by x^v, by a generator polynomial:

P(x)·x^v = G(x)·Q(x) + R(x)

where G(x) is a polynomial, defined by the network protocol, known to the sender and the receiver and of degree v; Q(x) is the quotient of the division of P(x)·x^v by G(x), and R(x) is therefore the remainder of the division.

Since G(x) is of degree v, R(x) is necessarily of degree strictly less than v. It is this remainder R(x) that will be transmitted, in addition to P(x), to the receiver.
In the end, the receiver knows R(x) (the remainder transmitted with the message), v (obtained by inspection of G(x)), G(x) (known by definition) and P(x) (the transmitted message). It then redoes the same calculation as the sender and compares the remainders of the two divisions. If there is a difference between the two remainders, we are sure that an error has crept into the transmission.
G(x) is defined by the transmission standard used. Recommendation V.41 of the CCITT (International Telegraph and Telephone Consultative Committee) defines G(x) as:

G(x) = x^16 + x^12 + x^5 + 1
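As a sketch, the modulo-2 division can be performed bit by bit in a shift register. The code below uses the V.41 generator polynomial; the function name and the example message are illustrative only:

```python
# Generator polynomial of CCITT Recommendation V.41: x^16 + x^12 + x^5 + 1
G, V = 0x11021, 16

def crc_remainder(bits, poly=G, v=V):
    """Remainder R(x) of the division of P(x).x^v by G(x), arithmetic modulo 2."""
    reg = 0
    for bit in bits + [0] * v:   # appending v zero bits multiplies P(x) by x^v
        reg = (reg << 1) | bit
        if reg >> v:             # degree v reached: subtract (XOR) G(x)
            reg ^= poly
    return reg                   # degree strictly less than v

msg = [1, 1, 0, 1, 0, 0]         # P(x) = x^5 + x^4 + x^2
r = crc_remainder(msg)           # transmitted along with the message
# Receiver side: redo the same division and compare the remainders
assert crc_remainder(msg) == r                    # identical -> no error detected
assert crc_remainder([1, 1, 0, 1, 0, 1]) != r     # a flipped bit changes R(x)
```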
And this is where we see the effectiveness of the CRC code: in the case of the CAN bus, for example, it makes it possible to identify several classes of transmission errors. This is why CRC coding is widely used on unreliable media, but also in most networks.
From the point of view of the proportions between the number of data bits and the number
of control bits, it can be seen that, for example, the Ethernet network uses a 32-bit CRC
code that allows it to ensure the security of its data frame, which can contain up to 1526
bytes, i.e. a ratio of 4/1526 (0.0026).
More generally, CRC codes are also used by data compression software (ZIP and RAR use a 32-bit CRC) to validate files that can reach several tens of megabytes.
However, it should be noted that the CRC codes we have presented to you do not allow
you to correct errors, nor do they ensure any confidentiality of communications despite
their great similarity to military or civilian encryption codes. There are still CRC codes that
correct errors, but we will not study them.
In the world of networks, codes are generally there only to detect transmission errors, even when they would be able to correct them. The transfer capacity of modern networks, and the sensitivity of the data, are such that it is preferable to retransmit a packet likely to contain errors rather than to introduce new ones through hazardous corrections.
8.1.6. Multiplexing
The goal of transmissions has always been to carry more data in a minimum of bandwidth. At present, with the proliferation of terrestrial networks and the rise of telephony, space on the media is cruelly lacking. However, the world of transmissions is governed by an inescapable law:

To transmit several signals on the same channel, you must be able to extract them all.

This means that the signals to be transmitted have to be shifted either in time or in frequency. This technique, called multiplexing, therefore operates along 2 axes: on the one hand the time axis, and on the other the frequency axis. In both cases, this amounts to cutting the authorized band into small intervals that will serve as so many communication channels.
Frequency multiplexing consists of dividing the frequency band available into intervals of the
size of the communication channel.
This technique was widely used in telecommunications, at a time when digital systems
were not as efficient as they are today.
The human voice, as imagined by telecoms, is a signal occupying a band from 300 to 3400 Hz (a somewhat restrictive vision, since speech is generally judged to occupy a band from 20 Hz to 20 kHz). Thanks to this restriction of the frequency band, and to frequency multiplexing, it is possible to place about 1800 simultaneous communications on the same line.
However, it is not possible to multiplex 1800 channels directly; it must be done in stages. First of all, the signal goes from the telephone set to the interconnection box of the building (or of the street, depending on the situation); up to that point the line is personal, you are the only one to use it. At the first interconnection station, your line is multiplexed into a 12-channel group.

This 12-channel line goes to the central exchange, where it is further multiplexed with other groups of the same size to create a 144-channel line. This channel is then joined to others of the same size to form a line of 1728 channels.
It should be noted, however, that as you are not the holder of a fixed frequency (you use the first frequency available at the building exchange), coding channels are systematically added to the communication channels; they make it possible to route every conversation to its respective recipient. Your line is thus combined with those of your neighbours who are communicating, and sent on a compound line.
Time multiplexing was created with the advancement of digital technologies. Less expensive than frequency multiplexing, which requires a plethora of very finely tuned carriers and SSB modulations (rather complex to achieve), digital multiplexing uses only one clock, coupled with high-frequency digital components.
It is therefore with the integration of digital components that we have been able to create
high-speed digital exchanges, thus paving the way for time multiplexing (it should be
remembered that it was in 1996 that the entire communication system of FRANCE
TELECOM switched to digital).
The principle of time multiplexing is to divide speech or data into small, simple pieces (samples), which are digitized to form digital data (packets), each transmitted in a short time interval (time slot). The bits of a packet emitted by another source are then attached behind them. All of these packets form a frame. In the next frame, the next packet is placed in the space allotted to us.

So we have a discontinuous link with our interlocutor. But if these small pieces of speech are emitted at a very high rate, the phenomenon becomes inaudible (this is the principle of sampling).
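The interleaving of packets into frames can be sketched as follows (a toy model with hypothetical packet labels, ignoring sampling and digitization):

```python
def tdm_multiplex(sources):
    """Build frames: frame i carries, in successive time slots, packet i of each source."""
    return [list(slots) for slots in zip(*sources)]

def tdm_demultiplex(frames, slot):
    """Recover one source's packets by reading the same time slot in every frame."""
    return [frame[slot] for frame in frames]

# Three hypothetical voice channels, already cut into packets
voices = [["A0", "A1"], ["B0", "B1"], ["C0", "C1"]]
frames = tdm_multiplex(voices)            # [['A0','B0','C0'], ['A1','B1','C1']]
assert tdm_demultiplex(frames, 1) == ["B0", "B1"]
```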
It should be noted that nowadays a third form of multiplexing has appeared: the technology used for UMTS systems, called power multiplexing. In the previous illustration, one axis was not exploited (the P axis, for power); it is along this axis that this multiplexing takes place.
The concept of this totally digital modulation is to mix the information to be transmitted
with a pseudo-random code at a very high frequency, and then to use correlation systems
to extract the information at the reception.
This technique is also used (but this time for protection purposes) in GPS.
FULL DUPLEX : Transmission method allowing 2 machines to talk to each other at the same
time (not necessarily on the same line).
HALF DUPLEX : A method of transmission that allows 2 machines to talk to each other on the
same line, but not at the same time.
SIMPLEX: Transmission method in which one machine, and only one, speaks on the line, with no possibility for another machine to speak.

BAUD: Transmission unit representing the number of signalling events transmitted in one second. It differs from the bit per second in that the latter applies only to binary signals: in the case of a 2-state signal, one baud is equal to one bit per second.
MULTI-POINT: Connection (also called distributed) using only one line to connect all the machines.
REAL-TIME: This notion is an important element in local networks. The notion of real time is in fact the ability of a system to respond within a given time. Take the example of an automatic drilling bench: there is usually a sensor that can detect a break in the drill. In the event of a breakage, it is necessary to react relatively quickly, either by diverting the parts to other drilling units or by stopping the line to allow replacement of the defective part. The information therefore has to reach the supervisory computer before a new part arrives, and the order responding to a drill break must be processed within a limited time. This time limit is the basis of the notion of real time.
NODES: The point at which a machine connects to the network. This term expresses the
number of elements connected to a network. Be careful, between 2 nodes, you can place
a repeater, it will not be counted since it is not active with regard to the transfer of
information.
BIG ENDIAN: The most significant (high-order) bit is transmitted first, the low-order bit last.

LITTLE ENDIAN: The inverse of BIG ENDIAN: the low-order bit is transmitted first and the high-order bit last.
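A minimal sketch of the two transmission orders (the function name is ours):

```python
def transmit_order(value, width, big_endian):
    """Bit order on the line for an integer of `width` bits."""
    bits = [(value >> i) & 1 for i in range(width)]   # index 0 = low-order bit
    return bits[::-1] if big_endian else bits         # big endian: MSB leaves first

# The 4-bit value 1100 (12 decimal) on the wire:
assert transmit_order(0b1100, 4, big_endian=True) == [1, 1, 0, 0]   # MSB first
assert transmit_order(0b1100, 4, big_endian=False) == [0, 0, 1, 1]  # LSB first
```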
RS232 is a way of making two machines communicate with each other, but in no case can the term network be applied to it. That is the pure truth but, as we have to start somewhere, let's start with this system, since I'm sure you've heard of it (perhaps under the name of serial link).

The RS232 standard allows 2 and only 2 machines to be connected to each other. Between these machines, the information to be transmitted circulates over a 3- or 9-wire serial link. The signals present on these wires, to which the ground should be added, are:
Pin   Signal                       Direction
1     DCD  Data Carrier Detect     Input
2     RD   Received Data           Input
3     TD   Transmitted Data        Output
4     DTR  Data Terminal Ready     Output
5     SG   Signal Ground           -
6     DSR  Data Set Ready          Input
7     RTS  Request To Send         Output
8     CTS  Clear To Send           Input
9     RI   Ring Indicator          Input
8.2.1.1. Presentation

The RS232 standard is a serial transmission protocol that allows full duplex, half duplex or even simplex transmissions. The transmitted signals are encoded in ±12 V, with logic "1" corresponding to a -12 V level. Communication is governed by hardware parameters that must be identical on both communicating machines:

The baud rate: it can be chosen, depending on the version of the standard, between 75 baud and several hundred kilobaud (typical value 9600 baud),
The type of parity control. It can be chosen from the following possibilities:
- no parity check;
- even parity;
- odd parity.
In addition, once the hardware parameters have been defined identically on the two communicating machines, a transmission mode must be chosen, i.e. the flow-control process used on the serial link must be defined.

Flow control must be understood as a means of ensuring that, between two machines connected by a serial link with a fixed rate but different processing speeds, the exchange is aligned with the processing capacity of the slowest machine.

Even though nowadays the processing capacity of modern computers makes the time needed to process the information negligible compared with the duration of a bit, there are still cases where this control is indispensable.
Three options exist: no flow control, hardware flow control and software flow control.

Hardware flow control consists of using additional signals to pace the exchanges, i.e. the communicating machines transmit control information on additional wires. There are several methods of hardware flow control, using more or fewer signals. Other wiring options include 5-wire (partial flow control) and 3-wire (no hardware flow control) connections.
A software protocol can also be used to control the flow of the exchanges. This time the connection no longer needs the RTS, CTS, DTR, DSR and DCD signals, so we can limit ourselves to a 3-wire link.

It is at software level that the machines control their data exchanges. The protocol (for it is indeed a protocol) is called Xon/Xoff. It is limited to ASCII exchanges: it is impossible to use this protocol to send binary information.
The ASCII codes used to control exchanges are Xon, which is ASCII code 17 (0x11) or
CTRL Q and Xoff which is ASCII code 19 (0x13) or CTRL S.
The operating principle of this flow control is based on a simple automatic concept, seen from the point of view of the receiving machine, which is initially ready to receive data. Each byte received is stored in a receive buffer.

When this receive buffer reaches a certain occupancy (usually 80%), the receiving machine sends the Xoff character to the sender, then processes the received data to empty the receive buffer.

Once a second threshold is reached (usually 50%), it sends the Xon character to the sender so that the data transfer can resume.
On the transmitter side, the reception of the Xoff blocks the transmission until the reception of
an Xon.
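The threshold mechanism can be sketched as a toy receiver model; the 80% and 50% thresholds are the usual values quoted above, and the class structure is purely illustrative:

```python
XON, XOFF = 0x11, 0x13      # ASCII 17 (CTRL-Q) and ASCII 19 (CTRL-S)

class Receiver:
    """Toy Xon/Xoff receiver with a bounded receive buffer."""
    def __init__(self, size=10):
        self.size, self.buffer, self.paused = size, [], False
        self.sent = []                    # control characters sent back

    def on_byte(self, b):
        self.buffer.append(b)
        if not self.paused and len(self.buffer) >= 0.8 * self.size:
            self.sent.append(XOFF)        # buffer 80% full: ask sender to stop
            self.paused = True

    def process(self, n):
        del self.buffer[:n]               # consume n bytes from the buffer
        if self.paused and len(self.buffer) <= 0.5 * self.size:
            self.sent.append(XON)         # back under 50%: transfer may resume
            self.paused = False

rx = Receiver()
for b in range(8):
    rx.on_byte(b)                         # the 8th byte crosses the 80% threshold
rx.process(4)                             # 4 bytes left: under 50%, Xon goes out
assert rx.sent == [XOFF, XON]
```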
Application

Thus, a transmission of 7 data bits with an even parity check and 2 stop bits at 9600 baud represents a transmission of 11 bits in total (including the start bit), 7 of which are useful, i.e. (9600 × 7) / 11 ≈ 6109 useful bits/s.
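The same calculation can be written as a small helper (a hypothetical function, assuming one start bit as in the frame described):

```python
def useful_rate(baud, data_bits, parity_bits, stop_bits):
    """Useful bit/s on an asynchronous line: 1 start bit is always added."""
    frame = 1 + data_bits + parity_bits + stop_bits
    return baud * data_bits / frame

rate = useful_rate(9600, 7, 1, 2)   # 9600 * 7 / 11 useful bits per second
```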
Once these points have been defined, the data can be transmitted using a UART. The shape of the frames is then as follows:

The data is received using the internal clock of a UART, a multiple of the transmission frequency (internal clock = N × transmission clock), which allows it to synchronize with the transmitted signal.

A falling edge of the signal marks the beginning of the start bit (which is '0', whereas the idle line or the stop bit is at '1'); the edges of the internal clock are then counted. As the transmission rate has been defined, the receiver knows how many internal clock edges (N) represent one transmitted bit: at N/2, the UART reads the data present on the line. The same is done with the other bits of the transmission at 3N/2, 5N/2, etc.
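The mid-bit sampling described above can be sketched as a toy software UART; the oversampled line is represented as a list of levels, one per internal-clock tick, and the function name is ours:

```python
def uart_receive(samples, n, data_bits=8):
    """Decode one asynchronous frame from an oversampled line
    (n internal ticks per bit; idle level is 1)."""
    start = samples.index(0)               # falling edge: start bit begins
    bits = []
    for k in range(1, data_bits + 1):      # read in the middle of each data bit,
        bits.append(samples[start + k * n + n // 2])   # i.e. at 3n/2, 5n/2, ...
    return bits                            # LSB transmitted first

n = 4
frame = [1] * 3 + [0] * n                  # idle line, then the start bit
for b in [1, 0, 1, 1, 0, 0, 1, 0]:        # data bits, LSB first
    frame += [b] * n
frame += [1] * n                           # stop bit
assert uart_receive(frame, n) == [1, 0, 1, 1, 0, 0, 1, 0]
```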
So the message is reconstructed without the need to transmit the clock. However, this is only valid for short messages; otherwise, desynchronization of the clocks may cause problems that can lead to data loss. To avoid these malfunctions, it is sufficient to resynchronize the reception clock on each rising and/or falling edge of the data signal.
The RS232 standard is now reserved for basic use. Some evolutions (RS422, RS423 and
RS485) of the standard allow much higher throughput between multiple machines.
However, they are not "standard" on modern computers, which is a severe handicap to
their development.
The IEEE 488 bus is the first instrumentation bus to have been standardized. Born from the HP-IB (Hewlett-Packard Interface Bus), it is exclusively dedicated to the automation of measurement chains.
The bus must be managed by a controller (often the data processing unit). On this bus, all
machines (including the controller) can be alternately TALKER (transmitter) or LISTENER
(receiver). There must be no more than one transmitter at a time, but there can be multiple
receivers.
The signals that pass over the lines are of 3 types (the numbers in parentheses represent the pin number on the IEC connector):

The control signals REN (5), IFC (10), ATN (12), SRQ (11) and EOI (6),
The flow-control (handshake) signals DAV (7), NRFD (8) and NDAC (9),
The data signals DIO1 to DIO8.
REN allows a machine to be taken over by the bus. When the REN signal is at "0", the instruments (measuring or displaying) are accessible through their front panels. When REN is set to "1", the instruments are controlled by the bus and their front panels are disabled.

They can then be commanded by sending data; the format of the messages is then an ASCII code, as defined by the machine's manufacturer.
IFC allows the bus controller to initialize all the machines attached to it. When IFC is at "1", all machines stop; the line is freed, and the data and control signals thus remain free and allow the controller to initialize the machines.

The SRQ signal (optional) is used by some machines to warn the controller of the need to give new orders. This line is common to all machines, so after SRQ goes to "1" the controller must decode the address of the requesting machine. The EOI signal allows the controller to identify the machine that is asking for service through SRQ.
Each of these signals is transmitted with its own ground return to improve noise immunity. The grounds are placed on pins 13 and 18 to 25.
ATN allows the bus controller to speak. When ATN is set to "1", the controller becomes the transmitter of operating orders; that is, it no longer provides data but commands. When ATN is set to "0", the controller is a machine like any other, transmitting or receiving according to its programming, and the signals on the bus are then data.
The use of an asynchronous protocol imposes a set of control signals. The signals used
are:
DAV allows the transmitter to notify the receiver(s) that the data presented on the bus is valid and can be read.

NRFD allows a receiver to signal to the transmitter that it is not yet ready to accept any data that may be presented on the bus.

NDAC allows a receiver to notify the transmitter that it has not yet accepted the data presented on the bus.
Timing of a transmission:

Before T0, the receiver reports that it is not able to receive data.

At T1, the data has been present for some time on the lines; the transmitter then signals to the receiver(s) that the data is valid.

At T3, the data has been accepted and the receiver reports this to the transmitter.

At T4, the transmitter announces that the data is no longer valid (although it will in fact remain valid for a time Tb).
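The sequence of events can be sketched as a toy model (signal polarities simplified to True = asserted, a single listener, and log messages of our own):

```python
def handshake(byte):
    """One byte transferred with the DAV / NRFD / NDAC handshake (simplified)."""
    log = []
    dio = byte;    log.append("talker places the data on the DIO lines")
    nrfd = False;  log.append("listener ready for data (NRFD released)")
    dav = True;    log.append("T1: talker signals the data is valid (DAV)")
    nrfd = True;   log.append("listener busy reading the byte (NRFD)")
    ndac = False;  log.append("T3: listener has accepted the data (NDAC released)")
    dav = False;   log.append("T4: talker withdraws data-valid (DAV)")
    ndac = True;   log.append("listener re-arms for the next byte (NDAC)")
    return dio, log

byte, log = handshake(0x41)
assert byte == 0x41 and len(log) == 7
```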
As you can see, networks are complex elements relying on substantial technology. Remember that no 2 networks are identical, so there is an impressive number of them, each with a preferred field; hence the definition of several levels of specification in the form of a pyramid called CIM.

This beautiful representation is being shattered by the increasingly widespread use of the Internet. A large number of manufacturers have drastically reduced the diversity of their network offering accordingly. More and more, the common lot has become Ethernet + TCP/IP, the pyramid being limited to 2 layers: an upper layer (Ethernet + TCP/IP) and, below it, the sensor networks.
Started in 1977 and accepted in 1978, the OSI standard defines 7 levels of specification, each
presented as a layer superimposed on the previous one.
Epistolary analogies are generally used to present this notion of layers, so let us sacrifice to tradition and imagine that we want to send a letter of application. Almost without realizing it, around the information to be transmitted we define a set of parameters that allow the data to be routed from us to a manager.

Once the letter has been written, it is slipped into an envelope (physical layer). A stamp is applied (session layer).

That's it: in 7 lines, I have defined the principle of all networks. We start with an application (what we want to transmit), we add a presentation, etc., and in the end the message is conveyed.
It is therefore understandable that it was necessary to graft a set of elements around the
text that for us represent little information, but which allow the letter to reach its target.
So successive layers of information were deposited. Hence the idea of defining these
additions in the form of layers.
The other notion is related to the use of the layer, i.e. the consistency with the layer of the same level on the machine with which we are communicating. To use our mason's analogy, it is the coherence between 2 bricks side by side in the same row: if one is larger than the other, there is a risk of surprises. The rule of dialogue is then the possibility of communicating with a layer of the same level (horizontal dialogue). This function is called the protocol.
The physical layer represents the lowest level of specification; it defines the electrical and mechanical specifications of a network. The type of connection (full duplex, half duplex or simplex), the type of link (serial or parallel) and the medium (microwave link, coaxial cable, twisted pair, etc.) are defined here.
The link layer is used to manage line access and information transfer between 2 adjacent machines. It manages the connection and disconnection processes, detects errors, manages addressing and defines the formalism of the frame to adapt it to the physical medium. It contains 2 very important sublayers: the MAC (Medium Access Control) sublayer, which manages access to the network and manages or avoids conflicts, and the LLC (Logical Link Control) sublayer, which takes care of the service to the higher layers.
The network layer allows you to route information from one network to another or even
through a network set.
The transport layer is used to control the flow of data in a network. It manages
transmission error control and link reliability.
The function of the session layer is to connect the services available in the 2 machines,
thus making the lower layers of the network transparent.
The presentation layer handles the presentation of the data in a syntactic (grammatical)
way. It allows, among other things, the coding and decoding of information (confidentiality
or compression). Its role is in fact to make heterogeneous machines compatible (for
example, dialogue between a MAC and a PC).
The application layer is the top layer of the network. It encompasses all the applications
that the network will use. It is usually this layer that the user will have as an interface.
There are 2 types of applications: applications in connected mode (where the connection
must be maintained) and applications in non-connected mode (where the connection is
intermittent, such as the INTERNET or email).
Layers 1, 2 and 3 are often referred to as the lower layers (the ones used in industrial computing), as opposed to the other, so-called upper layers.
The definition of the OSI layers of a network in the global sense leads us to study the
influence of these layers in our applications. For example, the use of the IEEE 488 bus
implies that layers 1, 2 and 7 are used. This means that layers 3, 4, 5 and 6 are non-
existent.
This point is extremely important since it allows us to say that if OSI "modeling" makes it
possible to define all networks, in no case are networks forced to use all 7 layers of the
OSI model.
Encapsulation is the addition, layer by layer, of data in addition to that provided by the
upper layer to allow a correct decoding of the information.
The notions of frames and packets are very important in transmissions. Going back to the example of the mail, we have seen that additional parameters, such as the recipient's address and the sender's address, are added around the text to be transmitted.

In the electronic reality of networks, we find more or less the same thing: if the text represents n bits, additional terms must be added for the transmission to take place.
If we still base ourselves on our example, we realize that our message can be summarized
as:
Unfortunately, this basic framework is too simple for today's networks. Indeed, if the
message is not of constant length, there is a high risk of not knowing where it ends and
where the next one starts.
So an additional field is added to define the size of the text. But this may not be enough: we must also be sure that each machine reading the message understands that, if it finds its own address in the text of the message, it is only a coincidence.

Also, a code is generally added at the beginning of the frame to locate its start; this code is called the marker. A field for error checking is also added.
This gives:
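The frame layout just described (marker, addresses, length field, text and error check) can be sketched in code; the single-byte XOR check is a stand-in for whatever error-checking field a real network would use:

```python
MARKER = b"\x7e"              # hypothetical start-of-frame code

def build_frame(dest, src, text):
    """Assemble a frame: marker, addresses, length, text, error check."""
    body = bytes([dest, src, len(text)]) + text
    check = 0
    for b in body:
        check ^= b            # toy 1-byte error check (XOR of the body)
    return MARKER + body + bytes([check])

def parse_frame(frame):
    assert frame[:1] == MARKER, "marker not found"
    dest, src, length = frame[1], frame[2], frame[3]
    text = frame[4:4 + length]
    check = 0
    for b in frame[1:-1]:
        check ^= b
    assert check == frame[-1], "transmission error detected"
    return dest, src, text

f = build_frame(0x02, 0x01, b"HELLO")
assert parse_frame(f) == (0x02, 0x01, b"HELLO")
```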
We saw in the previous chapter that elements are added to a text to be transmitted. But the frames often have a limited format, either to allow others to speak (upper limit) or for technological reasons that we will discuss later (lower limit).

If, in the case of the lower limit, the problem is usually solved by adding meaningless characters, in the case of the upper limit the text often has to be cut into pieces of a restricted format, thus allowing a transfer within the standards. This division is the packetizing, and the extracts of the message are then called packets (for some networks, we speak of datagrams).
The OSI pyramid therefore has a flaw: it greatly increases the size of the
information to be transmitted.
However, it has the advantage of providing the data without error and in an
understandable way, which is the least we can do.
The physical layer is used to define all the physical characteristics (hence its name) of the
network. It makes it possible to adapt to the medium used for transmission by setting the
limits of its use (flow rate, length, etc.).
The topology of a network is the way machines are connected. There are 3 basic methods
of establishing communications.
The Ring
The machines are placed to form a closed loop, made with point-to-point connections.
The star
The machines are distributed, in a star shape, around another, called a concentrator, and
connected to the latter by a point-to-point link. This topology allows you to create small,
independent islands.
The bus

The machines are distributed along the entire line, like the memories on a microprocessor bus, and connected to it by a multipoint link. As with the star, the removal or installation of a machine can be done without any particular condition; however, a single faulty machine is enough to create a network failure.
In addition to these elementary methods, there are also more complex arrangements that allow more varied shapes, and therefore shapes better adapted to the architecture of the buildings or to the mode of use. These shapes (the back-bone, the tree and the mesh network) are respectively derivatives of the bus, the star and the ring.
The Back-Bone
The translation of Back-bone gives backbone, it is on this line that all the other structures
are connected (such as the ribs that articulate on the axis that transmits nerve
information).
The tree
The tree structure is like a tree whose number of branches increases as you move away
from the trunk.
The mesh network

This structure uses multiple point-to-point links, offering multiple communication paths to get from one point to another (link redundancy).
All these topologies each have one (or more) specific defects, the ring is at risk of opening,
the star is very difficult to reconfigure, the bus creates the risk of collisions and finally the
mesh network requires the use of a routing protocol.
Several connection topologies can be used for a global network, but generally, mesh
networks are used to cover an entire territory (city, country), a star or ring network locally
in a company (the preference is for hub-and-spoke networks). Finally, at the level of local
networks, the bus topology is very widespread.
It is not uncommon to see companies (IBM, HP, etc.) using a star network as the structure of the building and creating small islands with ring networks. On this subject, let us dispel a few preconceived ideas:
Ring networks are often much faster than star or bus networks. Indeed, the latter
are "saturable", i.e. they have a capacity for dialogue that decreases with the
number of machines in communication, whereas a ring network always offers
the same performance,
Star networks or buses sometimes use shared resources (spread over several
machines) which make them as sensitive as rings to machine failure,
No network is recommended in advance for an application and only a study can say
that one is better than the other.
There is a wide variety of media for carrying information: the cable links that we will see later, but also microwave, infrared and ultrasound links, etc.

For example, the fieldbus used by EUROCOPTER to test the TIGRE helicopter uses a microwave link to communicate with the sensors on the rotors, for an obvious reason. However, it must be admitted that information essentially travels over wired links on the one hand, and optical fibres on the other.

A twisted pair is usually used to make small networks (within a building). Indeed, it is often easier to accept a certain error rate than to use high-cost media.
The speed of propagation in a copper line is globally around 220,000 km/s, which corresponds to the formula:

v = c / √(εr · μr)

where εr and μr are respectively the relative permittivity and permeability of the medium, and c the speed of light in a vacuum (299,792 km/s).

It is this same formula, transformed into v = c / n, which makes it possible to obtain the speed of propagation of a light wave in an optical fibre. We define n as the refractive index of the medium. In optical fibres, the index is of the order of 1.5, which gives a propagation speed of the order of 200,000 km/s. It can also be noted that in water, light reaches 75% of c, and between 50 and 60% of c in glass.
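Both formulas can be checked with a couple of helper functions (the names are ours):

```python
C = 299_792           # speed of light in a vacuum, km/s

def v_line(eps_r, mu_r):
    """Propagation speed in a line: v = c / sqrt(eps_r * mu_r)."""
    return C / (eps_r * mu_r) ** 0.5

def v_fibre(n):
    """Propagation speed in a fibre of refractive index n: v = c / n."""
    return C / n

assert round(v_fibre(1.5)) == 199_861   # about 200,000 km/s, as stated above
```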
We can therefore see that information propagates faster in copper than in optical fibre.
The generic name "twisted pair" does not imply uniform quality; twisted pairs are classified according to 3 cumulative standards.

The first is the twisted-pair category: there are 7 (or 8) categories of twisted-pair links.
Category   Max. frequency   Throughput       Area of use
Cat. 1     -                -                Telephone cable (abandoned)
Cat. 2     -                -                Telephone cable (abandoned)
Cat. 3     16 MHz           10 Mbit/s        10BASE-T
Cat. 4     20 MHz           16 Mbit/s        Token Ring or 100BASE-T4
Cat. 5     100 MHz          100 Mbit/s       100BASE-TX or ATM
Cat. 5e    100 MHz          1 Gbit/s         -
Cat. 6     200 MHz          1 Gbit/s and +   Gigabit Ethernet or higher
Cat. 7     600 MHz          1 Gbit/s and +   -
The second standard is better known as AWG (American Wire Gauge): from a number, it gives the diameter of the conductor and also its linear resistance. At least, that is what was originally intended; unfortunately, some discrepancies seem to have appeared. We can nevertheless obtain from the AWG reference the cross-section and resistivity of a line:
[Table: conductor diameter and linear resistance as a function of the AWG number]
This standard, which was originally used for guitar strings (going up a tone is like taking a string one AWG number higher), is now dedicated to electrical cables. There are many tables (not all of them very consistent) that avoid tedious calculations.
Although in the vast majority of cases, the simple use of a differential transmission system
is more than enough to protect the information flowing in a twisted pair, there is a standard
describing the protection provided to a line. This classification is based on protection
index.
Abbreviation   Name                             Protection
UTP            Unshielded Twisted Pair          Unshielded cable
S-FTP          Shielded-Foiled Twisted Pair     Screened-shielded cable
S-STP          Shielded-Shielded Twisted Pair   Double-shielded cable
UTP twisted pairs have no special protection, so they are theoretically more sensitive than
others to disturbances.
In FTP, an aluminium screen (generally not connected to the ground) is added against external parasites; it is generally an intermediate solution between UTP and S-FTP.
In S-FTP (sometimes called STP for Shielded Twisted Pair), in addition to the FTP screen,
a braid connected to the ground is added which plays the role of a Faraday cage.
Finally, S-STP is the ultimate twisted-pair link, each pair is individually shielded and the
whole thing is shielded as well. These transmission lines are virtually perfectly isolated
from all interference, it should be noted that they are horribly expensive...
For all twisted pairs of the UTP, FTP or S-FTP type, there is a sneaky form of interference: crosstalk, i.e. the effect of one pair on another.
In addition to their shielding, twisted pairs therefore have another trick to reduce crosstalk: the twist rate of the pairs.
In a 2-pair link, pairs 1 and 2 are twisted at different rates (usually differing by a factor of 2). In a 4-pair link, pairs 1 and 3 share one twist rate and pairs 2 and 4 another, these 2 rates being different.
There are 2 main types of optical fiber: single-mode fibers, where the light wave travels through the core of the fiber, and multimode fibers (step-index or graded-index), where the wave is reflected off the walls of the fiber.
Optical fibers appeared in the early 60s, but it was not until the 70s that they entered the field of networks, thanks to a better mastery of silicon and its dopants, which made it possible to obtain attenuations of the order of 20 dB/km (instead of 1000 dB/km originally).
Training Manual EXP-MN-SI110-FR — Last Revision: 08/04/2009 — Page 127 of 249
With the arrival of single-mode fibers in the 80s, the attenuation of the lines dropped to 2 dB/km.
Step-index optical fibers (now almost abandoned) offer speeds of around 50 Mbit/s, while graded-index fibers can reach 1 Gbit/s. Single-mode fibers can reach 40 Gbit/s over distances ranging from 3 km for standard single-mode fibers (G 652) to 25 km for "True Wave" fibers (G 655).
Optical fibers work in the near infrared (800 to 1600 nm wavelength). They are made of silicon oxide (SiO2) with a very low density of OH- ions (these ions have the unfortunate tendency to absorb radiation in the near infrared). The core of the fibre is doped with germanium or phosphorus, which slightly increases the index of the core; the cladding is doped with boron or fluorine to slightly reduce its index.
The core of an optical fiber has an index n1 of about 1.5, for a diameter of about 200 μm for step-index fibers, 62.5 μm for graded-index fibers and 10 μm for single-mode fibers.
The cladding has an index n2 very close to that of the core (n2/n1 ≈ 0.99), for an outer diameter of 380 μm for step-index fibers and 125 μm for graded-index or single-mode fibers.
The whole is wrapped in an acrylic envelope that absorbs mechanical shocks. It is the optical fibre alone that gives the transmission line its longitudinal mechanical strength; the plastic casing only protects it against shear. In theory, a 29 mm diameter fiber should support the weight of 216 elephants (1300 tons).
Optical fibers are not without defects, however. Even though they are perfectly immune to electromagnetic disturbances, they tend to scatter light, i.e. to create blurred spots; likewise, they do not propagate all wavelengths at the same speed, which tends to distort the transmitted signals. Finally, their natural attenuation limits transmission distances.
The other fundamental element of Layer 1 that allows interconnections without protocol analysis is the HUB.
It is an element that concentrates traffic from multiple hosts and regenerates the signal. It has 4 to 32 ports to interconnect machines, arranged in a star, which is why it is called a HUB. Several HUBs can be connected to each other, using a crossover cable between them.
8.6. ETHERNET
Ethernet is the most well-known standard in the world of networking. Although it is among the least efficient on the market, its worldwide adoption has made it the tool of choice for all companies. It was designed in 1980 by Bob Metcalfe (the founder of 3Com).
It is a low-cost interface for connecting machines over a bus topology in order to share resources. Although not really part of the world of industrial LANs in the strict sense, Ethernet is still the corporate LAN; indeed, it is no longer uncommon to see PLCs connected to Ethernet, if not to the Internet via Ethernet (e.g. embedded web servers).
Ethernet uses only the two lowest layers of the OSI pyramid. The physical layer allows connection via 3 media families: twisted pair (UTP), coaxial cable (thick or thin) and optical fiber. The medium access layer uses a standardized process (IEEE 802.3).
But what characterizes this network most is the incredible number of applications that have been developed "on" it. Indeed, many protocols have been layered on top of it, such as TCP/IP, PROFIBUS, FIELDBUS, etc.
Ultimately, Ethernet is a universal medium, which has earned it great popularity.
We will therefore study the standard Ethernet frame and then the connections used. Finally, in the following chapters, we will focus on the TCP/IP protocol and PROFIBUS.
This frame has undergone significant changes since it was defined by the network's first designers. Originally, the length of the frame was encoded in a specific field; since then, that field has evolved into a definition of the type of data encapsulated.
Ethernet Header
The preamble consists of 7 bytes allowing the regeneration of the transmitter clock, followed by the start delimiter. The 7 preamble bytes alternate 1s and 0s, forming the hexadecimal code AAH, while the start delimiter byte forms ABH (only the last bit changes).
The addresses are composed of 2 fields of 3 bytes each: the 3 high-order bytes identify the manufacturer (they are assigned by a regulatory body), while the 3 low-order bytes encode the "serial number" of the card. These addresses are generally called MAC addresses (from the name of the layer where they are used). A specificity of this addressing is that odd addresses are not used to talk to individual machines: they are strictly reserved for group transmissions (multicast and broadcast).
Because Ethernet frames encapsulate a large number of other protocols, this field specifies the type of information encapsulated. For example, IP packets are encoded 0800H (see the table of values below).
The data field is used by the upper layers to place data in it. This field must contain at
least 46 bytes. If a frame does not contain enough data to fill this interval, padding bits
are used to fill the space.
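The header layout described above (destination, source, type, then a payload padded to at least 46 bytes) can be sketched in a few lines. This is a minimal illustration, not a protocol stack: the preamble and start delimiter are handled by the hardware and never reach software, and the example addresses are hypothetical.

```python
import struct

# Minimal sketch of decoding the Ethernet header fields described in
# the text: 6-byte destination, 6-byte source, 2-byte EtherType, then
# the payload (which must be padded to at least 46 bytes).
def parse_ethernet(frame: bytes):
    if len(frame) < 14 + 46:
        raise ValueError("frame shorter than the 60-byte minimum")
    dst, src = frame[0:6], frame[6:12]
    (ethertype,) = struct.unpack("!H", frame[12:14])  # big-endian 16-bit
    return dst, src, ethertype, frame[14:]

# Example: an IP frame (EtherType 0800H) with a padded payload.
# The source address below is hypothetical.
dst = bytes.fromhex("ffffffffffff")        # broadcast address
src = bytes.fromhex("00a0c9123456")        # hypothetical unicast address
payload = b"hello".ljust(46, b"\x00")      # padding fills up to 46 bytes
frame = dst + src + struct.pack("!H", 0x0800) + payload
d, s, t, p = parse_ethernet(frame)
print(hex(t))  # 0x800
```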
Originally, Ethernet was designed to use only 50Ω coaxial cable, with a bus topology and
therefore with termination impedances.
The use of 10BASE-5 cabling requires the installation of a "rigid" main cable (the backbone), onto which transceivers or extenders are connected with vampire taps (allowing a connection without cutting the line). This set forms a MAU, for Medium Attachment Unit.
This large cable must have a length that is an odd multiple of 23.4 m (23.4 m, 70.2 m, 117 m, 163.8 m, etc.) without exceeding 500 m, i.e. at most 21 such sections of coaxial cable (491.4 m). Its attenuation must not exceed 8.5 dB per 500 m at 10 MHz and its linear resistance must be less than 10 mΩ/m.
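The segment-length rule above (an odd multiple of 23.4 m, capped at 500 m) is easy to check mechanically. A minimal sketch, where the small tolerance parameter is our own addition to absorb rounding:

```python
# Sketch of the 10BASE-5 segment-length rule: an odd multiple of
# 23.4 m (23.4, 70.2, 117, 163.8, ...), without exceeding 500 m.
# The `tol` parameter is our own, to absorb measurement rounding.
def valid_10base5_length(length_m: float, tol: float = 0.05) -> bool:
    if length_m > 500:
        return False
    k = round(length_m / 23.4)            # nearest multiple
    return k % 2 == 1 and abs(length_m - k * 23.4) <= tol

for length in (23.4, 70.2, 491.4, 46.8, 500.0):
    print(length, valid_10base5_length(length))
```

The maximum valid length, 491.4 m, corresponds to 21 sections of 23.4 m.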
But the evolution of techniques, and especially the real need to reduce cabling costs, allowed the implementation of other versions such as 10BASE-2, which uses a thin and much less expensive coaxial cable, connected directly to each machine by tee connectors. On the other hand, the distance between machines is reduced to 200 m, and it is forbidden to connect a machine to the tee connector via a stub.
This cabling is called thin coaxial cabling; its attenuation rises to 4.6 dB per 100 m at 10 MHz, and it also propagates the electromagnetic wave more slowly than the large coaxial cable.
This wiring still had the disadvantage of prohibiting any installation (or removal) of machines without interrupting communications, and any failure of one element of the network affected all the others.
The use of more scalable cabling soon became a necessity, hence the IEEE committee's adoption of twisted-pair cabling. This technology makes it possible to use unshielded twisted wires, an extremely economical type of wiring, in a star topology rather than a bus, using the now famous RJ45 socket.
A connection with the machine can then be established via an interface (AUI, for Attachment Unit Interface) consisting of a CANON DB15-type connector.
Twisted-pair cable is slower than thin coaxial cable when it comes to the propagation of electromagnetic signals. It also exhibits an attenuation of 11.5 dB per 100 m at 10 MHz. Twisted pairs are likewise known for other defects such as crosstalk (the influence of one pair on the other), which must not exceed 26 dB at 10 MHz.
This wiring method has been given the name 10BASE-T. As it uses 2 pairs of twisted
wire, one for transmission, and one for reception, it allows FULL DUPLEX transfer of
information.
However, the evolution of Ethernet does not stop there, since there are also operating modes using optical fiber (10BASE-FL and its successors). These modes generally use backbone topologies (made of optical fiber) onto which different shapes (rings, buses or stars) are articulated.
The use of star cabling or backbone structures requires hubs (to broadcast information in a star layout) or switches (to adapt speeds; see the protocols related to broadcasting methods). As long as no routing is needed, Ethernet protocols can be retained.
The link layer is based on the frame described above and, thanks to its size, allows easy encapsulation of messages from the upper layers. Thus, an IEEE 802.3 standardized frame coming directly from the application layer of the OSI pyramid can be presented as follows:
Conversely, the use of encapsulation allows the received frames to be decomposed again:
As can be seen, information from the upper layers (here from the LLC sublayer) has been placed in the data field.
Since this information is addressed to elements of the machine not designated by MAC addresses, the data encapsulated in the data field of the MAC frame begins with the definition of the destination service and the source service (in our case, the addressing of the SAPs of the LLC layers entering the communication). This reduces the size of the data field by the same amount.
Still within the framework of the protocol, we must dwell a little on the principles of addressing. The 6 bytes of the address field are broken down into 2 fields of 3 bytes. The first field (the 3 high-order bytes) identifies the manufacturer.
It is also necessary to allow correct data routing, i.e. to establish for which service the data on the bus is intended. The length field was therefore converted into a 2-byte field encoding the type of information carried. For example, an IP field is referenced 0800H.
The first field (the 3 high-order bytes) identifies the manufacturer. All countries identify manufacturers in the same way, but this code is not conveyed by routing algorithms; it is therefore only known locally. Manufacturers can therefore use the same address twice (or more) provided they do not sell these cards in the same geographical area, hence a certain risk in buying network cards from the same manufacturer in two different countries.
The second field (the 3 low-order bytes) identifies the card itself. An individual address must be even.
But even though individual addresses are even, frames with odd addresses circulate on the network (as can be seen with a sniffer). These are frames destined for groups of cards (MULTICAST) or even all cards (BROADCAST).
The BROADCAST address is quite simple to remember, since it is composed (in binary) of only 1s, which gives "FF FF FF FF FF FF" in hexadecimal. At this broadcast address, all machines are supposed to respond if the content of the message concerns them. We will see how this works in the context of TCP/IP.
The MULTICAST addresses, on the other hand, are legion (about 8 million). They are obtained by an algorithm from the addresses of the machines concerned by the message. This algorithm, similar to a CRC calculation, automatically recreates an address common to all the machines concerned; each machine, by applying the same calculation, will thus recognize its own group address.
The use of MULTICAST is more limited than that of BROADCAST. However, when the upper layers are used, it is not uncommon to see this kind of frame circulating.
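The parity rule above (even addresses for individual machines, odd for groups, all-ones for broadcast) can be sketched in a few lines; the example addresses are hypothetical:

```python
# Sketch of the MAC address rules: a unicast address has an even first
# byte (its I/G bit is 0); an odd first byte marks a group (MULTICAST)
# address; all-ones is the BROADCAST address.
def is_multicast(mac: bytes) -> bool:
    """True when the address designates a group of cards."""
    return bool(mac[0] & 0x01)

def is_broadcast(mac: bytes) -> bool:
    """True for the all-ones FF FF FF FF FF FF address."""
    return mac == b"\xff" * 6

unicast = bytes.fromhex("00a0c9123456")    # hypothetical card address
broadcast = bytes.fromhex("ffffffffffff")
print(is_multicast(unicast), is_multicast(broadcast), is_broadcast(broadcast))
```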
Information technologies play a driving role in the rise of automation systems. Shaking up
the company's pyramidal organization, revolutionizing its traditional patterns and radically
modifying its flows, they spare no sector of activity (continuous, manufacturing, logistics,
BMS, etc.).
The communication capabilities of industrial equipment and the transparent links that
permeate the entire company are the essential technological building blocks for building
the automation solutions of tomorrow.
Industrial communication became direct and transversal to federate all the equipment in
the field, but also vertical to integrate all levels of the CIM pyramid. Depending on the
requirements of the application and its economic constraints, scalable communication
networks such as PROFIBUS, AS-i and Ethernet are the cornerstones for a seamless
connection of all areas of production.
At the base of the industrial building, the signals from the digital equipment are transmitted
by a sensor/actuator bus. Simplicity and economy are the order of the day here: it is a
question of transporting, on the same cable, not only the useful data (cyclic exchanges),
but also the 24 V power supply of the field devices. This is the preferred area of the AS-i
network.
At the field level, the decentralized periphery—I/O, transmitters, variable speed drives,
valves, and operator interfaces—engages with automation on a network that combines
performance and real-time communication.
Process data is transmitted cyclically, while alarms, parameters and diagnostic information
are transmitted acyclic as required. PROFIBUS meets these requirements perfectly by
offering maximum transparency in both manufacturing and process.
Dialogue between PLCs and industrial PCs takes place at the cell level. Large volumes of data must be exchanged here, relying on a multitude of very powerful communication features.
Industrial networks are the spearheads of this revolution. This is the case of PROFIBUS, the real "backbone" of the plant's information system, whose integration with company-wide networks runs over TCP/IP.
The transmission media may be:
purely electrical
purely optical
infrared
The communication requirements between decentralized systems are therefore very high. Decentralized structures have, among others, the following advantages:
Ability for higher-level structures to also perform diagnostic and logging functions.
Improved plant availability, as the entire system is not affected by a substation failure.
Communication network for the LAN and cell domain with baseband transmission according to IEEE 802.3 and the CSMA/CD (Carrier Sense Multiple Access / Collision Detection) access procedure, using the following media:
50 Ω triaxial cable
optical cables
8.7.1.4. AS–Interface
Communication network for the lowest level of automation, i.e. the connection of digital
actuators and sensors to programmable logic controllers via an AS–I bus cable.
8.7.1.5. PROFIBUS
Communication network for cell and field level according to EN 50170–1–2 with hybrid
Token Bus and Master-Slave access procedures. Interconnection is done using two-wire
cables, FO cables, or infrared cables.
8.7.1.6. PROFIBUS – PA
Figure 109: The 3 main families of PROFIBUS profiles and the convergence of PROFIBUS and Ethernet
PROFIBUS is an open, non-proprietary field network that meets the needs of a wide range of applications in the manufacturing and process fields. Its universality (independence from the manufacturer) and its openness are guaranteed by the European standards EN 50170 and EN 50254 and by the international standard IEC 61158. PROFIBUS allows multi-manufacturer equipment to communicate without going through specialized interfaces.
It lends itself both to the transmission of data requiring reflex actions, in very short reaction
times, and to the exchange of large amounts of complex information. Constantly evolving,
PROFIBUS remains the industrial communication network of the future.
In addition, as part of the technical developments of the network, the PROFIBUS users'
association is currently working on the implementation of universal concepts of "vertical
integration" of all levels of the CIM pyramid, under TCP/IP. Finally, the application profiles
define the appropriate protocol and transmission technology for each type of equipment.
They also endeavour to specify the behaviour of the equipment, regardless of its
manufacturer.
The purpose of these profiles is to define the way in which data is transmitted serially by
the user, on the same physical medium.
DP
The most widespread communication profile in the industry, and the most prized for its speed, performance and low-cost connectivity, DP is reserved for the dialogue between automation systems and the decentralized periphery. It is a great replacement for the traditional transmission of 24 V parallel signals in manufacturing, and of analog signals on a 4-20 mA loop or HART interface in the process industry.
FMS
The field of action of a fieldbus is largely dictated by the choice of its physical medium. In
addition to the general requirements for reliable transmission, handling of long distances
and high throughputs, there are specific, process-oriented criteria: operation in hazardous
atmospheres and transmission of data and energy on the same cable.
These are all criteria that no single transmission technique can meet on its own; hence the three physical profiles of PROFIBUS:
RS 485 transmission, for universal use in manufacturing.
IEC 1158-2 transmission, for process applications.
Optical fiber, synonymous with excellent immunity to interference and long distances.
It should be noted, however, that the future lies in a PROFIBUS physical layer built on
commercial Ethernet components, capable of transmitting at 10 Mbit/s and 100 Mbit/s.
With this in mind, the PROFIBUS offer already includes couplers and links for migration
from one technology to another. While the couplers implement the protocol in a
transparent way, taking into account the constraints of the environment, the links, which
are essentially "intelligent", provide the configuration of PROFIBUS networks with
extensive functionalities.
The PROFIBUS application profiles describe the interaction of the communication protocol
with the transmission technology used. They also define the behavior of field equipment on
PROFIBUS.
Other profiles are dedicated to electronic speed variation, driving and supervision (HMI),
and encoders with, in each case, the dual mission of establishing transmission rules
independent of the supplier and defining the behavior of each type of equipment.
The masters, or active stations, control data transmission on the bus. A master can freely send messages provided it holds the right of access to the network (the token).
The slaves, or passive stations, are peripheral devices (I/O blocks, valves, drives and measurement transmitters) that are not allowed to access the bus on their own initiative: they may only acknowledge a message or respond to a master's request.
The ultra-fast DP profile only uses the two lower layers, 1 and 2, plus the user interface. This streamlining of the architecture ensures fast and efficient transmission.
The Direct Data Link Mapper (DDLM) gives the user interface easy access to Layer 2. The user's application functions, and the behavior of the various types of DP equipment (systems and devices), are specified in the user interface.
The FMS universal profile implements layers 1, 2 and 7. The latter consists of the Fieldbus Message Specification (FMS) messaging and the Lower Layer Interface (LLI). FMS specifies a host of advanced communication services between masters and between masters and slaves; LLI defines the representation of these FMS services in the Layer 2 transmission protocol.
The implementation of the RS 485 link is very easy; the installation of the twisted pair does not require any special knowledge, and the bus structure allows stations to be added or removed, or the system to be commissioned in stages, without affecting the other stations.
Future extensions do not penalise the stations in operation. The user can choose the
speed, in a range from 9.6 kbit/s to 12 Mbit/s. This choice, made at the start of the
network, applies to all bus subscribers.
The bus ends with an active termination at each end of the segment. To avoid errors, both bus terminations must always be powered. Termination can usually be enabled in the device or in the bus termination connectors.
If the network has more than 32 subscribers or if its range is to be extended, repeaters
(line amplifiers) must be used to connect the various bus segments.
The maximum length of the cable is inversely proportional to the throughput; the values in Table 2 are given for a type A cable meeting the following characteristics:
Capacitance: < 30 pF/m
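Since Table 2 is not reproduced here, the figures below are the commonly published segment lengths for a PROFIBUS type A cable, given as an illustration of the "length inversely proportional to throughput" rule rather than as an authoritative copy of the table:

```python
# Commonly published PROFIBUS type A cable segment lengths (the text's
# "Table 2" is not reproduced here, so treat these as illustrative).
MAX_SEGMENT_M = {   # baud rate (kbit/s) -> max segment length (m)
    9.6: 1200, 19.2: 1200, 45.45: 1200, 93.75: 1200,
    187.5: 1000, 500: 400, 1500: 200, 3000: 100, 6000: 100, 12000: 100,
}

def max_segment_length(kbit_s: float) -> int:
    """Max type A segment length for the configured baud rate."""
    return MAX_SEGMENT_M[kbit_s]

print(max_segment_length(1500))  # 200 m at 1.5 Mbit/s
```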
It should be noted that the HAN-BRID connector is also available in a hybrid version that combines data transmission over optical fiber with the 24 V power supply of peripherals over copper.
Pinout:
1 = VP
2 = RxD/TxD-N
3 = DGND
4 = RxD/TxD-P
5 = shielding
Transmission of 24 V power supply and PROFIBUS data over copper for IP 65 equipment
PROFIBUS cables are all from well-known manufacturers. The quick connection system
deserves special mention: equipped with a specific cable and stripper, it guarantees
speed, reliability and simple wiring.
When connecting stations, be careful not to reverse the data lines. It is imperative to use
shielded data lines to ensure optimal interference suppression in environments with high
electromagnetic pollution. This shielding must be connected to the mechanical ground at
each end, while ensuring good conductivity with shielding collars covering the widest
possible area.
It is also recommended to keep the data lines away from high-voltage cables. Stub lines should be avoided at speeds of 1.5 Mbit/s and above.
Commercial connectors allow the incoming and outgoing cables to be connected directly in the connector. Stub lines are therefore unnecessary, and the connector can be inserted or removed at any time without interrupting the data exchange.
In this respect, it should be noted that 90% of problems on a PROFIBUS network are
attributable to wiring and installation errors. This can be remedied by using bus testers and
analyzers, which can detect a number of wiring faults even before commissioning.
Synchronous transmission technology in accordance with IEC 1158-2 (fixed rate of 31.25
kbit/s) is used in the process sector and meets the two main requirements of the chemical
and petrochemical industries: intrinsic safety and remote power supply of field instruments
on the bus via two-wire cabling. PROFIBUS can therefore be used in hazardous areas.
The possibilities and limits of PROFIBUS on an IEC 1158-2 link, in potentially explosive
atmospheres, are defined by the FISCO (Fieldbus Intrinsically Safe Concept) model.
Developed by the German physics institute Physikalisch Technische Bundesanstalt, this
concept is now an authority in this field.
Each segment has a single power source, the power supply unit.
Passive line termination is performed at each end of the bus's main cable.
In steady state, each station requires a minimum current of 10 mA. Thanks to the remote
power supply, this current is supplied to the field devices.
The operation of a PROFIBUS network in a hazardous area requires the FISCO/IEC 1158-
2 approval and certification of all equipment used by authorized bodies such as PTB, BVS
(Germany), UL and FM (USA).
If all this equipment is properly certified and the instructions for selecting the power supply,
line length and bus terminations are met, no further approval is required for PROFIBUS to
be put into operation.
Protection in hazardous areas: intrinsically safe (EEx ia/ib) and explosion-proof (EEx d/m/p/q) modes.
The control and supervision station usually houses the process control system as well as
the operating and development tools communicating on PROFIBUS in RS 485. In the field,
a segment coupler or a link ensures the RS 485/IEC 1158-2 adaptation and, at the same
time, the remote power supply of the field instruments.
The segment couplers are RS 485/IEC 1158-2 signal converters, completely transparent
to the bus protocol. However, their use limits the maximum throughput of the RS 485
segment to 93.75 kbit/s.
Links, on the other hand, are "smart". They bundle all field devices connected to the IEC
1158-2 segment into a single RS 485 slave. In this case, the throughput of the RS 485
segment is not limited: hence the possibility of implementing fast networks ensuring, for
example, control functions with field instruments connected to IEC 1158-2.
The PROFIBUS network on IEC 1158-2 accepts both tree and linear topologies, both of
which can be combined.
In a linear topology, stations are connected to the main cable using T-connections. The
tree topology, on the other hand, is similar to the classic technique for installing field
equipment.
The stranded main cable is replaced by the two-wire bus cable. The field divider is always
used to connect the devices and to house the bus termination resistor. In a tree network,
all devices connected to the bus segment are wired in parallel into the splitter. In all cases,
the maximum permissible lengths of the connecting lines must be taken into account when
calculating the total length of the line.
The bus termination is already integrated into the segment coupler or the link. A polarity
reversal on field devices transmitting in IEC 1158-2 does not affect the functionality of the
bus, as these devices are normally equipped with an automatic polarity detection system.
The number of stations that can be connected to a segment is limited to 32. This number
can be further reduced by the chosen protection mode and the power supply on the bus. In
the case of systems designed for intrinsic safety, the maximum voltage and supply current
are defined within specific limits. Even for applications without intrinsic safety, the power of
the remote power unit is limited.
To determine the maximum line length, it is sufficient to calculate the current requirements of the field devices to be connected, select the power supply accordingly, and then deduce the line length corresponding to the chosen cable. The required current (Σ) is the sum of the base currents of each field device connected to the chosen segment, plus an additional margin of 9 mA per segment of operating current.
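The current-budget rule above is simple arithmetic. A minimal sketch, where the device currents and the supply rating are hypothetical example values:

```python
# Sketch of the segment current budget: sum the base currents of the
# field devices on the segment, add the 9 mA per-segment margin, and
# check the result against the rating of the chosen power supply unit.
def segment_current_ma(base_currents_ma, margin_ma: float = 9.0) -> float:
    """Required segment current (Sigma) in mA, including the margin."""
    return sum(base_currents_ma) + margin_ma

# Hypothetical segment: four transmitters at the 10 mA minimum plus one
# 15 mA device, fed by a (hypothetical) 110 mA intrinsically safe supply.
devices = [10, 10, 10, 10, 15]
needed = segment_current_ma(devices)
print(needed, needed <= 110)
```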
Optical fiber is used on PROFIBUS mainly in three cases: to overcome disruptive electromagnetic environments, to ensure perfect electrical isolation, and to increase the maximum range of the network at high speeds. There are several types of fibre, whose properties vary in distance, cost and intended application.
The PROFIBUS fiber optic segments are designed in a star or ring shape. Some
manufacturers offer PROFIBUS/FO components that allow the redundancy of optical links:
the failure of a first link causes an automatic switch to the second. Many suppliers also
offer RS 485/FO couplers that allow you to switch between the two transmission carriers at
any time within the same network.
The access method is provided by the MAC (Medium Access Control) sublayer, which shares the communication channel by guaranteeing that only one station has the right to transmit at any given time. With its access method, PROFIBUS meets two basic requirements:
Ensure that any complex automation device (master) connected to the network has enough time to complete its communication tasks within the allotted time.
Ensure that cyclic data exchanges between a master and its slaves are carried out as simply and quickly as possible, in real time.
To achieve this, the method of access to PROFIBUS is hybrid in nature (see following
figure): inter-master communication is based on the token method, while exchanges
between masters and slaves take place in the master-slave mode.
The token method, reserved for exchanges between complex stations, guarantees the
access of each master to the bus, at least once in a given time. In plain English, this
means that the token, a special telegram conveying a right to speak from master to
master, must be transmitted to each master at least once within a configurable time
window.
The master-slave method allows the master holding the token to access its slaves, to send them messages or, conversely, to read their messages.
A hybrid setup.
A token ring is the chaining of active stations which form, by their bus addresses, a logical ring within which each participant passes the token to its neighbor in a defined order (ascending addresses), giving it the right to transmit or to pass its turn. Upon receipt of this token, any active station can assume the role of master for a given period and thus communicate with all its slaves in master-slave mode and with all masters in
master-master mode.
When the network starts, the MAC sublayer detects the logical relationships between active stations and builds the ring. During operation, it removes faulty or shut-down active stations from the network and integrates new ones. It also ensures that the token circulates from one master to another in ascending address order. It should be noted that the speaking time of a master depends on the maximum token rotation time.
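The address-ordered token rotation described above can be sketched as a toy model. This is an illustration of the ordering rule only, not an implementation of the PROFIBUS data link layer:

```python
# Toy sketch of the logical ring: active stations are ordered by
# ascending bus address, and the token passes to the next-higher
# address, wrapping around to the lowest address at the end of the ring.
def next_token_holder(masters, current):
    """Address of the master that receives the token after `current`."""
    ring = sorted(masters)
    higher = [addr for addr in ring if addr > current]
    return higher[0] if higher else ring[0]   # wrap to lowest address

masters = {3, 9, 14, 27}     # hypothetical master addresses
print(next_token_holder(masters, 9))    # 14
print(next_token_holder(masters, 27))   # wraps back to 3
```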
Other essential MAC functions include detecting faults in the transmission medium and the line receiver, as well as addressing errors (multiple assignments) and token-passing errors (multiple tokens, or token loss).
Finally, Layer 2 of PROFIBUS ensures data security. Thanks to the format of its
telegrams, it offers a high level of integrity (Hamming distance of 4), in accordance with
the international standard IEC 870-5-1 (start and end characters, non-slip synchronization,
parity bit, control byte).
PROFIBUS communication profiles use a subset of Layer 2 services (table below), which are invoked by the higher layers by means of Service Access Points (SAPs).
In DP, each SAP serves a well-defined function. Multiple SAPs can be used at the same time for all active and passive stations. It should be noted that a distinction is made between source SAPs and destination SAPs.
Table 11: The different services of the PROFIBUS data security layer (Layer 2)
DP is intended for fast, mainly cyclic, serial exchanges between cell controllers (PLC or PC), or control/supervision systems, and the decentralized periphery (I/O, drives, valves, measurement transmitters, etc.). The corresponding communication functions are defined by the basic DP functions, standardised in EN 50170. These are complemented by advanced, acyclic communication services for the configuration, operation, monitoring and alarm processing of intelligent field equipment.
The cell controller (master) cyclically reads the inputs of its slaves and writes their outputs. The bus cycle time must be shorter than the cycle time of the automation program, which is around 10 ms for many applications. In addition to this cyclic transfer of user data, DP includes powerful diagnostic and commissioning tools, with monitoring functions on both the master and slave sides.
Access method: token passing between masters, and master-slave mode between a
master and its slaves
Maximum number of stations on the bus: 126 (masters and slaves combined)
Communication:
Operating mode:
Synchronisation:
Features:
Data security: Hamming distance = 4
Types of equipment:
DP slave: field device (digital or analog I/O, motor control, valve, ...)
The race for throughput is not the only criterion for the success of a fieldbus. Simplicity of
installation and operation, quality diagnostics and immunity to interference are among the
user's priorities.
Speed
DP takes only about 1 ms (at 12 Mbit/s) to transmit 512 input bits and 512 output bits to 32
remote stations. The transmission of I/O in a single message cycle explains the superiority
of DP over FMS in terms of speed. In DP, user data is transmitted with the Layer 2 SRD
service.
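The 1 ms figure quoted above can be checked with a back-of-envelope estimate. The sketch below assumes a simplified model (11 bits per transmitted character for RS-485 UART framing, and roughly 9 bytes of telegram overhead per frame, with no gaps or retries); it is not a substitute for a real bus calculation.

```python
# Rough DP bus cycle estimate: one SRD exchange (request + response) per
# station, each telegram carrying the station's user data plus overhead.

BITS_PER_CHAR = 11       # start + 8 data + parity + stop (assumed framing)
OVERHEAD_BYTES = 9       # assumed header/trailer per data telegram

def cycle_time_ms(stations, out_bytes, in_bytes, bitrate):
    chars = stations * (2 * OVERHEAD_BYTES + out_bytes + in_bytes)
    return chars * BITS_PER_CHAR / bitrate * 1000

# 32 stations, 2 bytes out and 2 bytes in each (512 bits total each way)
t = cycle_time_ms(32, 2, 2, 12_000_000)
print(f"{t:.2f} ms")     # on the order of 1 ms, consistent with the text
```

The result lands just under 1 ms at 12 Mbit/s, which is consistent with the figure quoted for DP; real cycles also include idle times and any retries.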
Diagnostics
DP's powerful diagnostic functions allow faults to be located quickly, using dedicated
messages sent over the bus and retrieved by the master. There are three levels of message:
Channel diagnostics: specifies the cause of a fault on an I/O bit (or channel)
(e.g., short circuit on output 7).
DP can operate in single-master or multi-master mode, hence its great configuration
flexibility. A bus serves a maximum of 126 devices, masters or slaves.
The network configuration specifications define the number of stations, the mapping of
station addresses to I/O addresses, the consistency of I/O data, the format of diagnostic
messages, and the bus parameters.
Slave
A slave is a peripheral device (I/O block, drive, HMI, valve, measurement transmitter)
which collects information on its inputs and delivers information to the process on its
outputs. This category also includes devices that provide only inputs or only outputs.
The amount of I/O depends on the type of device. A DP slave allows a maximum of 244
bytes of input and 244 bytes of output.
In a single-master configuration, a single master holds the bus during network operation:
the controller orchestrates exchanges with the remote slaves via the transmission
medium.
In multi-master mode, several masters share the bus. These can be either independent
subnetworks, each consisting of one DPM1 master and its slaves, or additional
configuration and diagnostic stations. The slave inputs and outputs can be read by all DP
masters; however, only one master (the DPM1 designated during configuration) has write
access to the outputs.
Figure 118: The two frames for cyclic transmission of user data in DP
DPM1 can be controlled either locally or over the bus via the configuration tool. There are
three main states:
Clear: DPM1 reads the slave inputs and holds their outputs in the fail-safe state.
DPM1 periodically sends its state to all the slaves attached to it, using a multicast
command, at a configurable period. The system's automatic reaction to an error
during a DPM1 transfer (e.g. a slave failure) is determined by the auto-clear
configuration parameter.
If auto-clear is true, DPM1 switches the outputs of all its slaves to the fail-safe state as
soon as one of them is no longer able to transfer user data, then enters the Clear state.
If auto-clear is false, DPM1 remains operational even in the event of a fault, and the
user specifies the system's response.
Data transmission between DPM1 and its slaves is handled automatically by DPM1, in a
defined, repetitive order. When configuring the bus, the user specifies the assignment of
each slave to the DPM1 and indicates which slaves are to be integrated into or removed
from the cyclic transmission.
This transmission takes place in three stages: parameterization, configuration and data
transfer. During the first two, each DP slave compares its actual configuration with its
theoretical configuration: equipment type, format, length of information, and number of I/Os
must match.
Only then can the slave move on to the data transfer phase. These checks strengthen the
protection against configuration errors. In addition to the data transfer, which is carried out
automatically by DPM1, a new configuration can be sent to the slaves at the user's
request.
DPM1 does more than automatically execute the transfer of user data with each station. It
can also send commands to a single slave, to a group of slaves or to all slaves at the
same time (multicast).
These commands use two modes, Sync and Freeze, to ensure event-driven
synchronization of the slaves.
The slaves enter Sync mode when they receive a Sync command from their master:
the outputs of all addressed slaves are frozen in their current state. During subsequent
transmissions, the output data is stored in the slaves, with no change to its state.
This data is not sent to the outputs until a new Sync command is received from
the master. Sync mode ends with the Unsync command.
Similarly, a Freeze command switches all addressed slaves to Freeze mode: the state
of the inputs is frozen at its current value. This data is not refreshed until the master sends
a new Freeze command. Freeze mode ends with the Unfreeze command.
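The Sync output semantics described above can be sketched as a toy model. This is an illustrative simplification (class and method names are ours, not the DP state machine): in Sync mode, output data from the bus is buffered and only applied on the next Sync or on Unsync.

```python
# Toy model of a DP slave's Sync/Unsync output handling.

class Slave:
    def __init__(self):
        self.output = 0          # value driven on the physical outputs
        self.buffer = 0          # last output value received from the master
        self.sync = False

    def receive_output(self, value):
        self.buffer = value
        if not self.sync:        # normal mode: outputs follow the bus data
            self.output = value

    def sync_cmd(self):
        if self.sync:
            self.output = self.buffer   # each new Sync applies buffered data
        self.sync = True                # outputs now frozen

    def unsync_cmd(self):
        self.sync = False
        self.output = self.buffer       # leave Sync mode, apply buffered data

s = Slave()
s.receive_output(1)
s.sync_cmd()
s.receive_output(7)              # stored in the slave, outputs unchanged
assert s.output == 1
s.unsync_cmd()
assert s.output == 7
```

Freeze mode is symmetrical on the input side: input values are latched and only refreshed on the next Freeze command, until Unfreeze ends the mode.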
These functions, independent of the cyclic user data exchanges, allow the acyclic
transmission of read/write functions and alarms between master and slaves.
This makes it possible, for example, to use a development tool (DPM2) to optimize the
parameters of the connected slaves, or to query the status of the field devices, without
disrupting the operation of the network.
With these extensions, DP meets the requirements of complex hardware that often needs
to be reconfigured during operation. At present, these extended functions are mainly used
for the online use of PA field instruments by development tools. The transmission of
acyclic data, which is not a priority, is carried out in parallel with the rapid transfer of cyclic
data.
However, the master must be given the time needed to carry out these acyclic services,
and this must be taken into account in the network configuration. To do this, the
configuration engineer normally increases the token rotation time so that the master can
combine cyclic and acyclic transmissions.
Existing equipment that does not need these extensions remains usable, as they only
complement the basic functions.
To address the data, PROFIBUS views the slaves either as physical units or breaks them
down into logical functional modules. In the basic DP functions, this model also applies to
cyclic transmission, where each module has a constant number of input and/or output
bytes, transmitted at a fixed position in the user data telegram.
Addressing is based on identifiers that characterize the module type (input, output or
input/output); together, these identifiers constitute the configuration of a slave, which is
also checked by DPM1 at network start-up.
Acyclic services are also based on this model. All data blocks subject to read or write
accesses are likewise considered to belong to the modules.
These blocks are addressed by slot number and index: the slot number locates the
module, and the index locates a data block within the module. Note that the maximum
length of a data block is 244 bytes.
On modular devices, each module is given a slot number, starting from 1 and following the
ascending order in which the modules are fitted in the equipment. Number 0 is reserved
for the equipment itself. Compact devices are treated as a single set of virtual modules,
subject to the same addressing principle (slot number + index).
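The slot/index addressing model can be sketched with a small data structure. This is illustrative only (the class and dictionary layout are ours, not the DP-V1 telegram encoding); it shows slot 0 reserved for the device, modules numbered from 1, indexed data blocks, and the 244-byte block limit.

```python
# Toy model of slot/index addressing of acyclic data blocks on a DP slave.

MAX_BLOCK_LEN = 244                      # maximum length of a data block

class Module:
    def __init__(self):
        self.blocks = {}                 # index -> data block (bytes)

    def write(self, index, data):
        if len(data) > MAX_BLOCK_LEN:
            raise ValueError("data block longer than 244 bytes")
        self.blocks[index] = bytes(data)

    def read(self, index, length):
        return self.blocks[index][:length]   # partial reads are allowed

# Slot 0 = the equipment itself; modules occupy slots 1, 2, ...
device = {0: Module(), 1: Module(), 2: Module()}
device[1].write(5, b"\x01\x02\x03\x04")      # write block at slot 1, index 5
assert device[1].read(5, 2) == b"\x01\x02"   # length field selects a portion
```

A compact device would simply be modelled as a fixed set of virtual modules addressed the same way.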
The length field of a read/write request allows portions of a data block to be read or written.
If the block access succeeds, the slave returns a positive read or write response; if it fails,
the slave issues a negative response specifying the class of problem encountered.
There are five acyclic transmission functions between the cell controller (DPM1) and slaves:
There are likewise five acyclic transmission functions between the development and
management tool (DPM2) and slaves:
Acyclic transmission follows a predefined sequence, described below using the
MSAC2_Read service as an example.
The master begins by sending an MSAC2_Read request to the slave, in which the
required data is identified by its slot number and index. Upon receiving this request,
the slave prepares the desired information. The master then sends periodic polling
telegrams to retrieve this data from the slave.
The slave responds with a brief acknowledgement, without delivering the requested data
until it has been prepared. The master's next polling request is then answered with an
MSAC2_Read response, which allows the slave's data to be read and transmitted to the
master.
SAPs 40 to 48 on the slave and SAP 50 on the DPM2 are reserved for the MSAC2 link.
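The request-then-poll sequence described above can be sketched as a toy exchange. This is an illustrative model only (class names and the poll-count mechanism are ours): the slave answers polls with a short acknowledgement until the requested data is ready, then delivers it in the read response.

```python
# Toy model of the MSAC2_Read sequence: request, polling, acknowledgements,
# then the data response once the slave has prepared the information.

class AcyclicSlave:
    def __init__(self, data, delay=2):
        self._data = data                # (slot, index) -> data block
        self._delay = delay              # polls needed before data is ready
        self._pending = None

    def read_request(self, slot, index):
        self._pending = [slot, index, self._delay]

    def poll(self):
        if self._pending is None:
            return None
        self._pending[2] -= 1
        if self._pending[2] > 0:
            return "ack"                 # brief acknowledgement, no data yet
        slot, index, _ = self._pending
        self._pending = None
        return self._data[(slot, index)] # data ready: the read response

slave = AcyclicSlave({(1, 5): b"\x2a"})
slave.read_request(1, 5)                 # identify data by slot and index
replies = []
while True:
    r = slave.poll()                     # master's periodic polling telegrams
    replies.append(r)
    if r != "ack":
        break
assert replies == ["ack", b"\x2a"]
```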
FMS is reserved for advanced communication at cell level, i.e. dialogue between
automation systems (PLCs and PCs); it emphasizes functional richness rather than
response time.
The FMS communication profile unifies distributed application processes into a common
process by means of communication relationships.
The part of an application within a field device that is accessible for communication is a
virtual field device (VFD). The figure below relates the real device to its virtual counterpart;
here, only certain variables (number of units, failure rate, downtime) are part of the VFD
and are read/write accessible via the two communication relationships. Note that the
Setpoint and Recipe variables are not taken into account by FMS.
All communication objects on an FMS device are listed in the Object Dictionary, which
contains the description, structure, and data type, as well as the relationship between the
internal addresses of the communication objects and their designation on the bus
(index/name).
Static communication objects appear in the static object dictionary. Configured once
and for all, they cannot be modified during operation. FMS distinguishes five types of
communication objects:
Simple variable
Domain
Dynamic communication objects are entered in the dynamic part of the object dictionary.
They can be modified during operation.
Figure 120: Virtual Field Equipment (VFD) and Object Dictionary (OD)
FMS is based on the Manufacturing Message Specification (MMS), the ISO industrial
messaging standard, adapted to fieldbus applications and enhanced with communication
object administration and network management.
Unconfirmed services can also be used in connectionless mode (broadcast and
multicast), with two priority levels (high or normal).
VFD support: equipment identification and status inquiry. These services can
also be issued spontaneously at a device's request, as broadcast or multicast.
The adaptation of Layer 7 to Layer 2 is handled by the LLI interface, which is responsible,
among other things, for flow control and link monitoring.
The user dialogues with other processes on logical paths called communication
relationships.
LLI provides several types of communication relationships for the execution of FMS and
management services. These communication relationships offer various possibilities of
connection (monitoring, transmission and request to communication partners).
All transmission must begin with the establishment of the link using the Initiate service. If
this step succeeds, the link is protected against unauthorized access and ready to transmit.
When it is no longer in use, it is released by the Abort service. The LLI interface also allows
time supervision of the link in connection-oriented mode.
Seen from the bus, this communication relationship (CR) reference is defined by a station
address, the Layer 2 SAP and the LLI SAP. The CRL list maps the CR reference to the
Layer 2 and LLI addresses. In addition, it indicates, for each CR, the accessible FMS
services, the telegram lengths, etc.
Uniform access for configuration tools is ensured by specifying the fault management link:
for each device that accepts FMA7 services as a responder, a fault management link with
reference CR = 1 must be entered in the CRL list.
The use of PROFIBUS in the process area is defined by the PA profile. PA uses the
IEC 1158-2 standardized transmission medium; for 4-wire instruments (without remote
power supply), RS 485 can be used as an alternative. It defines the parameterization
and behaviour of the field instruments.
The description of the functionality and behaviour of a field instrument is based on the
function block model, which complies with international standardization. These advantages
make PROFIBUS a cost-effective alternative to 4-20 mA analogue or HART digital
transmission.
Born from close collaboration with users in the process industry (NAMUR), the PA profile
meets the four main requirements of the sector:
Adding and removing stations from the bus, even in intrinsically safe areas,
without disrupting other stations.
Remote bus power supply of the measurement transmitters, on the same pair of
wires according to IEC 1158-2.
Use in hazardous areas with two protection modes: intrinsically safe (EEx ia/ib)
or explosion-proof (EEx d).
8.7.6.2. Communication on PA
The implementation of PROFIBUS in the process industry reduces the costs of design,
cabling, commissioning and maintenance by more than 40%, while offering a high level of
functionality and increased data security.
The figure below summarizes the differences between conventional 4-20 mA
point-to-point cabling and a PROFIBUS network.
The field instruments installed in hazardous areas are connected to PROFIBUS by an IEC
1158-2 link which provides both data transmission and remote power supply over two
wires.
The transition to the safe (non-hazardous) area (PROFIBUS DP on RS 485) is carried out
via a segment coupler or a link. Unlike traditional wiring, which requires a line to be run for
each signal between the instrumentation and the I/O board of the control system (PLC,
DCS), data from several devices on PROFIBUS is routed over a single cable. Similarly,
while conventional cabling solutions require a power supply (explosion-proof, if necessary)
for each signal, on PROFIBUS the segment coupler or the link performs this function for
several devices at once.
Figure 123: Comparison between the two cabling solutions: 4-20 mA point-to-point wiring
and the PROFIBUS network
Depending on the explosion risk and the power consumption of the field instruments,
between 9 (EEx ia/ib) and 32 (non-Ex) measuring instruments can be connected to a
single segment coupler or link. The savings therefore extend not only to the wiring, but
also to the system's I/O modules, which are replaced by the PROFIBUS interface.
Isolating barriers and other protections are no longer needed, as several transmitters can
be powered from a single source.
The transmission of the measurements and status of the PA field instruments between the
system (DPM1) and the measurement transmitters is cyclic and has priority; it builds on
the basic functions of the high-speed DP bus. The instantaneous measured value and its
status are therefore always up to date and accessible to the DPM1 controller.
The specification of the unit and the meaning of the instrument parameters (e.g. high/low
measurement range) are vendor-independent. To facilitate commissioning, measured
values can also be simulated in the measurement transmitter.
The user can then substitute a fictitious value for the actual measurement; it is entered
with the development tool and then transmitted to the system. This makes it easier to
simulate the critical states of a plant and supports the personnel in commissioning it in
stages.
The behavior of the equipment is described by specifying standardized variables that give
the details of the properties of the measurement transmitters. The following figure
illustrates the principle of a pressure transmitter described with the Analog Input function
block.
This profile is suitable for describing devices that are limited to a single measured variable
(monovariable) as well as multifunction devices with multiple variables (multivariable). The
equipment sheets of the current PA profile cover the most common field devices:
Valves, Positioners
Analyzers
PA Function Blocks
These blocks represent the various user functions: Analog Input, Analog Output, etc.
Application-oriented, they are complemented by two more equipment-oriented blocks:
the Physical Block and the Transducer Block.
The input and output parameters of the function blocks can be connected via the bus and
linked into a process engineering application.
Physical Block: the identity card of the equipment: designation, manufacturer,
version and serial number.
Analog Input: provides the value measured by the sensor, together with its status and scaling.
Analog Output: drives the analog output with the value given by the system.
Discrete Input: provides the system with the value of the discrete (on/off) input.
Discrete Output: drives the discrete output with the value given by the system. Each
application uses several function blocks, integrated into the field instruments by the
manufacturer and accessible through communication and development tools.
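The Analog Input block described above delivers a scaled value together with a status. A minimal sketch follows; the class, the status codes and the percentage-based scaling are illustrative assumptions, not the actual parameter set defined by the PA profile.

```python
# Illustrative Analog Input function block: scales a raw 0..100 % value to
# engineering units over the configured measurement range, with a status.

GOOD, BAD = 0x80, 0x00       # assumed status codes, for illustration only

class AnalogInput:
    def __init__(self, lo, hi, unit):
        self.lo, self.hi, self.unit = lo, hi, unit   # measurement range

    def out(self, raw_percent):
        """Return (value in engineering units, status)."""
        value = self.lo + (self.hi - self.lo) * raw_percent / 100.0
        status = GOOD if 0.0 <= raw_percent <= 100.0 else BAD
        return value, status

# A pressure transmitter ranged 0..16 bar, as in the figure's example
pressure = AnalogInput(0.0, 16.0, "bar")
value, status = pressure.out(50.0)
assert value == 8.0 and status == GOOD
```

In the real profile the block carries many more parameters (damping, limits, fail-safe behaviour); the point here is only the value-plus-status pattern that the cyclic DP transfer carries to the DPM1.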
The PROFIsafe safety profile (no. 3.092) defines the connection of fail-safe equipment
(emergency stops, light curtains, interlocks) to programmable automation systems on
PROFIBUS.
The special domain of safety, where most components were previously hard-wired, can
thus benefit from the many advantages of open communication on PROFIBUS.
PROFIsafe is a safe version of PROFIBUS, developed with two main objectives: to reduce
cabling costs and to meet the requirements of a wide range of applications in the
manufacturing and process industries.
Equipment operating under PROFIsafe can therefore operate without restriction, in perfect
harmony with standard equipment, on the same cable. Based on the DP profile,
PROFIsafe accepts RS 485, fibre-optic or IEC 1158-2 transmission.
PROFIsafe also offers two decisive advantages: in manufacturing, the very high
responsiveness inherited from DP and, in the process industry, the absence of an
additional power supply for PA field instruments. It is therefore a software solution that
combines safety communication and standard transmission on a single channel, without
any additional dedicated cabling.
PROFIsafe takes into account all the errors that could affect serial transmission
(repetition, loss, insertion, incorrect sequence, delay, masquerade, data corruption and
addressing faults). Better still, it defines complementary safety mechanisms that go far
beyond the simple error detection and correction of PROFIBUS access management.
A judicious selection and subtle dosage of the available safety measures (frame
numbering, time monitoring with acknowledgement, source-destination identification, cyclic
redundancy check and the patented "SIL monitor") make it possible to achieve the
standardised protection levels SIL 3 or AK 6, in accordance with category 4 of the
EN 954-1 standard.
In addition, PROFIsafe has won the support of TÜV and BIA. Finally, it should be noted
that manufacturers of safety equipment can rely on a software driver implementing all the
definitions of the PROFIsafe profile.
Based on the DP communication profile, these application profiles are defined for four
types of equipment:
Numerical Controls and Robots (No. 3.052): This profile deals with the control
of handling and assembly robots on DP. Flowcharts describe the robot's
movements and commands from the perspective of next-level automation.
Encoders (No. 3.062): This profile covers the connection of rotary, angular and
linear encoders (single and multi turn) to DP. Two equipment classes define
basic and complementary functions, such as scaling, alarm processing and
diagnostics.
Variable-speed drives (no. 3.072): This profile defines the parameterization of the
drives and the transmission of setpoints and actual values. It guarantees the
interchangeability of drives of different brands and contains the necessary
specifications for speed variation and positioning. It specifies the basic functions
of the drive while leaving room for application-specific extensions and future
developments.
These variants, which depend on the type of equipment and the supplier, are usually listed
in the technical manual. To simplify the configuration of PROFIBUS and make it transparent
to the user (plug and play), the transmission characteristics of the equipment are listed in
electronic data sheets, called equipment databases or simply GSD files.
Powerful tools enable the configuration of a PROFIBUS network. Based on GSD files, they
considerably facilitate this task for PROFIBUS networks federating multi-source
equipment.
Thanks to these files, the notion of open automation really descends into the field, as close
as possible to the operator on shift. They can be loaded during configuration using any
modern configuration tool, which brings more user-friendliness and simplicity to the
integration of multi-source equipment within PROFIBUS.
GSD files provide a clear and comprehensive description of the characteristics of a type of
equipment, in an extremely accurate format. Prepared for each type of equipment by the
supplier, they are offered to the user in electronic form. The very precise definition of the
file format allows the configuration tool to automatically draw all the information necessary
for the configuration of the bus.
The engineer is thus relieved of the tedious fishing for information in technical manuals.
Even in the middle of configuration, input errors are systematically tracked down and the
consistency between the data entered and the entire system is automatically checked.
Specifications reserved exclusively for master equipment, listing all their
parameters: maximum number of connectable slaves, download possibilities, etc.
In each case, the parameters are identified by keywords. A distinction is made between
mandatory parameters (e.g. the supplier, Vendor_Name) and optional parameters
(e.g. synchronization mode, Sync_Mode_supported).
Defining parameter groups allows you to choose different options. These parameters can
also be linked to point-by-point files containing the symbols of the equipment to be
integrated.
The GSD format guarantees a high degree of flexibility. It consists of lists (e.g., the baud
rates supported by the equipment) and provides enough space to describe the various
components of a modular device.
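To make the keyword structure concrete, here is a minimal GSD-style fragment. It is illustrative only: the keyword names follow the style of the public GSD format (vendor, model, ident number, supported baud rates, Sync/Freeze support), but the values and the exact keyword set are examples, not a complete or validated file.

```ini
; Illustrative GSD fragment (example values, not a real device file)
#Profibus_DP
GSD_Revision     = 3
Vendor_Name      = "Example Corp"        ; mandatory parameter
Model_Name       = "Example Remote I/O"
Ident_Number     = 0x1234                ; assigned by the PROFIBUS association
Protocol_Ident   = 0                     ; 0 = PROFIBUS DP
Station_Type     = 0                     ; 0 = slave
9.6_supp         = 1                     ; supported baud rates (kbit/s)
187.5_supp       = 1
1.5M_supp        = 1
12M_supp         = 1
Freeze_Mode_supp = 1                     ; optional capability flags
Sync_Mode_supp   = 1
```

A configuration tool parses such a file to learn what the device supports, which is what makes multi-vendor configuration "plug and play".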
Diagnostic messages can also be accompanied by plain-text descriptions. To make things
easier for manufacturers, the Download section of the PROFIBUS website
(http://www.profibus.com) offers an editor and a GSD checker that simplify the creation
of these files. The GSD file formats are also described in two PROFIBUS guidelines.
In addition, the GSD files of PROFIBUS equipment complying with the current standards
can be downloaded free of charge from the GSD Library section of the same website.
Each PROFIBUS slave or DPM1 master must have an identification number, which allows
the master to identify the types of equipment present on the bus without increasing the
protocol processing load.
The master compares this number with the one recorded in the configuration. The transfer
of user data cannot start until equipment of the correct type, with the correct station
address, is connected to the bus. This secures the system against configuration errors.
Equipment manufacturers must request this number from the PROFIBUS association
(which is responsible for assigning and managing it) for each type of device. The
corresponding forms can be obtained from the regional branch of the association or on the
PROFIBUS website.
Generic identification numbers, between 9700H and 977FH, have been reserved for PA
field instruments. All PA devices that exactly meet the definitions of version 3.0 (or higher)
of the PA profile must therefore be numbered within this range. The definition of these
generic numbers reinforces the interchangeability of PA field instruments.
The choice of the number to be used to identify the equipment concerned depends on the
type and number of existing function blocks. The number 9760H is reserved for PA
instruments offering multiple function blocks or multivariables. The naming of GSD files for
these devices is also subject to strict rules, which are detailed in the PA profile.
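The ident number ranges quoted above (9700H to 977FH generic, 9760H reserved for multivariable devices) can be expressed as a small check. The function name and return strings are ours, for illustration.

```python
# Classify a PROFIBUS ident number against the generic PA range described
# in the text: 9700H..977FH generic, 9760H reserved for multivariable devices.

PA_GENERIC = range(0x9700, 0x9780)       # 9700H..977FH inclusive
PA_MULTIVARIABLE = 0x9760

def classify(ident):
    if ident == PA_MULTIVARIABLE:
        return "PA multivariable / multiple function blocks"
    if ident in PA_GENERIC:
        return "PA generic"
    return "vendor-specific"

assert classify(0x9701) == "PA generic"
assert classify(0x9760).startswith("PA multivariable")
assert classify(0x1234) == "vendor-specific"
```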
The Electronic Device Description (EDD) file lists all the properties of PROFIBUS field
devices.
At the time of writing, the PROFIBUS user association is working in two directions:
enriching PROFIBUS with new features to extend its field of action, and making
PROFIBUS "THE" fieldbus par excellence, suitable for almost all industrial applications.
A few years ago, achieving cabling savings of 40% with fieldbuses was exceptional.
Today, this success is commonplace.
The challenge now is to further reduce development costs and expand the range of
applications, so as to operate a unified communication network that is transparent to the
user, instead of still requiring several specialized buses.
New savings are on the horizon (spare-parts inventory, commissioning, training and
maintenance), all of them levers of competitiveness for machines and networks on the
global market. Another observation is obvious: the PROFIBUS installed base now exceeds
3 million devices; compatibility is therefore the sine qua non of the network's future
development.
This innovation involves the direct and transparent coupling of PROFIBUS and Ethernet.
PROFIBUS thus takes into account the inevitable evolution of industrial communication
towards "vertical" openness and transparency at all levels of the company, from
management to production, right down to the heart of remote intelligence.
This convergence of PROFIBUS and enterprise IT will take place in three stages:
Figure 127: The three main steps of the PROFIBUS – Ethernet reconciliation
PROFIBUS is also breaking new ground in the field of variable speed. In partnership with
leading experts in electronic speed variation, the PROFIBUS association wants to ensure
the control of rapid movement sequences on PROFIBUS.
These new functions make it possible to achieve closed-loop digital control with
PROFIBUS, which will synchronize the cycles of the application software of the higher-
level automation, the transmission on the bus and the cycles of the application software of
the drives.
To meet these technical requirements, the PROFIBUS protocol must be equipped with
new functions for clock synchronization and slave-to-slave communication between drives.
The objective: to drive twelve synchronized axes with a bus cycle time of less than 2 ms,
without disturbing the cycle, while allowing acyclic parameter access for operation,
monitoring and development tasks.
This development is justified by the fact that it has not been possible to cover all variable
speed requirements with a single network solution based on the open fieldbuses on the
market.
If, for example, the network has the triple purpose of controlling the drives, reading and
displaying remote I/O, and ensuring visualization and operation, these functions must
currently be spread over several buses. PROFIBUS' new motion-control features will mean
that users no longer need specialized buses for many applications.
Clock synchronization will be based on an equidistant, cyclic clock signal on the bus, sent
by the master to all its stations in the form of a global control telegram. Master and slaves
can then lock onto this signal to synchronize their applications.
In the field of speed variation, synchronous transmission serves as the basis for
synchronizing the drives. Not only is telegram communication on the bus carried out in an
equidistant time slot, but the internal control algorithms (speed and current control in the
inverter or controller) and the upper-level automation are also synchronized.
For common applications, the cycle-to-cycle jitter of the clock signal must be less than
1 μs. Larger drifts are considered cycle defects and, as such, ignored. If one cycle is
omitted, the next must again fall within the time slot.
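The jitter rule above can be expressed as a simple acceptance check. This is a toy sketch (a real implementation measures the clock telegram in hardware); the function name and values are ours.

```python
# Toy check of the clock-signal jitter rule: a cycle whose clock telegram
# drifts more than 1 us from the expected instant is treated as a defect
# and ignored.

JITTER_LIMIT_S = 1e-6            # cycle-to-cycle jitter must stay below 1 us

def valid_cycle(expected_t, observed_t):
    """True if the observed clock instant is within the jitter limit."""
    return abs(observed_t - expected_t) < JITTER_LIMIT_S

# 2 ms bus cycle: 0.5 us of drift is accepted, 2 us is rejected
assert valid_cycle(0.002000, 0.0020005)
assert not valid_cycle(0.002000, 0.002002)
```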
The system clock is set by the user when the bus is configured.
Simple slaves (e.g., remote I/O) can take part in this synchronous bus, without any
modifications. Thanks to the Synchro and Freeze functions, inputs and outputs are frozen
at one point in the cycle and transmitted to the next cycle.
However, perfect synchronization of all bus participants imposes a limitation on the number
of masters: a single DPM1 (the automation system) and a single DPM2 (the development
tool).
Existing slaves that do not yet support these protocol extensions can coexist on the same
bus segment with drives that already incorporate these new possibilities.
The definition of these functions and services also seeks to guarantee simplicity and
reliability of implementation, based on commercial ASICs, on both the master and slave
sides. These extensions were included in the PROFIBUS specification at the beginning of
1999, the extended PROFIDrive profile was published at the end of 1999, and their
integration into DP was scheduled for 2000.
PROFIBUS has won the support of several thousand manufacturers from all over the
world, specialists in production and process automation. Substantial gains, increased
flexibility and unparalleled availability are all assets that speak in its favour.
Its catalog of more than 2,000 products and services allows users to select the product
with the best performance, scalability and durability guarantees at any time to meet their
automation requirements.
The devices distributed throughout the workshop are integrated into the PROFInet IO
architecture, which uses the familiar I/O view of PROFIBUS DP and its cyclic mechanisms
for transferring the I/O of remote equipment into the process image memory of the PLC.
The design and implementation of a PROFInet IO network will hold no secrets for
PROFIBUS DP integrators, as the devices distributed in the field are, by configuration,
attached to a PLC.
8.8.2. Communication
The transmission of time-critical process data, within the plant perimeter, uses the
real-time software channel "SRT" (Soft Real Time) residing in the network controller.
The installation of PROFInet complies with the specific requirements for Ethernet networks
in industrial environments. Automation manufacturers receive precise specifications
stipulating the requirements for interfacing and wiring equipment. The "PROFInet
Installation Guide" provides manufacturers with information on the main rules for installing
Ethernet networks.
PROFInet uses Ethernet technology and widely used Internet mechanisms to enable
access to PROFInet components for dialogue on the Web.
Similarly, its openness to the other levels of the industrial hierarchy relies on the OPC DA
and DX standards.
One of the great virtues of PROFInet is the ease with which it manages the transition
between existing fieldbus technology (including PROFIBUS DP) and the industrial
Ethernet solution. PROFInet offers two integration methods, the first for field devices and
the second for the entire application:
Proxy integration: a device with this feature federates the slaves located
downstream of it on the Ethernet network. This method makes it possible to graft
new devices onto the existing installation with complete transparency.
PROFInet IO enables the direct integration of field devices over Ethernet; for this purpose,
the producer/consumer model replaces the master-slave access method of PROFIBUS
DP.
When it comes to communication, all the components of an Ethernet network are treated
as equals, with bandwidth shared equally. The configuration, however, defines the
assignment of the field devices to a central automation system, as the well-known
PROFIBUS user view is carried over to the PROFInet devices: the signals are read at the
decentralized edge and transmitted to the automation system, which processes them and
then sends its outputs back.
Exchanges between controllers and I/O devices take different channels depending on the
type of data:
The controller sends the settings and configuration of the I/O devices attached to it through
the "Recording" communication relationship. Cyclic I/O transmission uses the "I/O"
communication relationship; finally, acyclic events are transmitted to the controller for
acknowledgement through an "Alarms" communication relationship.
The PROFInet IO device follows a uniform equipment model that allows the configuration
of both modular and compact field devices. This model offers the same features as
PROFIBUS DP and, for modular equipment, includes insertion slots for modules, which in
turn carry the channels for the process I/O signals.
In the PROFInet IO architecture, each I/O device receives a unique 32-bit identifier, which
is split into manufacturer code (16-bit) and device code (16-bit).
The manufacturer code is given by PROFIBUS International, while the device code can be
assigned by the manufacturer, depending on its product development.
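As a minimal sketch, the split and recombination of the 32-bit identifier are simple bit operations. The assumption that the manufacturer code occupies the high 16 bits and the device code the low 16 bits is ours, for illustration; the text only specifies the 16/16 split.

```python
def split_device_id(ident: int) -> tuple[int, int]:
    """Split a 32-bit PROFInet IO device identifier into its two 16-bit
    halves (manufacturer code high, device code low -- illustrative order)."""
    manufacturer = (ident >> 16) & 0xFFFF
    device = ident & 0xFFFF
    return manufacturer, device

def make_device_id(manufacturer: int, device: int) -> int:
    """Recombine the two 16-bit codes into the 32-bit identifier."""
    return ((manufacturer & 0xFFFF) << 16) | (device & 0xFFFF)
```

For example, `split_device_id(0x002A0001)` yields manufacturer code `0x002A` and device code `0x0001`.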
The GSD (General Station Description) file of a device is based on XML, an open and
widely used standard for describing data, which contributes to the power of the tools; its
properties include a hierarchical structure.
The GSD file, standardized in ISO 15745, includes a "Device" part (configuration and
parameterization of the modules) and a "Communication" part (speed, connectors).
Figure 134: The 2 configuration steps before data exchange between controllers
and I/O device on PROFInet IO
The GSD file of the I/O devices is first imported into the configuration tool. Each I/O
channel is assigned a device address; input addresses, which bring process values back,
are read and processed by the application program, which produces output values and
sends them back to the process. This is also where the parameterization of each module
or I/O channel (e.g. the 4-20 mA current range of an analog channel) is carried out.
When the configuration is complete, this data is uploaded to the controller, which
automatically configures and parameterizes the I/O devices, ready for cyclic transmission.
8.9.5. Diagnostic
PROFInet IO offers several levels of diagnostics for efficient fault detection and removal.
If an error occurs, the offending I/O device transmits a diagnostic alarm to the controller,
which calls the PLC subroutine to react to the fault.
If the fault requires the replacement of a module or the entire device, the controller
automatically takes care of setting up and configuring the new equipment.
The diagnostic information transmitted includes, among other things, the channel number
and manufacturer-specific data.
When an error occurs at channel level, the offending I/O device transmits a diagnostic
alarm to the controller, which triggers the call of the corresponding error routine in the
control logic. Once this routine has executed, the controller acknowledges the error in the
I/O device. This mechanism ensures that the error is handled sequentially in the controller.
These components can be freely assembled as easily reusable software bricks, regardless
of their programming and internal functionality. Access to the component's technology
interface is governed by a unified PROFInet definition.
When defining the granularity of modules, their reusability across multiple systems must
be considered from the perspective of cost and availability.
The aim is to combine these components with maximum flexibility, according to the
principle of modularity, to create a complete system. However, too fine a granularity risks
complicating the technological view of the installation and therefore increasing engineering
costs; conversely, too coarse a granularity penalizes the reusability of the component and
increases implementation costs.
The creation of the software components is the responsibility of the machine or plant
manufacturer. The design of the component plays a decisive role in reducing engineering
and material costs, and on the time characteristics of the automation system. During the
definition of a component, the granularity can range from the individual device to the
complete machine, equipped with a multitude of devices.
This model distinguishes between the programming of the control logic in each module
and the overall technological configuration of the system. There are three steps to creating
an application that covers the entire installation:
Creating Components
The software components representing the technology modules are first created
by the designer of the machine or plant. The programming and configuration of the
automation components are carried out in the usual way
with the tools of the different manufacturers. This continuity makes it possible to
reuse existing application programs while taking advantage of the know-how of
the company's programmers and maintenance agents.
Interconnection
The Connection Editor extracts the components from its library and links them
together to build the application, simply by drawing lines between the respective
input and output interfaces (figure below).
The connection editor thus traces each application distributed throughout the
installation, regardless of the manufacturer, by materializing the connections
between PROFInet components.
Download
The PCD file contains information about the functions and objects of the PROFInet
components. This XML file is created with the manufacturer's tools, provided the
manufacturer has a "component generator".
The Connection Editor essentially offers two synoptic views. In the installation view, the
necessary components are retrieved from the library and then linked to the display,
resulting in a technological view of the application and logical links between components.
The view of the network gives the real, physical infrastructure of the automation system:
field devices and programmable automations are connected to a bus whose addressing
rules they adopt.
Figure 140: View of the network representing the field devices connected to the bus
SRT (Soft Real Time) software for time-critical process data used in industrial
automation;
Isochronous Real Time (IRT) for advanced applications such as drive control
and synchronization (Motion Control).
Together with standard TCP/IP communication for non-time-critical data, these three
levels cover all automation applications.
These features are among PROFInet's strengths; they guarantee cohesion at all levels of
the company, from the workshop to management, and a high level of responsiveness
within the process.
8.9.7.1. TCP/UDP and IP
Ethernet and TCP/IP are the pillars of PROFInet communication. TCP/IP is indeed the
communication protocol of the computer world. However, when it comes to application
interoperability, establishing a common TCP or UDP (Layer 4) transport channel on field
devices is not enough.
Indeed, only the use of the same application layer by all devices is a guarantee of
interoperability.
A few reminders:
Ethernet: standardized as IEEE 802.3, these specifications set out the access method,
transmission procedures, and physical media for Ethernet (10 Mbps), Fast Ethernet (100
Mbps), and Gigabit Ethernet (1 Gbps) networks. PROFInet uses Fast Ethernet and
Gigabit Ethernet. Fast Ethernet is an extension of the 10 Mbps Ethernet specifications
that integrates and standardizes full-duplex transmission and switching.
TCP: connection-oriented transport protocol between sender and receiver, guaranteeing
error-free, complete and correctly sequenced delivery. TCP provides a secure service in
connected mode: a link is established between the two stations before transmission and
released at the end of the exchange. TCP also incorporates mechanisms for permanent
monitoring of the link.
UDP: a transport protocol similar to TCP, but operating in connectionless mode and
without guarantee of reliability (each data packet is treated as a single message, without
acknowledgement). In the absence of timeout monitoring or link establishment and
release, UDP is better suited than TCP for time-critical applications. The monitoring of the
exchange and of data blocks, which is implicit in TCP, can be carried out over UDP at the
application layer, e.g. with RPC (Remote Procedure Call).
Real-time communication must be able to minimize the CPU load on the devices and thus
ensure that the application program is prioritized. However, experience has shown that the
transmission time of data over a Fast Ethernet link
at 100 Mbps (or higher) is negligible compared to the processing time in the devices. The
time it takes to provide this data to the producer's application is not affected by the
communication.
The same applies to the processing of data received by the consumer. It is concluded that
any significant improvement in refresh time and thus in real-time response is mainly the
result of the proper optimization of the communication stack on both the producer and
consumer side.
First, removing multiple protocol levels reduces the length of the message; Second, the
time it takes to prepare the data for transmission and processing by the application is
shortened.
At the same time, the computing power reserved in the device for communication is
significantly lighter.
Unfortunately, this solution is not enough for Motion Control positioning and timing
applications. These require refresh times of the order of 1 ms with a jitter between the
synchronization pulses of two consecutive cycles of at most 1 μs, to synchronize up to
100 nodes.
Thanks to the clock synchronization of the bus participants (network constituents and
PROFInet devices), with the precision given above, it is possible to reserve a slice of the
network for the transmission of time-critical automation data. The transmission cycle is
therefore segmented into "deterministic" and "non-deterministic" parts: real-time cyclic
telegrams use the deterministic slice, while TCP/IP telegrams occupy the non-
deterministic range.
Just as if, by analogy with motorway traffic, the left-hand lane were reserved for express
traffic (real-time) and other users (TCP/IP transport) were confined to the right-hand lane,
so that traffic jams on this side of the road do not slow down traffic at critical times.
When PROFInet IO is started, the RPC protocol based on UDP/IP is used to initiate
exchanges between devices, to parameterize distributed equipment and to carry out
diagnostics. Thanks to the openness of this standardised protocol, the operator's stations
(HMI) and engineering stations (supervisors) can also access PROFInet IO I/O devices.
The real-time channel of PROFInet is then used to transmit I/O and alarms.
In a typical PROFInet IO network, a controller exchanges cyclic I/O with multiple I/O
devices through communication relationships. At each polling cycle, the input data of the
queried devices is sent to the controller, which sends the output data back to them.
The data transmission layer of PROFInet is defined in IEEE 802.3, which describes
protocol configuration and fault monitoring. A user data telegram consists of a minimum of
64 bytes and a maximum of 1500 bytes, of which 28 bytes are for real-time data.
In the PROFInet component view, DCOM (Distributed COM) is the TCP/IP application
protocol for sharing data between PROFInet components.
DCOM is the extension of the COM (Component Object Model) model for the distribution
of objects on the network and their interoperability. DCOM is based on the RPC standard.
PROFInet uses DCOM in particular to access engineering functions.
However, DCOM is not essential for the dialogue between PROFInet components. It is up
to the user to decide, at the engineering system level, whether to exchange user data on
DCOM or real-time channel.
When a communication is established, the devices (machines or parts of the system) can
then agree to use a real-time-capable protocol when their needs in this area are not met
by TCP/IP or UDP. TCP/IP and DCOM are the ideal "Esperanto" for starting exchanges
between devices.
The PROFInet real-time channel is then used for real-time communication between nodes,
within time-critical applications.
In the configuration tool, the user can determine the quality of service by setting the
frequency of change of the values and their transmission, either cyclical (during operation)
or punctual (only in case of change).
It should be noted that the cyclic solution is better suited to frequent value changes;
conversely, punctual querying of the devices for control and acknowledgement increases
the load on the processor.
The international standard ISO/IEC 11801 and its European equivalent EN 50173, which
are identical in all respects, define a standardized, application-independent computer
network for office use within a building complex.
This means that neither of them takes into account the imperatives and specificities of the
industrial environment:
Bus topology;
Robust wiring and connectors designed for industry: compliance with EMC,
temperature and humidity constraints, protection against dust and vibrations.
8.9.8.1. Topologies
Topologies are intended to satisfy the requirements of the units federated on the network;
the most commonly used are the star, the bus, the tree and the ring. In practice, a network
tends to mix these structures, described below, which use physical media such as copper
or optical fiber, on PROFInet as well.
Star
A central node (switch) distributes the signals between each branch connecting it to the
end nodes. The hub-and-spoke array is suitable for applications with high equipment
density and short radius of action (e.g. small manufacturing cells or isolated production
machines).
Tree
This topology consists of bringing together several stars to form a network combining
optical fiber and twisted pair if necessary; It makes it possible to subdivide complex
installations into sub-networks.
Line (bus)
The bus structure uses a switch that is located near the connection terminal or integrated
into the terminal. It is especially suitable for large-scale applications (e.g. conveying) or for
connecting manufacturing cells.
Ring (redundant)
In a ring network, all stations are connected in series in a closed loop. This topology is
valid for systems that require high availability and protection against line outages or
network component failures.
Industrial cables are subjected to enormous mechanical stresses; they are therefore made
especially for the workshop. PI has defined different types of cables optimized to operate
at the limits of industrial conditions. Thanks to the system's sufficient reserves, the cable
length of a standards-compliant installation can be varied without constraint.
Connectors and cables form a perfectly coherent whole: only those whose compatibility
has been tested and validated are PROFInet certified. The wiring requirements at field
level are similar to those of PROFIBUS. Since stations receive both data and a 24 V power
supply, the hybrid cable (with signals and power) is ideal. The offer is twofold: mixed
Cu/FOC cable (2 optical fibers for data/4 wires for power); Cu/Cu cable (4 data wires/4
power wires).
Let's remember the two major advantages of optical fiber over the twisted pair: insensitivity
to electromagnetic disturbances and deployment of wide area networks.
PROFInet on copper
The transmission medium is the shielded twisted copper pair (STP, Shielded Twisted
Pair) of 100Base-TX cabling, at 100 Mbit/s (Fast Ethernet). Only shielded cables and
connectors are allowed. Each must be Category 5 per the ISO/IEC 11801 standard, and
the entire link must comply with Class D, again according to ISO/IEC 11801.
In addition, PROFInet cables have an AWG 22 cross-section so that complex wiring can
be carried out with minimal signal loss. That's why PROFInet cabling focuses on
modularity, complying with IEC 11801 and simple installation rules. Equipment
connections are materialized by RJ45 or M12 plug-in connectors. The connecting cables
are provided with connectors at both ends, which can be pre-assembled with the AWG 22
cable.
All devices are connected to the network by active equipment. PROFInet uses switched
components, the specification of which guarantees simplicity of installation. The
transmission cables are equipped with identical connectors at both ends, pre-assembled
according to the same criteria. The length of a segment is limited to 100 m.
PROFInet can use multimode or single-mode fibers. Transmission is carried out on two
optical conductors (100Base-FX), at 100 Mbit/s. The optical interfaces comply with the
ISO/IEC 9314-3 (multimode) and ISO/IEC 9314-4 (single-mode) specifications.
For applications outside the control cabinet, the cable jacket must meet the mechanical,
chemical, and thermal protection requirements of the production site. The maximum length
of a segment is 2 km in multimode and 14 km in singlemode.
8.9.8.3. Connections
One of the first criteria for suitability for the industrial environment is the possibility of
implementing on-site connection systems. The M12 and RJ45 connectors are made for
this; their assembly is facilitated by the use of standard tools.
The connectors located outside the cabinet must take into account industrial constraints:
they are IP65 or IP67 protected RJ45 or M12 connectors.
The PROFInet M12 connector is the shielded D-coded version, specified in the draft IEC
61076-2-101 standard.
The "duplex DC" connectors, which comply with ISO/IEC 11801, are mainly used for
optical fiber, which is itself described in the IEC 60874-14 standard.
The devices are equipped with the female socket and the connection cable with the male
plug. BFOC/2.5 fiber optic connectors standardized IEC 60874-10 can also be used.
The hybrid connector is used for distributed architectures where field devices are
connected by a connector that mixes data and power. The RJ45 protected
8.9.8.4. Switches
PROFInet always uses switches installed throughout the transmission between stations to
regenerate and direct the signals. This equipment, standardized ISO/IEC 15802-3, is used
to structure the network.
Suitable switches for PROFInet are those designed for Fast Ethernet (100 Mbit/s, IEEE
802.3u) and full-duplex transmission; in this mode, the switch receives and transmits
simultaneously on the same port, without the risk of collision and therefore without loss of
bandwidth due to Ethernet sensing mechanisms.
The configuration of the network is greatly simplified since there is no need to control
segment lengths within a collision domain. 10Base-T (10 Mbit/s, CSMA/CD) is also
supported to ensure compatibility with existing infrastructures, isolated or legacy
terminals, or first-generation Ethernet hubs.
PROFInet switches also manage telegram priority according to IEEE 802.1Q and the
following functions: standardized diagnostics, automatic polarity change, auto-negotiation,
automatic cross-wiring detection, and optional port mirroring for diagnostics.
Office switches perform all these functions, but are not suitable for PROFInet. This
requires "rugged" switches, capable of withstanding the mechanical, electrical and
electromagnetic constraints of industry (IP protection, 24 V power supply, EMC, etc.) and
guaranteeing operational safety.
PROFInet provides a model for integrating existing PROFIBUS and other fieldbus
segments into PROFInet. It is therefore possible to build a system combining multiple
fieldbuses and Ethernet subnetworks to establish technological continuity between the
various levels of industrial communication, from the field to PROFInet.
Users focus on the ease of integrating their existing systems into a new PROFInet
solution.
Machine and plant builders want to be able to use their proven and documented
industrial assets in PROFInet automation projects without any modifications.
Automation vendors want to integrate their field devices into PROFInet without
additional costs due to modifications.
On Ethernet, the proxy represents one or more field devices (e.g. PROFIBUS slaves) and
makes communication between networks transparent (without message encapsulation) as
well as, among other things, the sending of cyclic data to the field devices.
Consider a PROFIBUS DP network: the proxy is both the DP master responsible for
coordinating the exchanges between PROFIBUS nodes and an Ethernet device
participating in the PROFInet communication.
Here, the existing fieldbus mechanisms (in this case, PROFIBUS DP) are integrated into
the component, while the PROFInet mechanisms remain external to it.
This migration strategy preserves all the user's investments (equipment and infrastructure,
cabling), whether they are operators, plant managers or machine builders, while protecting
their application know-how. PROFInet thus enables a smooth transition to new network
segments.
Based on this, PROFInet can be integrated with other fieldbuses such as Foundation
Fieldbus, DeviceNet, Interbus, CC-Link, etc. A specific image of the component interfaces
for the different communication options is then defined for each bus, which is saved in the
proxy. This makes it possible to connect any field network to PROFInet in one fell swoop.
The figure below illustrates an example of a modular application in the food industry. The
bottling machine has 4 workstations: rinsing, filling, capping and packaging.
On the one hand, this example demonstrates the independent coexistence of PROFIBUS
and PROFInet within an overall system. On the other hand, it highlights the simplicity of
integrating existing manufacturing cells.
The specifications provide for the maintenance of PROFIBUS DP (rinsing and filling) but
for the modernisation and extension of capping and packaging on PROFInet.
The independence of the communication procedures and the use of a proxy make it
possible not to touch the PROFIBUS network.
All that is needed is to link the communications between components in the engineering of
the new machine configuration and to equip the PROFIBUS DP master with an Ethernet
module (hardware + software) and proxy functions.
The proxy function ensures that the PROFInet view remains encapsulated in the
automation system as a technology module. All operations upstream of PROFIBUS are
carried out as before.
8.10.1. TCP/IP
Figure 152: Presentation of the OSI model adapted to some elements of the TCP/IP suite
The role of the IP protocol is to route information, i.e. to allow the transport of information
from one network to another through several routers, thus going beyond layer 2 to cross
the router wall, and reach its target by taking the shortest route. For this purpose, a new
addressing format, the IP address, is used.
The IP address is composed, as we will see later, of 32 bits; to simplify its use (a
simplification that is admittedly rather computer-oriented), these bits have been grouped
by byte and the address is written in dotted decimal: four decimal numbers between 0 and
255, separated by dots.
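The grouping into dotted decimal can be illustrated with a short Python sketch (the function names are ours):

```python
def to_dotted_decimal(addr: int) -> str:
    """Write a 32-bit IP address as four decimal bytes separated by dots."""
    return ".".join(str((addr >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def from_dotted_decimal(text: str) -> int:
    """Parse dotted-decimal notation back into the 32-bit value."""
    value = 0
    for part in text.split("."):
        value = (value << 8) | int(part)
    return value
```

For instance, the 32-bit value 0xC00A0F14 reads as "192.10.15.20", the address used in the mask examples further below.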
This address is encapsulated in the Ethernet frame (in the first few bytes of the IEEE
802.3 frame data field). It is placed inside the IP header.
4 8 16 32 bits
Ver. IHL Type of service Total length
Identification Flags Fragment offset
Time to live Protocol Header checksum
Source address
Destination address
Option + Padding
Data
The IP address is defined by a set of 4 bytes. This makes it possible to define 2^32
addresses (about 4.3 billion nodes). These addresses are arranged in 5 classes,
depending on the value of the first bits of the address.
Class A allows the creation of 126 networks of 2^24 machines (16 million), i.e. the use of
addresses 1.0.0.0 to 126.255.255.255.
Class B allows the creation of 16,384 networks of 2^16 machines (65,536), i.e. the use of
addresses 128.1.0.0 to 191.255.255.255.
Class C allows the creation of 2^21 networks (2 million) of 256 machines, i.e. the use of
the addresses 192.0.1.0 to 223.255.255.255.
Class D is reserved for multicast transmission and uses addresses 224.0.0.0 to
239.255.255.255.
Finally, class E is reserved for future uses; it uses addresses between 240.0.0.0 and
247.255.255.255.
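Since the class is determined by the first bits of the address, it can be read directly off the first byte; a small illustrative Python function:

```python
def address_class(addr: str) -> str:
    """Return the historical class (A-E) of a dotted-decimal IPv4 address,
    determined by the leading bits of the first byte."""
    first = int(addr.split(".")[0])
    if first < 128:      # leading bit 0
        return "A"
    if first < 192:      # leading bits 10
        return "B"
    if first < 224:      # leading bits 110
        return "C"
    if first < 240:      # leading bits 1110 (multicast)
        return "D"
    return "E"           # leading bits 1111 (reserved)
```

For example, 192.10.15.20 falls in class C.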
This distribution makes it possible to define not only the number of networks, but also a
hierarchical, tree-like organization.
At the Class A level, all machines are routers or gateways that are interconnected on a
small number of networks. At this level, the interconnections of the major intercontinental
networks are formed.
At the level of class C, there is a small number of computers connected to a large number
of subnets (subnetworks driven by class A and B routers). We are here in the "general
public" field with lots of small networks on which users' machines are connected.
In grey, the encoding of the "name" of the network, in white the encoding of the "name" of the
machine.
This 4-byte value (the subnet mask) allows a machine to determine the address of the
network to which it is connected and, by deduction, its own "name" on that network.
IP        192.10.15.20          IP        192.10.15.20
& Mask    255.255.255.0         + Mask    255.255.255.0
= Network 192.10.15.0           = Machine 255.255.255.20
Note that the numbers above are presented in base 10; to make the manipulation easier
to follow, the same example is repeated below in hexadecimal.
IP        C0.0A.0F.14           IP        C0.0A.0F.14
& Mask    FF.FF.FF.00           + Mask    FF.FF.FF.00
= Network C0.0A.0F.00           = Machine FF.FF.FF.14
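The worked example above (AND with the mask for the network name, OR with the mask for the machine name, following the document's notation) can be reproduced in a few lines of Python:

```python
def network_and_machine(ip: str, mask: str) -> tuple[str, str]:
    """Apply the mask byte by byte, exactly as in the worked example:
    AND with the mask yields the network name, OR with the mask yields
    the machine name."""
    ip_bytes = [int(b) for b in ip.split(".")]
    mask_bytes = [int(b) for b in mask.split(".")]
    network = ".".join(str(i & m) for i, m in zip(ip_bytes, mask_bytes))
    machine = ".".join(str(i | m) for i, m in zip(ip_bytes, mask_bytes))
    return network, machine
```

Calling `network_and_machine("192.10.15.20", "255.255.255.0")` returns `("192.10.15.0", "255.255.255.20")`, matching the tables above.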
4 8 16 32 bits
Ver. IHL Type of service Total length
Identification Flags Fragment offset
Time to live Protocol Header checksum
Source address
Destination address
Option + Padding
Data
The first 4 bits define the IP version number, usually this number is 4 (IPv4).
The IHL (IP Header Length) field gives the number of 32-bit words contained in the
header (the data is not part of it, but options and padding are).
The first 3 bits form a field that encodes the precedence of the message:
000 : routine
001 : priority
010 : immediate
011 : flash
100 : flash override
101 : critic/ECP
110 : internetwork control
111 : network control
The last 4 bits are used to describe the requested service. They are exclusive
(only one bit may be set in a given frame).
The TLF (Total Length Field) gives the total size of the IP packet, so that the size of the
data field can be deduced from the header length. This field thus makes it possible to
discard the padding added to bring small IP frames up to the minimum Ethernet payload
size (46 bytes).
"More Fragments" warns that other fragments of the same datagram are to be
followed. The last fragment of the datagram must have this bit at "0" (No more
Fragment).
"Don't Fragment" allows you to prohibit the fragmentation of the datagram (this
can lead to errors since the maximum size of the message allowed by the lower
layers of the network is less than the size of the datagram).
The Fragment Offset field allows you to define the "order number" of the fragment, each
fragment can be routed independently of the others, it can go through a shorter path than
its predecessor and therefore arrive at its destination before it. The target machine will
then be responsible for recomposing the datagram by placing the fragments (which have
the same identity) in the correct order (ascending order of Fragment Offset).
Be careful, fragmentation is a double-edged sword: if a single fragment is lost, the whole
message must be retransmitted.
The TTL (Time To Live) field sets the maximum number of routers that a datagram can
traverse. The field is initialized to a certain value, and each router it passes through
decrements it. When it reaches 0, the datagram is rejected and the sender is notified.
Contrary to what one might think, this method is not punitive: it eliminates packets that
would otherwise circulate endlessly on the network.
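The TTL rule can be illustrated with a toy simulation (purely illustrative; the function name is ours):

```python
def survives(ttl: int, routers_on_path: int) -> bool:
    """Each router decrements the TTL; if it reaches 0 the datagram is
    rejected and the sender is notified. Return True if the datagram
    can cross the whole path."""
    for _ in range(routers_on_path):
        ttl -= 1
        if ttl == 0:
            return False   # datagram discarded by this router
    return True
```

A datagram sent with a TTL of 3 thus dies at the third router, while a TTL of 4 lets it through the same path.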
The protocol field allows you to define what type of service (TCP, UDP, etc.) uses the IP
data field to encapsulate its data. Some values in this field can be noted (1 for ICMP, 6 for
TCP and 17 for UDP)
The HCS (Header Check Sum) field contains a checksum that validates the IP header,
and only the IP header, encapsulated in the Ethernet frame. Note that the HCS field is
itself part of the header it protects; for this reason it is taken as zero for the purposes of
the calculation.
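The calculation in question is the standard one's-complement Internet checksum; a compact Python sketch, with the HCS field assumed already set to zero in the input:

```python
def ip_header_checksum(header: bytes) -> int:
    """One's-complement sum of the header taken as 16-bit words, with the
    checksum field itself zeroed beforehand. Returns the 16-bit value to
    place in the HCS field."""
    if len(header) % 2:
        header += b"\x00"                          # pad odd-length input
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]  # add next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return (~total) & 0xFFFF
```

A useful property: recomputing the sum over a header that already contains its correct checksum yields 0, which is how a receiver validates the header.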
Finally, the IP addresses of the source and the recipient are presented, then, before
stacking the data, 32 bits are systematically left free for the definition of options (if there
are any) or for later use.
The IP frame can contain options. These are used in network development and testing;
they remain optional, although they must be implemented by all the elements of a
network. An option is described in a single byte.
0   1         2   3                  7
C   Option class   Option number
The first bit of the byte is the copy indicator: it indicates whether the information about
this option must be copied into each of the possible fragments (bit at 1) or not (bit at 0).
00 : control class
Option 9 (strict source routing) sets the entire path of the message, while option 3
(loose source routing) lists the required waypoints for a packet.
Option 4 (class 2) creates a timestamp of the packets: each time the packet passes
through a router, time information is added to the frame.
Since an option uses only one byte for its definition, padding bytes are systematically
added to complete the 32-bit word started by the option definition. This is called
PADDING.
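To recap the header layout described above, the fixed 20-byte part can be unpacked with Python's struct module (the selection of fields returned is ours, for illustration):

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte part of an IPv4 header into the fields
    discussed in the text (version, IHL, total length, TTL, protocol,
    addresses)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, hcs, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,             # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                 # 1 = ICMP, 6 = TCP, 17 = UDP
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }
```

On a typical UDP datagram the function reports version 4, an IHL of 5 (no options, i.e. 20 bytes), and protocol 17.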
Using the IP protocol over a physical network raises two questions: on the one hand,
where and how the data and the identification of the IP frame are arranged (in the
physical sense); on the other hand, how the operation of the two protocols is tied together
(in the logical sense).
The IP protocol and the Ethernet protocol are linked to each other by an exchange table
called the ARP table (for Address Resolution Protocol). The purpose of this table is to
associate an IP address with a MAC address. This is done according to a very simple
procedure.
The 3 parameters that must be set for all machines using IP are respectively: the IP
address, the subnet mask and the IP address of its gateway (which must be on the same
physical network as the machine).
When sending an IP frame, the source machine compares the network part of the target's
IP address with the address of its own subnet. If it does not recognize the address of its
own network, it knows that only the gateway is able to forward its message beyond the
local network, so it will try to contact its gateway. To do this, it uses the same procedure
as for contacting a machine on the same subnet.
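The decision "same subnet or gateway" boils down to comparing the network parts obtained with the mask; an illustrative Python sketch:

```python
def next_hop(src_ip: str, target_ip: str, mask: str, gateway: str) -> str:
    """Decide where to send the frame: directly to the target if it is on
    the same subnet as the source, otherwise to the configured gateway."""
    def as_int(addr: str) -> int:
        return int.from_bytes(bytes(int(b) for b in addr.split(".")), "big")
    same_net = (as_int(src_ip) & as_int(mask)) == (as_int(target_ip) & as_int(mask))
    return target_ip if same_net else gateway
```

With mask 255.255.255.0, a machine at 192.10.15.20 talks directly to 192.10.15.99 but hands anything outside 192.10.15.0 to its gateway.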
If the target is on the same subnet, and assuming the source machine has not
communicated with it recently (its ARP cache entry has expired), it must associate the IP
address of its target with an address at the MAC layer (IP is not a physical network,
remember: only the MAC layer gives access to the physical link). To identify the address
of its target, the source machine sends an ARP Request frame.
The ARP frame uses the LAN frame data field to present its header and places the 0806
hex code in the encapsulated frame type definition field.
16                        32 bits
Hardware Type       Protocol Type
HLen (8)   PLen (8)   Operation
Sender Hardware Address
Sender Protocol Address
Target Hardware Address
Target Protocol Address
The ARP Request frame then sends a message to all machines (BROADCAST ALL)
asking them to respond to the sender if their IP address is present in the frame. The
machine concerned will respond with an ARP Reply frame directly addressed to the
sender of the request.
The ARP header is composed of 2 kinds of fields: on the one hand fields oriented to the
physical layer (HARDWARE), and on the other hand fields oriented to the IP layer
(PROTOCOL). It starts with 4 bytes: the first 2 define the type of MAC layer used
(Ethernet is identified by a 1), and the last 2 define the type of network layer used
(code 0x0800 for IP).
The Hlen and Plen fields define the byte size of the MAC layer addresses (6 for Ethernet)
and those of the network layer (4 for IP), respectively.
The OP field indicates whether it is an ARP request or response (1 for a request and 2 for
a response) or a Reverse Address Resolution Protocol (RARP) command.
The fields that follow are, respectively, the MAC address of the source, then its network
address, followed by the MAC and network addresses of the target.
During the query, the field corresponding to the MAC address of the target is left blank.
When responding, the machine that was the target becomes the source, so it is this
machine that provides the source address and target address values at the network layer
as well as at the MAC layer. The target machine is then the one that issued the request.
This data is then stored in a dynamic table (which is deleted if it is not refreshed after 30
seconds). Thus, the data entered in the table is automatically deleted if it is no longer
used.
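This dynamic-table behaviour can be sketched as follows. This is a simplified model, not an actual operating-system implementation; the class name is invented, and the 30-second default follows the text above.

```python
import time

class ArpCache:
    """Minimal sketch of a dynamic ARP table: entries map an IP
    address to a MAC address and are dropped if not refreshed
    within the timeout (30 s in the text above)."""

    def __init__(self, timeout: float = 30.0):
        self.timeout = timeout
        self._entries = {}  # ip -> (mac, last_refresh_time)

    def update(self, ip: str, mac: str) -> None:
        # Called when an ARP reply arrives, or when ongoing
        # traffic with that machine refreshes the entry.
        self._entries[ip] = (mac, time.monotonic())

    def lookup(self, ip: str):
        # Returns the MAC address, or None if the entry has
        # expired or never existed (an ARP request is then needed).
        entry = self._entries.get(ip)
        if entry is None:
            return None
        mac, stamp = entry
        if time.monotonic() - stamp > self.timeout:
            del self._entries[ip]  # stale entry: automatically deleted
            return None
        return mac
```

A `lookup` miss is exactly the situation in which a machine must broadcast a new ARP Request before it can transmit.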
The PING command takes its name from Packet InterNet Groper; it lets you test whether
a target machine responds to a request from a source machine. The request is an echo
that the target machine must return.
The message sent is therefore composed of an IP header carrying the addresses of the
source and target machines, followed by an ICMP header encapsulated in the IP data
field, with type 8 (echo request) and code 0, followed by an identifier for the source
machine and a sequence number.
The message returned by the target is likewise composed of an IP header and an ICMP
header, where the type is 0 (echo reply) and the code is 0; the identifier and sequence
number fields are those of the echo request (the sent message).
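As a minimal illustration, the echo-request message described above can be built as follows. The function names and the `b"ping"` payload are illustrative; the header fields match the description: type 8, code 0, identifier and sequence number, protected by the standard Internet checksum.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Standard one's-complement Internet checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP echo request: type 8, code 0, then identifier and
    sequence number. The target answers with type 0 (echo reply)
    carrying back the same identifier and sequence number."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum zeroed
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A useful property to note: recomputing the checksum over the finished message (checksum field included) yields 0, which is how the receiver validates it.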
When a machine wants to communicate with another, it must use a physical network
medium to transmit its data. Let us take the example of a fictitious network containing
3 machines.
These 3 machines are in fact a router and 2 computers. Each of them has its own IP
address and, apart from the router, which uses 2 types of networks, the 2 computers
use the Ethernet network exclusively. We therefore have the following structure:
We will consider 2 cases: a first where machine no. 1 wants to talk to machine no. 2,
and another where machine no. 2 wants to talk to machine no. 4. We will then study an
example with even more networks.
In our first case, we will imagine that a user on machine n°1 wants to communicate with
machine n°2. This machine has been inactive for a very long time, so it does not know its
neighbors.
To communicate, it needs the physical address of the target machine (if it is on the same
network as it) or the physical address of the router if its target machine is not on its local
network.
To determine the address of its network, the source machine performs a logical AND
between its IP address and the mask:

    192 . 0   . 1   . 2
AND 255 . 255 . 255 . 0
  = 192 . 0   . 1   . 0
The network is therefore 192.0.1.0. The target has the address 192.0.1.3, so we look for
its network in the same way:

    192 . 0   . 1   . 3
AND 255 . 255 . 255 . 0
  = 192 . 0   . 1   . 0
The 2 machines are therefore on the same network. The source machine will then ask
all the machines on the network which one has the IP address 192.0.1.3.
This request is made by sending an ARP request in BROADCAST ALL, i.e. to the address
of all the machines. We therefore find an ARP frame containing the following information:
Then an ARP response frame comes from the target that responds:
Now that the 2 machines know each other, they communicate via the Ethernet network
without reissuing ARP commands. As long as these 2 machines keep talking to each
other, they will keep the association between IP and MAC addresses locally, in the ARP
table. In addition, a routing table now lets them know the path to follow to talk to each
other.
Now let's take the second case: machine no. 2 wants to talk to machine no. 4, so we run
through the procedure again:

    192 . 0   . 1   . 3
AND 255 . 255 . 255 . 0
  = 192 . 0   . 1   . 0

    192 . 0   . 2   . 3
AND 255 . 255 . 255 . 0
  = 192 . 0   . 2   . 0
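The two cases above boil down to one octet-by-octet AND. Here is a minimal sketch of that calculation (the function name is invented for the illustration):

```python
def network_of(ip: str, mask: str) -> str:
    """Octet-by-octet logical AND between an IP address and its mask,
    exactly as in the worked examples above."""
    return ".".join(
        str(a & m)
        for a, m in zip(map(int, ip.split(".")), map(int, mask.split(".")))
    )

# First case: 192.0.1.2 and 192.0.1.3 share the network 192.0.1.0,
# so the source can ARP for its target directly.
assert network_of("192.0.1.2", "255.255.255.0") == "192.0.1.0"
assert network_of("192.0.1.3", "255.255.255.0") == "192.0.1.0"

# Second case: 192.0.1.3 and 192.0.2.3 are on different networks,
# so the source must go through its gateway.
assert network_of("192.0.2.3", "255.255.255.0") == "192.0.2.0"
```

Equal results mean "same subnet, talk directly"; different results mean "go through the gateway".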
Since the networks are different, the source machine knows that it cannot talk directly to
its target; it must therefore go through its router to have the information forwarded to the
target.
As the machine has not talked to the router for a long time, it must once again associate
the router's MAC address with the router's IP address. To do this, it issues an ARP
request in BROADCAST ALL.
Machine no. 2 now knows how to talk to its router. It is now up to the router to establish
the rest of the communication path. There are several possibilities for this: static routing
(the path to follow has been set by an administrator) or dynamic routing (the routers must
discover each other without outside help).
Whatever the routing method, the principle remains the same: the router, like the other
machines, checks for the presence of the target network in its routing table. If it is not
present, it consults the other machines using route exchange protocols (such as IRDP,
the Internet Router Discovery Protocol).
The main protocol used by routers is RIP (Routing Information Protocol). It allows a
router to determine automatically and dynamically (i.e. without outside intervention) the
shortest path to follow to reach a target.
Routing information is not centralized but is broadcast locally. Each router has its own
routing table, called the RIP table. There is no node on the network that centralizes
routing information, nor is there any router that knows all the available networks.
The shortest path is defined using a distance metric based on the HOP, one HOP
corresponding to crossing one router. The distance is therefore not real but notional: it
does not matter whether a network spans several hundred kilometres; what counts is
that a minimum number of routers is mobilized to transfer a piece of information.
At regular intervals (roughly every 30 seconds), the routers broadcast their RIP tables.
Fortunately, there are 2 safeguards on these exchanges, which on a network as vast as
the Internet could otherwise cause serious saturation problems. The first is that tables
are propagated with the maximum reachable distance capped at 15 HOPs. The second
is that only the shortest path is stored.
The 1, 2 and 3 placed on the wires connected to the routers represent the last byte of the
IP address of the connection point.
These tables are maintained dynamically. This means, for example, that if network
192.0.3.0 fails (say the link is broken), then as soon as router C (or router D) tries to
transmit on that network, it will detect an error, update its routing table and, by
propagation, the tables of the other routers.
These routing tables are built by propagating the tables of other routers (incrementing
the distances) via exchanges of RIP frames.
At this point the four 192.0.X.0 networks are all known. To better understand the path
used, we will summarize these results in a table.
Since distances must be minimized, after the third step every element of the network
knows the shortest path to reach its target. If routers E and F are added, propagation
takes a little longer but leads to the same results.
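The propagation step described above can be sketched as a distance-vector update. This is a simplified model of RIP's behaviour, not an implementation of the protocol; the function name is invented.

```python
INFINITY = 16  # RIP caps distances at 15 HOPs; 16 means unreachable

def rip_update(table: dict, neighbour: str, neighbour_table: dict) -> bool:
    """One distance-vector step: merge a neighbour's RIP table into
    ours, adding one HOP for the crossing, and keep only the
    shortest path to each network.

    `table` maps network -> (distance, next_hop). Returns True if
    our table changed (and so should be re-broadcast in turn)."""
    changed = False
    for network, (dist, _) in neighbour_table.items():
        new_dist = min(dist + 1, INFINITY)          # one more router to cross
        current_dist, _ = table.get(network, (INFINITY, None))
        if new_dist < current_dist:                  # only the shortest path is kept
            table[network] = (new_dist, neighbour)
            changed = True
    return changed
```

Repeating this exchange between neighbouring routers is what makes every router converge, after a few rounds, on the shortest path to each network.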
The RIP frame is encapsulated in an IP frame. It is made up of a variable number of
fields; at a minimum, however, a RIP frame is 24 bytes long. The field sizes below are
given in bytes.

Size:   1   1   2      2     2      4         8      4
Field:  C   V   ZERO   AFI   ZERO   ADDRESS   ZERO   METRIC
The ZERO fields are "empty" fields (filled with zeros).

C (Command): can be a request (a router asks another to propagate its table) or a
"response", which is either a regular update or an exceptional update (the case where a
branch of the network has failed).

V (Version): gives the version number of the RIP protocol in use.
A RIP frame can contain up to 25 occurrences of the ADDRESS and METRIC fields,
thus making it possible (for each frame) to give the position of 25 routers.
The IP address of the gateway to contact for routing is carried, for its part, in the IP frame
that encapsulates the RIP frame.
The master-slave dialogue can be represented as a succession of point-to-point links.
8.10.8.2. Addressing
The devices connected to the bus are identified by addresses assigned by the user.
The master polls a slave bearing a number that is unique on the network and waits for a
response from that slave.
The master broadcasts a message to all the slaves present on the network; they execute
the order contained in the message without sending a response.
The request
It contains a function code telling the addressed slave what type of action is requested.
The data contains the additional information the slave needs in order to execute this
function.
The response
If an error occurs, the function code is modified to indicate that the response is an error
response.
The data then contains a code (exception code) identifying the type of error.
Two types of encoding can be used to communicate on a Modbus network. All the
equipment present on the network must be configured for the same type.
ASCII type: each byte making up a frame is coded as 2 ASCII characters (2 × 8 bits).
RTU (Remote Terminal Unit) type: each byte making up a frame is coded as 2
hexadecimal characters (2 × 4 bits), transmitted together as a single binary byte.
ASCII mode tolerates intervals of more than one second between characters without
generating errors, whereas RTU mode gives a higher throughput for the same
transmission speed.
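The size difference between the two encodings can be illustrated with a minimal sketch of ASCII-mode framing (start character ':', hex characters, an LRC check byte, CR/LF terminator). This is a simplified illustration, not a full Modbus implementation; the function name is invented.

```python
def modbus_ascii_encode(frame: bytes) -> bytes:
    """Modbus ASCII framing sketch: each byte of the frame is sent
    as 2 ASCII characters (its hex representation), preceded by ':'
    and followed by an LRC check byte and CR/LF."""
    lrc = (-sum(frame)) & 0xFF  # LRC: two's complement of the byte sum
    return b":" + (frame + bytes([lrc])).hex().upper().encode() + b"\r\n"

# An illustrative request frame: slave 04, function 03, start 0000, 2 words.
frame = bytes([0x04, 0x03, 0x00, 0x00, 0x00, 0x02])
ascii_msg = modbus_ascii_encode(frame)

# In RTU mode the same 6 bytes would travel raw (plus a 2-byte CRC),
# so the RTU message is roughly half the length of the ASCII one.
assert len(ascii_msg) > 2 * len(frame)
```

This is why, at equal baud rate, RTU mode achieves the higher throughput mentioned above.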
The master addresses the slave whose address is given in the field provided for this
purpose.
The data field is coded as n hexadecimal words from 00 to FF, i.e. as n bytes.
In RTU mode, the CRC (Cyclic Redundancy Check) error-control field contains a 16-bit
value.
Note: parity checking can in some cases be dropped, because other exchange checks
are implemented (the CRC check, also called a checksum check).
The slave sends back its response; it places its own address in the address field so that
the master can identify it.
It then uses the function field to indicate whether the response contains an error. For a
normal response, the slave repeats the same function code as the one in the message
sent by the master; otherwise it returns an error code corresponding to the original code
with its MSB set to 1.
The error-control field contains a 16-bit value. This value is the result of a CRC (Cyclic
Redundancy Check) computed over the message.
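The Modbus RTU CRC-16 can be computed with the classic bit-by-bit algorithm (reflected polynomial 0xA001, initial value 0xFFFF). This is a minimal sketch, not a production implementation:

```python
def modbus_crc16(frame: bytes) -> bytes:
    """Modbus RTU CRC-16 (polynomial 0xA001, initial value 0xFFFF),
    returned low byte first, as it is transmitted on the wire."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001  # shift out a 1: fold in the polynomial
            else:
                crc >>= 1                  # shift out a 0: plain shift
    return bytes([crc & 0xFF, crc >> 8])   # low byte transmitted first
```

The master computes this value over the message it sends; the slave recomputes it on reception and rejects the message if the two values differ.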
Each byte making up a message is transmitted in RTU mode as follows:
Before and after each message, there must be a silence equal to 3.5 times the
transmission time of one word.
The MODBUS protocol defines only the structure of the messages and the way they are
exchanged. Any RS 232, RS 422 or RS 485 transmission medium can be used, but the
RS 485 link is the most widespread because it allows multidrop operation.
- Slave address: 04
- CRC: 25 CA
- Slave address: 04
- CRC: 01 31
- Slave address: 04
- CRC: B8 DE
More than 20 million HART-compatible devices are in use worldwide, making HART the
most widely deployed communication protocol of its kind. HART technology is easy to
use and very reliable.
There are several reasons for having a host computer communicate with a field device.
Among others:
Device diagnostics
Device troubleshooting
And many more!
HART is a master-slave communication protocol created in the 1980s to ease
communication with smart field devices. HART stands for Highway Addressable Remote
Transducer. The HART protocol uses the Bell 202 frequency-shift keying (FSK) standard
to superimpose low-level digital communication signals on the 4-20 mA signals.
This allows two-way field communication and makes it possible to convey additional
information, beyond the normal process variable, to and from a smart field device. The
HART protocol communicates at 1200 bits per second without interrupting the 4-20 mA
signal, and allows a host (master) application to receive two or more digital updates per
second from a field device. Since the FSK digital signal is phase-continuous, there is no
interference with the 4-20 mA signal.
HART is a master/slave protocol, which means that a field device (slave) only speaks
when addressed by a master. The HART protocol can be used in various modes to
convey information to and from smart field devices and to and from central control or
monitoring systems.
The HART protocol carries out all digital communication with field devices in either
point-to-point or multidrop network configurations.
HART offers two simultaneous communication channels: the 4-20 mA analogue signal
and a digital signal. The 4-20 mA signal transmits the primary measured value (in the
case of a field device) using the 4-20 mA current loop, the fastest and most reliable
industry standard.
Together, the two channels provide a complete field-device communication solution that
is easy to use and configure, low in cost and very robust.
HART commands
Applications include remote interrogation of process variables, cyclic access to process
data, parameter setting and diagnostics.
The Application layer defines the commands, responses, data types and status reporting
supported by the protocol. In addition, certain HART protocol conventions (for example,
how to set a loop current) are also considered part of the Application layer.
While the Command Summary, Common Tables and Command Response Code
specifications establish the mandatory Application-layer practices (e.g. data types,
common definitions of elementary data, and procedures), the Universal Commands
specify the minimum Application-layer content for all HART-compatible devices.
Several types of data or information can be transmitted from a HART-compatible device.
Among others:
Measurement data
Calibration data
Once this data is integrated into control, asset-management or safety systems, it
becomes possible to improve plant operations, reduce costs and increase plant
availability.
Write message
Write tag
Write date
Write descriptor
More status available - indicates that additional device status data is available
Primary variable out of limits - indicates that the primary variable value is outside the
sensor limits
Manufacturer name (code) - code established by the HCF, fixed per manufacturer
Message - message scratch-pad area (32 characters), set by the user
Upper Range Value - Primary Variable value in engineering units for the 20 mA point,
set by the user
Lower Range Value - Primary Variable value in engineering units for the 4 mA point,
set by the user