Advanced Testing of Systems-of-Systems 2
Practical Aspects
Bernard Homès
First published 2022 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms and licenses issued by the
CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the
undermentioned address:
www.iste.co.uk www.wiley.com
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the
author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.
Preface . . . xv
10.8.3. Prevalidation regression tests, sanity checks and smoke tests . . . 179
10.8.4. What to automate? . . . 180
10.8.5. Test frameworks . . . 182
10.8.6. E2E test cases . . . 183
10.8.7. Automated test case maintenance or not? . . . 184
10.9. Reporting . . . 185
10.9.1. Automated reporting for the test manager . . . 186
Terminology . . . 253
References . . . 261
Index . . . 267
I would also like to thank the many managers and colleagues I had the privilege
of meeting during my career. Some, too few, understood that quality is really
everyone’s business. We will lay a modest shroud over the others.
Testing Qualification Board), CFTL (Comité Français des Tests Logiciels, the
French Software Testing committee) and GASQ (Global Association for Software
Quality). I also dedicate these books to you, the reader, so that you can improve your
testing competencies.
Preface
Implementation
August 2022
1
Test Project Management
deadlines, the scope initially considered increases, the quality of input data – requirements, components to be tested, interfaces – is often lower than expected, and the number of faults or anomalies is greater than anticipated. All of this happens under tighter budgetary and calendar constraints because, even if development takes longer than expected, the production launch date is rarely postponed.
The methodologies offered by ITIL, PRINCE2, CMMI, etc., bring together sets of good practices that can be adapted – or not – to our system-of-systems project. CMMI, for example, has no test-specific elements (only IVV), and it may be necessary to supplement it with test-specific tasks and actions as offered by TMM and TMMi.
The execution of the tests is carried out in different test environments according
to the test levels envisaged. It will therefore be necessary to ensure the availability
of environments for each level.
Test environments, as well as their data and the applications they interface with, must be properly synchronized with each other. This implies an up-to-date definition
of the versions of each system making up the system-of-systems and of the
interfaces and messages exchanged between them.
It is obvious that a test case's input data and expected output data are necessary; it is also important to have a set of other data that will be used for testing:
– data related to the users who will run the tests (e.g. authorization level,
hierarchical level, organization to which they are attached, etc.);
– information related to the test data used (e.g. technical characteristics,
composition, functionalities present, etc.) and which are grouped in legacy systems
interfaced with the system-of-systems under test;
– historical information allowing proposals to be made from it (e.g. purchase suggestions based on previous purchases);
– information based on geographical positioning (e.g. GPS position), supply times and consumption volumes, to anticipate stock replenishment needs (e.g. the need to refuel based on driving style and fuel consumption, making it possible to suggest – depending on the route and GPS information – one or more nearby service stations);
– etc.
Test Project Management 5
The creation and provision of quality test data is necessary before any test
campaign. Designing and updating this data, ensuring that it is consistent, is
extremely important because it must – as far as possible – simulate the reality of the
exchanges and information of each of the systems of the system-of-systems to be
tested. We will therefore need to generate data from monitoring systems (from
sensors, via IoT systems) and ensure that their production respects the expected
constraints (e.g. every n seconds, in order to identify connection losses or deviations
from nominal operating ranges).
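As a rough sketch of this kind of data generation and timing check (the function and field names are assumptions for illustration, not from the book), periodic sensor readings can be simulated and gaps larger than the expected n-second interval flagged as connection losses:

```python
from datetime import datetime, timedelta

def generate_readings(start, count, period_s=5):
    """Simulate periodic IoT sensor readings: one every `period_s` seconds."""
    return [{"ts": start + timedelta(seconds=i * period_s), "value": 20.0}
            for i in range(count)]

def find_gaps(readings, period_s=5, tolerance_s=1):
    """Flag consecutive readings whose spacing deviates from the expected period."""
    gaps = []
    for prev, cur in zip(readings, readings[1:]):
        delta = (cur["ts"] - prev["ts"]).total_seconds()
        if abs(delta - period_s) > tolerance_s:
            gaps.append((prev["ts"], cur["ts"], delta))
    return gaps
```

The same interval check can also be run against readings produced by the real monitoring systems, to detect deviations from the nominal operating cadence.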
Test data should be realistic and consistent over time. That is, they must either
simulate a reference period and each of the campaigns must ensure that the systems
have modified their reference date (e.g. use a fixed range of hours and reset systems
at the beginning of this range) or be consistent with the time of execution of the test
campaign. The latter solution requires generating the test data during the execution of the test campaign, in order to verify the consistency of the data with respect to what is expected (e.g. identification of duplicate messages, sequencing of messages, etc.)
and therefore the proper functioning of the system-of-systems as a whole.
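A minimal sketch of such consistency checks on a message stream (the `id` and `seq` field names are illustrative assumptions) might detect duplicates and sequencing errors as follows:

```python
def check_messages(messages):
    """Detect duplicate IDs and out-of-order sequence numbers in a message stream."""
    seen = set()
    duplicates, out_of_order = [], []
    last_seq = None
    for msg in messages:
        if msg["id"] in seen:
            duplicates.append(msg["id"])
        seen.add(msg["id"])
        if last_seq is not None and msg["seq"] <= last_seq:
            out_of_order.append(msg["seq"])
        last_seq = msg["seq"]
    return duplicates, out_of_order
```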
Development and construction projects are associated with often strict delivery
dates and schedules. The impact of a late delivery of a component generates
cascading effects impacting the delivery of the system and the system-of-systems.
Timely delivery, with the expected features and the desired level of quality, is therefore very important. In some systems-of-systems, the completeness of the functionalities and their level of quality are often more important than meeting the delivery date. In others, respecting the schedule is crucial in order to meet hard imperatives (e.g. the launch window for a rocket aiming for another planet).
This involves close collaboration between the test manager and the project managers in charge of the design and production of the components, products or systems to be tested, as well as the managers in charge of test environments and the supply of test data.
In the context of Agile and Lean methods, any delay in deliveries and any
non-compliance with schedules is a “loss of value” and should be eliminated. It is
however important to note that the principles of agility propose that it is the
development teams that define the scope of the functionalities to be delivered at each
iteration.
Depending on the test level, environments will include more and more components, products and systems, which must be coordinated to form test environments representative of real life. Each environment includes one or more systems, components and products, as well as interfaces, ETLs and communication equipment (wired, wireless, satellite, optical networks, etc.) of increasing complexity. The design of these various environments quickly becomes a full-time job, especially since it is necessary to ensure that all the versions of all the software are correctly synchronized and that all the data, files, database contents and interfaces are synchronized and validated, in order to allow the correct execution of the tests on the environment.
Testing activities can start effectively and efficiently as soon as all their prerequisites are present. Otherwise, the activities will have to stop and then start again when the missing prerequisite is provided, and so on. This generates a significant waste of time, not to mention everyone's frustration. Before starting any test task, we must make sure that all the prerequisites are present, or at the very least that they will arrive on time with the desired level of quality. Among others, the prerequisites include the requirements, the environment, the datasets, the component to be tested, the test cases with their expected data, the testers, the tools and procedures for managing tests and anomalies, and the KPIs and metrics allowing the progress of the tests to be reported.
One solution to ensure the presence of the prerequisites is to set up a TRR (Test
Readiness Review) milestone, a review of the start of the tests. The purpose of this
milestone is to verify – depending on the test level and the types of test – whether or
not the prerequisites are present. If prerequisites are missing, it is up to the project
managers to decide whether or not to launch the test activity, taking into account the
identified risks.
In Agile methods, such a review can be informal and only apply to one user story
at a time, with the acronym DOR for definition of ready.
The delivery of test datasets (TDS) is not limited to the provision of files or
databases with information usable by the component, product or system. This also
includes – for the applications, components, products or systems with which the
component, product or system under test interacts – a check of the consistency and
synchronization of the data with each other. It will be necessary to ensure that the
interfaces are correctly described, defined and implemented.
The design of coherent and complete datasets is a difficult task requiring a good
knowledge of the entire information system and the interfaces between the
component, product or system under test on the one hand and all the other systems
of the test environment on the other hand. Some components, products or systems
may be missing and replaced by “stubs” that will simulate the missing elements. In
this case, it is necessary to manage these “stubs” with the same rigor as if they were
real components (e.g. evolution of versions, data, etc.).
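Managing a stub with the same rigor as a real component can be as simple as giving it a name, a version and a controlled set of canned replies; this sketch assumes such a minimal shape (the request/response names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Stub:
    """A stand-in for a missing system, version-managed like a real component."""
    name: str
    version: str
    responses: dict = field(default_factory=dict)  # request -> canned reply

    def call(self, request):
        """Return the canned reply, failing loudly on an unknown request."""
        if request not in self.responses:
            raise KeyError(f"{self.name} v{self.version}: no reply for {request!r}")
        return self.responses[request]
```

Because the stub carries a version, its evolutions can be tracked in configuration management alongside those of the real components it replaces.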
A Go-NoGo meeting is used to analyze the risks associated with moving to the
next step in a process of designing and deploying a component, product, system or
system-of-systems, and to decide whether to proceed to the next step.
In an Agile environment, the concept of Go-NoGo and TRB is detailed under the
concept of DOD (definition of done) for each of the design actions.
Another aspect to consider is the need for test automation, given (1) the continual increase in the number of tests to be executed, which lengthens test execution time, and (2) the need to ensure that test classes embedded in the software (as with TDD and BDD) are correctly removed from the versions used in integration tests and in system tests.
According to Kim et al. (2016), in companies like Amazon and Google, the
majority of teams practice continuous delivery and some practice continuous
deployment. There is wide variation in how to perform continuous deployment.
Monitoring test projects requires monitoring the progress of each of the test
activities for each of the systems of the system-of-systems, as well as on each of the
test environments of each of the test levels of each of these systems. It is therefore
important that the progress information of each test level is aggregated and
summarized for each system and that the test progress information of each system is
aggregated at the system-of-systems level. This involves defining the elements that
must be measured (the progress), against which benchmark they must be measured
(the reference) and identifying the impacts (dependencies) that this can generate.
Reporting of similar indicators from each of the systems will facilitate
understanding. Automated information feedback will facilitate information retrieval.
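Such a roll-up of similar indicators can be sketched as a simple aggregation, from test levels to a system (and, applied again, from systems to the system-of-systems); the counter names are assumptions:

```python
def aggregate_progress(levels):
    """Roll up executed/planned test counts across test levels (or systems)."""
    executed = sum(v["executed"] for v in levels.values())
    planned = sum(v["planned"] for v in levels.values())
    pct = round(100 * executed / planned, 1) if planned else 0.0
    return {"executed": executed, "planned": planned, "progress_pct": pct}
```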
Systems-of-systems projects are subject to more risk than other systems, in that they may inherit upstream-level risks, and tolerance for risk may vary by organization and by delivered product. Figure 1.1 shows that the further we advance in the design and production of components by the various organizations, the more risks accumulate, and organizations with a low risk tolerance will be more strongly impacted than others.
In Figure 1.2, we can identify that an organization will be impacted by all the
risks it can inherit from upstream organizations and that it will impose risks on all
downstream organizations.
can impact the final delivery of the component, product or system, or even the
system-of-systems.
– Side effects may appear on other components, so it will be necessary to retest
all components each time a component update is delivered. This solution can be
limited to the components interacting directly with the modified component(s) or
extend to the entire system-of-systems, and it is recommended to automate it.
– The interfaces between components may not be developed simultaneously, and the tests of these interfaces may therefore be delayed.
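One way to automate the selection described above – retesting either the components interacting directly with the modified component(s) or the entire system-of-systems – is a neighborhood query on an interaction graph (the graph below is hypothetical):

```python
def retest_set(interactions, modified, whole_sos=False):
    """Select components to retest: direct neighbors of modified ones, or everything."""
    if whole_sos:
        return set(interactions)
    selected = set(modified)
    for comp, neighbors in interactions.items():
        if comp in modified:
            selected.update(neighbors)
        elif modified & set(neighbors):
            selected.add(comp)
    return selected
```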
It is not possible to envisage retesting all the combinations of data and actions of
the components of a level of a system-of-systems; this would generate a workload
disproportionate to the expected benefits. One solution is to verify that the design
and test processes have been correctly carried out, that the proofs of execution are
available and that the test activities – static and dynamic – have correctly covered
the objectives. These verification activities are the responsibility of the quality
assurance teams and are mainly based on available evidence (paper documentation,
execution logs, anomaly dashboards, etc.).
relationships with each other, others and to the outside. This information is grouped
into what NASA calls CRM (Crew Resource Management). Developed in the
1970s–1980s, CRM is a mature discipline that applies to complex projects and is
ideal for decision-making processes in project management.
It is essential to:
– recognize the existence of a problem;
– define what the problem is;
– identify probable solutions;
– take the appropriate actions to implement a solution.
While CRM is mainly used where human error can have devastating effects, it is important to take into account the lessons that CRM can bring us in the implementation of decision-making processes. Contrary to a common view, people
with responsibilities (managers and decision-makers) or with the most experience
are sometimes blinded by their vision of a solution and do not take into account
alternative solutions. Among the points to keep in mind is communication between
the different members of the team, mutual respect – which will entail listening to the
information provided – and then measuring the results of the solutions implemented
in order to ensure their effectiveness. Team members can all communicate important
information that will help the project succeed.
The specialization of the members of the project team, the confidence that we
have in their skills and the confidence that they have in their experience, the
management methods and the constraints – contractual or otherwise – mean that the
decision-making method and the decisions made can be negatively impacted in the
absence of this CRM technique. This CRM technique has been successfully
implemented in aeronautics and space, and its lessons should be used successfully in
complex projects.
2
Testing Process
These processes will all be involved to some degree in the testing processes. Indeed, the test processes will break down the requirements – whether or not they are defined in documents describing the conformity needs – and the way in which these requirements will be demonstrated (type of IADT proof); they will split the types of demonstration according to the test and integration levels (system, subsystem, sub-subsystem, component, etc.) and into static types (Analysis and Inspection, for static checks during design) or dynamic types (Demonstration and Test, during the integration and testing levels of subsystems, systems and systems-of-systems). Similarly, the activities of the test process will report information and progress metrics to project management (the CMMI PMC, Project Monitoring and Control, process) and will be impacted by the decisions descending from this management.
The processes described in this chapter apply to one test level and should be repeated at each test level, for each piece of software or each element containing software. Any modification in the interfaces and/or the performance of a component interacting with the component(s) under test will require an analysis of the impacts and, if necessary, an adaptation of the test activities (including the tests themselves) and of the evidence to be provided to show the conformity of the component (or system, subsystem, equipment or software) to its requirements. Each test level should coordinate with the other levels to limit the execution of tests on the same requirements.
An additional process can be defined: the review process, which can be carried
out several times on a test level, on the one hand, on the input deliverables, and on
the other hand, on the deliverables produced by each of the processes of the level.
Review activities can occur within each defined test process.
The proposed test processes are applicable regardless of the development mode
(Agile or sequential). In the case of an Agile development mode, the testing
processes must be repeated for each sprint and for each level of integration in a
system-of-systems.
The processes must complement each other and – even if they may partially
overlap – it must be ensured that the processes are completed successfully.
2.1. Organization
Objectives:
– develop and manage organizational needs, in accordance with the company’s
test policy and the test strategies of higher levels;
– define the players at the level, their responsibilities and organizations;
– define deliverables and milestones;
– define quality targets (SLA, KPI, maximum failure rate, etc.);
– ensure that the objectives of the test strategy are addressed;
– define a standard RACI matrix.
Actor(s):
– CPI (R+A), CPU/CPO (I), developers (C+I);
– an experienced “test manager”, acting as pilot of the test project (R).
Prerequisites/inputs:
– calendar and budgetary constraints defined for the level;
– actors and subcontractors envisaged or selected;
– repository of lessons learned from previous projects.
Deliverables/outputs:
– organization of level tests;
– high-level WBS with the main tasks to be carried out;
Entry criteria:
– beginning of the organization phase.
Exit criteria:
– approved organizational document (ideally a reduced number of pages).
Indicators:
1) efficiency: writing effort;
2) coverage: traceability to the quality characteristics identified in the project test
strategy.
Points of attention:
– ensure that the actors and meeting points (milestones and level of reporting)
are well defined.
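The standard RACI matrix that this organization process defines can be sketched as a simple mapping; the tasks and roles below are invented for illustration, not taken from the book:

```python
# R = Responsible, A = Accountable, C = Consulted, I = Informed
raci = {
    "write test plan":    {"test manager": "RA", "developers": "C", "product owner": "I"},
    "prepare test data":  {"test manager": "A", "testers": "R", "developers": "C"},
    "execute test cases": {"test manager": "AI", "testers": "R"},
}

def role_for(task, actor):
    """Look up an actor's RACI letters for a task ('' if not involved)."""
    return raci.get(task, {}).get(actor, "")
```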
2.2. Planning
Objective:
– plan test activities for the project, level, iteration or sprint considering existing
issues, risk levels, constraints and objectives for testing;
– define the tasks (durations, objectives, incoming and outgoing, responsibilities,
etc.) and sequencing;
– define the exit criteria (desired quality level) for the level;
– identify the prerequisites, resources (environment, personnel, tools, etc.)
necessary;
– define measurement indicators and frequencies, as well as reporting.
Actor(s):
– CPI (R+A), CPU/CPO (I), developers (C+I);
– an experienced “test manager”, having the role of test project manager (R);
– testers (C+I).
Prerequisites/inputs:
– information on the volume, workload and deadlines of the project;
– information on available environments and interfaces;
– objectives and scope of testing activities.
– REAL and project WBS defined in the investigation phase;
– lessons learned from previous projects (repository of lessons learned).
Deliverables/outputs:
– master test plan, level test plan(s);
– level WBS (or TBS, for Test Breakdown Structure), detailing – for the applicable test level(s) – the tasks to be performed;
– detailed Gantt chart of the test projects – for each level – with dependencies;
– initial definition of test environments.
Entry criteria:
– start of the investigation phase.
Exit criteria:
– test plan approved; all sections of the applicable test plan template are completed.
Indicators:
1) efficiency: writing effort vs. completeness and size of the deliverables provided;
2) coverage: coverage of the quality characteristics selected in the project test strategy.
Points of attention:
– ensure that test data (for interface tests, environment settings, etc.) will be well defined and provided in a timely manner;
– collect lessons learned from previous projects.
2.3. Monitoring and control
Objective:
– throughout the project: adapt the test plan, processes and actions, based on the
hazards and indicators reported by the test activities, so as to enable the project to
achieve its objectives;
– identify changes in risks, implement mitigation actions;
– provide periodic reporting to the CoPil (steering committee) and the CoSuiv (follow-up committee);
– escalate issues if needed.
Actor(s):
– CPI (A+I), CPU/CPO (I), developers (I);
– test manager with a test project manager role (R);
– testers (C+I) [provide indicators];
– CoPil, CoSuiv (I).
Prerequisites/inputs:
– risk analysis, level WBS, project and level test plan.
Deliverables/outputs:
– periodic indicators and reporting for the CoPil and CoSuiv;
– updated risk analysis;
– modification of the test plan and/or activities to allow the achievement of the
“project” objectives.
Entry criteria:
– project WBS, level WBS.
Exit criteria:
– end of the project, including end of the software warranty period.
Indicators:
– dependent on testing activities.
2.4. Analyze
Objective:
– analyze the repository of information (requirements, user stories, etc. usable
for testing) to identify the test conditions to be covered and the test techniques to be
used. A risk or requirement can be covered by more than one test condition. A test
condition is something – a behavior or a combination of conditions – that may be
interesting or useful to test.
Actor(s):
– testers, test analysts, technical test analysts.
Prerequisites/inputs:
– initial definition of test environments;
– requirements and user stories (depending on the development method);
– acceptance criteria (if available);
– analysis of prioritized project risks;
– level test plan with the characteristics to be covered, the level test environment.
Deliverables/outputs:
– detailed definition of the level test environment;
– test file;
– prioritized test conditions;
– requirements/risks traceability matrix – test conditions.
Entry criteria:
– validated and prioritized requirements;
– risk analysis.
Exit criteria:
– each requirement is covered by the required number of test conditions
(depending on the RPN of the requirement).
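This exit criterion can be sketched as a check against a mapping from risk priority number (RPN) bands to a required number of test conditions; the thresholds below are invented for illustration:

```python
def required_conditions(rpn):
    """Map a requirement's risk priority number (RPN) to required test conditions."""
    if rpn >= 200:
        return 3
    if rpn >= 100:
        return 2
    return 1

def exit_criterion_met(requirements):
    """True when every requirement has at least the required number of conditions.

    `requirements` maps a requirement ID to (rpn, conditions_covering_it).
    """
    return all(covered >= required_conditions(rpn)
               for rpn, covered in requirements.values())
```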
Indicators:
1) Efficiency:
- number of prioritized test conditions designed,
- updated traceability matrix for extension to test conditions.
2) Coverage:
- percentage of requirements and/or risks covered by one or more test conditions designed.
2.5. Design
Objective:
– convert test conditions into test cases and identify test data to be used to cover
the various combinations. A test condition can be converted into one or more test
cases.
Actor(s):
– testers, test technicians.
Prerequisites/inputs:
– prioritized test conditions;
– requirements/risks traceability matrix – test conditions.
Deliverables/outputs:
– prioritized test cases, definition of test data for each test case (input and
expected);
– prioritized test procedures, taking into account the execution prerequisites;
– requirements/risks traceability matrix – test conditions – test cases.
Entry criteria:
– test conditions defined and prioritized;
– risk analysis.
Exit criteria:
– each test condition is covered by one or more test cases (according to the
RPN);
– partitions and typologies of test data defined for each test;
– defined test environments.
Indicators:
1) Efficiency:
- number of prioritized test cases designed,
- updated traceability matrix for extension to test cases.
2) Coverage:
- percentage of requirements and/or risks covered by one or more test cases
designed.
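The traceability matrix extended from requirements to test conditions to test cases can be sketched as nested mappings (the IDs are illustrative assumptions):

```python
# requirement -> test condition -> designed test cases
traceability = {
    "REQ-10": {"COND-1": ["CASE-1a", "CASE-1b"], "COND-2": ["CASE-2a"]},
    "REQ-11": {"COND-3": []},  # condition not yet covered by any test case
}

def uncovered_conditions(matrix):
    """List (requirement, condition) pairs with no designed test case."""
    return [(req, cond)
            for req, conds in matrix.items()
            for cond, cases in conds.items() if not cases]
```

The same structure supports the exit criterion of this process: the design activity is complete when `uncovered_conditions` returns an empty list for the conditions in scope.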
2.6. Implementation
Objective:
– finely describe – if necessary – the test cases;
– define the test data for each of the test cases generated by the test design activity;
– automate the test cases that need to be automated;
– set up the test environments.
Actor(s):
– testers, test automators, data and systems administrators.
Prerequisites/inputs:
– prioritized test cases;
– risk analysis.
Deliverables/outputs:
– automated or non-automated test scripts, test scenarios, test procedures;
Entry criteria:
– prioritized test cases, defined with their data partitions.
Exit criteria:
– test data defined for each test;
– test environments defined, implemented and verified.
Indicators:
1) Efficiency:
- number of prioritized test cases designed with test data,
- updated traceability matrix for extension to test data,
- number of test environments defined, implemented and verified vs. number
of environments planned in the test strategy.
2) Coverage:
- percentage of test environments ready and delivered,
- coverage of requirements and/or risks by one or more test cases with data,
- coverage of requirements and/or risks by one or more automated test cases.
2.7. Execution
Objective:
– execute the test cases on the elements of the application to be tested, as delivered by development;
– identify defects and write anomaly sheets;
– report monitoring and coverage information.
Actor(s):
– testers, test technicians.
Prerequisites/inputs:
– system to be tested is available and managed in delivery (configuration
management), accompanied by a delivery sheet.
Deliverables/outputs:
– anomaly sheets filled in for any identified defect;
– test logs.
Entry criteria:
– testing environment and resources (including testing tools) available for the
level, and tested;
– anomaly management tool available and installed;
– test cases and test data available for the level;
– component or application to be tested available and managed in delivery
(configuration management);
– delivery sheet provided.
Exit criteria:
– coverage of all test cases for the level.
Indicators:
1) Efficiency:
- percentage of tests passed, skipped (not passed) and failed, by level of risk,
- percentage of test environment availability for test execution,
- test execution workload achieved vs. planned.
2) Coverage:
- percentage of requirements/risks tested with at least one remaining defect,
- percentage of requirements/risks tested without any defect remaining.
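The first efficiency indicator above – percentage of tests passed, skipped and failed – can be computed from a plain list of execution statuses (the status labels are assumptions):

```python
from collections import Counter

def execution_indicators(results):
    """Percentages of tests passed, skipped and failed from a list of statuses."""
    counts = Counter(results)
    total = len(results) or 1
    return {s: round(100 * counts[s] / total, 1)
            for s in ("passed", "skipped", "failed")}
```

In practice the same computation would be repeated per risk level, as the indicator requires.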
2.8. Evaluation
Objective:
– identify whether the test execution results show that the execution campaign
will be able to achieve the objectives;
– ensure that the acceptance criteria defined for the requirements or user stories
are met;
– if scope or quality changes impact testing, identify and select the mitigation
actions.
Actor(s):
– person responsible for project testing activities.
Prerequisites/inputs:
– definition of acceptance criteria;
– project load estimation data;
– actual usage data of project loads;
– progress data (coverage, deadlines, anomalies, etc.) of the project.
Deliverables/outputs:
– progress graphs, identification of trends;
– progress comments (identification of causes and proposals for mitigation).
Entry criteria:
– start of the project.
Exit criteria:
– end of the duration of each test task and of the test campaign;
– complete coverage achieved for the features or components to be tested.
Indicators:
1) Efficiency:
- identify the workload used, the anomalies identified – including their priority and criticality – as well as the level of coverage achieved, and compare these against the objectives defined during planning.
2) Coverage:
- all the activities planned for the test task or for the test campaign have been
carried out,
- to be defined based on the planned objectives and their achievement for each
requirement or user story.
Another random document with
no related content on Scribd:
time in the imagos of Pteronarcys (see p. 401). Although these
fossils are of such enormous antiquity, the tracheae can, M.
Brongniart says, be still perceived in these processes.
They are very depressed, that is, flat, Insects, with a large head,
which exhibits a great variety of shape; frequently it is provided in
front of the antennae with some peculiar tubercles called trabeculae,
which in some cases are mobile. The antennae are never large,
frequently very small; they consist of from three to five joints, and are
sometimes concealed in a cavity on the under side of the head.
Fig. 215.—Under-surface of head of Lipeurus heterographus. (After
Grosse.) ol, Labium; md, mandible; mx, maxilla; ul, labium.
The eyes are very rudimentary, and consist of only a small number
of isolated facets placed behind the antennae; sometimes they are
completely absent. The mouth parts are situated entirely on the
under-surface of the head and in a cavity. The upper lip is frequently
of remarkable form, as if it were a scraping instrument (ol, Fig. 215).
The mandibles are sharply toothed and apparently act as cutting
instruments. The maxillae have been described in the principal work
on the family[270] as possessing in some cases well-developed palpi.
According to Grosse[271] this is erroneous; the maxillae, he says, are
always destitute of palpi, and are of peculiar form, being each merely
a lobe of somewhat conical shape, furnished on one aspect with
hooks or setae. The under lip is peculiar, and apparently of very
different form in the two chief groups of Mallophaga. The large
mentum bears, in Liotheides (Fig. 216, B), on each side a four-
jointed palpus, the pair of palps being very widely separated; the
ligula is broad and undivided; on each side there is a paraglossa
bearing an oval process, and above this is a projection of the
hypopharynx. In Philopterides (Fig. 216, A) the palpi are absent, and
the parts of the lower lip are—with the exception of the paraglossae
—but little differentiated. The lingua (hypopharynx) in Mallophaga is
largely developed, and bears near the front a chitinous sclerite
corresponding with another placed in the epipharynx.
The testes and ovaries are of a simple nature. The former consist of
two or three capsules, each having a terminal thread; the vasa
deferentia are tortuous and of variable length; they lead into the
anterior part of the ejaculatory duct, where also opens the elongate
duct proceeding from the bicapsular vesicula seminalis; these
structures have been figured by Grosse[272] as well as by Giebel.
The ovaries consist of three to five short egg-tubes on each side; the
two oviducts combine to form a short common duct with which there
is connected a receptaculum seminis.
It has been stated by some writers that the mouth is truly of the
sucking kind, and that the Mallophaga feed on the blood of their
hosts. This is, however, erroneous; they eat the delicate portions of
the feathers of birds, and of mammals perhaps the young hair. Their
fertility is but small, and it is believed that in a state of nature they
are very rarely an annoyance to their hosts. The majority of the
known species live on birds; the forms that frequent mammals are
less varied and have been less studied; most of them have only one
claw to the feet (Fig. 220), while the greater portion of the avicolous
species have two claws.
The Liotheides are more active Insects, and leave their host after its
death to seek another. But the Philopterides do not do so, and die in
about three days after the death of their host. Possibly Mallophaga
may be transferred from one bird to another by means of the
parasitic two-winged flies that infest birds. The writer has
recorded[276] a case in which a specimen of one of these bird-flies
captured on the wing was found to have some Mallophaga attached
to it.
The Embiidae are one of the smallest families of Insects; not more
than twenty species are known from all parts of the world, and it is
probable that only a few hundred actually exist. They are small and
feeble Insects of unattractive appearance, and shrivel so much after
death as to render it difficult to ascertain their characters. They
require a warm climate. Hence it is not a matter for surprise that little
should be known about them.
Fig. 223.—Under-surface of Embia sp. Andalusia.
The wings in Embiidae are very peculiar; they are extremely flimsy,
and the nervures are ill-developed; stripes of a darker brownish
colour alternate with pallid spaces. We figure the anterior wing of
Oligotoma saundersii, after Wood-Mason; but should remark that the
neuration is really less definite than is shown in these figures; the
lower one represents Wood-Mason's interpretation of the nervures.
He considers[278] that the brown bands "mark the original courses of
veins which have long since disappeared." A similar view is taken by
Redtenbacher,[279] but at present it rests on no positive evidence.
CHAPTER XVI
The term White Ants has been so long in use for the Termitidae that
it appears almost hopeless to replace it in popular use by another
word. It has, however, always given rise to a great deal of confusion
by leading people to suppose that white ants differ chiefly from
ordinary ants by their colour. This is a most erroneous idea. There
are scarcely any two divisions of Insects more different than the
white ants and the ordinary ants. The two groups have little in
common except that both have a social life, and that a very
interesting analogy exists between the forms of the workers and
soldiers of these two dissimilar Orders of Insects, giving rise to
numerous analogies of habits. The word Termites—pronounced as
two syllables—is a less objectionable name for these Insects than
white ants.
The wings of Termitidae are not like those of any other Insects; their
neuration is very simple, but nevertheless the wings of the different
forms exhibit great differences in the extent to which they are made
up of the various fields. This is shown in Fig. 228, where the
homologous nervures are numbered according to the systems of
both Hagen and Redtenbacher. The area, VII, that forms the larger
part of the wing in C, corresponds to the small portion at the base of
the wing in B. The most remarkable feature of the wing is, however,
its division into two parts by a suture or line of weakness near the
base, as shown in Fig. 225. The wings are used only for a single
flight, and are then shed by detachment at this suture; the small
basal portion of each of the four wings is horny and remains
attached to the Insect, serving as a protection to the dorsal surface
of the thorax.
The nature of the suture that enables the Termites to cast their wings
with such ease after swarming is not yet understood. There are no
true transverse veinlets or nervules in Termites. Redtenbacher
suggests[284] that the transverse division of the wing at its base, as
shown in Fig. 225, along which the separation of the wing occurs at
its falling off, may have arisen from a coalescence of the subcostal
vein with the eighth concave vein of such a wing as that of Blattidae.
The same authority also informs us that the only point of
resemblance between the wings of Termitidae and those of Psocidae
is that both have an unusually small number of concave veins.
The information that exists as to the internal anatomy of Termites is
imperfect, and refers, moreover, to different species; it would appear
that considerable diversity exists in many respects, but on this point
it would be premature to generalise. What we know as to the
respiratory system is chiefly due to F. Müller.[285] The number of
spiracles is ten; Hagen says three thoracic and seven abdominal,
Müller two thoracic and eight abdominal. In fertile queens there
usually exist only six abdominal stigmata. There is good reason for
supposing that the respiratory system undergoes much change
correlative with the development of the individual; it has been
suggested that the supply of tracheae to the sexual organs is
deficient where there is arrest of development of the latter.