Advanced Software Technologies for Post-Peta Scale Computing: The Japanese Post-Peta CREST Research Project. Mitsuhisa Sato
Mitsuhisa Sato, Editor
Advanced Software Technologies for Post-Peta Scale Computing
The Japanese Post-Peta CREST Research Project
Editor
Mitsuhisa Sato
RIKEN Center for Computational Science
Kobe, Japan
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface
In 2010, the Japan Science and Technology Agency (JST) initiated a research
area titled “Development of System Software Technologies for Post-Peta Scale
High Performance Computing” as part of its Strategic Basic Research Program
(CREST). The project was named the “JST CREST Post-Petascale Software Project.”
The research area aimed at developing system software technologies, and related
systems, for high-performance computing systems, including the next generations
of the Japanese flagship system, the K computer, which was under development at
the time. Research and development were conducted on system software that enables
us to exploit maximum efficiency and performance from supercomputers composed of
general-purpose many-core processors as well as accelerators such as GPUs and
FPGAs. From 2010 to 2012, 14 research teams were selected, and each team
conducted 5 years of research. Many Japanese researchers and graduate students
in HPC have participated in these research teams.
This book describes the major outcomes obtained by research teams of the JST
CREST post-petascale software project.
Advanced system software is the key technology for the post-petascale and
exascale high-performance computing systems that will be developed in the next
decade. I hope that the technologies developed in the JST CREST post-petascale
software project will serve as a bridge to exascale computing and beyond and
advance future computational science.
Mitsuhisa Sato
M. Sato ()
RIKEN Center for Computational Science, Kobe, Japan
e-mail: msato@riken.jp
Japan has already installed several petascale computers, including the K com-
puter at RIKEN, and is now exploring the evolution toward future exascale
systems. As Moore's law reaches its end, the number of cores per chip is
increasing, and specialized hardware is being used to accelerate specific types
of applications. The number of processors and the scale of the interconnection
network are also increasing, so we must face new programming problems.
Post-petascale systems and future exascale computers are expected to have
ultra-large-scale, high-performance architectures with nodes of many-core
processors and accelerators. To manage these ultra-large-scale parallel systems,
we require new, sophisticated system software technologies that allow us to
manage complex parallel computations over huge distributed data, minimize
energy consumption, and provide fault resilience.
The JST CREST post-petascale software project was launched to establish
software technologies for extreme-performance computing beyond petascale, on
the road to exascale computing. The ability to manage and program these future
high-performance systems efficiently is considered a strategic and important
issue by national research agencies around the world.
Calls for project proposals were issued every year from 2010 to 2012, and
14 projects were ultimately adopted: 5 in the first year, 5 in the second year,
and 4 in the third year, as a result of peer reviews of the submitted proposals
by the advisory committee. The duration of each project was 5.5 years. The total
budget from 2010 to 2017 was about 60 M USD. Table 1.1 shows the adopted teams
of the JST CREST post-petascale software project.
The advisory committee, organized to advise the research directions of the
project teams, consisted of the following members:
• Mutsumi Aoyagi, Professor, Research Institute for Information Technology,
Kyushu University (assumption of office period: Oct. 2010–Dec. 2014)
• Yutaka Ishikawa, Project Leader, Flagship 2020 Project, RIKEN Advanced
Institute for Computational Science
• Kouichi Kumon, Member of the board, Fujitsu Laboratories Ltd.
• Kenji Kono, Professor, Keio University
• Hiroaki Kobayashi, Director, Cyberscience Center, Tohoku University
• Mitsuhisa Sato, Professor, Department of Computer Science, University of
Tsukuba (assumption of office period: Oct. 2010–Mar. 2015)
• Shinji Shimojo, Professor, Cybermedia Center, Osaka University
• Keiko Takahashi, Director, Center for Earth Information Science and Technol-
ogy, Japan Agency for Marine-Earth Science and Technology
• Yaoko Nakagawa, Senior Project Manager, Center for Technology Innovation-
Information and Telecommunications, Research & Development Group, Hitachi
Ltd.
• Hiroshi Nakashima, Professor and Director, Academic Center for Computing and
Media Studies, Kyoto University
• Junichiro Makino, Professor, Department of Planetology, Graduate School of
Science, Kobe University
• Satoshi Matsuoka, Professor, Global Scientific Information and Computing
Center, Tokyo Institute of Technology
To carry out the projects strategically, the policies and goals of management in
our research area were defined as follows:
(1) Research and development of highly functional and reliable system software
for sustainable high-performance computing technologies to solve social and
scientific problems
Numerical simulation and data analysis utilizing ultra-large-scale computational
resources and storage play a dramatically increasing role in modern science and
technology. In response, the USA, Europe, China, and Japan are racing to develop
the next generation of supercomputers: exascale systems capable of a million
trillion calculations a second, by around 2020. In Japan, the FLAGSHIP2020
project has been launched to develop the next flagship system following the
K computer.
Under such circumstances, in order to retain the ability to solve social and
scientific problems by making full use of supercomputers, it is extremely
important not only to have technologies to execute large-scale programs
(simulation programs, data analysis programs, etc.) efficiently on current
supercomputers but also to conduct research on software technologies that bring
out the full potential of the next generation of high-performance systems.
The sustainable progress of high-performance computing software is essential for
the sustainable contribution of high-performance computing to scientific and
technological advancement and innovation.
In our research area, we developed high-performance, highly functional, and
highly reliable system software, including programming languages, compilers,
runtime systems, operating systems, communication middleware, file systems,
numerical libraries, job management systems, and ultra-large-scale data
processing software. In addition, from the viewpoint of the sustainable
contribution of high-performance computing mentioned above, the work should not
be limited to academic research demonstrating a simple novel idea and its
feasibility, but should produce actually usable software. The teams were
requested to have the developed software used in the related communities, and
this was also emphasized in the evaluation.
(2) Exploration of the direction of future high-performance computing research
and development
At the time of the calls for proposals, it was not clear what kinds of
technologies would be used for future next-generation high-performance computing
systems: general-purpose many-core processors or specialized hardware such as
GPUs. We therefore requested that each proposal indicate the target system of
the research and how the proposed software would execute efficiently on that
target. The accepted research teams were expected to publish their research
results as open-source software.
As the duration of each adopted project was 5 years, at the intermediate
evaluation in the third year each team was requested to demonstrate that the
developed software would be usable by the end of the project. Through this
request, we expected that the developed software would indicate the direction of
the system architecture of future high-performance computing systems and
actually be used in those systems, including the next-generation system after
the K computer and GPU-based accelerator systems. For this purpose, we aimed for
international collaboration and industry-academia collaboration while sharing
information with overseas researchers and companies.
Furthermore, as each research team approached completion (from 2016), we
expected that the practical and usable software developed by the teams would be
used to enable advanced large-scale simulation and valuable prediction using
large-scale data in a wide range of science and technology fields.
(3) Fostering the next-generation leaders in the field of high-performance
computing
In order to sustain the development of high-performance computing systems and
promote the direction of future high-performance computing research and
development, human resources who can carry this work are indispensable. We
actively appointed promising young researchers in the selection of the projects
and took into account the development of the young research leaders who will be
responsible for research and development in the next generation of
high-performance computing technology in Japan, by emphasizing their autonomy in
planning, team building, and management.
In this section, the research topics and some highlights of the adopted projects
are described briefly. As mentioned above, the research topics of our research
area cover system software such as programming languages, compilers, runtime
systems, operating systems, communication middleware, and file systems, as well
as application development support software and ultra-large-scale data
processing systems. Several teams carried out research on programming models and
frameworks for post-petascale systems.
Maruyama’s team (Chap. 5) has been working on several programming frameworks and
libraries to ease programming for the next generation of high-performance
systems. They developed Daino, a high-level framework for parallel and efficient
AMR on GPUs, and investigated the effectiveness of high-level programming
techniques such as Gridtools for global climate model simulation.
Endo’s team (Chap. 12) worked on problems arising from the recent trend toward
deeper memory hierarchies. For exascale high-performance systems, the “memory
wall” problem will become even more severe. The team pursued this problem via a
co-design approach spanning application algorithms, system software, and
architecture, and developed several libraries that make it easy to use systems
with deep memory hierarchies.
Chiba’s team (Chap. 8) focused on productive programming models for post-
petascale systems. A single general programming language or framework that
covers all subjects is not feasible for post-petascale supercomputing. Their
goal was to apply modern techniques from software engineering and the
theoretical foundations of programming languages, such as software
modularization, to high-performance computing.
Takizawa’s team (Chap. 9) proposed the Xevolver framework, which takes an
evolutionary approach to the incremental migration of existing software
resources to new systems. The goal is to establish an effective migration path
to new algorithms, implementation schemes, and programming environments for
massively parallel and heterogeneous systems in the upcoming extreme-scale
computing era.
Boku’s team (Chap. 15) has been working on TCA (tightly coupled accelerators),
which provides short-latency communication among GPUs across nodes to achieve
strong scalability on next-generation accelerated computing, and on its
programming model. They developed a prototype system named PEACH2, implemented
with an FPGA for design flexibility and suitability for the PCIe interface. As
an extension, they proposed a more aggressive solution named AiS (Accelerator in
Switch) to exploit the high potential of recent FPGAs. They also addressed the
low productivity of programming with MPI plus CUDA.
Shioya’s team (Chap. 10) has been developing open-source software called the
ADVENTURE system, a general-purpose parallel finite element analysis system that
can simulate large-scale analysis models on supercomputers such as the Earth
Simulator or the K computer. In the ADVENTURE system, HDDM (hierarchical domain
decomposition method), a very effective technique for large-scale analysis, was
developed. They aimed to develop a numerical library based on HDDM, extended to
pre- and post-processing parts, including mesh generation and visualization of
large-scale data, for post-petascale simulation.
Fujisawa’s team (Chap. 11) has been developing advanced computing and
optimization infrastructures for extremely large-scale graphs on post-petascale
supercomputers. Large-scale graph analysis has attracted significant attention
as a new application of next-generation supercomputers; it is, however,
extremely difficult to realize high-speed graph processing in various
application fields using previous methods, which motivated this work.
Noda’s team (Chap. 14) has been working on Project CASSIA (Comprehensive
Architecture of Social Simulation for Inclusive Analysis), which aims to develop
a framework to administer the exhaustive execution of large-scale multiagent
simulations for analyzing socially interactive systems. The framework provides
an engineering environment to design and synthesize social systems such as
traffic, economies, and politics.
The JST CREST post-petascale project ended at the end of March 2018. A final
evaluation was then conducted, and the overall results were rated “excellent” by
the evaluation committee.
As planned at the beginning of the project, the goals described in Sect. 1.2
have been achieved.
Regarding research and development of highly functional and reliable system
software for sustainable high-performance computing technologies, first of all,
the researchers of each team published many excellent technical papers and
valuable software, as well as research presentations at prominent academic
venues. Furthermore, using the developed software, large-scale applications were
executed in many practical fields, from tsunami and weather simulation to graph
analysis and economic simulation, resulting in valuable contributions to solving
social and scientific problems. In particular, the results demonstrated not only
conventional large-scale numerical simulation but also the application of
high-performance computing to new and important fields such as big data analysis
and social simulation. In the teams working close to real applications,
collaboration and joint research with industry were actively carried out.
“ppOpen-HPC [1, 2]” is an open-source infrastructure for the development and
execution of optimized and reliable simulation code on post-petascale (pp)
parallel computers based on many-core architectures; it consists of various
types of libraries covering general procedures for scientific computation.
Source code developed on a PC with a single processor is linked with these
libraries, and the generated parallel code is optimized for post-petascale
systems. The target post-petascale systems are many-core-based systems, such as
the Oakforest-PACS system operated by JCAHPC [3]. ppOpen-HPC is part of a 5-year
project (FY2011–2015) spawned by the “Development of System Software
Technologies for Post-petascale High-Performance Computing [4]” program funded
by JST CREST.
The framework covers various types of procedures for scientific computations,
such as parallel I/O of datasets, matrix assembly, linear solvers with practical
and scalable preconditioners, visualization, adaptive mesh refinement, and
dynamic load balancing, in various types of computational models, such as FEM
(finite element method), FDM (finite difference method), FVM (finite volume
method), BEM (boundary element method), and DEM (discrete element method).
Automatic tuning (AT) technology enables the automatic generation of libraries
and applications optimized for various types of environments. We released an
updated version of ppOpen-HPC as open-source software every November
(2012–2015); it is available at the ppOpen-HPC home page [2].
In 2016, the ppOpen-HPC team joined the ESSEX-II (Equipping Sparse Solvers for
Exascale) project (lead P.I. Professor Gerhard Wellein, University of
Erlangen-Nuremberg) [5], funded by JST CREST and the German DFG Priority
Programme 1648 “Software for Exascale Computing” (SPPEXA) [6] under the Japan
(JST)–Germany (DFG) collaboration until FY2018. In ESSEX-II, we develop
pK-Open-HPC (an extended version of ppOpen-HPC, a framework for exa-feasible
applications), preconditioned iterative solvers for quantum sciences, and a
framework for automatic tuning (AT) with a performance model.
Original ppOpen-HPC includes the following four components (Fig. 2.1):
• ppOpen-APPL Frameworks for development of applications by FEM, FDM,
FVM, BEM, and DEM (Fig. 2.2) [1, 2]
[Fig. 2.1: ppOpen-HPC overview: a user’s program sits on frameworks for
application development (ppOpen-APPL/pK-Open-FVM, covering FEM, FDM, FVM, BEM,
and DEM) and automatic tuning (ppOpen-AT/pK-Open-AT, static and dynamic).]
[Fig. 2.2: The BEM (boundary element method) and DEM (discrete element method)
components.]
The adaptive mesh refinement (AMR) technique [7, 8] can provide efficient
numerical calculation by generating hierarchical layers with different cell
sizes in local regions where high resolution is needed. It is, however,
generally difficult to implement AMR in conventional simulation codes
discretized by the finite volume method, the finite difference method, and so
on. To overcome this problem, a block-based AMR framework, pK-Open-FVM, has been
developed, with which the AMR technique can be relatively easily ported to
generic simulation programs that employ a uniform cell system. In this
framework, the AMR technique is applied to a block-structured region consisting
of a fixed number of cells, as shown in Fig. 2.3. A generic simulation program
using a uniform cell size can be implemented in each block of the AMR framework
[9]. Once high resolution is needed in a local region, the corresponding
block-structured region is divided into eight (in the three-dimensional case),
and new block-structured regions with a uniform cell size half that of the
original are generated. In the AMR framework, the simulation domain is divided
into multiple sub-domains, which are assigned to a number of processes for
parallel computing using MPI. A sub-domain is composed of multiple
block-structured regions, each of which has a fixed number of grids. When high
resolution is required in a certain region of the sub-domain, a block-structured
region with refined cells, called a child block, is locally created.
As applications of the AMR framework, (1) Vlasov-Poisson plasma simulations,
composed of one uniform direction and one AMR direction, (2) AMR plasma particle
simulations for the development of reactive plasma deposition equipment, and
(3) particle-based sugarscape model simulations are introduced below.
different components of velocity generate a vortex and combine into a single
electron beam as time proceeds. As the criterion for mesh refinement, the
gradient of velocity is used: when the adaptive-mesh direction (namely,
velocity) has a large gradient, mesh refinement occurs, and a high-resolution
simulation can be conducted. The lower right of Fig. 2.4 shows an enlarged view
of the vortex; a Lv. 3 block is generated where the velocity gradient is large.
The execution time of the AMR simulation is reduced by 30% compared to a
simulation with a uniform mesh in the velocity direction.
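The gradient criterion above can be made concrete with a short sketch (an illustrative function of our own, not code from the project): interior cells where the absolute finite-difference gradient of the field along the adaptive direction exceeds a threshold are flagged for refinement.

```python
# Illustrative sketch of a gradient-based refinement criterion:
# flag a cell for refinement when the absolute centered-difference
# gradient of the field along the adaptive (velocity) direction
# exceeds a user-chosen threshold.

def refine_flags(field, threshold):
    """Return one boolean per interior cell: True = refine here."""
    flags = []
    for i in range(1, len(field) - 1):
        # centered finite difference (unit cell spacing assumed)
        grad = (field[i + 1] - field[i - 1]) / 2.0
        flags.append(abs(grad) > threshold)
    return flags

# A step-like velocity distribution: only the steep region is flagged.
f = [0.0, 0.0, 0.0, 1.0, 2.0, 2.0, 2.0]
print(refine_flags(f, threshold=0.4))
# → [False, True, True, True, False]
```

In the actual framework the flag would be evaluated per block rather than per cell, triggering the octant subdivision described earlier.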
A plasma particle/fluid hybrid simulation code for the development of reactive
plasma deposition (RPD) equipment (Fig. 2.5) has been developed using
pK-Open-FVM [10]. In this simulation, the interaction of electromagnetic fields
and plasma particles is simulated by calculating the electromagnetic fields
defined on the computational mesh and the mesh-free plasma particles
simultaneously. In such a plasma particle simulation, several hundred particles
per mesh cell are generally needed, because the statistical error depends on the
number of particles and is reduced as that number grows.
In a conventional simulation with a uniform mesh, the statistical error
increases with the variation of plasma density across the computational domain.
To reduce the statistical error, many more particles are needed, and the
execution time of the simulation increases accordingly. By using the AMR
framework, on the other hand, the cell size can be adjusted according to the
number of particles (plasma density) (Fig. 2.6): cells become fine in
high-density regions and coarse in low-density regions. Furthermore, Fig. 2.7
compares the number of particles used in simulations with and without the AMR
technique. As a result, the total number of particles and the execution time can
be reduced by using AMR. A simulation using indium tin oxide as the material
source, which could not be conducted with the conventional code, can reproduce
an experimental result (the dependency on coil current) quantitatively.
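The idea of matching cell size to particle count can be sketched as follows. This is an illustrative heuristic of our own, not the project's actual algorithm: given a target number of particles per cell, a block deepens its refinement level until the expected particles per cell approaches the target.

```python
# Illustrative heuristic: pick a per-block refinement level so that the
# expected number of particles per cell approaches a target value.
# Each extra level halves the cell size, i.e. multiplies the number of
# cells in a 3-D block by 8 and divides particles-per-cell by 8.

def choose_level(n_particles, n_cells_base, target_per_cell, max_level=4):
    level = 0
    per_cell = n_particles / n_cells_base
    # refine while cells are overcrowded and refinement is still allowed
    while per_cell > target_per_cell and level < max_level:
        level += 1
        per_cell /= 8.0   # 2x refinement in 3-D -> 8x more cells
    return level

# Dense block: many particles -> deeper refinement.
print(choose_level(n_particles=4_096_000, n_cells_base=512,
                   target_per_cell=100))   # → 3
# Sparse block: few particles -> stays coarse.
print(choose_level(n_particles=40_000, n_cells_base=512,
                   target_per_cell=100))   # → 0
```

The effect matches the text: fine cells where plasma density is high, coarse cells where it is low, so the total particle count stays bounded.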
Test simulations adopting the sugarscape model, which was proposed for
simulating an artificial society using many agents representing the inhabitants
of a certain area, were conducted [11]. The inhabitants are treated as a bunch
of particles, and a sugar amount is assigned to each grid cell as the
environment.
2.3.1 Objective
The objective of this research is to develop an iterative solver with robustness and
exascale parallelism.
In the field of quantum systems, researchers are very interested in the
electrical, structural, and chemical properties of materials such as graphene
and topological insulators. To clarify the physical properties of the target
materials, we must solve generalized eigenvalue problems. A collaborating
project of ppOpen-HPC
Key points:
• Robustness
• Massive parallelism
In this study, we selected an ILU-preconditioned Krylov subspace method as the
solver. In addition, we applied regularization methods to the incomplete LU
(ILU) preconditioner for robustness [14]. For massive parallelism, we proposed a
hierarchical parallelization of multicoloring algorithms [15].
The constant value α is chosen by the user. α is then added to the diagonal
entries of the target matrix A_z, making it a diagonally dominant matrix Ã_z. A
larger value of α yields a more diagonally dominant matrix; however, the ILU
preconditioner then becomes less effective because of the larger difference
between A_z and Ã_z. The best parameter α depends on the target application and
the shift value z.
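The α-shift regularization can be illustrated concretely. The sketch below is our own (not the project's implementation): adding α to every diagonal entry pushes the matrix toward diagonal dominance, at the cost of perturbing it away from the original matrix.

```python
# Illustrative sketch of diagonal-shift regularization before ILU:
# A_z_tilde = A_z + alpha * I.  A larger alpha gives stronger diagonal
# dominance but a larger perturbation of the original matrix.

def shift_diagonal(a, alpha):
    """Return a copy of square matrix `a` with alpha added to its diagonal."""
    n = len(a)
    return [[a[i][j] + (alpha if i == j else 0.0) for j in range(n)]
            for i in range(n)]

def is_diagonally_dominant(a):
    """Row-wise weak diagonal dominance check."""
    return all(abs(row[i]) >= sum(abs(v) for j, v in enumerate(row) if j != i)
               for i, row in enumerate(a))

a = [[1.0, 2.0, 0.0],
     [2.0, 1.0, 2.0],
     [0.0, 2.0, 1.0]]
print(is_diagonally_dominant(a))                       # → False
print(is_diagonally_dominant(shift_diagonal(a, 4.0)))  # → True
```

An ILU factorization of the shifted matrix then serves as the preconditioner for the unshifted system, which is why too large an α degrades its effectiveness.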
Blocking
For the numerical evaluation, we prepared 128 datasets of two different model
types: one simulating graphene and the other simulating a topological insulator.
Each model has four kinds of degrees-of-freedom (DoF) data, and each set of data
has 16 shifts. These matrices are complex symmetric. To solve them, we
implemented the block IC-preconditioned conjugate orthogonal conjugate gradient
(COCG) method. The BIC-COCG solver was considered converged when the relative
residual norm fell below 10^-7. Had we applied only a blocking technique with a
small block size, we could have solved only 64 of the datasets; by applying a
larger block size (64) and (0.0, 1.0) data shifts, we solved all the datasets
(Fig. 2.12).
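The COCG iteration for complex symmetric systems can be sketched in pure Python. This is a minimal unpreconditioned version for illustration only (the actual solver adds block IC preconditioning): COCG follows CG but replaces the Hermitian inner product with the unconjugated bilinear form x^T y, which is what makes it suitable for complex symmetric (not Hermitian) matrices.

```python
# Minimal unpreconditioned COCG sketch for a complex symmetric system
# A x = b (A == A^T, entries complex).  Unlike CG, the inner products
# use the unconjugated bilinear form sum(x_i * y_i).

def dotu(x, y):               # unconjugated "dot" used by COCG
    return sum(a * b for a, b in zip(x, y))

def matvec(a, x):
    return [dotu(row, x) for row in a]

def cocg(a, b, tol=1e-7, max_iter=100):
    n = len(b)
    x = [0j] * n
    r = list(b)               # r = b - A*0 = b
    p = list(r)
    rho = dotu(r, r)
    bnorm = sum(abs(v) ** 2 for v in b) ** 0.5
    for _ in range(max_iter):
        ap = matvec(a, p)
        alpha = rho / dotu(p, ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        if sum(abs(v) ** 2 for v in r) ** 0.5 / bnorm < tol:
            break
        rho_new = dotu(r, r)
        beta = rho_new / rho
        rho = rho_new
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x

# A small complex symmetric (NOT Hermitian) test system.
a = [[4 + 1j, 1 + 0j, 0 + 0j],
     [1 + 0j, 3 + 0.5j, 1 + 0j],
     [0 + 0j, 1 + 0j, 2 + 0.2j]]
b = [1 + 0j, 2 + 0j, 3 + 0j]
x = cocg(a, b)
res = [bi - ri for bi, ri in zip(b, matvec(a, x))]
print(sum(abs(v) ** 2 for v in res) ** 0.5)  # residual norm, below 1e-7*||b||
```

The stopping test mirrors the convergence criterion in the text: iterate until the relative residual norm drops below 10^-7.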
The dorsal vessel is connected with the roof of the body by some
short muscles, and is usually much surrounded by fat-body into
which tracheae penetrate; by these various means it is kept in
position, though only loosely attached; beneath it there is a delicate,
incomplete or fenestrate, membrane, delimiting a sort of space
called the pericardial chamber or sinus; connected with this
membrane are some very delicate muscles, the alary muscles,
extending inwards from the body wall (b, Fig. 72): the curtain formed
by these muscles and the fenestrate membrane is called the
pericardial diaphragm or septum. The alary muscles are not directly
connected with the heart.
Fig. 72.—Dorsal vessel (c), and alary muscles (b), of Gryllotalpa (after
Graber); a, aorta. N.B.—The ventral aspect is here dorsal, and
nearly the whole of the body is removed to show these parts.
It has been thought by some that delicate vessels exist beyond the
aorta through which the fluid is distributed in definite channels, but
this does not appear to be really the case, although the fluid may
frequently be seen to move in definite lines at some distance from
the heart.
Fat-Body.
The matter extracted from the food taken into the stomach of the
Insect, after undergoing some elaboration—on which point very little
is known—finds its way into the body-cavity of the creature, and as it
is not confined in any special vessels the fat-body has as unlimited a
supply of the nutritive fluid as the other organs: if nutriment be
present in much greater quantity than is required for the purposes of
immediate activity, metamorphosis or reproduction, it is no doubt
taken up by the fat-body which thus maintains, as it were, an
independent feeble life, subject to the demands of the higher parts of
the organisation. It undoubtedly is very important in metamorphosis,
indeed it is possible that one of the advantages of the larval state
may be found in the fact that it facilitates, by means of the fat-body,
the storage in the organisation of large quantities of material in a
comparatively short period of time.
Organs of Sex.
There are in different Insects more than one kind of diverticula and
accessory glands in connexion with the oviducts or uterus; a
receptaculum seminis, also called spermatheca, is common. In the
Lepidoptera there is added a remarkable structure, the bursa
copulatrix, which is a pouch connected by a tubular isthmus with the
common portion of the oviduct, but having at the same time a
separate external orifice, so that there are two sexual orifices, the
opening of the bursa copulatrix being the lower or more anterior. The
organ called by Dufour in his various contributions glande sébifique,
is now considered to be, in some cases at any rate, a spermatheca.
The special functions of the accessory glands are still very obscure.
Although the internal sexual organs are only fully developed in the
imago or terminal stage of the individual life, yet in reality their
rudiments appear very early, and may be detected from the embryo
state onwards through the other preparatory stages.
Parthenogenesis.
Glands.
CHAPTER V
DEVELOPMENT
EMBRYOLOGY–EGGS–MICROPYLES–FORMATION OF EMBRYO–VENTRAL
PLATE–ECTODERM AND ENDODERM–SEGMENTATION–LATER STAGES–
DIRECT OBSERVATION OF EMBRYO–METAMORPHOSIS–COMPLETE AND
INCOMPLETE–INSTAR–HYPERMETAMORPHOSIS–METAMORPHOSIS OF
INTERNAL ORGANS–INTEGUMENT–METAMORPHOSIS OF BLOWFLY–
HISTOLYSIS–IMAGINAL DISCS–PHYSIOLOGY OF METAMORPHOSIS–
ECDYSIS.
The processes for the maintenance of the life of the individual are in
Insects of less proportional importance in comparison with those for
the maintenance of the species than they are in Vertebrates. The
generations of Insects are numerous, and the individuals produced
in each generation are still more profuse. The individuals have as a
rule only a short life; several successive generations may indeed
make their appearances and disappear in the course of a single
year.
Although eggs are laid by the great majority of Insects, a few species
nevertheless increase their numbers by the production of living
young, in a shape more or less closely similar to that of the parent.
This is well known to take place in the Aphididae or green-fly Insects,
whose rapid increase in numbers is such a plague to the farmer and
gardener. These and some other cases are, however, exceptional,
and only emphasise the fact that Insects are pre-eminently
oviparous. Leydig, indeed, has found in the same Aphis, and even in
the same ovary, an egg-tube producing eggs while a neighbouring
tube is producing viviparous individuals.[69] In the Diptera pupipara
the young are produced one at a time, and are born in the pupal
stage of their development, the earlier larval state being undergone
in the body of the parent: thus a single large egg is laid, which is
really a pupa.
The eggs are usually of rather large size in comparison with the
parent, and are produced in numbers varying according to the
species from a few—15 or even less in some fossorial Hymenoptera
—to many thousands in the social Insects: somewhere between 50
and 100 may perhaps be taken as an average number for one
female to produce. The whole number is frequently deposited with
rapidity, and the parent then dies at once. Some of the migratory
locusts are known to deposit batches of eggs after considerable
intervals of time and change of locality. The social Insects present
extraordinary anomalies as to the production of the eggs and the
prolongation of the life of the female parent, who is in such cases
called a queen.
Formation of Embryo.
The mature, but unfertilised, egg is filled with matter that should
ultimately become the future individual, and in the process of
attaining this end is the seat of a most remarkable series of changes,
which in some Insects are passed through with extreme rapidity. The
egg-contents consist of a comparatively structureless matrix of a
protoplasmic nature and of yolk, both of which are distributed
throughout the egg in an approximately even manner. The yolk,
however, is by no means of a simple nature, but consists, even in a
single egg, of two or three kinds of spherular or granular
constituents; and these vary much in their appearance and
arrangement in the early stages of the development of an egg, the
yolk of the same egg being either of a homogeneously granular
nature, or consisting of granules and larger masses, as well as of
particles of fatty matter; these latter when seen through the
microscope looking sometimes like shining, nearly colourless,
globules.
Fig. 79.—Showing the two extruded polar bodies P1, P2 now nearly
fused and reincluded, and the formation of the spindle by junction
of the male and female pronuclei. (After Henking.)