
congress review

Compiled by:
Ian Symington
Technical Officer, NAFEMS

Our thanks go to everyone involved in making the NAFEMS World Congress 2015 possible:
Review Committee
Nigel Knowles Vic Rollo Jim Wood

NAFEMS World Congress Team


Christine Bell Akiko Kondoh Larissa Oswald Ian Symington
Gabi Beller Robert Kostrzewa David Quinn Daniel Trojer
Olivia Bugaj Matt Ladzinski Georg Schöpf Heike Wankel
Jo Davenport Tim Morris Margot Simmons Andrew Wood
Elangovan Kariappan Albert Roger Oswald Paul Steward

Training Course Tutors


Tony Abbey Kamran Fouladi Louis Komzsik
Peter Bartholomew Jean-Francois Imbert Mark Norris
Contents
Your Congress 2
A World of Engineering Simulation 3
Agenda 4
Key Messages - The Elevator Pitch 7
Focus on Keynotes 8
SPDM Conference Overview 10

Expert View
  Angus Ramsay 12
  Peter Bartholomew 13
  Ron Simmons 14

Working Group View
  High Performance Computing 15
  Stochastics 16
  Analysis Management 17
  Composites 17
  Systems Modelling and Simulation 18

Session Review
  Analysis Management 19
  Optimization 20
  Fracture & Fatigue 22
  Nonlinear Approaches in Multibody Simulation Analysis 24
  Multibody Simulation 26
  Simulation Engineering meets Additive Manufacturing 27

Congress Highlights
  Congress Highlights 28
  A Primer on Model Based Systems Engineering 29
  Models for Intralaminar Damage and Failure of Fiber Composites 34
  Improvements of an Air-Liquid Interface In-Vitro Testing Method for Inhalable Compounds Using CFD-Methods 43

NAFEMS Technical Working Groups & Regional Steering Committees 50

Your Congress
Albert Roger Oswald & Heike Wankel

From the 21st to 24th of June 2015, San Diego played host to the engineering
analysis and simulation community, as we gathered for a packed agenda of
presentations, workshops, training courses, and, of course, social events.
Almost 600 attendees and more than 300 presentations meant that this was
always going to be one of the most technically rewarding and varied events
dedicated to analysis and simulation so far. From the ground-breaking
technologies at work in the BMW i8, to the challenges in space vehicle
development and everything in between, this was an outstanding event whatever
way you look at it.

But not everyone can attend, for one reason or another. And that’s where this
review comes in. It would of course be impossible to cover every session and
presentation in one publication, but what we have tried to do here is to give a
flavour of some of the sessions, an insight into the keynote presentations, and the
opinions of some of our working groups on the event. We have also asked several
acknowledged experts in various fields to give their views and opinions on some of
the sessions they managed to attend over the course of the Congress. Some of
these opinions vary, and some may outright contradict each other, but that’s the
beauty of an event like this – everyone involved comes away with something
different.

A guide to our various Technical Working Groups and Steering Committees can
also be found at the rear of this publication, which will help the reader to resolve
the acronyms that are used throughout. Much of the material that was presented
at the Congress, including presentations and technical papers, is available on our
website and can be accessed by NAFEMS members. We hope that this overview
publication will motivate you to take a look at these resources.

We are the engineering modelling, analysis and simulation community – NAFEMS is proud to serve as your international association. We look forward to seeing you at the NWC 2017 in Europe. ■

On behalf of the NAFEMS World Congress Team


A World of Engineering Simulation
Costas Stavrinidis

The biennial NAFEMS World Congress was established in
order to strengthen worldwide alliances and working
relationships between industry, research institutes, and
academia in the area of engineering analysis and simulation.

This year’s congress was held in San Diego, USA from 21 – 24 June 2015 and, as NAFEMS Chairman, I was delighted to
welcome more than 575 participants from 33 countries, making
this by far the largest NAFEMS event ever held outside Europe.
With over 300 technical presentations by leading experts, this
international forum provided a unique opportunity for
networking, knowledge exchange, sharing best practices and
discussing experiences of technical and scientific efforts.

NAFEMS has remained consistent in providing up-to-date information on the latest technology in simulation, modelling &
analysis to the engineering community. The organization has
progressed over the years to take into account technology
changes, membership priorities, the market place, and the ways
in which engineers access information and progress their own
professional development. In line with this progression, we
successfully incorporated the 2nd international SPDM
conference into the NWC, as well as a special forum on additive
manufacturing and a focus on Systems Engineering.

Our thanks go to our fantastic keynote and invited speakers, presenters, participants, exhibitors, NAFEMS working group
members, volunteer helpers and our tireless review committee
who helped make this congress a big success. A special mention
must also go to NAFEMS’ dedicated staff team, without whom
successful events such as this would not be possible.

With the Congress, we provide an opportunity to connect different specialists and bring them together for the
advancement of analysis and simulation techniques and sharing
of respective experiences with software vendors and computer
providers for various engineering and scientific applications. We
would like to thank all of the exhibitors who presented their
latest hardware and software products related to CAE. We also
thank Dassault Simulia as Platinum Sponsor, Altair, Ansys and
Siemens as Gold Sponsors, Autodesk, MSC and Phoenix
Integration as Silver Sponsors, Esteco and Front End Analytics
(FEA) as SPDM sponsors, and our media partners Desktop
Engineering, Engineering.com, and MCADCAFÉ. ■

Agenda
Monday 22nd June
Welcome: Costas Stavrinidis, NAFEMS Chairman
Sponsor Presentation: Steve Levine, Dassault Systèmes, USA
Keynote Presentation: Ferdinand Dirschmid, BMW Group, GER
Invited Presentation: Dennis Nagy, BeyondCAE, USA
Keynote Presentation: Klaus-Jürgen Bathe, Massachusetts Institute of Technology, USA
SPDM Keynote: Peter Coleman, Airbus Operations Ltd., GBR
Invited Presentation: Georg Schöpf, Additive Fertigung Magazine, AUT

Session 1 13:30 - 15:15
1A CFD 1
1B Dynamics 1
1C Composites 1
1D Optimization 1
1E Multiphysics 1
1F Systems Engineering 1
1G Emerging Issues - Electro Mechanical / Electro Thermal
1H Analysis Management 1
1J Forum 3D-Printing / Additive Manufacturing
1K SPDM 1 - Introduction / Applications

Session 2 16:00 - 17:45
2A CFD 2
2B Dynamics 2
2C Composites 2 / Multiscale
2D Fracture & Fatigue 1
2E Stochastics 1 - Uncertainty Characterization
2F Systems Engineering 2 - V&V
2G Manufacturing 1
2H Analysis Management 2
2J Impact 1
2K SPDM 2 - Automotive

Tuesday 23rd June
Keynote Speaker: Zlatko Penzar, Continental AG, GER
Keynote Speaker: Ahmed Noor, Old Dominion University, USA
Invited Presentation: Costas Stavrinidis, European Space Agency, ESTEC, NED
Invited Presentation: Joe Walsh, intrinSIM, USA

Session 3 11:00 - 12:25
3A CFD 3 / Acoustics
3B Multibody Simulation 1
3C Composites 3 - Failure
3D Computational Structural Mechanics
3E Multiphysics 2
3F High Performance Computing 1
3G Business Issues 1
3H ASME V&V
3J Forum 3D-Printing / Additive Manufacturing
3K SPDM - Vendor 1

Session 4 13:30 - 14:55
4A Fracture & Fatigue 2
4B Multibody Simulation 2
4C Composites 4 - Fibre-Reinforced
4D Impact 2 / Life Sciences
4E Multiphysics 3
4F High Performance Computing 2
4G CAE Driven Product Design 1
4H Simulation Governance
4J Forum 3D-Printing / Additive Manufacturing
4K SPDM - Aerospace

Session 5 15:35 - 17:00
5A Premium Sponsor: Dassault Systèmes Simulia
5B Gold Sponsor: Ansys
5C Gold Sponsor: Siemens PLM Software
5D Gold Sponsor: Altair Engineering
5E Silver Sponsors: Autodesk / MSC Software
5F Silver Sponsor: Phoenix Integration
5G High Performance Computing 3 - Cloud
5H Stochastics 2 - Discussion
5J Forum 3D-Printing / Additive Manufacturing
5K SPDM - Sponsors: Esteco / Front End Analytics

Session 6 17:20 - 18:45
6A Premium Sponsor: Dassault Systèmes Simulia
6B Multibody Simulation 3
6C Education & Training
6D CAE Driven Product Design 2
6E SPDM - Vendor 2 A (in parallel with 6K)
6F Systems Engineering 3
6G High Performance Computing 4 - Cloud
6H Geometry Interaction with Simulation
6J Forum 3D-Printing / Additive Manufacturing
6K SPDM - Vendor 2 B (in parallel with 6E)

Wednesday 24th June
Keynote Speaker: Johan Jergeus, Volvo Car Corporation, SWE
Keynote Speaker: Walter Schmidt, Stryker Orthopaedics, USA
Invited Presentation: David Fitzsimmons, Airbus Operations, GER
Invited Presentation: Louis Komzsik, Siemens PLM, USA

Session 7 11:00 - 12:45
7A CFD 4 - Thermal
7B Materials
7C Optimization 2
7D Joints 1
7E Preprocessing 1
7F Stochastics 3 - Uncertainty Management
7G Analysis Management
7H Methods 1
7J Simulation & Systems Engineering
7K SPDM - Democratising CAE with SPDM

Session 8 13:45 - 15:30
8A CFD 5 - V&V
8B Fracture & Fatigue 3
8C Optimization 3
8D Joints 2
8E Manufacturing 2
8F Preprocessing 2
8G Methods 2
8H Dynamics 3
8J CAD Geometry for Meshing
8K SPDM - Deploying SPDM
The Elevator Pitch
Key Messages
Rod Dreisbach

Summing up an entire three-day conference into one concise set of ‘take-aways’ is no easy task.
The key messages, however, from my perspective, can be summarised as follows.

Major Trends in Modeling, Analysis & Simulation (MA&S)
• More physics-based simulation and more multiphysics simulation (e.g., CFD, thermal, acoustics, multiphase flow, FSI)
• New methodology & standards for cross-functional concurrent simulation and co-simulation; aligned with systems-engineering based initiatives
• Multi-scale modeling of materials and simulations from systems to sub-systems to components to nano-scale and back again
• Advanced modeling, analysis, and optimization of advanced composite structures and damage assessments
• Tighter parametric CAD-CAE bi-directional capabilities, but lacking fidelity
• Use of additive manufacturing and 3D printing as complementary tools for simulation V&V; advanced simulation of manufacturing processes
• Growth in Model-Based Simulation, Analysis, and Systems Engineering; integration of 1D, 2D and 3D cross-technology simulations; coupling analysis, optimization, build, test, and certification, with common visualizers (topics of the NAFEMS/INCOSE WG and the ASSESS Initiative)
• New Paradigm: Thinking of the product as cross-dependent systems
• Increased reliance on virtual simulations to reduce the amount of physical testing

Simulation & Process Data Management (SPDM)
• Simulation is becoming more pervasive throughout the lifecycle of a product and must be managed as a best business practice (Simulation Governance)
• Tighter integration of design, requirements, workflows, and change management
• A growing need for managing ‘smart’ cross-functional simulation models; must replace the current use of independent, disparate functional models
• SPDM needs to manage all product simulation data including spreadsheet and template-based analyses (beyond FEA and CFD)
• Growing desire for SPDM to exploit international standards-based information sharing techniques (e.g., ISO, OSLC)
• Business Value (ROI) still seen as a hurdle to smaller companies

Conference Topics, Grouped by Number of Presentations

Tier 1
• Additive Manufacturing Simulation
• Analysis Management: V&V, Simulation Governance
• CFD
• Composites
• HPC
• Methods: CSM, Emerging Issues, Education & Training
• Multiphysics
• SPDM

Tier 2
• Dynamics
• Fatigue & Fracture
• Multibody Simulation
• Optimization
• Systems Engineering

Tier 3
• Impact
• Joints
• Life Sciences
• Manufacturing
• Preprocessing
• Stochastics
Focus on Keynotes
Tim Morris

Plenary presentations, especially the keynotes, set the scene for any conference. NAFEMS takes great care when choosing who to invite, to ensure that the audience receives a potent blend of inspiration, thought-leadership and insight. So, expectations were high. The speakers did not disappoint. Delegates enjoyed a stimulating set of presentations from world-leading academics and industry leaders. They covered a wide array of industry sectors, research themes and commentary on current trends. What did we learn from these presentations? What key messages did we take away?

Shaping the Future
Many of the plenary speakers shared their thoughts about what the future might hold. Ahmed Noor’s presentation was dedicated to this theme, and gave an amazing insight into how cognitive computing could impact on the world of engineering simulation, explaining how we might expect a symbiotic relationship to develop between humans and machines, with mass customisation (made possible by 3D printing and other techniques), artificial intelligence and hyper-connectivity all playing their part. He commented that we must decide how to react to the dramatic changes that were coming our way. We have three choices: to “deny” the changes exist, to “resist” them, or to “guide” them. He urged us to take the third option and to be amongst those that embraced and welcomed these changes, echoing the earlier observation from Dennis Nagy that “everyone present at the conference can help to shape the future”.

Other views of the future included those from Joe Walsh, who looked at the business trends and forecast that “a revolution is coming” as the demand for simulation will soon far outstrip the capacity that the CAE community has to deliver it. Joe predicts that the technology will change to ensure that simulation can and will be exploited by the engineering population as a whole, rather than an expert minority.

Meanwhile, Johan Jergeus explained how Volvo Cars had made huge advances in recent years in terms of exploiting the vastly increased computing power and improved algorithms, yet their thirst for more remained strong: they want meshes that are still finer, and improved methods of representing material behaviour and failure mechanisms, especially crack propagation. Part of the response to this came from Louis Komzsik, who outlined his view of the future directions of software development, including the potential for meshless methods and new material models.

Materials
The various ways to represent the different characteristics of material behaviour are invariably a topic of great interest at NAFEMS events. At this World Congress, there was a particular focus on the use of more advanced materials. The keynote speakers took the lead on this. Ferdinand Dirschmid was the first, with his fascinating presentation about the BMW i8. Simulation was one of a few key enablers that allowed a radical concept car to be turned into a production vehicle in only 42 months! Along the way, many analysis challenges needed to be met. The car is one of the first applications of carbon fibre involving mass production techniques. Also, non-traditional joining methods were used. So it was welcome news to hear that BMW judged that the predictions obtained were of a level of accuracy that was not only adequate, but in many cases of a comparable accuracy to analyses of traditional materials.

Costas Stavrinidis picked up on a similar theme, touching upon the use of advanced materials by the space industry, and overcoming the difficulties associated with representing the failure mechanisms of composite materials. Walter Schmidt from Stryker talked about a very different set of materials as he guided us through various biomedical applications, and was rightfully proud of the role that simulation has played in helping his company to genuinely improve the quality of life for many people.

Georg Schöpf introduced a different angle, explaining the many new material possibilities that are emerging through the use of additive manufacturing techniques, whereas Johan Jergeus commented on the difficulties that remain in terms of accurately predicting material failure with more traditional materials, highlighting the difficulties in predicting crack propagation under the loading regimes associated with automotive crash conditions.

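Joe Walsh’s forecast can be given a rough quantitative feel. The sketch below is purely illustrative arithmetic using the figures quoted at the congress and reported later in this review (roughly 750,000 engineers using simulation today out of a potential 10 million, with growth limited to about 10% per annum, versus the 18-month throughput doubling reported for one SDM-equipped OEM); the calculation itself is ours, not the speakers’.

```python
import math

# Figures quoted at the congress: ~750,000 engineers use simulation today,
# out of a potential market of ~10 million, with growth limited to
# roughly 10% per annum by the shortage of trained analysts.
users_today = 750_000
potential_market = 10_000_000
annual_growth = 0.10

# At 10% compound growth, reaching the full market takes decades --
# hence the argument that the technology, not headcount, must change.
years_to_saturate = math.log(potential_market / users_today) / math.log(1 + annual_growth)
print(f"Years to reach 10M users at 10%/yr: {years_to_saturate:.0f}")

# By contrast, capacity doubling every 18 months (the Moore's-law pace
# reported for an SDM-equipped OEM) closes the same gap far faster.
doublings_per_year = 12 / 18
years_at_doubling_pace = math.log2(potential_market / users_today) / doublings_per_year
print(f"Years at an 18-month doubling pace: {years_at_doubling_pace:.1f}")
```

On these assumptions the gap closes in under six years at the doubling pace, versus nearly three decades at 10% per annum, which is the essence of the “revolution” argument.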

Extended Range of Load Conditions
In fact, loading conditions was a subject that several of the keynote speakers brought to our attention. One of the ways that industry is exploiting the enhanced simulation capabilities is to evaluate a much wider range of load conditions – this was mentioned by Peter Coleman from the perspective of the aeronautics industry, Johan Jergeus on behalf of the automotive industry (as they investigate additional load cases in their quest to eliminate complete car prototypes) and Costas Stavrinidis, who spoke about the huge number of load conditions that need to be considered for space exploration, with the added complication that test can’t be used to validate all space load conditions.

Additive Manufacturing
A topic that was new to NAFEMS World Congresses was that of Additive Manufacturing (AM), with many sessions dedicated to how this new technology and simulation can be used together. This topic was addressed comprehensively by Georg Schöpf, who started by reminding us that additive manufacturing is actually nothing new – pottery and other such methods of production have been doing a sort of “additive manufacturing” for hundreds of years! Georg gave an interesting overview of the different types of additive manufacturing technologies, and how they can be used to enrich the use of engineering simulation (e.g. opening up new possibilities in terms of designs that can be manufactured, or allowing new blends of materials to be utilised).

The topic was also touched upon by Ahmed Noor when he spoke of the impact of “mass customisation” and by Costas Stavrinidis, who mentioned how the space industry is already exploiting AM technology.

Increased Model Complexity
The increasing complexity of simulation models is clearly apparent whenever we compare the presentations from consecutive world congresses, and this was apparent from many of the plenary talks, including Johan Jergeus, who spoke about the increasing level of detail now being included in his models, and Peter Coleman, who described how systems engineering and simulation approaches are now converging.

Founding Principles
At the end of the day, we must not forget the essential principles that underpin CAE usage, and the NAFEMS organisation. We were reminded of these principles through many of the presentations. Klaus-Jürgen Bathe gave a remarkable insight into some of the fundamental technologies that are still being researched and developed in many ambitious ways, Louis Komzsik spoke about how the fruits of this new research feed into the software of today and tomorrow, and Zlatko Penzar, talking about how he solved a troublesome problem involving a very noisy automotive pump, reminded us how an expert analyst must always remember to still think as an engineer, and that well thought through and clever simulations remain the answer to many of today’s industrial problems. However clever the software becomes, we will always want, and need, a smart and resourceful engineer to oversee things. ■

SPDM Conference
Overview
Mark Norris

Ferdinand Dirschmid highlighted the importance of BMW’s CAE Bench SPDM solution during his presentation of the virtual development of the iconic BMW i8 hybrid petrol-electric carbon-fibre and aluminium sports car. He stated that SPDM had enabled the delivery of this vehicle in only 42 months, managing all aspects of the 6000 major simulations executed to develop and validate performance. This may seem slow for a vehicle program until one considers that the i8 embodies a new hybrid powertrain concept as well as a carbon fibre reinforced plastic body shell with aluminium alloy front and rear ends, and required new manufacturing processes and a new factory. Ferdinand commented that BMW’s homogenised CFRP failure model gave simulated failure results within the scatter of the physical test results.

Airbus is another leading company with SDM solutions in place for a decade. Peter Coleman gave a memorable presentation describing the programs Airbus is leading to advance the state of the art of SPDM. He described the recent initiatives, TOICA, CONGA and SAVI (System Architecture for Virtual Integration), which extend the work done on CRESCENDO to demonstrate SPDM 3.0: the management of virtual experimentation across the virtual enterprise. Peter explained that Airbus was promoting a cross-industry, standards-based approach to collaborative virtual engineering because of the inherently heterogeneous nature of simulation applications and xDM platforms running SDM solutions. The proposed standard is named MoSSEC (Modelling and Simulation in a collaborative Systems Engineering Context) and particularly addresses the conceptual, up-front phase of product development.

Joe Walsh contrasted the business appetite for simulation capacity with the actual growth being achieved worldwide. He stated that only 750,000 engineers are using simulation applications out of a potential market of 10 million engineers. Joe articulated that it is a lack of trained analyst capacity which is preventing simulation being used more widely in industry, limiting growth to approximately 10% per annum. He made the analogy of the first motor cars, which required considerable engineering skill to start, drive and repair en route.

SPDM conference
I had the honor of kicking off the SPDM conference by explaining that SPDM directly addresses the bottleneck of trained analyst capacity articulated by Joe Walsh. Superimposing the growth in analysis capacity achieved by the SDM-based democratisation project at GKN Driveline, we see that SPDM-enabled democratisation directly delivers the growth in capacity that Joe asserted that industry needs.

A mature SPDM solution can enable even larger increases in computational engineering throughput by reducing the time analysts spend on non-value-added tasks and by enabling remote working. The reported increase in fully traceable computational engineering throughput by an SDM-equipped automotive OEM tracked Moore’s law, doubling every 18 months, while more recently BMW reported a doubling of program throughput over a four-year period.

Krista Comstock explained that a key driver of SPDM for Procter and Gamble is to provide a scaleable and cost-effective solution to democratize simulation, where possible, to non-experts. Her previous role was to
maintain two “democratised” simulation applications for use by process development engineers, and she expects to be able to deliver many more packaged solutions through the SPDM solution P&G are collaborating with Dassault Systèmes to build, now based on formal, verified industrial requirements. She also concurred that a wide range of skills are required to deliver an SDM project, and explained that she uses specialist SDM IT skills from a small company local to her facility in addition to partnering with the vendor for long-term developments.

In contrast, Marc Hertlein described his background, Computer Science, and his experience as Project Manager for the migration of BMW’s CAE Bench to the latest SPDM 2.0 from MSC Software. This required the migration of 90,000,000 metadata objects as well as links to 1.6 Petabytes of data. Once SPDM usage reaches maturity in a large engineering organisation, the contribution of the IT organisation becomes increasingly important.

Rodrigo Britto-Maria of Embraer echoed Krista’s comments about the need for specialist SDM skills, which are difficult to find in Brazil. He also reported that his remoteness from the vendor, 12 hours by aeroplane, was an issue for support of his SPDM deployment.

Dr Akkela of Ashok Leyland submitted a paper on the deployment of a configurable SPDM 2.0 solution to manage the simulation data and processes of an automotive (truck) manufacturer, and the productivity gains achieved in fatigue life prediction. This solution was delivered by a team of three in one year. In contrast, Mr Yoon of Hyundai Motors in Korea presented his project to deploy a PLM-based solution at an automotive OEM, for which he had to develop basic SDM services and a web user interface acceptable to the analysis community in order to achieve deployment. He also mentioned that his CAE supplier provided specialist SDM services to achieve this deployment. Mr Yoon stressed that the automation of data acquisition and management by SDM gave his engineers more time to innovate and do computational engineering.

Aaron Diachun of the Ford Motor Co. described the challenges of simulating the increasingly complex electro-mechanical systems installed in passenger cars. He also concluded that standards-based approaches are essential to enable system simulations where modules are supplied by a range of different companies. He described the use of the FMI (Functional Mock-up Interface) standard to enable the simulation of modular automotive systems, and received the best-paper award.

Platforms for the democratisation of simulation methods were a major theme of the conference. Glenn Valine of GKN Driveline led the workshop on democratisation and presented his experience of developing simulation applications suitable for safe use by the design engineering community and delivering them globally with security and traceability on an SDM platform. Glenn reported that they have “democratised” complex duty cycle and fatigue life simulations to approximately four times as many design engineers as their population of analysts. Other vendors presenting platforms for democratised simulation delivery included FEA with the EASA platform, ESTECO with modeFRONTIER, ESI with vDSS, Xogeny and Comet Solutions. While these vendors stressed how easy it can be to package simulation applications for use over the web by non-specialists, none of them explicitly addressed the issue of governance – ensuring that the results obtained by non-specialists are accurate or even plausible. This is a topic which must be explicitly addressed for wide adoption of democratised solutions.

In the main conference, speakers emphasised the importance of more sophisticated material modelling to achieve accurate results now that meshes are fully representative of geometry. In the SPDM conference, Leo Kilfoy of MSC and Fairfull of Granta described the key aspects of solutions to manage material models for use in simulation. ■
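The modular, supplier-spanning co-simulation that the FMI standard enables can be pictured with a toy fixed-step master algorithm. The sketch below is not the FMI API itself: the two classes are invented stand-ins for FMUs from different suppliers, and the physics is deliberately crude; only the exchange of coupled variables at each communication point reflects the real scheme.

```python
# Toy fixed-step co-simulation master, illustrating the data exchange
# that FMI formalises between independently supplied modules.
# Both "FMUs" are invented stand-ins, not real vendor models.

class BatteryModel:
    """Supplier A: battery voltage sags under current draw (illustrative)."""
    def __init__(self):
        self.voltage = 400.0
    def do_step(self, current_draw, dt):
        self.voltage -= 0.01 * current_draw * dt  # crude discharge model
        return self.voltage

class MotorModel:
    """Supplier B: motor current tracks a speed request (illustrative)."""
    def __init__(self):
        self.current = 0.0
    def do_step(self, voltage, speed_request, dt):
        # crude controller: demand rises with request, falls as voltage sags
        self.current += (speed_request - 0.1 * voltage) * dt
        return self.current

def run_master(t_end=10.0, dt=0.1):
    battery, motor = BatteryModel(), MotorModel()
    current, t = 0.0, 0.0
    while t < t_end:
        # communication point: each module advances one step, then the
        # master hands its outputs to the other module's inputs
        voltage = battery.do_step(current, dt)
        current = motor.do_step(voltage, speed_request=50.0, dt=dt)
        t += dt
    return voltage, current

v, i = run_master()
print(f"final voltage {v:.1f} V, final current {i:.1f} A")
```

In a real FMI deployment each `do_step` would be a call into an opaque, vendor-supplied FMU binary, which is precisely what lets modules from different companies be coupled without sharing model internals.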

Expert View
Angus Ramsay

World Congresses are in many ways showcases of industries such as engineering simulation, where one can rub shoulders with software developers, academic researchers and engineering practitioners and find out about current, emerging and future trends in the industry.

Prior to this year’s NWC, the last finite element world congress I attended was in 1993 (Monte Carlo, Monaco), which had the theme “FEM, Today and the Future”, and I recognised some familiar faces in this year’s NWC from the event some 22 years ago. The theme this year was “A World of Engineering Simulation” and whereas in 1993 most papers concentrated on conventional finite element methods applied in the field of stress analysis, this year’s conference embraced all engineering simulation and included papers on much more than just stress analysis.

As is the norm for such congresses, the standard of the work reported and the quality of the presentations made were extremely high, and provides confidence, for those that need it, that the future of engineering simulation is in good hands. Some presentations I attended on joints were particularly impressive in the way that simulation was being validated with measured data and how failure criteria were being developed from the resulting correlation.

One of the emerging themes from this year’s congress is termed the ‘democratisation’ of engineering simulation; handing the power of simulation from the few (experts) to the many (engineering designers, for example). I began my engineering career wanting to be a designer but felt that I could not be effective without understanding stress analysis, and so went into a career in finite element analysis. Over the last thirty years I have developed sufficient expertise to be able to ensure that my FE models are producing reliable and accurate results. Without this experience I could have been responsible for a computer-aided catastrophe like, for example, the Sleipner Platform. It is of course the issue that running a computer simulation needs a different set of skills and experience than required for designing a component or structure that makes the goal of democratisation a difficult one to achieve in practice. A good window on the world of practical engineering simulation can be obtained through the LinkedIn discussion groups – much can be learnt here about the difficulties engineers face in applying simulation technologies to their design problems.

It is interesting to note that one of the emerging technologies from the 1993 congress was error measures and adaptive mesh refinement, complementary tools that are supposed to relieve the engineer of having to be concerned about the issue of discretization error, and that might well today be seen as tools towards the aim of democratisation of engineering simulation. Of course error estimation is part of the new subject of Simulation Governance, which played a prominent part in this year’s congress and is a subject which all engineers (experienced analysts or not) need to be considering whenever simulation techniques are being applied.

My conclusion from this NWC is that the future of engineering simulation is bright and is being led by some very able and talented people, and I would like to congratulate NAFEMS for organising and running such a successful event. I look forward to the emergence of robust simulation tools for the engineering designer – at which point I may move into engineering design – but in the meantime I will continue to apply the ‘Napoleonic Code’ to my simulation results, that is, Guilty until proven Innocent! ■
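The “guilty until proven innocent” stance can be partly automated, which is one way democratised tools could address the governance gap noted elsewhere in this review. The sketch below is entirely illustrative: the field names and thresholds are invented for the example, not drawn from any NAFEMS guideline, but the pattern of rejecting results unless they pass explicit acceptance checks is the point.

```python
# Illustrative acceptance gate for a "democratised" simulation app:
# results are rejected ("guilty") unless every sanity check passes.
# Field names and thresholds are invented for this sketch.

def accept_result(result, coarse_peak_stress, allowable_stress):
    checks = {
        # equilibrium: reaction forces should balance the applied load
        "equilibrium": abs(result["reaction_sum"] + result["applied_load"])
                       <= 1e-3 * abs(result["applied_load"]),
        # mesh convergence: peak stress within 5% of a coarser-mesh run
        "convergence": abs(result["peak_stress"] - coarse_peak_stress)
                       <= 0.05 * abs(coarse_peak_stress),
        # design envelope: result must stay inside the validated space
        "in_envelope": result["peak_stress"] <= allowable_stress,
    }
    failed = [name for name, passed in checks.items() if not passed]
    return (len(failed) == 0), failed

# A plausible-looking result set for the gate to judge (invented numbers).
result = {"reaction_sum": -9999.2, "applied_load": 10000.0, "peak_stress": 210.0}
ok, failed = accept_result(result, coarse_peak_stress=205.0, allowable_stress=250.0)
print("accepted" if ok else f"rejected: {failed}")
```

A real deployment, such as the constrained envelope GKN Driveline described, would of course embed far richer checks, but the design choice is the same: the non-specialist never sees a result the gate has not passed.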

Expert View
Peter Bartholomew

The trend towards ever more sophisticated simulation that has been a feature of recent years shows no sign of abating. This manifests itself in the multi-level analysis with micro-, meso- and macro-scale analysis of the composite structure within the BMW i8, the full aircraft non-linear analysis of the A350, numerous co-simulations and examples of multiphysics simulation. Extrapolating, one can see that the move to 3D modelling with several elements used through the thickness of a shell will eventually come to pass, even if the modelling scale eventually tends towards the material grain size.

There was one reason for caution, however, that emerged from the Composites Failure session. Whilst it may be possible to capture stresses to higher fidelity at an ever more detailed level, this is only of value if the corresponding material allowables are available and sufficiently precise. At present, of the many proposed failure criteria none consistently produces results within 10% of the test results throughout the failure envelope, and errors in excess of 50% are not uncommon.

Meanwhile, the cross-disciplinary topics of Analysis Management, Systems Engineering, and Simulation Data Management continue to receive increasing levels of attention. At present Model Based Systems Engineering tends to focus upon what Conrad Block referred to as 'the low hanging fruit' of 1D simulation, where the SysML data models are in closest alignment with the data required for simulation, but more comprehensive objectives are in mind. Meanwhile Peter Coleman referred to a collaborative project in which the data defining requirements and key values extracted from result sets are used to validate design concepts, suggesting a close synergy exists between the NAFEMS SMS and SDM activities.

Another trend to emerge from the SPDM conference was the interest in democratisation as a means to increase the role of simulation without the corresponding requirement to find large numbers of advanced simulation experts. Traditionally NAFEMS members have been wary about the wisdom of placing such tools in the hands of design engineers with little knowledge or understanding of the numerical techniques employed. GKN Driveline demonstrated that this can be achieved within a carefully managed and constrained envelope.

It may well be that the goals of democratisation will pervade the whole of the engineering simulation world as the subject continues to expand to embrace more technical areas than could possibly be at the command of a single individual. Few could understand the basic numerical analysis initiatives that would be required to exploit massively parallel HPCs whilst also commanding insight into turbulence modelling, electromagnetics and thermal modelling, material characteristics and the various regulatory environments. It was commented that the 'old style' papers in which a theoretical development was outlined and followed by examples of industrial application had largely disappeared. What exists now is far more a collaborative venture between researchers, vendors and, ultimately, the application engineer. ᔡ

Expert View
Ron Simmons

The theme of the NAFEMS World Congress 2015 was "A World of Engineering Simulation". Simulation has been what finite element analysis has been about since we migrated from the mathematical theory to digital computers. So, what makes the simulation of 2015 different from what we have come to think about when we say engineering simulation? The major thread that went through the plenary presentations and carried through the topical sessions was multiphysics and co-simulation. It is becoming more common to look at an analysis not as a finite element analysis of a force applied to a constrained model, but as a series of interactive applied forces each affecting a model in its own way, and thus a more realistic approach to how things work in the real world.

There was a time not so very long ago where in a congress you would have heard "FEA" in nearly every presentation. In this congress, it was unusual to hear that term. And when it was discussed, it was usually one part of a whole analysis and rare to hear of it as an end unto itself.

So, in 2015, the World of Simulation stands on the brink of a shift in how analysis is approached. Co-simulation – the simulation of a situation using two routines that run through an analysis by trading off progression in a step-wise manner – is state of the art. But in the near future true multiphysics, true co-simulation where two, or more, solution sequences actually run in parallel, will be the next big step in our world of engineering simulation.

Will the 2017 NAFEMS World Congress be crossing that threshold, or at least be closing in on it? Co-simulation is here to stay. In 2017 it will be more mainstream. And the NAFEMS World Congress will then be, as it has been in the past, the place to bring engineers and engineering analysts together to share and exchange information on this developing trend. ᔡ

Working Group View
High Performance
Computing
Lee Margetts
University of Manchester
NAFEMS HPCWG - Chair

I first attended the NAFEMS World Congress in 2005, where I presented my work on interactive finite element analysis - using one thousand processors to solve a 3D elasticity problem with one million finite elements in less than one second. Back then processors only had one core and one million 3D elements was too large a problem to solve using commercial software. Memory was measured in Megabytes, there were no GPU accelerators or many-integrated core processors, and Cloud Computing had yet to be invented. I tried to persuade NAFEMS to set up an HPC Working Group in 2006, 2007, 2008 and so on. In 2011, I'd just about given up when I was invited by Tim Morris to go ahead. NAFEMS members were finally concerned about HPC, the driver being that all workstations on the market had multicore processors and GPU vendors were pushing their hardware for general purpose computing. Runtime performance was becoming a potential market differentiator for software vendors.

Fast forward to NWC15, and the keynote talk by Professor Klaus Jürgen Bathe at MIT reminded me why the HPC Working Group are so motivated to promote the adoption of HPC across the NAFEMS community. Professor Bathe commented that simulation today is 10,000x faster than in the 1980s. This growth is impressive, but do we need to wait another 35 years for a similar step change? In 2015, the largest HPC systems are already more than 10,000x faster than the desktop. This was highlighted by Professor Mark Shephard, who discussed solving problems with trillions of degrees of freedom using millions of cores. Perhaps more significant than this heroic effort is the growth in Cloud Computing: the focus of the Birds of a Feather session. The Cloud has the potential to bring HPC capability to the whole supply chain, on a pay-for-use basis, with SMEs having no need to invest in their own infrastructure. There are issues with the Cloud and there is a long way to go regarding HPC adoption, as the NAFEMS survey "Computing Platforms for Engineering Simulation" showed. Most NAFEMS members use desktop computing for engineering analysis, so I am sure HPC will be on the NAFEMS agenda and a talking point at the World Congress for many years to come. However, I hope it will one day become as silly a concept as a NAFEMS Laptop Working Group would be today, and our job will have been done. ᔡ

Working Group View
Stochastics
Alexander Karl, Rolls-Royce, NAFEMS SWG
Gordon May, Rolls-Royce, NAFEMS SWG
Dietmar Vogt, Airbus Group Innovations, NAFEMS SWG

Three out of a total of 80 sessions at this year's NAFEMS World Congress were dedicated to the subject area of Stochastics – the first two covering both the Management and Characterization of Uncertainty. Technical papers were presented on these subject areas, with applications ranging from Advanced Gas Turbine Design, Vehicle Braking Systems, Pressure Vessel Design and Fatigue Analysis to Safety Critical Systems and Manufacturing Methods. The third session on Stochastics was run as a panel-led discussion session, with the central theme being a review of the key messages derived from responses to the Stochastics Challenge Problem that had been launched at the 2013 Salzburg NAFEMS World Congress by the NAFEMS Stochastics Working Group. Additional challenge problems from SANDIA and NASA were also discussed during the panel session; these and the NAFEMS Challenge Problem are complementary in terms of the level of complexity of the engineering system at the heart of each problem, but together they provoke a great deal of thought in order to solve. It was discussed which types of challenge problems are attractive to volunteers from industry and academia, and how the problem definition and description should be set up to increase the take-up of future challenge problems.

The keynotes from Stryker and Simulia showed the inherent link between stochastics and the modelling and simulation of biomechanical and bio-multi-physical systems. It was impressive to see how much emphasis Stryker put into gathering a representative statistical database for individual optimization of implants by simulation. It was stated that the statistical database for modelling is a big success story and a valuable asset of the company.

It was also evident that the importance of Uncertainty in Engineering Analysis was a significant factor in other sessions too: most notably, the various sessions on Verification and Validation in Analysis, Simulation Governance and SPDM included a significant amount of time discussing how accounting for the effects of Uncertainty would have an important role in these areas too.

Presentations at the SPDM sessions highlighted the importance of managing uncertainty data as an integral part of the whole Engineering Design Lifecycle, the need for efficient sharing, synchronization & integration of a distributed dataset, and the role of standards – including MoSSEC.

The NAFEMS Analysis Management Working Group is quite active in developing guidance and procedures related to V&V. One of the main publications so far is an adaptation of the ISO 9001 Quality Standard to the Simulation domain. The ASME V&V Committee has published three existing standards in this area too:

• ASME V&V 10-2006 Guide for Verification and Validation in Computational Solid Mechanics
• ASME V&V 20-2009 Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer
• ASME V&V 10.1-2012 An Illustration of the Concepts of Verification and Validation in Computational Solid Mechanics

There are still key issues to be resolved around properly dealing with the issues arising from Uncertainty, and a joint publication by NAFEMS and ASME on V&V standards might be a good opportunity to address this topic. ᔡ

Working Group View
Analysis Management
Chris Rogers & Jonathon Smith

For NAFEMS World Congress 2015 the Analysis Management Working Group (AMWG) hosted four sessions. The general theme of the sessions was Verification and Validation (V&V), with an emphasis on validation. Strong V&V is essential to demonstrate and measure the quality and applicability of any simulation; in very loose terms, code verification is considered fundamentally a developer task, whereas calculation verification and simulation validation are user tasks.

The four sessions comprised two paper sessions, a discussion session and an ASME V&V 10 session. The ASME Codes and Standards Committee V&V 10, Verification and Validation in Computational Mechanics, is one of four V&V committees; V&V 10 works closely with AMWG, with membership crossover. V&V 10 had free rein with respect to the structure and content of the session; the session was well received and informative. We expect to repeat the idea in 2017.

The discussion session had a panel of four leading "experts" in V&V; they addressed two major topics: the V&V 10 restricted view of validation and the fitting of this view to ISO 9001. The restricted view of validation requires validation to be assessed against physical test or experiment. ISO 9001 is the root International Standard for quality management and is a requirement for many industries worldwide. As V&V is a requirement of ISO 9001, it is necessary to fit the restricted view of validation to the ISO requirement. This is one of the current major tasks for AMWG. ᔡ

Working Group View


Composites
Four out of a total of 80 sessions at this year's NAFEMS World Congress were dedicated to the analysis of composite materials and of structures made of composites. 18 technical papers were presented on topics ranging from failure prediction to multi-scale analysis.

A highlight was the set of review presentations on interlaminar damage and failure of fiber composites, which summarized the state of the art but also emphasized the need for further development, code verification and result validation. NAFEMS recognized the importance of analysis tailored towards composites in today's world of simulation by inviting a keynote speaker on the topic of the CFRP Lightweight Structure of the BMW i8.

The program also included a short training course on composite FEA. Further, a special edition of Benchmark Magazine with 10 articles on composites highlighted various aspects of composites analysis, including process simulation, the calculation of effective material properties and damage analysis of composite structures.

What makes the NAFEMS World Congress different compared to the specialized conferences which focus on composites – such as the annual technical conference of the American Society for Composites (ASC), the International Conference for Composite Materials (ICCM) or its European counterpart (ECCM) – is the inclusion of other simulation-related topics, the immediate participation of software vendors and the related exhibition. ᔡ

Working Group View
Systems Modelling
and Simulation
Frank Popielas

This year's NWC 2015 provided the first dedicated focus on System Modeling and Simulation with the addition of the "Systems Engineering" sessions. We had 3 sessions with 12 papers. The papers represented the diversity of the member base of the Systems Modeling and Simulation Working Group (SMSWG) and its wide-ranging topics, from a general discussion of the understanding of Model Based Systems Engineering, through requirements engineering, trade-off studies, uncertainty investigation and democratization of simulation, to the virtual engineering ecosystem and enabling infrastructure. Those papers and the discussions that followed showed clearly the need across the industries for guidance. This was also evident in all the keynotes (such as "Potential of Cognitive Computing for Engineering Analysis and Design" from Ahmed Noor, "Reflections on SPDM for Collaborative, Multidisciplinary and Agile Aircraft Product Development" from Peter Coleman and "The Changing Role of Simulation" from Joe Walsh) and the discussions in other sessions, like "Simulation Governance" led by Keith Meintjes. We closed it out with an open session of the SMSWG on "Simulation & System Engineering: A roadmap for future collaboration between NAFEMS and INCOSE". This new area – where we help drive strategic direction for technology development and standards in the space of complex engineering, bridging engineering analysis and systems engineering from a product perspective while at the same time providing directions for a modern engineering ecosystem – will play a central role in NAFEMS' future activities. Some key phrases that have already made their way into our discussion are: Model-based Enterprise, Big Data and Internet-of-Things. A new thinking has started to take hold, and this was very evident in this year's NWC in San Diego. ᔡ


Session Review
Session 1H

Analysis Management
Ron Simmons

In the world of simulation, as the methods, procedures, and software become more robust, giving rise to simulation becoming on par with physical testing, the results of an analysis must be assessed with not only the reduced data but also a statement of the fidelity of that data and its significance. The question can be posed in a variety of ways. Are the results an accurate representation of the physical part or assembly being analyzed? What is the confidence in the results? Are the results plausible? Generating confidence in the results was addressed in a variety of ways by the presenters in this session.

Simon Chetwyn of AWE presented two papers on a method based, in part, on NAFEMS QSS publications. This work detailed the use of a Saaty method in a two-step process that reviews an analysis by reducing a set of attributes – covering the competence of the preparer and review process and the correctness of the specifics entered into the model – to a confidence rating using a spreadsheet. The method described gives a consistent review process that is easy to use, making the results understandable and useful.

The other papers in the session approached the fidelity of analysis results by addressing how simulation modeling is being performed. These papers discussed the increasing power of FE software and the associated hardware on workstations, clusters, and the Cloud for analysis, and the emergence of CAD software to do conceptual analysis, which is often performed by people who do not have the educational background or experience to determine if the results are feasible.

Tobias Spruegel described a meta-model approach where information based upon analytical equations, results from previous numerical simulations, or physical testing information on specific components is entered into a routine, and then the inputs to the predictive model are varied to describe a series of outcomes. The results of the prediction routine are compared with the results of the simulation to check for overall plausibility. This approach validates that the information used to build the simulation model is entered correctly and that the overall approach is sound.

Stephen Hendry detailed a three-factor review process to be used with modeling and analysis to determine if a model accurately simulates a structure. The process described is an in-depth, in-situ method to be used as a part of the overall analysis routine. Eigenvalue model stability analysis is used to determine virtual kinetic energies and virtual strain energies, giving information on the conditioning of elements. Element displacement versus element distortion comparisons are used to determine significant-figure fidelity in the model and results. Finally, using a conceptual assembly consisting of an abstraction layer on a more detailed layer of interest presents an overall performance view of the model and results.

Based on how simulation is being approached and performed, it is imperative that the engineering simulation community have a way of expressing a level of confidence in the results of an analysis. As simulation models have gotten more complex it has become more difficult to identify inaccuracies or errors in models, and while there is no single solution to determining if a model accurately simulates a structure, there is a range of techniques that can help express confidence in the model and results of the analysis. These papers addressed some of the questions of confidence in analysis, from in-process checks to final review and competencies. ᔡ
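The plausibility check at the heart of the meta-model approach can be illustrated with a toy sketch: a cheap predictor is fitted to prior results, and a new simulation result is flagged when it strays too far from the prediction. Everything below – the linear least-squares surrogate, the deflection-versus-load data and the 15% tolerance – is a hypothetical simplification for illustration, not the presented method's code:

```python
# Toy plausibility check: fit a cheap predictor to prior results, then flag
# a new simulation result that deviates strongly from the prediction.
# The linear surrogate, data and tolerance are invented for illustration.

def fit_linear_surrogate(samples):
    """Least-squares fit of y = a*x + b to prior (input, output) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def is_plausible(predict, x_new, y_simulated, rel_tol=0.15):
    """Compare a full simulation result against the cheap prediction."""
    y_expected = predict(x_new)
    return abs(y_simulated - y_expected) <= rel_tol * abs(y_expected)

# Prior results, e.g. tip deflection versus applied load (nearly linear).
history = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.0)]
predict = fit_linear_surrogate(history)

print(is_plausible(predict, 5.0, 10.1))   # in line with the prior trend
print(is_plausible(predict, 5.0, 25.0))   # likely an input or modelling error
```

A real implementation would draw on richer meta-models (analytical equations, previous FE results, physical tests), but the comparison of a simulation result against an independent prediction is the essence of the check.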

Session Review
Sessions 1D, 7C & 8C

Optimization
Peter Bartholomew

NWC15 in San Diego saw 13 presentations on the theme of optimization, split across three sessions. Whereas in the past one might have expected a substantial proportion of presentations to be from the Aerospace industry, it was noticeable that the area of greatest contribution has shifted to Automotive, with almost all the presentations coming from that sector. Right from the start, with the first presentation in Session 1D on 'Composite Materials Multi-Objective Optimization of a Formula 1 Front Wing', the complex mix of Multidisciplinary design or even Multiphysics design was apparent. The balance was towards structural disciplines, but there were three further presentations with a CFD focus, including a paper on the 'Automated Optimization Methodology applied to Car external Aerodynamics' within the first session.

This complexity in terms of the analysis performed was also evident from the more structural design oriented presentations, including 'Optimizing thermomechanical strength of high-load turbochargers' with its cyclic thermal loading and fatigue calculation, or 'Virtualizing the flexible Hose Design Process' with its highly non-linear statics analysis followed by Eigen-frequency calculations used to model the dynamic excitation.

Perhaps it is this ever-increasing sophistication in terms of the size and complexity of the design problems being addressed that led to a strong vendor presence being evident throughout the Optimization sessions. In fact eight of the presentations were from vendors, only two of which were co-authored with end users (one of which being Audi, who co-authored the Dassault paper on high-load turbochargers).

The primary contributors for two further presentations, 'Use of Swarm Intelligence for Topology Optimization of Truss Structures with Stochastic loading conditions' and 'Automated Optimization Methodology applied to Car external Aerodynamics for aero-drag Reduction', were research institutes. Several presentations represented on-going method developments in the area of design optimization. Thirty years ago a typical design optimization study would have been highly idealized and based upon a single analysis code, augmented to produce design sensitivities which would allow solutions to be generated with a handful of function evaluations.

Since that time, as Lee Margetts suggested in his thoughts on high performance computing, the basic speed of computers has increased by a factor of 10,000, whilst HPC provides orders of magnitude improvement upon that. This progressive change made feasible the use of direct search methods such as Genetic Algorithms, with their requirement for 1000s of function evaluations. In turn, the use of direct search methods opened up the possibility of solving non-smooth design problems based upon a far wider range of simulation types. The down-side was that the direct search methods tend not to scale well, so the numbers of design variables considered simultaneously dropped from thousands to tens, with screening techniques being introduced to identify the main design drivers.

It is therefore interesting that the San Diego conference saw the appearance of a number of presentations in which adjoint sensitivity calculation was used, including 'Leveraging the continuous Adjoint Method for Industrial-scale Application' and 'Flow Topology Optimization of a Turbo-charger's inflow Duct'. Hybrid approaches were also in evidence, in which direct design space exploration was followed by sensitivity-based approaches that exhibit improved final convergence, one such being 'Multi-strategy intelligent Optimization Algorithm for computationally-expensive CAE Simulations'. The methods used for 'Weld fatigue considerations in structural Optimization' were also gradient-based, using semi-analytic design sensitivities.

The paper on 'The Optimization of semi-medium Bus FMC ride and handling performance using Analytical Target Cascading' had its focus on the decomposition of requirements and multi-level optimization. A non-automotive presentation, 'A novel Topology Optimization Approach applied for the Design of hollow Turbine Blades', used topology optimization to hollow out turbine blades, with Direct Metal Laser Sintering being the proposed manufacturing technique.

Another innovation over the past 20 years has been the rise of Process Integration systems, which figured in many of the presentations already mentioned here. These allow multi-step processes to be orchestrated and the optimization loops to be controlled. Their use pervades much of the changing process landscape of engineering simulation. Not only do they enable a wider range of design optimization problems to be addressed without laborious special-purpose scripting, they assist stochastic simulation, the automation of SPDM processes, Systems Engineering and Democratization, so bringing together the aims and objectives of a number of the NAFEMS Technical Working Groups.

Finally, the importance of the changes now underway to the competitiveness of our high technology industries is evident from the investment cost being committed to simulation and design optimization. Some of the presentations indicated a willingness to commit days of HPC computing effort to obtain the desired result. ᔡ
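The evaluation-count trade-off behind that history can be felt even on a toy problem. The sketch below runs a minimal (1+1)-style random search on a smooth two-variable "objective" standing in for a simulation; the function, step size and iteration budget are all invented for illustration and do not come from any of the papers:

```python
# Minimal (1+1)-style evolutionary search: mutate the current best design
# and keep the mutant only if it improves the objective. Even on a smooth
# two-variable bowl this burns thousands of objective evaluations.
import random

def objective(x, y):
    """Stand-in for an expensive simulation; optimum at (3, -1)."""
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

random.seed(0)                      # deterministic for the example
best = (0.0, 0.0)
best_f = objective(*best)
evaluations = 1

for _ in range(2000):
    candidate = (best[0] + random.gauss(0.0, 0.5),
                 best[1] + random.gauss(0.0, 0.5))
    f = objective(*candidate)
    evaluations += 1
    if f < best_f:                  # direct search: no gradients used
        best, best_f = candidate, f

print(f"{evaluations} evaluations, best objective {best_f:.4f}")
```

When each evaluation is a full CFD or FE solve, this cost is exactly what makes gradient and adjoint methods – whose sensitivity information replaces most of these trial evaluations – so attractive.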

Session Review
Session 2D

Fracture & Fatigue


Ron Simmons

T
here were several sessions on fracture and plastic wake on the crack tip stresses.
fatigue and the first session contained papers In one approach to dealing with crack growth to
that discussed stable ductile tearing, crack address structural integrity a collaborative effort
growth, fatigue assessment in welded joints, and between EDF Energy and the University of Manchester
rotating machinery fatigue. In each case, the paper described using XFEM in an evaluation of advanced
presented was a snapshot of what was being gas-cooled reactor (AGR) Nuclear Power Plants. It was
investigated and how the approaches were being discussed that AGR cores are composed of thousands
developed to gain greater understanding of the of graphite bricks and after years of service the
phenomena. irradiation and temperature gradually modify the
microstructure and stresses in the bricks which leads
On discussing the simulation of a stable ductile tearing to crack initiation and propagation. In order to
situation in finite element analysis Emily Hutchison of determine the lifetime of the cores it is necessary to
TWI presented the opening paper of the session understand the integrity and performance of the cores
describing the use of re-mesh techniques coupled with due to the cracking of the blocks. The paper discussed
nodal release. The main premise was that at a crack the two-step process that was used to evaluate the core
tip the measure of stress triaxality can be considered as structural performance with the crack growth. First,
a constraint and this paper presented using the T- the Incremental crack growth of a single block was
stress to quantify the constraint in the simulation of evaluated using XFEM with the energetic criteria
crack growth. A procedure was discussed that relevant for quasi brittle materials and the direction of
consisted of a set of steps to simulate the constraint propagation along the crack front was based upon the
based stable ductile crack growth. It consisted of maximum hoop stress and length of propagation for
calculating the T-stress at each node of a crack front; each step of the calculation which is dependent upon
calculating the crack driving force (J-integral) at each the value of strain energy release rate along the crack
node along the crack front; calculating the amount of front. Using the results of the individual block, a whole
crack extension based upon the R-curve coefficients core model was statistically modeled to determine the
and the J-integral value; re-meshing the model to performance of the core with the cracked blocks to
incorporate a new crack front shape; and finally determine the plant effective lifetime.
adjusting the boundary conditions so that the nodes in
a new crack front are released to simulate tearing Another paper that discussed the XFEM approach was
based upon a two-step process wherein first the presented by Ismael Rivero Arevalo of Airbus to discuss
applied displacement is held constant while the nodes the application to crack growth correlation in
for the first tearing increment are released and the aeronautical structures. The paper discussed that
second step applies an arbitrary displacement in order classical analytical methods of determining crack
to calculate the displacement for the next tearing growth in complex systems have high computational
increment. The process is repeated over several tearing costs along with introducing conservatisms that
increments until there is a nearly flat constraint based penalize designs in terms of weight, cost, and
R-curve along the crack front such that for small maintainability and that XFEM is one of the numerical
increases in applied J-integral values there are methodologies appearing as an alternative with lower
significant increases in crack extension which costs and the ability to evaluate complex systems that
demonstrates unstable crack growth. Further work is can produce acceptable assessments in fatigue and
to incorporate Q-stress as the constant parameter damage tolerance analyses. The paper went through a
since it may be more applicable to provide better crack series of benchmark tests to demonstrate that the
growth predictions since it will incorporate the effect of procedure resulted in results that were comparable

22
with the expected results. From the benchmark tests model, the HS method results were more conservative.
the paper went on to discuss five cases with more The conclusions drawn by the authors based upon the
complex geometries: A lug, a hollow cylinder, a solid two approaches were that the HS method gives a
shaft, an oblique crack under a uniaxial load, and a simpler approach by not detailing the notch
complex structural aircraft fitting. The results of the analyses were consistent with the physical parts where crack growth was observed. The authors reported that using XFEM presented advantages over traditional finite element modeling: the crack path does not have to be specified, since propagation is not tied to element boundaries in the mesh; it is a reliable method for evaluating stress intensity factors and predicting crack growth; and it produces high-accuracy results which simulate crack paths in complex problems faithfully. Further investigations will introduce complex stress sequences representing actual spectra during operational life, together with crack retardation by simulating local crack-tip yielding.

Using classical finite element methods, Hugo Bottino Di Gioia Almeida presented a comparison of different local approaches for fatigue assessment. The paper discussed analysis of subsea equipment in the oil and gas industry, where stress concentrations at geometric discontinuities such as welded joints are directly related to the initiation and propagation of cracks under cyclic loading, impacting fatigue life. Two alternatives to the global stress method were considered for the assessment of a welded joint: the Hot Spot (HS) method and the Notch Stress (NS) method. The HS method considers the element type, mesh refinement, element order, and integration points; the nonlinear stress peak is not used, since only the geometric discontinuity is modeled, creating a singular point in the stress concentration region. The objective of the NS method is to calculate the effective notch stress generated by a local notch root, assuming linear elastic behavior. The greatest challenge of this method is to correctly model the realistic behavior in the vicinity of the notch, and due to the nature of the modeling near the notch it is recommended that second-order 3D solid elements be used in the analysis. Applying both methods to a representative case led to several conclusions: the HS method provides a simpler representation, allowing coarser meshes compared to the NS method; a mesh convergence study is recommended for the HS method; the NS method requires the notch details to be modeled, which greatly increases the number of elements in the model and often calls for submodeling techniques; and the NS approach allows a post-welding notch sensitivity study during the design stage to meet fatigue life criteria, while with the HS method the notch detail is assumed to be included in the S-N hot spot curve and only general changes, such as a plate thickness increase, are possible.

To close out the session, Frederic Kihm of HBM-nCode provided an update on an ongoing study of fatigue life under Sine-on-Random excitations. As described, Sine-on-Random occurs with rotating machinery when the typical mechanical vibration is made up of sine tones superimposed upon background noise. Components mounted on rotating machinery must be designed to survive these vibration levels over the entire service life. The standard method of estimating fatigue life from a Sine-on-Random excitation is to perform a transient analysis using a time-domain realization of the required PSD and harmonic tones, an approach that is very demanding of CPU time. The presentation discussed a spectral-domain approach, which offers an advantage over the time domain when there is limited stress time history data or when computation times are prohibitive. Using the method presented, the aim is to find a statistically derived, rainflow-like distribution of cycles corresponding to the stress PSD and sine tones; from the derived rainflow distributions, a robust method of deriving fatigue estimates was presented. Included in the presentation was a case study using the method to assess the fatigue damage on a component attached to a helicopter fuselage. ᔡ
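The rainflow-like cycle distribution at the heart of Kihm's method can be illustrated with a generic counting routine. The sketch below is a minimal three-point rainflow counter in the spirit of ASTM E1049 together with a Miner damage sum; it is not HBM-nCode's implementation, and the S-N constants are invented purely for illustration:

```python
def rainflow(series):
    """Count rainflow cycles in a stress history.

    Returns a list of (range, count) pairs, where count is
    1.0 for a closed full cycle and 0.5 for a half cycle.
    """
    # Reduce the signal to its turning points (peaks and valleys).
    tp = []
    for x in series:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) >= 0:
            tp[-1] = x  # still moving in the same direction
        else:
            tp.append(x)

    cycles, stack = [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 3:
            x_range = abs(stack[-1] - stack[-2])
            y_range = abs(stack[-2] - stack[-3])
            if x_range < y_range:
                break
            if len(stack) == 3:
                cycles.append((y_range, 0.5))  # half cycle at the start
                stack.pop(0)
            else:
                cycles.append((y_range, 1.0))  # closed full cycle
                del stack[-3:-1]
    # Whatever remains unclosed is counted as half cycles (the residue).
    for a, b in zip(stack, stack[1:]):
        cycles.append((abs(b - a), 0.5))
    return cycles


def miner_damage(cycles, C=1e12, m=3.0):
    # Hypothetical S-N curve N = C * S**(-m); Miner's linear damage sum.
    return sum(n / (C * s ** -m) for s, n in cycles if s > 0)
```

For the classic nine-point history used in worked examples, [-2, 1, -3, 5, -1, 3, -4, 4, -2], the routine counts 1.5 cycles at range 4, one cycle at range 8, and half cycles at ranges 3, 6 and 9.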

Session Review
Session 4B

Nonlinear Approaches
in Multibody Simulation
Analysis
Ron Simmons

The second session on Multibody (MB) simulation took the topic into the nonlinear realm. While MB simulations in the linear range run as parallel single applications, in the nonlinear regime Newton time-step advancing schemes must be addressed for different processes, each of which has its own manner of producing the time steps. An overall referee program or algorithm must be employed to synchronize the processes so that they advance in step with each other.

The first paper of the session, presented by Mario Felice of Ford in collaboration with Dassault Systèmes, provided an evaluation of a small pickup truck powertrain refinement analysis with respect to NVH using a system-level simulation methodology. The proposal was to incorporate component-level nonlinear dynamics through the use of a nonlinear finite element analysis routine in combination with MB simulation to assess powertrain system-level dynamic effects for NVH.

Two approaches were considered for this analysis. In the first approach, the output forces and moments generated by the finite element analysis, using a fully modeled leaf spring system, are used in the MB dynamics software to build a beam model of the springs. The beam element model is then correlated with the nonlinear FEA model for use in the final analysis. The downside of this technique is that there are details that are not included in the dynamics beam model and must either be accepted or accounted for in the final analysis runs. In the second approach, real-time cosimulation between the finite element analysis and the MB dynamics software is run. The process for attaching the model requires special setup between the FEA solution and the MB dynamics runs. Using this technique it is possible to accurately predict the forces at the attachment points between the leaf springs and the vehicle chassis. With this method the local suspension behavior can be evaluated directly as part of the total powertrain response for NVH using a single MB simulation approach.

Frederic Cugnon of Samtech addressed the situation in which using MB simulation to couple nonlinear analysis and MB dynamics leads to large models with resultant high computational costs. The paper noted that explicit solvers are typically used to resolve large models, but that the time integration can be performed efficiently using implicit methods, and that by using a multi-model solver computational time can be saved. The presentation focused on one particular solver solution but indicated that other solvers with similar capabilities could be adapted to use the methods presented.

It was shown that with a strongly coupled approach, using a master MB dynamics model and a slave nonlinear finite element model, MB simulations can be run efficiently. This method strongly couples the master and slave models, where the internal and

connection degrees of freedom for each model are split, the time steps and iterations are synchronized, allowing the building of coupled solutions between the codes, and at each iteration all models build their own iteration matrix and residual vector. Using the software described, the coupling algorithm, based upon exchanges at the iteration matrix and residual vector level, works well, allowing the user to switch from static to kinematic and finally to dynamic analysis in a single coupled solution. Additionally, it was shown that specific multiprocessors can be directed to perform specific steps that more efficiently handle that particular step or steps in parallel. To demonstrate this method, two analysis cases were presented that showed a notable speed-up in analysis times.

In a paper addressing large MB simulation models, Motoharu Tateishi of MSC Software described an algorithm for performing cosimulation of MB and finite element models. The algorithm described in the paper uses interface variable coupling and runs a three-part process in which several software solutions are used for the analysis: one MB simulation, multiple finite element simulations, and one glue process. In this case the glue code acts as an executive routine that directs the flow of data between the MB and finite element codes. The cosimulation is started by launching the glue code, which prompts the user to start the MB and finite element applications. The algorithm implements a continuous communication between the MB and finite element codes until a solution is realized. It was described that cosimulation in this manner is a time-consuming operation where two areas needed improvement: spatial-domain and time-domain speed performance. To enhance the spatial-domain speed performance several parallelization routines are employed, and to improve time-domain performance adaptive stiffness update logic is employed.

In the final paper of the session, a master-slave cosimulation technique was discussed by Giancarlo Conti of Siemens Software, where the coupling between solvers is achieved at the iteration level and each solver uses its own integrator, but only one master code is used to solve the Newton iterations. Using this technique, the master code responsible for solving the nonlinear equations of the entire system continues working with the coupled iterations until both solvers satisfy their own solution tolerances and convergence is achieved. This technique is described as having greatly improved numerical stability and a reduced computation time when compared to a staggered cosimulation routine. The bulk of the paper described an analysis run on a stamping press tool that uses contact surfaces to predict the dynamic stress distribution in the stamped parts. ᔡ

Session Review
Session 3B

Multibody Simulation
Ron Simmons

Multibody (MB) simulations were addressed in this session covering a range of conditions over what is generally the linear or stable range. Each paper addressed a different need for MB simulation, and at the same time each expressed a need for higher fidelity analysis of complex connected systems. At Jaguar Land Rover (JLR) it was to reduce the cost associated with complex and time-consuming physical testing. Thales Alenia Space used the method to perform simulations on a structure that by its nature could not be physically tested on Earth, as its deployment and use are in zero-G and the near vacuum of space. The paper presented by the University of Bristol and Siemens described a methodology for coupling unsteady aerodynamic loads with flexible bodies to determine the effects of short-term loads on the aircraft structure. All different rationales for the application of MB simulation, but all using it to define a synergistic solution to their particular problem. And each of the papers presented validation of their methods by comparing the results of the analysis with physical testing and actual in-service performance.

The paper presented by JLR discussed the drivers behind using MB analysis for the towing capacity of automotive vehicles. Key among these were competition in the automotive industry, where the company is introducing new models, and how shortened development times affect the way design development is approached to assure that the customer is provided with a vehicle that meets their expectations and complies with regulated safety specifications. The presentation covered the situation where the important towing capacity of the vehicle is virtually tested, which shortens the physical testing phase. Initial testing was performed using a bike carrier attached to the tow bar and comparing the computed results with physical testing. Once good correlation between virtual and physical testing was developed, the program was expanded to cover trailer towing of the vehicle. MB analysis is used at JLR to integrate the platform system by combining in the analysis the suspension components, tires, road surface conditions, and overall body structure considerations from normal modes and static modeling.

Thales Alenia Space (TAS) added to the session with a discussion of applying MB analysis for a two-axis oriented satellite solar array. While JLR uses MB to simulate towing capacity to reduce the actual physical testing required, TAS uses MB to simulate flight data where physical testing on the ground is not possible. While some ground testing to validate the methods is done, deployment tests on Earth are not representative of the dynamics of the deployment of large arrays because of air effects and the differences from zero-G conditions, which add large perturbations to the analysis, and it is not possible to deploy such large zero-G structures on the ground in two or three directions. High-fidelity simulation is vital to the design of the solar arrays since deployment of the structure is critical: the failure of a successful activation can lead to mission and satellite loss, while a poorly controlled deployment can lead to destructive shock loading.

Previously, TAS used 1D models to simulate the deployment and tracking, since the drive mechanisms were one-axis systems. In order to more accurately follow the solar alignment and maximize the power generated by the arrays, TAS has developed a two-axis steerable mechanism to rotate around the longitudinal axis and tilt the panels for more accurate solar tracking. MB simulation results that tracked the two-axis rotation and tilt were tested in a partial-array zero-G test rig to determine the correlation of predicted results with test results. The tracking of the MB analysis showed excellent correlation with the test results, and the modeling was also compared against real-time data from a satellite in orbit, with very close agreement. With the MB approach, antisymmetric bending, sliding, and torsional mode effects were fully taken into account, while they had been neglected in the previous 1D models.

A collaborative research paper presented by the University of Bristol and Siemens PLM Software investigated gust loading on an aircraft structure. MB simulation for aircraft loads is a subject of interest since there are many applications where the flexible body structure undergoes simultaneous different loadings, such as wind gusts and landings. Many of these loadings can be considered nonlinear in nature and can involve the use of active structures. The authors discussed that the doublet lattice method, with corrections based upon CFD and tests, is the traditional method of predicting aerodynamic aircraft loads. Since the gusts contain short-duration and impulse loading, this results in a high frequency response that requires the unsteady effects of aerodynamics to be taken into account. MB simulations are by nature time-domain simulations, whereas the unsteady aerodynamics is a frequency-domain response. This has required that the unsteady aerodynamics be approximated for use in a time-domain solution. The paper presented the application of Roger's Rational Fraction Approximation method to the unsteady aerodynamic loads in a time-domain MB solver. ᔡ

Session Review
Simulation Engineering
meets Additive
Manufacturing
Georg Schöpf

The 2015 World Congress provided the platform for the 1st dedicated NAFEMS forum on Additive Manufacturing (AM). The main industries represented in the sessions were automotive, aerospace and medical. Presenters from a range of companies and several areas of expertise talked about their experiences with additive manufactured parts and the challenges involved in simulating the additive build process.

Walter Schmidt, senior engineering manager of modelling and simulation at Stryker Orthopedics (US), works on the development of additive manufactured hip prosthetics. Walter discussed the robustness of the AM process: “The greatest difficulty is that materials differ depending on the AM method and build parameters. Also there are differences in material parameters depending on part orientation in the build space of the machine”. Vlastimil Kunc of the Materials Science Division of the Oak Ridge National Laboratory discussed how simulation can address this issue: “We actually learn how the material behaves depending on build parameters and AM method. Normally we need to simulate the build process first to get reasonably reliable results for the part itself”. Vlastimil, together with his colleagues at Cincinnati Machine Inc. and the software company Alphastar Corp, built an electric Shelby Cobra car with a completely additive manufactured body; Cincinnati builds the biggest additive manufacturing machines worldwide based on the fused deposition modelling method.

Another key challenge that came out of the sessions was the lack of valid and reliable material models for simulation of AM parts. AM technology can take advantage of materials with added value (composites and filled materials) and there is a need to provide reliable models to represent these materials in the simulation environment.

More communication and collaboration is needed. Additive manufacturing technology has been developing rapidly and there is a clear need for more input from the simulation community. It is necessary to document AM methods, including the related build parameters, in order to increase the probability of producing a successful product. Designers and simulation engineers are encouraged to formulate their requirements and communicate them to the suppliers of machines and materials. It is also essential that machine manufacturers disclose their latest developments and coordinate development with simulation engineers. Discussion and interaction is needed to get the required information. ᔡ

Congress Highlights
A selection of noteworthy papers from this year’s Congress.

Ian Symington

The first conference paper I have chosen to highlight is ‘A Primer on Model Based Systems Engineering’ by Bill Brothers of Dassault Systèmes. The origins of NAFEMS began with domain-specific tools. The vast majority of our members are familiar with CFD and FEA codes and are aware of the assumptions and limitations of these tools. With the formation of the Systems Modelling and Simulation Working Group in 2013, NAFEMS started to interact with a new set of engineers…the systems engineers.

When integrating two distinct groups it is essential that we are all talking the same language and that we all have an understanding of where we fit into the design process. Bill’s paper is an excellent introduction for the traditional NAFEMS audience to the role of Model Based Systems Engineering.

The second paper that has been included is ‘Models for Intralaminar Damage and Failure of Fiber Composites – A Review’ by Prof Klaus Rohwer of DLR. In this paper Rohwer reviews failure and progressive failure models for laminated continuous fiber-reinforced polymer composites subjected to quasi-static loading. This paper provides an excellent round-up of the different approaches that are available and provides comprehensive references to allow the reader to explore this area further.

Finally, the winner of the ‘Most Innovative Paper Award’, ‘Improvements of an Air-Liquid Interface In-Vitro Testing Method for Inhalable Compounds Using CFD-Methods’ by Dr C Brodbeck of the Fraunhofer Institute SCAI, is reproduced. The paper uses computational fluid dynamics to investigate the most effective technique for exposing cells, with the goal being to develop a general application that is suitable for all kinds of airborne materials. ᔡ

A Primer on Model
Based Systems Engineering
Bill Brothers - Dassault Systèmes SIMULIA Corp.
Member of the NAFEMS SMSWG

Introduction

This paper presents an introduction to Model Based Systems Engineering (MBSE) for finite element analysts. The finite element (FE) analyst can play a significant role within systems engineering, but it is necessary to first understand MBSE before the role can be leveraged.

Many companies are beginning to understand the value of MBSE and are enhancing their systems engineering capabilities to achieve greater project success. One study from Eric Honour [1] shows average project cost overruns of 50% are eliminated with the increased application of System Engineering (SE) effort. This same study revealed that schedule overruns are reduced from an average of 80% to 0% with increased SE effort. In both cases the variability about the average reduces with increased SE effort.

Another study done between Carnegie Mellon University, NDIA, and CEI [2] finds a strong statistical correlation between systems engineering capabilities and project performance. Furthermore, the study reveals that the correlation between systems engineering capability and project performance becomes stronger with increasing project complexity. As products become more complex, the need for systems engineering capability also increases. With the business benefits of MBSE becoming apparent, it is a matter of time before finite element analysts will be expected to know the basics of MBSE and be able to effectively contribute to a systems engineering process.

MBSE vs. SE vs. System Models

Model-based systems engineering (MBSE) is the formalized application of modelling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases (INCOSE-TP-2004-004-02, Version 2.03, September 2007). The emphasis is on leveraging a virtual representation of a system to support the various engineering and business activities through a product lifecycle.

Systems engineering is a discipline that ensures that a system of interest meets all internal and external requirements through the definition and application of engineering processes. This is challenging when you consider that a customer’s requirements may include vagaries such as how a surgical implement may feel in the hands of a surgeon, quantitative requirements such as cost, and statistical variances of risk and warranty. In addition to the customer requirements are the internal requirements necessary to achieve business objectives such as margin and delivery. This combination results in a necessary consideration of the integration of many diverse disciplines. The systems engineer has visibility over all aspects of a program with responsibility for the integration of a diverse set of domains.

System models, on the other hand, are a combination of behavioural expressions which, when considered independently, cannot predict the behaviour of the whole. A system example is the calculation of vehicle fuel economy by examining the engine, transmission, driveline, and electronic controls. It is impossible to calculate the fuel economy of a vehicle by considering the behaviour of each of the components independently. This becomes apparent when you consider that the electronic controls monitor many variables from the engine and powertrain to adjust various conditions of how the engine and transmission operate, such as RPM and current gear selection. It is only when all the relevant behavioural expressions are considered together that we can calculate the system behaviour to predict fuel economy.
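The fuel economy argument can be made concrete with a deliberately crude sketch. All numbers and functional forms below are invented purely to show the point: the gear the controls select depends on the engine's efficient operating point, so only the coupled model predicts the system-level fuel figure:

```python
GEARS = [3.5, 2.0, 1.4, 1.0, 0.8]            # hypothetical gear ratios


def engine_rpm(speed_kmh, gear_ratio):
    return speed_kmh * 40.0 * gear_ratio      # invented driveline kinematics


def bsfc(rpm):
    # Invented brake-specific fuel consumption curve, best near 2200 rpm.
    return 200.0 + 5e-5 * (rpm - 2200.0) ** 2


def controls_pick_gear(speed_kmh):
    # The "electronic controls" expression: choose the gear that puts
    # the engine closest to its efficient operating point.
    return min(GEARS, key=lambda g: abs(engine_rpm(speed_kmh, g) - 2200.0))


def cycle_fuel(speeds, gear_strategy):
    # Fuel over a drive cycle; power demand is an invented function of speed.
    fuel = 0.0
    for v in speeds:
        power = 5.0 + 0.002 * v ** 2
        fuel += bsfc(engine_rpm(v, gear_strategy(v))) * power
    return fuel


cycle = [30, 50, 70, 90, 110]
coupled = cycle_fuel(cycle, controls_pick_gear)  # engine + gearbox + controls
fixed = cycle_fuel(cycle, lambda v: 1.0)         # transmission in isolation
```

Evaluating the transmission in isolation (a fixed gear) gives a different, and here worse, fuel figure than the coupled engine-transmission-controls model; no single expression predicts the system result on its own.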

The RFLP approach to systems engineering

The RFLP approach to systems engineering is an organized approach which utilizes a hierarchical decomposition of high level requirements into successive levels of model fidelity. Requirements come from the customer, knowledge of the industry, and internal business objectives. Requirements are always changing, and as such need to be actively managed and propagated continuously through the program.

As requirements are defined, they are broken up into a functional decomposition of the product or program. The functional specification designates what a particular product must do to satisfy the requirements. There is no definition at the functional level as to how a function is to be accomplished, only that it must be so in order to meet the program requirements. The goal at the functional level is to delay design decisions, since imposing them too early will unnecessarily constrict the space of possible solutions. You would not want to assume a battery for a power source when the type and amount of power is unknown. Solar or gas may fulfil the requirements better.

Typically a systems engineer will decompose the requirements into functions and delegate these to the different engineering and business domains tasked with supporting the program.

The decomposed functions are allocated to the various engineering domains to create and apply logical models of behavioural expressions. Logical models define how the functions are carried out, but don’t yet describe the physical characteristics of how the product is made. A logical model of a battery, for example, may include expressions describing voltage as a function of remaining capacity, battery chemistry, current delivery, mass, and battery life, but it would not include the physical geometry of the battery itself. The logical models are mathematical expressions of a particular behaviour.

An experienced finite element analyst will construct and consult a free body diagram prior to creating a finite element model. The free body diagram is a logical representation of a mechanical system, thus finite element analysts already make use of logical models prior to the construction of the physical level finite element model.

The lowest modelling level within the RFLP approach is the physical level. At the physical level, the geometry of each component in an assembly is defined using CAD, the behaviour is captured using finite element analysis, and other domains are similarly detailed as required. Finite element analysis is the physical description of the part or assembly, revealing how a product is constructed and its response to imposed conditions.

Often the RFLP model is described as the systems engineering V, as shown below in Figure 1. It can be interpreted from Figure 1 that RFLP is a sequential series of activities that flow through modelling, to product realization, finally followed by verification and validation activities. However, the real world demands continuous execution of RFLP, with verification and validation occurring simultaneously, and thus the typical V diagram is in reality a filled triangle.

Almost every FEA engineer with significant industry experience has been in the situation where the model they were constructing was based on out-of-date geometry or boundary conditions. With a robust systems engineering capability, these updates are pushed to the analyst as they occur, but all too often the analyst ends up completing a model of questionable relevance to the program. In a like manner, the analyst may discover that design changes are necessary to satisfy the requirements, only to find that prototypes are already being made. Both of these cases are symptoms of a lack of systems engineering capability and are caused when engineering disciplines operate in isolated silos of expertise rather than in a mature and collaborative systems engineering environment. A need for collaboration can be observed when critical project information is exchanged through informal or accidental means rather than a formal platform designed to track and capture this critical intellectual property.
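As a purely illustrative aside (names invented, no specific tool implied), the requirement-function-logical-physical chain can be pictured as linked records, which is what lets a changed requirement be traced down to the geometry and boundary conditions an analyst consumes:

```python
from dataclasses import dataclass, field


@dataclass
class Element:
    name: str
    level: str                        # "R", "F", "L" or "P"
    children: list = field(default_factory=list)

    def decompose(self, child):
        self.children.append(child)
        return child


# Requirement -> Function -> Logical -> Physical, following the battery example.
req = Element("Provide 12 V for 8 hours", "R")
fun = req.decompose(Element("Store and deliver electrical energy", "F"))
log = fun.decompose(Element("Voltage vs. remaining-capacity expression", "L"))
phy = log.decompose(Element("Detailed cell geometry + finite element model", "P"))


def levels(root):
    """Walk the decomposition and report the level of each element."""
    out = [root.level]
    for child in root.children:
        out += levels(child)
    return out
```

Walking the tree yields the levels in order, ['R', 'F', 'L', 'P'], and the same links can be traversed downwards to push requirement changes to each affected model.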

Figure 1: The systems engineering “V” proposes a hierarchical decomposition of a system and subsequent verification and validation activity. In real life, however, these are not always done sequentially.

Figure 2: Here is a traditional engineering workflow where domains work in isolation and rely on knowing who to go to in order to achieve integration.

Figure 3: Here is a workflow relying on a collaboration platform which integrates and synchronizes all of the design domains. Critical IP that was communicated using ad-hoc means is now stored and accumulated.

Figure 4: This is a system model representation with model elements (i.e. expressions of behaviours), attributes, and relationships. Different relationships may be necessary, from independence to requiring co-simulation.

Although inefficiencies accumulate within the engineering domains without systems engineering capabilities, often the real problems are discovered during the process of domain integration. The solutions obtained in each domain may be optimal within the domain, but may be incompatible with solutions in other domains; thus the end result is a sub-optimal system solution, or at worst, a collection of incompatible domain solutions which result in a system which fails to meet the requirements.

A systems engineering approach requires the use of a systems model so that all domains can be integrated as they are developed rather than as an afterthought. Effective systems engineering does not rely on ad-hoc collaboration and instead relies on specific processes and tools to handle synchronization and collaboration.

The system model

The system model is a collection of behavioural expressions as model elements that contain attributes and are related to other behavioural expressions through relationships.

A behavioural expression could encompass any manner of things that support product functions and thus requirements, such as a

• CAD geometry model,
• voltage model,
• finite element model,
• cost model,
• controls model,
• fluids model, etc.

It is the job of the systems engineer to ensure that there are behavioural expressions to support the complete functional decomposition of the requirements.

Each behavioural expression may have a collection of attributes. A finite element model may have a mesh associated with it, or a voltage model for a battery may follow a certain law depending on the battery chemistry. In CAD this could be a collection of dimensions and tolerances.

The behavioural expressions may have different levels of fidelity depending on the requirements of the model. In general, the lowest fidelity model is selected which meets the accuracy requirements of the system. For a geometry expression, an example of increasing fidelity would be a design envelope, a sketch, a dimensioned assembly, and an assembly composed of fully detailed CAD parts with geometric tolerances.

Equally important to the creation of the behavioural expressions are the definition and creation of the relationships between the behavioural expressions. The degree of coupling is also determined, along with identifying which expressions are related to one another. In some cases the coupling across engineering domains is very tight – such as between the structural and fluid domains within the deployment of a parachute. In other cases, one domain independently drives another, such as the effect of geometric tolerances on product cost.

Once the system model is assembled it can be used to predict those system behaviours which cannot be predicted by examining the behaviour elements in isolation. This becomes a fundamental tool for the systems designer.

The value of the system model

With a system model one can predict a set of outcomes given a set of conditions. If the outcomes satisfy the complete set of requirements, then the model represents a successful system.

The real value of a system model is that it is a tool that may be used to explore and understand the entire response space of a system. The understanding of the design space provides the decision support needed to make system design decisions.

A finite element analysis could tell you if a particular fastener will survive a load history, but the design

question may be more along the lines of which fastener should I choose? A rudimentary system model could be constructed using a basic cost table of fasteners related to a finite element model within a process manager in order to explore the response space, allowing you to answer the question, “which is the most cost-effective fastener that satisfies my load and deflection requirements?” As behavioural expressions are added, the system solution becomes more relevant to the product defined by the requirements.

In a like manner, the question that should be asked by product designers is “what tolerance do I need on a particular dimension?” This question can only be answered by examining tolerance from a full requirements point of view, including cost and life as well as the more traditional stack-up requirements within an assembly. A systems thinking approach will enable trade-offs such as material vs. tolerance, for example.

The next step

Once the system response surface is known and all domains are fully synchronized in a collaborative environment, one may be tempted to believe they are done. A system optimum on the response surface can be located to realize the best possible product which satisfies all the requirements. However, there is yet another step that can be taken in terms of systems engineering capability.

A paper by Alex Van der Velden, David Fox, and Jeff Haan [3] asserts that each behavioural element within a systems model contains uncertainty. The causes are various, but there is always uncertainty within each element. The uncertainties through each behavioural element contribute to an overall uncertainty in the system model, and thus there is an uncertainty band about the system response space.

An optimal solution will often be found to be coincident with some boundary on the allowable design space. When the uncertainty is

Verification and Validation

A discussion of verification and validation will usually evoke an animated discussion between experts in the field of systems engineering, as there is not a uniformly applied definition. The two questions that are being asked with verification and validation are, “am I solving it right?” and “am I solving the right problem?” In finite element analysis, you can refine your mesh until your solution converges so that you can confidently solve the problem, but this will not tell you if you are solving the correct problem.

An example illustrating the difference between verification and validation is the answer to the question “Given that six wolves catch six lambs in six minutes, how many wolves will be needed to catch sixty lambs in sixty minutes?” Often, following the pattern will solve this trick question – you could verify that you solved the problem:

[6 wolves, 6 lambs, 6 min.] × 10 = [x wolves, 60 lambs, 60 min.]
6 wolves × 10 = 60 wolves

You can verify that 6 times 10 is 60 and that it is correct, but it doesn’t solve the problem since the initial equation is wrong. Setting up the problem algebraically reveals that 6 wolves are needed.

Physical validation of the solution is watching 6 wolves eat the lambs. This is what we do when we validate an FEA result against a physical test.

Virtual validation remains orthogonal to the original solution; that is, the process used for validation must be independent of the original. In our lambs example, virtual validation would be a calibrated battle simulation of lambs vs. wolves. Virtual validation of an FEA requires an alternative modelling approach unrelated to FEA, such as handbook calculations. Finally, to ensure that you are solving the right system you would want to validate the requirements. In this case, that would mean talking to the shepherd to determine that he needs to know how many wolves are needed to eat his lambs. In fact, you may find out that the shepherd doesn’t care about the wolves’ appetites and he’s more interested in not losing any lambs.

Summary

In summary, we have seen that the application of model based systems engineering provides significant cost and time
incorporated in the system model, then it is benefits to industry, and we can expect that FEA as well as
certain that if a manufacturing run is long enough other domains will be needed to participate in the process.
that some of the products will fail. To truly find the
system optimum, it is necessary to incorporate For systems engineering to be successful, requirements must
the system model uncertainty within the decision be carefully considered to ensure that all needed behavioural
support process. The fully mature systems expressions are present in the systems model. A systems
engineering process will have quantified and model will not be successful without actively incorporating
managed uncertainties within the system model.

32
system requirements, which tend to be continuously evolving. Furthermore, these model elements must be synchronized with functional definitions and requirements through the application of a collaboration environment.

A system model provides value as it is used to provide decision support by the exploration of the system response surface through cause-effect space. This requires that the model elements be robust and execute quickly, or that different expressions of the system model be solved simultaneously and collected to discover the response space. Each behavioural model element contributes to the response space, and it is the responsibility of the domain experts to calculate and define the accuracy of the model elements that they select. For an FEA model to be used within a system model, the uncertainty it contributes should be calculated.

Figure 5: A humorous look at Verification and Validation
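To make that last point concrete, here is a minimal sketch of how an uncertainty attached to an FEA-derived model element can be propagated to an uncertainty band on the system response. All numbers, the response function, and the allowable limit are hypothetical placeholders, not values from the article:

```python
import random

random.seed(42)

# Hypothetical system model: deflection = load / stiffness.
# The stiffness comes from an FEA-derived surrogate whose uncertainty the
# domain expert has quantified as a 5% (1-sigma) scatter.
def deflection(load_n, stiffness_n_per_mm):
    return load_n / stiffness_n_per_mm

NOMINAL_STIFFNESS = 120.0   # N/mm (illustrative)
SIGMA = 0.05 * NOMINAL_STIFFNESS
LOAD = 500.0                # N (illustrative)
LIMIT = 4.6                 # mm, allowable deflection (illustrative)

# Monte Carlo: sample the uncertain element and collect the response band
samples = [deflection(LOAD, random.gauss(NOMINAL_STIFFNESS, SIGMA))
           for _ in range(20000)]
fail_fraction = sum(d > LIMIT for d in samples) / len(samples)

print(f"mean deflection: {sum(samples) / len(samples):.2f} mm")
print(f"fraction exceeding limit: {fail_fraction:.1%}")
```

With the nominal design sitting close to the limit (500/120 ≈ 4.17 mm), the scatter alone pushes a small share of the simulated production run over the allowable, which is exactly the argument made above: an optimum lying on a constraint boundary will produce failures once uncertainty is included.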

Finally, although there is some debate over the definition of verification and validation, the two questions that an FEA analyst needs to keep in mind are "am I solving it right?" and "am I solving the right thing?"
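The wolves-and-lambs example can be reduced to a few lines of arithmetic that separate the two questions: the proportional pattern is internally consistent and verifiable, yet it solves the wrong equation, while a rate-based setup answers the question actually asked. A sketch, using only the numbers from the example:

```python
# Pattern answer: scale everything by 10 -- verifiable, but the wrong model.
pattern_answer = 6 * 10  # 60 wolves

# Rate-based model: 6 wolves catch 6 lambs in 6 minutes,
# so each wolf catches 1 lamb per 6 minutes.
lambs_per_wolf_per_min = 6 / (6 * 6)   # 1/6 lamb per wolf-minute

# Wolves needed to catch 60 lambs in 60 minutes:
wolves_needed = 60 / (lambs_per_wolf_per_min * 60)

print(pattern_answer)       # -> 60
print(int(wolves_needed))   # -> 6
```

Both computations check out internally; only the second solves the right problem, which is the verification-versus-validation distinction in miniature.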

References
[1] Honour, Eric C. (2004). "Understanding the Value of Systems Engineering." Paper presented at the 2004 INCOSE International Symposium, Toulouse, France.
[2] Elm, Joseph P., Dennis Goldenson, Khaled El Emam, Nicole Donatelli, Angelica Neisa, and NDIA SE. (2008). "A Survey of Systems Engineering Effectiveness – Initial Results." Carnegie Mellon University, Research Showcase @ CMU, Software Engineering Institute.
[3] Van der Velden, Alex, David Fox, and Jeff Haan (2012). "Early Stage Verification and Validation of Cyber-Physical Systems through Requirements Driven Probabilistic Certificate of Correctness Metric." Paper presented at the ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference.

Congress Highlights
Models for Intralaminar
Damage and Failure of
Fiber Composites
Prof. Dr.-Ing. K. Rohwer - DLR, Germany
Member of the NAFEMS DACH Steering Committee and CWG

Introduction

For an efficient design the structural engineer needs to know as accurately as possible under what conditions the designated material will develop damage and finally fail. Only then is it possible to fully exploit the potential of the structure while maintaining the required safety. In any case, some information on damage and failure must be obtained by suitable tests. These tests are usually performed on coupons and aimed at determining the material strength under a specific single state of stress, be it pure tension, compression or shear. The general state of stress in a loaded structure, however, consists of several, if not all, components of the stress tensor. Thus, a criterion is needed which maps the actual state of stress to the limited number of test results.

This paper will review the different possibilities of formulating criteria and point out development tendencies, limited to laminated continuous fiber-reinforced polymer composites. Such reviews have been performed previously, for instance by Nahas[1] or Thom[2]. Since then, however, models have been developed further. In part that is due to the tremendous increase in computational power, which allows for more and more complex models. Besides, the World-Wide Failure Exercises WWFE-I[3], -II[4] and -III[5] have demonstrated deficiencies in the existing failure theories and therewith fired new developments. The large number of existing theories prohibits reviewing them all; rather, only those will be assessed which in the opinion of the author have reached some level of acceptance. Furthermore, not every detail of the respective theory can be outlined; only those aspects will be referred to which the author regards as important.

The scope of this review is focused on intralaminar fracture of laminates made from unidirectional layers which are subjected to quasi-static loading. Delaminations, woven fabrics, and effects resulting from fatigue or impact loads are not covered. Further, the material behavior after the first appearance of damage is of interest, especially for fiber composites. That is because in case of matrix failure the fibers are often able to carry much higher loads. Effects of the progressive failure of fiber composites have been extensively studied by Knight[6]. He differentiated between ply discounting approaches and continuum damage mechanics methods. Libonati and Vergani[7] have recently tested fiber composite behavior before and after failure onset using thermography. They have identified three regions: an initial region without damage, a second region where micro-damages appear which may be initiated by pre-existing defects, and a third region with an extended damage size. Considering these results, within this paper the main focus is laid on the second and third regions. Different failure criteria and damage progression models will be outlined, pros and cons mentioned, and tendencies in the development identified.

A vast majority of existing failure criteria are formulated in stresses, and there are good arguments to do so. Christensen[8], for instance, mentioned that such a formulation would be more suitable in order to fit with fracture mechanics or dislocation dynamics. In addition, he pointed out that viscous material can fail under constant stress, but not under constant strain, due to relaxation. A major point of criticism against a stress-based criterion is related to strength measurements. Usually strength is obtained as the load-carrying capacity at final failure. Many tests, however, show a rather nonlinear stress-strain behavior, which at least in part is due to progressive damage. There is a need to clearly define failure of composites. From comparative studies between deterministic and probabilistic analyses of cross-ply laminates under tension, Sánchez-Heres et al.[9] concluded that an increased understanding is required regarding the effects of progressive matrix cracking in order to reach a safer structure.

During the design phase 'quick and dirty' methods are needed which are fast, simple to use and lead in the right direction, but do not claim to be highly accurate. Among these is the netting theory, where only the fibers are accounted for in carrying loads. Quite popular is a limitation of strains to a fixed amount. Further, there is the 10% rule by Hart-Smith[10], predicting the strength and stiffness of fiber-polymer composites on the basis of a simple rule of mixtures. Though very useful, such methods will not be considered in the following.

Homogeneous models
Shape of failure envelope

Of course fiber composites are not homogeneous; however, the overall behavior of the material can be appropriately described by smeared-out properties. Also, a large number of failure models are based on the assumption of a homogeneous anisotropic material, specifying a failure envelope in stresses or strains. There is general agreement that the failure envelope should be convex. Otherwise, unloading from a certain state of stress may indicate failure. It is under discussion, however, whether the failure surface should be open or closed. Christensen[8] stated: "All historical efforts to derive general failure criteria used the condition that the isotropic material would not fail under compressive hydrostatic stress", which means that the failure surface is assumed open. In his treatise on failure surfaces for polymeric materials, Tschoegl[11] pointed at "the common sense requirement that the surface should be open in the purely compressive octant (because hydrostatic compression at reasonable pressures cannot lead to failure in the ordinary sense)". For fiber composites the situation is different. Because of the stiff fibers, an external hydrostatic load causes matrix stresses which differ considerably from hydrostatic ones. Comparing theories and experiments of the WWFE-II exercise, Kaddour and Hinton[12] mentioned "the diversity exhibited between the theories as to whether certain failure envelopes are 'open' or 'closed'". However, this discrepancy should not exist, and Christensen[8] has provided reasonable arguments why fiber composites cannot sustain unlimited hydrostatic pressure.

Non-interactive criteria

The easiest criterion limits every stress component separately, not accounting for any interaction. Astonishingly enough, this rather crude approach has been applied quite successfully by Zinoviev et al.[13] in WWFE-I. The failure criterion was supplemented by a special model characterizing the progressive damage under transverse tension and in-plane shear of a UD ply within a multidirectional laminate. This model describes the loading as linear elastic – ideally plastic, and the unloading as linear elastic with a smaller modulus. A comparatively favorable performance was highlighted by Hinton et al.[14]. Some discrepancies between theoretical predictions and test results were traced back by Zinoviev et al.[15] to the assumption about the fatal impact of ultimate transverse compressive stresses in a single ply on the failure of the whole composite laminate.

Hart-Smith[16]–[18] applied modified maximum strain as well as maximum stress criteria in the WWFE-I. The modification effects a truncation of the failure envelope in the biaxial tension–compression quadrant. Differences between analysis and test results were explained by deficiencies with respect to matrix-dominant failure. The maximum strain criterion in conjunction with plasticity used by Bogetti et al.[19],[20] delivered good results in the WWFE-I; the strengthening effect that appears under tri-axial loading or hydrostatic pressure, however, is obviously not well captured, as has been admitted by Bogetti et al.[21]. Furthermore, Bogetti's theory predicts a completely closed failure envelope even for isotropic materials.

Nahas[1] has referred to further non-interactive theories which to some degree account for the strength of the constituents. In general, however, these theories have not been used very often in practice. It is but the maximum strain model which, because of its simplicity, is still applied, especially in the initial design phase.

Interpolation criteria

Following yield conditions for isotropic and orthotropic materials, Hoffman[22] proposed a quadratic fracture condition accounting for the difference between tensile and compressive strength in fiber and transverse directions. Based on the idea that a tensor polynomial can describe the failure surface, Tsai and Wu[23] came up with a similar approach. These popular failure criteria consider interactions between different components of the stress tensor. They suffer, however, from certain drawbacks. Distinguishing between fiber breakage, matrix cracks, or interface failure is not possible with a smooth mathematical function. Besides, determination of the interaction terms linked to the product of two normal stress components requires difficult tests under biaxial load; and these terms are important since they may indicate implausible strength levels above those in fiber direction. By comparing with test results under plane stress conditions, Narayanaswami and Adelman[24] concluded to rather set these terms to zero. Liu and Tsai[25] underlined that the failure surface must be closed, and they gave an overview of different possibilities for the interaction terms. Further, they have outlined a procedure for determining progressive laminate failure using reduced moduli, which in the end leads to last-ply failure. DeTeresa and Larsen[26] have proposed relations between the interaction terms and the strengths in fiber and transverse direction which fit to an open failure surface. Tests under hydrostatic pressure have shown no damage.

There are a number of other interpolation criteria with certain inconveniences or restrictions. The criterion proposed by Norris[27] does not explicitly account for differences in tensile and compression strength; on application the user must check the sign of the different stress components and use corresponding strength values. The same holds for the Tsai–Hill criterion as described by Azzi and Tsai[28], which differs from the Norris criterion only in the interaction between the axial and transverse normal stress. The proposal by Yamada and Sun[29] is sometimes looked upon as a degeneration of the above-mentioned criteria, a view which ignores the intention to determine the final failure of a laminate. Further, the shear strength to be used in this criterion must be determined in tests with cross-ply laminates, leading to much higher values than those obtained from a single ply. It is also worth mentioning that Yamada and Sun stressed the need to account for statistical distributions of the strength values. The criterion by Rotem[30],[31] differentiates between failure in the fibers or in the matrix. Fiber failure (FF) is modeled by a maximum stress criterion in fiber direction with some modifications accounting for effects of transverse stresses, whereas matrix failure is predicted using a quadratic interaction of axial, transverse, and shear stresses. By means of comparison with test results, Kaddour and Hinton[12] stated that there are indications "that the theory does not discriminate adequately between initial and final failure".

Several other interpolation criteria have been mentioned by Nahas[1], which to the author's knowledge have not reached much public attention.

Physically based criteria

Distinguishing between interpolation criteria and physically based ones is a bit artificial and a traditional classification. Neither are the interpolation criteria free of some physical background, nor are the physically based ones free of some simple interpolation aspects. There rather is a gradual transition between both categories, which makes it somewhat arbitrary where to draw the line.

In his model development, Hashin[32] pointed out that using a formulation quadratic in stresses is based on curve-fitting considerations rather than on physical reasoning. He looked at the stress invariants and differentiated between four failure modes: tension or compression in fiber or in transverse direction. For the inter-fiber failure he mentioned the idea of holding the stresses acting at the failure plane responsible. That implies determining the most probable crack direction, which is computationally costly. Hence he settled for the quadratic formulation, which leads to not fully satisfying solutions.

Building up on Hashin's original idea, Puck[33],[34] formulated a criterion which yielded rather accurate results in the WWFE-I. He strictly distinguished between FF and IFF, where the latter comprises matrix cracks and fiber–matrix debonding. Puck, too, regarded the stresses in the fracture plane as responsible for IFF. If the normal stress on the fracture plane is positive (tensile), then all three stress components foster the failure, whereas compressive stress increases the strength by means of internal friction. The different behavior under tension and compression requires additional material parameters which describe the inclination of the fracture master surface at zero normal stress. Recommendations for these inclination parameters are provided by Puck et al.[35]. Based on Puck's model, the strength degradation of laminates which suffer from an IFF within a certain layer was investigated by Knops and Bögle[36]. Also the German engineering guideline[37] regarding the analysis of components from fiber reinforced plastics relies on Puck's failure criterion. Dong et al.[38] complemented Puck's theory by adding effects of ply thickness and ply angles of neighboring laminae.

The failure mode concept (FMC) as set up by Cuntze and Freund[39] aimed at capturing the behavior of five different failure modes. Based on stress invariants, the model provides one failure condition each for two FF modes and three IFF modes. Corresponding to Puck's inclination parameters, two curve parameters are to be determined by multi-axial tests. Possible interactions between failure modes are accounted for by a probabilistically based series spring model approach. The FMC was subsequently improved by Cuntze[40],[41]. In connection with the behavior of isolated and embedded laminas, special emphasis is put on the difference between the onset of failure and the final failure of composite laminates. Furthermore, Cuntze[42] carefully examined the tests provided for the WWFE-II and, after certain corrections, obtained rather good agreement.

At NASA Langley Research Center, Dávila et al.[43] have proposed failure criteria for fiber composite laminates under plane stress conditions, which were extended to three-dimensional stress states by Pinho et al.[44] and eventually improved with respect to matrix compression failure by Pinho et al.[45]. As with Hashin's[32] approach, the failure model considers four different scenarios: tension and compression in fiber and transverse direction. For compression in fiber direction, the effect of fiber undulation is regarded. Nali and Carrera[46] compared this approach against some interpolation criteria for plane-stress problems and found good agreement with test results.
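To make the mode-by-mode idea concrete, here is a minimal plane-stress sketch in the spirit of the mode-separating criteria discussed above. The functional forms and strength values are illustrative placeholders, not the published expressions of Hashin[32] or Dávila et al.[43]:

```python
# Illustrative UD-ply strengths in MPa (placeholder values, not from any datasheet)
XT, XC = 1500.0, 900.0   # fiber-direction tension / compression
YT, YC = 40.0, 150.0     # transverse tension / compression
S = 70.0                 # in-plane shear

def failure_indices(s11, s22, s12):
    """Return mode-separated failure indices (>= 1.0 means predicted failure).

    A Hashin-style separation into fiber and matrix modes under plane stress,
    with a quadratic stress interaction including shear.
    """
    if s11 >= 0.0:
        fiber = (s11 / XT) ** 2 + (s12 / S) ** 2
    else:
        fiber = (s11 / XC) ** 2
    if s22 >= 0.0:
        matrix = (s22 / YT) ** 2 + (s12 / S) ** 2
    else:
        matrix = (s22 / YC) ** 2 + (s12 / S) ** 2
    return {"fiber": fiber, "matrix": matrix}

# A ply loaded mainly in transverse tension plus shear: the matrix mode governs
idx = failure_indices(s11=300.0, s22=30.0, s12=35.0)
print(idx)
print("governing mode:", max(idx, key=idx.get))   # -> governing mode: matrix
```

The practical appeal noted above is visible here: a mode-separated criterion reports which mechanism governs, something a single smooth polynomial such as Tsai–Wu cannot do.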
In a detailed analysis, Catalanotti et al.[47] described certain pitfalls of existing 3D failure criteria. They pointed to the requirement of using in-situ strength properties in order to account for the ply thickness effect. The pitfalls could be avoided by an improved criterion for transverse matrix failure. Longitudinal tension failure is predicted by a maximum strain criterion, and longitudinal compression failure accounts for fiber kinking. Building on this proposal and on the three-dimensional plasticity model for composite laminates developed by Vogler et al.[48], Camanho et al.[49] formulated new criteria where the transverse failure and kinking models are invariant-based. For validation in case of complex three-dimensional stress states, computational micromechanics turned out to be a useful tool.

Damage mechanics approach

Damage mechanics does not provide conditions at which a certain type of damage occurs; rather, it uses internal variables to describe the progressive loss of rigidity due to damage of the material. Ladevèze and Le Dantec[50] have applied damage mechanics to set up a model which describes ply-wise matrix microcracking and fiber/matrix debonding. Reaching the maximum mean stress or a maximum of the load-deflection curve specifies the laminate failure. This model was adopted by Payan and Hochard[51] to study the behavior of UD laminates from carbon fiber-reinforced plastics (CFRP) under shear and transverse tension. They found elastic behavior up to brittle failure in fiber direction, and a gradual loss of rigidity due to damage under shear and transverse tension. Based on these results they developed a model which covers the damage state by means of two scalar damage variables describing the loss of rigidity under shear and transverse tension loading, respectively. The model has proven to be valid for a "diffuse damage" phase where micro-cracks occur, and it is limited to the first intralaminar macro-crack. Hochard et al.[52] have further extended the model to problems with stress concentrations. The approach is based on a Fracture Characteristic Volume, which is a cylinder defined at the ply scale where the average stress is calculated and compared to the maximal strength of the material.

Barbero and de Vivo[53] presented a damage mechanics approach where the damage surface has the shape of the Tsai–Wu[23] criterion. But it goes beyond a failure criterion by "identifying a damage threshold, hardening parameters for the evolution of damage, and the critical values of damage". These parameters are all related to known material properties but not directly measurable (cf. Barbero and Cosso[54]).

Van Paepegem et al.[55] performed tension tests with [±45]2s laminates and used the results to determine one parameter each for shear modulus degradation and the accumulation of permanent shear strain. The same authors[56] applied these parameters to a mesomechanical model which did not account for time-dependent effects like strain rate or viscoelasticity. Nevertheless they were able to describe the nonlinear behavior up to failure of glass-fiber reinforced composite laminates under various loads rather accurately. Time and temperature dependency of fracture strengths, both in tension and compression, was thoroughly studied by Miyano et al.[57]. They found out that the strength master curves can be set up successfully by using the reciprocation law between time and temperature.

A majority of models for damage progression in laminates are based on the unrealistic assumption that each ply behaves independently of its neighbors. In order to account for the interaction between adjacent layers, Williams et al.[58] developed a continuum damage approach for sub-laminates. Therewith it is not intended to predict details of damage at the ply level, but rather to capture the sub-laminate's overall response. The idea was further upgraded by Forghani et al.[59], considering several aspects specific to damage progression in multidirectional composite laminates, and applied to the open-hole problems of the WWFE-III. The open-hole tension strength of composite laminates was also studied by Ridha et al.[60]. They found a significant interaction between delamination and in-plane damage, so that neglecting delamination would overestimate strength.

Frizzell et al.[61] developed a numerical method based on continuum damage mechanics that is capable of describing sub-critical damage and catastrophic failure mechanisms in composite laminates. They proposed a "pseudo-current" damage evaluation approach which avoids convergence problems even for complex damage mechanisms.

Inhomogeneous models
Strength of constituents

Fibers and the matrix material are characterized by a large disparity in stiffness and strength. Though smeared out in the models reviewed above, this disparity certainly influences the failure process and thus is reflected in certain features. In this section, approaches will be discussed which account for the inhomogeneity in one way or the other. To this end, strength properties of the constituents are needed. Measuring them, however, encounters difficulties.

Resin strengths are typically measured in appropriate tests with neat material. An overview of models with relevance to resin failure was given by Fiedler et al.[62]. These authors have proven that the type of resin failure depends not only on the material itself but also on the state of stress. They found out that "ductility is a function of the amount of tri-axiality and explains why ductile polymers behave brittle when used as a matrix in fiber reinforced composites". Such an effect was detected and analyzed already by Asp et al.[63]. On the other hand, Pae[64] has found that brittle epoxy develops yielding when hydrostatic pressure is superimposed on the loading. Because of these intricacies, properties determined from tests with neat resin must be handled with caution when used in a micromechanical failure analysis.
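The recurring ingredient of the damage mechanics approaches above is an internal variable that irreversibly degrades stiffness. A minimal sketch of that idea follows; the linear evolution law and all numerical values are hypothetical placeholders, not taken from any of the cited models:

```python
# Minimal scalar-damage sketch: secant shear modulus G = (1 - d) * G0,
# with a hypothetical linear damage evolution between two strain thresholds.
G0 = 4500.0        # MPa, undamaged in-plane shear modulus (illustrative)
EPS_0 = 0.005      # shear strain at damage onset (illustrative)
EPS_F = 0.05       # shear strain at full degradation (illustrative)

def damage(eps, eps_max_seen):
    """Damage driven by the largest strain seen so far (irreversible)."""
    e = max(eps, eps_max_seen)
    d = (e - EPS_0) / (EPS_F - EPS_0)
    return min(max(d, 0.0), 1.0)

def stress(eps, eps_max_seen=0.0):
    d = damage(eps, eps_max_seen)
    return (1.0 - d) * G0 * eps

# Loading beyond onset softens the ply...
print(round(stress(0.02), 2))                        # -> 60.0
# ...and unloading reuses the degraded modulus (stiffness loss is permanent)
print(round(stress(0.004, eps_max_seen=0.02), 2))    # -> 12.0 (undamaged: 18.0)
```

Models of the Payan–Hochard type track such variables per mechanism (shear, transverse tension) and couple them; the point of the sketch is only the irreversible stiffness loss that distinguishes damage mechanics from a pure failure criterion.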
Shear strength of the fiber–matrix interface can be obtained from fiber pullout or pushout tests. Kerans and Parthasarathy[65] proposed a procedure for extracting interface parameters from the test data. An analytical model describing the fiber pushout was developed by Liang and Hutchinson[66]. More involved is the determination of interface strength under transverse loads, since secondary transverse stress perpendicular to the primary transverse compression affects the threat of fiber–matrix interface fracture. Correa et al.[67] found out that secondary tensile stress increases the risk whereas compression decreases it.

Measuring fiber tensile strength seems to be a relatively easy task. When performing the tests, however, it becomes apparent that the results depend on the specimen length. The longer the specimen, the lower is the measured tensile strength. Even more questionable is the determination of the compressive strength in fiber direction. In a composite the compressed fibers usually do not suffer a material failure but a loss of stability. Thus the strength limit heavily depends on the matrix properties and cannot be determined from the fibers alone.

Models with some effect of inhomogeneity

In this section, approaches will be discussed which to some extent consider inhomogeneity but still show relations to the homogeneous models mentioned above. This evidently holds for the discrete damage mechanics approach as proposed by Barbero and Cortes[68]. By means of fracture mechanics applied to the inhomogeneous material, they determined parameters for stiffness reduction of the homogenized structure. Barbero and Cosso[54] showed that this approach can be successfully applied to model damage and failure of laminates from CFRP.

Inhomogeneity plays an important role in tests of in-plane shear strength. There is as yet deep disagreement as to how to obtain reliable values. Odegard and Kumosa[69] have thoroughly investigated the standard Iosipescu test with 0° specimens as well as the 10° off-axis test. They found good agreement only if the Iosipescu tests are accompanied by fully nonlinear finite element analyses including plasticity and premature cracks, and the 10° off-axis specimens must be carefully machined to avoid micro-cracks at the specimen edges.

The growth of cracks in a UD fiber-reinforced lamina was modeled by Cahill et al.[70]. By means of the extended finite element method (XFEM) for heterogeneous orthotropic materials where material interfaces are present, as well as a modified maximum hoop stress criterion for determining the direction of the crack propagation at each step, they found out that for a material with a large stiffness ratio between fiber and transverse direction the crack will propagate along the fiber direction, regardless of the specimen geometry, loading conditions or presence of voids. Matrix cracking and fiber–matrix debonding seem to impair each other. By means of shear load, Nouri et al.[71] generated fiber–matrix debonding and observed its effect on crack density under transverse load. The authors developed a modified transverse cracking toughness model.

In order to accomplish the tasks put forward in the WWFE-I, Gotsis et al.[72] used the computer code ICAN by Murthy and Chamis[73], which determines material properties using micromechanics and accounts for laminate attributes like delamination or free edge effects. In addition to the maximum stress criterion, a modified distortion energy failure criterion determines the ply failure. Comparison with the test results as provided by Gotsis et al.[74] revealed reasonable results in cases of fiber-dominated failure, but rather large discrepancies when matrix failure was predominant. Analysis methods were further improved to a full hierarchical damage tracking and applied in the WWFE-III challenge by Chamis et al.[75]. Therewith constituent properties determined by inverse model application were used for the micromechanical analysis part.

Tensile strength in fiber direction

Some effort was put into developing models for the determination of tensile strength in fiber direction from constituent properties. Considering the standard composite design with an extension to failure of the matrix much higher than that of the fibers, the composite failure stress can be roughly estimated by the rule of mixtures from the failure stress of the fiber and the matrix stress at fiber rupture. However, that accounts neither for varying fiber strength along each single fiber nor for strength variation between fibers. A number of hypotheses accounting for these variations have been proposed, e.g. by Rosen[76] and Zweben[77], but the application is not very convincing. More recent developments along this line are the global load sharing scheme by Curtin[78], the simultaneous fiber-failure model by Koyanagi et al.[79] and statistical models for fiber bundles in brittle-matrix composites by Lamon[80].

Compressive strength in fiber direction

Models for compressive strength in fiber direction were first set up by studying the buckling of fibers on an elastic support. Depending on the fiber volume fraction, Dow and Rosen[81] differentiated between an extension and a shear failure mode. Their results, however, proposed too high strength values. Xu and Reifsnider[82] extended the model by assuming slippage between fibers and the matrix over certain regions and therewith determined good agreement with test results. Following a thorough review of the models developed until then, Lo and Chim[83] proposed to improve the microbuckling concept by considering transverse isotropy of the fibers and the effects of resin Young's modulus, fiber misalignment, a weak fiber–matrix interface as well as voids. They also pointed out that in case the strain to failure of the fibers is reached prior to buckling, then the compressive strength should be determined by the rule of mixtures between fibers and the matrix. The effect of fiber misalignment and resultant kinking was studied by Budiansky and Fleck[84]. Their model, however, was not able to predict the width of the kink band and its inclination. Micromechanical analyses of the kink band formation after fiber buckling, including the effect of fiber misalignment, were performed by Kyriakides et al.[85] and by Jensen and Christoffersen[86]. After a thorough derivation of a stress-based model for fiber kinking, Ataabadi et al.[87] pointed to certain drawbacks of the model. In order to alleviate them, they proposed an improvement based on strains and used it to predict the compressive strength depending on the fiber misalignment. On validating this strain-based model against test results, Ataabadi et al.[88] found that for specimens with an off-axis angle >0° this model can predict the compressive strength of UD laminated composites with acceptable accuracy. Gutkin et al.[89],[90] distinguished between two different failure mechanisms: shear-driven fiber compressive failure and kinking/splitting. Similar to that approach Prabhakar

was studied by Asp et al.[98],[99]. They used a micromechanical approach with a representative volume element, which thereafter became more and more popular. Not accounting for fiber–matrix debonding, they have found that the fiber modulus has a significant effect on the failure caused by cavitation in the matrix. This brittle failure occurred earlier than yielding. A thin interphase of a rubbery material improves the transverse failure properties. Tensile and compressive strength with perfect fiber–matrix adhesion on the one hand and complete debonding on the other hand was compared by Carvelli and Corigliano[100]. Assuming periodicity for rather small fiber volume fraction, they determined finite strength under biaxial tension only with debonded interfaces. Transverse tensile failure behavior of fiber–epoxy systems was also studied by Cid Alfaro et al.[101]. They pointed to a strong influence of the relative strength of the fiber–epoxy interface and the matrix. Vaughan and McCarthy[102] found out that in case of a strong fiber–matrix interface residual thermal
and Waas[91] studied the interaction of kinking and stresses improve the transverse tensile strength.
splitting by means of a 2D finite element model. With a
perfect interface the stress–strain curve shows a typical Shear strength
instability behavior with a sharp peak and a snap-back Several authors applied micromechanical means for
branch afterwards. Since local strains then exceeded the analyzing fiber composite shear strength. King et
strain to failure for polymer matrix material discrete al.[103] determined the composite transverse shear
cohesive zone elements were applied at the fiber–matrix strength, mainly to predict the effect of fiber surface
interface. It turned out that it is important to know treatment and sizing on the interfacial bond strength.
especially the mode-II cohesive strength of the interface They found out that the predicted composite shear
in order to determine the compressive strength and strength strongly depends upon the type of matrix and
failure mode of UD laminates accurately. The same the interface strength, and is not significantly dependent
authors[92] further extended the micromechanical on the fiber properties. Axial tension tests on [±45°]s
model of failure under compression to multidirectional laminates are often used to determine the composite
laminates considering delaminations. That allowed shear stress–strain response. Comparing the shear
studying the effect of stacking sequence on the behavior of CFRP with epoxy and PEEK matrix, Lafarie-
compressive strength. Mishra and Naik[93] used the Frenot and Touchard[104] determined a pronounced
inverse micromechanical method to calculate fiber plastic deformation but no visible damage in the low
properties and applied them to determine the loading range. Higher load levels led to increased
compressive strength for a composite with a different damage in the epoxy matrix and early failure whereas
fiber volume fraction. A formulation capable of obtaining the PEEK material exhibited even larger plastic
the maximum compression stress, and the post-critical deformation in connection with a considerable change of
performance of the material once fiber buckling has the fiber angle. The detectability of microcracks,
taken place was proposed by Martinez and Oller[94]. however, may have been limited due to the fact, that
Dharan and Lin[95] questioned the role of initial fiber contrast agent for X-ray inspection was applied to the
waviness and kink band formation on the compressive free edges only. In contrary, by means of tests with dog
strength in fiber direction. Like Lo and Chim[83] did bone specimens and micromechanical analyses Ng et
earlier, they rather extended the micro-buckling model al.[105] found out that it is micro-cracking rather than
of Dow and Rosen[81] by accounting for an interface plasticity, which brings about the observed nonlinear
layer around the fibers, the thickness and shear softening. In V-notched rail shear tests on cross-ply
modulus of which have to be adjusted to test results. laminates reinforced with HS fibers Totry et al.[106] did
Zidek and Völlmecke[96] used a simple analytical model not find any evidence of damage in the MTM57 epoxy
introduced by Wadee et al.[97]. They improved it by resin after a shear deformation of 25%. If the same resin
accounting for initial fiber misalignment. Furthermore was reinforced with HM fibers, however, intraply damage
this model allows for predicting the kink band inclination occurred at γ12=15%. It seems rather unlikely that such
angle. large strains can appear without any damage. For
laminates out of glass-fiber reinforced epoxy
Obviously, there is not a generally accepted view yet as Giannadakis and Varna[107] determined viscoelasticity
to whether kink band formation is a failure mode that and viscoplasticity as the major cause for nonlinearity,
limits the compressive strength or rather a secondary whereas the effect of microdamage is very small. Until
effect which appears after buckling. verifying what really happens in the shear tests it seems
to be unreasonable to invest further effort into modeling
Normal strength in transverse direction it.
Tensile and compressive strength in transverse direction

39
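The [±45°]s tension test mentioned above yields the ply shear response through two standard data-reduction relations (cf. ASTM D3518): the in-plane shear stress is half the applied axial stress, and the engineering shear strain is the difference between the axial and transverse strains. A minimal sketch of this reduction (function and variable names are illustrative, not taken from the cited studies):

```python
def shear_response_from_pm45_tension(sigma_x, eps_x, eps_y):
    """Convert [±45°]s axial tension data to the ply in-plane shear response.

    tau12   = sigma_x / 2      (half the laminate axial stress)
    gamma12 = eps_x - eps_y    (eps_y is negative in a tension test)
    """
    tau12 = [s / 2.0 for s in sigma_x]
    gamma12 = [ex - ey for ex, ey in zip(eps_x, eps_y)]
    return tau12, gamma12

# One recorded load level: 80 MPa axial stress, axial/transverse strains
tau, gamma = shear_response_from_pm45_tension([80.0], [0.010], [-0.006])
```

Applied pointwise to a full stress–strain history, this gives the nonlinear shear curve whose interpretation, plasticity versus micro-damage, is exactly what the studies above disagree on.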
Strength under combined loading
Micromechanics were also used for strength prediction under combined loading. The influence of interface strength on the composite behavior under out-of-plane shear and transverse tension was studied by Canal et al.[108]. They concluded that homogeneous models like those proposed by Hashin or Puck cannot accurately predict the failure surface. Transverse compression and out-of-plane shear was analyzed by Totry et al.[109], which led to the finding that interface decohesion must be taken into account for composites in matrix-dominated failure modes. Also for transverse compression and longitudinal shear, Totry et al.[110] discovered that the interface strength plays an important role for the composite strength. Ha et al.[111] proposed a micromechanics based model which used the maximum stress criterion for fiber failure (FF), a modified von Mises yield criterion for matrix failure and a simple quadratic criterion for failure of the fiber–matrix interface. In order to simulate the tasks of the WWFE-II, Huang et al.[112] complemented these criteria with a progressive damage model taking care of the nonlinear matrix behavior. A damage factor of 0.4 was assumed for final rupture of the damaged material. Huang et al.[113] further adapted the approach to the test results by using a quadratic FF criterion, a fiber kinking model, and a reduction of stress amplification factors for in-plane shear terms. Melro et al.[114],[115] developed an elasto-plastic damage model suitable for epoxy matrix material which accounts for different behavior under transverse tension, transverse compression, and longitudinal as well as transverse shear.

Conclusion and outlook
Considerable effort has been put into the development of suitable models to reliably predict damage and failure of fiber composites. In spite of the inhomogeneity of the material, homogeneous models were the first choice for quite some time. They have developed from simple maximum stress or strain criteria via interpolation criteria to physically based ones. Judging by the frequency of publications in this field, the development seems to have passed its peak. There are quite a number of them available now. What is missing, however, is a reliable statement as to which one should be applied in the respective case at hand. Damage mechanics accounts for the residual strength after initial damage. In general that is done by stiffness reduction, smearing out local effects and therewith simulating a material nonlinearity of the affected layer. There are indications that interactions between adjacent layers can have a considerable influence on the laminate strength, which can also be accounted for by means of damage mechanics models.

Closer to the real behavior of fiber composites are heterogeneous models. Talreja[116] has carefully analysed ambiguities and uncertainties in classical failure predictions and provided remedies to overcome them, including a comprehensive analysis strategy. A Micromechanics Simulation Challenge by Goodsell et al.[117] has been announced, presenting benchmarking exercises against which computational tools can be compared. The greater computational effort required with heterogeneous models is no longer a major handicap thanks to the rapid increase of computational power and storage capacity. It is rather the difficulty of determining the relevant material properties. That especially holds if the model considers an interface layer between fibers and the matrix. Inverse methods cannot be considered as the general solution to that problem since they require the choice of a micromechanical model in the first place. Compressive strength in fiber direction has attracted special attention. However, the role of kink band formation, which is observed in the failure process, seems not to be thoroughly understood. All in all it must be concluded that models for predicting fiber composite damage and failure have not yet reached a fully satisfying state. For now and in the foreseeable future, virtual testing of fiber composites can be suitably applied in the initial design phase and serve as a useful supplement during structural qualification. But models need further improvement before tests on real structures can be fully replaced by simulations.
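The progression described above, from non-interactive maximum stress criteria to interactive ones, can be made concrete in a few lines: a maximum stress check treats each stress component separately, while an interpolation criterion such as Tsai–Wu[23] couples them through a quadratic interaction. A minimal plane-stress sketch (the strength values used below are illustrative CFRP-like numbers, not taken from any cited experiment):

```python
def max_stress_exceeded(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Non-interactive maximum stress criterion (plane stress).
    Compressive strengths Xc, Yc are magnitudes (positive numbers)."""
    return s1 > Xt or s1 < -Xc or s2 > Yt or s2 < -Yc or abs(t12) > S

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S, F12=0.0):
    """Tsai–Wu interaction criterion; failure is predicted when the
    returned index reaches 1. F12 is the normal-stress interaction
    coefficient, often set to 0 or calibrated from biaxial tests."""
    F1, F2 = 1.0 / Xt - 1.0 / Xc, 1.0 / Yt - 1.0 / Yc
    F11, F22, F66 = 1.0 / (Xt * Xc), 1.0 / (Yt * Yc), 1.0 / S**2
    return (F1 * s1 + F2 * s2
            + F11 * s1**2 + F22 * s2**2 + F66 * t12**2
            + 2.0 * F12 * s1 * s2)
```

For combined stress states the interaction terms can flag failure before any single component reaches its allowable, which is precisely the difference between the two model families discussed in the conclusion.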
References
[1] Nahas MN. Survey of failure and post-failure theories of laminated fiber-reinforced composites. J Compos Technol Res 1986; 8: 138–153.
[2] Thom H. A review of the biaxial strength of fibre-reinforced plastics. Compos A 1998; 29A: 869–886.
[3] Hinton MJ, Kaddour AS and Soden PD (eds). Failure criteria in fibre reinforced polymer composites: The World-Wide Failure Exercise. Oxford, UK: Elsevier Science Ltd, 2004.
[4] Kaddour AS and Hinton MJ. Maturity of 3D failure criteria for fibre-reinforced composites: Comparison between theories and experiments: Part B of WWFE-II. J Compos Mater 2013; 47: 925–966.
[5] Kaddour AS, Hinton MJ, Smith PA, et al. The background to the third world-wide failure exercise. J Compos Mater 2013; 47: 2417–2426.
[6] Knight NF Jr. User-defined material model for progressive failure analysis. NASA CR-2006-214526, Washington DC, 2006.
[7] Libonati F and Vergani L. Damage assessment of composite materials by means of thermographic analyses. Compos B 2013; 50: 82–90.
[8] Christensen RM. http://www.failurecriteria.com (accessed 16 Oct 2014).
[9] Sánchez-Heres LF, Ringsberg JW and Johnson E. Study on the possibility of increasing the maximum allowable stresses in fibre-reinforced plastics. J Compos Mater 2013; 47: 1931–1941.
[10] Hart-Smith LJ. Expanding the capabilities of the Ten-Percent Rule for predicting the strength of fibre–polymer composites. Compos Sci Technol 2002; 62: 1515–1544.
[11] Tschoegl NW. Failure surfaces in principle stress space. J Polym Sci C Polym Sympos 1971; 32: 239–267.
[12] Kaddour AS and Hinton MJ. Maturity of 3D failure criteria for fibre-reinforced composites: comparison between theories and experiments: Part B of WWFE-II. J Compos Mater 2013; 47: 925–966.
[13] Zinoviev PA, Grigoriev SV, Lebedeva OV, et al. The strength of multilayered composites under a plane-stress state. Compos Sci Technol 1998; 58: 1209–1223.
[14] Hinton MJ, Kaddour AS and Soden PD. A further assessment of the predictive capabilities of current failure theories for composite laminates: comparison with experimental evidence. Compos Sci Technol 2004; 64: 549–588.
[15] Zinoviev PA, Lebedeva OV and Tairova LP. A coupled analysis of experimental and theoretical results on the deformation and failure of composite laminates under a state of plane stress. Compos Sci Technol 2002; 62: 1711–1723.
[16] Hart-Smith LJ. Predictions of the original and truncated maximum-strain failure models for certain fibrous composite laminates. Compos Sci Technol 1998; 58: 1151–1178.
[17] Hart-Smith LJ. Predictions of a generalized maximum-shear-stress failure criterion for certain fibrous composite laminates. Compos Sci Technol 1998; 58: 1179–1208.
[18] Hart-Smith LJ. Comparison between theories and test data concerning the strength of various fibre–polymer composites. Compos Sci Technol 2002; 62: 1591–1618.
[19] Bogetti TA, Hoppel CPR, Harik VM, et al. Predicting the nonlinear response and progressive failure of composite laminates. Compos Sci Technol 2004; 64: 329–342.
[20] Bogetti TA, Hoppel CPR, Harik VM, et al. Predicting the nonlinear response and failure of composite laminates: correlation with experimental results. Compos Sci Technol 2004; 64: 477–485.
[21] Bogetti TA, Staniszewski J, Burns BP, et al. Predicting the nonlinear response and progressive failure of composite laminates under triaxial loading: correlation with experimental results. J Compos Mater 2012; 47: 793–804.
[22] Hoffman O. The brittle strength of orthotropic materials. J Compos Mater 1967; 1: 200–206.
[23] Tsai SW and Wu EM. A general theory of strength for anisotropic materials. J Compos Mater 1971; 5: 58–80.
[24] Narayanaswami R and Adelman HM. Evaluation of the tensor polynomial and Hoffman strength theories for composite materials. J Compos Mater 1977; 11: 366–377.
[25] Liu K-S and Tsai SW. A progressive quadratic failure criterion for a laminate. Compos Sci Technol 1998; 58: 1023–1032.
[26] DeTeresa SJ and Larsen GJ. Reduction in the number of independent parameters for the Tsai–Wu tensor polynomial theory of strength for composite materials. J Compos Mater 2003; 37: 1769–1785.
[27] Norris CB. Strength of orthotropic materials subjected to combined stresses. Forest Products Laboratory, Report 1816, May 1962, Madison, WI.
[28] Azzi VD and Tsai SW. Anisotropic strength of composites. Exp Mech 1965; 5: 283–288.
[29] Yamada SE and Sun CT. Analysis of laminate strength and its distribution. J Compos Mater 1978; 12: 275–284.
[30] Rotem A. Prediction of laminate failure with the Rotem failure criterion. Compos Sci Technol 1998; 58: 1083–1094.
[31] Rotem A. The Rotem failure criterion for fibrous laminated composite materials: three-dimensional loading case. J Compos Mater 2012; 46: 2379–2388.
[32] Hashin Z. Failure criteria for unidirectional fiber composites. J Appl Mech 1980; 47: 329–334.
[33] Puck A. Festigkeitsanalyse von Faser-Matrix-Laminaten: Modelle für die Praxis. München, Wien: Hanser, 1996.
[34] Puck A and Schürmann H. Failure analysis of FRP laminates by means of physically based phenomenological models. Compos Sci Technol 1998; 58: 1045–1067.
[35] Puck A, Kopp J and Knops M. Guidelines for the determination of the parameters in Puck's action plane strength criterion. Compos Sci Technol 2002; 62: 371–378.
[36] Knops M and Bögle C. Gradual failure in fibre/polymer laminates. Compos Sci Technol 2006; 66: 616–625.
[37] Anonymus. Development of FRP components (fibre-reinforced plastics) analysis, Part 3. Berlin: Beuth Verlag, 2006.
[38] Dong H, Wang J and Karihaloo BL. An improved Puck's failure theory for fibre-reinforced composite laminates including the in situ strength effect. Compos Sci Technol 2014; 98: 86–92.
[39] Cuntze RG and Freund A. The predictive capability of failure mode concept-based strength criteria for multidirectional laminates. Compos Sci Technol 2004; 64: 343–377.
[40] Cuntze RG. Efficient 3D and 2D failure conditions for UD laminae and their application within the verification of the laminate design. Compos Sci Technol 2006; 66: 1081–1096.
[41] Cuntze R. The predictive capability of failure mode concept-based strength conditions for laminates composed of unidirectional laminae under static triaxial stress states. J Compos Mater 2012; 46: 2563–2594.
[42] Cuntze RG. Comparison between experimental and theoretical results using Cuntze's "failure mode concept" model for composites under triaxial loadings—Part B of the second world-wide failure exercise. J Compos Mater 2012; 47: 893–924.
[43] Dávila CG, Camanho PP and Rose CA. Failure criteria for FRP laminates. J Compos Mater 2005; 39(4): 323–345.
[44] Pinho ST, Dávila CG, Camanho PP, et al. Failure models and criteria for FRP under in-plane or three-dimensional stress states including shear non-linearity. NASA/TM-2005-213530, 2005, Hampton, VA 23681.
[45] Pinho ST, Iannucci L and Robinson P. Physically-based failure models and criteria for laminated fibre-reinforced composites with emphasis on fibre kinking: Part I: development. Compos A 2006; 37: 63–73.
[46] Nali P and Carrera E. A numerical assessment on two-dimensional failure criteria for composite layered structures. Compos B 2012; 43: 280–289.
[47] Catalanotti G, Camanho PP and Marques AT. Three-dimensional failure criteria for fiber-reinforced laminates. Compos Struct 2013; 95: 63–79.
[48] Vogler M, Rolfes R and Camanho PP. Modeling the inelastic deformation and fracture of polymer composites – Part I: plasticity model. Mech Mater 2013; 59: 50–64.
[49] Camanho PP, Arteiro A, Melro AR, et al. Three-dimensional invariant-based failure criteria for fibre-reinforced composites. Int J Solids Struct. Epub ahead of print 2014. DOI: http://dx.doi.org/10.1016/j.ijsolstr.2014.03.038.
[50] Ladeveze P and Le Dantec E. Damage modelling of the elementary ply for laminated composites. Compos Sci Technol 1992; 43: 257–267.
[51] Payan J and Hochard C. Damage modelling of laminated carbon/epoxy composites under static and fatigue loadings. Int J Fatigue 2002; 24: 299–306.
[52] Hochard C, Lahellec N and Bordreuil C. A ply scale non-local fibre rupture criterion for CFRP woven ply laminated structures. Compos Struct 2007; 80: 321–326.
[53] Barbero EJ and de Vivo L. A constitutive model for elastic damage in fiber-reinforced PMC laminae. Int J Damage Mech 2001; 10: 73–93.
[54] Barbero EJ and Cosso FA. Determination of material parameters for discrete damage mechanics analysis of carbon-epoxy laminates. Compos B 2014; 56: 638–646.
[55] Van Paepegem W, De Baere I and Degrieck J. Modelling the nonlinear shear stress–strain response of glass fibre-reinforced composites. Part I: experimental results. Compos Sci Technol 2006; 66: 1455–1464.
[56] Van Paepegem W, De Baere I and Degrieck J. Modelling the nonlinear shear stress–strain response of glass fibre-reinforced composites. Part II: model development and finite element simulations. Compos Sci Technol 2006; 66: 1465–1478.
[57] Miyano Y, Kanemitsu M, Kunio T, et al. Role of matrix resin on fracture strengths of unidirectional CFRP. J Compos Mater 1986; 20: 520–538.
[58] Williams KV, Vaziri R and Poursartip A. A physically based continuum damage mechanics model for thin laminated composite structures. Int J Solids Struct 2003; 40: 2267–2300.
[59] Forghani A, Zobeiry N, Poursartip A and Vaziri R. A structural modelling framework for prediction of damage development and failure of composite laminates. J Compos Mater 2013; 47: 2553–2573.
[60] Ridha M, Wang CH, Chen BY and Tay TE. Modelling complex progressive failure in notched composite laminates with varying sizes and stacking sequences. Compos A 2014; 58: 16–23.
[61] Frizzell RM, McCarthy MA and McCarthy CT. Numerical method to control high levels of damage growth using an implicit finite element solver applied to notched cross-ply laminates. Compos Struct 2014; 110: 51–61.
[62] Fiedler B, Hojo M, Ochiai S, et al. Failure behavior of an epoxy matrix under different kinds of static loading. Compos Sci Technol 2001; 61: 1615–1624.
[63] Asp LE, Berglund LA and Talreja R. A criterion for crack initiation in glassy polymers subjected to a composite-like stress state. Compos Sci Technol 1996; 56: 1291–1301.
[64] Pae KD. Influence of hydrostatic pressure on the mechanical behavior and properties of unidirectional, laminated, graphite-fiber/epoxy-matrix thick composites. Compos B 1996; 27B: 599–611.
[65] Kerans RJ and Parthasarathy TA. Theoretical analysis of the fiber pullout and pushout test. J Am Ceram Soc 1991; 74: 1585–1596.
[66] Liang C and Hutchinson JW. Mechanics of the fiber pushout test. Mech Mater 1993; 14: 207–221.
[67] Correa E, París F and Mantič V. Effect of a secondary transverse load on the inter-fibre failure under compression. Compos B 2014; 65: 57–68.
[68] Barbero EJ and Cortes DH. A mechanistic model for transverse damage initiation, evolution, and stiffness reduction in laminated composites. Compos B 2010; 41: 124–132.
[69] Odegard G and Kumosa M. Determination of shear strength of unidirectional composite materials with the Iosipescu and 10° off-axis shear tests. Compos Sci Technol 2000; 60: 2917–2943.
[70] Cahill LMA, Natarajan S, Bordas SPA, et al. An experimental/numerical investigation into the main driving force for crack propagation in uni-directional fibre-reinforced composite laminae. Compos Struct 2014; 107: 119–130.
[71] Nouri H, Lubineau G and Traudes D. An experimental investigation of the effect of shear-induced diffuse damage on transverse cracking in carbon-fiber reinforced laminates. Compos Struct 2013; 106: 529–536.
[72] Gotsis PK, Chamis CC and Minnetyan L. Prediction of composite laminate fracture: micromechanics and progressive fracture. Compos Sci Technol 1998; 58: 1137–1149.
[73] Murthy PLN and Chamis CC. Integrated composite analyzer (ICAN), Users and Programmers Manual. NASA Technical Paper 2515, Lewis Research Center, Cleveland, OH, 1986.
[74] Gotsis PK, Chamis CC and Minnetyan L. Application of progressive fracture analysis for predicting failure envelopes and stress–strain behaviors of composite laminates: a comparison with experimental results. Compos Sci Technol 2002; 62: 1545–1559.
[75] Chamis CC, Abdi F, Garg M, et al. Micromechanics-based progressive failure analysis prediction for WWFE-III composite coupon test cases. J Compos Mater 2013; 47: 2695–2712.
[76] Rosen BW. Tensile failure of fibrous composites. AIAA J 1964; 2(11): 1985–1991.
[77] Zweben C. A bounding approach to the strength of composite materials. Eng Fracture Mech 1972; 4: 1–8.
[78] Curtin WA. Theory of mechanical properties of ceramic-matrix composites. J Am Ceram Soc 1991; 74: 2837–2845.
[79] Koyanagi J, Hatta H, Kotani M, et al. A comprehensive model for determining tensile strengths of various unidirectional composites. J Compos Mater 2009; 43: 1901–1914.
[80] Lamon J. Stochastic models of fragmentation of brittle fibers or matrix in composites. Compos Sci Technol 2010; 70: 743–751.
[81] Dow NF and Rosen BW. Evaluations of filament-reinforced composites for aerospace structural applications. Washington DC: NASA CR-207, 1965.
[82] Xu YL and Reifsnider KL. Micromechanical modeling of composite compressive strength. J Compos Mater 1993; 27: 572–588.
[83] Lo KH and Chim ES-M. Compressive strength of unidirectional composites. J Reinf Plast Compos 1992; 11: 383–396.
[84] Budiansky B and Fleck N. Compressive failure of fiber composites. J Mech Phys Solids 1993; 41: 183–211.
[85] Kyriakides S, Arseculeratne R, Perry EJ, et al. On the compressive failure of fiber reinforced composites. Int J Solids Struct 1995; 32: 689–738.
[86] Jensen HM and Christoffersen J. Kink band formation in fiber reinforced materials. J Mech Phys Solids 1997; 45: 1121–1136.
[87] Ataabadi AK, Ziari-Rad S and Hosseini-Toudeshky H. An improved model for fiber kinking analysis of unidirectional laminated composites. Appl Compos Mater 2011; 18: 175–196.
[88] Ataabadi AK, Hosseini-Toudeshky H and Rad SZ. Experimental and analytical study on fiber-kinking failure mode of laminated composites. Compos B 2014; 61: 84–93.
[89] Gutkin R, Pinho ST, Robinson P, et al. Micro-mechanical modelling of shear-driven fibre compressive failure and of fibre kinking for failure envelope generation in CFRP laminates. Compos Sci Technol 2010; 70: 1214–1222.
[90] Gutkin R, Pinho ST, Robinson P, et al. On the transition from shear-driven fibre compressive failure to fibre kinking in notched CFRP laminates under longitudinal compression. Compos Sci Technol 2010; 70: 1223–1231.
[91] Prabhakar P and Waas AM. Interaction between kinking and splitting in the compressive failure of unidirectional fiber reinforced laminated composites. Compos Struct 2013; 98: 85–92.
[92] Prabhakar P and Waas AM. Micromechanical modeling to determine the compressive strength and failure mode interaction of multidirectional laminates. Compos A 2013; 50: 11–21.
[93] Mishra A and Naik NK. Inverse micromechanical models for compressive strength of unidirectional composites. J Compos Mater 2009; 43: 1199–1211.
[94] Martinez X and Oller S. Numerical simulation of matrix reinforced composite materials subjected to compression loads. Arch Comput Methods Eng 2009; 16: 357–397.
[95] Dharan CKH and Lin C-L. Longitudinal compressive strength of continuous fiber composites. J Compos Mater 2007; 41: 1389–1405.
[96] Zidek RAE and Völlmecke C. Analytical studies on the imperfection sensitivity and on the kink band inclination angle of unidirectional fiber composites. Compos A 2014; 64: 177–184.
[97] Wadee MA, Völlmecke C, Haley JF, et al. Geometric modelling of kink banding in laminated structures. Philos Trans R Soc A 2012; 370: 1827–1849.
[98] Asp LE, Berglund LA and Talreja R. Effect of fiber and interphase on matrix-initiated transverse failure in polymer composites. Compos Sci Technol 1996; 56: 651–665.
[99] Asp LE, Berglund LA and Talreja R. Prediction of matrix-initiated transverse failure in polymer composites. Compos Sci Technol 1996; 56: 1089–1097.
[100] Carvelli V and Corigliano A. Transverse resistance of long-fibre composites: influence of the fibre-matrix interface. In: Proceedings of the 11th European conference on composite materials ECCM11, Rhodes, Greece, May 31–June 3, 2004.
[101] Cid Alfaro MV, Suiker ASJ and de Borst R. Transverse failure behavior of fiber-epoxy systems. J Compos Mater 2010; 44: 1493–1516.
[102] Vaughan TJ and McCarthy CT. Micromechanical modelling of the transverse damage behaviour in fibre reinforced composites. Compos Sci Technol 2011; 71: 388–396.
[103] King TR, Blackketter DM, Walrath DE, et al. Micromechanics prediction of the shear strength of carbon fiber/epoxy matrix composites: the influence of the matrix and interface strengths. J Compos Mater 1992; 26: 558–573.
[104] Lafarie-Frenot MC and Touchard F. Comparative inplane shear behavior of long-carbon-fibre composites with thermoset or thermoplastic matrix. Compos Sci Technol 1994; 52: 417–425.
[105] Ng WH, Salvi AG and Waas AM. Characterization of the in-situ non-linear shear response of laminated fiber-reinforced composites. Compos Sci Technol 2010; 70: 1126–1134.
[106] Totry E, Molina-Aldareguía JM, González C, et al. Effect of fiber, matrix and interface properties on the in-plane shear deformation of carbon-fiber reinforced composites. Compos Sci Technol 2010; 70: 970–980.
[107] Giannadakis K and Varna J. Analysis of nonlinear shear stress-strain response of unidirectional GF/EP composite. Compos A 2014; 62: 67–76.
[108] Canal LP, Segurado J and Llorca J. Failure surface of epoxy-modified fiber-reinforced composites under transverse tension and out-of-plane shear. Int J Solids Struct 2009; 46: 2265–2274.
[109] Totry E, González C and Llorca J. Failure locus of fiber-reinforced composites under transverse compression and out-of-plane shear. Compos Sci Technol 2008; 68: 829–839.
[110] Totry E, González C and Llorca J. Prediction of the failure locus of C/PEEK composites under transverse compression and longitudinal shear through computational micromechanics. Compos Sci Technol 2008; 68: 3128–3136.
[111] Ha SK, Jin KK and Huang Y. Micro-mechanics of failure (MMF) for continuous fiber reinforced composites. J Compos Mater 2008; 42: 1873–1895.
[112] Huang Y, Xu L and Ha SK. Prediction of three-dimensional composite laminate response using micromechanics of failure. J Compos Mater 2012; 46: 2431–2442.
[113] Huang Y, Jin C and Ha SK. Strength prediction of triaxially loaded composites using a progressive damage model based on micromechanics of failure. J Compos Mater 2013; 47: 777–792.
[114] Melro AR, Camanho PP, Andrade Pires FM, et al. Micromechanical analysis of polymer composites reinforced by unidirectional fibres: Part I – constitutive modeling. Int J Solids Struct 2013; 50: 1897–1905.
[115] Melro AR, Camanho PP, Andrade Pires FM, et al. Micromechanical analysis of polymer composites reinforced by unidirectional fibres: Part II – micromechanical analyses. Int J Solids Struct 2013; 50: 1906–1915.
[116] Talreja R. Assessment of the fundamentals of failure theories for composite materials. Compos Sci Technol 2014; 105: 190–201.
[117] Goodsell J, Pipes B and Yu W. Micromechanics Simulation Challenge (Draft), 2014, https://cdmhub.org/resources/550 (accessed 16 Dec 2014).
Congress Highlights
Improvements of an Air-Liquid Interface In-Vitro Testing Method for Inhalable Compounds Using CFD-Methods
Dr. C. Brodbeck - Fraunhofer Institute SCAI, Sankt Augustin, Germany
Dr. D. Ritter, Dr. J. Knebel - Fraunhofer Institute ITEM, Hannover, Germany

Winner of the NWC15 'Most Innovative Use of Simulation' Best Paper Award

Introduction
Within the last decade the air/liquid-interface (ALI) cell culture technology became the state-of-the-art method for in-vitro testing of airborne substances. Cells are cultured on microporous membranes and thereby get efficiently into contact with the test atmosphere while being supplied with nutrients and being humidified from the basal side of the membrane. Biological models like cell lines from the human lung or complex ex vivo models like precision cut lung slices (PCLS) can be applied. However, especially for the application of this basic technology to the testing of a broader range of airborne materials like droplet and particle aerosols, there is still no general scientific consensus on the question of the most suitable exposure design. Concepts including complex physical phenomena like electrostatic deposition of particles or thermophoresis have been introduced to address specific scientific questions. To develop a concept for a more general application of the ALI culture technology for all kinds of airborne materials in the long run, this study aimed at the characterization of the influence of the aerosol conduction system on the particle deposition and the behaviour of the liquid supply by the use of computational fluid dynamics (CFD).

In Vitro Air/Liquid-Interface
An in-vitro air/liquid interface reproduces typical human physical barrier systems, like intestinal mucosa or, in the present case, bronchial epithelial cells, as a laboratory-scale setup. A nutrient solution must ensure that the applied cells are kept alive (Figure 1).

Any process to investigate substances with in vitro systems is basically divided into three single steps: the "cell system", the "exposure step" and the "read out". These elements are the basis for the establishment of an efficient and meaningful test system. Optimization of each individual step and re-integration into an overall process offers the potential for a powerful test system with high integrity and high efficiency. Typically, the exposure step is organized as an isolated step in the experimental cell-based process, leading to an unfavorable, time-consuming, expensive and cell-damaging handling of the biological test system. In contrast to this common situation, the air/liquid-interface exposure system applied in this project (P.R.I.T.® ExpoCube®) enables, with its "all-in-one-plate" concept, a completely integrated workflow by use of a standard multiwell plate throughout the whole experimental process. A typical setup with aerosol generation, monitoring and exposure is shown in Figure 2.

Figure 1: An in vitro model replaces the real conditions in a human lung

Simulating the Aerosol Conduction System
As shown in Figure 2, the potentially toxic aerosol is conducted to the air/liquid interface in order to bring it into contact with the human cells. Due to the aerosol generation process, the volume flow of the aerosol-carrying air is higher than what is needed, or rather allowed, in the deposition area. This limitation comes from the fact that an increased volume flow would harm the living cells in the system. As encountered in many biomedical applications, it is therefore necessary to branch off a defined amount of fluid. The aim is not only to increase the branched-off amount to get a more obvious and faster aerosol deposition; it is also desired to get a nearly equal distribution into the multiple branches and onto the surface occupied by the cells. For the present apparatus design the following questions were stated:

1. Which design of the drain of the deposition device will cause a more equal flow distribution?

2. How should the deposition device geometry be modified to enhance particle deposition on the cell membranes?

3. Is there a possibility to enhance the particle mass flow into the branches feeding the deposition devices without increasing the fluid flow rate in the branches?

Our numerical analyses started with an investigation of how the shape of the nozzle which feeds the membrane, and the shape of the drain, influence the deposition of particles of certain sizes. For this purpose we extracted one of the several deposition devices, depicted as a simulation model in Figure 3.

Figure 2: Experimental apparatus to expose human cells to potential toxic aerosols

In the simulation model the fluid flow was assumed to be laminar and a steady-state solver was applied. Due to the small portion of aerosols in the flow, no interaction of the particles (Ø 50nm to 3000nm, given as a cumulative distribution function, CDF) with the carrier flow was considered, so that the two phases could be simulated consecutively. In all simulations with aerosols, the particles were modelled in a Lagrangian frame of reference with a spherical shape and gravity influence. Particles which came into contact with a solid wall were regarded as trapped to that wall. The number of injections was increased until the results became insensitive. Injections were placed evenly, and in some cases the inflow area was extended to account for the particle and flow history in the pipe system.

Regarding the outward flow geometry it could be seen that, as expected, increasing the number of drains from four to seven smoothed the flow profile in the vicinity of the cell membrane in the circumferential direction, but the differences were rather small. The shape of the nozzle was modified in diverse ways, but the simulations revealed that the original shape was already nearly optimal concerning the amount of particles deposited; therefore only small changes were applied. In Figure 4 the results for two distinctly differing shapes are compared as examples. Although the velocity distribution is quite different, the deposition of the particles on the cell membrane appeared surprisingly similar. These simulations were accomplished with STAR-CCM+® (CD-adapco).
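The Lagrangian treatment described above can be sketched in a few lines. This is only an illustrative sketch of the stated modelling assumptions (one-way coupling, spherical particles, Stokes drag, gravity, trapping on first wall contact), not the solver's implementation; `carrier_velocity`, `hit_wall` and all numbers are stand-ins rather than values from the study.

```python
import numpy as np

RHO_P = 1000.0                       # particle density [kg/m^3] (assumed)
MU_AIR = 1.8e-5                      # dynamic viscosity of air [Pa s]
G = np.array([0.0, 0.0, -9.81])      # gravity [m/s^2]

def carrier_velocity(x):
    """Stand-in for the interpolated CFD carrier-flow velocity at point x."""
    return np.array([0.05, 0.0, 0.0])          # uniform 5 cm/s duct flow

def hit_wall(x):
    """Stand-in wall test: the particle is trapped once it reaches z = 0."""
    return x[2] <= 0.0

def track(x, v, d_p, dt=1e-4, n_steps=20000):
    """One-way coupled tracking: Stokes drag plus gravity, trapped on contact."""
    tau = RHO_P * d_p**2 / (18.0 * MU_AIR)     # particle relaxation time
    for _ in range(n_steps):
        u = carrier_velocity(x)
        v_eq = u + tau * G                     # carrier plus settling velocity
        v = v_eq + (v - v_eq) * np.exp(-dt / tau)  # exact step for constant u
        x = x + v * dt
        if hit_wall(x):
            return x, True                     # regarded as trapped to the wall
    return x, False

# A 3000 nm particle released 0.2 mm above the wall settles onto it:
x_end, trapped = track(np.array([0.0, 0.0, 2e-4]),
                       np.array([0.05, 0.0, 0.0]), d_p=3000e-9)
```

The exponential velocity update is exact for a locally constant carrier velocity, so the step stays stable even when the time step exceeds the particle relaxation time.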

Figure 3: CFD model of the deposition device, extracted from the P.R.I.T.® ExpoCube® (depicted on the left side, inner construction disguised)

Due to the partially submicron size of the particles, we then decided to take non-continuum effects such as Brownian motion [1] and a Stokes-Cunningham particle drag law (slip correction) [2] into account. In order to increase the deposition rate further, the effects of thermophoresis [3] were included. Those effects were unfortunately not covered by the STAR-CCM+® simulation code, so we switched the simulation software to ANSYS Fluent. For particle sizes below 1μm these effects revealed a significant influence on the particle behaviour. To get a temperature distribution inside the deposition device, a conjugate heat transfer model was built up and its results were applied as approximated boundary conditions in further simulations where thermophoresis was considered.

Figure 4: Comparison of two exemplary geometries: velocity distribution, particle deposition on the cell membrane, and particle tracks of different particle sizes for one injection point

We could confirm our previous improvements of the deposition device geometry by simulating the different models in ANSYS Fluent (Figure 5). For the purpose of a clearer evaluation, the cumulative distribution for the particle sizes was dropped and a representative number of particle sizes was simulated individually. One can observe in Figure 5 that particles smaller than 15nm are not deposited; this effect showed up especially once the Brownian motion was included in the simulations.
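The relevance of the slip correction for the submicron part of the size range can be illustrated with the Cunningham factor, by which the Stokes drag is divided. The sketch below uses the coefficient set commonly quoted in the aerosol literature (cf. Hinds [2]) and an assumed mean free path of 68 nm for ambient air; it is an illustration, not the Fluent implementation.

```python
import math

MFP_AIR = 68e-9  # mean free path of air at ambient conditions [m] (assumed)

def cunningham(d_p, mfp=MFP_AIR):
    """Cunningham slip correction factor for a particle of diameter d_p [m];
    the Stokes drag force is divided by this factor."""
    kn = 2.0 * mfp / d_p                                 # Knudsen number
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

# Negligible for the largest particles in the study, large below 100 nm:
for d in (3000e-9, 1000e-9, 100e-9, 50e-9):
    print(f"d = {d * 1e9:6.0f} nm  Cc = {cunningham(d):5.2f}")
```

The factor is roughly 1.06 for a 3000 nm particle but above 5 for a 50 nm one, which is why the drag law had to be changed for this size range.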

Figure 5: Comparison of deposition rates for different device geometries and activated thermophoresis; additionally a control for thicker lung slices (PCLS)

Figure 6: Fluent model of one row; supply pipe and four deposition devices with flow pattern

Tracks of these particles showed that they hardly follow the direction of the carrier flow. Even with all the non-continuum models included, it remains unsettled whether the simulation results represent the actual behaviour of such extremely small particles.

At this point it was decided to extend the simulation model and account at first for the supply pipe and the branches without the deposition devices, and afterwards including them (Figure 6). Regarding the original geometry, it could be observed in the experiments and simulations that the deposition rate decreases continuously over the sequence of branches in flow direction. This behaviour seemed understandable due to a loss of particles in the supply pipe, by reason of gravity and of particles already branched off. In order to improve the system it was necessary to understand the exact correlations between the flow pattern of the carrier fluid and the particle tracks. It was therefore inevitable to reveal the history of the particle tracks with regard to the branch they turn into. As the post-processing of Fluent did not offer a graphical evaluation of particle history, a simple tool was developed to read Fluent particle history data and plot it in a coloured manner. Figure 7 shows the history of particles of Ø1000nm for the original and a modified geometry. Regarding the original geometry, one can see that the branched-off particles are layered and the amount is reduced over the subsequent branches.

Many particles leave the supply pipe without having a chance to enter a branch, and other particles are lost on the pipe wall. To overcome this problem, the idea came up to break up the layered order by introducing disturbances in the flow field right before the branches, and thereby drive particles from unutilised flow areas into branch-feeding ones.

In order to disturb the flow, several geometries like bumps, nozzles and airfoils in different dimensions and positions were implemented and simulated. It was recognized that an airfoil geometry on top of the supply pipe yielded the best particle branch flows concerning both the absolute mass and the equal distribution. As one can see in Figure 7, this effect is due to the fact that a flow rotation (swirl) is placed in the pipe which pushes fresh particles into the vicinity of the pipe wall, where the fluid is sucked into the branches. Since a variety of particle sizes had to be considered, a compromise concerning the magnitude of the rotational speed had to be found, due to the fact that larger particles tend to be lost at the wall for stronger swirls.

Figure 7: History of particles in certain slices, shown for a diameter of 1000nm, for the original and a modified geometry; pink=leave through supply pipe, cyan=lost on supply pipe, red=enter fourth branch, blue=enter third branch, green=enter second branch, black=enter first branch
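The core of such a colouring tool is small once the track data is available as a table. The sketch below assumes a simple exported layout (a particle id, coordinates, and a fate tag on the final point of each track) with the colour legend of Figure 7; Fluent's native history format differs, and this is an illustration rather than the actual tool from the study.

```python
from collections import defaultdict

# Colour legend matching Figure 7.
FATE_COLOURS = {
    "exit_pipe": "pink", "wall": "cyan",
    "branch_1": "black", "branch_2": "green",
    "branch_3": "blue",  "branch_4": "red",
}

def colour_tracks(rows):
    """Group exported track points by particle id and colour each whole
    track by the fate recorded on its last point."""
    tracks = defaultdict(list)
    for r in rows:
        tracks[r["id"]].append((r["x"], r["y"], r["z"], r["fate"]))
    coloured = {}
    for pid, pts in tracks.items():
        fate = pts[-1][3]                      # where the track ended up
        coloured[pid] = (FATE_COLOURS.get(fate, "grey"),
                         [(x, y, z) for x, y, z, _ in pts])
    return coloured

# Two short illustrative tracks: one enters the second branch, one hits a wall.
rows = [
    {"id": "p1", "x": 0.0, "y": 0.00, "z": 0.0, "fate": ""},
    {"id": "p1", "x": 0.1, "y": 0.00, "z": 0.0, "fate": "branch_2"},
    {"id": "p2", "x": 0.0, "y": 0.01, "z": 0.0, "fate": ""},
    {"id": "p2", "x": 0.2, "y": 0.01, "z": 0.0, "fate": "wall"},
]
coloured = colour_tracks(rows)
```

Each track then carries a single colour for plotting, so figures like Figure 7 show at a glance which flow layer feeds which branch.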

For the design found, with an increased particle mass flow into the branches, it was then verified whether the deposition rate on the cell membrane measures up as well. In Figure 8, for three different particle sizes, the deposition footprint on the cell membrane and the deposition rate related to the theoretical deposition (branch particle mass flow proportional to fluid mass flow) show a significant increase of settled particles. This is mainly due to the effect of thermophoresis, but was further increased by 10% through the modifications in the supply pipe. As the improvements could be confirmed in several experimental setups, the present constellation was applied for a patent.
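The "deposition rate related to the theoretical deposition" used in Figure 8 can be written out explicitly; the function and numbers below only illustrate the definition and are not data from the study.

```python
def relative_deposition(m_deposited, m_injected, q_branch, q_total):
    """Deposition rate relative to the theoretical deposition, which assumes
    the particle mass entering a branch is proportional to its fluid flow."""
    m_theoretical = m_injected * (q_branch / q_total)   # proportional split
    return m_deposited / m_theoretical

# Illustrative: 3% of the injected mass deposited in a branch carrying 2%
# of the fluid flow gives a relative deposition of 1.5.
ratio = relative_deposition(0.03, 1.0, 0.02, 1.0)
```

A value above one therefore means the branch collects more particle mass than its share of the fluid flow alone would deliver.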

Simulating the Liquid Supply System

Figure 8: Particle footprint on cell membranes and deposition rate (DR) for the original geometry (Org) and the geometry disturbed by airfoils

The exposed cells are supplied with a nutrient solution through a ductwork which is integrated in the air/liquid interface cube. In this regard several questions were posed, which were investigated through numerical simulations:

1. Is there an equal distribution of nutrient to all cell inserts?

2. What volume flow limit exists regarding a maximum pressure, so as not to harm the cells?

3. How fast can we flush the system, i.e. replace the fluid in the channels? Where are the areas which are problematic for flushing?

4. How should we design the channel cross-section to reduce the pressure loss, and hence the pressure on the membranes?

5. How should we design the channels if bubbles are present?

Figure 9: Approximated geometry of the liquid system; half model due to symmetry, inflow (blue) and outflow (red)

For questions 1, 2, 3 and 4 we built up a model of the fluid system (Figure 9), where we took advantage of the symmetrical shape. For confidentiality reasons the model in Figure 9 is not depicted in its original manner.

For several volume flows and channel cross-sectional areas and shapes, the pressure on the cell membranes and the volume flows in the branches were determined in steady-state simulations, and the flushing behaviour was simulated applying transient simulations. For these simulations STAR-CCM+ was applied. In the flushing simulations it was observed that some areas, as depicted in Figure 10, exhibited problems with a proper removal of old liquid, due to the rectangular shape of the channels and sharp corners in the cell membrane area.

Figure 10: Simulation result showing part of the liquid system after 3 minutes of flushing; areas with remains of old fluid are visible

These areas were modified, e.g. the channel shape was switched from rectangular to circular. It is later shown that this also improved the performance of the system when air bubbles are present. For the flushing time it is advantageous to reduce the overall fluid mass in the system by reducing the channel cross-sectional area, but on the other hand this generates higher pressure losses and thus leads to higher pressures on the cell membranes. Therefore a compromise had to be made by defining intermediate values for the circular channel diameter. Looking at the equal distribution of the fluid to supply all cell membranes sufficiently, it quickly became obvious that the original channel guidance had already been chosen in an excellent manner, so that the deviation from uniformity ranged only from less than one per mill for low flow rates to a maximum of ca. two per mill for higher flow rates (Figure 11).

Figure 11: Deviation from equal supply of liquid to the four cell membranes through the branched channel system, simulated in the half model for various flow rates
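The compromise between flushing time and membrane pressure follows directly from laminar channel flow: at a fixed flow rate, halving the diameter quarters the held fluid volume (one measure of the flushing time scale) but raises the Hagen-Poiseuille pressure loss sixteen-fold. The sketch below uses illustrative dimensions and flow rates, not the actual device values.

```python
import math

MU_LIQ = 1.0e-3   # viscosity of the nutrient solution, assumed water-like [Pa s]

def channel_tradeoff(d, length=0.2, q=1.0e-8):
    """Laminar (Hagen-Poiseuille) pressure drop and idealised one-volume
    flushing time for a circular channel of diameter d [m], length [m],
    at volume flow q [m^3/s]. All numbers are illustrative assumptions."""
    dp = 128.0 * MU_LIQ * length * q / (math.pi * d**4)  # pressure loss [Pa]
    volume = math.pi * d**2 / 4.0 * length               # fluid held [m^3]
    t_flush = volume / q                                 # one volume exchange [s]
    return dp, t_flush

for d_mm in (0.5, 1.0, 2.0):
    dp, t = channel_tradeoff(d_mm * 1e-3)
    print(f"d = {d_mm} mm: dp = {dp:8.1f} Pa, flush time = {t:6.1f} s")
```

The steep d⁻⁴ dependence of the pressure loss is what forces the intermediate diameters mentioned above.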

A well-known problem of channels with a small cross-sectional area arises when unavoidable air bubbles are stuck in the system. This effect, which is called clogging [4], emerges when small channels furcate and fluid forces through surface tension reach the same order of magnitude as flow-driving forces like pressure. It is observed that clogging occurs preferentially in channel contractions [5] and channel elbows [6]. When a bubble blocks a channel, that part of the system is insufficiently supplied with nutrient solution. It is a current but rather makeshift procedure to actuate bubbles by flicking one's finger at the channel (if reachable) or bouncing the whole apparatus. In order to reduce the tendency for clogging, simulations with different channel geometries were conducted in ANSYS Fluent. The extent of the model was successively reduced due to the costly computing times of the transient multi-phase simulations.

Figure 12: Bubble position after t=0.2s and t=0.8s for different channel layouts, illustrating the movability of the bubble

Figure 13: Bubble moved from the chamber into the channel with an edged and a rounded passage geometry; the bubble on the right side has moved further in the same time period
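The order of magnitude of such clogging forces can be estimated with a simple Young-Laplace balance in the spirit of Jensen et al. [5]; the numbers below are illustrative, not measured device values.

```python
import math

SIGMA = 0.072      # surface tension of an air/water interface [N/m]

def clogging_pressure(r_small, r_large, theta_deg=0.0):
    """Quasi-static extra pressure needed to drive a bubble from a wide
    channel (radius r_large) into a contraction (radius r_small); a simple
    Young-Laplace estimate with contact angle theta measured through the
    liquid. Illustrative sketch, not the transient multi-phase simulation."""
    cos_t = math.cos(math.radians(theta_deg))
    return 2.0 * SIGMA * cos_t * (1.0 / r_small - 1.0 / r_large)

# Driving a bubble from a 1 mm into a 0.5 mm diameter channel
# (radii 0.5 mm and 0.25 mm):
dp = clogging_pressure(0.25e-3, 0.5e-3)
```

Since this threshold grows as the channels narrow, small driving pressures can no longer dislodge a bubble, which is why the contraction and elbow regions were the focus of the geometry changes.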

To verify the bubble behaviour in a bend of the channel, a small model with different cross-sectional shapes was built up, including a variation of the wetting behaviour of the surface. Due to a lack of data, the contact angles were generally set to 45° for a hydrophobic and 135° for a hydrophilic surface. The flow is forked and a bubble is initially placed in one of the branches. In Figure 12 the movability of the bubble is depicted for selected simulations, and it is evident that the modified channel layouts are better than the original rectangular hydrophobic design. Also due to the advantages already experienced for the flushing of the system, a circular cross-section with a hydrophobic surface was selected. One might achieve even better results combining a circular shape with a hydrophilic surface, but this would unfortunately have led to higher production costs.

To verify the bubble behaviour in the region where the flow leaves the cell membrane chamber, a reduced model was set up. Due to computational costs it was intended to keep the model in an adequate range, and it was therefore not possible to extend the geometry to a preceding channel junction. However, to be able to judge the movability of the bubble, a pressure boundary condition was applied, so that the flow rate and the bubble velocity could decrease for tightly stuck bubbles. As expected, a rounded transition showed a better bubble movability (Figure 13).

Conclusions
In the scope of a project of the Fraunhofer Institutes ITEM and SCAI, the development and improvement of an air/liquid interface technology, which emerged as a compact device novelty, was supported by computational fluid mechanics.

Concerning the aerosol conduction and deposition system, the simulation results gave a meaningful insight into the aerosol particle behaviour and provided ways to enhance the amount of deposited particles on the cell cultures through geometrical and physical modifications. For that purpose, on the one hand the deposition device's shape was modified to settle more particles, and on the other hand the supply pipe was equipped with an apparatus to enhance the particle mass flow to the branches feeding the cell membranes. Additionally, thermophoresis effects were introduced to further increase the efficiency. This enhanced constellation was confirmed in experiments and therefore applied for a patent.

Concerning the liquid nutrient supply system, which is compactly included in the air/liquid interface, the simulations helped to improve the performance by means of modifications of the channel geometry. Attention was turned to the ability to uniformly supply the cell inserts, to flush the system effectively without harming the cells, and to improve the capability to overcome blockage by clogging effects of air bubbles.

References
[1] ANSYS® (2013): ANSYS Fluent Theory Guide – 16.2.1.6. Brownian Force, Release 15.0, p. 376
[2] Hinds W. (1999): Aerosol Technology: Properties, Behavior and Measurement of Airborne Particles, John Wiley & Sons, Inc., p. 48-51
[3] ANSYS® (2013): ANSYS Fluent Theory Guide – 16.2.1.5. Thermophoretic Force, Release 15.0, p. 375-376
[4] Jensen M.J. (2002): Bubbles in Microchannels, Master Thesis, Technical University of Denmark
[5] Jensen M.J., Goranovic G., Bruus H. (2004): The Clogging Pressure of Bubbles in Hydrophilic Microchannel Contractions, Journal of Micromechanics and Microengineering
[6] Cubaud T., Ho C. (2004): Transport of Bubbles in Square Microchannels, Physics of Fluids, Vol. 6

NAFEMS Technical Working Groups & Regional Steering Committees
The strong reputation that NAFEMS has earned, for inspiring engineering practitioners
and promoting the effective use of simulation technology, is due in large part to the
invaluable guidance and practical advice that is encapsulated within our publications and
events. This material is developed by the many experts who volunteer their time to serve
on our Technical Working Groups and Regional Steering Committees.

NAFEMS Regional Steering Committees


• Americas • DACH (Germany, Austria & Switzerland) • France
• Iberia (Spain & Portugal) • India • Italy
• Japan • Nordic (Norway, Denmark, Finland & Sweden) • UK

NAFEMS Technical Working Groups


Analysis Management (AMWG)
The NAFEMS Analysis Management Working Group has the remit to produce, monitor and maintain guidance, procedures and advice relating to improving business practice and performance, and best practice with respect to the definition and execution of engineering simulation.

Composites (CWG)
The NAFEMS Composites Working Group was formed to create awareness and education for the simulation of composites by gathering independent information and providing independent analysis of composites simulation capabilities and needs.

Computational Fluid Dynamics (CFDWG)
The NAFEMS CFD Working Group is concerned with all aspects of Computational Fluid Dynamics, including the flow of fluids (gases and liquids), heat and particulate flows. All computational approaches are included, together with the related technologies required, whether for pre-processing, solving or post-processing.

Computational Structural Mechanics (CSMWG)
The NAFEMS Computational Structural Mechanics Working Group is concerned with the branch of engineering that uses numerical methods to calculate deformations, deflections, internal forces and stresses within structures.

Dynamics and Testing (DTWG)
The NAFEMS Dynamics and Testing Working Group brings together analysts and experimentalists to form a common body of understanding in dynamics. Dynamic analysis is required when a load or excitation varies with time and the inertia of the structure is significant. In particular, the possibility of resonance must be considered.

Education & Training (ETWG)
The NAFEMS Education and Training Working Group was formed to examine the education and training needs of all numerical analysts and to provide information and documents to satisfy these needs. The Education and Training Working Group is responsible for accrediting courses run by NAFEMS and other external agencies. In addition, the working group supports the NAFEMS Professional Simulation Engineer scheme.

Geotechnical (GWG)
The NAFEMS Geotechnical Working Group was formed with the aim of developing guidelines for the practical application of numerical methods in geotechnical engineering. Numerical analysis using finite element and finite difference methods has become a mainstream design tool within geotechnics in the last decade or so. This is due to the development of sophisticated yet accessible computer programs that can realistically model the ground and adjacent structures.

High Performance Computing (HPCWG)
The NAFEMS High Performance Computing Working Group aims to provide a vendor-neutral, end-user driven consortium that promotes the effective use of High Performance Computing in engineering simulation. High Performance Computing is used as an umbrella term for a range of technologies such as traditional supercomputing, grid computing, cloud computing, high-throughput computing, hardware acceleration, data storage and visualization.

Multibody Dynamics (MBDWG)
The NAFEMS Multibody Dynamics Working Group aims to foster discussions, benchmark methodologies, develop guidelines and highlight the benefits gained by the use of multibody dynamics simulations. Multibody simulation consists in modelling the dynamic behaviour of interconnected rigid or flexible bodies, each of which may undergo large translational and rotational displacements. It addresses the problems of modelling the mechanical dynamics of multiple bodies in complex systems, and the design and validation of the associated control laws.

Multiphysics (MPWG)
The NAFEMS Multiphysics Working Group has been set up to promote and support the use of multiphysics simulation in industry. Industrial use of multiphysics simulations is a diverse and challenging topic. The main driving force is the need for more realistic numerical simulations of coupled problems, combined with the continuing improvements in hardware and software.

Optimisation (OWG)
The NAFEMS Optimisation Working Group is responsible for promoting the adoption, further development and best practice of optimisation theory and methods in engineering simulation, for the benefit of the analysis community. Optimisation is the process of selecting the best option from a range of possible choices.

Simulation Data Management (SDMWG)
The NAFEMS Simulation Data Management Working Group promotes the advancement of the technology and practices associated with the management of engineering simulation data and processes. Engineering simulation data encompasses the data, models, processes, documents and metadata intrinsic to performing modelling, simulation and analysis. Simulation Data Management provides for the management of data objects and metadata at all levels of granularity and abstraction, including design and analysis parameters, requirements, and results.

Stochastics (SWG)
The NAFEMS Stochastics Working Group aims to accelerate the adoption, and further the development, of stochastic methods. Uncertainty enters into numerical simulation from a variety of sources, such as variability in input parameters. Knowledge of the effect of uncertainties can lead the analyst to drastically different conclusions regarding which input parameters are most important. Quantifying the effect of uncertainty provides the analyst with an estimate of the true performance of a design.

Systems Modelling & Simulation (SMSWG)
The NAFEMS Systems Modelling and Simulation Working Group focuses on the merging of engineering analysis with overall systems analysis to perform more realistic, accurate and lifelike behaviour modelling and simulation. The Systems Modelling & Simulation Working Group is a collaboration between NAFEMS and INCOSE (the International Council on Systems Engineering).

Social Media Response
Cognitive Computing
Dr Ahmed Noor of Old Dominion University gave one of the most talked about presentations at the 2015
NAFEMS World Congress “Potential of Cognitive Computing for Engineering Analysis and Design”.
Below you can find some of the reactions on Twitter.
Thanks to our sponsors

Front End Analytics (FEA)


Delivering Business Advantage

inform. inspire. entertain.


© Published By NAFEMS

Order Ref: R0118

NAFEMS Ltd.
www.nafems.org
info@nafems.org
