Synergy among software, cybersecurity, and artificial intelligence (AI) engineering disciplines will enable future critical missions in defense, national security, and other domains. Missions of the future will be characterized by multi-domain planning and execution, real-time operations in dynamic environments, a broad global context in a world that is increasingly interconnected, and the need for adaptive human-machine interfaces to manage complexity and respond to opportunity. The Carnegie Mellon University Software Engineering Institute (CMU SEI) envisions that a confluence of advances in those disciplines will support an automated and secure software lifecycle – including the supply chain.

In this blog post, I review the origins and interactions of the software,
cybersecurity, and AI engineering disciplines and posit how their
interrelationships would contribute to the intelligent systems of the
future.

Engineering Disciplines for Software, Cybersecurity, and AI Are in Different Stages of Development

Software engineering has evolved into a proven discipline over several
decades. The U.S. government established the SEI in 1984 to advance
the state of the practice of software engineering, and since then we
have led development of crucial software engineering elements,
including software architectural risk reduction, non-functional quality
attributes, and architectural modeling. Software engineering
practices—developed, proven, matured, and codified over many
years—foster improvement across the software lifecycle, from design
and development through testing and assurance. Thanks in large part
to the widespread transition of effective software engineering
practices into common use, today’s software-reliant systems are
increasingly affordable, trustworthy, and evolvable, and succeed in
achieving their required performance goals in delivered products.

Cybersecurity engineering is newer, dating roughly from the Morris
Worm incident in 1988, which prompted the Defense Advanced
Research Projects Agency (DARPA) to fund creation of the CERT
Coordination Center (CERT/CC, now CERT Division) at the SEI. Building
on insights from the field of software engineering, cybersecurity now
consolidates the tools and analyses used in stages of the software-
development lifecycle to ensure effective operational results. It
reduces security weaknesses through, for example, secure
coding practices; mitigates and responds to threats;
increases network situational awareness; and enables
the assurance of critical software and information systems.

Artificial intelligence was first conceived in the 1950s. Carnegie Mellon has been at the forefront of AI since collaborating in the creation of
the first AI computer program, Logic Theorist, in 1956. It also created
perhaps the first machine-learning (ML) department, studying how
software can make discoveries and learn with experience. Carnegie
Mellon's Robotics Institute has been a leader in enabling machines to
perceive, decide, and act in the world, including a renowned
computer-vision group that explores how computers can understand
images. As occurred in the disciplines of software engineering and
cybersecurity engineering, AI practices and applications are now
evolving from origins in craft, practiced by talented early adopters. We
are seeing an explosion today of scientific and commercial
applications of AI created by skilled craftspeople applying increasingly
well-established development procedures and practices. A discipline
of AI engineering is emerging that will be practiced by educated
professionals and characterized by research-based, validated analysis
and theory. This discipline will guide the creation of AI systems that
are robust and secure, scalable, trustworthy, and
importantly, human-centered. AI engineering builds on a strong
foundation of software engineering and cybersecurity, without which
progress in this field would not be possible.

If software, cybersecurity, and AI engineering disciplines are used
together, the resulting systems could see risk reduction in the supply
chain, software/data development pipeline, and operation. Research
and development work at the SEI is investigating the interaction of
those disciplines.

Software Engineering for AI Systems


The SEI-led study and research roadmap Architecting the Future of
Software Engineering: A National Agenda for Software Engineering
Research & Development calls for empirically validated practices and
verification methods, tools, and practices to engineer AI-enabled
software. Among the SEI research projects aiming to provide
verification methods is one to automatically detect and avoid
inconsistencies between assumptions and decisions that create
delays, rework, and failure in the development, deployment, and
evolution of ML-enabled systems.

In addition, a multiyear collaboration among researchers at the SEI, Georgia Tech, Kansas State University, Galois, and Adventium Labs is
developing architecture tools to analyze the impact of AI functions on
the assurance of safety-critical systems.

AI for Software Engineering


The SEI study Architecting the Future of Software Engineering: A National
Agenda for Software Engineering Research & Development notes that “AI-
enabled and other automated capabilities will enable developers to
perform their tasks better and with increased quality and accuracy.”

One area for improving developers’ tasks is the necessary refactoring, often on a large scale, of software code. SEI
researchers—working with experts from CMU and other
universities—developed a tool to automate the isolation of the vast
majority of connections that need to be changed for the system to be
evolved rapidly and cost-effectively.

Another area where SEI researchers apply AI to developers’ tasks is in automating code repair. This work, undertaken with government
collaborators, is developing automated source-code transformation
tools to remediate vulnerabilities in code that are caused by violations
of rules in the CERT Secure Coding Standards.
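To make the idea of rule-driven code repair concrete, the sketch below rewrites one classic violation, the unbounded gets() call, into a bounded fgets() call. This is a minimal illustration, not the SEI tooling: the function name, the regular-expression approach, and the example snippet are all assumptions for exposition, and a production repair tool would transform a parse tree rather than pattern-match text.

```python
import re

# Illustrative only: rewrite calls to the unbounded gets() into bounded
# fgets() calls, a remediation in the spirit of the CERT C coding rules.
# A real repair tool would operate on a parse tree, not regular expressions.
GETS_CALL = re.compile(r"\bgets\s*\(\s*(\w+)\s*\)")

def repair_gets(source: str) -> str:
    """Replace each gets(buf) with fgets(buf, sizeof(buf), stdin)."""
    return GETS_CALL.sub(r"fgets(\1, sizeof(\1), stdin)", source)

before = "char line[64];\ngets(line);"
print(repair_gets(before))
# char line[64];
# fgets(line, sizeof(line), stdin);
```

Because fgets() does not itself match the pattern (the \b boundary excludes it), the transformation is safely idempotent when run twice over the same file.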

The Architecting the Future of Software Engineering study notes, as well, that AI can aid software architecture reconstruction for the modernization of legacy systems, an area pertinent to the DoD, which relies on established systems.

Software Engineering for Cybersecurity


In June 2023, the SEI organized the Secure Software by Design
Conference to encourage collaboration toward improving the state of
a holistic secure development approach. Participants discussed threat
modeling, security requirements development, secure software
architectures, DevSecOps, secure development platforms and
pipelines, software assurance, secure coding practices, software
testing, and other topics.

One of the presentations examined the Acquisition Security Framework for Supply Chain Risk Management in the context of the software bill of materials (SBOM) concept. The talk described the potential of integrating an SBOM properly into effective cyber risk management processes and practices and introduced the SEI SBOM Framework of practices for managing vulnerabilities and risks in third-party software.

Cybersecurity for Software Engineering


In the course of creating tools for the automated prioritization of
static analysis alerts, SEI researchers developed the Source Code
Analysis Integrated Framework Environment (SCAIFE) application
programming interface (API). An architecture for classifying and prioritizing static analysis alerts, SCAIFE integrates a wide variety of static analysis tools through the API. The API is pertinent to
organizations that develop or research static analysis alert auditing
tools, aggregators, and frameworks. Building on that body of work, SEI
researchers are proposing, in recently initiated research, to create a
tool that can automatically repair 80 percent of alerts in 10 categories
of code weaknesses.
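As a sketch of what alert prioritization means in practice, the snippet below ranks alerts by a simple severity-times-confidence score so that auditors see high-severity, likely-true findings first. The Alert fields, checker names, and scoring rule are hypothetical simplifications; SCAIFE's actual classification models and API are considerably richer.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    checker: str   # rule or CWE category that fired (hypothetical labels)
    severity: int  # 1 (low) .. 5 (high)
    p_true: float  # classifier's estimate that the alert is a true positive

def prioritize(alerts):
    """Order alerts so high-severity, likely-true findings are audited first."""
    return sorted(alerts, key=lambda a: a.severity * a.p_true, reverse=True)

queue = prioritize([
    Alert("EXP34-C", severity=5, p_true=0.9),   # likely null-pointer dereference
    Alert("DCL00-C", severity=1, p_true=0.95),  # style-level finding
    Alert("STR31-C", severity=4, p_true=0.5),   # possible buffer overflow
])
print([a.checker for a in queue])  # ['EXP34-C', 'STR31-C', 'DCL00-C']
```

Even this crude score captures the key trade-off: a near-certain but trivial finding ranks below an uncertain but dangerous one.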

Assuring software system security also means using cyber threat hunting to find adversaries in the network before they can attack from the inside. Unfortunately, this approach is often costly and time-consuming, to say nothing of the particular skills needed. SEI
researchers are addressing these shortcomings by applying game
theory to the development of algorithms suitable for informing a fully
autonomous threat hunting capability.
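A toy model gives a feel for the game-theoretic framing: treat hunting as a two-subnet hide-and-seek game between a defender and an adversary, where each matrix entry is the probability of detection, and solve the resulting 2x2 zero-sum game for a randomized hunting strategy. The payoff numbers and closed-form solution below are illustrative assumptions, not the SEI's algorithms.

```python
# A toy game-theoretic model of threat hunting: the defender picks a subnet
# to hunt in, the adversary picks one to hide in, and each entry is the
# probability of detection (the payoff to the defender).
M = [[0.9, 0.1],   # hunt subnet A: adversary hides in A / hides in B
     [0.2, 0.8]]   # hunt subnet B: adversary hides in A / hides in B

def mixed_equilibrium(m):
    """Closed-form mixed strategy for a 2x2 zero-sum game with no saddle
    point: choose p so the adversary is indifferent between its options."""
    a, b = m[0]
    c, d = m[1]
    p = (d - c) / ((a - c) + (d - b))  # probability of hunting subnet A
    value = p * a + (1 - p) * c        # expected detection probability
    return p, value

p, value = mixed_equilibrium(M)
print(round(p, 3), round(value, 3))  # 0.429 0.5
```

The lesson carries over to the real problem: a predictable hunter is easy to evade, while a suitably randomized strategy guarantees a detection rate no matter where the adversary hides.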

Cybersecurity for AI

Trustworthiness is key to the acceptance of results produced by AI
systems. Systems that use ML are susceptible to attacks that make their results less reliable. SEI research is addressing
issues with the secure training of ML systems. In this collaborative
work with CMU, a team is ensuring that an ML system does not learn
the wrong thing during training (e.g., data poisoning), do the wrong
thing during operation (e.g., adversarial examples), or reveal the
wrong thing during operation (e.g., model inversion or membership
inference). To support this research, the team created the publicly
available Juneberry framework for automating the training,
evaluation, and comparison of multiple models against multiple
datasets.
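A toy example helps show why "doing the wrong thing during operation" matters: against a fixed logistic model, a small per-feature shift chosen using the model's own weights is enough to flip its decision. The weights and inputs below are invented for illustration; real adversarial attacks target far larger models, and frameworks such as Juneberry evaluate defenses against them systematically.

```python
import math

# A fixed, already-trained logistic model (weights are hypothetical).
W = [2.0, -3.0, 1.5]
B = -0.5

def predict(x):
    """Return P(class = 1) under the toy logistic model."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def adversarial(x, eps=0.3):
    """Shift each feature by eps against the model's decision; for a linear
    model, the gradient sign with respect to each input is just sign(w_i)."""
    return [xi - eps * math.copysign(1.0, w) for xi, w in zip(x, W)]

x = [1.0, 0.2, 0.5]
print(round(predict(x), 3))               # 0.839 -- confidently class 1
print(round(predict(adversarial(x)), 3))  # 0.426 -- decision flipped
```

The perturbation moves each input by only 0.3, yet the model's confidence collapses from well above the decision threshold to below it, which is exactly the failure mode adversarial-example attacks exploit.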

AI for Cybersecurity

The use of AI and ML for cybersecurity in, for example, anomaly
detection supports faster analysis and faster response than can be
provided by human power alone. In the SEI Artificial Intelligence
Defense Evaluation project, funded by the Department of Homeland
Security’s Cybersecurity and Infrastructure Security Agency (CISA), a
team is developing a means to test AI defenses. In early work, the
research team created a virtual environment representing a typical
corporate network and used the SEI-developed GHOSTS framework to
simulate user behaviors and generate realistic network traffic.
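To give a feel for the anomaly-detection side, the sketch below flags traffic volumes that deviate sharply from a learned baseline, the kind of simple statistical defense that a testbed with generated traffic can exercise. The threshold, feature choice, and byte counts are invented for illustration; deployed AI defenses use far richer features and models.

```python
import statistics

def find_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean (a classic z-score detector)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) / stdev > threshold]

# Bytes per minute from simulated "normal" users vs. a live capture.
baseline = [980, 1010, 995, 1005, 990, 1015, 1000, 1008]
observed = [1002, 997, 5400, 1011]
print(find_anomalies(baseline, observed))  # [5400]
```

A detector this simple is also easy to stress-test: replaying generated user behavior at varying intensities shows exactly where its false-positive and false-negative rates break down.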

Researchers are also seeking ways to improve human use of AI system results, including but not limited to those for cybersecurity.
This research is developing the Human-AI Decision Evaluation System,
a test harness for investigating AI-assisted human decision making in
a variety of simulation environments. The research team has
integrated the harness into game environments to observe the effect
of AI decision-support systems on gameplaying outcomes.

How You Can Support the Evolution of the Intelligent Systems of the Future

As the disciplines of software, cybersecurity, and AI engineering
converge and cross-pollinate, the SEI looks forward to learning from pilot
projects within the software-development community about
successes and challenges that developers and users experience. The
results of real-world applications in exercises will show us where pain
points emerge that require further research and development.

Undergraduate and graduate educational curricula, as well as continuing education and professional development, must continue
to evolve to keep pace with the rapid developments in practice that I
have outlined in this post. Degree programs, certificates, and
certifications will go a long way toward promoting the integration of AI
with software and cybersecurity engineering, taking some of the
mystery out of the craft and professionalizing the maturation of
proven, trusted practices and applications. The SEI has contributed to
establishing curricula for software engineering and cybersecurity
engineering, and we plan to apply our experience to the field of AI
engineering in the future.

Future missions will need technologically advanced and engineered intelligent systems that can scale quickly and gracefully to adapt to
different environments, generate data to respond dynamically to
changing conditions, and evolve with new mission parameters
(i.e., cyber-physical systems driven by intelligence). Through the
synergistic combination of software, cybersecurity, and AI
engineering, these intelligent, resilient, evolvable systems will be able
to scale, adapt in real time, and generate and use data to respond to
their environments. Reduction of the risk profile of such systems will
give their users greater confidence and trust, critical factors whenever
AI is added to the functionality of mission-critical systems.
