Summary: Activity 6
In this blog post, I review the origins and interactions of the software,
cybersecurity, and AI engineering disciplines and consider how their
interrelationships will shape the intelligent systems of the
future.
Cybersecurity for AI
Trustworthiness is key to the acceptance of results produced by AI
systems. Systems that use ML are susceptible to attacks that make
their results less reliable. SEI research is addressing
issues with the secure training of ML systems. In this collaborative
work with CMU, a team is ensuring that an ML system does not learn
the wrong thing during training (e.g., data poisoning), do the wrong
thing during operation (e.g., adversarial examples), or reveal the
wrong thing during operation (e.g., model inversion or membership
inference). To support this research, the team created the publicly
available Juneberry framework for automating the training,
evaluation, and comparison of multiple models against multiple
datasets.
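To make one of these attack classes concrete, here is a minimal sketch of an adversarial example against a linear classifier, in the spirit of the fast gradient sign method. The weights, input, and epsilon below are invented for illustration and are not from Juneberry or the SEI work; for a linear model, the gradient of the score with respect to the input is simply the weight vector.

```python
import numpy as np

# Illustrative linear classifier: predict 1 if w.x + b > 0.
# (Weights and bias are made up for this sketch.)
w = np.array([2.0, -1.0, 0.5])
b = -0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.6, 0.2, 0.4])  # clean input, classified as 1
eps = 0.5                      # perturbation budget

# FGSM-style step: move each feature by eps against the sign of the
# gradient of the score (for a linear model, the gradient is w itself).
x_adv = x - eps * np.sign(w)

print(predict(x))      # 1 on the clean input
print(predict(x_adv))  # 0 after a small, bounded perturbation
```

The perturbation is bounded by eps in every coordinate, yet it flips the prediction, which is exactly the kind of operational failure ("do the wrong thing during operation") that the research described above aims to defend against.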
AI for Cybersecurity
The use of AI and ML for cybersecurity in, for example, anomaly
detection supports faster analysis and faster response than can be
provided by human power alone. In the SEI Artificial Intelligence
Defense Evaluation project, funded by the Department of Homeland
Security’s Cybersecurity and Infrastructure Security Agency (CISA), a
team is developing a means to test AI defenses. In early work, the
research team created a virtual environment representing a typical
corporate network and used the SEI-developed GHOSTS framework to
simulate user behaviors and generate realistic network traffic.
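A minimal sketch of the kind of anomaly detection such traffic could feed is a simple statistical detector: flag any observation that falls far outside the baseline distribution. The byte counts and threshold below are invented for illustration and have no connection to the GHOSTS framework's actual output format.

```python
import numpy as np

# Hypothetical per-minute byte counts from simulated corporate
# network traffic (values are made up for this sketch).
baseline = np.array([1200, 1150, 1300, 1250, 1180, 1220, 1270, 1190])

def is_anomalous(value, history, threshold=3.0):
    """Flag a value lying more than `threshold` std devs from the mean."""
    mu, sigma = history.mean(), history.std()
    return abs(value - mu) > threshold * sigma

print(is_anomalous(1230, baseline))   # ordinary traffic: False
print(is_anomalous(50000, baseline))  # exfiltration-sized spike: True
```

Real deployments replace this z-score rule with learned models, but the evaluation question the SEI project addresses is the same: how reliably does the detector separate realistic benign traffic from attack traffic?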