We are happy to announce our invited speakers for CSF'24!
SCION is a new Internet architecture that addresses many of the security vulnerabilities of today’s Internet. Its clean-slate design provides, among other properties, route control, failure isolation, and multi-path communication. The verifiedSCION project is an effort to formally verify the correctness and security of SCION. It aims to provide strong guarantees for the entire architecture, from the protocol design to its concrete implementation. The project uses stepwise refinement to prove that the protocol withstands increasingly strong attackers. The refinement proofs assume that all network components, such as routers, satisfy their specifications. This property is then verified separately using deductive program verification in separation logic. This talk will give an overview of the verifiedSCION project and explain, in particular, how we verify code-level properties such as memory safety and conformance to the protocol.
Bio: Peter Müller has been Full Professor and head of the Programming Methodology Group at ETH Zurich since August 2008. His research focuses on languages, techniques, and tools for the development of correct software. His previous appointments include a position as Researcher at Microsoft Research in Redmond, an Assistant Professorship at ETH Zurich, and a position as Project Manager at Deutsche Bank in Frankfurt. Peter Müller received his PhD from the University of Hagen.
Due to their increasing power and flexibility, GenAI models are being incorporated into a broad range of user-facing applications. While AI model safety and alignment have received substantial attention, leading to improvements in the models themselves, addressing security concerns for AI-enabled systems requires a broader approach that accounts for the interaction between AI-based and non-AI components. This will require extending the modular approaches used for traditional systems to include a compatible understanding of AI-based components. As an early step in this direction, this talk will look at sample architectures for leveraging AI models, security concerns in example systems, and possible approaches to security modeling of AI components.