Can we formally verify the security of systems?
Avi Shaked
Senior Research Associate, University of Oxford
We provide an overview of systems verification from a security perspective and discuss how formal verification can fit into the general systems verification effort. The need for this discussion emerged from our project in the Digital Security by Design (DSbD) programme.
DSbD is a major UK initiative to promote the use of a new computer architecture that offers built-in hardware security mechanisms. The architecture has matured technically, including a full prototype by Arm. The DSbD ecosystem, however, lacks a mechanism to evaluate and communicate the potential security and resilience benefits of this technology when designing systems.
We discuss how systems are expected to be verified with respect to security concerns. We rely on a systems security engineering view, for which a primary source of advice is NIST's Special Publication 800-160 Vol. 1, "Engineering Trustworthy Secure Systems." The publication describes verification as "a set of activities that compares a system or system element with the required characteristics." Such activities are usually associated with one of the following verification methods: analysis, demonstration, inspection, testing, sampling, and verification by similarity. Accordingly, and in alignment with systems engineering literature and practice, security requirements, like other types of requirements, are to be associated with one or more of these verification methods. Evidently, the verification approach used to establish a system's security posture is driven by requirements. Requirements should articulate the intended features and behaviour at any specific level of the system hierarchy, before further, lower-level design is performed.
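To make the requirement-to-method association concrete, the following minimal sketch models it as a data structure. The class names, identifiers and requirement text are our own illustrative assumptions, not part of NIST SP 800-160 or any particular tool.

```python
from dataclasses import dataclass, field
from enum import Enum


class VerificationMethod(Enum):
    """The verification methods named above (illustrative encoding)."""
    ANALYSIS = "analysis"
    DEMONSTRATION = "demonstration"
    INSPECTION = "inspection"
    TEST = "test"
    SAMPLING = "sampling"
    SIMILARITY = "similarity"


@dataclass
class Requirement:
    """A requirement associated with one or more verification methods."""
    identifier: str
    text: str
    methods: set = field(default_factory=set)


# A hypothetical security requirement and how it would be verified.
req = Requirement(
    identifier="SEC-001",
    text="The system shall encrypt all data at rest.",
    methods={VerificationMethod.INSPECTION, VerificationMethod.TEST},
)
```

In practice such associations are recorded in requirements-management tooling; the point here is only that each requirement carries an explicit, checkable link to how it will be verified.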
Requirements are typically specified in natural language, but this may gradually change. NIST's Open Security Controls Assessment Language (OSCAL) is a prominent attempt to move the specification of security controls to a formal structure. These controls can be considered desirable system characteristics/requirements. Model-based systems security methodologies can facilitate a disciplined use of such controls in the context of system designs. We refer to one tool – TRADES Tool (https://github.com/IAI-Cyber/T...) – showcasing such a methodology.
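The practical benefit of a formal structure such as OSCAL is that controls become machine-readable and can be queried programmatically. The fragment below is a deliberately simplified, OSCAL-inspired JSON catalogue (the real OSCAL schema is considerably richer, with groups, parts and metadata); it only illustrates the idea of tooling over structured controls.

```python
import json

# Simplified, OSCAL-inspired catalogue fragment (illustrative only;
# the actual OSCAL catalog model nests controls under groups and
# carries UUIDs, metadata and structured prose parts).
catalog_json = """
{
  "catalog": {
    "controls": [
      {"id": "ac-1", "title": "Access Control Policy and Procedures"},
      {"id": "sc-8", "title": "Transmission Confidentiality and Integrity"}
    ]
  }
}
"""

catalog = json.loads(catalog_json)

# Because the controls are structured data, a tool can enumerate,
# filter and trace them mechanically rather than parsing prose.
control_ids = [c["id"] for c in catalog["catalog"]["controls"]]
```

A model-based tool can then link such control identifiers to design elements, which is the kind of disciplined use referred to above.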
Regardless of how requirements are defined, what is meant by "security" remains somewhat subjective: common ideals – such as confidentiality, integrity and availability – must be interpreted as they apply to the specific system of interest. This interpretation – typically performed by domain experts – must consider specific contexts, risk information (e.g., the threat landscape), and managerial decisions (e.g., risk appetite or budget constraints).
Both NIST SP 800-160 and the Common Criteria framework acknowledge the need to include formal methods when verifying the security posture of high-assurance systems. Formal verification uses mathematical methods to check engineering artefacts against a set of properties. The artefacts are typically abstract mathematical models of the system, and the properties are themselves formal mathematical statements. We discuss the challenges associated with specifying and using such models in the context of systems security engineering. We are specifically concerned with the integration of formal verification efforts into the larger, overarching system verification. Formalising security properties with respect to desirable characteristics is one such challenge, which we intend to highlight in our presentation.
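To give a flavour of what "verifying a model against a property" means, the following toy sketch exhaustively explores the state space of a contrived two-variable access-control model and checks a confidentiality-flavoured property: no reachable state allows reading secret data without prior authentication. Real formal verification uses dedicated tools (model checkers, theorem provers) and far richer models; the states, transitions and property here are entirely our own illustrative assumptions.

```python
# Toy formal model: a state is (authenticated, reading_secret).
INITIAL = (False, False)


def step(state):
    """Return all successor states of the toy model."""
    authenticated, reading = state
    successors = {
        (True, reading),   # authenticate (keeps any read session)
        (False, False),    # log out (drops any read session)
    }
    if authenticated:
        successors.add((True, True))  # reading requires authentication
    return successors


def reachable(initial):
    """Exhaustive explicit-state exploration from the initial state."""
    seen, frontier = {initial}, [initial]
    while frontier:
        for s in step(frontier.pop()):
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return seen


# Property check: no reachable state reads secrets while unauthenticated.
violations = [s for s in reachable(INITIAL) if s[1] and not s[0]]
```

Because the check covers every reachable state, an empty `violations` list is a proof of the property for this model; the engineering challenge discussed above is ensuring such a model, and the formalised property, faithfully reflect the system and its security requirements.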
About Avi Shaked
Avi Shaked is a senior research associate at the University of Oxford's Department of Computer Science. He received a BSc in physics and computer science, an MSc in physics, and a PhD in systems engineering, all from Tel Aviv University. Dr. Shaked has over 15 years of experience in research and development leadership roles. His research interests include systems thinking, domain-specific modelling, model-based design, cyber security and engineering management.