MEMOCODE 2019 TOC
A compositional semantics of Simulink/Stateflow based on quantized state hybrid automata
- Jin Woo Ro
- Avinash Malik
- Partha Roop
Simulink/Stateflow® is the de facto tool for the design of Cyber-Physical Systems (CPS). CPS include hybrid systems, where a discrete controller guides a continuous plant. Hybrid systems are characterised by continuous-time dynamics with sudden discontinuities caused by level/zero crossings. Stateflow can graphically capture hybrid phenomena, making it popular with control engineers. However, Stateflow is unable to correctly and efficiently simulate complex hybrid systems, especially those characterised by an even number of level crossings.
In this paper we first propose a new formal model for hybrid systems called the Quantized State Hybrid Input Output Automaton (QSHIOA). QSHIOA is used to give a deterministic semantics to Stateflow while also efficiently handling the detection of an even number of level crossings. In the proposed compositional semantics, a network of Stateflow charts can be compiled into a network of QSHIOAs. Benchmark results show that in the median case, the proposed Stateflow execution technique, via QSHIOA, is 84% faster than using the best variable step-size solvers in Simulink/Stateflow®.
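As a rough illustration of the quantized-state idea underlying QSHIOA, the following minimal Python sketch integrates a single ODE by advancing time to the next quantum crossing instead of using a fixed or adaptive time step. The function names, quantum size, and example ODE are illustrative assumptions, not the paper's algorithm or API.

# Minimal first-order quantized-state (QSS1-style) integrator sketch for dx/dt = f(x).
# Illustrative only: names, quantum, and the example ODE are not from the paper.

def qss1(f, x0, quantum, t_end):
    """Integrate dx/dt = f(x) by advancing to the next quantum crossing."""
    t, x = 0.0, x0
    q = x0                          # quantized state seen by other components
    trajectory = [(t, x)]
    while t < t_end:
        slope = f(q)
        if slope == 0.0:            # state is at equilibrium: no further events
            break
        dt = quantum / abs(slope)   # time until x drifts one quantum away from q
        t, x = t + dt, x + slope * dt
        q = x                       # re-quantize and emit an event at this instant
        trajectory.append((t, x))
    return trajectory

# Example: exponential decay dx/dt = -x starting from x = 1.0
print(qss1(lambda x: -x, 1.0, 0.05, 2.0)[-1])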
Further sub-cycle and multi-cycle scheduling support for Bluespec Verilog
- David J. Greaves
Bluespec [13] is a hardware description language where all behaviour is expressed in rules that execute atomically. The standard compilation semantics for Bluespec enforce a particular mapping between rule firing and hardware clock cycles, such as a register being updated by at most one rule firing in any clock cycle. Also, the standard compiler does not introduce any additional state, such as credit-based or round-robin arbiters, to guarantee fairness between rules over time. On the other hand, many useful hardware resources, such as complex ALUs and synchronous RAMs, are pipelined. Unlike typical high-level synthesis tools, in standard Bluespec such resources cannot be invoked using infix operators in expressions such as A[e] or e1*e2, since binding to specific instances and multi-clock-cycle schedules are required. In this paper we extend the reference semantics of Bluespec to decouple it from clock cycles, allowing multiple updates to a register within one clock cycle and automatic instantiation of arbiters for multi-clock-cycle behaviour. We describe the new semantic packing rules as extensions of our standard compilation rules and we report early results from an open-source, fully-functional implementation.
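For readers unfamiliar with the rule-based model, the following Python sketch mimics the standard one-update-per-register-per-cycle discipline described above: each cycle, a greedy scheduler fires enabled rules whose writes do not collide, and all writes commit together at the clock edge. The rules, registers, and scheduler here are illustrative toys, not Bluespec syntax or the actual compiler algorithm.

# Sketch of the "at most one rule updates a register per clock cycle" discipline.
# Rules, registers, and the greedy scheduler are illustrative assumptions.

regs = {"counter": 0, "flag": False}

# Each rule: (name, guard over current regs, action returning register updates)
rules = [
    ("increment", lambda r: not r["flag"],
                  lambda r: {"counter": r["counter"] + 1}),
    ("saturate",  lambda r: r["counter"] >= 3 and not r["flag"],
                  lambda r: {"flag": True}),
    ("reset",     lambda r: r["flag"],
                  lambda r: {"counter": 0, "flag": False}),
]

def clock_cycle(regs):
    """Fire a maximal set of enabled, non-conflicting rules atomically."""
    written, updates = set(), {}
    for name, guard, action in rules:
        if not guard(regs):
            continue
        delta = action(regs)            # reads see the start-of-cycle values
        if written & set(delta):        # would double-write a register: skip rule
            continue
        written |= set(delta)
        updates.update(delta)
    regs.update(updates)                # all surviving writes commit at the clock edge
    return regs

for _ in range(6):
    print(clock_cycle(regs))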
Securing implantable medical devices with runtime enforcement hardware
- Hammond Pearce
- Matthew M. Y. Kuo
- Partha S. Roop
- Srinivas Pinisetty
In recent years we have seen numerous proof-of-concept attacks on implantable medical devices such as pacemakers. Attackers aim to breach the strict operational constraints that these devices operate within, with the end goal of compromising patient safety and health. Most efforts to prevent these kinds of attacks are informal and focus on application- and system-level security, for instance using encrypted communications and digital certificates for program verification. However, these approaches will struggle to prevent all classes of attacks. Runtime verification has been proposed as a formal methodology for monitoring the status of implantable medical devices. Here, if an attack is detected, a warning is generated. This leaves open the risk that the attack can succeed before intervention can occur. In this paper, we propose a runtime-enforcement-based approach for ensuring patient security. Custom hardware is constructed for individual patients to ensure a safe minimum quality of service at all times. To ensure correctness we formally verify the hardware using a model checker. We present our approach through a pacemaker case study and demonstrate that it incurs minimal overhead in terms of execution time and power consumption.
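To give a flavour of what runtime enforcement (as opposed to mere monitoring) means here, the sketch below edits a pacing-command stream so that two pacing pulses are never issued closer together than a minimum interval. The 400 ms bound, the signal encoding, and the editing policy are illustrative assumptions, not the enforcer synthesised in the paper.

# Sketch of a runtime enforcer that edits a pacing-command stream so that two
# pacing pulses are never issued closer together than a minimum interval.
# The 400 ms bound and the (timestamp, pace) encoding are illustrative assumptions.

MIN_INTERVAL_MS = 400

def enforce(commands):
    """commands: list of (timestamp_ms, pace: bool). Suppress unsafe pulses."""
    last_pace = None
    safe_output = []
    for t, pace in commands:
        if pace and last_pace is not None and t - last_pace < MIN_INTERVAL_MS:
            safe_output.append((t, False))   # edit: drop the too-early pulse
        else:
            safe_output.append((t, pace))
            if pace:
                last_pace = t
    return safe_output

# A burst of pulses 100 ms apart: only pulses spaced >= 400 ms pass through.
print(enforce([(0, True), (100, True), (200, True), (500, True)]))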
A timeless model for the verification of quasi-periodic distributed systems
- Maryam Dabaghchian
- Zvonimir Rakamarić
A cyber-physical system often consists of distributed multi-rate periodic processes that communicate using message passing; each process owns a local clock that is not synchronized with the others. We call such systems quasi-periodic distributed systems. Traditionally, one would model them using timed automata, thereby having to deal with high-complexity verification problems. Recently, several researchers proposed discrete-time abstractions based on the calendar model to make the verification more tractable. However, even the calendar model contains a notion of time in the form of a global clock. We propose a novel, timeless computation model for quasi-periodic distributed systems to facilitate their verification. The main idea behind our model is to judiciously replace synchronization using a global clock and calendar with synchronization over the lengths of message buffers. We introduce a simple domain-specific language for programming such systems and use it to formalize the semantics of both the calendar and the timeless model. Then, we prove that our timeless model is an overapproximation of the calendar model. Finally, we evaluate our timeless model using several benchmarks.
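The following Python sketch illustrates the buffer-length idea in the simplest possible setting: a producer and a consumer with a roughly 3:1 rate ratio coordinate only through the length of the queue between them, with no clocks anywhere. The bound of 3, the rate ratio, and the scheduling loop are illustrative assumptions, not the paper's language or semantics.

# Sketch of "synchronization over message-buffer lengths": the producer may only
# run while it is within its buffer budget; the consumer may only run while a
# message is pending. No global clock or calendar is consulted.

from collections import deque

buffer = deque()
MAX_LEN = 3          # producer may run ahead by at most 3 unread messages

def producer_step(i):
    if len(buffer) < MAX_LEN:          # enabled only while within its budget
        buffer.append(f"msg{i}")
        return True
    return False                       # otherwise it must wait for the consumer

def consumer_step():
    if buffer:                         # enabled only if a message is pending
        return buffer.popleft()
    return None

produced, log = 0, []
for _ in range(8):                     # untimed rounds chosen by a scheduler
    for _ in range(3):                 # the producer is (up to) 3x faster
        if producer_step(produced):
            produced += 1
    msg = consumer_step()
    if msg is not None:
        log.append(msg)

print(log)   # messages arrive in order; the buffer length never exceeds MAX_LEN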
RTL bug localization through LTL specification mining (WIP)
- Vighnesh Iyer
- Donggyu Kim
- Borivoje Nikolic
- Sanjit A. Seshia
As the complexity of contemporary hardware designs continues to grow, functional verification demands more effort and resources in the design cycle than ever. As a result, manually debugging RTL designs is extremely challenging even with full signal traces after detecting errors in chip-level software simulation or FPGA emulation. Therefore, it is necessary to reduce the burden of verification by automating RTL debugging processes.
In this paper, we propose a novel approach to debugging with the use of LTL specification mining. In this approach, we extract fine-grained assertions that are implicitly encoded in the RTL design, representing the designer’s assumptions, to localize bugs that are only detected when high-level properties are violated in long-running full-system simulations. We employ template-based RTL spec mining to infer both safety and bounded liveness properties. We propose strategies to convert multi-bit signals into atomic propositions based on common RTL design idioms, such as ready-valid handshakes and specific state transitions, using automatic static analysis.
Our initial results with a tiny RISC-V core design show that this methodology is promising for localizing bugs in time and space: the mined fine-grained LTL properties are violated before a high-level test failure condition occurs, such as a timeout or a hang, and can point to specific lines of suspect RTL.
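As a concrete illustration of template-based mining, the sketch below instantiates a bounded-response template, G(a -> F<=k b), over pairs of boolean columns of a simulation trace and keeps only the instantiations the trace never violates. The trace, the signal names, and the single template are illustrative assumptions rather than the paper's template set or benchmarks.

# Sketch of template-based property mining: instantiate G(a -> F<=k b) over all
# ordered pairs of trace columns and keep the instantiations that always hold.

from itertools import permutations

def holds_bounded_response(trace, a, b, k):
    """True iff every cycle where 'a' holds is followed by 'b' within k cycles."""
    for i, row in enumerate(trace):
        if row[a] and not any(r[b] for r in trace[i:i + k + 1]):
            return False
    return True

trace = [  # one dict per clock cycle, as if dumped from simulation
    {"ready": 1, "valid": 0, "fire": 0},
    {"ready": 1, "valid": 1, "fire": 1},
    {"ready": 0, "valid": 1, "fire": 0},
    {"ready": 1, "valid": 1, "fire": 1},
]

mined = [(a, b) for a, b in permutations(trace[0], 2)
         if holds_bounded_response(trace, a, b, k=1)]
print(mined)   # candidate assertions of the form G(a -> F<=1 b)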
Encoding and monitoring responsibility sensitive safety rules for automated vehicles in signal temporal logic
- Mohammad Hekmatnejad
- Shakiba Yaghoubi
- Adel Dokhanchi
- Heni Ben Amor
- Aviral Shrivastava
- Lina Karam
- Georgios Fainekos
As Automated Vehicles (AVs) get ready to hit the public roads unsupervised, many practical questions still remain open. For example, there is no commonly accepted formal definition of what safe driving is. A formal definition of safe driving can be utilized in developing the vehicle behaviors as well as in certification and legal cases. Toward that goal, the Responsibility-Sensitive Safety (RSS) model was developed as a first step toward formalizing safe driving behavior upon which the broader AV community can expand. In this paper, we demonstrate that the RSS model can be encoded in Signal Temporal Logic (STL). Moreover, using the S-TaLiRo tools, we present a case study of monitoring RSS requirements on selected traffic scenarios from CommonRoad. We conclude that monitoring RSS rules encoded in STL is efficient even in heavy traffic scenarios. One interesting observation is that for the selected traffic data, vehicle parameters, and response times, the RSS model violations are not frequent.
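As a sketch of what such an encoding can look like, the longitudinal RSS rule is commonly stated via the minimum safe following distance below, which can then be wrapped in an STL invariant. The exact predicates and formulas used in the paper may differ, and properResponse here is a placeholder predicate.

% Minimum safe longitudinal distance between a rear vehicle (speed v_r) and a
% front vehicle (speed v_f), with response time rho, maximum acceleration
% a_max^acc and minimum braking a_min^brake of the rear vehicle, and maximum
% braking a_max^brake of the front vehicle (a sketch of the RSS formula):
d_{\min} = \left[\, v_r\,\rho + \tfrac{1}{2}\, a^{\mathrm{acc}}_{\max}\,\rho^2
    + \frac{\bigl(v_r + \rho\, a^{\mathrm{acc}}_{\max}\bigr)^2}{2\, a^{\mathrm{brake}}_{\min}}
    - \frac{v_f^2}{2\, a^{\mathrm{brake}}_{\max}} \,\right]_{+}

% One possible STL wrapping of the rule: whenever the actual longitudinal
% distance drops below d_min, a proper response must begin within rho seconds.
\varphi_{\mathrm{long}} \;=\; \Box\,\bigl( d_{\mathrm{long}} < d_{\min}
    \;\rightarrow\; \Diamond_{[0,\rho]}\; \mathit{properResponse} \bigr)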
A compositional approach for real-time machine learning
- Nathan Allen
- Yash Raje
- Jin Woo Ro
- Partha Roop
Cyber-Physical Systems are highly safety-critical, especially since they have to provide both functional and timing guarantees. Increasingly, Cyber-Physical Systems such as autonomous vehicles are relying on Artificial Neural Networks in their decision making, and this has obvious safety implications. While many formal approaches have been recently developed for ensuring functional correctness of machine learning modules involving Artificial Neural Networks, the issue of timing correctness has received scant attention.
This paper proposes a new compiler from the well-known Keras Neural Network library to hardware to mitigate the above problem. In the developed approach, we compile networks of Artificial Neural Networks, called Meta Neural Networks, to hardware implementations using a new synchronous semantics for their execution. The developed semantics enables compilation of Meta Neural Networks to a parallel hardware implementation involving limited hardware resources. The developed compiler is semantics driven and guarantees that the generated implementation is deterministic and time predictable. The compiler also provides a better alternative for the realisation of non-linear functions in hardware. Overall, we show that the developed approach is significantly more efficient than a software approach, without the burden of the complex algorithms needed for software Worst-Case Execution Time analysis.
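For concreteness, the snippet below builds the kind of Keras model such a compiler would take as input: two small feed-forward networks whose outputs feed a combining layer, i.e. a simple network of networks. The layer sizes, activations, and input names are assumptions for illustration; they are not the Meta Neural Networks used in the paper.

# Illustrative Keras "network of networks": two small sub-networks merged into a
# single decision output. Shapes, activations, and names are assumptions.

import tensorflow as tf

def small_net(inputs, name):
    x = tf.keras.layers.Dense(8, activation="relu", name=f"{name}_hidden")(inputs)
    return tf.keras.layers.Dense(2, activation="tanh", name=f"{name}_out")(x)

sensor_a = tf.keras.Input(shape=(4,), name="sensor_a")
sensor_b = tf.keras.Input(shape=(4,), name="sensor_b")

merged = tf.keras.layers.concatenate([small_net(sensor_a, "nn_a"),
                                      small_net(sensor_b, "nn_b")])
decision = tf.keras.layers.Dense(1, activation="sigmoid", name="decision")(merged)

meta_nn = tf.keras.Model(inputs=[sensor_a, sensor_b], outputs=decision)
meta_nn.summary()   # the static layer graph is what a hardware compiler would walk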
Polyhedral fragments: an efficient representation for symbolically generating code for processor arrays
- Michael Witterauf
- Frank Hannig
- Jürgen Teich
To leverage the vast parallelism of loops, embedded loop accelerators often take the form of processor arrays with many, but simple, processing elements. Each processing element executes a subset of a loop’s iterations in parallel using instruction- and data-level parallelism by tightly scheduling iterations using software pipelining and packing instructions into compact, individual programs. However, loop bounds are often unknown until runtime, which complicates the static generation of programs because they influence each program’s control flow.
Existing solutions, like generating and storing all possible programs or full just-in-time compilation, are prohibitively expensive, especially in embedded systems. As a remedy, we propose a hybrid approach introducing a tree-like program representation, whose generation front-loads all intractable sub-problems to compile time, and from which all concrete program variants can efficiently be stitched together at runtime. The tree consists of so-called polyhedral fragments that represent concrete program parts and are annotated with iteration-dependent conditions.
We show that this representation is both space- and time-efficient: it requires polynomial space to store, whereas storing all possibly generated programs is non-polynomial, and polynomial time to evaluate, whereas just-in-time compilation requires solving NP-hard problems. In a case study, we show for a representative loop program that using a tree of polyhedral fragments saves 98.88% of space compared to storing all program variants.
Security-driven metrics and models for efficient evaluation of logic encryption schemes
- Yinghua Hu
- Vivek V. Menon
- Andrew Schmidt
- Joshua Monson
- Matthew French
- Pierluigi Nuzzo
Research in logic encryption over the last decade has resulted in various techniques to prevent different security threats such as Trojan insertion, intellectual property leakage, and reverse engineering. However, there is little agreement on a uniform set of metrics and models to efficiently assess the achieved security level and the trade-offs between security and overhead. This paper addresses the above challenges by relying on a general logic encryption model that can encompass all the existing techniques, and a uniform set of metrics that can capture multiple, possibly conflicting, security concerns. We apply our modeling approach to four state-of-the-art encryption techniques, showing that it enables fast and accurate evaluation of design trade-offs, yields average prediction errors that are at least 2× smaller than those of previous approaches, and supports the evaluation of compound encryption methods.
Modeling observability in adaptive systems to defend against advanced persistent threats
- Cody Kinneer
- Ryan Wagner
- Fei Fang
- Claire Le Goues
- David Garlan
Advanced persistent threats (APTs) are a particularly troubling challenge for software systems. The adversarial nature of the security domain, and APTs in particular, poses unresolved challenges to the design of self-* systems, such as how to defend against multiple types of attackers with different goals and capabilities. In this interaction, the observability of each side is an important and under-investigated issue in the self-* domain. We propose a model of APT defense that elevates observability as a first-class concern. We evaluate this model by showing how an informed approach that uses observability improves the defender’s utility compared to a uniform random strategy, can enable robust planning through sensitivity analysis, and can inform observability-related architectural design decisions.
Approximate computing for multithreaded programs in shared memory architectures
- Bernard Nongpoh
- Rajarshi Ray
- Ansuman Banerjee
In multicore architectures with multiple caches, cache coherence is ensured with a coherence protocol. However, the performance benefits of caching diminish due to the cost associated with the protocol implementation. In this paper, we propose a novel technique to improve the performance of multithreaded programs running on shared-memory multicore processors by embracing approximate computing. Our idea is to relax the coherence requirement selectively in order to reduce the cost associated with a cache-coherence protocol, and at the same time, ensure a bounded QoS degradation with probabilistic reliability. In particular, we detect instructions in a multithreaded program that write to shared data, which we call Shared-Write-Access-Points (SWAPs), and propose an automated statistical analysis to identify those which can tolerate coherence faults. We call such SWAPs approximable. Our experiments on 9 applications from the SPLASH 3.0 benchmark suite reveal that an average of 57% of the tested SWAPs are approximable. To leverage this observation, we propose an adapted cache-coherence protocol that relaxes the coherence requirement on stores from approximable SWAPs. Additionally, our protocol uses stale values for load misses due to coherence, the stale value being the version at the time of invalidation. We observe an average of 15% reduction in CPU cycles and an 11% reduction in energy footprint from architectural simulation of the 9 applications using our approximate execution scheme.
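The statistical flavour of the analysis can be sketched as follows: inject coherence-fault-like effects (here, occasionally lost updates to a shared counter) into many runs of a computation and check that the quality-of-service degradation stays within a bound in nearly all of them. The Monte Carlo workload, the 1% drop rate, the 5% error bound, and the trial count are all illustrative assumptions, not the paper's analysis or benchmarks.

# Sketch of the statistical test behind marking a shared-write point as
# "approximable": lose a small fraction of updates to a shared counter and
# check the relative error of the final result against a QoS bound.

import random

def monte_carlo_pi(samples, drop_prob):
    """Estimate pi; with probability drop_prob an update to the shared counter
    is lost, mimicking a store issued against a stale cache line."""
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0 and random.random() >= drop_prob:
            hits += 1                      # the surviving (coherent) update
    return 4.0 * hits / samples

TRIALS, QOS_BOUND = 200, 0.05              # tolerate 5% relative error
ok = sum(abs(monte_carlo_pi(10_000, drop_prob=0.01) - 3.14159) / 3.14159 <= QOS_BOUND
         for _ in range(TRIALS))
print(f"{ok}/{TRIALS} runs within the QoS bound")   # a high ratio suggests the
                                                    # write point is approximable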
Compositional construction of bounded error over-approximations of acyclic interconnected continuous dynamical systems
- Ratan Lal
- Pavithra Prabhakar
We consider the problem of bounded-time safety verification of interconnections of input-output continuous dynamical systems. We present a compositional framework for computing bounded error approximations of the complete system from those of the components. The crux of our approach consists of capturing the input-output signal behaviors of a component using an abstraction predicate that represents the input-output sample behaviors corresponding to the signal behaviors. We define a semantics for the abstraction predicate that captures an over-approximation of the input-output signal behaviors of a component. Next, we define how to compose abstraction predicates of components to obtain an abstraction predicate for the composed system. We instantiate our compositional abstraction construction framework for linear dynamical systems by providing concrete methods for constructing the input-output abstraction predicates for the individual systems.
Security analysis of cloud-connected industrial control systems using combinatorial testing
- Peter W. V. Tran-Jørgensen
- Tomas Kulik
- Jalil Boudjadar
- Peter Gorm Larsen
Industrial control systems are moving from monolithic to distributed and cloud-connected architectures, which increases system complexity and vulnerability and thus complicates security analysis. When exhaustive verification accounts for this complexity, the state space to be explored grows drastically as the system model evolves and more details are considered. Eventually this may lead to state space explosion, which makes exhaustive verification infeasible. To address this, we use VDM-SL’s combinatorial testing feature to generate security attacks that are executed against the model to verify whether the system has the desired security properties. We demonstrate our approach using a cloud-connected industrial control system that is responsible for performing safety-critical tasks and handling client requests sent to the control network. Although the approach is not exhaustive, it enables verification of mitigation strategies for a large number of attacks and complex systems within reasonable time.
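The combinatorial idea can be sketched in a few lines: enumerate bounded sequences of (attack action, target) pairs, execute each against an executable model of the system, and flag any sequence that violates a security property. The actions, targets, and toy model below are illustrative assumptions; the paper expresses this with VDM-SL's combinatorial testing traces rather than Python.

# Sketch of combinatorial attack generation: run every bounded attack sequence
# against a toy model and report the sequences that break the security property.

from itertools import product

ACTIONS = ["spoof_request", "replay_message", "flood_channel"]
TARGETS = ["cloud_gateway", "control_network"]

def run_model(attack_sequence):
    """Toy model: the control network stays safe unless a spoofed request
    reaches it after the cloud gateway has been flooded."""
    gateway_degraded = False
    for action, target in attack_sequence:
        if action == "flood_channel" and target == "cloud_gateway":
            gateway_degraded = True
        if action == "spoof_request" and target == "control_network" and gateway_degraded:
            return False      # security property violated
    return True

violations = [seq for seq in product(product(ACTIONS, TARGETS), repeat=2)
              if not run_model(seq)]
print(f"{len(violations)} violating attack sequences out of {6**2}")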
Detecting security leaks in hybrid systems with information flow analysis
- Luan Viet Nguyen
- Gautam Mohan
- James Weimer
- Oleg Sokolsky
- Insup Lee
- Rajeev Alur
Information flow analysis is an effective way to check useful security properties, such as whether secret information can leak to adversaries. Despite being widely investigated in the realm of programming languages, information-flow-based security analysis has not been widely studied in the domain of cyber-physical systems (CPS). CPS provide interesting challenges to traditional type-based techniques, as they model mixed discrete-continuous behaviors and are usually expressed as a composition of state machines. In this paper, we propose a lightweight static analysis methodology that enables checking information security properties of CPS models. We introduce a set of security rules for hybrid automata that characterizes the property of non-interference. Based on those rules, we propose an algorithm that generates security constraints between the sub-components of hybrid automata, and then transforms these constraints into a directed dependency graph to search for non-interference violations. The proposed algorithm can be applied directly to parallel compositions of automata without resorting to model-flattening techniques. Our static checker works on hybrid systems modeled in the Simulink/Stateflow format and decides whether or not the model satisfies non-interference given a user-provided security annotation for each variable. Moreover, our approach can also infer the security labels of variables, allowing a designer to verify the correctness of partial security annotations. We demonstrate the potential benefits of the proposed methodology on two case studies.
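The dependency-graph step can be pictured with the small Python sketch below: edges record that one variable influences another, and any path from a High-labelled variable to a Low-labelled one is reported as a potential leak. The variable names, labels, and edges are illustrative and are not the paper's rule set or constraint encoding.

# Sketch of the dependency-graph view of non-interference: a path from a
# High-labelled variable to a Low-labelled one is a potential leak.

from collections import deque

# Edges derived from assignments/guards, e.g. "secret_key influences mode".
flows = {
    "secret_key": ["mode"],
    "mode":       ["actuator_cmd"],
    "sensor_in":  ["actuator_cmd"],
}
labels = {"secret_key": "High", "sensor_in": "Low",
          "mode": "Low", "actuator_cmd": "Low"}

def leaks(flows, labels):
    """Return (high_source, low_sink) pairs reachable in the flow graph."""
    found = []
    for src, lab in labels.items():
        if lab != "High":
            continue
        seen, queue = {src}, deque([src])
        while queue:                       # breadth-first reachability
            for nxt in flows.get(queue.popleft(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
                    if labels.get(nxt) == "Low":
                        found.append((src, nxt))
    return found

print(leaks(flows, labels))   # e.g. [('secret_key', 'mode'), ('secret_key', 'actuator_cmd')]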
Logical specification and uniform synthesis of robust controllers
- Paritosh K. Pandya
- Amol Wakankar
This paper investigates the synthesis of robust controllers from a logical specification of regular properties given in the interval temporal logic QDDC. Our specification encompasses both hard robustness and soft robustness. Here, hard robustness guarantees the invariance of the commitment under relaxed (weakened) assumptions. A systematic framework for logically specifying the assumption weakening by means of a QDDC formula Rb(A), called a robustness criterion, is presented. This can be used with any user-specified assumption DA to obtain a relaxed (weakened) assumption Rb(DA). A variety of robustness criteria, encompassing existing notions such as k,b-resilience as well as new notions like tolerating non-burst errors and recovery from transient errors, are formulated logically. The soft robustness pertains to the ability of the controller to maintain the commitment for as many inputs as possible, irrespective of any assumption. We present a uniform method for the synthesis of a robust controller which guarantees the invariance of the specified hard robustness and optimizes the expected value of the occurrence of the commitment across input sequences. Through a case study of a synchronous bus arbiter, we experimentally show the impact of a variety of hard robustness criteria as well as of the soft robustness on the ability of the synthesized controllers to meet the commitment “as much as possible”.
Lattice-based SMT for program verification
- Karine Even-Mendoza
- Antti E. J. Hyvärinen
- Hana Chockler
- Natasha Sharygina
We present a lattice-based satisfiability modulo theory for the verification of programs with library functions, where the mathematical libraries supporting these functions contain a large number of equations and inequalities. Common strategies for dealing with library functions include treating them as uninterpreted functions or using the theories under which the functions are fully defined. The full definitions could in most cases lead to instances that are too large to solve efficiently.
Our lightweight theory uses lattices for an efficient representation of library functions by a subset of guarded literals. These lattices are constructed from equations and inequalities describing properties of the library functions, and the relevant subsets of literals are found during lattice traversal. We generalise the method to a number of lattices for functions whose values depend on each other in the program, and we describe an algorithm for traversing several lattices simultaneously, so that the combination of guarded literals drawn from all lattices does not lead to contradictory values of their variables.
We evaluate our approach on benchmarks taken from the robotics community, and our experimental results demonstrate that we are able to solve a number of instances that were previously unsolvable by existing SMT solvers.
Establishing a refinement relation between binaries and abstract code
- Freek Verbeek
- Joshua Bockenek
- Abhijith Bharadwaj
- Binoy Ravindran
- Ian Roessle
This paper presents a method for establishing a refinement relation between a binary and a high-level abstract model. The abstract model is based on standard notions of control flow, such as if-then-else statements, while loops, and variable scoping. Moreover, it contains high-level data structures such as lists and records. This makes the abstract model amenable to off-the-shelf verification techniques such as model checking or interactive theorem proving. The refinement relation translates, e.g., sets of memory locations to high-level datatypes, or pointer arithmetic to standard HOL functions such as list operations or record accessors. We show the applicability of our approach by verifying functions from a binary containing the Network Security Services framework from Mozilla Firefox, running on the x86-64 architecture. Our methodology is interactive. We show that we are able to verify approximately 1000 lines of x86-64 machine code (corresponding to about 400 lines of source code) in one person-month.