Publications of Dr Stephen Fitzpatrick

2019

This report discusses: (i) generating implementations of an application that use alternative concrete representations of abstract data types; and (ii) generating run-time monitors from semantic information contained in abstract data types and their implementations.
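
The report itself is only summarized here; purely as an illustrative sketch of what alternative concrete representations of an abstract data type look like, the following Python fragment (all names hypothetical) implements one abstract integer-set interface with two interchangeable representations.

```python
# Hypothetical illustration (not from the report): one abstract integer-set
# interface realized by two interchangeable concrete representations.
from abc import ABC, abstractmethod
import bisect

class IntSet(ABC):
    """Abstract data type: a set of integers."""
    @abstractmethod
    def insert(self, x): ...
    @abstractmethod
    def contains(self, x): ...

class SortedListSet(IntSet):
    """Concrete representation 1: a sorted list, membership by binary search."""
    def __init__(self):
        self._items = []
    def insert(self, x):
        i = bisect.bisect_left(self._items, x)
        if i == len(self._items) or self._items[i] != x:
            self._items.insert(i, x)
    def contains(self, x):
        i = bisect.bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x

class HashSetRep(IntSet):
    """Concrete representation 2: a hash set."""
    def __init__(self):
        self._items = set()
    def insert(self, x):
        self._items.add(x)
    def contains(self, x):
        return x in self._items

# An application written against IntSet runs unchanged with either representation.
def demo(s):
    for x in (3, 1, 4, 1, 5):
        s.insert(x)
    return s.contains(4) and not s.contains(2)

assert demo(SortedListSet()) and demo(HashSetRep())
```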

2015

The VIBRANCE tool starts with a vulnerable Java application and automatically hardens it against SQL injection, OS command injection, file path traversal, numeric errors, denial of service, and other attacks. For a large class of attacks, the protection added by VIBRANCE blocks the attacks and safely continues execution.

2014

This work investigates the use of medium-grained synthetic diversity to inhibit an attacker's ability to identify and exploit weaknesses in a networked application. We specify the application and use formal refinements to generate numerous implementations that use different combinations of algorithms and data representations. By deploying different implementations, we make it more difficult for an attacker to learn details about how any particular implementation works, what resources it uses, and what operating system or network services it uses; such information is a critical prerequisite for many cyber attacks. We also use semantic constraints in specifications and refinements to generate run-time monitors that can detect compromised data. The semantic information also enables either total or approximate repair of the data, allowing an application to recover from an attack. We also use formal transformations on specifications to augment data structures with additional semantic constraints to enhance monitoring and repair.
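
As a rough, hypothetical illustration of the monitoring and repair idea (not the generated code described in the abstract), the following Python sketch augments a data structure with redundant semantic information so that a run-time check can detect tampering and attempt an approximate repair.

```python
# Hypothetical sketch: a data structure augmented with a semantic constraint
# (sortedness plus an element checksum) so that a run-time monitor can detect
# tampering and attempt repair.

class MonitoredSortedList:
    def __init__(self, items):
        self.items = sorted(items)
        self.checksum = sum(self.items)    # redundant semantic information

    def monitor(self):
        """Detect compromised data: sortedness or checksum violated."""
        return self.items == sorted(self.items) and sum(self.items) == self.checksum

    def repair(self):
        """Approximate repair: re-establish the invariant that can be restored
        (sortedness); the checksum is then recomputed from the surviving data."""
        self.items.sort()
        self.checksum = sum(self.items)

data = MonitoredSortedList([5, 2, 9])
data.items[0] = 42                 # simulated corruption by an attacker
assert not data.monitor()          # the monitor detects the violated constraint
data.repair()                      # recover a consistent (if approximate) state
assert data.monitor()
```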

2006

In this work, we address the problem of coordinating the distributed execution of plans and schedules by multiple agents subject to a number of different execution uncertainties. The coordination of multi-agent teams in uncertain, dynamic domains is a challenging problem requiring the fusion of techniques from many disciplines. We describe an approach based on the dynamic and selective use of a family of different problem-solving strategies that combine stochastic state estimation with repair-based and heuristic-guided planning and scheduling techniques. This approach is implemented as a cognitive problem-solving architecture that combines (i) a deliberative scheduler, which performs partially-centralized solution repair, (ii) an opportunistic scheduler, which locally optimizes resource utilization for plan enhancement, and (iii) a downgrader, which proactively guides the execution into regions of higher likelihood of success. This paper characterizes the complexity of the problem through examples and experiments, and discusses the advantages and effectiveness of the implemented solution.

Our Criticality-Sensitive Coordination (CSC) agents are designed to enhance the performance of a human team working together in uncertain and dynamic settings by monitoring and adapting their plans as dictated by the evolution of the environment. Such situations model military scenarios such as coordinated joint operations, or enterprise settings such as multiple-project management. Among the many challenges in these situations are the large space of possible states due to uncertainty, the distributed, partial knowledge of the current state and plan among the agents, and the need to react in a timely manner to events that may not be in the original model. In fact, reaction alone is often insufficient: in environments where success depends on completing sequences of coupled actions, one needs to anticipate future difficulties and enable contingencies to alleviate potential hazards.

2005

We investigate the problem of keeping the plans of multiple agents synchronized during execution. We assume that agents only have a partial view of the overall plan. They know the tasks they must perform, and know the tasks of other agents with whom they have direct dependencies. Initially, agents are given a schedule of tasks to perform together with a collection of contingency plans that they can engage during execution in case execution deviates from the plan. During execution, agents monitor the status of their tasks, adjusting their local schedules as necessary and informing dependent agents about the changes. When agents determine that their schedule is broken or that a contingency schedule may be better, they engage in coordinating plan changes with other agents. We present a dynamic partial centralization approach to coordination. When a unit detects a problem (task delay, inability to perform a task), it will dynamically form a cluster of the critically affected agents (a subset of all potentially affected agents). The cluster will elect a leader, who will retrieve all task and contingency plan information from the cluster members and compute a solution depending on the situation.
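
A minimal sketch of the dynamic partial centralization step, with illustrative data and policies that are not taken from the paper: the detecting agent forms a cluster of critically affected agents and a leader is elected deterministically.

```python
# Hypothetical sketch (names and policies are illustrative, not from the paper):
# an agent detecting a task delay forms a cluster of critically affected agents,
# and the cluster elects a leader to compute a local repair.

dependencies = {          # agent -> agents that directly depend on its tasks
    "A": ["B", "C"],
    "B": ["D"],
    "C": [],
    "D": [],
}

def form_cluster(detector):
    """Cluster = the detecting agent plus its directly dependent agents."""
    return {detector, *dependencies[detector]}

def elect_leader(cluster):
    """Simple deterministic election: lexicographically smallest agent id."""
    return min(cluster)

cluster = form_cluster("A")        # "A" detects that one of its tasks is delayed
leader = elect_leader(cluster)
# The leader would now retrieve schedules and contingency plans from the
# cluster members and compute a coordinated schedule change for them.
print(f"cluster={sorted(cluster)}, leader={leader}")
```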

Respect is an effort to allow a problem solver to accept advice and exhibit learning. It is based on a high-level description of a problem solver that can be examined and manipulated by the problem-solver itself. Experiments are being conducted in such problem-solving arenas as meeting scheduling, type checking, and logistics planning. By specializing a general-purpose problem solver to a declarative problem specification, we obtain a procedure for that particular problem domain, which is not necessarily efficient. Because the initial problem solver is high-level, the information is explicitly available as to which operations can be reordered; reordering can lead to radical changes in the search space. In the meeting scheduling domain, this reordering can ensure that we schedule the scarce resources first (busy people, popular facilities). We have seen examples in which failed solutions can be examined automatically to suggest relaxation of constraints---a kind of problem reformulation. For example, if a scheduling problem is unsolvable, an examination of the search space can suggest that a trip be extended an additional day, or that some participants work an extra hour. The envisioned system accepts advice as to how to examine the search space.
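
A small, hypothetical Python sketch of the reordering heuristic mentioned above (not Respect's actual machinery): meetings are ordered so that those involving the most heavily demanded participants are scheduled first, a most-constrained-first ordering.

```python
# Hypothetical sketch of the "schedule scarce resources first" idea: order
# meetings by the demand on their busiest participant.

meetings = {
    "standup":    {"alice", "bob"},
    "review":     {"alice", "carol", "dave"},
    "one_on_one": {"alice", "eve"},
    "social":     {"dave", "eve"},
}

# Scarcity of a person = how many meetings request them.
demand = {}
for people in meetings.values():
    for p in people:
        demand[p] = demand.get(p, 0) + 1

def scarcity(meeting):
    """A meeting is 'tight' if its most-demanded participant is heavily requested."""
    return max(demand[p] for p in meetings[meeting])

# Reordering the search so the tightest meetings are placed first can change
# the search space radically; the meeting with no busy participant comes last.
order = sorted(meetings, key=scarcity, reverse=True)
print(order)
```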

2004

This project addressed the problem of the real-time management of distributed resources. It developed an abstract formulation in terms of distributed constraint optimization and developed a simple, anytime algorithm that was shown to be effective, efficient, robust and scalable to large problems. It extended the formulation and algorithm to the management of distributed sensors in a target detection and tracking application. It also developed high-level models for resource management in time-critical targeting in air campaigns and extended an existing scheduler generator tool to automatically generate executable schedulers that quickly assign aircraft and munitions to prosecute targets of opportunity while minimizing disruption to pre-planned target prosecutions.

This report details an approach to the real-time coordination of large networks of short-range sensors that communicate over short-range, low-bandwidth, high-latency radio channels.

2003

In this chapter, a peer-to-peer algorithm is described for approximately solving distributed, real-time, constraint optimization problems. The ANTS challenge problem is formulated as a distributed constraint optimization problem; an approximate version of the classical problem of graph k-coloring is also formulated as a distributed constraint optimization problem to enable simple experimental assessment of the algorithm's performance.
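
A minimal sketch, with an invented example graph, of how soft k-coloring can be cast as a constraint optimization problem: the objective is simply the number of edges whose endpoints share a color.

```python
# Hypothetical formulation sketch (not the chapter's code): soft graph k-coloring
# as constraint optimization, minimizing the number of conflicting edges.

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a small invented graph
k = 3                                       # each node's color ranges over range(k)

def conflicts(coloring):
    """Objective to minimize: the number of monochromatic edges."""
    return sum(1 for u, v in edges if coloring[u] == coloring[v])

coloring = {0: 0, 1: 1, 2: 0, 3: 2}        # one candidate assignment of colors
assert all(0 <= c < k for c in coloring.values())
print(conflicts(coloring))                 # 1: only edge (2, 0) is in conflict
```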

2002

The objective of this effort was to use software transformational synthesis technology to synthesize rescheduling algorithms that reschedule transportation assets using data from automatic identification technology and asset tracking hardware and software. Kestrel brought to bear its software synthesis technology, which allows for the generation of correct-by-construction, high-performance schedulers from formal specifications of the problems they are intended to solve. Savi Technology brought to bear its expertise in automatic identification technology and asset tracking hardware and software systems. In this effort, Kestrel and Savi co-developed a scheduler for Yard Management Systems based on data from the Army's Crane Ammunition Depot.

This paper reports on a simple, stochastic, scalable, peer-to-peer algorithm for approximately solving distributed constraint problems in soft real time. The performance of the algorithm is assessed using soft k-coloring of random graphs having known chromatic number and edge probability ranging from moderate to high.

Previous papers have reported on a simple, distributed, synchronous algorithm for approximately k-colouring large graphs in soft real time. In this paper, the effects of asynchronous execution and communication latency are investigated. The main conclusions are that strict synchrony is not required and that considerable communication latency can be tolerated. These results are important for practical applications of the algorithm involving large networks of low-performance hardware equipped with wireless communication.

This paper reports on an algorithm for anytime, stochastic, distributed constraint optimization that uses iterated, peer-to-peer interaction to try to achieve rapid, approximate solutions to large constraint problems in which the constraint variables are naturally distributed. Two examples are given - graph coloring and coordination of distributed sensors - together with experimental results on performance.

2001

This report details an approach to the real-time coordination of large networks of short-range sensors that communicate over short-range, low-bandwidth, high-latency radio channels.

Each sensor is limited in the distance over which it can scan and in the type of data that it can acquire, so nearby sensors must collaborate to acquire complementary data for accomplishing such tasks as multiple-target detection and tracking.

Operational limitations on sensors and real-time requirements on tasks restrict the number of tasks in which a given sensor can collaborate. The quality with which a given task is accomplished and the cost incurred are affected by which particular sensors collaborate in achieving the task. Consequently, a coordination mechanism is required to optimize collaboration.

The coordination mechanism reported is fully distributed - each sensor decides for itself which measurements it should take to achieve optimal collaboration with nearby sensors, based on what it knows about their intended measurements, and informs those sensors of its own intentions so that they can likewise optimize their own measurements.

In order to determine which measurements are optimal, a sensor must have knowledge about the targets that are the intended subjects of the measurements. To this end, each sensor exchanges measurement data with nearby sensors, and operates a data fusion process to maintain local target estimates.

The reported coordination mechanism is claimed to be scalable, low-cost, adaptive and robust.
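
As a hedged illustration of the local decision step described above (the utility model and names are assumptions, not the report's), a sensor might pick the target whose coverage gains most from an extra measurement, given the intentions its neighbours have announced.

```python
# Hypothetical sketch of a sensor's local decision: choose the target for which
# an additional measurement adds the most value, given neighbours' announced
# intentions; diminishing returns model coverage that is already adequate.

def marginal_value(target, announced):
    """Each additional measurement of an already-covered target is worth less."""
    already = announced.count(target)
    return 1.0 / (1 + already)

def choose_measurement(visible_targets, neighbour_intentions):
    announced = [t for intents in neighbour_intentions.values() for t in intents]
    return max(visible_targets, key=lambda t: marginal_value(t, announced))

visible_targets = ["t1", "t2"]
neighbour_intentions = {"sensorB": ["t1"], "sensorC": ["t1"]}
choice = choose_measurement(visible_targets, neighbour_intentions)
print(choice)    # "t2": target t1 is already well covered by the neighbours
# The sensor would then broadcast its own intention so that the neighbours can
# likewise re-optimize their measurements.
```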

This paper reports on a simple, decentralized, anytime, stochastic, soft graph-colouring algorithm. The algorithm is designed to quickly reduce the number of colour conflicts in large, sparse graphs in a scalable, robust, low-cost manner. The algorithm is experimentally evaluated in a framework motivated by its application to resource coordination in large, distributed networks.

Soft, real-time distributed graph coloring is presented as a simplification of the problem of distributed resource management in an environment where communication is expensive and subject to relatively high latency. The resources are subject to dynamic task sets that may produce critical or super-critical loading - the objective is to quickly compute reasonable schedules for the resources that accomplish a satisfactory fraction of the task set. A class of simple, decentralized, anytime, approximate colorers is presented, together with a framework for assessing performance, and representative results from performance experiments using a simulator.
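
The following Python sketch shows one plausible member of such a class of decentralized colorers; the activation probability and update rule are illustrative assumptions, not the papers' exact algorithm.

```python
# Hypothetical round of a simple, decentralized, anytime colorer: each node,
# with some activation probability, switches to the color least used by its
# neighbours in the previous round, reducing conflicts over time.

import random

def round_step(adjacency, colors, k, activation=0.5, rng=random):
    """One synchronous round: every node reads the colors of the previous round."""
    old = dict(colors)
    for node, neighbours in adjacency.items():
        if rng.random() > activation:
            continue                               # node stays idle this round
        counts = [0] * k
        for n in neighbours:
            counts[old[n]] += 1
        colors[node] = counts.index(min(counts))   # least-conflicting color
    return colors

adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
colors = {n: 0 for n in adjacency}                 # start fully conflicting
for _ in range(20):
    round_step(adjacency, colors, k=3)
print(colors)
```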

1998

The issue of correctness in the context of a certain style of program transformation is investigated. This style is characterised by the fully automated application of large numbers of simple transformation rules to a representation of a functional program (serving as a specification) to produce an equivalent efficient imperative program. The simplicity of the transformation rules ensures that the proofs of their correctness are straightforward.

A selection of transformations appropriate for use in a particular context is shown to preserve program meaning. The transformations convert array operations expressed as the application of a small number of general-purpose functions into applications of a large number of functions which are amenable to efficient implementation on an array processor.

1997

We present a set of program transformations which are applied automatically to convert abstract functional specifications of numerical algorithms into efficient implementations tailored to the AMT DAP array processor. The transformations are based upon a formal algebra of a functional array form, which provides a functional model of the array operations supported by the DAP programming language. The transformations are shown to be complete.
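
To give the flavour of such an algebra, here is a hypothetical Python sketch (not the paper's rules) of a general-purpose element-wise array operation together with a fusion law, map f . map g = map (f . g), of the kind a transformation system can apply to eliminate an intermediate array traversal.

```python
# Hypothetical fusion-law sketch over a simple 2-D "array" (list of rows).

def amap(f, array):
    """General-purpose element-wise operation on a 2-D array."""
    return [[f(x) for x in row] for row in array]

def fuse(f, g):
    """The fused form produced by the rewrite: one traversal instead of two."""
    return lambda x: f(g(x))

a = [[1, 2], [3, 4]]
double = lambda x: 2 * x
inc = lambda x: x + 1

# Left-hand side of the law: two traversals of the array.
lhs = amap(double, amap(inc, a))
# Right-hand side after the transformation: a single traversal.
rhs = amap(fuse(double, inc), a)
assert lhs == rhs == [[4, 6], [8, 10]]
```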

We present specifications and derivations of two example algorithms: an algorithm for computing eigensystems and an algorithm for solving systems of linear equations. For the former, we compare the execution performance of the implementation derived by transformation with the performance of an independent, manually constructed implementation; the efficiency of the derived implementation matches that of the manually constructed implementation.

1996

The automated application of program transformations is used to derive, from abstract functional specifications of numerical mathematical algorithms, highly efficient imperative implementations tailored for execution on sequential, vector and array processors. Emphasis is placed on transformations which tailor implementations to use special programming techniques optimized for sparse matrices. We demonstrate that, for two significant algorithms, the derived implementations attain execution performance superior to that of manual implementations.
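
As a hedged illustration of what tailoring for sparse matrices amounts to (the representations here are generic Python, not the derived Fortran), the same matrix-vector product can be written over a dense matrix and over a compressed sparse row (CSR) form that visits only the stored non-zeroes.

```python
# Hypothetical sketch of the dense vs. sparse specialization of one operation.

def matvec_dense(a, x):
    """Straightforward dense product: visits every entry, zero or not."""
    return [sum(a[i][j] * x[j] for j in range(len(x))) for i in range(len(a))]

def matvec_csr(values, col_index, row_ptr, x):
    """Sparse product over CSR data: visits only the non-zero entries."""
    y = []
    for i in range(len(row_ptr) - 1):
        y.append(sum(values[k] * x[col_index[k]]
                     for k in range(row_ptr[i], row_ptr[i + 1])))
    return y

a = [[2.0, 0.0, 0.0],
     [0.0, 0.0, 3.0],
     [0.0, 1.0, 0.0]]
# CSR form of the same matrix.
values, col_index, row_ptr = [2.0, 3.0, 1.0], [0, 2, 1], [0, 1, 2, 3]
x = [1.0, 2.0, 3.0]
assert matvec_dense(a, x) == matvec_csr(values, col_index, row_ptr, x)
```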

Function unfolding is a well-known program transformation technique: it is often used to reduce execution overheads incurred by an implementation's function-calling mechanism and to localise information available in global definitions (localization simplifies further optimizations such as constant propagation and the static evaluation of expressions).

Unfolding function definitions that exhibit either self- or mutual-recursion presents a problem for automated transformation systems: sophisticated control mechanisms may need to be incorporated into the unfolding process to ensure termination. Consequently, many automated transformation systems do not attempt to unfold recursive function definitions. Many of the optimizations that follow from unfolding non-recursive functions can still be performed using techniques such as function cloning and specialization but, because of the separation of function definitions and function uses, these techniques are more complex for an automated system to perform than are the corresponding techniques used for non-recursive functions.

In this paper, the use of the paradoxical combinator, Y, to unfold all function definitions, including recursive definitions, is discussed. Full unfolding with the Y combinator is simple for an automated system to perform (requiring only an exhaustive application of a straightforward substitution process). In particular, termination is assured without necessitating a separate control mechanism. After unfolding has been performed, many optimizations can be applied in the same manner as they are applied to unfolded, non-recursive definitions.
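
Purely as an illustration in a strict language (the paper works in a functional-language setting), the applicative-order fixed-point combinator Z plays the role of Y in Python; the sketch below expresses factorial as a non-recursive functional and ties the knot with the combinator, after which unfolding is ordinary substitution.

```python
# Hypothetical strict-language analogue of unfolding with a fixed-point
# combinator: Z is the applicative-order counterpart of Y.

Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# The recursive definition, expressed as a non-recursive functional of itself.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(fact_step)       # tying the knot with the combinator
assert factorial(5) == 120

# One manual unfolding step, of the kind a transformation system performs
# exhaustively: substitute the functional for one occurrence of the recursion.
factorial_unfolded = fact_step(Z(fact_step))
assert factorial_unfolded(5) == 120
```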

1995

We present a set of program transformations which are applied automatically to convert an abstract functional specification of numerical algorithms into efficient implementations tailored to the AMT DAP array processor. The transformations are based upon a formal algebra of a functional array form, which provides a functional model of the array operations supported by the DAP programming language. The transformations are shown to be complete.

We present specifications and derivations of two example algorithms: an algorithm for computing eigensystems and an algorithm for solving systems of linear equations. For the former, we compare the execution performance of the implementation derived by transformation with the performance of an independent, manually constructed implementation; the efficiency of the derived implementation matches that of the manually constructed implementation.

A number of program transformations are proved to preserve the meaning of programs. The transformations convert array operations expressed using a small number of general-purpose functions into applications of a large number of functions suited to efficient implementation on an array processor.

The automated application of program transformations is used to derive, from abstract functional specifications of numerical mathematical algorithms, highly efficient imperative implementations tailored for execution on sequential, vector and array processors. Emphasis is placed on additional transformations which tailor implementations to use special programming techniques optimized for sparse matrices.

1994

We show how efficient implementations can be derived from high-level functional specifications of numerical algorithms using automatic program transformation. We emphasize the automatic tailoring of implementations for manipulation of sparse data sets. Execution times are reported for a conjugate gradient algorithm.

It is generally a difficult task to construct efficient implementations of numerical mathematical algorithms for execution on high-performance computer systems. The difficulty arises from the need to express an implementation in a form that reflects the nature of the computer system, rather than a form that reflects the computations performed by the algorithm.

This thesis develops the method of program transformation to automatically derive efficient implementations of algorithms from high-level, machine-independent specifications.

The primary system considered is the AMT DAP array processor, but sequential and vector systems are also considered.

The transformational method is further extended to automatically tailor implementations to use sparse programming techniques.

The problem that we address in this paper is how to enable a programmer to obtain high performance from an implementation without having to write a low-level, machine-specific implementation. The approach that we use is to write high-level, machine-independent specifications of algorithms in a pure, functional programming language (a subset of the SML programming language).

Such functional specifications can be executed, and indeed often are executed in the initial stages of development to give confidence that an algorithm has been correctly described. However, we intend our functional specifications to be a clear statement of the algorithm that captures its essence without consideration for execution efficiency. Such a specification executes prohibitively slowly using available functional language compilers (if such a compiler is available for the architecture at all), much more slowly than could be expected of a hand-crafted, imperative implementation of the algorithm.

So, rather than compile a functional specification, we derive imperative implementations, normally expressed in some dialect of Fortran, from the specification. Many imperative implementations can be derived from a single specification; each implementation is tailored for the hardware to be used and the data to be manipulated.

The derivation of an implementation from a specification is performed by automatically applying program transformations.

We show how efficient implementations can be derived from high-level functional specifications of numerical algorithms using automatic program transformation. We emphasize the automatic tailoring of implementations for manipulation of sparse data sets. Execution times are reported for matrix-vector multiplication and a conjugate gradient algorithm.

1993

The problem that we address in this paper is how to enable a programmer to obtain high performance from an implementation without having to write a low-level, machine-specific implementation. The approach that we use is to write high-level, machine-independent specifications of algorithms in a pure, functional programming language.

We derive imperative implementations of the specification, normally expressed in some dialect of Fortran. Many implementations may be derived from a single specification; each implementation is tailored for the hardware to be used and the data that are to be manipulated.

The derivation of an implementation from a specification is performed by automatically applying program transformations. Each transformation is a simple rewrite rule that produces a change in a program; normally, a single application of a transformation produces a minor, local change; a derivation effects the implementation of a functional specification by applying many transformations, each applied many times. We have demonstrated that it is possible to derive highly efficient implementations from our functional specifications; indeed, these implementations are comparable in performance to implementations written by a programmer using a conventional approach.
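
A toy sketch of this transformational style, with invented rules and representation (not the actual system): each transformation is a small rewrite on an expression tree, and a derivation applies the rules exhaustively until no rule applies.

```python
# Hypothetical miniature of a transformation system: expressions are numbers,
# variable names, or tuples such as ("+", e1, e2) and ("*", e1, e2); each rule
# performs a minor, local change, and the driver applies them to a fixed point.

rules = [
    lambda e: e[1] if isinstance(e, tuple) and e[0] == "+" and e[2] == 0 else None,  # x + 0 -> x
    lambda e: 0    if isinstance(e, tuple) and e[0] == "*" and e[2] == 0 else None,  # x * 0 -> 0
    lambda e: e[1] if isinstance(e, tuple) and e[0] == "*" and e[2] == 1 else None,  # x * 1 -> x
]

def rewrite(expr):
    """Apply rules bottom-up, repeatedly, until no rule applies."""
    if isinstance(expr, tuple):
        expr = (expr[0],) + tuple(rewrite(sub) for sub in expr[1:])
    for rule in rules:
        result = rule(expr)
        if result is not None:
            return rewrite(result)
    return expr

# ((y * 1) + 0) * 1  simplifies to  y  by many small, local steps.
assert rewrite(("*", ("+", ("*", "y", 1), 0), 1)) == "y"
```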

The construction of reliable and efficient scientific software for execution on novel computer architectures is a demanding and error-prone task. By automatically generating efficient parallel implementations of functional specifications using program transformation, many of these problems may be overcome. The basic method is discussed, with particular reference to its extension to generate implementations for execution on distributed-memory MIMD machines.

1992

The general advantages of functional programming - naturalness of expression (in some problem areas), ease of proof, and clarity - are well known. In the field of numerical mathematical computation, however, these advantages are normally outweighed by functional programming's main disadvantage - inefficient execution. In this paper, we show how a functional style can be used for the elegant specification of numerical algorithms while still obtaining highly efficient implementations through the application of program transformations.

We have been investigating the automated derivation of efficient programs from functional specifications for a range of numerical algorithms. In this paper we describe recent work on deriving implementations tailored for execution on the AMT DAP computer of an eigenvalue algorithm.

1990