Publications of Dr Stephen Fitzpatrick
2006
- An Examination of Criticality-Sensitive Approaches to Coordination
- AAAI 2006 Spring Symposium on Distributed Plan and Schedule Management, 27-29 March 2006, Stanford, California, U.S.A.
- PDF paper
In this work, we address the problem of coordinating the distributed execution of plans and schedules by multiple agents subject to a number of different execution uncertainties. The coordination of multi-agent teams in uncertain, dynamic domains is a challenging problem requiring the fusion of techniques from many disciplines. We describe an approach based on the dynamic and selective use of a family of different problem-solving strategies that combine stochastic state estimation with repair-based and heuristic-guided planning and scheduling techniques. This approach is implemented as a cognitive problem-solving architecture that combines (i) a deliberative scheduler, which performs partially-centralized solution repair, (ii) an opportunistic scheduler, which locally optimizes resource utilization for plan enhancement, and (iii) a downgrader, which proactively guides the execution into regions of higher likelihood of success. This paper characterizes the complexity of the problem through examples and experiments, and discusses the advantages and effectiveness of the implemented solution.
- CSC: Criticality-Sensitive Coordination
- Demonstration at Fifth International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2006), 8-12 May 2006, Hakodate, Japan
- PDF paper | Movie
Our Criticality-Sensitive Coordination (CSC) agents are designed to enhance the performance of a human team working together in uncertain and dynamic settings by monitoring and adapting their plans as dictated by the evolution of the environment. Such situations model military scenarios such as coordinated joint operations, or enterprise settings such as multiple-project management. Among the many challenges in these situations are the large space of possible states due to uncertainty, the distributed, partial knowledge of the current state and plan among the agents, and the need to react in a timely manner to events that may not be in the original model. In fact, reaction alone is often insufficient: in environments where success depends on completing sequences of coupled actions, one must anticipate future difficulties and enable contingencies to alleviate potential hazards.
2005
- Plan Execution and Coordination
- Plan Execution: A Reality Check, Workshop at the Fifteenth International Conference on Automated Planning & Scheduling (ICAPS 2005), 5-10 June 2005, Monterey, California, U.S.A.
- PDF paper
We investigate the problem of keeping the plans of multiple agents synchronized during execution. We assume that agents have only a partial view of the overall plan: they know the tasks they must perform, and the tasks of other agents with whom they have direct dependencies. Initially, agents are given a schedule of tasks to perform, together with a collection of contingency plans that they can invoke if execution deviates from the plan. During execution, agents monitor the status of their tasks, adjusting their local schedules as necessary and informing dependent agents of the changes. When agents determine that their schedule is broken, or that a contingency schedule may be better, they coordinate plan changes with other agents. We present a "dynamic partial centralization" approach to coordination: when an agent detects a problem (a task delay, or an inability to perform a task), it dynamically forms a cluster of the critically affected agents (a subset of all potentially affected agents). The cluster elects a leader, who retrieves all task and contingency plan information from the cluster members and computes a solution appropriate to the situation.
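The "dynamic partial centralization" scheme can be sketched as follows. This is an illustrative sketch, not the paper's protocol: the cluster-formation criterion (detector plus direct dependents), the lowest-id election rule, and the uniform-shift repair are all assumptions made for the example.

```python
class Agent:
    def __init__(self, agent_id, tasks, dependents):
        self.id = agent_id
        self.tasks = tasks            # local schedule: task name -> start time
        self.dependents = dependents  # agents with direct task dependencies

def form_cluster(detector):
    """On detecting a broken schedule, gather only the critically
    affected agents: here, the detector plus its direct dependents."""
    return sorted({detector.id} | {a.id for a in detector.dependents})

def elect_leader(cluster):
    """Illustrative election rule: the lowest agent id wins."""
    return min(cluster)

def repair(cluster, agents_by_id, delay):
    """The leader pulls the members' schedules and computes a repair;
    here, trivially, every clustered task is shifted by the delay."""
    leader = elect_leader(cluster)
    merged = {}
    for aid in cluster:
        for task, start in agents_by_id[aid].tasks.items():
            merged[task] = start + delay
    return leader, merged
```

For example, if agent 0 (with dependent agent 1) detects a delay of 3, the cluster is `[0, 1]`, agent 0 leads, and both members' tasks are shifted by 3; agent 2, having no dependency on agent 0, is never contacted.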
2004
- The Distributed Construction of a Global Coordinate System in a Network of Static Computational Nodes from Inter-Node Distances
- Technical Report KES.U.04.04, March 2004, Kestrel Institute, Palo Alto, California
- PDF paper | Postscript paper
2003
- Distributed Coordination through Anarchic Optimization
- Chapter in Distributed Sensor Networks: A Multiagent Perspective, pp. 257-295; Victor Lesser, Charles L. Ortiz, Jr., & Milind Tambe (Eds.); Kluwer Academic Publishers, 2003; ISBN: 1-4020-7499-9
In this chapter, a peer-to-peer algorithm is described for approximately solving distributed, real-time, constraint optimization problems. The ANTS challenge problem is formulated as a distributed constraint optimization problem; an approximation version of the classical problem of graph k-coloring is formulated as a distributed constraint optimization problem to enable simple experimental assessment of the algorithm's performance.
2002
- Experiments on Dense Graphs with a Stochastic, Peer-to-Peer Colorer
- Probabilistic Approaches in Search, Workshop at Eighteenth National Conference on Artificial Intelligence (AAAI 2002), 28 July 2002, Edmonton, Alberta, Canada
- Carla Gomes & Toby Walsh (Eds.), AAAI Press, Technical Report WS-02-14, ISBN 1-57735-167-3, pp. 24-28
- PDF paper | Postscript paper
- PDF slides | HTML slides
This paper reports on a simple, stochastic, scalable, peer-to-peer algorithm for approximately solving distributed constraint problems in soft real time. The performance of the algorithm is assessed using soft k-coloring of random graphs having known chromatic number and edge probability ranging from moderate to high.
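The flavour of this class of algorithm can be sketched as follows. This is an illustrative sketch under assumptions, not the paper's exact procedure: in synchronous rounds, each node in conflict recolours with a fixed probability p, greedily choosing the colour least used among its neighbours' colours from the previous round.

```python
import random

def conflicts(graph, colouring):
    """Count edges whose endpoints share a colour."""
    return sum(colouring[u] == colouring[v] for u, v in graph)

def soft_colour(graph, nodes, k, p=0.3, rounds=200, seed=0):
    """Synchronous, stochastic, peer-to-peer soft k-colouring (sketch).
    Each round, every node in conflict recolours with probability p,
    based only on its neighbours' colours from the previous round."""
    rng = random.Random(seed)
    nbrs = {v: set() for v in nodes}
    for u, v in graph:
        nbrs[u].add(v)
        nbrs[v].add(u)
    col = {v: rng.randrange(k) for v in nodes}
    best = dict(col)  # anytime behaviour: remember the best colouring seen
    for _ in range(rounds):
        snapshot = dict(col)  # all decisions use last round's colours
        for v in nodes:
            in_conflict = any(snapshot[v] == snapshot[u] for u in nbrs[v])
            if in_conflict and rng.random() < p:
                # pick the colour least used among this node's neighbours
                col[v] = min(range(k),
                             key=lambda c: sum(snapshot[u] == c for u in nbrs[v]))
        if conflicts(graph, col) < conflicts(graph, best):
            best = dict(col)
    return best
```

The probabilistic recolouring breaks the symmetry that would otherwise make neighbouring nodes oscillate in lock-step; tracking the best colouring seen gives the anytime property of a usable (if imperfect) answer at every round.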
- Asynchronous Execution and Communication Latency in Distributed Constraint Optimization
- Proceedings of The Third International Workshop on Distributed Constraint Reasoning, First International Joint Conference on Autonomous Agents & Multiagent Systems (AAMAS 2002), 16 July 2002, Bologna, Italy
- Makoto Yokoo (Ed.), pp. 80-85
- PDF paper | Postscript paper
- PDF slides | HTML slides
Previous papers have reported on a simple, distributed, synchronous algorithm for approximately k-colouring large graphs in soft real time. In this paper, the effects of asynchronous execution and communication latency are investigated. The main conclusions are that strict synchrony is not required and that considerable communication latency can be tolerated. These results are important for practical applications of the algorithm involving large networks of low-performance hardware equipped with wireless communication.
- Scalable, Anytime Constraint Optimization through Iterated, Peer-to-Peer Interaction in Sparsely-Connected Networks
- Proceedings of The Sixth Biennial World Conference on Integrated Design & Process Technology (IDPT 2002), 23-28 June 2002, Pasadena, California, U.S.A.
- H. Ehrig, B.J. Kramer & A. Ertas (Eds.), Society for Design and Process Science, ISSN No. 1090-9389
- PDF paper | Postscript paper
- PDF slides | HTML slides
This paper reports on an algorithm for anytime, stochastic, distributed constraint optimization that uses iterated, peer-to-peer interaction to try to achieve rapid, approximate solutions to large constraint problems in which the constraint variables are naturally distributed. Two examples are given - graph coloring and coordination of distributed sensors - together with experimental results on performance.
2001
- Peer-to-Peer Coordination of Autonomous Sensors in High-Latency Networks using Distributed Scheduling and Data Fusion
- Technical Report KES.U.01.09, December 2001, Kestrel Institute, Palo Alto, California
- PDF paper | Postscript paper
This report details an approach to the real-time coordination of large networks of short-range sensors that communicate over short-range, low-bandwidth, high-latency radio channels.
Each sensor is limited in the distance over which it can scan and in the type of data that it can acquire, so nearby sensors must collaborate to acquire complementary data for accomplishing such tasks as multiple-target detection and tracking.
Operational limitations on sensors and real-time requirements on tasks restrict the number of tasks in which a given sensor can collaborate. The quality with which a given task is accomplished and the cost incurred are affected by which particular sensors collaborate in achieving the task. Consequently, a coordination mechanism is required to optimize collaboration.
The coordination mechanism reported is fully distributed - each sensor decides for itself which measurements it should take to achieve optimal collaboration with nearby sensors, based on what it knows about their intended measurements, and informs those sensors of its own intentions so that they can likewise optimize their own measurements.
In order to determine which measurements are optimal, a sensor must have knowledge about the targets that are the intended subjects of the measurements. To this end, each sensor exchanges measurement data with nearby sensors, and operates a data fusion process to maintain local target estimates.
The reported coordination mechanism is claimed to be scalable, low-cost, adaptive and robust.
- An Experimental Assessment of a Stochastic, Anytime, Decentralized, Soft Colourer for Sparse Graphs
- 1st Symposium on Stochastic Algorithms: Foundations and Applications, 13-14 December 2001, Berlin, Germany
- Lecture Notes in Computer Science 2264, Kathleen Steinhoefel (Ed.), Springer-Verlag, ISBN 3-540-43025-3, pp. 49-64
- PDF paper | Postscript paper
- PDF slides | HTML slides
This paper reports on a simple, decentralized, anytime, stochastic, soft graph-colouring algorithm. The algorithm is designed to quickly reduce the number of colour conflicts in large, sparse graphs in a scalable, robust, low-cost manner. The algorithm is experimentally evaluated in a framework motivated by its application to resource coordination in large, distributed networks.
- Soft, Real-Time, Distributed Graph Coloring using Decentralized, Synchronous, Stochastic, Iterative-Repair, Anytime Algorithms - A Framework
- Technical Report KES.U.01.05, May 2001, Kestrel Institute, Palo Alto, California
- PDF paper | Postscript paper | HTML paper
Soft, real-time distributed graph coloring is presented as a simplification of the problem of distributed resource management in an environment where communication is expensive and subject to relatively high latency. The resources are subject to dynamic task sets that may produce critical or super-critical loading - the objective is to quickly compute reasonable schedules for the resources that accomplish a satisfactory fraction of the task set. A class of simple, decentralized, anytime, approximate colorers is presented, together with a framework for assessing performance, and representative results from performance experiments using a simulator.
1998
- A Case Study on Proving Transformations Correct: Data-Parallel Conversion
- 2nd Irish Workshop on Formal Methods, 2-3 July 1998, Cork, Ireland;
- electronic Workshops in Computing, Sharon Flynn & Andrew Butterfield (Eds.), British Computer Society, ISBN 1-902505-12-3
- http://www.ewic.org.uk/ewic/workshop/view.cfm/IWFM-98
- PDF paper | Postscript paper
- PDF slides | Postscript slides
The issue of correctness in the context of a certain style of program transformation is investigated. This style is characterised by the fully automated application of large numbers of simple transformation rules to a representation of a functional program (serving as a specification) to produce an equivalent efficient imperative program. The simplicity of the transformation rules ensures that the proofs of their correctness are straightforward.
A selection of transformations appropriate for use in a particular context is shown to preserve program meaning. The transformations convert array operations expressed as the application of a small number of general-purpose functions into applications of a large number of functions which are amenable to efficient implementation on an array processor.
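The flavour of such simple, exhaustively applied rules can be shown with a toy rewriter. This is an assumption-laden sketch, not the paper's rule set: terms are nested tuples, and a single rule fusing nested maps (map f (map g xs) becomes map (compose f g) xs) is applied bottom-up until no redex remains.

```python
def rewrite_map_fusion(term):
    """Toy transformation engine applying one simple rule exhaustively:
        ('map', f, ('map', g, xs))  ==>  ('map', ('compose', f, g), xs)
    Terms are nested tuples; anything else is treated as a leaf."""
    if not isinstance(term, tuple):
        return term
    # Rewrite subterms first, so inner redexes are eliminated bottom-up.
    term = tuple(rewrite_map_fusion(t) for t in term)
    if (len(term) == 3 and term[0] == 'map'
            and isinstance(term[2], tuple) and len(term[2]) == 3
            and term[2][0] == 'map'):
        f, (_, g, xs) = term[1], term[2]
        # Re-rewrite, in case fusing created a new redex higher up.
        return rewrite_map_fusion(('map', ('compose', f, g), xs))
    return term
```

Because each rule application is a small, local, meaning-preserving change, the correctness of the whole derivation reduces to the correctness of the individual rules, which is the point the paper makes.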
1997
- The Automated Transformation of Abstract Specifications of Numerical Algorithms into Efficient Array Processor Implementations
- Science of Computer Programming, Vol. 28, No. 1, January 1997, Elsevier Science, pp. 1-41
- PDF paper | Postscript paper
We present a set of program transformations which are applied automatically to convert abstract functional specifications of numerical algorithms into efficient implementations tailored to the AMT DAP array processor. The transformations are based upon a formal algebra of a functional array form, which provides a functional model of the array operations supported by the DAP programming language. The transformations are shown to be complete.
We present specifications and derivations of two example algorithms: an algorithm for computing eigensystems and an algorithm for solving systems of linear equations. For the former, we compare the execution performance of the implementation derived by transformation with the performance of an independent, manually constructed implementation; the efficiency of the derived implementation matches that of the manually constructed implementation.
1996
- The Tailoring of Abstract Functional Specifications of Numerical Algorithms for Sparse Data Structures through Automated Program Derivation and Transformation
- The Computer Journal, Vol. 39, No. 2, 1996, The British Computer Society
- PDF paper | Postscript paper
The automated application of program transformations is used to derive, from abstract functional specifications of numerical mathematical algorithms, highly efficient imperative implementations tailored for execution on sequential, vector and array processors. Emphasis is placed on transformations which tailor implementations to use special programming techniques optimized for sparse matrices. We demonstrate that, for two significant algorithms, the derived implementations attain execution performance superior to that of manual implementations.
- Unfolding Recursive Function Definitions Using the Paradoxical Combinator
- Durham Transformation Workshop, 1-2 April 1996
- PDF paper | Postscript paper
Function unfolding is a well-known program transformation technique: it is often used to reduce execution overheads incurred by an implementation's function-calling mechanism and to localise information available in global definitions (localisation simplifies further optimizations such as constant propagation and the static evaluation of expressions).
Unfolding function definitions that exhibit either self- or mutual-recursion presents a problem for automated transformation systems: sophisticated control mechanisms may need to be incorporated into the unfolding process to ensure termination. Consequently, many automated transformation systems do not attempt to unfold recursive function definitions. Many of the optimizations that follow from unfolding non-recursive functions can still be performed using techniques such as function cloning and specialization but, because of the separation of function definitions and function uses, these techniques are more complex for an automated system to perform than are the corresponding techniques used for non-recursive functions.
In this paper, the use of the paradoxical combinator, Y, to unfold all function definitions, including recursive definitions, is discussed. Full unfolding with the Y combinator is simple for an automated system to perform (requiring only an exhaustive application of a straightforward substitution process). In particular, termination is assured without necessitating a separate control mechanism. After unfolding has been performed, many optimizations can be applied in the same manner as they are applied to unfolded, non-recursive definitions.
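In an eagerly evaluated language the paradoxical combinator appears in its eta-expanded form (often called the Z combinator). The following sketch, in Python rather than the functional notation of the paper, shows the essential point: a recursive function is expressed as a non-recursive functional of its own unfolding, so a transformer can substitute definitions exhaustively without looping.

```python
# Z combinator: the strict-language form of the paradoxical combinator Y.
#   Z = lambda f. (lambda x. f (lambda v. x x v)) (lambda x. f (lambda v. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial written as a NON-recursive functional of its own unfolding:
# fact_step never names itself, so exhaustive substitution terminates.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(fact_step)
```

Here `factorial(5)` evaluates to 120; the eta-expansion `lambda v: x(x)(v)` delays the self-application until a call actually occurs, which is what makes the construction terminate under eager evaluation.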
1995
- An Algebra for Deriving Efficient Implementations for an Array Processor Parallel Computer from Functional Specifications
- Technical Report 1995/Jun-SF.AS.MC.JMB, June 1995, Department of Computer Science, The Queen's University of Belfast, Northern Ireland
- PDF paper | Postscript paper
We present a set of program transformations which are applied automatically to convert an abstract functional specification of numerical algorithms into efficient implementations tailored to the AMT DAP array processor. The transformations are based upon a formal algebra of a functional array form, which provides a functional model of the array operations supported by the DAP programming language. The transformations are shown to be complete.
We present specifications and derivations of two example algorithms: an algorithm for computing eigensystems and an algorithm for solving systems of linear equations. For the former, we compare the execution performance of the implementation derived by transformation with the performance of an independent, manually constructed implementation; the efficiency of the derived implementation matches that of the manually constructed implementation.
- Array Form Transformations: Proofs of Correctness
- Technical Report 1995/May-SF.MC.PLK, May 1995, Department of Computer Science, The Queen's University of Belfast, Northern Ireland
- PDF paper | Postscript paper
A number of program transformations are proved to preserve the meaning of programs. The transformations convert array operations expressed using a small number of general-purpose functions into applications of a large number of functions suited to efficient implementation on an array processor.
- Correctness Issues in Transformational Refinement
- Abstracts of British Colloquium on Theoretical Computer Science 11, Swansea, England, April 1995;
- Bulletin of the European Association for Theoretical Computer Science, No. 58, February 1996, ISSN 0252-9742, p. 224
- PDF Abstract | Postscript Abstract
- The Automated Derivation of Sparse Implementations of Numerical Algorithms through Program Transformation
- Technical Report 1995/Apr-SF.MC.PLK, April 1995, Department of Computer Science, The Queen's University of Belfast, Northern Ireland
- PDF paper | Postscript paper
The automated application of program transformations is used to derive, from abstract functional specifications of numerical mathematical algorithms, highly efficient imperative implementations tailored for execution on sequential, vector and array processors. Emphasis is placed on additional transformations which tailor implementations to use special programming techniques optimized for sparse matrices.
1994
- Deriving Efficient Parallel Implementations of Algorithms Operating on General Sparse Matrices using Automatic Program Transformation
- Parallel Processing: CONPAR 94-VAPP VI, September 1994, Linz, Austria;
- Lecture Notes in Computer Science 854, Bruno Buchberger & Jens Volkert (Eds.), Springer-Verlag, pp. 148-159
- PDF paper | Postscript paper
- PDF slides | Postscript slides
We show how efficient implementations can be derived from high-level functional specifications of numerical algorithms using automatic program transformation. We emphasize the automatic tailoring of implementations for manipulation of sparse data sets. Execution times are reported for a conjugate gradient algorithm.
- The Specification of Array-Based Algorithms and the Automated Derivation of Parallel Implementations through Program Transformation
- Ph.D. Thesis, Faculty of Science, The Queen's University of Belfast, Northern Ireland, September 1994
- PDF paper | Postscript paper
It is generally a difficult task to construct efficient implementations of numerical mathematical algorithms for execution on high-performance computer systems. The difficulty arises from the need to express an implementation in a form that reflects the nature of the computer system, rather than a form that reflects the computations performed by the algorithm.
This thesis develops the method of program transformation to automatically derive efficient implementations of algorithms from high-level, machine-independent specifications.
The primary system considered is the AMT DAP array processor, but sequential and vector systems are also considered.
The transformational method is further extended to automatically tailor implementations to use sparse programming techniques.
- A Family of Data-Parallel Derivations
- High Performance Computing and Networking, April 18-20 1994, Munich, Germany
- Volume II, Lecture Notes in Computer Science 797, Wolfgang Gentzsch & Uwe Harms (Eds.), Springer-Verlag, pp. 457-462
- PDF paper | Postscript paper
- PDF slides | Postscript slides
The problem that we address in this paper is how to enable a programmer to obtain high performance from an implementation without having to write a low-level, machine-specific implementation. The approach that we use is to write high-level, machine-independent specifications of algorithms in a pure, functional programming language (a subset of the SML programming language).
Such functional specifications can be executed, and indeed often are executed in the initial stages of development to give confidence that an algorithm has been correctly described. However, we intend our functional specifications to be a clear statement of the algorithm that captures its essence without consideration for execution efficiency. Such a specification executes prohibitively slowly using available functional language compilers (if such a compiler is available for the architecture at all), much more slowly than could be expected of a hand-crafted, imperative implementation of the algorithm.
So, rather than compile a functional specification, we derive imperative implementations, normally expressed in some dialect of Fortran, from the specification. Many imperative implementations can be derived from a single specification; each implementation is tailored for the hardware to be used and the data to be manipulated.
The derivation of an implementation from a specification is performed by automatically applying program transformations.
- Program Transformations for Automatically Tailoring Algorithms to Sparse Matrices
- Technical Report 1994/Feb-SF.TJH.JMB, February 1994, Department of Computer Science, The Queen's University of Belfast, Northern Ireland
- PDF paper | Postscript paper
We show how efficient implementations can be derived from high-level functional specifications of numerical algorithms using automatic program transformation. We emphasize the automatic tailoring of implementations for manipulation of sparse data sets. Execution times are reported for matrix-vector multiplication and a conjugate gradient algorithm.
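The kind of sparse tailoring described can be illustrated as follows. This is an illustrative sketch, not the paper's derivation: a dense, specification-level matrix-vector product alongside the compressed-sparse-row (CSR) implementation a derivation might target, which touches only the non-zero entries.

```python
def matvec_dense(A, x):
    """Specification-level matrix-vector product over a dense matrix."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def to_csr(A):
    """Compressed sparse row form: non-zero values, their column
    indices, and per-row pointers into those arrays."""
    vals, cols, rowptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0:
                vals.append(a)
                cols.append(j)
        rowptr.append(len(vals))
    return vals, cols, rowptr

def matvec_csr(vals, cols, rowptr, x):
    """Tailored implementation: skips zero entries entirely."""
    y = []
    for i in range(len(rowptr) - 1):
        y.append(sum(vals[k] * x[cols[k]]
                     for k in range(rowptr[i], rowptr[i + 1])))
    return y
```

The two functions compute the same result; the point of the transformational approach is that the programmer writes only the dense specification, and the sparse data layout and the loop structure over it are introduced mechanically.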
1993
- A Family of Intermediate Forms
- Technical Report 1993/Nov-MC.SF.TJH.PLK.JMB, November 1993, Department of Computer Science, The Queen's University of Belfast, Northern Ireland
- PDF paper | Postscript paper
The problem that we address in this paper is how to enable a programmer to obtain high performance from an implementation without having to write a low-level, machine-specific implementation. The approach that we use is to write high-level, machine-independent specifications of algorithms in a pure, functional programming language.
We derive imperative implementations of the specification, normally expressed in some dialect of Fortran. Many implementations may be derived from a single specification; each implementation is tailored for the hardware to be used and the data that are to be manipulated.
The derivation of an implementation from a specification is performed by automatically applying program transformations. Each transformation is a simple rewrite rule that produces a change in a program; normally, a single application of a transformation produces a minor, local change; a derivation effects the implementation of a functional specification by applying many transformations, each applied many times. We have demonstrated that it is possible to derive highly efficient implementations from our functional specifications; indeed, these implementations are comparable in performance to implementations written by a programmer using a conventional approach.
- Deriving Distributed MIMD Implementations from Functional Specifications
- Advances in Parallel Computing 9, 1993
- G.R. Joubert, D. Trystram, F.J. Peters & D.J. Evans (Eds.), North-Holland, pp. 353-358
- PDF paper | Postscript paper
The construction of reliable and efficient scientific software for execution on novel computer architectures is a demanding and error-prone task. By automatically generating efficient parallel implementations from functional specifications using program transformation, many of these problems may be overcome. The basic method is discussed, with particular reference to its extension to generate implementations for execution on distributed-memory MIMD machines.
1992
- The Construction of Numerical Mathematical Software for the AMT DAP by Program Transformation
- CONPAR 92-VAPP V, September 1992, Lyon, France
- Lecture Notes in Computer Science 634, L. Bouge, M. Cosnard, Y. Robert & D. Trystram (Eds.), Springer-Verlag, pp. 761-767
- PDF paper | Postscript paper
- PDF slides | Postscript slides
The general advantages of functional programming - naturalness of expression (in some problem areas), ease of proof, and clarity - are well known. In the field of numerical mathematical computation, however, these advantages are normally outweighed by functional programming's main disadvantage - inefficient execution. In this paper, we show how a functional style can be used for the elegant specification of numerical algorithms while still obtaining highly efficient implementations through the application of program transformations.
- Deriving DAP Implementations of Numerical Mathematical Software through Automated Program Transformation
- Technical Report 1992/Jul-JMB.MC.SF.TJH, July 1992, Department of Computer Science, The Queen's University of Belfast, Northern Ireland
- PDF paper | Postscript paper
We have been investigating the automated derivation of efficient programs from functional specifications for a range of numerical algorithms. In this paper we describe recent work on deriving implementations of an eigenvalue algorithm tailored for execution on the AMT DAP computer.
1990
- An Environment for Transformational Programming
- M.Sc. Thesis, Faculty of Science, The Queen's University of Belfast, Northern Ireland, September 1990