This workshop was endorsed by SICSA, the Scottish Informatics and Computer Science Alliance. In particular, it was financially supported by the SICSA themes on 'Modelling and Abstraction' and 'Next Generation Internet'.
This workshop was also timed to coincide with the workshop on Mathematics of Networks that took place on 18th June 2010 in St. Andrews.
It was expected that this workshop would mainly be attended by contributors to the SICSA themes of 'Modelling and Abstraction' and 'Next Generation Internet'. However, the workshop was by no means closed: participation from other SICSA themes and from outside SICSA was welcome. Participants were expected to be permanent staff, RAs or PhD students with an interest in one or both of the following areas:
As this was a workshop, no formal proceedings were published. The workshop consisted of invited and other talks from speakers in the field. Proposals were solicited for talks of around 20 minutes. With speaker agreement, PDFs of the talks were made available digitally to delegates. Submissions were not refereed, but they were vetted by the organisers to make sure that they fitted the scope of the workshop.
Speakers were expected to help bridge the gap between modelling/analysis and networked/distributed systems rather than just describe previous work. The ideal talk described a formal technique with thoughts about its application to networking, or a networking problem with thoughts about what would benefit from a formal technique.
The workshop was organised by Computing Science and Mathematics, University of Stirling:
The workshop took place in Lecture Theatre B4, Cottrell Building, at the University of Stirling. The online directions provided information about getting to the campus. B4 can be found on the interior map.
Speaker slides in PDF are linked from the title of each talk. (Recent versions of Adobe Reader allow multiple slides to be printed per page.)
Served in Room 2B74.
Prof. Ken Turner and Dr. Marwan Fayed, Computing Science and Mathematics, University of Stirling
Prof. Rob Hierons, Brunel University
Rob Hierons is Professor of Computing and a founder of the ATeSST research group at Brunel. His research interests include a variety of formally-based approaches to testing distributed and other systems based on specifications, models and mutations.
Formal approaches to testing have received significant attention over many years and are at the heart of several test automation techniques. In addition to automation, the formalisation of testing allows one to reason about what the outcome of testing says about the system under test. The talk will start by reviewing the main approaches to formalising testing in the context of distributed systems, and some of the resultant conformance relations and test generation algorithms. It will then consider distributed testing, where we test a system that has distributed interfaces by placing a separate tester at each interface. If the local testers cannot interact with one another and there is no global clock, then each local tester observes only the sequence of inputs and outputs at its interface (a local trace). This makes it hard, and sometimes impossible, to reconstruct the global trace that occurred. Importantly, if users are also affected by this restriction (only local traces are observed), then we should recognise this in the conformance relation used. This talk will explore conformance relations for distributed testing and some interesting properties of these relations.
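As a minimal illustration of the observability problem described in this abstract (our own sketch, not material from the talk), the following Python fragment shows two distinct global traces whose local projections coincide, so that separated testers without a global clock cannot tell them apart:

```python
def local_trace(global_trace, port):
    """Project a global trace onto the events observed at one port."""
    return [event for p, event in global_trace if p == port]

# Two hypothetical global traces over ports 1 and 2 ("?": input, "!": output).
t1 = [(1, "?a"), (2, "!x"), (1, "!y")]
t2 = [(2, "!x"), (1, "?a"), (1, "!y")]

# The global traces differ, yet each local tester sees an identical
# local trace, so distributed testing cannot distinguish them.
assert t1 != t2
assert all(local_trace(t1, p) == local_trace(t2, p) for p in (1, 2))
```

This is exactly why conformance relations for distributed testing must be defined over sets of local traces rather than over global traces.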
Dr. Tim Griffin, University of Cambridge
Tim Griffin is a Senior Lecturer and Fellow at King's College, University of Cambridge. His research interests include network protocol design and analysis, with a focus on Internet routing protocols.
The Border Gateway Protocol (BGP) is the keystone among Internet routing protocols as it maintains connectivity in the entire global Internet. BGP has evolved into a rather mysterious beast. Theoretical work over the last ten years has made it clear that BGP represents something novel in the context of routing protocols: BGP does not compute globally optimal paths, but only locally optimal paths. The talk will explain how this exotic type of routing can be understood in an algebraic setting. The Stratified Shortest-Paths Problem (SSPP) is presented as a very simple example of this type of algebra. The SSPP is meant to capture the essence of a BGP-like path problem without BGP's many operational complexities.
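To make the algebraic view concrete, here is a small Python sketch of a stratified shortest-paths-style computation. This is our own illustrative reading, not Griffin's exact formulation: path weights are assumed to be pairs (stratum, length), extension by an arc lifts a path to the higher stratum while lengths add, and paths are compared lexicographically, preferring lower strata:

```python
INF = (float("inf"), float("inf"))   # the "no path" weight

def ext(arc, path):
    """Extend a path weight by an arc: the result is lifted to the higher
    stratum, and lengths add (an assumed reading of the SSPP algebra)."""
    (s1, d1), (s2, d2) = arc, path
    return (max(s1, s2), d1 + d2)

def routes_to(adj, dest, rounds=10):
    """Bellman-Ford-style fixed-point iteration: each node repeatedly
    summarises the extensions offered by its neighbours, where min on
    pairs prefers a lower stratum and breaks ties by length."""
    w = {n: INF for n in adj}
    w[dest] = (0, 0)
    for _ in range(rounds):
        for u in adj:
            if u != dest:
                w[u] = min((ext(a, w[v]) for v, a in adj[u].items()),
                           default=INF)
    return w

# Toy network: arcs labelled (stratum, length); node 0 is the destination.
adj = {0: {}, 1: {0: (0, 1), 2: (0, 1)}, 2: {0: (1, 1), 1: (0, 1)}}
w = routes_to(adj, 0)
# Node 2 prefers the longer stratum-0 route via node 1 over its direct
# stratum-1 arc: a lexicographic preference, not globally shortest paths.
assert w[2] == (0, 2)
```

The point of the algebraic setting is that when such an algebra fails to be distributive, as BGP-like policies generally do, the iteration can only be guaranteed to reach locally optimal paths.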
Dr. Mario Kolberg and Jamie Furness, University of Stirling
Structured Peer-to-Peer (P2P) overlay networks are becoming increasingly popular, and extensive research activities are taking place in this domain. However, new algorithms and approaches are very hard to validate. This is largely due to the very dynamic nature of such systems, which involve a very large number of nodes (often hundreds of thousands or even millions). On top of this, nodes join and leave the network throughout its lifetime. Thus the exact state of such a network at a particular point in time is very hard to trace.
Implementations using network simulators such as ns-2 and OMNeT++ are commonly used, but are hampered by the complexity of the implementations. Complexity is an issue in two ways: firstly, the coding is often very detailed; secondly, the run-time complexity is such that simulation runs can take a long time even on powerful machines.
This talk will explore the complexity of such simulations using some recent examples by the authors. The talk is expected to stimulate a discussion on how modelling approaches can improve this situation.
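To give a flavour of what such simulations involve, here is a deliberately stripped-down Chord-style lookup sketch (a hypothetical illustration, not the authors' simulator). Even at this level of abstraction, measuring average hop counts means simulating many greedy routing walks, and a faithful packet-level model at millions of churning nodes is vastly more expensive:

```python
import random
from bisect import bisect_left

M = 2 ** 16                     # identifier-space size (illustrative)
random.seed(42)
nodes = sorted(random.sample(range(M), 1000))

def successor(x):
    """First node at or clockwise after identifier x on the ring."""
    i = bisect_left(nodes, x % M)
    return nodes[i] if i < len(nodes) else nodes[0]

def lookup_hops(n, key):
    """Greedy Chord-style routing: forward to the furthest finger that
    still precedes the key, counting hops until the key's responsible
    node is one step away."""
    hops = 0
    while True:
        succ = successor((n + 1) % M)
        if (key - n - 1) % M <= (succ - n - 1) % M:   # key in (n, succ]
            return hops
        nxt = succ
        for i in range(M.bit_length() - 1):           # fingers at n + 2^i
            f = successor((n + 2 ** i) % M)
            if (f - n - 1) % M < (key - n - 1) % M:   # f in (n, key)
                nxt = f
        n, hops = nxt, hops + 1

avg = sum(lookup_hops(random.choice(nodes), random.randrange(M))
          for _ in range(200)) / 200
# Hop counts grow like O(log N) in the number of nodes.
```

A more abstract model of the same protocol, of the kind the talk invites discussion on, could establish the O(log N) property without simulating individual lookups at all.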
John Tang, University of Cambridge
The analysis of social and technological networks has attracted a lot of attention as social networking applications and mobile sensing devices have given us a wealth of real data. Classic studies analysed static or aggregated networks, i.e. networks that do not change over time or that are built by aggregating information over a certain period. Given the rapidly growing collections of measurements from very large, real network traces, researchers are starting to realise that connections inherently vary over time and exhibit more dimensionality than static analysis can capture.
In this talk we will discuss new temporal distance metrics to quantify and compare the speed (delay) of information diffusion processes, taking into account the evolution of a network from a global view. Using real contact traces, we will show how these metrics are able to capture the temporal characteristics of time-varying graphs, such as delay, duration and time order of contacts (interactions). From this, we explore the concepts of reachability and small-world behaviour in time-varying graphs.
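One of the simplest temporal distance notions, earliest-arrival time over a contact trace, can be sketched as follows (an illustrative reconstruction under our own assumptions, not the speaker's code). It also shows why static analysis misleads: a path can exist in the aggregated graph yet be unusable because its contacts occur in the wrong time order:

```python
def earliest_arrival(contacts, src, t0=0):
    """Earliest time information from `src` (available from t0) can reach
    each node, given a trace of (time, u, v) contacts usable in both
    directions. A single pass in time order suffices, because information
    can only flow forward in time."""
    arrival = {src: t0}
    inf = float("inf")
    for t, u, v in sorted(contacts):
        for a, b in ((u, v), (v, u)):
            # b learns at time t if a already knew by t and t improves on
            # b's current earliest-arrival time.
            if arrival.get(a, inf) <= t < arrival.get(b, inf):
                arrival[b] = t
    return arrival

# Static aggregation shows a path a-b-c, but the b-c contact happens
# *before* a ever meets b, so information from a never reaches c.
trace = [(1, "b", "c"), (2, "a", "b")]
assert earliest_arrival(trace, "a") == {"a": 0, "b": 2}
```

Temporal delay and reachability metrics of this kind generalise the static notions of distance and connected component to time-varying graphs.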
Served in Stirling Management Centre.
Prof. Ken Turner, University of Stirling
The talk will describe the capabilities of Cress (Communication Representation Employing Systematic Specification), which offers a graphical notation for describing the flow in a communications service. The aim is to encourage new applications of this approach to the modelling and analysis of communications services. Service diagrams are automatically translated into formal specifications for automated validation and verification, building confidence in the service design. The same diagrams are then automatically compiled into implementation code for automated testing, performance evaluation and deployment.
The talk will introduce the Cress methodology and toolset, along with their application in a variety of communications domains: Intelligent Network, Internet telephony, Interactive Voice Response, Web Services, Grid Services and Device Services.
Dr. Paola Di Maio, University of Strathclyde
In the context of knowledge networks for systems engineering, I study the emergence that arises from the interaction of heterogeneous (hybrid) components of socio-technical systems (STS), where knowledge (K) is presented as an essential enabler for the optimal functioning of highly distributed self-organising systems (or would-be self-organising systems).
This talk will illustrate (at least some aspects of) the entanglement of the problem space, and provide an overview of the formalisation and representation methods available for such models. It is hoped to elicit suggestions from the audience on the way ahead for this research.
Prof. Philippe De Wilde, Heriot-Watt University
The Networking Problem
The brain consists of three types of interacting networks: neural networks, astrocyte (glial) networks, and a cerebrovascular (blood-flow) network. The interaction mechanisms are gradually becoming clear, and are an active research topic in neurophysiology. The astrocytes provide crosstalk between the neurons, and can make learning more robust. The cerebrovascular network provides a metabolic function. If this network fails when somebody has a stroke, parts of the neural network fail too. There are no models of the three interacting networks. Such a model would provide huge medical benefit, and would also inspire better machine learning and pattern recognition algorithms. Astrocytes, for example, play a role in vision and smell. Finally, such a model would improve our understanding of human intelligence.
Benefits from Formal Techniques
The appropriate level of abstraction is unknown. Some researchers use graphs, some Petri nets, some partial differential equations. As the brain is a complex system of interacting elements, other researchers use statistical mechanics techniques. It is also unknown how to make the model scalable: with 10^11 neurons and 10^12 astrocytes, only a small part of the brain can be simulated. Classical neural computation has been successful since the 1980s, and a model of the three interacting networks in the brain will extend the range of applications.
In this talk I will give an overview of the problem, and will show what formal techniques have been attempted.
Served in Room 2B74.
Dr. Vashti Galpin, University of Edinburgh
Stochastic process algebras are used for the quantitative modelling of computing phenomena: durations are assigned to actions, from which the performance of the whole system can be analysed. Recently, the stochastic process algebra PEPA has been extended with spatial notions. This presentation will illustrate how networks can be modelled using this spatial stochastic process algebra. The inherent spatial aspects of a network can be represented by a graph; since the formalism captures spatial information as a graph with weighted edges, it is a good match to the problem area.
The presentation will describe the formalism, followed by the model and an assessment of its utility.
Dr. Rashid Mehmood, University of Swansea
Modelling and simulation are important tools in the design and analysis of systems. Most systems of interest are huge in detail. Both the amount of memory required and the time needed to compute a solution pose a major difficulty: a problem known as the curse of dimensionality or state-space explosion.
In this talk, we will present our work on computational stochastic modelling (probabilistic model checking) for large-scale systems, and will discuss the methods involved in the modelling process and give a few applications. In particular, we will focus on continuous-time Markov chains (CTMCs), which have been widely used as a stochastic formalism for the design and analysis of systems in engineering and science. In this context, traditional methods for performance analysis typically require the generation, storage and processing of the underlying state space for the numerical solution. For steady-state problems, the numerical solution phase can be reduced to the solution of linear equation systems of the form Ax = 0. Transient solutions for the CTMC can be obtained by solving a differential equation. Both transient and steady-state solutions involve large-scale matrix computations.
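As a toy illustration of the steady-state computation (assuming a 3-state birth-death chain, far removed in scale from the billion-state models the talk discusses), the singular system can be solved by replacing one of its equations with the normalisation condition:

```python
import numpy as np

# Generator matrix of a small illustrative CTMC (3-state birth-death
# chain, birth rate 2, death rate 1): rows sum to zero.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])

# Steady state: solve pi Q = 0 subject to sum(pi) = 1. Written as
# Q^T pi^T = 0, the system is singular, so replace one equation by
# the normalisation constraint.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

assert np.allclose(pi @ Q, 0)            # pi is a stationary distribution
assert np.allclose(pi, [1/7, 2/7, 4/7])  # detailed balance: pi_{i+1} = 2 pi_i
```

At the scale described in the talk, A is a sparse matrix with billions of rows, so direct solvers give way to the iterative methods and compact storage schemes presented next.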
Subsequently, we will present parallel numerical computing techniques and compact data structures which we have developed to handle large systems. Using the discussed techniques, we will demonstrate the analysis of stochastic models with over 1.2 billion states and 16 billion transitions on single and multiple workstations. (Equivalently, this involves the solution of a sparse linear system with 1.2 billion equations and unknowns.) To the best of our knowledge, these are the largest models solved by the academic community working in this area.
Finally, we will discuss a few applications in telecommunication systems (Wireless and Optical Networks, Computational Grids), and overview the ongoing work.
Prof. Jane Hillston, University of Edinburgh
The stochastic process algebra PEPA has been used for modelling a variety of computer and communication systems over the last 15 years. In this talk I will outline a recent set of models which investigated the average throughput, handover rate and network blocking probabilities for mobile nodes in a 3G-WLAN heterogeneous network under a number of commonly used network selection strategies.
This talk will present joint work with Hao Wang and Dave Laurenson, IDCOM, University of Edinburgh. In particular I will discuss how the compositional structure of the process algebra made it particularly straightforward to compare the different strategies in a common framework.
Initial identification of shared problems, possible collaborations and road map
Informal discussions among delegates.