I'm broadly interested in optimisation and machine learning, including both real-world applications and the underpinning theory.
Evolutionary algorithms and similar techniques are great at solving difficult problems, but they don't tend to give the end user much explanation of why they chose the solutions they did. Going back to my PhD work, I have looked at ways to build models of the solutions visited by such algorithms during their search, then tried to extract information from those models and make sense of it. I currently have a PhD student working on this very topic. I also have an ongoing interest in how these models can help us understand what makes some problems harder than others.
In 2015-2016 I worked as a research assistant on the project FAIME: A Feature based Framework to Automatically Integrate and Improve Metaheuristics via Examples. This was all about recognising features of problems and algorithms so that algorithms can be matched to the problems they can easily solve. The aim was to automatically design new algorithms for problems, based on those problems' features.
Computer code fixing itself? It sounds like proper sci-fi, but it is really happening, more or less. I'm interested in various ways of applying search-based optimisation to improving software.
I worked for a few years as a research assistant on the project DAASE (Dynamic Adaptive Automated Software Engineering), and am still working with some collaborators on this topic. My focus is currently on using search-based methods to improve existing code (e.g. making a simulator more accurate, or reducing the power consumed by the computer running the code), and on making search algorithms solve problems in a more intelligent way.
I also recently led a small project funded by the Carnegie Trust called "TOGA: Towards Grammar Aware Operators for Genetic Improvement of Software", which looked at mutating the order of binary expressions and "if" statements in Java code to achieve speedups.
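To illustrate the kind of transformation involved (this is a hypothetical example of my own, not code from the TOGA project), consider swapping the operands of a short-circuit `&&` in a Java guard: when neither operand has side effects the meaning is unchanged, but putting the cheap test first can avoid an expensive one entirely.

```java
public class ConditionReorderExample {
    // Original guard: the expensive linear scan runs on every call.
    static boolean originalGuard(int[] data, int target) {
        return contains(data, target) && target > 0;
    }

    // Mutated guard: operands of && swapped. Behaviour is preserved here
    // because neither operand has side effects, but the cheap comparison
    // now short-circuits the scan whenever target is non-positive.
    static boolean mutatedGuard(int[] data, int target) {
        return target > 0 && contains(data, target);
    }

    static boolean contains(int[] data, int target) {
        for (int x : data) {
            if (x == target) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 4, 1, 5};
        // The two guards agree on every input; only the cost differs.
        System.out.println(originalGuard(data, 4) == mutatedGuard(data, 4)); // prints true
    }
}
```

A search over mutations like this is only useful if each candidate is checked against the program's tests and timed, which is essentially what grammar-aware genetic improvement automates.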
I am a contributor to the open source project Gin, a toolbox to simplify research in genetic improvement of software.
We live and work in buildings, so their comfort and energy efficiency are of great importance. My interest in this topic goes back to my time at Loughborough University (2010-2013), where I worked on three projects (OPTIMISE, SECURE, and LEEDR) looking at how we can make buildings greener yet still comfortable, without excessive construction costs. I looked at ways of using evolutionary algorithms to explore the trade-offs between competing goals, at using surrogate models of fitness to speed up the search (building performance simulations being rather long-running), and at exploring very large design spaces, such as choosing which of the thousands of houses in a town should receive new insulation.
Taxiing is one of the bottlenecks in the airport operations ecosystem. Aircraft are not designed to run efficiently at ground level, meaning that emissions due to taxiing are surprisingly high, and delays in taxiing can have knock-on impacts on other resource-constrained parts of the airport (especially runways and gates/stands). There are therefore several interesting problems around predicting taxi times, modelling aircraft movements, and optimising routes to be more efficient and more robust. When I first came to Stirling I worked on the project SANDPIT: Integrating and Automating Airport Operations, in which I focussed on real-world aircraft ground movement (allocating taxi routes to aircraft), specifically on approaches to handling uncertainty and on sourcing free real-world data sets for the problem.
I am currently working on the TRANSIT project, funded by the EPSRC, on which I am a co-investigator. This project is working with several academic and industrial partners to develop automated routing algorithms for taxiing aircraft that are robust to uncertainty and more realistic than existing methods.
I am also working with a small business based in Scotland to look at smart algorithms to route aircraft in flight so we can make the most efficient use of the airspace we have available.
Datasets and tools: If you've come here looking for the GM Tools for manipulating airport ground movement data, then please visit the GM Tools project page at GitHub. Benchmark datasets can be found in the collection at ASAP Nottingham.
Here is a video produced by Microsoft to highlight the research, and particularly the role that the Azure cloud platform played (I was grateful to receive a year-long grant for free use of Azure from them). A case study is also now available.
I recently led a small project funded by the Royal Society of Edinburgh (Scottish Crucible) called "Crowd-sourcing the aural identities of places by evolutionary optimisation". This was joint work with Suk-Jun Kim at University of Aberdeen, and Stella Chan and Szu-Han Wan at Edinburgh University, and set out to develop a distributed platform that uses evolutionary algorithms to explore the sounds and collective memories associated with geographic locations.
Several of my undergraduate students have looked at ways to generate music using evolutionary algorithms, with some really nice results.
I lead the Being Connected research programme at University of Stirling, which brings together researchers from several disciplines (spanning social science, environmental science, computing and arts) to investigate how data science approaches can be used to explore and tackle division in society, as well as broad issues of trust and accountability around data and algorithms.
I have always been fascinated by computing and in particular artificial intelligence techniques. I particularly enjoy the interplay between the theoretical side of understanding what makes different algorithms tick and the huge range of interesting application areas that have meaningful real-world value (or are just fun!). My work has settled around approaches to dealing with real-world optimisation problems: handling uncertainty, solving problems with hard constraints and multiple objectives, dealing with long simulation run-times, and analysing optimisation results to better support decision making.
I completed a Computer Science BSc(hons) in 2005 at Robert Gordon University in Aberdeen, Scotland. During the last year of that degree I was funded by the Carnegie Trust to conduct a short-term research project applying genetic algorithms to cancer chemotherapy scheduling. My interest in this area grew, leading to an honours project on timetabling with memetic algorithms. I then progressed to work towards a PhD, entitled Multivariate Markov Networks for Fitness Modelling in an Estimation of Distribution Algorithm. This covered a range of applications for evolutionary algorithms, and focussed on the construction of fitness models to support the evolutionary process. I continued to research in my own time alongside a job as a software engineer in industry during 2008-2010, working in the oil, gas and renewable energy sector. I returned to academia full-time as a research associate in the building energy group at Loughborough University, and subsequently came to the CHORDS research group (now part of the Data Science RG) here at Stirling in 2013, first as a postdoc, then a lecturer from 2018.