
Computing Science Seminars, Spring 2017


Seminars take place in Room 4B96, Cottrell Building, University of Stirling, normally from 15.00 to 16.00 on Friday afternoons during semester time, unless otherwise stated. For instructions on how to get to the University, please see the travel routes on the University website.

If you would like to give a seminar to the department in future, or if you need more information,
please contact the seminar organiser, Dr John R. Woodward (jrw@cs.stir.ac.uk).

Spring 2017

Date Speaker Title/Abstract
Friday
20th Jan
Room:
2B87
Dr David Manlove (Hosted by John Woodward) is a Senior Lecturer in Computing Science at the University of Glasgow, where he has been since 1995. He is interested in algorithms for problems involving matching agents to commodities (e.g., junior doctors to hospitals, kidney patients to donors) in the presence of ordinal preferences or cardinal utilities. He has written or co-authored over 60 papers in this area, and his book "Algorithmics of Matching Under Preferences" was published in 2013. Much of this research has involved designing algorithms to cope with NP-hard optimization problems arising in healthcare-related settings, working in collaboration with the National Health Service in the UK. He is Vice-Chair of the ENCKEP COST Action (European Network for Collaboration on Kidney Exchange Programmes).

Title: Optimizing preferences and social welfare in healthcare-related matching problems

Abstract: Matching problems typically involve assigning agents to commodities, possibly on the basis of ordinal preferences or other metrics. These problems have large-scale applications to centralised matching schemes in many countries and contexts. In this talk I will describe the matching problems featuring in two such schemes in the UK that have involved collaborations between the National Health Service and the University of Glasgow. One of these dealt with the allocation of junior doctors to Scottish hospitals (1999-2012), and the other is concerned with finding kidney exchanges among incompatible donor-patient pairs across the UK (2007 to date). In each case I will describe the applications, present the underlying (NP-hard) algorithmic problems, outline the various solution techniques and give an overview of results arising from real data connected with the matching schemes in recent years.
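As background to the talk, the classical deferred-acceptance (Gale-Shapley) algorithm underlies many doctor-hospital matching schemes. The sketch below is a minimal one-to-one version with made-up preference lists; the real NHS schemes discussed in the talk handle hospital capacities, ties and other complications that make the problems NP-hard.

```python
# Deferred acceptance (Gale-Shapley) for one-to-one matching: a
# simplified sketch of the doctor-hospital setting, with hypothetical
# preference lists and unit hospital capacity.

def deferred_acceptance(doctor_prefs, hospital_prefs):
    """doctor_prefs / hospital_prefs: dict mapping each agent to an
    ordered list of acceptable partners. Returns a stable matching
    as a dict {doctor: hospital}."""
    # rank[h][d] = position of doctor d in hospital h's list
    rank = {h: {d: i for i, d in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    free = list(doctor_prefs)            # doctors still proposing
    next_choice = {d: 0 for d in doctor_prefs}
    match = {}                           # hospital -> doctor
    while free:
        d = free.pop()
        if next_choice[d] >= len(doctor_prefs[d]):
            continue                     # d has exhausted their list
        h = doctor_prefs[d][next_choice[d]]
        next_choice[d] += 1
        current = match.get(h)
        if current is None:
            match[h] = d                 # h was unfilled: accept d
        elif rank[h][d] < rank[h][current]:
            match[h] = d                 # h prefers d: current is freed
            free.append(current)
        else:
            free.append(d)               # h rejects d: d tries next choice
    return {d: h for h, d in match.items()}

doctors = {"d1": ["h1", "h2"], "d2": ["h1", "h2"]}
hospitals = {"h1": ["d2", "d1"], "h2": ["d1", "d2"]}
print(deferred_acceptance(doctors, hospitals))  # {'d2': 'h1', 'd1': 'h2'}
```

Both doctors prefer h1, but h1 prefers d2, so d1 is rejected and settles for h2; the result is stable in that no doctor-hospital pair would jointly prefer to deviate.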
Friday
27th Jan
Room:
4B96
Wei Chen (Hosted by John Woodward) is a Research Associate in the School of Informatics at the University of Edinburgh. He received his PhD from the University of Nottingham on Type Theory, supervised by Prof. Roland C. Backhouse. In 2012 Wei worked with Prof. Martin Hofmann on type-based verification in Munich. He has been a Research Associate with Prof. David Aspinall since 2013, focusing on learning policies for mobile security. Wei's main research interests are in formal methods, in particular type theory, combinatorial games, and Büchi automata with their applications in program analysis and verification. He is currently working on combining formal methods and machine learning to help with mobile security.

Title: More Semantics More Robust: Improving Android Malware Classifiers

Abstract: Automatic malware classifiers often perform badly on the detection of new malware, i.e., their robustness is poor. We study machine-learning-based mobile malware classifiers and reveal one reason: the input features used by these classifiers can't capture general behavioural patterns of malware instances. We extract the best-performing syntax-based features, such as permissions and API calls, and some semantics-based features, such as happen-befores and unwanted behaviours, and train classifiers using popular supervised and semi-supervised learning methods. By comparing their classification performance on industrial datasets collected across several years, we demonstrate that using semantics-based features can dramatically improve the robustness of malware classifiers.
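To make the feature-based classification idea concrete, here is a toy sketch (not the speaker's system): apps are represented as sets of binary features and labelled by nearest-neighbour Jaccard similarity. The feature names are hypothetical examples of the two families the abstract contrasts, syntax-based (permissions, API calls) and semantics-based (behavioural patterns such as happen-before relations).

```python
# Toy nearest-neighbour malware labelling over binary feature sets.
# Feature names and training data are invented for illustration only.

def jaccard(a, b):
    """Jaccard similarity of two feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(sample, labelled):
    """labelled: list of (feature_set, label) pairs.
    Returns the label of the most similar training sample."""
    return max(labelled, key=lambda t: jaccard(sample, t[0]))[1]

training = [
    # syntax-based features (permissions) plus a semantics-based one
    ({"SEND_SMS", "READ_CONTACTS", "exfiltrate-after-boot"}, "malware"),
    ({"INTERNET", "ACCESS_FINE_LOCATION"}, "benign"),
]
new_app = {"SEND_SMS", "exfiltrate-after-boot"}
print(classify(new_app, training))  # malware
```

The talk's point is about which features go into such a pipeline: behavioural (semantics-based) features generalise to unseen malware better than raw permission or API lists.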
Friday
3rd Feb
Room:
4B96
Beat (Hosted by Andrea Bracciali) is completing his PhD research on techniques in distributed computing, data visualization and bioinformatics tools. He developed his project jointly with the Institute of Complex Systems, University of Western Switzerland, and the Universität Würzburg, under the supervision of Prof. Pierre Kuonen, University of Applied Sciences of Western Switzerland.

Title: How computer science can help to make DNA analysis more accessible

Abstract: In recent years the world of genetics has experienced a revolution through the rise of so-called Next Generation Sequencing (NGS) technologies. These allow DNA to be sequenced quickly and cheaply, creating digital files that can be analysed. Through NGS the amount of data created has exploded, requiring new approaches to handle it, both from a computational and a graphical user interface perspective. For the specific use case of NGS in diagnostics, we explored how to approach these problems by creating a user-friendly graphical NGS data analysis tool. We present the different types of data analysis that are possible with the pipeline, as well as how we lowered the computing power required for some of those analyses. We also present the integration of distributed computing into the pipeline to enable data analysis for people with restricted access to a good computing infrastructure. Finally, we look at upcoming research on how we intend to enable researchers to share data and computing power securely across multiple laboratories.
Friday
10 Feb
Room:
4B96
Dr Iain J. Gallagher (Hosted by John R. Woodward) completed a PhD in immunology at the University of Edinburgh (2006) before moving fields to work in transcriptomics and human physiology at Heriot-Watt University (2007). From there he moved to a postdoctoral position in the Medical School of the University of Edinburgh (2009), where he continued research into human muscle pathology as well as developmental biology, metabolic disease and cancer. Iain briefly returned to immunology with a postdoctoral position at the Roslin Institute (2011) before taking up his current post at the University of Stirling (2012). Iain uses transcriptomic technology to examine the pathology of muscle wasting and metabolic disease. He has recently published the first multi-tissue classifier of healthy ageing, providing a tool for, e.g., the enrichment of clinical trials. Iain is a keen user of Python and R as analysis tools in his research. He has a developing interest in Bayesian statistical approaches.

Title: The Bayesian approach to statistics

Abstract: Most scientists are taught frequentist null hypothesis significance testing (NHST) as the route to inference when faced with noisy, real-world data. Developed by 'Student' (W. S. Gosset), Egon Pearson, Jerzy Neyman and, most notably, Ronald Fisher, NHST has stood as the statistical paradigm for around 80 years. Despite efforts by the originators of the approach, the devotion to a p-value of <0.05 as indicating some substantive finding is pervasive. Pre-dating NHST by some 100 years, the approach of Thomas Bayes (published posthumously and developed by Laplace) differs both philosophically and mechanically from NHST. Proponents suggest that whilst the Bayesian approach is more mathematically challenging, the inference is clearer. Recent developments in computing power and a recognition that NHST contributes to irreproducibility in scientific research have led to an increase in interest in the Bayesian approach.

In this talk I will introduce the Bayesian approach to statistics, define some key terms and illustrate the approach by example.
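For readers new to the topic, the standard textbook illustration of Bayesian updating is estimating a coin's bias: with a Beta prior on the bias and binomial data, the posterior is again a Beta distribution (conjugacy), so the whole inference reduces to two additions. The numbers below are an invented example, not from the talk.

```python
# Minimal Bayesian update: infer a coin's bias theta from k heads in
# n flips. With a Beta(a, b) prior, the posterior is
# Beta(a + k, b + n - k) by conjugacy.

def posterior(a, b, k, n):
    """Return the (a, b) parameters of the Beta posterior."""
    return a + k, b + (n - k)

a, b = posterior(1, 1, k=7, n=10)   # flat Beta(1, 1) prior, 7 heads in 10
mean = a / (a + b)                  # posterior mean estimate of theta
print(a, b, round(mean, 3))         # 8 4 0.667
```

Unlike a p-value, the output is a full distribution over the unknown bias, from which means, intervals and probabilities of hypotheses can be read off directly.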
Friday
17 Feb
Room:
4B96
Julie R. Williamson (Hosted by Carron Shankland): I am a Lecturer in HCI at the University of Glasgow. I am part of the Glasgow Interactive Systems Group (GIST), leading the Public and Performative Interaction theme within GIST. My research focuses on how people use technology in public spaces and how interactive technologies can be designed given the performative aspects of using technology in public. My current research looks at playful interfaces for public spaces that use embedded interaction, large-format displays, and whole-body input.

Title: Engagement with Spherical Displays in Public Spaces

Abstract: Over the past two years, I have completed a series of playful and performative deployments in public spaces using spherical displays. These displays have unique affordances that make them interesting objects for engagement, allowing groups of users to gather around the display and interact together. During this talk, I will discuss what I mean by engagement in this context, what makes spherical displays special in public spaces, and the implications and limitations of completing this intervention-based research in public spaces.
Friday
24th Feb
Room:
4B96
Don Giovanni Battista Spinelli Barrile di Marianella (Hosted by John Woodward) is a Big Data Engineer and work-in-progress Data Scientist. Employed by Compass GmbH in Germany, he is also attending the University of Stirling, where he is completing the Master of Science in Big Data. A passionate Pythonista, he has experience with several databases, such as MongoDB, Neo4j and different flavours of SQL. Lately he has been diving into machine learning, with a specific interest in performing sentiment analysis on big data sets. He is currently in charge of the backend development of the Airbnb Monitor: a project focused on studying the mid- to long-term effects of the popular peer-to-peer app on the housing and tourism market. He also works on Smena, a project concerning network analysis for data retrieved through social media.

Title: Scrapy: an insight on Crawling Data from the Web with Python

Abstract: Data scientists' work relies on having access to (large) datasets. However, the data is not always directly available to be processed and analysed; it often needs to be retrieved first from external sources. Such sources may be an API (application programming interface) or web pages on the Internet. Web scraping, or web harvesting, is the technique of extracting large amounts of information directly from the source code behind web pages. During this seminar we will have a quick overview of the Python tools available for web scraping. After this, our focus will shift to Scrapy, an elaborate, free, open-source web crawling framework written in Python. After explaining the framework's overall structure and its main components, we will see Scrapy in action during an interactive session; this will be done by coding some scripts that are able to recursively collect information from thousands of web pages. During the last stage of the seminar, we will learn how to select, retrieve and store specific information from web pages.
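The core extraction step behind any scraper can be shown with the standard library alone; Scrapy wraps this kind of extraction in a full crawling framework with scheduling, item pipelines and CSS/XPath selectors. The HTML snippet below is made up for illustration.

```python
# Extracting links from HTML using only the standard library, to show
# the basic idea that frameworks like Scrapy build on.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag it is fed."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = '<html><body><a href="/page1">One</a> <a href="/page2">Two</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/page1', '/page2']
```

A crawler is essentially this loop applied recursively: fetch a page, extract its links and data, then queue the links for fetching, which is exactly the machinery Scrapy automates.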
Friday
3rd Mar
Room:
4B96
Speaker, title and abstract to be announced.
Friday
10th Mar
Room:
4B96
Dr Simon Rogers (Hosted by John Woodward) is a Senior Lecturer in the School of Computing Science at the University of Glasgow. He has an undergraduate degree in electrical and electronic engineering and a PhD in engineering maths. Since starting his PhD he has worked on the development of machine learning models for problems in biology, particularly those involving large, complex datasets. He is currently focused on the development of models for metabolomics (the study of the small-molecule content of a biological sample), where the combination of large volumes of data and poor scientific models makes machine learning a promising approach. This work has been published in many top journals (PNAS, Bioinformatics, etc.). He is also the co-author (with Prof. Mark Girolami) of an introductory textbook on machine learning: A First Course in Machine Learning.

Title: Mining the fragmentome with topic models

Abstract: Metabolites are small molecules central to the operation of living organisms. Measurement of metabolites is desirable across much of the medical and life sciences. However, metabolite identification from mass spectrometry data (a key step in measurement) is very challenging. One common method is to fragment measured molecules and compare the resulting fragments with reference databases. Unfortunately, reference databases are very small and aren't getting much bigger anytime soon. In this talk I'll describe an alternative approach based on the use of text mining algorithms (topic models) to analyse large collections of fragment spectra. I'll describe the topic modelling framework that we have developed, as well as some of the challenges involved in working with this data, and show some results that demonstrate the power of this approach.
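The text-mining analogy can be sketched in a few lines: each fragment spectrum becomes a "document" whose "words" are binned fragment masses, and a topic model such as LDA is then fitted to the resulting word counts. The m/z values and bin width below are invented for illustration, not the speaker's actual preprocessing.

```python
# Turning a fragment spectrum into a bag of "words" for topic
# modelling: bin each fragment m/z value and count occurrences.
from collections import Counter

def to_document(peaks, bin_width=0.1):
    """Map a list of fragment m/z values to a bag of 'fragment words',
    one word per m/z bin."""
    return Counter(f"frag_{round(mz / bin_width) * bin_width:.1f}"
                   for mz in peaks)

spectrum = [91.06, 91.08, 119.12]   # hypothetical fragment masses
doc = to_document(spectrum)
print(doc)                           # frag_91.1 twice, frag_119.1 once
```

With each spectrum converted to such a count vector, a standard topic model can discover co-occurring groups of fragments ("topics") across a large collection, without needing the spectra to match any reference database.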
Friday
17th Mar
Room:
2V2
Dr Livio Pompiano (Hosted by Andrea Bracciali). Title and abstract to be announced.
Friday
24th Mar
Room:
4B96
Guevara Noubir (Hosted by Marwan Fayed) holds a PhD in Computer Science from the Swiss Federal Institute of Technology in Lausanne. His research covers both theoretical and practical aspects of privacy, security, and robustness in networked systems. Prior to joining Northeastern University, he was a senior researcher at CSEM SA (1997-2000), where he led the design and development of the data protocol stack of the third-generation Universal Mobile Telecommunication System (UMTS) and its world-first 3G prototype. His research has led to a wide range of mechanisms and algorithms for scalable, secure, private and robust wireless and mobile communications. He led the winning team of the 2013 DARPA Spectrum Cooperative Challenge. He is a recipient of the National Science Foundation CAREER Award (2005), the ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec) best paper award in 2011, and its runner-up best paper award in 2013. Dr. Noubir has held visiting research positions at Eurecom, MIT, and UNL. He has served as program co-chair of several conferences in his areas of expertise, such as the ACM Conference on Security and Privacy in Wireless and Mobile Networks, the IEEE Conference on Communications and Network Security, and IEEE WoWMoM. He serves on the editorial boards of ACM Transactions on Privacy and Security and IEEE Transactions on Mobile Computing.

Title: Cross-Layer Attacks in Emerging Networks

Abstract: The last decade has seen the rise of several new networking technologies, from mobile and wireless to overlay anonymous communication networks such as Tor. In this talk, I will argue that such networks are vulnerable to a variety of cross-layer attacks on their intrinsic features. For instance, an adversary can infer users' locations using malicious apps without requiring permissions, or by exploiting physical-layer characteristics. I will also provide evidence that the Tor anonymity network is subject to active attacks, and present a framework that identifies malicious relays. I will then discuss the results of using the framework, which revealed over 100 malicious relays.
Friday
31st Mar
Room:
4B96
Foteini Katsarou (Hosted by John Woodward). Title and abstract to be announced.
Friday
5th May
Room:
4B96
Dr Juan Ye, University of St Andrews (Hosted by Sandy Brownlee). Title and abstract to be announced.
Previous Seminar Series
2016:   Spring   Autumn
2015:   Spring   Autumn
2014:   Spring   Autumn
2013:   Spring   Autumn
2012:   Spring   Autumn
2011:   Spring   Autumn
2010:   Spring   Autumn

Top image: Illustrated example of running the Epsilon-constraint algorithm in order to maximise two objectives: find an optimal solution for objective 1; restrict the solution space according to the solution's value for objective 2 and look for an optimum solution of objective 1 in that space; repeat the previous step until there are no more solutions to be found. Any dominated solutions need to be filtered out of the set of solutions.
Courtesy of Dr. Nadarajen Veerapen. Related to a recent publication:

N. Veerapen, G. Ochoa, M. Harman and E. K. Burke. An Integer Linear Programming approach to the single and bi-objective Next Release Problem. Information and Software Technology, Volume 65, September 2015, Pages 1-13, ISSN 0950-5849. DOI:10.1016/j.infsof.2015.03.008
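The epsilon-constraint loop described in the image caption above can be sketched on a toy discrete problem. This is a simplified illustration with made-up objective values, not the integer linear programming formulation of the cited paper: repeatedly maximise objective 1, tighten a constraint on objective 2 to the value just achieved, and re-optimise, then filter out dominated solutions.

```python
# Epsilon-constraint sketch for a bi-objective maximisation problem
# over an explicit list of candidate solutions (toy data).

def epsilon_constraint(solutions):
    """solutions: list of (f1, f2) pairs, both to be maximised.
    Returns the non-dominated solutions found by the loop."""
    front = []
    eps = None                           # current lower bound on f2
    while True:
        # restrict the solution space by the epsilon constraint on f2
        feasible = [s for s in solutions if eps is None or s[1] > eps]
        if not feasible:
            break                        # no more solutions to be found
        best = max(feasible, key=lambda s: (s[0], s[1]))  # optimise f1
        front.append(best)
        eps = best[1]                    # tighten the constraint on f2
    # filter out any dominated solutions
    return [s for s in front
            if not any(o != s and o[0] >= s[0] and o[1] >= s[1]
                       for o in front)]

pts = [(5, 1), (4, 3), (2, 4), (1, 2)]
print(epsilon_constraint(pts))  # [(5, 1), (4, 3), (2, 4)]
```

Each iteration yields one point of the Pareto front: (5, 1) is the unconstrained optimum of objective 1, then requiring f2 > 1 yields (4, 3), then f2 > 3 yields (2, 4), after which the constrained space is empty.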


This page is maintained by:
Computing Science and Mathematics
Faculty of Natural Sciences
Room 4B102, Cottrell Building
University of Stirling, Stirling FK9 4LA
Tel: +44 1786 46 7286


© University of Stirling FK9 4LA Scotland UK • Telephone +44 1786 473171 • Scottish Charity No SC011159