The transition to manycore systems is arguably the greatest challenge facing the ICT industry, Computational Science, and Computing Science research. The manycore revolution is fundamentally changing multiple levels of the execution stack: from processor architecture, through systems software, to applications.
The Manycore Summer School gave researchers an opportunity to learn
theory and practice in a range of emerging manycore technologies
from seven world-leading academic and industrial researchers.
Participants engaged with cutting-edge material in lectures,
hands-on labs, and interactive poster sessions.
The Manycore Summer School was held from Monday 16th to Friday 20th July 2018 at the University of Glasgow.
Mr Jonathan Balkind (Princeton University): OpenPiton Open Source Manycore Research Platform
Prof Andrew Brown (University of Southampton): Event-Driven Computing
Dr Toni Collis (Women in High-Performance Computing): Developing a diverse, inclusive and resilient HPC community
Prof Kerstin Eder (University of Bristol): Whole Systems Energy Transparency
Dr Matt Horsnell (Arm): Weak Memory and Heterogeneous Computing
Dr Hans Vandierendonck (Queen's University Belfast): High-Performance Graph Analytics in Shared Memory
Dr Mario Wolczko (Oracle Labs): A Concise and Opinionated History of Virtual Machines
Registration and accommodation were free for
UK-based PhD students and early career researchers (postdocs),
thanks to generous sponsorship from EPSRC and SICSA.
Andrew Brown, University of Southampton
All modern computers are based on an idea first published by Alan Turing as a thought experiment to support his proof of key results about the fundamentals of computability. His concept of the Universal Machine forms the theoretical foundation for the stored-program computer, conceived in practical terms by von Neumann, Eckert and Mauchly, which first became an engineering reality in the Manchester Baby machine and soon thereafter a practical computing service in Maurice Wilkes's EDSAC at Cambridge. At the heart of this idea is the concept of sequential execution: each "instruction" starts with the state of the machine when the preceding instruction has completed, and leaves the machine in a well-defined state for its successor. High-speed implementations bend the actual timing of instruction execution as far as it will go without breaking semantics, but still emulate the sequential model.
The history of advances in computing has revolved around making this very simple execution model go faster, partly through bending the timing of instruction execution, but mainly through making transistors smaller, faster, and more energy-efficient, all thanks to Moore's Law. This approach has delivered spectacular progress for over 50 years, but has hit a brick wall – the power wall is much vaunted; in reality the wall is multidimensional and complex, but solid nevertheless. Since then, largely illusory (marketing) advances in performance have been delivered through multi-core and then many-core parallelism – putting a modest number of sequential execution engines on the same chip, whose potential (marketing) performance can rarely be realized due to the difficulty inherent in trying to make sequential programs work together in parallel.
The time has come to look again at the fundamentals of computation: to abandon sequential instruction execution as the only model of computation. Alternatives to sequential execution are all around us, including the massive parallel computing resource on a modern FPGA, and the vast complex of biological neurons inside each of our brains. The only minor difficulty is that we do not yet have any general theory of computing on such huge, distributed, networked resources. It’s time that changed. Sequential computing is not natural, it's not efficient and, in fact, the only thing it has going for it is that it's easy.
In these seminars, we present a little bit of history, and then suggest an alternative way of approaching real-world engineering computing problems - principally simulation, although this is a very broad term. This alternative approach is that of event-based computation - where we allow our model of a system to "relax" into some reasonable solution configuration corresponding to reality, in exactly the same way as occurs in nature. We generalise the issue almost beyond recognition, and suggest ways in which we might realise this alternative approach. Finally, we briefly discuss some of the array of application domains for which this approach offers a tremendous return on effort.
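The event-based style sketched above can be illustrated, in miniature, by a discrete-event loop: the system evolves only when time-stamped events fire, and each handler may schedule further events until the system settles. The Python sketch below is our own toy illustration, not part of the seminar material; the event names and handlers are invented for the example.

```python
import heapq

def simulate(events, handlers, until=100.0):
    """Minimal discrete-event loop: pop time-stamped events in order;
    each handler may schedule further events, and the system 'relaxes'
    until the queue drains or the time horizon is reached."""
    queue = list(events)              # (time, name, payload) tuples
    heapq.heapify(queue)
    log = []
    while queue:
        time, name, payload = heapq.heappop(queue)
        if time > until:
            break
        log.append((time, name, payload))
        for new_event in handlers[name](time, payload):
            heapq.heappush(queue, new_event)
    return log

# Invented example: a "spike" event that triggers an "echo" 1.5 time units later.
handlers = {
    "spike": lambda t, p: [(t + 1.5, "echo", p)] if p > 0 else [],
    "echo":  lambda t, p: [],
}
trace = simulate([(0.0, "spike", 3)], handlers)
# trace == [(0.0, "spike", 3), (1.5, "echo", 3)]
```

Note that nothing in the loop imposes a global instruction order: any events with distinct timestamps could, in principle, be handled on distinct processors, which is the intuition behind the event-driven hardware discussed in the seminars.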
Mario Wolczko, Oracle Labs
A Concise and Opinionated History of Virtual Machines
Virtual machines are ubiquitous. Most popular language implementations use VMs, VMs are widely deployed in production, and are an enabling technology for many clouds. And yet relatively few know what goes on inside a VM, and almost all are unaware of the long and complex history of VM technologies. This talk will attempt to induct you into the inner circle of the VM cognoscenti.
Toni Collis, Women in HPC
Developing a diverse, inclusive and resilient HPC community
The underrepresentation of women is a challenge that the entire HPC and Supercomputing industry faces. Research shows that diverse teams increase productivity, so addressing the lack of gender diversity is as important to the community as the challenge of reaching Exascale computing. This session will explore the challenges the HPC and parallel programming community faces and how to address them. The second part of the session will focus on personal career development and 'building resiliency' to help attendees thrive in their careers. Although this session is tailored for women, everyone is welcome to attend and all are likely to benefit both personally and in terms of addressing workplace diversity and inclusion.
This session will be split into two parts:
(1) Opportunities and Challenges:
As a community we are only just beginning to measure and understand how ‘leaky’ the HPC workforce pipeline is, but attrition rates are likely as high as the general tech community: 41% of women working in tech eventually leave the field (compared to just 17% of men). This session will discuss the potential causes of this, and provide an open forum for discussing solutions.
(2) Building resilience: maintaining well-being and dealing with work stress.
Have you ever felt overwhelmed and wonder how you are going to finish your paper or thesis? Have you experienced the ‘second year blues’ where everything seems to go wrong during your Ph.D. studies? You are not alone!
Postgraduate, postdoctoral and academic careers pose serious stresses on individuals. This session will explore how to develop your own 'Resilience' toolkit, discussing problems and solutions to help you maintain your wellbeing day-to-day. This session is relevant for everyone, women and men, at all career stages, but is particularly relevant to those in under-represented groups (such as women in computing), for whom the added pressures of being in a minority make resilience all the more important for succeeding and excelling in a career.
Matt Horsnell, Arm
Micro-architectural Security and Heterogeneous Computing
Plucking Lemons - Can Architecture remove the low-hanging Security fruit?
In this talk I will discuss some of the recent family of security exploits that have been in the press (Spectre, Meltdown, Rowhammer). I'll explore the historical context of micro-architectural features designed for performance, give a basic understanding of how they work, and show how they are now being exploited by increasingly clever adversaries. I'll discuss some of the mitigations proposed, and think more widely about the future of architecture and micro-architecture design in a new age in which security becomes a first-order design constraint.
Apples and Oranges – Supporting Domain Specific Compute and Acceleration.
With the demise of Moore's Law and the end of classical Dennard scaling, many computer architects are looking at domain specific acceleration for future performance and efficiency gains. This talk focusses on how this might influence system design and composition, general purpose architecture and support for accelerators.
Kerstin Eder, University of Bristol
Whole Systems Energy Transparency: More power to software developers!
Energy efficiency is now a major, if not the major, constraint in electronic systems engineering. Significant progress has been made in low power hardware design for more than a decade. The potential for savings is now far greater at the higher levels of abstraction in the system stack. The greatest savings are expected from energy consumption-aware software. Promoting energy efficiency to a first class software design goal is therefore an urgent research challenge. Designing software for energy efficiency requires visibility of energy consumption from the hardware, where the energy is consumed, all the way through to the programs that ultimately control what the hardware does. This visibility is termed energy transparency. Energy transparency enables a deeper understanding of how algorithms and coding impact on the energy consumption of a computation when executed on hardware. It is a key prerequisite for informed design space exploration and helps system designers to find the optimal tradeoff between performance, accuracy and energy consumption of a computation. In this session I will outline our approach, techniques and recent results towards giving "more power" to software developers. We will cover energy monitoring of software, energy modelling at different abstraction levels, including insights into how data affects the energy consumption of a computation, and static analysis techniques for energy consumption estimation.
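To give a flavour of what instruction-level energy modelling might look like, the sketch below maps profiled instruction-class counts to an energy estimate. This is our own hedged illustration, not the session's methodology: the per-class coefficients are invented for the example, whereas real models of this kind are calibrated against hardware measurements.

```python
# Hypothetical energy per instruction class, in nanojoules.
# These numbers are made up for illustration only.
COST_NJ = {
    "alu":    0.5,
    "load":   1.8,
    "store":  2.1,
    "branch": 0.9,
}

def estimate_energy_nj(profile):
    """Estimate energy for a run given instruction-class counts:
    energy = sum over classes of (count x per-class cost)."""
    return sum(COST_NJ[cls] * count for cls, count in profile.items())

# An invented profile of a small computation.
profile = {"alu": 1000, "load": 200, "store": 100, "branch": 150}
energy = estimate_energy_nj(profile)
# roughly 1205 nJ with these made-up coefficients
```

Even a crude model like this makes the abstract's point concrete: memory operations dominate here, so a code change that trades loads for ALU work could reduce energy even if it does not reduce instruction count.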
Jonathan Balkind, Princeton University
OpenPiton Open Source Manycore Research Platform
In these lectures and labs, we will introduce you to OpenPiton, the world's first open source, general-purpose, multithreaded, manycore processor. We will start by covering the simulation infrastructure for RTL implementation and the assembly test suite with its thousands of tests. Then, we will get hands on with the FPGA tools, including our push-button synthesis and implementation flow, and running code in a full Debian Linux environment on our supported FPGAs. Lastly, we will introduce our synthesis and backend infrastructure for estimating the power and area implications of your research ideas, with the possibility of taping out your own chip!
Hans Vandierendonck, Queen's University Belfast
High-Performance Graph Analytics in Shared Memory
This lecture will provide an overview of recent ideas in high-performance implementations of graph analytics, with a focus on shared memory systems. We will discuss programming models and runtime system implementation for a class of algorithms that conform to the Pregel model. We will analyse characteristics of graph algorithms and explain the impact of these characteristics on performance. We will review performance optimisations to address memory locality issues, to perform NUMA-aware data placement and code scheduling, and to achieve load balance through graph partitioning. Central to these optimisations is the understanding of write-sets in graph algorithms and the alignment of graph partitioning to these write-sets. A practical, hands-on session will give participants experience with these techniques.
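The Pregel model mentioned above can be sketched as a superstep loop: active vertices receive messages, update local state, and send messages along out-edges until no vertex remains active. The Python below is a sequential, single-machine illustration of that pattern (single-source shortest paths as the vertex program); it deliberately ignores the parallelism, NUMA placement, and partitioning optimisations that the lecture addresses.

```python
def pregel_sssp(graph, source):
    """Vertex-centric single-source shortest paths, Pregel style.
    `graph` maps each vertex to a list of (neighbour, edge_weight)."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    active = {source}
    while active:                         # one iteration = one superstep
        messages = {}
        for v in active:                  # message-generation phase
            for u, w in graph[v]:
                cand = dist[v] + w        # candidate distance via v
                if cand < messages.get(u, float("inf")):
                    messages[u] = cand
        active = set()
        for u, cand in messages.items():  # vertex-update phase
            if cand < dist[u]:
                dist[u] = cand
                active.add(u)             # changed vertices stay active
    return dist

# Tiny invented example graph.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
# pregel_sssp(g, "a") -> {"a": 0, "b": 1, "c": 3}
```

The write-set point from the abstract is visible even here: each superstep writes only to the vertices that receive improving messages, so a partitioning aligned with those write-sets limits cross-partition (and, on real systems, cross-NUMA-node) traffic.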
Scotland is a beautiful country to explore, and Glasgow is a buzzing
city with a huge range of leisure, culture and sporting
opportunities. As part of the Summer School we had an excursion,
a banquet, and a Ceilidh (Scottish country dancing).
Organization for the Manycore Summer School 2018:
Jeremy Singer, School of Computing Science, University of Glasgow
Phil Trinder, School of Computing Science, University of Glasgow