ML Meetup: Graphcore and Symbolic Representation Learning
At Man AHL, we believe in the Python ecosystem and have been successfully trading machine learning based systems since early 2014.
To give back to and strengthen London’s Python and Machine Learning communities, we sponsor and support the PyData and Machine Learning London meetups. In April, we had the pleasure of welcoming Simon Knowles, CTO of Graphcore, and Marta Garnelo Abellanas, Research Scientist at DeepMind, to the London Machine Learning Meetup. Below are summaries of their talks.
Graphcore - Simon Knowles
Simon Knowles, CTO at Graphcore and founder of several successful startups in the area of processor development, gives us an overview of the challenges and opportunities facing today’s chip manufacturers in the context of intelligence computation (model learning & simulation). What is required for these new types of computational task to be executed as quickly and efficiently as possible?
In the first part, Simon walks us through the current state of processor development and explains why we no longer see the performance improvements of 10-15 years ago. He predicts that silicon scaling might yield another 3-10x performance gain in the next decade, but to reach 100x improvements we need ground-breaking new ideas, many chips connected together, and an adjustment to the new forms of computation required. Intelligence computation can usually be represented as a graph, with nodes representing data transformations (computation) and edges representing dependencies (communication), but current CPUs (scalar) and GPUs (vector) are not designed to process these data structures efficiently.
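This graph view of computation can be made concrete with a toy dataflow graph, where nodes are operations and edges carry data between them. The structure below is purely illustrative and unrelated to any real IPU toolchain:

```python
# Toy dataflow graph: each node is (operation, list of dependency edges).
# Illustrative only -- not Graphcore's representation or API.
graph = {
    "x":    (lambda: 3.0, []),             # input node
    "y":    (lambda: 4.0, []),             # input node
    "sq_x": (lambda x: x * x, ["x"]),      # depends on x
    "sq_y": (lambda y: y * y, ["y"]),      # depends on y
    "sum":  (lambda a, b: a + b, ["sq_x", "sq_y"]),
}

def evaluate(graph, node, cache=None):
    """Evaluate a node after recursively evaluating its dependencies."""
    if cache is None:
        cache = {}
    if node not in cache:
        fn, deps = graph[node]
        cache[node] = fn(*(evaluate(graph, d, cache) for d in deps))
    return cache[node]

print(evaluate(graph, "sum"))  # 3^2 + 4^2 = 25.0
```

The point of the representation is that independent nodes (here `sq_x` and `sq_y`) have no edge between them, so a graph-aware processor is free to execute them in parallel.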
This is the space which Graphcore aims to fill. The result of their development is Colossus, an Intelligence Processing Unit (IPU) optimised to work efficiently on graph-shaped data structures, with memory on chip, 2432 processor tiles and “compiled” communication. To avoid concurrency hazards, the IPU is based on the Bulk Synchronous Parallel model. It can access 600MB at 90TB/s with near-zero latency (compared to a GPU’s 16GB at 900GB/s).
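The Bulk Synchronous Parallel idea can be sketched in a few lines: workers alternate between a local compute phase and a communication phase, separated by barriers, so no worker ever reads a neighbour’s value mid-update. The worker count, values and schedule below are invented for illustration:

```python
import threading

# Minimal Bulk Synchronous Parallel sketch (illustrative, not IPU code):
# each superstep is compute -> barrier -> exchange -> barrier -> consume.
N = 4
values = [1, 2, 3, 4]          # one local value per worker
inbox = [0] * N                # "communication" buffers
barrier = threading.Barrier(N)

def worker(i):
    for _ in range(3):                   # three supersteps
        local = values[i] * 2            # compute phase (local data only)
        barrier.wait()                   # everyone finished computing
        inbox[(i + 1) % N] = local       # exchange phase (send to the right)
        barrier.wait()                   # everyone finished exchanging
        values[i] = inbox[i]             # now safe to consume

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(values)  # [16, 24, 32, 8]: each value doubled 3 times and rotated
```

The barriers are what make the result deterministic despite the threads running concurrently; removing them would reintroduce exactly the read-while-writing hazards BSP is designed to rule out.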
Watch the video if you would like to know how the Intelligence Processing Unit works and how it performs in the context of machine learning.
Symbolic Representation Learning - Marta Garnelo
Marta Garnelo, Research Scientist at DeepMind and PhD student at Imperial College London, invites us to think about the representations Deep Learning (DL) models generate, and how combining them with Symbolic AI could yield better results.
In the first part, Marta highlights the principles of Symbolic AI and identifies several advantages of this approach: i) interpretability, ii) generalization at the concept level, and iii) well-established solver/planning algorithms. She also recognizes that this paradigm suffers from one critical drawback: relations, and therefore knowledge, need to be handcrafted. Comparing it to neural networks, she then emphasizes the complementarity between DL and Symbolic AI and details her first attempt to reconcile the two approaches in one of her early experiments. Garnelo M., Arulkumaran K. and Shanahan M. (2016) introduced a Deep Symbolic Reinforcement Learning pipeline, composed of a low-level symbol extractor (using a convolutional autoencoder) feeding into a representation-building component, whose output is passed to a Q-learning algorithm. They compared their learning pipeline to DeepMind’s DQN in an environment where an agent must collect rewards in a discrete world. While DQN was quick to achieve a perfect score when the positions of the rewards were kept constant on the grid, it failed to generalize to random positioning. Their algorithm, on the other hand, achieved consistent results across the different set-ups of the experiment.
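The Q-learning component at the end of such a pipeline can be illustrated with a minimal tabular example in a one-dimensional grid world. The environment, rewards and hyperparameters here are invented for illustration and are much simpler than the paper’s actual set-up:

```python
import random

random.seed(0)

# Tabular Q-learning in a toy 1-D grid world: the agent starts at cell 0
# and is rewarded for reaching the last cell. Illustrative only.
N_STATES, ACTIONS = 5, (-1, +1)            # actions: move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s, a) towards r + gamma * max_a' Q(s', a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

In the Deep Symbolic RL pipeline, the tabular state above is replaced by symbols extracted from raw pixels by the autoencoder-based front end; the update rule itself is unchanged.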
The second part of the talk focuses on the challenges faced by DL and reviews recent attempts to tackle them. The need for interpretability is a controversial topic that has been widely debated recently; the NIPS 2017 Interpretable ML Symposium is an example of the questions (and emotions) this topic raises. Recent work on disentangled representations (Chen et al., 2016; Higgins et al., 2016) consists of learning interpretable representations of the independent generative factors of the data. Another approach towards better interpretability takes advantage of symbolic representations in DL algorithms, such as relational networks (Santoro et al., 2017). A further focus of improvement is the ability to generalize at the concept level: work on higher-level generalization includes combining disentangled representations with a symbolic description of the environment (Higgins et al., 2017), and meta-learning, a field gaining popularity recently, can by definition also be considered to generalize at a higher level.
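The core computation of a relational network, RN(O) = f(Σᵢⱼ g(oᵢ, oⱼ)), can be sketched in a few lines. In practice g and f are learned MLPs; here random linear maps stand in for them, and all shapes and sizes are invented purely to show the structure of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Relation-network core: RN(O) = f(sum over ordered pairs g(o_i, o_j)).
# W_g and W_f stand in for the learned functions g and f (illustrative).
n_objects, obj_dim, rel_dim = 4, 6, 8
objects = rng.normal(size=(n_objects, obj_dim))
W_g = rng.normal(size=(2 * obj_dim, rel_dim))   # stands in for g
W_f = rng.normal(size=(rel_dim, 1))             # stands in for f

# g is applied to every ordered pair of objects, then summed; the sum
# makes the output invariant to the order in which objects are presented.
pair_sum = sum(
    np.tanh(np.concatenate([objects[i], objects[j]]) @ W_g)
    for i in range(n_objects) for j in range(n_objects)
)
output = pair_sum @ W_f
print(output.shape)  # (1,)
```

The pairwise structure is what injects a relational (symbolic-flavoured) inductive bias into an otherwise standard neural network: permuting the objects leaves the output unchanged.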
Opinions expressed are those of the author and may not be shared by all personnel of Man Group plc (‘Man’). These opinions are subject to change without notice, are for information purposes only and do not constitute an offer or invitation to make an investment in any financial instrument or in any product to which the Company and/or its affiliates provides investment advisory or any other financial services. Any organisations, financial instrument or products described in this material are mentioned for reference purposes only which should not be considered a recommendation for their purchase or sale. Neither the Company nor the authors shall be liable to any person for any action taken on the basis of the information provided. Some statements contained in this material concerning goals, strategies, outlook or other non-historical matters may be forward-looking statements and are based on current indicators and expectations. These forward-looking statements speak only as of the date on which they are made, and the Company undertakes no obligation to update or revise any forward-looking statements. These forward-looking statements are subject to risks and uncertainties that may cause actual results to differ materially from those contained in the statements. The Company and/or its affiliates may or may not have a position in any financial instrument mentioned and may or may not be actively trading in any such securities. This material is proprietary information of the Company and its affiliates and may not be reproduced or otherwise disseminated in whole or in part without prior written consent from the Company. The Company believes the content to be accurate. However accuracy is not warranted or guaranteed. The Company does not assume any liability in the case of incorrectly reported or incomplete information. Unless stated otherwise all information is provided by the Company. Past performance is not indicative of future results.