The programme overview page lists the time slots for each session. Details of the sessions are in the technical programme (updated 28 Nov).
Thursday: Prof. Sethu Vijayakumar
- Title: Shared Autonomy for Interactive Robotics: The Robots are Ready, Are You? (Slides | Demo video)
- Abstract: The next generation of robots will work much more closely with humans and other robots, and will interact significantly with the environment around them. As a result, the key paradigms are shifting from isolated decision-making systems to shared control, with significant autonomy devolved to the robot platform and end-users in the loop making only high-level decisions. This talk will look at technologies ranging from robust multi-modal sensing, shared representations, and compliant actuation to machine learning techniques for real-time learning and adaptation, which are enabling us to reap the benefits of increased autonomy while still feeling securely in control. This also raises a fundamental question: while the robots are ready to share control, what is the optimal trade-off between autonomy and control that we are comfortable with? Domains where this debate is relevant include self-driving cars, mining, shared manufacturing, exoskeletons for rehabilitation, active prosthetics, large-scale scheduling systems (e.g. transport), and oil and gas exploration, to name a few.
Friday: Dr. Andrew Saxe
- Title: Demystifying depth: Learning dynamics in deep linear neural networks (Slides)
- Abstract: Deep learning methods have swept through machine learning, posting impressive successes from image recognition to playing complicated games like Go. Although they work well, they are often hard to train and understand. I will describe a quantitative theory of deep learning dynamics in a simple model, the deep linear network, which illuminates some of the main trade-offs behind deep learning. How does depth itself, as opposed to other aspects of a learning system such as neural nonlinearities, change the learning problem? How does learning speed scale with depth? Why does unsupervised pretraining speed learning? The theory provides a new intuitive picture of the difficulties of deep learning: saddle points, not local minima, are the major impediment to fast training, and better symmetry-breaking initializations can make standard gradient descent fast. Drawing on these results, I will then turn to deep learning in the brain and mind: anatomically, the brain itself is deep. How might this layered structure influence neural and behavioral plasticity? I will look at aspects of human semantic development by considering how a deep network learns about richly structured environments specified as probabilistic graphical models. This simple scheme illuminates empirical phenomena documented by developmental psychologists, including the progressive differentiation of hierarchical structure; transient illusory correlations that go beyond direct experience; and changing patterns of inductive generalization. Deep linear networks yield a rich account of layered learning, shining light into the "black box" of neural networks, and generating novel hypotheses interlinking computation, neural representations, and behavior.
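To make the "deep linear network" model concrete, here is a minimal NumPy sketch (an illustration for this page, not material from the talk itself) of a two-layer linear network y = W2 W1 x trained by gradient descent to match a target linear map. All names, dimensions, and hyperparameters below are arbitrary choices for illustration; the small random initialization reflects the setting in which the talk's learning-dynamics questions (plateaus, initialization scale) arise.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4                              # input/output dimension (arbitrary)
target = np.diag([1.0, 2.0, 3.0, 0.5])   # well-conditioned target map (illustrative)

# Small random initialization; the theory relates its scale to plateau length.
W1 = 0.1 * rng.standard_normal((d, d))
W2 = 0.1 * rng.standard_normal((d, d))

lr = 0.05
losses = []
for step in range(2000):
    E = W2 @ W1 - target           # error in the *composite* map
    losses.append(0.5 * np.sum(E ** 2))
    # Chain-rule gradients of 0.5 * ||W2 W1 - target||_F^2 w.r.t. each layer
    gW2 = E @ W1.T
    gW1 = W2.T @ E
    W2 -= lr * gW2
    W1 -= lr * gW1

print(f"initial loss {losses[0]:.3f}, final loss {losses[-1]:.2e}")
```

Note that the loss depends on the layers only through their product W2 W1, which is why depth changes the *dynamics* of learning (plateaus near saddle points, speed-ups from good initialization) without changing what functions the network can represent.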