The Department organises the "Distinguished Seminar Series in Computing" once or twice per term. The speakers are distinguished researchers in Computer Science and leaders in their research field.

This is usually measured by the number of citations and flagship projects credited to their name, and by whether they are considered pioneers in their field of expertise. Most speakers are chosen because they can deliver an exciting talk to a wide audience.

For more information about the Distinguished Seminar Series in Computing, please contact Professor Maja Pantic.

Distinguished seminar archive


Professor Dejan Kostic, KTH Royal Institute of Technology: Running NFV Service Chains at the True Speed of the Underlying Hardware

Tuesday 16th July 2019 in Huxley 311.

Speaker: Professor Dejan Kostic, KTH Royal Institute of Technology

Abstract: Following the success of Software-Defined Networking (SDN), Network Functions Virtualization (NFV) is poised to dramatically change the way network services are deployed. NFV advocates running chains of network functions (NFs) implemented as software on top of commodity hardware. The emerging 100-Gbps deployments will soon challenge the packet processing limits of commodity hardware. As an illustration, the available time to process a 64-byte packet at 100 Gbps is only 5 nanoseconds.
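The 5-nanosecond figure quoted above follows directly from the line rate; a quick back-of-the-envelope check (a plain illustrative sketch, with a function name of my own choosing) confirms it:

```python
def packet_budget_ns(packet_bytes: int, link_gbps: float) -> float:
    """Nanoseconds available to process one packet arriving at line rate."""
    bits = packet_bytes * 8
    seconds = bits / (link_gbps * 1e9)
    return seconds * 1e9  # convert seconds to nanoseconds

# A minimum-size 64-byte Ethernet payload at 100 Gbps leaves about 5.12 ns
# per packet, matching the roughly 5 ns cited in the abstract.
print(packet_budget_ns(64, 100))
```

The same function shows why the problem eases for larger packets: a 1500-byte packet at 100 Gbps allows 120 ns, over twenty times the budget.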

In this talk we will present our vision for running NFV service chains at the true speed of the underlying hardware. First, we will introduce SNF, a framework that synthesizes network function service chains by eliminating redundant I/O and repeated elements, while consolidating stateful cross layer packet operations across the chain. SNF uses graph composition and set theory to determine traffic classes handled by a service chain composed of multiple elements. It then synthesizes each traffic class using a minimal set of new elements that apply single-read-single-write and early-discard operations. Second, we will describe Metron, an NFV platform that achieves high resource utilization by jointly exploiting the underlying network and commodity servers' resources. This synergy allows Metron to: (i) offload part of the packet processing logic to the network, (ii) use smart tagging to set up and exploit the affinity of traffic classes, and (iii) use tag-based hardware dispatching to carry out the remaining packet processing at the speed of the servers' fastest cache(s), with zero intercore communication. With commodity hardware assistance, Metron deeply inspects traffic at 40 Gbps and realizes stateful network functions at the speed of a 100 GbE network card on a single server.

We will conclude the talk by presenting our future research directions.

Bio: Dejan Kostic is a Professor of Internetworking at the KTH Royal Institute of Technology, where he is the Head of the Communication Systems Division and the Head of the Network Systems Laboratory. He is also associated with the Decisions, Networks and Analytics (DNA) Laboratory of RISE Research Institutes of Sweden. His research interests include Distributed Systems, Computer Networks, Operating Systems, and Mobile Computing.

Dejan Kostic obtained his Ph.D. in Computer Science at Duke University. He spent the last two years of his studies, and a brief stay as a postdoctoral scholar, at the University of California, San Diego. He received his Master of Science degree in Computer Science from the University of Texas at Dallas, and his Bachelor of Science degree in Computer Engineering and Information Technology from the University of Belgrade (ETF), Serbia. From 2006 until 2012 he worked as a tenure-track Assistant Professor at the School of Computer and Communications Sciences at EPFL (Ecole Polytechnique Federale de Lausanne), Switzerland. In 2010, he received a European Research Council (ERC) Starting Investigator Award. From 2012 until June 2014, he worked at the IMDEA Networks Institute (Madrid, Spain) as a Research Associate Professor with tenure. He has been a Professor of Internetworking at KTH since April 2014. In 2018, he received a European Research Council (ERC) Consolidator Award.


Generalised Universality

Friday 20th April at 15.00, in Huxley 308, followed by a drinks reception in Huxley 418 (DoC common room).

Speaker: Rachid Guerraoui, EPFL


Abstract: The notion of universality in computing is at least as old as Turing. This notion has however been revisited in the context of distributed systems, be they made of geographically distant machines connected through the Internet, or processors of a multi-core architecture connected through a shared memory. In short, universality in a distributed context means that a set of nodes can emulate a highly available, centralized Turing machine, as long as these nodes can be connected through a consensus abstraction through which the nodes agree on common decisions.

The idea is at the heart of the robustness of most data centers today, as well as the celebrated blockchain protocol. Yet, consensus is just a special case of a more general abstraction, set-consensus, where nodes agree on at most k different decisions. Given that this abstraction is often easier to implement than consensus, it is natural to seek a generalization of universality with set-consensus in mind. (The work is joint with Eli Gafni).
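The universal construction the abstract alludes to can be sketched in a few lines: give the nodes a sequence of consensus objects, one per log slot, and have each node adopt the decided command for every slot. The toy Consensus class below is mine, not from the talk; it stands in for a real protocol such as Paxos and models only the agreement property (every proposer learns the same decision):

```python
class Consensus:
    """Toy single-shot consensus object: the first value proposed wins.

    Only agreement is modelled; a real protocol must also tolerate
    failures, message loss, and concurrent proposers.
    """
    def __init__(self):
        self._decided = None

    def propose(self, value):
        if self._decided is None:
            self._decided = value
        return self._decided

# Universal construction sketch: nodes agree slot-by-slot on the next
# command, so every node applies the identical command sequence and
# thereby emulates a single, highly available centralized machine.
slots = [Consensus() for _ in range(3)]

def run_node(my_commands):
    return [slot.propose(cmd) for slot, cmd in zip(slots, my_commands)]

log_a = run_node(["inc", "dbl", "inc"])  # runs first, so its proposals win
log_b = run_node(["dec", "sqr", "dec"])  # loses every slot, adopts the decided log
assert log_a == log_b                    # both nodes hold the same command log
```

Set-consensus relaxes exactly this guarantee: with k-set-consensus, up to k different logs may coexist, which is what makes a generalized notion of universality non-obvious.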

Bio: Rachid Guerraoui is professor of Computer Science at Ecole Polytechnique Fédérale de Lausanne where he leads the Distributed Programming Laboratory. Rachid is a fellow of the ACM and has been awarded an advanced ERC grant and a Google focused award. He has also been affiliated in the past with Hewlett-Packard Laboratories in Palo Alto and the Massachusetts Institute of Technology. He is an associate editor of the Journal of the ACM and has written several books and hundreds of papers on distributed computing.


Structured models for human action recognition

Wednesday 14th June at 17.00, in Huxley, Room 311, followed by a drinks reception in Huxley 418

Speaker: Cordelia Schmid, INRIA

Abstract: In this talk, we present some recent results for human action recognition in videos. We first introduce a pose-based convolutional neural network descriptor for action recognition, which aggregates motion and appearance information along tracks of human body parts.

We also present an approach for extracting such human pose in 2D and 3D. Next, we propose an approach for spatio-temporal action localization, which detects and scores CNN action proposals at both the frame and the tubelet level, and then tracks high-scoring proposals in the video. Actions are localized in time with an LSTM at the track level. Finally, we show how to extend this type of method to weakly supervised learning of actions, which allows scaling to large amounts of data without manual annotation.

Bio: Cordelia Schmid holds a M.S. degree in Computer Science from the University of Karlsruhe and a Doctorate, also in Computer Science, from the Institut National Polytechnique de Grenoble (INPG). Her doctoral thesis received the best thesis award from INPG in 1996.

Dr. Schmid was a post-doctoral research assistant in the Robotics Research Group of Oxford University in 1996--1997. Since 1997 she has held a permanent research position at INRIA Grenoble Rhone-Alpes, where she is a research director and directs an INRIA team. Dr. Schmid is the author of over a hundred technical publications. She has been an Associate Editor for IEEE PAMI (2001--2005) and for IJCV (2004--2012), editor-in-chief for IJCV (since 2013), a program chair of IEEE CVPR 2005 and ECCV 2012, as well as a general chair of IEEE CVPR 2015 and ECCV 2020.

In 2006, 2014 and 2016, she was awarded the Longuet-Higgins prize for fundamental contributions in computer vision that have withstood the test of time. She is a fellow of IEEE. She was awarded an ERC advanced grant in 2013, the Humboldt research award in 2015 and the Inria & French Academy of Science Grand Prix in 2016. She was elected to the German National Academy of Sciences, Leopoldina, in 2017.

Emotion Technology, Wearables, and Surprises

Wednesday 8th March at 17:00, in Huxley, Room 308, followed by a drinks reception in Huxley 418.

Speaker: Rosalind Picard, Massachusetts Institute of Technology

Abstract: Years ago, my students at MIT and I began to create wearable sensors and algorithms for recognizing emotion. We designed studies to elicit emotion, gathered data, and developed signal processing and machine learning methods to see what insights could be reliably obtained, especially for emotions commonly studied in computing such as frustration and stress. In this talk I will highlight several of the most surprising findings during this adventure. These include new insights about the "true smile of happiness," discovering that regular cameras (and your smartphone, even in your handbag) can compute vital signals, finding electrical signals on the wrist that give insight into deep brain activity, and learning surprising implications of wearable sensing for autism, anxiety, depression, sleep, memory consolidation, epilepsy, and more. I'll also describe our next focus: how might these new capabilities help prevent the future #1 disease burden?

Bio: Rosalind W. Picard is founder and director of the Affective Computing Research Group at the MIT Media Laboratory, co-founder of Affectiva, which delivers technology to help measure and communicate emotion, used by over 1/3 of the Global Fortune 100 companies, and co-founder and Chief Scientist of Empatica, improving lives with clinical quality wearable sensors and analytics. Picard is the author of over two hundred peer-reviewed scientific articles. She is known internationally for authoring the book, Affective Computing, which helped launch the field by that name. Picard holds a Bachelor's degree in Electrical Engineering from the Georgia Institute of Technology and Master's and Doctorate degrees in Electrical Engineering and Computer Science from MIT. In 2005 she was named a Fellow of the IEEE for contributions to image and video analysis and affective computing.

Picard is an active inventor with nearly two dozen patents: her group's inventions have been twice named to "top ten" lists, including the New York Times Magazine's Best Ideas of 2006 for their Social Cue Reader, used in autism, and 2011's Popular Science Top Ten Inventions for a Mirror that Monitors Vital Signs. CNN named her in 2015 one of seven “Tech Superheroes to Watch." Picard’s lab at MIT develops technologies to better understand, predict, and regulate emotion, including machine-learning based analytics that work with wearables and smartphones, with applications aimed at helping people with autism, epilepsy, depression/anxiety, migraine, pain, and more.



Modeling Dyadic Phenomena in Intelligent Conversational Agents

Wednesday 18th May at 15:00, in Huxley, Room 308, followed by a drinks reception in Huxley 418

Speaker: Justine Cassell, Carnegie Mellon University

Abstract: In this talk I propose a particular computational sociocultural approach to the study of the so-called "social emotions": intrinsically dyadic states such as rapport, friendship, intimacy, interpersonal closeness. I rely on this approach to describe the surface level observable verbal and nonverbal behaviors that function to evoke, deepen, demonstrate, and destroy these dyadic social emotions. I highlight the need for differentiating the observable behaviors from inferable underlying states by demonstrating how putatively negative visible behaviors may play a positive role in underlying states. Finally, I describe some important roles that these often discounted aspects of human behavior play in learning, commercial transactions, and other facets of day-to-day life. Each step of this talk is illustrated by experiments that involve human-human and human-computer interaction. I include novel approaches to modeling and generating behaviors for human-computer interaction on the basis of the human-human corpora. And finally, lessons are drawn both for the study of human behavior, and the improved design of technologies capable of engaging in interaction with people over the long-term. This talk is accessible to students at all levels, and it is the speaker's wish that undergraduates, as well as postgraduates and faculty, attend.

Biography: Justine Cassell is Associate Dean of Technology Strategy and Impact and Professor in the School of Computer Science at Carnegie Mellon University, and Director Emerita of the Human-Computer Interaction Institute. She co-directs the Yahoo-CMU InMind partnership on the future of personal assistants. Previously Cassell was faculty at Northwestern University where she founded the Technology and Social Behavior Center and Doctoral Program. Before that she was a tenured professor at the MIT Media Lab. Cassell received the MIT Edgerton Prize and the Anita Borg Institute Women of Vision award, in 2011 was named to the World Economic Forum Global Agenda Council on AI and Robotics, in 2012 was named an AAAS fellow, and in 2016 was made a Fellow of the Royal Academy of Scotland. Cassell has spoken at the World Economic Forum in Davos for the past 5 years on topics concerning the impact of new technology on society.




Wednesday 16 December at 15:00, room 144

Speaker: Professor Willy Zwaenepoel, School of Computer and Communication Sciences EPFL

Abstract: Big graphs occur naturally in many applications, most obviously in social networks, but also in many other areas such as biology and forensics. Current approaches to processing large graphs use either supercomputers or very large clusters. In both cases the entire graph must reside in memory before it can be processed. We are pursuing an alternative approach, processing graphs from secondary storage. While this comes with some performance penalty, it makes analytics on very large graphs accessible on a small number of commodity machines. It also has the pleasing property that "if you can store a graph, you can compute on it".

We have developed two systems, one for a single machine and one for a cluster of machines. X-Stream, the single machine solution, aims to make all secondary storage access sequential. It uses two techniques to achieve this goal, edge-centric processing and streaming partitions. X-Stream outperforms the state-of-the-art GraphChi system, because it achieves better sequentiality and because it requires less preprocessing.

Slipstream, the cluster solution, starts from the observation that there is little benefit to locality when accessing data from secondary storage over a high-speed network. As a result, partitioning can be dynamic and can focus on achieving load balance, in combination with sequentiality of secondary storage access. The resulting system achieves good scaling behavior and outperforms the state-of-the-art out-of-core Giraph system. With Slipstream we have also been able to process a trillion-edge graph, a new milestone for graph size on a small cluster.

I will describe both systems and their performance on a number of benchmarks and in comparison to state-of-the-art alternatives. This is joint work with Laurent Bindschaedler (EPFL), Jasmina Malicevic (EPFL) and Amitabha Roy (Intel Labs).
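The edge-centric idea can be illustrated with a small sketch (the code and names here are mine, not X-Stream's API): the compact per-vertex state stays in memory and is accessed randomly, while the far larger edge list is only ever scanned in order, which is exactly the access pattern secondary storage rewards. Connected components via label propagation makes the pattern concrete:

```python
# Edge-centric processing sketch: the edge list is only ever scanned
# sequentially (as if streamed from disk); only the much smaller
# per-vertex state is accessed randomly.

def edge_centric_cc(num_vertices, edges, max_iters=100):
    """Connected components via label propagation over streamed edges."""
    label = list(range(num_vertices))    # vertex state, assumed to fit in memory
    for _ in range(max_iters):
        changed = False
        for src, dst in edges:           # sequential scan over the edge list
            lo = min(label[src], label[dst])
            if label[src] != lo or label[dst] != lo:
                label[src] = label[dst] = lo
                changed = True
        if not changed:                  # converged: no label moved this pass
            break
    return label

# Two components: {0, 1, 2} collapses to label 0, {3, 4} to label 3.
print(edge_centric_cc(5, [(0, 1), (1, 2), (3, 4)]))  # [0, 0, 0, 3, 3]
```

Trading a few extra sequential passes for the elimination of random edge access is the essence of the design choice; streaming partitions, not shown here, keep the randomly accessed vertex state small enough to cache.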

Biography: Prof Willy Zwaenepoel received his BS/MS from the University of Gent, Belgium, and his PhD from Stanford University. He is currently a Professor of Computer Science at EPFL. He has previously held appointments as Professor of Computer Science and Electrical Engineering at Rice University, and as Dean of the School of Computer and Communication Sciences at EPFL.

His interests are in operating systems and distributed systems.

He is a Fellow of the ACM and the IEEE, he has received the IEEE Kanai Award and several best paper awards, and is a member of the Belgian and European Academies. He has also been involved in a number of startups, including BugBuster (acquired by AppDynamics), iMimic (acquired by Ironport/Cisco), Midokura and Nutanix.

On Human-Agent Collectives

Wednesday 20 May at 15.00, room 311

Speaker: Professor Nick Jennings, Chief Scientific Adviser to the UK Government in the area of National Security and the inaugural Regius Professor of Computer Science in Electronics and Computer Science at Southampton University

Abstract: As computation increasingly pervades the world around us, it will profoundly change the ways in which we work with computers. Rather than issuing instructions to passive machines, humans and software agents will continually and flexibly establish a range of collaborative relationships with one another, forming human-agent collectives (HACs) to meet their individual and collective goals.

This vision of people and computational agents operating at a global scale offers tremendous potential and, if realised correctly, will help us meet the key societal challenges of sustainability, inclusion, and safety that are core to our future. To fully realise this vision, we require a principled science that allows us to reason about the computational and human aspects of these systems. In this talk, I will explore the science that is needed to understand, build and apply HACs that symbiotically interleave human and computer systems to an unprecedented degree.

Drawing on multidisciplinary work in the areas of artificial intelligence, agent-based computing, machine learning, decentralised information systems, crowd sourcing, participatory systems, and ubiquitous computing, the talk will connect the science of HACs to real-world applications in the critical domains of the smart grid, disaster response and citizen science.

Biography: Professor Jennings is a Chief Scientific Adviser to the UK Government in the area of National Security and the inaugural Regius Professor of Computer Science in Electronics and Computer Science at Southampton University. He is an internationally-recognized authority in the areas of artificial intelligence, autonomous systems and agent-based computing.

His research covers both the science and the engineering of such systems. He has undertaken fundamental research on automated bargaining, mechanism design, trust and reputation, coalition formation, human-agent collectives and crowd sourcing. He has also pioneered the application of multi-agent technology; developing real-world systems in domains such as business process management, smart energy systems, sensor networks, disaster response, telecommunications, citizen science and defence.

He has published over 500 articles and graduated 40 PhD students. With 59,000 citations and an h-index of 104, he is one of the world’s most highly cited computer scientists. He has received a number of international awards for his research: the Computers and Thought Award (the premier award for a young AI scientist and the first European-based recipient in the Award's 30 year history), the ACM Autonomous Agents Research Award and an IEE Achievement Medal.

He is a Fellow of the Royal Academy of Engineering, the Institute of Electrical and Electronic Engineers, the British Computer Society, the Institution of Engineering and Technology, and the Association for the Advancement of Artificial Intelligence (AAAI).

From Programs to Systems – Building a Smarter World

Wednesday 28 January at 15.00, room 308

Speaker: Prof. Joseph Sifakis, Rigorous System Design Laboratory, Lausanne, Switzerland

Abstract: The focus of computing has been continuously shifting from programs to systems over the past decades. Programs can be represented as relations independent from the physical resources needed for their execution. Their behavior is often terminating, deterministic and platform-independent. In contrast, systems are interactive: they continuously interact with an external environment. Their behavior is driven by stimuli from the environment, which, in turn, is affected by their outputs.

Modern computing systems break with traditional systems, such as desktop computers and servers, in various ways: 1) they are instrumented in order to interact with physical environments; 2) they are interconnected to allow interaction between people and objects in entirely new modes; 3) they must be smart to ensure predictability of events and optimal use of resources. Currently, we lack the theory, methods and tools for building trustworthy systems cost-effectively.

In this talk, I will advocate system design as a formal and accountable process leading from requirements to correct-by-construction implementations. I will also discuss current limitations of the state of the art and call for a coherent scientific foundation of system design based on a three-pronged vision: 1) linking the cyber and the physical worlds; 2) correctness-by-construction; 3) intelligence.

I will conclude with general remarks about the nature of computing and advocate a deeper interaction and cross-fertilization with other more mature scientific disciplines.

Bio: Joseph Sifakis is a computer scientist, laureate of the 2007 Turing Award, along with Edmund M. Clarke and E. Allen Emerson, for his work on model checking.

He studied Electrical Engineering at the National Technical University of Athens and Computer Science at the University of Grenoble. He is the founder of the Verimag laboratory, which he directed for 15 years.

He is a Full Professor at EPFL, Lausanne. His current research interests cover fundamental and applied aspects of embedded systems design. The main focus of his work is on the formalization of system design as a process leading from given requirements to trustworthy, optimized and correct-by-construction implementations.

Joseph Sifakis is a member of the French Academy of Sciences, a member of the French National Academy of Engineering and a member of Academia Europaea. He is a Grand Officer of the French National Order of Merit, a Commander of the French Legion of Honor and a Commander of the Greek Order of the Phoenix. He received the Leonardo da Vinci Medal in 2012. He is the President of the Greek National Council for Research and Technology.


Distinguished Seminar: Journeys Through Creative Computing (Steve Benford, University of Nottingham)

26 November 2014 at 3pm, Clore Lecture Theatre

Speaker: Prof. Steve Benford, University of Nottingham

Abstract: I have been working with artists and performers for over fifteen years to create, tour and study interactive artworks as a way of inspiring new techniques in human-computer interaction. My talk will draw on examples of this work, ranging from games that mix online participants with those on-the-streets, to the design of amusement rides that deliver thrilling experiences, to a musical instrument that captures and retells its life story. I will draw on these various examples to reveal how interactive creative experiences typically involve extended journeys through hybrid physical and digital realms. Consequently, I will introduce a conceptual framework based on trajectories to guide their design and analysis. Finally, I will draw on my own personal journey over the past fifteen years to reflect on the wider methodological opportunities and challenges of engaging artists in computing research.

Bio: Steve Benford is Professor of Collaborative Computing in the Mixed Reality Laboratory at the University of Nottingham. He is currently an EPSRC Dream Fellow and Director of the EPSRC-funded ‘Horizon: My Life in Data’ CDT. He was the first Visiting Professor at the BBC in 2012 and a Visiting Researcher at Microsoft Research in 2013. He has received best paper awards at the ACM’s annual Computer-Human Interaction (CHI) conference in 2005, 2009, 2011 and 2012. He also won the 2003 Prix Ars Electronica for Interactive Art, the 2007 Nokia Mindtrek award for Innovative Applications of Ubiquitous Computing, and has received four BAFTA nominations. He was elected to the CHI Academy in 2012. His book Performing Mixed Reality was published by MIT Press in 2011.

Distinguished Seminar: Machines that (Learn to) See (Andrew Blake, Microsoft Research)

26 March 2014 at 2.30pm, room LT308

Speaker: Prof. Andrew Blake, Microsoft Research

Abstract: The world of computer science and artificial intelligence can indulge in a bit of cautious celebration. There are several examples of machines that have the gift of sight, even if to a degree that is primitive on the scale of human or animal abilities. Machines can: navigate using vision; separate object from background; recognise a variety of objects, including the limbs of the human body. These abilities are great spin-offs in their own right, but are also part of an extended adventure in understanding the nature of intelligence.

One question is whether intelligent systems will turn out to depend more on theories and models, or simply on largely amorphous networks trained on data at ever greater scale? In vision systems this often boils down to the choice between two paradigms: analysis-by-synthesis versus empirical recognisers. Each approach has its strengths, and one can speculate about how deeply the two approaches may eventually be integrated.

Bio: Andrew Blake is the Laboratory Director of Microsoft Research Cambridge, England. He joined Microsoft in 1999 as a Senior Researcher to found the Computer Vision group. In 2008 he became a Deputy Laboratory Director at the lab, before assuming his current position in 2010.

Andrew graduated from Trinity College, Cambridge in 1977 with a BA in Mathematics and Electrical Sciences. After a year as a Kennedy Scholar at MIT and two years in the defence electronics industry, he studied for a doctorate at the University of Edinburgh, which was awarded in 1983.

He was on the Computer Science faculty at the University of Edinburgh from 1983-87 and then joined the faculty of the Department of Engineering Science at the University of Oxford, where he became a Professor in 1996. He held a Royal Society Senior Research Fellowship from 1998-1999. Andrew has been a visiting Professor of Engineering with the University of Oxford and was appointed honorary Professor of Machine Intelligence at the University of Cambridge in 2007.

He was elected Fellow of the Royal Academy of Engineering in 1998, Fellow of the IEEE in 2008, and Fellow of the Royal Society in 2005. In 2006 the Royal Academy of Engineering awarded him its Silver Medal and in 2007 the Institution of Engineering and Technology presented him with the Mountbatten Medal (previously awarded to computer pioneers Maurice Wilkes and Tim Berners-Lee, amongst others). In 2010, Andrew was elected to the council of the Royal Society.

In 2011, he and colleagues at Microsoft Research received the Royal Academy of Engineering MacRobert Award for their machine learning contribution to Microsoft Kinect human motion-capture. In 2012 Andrew was elected to the board of the EPSRC and also received an honorary degree of Doctor of Science from the University of Edinburgh. In 2013 Andrew was awarded an honorary degree of Doctor of Engineering from the University of Sheffield. In 2014, Andrew gave the prestigious Gibbs lecture at the Joint Mathematics Meetings.

Andrew has published several books including "Visual Reconstruction" with A. Zisserman (MIT press), "Active Vision" with Alan Yuille (MIT Press) and "Active Contours" with Michael Isard (Springer-Verlag). He researches the probabilistic principles of computer vision software, with applications to motion capture, user interface, image editing, remote collaboration and medical imaging.

Distinguished Seminar: On Gait and Soft Biometrics - Recognition by the way you walk and by the way people describe you

January 29, 2014 at 2pm, room LT308

Speaker: Prof. Mark Nixon, University of Southampton

Abstract: The prime advantage of gait as a biometric is that it can be used for recognition at a distance whereas other biometrics cannot. There is a rich selection of approaches and many advances have been made, as will be reviewed in this talk. Soft biometrics is an emerging area of interest in biometrics where we augment computer vision derived measures by human descriptions. Applied to gait biometrics, this again can be used where other biometric data is obscured or at too low a resolution. The human descriptions are semantic and are a set of labels which are converted into numbers.

Naturally, there are considerations of language and psychology when the labels are collected. After describing current progress in gait biometrics, this talk will describe how the soft biometrics labels are collected, and how they can be used to enhance recognising people by the way they walk. As well as reinforcing biometrics, this approach might lead to a new procedure for collecting witness statements, and to the ability to retrieve subjects from video using witness statements.

Biography: Mark Nixon is Professor in Computer Vision at the University of Southampton, UK. His research interests are in image processing and computer vision. His team develops new techniques for static and moving shape extraction which have found application in automatic face and automatic gait recognition and in medical image analysis. His team were early workers in face recognition, later came to pioneer gait recognition, then joined the pioneers of ear biometrics, and more recently started work on the new area of soft biometrics.

Amongst research contracts, he was Principal Investigator with John Carter on the DARPA supported project Automatic Gait Recognition for Human ID at a Distance; he was previously with the FP7 Scovis project and is currently with the EU-funded Tabula Rasa project. Mark has published many papers in peer reviewed journals, conference proceedings and technical books. His vision textbook with Alberto Aguado, Feature Extraction and Image Processing (Academic Press), reached its 3rd edition in 2012 and has become a standard text in computer vision.

With T. Tan and R. Chellappa, their 2005 book Human ID based on Gait is part of the Springer Series on Biometrics. He has chaired/program chaired BMVC 98, AVBPA 03, IEEE Face and Gesture FG06, ICPR 04, ICB 09, IEEE BTAS 2010, and given many invited talks.


Distinguished Seminar: Internet Challenges for the 21st Century

November 27, 2013 at 3pm, room LT308

Speaker: Vinton (Vint) G. Cerf


Abstract: The Internet continues to expand, to support new applications and to excite public and government concerns over abuse of this infrastructure. This talk will explore technical directions for further Internet evolution and the implications of the Internet of Things, the information explosion and the extension of the Internet to operate across the solar system.


Bio: Vinton (Vint) G. Cerf is vice president and Chief Internet Evangelist for Google. He is responsible for identifying new enabling technologies and applications on the Internet and other platforms for the company.

Widely known as a "Father of the Internet," Vint is the co-designer with Robert Kahn of TCP/IP protocols and basic architecture of the Internet. In 1997, President Clinton recognized their work with the U.S. National Medal of Technology. In 2005, Vint and Bob received the highest civilian honor bestowed in the U.S., the Presidential Medal of Freedom. It recognizes the fact that their work on the software code used to transmit data across the Internet has put them "at the forefront of a digital revolution that has transformed global commerce, communication, and entertainment."

From 1994-2005, Vint served as Senior Vice President at MCI. Prior to that, he was Vice President of the Corporation for National Research Initiatives (CNRI), and from 1982-86 he served as Vice President of MCI. During his tenure with the U.S. Department of Defense's Advanced Research Projects Agency (DARPA) from 1976-1982, Vint played a key role leading the development of Internet and Internet-related data packet and security technologies.

Since 2000, Vint has served as chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) and he has been a Visiting Scientist at the Jet Propulsion Laboratory since 1998. He served as founding president of the Internet Society (ISOC) from 1992-1995 and was on the ISOC board until 2000. Vint is a Fellow of the IEEE, ACM, AAAS, the American Academy of Arts and Sciences, the International Engineering Consortium, the Computer History Museum and the National Academy of Engineering. Vint currently serves as President of the Association for Computing Machinery (ACM).

Vint has received numerous awards and commendations in connection with his work on the Internet, including the Marconi Fellowship, the Charles Stark Draper award of the National Academy of Engineering, the Prince of Asturias award for science and technology, the Alexander Graham Bell Award presented by the Alexander Graham Bell Association for the Deaf, the A.M. Turing Award from the Association for Computing Machinery, the Silver Medal of the International Telecommunications Union, and the IEEE Alexander Graham Bell Medal, among many others.

He holds a Ph.D. in Computer Science from UCLA and more than a dozen honorary degrees.

Unifying logic and probability: A "New Dawn" for Artificial Intelligence?

October 9, 2013 at 2pm in Huxley room 308

Speaker: Stuart Russell


Logic and probability are ancient subjects whose unification holds significant potential for the field of artificial intelligence. A recent cover article in New Scientist went so far as to announce an "Intelligence Revolution" and a "New Dawn for AI". This talk will explain some of the underlying technical ideas and their application.

Speaker's bio:

Stuart Russell is a Professor of Computer Science at UC Berkeley and Adjunct Professor of Neurosurgery at UC San Francisco. He is currently a visiting professor at the Universite Pierre et Marie Curie and holds the Chaire Blaise Pascal. His research covers many aspects of artificial intelligence and machine learning. He is a fellow of AAAI, ACM, and AAAS and winner of the IJCAI Computers and Thought Award.

His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in the field.


*-aware Software for Cyber Physical Systems

March 13, 2013 at 3pm in Huxley room 311

Speaker: John A. Stankovic

Abstract: Many exciting and next generation Cyber Physical Systems (CPS) will be based on wireless sensor networks as an underlying infrastructure. Many CPSs such as home health care, HVAC control, vehicular networks, the grid, and home security will operate in open environments and co-exist. The interactions in and across these systems will further enable new and important applications. While deploying CPSs in open and uncontrolled environments provides many benefits, it also gives rise to greater complexity, uncertainty, non-determinism, and privacy and security issues than found in today's embedded systems. New approaches to developing software for these systems are required. This talk includes ideas and (partial) solutions for creating physically-aware, validation-aware, real-time-aware, and privacy-aware software as key solutions for such new methodologies. The fundamental principle underlying these solutions is the need to develop a scientific and systematic approach for dealing with the impact of the physical on the cyber.

Bio: Professor John A. Stankovic is the BP America Professor in the Computer Science Department at the University of Virginia. He served as Chair of the department, completing two terms (8 years). He is a Fellow of both the IEEE and the ACM. He won the IEEE Real-Time Systems Technical Committee's Award for Outstanding Technical Contributions and Leadership. He also won the IEEE Distributed Processing Technical Committee's Award for Distinguished Achievement (inaugural winner). He has won five best paper awards in wireless sensor networks research. He is highly cited (h-index 93) and has presented many invited keynotes and distinguished lectures.

Professor Stankovic also served on the Board of Directors of the Computer Research Association for 9 years. Currently, he serves on the National Academy’s Computer Science and Telecommunications Board. Recently, he won the University of Virginia, School of Engineering Distinguished Faculty Award. Before joining the University of Virginia, Professor Stankovic taught at the University of Massachusetts where he won an outstanding scholar award.

He was the Editor-in-Chief of the IEEE Transactions on Parallel and Distributed Systems and was a founder and co-editor-in-chief of the Real-Time Systems Journal. His research interests are in wireless sensor networks, cyber physical systems, distributed computing, and real-time systems. Prof. Stankovic received his PhD from Brown University.

Distinguished Seminar: Effectively-Propositional Reasoning about Reachability in Linked Data Structures

February 6, 2013 at 3.30pm, in Huxley room 341.

Speaker: Mooly Sagiv

Abstract: We propose a novel method of harnessing existing SAT solvers to verify reachability properties of programs that manipulate linked-list data structures. Such properties are essential for proving program termination, correctness of data structure invariants, and other safety properties. Our solution is complete, i.e., a SAT solver produces a counterexample whenever a program does not satisfy its specification.

This result is surprising since even first-order theorem provers usually cannot deal with reachability in a complete way, because doing so requires reasoning about transitive closure. Our result is based on the following ideas: (1) Programmers must write assertions in a restricted logic without quantifier alternation or function symbols. (2) The correctness of many programs can be expressed in such restricted logics, although we explain the tradeoffs. (3) Recent results in descriptive complexity can be utilized to show that every program that manipulates potentially cyclic, singly- and doubly-linked lists and that is annotated with assertions written in this restricted logic can be verified with a SAT solver. We implemented a tool atop Z3 and used it to verify the correctness of such programs.

This is joint work with Shachar Itzhaky (TAU), Anindya Banerjee (IMDEA), Neil Immerman (UMass) and Aleksandar Nanevski (IMDEA).
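The completeness obstacle here is transitive closure: whether one heap node reaches another is not expressible in plain first-order logic over the `next` relation. A toy Python sketch (not the paper's EPR encoding; the heap and property below are invented) of the kind of fact being verified, namely that in-place list reversal preserves the set of reachable nodes:

```python
# Reachability in a linked-list heap is the transitive closure of the
# `next` relation; verifiers must reason about this closure symbolically,
# but here we simply compute it on a concrete heap.

def reachable(next_ptr, src):
    """Set of nodes reachable from src via next pointers (cycle-safe)."""
    seen = set()
    node = src
    while node is not None and node not in seen:
        seen.add(node)
        node = next_ptr.get(node)
    return seen

# Heap: a -> b -> c -> None
heap = {"a": "b", "b": "c", "c": None}
assert reachable(heap, "a") == {"a", "b", "c"}

def reverse(next_ptr, head):
    """In-place list reversal; returns the new next relation and head."""
    prev, node = None, head
    nxt = dict(next_ptr)
    while node is not None:
        nxt[node], prev, node = prev, node, nxt[node]
    return nxt, prev

# The safety property: reversal must not lose or leak any node.
rev, new_head = reverse(heap, "a")
assert reachable(rev, new_head) == {"a", "b", "c"}
```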

Bio: Prof. Mooly Sagiv is a senior member of staff in the Computer Sciences Department, School of Mathematical Sciences, Tel Aviv University. A leading scientist in large-scale (inter-procedural) program analysis, his fields of interest include programming languages, compilers, abstract interpretation, profiling, pointer analysis, shape analysis, inter-procedural dataflow analysis, program slicing, and language-based programming environments.


Data: Making it be there when you want it, and making it disappear when you want it gone

January 16, 2013, at 3.30 pm.

Speaker: Radia Perlman

Abstract: This talk describes a design that provides data storage with high availability, protection against unauthorized disclosure, and the ability to create data with an expiration date, such that after the expiration date it is unreadable, even if backups of the data still exist. The obvious approach, of course, is to encrypt the data, and then destroy keys at the appropriate times. But that still leaves the problem of managing the keys.

To ensure availability before expiration, the keys must be backed up in multiple places, but if there are enough backup copies of the keys to assure availability of unexpired keys, it will be difficult to assure that backups with unexpired keys are all destroyed. This talk presents a design that simultaneously solves both problems; it allows making arbitrarily many copies of all of the state of the file system (for high availability), and yet, once data expires it is impossible to recover, even though an old backup can still be found. This design is simple, easy to manage, and has minimal performance overhead.
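The key-management idea can be sketched in a few lines. This is a deliberately toy illustration, not the design from the talk: the XOR "cipher", key sizes, and names are all invented, and a real system would use authenticated encryption.

```python
# Sketch: every file key is wrapped with a per-expiration-date "class
# key" held by the key manager. Backups may copy ciphertexts and wrapped
# keys freely; destroying the single class key at expiry makes every
# copy unreadable at once.
import os, hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only -- not secure.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

class_keys = {"2013-12-31": os.urandom(16)}   # held only by the key manager

file_key = os.urandom(16)
ciphertext = xor_stream(file_key, b"quarterly report")
wrapped = xor_stream(class_keys["2013-12-31"], file_key)  # stored with backups

# Before expiry: unwrap the file key, then decrypt.
fk = xor_stream(class_keys["2013-12-31"], wrapped)
assert xor_stream(fk, ciphertext) == b"quarterly report"

# At expiry the manager destroys only the class key; the arbitrarily
# many backup copies of `ciphertext` and `wrapped` become unreadable.
del class_keys["2013-12-31"]
```

Note that availability and deletion decouple: ciphertexts and wrapped keys can be replicated everywhere, while only the small set of class keys needs careful (and eventually destructive) management.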

Bio: Radia Perlman is a Fellow at Intel Labs, specializing in network protocols and security protocols. Many of the technologies she designed have been deployed in the Internet for decades, including the IS-IS routing protocol, and the spanning tree algorithm that has been the heart of Ethernet. More recently she invented the concept of TRILL, which improves upon spanning tree while still "being Ethernet".

She has also made contributions to network security, including assured delete of data, design of the authentication handshake of IPSec (IKEv2), trust models for PKI, and network infrastructure robust against malicious trusted components. She is the author of the textbook “Interconnections: Bridges, Routers, Switches, and Internetworking Protocols”, and coauthor of “Network Security”. She has a PhD from MIT in computer science, and has received various industry awards including lifetime achievement awards from ACM’s SIGCOMM and Usenix, and an honorary doctorate from KTH.


Building Brains

November 28, 2012, at 1.00 pm, in Clore Lecture Theatre (Huxley Building, 180 Queen's Gate), followed by a drinks reception in DoC Common Room.

Speaker: Steve Furber

Abstract: When his concept of the universal computing machine finally became an engineering reality, Alan Turing speculated on the prospects for such machines to emulate human thinking. Although computers now routinely perform impressive feats of logic and analysis such as searching the vast complexities of the global internet for information in a second or two, they have progressed much more slowly than Turing anticipated towards achieving normal human levels of intelligent behaviour, or perhaps "common sense". Why is this?

Perhaps the answer lies in the fact that the principles of information processing in the brain are still far from understood. But progress in computer technology means that we can now realistically contemplate building computer models of the brain that can be used to probe these principles much more readily than is feasible, or ethical, with a living biological brain.

Bio: Steve Furber CBE FRS FREng is the ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester. He received his B.A. degree in Mathematics in 1974 and his Ph.D. in Aerodynamics in 1980 from the University of Cambridge, England. From 1980 to 1990 he worked in the hardware development group within the R&D department at Acorn Computer Ltd, and was a principal designer both of the BBC Microcomputer, which introduced computing into most UK schools, and of the ARM 32-bit RISC microprocessor, which today powers much of the world's mobile consumer electronics including mobile phones and tablet computers.

At Manchester he leads research into many-core computing, particularly as applied to the problem of supporting computer models of large-scale brain subsystems. His vision is both to accelerate understanding of how the brain processes information, and to use that understanding to engineer more effective computer systems.

Steve is a Fellow of the Royal Society, the Royal Academy of Engineering, the British Computer Society, the Institution of Engineering and Technology and the IEEE, and a member of Academia Europaea. His awards include a Royal Academy of Engineering Silver Medal, a Royal Society Wolfson Research Merit Award, the IET Faraday Medal, a CBE, and an Honorary DSc from the University of Edinburgh. He was a Laureate for the 2010 Millennium Technology Prize awarded by the Technology Academy of Finland, and was made a Fellow Award honoree of the Computer History Museum (Mountain View, CA) in 2012.

Video Enhancement and Analysis: From Content Analysis to Video Stabilization for YouTube

October 04, 2012, at 3.00 pm, in room 311 (Huxley Building, 180 Queen's Gate), followed by a drinks reception in DoC Common Room.

Speaker: Irfan Essa

Abstract: The talk will describe a variety of efforts on the analysis, enhancement, and synthesis of video. An overview of past work on representing and analyzing videos as a stochastic process, and its use in the form of Video Textures, will be provided. The majority of the talk will then focus on a recent effort which resulted in a widely used video stabilizer (currently deployed on YouTube) and its extensions.

This method generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To this end, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path.

Our method allows for video stabilization beyond the conventional filtering that only suppresses high-frequency jitter. An additional challenge in videos shot on mobile phones is rolling shutter distortion. We demonstrate a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. This work is in collaboration with Matthias Grundmann and Vivek Kwatra at Google.
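The shape of the LP objective described above can be sketched as follows. The weights and the sample paths are hypothetical, and we only evaluate the objective on candidate paths rather than solve the linear program:

```python
# The stabilizer minimizes a weighted L1 norm of the first, second, and
# third finite differences of the camera path p, keeping p close to the
# shaky input. Constant / linear / parabolic segments make those
# differences sparse, which the L1 norm rewards.

def diffs(p, order):
    for _ in range(order):
        p = [b - a for a, b in zip(p, p[1:])]
    return p

def l1_objective(p, w1=10.0, w2=1.0, w3=100.0):   # hypothetical weights
    return sum(w * sum(abs(d) for d in diffs(p, k))
               for k, w in ((1, w1), (2, w2), (3, w3)))

shaky  = [0.0, 1.2, 0.4, 1.6, 0.9, 2.1, 1.3, 2.4]        # jittery x-offsets
smooth = [0.0, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1]        # one linear segment

# The smooth path's higher differences vanish, so its objective is far
# lower than the jittery input's; the LP searches for such a path.
assert l1_objective(smooth) < l1_objective(shaky)
```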

Bio: Irfan Essa is a Professor in the School of Interactive Computing (iC) of the College of Computing (CoC), and Adjunct Professor in the School of Electrical and Computer Engineering, Georgia Institute of Technology (GA Tech), in Atlanta, Georgia, USA. He works in the areas of Computer Vision, Computer Graphics, Computational Perception, Robotics, Computer Animation, Machine Learning, and Social Computing, with potential impact on Video Analysis and Production (e.g., Computational Photography & Video, Image-based Modeling and Rendering), Human Computer Interaction, and Artificial Intelligence research.

His specific research interests are in Video Analysis & Synthesis, and Activity & Behavior Recognition. He also works in the new area of Computational Journalism. Specifically, he is interested in the analysis, interpretation, authoring, and synthesis (of video), with the goals of building aware environments & supporting healthy living, recognizing & modeling human behaviors, empowering humans to effectively interact with each other, with media & with technologies, and developing dynamic & generative representations of time-varying streams. He has published over 150 scholarly articles in leading journals and conference venues on these topics. He has received numerous awards, including the NSF CAREER award, and is currently an IEEE Fellow.


Playing Games with Games

December 14, 2011, at 3.00 pm, in room 308 (Huxley Building, 180 Queen's Gate), followed by a drinks reception in DoC Common Room and then the DoC Christmas Party in room 344

Speaker: Michael Wooldridge

Abstract: The past decade has been witness to a huge explosion of interest in the computational aspects of game theory. One topic that has received much attention is that of mechanism design. Crudely, mechanism design can be understood as the problem of designing games so that, if every player in the game acts rationally, certain desirable outcomes will result. In mechanism design, it is usually assumed that the designer of the mechanism has complete freedom to design a mechanism as desired. But this is not the reality of most real-world mechanism design problems: when a Government develops a new law, for example, they do not usually have a blank slate, but must start from the framework of society as it exists.

In this talk, I will present work we have done on the computational aspects of such "mechanism design for legacy systems". In the settings we will consider here, a principal external to a system must try to engineer a mechanism to influence the system so that certain desirable outcomes will result from the rational action of agents within the system. We focus specifically on the possibility of imposing taxation schemes upon a system, so that the preferences of participants are perturbed in such a way that they collectively and rationally choose socially desirable outcomes. The specific framework within which we express these ideas is a framework known as Boolean games.

We discuss the computational complexity of the "implementation" problem for Boolean games, and derive a formal characterisation of feasible implementation problems for such games. If time permits, we will discuss extensions to the framework that take it much closer to contemporary CAV models, and will briefly survey some of the problems that arise in these richer settings, which will be studied in a 5-year ERC Advanced Grant, awarded in October 2011.
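A minimal, invented Boolean game illustrates the taxation idea: each player controls one propositional variable, utilities come from goal formulas minus taxes, and a suitable tax scheme changes which outcomes are pure Nash equilibria (this is a sketch of the concept, not an example from the talk):

```python
# Toy Boolean game: player 1 controls x, player 2 controls y.
# Utility = 1 if the player's goal formula holds, minus any tax levied
# on the chosen assignment. A principal who wants the outcome
# (x=True, y=True) can tax the alternatives.
from itertools import product

goals = {1: lambda x, y: x,        # player 1 wants x = True
         2: lambda x, y: not y}    # player 2 prefers y = False

def equilibria(tax):
    """Pure Nash equilibria: no player gains by flipping own variable."""
    def util(p, x, y):
        return int(goals[p](x, y)) - tax.get((p, x, y), 0.0)
    eqs = []
    for x, y in product([False, True], repeat=2):
        if util(1, x, y) >= util(1, not x, y) and \
           util(2, x, y) >= util(2, x, not y):
            eqs.append((x, y))
    return eqs

# Without taxes, rational play leaves y = False.
assert equilibria({}) == [(True, False)]

# Tax player 2 for choosing y = False: (True, True) becomes the unique
# equilibrium -- the principal has "implemented" the desired outcome.
tax = {(2, x, False): 2.0 for x in (False, True)}
assert equilibria(tax) == [(True, True)]
```

The implementation problem studied in the talk asks, in effect, whether such a tax dictionary exists for a given target outcome, and how hard it is to find.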

Bio: Michael Wooldridge is a Professor in the Department of Computer Science at the University of Liverpool, UK. He has been active in multi-agent systems research since 1989, and has published over two hundred articles in the area. His main interests are in the use of formal methods for reasoning about autonomous agents and multi-agent systems. Wooldridge was the recipient of the ACM Autonomous Agents Research Award in 2006. He is an associate editor of the journals ''Artificial Intelligence'' and ''Journal of AI Research (JAIR)''. His introductory textbook ''An Introduction to Multiagent Systems'' was published by Wiley in 2002 (Chinese translation 2003; Greek translation 2008; second edition 2009). In October 2011, he was awarded a 5-year ERC Advanced Grant entitled "RACE -- Reasoning about Computational Economies".


The ‘New Deal on Data’: Making Health, Financial, Logistics, and Transportation Work

June 7, 2011, at 2.30 pm, in Clore Lecture Theatre (Huxley Building, 180 Queen's Gate), followed by a drinks reception in DoC Common Room

Speaker: Alex (Sandy) Pentland

Abstract: Most of the functions of our society are based on networks designed during the late 1800s and modelled after centralized water systems. The rapid spread of ubiquitous networks, and of connected sensors such as those contained in smartphones and cars, allows these networks to be reinvented as much more active and reactive control networks, at the scale of the individual, the family, the enterprise, the city and the nation. This will fundamentally transform the economics of health, finance, logistics, and transportation. One key challenge is access to personal data at scale to enable these systems to function more efficiently. In discussions with key CEOs, regulators, and NGOs at the World Economic Forum, we have constructed a 'new deal on data' that can allow personal data to emerge as an accessible asset class that provides strong protection for individuals. The talk will cover a range of prototype systems and experiments developed at MIT, and outline some of the challenges and growth opportunities that these dramatic trends present.

Short Bio of the Speaker: Alex (Sandy) Pentland directs MIT’s Human Dynamics Laboratory and the MIT Media Lab Entrepreneurship Program, and advises the World Economic Forum, Nissan Motor Corporation, and a variety of start-up firms. He has previously helped create and direct MIT’s Media Laboratory, the Media Lab Asia laboratories at the Indian Institutes of Technology, and Strong Hospital’s Center for Future Health. Profiles of Sandy have appeared in many publications, including the New York Times, Forbes, and Harvard Business Review. Sandy is among the most-cited computational scientists in the world, and a pioneer in computational social science, organizational engineering, mobile computing, image understanding, and modern biometrics. His research has been featured in Nature, Science, the World Economic Forum, and Harvard Business Review, as well as being the focus of TV features including Nova and Scientific American Frontiers.

Enriched Spoken Language Processing

April 12, 2011, at 3.00 pm, in room 308 (Huxley Building, 180 Queen's Gate), followed by a drinks reception in DoC Common Room

Speaker: Shrikanth (Shri) Narayanan

Abstract: The human speech signal is unique in the sense that it carries crucial information not only about communication intent and speaker identity but also about underlying expressions and emotions. Automatically processing and decoding spoken language is hence a vastly challenging, and inherently interdisciplinary, multifaceted endeavor. Recent technological approaches that have leveraged judicious use of both data and knowledge have yielded significant advances in this regard, beyond merely extracting underlying lexical information using automatic speech-to-text transcription, especially in terms of deriving rich information about prosody, discourse, and affect. This talk will focus on some of the advances and open challenges in creating algorithms for machine processing of spoken language, including their applications in areas such as enriched speech translation and behavioral informatics.

Bio: Shrikanth (Shri) Narayanan is the Andrew J. Viterbi Professor of Engineering at the University of Southern California (USC), where he holds appointments as Professor of Electrical Engineering, Computer Science, Linguistics and Psychology, and as Director of the USC Ming Hsieh Institute. Prior to USC he was with AT&T Bell Labs and AT&T Research. His research focuses on human-centered information processing and communication technologies. Shri Narayanan is a Fellow of the Acoustical Society of America, IEEE, and the American Association for the Advancement of Science (AAAS).

He is an Editor for the Computer Speech and Language Journal, and an Associate Editor for the IEEE Transactions on Multimedia, IEEE Transactions on Affective Computing and the Journal of the Acoustical Society of America. He is a recipient of several awards, including Best Paper awards from the IEEE Signal Processing Society in 2005 (with Alex Potamianos) and 2009 (with Chul Min Lee), and selection as a Distinguished Lecturer of the IEEE Signal Processing Society for 2010-11. He has published over 400 papers and holds eight patents.



Nonparametric Bayesian Modelling

November 17, 2010, at 3.00 pm, in room 308 (Huxley Building, 180 Queen's Gate), followed by a drinks reception in DoC Common Room

Speaker: Zoubin Ghahramani

Abstract: Because uncertainty, data, and inference play a fundamental role in the design of systems that learn, probabilistic modelling has become one of the cornerstones of the field of machine learning. Once a probabilistic model is defined, Bayesian statistics (which used to be called "inverse probability") can be used to make inferences and predictions from the model. Bayesian methods also elucidate how probabilities can be used to coherently represent degrees of belief in a rational artificial agent. Bayesian methods work best when they are applied to models that are flexible enough to capture the complexity of real-world data.

Recent work on non-parametric Bayesian machine learning provides this flexibility. I will touch upon key developments in the field, including Gaussian processes, Dirichlet processes, and the Indian buffet process (IBP). Focusing on the IBP, I will describe how this can be used in a number of applications such as collaborative filtering, bioinformatics, cognitive modelling, independent components analysis, time series modelling, and causal discovery. Finally, I will outline the main challenges in the field: how to develop new models, new fast inference algorithms, and compelling applications.
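The IBP's standard "restaurant" construction can be sketched directly; alpha and the number of customers below are arbitrary illustrative choices:

```python
# Generative sketch of the Indian buffet process: customer i samples
# each previously-tried dish k with probability m_k / i (where m_k is
# the number of earlier customers who tried it), then tries
# Poisson(alpha / i) brand-new dishes. The result is a binary feature
# matrix with an unbounded, data-driven number of columns.
import math, random

def sample_ibp(n_customers, alpha, rng):
    counts = []                      # m_k: how many customers tried dish k
    rows = []
    for i in range(1, n_customers + 1):
        row = [int(rng.random() < m / i) for m in counts]
        for k, bit in enumerate(row):
            counts[k] += bit
        # Poisson(alpha / i) new dishes, via Knuth's inversion method
        new, p, L = 0, 1.0, math.exp(-alpha / i)
        while True:
            p *= rng.random()
            if p <= L:
                break
            new += 1
        row += [1] * new
        counts += [1] * new
        rows.append(row)
    width = len(counts)              # pad earlier rows to the final width
    return [r + [0] * (width - len(r)) for r in rows]

Z = sample_ibp(10, 2.0, random.Random(0))
assert len(Z) == 10
assert all(bit in (0, 1) for row in Z for bit in row)
```

In applications such as collaborative filtering, each row is an object and each column a latent feature, with the number of features inferred rather than fixed in advance.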

Bio: Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, UK, and is also Associate Research Professor of Machine Learning at Carnegie Mellon University, USA. His current research focus is on Bayesian approaches to statistical machine learning, with applications to bioinformatics, econometrics, and information retrieval. He has served on the editorial boards of several leading journals in the field, including JMLR, JAIR, Annals of Statistics, Machine Learning, and Bayesian Analysis.

He is Associate Editor in Chief of IEEE Transactions on Pattern Analysis and Machine Intelligence, currently the IEEE's highest impact journal. He also serves on the Board of the International Machine Learning Society, and as Program Chair (2007) and General Chair (2011) of the International Conference on Machine Learning.


Automatically Reducing Energy Consumption, Improving Performance, and Tolerating Failures With Good Quality of Service

March 15, 2010, at 3.00 pm, in room 308 (Huxley Building, 180 Queen's Gate), followed by a drinks reception in DoC Common Room

Speaker: Martin Rinard

Abstract: Reducing energy consumption, improving performance, and tolerating failures are important goals in modern computing systems. We present two techniques for satisfying these goals. The first technique, loop perforation, finds the most time-consuming loops, then transforms the loops to execute fewer iterations. Our results show that this technique can reduce the computational resources required to execute the application by a factor of two to three (enabling corresponding improvements in energy consumption, performance, and fault tolerance) while delivering good quality of service.
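As an illustration of the transformation (invented here, not taken from the talk), perforating a simple averaging loop by a factor of two halves the work while keeping the output within a small error bound:

```python
# Minimal sketch of loop perforation: transform a time-consuming loop
# to execute only every k-th iteration, trading a bounded loss in
# output quality for a roughly k-fold reduction in work.

def mean_exact(samples):
    return sum(samples) / len(samples)

def mean_perforated(samples, k=2):
    kept = samples[::k]              # skip k-1 of every k iterations
    return sum(kept) / len(kept)

samples = [float(i % 7) for i in range(10_000)]
exact = mean_exact(samples)
approx = mean_perforated(samples, k=2)

# Half the iterations, yet the result stays within the quality bound.
assert abs(approx - exact) < 0.01
```

A real perforation system applies this mechanically to the hottest loops and then checks the transformed program against a quality-of-service specification, discarding perforations that degrade output too much.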

The second technique, goal-directed parallelization, executes the most time-consuming loops in parallel, then (guided by memory profiling information) adds synchronization and replication as necessary to eliminate bottlenecks and enable the application to produce accurate output. Our results show that this approach makes it possible to effectively parallelize challenging applications without the use of complex static analysis.

Because traditional program transformations operate in the absence of any specification of acceptable program behavior, the transformed program must produce the identical result as the original program. In contrast, the two techniques presented in this talk exploit the availability of quality of service specifications to apply much more aggressive transformations that may change the result that the program produces (as long as the result satisfies the specified quality of service requirements). The success of these two techniques demonstrates the advantages of this approach.

Bio: Martin Rinard is a Professor in the MIT Department of Electrical Engineering and Computer Science and a member of the MIT Computer Science and Artificial Intelligence Laboratory. His research interests include parallel and distributed computing, programming languages, program analysis, program verification, software engineering, and computer systems. Much of his current research focuses on techniques that enable software systems to survive otherwise fatal errors or anomalies.

Results in this area include acceptability-oriented computing (a framework for ensuring that software systems satisfy basic acceptability properties), failure-oblivious computing (a technique for enabling programs to execute successfully through otherwise fatal memory addressing errors), and a technique for providing probabilistic bounds on the accuracy of program outputs in the presence of failures. Professor Rinard is a Fellow of ACM and holds many awards including the Most Influential Paper in 20 Years Award in the area of Concurrent Constraint Programming (awarded by The Association for Logic Programming in 2004).



MINIX 3: A Reliable and Secure Operating System

May 26, 2009, at 4.00 pm, in the Clore Lecture Theatre (Huxley Building), followed by a drinks reception in room 217/218

Speaker: Prof. Andrew S. Tanenbaum

Abstract: Most computer users nowadays are nontechnical people and have a mental model of what they expect from a computer based on their experience with TV sets and stereos: you buy it, plug it in, and it works perfectly for the next 10 years. Unfortunately, they are often disappointed, as computers are not very reliable when measured against the standards of other consumer electronics devices. A large part of the problem is the operating system, which often comprises millions of lines of kernel code, each of which can potentially bring the system down.

The worst offenders are the device drivers, which have been shown to have bug rates three to seven times higher than the rest of the system. As long as we maintain the current structure of the operating system as a huge single monolithic program full of foreign code and running in kernel mode, the situation will only get worse. While there have been ad hoc attempts to patch legacy systems, what is needed is a different approach.

In an attempt to provide much higher reliability, we have created a new multiserver operating system with only 5,000 lines of code in the kernel and the rest of the operating system split up into small components, each running as a separate user-mode process. For example, each device driver runs as a separate process and is rigidly controlled by the kernel to give it the absolute minimum amount of power, preventing bugs in it from damaging other system components.

A reincarnation server periodically tests each user-mode component and automatically replaces failed or failing components on the fly, without bringing the system down and in some cases without affecting user processes. The talk will discuss the architecture of this system, called MINIX 3. The system can be downloaded for free.
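A toy, in-process sketch of the reincarnation-server idea (real MINIX 3 components are separate OS processes communicating by message passing; the class and driver names here are invented):

```python
# A supervisor periodically pings each driver component and
# transparently restarts any that has failed, so a crashed driver
# never takes the system down with it.

class Driver:
    def __init__(self, name):
        self.name, self.alive = name, True
    def ping(self):
        return self.alive

class ReincarnationServer:
    def __init__(self, factories):
        self.factories = factories                   # name -> constructor
        self.drivers = {n: f(n) for n, f in factories.items()}
        self.restarts = 0
    def sweep(self):
        for name, drv in list(self.drivers.items()):
            if not drv.ping():                       # failed or hung driver
                self.drivers[name] = self.factories[name](name)
                self.restarts += 1                   # replaced on the fly

rs = ReincarnationServer({"disk": Driver, "net": Driver})
rs.drivers["disk"].alive = False                     # simulate a crash
rs.sweep()
assert rs.restarts == 1 and rs.drivers["disk"].ping()
```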

Bio: Andrew S. Tanenbaum was born in New York City and raised in White Plains, NY. He has an S.B. from M.I.T. and a Ph.D. from the University of California at Berkeley. He is currently a Professor of Computer Science at the Vrije Universiteit in Amsterdam. Prof. Tanenbaum is the principal designer of three operating systems: TSS-11, Amoeba, and MINIX. TSS-11 was an early system for the PDP-11. Amoeba is a distributed operating system for SUN, VAX, and similar workstation computers. MINIX is a small operating system designed for high reliability and embedded applications as well as for teaching.

Prof. Tanenbaum is the author or coauthor of five books: "Distributed Systems, 2/e" (2006, with Maarten van Steen); "Modern Operating Systems, 3/e" (2007); "Structured Computer Organization, 5/e" (2006); "Operating Systems: Design and Implementation, 3/e" (2006, with Albert S. Woodhull); and "Computer Networks, 4/e" (2003). These books have been translated into over 20 languages and are used all over the world. Prof. Tanenbaum has also published more than 140 refereed papers on a variety of subjects and has lectured in a dozen countries on many topics.

Prof. Tanenbaum is a Fellow of the ACM, a Fellow of the IEEE, and a member of the Netherlands Royal Academy of Arts and Sciences. In 1994 he was the recipient of the ACM Karl V. Karlstrom Outstanding Educator Award. In 1997 he won the ACM SIGCSE Award for Outstanding Contributions to Computer Science. In 2007 he won the IEEE James H. Mulligan, Jr Education Medal. In 2008, he received a prestigious ERC Advanced Grant of 2.5 million euro for research on dependable systems.


Security-Aware Computer Architecture

February 5, 2009, at 4.00 pm, in room 308 (Huxley Building), followed by a drinks reception in room 217/218

Speaker: Prof. Ruby B. Lee

Abstract: Security has not been a primary goal in the design of computers in the past few decades. The importance of interconnected computers to modern society and the escalating number of security violations in cyberspace suggest that it is time to rethink the basic architecture of computer systems. Current hardware, software and computing paradigm trends, such as multicore chips, virtualization techniques and cloud computing, also provide opportunities to design security into the core architecture, rather than adding it on as an afterthought. In this talk, we explore a minimalist hardware-software architecture that can enhance applications' security, even in the presence of commodity operating systems with security vulnerabilities.

We define concepts like "hardware trust anchors" and explore minimal trust chains and resilient runtime attestation. We also assert that security will only be ubiquitous in commodity devices if it does not compromise performance, cost, usability and power consumption. We show that design for security can sometimes even improve performance - an unexpected result since secure systems have typically degraded performance. As an example, we show our new security-aware cache architecture mitigates information leakage (due to software side-channel attacks) while simultaneously improving performance over traditional cache architectures.
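One ingredient behind such leakage-resistant caches is randomizing the address-to-set mapping, so that an attacker observing which cache set was evicted cannot invert that observation back to the victim's address bits. A toy sketch of that idea (an illustration of the general principle, not Prof. Lee's exact design):

```python
# A conventional cache maps addr -> addr % N_SETS, so an observed set
# directly reveals low address bits. A secret per-process permutation
# of set indices keeps the cache functional (each address still maps
# to exactly one set) while breaking that correspondence.
import random

N_SETS = 8

def fixed_set(addr):
    return addr % N_SETS                 # conventional mapping: leaks bits

def make_randomized_mapper(rng):
    perm = list(range(N_SETS))
    rng.shuffle(perm)                    # secret per-process permutation
    return lambda addr: perm[addr % N_SETS]

mapper = make_randomized_mapper(random.Random(42))

victim_addr = 0x1A3
observed_set = mapper(victim_addr)
# Without the secret permutation, observed_set no longer determines
# addr % N_SETS; with it, the mapping stays consistent and total.
assert mapper(victim_addr) == observed_set
assert sorted(mapper(a) for a in range(N_SETS)) == list(range(N_SETS))
```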

Short Bio of the Speaker: Ruby B. Lee is the Forrest G. Hamrick Professor of Engineering and Professor of Electrical Engineering at Princeton University, with an affiliated appointment in the Computer Science Department. She is the director of the Princeton Architecture Laboratory for Multimedia and Security (PALMS). Her current research is in designing security-aware architecture for computer and communications systems, verifying the security properties of new architectures, protecting critical information, preventing Internet-scale epidemics, and exploring ubiquitous parallelism and new media. She is a Fellow of the ACM, Fellow of the IEEE, Associate Editor-in-Chief of IEEE Micro, and Advisory board member of IEEE Spectrum.

Prior to joining the Princeton faculty in 1998, Dr. Lee served as Chief Architect at Hewlett-Packard, responsible at different times for processor architecture, multimedia architecture and security architecture. She was a key architect of the PA-RISC architecture used in HP workstations and servers. She pioneered adding multimedia instructions to microprocessors, and facilitated ubiquitous multimedia on commodity HP platforms.

She co-led an Intel-HP architecture team designing a new Instruction-Set Architecture for 64-bit Intel microprocessors. Simultaneously with her full-time HP tenure, she was also Consulting Professor of Electrical Engineering at Stanford University. She has a Ph.D. in Electrical Engineering and an M.S. in Computer Science, both from Stanford University, and an A.B. with distinction from Cornell University, where she was a College Scholar.

She has been granted over 120 United States and international patents, and has authored numerous conference and journal papers on secure computing, computer architecture, processor design, and multimedia topics. She has received various awards for her work, including the Best Paper Award at the 2006 IEEE International Conference on Application-Specific Systems, Architectures and Processors, and the IBM Faculty award for the importance of her work to industry.

Efficient Linear Programming, Duality and MRFs in Computer Vision and Medical Image Analysis

January 15, 2009, at 4.00 pm, in room 145 (Huxley Building), followed by a drinks reception in room 217/218

Speaker: Prof. Dr. Nikos Paragios

Abstract: Mathematical visual perception aims to understand the environment through the estimation of some model parameters. Such a process involves the definition of the model, the association of the model parameters with the available observations, and the estimation of the optimal parameters through inference. MRFs are a powerful mathematical model that allows explicit modeling of a large number of vision tasks through partial interactions between their degrees of freedom.

Such models can be associated with very efficient optimization techniques that provide solid guarantees on the quality of the obtained solution (modulo the complexity of the model). In this talk, we will present recent developments in the field of efficient linear programming using the primal-dual principle towards solving generic MRFs, and some applications in medical imaging and computer vision, in particular knowledge-based segmentation and deformable registration.
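To make the objective concrete, the sketch below writes down the energy of a tiny pairwise MRF (a three-node chain, binary labels, a Potts smoothness term) and minimizes it by brute-force enumeration. The potentials and the weight are invented for illustration; the primal-dual LP techniques discussed in the talk tackle the same minimization at the scale of real images, where enumeration is infeasible.

```python
import itertools

# Toy pairwise MRF on a 3-node chain with binary labels.
# Exact brute force stands in for the efficient LP/primal-dual
# solvers discussed in the talk; all values are hypothetical.
unary = [
    [1.0, 0.0],   # node 0 prefers label 1
    [0.0, 1.0],   # node 1 prefers label 0
    [0.5, 0.5],   # node 2 has no preference
]
edges = [(0, 1), (1, 2)]
lam = 0.3  # Potts smoothness weight (made-up value)


def energy(labels):
    """Sum of unary potentials plus a penalty per disagreeing edge."""
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(lam for (i, j) in edges if labels[i] != labels[j])
    return e


# Exhaustive inference: feasible only because the model is tiny.
best = min(itertools.product([0, 1], repeat=3), key=energy)
```

Here the data terms and the smoothness term pull in opposite directions, and the minimizer trades one edge disagreement against the unary preferences, which is exactly the tension the LP relaxation resolves at scale.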

Bio: Nikos Paragios is a professor at the Ecole Centrale de Paris - one of the most selective French engineering schools ("Grandes Ecoles") - where he leads the Medical Imaging and Computer Vision Group at the Applied Mathematics Department. He is also affiliated with INRIA Saclay Ile-de-France, the French research institute in informatics and control, heading the GALEN group, a joint ECP/INRIA research team.

Prior to that he was a professor/research scientist (2004-2005) at the Ecole Nationale des Ponts et Chaussees, and before that was affiliated with Siemens Corporate Research (Princeton, NJ, 1999-2004) as a project manager, senior research scientist and research scientist. In 2002 he was an adjunct professor at Rutgers University, and in 2004 at New York University. He was a visiting professor at Yale University in 2007. Professor Paragios has co-edited four books, published more than a hundred papers (DBLP server) in the most prestigious journals and conferences of medical imaging and computer vision, and holds twelve issued US patents with more than twenty pending.

His work has approximately 3,250 citations on Google Scholar, with an h-index of 29. Professor Paragios is a Senior Member of the IEEE, an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), an area editor of the Computer Vision and Image Understanding journal (CVIU), and a member of the Editorial Boards of the International Journal of Computer Vision (IJCV), the Medical Image Analysis journal (MedIA) and the Journal of Mathematical Imaging and Vision (JMIV).

Professor Paragios is one of the program chairs of the 11th European Conference on Computer Vision (ECCV'10, Heraklion, Crete). His research interests include image processing, computer vision, medical image analysis and human-computer interaction.


A Morphable Model for Reconstructing 3D Faces from Images or Partial Scans

November 14, 2008, at 4.00 pm, in Clore lecture theatre (Huxley Building), followed by a drinks reception in room 344

Speaker: Prof. Dr. Volker Blanz

Abstract: Capturing both the variations and the common features found among human faces, 3D Morphable Models of faces have a wide range of applications in Computer Vision and Graphics. Morphable Models represent shapes and surface colors (textures) as vectors such that any linear combination of individual shapes and textures is a realistic human face.

The talk summarizes how the 3D Morphable Model solves the underconstrained problem of 3D surface reconstruction from a single image, and how it can be used to animate faces in images.

The second part presents a new face recognition algorithm based on fitting the model to 3D scans. The algorithm relies on shape and color simultaneously, and compensates for variations in pose and lighting.
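The linearity at the heart of the model can be sketched in a few lines: given a mean shape and a basis of shape components, any coefficient vector yields a face, and blending two faces is the same as blending their coefficients. The dimensions and the random "components" below are placeholders for illustration, not a real trained model.

```python
import numpy as np

# Toy 3D Morphable Model: a face shape is the mean shape plus a
# linear combination of principal shape components. All dimensions
# and values are made up; a real model is learned from 3D scans.
rng = np.random.default_rng(0)
n_vertices, n_components = 5, 2

mean_shape = rng.standard_normal(3 * n_vertices)          # stacked (x, y, z)
components = rng.standard_normal((3 * n_vertices, n_components))


def synthesize(alpha):
    """Map shape coefficients alpha to a face shape vector."""
    return mean_shape + components @ np.asarray(alpha)


# Linear combinations of model faces stay inside the model:
a = synthesize([1.0, 0.0])
b = synthesize([0.0, 1.0])
blend = 0.5 * a + 0.5 * b   # identical to synthesize([0.5, 0.5])
```

This closure under linear combination is what makes fitting tractable: reconstruction from an image or a partial scan reduces to searching for the coefficient vector whose synthesized face best explains the data.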

Bio: Volker Blanz studied physics in Tuebingen, Germany, and received his PhD for his work on 3D face reconstruction from single images, which he developed at the MPI for Biological Cybernetics, Tuebingen. He was a visiting researcher with AT&T Bell Labs and MIT, and worked as a research assistant at the University of Freiburg and the MPI Saarbruecken. Since 2005, he has held a faculty position at the University of Siegen, Germany. His work is focused on human faces and teeth, combining methods from Computer Vision, Graphics and Machine Learning.
