Dr. Alex Smola — Researcher at Google and Professor at Carnegie Mellon University
Bio: I studied physics at the University of Technology, Munich, at the Università degli Studi di Pavia, and at AT&T Research in Holmdel. During this time I was at the Maximilianeum München and the Collegio Ghislieri in Pavia. In 1996 I received the Master's degree from the University of Technology, Munich, and in 1998 the doctoral degree in computer science from the University of Technology Berlin. Until 1999 I was a researcher at the IDA Group of the GMD Institute for Software Engineering and Computer Architecture in Berlin (now part of the Fraunhofer Gesellschaft). After that, I worked as a researcher and group leader at the Research School for Information Sciences and Engineering of the Australian National University. From 2004 onwards I worked as a Senior Principal Researcher and Program Leader in the Statistical Machine Learning Program at NICTA. From 2008 to 2012 I worked at Yahoo Research. In spring 2012 I moved to Google Research to spend a wonderful year in Mountain View.
Abstract: Collaborative filtering has become a key tool in recommender systems. The Netflix competition was instrumental in this context in spurring the development of scalable tools. At its heart lies the minimization of the Root Mean Square Error (RMSE), which is used to judge the quality of a recommender system. Moreover, minimizing the RMSE comes with desirable guarantees of statistical consistency. In this talk I make the case that RMSE minimization is a poor choice for a number of reasons: firstly, review scores are anything but Gaussian distributed, often exhibiting asymmetry and bimodality. Secondly, in a retrieval setting, accuracy matters primarily for the top-rated items. Finally, such ratings are highly context dependent and should only be considered in interaction with a user. I will show how this can be accomplished easily through relatively minor changes to existing systems.
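To make the first two objections concrete, here is a small illustrative sketch (the ratings and predictors are invented for illustration, not taken from the talk): a predictor that hedges toward the mean of a bimodal rating distribution, and a predictor with excellent RMSE that nonetheless misranks the top item.

```python
import math

def rmse(preds, truth):
    """Root Mean Square Error between predicted and true ratings."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth))

# Bimodal ratings on a 1-5 scale: users either love or hate these items.
truth = [5, 5, 1, 1]
print(rmse([3, 3, 3, 3], truth))  # hedging toward the mean: RMSE = 2.0
print(rmse([4, 5, 2, 1], truth))  # committing to the modes: RMSE ~ 0.71

# Low RMSE can still get the retrieval ordering wrong: here the
# second-best item is (incorrectly) predicted to outrank the best one.
print(rmse([4.4, 4.5], [5, 4]))  # RMSE ~ 0.55, yet the top item is misranked
```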
Dr. Lars Backstrom — Engineering Manager at Facebook
Bio: I received my PhD from Cornell in 2009 and joined Facebook. At Facebook, I worked on friend suggestions until 2011, building the initial backend service that held the entire Facebook social graph in memory, sharded across many machines. In 2011, I switched to working on feed ranking. Over the last three years, improvements to our ranking algorithms and features have dramatically improved the billion personalized newspapers we publish every day.
Abstract: Feed ranking’s goal is to provide our users with over a billion personalized newspapers. We strive to provide the most compelling content to each user, personalized so that each user is most likely to see the content most interesting to them. Carrying on the newspaper analogy, putting the right stories above the fold has always been critical to engaging readers and drawing them into the rest of the paper. In feed ranking, we face a similar challenge, but on a grander scale. Each time a user visits, we need to find the best piece of content out of all the available stories and put it at the top of the feed, where people are most likely to see it. To accomplish this, we do large-scale machine learning to model each user, figure out which friends, pages, and topics they care about, and use whatever signals we can come up with to pick the stories each particular user is interested in. The typical user has well over 1,500 stories available to them each day but only has time to consume a small fraction of those, so it’s important that we separate the best stories from the rest so that no one ever misses an important story.
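As a toy illustration of the ranking step described above (the features, weights, and stories here are hypothetical, not Facebook's actual model), one can think of feed ranking as scoring each candidate story with a learned per-user model and keeping the top few:

```python
def score(weights, features):
    """Dot product of per-user model weights and story features."""
    return sum(w * f for w, f in zip(weights, features))

def rank_feed(stories, weights, k=3):
    """Return the top-k stories by predicted engagement score."""
    return sorted(stories, key=lambda s: score(weights, s["features"]), reverse=True)[:k]

# Hypothetical signals per story: [friend_affinity, page_affinity, topic_match]
weights = [0.6, 0.3, 0.1]  # invented per-user weights for illustration
stories = [
    {"id": "a", "features": [0.9, 0.1, 0.5]},
    {"id": "b", "features": [0.2, 0.8, 0.1]},
    {"id": "c", "features": [0.5, 0.5, 0.9]},
]
top = rank_feed(stories, weights, k=2)  # the two stories shown "above the fold"
```

In practice the score would come from a large-scale learned model rather than a hand-set linear one, but the selection step is the same: score every candidate, then surface the best.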
Dr. Suju Rajan — Senior Manager of Personalization Sciences at Yahoo Labs
Bio: Suju Rajan is a Senior Manager with the Personalization Science team at Yahoo Labs. At Yahoo, she works on personalizing user experiences, measured by the relevance and timeliness of content surfaced to the users. Her research interests are in content enrichment, user modeling and information retrieval and ranking. Dr. Rajan received her PhD from the University of Texas at Austin, focusing on semi-supervised and active learning based classification for dynamic environments.
Abstract: The stream of news on the Yahoo homepage is a personalized feed of media content that appears in a number of modules across the Yahoo network. The success of a personalized stream should be measured not just by per-session user engagement but by whether we are optimizing it for the long term. So, how do we optimize for the long term? In this talk, we present a number of challenges in designing an engaging stream: how we cope with the sparsity of explicit feedback data, how user behavior changes with device context, how we build machine-learned models for each user, and which metric to optimize for.
Dr. Lihong Li — Researcher at Microsoft Research
Bio: Lihong Li is a Researcher in the Machine Learning Group at Microsoft Research-Redmond. Prior to joining Microsoft, he was a Research Scientist in the Machine Learning Group at Yahoo! Research. He obtained a PhD from Rutgers University, an MSc from the University of Alberta, and a BE from Tsinghua University, all in Computer/Computing Science. His main research interests are in machine learning with interaction, including reinforcement learning, multi-armed bandits, online learning, and active learning, and their numerous applications on the Internet, such as recommender systems, search, and advertising. He has published over 60 research papers, won paper awards at ICML, WSDM, and AISTATS, and received a Superstar Team Award at Yahoo!. He has served as area chair or senior program committee member at conferences such as ICML, NIPS, and IJCAI.
Abstract: Originally inspired by problems like clinical trials and resource allocation, multi-armed bandits have recently found novel and important uses on the Internet as a powerful model of the interaction between users and a Web service. In contrast to other, prediction-oriented machine-learning techniques, bandit algorithms aim to directly optimize online user-engagement metrics such as click-through rate and revenue. This setting raises a number of unique issues in algorithm design and offline evaluation. This talk reviews some of these applications in various domains, with a dual focus on algorithms and evaluation, and concludes with lessons and challenges from experiences at Yahoo! and Microsoft.
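As a minimal sketch of the bandit idea (the arm click-through rates and parameters below are invented for illustration), an epsilon-greedy algorithm mostly serves the item with the best empirical CTR while occasionally exploring the alternatives:

```python
import random

def epsilon_greedy(true_ctrs, epsilon=0.1, rounds=10000, seed=0):
    """Minimal epsilon-greedy bandit over simulated arms.

    With probability epsilon, explore a random arm; otherwise exploit the
    arm with the best empirical click-through rate observed so far.
    Returns how many times each arm was served.
    """
    rng = random.Random(seed)
    n = len(true_ctrs)
    pulls = [0] * n
    clicks = [0] * n
    for _ in range(rounds):
        if rng.random() < epsilon or sum(pulls) == 0:
            arm = rng.randrange(n)  # explore
        else:  # exploit the empirically best arm
            arm = max(range(n),
                      key=lambda i: clicks[i] / pulls[i] if pulls[i] else 0.0)
        pulls[arm] += 1
        clicks[arm] += rng.random() < true_ctrs[arm]  # simulated click
    return pulls

pulls = epsilon_greedy([0.02, 0.10, 0.03])
# the arm with the highest true CTR ends up served far more often
```

Note that offline evaluation of such a policy from logged data is nontrivial, since the log only contains feedback for the items the logging policy chose to show; that is one of the evaluation issues the talk addresses.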