
Colloquia

Spring 2014 Colloquia

Below is the list of talks in the computer science seminar series. Unless otherwise noted, the seminars meet on Fridays at 3:00 p.m. in Stanley Thomas 302. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv.

January 24

Work Practice Simulation of Complex Human-Automation Systems: The Brahms Generalized Überlingen Model

Bill Clancey Florida Institute for Human and Machine Cognition

Abstract: The Brahms Generalized Überlingen Model (Brahms-GÜM) was developed at NASA Ames within the Assurance for Flight Critical Systems technical theme as a design verification and validation methodology for assessing aviation safety. The human-centered design approach involves a detailed computer simulation of work practices that includes people interacting with flight-critical systems. Brahms-GÜM was developed by analyzing and generalizing the roles, systems, and events in the Überlingen 2002 accident, a scenario that can be simulated as a particular configuration of the model. Simulation experiments varying assumptions about aircraft flights and system malfunctions revealed the time-sensitive interactions among TCAS, the pilots, and the air traffic controller (ATCO), and in particular how a routinely complicated situation became cognitively complex for the ATCO. Brahms-GÜM demonstrates the strength of the framework for simulating asynchronous (or loosely coupled), distributed processes in which the sequence of behavioral interactions can become mutually constrained and unpredictable. The simulation generates metrics that can be compared to observational data and/or used to make predictions for redesign experiments. Brahms-GÜM can be adapted for other accident investigations, for identifying possible failures in proposed highly integrated systems, and for developing recovery strategies and procedures for system malfunctions.

About the Speaker: Dr. William J. Clancey is a Senior Research Scientist at the Florida Institute for Human and Machine Cognition; he was previously on assignment to NASA Ames Research Center as Chief Scientist for Human-Centered Computing, Intelligent Systems Division (1998-2013). He received his PhD in Computer Science from Stanford University and his BA in Mathematical Sciences from Rice University. A founding member of the Institute for Research on Learning (1987-1997), he was lead inventor of Brahms, a work systems design tool based on simulating work practice. Clancey has extensive experience in developing AI applications to medicine, education, robotics, and spaceflight systems (including OCAMS, recipient of the NASA JSC Exceptional Software Award). He is a Fellow of the Association for Psychological Science, the Association for the Advancement of Artificial Intelligence, and the American College of Medical Informatics. He has published seven books (including Situated Cognition: On Human Knowledge and Computer Representations and Working on Mars: Voyages of Scientific Discovery with the Mars Exploration Rovers, recipient of the American Institute of Aeronautics and Astronautics 2014 Gardner-Lasser Aerospace History Literature Award), and has presented invited lectures in over 20 countries.

February 3

Machine Learning in Evidence-Based Medicine: Taming the Clinical Data Deluge

Byron Wallace Brown University


This event will be held on Monday, 2/3/2014, at 3:00 p.m. in Boggs Center, Room 122. Please note the special weekday and venue for this event.

Abstract: An unprecedented volume of biomedical evidence is being published today. Indeed, PubMed (a search engine for biomedical literature) now indexes more than 600,000 publications describing human clinical trials and upwards of 22 million articles in total. This volume of literature imposes a substantial burden on practitioners of Evidence-Based Medicine (EBM), which now informs all levels of healthcare. Systematic reviews are the cornerstone of EBM. They address a well-formulated clinical question by synthesizing the totality of the available relevant evidence. To realize this aim, researchers must painstakingly identify the few tens of relevant articles among the hundreds of thousands of published clinical trials. Further exacerbating the situation, the cost of overlooking relevant articles is high: it is imperative that all relevant evidence is included in a synthesis, else the validity of the review is compromised. As reviews have become more complex and the literature base has exploded in volume, the evidence identification step has consumed an increasingly unsustainable amount of time. It is not uncommon for researchers to read tens of thousands of abstracts for a single review. If we are to realize the promise of EBM (i.e., inform patient care with the best available evidence), we must develop computational methods to optimize the systematic review process.

To this end, I will present novel data mining and machine learning methods that aim to semi-automate the process of relevant literature discovery for EBM. These methods address the thorny properties inherent to the systematic review scenario (and indeed, to many tasks in health informatics). Specifically, these include: class imbalance and asymmetric costs; expensive and highly skilled domain experts with limited time; and multiple annotators of varying skill and price. In this talk I will address these issues in turn. In particular, I will present new perspectives on class imbalance, novel methods for exploiting dual supervision (i.e., labels on both instances and features), and new active learning techniques that address issues inherent to real-world applications (e.g., exploiting multiple experts in tandem). I will present results demonstrating that these methods can halve the workload involved in identifying relevant literature for systematic reviews, without sacrificing comprehensiveness. Finally, I will conclude by highlighting emerging and future work on automating next steps in the systematic review pipeline, and data mining methods for making sense of biomedical data more generally.
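For readers who want a concrete feel for two of these ingredients, here is a minimal sketch of class-weighted screening plus uncertainty-based active learning. It is not Wallace's system; the tiny corpus, the labels, and the scikit-learn model choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder corpus and labels; real reviews screen tens of thousands
# of abstracts, with very few relevant ones (heavy class imbalance).
abstracts = [
    "randomized trial of drug A in adults",
    "in vitro study of an unrelated compound",
    "cohort study of drug A safety",
    "survey of clinic staffing levels",
]
labels = np.array([1, 0, 1, 0])  # 1 = relevant, 0 = irrelevant

X = TfidfVectorizer().fit_transform(abstracts)

# class_weight='balanced' raises the penalty for missing the rare
# relevant class: one simple answer to asymmetric misclassification costs.
clf = LogisticRegression(class_weight="balanced").fit(X, labels)

# Uncertainty sampling: route the abstracts the model is least sure about
# to the human expert first, spending scarce annotation effort where it helps.
probs = clf.predict_proba(X)[:, 1]
query_order = np.argsort(np.abs(probs - 0.5))  # closest to 0.5 first
print([abstracts[i] for i in query_order])
```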

About the Speaker: Byron Wallace is an assistant research professor in the Department of Health Services, Policy & Practice at Brown University; he is also affiliated with the Brown Laboratory for Linguistic Processing (BLLIP) in the Department of Computer Science. His research is in data mining/machine learning and natural language processing, with an emphasis on applications in health. Before moving to Brown, he completed his PhD in Computer Science at Tufts under the supervision of Carla Brodley. He was selected as the runner-up for the 2013 ACM SIGKDD Doctoral Dissertation Award, and he was awarded the Tufts Outstanding Graduate Researcher at the Doctoral Level award in 2012 for his thesis work.

February 5

Autonomous Cars for City Traffic

Raul Rojas Gonzalez Freie Universität Berlin


This event will be held on Wednesday, 2/5/2014, at 4:00 p.m. in Boggs Center, Room 239. Please note the special weekday and venue for this event.

Abstract: We have been developing autonomous cars in Berlin since 2007. Our vehicle "MadeInGermany" has been driving in the city since 2011, covering stretches of up to 40 km (highway and streets) fully automatically.

In this talk I will present the hardware and software architecture of our vehicles. The main challenge is to safely detect, in real time, all obstacles and cars in a dynamic environment, and also to provide adequate control commands. The sensor architecture still relies heavily on laser scanners and radars, but we are migrating towards a computer-vision-based approach using our own stereoscopic cameras. I will show some videos of the car driving in Berlin, Texas, and Mexico City, and will discuss the problems associated with a car able to navigate any conceivable city environment. I will comment on the current timetable of German car manufacturers for the introduction of autonomous cars.
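As a rough illustration of the stereoscopic approach mentioned above, the sketch below computes a disparity map with OpenCV's block matcher. The image file names are placeholders, and this is of course not the Berlin group's production pipeline.

```python
import cv2

# Placeholder file names for a rectified stereo pair from a calibrated rig.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, the horizontal shift (disparity)
# between the two views; nearby obstacles produce large disparities.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# With a calibrated rig, depth = focal_length * baseline / disparity,
# which turns a camera pair into an obstacle-distance sensor.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```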

About the Speaker: Raul Rojas Gonzalez is Professor of Computer Science and Mathematics at the Free University of Berlin and a renowned specialist in artificial neural networks.

He is now leading an autonomous car project called Spirit of Berlin. He and his team were awarded the Wolfgang von Kempelen Prize for his work on Konrad Zuse and the history of computers. His current research and teaching revolves around artificial intelligence and its applications. The soccer playing robots he helped build won world championships in 2004 and 2005. In 2009 the Mexican government created the Raul Rojas Gonzalez Prize for scientific achievement by Mexican citizens. He holds degrees in mathematics and economics.

February 7

Can You Hide in an Internet Panopticon?

Bryan Ford Yale University

Abstract: Many people have legitimate needs to avoid their online activities being tracked and linked to their real-world identities, from citizens of authoritarian regimes to victims of domestic abuse and law enforcement officers investigating organized crime. Current state-of-the-art anonymous communication systems are based on onion routing, an approach effective against localized adversaries with a limited ability to monitor or tamper with network traffic. In an environment of increasingly powerful and all-seeing state-level adversaries, however, onion routing is showing cracks, and may not offer reliable security for much longer. All current anonymity systems are vulnerable in varying degrees to five major classes of attacks: global passive traffic analysis, active attacks, "denial-of-security" or DoSec attacks, intersection attacks, and software exploits.

The Dissent project is developing a next-generation anonymity system representing a ground-up redesign of current approaches. Dissent is the first anonymous communication architecture incorporating systematic protection against the five major vulnerability classes above. By switching from onion routing to alternate anonymity primitives offering provable resistance to traffic analysis, Dissent makes anonymity possible even against an adversary who can monitor most, or all, network communication. A collective control plane ensures that a group of anonymous users behave indistinguishably even if an adversary interferes actively, such as by delaying messages or forcing users offline. Protocol-level accountability enables groups to identify and expel misbehaving nodes, preserving availability and preventing adversaries from using denial-of-service attacks to weaken anonymity. The system computes anonymity metrics that give users realistic indicators of anonymity, even against adversaries capable of long-term intersection and statistical disclosure attacks, and gives users control over tradeoffs between anonymity loss and communication responsiveness. Finally, virtual machine isolation offers anonymity protection against browser software exploits of the kind recently employed to de-anonymize Tor users. While Dissent is still a proof-of-concept prototype with important functionality and performance limitations, preliminary evidence suggests that it may in principle be possible, though by no means easy, to hide in an Internet panopticon.
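One of the alternate primitives behind Dissent is the dining-cryptographers network (DC-net), which resists traffic analysis because every member transmits in every round. The toy sketch below shows a single one-bit DC-net round; it omits collision handling, accountability, and everything else that makes Dissent practical.

```python
import secrets

def dc_net_round(n_members, sender, message_bit):
    """One toy DC-net round: returns the message without revealing the sender."""
    # Every pair of members shares a fresh random secret bit.
    pair_keys = {(i, j): secrets.randbits(1)
                 for i in range(n_members) for j in range(i + 1, n_members)}
    outputs = []
    for i in range(n_members):
        # Each member broadcasts the XOR of all shared keys it holds...
        bit = 0
        for (a, b), key in pair_keys.items():
            if i in (a, b):
                bit ^= key
        # ...and the sender additionally XORs in the message bit.
        if i == sender:
            bit ^= message_bit
        outputs.append(bit)
    # Each shared key appears in exactly two outputs and cancels, so the
    # XOR of all outputs is the message, yet every member's transmission
    # looks like uniform noise: there is no traffic pattern to analyze.
    result = 0
    for bit in outputs:
        result ^= bit
    return result

assert dc_net_round(5, sender=2, message_bit=1) == 1
```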

About the Speaker: Bryan Ford is an Assistant Professor of Computer Science at Yale University. He does research in operating systems, networking, and virtualization, and dabbles in storage systems, programming languages, and formal methods. His goal is to create a new, fully decentralized ("peer-to-peer") paradigm for applications distributed across personal devices and Internet services, built from novel OS abstractions such as personal groups, structured streams, and lightweight sandboxing.


February 10

Test-Driven Development: Evaluating & Influencing the Development Process

Kevin Buffardi Virginia Tech

This event will be held on Monday, 2/10/2014, at 3:00 p.m. in Boggs Center, Room 122. Please note the special weekday and venue for this event.

Abstract: Software testing can objectively verify that code behaves as expected. Consequently, testing is vital to software development. In particular, Test-Driven Development (TDD) is a popular approach in industry that involves incremental testing throughout the development process. While computer science students could benefit from learning such professional skills, evidence shows that many programmers are reluctant to change their development process by adopting TDD. Over several years of teaching fundamental programming courses, we studied how different approaches to testing affected code quality. We also developed an adaptive feedback system that reinforces good testing practices by automatically customizing feedback and rewards based on how individual students test and develop programming assignments. In this talk, I will discuss the adaptive feedback system's impact on students' testing and outline plans for continuing this research.
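For readers unfamiliar with TDD's test-first rhythm, here is a minimal, invented example: the test class is written before `median` exists, fails, and then drives just enough implementation to pass.

```python
import unittest

def median(values):
    # Written after the tests below, which is the TDD ordering:
    # the failing tests drive this implementation into existence.
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

class TestMedian(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

if __name__ == "__main__":
    unittest.main()
```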


About the Speaker: After earning his Bachelor's in Computer Science and Master's in Human-Computer Interaction, Kevin Buffardi worked as a User Experience Specialist and consulted for a variety of products from mobile phones to medical devices. He then returned to complete his Ph.D. at Virginia Tech, where he won awards for teaching and published research on software testing and adaptive feedback eLearning systems.

February 13

P versus NP: Which Problems Computers Can and Cannot Solve

Anastasia Kurdia Bucknell University

This event will be held on Thursday, 2/13/2014, at 3:30 p.m. in Boggs Center, Room 239. Please note the special weekday, time, and venue for this event.

Abstract: In this talk I will introduce the audience to the "P equals NP?" question, the major unsolved problem in computer science. We will examine the meaning of the question and how it affects anyone who writes computer programs. We will encounter several problems that have very simple formulations and that cannot currently be solved by computers. (As an example, consider the task of finding the largest group of friends on a social network in which every person is a friend of every other person. A program for finding such a group may run for years, even on the fastest modern computer, and it's common to view such a problem as practically unsolvable.) We will also discuss how resolving the "P equals NP?" question would affect the security of credit card transactions on the Internet.
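The friend-group example in the abstract is the maximum-clique problem, a classic NP-hard problem. The sketch below (with an invented four-person network) shows why brute force fails at scale: it examines every subset of people, and the number of subsets doubles with each person added.

```python
from itertools import combinations

def largest_clique(people, friends):
    """friends: set of frozenset({a, b}) pairs who know each other."""
    for size in range(len(people), 0, -1):       # try the biggest groups first
        for group in combinations(people, size): # exponentially many subsets
            if all(frozenset(pair) in friends
                   for pair in combinations(group, 2)):
                return group                     # every pair is connected
    return ()

people = ["ann", "bo", "cy", "di"]
friends = {frozenset(p) for p in [("ann", "bo"), ("bo", "cy"), ("ann", "cy")]}
print(largest_clique(people, friends))  # ('ann', 'bo', 'cy')
```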

About the Speaker: Anastasia Kurdia is a Visiting Assistant Professor of Computer Science at Bucknell University, where she enjoys teaching introductory computer science courses. Before joining Bucknell in 2012, she taught at Connecticut College as a full-time adjunct faculty, and prior to that, she was a postdoctoral researcher at Smith College and studied combinatorial properties of protein structures. She received a Ph.D. degree in Computer Science from The University of Texas at Dallas in 2010. Her graduate work focused on geometric algorithms and their application to molecular biology. Anastasia's current research and personal work aims to make computer science education more effective, more inclusive and more fun.

February 14

What Will a Companionable Computational Agent Be Like?

Yorick Wilks Florida Institute for Human and Machine Cognition

Abstract: The talk begins by looking at the state of the art in modeling realistic conversation with computers over the last 40 years. I then move on to ask what we would want in a conversational agent designed for a long-term relationship with a user, rather than the carrying out of a single brief task, like buying a railway ticket. Such an agent I shall call “companionable”: I shall distinguish several functions for such agents, but the feature they share will be that, in some definable sense, a computer Companion knows a great deal about its owner and can use that information. By way of illustration, the talk describes the functionality and system modules of a Senior Companion (SC), one of two initial prototypes built in the first two years of the large-scale EC Companions project. The SC provides a multimodal interface for eliciting and retrieving personal information from the elderly user through a conversation about their photographs. The Companion, through conversation, elicits life memories, often prompted by discussion of their photographs. The demonstration is primitive but plausible, and one of its key features is an ability to break out of the standard AI constraint of very limited pre-programmed knowledge worlds into the wider, unbounded world of knowledge on the Internet by capturing web knowledge in real time, again by Information Extraction methods. The talk finally discusses the prospects for machine learning in the conversational modeling field and progress to date on incorporating notions of emotion into AI systems. An outline is given of a current research project for a Companion for cognitively-damaged veterans.


About the Speaker: Yorick Wilks is Professor of Artificial Intelligence at the University of Sheffield, and is also a Senior Research Fellow at the Oxford Internet Institute at Balliol College. He studied math and philosophy at Cambridge, was a researcher at the Stanford AI Laboratory, and then Professor of Computer Science and Linguistics at the University of Essex, before moving back to the US for ten years to run a successful and self-funded AI laboratory in New Mexico, the Computing Research Laboratory, a new institute set up by the state of New Mexico as a center of excellence in artificial intelligence in 1985. He has participated in and been the PI of numerous UK, US and EC grants, including the UK-government-funded Interdisciplinary Research Centre AKT (2000-2006) on active knowledge structures on the web (www.aktors.org). His most recent book is Close Encounters with Artificial Companions (Benjamins, in press 2010). He is a Fellow of the American and European Associations for Artificial Intelligence, a member of the UK Computing Research Council, and on the boards of some fifteen AI-related journals. In 2008 he was awarded the Zampolli Prize at LREC-08 in Marrakech and the ACL Lifetime Achievement Award at ACL08 in Columbus, OH. In 2009 he was awarded the Lovelace Medal by the British Computer Society and was elected a Fellow of the ACM.

February 20

Computational Game Theory: From Theory to Practice and Back

Albert Jiang University of Southern California

This event will be held on Thursday, 2/20/2014, at 3:00 p.m. in Stanley Thomas, Room 316. Please note the special weekday and venue for this event.

Abstract: Large-scale systems with multiple self-interested agents are becoming ubiquitous in all aspects of modern life, from electronic commerce and social networks to transportation, healthcare and the smart grid. Due to the complexity of these multiagent systems, there is increasing demand for automated tools for intelligent decision making, both for users trying to navigate the system and for system designers trying to ensure the economic efficiency and security of the system. Making intelligent decisions in multiagent systems requires prediction of the behavior of self-interested agents. Game-theoretic solution concepts like Nash equilibrium and correlated equilibrium are mathematical models of such self-interested behavior; however, standard computational methods fail to scale up to real-world systems. Furthermore, real-world systems have uncertain environments and boundedly-rational agents, and as a result certain classical assumptions of game theory no longer hold.

My work in computational game theory aims to bridge this gap between theory and practice. In this talk I will focus on three topics. First, I will present Action-Graph Games, a framework for computational analysis of large games that includes a general modeling language, a set of novel efficient algorithms for computing solution concepts, and publicly available software tools. Second, I will talk about applying computational game theory to real-world infrastructure security, in particular the TRUSTS system that generates randomized patrol strategies for fare enforcement in the LA Metro transit system. A major challenge in deploying our solutions is execution uncertainty: patrols are often interrupted. I will propose a general approach to dynamic patrolling games in uncertain environments, which provides patrol strategies with contingency plans. Third, I will discuss modeling the bounded rationality of human decision makers, in the context of security applications. Most existing behavior models require estimation of parameters from data, which might not be available. I will propose monotonic maximin, a new solution concept that is parameter-free and provides guarantees against a wide variety of boundedly-rational behavior.
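As background for the third topic, the sketch below computes a classical maximin (worst-case optimal) mixed strategy for a small zero-sum game with an off-the-shelf linear program. It is not Jiang's monotonic maximin, and the payoff matrix is invented; it only illustrates the kind of guarantee being generalized.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, 0.0],    # defender's payoff: rows = patrol strategies,
              [1.0, 2.0]])   # columns = attacker responses
m, n = A.shape

# Variables: x (mixed patrol strategy, length m) and v (guaranteed value).
# Maximize v subject to: for every attacker column j, x @ A[:, j] >= v.
c = np.concatenate([np.zeros(m), [-1.0]])               # linprog minimizes, so -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])               # v - x @ A[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.array([np.concatenate([np.ones(m), [0.0]])])  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print(x, v)  # mixed strategy (0.25, 0.75) guaranteeing payoff 1.5 here
```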


About the Speaker: Albert Jiang is a postdoctoral research associate in Computer Science at the University of Southern California, working with Professor Milind Tambe and the Teamcore research group. He received his Ph.D. under Professor Kevin Leyton-Brown, at the Laboratory of Computational Intelligence (LCI), Department of Computer Science, University of British Columbia. His research addresses computational problems arising in game theory and multiagent systems, including the efficient computation of solution concepts such as Nash equilibrium, Stackelberg equilibrium and correlated equilibrium, as well as applications of game-theoretic computation to real-world domains such as large-scale infrastructure security and electronic commerce.


February 21

Dialogue as Collaborative Problem Solving: Moving Beyond Siri

James Allen Florida Institute for Human and Machine Cognition / University of Rochester

Abstract: Automated spoken dialogue systems, which can interact with a user in natural language, are now in use for a variety of applications, from automatic telephone call handling systems to personal assistants such as Siri. Such systems, however, rigidly control the allowable interactions and generally cannot interpret utterances in context like humans do. What would it take to be able to build a dialogue system that could carry on conversation with human-like competence? I will describe work that we have done over the past few decades on trying to answer this question, and show a series of example systems that reveal the successes as well as the problems that remain. Underlying all this work is the hypothesis that dialogue results from human abilities to reason about and perform collaborative activities with each other. By viewing dialogue as collaborative problem solving, we can start to address some of the challenges in building systems that could display human-like conversational competence.


About the Speaker: James Allen is an Associate Director of the Institute for Human and Machine Cognition in Pensacola, Florida, as well as the John H. Dessauer Distinguished Professor of Computer Science at the University of Rochester. He received his PhD in Computer Science from the University of Toronto and was a recipient of the Presidential Young Investigator award from NSF in 1984. A Founding Fellow of the American Association for Artificial Intelligence (AAAI), he was editor-in-chief of Computational Linguistics, the premier journal in natural language processing, from 1983 to 1993. He has authored numerous research papers in the areas of natural language understanding, knowledge representation and reasoning, and spoken dialogue systems.

February 24

Defending Data from Digital Disasters: Engineering Next Generation Systems for Emerging Problems in Data Science

Eric Rozier University of Miami

This event will be held on Monday, 2/24/2014, at 3:00 p.m. in Boggs Center, Room 122. Please note the special weekday and venue for this event.

Abstract: Of the data that exists in the world, 90% was created in the last two years. Last year over 2,837 exabytes of data were produced, representing an increase of 230% from 2010. By next year this total is expected to increase to 8,591 exabytes, reaching 40,026 exabytes by 2020. Our ability to create data has already exceeded our ability to store it, with data production exceeding storage capacity for the first time in 2007. Our ability to analyze data has also lagged behind the deluge of digital information, with estimates putting the percentage of data analyzed at less than 1%, while an estimated 23% of data created would be useful if analyzed. Reliability, security, privacy, and confidentiality needs are outpacing our abilities as well, with only 19% of data protected. For these reasons we need systems that are not only capable of storing the raw data, but of doing so in a trustworthy manner while enabling state-of-the-art analytics.

In this talk we will explore problems in data science applications to medicine, climate science, natural history, and geography, and outline the reliability, availability, security, and analytics challenges to data in these domains. We will present novel, intelligent systems designed to combat these issues by using machine learning to apply a unique software-defined approach to data center provisioning, with dynamic architectures and on-the-fly reconfigurable middleware layers to address emergent problems in complex systems. Specifically, we will address issues of data dependence relationships and the threat they pose to long-term archival stores and curation, as well as techniques to protect them using novel theoretical constructs of second-class data and shadow syndromes. We will discuss the growing problem presented by the exponential explosion of both system and scientific metadata, and illustrate a novel approach to metadata prediction, sorting, and storage which allows systems to better scale to meet growing data needs. We will explore problems in cloud-based access to private records, illustrating the pitfalls of trusting provider claims with real-world audits conducted by our lab which successfully extracted synthetic patient data through inadvertent side-channels, and demonstrate novel search techniques which allow for regular-expression-based search over encrypted data while placing no trust in the cloud provider, ensuring zero information leakage through side-channels. Finally, we will conclude by discussing future work in systems engineering for Big Data, outlining current challenges and future pitfalls of next-generation systems for data science.

About the Speaker: Dr. Eric Rozier is an Assistant Professor of Electrical and Computer Engineering, head of the Trustworthy Systems Engineering Laboratory, and director of the Fortinet Security Laboratory at the University of Miami in Coral Gables, Florida. His research focuses on the intersection of problems in systems engineering with Big Data, Cloud Computing, and issues of reliability, performability, availability, security and privacy. Prior to joining Miami, Dr. Rozier served as a research scientist at NASA Langley Research Center and the National Center for Supercomputing Applications, and as a Fellow at the IBM Almaden Research Center. His work in Big Data and systems engineering has been the subject of numerous awards, including recently being named a Frontiers of Engineering Education Faculty Member by the National Academy of Engineering. Dr. Rozier completed his PhD in Computer Science at the University of Illinois at Urbana-Champaign, where he served as an IBM Doctoral Fellow and worked on issues of reliability and fault-tolerance of the Blue Waters supercomputer with the Information Trust Institute. Dr. Rozier is a long-time member of the IEEE and ACM, and a member of the AIAA Intelligent Systems Technical Committee, where he serves on the Publications and the Professional Development, Education, and Outreach subcommittees.

February 28

Trust Between Humans and Intelligent, Autonomous Agents

David Atkinson Florida Institute for Human and Machine Cognition

Abstract: There are a number of time-critical and mission-critical applications in health, defense, transportation, and industry where we recognize an urgent need to apply intelligent, autonomous agents to tasks that humans find difficult or dangerous to perform without assistance. It is essential that mixed human and intelligent agent teams tackling such challenges have appropriate interdependencies and reliance upon one another based on mutual trust.

This talk will explore interpersonal trust between humans and intelligent autonomous agents. We will present recent research that is leading us towards design of intelligent, autonomous agents that enable "reasonable" judgments of trustworthiness by interacting in a form and manner compliant with human social expectations. In other words, we might say we are reverse engineering the human social interface.

A significant body of research tells us that the cognitive, emotional, and social predispositions of humans play a strong role in trust of automation, and that we are predisposed to treat machines as social actors. We predict that as intelligent, autonomous agents interact with us in more sophisticated and natural ("human-like") ways, perhaps even embodied in humanoid robots, they will exhibit the kinds of behavior that will increasingly evoke our innate anthropomorphic social predispositions. The result will be attribution of mental states and characteristics to intelligent autonomous agents, among them trustworthiness. Our concern is that the social predispositions and inferential shortcuts that work so well for human interpersonal trust are likely to lead us astray in ascribing trustworthiness to autonomous agents insofar as our fundamental differences lead to misunderstanding and unexpected behavior. Intelligent autonomous agents are not human, do not have our senses or reason as we do, and do not have a stake in human society or share common human experience, culture, or biological heritage. These differences are potentially very significant and therefore likely to result in misattribution of human-like characteristics to autonomous agents. The foreseeable results include miscommunication, errors of delegation, and inappropriate reliance, all symptomatic of poorly calibrated trust. To address this challenge, a major endeavor of our research group centers on the creation of mechanisms for intelligent autonomous agents to correctly use conventions of human social interaction to provide reliable signals that are indicative of the agent’s state during the conduct of joint activity, and to do so in a manner that enables a human partner to construct a well-justified structure of beliefs about the agent. Our hypothesis is that this will lead to better-calibrated human trust and, consequently, better performance of the human-machine team.

About the Speaker: Dr. David J. Atkinson is a Senior Research Scientist at the Florida Institute for Human and Machine Cognition (IHMC). Dr. Atkinson’s current area of research targets applications of intelligent, autonomous agents as partners to humans in teamwork. His major focus is on fostering appropriate reliance and interdependency between human and agents, and specifically the role of human interpersonal trust and social interaction between humans and intelligent, autonomous agents. He is also interested in cognitive robotics, meta-reasoning and self-awareness, and affective computing. Dr. Atkinson worked over 20 years at Caltech/JPL where his work spanned basic research in artificial intelligence, autonomous systems and robotics with applications to robotic spacecraft, control center automation, and science data analysis. The research by his group at JPL resulted in many successful applications of intelligent systems to deep space exploration missions, including the Voyager, Galileo, Magellan, and Cassini spacecraft, and the "Spirit" and "Opportunity" Mars Exploration Rovers. As a senior executive, Dr. Atkinson managed a Division of JPL, served at NASA Headquarters where he directed Lunar robotic missions, and was also a program manager at the Air Force Office of Scientific Research before returning to basic research at IHMC. Dr. Atkinson holds a Bachelor’s degree in Psychology from University of Michigan, dual Master of Science and Master of Philosophy degrees in Computer Science (Artificial Intelligence) from Yale University, and the Doctor of Technology degree in Computer Systems Engineering from Chalmers University of Technology in Sweden.

March 14

Collective Annotation: From Crowdsourcing to Social Choice

Ulle Endriss Institute for Logic, Language and Computation, University of Amsterdam

Abstract: Crowdsourcing is an important tool, e.g., in computational linguistics and computer vision, for efficiently labeling large amounts of data using nonexpert annotators. The individual annotations collected then need to be aggregated into a single collective annotation that can serve as a new gold standard. In this talk, I will introduce the framework of collective annotation, in which we view this problem of aggregation as a problem of social choice, similar to the problem of aggregating the preferences of individual voters in an election. I will present a formal model for collective annotation in which we can express desirable properties of diverse aggregation methods as axioms, and I will report on the empirical performance of several such methods on annotation tasks in computational linguistics, using data we collected by means of crowdsourcing. The talk is based on joint work with Raquel Fernandez, Justin Kruger and Ciyang Qing.
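To make the aggregation problem concrete, the simplest voting rule for collective annotation is per-item plurality, sketched below on invented data; the framework in the talk treats such rules axiomatically and compares more sophisticated alternatives.

```python
from collections import Counter

# Invented example: each item collects labels from several nonexpert annotators.
annotations = {
    "sentence-1": ["pos", "pos", "neg"],
    "sentence-2": ["neg", "neg", "pos", "neg"],
}

# Per-item plurality: the collective label is simply the most common vote.
collective = {item: Counter(labels).most_common(1)[0][0]
              for item, labels in annotations.items()}
print(collective)  # {'sentence-1': 'pos', 'sentence-2': 'neg'}
```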

About the Speaker: Ulle Endriss is an associate professor at the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam, where he directs the interdisciplinary Master of Logic programme. Dr. Endriss carries out research at the interface of logic, artificial intelligence, and mathematical economics. Specifically, he has worked on computational social choice, multiagent resource allocation, negotiation, combinatorial auctions, knowledge representation, automated reasoning, modal and temporal logics, communication in multiagent systems, and software tools for teaching logic. He received his PhD in Logic and Computation from King's College London in 2003, after having studied Computer Science in Karlsruhe, London, and Berlin.

March 21

Challenges in Securing Vehicular Networks

Rajeev Shorey Program Director, Media Lab Asia

Abstract: Vehicular networks have been the subject of much attention lately. A Vehicular Ad-Hoc Network, or VANET, is a form of mobile ad-hoc network, which provides communications among nearby vehicles and between vehicles and nearby fixed equipment, usually described as roadside equipment. Enabled by short-range to medium-range communication systems (vehicle-to-vehicle or vehicle-to-roadside), the vision of vehicular networks includes real-time and safety applications, sharing the wireless channel with mobile applications from a large, decentralized array of service providers. Vehicular safety applications include collision and other safety warnings. Non-safety applications include real-time traffic congestion and routing information, high-speed tolling, mobile infotainment, and many others.

VANETs allow setting up communication links between vehicles for the exchange of information. By exchanging information, vehicles can warn drivers to prepare for a dangerous situation, e.g., through a post-crash notification. Emerging applications in VANETs have opened tremendous business opportunities and navigation benefits, but they also pose challenging research problems in security provisioning. In any vehicular ad-hoc network, there is always a possibility of incorrect messages being transmitted, either due to faulty sensors or intentional malicious activity.

The goal of the talk is to highlight technical challenges and key developments in the area of security in vehicular networks. The talk will primarily cover research and business challenges in vehicular security, and will also cover emerging services and applications in the automotive space.

Keywords: V2V, V2I, OnStar, Safety, DSRC, WAVE Stack, Security, Public Key Cryptography, Private Key Cryptography, DoS attacks.
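Since the keywords above include public key cryptography, here is a hedged sketch of the basic defense against forged safety messages: digital signatures. Real DSRC/WAVE security (IEEE 1609.2) specifies ECDSA-based signatures with a certificate infrastructure; Ed25519 is used below purely for brevity, and the message is invented.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The transmitting vehicle signs its safety message with its private key.
vehicle_key = Ed25519PrivateKey.generate()
message = b"post-crash notification: lane 2 blocked, km 41.3"
signature = vehicle_key.sign(message)

# A receiving vehicle verifies the signature before trusting the warning;
# a forged or tampered message fails verification and is discarded.
try:
    vehicle_key.public_key().verify(signature, message)
    print("message authentic, warn driver")
except InvalidSignature:
    print("reject: possible malicious or corrupted message")
```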

About the Speaker: Dr. Rajeev Shorey is the Project Director at IT Research Academy, Media Lab Asia, Dept of IT, Government of India. He is the Co-Founder of ChiSquare Analytics, a start-up in Data Analytics and Big Data. From 2009 to 2012, Dr. Shorey was the (Founder) President of NIIT University, India. Dr. Shorey received his Ph.D. and M.S. (Engg.) in Electrical Communication Engineering from the Indian Institute of Science (IISc), Bangalore, India in 1997 and 1991, respectively. He received his B.E. degree in Computer Science and Engineering from IISc, Bangalore in 1987 and the B.Sc. degree from St. Stephen’s College, Delhi University in 1984.

March 25

From 0 to 1 Million Samples: Theoretically Principled Machine Learning Algorithms for Real-World Applications

Francesco Orabona Toyota Technological Institute

This event will be held on Tuesday, 3/25/2014 at 3:30 p.m. in Dinwiddie Hall, Room 102. Please note the special weekday and venue for this event.

Abstract: Most research in machine learning has been directed at problems where a reasonably sized training set is available and scalability of the algorithms is not a concern. While this is a fundamental setting, it does not fit many important real-world applications. On one hand, as the big data paradigm gains momentum, the scalability of learning algorithms has become the main bottleneck to obtaining good performance in a reasonable amount of time. On the other hand, in many real-world applications the amount of training data is very limited, and this scarcity is the main obstacle to automatic learning.

In this talk, the speaker will present some of his latest results, which aim to solve tasks in both the small-data and big-data regimes. In particular, for the first setting, he will present an algorithm able to automatically transfer knowledge from other, possibly relevant, sources of information to bootstrap performance on new tasks with little available data. For the second setting, he will present the first theoretically optimal parameter-free stochastic gradient descent algorithm, which effortlessly trains learning algorithms over millions of training samples.

Empirical and theoretical results will be shown for both domains.
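For contrast with the parameter-free result, here is a baseline sketch of ordinary stochastic gradient descent for logistic regression, where everything hinges on the hand-tuned learning rate `eta`; the parameter-free line of work removes exactly that knob, and the speaker's actual algorithm is not reproduced here. The data is invented.

```python
import numpy as np

def sgd_logistic(X, y, eta=0.1, epochs=5, seed=0):
    """Plain SGD on the logistic loss log(1 + exp(-y * w @ x))."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):      # one sample at a time, which is
            margin = y[i] * (X[i] @ w)         # what lets SGD scale to millions
            grad = -y[i] * X[i] / (1 + np.exp(margin))
            w -= eta * grad                    # the step size we would love to drop
    return w

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(sgd_logistic(X, y))
```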

About the Speaker: Francesco Orabona is a Research Assistant Professor at the Toyota Technological Institute at Chicago. His research interests are in the area of online learning and transfer learning, with applications to robotics and computer vision. He received a PhD degree in Electrical Engineering from the University of Genoa in 2007. He was a post-doctoral researcher with Barbara Caputo and Nicolò Cesa-Bianchi. He is (co)author of more than 40 peer-reviewed papers.

April 4

Topic

Speaker University

Abstract: TBA.

April 11

Topic

Speaker University

Abstract: TBA.

April 18

Topic

Speaker University

Abstract: TBA.

April 25

Topic

Speaker University

Abstract: TBA.
