
Colloquia Archives: 2013-2014

Below is the list of talks in the computer science seminar series for the 2013-2014 academic year. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv by following this link.

Lecture Series Overview

Fall 2013 Colloquia

September 25

Julia Sets, $\omega$-limit Sets, and Internal Chain Transitivity

Brian Raines Baylor University

This event will be held on Wednesday, 9/25/2013, at 3:30 p.m. in Stanley Thomas Hall 302. Please note the special weekday and time for this colloquium.


Abstract: We consider the dynamics of quadratic maps on the complex plane, $f_c(z)=z^2+c$.  The interesting dynamics of such a map are carried on the Julia set which is often a complicated, self-similar space.  There are many values of $c$ for which the Julia set is a \emph{dendrite}, a locally connected and uniquely arcwise connected compact metric space.  One example is when the value $c$ is strictly pre-periodic; such values are known as Misiurewicz points and the corresponding Misiurewicz maps are well studied.  There are many other parameters for which the Julia set contains infinitely many circles arranged in a "dendritic" pattern.

Baldwin gives an efficient encoding of the dynamics of these quadratic maps restricted to the Julia set, into a shift map on a non-Hausdorff itinerary space.  Using this symbolic representation we characterize the $\omega$-limit sets of these particular Julia sets.  We prove that  such quadratic maps have shadowing on their Julia sets, and also that for all such maps, a closed invariant set is an $\omega$-limit set of a point if, and only if, it is \emph{internally chain transitive}.  This implies that the space of $\omega$-limit sets is closed in the hyperspace topology.
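The escape-time characterization behind such Julia sets is easy to sketch. The fragment below (an illustration for readers, not part of the talk) iterates $f_c(z)=z^2+c$ and tests whether an orbit escapes; the parameter $c = i$ is a Misiurewicz point, since the critical orbit $0 \to i \to -1+i \to -i \to -1+i \to \cdots$ is strictly pre-periodic, and its Julia set is a dendrite.

```python
# Escape-time sketch for the quadratic family f_c(z) = z^2 + c.
# Points whose orbit stays bounded lie in the filled Julia set;
# the Julia set itself is its boundary.

def escapes(z, c, max_iter=100, radius=2.0):
    """Return the iteration count at which |z| exceeds the escape
    radius, or None if the orbit stays bounded for max_iter steps."""
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = z * z + c
    return None

# c = i is a Misiurewicz point: the critical orbit of 0 is pre-periodic.
c = complex(0, 1)
print(escapes(0, c))   # None: the critical orbit stays bounded
print(escapes(2, c))   # a point far from the Julia set escapes quickly
```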

About the Speaker: Brian Raines received his doctorate in 2002 from the University of Oxford, and is now an associate professor of mathematics at Baylor University. He has worked extensively in topology, topological dynamics and ergodic theory, chaos, the theory of inverse limit spaces, and theoretical computer science, and is perhaps best known for his work on Ingram's Conjecture. Dr. Raines recently has been elected to the Baylor Fellows Program in honor of his excellence in teaching.

September 27

Automatic Techniques for Termination and Complexity Analysis of Programs

Supratik Mukhopadhyay Louisiana State University

Abstract: Analysis of termination and other liveness properties of an imperative program can be reduced to termination proof synthesis for simple loops, i.e., loops with only variable updates in the loop body.

Among simple loops, the subset of Linear Simple Loops (LSLs) is particularly interesting because it is common in practice and expressive in theory. Existing techniques can successfully synthesize a linear ranking function for an LSL if one exists. However, when a terminating LSL does not have a linear ranking function, these techniques fail. In this talk we describe an automatic method that generates proofs of universal termination for LSLs based on the synthesis of disjunctive ranking relations.

The method repeatedly finds linear ranking functions on parts of the state space and checks whether the transitive closure of the transition relation is included in the union of the ranking relations. Our method extends the work of Podelski and Rybalchenko. We have implemented a prototype of the method and have shown experimental evidence of the effectiveness of our method.
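The disjunctive-ranking idea can be illustrated on a toy example (this is a sketch for intuition, not the speaker's implementation, which reasons symbolically rather than over sampled states). The classic loop that nondeterministically decrements x or y has no single linear ranking function covering both branches with one bound, but the union of two ranking relations covers the transitive closure of its transition relation:

```python
# Toy check of disjunctive ranking: every pair (s, s') in the transitive
# closure of the loop's transition relation must fall in R1 ∪ R2, where
# each Ri is a well-founded linear ranking relation. States are sampled,
# so this illustrates the inclusion check rather than proving it.

def step(s):
    # while x > 0 and y > 0: nondeterministically decrement x or y
    x, y = s
    if x > 0 and y > 0:
        yield (x - 1, y)
        yield (x, y - 1)

def reachable(s0):
    seen, frontier = {s0}, [s0]
    while frontier:
        for t in step(frontier.pop()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def closure_pairs(states):
    """All (s, t) with t reachable from s in one or more steps."""
    pairs = set()
    for s in states:
        frontier = list(step(s))
        seen = set(frontier)
        while frontier:
            t = frontier.pop()
            pairs.add((s, t))
            for u in step(t):
                if u not in seen:
                    seen.add(u)
                    frontier.append(u)
    return pairs

R1 = lambda s, t: s[0] >= 0 and t[0] < s[0]   # x bounded below and decreasing
R2 = lambda s, t: s[1] >= 0 and t[1] < s[1]   # y bounded below and decreasing

states = set().union(*(reachable((x, y)) for x in range(4) for y in range(4)))
print(all(R1(s, t) or R2(s, t) for s, t in closure_pairs(states)))  # True
```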

We also introduce a new technique for refining the complex control structure of loops that occur in imperative programs. We first introduce a new way of describing program execution patterns: (+,.)-path expressions, a subset of conventional path expressions with the operators ∨ and * eliminated. The programs induced by (+,.)-path expressions have no path interleaving or skipping-over of inner loops, which are the two main issues that cause imprecision in program analysis. Our refinement process starts from a conventional path expression E obtained from the control flow graph, and aims to calculate a finite set of (+,.)-path expressions such that the language generated by path expression E is equivalent to the union of the languages generated by each (+,.)-path expression. In theory, a conventional path expression can potentially generate an infinite set of (+,.)-path expressions. To remedy that, we use abstract interpretation techniques to prune the infeasible paths. In practice, the refinement process usually converges very quickly.

We have applied our method to symbolic computation of average case bound for running time of programs. To the best of our knowledge it is the first tool that automatically computes average case bounds. Experiments on a set of complex loop benchmarks clearly demonstrate the utility of our tool.

About the Speaker: Dr. Mukhopadhyay is a faculty member of the Computer Science Division of LSU where he holds the Occidental Chemical Corporation Career Development Chair of the College of Engineering. His research has been supported by NSF, DARPA, NASA, ARO, DOE, industry, and state agencies.

October 4

Agile Computing

Niranjan Suri Florida Institute for Human and Machine Cognition

Abstract: Wirelessly networked dynamic and mobile environments, such as tactical military environments and disaster recovery operations, pose many challenges to building distributed computing systems. A wide variety of node types and resources, unreliable and bandwidth constrained communications links, high churn rates, and rapidly changing user demands and requirements make it difficult to build systems that satisfy the needs of the users and provide good performance.

Agile computing is an innovative metaphor for distributed computing systems and prescribes a new approach to their design and implementation. Agile computing may be defined as opportunistically discovering, manipulating, and exploiting available computing and communication resources. The term agile is used to highlight the desire to both quickly react to changes in the environment as well as to take advantage of transient resources only available for short periods of time.

This talk will briefly introduce the Agile Computing Middleware and its key components, including the Mockets Communications Library, the Group Manager discovery service, the DisService peer-to-peer information dissemination system, the DSPro proactive dissemination system, and finally the NetProxy that integrates all of the above capabilities with legacy applications.

About the Speaker: Niranjan Suri is a Research Scientist at the Florida Institute for Human and Machine Cognition (IHMC) in Pensacola, FL. He is also a Visiting Scientist at the U.S. Army Research Laboratory in Adelphi, MD. He received his Ph.D. in Computer Science from Lancaster University, England, and his M.Sc. and B.Sc. in Computer Science from the University of West Florida, Pensacola, FL.

October 18

Foundations for Computer Security

Michael Clarkson George Washington University

Abstract: This talk describes work on developing mathematical and logical foundations for computer security policies. Usually, security policies on computer systems are expressed in terms of confidentiality, integrity, and availability. But we don't really know how to formalize and verify such policies. I propose an alternative: a new mathematical framework called "hyperproperties." Every security policy can be formally expressed using hyperproperties.

A new logic of programs, HyperLTL, enables verification of hyperproperties. Security policies are also usually qualitative, but engineering demands quantitative metrics. I propose information-flow metrics that can be used to quantify security. Study of these metrics led to new discoveries: a new kind of information-flow integrity, a new model of attacker beliefs that resolves an anomaly in previous models of information-flow confidentiality, and a new connection between information flow and database privacy. That connection revealed a new theorem about the tradeoff between privacy and utility.
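A small example (mine, not the speaker's) shows why confidentiality is a hyperproperty rather than a property: noninterference constrains pairs of traces, so no predicate on a single trace can express it. Here traces are (high input, low input, low output) triples, and the check is observational determinism over a system's full trace set:

```python
# A property is a set of traces; a hyperproperty is a set of *sets* of
# traces. Observational determinism requires that traces agreeing on
# low inputs also agree on low outputs -- a constraint on trace pairs.

from itertools import combinations

def observationally_deterministic(traces):
    """Low-equivalent inputs must yield equal low outputs."""
    return all(t1[2] == t2[2]
               for t1, t2 in combinations(traces, 2)
               if t1[1] == t2[1])

# Leaky system: the low output reveals the high input.
leaky  = {(h, l, h + l) for h in (0, 1) for l in (0, 1)}
# Secure system: the low output depends only on the low input.
secure = {(h, l, 2 * l) for h in (0, 1) for l in (0, 1)}

print(observationally_deterministic(leaky))   # False
print(observationally_deterministic(secure))  # True
```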

About the Speaker: Michael Clarkson is an assistant professor in the Department of Computer Science at George Washington University in Washington, D.C. He received a PhD in computer science from Cornell University in 2010. Clarkson's research interests include computer security and programming languages. His work focuses on using principled techniques to define security and to construct secure systems.

November 1

Seeing Shapes: Lessons from Human Vision

Nathaniel Twarog Massachusetts Institute of Technology

Abstract: The comprehension and representation of shape, both two- and three-dimensional, is one of the most essential challenges of computer vision, playing a critical role in object recognition, visually-guided locomotion, object grasping and manipulation, facial recognition and identification, image segmentation, and content-based image retrieval. However, existing computational approaches to understanding shape still lag far behind the versatility and power of the human visual system in many respects. I will discuss two algorithms for the analysis and representation of shape which attempt to bridge the gap between vision in the human mind and vision in the modern computational system.
 
I will introduce an algorithm called Puffball, which maps arbitrary two-dimensional silhouettes to intuitive three-dimensional inflations. These inflations provide invaluable insights into the visual properties of two-dimensional shapes, which allow Puffball inflation to be applied to numerous visual tasks, including robust silhouette similarity measurement, fast transfer of material appearance, and segmentation of silhouette parts. The part segmentation approach, which locates minima of principal curvature of the Puffball surface, is far simpler and more intuitive than previous part segmentation approaches, which use curvature of the silhouette contour; the results of the Puffball-based part segmentation also agree more closely with human part segmentation judgments, making Puffball a powerful tool for modeling representation of shape in human subjects.
 
I will also describe a novel algorithm for inferring three-dimensional shape from shaded images (often referred to as shape-from-shading), which infers a small number of distinct interpretations of each local image patch in a given scene, and utilizes belief propagation to determine a coherent shape representation which best explains the image in its entirety. This approach is both simple and highly customizable and can be applied to almost any shape inference task, including shape from contours, shape from texture, and shape from sheen; in addition, the underlying representation can be augmented to allow inference of other related variables, including surface texture, material properties, and lighting environment.

About the Speaker: Nathaniel Twarog received a PhD in Cognitive Science from MIT in 2012. His research interests lie at the intersection of computer vision, human vision, and graphics, specializing in the development of algorithms to extract and represent structure, organization, and shape in complex signals.

November 13

STUDENT SEMINAR: Ambient Obstacle Avoidance

Brenan Keller Tulane University

This event will be held on Wednesday, 11/13/2013, at 3:30 p.m. in Stanley Thomas Hall 302. Please note the special weekday and time for this event.

Abstract: In this talk I will describe my technical contribution and experience concerning an internship at the Florida Institute for Human and Machine Cognition (IHMC).

I helped develop a novel ground vehicle interface called the Ambient Obstacle Avoidance (AOA) interface. The AOA interface advances modern ground vehicle interfaces in two main ways. It provides information lacking in current interfaces and does so in a unique way that allows the operator to leverage ambient vision rather than relying solely on focal vision.

I was tasked with building a physical prototype of the system and subsequently conducting an experiment to ascertain its effectiveness. Together with simulated prototypes, the team was able to demonstrate that AOA was far more effective than the current norm. Internships are a critical tool in learning how to apply theoretical knowledge to real-world situations. My internship was a great learning experience, giving me insight into how research institutes operate and how research develops.

About the Speaker: Brenan Keller is a Tulane undergraduate majoring in Business and completing the coordinate major in Computer Science. He spent the summer of 2012 as an intern at the IHMC in Pensacola, Florida.

November 15

Big Data Applications and Systems Research

Milenko Petrovic Florida Institute for Human and Machine Cognition

Abstract: The diversity and the amount of human-generated data are growing by leaps and bounds: GPS, motion detection, RFID, vehicle telemetry, biotelemetry, emails, tweets, OCR, medical records, and legal documents are only a few examples.

A Big Data system must be able to collect, process, and store data effectively. Frequently, the data must be collected from highly distributed environments such as sensor networks and mobile systems. Analysis of Big Data often involves statistical machine learning, and the complexity of applying these algorithms at scale requires systems and techniques that can take advantage of massive parallelism in new hardware such as multicore and GPU processors, cloud computing, large amounts of random access memory, and flash storage.

In this talk, I first describe challenges of Big Data in several domains including information extraction, content-based routing, ecological monitoring, and cancer research. I then describe our past and ongoing work in developing the systems to address these challenges. Lastly, I give a brief overview of the state-of-the-art systems for large scale data analysis.

About the Speaker: Milenko Petrovic is a Research Scientist at the Florida Institute for Human and Machine Cognition (IHMC). He received a Ph.D. degree in Computer Engineering from the University of Toronto in 2007, where he was a member of the middleware systems research group (MSRG). His research interests are in the area of large scale data dissemination in mobile and sensor systems. He worked at IBM on large scale semi-structured data processing in DB2; at Innovative Scheduling on an interactive data analytics system for freight railways; and at Bayes Informatics on a large scale text analysis system applied in projects for O’Reilly Media, GM Research, and Buzzient, a social media analytics company.

November 22

Program Analysis and Optimization for Multi-core Computing

Kleanthis Psarris Brooklyn College

Abstract: High end parallel and multi-core processors rely on compilers to perform the necessary optimizations and exploit concurrency in order to achieve higher performance. However, source code for high performance computers is extremely complex to analyze and optimize. In particular, program analysis techniques often do not take into account complex expressions during the data dependence analysis phase. Most data dependence tests are only able to analyze linear expressions, even though non-linear expressions occur very often in practice. Therefore, considerable amounts of potential parallelism remain unexploited.

In this talk we propose new data dependence analysis techniques to handle such complex instances of the dependence problem and increase program parallelization. Our method is based on a set of polynomial time techniques that can prove or disprove dependences in source code with non-linear and symbolic expressions, complex loop bounds, arrays with coupled subscripts, and if-statement constraints. In addition, our algorithm can produce accurate and complete direction vector information, enabling the compiler to apply further transformations.

To validate our method we performed an experimental evaluation and comparison against the I-Test, the Omega test and the Range test on the Perfect and SPEC benchmarks. The experimental results indicate that our dependence analysis tool is accurate, efficient and more effective in program parallelization than the other dependence tests. The improved parallelization results in higher speedups and better program execution performance in several benchmarks.
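For readers unfamiliar with dependence testing, a minimal classical instance (far simpler than the techniques in the talk) is the GCD test for a single linear subscript pair: a write to A[a*i + b] and a read of A[c*j + d] can touch the same element only if gcd(a, c) divides d - b.

```python
# The GCD test: a*i + b == c*j + d has an integer solution (i, j)
# iff gcd(a, c) divides d - b. It over-approximates (ignores loop
# bounds), so "True" means dependence is merely possible.
# Assumes a and c are not both zero.

from math import gcd

def gcd_test_may_depend(a, b, c, d):
    """May references A[a*i+b] and A[c*j+d] alias for integers i, j?"""
    return (d - b) % gcd(a, c) == 0

# for (i) { A[2*i] = ...; ... = A[2*i + 1]; }
print(gcd_test_may_depend(2, 0, 2, 1))  # False: evens never meet odds
print(gcd_test_may_depend(2, 0, 4, 2))  # True: dependence is possible
```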

About the Speaker: Kleanthis Psarris is a Professor of Computer and Information Science and the Dean of the School of Natural and Behavioral Sciences at City University of New York - Brooklyn College. He holds a B.S. degree in Mathematics from the National University of Athens, Greece, and a M.Eng. degree in Electrical Engineering and a Ph.D. in Computer Science from Stevens Institute of Technology in Hoboken, New Jersey. His research interests are in the areas of Parallel and Distributed Systems, Programming Languages and Compilers, and High Performance Computing. He is an Editor of the Parallel Computing Journal, and his research has been funded by the National Science Foundation and the Department of Defense.

November 25

Insertion and Promotion for Tree-Based PseudoLRU Last-Level Caches

Daniel Jimenez Texas A&M University


This event will be held on Monday, 11/25/2013, at 3:00 p.m. in Stanley Thomas Hall 316. Please note the special weekday and venue for this colloquium.


Abstract: Last-level caches mitigate the high latency of main memory. A good cache replacement policy enables high performance for memory intensive programs. To be useful to industry, a cache replacement policy must deliver high performance without high complexity or cost. For instance, the costly least-recently-used (LRU) replacement policy is not used in highly associative caches; rather, inexpensive policies with similar performance such as PseudoLRU are used.

I propose a novel last-level cache replacement algorithm with approximately the same complexity and storage requirements as tree-based PseudoLRU, but with performance matching state-of-the-art techniques, such as dynamic re-reference interval prediction (DRRIP) and protecting distance policy (PDP). The algorithm is based on PseudoLRU, but uses set dueling to dynamically adapt its insertion and promotion policy. It has slightly less than one bit of overhead per cache block, compared with two or more bits per cache block for competing policies.
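For reference, the tree-based PseudoLRU baseline that the proposed policy builds on can be sketched in a few lines (an illustrative implementation, not taken from the talk). An N-way set keeps only N-1 tree bits, each pointing toward the "colder" subtree; a hit flips the bits on its path to point away from the touched way.

```python
# Tree-based PseudoLRU for one cache set: N-1 bits in a heap-style
# binary tree over the N ways. Victim selection follows the bits;
# touching a way flips the bits on its path to protect it.

class TreePLRU:
    def __init__(self, ways=16):
        assert ways & (ways - 1) == 0, "ways must be a power of two"
        self.ways = ways
        self.bits = [0] * (ways - 1)    # root at index 0

    def touch(self, way):
        """On a hit or fill, point every bit on the path away from `way`."""
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if way < mid:               # went left: bit now points right
                self.bits[node] = 1
                node, hi = 2 * node + 1, mid
            else:                       # went right: bit now points left
                self.bits[node] = 0
                node, lo = 2 * node + 2, mid

    def victim(self):
        """Follow the tree bits to the pseudo-least-recently-used way."""
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if self.bits[node] == 0:    # 0 -> cold side is the left subtree
                node, hi = 2 * node + 1, mid
            else:
                node, lo = 2 * node + 2, mid
        return lo

s = TreePLRU(4)
for w in (0, 1, 2, 3):
    s.touch(w)
print(s.victim())  # 0: the least recently touched way
```

The insertion/promotion policies in the talk adapt where, within this tree, new and re-referenced blocks are placed, which is why the storage cost stays near one bit per block.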

In this talk, I give the motivation behind the algorithm in the context of LRU with improved placement and promotion, then develop this motivation into a PseudoLRU-based algorithm, and finally give a version using set dueling to allow adaptivity to changing program behavior. I show that, with a 16-way set-associative 4MB last-level cache, an adaptive PseudoLRU insertion and promotion algorithm yields a geometric mean speedup of 5.6% over true LRU over all the SPEC CPU 2006 benchmarks using far less overhead than LRU or other algorithms. On a memory-intensive subset of SPEC, the technique improves geometric mean speedup by 15.6%. I show that the performance is comparable to state-of-the-art replacement policies that consume more than twice the area.

About the Speaker: Daniel A. Jimenez is an Associate Professor in the Department of Computer Science and Engineering at Texas A&M University. Previously he was a full Professor and Chair of the Department of Computer Science at The University of Texas at San Antonio and Associate Professor in the Department of Computer Science at Rutgers University. His research focuses on microarchitecture and low-level compiler optimizations. He introduced and developed the perceptron branch predictor which has inspired the design of two implemented microarchitectures. Daniel earned his B.S. (1992) and M.S. (1994) in Computer Science at The University of Texas at San Antonio and his Ph.D. (2002) in Computer Sciences at The University of Texas at Austin. Daniel was General Chair of the 2011 IEEE HPCA conference, and he is an NSF CAREER award recipient and ACM Distinguished Scientist.

December 4

Computational Models and Tools to Solve Abstract Argumentation Frameworks

Francesco Santini INRIA-Rocquencourt

This event will be held on Wednesday, 12/04/2013, at 4:00 p.m. in Stanley Thomas Hall 302. Please note the special weekday and time for this colloquium.


Abstract: An Abstract Argument Framework (AAF) is simply a pair consisting of a set A whose elements are called arguments, and of a binary relation R on A, called the attack relation. An abstract argument is not assumed to have any specific structure: an argument is anything that may attack or be attacked by another argument. An argumentation semantics is a formal definition of a method ruling the justification state of arguments: intuitively, an argument is justified if it has some way to survive the attacks it receives. An extension-based semantics specifies how to derive a subset of A representing a set of arguments that are collectively justified. Computational models and tools are used to model and compute these semantics and their counterparts in Weighted AAFs, i.e., where attacks or nodes are weighted with a strength score. Benchmarks based on small-world networks need to be assembled to test the scalability and applicability of these tools, even considering the hard problems related to Weighted AAFs.
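A brute-force sketch (mine, for illustration; real solvers avoid this exponential enumeration) makes extension-based semantics concrete. Below, a subset S is a stable extension of (A, R) if it is conflict-free and attacks exactly the arguments outside it; the example is the chain a attacks b, b attacks c:

```python
# Enumerate stable extensions of an AAF (A, R) by brute force:
# S is stable iff S is conflict-free and attacks every argument not in S.

from itertools import chain, combinations

def stable_extensions(A, R):
    attacks = set(R)
    def attacked_by(S):
        return {b for a in S for (x, b) in attacks if x == a}
    subsets = chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))
    result = []
    for S in map(set, subsets):
        conflict_free = not any((a, b) in attacks for a in S for b in S)
        if conflict_free and attacked_by(S) == set(A) - S:
            result.append(S)
    return result

A = {"a", "b", "c"}
R = {("a", "b"), ("b", "c")}
print(stable_extensions(A, R))  # the unique stable extension: a and c
```

In the chain, a survives unattacked, a defeats b, and the defeat of b reinstates c, so the collectively justified set is {a, c}.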

About the Speaker: Francesco Santini is currently the ERCIM "Alain Bensoussan" Fellow at INRIA-Rocquencourt, in Paris. Santini received his Bachelor and Masters in Computer Science at University of Pisa (Italy) and his Ph.D. in Computer Science and Engineering at IMT-Lucca (Italy). His main research interests are Constraint Programming, Trust Management Systems, Argumentation Frameworks and Orchestration/Choreography/Discovery in Service Oriented Architectures.

December 6

An Introduction to Digital Forensics: Privacy, Practice, and Research

Golden Richard University of New Orleans

Abstract: Digital forensics is an important aspect of information assurance (IA) and plays an increasingly vital role in detecting and responding to cyberattacks, supporting civil and criminal litigation, protecting intellectual property, protecting minors from predators, and more. Recoverable digital evidence exists on a wide variety of electronic devices, from traditional computers, to smartphones, printers, video surveillance systems, voice recorders, and game consoles.

This talk provides an introduction to digital forensics, the art and science of discovering, preserving, and analyzing digital evidence, from three perspectives: privacy, practical digital investigation, and research. The talk covers basic concepts and investigative challenges before briefly addressing current research directions, most of which are concerned with techniques and tools for allowing investigators to deal with the ever-increasing size and complexity of forensic targets and the growing impact of malware in forensic investigations. These research approaches cover a wide spectrum, including the use of parallel and distributed architectures for forensics tools, Graphics Processing Units (GPUs), advanced file carving techniques, virtual machine introspection, and tools for live investigation and memory analysis.

The impact of digital forensics techniques on personal privacy is addressed throughout.

About the Speaker: Golden G. Richard III is Professor of Computer Science, University Research Professor, and Director of the Greater New Orleans Center for Information Assurance (GNOCIA) at the University of New Orleans. Dr. Richard has extensive experience in private digital forensic investigation and was one of the first academic researchers in this area. In addition to digital forensics, he is also interested in reverse engineering, malware analysis, and operating systems internals. Dr. Richard is a member of the United States Secret Service Electronic Crime Taskforce, sits on the Editorial Boards of the Journal of Digital Investigation and the International Journal of Digital Crime and Forensics (IJDCF), and is a member of the American Academy of Forensic Sciences (AAFS).

Spring 2014 Colloquia

January 24

Work Practice Simulation of Complex Human-Automation Systems: The Brahms Generalized Überlingen Model

Bill Clancey Florida Institute for Human and Machine Cognition

Abstract: The Brahms Generalized Überlingen Model (Brahms-GÜM) was developed at NASA Ames within the Assurance for Flight Critical Systems technical theme as a design verification and validation methodology for assessing aviation safety. The human-centered design approach involves a detailed computer simulation of work practices that includes people interacting with flight-critical systems. Brahms-GÜM was developed by analyzing and generalizing the roles, systems, and events in the Überlingen 2002 accident, a scenario that can be simulated as a particular configuration of the model. Simulation experiments varying assumptions about aircraft flights and system malfunctions revealed the time-sensitive interactions among TCAS, the pilots, and the air traffic controller (ATCO), and particularly how a routinely complicated situation became cognitively complex for the ATCO. Brahms-GÜM demonstrates the strength of the framework for simulating asynchronous (or loosely coupled), distributed processes in which the sequence of behavioral interactions can become mutually constrained and unpredictable. The simulation generates metrics that can be compared to observational data and/or make predictions for redesign experiments. Brahms-GÜM can be adapted for other accident investigations, for identifying possible failures in proposed highly integrated systems, and for developing recovery strategies and procedures for system malfunctions.

About the Speaker: Dr. William J. Clancey is a Senior Research Scientist at the Florida Institute for Human and Machine Cognition; he was previously on assignment to NASA Ames Research Center as Chief Scientist for Human-Centered Computing, Intelligent Systems Division (1998-2013). He received his PhD in Computer Science from Stanford University and his BA in Mathematical Sciences from Rice University. He was a founding member of the Institute for Research on Learning (1987-1997), where he was lead inventor of Brahms, a work systems design tool based on simulating work practice. Clancey has extensive experience in developing AI applications to medicine, education, robotics, and spaceflight systems (including OCAMS, recipient of the NASA JSC Exceptional Software Award). He is a Fellow of the Association for Psychological Science, the Association for the Advancement of AI, and the American College of Medical Informatics. He has published seven books (including Situated Cognition: On Human Knowledge and Computer Representations and Working on Mars: Voyages of Scientific Discovery with the Mars Exploration Rovers, recipient of the American Institute of Aeronautics and Astronautics 2014 Gardner-Lasser Aerospace History Literature Award), and has presented invited lectures in over 20 countries.

February 3

Machine Learning in Evidence-Based Medicine: Taming the Clinical Data Deluge

Byron Wallace Brown University


This event will be held on Monday, 2/3/2014, at 3:00 p.m. in Boggs Center, Room 122. Please note the special weekday and venue for this event.

Abstract: An unprecedented volume of biomedical evidence is being published today. Indeed, PubMed (a search engine for biomedical literature) now indexes more than 600,000 publications describing human clinical trials and upwards of 22 million articles in total. This volume of literature imposes a substantial burden on practitioners of Evidence-Based Medicine (EBM), which now informs all levels of healthcare. Systematic reviews are the cornerstone of EBM. They address a well-formulated clinical question by synthesizing the totality of the available relevant evidence. To realize this aim, researchers must painstakingly identify the few tens of relevant articles among the hundreds of thousands of published clinical trials. Further exacerbating the situation, the cost of overlooking relevant articles is high: it is imperative that all relevant evidence is included in a synthesis, else the validity of the review is compromised. As reviews have become more complex and the literature base has exploded in volume, the evidence identification step has consumed an increasingly unsustainable amount of time. It is not uncommon for researchers to read tens of thousands of abstracts for a single review. If we are to realize the promise of EBM (i.e., inform patient care with the best available evidence), we must develop computational methods to optimize the systematic review process.

To this end, I will present novel data mining and machine learning methods that look to semi-automate the process of relevant literature discovery for EBM. These methods address the thorny properties inherent to the systematic review scenario (and indeed, to many tasks in health informatics). Specifically, these include: class imbalance and asymmetric costs; expensive and highly skilled domain experts with limited time resources; and multiple annotators of varying skill and price. In this talk I will address these issues in turn. In particular, I will present new perspectives on class imbalance, novel methods for exploiting dual supervision (i.e., labels on both instances and features), and new active learning techniques that address issues inherent to real-world applications (e.g., exploiting multiple experts in tandem). I will present results that demonstrate that these methods can reduce by half the workload involved in identifying relevant literature for systematic reviews, without sacrificing comprehensiveness. Finally, I will conclude by highlighting emerging and future work on automating next steps in the systematic review pipeline, and data mining methods for making sense of biomedical data more generally.

About the Speaker: Byron Wallace is an assistant research professor in the Department of Health Services, Policy & Practice at Brown University; he is also affiliated with the Brown Laboratory for Linguistic Processing (BLLIP) in the department of Computer Science. His research is in data mining/machine learning and natural language processing, with an emphasis on applications in health. Before moving to Brown, he completed his PhD in Computer Science at Tufts under the supervision of Carla Brodley. He was selected as the runner-up for the 2013 ACM SIGKDD Doctoral Dissertation Award and he was awarded the Tufts Outstanding Graduate Researcher at the Doctoral Level award in 2012 for his thesis work.

February 5

Autonomous Cars for City Traffic

Raul Rojas Gonzalez Freie Universität Berlin


This event will be held on Wednesday, 2/5/2014, at 4:00 p.m. in Boggs Center, Room 239. Please note the special weekday and venue for this event.

Abstract: We have been developing autonomous cars in Berlin since 2007. Our vehicle "MadeInGermany" has been driving in the city since 2011, covering stretches of up to 40 km (highway and streets) fully automatically.

In this talk I will present the hardware and software architecture of our vehicles. The main challenge is to safely detect, in real time, all obstacles and cars in a dynamic environment, and also to provide adequate control commands. The sensor architecture is still heavily loaded towards laser scanners and radars, but we are migrating towards a computer vision based approach using our own stereoscopic cameras. I will show some videos of the car driving in Berlin, Texas, and Mexico City, and will discuss the problems associated with a car able to navigate any conceivable city environment. I will comment on the current timetable of German car manufacturers for the introduction of autonomous cars.

About the Speaker: Raul Rojas Gonzalez is Professor of Computer Science and Mathematics at the Free University of Berlin and a renowned specialist in artificial neural networks.

He is now leading an autonomous car project called Spirit of Berlin. He and his team were awarded the Wolfgang von Kempelen Prize for his work on Konrad Zuse and the history of computers. His current research and teaching revolves around artificial intelligence and its applications. The soccer playing robots he helped build won world championships in 2004 and 2005. In 2009 the Mexican government created the Raul Rojas Gonzalez Prize for scientific achievement by Mexican citizens. He holds degrees in mathematics and economics.

February 7

Can You Hide in an Internet Panopticon?

Bryan Ford Yale University

Abstract: Many people have legitimate needs to avoid their online activities being tracked and linked to their real-world identities - from citizens of authoritarian regimes, to everyday victims of domestic abuse or law enforcement officers investigating organized crime. Current state-of-the-art anonymous communication systems are based on onion routing, an approach effective against localized adversaries with a limited ability to monitor or tamper with network traffic. In an environment of increasingly powerful and all-seeing state-level adversaries, however, onion routing is showing cracks, and may not offer reliable security for much longer. All current anonymity systems are vulnerable in varying degrees to five major classes of attacks: global passive traffic analysis, active attacks, "denial-of-security" or DoSec attacks, intersection attacks, and software exploits.

The Dissent project is developing a next-generation anonymity system representing a ground-up redesign of current approaches. Dissent is the first anonymous communication architecture incorporating systematic protection against the five major vulnerability classes above. By switching from onion routing to alternate anonymity primitives offering provable resistance to traffic analysis, Dissent makes anonymity possible even against an adversary who can monitor most, or all, network communication. A collective control plane ensures that a group of anonymous users behave indistinguishably even if an adversary interferes actively, such as by delaying messages or forcing users offline. Protocol-level accountability enables groups to identify and expel misbehaving nodes, preserving availability and preventing adversaries from using denial-of-service attacks to weaken anonymity. The system computes anonymity metrics that give users realistic indicators of anonymity, even against adversaries capable of long-term intersection and statistical disclosure attacks, and gives users control over tradeoffs between anonymity loss and communication responsiveness. Finally, virtual machine isolation offers anonymity protection against browser software exploits of the kind recently employed to de-anonymize Tor users. While Dissent is still a proof-of-concept prototype with important functionality and performance limitations, preliminary evidence suggests that it may in principle be possible - though by no means easy - to hide in an Internet panopticon.

About the Speaker: Bryan Ford is an Assistant Professor of Computer Science at Yale University. He does research in operating systems, networking, and virtualization, and dabbles in storage systems, programming languages, and formal methods. His goal is to create a new, fully decentralized ("peer-to-peer") paradigm for applications distributed across personal devices and Internet services, built from novel OS abstractions such as personal groups, structured streams, and lightweight sandboxing.


February 10

Test-Driven Development: Evaluating & Influencing the Development Process

Kevin Buffardi Virginia Tech

This event will be held on Monday, 2/10/2014, at 3:00 p.m. in Boggs Center, Room 122. Please note the special weekday and venue for this event.

Abstract: Software testing can objectively verify that code behaves as expected. Consequently, testing is vital to software development. In particular, Test-Driven Development (TDD) is a popular approach in industry that involves incremental testing throughout the development process. While computer science students could benefit from learning such professional skills, evidence shows that many programmers are reluctant to change their development process by adopting TDD. Over several years of teaching fundamental programming courses, we studied how different approaches to testing affected code quality. We also developed an adaptive feedback system that reinforces good testing practices by automatically customizing feedback and rewards based on how individual students test and develop programming assignments. In this talk, I will discuss the adaptive feedback system's impact on students' testing and outline plans for continuing this research.
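The red-green-refactor cycle at the heart of TDD can be shown with a minimal sketch (the `median` function and its tests are hypothetical examples, not from the talk): the tests are written first, and only then is just enough code written to make them pass.

```python
# Step 1 (red): write the tests first; running them now would fail,
# because `median` does not exist yet.
def test_median():
    assert median([3, 1, 2]) == 2          # odd number of values
    assert median([4, 1, 3, 2]) == 2.5     # even number of values

# Step 2 (green): write just enough code to make the tests pass.
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

test_median()  # Step 3: the tests pass, so refactor with confidence.
```

Incremental testing of this kind is exactly the habit the adaptive feedback system described above is designed to reinforce.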


About the Speaker: After earning his Bachelor's in Computer Science and Master's in Human-Computer Interaction, Kevin Buffardi worked as a User Experience Specialist and consulted for a variety of products from mobile phones to medical devices. He then returned to complete his Ph.D. at Virginia Tech, where he won awards for teaching and published research on software testing and adaptive feedback eLearning systems.

February 13

P versus NP: Which Problems Computers Can and Cannot Solve

Anastasia Kurdia Bucknell University

This event will be held on Thursday, 2/13/2014, at 3:30 p.m. in Boggs Center, Room 239. Please note the special weekday, time, and venue for this event.

Abstract: In this talk I will introduce the audience to the "P equals NP?" question, the major unsolved problem in computer science. We will understand the meaning of the question, and how it affects anyone who writes computer programs. We will encounter several problems that have very simple formulations and that cannot currently be solved by computers. (As an example, consider the task of finding the largest group of friends on a social network in which every person is a friend of every other person. A program for finding such a group may run for years, even on the fastest modern computer, and it's common to view such a problem as practically unsolvable). We will also discuss how resolving the "P equals NP?" question would affect security of credit card transactions on the Internet.
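The friend-group example in the abstract is the maximum clique problem, an NP-hard problem. A brute-force sketch (names and network here are illustrative) makes the difficulty concrete: the search examines every candidate group, so its running time grows exponentially with the number of people.

```python
from itertools import combinations

def largest_clique(friends):
    """Brute-force search for the largest group in which every person
    is a friend of every other person (a maximum clique).

    `friends` maps each person to the set of their friends. Trying all
    candidate groups takes exponential time, which is why this problem
    becomes practically unsolvable for large networks.
    """
    people = list(friends)
    for size in range(len(people), 0, -1):      # try the largest groups first
        for group in combinations(people, size):
            # every pair in the group must be mutual friends
            if all(b in friends[a] for a, b in combinations(group, 2)):
                return set(group)
    return set()

network = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "cat"},
    "cat": {"ann", "bob", "dan"},
    "dan": {"cat"},
}
print(largest_clique(network))  # {'ann', 'bob', 'cat'}
```

Note that verifying a proposed group is fast (checking all pairs), while finding the best group appears to require exhaustive search; that asymmetry between checking and finding is the essence of the "P equals NP?" question.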

About the Speaker: Anastasia Kurdia is a Visiting Assistant Professor of Computer Science at Bucknell University, where she enjoys teaching introductory computer science courses. Before joining Bucknell in 2012, she taught at Connecticut College as a full-time adjunct faculty, and prior to that, she was a postdoctoral researcher at Smith College and studied combinatorial properties of protein structures. She received a Ph.D. degree in Computer Science from The University of Texas at Dallas in 2010. Her graduate work focused on geometric algorithms and their application to molecular biology. Anastasia's current research and personal work aims to make computer science education more effective, more inclusive and more fun.

February 14

What Will a Companionable Computational Agent Be Like?

Yorick Wilks Florida Institute for Human and Machine Cognition

Abstract: The talk begins by looking at the state of the art in modeling realistic conversation with computers over the last 40 years. I then move on to ask what we would want in a conversational agent that was designed for a long-term relationship with a user, rather than the carrying out of a single brief task, like buying a railway ticket. Such an agent I shall call "companionable": I shall distinguish several functions for such agents, but the feature they share will be that, in some definable sense, a computer Companion knows a great deal about its owner and can use that information. By way of illustration, the talk describes the functionality and system modules of a Senior Companion (SC), one of two initial prototypes built in the first two years of the large-scale EC Companions project. The SC provides a multimodal interface for eliciting and retrieving personal information from the elderly user through a conversation about their photographs. The Companion, through conversation, elicits life memories, often prompted by discussion of their photographs. The demonstration is primitive but plausible, and one of its key features is an ability to break out of the standard AI constraint on very limited pre-programmed knowledge worlds into a wider, unbounded world of knowledge in the Internet by capturing web knowledge in real time, again by Information Extraction methods. The talk finally discusses the prospects for machine learning in the conversational modeling field and progress to date on incorporating notions of emotion into AI systems. An outline is given of a current research project for a Companion for cognitively-damaged veterans.


About the Speaker: Yorick Wilks is Professor of Artificial Intelligence at the University of Sheffield, and is also a Senior Research Fellow at the Oxford Internet Institute at Balliol College. He studied math and philosophy at Cambridge, was a researcher at Stanford AI Laboratory, and then Professor of Computer Science and Linguistics at the University of Essex, before moving back to the US for ten years to run a successful and self-funded AI laboratory in New Mexico, the Computing Research Laboratory, a new institute set up by the state of New Mexico as a center of excellence in artificial intelligence in 1985. He has participated in and been the PI of numerous UK, US and EC grants, including the UK-government funded Interdisciplinary Research Centre AKT (2000-2006) on active knowledge structures on the web (www.aktors.org). His most recent book is Close Encounters with Artificial Companions (Benjamins, in press 2010). He is a Fellow of the American and European Associations for Artificial Intelligence, a member of the UK Computing Research Council, and on the boards of some fifteen AI-related journals. In 2008 he was awarded the Zampolli Prize at LREC-08 in Marrakech, and the ACL Lifetime Achievement Award at ACL08 in Columbus, OH. In 2009 he was awarded the Lovelace Medal by the British Computer Society. In 2009 he was elected a Fellow of the ACM.

February 20

Computational Game Theory: From Theory to Practice and Back

Albert Jiang University of Southern California

This event will be held on Thursday, 2/20/2014, at 3:00 p.m. in Stanley Thomas, Room 316. Please note the special weekday and venue for this event.

Abstract: Large-scale systems with multiple self-interested agents are becoming ubiquitous in all aspects of modern life, from electronic commerce and social networks to transportation, healthcare and the smart grid. Due to the complexity of these multiagent systems, there is increasing demand for automated tools for intelligent decision making, both for users trying to navigate the system and for system designers trying to ensure the economic efficiency and security of the system. Making intelligent decisions in multiagent systems requires prediction of the behavior of self-interested agents. Game-theoretic solution concepts like Nash equilibrium and correlated equilibrium are mathematical models of such self-interested behavior; however, standard computational methods fail to scale up to real-world systems. Furthermore, real-world systems have uncertain environments and boundedly-rational agents, and as a result certain classical assumptions of game theory no longer hold.

My work in computational game theory aims to bridge this gap between theory and practice. In this talk I will focus on three topics. First, I present Action-Graph Games, a framework for computational analysis of large games that includes a general modeling language, a set of novel efficient algorithms for computing solution concepts, and publicly available software tools. Second, I will talk about applying computational game theory to real-world infrastructure security, in particular the TRUSTS system that generates randomized patrol strategies for fare enforcement in the LA Metro transit system. A major challenge in deploying our solutions is execution uncertainty: patrols are often interrupted. I propose a general approach to dynamic patrolling games in uncertain environments, which provides patrol strategies with contingency plans. Third, I discuss modeling bounded rationality of human decision makers, in the context of security applications. Most existing behavior models require estimation of parameters from data, which might not be available. I propose monotonic maximin, a new solution concept that is parameter-free and provides guarantees against a wide variety of boundedly-rational behavior.


About the Speaker: Albert Jiang is a postdoctoral research associate in Computer Science at the University of Southern California, working with Professor Milind Tambe and the Teamcore research group. He received his Ph.D. under Professor Kevin Leyton-Brown, at the Laboratory of Computational Intelligence (LCI), Department of Computer Science, University of British Columbia. His research addresses computational problems arising in game theory and multiagent systems, including the efficient computation of solution concepts such as Nash equilibrium, Stackelberg equilibrium and correlated equilibrium, as well as applications of game-theoretic computation to real-world domains such as large-scale infrastructure security and electronic commerce.

 

February 21

Dialogue as Collaborative Problem Solving: Moving Beyond Siri

James Allen Florida Institute for Human and Machine Cognition/University of Rochester

Abstract: Automated spoken dialogue systems, which can interact with a user in natural language, are now in use for a variety of applications, from automatic telephone call handling systems to personal assistants such as Siri. Such systems, however, rigidly control the allowable interactions and generally cannot interpret utterances in context like humans do. What would it take to be able to build a dialogue system that could carry on conversation with human-like competence? I will describe work that we have done over the past few decades on trying to answer this question, and show a series of example systems that reveal the successes as well as the problems that remain. Underlying all this work is the hypothesis that dialogue results from human abilities to reason about and perform collaborative activities with each other. By viewing dialogue as collaborative problem solving, we can start to address some of the challenges in building systems that could display human-like conversational competence.


About the Speaker: James Allen is an Associate Director of the Institute for Human and Machine Cognition in Pensacola, Florida, as well as the John H. Dessauer Distinguished Professor of Computer Science at the University of Rochester. He received his PhD in Computer Science from the University of Toronto and was a recipient of the Presidential Young Investigator award from NSF in 1984. A Founding Fellow of the American Association for Artificial Intelligence (AAAI), he was editor-in-chief of Computational Linguistics, the premier journal in natural language processing, from 1983-1993. He has authored numerous research papers in the areas of natural language understanding, knowledge representation and reasoning, and spoken dialogue systems.

February 24

Defending Data from Digital Disasters: Engineering Next Generation Systems for Emerging Problems in Data Science

Eric Rozier University of Miami

This event will be held on Monday, 2/24/2014, at 3:00 p.m. in Boggs Center, Room 122. Please note the special weekday and venue for this event.

Abstract: Of the data that exists in the world, 90% was created in the last two years. Last year over 2,837 exabytes of data were produced, representing an increase of 230% from 2010. By next year this total is expected to increase to 8,591 exabytes, reaching 40,026 exabytes by 2020. Our ability to create data has already exceeded our ability to store it, with data production exceeding storage capacity for the first time in 2007. Our ability to analyze data has also lagged behind the deluge of digital information, with estimates putting the percent of data analyzed at less than 1%, while an estimated 23% of data created would be useful if analyzed. Reliability, security, privacy, and confidentiality needs are outpacing our abilities as well, with only 19% of data protected. For these reasons we need systems that are not only capable of storing the raw data, but of doing so in a trustworthy manner while enabling state-of-the-art analytics.

In this talk we will explore problems in data science applications to medicine, climate science, natural history, and geography, and outline the reliability, availability, security, and analytics challenges to data in these domains. We will present novel, intelligent systems designed to combat these issues by using machine learning to apply a unique software-defined approach to data center provisioning, with dynamic architectures and on-the-fly reconfigurable middleware layers that address emergent problems in complex systems. Specifically, we will address issues of data dependence relationships and the threat they pose to long-term archival stores and curation, as well as techniques to protect them using novel theoretical constructs of second-class data and shadow syndromes. We will discuss the growing problem presented by the exponential explosion of both system and scientific metadata, and illustrate a novel approach to metadata prediction, sorting, and storage that allows systems to better scale to meet growing data needs. We will explore problems in cloud access to private records, illustrating the pitfalls of trusting provider claims with real-world audits conducted by our lab, which successfully extracted synthetic patient data through inadvertent side-channels, and demonstrate novel search techniques that allow regular-expression-based search over encrypted data while placing no trust in the cloud provider, ensuring zero information leakage through side-channels. Finally, we will conclude by discussing future work in systems engineering for Big Data, outlining current challenges and the pitfalls facing next-generation systems for data science.

About the Speaker: Dr. Eric Rozier is an Assistant Professor of Electrical and Computer Engineering, head of the Trustworthy Systems Engineering Laboratory, and director of the Fortinet Security Laboratory at the University of Miami in Coral Gables, Florida. His research focuses on the intersection of problems in systems engineering with Big Data, Cloud Computing, and issues of reliability, performability, availability, security and privacy. Prior to joining Miami, Dr. Rozier served as a research scientist at NASA Langley Research Center and the National Center for Supercomputing Applications, and as a Fellow at IBM Almaden Research Center. His work in Big Data and systems engineering has been the subject of numerous awards, including recently being named a Frontiers of Engineering Education Faculty Member by the National Academy of Engineering. Dr. Rozier completed his PhD in Computer Science at the University of Illinois at Urbana-Champaign, where he served as an IBM Doctoral Fellow and worked on issues of reliability and fault-tolerance of the Blue Waters supercomputer with the Information Trust Institute. Dr. Rozier is a long-time member of the IEEE and ACM, and a member of the AIAA Intelligent Systems Technical Committee, where he serves with the Publications and the Professional Development, Education, and Outreach subcommittees.

February 28

Trust Between Humans and Intelligent, Autonomous Agents

David Atkinson Florida Institute for Human and Machine Cognition

Abstract: There are a number of time-critical and mission-critical applications in health, defense, transportation, and industry where we recognize an urgent need to apply intelligent, autonomous agents to problems that humans find difficult or dangerous to perform without assistance. It is essential that mixed human and intelligent agent teams tackling such challenges have appropriate interdependencies and reliance upon one another based on mutual trust.

This talk will explore interpersonal trust between humans and intelligent autonomous agents. We will present recent research that is leading us towards design of intelligent, autonomous agents that enable "reasonable" judgments of trustworthiness by interacting in a form and manner compliant with human social expectations. In other words, we might say we are reverse engineering the human social interface.

A significant body of research tells us that the cognitive, emotional, and social predispositions of humans play a strong role in trust of automation, and that we are predisposed to treat machines as social actors. We predict that as intelligent, autonomous agents interact with us in more sophisticated and natural ("human-like") ways, perhaps even embodied in humanoid robots, they will exhibit the kinds of behavior that will increasingly evoke our innate anthropomorphic social predispositions. The result will be attribution of mental states and characteristics to intelligent autonomous agents, among these trustworthiness. Our concern is that the social predispositions and inferential shortcuts that work so well for human interpersonal trust are likely to lead us astray in ascribing trustworthiness to autonomous agents insofar as our fundamental differences lead to misunderstanding and unexpected behavior. Intelligent autonomous agents are not human, do not have our senses or reason as we do, and do not have a stake in human society or share common human experience, culture, or biological heritage. These differences are potentially very significant and therefore likely to result in misattribution of human-like characteristics to autonomous agents. The foreseeable results include miscommunication, errors of delegation, and inappropriate reliance; all symptomatic of poorly calibrated trust. To address this challenge, a major endeavor of our research group centers on the creation of mechanisms for intelligent autonomous agents to correctly use conventions of human social interaction to provide reliable signals that are indicative of the agent's state during the conduct of joint activity, and to do so in a manner that enables a human partner to construct a well-justified structure of beliefs about the agent. Our hypothesis is that this will lead to better calibrated human trust and, consequently, better performance of the human-machine team.

About the Speaker: Dr. David J. Atkinson is a Senior Research Scientist at the Florida Institute for Human and Machine Cognition (IHMC). Dr. Atkinson’s current area of research targets applications of intelligent, autonomous agents as partners to humans in teamwork. His major focus is on fostering appropriate reliance and interdependency between human and agents, and specifically the role of human interpersonal trust and social interaction between humans and intelligent, autonomous agents. He is also interested in cognitive robotics, meta-reasoning and self-awareness, and affective computing. Dr. Atkinson worked over 20 years at Caltech/JPL where his work spanned basic research in artificial intelligence, autonomous systems and robotics with applications to robotic spacecraft, control center automation, and science data analysis. The research by his group at JPL resulted in many successful applications of intelligent systems to deep space exploration missions, including the Voyager, Galileo, Magellan, and Cassini spacecraft, and the "Spirit" and "Opportunity" Mars Exploration Rovers. As a senior executive, Dr. Atkinson managed a Division of JPL, served at NASA Headquarters where he directed Lunar robotic missions, and was also a program manager at the Air Force Office of Scientific Research before returning to basic research at IHMC. Dr. Atkinson holds a Bachelor’s degree in Psychology from University of Michigan, dual Master of Science and Master of Philosophy degrees in Computer Science (Artificial Intelligence) from Yale University, and the Doctor of Technology degree in Computer Systems Engineering from Chalmers University of Technology in Sweden.

March 14

Collective Annotation: From Crowdsourcing to Social Choice

Ulle Endriss Institute for Logic, Language and Computation, University of Amsterdam

Abstract: Crowdsourcing is an important tool, e.g., in computational linguistics and computer vision, for efficiently labeling large amounts of data using nonexpert annotators. The individual annotations collected then need to be aggregated into a single collective annotation that can serve as a new gold standard. In this talk, I will introduce the framework of collective annotation, in which we view this problem of aggregation as a problem of social choice, similar to the problem of aggregating the preferences of individual voters in an election. I will present a formal model for collective annotation in which we can express desirable properties of diverse aggregation methods as axioms, and I will report on the empirical performance of several such methods on annotation tasks in computational linguistics, using data we collected by means of crowdsourcing. The talk is based on joint work with Raquel Fernandez, Justin Kruger and Ciyang Qing.
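The simplest aggregation method in this setting is plurality voting over each item's labels; the sketch below (with hypothetical data, and not necessarily one of the axiomatized methods from the talk) illustrates what "aggregating individual annotations into a collective annotation" means concretely.

```python
from collections import Counter

def plurality_annotation(annotations):
    """Aggregate crowdsourced labels into a collective annotation.

    `annotations` maps each item to the list of labels assigned by
    individual annotators. Each item receives its most frequent label,
    i.e., the plurality rule from social choice theory.
    """
    gold = {}
    for item, labels in annotations.items():
        (label, _count), = Counter(labels).most_common(1)
        gold[item] = label
    return gold

crowd = {
    "sentence-1": ["noun", "noun", "verb"],
    "sentence-2": ["verb", "verb", "verb"],
}
print(plurality_annotation(crowd))  # {'sentence-1': 'noun', 'sentence-2': 'verb'}
```

Axiomatic analysis then asks which desirable properties (unanimity, anonymity, and so on) a rule like this does or does not satisfy, and how it compares empirically with more sophisticated aggregators.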

About the Speaker: Ulle Endriss is an associate professor at the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam, where he directs the interdisciplinary Master of Logic programme. Dr. Endriss carries out research at the interface of logic, artificial intelligence, and mathematical economics. Specifically, he has worked on computational social choice, multiagent resource allocation, negotiation, combinatorial auctions, knowledge representation, automated reasoning, modal and temporal logics, communication in multiagent systems, and software tools for teaching logic. He received his PhD in Logic and Computation from King's College London in 2003, after having studied Computer Science in Karlsruhe, London, and Berlin.

March 21

Challenges in Securing Vehicular Networks

Rajeev Shorey Program Director, Media Lab Asia

Abstract: Vehicular networks have been the subject of much attention lately. A Vehicular Ad-Hoc Network, or VANET, is a form of mobile ad-hoc network, which provides communications among nearby vehicles and between vehicles and nearby fixed equipment, usually described as roadside equipment. Enabled by short-range to medium-range communication systems (vehicle-to-vehicle or vehicle-to-roadside), the vision of vehicular networks includes real-time and safety applications, sharing the wireless channel with mobile applications from a large, decentralized array of service providers. Vehicular safety applications include collision and other safety warnings. Non-safety applications include real-time traffic congestion and routing information, high-speed tolling, mobile infotainment, and many others.

VANETs allow setting up communication links between vehicles for the exchange of information. By exchanging information, vehicles can warn drivers to prepare for a dangerous situation, e.g., through post-crash notification. Emerging applications in VANETs have opened tremendous business opportunities and navigation benefits, but they also pose challenging research problems in security provisioning. In any vehicular ad-hoc network, there is always a possibility of incorrect messages being transmitted, either due to faulty sensors or intentional malicious activity.

The goal of the talk is to highlight technical challenges and key developments in the area of security in vehicular networks. The talk will primarily cover research & business challenges in the area of vehicular security. The talk will also cover emerging services and applications in the Automotive Space.

Keywords: V2V, V2I, OnStar, Safety, DSRC, WAVE Stack, Security, Public Key Cryptography, Private Key Cryptography, DoS attacks.

About the Speaker: Dr. Rajeev Shorey is the Project Director at IT Research Academy, Media Lab Asia, Dept of IT, Government of India. He is the Co-Founder of ChiSquare Analytics, a start-up in Data Analytics and Big Data. From 2009 to 2012, Dr. Shorey was the (Founder) President of NIIT University, India. Dr. Shorey received his Ph.D and MS (Engg) in Electrical Communication Engineering from the Indian Institute of Science (IISc), Bangalore, India in 1997 and 1991 respectively. He received his B.E degree in Computer Science and Engineering from IISc, Bangalore in 1987 and the B.Sc degree from St. Stephen’s College, Delhi University in 1984.

March 25

From 0 to 1 Million Samples: Theoretically Principled Machine Learning Algorithms for Real-World Applications

Francesco Orabona Toyota Technological Institute

This event will be held on Tuesday, 3/25/2014 at 3:30 p.m. in Dinwiddie Hall, Room 102. Please note the special weekday and venue for this event.

Abstract: Most research in machine learning has addressed problems where a reasonably sized training set is available and the scalability of the algorithms is not a concern. While this is a fundamental setting, it does not fit many important real-world applications. On one hand, as the big data paradigm gains momentum, the scalability of learning algorithms has become the main bottleneck to obtaining good performance in a reasonable amount of time. On the other hand, in many real-world applications the amount of training data is very limited, and this scarcity is the main obstacle to automatic learning.

In this talk, the speaker will present some of his latest results, which aim to solve tasks in both the small-data and big-data regimes. In particular, for the first setting, he will present an algorithm able to automatically transfer knowledge from other, possibly relevant, sources of information to bootstrap performance on new tasks with little available data. For the second setting, he will present the first theoretically optimal parameter-free stochastic gradient descent algorithm, which can effortlessly train learning algorithms over millions of training samples.

Empirical and theoretical results will be shown for both domains.

About the Speaker: Francesco Orabona is a Research Assistant Professor at the Toyota Technological Institute at Chicago. His research interests are in the area of online learning and transfer learning, with applications to robotics and computer vision. He received a PhD degree in Electrical Engineering at the University of Genoa in 2007. He was a post-doctoral researcher with Barbara Caputo and Nicolò Cesa-Bianchi. He is (co)author of more than 40 peer-reviewed papers.

April 23

From Design Time To Run Time: Formal Methods for Ensuring the Safety of Safety-Critical Aeronautics Systems

Kristin Rozier NASA Ames Research Center

This event will be held on Wednesday, 4/23/2014, at 4:00 p.m. in Boggs Center, Room 239. Please note the special weekday, time, and venue for this event.

Abstract: Formal verification techniques are growing increasingly vital for the development of safety-critical systems. Techniques such as design-time model checking and runtime verification have been used successfully to assure systems for air traffic control, airplane separation assurance, autopilots, logic designs, medical devices, and other functions that ensure human safety.

In 2007, we established Linear Temporal Logic (LTL) satisfiability checking as a standard for property assurance: system behavioral properties written early in the system-design process and used for analysis across all development phases, from design time to run time, increase the efficiency, consistency, and quality of the system under development. We introduced a set of 30 symbolic LTL encodings and demonstrated that a portfolio approach using these encodings yields significant, sometimes exponential, improvement over the standard encoding for symbolic LTL satisfiability checking. The increased scalability provided by this approach has had major impacts in aeronautics: we use these formal verification techniques to ensure that no potentially catastrophic design flaws remain in the design of the next Air Traffic Control system before the next stage of production.

We also introduce a new way of encoding LTL on board FPGAs that enables not just runtime monitoring but runtime System Health Management (SHM) in a flight-certifiable way. Our real-time, Realizable, Responsive, Unobtrusive Unit (rt-R2U2) meets the emerging needs for SHM of new safety-critical embedded systems such as automated vehicles, Unmanned Aerial Systems (UAS), and small satellites. SHM for these systems must handle unexpected situations, adapt specifications quickly during flight testing between closely timed consecutive missions, and enable more advanced probabilistic reasoning for diagnostics and prognostics while running aboard limited hardware without affecting the certified on-board software. We highlight the unique contributions that can enable a fire-fighting UAS to fly!
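To give a flavor of the runtime-monitoring idea behind this work, here is a minimal sketch (in Python rather than the FPGA encoding from the talk, and not the rt-R2U2 implementation itself) of an online monitor for a simple LTL-style property, G(request → F grant): every request must eventually be granted. The event names and trace are hypothetical.

```python
# Illustrative sketch of an online monitor for G(request -> F grant).
# Not the rt-R2U2 encoding from the talk; event names are assumptions.

def make_monitor():
    """Return a stateful step function: feed it one event per time step."""
    state = {"pending": 0}  # requests observed but not yet granted

    def step(event):
        if event == "request":
            state["pending"] += 1
        elif event == "grant" and state["pending"] > 0:
            state["pending"] -= 1
        # On the finite trace seen so far, the property holds iff no
        # request is still outstanding at the end of the prefix.
        return state["pending"] == 0

    return step

monitor = make_monitor()
trace = ["request", "idle", "grant", "request", "grant"]
verdicts = [monitor(event) for event in trace]
# verdicts -> [False, False, True, False, True]
```

Each step runs in constant time and constant memory, which is the kind of predictability that makes monitors of this shape attractive for resource-limited, certifiable on-board hardware.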

About the Speaker: Dr. Kristin Y. Rozier is a Research Computer Scientist in the Intelligent Systems Division of NASA Ames Research Center and holds a courtesy appointment at Rice University. She earned a Ph.D. from Rice University in 2012 and B.S. and M.S. degrees from The College of William and Mary in 2000 and 2001. Dr. Rozier's research focuses on automated techniques for the formal specification, validation, and verification of safety-critical systems. Her primary research interests include design-time checking of system logic and system requirements; runtime system health management; and safety and security analysis. Her applications of computer science in the aeronautics domain earned her the American Helicopter Society's Howard Hughes Award, the American Institute of Aeronautics and Astronautics Intelligent Systems Distinguished Service Award, and the Women in Aerospace Inaugural Initiative-Inspiration-Impact Award. She has also earned the Lockheed Martin Space Operations Lightning Award, the NASA Group Achievement Award, and Senior Membership in IEEE, AIAA, and SWE. Dr. Rozier serves on the AIAA Intelligent Systems Technical Committee, where she chairs both the Publications and the Professional Development, Education, and Outreach (PDEO) subcommittees. She has served on the NASA Formal Methods Symposium Steering Committee since working to found that conference in 2008, and is serving as PC chair for the second time this year.
April 25

Selective Autofocusing in Optically Sectioned Automated Scanning Microscopy

Tyler Schlichenmeyer Tulane University

Abstract: The advent of new techniques in automated scanning microscopy, specifically structured illumination microscopy, calls for a re-examination of autofocusing methods in order to keep imaging times compatible with intra-operative timeframes. We re-examine candidate autofocus functions on our test data and compare local search algorithms to choose an optimal strategy for optimizing the selected function; in the process, we also introduce pattern modulation depth as a novel autofocus function. Because we cannot afford to refocus at every new frame, we must refocus selectively, so we developed an algorithm to keep the sample in focus for as long as possible. First, we optimize our sampling size to maintain accuracy while minimizing focus evaluations. Then, we train a Gaussian process (GP) to estimate the change in focal plane at each frame for each scan direction, using previously seen data as input, and direct the system to choose the direction that minimizes this change.
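As a rough illustration of what "evaluate an autofocus function and locally search for the best focal plane" means, here is a minimal sketch assuming a simple variance-based sharpness metric and a synthetic image stack. The talk's actual metric (pattern modulation depth), the microscope stage, and the GP-based direction prediction are not modeled here; all function names and the toy data are hypothetical.

```python
# Illustrative autofocus sketch: a variance-based sharpness metric plus
# greedy hill climbing over focal positions. Assumed stand-ins only.

def sharpness(image_rows):
    """Variance of pixel intensities, a common stand-in focus function."""
    pixels = [p for row in image_rows for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def hill_climb_focus(image_at, z_start, z_step, z_min, z_max):
    """Greedy local search for the focal position maximizing sharpness."""
    z, best = z_start, sharpness(image_at(z_start))
    while True:
        candidates = [zc for zc in (z - z_step, z + z_step)
                      if z_min <= zc <= z_max]
        score, z_next = max((sharpness(image_at(zc)), zc)
                            for zc in candidates)
        if score <= best:
            return z  # no neighbor improves: local maximum found
        z, best = z_next, score

# Synthetic focus stack: contrast (hence sharpness) peaks at z = 5.
def fake_image(z):
    contrast = max(0, 10 - abs(z - 5))
    return [[50 - contrast, 50 + contrast] * 4 for _ in range(4)]

best_z = hill_climb_focus(fake_image, z_start=0, z_step=1, z_min=0, z_max=10)
# best_z -> 5
```

Because each sharpness evaluation costs an image acquisition in practice, minimizing the number of evaluations, as the abstract describes with optimized sampling sizes and GP-guided direction choice, matters far more than it does in this toy example.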


About the Speaker: Tyler Schlichenmeyer is a senior undergraduate with a primary major in Biomedical Engineering who is completing the requirements for the Computer Science coordinate major. His thesis work for the Biomedical Engineering requirements involved developing the software and hardware interactions for the structured illumination microscope used in this presentation. For his BME team design capstone, he was the primary programmer for custom inventory software combining Python and SQL. Next year he will attend Washington University in St. Louis to pursue a PhD in biomedical engineering with a focus on neural engineering and brain-computer interfaces. Around campus, he is actively involved in the Phi Sigma Pi National Honors Fraternity, Tulane Quiz Bowl, and the Tulane Club Baseball team.

School of Science and Engineering, 201 Lindy Boggs Center, New Orleans, LA 70118 504-865-5764 sse@tulane.edu