Colloquia Archives: 2013-2014

Lecture Series Overview

Fall 2013 Colloquia

Below is the list of talks in the computer science seminar series. Unless otherwise noted, the seminars meet on Fridays at 3pm in Stanley-Thomas 302. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv.

September 25

Julia Sets, $\omega$-limit Sets, and Internal Chain Transitivity

Brian Raines, Baylor University

This event will be held on Wednesday, 9/25/2013, at 3:30 p.m. in Stanley Thomas Hall 302. Please note the special weekday and time for this colloquium.

Abstract: We consider the dynamics of quadratic maps on the complex plane, $f_c(z)=z^2+c$. The interesting dynamics of such a map are carried on the Julia set, which is often a complicated, self-similar space. There are many values of $c$ for which the Julia set is a \emph{dendrite}, a locally connected and uniquely arcwise connected compact metric space. One example is when the value $c$ is strictly pre-periodic; such values are known as Misiurewicz points, and the corresponding Misiurewicz maps are well studied. There are many other parameters for which the Julia set contains infinitely many circles arranged in a ``dendritic'' pattern.

Baldwin gives an efficient encoding of the dynamics of these quadratic maps restricted to the Julia set, into a shift map on a non-Hausdorff itinerary space.  Using this symbolic representation we characterize the $\omega$-limit sets of these particular Julia sets.  We prove that  such quadratic maps have shadowing on their Julia sets, and also that for all such maps, a closed invariant set is an $\omega$-limit set of a point if, and only if, it is \emph{internally chain transitive}.  This implies that the space of $\omega$-limit sets is closed in the hyperspace topology.
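As a concrete aside (not part of the talk), the quadratic family above is easy to experiment with numerically. The standard escape-time criterion tests whether an orbit of $f_c(z)=z^2+c$ stays bounded, and for $c=i$ the critical orbit of $0$ is strictly pre-periodic, which makes $i$ a Misiurewicz point of the kind mentioned in the abstract. A minimal Python sketch:

```python
def escapes(z, c, max_iter=200, radius=2.0):
    """Iterate f_c(z) = z^2 + c from z; True if the orbit leaves |z| <= radius."""
    for _ in range(max_iter):
        if abs(z) > radius:
            return True
        z = z * z + c
    return False

# c = i is a Misiurewicz point: the critical orbit of 0 is strictly pre-periodic,
# 0 -> i -> -1+i -> -i -> -1+i -> -i -> ...  (pre-period 2, period 2)
c = complex(0, 1)
orbit = [complex(0, 0)]
for _ in range(5):
    orbit.append(orbit[-1] ** 2 + c)
```

Because the critical orbit is eventually periodic rather than attracted to a cycle, the Julia set of $f_i$ is a dendrite.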

About the Speaker: Brian Raines received his doctorate in 2002 from the University of Oxford, and is now an associate professor of mathematics at Baylor University. He has worked extensively in topology, topological dynamics and ergodic theory, chaos, the theory of inverse limit spaces, and theoretical computer science, and is perhaps best known for his work on Ingram's Conjecture. Dr. Raines has recently been elected to the Baylor Fellows Program in honor of his excellence in teaching.

September 27

Automatic Techniques for Termination and Complexity Analysis of Programs

Supratik Mukhopadhyay, Louisiana State University

Abstract: Analysis of termination and other liveness properties of an imperative program can be reduced to termination proof synthesis for simple loops, i.e., loops with only variable updates in the loop body.

Among simple loops, the subset of Linear Simple Loops (LSLs) is particularly interesting because it is common in practice and expressive in theory. Existing techniques can successfully synthesize a linear ranking function for an LSL if one exists. However, when a terminating LSL does not have a linear ranking function, these techniques fail. In this talk we describe an automatic method that generates proofs of universal termination for LSLs based on the synthesis of disjunctive ranking relations.

The method repeatedly finds linear ranking functions on parts of the state space and checks whether the transitive closure of the transition relation is included in the union of the ranking relations. Our method extends the work of Podelski and Rybalchenko. We have implemented a prototype of the method and have shown experimental evidence of the effectiveness of our method.
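As a toy illustration of the disjunctive idea (an illustrative sketch, not the authors' tool): the classic loop `while x >= 0: x += y; y -= 1` terminates but is known to admit no linear ranking function. The union of two linear ranking relations, one decreasing y while y is nonnegative and one decreasing x while x is nonnegative, does cover the transitive closure of its transition relation, which the snippet below checks on a concrete run:

```python
def run(x, y):
    """Execute: while x >= 0: x += y; y -= 1. Returns the list of visited states."""
    trace = [(x, y)]
    while x >= 0:
        x, y = x + y, y - 1
        trace.append((x, y))
    return trace

def in_R1(s, t):
    """Ranking relation for rank y: bounded below at s and strictly decreasing."""
    return s[1] >= 0 and t[1] < s[1]

def in_R2(s, t):
    """Ranking relation for rank x: bounded below at s and strictly decreasing."""
    return s[0] >= 0 and t[0] < s[0]

trace = run(0, 10)
# Every pair (s_i, s_j) with i < j, i.e. the transitive closure of the
# transition relation on this run, lies in R1 ∪ R2.
ok = all(in_R1(trace[i], trace[j]) or in_R2(trace[i], trace[j])
         for i in range(len(trace)) for j in range(i + 1, len(trace)))
```

Checking inclusion in a union of ranking relations, rather than searching for a single ranking function, is exactly what makes the disjunctive approach strictly more powerful.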

We also introduce a new technique for refining the complex control structure of loops that occur in imperative programs. We first introduce a new way of describing program execution patterns: (+,.)-path expressions, a subset of conventional path expressions with the operators ∨ and * eliminated. The programs induced by (+,.)-path expressions have no path interleaving or skipping over inner loops, which are the two main issues that cause imprecision in program analysis. Our refinement process starts from a conventional path expression E obtained from the control flow graph, and aims to calculate a finite set of (+,.)-path expressions such that the language generated by E is equivalent to the union of the languages generated by the (+,.)-path expressions. In theory, a conventional path expression can generate an infinite set of (+,.)-path expressions. To remedy that, we use abstract interpretation techniques to prune the infeasible paths. In practice, the refinement process usually converges very quickly.

We have applied our method to the symbolic computation of average-case bounds on the running time of programs. To the best of our knowledge, it is the first tool that automatically computes average-case bounds. Experiments on a set of complex loop benchmarks clearly demonstrate the utility of our tool.

About the Speaker: Dr. Mukhopadhyay is a faculty member of the Computer Science Division of LSU where he holds the Occidental Chemical Corporation Career Development Chair of the College of Engineering. His research has been supported by NSF, DARPA, NASA, ARO, DOE, industry, and state agencies.

October 4

Agile Computing

Niranjan Suri, Florida Institute for Human and Machine Cognition

Abstract: Wirelessly networked dynamic and mobile environments, such as tactical military environments and disaster recovery operations, pose many challenges to building distributed computing systems. A wide variety of node types and resources, unreliable and bandwidth constrained communications links, high churn rates, and rapidly changing user demands and requirements make it difficult to build systems that satisfy the needs of the users and provide good performance.

Agile computing is an innovative metaphor for distributed computing systems and prescribes a new approach to their design and implementation. Agile computing may be defined as opportunistically discovering, manipulating, and exploiting available computing and communication resources. The term agile highlights the desire both to react quickly to changes in the environment and to take advantage of transient resources that are available only for short periods of time.

This talk will briefly introduce the Agile Computing Middleware and its key components, including the Mockets Communications Library, the Group Manager discovery service, the DisService peer-to-peer information dissemination system, the DSPro proactive dissemination system, and finally the NetProxy that integrates all of the above capabilities with legacy applications.

About the Speaker: Niranjan Suri is a Research Scientist at the Florida Institute for Human and Machine Cognition (IHMC) in Pensacola, FL. He is also a Visiting Scientist at the U.S. Army Research Laboratory in Adelphi, MD. He received his Ph.D. in Computer Science from Lancaster University, England, and his M.Sc. and B.Sc. in Computer Science from the University of West Florida, Pensacola, FL.

October 18

Foundations for Computer Security

Michael Clarkson, George Washington University

Abstract: This talk describes work on developing mathematical and logical foundations for computer security policies. Usually, security policies on computer systems are expressed in terms of confidentiality, integrity, and availability. But we don't really know how to formalize and verify such policies. I propose an alternative: a new mathematical framework called "hyperproperties." Every security policy can be formally expressed using hyperproperties.

A new logic of programs, HyperLTL, enables verification of hyperproperties. Security policies are also usually qualitative, but engineering demands quantitative metrics. I propose information-flow metrics that can be used to quantify security. Study of these metrics led to new discoveries: a new kind of information-flow integrity, a new model of attacker beliefs that resolves an anomaly in previous models of information-flow confidentiality, and a new connection between information flow and database privacy. That connection revealed a new theorem about the tradeoff between privacy and utility.
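One standard metric from the quantitative information-flow literature (shown here for flavor; it is Smith's min-entropy leakage, not necessarily the talk's own definition) is easy to compute for deterministic programs: under a uniform prior on secrets, the leakage is log2 of the number of distinct observable outputs.

```python
from math import log2

def min_entropy_leakage(program, secrets):
    """Min-entropy leakage of a deterministic program under a uniform prior:
    log2 of the number of distinct observable outputs (Smith, 2009)."""
    return log2(len({program(s) for s in secrets}))

secrets = range(16)                                          # a 4-bit secret
leak_check = min_entropy_leakage(lambda s: s == 7, secrets)  # password check
leak_copy = min_entropy_leakage(lambda s: s, secrets)        # copy the secret
```

The password check has two observable outcomes and so leaks 1 bit; copying the secret leaks all 4 bits, quantifying the intuition that the second program is far less secure.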

About the Speaker: Michael Clarkson is an assistant professor in the Department of Computer Science at George Washington University in Washington, D.C. He received a PhD in computer science from Cornell University in 2010. Clarkson's research interests include computer security and programming languages. His work focuses on using principled techniques to define security and to construct secure systems.

November 1

Seeing Shapes: Lessons from Human Vision

Nathaniel Twarog, Massachusetts Institute of Technology

Abstract: The comprehension and representation of shape, both two- and three-dimensional, is one of the most essential challenges of computer vision, playing a critical role in object recognition, visually-guided locomotion, object grasping and manipulation, facial recognition and identification, image segmentation, and content-based image retrieval. However, existing computational approaches to understanding shape still lag far behind the versatility and power of the human visual system in many respects. I will discuss two algorithms for the analysis and representation of shape which attempt to bridge the gap between vision in the human mind and vision in the modern computational system.

I will introduce an algorithm called Puffball, which maps arbitrary two-dimensional silhouettes to intuitive three-dimensional inflations. These inflations provide invaluable insights into the visual properties of two-dimensional shapes, allowing Puffball inflation to be applied to numerous visual tasks, including robust silhouette similarity measurement, fast transfer of material appearance, and segmentation of silhouette parts. The part segmentation approach, which locates minima of principal curvature on the Puffball surface, is far simpler and more intuitive than previous approaches, which use curvature of the silhouette contour; its results also agree more closely with human part segmentation judgments, making Puffball a powerful tool for modeling the representation of shape in human subjects.

I will also describe a novel algorithm for inferring three-dimensional shape from shaded images (often referred to as shape-from-shading), which infers a small number of distinct interpretations of each local image patch in a given scene and uses belief propagation to determine a coherent shape representation that best explains the image in its entirety. This approach is both simple and highly customizable and can be applied to almost any shape inference task, including shape from contours, shape from texture, and shape from sheen; in addition, the underlying representation can be augmented to allow inference of other related variables, including surface texture, material properties, and lighting environment.

About the Speaker: Nathaniel Twarog received a PhD in Cognitive Science from MIT in 2012. His research interests lie at the intersection of computer vision, human vision, and graphics, specializing in the development of algorithms to extract and represent structure, organization, and shape in complex signals.

November 13

STUDENT SEMINAR: Ambient Obstacle Avoidance

Brenan Keller, Tulane University

This event will be held on Wednesday, 11/13/2013, at 3:30 p.m. in Stanley Thomas Hall 302. Please note the special weekday and time for this event.

Abstract: In this talk I will describe my technical contribution and experience concerning an internship at the Florida Institute for Human and Machine Cognition (IHMC).

I helped develop a novel ground vehicle interface called the Ambient Obstacle Avoidance (AOA) interface. The AOA interface advances modern ground vehicle interfaces in two main ways. It provides information lacking in current interfaces and does so in a unique way that allows the operator to leverage ambient vision rather than relying solely on focal vision.

I was tasked with building a physical prototype of the system and subsequently conducting an experiment to ascertain its effectiveness. Combining my results with those from simulated prototypes, the team demonstrated that AOA was far more effective than current interfaces. Internships are a critical tool in learning how to apply theoretical knowledge to real-world situations. My internship was a great learning experience, giving me insight into how research institutes operate and how research develops.

About the Speaker: Brenan Keller is a Tulane undergraduate majoring in Business and completing the coordinate major in Computer Science. He spent the summer of 2012 as an intern at the IHMC in Pensacola, Florida.

November 15

Big Data Applications and Systems Research

Milenko Petrovic, Florida Institute for Human and Machine Cognition

Abstract: The diversity and the amount of human-generated data are growing by leaps and bounds: GPS, motion detection, RFID, vehicle telemetry, biotelemetry, emails, tweets, OCR, medical records, and legal documents are only a few examples.

A Big Data system must be able to collect, process, and store data effectively. Frequently, the data must be collected from highly distributed environments such as sensor networks and mobile systems. Analysis of Big Data often involves statistical machine learning, and the complexity of applying these algorithms over big data requires systems and techniques that can take advantage of the massive parallelism of new hardware such as multicore and GPU processors, cloud computing, large amounts of random-access memory, and flash storage.

In this talk, I first describe challenges of Big Data in several domains including information extraction, content-based routing, ecological monitoring, and cancer research. I then describe our past and ongoing work in developing the systems to address these challenges. Lastly, I give a brief overview of the state-of-the-art systems for large scale data analysis.

About the Speaker: Milenko Petrovic is a Research Scientist at the Florida Institute for Human and Machine Cognition (IHMC). He received a Ph.D. degree in Computer Engineering from the University of Toronto in 2007, where he was a member of the Middleware Systems Research Group (MSRG). His research interests are in the area of large scale data dissemination in mobile and sensor systems. He has worked at IBM on large scale semi-structured data processing in DB2; at Innovative Scheduling on an interactive data analytics system for freight railways; and at Bayes Informatics on a large scale text analysis system applied to projects for O’Reilly Media, GM Research, and Buzzient, a social media analytics company.

November 22

Program Analysis and Optimization for Multi-core Computing

Kleanthis Psarris, Brooklyn College

Abstract: High end parallel and multi-core processors rely on compilers to perform the necessary optimizations and exploit concurrency in order to achieve higher performance. However, source code for high performance computers is extremely complex to analyze and optimize. In particular, program analysis techniques often do not take into account complex expressions during the data dependence analysis phase. Most data dependence tests are only able to analyze linear expressions, even though non-linear expressions occur very often in practice. Therefore, considerable amounts of potential parallelism remain unexploited.

In this talk we propose new data dependence analysis techniques to handle such complex instances of the dependence problem and increase program parallelization. Our method is based on a set of polynomial time techniques that can prove or disprove dependences in source code with non-linear and symbolic expressions, complex loop bounds, arrays with coupled subscripts, and if-statement constraints. In addition, our algorithm can produce accurate and complete direction vector information, enabling the compiler to apply further transformations.

To validate our method we performed an experimental evaluation and comparison against the I-Test, the Omega test, and the Range test on the Perfect and SPEC benchmarks. The experimental results indicate that our dependence analysis tool is accurate, efficient, and more effective in program parallelization than the other dependence tests. The improved parallelization results in higher speedups and better program execution performance in several benchmarks.
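For flavor, the simplest of the classical dependence tests can be sketched in a few lines (a hedged illustration only; the talk's techniques handle far richer non-linear and symbolic cases). The GCD test decides whether the linear Diophantine equation linking two affine array subscripts can have an integer solution at all:

```python
from math import gcd

def may_depend(a, c1, b, c2):
    """GCD test: a write to A[a*i + c1] and a read of A[b*j + c2] can touch
    the same element only if a*i - b*j = c2 - c1 has an integer solution,
    i.e. only if gcd(a, b) divides c2 - c1."""
    return (c2 - c1) % gcd(a, b) == 0
```

For example, `may_depend(2, 0, 2, 1)` is False: A[2i] and A[2j+1] can never collide, so the loop may be parallelized. The test is conservative in the other direction: a True answer only means a dependence cannot be ruled out.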

About the Speaker: Kleanthis Psarris is a Professor of Computer and Information Science and the Dean of the School of Natural and Behavioral Sciences at City University of New York - Brooklyn College. He holds a B.S. degree in Mathematics from the National University of Athens, Greece, and a M.Eng. degree in Electrical Engineering and a Ph.D. in Computer Science from Stevens Institute of Technology in Hoboken, New Jersey. His research interests are in the areas of Parallel and Distributed Systems, Programming Languages and Compilers, and High Performance Computing. He is an Editor of the Parallel Computing Journal, and his research has been funded by the National Science Foundation and the Department of Defense.

November 25

Insertion and Promotion for Tree-Based PseudoLRU Last-Level Caches

Daniel Jimenez, Texas A&M University

This event will be held on Monday, 11/25/2013, at 3:00 p.m. in Stanley Thomas Hall 316. Please note the special weekday and venue for this colloquium.

Abstract: Last-level caches mitigate the high latency of main memory. A good cache replacement policy enables high performance for memory intensive programs. To be useful to industry, a cache replacement policy must deliver high performance without high complexity or cost. For instance, the costly least-recently-used (LRU) replacement policy is not used in highly associative caches; rather, inexpensive policies with similar performance such as PseudoLRU are used.

I propose a novel last-level cache replacement algorithm with approximately the same complexity and storage requirements as tree-based PseudoLRU, but with performance matching state-of-the-art techniques, such as dynamic re-reference interval prediction (DRRIP) and protecting distance policy (PDP). The algorithm is based on PseudoLRU, but uses set dueling to dynamically adapt its insertion and promotion policy. It has slightly less than one bit of overhead per cache block, compared with two or more bits per cache block for competing policies.

In this talk, I give the motivation behind the algorithm in the context of LRU with improved placement and promotion, then develop this motivation into a PseudoLRU-based algorithm, and finally give a version using set dueling to allow adaptivity to changing program behavior. I show that, with a 16-way set-associative 4MB last-level cache, an adaptive PseudoLRU insertion and promotion algorithm yields a geometric mean speedup of 5.6% over true LRU over all the SPEC CPU 2006 benchmarks using far less overhead than LRU or other algorithms. On a memory-intensive subset of SPEC, the technique improves geometric mean speedup by 15.6%. I show that the performance is comparable to state-of-the-art replacement policies that consume more than twice the area.
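A minimal sketch of the baseline tree-based PseudoLRU structure the talk builds on (the adaptive insertion/promotion and set-dueling machinery is not shown): one bit per internal node of a binary tree steers the victim search, and an access flips the bits on its path so they point away from the touched way.

```python
class TreePLRU:
    """Baseline tree PseudoLRU for a power-of-two associativity.

    The (ways - 1) internal tree bits are stored as a heap; bits[n] == 0
    steers the victim search into the left subtree, == 1 into the right."""

    def __init__(self, ways):
        assert ways > 1 and ways & (ways - 1) == 0
        self.ways = ways
        self.bits = [0] * (ways - 1)

    def victim(self):
        """Follow the tree bits from the root to a leaf (the way to evict)."""
        node = 0
        while node < self.ways - 1:
            node = 2 * node + 1 + self.bits[node]
        return node - (self.ways - 1)

    def touch(self, way):
        """On an access, make every bit on the path point away from `way`."""
        node = way + self.ways - 1            # leaf position in the heap
        while node != 0:
            parent = (node - 1) // 2
            came_from_left = node == 2 * parent + 1
            self.bits[parent] = 1 if came_from_left else 0
            node = parent
```

Touching ways 0, 1, 2, 3 of a 4-way set in order leaves way 0 as the victim, matching true LRU; after re-touching way 0, tree PLRU evicts way 2 rather than the true LRU way 1. That cheap approximation, at just under one bit per block, is exactly what the talk's insertion and promotion policies refine.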

About the Speaker: Daniel A. Jimenez is an Associate Professor in the Department of Computer Science and Engineering at Texas A&M University. Previously he was a full Professor and Chair of the Department of Computer Science at The University of Texas at San Antonio and Associate Professor in the Department of Computer Science at Rutgers University. His research focuses on microarchitecture and low-level compiler optimizations. He introduced and developed the perceptron branch predictor which has inspired the design of two implemented microarchitectures. Daniel earned his B.S. (1992) and M.S. (1994) in Computer Science at The University of Texas at San Antonio and his Ph.D. (2002) in Computer Sciences at The University of Texas at Austin. Daniel was General Chair of the 2011 IEEE HPCA conference, and he is an NSF CAREER award recipient and ACM Distinguished Scientist.

December 4

Computational Models and Tools to Solve Abstract Argumentation Frameworks

Francesco Santini, INRIA-Rocquencourt

This event will be held on Wednesday, 12/04/2013, at 4:00 p.m. in Stanley Thomas Hall 302. Please note the special weekday and time for this colloquium.

Abstract: An Abstract Argument Framework (AAF) is simply a pair consisting of a set A, whose elements are called arguments, and a binary relation R on A, called the attack relation. An abstract argument is not assumed to have any specific structure: an argument is anything that may attack or be attacked by another argument. An argumentation semantics is a formal definition of a method ruling the justification state of arguments: intuitively, an argument is justified if it has some way to survive the attacks it receives. An extension-based semantics specifies how to derive a subset of A representing a set of arguments that are collectively justified. Computational models and tools are used to model and compute these semantics and their counterparts in Weighted AAFs, i.e., frameworks in which attacks or nodes are weighted with a strength score. Benchmarks based on small-world networks need to be assembled to test the scalability and applicability of these tools, especially given the hard problems associated with Weighted AAFs.
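As a small worked example (illustrative, not from the talk), one standard extension-based semantics, the grounded extension, is the least fixpoint of the characteristic function F(S) = { a in A : every attacker of a is attacked by some member of S }, and can be computed by iterating F from the empty set:

```python
def grounded_extension(args, attacks):
    """Grounded extension of the AAF (args, attacks): the least fixpoint of
    F(S) = {a : every attacker of a is attacked by S}, reached by iterating
    the monotone function F starting from the empty set."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    S = set()
    while True:
        attacked_by_S = {c for (b, c) in attacks if b in S}
        F = {a for a in args if attackers[a] <= attacked_by_S}
        if F == S:
            return S
        S = F

# a attacks b, b attacks c: a is unattacked, a defeats b, so c is reinstated.
ext = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})
```

Here the justified arguments are a (unattacked) and c (defended by a), while b is defeated; weighted variants attach strength scores to the attacks, which is where the hard problems mentioned above arise.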

About the Speaker: Francesco Santini is currently the ERCIM "Alain Bensoussan" Fellow at INRIA-Rocquencourt, in Paris. Santini received his Bachelor and Masters in Computer Science at University of Pisa (Italy) and his Ph.D. in Computer Science and Engineering at IMT-Lucca (Italy). His main research interests are Constraint Programming, Trust Management Systems, Argumentation Frameworks and Orchestration/Choreography/Discovery in Service Oriented Architectures.

December 6

An Introduction to Digital Forensics: Privacy, Practice, and Research

Golden Richard, University of New Orleans

Abstract: Digital forensics is an important aspect of information assurance (IA) and plays an increasingly vital role in detecting and responding to cyberattacks, supporting civil and criminal litigation, protecting intellectual property, protecting minors from predators, and more. Recoverable digital evidence exists on a wide variety of electronic devices, from traditional computers to smartphones, printers, video surveillance systems, voice recorders, and game consoles.

This talk provides an introduction to digital forensics, the art and science of discovering, preserving, and analyzing digital evidence, from three perspectives: privacy, practical digital investigation, and research. The talk covers basic concepts and investigative challenges before briefly addressing current research directions, most of which are concerned with techniques and tools for allowing investigators to deal with the ever-increasing size and complexity of forensic targets and the growing impact of malware in forensic investigations. These research approaches cover a wide spectrum, including the use of parallel and distributed architectures for forensics tools, Graphics Processing Units (GPUs), advanced file carving techniques, virtual machine introspection, and tools for live investigation and memory analysis.

The impact of digital forensics techniques on personal privacy is addressed throughout.
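To give a taste of the file carving techniques mentioned above (a naive sketch only; real carvers must cope with fragmentation, false positives, and very large targets), header/footer carving scans a raw disk or memory image for known magic byte sequences, here JPEG's SOI and EOI markers:

```python
def carve_jpegs(data: bytes):
    """Naive header/footer carving: extract spans between JPEG's SOI marker
    (FF D8 FF) and the first following EOI marker (FF D9)."""
    found, i = [], 0
    while True:
        start = data.find(b"\xff\xd8\xff", i)
        if start < 0:
            return found
        end = data.find(b"\xff\xd9", start + 3)
        if end < 0:
            return found
        found.append(data[start:end + 2])
        i = end + 2
```

Note that this recovers files even when no filesystem metadata survives, which is why carving is a staple of forensic tools; the advanced carving and GPU-accelerated research directions above attack its weaknesses at scale.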

About the Speaker: Golden G. Richard III is Professor of Computer Science, University Research Professor, and Director of the Greater New Orleans Center for Information Assurance (GNOCIA) at the University of New Orleans. Dr. Richard has extensive experience in private digital forensic investigation and was one of the first academic researchers in this area. In addition to digital forensics, he is also interested in reverse engineering, malware analysis, and operating systems internals. Dr. Richard is a member of the United States Secret Service Electronic Crime Taskforce, sits on the Editorial Boards of the Journal of Digital Investigation and the International Journal of Digital Crime and Forensics (IJDCF), and is a member of the American Academy of Forensic Sciences (AAFS).

School of Science and Engineering, 201 Lindy Boggs Center, New Orleans, LA 70118 504-865-5764