Magic Center
Ph.D. Computing and Information Sciences
Room Chair – To Be Announced

9:00 a.m. – Biru Cui
Title: Behavior Prediction Based on Action History and Social Networks
Abstract: When a novel research topic emerges, we are interested in discovering how the topic will propagate over the bibliography network, i.e., which authors will research and publish on this topic. Inferring the underlying influence network among authors is the basis of predicting such topic adoption. Existing works infer the influence network from past adoption cascades, which is limited by the amount and relevance of the cascades collected. This work hypothesizes that the influence network structure and probabilities result from many factors, including social relationships and topic popularity. This heterogeneous information is used to learn the parameters that define a homogeneous influence network, which can then be used to predict future cascades. Experiments using DBLP data demonstrate that the proposed method outperforms an algorithm based on traditional cascade network inference in predicting novel topic adoption.

9:30 a.m. – Zach Fitzsimmons
Title: Single-Peaked Preferences and Partial Orders
Abstract: Single-peakedness is one of the most commonly examined domain restrictions on preferences. We first provide a general overview of single-peaked preferences and their implications in elections. We focus on algorithms and complexity results for the single-peaked consistency problem, that is, determining whether a given preference profile is single-peaked. The preference profiles discussed are not restricted to total linear orders; we also look at partial and weak preference orders and how the single-peaked consistency problem can be decided for them. In addition, different notions of "nearly" single-peakedness, where a preference profile is close to single-peaked with respect to some distance measure, are discussed.

10:00 a.m. – Erika Mesh
Title: Mapping the Chasm: Using Grounded Theory to Study Academic Scientific Software Development Process Concerns
Abstract: Beginning in the early days of computing, higher-level programming languages (e.g., FORTRAN) enabled scientists to tackle increasingly complex problems and data sets. Over time, the technology and lessons learned from their efforts were applied to non-scientific problems to enhance our everyday activities. As this shift occurred and software engineering (SE) evolved into a standalone discipline, the needs and concerns of scientists became more distanced from "mainstream" SE research. This "chasm", as described by Kelly (2007), has resulted in many scientific software development projects being left on their own to determine the applicability of, and approach for, adopting SE best practices to improve their overall SE process. In order to better understand this chasm and the SE process improvement (SPI) resources required to bridge it, we have used an iterative grounded theory methodology to build a preliminary model of the motivating concerns behind SPI decisions in a set of academic research projects.

10:30 a.m. – Xuan Guo
Title: Co-registration of Eye Tracking with EEG Data for Exploring Humans' Semantic Comprehension
Abstract: Eye tracking and electroencephalography (EEG) have been used to unveil human visual perception and brain activity. We co-register these two data modalities to explore how humans comprehend semantics by analyzing their eye fixation-related potentials (FRPs) across different tasks/events.
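As background on the co-registration step referred to in the preceding abstract, the following minimal sketch (Python/NumPy; the function name, epoch window, and baseline handling are illustrative assumptions, not the authors' pipeline) shows how EEG epochs can be time-locked to eye-tracker fixation onsets and averaged into an FRP estimate:

    import numpy as np

    def fixation_related_potentials(eeg, fs, fixation_onsets_s,
                                    pre_s=0.2, post_s=0.8):
        """Average EEG epochs time-locked to fixation onsets.

        eeg               -- array of shape (n_channels, n_samples)
        fs                -- EEG sampling rate in Hz
        fixation_onsets_s -- fixation onset times (seconds), already aligned
                             to the EEG clock (the co-registration step)
        pre_s, post_s     -- epoch window around each onset, in seconds
        """
        pre, post = int(pre_s * fs), int(post_s * fs)
        epochs = []
        for t in fixation_onsets_s:
            onset = int(round(t * fs))
            if onset - pre < 0 or onset + post > eeg.shape[1]:
                continue  # skip fixations too close to the recording edges
            epoch = eeg[:, onset - pre:onset + post]
            # subtract the pre-fixation baseline from each channel
            epoch = epoch - epoch[:, :pre].mean(axis=1, keepdims=True)
            epochs.append(epoch)
        return np.mean(epochs, axis=0)  # (n_channels, pre + post) FRP estimate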
11:00 a.m. – Sam Skalicky
Title: A Scheduling Approach to Processor Performance Modeling
Abstract: Large and complex systems, built from an evolving combination of heterogeneous processors, are used more than ever before to solve the latest compute-intensive problems. These processors operate on larger and larger data sizes, making performance estimation difficult. The execution time of a computation can be estimated by scheduling the work onto the computational units of a processor. However, this approach is limited by the storage and computational complexity requirements of the scheduling algorithm. In this paper we present a new reduced graph representation to store very large graphs, and an algorithm for estimating the length of the schedule. We prove that this algorithm is a 2-approximation algorithm with a computational complexity on the order of the number of levels in the graph.

11:30 a.m. – Matthew Le
Title: Proving Determinism for a Parallel Functional Language with Speculation
Abstract: Parallel programming is often viewed as a daunting task, as programmers are forced to reason about nondeterministic execution of programs and the exponential number of interleavings of threads. Functional languages have been shown to provide a nice solution to these difficulties: because they do not allow mutation, they effectively eliminate the possibility of nondeterminism, making parallel programs easier to reason about. Unfortunately, a number of applications can be encoded more efficiently when shared state is available. Recent efforts have proposed forms of shared state that restrict the operations that can be performed but are still able to guarantee deterministic execution. We build on these efforts by extending these models with speculative parallelism. Additionally, we provide a rollback mechanism for preserving determinism in the presence of cancellation and prove its correctness.

Afternoon Session
Room Chair – Dr. Ray Ptucha

3:00 p.m. – Haitao Du
Title: Probabilistic Inference for Obfuscated Network Attack Sequences
Abstract: Facing diverse network attack strategies and overwhelming alerts, much work has been devoted to correlating observed malicious events with pre-defined scenarios, attempting to deduce attack plans based on expert models of how network attacks may transpire. Sophisticated attackers can, however, employ a number of obfuscation techniques to confuse the alert correlation engine or classifier. Recognizing the need for a systematic analysis of the impact of attack obfuscation, our work models attack strategies as general finite-order Markov models and explicitly models obfuscated observations as noise. Taking into account that only a finite observation window and limited computational time can be afforded, this work develops an algorithm to perform efficient inference on the joint distribution of clean and obfuscated attack sequences. The inference algorithm recovers the optimal match of obfuscated sequences to attack models, and enables a systematic and quantitative analysis of the impact of obfuscation on attack classification.

3:30 p.m. – Arthur Nunes-Harwitt
Title: On Deriving a PROLOG Compiler
Abstract: An interpreter is a concise definition of the semantics of a programming language and is easily implemented. A compiler is more difficult to construct, but the code that it generates runs faster than interpreted code. This research introduces rules to transform an interpreter into a compiler. An extended example in the form of a PROLOG compiler suggests the utility of the technique.
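As background on the interpreter-to-compiler transformation referred to in the preceding abstract, here is a minimal sketch in Python (a toy arithmetic language, not the paper's PROLOG rules) of the general staging idea: the compiler performs the interpreter's expression dispatch once, ahead of time, and returns a closure that only does the remaining arithmetic at run time:

    def interpret(expr, env):
        """Direct interpreter: re-examines the expression on every call."""
        op = expr[0]
        if op == 'const':
            return expr[1]
        if op == 'var':
            return env[expr[1]]
        if op == 'add':
            return interpret(expr[1], env) + interpret(expr[2], env)
        if op == 'mul':
            return interpret(expr[1], env) * interpret(expr[2], env)
        raise ValueError(op)

    def compile_expr(expr):
        """Staged interpreter: dispatch happens once; a closure is returned."""
        op = expr[0]
        if op == 'const':
            c = expr[1]
            return lambda env: c
        if op == 'var':
            name = expr[1]
            return lambda env: env[name]
        if op == 'add':
            f, g = compile_expr(expr[1]), compile_expr(expr[2])
            return lambda env: f(env) + g(env)
        if op == 'mul':
            f, g = compile_expr(expr[1]), compile_expr(expr[2])
            return lambda env: f(env) * g(env)
        raise ValueError(op)

    # Example: x * (x + 1), evaluated both ways
    e = ('mul', ('var', 'x'), ('add', ('var', 'x'), ('const', 1)))
    code = compile_expr(e)
    assert interpret(e, {'x': 6}) == code({'x': 6}) == 42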
4:00 p.m. – Naseef Mansoor
Title: A Unified Design Methodology for Robust and Temperature-Aware Millimeter-Wave Wireless Network-on-Chip
Abstract: The Network-on-Chip (NoC) paradigm has emerged as the interconnect fabric of multicore SoCs. The continuing demand for low-power and high-speed interconnects with technology scaling necessitates looking beyond conventional planar metal/dielectric-based interconnect infrastructures. Among the possible alternatives, millimeter-wave (mm-wave) wireless interconnects have emerged as a promising solution to the energy and latency issues of global interconnects. Wireless Network-on-Chip (WiNoC) architectures utilizing these CMOS-compatible mm-wave transceivers can achieve significant improvements in performance and energy efficiency in on-chip data transfer for multicore chips. A token-based medium access mechanism is used in several mm-wave WiNoC architectures to enable distributed and optimal utilization of the available wireless bandwidth among multiple transmitters. However, on-chip wireless interconnects, being an emerging technology, can suffer from high rates of failure due to challenges in design and integration. Moreover, high-frequency transceivers are especially vulnerable to noise. Consequently, the token passing mechanism can fail and significantly degrade the potential benefits of this novel interconnect technology. In addition, excessive data transfer over the wireless links to save energy can cause localized temperature hotspots in the NoC, further aggravating the probability of failures. In this work, we establish a unified robust and temperature-aware design methodology for WiNoC architectures based on small-world networks. Through detailed system-level simulations and real-application-based benchmarks, we demonstrate that our design methodology for a small-world mm-wave WiNoC architecture augmented with a token management unit (TMU) can minimize the effect of failures in the wireless fabric as well as mitigate temperature hotspots in WiNoC architectures.

4:30 p.m. – Srinivas Sridharan
Title: Collaborative Eye Tracking for Image Analysis
Abstract: We present a framework for collaborative image analysis where gaze information is shared across all users. A server gathers and broadcasts fixation data from/to all clients, and the clients visualize this information. Several visualization options are provided. The system can run in real time, or gaze information can be recorded and shared the next time an image is accessed. Our framework is scalable to large numbers of clients with different eye tracking devices. To evaluate our system we used it within the context of a spot-the-differences game. Subjects were presented with 10 image pairs, each containing 5 differences, and were given one minute to detect the differences in each image. Our study was divided into three sessions. In session 1, subjects completed the task individually; in session 2, pairs of subjects completed the task without gaze sharing; and in session 3, pairs of subjects completed the task with gaze sharing. We measured accuracy, time-to-completion, and visual coverage over each image to evaluate the performance of subjects in each session. We found that visualizing shared gaze information by graying out previously scrutinized regions of an image significantly increases the dwell time in the areas of the images that are relevant to the task (i.e., the regions where differences actually occurred).
Furthermore, accuracy and time-to-completion also improved over collaboration without gaze sharing, though the effects were not significant. Our framework is useful for a wide range of image analysis applications that can benefit from a collaborative approach.
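As background on the gray-out visualization described in the preceding abstract, the following minimal sketch (Python/NumPy; the coverage radius and dimming factor are illustrative assumptions, not the framework's actual parameters) dims image regions already covered by fixations pooled from all clients:

    import numpy as np

    def gray_out_scrutinized(image, fixations, radius=40, dim=0.4):
        """Dim image regions already covered by shared fixations.

        image     -- float array of shape (H, W, 3), values in [0, 1]
        fixations -- iterable of (x, y) fixation centers pooled from all clients
        radius    -- assumed coverage radius per fixation, in pixels
        dim       -- brightness factor applied to covered regions
        """
        h, w = image.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        covered = np.zeros((h, w), dtype=bool)
        for x, y in fixations:
            covered |= (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        out = image.copy()
        out[covered] *= dim  # previously scrutinized regions appear grayed out
        return out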