Friday, October 30, 2015 • 3:30pm - 4:30pm PDT
Poster Session


Poster session and PM snack break. The following posters will be on display:

Run Android with LLVM - Shuo Kang, SkyEye project of Tsinghua University

We employed LLVM's JIT capability as the dynamic translation engine of a full-system simulator. The project translates ARM instructions into LLVM IR and then uses the LLVM JIT to compile and run that IR. By applying various LLVM optimization passes and some smart policies, we achieved a large performance improvement in instruction execution. A complete Android system can now run on this LLVM-backed full-system simulator; you can even play the Angry Birds application on it smoothly. Compared to the official Android emulator, which is built on QEMU, we measured a performance improvement for most Android applications.
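
The translate-once, execute-many flow the abstract describes can be sketched with a toy interpreter (a hypothetical three-register ISA and plain Python closures standing in for ARM and LLVM-JIT-compiled code; none of this is SkyEye's actual implementation). Each basic block is "translated" on first visit and cached, so later executions of the same block skip translation entirely:

```python
def translate_block(program, pc):
    """Collect instructions from pc up to a branch and 'compile' them
    into one host-callable closure (stand-in for an LLVM JIT block)."""
    ops = []
    while True:
        instr = program[pc]
        ops.append(instr)
        pc += 1
        if instr[0] in ("jnz", "halt"):
            break

    def run(regs):
        for op in ops:
            if op[0] == "mov":      # mov rd, imm
                regs[op[1]] = op[2]
            elif op[0] == "add":    # add rd, rs
                regs[op[1]] += regs[op[2]]
            elif op[0] == "dec":    # dec rd
                regs[op[1]] -= 1
            elif op[0] == "jnz":    # jnz rs, target, fallthrough
                return op[2] if regs[op[1]] != 0 else op[3]
            elif op[0] == "halt":
                return None
    return run

def simulate(program):
    cache, regs, pc = {}, {}, 0
    while pc is not None:
        if pc not in cache:          # translate each block only once
            cache[pc] = translate_block(program, pc)
        pc = cache[pc](regs)         # run it; get the next pc
    return regs

# r0 = 3 + 2 + 1: the loop body at pc=2 executes three times
# but is translated only on its first visit.
program = [
    ("mov", "r0", 0),
    ("mov", "r1", 3),
    ("add", "r0", "r1"),   # pc=2: loop head
    ("dec", "r1"),
    ("jnz", "r1", 2, 5),
    ("halt",),
]
```

Running `simulate(program)` leaves `r0 == 6`; the block cache is what makes hot loops cheap, which is where the real simulator additionally applies LLVM optimization passes.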

OpenMP Support in Clang: to 4.0 and Beyond! - Alexey Bataev, Intel; Andrey Bokhanko, Intel; Sergey Ostanevich, Intel

OpenMP is a well-known and widely used Application Programming Interface for shared-memory parallelism. The project to implement OpenMP support is carried out by many people from AMD, Argonne, IBM, Intel, Texas Instruments, the University of Houston, and other organizations, including several members of the OpenMP Architecture Review Board.

A full implementation of OpenMP 3.1 support was released with Clang 3.7. It proved to be a popular choice among C++ programmers looking for a compiler that combines all of Clang's virtues with OpenMP capabilities.

OpenMP continues to evolve. Version 4.0 of the standard was published a couple of years ago and introduced a host of improvements, most notably support for computation offloading. We will elaborate on the current progress of implementing these new features in Clang and highlight the design of offloading support in the LLVM compiler, which we and our colleagues recently proposed to the community.

The upcoming OpenMP 4.1 adds even more interesting features, such as extended offloading, taskloops, and new worksharing/SIMD clauses. We will explain our plans for supporting all of these new features as well.

Evaluation of Core Tuning Options (-mcpu) in LLVM - Minseong Kim, Samsung Electronics; Hyeyeon Chung, Samsung Electronics; Taekhyun Kim, Samsung Electronics

Modern compilers like LLVM and GCC provide core tuning options that enable the generation of highly tuned code for the underlying hardware. This poster presents our observations on the performance of various tuning options, then gives guidelines for the proper use of core tuning options. The poster also discusses our findings on room for improvement in LLVM's tuning options.

Sampling for Data Races - Peter Goodman, Trail of Bits; Angela Demke Brown, University of Toronto; Ashvin Goel, University of Toronto

Race Sanitizer (RSan) is a new data race detector that implements a variant of the DataCollider algorithm. RSan avoids the complexity and overhead of tracking memory access interleavings by instrumenting a subset of loads/stores. RSan introduces scheduling delays at instrumented loads/stores to detect data races. RSan avoids the issue of determining what memory will suffer from racy accesses by leveraging type-awareness to uniformly sample all memory locations. RSan has been implemented as an LLVM module pass and an efficient runtime system.
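
The DataCollider-style detection step can be sketched in a few lines (plain Python threads stand in for instrumented loads/stores; the function names here are illustrative, not RSan's actual API): read a sampled location, stall the accessing thread for a moment, and re-read. If the value changed underneath the stalled thread, another thread wrote it concurrently, so there is a data race on that location:

```python
import threading
import time

shared = {"x": 0}

def sample_access(read, delay=0.2):
    """DataCollider-style check: read a sampled location, inject a
    scheduling delay, and re-read. A changed value means a concurrent
    writer raced with this access."""
    before = read()
    time.sleep(delay)        # scheduling delay at the instrumented load
    after = read()
    return before != after   # True => data race detected

def racy_writer():
    time.sleep(0.05)         # writes while the sampler is stalled
    shared["x"] = 1

writer = threading.Thread(target=racy_writer)
writer.start()
raced = sample_access(lambda: shared["x"])
writer.join()
```

Here `raced` comes back `True` because the unsynchronized write lands inside the delay window. The sketch only catches write-during-delay races by value comparison; the real system also has to choose which accesses to sample, which is where RSan's type-aware uniform sampling comes in.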

Code Clone Detection in Clang Static Analyzer - Kirill A. Bobyrev, MIPT, Vassil Vassilev, CERN

Copy-paste is a common programming practice: most programmers start from a code snippet that already exists in the system and modify it to match their needs. Some code snippets easily end up being copied dozens of times. This manual process is error-prone and leads to the quiet introduction of new hard-to-find bugs. Copy-paste also usually hurts maintainability, understandability, and logical design. Clang and Clang's Static Analyzer provide all the building blocks needed for a generic C/C++ copy-paste detection infrastructure.
Large codebases may contain from 5% to 20% identical code pieces, which leads to all the problems mentioned above. My GSoC project introduces code clone detection to the Clang Static Analyzer and allows processing large projects in order to find duplicates.
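
The core idea behind clone detection, normalizing token streams so that renamed copies of a snippet still collide, can be sketched as follows (a toy regex lexer over flat text, not the analyzer's actual AST-based implementation; for simplicity, keywords are normalized just like identifiers):

```python
import re
from collections import defaultdict

def tokens(code):
    """Lex into tokens, mapping every identifier to ID and every
    number to NUM, so renamed copies yield identical streams."""
    out = []
    for m in re.finditer(r"[A-Za-z_]\w*|\d+|\S", code):
        t = m.group()
        if re.fullmatch(r"[A-Za-z_]\w*", t):
            out.append("ID")
        elif t.isdigit():
            out.append("NUM")
        else:
            out.append(t)      # punctuation kept as-is
    return out

def clone_windows(code, k=8):
    """Return the normalized k-token windows that occur more than
    once, with the token offsets where each copy starts."""
    toks = tokens(code)
    seen = defaultdict(list)
    for i in range(len(toks) - k + 1):
        seen[tuple(toks[i:i + k])].append(i)
    return {w: pos for w, pos in seen.items() if len(pos) > 1}
```

For example, `clone_windows("a = b + 1; x = y + 2;", k=6)` reports the window `("ID", "=", "ID", "+", "NUM", ";")` at offsets 0 and 6: the two statements are clones despite different names and literals. Real detection works over the AST and suppresses overlapping reports, but the hash-normalized-windows skeleton is the same.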

Automatically finding and patching bugs in binary software using LLVM - Ryan Stortz and Jay Little of Trail of Bits

As part of DARPA’s Cyber Grand Challenge, we utilized McSema to translate executables into LLVM IR. We then built and extended tools on the LLVM toolchain that allowed us to automatically discover and patch exploitable software vulnerabilities. This poster presents our system, its capabilities and limitations, and our CGC results.

Molly - Parallelizing for Distributed Memory using LLVM - Michael Kruse

Motivated by Lattice Quantum Chromodynamics applications, Molly is an LLVM compiler extension, complementary to Polly, that optimizes the distribution of data and work among the nodes of a cluster machine such as Blue Gene/Q. Molly represents arrays using integer polyhedra and builds on the existing Polly extension, which represents statements and loops polyhedrally. Once Molly knows how data is distributed among the nodes and where statements are executed, it inserts code that manages the data flow between the nodes. Molly can also permute the order of data in memory.
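
The kind of mapping involved can be illustrated with the simplest case, a 1-D block distribution (a plain affine owner function of the sort a polyhedral tool derives; this is an illustrative sketch, not Molly's actual layout or code generation):

```python
def owner(i, n, p):
    """Node owning element i of an n-element array block-distributed
    over p nodes: a simple affine mapping i -> i*p//n."""
    return i * p // n

def halo_partners(node, n, p):
    """For a 1-D stencil touching a[i-1] and a[i+1]: which remote
    nodes own the neighbors of this node's boundary elements?
    These are exactly the point-to-point transfers the compiler
    must insert code for."""
    mine = [i for i in range(n) if owner(i, n, p) == node]
    lo, hi = mine[0], mine[-1]
    partners = set()
    if lo - 1 >= 0:
        partners.add(owner(lo - 1, n, p))
    if hi + 1 < n:
        partners.add(owner(hi + 1, n, p))
    return partners - {node}
```

With `n=8` elements on `p=4` nodes, node 1 owns elements 2-3 and must exchange halos with nodes 0 and 2. Molly's polyhedral representation generalizes this reasoning to multi-dimensional arrays and arbitrary affine access patterns.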

Gigabyte/Second Unicode RegEx Matching with Parabix/LLVM - Nigel W. Medforth and Robert D. Cameron

Building on the Parabix transform representation of text and LLVM MCJIT, icGrep offers dramatically accelerated regex search compared to byte-at-a-time alternatives, as well as superior Unicode support.
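
The bit-parallel idea can be sketched for the simplest case, a fixed-string pattern, using Python integers as unbounded bitstreams (icGrep's real pipeline compiles such bitwise operations to SIMD code through LLVM; this is an illustrative reduction, not its implementation):

```python
def char_class_stream(text, ch):
    """Parabix-style basis: a bitstream with bit i set iff text[i] == ch."""
    bits = 0
    for i, c in enumerate(text):
        if c == ch:
            bits |= 1 << i
    return bits

def find_ends(text, pattern):
    """March a marker bitstream through the pattern's character
    classes: one Advance (shift) and AND per pattern character,
    regardless of how many match positions are alive in parallel.
    Returns the text indices where a match ends."""
    markers = char_class_stream(text, pattern[0])
    for ch in pattern[1:]:
        markers = (markers << 1) & char_class_stream(text, ch)
    return [i for i in range(len(text)) if markers >> i & 1]
```

`find_ends("banana", "ana")` yields `[3, 5]`: both (overlapping) occurrences are tracked simultaneously in the marker stream. The byte-at-a-time alternative re-scans from each start position; here every candidate advances in the same constant number of bitwise steps, which is the source of the speedup once the bitstreams are processed in SIMD registers.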



Salon Lobby
