Keynote 1: ALGORAND: A Better Distributed Ledger
Abstract. A distributed ledger is a tamperproof sequence of data that can be read and augmented by everyone. Distributed ledgers stand to revolutionize the way a democratic society operates. They secure all kinds of traditional transactions (such as payments, asset transfers, and titling) in the exact order in which they occur, and enable totally new transactions (such as cryptocurrencies and smart contracts). They can remove intermediaries and usher in a new paradigm for trust. As currently implemented, however, distributed ledgers cannot achieve their enormous potential. Algorand is an alternative, democratic, and efficient distributed ledger. Unlike prior ledgers based on ‘proof of work’, it dispenses with ‘miners’. Indeed, Algorand requires only a negligible amount of computation. Moreover, with overwhelming probability its transaction history does not ‘fork’: that is, Algorand guarantees the finality of all transactions.
Short Bio. Silvio Micali received his Laurea in Mathematics from the University of Rome and his PhD in Computer Science from the University of California at Berkeley. Since 1983 he has been on the MIT faculty in the Electrical Engineering and Computer Science Department, where he is the Ford Professor of Engineering. Silvio’s research interests are cryptography, zero knowledge, pseudorandom generation, secure protocols, and mechanism design. Silvio is the recipient of the Turing Award (in computer science), the Gödel Prize (in theoretical computer science), and the RSA Prize (in cryptography). He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.
Keynote 2: Algorithmic Adaptations to Extreme Scale Computing
David E. Keyes
Abstract. Algorithmic adaptations to use next-generation computers close to their potential are underway. Instead of squeezing out flops (the traditional goal of algorithmic optimality, which once served as a reasonable proxy for all associated costs), algorithms must now squeeze out synchronizations, memory, and data transfers, while extra flops on locally cached data represent only small costs in time and energy. After decades of programming model stability with bulk synchronous processing, new programming models and new algorithmic capabilities (to make forays into, e.g., data assimilation, inverse problems, and uncertainty quantification) must be co-designed with the hardware. We briefly recap the architectural constraints and application opportunities. We then concentrate on two types of tasks, each of which occupies a large portion of all scientific computing cycles: large dense symmetric/Hermitian linear systems (covariances, Hamiltonians, Hessians, Schur complements) and large sparse Poisson/Helmholtz systems (solids, fluids, electromagnetism, radiation diffusion, gravitation). We examine progress in porting "exact" and hierarchically rank-reduced solvers for these tasks to the hybrid distributed-shared programming environment, including the GPU and the MIC architectures that make up the cores of the top scientific computers "on the floor" and "on the books."
Short Bio. David Keyes is the director of the Extreme Computing Research Center at King Abdullah University of Science and Technology, where he was a founding dean in 2009, and an adjunct professor of applied mathematics at Columbia University. Keyes earned his BSE in Aerospace and Mechanical Engineering from Princeton and his PhD in Applied Mathematics from Harvard. He works at the algorithmic interface between parallel computing and the numerical analysis of partial differential equations. He is a Fellow of SIAM and AMS and has received the ACM Gordon Bell Prize, the IEEE Sidney Fernbach Award, and the SIAM Prize for Distinguished Service to the Profession.
Keynote 3: Datacenters for the Post-Moore Era
Abstract. Datacenters are growing at unprecedented speeds, fueled by the demand for global IT services, investments in massive data analytics, and economies of scale. By some accounts (e.g., IDC), worldwide data grows at much higher rates than server capability and capacity. The conventional silicon technologies laying the foundation for server platforms, however, have dramatically slowed down in efficiency and density scaling in recent years. This slowdown, now referred to as the post-Moore era, has given rise to a plethora of emerging logic and memory technologies, presenting exciting new challenges and abundant opportunities for server designers, from algorithms to platforms. In this talk, I will first motivate the post-Moore era for server architecture and then present avenues to pave the path forward for server design.
Short Bio. Babak Falsafi is a Professor in the School of Computer and Communication Sciences and the founding director of the EcoCloud research center at EPFL. He has made numerous contributions to computer system design and evaluation, including multiprocessor architecture for the WildCat/WildFire servers by Sun Microsystems (now Oracle), memory prefetching technologies in IBM BlueGene and ARM cores, and server evaluation methodologies used by AMD, HPE, and Google PerfKit. His recent work on workload-optimized server processors lays the foundation for Cavium ThunderX. He is a recipient of a number of distinctions, including a Sloan Research Fellowship. He is a Fellow of ACM and IEEE.