Seminar - Caching Data Stores for High Performance and Low Cost

The seminar will be given by Dr. David Lomet, Principal Researcher and Research Manager at Microsoft, as part of the course "Trends in Electronics M".

  • Date: 9 April 2018

  • Venue: Room 5.2, Scuola di Ingegneria e Architettura, viale Risorgimento 2, Bologna

Contact:

Telephone for inquiries: +39 051 209 3013

About the speaker

David Lomet founded the Database Group at Microsoft Research in Redmond, Washington, and managed it for over 20 years. His research career began at IBM Research, where a 1975 sabbatical at the University of Newcastle upon Tyne led to his becoming one of the inventors of transactions and to his focus on databases.

He later worked at the Wang Institute and at DEC. He received a PhD in computer science from the University of Pennsylvania. Lomet's primary focus has been engineering database kernels. He contributed to making DEC's Rdb and Microsoft's SQL Server leaders in cost/performance benchmarks. His Deuteronomy research project's latch-free (lock-free) index and log-structured store are key elements in recent Microsoft database offerings. He has authored over 120 papers and holds over 60 patents.

Lomet has served as an editor of ACM TODS and the VLDB Journal and has won awards for his long tenure as Editor-in-Chief of the IEEE Data Engineering Bulletin. In the IEEE Computer Society, he serves on the Board of Governors, has been First Vice President, and is Treasurer. He is a Fellow of the IEEE, ACM, and AAAS and a member of the National Academy of Engineering.

Abstract

A caching data store, e.g., a traditional database system, moves data between main memory and secondary storage as dictated by access patterns. Such a system provides good cost/performance by hosting hot data in fast but expensive DRAM while moving cold data to cheaper but slower SSD storage. Modern main-memory data systems achieve better performance but incur the high cost of retaining data in DRAM. Thus, achieving both high performance and low cost with any data management system has been a challenge. We analyze data management cost/performance and describe techniques that can lead to both high performance and low cost using caching data stores.