Caching has been a consistent tool for designers of high-performance, scalable computing systems, but it has been deployed in so many ways that it can be difficult to standardize and scale in cloud systems. This project elevates the use of caching in cloud-scale storage systems to a “first-class citizen” by designing and implementing generalized Caching-as-a-Service (CaaS). CaaS defines transformative technology along four complementary dimensions. First, it defines a new abstraction and architecture for storage caches whereby storage stacks can easily embed lightweight CaaS clients within a distributed compute infrastructure. Second, CaaS formulates and theoretically analyzes distributed caching algorithms that operate within the CaaS service such that individual CaaS server nodes cooperate toward globally optimal caching decisions. Third, the distributed CaaS clients and servers are co-designed to achieve strict durability and fault tolerance in their implementations. Finally, all of the CaaS advancements are driven by insights from a detailed whole-system simulator that models diverse cache devices, network configurations, and application demands.
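The client/server split in the first dimension can be sketched minimally. All names below (CaaSClient, CaaSServerStub) and the choice of an LRU policy are illustrative assumptions for this sketch, not the project's actual interfaces: the point is that the embedded client stays lightweight while caching policy lives at the server.

```python
from collections import OrderedDict


class CaaSServerStub:
    """Hypothetical stand-in for a CaaS server node: it owns the
    eviction policy (plain LRU here) so clients stay lightweight."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # key -> value, in recency order

    def lookup(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as most recently used
            return self.cache[key]
        return None

    def admit(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used


class CaaSClient:
    """Hypothetical lightweight client embedded in a storage stack:
    it forwards cache traffic to a server and falls back to the
    backing store on a miss."""

    def __init__(self, server, backing_store):
        self.server = server
        self.backing_store = backing_store
        self.hits = 0
        self.misses = 0

    def read(self, key):
        value = self.server.lookup(key)
        if value is not None:
            self.hits += 1
            return value
        self.misses += 1
        value = self.backing_store[key]  # slow path: backing store
        self.server.admit(key, value)    # populate the shared cache
        return value
```

In a real deployment the server stub would be a remote node and multiple clients would share it, which is what makes cooperative, globally informed eviction decisions possible.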
The CaaS project supports a broad spectrum of applications that run in private and public clouds, and showcases its improvements via use cases in three important computing paradigms: Cloud, Big Data, and Deep Learning. The findings from the CaaS project create new educational content and research opportunities for undergraduate, Masters, and PhD students by involving these student groups in classroom projects and laboratory work. The outreach activities focus on recruiting students from groups under-represented in Computer Science to participate in the project. The outcomes of the CaaS project include open-source software and public dissemination of research findings, which help transition the new technologies to practice.
Dates Active: 2020–2023
National Science Foundation