2025-CSEE-329

Performance Analysis of Multiple Versions of Matrix Multiplication

Xiaoxuan Wang

Department of Computer Science

Faculty Supervisor: E. Wes Bethel

This project investigates the performance and energy efficiency of multiple matrix multiplication implementations optimized for cache utilization. By analyzing runtime, hardware performance counters, cache data movement, and power consumption, the research seeks to identify the most energy-efficient computational strategies. Given rising concerns about the substantial energy consumption of computational tasks, particularly those central to AI workloads, optimizing such operations can yield significant environmental and economic benefits. The project begins with a detailed study of matrix multiplication because of its fundamental role in AI applications, with potential expansion into related numerical operations such as matrix inversion, which are central to training deep learning models via algorithms such as stochastic gradient descent. Ultimately, this research aims to provide insights that could help mitigate the growing energy demands associated with advanced computational methods.