GraSS: Scalable Data Attribution with Gradient Sparsification and Sparse Projection

Posted: Sep 18th 2025
Abstract

We propose an efficient gradient compression algorithm to accelerate and scale gradient-based data attribution methods to billion-scale models.

arXiv | Talk | GitHub

Brief Summary

Gradient-based data attribution methods, such as influence functions, are critical for understanding the impact of individual training samples without requiring repeated model retraining. However, their scalability is often limited by the high computational and memory costs of per-sample gradient computation. In this work, we propose GraSS, a novel gradient compression algorithm, and its variant FactGraSS, specialized for linear layers, which explicitly leverage the inherent sparsity of per-sample gradients to achieve sub-linear space and time complexity. Extensive experiments demonstrate the effectiveness of our approach, achieving substantial speedups while preserving data influence fidelity. In particular, FactGraSS achieves up to 165% faster throughput on billion-scale models compared to previous state-of-the-art baselines.
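
Illustrative Sketch

To make the compression idea concrete, below is a minimal NumPy sketch of a generic sparsify-then-sparse-project pipeline in the spirit the summary describes: keep only the top-k entries of a per-sample gradient, then hash them into a low-dimensional sketch via a count-sketch-style sparse random projection, so the per-sample cost scales with k rather than the full parameter dimension D. The function names and the specific top-k/count-sketch choices here are illustrative assumptions, not the exact GraSS or FactGraSS construction.

import numpy as np

rng = np.random.default_rng(0)

def sparsify_topk(grad, k):
    # Keep the k largest-magnitude coordinates of a per-sample gradient.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def sparse_project(idx, vals, bucket, sign, d):
    # Count-sketch-style projection: each surviving coordinate is hashed
    # to one of d buckets with a random sign, so the cost is O(k),
    # independent of the full gradient dimension.
    out = np.zeros(d)
    np.add.at(out, bucket[idx], sign[idx] * vals)
    return out

D, d, k = 1_000_000, 4096, 1024       # full dim, sketch dim, kept entries
bucket = rng.integers(0, d, size=D)   # fixed random bucket per coordinate
sign = rng.choice([-1.0, 1.0], size=D)

grad = rng.standard_normal(D)         # stand-in for one per-sample gradient
idx, vals = sparsify_topk(grad, k)
sketch = sparse_project(idx, vals, bucket, sign, d)
print(sketch.shape)                   # (4096,)

Because the bucket assignments and signs are fixed across samples, sketches from different training examples live in the same d-dimensional space and approximately preserve inner products, which is what gradient-based attribution scores ultimately compare.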

Citation

@inproceedings{hu2025grass,
  author    = {Pingbang Hu and Joseph Melkonian and Weijing Tang and Han Zhao and Jiaqi W. Ma},
  title     = {GraSS: Scalable Data Attribution with Gradient Sparsification and Sparse Projection},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025}
}
Last Updated on Oct 21st 2025