How to Protect Copyright Data in Optimization of Large Language Models?

arXiv:2308.12247
40 citations · ranked #193 of 2289 papers in AAAI 2024

Abstract

Large language models (LLMs) and generative AI have played a transformative role in computing research and applications. Controversy has arisen over whether these models output copyrighted data, which can occur if the data they are trained on is copyrighted. LLMs are built on the transformer neural network architecture, which relies on a mathematical operation called attention that uses the softmax function. In this paper, we show that large language model training and optimization can be viewed as a softmax regression problem. We then give a method for efficiently performing softmax regression in a way that prevents the regression function from generating copyrighted data. This yields a theoretical approach to training large language models while avoiding the generation of copyrighted data.
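For readers unfamiliar with the softmax regression framing, the following is a minimal sketch of the two objects the abstract refers to: the softmax function used inside attention, and a softmax regression objective of the kind studied in this line of work. The notation (matrix A, target vector b, parameter vector x, dimensions n and d) is an illustrative assumption, not quoted from the paper.

```latex
% Softmax of a vector z in R^n (the normalization step inside attention):
\mathrm{softmax}(z)_i \;=\; \frac{\exp(z_i)}{\sum_{j=1}^{n} \exp(z_j)}

% Softmax regression: find parameters x so that the softmax of Ax matches
% a target distribution b, where A is in R^{n x d} and b is in R^n
% (formulation assumed for illustration):
\min_{x \in \mathbb{R}^d} \;
  \Bigl\| \langle \exp(Ax), \mathbf{1}_n \rangle^{-1} \exp(Ax) \;-\; b \Bigr\|_2
```

Under this framing, the abstract's claim is that one can solve such a regression efficiently while constraining the learned function so its outputs avoid copyrighted targets; the precise constraint is detailed in the paper itself.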

Citation History

Jan 28, 2026: 0 citations
Feb 13, 2026: 40 citations