If you had a contributor who plagiarized at a 2-10% rate, would you really go “eh, it has to have a degree of novelty to be a problem” rather than just ban them? The different standards baffle me sometimes.
So do you want to legally review every line written by an LLM to see if it meets the fair use criterion, since you have to assume it was probably stolen? And would you do that for a known plagiarizing human contributor too…?
You can find various rates mentioned here: https://dl.acm.org/doi/10.1145/3543507.3583199 and here: https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/
If the 2-10% is just boilerplate syscall number defines or trivial MIN/MAX macros, then it’s just the common way of doing things.