Hey Michele,
Sorry if it seemed like I was trying to say anything was getting worse, or that this mistake has extended into bad code getting into the project. That wasn't my intention at all.
I think AI is pretty cool. But it inherently builds trust with the user, and that trust creates a situation where the user starts to blindly accept the AI's output.
I don't think its use should be blocked, but perhaps there should be a clear-cut policy that if AI was used, the contributor should be transparent about it. This way, time could be taken for more human verification to take place? The only reason I mention this is that I've seen multiple projects being slammed with PRs of AI-written code from new contributors, where the contributor neither knew how the code worked nor whether it actually worked at all... because they never wrote it.