Volpeon
@volpeon@icy.wyvern.rip
@abucci Thanks for your reply! You're making good points which I largely agree with. I've had rather subpar experiences with LLM-generated code at work myself, so it's not like I don't see the downsides and how it leads to the erosion of skill. It's true that this also has security implications.

However, from what I've seen, I think the way GitHub integrates Copilot into the process makes it less likely to cause the same skill degradation as an AI assistant embedded directly in an editor. As I said elsewhere, GitHub presents Copilot as a PR author, and using it is akin to iterating on a PR with a human author until it meets the project's standards.
If regular PRs don't pose a risk to one's skills, then I don't see why this would. It encourages the mindset that the AI must be held to the same standards as any other PR author, that it isn't inherently above them. I think this is a good way to handle it.
I'm happy to be corrected if my understanding of Copilot or the way the devs use it is wrong. You're clearly more involved in this topic than I am.

Apart from that, I do wonder how realistic it is to expect projects to reject LLM contributions forever. No matter what you and I want, the global trend is toward increasing adoption of AI, which means external contributions will become more and more "tainted", with or without the developers' knowledge. Given this outlook, I think it's better to be open to AI contributions. This allows developers to become familiar with the strengths and weaknesses of AI, and it creates an environment where contributors are willing to disclose their use of it so that reviews can be conducted with appropriate care. An environment where AI is banned will only lead to people trying to deceive the developers, causing unnecessary trouble.
