That is your right to believe!
It is incorrect to state that an LLM's code generation is “completely separated from where it came from.” While AI models do not explicitly track the license of every individual syntax element, they generate code based on publicly available, commonly used examples, and they can prioritize specific sources when instructed to do so.
I have explicitly tested both Copilot and Perplexity with instructions requesting only open-source, freely licensed code compatible with the Gramps project. In those tests, both tools provided code drawn exclusively from widely accepted open libraries such as NumPy, Pandas, and OpenCV, all of which have permissive licenses and well-documented usage examples.
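To make this concrete, the instructions I gave were along the following lines (the exact wording here is an illustrative paraphrase, not a transcript of my prompts):

```
Generate Python code for the Gramps project, which is licensed under
GPL-2.0-or-later. Use only open-source, permissively licensed libraries
(for example NumPy, Pandas, or OpenCV), and state the license of every
library the code relies on.
```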
This matters to me precisely because I am not a programmer, so it is important that the code I receive can actually be used.
Furthermore, AI tools can be configured to prioritize specific repositories and guidelines, which helps ensure compliance with licensing requirements. While AI does not inherently store metadata about the origin of each line of code, it generates code by synthesizing patterns from publicly available examples, meaning that open-source libraries and common coding conventions are naturally reflected in its suggestions.
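As one concrete mechanism, GitHub Copilot supports repository-wide custom instructions through a `.github/copilot-instructions.md` file. The contents below are my own sketch of how licensing guidance could be encoded there, not an official template:

```markdown
<!-- .github/copilot-instructions.md (illustrative sketch) -->
# Copilot instructions for this repository

- This project is licensed under GPL-2.0-or-later.
- Only suggest code that is original or based on permissively licensed
  libraries (BSD, MIT, Apache-2.0), such as NumPy, Pandas, or OpenCV.
- When a suggestion relies on a third-party library, name the library
  and its license in a comment.
```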
The claim that “all you have to do is give it a simple, single-line instruction” and AI will always comply without issue is an oversimplification. AI models operate probabilistically, meaning results may vary based on the complexity of the instruction and the broader context in which it is given.
However, the example I provided was intentionally simplified to make a clear argument: AI tools can follow structured prompts that instruct them to prioritize code compatibility with a specific project’s licensing requirements. While this does not guarantee perfect execution in all cases, it demonstrates that AI can be guided toward responsible code generation rather than functioning arbitrarily.
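As a sketch of what such a structured prompt can look like when driving a model programmatically, the snippet below uses OpenAI's Python client; the model name, the wording of the rules, and the sample request are all illustrative assumptions on my part, not a tested recipe:

```python
# Minimal sketch: asking an LLM for license-constrained code.
# Assumes the official OpenAI Python client (openai >= 1.0) and an
# OPENAI_API_KEY in the environment; prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

LICENSING_RULES = (
    "You are assisting the Gramps project (GPL-2.0-or-later). "
    "Only produce code that is original or drawn from permissively "
    "licensed libraries (e.g. BSD, MIT, Apache-2.0). Name every "
    "third-party library you use and state its license."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # The licensing constraint is stated once, as a system rule,
        # and then applies to every user request in the conversation.
        {"role": "system", "content": LICENSING_RULES},
        {"role": "user", "content": "Write a helper that parses a GEDCOM date string."},
    ],
)
print(response.choices[0].message.content)
```

The point is simply that licensing constraints can be stated once, up front, and applied to every request rather than repeated ad hoc; the output still needs human review before it is used.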
Finally, it is worth noting that using AI tools to generate code is fundamentally no different from a human developer referencing documentation or example code online. AI does not inherently make code less trustworthy—rather, its effectiveness depends on how it is guided and validated by the developer.
Additionally, I proposed a suggestion for how an AI guideline could be used within the Gramps project from a non-programmer’s perspective. My intent was to present an idea that could help structure AI-assisted contributions—not to endlessly debate individual sentences taken out of context.
I have shared my perspective, and I have no interest in spending further time dissecting specific fragments of my argument.
I still don’t understand the need to dissect individual lines from a comment rather than engaging with the guideline proposal I posted as a whole.
Note: This response was translated from Norwegian and polished with the help of Copilot to improve its flow and clarity in English.