Re: If it's not immoral for humans, how can it be for AIs?
I hear your concerns about licensing, and share them. However, I think there is a more fundamental problem here.
We all start from other people's work to guide and inform our own; that's how we learn. In pre-WWW days we used textbooks: a good fifty percent of what I have done probably had a starting point in "Numerical Recipes" or "The Art of Electronics", aided by numerous manufacturers' application notes. Not intrinsically any more reliable than a random web page, granted, but at least they had been past an editor and a proof-reader first...
Over the last couple of years, though, I have seen more and more copy-and-paste programming, with little effort made to understand how the code works, whether it is really appropriate to the current application, or indeed what it actually does. In hardware it is sometimes worse - there are some great open source designs out there, but certainly on websites targeted at the maker community I have seen designs ranging from "won't work under any circumstances" to "will probably kill you".
My concern here is that if the AI (god, I hate that term) algorithm has suggested the solution, there is much less chance that the naive user will be critical of what is put in front of them. The computer said it is right, and it is better at this than me, so it must be OK? As someone pointed out earlier, we don't know the signal-to-noise ratio of the GitHub codebase, and neither does the AI bot.
Anyway, just my two penn'orth. Feeling even more cynical than usual.