I think that approach is too limited.
"So why isn't there an AI program somewhere that can actually label the photograph itself? 'Cos AI isn't AI really, it's what the programmers tell it to do."
Whatever our definition of intelligence ends up being, it's got to do what the programmers tell it to do at a low level, otherwise it's no longer artificial. We could probably argue about what intelligence is all day, but if AI is possible at all, it has to be implemented with code and machinery, which means its instructions are created by another intelligence.
"As stated above, GIGO and it always will be until a 'machine' is actually cognitive."
Now this doesn't seem fair. GIGO applies to anything. If you take a human child and prime them with a bunch of false data, without giving them any way to discover that you're lying, they'll believe you. If everybody the child meets insists that the tall wooden things growing outside are called squirrels, they will believe those are squirrels until they meet other people who correct the misconception. If you're going to define intelligence as "the ability to figure out that everything people tell you is wrong even though you have no other source of information", that's a high bar. Computers don't get to meet random people and learn from them, or even experiment with actions to see the consequences. In life we have far more input than any of these programs ever get, and yet plenty of humans still end up with incorrect concepts of how the world is.