Always the same: "effectively outsourced 90 percent of your development to people you don't know and can't trust."
... which equally applies to proprietary software - at least with open source, if an issue arises you can (if you have the abilities) fix the code yourself.
I have frequently used open source code - however I have never gone for the "auto-pull stuff in" approach.
Instead, I downloaded the open source components, audited them for vulnerabilities and licence compliance (there are plenty of tools to help, from commercial offerings such as Mend (AKA WhiteSource) through to free open source tools), tested them, and added them to source control.
That way you know exactly what you are including - none of this randomly pulling the latest & greatest version from online and hoping for the best.
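The "know exactly what you are including" step above can be sketched in code. This is a minimal illustration (not any particular tool's workflow) of pinning a checksum at audit time and verifying the downloaded archive against it before committing to source control; the filenames are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large archives need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, pinned: str) -> bool:
    """True only if the archive matches the checksum recorded when it was audited."""
    return sha256_of(path) == pinned

# Demo with a temporary stand-in for a downloaded component archive.
import tempfile
with tempfile.TemporaryDirectory() as d:
    archive = Path(d) / "libfoo-1.2.3.tar.gz"   # hypothetical component
    archive.write_bytes(b"pretend tarball contents")
    pinned = sha256_of(archive)       # recorded at audit time
    print(verify(archive, pinned))    # True: matches the audited release, safe to commit
    print(verify(archive, "0" * 64))  # False: download differs, reject it
```

In practice the pinned checksum lives in source control alongside the vendored component, so any tampered or silently changed upstream archive fails verification before it ever enters your build.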
It has its drawbacks - when there is a zero-day you cannot just rely on your software picking up the latest & greatest fix as soon as it's available; instead you have to wait and incorporate it into your software (after testing it).
But so long as your own software has the ability to auto-update (ideally with user approval), it's not that much of an extra delay - just the time for you to pull the new version of the open source component, audit and test it, and update your source control repo.
The upside is that testing makes sure the fix does not break your software or have other issues - with "emergency" bug fixes you do sometimes see situations where the initial "fix" is sub-optimal and further fixes soon get rolled out.
The other drawback is that you have to do the pull, audit and incorporate cycle periodically anyway - be it for new versions fixing less critical bugs, improved performance, etc. But with "mature" open source components you fortunately don't have to update that often; it depends on your usage of that code and what has changed upstream, and scenarios where it's quite OK to run an older version without any issues are common.
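That periodic "do I need to update?" decision can be automated in a crude form. A minimal sketch, assuming plain dotted numeric version strings (real schemes like semver also have pre-release and build suffixes this ignores): compare the vendored version against the latest upstream release and flag when they diverge.

```python
def parse(version: str) -> tuple:
    """Split a dotted numeric version into a comparable tuple, e.g. '1.2.10' -> (1, 2, 10)."""
    return tuple(int(part) for part in version.split("."))

def needs_update(vendored: str, upstream: str) -> bool:
    """True when upstream has a strictly newer release than the vendored copy."""
    return parse(upstream) > parse(vendored)

# Tuple comparison handles multi-digit components correctly,
# unlike naive string comparison ("1.2.10" > "1.2.9" as strings is False).
print(needs_update("1.2.3", "1.2.4"))   # True: schedule a pull/audit/test cycle
print(needs_update("1.2.3", "1.2.3"))   # False: quite OK to stay on the current version
```

Whether a flagged update is actually worth taking is still the human judgement call described above - a newer version fixing code paths you never exercise can often wait for the next routine audit.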
More hard work with this approach, but you're insulating yourself from random code poisoning via the internet.