I was asking myself the same question - they're talking about programmed automated control systems, not AI where you try to teach an initially "dumb" system to program itself from experience.
Programs can (and often do) produce unexpected outputs, because the underlying logic isn't correct or the particular combination and/or sequence of inputs wasn't anticipated. AI won't protect against the unexpected and, when it occurs, will probably be harder to address because the actual logic is known only inside the computer. When the logic is known, the NI (Naturally Intelligent) meat (in situ or back on the ground) has a chance to manage the unexpected. HAL was fiction but, like a lot of ACC's ideas, was well thought through and should serve as a cautionary tale.
But, as you said, buzzword bingo gets more funding dollars...
PS: I had the same physics teacher as ACC - my claim to fame (along with the several hundred other boys taught at that school over several decades).