It seems like this effort is akin to converting a PLC's (programmable logic controller) ladder logic into another programming language, or vice versa. I don't quite see the utility of this. If they're just trying to convert X number of inputs into Y number of outputs, then it's just a state machine. State machines can be very elegant (but those are usually quite obfuscated) or very inelegant (and usually easier to understand). Computers are very good at predictable behaviour (even without AI). Granted, most correctly written software, excluding AI, can be distilled down to a gigantic (predictable) state machine. Lucky for us humans.
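To make the "X inputs to Y outputs" point concrete, here's a minimal sketch of a Mealy-style state machine in Python. The function and example names are mine, purely for illustration:

```python
# A Mealy-style state machine: each (state, input) pair deterministically
# yields a (next_state, output) pair, so the whole behaviour is a table.
def run_machine(transitions, start, inputs):
    """transitions: dict mapping (state, input) -> (next_state, output)."""
    state, outputs = start, []
    for symbol in inputs:
        state, out = transitions[(state, symbol)]
        outputs.append(out)
    return outputs

# Example: a rising-edge detector over a 0/1 input stream.
edge_detector = {
    ("low", 0):  ("low", 0),
    ("low", 1):  ("high", 1),  # 0 -> 1 transition: emit a 1
    ("high", 1): ("high", 0),
    ("high", 0): ("low", 0),
}

print(run_machine(edge_detector, "low", [0, 1, 1, 0, 1]))  # [0, 1, 0, 0, 1]
```

The whole program is just a lookup table plus a loop, which is exactly why the behaviour is predictable: enumerate the states and you've enumerated everything it can do.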
To extend this thought: this is what FPGA tools already do. They take Verilog/VHDL and turn it into a set of bits that define a huuuuge state machine running in the FPGA's logic gates. Again, this has been done.
Taking examples from human programming as models of (good) security just seems... wrong. We (humans) aren't very good at that.
And lastly, why would computer- (AI-) generated code, in a language designed by humans to run on a computer, be desirable? Once it's generated, humans still have to review it and judge its correctness, after the AI has already produced it from learned examples (potentially billions of input examples, both good and bad).
We go from:
problem -> human -> (programming) language source -> preprocessed -> compiled machine code of choice
And with AI:
problem -> AI -> (programming) language source -> preprocessed -> compiled machine code of choice
Why not just:
problem -> AI -> compiled machine code?
We exist to serve our AI overlords.