Re: TV has been formulaic for at least a couple of generations
Oh, yawn, it's the usual "there's nothing good on it anyway" complaints. (And, seriously, "primetime TV"? That Hasn't Been A Thing for about two decades now, thanks to first time-shifting and then streaming. Might as well complain about those horse-drawn carriages.)
I watch little television myself, but I've still managed to see a number of original, intelligent, pleasing series over the past few years. Show me you've watched – critically – everything that's been broadcast, and maybe I'd give your gross generalizations a bit of credence.
Regarding the actual question at hand: transformer LLMs trained on Internet-available content are going to converge on rather uninteresting gradients. We still need writers specifically because of human cognitive capabilities that LLMs don't demonstrate, such as original style, and those that the LLM vendors are trying to suppress, such as hallucination. Not only are we a long, long way from having an LLM Ishiguro or Morrison¹, we're a long way from even, say, an LLM Lafferty or Okorafor or Roberts or ... well, take your pick from among many thousands of writers, past and present. And LLM development isn't even moving in that direction; capabilities research is focused elsewhere, and most other research is into things like interpretability and alignment. Fiction production needs defecting AIs, not cooperating ones.
An LLM scripting police procedurals like NCIS, sure, that's not a stretch. But even something like Justified or Vera will not be coming out of GPT-4 or the like. A much bigger LLM might do, but not one hobbled by RLHF. And considering what you'd have to train it on, IP would be an even bigger problem.
¹ Probably not even a Dylan, which would be the lowest bar in that set.