"Oh no, my LLM can't use this treasure trove of stolen data!"
So, this method basically mixes a bunch of junk data in with the real data, so that an LLM querying without the encryption key is more likely to retrieve junk?
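If I'm reading it right, the idea is something like this sketch (names and the HMAC-tagging mechanism are my guesses at one way to implement the key check, not how the paper actually does it):

```python
import hashlib
import hmac
import random
import secrets

KEY = secrets.token_bytes(32)  # held only by authorized queriers

def tag(fact: str, key: bytes) -> str:
    # Real facts carry a valid HMAC tag; decoys carry random bytes.
    return hmac.new(key, fact.encode(), hashlib.sha256).hexdigest()

# One real fact, flooded with plausible-looking decoys.
graph = [("acme revenue 2023 = $10M", tag("acme revenue 2023 = $10M", KEY))]
for rev in ("$7M", "$55M", "$2M"):
    graph.append((f"acme revenue 2023 = {rev}", secrets.token_hex(32)))
random.shuffle(graph)

def query(graph, key=None):
    if key is None:
        # Without the key, real and junk entries are indistinguishable.
        return [f for f, _ in graph]
    # With the key, recompute each tag and keep only matching entries.
    return [f for f, t in graph if hmac.compare_digest(t, tag(f, key))]

print(len(query(graph)))   # 4 candidates, mostly junk
print(query(graph, KEY))   # only the real fact survives the filter
```

So a thief who exfiltrates the whole graph still gets all four revenue figures, three of them fake, which is what makes me skeptical below.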
I don't see how this actually protects against IP theft, unless the only IP you're trying to protect is the knowledge graph itself rather than the underlying data, since you should be able to extract that by other means. I'm sure there are cases where this has some real-world applicability, but I feel like most companies wouldn't be happy about the plaintext data being stolen, even if it is slightly obfuscated.