Re: Intelligible non-decryption
People here may be interested in https://en.m.wikipedia.org/wiki/List_of_cryptographic_file_systems - particularly StegFS, the Rubberhose filesystem and PEFS.
Of course, with enough computing resources and a sufficiently small search space - your average JPEG, for instance - even encryption won't help you: for a given search space, all that need be done is to generate every possible sequence of 0s and 1s equal in size to the file in question and, eventually, the data will turn up as 'plaintext', so to speak.
Obviously, it's not a trivial exercise even for small search spaces, but it's still technically doable.
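To put a rough number on 'not trivial': the candidate space is 2 raised to the file's length in bits. A quick back-of-the-envelope sketch (the ~100 KB file size is just an arbitrary assumption for illustration):

```python
# Back-of-the-envelope size of the "enumerate every bit pattern" search space.
# The file size below is an arbitrary assumption, purely to make the point.
file_size_bytes = 100 * 1024       # a modest ~100 KB JPEG
n_bits = file_size_bytes * 8
print(f"{n_bits} bits -> 2**{n_bits} candidate files")
# Output: 819200 bits -> 2**819200 candidate files
# For comparison, the observable universe holds very roughly 2**266 atoms,
# so "technically doable" is carrying an awful lot of weight here.
```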
In fact, given enough time, 'all' that need be done is to generate every meaningful sequence at every size from n = 1 bit up to n = the largest storage capacity available, discarding those that represent random/meaningless data - and then every data-set that could possibly be created is already known in advance anyway.
From there, it's merely a matter of mapping a given data-set to a given individual (or individuals) by cross-indexing the encrypted form against the stored data-set template - i.e. if we know that a given encrypted pattern maps onto a specific set of unencrypted data of the same size, then we know what the data would be without ever having to decrypt it.
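As a toy illustration of that cross-indexing idea (and of the assumption it leans on): a precomputed lookup only works when the transform is deterministic and the message space is small enough to enumerate. The sketch below uses SHA-256 as a stand-in for whatever fixed encryption/encoding step is involved, and a 16-bit message space - both purely illustrative assumptions:

```python
# Toy "codebook": precompute the transformed form of every message in a tiny
# message space, then recognise an intercepted blob by lookup rather than by
# decrypting it. Only works because the transform is deterministic and the
# space is small enough to enumerate.
import hashlib
from itertools import product

ALPHABET = b"01"
MSG_LEN = 16                               # 2**16 candidate messages: tractable

def transform(msg: bytes) -> bytes:
    # Stand-in for some fixed, deterministic encryption/encoding step.
    return hashlib.sha256(msg).digest()

codebook = {
    transform(bytes(m)): bytes(m)
    for m in product(ALPHABET, repeat=MSG_LEN)
}

intercepted = transform(b"0110100101101001")
print(codebook[intercepted])               # recovers the original message
```

The dict lookup stands in for the 'stored data-set template' above; the catch is that real ciphers are keyed and usually randomised per message, so the table would have to be rebuilt for every key and nonce.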
Ultimately, the only way you can be even reasonably sure your data will be safe from prying eyes would be to create an infinite stream of compressed data which, like a zip file, doesn't reveal its index until the end. Of course, since the data stream is infinite, the index will never be transmitted and no-one will ever be able to recreate it - but that includes the recipient, so it's useless as anything but a thought exercise. Statistical analysis might, however, make sufficiently large portions of it intelligible along the way to build a robust enough case of "we've got enough for our purposes and the rest doesn't matter" - so even /that/ approach might fail eventually.
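The 'index at the end' behaviour is easy to see with an ordinary zip archive: the central directory that names the members is only written when the archive is finalised, so a stream cut off before that point can't be listed. A small demonstration using Python's standard zipfile module:

```python
import io
import zipfile

buf = io.BytesIO()
zf = zipfile.ZipFile(buf, "w")
zf.writestr("a.txt", b"hello" * 1000)   # member data is written immediately...

partial = buf.getvalue()                # ...but the central directory isn't
try:
    zipfile.ZipFile(io.BytesIO(partial))
except zipfile.BadZipFile as err:
    print("stream cut off before the index:", err)

zf.close()                              # close() writes the central directory
print(zipfile.ZipFile(io.BytesIO(buf.getvalue())).namelist())   # ['a.txt']
```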
As for the steganographic element, as the AC above mentioned, there are often 'tells', so subtracting the intelligible data from the complete data-set might reveal enough of the hidden data to yield a data-set to which the above approach could reasonably be applied - and if the need is great enough, a data-set the size of a JPEG might be worth brute-forcing that way.
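For a concrete (and deliberately naive) version of that 'subtraction': if the hidden payload sits in the least-significant bits of the cover data and you can get hold of the untouched original, comparing the two isolates exactly the bits that were changed. The embed/extract helpers below are hypothetical toys, not any particular stego tool:

```python
# Naive LSB steganography plus the "subtract the known cover" trick.
# Purely illustrative; real stego tools and real steganalysis are far subtler.

def embed_lsb(cover: bytes, payload: bytes) -> bytes:
    """Hide payload bits in the least-significant bit of successive cover bytes."""
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_lsb(data: bytes, n_bytes: int) -> bytes:
    """Read the payload back out of the LSBs."""
    bits = [b & 1 for b in data[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )

cover = bytes(range(256)) * 16            # stand-in for raw image pixel data
stego = embed_lsb(cover, b"secret")
print(extract_lsb(stego, 6))              # b'secret'

# "Subtracting the intelligible data": comparing against the original cover
# shows exactly which bytes were touched - the 'tell' the AC was talking about.
diff = [i for i, (a, b) in enumerate(zip(cover, stego)) if a != b]
print(len(diff), "bytes had their LSB flipped")
```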
</just some idle musing>