Plausible deniability - a view from a different angle...

Topics: Feature Requests, Users Discussion
Oct 28, 2015 at 8:25 PM
Edited Oct 28, 2015 at 8:31 PM
The issue of plausible deniability has been addressed in VC/TC by means of the possibility to create a hidden container/partition/OS. This is a great option, although I am personally not a big fan of the whole idea of plausible deniability. Why? Simply because the Western (and not only Western) legal systems are moving towards new norms and horizons, namely overwriting the good old Roman postulate that "everyone is deemed innocent until proven guilty"... Penal codes, tax laws, you name it: it is already the norm that the defendant has to prove his innocence and not the other way around. Many countries already force you to disclose your password or face penal action...

In many countries already (and I am not referring to third-world countries), the legislation is such that if one is being investigated and VC (or the like) is found on his PC together with random-looking data, this data will be deemed to be a VC container (even if it is not). If he refuses to produce the password, the content of this random-looking data will be deemed to be whatever the prosecutor decides it to be. If the prosecutor states that the data contains a confession that "I killed Kennedy", one had better provide a suitable password showing this not to be the case, or it will be deemed to be so. There is little court case history yet, but if you read many countries' penal codes and administrative laws you may have a few more sleepless nights. I am absolutely certain that soon, very soon, these laws will be retouched even further, so that even if you provide your password and open your data, showing that you didn't kill Kennedy, since the prosecution knows that VC has the ability to create a hidden container, you will be forced to provide the second password too (even if you don't have one and there is no hidden partition)... and if you don't, you will still be deemed to have murdered the president...

So, is there a solution to this problem, if you are always deemed guilty unless you prove you are not, and you will always be forced to provide your password (or in VC's case your two passwords, even if you don't have a second one; by the way, in the UK nobody cares if you have genuinely forgotten even your first one)? So what to do? The only solution to this is a bit crazy, somewhat complicated, counter-intuitive, but 100% watertight... real plausible deniability will be achieved only if the software (VC in our case) provides the possibility to create an infinite number of hidden containers. Let me make myself clear: not a large number, but infinite. Well, at least limited by the number of clusters/bytes/bits... Only then is there no feasible way of forcing you to reveal an infinite number of passwords: no matter how many passwords you provide, there may always be one more.

In this case (legally), if you provide your first password (obviously showing that you didn't kill Kennedy), you cannot even be deemed guilty of not having provided more passwords... I find it difficult to explain at this time of the day, but give it some thinking and you will come to the same conclusion... this is the ONE-TIME PAD solution to plausible deniability... at least nobody can defy Georg Cantor: infinity matters... and if VC wants to deploy real plausible deniability, this is the way forward... Of course no one will make more than one or two hidden partitions, but that is not the idea... The point is to have the theoretical possibility to do so, and this will defy the new "deemed guilty until proven innocent" reality... :)
Nov 8, 2015 at 2:50 PM
Hi Alex,

Can you please direct me to a credible source where there is even a suggestion of rewriting criminal legislation so that you must prove your innocence rather than the prosecution having to prove you are guilty?

I have to be honest, it sounds like you have no idea what you're talking about… The system of innocent until proven guilty has been around for a very long time!

I would like to advise you to back up your claims, when you make them, with credible sources and not just an opinion.

While opinions are good for votes, actually knowing what you're talking about might help.
Nov 8, 2015 at 5:11 PM
SarDamien wrote:
Hi Alex,

Can you please direct me to a credible source where there is even a suggestion of rewriting criminal legislation so that you must prove your innocence rather than the prosecution having to prove you are guilty?
Regulation of Investigatory Powers Act = Guilty until victim can prove his innocence.
Nov 8, 2015 at 5:41 PM
SarDamien wrote:
Hi Alex,

Can you please direct me to a credible source where there is even a suggestion of rewriting criminal legislation so that you must prove your innocence rather than the prosecution having to prove you are guilty?

I have to be honest, it sounds like you have no idea what you're talking about… The system of innocent until proven guilty has been around for a very long time!

I would like to advise you to back up your claims, when you make them, with credible sources and not just an opinion.

While opinions are good for votes, actually knowing what you're talking about might help.
Don't bother; understanding that requires at least a few years of legal studies... and from the way you put your question, I can only assume you are not in the legal field (with all due respect, you are probably a great professional in other areas).

And unfortunately, I know very well what I am talking about... very, very unfortunately...
Nov 8, 2015 at 11:32 PM
Edited Nov 8, 2015 at 11:48 PM
DBKray, I've got to say that quoting a whole piece of legislation without actually being precise shows you have no idea what you're talking about at all.

Under all current legislation within the UK you are innocent until proven guilty. The only time you are not is when it can be proven that you have deliberately withheld information such as a password. This demonstrates that you have intent, and therefore it may be proven that you are guilty.

Alex, I know exactly what I am talking about. I have studied and worked in and with law for many, many years. I have worked with some of the top barristers in the country.

Unless you two have some actual credible evidence and actual quotes from the legislation, I'd recommend you do some research.
Nov 9, 2015 at 12:49 PM
SarDamien wrote:
The only time you are not is when it can be proven that you have deliberately withheld information such as a password.
You answered your own question and confirmed my point.

In simple terms, encryption is effectively illegal in the UK. It has been made illegal in an underhand way, so most members of the public, including yourself, have not noticed.

Plausible deniability is a technique to help legitimate privacy advocates defend themselves against RIPA. Our protagonist is able to provide a password in order to comply with RIPA, without being forced to share his/her private files with an oppressive government.

SarDamien wrote:
This demonstrates that you have intent, and therefore it may be proven that you are guilty.
I am grateful you have pointed out that the common belief is basically "if you have nothing to hide then you would hand over your password". However, I and many others do not believe this to be true, or a justification for government snooping into innocent people's private files.

Considering you have conceded the points I have quoted, I am unsure of your motivation for posting in this thread. I will not be dragged into a trolling discussion, as it seems you fully understand the need for plausible deniability in the text I quoted. In light of this I will allow you the last comment, but I will not be replying further.
Nov 9, 2015 at 12:56 PM
100% agree with the above!
Nov 18, 2015 at 6:05 PM
Alex512 wrote:
The issue of plausible deniability has been addressed in VC/TC by means of the possibility to create a hidden container/partition/OS. This is a great option, although I am personally not a big fan of the whole idea of plausible deniability.
Plausible deniability, as implemented in VC, is pretty evil. First of all, because of the reason you state - it can make it difficult to prove the data you have doesn't have a hidden container. But secondly, because for plausible deniability to work, it has to be plausible. And VC's implementation isn't.

It's actually pretty easy to show, with a very high probability, that a container used as the outer container in a hidden container scheme actually contains a hidden container. First of all, most people who use hidden container systems don't write to the outer container very often. That's a red flag right there. There are methods to tell how well used a hard drive sector is, so if you can show the outer container hasn't had anything written to it in a year, and yet the container it's in has been written to recently and often, that's another red flag. Or when there are no update timestamps on any file in the outer container that match the update timestamp of the container itself. Or let's say you're conscientious and write to the outer container a lot. Modern filesystems naturally spread data over the whole volume as it is used. A 500 GB partition that is used regularly, but which has never had more than 200 GB written to it at any one time, will nevertheless have data well past the 200 GB point because of the way the OS spreads its allocations. So when you mount the outer container and it is analyzed and... look at this... no data has ever been written past the 200 GB point in the outer container, and yet it's been used for a year and a half... that is a huge red flag.

Those are a few examples; I could go on for an hour on the topic. This "feature" was added to TC by developers who thought it was a cool idea but had no real-world forensic analysis experience. Plausible deniability as implemented here is, in its best case, wildly unreliable, and in its worst case it can get you killed.

However, the fact that it's fairly easy to show that there is a hidden container doesn't make the feature any less dangerous. I do not want to depend on complex forensic techniques to prove that my encrypted containers are just what I say they are if my computer is impounded on a visit to the Middle East. I want to be able to prove it reliably and quickly. That's why for every container I make, I make it so I can prove it doesn't contain a hidden container. The way I do this is to zero all unused sectors when I mount it for the first time, and then zero every file when I delete it thereafter. If I can show, once the container is mounted, that the vast majority of unused space is zeroed, then I can prove there is no hidden container.
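For anyone who wants to automate that, here is a minimal sketch of the zero-the-free-space step in Python. The mount point, chunk size and temp-file name are assumptions for illustration (nothing here is a VC feature); it simply grows a zero-filled file until the mounted volume is full and then deletes it, so the bulk of the free space ends up zeroed. Filesystem metadata and slack space are not covered, so treat it as an approximation of the approach described above, not a guarantee.

```python
# Minimal sketch: zero the free space of an already-mounted volume.
# MOUNT_POINT and the temp filename are hypothetical.
import os

MOUNT_POINT = "/mnt/vc_volume"      # illustrative mount point
CHUNK = 4 * 1024 * 1024             # write 4 MiB of zeros at a time

def zero_free_space(mount_point: str) -> None:
    filler = os.path.join(mount_point, "zero_fill.tmp")
    zeros = bytes(CHUNK)
    try:
        with open(filler, "wb") as f:
            while True:
                f.write(zeros)              # loop until the volume is full
                f.flush()
                os.fsync(f.fileno())        # force the zeros onto the disk
    except OSError:
        pass                                # "no space left on device" ends the loop
    finally:
        if os.path.exists(filler):
            os.remove(filler)               # the zeroed space is released as free space

if __name__ == "__main__":
    zero_free_space(MOUNT_POINT)
```

Verifying is the mirror image: read the unallocated clusters back and check that they are almost entirely zeros.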

I advise this method for everyone who uses VC until such time as the developers wise up and remove this ineffective and dangerous "feature".
Nov 18, 2015 at 6:27 PM
Kudalufi wrote:
However, the fact that it's fairly easy to show that there is a hidden container doesn't make the feature any less dangerous. I do not want to depend on complex forensic techniques to prove that my encrypted containers are just what I say they are if my computer is impounded on a visit to the Middle East. I want to be able to prove it reliably and quickly.
A simple and immediate fix for you personally is to create a volume with a hidden container. Then when you are questioned, provide BOTH the outer password and the inner password. VC can only create 1 inner and 1 outer, so you can demonstrate you have complied and provided access to all data.

I might not be able to reply further to this as I am finding it increasingly difficult to get past the captcha test to enable me to post.
Nov 18, 2015 at 6:48 PM
Edited Nov 19, 2015 at 12:40 PM
DBKray wrote:
A simple and immediate fix for you personally is to create a volume with a hidden container. Then when you are questioned, provide BOTH the outer password and the inner password. VC can only create 1 inner and 1 outer, so you can demonstrate you have complied and provided access to all data.
With the drawback of limiting the amount of data you can store in the outer volume, since you must mount the outer volume with hidden-volume protection so that writes to it don't damage the hidden volume you want to mount in the future. I use NTFS format for my volumes.

DBKray wrote:
I might not be able to reply further to this as I am finding it increasingly difficult to get past the captcha test to enable me to post.
I just click the captcha refresh icon until it displays an easy to read number instead of the difficult words.
Nov 18, 2015 at 7:10 PM
Kudalufi, your comment is perfect and spot on. It is indeed very easy to prove that the outer volume has not been used. Even worse, however, the law can simply presume that to be so, as long as VC provides the possibility to make a hidden (second) volume.

DBKray, absolutely correct. If one wants to be able to show everything, then simply make both the hidden and the outer volume and provide both passwords.

My idea, however, was a completely different one, purely mathematical (and legal). If the encryption software is capable of making an unlimited number of hidden (inner) volumes, then no law can ever force someone to disclose an unlimited number of passwords (or give unlimited access). Whatever the law states, there will always be one more password possible, which makes any such law completely impotent. The law cannot require "unlimited" passwords, as this is simply not feasible and would render everybody guilty, since nobody could comply with such a norm.

I haven't yet heard any comments on my main idea. Of course, we have to presume that a solution for the vulnerabilities that Kudalufi wrote about will somehow be found.
Nov 19, 2015 at 11:26 AM
Alex512 wrote:
DBKray, absolutely correct. If one wants to be able to show everything, then simply make both the hidden and the outer volume and provide both passwords.
Yes, but if I never actually use the inner volume, what's stopping me from putting the real data inside a container I store inside the inner volume's space, but not in its actual filesystem? Say, in raw sectors? It's trivial in Linux to make a virtual partition out of a sequence of blocks inside a real one. No, the only real solution, the only way to PROVE there is no nested encrypted container of any sort, is to have a single, standard encrypted volume and have all empty space zeroized. As long as there is random noise anywhere, it's suspect.
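To make the raw-sector point concrete, here is a rough sketch of how little it takes to park data at an arbitrary byte offset of a partition, completely outside any filesystem. The device path, offset and payload are made up for illustration, and writing to a real device at an arbitrary offset will destroy whatever lives there; on Linux the same idea is what a loopback device set up with an offset gives you.

```python
# Illustration only: stash and retrieve an (already encrypted) blob at a raw
# byte offset inside a partition, bypassing the filesystem entirely.
# DEVICE, OFFSET and PAYLOAD are hypothetical.
import os

DEVICE = "/dev/sdb1"                 # hypothetical partition
OFFSET = 200 * 1024**3               # e.g. 200 GiB into the partition
PAYLOAD = b"ciphertext blob goes here"

def write_raw(device: str, offset: int, data: bytes) -> None:
    fd = os.open(device, os.O_WRONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        os.write(fd, data)
    finally:
        os.close(fd)

def read_raw(device: str, offset: int, length: int) -> bytes:
    fd = os.open(device, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, length)
    finally:
        os.close(fd)

# write_raw(DEVICE, OFFSET, PAYLOAD)
# assert read_raw(DEVICE, OFFSET, len(PAYLOAD)) == PAYLOAD
```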
Alex512 wrote:
My idea, however, was a completely different one, purely mathematical (and legal). If the encryption software is capable of making an unlimited number of hidden (inner) volumes, then no law can ever force someone to disclose an unlimited number of passwords (or give unlimited access). Whatever the law states, there will always be one more password possible, which makes any such law completely impotent. The law cannot require "unlimited" passwords, as this is simply not feasible and would render everybody guilty, since nobody could comply with such a norm.
You don't quite understand. In the places where this is a problem, they don't care about the law or subpoenas. They won't ask nicely. Even crossing the border into the US, you're not a person with rights until you actually get in. They can hold you at the border, demand that you open your laptop, demand your keys, and if they think for any reason you are holding out, there is very little they cannot do to you. This is in the US. Now think about the Israelis, or the Saudis. Places you might actually have to go for work. If you can't prove you're showing them everything, they will have you tied spread-eagle with your man parts flayed and pinned like a dissected frog. You will tell them anything. And they won't care that there was nothing to tell. If you're lucky they'll sew up your scrotum when they're done. You can, of course, kiss your computer goodbye.

Alex512 wrote:
I haven't yet heard any comments on my main idea. Of course, we have to presume that a solution for the vulnerabilities that Kudalufi wrote about will somehow be found.
There isn't a solution for the vulnerabilities of a sub-container method like VC's. Nested containers ad infinitum can still be detected. Nested containers also increase the amount of work required significantly, as you have to make some sort of regular (but ultimately fake) use of every parent container, and you have to put data in each parent that looks like it's worth going to the trouble of encrypting and nesting. You would have to spend more time on pretend data than on the real thing. You have rapidly diminishing space available, and in the end it's not worth the effort for the gain. There are other methods that are far more useful, like just using a tiny memory stick with conventional encryption that you can simply hide in some nook in your laptop. Or, really, the best way of all is hiding in plain sight: encrypted, steganographically encoded data in photos you post right on Facebook or Flickr.
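Purely to illustrate the hiding-in-plain-sight idea, here is a toy least-significant-bit sketch using Pillow. The filenames are placeholders and this is nowhere near a real steganography tool; the payload should already be encrypted so the embedded bits look like noise, and naive LSB embedding like this is itself detectable by statistical steganalysis.

```python
# Toy LSB steganography sketch (pip install Pillow). Hides an
# already-encrypted payload in the lowest bit of each colour channel.
from PIL import Image

def embed(cover_png: str, out_png: str, data: bytes) -> None:
    img = Image.open(cover_png).convert("RGB")
    flat = [c for px in img.getdata() for c in px]
    payload = len(data).to_bytes(4, "big") + data        # 4-byte length prefix
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(flat):
        raise ValueError("cover image too small for payload")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit                    # overwrite the lowest bit
    out = Image.new("RGB", img.size)
    out.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    out.save(out_png)                                     # PNG is lossless, keeps the bits

def extract(stego_png: str) -> bytes:
    flat = [c for px in Image.open(stego_png).convert("RGB").getdata() for c in px]
    def read_bytes(start_bit: int, n: int) -> bytes:
        out = bytearray()
        for b in range(n):
            byte = 0
            for i in range(8):
                byte |= (flat[start_bit + b * 8 + i] & 1) << i
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)
```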

There is no truly effective solution to making plausible deniability actually plausible. The best technique I've seen was where the inner container driver takes data blocks as they are written, makes RAID-like copies of them, and sprinkles the copies pseudo-randomly (a la spread spectrum) throughout the outer volume's free space. There is no hard protection against overwriting inner-volume data, just the probability that, unless you're doing a LOT of writing to the outer container, you won't overwrite every copy of an inner-volume block. This is still detectable, though, as the "random" blocks end up getting speckled into actual old data. For example, a deleted video whose space is returned to the free-space pool ends up with blocks of random noise speckled through it like flecks of pepper in a bowl of sugar. You have to have an inner-volume-aware eraser program that regularly erases all outer-volume unallocated space to random noise without touching outer-volume data. Just the process of doing that is enough to infer the presence of an inner volume.
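For what it's worth, the key-seeded pseudo-random placement such a scheme relies on might look roughly like the sketch below. The replica count, the number of free slots and the hashing are my own assumptions for illustration, not how any shipping product actually does it.

```python
# Sketch of "RAID-like copies sprinkled pseudo-randomly": each hidden-volume
# block is replicated, and the replica positions are derived deterministically
# from a secret key, so the driver can find them again without an on-disk index.
import hashlib

REPLICAS = 4              # copies kept per hidden block (illustrative)
FREE_SLOTS = 1_000_000    # free outer-volume blocks (assumed known)

def replica_slots(key: bytes, block_index: int,
                  n: int = REPLICAS, slots: int = FREE_SLOTS) -> list[int]:
    """Deterministically map one hidden block to n pseudo-random free slots."""
    positions: list[int] = []
    counter = 0
    while len(positions) < n:
        digest = hashlib.sha256(key + block_index.to_bytes(8, "big")
                                + counter.to_bytes(4, "big")).digest()
        slot = int.from_bytes(digest[:8], "big") % slots
        if slot not in positions:          # avoid landing on the same slot twice
            positions.append(slot)
        counter += 1
    return positions

# The hidden block survives only as long as at least one replica has not been
# overwritten by ordinary writes to the outer volume - a probability game,
# exactly as described above.
print(replica_slots(b"secret key", block_index=42))
```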

The VC developers really need to understand that this misfeature can actually cause harm. Real world harm. They need to release their child-like grip on a "cool feature" and simply take it out.
Nov 26, 2015 at 9:04 PM
Edited Nov 26, 2015 at 9:05 PM
Kudalufi... my idea is still a bit different from what you have probably understood:
Nested containers ad infinitum can still be detected. Nested containers also increase the amount of work required significantly, as you have to make some sort of regular (but ultimately fake) use of every parent container, and you have to put data in each parent that looks like it's worth going to the trouble of encrypting and nesting.
My idea is NOT to actually make a large number of (or close to infinitely many) sub-containers (hidden volumes), but rather SIMPLY to have the possibility to make them... while still working with one or two, as is the case now. Merely having the option to create them would make any future legislation irrelevant. Of course, I know that this option is not torture-proof. There is no such option, unfortunately...

In line with recent developments, I firmly believe encryption will soon be banned, so we may indeed need to look into steganography soon, very soon...
Dec 3, 2015 at 9:40 AM
I would only add that it is theoretically impossible to prove the absence of anything.

Example: assume I state that you used a steganography (or whatever) tool to embed hidden information into, say, unused parts of a data storage device, into media files, etc.

Please tell me, how on Earth could you prove that something does not exist? How will you plead that you are not guilty?

The flaws of the UK legal system don't mean the hidden containers should be removed. If you wish, insist that the entire feature be compiled conditionally, behind some build option, and make a custom build without it. However, as I pointed out above, there will always be a simple way to "prove" you are guilty, just because you cannot theoretically prove that the data you own contains nothing hidden. It doesn't matter whether any related tools are found on your computers or not.

I think features should not be removed just because some legislation wins another victory over people's privacy. Fork a VC version without hidden volumes, and pray that law enforcement never learns about steganography.
Dec 3, 2015 at 11:35 AM
quainta wrote:
I would only add that it is theoretically impossible to prove the absence of anything.

Example: assume I state that you used a steganography (or whatever) tool to embed hidden information into, say, unused parts of a data storage device, into media files, etc.

Please tell me, how on Earth could you prove that something does not exist? How will you plead that you are not guilty?
Good question, and here comes the answer. Under current legislation (confirmed by legal practice) applied in many countries (not only the UK), if there is no cryptography/stegano software installed on your machine, then you will most probably be assumed innocent (= deemed to have no hidden/encrypted data on the computer). If, however, you have certain software installed (or available in some other form, e.g. on a flash drive, which you will be deemed to be using; or even if you have just visited the download page of VC, for example... the list goes on and on...), then you will be kindly asked to provide all possible passwords to that software (in the case of VC you will have to provide two passwords, both of which will have to work and open the outer and the inner volume). Failure to do so will deem you guilty (= you are hiding information, which will be used against you in a court of law).
Simple and cruel... reality...
Dec 5, 2015 at 12:19 AM
A very interesting discussion. But this is the basic reason for plausible deniability!
Nobody can tell whether a drive was overwritten with random data or contains a TC/VC container. And even if that were the case, nobody could tell whether there is a second container (well, apart from the file system limitation). So you will be found guilty because you encrypted drive A, provided the password, and happen to own another drive which has been overwritten with random data?
I don't think so.

As quainta wrote: by the very same argument you would be guilty of murder, because you cannot prove that you never owned a gun, or guilty of any other crime.
AFAIK, this legislation is about passwords to, e.g., online accounts known to the authorities. But even that is pretty weird, as it is always possible that you have actually forgotten your password. How can anybody know?
Dec 5, 2015 at 3:34 AM
RandomNameforCode wrote:
A very interesting discussion. But this is the basic reason for plausible deniability!
Nobody can tell whether a drive was overwritten with random data or contains a TC/VC container. And even if that were the case, nobody could tell whether there is a second container (well, apart from the file system limitation). So you will be found guilty because you encrypted drive A, provided the password, and happen to own another drive which has been overwritten with random data?
I don't think so.
Let me nail down some of the flaws in "plausible deniability".

First, the legal myths:
Myth: They have to prove the data is there
Reality: The burden of proof is different in different jurisdictions, and in different situations. For example, at border crossings (which is the number one point of contact for this issue), the burden of proof is on you to show that you're not bringing in anything you "shouldn't be". And "shouldn't be" in that situation means whatever they want it to mean.

Myth: Well, if "they" cannot absolutely prove that the hidden container is there, you're ok.
Reality: Even when the burden of proof is on them, they do not have to prove it with 100% certainty. Depending on the situation, it can be the balance of probabilities (>50%), or beyond reasonable doubt (i.e. would a reasonable person, educated enough to understand the technical issues, believe that the data is there). That last part will get you, because someone who understands the issue well enough to be informed on it will be able to predict with high probability whether or not a hidden container exists. See below.

Technical Myths:
Myth: It's just "random data" at the end of your drive.
Reality: No one's drive has "random data" on it by chance. Leftover bits of deleted files, videos, etc., clutter up the "empty" space of a hard drive quickly, but cryptographically random data stands out from almost everything else you can have (even highly compressed data) like a sore thumb under analysis. So the fact that you have a huge chunk of cryptographic-grade random data on your hard drive at all is one red flag.

Myth: Random data doesn't tell them anything. What if my drive is full of it?
Reality: This random data will tell stories, because this random data is special. It's special because it sits in an area that the outer container never, ever writes to. This no-go zone might as well be a neon sign saying "HIDDEN CONTAINER HERE".
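As a concrete illustration of how loudly cryptographic-grade randomness announces itself, here is a small sketch that walks a raw disk image in fixed windows and scores each window's Shannon entropy. The image path and window size are placeholders, and real forensic tools do far more, but even this crude scan makes a contiguous multi-gigabyte run of near-8-bits-per-byte data stand out immediately.

```python
# Sketch: per-window Shannon entropy over a raw disk image. Encrypted or
# compressed data sits near 8 bits/byte; typical filesystem leftovers
# (zeros, text, headers, sparse metadata) do not.
import math
from collections import Counter

WINDOW = 1024 * 1024        # 1 MiB windows (illustrative)

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in Counter(data).values())

def entropy_map(image_path: str):
    """Yield (offset, bits-per-byte) for each window of a raw disk image."""
    with open(image_path, "rb") as f:
        offset = 0
        while True:
            chunk = f.read(WINDOW)
            if not chunk:
                break
            yield offset, shannon_entropy(chunk)
            offset += len(chunk)

# A long, contiguous run of ~7.999 bits/byte windows, in space the outer
# filesystem never references, is the red flag described above.
for off, h in entropy_map("drive.img"):      # "drive.img" is hypothetical
    if h > 7.99:
        print(f"high-entropy window at offset {off}")
```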

In the past, hard drives allocated space from the beginning, and if there were holes at the beginning left over from small deleted files, large files that came along later filled those holes and ended up fragmented. But it was compact: if I had a 100 meg drive that only ever had 20 meg written to it at most, then the back 80 meg of that drive would be all zeros. When Linux's EXT2 came out, it was revolutionary in that it was very fragmentation resistant. It would actively look throughout the drive for unfragmented areas large enough for the full file and allocate them there. It didn't allocate the old way; it allocated more efficiently, with the consequence that you end up with files spread out over your whole partition, and your drive didn't fill up front-to-back any more. Nobody really cares about that, except for obsessive defrag peeps. Microsoft hadn't really innovated in its filesystems in a long time when this came out, and they had some catching up to do. But they did. MS's filesystems themselves didn't change a whole lot, but their implementations did: they changed their allocation strategy to make even old FAT filesystems much more resistant to fragmentation, allocating more like EXT2 does, by spreading things out throughout the drive. So even on a drive whose total capacity exceeds by a fair amount the largest amount of space you've ever used, files tend to creep all over the partition.

But this never happens with hidden containers. If I have a hard drive with an inner container, the area where that container is will never have had any file fragments in it from old files. No deleted file will point into that area to say "I used to live there". No filesystem structure will be there. The OS won't have the swap file there. There will be no journal, no metafiles, no old directories, nothing. An NTFS analyzer will look at the filesystem and it will just be a big hole where nothing ever lives or points to and where nothing ever has lived. A big hole of what happens to be cryptographically random data that happens to be on a hard drive with software that specifically supports "plausible deniability".
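In the same spirit, here is a crude sketch of the "nothing has ever lived there" check: carve a suspect byte range of a raw image for a handful of common file signatures. The signature list, path and offsets are illustrative, and a real examiner would parse the MFT, journal and deleted directory entries rather than just carving, but a large region with zero recognizable fragments is exactly the hole described above.

```python
# Sketch: count recognizable file-signature hits inside one byte range of a
# raw image. Signatures spanning a chunk boundary are ignored in this sketch.
MAGIC = {
    b"\xff\xd8\xff": "JPEG",
    b"\x89PNG":      "PNG",
    b"PK\x03\x04":   "ZIP/Office",
    b"%PDF":         "PDF",
}

def carve_region(image_path: str, start: int, length: int,
                 chunk: int = 1024 * 1024) -> int:
    hits = 0
    with open(image_path, "rb") as f:
        f.seek(start)
        remaining = length
        while remaining > 0:
            data = f.read(min(chunk, remaining))
            if not data:
                break
            remaining -= len(data)
            for sig in MAGIC:
                hits += data.count(sig)
    return hits

# Zero hits across, say, the back 300 GiB of a heavily used drive would be
# very hard to explain as ordinary deleted-file leftovers.
print(carve_region("drive.img", start=200 * 1024**3, length=300 * 1024**3))
```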

Contrary to popular belief (and a large number of articles on magnetic force microscopy), modern drives are not easily scanned for "old" data. When you write a sector on a modern drive, you can't see what was there before. But there are methods of determining how recently a portion of your drive has been written to. Maybe not to within hours or days, but certainly, with high probability, you can tell that one part of the hard drive has been a) written to more often, and b) written to more recently than another part. So now you have a part of your hard drive that no outer-partition filesystem structure has ever been written to, but where you can tell that the data is fresh, it's cryptographically random, and it's on a computer that has "plausible deniability" inner-partition software.

If you are ever in a situation where "they" are interested enough in your data to ask for the key, then a nested container will not save you from scrutiny.

I just hope you're not flying to Egypt with that laptop.
Dec 5, 2015 at 2:13 PM
I think you missed a small but important point in my statement. I never said a random-data drive would look like a newly bought disk, but it does look like a disk that was overwritten with random data. So it is not possible to distinguish whether the drive is encrypted or was simply overwritten with random data.

Regarding the inner & outer volumes (which is the normal one and which is the hidden one? I'm not used to that terminology): the question is how the drive is formatted. I expect that the volume creation process (at least if it is not in-place encryption) writes random data to the entire normal volume. So any place on that drive that has never been used by the filesystem looks like random data by default, and so does the hidden volume within the normal volume. The only argument left would be that the normal volume looks "too clean" (no files have ever been deleted etc.), but this is typical for archive storage. So you put some of your ripped DVDs there for backup purposes.

Regarding legal myths:
You cannot technically prevent someone who wants to bully you from doing so. If a border control officer in some weird country does not want you to enter the country, he will find a reason. If the law finds you guilty for no reason, that is despotism (I could not find a better translation).