
detection of hidden volumes bug query

Topics: Technical Issues
Aug 20, 2016 at 3:44 AM
Hi, I'm a bit confused by the newest VeraCrypt 1.18a release notes, which say it has now fixed a vulnerability that allowed hidden volumes to be detected.

So do we need to update to 1.18a and re-encrypt our hidden drives/partitions or perhaps change to a new password?

Appreciate any feedback, thanks.
Aug 20, 2016 at 4:36 PM

A little history first:

I was contacted at the end of July by Ivanov Aleksey, who informed me that he had found a way to detect the presence of hidden volumes in TrueCrypt volumes and that it also affects VeraCrypt. After many secure exchanges, he demonstrated the capability to detect hidden volumes with a rate near 100%.

Although he didn't share the details of his technique, which he doesn't want to make public, it was not difficult to work out what the cause might be: in TrueCrypt and in VeraCrypt before 1.18, as explained in the Volume Format Specification, volumes that don't contain a hidden volume have one header + random data, whereas volumes that contain a hidden volume have two headers + random data.

Normally this difference should not be a problem, because the headers are encrypted using keys derived from their respective passwords, and the random data is actually the result of encrypting zeroes with a temporary random key. Thanks to the properties of the XTS encryption mode, the encrypted data should look random to an attacker, who cannot learn anything about the format of the data without having the password.

But it appears that this assumption is not always true, and that it is at least possible to build a distinguisher able to detect whether the volume has one header (no hidden volume) or two headers (a hidden volume).

Luckily, there is an obvious way to protect against such an attack: volumes should always have two headers! When there is no hidden volume, we simply create a "fake" hidden volume header that uses a random master key. This way, the distinguisher mentioned above will always report that there is a hidden volume, without being able to say whether the hidden volume is a fake one or not.

This fix has been implemented in VeraCrypt 1.18a. It is not possible to apply the fix to existing volumes, so users who rely on the hidden volume feature must create new volumes using the latest version of VeraCrypt and discard the existing ones. Of course, such users must install/use only VeraCrypt 1.18a or above on their machines so that the plausible deniability associated with hidden volumes is preserved: if they are coerced into revealing the hidden volume password by an entity that has the distinguisher capability mentioned above, they can reply that they created the volume using VeraCrypt 1.18a or above, which writes a fake hidden volume header, and this can easily be proved by handing out the outer volume password.

So, to answer your question: if you rely on the plausible deniability of the hidden volume feature, you must update to version 1.18a on all your machines and re-create all your volumes (both outer and hidden) using this version. Changing the passwords is irrelevant to this issue, so you can keep them unchanged.
Marked as answer by Jaffray48 on 8/23/2016 at 7:58 AM
Aug 20, 2016 at 5:38 PM
Edited Aug 20, 2016 at 5:47 PM
Hello Mounir,

Can you update the release notes with a special note at the top telling users of hidden volumes that they need to recreate both their outer and inner volumes?

Is this specific to Windows, or do users on all three platforms need to rebuild their outer/inner volumes?

Does this apply to hidden OS?

Kind Regards.
Aug 20, 2016 at 7:34 PM
Follow-up question: Why do the users of hidden volumes need to recreate their outer/inner volumes which have two headers + random data?

Based on your explanation, it seems to me that people with only outer volumes are detectable as having only one header + random data. Hence, for plausible deniability, these users with only outer volumes would need to rebuild their outer volume so that it appears to have two headers + random data.

Or was the issue that hidden volumes did not use the same random-data method for the second header, which allowed detection?
Aug 21, 2016 at 12:20 PM
Thank you very much idrassi for the information, very helpful.
Aug 21, 2016 at 2:05 PM
Edited Aug 21, 2016 at 5:01 PM
The issue affects all platforms, not only Windows. I will update the Release Notes to indicate this and to advise users to recreate their volumes (both outer and hidden).
The hidden OS feature is also affected, since it relies on hidden volumes inside an encrypted data partition.

As for why users of hidden volumes need to recreate their outer/inner volumes: it is because of a change in the way the random data is created in 1.18 compared to previous versions. In TrueCrypt and VeraCrypt prior to 1.18, zeros were encrypted using a temporary key, but I'm now suspicious about this choice of zeros, so I decided to encrypt random data instead, using a temporary key, for better entropy of the resulting data.
Aug 21, 2016 at 3:16 PM
In TrueCrypt and VeraCrypt prior to 1.17,
I think you meant:
In TrueCrypt and VeraCrypt prior to 1.18,
Aug 21, 2016 at 4:34 PM
Edited Aug 21, 2016 at 4:35 PM
Does the Volume Format Specification need to be updated?

You talked about the hidden volume header area being changed from encrypted zeros to encrypted random data. What about the field at byte offset 92, "Size of hidden volume (set to zero in non-hidden volumes)"? Should this value be zero or encrypted random data, since this field could act as an indicator of a hidden volume's existence?
Aug 21, 2016 at 5:14 PM
Indeed, the second paragraph of the Volume Format Specification should be changed to reflect that we now use random bytes instead of zeros as input to the encryption of the unused header area.

As for the 8 bytes at offset 92, they are filled with a fake size when creating the fake hidden volume header, so there is no problem on this side, and the documentation doesn't need to be updated for this (the zero mentioned for this field in the documentation applies to the header of the outer volume, and there is no problem in setting zero in that case).
Aug 21, 2016 at 8:21 PM
Based on your statements, offset 92 is set to a fake size during the creation of the fake hidden volume header when no hidden volume is being used for the volume.

I think it would be clearer if the documentation for offset 92 was changed to:
Size of hidden volume (filled with fake size in non-hidden volumes)
However, I don't understand your following statement referencing the outer header for offset 92:
...the documentation doesn't need to be updated for this (the zeros mentioned for this field in the documentation is for the header of the outer volume and there is no problem in setting zero in this case).
Does offset 92 only represent the size of the hidden volume or does offset 92 serve dual purposes when hidden volume is not used?
Aug 21, 2016 at 9:41 PM
I think you are misunderstanding the documentation: as you can see, the hidden volume header located at offset 65536 is not described in detail, since it has the same format as a normal header; the reader is simply referred to the documentation for offsets 0-65536.

So the documentation for offset 92 applies both to the normal header that starts at offset 0 (in this case the field at offset 92 is 0) and to the hidden header that starts at offset 65536 (in this case the field at offset 65536 + 92 contains the size of the hidden volume).

This explains the confusion you had about offset 92 description.
Aug 21, 2016 at 11:00 PM
Does this vulnerability exist in external header backup files?
Aug 22, 2016 at 4:16 PM
Edited Aug 22, 2016 at 4:18 PM
I'm somewhat worried about the possibility of distinguishing encrypted data from uniformly distributed random data in general, because it could hint at some issue with the ciphers or their usage. If even a rather short block of encrypted zeros can be distinguished from random data, what about larger chunks of plaintext with distinctive patterns (e.g. a 1 GB file of zeros), or plaintext chosen by or known to the attacker? Could they also be detected in the ciphertext?

I think it would at least be worth investigating why a block of XTS-encrypted zeros can be distinguished from an XTS-encrypted header, or from random data.
Aug 22, 2016 at 10:17 PM

Hmm, you have a good point.
Aug 22, 2016 at 10:50 PM
Indeed, a deeper analysis by experts is needed, because XTS may be leaking some information after all.

But, in our case, we are not talking about one block of encrypted zeros but rather 255 blocks of encrypted zeros versus one block of encrypted header. So probably, there is some kind of leak here that is easy to spot.

Now I think the choice in TrueCrypt of encrypting 255 blocks of zeros was unfortunate, and random bytes should have been used instead. Moreover, it is always good to maintain a uniform data structure to protect against unknown leakage, so writing a fake hidden header for normal volumes is a good way to ensure that an attacker can't build a statistical distinguisher for hidden volumes.

Anyway, it is always easy to comment after a weakness has been pointed out! Although it is the de facto standard, XTS seems hard to prove secure compared to its ancestor XEX. Others are more competent than me to go down this road.
Aug 22, 2016 at 10:51 PM

In version 1.17, the header backup saves only one header if there is no hidden volume.

For versions before 1.18, this situation can be distinguished from a backup that contains a hidden volume header.

In our situation the problem is simpler. We can always write a fake hidden volume header.

Note: it is possible to check the statistical smoothness of XTS output against random data, for example with the ent tool.
Aug 23, 2016 at 10:35 AM
Edited Aug 23, 2016 at 10:35 AM
Thanks a lot for your replies!
But, in our case, we are not talking about one block of encrypted zeros but rather 255 blocks of encrypted zeros versus one block of encrypted header.
I think I don't understand ... aren't both the hidden volume header and the random data (when no hidden volume header is present) supposed to be 65536 bytes long? And doesn't the 'real' header also contain a bunch of 'reserved' fields which must contain zeros?

Anyway, if XTS, or the way it is used as a PRNG (encrypting zeros with a temporary key), somehow leaks information about the plaintext: apart from other possible cryptographic weaknesses, according to the spec not only was the hidden volume header area filled with encrypted zeros before 1.18a, but the free space of the outer volume still is. If this pseudo-random data can be distinguished from other XTS-encrypted data, I'd expect it to open up another way of detecting hidden volumes: if the free space of a volume can be classified as not consisting of encrypted zero blocks, a hidden volume is present.

But this thread might not be the right place to discuss the implications of this on the cryptographic properties of XTS and matters arising ... sorry for hijacking.
In our situation the problem is simpler. We can always write a fake hidden volume header.
Sure! I'm just afraid this might possibly hide an underlying, more severe cryptographic issue.
Aug 23, 2016 at 10:54 AM

If you think XTS is broken, it would have much bigger implications, because in that case it would be possible to get information about the plaintext from the ciphertext, which would call the whole of disk encryption into question. And of course a hidden volume would then be detectable even without the header if it contains some low-entropy data (normally the case), because the ciphertext of that data would differ significantly from the random bytes in the empty part of the outer container.

So there are only two possibilities: either XTS encryption of zero blocks with a random key wasn't the reason the hidden volume was detectable, in which case the fix will not fix the issue; or there is a really big problem with XTS, impacting the core of VeraCrypt and other disk encryption software.
Aug 23, 2016 at 12:55 PM
XTS is OK for now (we have no other information).

The problem is the following: different types of PRNG generate numbers a little differently. It is possible to detect the difference and make a decision about the hidden volume.

Probably, if the outer volume and inner volume use the same cipher (only with different keys), it is impossible to detect. We need investigation in this area.
Aug 23, 2016 at 1:55 PM
kavsrf wrote:
The problem is the following: different types of PRNG generate numbers a little differently. It is possible to detect the difference and make a decision about the hidden volume.
No. If we look at the fix (it's enough to understand Format.c), we see that the only thing it does is write a fake header into the hidden header area if there is no hidden volume. This is done after calling WriteRandomDataToReservedHeaderAreas(), which was the only function called to initialize the header areas before the change. The definition of that function in Volumes.c shows that it creates random keys, reads the data from the header area (maybe zeros), calls EncryptBuffer() to do the XTS encryption, and writes the encrypted data back to the header area.

So if this really is the reason a hidden volume was detectable, it would mean that the ciphertext of an XTS-encrypted header differs significantly from the ciphertext of other XTS-encrypted data (perhaps with very low entropy). Of course that would be a major flaw in XTS.
Aug 23, 2016 at 2:16 PM
Encryption and PRNGs are comparable in nature.

To investigate the problem we need to create a special tool.
Aug 23, 2016 at 2:37 PM
A good point Trimster.

This is starting to concern me. I guess there have been no changes to how XTS works in VeraCrypt compared to the old TrueCrypt? If so, I wonder why the TrueCrypt audit (which also missed other things) did not notice this?

The paranoid part of me wonders whether this is why the governments of the world allowed the old TrueCrypt and the new VeraCrypt to exist: have they been weakened at a low level?

Just to make my position very clear, I do not suspect Mounir of any wrongdoing; in fact he is my crypto hero. I also do not suspect any of the other VeraCrypt developers.

I hope we humble users have misunderstood something about this issue, because I doubt Mounir would not have immediately considered a problem with XTS if its output were in any way representative of a given input. I believe Mounir is just too smart not to jump on it instantly. For this reason alone, I am slightly more reassured.

I am confident that, now the issue has been reported, a clever crypto geek somewhere in the world will thoroughly check the XTS implementation for us. Until then, let's be optimistically cautious and wish Mounir and his team good luck in investigating!
Aug 23, 2016 at 9:41 PM
VeraCrypt uses the same XTS engine as TrueCrypt, otherwise compatibility would not have been possible.
When I received the report, TrueCrypt was mentioned as the primary target with VeraCrypt affected because it uses the same code as TrueCrypt.

The confusion comes from the fact that I consider as the header only the first 512 bytes, which contain the significant parts of the global header as shown in the documentation. Maybe I should have used the term "effective header". So we have: header = effective header + random data.
Since a block is 512 bytes, hence my statement that we have 255 blocks of encrypted zeros versus one block of encrypted "effective" header.

Good point about the free space of outer volume. This requires further analysis.

I don't think that XTS is broken as an encryption scheme; only a limited entropy leak may occur in special cases. My personal opinion is that the presence of 255 consecutive blocks of zeros encrypted by XTS, next to a single block different from zero, is the key point behind this issue, and that it somehow enables a statistical distinguisher for this special case.
That's why I have made an additional modification after the fake header fix, in order to encrypt random data instead of zeros. (This is not done in the case of in-place encryption, to retain the original content of the disk when decrypted.)

Any further ideas or comments are welcome. XTS has always been a complex subject to study!
Aug 23, 2016 at 10:26 PM
Thank you for your work on this Mounir.

I am not an expert, so please forgive my ignorance; however, I have done a lot of reading today. I have little knowledge of this subject, but my own, very simplistic research suggests there should never be a hint of the plaintext piped into XTS, even if it is just zeros.

I am concerned some users may wipe the free space of their containers with zeros. I guess doing so may now give an attacker an indication of how full an encrypted volume is. Not necessarily a security risk in itself, but we do not know whether that could be harmful to a VeraCrypt user.

I guess we need an XTS specialist to make some checks and recommendations before word of this thread hits other public security forums. If changes are required to VeraCrypt's implementation of XTS, then I think they need to be made, even if it breaks backwards compatibility.

Please keep up the good work Mounir, we trust you will make the right decisions.
Aug 23, 2016 at 10:41 PM
Maybe it would help to get some more information: you've written that Aleksey was able to demonstrate the detection of hidden volumes with a rate near 100%.

Was this independent of the selected encryption algorithm or cascade? Was it independent of the file system selected for the outer volume? Was there any difference between a brand-new blank outer volume and one to which some files had already been written?
Aug 24, 2016 at 1:20 AM
idrassi wrote:
That's why I have made an additional modification after the fake header fix, in order to encrypt random data instead of zeros. (This is not done in the case of in-place encryption, to retain the original content of the disk when decrypted.)
Does VeraCrypt write the fake hidden header once the in-place encryption has completed successfully?

Volumes.c -> skips writing the fake hidden header for in-place encryption:

    if (backupHeaders || !bInPlaceEnc)
        // encrypt random data instead of existing data for better entropy, except in case of primary
        // header of an in-place encrypted disk
        RandgetBytes (hwndDlg, buf + TC_VOLUME_HEADER_EFFECTIVE_SIZE, sizeof (buf) - TC_VOLUME_HEADER_EFFECTIVE_SIZE, FALSE);

InPlace.c -> does this write to the fake hidden header area after in-place encryption is completed, at lines 613 and 1125?

    // write fake hidden volume header to protect against attacks that use statistical entropy
    // analysis to detect presence of hidden volumes
Aug 29, 2016 at 11:37 PM
The fake hidden header is always written, even for in-place encryption.
What is skipped is the use of random data as input to the encryption of the unused header space.
But actually this skipping is not needed, and I will use random data even for in-place encryption (not to be confused with the fake hidden header).

Only volumes using AES were tested; I don't know whether his detection method applies to other encryption algorithms. The detection only used the first 512 of the volume header, so it is independent of the file system used or the content.

To continue on this discussion, I want to share a research paper that was published in the 2014 SBSEG conference:
In this paper, the authors show that it is possible to distinguish the output of XTS (and other modes) using any n-bit block cipher from a random permutation:
We show distinguish-from-random attacks for any n-bit block cipher in the standard modes of operation for confidentiality: ECB, CBC, CFB, OFB, CTR and XTS. We demonstrate that in all these 1-pass modes any n-bit block cipher leaves 'footprints' that allows an adversary to efficiently (in time and memory) distinguish them from a random permutation.
This is an important result which doesn't assume any knowledge about the format of the plaintext. So adding knowledge about the format of the header and the presence of large blocks of zeros can only make such 'footprints' easier to find, and more importantly it would help build a distinguisher for the presence of hidden volumes.

It would be good if researchers of this kind spent time looking at the specific case of TrueCrypt/VeraCrypt header detection... they would certainly come up with a mathematical framework that could help implement detection tools and, in this way, help confirm the correctness of the fix.
Aug 31, 2016 at 12:07 PM
Edited Aug 31, 2016 at 12:08 PM
idrassi wrote:
Only volumes using AES were tested; I don't know whether his detection method applies to other encryption algorithms. The detection only used the first 512 of the volume header, so it is independent of the file system used or the content.
512 of what?
I guess you meant the first 256 disk blocks of 512 bytes each (better to call them disk sectors, since they can be confused with AES cipher blocks of 16 bytes).
=> 256 sectors × 512 bytes = 2 × (1 header sector + 127 random sectors) × 512 bytes = 2 × 65536 bytes = 131072 bytes
Is this assumption correct?

Can you please elaborate more on the exact test circumstances?
How do you know for sure that the detection is based only on the first 256 sectors?
Sep 1, 2016 at 1:47 PM

I've done some research myself and had already found and read this paper, but in my opinion there is nothing usable in there. They describe an adaptive attack with modification of the ciphertext, where they need the decrypted result after the modification in order to distinguish between the ciphers or randomness, respectively.

If the first 512 bytes are enough, I can imagine something else as well (I had already considered this possibility): what if the salt can be distinguished from other encrypted content? The salt is the unencrypted output of the random generator, which utilizes a hash function. Maybe it differs in some way from encrypted AES blocks. So if there is some kind of difference between the 'randomness' of the first 64 bytes and that of the remaining 448 bytes, it is the header of a (hidden) volume. If there is no difference, it's a block of empty space and not a hidden volume header (before you fixed it to always write a fake hidden volume header).

I've done some tests with entropy, arithmetic mean, chi-square but didn't find any usable oracle yet. But if there is such an oracle, it could potentially point out a flaw in the random generator. So even if it isn't usable any more after the fix, it could be still important to know.