Tangentially related: why would a password manager expose a configurable iteration count? This is a number whose purpose is hard to understand for many people, and yet it's an important cornerstone of password security, especially for those users who don't grasp what an iteration count does.
This should absolutely be application managed and gradually increased over time.
Also: while I understand that FIPS is the reason we are stuck with PBKDF2 in the case of the more enterprisey password managers, wouldn't it still be FIPS compliant to do some scrypt or argon rounds on top, as a means of not constantly having to update the PBKDF2 iteration count (assuming that scrypt and argon are more resilient to hardware brute-forcing)?
> This should absolutely be application managed and gradually increased over time.
One issue I could see with that: because it's the encryption key, bumping it is going to lock out all your "live" devices, so an explicit step is an easy opportunity to warn them.
The second issue is that the re-encryption would have to be done on login, which is pretty shit UX: the user logs in, then immediately gets locked out for however long it takes to convert the store (then again, for most people I'd assume the payload is not enormous).
> assuming that scrypt and argon are more resilient to hardware brute-forcing
They are, but the need to update the work factor as hardware progresses remains. In fact, scrypt and Argon2 have more work-factor knobs than PBKDF2, which only has the iteration count.
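To make the knob comparison concrete, here's a minimal sketch using Python's stdlib `hashlib` (the parameter values are illustrative, not recommendations):

```python
import hashlib

password, salt = b"correct horse battery staple", b"\x00" * 16

# PBKDF2 exposes exactly one work-factor knob: the iteration count.
k_pbkdf2 = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=600_000)

# scrypt exposes three: n (CPU/memory cost), r (block size), p (parallelism).
# Argon2 (not in the stdlib) similarly has time, memory, and parallelism costs.
k_scrypt = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
```

More knobs mean more ways to keep the function expensive on specialized hardware, but also more parameters someone has to keep current.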
> One issue I could see with that: because it's the encryption key, bumping it is going to lock out all your "live" devices, so an explicit step is an easy opportunity to warn them.
How so? The iteration count must be part of the unencrypted portion of the vault data. If a client is offline, it will use its locally stored vault with the old (lower) iteration count. If it's online, it will have the updated vault with the higher iteration count.
> The second issue is that the re-encryption would have to be done on login, which is pretty shit UX: the user logs in, then immediately gets locked out for however long it takes to convert the store (then again, for most people I'd assume the payload is not enormous).
You could do this asynchronously in the background: decrypt the vault, keep it in memory (which all password managers do for some amount of time in order to provide any UI), re-encrypt, store to disk, and send the blob (which will contain the iteration count unencrypted) to the server.
But this is the complicated case where the vault is re-keyed. What would be totally sufficient is to re-encrypt the same vault key with a new key derived from the same password, only with more rounds. That way the bulk of the vault blob won't change; only the password-derived key and the iteration count do.
If any of this happens simultaneously on multiple machines, treat it the same way as you already treat editing conflicts (I'm not offering guidance there - this is a hard problem that each cloud provider is already solving one way or another).
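A sketch of that cheap re-key in stdlib Python (all names are made up for illustration, and the XOR "wrap" is a placeholder for a real authenticated key wrap such as AES-KW; don't ship XOR):

```python
import hashlib
import secrets

def wrap_vault_key(vault_key: bytes, password: bytes,
                   salt: bytes, iterations: int) -> bytes:
    # Derive a key-encryption key from the password, then "wrap" the vault key.
    # XOR here is illustration only: it is unauthenticated toy crypto.
    kek = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return bytes(a ^ b for a, b in zip(vault_key, kek))

unwrap_vault_key = wrap_vault_key  # XOR wrapping is its own inverse

vault_key = secrets.token_bytes(32)          # encrypts the actual vault blob
password, salt = b"hunter2", secrets.token_bytes(16)

# Bumping the count re-wraps only these 32 bytes; the vault blob is untouched.
old_wrapped = wrap_vault_key(vault_key, password, salt, 100_000)
new_wrapped = wrap_vault_key(vault_key, password, salt, 600_000)
```

The server only ever sees the wrapped key, the salt, and the (plaintext) iteration count, so the bump is a tiny upload rather than a full vault re-encryption.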
> They are, but the need to update the work factor as hardware progresses remains. In fact, scrypt and Argon2 have more work-factor knobs than PBKDF2, which only has the iteration count.
Given the current state of the art, I think with these two algorithms that bump would need to happen significantly less often than with PBKDF2. So if there really is something that forces the UI to re-lock immediately after unlock, as you think (and I'm not sure about), then having Argon2 or scrypt in the loop means more time between those bouts of shitty UX.
> How so? The iteration count must be part of the unencrypted portion of the vault data. If a client is offline, it will use its locally stored vault with the old (lower) iteration count. If it's online, it will have the updated vault with the higher iteration count.
The iteration count affects the encryption key, and Bitwarden has neither the old encryption key nor the actual password from which to derive the new one.
So the vault has to be updated at the first device connection after the iteration count is bumped, and every other device will have to derive the new encryption key and log back in.
So it would log other devices out, but not the device you're currently looking at. I think that's still acceptable behavior compared to leaving people stuck with iteration counts of 500 or even 1, as we saw with LastPass.
Benchmark it once on each device. Then have a user-friendly slider.
"Do you want your security to be:"
a) "It only secures pr0n from my aunt" (1s for fetching a password)
b) "Not great, not terrible" (5s for fetching a password)
c) "Pretty Good Protectivity" (10s for fetching a password)
d) "The CIA haunts me and my name is Edward" (24 minutes for fetching a password)
And even then, there should be no way to offer a completely insecure iteration count just because one of the user's devices is slow, because the attacker's devices won't be.
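The benchmark-then-slider idea could look roughly like this (a hypothetical helper in stdlib Python; a real product would average several runs and pick its own floor, per the point above):

```python
import hashlib
import time

# Never offer fewer iterations than this, however slow the device (assumed floor).
SAFE_MINIMUM = 600_000

def calibrate_iterations(target_seconds: float, sample_iters: int = 50_000) -> int:
    """Estimate the PBKDF2 iteration count that takes ~target_seconds here."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"benchmark", b"\x00" * 16, sample_iters)
    elapsed = time.perf_counter() - start
    # Scale the sample linearly to the target time, clamped to the floor.
    return max(SAFE_MINIMUM, int(sample_iters * target_seconds / elapsed))

# e.g. slider option b) "Not great, not terrible": ~5s per unlock
# iterations = calibrate_iterations(5.0)
```

The clamp is what keeps a slow device from dragging everyone down to an insecure count.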
Even on slow devices, password managers can employ techniques to help, like using the full round count only for a cold start, then re-encrypting the vault key with fewer rounds and keeping that copy strictly local.
That way a user of a very slow device only needs to wait for, say, 10s once, on first unlock.
While this is a downgrade in security, it's still better, because the key with the small iteration count is confined to the one device, not available on the server where an attacker can get bulk access.
Of course, devices where 1M PBKDF2 iterations take noticeably long are probably also old enough to be full of unpatched (due to EOL) security holes, which makes them the weakest link anyway. But this would still be a better situation, because not all users are punished for one user's slow device.
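A minimal sketch of that two-copy scheme (stdlib Python; the XOR wrap stands in for a real authenticated key wrap, and the counts and names are illustrative):

```python
import hashlib
import secrets

def wrap(vault_key: bytes, password: bytes, salt: bytes, iterations: int) -> bytes:
    # Placeholder key wrap: XOR with a PBKDF2-derived key. Illustration only,
    # not real, authenticated cryptography.
    kek = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return bytes(a ^ b for a, b in zip(vault_key, kek))

vault_key = secrets.token_bytes(32)
password = b"hunter2"
server_salt, local_salt = secrets.token_bytes(16), secrets.token_bytes(16)

# Synced copy: full work factor; this is the only wrap the server ever sees.
server_copy = wrap(vault_key, password, server_salt, 1_000_000)

# Local cache: fewer rounds for a fast warm unlock; never leaves the device.
local_copy = wrap(vault_key, password, local_salt, 50_000)
```

An attacker who breaches the server only gets the high-work-factor wrap; the cheap one exists solely on the slow device.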
KeePassXC does this the right way: you're not picking the "iteration count" (which is hard enough to understand even for someone relatively technically inclined) but the time it takes to open the database. The default is 1 second, with a minimum of 100 ms.
"Higher values offer more protection, but opening the database will take longer".
I highly recommend KeePassXC to everyone instead of these password-solutions-of-the-day that come and go so fast it's hard to remember them all.
Recommend away, but ease of use matters. I can use, and am comfortable with, KeePassXC, but there is no way in he* my wife, daughter, or parents would be. It was hard enough getting them used to using BitWarden.
One issue with "gradually increasing over time" is that, without a master password change, the old hashes are still "out there" and potentially available for inspection.
I don't think "on top" gets explicit guidance, but it's also almost never needed. In this case for example scrypt and yescrypt are fine to use directly. NIST has had a strong leaning toward memory-hard functions for coming up on six years now. See §5.1.1.2 in https://pages.nist.gov/800-63-3/sp800-63b.html#sec5.
I'm not familiar with FIPS. Is this NIST document part of the requirements outlined in the FIPS publications? According to 20 seconds on Google, those appear to be numbered 140, 180, 186, 197, 198, 199, 200, 201, and 202.
Password hashing is controlled by NIST SP 800-63B, not FIPS, but FIPS supplies the approved primitives. When NIST says:
> The key derivation function SHALL use an approved one-way function such as Keyed Hash Message Authentication Code (HMAC) [FIPS 198-1], any approved hash function in SP 800-107, Secure Hash Algorithm 3 (SHA-3) [FIPS 202], CMAC [SP 800-38B] or Keccak Message Authentication Code (KMAC), Customizable SHAKE (cSHAKE), or ParallelHash [SP 800-185].
Those options are authorized by FIPS. The main consequence of this is that there are FIPS-validated implementations available, which are what you want if you're selling to the government.
You are quite right. Open source software is always riddled with complicated and unintuitive UX like this. It's created by developers, for developers. It's only when product owners, designers, and commercial managers get involved that the UX starts making sense.
"But Apple is trying to control me!" screams every dev who doesn't understand this, not realizing they are signaling their inability to empathize with normal end users and misattributing why Apple does what it does.
Apple gets it, devs don't. In this space, 1Password is least worst, yet is still more confusing than the average user quite understands.
Apple isn't one of the most valuable companies because they want to control you. They're one of the most valuable companies because their engineers blend software and hardware into experiences for end users, not for engineers.
So much more software would be so much more successful if usability and adoption were as prioritized as utility and configurability.
I have 524 passwords in my vault. That's a list large enough to require a big sheet of paper that's very inconvenient to carry around. I'm also changing passwords often enough that keeping track of which password I changed on which machine, carrying just the deltas with me, and updating my hand-written lists manually would be way too inconvenient.
Syncing password managers solve all these issues, and, if they do encryption right, nothing bad can happen even if they are hacked and an attacker gains access to my encrypted data.
The "doing it right" part is why I was asking my questions with regard to PBKDF2.