



Is "double hashing" a password less secure than just hashing it once?

Hashing a password once is insecure

No, multiple hashes are not less secure; they are an essential part of secure password use.

Iterating the hash increases the time it takes an attacker to try each password in their list of candidates. You can easily increase the time it takes to attack a password from hours to years.

Simple iteration is not enough

Merely chaining hash output to input isn't sufficient for security. The iterations should take place in the context of an algorithm that preserves the entropy of the password. Luckily, there are several published algorithms that have had enough scrutiny to give confidence in their design.

A good key derivation algorithm like PBKDF2 feeds the password into each round of hashing, mitigating concerns about collisions in hash output. PBKDF2 can be used for password authentication as-is. Bcrypt follows the key derivation with an encryption step; that way, if a fast way to reverse the key derivation is discovered, an attacker still has to complete a known-plaintext attack.
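
As an illustration, PHP ships a native PBKDF2 implementation, hash_pbkdf2() (PHP 5.5+). A minimal sketch, where $password stands for the user's input and the iteration count and output length are example values rather than recommendations:

// Derive a verifier from the password using PBKDF2 with SHA-256 (sketch).
$salt = random_bytes(16);     // unique random salt per user
$iterations = 100000;         // tune to your hardware (see below)
$hash = hash_pbkdf2('sha256', $password, $salt, $iterations, 32, true);

// Store $salt, $iterations, and $hash together. To verify a login attempt,
// recompute with the stored salt and iterations, then compare with hash_equals().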

How a password is broken

Stored passwords need to be protected from an offline attack. If passwords aren't salted, they can be broken with a pre-computed dictionary attack (for example, using a rainbow table). Otherwise, the attacker must spend time computing a hash for each password and checking whether it matches the stored hash.

All passwords are not equally likely. Attackers might exhaustively search all short passwords, but they know that their chance of brute-force success drops sharply with each additional character. Instead, they use an ordered list of the most likely passwords. They start with "password123" and progress to less frequently used passwords.

Let's say an attacker's list is long, with 10 billion candidates; suppose also that a desktop system can compute 1 million hashes per second. The attacker can test her whole list in less than three hours if only one iteration is used. But if just 2,000 iterations are used, that time extends to almost 8 months. To defeat a more sophisticated attacker (one capable of downloading a program that can tap the power of their GPU, for example), you need more iterations.
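
For reference, here is that back-of-the-envelope arithmetic as a small PHP snippet; the candidate count and hash rate are the assumed figures from the paragraph above:

$candidates   = 1e10; // 10 billion candidate passwords
$hashesPerSec = 1e6;  // 1 million hashes per second on a desktop

// With one iteration: 1e10 / 1e6 = 10,000 seconds, i.e. under 3 hours.
echo $candidates / $hashesPerSec / 3600, " hours\n";               // ~2.8

// With 2,000 iterations, the same search takes 2,000 times as long.
echo $candidates * 2000 / $hashesPerSec / 86400 / 30, " months\n"; // ~7.7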

How much is enough?

The number of iterations to use is a trade-off between security and user experience. Specialized hardware that attackers can use is cheap, yet it can still perform hundreds of millions of iterations per second. The performance of the attacker's system determines how long it takes to break a password given a number of iterations. But your application is not likely to use that specialized hardware. How many iterations you can perform without aggravating users depends on your system.

You can probably let users wait about 3/4 of a second or so during authentication. Profile your target platform and use as many iterations as you can afford. Platforms I've tested (a single user on a mobile device, or many users on a server platform) can comfortably support PBKDF2 with between 60,000 and 120,000 iterations, or bcrypt with a cost factor of 12 or 13.
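
In PHP, a bcrypt cost factor in that range maps directly onto the native password API (PHP 5.5+); a minimal sketch, with the cost value taken from the range above:

// Hash with bcrypt at cost 12; password_hash() generates its own random salt.
$stored = password_hash($password, PASSWORD_BCRYPT, ['cost' => 12]);

// Later, at login:
if (password_verify($attempt, $stored)) {
    // authenticated
}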

More background

Read PKCS #5 for authoritative information on the role of salt and iterations in hashing. Even though PBKDF2 was meant for generating encryption keys from passwords, it works well as a one-way hash for password verification. Each iteration of bcrypt is more expensive than a SHA-2 hash, so you can use fewer iterations, but the idea is the same. Bcrypt also goes a step beyond most PBKDF2-based solutions by using the derived key to encrypt a well-known plaintext. The resulting ciphertext is stored as the "hash", along with some metadata. However, nothing stops you from doing the same thing with PBKDF2.

Here are other answers I've written on this topic:

Is hashing a password twice before storage any more or less secure than just hashing it once?

What I'm talking about is doing this:

$hashed_password = hash('sha256', hash('sha256', $plaintext_password)); // any hash function; 'sha256' shown only to make the call valid PHP

Instead of just this:

$hashed_password = hash('sha256', $plaintext_password);

If it is less secure, can you provide a good explanation (or a link to one)?

Also, does the hash function used make a difference? Does it make any difference if you mix md5 and sha1, for example, rather than repeating the same hash function?

Note 1: When I say "double hashing" I mean hashing a password twice in an attempt to make it more obscured. I'm not talking about the double-hashing technique for resolving collisions in hash tables.

Note 2: I know I need to add a random salt to really make it secure. The question is whether hashing twice with the same algorithm helps or hurts the hash.


Yes.

Absolutely do not use multiple iterations of a conventional hash function, like md5(md5(md5(password))). At best you will be getting a marginal increase in security (a scheme like this offers hardly any protection against a GPU attack; just pipeline it). At worst, you're reducing your hash space (and thus security) with every iteration you add. In security, it's wise to assume the worst.

Do use a password hash that's been designed by a competent cryptographer to be an effective password hash, and resistant to both brute-force and time-space attacks. These include bcrypt, scrypt, and in some situations PBKDF2. The glibc SHA-256-based hash is also acceptable.


Double hashing is ugly because it's more than likely an attacker has built a table to come up with most hashes. Better is to salt your hashes and mix hashes together. There are also newer schemes to "sign" hashes (basically salting), but in a more secure manner.
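
For the salting part, a minimal PHP sketch (the variable names are illustrative; random_bytes() requires PHP 7+):

// Generate a unique random salt per user and store it next to the hash.
$salt = bin2hex(random_bytes(16));
$hash = hash('sha256', $salt . $password);

// Verification recomputes hash('sha256', $salt . $attempt) with the stored
// salt and compares the result using hash_equals().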


Double hashing makes sense to me only if I hash the password on the client, and then save the hash (with a different salt) of that hash on the server.

That way even if someone hacked his way into the server (bypassing the safety SSL provides), he still can't get to the clear passwords.

Yes, he will have the data required to break into the system, but he wouldn't be able to use that data to compromise outside accounts the user has. And people are known to use the same password for virtually everything.

The only way he could get to the clear passwords is by installing a keylogger on the client - and that's not your problem anymore.

In short:

  1. The first hashing on the client protects your users in a 'server breach' scenario.
  2. The second hashing on the server serves to protect your system if someone gets hold of your database backup, so he can't use those passwords to connect to your services (a sketch of this scheme follows below).
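
A minimal sketch of that two-step scheme, with both halves shown in PHP for consistency with the rest of the thread (in practice the first step would run in the browser, and all names here are illustrative):

// Step 1, client side: hash before transmitting, salted with something
// stable such as the username so the client hash is deterministic.
$clientHash = hash('sha256', $username . $password);

// Step 2, server side: never store the client hash directly; re-hash it
// with bcrypt and its own random salt.
$stored = password_hash($clientHash, PASSWORD_BCRYPT, ['cost' => 12]);

// Verification on the server:
$ok = password_verify($clientHash, $stored);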

I just look at this from a practical standpoint. What is the hacker after? Why, the combination of characters that, when put through the hash function, generates the desired hash.

You are only saving the last hash, therefore, the hacker only has to bruteforce one hash. Assuming you have roughly the same odds of stumbling across the desired hash with each bruteforce step, the number of hashes is irrelevant. You could do a million hash iterations, and it would not increase or reduce security one bit, since at the end of the line there's still only one hash to break, and the odds of breaking it are the same as any hash.

Maybe the previous posters think that the input is relevant; it's not. As long as whatever you put into the hash function generates the desired hash, it will get you through, correct input or incorrect input.

Now, rainbow tables are another story. Since a rainbow table only carries raw passwords, hashing twice may be a good security measure, because a rainbow table that contains every hash of every hash would be too large.

Of course, I'm only considering the example the OP gave, where it's just a plain-text password being hashed. If you include the username or a salt in the hash, it's a different story; hashing twice is entirely unnecessary, since the rainbow table would already be too large to be practical and contain the right hash.

Anyway, not a security expert here, but that's just what I've figured from my experience.


I'm going to go out on a limb and say it's more secure in certain circumstances... don't downvote me yet though!

From a mathematical / cryptographical point of view, it's less secure, for reasons that I'm sure someone else will give you a clearer explanation of than I could.

However , there exist large databases of MD5 hashes, which are more likely to contain the "password" text than the MD5 of it. So by double-hashing you're reducing the effectiveness of those databases.

Of course, if you use a salt then this advantage (disadvantage?) goes away.


Let us assume you use the hashing algorithm: compute rot13, take the first 10 characters. If you do that twice (or even 2000 times) it is possible to make a function that is faster, but which gives the same result (namely just take the first 10 chars).
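
That collapse is easy to demonstrate with PHP's built-in str_rot13() (a toy illustration of the point, not a real hash):

// Toy "hash": rot13, then keep the first 10 characters.
function toy_hash(string $s): string {
    return substr(str_rot13($s), 0, 10);
}

$input = "correct horse battery staple";

// Applying the toy hash twice is exactly equivalent to the much faster
// substr($input, 0, 10), because rot13 is its own inverse.
var_dump(toy_hash(toy_hash($input)) === substr($input, 0, 10)); // bool(true)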

Likewise it may be possible to make a faster function that gives the same output as a repeated hashing function. So your choice of hashing function is very important: as with the rot13 example it is not given that repeated hashing will improve security. If there is no research saying that the algorithm is designed for recursive use, then it is safer to assume that it will not give you added protection.

That said: For all but the simplest hashing functions it will most likely take cryptography experts to compute the faster functions, so if you are guarding against attackers that do not have access to cryptography experts it is probably safer in practice to use a repeated hashing function.


Most answers are by people without a background in cryptography or security, and they are wrong. Use a salt, if possible unique per record. MD5/SHA/etc. are too fast, the opposite of what you want. PBKDF2 and bcrypt are slower (which is good) but can be defeated with ASICs/FPGAs/GPUs (very affordable nowadays). So a memory-hard algorithm is needed: enter scrypt.
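
PHP's standard hash extension has no scrypt, but the bundled sodium extension (PHP 7.2+) exposes a memory-hard scrypt variant; a minimal sketch, assuming ext/sodium is enabled:

// Hash with scrypt (the salsa208/sha256 variant exposed by libsodium).
$stored = sodium_crypto_pwhash_scryptsalsa208sha256_str(
    $password,
    SODIUM_CRYPTO_PWHASH_SCRYPTSALSA208SHA256_OPSLIMIT_INTERACTIVE,
    SODIUM_CRYPTO_PWHASH_SCRYPTSALSA208SHA256_MEMLIMIT_INTERACTIVE
);

// Verify a login attempt against the stored string.
$ok = sodium_crypto_pwhash_scryptsalsa208sha256_str_verify($stored, $attempt);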

Here's a layman's explanation of salts and speed (but not of memory-hard algorithms).


The concern about reducing the search space is mathematically correct, although the search space remains large enough for all practical purposes (assuming you use salts), at 2^128. However, since we are talking about passwords, the number of possible 16-character strings (alphanumeric, caps matter, a few symbols thrown in) is roughly 2^98, according to my back-of-the-envelope calculations. So the perceived decrease in the search space is not really relevant.
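
For reference, that estimate works out as follows, assuming an alphabet of roughly 70 characters (26 lowercase, 26 uppercase, 10 digits, and a few symbols):

70^{16} = 2^{16 \log_2 70} \approx 2^{16 \times 6.13} \approx 2^{98}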

Aside from that, there really is no difference, cryptographically speaking.

There is, however, a crypto primitive called a "hash chain": a technique that allows you to do some cool tricks, like disclosing a signature key after it's been used, without sacrificing the integrity of the system. Given minimal time synchronization, it lets you cleanly sidestep the problem of initial key distribution. Basically, you precompute a long chain of hashes of hashes, h(h(h(...h(k)...))), and use the nth value to sign; after a set interval, you send out the key and sign with key (n-1). The recipients can now verify that you sent all the previous messages, and no one can fake your signature, since the time period for which it was valid has passed.
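
A minimal PHP sketch of the chain precomputation and a single verification step (a toy illustration of the structure; a real protocol also binds messages and time slots):

// Build a hash chain of length $n from a random seed key.
function hash_chain(string $seed, int $n): array {
    $chain = [$seed];
    for ($i = 1; $i <= $n; $i++) {
        $chain[] = hash('sha256', $chain[$i - 1], true);
    }
    return $chain; // $chain[$n] is published first; keys are revealed in reverse
}

$chain = hash_chain(random_bytes(32), 1000);

// A verifier holding $chain[1000] can check the later-revealed $chain[999]:
$valid = hash('sha256', $chain[999], true) === $chain[1000];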

Re-hashing hundreds of thousands of times like Bill suggests is just a waste of your CPU; use a longer key if you are concerned about people breaking 128 bits.


Yes - it reduces the number of possible strings that match the hash.

As you have already mentioned, salted hashes are much better.

An article here: http://websecurity.ro/blog/2007/11/02/md5md5-vs-md5/ , attempts a proof of why it is equivalent, but I'm not sure about the logic. Partly they assume that there isn't software available to analyse md5(md5(text)), but obviously it's fairly trivial to produce the rainbow tables.

I'm still sticking with my answer that there is a smaller number of md5(md5(text))-type hashes than md5(text) hashes, increasing the chance of collisions (even if still at an unlikely probability) and reducing the search space.


Yes, re-hashing reduces the search space, but no, it doesn't matter; the effective reduction is insignificant.

Re-hashing does increase the time it takes to brute-force, but doing it only twice is also suboptimal.

What you really want is to hash the password with PBKDF2, which is a proven way to use a secure hash with salt and iterations. Check out this SO response.

Edit: I almost forgot - don't use MD5!!!! Use a modern cryptographic hash, such as the SHA-2 family (SHA-256, SHA-384, and SHA-512).






