I usually go with something like the implementation given in Josh Bloch's fabulous Effective Java. It's fast and creates a pretty good hash which is unlikely to cause collisions. Pick two different prime numbers, e.g. 17 and 23, and do:
public override int GetHashCode()
{
    unchecked // Overflow is fine, just wrap
    {
        int hash = 17;
        // Suitable nullity checks etc, of course :)
        hash = hash * 23 + field1.GetHashCode();
        hash = hash * 23 + field2.GetHashCode();
        hash = hash * 23 + field3.GetHashCode();
        return hash;
    }
}
As noted in comments, you may find it's better to pick a large prime to multiply by instead. Apparently 486187739 is good... and although most examples I've seen with small numbers tend to use primes, there are at least similar algorithms where non-prime numbers are often used. In the not-quite-FNV example later, for example, I've used numbers which apparently work well - but the initial value isn't a prime. (The multiplication constant is prime though. I don't know quite how important that is.)
This is better than the common practice of XORing hash codes together, for two main reasons. Suppose we have a type with two int fields:
XorHash(x, x) == XorHash(y, y) == 0 for all x, y
XorHash(x, y) == XorHash(y, x) for all x, y
In other words: any instance whose two fields are equal hashes to 0, and swapping the two field values doesn't change the hash at all. Both properties cause far more collisions than necessary; the sketch below demonstrates them.
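(A quick demonstration, shown in PHP for consistency with the examples later on this page; the algebra is identical for integer hash codes in any language.)
// Using XOR as the combining step exhibits both weaknesses above.
function xorHash($a, $b) { return $a ^ $b; }
var_dump(xorHash(42, 42) === xorHash(7, 7));  // bool(true): both collapse to 0
var_dump(xorHash(3, 19) === xorHash(19, 3));  // bool(true): field order is lost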
By the way, the earlier algorithm is the one currently used by the C# compiler for anonymous types.
This page gives quite a few options. I think for most cases the above is "good enough", and it's incredibly easy to remember and get right. The FNV alternative is similarly simple, but uses different constants and XOR instead of ADD as the combining operation. It looks something like the code below, but the normal FNV algorithm operates on individual bytes, so this would require modifying to perform one iteration per byte instead of per 32-bit hash value. FNV is also designed for variable lengths of data, whereas the way we're using it here is always for the same number of field values. Comments on this answer suggest that the code here doesn't actually work as well (in the sample case tested) as the addition approach above.
// Note: Not quite FNV!
public override int GetHashCode()
{
    unchecked // Overflow is fine, just wrap
    {
        int hash = (int) 2166136261;
        // Suitable nullity checks etc, of course :)
        hash = (hash * 16777619) ^ field1.GetHashCode();
        hash = (hash * 16777619) ^ field2.GetHashCode();
        hash = (hash * 16777619) ^ field3.GetHashCode();
        return hash;
    }
}
One thing to be aware of: ideally, you should prevent your equality-sensitive (and thus hashcode-sensitive) state from changing after adding the object to a collection that depends on its hash code.
As per the documentation:
You can override GetHashCode for immutable reference types. In general, for mutable reference types, you should override GetHashCode only if:
- You can compute the hash code from fields that are not mutable; or
- You can ensure that the hash code of a mutable object does not change while the object is contained in a collection that relies on its hash code.
The link to the FNV article is broken but here is a copy in the Internet Archive: Eternally Confuzzled - The Art of Hashing
DISCLAIMER: This answer was written in 2008.
Since then, PHP has given us password_hash and password_verify and, since their introduction, they are the recommended password hashing & checking method.
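For completeness, here's a minimal sketch of that modern API (the cost value is illustrative, not a recommendation):
$password = 'correct horse battery staple'; // whatever the user submitted
// Hashing at registration. PASSWORD_DEFAULT currently means bcrypt, and
// lets PHP move to stronger algorithms in future releases.
$hash = password_hash($password, PASSWORD_DEFAULT, array('cost' => 12));
// Checking at login. password_verify() reads the algorithm, cost, and salt
// back out of the stored hash and compares in constant time.
if (password_verify($password, $hash)) {
    // Optionally re-hash if your cost parameters have changed since:
    if (password_needs_rehash($hash, PASSWORD_DEFAULT, array('cost' => 12))) {
        $hash = password_hash($password, PASSWORD_DEFAULT, array('cost' => 12));
    }
}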
The theory of the answer is still a good read though.
TL;DR
Don'ts
- Don't limit what characters users can enter for passwords. Only idiots do this.
- Don't limit the length of a password. If your users want a sentence with supercalifragilisticexpialidocious in it, don't prevent them from using it.
- Don't strip or escape HTML and special characters in the password.
- Never store your users' passwords in plain text.
- Never email a password to your user except when they have lost theirs and you are sending a temporary one.
- Never, ever log passwords in any manner.
- Never hash passwords with SHA1 or MD5 or even SHA256! Modern crackers can exceed 60 and 180 billion hashes/second (respectively).
- Don't mix bcrypt with the raw output of hash(); either use hex output or base64_encode it. (This applies to any input that may have a rogue \0 in it, which can seriously weaken security; see the sketch just after this list.)
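Here's what that last point looks like in practice (a sketch; the pre-hash step and algorithm choice are illustrative):
// BAD: raw binary output of hash() can contain a NUL byte ("\0").
// The underlying crypt()/bcrypt code treats its input as a C string,
// so everything after that byte is silently ignored.
$dangerous = hash('sha256', $password, true);
// OK: encode the binary first so every byte is printable.
$safe = base64_encode(hash('sha256', $password, true));
$hash = password_hash($safe, PASSWORD_BCRYPT);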
Dos
- Use scrypt when you can; bcrypt if you cannot.
- Use PBKDF2 with SHA2 hashes if you cannot use either bcrypt or scrypt.
- Reset everyone's passwords when the database is compromised.
- Implement a reasonable 8-10 character minimum length, plus require at least 1 upper case letter, 1 lower case letter, a number, and a symbol. This will improve the entropy of the password, in turn making it harder to crack. (See the "What makes a good password?" section for some debate, and the sketch after this list for a simple check.)
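As a rough illustration, a check along those lines might look like this (meetsPolicy is a hypothetical helper; the exact rules are yours to choose):
// Hypothetical policy check: >= 8 characters, with at least one upper-case
// letter, one lower-case letter, one digit, and one symbol.
function meetsPolicy($password) {
    return strlen($password) >= 8
        && preg_match('/[A-Z]/', $password)
        && preg_match('/[a-z]/', $password)
        && preg_match('/[0-9]/', $password)
        && preg_match('/[^A-Za-z0-9]/', $password);
}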
Why hash passwords anyway?
The objective behind hashing passwords is simple: preventing an attacker who compromises the database from gaining access to user accounts. So the goal of password hashing is to deter a hacker or cracker by costing them too much time or money to calculate the plain-text passwords. And time/cost are the best deterrents in your arsenal.
Another reason you want a good, robust hash on user accounts is to give yourself enough time to change all the passwords in the system. If your database is compromised, you will need enough time to at least lock the system down, if not change every password in the database.
Jeremiah Grossman, CTO of WhiteHat Security, stated on the WhiteHat Security blog after a recent password recovery that required brute-force breaking of his password protection:
Interestingly, in living out this nightmare, I learned A LOT I didn’t know about password cracking, storage, and complexity. I’ve come to appreciate why password storage is ever so much more important than password complexity. If you don’t know how your password is stored, then all you really can depend upon is complexity. This might be common knowledge to password and crypto pros, but for the average InfoSec or Web Security expert, I highly doubt it.
(Emphasis mine.)
What makes a good password anyway?
Entropy. (Not that I fully subscribe to Randall's viewpoint.)
In short, entropy is how much variation is within the password. When a password is only lowercase roman letters, that's only 26 characters. That isn't much variation. Alpha-numeric passwords are better, with 36 characters. But allowing upper and lower case, with symbols, is roughly 96 characters. That's a lot better than just letters. One problem is, to make our passwords memorable we insert patterns—which reduces entropy. Oops!
Password entropy is approximated easily. Using the full range of ASCII characters (roughly 96 typeable characters) yields an entropy of about 6.6 bits per character, which at 8 characters per password is still too low (52.679 bits of entropy) for future security. But the good news is: longer passwords, and passwords with Unicode characters, really increase the entropy of a password and make it harder to crack.
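The arithmetic behind those numbers is just length times log2 of the alphabet size; a quick sketch:
// Entropy estimate: bits = length * log2(alphabet size).
$alphabet = 96; // roughly the typeable ASCII range
$length   = 8;
$bitsPerChar = log($alphabet, 2);       // ~6.585 bits per character
$totalBits   = $length * $bitsPerChar;  // ~52.68 bits: too low for future security
printf("%.3f bits/char, %.3f bits total\n", $bitsPerChar, $totalBits);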
There's a longer discussion of password entropy on the Crypto StackExchange site. A good Google search will also turn up a lot of results.
In the comments I talked with @popnoodles, who pointed out that enforcing a password policy of X length with X many letters, numbers, symbols, etc., can actually reduce entropy by making the password scheme more predictable. I do agree. Randomness, as truly random as possible, is always the safest but least memorable solution.
So far as I've been able to tell, making the world's best password is a Catch-22. Either it's not memorable, or it's too predictable, too short, too long, or full of Unicode characters that are hard to type on a Windows/mobile device. No password is truly good enough for our purposes, so we must protect them as though they were in Fort Knox.
Best practices
Bcrypt and scrypt are the current best practices. Scrypt will be better than bcrypt in time, but it hasn't seen adoption as a standard by Linux/Unix or by webservers, and hasn't had in-depth reviews of its algorithm posted yet. But still, the future of the algorithm does look promising. If you are working with Ruby there is a scrypt gem that will help you out, and Node.js now has its own scrypt package. You can use scrypt in PHP either via the Scrypt extension or the Libsodium extension (both are available in PECL).
I highly suggest reading the documentation for the crypt function if you want to understand how to use bcrypt, or finding yourself a good wrapper, or using something like PHPASS for a more legacy implementation. I recommend a minimum of 12 rounds of bcrypt, if not 15 to 18; a bare-bones sketch follows.
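Something like this, driving bcrypt through crypt() directly (a sketch; it assumes PHP >= 5.3.7 for the $2y$ prefix, and the cost of 12 matches the recommendation above):
$password = 'correct horse battery staple';
// Build a bcrypt salt: 22 characters from the "./A-Za-z0-9" alphabet.
// base64 uses '+' where bcrypt expects '.', hence the strtr().
$salt = substr(strtr(base64_encode(openssl_random_pseudo_bytes(16)), '+', '.'), 0, 22);
$hash = crypt($password, sprintf('$2y$%02d$%s', 12, $salt));
// To verify, pass the stored hash back in as the salt:
$ok = (crypt($password, $hash) === $hash);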
I changed my mind about using bcrypt when I learned that bcrypt only uses Blowfish's key schedule, with a variable cost mechanism. The latter lets you increase the cost to brute-force a password by increasing Blowfish's already expensive key schedule.
Average practices
I almost can't imagine this situation anymore. PHPASS supports PHP 3.0.18 through 5.3, so it is usable on almost every installation imaginable—and should be used if you don't know for certain that your environment supports bcrypt.
But suppose that you cannot use bcrypt or PHPASS at all. What then?
Try an implementation of PBKDF2 with the maximum number of rounds that your environment/application/user-perception can tolerate. The lowest number I'd recommend is 2500 rounds. Also, make sure to use hash_hmac() if it is available to make the operation harder to reproduce; a sketch follows.
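Here's a rough sketch of PBKDF2 built on hash_hmac(), in the spirit of that suggestion (pbkdf2 here is a hypothetical helper; on PHP >= 5.5 prefer the built-in hash_pbkdf2() instead):
// PBKDF2 (RFC 2898) on top of hash_hmac(). Returns raw binary output.
function pbkdf2($algo, $password, $salt, $iterations, $keyLength)
{
    $hashLength = strlen(hash($algo, '', true));
    $blocks = ceil($keyLength / $hashLength);
    $output = '';
    for ($i = 1; $i <= $blocks; $i++) {
        // Initial block: HMAC(password, salt || big-endian block index).
        $block = $u = hash_hmac($algo, $salt . pack('N', $i), $password, true);
        // Remaining rounds: feed each HMAC back in and XOR the results.
        for ($j = 1; $j < $iterations; $j++) {
            $block ^= ($u = hash_hmac($algo, $u, $password, true));
        }
        $output .= $block;
    }
    return substr($output, 0, $keyLength);
}
$salt = openssl_random_pseudo_bytes(16);
$key  = pbkdf2('sha256', 'correct horse battery staple', $salt, 2500, 32);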
Future Practices
Coming in PHP 5.5 is a full password protection library that abstracts away any pains of working with bcrypt. While most of us are stuck with PHP 5.2 and 5.3 in most common environments, especially shared hosts, @ircmaxell has built a compatibility layer for the coming API that is backward compatible to PHP 5.3.7.
Cryptography Recap & Disclaimer
A hashed password cannot be reversed outright; the only way for computers to "crack" a password is to guess candidates and re-run the hashing algorithm used to secure it. The speed of the hash is linearly related to its ability to be brute-forced. Worse still, most hash algorithms can be easily parallelized to perform even faster. This is why costly schemes like bcrypt and scrypt are so important.
You cannot possibly foresee all threats or avenues of attack, and so you must make your best effort to protect your users up front. If you do not, you might even miss the fact that you were attacked until it's too late... and you're liable. To avoid that situation, act paranoid to begin with. Attack your own software (internally) and attempt to steal user credentials, modify other users' accounts, or access their data. If you don't test the security of your system, then you cannot blame anyone but yourself.
Lastly: I am not a cryptographer. Whatever I've said is my opinion, but I happen to think it's based on good ol' common sense ... and lots of reading. Remember, be as paranoid as possible, make things as hard to intrude as possible, and then, if you are still worried, contact a white-hat hacker or cryptographer to see what they say about your code/system.
Best Solution
You might consider some sort of CRC, perhaps a CRC64.