I was helping somebody out with his JavaScript code and my eyes were caught by a section that looked like this:

```
function randOrd() {
    return Math.round(Math.random()) - 0.5;
}
coords.sort(randOrd);
alert(coords);
```

My first thought was: **hey, this can't possibly work!** But then I did some experimenting and found that it does indeed seem to provide nicely randomized results.

Then I did some web searching and, almost at the top, found an article from which this code was most certainly copied. It looked like a pretty respectable site and author…

But my gut feeling tells me that this must be wrong, especially as the sorting algorithm is not specified by the ECMA standard. I think different sorting algorithms will result in different non-uniform shuffles. Some sorting algorithms may even loop infinitely…

But what do you think?

And as another question: how would I go about measuring how random the results of this shuffling technique are?

**update:** I did some measurements and posted the results below as one of the answers.
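One straightforward way to measure this (a sketch of my own, not from the original post) is to shuffle a small array many times and tally how often each permutation appears. A uniform shuffle of 3 elements should hit each of the 3! = 6 permutations in roughly 1/6 of the trials; large deviations indicate bias:

```javascript
// Tally permutation frequencies over many trials to eyeball uniformity.
function tallyShuffle(shuffle, trials) {
    var counts = {};
    for (var i = 0; i < trials; i++) {
        var key = shuffle([1, 2, 3]).join('');
        counts[key] = (counts[key] || 0) + 1;
    }
    return counts;
}

// The sort-based shuffle from the question:
function sortShuffle(a) {
    return a.sort(function () { return Math.round(Math.random()) - 0.5; });
}

// Print each permutation's share; with a fair shuffle every line
// should read close to 0.167.
var trials = 60000;
var counts = tallyShuffle(sortShuffle, trials);
for (var key in counts) {
    console.log(key + ': ' + (counts[key] / trials).toFixed(3));
}
```

Whether the shares drift from 1/6 (beyond sampling noise of roughly 1/√trials) will depend on the engine's `sort()` implementation, which is exactly the point under discussion.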

## Best Solution

After Jon has already covered the theory, here's an implementation:
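The code block itself did not survive here; below is a standard in-place Fisher-Yates (Durstenfeld) shuffle of the kind the answer describes — a typical sketch, not necessarily the author's original snippet:

```javascript
// In-place Fisher-Yates (Durstenfeld) shuffle: O(n), and each of the
// n! permutations is equally likely given a uniform Math.random().
function shuffle(array) {
    for (var i = array.length - 1; i > 0; i--) {
        // Pick a random index j in [0, i] and swap it into position i.
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = array[i];
        array[i] = array[j];
        array[j] = tmp;
    }
    return array;
}
```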

The algorithm is `O(n)`, whereas sorting should be `O(n log n)`. Depending on the overhead of executing JS code compared to the native `sort()` function, this might lead to a noticeable difference in performance, which should increase with array size.

In the comments to bobobobo's answer, I stated that the algorithm in question might not produce evenly distributed probabilities (depending on the implementation of `sort()`).

My argument goes along these lines: A sorting algorithm requires a certain number `c` of comparisons, e.g. `c = n(n-1)/2` for Bubblesort. Our random comparison function makes the outcome of each comparison equally likely, i.e. there are `2^c` equally probable results. Now, each result has to correspond to one of the `n!` permutations of the array's entries, which makes an even distribution impossible in the general case. For example, with `n = 3` and Bubblesort, `c = 3`, so there are `2^3 = 8` equally likely outcomes to distribute over `3! = 6` permutations; since 8 is not divisible by 6, some permutations must occur more often than others. (This is a simplification, as the actual number of comparisons needed depends on the input array, but the assertion should still hold.)

As Jon pointed out, this alone is no reason to prefer Fisher-Yates over using `sort()`, as the random number generator will also map a finite number of pseudo-random values to the `n!` permutations. But the results of Fisher-Yates should still be better: `Math.random()` produces a pseudo-random number in the range `[0;1[`. As JS uses double-precision floating-point values, this corresponds to `2^x` possible values where `52 ≤ x ≤ 63` (I'm too lazy to find the actual number). A probability distribution generated using `Math.random()` will stop behaving well if the number of atomic events is of the same order of magnitude.

When using Fisher-Yates, the relevant parameter is the size of the array, which should never approach `2^52` due to practical limitations.

When sorting with a random comparison function, the function basically only cares if the return value is positive or negative, so this will never be a problem. But there is a similar one: Because the comparison function is well-behaved, the `2^c` possible results are, as stated, equally probable. If `c ~ n log n`, then `2^c ~ n^(a·n)` where `a = const`, which makes it at least possible that `2^c` is of the same magnitude as (or even less than) `n!`, leading to an uneven distribution even if the sorting algorithm were to map onto the permutations evenly. Whether this has any practical impact is beyond me.

The real problem is that the sorting algorithms are not guaranteed to map onto the permutations evenly. It's easy to see that Mergesort does, as it's symmetric, but reasoning about something like Bubblesort or, more importantly, Quicksort or Heapsort, is not.

The bottom line: As long as `sort()` uses Mergesort, you *should* be reasonably safe except in corner cases (at least I'm hoping that `2^c ≤ n!` is a corner case); if not, all bets are off.