Example: 123456, and we want the third digit from the right ('4') out.
The idea in practice is to access each digit separately (i.e. 6 5 4 3 2 1).
C/C++/C# preferred.
This is known as the 'Hamming Weight', 'popcount' or 'sideways addition'.
Some CPUs have a single built-in instruction to do it and others have parallel instructions which act on bit vectors. Instructions like x86's popcnt (on CPUs where it's supported) will almost certainly be fastest for a single integer. Some other architectures may have a slow instruction implemented with a microcoded loop that tests a bit per cycle (citation needed: hardware popcount is normally fast if it exists at all).
The 'best' algorithm really depends on which CPU you are on and what your usage pattern is.
Your compiler may know how to do something that's good for the specific CPU you're compiling for, e.g. C++20 std::popcount(), or C++ std::bitset<32>::count(), as a portable way to access builtin / intrinsic functions (see another answer on this question). But your compiler's choice of fallback for target CPUs that don't have hardware popcnt might not be optimal for your use-case. Or your language (e.g. C) might not expose any portable function that could use a CPU-specific popcount when there is one.
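For example, a minimal C++ sketch of those two portable options (the wrapper names here are just for illustration):

#include <bit>       // C++20: std::popcount
#include <bitset>
#include <cstdint>

int viaPopcount(uint32_t x) { return std::popcount(x); }
int viaBitset(uint32_t x)   { return (int)std::bitset<32>(x).count(); }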
A pre-populated table lookup method can be very fast if your CPU has a large cache and you are doing lots of these operations in a tight loop. However, it can suffer from the expense of a 'cache miss', where the CPU has to fetch some of the table from main memory. (Look up each byte separately to keep the table small.) If you want popcount for a contiguous range of numbers, only the low byte changes between groups of 256 numbers, making this approach very good.
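A minimal sketch of that byte-wise lookup, assuming a 256-entry table built once at startup (the names and init routine are my choices, not from the original):

#include <cstdint>

static unsigned char popTable[256];   // popcount of each possible byte value

void initPopTable()
{
    for (int i = 1; i < 256; i++)
        popTable[i] = popTable[i / 2] + (i & 1);  // popcount(i) = popcount(i>>1) + (i&1)
}

int popcountLookup(uint32_t i)
{
    return popTable[i & 0xFF] + popTable[(i >> 8) & 0xFF]
         + popTable[(i >> 16) & 0xFF] + popTable[i >> 24];
}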
If you know that your bytes will be mostly 0's or mostly 1's then there are efficient algorithms for these scenarios, e.g. clearing the lowest set bit with a bithack in a loop until the value becomes zero.
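That bithack is i &= i - 1, which clears the lowest set bit, so the loop runs once per set bit (function name here is mine):

#include <cstdint>

int popcountSparse(uint32_t i)
{
    int count = 0;
    while (i != 0) {
        i &= i - 1;   // clear the lowest set bit
        ++count;      // one iteration per set bit: fast for mostly-0 words
    }
    return count;
}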
I believe a very good general-purpose algorithm is the following, known as the 'parallel' or 'variable-precision SWAR' algorithm. I have expressed this in C-like pseudocode; you may need to adjust it to work for a particular language (e.g. using uint32_t for C++ and >>> in Java):
GCC 10 and Clang 10.0 can recognize this pattern / idiom and compile it to a hardware popcnt or equivalent instruction when available, giving you the best of both worlds. (https://godbolt.org/z/qGdh1dvKK)
int numberOfSetBits(uint32_t i)
{
    // Java: use int, and use >>> instead of >>. Or use Integer.bitCount()
    // C or C++: use uint32_t
    i = i - ((i >> 1) & 0x55555555);                 // add pairs of bits
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);  // quads
    i = (i + (i >> 4)) & 0x0F0F0F0F;                 // groups of 8
    return (i * 0x01010101) >> 24;                   // horizontal sum of bytes
}
For JavaScript: coerce to integer with |0 for performance; change the first line to i = (i|0) - ((i >> 1) & 0x55555555);
This has the best worst-case behaviour of any of the algorithms discussed, so it will efficiently deal with any usage pattern or values you throw at it. (Its performance is not data-dependent on normal CPUs where all integer operations including multiply are constant-time. It doesn't get any faster with "simple" inputs, but it's still pretty decent.)
How it works:
i = i - ((i >> 1) & 0x55555555);
The first step is an optimized version of masking to isolate the odd / even bits, shifting to line them up, and adding. This effectively does 16 separate additions in 2-bit accumulators (SWAR = SIMD Within A Register), like (i & 0x55555555) + ((i>>1) & 0x55555555).
The next step takes the odd/even eight of those 16x 2-bit accumulators and adds again, producing 8x 4-bit sums. The i - ... optimization isn't possible this time, so it just masks before / after shifting. Using the same 0x33... constant both times instead of 0xccc... before shifting is a good thing when compiling for ISAs that need to construct 32-bit constants in registers separately.
The final shift-and-add step of (i + (i >> 4)) & 0x0F0F0F0F widens to 4x 8-bit accumulators. It masks after adding instead of before, because the maximum value in any 4-bit accumulator is 4 (if all 4 of the corresponding input bits were set). 4+4 = 8, which still fits in 4 bits, so carry between nibble elements is impossible in i + (i >> 4).
So far this is just fairly normal SIMD using SWAR techniques with a few clever optimizations. Continuing on with the same pattern for 2 more steps can widen to 2x 16-bit then 1x 32-bit counts. But there is a more efficient way on machines with fast hardware multiply:
Once we have few enough "elements", a multiply with a magic constant can sum all the elements into the top element. In this case, byte elements. Multiply is done by left-shifting and adding, so a multiply of x * 0x01010101 results in x + (x<<8) + (x<<16) + (x<<24). Our 8-bit elements are wide enough (and holding small enough counts) that this doesn't produce carry into the top 8 bits.
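For example, if the four byte counts are x = 0x02010304 (2+1+3+4 = 10), then x * 0x01010101 = 0x02010304 + 0x01030400 + 0x03040000 + 0x04000000 = 0x0A080704, and >> 24 extracts the top byte, 0x0A = 10.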
A 64-bit version of this can do 8x 8-bit elements in a 64-bit integer with a 0x0101010101010101 multiplier, and extract the high byte with >>56. So it doesn't take any extra steps, just wider constants. This is what GCC uses for __builtin_popcountll on x86 systems when the hardware popcnt instruction isn't enabled. If you can use builtins or intrinsics for this, do so to give the compiler a chance to do target-specific optimizations.
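A sketch of that 64-bit version, assuming the same steps as the 32-bit code above with widened constants:

#include <cstdint>

int numberOfSetBits64(uint64_t i)
{
    i = i - ((i >> 1) & 0x5555555555555555ULL);                           // pairs
    i = (i & 0x3333333333333333ULL) + ((i >> 2) & 0x3333333333333333ULL); // quads
    i = (i + (i >> 4)) & 0x0F0F0F0F0F0F0F0FULL;                           // bytes
    return (int)((i * 0x0101010101010101ULL) >> 56);  // total lands in the high byte
}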
This bitwise-SWAR algorithm could parallelize to be done in multiple vector elements at once, instead of in a single integer register, for a speedup on CPUs with SIMD but no usable popcount instruction. (e.g. x86-64 code that has to run on any CPU, not just Nehalem or later.)
However, the best way to use vector instructions for popcount is usually by using a variable-shuffle to do a table-lookup for 4 bits at a time of each byte in parallel. (The 4 bits index a 16 entry table held in a vector register).
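A sketch of that nibble-at-a-time table lookup with SSSE3 intrinsics (the per-byte-count helper framing is mine; compile with -mssse3, and sum the resulting bytes, e.g. with _mm_sad_epu8, to get a scalar total):

#include <tmmintrin.h>   // SSSE3: _mm_shuffle_epi8

// Per-byte popcount of a 16-byte vector.
static __m128i popcountBytes(__m128i v)
{
    const __m128i lut      = _mm_setr_epi8(0,1,1,2, 1,2,2,3, 1,2,2,3, 2,3,3,4);
    const __m128i low_mask = _mm_set1_epi8(0x0F);
    __m128i lo = _mm_and_si128(v, low_mask);                     // low nibble of each byte
    __m128i hi = _mm_and_si128(_mm_srli_epi16(v, 4), low_mask);  // high nibble of each byte
    return _mm_add_epi8(_mm_shuffle_epi8(lut, lo),               // 16 parallel 16-entry
                        _mm_shuffle_epi8(lut, hi));              // table lookups, then add
}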
On Intel CPUs, the hardware 64-bit popcnt instruction can outperform an SSSE3 PSHUFB bit-parallel implementation by about a factor of 2, but only if your compiler gets it just right. Otherwise SSE can come out significantly ahead. Newer compiler versions are aware of the popcnt false dependency problem on Intel.
(AVX-512 has vpternlogd, making Harley-Seal very good.)

From the GCC manual:

-- Built-in Function: int __builtin_clz (unsigned int x)
   Returns the number of leading 0-bits in X, starting at the most
   significant bit position. If X is 0, the result is undefined.

-- Built-in Function: int __builtin_clzl (unsigned long)
   Similar to `__builtin_clz', except the argument type is `unsigned long'.

-- Built-in Function: int __builtin_clzll (unsigned long long)
   Similar to `__builtin_clz', except the argument type is `unsigned long long'.
I'd expect them to be translated into something reasonably efficient for your current platform, whether it be one of those fancy bit-twiddling algorithms, or a single instruction.
A useful trick if your input can be zero is __builtin_clz(x | 1): unconditionally setting the low bit without modifying any others makes the output 31 for x=0, without changing the output for any other input.
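A sketch of that trick, assuming GCC/Clang and a 32-bit unsigned int:

int leadingZeros(unsigned int x)
{
    return __builtin_clz(x | 1);  // 31 when x == 0; same as __builtin_clz(x) otherwise
}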
To avoid needing to do that, your other option is platform-specific intrinsics like ARM GCC's __clz (no header needed), or x86's _lzcnt_u32 on CPUs that support the lzcnt instruction. (Beware that lzcnt decodes as bsr on older CPUs instead of faulting, which gives 31-lzcnt for non-zero inputs.)
There's unfortunately no way to portably take advantage of the various CLZ instructions on non-x86 platforms that do define the result for input=0 as 32 or 64 (according to the operand width). x86's lzcnt does that, too, while bsr produces a bit-index that the compiler has to flip unless you use 31-__builtin_clz(x).
(The "undefined result" is not C Undefined Behavior, just a value that isn't defined. It's actually whatever was in the destination register when the instruction ran. AMD documents this, Intel doesn't, but Intel's CPUs do implement that behaviour. But it's not whatever was previously in the C variable you're assigning to, that's not usually how things work when gcc turns C into asm. See also Why does breaking the "output dependency" of LZCNT matter?)
Best Solution
A more efficient implementation might be something like this:
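One plausible version of that idea (the helper name and 1-based digit numbering are my choices): divide to shift the wanted digit into the ones place, then take the remainder mod 10.

// n = 1 returns the ones digit, n = 2 the tens digit, and so on.
int nthDigit(int number, int n)
{
    while (--n > 0)
        number /= 10;      // drop one low digit per iteration
    return number % 10;    // isolate the digit now in the ones place
}

// nthDigit(123456, 3) == 4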
This saves the effort of converting all the digits to string format if you only want one of them. And you don't have to allocate space for the converted string.
If speed is a concern, you could precalculate an array of powers of 10 and use n to index into this array:
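For instance (the array name is mine; ten entries cover every digit of a 32-bit int):

static const int powersOf10[] = {
    1, 10, 100, 1000, 10000, 100000,
    1000000, 10000000, 100000000, 1000000000
};

// n = 1 returns the ones digit, as above, but with no loop.
int nthDigitFast(int number, int n)
{
    return (number / powersOf10[n - 1]) % 10;
}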
As mentioned by others, this is as close as you are going to get to bitwise operations for base 10.