Here's a summary of Dimitris Andreou's link.

Remember the sums of i-th powers, where i = 1, 2, ..., k. This reduces the problem to solving the system of equations

a_{1} + a_{2} + ... + a_{k} = b_{1}

a_{1}^{2} + a_{2}^{2} + ... + a_{k}^{2} = b_{2}

...

a_{1}^{k} + a_{2}^{k} + ... + a_{k}^{k} = b_{k}

Using Newton's identities, knowing b_{i} allows us to compute

c_{1} = a_{1} + a_{2} + ... + a_{k}

c_{2} = a_{1}a_{2} + a_{1}a_{3} + ... + a_{k-1}a_{k}

...

c_{k} = a_{1}a_{2} ... a_{k}

If you expand the polynomial (x-a_{1})...(x-a_{k}), the coefficients will be exactly c_{1}, ..., c_{k} (up to sign) - see Viète's formulas. Since every polynomial factors uniquely (the ring of polynomials is a Euclidean domain), this means the a_{i} are uniquely determined, up to permutation.

This completes the proof that remembering the power sums is enough to recover the numbers. For constant k, this is a good approach.
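For the classic k = 2 case the system collapses to a quadratic with integer coefficients. Here is a minimal sketch (the function name is mine, not from the linked write-up):

```python
import math

def find_two_missing(full, partial):
    """Recover the two numbers missing from `partial` relative to `full`,
    using only the sum and the sum of squares."""
    full, partial = list(full), list(partial)
    b1 = sum(full) - sum(partial)                                 # a1 + a2
    b2 = sum(x * x for x in full) - sum(x * x for x in partial)   # a1^2 + a2^2
    # a1*a2 = ((a1 + a2)^2 - (a1^2 + a2^2)) / 2, so a1 and a2 are the
    # roots of x^2 - b1*x + c2 = 0.
    c2 = (b1 * b1 - b2) // 2
    disc = math.isqrt(b1 * b1 - 4 * c2)
    return (b1 - disc) // 2, (b1 + disc) // 2

# numbers 1..10 with 3 and 7 removed
print(find_two_missing(range(1, 11), [1, 2, 4, 5, 6, 8, 9, 10]))  # (3, 7)
```

Everything stays in exact integer arithmetic (`math.isqrt`), so there is no floating-point error for this small case.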

However, when k is varying, the direct approach of computing c_{1},...,c_{k} is prohibitively expensive, since e.g. c_{k} is the product of all missing numbers, of magnitude n!/(n-k)!. To overcome this, perform the computations in the field Z_{q}, where q is a prime such that n <= q < 2n - it exists by Bertrand's postulate. The proof doesn't need to be changed, since the formulas still hold, and factorization of polynomials is still unique. You also need an algorithm for factorization over finite fields, for example the one by Berlekamp or Cantor-Zassenhaus.

High level pseudocode for constant k:

- Compute the i-th powers of the given numbers.
- Subtract to get the sums of the i-th powers of the unknown numbers. Call the sums b_{i}.
- Use Newton's identities to compute the coefficients from b_{i}; call them c_{i}. Basically, c_{1} = b_{1}; c_{2} = (c_{1}b_{1} - b_{2})/2; see Wikipedia for the exact formulas.
- Factor the polynomial x^{k} - c_{1}x^{k-1} + ... + c_{k}.
- The roots of the polynomial are the needed numbers a_{1}, ..., a_{k}.
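The steps above can be sketched as follows. This is an illustrative version for small constant k (names are mine): it applies Newton's identities to get the coefficients and then leans on `numpy.roots` for the factoring step, rather than an exact factorization algorithm:

```python
import numpy as np

def recover_missing(power_sums):
    """Given power_sums[i-1] = b_i = sum of i-th powers of the k missing
    numbers, return the missing numbers (sorted)."""
    k = len(power_sums)
    e = [1.0]  # elementary symmetric polynomials; e[0] = 1
    for m in range(1, k + 1):
        # Newton's identity: m*e_m = sum_{i=1..m} (-1)^(i-1) * e_{m-i} * b_i
        s = sum((-1) ** (i - 1) * e[m - i] * power_sums[i - 1]
                for i in range(1, m + 1))
        e.append(s / m)
    # The missing numbers are the roots of
    # x^k - e_1 x^{k-1} + e_2 x^{k-2} - ... +- e_k.
    coeffs = [(-1) ** m * e[m] for m in range(k + 1)]
    return sorted(round(r.real) for r in np.roots(coeffs))

# missing numbers 3, 7, 9: b_1 = 19, b_2 = 139, b_3 = 1099
print(recover_missing([19, 139, 1099]))  # [3, 7, 9]
```

Floating-point root finding is fine for small k with well-separated integer roots; the varying-k case described above would replace it with factorization over Z_{q}.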

For varying k, find a prime q with n <= q < 2n using e.g. Miller-Rabin, and perform the steps with all numbers reduced modulo q.
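Finding such a q is cheap relative to the rest of the work. A sketch with trial division standing in for Miller-Rabin (adequate for moderate n; the function name is mine):

```python
def smallest_prime_at_least(n):
    """Return the smallest prime q with q >= n. By Bertrand's postulate
    it satisfies q < 2n for n > 1, so the search is short."""
    def is_prime(m):
        if m < 2:
            return False
        i = 2
        while i * i <= m:   # trial division; swap in Miller-Rabin for large n
            if m % i == 0:
                return False
            i += 1
        return True

    q = max(n, 2)
    while not is_prime(q):
        q += 1
    return q

print(smallest_prime_at_least(100))  # 101
```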

EDIT: The previous version of this answer stated that instead of Z_{q}, where q is prime, it is possible to use a finite field of characteristic 2 (q=2^(log n)). This is not the case, since Newton's formulas require division by numbers up to k.

## Best Solution

Just solve the cubic.

If you're talking about Bezier plane curves, where x(t) and y(t) are cubic polynomials, then y(x) might be undefined or have multiple values. An extreme degenerate case would be the line x = 1.0, which can be expressed as a cubic Bezier (control point 2 is the same as end point 1; control point 3 is the same as end point 4). In that case, y(x) has no solutions for x != 1.0, and infinitely many solutions for x == 1.0.

A method of recursive subdivision will work, but I would expect it to be much slower than just solving the cubic. (Unless you're working with some sort of embedded processor with unusually poor floating-point capacity.)
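As a sketch of the "just solve the cubic" approach (names are mine, and it leans on `numpy.roots` rather than a hand-rolled Cardano): convert x(t) from Bernstein to power basis, solve x(t) = x0 for t in [0, 1], and evaluate y at each solution.

```python
import numpy as np

def bezier_y_at_x(px, py, x0):
    """Given the x and y coordinates of 4 cubic Bezier control points,
    return every y where the curve crosses the vertical line x = x0."""
    p0, p1, p2, p3 = px
    # Power-basis coefficients of x(t) - x0.
    a = -p0 + 3 * p1 - 3 * p2 + p3
    b = 3 * p0 - 6 * p1 + 3 * p2
    c = -3 * p0 + 3 * p1
    d = p0 - x0
    roots = np.roots([a, b, c, d])   # handles degenerate (lower-degree) cases
    ts = [float(t.real) for t in roots
          if abs(t.imag) < 1e-9 and -1e-9 <= t.real <= 1 + 1e-9]
    q0, q1, q2, q3 = py
    def y(t):  # evaluate y(t) in Bernstein form
        return ((1 - t) ** 3 * q0 + 3 * (1 - t) ** 2 * t * q1
                + 3 * (1 - t) * t ** 2 * q2 + t ** 3 * q3)
    return [y(t) for t in ts]

# a curve whose x(t) is linear in t: it crosses x = 1.5 at t = 0.5
print(bezier_y_at_x([0, 1, 2, 3], [0, 0, 3, 3], 1.5))  # [1.5]
```

The imaginary-component and [0, 1] tolerances here are exactly the kind of near-tangent judgment calls discussed below; tighten or loosen them for your accuracy needs.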

You should have no trouble finding code that solves a cubic that has already been thoroughly tested and debugged. If you implement your own solution using recursive subdivision, you won't have that advantage.

Finally, yes, there may be numerical stability problems, e.g. when the point you want is near a tangent, but a subdivision method won't make those go away. It will just make them less obvious.

EDIT: responding to your comment, but I need more than 300 characters. The wackypedia article is math with no code. I suspect you can find some cookbook code that's more ready-to-use somewhere, maybe Numerical Recipes or the ACM Collected Algorithms.

To your specific question, and using the same notation as the article, u is only zero or near zero when p is also zero or near zero. They're related by the equation:

`u^6 + q u^3 == p^3 / 27`

Near zero, you can use the approximation:

`q u^3 == p^3 / 27`

or

`p / (3u) ==` cube root of q

So the computation of x from u should contain something like a special case that uses the cube root of q in place of p / (3u) when u is near zero.

How "near" zero is near? Depends on how much accuracy you need. You could spend some quality time with Maple or Matlab looking at how much error is introduced for what magnitudes of u. Of course, only you know how much accuracy you need.

The article gives 3 formulas for u for the 3 roots of the cubic. Given the three u values, you can get the 3 corresponding x values. The 3 values for u and x are all complex numbers with an imaginary component. If you're sure that there has to be only one real solution, then you expect one of the roots to have a zero imaginary component, and the other two to be complex conjugates. It looks like you have to compute all three and then pick the real one. (Note that a complex u can correspond to a real x!)

However, there's another numerical stability problem there: floating-point arithmetic being what it is, the imaginary component of the real solution will not be exactly zero, and the imaginary components of the non-real roots can be arbitrarily close to zero. So numeric round-off can result in you picking the wrong root. It would be helpful if there's some sanity check from your application that you could apply there. If you do pick the right root, one or more iterations of Newton-Raphson can improve its accuracy a lot.
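The Newton-Raphson polish is a few lines. A sketch (the function name is mine), using Horner's scheme to evaluate the polynomial and its derivative together:

```python
def polish_root(coeffs, x, iterations=2):
    """Refine a polynomial root estimate with a few Newton-Raphson steps.
    coeffs are highest-degree-first, e.g. [1, 0, 0, -2] for x^3 - 2."""
    for _ in range(iterations):
        f, df = 0.0, 0.0
        for c in coeffs:        # Horner's scheme: f(x) and f'(x) in one pass
            df = df * x + f
            f = f * x + c
        if df == 0.0:
            break               # stationary point: Newton step undefined
        x -= f / df
    return x

# root of x^3 - 2 == 0, starting from a slightly-off estimate
print(polish_root([1.0, 0.0, 0.0, -2.0], 1.26))  # ≈ 1.2599210498948732
```

Since Newton's method converges quadratically near a simple root, two iterations typically take a root-finder's output down to machine precision; near a tangent (a near-double root), convergence degrades and more care is needed.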