If X is normal with mean 0 and standard deviation sigma, then
P = Prob[ -a <= X <= a ] = Prob[ -a/sigma <= N <= a/sigma ]
= 2 Prob[ 0 <= N <= a/sigma ]
= 2 ( Prob[ N <= a/sigma ] - 1/2 )
where N is normal with mean 0 and standard deviation 1. Hence
P/2 + 1/2 = Prob[ N <= a/sigma ] = Phi(a/sigma)
where Phi is the cumulative distribution function (cdf) of a normal variable with mean 0 and stddev 1. Now we need the inverse normal cdf (also called the "percent point function"), which in Python is scipy.stats.norm.ppf(). Sample code:
from scipy.stats import norm
P = 0.3456
a = 3.0
a_sigma = float(norm.ppf(P/2 + 0.5)) # a/sigma
sigma = a/a_sigma # Here is the standard deviation
For example, we know that the probability of a N(0,1) variable falling into the interval [-1, 1] is ~ 0.682 (the dark blue area in this figure). If you set P = 0.682 and a = 1.0 you obtain sigma ~ 1.0, which is indeed the standard deviation.
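That check can be run directly with the snippet above (assuming scipy is available; the 0.6827 figure is the standard two-sided probability for one standard deviation):

```python
from scipy.stats import norm

# Prob[-1 <= N(0,1) <= 1] is about 0.6827, so with a = 1.0
# the recovered sigma should come out very close to 1.0.
P = 0.6827
a = 1.0
a_sigma = float(norm.ppf(P / 2 + 0.5))  # a/sigma
sigma = a / a_sigma
print(sigma)
```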
Here's a summary of Dimitris Andreou's link.
Remember the sums of the i-th powers of the numbers, where i=1,2,...,k. This reduces the problem to solving the system of equations
a1 + a2 + ... + ak = b1
a1^2 + a2^2 + ... + ak^2 = b2
...
a1^k + a2^k + ... + ak^k = bk
Using Newton's identities, knowing the bi allows one to compute
c1 = a1 + a2 + ... + ak
c2 = a1*a2 + a1*a3 + ... + a(k-1)*ak
...
ck = a1*a2*...*ak
If you expand the polynomial (x-a1)...(x-ak), the coefficients will be exactly c1, ..., ck (up to sign) - see Viète's formulas. Since every polynomial factors uniquely (the ring of polynomials is a Euclidean domain), this means the ai are uniquely determined, up to permutation.
This ends a proof that remembering powers is enough to recover the numbers. For constant k, this is a good approach.
However, when k is varying, the direct approach of computing c1, ..., ck is prohibitively expensive, since e.g. ck is the product of all missing numbers, of magnitude n!/(n-k)!. To overcome this, perform the computations in the field Zq, where q is a prime such that n <= q < 2n - such a prime exists by Bertrand's postulate. The proof doesn't need to be changed, since the formulas still hold and factorization of polynomials is still unique. You also need an algorithm for factorization over finite fields, for example the one by Berlekamp or Cantor-Zassenhaus.
High level pseudocode for constant k:
- Compute i-th powers of given numbers
- Subtract to get sums of i-th powers of unknown numbers. Call the sums bi.
- Use Newton's identities to compute the coefficients from the bi; call them ci. Basically, c1 = b1; c2 = (c1*b1 - b2)/2; see Wikipedia for the exact formulas
- Factor the polynomial x^k - c1*x^(k-1) + c2*x^(k-2) - ... + (-1)^k*ck.
- The roots of the polynomial are the needed numbers a1, ..., ak.
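For k = 2 the steps above can be sketched directly in Python (the function name is illustrative, not from the original answer; the quadratic is solved by hand instead of with a factoring library):

```python
def find_two_missing(full, partial):
    """Recover the two numbers present in `full` but missing from
    `partial`, using sums of 1st and 2nd powers (the k = 2 case)."""
    # Sums of i-th powers of the two unknown numbers a1, a2.
    b1 = sum(full) - sum(partial)
    b2 = sum(x * x for x in full) - sum(x * x for x in partial)
    # Newton's identities: c1 = b1, c2 = (c1*b1 - b2)/2.
    c1 = b1
    c2 = (c1 * b1 - b2) // 2
    # a1, a2 are the roots of x^2 - c1*x + c2 = 0.
    disc = c1 * c1 - 4 * c2
    root = int(disc ** 0.5)  # discriminant is a perfect square here
    a1 = (c1 - root) // 2
    a2 = (c1 + root) // 2
    return a1, a2
```

For example, with the full set 1..10 and 3 and 7 removed, b1 = 10 and b2 = 58, giving the polynomial x^2 - 10x + 21, whose roots are 3 and 7. (For very large inputs, `int(disc ** 0.5)` should be replaced by an exact integer square root such as `math.isqrt`.)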
For varying k, find a prime q with n <= q < 2n (testing candidates with e.g. Miller-Rabin), and perform the steps above with all numbers reduced modulo q.
EDIT: The previous version of this answer stated that instead of Zq, where q is prime, it is possible to use a finite field of characteristic 2 (q=2^(log n)). This is not the case, since Newton's formulas require division by numbers up to k.
Best Solution
The Ziggurat algorithm is pretty efficient for this, although the Box-Muller transform is easier to implement from scratch (and not crazy slow).
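A minimal Box-Muller sketch, using only the Python standard library (in practice `random.gauss` or `numpy.random.normal` already provide this):

```python
import math
import random

def box_muller(mu=0.0, sigma=1.0):
    """Draw one N(mu, sigma^2) sample via the Box-Muller transform."""
    u1 = random.random()
    u2 = random.random()
    # Two independent uniforms -> one standard normal deviate.
    # Using log(1 - u1) keeps the argument strictly positive.
    z = math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)
    return mu + sigma * z
```

The transform actually yields two independent normals per uniform pair (the second uses sin instead of cos); this sketch discards the second for simplicity.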