Fast Transforms
numpy compatible
fwht
1-dimensional Fast Walsh-Hadamard Transform (FWHT) along the last dimension.
Requires the size of the last dimension to be a power of 2.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> with np.printoptions(precision=2):
... fwht(rng.random(8))
array([ 1.07, -0.29, 0.12, 0.08, 0.1 , -0.45, 0.1 , -0.03])
>>> fwht(rng.random((2,3,4,5,8))).shape
(2, 3, 4, 5, 8)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | ndarray | Array of samples at which to run FWHT. | required |
Returns:
Name | Type | Description |
---|---|---|
y | ndarray | FWHT values. |
Source code in qmcpy/fast_transform/ft.py
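The first output in the example equals the sample sum divided by \(\sqrt{8}\), which suggests the orthonormal \(1/\sqrt{n}\) normalization. Under that assumption the normalized Walsh-Hadamard matrix is symmetric and orthogonal, so fwht is its own inverse. A minimal self-check sketch, assuming fwht can be imported from the source file listed above:

```python
import numpy as np
from qmcpy.fast_transform.ft import fwht  # assumed import path

rng = np.random.Generator(np.random.SFC64(7))
x = rng.random((4, 16))
# With the orthonormal normalization suggested by the examples, applying
# fwht twice should recover the original samples.
print(np.allclose(fwht(fwht(x)), x))  # expected True
```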
fftbr
1-dimensional Bit-Reversed-Order (BRO) Fast Fourier Transform (FFT) along the last dimension. Requires the last dimension of x to already be in BRO, so the first step of the decimation-in-time FFT can be skipped. Requires the size of the last dimension to be a power of 2.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> x = rng.random(8)+1j*rng.random(8)
>>> with np.printoptions(precision=2):
... fftbr(x)
array([ 1.07+1.14j, -0.32+0.26j, -0.27+0.22j, -0.27-0.34j, 0.1 +0.16j,
-0.15+0.17j, 0.5 +0.24j, 0.06-0.02j])
>>> fftbr(rng.random((2,3,4,5,8))+1j*rng.random((2,3,4,5,8))).shape
(2, 3, 4, 5, 8)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | ndarray | Array of samples at which to run BRO-FFT. | required |
Returns:
Name | Type | Description |
---|---|---|
y | ndarray | BRO-FFT values. |
Source code in qmcpy/fast_transform/ft.py
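Since the input is taken to be in BRO while a decimation-in-time FFT then produces output in natural order, one way to sanity-check fftbr is to undo the bit reversal and compare with np.fft.fft under the orthonormal normalization the example suggests. The bit_reverse_indices helper and the import path below are illustrative assumptions, not part of the documented API:

```python
import numpy as np
from qmcpy.fast_transform.ft import fftbr  # assumed import path

def bit_reverse_indices(m):
    """Bit-reversal permutation of 0, 1, ..., 2**m - 1 (illustrative helper)."""
    k = np.arange(2**m)
    r = np.zeros_like(k)
    for _ in range(m):
        r = (r << 1) | (k & 1)
        k = k >> 1
    return r

rng = np.random.Generator(np.random.SFC64(7))
x = rng.random(8) + 1j*rng.random(8)  # interpreted as already being in BRO
brv = bit_reverse_indices(3)          # [0, 4, 2, 6, 1, 5, 3, 7]
# x[brv] restores natural order, so an orthonormal FFT of that sequence
# should match fftbr applied to the BRO input.
print(np.allclose(fftbr(x), np.fft.fft(x[brv], norm="ortho")))  # expected True
```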
ifftbr
1-dimensional Bit-Reversed-Order (BRO) Inverse Fast Fourier Transform (IFFT) along the last dimension.
Outputs an array in bit-reversed order, so the last step of the decimation-in-time IFFT can be skipped.
Requires the size of the last dimension to be a power of 2.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> x = rng.random(8)+1j*rng.random(8)
>>> with np.printoptions(precision=2):
... ifftbr(x)
array([ 1.07+1.14j, -0.29-0.22j, 0.3 +0.06j, -0.09+0.02j, 0.03+0.54j,
-0.04-0.33j, -0.19+0.26j, -0.08+0.36j])
>>> ifftbr(rng.random((2,3,4,5,8))+1j*rng.random((2,3,4,5,8))).shape
(2, 3, 4, 5, 8)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | ndarray | Array of samples at which to run BRO-IFFT. | required |
Returns:
Name | Type | Description |
---|---|---|
y | ndarray | BRO-IFFT values. |
Source code in qmcpy/fast_transform/ft.py
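Because fftbr expects bit-reversed input and ifftbr produces bit-reversed output, composing the two in either order should round-trip when both share the orthonormal normalization seen in the examples. A quick consistency sketch, again assuming the import path:

```python
import numpy as np
from qmcpy.fast_transform.ft import fftbr, ifftbr  # assumed import path

rng = np.random.Generator(np.random.SFC64(7))
x = rng.random((2, 16)) + 1j*rng.random((2, 16))
# fftbr: BRO samples -> natural-order spectrum; ifftbr: natural-order
# spectrum -> BRO samples, so both compositions should recover x.
print(np.allclose(ifftbr(fftbr(x)), x))  # expected True
print(np.allclose(fftbr(ifftbr(x)), x))  # expected True
```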
omega_fwht
Useful for efficiently updating FWHT values after doubling the sample size.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> m = 3
>>> x1 = rng.random((3,5,7,2**m))
>>> x2 = rng.random((3,5,7,2**m))
>>> x = np.concatenate([x1,x2],axis=-1)
>>> ytrue = fwht(x)
>>> omega = omega_fwht(m)
>>> y1 = fwht(x1)
>>> y2 = fwht(x2)
>>> y = np.concatenate([y1+omega*y2,y1-omega*y2],axis=-1)/np.sqrt(2)
>>> np.allclose(y,ytrue)
True
Parameters:
Name | Type | Description | Default |
---|---|---|---|
m | int | The output has size \(2^m\). | required |
Returns:
Name | Type | Description |
---|---|---|
y | ndarray | \(\left(1\right)_{k=0}^{2^m-1}\). |
Source code in qmcpy/fast_transform/ft.py
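Since the returned factors are all ones, the FWHT update in the example reduces to a plain butterfly. Writing the doubling identity out explicitly,

\[
\mathrm{fwht}\big([x_1, x_2]\big) = \frac{1}{\sqrt{2}}\,\Big[\mathrm{fwht}(x_1) + \mathrm{fwht}(x_2),\;\; \mathrm{fwht}(x_1) - \mathrm{fwht}(x_2)\Big],
\]

where \([\cdot,\cdot]\) denotes concatenation along the last dimension, exactly as verified by the example above.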
omega_fftbr
Useful for efficiently updating FFT values after doubling the sample size.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> m = 3
>>> x1 = rng.random((3,5,7,2**m))+1j*rng.random((3,5,7,2**m))
>>> x2 = rng.random((3,5,7,2**m))+1j*rng.random((3,5,7,2**m))
>>> x = np.concatenate([x1,x2],axis=-1)
>>> ytrue = fftbr(x)
>>> omega = omega_fftbr(m)
>>> y1 = fftbr(x1)
>>> y2 = fftbr(x2)
>>> y = np.concatenate([y1+omega*y2,y1-omega*y2],axis=-1)/np.sqrt(2)
>>> np.allclose(y,ytrue)
True
Parameters:
Name | Type | Description | Default |
---|---|---|---|
m | int | The output has size \(2^m\). | required |
Returns:
Name | Type | Description |
---|---|---|
y | ndarray | \(\left(e^{- \pi \mathrm{i} k / 2^m}\right)_{k=0}^{2^m-1}\). |
Source code in qmcpy/fast_transform/ft.py
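A minimal check sketch, assuming omega_fftbr simply evaluates the formula shown in the Returns table for \(k = 0, \dots, 2^m - 1\) and can be imported from the source file listed above:

```python
import numpy as np
from qmcpy.fast_transform.ft import omega_fftbr  # assumed import path

m = 4
k = np.arange(2**m)
# Compare the returned twiddle factors against a direct evaluation of
# exp(-pi*i*k/2^m) from the Returns table.
print(np.allclose(omega_fftbr(m), np.exp(-1j*np.pi*k/2**m)))  # expected True
```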
torch compatible
fwht_torch
Torch implementation of the 1-dimensional Fast Walsh-Hadamard Transform (FWHT) along the last dimension.
Requires the size of the last dimension to be a power of 2.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> x = torch.from_numpy(rng.random(8)).float().requires_grad_()
>>> y = fwht_torch(x)
>>> with torch._tensor_str.printoptions(precision=2):
... y.detach()
tensor([ 1.07, -0.29, 0.12, 0.08, 0.10, -0.45, 0.10, -0.03])
>>> v = torch.sum(y**2)
>>> dvdx = torch.autograd.grad(v,x)[0]
>>> with torch._tensor_str.printoptions(precision=2):
... x.detach()
tensor([0.25, 0.73, 0.06, 0.61, 0.45, 0.26, 0.35, 0.32])
>>> print("%.4f"%v.detach())
1.4694
>>> with torch._tensor_str.printoptions(precision=2):
... dvdx.detach()
tensor([0.50, 1.47, 0.11, 1.23, 0.90, 0.51, 0.70, 0.64])
>>> fwht_torch(torch.rand((2,3,4,5,8))).shape
torch.Size([2, 3, 4, 5, 8])
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Tensor | Array of samples at which to run FWHT. | required |
Returns:
Name | Type | Description |
---|---|---|
y | Tensor | FWHT values. |
Source code in qmcpy/fast_transform/ft_pytorch.py
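The gradient in the example has a simple closed form. Assuming the orthonormal normalization noted earlier, the FWHT preserves the sum of squares, so

\[
v = \sum_k y_k^2 = \sum_j x_j^2 \quad\Longrightarrow\quad \frac{\partial v}{\partial x_j} = 2 x_j,
\]

which matches the printed dvdx tensor (each entry is twice the corresponding entry of x, up to rounding).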
fftbr_torch
Torch implementation of the 1-dimensional Bit-Reversed-Order (BRO) Fast Fourier Transform (FFT) along the last dimension. Requires the last dimension of x to already be in BRO, so the first step of the decimation-in-time FFT can be skipped. Requires the size of the last dimension to be a power of 2.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> x = torch.from_numpy(rng.random(8)+1j*rng.random(8)).to(torch.complex64).requires_grad_()
>>> y = fftbr_torch(x)
>>> with torch._tensor_str.printoptions(precision=2):
... y.detach()
tensor([ 1.07+1.14j, -0.32+0.26j, -0.27+0.22j, -0.27-0.34j, 0.10+0.16j,
-0.15+0.17j, 0.50+0.24j, 0.06-0.02j])
>>> v = torch.abs(torch.sum(y**2))
>>> dvdx = torch.autograd.grad(v,x)[0]
>>> with torch._tensor_str.printoptions(precision=2):
... x.detach()
tensor([0.25+0.65j, 0.73+0.60j, 0.06+0.20j, 0.61+0.39j, 0.45+0.06j, 0.26+0.09j,
0.35+0.39j, 0.32+0.85j])
>>> print("%.4f"%v.detach())
2.5584
>>> with torch._tensor_str.printoptions(precision=2):
... dvdx.detach()
tensor([1.30+0.49j, 1.21+1.45j, 0.79+1.22j, 0.41+0.11j, 1.71+0.61j, 0.79+0.69j,
0.19+0.51j, 0.12+0.90j])
>>> fftbr_torch(torch.rand((2,3,4,5,8))+1j*torch.rand((2,3,4,5,8))).shape
torch.Size([2, 3, 4, 5, 8])
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Tensor | Array of samples at which to run BRO-FFT. | required |
Returns:
Name | Type | Description |
---|---|---|
y | Tensor | BRO-FFT values. |
Source code in qmcpy/fast_transform/ft_pytorch.py
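The torch example reproduces the numpy fftbr values for the same seed, which suggests a direct cross-check between the two backends. A minimal sketch, with both import paths assumed:

```python
import numpy as np
import torch
from qmcpy.fast_transform.ft import fftbr                 # assumed import path
from qmcpy.fast_transform.ft_pytorch import fftbr_torch   # assumed import path

rng = np.random.Generator(np.random.SFC64(11))
x_np = rng.random(8) + 1j*rng.random(8)
x_t = torch.from_numpy(x_np)  # complex128 tensor with the same values
# Both backends should agree up to floating point error.
print(np.allclose(fftbr(x_np), fftbr_torch(x_t).numpy()))  # expected True
```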
ifftbr_torch
Torch implementation of the 1-dimensional Bit-Reversed-Order (BRO) Inverse Fast Fourier Transform (IFFT) along the last dimension.
Outputs an array in bit-reversed order, so the last step of the decimation-in-time IFFT can be skipped.
Requires the size of the last dimension to be a power of 2.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> x = torch.from_numpy(rng.random(8)+1j*rng.random(8)).to(torch.complex64).requires_grad_()
>>> y = ifftbr_torch(x)
>>> with torch._tensor_str.printoptions(precision=2):
... y.detach()
tensor([ 1.07+1.14j, -0.29-0.22j, 0.30+0.06j, -0.09+0.02j, 0.03+0.54j,
-0.04-0.33j, -0.19+0.26j, -0.08+0.36j])
>>> v = torch.abs(torch.sum(y**2))
>>> dvdx = torch.autograd.grad(v,x)[0]
>>> with torch._tensor_str.printoptions(precision=2):
... x.detach()
tensor([0.25+0.65j, 0.73+0.60j, 0.06+0.20j, 0.61+0.39j, 0.45+0.06j, 0.26+0.09j,
0.35+0.39j, 0.32+0.85j])
>>> print("%.4f"%v.detach())
2.5656
>>> with torch._tensor_str.printoptions(precision=2):
... dvdx.detach()
tensor([ 1.15+0.79j, 1.51+1.01j, 0.60+0.86j, 0.06+0.54j, -0.10+0.90j, 0.47+1.37j,
0.37+0.20j, 0.83+1.70j])
>>> ifftbr_torch(torch.rand((2,3,4,5,8))+1j*torch.rand((2,3,4,5,8))).shape
torch.Size([2, 3, 4, 5, 8])
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Tensor | Array of samples at which to run BRO-IFFT. | required |
Returns:
Name | Type | Description |
---|---|---|
y | Tensor | BRO-IFFT values. |
Source code in qmcpy/fast_transform/ft_pytorch.py
omega_fwht_torch
Torch implementation of omega_fwht, useful for efficiently updating FWHT values after doubling the sample size.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> m = 3
>>> x1 = torch.from_numpy(rng.random((3,5,7,2**m)))
>>> x2 = torch.from_numpy(rng.random((3,5,7,2**m)))
>>> x = torch.cat([x1,x2],axis=-1)
>>> ytrue = fwht_torch(x)
>>> omega = omega_fwht_torch(m)
>>> y1 = fwht_torch(x1)
>>> y2 = fwht_torch(x2)
>>> y = torch.cat([y1+omega*y2,y1-omega*y2],axis=-1)/np.sqrt(2)
>>> np.allclose(y,ytrue)
True
Parameters:
Name | Type | Description | Default |
---|---|---|---|
m | int | The output has size \(2^m\). | required |
Returns:
Name | Type | Description |
---|---|---|
y | ndarray | \(\left(1\right)_{k=0}^{2^m-1}\). |
Source code in qmcpy/fast_transform/ft_pytorch.py
omega_fftbr_torch
Torch implementation of omega_fftbr, useful for efficiently updating FFT values after doubling the sample size.
Examples:
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> m = 3
>>> x1 = torch.from_numpy(rng.random((3,5,7,2**m))+1j*rng.random((3,5,7,2**m)))
>>> x2 = torch.from_numpy(rng.random((3,5,7,2**m))+1j*rng.random((3,5,7,2**m)))
>>> x = torch.cat([x1,x2],axis=-1)
>>> ytrue = fftbr_torch(x)
>>> omega = omega_fftbr_torch(m)
>>> y1 = fftbr_torch(x1)
>>> y2 = fftbr_torch(x2)
>>> y = torch.cat([y1+omega*y2,y1-omega*y2],axis=-1)/np.sqrt(2)
>>> np.allclose(y,ytrue)
True
Parameters:
Name | Type | Description | Default |
---|---|---|---|
m | int | The output has size \(2^m\). | required |
Returns:
Name | Type | Description |
---|---|---|
y | ndarray | \(\left(e^{- \pi \mathrm{i} k / 2^m}\right)_{k=0}^{2^m-1}\). |