Utilities
plot_proj
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`sampler` | `(DiscreteDistribution, TrueMeasure)` | The generator of samples to be plotted. | required |
`n` | `Union[int, list]` | The number of samples, or a list of sample counts (used for extensibility), to be plotted. | `64` |
`d_horizontal` | `Union[int, list]` | The dimension, or list of dimensions, to be plotted on the horizontal axes. | `1` |
`d_vertical` | `Union[int, list]` | The dimension, or list of dimensions, to be plotted on the vertical axes. | `2` |
`math_ind` | `bool` | Setting to … | `True` |
`marker_size` | `float` | The marker size (typographic points are 1/72 in.). | `5` |
`figfac` | `float` | The figure size factor. | `5` |
`fig_title` | `str` | The title of the figure. | `'Projection of Samples'` |
`axis_pad` | `float` | Padding added to the axes so that points on the boundaries can be seen. | `0` |
`want_grid` | `bool` | Setting to … | `True` |
`font_family` | `str` | The font family of the plot. | `'sans-serif'` |
`where_title` | `float` | The position of the title on the plot. | `1` |
`**kwargs` | `dict` | Additional keyword arguments passed to … | `{}` |
Source code in qmcpy/util/plot_functions.py
mlmc_test
Multilevel Monte Carlo test routine.
Examples:
>>> fo = qp.FinancialOption(
... sampler=qp.IIDStdUniform(seed=7),
... option = "ASIAN",
... asian_mean = "GEOMETRIC",
... volatility = 0.2,
... start_price = 100,
... strike_price = 100,
... interest_rate = 0.05,
... t_final = 1)
>>> print('Exact Value: %s'%fo.get_exact_value_inf_dim())
Exact Value: 5.546818633789201
>>> mlmc_test(fo)
Convergence tests, kurtosis, telescoping sum check using N = 20000 samples
l ave(Pf-Pc) ave(Pf) var(Pf-Pc) var(Pf) kurtosis check cost
0 5.4486e+00 5.4486e+00 5.673e+01 5.673e+01 0.00e+00 0.00e+00 2.00e+00
1 1.4925e-01 5.5937e+00 3.839e+00 5.838e+01 5.48e+00 1.14e-02 4.00e+00
2 3.5921e-02 5.6024e+00 9.585e-01 5.990e+01 5.59e+00 7.86e-02 8.00e+00
3 8.7217e-03 5.5128e+00 2.332e-01 5.828e+01 5.36e+00 2.92e-01 1.60e+01
4 1.9773e-03 5.6850e+00 6.021e-02 6.081e+01 5.46e+00 5.12e-01 3.20e+01
5 9.5925e-04 5.5628e+00 1.512e-02 5.939e+01 5.37e+00 3.71e-01 6.40e+01
6 8.5998e-04 5.5706e+00 3.773e-03 5.995e+01 5.48e+00 2.10e-02 1.28e+02
7 1.3592e-04 5.4359e+00 9.285e-04 5.808e+01 5.51e+00 4.13e-01 2.56e+02
8 3.4520e-05 5.5322e+00 2.313e-04 5.881e+01 5.57e+00 2.96e-01 5.12e+02
Linear regression estimates of MLMC parameters
alpha = 1.617207 (exponent for MLMC weak convergence)
beta = 2.000355 (exponent for MLMC variance)
gamma = 1.000000 (exponent for MLMC cost)
MLMC complexity tests
rmse_tol value mlmc_cost std_cost savings N_l
5.000e-03 5.545e+00 3.339e+07 1.038e+08 3.11 8605392 1566846 559701 198886 70359
1.000e-02 5.539e+00 7.272e+06 1.243e+07 1.71 2009192 365451 130781 46623
2.000e-02 5.549e+00 1.827e+06 3.108e+06 1.70 503397 91821 33196 11736
5.000e-02 5.474e+00 2.324e+05 2.556e+05 1.10 71432 13143 4617
1.000e-01 5.466e+00 6.220e+04 6.389e+04 1.03 19477 3361 1225
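The `alpha`, `beta`, and `gamma` estimates above come from log-linear regression on the per-level statistics. As a rough illustration (a stdlib sketch, not the routine's actual code; `fit_rate` is a hypothetical helper), fitting \(\log_2\) of the level-difference means and variances against the level recovers rates close to the printed values:

```python
import math

def fit_rate(levels, vals):
    # least-squares fit of log2(vals) = c - rate * level; returns rate
    ys = [math.log2(v) for v in vals]
    n = len(levels)
    xbar = sum(levels) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(levels, ys))
             / sum((x - xbar) ** 2 for x in levels))
    return -slope

# |ave(Pf-Pc)| and var(Pf-Pc) for levels 1..8, copied from the table above
means = [1.4925e-01, 3.5921e-02, 8.7217e-03, 1.9773e-03,
         9.5925e-04, 8.5998e-04, 1.3592e-04, 3.4520e-05]
variances = [3.839, 9.585e-01, 2.332e-01, 6.021e-02,
             1.512e-02, 3.773e-03, 9.285e-04, 2.313e-04]
alpha = fit_rate(range(1, 9), means)      # weak-convergence rate, near 1.617
beta = fit_rate(range(1, 9), variances)   # variance-decay rate, near 2.000
```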
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`integrand` | `AbstractIntegrand` | Multilevel integrand. | required |
`n` | `int` | Number of samples for convergence tests. | `20000` |
`l` | `int` | Number of levels for convergence tests. | `8` |
`n_init` | `int` | Initial number of samples for MLMC calculations. | `200` |
`rmse_tols` | `ndarray` | Desired accuracy array for MLMC calculations. | `array([0.005, 0.01, 0.02, 0.05, 0.1])` |
`levels_min` | `int` | Minimum number of levels for MLMC calculations. | `2` |
`levels_max` | `int` | Maximum number of levels for MLMC calculations. | `10` |
Source code in qmcpy/util/mlmc_test.py
Shift Invariant Ops
util.bernoulli_poly
The \(n^\text{th}\) Bernoulli polynomial.
Examples:
>>> x = np.arange(6).reshape((2,3))/6
>>> available_n = list(BERNOULLIPOLYSDICT.keys())
>>> available_n
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> for n in available_n:
... y = bernoulli_poly(n,x)
... with np.printoptions(precision=2):
... print("n = %d\n%s"%(n,y))
n = 1
[[-0.5 -0.33 -0.17]
[ 0. 0.17 0.33]]
n = 2
[[ 0.17 0.03 -0.06]
[-0.08 -0.06 0.03]]
n = 3
[[ 0. 0.05 0.04]
[ 0. -0.04 -0.05]]
n = 4
[[-0.03 -0.01 0.02]
[ 0.03 0.02 -0.01]]
n = 5
[[ 0.00e+00 -2.19e-02 -2.06e-02]
[ 1.39e-17 2.06e-02 2.19e-02]]
n = 6
[[ 0.02 0.01 -0.01]
[-0.02 -0.01 0.01]]
n = 7
[[ 0.00e+00 2.28e-02 2.24e-02]
[-1.39e-17 -2.24e-02 -2.28e-02]]
n = 8
[[-0.03 -0.02 0.02]
[ 0.03 0.02 -0.02]]
n = 9
[[ 0. -0.04 -0.04]
[ 0. 0.04 0.04]]
n = 10
[[ 0.08 0.04 -0.04]
[-0.08 -0.04 0.04]]
>>> import scipy.special
>>> for n in available_n:
... bpoly_coeffs = BERNOULLIPOLYSDICT[n].coeffs
... bpoly_coeffs_true = scipy.special.bernoulli(n)*scipy.special.comb(n,np.arange(n,-1,-1))
... assert np.allclose(bpoly_coeffs_true,bpoly_coeffs,atol=1e-12)
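For intuition, the low-order cases can be checked against their standard closed forms, \(B_1(x) = x - 1/2\) and \(B_2(x) = x^2 - x + 1/6\). A minimal stdlib sketch (the helper names `b1` and `b2` are illustrative, not package functions):

```python
# closed forms of the first two Bernoulli polynomials (standard definitions)
def b1(x):
    return x - 0.5

def b2(x):
    return x * x - x + 1.0 / 6.0

# the same evaluation grid as the example above: x = 0, 1/6, ..., 5/6
xs = [i / 6 for i in range(6)]
vals_b1 = [b1(x) for x in xs]
vals_b2 = [b2(x) for x in xs]
```

These values match the `n = 1` and `n = 2` rows printed above (e.g. `b1(0) = -0.5` and `b2(1/3) = -1/18 ≈ -0.06`).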
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`n` | `int` | Polynomial order. | required |
`x` | `Union[ndarray, Tensor]` | Points at which to evaluate the Bernoulli polynomial. | required |
Returns:

Name | Type | Description |
---|---|---|
`y` | `Union[ndarray, Tensor]` | Bernoulli polynomial values. |
Source code in qmcpy/util/shift_invar_ops.py
Digitally Shift Invariant Ops
util.weighted_walsh_funcs
Weighted Walsh functions, where \(\mathrm{wal}_k\) is the \(k^\text{th}\) Walsh function and \(\mu_\alpha\) is the Dick weight function, which sums the \(\alpha\) largest indices of \(1\) bits in the binary expansion of \(k\); e.g. \(k=13=1101_2\) has 1-bit indexes \((4,3,1)\), so \(\mu_1(13)=4\), \(\mu_2(13)=4+3=7\), and \(\mu_3(13)=4+3+1=8\).
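The weight function itself is easy to sketch in pure Python (a stdlib sketch; `dick_weight` is a hypothetical helper name, not part of the package):

```python
def dick_weight(k, alpha):
    # 1-based indices (counted from the least significant bit) of the 1 bits in k
    idx = [i + 1 for i in range(k.bit_length()) if (k >> i) & 1]
    # sum the alpha largest such indices
    return sum(sorted(idx, reverse=True)[:alpha])

# k = 13 = 1101_2 has 1-bit indexes (4, 3, 1)
mu1 = dick_weight(13, 1)  # 4
mu2 = dick_weight(13, 2)  # 4 + 3 = 7
mu3 = dick_weight(13, 3)  # 4 + 3 + 1 = 8
```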
Examples:
>>> t = 3
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> xb = rng.integers(low=0,high=2**t,size=(2,3))
>>> available_alpha = list(WEIGHTEDWALSHFUNCSPOS.keys())
>>> available_alpha
[2, 3, 4]
>>> for alpha in available_alpha:
... y = weighted_walsh_funcs(alpha,xb,t)
... with np.printoptions(precision=2):
... print("alpha = %d\n%s"%(alpha,y))
alpha = 2
[[1.81 1.38 1.81]
[0.62 1.81 2.5 ]]
alpha = 3
[[1.85 1.43 1.85]
[0.62 1.85 2.39]]
alpha = 4
[[1.85 1.43 1.85]
[0.62 1.85 2.38]]
>>> import torch
>>> for alpha in available_alpha:
... y = weighted_walsh_funcs(alpha,torch.from_numpy(xb),t)
... with torch._tensor_str.printoptions(precision=2):
... print("alpha = %d\n%s"%(alpha,y))
alpha = 2
tensor([[1.81, 1.38, 1.81],
[0.62, 1.81, 2.50]])
alpha = 3
tensor([[1.85, 1.43, 1.85],
[0.62, 1.85, 2.39]])
alpha = 4
tensor([[1.85, 1.43, 1.85],
[0.62, 1.85, 2.38]])
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`alpha` | `int` | Weighted Walsh functions order. | required |
`xb` | `Union[ndarray, Tensor]` | Integer points at which to evaluate the weighted Walsh function. | required |
`t` | `int` | Number of bits in each integer in `xb`. | required |
Returns:

Name | Type | Description |
---|---|---|
`y` | `Union[ndarray, Tensor]` | Weighted Walsh function values. |
References:

1. Dick, Josef. "Walsh spaces containing smooth functions and quasi–Monte Carlo rules of arbitrary high order." SIAM Journal on Numerical Analysis 46.3 (2008): 1519-1553.
2. Dick, Josef. "The decay of the Walsh coefficients of smooth functions." Bulletin of the Australian Mathematical Society 80.3 (2009): 430-453.
Source code in qmcpy/util/dig_shift_invar_ops.py
util.k4sumterm
where \(x_a\) is the bit at index \(a\) in the binary expansion of \(x\); e.g. \(x = 6\) with \(t=3\) has \((x_0,x_1,x_2) = (1,1,0)\).
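The bit-indexing convention here counts from the most significant of the \(t\) bits. A stdlib sketch of extracting the bits in that order (`bits_msb_first` is a hypothetical helper, not a package function):

```python
def bits_msb_first(x, t):
    # bit x_a at index a, counting from the most significant of t bits
    return [(x >> (t - 1 - a)) & 1 for a in range(t)]

# x = 6 = 110_2 with t = 3 gives (x_0, x_1, x_2) = (1, 1, 0)
bits = bits_msb_first(6, 3)
```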
Examples:
>>> t = 3
>>> rng = np.random.Generator(np.random.SFC64(11))
>>> x = rng.integers(low=0,high=2**t,size=(5,4))
>>> with np.printoptions(precision=2):
... k4sumterm(x,t)
array([[ 1.11, 0.89, 1.11, -0.89],
[ 1.11, 1.14, 1.11, -0.86],
[-0.89, 0.86, -1.11, 0.89],
[-1.11, 0.89, -0.89, 0.89],
[-1.14, -0.89, -0.89, -0.86]])
>>> import torch
>>> with torch._tensor_str.printoptions(precision=2):
... k4sumterm(torch.from_numpy(x),t)
tensor([[ 1.11, 0.89, 1.11, -0.89],
[ 1.11, 1.14, 1.11, -0.86],
[-0.89, 0.86, -1.11, 0.89],
[-1.11, 0.89, -0.89, 0.89],
[-1.14, -0.89, -0.89, -0.86]])
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`x` | `Union[ndarray, Tensor]` | Integer arrays. | required |
`t` | `int` | Number of bits in each integer. | required |
Returns:

Name | Type | Description |
---|---|---|
`y` | `Union[ndarray, Tensor]` | The \(K_4\) sum term. |
Source code in qmcpy/util/dig_shift_invar_ops.py
util.to_float
Convert binary representations of digital net samples in base \(b=2\) to floating point representations.
Examples:
>>> xb = np.arange(8,dtype=np.uint64)
>>> xb
array([0, 1, 2, 3, 4, 5, 6, 7], dtype=uint64)
>>> to_float(xb,3)
array([0. , 0.125, 0.25 , 0.375, 0.5 , 0.625, 0.75 , 0.875])
>>> xbtorch = bin_from_numpy_to_torch(xb)
>>> xbtorch
tensor([0, 1, 2, 3, 4, 5, 6, 7])
>>> to_float(xbtorch,3)
tensor([0.0000, 0.1250, 0.2500, 0.3750, 0.5000, 0.6250, 0.7500, 0.8750])
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`x` | `Union[ndarray, Tensor]` | Binary representation of samples with … | required |
`t` | `int` | Number of bits in binary representations. Typically … | required |
Returns:

Name | Type | Description |
---|---|---|
`xf` | `Union[ndarray, Tensor]` | Floating point representation of samples. |
Source code in qmcpy/util/dig_shift_invar_ops.py
util.to_bin
Convert floating point representations of digital net samples in base \(b=2\) to binary representations.
Examples:
>>> xf = np.random.Generator(np.random.PCG64(7)).uniform(low=0,high=1,size=(5))
>>> xf
array([0.62509547, 0.8972138 , 0.77568569, 0.22520719, 0.30016628])
>>> xb = to_bin(xf,2)
>>> xb
array([2, 3, 3, 0, 1], dtype=uint64)
>>> to_bin(xb,2)
array([2, 3, 3, 0, 1], dtype=uint64)
>>> import torch
>>> xftorch = torch.from_numpy(xf)
>>> xftorch
tensor([0.6251, 0.8972, 0.7757, 0.2252, 0.3002], dtype=torch.float64)
>>> xbtorch = to_bin(xftorch,2)
>>> xbtorch
tensor([2, 3, 3, 0, 1])
>>> to_bin(xbtorch,2)
tensor([2, 3, 3, 0, 1])
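The conversion keeps the first \(t\) binary digits of each sample, i.e. \(x_b = \lfloor x_f \cdot 2^t \rfloor\), and (as the example shows) passes already-integer input through unchanged. A stdlib sketch of that behavior (`to_bin_sketch` is a hypothetical helper, not the package function):

```python
import math

def to_bin_sketch(x, t):
    # integer (already-binary) input passes through unchanged,
    # mirroring the idempotent behavior shown above
    if all(isinstance(v, int) for v in x):
        return list(x)
    # keep the first t binary digits: floor(x * 2^t)
    return [math.floor(v * 2**t) for v in x]

xb = to_bin_sketch([0.62509547, 0.8972138, 0.77568569, 0.22520719, 0.30016628], 2)
```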
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`x` | `Union[ndarray, Tensor]` | Floating point representation of samples. | required |
`t` | `int` | Number of bits in binary representations. Typically … | required |
Returns:

Name | Type | Description |
---|---|---|
`xb` | `Union[ndarray, Tensor]` | Binary representation of samples with … |
Source code in qmcpy/util/dig_shift_invar_ops.py
util.bin_from_numpy_to_torch
Convert `numpy.uint64` to `torch.int64`, useful for converting binary samples from `DigitalNetB2` to torch representations.
Examples:
>>> xb = np.arange(8,dtype=np.uint64)
>>> xb
array([0, 1, 2, 3, 4, 5, 6, 7], dtype=uint64)
>>> bin_from_numpy_to_torch(xb)
tensor([0, 1, 2, 3, 4, 5, 6, 7])
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`xb` | `ndarray` | Binary representation of samples with … | required |
Returns:

Name | Type | Description |
---|---|---|
`xbtorch` | `Tensor` | Binary representation of samples with … |