• torch (Part 3): Random sampling


    Reference: torch (Part 3) Random sampling - Cloud+ Community - Tencent Cloud

    Contents

    torch.seed()[source]

    torch.manual_seed(seed)[source]

    torch.initial_seed()[source]

    torch.get_rng_state()[source]

    torch.set_rng_state(new_state)[source]

    torch.default_generator Returns the default CPU torch.Generator

    torch.bernoulli(input, *, generator=None, out=None) → Tensor

    torch.multinomial(input, num_samples, replacement=False, out=None) → LongTensor

    torch.normal()

    torch.normal(mean, std, out=None) → Tensor

    torch.normal(mean=0.0, std, out=None) → Tensor

    torch.normal(mean, std=1.0, out=None) → Tensor

    torch.normal(mean, std, size, *, out=None) → Tensor

    torch.rand(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

    torch.rand_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

    torch.randint(low=0, high, size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

    torch.randint_like(input, low=0, high, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

    torch.randn(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

    torch.randn_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

    torch.randperm(n, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False) → LongTensor

    In-place random sampling

    Quasi-random sampling

    class torch.quasirandom.SobolEngine(dimension, scramble=False, seed=None)[source]

    draw(n=1, out=None, dtype=torch.float32)[source]

    fast_forward(n)[source]

    reset()[source]


    torch.seed()[source]

    Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64 bit number used to seed the RNG.

    torch.manual_seed(seed)[source]

    Sets the seed for generating random numbers. Returns a torch.Generator object.

    Parameters

    seed (int) – The desired seed.

    torch.initial_seed()[source]

    Returns the initial seed for generating random numbers as a Python long.

    torch.get_rng_state()[source]

    Returns the random number generator state as a torch.ByteTensor.

    torch.set_rng_state(new_state)[source]

    Sets the random number generator state.

    Parameters

    new_state (torch.ByteTensor) – The desired state

    torch.default_generator

    Returns the default CPU torch.Generator.
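
    The seeding and RNG-state functions above are typically used together for reproducibility. A minimal sketch, not part of the original page (the seed value 42 is arbitrary):

    >>> g = torch.manual_seed(42)        # fix the seed; returns a torch.Generator
    >>> a = torch.rand(3)
    >>> state = torch.get_rng_state()    # snapshot the current RNG state (a torch.ByteTensor)
    >>> b = torch.rand(3)
    >>> torch.set_rng_state(state)       # restore the snapshot
    >>> c = torch.rand(3)                # same state, so c repeats b
    >>> torch.equal(b, c)
    True
    >>> torch.initial_seed()
    42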

    torch.bernoulli(input, *, generator=None, out=None) → Tensor

    Draws binary random numbers (0 or 1) from a Bernoulli distribution.

    The input tensor should be a tensor containing probabilities to be used for drawing the binary random number. Hence, all values in input have to be in the range $0 \leq \text{input}_i \leq 1$.

    The $i^{\text{th}}$ element of the output tensor will draw a value $1$ according to the $i^{\text{th}}$ probability value given in input.

    $\text{out}_{i} \sim \mathrm{Bernoulli}(p = \text{input}_{i})$

    The returned out tensor only has values 0 or 1 and is of the same shape as input.

    out can have integral dtype, but input must have floating point dtype.

    Parameters

    • input (Tensor) – the input tensor of probability values for the Bernoulli distribution

    • generator (torch.Generator, optional) – a pseudorandom number generator for sampling

    • out (Tensor, optional) – the output tensor.

    Example:

    >>> a = torch.empty(3, 3).uniform_(0, 1)  # generate a uniform random matrix with range [0, 1]
    >>> a
    tensor([[ 0.1737,  0.0950,  0.3609],
            [ 0.7148,  0.0289,  0.2676],
            [ 0.9456,  0.8937,  0.7202]])
    >>> torch.bernoulli(a)
    tensor([[ 1.,  0.,  0.],
            [ 0.,  0.,  0.],
            [ 1.,  1.,  1.]])
    >>> a = torch.ones(3, 3)  # probability of drawing "1" is 1
    >>> torch.bernoulli(a)
    tensor([[ 1.,  1.,  1.],
            [ 1.,  1.,  1.],
            [ 1.,  1.,  1.]])
    >>> a = torch.zeros(3, 3)  # probability of drawing "1" is 0
    >>> torch.bernoulli(a)
    tensor([[ 0.,  0.,  0.],
            [ 0.,  0.,  0.],
            [ 0.,  0.,  0.]])
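
    Not part of the original example, but a quick sketch of how the generator keyword argument can make the draws reproducible (the probabilities and the seed below are arbitrary):

    >>> p = torch.full((2, 2), 0.5)           # probabilities must be floating point
    >>> g = torch.Generator().manual_seed(0)  # a dedicated RNG, independent of the global one
    >>> x = torch.bernoulli(p, generator=g)
    >>> _ = g.manual_seed(0)                  # reset the dedicated RNG
    >>> y = torch.bernoulli(p, generator=g)
    >>> torch.equal(x, y)                     # same seed, same draws
    True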

    torch.multinomial(input, num_samples, replacement=False, out=None) → LongTensor

    Returns a tensor where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of tensor input.

    Note

    The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

    Indices are ordered from left to right according to when each was sampled (first samples are placed in first column).

    If input is a vector, out is a vector of size num_samples.

    If input is a matrix with m rows, out is a matrix of shape $(m \times \text{num\_samples})$.

    If replacement is True, samples are drawn with replacement.

    If not, they are drawn without replacement, which means that when a sample index is drawn for a row, it cannot be drawn again for that row.

    Note

    When drawn without replacement, num_samples must be lower than the number of non-zero elements in input (or the minimum number of non-zero elements in each row of input if it is a matrix).

    Parameters

    • input (Tensor) – the input tensor containing probabilities

    • num_samples (int) – number of samples to draw

    • replacement (bool, optional) – whether to draw with replacement or not

    • out (Tensor, optional) – the output tensor.

    Example:

    >>> weights = torch.tensor([0, 10, 3, 0], dtype=torch.float)  # create a tensor of weights
    >>> torch.multinomial(weights, 2)
    tensor([1, 2])
    >>> torch.multinomial(weights, 4)  # ERROR!
    RuntimeError: invalid argument 2: invalid multinomial distribution (with replacement=False,
    not enough non-negative category to sample) at ../aten/src/TH/generic/THTensorRandom.cpp:320
    >>> torch.multinomial(weights, 4, replacement=True)
    tensor([ 2,  1,  1,  1])
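
    The description above also covers matrix input; a small sketch (the weights are arbitrary and not from the original page) showing that each row is sampled independently:

    >>> w = torch.tensor([[1., 1., 1., 1.],
    ...                   [0., 1., 1., 1.]])
    >>> idx = torch.multinomial(w, 2)  # 2 indices per row, drawn without replacement
    >>> idx.shape                      # one row of indices per row of w
    torch.Size([2, 2])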

    torch.normal()

    torch.normal(mean, std, out=None) → Tensor

    Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given.

    The mean is a tensor with the mean of each output element’s normal distribution

    The std is a tensor with the standard deviation of each output element’s normal distribution

    The shapes of mean and std don’t need to match, but the total number of elements in each tensor needs to be the same.

    Note

    When the shapes do not match, the shape of mean is used as the shape for the returned output tensor

    Parameters

    • mean (Tensor) – the tensor of per-element means

    • std (Tensor) – the tensor of per-element standard deviations

    • out (Tensor, optional) – the output tensor.

    Example:

    >>> torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0, -0.1))
    tensor([  1.0425,   3.5672,   2.7969,   4.2925,   4.7229,   6.2134,
              8.0505,   8.1408,   9.0563,  10.0566])

    torch.normal(mean=0.0, std, out=None) → Tensor

    Similar to the function above, but the means are shared among all drawn elements.

    Parameters

    • mean (float, optional) – the mean for all distributions

    • std (Tensor) – the tensor of per-element standard deviations

    • out (Tensor, optional) – the output tensor.

    Example:

    >>> torch.normal(mean=0.5, std=torch.arange(1., 6.))
    tensor([-1.2793, -1.0732, -2.0687,  5.1177, -1.2303])

    torch.normal(mean, std=1.0, out=None) → Tensor

    Similar to the function above, but the standard-deviations are shared among all drawn elements.

    Parameters

    • mean (Tensor) – the tensor of per-element means

    • std (float, optional) – the standard deviation for all distributions

    • out (Tensor, optional) – the output tensor

    Example:

    >>> torch.normal(mean=torch.arange(1., 6.))
    tensor([ 1.1552,  2.6148,  2.6535,  5.8318,  4.2361])

    torch.normal(mean, std, size, *, out=None) → Tensor

    Similar to the function above, but the means and standard deviations are shared among all drawn elements. The resulting tensor has size given by size.

    Parameters

    • mean (float) – the mean for all distributions

    • std (float) – the standard deviation for all distributions

    • size (int...) – a sequence of integers defining the shape of the output tensor.

    • out (Tensor, optional) – the output tensor.

    Example:

    >>> torch.normal(2, 3, size=(1, 4))
    tensor([[-1.3987, -1.9544,  3.6048,  0.7909]])

    torch.rand(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

    Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$.

    The shape of the tensor is defined by the variable argument size.

    Parameters

    • size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

    • out (Tensor, optional) – the output tensor.

    • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

    • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

    • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

    Example:

    >>> torch.rand(4)
    tensor([ 0.5204,  0.2503,  0.3525,  0.5673])
    >>> torch.rand(2, 3)
    tensor([[ 0.8237,  0.5781,  0.6879],
            [ 0.3816,  0.7249,  0.0998]])

    torch.rand_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

    Returns a tensor with the same size as input that is filled with random numbers from a uniform distribution on the interval $[0, 1)$. torch.rand_like(input) is equivalent to torch.rand(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

    Parameters

    • input (Tensor) – the size of input will determine size of the output tensor.

    • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

    • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

    • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

    • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
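
    The original page gives no example for rand_like; a minimal sketch (the input tensor is arbitrary):

    >>> x = torch.zeros(2, 3, dtype=torch.float64)
    >>> y = torch.rand_like(x)   # uniform samples in [0, 1)
    >>> y.shape, y.dtype         # size and dtype follow the input
    (torch.Size([2, 3]), torch.float64)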

    torch.randint(low=0, high, size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

    Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).

    The shape of the tensor is defined by the variable argument size.

    Parameters

    • low (int, optional) – Lowest integer to be drawn from the distribution. Default: 0.

    • high (int) – One above the highest integer to be drawn from the distribution.

    • size (tuple) – a tuple defining the shape of the output tensor.

    • out (Tensor, optional) – the output tensor.

    • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

    • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

    • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

    Example:

    >>> torch.randint(3, 5, (3,))
    tensor([4, 3, 4])
    >>> torch.randint(10, (2, 2))
    tensor([[0, 2],
            [5, 5]])
    >>> torch.randint(3, 10, (2, 2))
    tensor([[4, 5],
            [6, 7]])

    torch.randint_like(input, low=0, high, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

    Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive).

    Parameters

    • input (Tensor) – the size of input will determine size of the output tensor.

    • low (int, optional) – Lowest integer to be drawn from the distribution. Default: 0.

    • high (int) – One above the highest integer to be drawn from the distribution.

    • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

    • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

    • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

    • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
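
    The original page gives no example for randint_like; a minimal sketch (the bounds are arbitrary):

    >>> x = torch.zeros(2, 3, dtype=torch.int64)
    >>> y = torch.randint_like(x, 0, 10)  # low=0 (inclusive), high=10 (exclusive)
    >>> y.shape, y.dtype                  # shape and dtype follow the input
    (torch.Size([2, 3]), torch.int64)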

    torch.randn(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

    Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).

    $\text{out}_{i} \sim \mathcal{N}(0, 1)$

    The shape of the tensor is defined by the variable argument size.

    Parameters

    • size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

    • out (Tensor, optional) – the output tensor.

    • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

    • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

    • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

    Example:

    >>> torch.randn(4)
    tensor([-2.1436,  0.9966,  2.3426, -0.6366])
    >>> torch.randn(2, 3)
    tensor([[ 1.5954,  2.8929, -1.0923],
            [ 1.1719, -0.4709, -0.1996]])

    torch.randn_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

    Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1. torch.randn_like(input) is equivalent to torch.randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

    Parameters

    • input (Tensor) – the size of input will determine size of the output tensor.

    • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

    • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

    • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

    • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
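
    The original page gives no example for randn_like; a minimal sketch (the input tensor is arbitrary):

    >>> x = torch.empty(2, 2, dtype=torch.float64)
    >>> y = torch.randn_like(x)  # standard-normal samples, same size/dtype/device as x
    >>> y.shape, y.dtype
    (torch.Size([2, 2]), torch.float64)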

    torch.randperm(n, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False) → LongTensor

    Returns a random permutation of integers from 0 to n - 1.

    Parameters

    • n (int) – the upper bound (exclusive)

    • out (Tensor, optional) – the output tensor.

    • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: torch.int64.

    • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

    • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

    Example:

    >>> torch.randperm(4)
    tensor([2, 1, 0, 3])
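
    A common use of randperm is shuffling along a dimension; a small sketch (not from the original page, the tensor is arbitrary):

    >>> x = torch.arange(12).reshape(4, 3)
    >>> perm = torch.randperm(x.size(0))
    >>> shuffled = x[perm]       # rows of x in a random order
    >>> shuffled.shape
    torch.Size([4, 3])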

    In-place random sampling

    There are a few more in-place random sampling functions defined on Tensors as well; refer to their documentation: torch.Tensor.bernoulli_(), torch.Tensor.cauchy_(), torch.Tensor.exponential_(), torch.Tensor.geometric_(), torch.Tensor.log_normal_(), torch.Tensor.normal_(), torch.Tensor.random_(), and torch.Tensor.uniform_().
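
    A minimal sketch (not from the original page) showing a few of these in-place methods; the shapes and distribution parameters are arbitrary:

    >>> t = torch.empty(3, 3)
    >>> _ = t.uniform_(0, 1)              # fill in place with U(0, 1) samples
    >>> _ = t.normal_(mean=0.0, std=2.0)  # overwrite in place with N(0, 4) samples
    >>> _ = t.random_(0, 10)              # overwrite with integers in [0, 10), stored as float
    >>> t.shape
    torch.Size([3, 3])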

    Quasi-random sampling

    class torch.quasirandom.SobolEngine(dimension, scramble=False, seed=None)[source]

    The torch.quasirandom.SobolEngine is an engine for generating (scrambled) Sobol sequences. Sobol sequences are an example of low discrepancy quasi-random sequences.

    This implementation of an engine for Sobol sequences is capable of sampling sequences up to a maximum dimension of 1111. It uses direction numbers to generate these sequences, and these numbers have been adapted from a published table of direction numbers.

    References

    • Art B. Owen. Scrambling Sobol and Niederreiter-Xing points. Journal of Complexity, 14(4):466-489, December 1998.

    • I. M. Sobol. The distribution of points in a cube and the accurate evaluation of integrals. Zh. Vychisl. Mat. i Mat. Phys., 7:784-802, 1967.

    Parameters

    • dimension (Int) – The dimensionality of the sequence to be drawn

    • scramble (bool, optional) – Setting this to True will produce scrambled Sobol sequences. Scrambling is capable of producing better Sobol sequences. Default: False.

    • seed (Int, optional) – This is the seed for the scrambling. The seed of the random number generator is set to this, if specified. Otherwise, it uses a random seed. Default: None

    Examples:

    >>> soboleng = torch.quasirandom.SobolEngine(dimension=5)
    >>> soboleng.draw(3)
    tensor([[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
            [0.7500, 0.2500, 0.7500, 0.2500, 0.7500],
            [0.2500, 0.7500, 0.2500, 0.7500, 0.2500]])

    draw(n=1, out=None, dtype=torch.float32)[source]

    Function to draw a sequence of n points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is $(n, \text{dimension})$.

    Parameters

    • n (Int, optional) – The length of sequence of points to draw. Default: 1

    • out (Tensor, optional) – The output tensor

    • dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: torch.float32

    fast_forward(n)[source]

    Function to fast-forward the state of the SobolEngine by n steps. This is equivalent to drawing n samples without using the samples.

    Parameters

    n (Int) – The number of steps to fast-forward by.

    reset()[source]

    Function to reset the SobolEngine to base state.
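
    A small sketch (not from the original page) of how draw(), fast_forward(), and reset() relate; the dimension is arbitrary:

    >>> eng = torch.quasirandom.SobolEngine(dimension=2)
    >>> first_three = eng.draw(3)  # the first three points of the sequence
    >>> _ = eng.reset()            # back to the base state
    >>> _ = eng.fast_forward(2)    # skip the first two points without materializing them
    >>> third = eng.draw(1)        # per the docs above, this matches first_three[2:3]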

  • Original article: https://blog.csdn.net/weixin_36670529/article/details/101198257