MindSpore Explained


🎁 MindSpore API

API entry point: MindSpore API

Implementing classic models with MindSpore

mindspore

class mindspore.dtype

Create a data type object of MindSpore.

The actual path of dtype is /mindspore/common/dtype.py. Run the following command to import the package:

from mindspore import dtype as mstype

Numeric Type

Currently, MindSpore supports Int type, Uint type and Float type. The following table lists the details.

| Definition | Description |
| --- | --- |
| mindspore.int8 , mindspore.byte | 8-bit integer |
| mindspore.int16 , mindspore.short | 16-bit integer |
| mindspore.int32 , mindspore.intc | 32-bit integer |
| mindspore.int64 , mindspore.intp | 64-bit integer |
| mindspore.uint8 , mindspore.ubyte | unsigned 8-bit integer |
| mindspore.uint16 , mindspore.ushort | unsigned 16-bit integer |
| mindspore.uint32 , mindspore.uintc | unsigned 32-bit integer |
| mindspore.uint64 , mindspore.uintp | unsigned 64-bit integer |
| mindspore.float16 , mindspore.half | 16-bit floating-point number |
| mindspore.float32 , mindspore.single | 32-bit floating-point number |
| mindspore.float64 , mindspore.double | 64-bit floating-point number |
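
For example, these dtype objects can be passed anywhere a data type is expected. A minimal sketch (the printed form of the dtype is shown as expected on a typical install):

from mindspore import Tensor
from mindspore import dtype as mstype

t = Tensor([1, 2, 3], mstype.int32)  # build a 32-bit integer tensor
print(t.dtype)  # expected: Int32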

Other Type

For other defined types, see the following table.

| Type | Description |
| --- | --- |
| tensor | MindSpore’s tensor type. Data format uses NCHW. For details, see tensor. |
| bool_ | Boolean True or False. |
| int_ | Integer scalar. |
| uint | Unsigned integer scalar. |
| float_ | Floating-point scalar. |
| number | Number, including int_ , uint , float_ and bool_ . |
| list_ | List constructed by tensor , such as List[T0,T1,...,Tn] , where the element Ti can be of different types. |
| tuple_ | Tuple constructed by tensor , such as Tuple[T0,T1,...,Tn] , where the element Ti can be of different types. |
| function | Function. Returned in two ways: when function is not None, Func is returned directly; when function is None, Func(args: List[T0,T1,…,Tn], retval: T) is returned. |
| type_type | Type definition of type. |
| type_none | No matching return type, corresponding to type(None) in Python. |
| symbolic_key | The value of a variable, used as a key of the variable in env_type . |
| env_type | Used to store the gradient of the free variable of a function, where the key is the symbolic_key of the free variable’s node and the value is the gradient. |
  • Tree Topology

    The relationships of the above types are as follows:

    └─── mindspore.dtype
        ├─── number
        │   ├─── bool_
        │   ├─── int_
        │   │   ├─── int8, byte
        │   │   ├─── int16, short
        │   │   ├─── int32, intc
        │   │   └─── int64, intp
        │   ├─── uint
        │   │   ├─── uint8, ubyte
        │   │   ├─── uint16, ushort
        │   │   ├─── uint32, uintc
        │   │   └─── uint64, uintp
        │   └─── float_
        │       ├─── float16
        │       ├─── float32
        │       └─── float64
        ├─── tensor
        │   ├─── Array[Float32]
        │   └─── ...
        ├─── list_
        │   ├─── List[Int32,Float32]
        │   └─── ...
        ├─── tuple_
        │   ├─── Tuple[Int32,Float32]
        │   └─── ...
        ├─── function
        │   ├─── Func
        │   ├─── Func[(Int32, Float32), Int32]
        │   └─── ...
        ├─── type_type
        ├─── type_none
        ├─── symbolic_key
        └─── env_type

mindspore.run_check()[source]

Provide a convenient API to check whether the installation was successful.

Examples

>>> import mindspore
>>> mindspore.run_check()
MindSpore version: xxx
The result of multiplication calculation is correct, MindSpore has been installed successfully!

mindspore.dtype_to_nptype(type_)[source]

Convert MindSpore dtype to numpy data type.

  • Parameters

    type_ (mindspore.dtype) – MindSpore’s dtype.

  • Returns

    The corresponding numpy data type.
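
Examples

A minimal sketch (added here for illustration; the printed form is what a typical install is expected to produce):

>>> from mindspore import dtype_to_nptype
>>> from mindspore import dtype as mstype
>>> print(dtype_to_nptype(mstype.float32))
<class 'numpy.float32'>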

mindspore.issubclass_(type_, dtype)[source]

Determine whether type_ is a subclass of dtype.

  • Parameters

    type_ (mindspore.dtype) – Target MindSpore dtype.

    dtype (mindspore.dtype) – The MindSpore dtype to compare against.

  • Returns

    bool, True or False.
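
Examples

A minimal sketch (added for illustration); per the tree topology above, int32 is a subclass of int_:

>>> import mindspore
>>> from mindspore import dtype as mstype
>>> print(mindspore.issubclass_(mstype.int32, mstype.int_))
True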

mindspore.dtype_to_pytype(type_)[source]

Convert MindSpore dtype to python data type.

  • Parameters

    type_ (mindspore.dtype) – MindSpore’s dtype.

  • Returns

    The corresponding Python type.
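
Examples

A minimal sketch (added for illustration):

>>> import mindspore
>>> from mindspore import dtype as mstype
>>> print(mindspore.dtype_to_pytype(mstype.bool_))
<class 'bool'>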

mindspore.pytype_to_dtype(obj)[source]

Convert python type to MindSpore type.

  • Parameters

    obj (type) – A python type object.

  • Returns

    The corresponding MindSpore type.
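
Examples

A minimal sketch (added for illustration; mapping Python int to a 64-bit MindSpore integer is an assumption about the default build):

>>> import mindspore
>>> ms_type = mindspore.pytype_to_dtype(int)
>>> print(ms_type)
Int64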

mindspore.get_py_obj_dtype(obj)[source]

Get the MindSpore data type, which corresponds to python type or variable.

  • Parameters

    obj (type) – An object of python type, or a variable of python type.

  • Returns

    The corresponding MindSpore type.
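
Examples

A minimal sketch (added for illustration); per the description above, both a Python type and a variable of that type are accepted:

>>> import mindspore
>>> t1 = mindspore.get_py_obj_dtype(bool)  # from a type
>>> t2 = mindspore.get_py_obj_dtype(True)  # from a variable of that type
>>> print(t1 == t2)
True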

class mindspore.Tensor(input_data=None, dtype=None, shape=None, init=None)[source]

Tensor is used for data storage.

Tensor inherits the tensor object in C++. Some functions are implemented in C++, and some are implemented in Python.

  • Parameters

    input_data (Union[Tensor, float, int, bool, tuple, list, numpy.ndarray]) – Input data of the tensor.

    dtype (mindspore.dtype) – Input data should be None, bool or numeric type defined in mindspore.dtype. The argument is used to define the data type of the output tensor. If it is None, the data type of the output tensor will be the same as the input_data. Default: None.

    shape (Union[tuple, list, int]) – A list of integers, a tuple of integers or an integer as the shape of output. If input_data is available, shape doesn’t need to be set. Default: None.

    init (Initializer) – the information of init data. ‘init’ is used for delayed initialization in parallel mode. Usually, it is not recommended to use ‘init’ interface to initialize parameters in other conditions. If ‘init’ interface is used to initialize parameters, the Tensor.init_data API needs to be called to convert Tensor to the actual data.

  • Outputs:

    Tensor. If dtype and shape are not set, return a tensor with the same dtype and shape as input_data. If dtype or shape is set, the dtype or shape of the output Tensor is consistent with the setting.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.common.initializer import One
>>> # initialize a tensor with input data
>>> t1 = Tensor(np.zeros([1, 2, 3]), ms.float32)
>>> assert isinstance(t1, Tensor)
>>> assert t1.shape == (1, 2, 3)
>>> assert t1.dtype == ms.float32
>>>
>>> # initialize a tensor with a float scalar
>>> t2 = Tensor(0.1)
>>> assert isinstance(t2, Tensor)
>>> assert t2.dtype == ms.float64
...
>>> # initialize a tensor with init
>>> t3 = Tensor(shape = (1, 3), dtype=ms.float32, init=One())
>>> assert isinstance(t3, Tensor)
>>> assert t3.shape == (1, 3)
>>> assert t3.dtype == ms.float32

property T

Return the transposed tensor.

abs()[source]

Return the absolute value element-wise.

  • Returns

    Tensor, with the absolute value of each element.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> a = Tensor([1.1, -2.1]).astype("float32")
>>> output = a.abs()
>>> print(output)

all(axis=(), keep_dims=False)[source]

Check whether all array elements along a given axis evaluate to True.

  • Parameters

    axis (Union[None, int, tuple(int)]) – Dimensions of reduction; when the axis is None or an empty tuple, reduce all dimensions. Default: ().

    keep_dims (bool) – Whether to keep the reduced dimensions. Default: False.

  • Returns

    Tensor, if all array elements along the given axis evaluate to True, its value is True, otherwise its value is False. If the axis is None or empty tuple, reduce all dimensions.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> a = Tensor([True, True, False])
>>> output = a.all()
>>> print(output)

any(axis=(), keep_dims=False)[source]

Check whether any array element along a given axis evaluates to True.

  • Parameters

    axis (Union[None, int, tuple(int)]) – Dimensions of reduction; when the axis is None or an empty tuple, reduce all dimensions. Default: ().

    keep_dims (bool) – Whether to keep the reduced dimensions. Default: False.

  • Returns

    Tensor, if any array element along the given axis evaluates to True, its value is True, otherwise its value is False. If the axis is None or empty tuple, reduce all dimensions.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> a = Tensor([True, True, False])
>>> output = a.any()
>>> print(output)

argmax(axis=None)[source]

Return the indices of the maximum values along an axis.

  • Parameters

    axis (int, optional) – By default, the index is into the flattened tensor, otherwise along the specified axis.

  • Returns

    Tensor, indices into the input tensor. It has the same shape as self.shape with the dimension along axis removed.

  • Raises

    ValueError – if the axis is out of range.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.arange(10, 16).reshape(2, 3).astype("float32"))
>>> print(a.argmax())

argmin(axis=None)[source]

Return the indices of the minimum values along an axis.

  • Parameters

    axis (int, optional) – By default, the index is into the flattened tensor, otherwise along the specified axis.

  • Returns

    Tensor, indices into the input tensor. It has the same shape as self.shape with the dimension along axis removed.

  • Raises

    ValueError – if the axis is out of range.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.arange(10, 16).reshape(2, 3).astype("float32"))
>>> print(a.argmin())

asnumpy()[source]

Convert the tensor to a numpy array.
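
Examples

A minimal sketch (added for illustration):

>>> import numpy as np
>>> from mindspore import Tensor
>>> t = Tensor(np.ones((2, 2), dtype=np.float32))
>>> a = t.asnumpy()  # convert back to a numpy array
>>> print(type(a))
<class 'numpy.ndarray'>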

astype(dtype, copy=True)[source]

Return a copy of the tensor, cast to a specified type.

  • Parameters

    dtype (Union[mindspore.dtype, str]) – Designated tensor dtype, which can be given as mindspore.dtype.float32 or as the string float32.

    copy (bool, optional) – By default, astype always returns a newly allocated tensor. If this is set to false, the input tensor is returned instead of a copy if possible. Default: True.

  • Returns

    Tensor, with the designated dtype.

  • Raises

    TypeError – If dtype has types not specified above, or values cannot be understood.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((1,2,2,1), dtype=np.float32))
>>> x = x.astype("int32")
>>> print(x.dtype)

choose(choices, mode="clip")[source]

Construct an array from an index array and a list of arrays to choose from.

  • Parameters

    choices (Union[tuple, list, Tensor]) – Choice arrays. The input tensor and all of the choices must be broadcastable to the same shape. If choices is itself an array, then its outermost dimension (i.e., the one corresponding to choices.shape[0]) is taken as defining the “sequence”.

    mode (‘raise’, ‘wrap’, ‘clip’, optional) – Specifies how indices outside [0, n-1] will be treated: ‘raise’ raises an error; ‘wrap’ wraps around; ‘clip’ clips to the range. ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers. Default: ‘clip’.

  • Returns

    Tensor, the merged result.

  • Supported Platforms:

    Ascend GPU CPU

  • Raises

    ValueError – if the input tensor and any of the choices cannot be broadcast.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23], [30, 31, 32, 33]]
>>> x = Tensor(np.array([2, 3, 1, 0]))
>>> print(x.choose(choices))
[20 31 12  3]

clip(xmin, xmax, dtype=None)[source]

Clips (limits) the values in a Tensor.

Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1.

Note

Currently, clip with xmin=nan or xmax=nan is not supported.

  • Parameters

    xmin (Tensor, scalar, None) – Minimum value. If None, clipping is not performed on the lower interval edge. Not more than one of xmin and xmax may be None.

    xmax (Tensor, scalar, None) – Maximum value. If None, clipping is not performed on the upper interval edge. Not more than one of xmin and xmax may be None. If xmin or xmax are tensors, then the three tensors will be broadcasted to match their shapes.

    dtype (mindspore.dtype, optional) – Overrides the dtype of the output Tensor. Default is None.

  • Returns

    Tensor, a tensor with the elements of input tensor, but where values < xmin are replaced with xmin, and those > xmax with xmax.

  • Raises

    TypeError – If inputs have types not specified above.

    ValueError – If the shapes of the tensor, xmin and xmax cannot broadcast, or both xmin and xmax are None.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> x = Tensor([1, 2, 3, -4, 0, 3, 2, 0]).astype("float32")
>>> output = x.clip(0, 2)
>>> print(output)
[1. 2. 2. 0. 0. 2. 2. 0.]

copy()[source]

Return a copy of the tensor.

Note

The current implementation does not support order argument.

  • Returns

    Copied tensor.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.ones((3,3)).astype("float32"))
>>> output = a.copy()
>>> print(output)

cumsum(axis=None, dtype=None)[source]

Return the cumulative sum of the elements along a given axis.

Note

If self.dtype is int8, int16 or bool, the result dtype will be elevated to int32; int64 is not supported.

  • Parameters

    axis (int, optional) – Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

    dtype (mindspore.dtype, optional) – If not specified, stays the same as the original tensor, unless it has an integer dtype with a precision less than float32. In that case, float32 is used. Default: None.

  • Raises

    ValueError – if the axis is out of range.

  • Returns

    Tensor.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.ones((3,3)).astype("float32"))
>>> output = a.cumsum(axis=0)
>>> print(output)

diagonal(offset=0, axis1=0, axis2=1)[source]

Return specified diagonals.

  • Parameters

    offset (int, optional) – Offset of the diagonal from the main diagonal. Can be positive or negative. Defaults to main diagonal.

    axis1 (int, optional) – Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0).

    axis2 (int, optional) – Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis.

  • Returns

    Tensor, if a is 2-D, then a 1-D array containing the diagonal.

  • Raises

    ValueError – if the input tensor has less than two dimensions.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.arange(4).reshape(2, 2))
>>> print(a)
[[0 1]
[2 3]]
>>> output = a.diagonal()
>>> print(output)

property dtype

Return the dtype of the tensor (mindspore.dtype).

expand_as(x)[source]

Expand the dimension of the tensor to match the dimension of the input tensor x.

  • Parameters

    x (Tensor) – The input tensor. The shape of input tensor must obey the broadcasting rule.

  • Returns

    Tensor, has the same dimension as input tensor.
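
Examples

A minimal sketch (added for illustration), broadcasting a 1-D tensor to the shape of a 2-D tensor:

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.array([1, 2, 3], dtype=np.float32))
>>> b = Tensor(np.ones((2, 3), dtype=np.float32))
>>> output = a.expand_as(b)  # (3,) broadcast to (2, 3)
>>> print(output.shape)
(2, 3)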

fill(value)[source]

Fill the array with a scalar value.

Note

Unlike Numpy, tensor.fill() always returns a new tensor, instead of filling the original tensor.

  • Parameters

    value (Union[None, int, float, bool]) – All elements of the tensor will be assigned this value.

  • Returns

    Tensor, with the original dtype and shape as input tensor.

  • Raises

    TypeError – If input arguments have types not specified above.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.arange(4).reshape((2,2)).astype('float32'))
>>> print(a.fill(1.0))

flatten(order="C")[source]

Return a copy of the tensor collapsed into one dimension.

  • Parameters

    order (str, optional) – Can choose between ‘C’ and ‘F’. ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. Only ‘C’ and ‘F’ are supported. Default: ‘C’.

  • Returns

    Tensor, has the same data type as input.

  • Supported Platforms:

    Ascend GPU CPU

  • Raises

    TypeError – If order is not string type.

    ValueError – If order is string type, but not ‘C’ or ‘F’.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2,3,4), dtype=np.float32))
>>> output = x.flatten()
>>> print(output.shape)

flush_from_cache()[source]

Flush cache data to the host if the tensor is cache enabled.

static from_numpy(array)[source]

Convert a numpy array to a Tensor without copying data.

  • Parameters

    array (numpy.array) – The input array.

  • Returns

    Tensor, has the same data type as input array.
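
Examples

A minimal sketch (added for illustration); since no data is copied, the numpy array should not be freed or resized while the Tensor is in use:

>>> import numpy as np
>>> from mindspore import Tensor
>>> arr = np.ones((2, 3), dtype=np.float32)
>>> t = Tensor.from_numpy(arr)  # shares memory with arr
>>> print(t.shape)
(2, 3)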

property has_init

Whether the tensor has been initialized.

init_data(slice_index=None, shape=None, opt_shard_group=None)[source]

Get the tensor format data of this Tensor. The init_data function can only be called once for the same tensor.

  • Parameters

    slice_index (int) – Slice index of a parameter’s slices. It is used when initializing a slice of a parameter; it guarantees that devices using the same slice can generate the same tensor. Default: None.

    shape (list[int]) – Shape of the slice. It is used when initializing a slice of the parameter. Default: None.

    opt_shard_group (str) – Optimizer shard group which is used in auto or semi auto parallel mode to get one shard of a parameter’s slice. Default: None.

  • Returns

    Initialized Tensor.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.common.initializer as init
>>> x = init.initializer(init.Constant(1), [2, 2], ms.float32)
>>> out = x.init_data()
>>> print(out)

item(index=None)[source]

Get the item at the given index of the Tensor.

Note

Tensor.item returns a Tensor scalar instead of a Python scalar.

  • Parameters

    index (Union[None, int, tuple(int)]) – The index in Tensor. Default: None.

  • Returns

    A Tensor scalar, dtype is the same with the original Tensor.

  • Raises

    ValueError – If the length of the index is not equal to self.ndim.

  • Supported Platforms:

    Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1,2,3],[4,5,6]], dtype=np.float32))
>>> x = x.item((0,1))
>>> print(x)

itemset(*args)[source]

Insert scalar into a tensor (scalar is cast to tensor’s dtype, if possible).

There must be at least one argument, and the last argument is defined as item. Then, tensor.itemset(*args) is equivalent to tensor[args] = item.

  • Parameters

    args (Union[(numbers.Number), (int/tuple(int), numbers.Number)]) – The arguments that specify the index and value. If args contains one argument (a scalar), it is only used in the case where the tensor is of size 1. If args contains two arguments, the last argument is the value to be set and must be a scalar, and the first argument specifies a single tensor element location. It is either an int or a tuple.

  • Returns

    A new Tensor, with value set by tensor[args]=item.

  • Raises

    ValueError – If the length of the first argument is not equal to self.ndim.

    IndexError – If only one argument is provided, and the original Tensor is not scalar.

  • Supported Platforms:

    Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1,2,3],[4,5,6]], dtype=np.float32))
>>> x = x.itemset((0,1), 4)
>>> print(x)

property itemsize

Return the length of one tensor element in bytes.

max(axis=None, keepdims=False, initial=None, where=True)[source]

Return the maximum of a tensor or maximum along an axis.

  • Parameters

    axis (Union[None, int, tuple of ints], optional) – Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before. Default: None.

    keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. Default: False.

    initial (scalar, optional) – The minimum value of an output element. Must be present to allow computation on empty slice. Default: None.

    where (bool Tensor, optional) – A boolean array which is broadcasted to match the dimensions of array, and selects elements to include in the reduction. If non-default value is passed, initial must also be provided. Default: True.

  • Returns

    Tensor or scalar, maximum of input tensor. If axis is None, the result is a scalar value. If axis is given, the result is an array of dimension self.ndim - 1.

  • Raises

    TypeError – if arguments have types not specified above.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.arange(4).reshape((2, 2)).astype('float32'))
>>> output = a.max()
>>> print(output)

mean(axis=(), keep_dims=False)[source]

Reduce a dimension of a tensor by averaging all elements in the dimension.

  • Parameters

    axis (Union[None, int, tuple(int), list(int)]) – Dimensions of reduction; when the axis is None or an empty tuple, reduce all dimensions. Default: ().

    keep_dims (bool) – Whether to keep the reduced dimensions. Default: False.

  • Returns

    Tensor, has the same data type as input tensor.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> input_x = Tensor(np.array([1, 2, 3], dtype=np.float32))
>>> output = input_x.mean()
>>> print(output)

min(axis=None, keepdims=False, initial=None, where=True)[source]

Return the minimum of a tensor or minimum along an axis.

  • Parameters

    axis (Union[None, int, tuple of ints], optional) – Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before. Default: None.

    keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. Default: False.

    initial (scalar, optional) – The maximum value of an output element. Must be present to allow computation on empty slice. Default: None.

    where (bool Tensor, optional) – A boolean array which is broadcasted to match the dimensions of array, and selects elements to include in the reduction. If non-default value is passed, initial must also be provided. Default: True.

  • Returns

    Tensor or scalar, minimum of input tensor. If the axis is None, the result is a scalar value. If axis is given, the result is an array of dimension self.ndim - 1.

  • Raises

    TypeError – if arguments have types not specified above.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.arange(4).reshape((2,2)).astype('float32'))
>>> output = a.min()
>>> print(output)

property nbytes

Return the total number of bytes taken by the tensor.

property ndim

Return the number of tensor dimensions.

ptp(axis=None, keepdims=False)[source]

Return the range of values (maximum - minimum) along an axis. The name of the function comes from the acronym for ‘peak to peak’.

Note

Numpy arguments dtype and out are not supported.

  • Parameters

    axis (Union[None, int, tuple(int)]) – Axis or axes along which the range is computed. The default is to compute the range of the flattened array. Default: None.

    keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Default is False.

  • Returns

    Tensor.

  • Raises

    TypeError – if self is not a tensor, or axis and keepdims have types not specified above.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> x = Tensor([[4.0, 9.0, 2.0, 10.0], [6.0, 9.0, 7.0, 12.0]]).astype("float32")
>>> print(x.ptp(axis=1))
[8. 6.]
>>> print(x.ptp(axis=0))

ravel()[source]

Return a contiguous flattened tensor.

  • Returns

    Tensor, a 1-D tensor, containing the same elements of the input.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2,3,4), dtype=np.float32))
>>> output = x.ravel()
>>> print(output.shape)

repeat(repeats, axis=None)[source]

Repeat elements of an array.

  • Parameters

    repeats (Union[int, tuple, list]) – The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis.

    axis (int, optional) – The axis along which to repeat values. By default, use the flattened input tensor, and return a flat output tensor.

  • Returns

    Tensor, has the same shape as input tensor except along the given axis.

  • Raises

    ValueError – if the axis is out of range.

    TypeError – if arguments have types not specified above.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array(3))
>>> print(x.repeat(4))
[3 3 3 3]
>>> x = Tensor(np.array([[1, 2],[3, 4]]))
>>> print(x.repeat(2))
[1 1 2 2 3 3 4 4]
>>> print(x.repeat(3, axis=1))
[[1 1 1 2 2 2]
[3 3 3 4 4 4]]
>>> print(x.repeat([1,2], axis=0))

reshape(*shape)[source]

Give a new shape to a tensor without changing its data.

  • Parameters

    shape (Union[int, tuple(int), list(int)]) – The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions.

  • Returns

    Tensor, with new specified shape.

  • Raises

    TypeError – If new_shape is not an integer, list or tuple, or the input is not a tensor.

    ValueError – If new_shape is not compatible with the original shape.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import dtype as mstype
>>> x = Tensor([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=mstype.float32)
>>> output = x.reshape((3, 2))
>>> print(output)

resize(*new_shape)[source]

Changes shape and size of array in-place.

Note

Unlike numpy.resize, which changes the size of the array in place and returns nothing, this method returns a new Tensor with the given size. The numpy argument refcheck is not supported.

  • Parameters

    new_shape (Union[int, tuple of ints]) – Shape of resized array.

  • Returns

    Tensor.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[0, 1], [2, 3]]))
>>> x = x.resize(2, 3)
>>> print(x)

searchsorted(v, side="left", sorter=None)[source]

Finds indices where elements should be inserted to maintain order.

  • Parameters

    v (Union[int, float, bool, list, tuple, Tensor]) – Values to insert into the tensor.

    side (‘left’, ‘right’, optional) – If ‘left’, the index of the first suitable location found is given. If ‘right’, return the last such index. If there is no suitable index, return either 0 or N (where N is the length of the tensor). Default: ‘left’.

    sorter (Union[int, float, bool, list, tuple, Tensor]) – 1-D optional array of integer indices that sort the tensor into ascending order. They are typically the result of argsort.

  • Returns

    Tensor, array of insertion points with the same shape as v.

  • Raises

    ValueError – if argument for side or sorter is invalid.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1, 2, 3, 4, 5]))
>>> print(x.searchsorted(3))

property shape

Returns the shape of the tensor as a tuple.

property size

Returns the total number of elements in tensor.

squeeze(axis=None)[source]

Remove single-dimensional entries from the shape of a tensor.

  • Parameters

    axis (Union[None, int, list(int), tuple(int)], optional) – Selects a subset of the entries of length one in the shape. If an axis is selected with a shape entry greater than one, an error is raised. Default is None.

  • Returns

    Tensor, with all or a subset of the dimensions of length 1 removed.

  • Raises

    TypeError – If input arguments have types not specified above.

    ValueError – If the specified axis has a shape entry greater than 1.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((1,2,2,1), dtype=np.float32))
>>> x = x.squeeze()
>>> print(x.shape)

std(axis=None, ddof=0, keepdims=False)[source]

Compute the standard deviation along the specified axis. The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean(abs(x - x.mean())**2)).

Return the standard deviation, which is computed for the flattened array by default, otherwise over the specified axis.

Note

Numpy arguments dtype, out and where are not supported.

  • Parameters

    axis (Union[None, int, tuple(int)]) – Axis or axes along which the standard deviation is computed. If None, compute the standard deviation of the flattened array. Default: None.

    ddof (int) – Means Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. Default: 0.

    keepdims (bool) – Whether to keep the reduced dimensions. Default: False.

  • Returns

    Standard deviation tensor.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> input_x = Tensor(np.array([1, 2, 3, 4], dtype=np.float32))
>>> output = input_x.std()
>>> print(output)

property strides

Return the tuple of bytes to step in each dimension when traversing a tensor.

sum(axis=None, dtype=None, keepdims=False, initial=None)[source]

Return sum of array elements over a given axis.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

  • Parameters

    axis (Union[None, int, tuple(int)]) – Axis or axes along which a sum is performed. Default: None. If None, sum all of the elements of the input array. If the axis is negative, it counts from the last to the first axis. If the axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.

    dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

    keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. Default: False.

    initial (scalar) – Starting value for the sum. Default: None.

  • Returns

    Tensor. A tensor with the same shape as input, with the specified axis removed. If input tensor is a 0-d array, or if the axis is None, a scalar is returned.

  • Raises

    TypeError – If the input is not array_like, or axis is not int or tuple of ints, or keepdims is not a bool, or initial is not a scalar.

    ValueError – If any axis is out of range or duplicate axes exist.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> input_x = Tensor(np.array([-1, 0, 1]).astype(np.float32))
>>> print(input_x.sum())
0.0
>>> input_x = Tensor(np.arange(10).reshape(2, 5).astype(np.float32))
>>> print(input_x.sum(axis=1))

swapaxes(axis1, axis2)[source]

Interchange two axes of a tensor.

  • Parameters

    axis1 (int) – First axis.

    axis2 (int) – Second axis.

  • Returns

    Transposed tensor, has the same data type as the input.

  • Raises

    TypeError – If axis1 or axis2 is not integer.

    ValueError – If axis1 or axis2 is not in the range of [-ndim, ndim-1].

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2,3,4), dtype=np.float32))
>>> output = x.swapaxes(0, 2)
>>> print(output.shape)

take(indices, axis=None, mode="clip")[source]

Takes elements from an array along an axis.

  • Parameters

    indices (Tensor) – The indices with shape (Nj…) of the values to extract.

    axis (int, optional) – The axis over which to select values. By default, the flattened input array is used. Default: None.

    mode (‘raise’, ‘wrap’, ‘clip’, optional) – ‘raise’: raises an error; ‘wrap’: wraps around; ‘clip’: clips to the range. ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers. Default: ‘clip’.

  • Returns

    Tensor, the indexed result.

  • Raises

    ValueError – if axis is out of range, or mode has values other than (‘raise’, ‘wrap’, ‘clip’)

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.array([4, 3, 5, 7, 6, 8]))
>>> indices = Tensor(np.array([0, 1, 4]))
>>> output = a.take(indices)
>>> print(output)

to_tensor(slice_index=None, shape=None, opt_shard_group=None)[source]

Call init_data() and return the tensor format data of this Tensor.

Note

The usage of to_tensor is deprecated. Please use init_data.

  • Parameters

    slice_index (int) – Slice index of a parameter’s slices. It is used when initializing a slice of a parameter; it guarantees that devices using the same slice can generate the same tensor. Default: None.

    shape (list[int]) – Shape of the slice. It is used when initializing a slice of the parameter. Default: None.

    opt_shard_group (str) – Optimizer shard group which is used in auto or semi auto parallel mode to get one shard of a parameter’s slice. Default: None.

  • Returns

    Initialized Tensor.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.common.initializer as init
>>> x = init.initializer(init.Constant(1), [2, 2], ms.float32)
>>> out = x.to_tensor()
>>> print(out)

trace(offset=0, axis1=0, axis2=1, dtype=None)[source]

Return the sum along diagonals of the array.

  • Parameters

    offset (int, optional) – Offset of the diagonal from the main diagonal. Can be positive or negative. Defaults to main diagonal.

    axis1 (int, optional) – Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0).

    axis2 (int, optional) – Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis.

    dtype (mindspore.dtype, optional) – defaults to None. Overrides the dtype of the output Tensor.

  • Returns

    Tensor, sum_along_diagonals.

  • Raises

    ValueError – if the input tensor has less than two dimensions.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.eye(3, dtype=np.float32))
>>> print(x.trace())

transpose(*axes)[source]

Return a view of the tensor with axes transposed.

  • For a 1-D tensor this has no effect, as a transposed vector is simply the same vector.
  • For a 2-D tensor, this is a standard matrix transpose.
  • For an n-D tensor, if axes are given, their order indicates how the axes are permuted.

If axes are not provided and tensor.shape = (i[0], i[1],...i[n-2], i[n-1]), then tensor.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]).

  • Parameters

    axes (Union[None, tuple(int), list(int), int], optional) – If axes is None or blank, the method will reverse the order of the axes. If axes is tuple(int) or list(int), tensor.transpose() will transpose the tensor to the new axes order. If axes is int, this form is simply intended as a convenience alternative to the tuple/list form.

  • Returns

    Tensor, has the same dimension as input tensor, with axes suitably permuted.

  • Raises

    TypeError – If input arguments have types not specified above.

    ValueError – If the number of axes is not equal to self.ndim.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((1,2,3), dtype=np.float32))
>>> x = x.transpose()
>>> print(x.shape)

var(axis=None, ddof=0, keepdims=False)[source]

Compute the variance along the specified axis.

The variance is the average of the squared deviations from the mean, i.e., var = mean(abs(x - x.mean())**2).

Return the variance, which is computed for the flattened array by default, otherwise over the specified axis.

Note

Numpy arguments dtype, out and where are not supported.

  • Parameters

    axis (Union[None, int, tuple(int)]) – Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. Default: None.

    ddof (int) – Means Delta Degrees of Freedom. Default: 0. The divisor used in calculations is N - ddof, where N represents the number of elements.

    keepdims (bool) – Whether to keep the reduced dimensions. Default: False.

  • Supported Platforms:

    Ascend GPU CPU

  • Returns

    Variance tensor.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> input_x = Tensor(np.array([1., 2., 3., 4.], np.float32))
>>> output = input_x.var()
>>> print(output)

view(*shape)[source]

Reshape the tensor according to the input shape.

  • Parameters

    shape (Union[tuple(int), int]) – Dimension of the output tensor.

  • Returns

    Tensor, has the same dimension as the input shape.
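
Examples

A minimal sketch (added for illustration):

>>> import numpy as np
>>> from mindspore import Tensor
>>> t = Tensor(np.arange(6).astype(np.float32))
>>> output = t.view(2, 3)  # same data, new (2, 3) shape
>>> print(output.shape)
(2, 3)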

property virtual_flag

Used to mark whether the tensor is virtual. If the tensor is virtual, return True.

class mindspore.RowTensor(indices, values, dense_shape)[source]

A sparse representation of a set of tensor slices at given indices.

A RowTensor is typically used to represent a subset of a larger tensor dense of shape [L0, D1, .. , DN] where L0 >> D0.

The values in indices are the indices in the first dimension of the slices that have been extracted from the larger tensor.

The dense tensor dense represented by an RowTensor slices has dense[slices.indices[i], :, :, :, …] = slices.values[i, :, :, :, …].

RowTensor can only be used in the Cell’s construct method.

It is not supported in pynative mode at the moment.

  • Parameters

    indices (Tensor) – A 1-D integer Tensor of shape [D0].

    values (Tensor) – A Tensor of any dtype of shape [D0, D1, …, Dn].

    dense_shape (tuple(int)) – An integer tuple which contains the shape of the corresponding dense tensor.

  • Returns

    RowTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import RowTensor, Tensor
>>> class Net(nn.Cell):
...     def __init__(self, dense_shape):
...         super(Net, self).__init__()
...         self.dense_shape = dense_shape
...     def construct(self, indices, values):
...         x = RowTensor(indices, values, self.dense_shape)
...         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([0])
>>> values = Tensor([[1, 2]], dtype=ms.float32)
>>> out = Net((3, 2))(indices, values)
>>> print(out[0])
[[1. 2.]]
>>> print(out[1])
[0]
>>> print(out[2])
(3, 2)

class mindspore.SparseTensor(indices, values, dense_shape)[source]

A sparse representation of a set of nonzero elements from a tensor at given indices.

SparseTensor can only be used in the Cell’s construct method.

It is not supported in pynative mode at the moment.

For a tensor dense, its SparseTensor(indices, values, dense_shape) has dense[indices[i]] = values[i].

  • Parameters

    indices (Tensor) – A 2-D integer Tensor of shape [N, ndims], where N and ndims are the number of values and number of dimensions in the SparseTensor, respectively.

    values (Tensor) – A 1-D tensor of any type and shape [N], which supplies the values for each element in indices.

    dense_shape (tuple(int)) – An integer tuple of size ndims, which specifies the dense_shape of the sparse tensor.

  • Returns

    SparseTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import SparseTensor, Tensor
>>> class Net(nn.Cell):
...     def __init__(self, dense_shape):
...         super(Net, self).__init__()
...         self.dense_shape = dense_shape
...     def construct(self, indices, values):
...         x = SparseTensor(indices, values, self.dense_shape)
...         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> out = Net((3, 4))(indices, values)
>>> print(out[0])
[1. 2.]
>>> print(out[1])
[[0 1]
 [1 2]]
>>> print(out[2])
(3, 4)

mindspore.ms_function(fn=None, obj=None, input_signature=None)[source]

Create a callable MindSpore graph from a python function.

This allows the MindSpore runtime to apply optimizations based on the graph.

  • Parameters

    fn (Function) – The Python function that will be run as a graph. Default: None.

    obj (Object) – The python object that provides the information for identifying the compiled function. Default: None.

    input_signature (Tensor) – The Tensor which describes the input arguments. The shape and dtype of the Tensor will be supplied to this function. If input_signature is specified, each input to fn must be a Tensor, and the input parameters of fn cannot accept **kwargs. The shape and dtype of the actual inputs must be the same as those of input_signature; otherwise, TypeError will be raised. Default: None.

  • Returns

    Function. If fn is not None, returns a callable function that will execute the compiled function; if fn is None, returns a decorator, and when this decorator is invoked with a single fn argument, the callable function is the same as in the case where fn is not None.

  • Supported Platforms:

    Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ms_function
...
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
...
>>> # create a callable MindSpore graph by calling ms_function
>>> def tensor_add(x, y):
...     z = x + y
...     return z
...
>>> tensor_add_graph = ms_function(fn=tensor_add)
>>> out = tensor_add_graph(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function
>>> @ms_function
... def tensor_add_with_dec(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_dec(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function with input_signature parameter
>>> @ms_function(input_signature=(Tensor(np.ones([1, 1, 3, 3]).astype(np.float32)),
...                               Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))))
... def tensor_add_with_sig(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_sig(x, y)

class mindspore.Parameter(default_input, name=None, requires_grad=True, layerwise_parallel=False, parallel_optimizer=True)[source]

Parameter is the type of the weights of cell models; after initialization, Parameter is a subtype of Tensor.

Note

In auto_parallel mode of “semi_auto_parallel” and “auto_parallel”, if a Parameter is initialized by a Tensor, the type of the Parameter will be Tensor. Tensor will save the shape and type info of a tensor with no memory usage. The shape can be changed while compiling for auto-parallel. Calling init_data will return a Tensor Parameter with initialized data. If there is an operator in the network that requires part of the inputs to be Parameter, then the Parameters used as this part of the inputs are not allowed to be cast. It is recommended to use the default value of name when initializing a parameter as an attribute of a cell; otherwise, the parameter name may be different from expected.

  • Parameters

    default_input (Union[Tensor, int, float, numpy.ndarray, list]) – Parameter data, used to initialize the parameter data.

    name (str) – Name of the child parameter. Default: None.

    requires_grad (bool) – True if the parameter requires gradient. Default: True.

    layerwise_parallel (bool) – When layerwise_parallel is true in data/hybrid parallel mode, broadcast and gradients communication would not be applied to parameters. Default: False.

    parallel_optimizer (bool) – It is used to filter the weight shard operation in semi auto or auto parallel mode. It works only when the parallel optimizer is enabled in mindspore.context.set_auto_parallel_context(). Default: True.

Examples

>>> import numpy as np
>>> from mindspore import Parameter, Tensor
>>> import mindspore.ops as ops
>>> import mindspore.nn as nn
>>> import mindspore
>>>
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = ops.MatMul()
...         self.weight = Parameter(Tensor(np.ones((1, 2)), mindspore.float32), name="w", requires_grad=True)
...
...     def construct(self, x):
...         out = self.matmul(self.weight, x)
...         return out
>>> net = Net()
>>> x = Tensor(np.ones((2, 1)), mindspore.float32)
>>> print(net(x))
[[2.]]
>>> net.weight.set_data(Tensor(np.zeros((1, 2)), mindspore.float32))
>>> print(net(x))
[[0.]]

property cache_enable

Return whether the parameter is cache enabled.

property cache_shape

Return the cache shape corresponding to the parameter if cache is used.

clone(init="same")[source]

Clone the parameter.

  • Parameters

    init (Union[Tensor, str, numbers.Number]) – Initialize the shape and dtype of the parameter. If init is a Tensor or numbers.Number, clone a new parameter with the same shape and dtype, and the data of the new parameter will be set according to init. If init is a str, the init should be the alias of the class inheriting from Initializer. For example, if init is ‘same’, clone a new parameter with the same data, shape, and dtype. Default: ‘same’.

  • Returns

    Parameter, a new parameter.
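
Examples

A minimal sketch (added for illustration), cloning a parameter with the same data:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Parameter, Tensor
>>> w = Parameter(Tensor(np.ones((1, 2)), mindspore.float32), name="w")
>>> w_clone = w.clone(init="same")  # same data, shape and dtype
>>> print(w_clone.shape)
(1, 2)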

property comm_fusion

Get and set the fusion type (int) for communication operators corresponding to this parameter.

In AUTO_PARALLEL and SEMI_AUTO_PARALLEL mode, some communication operators used for parameters or gradients aggregation are inserted automatically. Set the fusion type for communication operators generated for this parameter. The value of fusion must be greater than or equal to 0. When the value of fusion is 0, operators will not be fused together.

This is only supported in the Ascend environment with graph mode.

property data

Return the parameter object.

init_data(layout=None, set_sliced=False)[source]

Initialize the parameter’s data.

  • Parameters

    layout (Union[None, tuple(list(int))]) – Parameter slice layout [dev_mat, tensor_map, slice_shape]. Default: None.

      dev_mat (list(int)): Device matrix.

      tensor_map (list(int)): Tensor map.

      slice_shape (list(int)): Shape of the slice.

    set_sliced (bool) – True if the parameter is set sliced after initializing the data. Default: False.

  • Raises

    RuntimeError – If it is from Initializer, and parallel mode has changed after the Initializer created.

    ValueError – If the length of the layout is less than 3.

    TypeError – If layout is not tuple.

  • Returns

    Parameter, the Parameter after initializing data. If current Parameter was already initialized before, returns the same initialized Parameter.

property inited_param

Get the new parameter after calling init_data.

Default is None. If self is a Parameter without data, the initialized Parameter with data will be recorded here after init_data is called.

property is_init

Get the initialization status of the parameter. This flag only works in GE, and it will be set to False in other backends.

property layerwise_parallel

When layerwise_parallel is true in data/hybrid parallel mode, broadcast and gradients communication would not be applied to parameters.

property name

Get the name of the parameter.

property parallel_optimizer

It is used to filter the weight shard operation in semi auto or auto parallel mode. It works only when the parallel optimizer is enabled in mindspore.context.set_auto_parallel_context().

property requires_grad

Return whether the parameter requires gradient.

set_data(data, slice_shape=False)[source]

Set Parameter’s data.

  • Parameters

    data (Union[Tensor, int, float]) – New data.

    slice_shape (bool) – If slice_shape is set to True, the shape of the data is not checked for consistency. Default: False.

  • Returns

    Parameter, the parameter after setting data.
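
Examples

A minimal sketch (added for illustration), replacing a parameter's data:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Parameter, Tensor
>>> w = Parameter(Tensor(np.ones((1, 2)), mindspore.float32), name="w")
>>> w = w.set_data(Tensor(np.zeros((1, 2)), mindspore.float32))
>>> print(w.asnumpy())
[[0. 0.]]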

set_param_fl(push_to_server=False, pull_from_server=False, requires_aggr=True)[source]

Set the way of parameter and server interaction.

  • Parameters

    push_to_server (bool) – Whether the parameter should be pushed to the server. Default: False.

    pull_from_server (bool) – Whether the parameter should be pulled from the server. Default: False.

    requires_aggr (bool) – Whether the parameter should be aggregated in the server. Default: True.

set_param_ps(init_in_server=False)[source]

Set whether the trainable parameter is updated by parameter server and whether the trainable parameter is initialized on server.

Note

It only works when a running task is in the parameter server mode.

  • Parameters

    init_in_server (bool) – Whether trainable parameter updated by parameter server is initialized on server. Default: False.

property sliced

Get slice status of the parameter.

property unique

Whether the parameter is already unique.

class mindspore.ParameterTuple[source]

Class for storing tuple of parameters.

Note

It is used to store the parameters of the network into the parameter tuple collection.

clone(prefix, init=”same”)[source]

Clone the parameters in the ParameterTuple element-wise to generate a new ParameterTuple.

  • Parameters

    prefix (str) – Namespace of parameter.

    init (Union[Tensor, str, numbers.Number]) – Initialize the shape and dtype of the parameters. The definition of init is the same as in the Parameter API. If init is ‘same’, the parameters in the new parameter tuple are the same as those in the original parameter tuple. Default: ‘same’.

  • Raises

    RuntimeError – If the parameter’s name does not end with embedding_table.

  • Returns

    Tuple, the new Parameter tuple.
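
Examples

A minimal sketch (added for illustration); prefixing the cloned names with the given namespace, joined by a dot, is the expected convention:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Parameter, ParameterTuple, Tensor
>>> w = Parameter(Tensor(np.ones((1, 2)), mindspore.float32), name="w")
>>> params = ParameterTuple((w,))
>>> cloned = params.clone(prefix="backup")
>>> print(cloned[0].name)
backup.w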

mindspore.set_seed(seed)[source]

Set global seed.

Note

The global seed is used by numpy.random, mindspore.common.Initializer, mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution.

If the global seed is not set, these packages will use their own default seeds independently: numpy.random and mindspore.common.Initializer will choose a random seed, while mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution will use zero.

A seed set by numpy.random.seed() is used only by numpy.random, while a seed set by this API is also used by numpy.random, so it is recommended to set all seeds through this API.

  • Parameters

    seed (int) – The seed to be set.

  • Raises

    ValueError – If seed is invalid (< 0).

    TypeError – If seed is not an int.

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> import mindspore as ms
>>> from mindspore import Tensor, set_seed, Parameter
>>> from mindspore.common.initializer import initializer
>>>
>>> # Note: (1) Please make sure the code is running in PYNATIVE MODE;
>>> # (2) Because Composite-level ops need parameters to be Tensors, for below examples,
>>> # when using ops.uniform operator, minval and maxval are initialised as:
>>> minval = Tensor(1.0, ms.float32)
>>> maxval = Tensor(2.0, ms.float32)
>>>
>>> # 1. If global seed is not set, numpy.random and initializer will choose a random seed:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get different results:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A3
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A4
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W3
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W4
>>>
>>> # 2. If global seed is set, numpy.random and initializer will use it:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>>
>>> # 3. If neither global seed nor op seed is set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will choose a random seed:
>>> c1 = ops.uniform((1, 4), minval, maxval) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get different results:
>>> c1 = ops.uniform((1, 4), minval, maxval) # C3
>>> c2 = ops.uniform((1, 4), minval, maxval) # C4
>>>
>>> # 4. If global seed is set, but op seed is not set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # default op seed. Each call will change the default op seed, thus each call get different
>>> # results.
>>> set_seed(1234)
>>> c1 = ops.uniform((1, 4), minval, maxval) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = ops.uniform((1, 4), minval, maxval) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval) # C2
>>>
>>> # 5. If both global seed and op seed are set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # op seed counter. Each call will change the op seed counter, thus each call get different
>>> # results.
>>> set_seed(1234)
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 6. If op seed is set but global seed is not set, 0 will be used as global seed. Then
>>> # mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution act as in
>>> # condition 5.
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the same results:
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 7. Recall set_seed() in the program will reset numpy seed and op seed counter of
>>> # mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution.
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> set_seed(1234)
>>> np_2 = np.random.normal(0, 1, [1]).astype(np.float32) # still get A1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # still get C1

mindspore.get_seed()[source]

Get global seed.

  • Returns

    Integer. The global seed.
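
Examples

A minimal sketch (added for illustration):

>>> from mindspore import set_seed, get_seed
>>> set_seed(1234)
>>> print(get_seed())
1234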

class mindspore.Model(network, loss_fn=None, optimizer=None, metrics=None, eval_network=None, eval_indexes=None, amp_level="O0", boost_level="O0", **kwargs)[source]

High-Level API for Training or Testing.

Model groups layers into an object with training and inference features.

  • Parameters

    network (Cell) – A training or testing network.

    loss_fn (Cell) – Objective function. If loss_fn is None, the network should contain the logic of loss and gradient calculation, and parallel logic if needed. Default: None.

    optimizer (Cell) – Optimizer for updating the weights. Default: None.

    metrics (Union[dict, set]) – A dictionary or a set of metrics to be evaluated by the model during training and testing, e.g. {'accuracy', 'recall'}. Default: None.

    eval_network (Cell) – Network for evaluation. If not defined, network and loss_fn would be wrapped as eval_network . Default: None.

    eval_indexes (list) – When defining the eval_network, if eval_indexes is None, all outputs of the eval_network will be passed to metrics; otherwise, eval_indexes must contain three elements: the positions of the loss value, the predicted value, and the label. The loss value will be passed to the Loss metric, and the predicted value and label will be passed to the other metrics. Default: None.

    amp_level (str) – Option for the argument level in mindspore.amp.build_train_network , the level for mixed-precision training. Supports ["O0", "O2", "O3", "auto"]. Default: "O0".

      - O0: Do not change.

      - O2: Cast the network to float16, keep batchnorm running in float32, and use dynamic loss scaling.

      - O3: Cast the network to float16, with the additional property keep_batchnorm_fp32=False .

      - auto: Set the level to the recommended level for the device: O2 on GPU and O3 on Ascend. The recommended levels are based on export experience and are not always general; users should specify the level explicitly for special networks.

    O2 is recommended on GPU and O3 on Ascend. A more detailed explanation of the amp_level setting can be found at mindspore.amp.build_train_network .

    boost_level (str) – Option for the argument level in mindspore.boost , the level for boost-mode training. Supports ["O0", "O1", "O2"]. Default: "O0".

      - O0: Do not change.

      - O1: Enable boost mode; performance improves by about 20% with no loss of accuracy.

      - O2: Enable boost mode; performance improves by about 30% with an accuracy drop of less than 3%.

Examples

>>> from mindspore import Model, nn
>>>
>>> class Net(nn.Cell):
...     def __init__(self, num_class=10, num_channel=1):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
...         self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
...         self.fc1 = nn.Dense(16*5*5, 120, weight_init='ones')
...         self.fc2 = nn.Dense(120, 84, weight_init='ones')
...         self.fc3 = nn.Dense(84, num_class, weight_init='ones')
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.flatten = nn.Flatten()
...
...     def construct(self, x):
...         x = self.max_pool2d(self.relu(self.conv1(x)))
...         x = self.max_pool2d(self.relu(self.conv2(x)))
...         x = self.flatten(x)
...         x = self.relu(self.fc1(x))
...         x = self.relu(self.fc2(x))
...         x = self.fc3(x)
...         return x
>>>
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None)
>>> # For details about how to build the dataset, please refer to the tutorial
>>> # document on the official website.
>>> dataset = create_custom_dataset()
>>> model.train(2, dataset)
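
Building on the example above, a hypothetical mixed-precision configuration might look as follows; the amp_level and boost_level values are illustrative rather than a recommendation for this particular network:

>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics={'accuracy'},
...               amp_level="O2", boost_level="O1")
>>> model.train(2, dataset)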

build(train_dataset=None, valid_dataset=None, sink_size=-1)[source]

Build computational graphs and data graphs with the sink mode.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Note

The pre-build process currently only supports GRAPH_MODE and the Ascend target. This interface builds the computational graphs; after it has been executed, 'model.train' only performs graph execution. Only dataset sink mode is supported.

  • Parameters

    train_dataset (Dataset) – A training dataset iterator. If train_dataset is defined, the training graphs will be initialized. Default: None.

    valid_dataset (Dataset) – An evaluation dataset iterator. If valid_dataset is defined, the evaluation graphs will be initialized, and metrics in Model cannot be None. Default: None.

    sink_size (int) – Control the amount of data in each sink. Default: -1.

Examples

>>> from mindspore import Model, nn, FixedLossScaleManager
>>>
>>> # For details about how to build the dataset, please refer to the tutorial
>>> # document on the official website.
>>> dataset = create_custom_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None, loss_scale_manager=loss_scale_manager)
>>> model.build(dataset)
>>> model.train(2, dataset)

eval(valid_dataset, callbacks=None, dataset_sink_mode=True)[source]

Evaluation API where the iteration is controlled by the Python front end.

When the mode is PyNative or the device is CPU, the evaluation process is performed with the dataset in non-sink mode.

Note

If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, features of the data will be transferred one by one, with a limit of 256 MB per transfer. When dataset_sink_mode is True, the step_end method of the Callback class will be executed when the epoch_end method is called.

  • Parameters

    valid_dataset (Dataset) – Dataset to evaluate the model.

    callbacks (Optional[list[Callback]]) – List of callback objects to be executed while evaluating. Default: None.

    dataset_sink_mode (bool) – Determines whether to pass the data through dataset channel. Default: True.

  • Returns

    Dict, the loss value and metric values of the model in test mode.

Examples

>>> from mindspore import Model, nn
>>>
>>> # For details about how to build the dataset, please refer to the tutorial
>>> # document on the official website.
>>> dataset = create_custom_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=None, metrics={'acc'})
>>> acc = model.eval(dataset, dataset_sink_mode=False)

property eval_network

Get the model’s eval_network.

infer_predict_layout(*predict_data)[source]

Generate parameter layout for the predict network in auto or semi auto parallel mode.

Data could be a single tensor or multiple tensors.

Note

Batch data should be put together in one tensor.

  • Parameters

    predict_data (Tensor) – One tensor or multiple tensors of predict data.

  • Returns

    Dict, the parameter layout dictionary used for loading distributed checkpoints.

  • Raises

    RuntimeError – If the context mode is not GRAPH_MODE.

Examples

>>> # This example should be run with multiple devices. Refer to the tutorial > Distributed Training on
>>> # mindspore.cn.
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Model, context, Tensor
>>> from mindspore.context import ParallelMode
>>> from mindspore.communication import init
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> init()
>>> context.set_auto_parallel_context(full_batch=True, parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL)
>>> input_data = Tensor(np.random.randint(0, 255, [1, 1, 32, 32]), ms.float32)
>>> model = Model(Net())
>>> predict_map = model.infer_predict_layout(input_data)

infer_train_layout(train_dataset, dataset_sink_mode=True, sink_size=-1)[source]

Generate parameter layout for the train network in auto or semi auto parallel mode. Only dataset sink mode is supported for now.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Note

This is a pre-compile function. The arguments should be the same as those of the model.train() function.

  • Parameters

    train_dataset (Dataset) – A training dataset iterator. If there is no loss_fn, a tuple with multiple data (data1, data2, data3, …) should be returned and passed to the network. Otherwise, a tuple (data, label) should be returned. The data and label would be passed to the network and loss function respectively.

    dataset_sink_mode (bool) – Determines whether to pass the data through the dataset channel. When the mode is PyNative or the device is CPU, training is performed with the dataset not sinking. Default: True.

    sink_size (int) – Control the amount of data in each sink. If sink_size = -1, sink the complete dataset for each epoch. If sink_size > 0, sink sink_size data for each epoch. If dataset_sink_mode is False, sink_size is ignored. Default: -1.

  • Returns

    Dict, the parameter layout dictionary used for loading distributed checkpoints.

Examples

>>> # This example should be run with multiple devices. Refer to the tutorial > Distributed Training on
>>> # mindspore.cn.
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Model, context, Tensor, nn, FixedLossScaleManager
>>> from mindspore.context import ParallelMode
>>> from mindspore.communication import init
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> init()
>>> context.set_auto_parallel_context(parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL)
>>>
>>> # For details about how to build the dataset, please refer to the tutorial
>>> # document on the official website.
>>> dataset = create_custom_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None, loss_scale_manager=loss_scale_manager)
>>> layout_dict = model.infer_train_layout(dataset)

predict(*predict_data)[source]

Generate output predictions for the input samples.

Data can be a single tensor, a list of tensors, or a tuple of tensors.

Note

This is a pre-compile function. The arguments should be the same as those of the model.predict() function.

  • Parameters

    predict_data (Tensor) – The predict data, which can be bool, int, float, str, None, a tensor, or a tuple, list, or dict that stores these types.

  • Returns

    Tensor, array(s) of predictions.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Model, Tensor
>>>
>>> input_data = Tensor(np.random.randint(0, 255, [1, 1, 32, 32]), ms.float32)
>>> model = Model(Net())
>>> result = model.predict(input_data)

property predict_network

Get the model’s predict_network.

train(epoch, train_dataset, callbacks=None, dataset_sink_mode=True, sink_size=-1)[source]

Training API where the iteration is controlled by the Python front end.

When the mode is PyNative or the device is CPU, the training process is performed with the dataset not sinking.

Note

If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, features of the data will be transferred one by one, with a limit of 256 MB per transfer. When dataset_sink_mode is True, the step_end method of the Callback class will be executed when the epoch_end method is called. If sink_size > 0, each epoch the dataset can be traversed an unlimited number of times until sink_size elements have been fetched; the next epoch continues traversal from where the previous one ended. The interface builds the computational graphs and then executes them; however, when 'model.build' has been executed first, it only performs graph execution.

  • Parameters

    epoch (int) – Generally, the total number of iterations over the data per epoch. When dataset_sink_mode is set to True and sink_size > 0, each epoch sinks sink_size steps of data instead of the total number of iterations.

    train_dataset (Dataset) – A training dataset iterator. If there is no loss_fn, a tuple with multiple data (data1, data2, data3, …) should be returned and passed to the network. Otherwise, a tuple (data, label) should be returned. The data and label would be passed to the network and loss function respectively.

    callbacks (Optional[Union[list[Callback], Callback]]) – A list of callback objects or a single callback object, to be executed while training. Default: None.

    dataset_sink_mode (bool) – Determines whether to pass the data through the dataset channel. When the mode is PyNative or the device is CPU, training is performed with the dataset not sinking. Default: True.

    sink_size (int) – Control the amount of data in each sink. If sink_size = -1, sink the complete dataset for each epoch. If sink_size > 0, sink sink_size data for each epoch. If dataset_sink_mode is False, sink_size is ignored. Default: -1.

Examples

>>> from mindspore import Model, nn, FixedLossScaleManager
>>>
>>> # For details about how to build the dataset, please refer to the tutorial
>>> # document on the official website.
>>> dataset = create_custom_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None, loss_scale_manager=loss_scale_manager)
>>> model.train(2, dataset)

property train_network

Get the model’s train_network.

class mindspore.DatasetHelper(dataset, dataset_sink_mode=True, sink_size=-1, epoch_num=1)[source]

DatasetHelper is a class to process the MindData dataset, and it provides information about the dataset.

Depending on the context, it changes the iteration behavior of the dataset so that the same for-loop can be used in different contexts.

Note

One iteration of DatasetHelper provides one epoch of data.

  • Parameters

    dataset (Dataset) – The training dataset iterator. The dataset can be generated by dataset generator API in mindspore.dataset, such as mindspore.dataset.ImageFolderDataset.

    dataset_sink_mode (bool) – If True, use GetNext to fetch the data; otherwise, feed the data from the host. Default: True.

    sink_size (int) – Control the amount of data in each sink. If sink_size=-1, sink the complete dataset for each epoch. If sink_size>0, sink sink_size data for each epoch. Default: -1.

    epoch_num (int) – Control the number of epoch data to send. Default: 1.

Examples

>>> from mindspore import DatasetHelper
>>>
>>> train_dataset = create_custom_dataset()
>>> set_helper = DatasetHelper(train_dataset, dataset_sink_mode=False)
>>> # Object of DatasetHelper is iterable
>>> for next_element in set_helper:
...     next_element

continue_send()[source]

Continue sending data to the device at the beginning of an epoch.

dynamic_min_max_shapes()[source]

Get the shape range (min shape, max shape) of dynamic data.

get_data_info()[source]

Get the types and shapes of the current batch.

release()[source]

Free up the resources used for data sinking.

sink_size()[source]

Get sink_size for each iteration.

stop_send()[source]

Stop sending data for data sinking.

types_shapes()[source]

Get the types and shapes from the dataset under the current configuration.
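
As a sketch of how these helper methods fit together (train_dataset as in the class example above; the printed values depend on the dataset):

>>> helper = DatasetHelper(train_dataset, dataset_sink_mode=False)
>>> print(helper.sink_size())      # steps per iteration
>>> print(helper.types_shapes())   # types and shapes under the current configuration
>>> helper.release()               # free data-sink resources when done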

mindspore.connect_network_with_dataset(network, dataset_helper)[source]

Connect the network with dataset in dataset_helper.

This function wraps the input network with ‘GetNext’ so that the data can be fetched automatically from the data channel corresponding to the ‘queue_name’ and passed to the input network during forward computation.

Note

When running the network on Ascend/GPU in graph mode, this function wraps the input network with 'GetNext'; in other cases, the input network is returned unchanged. 'GetNext' is required to fetch data only in sink mode, so this function is not applicable in non-sink mode.

  • Parameters

    network (Cell) – The training network for dataset.

    dataset_helper (DatasetHelper) – A class to process the MindData dataset; it provides the type, shape, and queue name of the dataset needed to wrap 'GetNext'.

  • Returns

    Cell, a new network wrapped with 'GetNext' when the task runs on Ascend in graph mode; otherwise, the input network.

  • Supported Platforms:

    Ascend GPU

Examples

>>> from mindspore import DatasetHelper
>>>
>>> # call create_dataset function to create a regular dataset, refer to mindspore.dataset
>>> train_dataset = create_custom_dataset()
>>> dataset_helper = DatasetHelper(train_dataset, dataset_sink_mode=True)
>>> net = Net()
>>> net_with_get_next = connect_network_with_dataset(net, dataset_helper)

mindspore.build_train_network(network, optimizer, loss_fn=None, level="O0", boost_level="O0", **kwargs)[source]

Build the mixed precision training cell automatically.

  • Parameters

    network (Cell) – Definition of the network.

    loss_fn (Union[None, Cell]) – Definition of the loss_fn. If None, the network should have the loss inside. Default: None.

    optimizer (Optimizer) – Optimizer to update the Parameters.

    level (str) – Supports ["O0", "O2", "O3", "auto"]. Default: "O0".

      - O0: Do not change.

      - O2: Cast the network to float16, keep batchnorm and loss_fn (if set) running in float32, and use dynamic loss scaling.

      - O3: Cast the network to float16, with the additional property keep_batchnorm_fp32=False .

      - auto: Set the level to the recommended level for the device: O2 on GPU and O3 on Ascend. The recommended levels are based on export experience and are not always general; users should specify the level explicitly for special networks.

    O2 is recommended on GPU and O3 on Ascend. The properties keep_batchnorm_fp32 , cast_model_type and loss_scale_manager determined by the level setting may be overwritten by settings in kwargs .

    cast_model_type (mindspore.dtype) – Supports mstype.float16 or mstype.float32 . If set, the network will be cast to cast_model_type, rather than to the type determined by the level setting.

    keep_batchnorm_fp32 (bool) – Keep batchnorm running in float32 when the network is cast to float16 . If set, the level setting has no effect on this property.

    loss_scale_manager (Union[None, LossScaleManager]) – If None, the loss is not scaled; otherwise, the loss is scaled by LossScaleManager . If set, the level setting has no effect on this property.

  • Raises

    ValueError – Auto mixed precision is only supported on GPU and Ascend devices. If the device is CPU, a ValueError exception is raised.

    ValueError – If the device is CPU, the property loss_scale_manager can only be set to None or FixedLossScaleManager (with the property drop_overflow_update=False ); otherwise, a ValueError exception is raised.
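
Examples

A minimal usage sketch; Net is the hypothetical network used in the other examples on this page, and level="O2" is only an illustration:

>>> from mindspore import build_train_network, nn
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> train_net = build_train_network(net, optim, loss_fn=loss, level="O2")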

class mindspore.LossScaleManager[source]

Loss scale manager abstract class.

get_loss_scale()[source]

Get loss scale value.

get_update_cell()[source]

Get the loss scaling update logic cell.

update_loss_scale(overflow)[source]

Update loss scale value.

  • Parameters

    overflow (bool) – Whether it overflows.
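
Since LossScaleManager is abstract, a custom manager only needs to implement the three methods above. A minimal fixed-value sketch (illustrative only; in practice, use FixedLossScaleManager below):

>>> from mindspore import LossScaleManager
>>> class MyFixedLossScaleManager(LossScaleManager):
...     def __init__(self, loss_scale=1024.0):
...         super(MyFixedLossScaleManager, self).__init__()
...         self._loss_scale = loss_scale
...     def get_loss_scale(self):
...         return self._loss_scale
...     def get_update_cell(self):
...         return None  # fixed scale: no update cell is needed
...     def update_loss_scale(self, overflow):
...         pass  # fixed scale: nothing to update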

class mindspore.FixedLossScaleManager(loss_scale=128.0, drop_overflow_update=True)[source]

Loss scale manager with a fixed loss scale value; inherits from LossScaleManager.

  • Parameters

    loss_scale (float) – Loss scale value. Note that if drop_overflow_update is set to False, the value of loss_scale in the optimizer you use needs to be set to the same value as here. Default: 128.0.

    drop_overflow_update (bool) – Whether to skip the optimizer step when there is an overflow. If True, the optimizer will not be executed when an overflow occurs. Default: True.

Examples

>>> from mindspore import Model, nn, FixedLossScaleManager
>>>
>>> net = Net()
>>> #1) Drop the parameter update if there is an overflow
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_scale_manager=loss_scale_manager, optimizer=optim)
>>>
>>> #2) Execute parameter update even if overflow occurs
>>> loss_scale = 1024.0
>>> loss_scale_manager = FixedLossScaleManager(loss_scale, False)
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9, loss_scale=loss_scale)
>>> model = Model(net, loss_scale_manager=loss_scale_manager, optimizer=optim)

get_drop_overflow_update()[source]

Get the flag whether to drop optimizer update when there is an overflow.

  • Returns

    bool, drop_overflow_update value.

get_loss_scale()[source]

Get loss scale value.

  • Returns

    float, the loss_scale value.

get_update_cell()[source]

Returns the update cell for TrainOneStepWithLossScaleCell.

  • Returns

    None or Cell. A Cell object used to update loss_scale when drop_overflow_update is True; None when drop_overflow_update is False.

update_loss_scale(overflow)[source]

Update the loss scale value. This interface does nothing in FixedLossScaleManager.

  • Parameters

    overflow (bool) – Whether it overflows.

class mindspore.DynamicLossScaleManager(init_loss_scale=2 ** 24, scale_factor=2, scale_window=2000)[source]

Loss scale manager that dynamically adjusts the loss scale; inherits from LossScaleManager.

  • Parameters

    init_loss_scale (float) – Initial loss scale value. Default: 2**24.

    scale_factor (int) – Coefficient of increase and decrease. Default: 2.

    scale_window (int) – Maximum number of consecutive normal (no-overflow) steps before the loss scale is increased. Default: 2000.

Examples

>>> from mindspore import Model, nn, DynamicLossScaleManager
>>>
>>> net = Net()
>>> loss_scale_manager = DynamicLossScaleManager()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_scale_manager=loss_scale_manager, optimizer=optim)

get_drop_overflow_update()[source]

Get the flag whether to drop optimizer update when there is an overflow.

  • Returns

    bool, always returns True for DynamicLossScaleManager.

get_loss_scale()[source]

Get loss scale value.

  • Returns

    float, the loss_scale value.

get_update_cell()[source]

Returns the update cell for TrainOneStepWithLossScaleCell.

  • Returns

    Cell, the cell object used to update loss_scale.

update_loss_scale(overflow)[source]

Update loss scale value.

  • Parameters

    overflow (bool) – Whether it overflows.
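
The adjustment rule can be sketched as follows, assuming the defaults scale_factor=2 and scale_window=2000 (the exact bookkeeping is internal to the manager):

>>> manager = DynamicLossScaleManager(init_loss_scale=1024.0)
>>> manager.update_loss_scale(True)    # overflow: the loss scale is reduced by scale_factor
>>> print(manager.get_loss_scale())
>>> manager.update_loss_scale(False)   # normal step: counts toward scale_window before an increase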

mindspore.save_checkpoint(save_obj, ckpt_file_name, integrated_save=True, async_save=False, append_dict=None, enc_key=None, enc_mode="AES-GCM")[source]

Save checkpoint info to a specified file.

  • Parameters

    save_obj (Union[Cell, list]) – The cell object or a data list (each element is a dictionary, like [{"name": param_name, "data": param_data},…]; the type of param_name is string, and the type of param_data is Parameter or Tensor).

    ckpt_file_name (str) – Checkpoint file name. If the file name already exists, it will be overwritten.

    integrated_save (bool) – Whether to perform integrated save in the automatic model-parallel scenario. Default: True.

    async_save (bool) – Whether to save the checkpoint to file asynchronously. Default: False.

    append_dict (dict) – Additional information to be saved. The keys of the dict must be str, and the values must be int, float, or bool. Default: None.

    enc_key (Union[None, bytes]) – Byte-type key used for encryption. If None, encryption is not performed. Default: None.

    enc_mode (str) – Valid only when enc_key is not None. Specifies the encryption mode; currently supports 'AES-GCM' and 'AES-CBC'. Default: 'AES-GCM'.

  • Raises

    TypeError – If save_obj is not an nn.Cell or list, or if integrated_save or async_save is not bool.

Examples

>>> from mindspore import save_checkpoint
>>>
>>> net = Net()
>>> save_checkpoint(net, "lenet.ckpt")
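
A hypothetical variant that appends metadata and encrypts the file; the 16-byte key below is illustrative only:

>>> save_checkpoint(net, "lenet_enc.ckpt", append_dict={"epoch": 2},
...                 enc_key=b"0123456789abcdef", enc_mode="AES-GCM")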

mindspore.load_checkpoint(ckpt_file_name, net=None, strict_load=False, filter_prefix=None, dec_key=None, dec_mode="AES-GCM")[source]

Load checkpoint info from a specified file.

  • Parameters

    ckpt_file_name (str) – Checkpoint file name.

    net (Cell) – Cell network. Default: None

    strict_load (bool) – Whether to load the parameters into the net strictly. If False, parameters in param_dict will be loaded into the net by matching parameter-name suffixes, and parameters with a different precision will be converted. Default: False.

    filter_prefix (Union[str, list[str], tuple[str]]) – Parameters starting with filter_prefix will not be loaded. Default: None.

    dec_key (Union[None, bytes]) – Byte-type key used for decryption. If None, decryption is not performed. Default: None.

    dec_mode (str) – Valid only when dec_key is not None. Specifies the decryption mode; currently supports 'AES-GCM' and 'AES-CBC'. Default: 'AES-GCM'.

  • Returns

    Dict, where the key is the parameter name and the value is a Parameter.

  • Raises

    ValueError – The checkpoint file is incorrect.

Examples

>>> from mindspore import load_checkpoint
>>>
>>> ckpt_file_name = "./checkpoint/LeNet5-1_32.ckpt"
>>> param_dict = load_checkpoint(ckpt_file_name, filter_prefix="conv1")
>>> print(param_dict["conv2.weight"])
Parameter (name=conv2.weight, shape=(16, 6, 5, 5), dtype=Float32, requires_grad=True)

mindspore.load_param_into_net(net, parameter_dict, strict_load=False)[source]

Load parameters into network.

  • Parameters

    net (Cell) – Cell network.

    parameter_dict (dict) – Parameter dictionary.

    strict_load (bool) – Whether to load the parameters into the net strictly. If False, parameters in param_dict will be loaded into the net by matching parameter-name suffixes, and parameters with a different precision will be converted. Default: False.

  • Returns

    List, the parameters that are not loaded into the network.

  • Raises

    TypeError – The argument is not a Cell, or parameter_dict is not a dictionary of Parameters.

Examples

>>> from mindspore import load_checkpoint, load_param_into_net
>>>
>>> net = Net()
>>> ckpt_file_name = "./checkpoint/LeNet5-1_32.ckpt"
>>> param_dict = load_checkpoint(ckpt_file_name, filter_prefix="conv1")
>>> param_not_load = load_param_into_net(net, param_dict)
>>> print(param_not_load)

mindspore.export(net, *inputs, file_name, file_format="AIR", **kwargs)[source]

Export the MindSpore prediction model to a file in the specified format.

Note

  1. When exporting to the AIR or ONNX format, the size of a single tensor cannot exceed 2 GB.
  2. When file_name has no suffix, the system automatically adds one according to file_format.
  • Parameters

    net (Cell) – MindSpore network.

    inputs (Tensor) – Inputs of the net; if the network has multiple inputs, pass a tuple of Tensors.

    file_name (str) – File name of the model to be exported.

    file_format (str) –MindSpore currently supports ‘AIR’, ‘ONNX’ and ‘MINDIR’ format for exported model.AIR: Ascend Intermediate Representation. An intermediate representation format of Ascend model.ONNX: Open Neural Network eXchange. An open format built to represent machine learning models.MINDIR: MindSpore Native Intermediate Representation for Anf. An intermediate representation format for MindSpore models.

    kwargs (dict) – Configuration options dictionary.

      - quant_mode (str): If the network is a quantization-aware training network, quant_mode should be set to "QUANT"; otherwise, set it to "NONQUANT".

      - mean (float): The mean of input data after preprocessing, used for quantizing the first layer of the network. Default: 127.5.

      - std_dev (float): The variance of input data after preprocessing, used for quantizing the first layer of the network. Default: 127.5.

      - enc_key (bytes): Byte-type key used for encryption. The valid length is 16, 24, or 32.

      - enc_mode (str): Specifies the encryption mode; takes effect when enc_key is set. Options: 'AES-GCM' | 'AES-CBC'. Default: 'AES-GCM'.

      - dataset (Dataset): Specifies the preprocessing methods of the network.

Examples

>>> import numpy as np
>>> from mindspore import export, Tensor
>>>
>>> net = LeNet()
>>> input_data = Tensor(np.ones([1, 1, 32, 32]).astype(np.float32))
>>> export(net, input_data, file_name='lenet', file_format='MINDIR')

mindspore.load(file_name, **kwargs)[source]

Load MindIR.

The returned object can be executed by a GraphCell, see class mindspore.nn.GraphCell for more details.

  • Parameters

    file_name (str) – MindIR file name.

    kwargs (dict) – Configuration options dictionary.

      - dec_key (bytes): Byte-type key used for decryption. The valid length is 16, 24, or 32.

      - dec_mode (str): Specifies the decryption mode; takes effect when dec_key is set. Options: 'AES-GCM' | 'AES-CBC'. Default: 'AES-GCM'.

  • Returns

    Object, a compiled graph that can be executed by GraphCell.

  • Raises

    ValueError – The MindIR file name is incorrect.

    RuntimeError – Failed to parse MindIR file.

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, export, load
>>>
>>> net = nn.Conv2d(1, 1, kernel_size=3, weight_init="ones")
>>> input_data = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> export(net, input_data, file_name="net", file_format="MINDIR")
>>> graph = load("net.mindir")
>>> net = nn.GraphCell(graph)
>>> output = net(input_data)
>>> print(output)

mindspore.parse_print(print_file_name)[source]

Load Print data from a specified file.

  • Parameters

    print_file_name (str) – The file name of saved print data.

  • Returns

    List, whose elements are Tensors.

  • Raises

    ValueError – The print file may be empty; please make sure you enter the correct file name.
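
Examples

A hypothetical usage; the file name is a placeholder for data saved by the Print operator with print_file_path configured:

>>> from mindspore import parse_print
>>> tensor_list = parse_print("print_output.data")
>>> print(tensor_list[0])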

mindspore.build_searched_strategy(strategy_filename)[source]

Build the strategy of every parameter in the network.

  • Parameters

    strategy_filename (str) – Name of strategy file.

  • Returns

    Dict, whose keys are parameter names and whose values are the slice strategies of those parameters.

  • Raises

    ValueError – The strategy file is incorrect.

    TypeError – strategy_filename is not a str.
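
Examples

A hypothetical usage; the strategy file name is a placeholder for a file produced during auto-parallel training:

>>> from mindspore import build_searched_strategy
>>> strategy = build_searched_strategy("./train_strategy.ckpt")
>>> print(list(strategy.keys()))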

mindspore.merge_sliced_parameter(sliced_parameters, strategy=None)[source]

Merge parameter slices to one whole parameter.

  • Parameters

    sliced_parameters (list[Parameter]) – Parameter slices, in order of rank_id.

    strategy (Optional[dict]) – Parameter slice strategy, whose key is the parameter name and whose value is the slice strategy of this parameter. If strategy is None, just merge the parameter slices along axis 0 in order. Default: None.

  • Returns

    Parameter, the merged parameter which has the whole data.

  • Raises

    ValueError – Failed to merge.

    TypeError – sliced_parameters is incorrect or strategy is not a dict.

    KeyError – The parameter name is not a key of strategy.

Examples

>>> import numpy as np
>>> from mindspore import Tensor, merge_sliced_parameter, Parameter
>>>
>>> sliced_parameters = [
...                      Parameter(Tensor(np.array([0.00023915, 0.00013939, -0.00098059])),
...                                "network.embedding_table"),
...                      Parameter(Tensor(np.array([0.00015815, 0.00015458, -0.00012125])),
...                                "network.embedding_table"),
...                      Parameter(Tensor(np.array([0.00042165, 0.00029692, -0.00007941])),
...                                "network.embedding_table"),
...                      Parameter(Tensor(np.array([0.00084451, 0.00089960, -0.00010431])),
...                                "network.embedding_table")]
>>> merged_parameter = merge_sliced_parameter(sliced_parameters)
>>> print(merged_parameter)

mindspore.load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=None, train_strategy_filename=None, dec_key=None, dec_mode="AES-GCM")[source]

Load checkpoints into the net for distributed prediction.

  • Parameters

    network (Cell) – Network for distributed prediction.

    checkpoint_filenames (list[str]) – The names of the checkpoint files, in order of rank id.

    predict_strategy (dict) – Strategy of the prediction process, whose key is the parameter name and whose value is a list or tuple whose first four elements are [dev_matrix, tensor_map, param_split_shape, field]. If None, the prediction process just uses a single device. Default: None.

    train_strategy_filename (str) – Train strategy proto file name. Default: None.

    dec_key (Union[None, bytes]) – Byte-type key used for decryption. If None, decryption is not performed. Default: None.

    dec_mode (str) – Valid only when dec_key is not None. Specifies the decryption mode; currently supports 'AES-GCM' and 'AES-CBC'. Default: 'AES-GCM'.

  • Raises

    TypeError – The types of the inputs do not match the requirements.

    ValueError – Failed to load the checkpoint into the net.
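
Examples

A sketch of loading for distributed prediction; the checkpoint file names are placeholders, and model and input_data follow the infer_predict_layout example above:

>>> from mindspore import load_distributed_checkpoint
>>> predict_layout = model.infer_predict_layout(input_data)
>>> ckpt_files = ['rank_0.ckpt', 'rank_1.ckpt']
>>> load_distributed_checkpoint(model.predict_network, ckpt_files, predict_strategy=predict_layout)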

mindspore.async_ckpt_thread_status()[source]

Get the status of the asynchronous save-checkpoint thread.

  • Returns

    bool. True if the asynchronous save-checkpoint thread is still running; False otherwise.
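
Examples

A minimal sketch of waiting for an asynchronous save to finish; net is the hypothetical network used elsewhere on this page:

>>> from mindspore import save_checkpoint, async_ckpt_thread_status
>>> save_checkpoint(net, "lenet.ckpt", async_save=True)
>>> while async_ckpt_thread_status():
...     pass  # busy-wait until the asynchronous save thread completes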

mindspore.get_level()[source]

Get the logger level.

  • Returns

    str, the log level: 4 (EXCEPTION), 3 (ERROR), 2 (WARNING), 1 (INFO), or 0 (DEBUG).

Examples

>>> import os
>>> os.environ['GLOG_v'] = '0'
>>> from mindspore import log as logger
>>> logger.get_level()

mindspore.get_log_config()[source]

Get logger configurations.

  • Returns

    Dict, the dictionary of logger configurations.

Examples

>>> import os
>>> os.environ['GLOG_v'] = '1'
>>> os.environ['GLOG_logtostderr'] = '0'
>>> os.environ['GLOG_log_dir'] = '/var/log'
>>> os.environ['logger_maxBytes'] = '5242880'
>>> os.environ['logger_backupCount'] = '10'
>>> os.environ['GLOG_stderrthreshold'] = '2'
>>> from mindspore import log as logger
>>> logger.get_log_config()

mindspore.common.initializer

Initializer for cell parameters.

class mindspore.common.initializer.Initializer(**kwargs)[source]

The base class of initializers, which initializes basic tensor attributes and model weight values.

  • Parameters

    kwargs (dict) – Keyword arguments for Initializer.
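
A minimal sketch of a custom initializer. It relies on the _initialize(arr) hook that the built-in initializers implement; this is an internal convention rather than a documented stable API, so treat the sketch as illustrative:

>>> import numpy as np
>>> import mindspore
>>> from mindspore.common.initializer import Initializer, initializer
>>> class RampInit(Initializer):
...     def _initialize(self, arr):
...         # fill the backing numpy array in place with 0, 1, 2, ...
...         arr[...] = np.arange(arr.size).reshape(arr.shape)
>>> tensor = initializer(RampInit(), [2, 3], mindspore.float32)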

mindspore.common.initializer.initializer(init, shape=None, dtype=mstype.float32)[source]

Create and initialize a tensor.

  • Parameters

    init (Union[Tensor, str, Initializer, numbers.Number]) – The initialization value.

      - str: init should be the alias of a class inheriting from Initializer, and the corresponding class will be called. The value of init can be "normal", "ones", "zeros", etc.

      - Initializer: init should be a class inheriting from Initializer, which will be used to initialize the tensor.

      - numbers.Number: Constant will be called to initialize the tensor.

    shape (Union[tuple, list, int]) – A list of integers, a tuple of integers, or an integer as the shape of the output. Default: None.

    dtype (mindspore.dtype) – The type of data in initialized tensor. Default: mindspore.float32.

  • Returns

    Tensor, the initialized tensor.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, One
>>> tensor1 = initializer('ones', [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor3 = initializer(0, [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.TruncatedNormal(sigma=0.01)[source]

Initialize a truncated normal distribution, which is a bounded normal distribution within N(low, high).

  • Parameters

    sigma (float) – The sigma of the array. Default: 0.01.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, TruncatedNormal
>>> tensor1 = initializer(TruncatedNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('truncatedNormal', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.Normal(sigma=0.01, mean=0.0)[source]

Initialize a normal array, and obtain values from the normal distribution N(mean, sigma^2) to fill the input tensor.

  • Parameters

    sigma (float) – The sigma of the array. Default: 0.01.

    mean (float) – The mean of the array. Default: 0.0.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Normal
>>> tensor1 = initializer(Normal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('normal', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.Uniform(scale=0.07)[source]

Initialize a uniform array, and obtain values from the uniform distribution U(-scale, scale) to fill the input tensor.

  • Parameters

    scale (float) – The scale of the array. Default: 0.07.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Uniform
>>> tensor1 = initializer(Uniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('uniform', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.HeNormal(negative_slope=0, mode="fan_in", nonlinearity="leaky_relu")[source]

Initialize the array with the He Kaiming normal algorithm: samples are collected from the normal distribution N(0, sigma^2), where

    sigma = gain / sqrt(fan_mode)

  • gain is an optional scaling factor, determined by negative_slope and nonlinearity.
  • fan_mode is the number of input units ('fan_in') or output units ('fan_out') in the weight tensor, according to mode.

For details of the He initialization algorithm, please check https://arxiv.org/abs/1502.01852.

  • Parameters

    negative_slope (int, float, bool) – The negative slope of the rectifier used after this layer (only used when nonlinearity is ‘leaky_relu’). Default: 0.

    mode (str) – Either ‘fan_in’ or ‘fan_out’. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass. Default: fan_in.

    nonlinearity (str) – The non-linear function, recommended to use only with ‘relu’ or ‘leaky_relu’. Default: leaky_relu.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, HeNormal
>>> tensor1 = initializer(HeNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('he_normal', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.XavierUniform(gain=1)[source]

Initialize the array with the Xavier uniform algorithm: samples are collected from the uniform distribution U(-boundary, boundary), where

    boundary = gain * sqrt(6 / (n_in + n_out))

  • gain is an optional scaling factor.
  • n_in is the number of input units in the weight tensor.
  • n_out is the number of output units in the weight tensor.

For details of the XavierUniform algorithm, please check http://proceedings.mlr.press/v9/glorot10a.html.

  • Parameters

    gain (float) – An optional scaling factor. Default: 1.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, XavierUniform
>>> tensor1 = initializer(XavierUniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('xavier_uniform', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.One(**kwargs)[source]

Fills the input array with ones.

  • Parameters

    arr (Array) – The array to be assigned.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, One
>>> tensor1 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('ones', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.Zero(**kwargs)[source]

Fills the input array with zeros.

  • Parameters

    arr (Array) – The array to be assigned.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Zero
>>> tensor1 = initializer(Zero(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('zeros', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.Constant(value)[source]

Initialize a constant.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer
>>> tensor1 = initializer(0, [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer(5, [1, 2, 3], mindspore.float32)

mindspore.communication

Collective communication interface.


Author: 杰克成
Copyright notice: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit 杰克成 when reposting.