lasagne.layers.dnn

This module houses layers that require cuDNN to work. They are not automatically imported into the lasagne.layers namespace: to use them, you need to import lasagne.layers.dnn explicitly.

Note that these layers are not required to benefit from cuDNN: if cuDNN is available, Theano will use it for the default convolution and pooling layers anyway. However, the layers in this module allow you to enforce the use of cuDNN, or to use features that are not available in lasagne.layers.
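For example, assuming a cuDNN-enabled Theano installation (the explicit import fails otherwise, which is how enforcing cuDNN works in practice):

>>> import lasagne.layers            # the dnn layers are not available here
>>> from lasagne.layers import dnn   # explicit import; fails without cuDNN
>>> from lasagne.layers.dnn import Conv2DDNNLayer, MaxPool2DDNNLayer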

class lasagne.layers.dnn.Pool2DDNNLayer(incoming, pool_size, stride=None, pad=(0, 0), ignore_border=True, mode='max', **kwargs)[source]

2D pooling layer

Performs 2D mean- or max-pooling over the two trailing axes of a 4D input tensor. This is an alternative implementation which uses theano.sandbox.cuda.dnn.dnn_pool directly.

Parameters:

incoming : a Layer instance or tuple

The layer feeding into this layer, or the expected input shape.

pool_size : integer or iterable

The length of the pooling region in each dimension. If an integer, it is promoted to a square pooling region. If an iterable, it should have two elements.

stride : integer, iterable or None

The strides between successive pooling regions in each dimension. If None, then stride = pool_size.

pad : integer or iterable

Number of elements to be added on each side of the input in each dimension. Each value must be less than the corresponding stride.

ignore_border : bool (default: True)

This implementation never includes partial pooling regions, so this argument must always be set to True. It exists only to make sure the interface is compatible with lasagne.layers.MaxPool2DLayer.

mode : string

Pooling mode, one of ‘max’, ‘average_inc_pad’ or ‘average_exc_pad’. Defaults to ‘max’.

**kwargs

Any additional keyword arguments are passed to the Layer superclass.

Notes

The value used to pad the input is chosen to be less than the minimum of the input, so that the output of each pooling region always corresponds to some element in the unpadded input region.

This is a drop-in replacement for lasagne.layers.MaxPool2DLayer. Its interface is the same, except it does not support ignore_border=False.
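Examples

A minimal usage sketch, assuming a cuDNN-enabled Theano installation (the expected output shape follows from non-overlapping 2x2 pooling):

>>> from lasagne.layers import InputLayer
>>> from lasagne.layers import dnn
>>> l_in = InputLayer((None, 32, 28, 28))  # (batch, channels, rows, columns)
>>> l_pool = dnn.Pool2DDNNLayer(l_in, pool_size=2, stride=2, mode='max')
>>> l_pool.output_shape
(None, 32, 14, 14)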

class lasagne.layers.dnn.MaxPool2DDNNLayer(incoming, pool_size, stride=None, pad=(0, 0), ignore_border=True, **kwargs)[source]

2D max-pooling layer

Subclass of Pool2DDNNLayer fixing mode='max', provided for compatibility with other MaxPool2DLayer classes.

class lasagne.layers.dnn.Pool3DDNNLayer(incoming, pool_size, stride=None, pad=(0, 0, 0), ignore_border=True, mode='max', **kwargs)[source]

3D pooling layer

Performs 3D mean- or max-pooling over the three trailing axes of a 5D input tensor. This is an alternative implementation which uses theano.sandbox.cuda.dnn.dnn_pool directly.

Parameters:

incoming : a Layer instance or tuple

The layer feeding into this layer, or the expected input shape.

pool_size : integer or iterable

The length of the pooling region in each dimension. If an integer, it is promoted to a cubic pooling region. If an iterable, it should have three elements.

stride : integer, iterable or None

The strides between successive pooling regions in each dimension. If None, then stride = pool_size.

pad : integer or iterable

Number of elements to be added on each side of the input in each dimension. Each value must be less than the corresponding stride.

ignore_border : bool (default: True)

This implementation never includes partial pooling regions, so this argument must always be set to True. It exists only to make sure the interface is compatible with lasagne.layers.MaxPool3DLayer.

mode : string

Pooling mode, one of ‘max’, ‘average_inc_pad’ or ‘average_exc_pad’. Defaults to ‘max’.

**kwargs

Any additional keyword arguments are passed to the Layer superclass.

Notes

The value used to pad the input is chosen to be less than the minimum of the input, so that the output of each pooling region always corresponds to some element in the unpadded input region.
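Examples

A minimal usage sketch, assuming a cuDNN-enabled Theano installation (non-overlapping 2x2x2 pooling halves each of the three trailing dimensions):

>>> from lasagne.layers import InputLayer
>>> from lasagne.layers import dnn
>>> l_in = InputLayer((None, 16, 8, 32, 32))  # (batch, channels, depth, rows, columns)
>>> l_pool = dnn.Pool3DDNNLayer(l_in, pool_size=(2, 2, 2), mode='average_inc_pad')
>>> l_pool.output_shape
(None, 16, 4, 16, 16)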

class lasagne.layers.dnn.MaxPool3DDNNLayer(incoming, pool_size, stride=None, pad=(0, 0, 0), ignore_border=True, **kwargs)[source]

3D max-pooling layer

Subclass of Pool3DDNNLayer fixing mode='max', provided for consistency with the MaxPool2DLayer classes.

class lasagne.layers.dnn.Conv2DDNNLayer(incoming, num_filters, filter_size, stride=(1, 1), pad=0, untie_biases=False, W=lasagne.init.GlorotUniform(), b=lasagne.init.Constant(0.), nonlinearity=lasagne.nonlinearities.rectify, flip_filters=False, **kwargs)[source]

2D convolutional layer

Performs a 2D convolution on its input and optionally adds a bias and applies an elementwise nonlinearity. This is an alternative implementation which uses theano.sandbox.cuda.dnn.dnn_conv directly.

Parameters:

incoming : a Layer instance or a tuple

The layer feeding into this layer, or the expected input shape. The output of this layer should be a 4D tensor, with shape (batch_size, num_input_channels, input_rows, input_columns).

num_filters : int

The number of learnable convolutional filters this layer has.

filter_size : int or iterable of int

An integer or a 2-element tuple specifying the size of the filters.

stride : int or iterable of int

An integer or a 2-element tuple specifying the stride of the convolution operation.

pad : int, iterable of int, ‘full’, ‘same’ or ‘valid’ (default: 0)

By default, the convolution is only computed where the input and the filter fully overlap (a valid convolution). When stride=1, this yields an output that is smaller than the input by filter_size - 1. The pad argument allows you to implicitly pad the input with zeros, extending the output size.

A single integer results in symmetric zero-padding of the given size on all borders, a tuple of two integers allows different symmetric padding per dimension.

'full' pads with one less than the filter size on both sides. This is equivalent to computing the convolution wherever the input and the filter overlap by at least one position.

'same' pads with half the filter size (rounded down) on both sides. When stride=1 this results in an output size equal to the input size. Even filter sizes are not supported.

'valid' is an alias for 0 (no padding / a valid convolution).

Note that 'full' and 'same' can be faster than equivalent integer values due to optimizations by Theano.

untie_biases : bool (default: False)

If False, the layer will have a bias parameter for each channel, which is shared across all positions in this channel. As a result, the b attribute will be a vector (1D).

If True, the layer will have separate bias parameters for each position in each channel. As a result, the b attribute will be a 3D tensor.

W : Theano shared variable, expression, numpy array or callable

Initial value, expression or initializer for the weights. This should be a 4D tensor with shape (num_filters, num_input_channels, filter_rows, filter_columns). See lasagne.utils.create_param() for more information.

b : Theano shared variable, expression, numpy array, callable or None

Initial value, expression or initializer for the biases. If set to None, the layer will have no biases. Otherwise, biases should be a 1D array with shape (num_filters,) if untie_biases is set to False. If it is set to True, its shape should be (num_filters, output_rows, output_columns) instead. See lasagne.utils.create_param() for more information.

nonlinearity : callable or None

The nonlinearity that is applied to the layer activations. If None is provided, the layer will be linear.

flip_filters : bool (default: False)

Whether to flip the filters and perform a convolution, or not to flip them and perform a correlation. Flipping adds a bit of overhead, so it is disabled by default. In most cases this does not make a difference anyway because the filters are learned. However, flip_filters should be set to True if, for example, weights are loaded that were learned using a regular lasagne.layers.Conv2DLayer.

**kwargs

Any additional keyword arguments are passed to the Layer superclass.

Attributes

W : Theano shared variable or expression

Variable or expression representing the filter weights.

b : Theano shared variable or expression

Variable or expression representing the biases.
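Examples

A minimal usage sketch, assuming a cuDNN-enabled Theano installation (with pad='same' and stride 1, the output rows and columns match the input, as described above):

>>> from lasagne.layers import InputLayer
>>> from lasagne.layers import dnn
>>> l_in = InputLayer((None, 3, 64, 64))  # (batch, channels, rows, columns)
>>> l_conv = dnn.Conv2DDNNLayer(l_in, num_filters=32, filter_size=(3, 3), pad='same')
>>> l_conv.output_shape
(None, 32, 64, 64)

To reuse weights that were learned by a regular lasagne.layers.Conv2DLayer, pass flip_filters=True as explained above.
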
class lasagne.layers.dnn.Conv3DDNNLayer(incoming, num_filters, filter_size, stride=(1, 1, 1), pad=0, untie_biases=False, W=lasagne.init.GlorotUniform(), b=lasagne.init.Constant(0.), nonlinearity=lasagne.nonlinearities.rectify, flip_filters=False, **kwargs)[source]

3D convolutional layer

Performs a 3D convolution on its input and optionally adds a bias and applies an elementwise nonlinearity. This implementation uses theano.sandbox.cuda.dnn.dnn_conv3d directly.

Parameters:

incoming : a Layer instance or a tuple

The layer feeding into this layer, or the expected input shape. The output of this layer should be a 5D tensor, with shape (batch_size, num_input_channels, input_depth, input_rows, input_columns).

num_filters : int

The number of learnable convolutional filters this layer has.

filter_size : int or iterable of int

An integer or a 3-element tuple specifying the size of the filters.

stride : int or iterable of int

An integer or a 3-element tuple specifying the stride of the convolution operation.

pad : int, iterable of int, ‘full’, ‘same’ or ‘valid’ (default: 0)

By default, the convolution is only computed where the input and the filter fully overlap (a valid convolution). When stride=1, this yields an output that is smaller than the input by filter_size - 1. The pad argument allows you to implicitly pad the input with zeros, extending the output size.

A single integer results in symmetric zero-padding of the given size on all borders, a tuple of three integers allows different symmetric padding per dimension.

'full' pads with one less than the filter size on both sides. This is equivalent to computing the convolution wherever the input and the filter overlap by at least one position.

'same' pads with half the filter size (rounded down) on both sides. When stride=1 this results in an output size equal to the input size. Even filter sizes are not supported.

'valid' is an alias for 0 (no padding / a valid convolution).

Note that 'full' and 'same' can be faster than equivalent integer values due to optimizations by Theano.

untie_biases : bool (default: False)

If False, the layer will have a bias parameter for each channel, which is shared across all positions in this channel. As a result, the b attribute will be a vector (1D).

If True, the layer will have separate bias parameters for each position in each channel. As a result, the b attribute will be a 4D tensor.

W : Theano shared variable, expression, numpy array or callable

Initial value, expression or initializer for the weights. This should be a 5D tensor with shape (num_filters, num_input_channels, filter_depth, filter_rows, filter_columns). See lasagne.utils.create_param() for more information.

b : Theano shared variable, expression, numpy array, callable or None

Initial value, expression or initializer for the biases. If set to None, the layer will have no biases. Otherwise, biases should be a 1D array with shape (num_filters,) if untie_biases is set to False. If it is set to True, its shape should be (num_filters, output_depth, output_rows, output_columns) instead. See lasagne.utils.create_param() for more information.

nonlinearity : callable or None

The nonlinearity that is applied to the layer activations. If None is provided, the layer will be linear.

flip_filters : bool (default: False)

Whether to flip the filters and perform a convolution, or not to flip them and perform a correlation. Flipping adds a bit of overhead, so it is disabled by default. In most cases this does not make a difference anyway because the filters are learned, but if you want to compute predictions with pre-trained weights, take care whether they need flipping.

**kwargs

Any additional keyword arguments are passed to the Layer superclass.

Attributes

W : Theano shared variable or expression

Variable or expression representing the filter weights.

b : Theano shared variable or expression

Variable or expression representing the biases.
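Examples

A minimal usage sketch, assuming a cuDNN-enabled Theano installation (pad=1 with a 3x3x3 filter and stride 1 preserves each of the three trailing dimensions):

>>> from lasagne.layers import InputLayer
>>> from lasagne.layers import dnn
>>> l_in = InputLayer((None, 1, 16, 64, 64))  # (batch, channels, depth, rows, columns)
>>> l_conv = dnn.Conv3DDNNLayer(l_in, num_filters=8, filter_size=(3, 3, 3), pad=1)
>>> l_conv.output_shape
(None, 8, 16, 64, 64)
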
class lasagne.layers.dnn.SpatialPyramidPoolingDNNLayer(incoming, pool_dims=[4, 2, 1], mode='max', **kwargs)[source]

Spatial Pyramid Pooling Layer

Performs spatial pyramid pooling (SPP) over the input. It will turn a 2D input of arbitrary size into an output of fixed dimension. Hence, the convolutional part of a DNN can be connected to a dense part with a fixed number of nodes even if the dimensions of the input image are unknown.

The pooling is performed over \(l\) pooling levels. Each pooling level \(i\) creates \(M_i = n_i \times n_i\) output features, where \(n_i\) is the number of pooling operations per dimension at level \(i\). The list of the \(n_i\) values is passed as a parameter to the SPP layer; the length of this list is \(l\), the number of levels of the spatial pyramid.

Parameters:

incoming : a Layer instance or tuple

The layer feeding into this layer, or the expected input shape.

pool_dims : list of integers

The list of \(n_i\) values that define the output dimension of each pooling level \(i\). The length of pool_dims is the number of levels of the spatial pyramid.

mode : string

Pooling mode, one of ‘max’, ‘average_inc_pad’ or ‘average_exc_pad’. Defaults to ‘max’.

**kwargs

Any additional keyword arguments are passed to the Layer superclass.

Notes

This layer should be inserted between the convolutional part of a DNN and its dense part. Convolutions can be used for arbitrary input dimensions, but the size of their output will depend on their input dimensions. Connecting the output of the convolutional part to the dense part then usually requires fixing the dimensions of the network's InputLayer. The spatial pyramid pooling layer, however, allows us to leave the network input dimensions arbitrary. The advantage over a global pooling layer is the added robustness against object deformations due to the pooling on different scales.

References

[R29] He, Kaiming et al. (2015): Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. http://arxiv.org/pdf/1406.4729.pdf
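
Examples

A minimal usage sketch, assuming a cuDNN-enabled Theano installation. With pool_dims=[4, 2, 1], each channel contributes 16 + 4 + 1 = 21 pooled values, so 32 input channels are expected to yield 672 output features per example, regardless of the rows and columns of the input:

>>> from lasagne.layers import InputLayer, DenseLayer
>>> from lasagne.layers import dnn
>>> l_in = InputLayer((None, 32, None, None))  # rows and columns left unspecified
>>> l_spp = dnn.SpatialPyramidPoolingDNNLayer(l_in, pool_dims=[4, 2, 1])
>>> l_spp.output_shape
(None, 672)
>>> l_dense = DenseLayer(l_spp, num_units=256)  # fixed-size dense part can follow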