SHOGUN
3.2.1
|
A generic multi-layer neural network.
A neural network is constructed using an array of CNeuralLayer objects. The CNeuralLayer class defines the interface necessary for forward propagation and backpropagation.
The network can be constructed as any arbitrary directed acyclic graph.
How to use the network:
- Prepare a CDynamicObjectArray of CNeuralLayer objects that specifies the layers of the network, and pass it to the constructor or to set_layers(). The array must contain at least one input layer; the last layer in the array is treated as the output layer.
- Specify how the layers are connected together, using connect() and/or quick_connect().
- Call initialize().
- Set the training parameters if the defaults are not suitable.
- Call set_labels() and train().
- Apply the trained network with apply_binary(), apply_multiclass() or apply_regression(), or use it as a feature transformation.
(A complete example sketch follows the notes below.)
Supported feature types: CDenseFeatures<float64_t>
Supported label types: CBinaryLabels, CMulticlassLabels, CRegressionLabels
The neural network can be trained using L-BFGS (default) or mini-batch gradient descent.
NOTE: L-BFGS does not work properly when using dropout/max-norm regularization due to their stochastic nature. Use gradient descent instead.
During training, the error at each iteration is logged as MSG_INFO (to turn on info messages, call io.set_loglevel(MSG_INFO)).
The network stores the parameters of all the layers in a single array. This makes it easy to train a network of any combination of arbitrary layer types using any optimization method (gradient descent, L-BFGS, ...).
All the matrices the network (and related classes) deal with are in column-major format.
When implementing new layer types, the function check_gradients() can be used to make sure the gradient computations are correct.
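A minimal end-to-end sketch of this workflow in C++ (the concrete layer types CNeuralInputLayer, CNeuralLogisticLayer and CNeuralSoftmaxLayer and the toy data are illustrative assumptions; any CNeuralLayer subclasses can be used in their place):

    #include <shogun/base/init.h>
    #include <shogun/features/DenseFeatures.h>
    #include <shogun/labels/MulticlassLabels.h>
    #include <shogun/lib/DynamicObjectArray.h>
    #include <shogun/neuralnets/NeuralNetwork.h>
    #include <shogun/neuralnets/NeuralInputLayer.h>
    #include <shogun/neuralnets/NeuralLogisticLayer.h>
    #include <shogun/neuralnets/NeuralSoftmaxLayer.h>

    using namespace shogun;

    int main()
    {
        init_shogun_with_defaults();

        // Toy dataset: 4 examples with 2 features each (one example per column, column-major)
        SGMatrix<float64_t> X(2, 4);
        X(0,0)=0; X(1,0)=0;  X(0,1)=0; X(1,1)=1;
        X(0,2)=1; X(1,2)=0;  X(0,3)=1; X(1,3)=1;
        SGVector<float64_t> y(4);
        y[0]=0; y[1]=1; y[2]=1; y[3]=0;

        CDenseFeatures<float64_t>* features = new CDenseFeatures<float64_t>(X);
        CMulticlassLabels* labels = new CMulticlassLabels(y);

        // Layers: input, one hidden layer, output (the last layer is the output layer)
        CDynamicObjectArray* layers = new CDynamicObjectArray();
        layers->append_element(new CNeuralInputLayer(2));
        layers->append_element(new CNeuralLogisticLayer(8));
        layers->append_element(new CNeuralSoftmaxLayer(2));

        CNeuralNetwork* network = new CNeuralNetwork(layers);
        network->quick_connect();   // connect each layer to the one after it
        network->initialize(0.1);   // random initialization with sigma = 0.1

        network->set_labels(labels);
        network->train(features);

        CMulticlassLabels* predictions = network->apply_multiclass(features);

        SG_UNREF(predictions);
        SG_UNREF(network);
        exit_shogun();
        return 0;
    }

Replace quick_connect() with explicit connect() calls to build a more general DAG, and adjust the public training parameters (optimization_method, gd_learning_rate, ...) before calling train() if the defaults are not suitable.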
Definition at line 107 of file NeuralNetwork.h.
Protected Member Functions | |
virtual bool | train_machine (CFeatures *data=NULL) |
virtual bool | train_gradient_descent (SGMatrix< float64_t > inputs, SGMatrix< float64_t > targets) |
virtual bool | train_lbfgs (SGMatrix< float64_t > inputs, SGMatrix< float64_t > targets) |
virtual SGMatrix< float64_t > | forward_propagate (CFeatures *data, int32_t j=-1) |
virtual SGMatrix< float64_t > | forward_propagate (SGMatrix< float64_t > inputs, int32_t j=-1) |
virtual void | set_batch_size (int32_t batch_size) |
virtual float64_t | compute_gradients (SGMatrix< float64_t > inputs, SGMatrix< float64_t > targets, SGVector< float64_t > gradients) |
virtual float64_t | compute_error (SGMatrix< float64_t > inputs, SGMatrix< float64_t > targets) |
virtual float64_t | compute_error (SGMatrix< float64_t > targets) |
virtual bool | is_label_valid (CLabels *lab) const |
CNeuralLayer * | get_layer (int32_t i) |
SGMatrix< float64_t > | features_to_matrix (CFeatures *features) |
SGMatrix< float64_t > | labels_to_matrix (CLabels *labs) |
virtual void | store_model_features () |
virtual bool | train_require_labels () const |
virtual TParameter * | migrate (DynArray< TParameter * > *param_base, const SGParamInfo *target) |
virtual void | one_to_one_migration_prepare (DynArray< TParameter * > *param_base, const SGParamInfo *target, TParameter *&replacement, TParameter *&to_migrate, char *old_name=NULL) |
virtual void | load_serializable_pre () throw (ShogunException) |
virtual void | load_serializable_post () throw (ShogunException) |
virtual void | save_serializable_pre () throw (ShogunException) |
virtual void | save_serializable_post () throw (ShogunException) |
Protected Attributes | |
int32_t | m_num_inputs |
int32_t | m_num_layers |
CDynamicObjectArray * | m_layers |
SGMatrix< bool > | m_adj_matrix |
int32_t | m_total_num_parameters |
SGVector< float64_t > | m_params |
SGVector< bool > | m_param_regularizable |
SGVector< int32_t > | m_index_offsets |
int32_t | m_batch_size |
bool | m_is_training |
float64_t | m_max_train_time |
CLabels * | m_labels |
ESolverType | m_solver_type |
bool | m_store_model_features |
bool | m_data_locked |
Friends | |
class | CDeepBeliefNetwork |
CNeuralNetwork | ( | ) |
default constructor
Definition at line 43 of file NeuralNetwork.cpp.
CNeuralNetwork | ( | CDynamicObjectArray * | layers | ) |
Sets the layers of the network
layers | An array of CNeuralLayer objects specifying the layers of the network. Must contain at least one input layer. The last layer in the array is treated as the output layer |
Definition at line 49 of file NeuralNetwork.cpp.
|
virtual |
Definition at line 151 of file NeuralNetwork.cpp.
apply machine to data; if data is not specified, apply to the current features
data | (test)data to be classified |
Definition at line 160 of file Machine.cpp.
|
virtual |
apply machine to data for a binary classification problem
Reimplemented from CMachine.
Definition at line 156 of file NeuralNetwork.cpp.
|
virtualinherited |
apply machine to data for a latent problem
Reimplemented in CLinearLatentMachine.
Definition at line 240 of file Machine.cpp.
Applies a locked machine on a set of indices. Error if machine is not locked
indices | index vector (of locked features) that is predicted |
Definition at line 195 of file Machine.cpp.
|
virtualinherited |
applies a locked machine on a set of indices for binary problems
Reimplemented in CKernelMachine, and CMultitaskLinearMachine.
Definition at line 246 of file Machine.cpp.
|
virtualinherited |
applies a locked machine on a set of indices for latent problems
Definition at line 274 of file Machine.cpp.
|
virtualinherited |
applies a locked machine on a set of indices for multiclass problems
Definition at line 260 of file Machine.cpp.
|
virtualinherited |
applies a locked machine on a set of indices for regression problems
Reimplemented in CKernelMachine.
Definition at line 253 of file Machine.cpp.
|
virtualinherited |
applies a locked machine on a set of indices for structured problems
Definition at line 267 of file Machine.cpp.
|
virtual |
apply machine to data for a multiclass classification problem
Reimplemented from CMachine.
Definition at line 191 of file NeuralNetwork.cpp.
|
virtualinherited |
applies to one vector
Reimplemented in CKernelMachine, CRelaxedTree, CWDSVMOcas, COnlineLinearMachine, CLinearMachine, CMultitaskLinearMachine, CMulticlassMachine, CKNN, CDistanceMachine, CMultitaskLogisticRegression, CMultitaskLeastSquaresRegression, CScatterSVM, CGaussianNaiveBayes, CPluginEstimate, and CFeatureBlockLogisticRegression.
|
virtual |
apply machine to data for a regression problem
Reimplemented from CMachine.
Definition at line 179 of file NeuralNetwork.cpp.
|
virtualinherited |
apply machine to data for a structured-output classification problem
Reimplemented in CLinearStructuredOutputMachine.
Definition at line 234 of file Machine.cpp.
|
inherited |
Builds a dictionary of all parameters in SGObject as well as those of SGObjects that are parameters of this object. The dictionary maps parameters to the objects that own them.
dict | dictionary of parameters to be built. |
Definition at line 1185 of file SGObject.cpp.
Checks if the gradients computed using backpropagation are correct by comparing them with gradients computed using numerical approximation. Used for testing purposes only.
Gradients are numerically approximated according to:
\[ c = \max(\epsilon x, s) \]
\[ f'(x) = \frac{f(x + c) - f(x - c)}{2c} \]
approx_epsilon | Constant used during gradient approximation |
s | Some small value, used to prevent division by zero |
Definition at line 508 of file NeuralNetwork.cpp.
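For reference, the same central-difference scheme written as a standalone helper (an illustrative sketch, not check_gradients() itself; the absolute value is used here only to keep the step size positive):

    #include <algorithm>
    #include <cmath>
    #include <functional>

    // Numerically approximates f'(x) using the central difference shown above,
    // with step c = max(approx_epsilon*|x|, s).
    double numerical_derivative(const std::function<double(double)>& f,
                                double x, double approx_epsilon, double s)
    {
        double c = std::max(approx_epsilon*std::abs(x), s);
        return (f(x + c) - f(x - c)) / (2.0*c);
    }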
|
virtualinherited |
Creates a clone of the current object. This is done via recursively traversing all parameters, which corresponds to a deep copy. Calling equals on the cloned object always returns true although none of the memory of both objects overlaps.
Definition at line 1302 of file SGObject.cpp.
|
protectedvirtual |
Forward propagates the inputs and computes the error between the output layer's activations and the given target activations.
inputs | inputs to the network, a matrix of size m_num_inputs*m_batch_size |
targets | desired values for the network's output, matrix of size num_neurons_output_layer*batch_size |
Definition at line 500 of file NeuralNetwork.cpp.
Computes the error between the output layer's activations and the given target activations.
targets | desired values for the network's output, matrix of size num_neurons_output_layer*batch_size |
Reimplemented in CDeepAutoencoder, and CAutoencoder.
Definition at line 473 of file NeuralNetwork.cpp.
|
protectedvirtual |
Applies backpropagation to compute the gradients of the error with respect to every parameter in the network.
inputs | inputs to the network, a matrix of size m_num_inputs*m_batch_size |
targets | desired values for the output layer's activations. matrix of size m_layers[m_num_layers-1].get_num_neurons()*m_batch_size |
gradients | array to be filled with gradient values. |
Definition at line 421 of file NeuralNetwork.cpp.
|
virtual |
Connects layer i as input to layer j. In order for forward propagation and backpropagation to work correctly, i must be less than j.
Definition at line 73 of file NeuralNetwork.cpp.
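For example, assuming a network whose layers array holds an input layer (0), two parallel hidden layers (1 and 2) and an output layer (3), a small DAG can be built with:

    network->connect(0, 1);   // input -> hidden A
    network->connect(0, 2);   // input -> hidden B
    network->connect(1, 3);   // hidden A -> output
    network->connect(2, 3);   // hidden B -> output

Note that every call satisfies the requirement i < j.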
Locks the machine on given labels and data. After this call, only train_locked and apply_locked may be called
Only possible if supports_locking() returns true
labs | labels used for locking |
features | features used for locking |
Reimplemented in CKernelMachine.
Definition at line 120 of file Machine.cpp.
|
virtualinherited |
Unlocks a locked machine and restores previous state
Reimplemented in CKernelMachine.
Definition at line 151 of file Machine.cpp.
|
virtualinherited |
A deep copy. All the instance variables will also be copied.
Definition at line 146 of file SGObject.cpp.
|
virtual |
Disconnects layer i from layer j
Definition at line 86 of file NeuralNetwork.cpp.
|
virtual |
Removes all connections in the network
Definition at line 91 of file NeuralNetwork.cpp.
Recursively compares the current SGObject to another one. Compares all registered numerical parameters, recursion upon complex (SGObject) parameters. Does not compare pointers!
May be overwritten but please do with care! Should not be necessary in most cases.
other | object to compare with |
accuracy | accuracy to use for comparison (optional) |
tolerant | allows lenient check on float equality (within accuracy) |
Definition at line 1206 of file SGObject.cpp.
Ensures the given features are suitable for use with the network and returns their feature matrix
Definition at line 568 of file NeuralNetwork.cpp.
Applies forward propagation, computes the activations of each layer up to layer j
data | input features |
j | layer index at which the propagation should stop. If -1, the propagation continues up to the last layer |
Definition at line 393 of file NeuralNetwork.cpp.
|
protectedvirtual |
Applies forward propagation, computes the activations of each layer up to layer j
inputs | inputs to the network, a matrix of size m_num_inputs*m_batch_size |
j | layer index at which the propagation should stop. If -1, the propagation continues up to the last layer |
Definition at line 400 of file NeuralNetwork.cpp.
|
virtual |
get classifier type
Reimplemented from CMachine.
Definition at line 185 of file NeuralNetwork.h.
|
inherited |
|
inherited |
|
inherited |
|
virtualinherited |
|
protected |
returns a pointer to layer i in the network
Definition at line 677 of file NeuralNetwork.cpp.
returns a copy of a layer's parameters array
i | index of the layer |
Definition at line 666 of file NeuralNetwork.cpp.
CDynamicObjectArray * get_layers | ( | ) |
Returns an array holding the network's layers
Definition at line 698 of file NeuralNetwork.cpp.
|
virtual |
returns type of problem machine solves
Reimplemented from CMachine.
Definition at line 629 of file NeuralNetwork.cpp.
|
inherited |
|
inherited |
Definition at line 1077 of file SGObject.cpp.
|
inherited |
Returns description of a given parameter string, if it exists. SG_ERROR otherwise
param_name | name of the parameter |
Definition at line 1101 of file SGObject.cpp.
|
inherited |
Returns index of model selection parameter with provided name
param_name | name of model selection parameter |
Definition at line 1114 of file SGObject.cpp.
|
virtual |
Returns the name of the SGSerializable instance. It MUST BE the CLASS NAME without the prefixed `C'.
Reimplemented from CMachine.
Reimplemented in CDeepAutoencoder, and CAutoencoder.
Definition at line 229 of file NeuralNetwork.h.
int32_t get_num_inputs | ( | ) |
returns the number of inputs the network takes
Definition at line 221 of file NeuralNetwork.h.
int32_t get_num_outputs | ( | ) |
returns the number of neurons in the output layer
Definition at line 693 of file NeuralNetwork.cpp.
int32_t get_num_parameters | ( | ) |
returns the total number of parameters in the network
Definition at line 215 of file NeuralNetwork.h.
return the network's parameter array
Definition at line 218 of file NeuralNetwork.h.
|
inherited |
|
virtual |
Initializes the network
sigma | standard deviation of the gaussian used to randomly initialize the parameters |
Definition at line 96 of file NeuralNetwork.cpp.
|
inherited |
|
virtualinherited |
If the SGSerializable is a class template then TRUE will be returned and GENERIC is set to the type of the generic.
generic | set to the type of the generic if returning TRUE |
Definition at line 243 of file SGObject.cpp.
|
protectedvirtual |
checks whether the labels are valid.
Subclasses can override this to implement their check of label types.
lab | the labels being checked, guaranteed to be non-NULL |
Reimplemented from CMachine.
Definition at line 643 of file NeuralNetwork.cpp.
converts the given labels into a matrix suitable for use with the network
Definition at line 584 of file NeuralNetwork.cpp.
|
inherited |
maps all parameters of this instance to the provided file version and loads all parameter data from the file into an array, which is sorted (basically calls load_file_parameter(...) for all parameters and puts all results into a sorted array)
file_version | parameter version of the file |
current_version | version from which mapping begins (you want to use Version::get_version_parameter() for this in most cases) |
file | file to load from |
prefix | prefix for members |
Definition at line 648 of file SGObject.cpp.
|
inherited |
Loads some specified parameters from a file with a specified version. The provided parameter info has a version which is recursively mapped until the file parameter version is reached. Note that there may be multiple parameters in the mapping; therefore, a set of TParameter instances is returned.
param_info | information of parameter |
file_version | parameter version of the file, must be <= provided parameter version |
file | file to load from |
prefix | prefix for members |
Definition at line 489 of file SGObject.cpp.
|
virtualinherited |
Load this object from file. If loading fails (returns FALSE), this object will contain inconsistent data and should not be used!
file | where to load from |
prefix | prefix for members |
param_version | (optional) a parameter version different to (this is mainly for testing, better do not use) |
Definition at line 320 of file SGObject.cpp.
|
protectedvirtualinherited |
Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::LOAD_SERIALIZABLE_POST is called.
ShogunException | Will be thrown if an error occurs. |
Reimplemented in CKernel, CWeightedDegreePositionStringKernel, CList, CAlphabet, CLinearHMM, CGaussianKernel, CInverseMultiQuadricKernel, CCircularKernel, and CExponentialKernel.
Definition at line 1004 of file SGObject.cpp.
|
protectedvirtualinherited |
Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::LOAD_SERIALIZABLE_PRE is called.
ShogunException | Will be thrown if an error occurs. |
Reimplemented in CDynamicArray< T >, CDynamicArray< float64_t >, CDynamicArray< float32_t >, CDynamicArray< int32_t >, CDynamicArray< char >, CDynamicArray< bool >, and CDynamicObjectArray.
Definition at line 999 of file SGObject.cpp.
|
inherited |
Takes a set of TParameter instances (base) with a certain version and a set of target parameter infos and recursively maps the base level wise to the current version using CSGObject::migrate(...). The base is replaced. After this call, the base version containing parameters should be of same version/type as the initial target parameter infos. Note for this to work, the migrate methods and all the internal parameter mappings have to match
param_base | set of TParameter instances that are mapped to the provided target parameter infos |
base_version | version of the parameter base |
target_param_infos | set of SGParamInfo instances that specify the target parameter base |
Definition at line 686 of file SGObject.cpp.
|
protectedvirtualinherited |
Creates a new TParameter instance, which contains migrated data from the version that is provided. The provided parameter data base is used for migration; this base is a collection of all parameter data of the previous version. Migration is done FROM the data in param_base TO the provided param info. Migration is always one version step. The method has to be implemented in subclasses; if no match is found, the base method has to be called.
If there is an element in the param_base which equals the target, a copy of the element is returned. This represents the case when nothing has changed and therefore, the migrate method is not overloaded in a subclass
param_base | set of TParameter instances to use for migration |
target | parameter info for the resulting TParameter |
Definition at line 893 of file SGObject.cpp.
|
protectedvirtualinherited |
This method prepares everything for a one-to-one parameter migration. One to one here means that only ONE element of the parameter base is needed for the migration (the one with the same name as the target). Data is allocated for the target (in the type as provided in the target SGParamInfo), and a corresponding new TParameter instance is written to replacement. The to_migrate pointer points to the single needed TParameter instance needed for migration. If a name change happened, the old name may be specified by old_name. In addition, the m_delete_data flag of to_migrate is set to true. So if you want to migrate data, the only thing to do after this call is converting the data in the m_parameter fields. If unsure how to use - have a look into an example for this. (base_migration_type_conversion.cpp for example)
param_base | set of TParameter instances to use for migration |
target | parameter info for the resulting TParameter |
replacement | (used as output) here the TParameter instance which is returned by migration is created into |
to_migrate | the only source that is used for migration |
old_name | with this parameter, a name change may be specified |
Definition at line 833 of file SGObject.cpp.
|
virtualinherited |
Definition at line 209 of file SGObject.cpp.
|
inherited |
prints all parameter registered for model selection and their type
Definition at line 1053 of file SGObject.cpp.
|
virtualinherited |
prints registered parameters out
prefix | prefix for members |
Definition at line 255 of file SGObject.cpp.
|
virtual |
Connects each layer to the layer after it. That is, connects layer i as input to layer i+1 for all i.
Definition at line 79 of file NeuralNetwork.cpp.
|
virtualinherited |
Save this object to file.
file | where to save the object; will be closed during returning if PREFIX is an empty string. |
prefix | prefix for members |
param_version | (optional) a parameter version different to (this is mainly for testing, better do not use) |
Definition at line 261 of file SGObject.cpp.
|
protectedvirtualinherited |
Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::SAVE_SERIALIZABLE_POST is called.
ShogunException | Will be thrown if an error occurs. |
Reimplemented in CKernel.
Definition at line 1014 of file SGObject.cpp.
|
protectedvirtualinherited |
Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::SAVE_SERIALIZABLE_PRE is called.
ShogunException | Will be thrown if an error occurs. |
Reimplemented in CKernel, CDynamicArray< T >, CDynamicArray< float64_t >, CDynamicArray< float32_t >, CDynamicArray< int32_t >, CDynamicArray< char >, CDynamicArray< bool >, and CDynamicObjectArray.
Definition at line 1009 of file SGObject.cpp.
|
protectedvirtual |
Sets the batch size (the number of train/test cases) the network is expected to deal with. Allocates memory for the activations, local gradients and input gradients if necessary (if the batch size is different from its previous value)
batch_size | number of train/test cases the network is expected to deal with. |
Definition at line 558 of file NeuralNetwork.cpp.
|
inherited |
Definition at line 38 of file SGObject.cpp.
|
inherited |
Definition at line 43 of file SGObject.cpp.
|
inherited |
Definition at line 48 of file SGObject.cpp.
|
inherited |
Definition at line 53 of file SGObject.cpp.
|
inherited |
Definition at line 58 of file SGObject.cpp.
|
inherited |
Definition at line 63 of file SGObject.cpp.
|
inherited |
Definition at line 68 of file SGObject.cpp.
|
inherited |
Definition at line 73 of file SGObject.cpp.
|
inherited |
Definition at line 78 of file SGObject.cpp.
|
inherited |
Definition at line 83 of file SGObject.cpp.
|
inherited |
Definition at line 88 of file SGObject.cpp.
|
inherited |
Definition at line 93 of file SGObject.cpp.
|
inherited |
Definition at line 98 of file SGObject.cpp.
|
inherited |
Definition at line 103 of file SGObject.cpp.
|
inherited |
Definition at line 108 of file SGObject.cpp.
|
inherited |
set generic type to T
|
inherited |
|
inherited |
set the parallel object
parallel | parallel object to use |
Definition at line 189 of file SGObject.cpp.
|
inherited |
set the version object
version | version object to use |
Definition at line 230 of file SGObject.cpp.
|
virtual |
set labels
lab | labels |
Reimplemented from CMachine.
Definition at line 650 of file NeuralNetwork.cpp.
|
virtual |
Sets the layers of the network
layers | An array of CNeuralLayer objects specifying the layers of the network. Must contain at least one input layer. The last layer in the array is treated as the output layer |
Reimplemented in CDeepAutoencoder.
Definition at line 55 of file NeuralNetwork.cpp.
|
inherited |
set maximum training time
t | maximum training time |
Definition at line 90 of file Machine.cpp.
|
inherited |
|
virtualinherited |
Setter for store-model-features-after-training flag
store_model | whether model should be stored after training |
Definition at line 115 of file Machine.cpp.
|
virtualinherited |
A shallow copy. All the SGObject instance variables will be simply assigned and SG_REF-ed.
Reimplemented in CGaussianKernel.
Definition at line 140 of file SGObject.cpp.
|
protectedvirtualinherited |
Stores feature data of underlying model. After this method has been called, it is possible to change the machine's feature data and call apply(), which is then performed on the training feature data that is part of the machine's model.
Base method, has to be implemented in order to allow cross-validation and model selection.
NOT IMPLEMENTED! Has to be done in subclasses
Reimplemented in CKernelMachine, CKNN, CLinearMulticlassMachine, CTreeMachine< T >, CTreeMachine< ConditionalProbabilityTreeNodeData >, CTreeMachine< RelaxedTreeNodeData >, CTreeMachine< id3TreeNodeData >, CTreeMachine< VwConditionalProbabilityTreeNodeData >, CTreeMachine< CARTreeNodeData >, CTreeMachine< C45TreeNodeData >, CTreeMachine< CHAIDTreeNodeData >, CTreeMachine< NbodyTreeNodeData >, CLinearMachine, CHierarchical, CDistanceMachine, CGaussianProcessMachine, CKernelMulticlassMachine, and CLinearStructuredOutputMachine.
|
virtualinherited |
Reimplemented in CKernelMachine, and CMultitaskLinearMachine.
|
virtualinherited |
train machine
data | training data (parameter can be avoided if distance or kernel-based classifiers are used and distance/kernels are initialized with train data). If flag is set, model features will be stored after training. |
Reimplemented in CRelaxedTree, CAutoencoder, CSGDQN, and COnlineSVMSGD.
Definition at line 47 of file Machine.cpp.
|
protectedvirtual |
trains the network using gradient descent
Definition at line 244 of file NeuralNetwork.cpp.
trains the network using L-BFGS
Definition at line 324 of file NeuralNetwork.cpp.
Trains a locked machine on a set of indices. Error if machine is not locked
NOT IMPLEMENTED
indices | index vector (of locked features) that is used for training |
Reimplemented in CKernelMachine, and CMultitaskLinearMachine.
|
protectedvirtual |
|
protectedvirtualinherited |
returns whether the machine requires labels for training
Reimplemented in COnlineLinearMachine, CHierarchical, CLinearLatentMachine, CVwConditionalProbabilityTree, CConditionalProbabilityTree, and CLibSVMOneClass.
|
virtual |
Applies the network as a feature transformation
Forward-propagates the data through the network and returns the activations of the last layer
data | Input features |
Reimplemented in CDeepAutoencoder, and CAutoencoder.
Definition at line 205 of file NeuralNetwork.cpp.
|
inherited |
unset generic type
this has to be called in classes specializing a template class
Definition at line 250 of file SGObject.cpp.
|
virtualinherited |
Updates the hash of current parameter combination
Definition at line 196 of file SGObject.cpp.
|
friend |
Definition at line 109 of file NeuralNetwork.h.
float64_t dropout_hidden |
Probability that a hidden-layer neuron will be dropped out. When using this, the recommended value is 0.5
default value 0.0 (no dropout)
For more details on dropout, see paper [Hinton, 2012]
Definition at line 372 of file NeuralNetwork.h.
float64_t dropout_input |
Probability that an input-layer neuron will be dropped out. When using this, a good value might be 0.2
default value 0.0 (no dropout)
For more details on dropout, see this paper [Hinton, 2012]
Definition at line 382 of file NeuralNetwork.h.
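A configuration sketch combining both dropout parameters with mini-batch gradient descent, since L-BFGS does not work properly with dropout (NNOM_GRADIENT_DESCENT is assumed to be the gradient-descent value of ENNOptimizationMethod):

    network->optimization_method = NNOM_GRADIENT_DESCENT;
    network->dropout_input = 0.2;      // drop input-layer neurons with probability 0.2
    network->dropout_hidden = 0.5;     // drop hidden-layer neurons with probability 0.5
    network->gd_mini_batch_size = 100; // mini-batch gradient descent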
float64_t epsilon |
Convergence criterion: training stops when (E' - E)/E < epsilon, where E is the error at the current iteration and E' is the error at the previous iteration. Default value is 1.0e-5
Definition at line 397 of file NeuralNetwork.h.
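The stopping test written out as an illustrative helper (a sketch, not Shogun's internal code):

    // Stop when the relative improvement in error falls below epsilon.
    bool has_converged(double current_error, double previous_error, double epsilon)
    {
        return (previous_error - current_error)/current_error < epsilon;
    }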
float64_t gd_error_damping_coeff |
Used to damp the error fluctuations when stochastic gradient descent is used. Damping is done according to: error_damped(i) = c*error(i) + (1-c)*error_damped(i-1), where c is the damping coefficient.
If -1, the damping coefficient is automatically computed according to: c = 0.99*gd_mini_batch_size/training_set_size + 1e-2.
Default value is -1
Definition at line 441 of file NeuralNetwork.h.
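The damping recurrence as an illustrative helper (a sketch, not Shogun's internal code):

    // error_damped(i) = c*error(i) + (1 - c)*error_damped(i-1)
    double damp_error(double error, double previous_damped_error, double c)
    {
        return c*error + (1.0 - c)*previous_damped_error;
    }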
float64_t gd_learning_rate |
gradient descent learning rate, default value is 0.1
Definition at line 412 of file NeuralNetwork.h.
float64_t gd_learning_rate_decay |
gradient descent learning rate decay; the learning rate is updated at each iteration i according to: alpha(i) = decay*alpha(i-1). Default value is 1.0 (no decay)
Definition at line 419 of file NeuralNetwork.h.
int32_t gd_mini_batch_size |
size of the mini-batch used during gradient descent training; if 0, full-batch training is performed. Default value is 0
Definition at line 409 of file NeuralNetwork.h.
float64_t gd_momentum |
gradient descent momentum multiplier
default value is 0.9
For more details on momentum, see this paper [Sutskever, 2013]
Definition at line 429 of file NeuralNetwork.h.
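An illustrative sketch of the update rule implied by gd_learning_rate, gd_learning_rate_decay and gd_momentum (a schematic, not Shogun's internal implementation):

    // One momentum update over a flat parameter vector of length n.
    // velocity must persist between calls and start at zero; gradients come
    // from backpropagation (compute_gradients()).
    void momentum_step(double* params, const double* gradients, double* velocity,
                       int n, double learning_rate, double momentum)
    {
        for (int k = 0; k < n; k++)
        {
            velocity[k] = momentum*velocity[k] - learning_rate*gradients[k];
            params[k] += velocity[k];
        }
    }

    // At the start of each iteration i: learning_rate *= gd_learning_rate_decay;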
|
inherited |
io
Definition at line 461 of file SGObject.h.
float64_t l1_coefficient |
L1 Regularization coeff, default value is 0.0
Definition at line 362 of file NeuralNetwork.h.
float64_t l2_coefficient |
L2 Regularization coeff, default value is 0.0
Definition at line 359 of file NeuralNetwork.h.
|
protected |
Describes the connections in the network: if there's a connection from layer i to layer j then m_adj_matrix(i,j) = 1.
Definition at line 455 of file NeuralNetwork.h.
|
protected |
number of train/test cases the network is expected to deal with. Default value is 1
Definition at line 477 of file NeuralNetwork.h.
|
protectedinherited |
|
inherited |
parameters wrt which we can compute gradients
Definition at line 476 of file SGObject.h.
|
inherited |
Hash of parameter values
Definition at line 482 of file SGObject.h.
|
protected |
offsets specifying where each layer's parameters and parameter gradients are stored, i.e. layer i's parameters are stored at m_params + m_index_offsets[i]
Definition at line 472 of file NeuralNetwork.h.
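For illustration, how the offsets locate a layer's slice of the flat parameter array (a sketch; CNeuralLayer::get_num_parameters() is an assumed accessor):

    // Layer i's parameters start at m_index_offsets[i] within m_params
    // (and at the same offset within the gradient array).
    float64_t* layer_params = m_params.vector + m_index_offsets[i];
    int32_t num_layer_params = get_layer(i)->get_num_parameters();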
|
protected |
True if the network is currently being trained. Initial value is false
Definition at line 482 of file NeuralNetwork.h.
|
protected |
network's layers
Definition at line 450 of file NeuralNetwork.h.
|
protectedinherited |
|
inherited |
model selection parameters
Definition at line 473 of file SGObject.h.
|
protected |
number of neurons in the input layer
Definition at line 444 of file NeuralNetwork.h.
|
protected |
number of layers
Definition at line 447 of file NeuralNetwork.h.
|
protected |
Array that specifies which parameters are to be regularized. This is used to turn off regularization for bias parameters
Definition at line 466 of file NeuralNetwork.h.
|
inherited |
map for different parameter versions
Definition at line 479 of file SGObject.h.
|
inherited |
parameters
Definition at line 470 of file SGObject.h.
array where all the parameters of the network are stored
Definition at line 461 of file NeuralNetwork.h.
|
protectedinherited |
|
protectedinherited |
|
protected |
total number of parameters in the network
Definition at line 458 of file NeuralNetwork.h.
float64_t max_norm |
Maximum allowable L2 norm for a neuron's weights. When using this, a good value might be 15
default value -1 (max-norm regularization disabled)
Definition at line 389 of file NeuralNetwork.h.
int32_t max_num_epochs |
maximum number of iterations over the training set. If 0, training will continue until convergence. Default value is 0
Definition at line 403 of file NeuralNetwork.h.
ENNOptimizationMethod optimization_method |
Optimization method, default is NNOM_LBFGS
Definition at line 356 of file NeuralNetwork.h.
|
inherited |
parallel
Definition at line 464 of file SGObject.h.
|
inherited |
version
Definition at line 467 of file SGObject.h.