neural-classifier

API documentation

Neural network class and accessors
neural-network
Superclasses: (t)
Metaclass: standard-class
Default Initargs: nil
Class for neural networks
  • layout
    Number of neurons in each layer of the network
    Allocation: instance
    Type: list
    Initarg: :layout
    Initform: (error "Specify number of neurons in each layer")
    Readers: (neural-network-layout)
  • activation-funcs
    List of activation functions.
    Allocation: instance
    Type: list
    Initarg: :activation-funcs
    Initform: nil
    Accessors: (neural-network-activation-funcs)
  • weights
    Weight matrices for each layer
    Allocation: instance
    Type: list
    Accessors: (neural-network-weights)
  • biases
    Bias vectors for each layer
    Allocation: instance
    Type: list
    Accessors: (neural-network-biases)
  • input-trans
    Function which translates an input object to a vector
    Allocation: instance
    Type: function
    Initarg: :input-trans
    Initform: (function identity)
    Accessors: (neural-network-input-trans)
  • output-trans
    Function which translates an output vector to a label.
    Allocation: instance
    Type: function
    Initarg: :output-trans
    Initform: (function identity)
    Accessors: (neural-network-output-trans)
  • input-trans%
    Function which translates an input object to a vector (used for training)
    Allocation: instance
    Type: function
    Initarg: :input-trans%
    Initform: (function identity)
    Accessors: (neural-network-input-trans%)
  • label-trans
    Function which translates a label to a vector
    Allocation: instance
    Type: function
    Initarg: :label-trans
    Initform: (function identity)
    Accessors: (neural-network-label-trans)
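A minimal sketch of inspecting a network through these readers and accessors. It assumes that make-neural-network (described below) provides usable defaults for the keyword arguments that are omitted here:

    ;; Create a tiny network and look at its structure.
    (defvar *net* (make-neural-network '(4 3 2)))

    (neural-network-layout *net*)            ; => (4 3 2)
    (length (neural-network-weights *net*))  ; => 2, one weight matrix per non-input layer
    (length (neural-network-biases *net*))   ; => 2, one bias column per non-input layer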
Functions
make-neural-network(layout &key input-trans output-trans input-trans% label-trans activation-funcs)
Create a new neural network.
  • layout is a list of positive integers which specifies the number of neurons in each layer of the network (starting from the input layer).
  • activation-funcs is a list whose elements are objects of type activation. The length of this list must be equal to the length of layout minus one, because the input layer does not have an activation function. The last element must be of type output-layer-activation and all elements but the last must be of type hidden-layer-activation.
  • input-trans is a function which is applied to an object passed to calculate to transform it into an input column (that is, a matrix of type magicl:matrix/single-float and shape Nx1, where N is the first number in layout). For example, if we are recognizing digits from the MNIST set, this function can take the index of an image in the set and return a 784x1 matrix.
  • output-trans is a function which is applied to the output of the calculate function (a matrix of type magicl:matrix/single-float and shape Mx1, where M is the last number in layout) to return some object with a user-defined meaning (called a label). Again, if we are recognizing digits, this function transforms a 10x1 matrix into a digit from 0 to 9.
  • input-trans% is just like input-trans, but is used during training. It can include additional transformations to extend your training set (e.g. it can add some noise to the input data, rotate an input picture by a small random angle, etc.).
  • label-trans is a function which is applied to a label to get a column (a matrix of type magicl:matrix/single-float and shape Mx1, where M is the last number in layout) which is the optimal output of the network for that object. With digit recognition, this function may take a digit n and return a 10x1 matrix of all zeros, with the exception of the n-th element, which would be 1f0.
The default value for all transformation functions is identity.
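A sketch of creating a digit-recognition network. It assumes that the input objects are already stored as 784x1 magicl:matrix/single-float columns (so the default identity input-trans suffices) and that the omitted activation-funcs argument falls back to usable defaults; digit->column is a hypothetical helper defined for illustration:

    (defun digit->column (digit)
      "Hypothetical label-trans: a 10x1 column with 1.0 at position DIGIT."
      (magicl:from-list
       (loop for i below 10 collect (if (= i digit) 1f0 0f0))
       '(10 1) :type 'single-float))

    (defvar *network*
      (make-neural-network '(784 64 10)
                           ;; input objects are assumed to be 784x1 columns already,
                           ;; so the default identity :input-trans is enough
                           :label-trans  #'digit->column
                           :output-trans #'idx-abs-max)) ; output column -> digit from 0 to 9 (see below)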
calculate(neural-network object)
Calculate the output of the network neural-network for the object object. The input transformation function (specified by :input-trans when creating the network) is applied to the object, and the output transformation function (specified by :output-trans) is applied to the Mx1 output matrix produced by the network.
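With the hypothetical network sketched above, classifying one object is a single call; some-image-column stands for any object the network's :input-trans understands (here, a 784x1 column):

    (calculate *network* some-image-column) ; => a digit from 0 to 9, produced by :output-trans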
train-epoch(neural-network generator &key (optimizer (make-instance (quote sgd-optimizer))))
Train neural-network on every object returned by the generator generator. Each item returned by generator must be a cons pair of the form (data-object . label). The input-trans% and label-trans functions passed to make-neural-network are applied to the car and cdr of each pair respectively.
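A hedged sketch of one training epoch. The exact generator protocol expected by train-epoch is not spelled out above, so the closure-based generator, the make-training-generator helper and the *train-images*/*train-digits* vectors below are all assumptions made for illustration:

    (defun make-training-generator (samples labels)
      "Hypothetical helper: return a closure yielding (data-object . label)
    conses one by one, and NIL once the data is exhausted."
      (let ((i -1))
        (lambda ()
          (when (< (incf i) (length samples))
            (cons (aref samples i) (aref labels i))))))

    (train-epoch *network*
                 (make-training-generator *train-images* *train-digits*)
                 :optimizer (make-instance 'sgd-optimizer))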
rate(neural-network generator &key (test (function eql)))
Calculate the accuracy of neural-network (the ratio of correctly guessed samples to all samples) using testing data from the generator generator. Each item returned by generator must be a cons pair of the form (data-object . label), as with the train-epoch function. test is a function used to compare the expected label with the label returned by the network.
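Under the same assumptions as the training sketch above, accuracy on a held-out test set could be estimated like this; *test-images* and *test-digits* are hypothetical vectors of test data:

    (rate *network*
          (make-training-generator *test-images* *test-digits*)
          :test #'=)  ; labels are digits here, so numeric comparison is sufficient
    ;; => fraction of correctly classified samples, e.g. 0.97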
idx-abs-max(matrix)
Return the index of the first element with the maximal absolute value, by calling the isamax() function from BLAS. Works only for rows or columns (i.e. Nx1 or 1xN matrices).
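For instance, applied to a 3x1 column (whether the returned index is zero- or one-based follows the library's convention, which is not spelled out here):

    (idx-abs-max
     (magicl:from-list '(0.1 -0.9 0.3) '(3 1) :type 'single-float))
    ;; => the index of the -0.9 entry, the element with the largest magnitude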