caffe

Softmax is the extension of logistic regression to multiple classes.
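
In the standard definition, softmax maps a vector of scores $z$ to a probability distribution:

$$\sigma(z)_j = \frac{e^{z_j}}{\sum_{k} e^{z_k}}$$

With only two classes it reduces to the logistic (sigmoid) function, hence the name.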

  • forward and backward

    The Net::Forward() and Net::Backward() methods carry out the respective passes while Layer::Forward() and Layer::Backward() compute each step.
    The Solver optimizes a model by first calling forward to yield the output and loss, then calling backward to generate the gradient of the model, and then incorporating the gradient into a weight update that attempts to minimize the loss.
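
A rough pycaffe sketch of driving the two passes by hand (the model definition path is a placeholder assumption; in normal training the solver calls these for you):

```python
import caffe

caffe.set_mode_cpu()

# 'train_val.prototxt' is a placeholder path, not a file from this note.
net = caffe.Net('train_val.prototxt', caffe.TRAIN)

# Forward pass: each Layer::Forward() runs in turn; the returned dict
# holds the network's output blobs (typically including the loss).
outputs = net.forward()

# Backward pass: each Layer::Backward() runs in reverse order,
# filling the .diff arrays of blobs and parameters with gradients.
net.backward()
```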

1.Layer

Vision Layers

Vision layers usually take images as input and produce other images as output.

Notation: c = channels, w = width, h = height.

Convolution

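For an h × w input, kernel size k, padding p and stride s, the output spatial size follows the usual convolution arithmetic, and the number of output channels equals the layer's num_output (the number of filters):

$$h_{out} = \left\lfloor \frac{h + 2p - k}{s} \right\rfloor + 1, \qquad w_{out} = \left\lfloor \frac{w + 2p - k}{s} \right\rfloor + 1$$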

Pooling

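Pooling summarizes each local window with a single value; Caffe supports MAX, AVE and STOCHASTIC pooling, the first two being:

$$y = \max_{(i,j)\in\text{window}} x_{ij} \quad\text{(MAX)}, \qquad y = \frac{1}{|\text{window}|}\sum_{(i,j)\in\text{window}} x_{ij} \quad\text{(AVE)}$$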

Local Response Normalization (LRN)

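LRN normalizes each value by the activity in a local region (across adjacent channels, or spatially within a channel). Each input value is divided by

$$\left(1 + \frac{\alpha}{n}\sum_{i} x_i^2\right)^{\beta}$$

where the sum runs over the $n$ values in the local region and $\alpha$, $\beta$, $n$ correspond to the layer's `alpha`, `beta` and `local_size` parameters.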

Loss Layers

Loss drives learning by comparing an output to a target and assigning cost to minimize.
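
The workhorse classification loss is SoftmaxWithLoss, which combines a softmax over the scores with the multinomial logistic loss; for a batch of $N$ samples with true labels $l_n$ and predicted probabilities $\hat{p}_n$:

$$E = -\frac{1}{N}\sum_{n=1}^{N}\log \hat{p}_{n,\,l_n}$$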

  • Hinge / Margin loss

  • Sigmoid Cross-Entropy (SigmoidCrossEntropyLoss)

  • Infogain (InfogainLoss)

  • Accuracy and Top-k
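
For reference, the standard forms of the hinge and sigmoid cross-entropy losses (textbook definitions, averaged over the batch) are:

$$E_{\text{hinge}} = \frac{1}{N}\sum_{n}\sum_{k}\max\!\bigl(0,\; 1 - \delta_{\{k = l_n\}}\, t_{nk}\bigr), \qquad \delta_{\{k=l_n\}} = \begin{cases} +1 & k = l_n \\ -1 & k \ne l_n \end{cases}$$

$$E_{\text{xent}} = -\frac{1}{N}\sum_{n}\bigl[\, p_n \log \hat{p}_n + (1 - p_n)\log(1 - \hat{p}_n) \,\bigr]$$

where $t_{nk}$ are the raw scores, $p_n$ the targets and $\hat{p}_n$ the sigmoid outputs.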

Activation / Neuron Layers

Activation layers are element-wise operators, taking one bottom blob and producing one top blob of the same size.

  • ReLU / Rectified-Linear and Leaky-ReLU : given an input value x, the ReLU layer computes the output as x if x > 0 and negative_slope * x if x <= 0.
  • Sigmoid
  • TanH / Hyperbolic Tangent : tanh(x)
  • Absolute Value
  • Power
  • BNLL : log(1 + exp(x))
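
The element-wise formulas for the activations above (standard definitions; $\nu$ is the ReLU layer's negative_slope):

$$\text{ReLU}(x) = \begin{cases} x & x > 0 \\ \nu x & x \le 0 \end{cases}, \qquad \sigma(x) = \frac{1}{1+e^{-x}}, \qquad \tanh(x) = \frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$$

$$|x|, \qquad \text{Power}(x) = (\text{shift} + \text{scale}\cdot x)^{\text{power}}, \qquad \text{BNLL}(x) = \log(1 + e^{x})$$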

Data Layers

Data enters Caffe through data layers: they lie at the bottom of nets.

Common Layers

  • InnerProduct (fully connected; see the note after this list)
  • Splitting
  • Flattening
  • Reshape
  • Concatenation
  • Slicing
  • Elementwise Operations
  • Argmax
  • Softmax
  • Mean-Variance Normalization (MVN)
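
As a quick note on the first item: InnerProduct is Caffe's fully connected layer. It flattens each input blob to a vector $x$ and applies an affine map

$$y = Wx + b$$

where $W$ and $b$ are the layer's learned weight matrix and bias.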

2.Solver

The solver orchestrates model optimization by coordinating the network’s forward inference and backward gradients to form parameter updates that attempt to improve the loss.
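
A minimal sketch of running a solver from pycaffe (the solver prototxt path is a placeholder; the update rule and hyper-parameters are whatever that file specifies):

```python
import caffe

caffe.set_mode_cpu()

# 'solver.prototxt' is a placeholder path for a solver definition.
solver = caffe.SGDSolver('solver.prototxt')

# Each step() runs one iteration: forward (outputs and loss),
# backward (gradients), then the solver's parameter update.
solver.step(100)

# Snapshot the learned weights to disk.
solver.net.save('weights.caffemodel')
```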

Stochastic gradient descent (SGD)

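The SGD update combines the negative gradient with the momentum of the previous update. With learning rate $\alpha$ and momentum $\mu$:

$$V_{t+1} = \mu V_t - \alpha \nabla L(W_t), \qquad W_{t+1} = W_t + V_{t+1}$$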

AdaGrad

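AdaGrad scales each parameter's step by the history of its squared gradients, so frequently updated parameters take smaller steps. Component-wise:

$$(W_{t+1})_i = (W_t)_i - \alpha \frac{\left(\nabla L(W_t)\right)_i}{\sqrt{\sum_{t'=1}^{t}\left(\nabla L(W_{t'})\right)_i^2}}$$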

Nesterov’s accelerated gradient (NAG)

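NAG differs from plain momentum SGD in that the gradient is evaluated at the position the current momentum would carry the weights to:

$$V_{t+1} = \mu V_t - \alpha \nabla L(W_t + \mu V_t), \qquad W_{t+1} = W_t + V_{t+1}$$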

3.Interfaces

Command Line
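
Training is usually launched as `caffe train -solver solver.prototxt`, and a snapshot can be evaluated with `caffe test -model net.prototxt -weights weights.caffemodel -iterations 100` (file names here are placeholders; `-gpu` selects a device).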