Oliver Zeigermann / @DJCordhose
http://www.theprojectspot.com/tutorial-post/introduction-to-artificial-neural-networks-part-1/7
Perceptron training visualization (initial version kindly provided by Jed Borovik)
Interactive classifier using a deep neural network
layer_defs = [
{type:'input', out_sx:1, out_sy:1, out_depth:2},
{type:'fc', num_neurons:6, activation: 'tanh'},
{type:'fc', num_neurons:2, activation: 'tanh'},
{type:'softmax', num_classes:2}];
net = new convnetjs.Net();
net.makeLayers(layer_defs);
trainer = new convnetjs.Trainer(net);
var point = new convnetjs.Vol(1,1,2); // needs to match input layer
point.w = [3.0, 4.0];
var prediction = net.forward(point);
// probability of classes in .w
if (prediction.w[0] > prediction.w[1]) { /* class 0 is more likely: red */ }
else { /* class 1 is more likely: green */ }
Predictions will also be painted as background colors
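A minimal sketch of that painting step (assumes a 2D canvas context ctx, a SIZE x SIZE pixel canvas, and that the visible input range maps to [-5, 5] on both axes; these names and ranges are assumptions, not part of the demo code):
var SIZE = 300;
var point = new convnetjs.Vol(1, 1, 2);
for (var px = 0; px < SIZE; px++) {
  for (var py = 0; py < SIZE; py++) {
    // map pixel coordinates to data coordinates
    point.w[0] = (px / SIZE) * 10 - 5;
    point.w[1] = (py / SIZE) * 10 - 5;
    var prediction = net.forward(point);
    // color the pixel by the more probable class
    ctx.fillStyle = prediction.w[0] > prediction.w[1] ? 'red' : 'green';
    ctx.fillRect(px, py, 1, 1);
  }
}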
var data = [[-0.4326, 1.1909], [3.0, 4.0], [1.8133, 1.0139 ]];
var labels = [1, 1, 0];
var N = labels.length;
var avloss = 0.0;
var iters = 20;
for (var iter=0; iter < iters; iter++) {
for (var ix=0; ix < N; ix++) {
var point = new convnetjs.Vol(1,1,2);
var label = labels[ix];
point.w = data[ix];
var stats = trainer.train(point, label);
avloss += stats.loss;
}
}
// make this as small as possible
avloss /= N*iters;
ConvNetJS uses stochastic gradient descent (SGD) by default
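The trainer can also be configured explicitly; a minimal sketch (the option names are ConvNetJS trainer options, the values are just example settings):
trainer = new convnetjs.Trainer(net, {
  method: 'sgd',        // 'adadelta' and 'adagrad' are also available
  learning_rate: 0.01,
  momentum: 0.9,
  batch_size: 10,       // examples per parameter update
  l2_decay: 0.001       // weight regularization
});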
Just one hidden layer, number of perceptrons variable
1, 2, 3, 5
Does not classify, but tries to find a continuous function that goes through all data points
Quick Quiz: How many neurons in the hidden layer?
layer_defs = [
{type:'input', out_sx:1, out_sy:1, out_depth:1},
{type:'fc', num_neurons:5, activation:'sigmoid'},
{type:'regression', num_neurons:1}];
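Training and prediction work as in the classifier, except that the target is a continuous value instead of a class index; a minimal sketch (the sample x and y values are made up):
net = new convnetjs.Net();
net.makeLayers(layer_defs);
trainer = new convnetjs.Trainer(net);
var x = new convnetjs.Vol(1, 1, 1);  // needs to match input layer
x.w = [0.5];                         // example input
trainer.train(x, 0.7);               // continuous target value, not a class index
var y = net.forward(x).w[0];         // predicted value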
uses convolutional hidden layers for filtering
Tiny 32x32 color images in 10 classes, 6,000 per class (50,000 training and 10,000 test images, CIFAR-10)
Udacity Course 730, Deep Learning (L3 Convolutional Neural Networks > Convolutional Networks)
layer_defs = [
{type:'input', out_sx:32, out_sy:32, out_depth:3},
{type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'},
{type:'pool', sx:2, stride:2},
{type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'},
{type:'pool', sx:2, stride:2},
{type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'},
{type:'pool', sx:2, stride:2},
{type:'softmax', num_classes:10}];
// 16 5x5 filters will be convolved with the input
// output again 32x32 as stride (step width) is 1
// 2 pixels on each edge are padded with 0, matching the 5x5 filter size
{type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'}
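Why the output stays at 32x32 (standard convolution arithmetic, for reference):
output size = (input - filter + 2 * pad) / stride + 1 = (32 - 5 + 2*2) / 1 + 1 = 32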
convolutional layers remove noise, add semantics
// perform max pooling in 2x2 non-overlapping neighborhoods
// output will be 16x16 as stride (step width) is 2
{type:'pool', sx:2, stride:2}
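The 16x16 follows the same way: output size = (input - pool) / stride + 1 = (32 - 2) / 2 + 1 = 16, i.e. each non-overlapping 2x2 block is reduced to its maximum value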
pooling layers in between provide translation invariance
{type:'softmax', num_classes:10}
assigns a probability to each of the 10 categories
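Reading the result works like in the 2-class example above; a minimal sketch (assumes net has been built from the layer_defs above and image is a 32x32x3 convnetjs.Vol holding one input image):
var prediction = net.forward(image);
// prediction.w holds 10 probabilities that sum to 1
var best = 0;
for (var i = 1; i < prediction.w.length; i++) {
  if (prediction.w[i] > prediction.w[best]) best = i;
}
// best is the index of the most probable class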
http://rinuboney.github.io/2015/10/18/theoretical-motivations-deep-learning.html