Science of getting computers to act without being explicitly programmed
// Initial weights: w0 is the bias, w1 and w2 weight the two inputs
let w0 = 3, w1 = -4, w2 = 2;

function perceptron(x1, x2) {
  // weighted sum of the inputs plus the bias
  const sum = w0 + w1 * x1 + w2 * x2;
  return activation(sum);
}

function activation(z) {
  // in this case a sigmoid function (alternatives: tanh, linear, ReLU)
  return 1 / (1 + Math.exp(-z));
}
a single perceptron can emulate many basic logic functions (NOT, AND, OR, NAND)
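As a quick sketch (these weights are made up for illustration, not taken from the slides), the perceptron above behaves like AND on 0/1 inputs once the weights are set appropriately:

// Hypothetical weights for AND: the sum only turns positive when both inputs are 1
w0 = -30; w1 = 20; w2 = 20;
console.log(perceptron(0, 0)); // ~0
console.log(perceptron(1, 0)); // ~0
console.log(perceptron(0, 1)); // ~0
console.log(perceptron(1, 1)); // ~1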
perceptron training visualization (initial version provided courtesy of Jed Borovik)
http://www.theprojectspot.com/tutorial-post/introduction-to-artificial-neural-networks-part-1/7
A single perceptron cannot emulate XOR, because separating its outputs would require two lines
https://en.wikipedia.org/wiki/Feedforward_neural_network
Arranging many neurons in layers
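A minimal sketch of that idea, with weights chosen by hand purely for illustration: feeding two perceptron-style neurons (OR and NAND) into a third (AND) yields the XOR that a single neuron cannot represent:

const sigmoid = z => 1 / (1 + Math.exp(-z));
const neuron = (w0, w1, w2) => (x1, x2) => sigmoid(w0 + w1 * x1 + w2 * x2);

const or   = neuron(-10,  20,  20); // hidden neuron 1: OR
const nand = neuron( 30, -20, -20); // hidden neuron 2: NAND
const and  = neuron(-30,  20,  20); // output neuron: AND

const xor = (x1, x2) => and(or(x1, x2), nand(x1, x2));
console.log(xor(0, 0), xor(0, 1), xor(1, 0), xor(1, 1)); // ~0 ~1 ~1 ~0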
Using the TensorFlow Playground
Why would I care how to tell blue from orange spots?
the x and y coordinates of a spot in our example
Just a single layer containing 3 neurons
in our example a single output neuron with tanh activation
again combining all the weighted lines to decide between the two categories: blue or orange
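A rough sketch of that setup (all weights here are invented placeholders; the Playground learns them during training): two inputs feed a hidden layer of 3 tanh neurons, whose outputs a single output neuron combines into the final decision:

const hidden = [
  { b:  0.1, wx:  1.2, wy: -0.7 },
  { b: -0.3, wx:  0.8, wy:  0.5 },
  { b:  0.6, wx: -1.0, wy:  0.9 },
];
const out = { b: 0.2, w: [1.1, -0.4, 0.7] };

function classify(x, y) {
  // hidden layer: each neuron draws one weighted line through the plane
  const h = hidden.map(n => Math.tanh(n.b + n.wx * x + n.wy * y));
  // output neuron: combine all hidden activations into one score
  const z = out.b + h.reduce((sum, hi, i) => sum + out.w[i] * hi, 0);
  return Math.tanh(z) > 0 ? "blue" : "orange";
}
console.log(classify(0.5, -1.0));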
training loss is low, but test loss is much higher
the NN model is too specific to the training values and not general enough (overfitting)
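As a toy illustration of that diagnosis (the model and the data points below are entirely made up): compare the loss on the training set with the loss on held-out test data:

const model = x => Math.sin(3 * x); // hypothetical trained model
const trainData = [{ x: 0.0, y: 0.0 }, { x: 0.5, y: 1.0 }];
const testData  = [{ x: 0.2, y: 0.6 }, { x: 0.8, y: -0.2 }];

const mse = data =>
  data.reduce((sum, d) => sum + (model(d.x) - d.y) ** 2, 0) / data.length;

console.log("train loss:", mse(trainData)); // low
console.log("test loss:", mse(testData));   // much higher: overfitting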
each neuron in one layer feeds every neuron in the next layer (fully connected)
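In code, that full connectivity might look like this sketch (layer sizes and weights are arbitrary): every output neuron sums over all inputs of the previous layer:

// weights[j][i] connects input i to neuron j, so every input reaches every neuron
function denseLayer(inputs, weights, biases, activation) {
  return weights.map((row, j) =>
    activation(biases[j] + row.reduce((sum, w, i) => sum + w * inputs[i], 0))
  );
}

// example: 2 inputs feeding a layer of 3 tanh neurons
console.log(denseLayer([0.5, -1.0],
  [[1.2, -0.7], [0.8, 0.5], [-1.0, 0.9]],
  [0.1, -0.3, 0.6],
  Math.tanh));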
That raises the question of architecture: how many layers, how many neurons, which activation function?
... using searches over a set of hyper-parameters (which might be expensive)
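One such search, as a sketch (train and evaluate are hypothetical stand-ins for a real training run and a validation score): trying every combination in a grid gets expensive quickly, because each combination means a full training run:

// hypothetical stand-ins: a real train() would fit a network,
// a real evaluate() would score it on validation data
const train = config => config;
const evaluate = model => Math.random();

const grid = {
  layers: [1, 2, 3],
  neurons: [2, 4, 8],
  activation: ["tanh", "relu", "sigmoid"],
};
let best = { score: -Infinity, config: null };
for (const layers of grid.layers)
  for (const neurons of grid.neurons)
    for (const activation of grid.activation) {
      const config = { layers, neurons, activation };
      const score = evaluate(train(config)); // 3 * 3 * 3 = 27 full training runs
      if (score > best.score) best = { score, config };
    }
console.log(best.config);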
Or: use a pre-trained network (trained by people who have already done that job for you)
E.g. to recognize dogs (again a classification problem)
using internal representations like the ones visualized here:
https://auduno.github.io/2016/06/18/peeking-inside-convnets/
Dog vs Muffin
TensorFlow is the full-scale counterpart to the Playground
Chihuahua (score = 0.68340)
Pomeranian (score = 0.02451)
Pekinese, Pekingese, Peke (score = 0.00751)
toy terrier (score = 0.00716)
beagle (score = 0.00645)
Using the pre-trained Inception model
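The scores above come from TensorFlow's pre-trained Inception classification demo. As a minimal browser-side analogue, hedged: TensorFlow.js can load a different pre-trained network (MobileNet here, purely to illustrate reusing pre-trained weights; the element id is hypothetical):

import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

const img = document.getElementById('dog');    // an <img> element on the page
const model = await mobilenet.load();          // downloads the pre-trained weights
const predictions = await model.classify(img); // [{ className, probability }, ...]
console.log(predictions);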