Martin Andrews @ redcatlabs.com
Sam Witteveen @ samwitteveen.com
24 June 2017
Problems with Installation? ASK!
Android feature since Jellybean (v4.3, 2012) using Cloud
Trained in ~5 days on an 800-machine cluster
Embedded in phone since Android Lollipop (v5.0, 2014)
Google's Deep Models are on the phone
"Use your camera to translate text instantly in 26 languages"
Translations for typed text in 90 languages
Google Street-View (and ReCaptchas)
(now better than human level)
Some good, some not-so-good
Google DeepMind's AlphaGo
Learn to play Go from (mostly) self-play
Change weights to change output function
Layers of neurons combine and can form more complex functions
Inputs, Outputs, Hidden Layers, Features
→ Improving/Training Network
(diagram: weights are adjusted to reduce the Loss)
output = activation( ( w1*x1 + w2*x2 + ... + wn*xn ) + bias )
y = f( np.dot(ws, xs) + b )
# Now x, y and b are vectors, w is a matrix - for a 'Dense' net
y = f( np.dot(w, x) + b )
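The formula above runs as-is; a minimal numpy sketch of a single Dense-layer forward pass (the sizes and values here are made up for illustration):

```python
import numpy as np

def relu(z):
    # Element-wise ReLU activation: max(0, z)
    return np.maximum(0.0, z)

# Hypothetical sizes: 3 inputs -> 2 outputs
rng = np.random.default_rng(0)
w = rng.standard_normal((2, 3))   # weight matrix (outputs x inputs)
b = np.zeros(2)                   # bias vector
x = np.array([1.0, -0.5, 2.0])    # input vector

y = relu(np.dot(w, x) + b)        # one Dense-layer forward pass
print(y.shape)  # (2,)
```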
for fn in [ 'linear', 'sigmoid', 'tanh', 'relu', 'elu', ]:
model.add( Dense(64, activation=fn) )
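The activation names in the loop above correspond to simple element-wise functions; a plain-numpy sketch of each:

```python
import numpy as np

# Common activations, written out as plain numpy functions
def linear(z):  return z
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def tanh(z):    return np.tanh(z)
def relu(z):    return np.maximum(0.0, z)
def elu(z, alpha=1.0):
    # Exponential linear unit: smooth below zero, identity above
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

z = np.array([-2.0, 0.0, 2.0])
for fn in [linear, sigmoid, tanh, relu, elu]:
    print(fn.__name__, fn(z))
```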
outputs = F( inputs ) # For some complicated 'F()'
model = Model( inputs=inputs, outputs=outputs )
model_regression.compile( optimizer='adam', loss='mean_squared_error' )
model_classifier.compile( optimizer='adam', loss='categorical_crossentropy' )
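Both losses can be written out in a few lines of numpy (a sketch of the maths, not the Keras internals; the `eps` guard against log(0) is an assumption added here):

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # Regression loss: average squared difference
    return np.mean((y_true - y_pred) ** 2)

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    # Classification loss: y_true is one-hot, y_pred is a probability distribution
    return -np.sum(y_true * np.log(y_pred + eps))

print(mean_squared_error(np.array([1.0, 2.0]), np.array([1.0, 4.0])))        # 2.0
print(categorical_crossentropy(np.array([0, 1, 0]), np.array([0.1, 0.8, 0.1])))
```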
Need derivatives of Loss w.r.t every single weight
But after a forward pass, we can push the error backwards...
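As a sketch of what "pushing the error backwards" means for a single linear layer with a squared-error loss (all sizes here are illustrative), the analytic gradient can be checked against a finite-difference estimate:

```python
import numpy as np

# One linear layer, y = w @ x + b, with loss = 0.5 * ||y - t||^2
rng = np.random.default_rng(1)
w = rng.standard_normal((2, 3)); b = np.zeros(2)
x = rng.standard_normal(3);      t = np.array([1.0, -1.0])

y = w @ x + b                 # forward pass
err = y - t                   # dLoss/dy : the error pushed backwards
grad_w = np.outer(err, x)     # dLoss/dw
grad_b = err                  # dLoss/db

# Check one weight's gradient against a finite-difference estimate
eps = 1e-6
w2 = w.copy(); w2[0, 0] += eps
numeric = (0.5 * np.sum((w2 @ x + b - t) ** 2)
           - 0.5 * np.sum((y - t) ** 2)) / eps
print(abs(numeric - grad_w[0, 0]) < 1e-4)  # True
```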
Credit to Alec Radford
My blog : http://mdda.net/
GitHub : mdda
In 2012, Deep Learning started to beat other approaches...
hidden1 = Conv2D(128, 5, strides=(2, 2), padding='same')( input )
hidden2 = MaxPooling2D( pool_size=(2, 2) )( hidden1 )
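The `MaxPooling2D` step above shrinks each spatial dimension by taking the maximum over small windows; a minimal single-channel 2x2 version in plain numpy:

```python
import numpy as np

def max_pool_2x2(img):
    # Non-overlapping 2x2 max-pooling over a single-channel image
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).max(axis=(1, 3))

img = np.array([[1, 2, 0, 1],
                [3, 4, 1, 0],
                [0, 0, 5, 6],
                [0, 0, 7, 8]], dtype=float)
print(max_pool_2x2(img))
# [[4. 1.]
#  [0. 8.]]
```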
Two stacked 3x3 convolutions cover the same 5x5 receptive field with fewer weights: 2 x (3x3) = 18 < 25 = (5x5)
output = Dense(1000, activation='softmax')( Dense(2048, activation='relu')( hidden6 ) )
# this handles the train/test phases behind the scenes
output = Dropout(0.5)( previous_layer )
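Behind the scenes, dropout at train time zeroes units at random and rescales the survivors so that the test-time pass is a no-op; this sketch assumes that "inverted dropout" convention (which Keras uses):

```python
import numpy as np

def dropout(x, rate=0.5, training=True, rng=None):
    # Inverted dropout: scale survivors at train time, no-op at test time
    if not training:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= rate      # keep each unit with prob (1 - rate)
    return x * mask / (1.0 - rate)          # rescale so expectation is unchanged

x = np.ones(1000)
y = dropout(x, rate=0.5, rng=np.random.default_rng(0))
print(y.mean())                             # close to 1.0 on average
print(dropout(x, training=False).mean())    # exactly 1.0
```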
GoogLeNet (2014)
Google Inception-v3 (2015)
Microsoft ResNet (2015)