
Page 1:

Mobile Game Programming:

Artificial Neural Network

[email protected], Division of Digital Contents, DongSeo University.

December 2016

Page 2:

Standard Deviation

The standard deviation (SD, also represented by sigma σ or s) is a measure used to quantify the amount of variation in a set of data values.
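In symbols (the standard formula, added here for reference; it is not on the extracted slide): for data values x₁, …, x_N with mean μ,

σ = √( Σ (xᵢ - μ)² / N ),  where  μ = ( Σ xᵢ ) / N

(For the sample standard deviation s, divide by N - 1 instead of N.)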


Page 3:

Root Mean Square

The root mean square (abbreviated RMS) is defined as the square root of the mean square (the arithmetic mean of the squares of a set of numbers).
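In symbols (standard formula, added for reference): for values x₁, …, x_n,

x_RMS = √( (x₁² + x₂² + … + x_n²) / n )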


Page 4:

Neural Network

Neuron
– More than 100 billion neurons in a brain

Dendrites
– Receive input signals and, based on those inputs, fire an output signal via an axon

Axon
– Carries the output signal away from the neuron

Page 5:

Learning

– Supervised Learning
– Unsupervised Learning
– Reinforcement Learning

Page 6:

Supervised Learning: Essentially, a strategy that involves a teacher that is smarter than the network itself. Take facial recognition: the teacher shows the network a bunch of faces, and the teacher already knows the name associated with each face. The network makes its guesses, then the teacher provides the network with the answers. The network can then compare its answers to the known "correct" ones and make adjustments according to its errors.

Page 7:

Unsupervised Learning: Required when there isn't an example data set with known answers. Imagine searching for a hidden pattern in a data set. An application of this is clustering, i.e. dividing a set of elements into groups according to some unknown pattern.

Reinforcement Learning: A strategy built on observation. Think of a little mouse running through a maze. If it turns left, it gets a piece of cheese; if it turns right, it receives a little shock. Presumably, the mouse will learn over time to turn left. Its neural network makes a decision with an outcome (turn left or right) and observes its environment (yum or ouch). If the observation is negative, the network can adjust its weights in order to make a different decision the next time. Reinforcement learning is common in robotics.

Page 8:

Standard Uses of Neural Networks

Pattern Recognition
– It's probably the most common application. Examples are facial recognition, optical character recognition, etc.

Control
– You may have read about recent research advances in self-driving cars. Neural networks are often used to manage steering decisions of physical vehicles (or simulated ones).

Anomaly Detection
– Because neural networks are so good at recognizing patterns, they can also be trained to generate an output when something occurs that doesn't fit the pattern.

Page 9:

Perceptron

Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory, a perceptron is the simplest neural network possible: a computational model of a single neuron.

Step 1: Receive inputs
Input 0: 12
Input 1: 4

[Diagram: the two inputs feed a single neuron, with weights 0.5 and -1.0 on the connections.]

Page 10:

Step 2: Weight inputs
Weight 0: 0.5
Weight 1: -1

We take each input and multiply it by its weight.
Input 0 * Weight 0 ⇒ 12 * 0.5 = 6
Input 1 * Weight 1 ⇒ 4 * -1 = -4

Step 3: Sum inputs
The weighted inputs are then summed.
Sum = 6 + -4 = 2

Step 4: Generate output
Output = sign(sum) ⇒ sign(2) ⇒ +1

Page 11:

The Perceptron Algorithm:
1. For every input, multiply that input by its weight.
2. Sum all of the weighted inputs.
3. Compute the output of the perceptron based on that sum passed through an activation function (the sign of the sum).

float inputs[]  = { 12, 4 };
float weights[] = { 0.5f, -1.0f };

float sum = 0;
for (int i = 0; i < 2; i++)  // 2 inputs; plain C++ arrays have no .length
    sum += inputs[i] * weights[i];

Page 12:

The Perceptron Algorithm (continued): the sum is passed through the activation function.

float output = activate(sum);

// Return a 1 if positive, -1 if negative.
int activate(float sum)
{
    if (sum > 0) return 1;
    else return -1;
}

Page 13:

Simple Pattern Recognition Using a Perceptron

Consider a line in two-dimensional space. Points in that space can be classified as living on either one side of the line or the other.

Page 14:

Let's say a perceptron has 2 inputs (the x- and y-coordinates of a point).

Using a sign activation function, the output will either be -1 or +1.

In the previous diagram, we can see how each point is either below the line (-1) or above it (+1).

Page 15:

Bias

Consider the point (0,0).
– No matter what the weights are, the sum will always be 0!

The fix is a third input, the bias input, which is always set to 1:

0 * weight for x = 0
0 * weight for y = 0
1 * weight for bias = weight for bias
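In other words (spelling out what the slide implies): with the bias input fixed at 1, the perceptron computes

output = sign( w_x · x + w_y · y + w_bias )

so its decision boundary is the line w_x · x + w_y · y + w_bias = 0. Without the bias term that line would always pass through the origin, and (0,0) could never be classified.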


Page 16:

Coding the Perceptron

class Perceptron
{
private:
    // The Perceptron stores its weights and learning constant.
    float* spWeights;
    int    mWeightsSize = 0;
    float  c = 0.01f;  // learning constant

    // (class continued on the following pages)

Page 17:

Initialize

public:
    Perceptron(int n)
    {
        mWeightsSize = n;
        spWeights = new float[n];
        // Weights start off random; random(a, b) is assumed to return
        // a uniform float between a and b, as in the original slides.
        for (int i = 0; i < mWeightsSize; i++)
            spWeights[i] = random(-1, 1);
    }

Page 18:

Feed forward

    int feedforward(float inputs[])
    {
        float sum = 0;
        for (int i = 0; i < mWeightsSize; i++)
            sum += inputs[i] * spWeights[i];
        return activate(sum);
    }

    // Output is a +1 or -1.
    int activate(float sum)
    {
        if (sum > 0) return 1;
        else return -1;
    }

Page 19:

Use the Perceptron

Perceptron p(3);                   // no 'new' needed for a stack object in C++
float point[] = { 50, -12, 1 };    // The input is 3 values: x, y and bias.
int result = p.feedforward(point);

Page 20:

Supervised Learning
① Provide the perceptron with inputs for which there is a known answer.
② Ask the perceptron to guess an answer.
③ Compute the error. (Did it get the answer right or wrong?)
④ Adjust all the weights according to the error.
⑤ Return to Step 1 and repeat!

Page 21:

The Perceptron's Error

ERROR = DESIRED OUTPUT - GUESS OUTPUT

The error is the determining factor in how the perceptron's weights should be adjusted.

Page 22:

For any given weight, what we are looking to calculate is the change in weight, often called Δweight.

NEW WEIGHT = WEIGHT + ΔWEIGHT

Δweight is calculated as the error multiplied by the input.

ΔWEIGHT = ERROR × INPUT

Therefore:

NEW WEIGHT = WEIGHT + ERROR × INPUT
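A quick worked example with the numbers from pages 9-10 (the desired value is made up for illustration): the inputs were 12 and 4, and the guess was +1. If the desired output had been -1:

ERROR = -1 - (+1) = -2
Δweight 0 = -2 × 12 = -24
Δweight 1 = -2 × 4 = -8

Adjustments this large would make the weights swing wildly from one training point to the next, which is why the next page scales them down with a learning constant.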


Page 23:

Learning Constant

NEW WEIGHT = WEIGHT + ERROR × INPUT × LEARNING CONSTANT

With a small learning constant, the weights will be adjusted slowly, requiring more training time but allowing the network to make very small adjustments that could improve the network's overall accuracy.

float c = 0.01;  // learning constant

// Train the network against known data.
void train(float inputs[], int desired)
{
    int guess = feedforward(inputs);
    float error = desired - guess;
    for (int i = 0; i < mWeightsSize; i++)
        spWeights[i] += c * error * inputs[i];
}

Page 24:

Trainer

To train the perceptron, we need a set of inputs with a known answer.

class Trainer
{
public:
    // A "Trainer" object stores the inputs and the correct answer.
    float mInputs[3];
    int   mAnswer;

    void SetData(float x, float y, int a)
    {
        mInputs[0] = x;
        mInputs[1] = y;
        // Note that the Trainer has the bias input built into its array.
        mInputs[2] = 1;
        mAnswer = a;
    }
};

Page 25:

// The formula for a line.
float f(float x)
{
    return 2 * x + 1;
}

Page 26:

void Setup()
{
    srand(time(0));
    // size(640, 360);
    spPerceptron.reset(new Perceptron(3));

    // Make 2,000 training points.
    for (int i = 0; i < gTrainerSize; i++)
    {
        float x = random(-gWidth / 2, gWidth / 2);
        float y = random(-gHeight / 2, gHeight / 2);
        // Is the correct answer 1 or -1?
        int answer = 1;
        if (y < f(x)) answer = -1;
        gTraining[i].SetData(x, y, answer);
    }
}

Page 27:

void Training()
{
    for (int i = 0; i < gTrainerSize; i++)
        spPerceptron->train(gTraining[i].mInputs, gTraining[i].mAnswer);
}

int main()  // 'void main' is nonstandard C++
{
    Setup();
    Training();
    return 0;
}
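To check the result, one could feed the trained perceptron a fresh point (a sketch; this test is not in the original deck, and the test point is illustrative):

// Pick a point 5 units above the line y = f(x), so the answer should be +1.
float testPoint[] = { 10, f(10) + 5, 1 };
int guess = spPerceptron->feedforward(testPoint);
// After training on 2,000 points, guess will almost always be +1.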


Page 28:

Second Example: Computer Go

Pages 29-34:

[Figures only: the Computer Go example slides contain no extractable text.]

Page 35:

Practice

Calculate the weights of the perceptron for Go by using paper and pencil.

Page 36:

Limitations of the Perceptron

Perceptrons are limited in their abilities: they can only solve linearly separable problems.
– If you can classify the data with a single straight line, it is linearly separable. Nonlinearly separable data, by contrast, cannot be split by any single straight line.

Page 37:

One of the simplest examples of a nonlinearly separable problem is XOR (exclusive or).
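For reference, the XOR truth table (standard, not shown in the extracted slide): the output is 1 only when the two inputs differ.

A | B | A XOR B
--|---|--------
0 | 0 |    0
0 | 1 |    1
1 | 0 |    1
1 | 1 |    0

Plot the four input pairs at the corners of a square and no single straight line can separate the 1 outputs from the 0 outputs.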


Page 38:

Multi-Layered Perceptron

So perceptrons can't even solve something as simple as XOR. But what if we made a network out of two perceptrons? If one perceptron can solve OR and one perceptron can solve NOT AND, then the two perceptrons combined can solve XOR.
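A minimal C++ sketch of that idea, using hand-picked weights rather than trained ones (the weights and the unit() helper are illustrative, not from the deck; a third unit ANDs the two outputs together):

#include <cstdio>

// One perceptron unit: weighted sum of two inputs plus a bias,
// passed through a step activation (1 if positive, else 0).
int unit(int a, int b, float w1, float w2, float bias)
{
    float sum = a * w1 + b * w2 + bias;
    return (sum > 0) ? 1 : 0;
}

int XOR(int a, int b)
{
    int orOut   = unit(a, b,  1.0f,  1.0f, -0.5f);   // OR
    int nandOut = unit(a, b, -1.0f, -1.0f,  1.5f);   // NOT AND
    return unit(orOut, nandOut, 1.0f, 1.0f, -1.5f);  // AND of the two
}

int main()
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d XOR %d = %d\n", a, b, XOR(a, b));
    return 0;
}

Each unit is the same weighted-sum-plus-threshold computation from pages 9-12; the point is that the hidden OR and NOT AND units hand the final unit a linearly separable problem.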


Page 39:

Backpropagation

The output of the network is generated in the same manner as a perceptron. The inputs multiplied by the weights are summed and fed forward through the network.

The difference here is that they pass through additional layers of neurons before reaching the output. Training the network (i.e. adjusting the weights) also involves taking the error (desired result - guess).

The error, however, must be fed backwards through the network. The final error ultimately adjusts the weights of all the connections.
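In symbols (the standard gradient-descent update for a network with activation f and learning rate η; added for reference, not from the slides):

For an output neuron:  δ_out = (desired - guess) × f'(sum)
For a hidden neuron:   δ_hidden = f'(sum) × Σ (weight_to_output × δ_out)
For every connection:  Δweight = η × δ × input

The hidden-neuron rule is the "backwards" step: each hidden neuron receives a share of the output error in proportion to the weight connecting it to the output.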


Page 40:

Neural Network: Single Layer

Page 41:

Two Layers

Page 42:

Neural Network

Pages 43-50:

[Figures only: these slides contain no extractable text.]

Page 51:

0.56 ≈ 1 / (1 + e^(-0.25))

Page 52:

Activation Functions

Page 53:

Sigmoid Function

A sigmoid function is a mathematical function having an "S" shape (sigmoid curve). A wide variety of sigmoid functions have been used as the activation function of artificial neurons.

Example: 0.56 ≈ 1 / (1 + e^(-0.25))
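A one-line C++ version of the logistic sigmoid, the function used in the example above (a sketch added for reference; not code from the deck):

#include <cmath>
#include <cstdio>

// Logistic sigmoid: maps any real number into (0, 1).
float sigmoid(float x)
{
    return 1.0f / (1.0f + std::exp(-x));
}

int main()
{
    printf("%.2f\n", sigmoid(0.25f));  // prints 0.56, matching the slide
    return 0;
}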

Page 54:

[Figure only: no extractable text.]

Page 55:

Practice

Write a neural network program which recognizes a digit on an 8×8 image.
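One possible starting point (a sketch only; the layout below is an assumption, not part of the assignment): reuse the Perceptron class from pages 16-18 with 64 pixel inputs plus a bias, training one perceptron per digit class.

// Hypothetical setup for the 8x8 digit exercise.
const int kPixels = 8 * 8;        // 64 pixel inputs
const int kInputs = kPixels + 1;  // +1 for the bias input
Perceptron* digitDetectors[10];   // one detector per digit 0-9

void SetupDigits()
{
    for (int d = 0; d < 10; d++)
        digitDetectors[d] = new Perceptron(kInputs);
    // Training: for each labeled image, desired = +1 for the correct
    // digit's perceptron and -1 for the other nine.
}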

