• Hey! Guest! The 37th GMC Jam will take place between May 28th, 12:00 UTC and June 1st, 12:00 UTC. Why not join in! Click here to find out more!

GMS 2 Neural Network - Back Propagation

Gzebra

Member
Hi :)

I'm trying to follow this tutorial on Neural Networks:

I've been struggling for a few days with what I believe to be a faulty back propagation, so I was hoping somebody with greater math knowledge than myself might help me figuring out how to do this.

Here's my code..

Create event
Code:
randomize()
input[0]=1//bias
input[1]=1
input[2]=0
hidden[0]=1//bias
hidden[1]=0
hidden[2]=0
output[0]=0
a=input[1]>input[2]
newNeuralNetwork(input,hidden,output)

newNeuralNetwork(input,hidden,output)
Code:
/*initiate hidden nodes*/for(var i=0;i<array_length_1d(argument1);i++){hn[i]=argument1[i];eh[i]=0;}
/*initiate output nodes*/for(var i=0;i<array_length_1d(argument2);i++){on[i]=argument2[i];eo[i]=0;}
lr=0.1

//initiate input weights
for(var i=0;i<array_length_1d(argument0);i++){
    for(var p=0;p<array_length_1d(hn);p++){
        win[p,i]=random_range(-1,1)
    }
}

//initiate output weights
for(var i=0;i<array_length_1d(hn);i++){
    for(var p=0;p<array_length_1d(on);p++){
        won[i,p]=random_range(-1,1)
    }
}

feedForward(input)
Code:
//calculate hidden nodes
for(var i=0;i<array_length_1d(hn);i++){
    for(var p=0;p<array_length_1d(argument0);p++){
        hn[i]+=argument0[p]*win[p,i]
    }
    hn[i]=sigmoid(hn[i])
}


//calculate output
for(var i=0;i<array_length_1d(on);i++){
    for(var p=0;p<array_length_1d(hn);p++){
            on[i]+=hn[p]*won[p,i]
    }
    on[i]=sigmoid(on[i])
}

trainNeuralNetwork(input,answer)
Code:
feedForward(argument0)
//calculate output error
for(var i=0;i<array_length_1d(on);i++){
        eo[i]=argument1-on[i]
}

//calculate hidden error
for(var i=0;i<array_length_1d(on);i++){
    for(var p=0;p<array_length_1d(hn);p++){
        eh[p]+=won[p,i]*eo[i]
    }
}

//re-adjust output weights based upon the output error
for(var i=0;i<array_length_1d(hn);i++){
    for(var p=0;p<array_length_1d(on);p++){
        won[i,p]+=lr*eo[p]*(on[p]*(1-on[p]))*hn[i]
    }
}

//re-adjust input weights based upon the input error
for(var i=0;i<array_length_1d(argument0);i++){
    for(var p=0;p<array_length_1d(hn);p++){
        win[p,i]+=lr*eh[p]*(hn[p]*(1-hn[p]))*argument0[i]
    }
}

I can't seem to solve xor
 
Last edited:

Gzebra

Member
Apparently it works just fine. I'm not sure why I couldn't get it to work yesterday, but I didn't do anything, I just tested it again and it worked..

Nope, it's definitely broken.. Sigh, please help!
 
Last edited:

Kezarus

Member
Genetic is a liiiiiiittle bit easier than back-prop. But it's heavier.

If you have any doubts about the article or something related to Neural Nets, feel free to ask. I am glad to help if I am able. =]
 
Top