GMS 2 Neural Network - Back Propagation

Discussion in 'Programming' started by Gzebra, May 15, 2019.

  1. Gzebra

    Gzebra Member

    Joined:
    Aug 3, 2017
    Posts:
    63
    Hi :)

    I'm trying to follow this tutorial on Neural Networks:


    I've been struggling for a few days with what I believe to be a faulty back propagation, so I was hoping somebody with greater math knowledge than myself might help me figuring out how to do this.

    Here's my code..

    Create event
    Code:
    randomize()
    input[0]=1//bias
    input[1]=1
    input[2]=0
    hidden[0]=1//bias
    hidden[1]=0
    hidden[2]=0
    output[0]=0
    a=input[1]>input[2]
    newNeuralNetwork(input,hidden,output)
    

    newNeuralNetwork(input,hidden,output)
    Code:
    /*initiate hidden nodes*/for(var i=0;i<array_length_1d(argument1);i++){hn[i]=argument1[i];eh[i]=0;}
    /*initiate output nodes*/for(var i=0;i<array_length_1d(argument2);i++){on[i]=argument2[i];eo[i]=0;}
    lr=0.1
    
    //initiate input weights
    for(var i=0;i<array_length_1d(argument0);i++){
        for(var p=0;p<array_length_1d(hn);p++){
            win[p,i]=random_range(-1,1)
        }
    }
    
    //initiate output weights
    for(var i=0;i<array_length_1d(hn);i++){
        for(var p=0;p<array_length_1d(on);p++){
            won[i,p]=random_range(-1,1)
        }
    }
    

    feedForward(input)
    Code:
    //calculate hidden nodes
    for(var i=0;i<array_length_1d(hn);i++){
        for(var p=0;p<array_length_1d(argument0);p++){
            hn[i]+=argument0[p]*win[p,i]
        }
        hn[i]=sigmoid(hn[i])
    }
    
    
    //calculate output
    for(var i=0;i<array_length_1d(on);i++){
        for(var p=0;p<array_length_1d(hn);p++){
                on[i]+=hn[p]*won[p,i]
        }
        on[i]=sigmoid(on[i])
    }
    

    trainNeuralNetwork(input,answer)
    Code:
    feedForward(argument0)
    //calculate output error
    for(var i=0;i<array_length_1d(on);i++){
            eo[i]=argument1-on[i]
    }
    
    //calculate hidden error
    for(var i=0;i<array_length_1d(on);i++){
        for(var p=0;p<array_length_1d(hn);p++){
            eh[p]+=won[p,i]*eo[i]
        }
    }
    
    //re-adjust output weights based upon the output error
    for(var i=0;i<array_length_1d(hn);i++){
        for(var p=0;p<array_length_1d(on);p++){
            won[i,p]+=lr*eo[p]*(on[p]*(1-on[p]))*hn[i]
        }
    }
    
    //re-adjust input weights based upon the input error
    for(var i=0;i<array_length_1d(argument0);i++){
        for(var p=0;p<array_length_1d(hn);p++){
            win[p,i]+=lr*eh[p]*(hn[p]*(1-hn[p]))*argument0[i]
        }
    }
    

    I can't seem to solve xor
     
    Last edited: May 15, 2019
    Kezarus likes this.
  2. Gzebra

    Gzebra Member

    Joined:
    Aug 3, 2017
    Posts:
    63
    Apparently it works just fine. I'm not sure why I couldn't get it to work yesterday, but I didn't do anything, I just tested it again and it worked..

    Nope, it's definitely broken.. Sigh, please help!
     
    Last edited: May 15, 2019
    Kezarus likes this.
  3. Kezarus

    Kezarus Member

    Joined:
    Jan 14, 2018
    Posts:
    157
    Gzebra likes this.
  4. Gzebra

    Gzebra Member

    Joined:
    Aug 3, 2017
    Posts:
    63
    Kezarus likes this.
  5. Kezarus

    Kezarus Member

    Joined:
    Jan 14, 2018
    Posts:
    157
    Genetic is a liiiiiiittle bit easier than back-prop. But it's heavier.

    If you have any doubts about the article or something related to Neural Nets, feel free to ask. I am glad to help if I am able. =]
     
    Gzebra likes this.

Share This Page

  1. This site uses cookies to help personalise content, tailor your experience and to keep you logged in if you register.
    By continuing to use this site, you are consenting to our use of cookies.
    Dismiss Notice