Multi-Layer Perceptron
Feed-forward neural network in GML.
- Neural network for GameMaker Studio 2.3
- Feed-forward neural network in pure GML
- Asset includes GML scripts for the neural network
- Also includes an alternative C++ extension and wrapper for more speed
- Includes the examples shown on this page and in the video
- The GML solution should theoretically be multi-platform, though I haven't thoroughly tested this
Buyable here:
YYG Marketplace
Itchio
I decided to release it.
- - -
What this asset will let you do:
- Create a multi-layer neural network
- Layers are fully connected and the signal propagates forward.
- I call it a Multi-Layer Perceptron (not in its strict meaning).
- Train with Gradient Descent for supervised learning
- Uses Backpropagation
- Choose a gradient descent optimizer.
- Give it lots of examples so the network can learn.
- Train with a Genetic Algorithm for unsupervised learning (see the sketch after this list)
- Save and load as a Buffer or JSON string.
- Define the precision of the Buffer to reduce file size.
- Includes a way of storing example input/output pairs in arrays.
- Will include a way of storing them in a buffer.
- Includes an MNIST reader
- Lets you open the MNIST dataset and easily train networks with it.
- The same reader also supports the MNIST Fashion dataset.
- MNIST is a dataset of 60,000 hand-drawn digits, with a test set of 10,000.
- MNIST Fashion is a similar dataset, but has images of clothing.
- The asset has three versions of the scripts: Array, Grid, Plus
- Array is for readability and learning purposes.
- Grid is the "pure GML performance" version.
- Plus is for actual speed, using the C++ extension.
- The extension is self-made (my first C++ code). It works similarly to Array.
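The Genetic Algorithm training mentioned above isn't walked through below, so here is a minimal hedged sketch of what such a loop could look like. Fitness and Mutate are hypothetical placeholders, not the asset's confirmed API: Fitness would be your own game-specific scoring, and Mutate stands in for however you perturb weights.
GML:
// Hypothetical helpers, NOT confirmed asset API:
function Fitness(_net) {
    return 0; // Placeholder: score the network in your game, e.g. distance survived.
}
function Mutate(_net) {
    // Placeholder: perturb the network's weights slightly.
}

// Evolve a small population: score all, keep the best, mutate copies of it.
population = [];
for (var i = 0; i < 16; i++) {
    population[i] = new mlp_array([4, 8, 2]);
}
var best = 0;
for (var i = 1; i < array_length(population); i++) {
    if (Fitness(population[i]) > Fitness(population[best])) best = i;
}
for (var i = 0; i < array_length(population); i++) {
    if (i == best) continue;
    population[i].Copy(population[best]); // Copy is shown below.
    Mutate(population[i]); // Hypothetical mutation step.
}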
- - -
How to use MLPs:
GML:
// 1) Create empty MLP.
mlp = new mlp_array();
// 2) Create with given layers. Default activation is "Tanh"
mlp = new mlp_array([4, 10, 5, 2]);
mlp = new mlp_array(layer_sizes);
// 3) Define activation functions at creation
mlp = new mlp_array(layer_sizes, activations); // Activations can be given in several ways:
mlp = new mlp_array(layer_sizes, Relu); // Uses the given activation for all layers
mlp = new mlp_array(layer_sizes, [Tanh, Sigmoid, Relu, Sigmoid]); // Specify activation for every layer
mlp = new mlp_array(layer_sizes, [Tanh, Sigmoid]); // If the array is too short, the last activation is repeated for the rest of the layers.
// 4) There are three versions of the scripts. They all work the same.
mlp = new mlp_array(...);
mlp = new mlp_grid(...);
mlp = new mlp_plus(...);
GML:
// Can copy mlp
mlp = new mlp_array().Copy(original); // Create empty and copy.
mlp.Copy(original);
// The original mlp can be any of the three MLP versions.
GML:
// Saving/loading as buffer
buffer = mlp.Save(); // Saves with default precision, 64bit float.
// Supported precisions for saving to buffer:
buffer = mlp.Save(NumberType.DOUBLE); // 64bit
buffer = mlp.Save(NumberType.FLOAT); // 32bit
buffer = mlp.Save(NumberType.HALF); // 16bit
// Loads values from buffer.
mlp.Load(buffer);
// An MLP saved as a buffer can be written to a file with the usual GML buffer functions.
buffer_save(buffer, "Test.MLP");
buffer = buffer_load("Test.MLP");
GML:
// Saving/loading as JSON-string
jsonString = mlp.Stringify(); // Stringifies mlp to JSON-string
mlp.Parse(jsonString); // Loads mlp values from JSON-string
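Because the original can be any of the three versions (see Copy above), serialization also works across versions. A sketch, using only the calls shown above: train with Plus (Windows-only), then carry the result over to the pure-GML Array version for other platforms.
GML:
// Sketch: train with mlp_plus, then move the weights to mlp_array.
trainer = new mlp_plus([784, 64, 10]);
// ... train here ...
jsonString = trainer.Stringify(); // Serialize the trained network.
runtime = new mlp_array(); // Pure-GML version, multi-platform.
runtime.Parse(jsonString); // Now holds the same weights.
trainer.Destroy(); // Plus needs explicit cleanup.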
GML:
// Get prediction from MLP.
mlp.Forward(inputArray); // Propagates input through the network, updating values.
output = mlp.Output(); // Check the latest result.
output = mlp.Forward(inputArray); // Or just take the result as it's updated.
GML:
// Initializing optimizer for MLP
mlp.Optimizer(Stochastic);
mlp.Optimizer(Momentum, .8);
mlp.Optimizer(Nesterov);
mlp.Optimizer(Adam);
// This topic lists all available optimizers; here are only the first four.
// You can also initialize the optimizer at creation:
mlp = new mlp_array(layer_sizes).Optimizer(Adam);
GML:
// Using gradient descent - minibatch size 1.
mlp.Forward(example); // Get prediction.
mlp.optimizer.MeanSquared(target); // Cost function, get output-error.
mlp.optimizer.Backward(); // Backpropagate error through network.
mlp.optimizer.Apply(learning_rate); // Update weights + biases
GML:
// One example of how to train with a larger minibatch.
for(var i = 0; i < batchSize; i++) { // Gradients are cumulative, so you can add as many examples as you want.
mlp.Forward(batch[i]); // Get prediction.
mlp.optimizer.MeanSquared(target[i]); // Cost function, get output-error.
mlp.optimizer.Backward(); // Backpropagate error through network.
}
mlp.optimizer.Apply(learning_rate); // Update weights + biases
// You can also spread it across several frames.
mlp.Forward(batch[batchPos]); // Get prediction.
mlp.optimizer.MeanSquared(target[batchPos]); // Cost function, get output-error.
mlp.optimizer.Backward(); // Backpropagate error through network.
batchPos++;
if (batchPos >= batchSize) { // All examples seen: apply and start over.
mlp.optimizer.Apply(learning_rate); // Update weights + biases
batchPos = 0;
}
// You can define how the mlp takes a minibatch any way you want.
// Gradients are cumulative, so repeat: 1) Forward, 2) Cost-function, 3) Backward, with different examples.
// Then use "Apply". It averages the gradients automatically.
// Repeat this process with many different minibatches.
GML:
// Some other functionality
mlp.Destroy(); // Important for Grid and Plus to avoid memory leaks.
mlp.Randomize(-.5,.5,-.2,.2); // Randomize with given range.
mlp.Reset(); // Randomizes with default range.
mlp.optimizer.Decay(.01); // Decays weights by the given percentage; can be used as regularization to avoid too-large weights.
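For example, Decay can slot into the usual minibatch cycle right after the weight update; a small sketch:
GML:
// Sketch: weight decay as part of the minibatch cycle.
for (var i = 0; i < batchSize; i++) {
    mlp.Forward(batch[i]);
    mlp.optimizer.MeanSquared(target[i]);
    mlp.optimizer.Backward();
}
mlp.optimizer.Apply(learning_rate); // Update weights + biases.
mlp.optimizer.Decay(.01); // Then shrink all weights by 1%.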
- - -
How to use MNIST reader
GML:
/// Create holder for MNIST
mnist = new MNIST();
// MNIST has images for training and testing.
// So it's good to open them separately; create two:
mnist_train = new MNIST();
mnist_test = new MNIST();
// You cannot open both datasets in the same struct, but you can close one set and open the other.
// What the reader does is open the files for reading and read the metadata.
// After this, the struct only reads data for the wanted examples when asked.
GML:
/// MNIST files are distributed compressed; they need to be uncompressed before use.
/// The reader doesn't mind if the files are renamed.
/// Give file-paths for correct files
mnist.Open(path_images, path_labels);
// One easy way is to use get_open_filename, like the following:
path_images = get_open_filename(...);
path_labels = get_open_filename(...);
mnist.Open(path_images, path_labels);
// Assuming the file names are unchanged, here is how to open the train and test datasets:
// Training data
path_images = get_open_filename("MNIST image|*.idx3-ubyte", "train-images.idx3-ubyte");
path_labels = get_open_filename("MNIST label|*.idx1-ubyte", "train-labels.idx1-ubyte");
mnist_train.Open(path_images, path_labels);
// Testing data
path_images = get_open_filename("MNIST image|*.idx3-ubyte", "t10k-images.idx3-ubyte");
path_labels = get_open_filename("MNIST label|*.idx1-ubyte", "t10k-labels.idx1-ubyte");
mnist_test.Open(path_images, path_labels);
GML:
/// An example is a pair of input and output.
/// The MNIST dataset has images and labels for those images.
/// The image is the input, and the label is the output.
/// The MNIST reader opens the files for reading and reads an example's input/output on the fly by the given index.
GML:
/// Read input-example from mnist with given index.
image = mnist.Input(index); // Returns image as 1D array
image = mnist.Input2D(index); // Returns as 2D array
mnist.Buffer(buffer, index); // Writes image values to given buffer
mnist.Image(surf, index); // Draws image in to given surface
GML:
/// Read output-example from mnist with given index.
target = mnist.Output(index); // Returns 1D array.
label = mnist.Label(index); // Real, numeric label. Range: [0, 9]
// mnist.Output(index) returns a 1D array, which is easiest to use with an MLP to evaluate performance.
// e.g. target[0] tells "is the number 0", and target[6] tells "is the number 6".
// e.g. a target array like [0,0,1,0,0,0,0,0,0,0] marks wrong answers with 0 and the correct one with 1; here the correct answer is target[2], so the label is 2.
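Putting the reader and the MLP together, here is a minimal training sketch using only the calls shown above. The 784 inputs (28x28 pixels), the hidden-layer size, the learning rate, and the hard-coded 60,000 training examples are assumptions for illustration.
GML:
// Sketch: one minibatch of supervised training on MNIST.
mlp = new mlp_grid([784, 64, 10]).Optimizer(Adam);
repeat(32) {
    var i = irandom(60000 - 1); // Pick a random training example.
    mlp.Forward(mnist_train.Input(i)); // Image as 1D array in.
    mlp.optimizer.MeanSquared(mnist_train.Output(i)); // Compare to target array.
    mlp.optimizer.Backward();
}
mlp.optimizer.Apply(.001); // Update weights + biases.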
GML:
/// Remember to close the files when they are no longer used, to avoid memory leaks.
mnist.Close();
// Or if you had both of them open.
mnist_train.Close();
mnist_test.Close();
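As a usage note (done before closing the files, of course), the same reader makes it easy to measure accuracy on the test set: take the index of the largest output and compare it against the numeric label. A sketch, with the 10,000 test-set size taken from the feature list above:
GML:
// Sketch: count how many test images the MLP classifies correctly.
var correct = 0;
for (var i = 0; i < 10000; i++) {
    var output = mlp.Forward(mnist_test.Input(i));
    var guess = 0; // Index of the largest output value = predicted digit.
    for (var j = 1; j < array_length(output); j++) {
        if (output[j] > output[guess]) guess = j;
    }
    if (guess == mnist_test.Label(i)) correct++;
}
show_debug_message("Accuracy: " + string(correct / 10000));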
- - -
Here are lists of the functions etc. included in the asset.
Optimizers:
Stochastic
Momentum
Nesterov
Adam
AdaGrad
AdaDelta
AdaMax
Nadam
RMSprop
AMSgrad
Activation functions:
Tanh
TanhLecun
TanhShrink
ArcTan
sin
cos
Sigmoid
BipolarSigmoid
LogSigmoid
LogLog
Relu
ReluCap
LeakyRelu
PRelu
Elu
Gelu
Selu
Celu
Swish
Elish
SoftSign
SoftPlus
SoftClipping
SoftExponential
Sinc
Gaussian
SQRBF
ISRU
ISRLU
SQNL
BentIdentity
BinaryStep
Absolute
Threshold
HardTanh
HardSigmoid
HardShrink
HardSwish
HardElish
Cost functions:
Mean Squared Error
Mean Absolute Error
Mean Squared Logarithmic Error
Root Mean Squared Error
Root Mean Squared Logarithmic Error
Huber loss
Hinge
Smoothed Hinge
Quadratically Smoothed Hinge
Log-Cosh
Cross Entropy
Categorical Cross Entropy
Binary Cross Entropy
Multi-Label Cross Entropy
Exponential
Hellinger
Kullback Leibler Divergence
Generalized Kullback Leibler Divergence
Itakura-Saito Distance
- - -
General rambling about this asset.
A bit of background: I have been reading about neural networks for a while now, as I have wanted to learn more about them and how they work. My very first attempts at writing my own learning neural network were about a year ago; it worked, but had poor performance. I also got interested in other things, like the new GMS 2.3 updates, which took my attention elsewhere. A couple of months ago I got interested in writing neural network scripts again, and started from a clean slate.
So making this asset has been a learning journey about neural networks for me. Later it also became my introduction to the world of the C++ language.
I started with GML, as it is the only coding language I actually know, though I began learning C++ to make the extension.
At first I wrote the neural networks with arrays, as they are easy to use for storing values, and their use reflected the written material well. Eventually, when I wanted to use somewhat larger networks, it was awfully slow. I sought a performance boost, which led me to write an alternative with ds_grids, which allowed me to use grid region calculations.
Compared to arrays this brought a great performance gain, though the inner workings got more complicated to look at. So I decided to keep both: Array is for looking at how it behaves and for learning, and Grid is for "performance".
But I still wasn't satisfied with the performance, so I decided to try writing my first extension, which also became the first C++ code I have written. The extension now includes about everything its GML counterparts have, and when I add new things I first write the GML Array version, which I then adapt to the extension.
Array is there to show how things work. Grid is the "pure GML performance" version. In theory, Array and Grid should be multi-platform.
Lastly, Plus is for actual speed in this asset; it uses the extension. It's my first C++ code, so I think I can optimize it more.
I don't know what actual use-cases this asset could have. It isn't as robust as the real neural network solutions out there, but maybe someone can come up with an idea for using it.
Performance-wise, executing a neural network is good enough in pure GML with Array and Grid. As Plus uses the C++ extension, its execution is great compared to GML.
The most compute-intensive part of an MLP is learning, which is also the most important. For Array, the performance just isn't good enough, and with Grid I wouldn't try to train too large a network. With Plus the training speed is acceptable, but it could be better. The extension is my first C++ code and isn't optimized, so there is room for improvement.
You can't use Plus except on Windows, though. But as you can save MLPs and load them into another type, you could train a Plus MLP on your computer, save it, and then ship it in the game as Array or Grid. This way you can have a pre-trained MLP running on other platforms.
- - -
(A simplified version of these scripts is available here: Simple Multi-layer Perceptron. It doesn't include the examples and hasn't actually been tested to work.)