I'm not really seeing where your mind is at right now, but I still get the impression that you are trying to force it. Talking about having to come back to avoid potentially losing interest seems a bit odd to me. As for me, I'll take a break when I get burned out, but even after a couple of days I'll be itching to get back at it. I don't force myself to come back out of a sense of duty (unless I'm mid-way through a project); I come back when I am genuinely motivated by a random idea, or because I want to create something, nothing else.

-- On the note of those tutorials, I did have a skim through and I do agree: he doesn't really explain the process that well in either. It seems he has created a working solution and is just re-writing the code for it progressively. Copy-and-paste tutorials may work for some people, but they aren't that great for actually learning the process. It is far more valuable to understand what tasks are being achieved and think about how you might accomplish them yourself, rather than copying someone else's code.

By this I mean, with Perlin noise for example, you listed the key steps in the process, so now you should be asking yourself: how do I do each of those steps?

Filling a grid with random values should be easy: I create a grid, loop through it, and assign a random value to each cell.

Interpolating it should again be rather easy: for each cell, I want to blend its value with the values of its neighbours, so I sum up the values of the neighbouring cells and average them.

At this point, if any of the steps suddenly become complicated again, then break it down into more steps.
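To make that concrete, here's a rough sketch of those two steps (in Python rather than GML, since that's easier to show here). Note this is really simple value noise with a box smooth, not true gradient-based Perlin noise, but it maps directly onto the steps above:

```python
import random

def random_grid(w, h, seed=None):
    """Step 1: fill a w x h grid with random values in [0, 1)."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(w)] for _ in range(h)]

def smooth(grid):
    """Step 2: replace each cell with the average of itself and its
    4 neighbours (skipping neighbours that fall outside the grid)."""
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy, dx in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    total += grid[ny][nx]
                    count += 1
            out[y][x] = total / count
    return out

noise = smooth(random_grid(8, 8, seed=42))
```

You can apply `smooth` repeatedly to get progressively softer noise, which is exactly the kind of "break it into smaller steps" structure I'm talking about.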

So A* is actually an optimisation on top of Dijkstra's algorithm. Dijkstra's algorithm is the basic case, and you should ideally understand it fully before you try something more complicated like A*. Dijkstra's can be generalised as a graph pathfinding algorithm which follows this process:

We start with a graph (a graph is any network where nodes are connected to each other; a grid is a graph where each cell can be modelled as a node connected to its 4 neighbours). Dijkstra's algorithm boils down to determining the shortest path from one node in a graph to another. The shortest path can be considered in terms of distance, but in general you can use a "cost" function; the algorithm finds the path that minimises cost. In this case, you can treat the cost between each pair of adjacent cells as 1, given it costs the same to move from any cell to any of its neighbours.

We start by keeping track of two things per cell:

- 1) The cost to get to that cell using the best known path so far

- 2) The node we came from to give that cost (we don't need to store the full path; these two values are enough to reconstruct it later)

The algorithm starts at the start node and finds all nodes accessible from it (excluding nodes we have locked). We then evaluate each of those connections, and for each neighbour determine a new cost: the total cost to reach our current node, plus the cost of the edge between the current node and that neighbour. If this new cost is lower than the neighbour's existing cost, we replace that node's cost and also record the current node as the one we came from. Once a node has evaluated all its connections, we lock that node.

We then move to the unlocked node with the lowest known cost and repeat this until all nodes are locked (or until the destination is locked). At that point, we can trace a path back from the destination to the source using the 2nd piece of data we store per node.
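Here's a rough sketch of that process (again in Python rather than GML; the priority queue is just an efficient way of always expanding the cheapest unlocked node next):

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Dijkstra on a 2D grid where 0 = open and 1 = wall; each move costs 1.
    Returns the list of cells from start to goal, or None if unreachable."""
    h, w = len(grid), len(grid[0])
    cost = {start: 0}          # 1) best known cost per cell
    came_from = {start: None}  # 2) the cell we came from for that cost
    locked = set()
    frontier = [(0, start)]    # always expand the cheapest unlocked cell next
    while frontier:
        c, node = heapq.heappop(frontier)
        if node in locked:
            continue
        locked.add(node)
        if node == goal:
            break
        y, x = node
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0:
                new_cost = c + 1  # cost so far + edge cost (1 per move)
                if (ny, nx) not in cost or new_cost < cost[(ny, nx)]:
                    cost[(ny, nx)] = new_cost
                    came_from[(ny, nx)] = node
                    heapq.heappush(frontier, (new_cost, (ny, nx)))
    if goal not in came_from:
        return None
    # trace the path back from the goal to the start using came_from
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = dijkstra_grid(grid, (0, 0), (2, 0))
```

Notice how the code is just the description above, line for line: the two stored values, the lock set, the "evaluate connections and replace if cheaper" step, and the trace-back at the end.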

A* is an additional optimisation on top of this which controls the order in which we evaluate nodes, using a "heuristic function" to evaluate first the nodes that are most likely to be on the path (i.e. not wasting time on nodes which would likely take us further from the destination).
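The only change from Dijkstra's is the key used to order the frontier. A minimal sketch, assuming a 4-connected grid with unit move costs (where Manhattan distance is a valid heuristic, since it never overestimates the true remaining distance):

```python
def manhattan(node, goal):
    """Heuristic for a 4-connected grid with unit move costs:
    an estimate of the remaining distance that never overestimates."""
    (y1, x1), (y2, x2) = node, goal
    return abs(y1 - y2) + abs(x1 - x2)

# Dijkstra's orders its frontier by cost-so-far alone; A* orders it by
# cost-so-far plus the heuristic estimate of the remaining distance:
def priority(cost_so_far, node, goal):
    return cost_so_far + manhattan(node, goal)
```

That one-line change is why I say A* is easy once Dijkstra's is solid: you swap the ordering key and everything else stays the same.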

This explanation is thin, but the point I'm making is less about explaining the problem and more about a process for breaking things down. A*, for example, is an easy generalisation if you already understand and have implemented Dijkstra's (which is the simple case).

Also remember that there are many different ways of doing things. There is no requirement to follow something exactly by the book. For example, there are so many outdated and horrendous OpenGL tutorials out there that people still reference because they are believed to be good, when in reality they completely miss the mark as far as teaching useful graphics programming techniques. As you move forward with computer science/programming study, you'll learn that implementation details ultimately matter a lot less than they may first seem to. Implementations change drastically; it is the core principles that you need to focus on understanding: the underlying mathematics and what is actually going on. If you can understand that in general, you should find it easier to work towards implementing it yourself. This is also important because implementation details can be vastly different depending on what language you are working in. Creating an A* algorithm in Haskell is a completely different ballgame to doing it in Java; however, the core definition of the process is the same.

I would suggest that you read about A* or Dijkstra's more generally, ignoring code samples, and try to understand the process; then think about what data structures and functions you may need within GM to achieve that same task. It can also be beneficial to implement algorithms visually. That is, rather than performing the entire algorithm in one go, create a visual representation of the state and step through it by pressing a button (so each button click performs one iteration of the algorithm, for example). I did this exact thing when working on a pathfinding algorithm for my own game:
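One rough way to structure that stepping, sketched here in Python with a toy flood fill (in GM you'd do the equivalent inside a step event or key-press event rather than with a generator): write the algorithm so each call performs one iteration and hands back the current state for drawing.

```python
def flood_fill_steps(grid, start):
    """Toy search written as a generator: yields the set of visited cells
    after each iteration, so a UI can draw the state and advance one
    iteration per button press."""
    h, w = len(grid), len(grid[0])
    visited, frontier = {start}, [start]
    while frontier:
        y, x = frontier.pop(0)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and grid[ny][nx] == 0 and (ny, nx) not in visited):
                visited.add((ny, nx))
                frontier.append((ny, nx))
        yield set(visited)  # snapshot of the state after this iteration

steps = flood_fill_steps([[0, 0], [0, 0]], (0, 0))
# each call to next(steps) performs one iteration, e.g. on a key press
first = next(steps)
```

Watching the state evolve one iteration at a time makes it much easier to spot exactly where your implementation diverges from the process you had in mind.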

https://i.imgur.com/Gnksp6U.gifv