## Posts

Showing posts from December, 2019

### Gym: MountainCar using Genetic Algorithm

Solving OpenAI Gym MountainCar using Genetic Algorithm
Hey fellas, this time I have partially solved Gym's MountainCar-v0 problem using the Deep Genetic Algorithm. Why partially?
The model trained within 48 seconds, which astonished me as well. It seems like DGA rocks. Although, to be specific, my neural network wasn't that deep: I used just 8 nodes in the first hidden layer and 4 in the next hidden layer.
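The layer sizes above can be sketched as a tiny feed-forward policy network. A minimal NumPy sketch, assuming MountainCar-v0's 2-dimensional observation and 3 discrete actions; the layer widths (8 and 4) come from the post, while the activations, initialization scale, and function names are my assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()

def init_params(rng, sizes=(2, 8, 4, 3)):
    # 2 observations -> 8 -> 4 hidden nodes -> 3 actions
    return [(0.5 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def act(params, obs):
    # ReLU on the hidden layers, softmax on the output, greedy action
    h = np.asarray(obs, dtype=float)
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    return int(np.argmax(softmax(h @ W + b)))
```

In a GA setting these weights are never trained by backpropagation; each individual in the population is one such parameter set, scored by its episode reward.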
The observation space:

| Num | Observation | Min | Max |
|-----|-------------|------|------|
| 0 | position | -1.2 | 0.6 |
| 1 | velocity | -0.07 | 0.07 |
The action space:

| Num | Action |
|-----|--------|
| 0 | push left |
| 1 | no push |
| 2 | push right |
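Since position and velocity live on very different scales, it can help to rescale observations before feeding them to the network. A small sketch using the bounds from the observation table; the post doesn't say whether it normalized inputs, so treat this as an optional preprocessing step:

```python
import numpy as np

# Bounds taken from the MountainCar-v0 observation table
OBS_LOW = np.array([-1.2, -0.07])
OBS_HIGH = np.array([0.6, 0.07])

def normalize(obs):
    # Map each component linearly into [-1, 1]
    obs = np.asarray(obs, dtype=float)
    return 2.0 * (obs - OBS_LOW) / (OBS_HIGH - OBS_LOW) - 1.0
```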
Here's the GIF of the Mountain Car:

In my previous post, I solved the LunarLander environment using a genetic algorithm. However, the LunarLander environment has much higher complexity in its observation and state spaces. The result of our algorithm?

The green plot is the median of our population. The blue plot is the maximum of each generation.
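The two curves can be reproduced by logging simple statistics of each generation's scores. A sketch, assuming `scores` holds each individual's episode return for one generation (the plotting itself is left out, and the example scores are made up):

```python
import statistics

def generation_stats(scores):
    # Median of the population (green curve) and best individual (blue curve)
    return statistics.median(scores), max(scores)

history = []                                  # one (median, best) pair per generation
scores = [90.0, 110.0, 200.0, -30.0, 150.0]   # hypothetical generation of 5 individuals
history.append(generation_stats(scores))
```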
The tweaks in the GA rely on the basics.
Crossover: For crossover, I used Uniform …
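Uniform crossover over a flattened weight vector can be sketched as follows: each gene is inherited from one parent or the other, chosen independently. The excerpt is cut off before giving details, so the 0.5 mixing probability and the function name here are assumptions:

```python
import numpy as np

def uniform_crossover(parent_a, parent_b, rng, p=0.5):
    # Each weight comes from parent_a with probability p, else from parent_b
    mask = rng.random(parent_a.shape) < p
    return np.where(mask, parent_a, parent_b)
```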

### Gym: LunarLander using Genetic Algorithm

Solving OpenAI Gym Lunar Lander v2 using Genetic Algorithm
Hey fellas, this time I have partially solved Gym's LunarLander-v2 problem using the Deep Genetic Algorithm. Why partially?
Well, the average score lies between 180 and 200. The best score crosses 200 a good number of times. The median score 'sometimes' dips into the negative. But the landing of the lander is great and might even get claps from NASA. I am kidding.
UPDATE: The model has crossed a median score of 200!! Celebrations.
UPDATE: I had accidentally chosen the Softmax activation function for every layer. I corrected that, and now only the hidden layers have ReLU while the output layer has the Softmax activation function.
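The fix matters because a softmax'd hidden layer always sums to 1 and lies in (0, 1), so the magnitude of the activations is thrown away between layers, while ReLU preserves it. A quick illustration (the input vectors are arbitrary):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Regardless of input scale, softmax collapses the hidden vector
# onto the probability simplex: the outputs always sum to 1.
h_small = softmax(np.array([0.1, -0.2, 0.3]))
h_large = softmax(np.array([10.0, -20.0, 30.0]))
```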
Here's the GIF of multiple landings of our Lander.

In my previous post, I solved the CartPole environment using a genetic algorithm. However, the CartPole environment has low complexity in its observation and state spaces. Also, CartPole begins in a state of maximum reward, i.e., the rendering always starts from the perpen…

### Self Balancing Robot using Genetic Algorithm

Hola Amigos,

I have always loved inverted pendulums. They are very fascinating to me, and I play with them a lot. In real life, I have made various ones using an Arduino, motors, and an MPU-6050. Yes, I hate encoders because they involve too much wiring, and I want a clean bot, not an Android like Dr. Zero from Dragon Ball Z.
In this project, I modified OpenAI's CartPole-v1 to act more like a self-balancing robot rather than an inverted pendulum.

So here is my bot.

The output plot of the error angle.

Looks nice, eh?

In order to balance the pendulum, we need some parameters. The first is the angle of tilt. With just the angle of tilt we can balance the pendulum, but a runaway problem will exist. For example, if the pendulum is tilted by 5 degrees, it moves in that direction. A balance point is reached where the gravitational force and the pseudo force acting on the pendulum cancel each other, so the error angle remains constant. But because of this constant error, the motors…