Info
This post was imported from a personal note. It may contain inside jokes, streams of consciousness, errors, and other nonsense.
Python stopped working 5-6 days ago, when I was using it to combine screenshots into a single frame showing all the locations visited by boids.
```
→ BraitenBoids git:(main) ✗ python
zsh: /mnt/c/Users/d/.pyenv/pyenv-win/shims/python: bad interpreter: /bin/sh^M: no such file or directory
```
Oy yo yoy.
It’s the carriage return `^M` causing the issue. A quick `dos2unix` fixed it for me.
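For the record, that was just this (the shim path is the one from the error above):

```sh
sudo apt install -y dos2unix
# Rewrite the shim in place, converting CRLF line endings to LF
dos2unix /mnt/c/Users/d/.pyenv/pyenv-win/shims/python
```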
Man, one more vote for trying out Ubuntu as my dev environment.
Oh. It still doesn’t work. That’s because the shim belongs to pyenv-win, and I don’t want to use the Windows Python.
Okay, I installed pyenv. Bit of a hassle, tbh, and I might need to do it again soon, so here’s the recipe:
```sh
# Build dependencies pyenv needs to compile Python from source
sudo apt update
sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev wget curl llvm libncursesw5-dev xz-utils \
  tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev

# Install pyenv itself
curl https://pyenv.run | bash

# Hook pyenv into zsh
echo -e 'export PYENV_ROOT="$HOME/.pyenv"\nexport PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.zshrc
echo -e 'eval "$(pyenv init --path)"\neval "$(pyenv init -)"' >> ~/.zshrc
exec "$SHELL"

# Pick a Python version and make it the default
pyenv install --list
pyenv install <version>
pyenv global <version>
```
And with that, the `compose_frames.py` script works again.

```sh
pip install pandas
```
I’m using ChatGPT since I’m not so familiar with Python, Pandas, and Matplotlib, but it already gave me bad advice about eliminating spaces from headers imported by `pandas.read_csv()`.
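What does work is cleaning up the columns index after the read. A minimal sketch (`stats.csv` is a stand-in for my actual run output):

```python
import pandas as pd

# skipinitialspace drops spaces that sit right after each delimiter,
# in both the header row and the data rows.
df = pd.read_csv("stats.csv", skipinitialspace=True)

# Belt and braces: strip any remaining whitespace from the header names.
df.columns = df.columns.str.strip()
```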
I got a basic line chart for a single run working, but it’s really ugly because I only ran two generations. Doing a bigger batch of 20 runs at 30 generations each. Should take ~30 minutes.
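The chart itself is only a few lines of pandas and Matplotlib. A rough sketch (the file name and column names are guesses at my CSV layout):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Rough sketch: mean fitness per generation for one run.
# "stats.csv", "generation", and "fitness" are assumed names.
df = pd.read_csv("stats.csv")
df.groupby("generation")["fitness"].mean().plot()
plt.xlabel("Generation")
plt.ylabel("Mean fitness")
plt.show()
```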
By the way, this is what the CPU looks like when running the `evaluate` command. Could use more of them cores if I implement threads in the `evaluate` command.
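In the meantime, a cheap way to use the idle cores would be launching several `evaluate` runs side by side from a wrapper script. A hypothetical sketch (the executable name and `--seed` flag are placeholders, not the real CLI):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Threads are fine here because each one just waits on a child process.
def run_one(seed: int) -> int:
    return subprocess.run(
        ["./BraitenBoids", "evaluate", "--seed", str(seed)]
    ).returncode

# Keep four simulations in flight at a time.
with ThreadPoolExecutor(max_workers=4) as pool:
    codes = list(pool.map(run_one, range(20)))
```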
…
20 minutes. Even better.
Ouch. It’s all over the place. Some of them aren’t improving over time, and a lot of them do well and then get worse.
Is this down to the food source locations? I wonder what’d happen if I standardized those.
Second run is still gross.
And here’s the average:
Definitely not what I was hoping for. But I think it _is_ working properly. I’ll just have to troubleshoot the simulation itself.
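For reference, that average is just a concat and groupby over the per-run files. A sketch, where the `runs/` layout and the column names are my guesses:

```python
import glob
import pandas as pd
import matplotlib.pyplot as plt

# Mean fitness per generation, averaged across every run in the batch.
frames = [pd.read_csv(path) for path in sorted(glob.glob("runs/*.csv"))]
avg = pd.concat(frames).groupby("generation")["fitness"].mean()
avg.plot()
plt.xlabel("Generation")
plt.ylabel("Mean fitness across runs")
plt.show()
```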
K, I’ll try with a ring of food sources. Add a little order to the randomness.
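The ring itself is simple geometry in any language; a Python sketch of the idea (the function name and signature are mine, not the simulation’s):

```python
import math

# n food sources spaced evenly on a circle of the given radius
# around the point (cx, cy).
def ring_positions(n: int, radius: float, cx: float = 0.0, cy: float = 0.0):
    step = 2 * math.pi / n
    return [(cx + radius * math.cos(i * step),
             cy + radius * math.sin(i * step)) for i in range(n)]
```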
Alright. I’ll do another batch of runs and see how the charts look.
Oooh. Maybe I should implement the idea I had for testing out a set of boids. Create a set of environments where food sources are always in the same location and run the boids on those. That way I can compare across strains more reliably. I guess I would have to run the boids from each generation of each run against them if I want to see how the performance improves (or does not) over multiple generations. That’s… a lot.
First I should implement the feature where the `visualize` command can accept a `boids.csv` as an input parameter, so I can see what these things actually look like.
I can also try removing the energy input neuron.
The figure on the left still looks like a horrid mess, but I guess the average is converging to something at least. How can I get it to be more consistent?
First, let’s give these builds some sort of description so I can remember the purpose of a run from hours earlier.
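Something like this hypothetical convention would do: stamp each batch’s output folder with the commit and a one-line note before kicking it off:

```sh
# Hypothetical: tag a batch's output directory with its provenance.
run_dir="runs/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$run_dir"
git rev-parse --short HEAD > "$run_dir/build.txt"
echo "ring of food sources, 20 runs x 30 generations" >> "$run_dir/build.txt"
```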
…
Alright, I’m pretty happy with that.
Now I’ll run a big test with a ring of food and see how it looks… in half an hour or so.
…
This time it’s a little better. There are still a lot of runs that perform well in one generation and then very poorly in the next. It’s nice that the average improves steadily and seems to converge, making a nice smooth curve.
So how do I get any given run to look more like the average of 20 runs?