Author: thelabwithbradbarton

Ep 123: A tale of two books, part 2

One of the things I’ve always had available for experiment is my own mind. Whether it’s mnemonics to improve my memory, or strange mental exercises to induce a lucid dream or an out-of-body experience, I’ve spent decades plumbing the depths of my internal ocean. Today, I tell the story of the second book I bumped into as a child, and the strange quest it began, even though it wasn’t an especially good book.

And the baby has a baby

When I started posting about this project, I just dove in. Eventually I’ll have to lay out exactly what I’m working on, and how I’m approaching the development of my artificial life forms. That will take some time. For now, have yet another post wherein I entirely fail to explain what’s going on.

Over the weekend, I finished coding the Figure class. It only took 1500 to 1700 lines of code, depending upon how you count and what you count. I still have much testing, debugging, and documentation to get done before I can finally move on to the interesting parts.

A few days ago, while testing one of many little pieces of the project, I saw the first little program produced by the system, rather than hand-written by me, reproduce itself. I’m going to paste in the text from my journal. Note that each program copies itself, and adds a small mutation to the end of the child program. The longer the program, the younger it is.

Friday January 19, 2018

2:18 AM

First time I ran a figure I didn’t write.

The size of the realm=3
looking for neighbors.
Going up.
nobody new around.
Reading from out there.
Running baby.
looking for neighbors.
Going up.
party at 0!
Reading from out there.
Figure 0 memory={-4, -17, 3, -9, -13, 6, -8, -19, 9, -3, -19, 12, -6, -18, -1, -1, -1, -1}
Figure 2 memory={-4, -17, 3, -9, -13, 6, -8, -19, 9, -3, -19, 12, -6, -18, -1, -1, -1, -1, 3, 17, -14, 18, 15, 22}
Figure 1 memory={-4, -17, 3, -9, -13, 6, -8, -19, 9, -3, -19, 12, -6, -18, -1, -1, -1, -1, 3, 17, -14}

See that? Baby had a baby!
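To make the copy-and-mutate idea a little more concrete, here’s a rough sketch in Java. It isn’t the actual Figure code, and the class name, method name, and mutation range are all made up for illustration; it just shows a child memory being built by copying the parent’s memory and tacking a small mutated value onto the end, which is why the longer figures in the log are the younger ones.

import java.util.Arrays;
import java.util.Random;

// Rough sketch only, not the project's actual Figure code.
public class ReproductionSketch {
    private static final Random RNG = new Random();

    // Hypothetical helper: copy the parent's memory, then append one mutation.
    static int[] reproduce(int[] parentMemory) {
        int[] child = Arrays.copyOf(parentMemory, parentMemory.length + 1);
        child[child.length - 1] = RNG.nextInt(45) - 22; // small random value in -22..22
        return child;
    }

    public static void main(String[] args) {
        int[] parent = {-4, -17, 3, -9, -13, 6};
        int[] child = reproduce(parent);
        // The child is longer than the parent, so longer means younger.
        System.out.println("parent=" + Arrays.toString(parent));
        System.out.println("child =" + Arrays.toString(child));
    }
}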

Ep 122: A tale of two books, part 1

After over 120 episodes, I thought it might be about time I introduce myself. So, today we have the story of how I fell in love with science.

In this episode, I quote Walt Whitman’s poem “O Me! O Life!” with the line: “That the powerful play goes on, and you may contribute a verse.” Here’s a link to the entire poem.

O Me! O Life!

Ep 121: Neural Turing machines

Traditional programming methods are very good at solving problems that have simple rules to apply. They’re not so good when there are no simple rules that can be used, or when the rules are unknown. Neural networks are very good at problems that have complex or poorly defined rules, but not so good at simple if-then rules. With traditional computing on one hand and neural networks on the other, each good at what the other is bad at, perhaps the two should somehow be combined.

Here’s a video on Neural Turing machines.

Neural Turing Machines: Perils and Promise

Here are the episodes that were referred to in this episode.

Ep 108: Socrates is not a woman

Ep 110: Better and better

Here are a couple of articles on Neural Turing machines.

Neural Turing Machines | the morning paper

Neural Turing Machine

Ep 120: Long short-term memory

In episode 117, I expressed some concern. It seemed that neural network implementations lacked a way of holding onto information over time. It turns out that the problem has been addressed by recurrent neural networks. Recurrent networks remember, though not very well. Today, we look at a modification of recurrent networks that allows artificial neural networks to remember much more, for much longer.

Here is one of the best videos I’ve ever seen for explaining how a neural network functions; it also explains how a long short-term memory network works.

Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)

Here are a couple of articles on long short-term memory neural networks.

Understanding LSTM Networks

Recurrent Neural Networks Tutorial

Ep 119: Robotic dreaming

When you are awake, the world comes in at you through your senses. When you are asleep and dreaming, you create a world from within. An algorithm for deep learning, called “the wake-sleep algorithm,” seems to capture this behavior.

I referenced the previous episode in this one, so you may as well have a link to it.

Ep 118: Sleep and dreams

Here’s a link to a 13-minute, jargon-heavy lecture on the wake-sleep algorithm.

Lecture 13.4 — The wake sleep algorithm

I love the smell of code in the morning

I need to make a couple of notes before I forget.

I went through and put in code to set b to 1 or -1 on the methods that needed it. I was going to have write and paste, both inner and outer, return -1 if they were called while the buffer was empty. However, I changed my mind. Instead, the outer commands return -1 when a figure tries to write or paste to itself with the outer head. The same thing will happen with read and cut. As it happens, that means that inner write and inner paste never set b to -1. I went ahead and had them set b to 1, just for the sake of consistency.

Meanwhile, in move inner, b is set to -1 if the method tries to move the inner head further than it can go, and to 1 otherwise. That gives a figure a way to tell when it has reached the top or bottom of the figure. Read commands also set b to -1 if a read is attempted while the head is at the end of the figure, one spot further than there are numbers to be read.
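Here’s a minimal sketch of that b convention. The names are hypothetical and this isn’t the real Figure code; it just shows operations reporting success or failure through b so a figure’s program can test it afterward.

// Hypothetical sketch of the b status flag, not the real Figure code.
class StatusFlagSketch {
    int b = 1;                 // status flag a figure's program can test after each operation
    int innerHead = 0;
    int[] memory = new int[8];

    // Inner write never reports failure; it just sets b to 1 for consistency.
    // (This sketch ignores what the real code does at the end of the figure.)
    void innerWrite(int value) {
        if (innerHead < memory.length) {
            memory[innerHead] = value;
        }
        b = 1;
    }

    // Moving the inner head fails (b = -1) if it would go further than it can.
    // The head is allowed to sit one spot past the last number.
    void moveInner(int delta) {
        int target = innerHead + delta;
        if (target < 0 || target > memory.length) {
            b = -1;
        } else {
            innerHead = target;
            b = 1;
        }
    }

    // Reading while the head is one spot past the last number also sets b to -1.
    int innerRead() {
        if (innerHead >= memory.length) {
            b = -1;
            return 0;
        }
        b = 1;
        return memory[innerHead];
    }
}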

I need to make sure that any outer commands, like read, cut, write, or paste, set the otherHead field of the figure being read from or written to, to be the address of the figure that is doing the reading or writing.

I’ve still got several outer head commands to implement, and I’m way behind on comments. I actually think I’ll go ahead and finish implementing the remaining ports before I catch up on the docs. It shouldn’t, judging by the methods I’ve already created, take too long. With a combination of cut and paste, and find and replace, I can adapt the methods I’ve already implemented to do what they do, only to another figure, instead of the one that is calling the outer head ports.

I have no clue how long catching up on the documentation will take, but I don’t want to go further without getting that done. There are just too many little details that could get lost if I don’t take care of it soon.

Meanwhile, last night, I tested setting one of the slots in the realm’s population array to null. It worked. I ran it a couple of times, but stopped when I realized that I was making orphans. The one figure would write out a child copy of itself, add a slight mutation, and then the parent figure got deleted. The system is very far from creating anything that would count as living, but it still made me feel just enough guilt to make me stop screwing around and move on.

Okay, that’s where I am, and where I’m going.

I got the addresses handled in what’s there so far. I made sure the outerWrite command set the target figure’s otherAddress field to be the address of the source figure.

While I was at it, I changed some of the test code messages, so directions up and down are reversed from before. I just picture 0 at the top of the array, so adding is going down to me.

When a method needs to see if the outer head is pointing at the source figure, it should test the addresses. It was using the name field, but I might switch that to a string or some other object, so I changed that too. That’s why I did all this in the morning; I had too much that I might forget to do if I didn’t get it done.

if (x == address)
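Putting those notes together, here’s a rough sketch, again with made-up names rather than the real Figure code, of an outer write that records the source figure’s address in the target’s otherAddress field and uses the address test instead of the name field.

// Hypothetical sketch of the otherAddress bookkeeping, not the real Figure code.
class OuterWriteSketch {
    int address;               // this figure's own address in the realm
    int otherAddress = -1;     // address of the last figure to touch this one from outside
    int b = 1;                 // feedback flag, as with the inner commands
    int[] memory = new int[8];

    OuterWriteSketch(int address) {
        this.address = address;
    }

    // Outer write from this figure into the target figure's memory.
    void outerWrite(OuterWriteSketch target, int position, int value) {
        if (this.address == target.address) {
            b = -1;                            // a figure tried to write to itself
            return;
        }
        if (position >= 0 && position < target.memory.length) {
            target.memory[position] = value;
        }
        target.otherAddress = this.address;    // record who wrote to the target
        b = 1;
    }
}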

Got an episode to get done.

Mutant bouncing baby bits

It’s Saturday night, and I’ve got some coding to do. I’d really like to get a figure to self-reproduce tonight, but I don’t know if I can pull it off or not. Last time, in the middle of creating the realm, I realized that there was and is some stuff that needs doing with the ports and methods I’ve already created. The figures could use some feedback on how a given operation has worked.

It’s really easy to set up. I can set b to zero or less for one result, or one or higher for the other. It actually doesn’t matter which, since this happens from a set command. The value I give to b will only be used to choose one branch or the other, and won’t be saved. I’ve already got the test code in place, and moved b up to class scope in the Figure class. I’m thinking of moving the other internal values for what is, at the moment, the run method up to class scope as well. Right now, it’s all about the set method, but I have no idea what future implementations or extensions might need or want to do. I think it’s best to go for maximum flexibility, especially for the node system, which I’ve yet to talk about, let alone implement.
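As a sketch, with hypothetical names once more, the feedback amounts to b living at class scope on the figure, and a set-style command only using its sign to pick a branch:

// Hypothetical sketch: b is a field of the figure, and a set-style command
// only uses its sign to choose a branch; the value itself isn't kept.
class BranchSketch {
    int b = 1;                 // feedback flag shared by all of the figure's operations

    // Returns one instruction address or the other depending on whether the
    // last operation reported success (b >= 1) or failure (b <= 0).
    int chooseBranch(int onSuccess, int onFailure) {
        return (b >= 1) ? onSuccess : onFailure;
    }
}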

Ports are used for special commands; nodes are used to add new ports and abilities. The goal is to create a tunable emergent system. I don’t just want digital life; I want digital life that can solve problems and do tasks for me. It’s artificial life that is also artificial intelligence.


Ep 118: Sleep and dreams

There are two types of sleep: rapid eye movement, or REM sleep, and non-rapid eye movement, or non-REM. Dreams happen during both types of sleep, and there is a well-established link between the amount and quality of sleep you get and how well you recall and/or learn. Today, we take a little peek at what happens in the brain while you sleep and dream.

Here’s a link to a panel discussion on sleep and dreams. The part I talk about in this episode starts at roughly 22 minutes and 22 seconds in.

The Mind After Midnight: Where Do You Go When You Go to Sleep?

Here are a couple of articles about the studies done with rats and their dreams.

Rats May Dream, It Seems, Of Their Days at the Mazes

Rats dream about their tasks during slow wave sleep

Here’s a link to an article about memory, and the types of dreams that occur during REM and non-REM sleep.

Memory, Sleep and Dreaming: Experiencing Consolidation

Ep 117: Sleep, reset and brain wash

While you are sleeping, your brain performs a reset of sorts. Synaptic weights that increased over the course of the day decrease while you are sleeping. At the same time, the fluid your brain floats in rushes through your brain tissue, clearing out wastes that couldn’t be removed over the course of the day.

Here’s a video and an article about how wastes are cleared away during your sleep.

One more reason to get a good night’s sleep

How Sleep Clears the Brain

Here’s an article on the link between synapse size and synaptic weight—the strength of the signal that comes from a given synapse.

The Secret to the Brain’s Memory Capacity May Be Synapse Size

And here’s an article about how synapses shrink in size during sleep.

How Sleep Resets the Brain