
We’re fast approaching topics like evolutionary computation and artificial neural networks on the show. Unless something crops up that is just too interesting not to make an episode about, we should get started with those by episode 89. It has been a long time since I last studied these topics, and I’ve been looking forward to finding out what new tricks have been developed since.

My favorite approach is genetic programming, as described by John Koza. Mind you, after the recent episodes, I’m more aware than ever of its limitations, but it still strikes me as a rather elegant approach.

In genetic programming, you start with an initial, usually randomly generated, population of small computer programs. Each program is tested and given a fitness score based on how well it performs on the given problem. Some portion of the population that has done relatively well is perturbed (mutated, and perhaps combined in an analog of sexual reproduction) to form the next generation of programs. The new population is sent through the whole process again, and the loop continues until one or more programs in the population solve the problem to your satisfaction. Either that, or they fail entirely, and you have to back up and figure out what’s going wrong.
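That loop is easy to sketch in code. The toy version below is a deliberate simplification of Koza-style GP: instead of program trees, it evolves fixed-length postfix token strings over a small made-up node set, scoring each program by how closely it matches the target function x² + 1 on a handful of sample points. The node set, fitness function, and parameters here are all invented for illustration.

```python
import random

NODES = ["x", "1", "+", "*"]  # a made-up, problem-specific node set

def run(program, x):
    """Evaluate a postfix program on a stack; None if it is malformed."""
    stack = []
    for node in program:
        if node == "x":
            stack.append(x)
        elif node == "1":
            stack.append(1.0)
        else:
            if len(stack) < 2:
                return None          # stack underflow: invalid program
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if node == "+" else a * b)
    return stack[-1] if stack else None

def fitness(program):
    """Lower is better: total error against the target x*x + 1."""
    total = 0.0
    for x in range(-5, 6):
        out = run(program, float(x))
        if out is None:
            return float("inf")      # malformed programs score worst
        total += abs(out - (x * x + 1))
    return total

def evolve(pop_size=200, length=7, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(NODES) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)        # evaluate and rank the population
        if fitness(pop[0]) == 0:
            break                    # a perfect program has appeared
        survivors = pop[: pop_size // 4]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(length)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.3:                 # occasional mutation
                child[rng.randrange(length)] = rng.choice(NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(" ".join(best), fitness(best))
```

Real GP crosses over subtrees rather than cutting flat token strings, which keeps offspring syntactically valid; this sketch just penalizes invalid offspring instead.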

In GP, each possible command is called a “node.” Programs are composed of nodes that are specific to the given problem, tied together in a way that makes sense. I’ve always wondered if there might be some set of nodes that could, at least in theory, solve any given problem.

The set of all possible problems that can be solved by a computer is infinite, or as close to it as makes no practical difference. Even so, it turns out that the set of commands needed to solve any problem a computer can solve can be very small. In fact, if you’re unconcerned with how difficult it is to write a program, you can solve any computable problem with just one command.

Introducing the “one instruction set computer.”

A “one instruction set computer,” or OISC, is a method of computing that uses only one command. There are several approaches: some are arithmetic, and at least one uses a command that simply moves data from one location to another. The OISC approach is most often used as a method for designing computer hardware, but we might be able to adapt it for evolutionary computation.
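The best-known arithmetic variant is “subtract and branch if less than or equal to zero,” usually shortened to subleq: the single instruction takes three addresses a, b, c, subtracts the value at a from the value at b, and jumps to c if the result is nonpositive. Here is a minimal interpreter as a sketch; conventions vary, and I’m assuming the common one where a jump to a negative address halts the machine.

```python
def subleq(mem, pc=0, max_steps=100_000):
    """Run a subleq machine until it halts or runs out of steps.

    Each instruction is three cells: a, b, c.  Execute mem[b] -= mem[a],
    then jump to c if the result is <= 0, otherwise fall through to pc + 3.
    Convention assumed here: a jump to a negative address halts.
    """
    steps = 0
    while pc >= 0 and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        steps += 1
    return mem

# A tiny program: add the value at cell 9 into cell 10, using cell 11 as a
# scratch zero cell.  Each branch target equals the fall-through address, so
# the conditional jump is harmless until the final instruction, which jumps
# to -1 to halt.
mem = [9, 11, 3,     # Z -= a      (Z becomes -a)
       11, 10, 6,    # b -= Z      (b becomes b + a)
       11, 11, -1,   # Z -= Z, then halt
       3, 4, 0]      # data: a = 3, b = 4, Z = 0
subleq(mem)
print(mem[10])  # 3 + 4 = 7
```

From that one instruction you can build copies, loops, and comparisons, which is what makes the machine, given enough memory, able to compute anything a conventional computer can.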

I’ve always wanted to be able to design a computer, at least conceptually—to be able to go from simple bit manipulation, all the way up to something that could run an arbitrarily complicated piece of software, like word processing or games. This is the first time that particular goal has been in sight. Before attempting to use such algorithms for evolutionary computation, I need to familiarize myself with how such things work.

So, let’s make a computer!

That would make for far too many miserably long and dull podcast episodes. Instead, I’ll blog about it.

I’ve no clue how long this will take, or if I’ll even finish. I make no promises, but it’s an interesting experiment.
