The Monty Python test
Here's the output of a test run.
memory={-6, -14, -1, -1, -1, -1, -1, -1, -1, -1, -1}
size=11
Size=8
Buffer size=3
memory={-1, -1, -1, -1, -1, -1, -1, -1}
buffer memory={-6, -14, -1}
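The post only shows the output, so the mechanism below is a guess: a minimal Java sketch, assuming the buffer simply takes the first few elements off the front of memory and the rest becomes the new, smaller memory. The class name and all details are mine, not the project's.

import java.util.Arrays;

public class BufferSplitDemo {
    public static void main(String[] args) {
        // Starting state reconstructed from the output above.
        int[] memory = new int[11];
        Arrays.fill(memory, -1);
        memory[0] = -6;
        memory[1] = -14;

        int bufferSize = 3;
        // Assumed split: the first bufferSize elements move into the buffer...
        int[] buffer = Arrays.copyOfRange(memory, 0, bufferSize);
        // ...and the remaining elements become the new memory.
        memory = Arrays.copyOfRange(memory, bufferSize, memory.length);

        System.out.println("size=" + memory.length);                    // 8
        System.out.println("buffer size=" + buffer.length);             // 3
        System.out.println("memory=" + Arrays.toString(memory));        // eight -1s
        System.out.println("buffer memory=" + Arrays.toString(buffer)); // -6, -14, -1
    }
}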
…
Deep learning algorithms, and neural networks in general, require much more training than humans do. They are unable to generalize well enough to handle situations not covered in the training data, and can be thrown off by things that a human wouldn’t even notice. Today we look at these challenges by examining what it takes to train a neural network to drive a car.
Here are a couple of links about training self-driving vehicles.
Edge case training and discovery are keeping self-driving cars from gaining full autonomy
Training AI for Self-Driving Vehicles: the Challenge of Scale
Here’s a short video demo and an article about how AI image recognition can be fooled by things that wouldn’t fool many animals.
Though deep learning has had some promising results, there are still some things it simply doesn’t do well. Other algorithms do as well or better at certain tasks. On the other hand, we’ve only been able to implement comparatively small neural networks. Perhaps, if we could simulate larger networks, deep learning or an algorithm like it could do what it currently cannot.
Here’s a link to a paper by Gary Marcus, providing a critical review of deep learning and suggesting that it may have to be combined with other approaches to create a general intelligence.
We’re in space, for reasons which are not explained. I’m the engineer, and it’s mostly about programming. There is a problem, and I need to make the AI software creatures solve it for us.
It was a dream, and short on details.
…
I did a test where I had the code create a figure that was 36 safe random numbers long. A “safe random” number is a random number in the range from the lowest port number to the memory array’s length. Port numbers are negative. 36 numbers is a maximum of 12 commands, unless the figure loops.
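For concreteness, here’s a minimal Java sketch of how such a safe random number might be generated. The lowest port value of -14 is borrowed from the test output above; the exclusive upper bound, the names, and the three-numbers-per-command reading are all assumptions on my part, not the project’s actual code.

import java.util.Arrays;
import java.util.Random;

public class SafeRandomDemo {
    static final int LOWEST_PORT = -14;       // assumed: ports are negative; -14 appears in the output
    static final int[] memory = new int[11];  // assumed memory size, matching the test above
    static final Random rng = new Random();

    // Returns a random value in [LOWEST_PORT, memory.length), so it is
    // always either a valid (negative) port number or a valid memory index.
    static int safeRandom() {
        int span = memory.length - LOWEST_PORT;  // size of the legal range
        return LOWEST_PORT + rng.nextInt(span);
    }

    public static void main(String[] args) {
        int[] figure = new int[36];  // 36 numbers: up to 12 commands at 3 numbers each
        for (int i = 0; i < figure.length; i++) {
            figure[i] = safeRandom();
        }
        System.out.println(Arrays.toString(figure));
    }
}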
…
Your graphics card holds a GPU, a graphics processing unit. Unlike your CPU, which does one operation at a time, the GPU does many simple operations on many numbers at once. This is what allows your computer to run your favorite game, and it can also be used to run artificial neural networks and implement deep learning.
Here’s an amusing short video that demonstrates the difference between the sequential CPU approach and the parallel processing of a GPU.
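As a rough sketch of the difference, here’s the same element-wise addition done sequentially and then in a data-parallel style. This uses Java’s parallel streams, which spread the work across CPU cores rather than a GPU, but the principle of applying one simple operation across many numbers at once is the same.

import java.util.stream.IntStream;

public class ParallelDemo {
    public static void main(String[] args) {
        float[] a = new float[1_000_000];
        float[] b = new float[1_000_000];
        float[] out = new float[a.length];

        // Sequential, CPU-style: one element after another.
        for (int i = 0; i < a.length; i++) {
            out[i] = a[i] + b[i];
        }

        // Data-parallel, GPU-style: the same simple operation applied
        // to many elements at once, spread across the available cores.
        IntStream.range(0, a.length)
                 .parallel()
                 .forEach(i -> out[i] = a[i] + b[i]);
    }
}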
Neural networks have had their ups and downs. However, in the last decade or so, they’ve really taken off. There were a couple of breakthroughs that made all the difference. Today we look at one of them, called “Deep Learning.”
Here’s a four-post blog series on deep learning concepts and history.
A ‘Brief’ History of Neural Nets and Deep Learning
Here are a couple of talks on deep learning, and some of the things it has been able to do.
The wonderful and terrifying implications of computers that can learn
Figuring out exactly what structure of a neural network could capture intelligence is a complicated problem. Why not let evolution do it for you? Today we look at Polyworld, an experiment that did, and continues to do, exactly that.
Here’s a talk on Polyworld and some of its results.
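Polyworld’s actual mechanics are far richer, but as a toy illustration of letting evolution pick a structural parameter, here’s a minimal mutate-evaluate-select loop in Java. The fitness function and every number in it are invented for the demonstration.

import java.util.Random;

public class EvolveDemo {
    static final Random rng = new Random();

    // Stand-in fitness: pretend networks with about 40 hidden units do best.
    static double fitness(int hiddenUnits) {
        return -Math.abs(hiddenUnits - 40);
    }

    public static void main(String[] args) {
        int best = 1 + rng.nextInt(100);  // start with a random structure
        for (int gen = 0; gen < 1000; gen++) {
            int mutant = Math.max(1, best + rng.nextInt(11) - 5);  // small random change
            if (fitness(mutant) >= fitness(best)) {
                best = mutant;  // keep whichever structure scores better
            }
        }
        System.out.println("evolved hidden units: " + best);  // almost always 40
    }
}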
When you are teaching an AI to solve a problem or do some task, you usually have to provide the right answer. Sometimes, however, the AI can learn by itself, without being told what the right answer is.
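The episode doesn’t name a specific algorithm, but clustering is a classic case of learning without labeled answers. Here’s a minimal one-dimensional k-means sketch in Java; the data points and the choice of two clusters are invented for illustration.

import java.util.Arrays;

public class KMeans1D {
    public static void main(String[] args) {
        double[] data = {1.0, 1.2, 0.8, 5.0, 5.3, 4.9};  // made-up points
        double[] centers = {0.0, 1.0};                    // two clusters, arbitrary starts

        for (int iter = 0; iter < 10; iter++) {
            double[] sum = new double[centers.length];
            int[] count = new int[centers.length];

            // Assignment step: each point joins its nearest center.
            for (double x : data) {
                int best = 0;
                for (int c = 1; c < centers.length; c++) {
                    if (Math.abs(x - centers[c]) < Math.abs(x - centers[best])) best = c;
                }
                sum[best] += x;
                count[best]++;
            }

            // Update step: each center moves to the mean of its points.
            for (int c = 0; c < centers.length; c++) {
                if (count[c] > 0) centers[c] = sum[c] / count[c];
            }
        }
        // No labels were ever provided; the centers settle near 1.0 and 5.07.
        System.out.println(Arrays.toString(centers));
    }
}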
Computers are being used to design computers. The better our computers and their tools get, the better the computers and tools they can produce. But it wasn’t always an easy path. Today we look at the VAX 9000, which used a system called “SID” to generate part of its design. SID was an expert system, and it outperformed the human engineers, some of whom refused to work with it. Digital Equipment Corporation, the company that created the VAX 9000, didn’t do well; it was eventually acquired by Compaq after divesting its major assets.
Here’s a video Digital Equipment Corporation produced for its sales department in 1991, 7 years before the company failed.
Here’s an article about the earlier VAX 8800 series and Digital Equipment’s move into the mainframe market.