Monthly Archives: December 2017

Ep 114: Use of graphics cards for neural networks

Your graphics card holds a GPU, a graphics processing unit. Unlike your CPU, which performs one operation at a time, the GPU performs many simple operations on many numbers at once. That parallelism is what lets your computer run your favorite game, and it can also be used to run artificial neural networks and implement deep learning.
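To make the contrast concrete, here’s a toy sketch in Python (my own illustration, not something from the video below). Both halves do the same element-wise addition; the second is written in the one-kernel-per-element style that GPU programming models such as CUDA use, though here the kernels run one after another.

```python
# CPU style vs. GPU style for the same job: adding two lists of numbers.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# CPU style: one operation after another in a single loop.
c_cpu = []
for i in range(len(a)):
    c_cpu.append(a[i] + b[i])

# GPU style: write a tiny "kernel" that handles ONE element, then
# launch one copy per element.  Here the launches run sequentially,
# but on a real GPU they would all run at the same time.
def add_kernel(i, a, b, out):
    out[i] = a[i] + b[i]

c_gpu = [0.0] * len(a)
for i in range(len(a)):
    add_kernel(i, a, b, c_gpu)

print(c_cpu == c_gpu)  # True: same answer, very different execution model
```

The kernel-per-element shape is why neural networks map so well to GPUs: a layer’s output is thousands of independent multiply-adds.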

Here’s an amusing short video that demonstrates the difference between the sequential CPU approach and the parallel processing of a GPU.

Mythbusters Demo GPU versus CPU

Ep 113: Diving into deep learning

Neural networks have had their ups and downs. However, in the last decade or so, they’ve really taken off. There were a couple of breakthroughs that made all the difference. Today we look at one of them, called “Deep Learning.”

Here’s a four-post blog series on deep learning concepts and history.

A ‘Brief’ History of Neural Nets and Deep Learning

Here are a couple of talks on deep learning, and some of the things it has been able to do.

The wonderful and terrifying implications of computers that can learn

The Deep End of Deep Learning

Ep 112: Evolving neural networks with Polyworld

Figuring out exactly what neural network structure could capture intelligence is a complicated problem. Why not let evolution do it for you? Today we look at Polyworld, an ongoing experiment that does just that.
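The evolutionary loop at Polyworld’s core can be caricatured in a few lines. This sketch is my own illustration and vastly simpler than Polyworld itself: the “genome” is just a bit string and fitness just counts 1s, standing in for evolving a network’s structure and scoring how well its owner survives.

```python
import random

random.seed(0)          # fixed seed so the run is repeatable
GENES = 16

def fitness(genome):
    return sum(genome)  # stand-in for "how well did this creature do?"

def mutate(genome, rate=0.1):
    # flip each gene with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

# start from 20 random genomes
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(20)]

for _ in range(60):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                   # keep the 5 fittest
    offspring = [mutate(random.choice(survivors)) for _ in range(15)]
    population = survivors + offspring

print(fitness(population[0]))  # best genome found; close to the maximum of 16
```

Score, cull, mutate, repeat: nobody designs the winning genome, it just accumulates.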

Here’s a talk on Polyworld and some of its results.

Using Evolution to Design Artificial Intelligence

Ep 111: Unsupervised learning

When you are teaching an AI to solve a problem or perform some task, you usually have to provide the right answer. Sometimes, though, the AI can learn by itself, without being told what the right answer is.
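One classic unsupervised method is clustering. Here’s a minimal sketch (my own illustration) of one-dimensional k-means: the data carries no labels, yet the algorithm finds the two groups on its own.

```python
# 1-D k-means with two clusters: no labels anywhere in the data.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = [points[0], points[3]]   # naive initial guesses

for _ in range(10):                # a few refinement passes
    groups = [[], []]
    for p in points:               # assign each point to its nearest center
        nearest = min(range(2), key=lambda k: abs(p - centers[k]))
        groups[nearest].append(p)
    # move each center to the mean of the points assigned to it
    # (assumes neither group ends up empty, which holds for this data)
    centers = [sum(g) / len(g) for g in groups]

print(sorted(centers))  # roughly [1.0, 9.07]: the two groups it discovered
```

No one told the program there was a low cluster and a high cluster; the structure emerged from the data alone.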

Ep 110: Better and better

Computers are being used to design computers. The better our computers and their tools get, the better the computers and tools they can produce. But it wasn’t always an easy path. Today we look at the VAX 9000. It used a system called “SID” to generate part of its design. SID was an expert system, and it outperformed the human engineers, some of whom refused to work with it. The company that created the VAX 9000 didn’t do well, and was eventually acquired by Compaq after divesting its major assets.

Here’s a video Digital Equipment Corporation produced for its sales department in 1991, 7 years before the company failed.

VAX 9000 Sales Video

Here’s an article about the earlier VAX 8800 series, and Digital Equipment’s move into the mainframe market.


Ep 109: An experiment in fuzzy logic

Computers use Boolean logic: everything is true or false, yes or no, one or zero. But there are plenty of situations a simple yes or no won’t cover. To get a computer to handle those situations, one can use fuzzy logic. Today, we have an informal experiment that shows why fuzzy logic is needed for even simple things.
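The core idea fits in a few lines. This sketch is my own illustration, not the episode’s experiment: a fuzzy membership function assigns every temperature a degree of “warm” between 0 and 1, instead of forcing a yes/no answer.

```python
def warm(temp_c):
    """Triangular membership function: fully 'warm' at 25 C,
    fading linearly to 0 at 15 C and at 35 C."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10
    return (35 - temp_c) / 10

# Boolean logic would force 19 C and 24 C into the same yes/no bucket;
# fuzzy logic keeps the distinction.
print(warm(19))   # 0.4 -- somewhat warm
print(warm(24))   # 0.9 -- very warm
print(warm(40))   # 0.0 -- not warm at all
```

The 15/25/35 breakpoints are arbitrary choices for the example; a real fuzzy controller would tune them, and would combine several such membership functions with fuzzy AND/OR rules.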

Here’s an introductory video on fuzzy logic.

An Introduction to Fuzzy Logic

And here’s a detailed tutorial on fuzzy logic and its application.

Fuzzy Logic Tutorial

Ep 108: Socrates is not a woman

Evolutionary approaches, genetic algorithms, and neural networks aren’t the only ways to create artificial intelligence. Today, we look at one of the early and rather successful approaches: expert systems.
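At heart, an expert system is a set of facts plus if-then rules, applied repeatedly (forward chaining) until nothing new can be deduced. Here’s a minimal sketch of that loop, my own illustration, using the classic syllogism:

```python
# Facts are (predicate, subject) pairs; rules derive new facts from old ones.
facts = {("man", "Socrates")}

rules = [
    # If X is a man, then X is mortal.
    lambda f: {("mortal", x) for (p, x) in f if p == "man"},
]

# Forward chaining: keep firing rules until no new facts appear.
changed = True
while changed:
    changed = False
    for rule in rules:
        new = rule(facts) - facts
        if new:
            facts |= new
            changed = True

print(("mortal", "Socrates") in facts)  # True: the system deduced it
```

Real expert systems like SID held thousands of rules written with domain experts, but the deduce-until-stable loop is the same shape.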

Ep 107: Take heart, ye robots shivering in the cold

In the past, new methods of creating artificial intelligence have garnered interest and enthusiasm. Then, when the overly optimistic forecasts failed, nearly all funding and research ground to a halt. It’s called an AI winter. Despite such setbacks, the general trend has been toward increasing ability and complexity in AI systems. Spring is coming, and maybe it’s already here.

Adding by subtracting

There won’t be posts after this one on this thing for a couple of weeks. I must navigate the dangerous, relative-infested waters of the holidays.

I need to be able to show that the way we program our machine can run any program, at least in principle. On the other hand, I’d very much like to switch gears, and get to talking about gear soon.

Here’s one more post on using subleq. I think that will do for now.
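For reference, the trick can be shown with a tiny subleq interpreter, my own illustration. A subleq instruction `a, b, c` means: subtract mem[a] from mem[b], and jump to c if the result is <= 0. Addition then falls out of two subtractions:

```python
# A minimal subleq machine, plus a three-instruction program that
# computes B = B + A using nothing but subtraction.

def run(mem):
    pc = 0
    while pc >= 0:                    # a jump to a negative address halts
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3

# Addresses 9, 10, 11 hold A, B, and a scratch cell Z (initially 0).
mem = [
    9, 11, 3,     # subleq A, Z  -> Z = -A
    11, 10, 6,    # subleq Z, B  -> B = B - (-A) = B + A
    11, 11, -1,   # subleq Z, Z  -> clear Z, then jump to -1 (halt)
    7,            # addr 9:  A = 7
    5,            # addr 10: B = 5
    0,            # addr 11: Z = 0
]
run(mem)
print(mem[10])  # 12, i.e. 7 + 5
```

Negating A into the scratch cell and subtracting the negative is the whole “adding by subtracting” trick.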

Ep 106: The perceptron

In 1957, Frank Rosenblatt came up with the perceptron, a simple neural network that was able to recognize simple shapes. Unfortunately, Rosenblatt got a little overexcited, and made inflated statements about what his perceptron would be able to do. After the 1969 publication of Marvin Minsky and Seymour Papert’s book, “Perceptrons,” which debunked many of Rosenblatt’s claims and pointed out some inherent limitations of the perceptron algorithm, interest in and funding for neural networks dropped drastically.
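A perceptron fits in a few lines. Here’s a sketch, my own illustration, trained on the OR function with Rosenblatt’s learning rule; swapping in XOR targets, the kind of problem Minsky and Papert highlighted, makes it fail no matter how long it trains.

```python
# A minimal perceptron learning the OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    # fire (output 1) if the weighted sum crosses the threshold
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                 # a few passes over the data
    for x, target in data:
        error = target - predict(x)   # perceptron learning rule:
        w[0] += lr * error * x[0]     # nudge weights toward the answer
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 1, 1, 1] -- OR, learned
```

OR is linearly separable, so a single line (weighted sum plus threshold) can split the cases; XOR is not, which is the limitation the book made famous.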

Here are a couple of articles about the perceptron and the early history of neural network design.

History of the Perceptron

A ‘Brief’ History of Neural Nets and Deep Learning