Author: thelabwithbradbarton

Ep 24: Which part of the brain, off, on, and when

There is nothing simple about the brain. While certain skills can be improved by shutting down one part, those same skills may require increased activity in another.

Here’s a collection of short videos on the flow state of consciousness.

The Optimized Brain: A Workshop on FlowStates

Here’s a paper on the results of scanning the brains of jazz musicians during improvisational playing.

Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation

And here’s a TED talk by the author of the above paper on the same subject.

Your brain on improv

I’m not sure how I feel about it when scientists rap. Not because there’s anything wrong with rap; it’s just that they’re generally so very, very bad at it.

Ep 23: As I was saying

While it has been fun to talk about the vOICe and sensory substitution, it’s time to return to the brain, how it works, and how you might discover how best to use one of your very own.

Here’s a blog post (with no podcast episode) about the vOICe and my attempt to apply some of what I’ve learned and/or guessed about the brain.

A modified reaching exercise

And here’s the panel on genius from the World Science Festival.

Beautiful Minds: The Enigma of Genius

A modified reaching exercise

Over the weekend, I did a good deal of research. I was trying to track down which parts of the brain may interfere with certain skills. But I wanted more than just which bit does what; I wanted to know the subjective feel. How does it feel when that part of the brain is active, and can I consciously inhibit that part of the brain in order to improve performance?

I’ve been working with something called the “vOICe.” For background, you can check out episode 19. Based on what I’ve been learning, I decided to slightly modify an exercise from the vOICe training manual: reach and grab.

If you consider mnemonic techniques, the approach is to attach things that are difficult for your mind to recall to things that are easy: you tie the new information to structures within your brain/mind that are already in place. It occurred to me that I might be able to do the same with the soundscapes. Perhaps I could use methods already in place within my brain to keep track of where objects are in relation to my body while absorbing the new source of information, so that the new form of processing could piggyback on them.

Thus, I would set the target I was reaching for, a small cube of white modeling clay, down on the surface of my bed. Then I would use the vOICe to listen to where it was when I already knew where it was. I did this multiple times, placing the target in different spots, listening with the camera at different angles, and reaching out to touch the target with either hand.

When I’d toss the cube so that it bounced slightly and landed somewhere I wasn’t aware of, I’d find it in the soundscape and then reach out for it. If I missed, even by just a little, instead of picking it up I’d step back, look at it with the vOICe again, and reach out to touch it several times. It’s much the same as setting the target down somewhere on purpose, except that I wouldn’t know exactly where it was until I’d already touched it.

Going by the notion that my forebrain might interfere with the sensory processing that usually takes place mostly in the back brain, I changed the way I listen to the soundscapes. I don’t try to hear where the target is. I don’t focus too strongly on the soundscape at all. Instead, I’m simply aware of the soundscape, and I avoid trying to understand how and why it begins to work.

I was able to have some visual qualia. Visual qualia are the way things look: the steady-seeming sensation of actually seeing something, which turns out to be independent of how the information reaches your mind.

I’d be listening to the vOICe, and after a time, like seeing a little spark or glint, I’d “see” the little cube appear in my view. Interestingly, even when I’d “seen” the target, I often missed by a tiny bit, by about the same amount I’d tend to miss without the visual qualia. When that happened, sometimes the little image would jump slightly as I noticed where the target really was, but more often, it would simply vanish.

There were other times when the subjective “sight” of the target was completely correct, and I could just reach out and grab it.

Unfortunately, I haven’t been keeping track of my error rate, and I have nobody around to hold the clipboard, as it were.
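
Keeping score wouldn’t actually require a clipboard holder; a few lines of code would do. Here’s a minimal sketch, in Python, of one way it might work. The file name and the fields logged are placeholders, just my guess at what would be worth tracking:

```python
# A minimal trial logger for the reach-and-grab exercise: record a hit
# or a miss after each reach, and compute a running error rate.
# The CSV file name and fields here are hypothetical placeholders.
import csv
from datetime import datetime

LOG_FILE = "reach_trials.csv"

def log_trial(hit, hand="right", saw_qualia=False):
    """Append one reach attempt to the log."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(), int(hit), hand, int(saw_qualia)])

def error_rate():
    """Fraction of logged reaches that missed."""
    with open(LOG_FILE, newline="") as f:
        hits = [int(row[1]) for row in csv.reader(f)]
    return 1 - sum(hits) / len(hits) if hits else None

# After each reach: log_trial(True) for a grab, log_trial(False) for a miss.
```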

I’m putting terms like “look” and “see” in quotes because my eyes no longer work well enough to provide the “sight” of the target. However, as I had enough vision in the past to know what looking at something looks like, I can say that it’s the same. I see it, like seeing it looks.

As I’ve been totally blind for around six or seven years now, I’m likely using the visual cortex to help me model the world around me without vision. Looking with the vOICe when I already know where the target is should help integrate the newer source of visual information into that process.

I imagine the mental balancing act of listening without straining is something most users of the vOICe figure out instinctively, but I thought I might be able to accelerate the process by doing it deliberately. I believe it’s a mistake to try to imagine what the scene might look like; that strikes me as building a model with the wrong part of the brain, and I suspect doing so interferes with building the model in the parts of the brain that do this sort of thing automatically.

Sure, let’s separate mental processing into deliberate and automatic activity, based on how conscious you are of the process. I submit that deliberate mental activity can block or interfere with automatic mental activity.

I’m slowly putting together some ideas on how to better allocate mental resources for a given task, and this is merely the first experiment. I have no control group and no way to avoid observer bias. Stay tuned.

Ep 22: Learning and doing

If I hadn’t been digging around, attempting to learn about something else entirely, I might never have learned of the vOICe. And what fun is that? The more you learn, the more you can do. The more you can do, the more you can learn. I’m glad you decided to join me.

Here’s a link to the vOICe training manual.

Learn to see – The vOICe Training Manual

And here’s a link to episode 19, if you’d like to hear what the exercise I’m doing sounds like.

Ep 19: Can you hear what I see?

Ep 21: Where to zap and why

If you wanted to use brain stimulation to increase the usefulness of, or decrease the training time for, sensory substitution, where would you stimulate the brain? According to the model developed in today’s reference, it would be more effective to turn up the sound than to stimulate the brain’s visual areas and cause random activity there.

Here’s the paper.

The Emergence of Synaesthesia in a Neuronal Network Model via Changes in Perceptual Sensitivity and Plasticity
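
To make that intuition concrete, here’s a toy Hebbian sketch of my own, emphatically not the network from the paper: a single sound-to-vision connection strengthens only when its input and its response are correlated. Turning up the input strengthens that correlation, while random stimulation of the “visual” unit adds activity the connection can’t learn from.

```python
# A toy Hebbian illustration (my own, not the paper's model): a single
# sound -> visual-area weight grows when input and response are
# correlated. Louder input boosts that correlation; random stimulation
# of the visual unit is uncorrelated with the sound and teaches nothing.
import numpy as np

rng = np.random.default_rng(0)

def train(gain=1.0, noise=0.0, steps=5000, lr=0.005):
    w = 0.01                                  # weak initial coupling
    for _ in range(steps):
        s = gain * rng.random()               # auditory drive this instant
        v = w * s + noise * rng.normal()      # visual-unit response
        w += lr * (s * v - 0.5 * w)           # Hebbian growth plus decay
        w = min(max(w, 0.0), 1.0)             # keep the toy weight bounded
    return w

print("turn up the sound:", train(gain=2.0))   # weight grows toward 1
print("random zapping:   ", train(noise=1.0))  # weight decays toward 0
```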

Ep 20: Sensory substitution and brain stimulation

In episode 13, we talked about brain stimulation: using electrical and magnetic stimulation of the brain, without having to cut into the skull, to improve certain types of skills. Starting in episode 17, we talked about sensory substitution: using one sense to deliver the information that would normally come from a missing or impaired sense. That culminated in episode 19, when I demonstrated the vOICe.

An obvious question is, can brain stimulation improve sensory substitution?

Brain stimulation has been used to treat amblyopia, commonly known as lazy eye. Not all the results have been encouraging, but here’s a paper suggesting that longer-term results might be possible.

Long Lasting Effects of Daily Theta Burst rTMS Sessions in the Human Amblyopic Cortex

Lastly, from the website of the vOICe’s developers, we have an article that very briefly mentions the possibility of using brain stimulation to help vOICe users better integrate the visual information being turned into sound. There is much more there, and it’s worth the read for anyone interested in learning more about sensory substitution and synesthesia.

Artificial Synesthesia for Synthetic Vision via Sensory Substitution

Ep 19: Can you hear what I see?

Continuing the subject of sensory substitution, we have the vOICe. In this episode, I explain what it is and how it works, and I let you listen in as I do an exercise from the vOICe manual.

If you’d like to know more about the vOICe and/or download a free copy of the software for your very own, you can visit the Seeing with Sound website.

seeingwithsound.com
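
For the technically curious, the broad strokes of the vOICe’s mapping can be sketched in a few lines of Python: the image is scanned from left to right, roughly once per second, with the height of each bright pixel setting the pitch of a tone and its brightness setting the loudness. This is only a toy illustration; the scan time and frequency range below are my guesses, not the software’s actual settings.

```python
# A toy sketch of a vOICe-style image-to-sound mapping: scan the image
# left to right, one column at a time; within a column, pixel height
# maps to pitch and pixel brightness maps to loudness. The scan time
# and frequency range are guesses, not the vOICe's real settings.
import wave
import numpy as np

def soundscape(image, scan_seconds=1.0, rate=22050, f_lo=500.0, f_hi=5000.0):
    """image: 2D array of brightness values 0..1, row 0 = top of the picture."""
    rows, cols = image.shape
    freqs = np.geomspace(f_hi, f_lo, rows)        # higher rows -> higher pitch
    n = int(scan_seconds * rate / cols)           # samples per column
    t = np.arange(n) / rate
    chunks = []
    for c in range(cols):
        tones = np.sin(2 * np.pi * np.outer(freqs, t))
        chunks.append(image[:, c] @ tones)        # louder where brighter
    audio = np.concatenate(chunks)
    audio /= max(1e-9, np.abs(audio).max())       # normalize to [-1, 1]
    return (audio * 32767).astype(np.int16)

# A single bright dot is heard as a brief beep whose timing in the sweep
# gives left/right position and whose pitch gives height.
img = np.zeros((64, 64))
img[16, 40] = 1.0                                 # high up, right of center
with wave.open("dot.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                             # 16-bit samples
    f.setframerate(22050)
    f.writeframes(soundscape(img).tobytes())
```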

Ep 18: Sometimes it goes away

Today we have the sad story of the Optacon, a cautionary tale that is the reason I prefer the vOICe over the BrainPort for my sensory substitution needs.

Here are some rather old films that explain what the Optacon is, or rather was, and how it works, or rather worked.

Optaconmovies – YouTube

And here’s the story of how it all went wrong.

From Optacon to Oblivion: The Telesensory Story

Ep 17: Do you lick what you see?

Today we learn of Paul Bach-y-Rita and his work with sensory substitution, including the tongue stimulator used to provide visual information to the blind.

Here’s a YouTube video about Paul Bach-y-Rita and his work.

Paul Bach-y-Rita and Neuroplasticity – YouTube

Here’s an article about the tongue stimulator.

Seeing with Your Tongue | The New Yorker

And finally, here’s a link to the company that manufactures and sells the tongue stimulator.

BrainPort V100 Vision Aid

Ep 16: Can we use one sense for another?

If you read the following article, you’ll note that the person using the robotic exoskeleton is getting sensory feedback via his skin. Can we really use one sense in place of another?

Paraplegic in robotic suit kicks off World Cup – BBC News