
Brain Fingerprinting: A sit down with Dr. Larry Farwell

Last Tuesday, EdRabbit covered the VLAB “Business of the Brain” event for Cerebralhack. We also lined up a fantastic one-on-one Q&A with Dr. Larry Farwell, who invented Brain Fingerprinting and built the first EEG-based BCI device. The idea of Brain Fingerprinting is also briefly mentioned in our interview with Dr. Michael Schuette.

Ed: You built the first EEG-based BCI device in 1984. What was your motivation? What drove you toward that field?

Larry: I was minding my own business in my brain research lab *laughs* measuring brain responses. I was getting brain responses without any overt physical indication, and I knew of a kid in Illinois (where Dr. Farwell attended grad school; his undergrad was at Harvard) who was paralyzed from the eyeballs down. He couldn’t communicate at all, but we suspected he was still awake in there; it’s called locked-in syndrome. If you damage the brain in a particular way you can wipe out the motor system and still keep everything else intact. We suspected that was the case with him.

We said, “Hey! We could set up a system whereby he could communicate through a computer and a speech synthesizer.” So I wrote the program, set up the system as a brain-computer interface, and it worked! That was what got me involved first. Then I thought, “Well, if we can communicate from the brain to the computer, what else can we use this for?” We can find out if someone was at a murder scene. We could find out if someone was KGB; at that time it was a KGB agent, now they’re more concerned about whether someone is a bomb maker or a terrorist. We can tell what information is stored in the brain.

So I developed that system, tried it out on some undergraduates, and it worked. I often thought, if anyone ever discovers that undergrads’ brains are different from everyone else’s, there are a lot of scientists who are going to be in deep trouble! *laughs* Because we get all of our results from them! So it worked. I went to a scientific conference where the FBI and the CIA were present, they became interested in the technology, and I acquired a million-dollar contract from the CIA to further develop it and took it from there.

Ed: Can you give a quick description of how it works, your elevator-pitch type of description?

Larry: As an example, let’s say we want to detect who the FBI agents are in a group. We flash information on a screen, words or pictures, that only an FBI agent would know or recognize, mixed in with other things.

When the FBI agents see the FBI-related material, they have an “aha!” experience. They say “aha!” We pick up the pattern of neural firing that propagates to the scalp as electrical changes, the EEG (electroencephalogram), and we pick up that pattern. We say, “OK, that guy just had an ‘aha!’ experience.” From that we can determine who is and who isn’t an FBI agent. Similarly we can say “that guy is an Al-Qaeda-trained terrorist,” “that guy is a bomb maker,” “that guy was at a murder scene.” Or we can say, “That guy was NOT at a murder scene; he doesn’t have the information about the scene stored in his head.” The technology works both ways.
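
To make that concrete, here is a minimal sketch of the kind of signal averaging involved: compare the EEG response to “probe” stimuli (things only a knowing subject recognizes) against “irrelevant” stimuli in a P300-like time window. The sampling rate, window, and synthetic data below are assumptions for illustration, not Dr. Farwell’s actual implementation.

```python
# Illustrative sketch only (synthetic data, assumed parameters),
# not the actual Brain Fingerprinting pipeline.
import numpy as np

FS = 250                    # sampling rate in Hz (assumed)
WINDOW = (0.3, 0.6)         # P300-like window, 300-600 ms after the stimulus

def mean_window_amplitude(epochs: np.ndarray) -> float:
    """Average epochs (trials x samples) and return the mean amplitude
    inside the P300-like window."""
    erp = epochs.mean(axis=0)                    # average over trials
    start, stop = (int(t * FS) for t in WINDOW)  # window bounds in samples
    return float(erp[start:stop].mean())

rng = np.random.default_rng(0)
n_samples = FS                                   # 1-second epochs
t = np.arange(n_samples) / FS

# Synthetic data: probe epochs carry a positive deflection around 400 ms,
# irrelevant epochs are noise only.
p300 = 5e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
probe_epochs = rng.normal(0, 2e-6, (40, n_samples)) + p300
irrelevant_epochs = rng.normal(0, 2e-6, (120, n_samples))

probe_amp = mean_window_amplitude(probe_epochs)
irrelevant_amp = mean_window_amplitude(irrelevant_epochs)
print(f"probe: {probe_amp:.2e} V, irrelevant: {irrelevant_amp:.2e} V")
print("aha-like response detected" if probe_amp > irrelevant_amp + 1e-6
      else "no clear response")
```

A real system would use many electrodes, artifact rejection, and proper statistics; the point here is just that the “aha!” shows up as a larger averaged response to the probe items.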

Ed: And this type of reaction isn’t something people can suppress? It’s the brain itself doing the recognition. A CIA agent, for example: could you train somebody not to fall prey to this sort of interrogation if the technology fell into the wrong hands?

Larry: No, you really can’t. Here’s why. Say we are in this room; we basically know what the scene is here. The door opens and an elephant comes running into the room. Assuming we can see and are looking, the first thing we are going to say is “Aha! There’s an elephant in the room now.” The first thing that will ALWAYS happen is that we recognize it as being an elephant. THEN we decide what we are going to do about it. “Maybe I’ll feed him a peanut,” “maybe I’ll exit out the back door.” That depends on what we want to do with the elephant, but the FIRST thing that happens is that we notice it. We pick up that brain response, that “aha!”, when they notice the relevant stimulus; it doesn’t matter what they do afterward, because they can only do it after they have noticed what’s on the screen. I invented the system, and I can’t beat it. People in my own lab can’t beat it. The people who wrote the programs and developed the hardware and software, who know exactly how it works, can’t beat it. It’s just not a matter of choice whether to recognize the elephant or not.

We don’t detect surprise, we detect relevancy. We have to set up the preemptive interviews very carefully beforehand to avoid false positives on the subject. For example, before we run a test we verify that the subject was not at the murder scene as a witness. We verify they do not know what the murder weapon is, or what it looked like. “One of these is the murder weapon; you’re going to see an ax, a rifle, a shotgun, a knife, and a rope. None of these things means anything significant to you?” “No.” “You don’t know which one is the murder weapon?” “No.” This is the context we are taking that “aha!” from. [But, with any positive result,] this does not absolutely prove he committed the murder. What it does prove is that there are details about it that he claimed not to know and had no legitimate reason to know. A DNA scientist will come into a courtroom and say this sample, which is reported to come from the murder scene, matches this sample, which is reported to come from the suspect: as a scientist, I can say these two samples match. As a scientist, I can say that either the record stored in the brain matches the crime or it doesn’t; the rest is up to a judge and jury to decide.
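
The decision at the end of a test is binary: information present or information absent. As a toy illustration of that logic (and not the actual statistics used in Brain Fingerprinting), one simple approach is to ask whether the averaged probe response looks more like the response to known targets than like the response to irrelevant foils; everything below, including the data, is an assumption for illustration.

```python
# Toy "information present / information absent" decision; illustrative only,
# not the actual Brain Fingerprinting statistics.
import numpy as np

def classify(probe_erp: np.ndarray,
             target_erp: np.ndarray,
             irrelevant_erp: np.ndarray) -> str:
    """Does the probe response correlate more with the target response
    (information present) or with the irrelevant response (absent)?"""
    r_target = np.corrcoef(probe_erp, target_erp)[0, 1]
    r_irrelevant = np.corrcoef(probe_erp, irrelevant_erp)[0, 1]
    return ("information present" if r_target > r_irrelevant
            else "information absent")

# Tiny synthetic example: the probe waveform shares the target's bump,
# as it would for a subject who recognizes the murder weapon.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 250)
bump = np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
target_erp = bump + rng.normal(0, 0.1, t.size)
probe_erp = bump + rng.normal(0, 0.1, t.size)
irrelevant_erp = rng.normal(0, 0.1, t.size)
print(classify(probe_erp, target_erp, irrelevant_erp))  # -> information present
```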

Ed: One more question before we let you go. Where do you see BCI in the next 5 years?

Larry: In forensics, only about 1 or 2 percent of cases are won with DNA and fingerprints; the brain is always there. Whenever someone is accused of a crime, or whenever someone is falsely accused of a crime, they can say, “Hey, wait, don’t tell me anything about it. Give me a Brain Fingerprinting test, and I’ll show you that I don’t know the details.” So I think it will be widely used in forensics within 5 years.

With respect to detecting how well the brain is functioning, Alzheimer’s is a big issue there. Will it be used within 5 years? I think it will take a little longer than that. With medical things you have to get FDA approval, and medical development requirements take a while. I think we will certainly see it used first in pharmaceuticals, in the evaluation of drugs; a research application rather than a diagnostic application, which is easier to get approval for.

With respect to advertising, the bar is way lower. We can already pinpoint when people take notice and pay attention, and how much they retain; we don’t need a more precise answer.

With respect to gaming, I think it’s going to be huge in 5 years, because people’s brains respond. It doesn’t have to be entirely accurate, because it’s not as if someone’s life depends on it. We would be able to make cheap, easily available, easy-to-use systems that provide information. So when something comes up on the gaming screen, either an event created by the game or an event created by a competitor, how you respond mentally to that, how your brain responds to that event, will be part of your score, your experience.

That’s something that’s really fun.



Steven

Steven Caputo is a 37-year-old technology professional with 16 years of experience, presently working out of Chicago, IL. In his free time he is an artist, a musician, a geek, a gamer, a philosophy/neuroscience junkie, and a nano coral-reef aquarist. His opinions are his own, especially the weird stuff!

