
Showing posts with the label Lie detection

When the government can read your mind

We are now at a point where we can scan brain activity with fMRI, decode the patterns, and use the information to “read minds,” or predict what a person is experiencing. For example, the Gallant Lab at the University of California, Berkeley, published a paper in 2011 showing that by recording subjects’ brain activity while they watched a set of movies, they could estimate which visual features different parts of the brain encode. Then, when they show you a new movie, their model can predict what you are seeing based on your brain activity alone. This “mind-reading” has limitations: the reconstructions are primitive, worse on abstract or rare stimuli, and each subject must be scanned many times to tune the model to his or her individual brain. However, this experiment proves the principle that we can create models that use brain activity to predict dynamic conscious experience, even given the limited temporal resolution and indirect measurements of fMRI. Other labs are progressing in different domains, for example Chang and...

I know what you’re thinking…

“I know what you’re thinking….” It’s a common saying, and one that implies a capacity to predict another’s thoughts that we don’t actually possess. Yet our drive to uncover arguably the most secretive and private form of information – thoughts – has led to major advances in the science of “reading one’s mind.” Current technologies can make generally accurate predictions about robust neurologic phenomena, such as distinguishing whether an individual is looking at (or imagining) a face or a house (Haynes and Rees, 2006). However, as technologies advance, it may soon be possible to detect and identify more complex and covert thoughts – lies (Langleben et al., 2005) and even subconscious thought (Dehaene et al., 1998). With this ability, we must question the ethics surrounding the pursuit of this knowledge. Figure 3 from Haynes and Rees, 2006. The foremost concern with the use of this technology is the right to privacy. To what lengths may we go in sidestepping an individual’s basic right to privacy? If an in...

Brain reading and the right to privacy

With advances in neuroimaging, the ability to decode mental states in humans by recording brain activity has become a reality. In a review for Nature Neuroscience that is now six years old, John-Dylan Haynes and Geraint Rees detail how fMRI can be used to accurately predict visual perception. They explain that with advanced statistical pattern recognition, not only can the perception of broadly different visual inputs, such as faces versus landscapes, be differentiated, but even the perception of subtly distinct objects, such as a shoe versus a chair, can be recognized. Further, fine details can also be distinguished, including image orientation, direction of motion, and perceived color. Indeed, the orientation of masked images can even be discriminated from activity in the primary visual cortex, despite the subject being unable to consciously distinguish the orientation of the image. Decoding unconscious processing (from Haynes and Rees, 2006). While the power of fMRI and other imaging te...

Who Owns My Thoughts?

I attended the excellent Neuroscience, Law, and Ethics of Lie Detection Technologies Symposium in May, and as a consequence, I have spent the last month trying to answer questions I hadn’t even thought to ask before: Who owns the thoughts in my head? Could I be compelled to submit them? Can someone else decide that keeping my ideas to myself is a violation of the law or a threat to my country? If they force me to surrender them, do I lose ownership? So this week, I thought I would share some of the things I learned as I tried to find answers. You can actually buy this online. I am considering getting it printed on a hat. Two preliminary points: first, I want to specify what I mean when I say “compelled” to undergo a brain scan. It seems, at least it seemed to me while sitting in the audience, that Americans are pretty afraid of having someone else read their minds without their permission, or, worse, being forced to have their minds read. This extends even to a si...

Now Available! Videos of The Truth About Lies: Neuroscience, Law, and Ethics of Lie Detection Technologies Symposium

Did you miss this awesome event? Don’t despair! We have the videos for all of our neuroethics friends who couldn’t be there. You Can’t Handle the Truth! On May 25, 2012, the Neuroscience Program, the Center for Ethics Neuroethics Program, and the Scholars Program in Interdisciplinary Neuroscience Research (SPINR) combined forces to hold a symposium on the intersection of neuroscience and law pertaining to the use of fMRI and other lie detection technologies in the courtroom. Dr. Hank Greely, director of the Center for Law and Biosciences at Stanford Law School, opened the symposium with a talk entitled, "fMRI-based lie detection: The Gap Between Lab and Life." Dr. Daniel Langleben, a professor of Psychiatry at the University of Pennsylvania and a pioneer of using fMRI to detect lies, gave a talk entitled, "Brain Imaging and Deception: Research and Practice." Dr. Steven Laken, founder, president, and CEO of Cephos, a company that markets the use ...

Likin' Laken, if he ain't fakin'

Last Friday, Emory held its third annual Neuroethics symposium, focusing this year on the use of fMRI for lie detection and the acceptance of fMRI data as evidence in the courtroom. The symposium featured talks from Stanford law professor Hank Greely, University of Pennsylvania psychiatrist Daniel Langleben, and the CEO of Cephos, Dr. Steven Laken. I wasn’t surprised to learn we’d invited the first two speakers: Greely wrote the seminal articles on law and neuroscience, and Langleben pioneered fMRI studies of lying. It did surprise me to see Laken on the list. His company, Cephos, is one of the few that have successfully marketed fMRI-based lie detection to the commercial sector. I kind of thought—maybe hoped—that an audience composed largely of neuroscientists would eat him alive. Dr. Steven Laken, CEO of Cephos. I'd read about Cephos when we'd discussed fMRI and lie detection at the Neuroethics journal club, and what I'd read had made me skeptical. Laken's crede...

Neuroethics Symposium: The Truth About Lies on May 25, 2012

Neuroscience, Law, and Ethics of Lie Detection Technologies. May 25th, School of Medicine Auditorium, 1-5 pm. You Can’t Handle the Truth! The Neuroscience Program, the Center for Ethics Neuroethics Program, and the Scholars Program in Interdisciplinary Neuroscience Research (SPINR) are combining forces to hold a symposium on the intersection of neuroscience and law pertaining to the use of fMRI and other lie detection technologies in the courtroom. Drs. Hank Greely, director of the Center for Law and Biosciences at Stanford Law School; Daniel Langleben, a professor of Psychiatry at the University of Pennsylvania and a pioneer of using fMRI to detect lies; and Steven Laken, founder, president, and CEO of Cephos, a company that markets the use of fMRI for courtroom lie detection, will be providing their expertise through a series of talks. Following the talks, Emory’s Carolyn Meltzer, Chair of the Department of Radiology and Imaging Sciences, will join the speakers in answering qu...

Daubert and Frye: Neuroscience in the Courtroom?

I recently found myself thinking about how we decide whether to allow neuroscience evidence into the courtroom. The question interested me because I wanted to know how our judicial system would differentiate between real and useful evidence and what may seem no better than allowing a shaman to enter and argue a point based on "evidentiary mysticism." What I found is that there are two different legal rules governing the admission of neuroscience evidence: the Frye rule and the Daubert rule. Daubert applies in federal courts and in states that have adopted it, while the Frye rule applies in all other courts. The difference between the texts of the two standards can seem nuanced, but it can produce very different judicial outcomes. Joseph T. Walsh has a great primer on the two rules if you would like to explore them further, but the issue I would like to deal with here is simple and does not require a complete knowledge of either rule. Basically you just have to un...

Neuroimaging in the Courtroom: Video by Neuroethics Creative Team

The undergraduate Neuroethics Program Creative Team embarked on making one of their first videos, featuring Dr. Paul Root Wolpe. This short, three-minute video discusses the ethical implications of using neuroimaging as evidence in the courtroom. The video is a teaser for our upcoming event on May 25th at Emory (see below for more information). Thanks to our Neuroethics Creative Team: Giacomo Waller, Sabrina Bernstein, Lauren Ladov. The Truth About Lies: the Neuroscience, Law, and Ethics of Lie Detection Technologies. You Can’t Handle the Truth! The Neuroscience Program, the Center for Ethics Neuroethics Program, and the Scholars Program in Interdisciplinary Neuroscience Research (SPINR) are combining forces to hold a symposium on the intersection of neuroscience and law pertaining to the use of fMRI and other lie detection technologies in the courtroom. Drs. Hank Greely, director of the Center for Law and Biosciences at Stanford Law School; Daniel Langleben, a professor of Ps...

Neuroethics Journal Club documented by artist Jon Ciliberto

Jon Ciliberto, artist and all-around jack-of-all-trades, documented our last Neuroethics Journal Club on Neurotechnologies and Lie Detection via painting and drawing. Thanks, Jon! Artwork by Jon Ciliberto. Our next Neuroethics Journal Club will be on December 14, 2011. We will be discussing the AJOB Neuroscience article "Deflating the Neuroenhancement Bubble," and Emory Neuroscience graduate student David Nicholson will facilitate the session.

Lie Detection and the Jury

Much virtual and actual ink has been spilled of late about the dangers of rushing to bring brain-imaging technologies into the courtroom. Not only neuroskeptics, [1] but also preeminent neuroscientists, [2] have urged caution when it comes to the prospect of fMRI data being admitted as trial evidence. And brain-based lie detection, as one of the most alluring areas of imaging research, has in particular come in for a great deal of hand-wringing. These portents of doom are perhaps even more premature than would be the use of fMRI “polygraphy” as evidence. Worrying now about that prospect is a bit like throwing out the bathwater before the baby has even gotten into the tub. While it’s true that a few ill-informed judges have made a few ill-conceived decisions along these lines (and those mostly in India, not the United States), the vast weight of judicial precedent, procedure, and practice makes it overwhelmingly likely that courts will move too slowly, rather t...