The 2016 Kavli Futures Symposium: Ethical foundations of Novel Neurotechnologies: Identity, Agency and Normality


By Sean Batir (1), Rafael Yuste (1), Sara Goering (2), and Laura Specker Sullivan (2)







Image from Kavli Futures Symposium

(1) Neurotechnology Center, Kavli Institute for Brain Science, Department of Biological Sciences, Columbia University, New York, NY 10027




(2) Department of Philosophy, and Center for Sensorimotor Neural Engineering, University of Washington, Seattle, WA 98195




Detailed biographies for each author are located at the end of this post




Few would deny the divide between the humanities and the sciences, often described as the “two cultures.” That divide must be broken down if humanistic progress is to be made in the future of transformative technologies. The 2016 Kavli Futures Symposium, convened by Dr. Rafael Yuste and Dr. Sara Goering at the Neurotechnology Center of Columbia University, addressed the divide by curating an interdisciplinary dialogue among leading neuroscientists, neural engineers, and bioethicists across three broad topics: identity and mind reading, agency and brain stimulation, and definitions of normality in the context of brain enhancement. The message of the event is clear: dialogue between neurotechnology and ethics is necessary because novel neurotechnologies are poised to generate a profound transformation in our society.






With the emergence of technology that can read the brain’s patterns at an intimate level, questions arose about how these methods could reveal the core of human identity – the mind. Jack Gallant, from UC Berkeley, reported on a neural decoder that can identify the visual imagery experienced by human subjects (1). As subjects in Gallant’s studies watched videos, the decoder learned to identify which videos they were watching from fMRI data. Gallant is convinced that “technologically, ubiquitous non-invasive brain decoding will happen. The only way that’s not going to happen is if society stops funding science and technology.”
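To make the identification step concrete, the sketch below shows one common recipe for this kind of decoding: fit a voxel-wise encoding model that predicts fMRI responses from stimulus features, then identify the video whose predicted response pattern best matches the observed one (cf. reference 1). This is a toy illustration on synthetic data, not Gallant’s actual pipeline; all dimensions and variable names are assumptions.

```python
# Toy sketch of encoding-model-based stimulus identification.
# Synthetic data throughout; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_train, n_candidates = 200, 50, 300, 10

# 1. Fit a voxel-wise linear encoding model: stimulus features -> BOLD.
X_train = rng.standard_normal((n_train, n_features))        # stimulus features
W_true = rng.standard_normal((n_features, n_voxels))        # unknown ground truth
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_voxels))
W_hat, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)   # regularized regression is typical

# 2. Identification: which of several candidate videos was being watched?
candidates = rng.standard_normal((n_candidates, n_features))
true_idx = 3
observed = candidates[true_idx] @ W_true + 0.5 * rng.standard_normal(n_voxels)

# Predict the voxel pattern each candidate should evoke; pick the candidate
# whose predicted pattern correlates best with the observed response.
predicted = candidates @ W_hat
scores = [np.corrcoef(p, observed)[0, 1] for p in predicted]
print("identified video:", int(np.argmax(scores)), "| true video:", true_idx)
```

The design choice worth noting is that identification compares *predicted* brain responses against measured ones, so a decoder of this kind can generalize to stimuli it was never trained on, provided they can be described in the same feature space.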





Other panelists at the symposium shared Gallant’s confidence in the advent of technology that can decode the content of mental activity, and discussed how motor intentions can be decoded and used to control external objects, such as a computer cursor or a robotic arm. For instance, Miguel Nicolelis from Duke University discussed a Brain Net that merged neural commands from the brains of three monkeys “into a collective effort responsible for moving a virtual arm.” As the leader of one of the laboratories at the forefront of improving brain-computer interfaces for prosthetic control, Nicolelis raised the question of whether such technologies “should be used for military applications.” Beyond specialized use, Nicolelis expressed concern that access to new technologies could be limited – who will be using brain decoders or multiple brain-machine interfaces, and why?
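The “merging” of neural commands is conceptually simple even though the neuroscience is not: each brain’s activity is decoded into a movement command, and the commands are combined (for example, averaged) to drive a single effector. The sketch below is a toy model of that idea with synthetic spike counts and made-up linear decoders; it is not the actual Brain Net algorithm.

```python
# Toy model of a shared brain-machine interface: three simulated
# "brains" each vote on a 2-D velocity, and the votes are averaged
# to move one virtual arm. Illustrative only; not Nicolelis's decoder.
import numpy as np

rng = np.random.default_rng(1)
n_brains, n_units = 3, 40

# Assumed per-animal linear decoders (firing rates -> vx, vy), as if
# previously fit by regressing neural activity on observed arm movement.
decoders = [0.1 * rng.standard_normal((n_units, 2)) for _ in range(n_brains)]

arm_pos = np.zeros(2)
for t in range(100):                         # one simulated trial
    votes = []
    for W in decoders:
        rates = rng.poisson(5.0, n_units)    # synthetic spike counts
        votes.append(rates @ W)              # per-brain velocity command
    arm_vel = np.mean(votes, axis=0)         # merge: average the commands
    arm_pos += 0.01 * arm_vel                # integrate velocity into position

print("final virtual-arm position:", arm_pos)
```

Averaging is only one possible merging rule; weighting each brain’s contribution differently, or assigning different brains different degrees of freedom, raises exactly the questions about access and control that Nicolelis flagged.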







Neural technologies that access our internal mental processes may have the potential to shift our understanding of human identity and our sense of ourselves as individual agents. In thinking about identity, philosopher Françoise Baylis of Dalhousie University discussed neuromodulation devices, invoking deep brain stimulation (DBS) treatments as an example. She stated, “DBS is not a challenge or threat to identity. I think people are conflating changes in personality with changes in personal identity. I do not think these are the same… at the end of the day, identity rests in memory, belonging, and recognition.” Baylis argued that our identities are always dynamic and relational, and that neural technologies are another way our relational identities can shift, without threatening who we are. Still, some felt that neural devices may call into question our sense of agency and responsibility for our actions. In considering the issues raised during this panel, Patricia Churchland, from UCSD, pushed back against sensational accounts of the limits that new technologies will impose on free choice and responsibility for action; a key question about new neurotechnologies, she stated, is: “What will it do for us?” There is a need to balance speculation about future possibilities with reflection on what science and technology are already doing and how they will affect society in the short term.





Since sophisticated brain stimulation technologies are already capable of eliciting complex behaviors in lower mammals, ethicists discussed an array of concerns related to agency: how can we know whether our actions and behaviors actually result from our own intentions when adaptive neural devices interact with our brains? Pim Haselager of Radboud University explored our “sense of agency” in experiments designed to separate our belief in our agency from our actual causal efficacy in acting (2). His work suggests that “the harder you work, the more agency you feel,” and he notes that maintaining a strong sense of agency while using a BCI may be linked to a relatively high level of effort on the part of the user. Haselager described the sense of agency as multifaceted – while we are learning more about its dimensions, interpersonal and psychosocial issues are still emerging from neurotechnological research. Ed Boyden from MIT, whose laboratory is developing tools for mapping and controlling the brain through optogenetics, continued the discussion of the multifaceted nature of agency by asking, “Can detailed models of an individual’s [mental] traits be reconstructed to the point in which simulation could be possible?” He suggested that as the ability to probe neural circuits expands, we will face increasingly complex questions about ourselves and our priorities. If a human-like simulation could be developed, would it possess the same internal dilemma of agency that persists in any decision-making human?
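Haselager’s decoupling of felt agency from actual control can be captured in a toy model: reported agency is generated mostly by effort and only weakly by whether the device actually executed the user’s command. The parameters below are invented for illustration and are not Haselager’s data (2).

```python
# Toy model: sense of agency tracks effort more than actual control.
# Invented parameters; illustrative only.
import random

random.seed(2)

def reported_agency(effort: float, device_obeyed: bool) -> float:
    """Simulated self-report: effort dominates, actual control barely matters."""
    return 0.8 * effort + 0.1 * device_obeyed + random.gauss(0.0, 0.05)

for effort in (0.2, 0.5, 0.9):
    for obeyed in (True, False):
        trials = [reported_agency(effort, obeyed) for _ in range(200)]
        mean = sum(trials) / len(trials)
        print(f"effort={effort:.1f}  device obeyed={obeyed!s:5}  mean agency={mean:.2f}")
```

In such a model, a sham condition in which the device ignores the user still yields high reported agency whenever the task demands effort – precisely the dissociation that experiments like Haselager’s are designed to expose.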





Leigh Hochberg, from Brown University, whose laboratory focuses on brain-computer interfaces for people with paralysis and on the clinical trials of BrainGate technology, suggested that how and why the privacy of neural data is protected depends on what we think is in the data – what does it tell us? This affects how he assesses risk and benefit in his own work: in a trial with a small number of participants, clinical data might be easily identifiable, which requires what Hochberg described as an “extraordinary consent process.” As evidence of the safety and efficacy of BCIs accumulates, the number of participants in BCI clinical trials grows, and consent requirements change, more thinking is needed about how neural data and their security are handled. Finally, Martha Farah, from the University of Pennsylvania, raised important conceptual questions about agency. She proposed that agency is ethically significant because it is necessary for freedom and autonomy, which underlie commonsense notions of moral responsibility. The concern with neurotechnology and agency is not whether an intervention is “in the head,” but whether it is quantitatively different from preceding technologies, like pharmaceuticals – does it allow for drastically more control over individuals and their agency? Farah suggested that new neural technology might allow for more fine-grained control of human thoughts and behavior, a possibility that raises economic and regulatory issues in the short term, questions of equality and opportunity in the medium term, and existential questions about humanity in the long term.





The sheer existence of mind- and brain-enhancing technologies rests on a tenuous and fundamental assumption that both ethicists and neuroscientists believe should be addressed: what exactly does it mean to be normal, and is achieving normality a reasonable aim? Blaise Agüera y Arcas from Google opened the discussion with gender as an instance of the social tendency to impose a structure of normality (e.g., binary genders) when a much wider array of gendered possibilities is available – not just on a spectrum, but across “a multidimensional vectorial space.” Neural technologies should not inadvertently be designed in ways that exacerbate existing biases, such as binary assumptions about gender, or a limited appreciation for the diversity of modes of being in the world. Rather, Agüera asserted that “those of us who create these systems” of human enhancement should “explore a deontology” with “something like science, wellbeing, equity, freedom, and progress” as initial guiding principles. Polina Anikeeva at MIT then shared her work on new devices that match the flexible material properties of the brain, explaining her motivation to make devices less invasive because an “ethical implication is that when we introduce a rigid device, then we destroy the surrounding tissue,” creating glial scars that “don’t interact the same way as neurons do.” Her work shows how even upstream material design of electrodes for neural technology may have a significant impact on the end-user’s experience of the technology.





Gregor Wolbring from the University of Calgary expanded the conversation on normality and enhancement to address “ability privilege,” the idea that “individuals who enjoy the advantages are unwilling to give up their advantages,” because for many people the judgment of abilities is intrinsic to one’s self-identity and security. He posed questions about how we determine ability expectations, and how those expectations alter the treatment of people whose bodies are not typical. Will disabled people want neurotechnologies? Perhaps, if they are understood as tools to achieve well-being rather than as ways to “fix” people. When asked about the role of neuroprosthetics in the disability world, Wolbring replied, “Tool, yes. Identity, no.” David Wasserman from NIH turned the conversation to neurodiversity and the movement to reframe some neuroatypical forms of processing as forms of valuable diversity. Such individuals may need not medical technology but better social accommodation. Thus, Wasserman argued for a shift in funding priorities, emphasizing that “more funding ought to be given to…biomedical research that would increase the flourishing” of people living with various neuroatypical conditions. Wasserman suggested that such research should be less focused on medical “fixes,” even though the public tends to be moved by research justifications focused on medical advancement. This latter point was echoed by Gallant, who noted that “while scientists do a bad job of explaining how science works, the public knows they get sick, and they go to the hospital. This is why the NIH budget is 10 times greater than NSF….medicalizing research has the good effect of attracting funding to biomedical research.” With this in mind, a clearer picture begins to emerge: research at the frontiers of neurotechnology may be forced to address normalization in a medical context for the sake of funding further research, unless funding structures change.





An open discussion held at the end of the Kavli Futures Symposium with all speakers and members of the NIH BRAIN Neuroethics Workgroup synthesized the kernels of knowledge shared throughout the event. These included a sense of urgency about funding ethical and legal work to guide the development of new technologies that have the capacity to radically transform the human experience. There is a need to ensure that multiple stakeholders – scientists, disabled people, members of the general public, and ethicists – work together to consider the ethical aspects of scientific and technological developments. These ethical aspects are clearest in the short term, in issues such as funding priorities, institutional space for ethics, translational goals, and social support for individuals using novel technologies. Long-term questions can also be raised, including the value of preserving the separateness of individuals with private mental space, the potential for combining consciousness toward shared tasks, and the significance of potential enhancements that radically alter what we can directly control with our brains.





By exploring the collective web of thought that connects the humanities and the sciences, the symposium identified several profound issues. Attending to these issues should galvanize the relevant public and private entities to integrate neurotechnological research more fully with human values.




Author Biographies




Rafael Yuste is a professor of biological sciences and neuroscience at Columbia University. Yuste is interested in understanding the function and pathology of the cerebral cortex, using calcium imaging and optogenetics to “break the code” and decipher the communication between groups of neurons. Yuste has obtained many awards for his work, including those from the New York City Mayor, the Society for Neuroscience and the National Institutes of Health’s Director. He is a member of Spain’s Royal Academies of Science and Medicine. Yuste also led the researchers who proposed the Brain Activity Map, precursor to the BRAIN Initiative, and is currently involved in helping to launch a global BRAIN project and a Neuroethical Guidelines Commission. He was born in Madrid, where he obtained his medical degree at the Universidad Autónoma. He then joined Sydney Brenner's laboratory in Cambridge, UK. He engaged in Ph.D. study with Larry Katz in Torsten Wiesel’s laboratory at Rockefeller University and was a postdoctoral student of David Tank at Bell Labs. In 2005, he became a Howard Hughes Medical Institute investigator and co-director of the Kavli Institute for Brain Science. Since 2014, he has served as director of the Neurotechnology Center at Columbia.





Sean Batir is currently a PhD candidate rotating through Dr. Rafael Yuste's laboratory. Previously, he co-founded two companies in the Bay Area and Boston that developed inconspicuous wearable devices and augmented reality. He also worked as a software developer at Oracle Corporation, creating unified archives that could deploy cloud-enabled databases in a virtual zone and web-based applications that enabled user-friendly visualization of Oracle SuperCluster system features. Academically, he earned his M.Res. in Bioengineering at Imperial College London, in Dr. Simon Schultz's Neural Coding lab, where he developed a new method for complex spike classification in the cerebellum. Prior to his Master's work, Sean studied optogenetic interrogation of the amygdala-hippocampal circuit and contributed to the automated patch clamping device developed in Dr. Ed Boyden's Synthetic Neurobiology Group at MIT. As a SENS Summer Research Fellow at the Buck Institute for Research on Aging, Sean also characterized the therapeutic effects of lithium as a potential treatment for Parkinson's disease. Sean is driven to develop transformative technologies that redefine what it means to be human. He believes that innovation occurs through interdisciplinary dialogue, both within academia and outside of it, and seeks to facilitate interactions that drive creation.





Laura Specker Sullivan is a postdoctoral fellow in neuroethics at the Center for Sensorimotor Neural Engineering, University of Washington. Her position is jointly held with the National Core for Neuroethics at the University of British Columbia. She conducts conceptual research on issues in practical ethics relating to the justification and goals of biomedical practices as well as empirical research on stakeholder attitudes and perceptions towards emerging technologies such as brain-computer interfaces. Her work often takes a cross-cultural approach, focusing on Japanese and Buddhist perspectives. She received her PhD from the Department of Philosophy at the University of Hawaii at Manoa in 2015.






Sara Goering is Associate Professor of Philosophy at the University of Washington, Seattle, and affiliated with the Program on Values and the Disability Studies Program. She leads the ethics thrust at the Center for Sensorimotor Neural Engineering.











References



1. Naselaris, T., et al. (2015). A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes. NeuroImage 105: 215-228.



2. Haselager, W.F.G. (2013). Did I do that? Brain-Computer Interfacing and the sense of agency. Minds and Machines 23(3): 405-418.



Want to cite this post?



Batir S, Yuste R, Goering S, and Specker Sullivan L. (2016). The 2016 Kavli Futures Symposium: Ethical foundations of Novel Neurotechnologies: Identity, Agency and Normality. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/11/the-2016-kavli-futures-symposium_14.htm

