The Observer is a student-run, daily print and online newspaper serving Notre Dame and Saint Mary's.


Awaiting the neurocentury

Edward A. Larkin | Wednesday, October 13, 2010

“All men by nature desire to know,” said Aristotle. The allure of knowledge — peering both inside ourselves and into the vast expanse beyond — has occupied some of the most famous thinkers in history. Space has commonly been designated “the final frontier” of knowledge. Ancient thinkers such as Aristotle and Archimedes made the first bold attempts to understand the night sky. More recently, intellectual giants such as Newton, Einstein and Hawking have advanced our knowledge of the universe further still. We now have a framework for its very beginning — the Big Bang.

But what if the final frontier is a lot more human than we think? George H.W. Bush declared the 1990s “The Decade of the Brain,” and neuroscience research exploded during that time. In the end, however, it may be more accurate to label the hundred years starting in 2000 “The Century of the Brain.” There is a certain poetic irony in the proposition — the final great mystery of science lying within us, the final frontier of knowledge an investigation of how we can know in the first place.

As our understanding of the human brain increases, so will our capability to use that knowledge for practical engineering purposes. If one steps back and surveys the landscape, some things we can do currently, even in these early phases of understanding, are remarkable. Neuroprosthetics allow people with disabilities to control prosthetic limbs through brain activity. Certain drugs, dubbed “neuroenhancers,” can enhance brain function itself (the subject of an excellent 2009 New Yorker article). Lie detector tests have been designed (although their implementation has been very controversial) that differentiate truth from falsehood by actually peering into the brain and analyzing activity in different areas.

Looking into the future of neuroscience can cause one to simultaneously feel great hope and great fear. Medical breakthroughs for debilitating diseases such as Alzheimer’s and schizophrenia may be on the horizon. The future of prosthetics, especially when coupled with the emergence of stem cell therapy, is exciting. However, it is not hard for the mind to veer into the dark alleys of such a future, beyond the glimmering possibility and promise. What will we do with ourselves when we truly have advanced artificial intelligence that can outperform our brains? How will we enforce rules when access to neuroenhancers is easy? How will we react when the technology exists for someone to possibly know what we are thinking? More importantly, how will these technologies be used?

Great responsibility and caution will be required from all segments of society to ensure that advances in brain science bring about the intended benefits while minimizing the vast possibilities of harm. The bureaucratic apparatus must make sure that laws governing the use of artificial intelligence and brain-based technology are clearly written and strictly enforced. Contrary to typical sentiment, the existence of a slow-moving bureaucracy and heavy regulation could actually be good in this case. Unfettered capitalism with regard to brain-based technology could be disastrous — businesses must think ethically about the social effects of their technology. Social norms will be key, as they are one of the major ways in which standards of conduct are shaped, and regulations and laws are created.

Most importantly, we as a society must establish a coherent set of principles that forms the fundamental precepts of how we approach issues of neuroethics. These should not be regarded as eternal and absolute rules — as technology evolves, we will certainly grow more comfortable with many things than we are today. Hundreds of years ago, many would have cringed to imagine a day in which we can legally own weapons as potent and destructive as guns, drive vehicles 80 miles per hour as a daily routine, or be as dependent upon electronics as we are today. But, by the same token, these precepts should carry substantial weight, as the Constitution does today.

It is important to note that not all that can go wrong always will. The development of atomic bombs has not caused humanity to destroy itself (yet). We live in a very peaceful world by the standards of human history. The troubling issues raised by a new understanding of the brain do not automatically warrant despair. As Bill Joy wrote 10 years ago at the conclusion of his haunting analysis of the future, “Why the Future Doesn’t Need Us,” “it is because of our great capacity for caring that I remain optimistic that we will confront the dangerous issues now before us.”

It is this care — a fundamental respect for humanity — that will determine the course of the next century with respect to advances in neuroscience. We have the moral obligation and the practical necessity to determine which course we take — whether we harness the power of neuroscience to cure disease and make life richer without compromising ourselves in the process, or, possibly, something much darker. If the past is any indication, we will persevere — somehow. We will adjust course as necessary, charting unknown territory in ways we never dreamed possible before. I predict that we will also stay essentially the same, essentially human. All men by nature desire to know. However, we also fundamentally seek many other things. By embracing the first, we need not and should not give up the others.


Edward A. Larkin is a senior with a double major in Biological Sciences and Classical Civilization. He can be reached at elarkin1@nd.edu.

The views expressed in this column are those of the author and not necessarily those of The Observer.