Google Earth has plenty of examples of planes, helicopters — even hot air balloons — caught in flight, but this cruise missile, thought to be fired during military training exercises in the Utah mountains, might be the most unlikely capture yet. If it is, in fact, a cruise missile. Many dispute the image and say it's merely an airplane. You be the judge, but if you look closely, the "missile" appears to have wings.
See the missile on Google Maps.
Airplane Graveyard
See the airplane graveyard on Google Maps.
Iraq's Bloody Lake
This blood-red lake outside of Iraq's Sadr City garnered a fair share of macabre speculation when it was discovered in 2007. One tipster told the tech blog Boing Boing that he was "told by a friend" that slaughterhouses in Iraq sometimes dump blood in canals. No one has offered an official explanation, but it's more likely the color comes from sewage, pollution or a water treatment process.
See Iraq's bloody lake on Google Maps.
A Face in the Clay
See the Badlands Guardian on Google Maps.
Lost (and Found) at Sea
See the shipwreck on Google Maps.
Secret Swastika
See the swastika building on Google Maps.
Oprah Maze
See the Oprah maze on Google Maps.
UFO Landing Pads, Maybe?
See the "motorcycle range" on Google Maps.
Firefox Crop Circles
Maybe alien technology isn't so foreign after all. This Firefox crop circle sprouted up in a corn field in Oregon, but its origins are no mystery. In 2006, the Oregon State University Linux Users group created the giant logo — spanning more than 45,000 square feet — to celebrate the Web browser's 50 millionth download.
See the Firefox logo on Google Maps.
Atlantis Found?
Man, machine and in between
Nature 457, 1080–1081 (26 February 2009) | doi:10.1038/4571080a; published online 25 February 2009
Jens Clausen
- Jens Clausen is at the Institute of Ethics and History in Medicine, University of Tübingen, Tübingen, Germany.
Email: jens.clausen@uni-tuebingen.de
Abstract
Brain-implantable devices have a promising future. Key safety issues must be resolved, but the ethics of this new technology present few totally new challenges, says Jens Clausen.
We are so surrounded by gadgetry that it is sometimes hard to tell where devices end and people begin. From computers and scanners to multifarious mobile devices, an increasing number of humans spend much of their conscious lives interacting with the world through electronics, the only barrier between brain and machine being the senses — sight, sound and touch — through which humans and devices interface. But remove those senses from the equation, and electronic devices can become our eyes and ears and even our arms and legs, taking in the world around us and interacting with it through man-made software and hardware.
This is no future prediction; it is already happening. Brain–machine interfaces are clinically well established in restoring hearing perception through cochlear implants, for example. And patients with end-stage Parkinson's disease can be treated with deep brain stimulation (DBS) (see 'Human brain–machine applications'). Worldwide, more than 30,000 implants have reportedly been made to control the severe motor symptoms of this disease. Current experiments on neural prosthetics point to the enormous future potential of such devices, whether as retinal or brainstem implants for the blind or as brain-recording devices for controlling prostheses [1].
Non-invasive brain–machine interfaces based on electroencephalogram recordings have restored communication skills of patients 'locked in' by paralysis [2]. Animal research and some human studies [3] suggest that full control of artificial limbs in real time could further offer the paralysed an opportunity to grasp or even to stand and walk on brain-controlled, artificial legs, albeit likely through invasive means, with electrodes implanted directly in the brain.
Future advances in neurosciences together with miniaturization of microelectronic devices will make possible more widespread application of brain–machine interfaces. Melding brain and machine makes the latter an integral part of the individual. This could be seen to challenge our notions of personhood and moral agency. And the question will certainly loom that if functions can be restored for those in need, is it right to use these technologies to enhance the abilities of healthy individuals? It is essential that devices are safe to use and pose few risks to the individual. But the ethical problems that these technologies pose are not vastly different from those presented by existing therapies such as antidepressants. Although the technologies and situations that brain–machine interfacing devices present might seem new and unfamiliar, most of the ethical questions raised pose few new challenges.
Welcome to the machine
In brain-controlled prosthetic devices, signals from the brain are decoded by a computer that sits in the device. These signals are then used to predict what a user intends to do. Invariably, predictions will sometimes fail and this could lead to dangerous, or at the very least embarrassing, situations. Who is responsible for involuntary acts? Is it the fault of the computer or the user? Will a user need some kind of driver's licence and obligatory insurance to operate a prosthesis?
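The decoding-and-prediction step described above can be sketched in a few lines. This is a hypothetical illustration, not the design of any actual prosthesis: it assumes a simple linear read-out of neural firing rates (a common population-vector-style approach in the research literature) and shows one way a device might withhold action when a predicted intent is ambiguous, which is exactly the failure mode the questions above turn on. All names, weights, and thresholds here are illustrative.

```python
import numpy as np

def decode_intent(firing_rates, weights, threshold=0.5):
    """Return a unit 2-D movement direction, or None if confidence is too low.

    firing_rates: (n_neurons,) spike counts in the current time bin
    weights:      (n_neurons, 2) decoding weights fitted during training
    threshold:    minimum command magnitude treated as deliberate intent
    """
    command = firing_rates @ weights       # linear read-out of intended movement
    confidence = np.linalg.norm(command)   # crude confidence proxy: signal strength
    if confidence < threshold:
        return None                        # withhold action: ambiguous signal
    return command / confidence            # unit direction vector

# Illustrative weights: two neurons tuned to the x-axis, one to the y-axis.
W = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

strong = decode_intent(np.array([2.0, 2.0, 0.0]), W)  # clear, executed
weak = decode_intent(np.array([0.1, 0.0, 0.1]), W)    # ambiguous, rejected
```

The confidence gate is one design choice among many; a real system must still decide who answers for the residual misclassifications that slip through it.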
Fortunately, there are precedents for dealing with liability when biology and technology fail to work. Increasing knowledge of human genetics, for example, led to attempts to reject criminal responsibility that were based on the inappropriate belief that genes predetermine actions. These attempts failed, and neuroscientific pursuits seem similarly unlikely to overturn views on human free will and responsibility [4]. Moreover, humans are often in control of dangerous and unpredictable tools such as cars and guns. Brain–machine interfaces represent a highly sophisticated case of tool use, but they are still just that. In the eyes of the law, responsibility should not be much harder to disentangle.
But what if machines change the brain? Evidence from early brain-stimulation experiments done half a century ago suggests that sending a current into the brain may cause shifts in personality and alterations in behaviour. Many patients with Parkinson's disease who have motor complications that are no longer manageable through medication report significant benefits from DBS. Nevertheless, compared with the best drug therapy, DBS for Parkinson's disease has shown a greater incidence of serious adverse effects such as nervous system and psychiatric disorders [5] and a higher suicide rate [6]. Case studies revealed hypomania and personality changes of which the patients were unaware, and which disrupted family relationships before the stimulation parameters were readjusted [7].
Such examples illustrate the possible dramatic side effects of DBS, but subtler effects are also possible. Even without stimulation, mere recording devices such as brain-controlled motor prostheses may alter the patient's personality. Patients will need to be trained in generating the appropriate neural signals to direct the prosthetic limb. Doing so might have slight effects on mood or memory function or impair speech control.
Nevertheless, this does not illustrate a new ethical problem. Side effects are common in most medical interventions, including treatment with psychoactive drugs. In 2004, for example, the US Food and Drug Administration told drug manufacturers to print warnings on certain antidepressants about the short-term increased risk of suicide in adolescents using them, and required increased monitoring of young people as they started medication. In the case of neuroprostheses, such potential safety issues should be identified and dealt with as soon as possible. The classic approach of biomedical ethics is to weigh the benefits for the patient against the risk of the intervention and to respect the patient's autonomous decisions [8]. This should also hold for the proposed expansion of DBS to treat patients with psychiatric disorders [9].
Bench, bedside and brain
The availability of such technologies has already begun to cause friction. For example, many in the deaf community have rejected cochlear implants. Such individuals do not regard deafness as a disability that needs to be corrected, instead holding that it is a part of their life and their cultural identity. To them, cochlear implants are regarded as an enhancement beyond normal functioning.
What is enhancement and what is treatment depends on defining normality and disease, and this is notoriously difficult. For example, Christopher Boorse, a philosopher at the University of Delaware in Newark, defines disease as a statistical deviation from "species-typical functioning" [10]. As deafness is measurably different from the norm, it is thus considered a disease. The definition is influential and has been used as a criterion for the allocation of medical resources [11]. From this perspective, the intended medical application of cochlear implants seems ethically unproblematic. Nevertheless, Anita Silvers, a philosopher at San Francisco State University in California and a disability scholar and activist, has described such treatments as a "tyranny of the normal" [12], designed to adjust people who are deaf to a world designed by the hearing, ultimately implying the inferiority of deafness.
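Boorse's statistical criterion can be made concrete with a toy calculation. The trait (hearing thresholds in decibels), the sample population, and the two-standard-deviation cut-off below are all illustrative assumptions, not part of Boorse's account; the point is only that "deviation from species-typical functioning" is a measurable, population-relative test, which is precisely why critics object that it encodes the majority's norm.

```python
import statistics

def is_statistically_atypical(value, population, n_sd=2.0):
    """True if value lies more than n_sd standard deviations from the mean."""
    mean = statistics.fmean(population)
    sd = statistics.stdev(population)
    return abs(value - mean) > n_sd * sd

# Hypothetical hearing thresholds (dB) for a "species-typical" sample.
population = [5, 8, 10, 12, 10, 9, 11, 10, 7, 13]

typical = is_statistically_atypical(11, population)   # within the norm
atypical = is_statistically_atypical(95, population)  # far outside it
```

On this test a profound hearing loss counts as disease purely because it is rare relative to the sample, which is exactly the move Silvers's "tyranny of the normal" objection targets.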
Although many have expressed excitement at the expanded development and testing of brain–machine interface devices to enhance otherwise deficient abilities, Silvers suspects that prostheses could be used for a "policy of normalizing". We should take these concerns seriously, but they should not prevent further research on brain–machine interfaces.
Brain technologies should be presented as one option, but not the only solution, for paralysis or deafness. Still, whether brain-technological applications are a proper option remains dependent on technological developments and on addressing important safety issues.
One issue that is perhaps more pressing is how to ensure that risks are minimized during research. Animal experimentation will probably not address the full extent of psychological and neurological effects that implantable brain–machine interfaces could have. Research on human subjects will be needed, but testing neuronal motor prostheses in healthy people is ethically unjustifiable because of the risk of bleeding, swelling, inflammation and other, unknown, long-term effects.
People with paralysis, who might benefit most from this research, are also not the most appropriate research subjects. Because of the very limited medical possibilities and often severe disabilities, such individuals may be vulnerable to taking on undue risk. Most suitable for research into brain–machine interface devices are patients who already have an electrode implanted for other reasons, as is sometimes the case in presurgical diagnosis for epilepsy. Because they face the lowest additional risk of the research setting and will not rest their decision on false hopes, such patients should be the first to be considered for research [13].
Brain–machine interfaces promise therapeutic benefit and should be pursued. Yes, the technologies pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy. Ethics is well prepared to deal with the questions in parallel to and in cooperation with the neuroscientific research.
References
1. Velliste, M., Perel, S., Spalding, M. C., Whitford, A. S. & Schwartz, A. B. Nature 453, 1098–1101 (2008).
2. Birbaumer, N. et al. Nature 398, 297–298 (1999).
3. Hochberg, L. R. et al. Nature 442, 164–171 (2006).
4. Greely, H. T. Minn. J. Law Sci. Technol. 7, 599–637 (2006).
5. Weaver, F. M. et al. J. Am. Med. Assoc. 301, 63–73 (2009).
6. Appleby, B. S., Duggan, P. S., Regenberg, A. & Rabins, P. V. Mov. Disord. 22, 1722–1728 (2007).
7. Mandat, T. S., Hurwitz, T. & Honey, C. R. Acta Neurochir. (Wien) 148, 895–897 (2006).
8. Beauchamp, T. L. & Childress, J. F. Principles of Biomedical Ethics (Oxford Univ. Press, 2009).
9. Synofzik, M. & Schlaepfer, T. E. Biotechnol. J. 3, 1511–1520 (2008).
10. Boorse, C. Phil. Sci. 44, 542–573 (1977).
11. Daniels, N. Just Health Care (Cambridge Univ. Press, 1985).
12. Silvers, A. in Enhancing Human Traits: Ethical and Social Implications (ed. Parens, E.) 95–123 (Georgetown Univ. Press, 1998).
13. Clausen, J. Biotechnol. J. 3, 1493–1501 (2008).
MAD Architects' City of the Future
Tue, Feb 24, 2009
All images via Dezeen
What will the city of the future look like? If MAD architects have anything to say about it, urban centres will no longer resemble the concrete jungles of the industrial revolution. MAD and their design friends have come together to create a conceptual model for the Huaxi city centre of Guiyang, China, one that makes nature a central consideration while building with the most modern technologies of the 21st century.
Says MAD:
"The city is no longer determined by the leftover logic of the industrial revolution (speed, profit, efficiency) but instead follows the 'fragile rules' of nature."
According to Dezeen architecture and design magazine, the urbanization of Chinese cities over the past 15 years has been marked by "high-density, high-speed and low-quality duplication" that renders urban spaces "meaningless, crowded and soulless." The Huaxi project aims to reverse this trend, creating a new reality for urban centres that encourages a more seamless connection between humans and the surrounding natural world. With 200 to 400 new Chinese cities being built in the next 20 years, this sounds like a great idea!
Working with Shanghai Tongji Urban Planning and Design Institute, Studio 6, MAD developed a masterplan for Huaxi city. They invited ten other young international architectural firms to Huaxi for a three-day workshop to learn about the area's natural and cultural features, then charged them with creating their own design for their assigned part of the plan.
Design by MAD (China):
Mountain peaks serve as the backdrop to a structure that resembles rolling foothills. Windows and terraces pepper the building throughout, allowing for beautiful views of the surrounding country. Design by Serie (UK/India):
The building leans to one side to accommodate its sloping hillside site. Design by BIG (Denmark):
Design by Dieguez Fridman (Argentina):
Design by Mass Studies (Korea):
Drawings show the development of the final concept, a multi-level building with circular courtyards and terraces:
Design by HouLiang Architecture (China):
Close-up:
Design by Atelier Manferdini (USA):
This one resembles a growing crystalline structure, perhaps not unlike Superman's Fortress of Solitude. Design by Sou Fujimoto Architects (Japan):
Close-up:
Design by Rojkind Arquitectos (Mexico):
Design by JDS (Denmark/Belgium):
Design by EMERGENT/Tom Wiscombe (USA):
For more pictures and building plans, check out the article over at Dezeen.