Mr South Whidbey, Globalization, and the Worship of Profit by Stephan A. Schwartz
The SchwartzReport tracks emerging trends that will affect the world, particularly the United States. For EXPLORE it focuses on matters of health in the broadest sense of that term, including medical issues, changes in the biosphere, technology, and policy considerations, all of which will shape our culture and our lives.
By the time we get there it is already a raucous party. The elderly Freeland Hall on Whidbey Island, off the coast of Seattle, with its walls and ceiling made of short strips of ancient pine boards, vibrates with the noise. Two hundred fifty people have packed themselves in tonight to eat a simple box dinner on folding tables and watch six men make fools of themselves. One of them will be voted Mr South Whidbey. The voting is done by buying votes, in the form of business card–sized bits of paper, for $1 a card. There is much encouragement to buy as many cards as possible.
As I sit there eating my chicken salad, men in odd outfits—one wears a kind of apron upon which is airbrushed a nude female form with a fig leaf, another is got up as Abe Lincoln—circulate with cardboard beer six-pack carriers. Where the beer would be there are paper cups with the names of the contestants, who are also wearing improbable outfits and who range in age from one man in his early 30s wearing a kilt and sporting a chain saw—sort of like one of the Village People seen by someone on a bad drug trip—to an octogenarian dressed as a 1920s Parisian boulevardier. The evening is a parody of any beauty pageant. There are dumb questions for the contestant interview, a runway promenade, and a talent segment. It is all uniformly awful, and so self-consciously so that it calls forth from the audience cheers, hoots, and laughter. As the evening progresses, we vote by placing the little cards in the cup labeled with the name of our favorite.
I have just moved to Whidbey and am here at the party with my partner Ronlyn, and neither of us is very clear why; it is largely at the urging of my physician friend, Rick Ingrasci, the man wearing the nude apron. We know no one at our table, and to make conversation I introduce myself to a modest, plainly dressed, middle-aged woman across from me, asking her what it's all about. She explains the purpose of the evening is to raise money for Friends of Friends, a local philanthropy. When I ask her what Friends of Friends is, she tells me it is a community-supported fund offering financial help to our fellow south islanders with medically related bills they cannot afford to pay. The man next to her introduces himself and tells me that it all started in 1997 “and so far has helped about a thousand people with $400,000 in medical expenses.” The woman to my right joins the conversation by telling me she would probably be dead had it not been for Friends of Friends, since “I have no health insurance, and could not have obtained the treatments I desperately needed if they hadn't helped me.” The woman across from me nods in agreement and says, “I would have lost all my teeth except for Friends of Friends.”
As the octogenarian carries the day, an obvious favorite—how can you not vote for an 80-year-old man wearing a beret and smoking jacket willing to sing old Maurice Chevalier songs in public—I am moved by the community spirit the Mr South Whidbey Pageant represents and heartened by yet another example of the interlocking safety network my new community has created to help itself. And at the same time I am outraged that any of this should be necessary.
Whenever I confront the illness profit industry's impact on American society, I always imagine I am speaking with my sister, Susan, who has lived most of her life in Europe and is now in France. As I sit there while several men in trench coats and large black witches' hats, blowing kazoos, march the hall's length, this particular imaginary conversation plays out with me explaining that little communities such as the villages of South Whidbey have to band together so that the weak, the poor, and the afflicted amongst them can have the basic healthcare that every other industrial nation in the world provides as a matter of course. I imagine my sister's face as she hears about the pleasant woman sitting across from me who almost became one of the 122 people who die daily in America—more than die monthly in the active war zones of Iraq and Afghanistan—because they have no health insurance.
Viewed from another perspective, I realize, Mr South Whidbey represents a powerful trend shaping our future—and not a happy one. It is the response of a caring small community to a massive failure on the part of the larger society. If we had national healthcare, the nice woman sitting to my right eating the last of her salad would never have faced the abyss of involuntary death because she lacked the insurance to open the door to a continued future. She wouldn't have needed $10,000 to pay for a procedure that in most countries would be considered a right. What must it be like to wake up each morning, lying in your bed, knowing that without some drug or medical procedure you are doomed?
What if, instead of giving $10,000 to the woman because she didn't have health insurance, this collective effort, and thousands of other efforts like it in communities around the country, were focused on something else? How about local preparation for climate change, or assisting local businesses to navigate the "Green Transition" occurring as the world moves out of the "Age of Petroleum"? Not that such efforts do not exist; of course they do. But programs like Mr South Whidbey do not support preparation for the future; they assuage an immediate, unnecessary, present-day failure. Of necessity they compete for time, energy, and money, of which local communities like the villages of South Whidbey have only so much. Suppose Mr South Whidbey gave that $10,000 as a loan to local government to build charging stations for the electric cars local people drive, so they could recharge while shopping in the village? If a modest fee were collected, like a parking meter's, and the city paid the loan back over time, the whole business might even be self-financing. Everybody would benefit, if only with cleaner air.
One of America's great defining characteristics is this capacity for coming together in volunteer local effort. “About 61.8 million people, or 26.4% of the population, volunteered through or for an organization at least once between September 2007 and September 2008,” the Bureau of Labor Statistics of the U.S. Department of Labor reported.1 If you have ever lived in another country you realize how rare this is.
Another of our strengths is the deep commitment of the American people to philanthropy. We commit 1.7% of our gross national product to this purpose—nearly $300 billion a year.2 The next most philanthropic nation is Great Britain at 0.73%—less than half as much—and it falls off precipitously from there.2
And the generosity of spirit that is such an American hallmark can be found at every level of the culture. About 65% of households with incomes less than $100,000 give to charity.1 Even the poor give. Their share of the nearly $300 billion offered up is just as green.3 In 2007, as individuals and families, we spent nearly $25 billion a month serving what we felt was good and life-affirming as we understood it.
This squandering of volunteer action and philanthropic purpose, compelled because the failure to provide universal healthcare requires local programs such as Friends of Friends, is a consequence of the "Illness Profit" model. Along with people working sick, or having simple, inexpensive medical problems become complex and expensive because they were not treated, these social failures constitute a kind of friction, or a tax, one that in a global world makes us less competitive and less prepared for climate change. I suddenly have images from Katrina in my mind. Once again, as with New Orleans and FEMA, are we going to ignore the warnings and be less prepared than we could be if other considerations were not draining off our time, passion, and resources?
Why is this happening? I think this is one of the great questions that our public conversation should focus on. Barbara Tuchman's 1984 best seller, The March of Folly,4 Jared Diamond's 2005 book, Collapse,5 and Naomi Klein's 2007 book, The Shock Doctrine,6 all spell out how powerful societies can, and have, destroyed themselves. Almost always it results from an obsessive commitment to something that proves again and again that it is destructive, yet that society continues to focus on it in spite of the evidence. The Easter Islanders kept cutting down their trees, even as they were punished by nature for the destruction of their ecosystem. I always wonder what the last man—cutting down the last tree—thought.
In our case, the culprit seems to be that profit has become our only bedrock value. Because this is our default consideration, we have chosen to develop a model of healthcare that reflects that value. As a result we have millions, literally millions, of people who cannot make their full contribution to our society's success, either because they cannot contribute at all or are contributing in varying degrees of diminished capacity. But this is just one manifestation of our obsession. Here are a few others.
When profit is the most important consideration, then social programs like prenatal care, having no immediate payoff, get severely cut even though we know that if a mother does not get a proper diet between the 19th and 23rd week of pregnancy, her fetus' brain will not develop properly and her child will become an unacknowledged handicapped person for their entire life, in a way that can never be repaired. When short-term profit is the only consideration, then long-term education, particularly of the poor, never really becomes a social priority. And when privatized profit-making prisons become the rice bowl for communities left destitute because of earlier outsourcing, it becomes important to keep a steady flow of the poor into those institutions as the justification for their existence and the mechanism for tapping the public till.
People are beginning to gather up their coats and I am left with this: in a world that is globalizing, the future and national security of a nation are directly correlated with its ability to field as many brains—literally neurons—and as many fully committed hearts working on behalf of societal success as it can. When profit is the only priority governing the social infrastructure, our recent history shows it sabotages this success.
DALY CITY -- People have been toking up in the Cow Palace parking lot for more than 50 years. This was the first time it was legal. The International Cannabis and Hemp Expo, the first trade show in the United States to allow on-site pot smoking, attracted an estimated 15,000 enthusiasts to Daly City over the weekend. They talked bud, sold products ranging from a $500 water bong to a $19,500 mobile grow house, and discussed how efforts to legalize marijuana would impact their livelihoods.
"We're exercising our rights as patients to peacefully gather," said Bob Katzman, chief operating officer of the expo, as he stood near the designated puffing area. "We're here to talk about changing some of the existing laws, but we're not here to break the law." Katzman said it took organizers four years to negotiate a permit with a venue that would allow marijuana consumption. It wasn't possible, he said, until a "massive change in the political climate."
That climate is set to be tested in November, when an initiative that would legalize marijuana is to be decided by California voters. Now, marijuana is available only to those with a medicinal use card. Such cards were easy to attain at the exposition. For $99 - cash only - attendees such as Shawna Spencer of San Jose received a temporary "recommendation" from doctors that allowed her to smoke at the event. Spencer, who said she suffers from bipolar disorder, said she had waited for more than an hour. "It's worth the wait because I need it," Spencer said.
Dr. Daniel Susott said he expected to sign off on 1,600 people by the end of the weekend. He said a portion of the fees would go to charity. "We're making history today," he said as his visitors complained of chronic pain, depression and insomnia, among other ailments. "We're operating within the guidelines of Prop. 215 and helping people get the medical marijuana they need." If marijuana becomes legal, Susott expects that his patients will self-prescribe. "People will start growing their own medicine in their homes," he said. "And the big pharma companies aren't going to like it."
For those concerned with the conspicuous equipment needed to grow plants inside the home, Tim Ellis of Orange County had the solution: an 18-foot trailer that can yield up to 6 pounds of pot every two months. The Grow n' Mobile starts at $19,500. Ellis, a father of two, said he had the family grower in mind - a person who desires to cultivate outside the house, but in a secure location. Showing off every detail of his invention, Ellis said he rigged the trailer's hitch so thieves would need a blowtorch to hook the trailer to their own truck. Fumes are routed through a charcoal filter. And the roof has an infrared shield to thwart weed-hunting helicopters. "Can't steal it, can't smell it, can't find it," Ellis said, offering his sales pitch. "Built by a grower for a grower. Grow mobile!"
The event hosted a panel discussion Saturday on how legalization would impact large California growers. A contingent from Humboldt County argued against the ballot initiative, complaining it could devastate a key local industry. "Radical" Russ Belville, the outreach coordinator for the National Organization for the Reform of Marijuana Laws, said that if California's initiative passed, he expected home-growers to enter the market and drive prices down. But he was unsympathetic to the group from Humboldt County. "To that end, I would say, 'Tough,' " Belville said. "We should have to put people in prison so you can continue to make a living?"
Nonlocal Linkage and the Social Dimension by Stephan A. Schwartz
Do you sense the schism occurring in the United States? Not the red and blue of politics, although that comes into it. Something deeper, a shift that is producing two very different reactions. Can you feel the ground moving? The zeitgeist of one population is grounded in fear, resentment, anger, and a sense of loss. It is theologically conservative, politically rigid, and exclusionist. The other population holds a sober realization that great change is coming, but also the sense that it offers at least the putative opportunity to create a more stable life-affirming culture. It is theologically and politically accommodating, and inclusionist.
We all have a vested interest in this schism and the struggle it has produced, not only because through our choices we are its source, but because we will live with the consequences of the decisions made over the next few years. What is particularly concerning is the fear-driven population's obsession with willful ignorance; it cannot be denied that this is an essential attribute of its world view, for only by denying a fact-based world can the perspective be maintained. Most of human history can be seen as a striving for deeper understanding. Science is the highest manifestation of this impulse, perhaps because it is the most objective. Yet now, in the 21st century, we see its antipode emerge—a deep denial of science and the fact-based view of the world. Science, from this perspective, is just another political position, competing in the marketplace of ideas as a political theory.
In 2005, the Pew Research Center for the People & the Press and the Pew Forum on Religion & Public Life carried out a poll involving 2,000 adults, which gives us some real data on what willful ignorance means. They reported 42% of the public believed that “humans and other living things had existed in their present form since the beginning of time,” and that this rose to 70% amongst white evangelical Protestants and decreased to 32% in mainline Protestant churches, and—surprising to some, perhaps—to 31% amongst white Catholics.1 By 2006, the creationist position was affirmed by 55% of Americans.1 Think for a moment about what this means: more than half of America has discarded much of the hard-won knowledge of the past 500 years—essentially, the age of modern science and medicine. Astrophysics. Gone. Astronomy. Gone. Paleontology. Gone. Geology. Gone. Biology. Mostly gone. Genetics. Gone. The general laws of physics, such as the constancy of the speed of light? Found to be defective. It is impossible to believe that the Earth is 10,000 years old, that God manufactured it in six days, and that dinosaurs and humans once coinhabited the planet, and still accept that any of those disciplines has anything valid to say. What many would think of as the crown jewels of the human intellect—part of what makes it possible to be optimistic about humanity—are of little or no interest. Because, from the view on this side of the schism, these scientific disciplines cannot be valid. The Creation Museum in Petersburg, Kentucky, with its dioramas of dinosaurs and people happily coexisting, is the creationist statement of reality.
And as the consort of this self-imposed ignorance, there is a strong premillennial dispensationalist apocalyptic element. End of the world movements in American society are nothing new, but this is the first time in my lifetime significant numbers of the political elite actively entertain the idea of end-times or believe the world is less than 10,000 years old. People are always crying up the end of the world, but you don't expect to see your senator or president espousing such views, or public policy written and enacted on the basis of an information-free values perspective.
The possibilities of climate change do not permit this indulgence. Yet the split in our society intensifies every year, and all middle ground disappears. We need to understand why this fear is so powerful that it trumps even self-preservation. From what does this fear arise? The answers are usually couched in political or religious terms, but they are never satisfying.
I want to suggest another factor. To really understand why half of our population is invested in something which, to the other half, seems almost bizarre, I think we have to talk sensibly about the nonlocal, that aspect of consciousness that experimental evidence shows to be in a domain in which space and time have very different meanings. There are literally thousands of papers published in peer-reviewed journals—even though it sometimes took bitter scholarly struggle to make that publication happen—to build this case. I encourage anyone who would like to go into greater detail to visit the “papers” section of my personal website http://www.stephanaschwartz.com/home.htm for bibliographies on Remote Viewing, Therapeutic Intention, and Meditation. There are other research bibliographies one could present as well. I advance this social model because it has been repeatedly demonstrated from various perspectives in a variety of disciplines as reported in these papers.
I want to get past the usual circular debate of skeptic and proponent call and response because it is clouding something very important: the social implications of nonlocal linkage. It is here we must go to understand the genesis of our schism.
The Interdependent Interconnected Nonlocal Consciousness Model
Here following is what I think can reasonably be said of the Interdependent Interconnected Nonlocal Consciousness Model based on the experimental data: (1) Only certain aspects of the mind are the result of physiologic processes. (2) Consciousness is causal, and physical reality is its manifestation. (3) All consciousnesses, regardless of their physical manifestations, are part of a network of life which they both inform and influence, and are informed and influenced by—there is a passage back and forth between the individual local and the collective nonlocal. (4) Some aspects of consciousness are not limited by space time.
If space and time are not controlling factors, what can be used to navigate in the nonlocal domain? I believe the research shows us three:
•intention
•numinosity
•entropic process
By intention, I mean focused awareness directed to a goal. Put another way, by intentioned awareness I mean the difference between looking and seeing.
By numinosity, I mean something entirely nonlocal. The reiterated acts of intentioned awareness create, in the nonlocal domain, an architecture of information. It has no physical characteristics. The term numinous, from which I derive numinosity, is based on the Latin word numen, and was coined in 1917 by the eminent German Protestant philosopher and theologian Rudolf Otto (1869-1937).2
But Carl Jung was the first in modern science to capture the essence of the concept, saying:
We should not be in the least surprised if the empirical manifestations of unconscious contents bear all the marks of something illimitable, something not determined by space time. This quality is numinous…. numina are psychic entia…3
There is an aspect of coherence in numinosity. That it increases in intensity through individual acts of intentioned awareness is one of many demonstrations that all consciousness is interconnected. It is, after all, the process by which archetypes are created. The hero of a thousand faces is but one of the nonlocal domain's calling cards. And I think it can be said that the intensity of the nonlocal informational architecture increases with each act of intentioned awareness.
Entropic process is the third attribute of the nonlocal that I think can be supported by well-grounded evidence. In the nonlocal, where there is no space time except as information, entropy means the dissolution of the one informational architecture, and its transformation into another architecture. The transition can be anything from death to a nuclear explosion; it is all ultimately information in the nonlocal.
Below our conscious awareness, research tells us, we have a physiological presentiment response to numinous events such as unexpected loud noises,4 or emotionally powerful images,5 both demonstrating our connection to the nonlocal. People know when they are being stared at; their brains respond before they are consciously aware.6 Animals know when their masters are coming home.7 Plants respond when people think about hurting them.8 Life is connected. If we are all linked and have access to the nonlocal aspect of our consciousness whether we are conscious of it or not—mostly not except for what we call hunches, a woman's intuition, or a gut feeling—then we must consider not just its individual attributes, which is what most research focuses on, but its social implications as well.
There is no time in the linear sense in the nonlocal, so the “when” of events is not what matters. In remote viewing, we know it is as easy for an individual to see something in the future, particularly the short-term future, as it is in present time, or past time. We also have evidence that the therapeutic intention linkage is not affected by either space or time. What matters is the numinous intensity of the target, whether it is a thing, a person, a locale, or a social process.
So consider this: the 2012 Mayan calendar predictions, and other ethnohistoric apocalyptic predictions, are presentiments. In the nonlocal domain, a tipping point—that historical moment when some massive entropic process is effected, such as the shift in the earth's biosphere to a new climatic destiny—bringing change, death, fear, and disruption would be an extremely numinous target. Roger Nelson's Global Consciousness Project, which reports changes in randomness amongst remote event generators when massively numinous moments such as the death of Princess Diana occurred, may be pointing out the perturbation of mass linkage.9
How much stronger and more enduring, then, would be the manifestation of the events described in the reports of the Intergovernmental Panel on Climate Change, the interactive maps of the Arizona State Geosciences Department, or a hundred other laboratories and research stations around the world? If what has been seen in the laboratory is extended, individual experience by individual experience, to a collective social awareness, then the precognition of climate change, a massive (highly numinous) transition (entropic process) affecting all life, particularly the lives of those we love (producing a focused intention), must be unnerving at the individual level and deeply disturbing at the social level. And how strongly, one might ask, do these presentiments reinforce minds already disposed to apocalyptic end times?
This research, I think, is telling us that the fear arises from the presentiment of the future defined by climate change. We are each of us seismometers of the nonlocal, and our behavior is governed for good or ill in part by our response to this psychic wind. Those whose response is fear will become more and more agitated, more susceptible to manipulation, more irrational and centered on emotion. And more dangerous. Flight or fight at the social, the cultural, level is literally murderous. Every war being fought has as its basis some kind of fear.
Those whose response is to see opportunity come to comprehend that on the other side of the petroleum/nuclear age lies a world based on different rules. It only requires the capacity to be supple and to adapt. In the world of climate change, the only remediation is to change those behaviors that are creating the problem—or die. And, as with the rest of my argument, we can turn to research to guide us. As Science Daily reported, “In a wide range of studies, social scientists are amassing a growing body of evidence to show we are evolving to become more compassionate and collaborative in our quest to survive and thrive.”10
Psychologist Dacher Keltner, codirector of UC Berkeley's Greater Good Science Center and author of Born to Be Good: The Science of a Meaningful Life explains it this way:
Because of our very vulnerable offspring, the fundamental task for human survival and gene replication is to take care of others. Human beings have survived as a species because we have evolved the capacities to care for those in need and to cooperate. As Darwin long ago surmised, sympathy is our strongest instinct.4
Other research tells us that compassionate life-affirming choices create happiness and that happiness is contagious. Nicholas A. Christakis, a medical sociologist at Harvard University who has studied this exact issue, says, “You would think that your emotional state would depend on your own choices and actions and experience, but it also depends on the choices and actions and experiences of other people, including people to whom you are not directly connected. Happiness is contagious.”11 In the study, 4,700 people were followed over two decades. As with all good longitudinal studies, those years mellowed the research data like a good wine, giving it gravitas. Christakis and his colleagues discovered that if you are happy or become happy you increase the probability that someone you know will be happy just through a casual interaction with you. Even more surprising, the Harvard researchers found that this capacity to create happiness could extend to the third degree of separation. It even translates into real-world economics. “Our work shows that whether a friend's friend is happy has more influence than a $5,000 raise,” says Christakis.4
Clusters of happy and unhappy people are visible in the network, and the relationship between people's happiness extends up to three degrees of separation (for example, to the friends of one's friends' friends). People who are surrounded by many happy people and those who are central in the network are more likely to become happy in the future. Longitudinal statistical models suggest that clusters of happiness result from the spread of happiness and not just a tendency for people to associate with similar individuals. A friend who lives within a mile (about 1.6 km) and who becomes happy increases the probability that a person is happy by 25% (95% confidence interval 1% to 57%). Similar effects are seen in coresident spouses (8%, 0.2% to 16%), siblings who live within a mile (14%, 1% to 28%), and next door neighbours (34%, 7% to 70%). Effects are not seen between coworkers. The effect decays with time and with geographical separation.12
Psychologist Martin E.P. Seligman, of the University of Pennsylvania, commenting on this work, made as clear a statement of the nonlocal linkage process in the social context as any I could make—although he may not see it in quite the same way—saying, “Laughter and singing and smiling tune the group emotionally. They get them on the same wavelength so they can work together more effectively as a group.” I would only add that ritual ceremony using music or dance is the technique of choice the world over for creating nonlocal linked shared intention.
And just how big a network of shared intention does it take to create life-affirming change, instead of succumbing to fear? I think we have data on this as well. There are today fewer than 250,000 Quakers in the three Quaker groups in the United States.13 So small a group that most Americans have never met one, and never will. Yet when you wind back every socially progressive transition in our history—abolition, penal reform, public education, women's suffrage, civil rights, nuclear freeze, and environmental concerns—you find a small group of Quakers. As I write this, there are a little more than 308 million people in the United States, so Quakers constitute about 0.08% of the population. Pretty small. I find that quite comforting. It means we don't have to get everyone on the side of the life affirming.
We are going to go through some measure of climate change, whatever we do. We will survive 2012, although it seems to me that what is coming is something so numinous that it has resonated back to the Maya, so it will not be minor. In the nonlocal domain, there is no time, only numinosity and the entropic process it so often flows from. Climate change is the most numinous entropic process one can imagine, and the fullness of its impact lies far in the future. Fear is something we must learn to live with, just as the English lived with fear during the Blitz. We won't serve ourselves through denial, and how bad this will be will in some measure be based on the choices we make: how subtle and adaptive we are, how quickly we learn the rules of working with the biosphere. We are all linked, and compassionate life-affirming choices will tip the balance in favor of survival and evolution.
These are choices each of us will have to make day by day.
The Lost Symbol Sparks Nationwide Interest in the Noetic Sciences By Bonnie J. Horrigan
Since the release of Dan Brown's newest novel, The Lost Symbol, visits to the IONS Web site have increased 10-fold; new members are signing up every day, calls are flooding in from across the country, and journalists from Dateline NBC, the Discovery Channel, NPR, and other media outlets are clamoring for interviews. Why? Because IONS and its research are prominently featured in the novel, and now people want to know more—a lot more—about the role of intention and consciousness in the world.
Brown's novel, which sold two million copies of the English-language edition in its first two weeks of release, employs noetic science to untangle the web of clues that resolve a plot conflict between good and evil. It wasn't until the day of its release that IONS staff discovered they were featured. Marilyn Mandala Schlitz, PhD, CEO of the Institute of Noetic Sciences, received an email from Brown. It said, in essence, that he was a big fan of IONS, and although he had hoped to give them a heads-up, he wasn't able to do so because of the security around the book. But he hoped they were enjoying the attention.
They are. “We are immensely grateful to Mr. Brown for catapulting the little-known field of Noetic Sciences into mainstream conversation surrounding his book,” said Dr Schlitz, who shares similarities with the book's heroine, Katherine Solomon. Before becoming CEO of IONS, Schlitz spent several decades pioneering clinical and field-based research in the areas of human consciousness, transformation, and healing.
“Based on my research over three decades, I am convinced that consciousness matters,” said Schlitz. “Through our individual and collective explorations, through the bridging of objective science and ancient wisdom traditions, we can find creative new solutions to age-old problems besetting humanity—fostering personal and social healing and transformation.”
The Institute of Noetic Sciences, which was founded in the early 1970s by Apollo 14 astronaut Edgar Mitchell, is a nonprofit membership organization that conducts and sponsors leading-edge research into the potentials and powers of consciousness—including perceptions, beliefs, attention, intention, and intuition. Sitting in the cramped cabin of the space capsule on his return trip from the moon, Mitchell saw planet Earth floating freely in the vastness of space and was engulfed by a profound sense of universal connectedness. In Mitchell's own words: “The presence of divinity became almost palpable, and I knew that life in the universe was not just an accident based on random processes.”
Researchers at IONS have been applying the lens of science to the multidisciplinary study of consciousness for nearly 40 years. The word noetic comes from the ancient Greek nous, for which there is no exact equivalent in English. It refers to “inner knowing,” a kind of intuitive consciousness—direct and immediate access to knowledge beyond what is available to our normal senses and the power of reason.
EXPLORE Coeditor-in-Chief Dean Radin, PhD, is the senior scientist at IONS. “My interest in consciousness was originally motivated by an intuitive sense that the mind is far more mysterious and powerful than we know,” said Radin, who was brought on as an EXPLORE editor specifically because of his expertise in this field. “Through education and experience I've also come to appreciate that these experiences are responsible for most of the greatest inventions, artistic and scientific achievements, creative insights, and religious epiphanies throughout history. Understanding this realm of human experience thus offers more than mere academic interest—it touches upon the very best that the human intellect and spirit have to offer.”
The Lost Symbol mentions studies on the effects of individual and collective intention, intuition, gut feelings, and presentiments—all areas that IONS has been studying intensively for years. For a list of IONS published studies, please go to: http://www.noetic.org/publications/journal_pubs.cfm.
To field the overwhelming number of requests by The Lost Symbol readers for information about noetic sciences, IONS launched a multipart teleseminar series on the subject in October 2009, which ran weekly through December 2009. The series was filmed on location in Washington, DC, where Brown's novel takes readers on a tour of the symbols and legends of ancient wisdoms and spiritual traditions hidden among the country's national monuments. Dr Schlitz guided listeners on a similar tour, offering a broader understanding of the noetic sciences and what science now knows about the mysteries of consciousness. “In January 2010 we will begin a new, live phone interview series in the form of a class that will take people deeper into the latest noetic research and implications,” explained Schlitz.
Limitless, cheap chips made out of DNA could replace silicon
A single waffle structure: nanotechnology never looked so delicious.
Silicon chips are on the way out, at least if Duke University engineer Chris Dwyer has his way. The professor of electrical and computer engineering says a single grad student using the unique properties of DNA to coax circuits into assembling themselves could produce more logic circuits in a single day than the entire global silicon chip industry could produce in a month.
Indeed, DNA is perfectly suited to such pre-programming and self-assembly. Dwyer's recent research has shown that by creating and mixing customized snippets of DNA and other molecules, he can create billions of identical, waffle-like structures that can be turned into logic circuits using light rather than electricity as a signaling medium.
The process works by adding light-sensitive molecules called chromophores to the structures. These chromophores absorb light, exciting the electrons within. That energy is passed to a different nearby chromophore, which uses it to emit light of a different wavelength. The difference in wavelength is easily distinguished from the original light; in computing terms, it's the difference between a one and a zero. Presto: a logic gate.
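The idea can be sketched in a few lines of code. The following toy model is an illustration only, not Dwyer's actual design: the wavelengths, names, and the threshold rule for combining two inputs are all invented for the example. It simply shows how "re-emission at a shifted wavelength" can be read as a binary signal and composed into a gate.

```python
# Toy model of a chromophore-based logic gate (illustrative assumptions only).
# A chromophore absorbs light at one wavelength and re-emits at a longer one;
# reading "shifted" vs "unchanged" wavelength gives a binary output.

ABSORB_NM = 480  # hypothetical absorption wavelength (nm)
EMIT_NM = 520    # hypothetical re-emission wavelength (nm)

def chromophore(input_nm: int) -> int:
    """Return the emitted wavelength: shifted if the input is absorbed."""
    return EMIT_NM if input_nm == ABSORB_NM else input_nm

def read_bit(output_nm: int) -> int:
    """Interpret a wavelength-shifted signal as 1, anything else as 0."""
    return 1 if output_nm == EMIT_NM else 0

def and_gate(a_nm: int, b_nm: int) -> int:
    """AND-like gate: both excitation paths must fire (toy threshold rule)."""
    energy = read_bit(chromophore(a_nm)) + read_bit(chromophore(b_nm))
    return 1 if energy == 2 else 0

# Two absorbable inputs -> 1; one off-wavelength input -> 0.
print(and_gate(480, 480))  # -> 1
print(and_gate(480, 300))  # -> 0
```

Larger circuits would follow the same pattern: the output wavelength of one stage becomes the input of the next, with no electrical wiring involved.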
Rather than running computers and electrical circuits on electricity, light-sensitive DNA switches could be used to move signals through a device at much higher speeds. Furthermore, the waffle structures are cheap and can be made quickly in virtually limitless quantities, driving down the cost of computing power. Once you figure out how you wish to code the DNA snippets, you can synthesize them easily and repeatedly; from there you can create everything from a single logic gate to larger, more complex circuits.
A shift away from silicon-based semiconductor chips would be a sea change for sure, but semiconductors are approaching a technological ceiling, and if the economics of DNA-based chips are really as attractive as they seem, change might be inevitable. DNA is already smart enough to be the foundation of life on Earth: why not the foundation of computing as well?
Yunnan's worst drought for many years has been exacerbated by destruction of forest cover and a history of poor water management.
Jane Qiu ~ Beijing
Born into a farming family in south Yunnan province, China, Zhu Youyong's life has always been tied to the soil. At the age of 54, however, Zhu — now president of Yunnan Agricultural University in Kunming — says he "has never seen such severe drought in Yunnan".
Since last September, the province has had 60% less rainfall than normal. According to the Ministry of Civil Affairs, 8.1 million people — 18% of Yunnan's population — are short of drinking water, and US$2.5-billion worth of crops are expected to fail.
Scientists in China say that the crisis marks one of the strongest case studies so far of how climate change and poor environmental practice can combine to create a disaster. They are now scrambling to pin down exactly what caused the drought, and whether similar events are likely to hit the region more often in the future.
Meanwhile, with most of the province's winter crops ruined, local farmers need immediate help. Zhu has been going from county to county to persuade farmers to grow different crops together in the same field, rather than as a monoculture. Intercropping can boost yields by up to 30%, and could help to avoid food shortages in the region later this year [1]. This summer, 80% of the farmland in Yunnan — a staggering 2.9 million hectares — will use the technique. But success will depend on a break in the weather. "If it still doesn't rain in late May, the consequences will be unthinkable," says Zhu.

Dry spell
It is not news that China is seriously short of water, but its southwestern region — including the Yunnan, Guizhou, Guangxi and Sichuan provinces and Chongqing municipality — usually sees ample precipitation. This year, however, the rains did not come, and people there want to know why.
"Yunnan does experience droughts every few decades," says Xu Jianchu, an ecologist at the Kunming Institute of Botany, an institute of the Chinese Academy of Sciences (CAS). But the severity of this year's drought is unusual. Some say it is the worst in over a century. Xu is a contributor to a report on climate change in Yunnan and its myriad impacts [2]. Sponsored by CAS and the China Meteorological Administration, the report shows that Yunnan has got warmer and drier in the past half-century. Since 1960, the number of rainy days has decreased, whereas the number of extreme events, such as torrential rains and droughts, has increased.
Some suggest that this year's drought in Yunnan might be caused by the El Niño/Southern Oscillation (ENSO), an atmospheric circulation system that originates in the western Pacific Ocean and brings rainfall to Southeast Asia. During El Niño years the wind from the Pacific weakens, leading to droughts in the region.
"We've had a moderately strong El Niño event since October," says Dan Bebber, a climate researcher at the Earthwatch Institute in Oxford, UK, a non-profit environmental group. Although Yunnan is not directly under the influence of ENSO, "there is a statistical relationship between El Niño and the monsoon system in southwestern China through mechanisms that are unclear", he says.
Indeed, the CAS report suggests that in previous strong El Niño years, the rainy season in Yunnan, which spans May to October, was delayed, with less rain in the summer and more rain in the autumn. But climate models are divided on how climate change will affect ENSO, with some showing increasing intensity and others decreasing intensity, says Bebber.
Climate change is not the only factor affecting the drought. Deforestation in mountainous Yunnan is also being blamed. "Natural forests are a key regulator of climate and hydrological processes," says Xu, who is also China's representative at the World Agroforestry Centre, an international think tank headquartered in Nairobi, Kenya.
The forest's thick litter layer of organic materials can absorb up to seven times its own weight in water, says Liu Wenyao, an ecologist at the Xishuangbanna Tropical Botanical Garden (XTBG), a research institute of CAS in Menglun in southwestern Yunnan. Natural forests also have an extensive network of roots that keep the ground moist, and the canopy can trap water vapour, creating a dense fog that keeps the myriad plant species alive during dry seasons.
But in Xishuangbanna prefecture, renowned for the natural splendour of its tropical rainforests, forest clearance between 1976 and 2003 shrank the primary-forest cover to 3.6% of its 1976 value [3]. The rainforest has been replaced by rubber trees — known as 'water pumps' by locals because of their insatiable thirst — which now cover 20% of the prefecture's land.
In the Ailao mountains north of Xishuangbanna, where it is too cold to grow rubber trees, plantations of fast-growing but thirsty eucalyptus are replacing primary forest to feed the paper industry. In other parts of Yunnan, logging, mining, quarrying and increasing human settlement have cleared huge areas of forest. The results are an increase in soil erosion, landslides and flash floods.
"Such large-scale deforestation removes the valuable ecological services natural forests provide," says Liu. "The impact of deforestation on hydrological processes becomes particularly acute during prolonged droughts." The region could also be plagued by other natural hazards: with drought the risk of forest fire increases, whereas wetter monsoon seasons could see more floods wreaking havoc.
Many scientists are now worried that severe droughts, such as Yunnan's, will become more common across southeast Asia. In addition to the effect on humans, "the impact on biodiversity could be huge," says Jennifer Baltzer, an ecologist at Mount Allison University in Sackville, New Brunswick, Canada.
As existing plant species struggle to cope with the drought and die, they are replaced by hardier plants. Zhu Hua, an ecologist at XTBG, and his colleagues have already noted a 10% increase in the abundance of liana species over the past few decades in southwestern Yunnan's tropical forests [4]. Cao Kunfang, also an XTBG ecologist, says that lianas have a deep root system that allows them to absorb water deep in the soil [5]. They can also minimize evaporation by closing the minute stomatal pores in their leaves. But without a large trunk, lianas are poor at absorbing carbon dioxide — and even worse once their stomata close. "Having more lianas in tropical forests could compromise their function as a carbon sink," says Cao.

Last-minute scramble
As government officials scramble to deal with the emergency in Yunnan, the province's water management is being scrutinized. Most of its reservoirs were built more than 50 years ago, and half are either disused or do not function properly. Many of Yunnan's natural lakes are severely polluted and unusable, says Ma Jun, director of the Institute of Public and Environmental Affairs, a non-governmental organization in Beijing. Xu says that the region does not have enough small-scale infrastructure — ponds, small reservoirs and canals — to distribute clean water to the hardest-hit areas. "There is an urgent need to develop an effective hydrological network in the province," he says.
In recent years the region has instead focused on building huge reservoirs and hydropower stations, Xu says, because of the economic and political capital that such projects offer. Overall, the central government has been reactive, tackling droughts when they come rather than preparing for the worst, adds Yu Chaoqing, a hydrologist at the Beijing-based China Institute of Water Resources and Hydropower Research, part of the government's Ministry of Water Resources.
Throughout southwestern China, where 2,000 drought-relief workers are drilling wells around the clock, the location of groundwater remains elusive because few geological surveys have been done. "It's a last-minute scramble because only 10% of the drought-ridden region has been surveyed," says Hao Aibing, a geologist at the China Geological Survey in Beijing, who is helping to locate groundwater in Yunnan, Guizhou and Guangxi provinces. "Even if we get live water wells, the water quality remains an issue," he says. "We just know so little about the groundwater in the region."
Researchers are adamant that lessons must be learned from this year's drought in Yunnan. "Extreme weather events are likely to happen more frequently in the future," says Xu, referring to the findings of the CAS report. "I hope we will be better prepared when the next natural disaster strikes."
Some 43 million Americans do it every day: take a tiny aspirin to help prevent heart attacks and strokes. In fact, doctors have been routinely recommending the practice to older adults for years. But recently, experts have been questioning the aspirin-a-day regimen, concerned that this everyday miracle drug can pose serious risks, including bleeding in the brain and stomach.

The aspirin-a-day controversy erupted publicly in March when a 10-year study of nearly 30,000 adults ages 50 to 75 without known heart disease found that a daily aspirin didn’t offer any discernible protection. The group taking aspirin had cardiovascular disease at the same rate as those taking a placebo. Moreover, the study—published in the Journal of the American Medical Association—reported that taking a daily aspirin (100 mg) almost doubled the risk of dangerous internal bleeding.

And last year the U.S. Preventive Services Task Force—a panel of medical experts—issued new guidelines for patients, recommending that only those at risk for heart attacks or strokes take a daily aspirin. Risk factors include having high blood pressure, high cholesterol and diabetes, as well as being overweight. The panel also recommended that people over 80 not take aspirin at all because of bleeding risk. For the first time, the panel also broke down its advice by gender, recommending against daily aspirin use in women under 55 and men under 45.

Is it right for you?

So, should you take a daily aspirin or not? The answer is not quite as simple as doctors previously thought. Aspirin, they say, can still be a lifesaving drug, but it’s not for everyone. For reasons researchers don’t fully understand, aspirin seems to provide different benefits for men and women. In men, aspirin can prevent heart attacks but seems to have no effect on strokes, says Michael LeFevre, M.D., a member of the task force that wrote the new guidelines and a professor of family medicine at the University of Missouri.
Conversely, he says, aspirin appears to help women avoid strokes but not heart attacks. The new recommendations suggest that aspirin will be most beneficial to:
men between 45 and 79 who have a high risk for heart attacks;
women between 55 and 79 who are at high risk for strokes.
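For readers who like to see a rule set written out explicitly, the summary above can be sketched as a small function. This is purely an illustration of the article's summary of the guidelines, not medical advice, and the function name and parameters are invented for the example; real guidance weighs individual bleeding risk with a doctor.

```python
# Toy encoding of the task-force summary as described in the article above.
# Illustrative only -- not medical advice.

def daily_aspirin_suggested(sex: str, age: int,
                            high_heart_attack_risk: bool = False,
                            high_stroke_risk: bool = False) -> bool:
    """Return True if the article's summary would suggest daily aspirin."""
    if age >= 80:
        # The panel recommended no aspirin over 80 due to bleeding risk.
        return False
    if sex == "male":
        # Men 45-79 at high risk for heart attacks.
        return 45 <= age <= 79 and high_heart_attack_risk
    if sex == "female":
        # Women 55-79 at high risk for strokes.
        return 55 <= age <= 79 and high_stroke_risk
    return False

print(daily_aspirin_suggested("male", 60, high_heart_attack_risk=True))   # -> True
print(daily_aspirin_suggested("female", 50, high_stroke_risk=True))       # -> False
```

Note how the gender split falls directly out of the observation quoted above: aspirin appears to prevent heart attacks in men and strokes in women, so the risk flag that matters differs by sex.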
Drawbacks

Aspirin, which has been around for more than 100 years, is a cheap, easy, effective way to control pain and inflammation. In 1989, when a major study revealed that a small dose could reduce the risk of stroke and heart attack by preventing blood clots, doctors began recommending that their older patients take a low dose of aspirin, 81 mg, every day. “Aspirin is a lifesaving medicine in patients with established cardiovascular disease,” says Jeffrey Berger, M.D., a cardiologist at New York University who has studied the use of aspirin. But, he warns, it does come with some real drawbacks. Aspirin has been linked with chronic ringing in the ears (tinnitus), and earlier this year scientists reported that people who took aspirin regularly were more likely to suffer from hearing loss.

Dangerous bleeding

The drug’s ability to prevent blood clots is also a double-edged sword. The body’s ability to stop bleeding is what prevents a small cut, for instance, from causing uncontrollable bleeding. While aspirin might keep clots from blocking blood flow to our hearts and brains, it also makes it more likely that we might develop serious internal bleeds, particularly in the stomach. “That’s not a trivial side effect,” says LeFevre. “We’re talking about people who get hospitalized” and may end up in the intensive care unit, he adds. Some patients are more likely to suffer these complications than others; a recent review of the research reveals that men are twice as likely to experience bleeds as women, and the risk also increases with age. Researchers estimate the risk of internal bleeding for those who take aspirin is two to four times greater than for those who don’t take aspirin at all, depending on factors such as age and overall health. Even though people are more likely to bleed as they get older, researchers don’t think aspirin causes the risk of bleeding to build up over time.
“In fact, it’s likely that if one is to bleed, their risk of bleeding is seen early on,” Berger says. Taking ibuprofen and naproxen—common pain relievers such as Advil and Aleve—also can make bleeding more likely. Unfortunately, this kind of severe bleeding doesn’t usually come with obvious warning signs, but sudden gastrointestinal pain can be a tip-off. The bleeding is often caused by inflammation of the stomach lining or an aspirin-induced ulcer and can result in vomiting blood or blood in the stool. The traditional point of view, LeFevre says, was: “Aspirin is a pretty benign thing. Why doesn’t everybody take one?” Aspirin, as it turns out, is not harmless.

Strokes vs. heart attacks

Many of the risk factors for heart attacks and strokes—including age, diabetes and smoking—overlap, but there are slight differences. High total cholesterol and high levels of LDL or “bad” cholesterol, for instance, are important predictors of heart attacks. The most important risk factors for strokes include high blood pressure, certain kinds of irregular heartbeats (known as atrial fibrillation) and a condition known as left ventricular hypertrophy, in which some of the heart muscle thickens.

Experts agree that women who have already had strokes and men who have already had heart attacks should absolutely be taking aspirin. “You have to make sure that people with a history of heart attack or stroke do not stop their aspirin, because it could be a deadly mistake,” says NYU’s Berger. Clearly, the benefits of aspirin have to be weighed against the possibility of bleeding, and that’s a conversation that experts say every patient needs to have with his or her doctor. “This decision has to be made one person at a time,” LeFevre says. “There is no one blanket recommendation for everybody.”
A burger and fries are not only bad for the waistline, they might also exacerbate asthma, a new study suggests. Patients with asthma who ate a high-fat meal had increased inflammation in their airways soon afterward, and did not respond as well to treatment as those who ate a low-fat meal, the researchers found. The results provide more evidence that environmental factors, such as diet, can influence the development of asthma, which has increased dramatically in recent years in westernized countries where high-fat diets are common. In 2007, about 34.1 million Americans had asthma, according to the American Academy of Allergy, Asthma and Immunology. From 1980 through 1994, the prevalence of asthma increased 75 percent.
While the results are preliminary, they suggest cutting down on fat might be one way to help control asthma. "If these results can be confirmed by further research, this suggests that strategies aimed at reducing dietary fat intake may be useful in managing asthma," study researcher Lisa Wood, of the University of Newcastle, told LiveScience in an e-mail. The results will be presented at this year's American Thoracic Society's International Conference, held May 14-19 in New Orleans. Asthma is a condition in which inflammation in the airways can lead to breathlessness, wheezing and coughing. Symptoms can be triggered by a variety of irritants, including air pollution, smoke and allergens, such as pollen and animal dander. Previous studies have shown eating fatty foods can trigger the immune system, leading to an increase in cells in the blood that are responsible for inflammation. But no one had specifically looked at the effect of a fatty diet on asthma. Wood and her colleagues had 40 asthmatic patients eat either a high-fat meal, consisting of burgers and hash browns, or a low-fat meal of yogurt. The high-fat meal was 1,000 calories (52 percent of calories from fat), and the low-fat meal was 200 calories (13 percent from fat). Analysis of sputum samples revealed that those who had eaten the burger meal had an increased number of immune cells called neutrophils in their airways. Neutrophils play a role in triggering inflammation. The high-fat diet patients also showed less improvement in their lung function in response to the asthma medication Ventolin (generically known as albuterol) three to four hours after the meal. The researchers aren't sure why the drug didn't work as well after the high-fat meal and plan further studies to tease out an answer. It could be the fatty acids interfere with the drug in some way, the researchers say.
'Artificial life' breakthrough announced by scientists
Researchers in the US have developed the first synthetic living cell. Their work, which many scientists have called a landmark study, is a key step towards the design and creation of new living things. BBC News examines the issues raised by this controversial breakthrough. Have these scientists created synthetic life?
They are calling this a synthetic living cell. But they did use an existing cell as a template and as a recipient for their home-made DNA. Strictly speaking, it is only the genome - the DNA in the cell - that is entirely synthetic.
This bacterial cell, the researchers say, is the first life form to be entirely controlled by synthetic DNA.
The researchers also employed "nature's tools" to build their new chromosome (the package of DNA that contains all of the genetic material the cell needs to live and function).
They chemically constructed blocks of DNA then inserted them into yeast cells, which assembled the blocks into a complete bacterial chromosome. What will the scientists do with these synthetic bacteria?
These particular cells are just copies of existing or "wild type" bacteria. But they show that making a living cell with a synthetic chromosome is possible. Dr Craig Venter and his colleagues hope to use this technology to design new bacteria from scratch - cells that could carry out useful functions.
He and his colleagues are already collaborating with pharmaceutical and fuel companies to design and develop chromosomes for bacteria that would produce useful fuels or even new vaccines.
They say they hope eventually to "build" bacteria that absorb carbon dioxide and therefore help repair damage to the environment. Could they use this same technique to make more complex synthetic organisms - like plants or animals?
Theoretically, yes. But the current aim is to design and build bacterial cells.
They are an ideal first candidate because they could potentially produce substances that we want. Dr Venter believes such tailor-made bacteria could "create a new industrial revolution".
And, in genetic terms, bacteria are very simple organisms.
They typically have a single, circular chromosome of DNA. By contrast, almost every cell in the human body contains 23 pairs of much larger, linear chromosomes. So bacteria have much less information in their genomes, and it has been possible to sequence and copy all of this information.
Dr Venter says that extending the technique to higher organisms, such as plants, might be possible, but it will take scientists many years to work out how to build such large and complex genomes. Are there ethical concerns about making new life?
Some critics have accused Dr Venter and his colleagues of "playing God" and believe that it should not be a role for humans to design new life.
There are also concerns about the safety of this new technology.
Professor Julian Savulescu, from the Oxford Uehiro Centre for Practical Ethics at the University of Oxford says the potential of this science is "in the far future, but real and significant: dealing with pollution, new energy sources, new forms of communication".
"But the risks are also unparalleled," he continues. "We need new standards of safety evaluation for this kind of radical research and protections from military or terrorist misuse and abuse.
These could be used in the future to make the most powerful bioweapons imaginable. The challenge is to eat the fruit without the worm."
Dr Venter stresses that he and his colleagues have been addressing these ethical and safety issues since they began their first experiments in the field of synthetic biology.
"We asked for an extensive ethical review of the approach," he explained.
"In 2003, when we made the first synthetic virus, it underwent extensive ethical review that went all the way up to the level of the White House.
"And there have been extensive reviews including from the National Academy of Sciences, which has done a comprehensive report on this new field.
"We think these are important issues and we urge continued discussion that we want to take part in."
Why Are American Doctors Mutilating Girls?
by Ayaan Hirsi Ali
A new proposal by the American Academy of Pediatrics would have doctors assisting families in the ritual of female circumcision, but activist and Nomad author Ayaan Hirsi Ali says they’d just be complicit in perpetuating a grave injustice.
The American Academy of Pediatrics recently put forward a proposal on female genital mutilation: it would like American doctors to be given permission to perform a ceremonial pinprick, or “nick,” on girls born into communities that practice female genital mutilation.
Female circumcision is a custom in many African and Asian countries whereby the genitals of a girl child are cut. There are roughly four procedures. First there is the ritual pinprick. This is what the Pediatrics proposal refers to as the “nick” option. To give you an idea of what that means, visualize a preteen girl held down by adults. Her clitoris is tweaked so that the circumciser can hold it between forefinger and thumb. She then takes a needle and pierces it, using enough force for it to go into the peak of the clitoris. As soon as it bleeds, the parents and others attending the ceremony cheer, the girl is comforted, and the celebrations follow.
There is a more sinister meaning to the word “nick” if you consider the fact that in some cases it means to cut off the peak of the clitoris. Proponents compare “nicking” to the ritual of boy circumcision. But in the case of the boys, it is the foreskin that is all or partly removed and not a part of the penis head. In the case of the girls, the clitoris is actually mutilated.
Then there is the second method whereby a substantial part of the clitoris is removed and the opening of the vagina is sewn together (infibulation). The third variation adds to this the removal of the inner labia.
Finally, there is a procedure whereby as much of the clitoris as possible is removed along with the inner and outer labia. Then the inner walls of the vagina are scraped until they bleed and are then bound with pins or thorns. The tissue on either side grows together, forming a thick scar. Two small openings roughly equal to the diameter of a matchstick are left for urination and menstruation respectively.
Often these operations are done without anesthesia and with tools such as sharp rocks, razor blades, knives, or scissors, depending on the location, family income, and education. It is thus more accurate to speak, as the World Health Organization does, of female genital mutilation (FGM) rather than the obscure and positive-sounding "circumcision."
According to the American Congress of Obstetricians and Gynecologists, more than 130 million women and girls worldwide have undergone some form of female genital cutting. Some immigrant parents from countries where FGM is common, like Egypt, Sudan, and Somalia, continue this practice in Europe and the United States even though they know that it is criminal. Some of them sneak their daughters out of the country during the long school summer vacation so that they can be subjected to any one of these forms of FGM.
Congressman Joseph Crowley (D-NY) recently introduced a bill to toughen federal laws by making it a crime to take a girl overseas to be circumcised. He argued, rightly, that FGM serves no medical purpose and is properly banned in the U.S.
While the American Academy of Pediatrics agrees that FGM serves no medical purpose, it argues that the current federal law has had the unintended consequence of driving some families to take their daughters to other countries to undergo mutilation. The pediatricians say that “it might be more effective if federal and state laws enabled pediatricians to reach out to families by offering a ritual nick as a possible compromise to avoid greater harm.”
But is this plausible? I fear not.
I am familiar with this debate in two ways. First, I come from a culture where virtually every woman has undergone genital cutting. I was 5 years old when mine were cut and sewn. Second, while serving as a member of parliament in the Netherlands, I was assigned the portfolio for the emancipation and integration of immigrant women. One of my missions was to combat practices such as FGM.
To understand this problem, we need to begin with parental motives. The “nicking” option is regarded as a necessary cleansing ritual. The clitoris is considered to be an impure part of the girl-child and bleeding it is believed to make her pure and free of evil spirits.
But the majority of girls are subjected to FGM to ensure their virginity—hence the sewing up of the opening of the vagina—and to curb their libido to guarantee sexual fidelity after marriage—hence the effective removal of the clitoris and scraping of the labia. Think of it as a genital burqa, designed to control female sexuality.
When the motive for FGM is to ensure chastity before marriage and to curb female libido, then the nick option is not sufficient.
Moreover, the nick option does not address the main problem in Western liberal democracies where FGM is outlawed, which is that it can almost never be detected, so that few perpetrators are brought to justice. Even if we were to consider tolerating it in its most limited form, how could we tell that parents who want to ensure that their daughter will be a virgin on her wedding night will not have her (legally) nicked and then a few months later (illegally) infibulated? I applaud the compassion for children that inspires the pediatricians’ proposal, but they need to eliminate this risk for little girls.
Legislation is only a first step, and even there uniformity is lacking. Some states have passed bills that define FGM in all its manifestations and punish it; others have no such statute and instead prosecute FGM under existing child-abuse laws. So Rep. Crowley's next move should be to push for uniform enforcement of his bill.
But even once the legislative flaws are fixed, there remains the really difficult question of detection.
For the law to have any meaningful effect in eradicating FGM in the U.S., we need to work out a way of knowing when a girl has been mutilated. For me, as a legislator in the Netherlands, this was the thorniest issue. In the United States, where civil liberties are even more jealously guarded, the thorns are likely to be sharper still.
Ayaan Hirsi Ali was born in Mogadishu, Somalia, and escaped an arranged marriage by immigrating to the Netherlands in 1992. She served as a member of the Dutch parliament from 2003 to 2006 and is a research fellow at the American Enterprise Institute. Her autobiography, Infidel, was a 2007 New York Times bestseller.
How worried should we be about everyday chemicals? by Jerome Groopman
Bisphenol A, commonly known as BPA, may be among the world’s most vilified chemicals. The compound, used in manufacturing polycarbonate plastic and epoxy resins, is found in plastic goggles, face shields, and helmets; baby bottles; protective coatings inside metal food containers; and composites and sealants used in dentistry. As animal studies began to show links between the chemical and breast and prostate cancer, early-onset puberty, and polycystic ovary syndrome, consumer groups pressured manufacturers of reusable plastic containers, like Nalgene, to remove BPA from their products. Warnings went out to avoid microwaving plasticware or putting it in the dishwasher. On May 6th, the President’s Cancer Panel issued a report deploring the rising number of carcinogens released into the environment—including BPA—and calling for much more stringent regulation and wider awareness of their dangers. The panel advised President Obama “to use the power of your office to remove the carcinogens and other toxins from our food, water, and air that needlessly increase health care costs, cripple our Nation’s productivity, and devastate American lives.” Dr. LaSalle Leffall, Jr., the chairman of the panel, said in a statement, “The increasing number of known or suspected environmental carcinogens compels us to action, even though we may currently lack irrefutable proof of harm.”
The narrative seems to follow a familiar path. In the nineteen-sixties, several animal studies suggested that cyclamates, a class of artificial sweetener, caused chromosomal abnormalities and cancer. Some three-quarters of Americans were estimated to consume the sweeteners. In 1969, cyclamates were banned. Later research found that there was little evidence that these substances caused cancer in humans. In the nineteen-eighties, studies suggesting a cancer risk from Alar, a chemical used to regulate the color and ripening of apples, caused a minor panic among parents and a media uproar. In that case, the cancer risk was shown to have been overstated, but still present, and the substance remains classified as a "probable human carcinogen." Lead, too, was for years thought to be safe in small doses, until further study demonstrated that, particularly for children, even slight exposure could result in intellectual delays, hearing loss, and hyperactivity.
There is an inherent uncertainty in determining which substances are safe and which are not, and when their risks outweigh their benefits. Toxicity studies are difficult, because BPA and other, similar chemicals can have multiple effects on the body. Moreover, we are exposed to scores of them in a lifetime, and their effects in combination or in sequence might be very different from what they would be in isolation. In traditional toxicology, a single chemical is tested in one cell or animal to assess its harmful effects. In studying environmental hazards, one needs to test mixtures of many chemicals, across ranges of doses, at different points in time, and at different ages, from conception to childhood to old age. Given so many variables, it is difficult to determine how harmful these chemicals might be, or if they are harmful at all, or what anyone can do to avoid their effects. In the case of BPA and other chemicals of its sort, though, their increasing prevalence and a number of human studies that associate them with developmental issues have become too worrisome to ignore. The challenge now is to decide a course of action before there is any certainty about what is truly dangerous and what is not.
In 1980, Frederica Perera, a professor at Columbia’s Mailman School of Public Health and a highly regarded investigator of the effects of environmental hazards, was studying how certain chemicals in cigarette smoke might cause cancer. Dissatisfied with the research at the time, which measured toxic substances outside the body and then made inferences about their effects, she began using sophisticated molecular techniques to measure compounds called polycyclic aromatic hydrocarbons, or PAH—which are plentiful in tobacco smoke—in the body. Perera found that after entering the lungs the compounds pass into the bloodstream and damage blood cells, binding to their DNA. She hoped to compare the damaged blood cells from smokers with healthy cells, and decided to seek out those she imagined would be uncontaminated by foreign substances. “I thought that the most perfect pristine blood would come from the umbilical cord of a newborn,” Perera said.
But when she analyzed her samples Perera discovered PAH attached to some of the DNA in blood taken from umbilical cords, too. “I was pretty shocked,” she said. “I realized that we did not know very much about what was happening during this early stage of development.”
Perera’s finding that chemicals like PAH, which can also be a component of air pollution, are passed from mother to child during pregnancy has now been replicated for more than two hundred compounds. These include PCBs, chemical coolants that were banned in the United States in 1979 but have persisted in the food chain; BPA and phthalates, used to make plastics more pliable, which leach out of containers and mix with their contents; pesticides used on crops and on insects in the home; and some flame retardants, which are often applied to upholstery, curtains, and other household items.
Fetuses and newborns lack functional enzymes in the liver and other organs that break down such chemicals, and animal studies in the past several decades have shown that these chemicals can disrupt hormones and brain development. Some scientists believe that they may promote chronic diseases seen in adulthood such as diabetes, atherosclerosis, and cancer. There is some evidence that they may have what are called epigenetic effects as well, altering gene expression in cells, including those which give rise to eggs and sperm, and allowing toxic effects to be passed on to future generations.
In 1998, Perera initiated a program at Columbia to investigate short- and long-term effects of environmental chemicals on children, and she now oversees one of the largest and longest-standing studies of a cohort of mothers and newborns in the United States. More than seven hundred mother-child pairs have been recruited from Washington Heights, Harlem, and the South Bronx; Perera is also studying pregnant women in Kraków, Poland, and two cities in China, and, since September 11, 2001, a group of three hundred and twenty-nine mothers and newborns from the downtown hospitals near the World Trade Center. In all, some two thousand mother-child pairs have been studied, many for at least a decade.
This March, I visited Columbia’s Center for Children’s Environmental Health, where Perera is the director, and met with a woman I’ll call Renee Martin in an office overlooking the George Washington Bridge. Martin was born in Harlem, attended a community college in Queens, and then moved to 155th Street and Broadway, where she is raising her five children. She entered the study eleven years ago, when she was pregnant with her first child. “I was asthmatic growing up,” Martin said. “And I was concerned about triggers of asthma in the environment. So when they asked me to be in the study I thought it would be a good way to get information that might tell me something about my own health and the health of my child.” She showed me a small black backpack containing a metal box with a long plastic tube. During her pregnancy, Martin would drape the tube over her shoulder, close to her chin, and a vacuum inside the device would suck in a sample of air. A filter trapped particles and vapors of ambient chemicals, like pesticides, phthalates, and PAH. “I walked around pregnant with this hose next to my mouth, but, living in New York, people hardly notice,” she said with a laugh.
The Columbia team also developed a comprehensive profile of Martin’s potential proximity to chemicals, including an environmental map that charted her apartment’s distance from gas stations, dry cleaners, fast-food restaurants, supermarkets, and major roadways. They took urine samples and, at delivery, blood samples from her and from the umbilical cord, along with samples from the placenta. Nearly a hundred per cent of the mothers in the study were found to have BPA and phthalates in their urine. Urine and blood samples are taken as the babies grow older, as well as samples of their exhaled breath. “We have a treasure trove of biological material,” Perera said. The researchers track the children’s weight and sexual development, and assess I.Q., visual spatial ability, attention, memory, and behavior. Brain imaging, using an M.R.I., is performed on selected children.
Martin was still breast-feeding her two-year-old daughter. “I bottle-fed my first child,” she told me. “But when you learn what can come out of plastic bottles and all the benefits of breast-feeding—my other children were nursed.” The Columbia group regularly convenes the families to hear results and discuss ways to reduce their exposure to potential environmental hazards. At one meeting, Martin found out that some widely used pesticides could result in impaired learning and behavior. “I told the landlord to stop spraying in the apartment” to combat a roach infestation, she said. On the advice of the Columbia researchers, Martin asked him to seal the cracks in the walls that were allowing cockroaches to enter, and Martin’s family meticulously swept up crumbs. This approach has now become the New York City Department of Health’s official recommendation for pest control. “You don’t need to be out in the country and have compost,” Martin said. “This has made me into an urban environmentalist.”
In 2001, using data from animal studies, the E.P.A. banned the sale of the pesticide chlorpyrifos (sold under the name Dursban) for residential and indoor use. Many agricultural uses are still permitted, and farming communities continue to be exposed to the insecticide. Residues on food may affect those who live in urban areas as well. In 2004, the Columbia group published results in the journal Environmental Health Perspectives showing that significant exposure during the prenatal period to chlorpyrifos was associated with an average hundred-and-fifty-gram reduction in birth weight—about the same effect as if the mother had smoked all through pregnancy. Those most highly exposed to the insecticide were twice as likely to be born below the tenth percentile in size for gestational age. The researchers found that children born after 2001 had much lower exposure levels—indicating that the ban was largely effective.
For those children who were exposed to the pesticide in the womb, the effects have seemed to persist. The children with the greatest exposure were starting to fall off the developmental curve and displayed signs of attention-deficit problems by the time they were three. By seven, they showed significant deficits in working memory, which is strongly tied to problem-solving, I.Q., and reading comprehension. Another study, published this month in Pediatrics, using a random cross-section of American children, showed that an elevated level of a particular pesticide residue nearly doubled the likelihood that a child would have A.D.H.D.
“The size of this deficit is educationally meaningful in the early preschool years,” Virginia Rauh, the leader of Columbia’s research, said. “Such a decline can push whole groups of children into the developmentally delayed category.”
First used in Germany, in the nineteen-thirties, bisphenol A has a chemical structure similar to that of estrogen, but was considered too weak to be developed into a contraceptive pill. Recent animal studies have shown that, even at very low levels, BPA can cause changes that may lead to cancer in the prostate gland and in breast tissue. It is also linked to disruption in brain chemistry and, in female rodents, accelerated puberty. Japanese scientists found that high levels of BPA were associated with polycystic ovary syndrome, a leading cause of impaired fertility.
Phthalates are also ubiquitous in cosmetics, shampoos, and other personal-care products. They may have effects on older children and adults as well as on neonates. A study at Massachusetts General Hospital found an association of high levels of certain phthalates with lower sperm concentrations and impaired sperm motility; young girls in Puerto Rico who had developed breasts prematurely were more likely to have high levels of phthalates in their blood. Immigrant children in Belgium who exhibited precocious puberty also showed greater exposure to the pesticide DDT, which has estrogenlike effects and has been banned in the U.S., but is still used in Africa to help control malaria.
Long-term studies have provided the most compelling evidence that chemicals once considered safe may cause health problems in communities with consistent exposure over many years. Researchers from SUNY Albany, including Lawrence Schell, a biomedical anthropologist, have worked over the past two decades with Native Americans on the Mohawk reservation that borders the St. Lawrence River, once a major shipping thoroughfare, just east of Massena, New York. General Motors built a foundry nearby that made automobile parts, Alcoa had two manufacturing plants for aluminum, and the area was contaminated with PCBs, which were used in the three plants. Several Mohawk girls experienced signs of early puberty, which coincided with higher levels of PCBs in their blood.
The Albany researchers also observed that increased levels of PCBs correlated with altered levels of thyroid hormone and lower long-term memory functioning. Similar results have been found in an area of Slovakia near heavy industry. “Folks have complained about reproductive problems,” Schell said, of the residents of the Mohawk reservation. “They talked a lot about rheumatoid arthritis, about lupus, about polycystic ovary syndrome. And, you know, you hear these things and you wonder how much of it is just a heightened sensitivity, but, when you see elevated antibodies that are often a sign of autoimmune disease of one kind or another, it could be the beginning of discovering a biological basis for their complaints about these diseases.”
Beginning in 2003, Antonia Calafat, a chemist at the Centers for Disease Control and Prevention, and Russ Hauser, of the Harvard School of Public Health, set out to evaluate the exposure of premature infants to certain environmental contaminants. The researchers hypothesized that infants treated in the most intensive ways—intravenous feedings and delivery of oxygen by respirators—would receive the most exposure, since chemicals like phthalates and BPA can leach from plastic tubing. They studied forty-one infants from two Boston-area intensive-care units for BPA. Calafat told me, “We saw ten times the amounts of BPA in the neonates that we are seeing in the general population.” In several children, the levels of BPA were more than a hundred times as high as in healthy Americans.
Calafat, who came to the United States from Spain on a Fulbright scholarship, developed highly accurate tests to detect BPA, phthalates, and other compounds in body fluids like blood and urine. This advance, she explained, “means that you are not simply doing an exposure assessment based on the concentration of the chemicals in the food or in the air or in the soil. You are actually measuring the concentrations in the body.” With this technology, she can study each individual as if he or she were a single ecosystem. Her studies at the Centers for Disease Control show that 92.6 per cent of Americans aged six and older have detectable levels of BPA in their bodies; the levels in children between six and eleven years of age are twice as high as those in older Americans.
Critics such as Elizabeth Whelan, of the American Council on Science and Health, a consumer-education group in New York (Whelan says that about a third of its two-million-dollar annual budget comes from industry), think that the case against BPA and phthalates has more in common with those against cyclamates and Alar than with the one against lead. “The fears are irrational,” she said. “People fear what they can’t see and don’t understand. Some environmental activists emotionally manipulate parents, making them feel that the ones they love the most, their children, are in danger.” Whelan argues that the public should focus on proven health issues, such as the dangers of cigarettes and obesity and the need for bicycle helmets and other protective equipment. As for chemicals in plastics, Whelan says, “What the country needs is a national psychiatrist.”
To illustrate what Whelan says is a misguided focus on manufactured chemicals, her organization has constructed a dinner menu “filled with natural foods, and you can find a carcinogen or an endocrine-disrupting chemical in every course”—for instance, tofu and soy products are filled with plant-based estrogens that could affect hormonal balance. “Just because you find something in the urine doesn’t mean that it’s a hazard,” Whelan says. “Our understanding of risks and benefits is distorted. BPA helps protect food products from spoiling and causing botulism. Flame retardants save lives, so we don’t burn up on our couch.”
Several studies also contradict the conclusion that these chemicals have deleterious effects. The journal Toxicological Sciences recently featured a study from the E.P.A. scientist Earl Gray, a widely respected researcher, which indicated that BPA had no effect on puberty in rats. A study of military conscripts in Sweden found no connection between phthalates and depressed sperm counts, and a recent survey of newborns in New York failed to turn up an increase in a male genital malformation which might be expected if the effects from BPA seen in rodents were comparable to effects in humans. Richard Sharpe, a professor at the University of Edinburgh, and an internationally recognized pioneer on the effects of chemicals in the environment on endocrine disruption, recently wrote in Toxicological Sciences, “Fundamental, repetitive work on bisphenol A has sucked in tens, probably hundreds of millions of dollars from government bodies and industry, which, at a time when research money is thin on the ground, looks increasingly like an investment with a nil return.”
With epidemiological studies, like those at Columbia, in which scientists observe people as they live, without a control group, the real-life nature of the project can make it difficult to distinguish between correlation and causation. Unknown factors in the environment or unreported habits might escape the notice of the researchers. Moreover, even sophisticated statistical analysis can sometimes yield specious results.
Dr. John Ioannidis, an epidemiologist at the University of Ioannina, in Greece, has noted that four of the six most frequently cited epidemiological studies published in leading medical journals between 1990 and 2003 were later refuted. Demonstrating the malleability of data, Peter Austin, a medical statistician at the Institute for Clinical Evaluative Sciences, in Toronto, has retrospectively analyzed medical records of the more than ten million residents of Ontario. He showed that Sagittarians are thirty-eight per cent more likely to fracture an arm than people of other astrological signs, and Leos are fifteen per cent more likely to suffer a gastrointestinal hemorrhage. (Pisces were more prone to heart failure.)
To help strengthen epidemiological analysis, Sir Austin Bradford Hill, a British medical statistician, set out certain criteria in 1965 that indicate cause and effect. Researchers must be sure that exposure to the suspected cause precedes the development of a disease; that there is a high degree of correlation between the two; that findings are replicated in different studies in various settings; that a biological explanation exists that makes the association plausible; and that increased exposure makes development of the disease more likely.
When epidemiological studies fulfill most of these criteria, they can be convincing, as when studies demonstrated a link between cigarettes and lung cancer. But, in an evolving field, dealing with chemicals that are part of daily life, the lack of long-term clinical data has made firm conclusions elusive. John Vandenbergh, a biologist who found that exposure to certain chemicals like BPA could accelerate the onset of puberty in mice, served on an expert panel that advised the National Toxicology Program, a part of the National Institute of Environmental Health Sciences, on the risks of exposure to BPA. In 2007, the panel reviewed more than three hundred scientific publications and concluded that “there is some concern” about exposure of fetuses and young children to BPA, given the research from Vandenbergh’s laboratory and others.
Vandenbergh is cognizant of the difficulty of extrapolating data from rodents and lower animals to humans. “Why can’t we just figure this out?” he said. “Well, one of the problems is that we would have to take half of the kids in the kindergarten and give them BPA and the other half not. Or expose half of the pregnant women to BPA in the doctor’s office and the other half not. And then we have to wait thirty to fifty years to see what effects this has on their development, and whether they get more prostate cancer or breast cancer. You have to wait at least until puberty to see if there is an effect on sexual maturation. Ethically, you are not going to go and feed people something if you think it harmful, and, second, you have this incredible time span to deal with.”
The inadequacy of the current regulatory system contributes greatly to the atmosphere of uncertainty. The Toxic Substances Control Act, passed in 1976, does not require manufacturers to show that chemicals used in their products are safe before they go on the market; rather, the responsibility is placed on federal agencies, as well as on researchers in universities outside the government. The burden of proof is so onerous that bans on toxic chemicals can take years to achieve, and the government is often constrained from sharing information on specific products with the public, because manufacturers claim that such information is confidential. Several agencies split responsibility for oversight, with little coördination: the Food and Drug Administration supervises cosmetics, food, and medications, the Environmental Protection Agency regulates pesticides, and the Consumer Product Safety Commission oversees children’s toys and other merchandise. The European Union, in contrast, now requires manufacturers to prove that their compounds are safe before they are sold.
According to the E.P.A., some eighty-two thousand chemicals are registered for use in commerce in the United States, with about seven hundred new chemicals introduced each year. In 1998, the E.P.A. found that, among chemicals produced in quantities of more than a million pounds per year, only seven per cent had undergone the full slate of basic toxicity studies. There is no requirement to label most consumer products for their chemical contents, and no consistent regulation throughout the country. Although the F.D.A. initially concluded that BPA was safe, some states, including Massachusetts and Connecticut, either have banned it or are considering a ban. (In January, the F.D.A. announced that it would conduct further testing.)
There has been some movement toward stricter controls: in July, 2008, Congress passed the Consumer Product Safety Improvement Act, which banned six phthalates from children's toys. But so far removal from other products has been voluntary. The President's Cancer Panel report advised people to reduce exposure with strategies that echo some of what the mothers in Frederica Perera's study have learned: choose products made with minimal toxic substances, avoid using plastic containers to store liquids, and choose produce grown without pesticides or chemical fertilizers and meat free of antibiotics and hormones.
Mike Walls, the vice-president of regulatory affairs at the American Chemistry Council, a trade association that represents manufacturers of industrial chemicals, agrees that new laws are needed to regulate such chemicals. “Science has advanced since 1976, when the last legislation was enacted,” he said. But Walls notes that some eight hundred thousand people are employed in the companies that the A.C.C. represents, and that their products are found in ninety-six per cent of all American manufactured goods. “The United States is the clear leader in chemistry,” Walls said. “We have three times as many new applications for novel compounds as any other country in the world. We want to make good societal decisions but avoid regulations that will increase the burden on industry and stifle innovation.”
Academic researchers have found that the enormous financial stakes—the production of BPA is a six-billion-dollar-a-year industry—have prompted extra scrutiny of their results. In 2007, according to a recent article in Nature, a majority of non-industry-supported studies initially deemed sound by the National Toxicology Program on the safety of BPA were dismissed as unsuitable after a representative of the A.C.C. drafted a memo critiquing their methods; experimental protocols often differ from one university lab to another. Researchers are now attempting to create a single standard protocol, and a bill introduced by Representative Louise Slaughter, of New York, would fund a centralized research facility at the National Institute of Environmental Health Sciences.
Other legislation aims to completely overhaul the 1976 law. “It’s clear that the current system doesn’t work at all,” Ben Dunham, a staffer in the office of Senator Frank Lautenberg, of New Jersey, who crafted the bill now before the Senate, told me. Henry Waxman, of California, and Bobby Rush, of Illinois, have released a companion discussion draft in the House. Lautenberg’s bill seeks to allow the E.P.A. to act quickly on chemicals that it considers dangerous; to give new power to the E.P.A. to establish safety criteria in chemical compounds; to create a database identifying chemicals in industrial products; and to set specific deadlines for approving or banning compounds. The bill also seeks to limit the number of animals used for research. (Millions of animals are estimated to be required to perform the testing mandated under the E.U. law.) How much data would be needed to either restrict use of a chemical or mandate an outright ban is still unclear. Lautenberg’s bill resisted the call of environmental groups to ban certain compounds like BPA immediately.
Dr. Gina Solomon, of the Natural Resources Defense Council, said that the Lautenberg bill is “an excellent first step,” but noted several “gaps” in the bill: “There is what people call lack of a hammer, meaning no meaningful penalty for missing a deadline in evaluating a chemical if E.P.A. gets bogged down, and we know from history that it can be easily bogged down.” The language setting a standard for safety is too vague, she added. “You could imagine industry driving a truck through this loophole.”
Linda Birnbaum, the director of the N.I.E.H.S. and its National Toxicology Program, helps assess chemicals for the federal government and, if Slaughter’s bill passes, could become responsible for much of the research surrounding these safety issues. Birnbaum’s branch of the National Institutes of Health is working with the National Human Genome Research Institute and the E.P.A. to test thousands of compounds, singly and in combination, to assess their potential toxicity. Part of the difficulty, she points out, is that “what is normal for me may not be normal for you. We all have our own balance of different hormones in our different systems.” When it comes to development and achievement, incremental differences—such as a drop of five to ten I.Q. points, or a lower birth weight—are significant. “We’re all past the point of looking for missing arms and legs,” Birnbaum said.
“I know of very little science where you will ever get hundred-per-cent certainty,” Birnbaum says. “Science is constantly evolving, constantly learning new things, and at times decisions have to be made in the presence of a lot of information, but maybe not certainty. The problem is we don’t always want to wait ten or twelve or twenty years to identify something that may be a problem.”
Perera, who is keenly aware of the potential pitfalls of epidemiological research, told me that her team employs rigorous statistical methods to avoid falsely suggesting that one chemical or another is responsible for any given result. And she objects to the characterization of her research as fear-mongering. “Our findings in children increasingly show real deleterious effects that can occur short-term and potentially for the rest of the child’s life,” Perera said. In January, the Columbia group published data from the mothers and infants it studied following September 11th. Cord-blood samples saved at the time of birth had been analyzed for the presence of flame retardants. Each year, the children were assessed for mental and motor development. As a point of reference, low-level lead poisoning results in an average loss of four to five I.Q. points. Those children in Columbia’s group with the highest levels of flame retardant in their blood at birth had, by the age of two, I.Q. scores nearly seven points lower than normal.
How do we go forward? Flame retardants surely serve a purpose, just as BPA and phthalates have made for better and stronger plastics. Still, while the evidence of these chemicals’ health consequences may be far from conclusive, safer alternatives need to be sought. More important, policymakers must create a better system for making decisions about when to ban these types of substances, and must invest in the research that will inform those decisions. There’s no guarantee that we’ll always be right, but protecting those at the greatest risk shouldn’t be deferred.
I Feel Your Pain, Unless You're From a Different Race
By Charles Q. Choi, LiveScience Contributor
posted: 27 May 2010
Normally when you see or imagine someone else in pain, your brain experiences a twinge of pain as well. Not so when race and bias come into play, scientists now find.
Intriguingly, people respond with empathy when pain is inflicted on others who don't fit into any preconceived racial category, such as those who appear to have violet-colored skin.
"This is quite important because it suggests that humans tend to empathize by default unless prejudice is at play," said researcher Salvatore Maria Aglioti, a cognitive and social neuroscientist at the Sapienza University of Rome in Italy.
Scientists asked volunteers in Italy of Italian and African descent to watch short films showing either needles penetrating a person's hand or a Q-tip gently touching the same spot. At the same time, they measured brain and nervous system activity.
When the volunteers saw the hands get poked, the measurements of brain and nervous system activity revealed that the corresponding spot on each volunteer's own hand reacted involuntarily, but only when the person in the film was of the same race. Hands belonging to a person of a different race did not provoke the same response.
However, when both white and black volunteers saw violet-colored hands get jabbed, they responded empathetically. This suggests that people normally automatically feel the pain of others, and the lack of empathy that volunteers showed for people of other races was learned and not innate.
"This default reactivity of human beings implies empathy with the pain of strangers," said researcher Alessio Avenanti of the University of Bologna in Italy. "However, racial bias may suppress this empathic reactivity, leading to a dehumanized perception of others' experience."
It could make evolutionary sense that we feel less empathy for people who are different than us. "In case of war or even a friendly competition like a football game, it could be adaptive to feel less empathy for people we consider our opponents," said social neuroscientist Joan Chiao at Northwestern University in Evanston, Ill., who did not take part in this research.
Then again, "it also makes evolutionary sense for us to feel the pain of others, as it might cue that there is danger close by," Chiao noted. "Also, without feeling the pain of others, it could be harder to motivate altruistic behaviors, especially if such behaviors come at a cost."
Essentially, for a stranger in pain to elicit help, he or she would first need to get the observer to feel empathy.
While the ability for culture to regulate empathy could be helpful, "when you feel prejudices that are not adaptive, that are not rooted in reality, that shows that there can be a darker side to empathy regulation," Chiao added.
These new findings suggest that racial prejudice might be countered with methods designed to restore empathy for others, the researchers said.
"One can reduce empathy, but one can also promote it, learning positive associations with another group," Chiao said.
The scientists detailed their findings online May 27 in the journal Current Biology.
PHILADELPHIA — By the time Djigui Keita left the hospital for home, his follow-up appointment had been scheduled. Emergency health insurance was arranged until he could apply for public assistance. He knew about changes in his medication — his doctor had found less expensive brands at local pharmacy chains. And Mr. Keita, 35, who had passed out from dehydration, was cautioned to carry spare water bottles in the taxi he drove for a living.
The hour-long briefing the home-bound patient received here at the Hospital of the University of Pennsylvania was orchestrated by a hospitalist, a member of America’s fastest-growing medical specialty. Over a decade, this breed of physician-administrator has increasingly taken over the care of the hospitalized patient from overburdened family doctors with less and less time to make hospital rounds — or, as in Mr. Keita’s case, when there is no family doctor at all. Because hospitalists are on top of everything that happens to a patient — from entry through treatment and discharge — they are largely credited with reducing the length of hospital stays by anywhere from 17 to 30 percent, and reducing costs by 13 to 20 percent, according to studies in The Journal of the American Medical Association. As their numbers have grown, from 800 in the 1990s to 30,000 today, medical experts have come to see hospitalists as potential leaders in the transition to the Obama administration’s health care reforms, to be phased in by 2014. Under the new legislation, hospitals will be penalized for readmissions, medical errors and inefficient operating systems.
Avoidable readmissions are the costliest mistakes for the government and the taxpayer, and they now occur for one in five patients, gobbling $17.4 billion of Medicare’s current $102.6 billion budget.

Dr. Subha Airan-Javia, Mr. Keita’s hospitalist, splits her time between clinical care and designing computer programs to contain costs and manage staff work flow. The discharge process she walked Mr. Keita and his wife through can work well, or badly, with very different results. Do it safely and the patient gets better. Do it wrong, and he’s back on the hospital doorstep — with a second set of bills.

“Where we were headed was not a mystery to anyone immersed in health care,” said P. J. Brennan, the chief medical officer for the University of Pennsylvania’s hospitals. “We were getting paid to have people in the hospital and the part of that which was waste was under the gun. These young doctors, coming into a highly dysfunctional environment, had an affinity for working on processes and redesigning systems.”

But hospitalists are not a panacea. Some have made mistakes when they sent their short-term charges home, failing to pass along necessary information to the regular doctor and family.
Another concern is that patients will balk at an unfamiliar doctor at the scariest of times. Carol Levine, in charge of family caregiving at the United Hospital Fund of New York, remains skeptical that hospitalists will completely smooth the process. “The patient,” she said, “is still expecting a doctor-doctor, when ‘Wait-a-minute-I-don’t-know-you’ is going to take care of them.”

The hospitalist appeared in the early 1990s, before the primary care situation was the crisis it is now. Today’s private internist may carry a roster of more than 2,000 patients, older and sicker than ever before, and the workload is expected to increase 29 percent by 2025. To keep tabs on hospitalized patients, the doctor generally races in, white coat flying, at 7 a.m., when the patient is asleep and the family is not there. (Physicians also earn 40 percent less for time spent with a hospitalized patient than one in the office, according to a report in the journal Health Affairs.)

Mort Miller, 84, of Chicago, was hospitalized eight years ago for a broken hip. He already had congestive heart failure and diabetes and was on dialysis. He died after four weeks. His son, Joseph, said that he did not once communicate with the family doctor. “He rounded in the morning when I wasn’t there and never returned my phone calls,” Mr. Miller said. “I guess he didn’t have time.” Mr. Miller left his business to help run the hospitalists’ professional group, the Society of Hospital Medicine, a career change inspired by his father’s experience.

The most compelling argument in favor of hospitalists, who are now in 5,000 institutions — from academic giants like the Hospital of the University of Pennsylvania to small community hospitals to innovators like the Mayo and Cleveland Clinics — is that they are there all the time. Another is that they are more comfortable than their predecessors with technology and cost-cutting decision-making.
One day in April, Dr. Airan-Javia was in and out of the rooms of a dozen patients, toggling between clinical work and designing a computer system for the safe handoff of patients between residents, whose hours are now limited by law.

Bad discharges generally result from hurried instructions to patients and families and little thought to where they are headed. One such situation was the centerpiece of a class taught for doctors at Mount Sinai Medical Center in New York. The patient, an elderly woman in the hospital for scoliosis, a spinal condition, was discharged by a hospitalist on a Friday night, with a prescription for a narcotic pain reliever that her pharmacy, as it turned out, did not stock. No one explained how her new medication differed from the old, or gave her a contact number for help. Without medication, by Tuesday, her ankles swollen and her breathing irregular, the woman was back in the hospital.

In 2008, the hospitalists’ organization decided to invent better discharge systems rather than respond defensively to criticism. The effort is not unlike the simple operating room checklist, made famous by the physician and author Atul Gawande, which reduced accidents and deaths.
In 65 participating hospitals around the country, the Society of Hospital Medicine identifies patients at high risk for readmission, provides staff mentoring, and designs user-friendly discharge forms listing follow-up appointments, potential signs of trouble and phone numbers for the hospital team. Peer-reviewed research on the reforms in the system is expected in a year or two.

Even experts who were initially skeptical agree that the hospitalists’ skill set is timely. They are young and thus not entrenched in the current order. They enjoy working in teams, when older doctors tend to be hierarchical. And, like Dr. Airan-Javia, who has a 16-month-old baby, they appreciate the regular hours and a paycheck of, say, $190,000 — higher by $30,000 than community-based peers.

Dr. Airan-Javia says she made an inspired career choice. Forty percent of her time is spent on the floor, treating diseases and helping patients and families through complex life events, like deciding when it is time to suspend medical care and let life end. Sixty percent of the time she is designing systems to improve workflow and advising the hospital’s chief medical officer. At meetings with her fellow hospitalists, phrases seldom spoken by most doctors, like “cost-effective delivery of care” and “preventable adverse events,” flow off everyone’s tongue: the language of health care reform. “The tools have never been better,” she said, “for finally getting all of this right.”
Original story here: http://www.nytimes.com/2010/05/27/us/27hosp.html?src=me&ref=general
Working with nature is healthier. What a concept!
Carmen, my friend, you were quite right all along.
Science News
'Balanced' Ecosystems Seen in Organic Agriculture Better at Controlling Pests, Research Finds
Science Daily (July 1, 2010) — There really is a balance of nature, but as accepted as that thought is, it has rarely been studied. Now Washington State University researchers writing in the journal Nature have found that the more balanced animal and plant communities typical of organic farms work better at fighting pests and producing bigger plants.
The researchers looked at insect pests and their natural enemies in potatoes and found organic crops had more balanced insect populations in which no one species of insect has a chance to dominate. And in test plots, the crops with the more balanced insect populations grew better.
"I think 'balance' is a good term," says David Crowder, a post-doctorate research associate in entomology at Washington State University. "When the species are balanced, at least in our experiments, they're able to fulfill their roles in a more harmonious fashion."
Crowder and colleagues here and at the University of Georgia use the term "evenness" to describe the relatively equal abundance of different species in an ecosystem. Conservation efforts more typically concentrate on species richness -- the number of different species -- or the loss of individual species. Crowder's paper is one of only a few to address the issue. It is the first to look at animal and fungal communities and at multiple points in the food chain.
The researchers say their results strengthen the argument that both richness and evenness need to be considered in restoring an ecosystem. The paper also highlights insect predator and prey relationships at a time when the potato industry and large French fry customers like McDonald's and Wendy's are being pushed to consider the ecological sustainability of different pest-control practices.
Conventional pest-management on farms often leads to biological communities dominated by a few species. Looking at conventional and organic potato farms in central Washington State's Columbia Basin, Crowder found that the evenness of the pests' natural enemies differed drastically between the two types of farms. In the conventional fields, one species might account for four out of five insects. In the organic fields, the most abundant species accounted for as little as 38 percent of a field's natural enemies.
Using field enclosures on Washington State University's Pullman campus, Crowder recreated those conditions using potato plants, Colorado potato beetles, four insect species and three soil pathogens that attack the beetles. When the predators and pathogens had similar numbers, says Crowder, "we would get significantly less potato beetles at the end of the experiment."
"In turn," he adds, "we'd get bigger plants."
Crowder says he is unsure why species evenness was lower in conventional crops. It could be from different types of fertilization or from insecticides killing some natural enemies more than others.
Journal Reference: David W. Crowder, Tobin D. Northfield, Michael R. Strand, William E. Snyder. Organic agriculture promotes evenness and natural pest control. Nature, 2010; 466 (7302): 109 DOI: 10.1038/nature09183
As you can see in this German article, I am not alone in seeing the destruction of the American middle class as a powerful trend in the U.S. It can be reversed (consider what Germany looked like 60 years ago), but not if the current trends continue. This is yet another screaming alarm bell telling us how important the November election is going to be. If the Republicans take power and attempt to reassert their policies, a full-bore depression would not surprise me.
On the Way Down The Erosion of America's Middle Class
While America's super-rich congratulate themselves on donating billions to charity, the rest of the country is worse off than ever. Long-term unemployment is rising and millions of Americans are struggling to survive. The gap between rich and poor is wider than ever and the middle class is disappearing.
Ventura is a small city on the Pacific coast, about an hour's drive north of Los Angeles. Luxury homes with a view of the ocean dot the hillsides, and the beaches are popular with surfers. Ventura is storybook California. "It's a well-off place," says Captain William Finley. "But about 20 percent of the city is what we call at risk of homelessness." Finley heads the local branch of the Salvation Army.
Last summer Ventura launched a pilot program, managed by Finley, that allows people to sleep in their cars within city limits. This is normally illegal, both in Ventura and in the rest of the country, where local officials and residents are worried about seeing run-down vans full of Mexican migrant workers parked on residential streets.
But sometime at the beginning of last year, people in Ventura realized that the cars parked in front of their driveways at night weren't old wrecks, but well-tended station wagons and hatchbacks. And the people sleeping in them weren't fruit pickers or the homeless, but their former neighbors.
Finley also noticed a change. Suddenly twice as many people were taking advantage of his social service organization's free meals program, and some were even driving up in BMWs -- apparently reluctant to give up the expensive cars that reminded them of better times.
Finley calls them "the new poor." "That is a different category of people that I think we're seeing," he says. "They are people who never in their wildest imaginations thought they would be homeless." They're people who had enough money -- a lot of money, in some cases -- until recently.
"The image of what is a poor person in today's day and age doesn't fly. When I was growing up a poor person, and we grew up fairly poor, you drove a 10-year-old car that probably had some dents in it. You know, there was one car for the family and you lived out of the food bank," says Finley. "In the past, you got yourself out of poverty and were on your way up."
American Way Heads in Opposite Direction
It was the American way, a path taken by millions. "Today the image is you're getting newer late model cars that at one point cost somebody 40, 50 grand, and they're at wits end, now they're living out of the food banks. And for many of them it takes a lot to swallow their pride," says Finley.
Today the American way is often headed in the opposite direction: downward.
For a while, America seemed to have emerged relatively unscathed from the worst economic crisis in decades -- with renewed vigor and energy -- just as it had done in the wake of past crises.
The government was announcing new economic growth figures by as early as last fall, much earlier than expected. The banks, moribund until recently, were back to earning billions. Companies nationwide are reporting strong growth, and the stock market has almost returned to its pre-crisis levels. Even the number of billionaires grew by a healthy 17 percent in 2009.
Two weeks ago, Microsoft founder Bill Gates and 40 other billionaires pledged to donate at least half of their fortunes to philanthropy, either while still alive or after death. Is America a country so blessed with affluence that it can afford to give away billions, just like that?
Growing Resentment
Gates' move could also be interpreted as a PR campaign, in a country where the super-rich sense that although they are profiting from the crisis, as was to be expected, the number of people adversely affected has grown enormously. They also sense that there is growing resentment in American society against those at the top.
For people in the lower income brackets, the recovery already seems to be falling apart. Experts fear that the US economy could remain weak for many years to come. And despite the many government assistance programs, the small amount of hope they engender has yet to be felt by the general public. On the contrary, for many people things are still headed dramatically downward.
According to a recent opinion poll, 70 percent of Americans believe that the recession is still in full swing. And this time it isn't just the poor who are especially hard-hit, as they usually are during recessions.
This time the recession is also affecting well-educated people who had been earning a good living until now. These people, who see themselves as solidly middle-class, now feel more threatened than ever before in the country's history. Four out of 10 Americans who consider themselves part of this class believe that they will be unable to maintain their social status.
Unemployment Persists
In a recent cover story titled "So long, middle class," the New York Post presented its readers with "25 statistics that prove that the middle class is being systematically wiped out of existence in America." Last week, the leading online columnist Arianna Huffington issued the almost apocalyptic warning that "America is in danger of becoming a Third World country."
In fact, the United States, in the wake of a real estate, financial economic and now debt crisis, which it still hasn't overcome, is threatened by a social Ice Age more severe than anything the country has seen since the Great Depression.
The United States is experiencing the problem of long-term unemployment for the first time since World War II. The number of the long-term unemployed is already three times as high as it was during any crisis in the past, and it is still rising.
More than a year after the official end of the recession, the overall unemployment rate remains consistently above 9.5 percent. But this is just the official figure. When adjusted to include the people who have already given up looking for work or are barely surviving on the few hundred dollars they earn with a part-time job and are using up their savings, the real unemployment figure jumps to more than 17 percent.
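The jump from the headline rate to the broader figure follows from a simple change of definition, not from new data. A minimal sketch, using hypothetical round numbers (labeled as such; these are not actual Bureau of Labor Statistics figures), shows how counting discouraged workers and involuntary part-timers can push a 9.5 percent headline rate to roughly 17 percent:

```python
# Hypothetical, illustrative figures in millions -- NOT actual BLS data --
# chosen only to mirror the rates quoted in the article.
labor_force = 154.0          # officially counted: employed + unemployed
unemployed = 14.6            # jobless and actively looking for work
marginally_attached = 2.6    # want work but have given up searching
part_time_economic = 8.9     # part-timers who want full-time work

# Headline rate (roughly the official "U-3" definition):
headline = unemployed / labor_force

# Broader rate (roughly the "U-6" definition): discouraged and involuntary
# part-time workers join the numerator; the marginally attached, who are
# outside the official labor force, join the denominator as well.
broad = (unemployed + marginally_attached + part_time_economic) / (
    labor_force + marginally_attached
)

print(f"headline rate: {headline:.1%}")
print(f"broad rate:    {broad:.1%}")
```

With these illustrative inputs the headline rate comes out near 9.5 percent and the broader rate near 17 percent, matching the spread the article describes; the point is that the gap is definitional, so it widens whenever discouraged workers grow faster than the officially unemployed.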
In its current annual report, the US Department of Agriculture notes that "food insecurity" is on the rise, and that 50 million Americans couldn't afford to buy enough food to stay healthy at some point last year. One in eight American adults and one in four children now survive on government food stamps. These are unbelievable numbers for the world's richest nation.
Even more unsettling is the fact that America, which has always been characterized by its unshakable belief in the American Dream, and in the conviction that anyone, even those at the very bottom, can rise to the top, is beginning to lose its famous optimism. According to recent figures, a significant minority of US citizens now believe that their children will be worse off than they are.
Many Americans are beginning to realize that for them, the American Dream has been more of a nightmare of late. They face a bitter reality of fewer and fewer jobs, decades of stagnating wages and dramatic increases in inequality. Only in recent months, as the economy has grown but jobs have not returned, as profits have returned but poverty figures have risen by the week, the country seems to have recognized that it is struggling with a deep-seated, structural crisis that has been building for years. As the Washington Post writes, the financial crisis was merely the final turning -- for the worse.
Where Did All the Money Go?
The boom in stocks and real estate, the country's wild borrowing spree and its excessive consumer spending have long masked the fact that the overwhelming majority of Americans derived almost no benefit from 30 years of economic growth. In 1978, the average per capita income for men in the United States was $45,879 (about €35,570). The same figure for 2007, adjusted for inflation, was $45,113 (€35,051).
Where did all the money go? All the enormous market gains and corporate earnings, the profits from the boom in the financial markets and the 110-percent increase in the gross national product in the last 30 years? It went to those who had always had more than enough already.
While 90 percent of Americans have seen only modest gains in their incomes since 1973, incomes have almost tripled for people at the upper end of the scale. In 1979, one third of the profits the country produced went to the richest 1 percent of American society. Today it's almost 60 percent. In 1950, the average corporate CEO earned 30 times as much as an ordinary worker. Today it's 300 times as much. And today 1 percent of Americans own 37 percent of the total national wealth.
Income inequality in the United States is greater today than it has been since the 1920s, except that hardly anyone has minded until now.
Little Chance of the American Dream
In America, the free market is king, and people with low incomes are seen as having only themselves to blame. Those who make a lot of money are applauded -- and emulated. The only problem is that Americans have long overlooked the fact that the American Dream was becoming a reality for fewer and fewer people.
Statistically, less affluent Americans stand a 4-percent chance of becoming part of the upper middle class -- a number that is lower than in almost every other industrialized nation.
So far, politicians have failed to come up with solutions for the growing social crisis. Washington is still waiting for jobs that aren't coming. President Barack Obama and his administration seem to be pinning their hopes on the notion that Americans will eventually pull themselves up by their bootstraps -- preferably by doing the same thing they've always done: spending money. Domestic consumer spending is responsible for two-thirds of American economic output.
But even though Federal Reserve Chairman Ben Bernanke continues to pump money into the market, and even though the government deficit has now reached the dizzying level of $1.4 trillion, such efforts have remained unsuccessful.
"The lights are going out all over America," Nobel economics laureate Paul Krugman wrote last week, and described communities that couldn't even afford to maintain their streets anymore.
The problem is that many Americans can no longer spend money on consumer products, because they have no savings. In some cases, their houses have lost half of their value. They no longer qualify for low-interest loans. They are making less money than before or they're unemployed. This in turn reduces or eliminates their ability to pay taxes.
Turning Out the Lights
As a result, many state and local governments are faced with enormous budget deficits. In Hawaii, for example, schools are closed on some Fridays to save the state money. A county in Georgia has eliminated all public bus services. Colorado Springs, a city of 380,000 people, has shut off a third of its streetlights to save electricity.
There are many discrepancies in America in the wake of the financial crisis. On the one hand, the Fed is constantly printing fresh money, and the government spent $182 billion to bail out a single company, the insurance giant AIG. On the other hand, the lights are in fact going out in some areas, because Washington, citing the need to reduce spending, is unwilling to provide local governments with financial assistance. "America is now on the unlit, unpaved road to nowhere," economist Krugman warns.
Chanelle Sabedra is already on that road. She and her husband have been sleeping in their car for almost three weeks now. "We never saw this coming, never ever," says Sabedra. She starts to cry. "I'm an adult, I can take care of myself one way or another, and same with my husband, but (my kids are) too little to go through these things." She has three children; they are nine, five and three years old.
"We had a house further south, in San Bernardino," says Sabedra. Her husband lost his job building prefab houses in July 2009. The utility company turned off the gas. "We were boiling water on the barbeque to bathe our kids," she says. No longer able to pay the rent, the Sabedras were evicted from their house in August.
Friends and relatives had few resources to help them. Now they live in a room at the Salvation Army homeless shelter in downtown Ventura, which is run by Captain Finley.
The sudden plunge into homelessness is a reality that's difficult to understand, given the images of America we are accustomed to seeing in television series and films. They always depict homes with well-kept yards and two-car garages with basketball hoops attached to them. This America still exists, but it's shrinking. And often those who are managing to keep the illusion alive can hardly afford to do so.
Americans have been struggling with a rising cost of living for the past 20 years. At the beginning of the decade, families were already paying twice as much for health insurance and their mortgages as the previous generation did.
"To cope, millions of families put a second parent into the workforce," says Harvard Professor Elizabeth Warren, whom President Obama appointed to chair the congressional panel overseeing the government's bank bailout program. According to Warren, the average family has spent all of its income and used up its savings "just to stay afloat a little while longer."
Spiraling Debt
Because they lacked savings, Americans began borrowing money to cover all of their other expenses, including education, healthcare and consumption. American consumer debt now totals about $13.5 trillion.
Many people are in danger of suffocating under the burden of their debt. Some 61 percent of Americans have no financial reserves and are living from paycheck to paycheck. As little as a single hospital bill can spell potential financial ruin.
Chanelle Sabedra's husband has found another job, this time as a warehouse worker for a company that makes aircraft turbines. But he doesn't earn enough to get the family out of the homeless shelter. "I haven't got a new job yet," says Sabedra. Her husband's job doesn't pay enough, and the couple has now joined the growing ranks of the working poor, for whom even two low-wage jobs are insufficient to feed their families. "We need the second income," says Sabedra. "Just the baby alone is $600 a month for half-day care."
In pre-recession America, she and her husband would have had two jobs each to make ends meet. They would have worked at the cash register at Wal-Mart during the day, flipped burgers at McDonald's in the early evening and perhaps spent half the night working as a security guard or cleaning buildings. These are all low-paying jobs, hardly careers, but the combined income is usually enough to keep a family afloat. Life in that America wasn't luxurious for Chanelle Sabedra, but it was doable for those willing to work hard enough and sacrifice enough of their lives.
What kind of a job is she looking for now? "Anything right now. Mostly I'm looking for retail, or just anything to get me started, but there's just nothing out there," says Sabedra.
This is a pretty good overview of what those of us who are studying the nature of consciousness -- what your faithful editor does when not doing SR -- are exploring. This is all part of an important emerging trend, which is pushing the old reductionist materialist paradigm into crisis.
Does the Past Exist Yet? Evidence Suggests Your Past Isn't Set in Stone
by Robert Lanza, M.D.
from The Huffington Post, August 22, 2010
Recent discoveries require us to rethink our understanding of history. "The histories of the universe," said renowned physicist Stephen Hawking, "depend on what is being measured, contrary to the usual idea that the universe has an objective, observer-independent history."
Is it possible we live and die in a world of illusions? Physics tells us that objects exist in a suspended state until observed, when they collapse into just one outcome. Paradoxically, whether events happened in the past may not be determined until sometime in your future -- and may even depend on actions that you haven't taken yet.
In 2002, scientists carried out an amazing experiment which showed that particles of light ("photons") knew -- in advance -- what their distant twins would do in the future. The experiment tested the connection between pairs of photons, each of which had to decide whether to be a wave or a particle. Researchers stretched the distance one of the photons had to travel to reach its detector, so that the other photon would hit its own detector first. The photons taking the shorter path finished their journeys -- they either collapsed into particles or didn't -- before their twins encountered a scrambling device. Somehow, the particles acted on this information before it happened, and across distances instantaneously, as if there were no space or time between them: they decided not to become particles before their twins ever encountered the scrambler. It doesn't matter how the experiment is set up; the observer's mind and knowledge are the only things that determine how the photons behave. Experiments consistently confirm these observer-dependent effects.
More recently (Science 315, 966, 2007), scientists in France shot photons into an apparatus and showed that what they did could retroactively change something that had already happened. As the photons passed a fork in the apparatus, they had to decide whether to behave like particles or waves when they hit a beam splitter. Later on, well after the photons had passed the fork, the experimenter could randomly switch a second beam splitter on and off. It turns out that what the observer decided at that point determined what the particle actually did at the fork in the past. At that moment, the experimenter chose his history.
Of course, we live in the same world. Particles have a range of possible states, and it's not until they are observed that they take on definite properties. So until the present is determined, how can there be a past? According to visionary physicist John Wheeler (who coined the term "black hole"), "The quantum principle shows that there is a sense in which what an observer will do in the future defines what happens in the past." Part of the past is locked in when you observe things and the "probability waves collapse." But there's still uncertainty, for instance, as to what's underneath your feet. If you dig a hole, there's a probability you'll find a boulder. If you do hit a boulder, the glacial movements of the past that account for the rock's being in exactly that spot will change, as described in the Science experiment.
But what about dinosaur fossils? Fossils are really no different from anything else in nature. For instance, the carbon atoms in your body are "fossils" created in the hearts of exploding supernova stars. Bottom line: reality begins and ends with the observer. "We are participators," Wheeler said, "in bringing about something of the universe in the distant past." Before his death, he stated that in observing light from a quasar we set up a quantum observation on an enormously large scale. It means, he said, that the measurements made on the light now determine the path it took billions of years ago.
Like the light from Wheeler's quasar, historical events, such as who killed JFK, might also depend on events that haven't occurred yet. There's enough uncertainty that it could be one person in one set of circumstances, or another person in another. Although JFK was assassinated, you only possess fragments of information about the event. But as you investigate, you collapse more and more reality. According to biocentrism, space and time are relative to the individual observer -- we each carry them around like turtles with shells.
History is a biological phenomenon -- it's the logic of what you, the animal observer, experience. You have multiple possible futures, each with a different history, as in the Science experiment. Consider the JFK example: say two gunmen shot at JFK, and there was an equal chance that one or the other killed him. This would be a situation much like the famous Schrödinger's cat experiment, in which the cat is both alive and dead -- both possibilities exist until you open the box and investigate.
"We must re-think all that we have ever learned about the past, human evolution and the nature of reality, if we are ever to find our true place in the cosmos," says Constance Hilliard, a historian of science at UNT. Choices you haven't made yet might determine which of your childhood friends are still alive, or whether your dog got hit by a car yesterday. In fact, you might even collapse realities that determine whether Noah's Ark sank. "The universe," said John Haldane, "is not only queerer than we suppose, but queerer than we can suppose."
This is the second major piece in a popular magazine in less than a week addressing quantum mechanics and consciousness. When a subject like this begins appearing frequently in the popular press, it is because it is reaching consensus in the scientific community.
Back From the Future
08.26.2010 A series of quantum experiments shows that measurements performed in the future can influence the present. Does that mean the universe has a destiny—and the laws of physics pull us inexorably toward our prewritten fate? by Zeeya Merali; photography by Adam Magyar
Jeff Tollaksen may well believe he was destined to be here at this point in time. We’re on a boat in the Atlantic, and it’s not a pleasant trip. The torrential rain obscures the otherwise majestic backdrop of the volcanic Azorean islands, and the choppy waters are causing the boat to lurch. The rough sea has little effect on Tollaksen, barely bringing color to his Nordic complexion. This is second nature to him; he grew up around boats. Everyone would agree that events in his past have prepared him for today’s excursion. But Tollaksen and his colleagues are investigating a far stranger possibility: It may be not only his past that has led him here today, but his future as well.
Tollaksen’s group is looking into the notion that time might flow backward, allowing the future to influence the past. By extension, the universe might have a destiny that reaches back and conspires with the past to bring the present into view. On a cosmic scale, this idea could help explain how life arose in the universe against tremendous odds. On a personal scale, it may make us question whether fate is pulling us forward and whether we have free will.
The boat trip has been organized as part of a conference sponsored by the Foundational Questions Institute to highlight some of the most controversial areas in physics. Tollaksen’s idea certainly meets that criterion. And yet, as crazy as it sounds, this notion of reverse causality is gaining ground. A succession of quantum experiments confirms its predictions—showing, bafflingly, that measurements performed in the future can influence results that happened before those measurements were ever made.
As the waves pound, it’s tough to decide what is more unsettling: the boat’s incessant rocking or the mounting evidence that the arrow of time—the flow that defines the essential narrative of our lives—may be not just an illusion but a lie.
Tollaksen, currently at Chapman University in Orange County, California, developed an early taste for quantum mechanics, the theory that governs the motion of particles in the subatomic world. He skipped his final year of high school, instead attending physics lectures by the charismatic Nobel laureate Richard Feynman at Caltech in Pasadena and learning of the paradoxes that still fascinate and frustrate physicists today.
Primary among those oddities was the famous uncertainty principle, which states that you can never know all the properties of a particle at the same time. For instance, it is impossible to measure both where the particle is and how fast it is moving; the more accurately you determine one aspect, the less precisely you can measure the other. At the quantum scale, particles also have curiously split personalities that allow them to exist in more than one place at the same time—until you take a look and check up on them. This fragile state, in which the particle can possess multiple contradictory attributes, is called a superposition. According to the standard view of quantum mechanics, measuring a particle’s properties is a violent process that instantly snaps the particle out of superposition and collapses it into a single identity. Why and how this happens is one of the central mysteries of quantum mechanics.
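Both oddities admit a compact statement; the notation below is standard textbook quantum mechanics, supplied for reference rather than drawn from the article:

```latex
% Heisenberg uncertainty relation: position (x) and momentum (p)
% cannot both be pinned down; sharpening one blurs the other.
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}

% Superposition: before measurement, a particle's state is a
% weighted blend of contradictory alternatives (|a|^2 + |b|^2 = 1);
% measurement collapses it onto a single term.
|\psi\rangle \;=\; a\,|\text{here}\rangle \;+\; b\,|\text{there}\rangle
```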
“The textbook view of measurements in quantum mechanics is inspired by biology,” Tollaksen tells me on the boat. “It’s similar to the idea that you can’t observe a system of animals without affecting them.” The rain is clearing, and the captain receives radio notification that some dolphins have been spotted a few minutes away; soon we’re heading toward them. Our attempts to spy on these animals serve as the zoological equivalent of what Tollaksen terms “strong measurements”—the standard type in quantum mechanics—because they are anything but unobtrusive. The boat is loud; it churns up water as it speeds to the location. When the dolphins finally show themselves, they swim close to the boat, arcing through the air and playing to their audience. According to conventional quantum mechanics, it is similarly impossible to observe a quantum system without interacting with the particles and destroying the fragile quantum behavior that existed before you looked.
Most physicists accept these peculiar restrictions as part and parcel of the theory. Tollaksen was not so easily appeased. “I was smitten, and I knew there was no chance I was ever going to do anything else with my life,” he recalls. On Feynman’s advice, the teenager moved to Boston to study physics at MIT. But he missed the ocean. “For the first time in my life, I lost the background sound of surf,” he says. “That was actually traumatic.”
Mindful that a job in esoteric physics might not be the best way to put food on his family’s table, Tollaksen worked on a computing start-up company while pursuing his Ph.D. But if the young man wasn’t sure of his calling, fate quickly gave him a nudge when a physicist named Yakir Aharonov visited the neighboring Boston University. Aharonov, now at Chapman with Tollaksen, was renowned for having codiscovered a bizarre quantum mechanical effect in which particles can be affected by electric and magnetic fields, even in regions where those fields should have no reach. But Tollaksen was most taken by another area of Aharonov’s research: a time-twisting interpretation of quantum mechanics.
“Aharonov was one of the first to take seriously the idea that if you want to understand what is happening at any point in time, it’s not just the past that is relevant. It’s also the future,” Tollaksen says. In particular, Aharonov reanalyzed the indeterminism that forms the backbone of quantum mechanics. Before quantum mechanics arrived on the scene, physicists believed that the laws of physics could be used to determine the future of the universe and every object within it. By this thinking, if we knew the properties of every particle on the planet we could, in principle, calculate any person’s fate; we could even calculate all the thoughts in his or her head.
That belief crumbled when experiments began to reveal the indeterministic effects of quantum mechanics—for instance, in the radioactive decay of atoms. The problem goes like this, Tollaksen says: Take two radioactive atoms, so identical that “even God couldn’t see the difference between them.” Then wait. The first atom might decay a minute later, but the second might go another hour before decaying. This is not just a thought experiment; it can really be seen in the laboratory. There is nothing to explain the different behaviors of the two atoms, no way to predict when they will decay by looking at their history, and—seemingly—no definitive cause that produces these effects. This indeterminism, along with the ambiguity inherent in the uncertainty principle, famously rankled Einstein, who fumed that God doesn’t play dice with the universe.
It bothered Aharonov as well. “I asked, what does God gain by playing dice?” he says. Aharonov accepted that a particle’s past does not contain enough information to fully predict its fate, but he wondered, if the information is not in its past, where could it be? After all, something must regulate the particle’s behavior. His answer—which seems inspired and insane in equal measure—was that we cannot perceive the information that controls the particle’s present behavior because it does not yet exist.
“Nature is trying to tell us that there is a difference between two seemingly identical particles with different fates, but that difference can only be found in the future,” he says. If we’re willing to unshackle our minds from our preconceived view that time moves in only one direction, he argues, then it is entirely possible to set up a deterministic theory of quantum mechanics.
In 1964 Aharonov and his colleagues Peter Bergmann and Joel Lebowitz, all then at Yeshiva University in New York, proposed a new framework called time-symmetric quantum mechanics. It could produce all the same treats as the standard form of quantum mechanics that everyone knew and loved, with the added benefit of explaining how information from the future could fill in the indeterministic gaps in the present. But while many of Aharonov’s colleagues conceded that the idea was built on elegant mathematics, its philosophical implications were hard to swallow. “Each time I came up with a new idea about time, people thought that something must be wrong,” he says.
Perhaps because of the cognitive dissonance the idea engendered, time-symmetric quantum mechanics did not catch on. “For a long time, it was nothing more than a curiosity for a few philosophers to discuss,” says Sandu Popescu at the University of Bristol, in England, who works on the time-symmetric approach with Aharonov. Clearly Aharonov needed concrete experiments to demonstrate that actions carried out in the future could have repercussions in the here and now.
Through the 1980s and 1990s, Tollaksen teamed up with Aharonov to design such upside-down experiments, in which outcome was determined by events occurring after the experiment was done. Generally the protocol included three steps: a “preselection” measurement carried out on a group of particles; an intermediate measurement; and a final, “postselection” step in which researchers picked out a subset of those particles on which to perform a third, related measurement. To find evidence of backward causality—information flowing from the future to the past—the experiment would have to demonstrate that the effects measured at the intermediate step were linked to actions carried out on the subset of particles at a later time.
Tollaksen and Aharonov proposed analyzing changes in a quantum property called spin, roughly analogous to the spin of a ball but with some important differences. In the quantum world, a particle can spin only two ways, up or down, with each direction assigned a fixed value (for instance, 1 or –1). First the physicists would measure spin in a set of particles at 2 p.m. and again at 2:30 p.m. Then on another day they would repeat the two tests, but also measure a subset of the particles a third time, at 3 p.m. If the predictions of backward causality were correct, then for this last subset, the spin measurement conducted at 2:30 p.m. (the intermediate time) would be dramatically amplified. In other words, the spin measurements carried out at 2 p.m. and those carried out at 3 p.m. together would appear to cause an unexpected increase in the intensity of spins measured in between, at 2:30 p.m. The predictions seemed absurd, as ridiculous as claiming that you could measure the position of a dolphin off the Atlantic coast at 2 p.m. and again at 3 p.m., but that if you checked on its position at 2:30 p.m., you would find it in the middle of the Mediterranean.
And the amplification would not be restricted to spin; other quantum properties would be dramatically increased to bizarrely high levels too. The idea was that ripples of the measurements carried out in the future could beat back to the present and combine with effects from the past, like waves combining and peaking below a boat, setting it rocking on the rough sea. The smaller the subsample chosen for the last measurement, the more dramatic the effects at intermediate times should be, according to Aharonov’s math. It would be hard to account for such huge amplifications in conventional physics.
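The predicted amplification has a compact expression in the weak-value formalism Aharonov and colleagues introduced (the formula is textbook material supplied here for context; the article itself never writes it down). With a preselected state |ψ⟩ and a postselected state |φ⟩, the intermediate pointer reading tracks the "weak value" of the observable A:

```latex
% Aharonov-Albert-Vaidman weak value of an observable A,
% given preselection |\psi\rangle and postselection |\phi\rangle:
A_w \;=\; \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}

% As the postselected subset becomes nearly orthogonal to the
% preselected state, \langle \phi | \psi \rangle \to 0 and A_w can
% swing far outside the eigenvalue range of \hat{A} itself.
```

This matches the pattern the experiments describe: the rarer the postselected subsample, the larger the intermediate amplification.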
For years this prediction was more philosophical than physical because it did not seem possible to perform the suggested experiments. All the team’s proposed tests hinged on being able to make measurements of the quantum system at some intermediate time; but the physics books said that doing so would destroy the quantum properties of the system before the final, postselection step could be carried out. Any attempt to measure the system would collapse its delicate quantum state, just as chasing dolphins in a boat would affect their behavior. Use this kind of invasive, or strong, measurement to check on your system at an intermediate time, and you might as well take a hammer to your apparatus.
By the late 1980s, Aharonov had seen a way out: He could study the system using so-called weak measurements. (Weak measurements involve the same equipment and techniques as traditional ones, but the “knob” controlling the power of the observer’s apparatus is turned way down so as not to disturb the quantum properties in play.) In quantum physics, the weaker the measurement, the less precise it can be. Perform just one weak measurement on one particle and your results are next to useless. You may think that you have seen the required amplification, but you could just as easily dismiss it as noise or an error in your apparatus.
The way to get credible results, Tollaksen realized, was with persistence, not intensity. By 2002 physicists attuned to the potential of weak measurements were repeating their experiments thousands of times, hoping to build up a bank of data persuasively showing evidence of backward causality through the amplification effect.
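The statistical logic of that repetition can be illustrated with a toy simulation (a deliberately simplified sketch; the numbers and the Gaussian noise model are illustrative assumptions, not the parameters of any real experiment): a single weak reading is the true pointer value buried in noise, and only the average over many repeats carries meaning.

```python
import random
import statistics

def weak_reading(true_value, noise_sigma, rng):
    """One weak measurement: the pointer barely couples to the
    system, so the reading is dominated by random noise."""
    return true_value + rng.gauss(0.0, noise_sigma)

def averaged_reading(true_value, noise_sigma, n_repeats, seed=42):
    """Average many weak readings; the standard error shrinks
    like noise_sigma / sqrt(n_repeats)."""
    rng = random.Random(seed)
    return statistics.fmean(
        weak_reading(true_value, noise_sigma, rng) for _ in range(n_repeats)
    )

# A single reading tells you almost nothing about the signal...
one_shot = averaged_reading(true_value=1.0, noise_sigma=50.0, n_repeats=1)
# ...but tens of thousands of repeats pin it down.
many = averaged_reading(true_value=1.0, noise_sigma=50.0, n_repeats=100_000)
```

The same logic recurs in Tollaksen's later argument: any individual amplified reading can always be written off as noise, and only the tally over enormous numbers of runs yields a meaningful pattern.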
Just last year, physicist John Howell and his team from the University of Rochester reported success. In the Rochester setup, laser light was measured and then shunted through a beam splitter. Part of the beam passed right through the mechanism, and part bounced off a mirror that moved ever so slightly, due to a motor to which it was attached. The team used weak measurements to detect the deflection of the reflected laser light and thus to determine how much the motorized mirror had moved.
That is the straightforward part. Searching for backward causality required looking at the impact of the final measurement and adding the time twist. In the Rochester experiment, after the laser beams left the mirrors, they passed through one of two gates, where they could be measured again—or not. If the experimenters chose not to carry out that final measurement, then the deflected angles measured in the intermediate phase were boringly tiny. But if they performed the final, postselection step, the results were dramatically different. When the physicists chose to record the laser light emerging from one of the gates, then the light traversing that route, alone, ended up with deflection angles amplified by a factor of more than 100 in the intermediate measurement step. Somehow the later decision appeared to affect the outcome of the weak, intermediate measurements, even though they were made at an earlier time.
This amazing result confirmed a similar finding reported a year earlier by physicists Onur Hosten and Paul Kwiat at the University of Illinois at Urbana-Champaign. They had achieved an even larger laser amplification, by a factor of 10,000, when using weak measurements to detect a shift in a beam of polarized light moving between air and glass.
For Aharonov, who has been pushing the idea of backward causality for four decades, the experimental vindication might seem like a time to pop champagne corks, but that is not his style. “I wasn’t surprised; it was what I expected,” he says.
Paul Davies, a cosmologist at Arizona State University in Tempe, admires the fact that Aharonov’s team has always striven to verify its claims experimentally. “This isn’t airy-fairy philosophy—these are real experiments,” he says. Davies has now joined forces with the group to investigate the framework’s implications for the origin of the cosmos (See “Does the Universe Have a Destiny?” below).
Vlatko Vedral, a quantum physicist at the University of Oxford, agrees that the experiments confirm the existence and power of weak measurements. But while the mathematics of the team’s framework offers a valid explanation for the experimental results, Vedral believes these results alone will not be enough to persuade most physicists to buy into the full time-twisting logic behind it.
For Tollaksen, though, the results are awe-inspiring and a bit scary. “It is upsetting philosophically,” he concedes. “All these experiments change the way that I relate to time, the way I experience myself.” The results have led him to wrestle with the idea that the future is set. If the universe has a destiny that is already written, do we really have a free choice in our actions? Or are all our choices predetermined to fit the universe’s script, giving us only the illusion of free will?
Tollaksen ponders the philosophical dilemma. Was he always destined to become a physicist? If so, are his scientific achievements less impressive because he never had any choice other than to succeed in this career? If I time-traveled back from the 21st century to the shores of Lake Michigan where Tollaksen’s 13-year-old self was reading the works of Feynman and told him that in the future I met him in the Azores and his fate was set, could his teenage self—just to spite me—choose to run off and join the circus or become a sailor instead?
The free will issue is something that Tollaksen has been tackling mathematically with Popescu. The framework does not actually suggest that people could time-travel to the past, but it does allow a concrete test of whether it is possible to rewrite history. The Rochester experiments seem to demonstrate that actions carried out in the future—in the final, postselection step—ripple back in time to influence and amplify the results measured in the earlier, intermediate step. Does this mean that when the intermediate step is carried out, the future is set and the experimenter has no choice but to perform the later, postselection measurement? It seems not. Even in instances where the final step is abandoned, Tollaksen has found, the intermediate weak measurement remains amplified, though now with no future cause to explain its magnitude at all.
I put it to Tollaksen straight: This finding seems to make a mockery of everything we have discussed so far.
Tollaksen is smiling; this is clearly an argument he has been through many times. The result of that single experiment may be the same, he explains, but remember, the power of weak measurements lies in their repetition. No single measurement can ever be taken alone to convey any meaning about the state of reality. Their inherent error is too large. “Your pointer will still read an amplified result, but now you cannot interpret it as having been caused by anything other than noise or a blip in the apparatus,” he says.
In other words, you can see the effects of the future on the past only after carrying out millions of repeat experiments and tallying up the results to produce a meaningful pattern. Focus on any single one of them and try to cheat it, and you are left with a very strange-looking result—an amplification with no cause—but its meaning vanishes. You simply have to put it down to a random error in your apparatus. You win back your free will in the sense that if you actually attempt to defy the future, you will find that it can never force you to carry out postselection experiments against your wishes. The math, Tollaksen says, backs him on this interpretation: The error range in single intermediate weak measurements that are not followed up by the required postselection will always be just enough to dismiss the bizarre result as a mistake.
Tollaksen sums up this confounding argument with one of his favorite quotes, from the ancient Jewish sage Rabbi Akiva: “All is foreseen; but freedom of choice is given.” Or as Tollaksen puts it, “I can have my cake and eat it too.” He laughs.
Here, finally, is the answer to Aharonov’s opening question: What does God gain by playing dice with the universe? Why must the quantum world always retain a degree of fuzziness when we try to look at it through the time slice of the present? That loophole is needed so that the future can exert an overall pull on the present, without ever being caught in the act of doing it in any particular instance.
“The future can only affect the present if there is room to write its influence off as a mistake,” Aharonov says.
Whether this realization is a masterstroke of genius that explains the mechanism for backward causality or an admission that the future’s influence on the past can never fully be proven is open to debate. Andrew Jordan, who designed the Rochester laser amplification experiment with Howell, notes that there is even fundamental controversy over whether his results support Aharonov’s version of backward causality. No one disputes his team’s straightforward experimental results, but “there is much philosophical thought about what weak values really mean, what they physically correspond to—if they even really physically correspond to anything at all,” Jordan says. “My view is that we don’t have to interpret them as a consequence of the future’s influencing the present, but rather they show us that there is a lot about quantum mechanics that we still have to understand.” Nonetheless, he is open to being convinced otherwise: “A year from now, I may well change my mind.”
Popescu argues that the Rochester findings are hugely important because they open the door to a completely new range of laboratory explorations based on weak measurements. In starting from the conventional interpretation of quantum mechanics, physicists had not realized such measurements were possible. “With his work on weak measurements, Aharonov began to pose questions about what is possible in quantum mechanics that nobody had ever even thought could be articulated,” Popescu says.
Aharonov remains circumspect. He has spent most of his adult life waiting for recognition of the merit of his theory. If it is destined that mainstream physics should finally take serious notice of his time-twisting ideas, then so it will be.
And Tollaksen? He too is at one with his destiny. A few months ago he moved to Laguna Beach, California. “I’m in a house where I can hear the surf again—what a relief,” he says. He feels that he is finally back to where he was always meant to be.
DOES THE UNIVERSE HAVE A DESTINY?
Is feedback from the future guiding the development of life, the universe, and, well, everything? Paul Davies at Arizona State University in Tempe and his colleagues are investigating whether the universe has a destiny—and if so, whether there is a way to detect its eerie influence.
Cosmologists have long been puzzled about why the conditions of our universe—for example, its rate of expansion—provide the ideal breeding ground for galaxies, stars, and planets. If you rolled the dice to create a universe, odds are that you would not get one as handily conducive to life as ours is. Even if you could take life for granted, it’s not clear that 14 billion years is enough time for it to evolve by chance. But if the final state of the universe is set and is reaching back in time to influence the early universe, it could amplify the chances of life’s emergence.
With Alonso Botero at the University of the Andes in Colombia, Davies has used mathematical modeling to show that bookending the universe with particular initial and final states affects the types of particles created in between. “We’ve done this for a simplified, one-dimensional universe, and now we plan to move up to three dimensions,” Davies says. He and Botero are also searching for signatures that the final state of the universe could retroactively leave on the relic radiation of the Big Bang, which could be picked up by the Planck satellite launched last year.
Ideally, Davies and Botero hope to find a single cosmic destiny that can explain three major cosmological enigmas. The first mystery is why the expansion of the universe is currently speeding up; the second is why some cosmic rays appear to have energies higher than the bounds of normal physics allow; and the third is how galaxies acquired their magnetic fields. “The goal is to find out whether Mother Nature has been doing her own postselections, causing these unexpected effects to appear,” Davies says.
Bill Unruh of the University of British Columbia in Vancouver, a leading physicist, is intrigued by Davies’s idea. “This could have real implications for whatever the universe was like in its early history,” he says.
Arctic sea ice shrinks to third lowest area on record and Canada to become global power thanks to climate change
*********************************
Arctic sea ice shrinks to third lowest area on record
By Karin Zeitvogel (AFP) – September 15, 2010
WASHINGTON — Arctic sea ice melted over the summer to cover the third smallest area on record, US researchers said Wednesday, warning that global warming could leave the region ice-free in the month of September by 2030.
Last week, at the end of the spring and summer "melt season" in the Arctic, sea ice covered 4.76 million square kilometers (1.84 million square miles), the University of Colorado's National Snow and Ice Data Center said in an annual report.
"This is only the third time in the satellite record that ice extent has fallen below five million square kilometers (1.93 million square miles), and all those occurrences have been within the past four years," the report said.
A separate report by the National Oceanic and Atmospheric Administration (NOAA) found that in August, too, Arctic sea ice coverage was down sharply, covering an average of six million square kilometers (2.3 million square miles), or 22 percent below the average extent from 1979 to 2000.
The August coverage was the second lowest for Arctic sea ice since records began in 1979. Only 2007 saw a smaller area of the northern sea covered in ice in August, NOAA said.
The record low for Arctic sea ice cover at the end of the spring and summer "melt season" in September was also set in 2007, when ice covered just 4.13 million square kilometers (1.595 million square miles).
Mark Serreze, director of the NSIDC, said climate-change skeptics might seize on the fact that Arctic sea ice did not hit a record-low extent this year, but they would be barking up the wrong tree if they claimed the shrinkage had stopped.
"Only the third lowest? It didn't set a new record? Well, right. It didn't set a new record but we're still headed down. We're not looking at any kind of recovery here," he told AFP.
In fact, Serreze said, Arctic sea ice cover is shrinking year-round, with more ice melting in the spring and summer months and less ice forming in the fall and winter.
"The Arctic, like the globe as a whole, is warming up and warming up quickly, and we're starting to see the sea ice respond to that. Really, in all months, the sea ice cover is shrinking -- there's an overall downward trend," Serreze told AFP.
"The extent of Arctic ice is dropping at something like 11 percent per decade -- very quickly, in other words.
"Our thinking is that by 2030 or so, if you went out to the Arctic on the first of September, you probably won't see any ice at all. It will look like a blue ocean, we're losing it that quickly," he said.
Losing sea ice cover in the Arctic would affect everything from the obvious, such as people who live in the far north and polar bears, to global weather patterns, said Serreze.
"The Arctic acts as a sort of refrigerator of the northern hemisphere. As we lose the ice cover, we start to change the nature of that refrigerator, and what happens up there affects what happens down here in the middle latitudes," he said.
"We might have less cold outbreaks, which you might say is a good thing, but it's not such a good thing in regions that depend on snowfall for their water supply."
NOAA noted in its report that the first eight months of 2010 were tied with the same period in 1998 for the warmest combined land and ocean surface temperatures on record worldwide, and the summer months were the second warmest on record globally, after 1998.
Canada to become global power thanks to climate change
By Randy Boswell, Postmedia News September 15, 2010
A top U.S. geographer says Canada will emerge as a major world power within 40 years as part of a climate-driven transformation of global trade, agriculture and geopolitics highlighted by the rise of the "Northern Rim" nations.
UCLA scientist Laurence Smith, whose previous studies have documented the toll that climate change is taking on Arctic ecosystems and communities, examines the full range of effects of global warming -- many of them positive for places such as Canada -- in his new book The World in 2050: Four Forces Shaping Civilization's Northern Future, to be released next week.
Along with climate change, Smith identifies population growth, looming resource scarcity and global economic integration as the key forces shaping the planet's immediate future.
"In many ways, the New North is well positioned for the coming century even as its unique ecosystem is threatened by the linked forces of hydrocarbon development and amplified climate change," states Smith, who describes in a UCLA-issued summary of his book how climate field research in Arctic communities exposed him to both the costs and benefits of a rapidly changing northern environment.
The book, to be released Sept. 23, suggests Canada and the other "NORCs" -- Northern Rim Countries -- are poised to become polar tigers, much as several smaller Asian countries emerged in recent decades as powerhouse Pacific Rim economies.
Arctic oil and gas deposits are seen as key to catapulting Canada into a higher income bracket in the global community.
Projected population growth is also seen as central to the rise of his "New North" on the world stage.
"As worldwide population increases by 40 per cent over the next 40 years, sparsely populated Canada, Scandinavia, Russia and the northern United States will become formidable economic powers and migration magnets," states the UCLA summary of Smith's vision.
"While wreaking havoc on the environment, global warming will liberate a treasure trove of oil, gas, water and other natural resources previously locked in the frozen North, enriching residents and attracting newcomers."
Those resources will become available "precisely at a time when natural resources elsewhere are becoming critically depleted, making them all the more valuable."
Smith, a professor of geography and earth sciences, gained recognition in 2005 when he led a scientific study documenting the late-20th-century depletion or disappearance of hundreds of Arctic and sub-Arctic lakes around the world, a result of warming global temperatures and rapidly changing hydrological conditions in northern countries.
But Smith contends that countries in southern climes will have to contend with far greater pressures on scarce water resources and will face a host of other wrenching, climate-driven social changes that northern nations will largely escape.
"In many ways, the stresses that will be very apparent in other parts of the world by 2050 -- like coastal inundation, water scarcity, heat waves and violent cities -- will be easing or unapparent in northern places," Smith states. "The cities that are rising in these NORC countries are amazingly globalized, livable and peaceful."
Solar Doubling, Gas Glut Drive Down German Power Prices: Energy Markets
By Lars Paulsson - Sep 22, 2010
Solar power may almost double in Germany this year just as a natural gas glut sends electricity prices to near five-month lows.
Capacity at plants converting sunlight to electricity in Europe’s biggest energy market will rise to 18,000 megawatts from 9,786 megawatts, according to Bloomberg New Energy Finance forecasts. No other power source will grow as fast, increasing the glut that emerged after last year’s recession, UBS AG said.
“What’s new and special in Germany this year is the devilish growth in solar,” Sigurd Lie, a senior analyst at Imarex ASA’s Nena unit, an Oslo-based energy markets research company, said in a Sept. 16 phone interview. “This has kept a lid on prices even as you’ve seen an increase in demand.”
German prices probably won’t gain this year even with power consumption forecast to rise 4 percent, according to Lie, 44, who has tracked electricity markets for 12 years at Nena. Per Lekander, UBS’s head of global utilities research, said in a Sept. 16 e-mail that profits at coal-fired plants, such as those run by E.ON AG and RWE AG, the country’s two biggest utilities, may drop by more than 50 percent to as low as 2 euros ($2.66) a megawatt hour in the next 12 months.
German power for next year, the European benchmark contract, fell to its lowest since July 27 today, trading at 49.25 euros a megawatt hour, according to broker prices on Bloomberg, down 11 percent from this year’s peak of 55.10 euros on June 21. E.ON spokesman Georg Oppermann declined to comment on the UBS forecast while RWE spokeswoman Annett Urbaczka declined to comment on trading matters.
Gas Glut
The contract exceeded 90 euros in July 2008, as a six-year rally in energy prices was coming to an end. Now, natural gas, used to produce about 15 percent of Germany’s electricity, is also damping gains, said Sebastien Terryn, a risk manager at Summit Energy Inc. in Waregem, Belgium.
“German power won’t rise to anywhere near the record levels of 2008 for at least another two years because of the price link with natural gas, which remains a market with ample supplies,” Terryn said by e-mail on Sept. 17. Summit manages about $20 billion of energy purchases annually for clients including Healthcare Trust of America Inc.
U.K. natural gas for this winter is down 18 percent since July 5 to 47.50 pence a therm today. A therm is 100,000 British thermal units. The U.K. gas market, Europe’s biggest, influences prices elsewhere in the region.
Solar Surge
Producers will bring 7,000 to 9,000 megawatts of new solar capacity online in Germany this year, said Francesco d’Avack, a London-based analyst at New Energy Finance. That’s in addition to the 9,786 megawatts in use at the end of last year, which is equivalent to the capacity of about 11 new coal-fired plants.
Germany is installing 10 times as much solar power capacity this year as the U.S. Investors are racing to lock in above-market rates for 20 years while they can. Germany’s parliament decided July 8 on a 16 percent reduction in solar subsidies, and another reduction in the so-called feed-in tariffs is scheduled for January.
Mikio Katayama, president of Sharp Corp., Japan’s biggest solar-panel maker, said last week the Osaka-based company may boost sales by 50 percent this year, faster than it earlier forecast, on increased demand in Europe. Sharp today agreed to buy California’s Recurrent Energy for as much as $305 million. The all-cash deal will be completed by the end of the year, the companies said.
Solar Sales
Global 2010 sales for photovoltaic panels may more than double to as much as 18,000 megawatts and then flatten next year as countries including Germany, Italy and France cut solar energy subsidies, Bloomberg New Energy Finance estimated.
Germany meets as much as 10 percent of its power demand from the sun on some days, Andreas Haenel, chief executive officer of German solar-plant developer Phoenix Solar AG, said in an interview last month. In the southernmost state of Bavaria, solar power contributes as much as 25 percent of total electricity when the sun shines and demand is low, he said.
Solar power’s share may rise to as much as 7.5 percent of Germany’s total power generation by 2013, according to Deutsche Energy Agentur GmbH, from about 1 percent last year, as measured by industry group Bundesverband Solarwirtschaft.
As solar capacity jumps, traders will increasingly depend on data for projecting availability and prices. The European Energy Exchange AG in Leipzig started publishing daily data on expected solar capacity on July 19, in addition to estimates on other German power sources. Solar output was expected to peak today at as much as 7,963 megawatts at 1 p.m. Berlin time, according to a forecast published yesterday.
Surplus Power
The German government also plans to extend the lifespan of aging nuclear power plants. On Sept. 28, Chancellor Angela Merkel’s cabinet is set to approve a plan to allow reactors to operate for an average of 12 years beyond a legally mandated closure of 2022, to help the nation of 82 million people transition to renewable power.
Germany’s surplus power is weighing on coal-fired generators. The so-called clean-dark spread, a calculation of forward prices for fuel, power and carbon allowances, was at 5.36 euros a megawatt hour yesterday, according to Bloomberg calculations. That’s down from 11.93 euros at the start of this year and almost 18 euros in December 2008.
Lekander at UBS expects as much as 10,000 megawatts of solar capacity and 2,000 megawatts of wind capacity to be added in Germany this year. Neither renewable power source is available 24 hours a day, unlike coal or nuclear. Germany, the Netherlands and Belgium are collectively adding about 7,000 megawatts of gas-fired capacity a year.
Out of the hysteria of 9/11 has grown a whole movement designed to compromise civil liberties. It had been my hope -- obviously a vain one -- that the Democrats would stop this erosion of privacy. Clearly both parties find fear too politically useful to honor the Constitution.
Feds: Privacy Does Not Exist in ‘Public Places’
by David Kravets
The Obama administration has urged a federal appeals court to allow the government, without a court warrant, to affix GPS devices on suspects’ vehicles to track their every move.
The Justice Department is demanding that a federal appeals court rehear a case in which the court reversed the conviction and life sentence of a cocaine dealer whose vehicle was tracked via GPS for a month, without a court warrant. The authorities then obtained warrants to search and find drugs in the locations where defendant Antoine Jones had travelled.
The administration, in urging the full U.S. Court of Appeals for the District of Columbia to reverse a three-judge panel’s August ruling from the same court, said Monday that Americans should expect no privacy while in public.
“The panel’s conclusion that Jones had a reasonable expectation of privacy in the public movements of his Jeep rested on the premise that an individual has a reasonable expectation of privacy in the totality of his or her movements in public places,” Assistant U.S. Attorney Peter Smith wrote the court in a petition for rehearing.
The case is an important test of privacy rights as GPS devices have become a common tool in crime fighting, and can be affixed to moving vehicles by an officer shooting a dart. Three other circuit courts have already said the authorities do not need a warrant for GPS vehicle tracking, Smith pointed out.
The circuit’s ruling means that, in the District of Columbia area, the authorities need a warrant to install a GPS-tracking device on a vehicle. But in much of the United States, including the West, a warrant is not required. Unless the circuit changes its mind, only the Supreme Court can mandate a uniform rule.
The government said the appellate panel’s August decision is “vague and unworkable” and undermines a law enforcement practice used “with great frequency.”
The legal dispute centers on a 1983 U.S. Supreme Court decision concerning a tracking beacon affixed to a container, without a court warrant, to follow a motorist to a secluded cabin. The appeals court said that decision did not apply to today’s GPS monitoring of a suspect, which lasted a month.
The beacon tracked a person “from one place to another,” the appeals court noted, whereas the GPS device monitored Jones’ “movements 24 hours a day for 28 days.”
The government argued Monday that the appellate court’s decision “offers no guidance as to when monitoring becomes so efficient or ‘prolonged’ as to constitute a search triggering the requirements of the Fourth Amendment.”
The appeals court ruled the case “illustrates how the sequence of a person’s movements may reveal more than the individual movements of which it is composed.”
The court said that a person “who knows all of another’s travels can deduce whether he is a weekly churchgoer, a heavy drinker, a regular at the gym, an unfaithful husband, an outpatient receiving medical treatment, an associate of particular individuals or political groups — and not just one such fact about a person, but all such facts.”