The solar wind today is 349.3 km/s and the proton density is 0.3 protons/cm³. Sunspot AR1416 tripled in size this weekend and, in the process, developed a "beta-gamma" magnetic field that harbors energy for M-class solar flares. NOAA forecasters estimate a 30% chance of M-flares during the next 24 hours. Any such eruption would likely be Earth-directed, as the sunspot is facing our planet.

On Feb. 3rd, Iran launched the country's third satellite. Named "Navid," the 110-pound mini-spacecraft is meant to stay in orbit for 18 months, sending back images to Iran as it completes a revolution of Earth every 90 minutes.

World's Oldest Living Thing
By Jonathan Pearlman, Sydney, 1:06PM GMT 07 Feb 2012

Australian scientists sequenced the DNA of samples of the giant seagrass, Posidonia oceanica, from 40 underwater meadows in an area spanning more than 2,000 miles, from Spain to Cyprus. The analysis, published in the journal PLoS ONE, found the seagrass was between 12,000 and 200,000 years old and was most likely to be at least 100,000 years old. This is far older than the current known oldest species, a Tasmanian plant that is believed to be 43,000 years old. Prof Carlos Duarte, from the University of Western Australia, said the seagrass has been able to reach such an old age because it can reproduce asexually and generate clones of itself. Organisms that can only reproduce sexually are inevitably lost at each generation, he added. "They are continually producing new branches," he told The Daily Telegraph. "They spread very slowly and cover a very large area, giving them more area to mine resources. They can then store nutrients within their very large branches during bad conditions for growth." The separate patches of seagrass in the Mediterranean span almost 10 miles and weigh more than 6,000 tons. But Prof Duarte said that while the seagrass is one of the world's most resilient organisms, it has begun to decline due to coastal development and global warming. "If climate change continues, the outlook for this species is very bad," he said. "The seagrass in the Mediterranean is already in clear decline due to shoreline construction and declining water quality, and this decline has been exacerbated by climate change. As the water warms, the organisms move slowly to higher latitudes. The Mediterranean is locked to the north by the European continent. They cannot move. The outlook is very bad."

The BMI: Brain-Machine Interface
By Andrew Hough, 7:15AM GMT 07 Feb 2012

Researchers found the Armed Forces could harness the rapid advance of neuroscience to improve the training of soldiers, pilots and other personnel. A study from the Royal Society, Britain's national academy of science, showed the possible benefits of neuroscience to military and law enforcement. It predicted new designer drugs that boost performance, make enemy troops fall asleep and ensure captives become more talkative. But among the more remarkable scenarios suggested in the report was the use of devices called brain-machine interfaces (BMIs) to connect soldiers' brains directly to military technology such as drones and weapons. The study, published on Tuesday, stated that the work built on previous research that has enabled people to control cursors and artificial limbs through BMIs that read their brain signals.
In their report, one of a series from the Royal Society looking at the field of neuroscience, the experts call on the UK Government to be as "transparent as possible" about research into military and law enforcement applications. "Since the human brain can process images, such as targets, much faster than the subject is consciously aware of, a neurally interfaced weapons system could provide significant advantages over other system control methods in terms of speed and accuracy," the report states. The report also showed how neuroscientists employed so-called "transcranial direct current stimulation" (tDCS) to improve soldiers' awareness while in hostile environments. It showed how soldiers' ability to spot roadside bombs, snipers and other hidden threats was improved in a virtual reality training program used by US troops bound for the Middle East. But the report's authors argued that while hostile uses of neuroscience and related technologies were now more likely, scientists remained largely oblivious to that potential. While the benefits to society were obvious, through improved treatments for brain disease and mental illness, there were serious security implications to consider. "Neuroscience will have more of an impact in the future," said Prof Rod Flower, chair of the report's working group. "People can see a lot of possibilities, but so far very few have made their way through to actual use. All leaps forward start out this way." Prof Flower, from the William Harvey Research Institute at Barts and the London Hospital, added: "You have a groundswell of ideas and suddenly you get a step change. If you are controlling a drone and you shoot the wrong target or bomb a wedding party, who is responsible for that action? Is it you or the BMI? There's a blurring of the line between individual responsibility and the functioning of the machine. Where do you stop and the machine begin?" Vince Clark, a cognitive neuroscientist and lead author on the study at the University of New Mexico, admitted he was uncomfortable in knowing neuroscience could be used by the military. "As a scientist I dislike that someone might be hurt by my work," he said. "I want to reduce suffering, to make the world a better place, but there are people in the world with different intentions, and I don't know how to deal with that. If I stop my work, the people who might be helped won't be helped. Almost any technology has a defence application." The Ministry of Defence has not commented on the report.

The Himalayas and nearby peaks have lost no ice in past 10 years, study shows
Meltwater from Asia's peaks is much less than previously estimated, but the lead scientist says the loss of ice caps and glaciers around the world remains a serious concern.
Damian Carrington, guardian.co.uk, Wednesday 8 February 2012 13.10 EST

[Photo: Hopar glacier in Pakistan. Melting ice outside the two largest caps - Greenland and Antarctica - is much less than previously estimated, the study has found. Photograph: Paula Bronstein/Getty Images]

The world's greatest snow-capped peaks, which run in a chain from the Himalayas to Tian Shan on the border of China and Kyrgyzstan, have lost no ice over the last decade, new research shows.
The discovery has stunned scientists, who had believed that around 50bn tonnes of meltwater were being shed each year and not being replaced by new snowfall. The study is the first to survey all the world's icecaps and glaciers and was made possible by the use of satellite data. Overall, the contribution of melting ice outside the two largest caps – Greenland and Antarctica – is much less than previously estimated, with the lack of ice loss in the Himalayas and the other high peaks of Asia responsible for most of the discrepancy. Bristol University glaciologist Prof Jonathan Bamber, who was not part of the research team, said: "The very unexpected result was the negligible mass loss from high mountain Asia, which is not significantly different from zero." The melting of Himalayan glaciers caused controversy in 2009 when a report from the UN's Intergovernmental Panel on Climate Change mistakenly stated that they would disappear by 2035, instead of 2350. However, the scientist who led the new work is clear that while greater uncertainty has been discovered in Asia's highest mountains, the melting of ice caps and glaciers around the world remains a serious concern. "Our results and those of everyone else show we are losing a huge amount of water into the oceans every year," said Prof John Wahr of the University of Colorado. "People should be just as worried about the melting of the world's ice as they were before." His team's study, published in the journal Nature, concludes that between 443-629bn tonnes of meltwater overall are added to the world's oceans each year. This is raising sea level by about 1.5mm a year, the team reports, in addition to the 2mm a year caused by expansion of the warming ocean. The scientists are careful to point out that lower-altitude glaciers in the Asian mountain ranges – sometimes dubbed the "third pole" – are definitely melting. Satellite images and reports confirm this. But over the study period from 2003-10 enough ice was added to the peaks to compensate. The impact on predictions for future sea level rise is yet to be fully studied but Bamber said: "The projections for sea level rise by 2100 will not change by much, say 5cm or so, so we are talking about a very small modification." Existing estimates range from 30cm to 1m. Wahr warned that while crucial to a better understanding of ice melting, the eight years of data is a relatively short time period and that variable monsoons mean year-to-year changes in ice mass of hundreds of billions of tonnes. "It is awfully dangerous to take an eight-year record and predict even the next eight years, let alone the next century," he said. The reason for the radical reappraisal of ice melting in Asia is the different ways in which the current and previous studies were conducted. Until now, estimates of meltwater loss for all the world's 200,000 glaciers were based on extrapolations of data from a few hundred monitored on the ground. Those glaciers at lower altitudes are much easier for scientists to get to and so were more frequently included, but they were also more prone to melting. The bias was particularly strong in Asia, said Wahr: "There extrapolation is really tough as only a handful of lower-altitude glaciers are monitored and there are thousands there very high up." The new study used a pair of satellites, called Grace, which measure tiny changes in the Earth's gravitational pull. When ice is lost, the gravitational pull weakens and is detected by the orbiting spacecraft. 
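A quick way to sanity-check the meltwater and sea-level figures quoted above is to spread the annual meltwater volume over the area of the world's oceans. The sketch below is my own back-of-the-envelope check, not part of the study; the ocean area (about 3.61 × 10^8 km²) and the 1,000 kg/m³ water density are assumed round values.

```python
# Rough cross-check: how much sea-level rise does 443-629 Gt/yr of meltwater imply?
# Assumed values (not from the Nature paper): ocean area ~3.61e8 km^2, water density ~1000 kg/m^3.
OCEAN_AREA_M2 = 3.61e8 * 1e6   # km^2 -> m^2

def sea_level_rise_mm(meltwater_gt_per_year):
    volume_m3 = meltwater_gt_per_year * 1e12 / 1000.0   # Gt -> kg, then divide by density -> m^3
    return volume_m3 / OCEAN_AREA_M2 * 1000.0           # spread over the oceans, metres -> mm

for gt in (443, 629):
    print(f"{gt} Gt/yr -> about {sea_level_rise_mm(gt):.1f} mm/yr")
# Prints roughly 1.2 and 1.7 mm/yr, bracketing the ~1.5 mm/yr quoted in the article.
```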
"They fly at 500km, so they see everything," said Wahr, including the hard-to-reach, highaltitude glaciers. "I believe this data is the most reliable estimate of global glacier mass balance that has been produced to date," said Bamber. He noted that 1.4 billion people depend on the rivers that flow from the Himalayas and Tibetan plateau: "That is a compelling reason to try to understand what is happening there better." He added: "The new data does not mean that concerns about climate change are overblown in any way. It means there is a much larger uncertainty in high mountain Asia than we thought. Taken globally all the observations of the Earth's ice – permafrost, Arctic sea ice, snow cover and glaciers – are going in the same direction." Grace launched in 2002 and continues to monitor the planet, but it has passed its expected mission span and its batteries are beginning to weaken. A replacement mission has been approved by the US and German space agencies and could launch in 2016. • This article was amended on 9 February 2012. The original sub-heading read "Melting ice from Asia's peaks is much less then previously estimated" as did the photo caption and text: "Melting ice outside the two largest caps - Greenland and Antarctica - is much less then previously estimated". These have all been corrected. It’ll be Alright says the phone “We’re trying to develop individual algorithms for each user that can determine specific states, so their location where they are, their activity, their social context, who they’re with, what they’re engaged in, and their mood,” Mohr said. That way, if someone is sitting at home for days on end feeling depressed, the phone could sense it. “It can provide them an automated text message, or an automated phone call to make a suggestion to give somebody a call or get out of the house,” Mohr said. Dr. Mohr says tests with eight patients so far, have shown that the phone “therapist,” has been helpful in lifting their moods. “They all had a major depressive disorder when they started, and they were all both clinically and statistically better at the end of the treatment,” he said. Dr. Mohr said the technology could offer more cost-effective ways to treat depression. He plans more widespread tests this summer. The Case of the Blank Checkbook House Minority Whip Steny Hoyer (D-Md.) said that Congress does not need an official federal budget because it can just adopt appropriations bills and authorization policies as needed to keep operating. At a briefing with journalists on Capitol Hill on Tuesday, Hoyer was asked, “Mr. Hoyer, around the same time of the State of the Union [on Jan. 24], I think it was the same day, Republicans were trying to hit Senate Democrats for 1,000 days without passing a budget, and then you talk about this milestone today, 400 days without a jobs bill in the Republican House. But then on Friday [Democratic Senator Harry] Reid said that he didn’t think they needed to bring a budget to the floor this year [and that] the Budget Control Act can serve as a guideline.” Hoyer said: “What does the budget do? The budget does one thing and really only one thing: It sets the parameters of spending and discretionary caps. Other than that, the Appropriations committee are not bound by the Budget committee’s priorities.” He continued: “The fact is, you don’t need a budget. We can adopt appropriations bills. We can adopt authorization policies without a budget. 
We already have an agreed-upon cap on spending." Hoyer criticized the Republicans for not passing a budget for "a number of years" when they were in control of the House, Senate, and the presidency under George W. Bush. "So that this 1,000 days they haven't passed a budget, the Republicans went for equal lengths of time without passing a budget. I think '05 and '06," Hoyer said. Hoyer called the Congressional Republicans' highlighting of the Senate's failure to pass a budget in over 1,000 days an "argument to dissemble and distract the attention on the lack of productive accomplishment in the House of Representatives." "Again, I remind you, when we had a Republican president and we controlled the House and the Senate, twice as many bills -- more than twice as many bills -- were signed by President Bush as has been signed by President Obama," Hoyer said. "Why? Because we [congressional Democrats] worked with President Bush," said Hoyer. "This Republican leadership's not interested in working with President Obama, and that's unfortunate." The House Republicans passed a budget for fiscal year 2012 back in April 2011 – not one House Democrat supported the bill, and only four House Republicans voted against it. The budget bill went nowhere in the Democrat-controlled Senate. The last time the Senate passed a budget was on Apr. 29, 2009. The federal government has since been operating on funds approved through a series of continuing resolutions (CRs), raises in the debt ceiling, and several appropriations bills. The last CR was passed in mid-December 2011, by both the House and Senate, and signed by President Barack Obama. That $915-billion deal, along with several appropriations measures, will keep the federal government operating through the end of fiscal year 2012, on Sept. 30. Last Friday, Senate Majority Leader Harry Reid (D-Nev.), referencing the last debt ceiling deal, said, "We do not need to bring a budget to the floor this year. It's done, we don't need to do it." Concerning Reid's remarks, Sen. Jeff Sessions (R-Ala.) said in a statement, "It's been more than 1,000 days since Senate Democrats have offered a budget plan to the American people. Now, once again, the Senate's ineffectual Democrat majority balks at the task of leadership. … He obviously continues in his belief that it would be politically foolish for his members to go on record in support of any long-term vision. … Budget Control Act spending caps, crafted behind closed doors and rushed to passage at the 11th hour under threat of panic, do not even approach the definition of the budget process that the law requires."

Despite Media Hysterics, Komen Accounts For Only 0.055% Of Planned Parenthood's Billion Dollar Revenue
By Paul Wilson, February 8, 2012

On learning that Susan G. Komen for the Cure was about to defund Planned Parenthood, both traditional media outlets and leftist media sites exploded with indignant rage. Hysterical bloggers on left-wing websites declared that Komen had joined the GOP "War on Women," and claimed thousands of women would be harmed or even left to die if Komen stopped funding Planned Parenthood. Leaving aside the question of the type of "care" Planned Parenthood provides its customers, the group's own numbers tell a different story – that Planned Parenthood could easily have survived financially without receiving Komen grants.
Planned Parenthood's most recent annual report, for the 2010 fiscal year, shows the "reproductive health group" took in $1.048 billion in revenue. The same report shows that Planned Parenthood took in $18.5 million in "excess of revenue over expenses" during the 2010 fiscal year. Planned Parenthood also reported that its net assets totaled $1.0096 billion at the end of the 2010 fiscal year. In other words, Planned Parenthood is a billion-dollar "non-profit" that makes millions in profits. By contrast, the amount of funding Komen provided to Planned Parenthood was comparatively small. Komen provided $680,000 in grants to Planned Parenthood during 2011. In 2010, it provided $580,000 in grants to Planned Parenthood. Simple math shows that Komen accounted for less than 5 percent of Planned Parenthood's "excess of revenue over expenses" in 2009-2010. Komen provided less than a tenth of 1 percent (specifically, 0.055%) of Planned Parenthood's revenues during the year 2009-2010. Planned Parenthood isn't in dire financial straits. On January 24, 2012 – a week before the controversy broke – it was reported by The Real Deal that Planned Parenthood bought a new headquarters in New York City for $34.8 million. At 2011 funding levels, Susan G. Komen would have needed to give money to Planned Parenthood for more than 50 years to equal Planned Parenthood's acquisition of its new headquarters in New York City. Mark Steyn of National Review Online points out that Komen's 2010 grant to Planned Parenthood would not even cover Planned Parenthood CEO Cecile Richards' salary and benefits. But for all the attention the media gave to Komen's decision to defund Planned Parenthood, the broadcast networks exhibited a singular lack of curiosity about Planned Parenthood's own finances. In their coverage of the Komen controversy, none of the big three broadcast networks reported that Planned Parenthood was worth a billion dollars. Not once. Nor, for that matter, did The New York Times and The Washington Post. Instead, traditional media outlets hyped – and in many cases actively assisted – Planned Parenthood's efforts to force Komen to reverse its decision. Left-wing funded media outlets were far too busy providing a forum for Planned Parenthood leaders and allies to raise serious questions about Planned Parenthood's financial status. The Huffington Post published an article from Cecile Richards, the head of Planned Parenthood, with the title "On Planned Parenthood and Women: What You Can Do." Alternet published multiple articles from Jodi Jacobson, editor-in-chief of the pro-abortion blog RH Reality Check, including one with the hyperbolic title "The Cancerous Politics and Ideology of the Susan G. Komen Foundation." Good journalists would have actually examined Planned Parenthood's financial situation before claiming that women's lives would be put in jeopardy. But it seems that traditional journalists have become more interested in "pro-choice" advocacy than actual reporting when it comes to the issue of abortion.

NASA's Moon Base

NASA is pressing forward on assessing the value of a "human-tended waypoint" near the far side of the moon — one that would embrace international partnerships as well as commercial and academic participation, SPACE.com has learned. According to a Feb.
3 memo from William Gerstenmaier, NASA's associate administrator for human exploration and operations, a team is being formed to develop a cohesive plan for exploring a spot in space known as the Earth-moon libration point 2 (EML-2). Libration points, also known as Lagrangian points, are places in space where the combined gravitational pull of two large masses roughly balances out, allowing spacecraft to essentially "park" there. A pre-memo NASA appraisal of EML-2, which is near the lunar far side, has spotlighted this destination as the "leading option" for a near-term exploration capability. EML-2 could serve as a gateway for capability-driven exploration of multiple destinations, such as near-lunar space, asteroids, the moon, the moons of Mars and, ultimately, Mars itself, according to NASA officials. A capabilities-driven NASA architecture is one that should use the agency's planned heavy-lift rocket, known as the Space Launch System, and the Orion Multi-Purpose Crew Vehicle "as the foundational elements."

Cadence of compelling missions

The memo spells out six strategic principles to help enable exploration beyond low-Earth orbit:
- Incorporating significant international participation that leverages current International Space Station partnerships.
- U.S. commercial business opportunities to further enhance the space station logistics market, with a goal of reducing costs and allowing for private-sector innovation.
- Multi-use or reusable in-space infrastructure that allows a capability to be developed and reused over time for a variety of exploration destinations.
- The application of technologies for near-term applications, while focusing research and development of new technologies to reduce costs, improve safety, and increase mission capture over the longer term.
- Demonstrated affordability across the project life cycle.
- Near-term mission opportunities with a well-defined cadence of compelling missions, providing for an incremental buildup of capabilities to perform more complex missions over time.

Quiet zone

According to strategic space planners, an EML-2 waypoint could enable significant telerobotic science on the far side of the moon and could serve as a platform for solar and Earth scientific observation, radio astronomy and other science in the quiet zone behind the moon. Furthermore, the waypoint could enable assembly and servicing of satellites and large telescopes, among a host of other uses. If NASA succeeds in establishing an astronaut-tended EML-2 waypoint, it would represent the farthest humans have traveled from Earth to date, the memo points out. Extended stays at EML-2 would provide advancements in life sciences and radiation shielding for long-duration missions outside of the Van Allen radiation belts that protect Earth, scientists say.

Next step

Gerstenmaier noted that moving forward on international, commercial and academic partnerships will "require significant detailed development and integration." Moreover, Gerstenmaier added, EML-2 "is a complex region of cis-lunar space that has certain advantages as an initial staging point for exploration, but may also have some disadvantages that must be well understood." A NASA study team is assigned the task of developing near-term missions to EML-2 "as we continue to refine our understanding and implications of using this waypoint as part of the broader exploration capability development," the memo explains. The study is targeted for completion by March 30, 2012.
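To give a sense of scale for the EML-2 waypoint described above, the distance from the Moon to the Earth-moon L2 point can be approximated with the standard restricted three-body formula r ≈ R(m/3M)^(1/3). The sketch below is a back-of-the-envelope illustration using textbook values for the Earth and Moon masses that I am supplying; it is not taken from the NASA memo.

```python
# Back-of-the-envelope estimate of the Earth-moon L2 distance.
# Approximation: r ~ R * (m / (3 * M))**(1/3), valid when the Moon's mass m << Earth's mass M.
EARTH_MASS_KG = 5.972e24
MOON_MASS_KG  = 7.342e22
EARTH_MOON_KM = 384_400          # mean Earth-Moon distance

r_l2_km = EARTH_MOON_KM * (MOON_MASS_KG / (3 * EARTH_MASS_KG)) ** (1 / 3)
print(f"EML-2 sits roughly {r_l2_km:,.0f} km beyond the Moon")        # ~61,500 km
print(f"or about {EARTH_MOON_KM + r_l2_km:,.0f} km from Earth")       # ~446,000 km
```

That figure is consistent with the later claim in this digest that an EML-2 crew would travel roughly 15 percent farther from Earth than the Apollo astronauts did.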
A working group of International Space Station members — a meeting bringing together space agencies from around the world — is being held in Paris this week, with NASA's EML-2 strategy likely to be discussed with international partners.

Proving ground

Bullish on the promise of telerobotic exploration of the moon from EML-2 is Jack Burns, director of the Lunar University Network for Astrophysics Research (LUNAR) Center at the University of Colorado, Boulder. LUNAR is funded by the NASA Lunar Science Institute. Burns and his team have been collaborating with Lockheed Martin (builder of the Orion Multi-Purpose Crew Vehicle) for more than a year to plan an early Orion mission that would go into a halo orbit of EML-2 above the lunar far side. "This is extremely exciting from both the exploration and science sides," Burns told SPACE.com. "This mission concept seems to be really taking off now because it is unique and offers the prospects of doing something significant outside of low-Earth orbit within this decade." In collaboration with Lockheed Martin, the LUNAR Center is investigating human missions to EML-2 that could be a proving ground for future missions to deep space while also overseeing scientifically important investigations.

Roadways on the moon?

In a LUNAR Center white paper provided to SPACE.com, researchers note that an EML-2 mission would have astronauts traveling 15 percent farther from Earth than did the Apollo astronauts, and spending almost three times longer in deep space. Such missions would validate the Orion spacecraft's life-support systems for shorter durations, could demonstrate the high-speed re-entry capability needed for return to Earth from deep space, and could help scientists gauge astronauts' radiation dose from cosmic rays and solar flares. Doing so would help verify that Orion provides sufficient radiation protection, as it is designed to do, researchers said. On such missions, the white paper explains, Orion astronauts could teleoperate gear on the lunar far side. For instance, the moon-based robotic hardware could obtain samples from the geologically appealing far side — perhaps from the South Pole-Aitken basin, which is one of the largest, deepest and oldest craters in the solar system. Also on a proposed lunar robotic agenda is deployment of a low-frequency array of radio antennas to observe the first stars in the early universe. Among a number of research jobs, the LUNAR team has been investigating how modest equipment could be used to fuse lunar regolith into a concrete-like material, which could then be used for construction of large structures, without the expense of having to carry most of the material to the lunar surface. The ability to fabricate hardened structures from lunar regolith could also foster on-the-spot creation of solar arrays, habitats, radiation shielding and maybe even roadways on the surface of the moon.

The Challenges of Building a House on Mars
by Morgen E. Peck, Madison WI (SPX), Jan 11, 2012

Going to Mars? Expect to stay a while. Because of the relative motions of Earth and Mars, the pioneering astronauts who touch down on the Martian surface will have to remain there for a year and a half. For this reason, NASA has already started experimenting with a habitat fit for the long-term exploration of Mars. Last year, students at the University of Wisconsin won the X-Hab competition to design and build an inflatable loft addition to a habitat shell that NASA had already constructed.
The final structure now serves as a working model that is being tested in the Arizona desert. Like any home, it's a sacred bulwark against the elements; but not just the cold, heat and pests of Arizona. A Mars habitat will have to protect astronauts from cosmic rays, solar flares and unknown soil compositions, all while keeping inhabitants happy and comfortable.

[Photo: Habitat Demonstration Unit - the Deep Space Habitat - with the student-built X-Hab loft on top, a hygiene compartment on one side and an airlock on the other. Courtesy NASA Desert Rats.]

"Radiation protection is our number one risk," says Kriss Kennedy, the project manager for NASA's habitat. "We have the luxury of Earth providing a magnetic field and an atmosphere." But on the moon or Mars, astronauts will be exposed to cosmic rays that damage human tissue when absorbed by the body. Solar flares, a more sudden and unpredictable burst of radiation, can kill instantaneously. NASA is investigating ways to build an electrostatic radiation shield to protect astronauts. For now, however, the easiest solution is surrounding them with materials that absorb the onslaught of energy. This means carefully choosing what materials to use in the habitat shell, but it will also determine how objects are arranged in the interior, says Kennedy. Food and supplies can be pushed up against the walls as extra protection. NASA is also looking at ways to repurpose discarded supplies and packaging to build up the habitat wall over time. "We can take all the garbage and compact it into these discs that we can use on the outside for radiation protection," says Kennedy. Which raises the more general problem of waste. In NASA's working prototype, the bathroom is not called a bathroom, but a "hygiene module." The name appropriately indicates the level of technology required in designing a bathroom that enables astronauts to tidy themselves without soiling the rest of the living quarters. The throne resembles more of a robot docking station in the NASA habitat, just as it did on the shuttles. The toilet spirits away waste using an assortment of pneumatic tubes and gullies, which astronauts fix into proper alignment with the aid of a live video feed. This very natural process does not come naturally in space, and astronauts undergo special training to master it. And so building a Martian or lunar habitat involves a lot of thought about how to keep bad things out, including dirt. During the Apollo missions, lunar dust insinuated itself into the lander and command module, causing respiratory problems among the crew and threatening to damage equipment. "It's like broken or ground glass. We have to really do a lot to minimize it getting into the habitat and interacting with the crew members," says Kennedy, possibly by covering equipment and fabric with an electrostatic coating. Reinforcing the habitat so successfully against outside elements has the negative effect of trapping in heat from metabolism and equipment. "The habitats are so well insulated that heat does not escape. So you're basically living inside a thermos bottle," says Kennedy. The NASA habitat collects heat from the air and electronic devices and sends it through a fluid loop to a set of radiators, which then expel the heat to the environment. Even after securing the ability to live, NASA will have to address the standard of living for astronauts. Unfortunately, recreation usually takes up a lot of space, and there is active debate in the agency about just how big to make the habitat, according to Kennedy.
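The "thermos bottle" problem Kennedy describes comes down to radiative heat rejection: with little or no atmosphere to carry heat away, the radiators can only dump it according to the Stefan-Boltzmann law. The sketch below is purely illustrative; the heat load, radiator temperature, emissivity and effective sink temperature are round numbers I am assuming, not figures from the habitat project.

```python
# Illustrative radiator sizing using the Stefan-Boltzmann law:
# net flux ~ emissivity * sigma * (T_radiator^4 - T_sink^4).
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area_m2(heat_load_w, t_rad_k=300.0, t_sink_k=210.0, emissivity=0.9):
    """Radiator area needed to reject heat_load_w watts, under the assumptions above."""
    net_flux = emissivity * SIGMA * (t_rad_k**4 - t_sink_k**4)  # W per m^2
    return heat_load_w / net_flux

# A hypothetical 5 kW load (crew metabolism plus electronics) would need roughly:
print(f"{radiator_area_m2(5000):.0f} m^2 of radiator")  # ~16 m^2 with these assumed numbers
```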
On a first trip to Mars it would be impossible to build from materials found on the planet. Every component of the habitat will either have to be pre-placed or arrive with the astronauts - either way, it will come from Earth. Designing an inflatable loft was one way to create more space while using lighter materials, with the aim of providing social, recreational, and living space for each astronaut. These days, NASA is not quite sure where the next adventure lies - whether a good candidate for asteroid exploration will soar into our neighborhood before we summon enough political courage to send humans into deep space. Wherever we go next, we will at least be prepared to stay for a while.

I have a proposal that could not only save billions of dollars, but also the lives of human beings. Build the habitat project on the Moon. Utilize the dormant openings in the surface of the Moon to put the habitat beneath the surface, to further protect it from meteorites and cosmic radiation. Oh, and if there is an emergency, the astronauts don't have to wait three months for rescue; we are only two days away. We could even stream energy to the station from Earth to supply emergency power.

100 Billion Planets Out There

Astronomers said Wednesday that each of the 100 billion stars in the Milky Way probably has at least one companion planet, adding credence to the notion that planets are as common in the cosmos as grains of sand on the beach. "Planets are the rule rather than the exception," said lead astronomer Arnaud Cassan at the Institute of Astrophysics in Paris. He led an international team of 42 scientists who spent six years surveying millions of stars at the heart of the Milky Way in the most comprehensive effort yet to gauge the prevalence of planets in the galaxy. To estimate the number of alien worlds, Dr. Cassan and his colleagues studied 100 million stars between 3,000 and 25,000 light years from Earth with a technique called gravitational microlensing, which uses distant light amplified by the gravity of a massive star or planet to create an astronomical magnifying lens. Then they combined their findings with earlier surveys, which used other detection techniques, to create a statistical sample of stars and the planets that orbit them, which they say is representative of the galaxy. By their calculations, most of the Milky Way's stars—100 billion is the most conservative estimate—have planets, the researchers reported in Nature on Wednesday. None of the alien planets detected so far appear suitable for conventional carbon-based life as known on Earth. But almost two-thirds of the stars likely host a planet slightly larger than Earth, measuring about five times its mass, and half of them harbor a planet about the mass of Neptune. About one-fifth of them are home to a gas giant like Jupiter or larger. "One can point at almost any random star and say there are planets orbiting that star," said astronomer Uffe Grae Jorgensen at the University of Copenhagen in Denmark, who was a member of Dr. Cassan's team. Moreover, millions of these planets may circle two stars, in an arrangement considered so unlikely that until a few months ago it was found only in science fiction, astronomers using NASA's Kepler space telescope announced in a separate finding published online in Nature on Wednesday.
"We are starting to see a whole new type of planetary system, which is unlike anything in our own solar system," said astronomer William Welsh at San Diego State University, who presented the Kepler findings Wednesday at a meeting of the American Astronomical Society in Austin, Texas. These discoveries are the latest from an avalanche of new data about worlds around other stars. Since 1994, researchers have confirmed the existence of more than 700 alien planets around various stars, with more than 2,000 additional candidates currently under study by astronomers around the world. Earlier this month, astronomers using the Hungarian-made Automated Telescope Network announced the discovery of four more massive alien worlds, each one orbiting a separate star. Confirmation requires different methods from those used in Dr. Cassan's estimates. "We are now facing the idea that planets are all over the place," said astrophysicist John Southworth at the U.K.'s Keele University, who was not part of these research projects. Indeed, the discovery of a planet orbiting two stars has gone from the startling to the commonplace within a few months. Astronomers using the Kepler space telescope found the first known double-star planet just last September—Kepler-16, a gassy oddball orb the size of Saturn that circles a pair of stars 200 light years from Earth, like the planet Tatooine in the "Star Wars" films. On Wednesday, Dr. Welsh and his colleagues announced that they have confirmed the existence of two more alien worlds in distinctive double-star solar systems in the constellation Cygnus. The first, called Kepler-34b, orbits its two small stars in a solar system about 4,900 light years from Earth. The second planet, called Kepler-35b, orbits a set of twin stars about 5,400 light years away. Both planets are "fluffy" gaseous Saturn-size worlds, where temperatures quickly rise and fall from balmy to near boiling, as the planets periodically swing close to their stars and then spin away in an elliptical orbital minuet, the researchers reported. Fermilab particle astrophysicist Craig Hogan made waves with a mind-boggling proposition: The 3D universe in which we appear to live is no more than a hologram. Now he is building the most precise clock of all time to directly measure whether our reality is an illusion. The idea that spacetime may not be entirely smooth – like a digital image that becomes increasingly pixelated as you zoom in – had been previously proposed by Stephen Hawking and others. Possible evidence for this model appeared last year in the unaccountable “noise” plaguing the GEO600 experiment in Germany, which searches for gravitational waves from black holes. To Hogan, the jitteriness suggested that the experiment had stumbled upon the lower limit of the spacetime pixels’ resolution. Black hole physics, in which space and time become compressed, provides a basis for math showing that the third dimension may not exist at all. In this two-dimensional cartoon of a universe, what we perceive as a third dimension would actually be a projection of time intertwined with depth. If this is true, the illusion can only be maintained until equipment becomes sensitive enough to find its limits. “You can’t perceive it because nothing ever travels faster than light,” says Hogan. “This holographic view is how the universe would look if you sat on a photon.” Not everyone agrees with this idea. Its foundation is formed with math rather than hard data, as is common in theoretical physics. 
And although a holographic universe would answer many questions about black hole physics and other paradoxes, it clashes with classical geometry, which demands a universe of smooth, continuous paths in space and time. "So we want to build a machine which will be the most sensitive measurement ever made of spacetime itself," says Hogan. "That's the holometer." The name "holometer" was first used for a surveying device created in the 17th century, an "instrument for the taking of all measures, both on the earth and in the heavens." Hogan felt this fit with the mission of his "holographic interferometer," which is currently being developed at Fermilab's largest laser lab. In a classical interferometer, first developed in the late 1800s, a laser beam in a vacuum hits a mirror called a beamsplitter, which breaks it in two. The two beams travel at different angles down the length of two vacuum pipe arms before hitting mirrors at the end and bouncing back to the beamsplitter. Since light in a vacuum travels at a constant speed, the two beams should arrive back at the beamsplitter at precisely the same time, with their waves in sync to reform a single beam. Any interfering vibration would change the frequency of the waves ever so slightly over the distance they traveled. When they returned to the beamsplitter, they would no longer be in sync. In the holometer, this loss of sync looks like a shaking or vibration that represents jitters in spacetime itself, like the fuzziness of radio coming over too little bandwidth. The holometer's precision means that it doesn't have to be large; at 40 meters in length, it is only one hundredth of the size of current interferometers, which measure gravitational waves from black holes and supernovas. Yet because the spacetime frequencies it measures are so rapid, it will be seven orders of magnitude more precise over very short time intervals than any atomic clock in existence. "The shaking of spacetime occurs at a million times per second, a thousand times what your ear can hear," said Fermilab experimental physicist Aaron Chou, whose lab is developing prototypes for the holometer. "Matter doesn't like to shake at that speed. You could listen to gravitational frequencies with headphones." The whole trick, Chou says, is to prove that the vibrations don't come from the instrument. Using technology similar to that in noise-cancelling headphones, sensors outside the instrument detect vibrations and shake the mirror at the same frequency to cancel them. Any remaining shakiness at high frequency, the researchers propose, will be evidence of blurriness in spacetime. "With the holometer's long arms, we're magnifying spacetime's uncertainty," Chou said.

[Figure: Conceptual design of the Fermilab holometer]

Hogan's team liked the holometer idea so much they decided to build two. One on top of the other, the machines can confirm one another's measurements. This month, having successfully built a 1-meter prototype of the 40-meter arm, they will weld the parts of the first of the vacuum arms together. Hogan expects the holometer to begin collecting data next year. "People trying to tie reality together don't have any data, just a lot of beautiful math," said Hogan. "The hope is that this gives them something to work with."
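To make the beam-splitting description above a little more concrete, the toy model below shows how a Michelson-style interferometer converts a tiny difference in arm length into a change in recombined light intensity. It is a deliberately simplified sketch with assumed numbers (an infrared laser wavelength and exaggerated arm-length differences); it is not a model of the holometer's actual readout chain.

```python
import math

# Toy Michelson interferometer: two beams split at a beamsplitter, travel down
# arms whose lengths differ by arm_difference_m, reflect, and recombine. The
# output intensity depends on the phase difference accumulated over the round trip.
WAVELENGTH_M = 1.064e-6          # assumed laser wavelength

def recombined_intensity(arm_difference_m):
    """Fraction of input light at the output port for a given arm-length difference."""
    phase = 4 * math.pi * arm_difference_m / WAVELENGTH_M   # the round trip doubles the path
    return math.cos(phase / 2) ** 2

for delta in (0.0, WAVELENGTH_M / 8, WAVELENGTH_M / 4):
    print(f"arm difference {delta:.3e} m -> output fraction {recombined_intensity(delta):.3f}")
# Equal arms give full brightness; a quarter-wavelength difference gives darkness.
# This is why tiny length jitters show up as measurable intensity fluctuations.
```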
Where it is Located

The Fermilab Holometer in Illinois is currently under construction and will be the world's most sensitive laser interferometer when complete, surpassing the sensitivity of GEO600.

GEO600 is a gravitational wave detector located near Sarstedt, Germany. This instrument, and its sister interferometric detectors, when operational, are some of the most sensitive gravitational wave detectors ever designed. They are designed to detect relative changes in distance of the order of one part in 10^21, about the size of a single atom compared to the distance from the Sun to the Earth. GEO600 is capable of detecting gravitational waves in the frequency range 50 Hz to 1.5 kHz.[1] Construction on the project began in 1995.[2]

The Holometer also surpasses the sensitivity of the LIGO systems. In LIGO, when a gravitational wave passes through the interferometer, the space-time in the local area is altered. Depending on the source of the wave and its polarization, this results in an effective change in length of one or both of the cavities. The effective length change between the beams will cause the light currently in the cavity to become very slightly out of phase with the incoming light. The cavity will therefore periodically get very slightly out of resonance, and the beams, which are tuned to destructively interfere at the detector, will have a very slight periodically varying detuning. This results in a measurable signal. Note that the effective length change and the resulting phase change are a subtle tidal effect that must be carefully computed, because the light waves are affected by the gravitational wave just as much as the beams themselves.[6] After an equivalent of approximately 75 trips down the 4 km length to the far mirrors and back again, the two separate beams leave the arms and recombine at the beam splitter. The beams returning from the two arms are kept out of phase so that when the arms are both in resonance (as when there is no gravitational wave passing through), their light waves subtract, and no light should arrive at the photodiode. When a gravitational wave passes through the interferometer, the distances along the arms of the interferometer are shortened and lengthened, causing the beams to become slightly less out of phase, so some light arrives at the photodiode, indicating a signal. Light that does not contain a signal is returned to the interferometer using a power recycling mirror, thus increasing the power of the light in the arms. In actual operation, noise sources can cause movement in the optics which produces similar effects to real gravitational wave signals; a great deal of the art and complexity in the instrument is in finding ways to reduce these spurious motions of the mirrors. Observers compare signals from both sites to reduce the effects of noise.

Observations

[Photo: Western leg of the LIGO interferometer on the Hanford Reservation]

Based on current models of astronomical events, and the predictions of the general theory of relativity, gravitational waves that originate tens of millions of light years from Earth are expected to distort the 4 kilometer mirror spacing by about 10^-18 m, less than one-thousandth the "diameter" of a proton. Equivalently, this is a relative change in distance of approximately one part in 10^21.
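Those two numbers are the same statement in different units: a fractional length change (strain) of one part in 10^21 applied across LIGO's 4 km arms gives the absolute displacement quoted above. A one-line check (my arithmetic, not part of the excerpt):

```python
h = 1e-21          # strain: relative length change of one part in 10^21
L = 4e3            # LIGO arm length in metres
print(h * L)       # 4e-18 m, i.e. the "about 10^-18 m" displacement quoted above
```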
A typical event which might cause a detection would be the late-stage inspiral and merger of two 10-solar-mass black holes, not necessarily located in the Milky Way galaxy, which is expected to result in a very specific sequence of signals often summarized by the slogan: chirp, burst, quasi-normal mode ringing, exponential decay. The Holometer may be capable of meeting or exceeding the sensitivity required to detect the smallest units in the universe, called Planck units.[1]

Planck Units

In physics, Planck units are physical units of measurement defined exclusively in terms of the five universal physical constants listed below, in such a manner that these five physical constants take on the numerical value of 1 when expressed in terms of these units. Planck units elegantly simplify particular algebraic expressions appearing in physical law. Originally proposed in 1899 by German physicist Max Planck, these units are also known as natural units because the origin of their definition comes only from properties of nature and not from any human construct. Planck units are only one system of natural units among other systems, but are considered unique in that these units are not based on properties of any prototype object or particle (which would be arbitrarily chosen), but only on properties of free space. The constants that Planck units, by definition, normalize to 1 are:
- the gravitational constant, G;
- the reduced Planck constant, ħ;
- the speed of light in a vacuum, c;
- the Coulomb constant, 1/(4πε0) (sometimes ke or k);
- the Boltzmann constant, kB (sometimes k).
Each of these constants can be associated with at least one fundamental physical theory: c with special relativity, G with general relativity and Newtonian gravity, ħ with quantum mechanics, ε0 with electrostatics, and kB with statistical mechanics and thermodynamics. Planck units have profound significance for theoretical physics since they simplify several recurring algebraic expressions of physical law by nondimensionalization. They are particularly relevant in research on unified theories such as quantum gravity. Physicists sometimes refer to Planck units as "God's units". Planck units are free of anthropocentric arbitrariness. That is to say, they are not based upon the length of some king's thumb, or the length of a stride. They are based upon mathematics. Some physicists argue that communication with extraterrestrial intelligence would have to employ such a system of units in order to be understood.[1] Unlike the meter and second, which exist as base units in the SI system for (human) historical reasons, the Planck length and Planck time are conceptually linked at a fundamental physical level. All systems of measurement feature base units: in the International System of Units (SI), for example, the base unit of length is the meter. In the system of Planck units, the Planck base unit of length is known simply as the Planck length, the base unit of time is the Planck time, and so on. These units are derived from the five universal physical constants.

Table 2: Base Planck units
Name               | Dimension           | Expression     | Value[3] (SI units)
Planck length      | Length (L)          | √(ħG/c³)       | 1.616 199(97) × 10^-35 m
Planck mass        | Mass (M)            | √(ħc/G)        | 2.176 51(13) × 10^-8 kg
Planck time        | Time (T)            | √(ħG/c⁵)       | 5.391 06(32) × 10^-44 s
Planck charge      | Electric charge (Q) | √(4πε0ħc)      | 1.875 545 956(41) × 10^-18 C
Planck temperature | Temperature (Θ)     | √(ħc⁵/(G·kB²)) | 1.416 833(85) × 10^32 K
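As a numerical sanity check on the table above, the expressions in the third column can be evaluated directly from standard SI values of the five constants. The constant values below are ones I am supplying for illustration; they reproduce the tabulated figures to three or four significant digits.

```python
from math import pi, sqrt

# Standard SI values for the defining constants (supplied here for illustration).
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
k_B  = 1.380649e-23      # Boltzmann constant, J/K
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

planck_length      = sqrt(hbar * G / c**3)            # ~1.616e-35 m
planck_mass        = sqrt(hbar * c / G)               # ~2.176e-8 kg
planck_time        = sqrt(hbar * G / c**5)            # ~5.391e-44 s
planck_charge      = sqrt(4 * pi * eps0 * hbar * c)   # ~1.876e-18 C
planck_temperature = sqrt(hbar * c**5 / (G * k_B**2)) # ~1.417e32 K

for name, value in [("length (m)", planck_length), ("mass (kg)", planck_mass),
                    ("time (s)", planck_time), ("charge (C)", planck_charge),
                    ("temperature (K)", planck_temperature)]:
    print(f"Planck {name}: {value:.4e}")
```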
Now, this is why the Universe is not infinite: these are, mathematically, based upon the data available in the universe itself, the smallest units on the measuring tape, so to speak. When you have a finite measuring system, there is no possibility for an infinite universe. Fermilab states: "Everyone is familiar these days with the blurry and pixelated images, or noisy sound transmission, associated with poor internet bandwidth. The Holometer seeks to detect the equivalent blurriness or noise in reality itself, associated with the ultimate frequency limit imposed by nature."[2] Craig Hogan, a particle astrophysicist at Fermilab, says of the experiment, "What we're looking for is when the lasers lose step with each other. We're trying to detect the smallest unit in the universe. This is really great fun, a sort of old-fashioned physics experiment where you don't know what the result will be." Experimental physicist Hartmut Grote of the Max Planck Institute in Germany states that although he is skeptical that the apparatus will successfully detect the holographic fluctuations, if the experiment is successful "it would be a very strong impact to one of the most open questions in fundamental physics. It would be the first proof that space-time, the fabric of the universe, is quantized."[1] These fluctuations are caused by the fabric of spacetime being rippled by intense gravity located in a small space, like you would have with coalescing neutron stars or, hopefully, with black holes. Neutron stars are already exotic matter, formed when suns roughly eight or more times the mass of our Sun burn out. They lose the battle with gravity, and the electrons are crushed onto the protons of the hydrogen, helium, and carbon that were formed during those stars' fusion process. They're already quite dense and looking for other dense bodies in space with which to coalesce, or come together. They rarely hit head-on, but rather come together like two nickels spinning their way down one of those coin-collecting funnels you see in the mall. They sort of dance around each other, spinning faster and faster as they reduce their orbits around each other. As they do this they pull ripples into the fabric of spacetime. Those ripples travel at the speed of light through space. The fabric forms the pathway for light when it travels through space as a wave. LIGO, or the holometer, uses laser light beams like the threads in a spider web. When the ripples come through, the sensor can detect them by slight changes in the length of the path for each laser beam. Two beams are better than one, and the holometer's four beams are far better than two at sensing the "vibrations" of these ripples. The hypothesis that holographic noise may be observed in this manner has been criticized on the grounds that the theoretical framework used to derive the noise violates Lorentz invariance. Lorentz-invariance violation is, however, already very strongly constrained, an issue that has been very unsatisfactorily addressed in the mathematical treatment.[4] Okay, the "Lorentz invariance" here is really a common mistake. It is actually called covariance, not invariance. In standard physics, Lorentz symmetry is "the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space".[1] Now, this can be a little counter-intuitive, meaning that, knowing what we know now, it doesn't seem to make sense. Sort of like dropping a ball from six feet above the ground and expecting it to fall up instead of down.
That would be counter-intuitive. Efforts to test Lorentz symmetry at ever-increasing sensitivities have opened up new perspectives in theoretical and experimental physics. While high-energy accelerators have traditionally been designed to probe fundamental particles at ever-smaller microscopic scales, it has been known for some time that data from many such experiments could contain information about weak Lorentz-violating background fields that exist in space on scales the size of the solar system and greater. Among the basic signals being sought are sidereal variations arising from the rotation of the Earth relative to the fixed background stars. I mean, we really can only imagine perspectives other than standing on Earth for looking at the universe, and at very quantum behaviours as well. We have to rely on the laws we have proven to be true, but they may only be absolute in our own neighbourhood, if you know what I mean.

Lorentz covariance is a related concept, covariance being a measure of how much two variables change together. Lorentz covariance (from Hendrik Lorentz) is a key property of spacetime that follows from the special theory of relativity. Lorentz covariance has two distinct, but closely related, meanings:
1. A physical quantity is said to be Lorentz covariant if it transforms under a given representation of the Lorentz group. According to the representation theory of the Lorentz group, these quantities are built out of scalars, four-vectors, four-tensors, and spinors. In particular, a scalar (e.g. the space-time interval) remains the same under Lorentz transformations and is said to be a "Lorentz invariant" (i.e. it transforms under the trivial representation).
2. An equation is said to be Lorentz covariant if it can be written in terms of the Lorentz covariant quantities I just mentioned. No, this will not be on the test, so relax a little. The key property of such equations is that if they hold in one inertial frame, then they hold in any inertial frame; this follows from the result that if all the components of a tensor vanish in one frame, they vanish in every frame. This condition is a requirement according to the principle of relativity, i.e. all non-gravitational laws must make the same predictions for identical experiments taking place at the same spacetime event in two different inertial frames of reference.

The Holometer has been criticized by some because it makes it possible, and remember it is only possible, not probable, for a key property to be different in two different frames of reference. If that happens, then the holometer can measure that difference, and we will be on to the next step in understanding time and space. Keep in mind that these variations, which would be Lorentz-covariance violations, theoretically would be measurable only at Planck-sized units and only in the Standard Model Extension. The SME is an effective field theory that contains the Standard Model, General Relativity, and all possible operators that break Lorentz symmetry. I am not going any further with those definitions, because the exam at the end of this broadcast would be too large. I'm just kidding. I am sure you get all of this the first time through.
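Since the two definitions above lean on the idea of a "Lorentz invariant" scalar, here is a small numerical illustration of what that means in practice: boosting an event to a moving frame changes its time and position coordinates, but leaves the spacetime interval untouched. This is textbook special relativity with arbitrary example numbers of my own choosing, not anything specific to the Holometer or the SME.

```python
from math import sqrt

# Numerical illustration of "Lorentz invariant": boost an event (t, x) by velocity v
# and check that the spacetime interval s^2 = (ct)^2 - x^2 does not change.
c = 2.99792458e8                    # speed of light, m/s

def boost(t, x, v):
    """Lorentz boost along x with velocity v."""
    gamma = 1.0 / sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

def interval_sq(t, x):
    return (c * t) ** 2 - x ** 2

t, x = 1.0e-6, 150.0                # an arbitrary event: 1 microsecond, 150 metres
for v in (0.1 * c, 0.5 * c, 0.9 * c):
    t2, x2 = boost(t, x, v)
    print(f"v = {v/c:.1f}c  interval before {interval_sq(t, x):.6e}  after {interval_sq(t2, x2):.6e}")
# The two columns agree (up to floating-point rounding): the interval is a Lorentz scalar.
```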
The significance is that we once thought that lasers would bend slightly when gravitational waves, made by spinning dense bodies in space, passed through. We learned that this was correct, but that 10^-21 was not small enough. With a few hundred million dollars, we got that down to 10^-23, and we could measure the ripples. We also discovered particles blinking into our dimension from another dimension. That's pretty wild, but it didn't really surprise anyone much. We expected it. There was no other explanation for some behaviors of electromagnetism, but we weren't sure until LIGO detected them. Now, we need something with much more resolution. Sort of like using a micrometer instead of a tape measure to tell how thick something is. The holometer can detect things about one million times smaller than the smallest thing LIGO can sense. Zooming in that small, we should be able to see the universe vibrating. This is where we will see just how much we can get past these Lorentz limitations. It isn't that we're going so small. It is that, when it works, we can see the big picture.

White House Grounds NASA

The budget coming Monday from the Obama administration will send the NASA division that launches rovers to Mars and probes to Jupiter crashing back to Earth. Scientists briefed on the proposed budget said that the president's plan drops funding for planetary science at NASA from $1.5 billion this year to $1.2 billion next year, with further cuts continuing through 2017. It would eat at NASA's Mars exploration program, which, after two high-profile failures in 1999, has successfully sent three probes into Martian orbit and landed three more on the planet's surface. "We're doing all this great science and taking the public along with us," said Jim Bell, an Arizona State University scientist and president of the Planetary Society who works on NASA's Mars rover Opportunity. "Pulling the rug out from under it is going to be really devastating." If approved, the president's budget will sever NASA's partnership with the European Space Agency to send probes to Mars in 2016 and 2018. Agreed upon in 2009, the deal had NASA paying $1.4 billion, and the Europeans $1.2 billion, for the two missions. In an e-mail, NASA spokesman David Weaver wrote, "Consistent with the tough choices being made across the Federal government . . . NASA is reassessing its current Mars exploration initiatives to maximize what can be achieved scientifically, technologically and in support of our future human missions." A congressional champion of space exploration said that the budget slashing "absolutely will not fly" with the House committee that oversees NASA. "You don't cut spending for critical scientific research endeavors that have immeasurable benefit to the nation and inspire the human spirit of exploration we all have," Rep. John Abney Culberson (R-Tex.) said. Last fall, NASA handed out $46 million to contractors to begin building instruments for the 2016 mission. But earlier this week, Alvaro Gimenez, the top scientist at the European agency, told the BBC that NASA's continued participation in the partnership was "highly unlikely." "The impact of the cuts . . . will be to immediately terminate the Mars deal with the Europeans," said G. Scott Hubbard, a Stanford University planetary scientist, formerly of NASA, who revived the agency's Mars exploration program after the 1999 failures. "It's a scientific tragedy and a national embarrassment." The 2016 mission, called the Trace Gas Orbiter, was to sniff the Martian atmosphere for methane, which could signal the existence of microbes on the surface. The 2018 mission was to land a rover to gather rocks and soil for eventual return to Earth.
An official familiar with deliberations at NASA said the agency is still hoping to launch a robotic Mars mission in 2018, although the goals and hardware would probably differ from those of the joint European project. With austere budgets expected across the federal government, NASA is finding itself squeezed. Last year, Congress ordered the agency to build a giant new rocket and a deep-space crew capsule. Congress also told the agency to finish the over-budget James Webb Space Telescope, now expected to launch no earlier than 2018. The executive branch's budget request, unveiled every February, is used by federal agencies to set spending priorities. Details are often decided by officials in the White House's Office of Management and Budget. On Wednesday, planetary scientists accused the OMB of ignoring advice given to NASA by its scientific advisers. In May, planetary scientists told the agency they favored two big projects: the Mars missions, or, if those proved too expensive, a probe to explore Europa, an intriguing moon of Jupiter with an ice-covered ocean and, within it, conditions possibly favorable for life. "They don't seem to be interested in finding life in the universe or letting the experts manage their own program," Hubbard said of the OMB. "Low-level workers have substituted their judgment for 1,700 scientists and the National Academy of Sciences." Culberson said the House committee would continue to push for the Europa mission, which Congress directed the agency to study this year.

NASA LINK

What do you think? To explore or not to explore? Is national prestige worth the cost? Do you think that he who controls Space controls the Earth? Do you think that we should make cuts elsewhere, to keep exploring? Do you think that Space will ever pay for itself? Here is what astronaut Gene Cernan thinks. See if you agree with him.

ASTRONAUT LINK

Now, since this is really tied to the economy and the science of money, let's talk about the global money situation. Right now, the world is focused on the Eurozone. This is what happens when a group of states that once had their own currencies and economies come together to form a single currency. The problem arose because the European Union, which is very labor-friendly and not so production-friendly, has allowed a welfare state to develop, more so in some countries than others. This is what happens when a windfall of value surges into an economy that does not become more productive to compensate for the surge. In other words, they started with dessert and then went straight to a long happy hour. So, the Eurozone adopted lazy kids with 30-hour work weeks and free medical care, and they went broke. Here is the latest news on the current health of the Eurozone. This is important, because we have already had $2.2 trillion sucked out of repaid bank bailout funds that were handed over to the Eurozone to shore up credit markets. There was not a single ounce of value exchanged. We just gave them our tax dollars that had been laundered through the Federal Reserve.

ECONOMY LINK

As I said, all of this was shored up with American dollars. And when the value ran out of the dollar, the Fed just printed more. Jay Leno said it best when he made a Super Bowl comment about this: the trouble with the global Super Bowl is that, at halftime, China has the ball and we are down by $16 trillion. The dollar will collapse unless we stop this crazy government spending that does not generate any form of return or any sort of improved ability to produce.
DOLLAR COLLAPSE

IRAN WAR HAS ALREADY BEGUN