Off the Air for a Bit (notice as of 9.5.14)

I will probably be “off the net” for the next month or two, due to some unexpected family issues.

To those who have so kindly followed me, I sincerely hope you will KEEP me in your follow list until I am able to return to my regular blogging routine. Otherwise, upon my return, I shall hope to quickly win you back. ;)

In the meantime, my very best wishes.

from-dust-of-stars
from-dust-of-stars:

New solutions needed to recycle fracking water

Rice University scientists seek long-term answers to stem rising water use at wells

Editors: David Ruth and Mike Williams
HOUSTON – (Aug. 27, 2014)

Rice University scientists have performed a detailed analysis of water produced by hydraulic fracturing (aka fracking) of three gas reservoirs and suggested environmentally friendly remedies are needed to treat and reuse it.

Pic at top: http://cdn.insurancequotes.org/wp-content/uploads/2013/07/Fracking_diagram_jpg_800x1000_q100.jpg

Chart source: http://news.rice.edu/wp-content/uploads/2014/08/0902_FRACKING-1-WEB.jpg

Chart notes: Rice University researchers performed a detailed analysis of “produced” water from three underground shale gas formations subject to hydraulic fracturing. The chart shows the amounts of total carbon (TC), nonpurgeable organic carbon (NPOC) and total inorganic carbon (TIC) in the samples. Courtesy of the Barron Research Group

More advanced recycling rather than disposal of “produced” water pumped back out of wells could calm fears of accidental spillage and save millions of gallons of fresh water a year, said Rice chemist Andrew Barron. He led the study that appeared this week in the Royal Society of Chemistry journal Environmental Science: Processes and Impacts.

The amount of water used by Texas drillers for fracking may only be 1.5 percent of that used by farming and municipalities, but it still amounts to as much as 5.6 million gallons a year for the Texas portion of the Haynesville formation and 2.8 million gallons for Eagle Ford. That, Barron said, can place a considerable burden on nearby communities.

Barron noted that shale gas wells, the focus of the new study, make most of their water within the first few weeks of production. After that, a few barrels a day are commonly produced.

The project began with chemical analysis of fracking fluids pumped through gas-producing shale formations in Texas, Pennsylvania and New Mexico. Barron and the study’s lead author, Rice alumnus Samuel Maguire-Boyle, found that shale oil and gas-produced water does not contain significant amounts of the polyaromatic hydrocarbons that could pose health hazards; but minute amounts of other chemical compounds led them to believe the industry would be wise to focus its efforts on developing nonchemical treatments for fracking and produced water.

Currently, fracturing fluid pumped into a well bore to loosen gas and oil from shale is either directed toward closed fluid-capture systems when it comes out or is sent back into the ground for storage. But neither strategy is an effective long-term solution, Barron said.

“Ultimately, it will be necessary to clean produced water for reuse in fracking,” he said. “In addition, there is the potential to recover the fraction of hydrocarbon in the produced water.”

Fracking fluid is 90 percent water, Barron said. Eight to nine percent of the fluid consists of sand or ceramic proppant particles that wedge themselves into tiny fractures in the rock, holding open paths for gas and oil to escape to the production well.

The remaining 1 or 2 percent, however, may contain salts, friction reducers, scale inhibitors, biocides, gelling agents, gel breakers and organic and inorganic acids. The organic molecules either occur naturally or are a residue from the added components.
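For a sense of scale, here is a rough back-of-the-envelope sketch (mine, not from the release) of what those percentages translate to in absolute volumes. The 5-million-gallon job size and the exact midpoint splits are assumptions for illustration only:

# Illustrative breakdown of fracking-fluid volume for a hypothetical job.
# The 5-million-gallon total and the exact split are assumptions for this example;
# the 90 / 8-9 / 1-2 percent figures come from the release above.
TOTAL_GALLONS = 5_000_000          # hypothetical volume pumped for one well

fractions = {
    "water": 0.90,                                       # ~90 percent water
    "proppant (sand/ceramic)": 0.085,                    # ~8-9 percent, midpoint used here
    "salts, friction reducers, biocides, etc.": 0.015,   # ~1-2 percent, midpoint used here
}

for component, frac in fractions.items():
    print(f"{component:42s} {frac * TOTAL_GALLONS:>12,.0f} gallons")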

The researchers found that most of the salt, organic matter and other minerals that appear in produced water from shale gas reservoirs originate in the connate waters trapped in the dense rock over geologic time scales. These should be of little concern, they wrote.

But they also found that produced water contained potentially toxic chlorocarbons and organobromides, probably formed from interactions between high levels of bacteria in the water and salts or chemical treatments used in fracking fluids.

Barron said industry sometimes uses chlorine dioxide or hypochlorite treatments to recycle produced water for reuse, but these treatments can actually enhance bacteria’s ability to convert naturally occurring hydrocarbons to chlorocarbons and organobromides. The researchers suggested this transition could happen either downhole or in storage ponds where produced water is treated.

“We believe the industry needs to investigate alternative, nonchemical treatments to avoid the formation of compounds that don’t occur in nature,” Barron said.

Primarily, he said, the researchers want their analysis to anticipate future problems as industry develops processes to remove organic compounds from water bound for reuse.

Barron said the new paper should be of particular interest to international producers who are preparing to ramp up gas-recovery efforts in the United Kingdom, which recently announced plans to expand drilling, and other European countries.

“As the U.K. and other European countries are looking to start hydraulic fracturing, it is important that they adopt best practices at the start, as opposed to evolving over time, as it has occurred here in the United States,” he said.

The Robert A. Welch Foundation and the Welsh Government Sêr Cymru Program funded the research. Barron is Rice’s Charles W. Duncan Jr.–Welch Professor of Chemistry and a professor of materials science and nanoengineering.

###

Read the abstract at http://pubs.rsc.org/en/content/articlelanding/2014/em/c4em00376d#!divAbstract

Follow Rice News and Media Relations via Twitter @RiceUNews

Related Materials: Barron Research Group: http://barron.rice.edu/Barron.html

Rice University researchers performed a detailed analysis of “produced” water from three underground shale gas formations subject to hydraulic fracturing. The chart shows the amounts of total carbon (TC), nonpurgeable organic carbon (NPOC) and total inorganic carbon (TIC) in the samples. (Credit: Barron Research Group/Rice University)

http://news.rice.edu/wp-content/uploads/2014/08/0902_fracking-2-web.jpg

Rice University chemist Andrew Barron led an analysis of water produced by hydraulic fracturing of three gas reservoirs and suggested environmentally friendly remedies are needed to treat and reuse it. (Credit: Jeff Fitlow/Rice University)

Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation’s top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,920 undergraduates and 2,567 graduate students, Rice’s undergraduate student-to-faculty ratio is just over 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is highly ranked for best quality of life by the Princeton Review and for best value among private universities by Kiplinger’s Personal Finance.

Source: http://news.rice.edu/2014/08/28/new-solutions-needed-to-recycle-fracking-water/

from-dust-of-stars: More on fracking and the potential to reuse water from the fracking process. This is a huge deal for the environment.

from-dust-of-stars
from-dust-of-stars:

Detecting neutrinos, physicists look into the heart of the sun

Date: August 27, 2014
Source: University of Massachusetts at Amherst

Summary: Using one of the most sensitive neutrino detectors on the planet, physicists have directly detected neutrinos created by the ‘keystone’ proton-proton fusion process going on at the sun’s core for the first time.

Pic: Scientists report for the first time they have directly detected neutrinos created by the “keystone” proton-proton (pp) fusion process going on at the sun’s core. Credit: NASA/SDO

Using one of the most sensitive neutrino detectors on the planet, an international team of physicists including Andrea Pocar, Laura Cadonati and doctoral student Keith Otis at the University of Massachusetts Amherst report in the current issue of Nature that for the first time they have directly detected neutrinos created by the “keystone” proton-proton (pp) fusion process going on at the sun’s core.

The pp reaction is the first step of a reaction sequence responsible for about 99 percent of the Sun’s power, Pocar explains. Solar neutrinos are produced in nuclear processes and radioactive decays of different elements during fusion reactions at the Sun’s core. These particles stream out of the star at nearly the speed of light, as many as 420 billion hitting every square inch of the Earth’s surface per second.
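For scale, a quick unit conversion of that per-square-inch figure into the per-square-centimeter flux more commonly quoted for solar neutrinos (my own arithmetic, not from the article):

# Convert the quoted solar neutrino flux from per square inch to per square centimeter.
FLUX_PER_SQ_INCH = 420e9          # neutrinos per square inch per second (figure from the article)
SQ_CM_PER_SQ_INCH = 2.54 ** 2     # 6.4516 cm^2 in one square inch

flux_per_sq_cm = FLUX_PER_SQ_INCH / SQ_CM_PER_SQ_INCH
print(f"{flux_per_sq_cm:.2e} neutrinos per cm^2 per second")  # roughly 6.5e10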

Because they only interact through the nuclear weak force, they pass through matter virtually unaffected, which makes them very difficult to detect and distinguish from trace nuclear decays of ordinary materials, he adds.

The UMass Amherst physicist, one principal investigator on a team of more than 100 scientists, says, “With these latest neutrino data, we are directly looking at the originator of the sun’s biggest energy producing process, or chain of reactions, going on in its extremely hot, dense core. While the light we see from the Sun in our daily life reaches us in about eight minutes, it takes tens of thousands of years for energy radiating from the sun’s center to be emitted as light.”

“By comparing the two different types of solar energy radiated, as neutrinos and as surface light, we obtain experimental information about the Sun’s thermodynamic equilibrium over about a 100,000-year timescale,” Pocar adds. “If the eyes are the mirror of the soul, with these neutrinos, we are looking not just at its face, but directly into its core. We have glimpsed the sun’s soul.”

“As far as we know, neutrinos are the only way we have of looking into the Sun’s interior. These pp neutrinos, emitted when two protons fuse forming a deuteron, are particularly hard to study. This is because they are low energy, in the range where natural radioactivity is very abundant and masks the signal from their interaction.”

The Borexino instrument, located deep beneath Italy’s Apennine Mountains, detects neutrinos as they interact with the electrons of an ultra-pure organic liquid scintillator at the center of a large sphere surrounded by 1,000 tons of water. Its great depth and many onion-like protective layers maintain the core as the most radiation-free medium on the planet.

Indeed, it is the only detector on Earth capable of observing the entire spectrum of solar neutrinos simultaneously. Neutrinos come in three types, or “flavors.” Those from the Sun’s core are of the “electron” flavor, and as they travel away from their birthplace, they oscillate, or change, into the two other flavors, “muon” and “tau.” With this and previous solar neutrino measurements, the Borexino experiment has strongly confirmed this behavior of the elusive particles, Pocar says.

One of the crucial challenges in using Borexino is the need to control and precisely quantify all background radiation. Pocar says the organic scintillator at Borexino’s center is filled with a benzene-like liquid derived from “really, really old, millions-of-years-old petroleum,” among the oldest they could find on Earth.

“We needed this because we want all the Carbon-14 to have decayed, or as much of it as possible, because carbon-14 beta decays cover the neutrino signals we want to detect. We know there are only three atoms of C14 for each billion, billion atoms in the scintillator, which shows how ridiculously clean it is.”

A related problem the physicists discuss in their new paper is that when two C14 atoms in the scintillator decay simultaneously, an event they call a “pileup,” its signature is similar to that of a pp solar neutrino interaction. In a great advance for the analysis, Pocar says, “Keith Otis figured out a way to solve the problem of statistically identifying and subtracting these pileup events from the data, which basically makes this new pp neutrino analysis process possible.”
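To get a feel for why that purity level and the pileup correction matter, here is a rough order-of-magnitude sketch. All of the inputs below (scintillator mass, molecule, resolving window, and reading the quoted ratio as per carbon atom) are my assumptions, not numbers from the paper:

import math

# --- assumptions for this sketch (not from the article) ---
SCINTILLATOR_TONNES = 280          # rough Borexino-scale scintillator mass, assumed
MOLAR_MASS_G = 120.0               # pseudocumene-like molecule (C9H12), assumed
CARBON_PER_MOLECULE = 9
C14_FRACTION = 3e-18               # "three atoms of C14 for each billion, billion atoms" (read here as per carbon atom)
C14_HALF_LIFE_S = 5730 * 3.156e7   # ~5,730 years, in seconds
AVOGADRO = 6.022e23

carbon_atoms = SCINTILLATOR_TONNES * 1e6 / MOLAR_MASS_G * AVOGADRO * CARBON_PER_MOLECULE
c14_atoms = carbon_atoms * C14_FRACTION
decay_rate = c14_atoms * math.log(2) / C14_HALF_LIFE_S   # C14 decays per second

# Accidental "pileup": two independent C14 decays landing within a resolving window tau
# mimic a single higher-energy, pp-neutrino-like event.  Rate ~ (decay rate)^2 * tau.
TAU_S = 230e-9                     # assumed ~230 ns pulse window, illustrative only
pileup_rate = decay_rate ** 2 * TAU_S

print(f"C14 decays: ~{decay_rate:.0f} per second in the whole scintillator")
print(f"pileup events: ~{pileup_rate * 86400:.0f} per day")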

Though detecting pp neutrinos was not part of the original National Science Foundation-sponsored Borexino experiment, “it’s a little bit of a coup that we could do it,” the astrophysicist says. “We pushed the detector sensitivity to a limit that has never been achieved before.”

Story Source: The above story is based on materials provided by University of Massachusetts at Amherst. Note: Materials may be edited for content and length.

Journal Reference: G. Bellini, J. Benziger, D. Bick, G. Bonfini, D. Bravo, B. Caccianiga, L. Cadonati, F. Calaprice, A. Caminata, P. Cavalcante, A. Chavarria, A. Chepurnov, D. D’Angelo, S. Davini, A. Derbin, A. Empl, A. Etenko, K. Fomenko, D. Franco, F. Gabriele, C. Galbiati, S. Gazzana, C. Ghiano, M. Giammarchi, M. Göger-Neff, A. Goretti, M. Gromov, C. Hagner, E. Hungerford, Aldo Ianni, Andrea Ianni, V. Kobychev, D. Korablev, G. Korga, D. Kryn, M. Laubenstein, B. Lehnert, T. Lewke, E. Litvinovich, F. Lombardi, P. Lombardi, L. Ludhova, G. Lukyanchenko, I. Machulin, S. Manecki, W. Maneschg, S. Marcocci, Q. Meindl, E. Meroni, M. Meyer, L. Miramonti, M. Misiaszek, M. Montuschi, P. Mosteiro, V. Muratova, L. Oberauer, M. Obolensky, F. Ortica, K. Otis, M. Pallavicini, L. Papp, L. Perasso, A. Pocar, G. Ranucci, A. Razeto, A. Re, A. Romani, N. Rossi, R. Saldanha, C. Salvo, S. Schönert, H. Simgen, M. Skorokhvatov, O. Smirnov, A. Sotnikov, S. Sukhotin, Y. Suvorov, R. Tartaglia, G. Testera, D. Vignaud, R. B. Vogelaar, F. von Feilitzsch, H. Wang, J. Winter, M. Wojcik, A. Wright, M. Wurm, O. Zaimidoroga, S. Zavatarelli, K. Zuber, G. Zuzel. Neutrinos from the primary proton–proton fusion process in the Sun. Nature, 2014; 512 (7515): 383 DOI: 10.1038/nature13702

Cite This Page: University of Massachusetts at Amherst. “Detecting neutrinos, physicists look into the heart of the sun.” ScienceDaily. ScienceDaily, 27 August 2014.

Source: http://www.sciencedaily.com/releases/2014/08/140827131652.htm

from-dust-of-stars: Oh, to see into the very heart of our Star, our Sun - to have “directly detected neutrinos created” by the pp fusion process at the core of our Sun. Freaking wow. Just, wow!

New technique uses fraction of measurements to efficiently find quantum wave functions

Contact: Peter Iglinski, University of Rochester
PUBLIC RELEASE DATE: 28-Aug-2014

The result of every possible measurement on a quantum system is coded in its wave function, which until recently could be found only by taking many different measurements of a system and estimating a wave function that best fit all those measurements. Just two years ago, with the advent of a technique called direct measurement, scientists discovered they could reliably determine a system’s wave function by “weakly” measuring one of its variables (e.g. position) and “strongly” measuring a complementary variable (momentum). Researchers at the University of Rochester have now taken this method one step forward by combining direct measurement with an efficient computational technique.
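Here is a toy numerical check of why the “weak position, strong momentum” trick works: the weak value of the position projector, postselected on zero momentum, comes out directly proportional to the wave function itself. This is my own sketch of the general direct-measurement idea, not the Rochester group’s procedure or code:

import numpy as np

rng = np.random.default_rng(2)
n = 16
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)                      # the unknown state, written in the position basis

# Postselect on zero momentum: <p=0|x> = 1/sqrt(n) for every x (discrete Fourier basis assumed).
bra_p0 = np.full(n, 1 / np.sqrt(n))

# Weak value of the projector |x><x| with preselection psi and postselection p=0:
#   W(x) = <p=0|x><x|psi> / <p=0|psi>,  which is proportional to psi(x).
weak_values = bra_p0 * psi / (bra_p0 @ psi)

# Check proportionality: the weak-value vector is parallel to psi (up to one overall constant).
overlap = abs(np.vdot(weak_values, psi)) / (np.linalg.norm(weak_values) * np.linalg.norm(psi))
print(f"parallel? overlap = {overlap:.6f}")     # ~1.0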

Pic: http://cdn.arstechnica.net/wp-content/uploads/2011/11/wavefunction-4ecaaa7-intro.png

The new method, called compressive direct measurement, allowed the team to reconstruct a quantum state at 90 percent fidelity (a measure of accuracy) using only a quarter of the measurements required by previous methods.
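For readers unfamiliar with the jargon, “fidelity” for pure states is usually the squared overlap between the reconstructed and true state vectors. A minimal sketch of that figure of merit (my illustration; the paper’s exact definition may differ, especially for mixed states):

import numpy as np

def fidelity(psi_true, psi_reconstructed):
    """Squared overlap |<psi_true|psi_rec>|^2 between two pure states."""
    a = psi_true / np.linalg.norm(psi_true)
    b = psi_reconstructed / np.linalg.norm(psi_reconstructed)
    return abs(np.vdot(a, b)) ** 2

# Toy example: a 192-dimensional state and a slightly noisy reconstruction of it.
rng = np.random.default_rng(0)
psi = rng.normal(size=192) + 1j * rng.normal(size=192)
noisy = psi + 0.2 * (rng.normal(size=192) + 1j * rng.normal(size=192))
print(f"fidelity ~ {fidelity(psi, noisy):.2f}")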

"We have, for the first time, combined weak measurement and compressive sensing to demonstrate a revolutionary, fast method for measuring a high-dimensional quantum state," said Mohammad Mirhosseini, a graduate student in the Quantum Photonics research group at the University of Rochester and lead author of a paper appearing today in Physical Review Letters.

The research team, which also included graduate students Omar Magaña-Loaiza and Seyed Mohammad Hashemi Rafsanjani, and Professor Robert Boyd, initially tested their method on a 192-dimensional state. Finding success with that large state, they then took on a massive, 19,200-dimensional state. Their efficient technique sped up the process 350-fold and took just 20 percent of the total measurements required by traditional direct measurement to reconstruct the state.

"To reproduce our result using a direct measurement alone would require more than one year of exposure time," said Rafsanjani. "We did the experiment in less than 48 hours."

While recent compressive sensing techniques have been used to measure sets of complementary variables like position and momentum, Mirhosseini explains that their method allows them to measure the full wave function.

Compression is widely used in the classical world of digital media, including recorded music, video, and pictures. The MP3s on your phone, for example, are audio files that have had bits of information squeezed out to make the file smaller at the cost of losing a small amount of audio quality along the way.

In digital cameras, the more pixels you can gather from a scene, the higher the image quality and the larger the file will be. But it turns out that most of those pixels don’t convey essential information that needs to be captured from the scene. Most of them can be reconstructed later. Compressive sensing works by randomly sampling portions from all over the scene, and using those patterns to fill in the missing information.

Similarly for quantum states, it is not necessary to measure every single dimension of a multidimensional state. It takes only a handful of measurements to get a high-quality image of a quantum system.
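To make the compressive-sensing idea concrete, here is a minimal, self-contained sketch that recovers a sparse vector from far fewer random measurements than its dimension, using simple iterative soft-thresholding. It is a generic textbook-style example, not the team’s algorithm; the dimension and “a quarter of the measurements” are borrowed from the article for flavor, and the sparsity level is an arbitrary choice:

import numpy as np

rng = np.random.default_rng(1)

# A 192-dimensional "state" with only a few nonzero entries (sparsity is what compression exploits).
n, k, m = 192, 8, 48                       # dimension, nonzeros, number of measurements (m << n)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                             # the few measurements we actually take

# Iterative soft-thresholding (ISTA) for min 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))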

The method introduced by Mirhosseini et al. has important potential applications in the field of quantum information science. This research field strives to make use of fundamental quantum effects for diverse applications, including secure communication, teleportation of quantum states, and ideally to perform quantum computation. This latter process holds great promise as a method that can, in principle, lead to a drastic speed-up of certain types of computation. All of these applications require the use of complicated quantum states, and the new method described here offers an efficient means to characterize these states.

###

Research funding was provided by the Defense Advanced Research Projects Agency’s (DARPA) Information in a Photon (InPho) program, U.S. Defense Threat Reduction Agency (DTRA), National Science Foundation (NSF), El Consejo Nacional de Ciencia y Tecnología (CONACYT) and Canadian Excellence Research Chair (CERC).

AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert! system.

Source: http://www.eurekalert.org/pub_releases/2014-08/uor-ntu082814.php

universal-abyss: Excellent - a revolutionary way to measure high-dimensional quantum states. The applications this may ultimately assist are numerous - for example, quantum computing, secure communication and, in principle, much faster computation of certain problems. This is an awesome leap.

Red Planet’s climate history uncovered in unique Martian meteorite

Date: August 27, 2014
Source: Florida State University

Summary: Was Mars — now a cold, dry place — once a warm, wet planet that sustained life? Research underway may one day answer those questions — and perhaps even help pave the way for future colonization of the Red Planet. By analyzing the chemical clues locked inside an ancient Martian meteorite known as Black Beauty, scientists are revealing the story of Mars’ ancient, and sometimes startling, climate history.

Pic: find a pic

Was Mars — now a cold, dry place — once a warm, wet planet that sustained life? And if so, how long has it been cold and dry?

Research underway at the National High Magnetic Field Laboratory may one day answer those questions — and perhaps even help pave the way for future colonization of the Red Planet. By analyzing the chemical clues locked inside an ancient Martian meteorite known as Black Beauty, Florida State University Professor Munir Humayun and an international research team are revealing the story of Mars’ ancient, and sometimes startling, climate history.

The team’s most recent finding of a dramatic climate change appeared in Nature Geoscience, in the paper “Record of the ancient Martian hydrosphere and atmosphere preserved in zircon from a Martian meteorite.”

The scientists found evidence for the climate shift in minerals called zircons embedded inside the dark, glossy meteorite. Zircons, which are also abundant in the Earth’s crust, form when lava cools. Among their intriguing properties, Humayun says, is that “they stick around forever.”

"When you find a zircon, it’s like finding a watch," Humayun said. "A zircon begins keeping track of time from the moment it’s born."

Last year, Humayun’s team correctly determined that the zircons in its Black Beauty sample were an astonishing 4.4 billion years old. That means, Humayun says, it formed during the Red Planet’s infancy and during a time when the planet might have been able to sustain life.
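The “watch” here is radiometric: once a zircon crystallizes, radioactive parent atoms trapped in it decay into daughter atoms at a known rate, so the measured daughter-to-parent ratio gives an age. The release does not say which decay system was used, so purely as an illustration, here is the standard 238U-to-206Pb age equation:

import math

LAMBDA_U238_PER_YEAR = 1.55125e-10    # decay constant of 238U (per year)

def u_pb_age_years(pb206_per_u238):
    """Age from the measured 206Pb/238U ratio: t = ln(1 + D/P) / lambda."""
    return math.log(1.0 + pb206_per_u238) / LAMBDA_U238_PER_YEAR

# A daughter/parent ratio near 0.98 corresponds to roughly 4.4 billion years.
print(f"{u_pb_age_years(0.98) / 1e9:.2f} billion years")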

"First we learned that, about 4.5 billion years ago, water was more abundant on Mars, and now we’ve learned that something dramatically changed that," said Humayun, a professor of geochemistry. "Now we can conclude that the conditions that we see today on Mars, this dry Martian desert, must have persisted for at least the past 1.7 billion years. We know now that Mars has been dry for a very long time."

The secret to Mars’ climate lies in the fact that zircons (ZrSiO4) contain oxygen, an element with three isotopes. Isotopes are atoms of the same element that have the same number of protons but a different number of neutrons — sort of like members of a family who share the same last name but have different first names.

On Mars, oxygen is distributed in the atmosphere (as carbon dioxide, molecular oxygen and ozone), in the hydrosphere (as water) and in rocks. In the thin, dry Martian atmosphere, the sun’s ultraviolet light causes unique shifts in the proportions in which the three isotopes of oxygen occur in the different atmospheric gases.

So when water vapor that has cycled through the Martian atmosphere condenses into the Martian soil, it can interact with and exchange oxygen isotopes with zircons in the soil, effectively writing a climate record into the rocks. A warm, wet Mars requires a dense atmosphere that filters out the ultraviolet light, making the unique isotope shifts disappear.
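One common way geochemists quantify those “unique shifts” is the triple-oxygen-isotope anomaly: ordinary mass-dependent processes move delta-17O and delta-18O together along a line with a slope of about 0.52, so any leftover deviation (often written as a capital Delta-17O) flags atmospheric photochemistry like the UV effects described above. A hedged sketch of that bookkeeping, with toy numbers; the paper’s exact reference line and notation may differ:

def cap_delta_17O(delta17O_permil, delta18O_permil, slope=0.52):
    """Mass-independent anomaly: deviation of d17O from the mass-dependent
    fractionation line d17O ~= slope * d18O (per-mil values vs. a standard)."""
    return delta17O_permil - slope * delta18O_permil

# Toy values: a zircon whose oxygen plots off the mass-dependent line.
print(f"Delta17O = {cap_delta_17O(3.1, 4.0):+.2f} per mil")   # nonzero => atmospheric (UV) signature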

In order to measure the proportions of the oxygen isotopes in the zircons, the team, led by scientist Alexander Nemchin, used a device called an ion microprobe. The instrument is in the NordSIMS facility at the Swedish Museum of Natural History, directed by team member Martin Whitehouse.

Because of these precise measurements, said Humayun, “we now have an isotopic record of how the atmosphere changed, with dates on it.”

The Black Beauty meteorite Humayun’s team is studying was discovered in the Sahara Desert in 2011. It’s also known as NWA 7533, which stands for Northwest Africa, the location where it was found.

In all, more than five pieces of Black Beauty were found by Bedouin tribesmen, who make a living scouring the Sahara for meteorites and fossils that they can sell. The zircons analyzed by Humayun’s team were from Black Beauty samples kept in Paris.

Story Source: The above story is based on materials provided by Florida State University. Note: Materials may be edited for content and length.

Journal Reference: A. A. Nemchin, M. Humayun, M. J. Whitehouse, R. H. Hewins, J-P. Lorand, A. Kennedy, M. Grange, B. Zanda, C. Fieni, D. Deldicque. Record of the ancient martian hydrosphere and atmosphere preserved in zircon from a martian meteorite. Nature Geoscience, 2014; DOI: 10.1038/ngeo2231

Cite This Page: Florida State University. “Red Planet’s climate history uncovered in unique Martian meteorite.” ScienceDaily. ScienceDaily, 27 August 2014.

Source: http://www.sciencedaily.com/releases/2014/08/140827131553.htm

universal-abyss: Wondrous and startling findings suggest that, about 4.5 billion years back, “water was more abundant on Mars” and then “something dramatically changed that.” It has been a desert planet for at least the past 1.7 billion years. The Red Planet never ceases to surprise, and only slowly reveals her mysteries.

So, the question is: what caused such a dramatic change in our beautiful Red neighbor? That answer might prove incredibly important to us on Earth, as it may help us avoid a similar fate on our tiny blue planet that we like to call home.

Quantum physics enables revolutionary imaging method

Date: August 28, 2014
Source: University of Vienna

Summary: Researchers have developed a fundamentally new quantum imaging technique with strikingly counter-intuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

Pic: A new quantum imaging technique generates images with photons that have never touched the object — in this case a sketch of a cat. This alludes to the famous Schrödinger cat paradox, in which a cat inside a closed box is said to be simultaneously dead and alive as long as there is no information outside the box to rule out one option over the other. Similarly, the new imaging technique relies on a lack of information regarding where the photons are created and which path they take. Credit: Patricia Enigl, IQOQI

Researchers from the Institute for Quantum Optics and Quantum Information (IQOQI), the Vienna Center for Quantum Science and Technology (VCQ), and the University of Vienna have developed a fundamentally new quantum imaging technique with strikingly counterintuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

In general, to obtain an image of an object one has to illuminate it with a light beam and use a camera to sense the light that is either scattered or transmitted through that object. The type of light used to shine onto the object depends on the properties that one would like to image. Unfortunately, in many practical situations the ideal type of light for the illumination of the object is one for which cameras do not exist.

The experiment published in Nature this week for the first time breaks this seemingly self-evident limitation. The object (e.g. the contour of a cat) is illuminated with light that remains undetected. Moreover, the light that forms an image of the cat on the camera never interacts with it. In order to realise their experiment, the scientists use so-called “entangled” pairs of photons. These pairs of photons — which are like interlinked twins — are created when a laser interacts with a non-linear crystal. In the experiment, the laser illuminates two separate crystals, creating one pair of twin photons (consisting of one infrared photon and a “sister” red photon) in either crystal. The object is placed in between the two crystals. The arrangement is such that if a photon pair is created in the first crystal, only the infrared photon passes through the imaged object. Its path then goes through the second crystal where it fully combines with any infrared photons that would be created there.

With this crucial step, there is now, in principle, no possibility to find out which crystal actually created the photon pair. Moreover, there is now no information in the infrared photon about the object. However, due to the quantum correlations of the entangled pairs the information about the object is now contained in the red photons — although they never touched the object. Bringing together both paths of the red photons (from the first and the second crystal) creates bright and dark patterns, which form the exact image of the object.

Stunningly, all of the infrared photons (the only light that illuminated the object) are discarded; the picture is obtained by only detecting the red photons that never interacted with the object. The camera used in the experiment is even blind to the infrared photons that have interacted with the object. In fact, very low light infrared cameras are essentially unavailable on the commercial market. The researchers are confident that their new imaging concept is very versatile and could even enable imaging in the important mid-infrared region. It could find applications where low light imaging is crucial, in fields such as biological or medical imaging.

Story Source: The above story is based on materials provided by University of Vienna. Note: Materials may be edited for content and length.

Journal Reference: Gabriela Barreto Lemos, Victoria Borish, Garrett D. Cole, Sven Ramelow, Radek Lapkiewicz, Anton Zeilinger. Quantum imaging with undetected photons. Nature, 2014; 512 (7515): 409 DOI: 10.1038/nature13586

Cite This Page: MLA APA Chicago: University of Vienna. “Quantum physics enables revolutionary imaging method.” ScienceDaily. ScienceDaily, 28 August 2014.

Source: http://www.sciencedaily.com/releases/2014/08/140828110820.htm

universal-abyss: Crikey, so the light never actually touches the object being imaged, yet can create an exact image - this is utterly extraordinary! This takes quantum imaging to a whole new level - and, wow is it cool! This should truly twist and bend your mind at the awesomeness of quantum physics. The potentials it may bring to medical and biological imaging are simply astounding to consider. Just, wow!

from-dust-of-stars
from-dust-of-stars:

Atomically seamless, thinnest-possible semiconductor junctions crafted by scientists

Date: August 26, 2014
Source: University of Washington

Summary: Two single-layer semiconductor materials can be connected in an atomically seamless fashion known as a heterojunction, researchers say. This result could be the basis for next-generation flexible and transparent computing, better light-emitting diodes, or LEDs, and solar technologies.

Pic: As seen under an optical microscope, the heterostructures have a triangular shape. The two different monolayer semiconductors can be recognized through their different colors. Credit: U of Washington

Scientists have developed what they believe is the thinnest-possible semiconductor, a new class of nanoscale materials made in sheets only three atoms thick.

The University of Washington researchers have demonstrated that two of these single-layer semiconductor materials can be connected in an atomically seamless fashion known as a heterojunction. This result could be the basis for next-generation flexible and transparent computing, better light-emitting diodes, or LEDs, and solar technologies.

“Heterojunctions are fundamental elements of electronic and photonic devices,” said senior author Xiaodong Xu, a UW assistant professor of materials science and engineering and of physics. “Our experimental demonstration of such junctions between two-dimensional materials should enable new kinds of transistors, LEDs, nanolasers, and solar cells to be developed for highly integrated electronic and optical circuits within a single atomic plane.”

The research was published online this week in Nature Materials.

The researchers discovered that two flat semiconductor materials can be connected edge-to-edge with crystalline perfection. They worked with two single-layer, or monolayer, materials — molybdenum diselenide and tungsten diselenide — that have very similar structures, which was key to creating the composite two-dimensional semiconductor.

Collaborators from the electron microscopy center at the University of Warwick in England found that all the atoms in both materials formed a single honeycomb lattice structure, without any distortions or discontinuities. This provides the strongest possible link between two single-layer materials, necessary for flexible devices. Within the same family of materials it is feasible that researchers could bond other pairs together in the same way.

The researchers created the junctions in a small furnace at the UW. First, they inserted a powder mixture of the two materials into a chamber heated to 900 degrees Celsius (1,652 F). Hydrogen gas was then passed through the chamber and the evaporated atoms from one of the materials were carried toward a cooler region of the tube and deposited as single-layer crystals in the shape of triangles.

After a while, evaporated atoms from the second material then attached to the edges of the triangle to create a seamless semiconducting heterojunction.

“This is a scalable technique,” said Sanfeng Wu, a UW doctoral student in physics and one of the lead authors. “Because the materials have different properties, they evaporate and separate at different times automatically. The second material forms around the first triangle that just previously formed. That’s why these lattices are so beautifully connected.”

With a larger furnace, it would be possible to mass-produce sheets of these semiconductor heterostructures, the researchers said. On a small scale, it takes about five minutes to grow the crystals, with up to two hours of heating and cooling time.

“We are very excited about the new science and engineering opportunities provided by these novel structures,” said senior author David Cobden, a UW professor of physics. “In the future, combinations of two-dimensional materials may be integrated together in this way to form all kinds of interesting electronic structures such as in-plane quantum wells and quantum wires, superlattices, fully functioning transistors, and even complete electronic circuits.”

The researchers have already demonstrated that the junction interacts with light much more strongly than the rest of the monolayer, which is encouraging for optoelectric and photonic applications like solar cells.

Story Source: The above story is based on materials provided by University of Washington. The original article was written by Michelle Ma. Note: Materials may be edited for content and length.

Journal Reference: Chunming Huang, Sanfeng Wu, Ana M. Sanchez, Jonathan J. P. Peters, Richard Beanland, Jason S. Ross, Pasqual Rivera, Wang Yao, David H. Cobden, Xiaodong Xu. Lateral heterojunctions within monolayer MoSe2–WSe2 semiconductors. Nature Materials, 2014; DOI: 10.1038/nmat4064

Cite This Page: University of Washington. “Atomically seamless, thinnest-possible semiconductor junctions crafted by scientists.” ScienceDaily. ScienceDaily, 26 August 2014. <www.sciencedaily.com/releases/2014/08/140826205338.htm>.

Source: http://www.sciencedaily.com/releases/2014/08/140826205338.htm

from-dust-of-stars: Damn, this is so utterly exciting for the technological potentials, which might be that next-generation step for computing and other technologies. Cool.

What lit up the universe?

Date: August 27, 2014
Source: University College London

Summary: New research shows we will soon uncover the origin of the ultraviolet light that bathes the cosmos, helping scientists understand how galaxies were built. The study by cosmologists shows how forthcoming astronomical surveys will reveal what lit up the cosmos.

Pic: A computer model shows one scenario for how light is spread through the early universe on vast scales (more than 50 million light years across). Astronomers will soon know whether or not these kinds of computer models give an accurate portrayal of light in the real cosmos. Credit: Andrew Pontzen/Fabio Governato

New research from UCL shows we will soon uncover the origin of the ultraviolet light that bathes the cosmos, helping scientists understand how galaxies were built.

The study published today in The Astrophysical Journal Letters by UCL cosmologists Dr Andrew Pontzen and Dr Hiranya Peiris (both UCL Physics & Astronomy), together with collaborators at Princeton and Barcelona Universities, shows how forthcoming astronomical surveys will reveal what lit up the cosmos.

"Which produces more light? A country’s biggest cities or its many tiny towns?" asked Dr Pontzen, lead author of the study. "Cities are brighter, but towns are far more numerous. Understanding the balance would tell you something about the organisation of the country. We’re posing a similar question about the universe: does ultraviolet light come from numerous but faint galaxies, or from a smaller number of quasars?"

Quasars are the brightest objects in the Universe; their intense light is generated by gas as it falls towards a black hole. Galaxies can contain millions or billions of stars, but are still dim by comparison. Understanding whether the numerous small galaxies outshine the rare, bright quasars will provide insight into the way the universe built up today’s populations of stars and planets. It will also help scientists properly calibrate their measurements of dark energy, the agent thought to be accelerating the universe’s expansion and determining its far future.

The new method proposed by the team builds on a technique already used by astronomers in which quasars act as beacons to understand space. The intense light from quasars makes them easy to spot even at extreme distances, up to 95% of the way across the observable universe. The team think that studying how this light interacts with hydrogen gas on its journey to Earth will reveal the main sources of illumination in the universe, even if those sources are not themselves quasars.

Two types of hydrogen gas are found in the universe — a plain, neutral form and a second charged form which results from bombardment by UV light. These two forms can be distinguished by studying a particular wavelength of light called ‘Lyman-alpha’ which is only absorbed by the neutral type of hydrogen. Scientists can see where in the universe this ‘Lyman-alpha’ light has been absorbed to map the neutral hydrogen.
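To make the mapping step concrete: neutral hydrogen at redshift z absorbs Lyman-alpha light at a rest wavelength of about 1216 angstroms, and cosmic expansion stretches that absorption so it appears in the quasar's observed spectrum at 1216 x (1 + z) angstroms. A minimal sketch with illustrative redshifts (the redshift values are examples, not the study's):

# Where along a quasar sightline does Lyman-alpha absorption land in the
# observed spectrum? Rest wavelength is ~1215.67 angstroms; gas at redshift z
# imprints a dip at lambda_rest * (1 + z).
LYA_REST_ANGSTROM = 1215.67

def observed_wavelength(z_absorber):
    # Observed wavelength of Lyman-alpha absorption from hydrogen at redshift z.
    return LYA_REST_ANGSTROM * (1.0 + z_absorber)

for z in (2.0, 2.5, 3.0):   # illustrative absorber redshifts along the sightline
    print(f"hydrogen at z = {z}: absorption dip near {observed_wavelength(z):.0f} angstroms")
# Reading off which wavelengths are absorbed therefore maps where the
# neutral hydrogen sits along the line of sight.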

Since the quasars being studied are billions of light years away, they act as a time capsule: looking at the light shows us what the universe looked like in the distant past. The resulting map will reveal where neutral hydrogen was located billions of years ago as the universe was vigorously building its galaxies.

An even distribution of neutral hydrogen gas would suggest numerous galaxies as the source of most light, whereas a much less uniform pattern, showing a patchwork of charged and neutral hydrogen gas, would indicate that rare quasars were the primary origin of light.
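The "even versus patchy" contrast can be illustrated with a toy calculation: give the same total ionizing output either to many faint sources or to a few bright ones, and the few-bright-sources case leaves a much lumpier radiation field. The sketch below is purely illustrative (random positions, arbitrary units, a crude one-dimensional volume); it is not the paper's analysis:

# Toy comparison: same total ionizing output from many faint sources versus
# a few bright ones. The scatter of the radiation field across the volume
# (the "patchiness") is much larger in the few-bright-sources case.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 10_000          # one-dimensional stand-in for a survey volume
total_output = 1.0e6      # same total light in both scenarios (arbitrary units)

def radiation_field(n_sources):
    # Deposit equal-luminosity sources at random cells, then smooth locally
    # so each source illuminates its neighbourhood.
    field = np.zeros(n_cells)
    cells = rng.integers(0, n_cells, size=n_sources)
    np.add.at(field, cells, total_output / n_sources)
    kernel = np.ones(51) / 51.0
    return np.convolve(field, kernel, mode="same")

galaxies = radiation_field(5_000)   # many faint "towns"
quasars  = radiation_field(20)      # few bright "cities"

for name, field in (("many faint galaxies", galaxies), ("few bright quasars", quasars)):
    print(name, "relative scatter:", round(float(field.std() / field.mean()), 2))
# The quasar-dominated field shows a far larger relative scatter, i.e. a
# patchier pattern of ionized versus neutral hydrogen.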

Current samples of quasars aren’t quite big enough for a robust analysis of the differences between the two scenarios; however, a number of surveys currently being planned should help scientists find the answer.

Chief among these is the DESI (Dark Energy Spectroscopic Instrument) survey which will include detailed measurements of about a million distant quasars. Although these measurements are designed to reveal how the expansion of the universe is accelerating due to dark energy, the new research shows that results from DESI will also determine whether the intervening gas is uniformly illuminated. In turn, the measurement of patchiness will reveal whether light in our universe is generated by ‘a few cities’ (quasars) or by ‘many small towns’ (galaxies).

Co-author Dr Hiranya Peiris, said: “It’s amazing how little is known about the objects that bathed the universe in ultraviolet radiation while galaxies assembled into their present form. This technique gives us a novel handle on the intergalactic environment during this critical time in the universe’s history.”

Dr Pontzen, said: “It’s good news all round. DESI is going to give us invaluable information about what was going on in early galaxies, objects that are so faint and distant we would never see them individually. And once that’s understood in the data, the team can take account of it and still get accurate measurements of how the universe is expanding, telling us about dark energy. It illustrates how these big, ambitious projects are going to deliver astonishingly rich maps to explore. We’re now working to understand what other unexpected bonuses might be pulled out from the data.”

Story Source: The above story is based on materials provided by University College London. Note: Materials may be edited for content and length.

Journal Reference: Andrew Pontzen, Simeon Bird, Hiranya Peiris, Licia Verde. CONSTRAINTS ON IONIZING PHOTON PRODUCTION FROM THE LARGE-SCALE Lyα FOREST. The Astrophysical Journal, 2014; 792 (2): L34 DOI: 10.1088/2041-8205/792/2/L34

Cite This Page: University College London. “What lit up the universe?” ScienceDaily. ScienceDaily, 27 August 2014.

Source: http://www.sciencedaily.com/releases/2014/08/140827092122.htm

universal-abyss: Exciting that DESI might help us better understand and get a more accurate measurement of the expansion of the universe, and learn more about dark energy. Very cool.

Early growth of giant galaxy, just 3 billion years after the Big Bang, revealed

Date: August 27, 2014
Source: Space Telescope Science Institute (STScI)

Summary: The birth of massive galaxies, according to galaxy formation theories, begins with the buildup of a dense, compact core that is ablaze with the glow of millions of newly formed stars. Evidence of this early construction phase, however, has eluded astronomers — until now. Astronomers identified a dense galactic core, dubbed “Sparky,” using a combination of data from several space telescopes. Hubble photographed the emerging galaxy as it looked 11 billion years ago, just 3 billion years after the birth of our universe in the big bang.

Pic: This illustration reveals the celestial fireworks deep inside the crowded core of a developing galaxy, as seen from a hypothetical planetary system. The sky is ablaze with the glow from nebulae, fledgling star clusters, and stars exploding as supernovae. The rapidly forming core may eventually become the heart of a mammoth galaxy similar to one of the giant elliptical galaxies seen today. Credit: NASA, ESA, and Z. Levay and G. Bacon (STScI)

Astronomers have for the first time gotten a glimpse of the earliest stages of massive galaxy construction. The building site, dubbed “Sparky,” is a developing galaxy containing a dense core that is blazing with the light of millions of newborn stars which are forming at a ferocious rate. The discovery was made possible through combining observations from NASA’s Hubble and Spitzer space telescopes, the European Space Agency’s Herschel Space Observatory, and the W.M. Keck Observatory in Hawaii.

Because the infant galaxy is so far away, it is seen as it appeared 11 billion years ago, just 3 billion years after the birth of the universe in the big bang. Astronomers think the compact galaxy will continue to grow, possibly becoming a giant elliptical galaxy, a gas-deficient assemblage of ancient stars theorized to develop from the inside out, with a compact core marking its beginnings.

“We really hadn’t seen a formation process that could create things that are this dense,” explained Erica Nelson of Yale University in New Haven, Connecticut, lead author of the science paper announcing the results. “We suspect that this core-formation process is a phenomenon unique to the early universe because the early universe, as a whole, was more compact. Today, the universe is so diffuse that it cannot create such objects anymore.”

The research team’s paper appears in the August 27 issue of the journal Nature.

Although only a fraction of the size of the Milky Way, the tiny powerhouse galaxy already contains about twice as many stars as our galaxy, all crammed into a region only 6,000 light-years across. The Milky Way is about 100,000 light-years across. The barely visible galaxy may be representative of a much larger population of similar objects that are obscured by dust.

“They’re very extreme environments,” Nelson said. “It’s like a medieval cauldron forging stars. There’s a lot of turbulence, and it’s bubbling. If you were in there, the night sky would be bright with young stars, and there would be a lot of dust, gas, and remnants of exploding stars. To actually see this happening is fascinating.”

Alongside determining the galaxy’s size from the Hubble images, the team dug into archival far-infrared images from the Spitzer and Herschel telescopes. The analysis allowed them to see how fast the young galaxy is churning out stars. Sparky is producing roughly 300 stars per year. By comparison, the Milky Way produces roughly 10 stars per year.
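A rough back-of-the-envelope check shows why that rate matters. Treating the quoted rates loosely as solar masses of stars formed per year, and taking a round 10^11 solar masses for "about twice as many stars as our galaxy" (both simplifications are assumptions for this sketch, not figures from the article):

# Back-of-the-envelope: how long to assemble a massive core at ~300 stars
# per year versus the Milky Way's ~10 per year?
# Assumptions (not from the article): treat "stars per year" as roughly
# solar masses per year, and use ~1e11 solar masses as a round figure for
# "about twice as many stars as our galaxy".
CORE_STELLAR_MASS = 1.0e11   # solar masses, assumed round number
SPARKY_RATE = 300.0          # solar masses per year (quoted rate)
MILKY_WAY_RATE = 10.0        # solar masses per year (quoted rate)

print(f"Sparky:    ~{CORE_STELLAR_MASS / SPARKY_RATE:.1e} years to assemble")     # ~3e8 yr
print(f"Milky Way: ~{CORE_STELLAR_MASS / MILKY_WAY_RATE:.1e} years at its rate")  # ~1e10 yr
# A few hundred million years at Sparky's pace, comfortably inside the
# "more than a billion years" of furious star formation the observations
# indicate, versus roughly the age of the universe at the Milky Way's pace.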

Astronomers believe that this frenzied star formation occurred because the galactic center is forming deep inside a gravitational well of dark matter, an invisible form of matter that makes up the scaffolding upon which galaxies formed in the early universe. A torrent of gas is flowing into this well at the galaxy’s core, sparking waves of star birth.

The sheer amount of gas and dust within an extreme star-forming region like this may explain why these compact galaxies have eluded astronomers until now. Bursts of star formation create dust, which builds up within the forming galaxy and can block some starlight. Sparky was only barely visible, and it required the infrared capabilities of Hubble’s Wide Field Camera 3, Spitzer, and Herschel to reveal the developing galaxy.

The observations indicate that the galaxy had been furiously making stars for more than a billion years (at the time the light we now observe began its long journey). But the galaxy didn’t keep up this frenetic pace for very long, the researchers suggested. Eventually, the galaxy probably stopped forming stars in the packed core. Smaller galaxies then might have merged with the growing galaxy, making it expand outward in size over the next 10 billion years, possibly becoming similar to one of the mammoth, sedate elliptical galaxies seen today.

“I think our discovery settles the question of whether this mode of building galaxies actually happened or not,” said team member Pieter van Dokkum of Yale University. “The question now is, how often did this occur? We suspect there are other galaxies like this that are even fainter in near-infrared wavelengths. We think they’ll be brighter at longer wavelengths, and so it will really be up to future infrared telescopes such as NASA’s James Webb Space Telescope to find more of these objects.”

Story Source: The above story is based on materials provided by Space Telescope Science Institute (STScI). Note: Materials may be edited for content and length.

Journal Reference: Erica Nelson, Pieter van Dokkum, Marijn Franx, Gabriel Brammer, Ivelina Momcheva, Natascha Förster Schreiber, Elisabete da Cunha, Linda Tacconi, Rachel Bezanson, Allison Kirkpatrick, Joel Leja, Hans-Walter Rix, Rosalind Skelton, Arjen van der Wel, Katherine Whitaker, Stijn Wuyts. A massive galaxy in its core formation phase three billion years after the Big Bang. Nature, 2014; DOI: 10.1038/nature13616

Cite This Page: Space Telescope Science Institute (STScI). “Early growth of giant galaxy, just 3 billion years after the Big Bang, revealed.” ScienceDaily. ScienceDaily, 27 August 2014.

Source: http://www.sciencedaily.com/releases/2014/08/140827131551.htm

universal-abyss: Wow, just 3 billion years after the Big Bang, a giant galaxy was producing an incredible number of stars, around 300 per year. This is so exciting for our understanding of galaxies, galaxy formation theories, and our infant universe. Just wow!

from-dust-of-stars
from-dust-of-stars:

Symphony of nanoplasmonic and optical resonators produces laser-like light emission

Date: August 26, 2014
Source: University of Illinois College of Engineering

Summary: By combining plasmonics and optical microresonators, researchers have created a new optical amplifier (or laser) design, paving the way for power-on-a-chip applications.

Pic: Hybrid optoplasmonic system showing the operation of amplification. Credit: Nathan Bajandas, Beckman ITG

By combining plasmonics and optical microresonators, researchers at the University of Illinois at Urbana-Champaign have created a new optical amplifier (or laser) design, paving the way for power-on-a-chip applications.

"We have made optical systems at the microscopic scale that amplify light and produce ultra-narrowband spectral output," explained J. Gary Eden, a professor of electrical and computer engineering (ECE) at Illinois. "These new optical amplifiers are well-suited for routing optical power on a chip containing both electronic and optical components.

"Their potential applications in medicine are exciting because the amplifiers are actuated (‘pumped’) by light that is able to pass through human skin. For this reason, these microsphere-based amplifiers are able to transmit signals from cells and buried biomedical sensors to electrical and optical networks outside the body."

The speed of currently available semiconductor electronics is limited to about 10 GHz due to heat generation and interconnect delay issues. Though not limited in speed, dielectric-based photonics are limited in size by the fundamental laws of diffraction. The researchers, led by Eden and ECE associate professor Logan Liu, found that plasmonics — metal nanostructures — can serve as a bridge between photonics and nanoelectronics, combining the size of nanoelectronics with the speed of dielectric photonics.
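To put that size limit in numbers: a conventional dielectric waveguide cannot confine light much below roughly half a wavelength in the material, about lambda/(2n). A quick illustrative calculation with typical values (the values are assumptions, not from the article):

# Diffraction-limited confinement scale for dielectric photonics, ~lambda/(2n).
# Illustrative values only.
wavelength_nm = 600.0   # red light
n_refractive = 1.5      # typical glass or polymer

limit_nm = wavelength_nm / (2.0 * n_refractive)
print(f"diffraction-limited feature scale: ~{limit_nm:.0f} nm")   # ~200 nm
# Modern transistors are tens of nanometres or smaller, so purely dielectric
# photonics cannot shrink to match nanoelectronics; plasmonic metal
# nanostructures confine light below this limit and bridge the two.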

"We have demonstrated a novel optoplasmonic system comprising plasmonic nanoantennas and optical microcavities capable of active nanoscale field modulation, frequency switching, and amplification of signals," states Manas Ranjan Gartia, lead author of the article, "Injection- Seeded Optoplasmonic Amplifier in the Visible," published in the journal Scientific Reports. "This is an important step forward for monolithically building on-chip light sources inside future chips that can use much less energy while providing superior speed performance of the chips."

At the heart of the amplifier is a microsphere (made of polystyrene or glass) approximately 10 microns in diameter. When activated by an intense beam of light, the sphere internally generates a narrowband optical signal through a process known as Raman scattering. Molecules tethered to the surface of the sphere by a protein amplify the Raman signal, and in concert with a nano-structured surface in contact with the sphere, the amplifier produces visible (red or green) light with a bandwidth that matches the internally generated signal.

The proposed design is well-suited for routing narrowband optical power on a chip. Over the past five decades, optical oscillators and amplifiers have typically been based on the buildup of the field from the spontaneous emission background. Doing so limits the temporal coherence of the output, lengthens the time required for the optical field to grow from the noise, and often is responsible for complex, multiline spectra.

"In our design, we use Raman assisted injection-seeded locking to overcome the above problems. In addition to the spectral control afforded by injection-locking, the effective Q of the amplifier can be specified by the bandwidth of the injected Raman signal," Gartia said. This characteristic contrasts with previous WGM-based lasers and amplifiers for which the Q is determined solely by the WGM resonator.

In addition to Eden, Liu, and Gartia, co-authors of the paper include Sujin Seo, Junhwan Kim, Te-Wei Chang, Assistant Professor Gaurav Bahl from the Department of Mechanical Science and Engineering, and Prof. Meng Lu, an ECE alumnus and currently an assistant professor at Iowa State University. The research was done at the Micro and Nanotechnology Laboratory at Illinois.

Story Source: The above story is based on materials provided by University of Illinois College of Engineering. The original article was written by Rick Kubetz. Note: Materials may be edited for content and length.

Cite This Page: University of Illinois College of Engineering. “Symphony of nanoplasmonic and optical resonators produces laser-like light emission.” ScienceDaily. ScienceDaily, 26 August 2014.

Source: http://www.sciencedaily.com/releases/2014/08/140826121139.htm

from-dust-of-stars: Damn, this is some seriously awesome research that may help yield faster speeds and lower energy for computing. This is a fascinating read!

from-dust-of-stars
from-dust-of-stars:

Earth can sustain more terrestrial plant growth than previously thought, analysis shows

Date: August 26, 2014
Source: University of Illinois at Urbana-Champaign

Summary: A new analysis suggests the planet can produce much more land-plant biomass — the total material in leaves, stems, roots, fruits, grains and other terrestrial plant parts — than previously thought. The study recalculates the theoretical limit of terrestrial plant productivity, and finds that it is much higher than many current estimates allow.

Pic: Scientists have historically underestimated the potential productivity of the earth’s land plants, researchers report in a new study. Credit: NASA Earth Observatory image by Jesse Allen

A new analysis suggests the planet can produce much more land-plant biomass — the total material in leaves, stems, roots, fruits, grains and other terrestrial plant parts — than previously thought.

The study, reported in Environmental Science and Technology, recalculates the theoretical limit of terrestrial plant productivity, and finds that it is much higher than many current estimates allow.

“When you try to estimate something over the whole planet, you have to make some simplifying assumptions,” said University of Illinois plant biology professor Evan DeLucia, who led the new analysis. “And most previous research assumes that the maximum productivity you could get out of a landscape is what the natural ecosystem would have produced. But it turns out that in nature very few plants have evolved to maximize their growth rates.”

DeLucia directs the Institute for Sustainability, Energy, and Environment at the U. of I. He also is an affiliate of the Energy Biosciences Institute, which funded the research through the Institute for Genomic Biology at Illinois.

Estimates derived from satellite images of vegetation and modeling suggest that about 54 gigatons of carbon is converted into terrestrial plant biomass each year, the researchers report.

“This value has remained stable for the past several decades, leading to the conclusion that it represents a planetary boundary — an upper limit on global biomass production,” the researchers wrote.

But these assumptions don’t take into consideration human efforts to boost plant productivity through genetic manipulation, plant breeding and land management, DeLucia said. Such efforts have already yielded some extremely productive plants.

For example, in Illinois a hybrid grass, Miscanthus x giganteus, without fertilizer or irrigation produced 10 to 16 tons of above-ground biomass per acre, more than double the productivity of native prairie vegetation or corn. And genetically modified no-till corn is more than five times as productive — in terms of total biomass generated per acre — as restored prairie in Wisconsin.

Some non-native species also outcompete native species; this is what makes many of them invasive, DeLucia said. In Iceland, for example, an introduced species, the nootka lupine, produces four times as much biomass as the native boreal dwarf birch species it displaces. And in India bamboo plantations produce about 40 percent more biomass than dry, deciduous tropical forests.

Some of these plants would not be desirable additions to native or managed ecosystems, DeLucia said, but they represent the untapped potential productivity of plants in general.

“We’re saying this is what’s possible,” he said.

The team used a model of light-use efficiency and the theoretical maximum efficiency with which plant canopies convert solar radiation to biomass to estimate the theoretical limit of net primary production (NPP) on a global scale. This newly calculated limit was “roughly two orders of magnitude higher than the productivity of most current managed or natural ecosystems,” the authors wrote.
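
To make the shape of that calculation concrete, here is a minimal back-of-the-envelope sketch of a light-use-efficiency estimate. Every number below is an assumed, rounded placeholder for illustration, not a value taken from the study; the point is only that the global figure scales linearly with the efficiency term, which is what the recalculated theoretical ceiling pushes upward.

LAND_AREA_M2 = 1.5e14      # ice-free land surface, m^2 (rounded)
INSOLATION_W_M2 = 180      # assumed mean surface solar flux, W/m^2
SECONDS_PER_YEAR = 3.15e7
PAR_FRACTION = 0.48        # fraction of sunlight usable for photosynthesis
F_ABSORBED = 0.3           # assumed fraction of PAR absorbed by canopies
EPSILON_G_PER_MJ = 0.5     # assumed g of carbon fixed per MJ of absorbed PAR

absorbed_par_mj = (LAND_AREA_M2 * INSOLATION_W_M2 * SECONDS_PER_YEAR
                   * PAR_FRACTION * F_ABSORBED) / 1e6
npp_gt_c = absorbed_par_mj * EPSILON_G_PER_MJ / 1e15   # grams -> gigatonnes

print(round(npp_gt_c))   # ~61 Gt C per year with these placeholder numbers
# Raising the efficiency term toward its theoretical canopy maximum scales
# this figure proportionally, which is why the recalculated ceiling sits far
# above the ~54 Gt C per year satellite-era estimate quoted earlier.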

“We’re not saying that this is even approachable, but the theory tells us that what is possible on the planet is much, much higher than what current estimates are,” DeLucia said.

Taking into account global water limitations reduced this theoretical limit by more than 20 percent in all parts of the terrestrial landscape except the tropics, DeLucia said. “But even that water-limited NPP is many times higher than we see in our current agricultural systems.”

DeLucia cautions that scientists and agronomists have a long way to go to boost plant productivity beyond current limits, and the new analysis does not suggest that shortages of food or other plant-based resources will cease to be a problem.
“I don’t want to be the guy that says science is going to save the planet and we shouldn’t worry about the environmental consequences of agriculture, we shouldn’t worry about runaway population growth,” he said. “All I’m saying is that we’re underestimating the productive capacity of plants in managed ecosystems.”

Story Source: The above story is based on materials provided by University of Illinois at Urbana-Champaign. Note: Materials may be edited for content and length.

Journal Reference: Evan H. DeLucia, Nuria Gomez-Casanovas, Jonathan A. Greenberg, Tara W. Hudiburg, Ilsa B. Kantola, Stephen P. Long, Adam D. Miller, Donald R. Ort, William J. Parton. The Theoretical Limit to Plant Productivity. Environmental Science & Technology, 2014; 48 (16): 9471 DOI: 10.1021/es502348e

Cite This Page: University of Illinois at Urbana-Champaign. “Earth can sustain more terrestrial plant growth than previously thought, analysis shows.” ScienceDaily. ScienceDaily, 26 August 2014.

Source: http://www.sciencedaily.com/releases/2014/08/140826100855.htm

from-dust-of-stars: Finally, some encouraging news on the sustainability front - Damn, it is so nice to hear some good news! Obviously, this does not solve or eliminate the other sustainability and related concerns.

Best view yet of merging galaxies in distant universe

Date: August 26, 2014
Source: National Radio Astronomy Observatory

Summary: Astronomers have obtained the best view yet of a collision between two galaxies when the Universe was only half its current age. To make this observation, the team also enlisted the help of a gravitational lens, a galaxy-size magnifying glass, to reveal otherwise invisible detail.

Pic: The Atacama Large Millimeter/submillimeter Array (ALMA) and many other telescopes on the ground and in space have been used to obtain the best … view yet of a collision that took place between two galaxies when the Universe was only half its current age. The astronomers enlisted the help of a galaxy-sized magnifying glass to reveal otherwise invisible detail. These new studies of the galaxy H-ATLAS J142935.3-002836 have shown that this complex and distant object looks surprisingly like the well-known local galaxy collision, the Antennae Galaxies. In this picture you can see the foreground galaxy that is doing the lensing, which resembles how our home galaxy, the Milky Way, would appear if seen edge-on. But around this galaxy there is an almost complete ring — the smeared out image of a star-forming galaxy merger far beyond. This picture combines the views from the NASA/ESA Hubble Space Telescope and the Keck-II telescope on Hawaii. Credit: ESO/NASA/ESA/W. M. Keck Observatory

An international team of astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) and the Karl G. Jansky Very Large Array (VLA) — among other telescopes — has obtained the best view yet of a collision between two galaxies when the Universe was only half its current age.

To make this observation, the team also enlisted the help of a gravitational lens, a galaxy-size magnifying glass, to reveal otherwise invisible detail. These new studies of galaxy HATLAS J142935.3-002836 have shown that this complex and distant object looks surprisingly like the comparatively nearby pair of colliding galaxies collectively known as the Antennae.

"While astronomers are often limited by the power of their telescopes, in some cases our ability to see detail is hugely boosted by natural lenses created by the Universe," explains lead author Hugo Messias of the Universidad de Concepción in Chile and the Centro de Astronomia e Astrofísica da Universidade de Lisboa in Portugal. "Einstein predicted in his theory of General Relativity that, given enough mass, light does not travel in a straight line but will be bent in a similar way to a normal lens."

Cosmic lenses are created by massive structures like galaxies and galaxy clusters, which bend light from objects behind them due to their strong gravity — an effect called gravitational lensing. The magnifying properties of this effect allow astronomers to study objects that would otherwise be invisible and to directly compare local galaxies with much more remote ones, when the Universe was significantly younger.
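
For a sense of the geometry, the characteristic angular scale of such a lens (the Einstein radius) can be estimated from the lens mass and the distances involved. The sketch below uses generic placeholder values, not measured properties of this system:

import math

# Point-mass Einstein radius: theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)),
# where D_l, D_s, D_ls are angular-diameter distances to the lens, to the
# source, and between them. All values below are illustrative placeholders.
G, C, M_SUN, GPC = 6.674e-11, 2.998e8, 1.989e30, 3.086e25   # SI units

m_lens = 1e11 * M_SUN                              # assumed lens mass (massive galaxy)
d_l, d_s, d_ls = 1.0 * GPC, 1.7 * GPC, 0.9 * GPC   # assumed distances

theta_e = math.sqrt(4 * G * m_lens / C**2 * d_ls / (d_l * d_s))   # radians
print(round(math.degrees(theta_e) * 3600, 2), "arcseconds")       # roughly 0.7

Rings and arcs of roughly arcsecond size are exactly what sharp optical imaging can resolve, which is why the lensed ring shows up so clearly in the Hubble and Keck data described below.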

For these gravitational lenses to work, however, the foreground lensing galaxy and the one beyond need to be precisely aligned.

"These chance alignments are quite rare and tend to be hard to identify," adds Messias, "but, recent studies have shown that by observing at far-infrared and millimeter wavelengths we can find these cases much more efficiently."

HATLAS J142935.3-002836 (or H1429-0028 for short) is one of these sources and was found in the Herschel Astrophysical Terahertz Large Area Survey (HATLAS). It is among the brightest gravitationally lensed objects in the far-infrared regime found so far, even though we are seeing it at a time when the Universe was just half its current age.

To study this object in further detail, the astronomers started an extensive follow-up campaign using an impressive collection of incredibly powerful telescopes, including the Hubble Space Telescope, ALMA, the Keck Observatory, and the VLA, among others.

The Hubble and Keck images revealed a detailed gravitationally induced ring of light around the foreground galaxy. These high resolution images also showed that the lensing galaxy is an edge-on disc galaxy — similar to our Milky Way — which obscures parts of the background light due to the large dust clouds it contains.

But this obscuration is not a problem for ALMA and the VLA, since these two facilities observe the sky at longer wavelengths, which are unaffected by dust. Using the combined data, the team discovered that the background system was actually an ongoing collision between two galaxies. From this point on, ALMA and the VLA played a key role in further characterizing this object.

In particular, ALMA traced carbon monoxide, which allows detailed studies of star-formation mechanisms in galaxies. The ALMA observations also allowed the motion of the material in the galaxy to be measured. This was essential to show that the lensed object is indeed an ongoing galactic collision forming hundreds of new stars each year, and that one of the colliding galaxies still shows signs of rotation; an indication that it was a disc galaxy just before this encounter.

The system of these two colliding galaxies resembles a spectacular object that is much closer to us: the Antennae, the closest ongoing merger of two spiral galaxies. While the Antennae system forms stars at a total rate of only a few tens of solar masses each year, H1429-0028 turns more than 400 solar masses of gas into new stars each year.

Shane Bussmann, co-author from Cornell University in Ithaca, N.Y., concludes: “The powerful synergy between the Herschel Space Observatory and our coordinated follow-up effort from ALMA, VLA, and other observatories around the world has dramatically improved our understanding of galaxy mergers when the Universe was half its present age.”

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

Story Source: The above story is based on materials provided by National Radio Astronomy Observatory. Note: Materials may be edited for content and length.

Journal Reference: Hugo Messias, Simon Dye, Neil Nagar, Gustavo Orellana, R. Shane Bussmann, Jae Calanog, Helmut Dannerbauer, Hai Fu, Edo Ibar, Andrew Inohara, R. J. Ivison, Mattia Negrello, Dominik A. Riechers, Yun-Kyeong Sheen, Simon Amber, Mark Birkinshaw, Nathan Bourne, Dave L. Clements, Asantha Cooray, Gianfranco De Zotti, Ricardo Demarco, Loretta Dunne, Stephen Eales, Simone Fleuren, Roxana E. Lupu, Steve J. Maddox, Michal J. Michalowski, Alain Omont, Kate Rowlands, Dan Smith, Matt Smith, Elisabetta Valiante. Herschel-ATLAS and ALMA: HATLAS J142935.3-002836, a lensed major merger at redshift 1.027. Astronomy & Astrophysics, 2014

Cite This Page: National Radio Astronomy Observatory. “Best view yet of merging galaxies in distant universe.” ScienceDaily. ScienceDaily, 26 August 2014.

Source: http://www.sciencedaily.com/releases/2014/08/140826141130.htm

universal-abyss: The majesty and magnitude of merging galaxies - another merger caught by gravitational lensing. Wow.

Do we live in a 2-D hologram? Experiment will test the nature of the universe

Date: August 26, 2014
Source: DOE/Fermi National Accelerator Laboratory

Summary: A unique experiment called the Holometer has started collecting data that will answer some mind-bending questions about our universe — including whether we live in a hologram.

Pic: A Fermilab scientist works on the laser beams at the heart of the Holometer experiment. The Holometer will use twin laser interferometers to test whether the universe is a 2-D hologram. Credit: Fermilab

A unique experiment at the U.S. Department of Energy’s Fermi National Accelerator Laboratory called the Holometer has started collecting data that will answer some mind-bending questions about our universe — including whether we live in a hologram.

Much like characters on a television show would not know that their seemingly 3-D world exists only on a 2-D screen, we could be clueless that our 3-D space is just an illusion. The information about everything in our universe could actually be encoded in tiny packets in two dimensions.

Get close enough to your TV screen and you’ll see pixels, small points of data that make a seamless image if you stand back. Scientists think that the universe’s information may be contained in the same way and that the natural “pixel size” of space is roughly 10 trillion trillion times smaller than an atom, a distance that physicists refer to as the Planck scale.
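
That “pixel size” claim is easy to check: the Planck length follows from three fundamental constants, and dividing a typical atomic diameter (an assumed round value of 10^-10 m) by it gives a ratio of order 10^25, i.e. ten trillion trillion:

import math

HBAR = 1.0546e-34   # reduced Planck constant, J*s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)   # ~1.6e-35 m
atom_diameter = 1e-10                        # typical atom, assumed round value

print(f"{planck_length:.2e} m")                 # ~1.62e-35 m
print(f"{atom_diameter / planck_length:.1e}")   # ~6.2e+24, of order 10^25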

“We want to find out whether space-time is a quantum system just like matter is,” said Craig Hogan, director of Fermilab’s Center for Particle Astrophysics and the developer of the holographic noise theory. “If we see something, it will completely change ideas about space we’ve used for thousands of years.”

Quantum theory suggests that it is impossible to know both the exact location and the exact speed of subatomic particles. If space comes in 2-D bits with limited information about the precise location of objects, then space itself would fall under the same theory of uncertainty. The same way that matter continues to jiggle (as quantum waves) even when cooled to absolute zero, this digitized space should have built-in vibrations even in its lowest energy state.

Essentially, the experiment probes the limits of the universe’s ability to store information. If there is a set number of bits that tell you where something is, it eventually becomes impossible to find more specific information about the location — even in principle. The instrument testing these limits is Fermilab’s Holometer, or holographic interferometer, the most sensitive device ever created to measure the quantum jitter of space itself.

Now operating at full power, the Holometer uses a pair of interferometers placed close to one another. Each one sends a one-kilowatt laser beam (the equivalent of 200,000 laser pointers) at a beam splitter and down two perpendicular 40-meter arms. The light is then reflected back to the beam splitter where the two beams recombine, creating fluctuations in brightness if there is motion. Researchers analyze these fluctuations in the returning light to see if the beam splitter is moving in a certain way — being carried along on a jitter of space itself.
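
Two quick numbers behind that description, as a sketch only (the pointer power and laser wavelength are assumed typical values, not Holometer specifications):

import math

# 1) "200,000 laser pointers": a 1 kW beam versus an assumed ~5 mW pointer.
print(1000 / 0.005)   # -> 200000.0

# 2) Generic Michelson-type response: relative output brightness versus the
#    arm-length difference dL, I/I0 = cos^2(2*pi*dL / wavelength). The 1064 nm
#    wavelength is an assumed typical value, not a Holometer specification.
wavelength = 1064e-9
for dl_nm in (0, 50, 100, 200):
    dl = dl_nm * 1e-9
    print(dl_nm, "nm ->", round(math.cos(2 * math.pi * dl / wavelength) ** 2, 3))
# A length change of a fraction of a wavelength already shifts the recombined
# brightness noticeably, which is what makes the returning light such a
# sensitive probe of any jitter in the beam splitter's position.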

“Holographic noise” is expected to be present at all frequencies, but the scientists’ challenge is not to be fooled by other sources of vibrations. The Holometer is testing a frequency so high — millions of cycles per second — that motions of normal matter are not likely to cause problems. Rather, the dominant background noise is more often due to radio waves emitted by nearby electronics. The Holometer experiment is designed to identify and eliminate noise from such conventional sources.

"If we find a noise we can’t get rid of, we might be detecting something fundamental about nature — a noise that is intrinsic to space-time," said Fermilab physicist Aaron Chou, lead scientist and project manager for the Holometer. "It’s an exciting moment for physics. A positive result will open a whole new avenue of questioning about how space works."

The Holometer experiment, funded by the U.S. Department of Energy Office of Science and other sources, is expected to gather data over the coming year.

Story Source: The above story is based on materials provided by DOE/Fermi National Accelerator Laboratory. Note: Materials may be edited for content and length.

Cite This Page: DOE/Fermi National Accelerator Laboratory. “Do we live in a 2-D hologram? Experiment will test the nature of the universe.” ScienceDaily. ScienceDaily, 26 August 2014.

Source: http://www.sciencedaily.com/releases/2014/08/140826121052.htm

universal-abyss: Wow, this will be one truly mind-bending experiment to determine if we live inside a 2-D hologram. This is such an incredibly fascinating thing to think about and I, for one, cannot wait for more on their findings! Cool.

from-dust-of-stars
from-dust-of-stars:

A long childhood feeds the hungry human brain

8.25.14

A five-year-old’s brain is an energy monster. It uses twice as much glucose (the energy that fuels the brain) as that of a full-grown adult, a new study led by Northwestern University anthropologists has found.

Pic: White matter fiber architecture of the brain. Credit: Human Connectome Project.

The study helps to solve the long-standing mystery of why human children grow so slowly compared with our closest animal relatives.

It shows that energy funneled to the brain dominates the human body’s metabolism early in life and is likely the reason why humans grow at a pace more typical of a reptile than a mammal during childhood.

Results of the study will be published the week of Aug. 25 in the journal Proceedings of the National Academy of Sciences.

"Our findings suggest that our bodies can’t afford to grow faster during the toddler and childhood years because a huge quantity of resources is required to fuel the developing human brain," said Christopher Kuzawa, first author of the study and a professor of anthropology at Northwestern’s Weinberg College of Arts and Sciences. "As humans we have so much to learn, and that learning requires a complex and energy-hungry brain."

Kuzawa also is a faculty fellow at the Institute for Policy Research at Northwestern.

The study is the first to pool existing PET and MRI brain scan data—which measure glucose uptake and brain volume, respectively—to show that the ages when the brain gobbles the most resources are also the ages when body growth is slowest. At 4 years of age, when this “brain drain” is at its peak and body growth slows to its minimum, the brain burns through resources at a rate equivalent to 66 percent of what the entire body uses at rest.

The findings support a long-standing hypothesis in anthropology that children grow so slowly, and are dependent for so long, because the human body needs to shunt a huge fraction of its resources to the brain during childhood, leaving little to be devoted to body growth. It also helps explain some common observations that many parents may have.

"After a certain age it becomes difficult to guess a toddler or young child’s age by their size," Kuzawa said. "Instead you have to listen to their speech and watch their behavior. Our study suggests that this is no accident. Body growth grinds nearly to a halt at the ages when brain development is happening at a lightning pace, because the brain is sapping up the available resources."

It was previously believed that the brain’s resource burden on the body was largest at birth, when the size of the brain relative to the body is greatest. The researchers found instead that the brain maxes out its glucose use at age 5. At age 4 the brain consumes glucose at a rate comparable to 66 percent of the body’s resting metabolic rate (or more than 40 percent of the body’s total energy expenditure).
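
A rough budget with assumed round numbers (not figures from the paper) shows how those two percentages fit together:

resting_metabolic_rate = 900      # kcal/day at rest for a young child (assumed)
total_daily_expenditure = 1400    # kcal/day including activity and growth (assumed)

brain_use = 0.66 * resting_metabolic_rate   # "66 percent of what the body uses at rest"
print(round(brain_use))                                   # ~594 kcal/day
print(round(100 * brain_use / total_daily_expenditure))   # ~42 percent of the total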

"The mid-childhood peak in brain costs has to do with the fact that synapses, connections in the brain, max out at this age, when we learn so many of the things we need to know to be successful humans," Kuzawa said.

"At its peak in childhood, the brain burns through two-thirds of the calories the entire body uses at rest, much more than other primate species," said William Leonard, co-author of the study. "To compensate for these heavy energy demands of our big brains, children grow more slowly and are less physically active during this age range. Our findings strongly suggest that humans evolved to grow slowly during this time in order to free up fuel for our expensive, busy childhood brains."

More information: “Metabolic costs and evolutionary implications of human brain development,” by Christopher W. Kuzawa et al. PNAS, 2014. www.pnas.org/cgi/doi/10.1073/pnas.1323099111

Journal reference: Proceedings of the National Academy of Sciences  

Provided by Northwestern University  

Source: http://medicalxpress.com/news/2014-08-childhood-hungry-human-brain.html

from-dust-of-stars: Incredible, those childhood years are so very precious and vital to human brain development.

Radical New Theory Could Kill the Multiverse Hypothesis

BY NATALIE WOLCHOVER, QUANTA MAGAZINE
08.25.14 | 6:30 AM

Though galaxies look larger than atoms and elephants appear to outweigh ants, some physicists have begun to suspect that size differences are illusory. Perhaps the fundamental description of the universe does not include the concepts of “mass” and “length,” implying that at its core, nature lacks a sense of scale.

Pic: http://www.fromquarkstoquasars.com/wp-content/uploads/2013/09/looking-for-life-in-the-multiverse_1.jpg

Original story reprinted with permission from Quanta Magazine, an editorially independent division of SimonsFoundation.org whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

This little-explored idea, known as scale symmetry, constitutes a radical departure from long-standing assumptions about how elementary particles acquire their properties. But it has recently emerged as a common theme of numerous talks and papers by respected particle physicists. With their field stuck at a nasty impasse, the researchers have returned to the master equations that describe the known particles and their interactions, and are asking: What happens when you erase the terms in the equations having to do with mass and length?

Nature, at the deepest level, may not differentiate between scales. With scale symmetry, physicists start with a basic equation that sets forth a massless collection of particles, each a unique confluence of characteristics such as whether it is matter or antimatter and has positive or negative electric charge. As these particles attract and repel one another and the effects of their interactions cascade like dominoes through the calculations, scale symmetry “breaks,” and masses and lengths spontaneously arise.

Similar dynamical effects generate 99 percent of the mass in the visible universe. Protons and neutrons are amalgams — each one a trio of lightweight elementary particles called quarks. The energy used to hold these quarks together gives them a combined mass that is around 100 times more than the sum of the parts. “Most of the mass that we see is generated in this way, so we are interested in seeing if it’s possible to generate all mass in this way,” said Alberto Salvio, a particle physicist at the Autonomous University of Madrid and the co-author of a recent paper on a scale-symmetric theory of nature.
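
That “around 100 times” figure can be checked directly against commonly quoted particle masses (rounded values below):

# Proton = two up quarks + one down quark; nearly all of its mass is binding
# and kinetic energy rather than the quarks' own masses.
m_up, m_down, m_proton = 2.2, 4.7, 938.3   # MeV/c^2, rounded

quark_sum = 2 * m_up + m_down
print(round(quark_sum, 1))            # ~9.1 MeV/c^2
print(round(m_proton / quark_sum))    # ~103, i.e. roughly 100 times the parts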

In the equations of the “Standard Model” of particle physics, only a particle discovered in 2012, called the Higgs boson, comes equipped with mass from the get-go. According to a theory developed 50 years ago by the British physicist Peter Higgs and associates, it doles out mass to other elementary particles through its interactions with them. Electrons, W and Z bosons, individual quarks and so on: All their masses are believed to derive from the Higgs boson — and, in a feedback effect, they simultaneously dial the Higgs mass up or down, too.

The new scale symmetry approach rewrites the beginning of that story.

“The idea is that maybe even the Higgs mass is not really there,” said Alessandro Strumia, a particle physicist at the University of Pisa in Italy. “It can be understood with some dynamics.”

The concept seems far-fetched, but it is garnering interest at a time of widespread soul-searching in the field. When the Large Hadron Collider at CERN Laboratory in Geneva closed down for upgrades in early 2013, its collisions had failed to yield any of dozens of particles that many theorists had included in their equations for more than 30 years. The grand flop suggests that researchers may have taken a wrong turn decades ago in their understanding of how to calculate the masses of particles.

“We’re not in a position where we can afford to be particularly arrogant about our understanding of what the laws of nature must look like,” said Michael Dine, a professor of physics at the University of California, Santa Cruz, who has been following the new work on scale symmetry. “Things that I might have been skeptical about before, I’m willing to entertain.”

The Giant Higgs Problem
The scale symmetry approach traces back to 1995, when William Bardeen, a theoretical physicist at Fermi National Accelerator Laboratory in Batavia, Ill., showed that the mass of the Higgs boson and the other Standard Model particles could be calculated as consequences of spontaneous scale-symmetry breaking. But at the time, Bardeen’s approach failed to catch on. The delicate balance of his calculations seemed easy to spoil when researchers attempted to incorporate new, undiscovered particles, like those that have been posited to explain the mysteries of dark matter and gravity.

Instead, researchers gravitated toward another approach called “supersymmetry” that naturally predicted dozens of new particles. One or more of these particles could account for dark matter. And supersymmetry also provided a straightforward solution to a bookkeeping problem that has bedeviled researchers since the early days of the Standard Model.

In the standard approach to doing calculations, the Higgs boson’s interactions with other particles tend to elevate its mass toward the highest scales present in the equations, dragging the other particle masses up with it. “Quantum mechanics tries to make everybody democratic,” explained theoretical physicist Joe Lykken, deputy director of Fermilab and a collaborator of Bardeen’s. “Particles will even each other out through quantum mechanical effects.”

This democratic tendency wouldn’t matter if the Standard Model particles were the end of the story. But physicists surmise that far beyond the Standard Model, at a scale about a billion billion times heavier known as the “Planck mass,” there exist unknown giants associated with gravity. These heavyweights would be expected to fatten up the Higgs boson — a process that would pull the mass of every other elementary particle up to the Planck scale. This hasn’t happened; instead, an unnatural hierarchy seems to separate the lightweight Standard Model particles and the Planck mass.
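
For scale, the Planck mass follows from fundamental constants, and comparing it with the measured Higgs boson mass (about 125 GeV) gives a sense of the gap described above. A quick sketch:

import math

HBAR, C, G = 1.0546e-34, 2.998e8, 6.674e-11   # SI units
JOULES_PER_GEV = 1.602e-10

planck_mass_kg = math.sqrt(HBAR * C / G)             # ~2.2e-8 kg
planck_mass_gev = planck_mass_kg * C**2 / JOULES_PER_GEV

higgs_mass_gev = 125.0                               # measured Higgs boson mass
print(f"{planck_mass_gev:.2e} GeV")                  # ~1.22e+19 GeV
print(f"{planck_mass_gev / higgs_mass_gev:.1e}")     # ~9.8e+16: the hierarchy gap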

With his scale symmetry approach, Bardeen calculated the Standard Model masses in a novel way that did not involve them smearing toward the highest scales. From his perspective, the lightweight Higgs seemed perfectly natural. Still, it wasn’t clear how he could incorporate Planck-scale gravitational effects into his calculations.

Meanwhile, supersymmetry used standard mathematical techniques, and dealt with the hierarchy between the Standard Model and the Planck scale directly. Supersymmetry posits the existence of a missing twin particle for every particle found in nature. If for each particle the Higgs boson encounters (such as an electron) it also meets that particle’s slightly heavier twin (the hypothetical “selectron”), the combined effects would nearly cancel out, preventing the Higgs mass from ballooning toward the highest scales. Like the physical equivalent of x + (–x) ≈ 0, supersymmetry would protect the small but non-zero mass of the Higgs boson. The theory seemed like the perfect missing ingredient to explain the masses of the Standard Model — so perfect that without it, some theorists say the universe simply doesn’t make sense.
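Written out schematically (a standard textbook sketch of the cancellation, not a formula from the article), a fermion loop and its superpartner’s loop pull the Higgs squared mass in opposite directions:

\[
\delta m_H^2 \;\sim\; -\frac{y_f^2}{16\pi^2}\,\Lambda^2 \;+\; \frac{y_f^2}{16\pi^2}\,\Lambda^2 \;\approx\; 0,
\]

with the leftover piece set by the mass splitting between each particle and its twin rather than by the huge scale Λ. That is why the twins were expected to be only modestly heavier than the known particles, and hence within the LHC’s reach.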

Yet decades after their prediction, none of the supersymmetric particles have been found. “That’s what the Large Hadron Collider has been looking for, but it hasn’t seen anything,” said Savas Dimopoulos, a professor of particle physics at Stanford University who helped develop the supersymmetry hypothesis in the early 1980s. “Somehow, the Higgs is not protected.”

The LHC will continue probing for convoluted versions of supersymmetry when it switches back on next year, but many physicists have grown increasingly convinced that the theory has failed. Just last month at the International Conference on High Energy Physics in Valencia, Spain, researchers analyzing the largest data set yet from the LHC found no evidence of supersymmetric particles. (The data also strongly disfavors an alternative proposal called “technicolor.”)

The implications are enormous. Without supersymmetry, the Higgs boson mass seems as if it is reduced not by mirror-image effects but by random and improbable cancellations between unrelated numbers — essentially, the initial mass of the Higgs seems to exactly counterbalance the huge contributions to its mass from gluons, quarks, gravitational states and all the rest. And if the universe is improbable, then many physicists argue that it must be one universe of many: just a rare bubble in an endless, foaming “multiverse.” We observe this particular bubble, the reasoning goes, not because its properties make sense, but because its peculiar Higgs boson is conducive to the formation of atoms and, thus, the rise of life. More typical bubbles, with their Planck-size Higgs bosons, are uninhabitable.
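In rough numbers (a standard back-of-the-envelope estimate, not figures from the article), the observed mass is the sum of a bare term and those huge contributions:

\[
m_{H,\text{obs}}^2 \;=\; m_{H,\text{bare}}^2 + \delta m_H^2, \qquad (125\ \text{GeV})^2 \;\approx\; \mathcal{O}(M_P^2) - \mathcal{O}(M_P^2),
\]

a cancellation between two apparently unrelated quantities that would have to be accurate to roughly one part in 10^{34}.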

“It’s not a very satisfying explanation, but there’s not a lot out there,” Dine said.

As the logical conclusion of prevailing assumptions, the multiverse hypothesis has surged in begrudging popularity in recent years. But the argument feels like a cop-out to many, or at least a huge letdown. A universe shaped by chance cancellations eludes understanding, and the existence of unreachable, alien universes may be impossible to prove. “And it’s pretty unsatisfactory to use the multiverse hypothesis to explain only things we don’t understand,” said Graham Ross, an emeritus professor of theoretical physics at the University of Oxford.

The multiverse ennui can’t last forever.

“People are forced to adjust,” said Manfred Lindner, a professor of physics and director of the Max Planck Institute for Nuclear Physics in Heidelberg who has co-authored several new papers on the scale symmetry approach. The basic equations of particle physics need something extra to rein in the Higgs boson, and supersymmetry may not be it. Theorists like Lindner have started asking, “Is there another symmetry that could do the job, without creating this huge amount of particles we didn’t see?”

Wrestling Ghosts
Picking up where Bardeen left off, researchers like Salvio, Strumia and Lindner now think scale symmetry may be the best hope for explaining the small mass of the Higgs boson. “For me, doing real computations is more interesting than doing philosophy of multiverse,” said Strumia, “even if it is possible that this multiverse could be right.”

For a scale-symmetric theory to work, it must account for both the small masses of the Standard Model and the gargantuan masses associated with gravity. In the ordinary approach to doing the calculations, both scales are put in by hand at the beginning, and when they connect in the equations, they try to even each other out. But in the new approach, both scales must arise dynamically — and separately — starting from nothing.

“The statement that gravity might not affect the Higgs mass is very revolutionary,” Dimopoulos said.

A theory called “agravity” (for “adimensional gravity”) developed by Salvio and Strumia may be the most concrete realization of the scale symmetry idea thus far. Agravity weaves the laws of physics at all scales into a single, cohesive picture in which the Higgs mass and the Planck mass both arise through separate dynamical effects. As detailed in June in the Journal of High-Energy Physics, agravity also offers an explanation for why the universe inflated into existence in the first place. According to the theory, scale-symmetry breaking would have caused an exponential expansion in the size of space-time during the Big Bang.

However, the theory has what most experts consider a serious flaw: It requires the existence of strange particle-like entities called “ghosts.” Ghosts either have negative energies or negative probabilities of existing — both of which wreak havoc on the equations of the quantum world.

“Negative probabilities rule out the probabilistic interpretation of quantum mechanics, so that’s a dreadful option,” said Kelly Stelle, a theoretical particle physicist at Imperial College, London, who first showed in 1977 that certain gravity theories give rise to ghosts. Such theories can only work, Stelle said, if the ghosts somehow decouple from the other particles and keep to themselves. “Many attempts have been made along these lines; it’s not a dead subject, just rather technical and without much joy,” he said.

Strumia and Salvio think that, given all the advantages of agravity, ghosts deserve a second chance. “When antimatter particles were first considered in equations, they seemed like negative energy,” Strumia said. “They seemed nonsense. Maybe these ghosts seem nonsense but one can find some sensible interpretation.”

Meanwhile, other groups are crafting their own scale-symmetric theories. Lindner and colleagues have proposed a model with a new “hidden sector” of particles, while Bardeen, Lykken, Marcela Carena and Martin Bauer of Fermilab and Wolfgang Altmannshofer of the Perimeter Institute for Theoretical Physics in Waterloo, Canada, argue in an Aug. 14 paper that the scales of the Standard Model and gravity are separated as if by a phase transition. The researchers have identified a mass scale where the Higgs boson stops interacting with other particles, causing their masses to drop to zero. It is at this scale-free point that a phase change-like crossover occurs. And just as water behaves differently than ice, different sets of self-contained laws operate above and below this critical point.

To get around the lack of scales, the new models require a calculation technique that some experts consider mathematically dubious, and in general, few will say what they really think of the whole approach. It is too different, too new. But agravity and the other scale symmetric models each predict the existence of new particles beyond the Standard Model, and so future collisions at the upgraded LHC will help test the ideas.

In the meantime, there’s a sense of rekindling hope.

“Maybe our mathematics is wrong,” Dine said. “If the alternative is the multiverse landscape, that is a pretty drastic step, so, sure — let’s see what else might be.”

Original story reprinted with permission from Quanta Magazine, an editorially independent division of SimonsFoundation.org whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Source: http://www.wired.com/2014/08/multiverse/

universal-abyss: Crikey, this will really get your mind whirring. Wow, very interesting. Stretch your mind muscles for this one!