Rahul Rao | Popular Science https://www.popsci.com/authors/rahul-rao/ Awe-inspiring science reporting, technology news, and DIY projects. Skunks to space robots, primates to climates. That's Popular Science, 145 years strong.

Oldest radio burst ever found could tell us what exists between galaxies https://www.popsci.com/science/oldest-fast-radio-burst-8-billion-years/ Thu, 19 Oct 2023 18:00:00 +0000
A radio telescope in Australia beneath the Milky Way.
The Australian Square Kilometre Array Pathfinder sensed the remarkable FRB. CSIRO

These signals emit as much energy in milliseconds as the sun does in three days.

The post Oldest radio burst ever found could tell us what exists between galaxies appeared first on Popular Science.


Of all the pyrotechnics that blast through the cosmos, fast radio bursts (FRBs) are among the most powerful—and mysterious. While our radio telescopes have picked up hundreds of known FRBs, radio astronomers recently detected one of the most fascinating bursts yet. Not only does it come from a greater distance than any FRB observed before, it’s the most energetic, too.

A superlative FRB like this defies our already murky understanding of the bursts’ origins. FRBs are sudden surges of radio waves that typically last less than a second, if not mere milliseconds. And they are very, very high-energy: They can deliver as much energy in milliseconds as the sun emits in three days. Despite all that, we don’t know for certain how they form.
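That energy comparison is easy to sanity-check with a bit of arithmetic. The sketch below is illustrative only: it assumes a representative one-millisecond burst duration, which is not a measured value from the study.

```python
# Back-of-the-envelope check of the claim above (a sketch; the
# 1-millisecond duration is an assumed representative value).
SOLAR_LUMINOSITY_W = 3.828e26      # IAU nominal solar luminosity, watts
SECONDS_PER_DAY = 86_400

def frb_energy_joules(days_of_sunlight: float = 3.0) -> float:
    """Energy the sun radiates in the given number of days."""
    return SOLAR_LUMINOSITY_W * days_of_sunlight * SECONDS_PER_DAY

def frb_peak_power_watts(duration_s: float = 1e-3) -> float:
    """Implied power if that energy is released in `duration_s` seconds."""
    return frb_energy_joules() / duration_s

# Roughly 1e32 joules squeezed into a millisecond implies a power
# hundreds of millions of times the sun's luminosity.
print(f"{frb_energy_joules():.2e} J")
print(f"{frb_peak_power_watts() / SOLAR_LUMINOSITY_W:.1e} suns")
```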

The new event, which astronomers lovingly call FRB 20220610A, first appeared as a blip in the Australian Square Kilometre Array Pathfinder, an arrangement of antennae in the desert about 360 miles north of Perth. When astronomers measured the burst’s redshift, they calculated that it left its source about 8 billion years ago, as they described in a paper published today in Science.

After pinpointing the burst’s origin in the sky and following up with visible light and infrared telescopes, the authors managed to develop a blurry image of merging galaxies.

[Related: Two bizarre stars might have beamed a unique radio signal to Earth]

“The further you go out in the universe, of course, the fainter the galaxies are, because they’re farther away. It’s quite difficult to identify the host galaxy, and that’s what they’ve done,” says Sarah Burke Spolaor, an astronomer who studies FRBs at West Virginia University and was not an author of the study.

FRBs aren’t exciting just because they’re loud. To reach us, a burst from outside the Milky Way must traverse millions or billions of light-years of the near-empty space between galaxies. In the process, it will encounter an extremely sparse smattering of ionized particles. This is the stuff that prevents the bulk of the cosmos from being completely empty—what astronomers call the intergalactic medium, which might make up as much as half of the universe’s “normal” matter.

“We don’t know much about it, because it’s so tenuous that it’s difficult to detect,” says Daniele Michilli, an astronomer at the Massachusetts Institute of Technology, who also wasn’t a study author.

As an FRB crosses the intergalactic medium on its long voyage, the particles cause its radio waves to scatter, which leaves fingerprints that astronomers can pick apart. In this way, scientists can use FRBs to investigate the intergalactic medium. More faraway bursts like FRB 20220610A could allow astronomers to study the medium across wide swathes of the universe.
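The fingerprint in question is dispersion: lower radio frequencies arrive slightly later than higher ones, by an amount proportional to how many free electrons the burst swept through (its dispersion measure, or DM). A rough sketch of that standard cold-plasma relation, using an illustrative DM and observing band rather than the values measured for FRB 20220610A:

```python
# Lower frequencies lag higher ones by approximately
#   dt ≈ 4.149 ms × DM × (f_lo⁻² − f_hi⁻²),  f in GHz, DM in pc/cm³.
# The DM below is an illustrative round number, not the burst's
# measured value.
K_DM_MS = 4.149  # dispersion constant, ms · GHz² · cm³ / pc

def dispersion_delay_ms(dm: float, f_lo_ghz: float, f_hi_ghz: float) -> float:
    """Extra arrival delay of f_lo relative to f_hi, in milliseconds."""
    return K_DM_MS * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# A burst with DM = 1000 pc/cm³ observed between 1.1 and 1.4 GHz
# (an assumed band, roughly where such surveys operate) smears out
# over more than a second:
delay = dispersion_delay_ms(dm=1000.0, f_lo_ghz=1.1, f_hi_ghz=1.4)
print(f"{delay:.0f} ms")
```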

[Related: How astronomers traced a puzzling detection to a lunchtime mistake]

“It’s very exciting, definitely one of the great applications of fast radio bursts,” says Ziggy Pleunis, an astronomer who studies FRBs at the University of Toronto, who was also not part of the authors’ group. “Fast radio bursts currently are really the only thing that we know that interacts with the intergalactic medium in a meaningful enough way that we can measure properties.”

An illustrated yellow beam representing a fast radio burst connects merging galaxies to our Milky Way.
A yellow beam representing the FRB traveling between galaxies, in a concept illustration. ESO/M. Kornmesser

In the future, astronomers might even be able to use FRBs to study how the universe expands. To unweave that mystery, however, astronomers will need to detect FRBs from even deeper into the cosmic past than FRB 20220610A. “For a lot of applications, it’s still not quite far away enough,” Pleunis says. “But it certainly bodes well.” 

There’s a balancing act involved: Over a sufficiently long distance, the particles in the intergalactic medium will peel an FRB apart until it disperses into background noise. To survive the trip, an FRB must start out brighter and more energetic; in turn, by taking stock of how much a burst has dispersed, astronomers can estimate its original energy.

By crunching the numbers for FRB 20220610A, the researchers found that it was the most energetic burst Earth has seen so far. (Another recently observed burst, FRB 20201124A, comes within the same order of magnitude, but FRB 20220610A is the record-holder.) A burst with this much energy throws something of a wrench into astronomers’ understanding, such as it is, of what creates FRBs in the first place.

We, again, don’t have a definitive answer to that question. Complicating the question, some FRBs are one-off flashes, while others repeat, hinting that the two types of FRBs may have two different origins. (To wit, FRB 20220610A seems to have been a one-off. But that other high-energy FRB, FRB 20201124A, seems to repeat.)

Nevertheless, astronomers have simulated a few scenarios, largely involving neutron stars. Perhaps FRBs burst from near a neutron star’s surface, or perhaps FRBs erupt from shockwaves through the material that neutron stars throw up.

But when this paper’s authors ran the numbers with their new FRB, they found that neither of those two scenarios could easily create a burst with this much energy—suggesting that theoretical astronomers have even more work to do before they can satisfactorily explain these events.

“What always strikes me about fast radio bursts is, every time we observe a new one, it breaks the mold of previous ones,” Spolaor says.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.
We can predict solar eclipses to the second. Here’s how. https://www.popsci.com/science/solar-eclipse-predictions-math/ Tue, 10 Oct 2023 16:00:00 +0000
An orange ring around the dark moon eclipsing the sun.
Astronomers have calculated to the second how long the annularity will last as the moon's shadow travels across the US southwest. Depositphotos

Astronomers have made maps for eclipses hundreds of years into the future.

The post We can predict solar eclipses to the second. Here’s how. appeared first on Popular Science.


On October 14, the Western Hemisphere will witness an annular solar eclipse. The moon will be too small and far away in our view to totally block out the sun’s disc. Instead, it will blot out its center, leaving a ring at the edges. The best locations to view that ring of fire in the sky will be along a path that cuts through Oregon, Texas, Central America, Colombia, and finally northern Brazil. You might decide to visit Albuquerque, New Mexico, where you’ll experience exactly 4 minutes and 48 seconds of an annular eclipse.

And if you’re seeking a true total eclipse, you only have to wait another six months. On April 8, 2024, at 2:10 p.m. Eastern (12:10 p.m. local time), Mazatlan, Mexico will become the first city in North America to see most of the sun vanish in shadow. The path of totality then arcs through Dallas and Indianapolis into Montréal, New Brunswick, and Newfoundland in Canada. We know all of these precise details—and more—thanks to our knowledge of where the moon and sun are situated in the sky at any given moment.

In fact, we can predict and map eclipses farther into the future, even centuries from now. Because they know the precise positions of the moon and the sun and how they shift over time, scientists can project the moon’s shadow onto Earth’s globe. And with cutting-edge computers, it’s possible to chart eclipse paths down to a range of a few feet.

A solar eclipse needs three things: the sun, the moon, and Earth. It results when the moon blocks the sun’s light from our vantage point on Earth. So to predict an eclipse, you must know where and how the sun, moon, and Earth move in relation to each other. This isn’t quite as elementary as it may seem, because the solar system isn’t flat. The moon’s orbit slants about 5 degrees in relation to the sun’s apparent path across the sky, which astronomers call the ecliptic. While our satellite passes between Earth and the sun around once a month—which we call a new moon—the two rarely seem to cross paths.

A map of the October annular eclipse.
A map of the October annular eclipse. NASA

Solar eclipses can only occur when the moon sits at one of the two points where its orbit crosses the ecliptic, known as nodes. If the moon happens to be new at that crossing, the result is a solar eclipse.

In centuries past, trying to predict eclipses meant predicting minute details of finicky orbits. But as astronomers learned more about how celestial objects moved, they began tabulating what they call ephemerides: predictions of where the moon, sun, and planets will be in the sky. Ephemerides are still the key to eclipse prediction.

[Related: Make a classic pinhole camera to watch the upcoming solar eclipse]

“All you need is the ephemeris data…you don’t have to actually track the orbit,” says C. Alex Young, a solar physicist at NASA’s Goddard Space Flight Center.

With ephemeris data, astronomers can pinpoint dates and times when the moon and sun cross paths. Once you know that date, mapping an eclipse is relatively straightforward. Ephemerides let scientists project the moon’s shadow onto Earth’s sphere; with 19th-century mathematics, they can calculate the shape and latitude of two features of that shadow, the umbra and penumbra. Then, by knowing what time it is and where Earth is angled in its rotation, it’s possible to determine the longitudes. Putting these together produces an eclipse map.

In the past, astronomers printed the ephemerides in almanacs, long tomes filled with page after page of coordinate tables. Just as all of astronomy has advanced into an era of computers, so have ephemerides. Scientists today mathematically model the paths of the moon, sun, planets, other moons, asteroids, and much more.

NASA’s Jet Propulsion Laboratory (JPL) publishes a new compendium of celestial locations every few years. The most recent edition, 2021’s DE440, accounts for details like the moon’s core and mantle sloshing around and slowing its rotation. “Generally speaking, we know where the moon is from the Earth to about a meter, maybe a couple of meters,” says Ryan Park, an engineer at JPL. “We typically know where the sun is to maybe a couple hundred meters, maybe 300 meters.”

[Related: How to look at the eclipse without damaging your eyes]

Ephemerides serve other purposes, especially when planning spaceflight missions. But it’s largely due to more sophisticated ephemeris data that we can now reliably predict the motions of the moon for the centuries ahead. In fact, you can find detailed maps of solar eclipses nearly a millennium in the future. (If you’re lucky enough to be in Seattle on April 23, 2563 or in Amsterdam on September 7, 2974, prepare for total eclipse day.)

But these maps, like most eclipse maps, show the path of totality or annularity as a smooth line crossing Earth’s surface. That isn’t an accurate representation. “This was designed for pencil and paper calculation, so it makes a lot of simplifying assumptions that are just a tiny bit wrong,” says Ernie Wright, who makes eclipse maps for NASA Goddard, “for instance that the moon is a perfectly smooth sphere.”

Both the moon and Earth are jagged at the edge. Earth’s terrain can block some views of the sun, and the moon has its own patchwork of mountains and valleys. In fact, sunbeams passing through lunar vales create the Baily’s beads and “diamond ring” often seen at an eclipse’s edge. “We now have detailed terrain information of these mountains from the Lunar Reconnaissance Orbiter,” Young says.

Wright has helped devise a new way of mapmaking that swaps the Victorian-age mathematics out for modern computer graphics. His method turns Earth’s surface into a map of pixels, each one with different latitude, longitude, and elevation, with the sun and moon in the sky above. Then, the method calculates which pixels see which parts of the moon block which parts of the sun. 

“You then make a whole sequence of maps at, say, one-second intervals for the duration of the eclipse,” Wright says. “You end up with a frame sequence that you can put together to make a movie of the shadow.” This new technique—only possible with modern computers and ultraprecise ephemerides—may allow us to make eclipse maps that clearly show whether you can see an eclipse from, say, your house. 
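At the heart of each pixel’s calculation is a piece of geometry: how much of the sun’s disc does the moon’s disc cover? The sketch below works that out for one pixel, using assumed apparent sizes and separations; a real eclipse map would take those from ephemeris data and repeat the computation for every pixel at every time step.

```python
import math

# Fraction of the sun's disc covered by the moon's disc, given both
# apparent angular radii and the angular separation of their centers
# (all in the same units, e.g. degrees). The example values at the
# bottom are illustrative, not taken from an ephemeris.

def obscured_fraction(r_sun: float, r_moon: float, sep: float) -> float:
    """Fraction of the sun's disc overlapped by the moon's disc."""
    if sep >= r_sun + r_moon:          # discs don't touch: no eclipse
        return 0.0
    if sep <= abs(r_sun - r_moon):     # one disc entirely inside the other
        return min(1.0, (r_moon / r_sun) ** 2)
    # Standard circle-circle "lens" intersection area.
    d2 = sep * sep
    a1 = r_sun**2 * math.acos((d2 + r_sun**2 - r_moon**2) / (2 * sep * r_sun))
    a2 = r_moon**2 * math.acos((d2 + r_moon**2 - r_sun**2) / (2 * sep * r_moon))
    tri = 0.5 * math.sqrt((-sep + r_sun + r_moon) * (sep + r_sun - r_moon)
                          * (sep - r_sun + r_moon) * (sep + r_sun + r_moon))
    return (a1 + a2 - tri) / (math.pi * r_sun**2)

# Annular case: a moon disc (0.245°) slightly smaller than the sun's
# (0.267°) and centered on it covers most of the disc, leaving a ring.
print(f"{obscured_fraction(0.267, 0.245, 0.0):.3f}")
```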

“I think that’s going to provide a whole new set of maps in the future that are going to be much more accurate,” says Young. “It’s going to be pretty exciting.”

This nuclear byproduct is fueling debate over Fukushima’s seafood https://www.popsci.com/environment/fukushima-water-releases-tritium/ Sat, 07 Oct 2023 19:00:00 +0000
Blue bins of fish and other seafood caught near the Fukushima nuclear plant in Japan
Fishery workers sort out seafood caught in Japan's Fukushima prefecture about a week after the country began discharging treated wastewater from the Fukushima Daiichi nuclear power plant. STR/JIJI Press/AFP via Getty Images

Is disposing water from the Fukushima nuclear plant into the ocean safe for marine life? Scientists say it's complicated.

The post This nuclear byproduct is fueling debate over Fukushima’s seafood appeared first on Popular Science.


On October 5, operators of Japan’s derelict Fukushima Daiichi nuclear power plant resumed pumping out wastewater held in the facility for the past 12 years. Over the following two-and-a-half weeks, Tokyo Electric Power Company (TEPCO) plans to release around 7,800 tons of treated water into the Pacific Ocean.

This is TEPCO’s second round of discharging nuclear plant wastewater, following an initial release in September. Plans call for the process, which was approved by and is being overseen by the Japanese government, to go on intermittently for some 30 years. But the approach has been controversial: Polls suggest that around 40 percent of the Japanese public opposes it, and it has sparked backlash from ecological activists, local fishermen, South Korean citizens, and the Chinese government, who fear that radiation will harm Pacific ecosystems and contaminate seafood.

Globally, some scientists argue there is no cause for concern. “The doses [of radiation] really are incredibly low,” says Jim Smith, an environmental scientist at the University of Portsmouth in the UK. “It’s less than a dental X-ray, even if you’re consuming seafood from that area.”

Smith vouches for the water release’s safety in an opinion article published on October 5 in the journal Science. The International Atomic Energy Agency has endorsed TEPCO’s process and also vouched for its safety. But experts in other fields have strong reservations about continuing with the pumping.

“There are hundreds of clear examples showing that, where radioactivity levels are high, there are deleterious consequences,” says Timothy Mousseau, a biologist at the University of South Carolina.

[Related: Nuclear war inspired peacetime ‘gamma gardens’ for growing mutant plants]

After a tsunami struck the Fukushima nuclear power plant in 2011, TEPCO started frantically shunting water into the six reactors to stop them from overheating and causing an even greater catastrophe. They stored the resulting 1.25 million tons of radioactive wastewater in tanks on-site. TEPCO and the Japanese government say that if Fukushima Daiichi is ever to be decommissioned, that water will have to go elsewhere.

In the past decade, TEPCO says it’s been able to treat the wastewater with a series of chemical reactions and cleanse most of the contaminant radioisotopes, including iodine-131, cesium-134, and cesium-137. But much of the current controversy swirls around one isotope the treatment couldn’t remove: tritium.

Tritium is a hydrogen isotope that has two extra neutrons. A byproduct of nuclear fission, it is radioactive with a half-life of around 12 years. Because tritium shares many properties with hydrogen, its atoms can infiltrate water molecules and create a radioactive liquid that looks and behaves almost identically to what we drink.

This makes separating it from nuclear wastewater challenging—in fact, no existing technology can treat tritium in the sheer volume of water contained at Fukushima. Some of the plan’s opponents argue that authorities should postpone any releases until scientists develop a system that could cleanse tritium from large amounts of water.

But TEPCO argues they’re running out of room to keep the wastewater. As a result, they have chosen to heavily dilute it—100 parts “clean” water for every 1 part of tritium water—and pipe it into the Pacific.
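The rough arithmetic behind that choice is simple to sketch. The starting concentration below is an arbitrary placeholder, not TEPCO’s figure, and the half-life is the commonly quoted 12.3 years:

```python
# Two effects reduce the tritium concentration: dilution cuts it
# immediately, and radioactive decay does the rest over the roughly
# 30-year release schedule. Starting value is a made-up placeholder.

HALF_LIFE_YEARS = 12.3  # tritium half-life, commonly quoted value

def after_dilution(concentration: float, parts_water: int = 100) -> float:
    """Concentration after mixing 1 part wastewater with `parts_water` parts clean water."""
    return concentration / (parts_water + 1)

def after_decay(concentration: float, years: float) -> float:
    """Concentration remaining after radioactive decay for `years`."""
    return concentration * 0.5 ** (years / HALF_LIFE_YEARS)

c0 = 1.0                      # arbitrary starting units
c = after_dilution(c0)        # about 1% of the original concentration
print(after_decay(c, 30.0))   # under half a percent of c0 after 30 years
```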

“There is no option for Fukushima or TEPCO but to release the water,” says Awadhesh Jha, an environmental toxicologist at the University of Plymouth in the UK. “This is an area which is prone to earthquakes and tsunamis. They can’t store it—they have to deal with it.”

Smith believes the same properties that allow tritium to hide in water molecules mean it doesn’t build up in marine life, citing environmental research by him and his colleagues. For decades, they’ve been studying fish and insects in lakes, pools, and ponds downstream from the nuclear disaster at Chernobyl. “We haven’t really found significant impacts of radiation on the ecosystem,” Smith says.

[Related: Ultra-powerful X-rays are helping physicists understand Chernobyl]

What’s more, Japanese officials testing seawater during the initial release did not find detectable levels of tritium, which Smith attributes to the wastewater’s dilution.

But the first release barely scratches the surface of Fukushima’s wastewater, and Jha warns that the scientific evidence regarding tritium’s effect in the sea is mixed. There are still many questions about how potent tritium’s effects are on different biological systems and different links in the food chain. Some results do suggest that the isotope can damage fish chromosomes as effectively as higher-energy X-rays or gamma rays, leading to negative health outcomes later in life.

Additionally, experts have found tritium can bind to organic matter in various ecosystems and persist there for decades. “These things have not been addressed adequately,” Jha says.

Smith argues that there’s less tritium in this release than in natural sources, like cosmic rays that strike the upper atmosphere and create tritium rain from above. Furthermore, he says that damage to fish DNA does not necessarily correlate to adverse effects for wildlife or people. “We know that radiation, even at low doses, can damage DNA, but that’s not sufficient to damage how the organism reproduces, how it lives, and how it develops,” he says.

“We don’t know that the effects of the water release will be negligible, because we don’t really know for sure how much radioactive material actually will be released in the future,” Mousseau counters. He adds that independent oversight of the process could quell some of the environmental and health concerns.

Smith and other proponents of TEPCO’s plan point out that it’s actually common practice in the nuclear industry. Power plants use water to cool their reactors, leaving them with tons of tritium-laced waste to dispose of. Because tritium is, again, close to impossible to remove from large quantities of H2O with current technology, power plants (including ones in China) dump it back into bodies of water at concentrations that exceed those in the Fukushima releases.

“That doesn’t justify that we should keep discharging,” Jha says. “We need to do more work on what it does.”

If tritium levels stay as low as TEPCO and Smith assure they will, then the seafood from the region may very well be safe to eat. But plenty of experts like Mousseau and Jha don’t think there is enough scientific evidence to say that with certainty.

Does antimatter fall down or up? We now have a definitive answer. https://www.popsci.com/science/antimatter-gravity/ Wed, 27 Sep 2023 21:14:47 +0000
CERN scientists in hard hats putting antihydrogen in a vacuum chamber tube to test the effects of gravity on antimatter
The hardest part of the ALPHA experiment was not making antimatter fall, but creating and containing it in a tall vacuum chamber. CERN

Gravity wins—this time around.

The post Does antimatter fall down or up? We now have a definitive answer. appeared first on Popular Science.


Albert Einstein didn’t know about the existence of antimatter when he came up with the theory of general relativity, which has governed our understanding of gravity ever since. More than a century later, scientists are still debating how gravity affects antimatter, the elusive mirror versions of the particles that abide within us and around us. In other words, does an antimatter droplet fall down or up? 

Common physics wisdom holds that it should fall down. A tenet of general relativity itself known as the weak equivalence principle implies that gravity shouldn’t care whether something is matter or antimatter. At the same time, a small contingent of experts argue that antimatter falling up might explain, for instance, the mysterious dark energy that potentially dominates our universe.

As it happens, particle physicists now have the first direct evidence that antimatter falls down. The Antihydrogen Laser Physics Apparatus (ALPHA) collaboration, an international team based at CERN, measured gravity’s impact on antimatter for the first time. The ALPHA group published their work in the journal Nature today. 

Every particle in the universe has an antimatter reflection with an identical mass and opposite electrical charge; these inverses are rare in nature, but they have been detected in cosmic rays and used in medical imaging for decades. Actually creating antimatter in any meaningful amount is tricky, though, because as soon as a particle of matter and its antiparticle meet, the two self-destruct into pure energy. Therefore, antimatter must be carefully cordoned off from all matter, which makes it extra difficult to drop it or play with it in any way.

“Everything about antimatter is challenging,” says Jeffrey Hangst, a physicist at Aarhus University in Denmark and a member of the ALPHA group. “It just really sucks to have to work with it.”

Adding to the challenge, gravity is extremely weak on the microscopic scale of atoms and subatomic particles. As early as the 1960s, physicists first thought about measuring gravity’s effects on positrons, or anti-electrons, which have positive rather than negative electric charge. Alas, that same electric charge makes positrons susceptible to tiny electric fields—and electromagnetism eclipses gravity’s force.

So, to properly test gravity’s influence on antimatter, researchers needed a neutral particle. The only one “on the horizon” was the antihydrogen atom, says Joel Fajans, a physicist at UC Berkeley and another member of the ALPHA group.

Antihydrogen is the first, most fundamental element of the anti-periodic table. Just as the garden-variety hydrogen atom consists of one proton and one electron, the basic antihydrogen atom consists of one negatively charged antiproton and an orbiting positron. Physicists first created antihydrogen atoms in the 1990s; they couldn’t trap and store any until 2010.

“We had to learn how to make it, and then we had to learn how to hold onto it, and then we had to learn how to interact with it, and so on,” says Hangst.

Once they overcame those hurdles, they were finally able to study antihydrogen’s properties—such as its behavior under gravity. For the new paper, the ALPHA group designed a vacuum chamber around a vertical tube devoid of any matter, to prevent the antihydrogen from annihilating prematurely. Scientists wrapped part of the tube inside a superconducting magnetic “bottle,” creating a magnetic field that locked the antihydrogen in place until they needed to use it.

Building this apparatus took years on end. “We spent hundreds of hours just studying the magnetic field without using antimatter at all to convince ourselves that we knew what we were doing,” says Hangst. To produce a magnetic field strong enough to hold the antihydrogen, they had to keep the device chilled at -452 degrees Fahrenheit. 

The ALPHA group then dialed down the magnetic field to open the top and bottom of the bottle, and let the antihydrogen atoms loose until they crashed into the tube’s wall. They measured where those atomic deaths happened: above or below the position where the antimatter had been held. Some 80 percent of atoms fell a few centimeters below the trap, in line with what a cloud of regular hydrogen atoms would do in the same setup. (The other 20 percent simply popped out.)

“It’s been a lot of fun doing the experiment,” Fajans says. “People have been thinking about this problem for a hundred years … we now have a definitive answer.”

Other researchers around the world are now trying to replicate the result. Their ranks include two other CERN collaborations, GBAR and AEgIS, that are also focused on antihydrogen atoms. The ALPHA team themselves hope to tinker with their experiment to gain more confidence in the outcome.

For instance, when the authors of the Nature study computed how rapidly the antihydrogen atoms accelerated downward with gravity, they found it was 75 percent of the rate physicists would expect for regular hydrogen atoms. But they expect the discrepancy to fade when they repeat these observations to find a more precise result. “This number and these uncertainties are essentially consistent with our best expectation for what gravity would have looked like in our experiment,” says William Bertsche, a physicist at the University of Manchester and another member of the ALPHA group.

But it’s also possible that gravity influences matter and antimatter in different ways. Such an anomaly would throw the weak equivalence principle—and, by extension, general relativity as a whole—into doubt.

Solving this essential question could lead to more answers around the birth of the universe, too. Antimatter lies at the heart of one of physics’ great unsolved mysteries: Why don’t we see more of it? Our laws of physics clearly decree that the big bang ought to have created equal parts matter and antimatter. If so, the two halves of our cosmos should have self-destructed shortly after birth.

Instead, we observe a universe filled with matter and devoid of discernable antimatter to balance it. Either the big bang created an unexplained glut of matter, or something unknown happened. Scientists call this cosmic riddle the baryogenesis problem.

“Any difference that you find between hydrogen and antihydrogen would be an extremely important clue to the baryogenesis problem,” says Fajans.

What is matter? It’s not as basic as you’d think. https://www.popsci.com/science/what-is-matter/ Mon, 25 Sep 2023 10:00:00 +0000
Gold atom with nucleus and floating particles to depict what is matter
An atom consists of protons and neutrons bound in a nucleus, orbited by electrons. But matter consists of a whole lot more. Deposit Photos

Matter makes up nearly a third of the universe, but is still shrouded in secrets.

The post What is matter? It’s not as basic as you’d think. appeared first on Popular Science.


A little less than one-third of the universe—around 31 percent—consists of matter. A new calculation confirms that number; astrophysicists have long believed that something other than tangible stuff makes up the majority of our reality. So then, what is matter exactly?

One of the hallmarks of Albert Einstein’s theory of special relativity is that mass and energy are inseparable. All mass has intrinsic energy; this is the significance of Einstein’s famous E=mc² equation. When cosmologists weigh the universe, they’re measuring both mass and energy at once. And 31 percent of that amount is matter, whether it’s visible or invisible.
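To get a feel for what “intrinsic energy” means, it helps to run E=mc² on a small, everyday mass. This is just the textbook formula worked through, not a calculation from the new study:

```python
# Mass-energy equivalence: even a gram of matter carries an
# enormous intrinsic energy.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def rest_energy_joules(mass_kg: float) -> float:
    """E = mc²: intrinsic energy of a given rest mass."""
    return mass_kg * C**2

# One gram of matter: roughly 9e13 joules, on the order of
# 21 kilotons of TNT equivalent.
e = rest_energy_joules(0.001)
print(f"{e:.2e} J")
```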

That difference is key: Not all matter is alike. Very little of it, in fact, forms the objects we can see or touch. The universe is replete with examples of matter that are far stranger.

What is matter?

When we think of “matter,” we might picture the objects we see or their basic building block: the atom. 

Our conception of the atom has evolved over the years. Thinkers throughout history had vague ideas that existence could be divided into basic components. But something that resembles the modern idea of the atom is generally credited to British chemist John Dalton. In 1808, he proposed that indivisible particles made up matter. Different base substances—the elements—arose from atoms with different sizes, masses, and properties.

John Dalton's early table of elements.
John Dalton, a Quaker teacher, suggested that each element is made of characteristic atoms and that the weight ratio of the atoms in the products will be the same as the ratio for the reactants. SSLP/Getty Images

Dalton’s schema had 20 elements. Combining those elements created more complex chemical compounds. When the chemist Dmitri Mendeleev constructed a primitive periodic table in 1869, he listed 63 elements. Today we have cataloged 118.

But if only it were that simple. Since the early 20th century, physicists have known that tinier building blocks lurk within atoms: swirling negatively charged electrons and shrouded nuclei, made from positively charged protons and neutral neutrons. We know now, too, that each element corresponds to atoms with a certain number of protons.

[Related: How does electricity work?]

And it’s still not that simple. By the middle of the century, physicists realized that protons and neutrons are actually combinations of even tinier particles, called quarks. To be precise, protons and neutrons each contain three quarks: a configuration that physicists call a baryon. For that reason, protons, neutrons, and the matter they form—the stuff of our daily lives—are often called “baryonic matter.”

Strange matter in the sky

In our everyday world, baryonic matter typically exists in one of four states: solid, liquid, gas, and plasma. 

Again, matter is not that simple. Under extreme conditions, it can take on a menagerie of more exotic forms. At high enough pressures, materials can become supercritical fluids, simultaneously liquid and gas. At low enough temperatures, multiple atoms coalesce into a Bose-Einstein condensate. These atoms behave as one, acting in all sorts of odd quantum ways.

Such exotic states are not limited to the laboratory. Just look at neutron stars: the undead cores of stars that weren’t quite massive enough to collapse into black holes when they went supernova. Instead, as those cores crumple, intense forces rip apart their atomic nuclei and crush the rubble together. The result is essentially a giant ball of neutrons—protons absorb electrons, becoming neutrons in the process—and it’s very, very dense. A single spoonful of a neutron star would weigh a billion tons.
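That spoonful figure holds up under rough numbers. The sketch below assumes a neutron-star density of about 4 × 10^17 kg/m³ and a 5-milliliter teaspoon; both values are illustrative round figures, not from the article.

```python
# Rough sanity check of the "spoonful of neutron star" claim.
density = 4e17        # assumed neutron-star density, kg per cubic meter
teaspoon = 5e-6       # volume of a teaspoon in cubic meters (5 mL)

mass_kg = density * teaspoon          # about 2e12 kg
mass_tons = mass_kg / 907.185         # convert to US short tons

# Comes out to billions of tons, matching the order of magnitude above.
print(f"{mass_tons:.1e} tons")
```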

This animation depicts a neutron star (RX J0806.4-4123) with a disk of warm dust that produces an infrared signature as detected by NASA’s Hubble Space Telescope. The disk wasn’t directly photographed, but one way to explain the data is by hypothesizing a disk structure that could be 18 billion miles across. NASA, ESA, and N. Tr’Ehnl (Pennsylvania State University)

There are, potentially, hundreds of millions of neutron stars in the Milky Way alone. Deep in their centers, some scientists think, pressures and temperatures are high enough to rip neutrons apart too, breaking them down into the quarks that form them.

Physicists study neutron stars to learn about these objects—and what happened at the beginning of the universe. The matter we see around us did not always exist; it formed in the aftermath of the big bang. Before atoms formed, protons and neutrons swam alone through the universe. Even earlier, before there were protons and neutrons, everything was a superheated quark slurry.

Scientists can recreate that state, in some fashion, in particle accelerators. But it disappears in a flash that lasts a fraction of a second. That’s no comparison to neutron stars, which endure for eons. “You have a lab that basically exists forever,” says Fridolin Weber, a physicist at San Diego State University.

Matter in the grand scheme of the universe

Over the past several decades, astronomers have developed several ways to understand the universe’s basic parameters. They can examine its large-scale structure and identify subtle fluctuations in the density of the matter they can see. They can watch how objects’ gravity bends passing light.

A specific way to measure matter density—the proportion of the universe made up of visible and invisible matter—is to pick apart the cosmic microwave background, the afterglow of the big bang. From 2009 to 2013, the European Space Agency’s Planck observatory probed that afterglow to give scientists the best calculation of the matter density yet: 31 percent.

[Related: Does antimatter fall down or up? We now have a definitive answer.]

The most recent research used a different technique called the mass-richness relation, essentially examining clusters of galaxies, counting how many galaxies exist in each cluster, using that to calculate each group’s mass, and reverse-engineering the matter density. The technique isn’t new, but until now it was raw and unrefined.
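In outline, the mass-richness relation models a cluster’s mass as a power law in its galaxy count. The sketch below uses placeholder values for the pivot mass, pivot richness, and slope; the study’s calibrated numbers would differ.

```python
# Toy mass-richness relation: cluster mass as a power law of "richness,"
# the number of galaxies counted in the cluster. All constants here are
# illustrative assumptions, not the paper's calibration.
M0 = 1.0e14    # pivot mass in solar masses (assumed)
N0 = 30        # pivot richness (assumed)
alpha = 1.0    # power-law slope (assumed)

def cluster_mass(richness):
    """Estimate a cluster's mass from its galaxy count."""
    return M0 * (richness / N0) ** alpha

# Doubling the galaxy count doubles the inferred mass when alpha = 1:
print(cluster_mass(60) / cluster_mass(30))
```

Summing such estimates over many clusters, and comparing against the surveyed volume, is what lets astronomers reverse-engineer the overall matter density.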

“When we did our work, as far as I know, this is the first time that the mass-richness relation has been used to get a result that’s in very good agreement with Planck,” says Gillian Wilson, an astrophysicist at the University of California Riverside, and one of the authors of a paper published in The Astrophysical Journal on September 13. 

Yet remember, it’s not that simple. Only a small fraction—thought to be around 15 percent of matter, or 3 percent of the universe—is visible. The rest, most scientists think, is dark matter. We can detect the ripples that dark matter leaves in gravity. But we can’t observe it directly.

The 494 xenon-filled photomultipliers on the LUX-ZEPLIN dark matter detector can sense solitary photons from deep space. LUX-ZEPLIN Experiment

Consequently, we aren’t certain what dark matter is. Some scientists believe it is baryonic matter, just in a form that we can’t easily see: Perhaps it is black holes that formed in the early universe, for instance. Others believe it consists of particles that must barely interact at all with our familiar matter. Some scientists believe it is a mix of these. And at least some scientists believe that dark matter does not exist at all.

If it does exist, we might see it with a new generation of telescopes, such as eROSITA, the Rubin Observatory, the Nancy Grace Roman Space Telescope, and Euclid, that can scan ever greater swathes of the universe and see a wider variety of galaxies at different times in cosmic history. “These new surveys might change our understanding of the whole universe [and its matter],” says Mohamed El Hashash, an astrophysicist at the University of California Riverside, and another of the authors. “This is what I personally expect.”

The post What is matter? It’s not as basic as you’d think. appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Nature generates more data than the internet … for now https://www.popsci.com/science/human-nature-data-comparison/ Fri, 22 Sep 2023 19:00:00 +0000 https://www.popsci.com/?p=573562
Internet data server farm with green and pink glowing LED lights
A data server farm in Frankfurt, Germany. By some estimates, the internet is growing at a rate of 26 percent annually. Sebastian Gollnow/picture alliance via Getty Images

In the next century, the information transmitted over the internet might eclipse the information shared between Earth's most abundant lifeforms.

The post Nature generates more data than the internet … for now appeared first on Popular Science.


Is Earth primarily a planet of life, a world stewarded by the animals, plants, bacteria, and everything else that lives here? Or, is it a planet dominated by human creations? Certainly, we’ve reshaped our home in many ways—from pumping greenhouse gases into the atmosphere to literally redrawing coastlines. But by one measure, biology wins without a contest.

In an opinion piece published in the journal Life on August 31, astronomers and astrobiologists estimated the amount of information transmitted by a massive class of organisms and compared it with the output of our communication technology. Their results are clear: Earth’s biosphere churns out far more information than the internet has in its 30-year history. “This indicates that, for all the rapid progress achieved by humans, nature is still far more remarkable in terms of its complexity,” says Manasvi Lingam, an astrobiologist at the Florida Institute of Technology and one of the paper’s authors.

[Related: Inside the lab that’s growing mushroom computers]

But that could change in the very near future. Lingam and his colleagues say that, if the internet keeps growing at its current voracious rate, it will eclipse the data that comes out of the biosphere in less than a century. This could help us hone our search for intelligent life on other planets by telling us what type of information we should seek.

To represent information from technology, the authors focused on the amount of data transferred through the internet, which far outweighs any other form of human communication. Each second, the internet carries about 40 terabytes of information. They then compared it to the volume of information flowing through Earth’s biosphere. We might not think of the natural world as a realm of big data, but living things have their own ways of communicating. “To my way of thought, one of the reasons—although not the only one—underpinning the complexity of the biosphere is the massive amount of information flow associated with it,” Lingam says.

Bird calls, whale song, and pheromones are all forms of communication, to be sure. But Lingam and his colleagues focused on the information that individual cells transmit—often in the form of molecules that other cells pick up and respond to, by producing particular proteins, for example. The authors specifically focused on the 100 octillion single-celled prokaryotes that make up the majority of our planet’s biomass.

“That is fairly representative of most life on Earth,” says Andrew Rushby, an astrobiologist at Birkbeck, University of London, who was not an author of the paper. “Just a green slime clinging to the surface of the planet. With a couple of primates running around on it, occasionally.”

Bacteria colony forming red biofilm on black background
This colorized image shows an intricate colony of millions of the single-celled bacterium Pseudomonas aeruginosa that have self-organized into a sticky, mat-like colony called a biofilm, which allows them to cooperate with each other, adapt to changes in their environment, and ensure their survival. Scott Chimileski and Roberto Kolter, Harvard Medical School, Boston

As all of Earth’s prokaryotes signal to each other, according to the authors’ estimate, they generate around a billion times as much data as our technology. But human progress is rapid: According to one estimate, the internet is growing by around 26 percent every year. Under the bold assumption that both rates hold steady for decades to come, the authors calculate that the internet will continue to balloon until it dwarfs the biosphere in around 90 years, sometime in the early 22nd century.
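The 90-year figure follows directly from compound growth. Taking the article’s two numbers at face value, that the biosphere currently outproduces the internet by a factor of about a billion and that the internet grows 26 percent per year, the crossover time works out like this:

```python
import math

# Years until the internet's information output catches the biosphere's,
# assuming steady exponential growth.
ratio = 1e9      # biosphere produces ~1 billion times more data today
growth = 1.26    # internet grows ~26 percent per year

years_to_parity = math.log(ratio) / math.log(growth)
print(round(years_to_parity))  # about 90 years
```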

What, then, does a world where we create more information than nature actually look like? It’s hard to predict for certain. The 2110s version of Earth may be as strange to us as the present Earth would seem to a person from the 1930s. That said, picture alien astronomers in another star system carefully monitoring our planet. Rather than glimpsing a planet teeming with natural life, their first impressions of Earth might be a torrent of digital data.

Now, picture the reverse. For decades, scientists and military experts have sought out signatures of extraterrestrials in whatever form they may take. Astronomers have traditionally focused on the energy that a civilization of intelligent life might use—but earlier this year, one group crunched the numbers to determine if aliens in a nearby star system could pick up the leakage from mobile phone towers. (The answer is probably not, at least with LTE networks and technology like today’s radio telescopes.)

MeerKAT radio telescope dish under starry sky
The MeerKAT radio telescope array in South Africa scans for, among other things, extraterrestrial communication signals from distant stars. MeerKAT

On the flip side, we don’t totally have the observational capabilities to home in on extraterrestrial life yet. “I don’t think there’s any way that we could detect the kind of predictions and findings that [Lingam and his coauthors] have quantified here,” Rushby says. “How can we remotely determine this kind of information capacity, or this information transfer rate? We’re probably not at the stage where we could do that.”

But Rushby thinks the study is an interesting next step in a trend. Astrobiologists—certainly those searching for extraterrestrial life—are increasingly thinking about the types and volume of information that different forms of life carry. “There does seem to be this information ‘revolution,’” he says, “where we’re thinking about life in a slightly different way.” In the end, we might learn that there’s more harmony between the communication networks nature has built and computers.

The Tonga volcanic eruption reshaped the seafloor in mind-boggling ways https://www.popsci.com/environment/tonga-eruption-seafloor-fiber-cables/ Thu, 07 Sep 2023 18:00:00 +0000 https://www.popsci.com/?p=568621
An eruption emerges from the ocean in a cloud of ash and a lightning strike.
The Hunga Tonga volcano eruption triggered lightning and a tsunami. Tonga Geological Services via NOAA

Immense flows traveled up to 60 miles away, damaging the region's underwater infrastructure.

The post The Tonga volcanic eruption reshaped the seafloor in mind-boggling ways appeared first on Popular Science.


On January 15, 2022, the drowned caldera under the South Pacific isles of Hunga Tonga and Hunga Haʻapai in Tonga blew up. The volcanic eruption shot gas and ash 36 miles up into Earth’s mesosphere, higher than the plume from any other volcano on record. The most powerful explosion observed on Earth in modern history unleashed a tsunami that reached Peru and a sonic boom heard as far as Alaska.

New research shows that when the huge volume of volcanic ash, dust, and glass fell back into the water, it reshaped the seafloor in dramatic fashion. For the first time, scientists have reconstructed what might have happened beneath the Pacific’s violently strewn waves. According to a paper published in Science today, all that material flowed underwater for dozens of miles.

“These processes have never been observed before,” says study author Isobel Yeo, a marine volcanologist at the UK’s National Oceanography Centre.

About 45 miles from the volcano, the eruption cut off a seafloor fiber-optic cable. For Tongans and rescuers, the broken cable was a major inconvenience that severely disrupted the islands’ internet. For scientists, the abrupt severance of internet traffic provided a timestamp of when something touched the cable: around an hour and a half after the eruption.

The cut also alerted scientists to the fact that the eruption had disrupted the seafloor, which isn’t easy to spot. “We can’t see it from satellites,” says Yeo. “We actually have to go there and do a survey.” So in the months after the eruption, Yeo and her fellow researchers set out to fish clues from the surrounding waters and piece them back together.

A Tongan charter boat owner named Branko Sugar had caught the initial eruption with a mobile phone camera, giving an exact time when volcanic ejecta began to fall into the water. Several months later, the boat RV Tangaroa sailed from New Zealand to survey the seafloor and collect volcanic flow samples. Unlike in much of the ocean, the seafloor around Tonga had already been mapped, allowing scientists to pinpoint changes to the topography.

[Related: The centuries-long quest to map the seafloor’s hidden secrets]

The scene researchers reconstructed, had it unfolded above ground, might fit neatly into a Roland Emmerich disaster film. The volcano moved as much matter in a few hours as the world’s rivers deliver into the oceans in a whole year. These truly immense flows traveled more than 60 miles from their origin, carving out gullies as deep as skyscrapers are tall.

When the volcano blew, it spewed out immense quantities of rock, ash, glass, and gas that fell back to earth. This is bog-standard for such eruptions, and it typically produces the fast-moving pyroclastic flows that menace anything in their path. But over Hunga Tonga–Hunga Haʻapai, that falling mass had nowhere to go but out to sea.

Satellite imagery of the January 2022 eruption. NASA Worldview, NOAA/NESDIS/STAR

“It’s that Goldilocks spot of dropping huge amounts of really dense material straight down into the ocean, onto a really steep slope, eroding extra material,” says Michael Clare, a marine geologist at the National Oceanographic Centre and another author. “It bogs up, it becomes more dense, and it just really goes.”

Scientists estimated the material fanned out from Hunga Tonga–Hunga Haʻapai at 75 miles per hour—as fast as, or faster than, the speed limit of most U.S. interstate highways. If correct, that’s 50 percent faster than any other underwater flow recorded on the planet. That rushing earth gushed back up underwater slopes as tall as mountains.

“It’s like seeing a snow avalanche, thinking you’re safe on the mountain next to it, and this thing just comes straight up against you,” says Clare.

These underwater flows, according to the researchers, had never been observed before. But understanding volcanic impacts on the seafloor is about more than scientific curiosity. In the last two centuries, we’ve laid vital infrastructure below the water: first for telegraph cables, then telephone lines, and now optical fibers that carry the internet.

Trying to prepare a single cable for an eruption of this scale is like trying to prepare for being struck by a train—it can’t really be done. Instead, a surer way to protect communications is to lay more cables, ensuring that one disaster won’t break all connectivity.

[Related: Mixing volcanic ash with meteorites may have jump-started life on Earth]

In many parts of the globe, that’s already the case. Fishing accidents break cables all the time, without much lasting effect. If, for instance, the world experienced a repeat of the 1929 earthquake-induced landslide that cut off cables off Newfoundland, we probably wouldn’t notice too much: There are plenty of other routes for internet traffic to run between Europe and North America.

As a global map of seafloor cables shows, though, that isn’t true everywhere. In Tonga in 2022, a single severed cable all but entirely cut the archipelago off from the internet. Many other islands, especially in the developing world, are similarly vulnerable.

And those cables are of great value to geologists, too. “Without having the cables, we’d probably still be in the dark and wouldn’t know these sorts of events happen on the scale that they do,” says Clare.

How the world’s biggest particle accelerator is racing to cook up plasma from after the big bang https://www.popsci.com/science/large-hadron-collider-quark-gluon-plasma/ Thu, 31 Aug 2023 10:00:00 +0000 https://www.popsci.com/?p=566750
collage of cern images
Collage by Russ Smith; photos from left: Maximillien Brice / CERN; CERN; X-ray: NASA / CXC / University of Amsterdam / N.Rea et al; Optical: DSS

For 30 years, physicists around the world have been trying to reconstruct how life-giving particles formed in the very early universe. ALICE is their mightiest effort yet.

The post How the world’s biggest particle accelerator is racing to cook up plasma from after the big bang appeared first on Popular Science.


NORMALLY, creating a universe isn’t the job of the Large Hadron Collider (LHC). Most of the back-breaking science—singling out and tracking Higgs bosons, for example—from the world’s largest particle accelerator happens when it launches humble protons at nearly the speed of light.

But for around a month near the end of each year, the LHC switches its ammunition from protons to bullets that are about 208 times heavier: lead ions.

When the LHC crashes those ions into each other, scientists can—if they have worked everything out properly—glimpse a fleeting droplet of a universe like the one that ceased to exist a few millionths of a second after the big bang.

This is the story of quark-gluon plasma. Take an atom, any atom. Peel away its whirling electron clouds to reveal its core, the atomic nucleus. Then, finely dice the nucleus into its base components, protons and neutrons.

When physicists first split an atomic nucleus in the early 20th century, this was as far as they got. Protons, neutrons, and electrons formed the entire universe’s mass—well, those, plus dashes of short-lived electrically charged particles like muons. But calculations, primitive particle accelerators, and cosmic rays striking Earth’s atmosphere began to reveal an additional menagerie of esoteric particles: kaons, pions, hyperons, and others that sound as if they’d give aliens psychic powers.

It seemed rather inelegant of the universe to present so many basic ingredients. Physicists soon figured out that some of those particles weren’t elementary at all, but combinations of even tinier particles, which they named with a word partly inspired by James Joyce’s Finnegans Wake: quarks.

Quarks come in six different “flavors,” but the vast majority of the observable universe consists of just two: up quarks and down quarks. A proton consists of two up quarks and one down quark; a neutron, two down and one up. (The other four, in ascending order of heaviness and elusiveness: strange quarks, charm quarks, beauty quarks, and the top quark.)
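Those quark assignments are easy to verify against the particles’ electric charges: up quarks carry +2/3 of the elementary charge and down quarks −1/3. A minimal bookkeeping check:

```python
from fractions import Fraction

# Electric charge of each quark flavor, in units of the elementary charge.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def total_charge(quarks):
    """Sum the charges of a baryon's three constituent quarks."""
    return sum(CHARGE[q] for q in quarks)

proton = ["up", "up", "down"]     # sums to +1, the proton's charge
neutron = ["down", "down", "up"]  # sums to 0: electrically neutral

print(total_charge(proton), total_charge(neutron))  # 1 0
```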

CERN particle accelerator
The ALICE experiment measures heavy-ion collisions (and their aftermath) with the world’s longest particle accelerator, hosted at CERN. Wladyslaw Henryk Trzaska / CERN

At this point, the list of ingredients ends. You can’t ordinarily chop a proton or neutron into quarks in our world; in most cases, quarks can’t exist on their own. But by the 1970s, physicists had come up with a workaround: heating things up. At a point that scientists call the Hagedorn temperature, those subatomic particles are reduced to a high-energy soup of quarks and the even tinier particles that glue them together: gluons. Scientists dubbed that soup quark-gluon plasma (QGP).

It’s a tantalizing recipe because, again, quarks and gluons can’t normally exist on their own, and reconstructing them from the larger particles they build is challenging. “If I give you water, it’s very difficult to tell the properties of [hydrogen and oxygen atoms],” says Bedangadas Mohanty, a physicist at India’s National Institute of Science Education and Research and at CERN. “Similarly, I can give you protons, neutrons, pions…but if you really want to study properties of quarks and gluons, you need them in a box, free.”

This isn’t a recipe you can test in a home oven. In units of the everyday world, the Hagedorn temperature is about 3 trillion degrees Fahrenheit—100,000 times hotter than the center of the sun. The best appliance for the job is a particle accelerator. 
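A quick unit conversion backs up that comparison. The sketch below assumes a solar-core temperature of about 15 million kelvin, a commonly cited round figure rather than a value from the article.

```python
# Sanity-checking the Hagedorn-scale temperature comparison.
fahrenheit = 3e12                              # ~3 trillion degrees F
kelvin = (fahrenheit - 32) * 5 / 9 + 273.15    # about 1.7e12 K

sun_core = 1.5e7   # assumed solar-core temperature, kelvin

# The ratio lands near 100,000, matching the text.
print(f"{kelvin / sun_core:.1e}")
```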

But not just any particle accelerator will do. You need to boost your particles with sufficient energy. And when scientists set out to create QGP, LHC was no more than a dream of a distant future. Instead, CERN had an older collider only about a quarter of LHC’s circumference: the Super Proton Synchrotron (SPS).

As its name suggests, SPS was designed to crash protons into fixed targets. But by the end of the 1980s, scientists had decided to try swapping out the protons for heavy ions—lead nuclei—and see what they could manage. In experiment after experiment across the 1990s, CERN researchers thought they saw something happening to the nuclei. 

“Somewhat to our surprise, already at these relatively low energies, it looked like we were creating quark-gluon plasma,” says Marco van Leeuwen, a physicist at the Dutch National Institute for Subatomic Physics and at CERN. In 2000, his team claimed they had “compelling evidence” of the achievement.

For the brief flickers for which the quantum matter exists in the world, physicists can watch the plasma materialize in what they call “little bangs.”

Across the Atlantic, CERN’s counterparts at Long Island’s Brookhaven National Laboratory had been trying their hands with equal parts optimism and uncertainty. The uncertainty faded around the turn of the millennium, when Brookhaven switched on the Relativistic Heavy Ion Collider (RHIC), a device designed specifically to create QGP.

“RHIC turned on, and we were deeply within quark-gluon plasma,” says James Dunlop, a physicist at Brookhaven National Laboratory.

So there are two major QGP factories in the world today: CERN and Brookhaven. With this pair of colliders, for the brief flickers for which the quantum matter exists in the world, physicists can watch the plasma materialize in what they call “little bangs.”

helmeted person stands inside inner workings at CERN
At ALICE’s heart lies a 39-foot-long solenoid magnet, coiled around a thermal shield and a number of fast-trigger detectors. Julien Marius Ordan / Maximillien Brice / CERN

Going back and forth in time

The closer in time to the big bang that you travel, the less the universe resembles your familiar one. As of this writing, the James Webb Space Telescope has possibly observed galaxies from around 320 million years after the big bang. Go farther back, and you’ll reach a very literal Dark Ages—a time before the first stars, when there was little to illuminate the universe except the cosmic background.

In this shadowy age, astronomy steadily gives way to subatomic physics. Go even farther back, to just 380,000 years after the big bang, and electrons are just joining their nuclei to form atoms. Keep going back; the universe is ever smaller, denser, hotter. Seconds after the big bang, protons and neutrons haven’t joined together to form nuclei more complex than hydrogen. 

Go back even farther—around a millionth of a second after the big bang—and the universe is hot enough that quarks and gluons stay split apart. It’s a miniature version of this universe that physicists seek to create.

Physicists puzzle over that universe in office blocks like the exquisitely modernist one overlooking CERN’s visitors center. Look out this building’s window, and you might see the terminus of a Geneva tram line. Cornavin, the city’s main railway station, is only 20 minutes away.

CERN physicists Urs Wiedemann and Federico Antinori meet me in their office. Wiedemann is a theoretical physicist by background; Antinori is an experimentalist, presiding over heavy-ion collision runs. Studying QGP requires the talents of both.

“The existence of quark-gluon plasma we have established,” says Antinori. “What is most interesting is understanding what kind of animal it is.”

For instance, their colleagues who first created QGP expected to find a sort of gas. Instead, QGP behaves like a liquid. QGP, in fact, behaves like what’s called a perfect liquid, one with almost no viscosity. (Yes, the early universe may have been, very briefly, a sort of superheated ocean. Many creation myths might find a distant mirror inside a particle accelerator.)

Both Antinori and Wiedemann are especially interested in watching the liquid come into being, watching atomic nuclei rend themselves apart. Some scientists call the process a “phase transition,” as if creating QGP is like melting snow to create liquid water. But turning protons and neutrons into QGP is far more than melting ice; it’s creating a transition into a very different world with fundamentally different laws of physics. “The symmetries of the world we live in change,” Wiedemann says.

This transition happened in reverse in the very early universe as it cooled down past the Hagedorn temperature. The quarks and gluons clumped together, forming the protons and neutrons that, in turn, form the atoms we know and love today.

But physicists struggle to understand this process with mathematics. They come closer by examining QGP collisions in the lab.

scintillator array at CERN
Central detector components, like the VZERO scintillator array, were built to handle the “ultra-relativistic energies” of the LHC. Julien Marius Ordan / CERN

QGP is also a laboratory for the strong nuclear force. One of the four fundamental forces of the universe—alongside gravity, electromagnetism, and the weak nuclear force that governs certain radioactive processes—the strong nuclear force is what holds particles together at the hearts of atoms. The gluons in QGP’s name are the strong nuclear force’s tools. Without them, charged particles would electromagnetically repel each other and atoms would rip themselves apart.

Yet while we know quite a lot about gravity and electromagnetism, the inner workings of the strong nuclear force remain a secret. Moreover, scientists want to learn more about the role the strong nuclear force plays.

“You can say, ‘I understand how an electron interacts with a photon,’” says Wiedemann, “but that doesn’t mean that you understand how a laser functions. That doesn’t mean that you know why this table doesn’t break down.”

Again, to understand such things, they’ve got to crash heavy ions together.

With the likes of SPS, scientists could look at droplets of QGP and confirm they existed. But if they wanted to actually peer inside and see their properties at work—to examine them—they’d need something more powerful.

“It was clear,” says Antinori, “that one had to go to higher energies than were available at the SPS.”

The universe-faking machine

Crossing from CERN’s campus into France, it’s impossible to tell that this green and pleasant vale—under the grace of the Jura Mountains—sits atop a 17-mile-long ring of superconducting magnets and steel. Scattered around that ring are different experiments and detectors. The search for QGP is headquartered in one such detector.

The road there passes through the glistening hamlet of Saint-Genis-Pouilly, where many of CERN’s staff live. On the pastoral outskirts sits a cluster of industrial cuboids and cooling towers.

Apart from a mural on the corrugated metal facade overlooking a parking lot, the complex doesn’t really advertise that this is where scientists look for QGP—that one of these warehouselike buildings is the outer cocoon of a large ion collider experiment called, well, A Large Ion Collider Experiment (ALICE).

inner workings at CERN
To date, more than 2,000 physicists from 40 different countries have been involved with the decades-long experiment. Jan Hosan / CERN / Fotogloria Agency

CERN physicist Nima Zardoshti greets me beneath that mural: ALICE’s detector, the QGP-watcher, depicted in a pastel-colored mural. Zardoshti leads me inside, past a control room that wouldn’t look out of place in a moon-landing documentary, around a corner covered in sheet metal, and out to a precipice. A concrete shield caps it, several stories below. “This concrete is what stops radiation,” he explains.

Beneath it, occluded from sight, sits the genuine article, a machine the size of a small building that weighs nearly the same as the Eiffel Tower. The detector sits more than 180 feet beneath the ground, accessible by a mine lift. No one is allowed to go down there while the LHC is running, save for CERN’s fire department, which needs to move in quickly if any radioactive or hazardous materials combust.

The heavy ions that collide inside that machine don’t originate in this building. Several miles away sits the old SPS, transformed into the LHC’s first steppingstone. The SPS accelerates bunches of lead nuclei to very near the speed of light. Once they’re ready, the smaller accelerator unloads them into the larger ring.

But unlike the SPS, the LHC doesn’t do fixed-target experiments. Instead, the LHC’s magnets squeeze lead beams, racing in opposite directions, into violently crashing head-on inside ALICE.

Lead ions make fine ingredients. A lead-208 ion has 82 protons and 126 neutrons, and both of those are “magic numbers” that help make the nuclei as spherical as nuclei can become. Spherical nuclei create better collisions. (Across the Atlantic, Brookhaven’s RHIC uses gold ions.)

ALICE’s detector isn’t a camera; QGP isn’t like a ball of light that you can “see.” When these lead ions collide at high energies, they erupt into a flash of QGP, which dissipates into a perfect storm of smaller particles. Instead of watching for light, the detector watches the particles as they cascade away. 

A proton-proton collision might produce a few dozen particles—maybe a hundred, if physicists are lucky. A heavy-ion collision produces several thousand.

When heavy ions collide, they create a flash of QGP and spiky jets of more “normal” particles: often combinations of heavy quarks, like charm and beauty quarks. The jets pierce through the QGP before they reach the detector. Physicists can reconstruct what the QGP looked like by examining those jets and how they changed as they passed through.

First those particles crash through silicon chips not unlike the pixels in your smartphone. Then the particles pass through a time projection chamber: a cylinder filled with gas. Still streaking at high energy, they shoot through the gas atoms like meteors through the upper atmosphere. They knock electrons free of their atoms, leaving brilliant trails that the chamber can pick up.

inner workings at CERN
After completing major upgrades in 2021, the ALICE team is ready for Run 3, where they aim to increase the number of particle collisions they sample by 50 times. Jan Hosan / CERN / Fotogloria Agency

For fans of particle physics equipment, the time projection chamber makes ALICE special. “It’s super useful, but the downside of it, and why other experiments don’t use it, is it’s very slow,” says Zardoshti. “The process takes, I think, roughly something on the order of a millionth of a second.”

ALICE creates about 3.5 terabytes of data—around the equivalent of three full-length feature films—each second. Physicists process that data to reconstruct the QGP that produced the particles. Much of that data is processed right here; the rest is handled by a vast global network of computers.

From particle accelerators to neutron stars

Particle physics is a field that always has one foot extended decades into the future. While ALICE kicked into operation in 2010, physicists had already begun sketching it out in the early 1990s, years before scientists had even detected QGP at all. 

One of their current big questions is whether they can make QGP by smashing ions smaller than lead or gold. They’ve already succeeded with xenon; later this year, they want to try with an even lighter element: oxygen. “We want to see: Where is the transition where we can make this material?” says Zardoshti. “Is oxygen already too light?” They expect the life-giving element to work. But in particle physics, there’s no knowing for certain until after the fact.

In the longer term, ALICE’s stewards have big plans. After 2025, the LHC will shut off for several years for maintenance and upgrades, which will boost the collider’s energy. Alongside those upgrades will come a wholesale renovation of ALICE’s detector, scheduled for installation as early as 2033. All of this is planned out precisely many years in advance.

CERN’s stewards are daring to draft a device for an even more distant future, a Future Circular Collider that would be more than three times the LHC’s size and wouldn’t be online till the 2050s. No one is sure yet if it will pan out; if it does, it will require securing an investment of more than 20 billion euros.

ALICE project's inner workings at CERN
ALICE’s inner tracking system holds the record for the biggest pixel system ever built. Felix Reidt / Jochen Klein / CERN

Higher energies, larger colliders, and more sensitive detectors all make for stronger tools in QGP-watchers’ arsenals. The particles they’re seeking are tiny and incredibly short-lived, and they need those tools to see more of them.

But while particle physicists have spent billions of euros and decades of effort bringing fragments of the very early universe back into reality, some astrophysicists think the universe has been doing the same work all along.

Instead of a particle accelerator, the universe can avail itself of a far more powerful appliance: a neutron star. 

When an immense star, far more massive than our sun, ends its life in a spectacular supernova, the shard of a core that remains begins to cave in. The core can’t be too large, or else it will collapse into a black hole. But if the mass is just right, the core will reach pressures and temperatures that might just tear atomic nuclei apart into quarks. It’s like the ALICE experiment at scale in a more natural setting—the unruly universe, where it all began.

Read more PopSci+ stories.

The post How the world’s biggest particle accelerator is racing to cook up plasma from after the big bang appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
What’s the most sustainable way to mine the largest known lithium deposit in the world? https://www.popsci.com/environment/lithium-mining-mcdermitt-caldera/ Wed, 30 Aug 2023 20:30:00 +0000 https://www.popsci.com/?p=567117
Lithium samples from the proposed Thacker Pass mining site in the McDermitt Caldera lithium deposit
The clay mixture from which lithium would be extracted if a mine were to be permitted in Nevada's Thacker Pass. Carolyn Cole / Los Angeles Times via Getty Images

The McDermitt Caldera in Nevada and Oregon could hold up to 100 megatons of lithium. Now companies are proposing a new method for mining it.

The post What’s the most sustainable way to mine the largest known lithium deposit in the world? appeared first on Popular Science.

]]>

At first glance, the McDermitt Caldera might feel like the edge of the Earth. This oblong maze of rocky vales straddles the arid Nevada-Oregon borderlands, in one of the least densely populated parts of North America. 

But the future of the modern world depends on the future of places like the McDermitt Caldera, which has the potential to be the largest known source of lithium on the planet. Where today’s world runs on hydrocarbons, tomorrow’s may very well rely on the element for an ever-expanding array of lithium-ion batteries. The flaky silver metal is a necessity for the batteries we already use, and for the far greater numbers we’ll likely need to power mobile phones, electric cars, and large electric grids.

Which is why it matters a ton where we get our lithium from. A new study, published in the journal Science Advances today, suggests that the McDermitt Caldera contains even more lithium than previously thought and outlines where its untapped stores might best be sought. But these results are unlikely to ease criticisms about the environmental costs of mining the substance.

[Related: Why solid state batteries are the next frontier for EV makers]

By 2030, the world may require more than a megaton of lithium every year. If previous geological surveys are correct, then the McDermitt Caldera—the remnants of a 16-million-year-old volcanic supereruption—could contain as many as 100 megatons of the metal.
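Those two round numbers imply a striking back-of-the-envelope ratio. A minimal sketch, treating the article’s rough figures as exact for illustration (real reserve estimates and demand forecasts carry wide uncertainties):

```python
# Back-of-the-envelope: how long could the McDermitt Caldera's estimated
# lithium cover projected global demand? Both numbers are rough upper
# bounds quoted in the text, not precise reserves or forecasts.
deposit_megatons = 100.0        # upper estimate for the caldera's lithium
demand_megatons_per_year = 1.0  # projected annual world demand by 2030

years_of_supply = deposit_megatons / demand_megatons_per_year
print(f"~{years_of_supply:.0f} years of projected demand")
```

In practice, only a fraction of any deposit is economically recoverable, so the realistic figure would be far lower.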

“It’s a huge, massive feature that has a lot of lithium in it,” says Tom Benson, one of the authors of the new paper and a volcanologist at Columbia University and the Lithium Americas Corporation.

One high-profile project, partly run by Lithium Americas Corporation, proposes a 17,933-acre mine in the Thacker Pass, on the Nevada side of the border at the caldera’s southern edge. The project is contentious: Thacker Pass (or Peehee Mu’huh in Northern Paiute) sits on land that many local Indigenous groups consider sacred. Native American activists are continuing to fight a plan to expand the mine-exploration area in court. 

But not all of the lithium under McDermitt’s rocky sands ranks the same. Most of the desired metal there comes in the form of a mineral called smectite; under certain conditions, smectite can transform into a different mineral called illite that can sometimes also be processed for lithium. Benson and his colleagues studied samples of both smectite and illite drilled from the ground throughout the caldera. “There’s lithium everywhere you drill,” he says. 

Previously, geologists assumed that both smectite and illite were distributed widely across the caldera, but the authors found illite in high concentrations only in the caldera’s south, around Thacker Pass. “It’s constrained to this area,” explains Benson.

McDermitt Caldera map with colored dots for lithium mining assays
Benson et al. (2023)

That’s important. Benson and colleagues think that the caldera’s illite formed when lithium-rich fluid, heated by the underlying volcano, washed over smectite. In the process, the mineral absorbed much of the lithium. Consequently, they project that the illite in Thacker Pass holds more than twice as much lithium as the neighboring smectite.

“That’s really helpful to change exploration strategy,” Benson says. “Now we know we have to stick in the Thacker Pass area if we want to find and mine that illite.”

Some of Thacker Pass’s proponents believe that would result in fewer costs and less damage from mining. Anyone who deals with lithium is, on some level, aware of the environmental costs. The recovery process produces pollutants like heavy metals, sucks up water, and emits tons of greenhouse gases. By one estimate, fitting a new electric vehicle with its lithium battery can result in upwards of 70 percent more carbon emissions than building an equivalent petrol-powered car (although the average electric car will more than make up the difference with day-to-day use).

That said, not all extraction is the same. There are two main ways to obtain lithium: brine recovery and hard-rock mining. Some of the lithium we use comes from super salty pools. Over millions of years, rainwater percolates through lithium-containing rocks, dissolves the metal, and carries it to underground aquifers. Today, humans pump brine to the surface, evaporate the water, add a slurry of hydrated lime to remove unwanted metals, and extract the lithium that’s left behind. Much of the world’s brine lithium today comes from the “lithium triangle” of Argentina, Bolivia, and Chile—one of the world’s driest regions.

Alternatively, we can directly mine lithium ores from the earth and process them as we would with most other metals. Separating lithium from ore typically involves crushing the rock and heating it up to temperatures of more than 1,000 degrees Fahrenheit. Getting to those high temperatures often requires fossil fuels in the first place. This method is less laborious and costly than brine extraction, but also far more carbon-intensive.

[Related: Inside the high-powered process that could recycle rare earth metals]

McDermitt Caldera’s smectite and illite belong to what some lithium watchers see as a new third category of extraction: volcanic sedimentary lithium. When volcanic minerals containing lithium flow into nearby valleys and react with the loose dirt, they leave behind lithium-rich sediments that require little energy and processing to separate.

With the new alternative, mining proponents claim they can drastically reduce the environmental impact of their current and future activities at Thacker Pass. And the research by Benson’s team seems to suggest that, if lithium companies probe in the right places, they might get rewarded more for their efforts.

But this is likely little comfort to lithium-mining opponents in Oregon and Nevada, whose criticisms will be considered as the Bureau of Land Management maps out drilling in the deposit. Their case parallels those of Indigenous Chileans who oppose lithium extraction near their homes in the Atacama and locals fighting a lithium mining project near Portugal’s northern border. Together, they’re fighting a world that’s growing hungrier for lithium, along with new ways and places to exploit it.

The post What’s the most sustainable way to mine the largest known lithium deposit in the world? appeared first on Popular Science.


]]>
Mini jets of energy could power the sun’s violent winds https://www.popsci.com/science/tiny-jets-solar-wind/ Thu, 24 Aug 2023 18:00:00 +0000 https://www.popsci.com/?p=565319
An illustration of the ESA Solar Orbiter craft monitoring our giant orange sun.
ESA's Solar Orbiter investigates the sun from within Mercury's orbit in this illustration. ESA/ATG medialab

These flares can drag charged particles through holes in the solar atmosphere and out into space.

The post Mini jets of energy could power the sun’s violent winds appeared first on Popular Science.

]]>

On the one hand, the sun provides life-giving heat and light. On the other, it spews an incessant stream of potentially harmful charged particles. These particles form the solar wind, and it is no less formidable than our star’s other products. Without Earth’s magnetic field to shield our planet’s surface, we would constantly face a bombardment of ionizing radiation.

But astronomers have never been completely certain where those particles come from or how they travel into interplanetary space. Now, they’ve found a promising clue. Using ESA’s Solar Orbiter spacecraft, researchers have found miniature jets that seem to channel particles up through holes in the sun’s corona and away from the star. These jets might combine to blow the solar wind, a group of astronomers suggests in a paper published in the journal Science on Thursday.

The corona, a star’s outermost layer, is a sheath of undulating plasma. Though it’s hundreds of times hotter than the layers below, it is almost always hidden in visible light. To the naked eye, this outer layer appears only during a total solar eclipse, when the moon blots out the rest of the sun.

But the corona is not one even layer. Imaging the sun in ultraviolet reveals shifting dark swatches: regions where the corona’s plasma is cooler and less dense. Astronomers call these areas coronal holes.

[Related: Why is space cold if the sun is hot?]

Coronal holes also seem to resculpt the sun’s powerful, endlessly changing magnetic field. In these parts, lines that guide the sun’s magnetic field seem to blow outward. “Usually, magnetic fields loop back to the solar surface, but in these open field regions the lines of force stretch into interplanetary space,” says Lakshmi Pradeep Chitta, an astronomer at the Max Planck Institute for Solar System Research in Göttingen, Germany, and one of the paper’s authors.

It’s also within coronal holes that the sun’s magnetic field lines can knot about themselves. When that happens, the magnetic field realigns and reconnects, creating fierce electrical surges. Those energetic outbursts siphon matter from deeper layers of the sun and toss it away in jets that can stretch more than a thousand miles across. Astronomers had long suspected that these jets fuel the solar wind, but didn’t know whether they could supply enough particles to sustain the solar wind we observe.

Sun-watching spacecraft like Yohkoh and SOHO have been able to see jets since the 1990s. But astronomers say that none have the sightseeing abilities of Solar Orbiter, which launched in 2020. At its closest approach, Solar Orbiter dips closer to the sun than Mercury.

“Solar Orbiter has the advantage of being located close to the sun, so it can detect smaller and fainter jets,” says Yi-Ming Wang, an astronomer at the US Naval Research Laboratory, who was not an author of the paper.

In March 2022, Chitta and his colleagues focused one of Solar Orbiter’s ultraviolet cameras upon a coronal hole situated near the sun’s south pole. When they did, they glimpsed a type of miniature jet never before seen by humans. Each of these tiny jets carried around one-trillionth the energy of a full-size version. The authors dubbed them “picoflare jets,” borrowing the SI prefix for one-trillionth.

These adorable-sounding surges don’t stick around. Each fleeting picoflare jet lasts about a minute. But this is still the sun—a place of immense power. A single solar picojet might create enough energy to power a small city for a year.

[Related: How a sun shade tied to an asteroid could cool Earth]

The authors scoured only one small part of the sun, but they saw picoflare jets everywhere they looked. It’s likely the jets cover much of the sun’s surface. Myriad miniature jets, then, might combine into a large-scale process that transfers charged particles away from the star and out toward the planets.

“We suggest that these tiny picoflare jets could actually be a major source of mass and energy to sustain the solar wind,” Chitta says.

In years past, many astronomers thought of the solar wind as a steady flow, streaming away from the sun at a constant rate. But, if surging picoflare jets drive the solar wind, then the phenomenon might actually be ragged, uneven, and constantly in flux. Picoflare jets may not be the only source of the solar wind, but if Chitta and colleagues are correct, they’re at least a significant contributor.

Fortunately, scientists will soon have plenty of additional tools to peer into the sun. Alongside Solar Orbiter and future sun-watching spacecraft, such as the Japanese-led SOLAR-C, they’ll have more powerful solar magnetographs: instruments in places like Southern California and Maui that directly measure the sun’s magnetic field, able to track the magnetic fluctuations powering the sun’s jets from right here on Earth.

The post Mini jets of energy could power the sun’s violent winds appeared first on Popular Science.


]]>
A fleeting subatomic particle may be exposing flaws in a major physics theory https://www.popsci.com/science/muon-measurement-fermilab/ Thu, 17 Aug 2023 18:00:00 +0000 https://www.popsci.com/?p=563623
The ring-shaped machinery of the Fermi National Accelerator Laboratory.
The Department of Energy’s Fermi National Accelerator Laboratory near Chicago. Ryan Postel/Fermilab

A refined measurement for subatomic muons has major implications—if fundamental theories are accurate.

The post A fleeting subatomic particle may be exposing flaws in a major physics theory appeared first on Popular Science.

]]>

One of the biggest questions in particle physics is whether the field’s reigning theory paints an incomplete picture of the universe. At Fermilab, a US Department of Energy facility in suburban Chicago, particle physicists are trying to resolve this identity crisis. There, members of the Muon g–2 (pronounced “g minus 2”) Collaboration have been carefully measuring a peculiar particle known as a muon. Last week, they released their updated results: the muon—a heavier, more ephemeral counterpart of the electron—may be under the influence of something unknown.

If accurate, it’s a sign that the theories forming the foundation of modern particle physics don’t tell the whole story. Or is it? While the Collaboration’s scientists have been studying muons, theoretical researchers have been re-evaluating their own numbers, leaving it in doubt whether any discrepancy exists at all.

“Either way, there’s something that’s not understood, and it needs to be resolved,” says Ian Bailey, a particle physicist at Lancaster University in the UK and a member of the Muon g–2 collaboration.

The tried and tested basic law of modern particle physics—what scientists call the Standard Model—enshrines the muon as one of our universe’s fundamental building blocks. Muons, like electrons, are subatomic particles that carry negative electrical charge; unlike electrons, muons decay after a few millionths of a second. Still, scientists readily encounter muons in the wild. Earth’s upper atmosphere is laced with muon rain, spawned by high-energy cosmic rays striking our planet. 

But if the muon doesn’t always look like physicists expect it to look, that is a sign that the Standard Model is incomplete, and some hitherto unknown physics is at play. “The muon, it turns out, is predicted to have more sensitivity to the existence of new physics than…the electron,” says Bailey.

[Related: The green revolution is coming for power-hungry particle accelerators]

Also like electrons, muons spin like whirling tops, which creates a magnetic field. The titular g relates a muon’s spin to the strength of that magnetic field. In isolation, a muon’s g has a value of exactly 2. In reality, muons don’t exist in isolation. Even in a vacuum, muons are hounded by throngs of short-lived “virtual particles” that pop in and out of quantum existence, influencing a muon’s spin.

The Standard Model should account for these particles, too. But in the 2000s, scientists at Brookhaven National Laboratory measured g and found that it was subtly but significantly greater than the Standard Model’s prediction. Perhaps the Brookhaven scientists had gotten it wrong—or, perhaps, the muon was at the mercy of particles or forces the Standard Model doesn’t consider.

Breaking the Standard Model would be one of the biggest moments in particle physics history, and particle physicists don’t take such disruption lightly. The Brookhaven scientists moved their experiment to Fermilab in Illinois, where they could take advantage of a more powerful particle accelerator to mass-produce muons. In 2018, the Muon g–2 experiment began.

Three years later, the experimental collaboration released their first results, suggesting that Brookhaven hadn’t made a mistake or seen an illusion. The results released last week add data from two additional runs in 2018 and 2019, corroborating what was published in 2021 and improving its precision. Their observed value for g—around 2.0023—diverges from what theory would predict after the eighth decimal place.
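The collaboration’s name hints at the quantity physicists actually track: the anomaly a = (g − 2)/2, which isolates the tiny excess over the bare value of 2 contributed by virtual particles. A rough sketch of that arithmetic, using a few more decimal places of the measured g than the rounded 2.0023 quoted above (the extra digits are for illustration, not the collaboration’s official figure):

```python
# The "g minus 2" anomaly: a = (g - 2) / 2.
# g_measured carries more decimals than the article's rounded 2.0023;
# treat it as illustrative rather than the official published value.
g_measured = 2.00233184

anomaly = (g_measured - 2) / 2
print(f"a_mu ~ {anomaly:.8f}")  # the small excess over the bare Dirac value of 2
```

It is in the eighth decimal place of numbers like these that experiment and theory part ways.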

[Related: Scientists found a fleeting particle from the universe’s first moments]

“We’ve got a true value of the magnetic anomaly pinned down nicely,” says Lawrence Gibbons, a particle physicist at Cornell University and a member of the Muon g–2 collaboration.

Had this result come out several years ago, physicists might have heralded it as definitive proof of physics beyond the Standard Model. But today, it’s not so straightforward. Few affairs of the quantum world are simple, but the spanner in these quantum works is the fact that the Standard Model’s prediction itself is blurry.

“There has been a change coming from the theory side,” says Bailey.

Physicists think that the “virtual particles” that pull at a muon’s g do so with different forces. Some particles yank with electromagnetism, whose influence is easy to calculate. Others do so via the strong nuclear force (whose effects we mainly notice because it holds particles together inside atomic nuclei). Computing the strong nuclear force’s influence is nightmarishly complex, and theoretical particle physicists often substituted data from past experiments in their calculations. 

Recently, however, some groups of theorists have adopted a technique known as “lattice quantum chromodynamics,” or lattice QCD, which allows them to crunch strong nuclear force numbers on computers. When scientists feed lattice QCD numbers into their g predictions, they produce a result that’s more in line with Muon g–2’s results.

Adding to the confusion is that a different particle experiment located in Siberia—known as CMD-3—produced a result that also makes the Muon g–2 discrepancy disappear. “That one is a real head scratcher,” says Gibbons.

The Muon g–2 Collaboration isn’t done. Crunching through three times as much data, collected between 2021 and 2023, remains on the collaboration’s to-do list. Once they analyze all that data, which may be ready in 2025, physicists believe they can make their g minus 2 estimate twice as precise. But it’s not clear whether this refinement would settle things, as theoretical physicists race to update their predictions. The question of whether or not muons really are misbehaving remains an open one.

The post A fleeting subatomic particle may be exposing flaws in a major physics theory appeared first on Popular Science.


]]>
How a US lab created energy with fusion—again https://www.popsci.com/science/nuclear-fusion-second-success-nif/ Sun, 13 Aug 2023 17:00:00 +0000 https://www.popsci.com/?p=562508
Machinery at the center of the National Ignition Facility.
The target chamber of LLNL’s National Ignition Facility. Lawrence Livermore National Laboratory

A barrage of X-rays hit a tiny pellet at temperatures and pressures greater than our sun's.

The post How a US lab created energy with fusion—again appeared first on Popular Science.

]]>

About eight months ago, scientists at a US-government-funded lab replicated the process that powers stars—nuclear fusion—and created more energy than they put in. Now, physicists and engineers at the same facility, the National Ignition Facility (NIF) at Northern California’s Lawrence Livermore National Laboratory, appear to have successfully created an energy-gaining fusion experiment for the second time.

NIF’s latest achievement is a step closer—the second step down a very long road—to a dream of fusion providing the world with clean, abundant energy. There is a long way to go before a fusion power plant opens in your city. But scientists are optimistic.

“It indicates that the scientists at [NIF] and their collaborators understand what happened back in December well enough that they have been able to make it happen again,” says John Pasley, a fusion scientist at the University of York in the UK who wasn’t part of this experiment.

NIF declined to comment, noting that the facility’s scientists had not yet formally presented their results. Until that happens, there’s a lot we won’t know about the specifics of the experiment, which took place on July 30.

There are multiple ways of achieving fusion, and NIF works with one, called inertial confinement fusion (ICF). In NIF’s setup, a high-powered laser beam splits into 192 smaller beams, showering a small cylinder that scientists call a hohlraum. Striking the hohlraum’s inner walls, this barrage spawns X-rays that crash into the capsule at its center, super-squeezing its filling of deuterium and tritium at temperatures and pressures more intense than the sun’s, initiating fusion.

The goal of all this work is to pass the break-even point and create more energy than the laser puts in: an achievement that fusion scientists call gain. In December’s experiment, 2.05 megajoules of laser beams elicited 3.15 megajoules of fusion energy. We won’t know for sure until NIF releases its data, but unnamed sources told the Financial Times that this second success created even greater gain.
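Using December’s published numbers, that gain is simply the ratio of fusion energy out to laser energy in. A quick sketch of the arithmetic:

```python
# Target gain from NIF's December 2022 shot: fusion energy released
# divided by laser energy delivered to the target.
laser_energy_mj = 2.05   # megajoules delivered by the 192 laser beams
fusion_energy_mj = 3.15  # megajoules released by the fusing fuel

gain = fusion_energy_mj / laser_energy_mj
print(f"target gain ~ {gain:.2f}")  # greater than 1 means past break-even
```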

[Related: Cold fusion is making a scientific comeback]

In addition, the December experiment achieved self-heating: a state where the fusion reaction powered itself, like a fire that no longer needs stoking. Many scientists think self-heating is a prerequisite to generating power in ICF. Outside scientists speculate that NIF’s new experiment also achieved self-heating.

“An obvious part of the scientific process is that you get the same result,” says Dennis Whyte, a fusion scientist at MIT who also wasn’t involved in the NIF research. “Of course, that’s extremely heartening.”

This is no small feat. ICF experiments are notoriously delicate. Very subtle changes to the lasers’ angles, to the shapes of the hohlraum and the pellet, and to any one of dozens of other factors could drastically alter the output. NIF in December barely scratched the surface of fusion gain, and it’s clear that tiny changes were the difference between passing break-even and not.

“We also repeat things, not just to see if they repeat, but also to see the sensitivities,” Whyte says. “Seeing the variability and the differences of those from experiment to experiment is really exciting.”

Since the 1950s, fusion scientists have tried to accomplish what the NIF team has done, twice, in the past year. But the long-term goal is to turn these experimental forays into clean, cheap, abundant energy for the world’s people. Converting that milestone into a power plant is another quest entirely, and it has only just begun. If creating gain in the lab is like learning to light a fire, then using it to generate electricity is like building a steam engine.

“I would like to see them gradually shift some of their focus from demonstration of ignition and gain toward investigation of target designs that are closer to those which might be employed in a fusion power reactor,” Pasley says. 

[Related: Microsoft thinks this startup can deliver on nuclear fusion by 2028]

To build a viable power plant, NIF will need to show greater gain. The December experiment created about 1.5 times as much energy as the NIF scientists put in. Even if the July experiment created two or three times as much energy, NIF won’t have come close to the gain that fusion scientists think is necessary for a viable power plant: some 100 times.

Gain of that magnitude would also make fusion a viable addition to the larger electrical grid. It’s difficult to overstate the importance of NIF’s achievement, but the facility didn’t actually generate more energy than it took from the outside world. To power the laser that created those 3.15 megajoules, the device needed 300 megajoules from California’s grid.
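The accounting matters numerically. Counting the 300 megajoules the laser system drew from the grid, rather than the roughly 2 it delivered to the target, the same shot looks very different. A sketch with the article’s figures:

```python
# "Scientific" gain counts only laser energy on target; "wall-plug"
# accounting counts everything drawn from the grid. Figures are the
# December 2022 shot's, as quoted in the text.
grid_energy_mj = 300.0   # energy NIF's laser system pulled from the grid
fusion_energy_mj = 3.15  # fusion energy the shot produced

wall_plug_ratio = fusion_energy_mj / grid_energy_mj
print(f"fusion out per joule from the grid ~ {wall_plug_ratio:.4f}")  # about 1%
```

By this measure, the shot returned roughly one percent of the energy drawn from the wall, which is why fusion scientists set the bar for a viable plant at gains around 100.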

NIF isn’t really the optimal place to complete this quest, partly because it was built to maintain the US nuclear weapons stockpile and can’t focus on fusion all the time. But for now, NIF will likely keep trying, running more and more laser shots. And scientists can compare the results with simulations to understand what is happening under the surface.

“What we assume is going to happen now is we’re going to get dozens of [runs], and we’re going to really learn a lot,” Whyte says.

The post How a US lab created energy with fusion—again appeared first on Popular Science.


]]>
Glowing dye lets us peek inside growing bones and teeth https://www.popsci.com/science/teeth-bones-growing-image/ Wed, 02 Aug 2023 18:30:00 +0000 https://www.popsci.com/?p=560795
Fluorescent dye shows the growth of teeth and bones.
A new tool tracks hard tissue growth like teeth and bones in many species. Gonzalez Lopez et al. Sci. Adv. 2023

For the first time, biologists have precisely tracked the development of hard biological structures.

The post Glowing dye lets us peek inside growing bones and teeth appeared first on Popular Science.

]]>

Your body contains the stuff of rocks: the calcium-based minerals in bones and teeth. In a process called biomineralization, you produce these materials that harden and stiffen as they grow. So do the bodies of other bony, toothed animals. It’s in shells, too: Iridescent mother-of-pearl forms via biomineralization.

But, historically, biologists struggled to observe how this process worked. Now, scientists have been able to observe it in vivid 3D. Bones and tEEth Spatio-Temporal growth monitoring (BEE-ST), as its creators have named their technique, involves adding dye to nascent bones or budding teeth, then watching the color spread as their host components grow.

BEE-ST’s creators published their work in the journal Science Advances today. If its authors are correct, this work could be a boon not just for people studying how bones and teeth grow, but for those who want to control that growth themselves.

“Currently, there are no available tools for precise monitoring and measuring the pace of tooth growth in space and time,” says Jan Křivánek, a developmental biologist at Masaryk University in Brno, Czechia, and one of the paper’s authors. BEE-ST, they hope, may change that.

[Related: This new synthetic tooth enamel is even harder than the real thing]

A few methods can accomplish parts of that goal. Today, scientists and medics can rely on a technique called micro-computed tomography, in which they scan an object with X-rays from multiple angles, then stitch the scans together into a 3D image. While this does give observers a 3D perspective, it also only gives a snapshot—moments in time, rather than a coherent sequence of development.

Another potential option is dye. Bone-watchers have known for decades that dye and substances like it can bind with the calcium in these organs. But this is far from a perfect option to watch how calcium-based structures grow. For one, to see into a bone, you typically have to remove the calcium from your sample, which removes the dye. You can get around this by taking a slice of the tooth or bone, but that only gives you a 2D shadow of the larger 3D picture.

Křivánek and his colleagues wanted to see how mouse teeth grew, but they also wanted a more sophisticated way of seeing calcium. So, they decided to adapt the dye method. Fortunately, in the last several years, researchers had developed techniques to see into a tooth without removing the calcium. They could insert dye into a growing tooth or bone and take 3D images of it over time. Every few days, the researchers added batches of new dye to lab mice. The result, when the scientists later placed the teeth under a microscope, was a sequence of stripes: each one marking a different injection.

[Related: We finally know why we grow wisdom teeth as adults]

In the process, they realized that their technique could be used for more than just mouse teeth. They next showed it could work in a mouse’s bones. Then they expanded from mice to representatives of other provinces in the animal kingdom, administering dye to a menagerie of vertebrates: chameleons (reptiles), junglefowl (birds), frogs (amphibians), and zebrafish (fish).

All of this took Křivánek and colleagues several years, but in the end, they think they have created a reliable process for watching how teeth and bones grow. But that doesn’t mean it only serves this purpose. “We strongly believe it will be further tuned for other applications,” Křivánek says.

One of them is a field called tissue engineering, the science and craft of manipulating the tissues of the human body. A “tissue” can be anything from skin to muscle to internal organs—to stronger materials like the hard, tough matter found in bones and teeth. With tools such as stem cells, scientists can strengthen tissue, improve it, or even try to replicate it from scratch. This technology can help heal cracked bones or regenerate missing teeth.

But, in order to engineer anything, would-be bonesmiths first need to understand how their materials behave as they grow. That, Křivánek thinks, is where something like their method could enter the picture. “We basically opened doors,” he says. “Let’s see how the scientific community will use it.”

]]>
Space junk is a precious treasure trove to some archaeologists https://www.popsci.com/science/archaeology-artifacts-space/ Sun, 30 Jul 2023 17:00:00 +0000 https://www.popsci.com/?p=559970
NASA astronaut Buzz Aldrin walking across Tranquility Base with equipment after the Apollo 11 moon landing. Black and white photo.
Astronaut Buzz Aldrin looks back on Tranquility Base after the Apollo 11 moon landing. NASA

Artifacts scattered across the solar system can reflect its changes over time.

The post Space junk is a precious treasure trove to some archaeologists appeared first on Popular Science.

]]>

Terms like “cultural heritage” and “archaeology” might conjure Indiana Jones-like scenes of old and ancient things buried under the sands of time. But even now, each one of us is producing material that could interest future humans trying to record and study our own era.

For those who believe that space exploration and astronauts’ first departures from Earth are culturally significant, there is a wealth of objects that space missions—crewed and uncrewed, past and present—have left in the realms beyond our atmosphere.

“This stuff is an extension of our species’ migration, beginning in Africa and extending to the solar system,” says Justin Holcomb, an archaeologist with the Kansas Geological Survey. “I argue that a piece of a lander is the exact same thing as a piece of a stone tool in Africa.”

This idea is the heart of what Holcomb and his colleagues call “planetary geoarchaeology.” In a paper published in the journal Geoarchaeology on July 21, these “space archaeologists” detail how they want to study the interactions between the items we’ve left around the solar system and the hostile environments they now occupy. This research, the authors believe, will only become more important as human activity on the moon is set to blossom in the decades to come.

The idea of documenting and preserving what we leave behind in space isn’t a completely new concept. In the early 2000s, New Mexico State University anthropologist Beth O’Leary (who co-authored the paper with Holcomb) cataloged objects scattered around Tranquility Base, Apollo 11’s landing site on the moon. O’Leary later helped get some of those artifacts registered in California and New Mexico as culturally significant properties.

“I would argue that Tranquility Base could easily be considered the most important archaeological site that exists,” says Justin St. P. Walsh, an archaeologist at Chapman University in California who was not involved with the new paper. The base’s lunar soil can’t be declared a cultural heritage site because that would violate the 1967 Outer Space Treaty, which prevents any country from claiming the soil of the moon or another world. But scholars can still list objects found there as heritage.

Naturally, O’Leary’s catalog includes the remnants of Apollo 11’s lunar module and its famed US flag, along with empty food bags, utensils, hygiene equipment, and wires. What is space junk to some is precious culture to space archaeologists. Even long-festering astronaut poop has its value—“that’s human DNA,” Holcomb says.

Archaeological sites on Earth are deeply impacted by the processes of the world around them, both natural and artificial. Likewise, Tranquility Base doesn’t just sit in tranquility. The moon’s surface is constantly bombarded by cosmic rays and micrometeoroids; even faraway human landings can kick up regolith showers.

[Related: Want to learn something about space? Crash into it.]

Holcomb and his colleagues want to study the various states objects are left in to learn how sites on the moon and other worlds change over time—and how to preserve them for our distant descendants. “We think in deep time scales,” says Holcomb. “We’re not thinking in just the next five years. We’re thinking in a thousand years.”

That sort of research, the authors say, is still quite new. Holcomb, for instance, wants to study what happens to NASA’s Spirit rover on Mars as a sand dune washes over it. Other planetary geoarchaeology projects might focus on what the moon’s environment has wrought upon artificial materials we’ve left on the lunar surface.

“We can find out more about what happened to [castoffs] in the length of time they’ve been there,” says Alice Gorman, an archaeologist at Flinders University in Adelaide, Australia, who also wasn’t a co-author. 

NASA Opportunity rover false-color image of Mars Endurance crater
The Opportunity rover now rests in the same Martian sand dunes that it once photographed. NASA officially lost contact with the long-lived robot in 2019. NASA/JPL/Cornell

On Earth, Gorman and colleagues plan to replicate Apollo astronauts’ boot prints in simulated lunar soil and subject them to forces like rocket exhaust. Gorman believes even engineers with no interest in archaeology may want to take interest in work like this. “These same processes will be happening to any new habitats built on the surface,” she says. “With the archaeological sites, we get a bit of a longer-term perspective.”

The moon is the immediate focus for both this paper’s authors and other space archaeologists, and it’s easy to see why. After several decades of occasional uncrewed missions and flybys, NASA’s Artemis program promises to spearhead a mass return to the satellite’s surface. Artemis is slated to land near the moon’s south pole, far away from existing Apollo landing sites. Meanwhile, a flurry of private companies has emerged with the goal of not just touching the moon as Apollo did, but extracting its resources.

Space archaeologists fear that all this future activity will place past sites at risk. “We barely know how to operate on the moon,” says Walsh.

There are some indications that the broader space community is thinking about the problem. The Artemis Accords (a US-initiated document that aims to outline the ethical guidelines for the Artemis era) and the Vancouver Recommendations on Space Mining (a 2020 white paper by primarily Canadian academics that proposes a framework for sustainable space mining) express a desire to protect space heritage sites.

Of course, these are only words on nonbinding paper, and space archaeologists do not think they go far enough. Holcomb and colleagues want experts in their field to be involved in planning—for instance, steering scientific and commercial space missions away from spots where they might interfere with existing cultural heritage. There is earthbound precedent for such a role: In many countries, archaeologists already assist infrastructure projects.

“We know we’re going to go there someday, so let’s make sure that we have the protections in place before we go and ruin things,” says Walsh.

[Related: What an extraterrestrial archaeological dig could tell us about space culture]

Moves like this can’t protect lunar heritage from every possible harm: A future satellite could very well crash-land on Tranquility Base and wreck the last remnants of Apollo 11 there. But space archaeologists say that it is valuable to take any steps we can.

“I think the paper is a really fantastic demonstration of how any mission to the moon has to be about more than just engineering, and it has to be interdisciplinary,” Gorman notes. “It’s very timely that it’s been published now, while there’s still time to incorporate its recommendations into actual lunar missions.”

]]>
The rocky history of a missing 26,000-foot Himalayan peak https://www.popsci.com/science/himalaya-mountain-landslide/ Thu, 06 Jul 2023 16:30:00 +0000 https://www.popsci.com/?p=553788
The base camp at Annapurna in the foreground, with the peak behind it.
The base camp at Annapurna in Nepal, one of the tallest mountains on Earth. Depositphotos

A massive mountain summit crumbled around 1190 CE, leaving evidence in the plains below.

The post The rocky history of a missing 26,000-foot Himalayan peak appeared first on Popular Science.

]]>

Earth is home to 14 “eight-thousanders,” summits that top off at more than 8,000 meters, or 26,247 feet, above sea level. All of these grand mountains rise from the Himalayas and the neighboring Karakoram, the highest region on Earth.
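The cutoff is easy to check in a couple of lines (the 3.28084 feet-per-meter factor is the standard conversion, not from the article):

```python
# Verify the 8,000-meter eight-thousander cutoff quoted above, in feet.
FEET_PER_METER = 3.28084  # standard conversion factor

cutoff_meters = 8000
cutoff_feet = cutoff_meters * FEET_PER_METER
print(round(cutoff_feet))  # 26247, matching the figure above
```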

But our planet is dynamic—could there have been additional peaks like these, since lost? “We wanted to know whether, 830 years ago, the Earth and the Himalayas had one more,” says Jérôme Lavé, a geomorphologist at the National Centre for Scientific Research (CNRS) and the University of Lorraine in France.

The answer, according to Lavé and his colleagues, appears to be yes. In a new paper, published in the journal Nature on July 6, they’ve found evidence of an ancient landslide that reshaped South Asia’s geography—and linked that to the collapse of a peak that would have once been one of the tallest mountains on Earth.

Lavé says his team first spotted the fingerprints of this medieval landslide not in the Himalayas, but far to the south, near the India-Nepal border, in the flat plains around the Narayani River.

For geomorphologists—scientists who study the evolution of the land under our feet (or, in this case, the land towering well above everyone but the hardiest mountaineers)—these plains are prime land for hunting missing mountains. Rivers like the Narayani carry sediments downslope, and those sediments can reveal much about the mountains where they originated.

For instance, Lavé and colleagues found medieval sediments with a carbonate content five times higher than average. This mineral fingerprint indicated that something had disrupted the Narayani’s flow. “A giant landslide occurring…seemed to me the most obvious avenue to explore,” Lavé says.

[Related: How to start mountain biking this summer]

They began plying uphill to find out more. The Narayani flows through the city of Pokhara, nestled in a valley less than 3,000 feet above sea level. But this is one of the steepest landscapes on Earth: looming over Pokhara is the Annapurna massif, a section of the Himalayas. (The massif’s crown jewel is its tallest peak: also named Annapurna, a proud member of the eight-thousand club.)

By studying images of the Annapurna massif, the team found geographic signs of an old landslide. In one subsection of the massif, called the Sabche cirque, they spotted strange features like pillars and pinnacles, markers of erosion.

The authors needed more samples. Collecting fragments from the plains was one thing; gathering wood and rock from the Sabche cirque was another, so they ventured up into the massif by helicopter. From these pieces, they began to build the hazy image of a mountain that existed, long ago, until one catastrophic day around 1190 CE.

“They really managed to capture this event…both at the source as well as at the far sink of these sediments,” says Wolfgang Schwanghart, a geomorphologist at the University of Potsdam in Germany, who was not an author of the paper.

This is what Lavé and colleagues think happened: There once rose a second eight-thousander from the Annapurna massif. Then, it collapsed. The resulting rockslide thoroughly eroded the Himalayan landscape and poured sediment into the valley that now contains Pokhara, from where waters carried it downstream. This event played a major role in eroding the rock, reshaping the massif closer to what we see today.

The paper suggests that large, dramatic landslides may be a significant driver of erosion at high altitudes like this. “This is a mechanism that still needs to be further investigated, but this hypothesis may open new insights,” says Odin Marc, a geomorphologist at CNRS who was also not involved in the research.

What caused the mountain to collapse isn’t clear. A warming medieval climate might have melted mountaintop permafrost that otherwise strengthens the peak. Schwanghart, who has also studied the region’s geology, believes the answer may be earthquakes. He says the chronology indicates that three earthquakes struck Nepal around the time that Lavé and colleagues suggested the mountain collapsed, and one of them may have caused the mountain to topple in the first place.

[Related: There might be underground ‘mountains’ near Earth’s core]

Whatever happened, the new report reinforces the fact that mountains are constantly changing environments. We might see summits as eternal fixtures on the landscape, but if anything, they are the complete opposite.

After all, Himalayan landslides aren’t consigned to the past. In 2021, an avalanche and rockslide careened down a mountainside in Uttarakhand, India, around 300 miles northwest of Annapurna. The disaster burst a dam, and the resulting flood left some 200 people dead or missing.

If such a rockslide were to happen to Pokhara today, the results could be devastating. Pokhara is Nepal’s second-largest city (after the capital Kathmandu) and home to more than half a million people. Moreover, globally, evidence is mounting that a warming climate exacerbates the risk of mountain landslides. Just last month, the Alpine summit of Fluchthorn, nestled on the Swiss-Austrian border, abruptly collapsed in an event that scientists ascribed to thawing permafrost.

Mountain collapses like these may be more common than we realize. “In Alaska, you would find similar events—but often they go unnoticed, because there is no one around,” says Schwanghart.

]]>
Cold fusion is making a scientific comeback https://www.popsci.com/science/cold-fusion-low-energy-nuclear-reaction/ Mon, 03 Jul 2023 18:00:00 +0000 https://www.popsci.com/?p=552986
The ringed building is the European Synchrotron Radiation Facility in France, where LENR researchers are studying palladium nanoparticles.
The ringed building is the European Synchrotron Radiation Facility in France, where LENR researchers are studying palladium nanoparticles. ESRF/P. Jayet

A US agency is funding low-energy nuclear reactions to the tune of $10 million.

The post Cold fusion is making a scientific comeback appeared first on Popular Science.

]]>

Earlier this year, ARPA-E, a US government agency dedicated to funding advanced energy research, announced a handful of grants for a field it calls “low-energy nuclear reactions,” or LENR. Most scientists likely didn’t take notice of the news. But, for a small group of them, the announcement marked vindication for their specialty: cold fusion.

Cold fusion, better known by its practitioners as LENR, is the science—or, perhaps, the art—of making atomic nuclei merge and, ideally, harnessing the resultant energy. All of this happens without the incredible temperatures, on the scale of millions of degrees, that you need for “traditional” fusion. In a dream world, successful cold fusion could provide us with a boundless supply of clean, easily attainable energy.

Tantalizing as it sounds, for the past 30 years, cold fusion has largely been a forgotten specter of one of science’s most notorious controversies, when a pair of chemists in 1989 claimed to achieve the feat—which no one else could replicate. There is still no generally accepted theory that supports cold fusion; many still doubt that it’s possible at all. But those physicists and engineers who work on LENR believe the new grants are a sign that their field is being taken seriously after decades in the wilderness.

“It got a bad start and a bad reputation,” says David Nagel, an engineer at George Washington University, “and then, over the intervening years, the evidence has piled up.”

[Related: Physicists want to create energy like stars do. These two ways are their best shot.]

Igniting fusion involves pressing the hearts of atoms together, creating larger nuclei and a fountain of energy. This isn’t easy. The protons inside a nucleus give it a positive charge, and like-charged nuclei electrically repel each other. Physicists must force the atoms to crash together anyway. 

Normally, overcoming this repulsion takes an immense amount of energy, which is why fusion demands extreme heat—both in stars, where it happens naturally, and in Earthbound experiments. But what if there were another, lower-temperature way?

Scientists had been theorizing such methods since the early 20th century, and they’d found a few tedious, extremely inefficient ways. But in the 1980s, two chemists thought they’d made one method work to great success. 

The duo, Martin Fleischmann and Stanley Pons, had placed the precious metal palladium in a bath of heavy water: a form of H2O whose hydrogen atoms carry an extra neutron—an isotope known as deuterium, commonly used in nuclear science. When Fleischmann and Pons switched on an electrical current through their apparatus and left it running, they began to see abrupt heat spikes, or so they claimed, along with particles such as neutrons.

Those heat spikes and particles, according to them, could not be explained by any chemical process. What could explain them were the heavy water’s deuterium nuclei fusing, just as they would in a star.

If Fleischmann and Pons were right, fusion could be achievable at room temperature in a relatively basic chemistry lab. If you think that sounds too good to be true, you’re far from alone. When the pair announced their results in 1989, what followed was one of the most spectacular firestorms in the history of modern science. Scientist after scientist tried to recreate their experiment, and no one could reliably replicate their results.

[Related: Nuclear power’s biggest problem could have a small solution]

Pons and Fleischmann are remembered as fraudsters. It likely didn’t help that they were chemists trying to make a mark on a field dominated by physicists. Whatever they had seen, “cold fusion” found itself at respectable science’s margins. 

Still, in the shadows, LENR experiments continued. (Some researchers tried variations on Fleischmann and Pons’ themes. Others, especially in Japan, sought LENR as a means of cleaning up nuclear waste by transforming radioactive isotopes into less dangerous ones.) A few experiments showed oddities such as excess heat or alpha particles—anomalies that might best be explained if atomic nuclei were reacting behind the scenes.

“The LENR field has somehow, miraculously, due to the convictions of all these people involved, stayed alive and has been chugging along for 30 years,” says Jonah Messinger, an analyst at the Breakthrough Institute think tank and a graduate student at MIT.

Fleischmann and Pons’ fatal flaw—that their results could not be replicated—continues to cast a pall over the field. Even some later experiments that seemed to show success could not be replicated. But this does not deter LENR’s current proponents. “Science has a reproducibility problem all the time,” says Florian Metzler, a nuclear scientist at MIT.

In the absence of a large official push, the private sector had provided much of LENR’s backing. In the late 2010s, for instance, Google poured several million dollars into cold fusion research to limited success. But government funding agencies are now starting to pay attention. The ARPA-E program joins European Union projects, HERMES and CleanHME, which both kicked off in 2020. (Messinger and Metzler are members of an MIT team that will receive ARPA-E grant funds.)

By the standards of other energy research funding, none of the grants are particularly eye-watering. The European Union programs and ARPA-E total up to around $10 million each: a pittance compared to the more than $1 billion the US government plans to spend in 2023 on mainstream fusion.

But that money will be used in important ways, its proponents say. The field has two pressing priorities. One is to attract attention with a high-quality research paper that clearly demonstrates an anomaly, ideally published in a reputable journal like Nature or Science. “Then, I think, there will be a big influx of resources and people,” says Metzler.

A second, longer-term goal is to explain how cold fusion might work. The laws of physics, as scientists understand them today, do not have a consensus answer for why cold fusion could happen at all.

Metzler doesn’t see that open question as a problem. “Sometimes people have made these arguments: ‘Oh, cold fusion contradicts established physics,’ or something like that,” he says. But he believes there are many unanswered questions in nuclear physics, especially with larger atoms. “We have an enormous amount of ignorance when it comes to nuclear systems,” he says.

Yet answers would have major benefits, other experts argue. “As long as it’s not understood, a lot of people in the scientific community are put off,” says Nagel. “They’re not willing to pay any attention to it.”

It is, of course, entirely possible that cold fusion is an illusion. If that’s the case, then ARPA-E’s grants may give researchers more proof that nothing is there. But it’s also possible that something is at work behind the scenes.

And, LENR proponents say, the Fleischmann and Pons saga is now fading as younger researchers enter the field with no memory of 1989. Perhaps that will finally be what lets LENR emerge from the pair’s shadow. “If there is a nuclear anomaly that occurs,” says Messinger, “my hope is that the wider physics community is ready to listen.”

]]>
The Milky Way’s ghostly neutrinos have finally been found https://www.popsci.com/science/neutrinos-milky-way-detection/ Thu, 29 Jun 2023 18:00:00 +0000 https://www.popsci.com/?p=552344
The IceCube Lab is seen under a starry night sky, with the Milky Way appearing over low auroras in the background.
The IceCube Neutrino Observatory looks for neutrinos exiting Earth at the South Pole. Yuya Makino, IceCube/NSF

There should be plenty of these subatomic particles emerging from our galaxy, but until now they'd never been detected.

The post The Milky Way’s ghostly neutrinos have finally been found appeared first on Popular Science.

]]>

Neutrinos fill the universe, but you wouldn’t know that with your eyes. These subatomic particles are small—so small that physicists once thought they had no mass at all—and they have no electric charge. Even though trillions of neutrinos enter your body every second, the vast majority of them pass through without a trace.

Yet astronomers crave glimpses of neutrinos. That’s because neutrinos are products of cosmic rays, cryptic high-energy particles constantly streaming throughout the universe from all directions. Astronomers aren’t sure where cosmic rays come from, how they behave in flight, or—in many cases—what they’re made of.

Neutrinos can help researchers find out, but they have to see those particles first. Until now, astronomers had only confirmed that they’d found neutrinos originating from outside our galaxy. But in a paper published in Science today, a global team of astronomers announced a long-sought goal: the first neutrinos that hail from the plane of the Milky Way itself.

“This is a very important discovery,” says Luigi Antonio Fusco, an astronomer at the University of Salerno in Italy, who was not an author of the paper.

Seeing these ghosts is a tricky task. A neutrino observatory looks nothing like a telescope or a radio dish. Instead, the paper’s authors worked with an array of holes drilled more than a mile into the South Pole ice: the IceCube Neutrino Observatory. Down those shafts, deep in the frozen darkness, IceCube’s detectors watch for light trails from the particles that neutrinos spawn when they collide with matter.

[Related: This ghostly particle may be why dark matter keeps eluding us]

In water or ice, light travels at only around three-quarters of its vacuum speed. Particles can move through those substances quicker than that (though not faster than the speed of light in a vacuum). If they do, they shoot out cones of bright light called Cherenkov radiation, the optical equivalent of a sonic boom. Some neutrino observatories, such as ANTARES at the bottom of the Mediterranean, look for Cherenkov radiation in water. IceCube uses, well, ice.
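That three-quarters figure falls out of the refractive index: light in a medium travels at c/n, and a charged particle radiates Cherenkov light only above that speed. A quick sketch, using textbook index values (n of roughly 1.31 for ice and 1.33 for water, which are standard reference numbers rather than figures from this article):

```python
# Cherenkov threshold: a charged particle emits light when its speed
# exceeds the phase velocity of light in the medium, c / n.

def cherenkov_threshold(n: float) -> float:
    """Minimum particle speed, as a fraction of c, to emit Cherenkov light."""
    return 1.0 / n

for medium, n in [("ice", 1.31), ("water", 1.33)]:
    print(f"light in {medium} travels at {cherenkov_threshold(n):.2f}c")
# ice:   0.76c -- the "around three-quarters" in the text
# water: 0.75c
```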

Even after scientists worked out how to find those neutrinos, they faced another problem: noise. Neutrino detectors constantly register the aftermath of cosmic rays careening into the upper atmosphere, which pumps subatomic particles into the planet. How, then, do you find the needles of cosmic neutrinos in that high-energy haystack?

The answer is to examine direction. Neutrinos from afar have higher energies and can more easily shoot through our planet. If your detector spots a neutrino that seems to be coming up from the ground, there’s a healthy chance it came from space and passed through Earth before striking your detector.
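As an illustration only—the real IceCube event selection is far more elaborate, and this zenith-angle cut is a hypothetical simplification of the idea above:

```python
# Toy up-going/down-going cut. Zenith angle is measured from straight
# overhead: more than 90 degrees means the track came up through the ground.

def plausibly_cosmic(zenith_deg: float) -> bool:
    """Up-going tracks must have crossed the Earth, which filters out
    atmospheric muons, so a cosmic origin is more plausible."""
    return zenith_deg > 90.0

print(plausibly_cosmic(135.0))  # True: came up through the Earth
print(plausibly_cosmic(30.0))   # False: could be atmospheric debris
```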

In 2013, IceCube detected the first cosmic neutrinos. In the years since, they’ve been able to narrow neutrino sources down to individual galaxies. “We have been detecting extragalactic neutrinos for 10 years now,” says Francis Halzen, a physicist at the University of Wisconsin and a member of the IceCube collaboration.

[Related: Dark energy fills the cosmos. But what is it?]

But one important source was missing: neutrinos from within our own galaxy. Astronomers believe that there should be plenty of neutrinos emerging from the Milky Way’s plane. IceCube scientists had tried to find those neutrinos before, but they’d never been able to confidently pinpoint their origins.

“What changed in this analysis is that we really improved the methods that we’re using,” says Mirco Hünnefeld, a scientist at the Technical University of Dortmund in Germany and a member of the IceCube collaboration.

The IceCube team sharpened their artificial intelligence tools. Today’s neural networks can pluck out neutrinos from the noise with keener discretion than ever before. Astronomers analyzed more than 59,000 IceCube detections collected between 2011 and 2021 and compared them against predicted models of neutrino sources.

As a result, they’re confident that their detections can be explained by neutrinos streaming off of the Milky Way’s flat plane and, in particular, the galactic center.

Now, astronomers want to narrow down the points in the sky where those neutrinos actually come from. More sensitive neutrino detectors can help with that task. IceCube will get upgrades later this decade, and a new generation of neutrino observatories—such as KM3NeT in the Mediterranean and GVD under Russia’s Lake Baikal—will expand astronomers’ neutrino-seeing toolbox.

“The IceCube signal is kind of a diffuse haze,” says Fusco. “With the next generation, I think we can really push to try to point out which are the individual sources of this signal.” If astronomers can do that, they can learn more about the sources that create cosmic rays, which could potentially be supernovae.

“Only cosmic rays make neutrinos, so if you see neutrinos, you see cosmic ray sources,” says Halzen. “The goal of neutrino physics, the prime goal, is to solve the 100-year-old cosmic ray problem.”

“That will help us disentangle a lot of the mysteries out there,” says Hünnefeld, “that we couldn’t do before.”

The post The Milky Way’s ghostly neutrinos have finally been found appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The secret to better folding phones might hinge on mussels https://www.popsci.com/science/mussels-hinge-engineering/ Thu, 22 Jun 2023 18:00:00 +0000 https://www.popsci.com/?p=550459
A cockscomb pearl mussel.
The shell of a cockscomb pearl mussel, which pops open thanks to a protein cushion and biological wires. Prof. Yu's Team

Materials scientists are studying why these shellfish are so great at opening and closing.

The post The secret to better folding phones might hinge on mussels appeared first on Popular Science.

]]>

The function of folding phones and handheld game consoles like the Nintendo 3DS hinges on, well, their hinges. Open and shut such gadgets enough times that the hinges start to fail, and you might find yourself wishing for a far better joint.

As it happens, the animal realm may have fulfilled that wish. Sources of inspiration take the form of bivalves: clams, oysters, cockles, mussels, and a whole host of other two-shelled organisms. Over a bivalve’s life, its shells can open and shut hundreds of thousands of times, seemingly without taking damage.

Now, a team of biologists and materials scientists have worked together to examine the case of one particular bivalve, Cristaria plicata, the cockscomb pearl mussel. In their paper, published in the journal Science today, they’ve not only reverse-engineered a mussel’s hinge, they’ve recreated it with glass fibers and other modern materials.

Cockscomb pearl mussels, the study’s starring bivalves, are found in fresh waters across northeast Asia. Ancient Chinese craftspeople grew pearls within this mussel’s shells. By opening the mussel, inserting a small object like a bead or a tiny Buddha inside, closing the animal, and letting it be for a year, they could retrieve the object afterward—now coated in iridescent mother-of-pearl.

Mother-of-pearl, also known as nacre, has long drawn the attention of materials scientists for far more than its beauty. Although nacre is made from a brittle calcium carbonate mineral called aragonite, its structure—aragonite “bricks” glued together by a protein “mortar”—gives the substance incredible strength and resilience.

“A lot of researchers have replicated various aspects of its brick-and-mortar structure to try to create stiff, tough, and strong materials,” says Rachel Crane, a biomechanist at the University of California, Davis, who was not an author of the new paper.

[Related: This new material is as strong as steel—but lighter]

In the process of studying nacre, some scientists couldn’t help but notice the mussel’s hinge. Despite being made from the same brittle aragonite, the hinge both bends and stretches without breaking. “This exceptional performance impressed us greatly, and we decided to figure out the underlying reason,” says Shu-Hong Yu, a materials scientist at the University of Science and Technology of China, and one of the paper’s authors.

Biologists had studied hinges and the differences between them to classify bivalve species as early as the 19th century. But they didn’t have the technology to peel apart these living joints’ inner structures. Yu and his colleagues, though, extracted the hinges and examined them under a battery of microscopes and analyzers.

They found that the bivalve’s hinge consists of two key parts. The first is at the hinge’s core: a folding part shaped like a paper fan. The fan’s “ribs” are an array of tiny aragonite wires, shrouded in a soft protein cushion. The second part is a ligament, an elastic layer over the fan’s outer edge.

As the hinge closes, the protein matrix helps keep the wires straight, preventing them from bending and breaking. Meanwhile, the outer ligament absorbs the tension from the hinge unfurling. Together, this configuration makes the hinge particularly hardy.

The authors placed hinges extracted from mussels in a machine that repeatedly forced them open and shut. This tested their prowess under long-term, repeated stress. Even after 1.5 million cycles, the authors found no sign of damage. In other words, if the mussel opened and shut its shells once a minute, every minute, for three years on end, its hinge would stay perfectly functional.
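
The "three years" figure is simple arithmetic, assuming one open-shut cycle per minute:

```python
cycles = 1_500_000               # open-shut cycles survived without damage
minutes_per_year = 60 * 24 * 365
years = cycles / minutes_per_year
print(f"{years:.2f} years")      # about 2.85, i.e. roughly three years
```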

This makes the mussel’s hinge super-resistant to what engineers call “fatigue.” Everything from nuts and bolts to bridge supports builds up wear and tear from repeated use, just as your legs might feel tired if you’ve recently run a marathon. And, just like a pair of exhausted legs, a fatigued part is more likely to fail—with crippling consequences. “The bivalve shell hinge is particularly interesting not only for its fatigue resistance, but also for its ability to bend,” says Crane.

It’s surely tempting to imagine bizarre biopunk doors that open and close on the backs of indefatigable mussel hinges. While that’s almost certainly impractical, the authors believe that these hinges could inspire human-engineered parts that serve our purposes well.

[Related: Recycling one of the planet’s trickiest plastics just got a little easier]

In fact, inspired by the structure they found, Yu and his colleagues fashioned their own hinge from glass fibers embedded like fan ribs in a polymer matrix. They put their artificial hinge to the test, and found that it held up like the genuine, organic article—while other hinges, one with disorganized glass fibers and another with glass spheres, began to break and crack.

Yu says that their early effort isn’t meant for regular human use. But it demonstrates that we could create a mussel-like bend if we needed to. For instance, what if a mobile phone designer wants to make a folding touch-screen phone that needs a brittle material like glass?

“The fan-shaped-region-inspired design strategy provides a promising way to address this challenge,” Yu says. His group now plans to examine what those soft proteins do in the hinge.

But evolution and engineering play by different rules. It isn’t necessarily easy to emulate materials that have evolved over millions of years. “The finest-scale patterns in biological structures are often challenging and costly to replicate,” says Crane.

The post The secret to better folding phones might hinge on mussels appeared first on Popular Science.


]]>
How does electricity work? Let’s demystify the life-changing physics. https://www.popsci.com/technology/how-does-electricity-work/ Mon, 19 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=549308
Tesla coil experiment to demonstrate how electricity works.
A Tesla coil gives off current electricity, where the negatively charged electrons continuously move, just like they would through an electrical wire. Depositphotos

How current is your knowledge?

The post How does electricity work? Let’s demystify the life-changing physics. appeared first on Popular Science.

]]>

To the uninitiated, electricity might seem like a sort of hidden magic. It plays by laws of physics we can’t necessarily perceive with our eyes.

But most of our lives run on electricity. Anyone who has ever lived through a power outage knows how inconvenient it is. On a broader level, it’s hard to overstate just how vital the flow of electricity is to powering the functions of modern society.

“If I lose electricity, I lose telecommunications. I lose the financial sector. I lose water treatment. I can’t milk the cows. I can’t refrigerate food,” says Mark Petri, an electrical grid researcher at Argonne National Laboratory in Illinois. 

[Related: How to save electricity this summer]

Which makes it all the more important to know how electricity works, where it comes from, and how it gets to our homes.

How does electricity work?

The universe as we know it is governed by four fundamental forces: the strong nuclear force (which holds subatomic particles together inside atoms), the weak nuclear force (which guides some types of radioactivity), gravity, and electromagnetism (which governs the intrinsically linked concepts of electricity and magnetism). 

One of electromagnetism’s key tenets is that the subatomic particles that make up the cosmos can have either a positive or negative charge. To use them as a form of energy, we have to make them flow as electric current. The electricity we have on Earth is mostly from the movement of negatively charged electrons. 

But it takes more than a charge to keep electrons flowing. The particles don’t travel far before they run into an obstacle, such as a neighboring atom. That means electricity needs a material whose atoms have loose electrons, which can be knocked away to keep the current going. This type of material is known as a conductor. Most metals have conductive qualities, such as the copper that forms a lot of electrical wires.

Other materials, called insulators, have far more tightly bound electrons that aren’t easily pushed around. The plastic that coats most wires is an insulator, which is why you don’t get a nasty shock when you touch a cord or plug.

Some scientists and engineers think of electricity as a bit like water streaming through a pipe. The volume of water passing through a pipe section at a given time compares to the number of electrons flowing through a particular strand of wire, which scientists measure in amps. The water pressure that helps to push the fluid through is like the electrical voltage. When you multiply amps by volts, you compute the power, or the amount of energy passing through the wire every second, which electricians measure in watts. The wattage of your microwave, then, is approximately the amount of electrical energy it uses per second.
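
The amps-volts-watts relationship above is a single multiplication. A quick sketch, using a hypothetical 1,000-watt microwave on a 120-volt outlet (the numbers are illustrative, not from the article):

```python
def power_watts(volts: float, amps: float) -> float:
    """Electrical power: P = V * I (volts times amps gives watts)."""
    return volts * amps

volts, watts = 120.0, 1000.0
amps = watts / volts             # rearranged: I = P / V
print(f"current drawn: {amps:.2f} A")
print(f"power check: {power_watts(volts, amps):.0f} W")
```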

How electrons carry voltage through wires

According to Faraday’s law of induction, if a wire sits in a magnetic field and that magnetic field shifts, an electric current arises in the wire. This is why most of the world’s electricity is born from generators, which are typically rotating magnetic apparatuses. As a generator spins, it sends electricity shooting through a wire coiled around it.

[Related: The best electric generators for your home]

Powering a whole city calls for a colossal generator, potentially the size of a building. But it takes energy to make energy from that generator. In most fossil fuel and nuclear plants, the fuel source boils water into steam, which causes turbines to spin their respective generators. Hydro and wind generators take advantage of nature’s own motion, redirecting water or gusts of wind to do the spinning. Solar panels, meanwhile, work differently because they don’t need moving magnets at all. When light strikes a solar cell, it excites the electrons within the atoms of the material, causing them to flow out in a current.

It’s easier to transfer energy with lots of volts and fewer amps. As such, long-distance power lines use thousands of volts to carry electricity away from power plants. That’s far too high for most buildings, so power grids rely on substations to lower the voltage for regular outlets and home electronics. North American buildings typically set their voltage to 120 volts; most of the rest of the world uses between 220 and 240 volts.
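
The reason high voltage helps: for a fixed amount of power, raising the voltage lowers the current, and the heat wasted in a wire grows with the square of the current. A sketch with made-up line numbers, not figures from the article:

```python
def line_loss_watts(power_w: float, volts: float, resistance_ohms: float) -> float:
    """Resistive loss in a transmission line: I**2 * R, with I = P / V."""
    current = power_w / volts
    return current ** 2 * resistance_ohms

# Sending 1 MW through a line with 10 ohms of resistance:
for volts in (10_000, 100_000):
    loss = line_loss_watts(1_000_000, volts, 10)
    print(f"at {volts:,} V: {loss:,.0f} W lost as heat")
```

Multiplying the voltage by ten cuts the wasted power by a factor of one hundred, which is why long-distance lines run at thousands of volts.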

Current also doesn’t flow just one way—instead, it constantly switches direction back and forth, which engineers call alternating current. This enables it to travel stretches of up to several thousand miles. North American wires flip from one current direction to the other 60 times every second. In other parts of the globe, particularly in Europe and Africa, they alternate back and forth 50 times every second.

That brings the current to your building’s breaker box. But how does that power actually get to your electronic devices? 

[Related: Why you need an uninterruptible power supply]

To keep a continuous flow of electricity, a system needs a complete circuit. Buildings everywhere are wired with incomplete circuits. A two-hole socket contains one “live” wire and one “neutral” wire. When you plug in a lamp, kitchen appliance, or phone charger, you’re completing that circuit, allowing electricity to flow from the live wire, through the device, and back through the neutral wire to deliver energy. 

Put another way, if you stick a finger into a live socket, you’re temporarily completing the circuit with your body (somewhat painfully).

An electrical worker suspended on high-voltage power lines in China against the sunset
An electrician carries out maintenance work on electric wires of a high-voltage power line project on September 28, 2022, in Lianyungang, China. Geng Yuhe / VCG via Getty Images

The future of electricity

Not long ago, electricity was still a luxury. In the late 1990s, nearly one-third of the world’s population lived in homes without electrical access. We’ve since cut that proportion by more than half—but nearly a billion people, mainly concentrated in sub-Saharan Africa, still live without it.

Historically, almost all electricity started at large power plants and ended at homes and businesses. But the transition to renewable energy is altering that process. On average, solar and wind farms are smaller than hulking coal plants and dams. On cloudy, windless days, giant batteries can back them up with stored power.

“What we have been seeing, and what we can expect to see in the future, is a major evolution of the grid,” says Petri.

[Related: Why hasn’t Henry Ford’s power grid become a reality?]

The infrastructure we build around electricity makes a difference, both for the health of the planet and people. In 2020, only 39 percent of the world’s electricity came from clean sources like nuclear and hydro; the rest came largely from CO2-emitting fossil fuels.

Fortunately, there is plenty of reason for optimism. By some accounts, solar power is now the cheapest energy source in human history, with wind power not far behind. Moreover, a growing number of utility users are installing rooftop solar panels, solar generators, heat pumps, and the like. “People’s homes are not just taking power from the grid,” says Petri. “They’re putting power back on the grid. It’s a much more complex system.”

The laws of electricity don’t change depending on where we choose to draw our current from. But the consequences of our decisions on how to use that power do matter.

The post How does electricity work? Let’s demystify the life-changing physics. appeared first on Popular Science.


]]>
What happens if AI grows smarter than humans? The answer worries scientists. https://www.popsci.com/science/ai-singularity/ Mon, 12 Jun 2023 10:00:00 +0000 https://www.popsci.com/?p=547500
Agent Smith clones from The Matrix to show the concept of singularity in AI and AGI
With each iteration, Singularity could get more invincible—and dangerous. Warner Bros.

Some AI experts have begun to confront the 'Singularity.' What they see scares them.

The post What happens if AI grows smarter than humans? The answer worries scientists. appeared first on Popular Science.

]]>

In 1993, computer scientist and sci-fi author Vernor Vinge predicted that within three decades, we would have the technology to create a form of intelligence that surpasses our own. “Shortly after, the human era will be ended,” Vinge said.

As it happens, 30 years later, the idea of an artificially created entity that can surpass—or at least match—human capabilities is no longer the domain of speculators and authors. Ranks of AI researchers and tech investors are seeking what they call artificial general intelligence (AGI): an entity capable of human-level performance at all kinds of intellectual tasks. If humans produce a successful AGI, some researchers now believe, “the end of the human era” will no longer be a vague, distant possibility.  

[Related: No, the AI chatbots still aren’t sentient]

Futurists often credit Vinge with popularizing what many commentators have called “the Singularity.” He believed that technological progress could eventually spawn an entity with capabilities surpassing the human brain. Its introduction to society would warp the world beyond recognition—a “change comparable to the rise of human life on Earth,” in Vinge’s own words.

Perhaps it’s easiest to imagine Singularity as a powerful AI, but Vinge envisioned it in other ways. Biotech or electronic enhancements might tweak the human brain to be faster and smarter, combining, say, the human mind’s intuition and creativity with a computer’s processor and information access to perform superhuman feats. Or as a more mundane example, consider how the average smartphone user has powers that would awe a time traveler from 1993.

“The whole point is that, once machines take over the process of doing science and engineering, the progress is so quick, you can’t keep up,” says Roman Yampolskiy, a computer scientist at the University of Louisville.

Already, Yampolskiy sees a microcosm of that future in his own field, where AI researchers are publishing an incredible amount of work at a rapid rate. “As an expert, you no longer know what the state of the art is,” he says. “It’s just evolving too quickly.”

What is superhuman intelligence?

While Vinge didn’t lay out any one path to the Singularity, some experts think AGI is the key to getting there through computer science. Others contend that the term is a meaningless buzzword. In general, it describes a system that matches human performance in any intellectual task.

If we develop AGI, it might open the door to a future of creating a superhuman intelligence. When applied to research, that intelligence could then produce its own new discoveries and new technologies at a breakneck pace. For instance, imagine a hypothetical AI system better than any real-world computer scientist. Now, imagine that system in turn tasked with designing better AI systems. The result, some researchers believe, could be an exponential acceleration of AI’s capabilities.

[Related: Engineers finally peeked inside a deep neural network]

That may pose a problem, because we don’t fully understand why many AI systems behave in the ways they do—a problem that may never disappear. Yampolskiy’s work suggests that we will never be able to reliably predict what an AGI will be able to do. Without that ability, in Yampolskiy’s mind, we will be unable to reliably control it. The consequences of that could be catastrophic, he says.

But predicting the future is hard, and AI researchers around the world are far from unified on the issue. In mid-2022, the think tank AI Impacts surveyed 738 researchers’ opinions on the likelihood of a Singularity-esque scenario. They found a split: 33 percent replied that such a fate was “likely” or “quite likely,” while 47 percent replied it was “unlikely” or “quite unlikely.”

“I feel like it’s taking away from the problems that actually matter.”

Sameer Singh, computer scientist

Sameer Singh, a computer scientist at the University of California, Irvine, says that the lack of a consistent definition for AGI—and Singularity, for that matter—makes the concepts difficult to empirically examine. “Those are interesting academic things to be thinking about,” he explains. “But, from an impact point of view, I think there is a lot more that could happen in society that’s not just based on this threshold-crossing.”

Indeed, Singh worries that focusing on possible futures obscures the very real impacts that AI’s failures or follies are already having. “When I hear of resources going to AGI and these long-term effects, I feel like it’s taking away from the problems that actually matter,” he says. It’s already well established that the models can create racist, sexist, and factually incorrect output. From a legal point of view, AI-generated content often clashes with copyright and data privacy laws. Some analysts have begun blaming AI for inciting layoffs and displacing jobs.

“It’s much more exciting to talk about, ‘we’ve reached this science-fiction goal,’ rather than talk about the actual realities of things,” says Singh. “That’s kind of where I am, and I feel like that’s kind of where a lot of the community that I work with is.” 

Do we need AGI?

Reactions to an AI-powered future reflect one of many broader splits in the community building, fine-tuning, expanding, and monitoring models. Computer science pioneers Geoffrey Hinton and Yoshua Bengio both recently expressed regrets and a loss of direction over a field they see as spiraling out of control. Some researchers have called for a six-month moratorium on developing AI systems more powerful than GPT-4. 

Yampolskiy backs the call for a pause, but he doesn’t believe half a year—or one year, or two, or any timespan—is enough. He is unequivocal in his judgment: “The only way to win is not to do it.”

The post What happens if AI grows smarter than humans? The answer worries scientists. appeared first on Popular Science.


]]>
Physicists take first-ever X-rays of single atoms https://www.popsci.com/science/one-atom-x-ray-characterization/ Fri, 02 Jun 2023 18:00:00 +0000 https://www.popsci.com/?p=545645
Argonne National Laboratory's Advanced Photon Source.
The particle accelerator at Argonne National Laboratory provided the intense X-rays needed to image single atoms. Argonne National Laboratory/Flickr

This technique could help materials scientists control chemical reactions with better precision.

The post Physicists take first-ever X-rays of single atoms appeared first on Popular Science.

]]>

Perhaps you think of X-rays as the strange, invisible radiation that passes through your body to scan broken bones or teeth. When you get an X-ray image taken, your medical professionals are essentially using it to characterize your body.

Many scientists use X-rays in a very similar role—they just have different targets. Instead of scanning living things (which likely wouldn’t last long when exposed to the high-powered research X-rays), they scan molecules or materials. In the past, scientists have X-rayed batches of atoms, to understand what they are and predict how those atoms might fare in a particular chemical reaction.

But no one has been able to X-ray an individual atom—until now. Physicists used X-rays to study the insides of two different single atoms, in work published in the journal Nature on Wednesday.

“The X-ray…has been used in so many different ways,” says Saw-Wai Hla, a physicist at Ohio University and Argonne National Laboratory, and an author of the paper. “But it’s amazing what people don’t know. We cannot measure one atom—until now.”

Beyond atomic snapshots

Characterizing an atom doesn’t mean just snapping a picture of it; scientists first did that way back in 1955. Since the 1980s, atom-photographers’ tool of choice has been the scanning tunneling microscope (STM). The key to an STM is its bacterium-sized tip. As scientists move the tip a millionth of a hair’s breadth above the atom’s surface, electrons tunnel through the space in between, creating a current. The tip detects that current, and the microscope transforms it into an image. (An STM can drag and drop atoms, too. In 1989, two scientists at IBM became the first STM artists, spelling the letters “IBM” with xenon atoms.)

But actually characterizing an atom—scanning the lone object, sorting it by its element, decoding its properties, understanding how it will behave in chemical reactions—is a far more complex endeavor. 

X-rays allow scientists to characterize larger batches of atoms. When X-rays strike atoms, they transfer their energy into those atoms’ electrons, exciting them. All good things must end, of course, and when those electrons come back down, they release their newfound energy as, again, X-rays. Scientists can analyze that fresh radiation to deduce the properties of the atoms in between.

[Related: How scientists managed to store information in a single atom]

That’s a fantastic tool, and it’s been a boon to scientists who need to tinker with molecular structures. X-ray spectroscopy, as the process is called, helped create COVID-19 vaccines, for instance. The technique allows scientists to study a group of atoms—identifying which elements are in a batch and what their electron configurations are in general—but it doesn’t enable scientists to match them up to individual atoms. “We might be able to see, ‘Oh, there’s a whole team of soccer players,’ and ‘There’s a whole team of dancers,’ but we weren’t able to identify a single soccer player or a single dancer,” says Volker Rose, a physicist at Argonne National Laboratory and another of the authors.

Peering with high-power beams

You can’t create a molecule-crunching machine with the X-ray source at your dentist’s office. To reach its full potential, you need a beam that is far brighter, far more powerful. You’ve got to go to a particle accelerator known as a synchrotron.

The device the Nature authors used, the Advanced Photon Source at Argonne National Laboratory, zips electrons around a two-thirds-of-a-mile-long ring on the plains of Illinois. Rather than crashing particles into each other, however, a synchrotron sends its high-speed electrons through an undulating magnetic gauntlet. As the electrons pass through, they unleash much of their energy as an X-ray beam.

A diagram showing X-rays illuminating a single iron atom (the red ball marked Fe), which provides elemental and chemical information when the tip detects excited electrons. Saw-Wai Hla

The authors combined the power of such an X-ray beam with the precision of an STM. In this case, the X-rays energized the atom’s electrons. The STM, however, pulled some of the electrons out, giving scientists a far closer look. Scientists have given this process a name that wouldn’t feel out of place in a PlayStation 1 snowboarding game: synchrotron X-ray scanning tunneling microscopy (SX-STM).

[Related: How neutral atoms could help power next-gen quantum computers]

Combining X-rays and STM isn’t so simple. More than a matter of technical tinkering, they are two separate technologies used by two largely separate communities of scientists. Getting them to work together took years of effort.

Using SX-STM, the authors successfully detected the electron arrangement within two different atoms: one of iron, and another of terbium, a rare-earth element (number 65) that’s often used in magnet-containing electronic devices as well as in green fluorescent lamps. “That’s totally new, and wasn’t possible before,” says Rose.

The scientists believe that their technique can find use in a broad array of fields. Quantum computers can store information in atoms’ electron states; researchers could use this technique to read them. If the technique catches on, materials scientists might be able to control chemical reactions with far greater precision.

Hla believes that SX-STM characterization can build upon the work that X-ray science already does. “The X-ray has changed many lives in our civilization,” he says. For instance, knowing what specific atoms do is critical to creating better materials and to studying proteins, perhaps for future immunizations. 

Now that Hla and his colleagues have proven it’s possible to examine one or two atoms at a time, he says the road is clear for scientists to characterize whole batches of them at once. “If you can detect one atom,” Hla says, “you can detect 10 atoms and 20 atoms.”

The post Physicists take first-ever X-rays of single atoms appeared first on Popular Science.


]]>
Danish painters used beer to create masterpieces, but not the way you think https://www.popsci.com/science/beer-byproducts-danish-art/ Thu, 25 May 2023 10:00:00 +0000 https://www.popsci.com/?p=543346
C.W. Eckersberg's painting "The 84-Gun Danish Warship Dronning Marie in the Sound” contains beer byproducts in its canvas primer.
C.W. Eckersberg's painting "The 84-Gun Danish Warship Dronning Marie in the Sound” contains beer byproducts in its canvas primer. Statens Museum for Kunst

Nineteenth-century craftspeople made do with what they had. In Denmark, they had beer leftovers.

The post Danish painters used beer to create masterpieces, but not the way you think appeared first on Popular Science.

]]>

Behind a beautiful oil-on-canvas painting is, well, its canvas. To most art museum visitors, that fabric might be no more than an afterthought. But the canvas and its chemical composition are tremendously important to scientists and conservators who devote their lives to studying and caring for works of art.

When they examine a canvas, sometimes those art specialists are surprised by what they find. For instance, few conservators expected a 200-year-old canvas to contain proteins from yeast and fermented grains: the fingerprints of beer-brewing.

But those very proteins sit in the canvases of paintings from early 19th century Denmark. In a paper published on Wednesday in the journal Science Advances, researchers from across Europe say that Danes may have applied brewing byproducts as a base layer to a canvas before painters had their way with it.

“To find these yeast products—it’s not something that I have come across before,” says Cecil Krarup Andersen, an art conservator at the Royal Danish Academy, and one of the authors. “For us also, as conservators, it was a big surprise.”

The authors did not set out in search of brewing proteins. Instead, they sought traces of animal-based glue, which they knew was used to prepare canvases. Conservators care about animal glue since it reacts poorly with humid air, potentially cracking and deforming paintings over the decades.

[Related: 5 essential apps for brewing your own beer]

The authors chose 10 paintings created between 1828 and 1837 by two Danes: Christoffer Wilhelm Eckersberg, the so-called “Father of Danish Painting,” fond of painting ships and sea life; and Christen Schiellerup Købke, one of Eckersberg’s students at the Royal Danish Academy of Fine Arts, who went on to become a distinguished artist in his own right.

The authors tested the paintings with protein mass spectrometry: a technique that allows scientists to break a sample down into the proteins within. The technique isn’t selective, meaning that the experimenters could find substances they weren’t seeking.

Mass spectrometry destroys its sample. Fortunately, conservators in the 1960s had trimmed the paintings’ edges during a preservation treatment. The National Gallery of Denmark—the country’s largest art museum—had preserved the scraps, allowing the authors to test them without actually touching the original paintings.

Scraps from eight of the 10 paintings contained structural proteins from cows, sheep, or goats, whose body parts might have been reduced into animal glue. But seven paintings also contained something else: proteins from baker’s yeast and from fermented grains—wheat, barley, buckwheat, rye.

[Related: Classic Mexican art stood the test of time with the help of this secret ingredient]

That yeast and those grains feature in the process of brewing beer. While beer does occasionally turn up in recipes for 19th-century house paint, it’s alien to works of fine art.

“We weren’t even sure what they meant,” says study author Fabiana Di Gianvincenzo, a biochemist at the University of Copenhagen in Denmark and the University of Ljubljana in Slovenia.

The authors considered the possibility that stray proteins might have contaminated the canvas from the air. But three of the paintings contained virtually no brewer’s proteins at all, while the other seven contained too much protein for contamination to reasonably explain.

“It was not something random,” says Enrico Cappellini, a biochemist at the University of Copenhagen in Denmark, and another of the authors.

To learn more, the authors whipped up some mock substances containing those ingredients: recipes that 19th-century Danes could have created. The yeast proved an excellent emulsifier, creating a smooth, glue-like paste. If applied to a canvas, the paste would create a smooth base layer that painters could beautify with oil colors.

A mock primer made in the laboratory.
Making a paint paste in the lab, 19th-century style. Mikkel Scharff

Eckersberg, Købke, and their fellow painters likely didn’t interact with the beer. The Royal Danish Academy of Fine Arts provided its professors and students with pre-prepared art materials. Curiously, the paintings that contained grain proteins all came from earlier in the time period, between 1827 and 1833. Købke then left the Academy and produced the three paintings that didn’t contain grain proteins, suggesting that his new source of canvases didn’t use the same preparation method.

The authors aren’t certain how widespread the brewer’s method might have been. If the technique was localized to early 19th century Denmark or even to the Academy, art historians today could use the knowledge to authenticate a painting from that era, which historians sometimes call the Danish Golden Age. 

This was a time of blossoming in literature, in architecture, in sculpture, and, indeed, in painting. In art historians’ reckoning, it was when Denmark developed its own unique painting tradition, which vividly depicted Norse mythology and the Danish countryside. The authors’ work lets them glimpse lost details of the society under that Golden Age. “Beer is so important in Danish culture,” says Cappellini. “Finding it literally at the base of the artwork that defined the origin of modern painting in Denmark…is very meaningful.” 

[Related: The world’s art is under attack—by microbes]

The work also demonstrates how craftspeople repurposed the materials they had. “Denmark was a very poor country at the time, so everything was reused,” says Andersen. “When you have scraps of something, you could boil it to glue, or you could use it in the grounds, or use it for canvas, to paint on.”

The authors are far from done. For one, they want to study their mock substances as they age. Combing through the historical record—artists’ diaries, letters, books, and other period documents—might also reveal tantalizing details of who used the yeast and how. Their work, then, makes for a rather colorful crossover of science with art conservation. “That has been the beauty of this study,” says Andersen. “We needed each other to get to this result.”

This story has been updated to clarify the source of canvases for Købke’s later works.

The post Danish painters used beer to create masterpieces, but not the way you think appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
A gassy black hole might have burped out the largest cosmic explosion ever https://www.popsci.com/science/largest-explosion-cosmos-supermassive-black-hole/ Thu, 18 May 2023 16:00:00 +0000 https://www.popsci.com/?p=541906
A supermassive black hole with a mass many times that of our sun.
A supermassive black hole (like the one illustrated here) was thought to have feasted on gas, emitting a mammoth bang. NASA/JPL-Caltech

This fault in the stars lit up the universe with extraordinary brightness.

The post A gassy black hole might have burped out the largest cosmic explosion ever appeared first on Popular Science.

]]>

Humans might fear the nuclear bomb, but it is not even a blip against what the cosmos can unleash. Take, for example, the gamma ray burst: a stark flash of light and radiation erupting from a colossal star in its death throes. Earlier this year, astronomers spotted a gamma ray burst that they’ve labeled “the brightest of all time.”

Yet a gamma ray burst is only a single exploding star. When far more mass is involved, the universe can set off even larger bangs. In a paper published May 11 in the journal Monthly Notices of the Royal Astronomical Society, astronomers announced what, in their words, is the most energetic astronomical event ever seen.

Still ongoing, this event isn’t as bright as a gamma ray burst—but, lasting far longer, it has unleashed far more energy into the universe. Although this explosion, an event named AT2021lwx, defies easy explanation, the astronomers who found it have an idea involving lucky black holes. If they’re right, their observatories may have sighted something like this event more than once before.

In a bit of irony, this “largest explosion ever seen” evaded astronomers’ detection for nearly a year. The Samuel Oschin Telescope, nestled at Palomar Observatory in the mountains northeast of San Diego, California, first picked up a brightening blip in June 2020. But as often happens in astronomy, a field inundated with data from a sky constantly bursting with activity, the event remained unnoticed.

Only in April 2021 did an automated system called Lasair bring AT2021lwx to human astronomers’ attention. By then, the blip in the sky had been steadily brightening for more than 300 days. While the blip was peculiar, astronomers thought little of it until they calculated how far away the event was (8 billion light-years) and realized how intrinsically bright it had to be.
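
Turning a measured redshift into a light-travel time is a standard cosmology calculation. Here is a minimal sketch, assuming a flat Lambda-CDM universe with illustrative parameters (H0 = 70 km/s/Mpc, matter density 0.3); the redshift of 1 used below is also illustrative, roughly the regime where light has traveled for about 8 billion years:

```python
import math

def lookback_time_gyr(z, h0=70.0, omega_m=0.3, omega_l=0.7, steps=10000):
    """Light-travel (lookback) time in Gyr for a flat Lambda-CDM universe.

    Integrates dt = dz / ((1 + z) * H(z)) with the trapezoidal rule.
    """
    # Hubble time 1/H0 in Gyr (H0 given in km/s/Mpc).
    hubble_time_gyr = (3.0857e19 / h0) / 3.1557e16

    def integrand(zp):
        e_z = math.sqrt(omega_m * (1 + zp) ** 3 + omega_l)  # H(z) / H0
        return 1.0 / ((1 + zp) * e_z)

    dz = z / steps
    total = 0.5 * (integrand(0.0) + integrand(z))
    for i in range(1, steps):
        total += integrand(i * dz)
    return hubble_time_gyr * total * dz

# Light from redshift ~1 left its source roughly 7.7 billion years ago
# under these assumed parameters.
print(round(lookback_time_gyr(1.0), 1))
```

The exact answer shifts with the assumed cosmological parameters, which is why published distances for the same event can differ slightly.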

“That’s, suddenly, when we realized: ‘Hang on, this is something very, very unusual,’” says study author Philip Wiseman, an astronomer at the University of Southampton in the UK.

[Related: Astronomers now know how supermassive black holes blast us with energy]

“I haven’t seen anything changing brightness and becoming this bright on such a short timescale,” says Tonima Ananna, a black hole astrophysicist at Dartmouth College, who wasn’t an author.

At first, the authors didn’t know what to make of AT2021lwx. They asked their colleagues. Some thought it was a tidal disruption event, where a black hole violently tears apart a captured star. But this event was far, far brighter than any known star-eating episode. Others thought it was a quasar, a young galaxy with an active nucleus: a supermassive black hole churning out bright jets of radiation. But this event’s hundredfold surge in brightness was far greater than anything astronomers had seen in quasars.

“You have the tidal disruption people saying, ‘No, I don’t think it’s one of ours.’ You’ve got the quasar people saying, ‘No, I don’t think it’s one of ours.’ That’s where you have to start coming up with a new scenario,” Wiseman says.

Their new scenario also involves a black hole: a supermassive one, more than a million times the mass of the sun, at the heart of a galaxy. Normally, a supermassive black hole is surrounded by a gas accretion disc, drawn in by the immense gravity. Some supermassive black holes, like those in quasars, actively devour that gas; as they do, they glow in response. Others, like the one in the center of the Milky Way, are dormant, quiet, and dark.

Wiseman and his colleagues believe that a dormant black hole might suddenly find itself inundated by a very large quantity of gas—potentially thousands of times the mass of the sun. The black hole would respond to its newfound banquet by brilliantly awakening, bursting far more brightly than even an active counterpart.

Wiseman and his colleagues believe that such a windfall triggered AT2021lwx, causing a dormant supermassive black hole to light up the night.

“I think they make a compelling case that this is a supermassive black hole … suddenly being ‘switched on,’” says Ananna.

Astronomers might have seen accretion events like AT2021lwx before. Wiseman and his colleagues pored through past observations and found multiple needles in the haystack of astronomical data that resembled the record event. None of them came close to this one’s brightness, but they increased in luminosity along a similar pattern. These events occurred in galaxies known to have black holes at their centers, showered by streams of gas falling inward.

[Related: Astronomers just caught a ‘micronova’—a small but mighty star explosion]

“There’s a chance that [the record event] is the same, but just the amount of gas that has been dumped on is much, much, much, much larger,” says Wiseman.

Wiseman and his colleagues plan to put their ideas to the test in the form of computer simulations. By doing this, they can learn if accretion events could have caused this record explosion and the other bright patterns they’d found.

Meanwhile, they’re planning to follow the trail they’ve found. AT2021lwx’s brightness has peaked and begun to steadily decline. They’ve begun watching the object’s X-ray emissions and plan to follow up with radio waves. Once the object has faded to black, they plan to zoom in with something like the Hubble Space Telescope, which can see if there’s a galaxy behind the burst—and what it looks like.

The need for more observations underscores that astronomers still have many unanswered questions about some of the universe’s most extreme events.

“There may be things out there already that have been larger and brighter, but because they are so slow, our detection algorithms never actually flagged them as being an explosion themselves—and they kind of just got lost,” Wiseman says.

The post A gassy black hole might have burped out the largest cosmic explosion ever appeared first on Popular Science.


]]>
The epic journey of dust in the wind often ends with happy plankton https://www.popsci.com/science/dust-plankton-ocean-blooms/ Thu, 04 May 2023 18:00:00 +0000 https://www.popsci.com/?p=539003
A swirl of dust from the Sahara desert is carried on winds above the Mediterranean.
NASA's Terra satellite captured this Saharan dust blowing over Italy and the Mediterranean Sea in December 2022. NASA Earth Observatory/MODIS

The voyage of a dust particle links sand to sky to the sea.

The post The epic journey of dust in the wind often ends with happy plankton appeared first on Popular Science.

]]>

A dust particle can go on a great voyage. It starts on land; it continues in the air, where winds carry the particle up, up, and away. And—at least for some dust particles—that saga might end with a fall into seawater thousands of miles from where it began.

Dust intrinsically links Earth’s sands, skies, and seas. Particles that fall into water can deliver nutrients that feed life in the sea, creating great algal blooms. Scientists are learning more about the process, but there are many questions they still haven’t answered about how—and if—it works.

In a new study published today in the journal Science, scientists have answered one previous mystery. They’ve shown that more dust does, indeed, create more phytoplankton.

“Understanding how the ocean works is an underlying motivation,” says Toby Westberry, a botanist at Oregon State University, and the paper’s lead author. “It is vast and still poorly understood in many respects.”

Much of the world’s dust begins its journey in deserts. Winds blowing across the sands carry fine particles away, and the longer that sand sits in one place, the more dust it generates. The world’s greatest dust source lies in North Africa: the vast expanses of the Sahara.

From there, dust particles are passengers of the world’s wind patterns. For instance, North African dust might ride the westerlies to Europe, or it might ride the trade winds from North Africa across the Atlantic. 

Inevitably, some dust falls into the world’s oceans along the way, unloading the cargo it carried from the deserts—elements like phosphorus and iron. The atmosphere is not inert, either, and adds new chemicals to airborne particles: As dust rides high through the skies of Earth’s troposphere, it collects nitrogen from the surrounding air. When dust delivers this nitrogen and other nutrients to the water, they encourage phytoplankton to bloom, tinting the oceans green and changing their very color.

Atmospheric dust isn’t the primary source of nutrients for sea plants; scientists think that they mainly rely on what rises as water upwells from the ocean depths. But dust can still make its mark—especially by delivering iron to parts of the ocean that are deficient in the metal.

Scientists pay close attention to dust particles because of their roles as iron couriers.  “Often, when we think of dust,” says Douglas Hamilton, an earth scientist at North Carolina State University, who was not an author on the paper, “we do link it immediately to the iron.”

There are many questions that remain unanswered about this process. What precise role does the dust play in encouraging phytoplankton? Are there different types of dust that cause phytoplankton to respond in different ways? 

Most pressingly, scientists didn’t know whether the process worked on a worldwide scale. Past research had shown that dust storms could cause local phytoplankton blooms; experiments had also demonstrated that literally pouring iron into seawater encouraged phytoplankton growth. “We’ve done this work, but does it actually matter?” says Hamilton. “We think it does…it’s been proved for isolated events, but it’s never been proved on the global scale.”

The paper’s authors tried to answer that question. NASA had simulated dust flows in the atmosphere between 2003 and 2016, based on daily observations of surface temperatures. Unsurprisingly, the simulations indicated that more dust fell in regions around the Sahara Desert: in seas like the Mediterranean, the North Atlantic, and the Indian Ocean.

[Related: The Sahara used to be full of fish]

With that data in hand, the authors turned to satellite measurements of the seas over that same time period: specifically, observations of ocean color, which could indicate phytoplankton. Indeed, phytoplankton grew on the days after the simulation suggested certain parts of the sea would have received a windfall of dust.
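
The matching step described above, lining up simulated dust-deposition days with the ocean-color response that follows, can be illustrated with a toy lagged-correlation analysis. This sketch uses synthetic data; the series, the 3-day response lag, and the noise level are all invented for illustration:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(dust, chl, max_lag=10):
    """Find the lag (in days) at which chlorophyll best tracks dust."""
    scores = {}
    for lag in range(max_lag + 1):
        # Correlate dust with the chlorophyll signal 'lag' days later.
        scores[lag] = pearson(dust[: len(dust) - lag], chl[lag:])
    return max(scores, key=scores.get)

random.seed(0)
dust = [random.random() for _ in range(200)]
# Chlorophyll responds 3 days after a dust event, plus small noise.
chl = [0.0] * 3 + [d + 0.05 * random.random() for d in dust[:-3]]

print(best_lag(dust, chl))  # 3
```

The real analysis is far more involved (global grids, cloud masking, regional responses), but the core idea is the same: a dust windfall on one day should leave a statistical fingerprint in ocean color a few days later.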

The scientists saw such responses across the globe—but the blooms weren’t always equal. In some areas, increased dust led to a boost in the quantity of phytoplankton; in others, increased dust made the phytoplankton healthier, with brighter chlorophyll. In still others, dust didn’t seem to elicit a response at all.

“Why would this be?” Westberry wonders. “Knowing something about the mineralogy of the dust—what it’s composed of and what nutrients it carries—would be helpful to this end.”

Dust isn’t the only source of food airdropped to phytoplankton. Volcanic eruptions and wildfires both spew out nutrients that enter the ocean. “Volcanic ash is not the same as dust, but conveys nutrients much the same,” Westberry says. Meanwhile, scientists have linked megafires in Australia with phytoplankton in the downwind South Pacific. On the other side of the planet, wildfires in northern forests are associated with blooms around the North Pole.

[Related: In constant darkness, Arctic krill migrate by twilight and the Northern Lights]

“This paper is great, it’s awesome,” says Hamilton. “Then the next question is: Right, now, what about all this other stuff which is also out there? What impact is that having, too?” One future area of study is human activity, which causes climate change and wildfires. We may be responsible for desertification, too, creating more sand for winds to carry away. And our industrial activity—pollution and fossil fuels, for instance—pours out particulates of its own. Scientists think these substances might feed phytoplankton, but they don’t fully know how or if it works across the globe.

Fortunately for scientists, they may soon see a bloom in their own field. In 2024, NASA will launch a satellite called PACE specifically to observe phytoplankton in the ocean.

The post The epic journey of dust in the wind often ends with happy plankton appeared first on Popular Science.


]]>
Ancient Maya masons had a smart way to make plaster stronger https://www.popsci.com/science/ancient-maya-plaster/ Wed, 19 Apr 2023 18:16:42 +0000 https://www.popsci.com/?p=535272
Ancient Maya idol in Copán, Honduras
The idols, pyramids, and dwellings in the ancient Maya city of Copán have lasted more than a thousand years. DEA/V. Giannella/Contributor via Getty Images

Up close, the Mayas' timeless recipe from Copán looks similar to mother-of-pearl.

The post Ancient Maya masons had a smart way to make plaster stronger appeared first on Popular Science.

]]>

An ancient Maya city might seem an unlikely place for people to be experimenting with proprietary chemicals. But scientists think that’s exactly what happened at Copán, an archaeological complex nestled in a valley in the mountainous rainforests of what is now western Honduras.

By historians’ reckoning, Copán’s golden age began in 427 CE, when a king named Yax Kʼukʼ Moʼ came to the valley from the northwest. His dynasty built one of the jewels of the Maya world, but abandoned it by the 10th century, leaving its courts and plazas to the mercy of the jungle. More than 1,000 years later, Copán’s buildings have kept remarkably well, despite baking in the tropical sun and humidity for so long. 

The secret may lie in the plaster the Maya used to coat Copán’s walls and ceilings. New research suggests that sap from the bark of local trees, which Maya craftspeople mixed into their plaster, helped reinforce its structures. Whether by accident or by design, those Maya builders created a material not unlike mother-of-pearl, a natural component of mollusc shells.

“We finally unveiled the secret of ancient Maya masons,” says Carlos Rodríguez Navarro, a mineralogist at the University of Granada in Spain and the paper’s first author. Rodríguez Navarro and his colleagues published their work in the journal Science Advances today.

[Related: Scientists may have solved an old Puebloan mystery by strapping giant logs to their foreheads]

Plaster makers followed a fairly straightforward recipe. Start with carbonate rock, such as limestone; bake it at over 1,000 degrees Fahrenheit; mix in water with the resulting quicklime; then, set the concoction out to react with carbon dioxide from the air. The final product is what builders call lime plaster or lime mortar. 
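
The recipe described above is the classic lime cycle. As a chemical sketch (these are standard reactions, not spelled out in the article):

```latex
\begin{align*}
\mathrm{CaCO_3} &\xrightarrow{\ \text{heat}\ } \mathrm{CaO} + \mathrm{CO_2}
  && \text{(calcining limestone into quicklime)}\\
\mathrm{CaO} + \mathrm{H_2O} &\rightarrow \mathrm{Ca(OH)_2}
  && \text{(slaking the quicklime with water)}\\
\mathrm{Ca(OH)_2} + \mathrm{CO_2} &\rightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
  && \text{(curing in air, back to carbonate)}
\end{align*}
```

The cured plaster thus ends up chemically similar to the limestone it started from, which is why additives like plant sap make such a difference to its mechanical properties.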

Civilizations across the world discovered this process, often independently. For example, Mesoamericans in Mexico and Central America learned how to do it by around 1,100 BCE. While ancient people found it useful for covering surfaces or holding together bricks, this basic lime plaster isn’t especially durable by modern standards.

Ancient Maya pyramid in Copán, Honduras, in aerial photo
Copán, with its temples, squares, terraces and other characteristics, is an excellent representation of Classic Mayan civilization. Xin Yuewei/Xinhua via Getty Images

But, just as a dish might differ from town to town, lime plaster recipes varied from place to place. “Some of them perform better than others,” says Admir Masic, a materials scientist at the Massachusetts Institute of Technology who wasn’t part of the study. Maya lime plaster, experts agree, is one of the best.

Rodríguez Navarro and his colleagues wanted to learn why. They found their first clue when they examined brick-sized plaster chunks from Copán’s walls and floors with X-rays and electron microscopes. Inside some pieces, they found traces of organic materials like carbohydrates. 

That made them curious, Rodríguez Navarro says, because it seemed to confirm past archaeological and written records suggesting that ancient Maya masons mixed plant matter into their plaster. The other standard ingredients (lime and water) wouldn’t account for complex carbon chains.

To follow this lead, the authors decided to make the historic plaster themselves. They consulted living masons and Maya descendants near Copán. The locals referred them to the chukum and jiote trees that grow in the surrounding forests—specifically, the sap that came from the trees’ bark.

Jiote or gumbo-limbo tree in the Florida Everglades
Bursera simaruba, sometimes locally known as the jiote tree. Deposit Photos

The authors tested the sap’s reaction when mixed into the plaster. Not only did it toughen the material, it also made the plaster insoluble in water, which partly explains how Copán survived the local climate so well.

The microscopic structure of the plant-enhanced plaster is similar to nacre or mother-of-pearl: the iridescent substance that some molluscs create to coat their shells. We don’t fully understand how molluscs make nacre, but we know that it consists of crystal plates sandwiching elastic proteins. The combination toughens the sea creatures’ exteriors and reinforces them against weathering from waves.

A close study of the ancient plaster samples and the modern analog revealed that they also had layers of rocky calcite plates and organic sappy material, giving the materials the same kind of resilience as nacre. “They were able to reproduce what living organisms do,” says Rodríguez Navarro. 

“This is really exciting,” says Masic. “It looks like it is improving properties [of regular plaster].”

Now, Rodríguez Navarro and his colleagues are trying to answer another question: Could other civilizations that depended on masonry—from Iberia to Persia to China—have stumbled upon the same secret? We know, for instance, that Chinese lime-plaster-makers mixed in a sticky rice soup for added strength.

Plaster isn’t the only age-old material that scientists have reconstructed. Masic and his colleagues found that ancient Roman concrete has the ability to “self-heal.” More than two millennia ago, builders in the empire may have added quicklime to a rocky aggregate, creating microscopic structures within the material that help fill in pores and cracks when it’s hit by seawater.

[Related: Ancient architecture might be key to creating climate-resilient buildings]

If that property sounds useful, modern engineers think so too. There exists a blossoming field devoted to studying—and recreating—materials of the past. Standing structures from archaeological sites already prove they can withstand the test of time. As a bonus, ancient people tended to work with more sustainable methods and use less fuel than their industrial counterparts.

“The Maya paper…is another great example of this [scientific] approach,” Masic says.

Not that Maya plaster will replace the concrete that’s ubiquitous in the modern world—but scientists say it could have its uses in preserving and upgrading the masonry found in pre-industrial buildings. A touch of plant sap could add centuries to a structure’s lifespan.

The post Ancient Maya masons had a smart way to make plaster stronger appeared first on Popular Science.


]]>
How the Tonga eruption rang Earth ‘like a bell’ https://www.popsci.com/science/tonga-volcano-tsunami-simulation/ Fri, 14 Apr 2023 18:00:00 +0000 https://www.popsci.com/?p=534151
Satellite image of the powerful eruption.
Earth-observing satellites captured the powerful eruption. NASA Earth Observatory

A detailed simulation of underwater shockwaves changes what we know about the Hunga Tonga-Hunga Ha’apai eruption.

The post How the Tonga eruption rang Earth ‘like a bell’ appeared first on Popular Science.

]]>

When the Hunga Tonga–Hunga Haʻapai volcano in Tonga exploded on January 15, 2022—setting off a sonic boom heard as far north as Alaska—scientists instantly knew that they were witnessing history. 

“In the geophysical record, this is the biggest natural explosion ever recorded,” says Ricky Garza-Giron, a geophysicist at the University of California at Santa Cruz. 

It also spawned a tsunami that raced across the Pacific Ocean, killing two people in Peru. Meanwhile, the disaster devastated Tonga and caused four deaths in the archipelago. While tragic, that toll was far lower than experts would have anticipated for an event of this magnitude. So why wasn’t it worse?

Certainly, the country’s disaster preparations deserve much of the credit. But the nature of the eruption itself and how the tsunami it spawned spread across Tonga’s islands, also saved Tonga from a worse outcome, according to research published today in the journal Science Advances. By combining field observations with drone and satellite data, the study team was able to recreate the event through a simulation.

2022 explosion from Hunga-Tonga volcano captured by satellites
Satellites captured the explosive eruption of the Hunga Tonga-Hunga Ha’apai volcano. National Environmental Satellite Data and Information Service

It’s yet another way that scientists have studied how this eruption shook Tonga and the whole world. For a few hours, the volcano’s ash plume bathed the country and its surrounding waters with more lightning than everywhere else on Earth—combined. The eruption spewed enough water vapor into the sky to boost the amount in the stratosphere by around 10 percent. 

[Related: Tonga’s historic volcanic eruption could help predict when tsunamis strike land]

The eruption shot shockwaves into the ground, water, and air. When Garza-Giron and his colleagues measured those waves, they found that the eruption released an order of magnitude more energy than the 1980 eruption of Mount St Helens.

“It literally rang the Earth like a bell,” says Sam Purkis, a geoscientist at the University of Miami in Florida and the Khaled bin Sultan Living Oceans Foundation. Purkis is the first author of the new paper. 

The aim of the simulation is to present a possible course of events. Purkis and his colleagues began by establishing a timeline. Scientists agree that the volcano erupted in a sequence of multiple bursts, but they don’t agree on when or how many. Corroborating witness statements with measurements from tide gauges, the study team suggests a quintet of blasts, each stronger than the last, culminating in a climactic fifth blast measuring 15 megatons, the equivalent of a hydrogen bomb.
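
For scale, the 15-megaton figure can be converted to joules with the standard TNT equivalence (1 ton of TNT is defined as 4.184 × 10⁹ J); a quick sketch:

```python
TON_TNT_JOULES = 4.184e9  # standard energy equivalence of 1 ton of TNT

def megatons_to_joules(megatons):
    """Convert an explosive yield in megatons of TNT to joules."""
    return megatons * 1e6 * TON_TNT_JOULES

# The climactic fifth blast: on the order of 6.3e16 joules.
print(f"{megatons_to_joules(15):.3e}")
```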

Credit: Steven N. Ward Institute of Geophysics and Planetary Physics, University of California Santa Cruz, U.S.A.

Then, the authors simulated what those blasts may have done to the ocean—and how fearsome the waves they spawned were as they battered Tonga’s other islands. The simulation suggests the isle of Tofua, about 55 miles northeast of the eruption, may have fared worst, bearing waves more than 100 feet tall.

But there’s a saving grace: Tofua is uninhabited. The simulation also helps explain why Tonga’s capital and largest city, Nuku’alofa, was able to escape the brunt of the tsunami. It sits just 40 miles south of the eruption, and seemingly experienced much shallower waves. 

[Related: Tonga is fighting multiple disasters after a historic volcanic eruption]

The study team thinks geography is partly responsible. Tofua, a volcanic caldera, sits in deep waters and has sharp, mountainous coasts that offer no protection from an incoming tsunami. Meanwhile, Nuku’alofa is surrounded by shallower waters and a lagoon, giving a tsunami less water to displace. Coral reefs may have also helped protect the city from the tsunami. 

Researchers believed that reefs could cushion tsunamis, Purkis says, but they didn’t have the real-world data to show it. “You don’t have a real-world case study where you have waves which are tens of meters high hitting reefs,” says Purkis.

We do know of volcanic eruptions more violent than Hunga Tonga–Hunga Haʻapai: for instance, Tambora in 1815 (which famously caused a “Year Without a Summer”) and Krakatau in 1883. But those occurred before the 1960s, when geophysicists started deploying the worldwide network of sensors and satellites they use today.

Ultimately, the study authors write that this eruption resulted in a “lucky escape.” It occurred under the most peculiar circumstances: At the time of its eruption, Tonga had shut off its borders due to Covid-19, reducing the number of overseas tourists visiting the islands. Scientists credit this as another reason for the low death toll. But the same closed borders meant scientists had to wait to get data.

Ash cloud from Hunga-Tonga volcano over the Pacific ocean seen from space
Ash over the South Pacific could be seen from space. NASA

That’s part of why this paper came out 15 months after the eruption. Other scientists had been able to simulate the tsunami before, but Purkis and his colleagues bolstered theirs with data from the ground. Not only did this help them reconstruct a timeline, it also helped them to corroborate their simulation with measurements from more than 100 sites along Tonga’s coasts. 

The study team argues that the eruption serves as a “natural laboratory” for the Earth’s activity. Understanding this tsunami can help humans plan how to stay safe from future ones. There are many other volcanoes like Hunga Tonga–Hunga Haʻapai, and an underwater volcano that erupts at the wrong time can devastate coastal communities.

Garza-Giron is excited about the possibility of comparing the new study’s results with prior studies, such as his own, about seismic activity—in addition to other data sources, like the sounds of the ocean—to create a more complete picture of what happened that day.

“It’s not very often that we can see the Earth acting as a whole system, where the atmosphere, the ocean, and the solid earth are definitely interacting,” says Garza-Giron. “That, to me, was one of the most fascinating things about this eruption.”

The post How the Tonga eruption rang Earth ‘like a bell’ appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Quantum computers can’t teleport things—yet https://www.popsci.com/technology/wormhole-teleportation-quantum-computer-simulation/ Fri, 07 Apr 2023 12:28:09 +0000 https://www.popsci.com/?p=532454
Google Sycamore processor for quantum computer hanging from a server room with gold and blue wires
Google's Sycamore quantum computer processor was recently at the center of a hotly debated wormhole simulation. Rocco Ceselin/Google

It's almost impossible to simulate a good wormhole without more qubits.

The post Quantum computers can’t teleport things—yet appeared first on Popular Science.

]]>
Google Sycamore processor for quantum computer hanging from a server room with gold and blue wires
Google's Sycamore quantum computer processor was recently at the center of a hotly debated wormhole simulation. Rocco Ceselin/Google

Last November, a group of physicists claimed they’d simulated a wormhole for the first time inside Google’s Sycamore quantum computer. The researchers tossed information into one batch of simulated particles and said they watched that information emerge in a second, separated batch of circuits. 

It was a bold claim. Wormholes—tunnels through space-time—are a purely theoretical product of gravity that Albert Einstein helped popularize. It would be a remarkable feat to create even a wormhole facsimile with quantum mechanics, an entirely different branch of physics that has long been at odds with gravity. 

And indeed, three months later, a different group of physicists argued that the results could be explained through alternative, more mundane means. In response, the team behind the Sycamore project doubled down on their results.

Their case highlights a tantalizing dilemma. Successfully simulating a wormhole in a quantum computer could be a boon for solving an old physics conundrum, but so far, quantum hardware hasn’t been powerful or reliable enough to do the complex math. The hardware is improving very quickly, though.

[Related: Journey to the center of a quantum computer]

The root of the challenge lies in the difference of mathematical systems. “Classical” computers, such as the device you’re using to read this article, store their data and do their computations with “bits,” typically made from silicon. These bits are binary: They can be either zero or one, nothing else. 

For the vast majority of human tasks, that’s no problem. But binary isn’t ideal for crunching the arcana of quantum mechanics—the bizarre rules that guide the universe at the smallest scales—because the system essentially operates in a completely different form of math.

Enter a quantum computer, which swaps out the silicon bits for “qubits” that adhere to quantum mechanics. A qubit can be zero, one—or, due to quantum trickery, some combination of zero and one. Qubits can make certain calculations far more manageable. In 2019, Google operators used Sycamore’s qubits to complete a task in minutes that they said would have taken a classical computer 10,000 years.
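To make the bit-versus-qubit distinction concrete, here is a toy sketch in plain Python—nothing to do with Sycamore's actual hardware or software—of a single qubit as a pair of complex amplitudes, put into an equal superposition of zero and one:

```python
import math

# A qubit's state is a pair of complex amplitudes (alpha, beta), meaning
# |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
def hadamard(state):
    """Apply a Hadamard gate, which turns |0> into an equal superposition."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def measure_probs(state):
    """Probabilities of reading out 0 or 1 when the qubit is measured."""
    alpha, beta = state
    return (abs(alpha) ** 2, abs(beta) ** 2)

qubit = (1 + 0j, 0 + 0j)   # a classical-looking start: definitely |0>
qubit = hadamard(qubit)    # now "some combination of zero and one"
p0, p1 = measure_probs(qubit)
print(p0, p1)              # each ~0.5: a 50/50 coin until measured
```

A classical bit would need two numbers here only in the trivial sense of storing probabilities; the quantum amplitudes also carry signs and phases, which is what lets qubits interfere and makes certain calculations far more manageable.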

There are several ways of simulating wormholes with equations that a computer can solve. The 2022 paper’s researchers used something called the Sachdev–Ye–Kitaev (SYK) model. A classical computer can crunch the SYK model, but only very inefficiently. Not only does the model involve particles interacting at a distance, it also features a good deal of randomness, both of which are tricky for classical computers to process.

Even the wormhole researchers greatly simplified the SYK model for their experiment. “The simulation they did, actually, is very easy to do classically,” says Hrant Gharibyan, a physicist at Caltech, who wasn’t involved in the project. “I can do it in my laptop.”

But simplifying the model opens up new questions. It makes it harder for physicists to confirm that they’ve actually created a wormhole through quantum math, and it gives them less information about how quantum mechanics interacts with gravity.

Critics have pointed out that the Sycamore experiment didn’t use enough qubits. While the chips in your phone or computer might have billions or trillions of bits, quantum computers are far, far smaller. The wormhole simulation, in particular, used nine.

While the team certainly didn’t need billions of qubits, according to experts, they should have used more than nine. “With a nine-qubit experiment, you’re not going to learn anything whatsoever that you didn’t already know from classically simulating the experiment,” says Scott Aaronson, a computer scientist at the University of Texas at Austin, who wasn’t an author on the paper.

If size is the problem, then current trends give physicists reason to be optimistic that they can simulate a proper wormhole in a quantum computer. Only a decade ago, even getting one qubit to function was an impressive feat. In 2016, the first quantum computer with cloud access had five. Now, quantum computers are in the dozens of qubits. Google Sycamore has a maximum of 53. IBM is planning a line of quantum computers that will surpass 1,000 qubits by the mid-2020s.

Additionally, today’s qubits are extremely fragile. Even small blips of noise or tiny temperature fluctuations—qubits need to be kept at frigid temperatures, just barely above absolute zero—may cause a qubit to decohere, snapping it out of the quantum world and back into a mundane classical state. (Newer quantum computers focus on trying to make qubits “cleaner.”)

Some quantum computers use individual particles; others use atomic nuclei. Google’s Sycamore, meanwhile, uses loops of superconducting wire. It all shows that qubits are in their VHS-versus-Betamax era: There are multiple competitors, and it isn’t clear which qubit—if any—will become the equivalent to the ubiquitous classical silicon chip.

“You need to make bigger quantum computers with cleaner qubits,” says Gharibyan, “and that’s when real quantum computing power will come.”

[Related: Scientists eye lab-grown brains to replace silicon-based computer chips]

For many physicists, that’s when great intangible rewards come in. Quantum physics, which guides the universe at its smallest scales, doesn’t have a complete explanation for gravity, which guides the universe at its largest. Showing a quantum wormhole—with qubits effectively teleporting—could bridge that gap.

So, the Sycamore researchers aren’t the only physicists poring over this problem. Earlier in 2022, a third group of researchers published a paper listing signs of teleportation they’d detected in quantum computers. They didn’t send a qubit through a simulated wormhole—they only sent a classical bit—but it was still a promising step. Better quantum gravity experiments, such as simulating the full SYK model, are about “purely extending our ability to build processors,” Gharibyan explains.

Aaronson is skeptical that a wormhole will ever be modeled in a meaningful form, even in the event that quantum computers do reach thousands of qubits. “There’s at least a chance of learning something relevant to quantum gravity that we didn’t know how to calculate otherwise,” he says. “Even then, I’ve struggled to get the experts to tell me what that thing is.”

The post Quantum computers can’t teleport things—yet appeared first on Popular Science.


]]>
Dying plants are ‘screaming’ at you https://www.popsci.com/science/do-plants-makes-sounds-stressed/ Thu, 30 Mar 2023 18:00:00 +0000 https://www.popsci.com/?p=524200
Pincushion cactus with pink flowers on a sunny windowsill
Under that prickly exterior, even a cactus has feelings. Deposit Photos

In the future, farmers might use ultrasound to listen to stressed plants vent.

The post Dying plants are ‘screaming’ at you appeared first on Popular Science.

]]>
Pincushion cactus with pink flowers on a sunny windowsill
Under that prickly exterior, even a cactus has feelings. Deposit Photos

While plants can’t chat like people, they don’t just sit in restful silence. Under certain conditions—such as a lack of water or physical damage—plants vibrate and emit sound waves. Typically, those waves are too high-pitched for the human ear and go unnoticed.

But biologists can now hear those sound waves from a distance. Lilach Hadany, a biologist at Tel Aviv University in Israel, and her colleagues even managed to record them. They published their work in the journal Cell today.

Hadany and colleagues’ work is part of a niche but budding field called “plant bioacoustics.” While scientists know plants aren’t just inert decorations in the ecological backdrop—they interact with their surroundings, like releasing chemicals as a defense mechanism—researchers don’t exactly know how plants respond to and produce sounds. Not only could solving this mystery give farmers a new way of tending to their plants, but it might also unlock something wondrous: evidence that plants sense their world in ways we never realized.

It’s established that “the sounds emitted by plants are much more prominent after some kind of stress,” says František Baluška, a plant bioacoustics researcher at Bonn University in Germany who wasn’t a part of the new study. But past plant bioacoustics experiments had to listen to plants at a very close distance to measure vibrations. Meanwhile, Hadany and her colleagues managed to pick up plant sounds from across a room.

[Related on PopSci+: Biohacked cyborg plants may help prevent environmental disaster]

The study team first tested out their ideas on tomato and tobacco plants. Some plants were watered regularly, while others were neglected for days—a process that simulated drought-like conditions. Finally, the most unfortunate plants were severed from their roots.

Plants under idyllic conditions seemed to thrive. But the damaged and dehydrated plants did something peculiar: They emitted clicking sounds once every few minutes. 

Of course, if you were to walk through a drought-stricken tomato grove with a machete, chopping every vine you see, you wouldn’t hear a chorus of distressed plants. The plants emit sounds in ultrasound: frequencies too high for the human ear to hear. That’s part of why researchers have only now perceived these clicks.

“Not everybody has the equipment to do ultrasound [or] has the mind to look into these broader frequencies,” says ecologist Daniel Robert, a professor at the University of Bristol in the United Kingdom who wasn’t an author of the paper.

Three tomato plants in a greenhouse with a microphone in front of them
Three tomato plants’ sounds were recorded in a greenhouse. Ohad Lewin-Epstein

The researchers were able to record similar sounds in other plants deprived of water, including wheat, maize, wine grapes, pincushion cactus, and henbit (a common spring weed in the Northern Hemisphere). 

Biologists think the clicks might come from xylem, the “piping” that transports water and nutrients through a plant. Pressure differences cause air bubbles to enter the fluid. The bubbles grow until they pop—and the burst is the noise picked up by scientists. This process is called cavitation. 

Most people who study cavitation aren’t biologists; they’re typically physicists and engineers. For them, cavitation is often a nuisance. Bursting bubbles can damage pumps, propellers, hydraulic turbines, and other devices that do their work underwater. But, on the other hand, we can put cavitation to work for us: for instance, in ultrasound jewelry cleaners.

Although it’s known cavitation occurs in plants under certain conditions, like when they’re dehydrated, scientists aren’t sure that this process can entirely explain the plant sounds they hear. “There might not be only one mechanism,” says Robert.

The authors speculate that their work could eventually help plant growers, who could listen from a distance and monitor the plants in their greenhouse. To support this potential future, Hadany and her colleagues trained a machine learning model to break down the sound waves and discern what stress caused a particular sound. Instead of being surprised by wilted greens, this type of tech could give horticulturists a heads-up.

[Related: How to water your plants less but still keep them happy]

Robert suspects that—unlike people—animals might already be able to hear plant sounds. Insects searching for landing spots or places to lay their eggs, for instance, might pick and choose plants by listening in and gauging each one’s health.

If there is an observable quality like sound (or light or electric fields) in the wild, then some organisms will evolve to use it, explains Robert. “This is why we have ears,” he says.

If that’s the case, perhaps it can work the other way—plants may also respond to sounds. Scientists like Baluška have already shown that plants can “hear” external sounds. For example, research suggests some leaf trichomes react to vibrations from worms chewing on them. And in the laboratory, researchers have seen some plants’ root tips grow through the soil in the direction of incoming sounds.

If so, some biologists think plants may have more sophisticated “senses” than we perhaps believed.

“Plants definitely must be aware of what is around because they must react every second because the environment is changing all the time,” says Baluška. “They must be able to, somehow, understand the environment.”

The post Dying plants are ‘screaming’ at you appeared first on Popular Science.


]]>
Room-temperature superconductors could zap us into the future https://www.popsci.com/science/room-temperature-superconductor/ Sat, 25 Mar 2023 16:00:00 +0000 https://www.popsci.com/?p=522900
Superconductor cuprate rings lit up in blue and green on a black grid
In this image, the superconducting Cooper-pair cuprate is superimposed on a dashed pattern that indicates the static positions of electrons caught in a quantum "traffic jam" at higher energy. US Department of Energy

Superconductors convey powerful currents and intense magnetic fields. But right now, they can only be built at searing temperatures and crushing pressures.

The post Room-temperature superconductors could zap us into the future appeared first on Popular Science.

]]>
Superconductor cuprate rings lit up in blue and green on a black grid
In this image, the superconducting Cooper-pair cuprate is superimposed on a dashed pattern that indicates the static positions of electrons caught in a quantum "traffic jam" at higher energy. US Department of Energy

In the future, wires might cross underneath oceans to effortlessly deliver electricity from one continent to another. Those cables would carry currents from giant wind turbines or power the magnets of levitating high-speed trains.

All these technologies rely on a long-sought wonder of the physics world: superconductivity, a physical phenomenon that lets a material carry an electric current without losing any juice.

But superconductivity has only functioned at frigid temperatures that are far too cold for most devices. To make it more useful, scientists have to coax the same behavior out of materials at regular temperatures. And even though physicists have known about superconductivity since 1911, a room-temperature superconductor still evades them, like a mirage in the desert.

What is a superconductor?

Superconducting materials have a point called the “critical temperature.” Cool the material below that temperature, and electrical resistivity all but vanishes, making it extra easy for electrons to move through. To put it another way, an electric current running through a closed loop of superconducting wire could circulate forever. 

Today, anywhere from 8 to 15 percent of mains electricity is lost between the generator and the consumer because the electrical resistivity in standard wires naturally wicks some of it away as heat. Superconducting wires could eliminate all of that waste.
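The waste comes straight from the resistive-loss formula, P_loss = I²R, where the current I is the delivered power divided by the line voltage. A rough sketch with made-up round numbers—the power, voltage, and resistance below are hypothetical, not figures from the article:

```python
# Resistive loss in a power line: P_loss = I^2 * R, with I = P / V.
# All numbers here are illustrative round figures, not real grid data.
def loss_fraction(power_w, volts, ohms):
    current = power_w / volts        # I = P / V
    lost = current ** 2 * ohms       # P_loss = I^2 * R
    return lost / power_w

# 100 MW sent at 400 kV through a line with 40 ohms of total resistance:
frac = loss_fraction(100e6, 400e3, 40.0)
print(f"{frac:.1%} lost as heat")    # 2.5% lost as heat

# Below its critical temperature, a superconducting line has R = 0,
# so the same formula gives zero resistive loss:
print(f"{loss_fraction(100e6, 400e3, 0.0):.1%} lost as heat")
```

The formula also shows why grids already transmit at hundreds of kilovolts: higher voltage means lower current, and the loss scales with the current squared. Superconductors would take the remaining loss all the way to zero.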

[Related: This one-way superconductor could be a step toward eternal electricity]

There’s another upside, too. When electricity flows through a coiled wire, it produces a magnetic field; superconducting wires intensify that magnetism. Already, superconducting magnets power MRI machines, help particle accelerators guide their quarry around a loop, shape plasma in fusion reactors, and push maglev trains like Japan’s under-construction Chūō Shinkansen.

Turning up the temperature

While superconductivity is a wondrous ability, physics nerfs it with the cold caveat. Most known materials’ critical temperatures are barely above absolute zero (-459 degrees Fahrenheit). Aluminum, for instance, comes in at -457 degrees Fahrenheit; mercury at -452 degrees Fahrenheit; and the ductile metal niobium at a balmy -443 degrees Fahrenheit. Chilling anything to temperatures that frigid is tedious and impractical. 
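For reference, superconductivity research usually quotes critical temperatures in kelvin; the Fahrenheit figures above follow from the standard conversion, which a few lines of Python make explicit (the kelvin values below are approximate, rounded to one decimal):

```python
# Kelvin-to-Fahrenheit conversion: F = K * 9/5 - 459.67.
def kelvin_to_f(kelvin):
    return kelvin * 9 / 5 - 459.67

# Approximate critical temperatures, in kelvin, for the metals named above:
for name, tc_kelvin in [("aluminum", 1.2), ("mercury", 4.2), ("niobium", 9.3)]:
    print(f"{name}: {tc_kelvin} K is about {kelvin_to_f(tc_kelvin):.1f} degrees Fahrenheit")
```

Absolute zero (0 K) comes out to -459.67 degrees Fahrenheit, which is why every critical temperature in this list sits within a couple dozen degrees of it.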

Scientists pushed critical temperatures higher—in a limited capacity—by experimenting with exotic materials like cuprates, a type of ceramic that contains copper and oxygen. In 1986, two IBM researchers found a cuprate that superconducted at -396 degrees Fahrenheit, a breakthrough that won them the Nobel Prize in Physics. Soon enough, others in the field pushed cuprate superconductors past -321 degrees Fahrenheit, the boiling point of liquid nitrogen—a far more accessible coolant than the liquid hydrogen or helium they’d otherwise need. 

“That was a very exciting time,” says Richard Greene, a physicist at the University of Maryland. “People were thinking, ‘Well, we might be able to get up to room temperature.’”

Now, more than 30 years later, the search for a room-temperature superconductor continues. Equipped with algorithms that can predict what a material’s properties will look like, many researchers feel that they’re closer than ever. But some of their ideas have been controversial.

The replication dilemma

One way the field is making strides is by turning the attention away from cuprates to hydrides, or materials with negatively charged hydrogen atoms. In 2015, researchers in Mainz, Germany, set a new record with a sulfur hydride that superconducted at -94 degrees Fahrenheit. Some of them then quickly broke their own record with a hydride of the rare-earth element lanthanum, pushing the mercury up to around -9 degrees Fahrenheit—about the temperature of a home freezer.

But again, there’s a catch. Critical temperatures shift when the surrounding pressure changes, and hydride superconductors, it seems, require rather inhuman pressures. The lanthanum hydride only achieved superconductivity at pressures above 150 gigapascals—roughly equivalent to conditions in the Earth’s core, and far too high for any practical purpose in the surface world.

[Related: How the small, mighty transistor changed the world]

So imagine the surprise when mechanical engineers at the University of Rochester in upstate New York presented a hydride made from another rare-earth element, lutetium. According to their results, the lutetium hydride superconducts at around 70 degrees Fahrenheit and 1 gigapascal. That’s still 10,000 times Earth’s air pressure at sea level, but low enough to be used for industrial tools.

“It is not a high pressure,” says Eva Zurek, a theoretical chemist at the University at Buffalo. “If it can be replicated, [this method] could be very significant.”

Scientists aren’t cheering just yet, however—they’ve seen this kind of an attempt before. In 2020, the same research group claimed they’d found room-temperature superconductivity in a hydride of carbon and sulfur. After the initial fanfare, many of their peers pointed out that they’d mishandled their data and that their work couldn’t be replicated. Eventually, the paper was retracted.

Now, they’re facing the same questions with their lutetium superconductor. “It’s really got to be verified,” says Greene. The early signs are inauspicious: A team from Nanjing University in China recently tried to replicate the experiment, without success.

“Many groups should be able to reproduce this work,” Greene adds. “I think we’ll know very quickly whether this is correct or not.”

But if the new hydride does mark the first room-temperature superconductor—what next? Will engineers start stringing power lines across the planet tomorrow? Not quite. First, they have to understand how this new material behaves under different temperatures and other conditions, and what it looks like at smaller scales.

“We don’t know what the structure is yet. In my opinion, it’s going to be quite different from a high-pressure hydride,” says Zurek. 

If the superconductor is viable, engineers will have to learn how to make it for everyday uses. But if they succeed, the result could be a gift for world-changing technologies.

The post Room-temperature superconductors could zap us into the future appeared first on Popular Science.


]]>
Venus could still be spewing lava, and scientists are hellbent on proving it https://www.popsci.com/science/venus-volcano-magellan-evidence/ Fri, 17 Mar 2023 18:10:41 +0000 https://www.popsci.com/?p=520701
Venus volcano in NASA Magellan spacecraft radar image
A Magellan radar image of Maat Mons on Venus in 1991. Comparison of further images from the space probe showed it changing, potentially due to volcanic activity. NASA/JPL-Caltech

Does Venus have active volcanoes? Radar images from the old Magellan mission add new fuel to the debate.

The post Venus could still be spewing lava, and scientists are hellbent on proving it appeared first on Popular Science.

]]>
Venus volcano in NASA Magellan spacecraft radar image
A Magellan radar image of Maat Mons on Venus in 1991. Comparison of further images from the space probe showed it changing, potentially due to volcanic activity. NASA/JPL-Caltech

Venus is a searing inferno. Its surface temperatures are hot enough to melt lead. Its surface pressures, about 90 times Earth’s at sea level, are enough to crush even the hardiest of metal objects. Sulfuric acid rain falls from noxious clouds in its atmosphere that choke out even the slightest glimpse of the sky.

In a typical infernal hellscape, you’d expect to find lava—but that feature seems to be missing from Venus today. Astronomers are sure that our twin planet had volcanic activity in the past, but they’ve never agreed on whether volcanoes still erupt and reshape the Venusian surface as they do Earth’s.

Now, two planetary scientists may have found the first evidence of an active Venusian volcano hiding in 30-year-old radar scans from NASA’s Magellan spacecraft. Robert Herrick from the University of Alaska Fairbanks and Scott Hensley from NASA’s Jet Propulsion Laboratory published their breakthrough in the journal Science on March 15.  The new analysis has excited planetary scientists, many of whom are now waiting for future missions to carry on the volcano hunt.

“This [study] is the first-ever reported evidence for active volcanism on another planet,” says Darby Dyar, an astronomer at Mount Holyoke College in Massachusetts, who wasn’t an author on the paper.

The dense Venusian clouds would hide any volcanic activity from a spacecraft in orbit. Specially honed instruments can certainly delve under the clouds, but the planet’s capricious weather tends to make probes’ lives too short to fully explore the grounds. Of the Soviet Venera landers of the 1960s, 1970s, and 1980s, none survived longer than around two hours.

[Related: The hellish Venus surface in 5 vintage photos]

Magellan changed that. Launched in 1989 and equipped with the finest radar that the technology of its time could offer, Magellan mapped much of Venus to the resolution of a city block. In the probe’s charts, scientists found evidence of giant volcanoes, past lava flows, and lava-built domes—but no smoking gun (or smoking caldera) of live volcanic activity.

Before NASA crashed it into the Venusian atmosphere, Magellan made three different passes at mapping the planet between 1990 and 1993, covering a different chunk each time. In the process, the probe scanned about 40 percent of the planet more than once. If the Venusian terrain had shifted in the months between passes, scientists today might find it by comparing different radar images and spotting the difference.

But researchers in the early 1990s didn’t have the sophisticated software and image-analysis tools that their counterparts have today. If they wanted to compare Magellan’s maps then, they’d have had to do it manually, comparing printouts with the naked eye. So, Herrick and Hensley revisited Magellan’s data with more advanced computers. They found that, in addition to the images’ blurriness, the probe often scanned the same feature from different angles, making it difficult to tell actual changes apart from, say, shadows.

“To detect changes on the surface, we need a pretty big event, something that disturbs roughly more than a square kilometer of area,” Hensley says.
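Stripped of those complications, the underlying change-detection idea is simple to sketch. The toy below is a simplification, not the authors' actual pipeline—which also has to contend with blur, viewing angles, and radar speckle—but it shows the core step of flagging pixels in two co-registered maps that differ by more than a noise threshold:

```python
# Toy pass-to-pass change detection on two co-registered radar maps,
# represented here as small grids of backscatter values.
def changed_pixels(pass1, pass2, threshold):
    """Return (row, col) coordinates where the two maps disagree."""
    flagged = []
    for r, (row1, row2) in enumerate(zip(pass1, pass2)):
        for c, (a, b) in enumerate(zip(row1, row2)):
            if abs(a - b) > threshold:
                flagged.append((r, c))
    return flagged

before = [[10, 10, 11],
          [10, 12, 10],
          [11, 10, 10]]
after_ = [[10, 10, 11],
          [10, 25, 24],   # a bright new feature appears between passes
          [11, 10, 10]]

print(changed_pixels(before, after_, threshold=5))  # [(1, 1), (1, 2)]
```

The threshold is what encodes Hensley's point: a change has to stand well above the noise—in the real data, a disturbance of roughly a square kilometer—before it counts as a detection rather than an imaging artifact.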

Eventually, Herrick and Hensley found their smoking gun: a vent, just more than a mile wide, on a previously known mountain named Maat Mons. Between a Magellan radar image taken in February 1991 and another taken about eight months later, this vent appeared to have changed shape, with lava oozing out onto the nearby slopes.

To double-check, Herrick and Hensley constructed simulations of volcanic vents based on the shape of the feature that Magellan had spotted. Their results matched what Magellan saw: a potential volcano in the process of burping lava out onto Venus’s surface.

There is other evidence that backs up their radical results. In 2012, ESA’s Venus Express mission spotted a spike in sulfur dioxide in the planet’s atmosphere, which some scientists ascribe to volcanic eruptions. In 2020, geologists identified 37 spots where magma plumes from the Venusian mantle might still touch its surface. But the evidence has so far been circumstantial, and astronomers have never actually seen a volcano in action on the “Morning Star.”

Fortunately for Venus enthusiasts, there might soon be heaps of fresh data to play with. The VERITAS space probe, part of NASA’s follow-up to Magellan, was originally scheduled for a 2028 launch, but is now pushed back to the early 2030s due to funding issues. When it does finally reach Venus, volcanoes will be near the top of its sightseeing list.

“We’ll be looking for [volcanoes] in two different ways,” says Dyar, who is also deputy principal investigator on VERITAS. The spacecraft will conduct multiple flybys to map the entire Venusian surface again, with radar that has 100 times the resolution of Magellan’s instruments (like zooming in from a city block to a single building). If there are volcanoes erupting across the planet, VERITAS might help scientists spot the changes that they etch into the landscape.

[Related: These scientists spent decades pushing NASA to go back to Venus]

Additionally, VERITAS will examine the Venusian atmosphere in search of gases, which scientists call volatiles, that volcanoes belch out as they erupt. Water vapor, for example, is one of the most prominent volcanic volatiles. The phosphines that elicited whispers about life on Venus in 2020 also fall into this category of molecules. (Indeed, some experts tried to explain their presence via volcanoes).

VERITAS isn’t the only mission set to arrive at Earth’s infernal twin in the next decade. The European Space Agency’s EnVision—scheduled for a 2031 launch—will map the planet just like VERITAS, only with even higher resolution.

VERITAS and EnVision “will have far, far better capability to see changes with time in a variety of ways during their missions,” says Herrick, who is also involved with both missions. Not only will the two produce multiple higher-resolution scans for scientists to compare against each other, the results can also be corroborated with Magellan’s antique maps, which will be 40 years in the past by the time they arrive.

“When we get high-resolution imagery,” Dyar says, “I think that we’re going to find active volcanism all over Venus.”

The post Venus could still be spewing lava, and scientists are hellbent on proving it appeared first on Popular Science.


]]>
DART left an asteroid crime scene. This mission is on deck to investigate it. https://www.popsci.com/science/hera-asteroid-deflection-space-mission/ Tue, 14 Mar 2023 10:00:00 +0000 https://www.popsci.com/?p=519198
Hera asteroid space probe radio antenna in ESA lab
This is the antenna that will transmit back the first close-up images of the distant Dimorphos asteroid since its orbit was shifted by a collision with NASA’s DART spacecraft. Last December, the High Gain Antenna of ESA’s Hera mission went through a week-long test campaign at the Compact Antenna Test Range in the Netherlands. ESA-SJM Photography

Hera will retrace history's first asteroid-deflection test and piece together the crash from every angle.

The post DART left an asteroid crime scene. This mission is on deck to investigate it. appeared first on Popular Science.

]]>
Hera asteroid space probe radio antenna in ESA lab
This is the antenna that will transmit back the first close-up images of the distant Dimorphos asteroid since its orbit was shifted by a collision with NASA’s DART spacecraft. Last December, the High Gain Antenna of ESA’s Hera mission went through a week-long test campaign at the Compact Antenna Test Range in the Netherlands. ESA-SJM Photography

What happens when a dart hits the bullseye? In a game among amateurs, it sends everybody home. But professional players will want to analyze the shot in preparation to fire again. 

In this case, that dart is NASA’s Double Asteroid Redirection Test (DART), the spacecraft that crashed last November into the asteroid Dimorphos in hopes of redirecting its course. On March 2, a quintet of papers in the journal Nature confirmed what DART’s controllers had already guessed: The mission’s impact was a smashing success.

But DART won’t be the last human mission to visit Dimorphos or the larger asteroid it orbits, Didymos. The European Space Agency’s Hera will soon follow in DART’s trail to appraise its aftermath—in far more detail than scientists, with their combination of instruments from Earth and the DART mission’s own sensors, have managed so far.

Now scheduled for an October 2024 departure, Hera is slated to lift off from Cape Canaveral on the wings of a SpaceX Falcon 9 rocket. According to the mission’s current itinerary, it will arrive at Dimorphos and Didymos in late 2026 for around six months of sightseeing. Then, if conditions allow, Hera—a car-sized probe outfitted with a large radio antenna and a pair of solar panels—will try to make a full landing on Didymos.

Hera will also carry two passengers: a pair of CubeSats named Milani and Juventas. Milani will study the asteroids’ exteriors; Juventas will probe the asteroids’ interiors. With three spacecraft, scientists can get three different views of the crash site on Dimorphos. The mission’s chief purpose is to follow in DART’s shadow and understand what damage humanity’s first asteroid strike actually left on its target.

[Related: NASA has major plans for asteroids. Could Psyche’s delay change them?]

Between DART’s now-destroyed cameras, its companion LICIACube, and telescopes watching from Earth’s ground and orbit, we already know quite a bit about the planetary defense test. We can see Dimorphos’ orbit, both before and after DART’s impact; we know that DART altered it, cutting Dimorphos closer to Didymos and shortening its orbital period; and we can home in on where on the asteroid’s surface DART struck, down to a patch the size of a vending machine.

But there’s still a lot we don’t know—most critically, Dimorphos’s mass before and after it was struck. Scientists can’t calculate that measurement from Earth, but Hera’s instruments will be able to. Without knowing the mass, we have no way of knowing why, precisely, DART’s impact pushed Dimorphos into its new orbit.

“We want to determine, accurately, how much momentum was transferred to Dimorphos,” says Patrick Michel, an astronomer at the Côte d’Azur Observatory in France and the Hera mission’s principal investigator.

Hera might also tell us what cosmetic scars DART left from the crash. It’s possible that the impactor simply left a crater, or that it violently shook up the asteroid, rearranging a large chunk of its exterior. “A lot of us are wondering how much of the surface we’ll even be able to recognize,” says Andy Cheng, an astronomer at the Johns Hopkins Applied Physics Laboratory who worked on DART.

The problem is that, until humans send an observer to the asteroid, we don’t know what the surface holds in wait for us, Michel says. What the asteroid’s exterior looks like now depends on what Dimorphos’s interior looked like when DART struck it. If the spacecraft dramatically reshaped the asteroid, it’s a sign that the target’s insides were weakly held together. And right now, “we have no clue, really, what’s happening inside,” says Terik Daly, an astronomer at the Johns Hopkins Applied Physics Laboratory and DART team member. Hera, along with the radar-packing Juventas, will try to scan below the rocky surface.

Hera space probe flying by Dimorphos asteroid in animation
Hera will be equipped with automated guidance, navigation and control to allow it to safely navigate the double-asteroid system, akin in function to a self-driving car. Its desk-sized body will carry instruments including an optical Asteroid Framing Camera, supplemented by thermal and spectral imagers, as well as a laser altimeter for surface mapping. ESA-Science Office

Of course, Hera won’t be able to observe everything. Many astronomers have focused on Dimorphos’s ejecta—the material kicked up from the asteroid upon DART’s impact—to understand how exactly the strike nudged the asteroid. By the time of Hera’s arrival, at least four years after the crash, most of that ejecta will have long dissipated.

Still, knowing more about the asteroid’s innards can help astronomers understand where that ejecta came from—and what would happen if we crossed paths with a space rock again. “For example, in the future, if we had to use this technique to divert some asteroid, then we could do a more precise prediction [to hit it],” says Jian-Yang Li, an astronomer at Pennsylvania State University who worked on DART.

There are also other reasons why Dimorphos might not look the same way in 2026. Just as the moon pulls and pushes the tides around Earth’s oceans, Didymos’ gravity might play with its smaller companion. Scientists think it’s possible that those forces might cause Dimorphos to wobble in its orbit. But again, they won’t be able to observe any of this until Hera actually gets up close.

As the mission progresses, they might at least be able to set a baseline. Michel says that astronomers on Earth can simulate many of Dimorphos’s possible future orbits on their computers. “It’s not really a problem that we arrive four years later,” says Michel. “We have the tools to understand if something evolved.”

[Related: This speedy space rock is the fastest asteroid in our solar system]

The data from DART’s impact and Hera’s eyes will certainly help astronomers understand asteroids in their pre- and post-collision states. But they’ll also help us guard against the specter of death from above. Humans have long feared destruction from space like the kind that doomed the dinosaurs, and with DART, planetary defense—the science of stemming that fear—took its first step into real-world strategies.

It’s hard to say when we’ll need the ability to deflect a space rock; astronomers’ projections show that no known object larger than a kilometer is on course to strike Earth in the next century. But, according to Michel, space agencies have yet to identify an estimated 60 percent of the nearby objects that are at least 40 meters long—large enough to devastate a region or a small country.

“We know that, eventually, such an impact [with Earth] will happen again,” Michel says, “and we cannot improvise.”

The post DART left an asteroid crime scene. This mission is on deck to investigate it. appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
People may have been riding horses as early as 5,000 years ago https://www.popsci.com/science/first-horse-rider-5000-years-ago/ Fri, 03 Mar 2023 19:00:00 +0000 https://www.popsci.com/?p=516768
The skeleton of a possible Yamnaya horse rider.
Archeologists discovered this horse rider in Malomirovo, Bulgaria, buried in the typical Yamnaya custom. Michał Podsiadło

Skeletal remains suggest the Yamnaya people of Eastern Europe sat astride horses.

The post People may have been riding horses as early as 5,000 years ago appeared first on Popular Science.

]]>

Who was the first human to ride a horse? That first rider’s distant descendants might have crossed continents and built empires on horseback. But when and where horsemanship began is not a straightforward question to answer. Horse-riding began in a time from which few equine remains survive.

As it happens, we don’t need to find the horse to find signs of people riding it. We could uncover clues from the remains of the human rider instead. A life on horseback warps human bones, and thanks to such skeletal signs, archaeologists might have found the earliest evidence of human horse-riding yet—dating from as early as 3000 BCE, as they report in a study published in the journal Science Advances today.

“You have not only the horse as a mount, but you have also the rider,” says Volker Heyd, an archaeologist at the University of Helsinki in Finland and one of the study authors. “And we were looking into the human beings.”

The skeletons in question were once people of the Yamnaya culture, living in what is now southeastern Europe, some 5,000 years ago. But because they died long before written history, there aren’t many signs of “culture” as most of us would imagine it—they might have been one ethnic group, or many. Instead, archaeologists have found evidence that the Yamnaya built similar objects and practiced similar ways of life: These people roamed the steppes, herded cattle, and drove wheeled wagons. Some scholars believe they spoke a distant antecedent of today’s Indo-European languages. Perhaps most impressively, they buried the dead beneath towering mounds that we call kurgans.

[Related: Scientists are trying to figure out where the heck horses came from]

We know that the Yamnaya had horses, but we don’t know if they merely herded them for milk and meat, or if they actually rode them. Any riding equipment—bridles and saddles—would have been fashioned from organic materials that have probably long since decomposed.

But horses are only one half of horse-riding. Archaeologists, perhaps, could find the other half within Yamnaya kurgans—in human bones that can tell their own stories. 

That’s because “primates like us humans are not made for sitting on horseback,” says Birgit Bühler, an archaeologist at the University of Vienna in Austria. “The horse is not made to carry us.” Without a saddle or stirrups—which the earliest riders probably didn’t have—staying balanced requires repeatedly moving the lower body and thighs. With all that biological material in motion, horse-riding, just like any other mechanical movement, would leave a mark on human bones.

Over decades of repeated stress on horseback, the human skeleton changes in response. Bone tissue in the pelvis and femurs might thicken and densify. Hip bones might chafe against each other and build up calcium. Vertebrae in the spine might warp and deform. And horses might bite, kick, step on, or throw off their riders—all of which can break bones.

[Related: Ancient climate change may have dragged the wild horses away]

Researchers have dubbed these symptoms “horsemanship syndrome” or “horse-riding syndrome.” Other activities might cause individual changes, but the combination of these markers may be a telltale sign of a horseback life. Bühler, for instance, has used this method to study the Avars: horse-riding nomads from the Asian steppes who rode west to rule swathes of central and eastern Europe in the early Middle Ages.

Studying bones from 1,500 years ago is already difficult; studying bones that are three times older is even more so. But this study’s authors came across multiple markers of horseback riding in one 4,500-year-old skeleton from Strejnicu, Romania.

“It was kind of surprising to all of us to find that,” says Martin Trautmann, an archaeologist at the University of Helsinki, and another of the study authors.

To further confirm whether the Yamnaya rode horses, the authors examined every bone from this group that they could get their hands on, dug up from sites across Bulgaria, Czechia, Hungary, and Romania. Some remains had been excavated decades ago. 

Just because they had bones doesn’t mean they had every bone. “On average, about half of the skeleton is preserved, and the half we have is sometimes heavily eroded,” says Trautmann. The authors evaluated skeletons from 24 ancient people against a list of six criteria that matched the first Strejnicu skeleton. They diagnosed four additional sets of bones—dating between 3021 and 2501 BCE—that fit at least four of horsemanship syndrome’s criteria.

We know that humans first domesticated the horse around 4000 BCE; we also know that the first chariots arose around 2000 BCE. If these skeletons are evidence of horse-riders, then they could provide a key “missing link” between the two.

An Egyptian graffito of goddess Astarte on horseback from the Nineteenth Dynasty of Egypt.
A 3,500-year-old depiction of the Egyptian goddess Astarte on horseback. S. Steiß, Berlin

“It doesn’t come that unexpected if you see the wider context of Yamnaya,” says Heyd. Archaeologists believe that the Yamnaya culture spread rapidly across the European steppe within just a few decades—in archaeologists’ time, virtually an instant. “You wonder how this is possible without horseback riding,” he says.

It isn’t definitive proof, however; time’s ravages, by erasing bones, have put certainty out of reach. Bühler, who wasn’t involved with the work but called it a “fantastic paper,” points out that the authors missed one of the key criteria of other horsemanship syndrome research—the hip socket stretching, vertically, into an oval—because they just didn’t have the hip sockets to properly measure.

“It’s not their fault, because the material is not there,” says Bühler. Future finds may give archaeologists the full skeletons they need, she says. Until then, she says she is “cautious” about interpretations that these people rode horses.

The authors may just yet find those bones—their research into the Yamnaya is far from over. 

The post People may have been riding horses as early as 5,000 years ago appeared first on Popular Science.


]]>
We might soon lose a full second of our lives https://www.popsci.com/science/negative-leap-second/ Mon, 20 Feb 2023 11:00:00 +0000 https://www.popsci.com/?p=513420
Surrealist digital painting inspired by Dali of flying clocks and chess pieces upside down over sand and the Earth. The motifs symbolize the leap second.
Some tech companies think a negative leap second would turn the world upside down. But it probably won't be that bad. Deposit Photos

The Earth is spinning faster. A negative leap second could help the world's clocks catch up.

The post We might soon lose a full second of our lives appeared first on Popular Science.

]]>

The leap second’s days, so to speak, are numbered. Late last year, the world’s timekeepers announced they would abandon the punctual convention in 2035.

That still gives timekeepers a chance to invoke the leap second before its scheduled end—in a more unconventional way. Ever since its creation, they’ve only used positive leap seconds, adding a second to slow down the world’s clocks when they get too far ahead of Earth’s rotation.

As it happens, the world’s clocks aren’t ahead right now; in fact, they’ve fallen behind. If this trend holds up, it’s possible that the next leap second may be a negative one, removing a second to speed the human measure of time back up. That’s uncharted territory.

The majority of humans won’t notice a missing second, just as they wouldn’t with an extra one. Computers and their networks, however, already have problems with positive leap seconds. While their operators can practice for when the world’s clocks skip a second, they won’t know what a negative leap second can do until the big day happens (if it ever does).

“Nobody knows how software systems will react to it,” says Marina Gertsvolf, a researcher at the National Research Council, which is responsible for Canada’s timekeeping.

The second is defined by a transition between two energy states of the cesium-133 atom—one that atomic clocks can measure with stunning accuracy. But a day is based on how long the Earth takes to finish one full spin: 24 hours, or 86,400 of those seconds.

Except a day isn’t always precisely 86,400 seconds, because the planet’s rotation isn’t a constant. Everything from the mantle churning to the atmosphere moving to the moon’s gravity pulling can play with it, adding or subtracting a few milliseconds every day. Over time, those differences add up.
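To make that accumulation concrete, here is a toy calculation (the 1.5-millisecond figure is a hypothetical stand-in, not real Earth-rotation data):

```python
# Toy illustration (1.5 ms/day is a hypothetical stand-in, not real
# IERS data): how long before a few milliseconds of daily mismatch
# between Earth's spin and 86,400 SI seconds adds up to a full second?
OFFSET_MS_PER_DAY = 1.5

days = 0
accumulated_ms = 0.0
while accumulated_ms < 1000.0:  # 1,000 ms = one leap second's worth
    accumulated_ms += OFFSET_MS_PER_DAY
    days += 1

print(days)  # 667 days, a bit under two years
```

At that pace, the world’s clocks would need a correction roughly every other year—which is why the adjustments come irregularly, whenever the drift actually warrants one.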

[Related: What would happen if the Earth started to spin faster]

An organization called the International Earth Rotation and Reference Systems Service (IERS) is responsible for tracking and adjusting for the changes. When the gulf widens enough, they decree that the final minute of June 30 or December 31—whichever comes next—should be modified with a leap second. Since 1972, these judges of time have added 27 positive leap seconds.

But for the past several months, Earth’s rotation has been pacing ahead of the world’s clocks. If this continues, then it’s possible the next leap second might be negative. At some point in the late 2020s or early 2030s, IERS might decide to peel away the last second of the last minute of June 30 or December 31, resulting in a minute that’s 59 seconds long. Clocks would skip from 23:59:58 right to 00:00:00. And we don’t know what that will do. 

What we do know is the chaos that past positive leap seconds have caused. It’s nothing like the apocalyptic collapse that Y2K preppers feared, but the time tweaks have given systems administrators their fair share of headaches. In 2012, a leap second glitched a server’s Linux operating system and knocked out Reddit at midnight. In 2017, Cloudflare—a major web service provider—experienced a significant outage due to a leap second. Problems invariably arise when one computer or server talks to another computer or server that still might not have accounted for a leap second.

As a result, some of the leap second’s biggest critics have been tech companies that have to deal with the consequences. And at least one of them is not excited about the possibility of a negative leap second. In 2022, two Facebook engineers wrote: “The impact of a negative leap second has never been tested on a large scale; it could have a devastating effect on the software relying on timers or schedulers.”

Timekeepers, however, aren’t expecting a meltdown. “Negative leap seconds aren’t quite as nasty as positive leap seconds,” says Michael Wouters, a researcher at the National Measurement Institute, Australia’s peak measurement body.

[Related: Daylight saving can mess with circadian rhythm]

Still, some organizations have already made emergency plans. Google, for instance, uses a process it calls a “smear.” Rather than adding a whole second at once, it spreads a positive leap second over the course of a day, making every second slightly longer to make up the difference. According to the company, it has tested the equivalent process for a negative leap second, making every second slightly shorter to absorb the lost second over the course of the day.
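The core of the idea can be sketched in a few lines, assuming a simple linear smear over a 24-hour window (a simplification, not Google’s production implementation):

```python
# Simplified linear "smear" (a sketch of the idea, not Google's
# production code): spread a one-second correction evenly across an
# 86,400-second window, so each smeared second differs from a true
# SI second by only about 11.6 microseconds.
WINDOW = 86_400  # length of the smear window, in seconds

def smeared_offset(elapsed, leap=1):
    """Correction (in seconds) applied `elapsed` seconds into the window.
    Use leap=+1 for a positive leap second, leap=-1 for a negative one."""
    elapsed = max(0, min(elapsed, WINDOW))
    return leap * elapsed / WINDOW

print(smeared_offset(43_200, leap=-1))  # halfway through the window: -0.5
print(smeared_offset(86_400, leap=-1))  # by the window's end: -1.0
```

Because the correction is applied gradually, no clock ever has to display a 59- or 61-second minute; the discontinuity simply never appears.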

Many servers get their time from constellations of navigation satellites like America’s GPS, Europe’s Galileo, and China’s BeiDou. To read satellite data, servers typically rely on specialized receivers that translate signals into information—including the time. According to Wouters, many of those receivers’ manufacturers have tested how their devices handle negative leap seconds. “I think that there is a lot more awareness of leap seconds than in the past,” says Wouters.

At the end of the day, the leap second is just an awkward, artificial construct. Human timekeepers use it to force the astronomical cycles that once defined our time back into lockstep with the atomic physics that have replaced the stars. “It removes this idea that time belongs to no country … and no particular industrial interest,” says Gertsvolf.

So, with the blessing of the world’s timekeepers, the leap second is on its way out. If that goes according to plan, then we can let the Earth spin as it wants without having to skip a beat.

The post We might soon lose a full second of our lives appeared first on Popular Science.


]]>
Engineers finally peeked inside a deep neural network https://www.popsci.com/science/neural-network-fourier-mathematics/ Thu, 16 Feb 2023 19:00:00 +0000 https://www.popsci.com/?p=512935
An illustration of a circuit in the form of a human brain.
Neural networks may be viewed as black boxes, even by their creators. Deposit Photos

Nineteenth-century math can give scientists a tour of 21st-century AI.

The post Engineers finally peeked inside a deep neural network appeared first on Popular Science.

]]>

Say you have a cutting-edge gadget that can crack any safe in the world—but you haven’t got a clue how it works. What do you do? You could take a much older safe-cracking tool—a trusty crowbar, perhaps. You could use that lever to pry open your gadget, peek at its innards, and try to reverse-engineer it. As it happens, that’s what scientists have just done with mathematics.

Researchers have examined a deep neural network—one type of artificial intelligence, and a notoriously enigmatic one on the inside—with a well-worn type of mathematical analysis that physicists and engineers have used for decades. The researchers published their results in the journal PNAS Nexus on January 23. The findings hint that the AI is doing many of the same calculations that humans have long done themselves.

The paper’s authors typically use deep neural networks to predict extreme weather events and for other climate applications. While better local forecasts can help people schedule their park dates, predicting the wind and the clouds can also help renewable energy operators plan what to put into the grid in the coming hours.

“We have been working in this area for a while, and we have found that neural networks are really powerful in dealing with these kinds of systems,” says Pedram Hassanzadeh, a mechanical engineer from Rice University in Texas, and one of the study authors.

Today, meteorologists often do this sort of forecasting with models that require behemoth supercomputers. Deep neural networks need much less processing power to do the same tasks. It’s easy to imagine a future where anyone can run those models on a laptop in the field.

[Related: Disney built a neural network to automatically change an actor’s age]

AI comes in many forms; deep neural networks are just one of them, if a very important one. A neural network has three parts. Say you build a neural network that identifies an animal from its image. The first part might translate the picture into data; the middle part might analyze the data; and the final part might compare the data to a list of animals and output the best matches.

What makes a deep neural network “deep” is that its creators expand that middle part into a far more convoluted affair, consisting of multiple layers. For instance, each layer of an image-watching deep network might analyze successively more complex sections of the image.
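For a concrete picture of those three parts, here is a toy forward pass (illustrative only: the weights are random stand-ins for what training would learn, and the animal labels are invented):

```python
import numpy as np

# Toy forward pass of a "deep" network (illustrative only). Three parts:
# translate the input into numbers, push them through a stack of middle
# layers, then score a fixed list of candidate answers.
rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # In a trained network these weights are learned, not random.
    return rng.normal(size=(n_in, n_out)), np.zeros(n_out)

hidden = [layer(64, 32), layer(32, 32), layer(32, 32)]  # the "deep" middle
out_w, out_b = layer(32, 3)
labels = ["cat", "dog", "horse"]

x = rng.normal(size=64)                 # stand-in for image data
for w, b in hidden:
    x = np.maximum(0.0, x @ w + b)      # ReLU activation at each layer
scores = x @ out_w + out_b              # one score per candidate animal
print(labels[int(np.argmax(scores))])   # prints the best match
```

Stacking more hidden layers is exactly what makes the middle part “deep”—and what makes it so hard to say what any individual layer has learned to do.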

That complexity makes deep neural networks very powerful, and they’ve fueled many of AI’s more impressive feats in recent memory. One of their first abilities, more than a decade ago, was to transcribe human speech into words. In later years, they’ve colorized images, tracked financial fraud, and designed drug molecules. And, as Hassanzadeh’s group has demonstrated, they can predict the weather and forecast the climate.

[Related: We asked a neural network to bake us a cake. The results were…interesting.]

The problem, for many scientists, is that nobody can actually see what the network is doing, because of the way these networks are made. Developers train a network by assigning it a task and feeding it data. As the newborn network digests more data, it adjusts itself to perform the task better. The end result is a “black box,” a tool whose innards are so scrambled that even its own creators can’t fully understand them.

AI experts have devoted countless hours to finding better ways of looking inside their own creations. That’s already tough to do with a simple image-recognition network. It’s even more difficult to understand a deep neural network that’s crunching a system such as Earth’s climate, which consists of myriad moving parts.

Still, the rewards are worth the work. If scientists know how their neural network works, not only can they know more about their own tools, they can think about how to adapt those tools for other uses. They could make weather-forecasting models, for instance, that work better in a world with more carbon dioxide in the air.

So, Hassanzadeh and his colleagues had the idea to apply Fourier analysis—a method that has sat neatly in the toolboxes of physicists and mathematicians for decades—to their AI. Think of Fourier analysis as an act of translation: it re-expresses a dataset as the sum of simpler wave-like functions. You can then apply filters to blot out parts of that sum, revealing the patterns that dominate the data.
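That translate-filter-translate loop can be sketched in a few lines (a generic low-pass example, not the specific filters analyzed in the paper):

```python
import numpy as np

# A generic low-pass Fourier filter (illustrative; not the filters from
# the paper). A slow 3 Hz pattern is buried under faster 40 Hz wiggles;
# translating to frequency space lets us delete the wiggles cleanly.
t = np.linspace(0.0, 1.0, 400, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(signal)            # translate: signal -> frequencies
freqs = np.fft.rfftfreq(400, d=1 / 400)   # frequency of each component
spectrum[freqs > 10] = 0.0                # filter: blot out everything fast
smoothed = np.fft.irfft(spectrum)         # translate back to a signal

# Only the slow 3 Hz pattern survives the filter.
assert np.allclose(smoothed, np.sin(2 * np.pi * 3 * t), atol=1e-6)
```

The researchers’ insight was that the network’s layers were, in effect, applying combinations of filters much like this one.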

As it happened, their attempt was a success. Hassanzadeh and his colleagues discovered that what their neural network was doing, in essence, was a combination of the same filters that many scientists would use.

“This better connects the inner workings of a neural network with things that physicists and applied mathematicians have been doing for the past few decades,” says Hassanzadeh.

If he and his colleagues are correct about the work they’ve just published, then it means that they’ve opened—slightly—something that might seem like magic with a crowbar fashioned from math that scientists have been doing for more than a century.

The post Engineers finally peeked inside a deep neural network appeared first on Popular Science.


]]>
Moondust could chill out our overheated Earth, some scientists predict https://www.popsci.com/science/moondust-climate-change-shield/ Wed, 08 Feb 2023 19:00:00 +0000 https://www.popsci.com/?p=510711
Apollo 11 commander Neil Armstrong leaves a boot print in dusty surface of the moon.
Apollo 11 commander Neil Armstrong leaves a boot print in dusty surface of the moon. NASA/Neil Armstrong

Under this high-concept sun-brella, incoming light would be reduced by about 2 percent per year.

The post Moondust could chill out our overheated Earth, some scientists predict appeared first on Popular Science.

]]>

In one possible future, great maglev lines cross the lunar surface. But these rails don’t carry trains. Instead, like space catapults, these machines accelerate cargo to supersonic speeds and fling it into the sky. The massive catapults have one task: throwing mounds of moondust off-world. Their mission is to halt climate change on Earth, 250,000 miles away.

All that dust will stream into deep space, where it will pass between Earth and the sun—and blot out some of the sun’s rays, cooling off the planet. As far-fetched as it sounds, the idea has received real scientific attention. In a paper published in the journal PLOS Climate on February 8, researchers simulated just how it might go if we tried to pull it off. According to their computer modeling, a cascade of well-placed moondust could shave off a few percent of the sun’s light.

It’s a spectacular idea, but it isn’t new. Filtering the sunlight that reaches Earth in the hope of cooling off the planet, blunting the blades making the thousand cuts of global warming, is an entire field called solar geoengineering. Designers have proposed similar spaceborne concepts: swarms of mirrors or giant shades, up to thousands of miles across, strategically placed to act as a parasol for our planet. Other researchers have suggested dust, which is appealing because, as a raw material, there’s no effort or expense to engineer it.

“We had read some accounts of previous attempts,” inspiring them to revisit the technique, says Scott Kenyon, an astrophysicist at the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, and one of the study’s authors.

Kenyon and his colleagues don’t usually dream up ways to chill planets. They study a vastly different type of dust: the kind that coalesces around distant, newly forming stars. In the process, the astrophysicists realized that the dust had a shading effect, cooling whatever lay in its shadow. 

[Related: The past 8 years have been the hottest on human record, according to new report]

“So we began to experiment with collections of dust that would shield Earth from sunlight,” says Kenyon. They turned methods that let them simulate distant dust disks to another problem, much closer to home.

Most solar geoengineering efforts focus on altering Earth’s atmosphere. We could, for instance, spray aerosols into the stratosphere to copy the cooling effects of volcanic eruptions. Altering the air is, predictably, a risky business; putting volcanic matter in the sky could have unwanted side effects, such as eroding the ozone layer or seeding acid rain.

“If you could just reduce the amount of incoming sunlight reaching the Earth, that would be a cleaner intervention than adding material to the stratosphere,” says Peter Irvine, a solar geoengineer at University College London, who was not an author of the paper.

Even if you found a way that would leave the skies ship-shape, however, the field is contentious. By its very nature, a solar geoengineering project will impact the entire planet, no matter who controls it. Many observers also believe that promises of a future panacea remove the pressure to curb carbon emissions in the present. 

It’s for such reasons that some climate scientists oppose solar geoengineering altogether. In 2021, researchers scrubbed the trial of a solar geoengineering balloon over Sweden after activists and representatives of the Sámi people protested the flight, even though the equipment test wouldn’t have conducted any atmospheric experiments.

But perhaps there’s a future where those obstacles have been cast aside. Perhaps the world hasn’t pushed down emissions quickly enough to avoid a worsening catastrophe; perhaps the world has then come together and decided that such a gigaproject is necessary. In that future, we’d need a lot of dust—about 10 billion kilograms, every year, close to 700 times the amount of mass that humans have ever launched into space, as of this writing. 

That makes the moon attractive: With lower gravity, would-be space launchers require less energy to throw mass off the moon than off Earth. Hypothetical machines like mass drivers—those electromagnetic catapults—could do the job without rocket launches. According to the authors, a few square miles of solar panels would provide all the energy they need.

That moondust isn’t coming back to Earth, nor is it settling into lunar orbit. Instead, it’s streaming toward a Lagrange point, a place in space where two objects’ respective gravitational forces cancel each other out. In particular, this moondust is headed for the sun and Earth’s L1, located in the direction of the sun, about 900,000 miles away from us.

There, all that dust would be in a prime position to absorb sunlight on a path to Earth. The 10 billion kilograms would drop light levels by around 1.8 percent annually, the study estimates—not as dramatic as an eclipse, but equivalent to losing about 6 days’ worth of sunlight per year.
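The back-of-envelope conversion from a percentage to days checks out:

```python
# Quick sanity check of the study's framing: a 1.8 percent reduction
# in a year's sunlight is roughly equivalent to 6-7 sunless days.
fraction_blocked = 0.018
days_equivalent = fraction_blocked * 365
print(round(days_equivalent, 1))  # 6.6
```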

[Related on PopSci+: Not convinced that humans are causing climate change? Here are the facts.]

Although L1’s gravitational balance would capture the dust, enough for it to remain for a few days, it would then drift away. We’d need to keep refilling the dust, as if it were a celestial water supply—part of why we’d need so much of it.

That dust wouldn’t come back to haunt Earth. But L1 hosts satellites like NASA’s SOHO and Wind, which observe the sun or the solar wind of particles streaming away from it. “The engineers placing dust at L1 would have to avoid any satellites to prevent damage,” says Kenyon.

Of course, this is one hypothetical, very distant future. Nobody can launch anything from the moon, let alone millions of tons of moondust, without building the infrastructure first. While market analysts are already tabulating the value of the lunar economy in two decades’ time, building enough mass drivers to perform impressive feats of lunar engineering probably isn’t in the cards.

“If we had a moonbase and were doing all sorts of cool things in space, then we could do this as well—but that’s something for the 22nd century,” says Irvine. Meanwhile, a far more immediate way to blunt climate change is to decarbonize the energy grid and cull fossil fuels, with haste. “Climate change,” Irvine says, “is a 21st century problem.”

The post Moondust could chill out our overheated Earth, some scientists predict appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Why shooting cosmic rays at nuclear reactors is actually a good idea https://www.popsci.com/science/nuclear-reactor-3d-imaging/ Fri, 03 Feb 2023 19:00:00 +0000 https://www.popsci.com/?p=509775
Marcoule Nuclear Power Plant in France. Workers in protective gear heating glowing nuclear reactor.
The Marcoule Nuclear Power Plant in France was decommissioned in the 1980s. The French government has been trying to take down the structures since, including the G2 reactor. Patrick Robert/Sygma/CORBIS/Sygma via Getty Images

Muons, common and mysterious particles that beam down from space, can go where humans can't. That can be useful for nuclear power plants.

The post Why shooting cosmic rays at nuclear reactors is actually a good idea appeared first on Popular Science.

]]>
Marcoule Nuclear Power Plant in France. Workers in protective gear heating glowing nuclear reactor.
The Marcoule Nuclear Power Plant in France was decommissioned in the 1980s. The French government has been trying to take down the structures since, including the G2 reactor. Patrick Robert/Sygma/CORBIS/Sygma via Getty Images

The electron is one of the most common bits of matter around us—every complete atom in the known universe has at least one. But the electron has far rarer and shadier counterparts, one of them being the muon. We may not think much about muons, but they’re constantly hailing down on Earth’s surface from the edge of the atmosphere.

Muons can pass through vast spans of bedrock that electrons can’t cross. That’s good luck for scientists, who can collect the more elusive particles to paint images of objects as if they were X-rays. In the last several decades, they’ve used muons to pierce the veils of erupting volcanoes and peer into ancient tombs, but only in two dimensions. The few three-dimensional images have been limited to small objects.

That’s changing. In a paper published in the journal Science Advances today, researchers have created a fully 3D muon image of a nuclear reactor the size of a large building. The achievement could give experts new, safer ways of inspecting old reactors or checking in on nuclear waste.

“I think, for such large objects, it’s the first time that it’s purely muon imaging in 3D,” says Sébastien Procureur, a nuclear physicist at the Université Paris-Saclay in France and one of the study authors.

[Related: This camera can snap atoms better than a smartphone]

Muon imaging is only possible with the help of cosmic rays. Despite their sunny name, most cosmic rays are the nuclei of hydrogen or helium atoms, descended to Earth from distant galaxies. When they strike our atmosphere, they burst into an incessant rainstorm of radiation and subatomic particles.

Inside the rain is a muon shower. Muons are heavier—about 206 times more massive—than their electron siblings. They’re also highly unstable: On average, each muon lasts for about a millionth of a second. That’s still long enough for around 10,000 of the particles to strike every square meter of Earth per minute.
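That flux is easier to picture in smaller units. A quick conversion (the per-square-meter figure is from the text; the unit math is ours) shows it amounts to roughly one muon crossing every square centimeter, every minute:

```python
# Convert the quoted sea-level muon flux into per-square-centimeter units.
muons_per_m2_per_min = 10_000   # figure quoted above

cm2_per_m2 = 100 * 100          # 1 m^2 = 10,000 cm^2
muons_per_cm2_per_min = muons_per_m2_per_min / cm2_per_m2

print(muons_per_cm2_per_min)    # 1.0 -- about one muon per cm^2 per minute
```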

Because muons are so much heavier than electrons, they shed far less energy as they pass through matter. They can penetrate the seemingly impenetrable, such as rock more than half a mile deep. Scientists can catch those muons with specially designed detectors and count them. More muons striking from a certain direction might indicate a hollow space lying that way. 

In doing so, they can gather data on spaces where humans cannot tread. In 2017, for instance, researchers discovered a hidden hollow deep inside Khufu’s Great Pyramid in Giza, Egypt. After a tsunami ravaged the Fukushima Daiichi nuclear power station in 2011, muons allowed scientists to gauge the damage from a safe distance. Physicists have also used muons to check nuclear waste casks without risking leakage while opening them up.

However, taking a muon image comes with some downsides. For one, physicists have no control over how many muons drizzle down from the sky, and the millions that hit each square meter of Earth daily aren’t actually very many in the grand scheme of things. “It can take several days to get a single image in muography,” says Procureur. “You have to wait until you have enough.”
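To see why exposures stretch into days, consider simple Poisson counting statistics: the relative error on a count of N muons is 1/sqrt(N), so halving your statistical error means quadrupling your counts. The rates below are invented for illustration—the real numbers depend on detector size, rock thickness, and the angular bin—but the logic is the same:

```python
import math

# Poisson counting: the relative error on N detected muons is 1/sqrt(N).
surviving_flux = 10.0      # muons per hour reaching the detector through the
                           # target, per direction bin (assumed for illustration)
target_precision = 0.01    # aim for ~1% statistical error per bin

counts_needed = math.ceil(1 / target_precision**2)   # 10,000 counts
hours_needed = counts_needed / surviving_flux

print(f"{counts_needed} counts -> about {hours_needed / 24:.0f} days of exposure")
```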

Typically, muon imagers take their snapshots with a detector that counts how many muons are striking it from what directions. But with a single machine, you can only tell that a hollow space exists—not how far away it lies. This limitation leaves most muon images trapped in two dimensions. That means if you scan a building’s facade, you might see the individual rooms, but not the layout. If you want to explore a space in great detail, the lack of a third dimension is a major hurdle.

In theory, by taking muon images from different perspectives, you can stitch them together into a 3D reconstruction. This is what radiologists do with X-rays. But while it’s easy to take hundreds of X-ray images from different angles, it’s far more tedious and time-consuming to do so with muons. 

Muon detectors around G2 nuclear reactor in France. Two facility photos and four diagrams.
The 3D muon images of the G2 nuclear reactor. Procureur et al., Sci. Adv. 9, eabq8431 (2023)

Still, Procureur and his colleagues gave it a go. The site in question was an old reactor at Marcoule, a nuclear power plant and research facility in the south of France. G2, as it’s called, was built in the 1950s. In 1980, the reactor shut down for good; since then, French nuclear authorities have slowly removed components from the building. Now, preparing to terminally decommission G2, they wanted to conduct another safety check of the structures inside. “So they contacted us,” says Procureur.

Scientists had taken 3D muon images of small objects like tanks before, but G2—located inside a concrete cylinder the size of a small submarine and fitted inside a metal-walled building the size of an aircraft hangar—required penetrating a lot more layers and area.

Fortunately, this cylinder left enough space for Procureur and his colleagues to set up four gas-filled detectors at strategic points around and below the reactor. Moving the detectors around, they were able to essentially snap a total of 27 long-exposure muon images, each one taking days on end to capture.

[Related: Nuclear power’s biggest problem could have a small solution]

But the tricky part, Procureur says, wasn’t actually setting up the muon detectors or even letting them run: It was piecing together the image afterward. To get the process started, the team adapted an algorithm used for stitching together anatomical images in a medical clinic. Though the process was painstaking, they succeeded. In their final images, they could pluck out objects as small as cooling pipes about two-and-a-half feet in diameter.

“What’s significant is they did it,” says Alan Bross, a physicist at Fermilab in suburban Chicago, who wasn’t involved with this research. “They built the detectors, they went to the site, and they took the data … which is really involved.”

The effort, Procureur says, was only a proof of concept. Now that they know what can be accomplished, they’ve decided to move on to a new challenge: imaging nuclear containers at other locations. “The accuracy will be significantly better,” Procureur notes.

Even larger targets may soon be on the horizon. Back in Giza, Bross and some of his colleagues are working to scan the Great Pyramid in three dimensions. “We’re basically doing the same technique,” he explains, but on a far more spectacular scale.

The post Why shooting cosmic rays at nuclear reactors is actually a good idea appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Earth’s natural air-scrubbing system works better when it’s wetter https://www.popsci.com/science/carbon-dioxide-mineral-weathering/ Thu, 26 Jan 2023 22:00:00 +0000 https://www.popsci.com/?p=508025
Volcanic eruptions naturally release carbon dioxide, though human activities contribute far more of the gas.
Volcanic eruptions naturally release carbon dioxide, though human activities contribute far more of the gas. Deposit Photos

When it's warm and rainy, minerals that react with carbon dioxide draw in more of the greenhouse gas.

The post Earth’s natural air-scrubbing system works better when it’s wetter appeared first on Popular Science.

]]>
Volcanic eruptions naturally release carbon dioxide, though human activities contribute far more of the gas.
Volcanic eruptions naturally release carbon dioxide, though human activities contribute far more of the gas. Deposit Photos

If a supervolcano burps out a choking cloud of carbon dioxide, the effects can be deadly and devastating—but Earth’s atmosphere will eventually return to normal. Where, then, does all that greenhouse gas end up? 

Earth’s surface, it turns out, conceals a natural air filter. 

Certainly, plants play their part, drawing in carbon dioxide for photosynthesis. But there’s an even larger control mechanism: the very earth itself. Carbon dioxide in the air can weather certain minerals in the ground. In the process, those minerals react with carbon dioxide and pull it from the atmosphere. 

Geologists have long known about this air filter, but they’ve yet to pin down exactly how it works. Now, scientists have evidence of what controls the process on a global scale: Those minerals weather more quickly if the, well, weather is warm and rainy.

“Everybody wants to understand how the globe works,” says Susan Brantley, a geologist at Pennsylvania State University. Brantley and her colleagues published their evidence in the journal Science today.

Weathering is when rocks and minerals deteriorate under exposure to nature’s elements—water, heat, microorganisms, and plants, to name just a few. (Weathering isn’t erosion, which involves movement, such as blowing wind or flowing water that picks up crumbs of rock and drops them elsewhere.) The authors focused on one specific type of weathering, caused by chemical reactions that involve carbon dioxide.

Even then, this gas doesn’t weather all minerals in the same way. Depending on their chemical composition, some might spit carbon dioxide right back into the atmosphere. Brantley and her colleagues instead studied a group of minerals known as silicate minerals, whose molecules contain silicon and oxygen atoms. Silicate minerals react with carbon dioxide and store it in the ground or, sometimes, in the water. Fortunately, these compounds are plentiful: Oxygen and silicon are the two most common elements in Earth’s crust. 

[Related: Earth has more than 10,000 kinds of minerals. This massive new catalog describes them all.]

The authors wanted to answer one question: How quickly do silicate minerals weather, and how does that attribute change as their surroundings shift? 

The answer is not straightforward. Chemical reactions don’t cause all the world’s weathering; it’s hard for geologists to separate chemical weathering from biological activity or groundwater percolation.

Because of that complication, geologists have found that chemical weathering seems to occur far more slowly in the soil outside than in a controlled laboratory. That’s a problematic discrepancy. In Brantley’s words, “if you can’t even extrapolate from your beaker to a stream outside your lab, how could we ever extrapolate to the globe?”

Fortunately, Brantley and colleagues weren’t the only researchers interested in the problem. They had decades of research, performed at local and regional scales, to pore over. They could look at experiments done in the lab. They could zoom out and look at observations of weathering in parcels of soil. They could zoom out further and find studies examining how weathering worked over entire river systems.

Analyzing that data, they could zoom out even farther and estimate a global trend.

Brantley’s group found that, as temperatures rose, so did weathering rates. Likewise, weathering slowed down in the cold. But warmth wasn’t the only factor they discovered. If the ground wasn’t in motion—if there was less erosion to move rock or less rainfall to create flowing water—weathering slowed down.

[Related: This diamond holds a never-before-seen mineral that can’t exist on Earth’s surface]

“It’s a very detailed analysis,” says Salvatore Calabrese, an environmental engineer at Texas A&M University who wasn’t an author on the paper.

“It’s able to align the field [and] the lab studies and make this coherent message,” says Bob Hilton, a geologist at Oxford University who also wasn’t an author on the paper.

The finding, according to Brantley, certainly helps geologists trying to look into Earth’s past, back at its long history of volcanic eruptions and swings back to normal. Assuming their models are accurate, geologists could look very far back indeed. For instance, they could examine preserved soil that’s billions of years old and make an educated guess at its atmosphere.

More pertinent to the present, the amount of greenhouse gases belched out by volcanoes is a drop in the bucket of emissions from humans burning fossil fuels. That raises another question: If we know weathering can decarbonize the air, then why can’t we accelerate the process?

As it happens, scientists and engineers are already working on that: an idea they call enhanced weathering. The process, as they envision it, might entail sprinkling a rock crumble across the ocean or over vast tracts of land. If done over large chunks of the world’s farmland, the hope goes, minerals in the rock will make a dent in the world’s carbon dioxide. (Of course, doing this might mean mining rocks from somewhere and potentially exposing people to rock dust.)

It’s a new idea, and for now, it’s largely confined to the laboratory. Some experiments have evaluated how it works in the presence of soil and plants, such as tests by Calabrese and his colleagues on small plots in a tropical forest. “You can take some measurements, but you cannot really look into what will happen over the entire forest,” he says.

That means enhanced weathering proponents face many of the same unknowns as geologists like Brantley. They know what happens in the lab, but they don’t know how this process might interact with real-world soils. And they don’t know whether their observations change over the size of an area.

It means, then, that Brantley’s findings could inform future enhanced weathering research: for instance, pointing its researchers to places with a plentiful water supply. “Maybe it’s a good reference to say, okay, maybe we can do something similar” to make enhanced weathering more efficient, says Calabrese.

For her part, Brantley is less interested in enhanced weathering than in some of the other players behind weathering: namely, living organisms. Life can boost weathering: microbes can manipulate their surrounding minerals. At the same time, living things can slow it down—a tree, for instance, can cut its roots into a rock and stabilize it.

Hilton agrees that geologists should now study what microbes are doing.

“They’re probably driving part of this temperature response,” says Hilton. “So, understanding how they’re working, how they’re functioning, is really important.”

The post Earth’s natural air-scrubbing system works better when it’s wetter appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The best—and worst—places to shelter after a nuclear blast https://www.popsci.com/science/how-to-survive-a-nuclear-bomb-shockwave/ Fri, 20 Jan 2023 16:53:24 +0000 https://www.popsci.com/?p=506575
Nuclear shelter basement sign on brick building to represent survival tips for a nuclear blast
Basements work well as nuclear shelters as long as they don't have many external openings. Deposit Photos

Avoid windows, doors, and long hallways at all costs.

The post The best—and worst—places to shelter after a nuclear blast appeared first on Popular Science.

]]>
Nuclear shelter basement sign on brick building to represent survival tips for a nuclear blast
Basements work well as nuclear shelters as long as they don't have many external openings. Deposit Photos

In the nightmare scenario of a nuclear bomb blast, you might picture a catastrophic fireball, a mushroom cloud rising into an alien sky overhead, and a pestilent rain of toxic fallout in the days to come. All of these are real, and all of them can kill.

But just as real, and every bit as deadly, is the air blast that comes just instants after. When a nuke goes off, it usually creates a shockwave. That front tears through the air at supersonic speed, shattering windows, demolishing buildings, and causing untold damage to human bodies—even miles from the point of impact.

[Related: How to protect yourself from nuclear radiation]

So, you’ve just seen the nuclear flash, and know that an air blast is soon to follow. You’ve only got seconds to hide. Where do you go?

To help you find the safest spot in your home, two engineers from Cyprus simulated which spaces made winds from a shockwave move more violently—and which spaces slowed them down. Their results were published on January 17 in the journal Physics of Fluids.

During the feverish nuclear paranoia of the Cold War, plenty of scientists studied what nuclear war would do to a city or the world. But most of their research focused on factors like the fireball, the radiation, or a nuclear winter, rather than an individual air blast. Moreover, 20th-century experts lacked the sophisticated computational capabilities that their modern counterparts can use. 

“Very little is known about what is happening when you are inside a concrete building that has not collapsed,” says Dimitris Drikakis, an engineer at the University of Nicosia and co-author of the new paper. 

[Related: A brief but terrifying history of nuclear weapons]

The advice that he and his colleague Ioannis W. Kokkinakis came up with doesn’t apply to the immediate vicinity of a nuclear blast. If you’re within a shout of ground zero, there’s no avoiding it—you’re dead. Even some distance away, the nuke will bombard you with a bright flash of thermal radiation: a torrent of light, infrared, and ultraviolet that could blind you or cause second- or third-degree burns.

But as you move farther away from ground zero, far enough that the thermal radiation might leave you with minor injuries at most, the airburst will leave most structures standing. The winds will only be equivalent to a very strong hurricane. That’s still deadly, but with preparation, you might just make it.

Drikakis and Kokkinakis constructed a one-story virtual house and simulated striking winds from two different shockwave scenarios—one well above standard air pressure, and one even stronger. Based on their simulations, here are the best—and worst—places to go during a nuclear war.

Worst: by a window

If you catch a glimpse of a nuclear flash, your first instinct might be to run to the nearest window to see what’s just happened. That would be a mistake, as you’d be in the prime place to be hit by the ensuing air blast.

If you stand right in a window facing the blast, the authors found, you might face winds over 300 miles per hour—enough to pick the average human off the ground. Depending on the exact strength of the nuke, you might then strike the wall with enough force to kill you.
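To put 300-mile-per-hour winds in perspective, a standard drag-force estimate shows why they can lift a person. The wind speed comes from the simulations; the air density, frontal area, and drag coefficient below are our illustrative assumptions:

```python
# Rough drag force of a 300 mph gust on a standing adult.
mph_to_ms = 0.44704
v = 300 * mph_to_ms        # wind speed, ~134 m/s
rho = 1.225                # sea-level air density, kg/m^3 (assumed)
area = 0.7                 # frontal area of a standing adult, m^2 (assumed)
drag_coeff = 1.0           # order-of-magnitude drag coefficient (assumed)

force = 0.5 * rho * drag_coeff * area * v**2   # standard drag equation
weight = 80 * 9.81                             # weight of an 80 kg person, N

print(f"Drag force ~{force:.0f} N, about {force / weight:.0f}x body weight")
```

With these numbers the gust pushes with roughly ten times the weight of an average adult—more than enough to sweep someone off their feet.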

Surprisingly, there are more dangerous places in the house when it comes to top wind speed (more on that later). But what really helps make a window deadly is the glass. As it shatters, you’ll be sprayed in the face by high-velocity shards.

Bad: a hallway

You might imagine that you can escape the airblast by retreating deeper into your building. But that’s not necessarily true. A window can act as a funnel for rushing air, turning a long hallway into something like a wind tunnel. Doors can do the same. 

The authors found that winds would throw an average-sized human standing in the corridor nearly as far as one standing by the front window. Intense winds can also pick up glass shards and loose objects from the floor or furniture and send them hurtling as fast as a shot from a musket, the simulations showed.

Better: a corner

Not everywhere in the house is equally deadly. The authors found that, as the nuclear shockwave passed through a room, the highest winds tended to miss the room’s edges and corners. 

Therefore, even if you’re in an otherwise dangerous room, you can protect yourself from the worst of the impact by finding a corner and bracing yourself in. The key, again, is to avoid doors and windows.

“Wherever there are no openings, you have better chances to survive,” says Drikakis. “Essentially, run away from the openings.”

Best: a corner of an interior room

The best place to hide out is in the corner of a small room as far inside the building as possible. For example, a closet that lacks any openings is ideal.

The “good” news is that the peak of the blast lasts just a moment. The most furious winds will pass in less than a second. If you can survive that, you’ll probably stay alive—as long as you’re not in the path of the radioactive fallout.

These tips for sheltering can be useful in high-wind disasters across the board. (The US Centers for Disease Control currently advises those who cannot evacuate before a hurricane to avoid windows and find a closet.) But the authors stress that the risk of nuclear war, while low, has certainly not disappeared. “I think we have to raise awareness to the international community … to understand that this is not just a joke,” says Drikakis. “It’s not a Hollywood movie.”

The post The best—and worst—places to shelter after a nuclear blast appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Physicists figured out a recipe to make titanium stardust on Earth https://www.popsci.com/science/stardust-titanium-tools/ Fri, 13 Jan 2023 19:00:00 +0000 https://www.popsci.com/?p=505062
Cosmic dust on display in Messier 98 galaxy.
Spiral galaxy Messier 98 showcases its cosmic dust in this Hubble Space Telescope image. NASA / ESA / Hubble / V. Rubin et al

The essential ingredients are carbon atoms, titanium, and a good coating of graphite.

The post Physicists figured out a recipe to make titanium stardust on Earth appeared first on Popular Science.

]]>
Cosmic dust on display in Messier 98 galaxy.
Spiral galaxy Messier 98 showcases its cosmic dust in this Hubble Space Telescope image. NASA / ESA / Hubble / V. Rubin et al

Long ago—before humans, before Earth, before even the sun—there was stardust.

In time, the young worlds of the solar system would eat up much of that dust as those bodies ballooned into the sun, planets, and moons we know today. But some of the dust survived, pristine, in its original form, locked in places like ancient meteorites.

Scientists call this presolar dust, since it formed before the sun. Some grains of presolar dust contain tiny bits of carbon, like diamond or graphite; others contain a host of other elements such as silicon or titanium. One form contains a curious and particularly hardy material called titanium carbide, used in machine tools on Earth. 

Now, physicists and engineers think they have an idea of how those particular dust grains formed. In a study published today in the journal Science Advances, researchers believe they could use that knowledge to build better materials here on Earth.

These dust grains are extremely rare and extremely minuscule, often smaller than the width of a human hair. “They were present when the solar system formed, survived this process, and can now be found in primitive solar system materials,” such as meteorites, says Jens Barosch, an astrophysicist at the Carnegie Institution for Science in Washington, DC, who was not an author of the study.

[Related: See a spiral galaxy’s haunting ‘skeleton’ in a chilly new space telescope image]

The study authors peered into a unique kind of dust grain with a core of titanium carbide—titanium and carbon, combined into durable, ceramic-like material that’s nearly as hard as diamond—wrapped in a shell of graphite. Sometimes, tens or even hundreds of these carbon-coated cores clump together into larger grains.

But how did titanium carbide dust motes form in the first place? So far, scientists haven’t quite known for sure. Testing it on Earth is hard, because would-be dustbuilders have to deal with gravity—something that these grains didn’t have to contend with. But scientists can now go to a place where gravity is no object.

On June 24, 2019, a sounding rocket launched from Kiruna, a frigid Swedish town north of the Arctic Circle. This rocket didn’t reach orbit. Like many rockets before and since, it streaked in an arc across the sky, peaking at an altitude of about 150 miles, before coming back down.

Still, that brief flight was enough for the rocket’s components to gain more than a taste of the microgravity that astronauts experience in orbit. One of those components was a contraption inside which scientists could incubate dust grains and record the process. 

“Microgravity experiments are essential to understanding dust formation,” says Yuki Kimura, a physicist at Hokkaido University in Japan, and one of the paper’s authors.

Titanium carbide grains, seen here magnified at a scale of several hundred nanometers. Yuki Kimura

Just over three hours after launch, including six and a half minutes of microgravity, the rocket landed about 46 miles away from its launch site. Kimura and his colleagues had the recovered dust grains sent back to Japan for analysis. From this shot and follow-up tests in an Earthbound lab, the group pieced together a recipe for a titanium carbide dust grain.

[Related: Black holes have a reputation as devourers. But they can help spawn stars, too.]

That recipe might look something like this: first, start with a core of carbon atoms, in graphite form; second, sprinkle the carbon core with titanium until the two sorts of atoms start to mix and create titanium carbide; third, fuse many of these cores together and drape them with graphite until you get a good-sized grain.

It’s interesting to get a glimpse of how such ancient things formed, but astronomers aren’t the only people who care. Kimura and his colleagues also believe that understanding the process could help engineers and builders craft better materials on Earth—because we already build particles not entirely unlike dust grains.

They’re called nanoparticles, and they’ve been around for decades. Scientists can insert them into polymers like plastic to strengthen them. Road-builders can use them to reinforce the asphalt under their feet. Doctors can even insert them into the human body to deliver drugs or help image hard-to-see body parts.

Typically, engineers craft nanoparticles by growing them within a liquid solution. “The large environmental impact of this method, such as liquid waste, has become an issue,” says Kimura. Stardust, then, could help reduce that waste.

Machinists already use tools strengthened by a coat of titanium carbide nanoparticles. Just like diamond, the titanium carbide helps the tools, often used to forge things like spacecraft, cut harder. One day, stardust-inspired machine coatings might help build the very vessels humans send to space.

The post Physicists figured out a recipe to make titanium stardust on Earth appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
UV radiation might be behind the planet’s biggest mass extinction https://www.popsci.com/environment/mass-extinction-uv-radiation/ Fri, 06 Jan 2023 19:00:00 +0000 https://www.popsci.com/?p=503763
Geologists collecting Permian fossils on a Tibetan plateau
The field site, with the latest Permian rocks in the foreground, and the outcrop containing the Permian-Triassic boundary above. Feng Liu

Volcanic gases, carbon dioxide, and UV-B rays made for a noxious combination for Permian life.

The post UV radiation might be behind the planet’s biggest mass extinction appeared first on Popular Science.

]]>
Geologists collecting Permian fossils on a Tibetan plateau
The field site, with the latest Permian rocks in the foreground, and the outcrop containing the Permian-Triassic boundary above. Feng Liu

Above 10 miles in the sky lies a layer of ozone—a pale blue gas with molecules of three oxygen atoms, rather than two. This ozone layer is a crucial shield that protects all life from the sun’s barrage of ultraviolet radiation. So what happens if something in the ozone layer goes horribly wrong? 

The results can be catastrophic—and we may have prehistoric evidence of exactly that.

It comes from the time of the worst mass extinction in Earth’s history—252 million years ago, at the end of the Permian period when an apocalyptic cascade of volcanic eruptions may have turned the world toxic. And it comes in the form of fossilized pollen grains with signs of exposure to a high-energy type of ultraviolet known as ultraviolet B (UV-B) radiation. In a paper published today in the journal Science Advances, an international group of geologists and botanists used the deformed specimens to piece together a possible course of deadly events.

“I would say the elevated UV-B radiation probably played a part in the extinction of some terrestrial life,” says Feng Liu, a geologist at the Nanjing Institute of Geology and Palaeontology in China and one of the paper’s authors. Scientists have long suspected that a drop in ozone levels and spike in ultraviolet rays might have played a role in this catastrophe, and now they have data to show for it.

[Related: Geologists are searching for when the Earth took its first breath]

One prime suspect for the end-of-the-Permian devastation is the Siberian Traps. These igneous rocks coat central Siberia (which, at the time, was one of the northernmost chunks of the supercontinent Pangaea) and were spewed from a truly colossal complex of volcanoes. Experts think that for more than a million years, the Siberian Traps belched greenhouse gases like carbon dioxide into Earth’s atmosphere. 

In the wake of constant volcanic activity, teeming ancient oceans would have acidified and deoxygenated, turning toxic and sentencing more than 80 percent of their resident marine species to extinction. Life would of course recover, but it needed millions of years more to return to its pre-extinction abundance.

That explains much of the prehistoric carnage in the water, but what about on land? What types of terrestrial organisms died, and why? The fossil record there is much less clear.

[Related on PopSci+: An ancient era of global warming could hint at our scorching future]

Researchers had previously dug up clues of some immense destruction. For instance, several parts of that ancient world were once covered with forests of great ferns. These forests vanish from the fossil record around the end of the Permian, suggesting that ground dwellers suffered worldwide. 

Still, other experts contend that the fossil record could be misleading, and the extinctions were more regional. “It’s a case of compiling lots of pieces of information from different places, and trying to build it together into a coherent—albeit incomplete—picture,” says Phillip Jardine, a paleobotanist at the University of Münster in Germany and author on the new paper. So far, that picture doesn’t tell us what, exactly, caused the deaths on land.

But these scientists may have found a missing piece. In 2014, Liu collected samples from rocks under what is now southern Tibet. When he and his colleagues studied the rock closely, they found ancient grains of conjoined and misshapen pollen.

Brown pollen spore from Permian period for UV radiation study
An alisporites pollen grain from one of the samples collected in Tibet and analyzed in the study. Feng Liu

To understand what caused the damage, the team analyzed the pollen and sought out particular compounds containing carbon, oxygen, and nitrogen. Plants would have created these chemicals to protect themselves from UV-B radiation, which consists of shorter wavelengths than visible light and therefore carries higher energies. As a result, UV-B rays can cause more damage to living cells than UV-A.
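The wavelength-to-energy relationship above can be checked with a quick back-of-the-envelope calculation using the Planck relation E = hc/λ. The wavelengths below (~300 nm for UV-B, ~550 nm for mid-spectrum visible light) are illustrative values, not figures from the study:

```python
# Photon energy E = h*c / wavelength: shorter wavelength -> higher energy.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon at the given wavelength, in electronvolts."""
    joules = H * C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert joules -> eV

uvb = photon_energy_ev(300)      # representative UV-B wavelength
visible = photon_energy_ev(550)  # representative visible (green) wavelength

print(f"UV-B photon:    {uvb:.2f} eV")
print(f"Visible photon: {visible:.2f} eV")
print(f"UV-B carries ~{uvb / visible:.1f}x the energy per photon")
```

Each UV-B photon packs nearly twice the energy of a visible-light photon, which is why it can break chemical bonds in living cells that visible light leaves intact.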

Scientists like Jardine had used the same technique to study UV-B levels that reached Earth’s surface a few hundred thousand years ago. But this was the first time anybody had tried to look for these compounds from 252 million years ago. And Jardine and Liu’s group did find them.

“I think the key thing is that we have definite evidence that plants were affected by this,” says Jardine. “The increase in UV-B-absorbing compounds that we have observed shows that plants were biochemically responding to this situation.”

The hunch is that at the Permian period’s end, volcanic activity unleashed gases known as halocarbons, which contain atoms of halogens like chlorine and bromine. The chemicals might have eaten away at the ozone layer, allowing more UV-B to travel to the ground. That, in turn, would have stunted plant growth and reproduction, possibly leaving fewer plants to pull heat-trapping carbon dioxide out of the air.

“Whilst it would be pre-emptive of me to suggest ozone depletion or elevated UV radiation were the only cause of these mass extinctions, it certainly seems plausible that increasing UV radiation at a time when the global ecosystem is already under considerable stress is likely to exacerbate negative impacts on life on Earth,” says Wesley Fraser, a geologist at Oxford Brookes University in the UK and another one of the study authors.

[Related: Tonga survived the largest volcanic plume in the planet’s history this year]

If UV-B really did make the planet more unlivable in that period, the devastation may have happened globally. Of course, scientists will need to find hard evidence of that. “These data only came from one locality, so we need to find more from the same time interval to validate these findings,” says Jardine.

Though the mass extinction at the end of the Permian is considered the deadliest, there were more. Scientists have identified similar mortality events at the ends of the Devonian (around 360 million years ago) and the Triassic (around 201 million years ago) periods. And according to Fraser, scientists have found traces of ultraviolet poisoning in those extinctions, too.

“There may be a common thread involving UV radiation spanning different mass extinction events,” says Fraser. Even if ultraviolet radiation wasn’t the primary killer, it might have been the accomplice that helped do in much of the world’s terrestrial life.

And while the Permian is ancient history, we’re still wrestling with the problem of UV-B radiation today. It was not too long ago that the world was in alarm over an ozone hole over Antarctica, caused by compounds known as chlorofluorocarbons (CFCs) leaching into the atmosphere from the refrigerators and air conditioners that once used them. Many were concerned that the ozone hole would expand and leave large parts of the globe exposed to burning UV radiation.

[Related on PopSci+: Rocket fuel might be polluting the Earth’s upper atmosphere]

After governments came together in 1987 to craft the Montreal Protocol and ban CFCs, the ozone hole began to heal. But the damage was done, and it continues to affect plants today.

With that in mind, learning about how UV-B exposure affected plants in the past could inform scientists about what may happen in the near future. And vice versa, Fraser explains. “I think deep-time and modern-day research on UV-B radiation go hand-in-glove.” 

The post UV radiation might be behind the planet’s biggest mass extinction appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
ISS astronauts are building objects that couldn’t exist on Earth https://www.popsci.com/science/iss-resin-manufacture-new-shapes/ Tue, 03 Jan 2023 17:00:00 +0000 https://www.popsci.com/?p=502628
A test device aboard the ISS is making new shapes beyond gravity's reach.
A test device aboard the ISS is making new shapes beyond gravity's reach. NASA

Gravity-defying spare parts are created by filling silicone skins with resin.

The post ISS astronauts are building objects that couldn’t exist on Earth appeared first on Popular Science.

]]>
A test device aboard the ISS is making new shapes beyond gravity's reach.
A test device aboard the ISS is making new shapes beyond gravity's reach. NASA

Until now, virtually everything the human race has ever built—from rudimentary tools to one-story houses to the tallest skyscrapers—has had one key restriction: Earth’s gravity. Yet, if some scientists have their way, that could soon change.

Aboard the International Space Station (ISS) right now is a metal box the size of a desktop PC tower. Inside, a nozzle is helping build little test parts that can’t be made on Earth—structures that would collapse under the planet’s gravity. 

“These are going to be our first results for a really novel process in microgravity,” says Ariel Ekblaw, a space architect who founded MIT’s Space Exploration Initiative and one of the researchers (on Earth) behind the project.

The MIT group’s process involves taking a flexible silicone skin, shaped like the part it will eventually create, and filling it with a liquid resin. “You can think of them as balloons,” says Martin Nisser, an engineer at MIT, and another of the researchers behind the project. “Instead of injecting them with air, inject them with resin.” Both the skin and the resin are commercially available, off-the-shelf products.

The resin is sensitive to ultraviolet light. When the balloons experience an ultraviolet flash, the light percolates through the skin and washes over the resin. It cures and stiffens, hardening into a solid structure. Once it’s cured, astronauts can cut away the skin and reveal the part inside.

All of this happens inside the box that launched on November 23 and is scheduled to spend 45 days aboard the ISS. If everything is successful, the ISS will ship some experimental parts back to Earth, where the MIT researchers have to ensure that the parts they’ve made are structurally sound. After that, more tests. “The second step would be, probably, to repeat the experiment inside the International Space Station,” says Ekblaw, “and maybe to try slightly more complicated shapes, or a tuning of a resin formulation.” After that, they’d want to try making parts outside, in the vacuum of space itself. 

The benefit of building parts like this in orbit is that Earth’s single most fundamental stressor—the planet’s gravity—is no longer a limiting factor. Say you tried to make particularly long beams with this method. “Gravity would make them sag,” says Ekblaw.

[Related: The ISS gets an extension to 2030 to wrap up unfinished business]

In the microgravity of the ISS? Not so much. If the experiment is successful, their box would be able to produce test parts that are too long to make on Earth.

The researchers imagine a near future where, if an astronaut needed to replace a mass-produced part—say, a nut or a bolt—they wouldn’t need to have one shipped from Earth. Instead, they could just fit a nut- or a bolt-shaped skin into a box like this and fill it up with resin.

But the researchers are also thinking long-term. If they can make very long parts in space, they think, those pieces could speed up large construction projects, such as the structures of space habitats. They might also be used to form the structural frames for solar panels that power a habitat or radiators that keep the habitat from getting too warm.

International Space Station photo
A silicone skin that will be filled to make a truss. Rapid Liquid Printing

Building stuff in space has a few key advantages, too. If you’ve ever seen a rocket in person, you’ll know that—as impressive as they are—they aren’t particularly wide. It’s one reason that large structures such as the ISS or China’s Tiangong go up piecemeal, assembled one module at a time over years.

Mission planners today often have to spend a great deal of effort trying to squeeze telescopes and other craft into that small cargo space. The James Webb Space Telescope, for instance, has a sprawling tennis-court-sized sunshield. To fit it into its rocket, engineers had to delicately fold it up and plan an elaborate unfurling process once JWST reached its destination. Every solar panel you can assemble in Earth orbit is one less solar panel you have to stuff into a rocket. 

[Related: Have we been measuring gravity wrong this whole time?]

Another key advantage is cost. The cost of space launches, adjusted for inflation, has fallen more than 20-fold since the first Space Shuttle went up in 1981, but every pound of cargo can still cost over $1,000 to put into space. Space is now within reach of small companies and modest academic research groups, but every last ounce makes a significant price difference.

When it comes to other worlds like the moon and Mars, thinkers and planners have long thought about using the material that’s already there: lunar regolith or Martian soil, not to mention the water that’s found frozen on both worlds. In Earth’s orbit, that’s not quite as straightforward. (Architects can’t exactly turn the Van Allen radiation belts into building material.)

That’s where Ekblaw, Nisser, and their colleagues hope their resin-squirting approach might excel. It won’t create intricate components or complex circuitry in space, but every little part is one less that astronauts have to take up themselves.

“Ultimately, the purpose of this is to make this manufacturing process available and accessible to other researchers,” says Nisser.

The post ISS astronauts are building objects that couldn’t exist on Earth appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Time doesn’t have to be exact—here’s why https://www.popsci.com/science/leap-second-day-length/ Sat, 31 Dec 2022 16:00:00 +0000 https://www.popsci.com/?p=501341
Gold clock with blue arms for minutes and seconds
Starting in 2035, we'll be shaving a second off our New Year's countdowns. Hector Achautla

The recent decision to axe the leap second shouldn't affect your countdowns or timekeeping too much.

The post Time doesn’t have to be exact—here’s why appeared first on Popular Science.

]]>
Gold clock with blue arms for minutes and seconds
Starting in 2035, we'll be shaving a second off our New Year's countdowns. Hector Achautla

It’s official: The leap second’s time is numbered.

By 2035, computers around the world will have one less human timekeeping quirk to glitch over. Schoolchildren will have one less confusing calculation to learn when memorizing the calendar.

Our days are continually changing: Tiny differences in the Earth’s rotation build up over months or years. To compensate, every so often, authorities of world time insert an extra second to bring the day back in line. Since 1972, when the system was introduced, we’ve experienced 27 such leap seconds.

But the leap second has always represented a deeper discrepancy. Our idea of a day is based on how fast the Earth spins; yet we define the second—the actual base unit of time as far as scientists, computers, and the like are concerned—with the help of atoms. It’s a definitional gap that puts astronomy and atomic physics at odds with each other.

[Related: Refining the clock’s second takes time—and lasers]

Last month, the guardians of global standard time chose atomic physics over astronomy—and according to experts, that’s fine.

“We will never abandon the idea that timekeeping is regulated by the Earth’s rotation. [But] the fact is we don’t want it to be strictly regulated by the Earth’s rotation,” says Patrizia Tavella, a timekeeper at the International Bureau of Weights and Measures (BIPM) in Paris, an intergovernmental agency that, amongst other things, binds together nations’ official clocks.

The day is a rather odd unit of time. We usually think about it as the duration the Earth takes to complete one rotation about its axis: a number from astronomy. The problem is that the world’s most basic unit of time is not the day, but the second, which is measured by something far more minuscule: the cesium-133 atom, an isotope of the 55th element. 

As cesium-133’s nucleus experiences tiny shifts in energy, it releases photons with very predictable timing. Since 1967, atomic clocks have counted precisely 9,192,631,770 of these time-units in every second. So, as far as metrologists (people who study measurement itself) are concerned, a single day is 86,400 of those seconds.
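Stringing those two definitions together is simple arithmetic; here is the metrologist’s day, counted in cesium oscillations:

```python
# The SI second: 9,192,631,770 oscillations of cesium-133's
# microwave transition. A metrologist's day is 86,400 such seconds.
CESIUM_PERIODS_PER_SECOND = 9_192_631_770
SECONDS_PER_DAY = 86_400  # 24 h * 60 min * 60 s

periods_per_day = CESIUM_PERIODS_PER_SECOND * SECONDS_PER_DAY
print(f"{periods_per_day:,} cesium oscillations per 86,400-second day")
# -> 794,243,384,928,000 oscillations
```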

Except a day isn’t always exactly 86,400 seconds, because the world’s rotation isn’t constant.

Subtle motions, such as the moon’s tidal pull or the planet’s mass distribution shifting as its melty innards churn about, affect Earth’s spin. Some scientists even believe that a warming climate could shuffle heated air and melted water closer to the poles, which might speed up the rotation. Whatever the cause, it leads to millisecond differences in day length over the year that are unacceptable for today’s ultra-punctual timekeepers. Which is why they try to adjust for it.

The International Earth Rotation and Space Systems Service (IERS), a scientific nonprofit responsible for setting global time standards, publishes regular counts of just how large the difference is for the benefit of the world’s timekeepers. For most of December, Earth’s rotation has been between 15 and 20 milliseconds off the atomic-clock day.


Whenever that gap has gotten too large, IERS invokes the commandment of the leap second. Every January and July, the organization publishes a judgment on whether a leap second is in order. If one is necessary, the world’s timekeepers tack a 61st second onto the last minute of June 30 or December 31, whichever comes next. But this November, the BIPM ruled that by 2035, the masters of the world’s clocks will shelve the leap second in favor of a still-undecided approach.

That means the Royal Observatory in Greenwich, London—the baseline for Greenwich Mean Time (GMT) and its modern successor, Coordinated Universal Time (UTC)—will drift out of sync with the days it once defined. Amateur astronomers might complain, too, as without the leap second, star sightings could become less predictable in the night sky.

But for most people, the leap second is an insignificant curiosity—especially compared to the maze of time zones that long-distance travelers face, or the shifts that humans must observe twice a year if they live in countries that use daylight saving or summer time.

On the other hand, adding a subtle second to shove the day into perfect alignment comes at a cost: technical glitches and nightmares for programmers who must already deal with different countries’ hodgepodge of timekeeping. “The absence of leap seconds will make things a little easier by removing the need for the occasional adjustment, but the difference will not be noticed by everyday users,” says Judah Levine, a timekeeper at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, the US government agency that sets the country’s official clocks.

[Related: It’s never too late to learn to be on time]

The new plan stipulates that in 2026, BIPM and related groups will meet again to determine how much they can let the discrepancy grow before the guardians of time need to take action. “We will have to propose the new tolerance, which could be one minute, one hour, or infinite,” says Tavella. They’ll also propose how often they (or their successors) will revise the number.

It’s not a decision that needs to be made right away. “It’s probably not necessary” to reconcile atomic time with astronomical time, says Elizabeth Donley, a timekeeper at NIST. “User groups that need to know time for astronomy and navigation can already look up the difference.”

We can’t currently predict the vagaries of Earth’s rotation, but scientists think it will take about a century for the difference to build up to a minute. “Hardly anyone will notice,” says Donley. It will take about five millennia for it to build up to an hour. 
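Those two timescales imply an average drift rate, which a rough back-of-the-envelope check (my own arithmetic, not the timekeepers’) puts at a millisecond or two per day:

```python
# Rough average drift implied by the estimates in this story:
# ~1 minute of accumulated offset per century, ~1 hour per five millennia.
DAYS_PER_YEAR = 365.25

minute_rate = 60 / (100 * DAYS_PER_YEAR)    # seconds of drift per day
hour_rate = 3600 / (5000 * DAYS_PER_YEAR)   # seconds of drift per day

print(f"1 min / century:    {minute_rate * 1000:.1f} ms/day")
print(f"1 hr / 5 millennia: {hour_rate * 1000:.1f} ms/day")
# Both land at 1-2 ms/day -- the same order of magnitude as the daily
# wobble in Earth's rotation that the leap second was invented to absorb.
```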

In other words, we could just kick the conundrum of counting time down the road for our grandchildren or great-grandchildren to solve. “Maybe in the future, there will be better knowledge of the Earth’s movement,” says Tavella, “And maybe, another better solution will be proposed.”

The post Time doesn’t have to be exact—here’s why appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
What the Energy Department’s laser breakthrough means for nuclear fusion https://www.popsci.com/science/nuclear-fusion-laser-net-gain/ Tue, 13 Dec 2022 18:33:00 +0000 https://www.popsci.com/?p=498247
Target fusion chamber of the National Ignition Facility
The National Ignition Facility's target chamber at Lawrence Livermore National Laboratory, where fusion experiments take place, with technicians inside. LLNL/Flickr

Nearly 200 lasers fired at a tiny bit of fuel to create a gain in energy, mimicking the power of the stars.

The post What the Energy Department’s laser breakthrough means for nuclear fusion appeared first on Popular Science.

]]>
Target fusion chamber of the National Ignition Facility
The National Ignition Facility's target chamber at Lawrence Livermore National Laboratory, where fusion experiments take place, with technicians inside. LLNL/Flickr

Since the 1950s, scientists have quested to bring nuclear fusion—the sort of reaction that powers the sun—down to Earth.

Just after 1 a.m. on December 5, scientists at the National Ignition Facility (NIF) in Lawrence Livermore National Laboratory (LLNL) in California finally reached a major milestone in the history of nuclear fusion: achieving a reaction that creates more energy than scientists put in.

This moment won’t bring a fusion power plant to your city just yet—but it is an important step to that goal, one which scientists have sought from the start of their quest.

“This lays the groundwork,” says Tammy Ma, a scientist at LLNL, in a US Department of Energy press conference today. “It demonstrates the basic scientific feasibility.”

On the outside, NIF is a nondescript industrial building in a semi-arid valley east of San Francisco. On the inside, scientists have quite literally been tinkering with the energy of the stars (alternating with NIF’s other major task, nuclear weapons research).

[Related: Physicists want to create energy like stars do. These two ways are their best shot.]

Nuclear fusion is how the sun generates the heat and light that warm and illuminate the Earth to sustain life. It involves crushing hydrogen atoms together. The resulting reaction creates helium and energy—quite a bit of energy. You’re alive today because of it, and the sun doesn’t produce a wisp of greenhouse gas in the process.

But to turn fusion into anything resembling an Earthling’s energy source, you need conditions that match the heart of the sun: millions of degrees in temperature. Creating a facsimile of that environment on Earth takes an immense amount of power—far eclipsing the amount of energy researchers usually end up producing.

Lasers aimed at a tiny target

For decades, scientists have struggled to answer one fundamental question: How do you fine-tune a fusion experiment to create the right conditions to actually gain energy?

NIF’s answer involves an arsenal of high-powered laser beams. First, experts stuff a peanut-sized, gold-plated, open-ended cylinder (known as a hohlraum) with a peppercorn-sized pellet containing deuterium and tritium, forms of hydrogen atoms that come with extra neutrons. 

Then, they fire a laser—which splits into 192 finely tuned beams that, in turn, enter the hohlraum from both ends and strike its inside wall. 

“We don’t just smack the target with all of the laser energy all at once,” says Annie Kritcher, a scientist at NIF, at the press conference. “We divide very specific powers at very specific times to achieve the desired conditions.”

As the chamber heats up to millions of degrees under the laser barrage, it starts producing a cascade of X-rays that violently wash over the fuel pellet. They shear off the pellet’s carbon outer shell and begin to compress the hydrogen inside—heating it to hundreds of millions of degrees—squeezing and crushing the atoms to pressures and densities higher than those at the center of the sun.

If all goes well, that kick-starts fusion.

Nuclear fusion energy experiment fuel source in a tiny metal capsule
This metal case, called a hohlraum, holds a tiny bit of fusion fuel. Eduard Dewald/LLNL

A new world record

When NIF launched in 2009, the fusion world record belonged to the Joint European Torus (JET) in the United Kingdom. In 1997, using a magnet-based method known as a tokamak, scientists at JET produced 67 percent of the energy they put in. 

That record stood for over two decades until late 2021, when NIF scientists bested it, reaching 70 percent. In its wake, many laser-watchers whispered the obvious question: Could NIF reach 100 percent? 

[Related: In 5 seconds, this fusion reactor made enough energy to power a home for a day]

But fusion is a notoriously delicate science, and the results of a given fusion experiment are difficult to predict. Any object that’s this hot will want to cool off against scientists’ wishes. Tiny, accidental differences in the setup—from the angles of the laser beams to slight flaws in the pellet shape—can make immense differences in a reaction’s outcome.

It’s for that reason that each NIF test, which takes about a billionth of a second, involves months of meticulous planning.

“All that work led up to a moment just after 1 a.m. last Monday, when we took a shot … and as the data started to come in, we saw the first indications that we’d produced more fusion energy than the laser input,” says Alex Zylstra, a scientist at NIF, at the press conference.

This time, the NIF’s laser pumped 2.05 megajoules into the pellet—and the pellet burst out 3.15 megajoules (enough to power the average American home for about 43 minutes). Not only have NIF scientists achieved that 100-percent ignition milestone, they’ve gone farther, reaching more than 150 percent.

“To be honest…we’re not surprised,” says Mike Donaldson, a systems engineer at General Fusion, a Vancouver, Canada-based private firm that aims to build a commercially viable fusion plant by the 2030s, who was not involved with the NIF experiment. “I’d say this is right on track. It’s really a culmination of lots of years of incremental progress, and I think it’s fantastic.”

But there’s a catch

These numbers only account for the energy delivered by the laser—omitting the fact that this laser, one of the largest and most intricate on the planet, needed about 300 megajoules from California’s electric grid to power on in the first place.

“The laser wasn’t designed to be efficient,” says Mark Herrmann, a scientist at LLNL, at the press conference. “The laser was designed to give as much juice as possible.” Balancing that energy-hungry laser may seem daunting, but researchers are optimistic. The laser was built on late-20th-century technology, and NIF leaders say they do see a pathway to making it more efficient and even more powerful. 
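The numbers reported in this story make the gap plain. Here is the arithmetic, with the average-household power draw (about 1.2 kilowatts) being my own ballpark assumption rather than a figure from the press conference:

```python
# Energy figures from the December 5 NIF shot, in megajoules.
LASER_IN_MJ = 2.05    # laser energy delivered to the target
FUSION_OUT_MJ = 3.15  # fusion energy released by the pellet
GRID_IN_MJ = 300      # electricity drawn to power the laser itself

target_gain = FUSION_OUT_MJ / LASER_IN_MJ    # the celebrated "ignition" ratio
wall_plug_gain = FUSION_OUT_MJ / GRID_IN_MJ  # the sobering facility-wide ratio

# Assumed average US household draw: ~1.2 kW (an illustrative ballpark).
HOME_WATTS = 1200
home_minutes = FUSION_OUT_MJ * 1e6 / HOME_WATTS / 60

print(f"Target gain:    {target_gain:.2f} ({target_gain:.0%})")
print(f"Wall-plug gain: {wall_plug_gain:.3f} (~1% of grid input)")
print(f"Output would run a {HOME_WATTS} W home for ~{home_minutes:.1f} minutes")
```

Measured at the target, the shot gained roughly 50 percent more energy than the laser put in; measured at the wall plug, the facility still consumed about 100 times more than it produced—the efficiency gap the researchers say a future laser design would need to close.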

Even if they do that, experts need to figure out how to fire repeated shots that gain energy. That’s another massive challenge, but it’s a key step toward making this a viable base for a power plant.

[Related: Inside France’s super-cooled, laser-powered nuclear test lab]

“Scientific results like today’s are fantastic,” says Donaldson. “We also need to focus on all the other challenges that are required to make fusion commercializable.”

A fusion power plant may very well involve a different technique. Many experimental reactors like JET and the under-construction ITER in southern France, in lieu of lasers, try to recreate the sun by using powerful magnets to shape and sculpt super-hot plasma within a specially designed chamber. Most of the private-sector fusion efforts that have mushroomed of late are keying their efforts toward magnetic methods, too.

In any event, it will be a long time before you read an article like this on a device powered by cheap fusion energy—but that day has likely come an important milestone closer.

“It’s been 60 years since ignition was first dreamed of using lasers,” Ma said at the press conference. “It’s really a testament to the perseverance and dedication of the folks that made this happen. It also means we have the perseverance to get to fusion energy on the grid.”

The post What the Energy Department’s laser breakthrough means for nuclear fusion appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Oldest DNA ever sampled paints a lush portrait of a lost Arctic world https://www.popsci.com/environment/oldest-dna-analysis-greenland/ Wed, 07 Dec 2022 19:31:26 +0000 https://www.popsci.com/?p=495947
Woolly mammoths in forests in Greenland after eDNA reconstruction. The scene is an illustration.
More than 2 million years ago, Greenland was really green. Beth Zaiken/bethzaiken.com

A long time ago when it was warmer up north, one corner of Greenland was actually green and teeming with life.

The post Oldest DNA ever sampled paints a lush portrait of a lost Arctic world appeared first on Popular Science.

]]>
Woolly mammoths in forests in Greenland after eDNA reconstruction. The scene is an illustration.
More than 2 million years ago, Greenland was really green. Beth Zaiken/bethzaiken.com

Despite its verdant name, Greenland is hardly the place you’d expect to find a garden. Larger than Mexico, the island is home to fewer than 60,000 people, most of whom live along the southwestern coast. And to no surprise: Most of the country’s vast interior is clad in ice, and in many places, the air is too dry to even make snow.

On the island’s desolate northern coast, however, researchers have pieced together an entire ancient ecosystem that looks drastically different from today’s. Long before modern humans arrived on its shores, mastodons walked through leafy forests and crabs thrived along corals.

Instead of using fossils to paint this lost world, geologists and biologists used a different kind of brush: 2.4 million-year-old DNA scraped from the land itself. In a paper published in the journal Nature today, researchers analyzed the oldest genetic material ever sampled to understand what a tiny corner of Greenland would have looked like when Earth’s climate was much warmer.

“It shows an ecosystem with no analogue today,” says Eske Willerslev, an ecologist at the University of Copenhagen in Denmark and Cambridge University in the UK and one of the authors of this study.

[Related: Airborne animal DNA could help biologists track endangered species]

Any kind of antiquated DNA is a rare find. While scientists usually extract it from living beings, that evidence can fade quickly after death. When an organism is alive, it will shed much of itself—from hair to poop to dead skin and many things in between—into its surroundings. The DNA in that detritus can tell scientists what species it came from, almost like a barcode. Scientists call these traces environmental DNA, or eDNA for short. 

“It’s kind of like the forensic analysis that they do in a crime scene,” says Mehrdad Hajibabaei, a biologist at the University of Guelph in Canada who was not involved in the new research. “You can identify organisms that live in that environment.”

The practice, which is still relatively new, has been nifty in monitoring biodiversity, finding new species, and tracking invasive species. But this study is the first to use eDNA to reconstruct an entire prehistoric ecosystem. Greenland’s harsh climate certainly helped.

Two scientists in hazmat suits on Greenland's rocky barren Peary Land
Eske Willerslev and a colleague sample sediments for environmental DNA in Greenland’s Peary Land. Courtesy of NOVA, HHMI Tangled Bank Studios & Handful of Films

In most circumstances, eDNA rapidly degrades into the dust of time. But in the right conditions—say, if kept dry, attached to minerals, or frozen—then it can last for thousands or perhaps even millions of years. The eDNA in the Nature analysis came from a place known as Peary Land, a peninsula on Greenland’s Arctic coast, only around 450 miles south of the North Pole. 

Willerslev and a few colleagues first dug up samples of Peary Land permafrost back in 2006. At first, they weren’t sure if they would be able to find any surviving eDNA from the substrate. Though the team constantly tried new ways of extracting clues from a haystack of minerals, many of their efforts were fruitless. “We revisited these samples, and we failed and failed until a couple of years ago,” Willerslev says.

So what changed? One of Willerslev’s collaborators, geobiologist Karina Sand, developed better methods for finding the specific minerals that carried eDNA. That allowed the researchers to target bits of clay and quartz, release the eDNA, and sequence it with technology that has come a long way since 2006. It also enabled them to study even smaller samples.

[Related: This is the most-complete woolly mammoth ever found in North America]

Before this paper, the oldest-known DNA came from two woolly mammoths preserved in Siberian ice, estimated to be between 1.2 and 1.7 million years old. But Willerslev and Sand measured their eDNA to be far older: around 2.4 million years old. 

Next, they started to sift through their eDNA fragments in search of matches. That wasn’t straightforward, either, given how life has evolved since the start of the Pleistocene. The eDNA in question didn’t have an exact match to animals today—or even animals in the fossil record. As a backup, the scientists had to identify close relatives, which pointed them to prehistoric creatures that have never been traced to the Arctic.

Scientist in hazmat suit handling Greenland sediment sample in lab
A researcher prepares a sediment core for sampling in Copenhagen, Denmark. Courtesy of NOVA, HHMI Tangled Bank Studios & Handful of Films

In their eDNA record, they found evidence of mastodons, relatives of modern elephants. They also found signs of animals linked to today’s reindeer, rodents, and geese, and critters connected to ants and fleas. They found remnants of marine life, too, including corals and horseshoe crabs that once lived in the Arctic waters. Some of the smaller animals still live in Greenland today, but for others, it’s just too cold. Indeed, climate records indicate that 2.4 million years ago, Greenland was some 20 to 34 degrees Fahrenheit warmer than it is today.

On the plant side, researchers pinpointed a curious mix of leafy, deciduous trees like poplars and birches (like you might find in Britain or the Eastern US) and Arctic shrubbery (like you might find in northern Canada). It was a combination of temperate and polar conditions that doesn’t really exist on Earth today.

Gloved hands holding eDNA sample in Greenland analysis
A researcher extracts samples from a sediment core for DNA sequencing. Courtesy of NOVA, HHMI Tangled Bank Studios & Handful of Films

“This is a very exciting study and an outstanding achievement by the authors, and it shows how far the field has come with regard to developing new tools to study such old environmental systems,” says Linda Armbrecht, a biologist at the University of Tasmania in Australia, who was not an author on the paper.

“It contributes hugely to our understanding of how Earth was different in regions like the Arctic,” says Hajibabaei. What’s more, he adds, this study can show scientists how to study ecosystems in a changing climate.

[Related: Inside the U.S. Army’s plan to build a luxurious city under the Arctic]

Indeed, Greenland’s lost world illustrates a different driver of climate change. Not long after the garden flourished, a sequence of ice ages set in. Arctic temperatures plunged, and Peary Land became more of a barren wasteland.

That situation is now playing in reverse—and much, much faster. All around the planet today, organisms are adapting to drastic warming at unprecedented speeds. Often, scientists assume that species can survive by moving northward from tropical and temperate zones. But that doesn’t mean herds of elephants and reindeer will end up mixing in Peary Land.

The post Oldest DNA ever sampled paints a lush portrait of a lost Arctic world appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The small, mighty, world-changing transistor turns 75 https://www.popsci.com/science/transistors-changed-the-world/ Sun, 04 Dec 2022 18:00:00 +0000 https://www.popsci.com/?p=493705
Japanese woman holding gold Sharp calculator with transistor
Sharp employee Ema Tanaka displays a gold-colored electronic calculator "EL-BN691," which is the Japanese electronics company's commemoration model, next to the world's first all-transistor/diode desktop calculator CS-10A, introduced in 1964. YOSHIKAZU TSUNO/AFP via Getty Images

Without this universal technology, our computers would probably be bulky vacuum-tube machines.

The post The small, mighty, world-changing transistor turns 75 appeared first on Popular Science.

]]>

It’s not an exaggeration to say that the modern world began 75 years ago in a nondescript office park in New Jersey.

This was the heyday of Bell Labs. Established as the research arm of a telephone company, it had become a playground for scientists and engineers by the 1940s. This office complex was the forge of innovation after innovation: radio telescopes, lasers, solar cells, and multiple programming languages. But none were as consequential as the transistor.

Some historians of technology have argued that the transistor, first crafted at Bell Labs in late 1947, is the most important invention in human history. Whether that’s true or not, what is without question is that the transistor helped trigger a revolution that digitized the world. Without the transistor, electronics as we know them could not exist. Almost everyone on Earth would be experiencing a vastly different day-to-day.

“Transistors have had a considerable impact in countries at all income levels,” says Manoj Saxena, senior member of the New Jersey-based Institute of Electrical and Electronics Engineers. “It is hard to overestimate the impact they have had on the lives of nearly every person on the planet,” Tod Sizer, a vice president at modern Nokia Bell Labs, writes in an email.

What is a transistor, anyway?

A transistor is, to put it simply, a device that can switch an electric current on or off. Think of it as an electric gate that can open and shut thousands upon thousands of times every second. Additionally, a transistor can boost current passing through it. Those abilities are fundamental for building all sorts of electronics, computers included.

Within the first decade of the transistor era, those powers were recognized when the three Bell Labs scientists who built that first transistor—William Shockley, John Bardeen, and Walter Brattain—won the 1956 Nobel Prize in Physics. (In later decades, much of the scientific community would condemn Shockley for his support of eugenics and racist ideas about IQ.)

Transistors are typically made from certain elements called semiconductors, which are useful for manipulating current. The first transistor, the size of a human palm, was fashioned from a metalloid, germanium. By the mid-1960s, most transistors were being made from silicon—the element just above germanium in the periodic table—and engineers were packing transistors together into complex integrated circuits: the foundation of computer chips.

[Related: Here’s the simple law behind your shrinking gadgets]

For decades, the development of transistors has stuck to a rule of thumb known as Moore’s law: The number of transistors you can pack into a state-of-the-art circuit doubles roughly every two years. Moore’s law has long been both a buzzword in the computer chip world and a cliché among engineers, but it still roughly abides today.
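
The doubling rule of thumb compounds quickly. Here is a minimal back-of-the-envelope sketch in Python; the starting count (roughly that of Intel's 4004-era chips) and the fixed two-year doubling period are illustrative assumptions, not exact historical data:

```python
# Back-of-the-envelope sketch of Moore's law. The starting count
# (~2,300 transistors, roughly Intel 4004-era) and the two-year
# doubling period are illustrative assumptions.

def transistors_after(years, start_count=2300, doubling_period=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period)

# Fifty years of doubling every two years means 25 doublings:
# 2,300 * 2**25 is roughly 77 billion -- the right order of
# magnitude for today's largest chips.
print(f"{transistors_after(50):.3e}")  # ~7.7e10
```

The point of the sketch is how unforgiving exponential growth is: the same rule run for just ten more years would predict another 32-fold jump.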

Modern transistors are just a few nanometers in size. The typical processor in the device you’re using to read this probably packs billions of transistors onto a chip smaller than a human fingernail. 

What would a world without the transistor be like?

To answer that question, we have to look at what the transistor replaced—it wasn’t the only device that could amplify current. 

Before its dominance, electronics relied on vacuum tubes: bulbs, typically made of glass, that held charged plates inside an airless interior. Vacuum tubes do have a few advantages over transistors: they can handle more power, and decades after the technology became obsolete, some audiophiles still swore that vacuum tube music players sounded better than their transistor counterparts. 

But vacuum tubes are very bulky and delicate (they tend to burn out quickly, just like incandescent light bulbs). Moreover, they often need time to “warm up,” making vacuum tube gadgets a bit like creaky old radiators. 

The transistor seemed to be a convenient replacement. “The inventors of the transistor themselves believed that the transistor might be used in some special instruments and possibly in military radio equipment,” says Ravi Todi, current president of the IEEE Electron Devices Society.

The earliest transistor gadget to hit the market was a hearing aid released in 1953. Soon after came the transistor radio, which became emblematic of the 1960s. Portable vacuum tube radios did exist, but without the transistor, handheld radios likely would never have become the ubiquitous devices that first let people listen to music on the go.

Martin Luther King Jr listens to a transistor radio.
Civil rights activist Martin Luther King Jr listens to a transistor radio during the third march from Selma to Montgomery, Alabama, in 1965. William Lovelace/Daily Express/Hulton Archive/Getty Images

But even in the early years of the transistor era, these devices started to skyrocket in number—and in some cases, literally. The Apollo program’s onboard computer, which helped astronauts orient their ship through maneuvers in space, was built with transistors. Without it, engineers would either have had to fit a bulky vacuum tube device onto a cramped spacecraft, or astronauts would have had to rely on tedious commands from the ground.

Transistors had already begun revolutionizing computers themselves. A computer built just before the start of the transistor era—ENIAC, designed to conduct research for the US military—used 18,000 vacuum tubes and filled up a space the size of a ballroom.

Vacuum tube computers squeezed into smaller rooms over time. Even then, 1951’s UNIVAC I cost over a million dollars (not accounting for inflation), and its customers were large businesses or data-heavy government agencies like the Census Bureau. It wouldn’t be until the 1970s and 1980s when personal computers, powered by transistors, started to enter middle-class homes.

Without transistors, we might live in a world where a computer is something you’d use at work—not at home. Forget smartphones, handheld navigation, flatscreen displays, electronic timing screens in train stations, or even humble digital watches. All of those need transistors to work.

“The transistor is fundamental for all modern technology, including telecommunications, data communications, aviation, and audio and video recording equipment,” says Todi.

What do the next 75 years of transistor technologies look like?

It’s hard to deny that the world of 2022 looks vastly different from the world of 1947, largely thanks to transistors. So what should we expect from transistors 75 years in the future, in the world of 2097?

It’s hard to say with any certainty. Almost all transistors today are made with silicon—that’s how Silicon Valley got its name. But how long will that last? 

[Related: The trick to a more powerful computer chip? Going vertical.]

Silicon transistors are now small enough that engineers aren’t sure how much smaller they can get, suggesting Moore’s law may finally be approaching its end. And energy-conscious researchers want to make computer chips that use less power, partly in hopes of reducing the carbon footprint of data centers and other large facilities.

A growing number of researchers are thinking up alternatives to silicon. They’re designing computer chips that harness weird quantum effects and tiny bits of magnets. They’re looking at alternative materials ranging from germanium to exotic forms of carbon. Which of these, if any, may one day replace the silicon transistor? That isn’t certain yet.

“No one technology can meet all needs,” says Saxena. And it’s very possible that the defining technology of the 2090s hasn’t been invented yet.

The post The small, mighty, world-changing transistor turns 75 appeared first on Popular Science.


]]>
The most awesome aerospace innovations of 2022 https://www.popsci.com/technology/best-aerospace-innovations-2022/ Thu, 01 Dec 2022 15:00:00 +0000 https://www.popsci.com/?p=490866
It's the Best of What's New.
It's the Best of What's New. NASA

Game-changing new developments in space, a “Parallel Reality” on the ground, and more innovations are the Best of What’s New.

The post The most awesome aerospace innovations of 2022 appeared first on Popular Science.

]]>

In space, no one can hear a probe smash into an asteroid—but that’s just what happened in September, when NASA’s successful DART experiment proved that it’s possible to reroute a space rock by crashing into it on purpose. And that wasn’t even the most important event to materialize in space this year—more on the James Webb Space Telescope in a moment. Back on Earth, innovation also reached new heights in the aviation industry, as a unique electric airplane took off, as did a Black Hawk helicopter that can fly itself. 

Looking for the complete list of 100 winners? Check it out here.

Innovation of the Year

The James Webb Space Telescope by NASA: A game-changing new instrument to see the cosmos 

Once a generation, an astronomical tool arrives that surpasses everything that came before it. NASA’s James Webb Space Telescope (JWST) is just such a creation. After more than two decades and $9.7 billion in the making, JWST launched on December 25, 2021. Since February of this year, when it first started imaging—employing a mirror and aperture nearly three times larger in radius than its predecessor, the Hubble Space Telescope—JWST’s vibrant images have captured the attention of the world.
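
That “nearly three times larger in radius” figure understates the jump in light-gathering power, since a telescope’s collecting area scales with the square of the radius. A quick sanity check, using the approximate public primary-mirror diameters (about 6.5 meters for JWST versus 2.4 meters for Hubble; JWST’s segmented mirror isn’t a true circle, so this is a rough estimate):

```python
# Collecting area scales with radius squared, so a mirror ~2.7x
# wider gathers ~7x more light per exposure. Diameters are
# approximate public figures, and JWST's hexagonal-segment mirror
# isn't a perfect circle, so treat this as an order-of-magnitude check.
jwst_diameter_m = 6.5
hubble_diameter_m = 2.4

radius_ratio = jwst_diameter_m / hubble_diameter_m   # ~2.7: "nearly three times"
area_ratio = radius_ratio ** 2                       # ~7.3x the collecting area

print(f"radius ratio: {radius_ratio:.2f}, area ratio: {area_ratio:.2f}")
```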

The JWST can see deep into fields of forming stars. It can peer 13 billion years back in time at ancient galaxies, still in their nursery. It can peek at exoplanets, seeing them directly where astronomers would have once had to reconstruct meager traces of their existence. It can teach us about how those stars and galaxies came together from primordial matter, something Hubble could only glimpse.

While Hubble circles in low Earth orbit, JWST sits nearly a million miles farther away, near a fascinating location called L2—one of several points where the gravitational pulls of the sun and Earth combine to let a spacecraft keep pace with our planet. There, tucked behind a multi-layer sunshield thinner than a human fingernail, the telescope’s optics never see direct sunlight and chill at -370 degrees F, where JWST’s infrared sight works best. 

All this might just be JWST’s prologue. Since the telescope used less fuel than initially anticipated when reaching its perch, the instrument might have enough to last well past its anticipated 10-year-long window. We can’t wait to see what else it dazzles us with.

Parallel Reality by Delta: A screen customized for you

You’ve probably found yourself running through an airport at some point, squinting up at a screen filled with rows of flight information. A futuristic new offering from Delta and a startup called Misapplied Sciences aims to change that. At Detroit Metro Airport, an installation can show travelers customized information for their flight. A scan of your boarding pass in McNamara Terminal is one way to tell the system who you are. Then, when you look at the overhead screen, you see that it displays only personalized data about your journey, like which gate you need to find. The system works because each pixel in the display can beam light in any one of 18,000 directions, meaning many different people can see distinct information while looking at the same screen at the same time. 

Electronic bag tags by Alaska Airlines: The last tag you’ll need (for one airline)

Alaska Airlines

Believe it or not, some travelers do still check bags, and a new offering from this Seattle-based airline aims to make that process easier. Flyers with an electronic bag tag from Alaska Airlines (at first, 2,500 members of its frequent flier plan will get them; in 2023 they’ll be available to buy) can use their mobile phone to create the appropriate luggage tag on the device’s e-ink display while at home, up to 24 hours before a flight. The 5-inch-long tag itself gets the power it needs to generate the information on the screen from your phone, thanks to an NFC connection. After travelers complete this step at home, they just need to drop the tagged bag off in the right place at the airport, skipping the line to get a tag. 

Alice by Eviation: A totally electric commuter airplane 

Eviation

The aviation industry is a major producer of carbon emissions. One way to try to solve that problem is to run aircraft on electric power, utilizing them just for short hops. That’s what Eviation aims to do with a plane called Alice: 8,000 pounds of batteries in the belly of this commuter aircraft give its two motors the power it needs to fly. In fact, it made its first flight in September, a scant but successful eight minutes in the air. Someday, as battery tech improves, the company hopes that it can carry nine passengers for distances of 200 miles or so. 

OPV Black Hawk by Sikorsky: A military helicopter that flies itself 

Sikorsky

Two pilots sit up front at the controls of the Army’s Black Hawk helicopters, but what if that number could be zero for missions that are especially hazardous? That’s exactly what a modified UH-60 helicopter can do, a product of a DARPA program called ALIAS, which stands for Aircrew Labor In-Cockpit Automation System. The self-flying whirlybird made its first flights with zero occupants on board in February, and in October, it took flight again, even carrying a 2,600-pound load beneath it. The technology comes from helicopter-maker Sikorsky, and allows the modified UH-60 to be flown by two pilots, one pilot, or zero. The idea is that this type of autonomy can help in several ways: to assist the one or two humans at the controls, or as a way for an uninhabited helicopter to execute tasks like flying somewhere dangerous to deliver supplies without putting any people on board at risk. 

Detect and Avoid by Zipline: Drones that can listen for in-flight obstacles

Zipline

As drones and other small aircraft continue to fill the skies, all parties involved have an interest in avoiding collisions. But figuring out the best way for a drone to detect potential obstacles isn’t an easy problem to solve, especially since there are no pilots on board to keep their eyes out and weight is at a premium. Drone delivery company Zipline has turned to using sound, not sight, to solve this conundrum. Eight microphones on the drone’s wing listen for traffic like an approaching small plane, and can preemptively change the UAV’s route to get out of the way before it arrives. An onboard GPU and AI help with the task, too. While the company is still waiting for regulatory approval to totally switch the system on, the technique represents a solid approach to an important issue.

DART by NASA and Johns Hopkins Applied Physics Laboratory: Smashing into an asteroid, for good 

Earthlings who look at the sky in fear that a space rock might tumble down and devastate our world can now breathe a sigh of relief. On September 26, a 1,100-pound spacecraft streaked into a roughly 525-foot-diameter asteroid, Dimorphos, intentionally crashing into it at over 14,000 mph. NASA confirmed on October 11 that the Double Asteroid Redirection Test (DART)’s impact altered Dimorphos’s orbit around its companion asteroid, Didymos, even more than anticipated. Thanks to DART, humans have redirected an asteroid for the first time. The dramatic experiment gives astronomers hope that perhaps we could do it again to avert an apocalypse.

CAPSTONE by Advanced Space: A small vessel on a big journey

Advanced Space

Some lunar craft fill up whole rooms. On the other hand, there’s CAPSTONE, a satellite that can fit on a desk. Despite control issues, CAPSTONE—which launched on June 28—triumphantly entered lunar orbit on November 13. This small traveler is a CubeSat, an affordable design of mini-satellite that’s helped make space accessible to universities, small companies, and countries without major space programs. Hundreds of CubeSats now populate the Earth’s orbit, and although some have hitched rides to Mars, none have made the trip to the moon under their own power—until CAPSTONE. More low-cost lunar flights, its creators hope, may follow.

The LSST Camera by SLAC/Vera C. Rubin Observatory: A 3,200-megapixel camera

SLAC/Vera C. Rubin Observatory

Very soon, the Vera C. Rubin Observatory in the high desert of Northern Chile will provide astronomers with what will be nearly a live-feed view of the southern hemisphere’s sky. To do that, it will rely on the world’s largest camera—with a lens 5 feet across and matching shutters, it will be capable of taking images that are an astounding 3,200 megapixels. The camera’s crafters are currently placing the finishing touches on it, but their impressive engineering feats aren’t done yet: In May 2023, the camera will fly down to Chile in a Boeing 747, before traveling by truck to its final destination.

The Event Horizon Telescope by the EHT Collaboration: Seeing the black hole in the Milky Way’s center

Just a few decades ago, Sagittarius A*, the supermassive black hole at our galaxy’s heart, was a hazy concept. Now, thanks to the Event Horizon Telescope (EHT), we have a blurry image of it—or, since a black hole doesn’t let out light, of its surrounding accretion disc. The EHT is actually a global network of radio telescopes stretching from Germany to Hawaii, and from Chile to the South Pole. EHT released the image in May, following years of painstaking reconstruction by over 300 scientists, who learned much about the black hole’s inner workings in the process. This is EHT’s second black hole image, following its 2019 portrait of a behemoth in the galaxy M87.

Starliner by Boeing: A new way of getting to the ISS 

Boeing

After years of budget issues, technical delays, and testing failures, Boeing’s much-awaited Starliner crew capsule finally took to the skies and made it to its destination. An uncrewed test launch in May successfully departed Florida, docked at the International Space Station (ISS), and landed back on Earth. Now, Boeing and NASA are preparing for Starliner’s first crewed test, set to launch sometime in 2023. When that happens, Starliner will take its place alongside SpaceX’s Crew Dragon, and NASA will have more than one option to get astronauts into orbit. There are a few differences between the two: Where Crew Dragon splashes down in the sea, Starliner touches down on land, making it easier to recover. And, where Crew Dragon was designed to launch on SpaceX’s own Falcon 9 rockets, Starliner is more flexible. 

The post The most awesome aerospace innovations of 2022 appeared first on Popular Science.


]]>
Astronomers now know how supermassive black holes blast us with energy https://www.popsci.com/science/black-hole-light-energy-x-ray/ Wed, 23 Nov 2022 18:54:41 +0000 https://www.popsci.com/?p=490856
Black hole shooting beam of energy out speed of light and being caught by a space telescope in an illustration. There's an inset showing blue and purple electromagnetic waves,
This illustration shows the IXPE spacecraft, at right, observing blazar Markarian 501, at left. A blazar is a black hole surrounded by a disk of gas and dust with a bright jet of high-energy particles pointed toward Earth. The inset illustration shows high-energy particles in the jet (blue). Pablo Garcia (NASA/MSFC)

An extreme particle accelerator millions of light-years away is directing immensely fast electromagnetic waves at Earth.

The post Astronomers now know how supermassive black holes blast us with energy appeared first on Popular Science.

]]>


Some 450 million light-years away from Earth in the constellation Hercules lies a galaxy named Markarian 501. In the visible-light images we have of it, Markarian 501 looks like a simple, uninteresting blob.

But looks can be deceiving, especially in space. Markarian 501 is a launchpad for charged particles traveling near the speed of light. From the galaxy’s heart erupts a bright jet of high-energy particles and radiation, rushing right in Earth’s direction. That makes it a perfect natural laboratory to study those accelerating particles—if only scientists could understand what causes them.

In a paper published in the journal Nature today, astronomers have been able to take a never-before-seen look deep into the heart of one of those jets and see what drives those particles out in the first place. “This is the first time we are able to directly test models of particle acceleration,” says Yannis Liodakis, an astronomer at the University of Turku in Finland and the paper’s lead author.

Markarian 501 is a literally shining example of a special class of galaxy called a blazar. What makes this galaxy so bright is the supermassive black hole at its center. That gravity-dense region spews a colossal wellspring of high-energy particles, forming a jet that travels very near the speed of light and stretches across millions of light-years.

Many galaxies have supermassive black holes that spew out jets like this—they’re what astronomers call active galactic nuclei. But blazars like Markarian 501 are defined by the fact that their jets point in Earth’s general direction. Astronomers can use telescopes trained on one to look upstream and get a clear view of a constant torrent of particles radiating across every part of the electromagnetic spectrum, from bright radio waves to visible light to blazing gamma rays.

[Related: You’ve probably never heard of terahertz waves, but they could change your life]

A blazar can spread its influence far beyond its own corner of the universe. For instance, a detector buried under the Antarctic ice caught a neutrino—a ghostly, low-mass particle that does its best to elude physicists—coming from a blazar called TXS 0506+056. It was the first time researchers had ever traced a high-energy neutrino alighting on Earth back to an identified source beyond our galaxy (and one some 5 billion light-years away, at that).

But what actually causes a supermassive black hole’s jet to blast out light and other electromagnetic waves? What happens inside that jet? If you were surfing inside of it, what exactly would you feel and see?

Scientists want to know these answers, too, and not just because they make for a fun, extreme thought experiment. Blazars are natural particle accelerators, and they’re far larger and more powerful than any accelerator we can currently hope to build on Earth. By analyzing the dynamics of a blazar jet, they can learn what natural processes can accelerate matter to near the speed of light. What’s more, Markarian 501 is one of the more desirable blazars to study, given that it’s relatively close to the Earth, at least compared to other blazars that can be many billions of light-years farther still.

[Related: What would happen if you fell into a black hole?]

So, Liodakis and dozens of colleagues from around the world took to observing it. They used the Imaging X-ray Polarization Explorer (IXPE), a jellyfish-like telescope launched by NASA in December 2021, to look down the length of that jet. In particular, IXPE measured whether the distant X-rays were polarized—that is, how their electromagnetic waves are oriented in space. The waves from a light bulb, for instance, aren’t polarized—they wiggle every which way. The waves from an LCD screen, on the other hand, are polarized and only wiggle in one direction, which is why you can pull tricks like making your screen invisible to everyone else. 

Back in the sky, if astronomers know the polarization of a source like a black hole’s jet, they might be able to reconstruct what happened at it. Liodakis and his colleagues had some idea of what to expect, because experts in their field had previously spent years modeling and simulating jets on their computers. “This was the first time we were able to directly test the predictions from those models,” he explains.

They found that the culprits were shockwaves: fronts of fast-moving particles crashing into slower-moving particles, speeding them along like flotsam pushed by rushing water. The violent crashes created the X-rays that the astronomers saw in IXPE’s readings.

It’s the first time that astronomers have used the X-ray polarization method to get results like these. “This is really a breakthrough in our understanding of these sources,” says Liodakis.

In an accompanying perspective in Nature, Lea Marcotulli, an astrophysicist at Yale University who wasn’t an author on the paper, called the result “dazzling.” “This huge leap forward brings us yet another step closer to understanding these extreme particle accelerators,” she wrote.

Of course, there are still many unanswered questions surrounding the jets. Do these shockwaves account for all the particles accelerating from Markarian 501’s black hole? And do other blazars and galaxies have shockwaves like them?

Liodakis says his group will continue to study the X-rays from Markarian 501, at least into 2023. With an object this dazzling, it’s hard to look away.

The post Astronomers now know how supermassive black holes blast us with energy appeared first on Popular Science.


]]>
Two meteorite mysteries are helping astronomers investigate the origins of life https://www.popsci.com/science/winchcombe-meteorite-water-amino-acids/ Wed, 16 Nov 2022 18:53:13 +0000 https://www.popsci.com/?p=487869
Winchcombe meteorite fragment in a purple-gloved hand for chemical analysis
Fragments of the Winchcombe meteorite helped shed light on the space rock's age, origins, and life-giving contents. Trustees of the Natural History Museum

Rare rocks known as carbonaceous chondrites really are as old as time—and that's what makes them priceless.

The post Two meteorite mysteries are helping astronomers investigate the origins of life appeared first on Popular Science.

]]>

On a chilly February night in 2021, Winchcombe, a garden-girdled market town nestled in the gentle hills of southwest England, lit up with a fireball and a sonic boom as a meteorite streaked across the sky.

Astronomers knew, instantly, the value of the rock they had just been granted. They got the word out quickly. In the following days, locals and scientists alike combed through hedges, fields, and driveways, collecting more than a pound of extraterrestrial fragments in all.

[Related: Hunt for meteorites in your own yard]

Experts had good reason to be excited about this particular fireball, which, as it turns out, is far more ancient than Winchcombe’s 15th-century castle. This meteorite was a 4.5-billion-year-old relic from the very first days of the solar system. Studying debris like these can help researchers understand what was happening as the planets were still forming, including how much water might have once coated a world like Mars.

“The Winchcombe meteorite contains all the ingredients—water and organic molecules—that are needed to kickstart oceans and life on Earth,” says Ashley King, an earth scientist at the Natural History Museum in London. He was one of dozens of scientists to publish their findings on the Winchcombe meteorite in the journal Science Advances today.

The British rubble provides an example of what astronomers call a carbonaceous chondrite. These rocks really are as old as time: They likely formed at the dawn of the solar system, in its outer reaches, before eventually falling toward the inner planets.

Scientists in hats and jackets laying in a field looking at meteorite fragments
Scientists combing a field in Winchcombe, England, for meteorite fragments. Mira Ihasz, Spire Global & The University of Glasgow

Carbonaceous chondrites have a much higher carbon content than most of their fellow space rocks. They’re also spiced with a healthy pinch of what astronomers call volatiles: substances like methane, nitrogen, carbon dioxide, and, yes, water, all of which are frozen in space but can readily turn into gas under the heat of the inner solar system.

Carbonaceous chondrites don’t often come to Earth; they account for a tiny fraction of the thousands of meteorites that are collected for study on our planet. One plummeted to the ground in Denmark in 2009; another crashed in California in 2012. Those two examples seem to have followed arcs similar to the Winchcombe rock’s, potentially hinting that all three may share an origin story.

To find material from rocks that old, astronomers often have to send couriers off-world. Hayabusa2, a spacecraft launched from Japan in 2014, returned to Earth six years later with samples from 162173 Ryugu, a near-Earth asteroid. NASA’s OSIRIS-REx, launched in 2016, is set to return in 2023 with similar samples from another near-Earth asteroid, 101955 Bennu. Both asteroids are suspected to be made of carbonaceous chondrite-like material.

The Winchcombe rock saved space agencies the trouble by coming to Earth instead. More than that, because locals caught the meteorite’s course with their doorbell cameras and dashcams, astronomers had no trouble reconstructing the rock’s arc through the atmosphere.

“Since we know the pre-atmospheric orbit of the original rock and the meteorite was recovered only hours after landing, it’s been a little bit like having our own ‘natural’ sample return mission from an asteroid,” says King.

Because the fragments came from a known environment, astronomers could also confidently determine which bits came from the meteorite and which came from, say, a driveway. The pieces were also retrieved within days, which meant any contamination was kept to a minimum. “The Winchcombe [rock] is pristine, unmodified by the terrestrial environment, and gives us a chance to look back through time,” says King.

“We were able to make a really exciting measurement of the composition of the water in Winchcombe and know it was 100 percent extraterrestrial,” says Luke Daly, an earth scientist at the University of Glasgow in Scotland and one of King’s co-authors.

The scientists didn’t just find water in their time capsule—they also detected carbon- and oxygen-containing compounds, including amino acids, the building blocks of life on Earth.

[Related: Meteorites older than the solar system contain key ingredients for life]

Although Winchcombe’s rock is a rarity on Earth, the early solar system would have been teeming with debris like this. In many ways, these rocks are the leftovers: material that didn’t get eaten by growing young planets. Back then, carbonaceous chondrite after carbonaceous chondrite would have encroached on the inner planets and battered their surfaces. So, it’s possible that rocks like Winchcombe helped deliver water and amino acids to Earth, as well as to Mercury, Venus, and Mars. That raises another question: How much life-giving material did they carry?

To answer that, a group of researchers from France and Denmark examined a very different sort of meteorite: fragments of Mars, broken off and cast away until they fell to Earth. There are about 200 known examples of such meteorites, which give scientists a glimpse into Martian history from the comfort of their own home world. The resulting study was also published in Science Advances today.

There’s a particular red flag in those Mars-borne rocks: chromium. It’s not that this heavy metal is unknown to the Red Planet—but one specific isotope, chromium-54, isn’t naturally found in the crust. In fact, chromium-54’s most likely origin is, indeed, chondrites. From the levels of chromium in this sample of meteorites, experts can estimate the number of chondrites that crashed into Mars.

“This allows us to place a firm estimate on the minimum amount of water that must have been present on Mars,” says Martin Bizzarro, an astronomer at the University of Copenhagen in Denmark. He and colleagues concluded that the chondrites that struck Mars, combined with water vapor rising from the planet’s churning interior, might have flooded it in an ocean nearly a thousand feet deep.
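For readers curious about the shape of that calculation, here is a toy mass-balance sketch in Python. The delivered chondrite mass and water fraction below are illustrative assumptions chosen to land near the study’s reported figure; they are not numbers from the paper itself.

```python
# Toy mass-balance sketch of the chondrite-delivered-water logic described
# above. The delivered mass and water fraction are illustrative assumptions,
# NOT figures from the study.
import math

MARS_RADIUS_M = 3.3895e6   # mean radius of Mars, in meters
WATER_DENSITY = 1000.0     # kg per cubic meter

def global_ocean_depth_m(delivered_mass_kg: float, water_fraction: float) -> float:
    """Depth of a global ocean if the water carried by the delivered
    chondrites were spread evenly over the Martian surface."""
    surface_area = 4.0 * math.pi * MARS_RADIUS_M**2
    water_volume = delivered_mass_kg * water_fraction / WATER_DENSITY
    return water_volume / surface_area

# e.g. ~4.3e20 kg of chondrites that are ~10 percent water by mass
# yields a global ocean roughly 300 m (about 1,000 ft) deep.
depth = global_ocean_depth_m(4.3e20, 0.10)
print(f"{depth:.0f} m")
```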

Asteroid or comet flying toward Mars in illustration
Artist’s impression of an asteroid or comet falling into the Martian atmosphere millions of years ago. Detlev van Ravenswaay/Science Source

“This study looks really exciting and adds further support to the hypothesis that water-rich asteroids were the main source of volatiles to the terrestrial planets,” says King, who was not an author of the Martian rock study.

As for the Winchcombe meteorite, the scientists behind that paper have barely scratched the surface of the rocks that fell from the sky and almost right into their lap. It’s a window into a period with few hard clues—the oldest known Earth rocks, for instance, are only around 4 billion years old.

“There is so much more exciting science to come out of this stone,” says Daly. “It’s impossible to cover it all.”

The post Two meteorite mysteries are helping astronomers investigate the origins of life appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Magnets might be the future of nuclear fusion https://www.popsci.com/science/nuclear-fusion-magnet-nif/ Fri, 11 Nov 2022 11:00:00 +0000 https://www.popsci.com/?p=486077
A hohlraum at the Lawrence Livermore National Laboratory.
A target at the NIF, pictured here in 2008, includes the cylindrical fuel container called a hohlraum. Lawrence Livermore National Laboratory

When shooting lasers at a nuclear fusion target, magnets give you a major energy increase.

The post Magnets might be the future of nuclear fusion appeared first on Popular Science.

For scientists and dreamers alike, one of the greatest hopes for a future of bountiful energy is nestled in a winery-coated vale east of San Francisco. 

Here lies the National Ignition Facility (NIF) in California’s Lawrence Livermore National Laboratory. Inside NIF’s boxy walls, scientists are working to create nuclear fusion, the same physics that powers the sun. About a year ago, NIF scientists came closer than anyone to a key checkpoint in the quest for fusion: creating more energy than was put in.

Unfortunately—in an outcome all too familiar to fusion watchers—that world would have to wait. In the months after the achievement, NIF scientists weren’t able to replicate their feat. 

But they haven’t given up. And a recent paper, published in the journal Physical Review Letters on November 4, might bring them one step closer to cracking a problem that has confounded energy-seekers for decades. Their latest trick: lighting up fusion within the flux of a strong magnetic field. 

Fusion power, to put it simply, aims to ape the sun’s interior. By smashing certain hydrogen atoms together and making them stick, you get helium and a lot of energy. The catch is that making atoms stick together requires very high temperatures—which, in turn, requires fusion operators to spend incredible amounts of energy in the first place. 

[Related: In 5 seconds, this fusion reactor made enough energy to power a home for a day]

Before you can even think about making a feasible fusion power plant, you need to somehow create more energy than you put in. That tipping point—a point that plasma physicists call ignition—has been fusion’s longest-sought goal.

The NIF’s container of choice is a gold-plated cylinder, smaller than a human fingernail. Scientists call that cylinder a hohlraum; it houses a peppercorn-sized pellet of hydrogen fuel.

At fusion time, scientists fire finely tuned laser beams at the hohlraum—in NIF’s case, 192 beams in all—energizing the cylinder enough to evoke violent X-rays within. In turn, those X-rays wash over the pellet, squeezing and battering it into an implosion that fuses hydrogen atoms together. That, at least, is the hope.

NIF used this method to achieve its smashing result in late 2021: creating some 70 percent of the energy put in, far and away the record at the time. For plasma physicists, it was a siren call. “It has breathed new enthusiasm into the community,” says Matt Zepf, a physicist at the Helmholtz Institute Jena in Germany. Fusion-folk wondered: Could NIF do it again?
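The arithmetic behind that 70 percent figure is simple. Here is a quick sketch using the widely reported values for NIF’s August 2021 shot (roughly 1.35 megajoules of fusion yield from roughly 1.9 megajoules of laser energy); treat both numbers as approximate.

```python
# "How close to break-even" arithmetic for a laser fusion shot.
# Energy figures are the widely reported values for NIF's August 2021
# experiment; treat them as approximate.
LASER_ENERGY_MJ = 1.9    # energy delivered by the 192 laser beams
FUSION_YIELD_MJ = 1.35   # energy released by the imploding fuel pellet

gain = FUSION_YIELD_MJ / LASER_ENERGY_MJ  # ignition means gain >= 1
print(f"Q = {gain:.2f}")
```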

As it happens, they would have to wait. Subsequent laser shots didn’t come close to matching the original. Part of the problem is that, even with all the knowledge and capabilities they have, scientists have a very hard time predicting what exactly a shot will do.

[Related: Nuclear power’s biggest problem could have a small solution]

“NIF implosions are currently showing significant fluctuations in their performance, which is caused by slight variations in the target quality and laser quality,” says John Moody, a physicist at NIF. “The targets are very, very good, but slight imperfections can have a big effect.”

Physicists could continue fine-tuning their laser or tinkering with their fuel pellet. But there might be a third way to improve that performance: bathing the hohlraum and its fuel pellet in a magnetic field.

Tests with other machines, such as the OMEGA laser in Rochester, New York, and the Z machine at Sandia National Laboratories in New Mexico, had shown that this method could prove fruitful. Moreover, computer simulations of NIF’s own laser suggested that a magnetic field could double the energy of NIF’s best-performing shots. 

“Pre-magnetized fuel will allow us to get good performance even with targets or laser delivery that is a little off of what we want,” says Moody, one of the paper’s authors.

So NIF scientists decided to try it out themselves.

They had to swap out the hohlraum first. Pure gold wouldn’t do well—putting the metal under a magnetic field like theirs would create electric currents in the cylinder walls, tearing it apart. So the scientists crafted a new cylinder, forged from an alloy of gold and tantalum, a rare metal found in some electronics.

Then, the scientists stuffed their new hohlraum with a hydrogen pellet, switched on the magnetic field, and lined up a shot.

As it happened, the magnetic field indeed made a difference. Compared to similar magnetless shots, the energy increased threefold. It was a low-powered test shot, to be sure, but the results give scientists a new glimmer of hope. “The paper marks a major achievement,” says Zepf, who was not an author of the report.

Still, the results are early days, “essentially learning to walk before we run,” Moody cautions. Next, the NIF scientists will try to replicate the experiment with other laser setups. If they can do that, they’ll know they can add a magnetic field to a wide range of shots.

As with anything in this misty plane of physics, this alone won’t be enough to solve all of fusion’s problems. Even if NIF does achieve ignition, afterward comes phase two: creating significantly more energy than you put in, something that physicists call “gain.” Especially for a laser of NIF’s limited size, says Zepf, that is an even more forbidding quest.

Nonetheless, the eyes of the fusion world will be watching. Zepf says that NIF’s results can teach similar facilities around the world how to get the most from their laser shots.

Achieving a high enough gain is a prerequisite for a phase that’s even further into the future: actually turning the heat of fusion power into a feasible power plant design. That’s still another step for plasma physicists—and it’s a project that the fusion community is already working on.

This far-off galaxy is probably shooting us with oodles of ghostly particles https://www.popsci.com/science/icecube-neutrino-spiral-galaxy/ Thu, 03 Nov 2022 18:00:00 +0000 https://www.popsci.com/?p=483938
The center of Messier 77's spiral galaxy.
The center of the galaxy NGC 1068 (also known as Messier 77) where neutrinos may originate, as captured by the Hubble Space Telescope. NASA, ESA & A. van der Hoeven

A sophisticated experiment buried under Antarctica is tracing neutrinos to their extraterrestrial origin.

The post This far-off galaxy is probably shooting us with oodles of ghostly particles appeared first on Popular Science.

Deep under the South Pole sits an icebound forest of wiring called IceCube. It’s no cube: IceCube is a hexagonal formation of kilometer-deep holes in the ice, drilled with hot water and filled with electronics. Its purpose is to pick up flickers of neutrinos—ghostly little particles that often come from space and phase through Earth with hardly a trace. 

Four years ago, IceCube helped scientists find their first hint of a neutrino source outside our solar system. Now, for the second time, IceCube scientists have pinpointed a fountain of far-traveler neutrinos, originating from NGC 1068, a galaxy about 47 million light-years away.

Their results, published in the journal Science on November 3, need further confirmation. But if these observations are correct, they’re a key step in helping astronomers understand where in the universe those neutrinos originate. And, curiously, NGC 1068 is very different from IceCube’s first suspect.

Neutrinos are little phantoms. By some estimates, 100 trillion pass through your body every single second—and virtually none of them interact with your body’s atoms. Unlike charged particles such as protons and electrons, neutrinos are immune to the pulling and pushing of electromagnetism. Neutrinos have so little mass that, for many years, physicists thought they had no mass at all.

Most neutrinos that we see from space spew out from the sun. But scientists are especially interested in the even more elusive breed of neutrinos that come from outside the solar system. For astronomers, IceCube represents a wish: If researchers can observe far-flung neutrinos, they can use them to see through gas and dust clouds, which light typically doesn’t pass through.

[Related: We may finally know where the ‘ghost particles’ that surround us come from]

IceCube’s mission is to find those neutrinos, which reach Earth with far more energy than any solar neutrino. Although it sits at the South Pole, IceCube actually focuses on neutrinos striking the Northern Hemisphere. Its detectors try to discern the direction a neutrino is traveling. If IceCube detects particles pointing downward, scientists struggle to distinguish them from the raging static of cosmic radiation that constantly batters Earth’s atmosphere. If IceCube detects particles pointing upward, on the other hand, scientists know that they’ve come from the north, having passed through the bulk of our planet before striking the icebound detectors.

“We discovered neutrinos reaching us from the cosmos in 2013,” says Francis Halzen, a physicist at the University of Wisconsin-Madison and part of the IceCube collaboration who authored the paper, “which raised the question of where they originate.”

Finding neutrinos is already hard; finding where they come from is orders of magnitude harder. Identifying a neutrino origin involves painstaking data analysis that can take years to complete.

Crucially, this isn’t IceCube’s first identification. In 2018, scientists comparing IceCube data to observations from traditional telescopes pinpointed one possible neutrino source, more than 5 billion light-years away: TXS 0506+056. It is an example of what astronomers call a blazar: a distant, high-energy galaxy with a central black hole that spews out a jet directly in Earth’s direction. It’s loud, bright, and the exact sort of object that astronomers thought created neutrinos. 

But not everybody was convinced they had the whole picture.

“The interpretation has been under debate,” says Kohta Murase, a physicist at Pennsylvania State University, who wasn’t an author of the new paper. “Many researchers think that other source classes are necessary to explain the origin of high-energy neutrinos coming from different directions over the sky.”

So IceCube scientists got to work. They combed through nine years’ worth of IceCube observations, from 2011 to 2020. Since blazars such as TXS 0506+56 tend to spew out torrents of gamma rays, the researchers tried to match the neutrinos with known gamma-ray sources.

As it happened, the source they found wasn’t the gamma-ray source they expected.

[Related: This ghostly particle may be why dark matter keeps eluding us]

NGC 1068 (also known as M77), located some 47 million light-years from us, is not unlike our own galaxy. Like the Milky Way, it’s shaped like a spiral. Like the Milky Way, it has a supermassive black hole at its heart. Some astronomers had suspected it as a neutrino source, but any proof remained elusive.

That black hole produces a torrent of what astrophysicists call cosmic rays. Despite their name (the scientists who first discovered them thought they were rays), cosmic rays are actually ultra-energized protons and atomic nuclei hurtling through the universe at nearly the speed of light. 

But, unlike its counterpart at the center of the Milky Way, NGC 1068’s black hole is shrouded behind a thick veil of gas and dust, which blocks many of the gamma rays that would otherwise emerge. That, astronomers say, complicates the old picture of where neutrinos came from. “This is the key issue,” says Halzen. “The sources we detect are not seen in high energy gamma rays.”

As cosmic rays crash into that veil, they cause a cascade of nuclear reactions that spew out neutrinos. (In fact, cosmic rays do the same when they strike Earth’s atmosphere.) One reason why the NGC 1068 discovery is so exciting, then, is that the ensuing neutrinos might give astronomers clues about those cosmic rays.

It’s not final proof; there’s not enough data quite yet to be certain. That will take more observations, more years of painstaking data analysis. Even so, Murase says, other astronomers might search the sky for galaxies like NGC 1068, galaxies whose central black holes are occluded.

Meanwhile, other astronomers believe that there are even more places high-energy neutrinos could flow from. If a star passes too close to a supermassive black hole, for instance, the black hole’s gravity might rip the star apart and unleash neutrinos in the process. As astronomers prepare to look for neutrinos, they’ll want to look for new, more diverse points in the sky, too.

They’ll soon have more than just IceCube to work with. Astronomers are laying the groundwork—or seawork—for additional high-sensitivity neutrino detectors: one at the bottom of Siberia’s Lake Baikal and another on the Mediterranean Sea floor. Soon, those may join the hunt for distant, far-traveler neutrinos.

To set the record straight: Nothing can break the speed of light https://www.popsci.com/science/whats-faster-than-the-speed-of-light/ Mon, 24 Oct 2022 12:35:47 +0000 https://www.popsci.com/?p=480200
Gamma-ray burst from exploding galaxy in NASA Hubble telescope rendition
Gamma-ray bursts (like the one in this illustration) from distant exploding galaxies transmit more powerful light than the visible wavelengths we see. But that doesn't mean they're faster. NASA, ESA and M. Kornmesser

Objects may not be as fast as they appear with this universal illusion.

The post To set the record straight: Nothing can break the speed of light appeared first on Popular Science.

Back in 2018, astronomers examining the ruins of two collided neutron stars in Hubble Space Telescope images noticed something peculiar: a stream of bright high-energy ions, jetting away from the merger in Earth’s direction at seven times the speed of light.

That didn’t seem right, so the team recalculated with observations from a different radio telescope. In those observations, the stream was flying past at only four times the speed of light.

That still didn’t seem right. Nothing in the universe can go faster than the speed of light. As it happens, it was an illusion, a study published in the journal Nature explained earlier this month.

[Related: Have we been measuring gravity wrong this whole time?]

The phenomenon that makes particles in space appear to travel faster than light is called superluminal motion. The phrase fits the illusion: It means “more than light,” but actually describes a trick in which an object moving toward you appears much faster than its actual speed. There are high-energy streams out in space that can seem to move faster than light, and today astronomers are spotting a growing number of them.

“They look like they’re moving across the sky, crazy fast, but it’s just that they’re moving toward you and across the sky at the same time,” says Jay Anderson, an astronomer at the Space Telescope Science Institute in Maryland who has worked extensively with Hubble and helped author the Nature paper.

To get their jet’s true speed, Anderson and his collaborators compared Hubble and radio telescope observations. Ultimately, they estimated that the jet was zooming almost directly at Earth at around 99.95 percent of the speed of light. That’s very close to the speed of light, but not faster than it.

Indeed, to our knowledge so far, nothing on or off our planet can travel faster than the speed of light. This has been confirmed time and time again through the laws of special relativity, put on paper by Albert Einstein over a century ago. Light, which moves at about 670 million miles per hour, sets the ultimate cosmic speed limit. Not only that, special relativity holds that the speed of light is a constant no matter who or what is observing it.

But special relativity doesn’t limit things from traveling super close to the speed of light (cosmic rays and the particles from solar flares are some examples). That’s where superluminal motion kicks in. As something moves toward you, the distance that its light and image needs to reach you decreases. In everyday life, that’s not really a factor: Even seemingly speedy things, like a plane moving through the sky above you, don’t move anywhere near the speed of light. 

[Related: Check out the latest version of Boom’s supersonic plane]

But when something is moving at hundreds of millions of miles per hour in the proper direction, the distance between the object and the perceiver (whether a person or a camera lens) drops very quickly. This gives the illusion that the object is approaching more rapidly than it actually is. Neither our eyes nor our telescopes can tell the difference, which means astronomers have to calculate an object’s actual speed from data collected in images.
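A minimal sketch of that geometry, using the standard textbook formula for apparent transverse speed; the 16-degree viewing angle here is illustrative, not a measured value from the study.

```python
import math

def apparent_beta(beta: float, theta_deg: float) -> float:
    """Apparent transverse speed (in units of c) of a blob moving at true
    speed beta (a fraction of c) at angle theta to our line of sight:
    beta * sin(theta) / (1 - beta * cos(theta))."""
    theta = math.radians(theta_deg)
    return beta * math.sin(theta) / (1.0 - beta * math.cos(theta))

# A jet at 99.95 percent of light speed, tilted ~16 degrees toward us,
# appears to cross the sky at roughly 7 times the speed of light,
# while a slower object (half of c, 30 degrees) stays subluminal.
print(apparent_beta(0.9995, 16.0))
print(apparent_beta(0.5, 30.0))
```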

The researchers behind the new Nature paper weren’t the first to grapple with superluminal motion. In fact, they’re more than a century late. In 1901, astronomers scanning the night sky caught a glimpse of a nova in the direction of the constellation Perseus. It was a white dwarf that devoured the outer layers of a nearby companion star, briefly flaring bright enough to see with the naked eye. Astronomers caught a bubble inflating from the nova at breakneck speed. But because there was no theory of special relativity at the time, the event quickly faded from memory.

The phenomenon gained buzz again in the 1970s and 1980s. By then, astronomers were finding all sorts of odd high-energy objects in distant corners of the universe: quasars and active galaxies, all of which could shoot out jets of material. Most of the time, these objects were powered by black holes that spewed out high-energy jets moving at almost the speed of light. Depending on the mass and strength of the black hole they came from, the jets could stretch for thousands, hundreds of thousands, or even millions of light-years.

Around the same time, scientists studying radio waves began seeing enough of these faux-speeders to take notice. They even found a jet from one distant galaxy that appeared to be racing at nearly 10 times the speed of light. The observations caused a stir among astronomers, though by then the mechanisms were well understood.

In the decades since, observations of superluminal motion have added up. Astronomers are seeing an ever-increasing number of jets through telescopes, particularly ones floating through space like Hubble or the James Webb Space Telescope. When light doesn’t have to pass through Earth’s atmosphere, the images they capture can be much higher in resolution. This helps teams find more jets that are farther away (such as from ancient, distant galaxies), and it helps them view closer jets in more detail. “Things stand out much better in Hubble images than they do in ground-based images,” says Anderson. 

[Related: This image wiggles when you scroll—or does it?]

Take, for instance, the distant galaxy M87, whose gargantuan central black hole launched a jet that apparently clocked in at between 4 and 6 times the speed of light. By the 1990s, Hubble could actually peer into the stream of energy and reveal that parts of it were traveling at different speeds. “You could actually see features in the jet moving, and you could measure the locations of those features,” Anderson explains.

There are good reasons for astronomers to be interested in such breakneck jets, especially now. In the case of the smashing neutron stars from the Nature study, the crash caused a gamma-ray burst, a type of high-energy explosion that remains poorly understood. The event also stirred up a storm of gravitational waves, ripples in space-time that researchers can now pick up and observe. But until they uncover some strange new physics in the matter flying through space, the speed of light remains the hard limit.

Geologists are searching for when the Earth took its first breath https://www.popsci.com/science/earths-first-breath/ Fri, 14 Oct 2022 20:21:42 +0000 https://www.popsci.com/?p=478159
Volcano belching lava and gas above ocean to represent Great Oxygenation Event
At first the Earth's atmosphere was filled with helium and volcanic emissions. Then it slowly got doses of oxygen from the oceans and tectonic activity. Deposit Photos

The planet's early oxygenation events were more like rollercoaster rides than spikes.

The post Geologists are searching for when the Earth took its first breath appeared first on Popular Science.

Many eons ago, the Earth was a vastly different place from our home. A great supercontinent called Rodinia was fragmenting into shards with faintly familiar names like Laurentia, Baltica, and Gondwanaland. For a time, Earth was covered, in its entirety, by sheets of ice. Life was barely clinging to this drastically changing world.

All this came from the chapter of our planet’s history that scientists today have titled the Neoproterozoic Era, which lasted from roughly 1 billion to 540 million years ago. The long stretches of time within its stony pages were a very distant prelude to our world today: a time when the first animals stirred to life, evolving from protists in ancient seas.

Just as humans and their fellow animals do today, these ancient precursors would have needed oxygen to live. But where did it come from, and when? We still don’t have firm answers. But experts have developed a blurry snapshot of how oxygen built up in the Neoproterozoic, published today in the journal Science Advances. And that picture is a bumpy ride, filled with periods of oxygen entering the atmosphere before disappearing again, on repeat, in cycles that lasted tens of millions of years.

To look that far back, you have to throw much of what we take for granted about the modern world right out the window. “As you go further back in time, the more alien of a world Earth becomes,” says Alexander Krause, a geologist at University College London in the United Kingdom, and one of the paper’s authors.

[Related: Here’s how life on Earth might have formed out of thin air and water]

Indeed, after the Earth formed, its early atmosphere was a medley of gases burped out by volcanic eruptions. Over several billion years, they coated our planet with a stew of noxious methane, hydrogen sulfide, carbon dioxide, and water vapor.

That would change in time. We know that some 2.3 billion years ago, microorganisms called cyanobacteria created a windfall of oxygen through photosynthesis. Scientists call these first drops of the gas, creatively, the Great Oxygenation Event. But despite its grandiose name, the juncture only brought our atmosphere’s oxygen to at most a small fraction of today’s levels. 

What happened between then and now is still a murky question. Many experts think that there was another oxygenation event about 400 million years ago in the Paleozoic Era, just as animals were starting to crawl out of the ocean and onto land. Another camp, including the authors of this new research, thinks there was a third event, sometime around 700 million years ago in the Neoproterozoic. But no one knows for sure if oxygen gradually increased over time, or if it fluctuated wildly. 

That’s important for geologists to know, because atmospheric oxygen is involved in virtually every process on Earth’s surface. Even if early life mostly lived in the sea, the upper levels of the ocean and the atmosphere constantly exchange gases.

To learn more, Krause and his collaborators simulated the atmosphere from 1.5 billion years ago until today—and how oxygen levels in the air fluctuated over that span. Though they didn’t have the technology to take a whiff of billion-year-old air, there are a few fingerprints geologists can use to reconstruct what the ancient atmosphere might have looked like. By probing sedimentary rocks from that era, they’re able to measure the carbon and sulfur isotopes within, which rely on oxygen in the atmosphere to form.

Additionally, as the planet’s tectonic plates move, oxygen buried deep within the mantle can emerge and bubble up into the air through a process known as tectonic degassing. Using information on tectonic activity from the relevant eras, Krause and his colleagues previously estimated the history of degassing over time.

By putting those scraps of evidence together, the team came up with a projection of how oxygen levels wavered in the air until the present day. It’s not the first time scientists have tried to make such a model, but according to Krause, it’s the first time anyone has tried it over a billion-year timescale. “Others have only reconstructed it for a few tens of millions of years,” Krause says.

He and his colleagues found that atmospheric oxygen levels didn’t follow a straight line over the Earth’s history. Instead, imagine it like an oxygen roller coaster. Across 100-million-year stretches or so, oxygen levels rose to around 50 percent of modern levels, and then plummeted again. The Neoproterozoic alone saw five such peaks.

Only after 540 million years ago, in the Paleozoic Era, did the atmosphere really start to fill up. Finally, close to 350 million years ago, oxygen reached something close to current-day levels. The start of that rise coincided with the great burst of life’s diversity known as the Cambrian Explosion. Since then, while oxygen levels have continued to fluctuate, they’ve never dropped below around 60 percent of present levels.

“It’s an interesting paper,” says Maxwell Lechte, a geologist at McGill University in Montréal, who wasn’t involved in the research. “It’s probably one of the big contentious discussion points of the last 10 years or so” in the study of Earth’s distant past.

[Related: Enjoy breathing oxygen? Thank the moon.]

It’s important to note, however, that the data set used for the simulation was incomplete. “There’s still a lot of rock out there that hasn’t been looked at,” says Lechte. “As more studies come out, they can probably update the model, and it would potentially change the outputs significantly.”

The obvious question then is how oxygen trends left ripple effects on the evolution of life. After all, it’s during that third possible oxygenation event that protists began to diversify and fan out into the very first animals—multicellular creatures that required oxygen to live. Paleontologists have found an abundance of fossils that date to the very end of the era, including a contested 890-million-year-old sponge.

Those animals might have developed and thrived in periods when oxygen levels were sufficiently high, such as during the flourishing Cambrian Explosion. Meanwhile, drops in oxygen levels might have coincided with great die-offs.

Astronomers might take note of this work, too. Answers about Earth’s oxygenation have serious implications for what we might find on distant Earth-like exoplanets. If these geologists are correct, then Earth’s history is not linear, but rather bumpy, twisted, and sometimes violent. “These questions that this paper deals with represent a fundamental gap in our understanding of how our planet actually operates,” says Lechte.

The post Geologists are searching for when the Earth took its first breath appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How researchers leveled up worm silk to be tougher than a spider’s https://www.popsci.com/science/silkworm-silk-spider-spinning/ Thu, 06 Oct 2022 18:00:00 +0000 https://www.popsci.com/?p=475655
Silkworms like the one seen here spin cocoons of silk.
A silkworm, like the one seen here, spins a cocoon of silk. Deposit Photos

Spiders usually produce the strongest threads of all the silk-making animals.

The post How researchers leveled up worm silk to be tougher than a spider’s appeared first on Popular Science.

]]>

Silk is a fabulous fabric, but it’s also a material that can do so much more. At its best, silk can hold up even better than steel when you pull at it. If you can get enough high-quality silk, you might think about making silk body sensors and silk body armor.

But therein lies the problem. Getting enough silk is difficult.

Spiders make the strongest silk that we know about. Unfortunately, spider silk is hard to come by—it’s certainly very hard to get enough for most uses, unless you find an alternative, and no good alternative has existed. But scientists may now have created one. In a paper published in the journal Matter on Thursday, they tweaked the weaker but far more abundant silkworm silk until it outperformed even spider silk.

“We hope that this work opens up a promising way to produce profitable high-performance artificial silks,” says Zhi Lin, a biochemist at Tianjin University in northern China and one of the paper’s authors, in a press release.

The worms of the domesticated silk moth, Bombyx mori, are by far our most prolific source of silk. Humans in what is now China seem to have domesticated silkworms some 6,000 years ago, and what they wove from the animals’ fibers would become renowned the world over as a luxury good. Silk gave its name to the Silk Road, a network of trade routes that connected peoples and cultures across old Eurasia for centuries. Silk was even involved in the earliest recorded case of industrial espionage, when two Byzantine monks supposedly smuggled silkworms west along the Silk Road by hiding them in hollowed canes.

But even in premodern times, we knew that silkworms don’t necessarily make the best silk. Worms create their silk to form cocoons. Cocoon silk is generally great for weaving into textiles, but it isn’t enough to deliver those truly resilient properties. (Silkworms also don’t make very much of it—just one pound of silk requires hundreds of silkworms, and collecting it is often fatal to the worms.)

[Related: Jumping spiders might be able to sleep—perchance to dream]

There’s a whole menagerie of animals who can make silk—carp, mussels, many insects, and much more—but the strongest silk comes from spiders. A spider can make several different types of silk, depending on the task at hand: one type for making egg sacs, for instance, and another for securing prey. 

The strongest of them all is called dragline silk, which spiders spin for the most important lines of their webs. It’s also the silk that spiders dangle from. At its best, dragline silk can match high-grade steel in tensile strength: how much a material holds up when it’s pulled or stretched.

If you’re wondering why engineers aren’t yet crafting bridge cables from spider silk, there’s a major problem. There just isn’t very much spider silk to go around. “We always have to fight to get more of it,” says Hannes Schniepp, a materials scientist at the College of William & Mary in Virginia who wasn’t part of the new paper, and whose lab studies the ribbon-like silk of the venomous brown recluse spider.

More than a decade ago, textile-makers used the silk from over a million spiders to weave a single 11-foot-by-4-foot cloth. Getting a million spiders is no small feat, especially because it’s very difficult to domesticate them: They’re far too territorial and aggressive.

There may be ways to make silk without animals, but scientists have yet to perfect any of them. A few startups are now trying their hand (or, perhaps, their spinneret), but despite bold promises of plentiful spider silk, their products don’t quite match all of the organic article’s impressive properties.

Scientists have tried a number of tricks to match what spiders can do. A very 21st-century idea involves inserting spider DNA into silkworms. Recently, to that end, scientists from a number of Chinese institutions joined forces to map the Bombyx mori genome. But it hasn’t yet led to effective artificial silk.

The scientists behind this latest paper took a different approach: spinning the silk from worms as a spider might.

Silkworm silk typically consists of protein fibers sheathed in a glue-like substance. That glue helps the silk fibers stick together in cocoons, but it’s an impediment to spinning. So it had to go. The researchers removed it by bathing the fibers in HFIP, a solvent that organic chemists often use to dissolve proteins.

The researchers then injected the resultant fibers into another bath, this one filled with zinc and iron ions that bonded to the silk and strengthened it. Within that bath, they used a custom-built machine to spin the strands, stretching each out to three times its original length.

After drying the threads, the researchers were left with silk that didn’t just match spider dragline silk in tensile strength. It went beyond. In testing, they found that the silk they’d just created—identical to spider dragline silk in appearance—was 70 percent stronger. It was also more than twice as strong as a silkworm’s natural silk, stripped of its glue.
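To keep those comparisons straight, here’s a back-of-the-envelope sketch using only the relative figures reported above. Normalizing spider dragline silk to 1.0 is an illustrative assumption, not a measured value:

```python
# Relative tensile strengths implied by the reported comparisons.
# Spider dragline silk is normalized to 1.0 for illustration only;
# these are ratios from the study's claims, not absolute measurements.
dragline = 1.0
engineered = dragline * 1.70      # the spun silk was 70 percent stronger
natural_max = engineered / 2.0    # it was "more than twice as strong" as
                                  # de-gummed natural silkworm silk, so the
                                  # natural silk sits at or below this value

print(engineered)    # 1.7
print(natural_max)   # 0.85
```

In other words, on this relative scale, de-gummed natural silkworm silk tops out below 0.85 while the spun version reaches 1.7.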

[Related: Spider silk proteins could be the key to future cancer therapies]

“Our finding reverses the previous perception that silkworm silk cannot compete with spider silks on mechanical performance,” says Lin.

The process, according to Schniepp, isn’t entirely new. Silk scientists have long been tinkering with ways to refine the properties of silkworm silk. But they’ve never been able to create “a silkworm silk that outperforms spider silk—that is definitely not something that I expected to be, in a way, so easy,” Schniepp says.

This approach won’t solve all the problems behind making silk in bulk. You’d still need to get your silk from silkworms, rather than growing proteins in a lab, as some other efforts aim to do. HFIP, moreover, isn’t really suitable for mass production; it’s both expensive and quite toxic.

But scientists say this experiment could, ultimately, be a thread for researchers to follow.


]]>
Europe’s energy crisis could shut down the Large Hadron Collider https://www.popsci.com/science/europe-gas-crisis-cern/ Mon, 26 Sep 2022 21:00:00 +0000 https://www.popsci.com/?p=472868
Large Hadron Collider experiment view with CERN staff in a hard hat standing near during the Europe gas crisis
Large Hadron Collider experiments like beauty might be put on ice for a few months, or even a year. CERN

In light of Russia's war in Ukraine, CERN officials are considering the energy costs of particle physics experiments.

The post Europe’s energy crisis could shut down the Large Hadron Collider appeared first on Popular Science.

]]>

Europe is now suffering an energy crisis. The fallout from Russia’s invasion of Ukraine—including the Russian government choking off gas supplies—has sent the continent’s heating and electricity prices soaring.

In the heart of Europe, along the French-Swiss border, the particle physics laboratory at CERN is facing the same plight. This month, it’s been reported that CERN officials are drawing up plans to limit or even shut down the recently rebooted Large Hadron Collider (LHC).

If the LHC, the largest and most expensive collider in the world, does shut down for a short stint, it wouldn’t be out of the ordinary for particle accelerator research. But if it has to go into hibernation for a longer period, complications might arise.

[Related: The green revolution is coming for power-hungry particle accelerators]

Some say that CERN uses as much electricity as a small city, and there’s some truth in that. By the group’s own admission, in a year, its facility consumes about one-third as much electricity as nearby Geneva, Switzerland. The exact numbers vary from month to month and year to year, but the lab’s particle accelerators can account for around 90 percent of CERN’s electric bill.

For an observer on the ground, it’s very easy to wonder why so much energy is going into arcane physics experiments involving subatomic particles, plasma, and dark matter. “Given the current context and as part of its social responsibility, CERN is drawing up a plan to reduce its energy consumption this winter,” Maïlys Nicolet, a spokesperson for the group, wrote in a press statement.

That said, CERN doesn’t have the same utility concerns as the everyday European, since its energy strategy is already somewhat sustainable. The facility draws its power from the French grid, which sources more than two-thirds of its juice from nuclear fission—the highest share of any country in the world. Not only does that drastically reduce the LHC’s carbon footprint, it also makes it far less reliant on imported fossil fuels.

But the French grid has another quirk: Unlike much of Europe, which relies on gas to heat its homes, homes in France often use electric heaters. As a result, local power bills can double during the cold months. Right now, 32 of the country’s 56 nuclear reactors are down for maintenance or repairs. The French government plans to bolster its grid against the energy crisis by switching most of them back on by winter. 

[Related: Can Europe swap Russian energy with nuclear power?]

But if that doesn’t happen, CERN might be facing a power supply shortage. Even if the research giant stretched its budget to pay for power, there just might not be enough of it, depending on how France’s reactors fare. “For this autumn, it is not a price issue, it’s an availability issue,” Serge Claudet, chair of CERN’s energy management panel, told Science.

Hibernation isn’t exactly out of the ordinary for LHC, though. In the past, CERN has shut down the particle accelerator for maintenance during the winter. This year is no exception: The collider’s stewards plan to mothball it from November until March. If Europe’s energy crisis continues into 2023, the LHC pause could last well into the warmer months, if not longer.

CERN managers are exploring their options, according to the facility’s spokesperson. The French government might order the LHC not to run at times of peak electric demand, such as mornings or evenings. Alternatively, to keep its flagship running, CERN might try to shut off some of the smaller accelerators that share the site.

But not all particle physicists are on board with prioritizing energy for a single machine. “I don’t think you could justify running it but switching off everything else,” says Kristin Lohwasser, a particle physicist at the University of Sheffield in the United Kingdom and a collaborator on ATLAS, one of the LHC’s experiments.

On the other hand, the LHC has more to lose by going dark for an indefinite amount of time. If it has to power down for a year or more, the collider’s equipment, such as the detectors used to watch collisions at very small scales, might start to degrade. “This is why no one would blankly advertise to switch off and just wait five years,” says Lohwasser. It also takes a fair amount of energy to keep the LHC in a dormant state.

Even if CERN’s accelerators aren’t running, the particle physicists around the world sifting through the data will still have plenty to work on. Experiments in the field produce tons of results: positions, velocities, and countless mysterious bits of matter from thousands of collisions. Experts can still find subatomic artifacts hidden in the measurements as much as a decade after they’re logged. The flow of physics studies almost certainly won’t cease on account of an energy crisis.

For now, the decision on whether to power the LHC’s third run of experiments remains up in the air. This week, CERN officials will present a plan to the agency’s governing authority on how to proceed. That plan will, in turn, be presented to the French and Swiss governments for consultation. Only then will the final decision be made public.

“So far, I do not necessarily see a big concern from [physicists] about these plans,” says Lohwasser. If CERN must take a back seat to larger concerns, then many in the scientific community will accept that.


]]>
Two bizarre stars might have beamed a unique radio signal to Earth https://www.popsci.com/science/frb-magnetar-be-star/ Thu, 22 Sep 2022 18:00:00 +0000 https://www.popsci.com/?p=472065
Canada's CHIME telescope.
The CHIME telescope in Canada. CHIME

The origin of radio bursts like this one remain a vexing astronomical mystery.

The post Two bizarre stars might have beamed a unique radio signal to Earth appeared first on Popular Science.

]]>

Strange radio signals ping us here on Earth—thousands of times every day, if only astronomers knew where to look for them all.

These most likely aren’t aliens’ attempts to contact us. Astronomers call them fast radio bursts (FRBs), and they’re some of today’s most vexing space mysteries. We’re starting to get a picture of where they might come from, but we’re not certain what exactly causes them.

Astronomers are working on that. In a paper published in Nature Communications on September 21, researchers from Nanjing University and Hong Kong University modelled what might be shaping one of them: the rapidly repeating burst named FRB 20201124A.

Fast radio bursts are brief: Most last a second or two, if not mere milliseconds. And they are bursts: They can release as much energy in milliseconds as the sun emits in three days. That said, by the time the signals reach us, they’re generally far weaker than our terrestrial radio waves—which partly explains why they took so long to find.

Astronomers have been observing these little blips in their radio telescopes for more than a decade. In 2007, astronomers combing through six-year-old data found a brief pulse from an unknown origin. It was the first of hundreds, so far.

Signals from the unknown

What causes FRBs, if they have a single explanation at all, remains murky. Astrophysicists have suggested links to black holes, neutron stars, gamma ray bursts, supernovae, and all sorts of other distant phenomena (yes, even aliens). 

One popular culprit is a magnetar: a certain type of high-energy neutron star with an extremely strong magnetic field, as much as a trillion times the strength of Earth’s. In 2020, astronomers spotted an FRB emanating from a magnetar in our own galaxy. 

[Related: Astronomers caught a potent radio burst blasting at us from a dwarf galaxy 3 billion light-years away]

Even then, what exactly makes a magnetar generate an FRB isn’t known. Some astronomers suspect it’s to do with how magnetars spin, which could create the predictable beats of certain FRBs—somewhat like the clockwork-precise timings of a rotating pulsar. Astronomers call this attribute “periodicity.” Yet, in many cases, there’s no evidence of it. (Another theory is that some FRBs come from discs of gas and dust that build up around black holes.)

Making matters more complex is that every one of those hundreds of FRBs is a different beast. Some flash once, never to be seen again. Some flash a few times. Some stay silent for days, then light up randomly for a short period, then go silent again. And some flash dozens or hundreds of times in rapid succession. FRB 20201124A is firmly in that last category.

Hunting for FRB 20201124A 

Astronomers first saw it in November 2020 (hence the numbering of its name). They caught a glimpse of its chimes with, well, CHIME—a radio telescope in British Columbia that’s now tasked with scouring for FRBs’ fingerprints. Every day, CHIME sweeps across the sky, pausing in a spot for a few minutes at a time. It was during one of those pauses that the scope found FRB 20201124A.

At first, it seemed like just another FRB. “We didn’t announce it right away,” says Adam Lanman, a postdoctoral astrophysicist at McGill University who was involved with the CHIME finding. That would soon change.

In April 2021, CHIME spotted FRB 20201124A metaphorically lighting up, sending out repeating pulses. CHIME’s astronomers alerted the world’s astronomy community. “Following that, a bunch of other observatories started seeing a lot of events from it,” says Lanman.

[Related: Astronomers just made one giant leap in solving a bizarre cosmic mystery]

One of those observatories was FAST: the world’s largest radio telescope, nestled in the mountains of Guizhou province in southwestern China. In another paper published in Nature on the same day, scientists using FAST reported seeing nearly 2,000 more blasts from FRB 20201124A before the source went silent again.

“This large sample can help us to shed light on the origins of FRBs,” says Wang Fayin, an astrophysicist at Nanjing University.

Repeating FRBs aren’t new, but FAST’s observations saw a number of unique fingerprints in the radio waves that suggested something was playing with them. “There are some unique characteristics of FRB 20201124A, which motivates us to create a model for it,” says Wang.

A model star system

Wang and his colleagues tried their hand at a model. Theirs suggests that FRB 20201124A does hail from a magnetar—but not a magnetar alone. As radio waves burst from the magnetar, they pass through the skirt of the star that the magnetar orbits. It’s a particular kind of star called a Be star: a very bright star shrouded within a disc of plasma and gas. The radio waves from an FRB would pass through that disc, explaining their unique characteristics.

“All completely speculative, but none of it’s impossible,” says Jonathan Katz, astrophysicist at Washington University in St Louis, who wasn’t an author.

“I haven’t seen any other papers that go into quite as much detail as this,” says Lanman, who also wasn’t an author.

But this model isn’t a perfect fit to the FAST data—there’s a fair bit of variation it doesn’t fully explain. “Whatever is going on, it might have their model at the core, but there’s a lot more going on than that,” says Katz.

Modelling FRBs in this way isn’t new. Astronomers have often thought that repeating FRBs are due to a neutron star or black hole orbiting another star. On the other hand, it’s not yet clear how, exactly, FRB 20201124A repeats. Katz says outside groups haven’t yet been able to scour the FAST data for evidence of periodicity.

Still, if it’s a magnetar orbiting another star that astronomers are looking for, then they also know where to find it. The same observations that produced the model have helped narrow down FRB 20201124A’s source to a particular galaxy, which can help astronomers find it later. They might do that by searching in other wavelengths: X-rays, for instance, or gamma rays.

Astronomers have tried to scour that galaxy with X-rays before. But the model might help them narrow their search, and that’s what Lanman recommends after this work: “Certainly, further searches for X-ray counterparts going forward” are in order, he says.


]]>
Farmers accidentally created a flood-resistant ‘machine’ across Bangladesh https://www.popsci.com/environment/bangladesh-farmers-seasonal-floods/ Thu, 15 Sep 2022 18:00:00 +0000 https://www.popsci.com/?p=470227
Groundwater pumps like this one deliver water from below to farms in Bangladesh.
A groundwater pump delivers water from below a farm during the dry season in Bangladesh. M. Shamsudduha

Pumping water in the dry months makes the ground sponge-like for the wet season, a system called the Bengal Water Machine.

The post Farmers accidentally created a flood-resistant ‘machine’ across Bangladesh appeared first on Popular Science.

]]>

To control unpredictable water and stop floods, you might build a dam. To build a dam, you generally need hills and dales—geographic features to hold water in a reservoir. Which is why dams don’t fare well in Bangladesh, most of which is a flat floodplain that’s just a few feet above sea level.

Instead, in a happy accident, millions of Bangladeshi farmers have managed to create a flood control system of their very own, taking advantage of the region’s wet-and-dry seasonal climate. As farmers pump water from the ground in the dry season, they free up space for water to flood in during the wet season, hydrogeologists found. 

Researchers described the system they’d uncovered in the journal Science on September 15. And authorities could use the findings to make farming more sustainable, writes Aditi Mukherji, a researcher in Delhi for the International Water Management Institute who wasn’t involved in the paper, in a companion article in Science.

“No one really intended this to happen, because farmers didn’t have the knowledge when they started pumping,” says Mohammad Shamsudduha, a geoscientist at University College London in the UK and one of the paper’s authors.

[Related: What is a flash flood?]

Most of Bangladesh lies in the largest river delta on the planet, where the Rivers Ganges and Brahmaputra fan out into the Bay of Bengal. It’s an expanse of lush floodplains and emerald forests, blanketing some of the most fertile soil in the world. Indeed, that soil supports a population density nearly thrice that of New Jersey, the densest US state.

Like much of South Asia, Bangladesh’s climate revolves around the yearly monsoon. The monsoon rains support local animal and plant life and are vital to agriculture, too. But a heavy monsoon can cause devastating floods, as residents of northern Bangladesh experienced in June.

Yet Bangladesh’s warm climate means that farmers can grow crops, especially rice, in the dry season. To do so, farmers often irrigate their fields with water they draw up from the ground. Many small-scale farmers started doing so in the 1990s, when the Bangladeshi government loosened restrictions on importing diesel-powered pumps and made them more affordable. 

The authors of the new study wanted to examine whether pumping was depriving the ground of its water. That’s generally bad news, straining water supplies and causing the ground to literally sink (just ask Jakarta). They examined data from 465 government-controlled stations that monitor Bangladesh’s irrigation efforts across the country.

[Related: How climate change fed Pakistan’s devastating floods]

The situation was not so simple: In many parts of the country, groundwater wasn’t depleting at all.

It’s thanks to how rivers craft the delta. The Ganges and the Brahmaputra carry a wealth of silt and sediment from as far away as the Himalayas. As they fan out through the delta, they deposit those fine particles into the surrounding land. These sediments help make the delta’s soil as fertile as it is. 

This accumulation also results in loads of little pores in the ground. When the heavy rains come, instead of running off into the ocean or adding to runaway flooding, all that water can soak into the ground, where farmers can use it.

Where a dam’s reservoir is more like a bucket, Bangladesh is more like a sponge. During the dry season, farmers dry out the sponge. That gives it more room to absorb more water in the monsoon. And so forth, in an—ideally—self-sustaining cycle. Researchers call it the Bengal Water Machine. 

“The operation of the [Bengal Water Machine] was suspected by a small number of hydrogeologists within our research network but essentially unknown prior to this paper,” says Richard Taylor, a hydrogeologist at University College London in the UK, and another of the paper’s authors.

“If there was no pumping, then this would not have happened,” says Kazi Matin Uddin Ahmed, a hydrogeologist at the University of Dhaka in Bangladesh, and another of the paper’s authors. 

Storing water underground rather than behind a dam has a few advantages, Ahmed adds. The subsurface liquid is at less risk of evaporating into useless vapor. It doesn’t rewrite the region’s geography, and farmers can draw water from their own land, rather than relying on water shuttled in through irrigation channels.

The researchers believe that other “water machines” might fill fertile deltas elsewhere in the tropics with similar wet-and-dry climates. Southeast Asia might host a few, at the mouths of the Red River, the Mekong, and the Irrawaddy.

But an ominous question looms over the Bengal Water Machine: What happens as climate change reshapes the delta? Most crucially, a warming climate might intensify monsoons and change where they deliver their rains. “This is something we need to look into,” says Shamsudduha.

The Bengal Water Machine faces several other immediate challenges. In 2019, in response to overpumping concerns, the Bangladeshi government reintroduced restrictions on which farmers can install a pump, which could make groundwater pumping less accessible. Additionally, many farmers use dirty diesel-powered pumps. (The government is now encouraging farmers to switch to solar power.)

Also, keeping the Bengal Water Machine ship-shape means not using too much groundwater. Unfortunately, that’s already happening. Bangladesh’s west generally gets less rainfall than its east, and the results reflect that. The researchers noticed groundwater depletion in the west that wasn’t happening out east.

“There is a limit,” says Ahmed. “There has to be close monitoring of the system.”


]]>
What’s next for NASA’s Artemis 1 launch https://www.popsci.com/science/artemis-1-launch-next-steps/ Thu, 08 Sep 2022 21:00:00 +0000 https://www.popsci.com/?p=468650
NASA’s Space Launch System rocket sits on a mobile launcher on September 4 at the Kennedy Space Center in Florida.
NASA’s Space Launch System rocket sits on a mobile launcher on September 4 at the Kennedy Space Center in Florida. NASA/Joel Kowsky

A fuel leak has delayed NASA’s moon mission, but there’s no cause for alarm.

The post What’s next for NASA’s Artemis 1 launch appeared first on Popular Science.

]]>

After years of mounting anticipation, NASA’s first full-scale moonshot since 1972 finally towered over its Florida launchpad in late August—only to go nowhere due to a persnickety fuel leak.

It’s just the latest delay for Artemis 1—an uncrewed flight slated to launch from Earth, shoot itself around the moon, and return. The recent setbacks mark a renewed bout of uncertainty over when, exactly, the mission will actually launch.

So what’s causing these hold-ups, what are NASA engineers doing to fix them, and will the delays affect NASA’s long-term lunar dreams? (Spoiler: the answer to that last question is probably no.)

What caused the delay?

More than just a lunar launch, Artemis 1 was set to be the first test of the 21st century’s answer to the Saturn V: the Space Launch System (SLS), the behemoth rocket designed to be the backbone of the Artemis program. While flying around the moon and back is certainly very cool, testing the rocket that will power future launches is perhaps even more important.

SLS uses hydrogen as a propellant, storing it in a super-chilled liquid form, below minus 423°F. While engineers were cooling the fuel lines down to that temperature, they accidentally raised the pressure. Later, as engineers began filling up the rocket’s fuel tanks in preparation for the launch, they noticed a leak in one fuel line where it met the rocket. Whether the two issues are related isn’t yet known.

Even a simple leak could be a disaster in waiting, because it could spew out hydrogen gas: a highly flammable substance, as the Hindenburg fire demonstrated. 

(SLS is no stranger to such fueling problems. Back in April, when NASA was conducting dress rehearsals of the rocket on the pad, engineers ran into recurring issues with leaking propellant while they tried to fill up the rocket’s tanks.)

[Related: With Artemis, NASA is aiming for the moon once more. But where will it land?]

NASA engineers are now trying to fix the leak, replacing seals along the fuel line. Over the next several weeks, they’ll retest on the pad.

Importantly, a scrubbed launch isn’t a failed launch. Instead, it’s a decision to abort and try again later, once engineers have sorted out the problems at hand. “They’re much more keen to scrub or delay a launch than to have something catastrophic that would really harm the mission,” says Makena Young, an aerospace analyst at the Center for Strategic & International Studies, a Washington-based think tank.

What happens next?

Once the engineers finish their retests, NASA can’t instantly try to launch again. To complete its mission, Artemis 1 needs the moon to be in a proper place in its orbit around Earth. That opportunity has passed by, and the next launch window doesn’t begin until late September: either the 23rd or the 27th.

Those dates are not arbitrary. Even though Artemis 1 is high-profile, it has to share support systems with other missions. In this case, it would share a deep-space tracking network with DART, an uncrewed probe that aims to change an asteroid’s course by crashing into it like, well, a dart. DART’s big day is on September 26, give or take a day. NASA would rather Artemis 1 not step on DART’s toes.

A September launch isn’t certain. Another unanswered question is whether engineers will need to roll Artemis 1 back into the Vehicle Assembly Building (VAB), the enormous skyscraper-sized hangar where NASA assembles its rockets. The question stems from something called the Flight Termination System, a battery-powered system that causes the rocket to self-destruct if it veers off-course, avoiding collisions. 

The US Space Force—which actually has authority over NASA’s rocket launches—certified the batteries for a 25-day period that ends before that late-September window begins. Normally, NASA would need to roll the rocket back into the VAB to replace the batteries. Instead, NASA is seeking special permission from the Space Force to swap out the batteries on the pad.

If NASA does need to return to the VAB, the September window might become trickier to hit. The next window won’t start until later in October. In that case, NASA would need to work around a solar eclipse on October 25 that could throw a wrench into the communication systems NASA relies upon.

What does the future hold?

By all indications, it’s not a matter of whether Artemis 1 will launch, but of when. Still, for viewers on the ground, some of whom have been waiting decades to see Artemis materialize, the delays might feel like assembling a piece of furniture only to find that the final parts are missing.

But such is the nature of any complex aerospace project. “It’s never assumed that those things are going to go perfectly,” says Young. “So, sometimes, these delays are just the cost of doing business.”

[Related: ‘Phantom’ mannequins will help us understand how cosmic radiation affects female bodies in space]

It does help, in this case, that the other Artemis missions are well into the future. Artemis 2, which plans to take three Americans and one Canadian around the moon and back, à la Apollo 8, is currently slated for 2024. Artemis 3—the long-awaited first boots on the moon in over half a century—won’t launch until 2025 at the earliest.

The long downtime between missions, irritating as it might be for impatient Earthlings, does give NASA some slack. It means that future missions won’t pay the price of these delays.

“[Artemis 1] would have to slip much further into the winter, or even next year, to start having impacts on the rest of the program,” says Young.

The post What’s next for NASA’s Artemis 1 launch appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Sustainable batteries could one day be made from crab shells https://www.popsci.com/science/crab-shell-green-batteries/ Thu, 01 Sep 2022 19:30:00 +0000 https://www.popsci.com/?p=467040
A bucket of crabs, who have a multi-purpose material called chitosan in their shells.
Crabs that we eat contain chitosan in their shells, which scientists are using to make batteries. Mark Stebnick via Pexels

A material in crab shells has been used to brew booze, dress wounds, and store energy.

The post Sustainable batteries could one day be made from crab shells appeared first on Popular Science.

There are those who say ours is the age of the battery. New and improved batteries, perhaps more than anything else, have made possible a world of mobile phones, smart devices, and blossoming electric vehicle fleets. Electrical grids powered by clean energy may soon depend on server-farm-sized battery projects with massive storage capacity.

But our batteries aren’t perfect. Even if they’ll one day underpin a world that’s sustainable, today they’re made from materials that aren’t. They rely on heavy metals or synthetic polymers that can take hundreds of years to degrade. That’s why battery disposal is such a tricky task.

Enter researchers from the University of Maryland and the University of Houston, who have made a battery from a promising alternative: crustacean shells. They’ve taken a biological material, easily sourced from the same crabs and squids you can eat, and crafted it into a partly biodegradable battery. They published their results in the journal Matter on September 1.

It’s not the first time batteries have been made from this stuff. But what makes the researchers’ work new is the design, according to Liangbing Hu, a materials scientist at the University of Maryland and one of the paper’s authors. 

A battery has three key components: two ends, called electrodes, and a conductive filling between them, called an electrolyte. In short, charged particles crossing the electrolyte produce a steady flow of electric current. Without an electrolyte, a battery would just be a sitting husk of electric charge.

Today’s batteries use a whole rainbow of electrolytes, and few are things you’d particularly want to put in your mouth. A standard AA battery uses a paste of potassium hydroxide, a dangerously corrosive substance that makes throwing batteries in the trash a very bad idea. 

[Related: This lithium-ion battery kept going (and going and going) in the extreme cold]

The rechargeable batteries in your phone are a completely different sort of battery: lithium-ion batteries. Those batteries can power on for many years and usually rely on plastic-polymer-based electrolytes that aren’t quite as toxic, but they can still take centuries or even millennia to break down.

Batteries themselves, full of environmentally unfriendly materials, aren’t the greenest. They’re rarely sustainably made, either, reliant on rare earth mining. Even if batteries can last thousands of discharges and recharges, thousands more get binned every day.

So researchers are trawling through oceans of materials for a better alternative. In that, they’ve started to dredge up crustacean parts. From crabs and prawns and lobsters, battery-crafters can extract a material called chitosan. It’s a derivative of chitin, which makes up the hardened exoskeletons of crustaceans and insects, too. There’s plenty of chitin to go around, and a relatively simple chemical process is all that’s necessary to convert it into chitosan.

We already use chitosan for quite a few applications, most of which have little to do with batteries. Since the 1980s, farmers have sprinkled chitosan over their crops. It can boost plant growth and harden their defenses against fungal infestation. 

[Related: The race to close the EV battery recycling loop]

Away from the fields, chitosan can remove particles from liquids: Water purification plants use it to remove sediment and impurities from drinking water, and alcohol-makers use it to clarify their brew. Some bandages come dressed with chitosan that helps seal wounds.

You can sculpt things from chitosan gel, too. Because chitosan is biodegradable and non-toxic, it’s especially good for making things that must go into the human body. It’s entirely possible that hospitals of the future might use specialized 3D printers to carve chitosan into tissues and organs for transplants.

Now, researchers are seeking to put chitosan into batteries whose ends are made from zinc. Largely experimental today, these rechargeable batteries could one day form the backbone of an energy storage system.

The researchers at Maryland and Houston weren’t the first to think about making chitosan into batteries. Scientists around the world, from China to Italy to Malaysia to Iraqi Kurdistan, have been playing with crab-stuff for about a decade, spindling it into intricate webwork that charged particles could cross like adventurers.

The authors of the new work added zinc ions to that chitosan structure, which bolstered its physical strength. Combined with the zinc ends, the addition also boosted the battery’s effectiveness.

This design means that two-thirds of the battery is biodegradable; the researchers found that the electrolyte broke down completely within around five months. Compared to conventional electrolytes and their thousand-year lifespans in the landfill, Hu says, these have little downside. 

And although this design was made for those experimental zinc batteries, Hu sees no reason researchers can’t extend it to other sorts of batteries—including the one in your phone.

Now, Hu and his colleagues are pressing ahead with their work. One of their next steps, Hu says, is to expand their focus beyond the confines of the electrolyte—to the other parts of a battery. “We will put more attention to the design of a fully biodegradable battery,” he says.

Atoms are famously camera-shy. This dazzling custom rig can catch them. https://www.popsci.com/science/particle-physics-custom-camera/ Sun, 28 Aug 2022 17:00:00 +0000 https://www.popsci.com/?p=465661
MAGIS-100 vacuum for a Fermilab quantum physics experiment
When built, the MAGIS-100 atom interferometer will be the largest in the world. But it's still missing a key component: a detailed camera. Stanford University

The mirror-studded camera is designed to take glamor shots of quantum physics experiments.

The post Atoms are famously camera-shy. This dazzling custom rig can catch them. appeared first on Popular Science.


In suburban Chicago, about 34 miles west of Lake Michigan, sits a hole in the ground that goes about 330 feet straight down. Long ago, scientists had the shaft drilled for a particle physics experiment that’s long vanished from this world. Now, in a few short years, they will reuse the shaft for a new project with the mystical name MAGIS-100.

When MAGIS-100 is complete, physicists plan to use it for detecting hidden treasures: dark matter, the mysterious invisible something that’s thought to make up much of the universe; and gravitational waves, ripples in space-time caused by cosmic shocks like black hole collisions. They hope to find traces of those elusive phenomena by watching the quantum signatures they leave behind on raindrop-sized clouds of strontium atoms.

But actually observing those atoms is trickier than you might expect. To pull off similar experiments, physicists have so far relied on cameras comparable to the ones on a smartphone. And while the technology might work fine for a sunset or a tasty-looking food shot, it limits what physicists can see on the atomic level.

[Related: It’s pretty hard to measure nothing, but these engineers are getting close]

Fortunately, some physicists may have an upgrade. A research team spanning several groups in Stanford, California, has created a unique camera contraption that relies on a dome of mirrors. The extra reflections help them see what light is entering the lens and tell what angle a given patch of light is coming from. That, they hope, will let them peer into an atom cloud like never before.

Your mobile phone camera or DSLR doesn’t care where light travels from: It captures the intensity of incoming photons and the colors corresponding to their wavelengths, little more. For taking photographs of your family, a city skyline, or the Grand Canyon, that’s all well and good. But for studying atoms, it leaves quite a bit to be desired. “You’re throwing away a lot of light,” says Murtaza Safdari, a physics graduate student at Stanford University and one of the creators.

Physicists want to preserve that information because it lets them paint a more complex, 3D picture of the object (or objects) they’re studying. And when it comes to the finicky analyses physicists like to do, the more information they can get in one shot, the quicker and better. 

One way to get that information is to set up multiple cameras, allowing them to snap pictures from multiple angles and stitch them together for a more detailed view. That can work great with, say, five cameras. But some physics experiments require such precise measurements that even a thousand cameras might not do the trick.

Stanford atom camera mirror array shown in the lab
The 3D-printed, laser-cut camera. Sanha Cheong/Stanford University

So, in a Stanford basement, researchers decided to set out on making their own system to get around that problem. “Our thinking…was basically: Can we try and completely capture as much information as we can, and can we preserve directional information?” says Safdari.

Their resulting prototype—made from off-the-shelf and 3D-printed components—looks like a shallow dome, spangled with an array of little mirror-like dots on the inside. The pattern seems to form a fun optical illusion of concentric circles, but it’s carefully calculated to maximize the light striking the camera.

For the MAGIS-100 project, the subject of the shot—the cloud of strontium atoms—would sit within the dome. A brief flash from an external laser would then scatter off the mirror-dots and through the cloud at myriad angles. The lens would pick up the resulting reflections, how they’ve interacted with the atoms, and which dots they’ve bounced off.

Then, from that information, machine learning algorithms can piece the three-dimensional structure of the cloud back together. Currently, this reconstruction takes many seconds; in an ideal world, it would take milliseconds, or even less. But, like the algorithms used to train self-driving cars to adjust to the surrounding world, researchers think their computer codes’ performance will improve.

While the creators haven’t gotten around to testing the camera on atoms just yet, they did try it out by scanning some suitably sized sample parts: 3D-printed letter-shaped pieces the size of the strontium droplets they intend to use. The photo they took was so clear, they could find defects where the little letters D, O, and E varied from their intended design. 

3D-printed letters photographed and 3D modeled on a grid
Reconstructions of the test letters from a number of angles. Sanha Cheong/SLAC National Accelerator Laboratory

For atom experiments like MAGIS-100, this equipment is distinct from anything else on the market. “The state of the art are just cameras, commercial cameras, and lenses,” says Ariel Schwartzman, a physicist at SLAC National Accelerator Laboratory in California and co-creator of the Stanford setup. They scoured photo-equipment catalogs for something that could see into an atom cloud from multiple angles at once. “Nothing was available,” says Schwartzman.

Complicating matters is that many experiments require atoms to rest in extremely cold temperatures, barely above absolute zero. This means they require low-light conditions—shining any bright light source for too long could heat them up too fast. Setting a longer exposure time on a camera could help, but it also means sacrificing some of the detail and information needed in the final image. “You are allowing the atom cloud to diffuse,” says Sanha Cheong, a physics graduate student at Stanford University and member of the camera-building crew. The mirror dome, on the other hand, aims to use only a brief laser-flash with an exposure of microseconds. 

[Related: Stanford researchers want to give digital cameras better depth perception]

The creators’ next challenge is to actually place the camera in MAGIS-100, which will take a lot of tinkering to adapt it to a much larger shaft and to a vacuum. But physicists are hopeful: A camera like this might go a lot further than detecting obscure effects around atoms. Its designers plan to use it for everything from tracking particles in plasma to checking the quality of small factory-made parts.

“To be able to capture as much light and information in a single shot in the shortest exposure possible—it opens up new doors,” says Cheong.

Blind scientists adapted a centuries-old art to make data that can be touched and seen https://www.popsci.com/science/lithophane-blind-scientist-data/ Thu, 18 Aug 2022 21:00:00 +0000 https://www.popsci.com/?p=463485
A chemist can sense this scientific data with their fingers.
A lithophane of an SDS-PAGE gel, a laboratory technique to study proteins. Jordan Koone and Bryan Shaw

Tools called lithophanes are physical images adorned with shallow engravings.

The post Blind scientists adapted a centuries-old art to make data that can be touched and seen appeared first on Popular Science.


Before photography, there was the lithophane. 

It’s a thin slice of porcelain or plastic, adorned with a shallow engraving. Hold it up to a light, and the translucent relief turns into a shadowy image. Europeans first began making lithophanes around 1800, though East Asians had been doing similar tricks with ceramics centuries earlier. For a time, artisans and primitive factories pumped out lithophane nightlights, lampshades, and drinking vessels. Even lithophane portraits were once fashionable.

Lithophanes haven’t entirely vanished from the modern world. Today, you might find them as fun decorations or as subjects of 3D printing tutorials. Now, the fact that lithophanes can play dual roles as picture and engraving has given them a new purpose: making science more tangible for those with vision difficulties.

Scientists at Baylor University, teaming up with blind chemists from around the US, have turned to lithophanes as a way to bridge those chemists with their sighted counterparts. They describe the concept in a paper published in the journal Science Advances on August 17.

“Just imagine a world in which a blind person and a sighted person are sitting next to each other, referring to the same piece of data, and accessible to both,” says Mona Minkara, a blind chemist at Northeastern University, and one of the paper authors. “This could be revolutionary, if people incorporate it.”

It’s certainly an improvement over what exists today. If you’ve ever seen a scientific paper, you probably know that graphs, plots, and visuals can be critical to understanding the dense and jargon-heavy text. If you’re a blind reader, you can turn the text into audio with the aid of software, but what about the pictures themselves?

“There’s hundreds of PDFs of scientific articles that I download, and none of the figures are accessible,” says Matthew Guberman-Pfeffer, a blind postdoctoral researcher at Yale University and the National Institutes of Health, and another of the paper authors.

It’s not that there aren’t methods of doing it. Some digital images have alt text describing them, but that can be frustratingly uncommon. You can make touchable images, but they require costly special paper. You can get bespoke braille books, but they can cost tens of thousands of dollars, or even more, and take a literal year to assemble.

It’s ironic that these barriers exist in chemistry: a field that deals with atoms and molecules too small to physically see with your eyes—even if you shrank yourself down to their size, because they’re smaller than the wavelength of visible light.

[Related: Why chemists are watching light destroy tiny airborne particles from within]

“One of the things I like to think about in terms of chemistry is that: Really, we are all blind,” says Hoby Wedler, a blind chemist, and another of the paper authors. The visual aids chemists use—everything from the periodic table to models of molecules to X-ray crystallography images—are no more than that.

One scientist working to chip away at the barriers was Bryan Shaw, a sighted chemist at Baylor University. Shaw has long been interested in making chemistry more accessible to the visually impaired by revamping laboratory equipment and turning models of complex molecules into tasty edible sweets.

One of Shaw’s students had been tinkering with engraving graphics into 3D-printed slabs. To save on material and time, they started making those slabs thinner and thinner. When they held the slimmest up to the light, the engraved graphs popped out to them just as starkly as it had in the original image. 

What makes these creations special is that the carving a blind person can feel turns into a visible image when a sighted person holds it up to the light. To test that quality, the researchers created lithophane graphics—a few types of images and graphs common in chemistry—and showed them to test subjects, both blind and sighted. Then, the researchers asked them questions about the visuals.

Both groups could answer the questions with an average accuracy of over 90 percent: comparable to the 88 percent average of sighted people viewing the original digital image. Moreover, when researchers had sighted subjects don blindfolds and rely on touch alone, they still answered with an impressive 79 percent accuracy.

“Having one thing that satisfies both populations is really, I think…the really exciting part of this work,” says Gary Patti, a sighted chemist at Washington University in St Louis who wasn’t involved with the work.

[Related: Lyft’s braille guide to autonomous tech helps the blind become familiar with robocars]

Moreover, unlike the expensive materials other accessibility methods use, these lithophanes came from a 3D printer that cost just $3,500. While expensive compared to at-home printers, that’s within the reach of many university computer labs.

“A lot of times, we think about making things accessible to blind students,” says Minkara. “But this technology could be really useful for me, as a blind professor, to be able to communicate with my students…and share data.”

Admittedly, the technology isn’t quite as accessible as it needs to be. Most urgently, the process of turning these graphics from digital images into lithophanes still needs the eyes of a sighted person. Building the software that would allow visually impaired scientists to do it themselves, researchers say, is the next step.

Even though chemists made these lithophanes for other chemists, they believe that any field that relies on plots and graphs—in other words, virtually all of science—could use them. “We’ve already started thinking about different types of data that other scientists use,” says Chad Dashnaw, a grad student at Baylor University and one of the paper authors.

Blind scientists adapted a centuries-old art to make data that can be touched and seen
A backlit lithophane showing a magnified butterfly scale. Jordan Koone and Bryan Shaw

The lithophanes created for this study included a microscope image of the scales on a butterfly wing. “I would never, with my limited sight, be able to see a butterfly wing,” says Guberman-Pfeffer. “And yet I could feel the texture of the wing and measure its dimensions.”

It’s pretty hard to measure nothing, but these engineers are getting close https://www.popsci.com/science/vacuum-measurements-manufacturing-new-method/ Mon, 08 Aug 2022 22:30:00 +0000 https://www.popsci.com/?p=461034
NIST computer room with small glass vacuum chamber
The National Institute of Standards and Technology sets the bar for precise vacuum measurements. NIST

The US still uses Cold War-era tech to calibrate vacuums for manufacturing. Is there a more precise (and fun) option out there?

The post It’s pretty hard to measure nothing, but these engineers are getting close appeared first on Popular Science.


Outer space is a vast nothingness. It’s not a perfect vacuum—as far as astronomers know, that concept only exists in theoretical calculations and Hollywood thrillers. But aside from the occasional remnant hydrogen atom floating about, it is a vacuum.

That’s important because here on Earth, much of the modern world quietly relies on partial vacuums. More than just a place for physicists to do fun experiments, the machine-based environments are critical for crafting many of the electronic components in cutting-edge phones and computers. But to actually measure a vacuum—and understand how good it will be at manufacturing—engineers rely on relatively basic tech left over from the days of old-school vacuum tubes.

[Related: What happens to your body when you die in space?]

Now, some teams are working on an upgrade. Recent research has brought a novel technique—one that relies on the coolest new atomic physics (as cool as -459 degrees Fahrenheit)—one step closer to being used as a standardized method.

“It’s a new way of measuring vacuum, and I think it’s really revolutionary,” says Kirk Madison, a physicist at the University of British Columbia in Vancouver.

NIST circular metal vacuum chamber with blue lights
The NIST mass-in-vacuum precision mass comparator. NIST

What’s inside a vacuum

It might seem hard to quantify nothing, but what you’re actually doing is reading the gas pressure inside a vacuum—in other words, the force that any remaining atoms exert on the chamber walls. So, measuring vacuums is really about calculating pressures with far more precision than your local meteorologist can manage.

Today, engineers might do that with a tool called an ion gauge. It consists of a spiralling wire that pops out electrons when inserted into a vacuum chamber; the electrons collide with any gas atoms within the spiral, turning them into charged ions. The gauge then reads the number of ions left in the chamber. But to interpret that figure, you need to know the composition of the different gases you’re measuring, which isn’t always simple.
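
That interpretation step can be sketched with a quick toy calculation. Ion gauges are calibrated against nitrogen, and other gases ionize more or less readily, so the reading has to be divided by a gas-specific sensitivity factor. The factors below are typical textbook values for hot-cathode gauges, used here purely for illustration—they are not numbers from this article:

```python
# Ion gauges are calibrated for nitrogen; other gases ionize more or less
# readily, so the indicated pressure must be divided by a gas-specific
# sensitivity factor. These are typical, illustrative values (N2 = 1).
RELATIVE_SENSITIVITY = {
    "N2": 1.00,   # reference gas
    "Ar": 1.3,    # ionizes more easily -> gauge over-reads
    "H2": 0.46,   # ionizes less easily -> gauge under-reads
    "He": 0.18,
}

def true_pressure(indicated_pa: float, gas: str) -> float:
    """Correct an ion-gauge reading (in pascals) for the gas species."""
    return indicated_pa / RELATIVE_SENSITIVITY[gas]

# A gauge indicating 1e-7 Pa in a helium-dominated vacuum is actually
# sitting in a substantially higher true pressure:
print(true_pressure(1e-7, "He"))  # ~5.6e-7 Pa
```

The point is that the correction depends entirely on knowing what gas mixture is left in the chamber—exactly the information that gets harder to come by as the vacuum improves.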

Ion gauges are technological cousins of vacuum tubes, the components that powered antique radios and the colossal computers that filled rooms and pulp science fiction stories before the development of the silicon transistor. “They are very unreliable,” says Stephen Eckel, a physicist at the National Institute of Standards and Technology (NIST). “They require constant recalibration.”

Other vacuum measuring tools do exist, but ion gauges are the best at getting pressure readings down to billionths of a pascal (the standard unit of pressure). While this might seem unnecessarily precise, many high-tech manufacturers want to read nothingness as accurately as possible. A couple of common techniques to fabricate electronic components and gadgets like lasers and nanoparticles rely on delicately layering materials inside vacuum chambers. Those techniques need pure voids of matter to work well.

The purer the void, the harder it is to identify the leftover atoms, making ion gauges even more unreliable. That’s where deep-frozen atoms come in.

Playing snooker with atoms

For decades physicists have taken atoms, pulsed them with a finely tuned laser, and confined them in a magnetic cage, all to keep them trapped at temperatures just fractions of a degree above absolute zero. The frigidness forces atoms, otherwise wont to fly about, to effectively sit still so that physicists can watch how they behave.

In 2009, Madison and other physicists at several institutions in British Columbia were watching trapped atoms of chilled rubidium—an element with psychrophilic properties—when a new arrangement dawned on them.

Suppose you put a trap full of ultracold atoms in a vacuum chamber at room temperature. They would face a constant barrage of whatever hotter, higher-energy atoms were left in the vacuum. Most of the frenzied particles would slip through the magnetic trap without notice, but some would collide with the trapped atoms and snooker them out of the trap.

It isn’t a perfect measurement—not all collisions successfully kick an atom out of the trap. But if you know the trap’s “depth” (or temperature) and a number called the atomic cross-section (essentially, a measure of the probability of a collision), you can work out fairly quickly how many background atoms are striking the trap. Based on that, you can infer the pressure, along with how much matter is left in the vacuum, Madison explains.
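
To make that concrete, here is a minimal sketch of the idealized relation behind such a cold-atom gauge. The loss rate, cross-section, and gas species below are assumed, illustrative values—not numbers from Madison’s experiments. The idea: if every collision with a background particle ejects one trapped atom, the trap-loss rate is Γ = nσv̄, and the ideal-gas law p = nk·T turns the inferred density into a pressure.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def background_pressure(loss_rate_hz, cross_section_m2, temp_k, bg_mass_kg):
    """Idealized cold-atom vacuum gauge: assume every collision with a
    background particle ejects one trapped atom, so the trap-loss rate
    Gamma = n * sigma * v_mean; then p = n * k_B * T."""
    # Mean speed of the room-temperature background gas (Maxwell-Boltzmann)
    mean_speed = math.sqrt(8 * K_B * temp_k / (math.pi * bg_mass_kg))
    density = loss_rate_hz / (cross_section_m2 * mean_speed)  # particles/m^3
    return density * K_B * temp_k  # pressure in pascals

# Illustrative numbers: a 0.01 /s trap-loss rate from room-temperature
# hydrogen, with an assumed collision cross-section of 2e-18 m^2.
m_h2 = 2 * 1.6735e-27  # mass of an H2 molecule, kg
print(background_pressure(0.01, 2e-18, 295.0, m_h2))  # roughly 1e-8 Pa
```

Note that nothing in this calculation needs calibration: the pressure follows from fundamental constants, the gas mass, and a measured loss rate, which is precisely the appeal of the method.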

Such a method could have a few advantages over ion gauges. For one, it would work for all types of gases present in the vacuum, as there are no chemical reactions involved. Most of all, because you’re calculating directly from the atoms’ behavior, nothing needs to be calibrated.

At first, few people in the physics community noticed the breakthrough by Madison and his collaborators. “Nobody believed that the work we were doing was impactful,” he says. But in the 13 years since, other groups have taken up the technology themselves. In China, the Lanzhou Institute of Physics has begun building their own version. So has an agency in the German government.

NIST is the latest to take up the approach. It’s the US agency responsible for maintaining the country’s official weights and measures, like the official kilogram (yes, even the US government uses the SI system). One of NIST’s tasks for decades has been to calibrate the persnickety ion gauges that manufacturers keep sending in. The British Columbia researchers’ new method presented an appealing shortcut.

NIST engineer in red polo and glasses testing silver cold-atom vacuum chamber
As part of a project testing the ultra-cold atom method of vacuum measurement, NIST scientist Stephen Eckel behind a pCAVS unit (silver-colored cube left of center) that is connected to a chamber (cylinder at right). C. Suplee/NIST

A new standard for nothing

NIST’s system isn’t exactly like the one Madison’s group devised. For one, the agency uses lithium atoms, which are much smaller and lighter than rubidium. Eckel, who was involved in the NIST project, says that these atoms are far less likely to stay in the trap after a collision. But the system uses the same underlying principles as the original experiment, which reduces labor because it doesn’t need to be calibrated over and over.

“If I go out and I build one of these things, it had better measure the pressure correctly,” says Eckel. “Otherwise, it’s not a standard.”

NIST put their system to the test over the last two years. To make sure it worked, they built two identical cold-atom devices and ran them in the same vacuum chamber. When they turned the devices on, they were dismayed to find that the two produced different measurements. As it turned out, the vacuum chamber had developed a leak, allowing atmospheric gases to trickle in. “Once we fixed the leak, they agreed with each other,” says Eckel.

Now that the system agrees with itself, NIST researchers want to compare the ultra-chilled atoms against ion gauges and other old-fashioned techniques. If those, too, return the same measurement, then engineers might soon be able to close in on nothingness by themselves.

The post It’s pretty hard to measure nothing, but these engineers are getting close appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Nuclear power’s biggest problem could have a small solution https://www.popsci.com/science/nuclear-fusion-less-energy/ Sun, 07 Aug 2022 23:07:36 +0000 https://www.popsci.com/?p=460468
Spherical fusion energy reactor in gold, copper, and silver seen from above
In 2015 the fusion reactor at the Princeton Plasma Physics Laboratory got a spherical upgrade for an energy-efficiency boost. Some physicists think this sort of design might be the future of the field. US Department of Energy

Most fusion experiments take place in giant doughnut-shaped reactors. Physicists want to test a smaller peanut-like one instead.

The post Nuclear power’s biggest problem could have a small solution appeared first on Popular Science.

For decades, if you asked a fusion scientist to picture a fusion reactor, they’d probably tell you about a tokamak. It’s a chamber about the size of a large room, shaped like a hollow doughnut. Physicists fill its insides with a not-so-tasty jam of superheated plasma. Then they surround it with magnets in the hopes of crushing atoms together to create energy, just as the sun does.

But experts think you can make tokamaks in other shapes. Some believe that making tokamaks smaller and leaner could make them better at handling plasma. If the fusion scientists proposing it are right, then it could be a long-awaited upgrade for nuclear energy. Thanks to recent research and a newly proposed reactor project, the field is seriously thinking about generating electricity with a “spherical tokamak.”

“The indication from experiments up to now is that [spherical tokamaks] may, pound for pound, confine plasmas better and therefore make better fusion reactors,” says Steven Cowley, director of Princeton Plasma Physics Laboratory.

[Related: Physicists want to create energy like stars do. These two ways are their best shot.]

If you’re wondering how fusion power works, it’s the same process that the sun uses to generate heat and light. If you can push certain types of hydrogen atoms past the electromagnetic forces keeping them apart and crush them together, you get helium and a lot of energy—with virtually no pollution or carbon emissions.

It does sound wonderful. The problem is that, to force atoms together and make said reaction happen, you need to achieve celestial temperatures of millions of degrees for sustained periods of time. That’s a difficult benchmark, and it’s one reason that fusion’s holy grail—a reaction that generates more energy than you put into it, also known as breakeven and gain—remains elusive.
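The benchmark has a standard shorthand. As a sketch of the conventional definition (the symbols here are the usual textbook ones, not taken from this article), the gain Q compares fusion power out to heating power in:

```latex
Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}},
\qquad
\begin{cases}
Q < 1 & \text{net energy loss} \\
Q = 1 & \text{breakeven} \\
Q \gg 1 & \text{territory for a practical power plant}
\end{cases}
```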

The tokamak, in theory, is one way to reach it. The idea is that by carefully sculpting the plasma with powerful electromagnets that line the doughnut’s shell, fusion scientists can keep that superhot reaction going. But tokamaks have been in use since the 1950s, and despite continuous optimism, physicists have never been able to mold the plasma well enough to deliver on that promise.

But there’s another way to create fusion outside of a tokamak, called inertial confinement fusion (ICF). For this, you take a sand-grain-sized pellet of hydrogen, place it inside a special container, blast it with laser beams, and let the resulting shockwaves ruffle the pellet’s interior into jump-starting fusion. Last year, an ICF reactor in California came closer than anyone’s gotten to that energy milestone. Unfortunately, in the year since, physicists haven’t been able to make the flash happen again.

Stories like this show that if there’s an alternative method, researchers won’t hesitate to jump on it.

The idea of trimming down the tokamak emerged in the 1980s, when theoretical physicists—followed by computer simulations—proposed that a more compact shape could handle the plasma more effectively than a traditional tokamak.

Not long after, groups at the Culham Center for Fusion Energy in the UK and Princeton University in New Jersey began testing the design. “The results were almost instantaneously very good,” says Cowley. That’s not something physicists can say with every new chamber design.

Round fusion reactor with silver lithium sides and a core
A more classic-shaped lithium tokamak at the Plasma Physics Laboratory. US Department of Energy

Despite the name, a spherical tokamak isn’t a true sphere: It’s more like an unshelled peanut. This shape, proponents think, gives it a few key advantages. The smaller size allows the magnets to be placed closer to the plasma, reducing the energy (and cost) needed to actually power them. Plasma also tends to act more stably in a spherical tokamak throughout the reaction.
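For readers who want a number attached to that “unshelled peanut” shape: tokamak designers describe it with the aspect ratio, the doughnut’s major radius divided by its minor radius. The figures below are typical textbook values, not taken from this article:

```latex
A = \frac{R_0}{a}
\qquad \text{conventional tokamaks: } A \approx 3, \quad \text{spherical tokamaks: } A \approx 1.5
```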

But there are disadvantages, too. In a standard tokamak, the doughnut hole in the middle of the chamber contains some of those important electromagnets, along with the wiring and components needed to power the magnets up and support them. Downsizing the tokamak reduces that space into something like an apple core, which means the accessories need to be miniaturized to match. “The technology of being able to get everything down the narrow hole in the middle is quite hard work,” says Cowley. “We’ve had some false starts on that.”

On top of the fitting issues, placing those components closer to the celestially hot plasma tends to wear them out more quickly. In the background, researchers are making new components to solve these problems. At Princeton, one group has shrunk those magnets and wrapped them with special wires that don’t have conventional insulation, which would need to be specially treated in an expensive and error-prone process to fit in fusion reactors’ harsh conditions. This development doesn’t solve all of the problems, but it’s an incremental step.

[Related: At NYC’s biggest power plant, a switch to clean energy will help a neighborhood breathe easier]

Others are dreaming of going even further. The world of experimental tokamaks is currently preparing for ITER, a record-capacity test reactor that’s been underway since the 1980s and will finally finish construction in southern France this decade. It will hopefully pave the way for viable fusion power by the 2040s. 

Meanwhile, fusion scientists are already designing something very similar in Britain: the Spherical Tokamak for Energy Production, or STEP. The chamber is nowhere near completion. Even the most optimistic plans won’t see construction begin until the mid-2030s or power generation start until about 2040. But it’s an indication that engineers are taking the spherical tokamak design quite seriously.

“One of the things we always have to keep doing is asking ourselves: ‘If I were to build a reactor today, what would I build?’” says Cowley. Spherical tokamaks, he thinks, are beginning to enter that equation.

Counting down to the Artemis 1 launch, NASA’s biggest moon mission in decades https://www.popsci.com/science/nasa-sls-rocket-preparations/ Thu, 04 Aug 2022 21:00:00 +0000 https://www.popsci.com/?p=460274
The Artemis SLS rocket at Kennedy Space Center in Florida in early 2022.
NASA's Space Launch System sits on the mobile launcher in Florida on March 18. NASA/Kim Shiflett

This super powerful rocket won't carry people—instead, two fake torsos will be on board.

The post Counting down to the Artemis 1 launch, NASA’s biggest moon mission in decades appeared first on Popular Science.

After facing cancellation, resumption, Congressional hearing drama, COVID-19, technical delays, and more technical delays, NASA’s decades-long push to return to the moon is finally about to get off the ground.

If all goes well, the Artemis 1 flight is about a month away. It’s slated to launch in late August or early September, put itself into the moon’s orbit, then return to Earth. On top of being the first entry in NASA’s newest spaceflight program, it’s an important test of the long-awaited Space Launch System (SLS)—a heavy-lifter of a rocket comparable to the old Saturn V—and the Orion command module that will one day house astronauts.

“The team is beyond excited,” says Cliff Lanham, an operations manager at NASA’s Kennedy Space Center on Florida’s east coast, where Artemis 1 will launch. “We still have a few weeks of work to do, so we gotta temper that.”

Here’s what’s going on with the launch—and what has to happen first.

Last season: Learning from rehearsals

You might remember that, a few months ago, NASA had some issues with fuel leaks that called off test runs.

NASA engineers called those tests “wet-dress rehearsals” (WDR). They were what they sound like: placing the rocket on the pad and going through the motions of launch day. The WDRs’ other purpose was to suss out issues like those very leaks, which aren’t exactly uncommon with highly complex systems such as large rockets.

The WDRs are quietly very useful; workers at NASA use the results to write the checklist for the Artemis 1 launch. It’s perhaps not the most glamorous step of launch prep. But without these trials, the rocket launch likely couldn’t happen.

[Related: In pictures: NASA’s powerful moonshot rocket debuts at Kennedy Space Center]

After some tinkering, NASA held the final tests in June. Despite another fuel leak, engineers elected to call it there and end the tests, believing they could resolve the issues by returning the rocket to its assembly building for repairs.

One month to launch: Readying the rocket

Engineers still need to complete a few tasks before they can send Artemis 1 on its way.

A critical one is to charge up the rocket’s batteries, which SLS draws upon to power and control its components. But those batteries have a limited life, and engineers can’t charge them too early. Lanham says that topping off those batteries is a careful balancing act when planning for an uncertain launch date.

Furthermore, although Artemis 1 won’t carry any human crew, its Orion capsule will have three passengers: mannequins that’ll test the conditions human astronauts will face on their lunar journeys.

Already, the first of those has boarded. Its name is Moonikin Campos. It bears accelerometers and vibration sensors to test how rocky the ride will be, as well as detectors that measure radiation exposure on the lunar flightpath. Before the launch, two fake torsos will join, outfitted with test vests that future astronauts might wear in order to mitigate that radiation.

NASA will also load a Snoopy plushy—the zero-gravity indicator, which will float when the rocket is in space—and a Shaun the Sheep doll that’ll ride with the mannequins around the moon and back. 

One week to launch: Checking the calendar

NASA can’t just plop the 5.8-million-pound Artemis 1 on the pad at a whim. Many factors have to come together for a successful launch, and the rocket is only one of them. Earth, moon, and sun have to be in the right spots so the spacecraft’s flight maneuvers get it to the proper place. The sun is especially critical, because Artemis 1 is powered in part by solar panels.

NASA planners have identified three possible dates that fit the requirements: August 29, September 2, and September 5.

Selecting one of those dates will likely happen just days before launch. The US Navy, which will recover the Orion capsule after splashdown, has to be ready. The pad, also used by SpaceX vehicles, has to be clear of other rockets. And the weather has to be cooperative. “We’re in hurricane season down here in Florida,” Lanham says.

[Related: This is why rocket launches always get delayed]

If none of those dates pan out, the next opportunities will come in late September or early October. If that again doesn’t work out, there’s another set of openings in late October. NASA officials hope it won’t come to that. Artemis would have to dodge a partial solar eclipse that could compromise its solar power.

After the launch: A lunar future

“NASA’s had a number of lunar return programs that have never made it past PowerPoint slides,” says Casey Dreier, a space policy adviser for the Planetary Society.

Artemis 1, if it’s successful, will break that pattern. And Dreier says there’s good reason to be optimistic about this particular attempt. Despite the Artemis program’s ballooning costs, returning to the moon enjoys broad support in Washington, crossing party lines and presidential administrations. Those backers will no doubt be happy to see their support finally paying off.

Then, assuming Artemis 1 is successful, it will be just the first mission of a much larger list. “This is not really the culmination,” says Lanham. “It’s just the beginning.”

The timeline of the first crewed Artemis 2 mission—which will fly around the moon and return to Earth, much like Apollo 8—is still hazy, but current plans have it launching around 2024. After that would come the first human steps on lunar soil since 1972.

“The lunar landings have almost receded into myth at this point,” says Dreier. “For the first time, we have a real, viable chance at seeing humans walk on the moon again.”

What engineers learned about diving injuries by throwing dummies into a pool https://www.popsci.com/science/physics-how-to-dive-safely/ Wed, 27 Jul 2022 21:00:00 +0000 https://www.popsci.com/?p=458564
Diving mannequins enter a pool so researchers can measure what forces affect them.
Two 3D-printed mannequins plunge into a pool. Anupam Pandey, Jisoo Yuk, and Sunghwan Jung

Pointier poses slipped into the water more easily than rounded ones.

The post What engineers learned about diving injuries by throwing dummies into a pool appeared first on Popular Science.

The next time you’re about to jump off a diving board to escape the summer heat, consider this: There are denizens of the animal kingdom who can put even the flashiest of Olympic divers to shame. Take, for instance, the gannet. In search of fresh fish to eat, this white seabird can plunge into water at 55 miles per hour. That’s almost double the speed of elite human athletes who leap from 10-meter platforms.

Engineers can now measure for themselves what diving does to the human body, without any actual humans being harmed in the process. They created mannequins, like crash-test dummies, fitted them with force sensors, and dropped them into water. Their results, published in the journal Science Advances on July 27, show just how unnatural it is for a human body to plummet headlong into the drink.

“Humans are not designed to dive into water,” says Sunghwan Jung, a biological engineer at Cornell University, and one of the researchers behind the study.

Jung’s group has spent the past several years studying how various living things crash into water. Initially, the researchers focused on animals: the gannet, for one; the porpoise; and the basilisk lizard, famed for running across the water’s surface before gravity forces its feet under.

Those animals’ bodies likely evolved and adapted to their aquatic environments. They might need to dive under the water to find food or to avoid predators swooping down from above. Humans, who evolved in drier, terrestrial environments, have no such biological need. For us, that tendency makes diving much more dangerous.

“Humans are different,” says Jung. “Humans dive into water for fun—and likely get injured frequently.”

[Related: Swimming is the ultimate brain exercise. Here’s why.]

Jung and his colleagues wanted to measure the force the human body experienced when it crashed into the water surface. To do this, they 3D-printed mannequins and fitted them with sensors. The sensors could record the force the dummy diver was experiencing and, in turn, how that force changed over the course of a splash.

They measured three different poses, each mimicking one of those diving animals. To emulate the rounded head of a porpoise, a mannequin dropped into the water head-first. To emulate the pointed beak of a bird, a second pose had the mannequin’s hands joined in a V-shape beyond its head. And to copy how a lizard falls in, a third pose had the mannequin plunge feet-first.

As the bodies experienced the force of the impact, the researchers found that the rate of change in the force varied depending on the shape. A rounded shape, like a human head, underwent a more brutal jolt than a pointier shape.

From this, they estimated a few heights above which diving in a particular posture would be dangerous. Diving feet-first from above around 50 feet would put you at risk of knee injury, they say. Diving hands-first from above roughly 40 feet could put you through enough force to hurt your collarbone. And diving from just 27 feet, head-first, might cause spinal cord injuries, the researchers believe.
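A rough back-of-envelope check on those heights, assuming simple free fall and ignoring air resistance (the study’s actual force model is far more detailed than this sketch):

```python
import math

G = 9.81         # gravitational acceleration, m/s^2
FT_TO_M = 0.3048 # feet to meters

def impact_speed_mph(height_ft: float) -> float:
    """Free-fall speed at the water's surface from a given height, ignoring drag."""
    v = math.sqrt(2 * G * height_ft * FT_TO_M)  # m/s, from v^2 = 2gh
    return v * 2.23694                          # convert m/s to mph

# The study's approximate danger thresholds:
for height, risk in [(27, "head-first, spinal injury risk"),
                     (40, "hands-first, collarbone injury risk"),
                     (50, "feet-first, knee injury risk")]:
    print(f"{height} ft ({risk}): ~{impact_speed_mph(height):.0f} mph at the surface")
```

By the same drag-free arithmetic, a gannet’s 55 mph entry corresponds to a drop of roughly 100 feet.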

You likely won’t encounter diving boards that high at your local pool, but it’s not inconceivable that you’d jump from that high when, say, diving from a cliff.

“The modelling is really solid, and it’s very interesting to look at the different impacts,” says Chet Moritz, who studies how people recover from spinal cord injuries at the University of Washington and wasn’t involved with the paper.

Spinal cord injuries aren’t common, but poolside warnings beg you not to dive into shallow water for very good reason: The trauma can be utterly debilitating. A 2013 study found that most spinal cord injuries were due to falls or vehicle crashes—and diving accounted for 5 percent of them. 

But Moritz points out that the spinal cord injuries he is aware of come from striking the bottom of a pool, rather than the surface that these engineers are scrutinizing. “From my experience, I don’t know of anyone who’s had a spinal cord injury from just hitting the water itself,” he says.

Nonetheless, Jung believes that if people can’t stop diving, then his research may at least make the activity safer. “If you really need to dive, then it’s good to follow these kind of suggestions,” he says. That is to say: Try not to hit the water head-first.

Jung’s group isn’t doing this research only to improve diving safety warnings. The researchers are also trying to make a projectile, one with a pointed front inspired by a bird’s beak, that can better arc through deep water.

Correction (July 28, 2022): Sunghwan Jung’s last name was previously misspelled. It has been corrected.

This solar tower makes jet fuel from sunbeams, water, and gas https://www.popsci.com/science/solar-tower-jet-fuel/ Wed, 20 Jul 2022 22:00:00 +0000 https://www.popsci.com/?p=457487
In the ceramic box atop a solar tower, a chemical reaction takes place that makes jet fuel.
Sunlight focuses on a ceramic box atop the solar tower in Madrid. ETH Zurich

It's a smart way to make propellent that cuts back on carbon-intensive processes in flying.

The post This solar tower makes jet fuel from sunbeams, water, and gas appeared first on Popular Science.

At first glance, you might think the structure tucked away in a Madrid suburb is a solar power plant. Perched in an industrial park, the facility features an audience of solar reflectors—mirrors that concentrate blinding sunlight to the top of a tower.

But this plant isn’t for generating electricity. It’s for generating jet fuel.

For the past several years, researchers from several different institutions in Switzerland and Germany have been using it to test a method to create propellant—normally a carbon-intensive process involving fossil fuels—using little more than sunlight and greenhouse gases captured from the atmosphere. They published their results in the journal Joule today.

What happens inside the plant is a bit of chemistry known as the Fischer-Tropsch process. Under certain conditions, hydrogen gas and carbon monoxide (yes, the same toxic gas found in vehicle exhaust) can react, rearranging their atoms into water vapor and hydrocarbons. Those carbon compounds include diesel, kerosene, and other fuels that you might otherwise produce by dirtying your hands and refining petroleum.
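The overall reaction can be sketched in its usual textbook form for straight-chain hydrocarbons, where n sets the chain length (kerosene-range fuels correspond roughly to n between 10 and 16; these are general chemistry values, not figures from the study):

```latex
(2n+1)\,\mathrm{H_2} + n\,\mathrm{CO} \;\longrightarrow\; \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}
```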

Though the tower is new, the underlying process isn’t a recent invention; two chemists—named, naturally, Fischer and Tropsch—pioneered it in Germany nearly a century ago. But it’s historically been something of an afterthought. You need some source of that carbon monoxide: typically coal, natural gas, or their byproducts. It’s useful if you have limited access to petroleum, but less helpful if you’re trying to clean up the transport sector.

[Related: All your burning questions about sustainable aviation fuel, answered]

Now, with the intensifying climate crisis kindling interest in cleaner fuels, there’s growing demand for alternate carbon sources. Biological waste is a popular one. This plant takes a different approach: capturing carbon dioxide from the atmosphere. 

That’s where the 169 solar reflectors come into the picture. Atop the 50-foot-tall structure, their concentrated light, on average 2,500 times the intensity of ordinary sunlight, strikes a porous ceramic box made from cerium, rare-earth element number 58. The box draws in water and carbon dioxide captured from the air and splits their molecules into hydrogen gas and carbon monoxide.

“We have been developing the science and technology for more than a decade,” says Aldo Steinfeld, an engineer at ETH Zürich in Switzerland and one of the paper authors. Steinfeld and his colleagues had first demonstrated the box method in the lab in 2010. By 2017, they’d begun building the plant.

In that plant, the newly created gases sink to the bottom of the tower, where they enter a shipping container that carries out the Fischer-Tropsch reactions. The end result is fossil-fuel-free kerosene, produced by pulling carbon dioxide from the air. The researchers say it can be pumped into fuel tanks, today, without issue.

Before the global pandemic, aviation accounted for less than 3 percent of the world’s carbon dioxide emissions. Land vehicles, in contrast, spewed out more than six times as much. But, while we’ve already started to replace the world’s road traffic with electric cars, there just isn’t a viable alternative for aircraft yet.

So the aviation industry—and governments—are trying to focus on alternative sources, such as biofuels. Though their exact timeline is still up in the air, European regulators may require non-fossil-fuel sources to provide as much as 85 percent of the fuel pumped at European Union’s airports by 2050.

In this environment, the Fischer-Tropsch process has entered the stage. Last year, a German nonprofit named Atmosfair opened a plant near the Dutch border that produces synthetic kerosene. It relies on a complex interplay of solar electricity and waste biogas to get its chemical components. Since the Atmosfair plant opened, it has produced eight barrels of kerosene a day: barely a drop in the 2.3-billion-gallon bucket that the world used in the year 2019.
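To put that “drop in the bucket” in numbers, here is some rough arithmetic on the figures quoted above (42 gallons per barrel is the standard US oil-barrel conversion, an assumption added here, not a number from the article):

```python
BARRELS_PER_DAY = 8           # Atmosfair's reported output
GALLONS_PER_BARREL = 42       # standard US oil barrel
ANNUAL_DEMAND_GALLONS = 2.3e9 # world figure cited above

annual_output = BARRELS_PER_DAY * GALLONS_PER_BARREL * 365  # gallons per year
share = annual_output / ANNUAL_DEMAND_GALLONS

print(f"Atmosfair output: ~{annual_output:,.0f} gallons/year")
print(f"Share of cited demand: ~{share:.6%}")
```

That works out to roughly 120,000 gallons a year, on the order of five thousandths of one percent of the cited total.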

The solar kerosene plant in Spain follows in its footsteps, though Steinfeld says using the sun makes getting hydrogen and carbon monoxide much simpler. Still, just like Atmosfair’s plant, it’s only an early drop. “The facility is relatively small compared to a commercial-scale one,” says Steinfeld. But he and his colleagues believe that it’s an important demonstration.

[Related: Floating solar panels could be the next big thing in clean energy]

According to Steinfeld, meeting the entire aviation sector’s demand would require solar kerosene plants covering an area of around 17,500 square miles, roughly the size of Estonia. That does sound large, but Steinfeld looks at it differently: A relatively small parcel of a sparsely inhabited hot desert could supply all the world’s planes.

(There’s precedent for something like it: Sunny Morocco has already become a solar power hub, and the country is planning to export some of that power to relatively cloudier Britain.)

For now, Steinfeld says the next steps are to make the process more efficient. Right now, a meager 4.1 percent of the solar energy striking the ceramic box actually goes into making gas. The researchers think they could considerably boost that number.

A self-aware robot taught itself how to use its body https://www.popsci.com/science/self-aware-robot-learning/ Wed, 13 Jul 2022 18:00:00 +0000 https://www.popsci.com/?p=456112
This robot is learning about its body to adapt to new tasks.
The test robot, which learns about its shape to perform new tasks. Jane Nisselson and Yinuo Qin/ Columbia Engineering

It could be the first step toward making a smarter type of robot.

The post A self-aware robot taught itself how to use its body appeared first on Popular Science.

Say you wake up and find that you’ve transformed into a six-legged insect. This might be a rather disruptive experience, but if you carry on, you’ll probably want to find out what your new body can and can’t do. Perhaps you’ll find a mirror. Perhaps, with a little bit of time, you might be able to acclimatize to this new shape.

This fantastic concept isn’t too different from the principle that some engineers want to harness to build better robots. For a demonstration, one group has created a robot that could learn, through practice, what its own form can do.

“The idea is that robots need to take care of themselves,” says Boyuan Chen, a roboticist at Duke University in North Carolina. “In order to do that, we want a robot to understand their body.” Chen and his colleagues published their work in the journal Science Robotics on July 13.

Their robot is relatively simple: a single arm mounted atop a table, surrounded by a bank of five video cameras. The robot had access to the camera feeds, allowing it to see itself as if in a room full of mirrors. The researchers instructed it to perform the basic task of touching a nearby sphere. 

By means of a neural network, the robot pieced together a fuzzy model of what it looked like, almost like a child scribbling a self-portrait. That helped human observers, too, prepare for the machine’s actions. If, for instance, the robot thought its arm was shorter than it actually was, its handlers could stop it from accidentally striking a bystander.

Like infants wiggling their limbs, the robot began to understand the effects of its movements. If it rotated its end effector or moved it back and forth, it could tell whether or not it would strike the sphere. After about three hours of training, the robot understood the limitations of its material shell well enough to touch that sphere with ease.
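That learning loop can be illustrated with a toy version, with every detail below assumed for illustration rather than taken from the paper: a simulated two-link arm “babbles” random joint angles, observes where its tip lands, and fits a model predicting tip position from joint angles. (The actual study used a deep network and camera images; here, least squares on trigonometric features suffices, because a planar arm’s tip position is exactly linear in those features.)

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.7  # link lengths, arbitrary units

def tip_position(theta1, theta2):
    """Ground-truth forward kinematics of a planar two-link arm."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return np.stack([x, y], axis=-1)

def features(theta1, theta2):
    """Trigonometric features in which the tip position is linear."""
    return np.stack([np.cos(theta1), np.sin(theta1),
                     np.cos(theta1 + theta2), np.sin(theta1 + theta2)], axis=-1)

# "Motor babbling": try random joint angles and watch where the tip lands.
t1 = rng.uniform(-np.pi, np.pi, 200)
t2 = rng.uniform(-np.pi, np.pi, 200)
X, Y = features(t1, t2), tip_position(t1, t2)

# Fit the self-model by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The learned model now predicts the outcome of an unseen pose.
pred = features(0.3, -0.5) @ W
true = tip_position(0.3, -0.5)
print("predicted tip:", pred, "actual tip:", true)
```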

“Put simply, this robot has an inner eye and an inner monologue: It can see what it looks like from the outside, and it can reason with itself about how actions it has to perform would pan out in reality,” says Josh Bongard, a roboticist at the University of Vermont, who has worked with the paper authors in the past but was not an author.

[Related: MIT scientists taught robots how to sabotage each other]

Robots knowing what they look like isn’t, in itself, new. Around the time of the Apollo moon landings, scientists in California built Shakey the Robot, a boxy contraption that would have been at home in an Outer Limits episode. Shakey came preloaded with a model of itself, helping the primitive robot make decisions.

Since then, it’s become fairly common practice for engineers to program a robot with an image of itself or its environment, one that the robot can consult to make decisions. That isn’t always advantageous, because such a robot won’t be very adaptable. It’s fine if the robot has one or a few preset tasks, but for robots with a more general purpose, the researchers think they can do better.

More recently, researchers have tried training robots in virtual reality. The robots learn maneuvers in a simulation that they can put into practice in meatspace. It sounds elegant, but it isn’t always practical. Running a simulation and having robots learn inside it demands a heavy dose of computational power, like many other forms of AI. The costs, both financially and environmentally, add up.

Having a robot teach itself in real life, on the other hand, opens many more doors. It’s less computationally demanding, and isn’t unlike how we learn to view our own changing bodies. “We have a coherent understanding of our self-body, what we can and cannot do, and once we figure this out, we carry over and update the abilities of our self-body every day,” Chen says.

That process could aid robots in environments that are inaccessible for humans, like deep underwater or outside Earth’s atmosphere. Even robots in common settings might make use of such abilities. A factory robot, say, might be able to determine if there’s a malfunction and adjust its routine accordingly. 

These researchers’ arm is but a rudimentary first step to that goal. It’s a far cry from the body of even a simple animal, let alone the body of a human.

The machine, to wit, has only four degrees of freedom, meaning there are only four independent ways it can move. The scientists are now working on a robot with twelve degrees of freedom. The human body has hundreds. And a robot with a rigid exterior is a vastly different beast from one with a softer, flexible form.

“The more complex you are, the more you need this self-model to make predictions. You can’t just guess your way through the future,” believes Hod Lipson, a mechanical engineer at Columbia University and one of the paper authors. “We’ll have to figure out how to do this with increasingly complex systems.”

[Related: Will baseball ever replace umpires with robots?]

Roboticists are optimistic that the machine learning that guided this robot can be applied to those with more complex systems. Bongard says that the methods the robots used to learn have already been proven to scale up well—and, potentially, to other things, too.

“If you have a robot that can now build a model of itself with little computational effort, it could build and use models of lots of other things, like other robots, autonomous cars…or someone reaching for your off switch,” says Bongard. “What you do with that information is, of course, up to you.”

For Lipson in particular, making a robot that can understand its own body isn’t just a matter of building smarter robots in the future. He believes his group has created a robot that understands the limitations—and powers—of its own body. 

We might think of self-awareness as the ability to reflect on one’s own existence. But as you might know if you’ve been around an infant lately, there are other forms of self-awareness, too.

“To me,” Lipson says, “this is a first step towards sentient robotics.”

The post A self-aware robot taught itself how to use its body appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

The green revolution is coming for power-hungry particle accelerators https://www.popsci.com/science/sustainable-particle-accelerators/ Tue, 05 Jul 2022 10:00:00 +0000 https://www.popsci.com/?p=454081
LHC consumes as much energy as a city. Can particle accelerators be greener?
The gentle underground curve of the LHC at CERN, in Geneva. Deposit Photos

Future accelerators may be able to capture spent energy, or use greener cooling gases.

The post The green revolution is coming for power-hungry particle accelerators appeared first on Popular Science.


On July 5, marking the end of a three-year hiatus, CERN’s Large Hadron Collider will start collecting data. It will launch beams of high-energy particles in opposite directions around a 16-mile-long loop to create an explosive crash. Scientists will watch the carnage with high-precision detectors and sift through the debris for particles that reveal the inner workings of our universe.

But to do all of that, LHC needs electricity: enough to power a small city. It’s easy for someone outside CERN to wonder just why one physics facility needs all that power. Particle physicists know these demands are extreme, and many of them are trying to make the colliders of the future more efficient. 

“I think there is increased awareness in the community that accelerator facilities need to reduce energy consumption if at all possible,” says Thomas Roser, a physicist formerly at Brookhaven National Laboratory in New York.

Scientists are already drawing up plans for LHC’s proposed successor—the so-called Future Circular Collider (FCC), with a circumference nearly four times as large as LHC’s, quite literally encircling most of the city of Geneva. As they do that, they’re looking at a few, sometimes unexpected, sources of energy use and greenhouse gas emissions—and how to reduce them.

Networked costs

LHC, despite its size and energy demands, isn’t that carbon-intensive to operate. For one, CERN sources its electricity from the French grid, whose portfolio of nuclear power plants makes it one of the least carbon-intensive in the world. Put LHC in a place with a fossil-fuel-heavy grid, and its climate footprint would be very different.

“We’re very lucky…if it was in the US, it would be terrible,” says Véronique Boisvert, a particle physicist at Royal Holloway, University of London.

But the collider’s climate impacts spread far beyond a little sector of Geneva’s suburbs. Facilities like CERN generate heaps of raw data. To process and analyze that data, particle physics relies on a global network of supercomputers, computer clusters, and servers—which are notoriously power-hungry. At least 22 of those computing facilities are in the US.

Scientists can choose to build these networks, or route their computing, in places with low-carbon electricity: California, say, over Florida. 

“Maybe we should also think about what’s the carbon emission per CPU cycle and use that as a factor in planning your technology, as much as you do cost or power efficiency,” says Ken Bloom, a particle physicist at the University of Nebraska-Lincoln.

[Related: The biggest particle collider in the world gets back to work]

Even though the accelerator itself is only a small portion of particle physics’ carbon footprint, Boisvert believes that researchers should still plan to reduce the facility’s energy consumption. By the time FCC comes online in the 2040s and 2050s, decarbonization will mean competing for grid power with many more electric cars and appliances than exist today. She thinks it’s wise to plan ahead for that time.

The goal of reducing power use is the same, says Boisvert. “You still need to minimize power, but for a different reason.”

Recovering energy

In the name of efficiency and energy conservation, scientists are studying a few technologies that can help make “green accelerators.” 

In 2019, researchers at Cornell University and Brookhaven National Lab unveiled a prototype accelerator called the Cornell-Brookhaven ERL Test Accelerator (CBETA). Remarkably, in demonstrations, CBETA recovered all the energy that scientists put into it.

“We took technology that existed, to some extent, and improved it and broadened its application,” says Georg Hoffstaetter, a physicist at Cornell University.

CBETA launched high-energy electrons through a racetrack-shaped loop that could fit inside a warehouse. With every “lap,” the electrons gained an energy boost. After four laps, the machine could slow down the electrons and store their energy to be used again. CBETA was the first time physicists had recovered energy after that many full laps.

Energy recovery isn’t a new technology, but as particle physicists grow more interested in saving energy, it has made its way into FCC’s plans. “There are options for [FCC] that use energy recovery,” says Hoffstaetter. Particles that aren’t smashed can be recovered.

CBETA also saves energy by using different magnets. Most particle accelerators use electromagnets to guide their particles along the arc. Electromagnets get their magnetic strength from running electricity around them; turn off the switch, and the magnetic field disappears. By replacing electromagnets with permanent magnets that don’t need electricity, CBETA could cut down on energy use.

“These technologies are kind of catching on,” says Hoffstaetter. “They’re being recognized and they’re being incorporated into new projects to save energy.”

Some of those projects are closer to completion than FCC. Designers of a new collider at Brookhaven, which will smash electrons and ions together, have incorporated energy recovery into their plans. At the Jefferson Lab, an accelerator facility in Newport News, Virginia, scientists are building a much larger accelerator that uses permanent magnets.

Energy recovery isn’t the only way that a collider’s energy can find new life. Much of that energy is turned into heat. The warmth can be put to work: CERN has experimented with piping heat to homes in the towns that surround LHC.

Gassy culprits

But focusing on carbon emissions from these facilities misses part of the picture—in fact, the largest part. “That is not the dominant source of emission,” says Boisvert. “The dominant source is the gases we use in our particle detectors.”

To keep an apparatus at the ideal temperatures for detecting particles, the highly sensitive equipment needs to be chilled by gases—similar to the gases used in some refrigerators. Those gases need to be non-flammable and able to endure high levels of radiation while maintaining refrigerated temperatures.

The gases of choice fall into two categories: hydrofluorocarbons (HFCs) and perfluorocarbons (PFCs). Some of them are greenhouse gases far more potent than carbon dioxide. C2H2F2, CERN’s most common HFC, traps heat 1,300 times more effectively than carbon dioxide.
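For readers who want to check the math: that multiplier works like a simple conversion factor. Here is a minimal, illustrative sketch, assuming the 1,300 figure cited above as the gas’s warming potential; the leak mass is a made-up example, not a CERN figure.

```python
# Illustrative sketch only: convert a hypothetical refrigerant leak into
# CO2-equivalent emissions, using the "1,300 times more effective than CO2"
# multiplier cited in the article. The leak mass is invented for illustration.
GWP_HFC = 1300  # heat-trapping potency relative to carbon dioxide

def co2_equivalent_kg(leak_kg, gwp=GWP_HFC):
    """CO2-equivalent mass (kg) for a given mass of leaked gas."""
    return leak_kg * gwp

# A hypothetical 100 kg leak counts the same as 130,000 kg of CO2,
# i.e., 130 metric tons.
print(co2_equivalent_kg(100))
```

This is why even small, hard-to-fix leaks in a detector can dominate a facility’s reported emissions.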

LHC already tries to capture these gases, reuse them, and stop them from spewing out into the atmosphere. Still, its process isn’t perfect. “A lot of these are in parts of these experiments that are just really difficult to access,” says Bloom. “Leaks can develop in there. They’re going to be very hard to fix.”

[Related: Scientists found a fleeting particle from the universe’s first moments]

From the logistician’s point of view, the use of HFCs and PFCs poses a procurement problem. Some jurisdictions, such as the European Union, are moving to ban them. Boisvert says this has led to wild fluctuations in price.

“When you’re designing future detectors, you can’t use those gases anymore,” says Boisvert. “All this R&D—‘Okay, what gases are we going to use?’—needs to happen now, essentially.”

There are alternatives. One, actually, is carbon dioxide itself. CERN has retrofitted some of LHC’s detectors to chill themselves with that compound. It isn’t perfect, but it’s an improvement.

These are the sorts of choices that many scientists want to see enter any planning discussion for a future accelerator. 

“Just as monetary costs are a consideration in the planning of any future facility, future experiment, future physics program,” Bloom says, “we can think about climate costs in the same way.”

Correction August 9, 2022: This article has been updated to include Ken Bloom’s university affiliation.

Earth has more than 10,000 kinds of minerals. This massive new catalog describes them all. https://www.popsci.com/science/earth-minerals-catalog/ Fri, 01 Jul 2022 14:20:49 +0000 https://www.popsci.com/?p=454078
Stacks of white calcite on a black background in a mineral catalog
Calcite is known to form in at least 17 different ways, making it one of the most diverse mineral species (along with pyrite). This other-worldly example appears to be a cave deposit capturing different episodes of crystallization that correlate to changing water levels in southern China, during the ice ages. ARKENSTONE/Rob Lavinsky

If you consider how and where a diamond was formed, you end up with nine different kinds instead of one.

The post Earth has more than 10,000 kinds of minerals. This massive new catalog describes them all. appeared first on Popular Science.


Robert Hazen was attending a Christmas party, one December night in 2006, when a biologist friend and colleague asked a simple question: “Were there any clay minerals in the Hadean?”

The question came from an important place. The Hadean eon is what scientists call the first chapter of Earth’s history—the fiery and mythopoetic time from our planet’s formation until about 4 billion years ago. And clay minerals, often found today in soils around the world, play a key role in some of the many theories of how life began.

But according to Hazen, a mineralogist at the Carnegie Institution for Science in Washington, D.C., it wasn’t a question his field was equipped to study a decade or two ago.

[Related: How minerals and rocks reflect rainbows, glow in the dark, and otherwise blow your mind]

He now hopes that will change, thanks to a new mineral cataloging system that takes into account how—and when—a mineral formed. It’s described in two papers, published today in the journal American Mineralogist. (This research could well be the vanguard for more than 70 other studies.)

“I think this gives us the opportunity to answer almost unlimited questions,” says Shaunna Morrison, a geoscientist at the Carnegie Institution and one of the papers’ authors.

Traditionally, mineralogists classify crystalline compounds by way of their chemical makeup (what atoms are in a mineral?) and their structure (if you zoomed in, how would you see those atoms arranged?).

“The way mineralogists think about their field is: Each mineral is an idealized chemical composition and crystal structure,” says Hazen, also one of the paper authors. “That’s how we define ‘mineral species.’”

Iridescent opalized ammonite on black in a mineral catalog
A beautiful example of opalized ammonite from Alberta, Canada, shows the intersection of biological evolution and mineral evolution—the interplay between minerals and life. A hundred million years ago, the ammonite deposited its own hard carbonate shell — a “biomineral.” In this rare case, that original carbonate shell was later replaced by the fiery mineral opal. ARKENSTONE/Rob Lavinsky

The International Mineralogical Association (IMA), the world congress for the field of study, defines around 5,800 listed species: from pyrite and diamond to hydroxyapophyllite-(K) and ferro-ferri-fluoro-leakeite. It’s a collection that scientists have assembled over centuries.

That schema is great for identifying minerals on their face, but it doesn’t say much about how a geological artifact might have formed. Pyrite, for instance, can be traced back to anything from hot water and volcanoes to meteorites and human-created mine fires. Without that extra bit of knowledge, if you find pyrite, you won’t know the story it’s trying to tell you. Other minerals are born in the extreme conditions of a lightning strike, or from Earth’s life directly, like in bones or bird poop. There are minerals that arise due to the oxygen that early bacteria pumped into Earth’s ancient atmosphere.

Hazen and Morrison wanted to create a next-level catalog that tied materials to their histories. “What we were really looking to do was bring in context,” says Morrison.

Currently, there are quite a few ways researchers can tell where, when, and how a mineral formed. They might look at trace elements, which are extra bits of chemical and biological matter that are incorporated into a mineral from its surroundings. They might look at the ratio of different radioactive isotopes in a mineral which, similar to carbon-dating, could tell scientists how far back a mineral goes. They might even think about a mineral’s texture or color; samples that have oxidized or rusted, for instance, might change appearance.

Orange tourmaline in a white rock base on a black background in a mineral catalog
Tourmaline is the most common mineral with the element boron. It forms gorgeous crystals in mineral-rich granite pegmatites, which host hundreds of exotic mineral species. The International Mineralogical Association recognizes more than 30 “species” of tourmaline, but the new papers acknowledge only a handful of “mineral kinds.” The reason is that the composition of tourmaline is highly variable — ratios of Mg/Fe, F/OH, Al/Fe and many other “chemical substitutions” can lead to individual colorfully zoned crystals that hold as many as seven different species but only one “mineral kind.” ARKENSTONE/Rob Lavinsky

Equipped with data science methods—often used today by biologists to analyze genomes and by sociologists to find groups of people in a social network—Morrison was able to correlate several of those factors and find the formation histories for various minerals. It took her team 15 years to scour thousands of minerals from around the planet and tag them with one of 57 different formation environments, ranging from spaceborne minerals that predated the Earth to minerals formed in human mines.

Now, they’ve transformed the IMA’s 5,800 species into more than 10,500 of what Morrison and Hazen call “mineral kinds.” One mineral can have numerous kinds if it formed in several different ways.

Take diamond, for instance. Chemically, it’s one of the simplest minerals, made entirely of carbon atoms arranged in a cube-based structure. But the new catalog lists nine different kinds of it: diamond that was baked and pressed in Earth’s mantle, diamond that precipitated from a meteor strike, diamond from carbon-rich stars before life even existed, and more.
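In data terms, the new catalog effectively keys each entry on a pair—species plus formation environment—rather than on species alone. Here is a minimal sketch of that bookkeeping; the environment labels and counts below are invented for illustration, not drawn from the actual catalog.

```python
# Sketch of the "mineral kinds" idea: one species can map to several kinds,
# one per distinct formation environment it has been observed in.
# All example data here is illustrative, not from the real catalog.
from collections import defaultdict

observations = [
    ("diamond", "Earth's mantle"),
    ("diamond", "meteor impact"),
    ("diamond", "carbon-rich star"),
    ("pyrite", "hydrothermal vent"),
    ("pyrite", "mine fire"),
]

# species -> set of formation environments
kinds = defaultdict(set)
for species, environment in observations:
    kinds[species].add(environment)

# Each distinct (species, environment) pair counts as one "mineral kind,"
# so two species here yield five kinds.
total_kinds = sum(len(envs) for envs in kinds.values())
print(total_kinds)
```

The real catalog does this at scale: 5,800 species crossed with 57 formation environments yields more than 10,500 kinds.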

In Morrison and Hazen’s revamped guide, around five-sixths of the IMA’s minerals came in only one or two kinds. But nine minerals branched off into 15 or more kinds. And no mineral in the catalog has quite as many kinds as pyrite: 21.

Green malachite pillars on a black background in a mineral catalog
Malachite is an example of a mineral that formed after life created atmospheric oxygen about 2.5 billion years ago. They are among hundreds of beautiful blue and green copper minerals that form near Earth’s surface as ore deposits weather. ARKENSTONE/Rob Lavinsky

In creating this schema, Hazen and Morrison (both of whom are also on the Curiosity Mars rover team) are looking far beyond Earth. If you find a mineral on another world and you know where it formed, you can quickly figure out what sort of environment that planet held in ancient times. For instance, if your mineral is amongst the 80 percent of kinds that originated in contact with water, then you might have evidence of a long-dead ocean. 

And if your mineral is amongst the one-third of mineral kinds that emerged from biological processes, it could be a hint of long-disappeared extraterrestrial life.

[Related: Why sustainable diamonds are almost mythical]

“A new way of seeing minerals appears,” said Patrick Cordier, a mineralogist at the University of Lille in France, in a statement. “Minerals become witnesses, markers of the long history of matter.”

“You can hold a mineral that’s hundreds of millions or billions of years [old]. You can hold a meteorite that’s 4.567 billion years old,” says Hazen. “There’s no other tangible evidence of the earliest history of our solar system.”

This ghostly particle may be why dark matter keeps eluding us https://www.popsci.com/science/dark-matter-particle-experiment/ Mon, 27 Jun 2022 22:43:48 +0000 https://www.popsci.com/?p=452753
Photodetectors in a circular array in a large neutrino detector experiment at Fermilab
Different detectors can be built to measure and study different kinds of neutrinos. The MiniBooNE experiment at Fermilab, for example, is specifically used for muon neutrinos. Reidar Hahn/Fermilab

Physicists in Russia think they’re on the trail of a new particle that's everywhere, but nowhere.

The post This ghostly particle may be why dark matter keeps eluding us appeared first on Popular Science.


Every second, 100 trillion phantasmic little particles called neutrinos pass through your body. Nearly all of them shoot through your skin without interacting at all. Their shyness makes detecting these particles a particularly painstaking task for physicists.

But over the past few decades, the world of neutrino physics has been taking on a new challenge. 

From an experiment conducted deep under the Caucasus Mountains in Russia, physicists have found further evidence—published on June 9 in two papers—that a piece of the current theory of neutrinos is out of place. If they’re right, it could unveil a never-before-seen type of neutrino that could fly even more under the radar—and could explain why we can’t see the dark matter that makes up much of our universe.

“It’s probably, in my mind, one of the most important results in neutrino physics, at least in the last five years,” says Ben Jones, a neutrino physicist at the University of Texas at Arlington, who was not involved with the experiment.

Department of Energy physicists handling chromium disks for a neutrino detector
In the Los Alamos experiment, a set of 26 irradiated disks of chromium 51 provides the source of electron neutrinos that react with gallium and produce germanium 71 at rates which can be measured against predicted rates. A.A. Shikhin

The case of the misbehaving neutrinos

Like creatures from an ethereal plane, neutrinos react with their material surroundings sparingly. With zero electric charge, they aren’t susceptible to electromagnetism. Nor do they get involved in the strong nuclear interaction, which helps bind particles together in the hearts of atoms. 

But neutrinos do play a part in the weak nuclear force, which—according to the Standard Model, the theoretical framework that forms the foundation of modern particle physics—is responsible for certain types of radioactivity.

The vast majority of neutrinos we observe on Earth are born from radioactive processes in the sun. To watch those, scientists rely on neutrino observatories under the sea or buried deep beneath the planet’s crust. It’s not often easy to tell if neutrino detectors are working properly, so physicists calibrate their equipment by placing certain isotopes—like chromium-51, whose neutrino emissions they know well—nearby.

As neutrino physics gained momentum in the 1990s, however, researchers noticed something odd. In some experiments, when they calibrated their detectors, they began finding fewer neutrinos than accounted for in theoretical particle physics.

For instance, in 1997 at Los Alamos National Lab in New Mexico, scientists from the US and Russia set up a tank filled with gallium, a metal that’s liquid on a warm summer day. As neutrinos struck the gallium, the element’s atoms absorbed the particles. That process transformed the liquid gallium into a solid element, germanium—a sort of reversed radioactive decay. Physicists measured that germanium to trace how many neutrinos had passed through the tank.

But when the Los Alamos team tested their system with chromium-51, they found too little germanium—and too few neutrinos, in other words. This deficit became known as the “gallium anomaly.”

[Related: Why Los Alamos lab is working on the tricky task of creating new plutonium cores]

Since then, experts poring over the gallium anomaly have explored a tentative explanation. Particle physicists know that neutrinos come in three “flavors”: electron neutrinos, muon neutrinos, and tau neutrinos, each playing different roles in the dance of the quantum world. Under certain circumstances, it’s possible to observe neutrinos switching between flavors. Those shifts are called “neutrino oscillations.” 

That led to an interesting possibility—that neutrinos were missing in the gallium anomaly because they were jumping into another hidden flavor, one that’s even less reactive to the physical world. The physicists came up with a name for the category: sterile neutrinos.

The sterile neutrino story was just an idea, but it found support. Around the same time, physicists at places like Los Alamos and Fermilab in suburban Chicago had started to observe neutrino oscillations directly. When they did, they found discrepancies between the number of neutrinos of each flavor they expected to appear and how many actually appeared.

“Either some of the experiments are wrong,” says Jones, “or something more interesting and strange is going on that has a different signature.”

Sterile neutrino detector machinery in a large underground room in Russia
The main setup of the Baksan Experiment on Sterile Transitions. V.N. Gavrin/BEST

Searching for sterile signatures

So what would that sterile neutrino look like? The name “sterile,” and the fact that physicists haven’t detected them through the normal channels, indicate that this class of particles abstains from the weak nuclear force, too. That leaves just one way they can interact with their environment: gravity. 

At the subatomic scales that neutrinos call home, compounded by their puny masses, gravity is extremely weak. Sterile neutrinos would be extraordinarily hard to detect.

That held true well into the 21st century, as the anomalies were too inconsistent for physicists to tell if they amounted to sterile neutrinos. Some experiments found anomalies; others simply didn’t. The sum of experiments seemed to paint a mural of circumstantial evidence. “I think that’s how a lot of people viewed it,” says Jones. “That’s how I viewed it.”

So, physicists created a whole new observatory to test the Los Alamos gallium anomaly. They named it the Baksan Experiment on Sterile Transitions, or, in the proud physics tradition of strained acronyms, BEST.

The observatory sits in a tunnel buried more than a mile under the Baksan River in the Russian republic of Kabardino-Balkaria, across the mountains from the country of Georgia. There, before Russia’s invasion of Ukraine threw the local scientific community into chaos, an international team of particle physicists recreated the Los Alamos gallium experiment, specifically looking for missing neutrinos.

BEST found the anomaly again by detecting 20 to 25 percent less germanium than expected. “This definitely reaffirms the anomaly we’ve seen in previous experiments,” Steve Elliott, a particle physicist at Los Alamos National Laboratory and a collaborator on the BEST experiment, said in a statement in early June. “But what this means is not obvious.”

Despite the satisfying result, physicists aren’t getting ahead of themselves. BEST is only one experiment, and it doesn’t explain every discrepancy that’s ever been ascribed to sterile neutrinos. (Other analyses have argued that the Fermilab result couldn’t have been a sign of sterile neutrinos, though they didn’t offer an alternative explanation.)

[Related: Meet the mysterious particle that’s the dark horse in dark matter]

But if scientists were to find similar evidence in other scenarios—for instance, in the neutrino experiment IceCube, buried under the Antarctic sheets, or in other detectors purposely planned for the sterile neutrino hunt—that would serve up real, compelling evidence that something is out there.

If the BEST result holds—and is confirmed by other experiments—it still doesn’t mean sterile neutrinos are responsible for the anomaly. Other undiscovered particles may be in play, or the whole discrepancy could be the fingerprint of some strange and unknown process. If the sterile neutrino idea is true, however, it would breach the biggest theory behind some of the world’s smallest objects.

“It would be real evidence, not only of physics beyond the Standard Model, but of truly new and not-understood physics,” says Jones.

Simply put, if sterile neutrinos exist, the implications would reach far past particle physics. Sterile neutrinos might make up much of our universe’s dark matter, which is six times as abundant as the matter we can see—and whose composition we still don’t understand.

This tiny, trailblazing satellite is taking on a big moon mission https://www.popsci.com/science/cubesat-moon-mission/ Thu, 23 Jun 2022 22:00:00 +0000 https://www.popsci.com/?p=452001
An artist's conception of the CAPSTONE satellite.
The little CAPSTONE spacecraft in orbit around the moon, in an artist's conception. Advanced Space

If CAPSTONE's goals are successful, much larger lunar orbiters could follow.

The post This tiny, trailblazing satellite is taking on a big moon mission appeared first on Popular Science.


In a few years, if all goes well, NASA astronauts will ride to the moon aboard an Orion capsule, an 8.5-ton shelter that fills up a large room. But on the other end of the size spectrum—yet, in many ways, no less important to those lunar exploration goals—sits a spacecraft that could fit, neatly, on an office desk.

That craft is the Cislunar Autonomous Positioning System Technology Operations and Navigation Experiment—CAPSTONE, for short. It will launch for the moon in late June, potentially becoming the first lunar satellite of its class. And it’s going on a test run where future, perhaps shinier missions are planned to follow. CAPSTONE may help NASA create a communications hub that, not too far in the future, will circle the moon. 

The fellowship of the CubeSat

Despite its size, CAPSTONE is remarkable for a few reasons, many of which have to do with the satellite’s class: CubeSat.

CubeSats are, well, cubic: The common base models are about 4 inches to a side and weigh no more than 4.5 pounds. You could hold one in your hand; you might even build one by hand, too, since most use off-the-shelf components. The units can also be stacked into larger satellites: CAPSTONE combines 12 of them, shy of the largest to date (which used 16). 

From 1998 to the start of June 2022, 1,862 CubeSats were launched—and that number is set to more than double by 2028. CubeSats’ low cost means that they’re within reach of amateurs, university groups, fledgling startups, small developing countries, and others who lack the resources of SpaceX or the world’s big space agencies.

But CubeSats’ low cost has made them appealing for other missions, too. In 2019, NASA contracted private firm Advanced Space to build CAPSTONE for $13.7 million. (For comparison, even the most rudimentary large lunar probe can cost an order of magnitude more.) Advanced Space chose to use CubeSats to put the probe into space cheaply and quickly.

[Related: This satellite has high hopes—the transformation of Finland’s space industry]

The vast majority of CubeSats live in Earth orbit. Only a few have gone beyond that. In 2018, two flew past Mars alongside NASA’s Mars InSight mission. Absolutely none have gone to CAPSTONE’s destination in the moon’s orbit.

“To date, there have not been lunar cubesats,” says Jekan Thanga, an engineer at the University of Arizona, who isn’t involved with CAPSTONE. “CAPSTONE is actually going to be a first in that respect.”

Other CubeSats are riding with the Artemis 1 uncrewed test flight. Depending on when they launch—currently scheduled for no earlier than August—they may outrace CAPSTONE to the moon.

CAPSTONE’s two missions

CAPSTONE will launch from the Mahia Peninsula in New Zealand on an Electron rocket, built by private space company Rocket Lab, which mainly launches small satellites into Earth orbit. CAPSTONE will be Electron’s first attempt to reach for the moon. “That’s also a bit of precedent,” says Thanga.

In early November, after a 3.5-month-long voyage, CAPSTONE will insert itself into a peculiarly elongated loop around the moon, called a near-rectilinear halo orbit (NRHO). This swings from 1,000 miles above one pole to 43,500 miles above the other pole. Entering NRHO is more than just a fun curiosity. CAPSTONE will test this orbit for the future Lunar Gateway, a moon-orbiting space station planned as part of the Artemis program.

“There’s no real uncertainty that the math works,” says Bradley Cheetham, CEO of Advanced Space, but CAPSTONE will give spacecraft operators practice for getting into that orbit.

While it’s orbiting the moon, CAPSTONE will try to do something else: talk to another spacecraft without contacting ground control on Earth. CAPSTONE’s onboard computer will try to link with the Lunar Reconnaissance Orbiter, an earlier NASA spacecraft that’s been mapping the moon’s surface since 2009, and calculate the positions of both spacecraft. With communication between Earth and the moon taking more than a second even at light speed, being able to chat with nearby satellites is a useful ability.
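That one-second-plus delay falls straight out of the Earth–moon distance and the speed of light. A quick back-of-the-envelope check, using the average Earth–moon distance and a rounded value for the speed of light:

```python
# Quick check of the one-way Earth-to-moon light delay mentioned above.
# Both constants are rounded, widely used average values.
MOON_DISTANCE_KM = 384_400      # average Earth-moon distance
LIGHT_SPEED_KM_S = 299_792.458  # speed of light in vacuum

one_way_delay_s = MOON_DISTANCE_KM / LIGHT_SPEED_KM_S
print(round(one_way_delay_s, 2))  # about 1.28 seconds, one way
```

A round trip, command up and acknowledgment back, doubles that, which is why routine chores are better handled between spacecraft on the spot.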

Future CubeSats, Thanga says, might be able to make that ability more permanent. For instance, it would enable easier communication to the lunar far side, currently out of Earth’s reach. When the Chinese lander Chang’e-4 touched down on the moon’s far side in 2019, it needed another satellite to relay messages to and from Earth. 

Lunar satellites that talk with each other can more easily avoid collisions, and they won’t have to hail Earth’s ground control for their every need. “What we want to do is prioritize that ground contact,” Cheetham says, removing routine location checks in favor of transmitting important operational data.

Communication is king

The world’s attention will likely be on the crewed Artemis flights—whenever they actually get off the ground, with the first set for 2024. But small-scale missions like CAPSTONE are necessary to lay the groundwork (or spacework, as it were) for those astronauts.

More moon missions are in the pipeline, potentially launching as soon as the end of this year. NASA has tapped a handful of companies to build an armada of lunar landers—fitted with science experiments for measuring things like subsurface water, the composition of the moon’s surface, and the strength of its magnetic field—that test the prospects for future lunar living.

[Related: We could actually learn a lot by going back to the moon]

As more and more Artemis flights and astronauts make it to the moon, they’ll rely on infrastructure like the Lunar Gateway, which will act as a communications center and a delivery hub for astronauts on the surface. That plan has faced criticism—some commentators have suggested sending moon landings through Gateway will make missions require more energy and expensive fuel.

But Gateway is only the start. The space agencies and their partners behind Artemis are planning everything from lunar mines to lunar satnav to lunar nuclear power plants.

“The feeling is there’s going to be a lot more traffic to the moon,” says Thanga, “and that requires a lot more infrastructure, including systems like the Gateway.”

Correction (June 24, 2022): The CAPSTONE launch location was changed from the Chesapeake Bay to the Mahia Peninsula, New Zealand. Also, the company behind the Electron rocket is called Rocket Lab, not Rocket Labs.

The post This tiny, trailblazing satellite is taking on a big moon mission appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

People enjoy colorful cities—even in virtual reality https://www.popsci.com/science/vr-study-color-urban-environments/ Fri, 17 Jun 2022 18:00:00 +0000 https://www.popsci.com/?p=450833
The colorful Bo Kaap neighborhood in Cape Town.
The colorful Bo Kaap quarter of Cape Town. Deposit Photos

For some psychological pep, add color to urban surroundings.

The post People enjoy colorful cities—even in virtual reality appeared first on Popular Science.

Not long after the Industrial Revolution kicked off in England, Romantic poet William Blake famously lamented a country lost beneath “dark satanic mills.” The image still fits in places: Today’s cities might include the prefab slabs of Eastern Bloc micro-districts, the drab facades and decrepit lots that dot the US Rust Belt, or an Arctic industrial hellscape like Norilsk—one of the most polluted cities on Earth.

But a modern city can just as easily be filled with verdant gardens or splashes of color: from Jodhpur’s azure to Jaipur’s pink to the rainbows of Bristol in the UK or Cape Town’s Bo-Kaap neighborhood.

It’s perhaps of little surprise that scientists believe vibrant environments can be a physical and psychological boon for their inhabitants. The latest evidence to support that—in a study published Friday in the journal Frontiers in Virtual Reality—comes from, well, VR.

“Virtual reality was used as a proof of concept to demonstrate that colors could be a powerful tool to trigger alertness and pleasure in gray urban cities,” says Yvonne Delevoye-Turrell, a psychologist at the University of Lille in France and one of the paper’s authors, in a statement.

Delevoye-Turrell and colleagues crafted a virtual recreation of their university’s campus: paved paths winding through a cluster of modernist buildings. They created two variants of the campus: one drab and gray, another ornamented with greenery. They beautified some of those paths, in the green world and the gray one, with patterns of multicolored polygons.

Then, the researchers immersed students from their university in each of the variants, sending them on a virtual walk. Normally, walkers might speed through an uninspiring environment, keeping their eyes stuck to the ground, perhaps lost in their thoughts. But if walkers slow their pace, or if they take a look around, it’s a sign that they’ve found something stimulating and interesting.

When test subjects walked along the patterned pathways, their heartbeats accelerated, their walking speed slowed, and the colors drew their gaze. When students walked the green campus as opposed to its gray version, the researchers observed the same influence of the many-colored polygons—but it was even more pronounced.

Bright polygons splashed across walkways draw the gaze of virtual pedestrians. University of Lille

It’s only one study limited to one sense and one type of environment. The researchers want to expand it. “Odors and sounds could be the next step for VR to truly test the impact of colors on the pleasure of walking,” says Delevoye-Turrell.

This study is but the latest drop in a surge of interest into how architecture and urban design link with the human brain. “Urban designers are hungry for this kind of information,” says Leia Minaker, a public health researcher at the University of Waterloo in Ontario who wasn’t involved with the Lille group’s paper. “They want to do their best work … They want to improve health and equity in their cities.”

Researchers have repeatedly shown that being around vegetation boosts people’s mood and attention. One recent study, published in May, found that children were more interested in and engaged with visually rich building elements, including greenery.

In the past decade, many of those researchers have turned to virtual reality. “VR is used for a variety of different things,” says Adrian Buttazzoni, a doctoral student at the University of Waterloo who was also not involved in the paper.

Researchers can virtually recreate an urban environment: a neighborhood, a park, or, as the Lille group did, a campus. Then, they can track how people navigate and their sensory reactions. In previous research, this data would often come from questionnaires, whose self-reported answers might not be as reliable.

[Related: This VR accessory is designed to make your mouth feel stuff]

Some even believe that VR can help future architects or designers in the planning stage. Perhaps architects might craft a campus or a park in virtual reality, let people walk through it, and judge their reactions.

This kind of research can inspire real-world change, Minaker says. “We’re trying to give people concrete evidence so that they can create policies and guidelines that will help create healthy cities,” she says.

As for the Lille group’s study, it seems to provide more scientific evidence for something that might sound obvious: a touch of color here and some bursts of vegetation there can liven up a city. But as unsurprising as this conclusion may seem, researchers find that it’s not always apparent.

“When you actually talk to people about the built environment around them…and you actually have a conversation about different designs of places they probably walk through every day,” says Buttazzoni, “they’re quite surprised at how little they pay attention to these different places.”

After all, even Norilsk has its share of brightly colored housing blocks.

This lithium-ion battery kept going (and going and going) in the extreme cold https://www.popsci.com/science/lithium-ion-batteries-cold-weather/ Wed, 08 Jun 2022 20:00:00 +0000 https://www.popsci.com/?p=448894
Lithium batteries in a pile.
A pile of lithium batteries (not the experimental cold-tolerant battery). Deposit Photos

Subzero temperatures were no match for the experimental battery.

The post This lithium-ion battery kept going (and going and going) in the extreme cold appeared first on Popular Science.

Few recent inventions have proven their worth more than the humble lithium-ion battery. It’s been only 30 years since they first left the lab, but they’re what power smartphones in the world’s palms and put electric cars on the road. They’ll only become more important as critical components of renewable energy grids.

Since the early 1990s, these batteries’ prices have fallen more than thirtyfold, even as they’ve grown ever more powerful. But they aren’t perfect. For one, they struggle in the deep cold. At temperatures that wouldn’t be unfamiliar to anyone who experiences particularly harsh winters, these batteries don’t hold their charge—or deliver it.

But scientists are trying to make hardier batteries. In a paper published in the journal ACS Central Science on June 8, chemical engineers from several universities in China describe a battery that holds up at temperatures as low as minus 31°F.

From past studies, scientists knew that most lithium-ion batteries start flatlining at about minus 4°F. Below this point, they don’t hold as much charge, and they aren’t as good at transferring it—meaning that it’s harder to draw power from them. And the colder they go, the worse they perform.
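For readers who think in Celsius, the Fahrenheit thresholds quoted here convert with the standard formula C = (F − 32) × 5/9; a quick sketch:

```python
# Convert the article's Fahrenheit thresholds to Celsius: C = (F - 32) * 5/9.
def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

for f in (-4, -31):
    print(f"{f}°F = {f_to_c(f):.1f}°C")
# -4°F = -20.0°C
# -31°F = -35.0°C
```

So the typical failure point sits around minus 20°C, and the new battery keeps working down to about minus 35°C.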

For most of the world, subzero temperatures aren’t a problem. But if you live in, say, the American Midwest, your electric car might have less range in January than you might like. And if you’ve ever been caught outside in the frozen winter, you might have noticed that your phone’s battery tends to drain more quickly.

[Related: We need safer ways to recycle electric car and cellphone batteries]

This drawback also means that lithium-ion batteries can’t work as well as engineers might hope in other places that commonly experience subzero cold: atop mountains, in the air where commercial planes fly, or out in the cold of unlit space.

So there’s abundant research that addresses the problem, according to Enyuan Hu, a battery chemist at Brookhaven National Laboratory who wasn’t involved in the paper. And to do so, engineers and chemists have to tinker with a battery’s innards.

At its heart, a lithium-ion battery consists of two electrically charged plates, one negative, the other positive. The middle space is filled with an electrolyte, which is an electrically conducting slurry containing dissolved ions. The negative plate is typically carbon-based, such as graphite; the positive plate typically contains atoms of metal and oxygen.

And lithium ions are what make the battery tick—hence the name.

As a battery runs, those ions fall out of the positive plate, cross the electrolyte like fish drifting down a river, and land on the negative plate, delivering constant jolts of electricity in the process. When you plug in a battery for charging, the electric current forces ions to flee in the opposite direction. It works, without much issue, and those moving lithium ions fuel your phone or car for hours on end. 

That is, it works until the battery cools to below minus 4°F. In the past several years, scientists have found that much of the issue has to do with the movement of the ions themselves, which struggle to properly exit the electrolyte and land on the negative plate. Scientists have tried to alleviate that problem by making hardier electrolytes that hold up better in the cold.

These latest researchers, however, took a different approach: They tinkered with that carbon-based negative plate instead. They decided to replace the graphite with an entirely new material. They heated a cobalt-containing compound to very high temperatures—nearly 800°F—producing little nuggets, shaped like 12-sided dice, made from carbon atoms. The researchers fashioned these carbon dodecahedra into a plate that’s bumpier than flat graphite, allowing it to better grab at lithium ions.

When they tested their battery, they found that it worked at temperatures as frigid as minus 31°F. Even after more than 200 cycles of discharging and recharging, the battery kept up its performance.

“The material is scientifically interesting,” says Hu. “But its practical application may be limited, as it requires [a] complicated synthesis route.”

That’s the catch. As with many materials, actually producing these carbon particles at scale is a challenge. Not helping matters is that the cobalt compound is rather expensive. On the other hand, Hu says, this research may be helpful for very specific applications.

It’s not an end to this quest, then, but rather the next incremental step. But, with every passing day, scientists are pushing the limits of these crucial batteries ever further.

Volcanic eruptions are unpredictable, but these geologists cracked the code https://www.popsci.com/science/volcanic-eruption-forecast/ Fri, 03 Jun 2022 18:33:48 +0000 https://www.popsci.com/?p=447942
Sierra Negra volcano erupting in the Galapagos Islands in 2018
The Sierra Negra volcano on Ecuador's Isabela Island last erupted on June 26, 2018. The 1,124-meter-high crater is one of the largest on the planet. Xavier Garcia/picture alliance via Getty Images

If you thought weather forecasting was tough, try taking on magma.

The post Volcanic eruptions are unpredictable, but these geologists cracked the code appeared first on Popular Science.

On June 26, 2018, the earth rumbled under the giant sleeping tortoises on Isabela Island in the Galápagos. Not long afterwards, Sierra Negra, a volcano that towers over the island, began to erupt. Over the next two months, the volcano’s fissures spewed out enough lava to cover an area of roughly 19 square miles.

It was hardly Sierra Negra’s first eruption: It’s blown at least seven other times in the past century alone. But what made the 2018 event special is that geologists had forecast the eruption’s date as early as January. In fact, they almost got it down to the exact day.

It was a fortunate forecast, to be sure. Now, in a paper published in the journal Science Advances today, they’ve figured out why their estimates hit the mark—and how they can make their simulations get it right again. Sierra Negra is just one volcano in a sparsely inhabited archipelago, but when hundreds of millions of people around the world reside in volcanic danger zones, translating these forecasts to other craters could save untold numbers of lives.

[Related: A 1930s adventure inside an active volcano]

“There is still a lot of work to be done, but … volcano forecasting may become a reality in the coming decades,” says Patricia Gregg, a geologist at the University of Illinois Urbana-Champaign and one of the paper’s authors.

Forecasting eruptions is like forecasting the weather. With so many variables and moving parts, it grows harder and harder to paint an accurate picture the further into the future you try to project. You might trust a forecast for tomorrow, but you might not be so eager to trust a forecast for a week away.

That made Gregg and her colleagues’ Sierra Negra forecast—five months prior to the eruption—all the more fortunate. Although the volcano had begun grumbling by then, with spikes of seismic activity, the forecasters themselves agree it was a gamble.

“It was always just meant to be a test,” says Gregg. “We did not put much faith in our forecast being accurate.”

But Sierra Negra is an ideal laboratory for fine-tuning volcanic forecasts. Because it erupts once every 15 or 20 years, it gets a lot of scrutiny, with scientists from both Ecuador and around the world continually monitoring it. By 2017, their instruments were picking up renewed rumblings indicating a future eruption.

[Related: How to study a volcano when it destroys your lab]

Experts know that volcanoes like Sierra Negra blow their top when magma builds up in the reservoir below. As more magma strains against the surrounding rock, it puts the earth under ever-mounting pressure. Eventually, something has to give. The rocks break, and magma begins to burst through. If geologists could understand exactly how the rocks crumble, they could forecast when that breaking point was likely to occur.

Gregg and colleagues relied on methods familiar to weather or climate forecasters: They combined observational data of the volcano’s ground activity with predictions from simulations. They then used satellite radar images of the ground beneath Sierra Negra to watch what the bloating magma reservoir was doing, and ran models on supercomputers to learn what happens next.

Based on how the magma was inflating by January 2018, their forecasts highlighted a likely eruption between June 25 and July 5. The levels kept rising at the same rate over the next few months—and the eruption began on June 26, right on schedule. 
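The study’s actual forecasts ran on supercomputers, but the underlying idea, extrapolating a steady inflation trend forward to a critical threshold, can be sketched in miniature. Every number below (the uplift readings, the failure threshold) is invented for illustration; none comes from the study:

```python
# Toy version of threshold forecasting: if ground uplift grows linearly,
# extrapolate the trend to a critical value to estimate an eruption date.
# All numbers are invented for illustration, not the study's data.
from datetime import date, timedelta

uplift_cm = {date(2017, 10, 1): 10.0, date(2018, 1, 1): 25.0}  # fake observations
critical_uplift_cm = 53.5  # hypothetical pressure/uplift failure threshold

(d0, u0), (d1, u1) = sorted(uplift_cm.items())
rate_cm_per_day = (u1 - u0) / (d1 - d0).days  # assumes the trend stays constant

days_to_failure = (critical_uplift_cm - u1) / rate_cm_per_day
forecast = d1 + timedelta(days=round(days_to_failure))
print(forecast)  # → 2018-06-25
```

A real model couples many more observations with the physics of rock failure; this toy only shows why a constant trend made a months-ahead date predictable, and why any change in the trend would have broken the forecast.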

“If anything had changed during those months, our forecast would not have worked,” says Gregg.

“The very tight coincidence of the authors’ forecast with the eruption onset must involve some good fortune, but that in itself tells us something,” says Andrew Bell, a geologist at the University of Edinburgh, who has studied Sierra Negra but wasn’t an author on the paper.

Colorful rocks on the Sierra Negra volcano in the Galapagos
The Sierra Negra volcano seemingly at rest. Deposit Photos

So, in the years afterwards, Gregg and her colleagues combed back over their calculations to determine what they’d gotten right—and what that “something” might be. They ran more simulations using data from the actual eruption to see how close they could get to reality.

What they found was that the magma buildup remained relatively constant over the first part of 2018. By late June, the reservoir had placed enough pressure against the volcano’s underside to trigger a moderately strong earthquake. That seemed to have been the final straw, cracking the rock and letting the magma flow through.

This practice of simulating historic phenomena to check the accuracy of forecast models is sometimes called “hindcasting” in meteorology. In addition to Sierra Negra, Gregg and her colleagues have examined old eruptions from Sumatra, Alaska, and underwater off the coast of Cascadia. 

[Related: Tonga’s historic volcanic eruption could help predict when tsunamis strike land]

But is it possible to use the same forecasting techniques in different areas of the world? Every volcano is unique, which means geologists need to adjust their models. By doing so, however, the authors behind the Sierra Negra study found some commonalities in how ground motions translate into the chance of an eruption.

Better forecasting models also mean that scientists learn more about the physical processes that cause volcanoes to rumble to life as they try to match simulations to real-world conditions. “Making genuine quantitative forecasts ahead of the event happening is a challenging thing to do,” Bell says, “but it’s important to try.”

How to make an X-ray laser that’s colder than space https://www.popsci.com/science/slacs-ultra-cold-x-ray-laser/ Sun, 29 May 2022 14:00:00 +0000 https://www.popsci.com/?p=446431
The cryomodule being delivered to SLAC's X-ray facility.
A cryomodule delivered to SLAC for its enhanced X-ray beam. Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory

What's cooler than being cool? This ultra-cold X-ray beam.

The post How to make an X-ray laser that’s colder than space appeared first on Popular Science.

The physics world is rallying around CERN’s Large Hadron Collider, now coming back online after a lengthy upgrade and a yearslong pause. But that isn’t the only science machine to literally receive new energy. Nearly 6,000 miles away, on the other side of the globe, another is getting its final touches.

The SLAC National Accelerator Laboratory, south of San Francisco, is home to a large laser called LCLS, which lets scientists use X-rays to peer into molecules. “The way to think about a facility like LCLS is really as a super-resolution microscope,” says Mike Dunne, the facility’s director.

Now, LCLS has just finished a major upgrade—called LCLS-II—that plunges the laser down to just a few degrees above absolute zero.

Giving a particle accelerator new life

A half-century ago, SLAC’s tunnel housed a particle accelerator. While most particle accelerators today send their quarry whirling about in circles, this accelerator was perfectly straight. To bring electrons up to speed for smashing, it had to be over 2 miles long. For decades after it opened, it was the “longest building in the world.” (The tunnel is so distinctive, a miles-long straight line carved into foothills, that pilots use it for wayfinding.)

When it came online in 1966, this so-called Stanford Linear Accelerator was an engineering marvel. In the following decades, the particle physics research conducted there led to no fewer than three Nobel prizes in physics. But by the 21st century, it had become something of a relic, surpassed by other accelerators at CERN and elsewhere that could smash particles at far higher energies and see things Stanford couldn’t.

But that 2-mile-long building remained, and in 2009, SLAC outfitted it with a new machine: the Linac Coherent Light Source (LCLS).

LCLS is an example of an apparatus called an X-ray free-electron laser (XFEL). Although it is a laser, it doesn’t have much in common with the little handheld laser pointers that excite kittens. Those create a laser beam using electronic components such as diodes.

An XFEL, on the other hand, has far more in common with a particle accelerator. In fact, that’s the laser’s first stage, accelerating a beam of electrons to very near the speed of light. Then, those electrons pass through a gauntlet of magnets that force them to zig-zag in rapid switchbacks. In the process, the electrons shoot their vast energy forward as X-rays.

The electron gun that’s the source of the beam. Marilyn Chung/Berkeley Lab via SLAC

Doing this can create all sorts of electromagnetic waves from microwaves to ultraviolet to visible light. But scientists prefer to use X-rays. That’s because X-rays have wavelengths that are about the size of atoms, which, when focused in a powerful beam, allow scientists to peer inside molecules. 

[Related: Scientists are putting the X factor back in X-rays]

LCLS is different from most of the other X-ray sources in the world. The California beam works like a strobe light. “Each flash captures the motion of that molecule in a particular state,” says Dunne. 

LCLS could originally shoot 100 flashes per second. That allowed scientists to make, say, a movie of a chemical reaction as it happened. They could watch bonds between atoms form and break, and see new molecules take shape. It may soon be able to make movies with frame rates thousands of times faster.

Chilling a laser

In its first iteration, LCLS used copper structures to accelerate its electrons. But increasing the whole machine’s power was pushing the limits of that copper. “The copper just is pulling too much current, so it melts, just like when you fuse a wire in your fuse box,” says Dunne.

There’s a way around that: the bizarre quantum effect called superconductivity.

When you cool a material past a certain critical temperature, its electrical resistance drops to virtually nothing. Then, current can flow essentially indefinitely without losing energy to its surroundings as heat.

LCLS is far from the first laser to use technology like this. The problem is that getting to that temperature—typically just a few degrees above absolute zero—is no small feat. 

[Related: Scientists found a fleeting particle from the universe’s first moments]

“It gets really hard to support these cryogenic systems that cool to very low temperatures,” says Georg Hoffstaetter, a physicist at Cornell University who had previously worked on the technology. There are superconducting materials that operate at slightly less unforgiving temperatures, but none of them work in spaces that are hundreds of feet long.

A smaller facility might have been fazed by this challenge, but SLAC built a warehouse-sized refrigerator at one end of the structure. It uses liquid helium to cool the accelerator down to minus 456°F.

Superconductivity also has the bonus of making the setup more energy-efficient; large physics facilities are notorious for using as much electricity as small countries do. “The superconducting technology in itself is, in a way, a green technology, because so little of the accelerator power gets turned into heat,” says Hoffstaetter.

When the upgrades are finished, the new and improved LCLS-II will be able to deliver not just 100 pulses a second, but as many as a million.

What to do with a million frames per second

Dunne says that there are, roughly, three main areas where the beam can advance science. For one, the X-ray beam can help chemists sort out how to make reactions go faster using less material, which could lead to more environmentally friendly industrial processes or more efficient solar panels.

For another, the tool can aid biologists doing things like drug discovery—looking at how pharmaceuticals impact enzymes in the human body that are hard to study via other methods.

For a third, the beam can help materials scientists better understand how a material might behave under extreme conditions, such as an X-ray barrage. Scientists can also use it to design new substances—such as even better superconductors to build future physics machines just like this one.

SLAC's Linac Coherent Light Source X-ray free-electron laser is housed in this building.
The miles-long facility that houses SLAC’s Linac Coherent Light Source X-ray free-electron laser. SLAC National Accelerator Laboratory

Of course, there’s a catch. As with any major upgrade to a machine like this one, physicists need to learn how to use their new tools. “You’ve sort of got to learn how to do that science from scratch,” says Dunne. “It’s not just what you did before…It’s an entirely new field.”

One problem scientists will need to solve is how to handle the data the laser produces: one terabyte, every second. It’s already a hurdle that large facilities face, and it’s likely to get even more acute if networks and supercomputers can’t quite keep up.
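A terabyte every second adds up quickly; a back-of-envelope tally shows why networks and supercomputers struggle to keep pace:

```python
# How fast a 1 TB/s instrument fills storage, in round decimal units.
TB_PER_SECOND = 1
seconds_per_day = 24 * 60 * 60  # 86,400 seconds in a day

tb_per_day = TB_PER_SECOND * seconds_per_day
pb_per_day = tb_per_day / 1_000  # decimal petabytes

print(f"{tb_per_day:,} TB/day ≈ {pb_per_day:.1f} PB/day")  # 86,400 TB/day ≈ 86.4 PB/day
```

Tens of petabytes per day is far more than any facility can archive raw, so much of the data must be filtered or discarded on the fly.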

Even so, this hasn’t diminished physicists’ enthusiasm for enhancement. Scientists are already plotting yet another update for the laser, set for later in the 2020s, which will boost its energy and allow it to probe even deeper into the world of atoms.

Some rivers suddenly change course, and we may finally know why https://www.popsci.com/science/why-rivers-change-course/ Thu, 26 May 2022 18:00:00 +0000 https://www.popsci.com/?p=446325
An aerial view of meandering frozen channels of the Copper River in Alaska.
The meandering Copper River in Alaska. Mandy Lindeberg, NOAA/NMFS/AKFSC

Satellite images gave researchers an unpredicted look at avulsions—when rivers abruptly shift.

The post Some rivers suddenly change course, and we may finally know why appeared first on Popular Science.

In August 2008, the Koshi River—typically meandering from the Himalayan foothills of Nepal down into the plain of the Ganges, waxing and waning with the yearly monsoon—abruptly shifted some 75 miles eastward. The river broke its banks and inundated hundreds of villages in Nepal and the Indian state of Bihar. A million people had to evacuate their homes.

That part of the world is hardly unfamiliar with intense floods, but what happened in 2008 wasn’t just any flood. Rivers don’t often change courses with reckless abandon. But when they do, scientists who study rivers use a special term: an avulsion. Avulsions are rare, but as the Koshi River disaster showed, the consequences can be catastrophic.

Now, scientists have carried out an unprecedented global survey into where avulsions happen, published in Science on May 27. In doing so, they’ve found how avulsions can affect even places that aren’t expecting them.

“Our study is the first global-scale study to map out avulsions at this scale,” says Vamsi Ganti, a terrestrial scientist at the University of California Santa Barbara, and one of the authors.

Previously, scientists focused on individual avulsion cases, in places such as the Mississippi River Delta or China’s Yellow River, a particularly avulsion-prone water channel that’s changed course more than two dozen times in recorded history. The floods from those events have been linked to political turmoil and civil wars, such as an 1850s flood that helped spark the Taiping Rebellion.

Indeed, there’s good reason to study avulsions. They leave behind fertile land in floodplains and help build diverse ecosystems in river deltas. But at the same time, their floods can devastate nearby human settlements. Archaeologists have linked avulsions to disasters as old as ancient Mesopotamian catastrophes.

[Related: Most large rivers don’t flow freely anymore]

Despite this, scientists haven’t really had a great grasp on what causes avulsions. They knew that avulsions tend to happen on rivers that carry large amounts of sediment. (The Yellow River, to wit, gets its name from the amount of silt it picks up.) As that sediment builds up in the river’s path, it’s easier for the river to switch to another nearby course. Over time, the process repeats at the new course, leading to rivers that switch back and forth over the centuries.

It’s fortunate, then, that Ganti, postdoc Sam Brooke, and their colleagues had a powerful tool at hand: satellite imagery. Working with decades of satellite observations, the researchers assembled an avulsion scrapbook dating back to 1973—113 avulsions in all. Most of them happened in the tropics.

“This is a global analysis of avulsions which provides unprecedented information on this process,” says Paola Passalacqua, a water resources engineer at the University of Texas at Austin who was not an author on the paper.

The research team divided that list of avulsions into two categories, based on the sorts of places where they occur. The first is avulsions in alluvial fans, where flowing water, well, fans out: say, exiting a mountain valley or a canyon into a wide-open plain. The 2008 Koshi River flood was one such avulsion. The second is avulsions in river deltas, where rivers spread their waters into many smaller streams as they enter a body of water or a desert.

Indonesia’s Pemali delta, seen from satellite. Sam Brooke and Vamsi Ganti

All of that was known beforehand. But the researchers observed another process at play: In deltas, they found quite a few avulsions happened further upstream than previously thought. They think these avulsions are due to floods. When rivers break their banks, they erode the surrounding land. With enough flooding and enough edits to the landscape, the river may abruptly chart a new, more favorable path. 

The researchers think this kind of avulsion is more common on steeper rivers in deserts or in the tropics. And the floods that cause them are predicted to grow more common in a warming world.

[Related: From pollution to dams: here’s what is plaguing America’s 10 most endangered rivers]

“Knowing where [avulsions] might happen is a big help,” says Passalacqua, “particularly if upstream communities may not have expected it otherwise.”

This means better ways of managing waterways. Humans, understandably, don’t like their rivers to behave in unpredictable ways. To prevent avulsions, we’ve tried to manage their flow, corralling them with dikes, levees, and dams. These efforts aren’t new; ancient engineers were trying to control rivers in Egypt, South Asia, and China at least 3000 years ago. 

Sometimes it works. Management on the Mississippi River has, so far, prevented a potentially devastating avulsion into the nearby Atchafalaya River in Louisiana. But measures to stop avulsions can be double-edged swords. Along with upstream dams, they can prevent rivers from depositing much-needed sediment in river deltas, undermining their stability. 

Meanwhile, rising sea levels are pushing at deltas from the other side. The result of all these processes is that the pressure on rivers—and the risk of avulsions farther upriver—is mounting.

“It seems like all the knobs you can turn on a river delta, you’re turning them in the wrong direction,” says Ganti.

Ganti hopes he and his colleagues’ research can help find ways that let rivers flow naturally and simultaneously protect the more than 300 million humans who live on river deltas. Their next step, now that they know where avulsions happen, is to focus on when they happen. Ganti says that avulsions seem to follow a periodic pattern, repeating every so often. Many of the forces that drive rivers are still poorly understood by experts, even though rivers have helped sustain people since the earliest civilizations.

“There’s a dual nature to our linkage with rivers,” Ganti says, “because they are a nourishing force, but also, they can be a destructive force.”

The post Some rivers suddenly change course, and we may finally know why appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Deep-sea internet cables could help sense distant earthquake rumbles https://www.popsci.com/science/deep-sea-internet-cables-earthquakes/ Thu, 19 May 2022 18:00:00 +0000 https://www.popsci.com/?p=444525
Undersea fiber optic cables can be used to sense earthquakes.
An array of fiber optic cables. VadimVasenin via Deposit Photos

This underwater technique could shake up seismology.

The post Deep-sea internet cables could help sense distant earthquake rumbles appeared first on Popular Science.


Ocean covers more than two-thirds of Earth’s surface. For seismologists, oceanographers, and others who want to continually monitor our planet’s motions, this fact poses a problem. The seas can be dim and murky places where important data—on things like earthquakes and seismic hazards—are hard to come by. 

But just because the oceans are mysterious doesn’t mean they lack infrastructure: for one, the over 750,000 miles of telecommunications cables that let the internet cross continents. Scientists know this too. They’ve begun to play with that infrastructure for detecting earthquakes.

Their latest step in doing so: using a trans-Atlantic cable to find earthquakes, as they did in a paper published in Science on May 20. The researchers, led by Giuseppe Marra at the UK’s National Physical Laboratory, detected two earthquakes, one of which had originated half the world away.

“We have very limited sensing offshore. Very limited. It’s ridiculous, what we have,” says Zack Spica, a seismologist at the University of Michigan, who was not one of this paper’s authors. “But, now, we are realizing that we have, actually, thousands of possible sensors out there, so we could possibly start digging into it and start watching what’s going on.”

Today, telecommunications companies have woven optical fibers into an intricate web cast across the globe. These cables are hidden yet crucial components that make the internet tick. Not only do they bridge hemispheres, they bring critical connectivity to more isolated parts of the world. 

(Just ask Tonga, whose cable link was torn by a volcanic eruption earlier this year. People and relief efforts in the islands often had to rely on snail-like 2G satellite internet until the cable was repaired.)

Using cables for underwater sensing isn’t a new idea. At first, the idea relied on bespoke, specialized cables. The US Navy toyed with them in the early Cold War as a way of detecting Soviet submarines. Scientists in both California and Japan began testing cables for earthquake detection as early as the 1960s.

But installing specific equipment is expensive, and in the 21st century—helped by the telecoms industry’s increased reception to the idea—scientists have begun to take advantage of what is already there.

[Related: Earthquake models get a big shakeup with clues buried in the San Andreas fault]

Perhaps the most established method is a technique known as distributed acoustic sensing (DAS). To do this, scientists shoot short pulses of light from one end of the cable. If an earthquake, for instance, shakes the cable, the tremors will reflect some of that light back to the sender, who can use it to reconstruct what happened and where.
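In outline, the location step behind DAS is simple timing arithmetic: light travels through glass at a known speed, so the round-trip delay of the reflected pulse pins down how far along the fiber the shaking happened. A minimal sketch in Python, where the speed of light in fiber is a real physical figure but the half-millisecond delay is an invented example:

```python
# Light in optical fiber travels at roughly c divided by the glass's
# refractive index (~1.47), i.e. about 200,000 km per second.
SPEED_IN_FIBER_M_S = 3.0e8 / 1.47

def disturbance_distance_m(round_trip_s):
    """Distance along the fiber to a disturbance, from the time it takes
    a pulse to reach it and its reflection to return (hence the halving)."""
    return SPEED_IN_FIBER_M_S * round_trip_s / 2

# A hypothetical reflection arriving 0.5 milliseconds after the pulse went out
# corresponds to a disturbance about 51 km down the cable.
print(round(disturbance_distance_m(0.5e-3) / 1000, 1))
```

The numbers make the article's distance limit concrete: at these speeds, resolving events tens of miles out demands timing precision in the microseconds.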

Many scientists have embraced DAS, but it has a key limitation: distance. As light (or any other signal) travels along a line, it attenuates, or loses strength. So it’s hard to use DAS to sense beyond a few dozen miles. Covering even that range is no small feat, but what if you wanted to see into, say, the middle of the ocean, thousands of miles from shore?

In 2021, researchers led by Zhongwen Zhan, a seismologist at Caltech, tested another method on Curie, a Google-owned cable running from Los Angeles to Valparaíso, Chile, parallel to the highly active Pacific coast of the Americas. That team studied the fingerprints of earthquakes on regular signal traffic through the cable.

But their method had a flaw: They couldn’t tell how far away something had happened, only that it had. “They detected earthquakes, but…they didn’t know where it was coming from,” says Spica.

Of course, if you’re chatting with your friend overseas, your voices can reach each other with no issue at all. That’s because these cables are outfitted with devices called repeaters. Like players in a grand game of telephone (only far, far more reliable), repeaters take an incoming signal and amplify it to send it along to the next one. 

For several years, some scientists have supported a proposal, called SMART, to outfit new repeaters on future cables with inexpensive seismic, pressure, and temperature sensors. Telecoms firms are now paying attention: One SMART project—a cable linking Portugal’s mainland with its Atlantic islands—is slated to enter service in 2025.

But seafloor cables’ submerged repeaters already have a second function: To help cable operators locate potential issues, the repeaters can send some of their signal back.

Marra and his colleagues harnessed that existing failsafe. They sent an infrared laser through the cable and examined the signals that returned from each repeater. In doing so, they could break an ocean-crossing cable into bite-sized chunks a few dozen miles long. 
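The effect of those per-repeater returns can be pictured as a bracketing game: a disturbance contaminates the loop-back signal of every repeater beyond it, so the last clean repeater and the first disturbed one bound its location to a single span. A toy sketch, with invented repeater positions (real spacing is on the order of tens of miles):

```python
# Hypothetical repeater positions along a cable, in km from the shore end.
repeaters_km = [0, 90, 180, 270, 360]

def affected_span(loopbacks_disturbed):
    """Each entry says whether the signal looped back from that repeater
    showed the disturbance. The event lies somewhere between the last
    clean repeater and the first disturbed one."""
    first_bad = loopbacks_disturbed.index(True)
    return repeaters_km[first_bad - 1], repeaters_km[first_bad]

# Returns from the repeaters at 270 km and 360 km show the shaking;
# those closer to shore do not, so the event sits in the 180-270 km span.
print(affected_span([False, False, False, True, True]))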

“I know others have been thinking about how to do this,” says Bruce Howe, an oceanographer at the University of Hawai’i who also wasn’t involved in this paper, “but they did it.”

Marra’s group tested their technique on a trans-Atlantic cable running between Southport in North West England and Halifax in Atlantic Canada. They were able to detect not just earthquakes—one originating from northern Peru and another originating from all the way in Indonesia—but also the noise from water moving in the ocean.

There are a few catches. For one, says Howe, this sort of detection is different from what seismologists are accustomed to. Marra and colleagues weren’t yet able to measure the magnitude of an earthquake. And discerning an earthquake from, say, ocean temperature shifts may prove difficult. This is where multiple methods—for instance, this latest technique plus SMART—could work in tandem.

Many scientists are excited about cables’ potential. “I really feel that the greatest breakthroughs [in seismology] are going to be done offshore, because there is so much to explore,” Spica says. They could vastly improve our tsunami warning systems. They might help geologists peer into poorly understood places where tectonic plates are coming together or pulling apart, such as mid-ocean ridges. And they might be able to help oceanographers monitor what’s happening in warming oceans.

“Money is, as always, the main obstacle,” Howe says, “but recent progress indicates we can overcome this.”

The souped-up Large Hadron Collider is back to take on its weightiest questions yet https://www.popsci.com/science/large-hadron-collider-restarts-run/ Sun, 15 May 2022 17:00:00 +0000 https://www.popsci.com/?p=442423
The Large Hadron Collider's magnet chain.
A chain of magnets inside the tunnel at the Large Hadron Collider. Samuel Joseph Hertzog via CERN

What happens where the LHC's beams meet tells us how the universe works.

The post The souped-up Large Hadron Collider is back to take on its weightiest questions yet appeared first on Popular Science.


The bleeding edge of physics lies in a beam of subatomic particles, rushing in a circle very near the speed of light in an underground tunnel in Central Europe. That beam crashes into another racing just as fast in the other direction. The resulting collision produces a flurry of other particles, captured by detectors before they blink out of existence.

This is standard procedure at the Large Hadron Collider (LHC), which recently switched on for the first time since 2018, its beams now more powerful than ever. The LHC, located at the European Organization for Nuclear Research (CERN) near Geneva, is the world’s largest particle collider: a mammoth machine that literally smashes subatomic particles together and lets scientists watch the fountain of quantum debris that spews out.

That may seem unnecessarily violent for a physics experiment, but physicists have a good reason for the destruction. Inside those collisions, physicists can peel away the layers of our universe to see what makes it tick at the smallest scales.

The physicists behind the machine

The “large” in the LHC’s name is no exaggeration: The collider cuts a 17-mile-long magnetic loop, entirely underground, below the Geneva suburbs on both sides of the ragged French-Swiss border (home of CERN’s headquarters), through the shadows by the eastern slopes of France’s Jura Mountains, and back again.

Assembling such a colossus took time. First proposed in the 1980s and approved in the mid-1990s, the LHC took over a decade to build before its beam first switched on in 2008. Construction cost $4.75 billion, mostly from the coffers of various European governments.

The LHC consumes enough electricity to power a small city. Even before its current upgrades, the LHC’s experiments produced a petabyte of data per day, enough to hold over 10,000 4K movies—and that’s after CERN’s computer network filtered out the excess. That data passes through the computers of thousands of scientists from every corner of the globe, although some parts of the world are better represented than others.
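That movie comparison is easy to sanity-check: at a round figure of 100 gigabytes per high-bitrate 4K film (an assumption for illustration, not a number from CERN), a petabyte a day works out to about 10,000 movies:

```python
petabyte_gb = 1_000_000      # 1 petabyte expressed in gigabytes (decimal units)
movie_gb = 100               # assumed rough size of one high-bitrate 4K movie
movies_per_day = petabyte_gb // movie_gb
print(movies_per_day)        # 10000
```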

[Related: The biggest particle collider in the world gets back to work]

Time, money, and people power continue to pour into the collider as physicists seek to answer the universe’s most fundamental questions. 

For instance, what causes mass to exist? Helping to answer that question has been one of the LHC’s most public triumphs to date. In 2012, LHC scientists announced the discovery of a long-sought particle known as the Higgs boson. The boson is the product of a field that gives particles mass as they interact with it.

The discovery of the Higgs boson was the final brick in the wall known as the Standard Model. It’s the heart of modern particle physics, a schematic that lays out about a dozen subatomic particles and how they neatly fit together to give rise to the universe we see.

But with every passing year, the Standard Model seems increasingly inadequate to answer basic questions. Why is there so much more matter in the universe than antimatter, its opposite? What makes up the massive chunk of our universe that seems to be unseen and unseeable? And why does gravity exist? The answers are anything but simple.

The answers may come in the form of yet-undiscovered particles. But, so far, they’ve eluded even the most powerful particle colliders. “We have not found any non-Standard Model particles at the LHC so far,” says Finn Rebassoo, a particle physicist at Lawrence Livermore National Laboratory in California and an LHC collaborator.

Upgrading the behemoth

Although the COVID-19 pandemic disrupted the LHC’s reopening (it was originally scheduled for 2020), the collider’s stewards have not sat by idly since 2018. As part of a raft of technical upgrades, they’ve topped up the collider’s beam, boosting its energy by about 5 percent.

That may seem like a pittance (and it certainly pales in comparison to the planned High-Luminosity LHC upgrade later this decade that will boost the number of collisions). But scientists say that it still makes a difference.

“This means an increase in the likelihood for producing interesting physics,” says Elizabeth Brost, a particle physicist at Brookhaven National Laboratory on Long Island, and an LHC collaborator. “As a personal favorite example, we will now get 10 percent more events with pairs of Higgs bosons.”

The Standard Model says that paired Higgs bosons should be an extremely rare occurrence—and perhaps it is. But, if the LHC does produce pairs in abundance, it’s a sign that something yet undiscovered is at play.

“It’s a win-win situation: Either we observe Higgs pair production soon, which implies new physics,” says Brost, “or we will eventually be able to confirm the Standard Model prediction using the full LHC dataset.”

The enhancements also provide the chance to observe things never before seen. “Every extra bit provides more potential for finding new phenomena,” says Bo Jayatilaka, a particle physicist at Fermilab in suburban Chicago and an LHC collaborator.

It wasn’t long ago that one potential fodder for observation emerged—not from CERN, but from an old, now-shuttered accelerator at Fermilab. Researchers poring over old data found that the W boson, a particle responsible for causing radioactive decay inside atoms, seemed to have a heavier mass than anticipated. If that’s true, it could blow the Standard Model wide open.

Naturally, particle physicists want to make sure it is true. They’re already planning to repeat that W boson measurement at CERN, both with data collected from past experiments and with new data from experiments yet to come.

It will likely take time to get the LHC up to its newfound full capacity. “Typically, when the LHC is restarted it is a slow restart, meaning the amount of data in the first year is not quite as much as the subsequent years,” says Rebassoo. And analyzing even the data it does produce takes time, even for the great masses of scientists who work on the collider.

But as soon as 2023, Jayatilaka speculates, we could see results that take advantage of the collider’s newfound energy boost.

This robot chef can taste salt with its arm https://www.popsci.com/science/robot-learns-taste-food/ Wed, 04 May 2022 18:20:00 +0000 https://www.popsci.com/?p=441008
The University of Cambridge chef robot.
The tasting robot in action, hovering above a plate of eggs and tomatoes (inset). Cambridge University via YouTube

Robot chefs are becoming more common, but their ability to taste is undercooked.

The post This robot chef can taste salt with its arm appeared first on Popular Science.


Robots can see in wavelengths beyond human eyes. Robots can hear at frequencies beyond human ears. Robots can even feel with tactility approaching human skin.

But when it comes to tasting, robots are laggards. Taste is a sense that may seem basic to any human, including young children licking food from the floor, but not to robots. Tasting technology doesn’t even come close to the multifaceted sensitivity of the human tongue.

For robot-builders and food scientists alike, improving that technology is an active area of research. One idea: Relocating the tongue to an arm, which a robot can manipulate. Researchers at the University of Cambridge have done just that, testing a robot arm’s ability to taste eggy dishes. They published their work in the journal Frontiers in Robotics and AI on May 4.

The Cambridge group weren’t new to the idea, having previously created a robot that could make omelettes and improve its egg-making prowess with human feedback. It slots neatly into a wave of robots that have started to work their way into restaurants, typically doing rote kitchen tasks. 

Take Spyce, a Boston-area restaurant where patrons could watch automated machines cook up customized bowls. The MIT engineers who founded Spyce had dreams of expanding it into a chain across the US East Coast. But those dreams met a mixed reception, and Spyce shuttered its doors earlier this year.

For robots, even the most elementary cooking tasks can prove to be insurmountable obstacles. One British startup offers a set of robotic cooking arms, which costs over $300,000, that can make thousands of recipes—but it still needs human help to chop its vegetables. 

[Related: What robots can and can’t do for a restaurant]

Another thing that robots cannot do—but what comes naturally to many human cooks—is to check their progress by taste. “If robots are to be used for certain aspects of food preparation, it’s important that they are able to ‘taste’ what they’re cooking,” said Grzegorz Sochacki, an engineer at Cambridge and an author of the study, in a press release.

That’s a solvable problem, because taste is a chemical process. Flavors are your brain’s interpretations of different molecules touching your tongue. Acids, for instance, taste sour, while their alkaline counterparts taste bitter. Certain amino acids give a savory umami taste, while salts like sodium chloride taste, well, salty. A chemical called capsaicin is responsible for the hot spice of peppers.

For some years now, researchers have been tinkering with so-called “electronic tongues,” devices that emulate that process by sensing those molecules and more. Some of those implements even look like human tongues. In past research, they’ve been used for tasting orange juice.

But electronic tongues are a pale imitation of the organic kind. To taste anything even remotely solid—even honey—you need to mix the food with water, and that water must be pure, to keep out unwanted molecules. Electronic tongues can appraise cheese or a braised chicken dish, but a human needs to liquefy the food first. You’d be hard-pressed to find a cook who wants to wait 10 minutes to have a taste.

Even then, that process results in a one-time measurement that doesn’t do the food justice. Any foodie will know that taste is far more complex than taking a chemical sample of liquified food. Taste changes over the course of a bite. Different seasonings will hit at different points. As you chew on a morsel, and as your saliva and digestive enzymes mix with an increasingly mushy mouthful, the bite’s flavor profile can change.

The Cambridge group hoped to address that issue head-on (or, perhaps, mouth-on). Instead of a tongue-like tendril, they decided to shift the taster—specifically, a salinity sensor—to a movable arm. In doing so, the researchers hoped to give the robot a tool that could sample a dish at multiple points during preparation and chart a “taste map” of the food.

“For the robot to have control of the motion and where and how it is sampling” is different from other electronic tongues that have come before, says Josie Hughes, a roboticist at the École Polytechnique Fédérale de Lausanne in Switzerland, who was part of the Cambridge group in the past but wasn’t an author on this current paper.

To test the arm, the researchers created nine simple egg dishes, each one with different quantities of salt and tomato. The arm mapped out the salinity of each plate. Afterward, the researchers put each dish in a blender, to see if the robotic arm could discern the differences in salinity as the egg and tomatoes churned together into a salmon-colored mush, as they would in a human mouth.
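The advantage of a “taste map” over a single reading can be shown with a toy grid of salinity samples (every number here is invented): averaging the samples flattens out a salty patch that the map keeps visible.

```python
# Hypothetical salinity readings from a robot arm sampling a dish
# on a 3x3 grid of positions.
salt_map = [
    [0.2, 0.2, 0.9],
    [0.3, 0.8, 0.9],
    [0.2, 0.3, 0.4],
]

# A single averaged "tongue" reading hides the salty patch in the corner...
flat = [value for row in salt_map for value in row]
average = sum(flat) / len(flat)

# ...while the map preserves exactly where the saltiest spot is.
saltiest = max(flat)
print(round(average, 2), saltiest)  # 0.47 0.9
```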

With this new technique, the researchers could create salt maps that surpassed anything electronic tongues had done before. Of course, salinity is only one aspect of cooking. In the future, the researchers hope to expand to other tastes, such as sweetness or oiliness. And putting food in a blender isn’t quite the same as putting it in your mouth. 

“Perhaps in the short term we might see robot ‘assistants’ in the kitchen,” says Hughes, “but we need more exciting and insightful advances such as this work to be able to develop robots that can taste, chop, mix, cook and learn as a true human chef.” The road to getting anything like this robot into a kitchen, whether that’s in a restaurant or at home, might make culinary training seem easy in comparison.

This one-way superconductor could be a step toward eternal electricity https://www.popsci.com/science/superconductor-one-way-electricity/ Wed, 27 Apr 2022 19:30:00 +0000 https://www.popsci.com/?p=439667
A superconducting chip, in an artist's impression.
A superconducting chip, in an artist's impression. TU Delft

The material used in this first-of-a-kind superconductor could make data servers more energy-efficient.

The post This one-way superconductor could be a step toward eternal electricity appeared first on Popular Science.


Imagine if your computer could run on electricity that flows forever without overheating. This isn’t magic: It’s the potential future of a real phenomenon called superconductivity, which today underpins everything from cutting-edge magnetic research to MRIs.

Now, scientists have found that they can make a superconductor that’s different from others that have come before. It lets electricity flow in only one direction: Like a train pointing downhill, it slides freely one way but faces a daunting uphill in the other. It sounds arcane, but this ability is critical to making electronic circuits like the ones that power your computer. If these scientists’ results hold, it could bring that future one step closer.

“There are so many fun possibilities available now,” says Mazhar Ali, a physicist at Delft University of Technology in the Netherlands, and one of the authors who published their work in the journal Nature on April 27.

Superconductivity flies in the face of how physics ought to work. Normally, as electric current flows along a wire, the electrons inside face stiff resistance, brushing up against the atoms that form the wire. The electrical energy gets lost, often as heat. It’s a large part of why your electronics can feel hot to the touch. It’s also a massive drain on efficiency.

But if you deep-chill a material that conducts electricity, you’ll reach a point that scientists call the critical temperature. The precise critical temperature depends on the substance, but it’s usually in the cryogenic realm, barely above absolute zero, the coldest temperature allowed by physics. At the critical point, the material’s resistance plunges off a cliff to functionally nil. Now, you’ve created a superconductor.
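That cliff-edge drop can be caricatured in a few lines: above the critical temperature, resistance behaves like an ordinary metal’s; at or below it, resistance vanishes outright. A cartoon model in Python, using niobium’s real critical temperature of about 9.3 kelvin but an invented resistance curve:

```python
def resistance_ohms(temp_k, critical_temp_k=9.3):
    """Cartoon superconducting transition: ordinary metallic resistance
    above the critical temperature, exactly zero at or below it."""
    if temp_k <= critical_temp_k:
        return 0.0
    return 0.01 * temp_k  # invented linear "normal metal" resistance

print(resistance_ohms(300.0))  # room temperature: finite resistance
print(resistance_ohms(4.2))    # liquid-helium bath: none at all
```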

What does resistance-free electricity look like? It means that current can flow through a wire, theoretically for an eternity, without dissipating. That’s a startling achievement in physics, where perpetual motion shouldn’t be possible.

We’ve known about this magical-sounding quirk of quantum physics since a student in the Netherlands happened across it in 1911. Today, scientists use superconductivity to watch extremely tiny magnetic fields, such as those inside mouse brains. By coiling superconducting wires around a magnet, engineers can craft low-energy, high-power electromagnets that fuel everything from MRI machines in hospitals to the next generation of Japanese bullet trains.

Bullet trains were probably not on the minds of Ali and his colleagues when they set about their work. “My group was not approaching this research with the goal of realizing one-way superconductivity actually,” says Ali.

Ali‘s group, several years ago, had begun investigating the properties of an evocatively named metal, Nb3Br8, made from atoms of niobium (a metal often used in certain types of steel and specialized magnets) and bromine (a halogen, similar to chlorine or iodine, that’s often found in fire retardants). 

As the study team made thinner and thinner sheets of Nb3Br8, they found that it actually became more and more conductive. That’s unusual. To further investigate, they turned to a tried technique: making a sandwich. Two pieces of a known superconductor were the bread, and Nb3Br8 was the filling. The researchers could learn more about Nb3Br8 from how it affected the sandwich. And when they looked, they found that they’d made a one-way superconductor.

What Ali’s group has created is very much like a diode: a component that only conducts electricity in one direction. Diodes are ubiquitous in modern electronics, critical for underpinning the logic that lets computers operate.

Yet Ali and his colleagues don’t fully know how this effect works in the object they created. It also, as it turns out, “violates our current understanding of how one-way superconductivity can occur,” says Ali. “There is a lot of fundamental research as well that needs to be done” to uncover the hidden new physics.

It isn’t the first time physicists have built a one-way superconducting road, but previous constructions generally needed magnetic fields. That’s common when it comes to manipulating superconductors, but it makes engineers’ lives more complicated.

“Applying magnetic fields is cumbersome,” says Anand Bhattacharya, a physicist at Argonne National Laboratory in suburban Chicago, who was not one of the paper authors. If engineers want to manipulate different parts within a superconductor, for instance, magnetic fields make a formidable challenge. “You can’t really apply a magnetic field, very locally, to one little guy.” 

For people who dream of constructing electronics with superconductors, the ability to send electricity in one direction is a powerful inspiration. “You could imagine very cool device applications at low temperatures,” says Bhattacharya.

Such devices, some scientists believe, have some obvious hosts: quantum computers, which harness particles like atoms to make devices that do things conventional computers can’t. The problem is that tiny amounts of heat can throw quantum computers off, so engineers have to build them in cryogenic freezers that keep them barely above absolute zero. The problem compounds again: Normal electronics don’t work very well at those temperatures. An ultra-cold superconducting diode, on the other hand, may thrive.

[Related: What the heck is a quantum network?]

Conventional computers could benefit, too: Not your personal computer or laptop, most likely, but larger behemoths like industrial supercomputers. Other beneficiaries could be the colossal server racks that line the world’s data centers. They account for a whopping 1 percent of the world’s energy consumption, comparable to entire mid-sized countries. Bringing superconductors to data servers could make them thousands of times more energy-efficient.

There is some way to go before that can happen. One next step is finding how to produce many superconducting diodes at once. Another is to find how to make them operate above -321°F, the boiling point of liquid nitrogen: That temperature sounds extremely low, but it’s easier to achieve than the even colder temperatures, supplied by liquid helium, that current devices might need.

Despite those challenges, Ali is excited about the future of his group’s research. “We have very specific ideas for attacking both of these avenues and hope to see some more ground-breaking results in the next couple of years,” he says.

NASA’s new moon rocket is leaking fuel, but that’s not a setback https://www.popsci.com/space/nasa-sls-rocket-technical-issues/ Mon, 25 Apr 2022 10:00:00 +0000 https://www.popsci.com/?p=438943
NASA SLS rocket against sunset at Kennedy Space Center launch pad before Artemis I mission
After scrubbing a fuel-loading test on the SLS rocket in early April, NASA engineers delayed another. But that might not be so unusual for a spacecraft of such epic proportions. Ben Smegelsky/NASA

The spacecraft powering the Artemis missions has been stalled by technical difficulties, but they're only proportional to what it's endeavoring.

The post NASA’s new moon rocket is leaking fuel, but that’s not a setback appeared first on Popular Science.


NASA is about to take its first baby steps back to the moon. Phase one of the plan, fittingly called Artemis 1, will dispatch an uncrewed test flight around the moon and back. It will be the first flight of the Artemis program, which aims to put humans on the moon again by the mid-2020s.

But before Artemis 1 can happen, NASA needs to ensure that its highly touted rocket—known as the Space Launch System (SLS)—is fully operational. That still hasn’t happened; the rocket’s latest dress rehearsal ended prematurely earlier this month. It’s the third in a sequence of unsuccessful practice sessions for the team of engineers.

One of the key problems is that NASA hasn’t been able to pump all of the Artemis rocket’s fuel reserves into its tanks. As NASA engineers tested filling up the tanks on the Kennedy Space Center launch pad, technical issues and leaks prevented the process from being completed. It might sound bad, but this sort of obstacle rings all too familiar in space launches.

“I’m hopeful that there won’t be too many delays, and they’ll figure out what is causing the problems pretty quickly and be able to get back on a better track pretty soon,” says Makena Young, an associate fellow in space at the Center for Strategic and International Studies in Washington, D.C.

[Related: Inside NASA’s messy plan to return to the moon by 2024]

When the Obama administration green-lit the SLS rocket concept a decade ago, NASA said it would be ready for launch by 2016 or 2017. But experimental rockets in this stage of their development are almost expected to have major difficulties. A rocket is a highly complex machine: It’s not good enough to just plop one atop a pad and light it like a firework. A multitude of intricate subsystems have to work together for it to carry astronauts and other precious cargo to space.

The pad itself, for instance, is far more than a temporary resting site for a rocket before it takes off. In the case of SLS, it’s a mobile launch tower that rolls onto the launchpad and plays several critical roles.

Sprouting from that tower is a mass of tendrils, called umbilicals, latched onto the rocket. One umbilical allows astronauts to board the rocket when the crew capsule is hundreds of feet in the air. Others act as stabilizers that keep the rocket steady on the launchpad. Still others include cables that provide vital electrical and communications links between the ground and the rocket.

And all of those umbilicals, which operate and separate in different ways, must effortlessly detach at liftoff. That’s a big ask when the object they’re anchored to is a fiery 365-foot-tall beast that can set off car alarms from miles away. “It’s a violent, violent atmosphere for those components to be in,” says Kevin Miller, an engineer at NASA’s Kennedy Space Center. “It’s an interesting environment unlike anything else in the world, and we have to make sure it operates flawlessly.”

Not all of the umbilicals extend from the tower to the spacecraft. On the other side of the rocket, two more tendrils rise from the ground and link up near its bottom. One of the chief functions of these Tail Service Mast Umbilicals (TSMUs) is to help fill SLS’s fuel tanks as the rocket sits on the launchpad.

When it finally comes time to fly, the SLS propels itself with two simple elements: hydrogen and oxygen. Pump them into the same chamber, ignite them, and the subsequent reaction creates water and massive amounts of energy that allow the rocket to push past Earth’s atmosphere and gravitational field. (That’s just the main stage. The SLS also relies on a pair of boosters like crutches to help push it farther up into the sky. These burn aluminum-based solid fuel through an entirely different process.)

But because gases are not very dense, trying to store that hydrogen and oxygen in their room-temperature forms would require fuel tanks far too large to be practical on a rocket. Instead, these elements must be kept in chilled liquid states. That cold is nothing to sneeze at. Oxygen liquifies at -297 degrees Fahrenheit (-183 degrees Celsius). Meanwhile, liquid hydrogen’s boiling point is a more biting -423 degrees F (-253 degrees C).
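The figures above are easy to sanity-check. Here is a minimal sketch in Python; the hydrogen density values are standard reference numbers assumed for this example, not figures from the article:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius."""
    return (f - 32) / 1.8

# Boiling points quoted in the article.
lox_c = f_to_c(-297)   # liquid oxygen: about -183 degrees C
lh2_c = f_to_c(-423)   # liquid hydrogen: about -253 degrees C

# Why liquefy at all? Compare hydrogen's density as a cryogenic liquid
# vs. as a room-temperature gas (approximate reference values, kg/m^3).
LIQUID_H2_DENSITY = 70.8   # near its boiling point
GAS_H2_DENSITY = 0.084     # at roughly 20 degrees C, 1 atm

# A room-temperature gaseous tank would need to be hundreds of times
# larger than a liquid one to hold the same mass of propellant.
expansion_ratio = LIQUID_H2_DENSITY / GAS_H2_DENSITY
```

The same arithmetic explains why a small liquid leak produces such a large gas cloud: the volume balloons by that same factor as the liquid boils off.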

In its latest test, NASA filled up 49 percent of the liquid oxygen and 5 percent of the liquid hydrogen before technicians spotted a hydrogen leak and halted filling.

[Related: Astronauts explain what it’s like to be ‘shot off the planet’]

“A small quantity of liquid hydrogen becomes an absolutely huge gaseous hydrogen cloud,” says Miller. That flammable fallout could be disastrous if it’s not fixed before takeoff.

“Many launches have been scrubbed by propellant leaks over the years, either with launch vehicle or ground system hardware,” says Jeff Foust, a space writer who has been following SLS since the project’s inception. Throughout history, numerous space shuttle launches have faced problems with leaks. “These problems are tracked down and fixed, and the vehicles eventually launch,” says Foust.

White circular liquid hydrogen storage tank at Kennedy Space Center launch pad
Each of the liquid hydrogen and liquid oxygen tanks at the launch pad can hold more than 800,000 gallons of propellant. Ben Smegelsky/NASA

But just as the TSMUs need to function properly to fuel the rocket, they also need to flush it with nitrogen gas. Doing this purges the rocket’s system of the aforementioned hydrogen, minimizing potential fire hazards. Nitrogen also acts as climate control to keep the rocket’s components at a steady temperature and low humidity—something that’s especially important in Florida’s subtropical climate.

“It keeps all the electronics and such happy when they’re nice and cool and dry,” says Miller.

Still, even at this stage, such errors are expected. Dress rehearsals and other tests are designed to catch finicky problems before they can plague actual launches involving astronauts and pricy space instruments.

[Related: We could live in caves on the moon. What would that be like?]

As for what this means for the future of Artemis launches, the timetable isn’t certain. Successful tests are necessary before the first full uncrewed launch can be scheduled. At this rate, Artemis 1 might not speed off to the moon until June or early July.

If launching Artemis flights to the moon is anything like launching arrows from the bow of its namesake goddess, then it will be another few weeks before NASA knows that the bow works at all. Only after that can the mission lift off.

“It’s not a good thing that it’s being delayed, but it’s a good thing that they’re taking every precaution to make sure that this will be a safe and successful launch,” says Young.

That said, any delays to Artemis 1 will put pressure on the scheduling of Artemis 2: the first crewed mission, which will put three US astronauts and one Canadian astronaut in lunar orbit for 10 days. Artemis 2 is currently slated to launch in May 2024, but it will take roughly two years to prepare after Artemis 1. The longer Artemis 1’s launch slips, the longer humans will have to wait to visit the far side of the moon.

Only after that comes phase three: placing boots back in moon dust for the first time since 1972.

Correction (April 27, 2022): The story previously mixed up the percentage levels of liquid oxygen and hydrogen levels that were injected into the SLS rocket during NASA’s most recent wet dress rehearsal. It also incorrectly stated that there was a nitrogen leak during that test. Both of those lines have now been updated.



]]>
Why chemists are watching light destroy tiny airborne particles from within https://www.popsci.com/science/light-aerosol-decay-pollution/ Thu, 14 Apr 2022 21:00:00 +0000 https://www.popsci.com/?p=437516
Airborne particles are responsible for features like smog.
Pixabay

How these particles break down could improve the way we understand aerosol pollution.

The post Why chemists are watching light destroy tiny airborne particles from within appeared first on Popular Science.

]]>

When you look out the window or up at the sky, you might notice colossal clouds or the sheer expanse of air. Scientists notice this, too; large masses like air and clouds are the focus of the models they use to monitor Earth’s atmosphere.

Easy to miss against the blue sky are the tiny liquid droplets or solid particles floating by—also known as aerosols. Though often invisible, they’re crucial to the way our atmosphere works. Every parcel of air around you is filled with hundreds of aerosols. They might act as the seeds that sprout clouds. Or they might come together as city-choking smog. 

Aerosols remain one of the most poorly understood aspects of Earth’s atmosphere, but it’s clear that they don’t just drift through the atmosphere without consequence. As the sun drives the winds that bear aerosols, its light can also break them apart. Moreover, light can bend through an aerosol as though it’s passing through a lens, speeding up the destructive process.

Scientists have observed that last effect in detail for the first time, as they report in a paper published in Science on April 15. These processes—how sunlight affects and breaks down aerosols—are crucial for understanding how pollution works.

This process can have major effects on an aerosol’s chemistry, “and the reactions can happen faster in some parts of the particle than in others,” says Pablo Corral Arroyo, a chemist at ETH Zürich in Switzerland and one of the Science paper’s authors.

You may recognize aerosols as vectors for transmitting the coronavirus—but that’s just one subtype of them. Ninety percent of aerosols are natural, like sea salt and volcanic ash, made by processes that have existed long before humans. But others are our fault: vehicle emissions, soot from burning plant material, and dust that machines kick up into the air.

Studying aerosols isn’t entirely new. In particular, scientists knew that sunlight can decay aerosols by breaking and shrinking them. Light—particularly the sun’s ultraviolet light—gnaws at the chemical bonds holding these molecules together. That might cause an aerosol to become smaller or its contents to decay into other substances.

[Related: Tiny air pollutants may come from different sources, but they all show a similar biased trend]

But only now are scientists beginning to understand that aerosols can behave in nuanced ways with large effects. “We have to be careful how to treat tiny objects floating in the air. It cannot just be treated as similar to bulk water, to bulk liquids,” says Christian George, an atmospheric chemist at Université Claude-Bernard Lyon 1 in France who wasn’t a member of the research team.

For instance, something that scientists are only now understanding is that particles act like lenses, magnifying and amplifying the light that passes through them. That lensing can accelerate an aerosol’s decay, especially if the particle is made from certain materials. Scientists knew this happened: In previous experiments, they’d trapped small dye-containing particles in light and watched them break down. They found that as particles got smaller, the dye decayed more quickly.

But trying to observe how this effect actually works, trying to look inside a droplet and see an accelerated reaction play out, is difficult. A particle has to be just the right size to see inside: Too large, and even the X-ray microscope these researchers used can’t see into it. Too small, and the differences in chemical composition will be too minute to see.

To give themselves something visible to watch, the researchers used a chemical called iron(III) citrate. It exists in the atmosphere, particularly near the ground. But the researchers primarily selected it because when it reacts in sunlight, it degrades into another chemical called iron(II) citrate in a reaction that’s easy to see, but only if you can look closely enough.

Corral Arroyo and his colleagues blasted iron(III) particles with ultraviolet light for hours on end. Meanwhile, they carefully observed the particles with an X-ray microscope. The X-rays allowed them to see what parts of an individual particle—less than a hundredth the width of a human hair—were reacting and when.

“This is what really allowed us to follow the chemical composition in different parts of the particle,” says Corral Arroyo.

Now that they’ve seen how light degrades particles from the inside, the chemists might try to sort out how light behaves in different types of aerosols. Not all particles and droplets are equal when it comes to basking and breaking up in sunlight. Black carbon soot from burning charcoal and other dark particles tend to absorb light, rather than let it ping around within.

Sea salt and many aerosols of organic origin, on the other hand, will experience accelerated reactions. Knowing that this happens in particles has a major impact on the models scientists use to understand how pollution behaves.

“If you really want to have precise models, you will need to take into account these effects,”  Corral Arroyo says. “Otherwise, your model just does not work properly.”

Indeed, most current atmospheric models primarily focus on large masses of air or water. “What this paper is really showing is that we cannot proceed as we are currently doing,” George says. If effects like these are important—and the study authors say they are—then it is a sign that those models, critical to everything from weather prediction to understanding climate change, are incomplete.



]]>
This subatomic particle’s surprising heft has weighty consequences https://www.popsci.com/science/w-boson-heavy-mass/ Thu, 07 Apr 2022 20:00:00 +0000 https://www.popsci.com/?p=436252
Fermilab's Collider Detector.
The Collider Detector at Fermilab, one of the two detectors on the Tevatron particle accelerator. Fermilab

If the W boson is heavier than expected, then a foundational idea in physics is unfinished.

The post This subatomic particle’s surprising heft has weighty consequences appeared first on Popular Science.

]]>

The sun, a nuclear power plant, and carbon dating all draw their abilities from interactions between particles in the hearts of atoms. Those are, in part, the work of a subatomic particle called the W boson. The W boson is an invisible bearer of the weak nuclear force, the fundamental force of the universe responsible for causing radioactive decay.

It is also the subject of the newest mystery in particle physics. The latest, most precise, and most informed measurement yet of the W boson’s mass, published in Science on April 7, reveals that the particle is heavier than anticipated.

It’s a deviation that can’t easily be explained. If the measurement is confirmed—and that’s a very big if—it could be the strongest evidence yet that particle physics’ long-standing understanding of the universe at the tiniest scales, known as the Standard Model, is unfinished.

“Is nature also hiding yet another particle which would influence this particular quantity?” says Ashutosh Kotwal, a particle physicist at Duke University and a member of the collaboration that published the paper.

The W boson isn’t a newly discovered particle: CERN scientists found it in the early 1980s, and theoreticians had predicted its existence over a decade earlier. Sorting out its mass had been a goal right from the start.

“There’s a long, long history of making this measurement and making the precision better and better, because it’s always been recognised as a very important measurement to make,” says Claudio Campagnari, a particle physicist at the University of California, Santa Barbara, who wasn’t one of the paper’s authors.

In fact, the latest Science paper is the fruit of experiments that are over a decade old. The myriad collaborators who co-authored the paper all worked with data from the Tevatron: a particle accelerator, located at Fermilab in suburban Chicago, whose final collision came in 2011.

As particles whirled around Fermilab’s rings and smashed into each other, they’d erupt into a glittering high-energy confetti of particles—W bosons included. With more collisions came more data for scientists to poke and prod to piece together the W boson’s mass.

“Our task that we defined for ourselves was: Go measure the facts. And here’s our best effort yet to get at that fact,” says Kotwal.

Those particles spiraled around the accelerator at very near the speed of light, pummeling into each other almost instantaneously. Analyzing their collisions, on the other hand, takes years. The Fermilab group had done it before, in 2006 and 2012, taking four and five years, respectively, to sort through previous sets of data.

That’s because measuring the W boson’s mass is a delicate and highly sensitive process that must account for all sorts of minute distractions, from shifts in the magnetic field inside the accelerator to the angles of the detectors that glimpsed the collisions.

“Small mistakes could have a big effect, so it has to be done very carefully, and as far as I can tell, the authors have done an extremely careful job, and that’s why they have been working on it for so many years,” says Martijn Mulders, a particle physicist at CERN, in Switzerland, who was not one of the paper authors.

The study authors took over a decade. At the end of it, they found that the W boson was more massive than in any of the previous measurements, and too massive to align with theoretical predictions. It’s almost certainly too big a difference to be written off as a mere statistical accident.
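For a sense of how physicists score such a deviation, here is the standard back-of-envelope significance calculation. The mass values below are the ones the CDF collaboration reported in its 2022 Science paper (a measured mass of 80,433.5 ± 9.4 MeV against a Standard Model expectation of roughly 80,357 ± 6 MeV); they come from that paper, not this article, and the sketch is illustrative:

```python
import math

# Values reported by the CDF collaboration (Science, 2022); quoted here
# for illustration, in MeV.
measured, measured_err = 80433.5, 9.4
predicted, predicted_err = 80357.0, 6.0

# Combine the independent uncertainties in quadrature, then ask how many
# combined standard deviations separate measurement from prediction.
combined_err = math.sqrt(measured_err**2 + predicted_err**2)
tension_sigma = (measured - predicted) / combined_err  # roughly 7 sigma
```

A tension near 7 sigma is far beyond the 5-sigma threshold physicists conventionally use to claim a discovery, which is why a statistical fluke is considered so unlikely.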

“I don’t think people really expected that a new result would be so far off the prediction,” Campagnari says.

[Related: Physicists close in on the exceedingly short life of the Higgs boson]

The W boson is a brick in the wall of the Standard Model, the heart of modern particle physics. The Standard Model consists of a dozen subatomic particles, basic building blocks of the universe, tightly woven by tethers of theory. The Standard Model has been physicists’ guide to discovering new particles: Most notably, it led researchers to the Higgs boson, the long-sought particle that helps give its peers mass. Time and again, the Standard Model’s predictions have held up.

But the Standard Model is not a compendium, and its picture leaves much of the universe unexplained. It doesn’t explain how or why gravity works, it doesn’t explain dark matter, and it doesn’t explain why there is so much more matter in the universe than antimatter.

“By no means do we believe the Standard Model is intrinsically complete,” says Kotwal.

And if the result holds, “I think we can honestly say it’s probably the biggest problem that the Standard Model has encountered over many years,” says Mulders.

In the coming days and months, particle physicists will pick apart every aspect of the paper in search of an explanation. It’s possible that the Fermilab team made an undiscovered error; it’s also possible that a minor tweak in the theoretical background could explain the discrepancy.

Even if the Fermilab finding is in order, the task still isn’t finished. Physicists would have to independently cross-check the result, verifying it in a completely different experiment. They’d want to know, for instance, why no previous measurement saw W bosons as massive as this one. “For that, the hope would be on CERN experiments,” says Mulders.

In fact, CERN’s Large Hadron Collider (LHC) has already observed more W bosons than Tevatron ever did. Now, scientists working with its data have new motivation to calculate the mass from those observations. They may find aid from new collisions when the LHC becomes fully operational later this year—or, further in the future, when it’s upgraded in 2027. 

But suppose that LHC does give proof. Then, the misbehaving W boson could be the fingerprint of something lurking unseen in the quantum shadows. Perhaps it’s a sign of another particle, such as one predicted by a long-elusive theory called supersymmetry, or a hitherto unknown force.

“This is really at the heart of what we think of as the Standard Model, and that would be broken…you have to start questioning everything,” says Mulders.



]]>
A popular mobile game is teaching scientists how we navigate our worlds https://www.popsci.com/science/sea-hero-quest-shaping-navigational-skills/ Fri, 01 Apr 2022 19:32:54 +0000 https://www.popsci.com/?p=435127
New York city street grid vs. London street grid in thin colorful lines on black
Whether you grew up on the "wobbly" roadways of London (left) or the gridded streets of New York City (right) could determine how well you play Sea Hero Quest. Antoine Coutrot and Ed Manley

Our age, city, and social status can shape our sense of direction. Sea Hero Quest gives a window on how.

The post A popular mobile game is teaching scientists how we navigate our worlds appeared first on Popular Science.

]]>

At first glance, Sea Hero Quest is a completely unassuming mobile phone game. Its mechanics are simple: You’re a boater, given a map to memorize. When the map disappears, you have to rely on recall alone to steer your vessel to the points you’re given.

In some levels, you have to find buoys hidden deep within mazes of ice; in others, you have to capture sea creatures on camera. But your success is based on how well you can get around the virtual waters. That’s because for the first five years it was available, Sea Hero Quest was actually a science experiment, testing players’ spatial navigation by their age, country, and much more (with their knowledge).

Traditionally, a neuroscience study might involve a few dozen participants. But Sea Hero Quest allowed researchers to study the gameplay of over 400,000 people. Now, in a new paper, they’ve used that data to determine one factor that might hurt navigational skills: growing up in a city with a regular street grid. The results were published in the journal Nature on March 30.

[Related: How Ubisoft built the world of Assassin’s Creed Valhalla]

“It’s very hard to generalize the findings that you make based on a limited population,” says Antoine Coutrot, a neuroscientist at the Centre National de la Recherche Scientifique in Lyon, France, and one of the paper’s authors. ”I think video games are … an interesting way to collect more participants from more, different backgrounds.”

Collaborators from several groups in Europe, including Alzheimer’s Research UK, Deutsche Telecom, and the University of East Anglia in England, initially created the game to help diagnose dementia. Neurodegenerative conditions eat away at your memory and capacity to find your way from place to place.

But just because you’re bad at directions doesn’t mean you should be concerned. There are many external factors, from your upbringing to your lifestyle, that can influence how well you can navigate. That’s where Sea Hero Quest came in: Its creators hoped they could use it to build a global baseline. The game asked its players to answer questions about their age, country, and more detailed matters, too, including how long they tended to sleep and the sort of environment they grew up in. 

“At the beginning, our initial aim was just to collect 100,000 people’s data, which we thought, at that time, was wildly optimistic,” says Michael Hornberger, a dementia researcher at the University of East Anglia and a coauthor on the paper. “We collected 100,000 people in the first two days after the game launched.”

Sea Hero Quest is no longer available to the public, but when it was, nearly 4 million people from all over the world made it through at least one level. While most of them didn’t play beyond that, close to 10 percent spent enough time on it to leave a treasure trove of information on their spatial navigation abilities—along with their demographics.

“We found out very quickly that, this data, you can use it for many different purposes,” says Hornberger.

The neuroscientists decided to scour for any patterns they could find. Unsurprisingly, regardless of where they lived, people tended to perform worse as they aged. But the team also found that the power to navigate correlated with a country’s wealth: Players from areas with higher per-capita GDPs scored better.

Gender seemed to play a role as well. Neuroscientists have seen that men are often better at navigating than women, but they aren’t sure why. Sea Hero Quest indicates that the size of this disparity correlates with a country’s rank in the Gender Gap Index: Women who lived in places with more gender inequality tended to score more poorly than their countrymen. Coutrot and his peers published those results in a 2018 Current Biology study.
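Findings like that come from standard correlation analysis: pair each country’s inequality index with its navigation gap and compute a correlation coefficient. Here is a minimal sketch with invented numbers; the data below are made up for illustration and bear no relation to the study’s actual dataset:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example: a country-level inequality index vs. the gender gap
# in mean game score. A positive r means more inequality tends to
# accompany a larger performance gap.
inequality_index = [0.10, 0.25, 0.40, 0.55, 0.70]
score_gap = [0.02, 0.05, 0.06, 0.09, 0.12]
r = pearson_r(inequality_index, score_gap)
```

In practice the researchers would also test whether such a correlation is statistically significant across all the countries in the dataset, not just whether r is positive.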

Later, they wondered if the type of neighborhood a player grew up in would have any effect on their performance. The researchers began breaking down that data, which was provided by around a quarter of the game’s users. Those who grew up in rural areas or in suburbs tended to do better than native urbanites.

Street layouts from 10 Argentinian cities versus 10 Romanian cities
The authors of the Nature study compared street layouts across different global cities, including in Argentina (top) and Romania (bottom). Coutrot et al. 2022

But cities aren’t the same across countries. In places like Argentina, Canada, South Africa, and the US, streets tend to be laid out in grids of predictable right angles. On the other hand, in areas like India, Malaysia, Britain, and much of Europe, roadways tend to go off at all sorts of angles, especially in city centers. This is often a result of older metropolises growing organically, tacking on new infrastructure over time.

On the whole, urbanites from countries in the first category navigated more poorly than their counterparts from countries in the second category. But players also tended to fare well at levels that reflected the street patterns they grew up around. Individuals who were used to right angles shone at getting around levels with regular layouts; those who were familiar with more hectic networks did better at comparatively random levels. In other words, the researchers think that being raised on a grid might make you adept at navigating other grids.

The makers of Sea Hero Quest aren’t the first to turn video games into a research tool. Researchers at the University of California, San Francisco, built Neuroracer, a driving game designed to improve its players’ cognition and memory. In 2020, the Food and Drug Administration approved the game as a treatment for ADHD in children.

Other groups have partnered with game developers to let millions of players mine scientific data for virtual rewards. In 2020 the multiplayer space exploration title EVE Online teamed up with various universities to create minigames where participants helped classify cell parts and hunt for exoplanets. Meanwhile, Borderlands 3 has a minigame that helps nutrition scientists sequence the DNA of human gut microbes.

[Related: Inside the ambitious video game project to preserve Indigenous sports]

The difference with Sea Hero Quest is that the players were part of the research themselves. Finding hundreds, or even thousands, of subjects for a human-cognition study can be tricky, Coutrot says. The phone game allowed researchers to collect data from orders of magnitude more.

“We did a quick calculation to see how long and how much it would have cost to do that in a classical way,” says Coutrot. “I think we calculated that it would have cost something like $10 billion. It would have taken us 10,000 years.” 
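Coutrot’s back-of-envelope figures imply some striking rates. Here is a quick check using only the quantities quoted in this article; the participant count is the “nearly 4 million” players who finished at least one level:

```python
# Quantities quoted in the article.
classical_cost = 10e9          # "$10 billion"
classical_years = 10_000       # "10,000 years"
participants = 3_900_000       # "nearly 4 million" players

# Implied cost and throughput if the same cohort were recruited the
# classical way (rough arithmetic, nothing more).
cost_per_participant = classical_cost / participants    # about $2,600
participants_per_year = participants / classical_years  # about 390
```

Roughly $2,600 and 390 lab visits per year per participant pool is in line with how slow and expensive in-person cognition studies can be, which is the comparison Coutrot is drawing.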

The nature of the game also meant that it cast a much larger net than traditional studies at a university or research institution, which can be biased toward participants in their 20s or from certain socioeconomic or ethnic groups. Sea Hero Quest’s community represented all ages and every inhabited continent on the planet.

Which means there’s plenty of data left from the game’s heyday to explore and learn from. Next, Coutrot and his collaborators want to see if spatial navigation changes depending on players’ level of education and how long they sleep on a daily basis. 

“This dataset gives enough work for several lives of researchers,” he says.



]]>
You’ve probably never heard of terahertz waves, but they could change your life https://www.popsci.com/science/terahertz-waves-future-technologies/ Tue, 22 Mar 2022 12:24:57 +0000 https://www.popsci.com/?p=432744
Terahertz laser setup with green lights and silver machinery in a dark room
Engineers from Harvard, MIT, and the US Army created this experimental terahertz laser setup in 2019. They are among the few to do so. Arman Amirzhan, Harvard SEAS

Welcome to the electromagnetic dark zone.

The post You’ve probably never heard of terahertz waves, but they could change your life appeared first on Popular Science.

]]>

There’s a gap on the electromagnetic spectrum where engineers cannot tread.

The spectrum covers everything from radio waves and microwaves, to the light that reaches our eyes, to X-rays and gamma rays. And humans have mastered the art of sending and receiving almost all of them.

There is an exception, however. Between the beams of visible light and the blips of radio static, there lies a dead zone where our technology isn’t effective. It’s called the terahertz gap. For decades now, no one’s succeeded in building a consumer device that can transmit terahertz waves.

Electromagnetic spectrum with rainbow colors and labels for wavelengths
The terahertz band lies in a slim region of the electromagnetic spectrum between microwaves and infrared. Deposit Photos

“There’s a laundry list of potential applications,” says Qing Hu, an electrical engineer at MIT.

But some researchers are slowly making progress. If they stick the landing, they might open up a whole new suite of technologies, like the successor to Wi-Fi or a smarter detection system for skin cancer.

The mystery of the terahertz

Look at the terahertz gap as a borderland. On the left side, there are microwaves and longer radio waves. On the right side lies the infrared spectrum. (Some scientists even call the terahertz gap “far infrared.”) Our eyes can’t see infrared, but as far as our technologies are concerned, it’s just like light.

Radio waves are crucial for communication, especially between electronic devices, making them universal in today’s electronics. Light powers the optical fibers that underpin the internet. These realms of technology typically feed off different wavelengths, and uneasily coexist in the modern world.

[Related: An inside look at how fiber optic glass is made]

But both realms struggle to go far into the terahertz neutral zone. Standard electronic components, like silicon chips, can’t go about their business quickly enough to make terahertz waves. Light-producing technologies like lasers, which are right at home in infrared, don’t work with terahertz waves either. Even worse, terahertz waves don’t last long in the Earth’s atmosphere: Water vapor in the air tends to absorb them after only a few dozen feet.

There are a few terahertz wavelengths that can squeeze through the water vapor. Astronomers have built telescopes that capture those bands, which are especially good for seeing interstellar dust. For best use, those telescopes need to be stationed in the planet’s highest and driest places, like Chile’s Atacama Desert, or outside the atmosphere altogether in space. 

The rest of the terahertz gap is shrouded in mist. Researchers like Hu are trying to fix this, but it isn’t easy.

Engineering terahertz waves

When it comes to tapping into terahertz waves, the world of electronics faces a fundamental problem. To enter the gap, the silicon chips in our electronics need to pulsate quickly—at trillions of cycles per second (hence the name terahertz). The chips in your phone or computer can operate perfectly well at millions or billions of cycles per second, but they struggle to reach the trillions. The highly experimental terahertz components that do work can cost as much as a luxury car. Engineers are working to bring the prices down.
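To put those numbers in perspective, wavelength is just the speed of light divided by frequency. A quick back-of-the-envelope sketch (illustrative only, not from the researchers) shows why the terahertz band sits between microwaves and infrared:

```python
# Wavelength = speed of light / frequency.
# Example frequencies are illustrative; the terahertz band spans roughly 0.1-10 THz.
C = 3.0e8  # approximate speed of light, meters per second

def wavelength_mm(frequency_hz: float) -> float:
    """Return the wavelength in millimeters for a given frequency in hertz."""
    return C / frequency_hz * 1000  # meters -> millimeters

print(wavelength_mm(2.4e9))   # Wi-Fi-style microwaves: 125 mm
print(wavelength_mm(1.0e12))  # 1 terahertz: 0.3 mm
```

At a trillion cycles per second, the waves are a fraction of a millimeter long: hundreds of times shorter than Wi-Fi's microwaves, yet far longer than visible light.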

The other realm, the world of light, has long sought to make devices like lasers that could cheaply create terahertz waves at specific frequencies. Researchers were talking about how to make such a laser as early as the 1980s. Some thought it was impossible.

[Related: When light flashes for a quintillionth of a second, things get weird]

But MIT’s Hu didn’t think so. “I knew nothing about how to make lasers,” he says. Still, making this kind of laser became his quest.

Then in 1994, scientists invented the quantum cascade laser, which was particularly good for making infrared light. All that Hu and his colleagues needed to do was push the laser out to the longer waves of the far infrared.

Around 2002, they succeeded in making a terahertz quantum cascade laser. But there was a catch: The system needed temperatures around -343 degrees Fahrenheit to actually fire. That meant cryogenic cooling, which made it difficult to use outside the lab.

In the two decades since, that temperature threshold has crept up. The latest lasers from Hu’s lab operate at a balmier 8 degrees Fahrenheit. That’s not quite room temperature, but it’s warm enough that the laser could be chilled inside a portable refrigerator and carted out of the lab. Meanwhile, in 2019, a team from Harvard, MIT, and the US Army created a shoebox-sized terahertz laser built around a molecular gas.
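To see how much progress that creeping threshold represents, it helps to convert those Fahrenheit figures into kelvin, the scale physicists use for cryogenics. The conversion below is standard; treating these two temperatures as milestones is just an illustration:

```python
# Standard conversion: K = (F + 459.67) * 5/9.
# The two input temperatures come from the article; rounding is for readability.
def fahrenheit_to_kelvin(temp_f: float) -> float:
    return (temp_f + 459.67) * 5 / 9

print(round(fahrenheit_to_kelvin(-343)))  # early terahertz lasers: ~65 K
print(round(fahrenheit_to_kelvin(8)))     # Hu's latest lasers: ~260 K
```

Room temperature is roughly 295 K, so the remaining gap is now a few tens of kelvins rather than hundreds.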

Bendable yellow chip with terahertz waves dripped by white gloved fingers
The nanoscale terahertz wave generator, created by engineers at École Polytechnique Fédérale de Lausanne in Switzerland in 2020, can be implemented on flexible substrates. EPFL/POWERlab

In the time it took Hu to fine-tune his laser, electronics have made progress, too. Advances in how chips are built and the materials that go into them have pushed them to run faster and faster. (A nanoplasma chip made by a group in Switzerland in 2020 was able to transmit 600 milliwatts of terahertz waves, but again, only in the lab.) While electrical engineers want to see more progress, designing terahertz components isn’t the distant dream it once was.

“Now we can really make really complicated systems on the chip,” says Ruonan Han, an electrical engineer at MIT. “So I think the landscape is changing.”

“What’s happened over the last thirty years is that progress has been made from both ends,” says Mark Sherwin, a physicist at the University of California, Santa Barbara’s Terahertz Facility. “It’s still relatively rare, but I would say, much, much, much more common … and much easier.”

Such decades-long timescales are common in a world where new technologies whirl about in cycles of hype and disappointment. Amongst engineers, terahertz is no exception. 

The future of terahertz technology

For now, the two realms trying to enter the terahertz dark zone from either end remain largely separate. Even so, they’re giving the science world new abilities in a broad range of disciplines.

Some of those abilities could speed up communication. Your Wi-Fi runs on microwaves: Terahertz, with higher frequencies than microwaves, could forge a better connection that’s orders of magnitude faster. Through a wire, it could also create a lightning-fast cross between USB and fiber optics.

Terahertz waves are also ideal for detecting substances. “Almost every molecule has a ‘fingerprint’ spectrum in the terahertz frequency range,” says Sherwin. That makes terahertz waves optimal for picking out chemicals like explosives and the molecules in medicines. Astronomers already use that ability to look at the chemical compositions of cosmic dust and celestial objects. Closer to Earth, Han envisions a terahertz “electronic nose” that could even discern odors in the air.

Those terahertz signatures also make the far infrared ideal for scanning people and objects. Terahertz waves can see through stuff that light can’t, such as clothes, with the bonus of avoiding potentially harmful ionizing radiation like X-rays. Security screeners have already shown interest in the tech.

One scanning limitation remains: terahertz waves can’t get through water, whether in the air or in the human body. But that’s less of an obstacle for medicine than it sounds, since many medical scans only need to probe shallow tissue. A doctor could use a terahertz device to screen for subtle signs of skin cancer that X-rays might miss; or a neuroscientist might use it to scan a mouse brain.

Hu thinks the research is still in its early days. “If we can develop tools that can really see something and not take forever to scan some area, that could really entice potential practitioners to play with it,” he says. “That’s an open-ended question.”

Much of the terahertz gap remains a blank spot on researchers’ maps, which means equipment using the coveted far infrared waves just isn’t common yet. 

“Researchers really don’t have a lot of chances to explore what [terahertz waves] can be good at,” says Han. So, for now, the faster, more sensitive world inside the gap remains largely in their imagination.

The post You’ve probably never heard of terahertz waves, but they could change your life appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Manipulating atomic motion could make metals stronger and bendier https://www.popsci.com/science/study-reveals-metal-atoms-in-motion/ Fri, 18 Mar 2022 20:30:16 +0000 https://www.popsci.com/?p=432173
Rare platinum ore nugget on a black background
Platinum might look stolid on the surface, but on the atomic level, it's got as much motion as a marching band. Deposit Photos

By tracking individual atoms in crystal grains, materials scientists might find ways to make metals like platinum more useful.

The post Manipulating atomic motion could make metals stronger and bendier appeared first on Popular Science.

]]>

The Earth’s crust is cracked into seven major tectonic plates, constantly sliding and grinding into each other. You can’t see it happen, but you can see the results: the mountains and the volcanoes that erupt when plates collide, for instance, or the valleys and seas left behind when plates break apart.

But the crust isn’t alone in behaving like that. Many metals—including steel, copper, and aluminum, which are critical to making the modern world tick—are made of little crystal bits. If you take a sheet of one of those metals and pull on it or squish it down, those little bits move against each other, just like tectonic plates. Their boundaries can shift.

After years of trying to see those shifting boundaries for themselves, materials scientists have now shown that they can zoom into the atomic scale to watch it happen. In a study published in the journal Science on March 17, they explain how this could let other researchers tinker with crystal grains and sculpt metals into better building blocks for manufacturing.

It might seem odd to describe metals as crystals, but many are, just like gemstones and ice. What defines a crystal is that its atoms are arranged in regular geometric patterns—hexagons, for instance, or cubes repeating through space. Solid glass, on the other hand, isn’t a crystal, because its atoms don’t have a defined structure and sit wherever they please.

You might think of these patterns as street grids in cities. But if an urban center is large enough, chances are that it won’t share a single grid. A megacity like New York, Tokyo, or Jakarta might be fashioned from many smaller cities, suburbs, or quarters, each one with a grid laid out at its own angle. 

Metals like these are called “polycrystals,” and their mini-crystal components are known as “crystal grains.” Crystal grains might share the same pattern, but not connect cleanly to their neighbors. Sometimes one grain’s atoms don’t line up with another grain’s, or are arranged at a different angle.

What’s more, the grains are not static or fixed; they slide past one another, or twist and dance. In the parlance of materials scientists, all this is called grain boundary motion. It can change how the whole material behaves when it’s under pressure. Depending on how the grains are arranged, the material could also become hardier—or more fragile.

[Related: Time crystals just got a little easier to make]

Researchers have been trying to study grain boundary motion for decades. The problem was that, to do that, they had to zoom in enough to examine the individual atoms in a piece of material.

In recent years, they’d come closer than ever before, thanks to transmission electron microscopes, which scan a slice of material by blasting it with electrons and watching the shapes that pass through on the far side.

That works when grain boundaries are simple, like two-flat-surfaced cubes twisting away from each other. But most boundaries are far more complicated: They might be jagged, or they might slice through a piece of metal at strange angles. “It is very challenging to observe, track, and understand atomic movements at these,” says Ting Zhu, a materials engineer at the Georgia Institute of Technology and one of the Science paper’s authors.

Yellow and pink atoms on a grain boundary from a platinum scan
The electron microscopy image shows a grain boundary between two adjoining crystals where platinum atoms are colored in yellow and pink, respectively. Wang et al. 2022

Zhu and his colleagues studied platinum, which, despite being rare, is frequently used in wind turbine blades, computer hard disks, and car catalytic converters. They took cross-sections of platinum just a few billionths of a meter thick, and ran them through an electron microscope. They also used an automatic atom-tracker—a kind of software—to examine the images coming out of the microscope and label the atoms. With that, the researchers could track how those individual atoms moved over time.

When they analyzed the platinum, they found something they hadn’t expected. Sometimes, as crystal grains moved and their boundaries shifted, the atoms at the edge would jump from one grain to another. The boundaries would bend and change to accommodate more atoms.

Zhu compares the atoms’ motion to that of marching band members. “When one line of band members moves to pass a neighboring line in parallel, the two lines of band members are merged into one line,” he explains.

[Related: Inside the high-powered process that could recycle rare earth metals]

Platinum might seem like a shining anomaly in this field, but Zhu says their work could translate to other metals, too. Tinkering with the grains in steel, copper, and aluminum can make those metals more durable and flexible at the same time.

It’s something that materials scientists can consider going forward. “Engineering such fine-grained polycrystals is an important strategy for making stronger engineering materials,” says Zhu.

Zhu says he’d expect to find grain boundary motion like this in most metals, including alloys that include atoms of multiple elements. To confirm, materials scientists would have to zoom in on each one’s atoms, studying what makes the acrobatics of aluminum different from the dance inside copper.

]]>
Headed back to the office? Make sure your building has flushed out its water. https://www.popsci.com/science/stagnant-water-copper-contamination/ Thu, 10 Mar 2022 22:00:00 +0000 https://www.popsci.com/?p=430203
A dripping faucet
Metals such as copper may accumulate in plumbing that's left unused. Pixabay

Vacancies can allow microbes and metals to build up in the plumbing.

The post Headed back to the office? Make sure your building has flushed out its water. appeared first on Popular Science.

]]>

Suppose you’re returning to a dusty workplace after years of working remotely. It’s a morning at the start of the work week, and you’re groggy. You might be tempted to go for a drink of water from the office faucet or fountain. That might be a mistake: If the water has sat stagnating for months amid the pandemic, you might have just splashed your face with copper-contaminated water.

A new study, published in the journal PLOS Water on March 9, adds to a growing body of research pointing to an unfortunate problem. When a building sits vacant for a long time, metals and microorganisms build up in its plumbing.

Fortunately, scientists and environmental engineers have pinpointed a few easy things building management can do to eliminate possible contamination. There are other more involved steps that might make better buildings in the future, too.

This focus on the water quality of individual buildings is a very recent development. Historically, water quality has been studied at a larger scale, such as the city level: lead pipes running under streets, for instance.

But in the last decade, scientists studying water quality have found sources of water contamination within individual large buildings. “Something’s actually accumulating within the plumbing,” says Treavor Boyer, an environmental engineer at Arizona State University who was not involved in the study.

When buildings sit unused for days on end, the water that circulates in them tends to stagnate. The water might corrode the surrounding plumbing, leaching metals such as copper or—worse—lead.

Additionally, water suppliers treat their supply with chemicals like chlorine to prevent infection, in the same way many swimming pools are chlorinated. Over time, that chlorine dissipates, and plumbing can become a watering hole for unwanted microorganisms, such as Legionella, the genus of bacteria that can cause Legionnaires’ disease when inhaled.

[Related: Worsening droughts could increase arsenic exposure for some Americans]

Of course, not all buildings are the same. Different buildings have different sorts of infrastructure. The substances used to treat water vary from place to place: Some use chlorine, while others use chloramine. Older buildings, in cities built decades ago, might have more lead.

Nor is all contamination equal. High levels of copper in water can hurt your liver and gastrointestinal system, but that’s nothing next to the toxicity of lead—as residents of Flint, Michigan, know firsthand.

Microorganisms are yet another story. According to Rain Richard, an environmental engineer formerly at Arizona State University who was not involved in the study, you can drink water that contains Legionella—you just can’t breathe it in. These bacteria are more concerning in places like showers—and green-certified office buildings, which are built to handle commuters who travel via bicycle, must be equipped with at least one shower.

All of these factors mean contamination changes from building to building. Andrew Whelton, an environmental engineer at Purdue University, and his colleagues studied one such green building, a low-rise office in Indiana. Green buildings like this try to conserve water by reducing the amount that flows through them. That slows the water down, which has an unfortunate side effect.

“When water takes a long time to travel through pipes…it creates conditions for biological growth,” says Boyer. “These green buildings create an interesting sort of problem.”

Whelton’s group found that levels of copper and lead had spiked. The building had three risers, which are the main pipes that lift water to different floors; the researchers could trace the worst of the contamination—in which copper levels exceeded US government limits for healthy water—to one of those risers.

That the researchers were able to sample such a building is still an important step. “It’s hard to get access to buildings to sample their water consistently, multiple times in a row,” says Kelsie Cassell, a public health researcher at Yale University who was also not involved in the study.

The researchers also found elevated levels of Legionella, although Cassell says they didn’t find elevated levels of Legionella pneumophila, the specific species that’s typically associated with Legionnaires’ disease.

This research, which resulted in the PLOS Water report, was conducted before COVID-19 sent offices and schools into lockdown. For many office workers, the weekend-long vacancies examined in this study are a drop in the bucket compared to the years they might have spent working remotely. And contamination is likely to grow worse over that longer time.

The pandemic has given researchers the chance to examine longer-term effects. Richard and Boyer, for instance, studied the water in Arizona schools that had been shuttered for months. They found similar copper contamination.

Schools faced this issue even before the pandemic, since they often sit mostly unused for months on end whenever students go on long holidays, such as summer break. Thus, some schools have developed water management plans to ensure that students aren’t splashing their faces with corrosion.

Cassell says that, as buildings reopened, many public health experts feared Legionnaires’ outbreaks would result. Fortunately, no such outbreaks seem to have occurred—at least so far.

But that doesn’t mean gaps in prevention don’t exist. Traditionally, just as water quality was seen as a city-level problem, so was ensuring that the water was safe. Building codes and standards don’t always account for water quality.

Sampling the water for contaminants is an important start. But you can’t only sample at a building’s water meter. Whelton stresses the importance of doing this in multiple places around a building, especially if the plumbing is complex. Water from a building’s opposite sides or different floors might contain wildly different sets of substances and microbes.

“You have all these different directions [water] can go in the plumbing system,” says Cassell. “If you only sample one tap, you might not be capturing [everything].”

If there is contamination, or even if enough time has passed, one quick and easy solution to clean up a plumbing supply is to “flush” it. This involves essentially running the water—keeping the tap on—until the water that’s been sitting around, accumulating heavy metals and growing microorganisms, flows out of the system.

Richard also wants to see architects and engineers rethink how water systems might be able to work, by making them more environmentally friendly. For instance, flush toilets don’t need to use fresh water—they might be able to use greywater, the stuff that goes down the drain of sinks and drinking fountains. (Some places, such as water-deprived Hong Kong, flush toilets using seawater.)

“Long term, it would require re-envisioning the plumbing systems of these large buildings,” says Boyer.

]]>
Earthquake models get a big shakeup with clues buried in the San Andreas fault https://www.popsci.com/science/ancient-rocks-assess-california-earthquakes/ Thu, 03 Mar 2022 22:00:00 +0000 https://www.popsci.com/?p=428264
San Andreas fault's Soda Lake.
Soda Lake in California, located on the San Andreas fault. Pixabay

Telltale signs in ancient rock may show the future of San Andreas earthquakes.

The post Earthquake models get a big shakeup with clues buried in the San Andreas fault appeared first on Popular Science.

]]>

For 750 miles, the San Andreas fault cuts a scar up and down the length of California. There, two colossal tectonic plates—the North American and the Pacific plates—grind against each other. When those plates give way and slip, the humans living above might suffer devastating earthquakes.

The keys to understanding those earthquakes may lie within the fault’s danger zone, inside the glass walls of a nondescript office building in Menlo Park, a suburb on San Francisco’s peninsula. There lies the regional office of the US Geological Survey, the keepers of earthquake hazard data. Thanks in part to geologists working in that building, this fault is one of the best-studied on the planet. 

Yet our understanding of how the Earth works under the surface is far from complete. To piece that puzzle together, scientists are looking millions of years into the past. What two groups have discovered—published in two papers, one in Science Advances and another in Geology—may help us better know where earthquakes happen. 

Modeling how mountains move

Menlo Park lies in the shadow of the Santa Cruz Mountains. Jagged like a dragon’s back, these peaks, known for their vineyards, split the sprawl of Silicon Valley from the Pacific Ocean. 

In geological time, these mountains are toddlers: Geologists think the mountains began to rise about 4 million years ago. They sit upon a “knot” where the San Andreas fault curves. Geologists think that bend pushed the mountains upward in a long sequence of earthquakes. What exactly about earthquakes caused that rise, however, remains murky.

[Related: When should we issue earthquake warnings? It’s complicated.]

Fortunately, Bay Area scientists have been measuring earthquakes and collecting rock samples for decades. Not all the data fit together, but they still make the area “one of the premier natural laboratories for answering some of these questions,” according to George Hilley, a geologist at Stanford University, and one of the authors of the Science Advances paper. That group also collected data of their own: They sampled rocks for helium, an element that can tell geologists at what temperature, and how long ago, a rock formed.

The San Andreas fault on the Carrizo Plain, which is a sparsely inhabited valley located northwest of Los Angeles. NASA Jet Propulsion Laboratory 

Using that data, Hilley, one of his graduate students, Curtis Baden, and their colleagues created a computational model, one of the first geological models to rely on dynamic physics, to demonstrate how mountains formed. They harnessed software used by engineers to study how materials stood up to various loads. As a result, their model could show how rocks might bend, break, and buckle as earthquakes caused mountains to rise.

Their simulation showed something surprising. “At least if the models are to be believed, much of the mountain-building could actually happen between earthquakes rather than during the earthquakes themselves,” says Hilley.

In most faults, moving tectonic plates try to force themselves past each other. For years or decades or even centuries, they’ll quietly keep pushing, building up energy at the boundary. 

Inevitably, something snaps. All that energy gets released in an abrupt tremor: an earthquake. But between quakes, that energy could go into building mountains, too, according to these simulations.

And data from these simulations, the authors say, can help fill gaps where other observations don’t match up.

A feat of earthquake archaeology

Fly about a hundred miles southeast from the Bay Area, to the area near Pinnacles National Park, and the nature of the San Andreas fault changes. Large earthquakes aren’t nearly as common here as they are to the north or further south, where the fault passes by Los Angeles, the Inland Empire, and Palm Springs. 

That is because this part of the San Andreas fault isn’t like the others. Here, the North American and the Pacific plates continually crawl past each other, without building up the stress that results in violent earthquakes. Geophysicists call this a “creeping” fault. 

Central San Andreas can see flurries of relatively harmless minor quakes, but there hasn’t been a Big One in recorded history—at least for 2,000 years. But just because central San Andreas is a creeping fault today doesn’t mean it was always so. Scientists wanted to peel away the rocks and peer into its past. 

They relied on biomarkers: The remnants of living organisms, trapped in the rock record and chemically transformed by high heat. This is how petroleum and natural gas form, and fossil-fuel hunters are very familiar with the idea of using biomarkers as a search tool.

“What we did was sort of take that idea and turn it on its head,” says Heather Savage, a seismologist at the University of California, Santa Cruz, and an author of the Geology paper. “If you have some organic molecules in fault zones, they would only experience high heat for maybe a few seconds during an earthquake, but it can get really hot, so we should still see some of these reactions take place.” 

Drilling deep, almost 10,500 feet (3,200 meters) beneath the surface, these scientists found biomarkers that indicated a rather violent history. This placid fault had once been riven by myriad earthquakes. The team found evidence for at least 100 quakes, some potentially as high as magnitude 7 on the Richter scale: stronger than the 1989 Loma Prieta and 1994 Northridge quakes in California’s recent memory.
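To gauge what “magnitude 7” means next to those more familiar quakes, seismologists often use the rule that radiated energy grows by a factor of roughly 32 per whole magnitude step (energy scales roughly as 10 raised to 1.5 times the magnitude). A rough sketch, with the comparisons chosen for illustration:

```python
# Radiated seismic energy scales roughly as 10^(1.5 * magnitude),
# the Gutenberg-Richter energy relation. Magnitudes below are illustrative.
def energy_ratio(mag_a: float, mag_b: float) -> float:
    """Roughly how many times more energy a magnitude mag_a quake radiates than mag_b."""
    return 10 ** (1.5 * (mag_a - mag_b))

print(round(energy_ratio(7.0, 6.9), 2))  # vs. Loma Prieta's magnitude 6.9: ~1.41x
print(round(energy_ratio(7.0, 6.0)))     # vs. a magnitude 6: ~32x
```

Even a tenth of a magnitude corresponds to a sizable jump in energy, which is why a magnitude 7 in the creeping section would be so notable.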

Sedimentary rock that was structurally altered during an earthquake, as seen via microscope. The green layer was heated when the fault slipped. Kelly Bradbury/Utah State University

“As far as we knew, until this work, we didn’t know that there would be such large earthquakes this far into the creeping section,” says Savage.

These quakes might have happened anywhere from a few thousand to 3.2 million years ago; Savage and her colleagues are now seeking to put a finer date on these earthquakes. But it’s a sign that this fault is nowhere near as placid as it might have seemed. If it could violently rupture in the past, the conditions exist for it to violently rupture again.

From seismology to seismic retrofitting

Understanding the history of the San Andreas fault isn’t just about creating a picture of what California looked like when ground sloths and saber-toothed cats roamed the land, millions of years ago. The fault cuts past two of North America’s largest urban areas, and its earthquakes put tens of millions of people at risk. 

Geologists hope their research can inform better assessments, made behind the walls of that United States Geological Survey office, of how earthquakes threaten buildings and lives.

When tectonic experts evaluate earthquake hazards in a particular area, they’ll consider a few different types of data: satellite measurements of Earth’s shape, past earthquake patterns, or a long-term history of a fault. Sometimes—such as in the Santa Cruz Mountains today—those data don’t agree with each other. Hilley hopes that his group’s model can reconcile those disagreements by showing how these data connect to the same processes.

And the central San Andreas research could add nuance to Central California’s risk models. “I would like to think that our work can inform that, in fact, we do see earthquakes, and evidence for many earthquakes in this section,” says Savage.

]]>
Time crystals just got a little easier to make https://www.popsci.com/science/time-crystal-uses/ Mon, 28 Feb 2022 21:16:48 +0000 https://www.popsci.com/?p=427595
Silver pocket watch being swung on a chain on a black background to symbolize a time crystal
Time crystals form a repeating pattern by "flipping" between two atomic states precisely on the clock. Those properties could be useful for silicon chips, fiber optics, and much more. Deposit Photos

A new kind of time crystal can exist at room temperature, making it all the more relevant for the real world.

The post Time crystals just got a little easier to make appeared first on Popular Science.

]]>

To make a space crystal, you need the immense pressures inside the Earth bearing down on minerals and magma. But to make a time crystal, you need esoteric equations and ridiculously precise lasers.

At least, that’s how physicists shaped the first self-standing time crystal in a lab last year. Now, they’ve turned it into an even more tangible object by creating a time crystal from common elements that can withstand room temperature. They shared their design in the journal Nature Communications on February 14.

If you’re wondering what a time crystal is (outside of pulp science fiction), most physicists had the same question until pretty recently. It’s a form of matter that wasn’t proposed until 2012, and wasn’t even seen in rudimentary form until 2016.

To wrap your head around this wonky chapter of quantum mechanics, think of a crystalized structure like a piece of salt or a diamond. The atoms deep within those objects are arranged in repeating, predictable patterns in space. For instance, if you take an ice cube from your freezer and zoom into the tiniest scales, you’ll see the hydrogen and oxygen atoms of the water molecules forming a mosaic of tiny hexagons. (This is why snowflakes tend to be hexagonal.)

As a result, physicists also call these formations “space crystals.” But just as the three axes of space form different dimensions, time also makes a dimension. Physicists began to wonder if they could find a crystal—or something like it—whose atoms formed repeating patterns in time.

[Related: What the heck is a time crystal, and why are physicists obsessed with them?]

Over the past few years, labs across the world have been working out what a time crystal might look like. Some started with a space crystal whose atoms were arranged one way. They then buzzed the crystal with a finely tuned laser to “flip” the atoms into another state, zapped it again to switch it back to the first arrangement, then over to the second, and so on, with precise regularity. This laser-driven setup is specifically called a “discrete time crystal.” (In theory, there are other types of time crystals.)

In 2016, physicists at the University of Maryland created a rudimentary discrete time crystal with atoms of the rare earth metal ytterbium. Other groups have tinkered with exotic environments like the inside of a diamond or a wavy state of matter called a Bose-Einstein condensate. More recently, in November 2021, physicists from Stanford University and Google announced that they’d created a time crystal in a quantum computer.

But early time crystals have been limited. For one, they can usually exist only at cryogenic temperatures barely above absolute zero, which makes them impractical for most systems that everyday people use. Partly for that reason, those time crystals have existed in isolated systems like quantum computers, away from the “real world.” Moreover, they weren’t long-lasting: The change between states would come to a halt after mere milliseconds, almost like a windup toy winding down.

And just as a space crystal can be big or small in space, depending on how much the pattern repeats itself, a time crystal can be long or short, depending on the duration of each state. Time crystals so far have tended to be short or “small.” That left room for growth.

So, this global group of physicists set about engineering a time crystal that circumvented some of these problems to, hopefully, work in the real world. Their device consists of a crystal about 2 millimeters across, fashioned from fluorine and magnesium atoms. It uses a pair of lasers to move between patterns, and can do so at 70 degrees Fahrenheit (room temperature).

Once the team finished fine-tuning their systems, they found that they could create a variety of time crystals “bigger” than any seen before. “The lifetime of the generated discrete time crystals in our system is, in principle, infinite,” Hossein Taheri, an electrical engineer at the University of California, Riverside, and a contributor to the study, told the “Physics World Weekly” podcast.

“Generally in physics, wherever there is a path for energy exchange between the system and its environment, noise also creeps in through the same path,” Taheri said on the podcast. That can undo the delicate physics needed for time crystals to form, which is why they need to be contained by such impractical means. But Taheri and his collaborators were able to bypass the limitations by keeping the state change going with two lasers.

[Related: The trick to a more powerful computer chip? Going vertical.]

With the researchers’ achievement, time crystals might be one step closer to existing outside of the lab. If that’s the case, what applications would they have?

No one’s going to put time crystals in time machines or warp drives soon, but their precise properties could pair well with atomic clocks or silicon chips for specialized devices. Or, because they’re driven by laser light, they could support stronger fiber optic connections. Alternatively, they could help people better understand quantum physics and unique states of matter.

“We can use our device to predict what we can observe in much more complex experiments,” Andrey Matsko, an engineer at Jet Propulsion Laboratory in Pasadena, California and another one of the authors, told “Physics World Weekly.”

In fact, he and his team think time crystals could spawn a whole field of study with a beautifully science-fiction-esque name: “timetronics.” 

“I believe that timetronics is around the corner,” Krzysztof Sacha, a physicist at Jagiellonian University in Krakow, Poland and research co-author, said on the podcast. So while you’re still a long way from being able to hold time crystals, they might enter your world sooner than you’d expect.

The post Time crystals just got a little easier to make appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The most precise atomic clocks ever are proving Einstein right—again https://www.popsci.com/science/atomic-clock-measures-time-dilation/ Thu, 17 Feb 2022 23:00:00 +0000 https://www.popsci.com/?p=426079
An atomic clock at the National Institute of Standards and Technology's JILA.
An atomic clock at the National Institute of Standards and Technology's JILA. Jacobson/NIST

One of the atomic clocks can track time to within one second over 300 billion years.

The post The most precise atomic clocks ever are proving Einstein right—again appeared first on Popular Science.

]]>

For most of human history, we kept time by Earth’s place in space. The second was a subdivision of an Earth day, and, later, an Earth year: The timespan was defined by where Earth was. Then came the atomic clock. 

Scientists delved into atoms of the element cesium, where a process called the hyperfine transition emits and absorbs microwaves, which scientists could time very precisely with the help of a vibrating quartz crystal. That underpins how scientists measure time today, and in 1967 it allowed them to craft a more accurate definition of the second: exactly 9,192,631,770 cycles of that cesium transition.

That definition hasn’t changed significantly in over half a century, nor has the timing of the atomic clocks used to create it. Those clocks wouldn’t have lost a second since the extinction of the dinosaurs. But better atomic clocks are here, and they’re good for more than just keeping time—they’re great physics tools, too.

Now, two different groups have created clocks that can measure subtle physics within the clocks themselves. The research teams published their respective results in two different papers in the journal Nature on February 16. These new clocks can measure one of Albert Einstein’s predictions—time dilation due to gravity—on the smallest scale yet.

Cutting-edge atomic clocks such as these use neither cesium nor quartz. Instead, their foundation is pancake-like structures of super-chilled strontium atoms. Their operators can control the atoms using a laser that emits visible light. Hence, they’re called “optical clocks.”

One such optical clock exists at the University of Wisconsin-Madison. This clock holds six strontium pancakes—in effect, six smaller clocks—in the same structure. (There’s nothing unique about that number; they could add more or fewer. “Six is somewhat arbitrary,” says Shimon Kolkowitz, a physicist at the University of Wisconsin-Madison.)

The Madison clock can keep time to within one second over 300 billion years—over 20 times longer than the age of the universe. That would have been a world record, but this clock is not even the most powerful one out there. It’s outmatched by another multi-clock at JILA, a joint project of the National Institute of Standards and Technology (NIST) and the University of Colorado, Boulder. 
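A quick back-of-envelope check puts that figure in context: losing one second over 300 billion years works out to a fractional timing uncertainty around one part in 10^19, which is why optical-clock stability is often quoted to roughly 19 decimal places. This sketch just does the arithmetic (the 300-billion-year figure comes from the article; the rest is unit conversion).

```python
# "One second over 300 billion years" as a fractional uncertainty.
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ≈ 3.156e7 s

drift_seconds = 1.0
elapsed_years = 300e9
fractional_uncertainty = drift_seconds / (elapsed_years * SECONDS_PER_YEAR)
print(f"{fractional_uncertainty:.2e}")        # ≈ 1.1e-19
```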

[Related: Researchers just linked three atomic clocks, and it could change the future of timekeeping]

Having multiple “clocks” in the same device isn’t necessarily useful for timekeeping. (Which clock do you watch, for instance?) But they do allow you to compare the clocks to each other. Since these clocks are very, very precise, they can measure some very precise physics. For instance, the Boulder group could test time dilation within one device.

“It’s kind of been, up till now, something you find by comparing separate clocks over distances,” says Tobias Bothwell, a graduate student at JILA and NIST.

According to relativity, time slows the faster you go as you approach the speed of light. Gravitational fields can cause the same slowdown, too: The stronger the field, the greater the time dilation. Take Earth. The closer you are to Earth’s center, the more Earth’s gravity is pulling you down, and the more time dilation you experience.

In fact, you’re experiencing time slower than birds above your head, and the stuff under your feet is actually experiencing time slower than you are. Earth’s core is actually 2.5 years younger than Earth’s crust. That might sound like a lot, but against our planet’s 4.6-billion-year-long history, it’s not even a drop in the bucket of time. Yet scientists have been measuring these kinds of subtle differences for decades, using everything from gamma rays to radio signals to Mars to, indeed, atomic clocks. 
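The core-versus-crust figure can be roughly checked with the gravitational time dilation formula Δt/t ≈ ΔΦ/c². This sketch assumes a uniform-density Earth, for which the potential at the center is 1.5 times the surface value, so the extra potential difference is GM/(2R); a realistic density profile pushes the answer toward the quoted 2.5 years.

```python
# Rough check of "Earth's core is ~2.5 years younger than its crust,"
# using Dt/t ≈ ΔΦ/c² with a uniform-density Earth (an approximation).
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24            # Earth mass, kg
R = 6.371e6             # Earth radius, m
c = 2.998e8             # speed of light, m/s
age_years = 4.6e9       # age of Earth

fractional_slowdown = G * M / (2 * R * c**2)      # ΔΦ/c² for center vs. surface
age_gap_years = fractional_slowdown * age_years
print(f"{age_gap_years:.1f} years")               # ≈ 1.6 with this simple model
```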

In 1971, two scientists carried atomic clocks on board commercial flights and flew them around the world, one in each direction. They measured a subtle difference of several hundred nanoseconds, matching predictions. In 2020, scientists used two clocks, one 1,480 feet above the other at the Tokyo Skytree, and found a difference that again proved Einstein correct.

These experiments show relativity is universal. “It’s the same everywhere on Earth, basically,” says Alexander Aeppli, a graduate student at JILA and NIST. “If you can measure one centimeter here, you can measure one centimeter somewhere else.”

NIST had already gotten down to the centimeter level. In 2010, scientists at NIST performed a similar measurement using different clocks about a foot apart.

In one of the new studies, two strontium pancakes in a single device were separated by even less: about a millimeter. After 90 hours of collecting data, the Boulder group was able to discern the subtle difference in the light, making a measurement 50 times more precise than any before it.

Their previous record was observing a dilation—a difference in the light’s frequency—to 19 decimal places, says Bothwell. “Now, we’ve gone to 21 digits…Normally, when you move a single decimal, you get excited. But we were fortunate to be in a position where we could go for two.”
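The size of the effect the teams were chasing follows from the standard weak-field formula Δf/f ≈ g·h/c² for two clocks separated by height h. Plugging in a millimeter puts the shift right around the 19th decimal place, and the 1,480-foot Tokyo Skytree experiment mentioned above sits near 10^-14. (The 451-meter figure below is just a unit conversion of the quoted height.)

```python
# Expected gravitational redshift between two clocks at height
# separation h: Δf/f ≈ g*h/c² (weak-field approximation).
g = 9.81                 # surface gravity, m/s^2
c = 2.998e8              # speed of light, m/s

def redshift(h_meters):
    """Fractional frequency shift between clocks h_meters apart."""
    return g * h_meters / c**2

print(f"1 mm:    {redshift(1e-3):.1e}")   # ≈ 1.1e-19
print(f"Skytree: {redshift(451.0):.1e}")  # ≈ 4.9e-14 (451 m ≈ 1,480 ft)
```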

These, according to Kolkowitz, are “very beautiful and exciting results.”

But Kolkowitz, who wasn’t part of the NIST study, says that NIST’s clock has one disadvantage: It is not so easy to take out of the lab. “The NIST group has the best laser in the world, and it’s not very portable,” he says. 

He sees the two groups’ work as complementary to each other. The Boulder clock could measure time and other physical properties with ever-greater precision. Meanwhile, he thinks a more mobile clock, similar to the one at Madison, could be carried to a number of settings, including into space to search for dark matter or gravitational waves.

While it’s pretty cool to prove that basic physics works as Einstein and friends figured it does, there are actually quite a few real-world applications for this sort of science, too. Navigation, for instance, could benefit from more accurate clocks; GPS has to correct for time dilation. And measuring the strength of time dilation could allow you to measure gravitational fields more precisely, which could, for instance, look under Earth’s surface.

“You can look at magma plumes under the Earth, and figure out, maybe, when a volcano might erupt,” says Aeppli. “Things like that.”

Correction March 2, 2022: A previous version of this story stated that Earth’s core is about 2.5 years older than its crust. In fact, thanks to gravitational time dilation, the core is 2.5 years younger.



]]>
Inside the high-powered process that could recycle rare earth metals https://www.popsci.com/environment/rare-earth-metal-recycling/ Fri, 11 Feb 2022 13:31:41 +0000 https://www.popsci.com/?p=424957
Two chemists pulling apart an old computer in a rare earth metal recycling experiment
Researchers at Rice University successfully separated rare earth metals out of old computers and other waste. Jeff Fitlow/Rice University

It takes some tricky chemistry to mine industrial waste and old electronics for critical rare earths.

The post Inside the high-powered process that could recycle rare earth metals appeared first on Popular Science.

]]>

Look in the second row from the bottom of most periodic tables, and you’ll find the lanthanides, split off from an archive of elements that doesn’t know what to do with them. The lanthanides are a close-knit bunch that are hard to distinguish from each other because of their similar colors and properties. Even for most scientists, they live in a cold and distant land, thoroughly inorganic and far from the comforts of hydrogen and carbon and oxygen.

But these metals are critical to making the modern world tick. They’re members of a group known as rare earth elements, or rare earths, that support everything from magnets that power clean energy technology to telescope lenses to the screen of the device you’re reading this on. And mining them is difficult and ecologically costly.

So, chemists and engineers are trying to make the best use of rare earths that have already been processed by recycling them out of industrial waste and old electronics. In new research published on February 9 in Science Advances, they show how they’re trying to do that with bright flashes of electricity. 

Molecules of coal fly ash separating in a black and white microscope image
A molecular look at rare earths separating in coal fly ash. Tour Group/Rice University

Most rare earths aren’t actually that rare (certainly not compared to truly rare elements like iridium), but they’re not easy to get. After their ore is mined from the ground, they have to be separated to make specialized products—a tedious process, given their similar properties. Most rare earth mining homes in on lanthanum and cerium, but heavier metals like neodymium and dysprosium are especially desirable for the magnets used in clean energy tech.

The supermajority (some estimates say more than 90 percent) of the world’s supply today comes from China, which makes the resource more vulnerable to geopolitical tensions. In 2010, after a Chinese fishing boat collided with a Japanese Coast Guard boat in disputed waters, China stopped rare earth exports to Japan. The blockade didn’t last, but Japan has spent the years since then aggressively seeking alternative sources of rare earths. So have other countries.

More importantly, rare earths’ extraction comes at an environmental cost. “It’s energy- and chemically intensive,” says Simon Jowitt, a geochemist at the University of Nevada, Las Vegas who was not involved with the latest research. “Depending on how you process them, it involves high-strength acids.” Those acids can leach into the environment.

[Related: How shape-shifting magnets could help build a lower-emission computer]

One way of reducing the burden is by recycling goods that already contain these elements—but that’s still not common. Callie Babbitt, a professor of sustainability at Rochester Institute of Technology in upstate New York who was also not involved with the new study, says that only about 1 to 5 percent of the world’s rare earths get recycled.

That’s why researchers are innovating to find new ways of pulling rare earths out of waste. Some have tried bacteria, but feeding those microbes has proven energy-intensive.

Now, one group from Rice University has devised a recycling method that relies on intense electricity called “flash joule heating.” The researchers behind it had previously tested it on old, chopped-up circuit boards to strip them of precious metals like palladium and gold and heavy metals like chromium and mercury before safely disposing them in agricultural soil.

This time, they applied flash joule heating to other industrial byproducts: coal fly ash, which is a pollutant from fossil fuel power plants, red mud, which is a toxic substance left over from turning bauxite into aluminum, and, indeed, more electronic waste.

Their process looked something like this. They put the substance they were breaking down into a finger-sized quartz tube, where electricity “flashed” it to around 5400 degrees Fahrenheit. The separated components were then dissolved in a solution for chemists to retrieve later.

The process does release some toxic compounds, but the system aims to capture them and prevent them from getting into the air. “When you do this industrially, you wouldn’t just release these compounds to the air,” says James Tour, a chemist at Rice University and one of the authors of the study. “You would trap them.”

“Our waste stream is very different,” Tour explains. Unlike the strong nitric acid that’s often used to extract rare earths from the ground, their solution is a much weaker, more diluted hydrochloric acid. “If that got on your hand, I don’t think you’d even feel it,” Tour says.

However, even with a step forward in this research, it will be some time before piles of industrial waste can be recycled for rare earths. “There’s a lot of activity going on in this area, but I haven’t seen anything in the way of breakthroughs,” says Jowitt.

[Related: You throw out 44 pounds of electronic waste a year. Here’s how to keep it out of the dump.]

One issue with flash joule heating, according to Jowitt, is that the rare earths still need to be separated before they can be molded into gadgets. What’s more, using pollutants like coal fly ash means there will be other harmful leftovers from the process. “Extracting and recovering the [rare earths] they contain are only part of a larger challenge of managing these wastes,” says Babbitt.

When it comes to e-waste, it won’t be easy to mine mountains of disused computers and phones for valuable components. The amount of rare earths in an average smartphone, for instance, adds up to fractions of a gram. And many consumers wouldn’t know where and how to recycle them.

Given that, Jowitt thinks part of the solution could lie with the products driving the demand for rare earths in the first place. “One obvious thing is changing the way to design things to make them more recyclable.”



]]>
The future might be filled with squishy robots printed to order https://www.popsci.com/science/3d-printed-gelatin-robot/ Sat, 05 Feb 2022 20:47:00 +0000 https://www.popsci.com/?p=423866
Three excited kids in robotics class printing robot toys on a 3D printer
3D printed plastic robots may be nothing new, but 3D printed gelatin robots? They could make waves for kids toys, medical procedures, and more. Deposit Photos

T-1000, but jigglier.

The post The future might be filled with squishy robots printed to order appeared first on Popular Science.

]]>

Mixing gelatin and sugar syrup could make for a tasty 1900s dessert. But it’s also the base of a gel-like substance that, in the future, could lead to cheap, bendy, and sustainable robots.

Scientists at Johannes Kepler University Linz in Austria have built a tentacle-like robotic finger with gelatin and other materials you can probably find in a shop near you. They cooked the ingredients and then created the finger in a 3D printer. They published their work on February 2 in Science Robotics.

You might be used to thinking of robots as rigid constructs of metal, ceramic, and other hard materials. These are the sorts of machines that build cars and make exoskeletons. But there are other types of robots, ones made from more compliant materials that can bend to their surroundings.

This is the growing world of soft robotics. In the near future, it’s soft robots that might find their way into the human body, where their flexibility could, for instance, allow surgical tools to conform to different body shapes. It’s soft robots that might mimic sea creatures and delve under the sea, both on Earth and on other worlds.

But even if these squishy robots can go to extremes and swim like fish, the materials that make them work are often polymers like plastics, which aren’t renewable nor ecologically friendly. Gelatin, on the other hand, naturally biodegrades, leaving no trace. As such, robot-makers such as those in Linz have been tinkering with gelatin-based materials for a few years now.

But gelatin poses other challenges that you might not expect to pop up in a robotics lab. Because it’s essentially sugar and protein, it tends to attract mold. And when the gel’s water content dries up—something that, predictably, happens in very dry environments—the gel becomes hard to work with.

“It was too brittle,” says Florian Hartmann, a physicist at EPFL in Switzerland and one of the researchers behind the new paper. “So, if you stretch it just a little bit, it breaks very easily.”

The Linz group’s recipe gets around a few of those challenges. In addition to gelatin and sugar, they added citric acid, which alters the pH of the material and prevents microorganisms from feasting on it prematurely. They also mixed in glycerol, which helps the gel hold in water. With those upgrades, the material can be stretched up to six times its original length and still retain its structure. The Linz group had first published this recipe by 2020.

“We carried on and tried to make more complicated robots with more performance and more functionality,” says Hartmann.

That brings them to today. Unlike most gelatin builders before, who typically made their parts with molds like you might do in the kitchen, the Linz researchers modified a 3D printer to use their gelatin substance.

3D printing soft robots has so far been a technology with a lot of promise, but few results. Part of the problem is that the few polymers that have been used take a long time to settle and solidify, meaning that printing them takes an impractical amount of time. But gelatin has an advantage: As a protein, it can crystallize and make viable prints much more quickly than polymers.

“Manufacturing something completely biodegradable coming right out of the 3D printer—I believe it’s a very interesting approach,” says Ramses Martinez, an engineer at Purdue University, who was not involved with this paper.

To make the finger move, the Linz group wrapped it with an exoskeleton made from a material that included ethanol and shellac, the resin that’s used in very old records. These strips are sensitive to how light refracts, or bends, as it passes between the finger and the air around it.

That made it possible to control the 3D printed gelatin finger by blowing compressed air at it. The moving air changed the angle of light passing through the strips, making them sway in response. The Linz group controlled the finger with a system containing a Raspberry Pi and a PlayStation 4 controller. In their experiments, they were able to make it push objects away from its surroundings.

[Related: These robotic wings use artificial muscle to flap like an insect]

Hartmann isn’t sure how well this finger might fare outside the lab. The gelatin can go up to around 140 degrees Fahrenheit before it starts to melt, and it will require more tinkering before it can come in contact with water. But the good news is that, because it’s made of widely available ingredients, it’s easy to make more materials for further tests.

“I believe everything related with proteins is something that you can explore putting this technology in,” says Martinez. That might include robotic parts used to manufacture food to avoid the safety risks from non-organic parts. Hartmann also imagines it being used in robotic toys, to minimize harm to children, or in pop-up art installations, to make them easily disposable.

Martinez adds that gelatin robots could be used to enter sensitive environments, such as highly radioactive areas, where operators have to balance the need to reduce harm to themselves with the need to prevent further contamination. “You just simply don’t want to bring them back and recover them,” he says. “So, for these effects, having something that will degrade and actually biodegrade, that would be quite interesting.”

Gelatin robots probably won’t ever lift enough weight to build cars. But as this single finger shows, robots can do far more things than just that.



]]>
Scientists found a fleeting particle from the universe’s first moments https://www.popsci.com/science/scientists-detect-x-particle-lhc-facility/ Fri, 28 Jan 2022 21:00:00 +0000 https://www.popsci.com/?p=422662
The X particle was detected at CERN's underground LHC facility.
The X particle was detected at CERN's underground LHC facility. Maximilien Brice

To detect an X particle, make some quark-gluon plasma.

The post Scientists found a fleeting particle from the universe’s first moments appeared first on Popular Science.

]]>

As atoms and subatomic particles swirl and crash into each other within the magnetic whorl of CERN’s Large Hadron Collider (LHC), the detectors watching their collisions and the high-energy debris they produce turn what they’re seeing into data—a lot of data.

The vast majority of that data is fluff that CERN automatically filters out. But each year that LHC runs, CERN estimates, produces 90 petabytes of saved data—enough to fill up 90,000 typical 1-terabyte hard drives. CERN, in the fashion of a 1960s space opera, stores much of it on giant banks of magnetic tape in a glossy room near the French-Swiss border. It’s too much data for any human to easily sift through.

It’s perhaps unsurprising that hidden gems lie buried deep within these storage banks, waiting to be found. Particle physicists have uncovered one such gem: a strange particle with a strange name, X(3872). If they’re right, it could be a look back into the very earliest flickers of time—what the universe looked like in the first millionth of a second after the Big Bang. They published their findings in the journal Physical Review Letters on January 19.

They’ve only scratched the surface of what this particle looks like. “Theoretical predictions from different groups did not agree with each other,” says Yen-Jie Lee, a particle physicist at MIT, and one of the researchers.

X(3872) sounds like the name of a cryptid, and its previous sightings have been fleeting indeed. The first one came in 2003, when scientists at the Belle experiment, a particle accelerator in Tsukuba, Japan, north of Tokyo, glimpsed X(3872) as they were smashing electrons together. Unfortunately, X(3872) decayed and vanished too quickly for scientists to learn much.

[Related: Physicists close in on the exceedingly short life of the Higgs boson]

Instead, scientists thought they could find X(3872) in something called quark-gluon plasma. The nuclei of atoms contain clumps of protons and neutrons, but those tiny particles are actually fashioned from even tinier particles called quarks. To make larger particles, quarks are bound together by gluons, particles that are tinier yet, acting as agents of the strong nuclear force.

At extremely, extremely high temperatures—trillions of degrees—protons and neutrons and other particles like them break apart and dissolve into a high-energy slurry of quarks and gluons. That’s quark-gluon plasma.

Only in the 21st century have physicists been able to create it. One method that’s been shown to work is heavy-ion collision: smashing atomic nuclei together at very high speeds. Fortunately, experiments at LHC had been smashing heavy lead atoms together, leaving behind data trails in quark-gluon plasma for researchers to comb through.

But that’s not so simple. “Nobody has tried to detect X(3872) in heavy-ion collisions before, because it is a very difficult task,” says Lee.

LHC usually collides smaller particles like protons, but larger particles like atomic nuclei leave behind a lot more debris. “In proton-proton collisions, there are about a few tens of particles produced in one event, while in heavy-ion collisions, there are typically several thousand, even 10,000 particles per event,” says Jing Wang, a postdoc at MIT, and one of the researchers.

Finding X(3872) in all that—lost in the rainforest of LHC data—is like trying to find a needle in a meadow. Wang and her colleagues devised a method that relied on machine learning: They trained an algorithm to find X(3872)’s signature, its fingerprints, as it decayed into other particles. After some fine-tuning, the algorithm found a particle with X(3872)’s mass no fewer than a hundred times.
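The underlying reconstruction idea is standard in particle physics: a short-lived particle shows up as a peak in the invariant mass of its decay products, m² = (ΣE)² − |Σp|² in natural units where c = 1. The sketch below shows only this kinematic step with illustrative numbers; the actual analysis layered a trained classifier on top of it to reject the enormous combinatorial background of heavy-ion collisions.

```python
# Minimal sketch of invariant-mass reconstruction (natural units, c = 1).
# Summing the four-momenta of candidate decay products and taking
# m = sqrt(E^2 - |p|^2) recovers the parent particle's mass.
import math

def invariant_mass(particles):
    """particles: list of (E, px, py, pz) four-momenta in GeV."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Illustrative example: two back-to-back massless 2-GeV particles
# reconstruct to a 4-GeV parent at rest.
print(invariant_mass([(2, 0, 0, 2), (2, 0, 0, -2)]))   # 4.0
```

In a real search, this mass is computed for every candidate combination in an event; a peak near 3.872 GeV in the resulting histogram is the particle's "fingerprint."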

The results tell us more about an artifact from the very earliest ticks of history. Quark-gluon plasma filled the universe in the first millionths of a second of its life, before what we recognize as matter—molecules, atoms, or even protons or neutrons—had formed.

Lee says that, in the future, the quarks and gluons in the plasma could be used to break the particle apart and see what lies inside.

Some physicists believe that X(3872) could be a particle with four quarks: a tetraquark. The subatomic particles that we’re familiar with—the typical protons and neutrons—are made up of three quarks; tetraquark particles are weird, and usually need high energies to stay together. In the past decade, physicists have seen other examples of tetraquarks in their particle accelerators.

Another possibility is that X(3872) is actually built from mesons. These are another type of subatomic particle made from two particles: one quark and one antiquark, the quark’s antimatter doppelganger. Mesons can sometimes appear fleetingly on Earth, when high-energy cosmic rays collide with typical matter. But nobody has ever seen a larger particle made up of multiple mesons.

This is exciting, says Lee, because if X(3872) is created from mesons, then it’s a sign that the universe was resplendent with such “exotic” particles.

But to learn more, they’ll need to wait for even more data. LHC is currently three years into its second extended maintenance and upgrade period—called, fittingly enough, Long Shutdown 2—and its restart date has been pushed back repeatedly, thanks to COVID. (It may be as soon as next month.) After that, there will be more collisions, more quark-gluon plasma, more data to sift.

“It is going to be exciting to follow up this line of study with a much larger amount of data,” says Lee.



]]>
NASA’s James Webb telescope is about to arrive at an exceptional point in space https://www.popsci.com/science/james-webb-telescope-destination-lagrange-point/ Sat, 22 Jan 2022 20:00:00 +0000 https://www.popsci.com/?p=421615
The JWST rocket launch.
NASA's James Webb Space Telescope aboard a rocket launched in December 2021. NASA/Bill Ingalls

The telescope will drift at gravity's tipping point between Earth and the sun.

The post NASA’s James Webb telescope is about to arrive at an exceptional point in space appeared first on Popular Science.

]]>

The James Webb Space Telescope (JWST) is approaching its new home. On January 24, it will arrive at a point in space that scientists call Lagrange point 2, or L2.

This is the technical name for a delicate gravitational sweet spot. JWST is bound for the place in the Earth-sun system where Earth’s gravity adds just enough to the sun’s much stronger pull that a spacecraft can orbit the sun in lockstep with Earth. JWST’s designers planned for their telescope to drift there, because there, the telescope can work without gravity constantly nudging it out of place.

“We knew that we needed to keep JWST at L2,” says Stefanie Milam, a NASA planetary scientist on JWST.

Each pair of gravitationally bound objects—a sun and its planet, say, or a planet and one of its moons—has five Lagrange points. An asteroid or a spacecraft, for instance, can live at one of those five points without falling out of orbit.

In 1765, the mathematician Leonhard Euler crunched gravitational equations to find the first three points. Those three—L1, L2, and L3—form a straight line. Take Earth and the sun: L1 lies between them, about 930,000 miles (1.5 million kilometers) from Earth. L2 is tucked away on Earth’s far side—also about 930,000 miles from us, facing the outer reaches of the solar system.
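That 930,000-mile figure isn't arbitrary. A rough way to see where it comes from is the Hill-radius approximation, r ≈ a(m/3M)^(1/3), which locates L1 and L2 for a small planet orbiting a much heavier star. A quick sketch, using standard mass values (not figures from the article):

```python
# Approximate Earth-to-L1/L2 distance from the Hill-radius formula
# r ~ a * (m / (3 * M))**(1/3) for the sun-Earth pair.
a = 1.496e8         # Earth-sun distance in km (1 AU)
M_sun = 1.989e30    # solar mass, kg
m_earth = 5.972e24  # Earth mass, kg

r = a * (m_earth / (3 * M_sun)) ** (1 / 3)
print(f"{r:,.0f} km")  # roughly 1.5 million km, about 930,000 miles
```

The cube-root scaling explains why L1 and L2 sit at the same distance on either side of Earth, to a good approximation.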

Several years after Euler, in 1772, one of his close correspondents—another mathematician named Joseph-Louis Lagrange—ran through those equations again and realized that two additional points exist: L4 and L5. They’re located in Earth’s orbit, with L4 a bit ahead of us and L5 an equal bit behind us.

L4 and L5 are more stable than their counterparts—they can pull in gas, dust, and even larger objects. Astronomers have discovered at least two asteroids at the sun and Earth’s L4 and L5, and they’re hunting for more. Other sun-planet pairs have captured objects at these points, too. The sun and Jupiter’s L4 and L5 points are home to whole groups of asteroids, known as trojans, which NASA’s Lucy probe will visit.

L3 is the odd point without a counterpart. To find it, you’d have to go all the way to the other side of the sun, close to the opposite point in our orbit (but, because Earth’s gravity subtly pulls at the sun, not quite exactly opposite). Predictably, that makes it impractical for spacecraft to reach; no known spacecraft has ever called L3 home.

But L1 and L2 are much easier for us to visit, each less than four times the distance from Earth to the moon. L1, the perch facing the sun, has been an ideal stomping ground—or stomping space—for missions designed to observe our star or the streams of particles in its solar wind.

[Related: NASA’s James Webb telescope will peer through the haze of other worlds]

L2, on the other hand, keeps the sun and Earth in the same direction, at a spacecraft’s back. This site is ideal for craft that peer out beyond the solar system and into the vast cosmos beyond. Currently calling it home are ESA’s Gaia, which is measuring the distances to the stars, and the X-ray observatory Spektr-RG. On January 24, JWST will join them.

From the beginning of JWST’s planning, decades ago, its designers decided that L2 was the right place. JWST is an infrared telescope, and warm objects glow in infrared, so orbiting Earth—swinging constantly in and out of sunlight—is not ideal. Even heat from our own planet could throw off the telescope’s extremely finicky observations.

“Any heat from the Earth or the moon would be something that we would have to fight with, and we’re trying to detect the faintest signals of galaxies and stars across the universe,” says Milam.

Placing JWST at L2, far from Earth, circumvents that problem. Keeping the sun, Earth, and moon all in one direction also means the telescope can rely on a single sunshield, rather than being wrapped inside a tube like the Hubble Space Telescope. This is part of the reason JWST can use its colossal mirror.

There are other advantages to being at L2, Milam says. For instance, being out of Earth’s orbit means dodging other spacecraft, as well as most of the space junk that could slam into the telescope and damage it. 

But there is a catch. JWST is too far from Earth to easily conduct maintenance. That’s in contrast to Hubble, whose location in Earth orbit meant that it was easy for NASA to conduct repairs, including its famous mirror job

JWST won’t be fixed in place at L2, but will travel in a fine-tuned orbit around it. That orbit won’t be completely stable, either: the force of the sun’s radiation pushing on the sunshield will gently shove JWST out of place. Out there, says Milam, “unlike Hubble, we don’t get Earth’s gravity to help us” keep the spacecraft’s momentum in check.
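The push from sunlight is tiny but relentless. A back-of-the-envelope estimate of the radiation pressure force on a tennis-court-sized reflector (the area below is a ballpark assumption, not an official mission figure):

```python
# Order-of-magnitude solar radiation pressure force on a large reflector
# at Earth's distance from the sun. Area is a rough assumption, not a
# JWST specification.
irradiance = 1361.0  # W/m^2, solar constant at 1 AU
c = 2.998e8          # speed of light, m/s
area = 21 * 14       # m^2, roughly tennis-court sized

force = irradiance * area / c  # newtons
```

The result is on the order of a millinewton. That sounds negligible, but acting steadily for months it is more than enough to drift a spacecraft off an unstable orbit, which is why regular correction maneuvers are needed.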

Thus, JWST will need constant adjustments to stay in place. Figuring out exactly what that will require, Milam says, is something JWST’s operators will be working out over the next several months. After the telescope arrives in place Monday, they’ll begin checking to ensure that all of its instruments function. Then, observations begin.

Once JWST is settled near its companions at L2, all may be quiet—for a few years, at least. 

Over a half-dozen other missions are slated to go there. The Nancy Grace Roman Space Telescope, for instance, is set to head there sometime later in the 2020s. So will Euclid, ESA’s dark-matter-watcher; PLATO and ARIEL, two ESA telescopes that will peer at exoplanets; and LiteBIRD, a Japanese mission to try to peer into the very earliest days of the universe.

How the world’s first ‘quantum tornadoes’ came to be https://www.popsci.com/science/ultracold-atoms-form-quantum-tornados/ Mon, 17 Jan 2022 13:00:00 +0000 https://www.popsci.com/?p=420626
Whirlpool of water to represent a quantum tornado from a physics experiment
The ultra-cold atoms flowed together like water, but were a different state of matter completely. Deposit Photos

The atom-sized version of extreme weather.

The post How the world’s first ‘quantum tornadoes’ came to be appeared first on Popular Science.


In some of the most extreme physical conditions, physicists have created perhaps the smallest storms yet. 

These “quantum tornadoes,” whipped up by quantum researchers from MIT and Harvard, are the latest demonstrations of quantum mechanics—the strange code of laws that governs the universe at its finest, subatomic scales. They’re made from little clouds of sodium atoms, swirling around at temperatures of a fraction of a degree above absolute zero.

There’s a well-established method of deep-freezing atoms to these ultra-cold depths. It starts by trapping the atoms—often alkali metals—in a magnetic cage, then shooting a laser at them. It might seem odd to use lasers for cooling, but a laser produces a beam with just one wavelength of light (in this case yellow, to match sodium’s vaporous hue). When finely tuned, each photon an atom absorbs delivers a tiny momentum kick against the atom’s motion—and because temperature is nothing more than atomic motion, slowing the atoms cools them.
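The numbers behind that slowing are striking: each absorbed photon changes a sodium atom's speed by only about three centimeters per second, so cooling takes a vast number of absorption events. A quick check, using standard physical constants (not values from the study):

```python
# Velocity change of a sodium-23 atom per absorbed photon of its yellow
# (589 nm) resonance line: dv = h / (wavelength * m).
h = 6.626e-34        # Planck's constant, J*s
wavelength = 589e-9  # sodium D line, m
m = 23 * 1.6605e-27  # mass of a sodium-23 atom, kg

dv = h / (wavelength * m)  # about 0.03 m/s per photon
```

Room-temperature sodium atoms move at hundreds of meters per second, so tens of thousands of these kicks are needed per atom.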

What might be left in the end is a Bose-Einstein condensate, a lovably esoteric state of matter where multiple atoms act as one and behave in all sorts of unimaginable quantum ways.

[Related: There’s a new state of matter known as “quantum spin liquid”]

Bose-Einstein condensates might seem weird, and they are—but it’s a sort of weirdness that physicists are used to dealing with. They were first predicted in the 1920s, and scientists succeeded in creating one in the lab in 1995. Those efforts won their scientists the 2001 Nobel Prize in Physics.

Since then, the physics world has been abuzz with attempts to push Bose-Einstein condensates to new heights (or new lows, as it were). For instance, physicists have wondered for some time whether they could get atoms frozen in this state of matter to rotate.

Researchers were interested in doing that because it followed in the footsteps of something called a quantum Hall liquid. To make a long story short, under certain quantum conditions and in a magnetic field, a cloud of electrons that normally would push each other away would instead begin mimicking each other’s properties. That would cause them to act a little like water molecules in a fluid, flowing freely.

Electrons are difficult to observe, but physicists thought that rotating a Bose-Einstein condensate in a whirlpool could make atoms behave the same way. That’s appealing, because atoms are much, much bigger than electrons.

This latest research group isn’t the only one that has tried to stir up a vortex. The challenge comes in getting the atoms to spin without breaking the Bose-Einstein condensate.

“It’s kind of tricky to get, essentially, this rotation under control,” says Peter Schauss, a physicist at the University of Virginia, who wasn’t part of the newest experiment. “It’s easy to rotate it somehow, but it’s hard to rotate it in a way that you don’t heat it up.”

Density profiles and simulations of the magnetically charged sodium crystals show rotational flow. Mukherjee et al. 2022

The Harvard-MIT group took their shot by wrangling a million sodium atoms, cooling them down to 100 billionths of a kelvin above absolute zero, and corralling them inside powerful electromagnets. Then, they spun the condensate around, hoping that they could see a quantum fluid in motion.
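At 100 billionths of a kelvin, each atom's quantum wavelength stretches to around a micrometer, long enough to overlap with its neighbors, which is the condition for condensation. A sketch of that calculation, using the standard thermal de Broglie formula and textbook constants (an illustration, not the team's analysis):

```python
# Thermal de Broglie wavelength of sodium at 100 nanokelvin:
# wavelength = h / sqrt(2 * pi * m * k_B * T)
import math

h = 6.626e-34        # Planck's constant, J*s
k_B = 1.381e-23      # Boltzmann constant, J/K
m = 23 * 1.6605e-27  # sodium-23 mass, kg
T = 100e-9           # 100 nanokelvin

wavelength = h / math.sqrt(2 * math.pi * m * k_B * T)  # ~1 micrometer
```

For comparison, at room temperature the same formula gives a wavelength thousands of times smaller than the spacing between atoms, which is why everyday gases show no such quantum behavior.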

It worked—to a point. The atoms formed a thin, needle-like structure that had the properties of the fluid they’d been seeking. The researchers published the results of this first stage in Science in June 2021.

But they knew they could go further. They decided to keep spinning the needle to see what happened. And that’s when they noticed something extraordinary: The needle began to undulate. First, it wrapped itself into a corkscrew. Then, the curls broke apart, shattering the needle into a smattering of little quantum blobs, each of which began to rotate. Hence, quantum tornadoes.

The researchers compare this to chaos theory. Creating these tornadoes is akin to the famous example of a butterfly’s wings flapping and causing a storm on the other side of the planet, only that the process is playing out on a quantum scale. They published their description of the vortices in Nature earlier this month.

[Related: When light flashes for a quintillionth of a second, things get weird]

So what comes next? As one might imagine, getting atoms to cooperate at this level is not easy. “It’s still kind of a work in progress to get more stable lasers to … run these experiments efficiently,” says Schauss. “A lot of these experiments are limited by that.”

Another challenge: The tiniest of these quantum tornadoes had 10 atoms each. But some physicists think it’s possible to go even further—to get down to a Bose-Einstein condensate with just one atom. Doing that would really help physicists watch some of the arcane equations of quantum mechanics playing out in the real world (with very sophisticated cameras, at any rate).

While scientists continue to refine their process for crafting these vortices and other shapes of ultra-cold matter, their creations might be applied to technologies like sensors. The MIT-Harvard research was funded by DARPA, which wants to use the spinning condensates to detect subtle underwater movements. But so far, subtlety has not been part of the equation.

Ancient rocks hold the story of Earth’s first breath of oxygen https://www.popsci.com/science/ancient-rocks-holds-story-earths-atmosphere/ Mon, 10 Jan 2022 18:00:00 +0000 https://www.popsci.com/?p=419485
Ukraine’s Dnieper River
A massive cyanobacteria bloom in Ukraine’s Dnieper River. Ancient cyanobacteria probably pumped oxygen into early Earth's atmosphere. Deposit Photos

The Great Oxygenation Event may have happened all at once, around 2.3 billion years ago.

The post Ancient rocks hold the story of Earth’s first breath of oxygen appeared first on Popular Science.


Earth’s atmosphere wasn’t always like it is today. The oxygen that’s so vital to us wasn’t always there. In fact, for much of the first half of Earth’s history to date, our planet’s atmosphere may have had far more in common with the noxious haze of Venus, with scarcely a trace of oxygen to be found.

That all changed with a bang, sometime around 2.3 billion years ago. Scientists generally believe that microorganisms called cyanobacteria, using photosynthesis, began to pump out enough oxygen to quite literally terraform Earth. It’s called the Great Oxygenation Event, or GOE.

It was “this huge climatic shift, environmental shift in Earth’s history,” says Sarah Slotznick, an Earth scientist at Dartmouth College, “the biggest one that we have.”

But there are many unanswered questions about how the GOE played out. And now, finding clues in ancient rock, Slotznick and her colleagues’ research has turned upside-down some of scientists’ previous assumptions about the nature of the GOE. Researchers had assumed that the GOE was preceded by several “whiffs” of early oxygen, but this new study shows that the GOE happened in one go. The study authors published their work in the journal Science Advances on January 5.

Digging up rocks

Dating things this far back—ten times older than the oldest dinosaurs, in a time period known as the Archaean—isn’t straightforward. The first plants and animals wouldn’t appear for well over another billion years. Most continents hadn’t formed yet, and those that had would be unrecognizable to us.

Studying this period requires looking for evidence in very old rocks. For instance, if there’s oxygen in the atmosphere, it will react with stuff in rock—leaving fingerprints for geologists to find. Or, if the rock lies under the ocean, the atmosphere will react with stuff in the sea, and the end product will sink to the bottom and stay there. 

With the arrival of oxygen, iron in rocks will turn into iron oxide, which we call rust. Other elements, such as molybdenum, rhenium, and sulfur, show changes too—though scientists aren’t always sure what drives those changes.

“The geological record only provides snapshots of Earth history,” says David Johnston, an Earth scientist at Harvard University who was not involved with this research. “The further back in time we go, the less complete these records become.”

Yet, if you look in the right place, you can find rocks in the very oldest parts of continents, rocks old enough to show these signatures. By the early 2000s, those signatures were pointing to a single beginning for the GOE: about 2.3 billion years ago.

[Related: Diamonds contain remnants of Earth’s ancient atmosphere]

For the scientists behind this newest paper, the right place is the Pilbara region in the north of Western Australia. Its rocks might preserve the oldest known evidence of life on Earth, tiny fossils from 3.5 billion years ago. 

Visiting the rocks

In 2004, a NASA-backed effort cut a cylinder, known as a drill core, from the Mount McRae Shale, a particularly well-preserved bit of Pilbara. “We use this word, ‘preserved,’ similar to how you would talk about a preserved ancient text,” says Slotznick. Its text hasn’t been erased by high heat, and its material hasn’t been squished by rising mountains. 

And Johnston says that shale is “the right flavor” to capture a glimpse of the ocean above.

After this core was pulled from the ground, scientists combed it up and down to see the rock at different points in history. In 2007, scientists examining sulfur, molybdenum, and rhenium announced that they’d found evidence of oxygen in this sample—from well before the GOE. 

While the GOE had been established for decades at this point, the idea that oxygen was in the atmosphere beforehand was new. It indicated that the GOE might not have been all one event, but perhaps that it was preceded by “whiffs” that occurred more than 50 million years earlier.

And in the years since then, these “whiffs” were embraced by the community. “They’re starting to be taught in introductory earth science classes, is what I’d say,” says Slotznick.

Revisiting the rocks

In the past decade and a half, that same Mount McRae Shale drill core has been the subject of more than half a dozen papers. Many looked at shifts in other elements: selenium, for instance, or heavy metals like osmium and mercury. But the chemistry behind what researchers were seeing was often poorly understood, and scientists weren’t always sure what a particular signal meant.

To add a bit of clarity, Slotznick and her colleagues decided to take their own crack at it. They took a sample from that core and conducted a number of tests, including examining the rock with a synchrotron—a very, very bright source of light, especially X-rays.

What they found was new evidence that flew in the face of the “whiff” theory. Events unconnected to the GOE actually created the signatures that previous studies had seemingly misattributed to “whiffs.” For instance, the molybdenum that had been crucial to creating the idea of “whiffs,” Slotznick and co-authors think, actually came from volcanoes.

“This certainly presents a formidable challenge to arguments for the ‘whiff’ of oxygen within the Mount McRae [Shale],” says Johnston, “and provides a terrific roadmap for how we test these hypotheses moving forward.”

Moving forward doesn’t just mean learning more; it means that this research will also help re-interpret some of the other data that’s come from this rock over the past several years.

As for Slotznick, she’s planning to peer back into this primordial chapter in Earth’s history, with other drill cores from another especially ancient part of the world’s rocks: South Africa.

“I think it’s an amazing time period,” she says. “There’s a lot that can be learned and dug into, especially if we’re trying to understand what caused it, which is the big question.”

A seismic mystery deep within Earth hints at the moon’s origins https://www.popsci.com/science/earths-mantle-regions-explained/ Tue, 04 Jan 2022 21:00:00 +0000 https://www.popsci.com/?p=418529
Earth viewed from space.
Earth seen from the Moderate Resolution Imaging Spectroradiometer on NASA's Terra satellite. NASA

In subterranean regions called ultra-low velocity zones, seismic waves mysteriously slow down.

The post A seismic mystery deep within Earth hints at the moon’s origins appeared first on Popular Science.


Deep under Africa and the central Pacific lie clusters of geological mysteries—nearly 1,800 miles (2,890 kilometers) below, down at the bottom of the mantle. Scientists typically use seismic waves generated by earthquakes to peer into Earth’s interior. But such waves aren’t especially enlightening in these parts, where seismic waves slow down as if they’re caught in jelly. 

Those regions, potentially hundreds of kilometers wide, are called ultra-low velocity zones (ULVZs), and so far, their origins have remained a mystery. But now, a team of geologists has an idea. Simulating the formation of these mysterious zones, the geologists have evidence that the features are actually patches of ancient material, billions of years old, that sank to the bottom of the mantle over the years. 

The researchers published their findings in Nature Geoscience on December 30. If their simulations hold up, then ULVZs could be windows into the conditions of the early Earth.

“What continues to interest me about them is that they are such bizarre features of the lowermost mantle,” says Michael Thorne, a geologist at the University of Utah and one of the authors.

For years, scientists hadn’t been sure what, exactly, ULVZs were, or what created them. These areas occur at the base of the mantle, at the edge of Earth’s outer core. Scientists knew that these zones were much denser than the surrounding mantle, but that raised more questions than it answered.

“We can see these features in many different locations of the lower mantle, but we still don’t know the answers to many basic questions about them,” says Thorne. According to him, we don’t know what they’re made of, how big they are, or even where they’re all located.

Knowing how they formed could answer some of those questions. “The physical properties of ultra-low velocity zones are linked to their origin,” says Surya Pachhai, a geophysicist at the University of Utah and another of the authors, in a statement.

Unfortunately, their origin is just as murky as anything else about them. Some scientists thought ULVZs might be the source of magma for volcanic hot spots, since they lie under volcanoes in Hawaii and Samoa in the Pacific. But many other known ULVZs don’t align with volcanoes, so that seemed to make little sense.

And most theories of ULVZs assumed that they were made from one layer of some material. But that was far from certain, and a ULVZ with multiple layers would have wildly different properties.

Thorne, Pachhai, and their colleagues focused on one cluster of ULVZs, located deep under the Coral Sea northeast of Australia, home of the Great Barrier Reef. It’s an ideal location, since earthquakes are frequent there, and those quakes give off plenty of seismic waves that scientists can use to visualize the inside of the Earth.

But the seismic wave signatures they observed, from thousands of miles below even the deepest depths of the ocean, offered only blurry and uncertain pictures of ULVZs, so the scientists turned to simulations. They created theoretical models of Earth’s interior that include ULVZs, and they simulated seismic waves trembling through them to determine what those waves would look like to an observer on their virtual Earth. Running simulations under scores of conditions, they compared the results to what they’d observed under the Coral Sea to see how well each model matched.
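In outline, the procedure is a forward-modeling search: propose a candidate ULVZ structure, simulate the waveforms it would produce, and score the match against observation. A toy version of that loop, with a made-up forward model standing in for a real wave-equation solver (all names and numbers here are illustrative, not from the study):

```python
# Toy forward-modeling search: pick the candidate ULVZ parameters whose
# synthetic waveform best fits the "observed" one. simulate() is a
# stand-in; real studies solve the seismic wave equations numerically.
import numpy as np

t = np.linspace(0, 6, 200)
observed = np.sin(t)  # stand-in for a recorded waveform

def simulate(thickness_km, velocity_drop):
    # illustrative model: thickness delays the wave, slowness damps it
    return (1 - velocity_drop) * np.sin(t - thickness_km / 100)

candidates = [(th, dv) for th in range(0, 50, 10) for dv in (0.0, 0.1, 0.2)]
best = min(candidates, key=lambda p: np.sum((simulate(*p) - observed) ** 2))
```

Running many such comparisons across a grid of candidate structures, and keeping those with the lowest misfit, is what lets the team argue that layered models fit the Coral Sea data better than single-layer ones.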

[Related: You can now sense earthquakes on your smartphone]

“One of the most striking things was that Surya found evidence that the ULVZ was layered,” says Thorne.

The best match to their work corresponded to a scenario in which ULVZs don’t have single layers, but rather multiple ones. Pachhai says that, to their knowledge, this is the first study that’s shown evidence of this.

Their models also show that the layers aren’t uniform. There’s a lot of unevenness in their composition and in their structure. Those layers, the researchers think, must have formed early in Earth’s history. 

“They are still not well mixed after 4.5 billion years of mantle convection,” says Pachhai.

The researchers think this could be related to a cataclysm when Earth was quite young. Four-and-a-half billion years ago, a planetoid that some call Theia collided with Earth—the same impact that might have kicked up a patch of debris that later coalesced into the moon.

The enormous energy of the impact would have taken a giant chunk out of Earth and left behind an ocean of mixed molten rock, stuffed and spiced with all sorts of gases and crystals. As this ocean cooled and sorted itself out, becoming today’s mantle and crust, denser material might have dropped to the bottom without mixing.

That dense material, then, would form the basis of what are today ULVZs.

Of course, this is only one theory, and it’s limited to one spot on the globe. By gleaning more details from ULVZs, the researchers say, they could learn a lot more about what that ancient magma ocean was like.

“With all of these various unknown questions remaining, there is still a lot of room for basic discovery, and this is what keeps sucking me back in to study them,” says Thorne, “the potential to add to the fundamental knowledge about what our Earth is made of and how it works.”

NASA’s James Webb telescope is unfurling a super-thin shield to save it from the sun https://www.popsci.com/science/james-webb-space-telescope-sunshield-protection/ Thu, 30 Dec 2021 21:00:00 +0000 https://www.popsci.com/?p=418126
JWST's sunshield in a cleanroom.
The five-layer sunshield for NASA’s James Webb Space Telescope, sitting in a cleanroom in California. Northrop Grumman

The telescope must be kept at -370°F, which is cold enough to freeze nitrogen.

The post NASA’s James Webb telescope is unfurling a super-thin shield to save it from the sun appeared first on Popular Science.


The James Webb Space Telescope (JWST) is busy unwrapping itself, making a grand entrance to its new home about 930,000 miles (1.5 million kilometers) from Earth. JWST will observe faint distant objects in infrared light, and because heat also travels as infrared radiation, JWST needs to operate under very finicky temperature conditions.

“It can’t have other sources of heat,” says James Cooper, a NASA engineer. “It’ll just swamp the science you’re trying to get.”

The telescope’s mirror and instruments need to be kept below about -370°F (-223°C)—cold enough to freeze nitrogen. That’s no easy task when the sun’s rays and the spacecraft bus, which contains JWST’s central computer and communications, can heat the telescope and its instrumentation up to a tropical 230°F (110°C). 
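The shield therefore has to bridge a temperature gap of roughly 600 degrees Fahrenheit. A quick sanity check of the quoted conversions (a sketch, nothing mission-specific):

```python
# Convert the article's quoted temperature limits between scales.
def f_to_c(f):
    return (f - 32) * 5 / 9

cold_side_c = f_to_c(-370)  # about -223 C, roughly 50 kelvin
hot_side_c = f_to_c(230)    # 110 C
```

The cold-side figure, around 50 kelvin, sits below nitrogen's freezing point of 63 kelvin, which is the comparison the article draws.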

Fortunately, JWST has a cooling device of its very own: a sunshield, as its creators call it. Shaped like a kite, the size of a tennis court, and made of layers less than a millimeter thick, JWST’s sunshield is able to cool the telescope by several hundred degrees.

Getting that sunshield to work has been a long and tortuous task. Cooper has helped lead the sunshield’s development for more than a dozen years, and he’s seen many of the trials and tribulations the structure overcame in order to work.

Planning JWST took decades, and its designers knew they wanted a sunshield early on in the development process, even before Cooper came aboard. To build the sunshield, the designers looked at several plastic-like materials before settling on one called Kapton. 

Kapton isn’t a new material—it’s a mainstay in the world of cryogenics, since its thermal properties are good for keeping very cold things cool. Additionally, Kapton, says Cooper, is “tougher than most similar [materials] and it doesn’t tear as easily, and it’ll survive the space environment better.”

JWST isn’t Kapton’s first flight into space. It was used to insulate the engines on Apollo’s lunar modules; humans have literally strewn it across the moon. There, lunar modules had a tendency to blow it about when astronauts lifted off to begin their return journeys. Neil Armstrong recalled that, when Apollo 11 ascended from the lunar surface, he could see Kapton “scattering all around the area for great distances.”

More recently, New Horizons used Kapton to keep its temperature stable as it journeyed from Earth to fly by Pluto and Charon in the solar system’s freezing outer reaches. 

JWST’s sunshield is fashioned from five layers of Kapton, each the thickness of a human hair. The layers are separated by vacuum gaps to prevent heat from conducting through the whole shield.

Each layer is coated with aluminum, and the two layers nearest the sun are also coated with doped silicon. In addition to making the sunshield more reflective, these metals improve its electrical conductivity—to avoid static electricity building up inside the sheets.

Moreover, each layer’s edges had to line up, and each layer had to be pulled taut and flat. The spacing needed to be even to prevent heat from getting trapped in the middle of the shield.

What the James Webb Space Telescope should look like when it finally unfurls beyond the Earth’s atmosphere. Adriana Manrique Gutierrez/NASA’s Goddard Space Flight Center/CIL

When it came time to assemble the sunshield, the NASA team faced another hurdle. “The Kapton comes in 4-foot-wide sections, and we need a 70-by-45-foot sunshield, roughly,” says Cooper. “And so we had to seam it together.” 

They did this by essentially melting the edges together, and adding additional strips as “rip-stops” to help prevent tears. Even if one area tears, the rip-stops will isolate the problem and allow the rest of the sunshield to operate as planned—or so the designers hope.

Piecing together the sunshield was only half of the challenge. For the telescope to fit into the Ariane 5 rocket that launched from French Guiana on Christmas Day, the sunshield needed to be folded up and fastened with pins. It was a puzzle: the shield had to be secured when folded, operational when unfurled, all while avoiding damage to the delicate material.

“You end up with 25, 30 layers of membrane — and you have the [pin] holes all line up, so you can put a pin through them — and they have to line up every time you fold it,” says Cooper. “And developing the tools to do that was a massive challenge, because you also can’t have those holes line up with each other when you’re deployed, or the sun comes right through.”

[Related: After years of delays, the James Webb telescope is finally in space]

They needed to perfect the system for releasing the sunshield. Unfolding the shield relies on 107 different release devices. If even one of those devices fails, then the entire telescope is compromised. And the NASA engineers had to ensure the tethers holding it together didn’t accidentally snap and graze the shield. “So we had to spend a lot of effort on looking anywhere that a cable could possibly go,” says Cooper. And they had to test all of this on the ground—away from the microgravity where the telescope’s shield will actually unfurl.

But now, all of that is behind them. The launch has gone smoothly so far—in fact, it used much less propellant than expected, which NASA predicts will extend the observatory’s lifetime by years. On Tuesday, JWST began to unravel the sunshield. If all goes according to plan, it will continue to gingerly unfold its cooling armor until January 3.

This material uses a physics trick to keep roofs cool in summer and warm in winter https://www.popsci.com/science/vanadium-oxide-roof-coating/ Fri, 17 Dec 2021 11:00:00 +0000 https://www.popsci.com/?p=416198
Samples of an all-season roof coating, developed using a material called vanadium dioxide.
Samples of an all-season roof coating, developed using a material called vanadium dioxide. Thor Swift, Lawrence Berkeley National Laboratory

To build a better roof coating, try vanadium oxide.

The post This material uses a physics trick to keep roofs cool in summer and warm in winter appeared first on Popular Science.


Say you’re trying to cool a house in the summer or in a hot climate, where the sun is beating down overhead. You might cover your roof with a material that reflects or absorbs sunlight, keeping it from getting inside. But if you are in a place where summer turns to winter, those roof coatings that keep heat out have trouble also keeping heat in—driving up heating costs and contributing to the fact that building operations, in one way or another, are responsible for an estimated 28 percent of the world’s carbon emissions.

A solution, then, might be found in an adaptive smart material that does both: a substance that keeps heat out in the summer and holds heat in during the winter. Using a compound that can switch between two phases—and testing it atop a professor’s house—researchers in California have developed just such a roof coating. They published their work in the journal Science on December 17.

“The whole point of our work is that our roof works not only in hot weather, but also in cold weather,” says Junqiao Wu, a materials scientist at the University of California, Berkeley, and Lawrence Berkeley National Laboratory, and one of the researchers behind the project.

The key material in the roofing is vanadium dioxide, a compound that’s previously been tested as a window coating. Unlike most metals, vanadium dioxide is a poor conductor of heat, which makes it well suited as an insulator. 

The sun’s Earth-warming infrared rays can pass through vanadium dioxide when the material is at room temperature. But when the compound heats up to 153°F (67°C), its properties alter—it changes phase. It starts to block those infrared rays, effectively shadowing what lies underneath. In other words, it lets in the sun when it’s cool, and keeps the sun out when it’s warm.

Unless you’re building condos on Mercury, 153°F is a high temperature for a roof. But Wu and his colleagues had previously found that by adding a dash of tungsten—in materials science terms, “doping” the vanadium dioxide with tungsten—they could drop the compound’s phase-switch point down to a much more practical 77°F (25°C).
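As a rough illustration, that switching behavior can be sketched as a toy function. The 25°C threshold comes from the article; the mode labels are my simplification, not the authors’ actual radiative model:

```python
# Toy sketch (not the researchers' model) of how a tungsten-doped
# vanadium dioxide coating behaves on either side of its phase switch.

TRANSITION_C = 25.0  # phase-switch temperature of the doped compound, Celsius

def coating_mode(ambient_c: float) -> str:
    """Return a rough label for the coating's phase at a given temperature."""
    if ambient_c >= TRANSITION_C:
        return "metallic phase: sheds heat"     # hot weather -> cooling
    return "insulating phase: retains heat"     # cold weather -> warming

print(coating_mode(35.0))  # a hot summer day
print(coating_mode(5.0))   # a cold winter day
```

The point of the doping is simply to move that threshold from an unrealistic roof temperature to one roofs actually reach.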

The researchers believed they had pinpointed the right material. But they needed a place to test it. “You cannot just do it in the lab,” says Wu, “because in the lab, you don’t get sunlight, you don’t get wind, you don’t face the sky.”

Their lab’s roof was inaccessible—and, by then, the COVID-19 pandemic had shuttered much of the lab anyway. They couldn’t leave the roof coating sample in an open area like a playground or a parking lot; they needed somewhere where they could run a laptop, unsupervised, for days on end.

There was another option: Wu’s house. 

The more they thought about it, the more they liked the idea. The house, on a hill in the San Francisco Bay Area, wasn’t blocked by trees, allowing uninterrupted sunlight to touch it. It had optimal weather for testing, too; the surrounding temperature swings drastically between day and night.

“I have power, I have WiFi,” says Wu. “I have me living in the house. I can maintain the equipment for multiple days. So that’s how we did the experiment.”

[Related: To combat extreme heat, cover your roof in hungry, sweaty plants]

The researchers mounted blocks of vanadium dioxide atop a transparent layer of barium fluoride, a compound often used to study infrared rays, and a bottom layer of reflective silver, fashioning them into an adhesive-tape-like material. 

Wu and a then-postdoc, Kechao Tang, installed that tape on the rooftop of Wu’s house and set up a wireless measurement system on Wu’s balcony to monitor how it responded to changes in sunlight and air temperature. Comparing it to two different existing roof coating methods—one colored white and the other black—they found that, while the white coating performed better in direct sun, their material fared better in most other conditions. 

But the Bay Area hardly represents every climate in the world—its weather can change drastically within just a few miles—and the researchers only tested the material on one summer day.

So, with the help of Finnegan Reichertz, a local high school student who was remotely interning in Wu’s lab, the researchers used the data from the roof experiment to conduct computer simulations of how the coating would fare, year-round, in 15 different climates across North America—ranging from the desert of New Mexico to the harsh winters of Chicago to the rains of the Pacific Northwest.

The coating works especially well for climates where temperatures swing between hot summers and cold winters, according to the simulations. “For Florida, it’s going to not work very well,” says Wu. “For Hawaii, no. For Alaska, it’s too cold—also, no. But for all the middle, temperate climate zones, it’s going to work well.” The material saved more energy than existing roof coatings in 12 of the 15 climes they simulated.

Now, says Wu, they’re planning to patent the material in 2022 and find ways to efficiently produce it—or a similar material with the same properties—at scale. “We are looking into improving the performance, while making it scalable,” he says.

These smart coatings, if Wu is right, are good for more than just roofs. Wu imagines they could be used in space exploration to keep the insides of vehicles at a comfortable temperature, even in extreme environments outside Earth’s atmosphere. Closer to ground, the coating might be used in consumer electronics or in textiles. You might, for instance, one day wear a jacket or camp under a tent coated with phase-shifting vanadium dioxide—kept cool one minute, then warm the next. 

Physicists close in on the exceedingly short life of the Higgs boson https://www.popsci.com/science/research-determines-higgs-boson-lifespan/ Sun, 12 Dec 2021 18:00:00 +0000 https://www.popsci.com/?p=415177
The electromagnetic and hadron calorimeters that make up the center of the 49-foot-high, 69-foot-long CMS instrument.
The electromagnetic and hadron calorimeters that make up the center of the 49-foot-high, 69-foot-long CMS instrument. Enrico Sachetti

Just because scientists have discovered a particle doesn't mean they know all its properties.

The post Physicists close in on the exceedingly short life of the Higgs boson appeared first on Popular Science.


1.6 × 10⁻²² seconds: That, according to theory, is the lifetime of the Higgs boson, one of the most sought-after particles in the subatomic world. This time is so short that tens of trillions of Higgs bosons might live and die before the light from the device you’re using to read this reaches your eyes.
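A quick back-of-envelope check of that claim, assuming a reading distance of about half a meter (my assumption, not the article’s):

```python
# How many Higgs lifetimes fit in the time light takes to cross
# a typical reading distance? (Reading distance is an assumption.)

C = 299_792_458            # speed of light, m/s
HIGGS_LIFETIME = 1.6e-22   # theorized Higgs boson lifetime, seconds
distance = 0.5             # assumed screen-to-eye distance, meters

travel_time = distance / C                 # roughly 1.7 nanoseconds
generations = travel_time / HIGGS_LIFETIME # roughly 1e13
print(f"{generations:.1e}")                # on the order of tens of trillions
```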

Physicists are zeroing in on this lifetime in the real world. Poring over data from CERN’s Large Hadron Collider (LHC), scientists have narrowed down the Higgs’ lifespan to something around that 1.6 × 10⁻²² figure. The scientists were able to do so thanks to data from the CMS, one of the LHC’s detectors. Their work is a major advance–and it’s a sign that, nearly a decade after the Higgs boson’s discovery, there is still quite a bit to learn about the particle.

“This is a good achievement, a great milestone, but it’s just the first step,” says Caterina Vernieri, a particle physicist at the SLAC National Accelerator Laboratory in California, who has worked with the CMS group in the past but was not involved with this current research.

The Higgs boson is the reason that many particles have mass, to make a long story involving complex concepts called “quantum fields” and “symmetry breaking” short. It was first theorized in the 1960s—its namesake is Peter Higgs, a Nobel-winning British physicist—but it eluded scientists for decades. 

Smashing particles together at higher and higher energy was the key to its discovery, made possible by the LHC, where particles circle through a 17-mile-long ring on the French-Swiss border. The LHC went online in 2008. In 2012, physicists working there found the fingerprints of something that could have been the Higgs boson; by the end of 2013, they’d determined that their results weren’t just random statistical noise.

The search for the Higgs boson was over. But just because scientists have discovered a particle–or anything else–doesn’t mean that they understand all of its properties.

[Related: Inside the discovery that could change particle physics]

Theoretical physicists predicted many of the Higgs boson’s properties in the decades before its discovery. If those theoretical predictions matched well with what scientists ultimately found, that would be additional evidence that the Higgs boson fits into the theory behind modern particle physics–the so-called Standard Model. It would also help scientists learn more about how the universe ticks on the tiniest scales.

But scientists are trying to study things that don’t exactly reveal themselves to the world. Particles like the Higgs, on top of their puny size, might only show themselves for vanishingly short timespans before decaying into a charcuterie board of other particles.

“The lifetime of the Higgs boson is extremely small,” says Vernieri. “So when it’s produced in our experiment, we don’t really actually measure the Higgs boson or see a Higgs boson, but what we see is the debris…of the particles it decays into.”

So the CMS scientists pored over data from LHC experiments undertaken between 2015 and 2018. By looking at the particles that the Higgs boson decayed into, they could backtrack and find a range of masses that the Higgs boson could have. Thanks to a quantum property called the uncertainty principle, that range is inversely proportional to the particle’s lifetime–allowing the physicists to calculate the latter from the former.

According to their calculations, the Higgs boson’s lifetime lies somewhere between 1.2 × 10⁻²² seconds and 4.4 × 10⁻²² seconds. That’s the most precise estimate of the Higgs boson’s lifetime yet, aligning well with the 1.6 × 10⁻²² number that theorists predicted.
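The conversion behind that calculation can be sketched in a few lines: by the uncertainty principle, a particle’s lifetime is the reduced Planck constant divided by its decay width (the spread in its measured mass). The 4.1 MeV width below is the commonly quoted Standard Model prediction for the Higgs, not a figure from this particular paper:

```python
# Uncertainty-principle relation between decay width and lifetime:
# tau = hbar / Gamma. A wider mass range means a shorter-lived particle.

HBAR = 6.582119569e-22  # reduced Planck constant, in MeV * seconds

def lifetime_from_width(gamma_mev: float) -> float:
    """Lifetime in seconds for a decay width given in MeV."""
    return HBAR / gamma_mev

tau = lifetime_from_width(4.1)  # Standard Model-predicted Higgs width
print(f"{tau:.2e} s")           # roughly 1.6e-22 seconds
```

Plugging in the predicted width reproduces the theoretical 1.6 × 10⁻²²-second lifetime, which is why measuring the width pins down the lifetime.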

And, yet, it’s not precise enough for some physics. 

There’s a possibility, for instance, that there’s a strange, currently unknown exotic particle that the Higgs boson decays into, which the Standard Model doesn’t account for. That would influence the Higgs boson’s lifetime–but so subtly that even this calculation couldn’t detect it.

“This would be a tiny, tiny change in the lifetime value,” says Vernieri. “So we need, really, to measure the lifetime with very good precision.”

Fortunately, particle physicists think they can get better in that regard. “The precision of the measurement is expected to improve in the coming years with data from the next LHC runs and new analysis ideas,” says Pascal Vanlaer, a physicist at CMS and one of the physicists behind the project, in a statement.

The first of those next runs is, according to plan, not too far in the future. Since 2018, the LHC has been shut down for a lengthy period called, fittingly, Long Shutdown 2. During that time, the collider and CERN’s surrounding facilities have undergone a raft of upgrades. Following a disruption to that timetable caused by COVID-19, the collider is currently set to turn on again in February 2022.

And there are many other things about the Higgs boson that we still don’t know for sure–from how it’s produced to how it reacts to other particles to how it interacts with itself. To determine those features, not even the LHC may be sensitive enough. 

“We produce a Higgs boson every billion collisions at LHC,” says Vernieri, and often, trying to see Higgs bosons means having to look through a whole sea of other particles. “It’s a very challenging environment to study, very precisely, particle production.”

The key will be a cleaner environment to study the Higgs boson with higher precision, Vernieri says. Perhaps, then, that’s a job for one of the LHC’s proposed successors.

Physicists just gifted us ‘quantum spin liquid,’ a weird new state of matter https://www.popsci.com/science/physicists-create-new-state-of-matter/ Thu, 02 Dec 2021 21:00:00 +0000 https://www.popsci.com/?p=413767
silicon droplets
Droplets of silicon, used to illustrate movements similar to those of quantum particles. Aleks Labuda

For decades, quantum spin liquid had existed only as a theory.

The post Physicists just gifted us ‘quantum spin liquid,’ a weird new state of matter appeared first on Popular Science.


A solid is made of atoms that are, more or less, locked in an ordered structure. A liquid, on the other hand, is made of atoms that can flow freely around and past each other. But imagine atoms that stay unfrozen, like those in a liquid, yet whose magnetic spins are in a constantly shifting jumble.

What you have then is a never-before-seen state of matter, a state of quantum weirdness called a quantum spin liquid. Now, by carefully manipulating atoms, researchers have managed to create this state in the laboratory. The researchers published their work in the journal Science on December 2.

Scientists had discussed theories about spin liquids for years. “But we really got very interested in this when these theorists, here at Harvard, finally found a way to actually generate the quantum spin liquids,” says Giulia Semeghini, a physicist and postdoc at Harvard University, who coordinated the research project and was one of the paper authors.

Under extreme conditions not typically found on Earth, the rules of quantum mechanics can twist atoms into all sorts of exotica. Take, for instance, degenerate matter, found in the hearts of dead stars like white dwarfs or neutron stars, where extreme pressures cook atoms into slurries of subatomic particles. Or, for another, the Bose-Einstein condensate, in which multiple atoms at very low temperatures sort of merge together to act as one (its creation won the 2001 Nobel Prize in Physics). 

The quantum spin liquid is the latest entry in that bestiary of cryptid states. Its atoms don’t freeze into any sort of ordered state, and they’re constantly in flux.

[Related: IBM’s latest quantum chip breaks the elusive 100-qubit barrier]

The “spin” in the name refers to a property inherent to each particle–either up or down–which gives rise to magnetic fields. In a normal magnet, all the spins point up or down in a careful order. In a quantum spin liquid, on the other hand, a third spin enters the picture, one that cannot simultaneously satisfy its couplings to both of its neighbors. This frustration prevents coherent magnetic fields from forming.

This, combined with the esoteric rules of quantum mechanics, means that the spins are constantly in different positions at once. If you look at just a few particles, it’s hard to tell whether you have a quantum liquid or, if you do, what properties it has.
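A minimal sketch of that frustration, separate from the Harvard experiment: place three spins on a triangle with couplings that reward anti-alignment, and no arrangement can satisfy all three pairs at once, so many configurations tie for the lowest energy:

```python
from itertools import product

def energy(s1: int, s2: int, s3: int, j: float = 1.0) -> float:
    """Antiferromagnetic Ising energy on a triangle (lower is better;
    each anti-aligned pair contributes -j, each aligned pair +j)."""
    return j * (s1 * s2 + s2 * s3 + s3 * s1)

# Enumerate all eight up/down (+1/-1) assignments of the three spins.
configs = list(product((+1, -1), repeat=3))
energies = {c: energy(*c) for c in configs}
best = min(energies.values())
ground_states = sum(1 for e in energies.values() if e == best)

# A fully satisfied triangle would score -3, but the best achievable
# is -1, and six of the eight configurations tie for it: there is no
# single ordered ground state, the seed of spin-liquid behavior.
print(best, ground_states)  # -1.0 6
```

With no unique lowest-energy arrangement, the spins have no ordered pattern to settle into, which is why frustration suppresses magnetic order.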

Quantum spin liquids were first theorized in 1973 by a physicist named Philip W. Anderson, and physicists have been trying to get their hands on this matter ever since. “Many different experiments…tried to create and observe this type of state. But this has actually turned out to be very challenging,” says Mikhail Lukin, a physicist at Harvard University and one of the paper authors.

The researchers at Harvard had a new tool in their arsenal: what they call a “programmable quantum simulator.” Essentially, it’s a machine that allows them to play with individual atoms. Using specifically focused laser beams, researchers can shuffle atoms around a two-dimensional grid like magnets on a whiteboard.

“We can control the position of each atom individually,” says Semeghini. “We can position them individually in any shape or form that we want.”

Moreover, to actually determine if they had successfully created a quantum spin liquid, the researchers took advantage of something called quantum entanglement. They energized the atoms, which began to interact: changes in the property of one atom would reflect in another. By looking at those connections, the scientists found the confirmation they needed.

All this might seem like creating abstract matter for abstract matter’s sake–but that’s part of the appeal. “We can kind of touch it, poke, play with it, even in some ways talk to this state, manipulate it, and make it do what we want,” says Lukin. “That’s what’s really exciting.”

But scientists do think quantum spin liquids have valuable applications, too. Just venture into the realms of quantum computers.

Quantum computers have the potential to far outstrip their traditional counterparts. Compared with computers today, quantum computers could simulate systems such as molecules far more faithfully and complete certain calculations far more quickly.

But what scientists use as the building blocks of quantum computers can leave something to be desired. Those blocks, called qubits, are often things like individual particles or atomic nuclei–which are sensitive to the slightest bit of noise or temperature fluctuations. Quantum spin liquids, with information stored in how they’re arranged, could be less finicky qubits.

If researchers were able to demonstrate that a quantum spin liquid could be used as a qubit, says Semeghini, it could lead to an entirely new sort of quantum computer.
