MIT in the News

Engineers produce a fisheye lens that’s completely flat
Posted on Friday September 18, 2020

The single piece of glass produces crisp panoramic images.

To capture panoramic views in a single shot, photographers typically use fisheye lenses — ultra-wide-angle lenses made from multiple pieces of curved glass, which distort incoming light to produce wide, bubble-like images. Their spherical, multipiece design makes fisheye lenses inherently bulky and often costly to produce.

Now engineers at MIT and the University of Massachusetts at Lowell have designed a wide-angle lens that is completely flat. It is the first flat fisheye lens to produce crisp, 180-degree panoramic images. The design is a type of “metalens,” a wafer-thin material patterned with microscopic features that work together to manipulate light in a specific way.

In this case, the new fisheye lens consists of a single flat, millimeter-thin piece of glass covered on one side with tiny structures that precisely scatter incoming light to produce panoramic images, just as a conventional curved, multielement fisheye lens assembly would. The lens works in the infrared part of the spectrum, but the researchers say it could be modified to capture images using visible light as well.

The new design could potentially be adapted for a range of applications, with thin, ultra-wide-angle lenses built directly into smartphones and laptops, rather than physically attached as bulky add-ons. The low-profile lenses might also be integrated into medical imaging devices such as endoscopes, as well as in virtual reality glasses, wearable electronics, and other computer vision devices.

“This design comes as somewhat of a surprise, because some have thought it would be impossible to make a metalens with an ultra-wide-field view,” says Juejun Hu, associate professor in MIT’s Department of Materials Science and Engineering. “The fact that this can actually realize fisheye images is completely outside expectation. This isn’t just light-bending — it’s mind-bending.”

Hu and his colleagues have published their results today in the journal Nano Letters. Hu’s MIT coauthors are Mikhail Shalaginov, Fan Yang, Peter Su, Dominika Lyzwa, Anuradha Agarwal, and Tian Gu, along with Sensong An and Hualiang Zhang of UMass Lowell.

Design on the back side

Metalenses, while still largely at an experimental stage, have the potential to significantly reshape the field of optics. Previously, scientists have designed metalenses that produce high-resolution and relatively wide-angle images of up to 60 degrees. To expand the field of view further would traditionally require additional optical components to correct for aberrations, or blurriness — a workaround that would add bulk to a metalens design.

Hu and his colleagues instead came up with a simple design that requires no additional components and keeps the element count to a minimum. Their new metalens is a single transparent piece made from calcium fluoride with a thin film of lead telluride deposited on one side. The team then used lithographic techniques to carve a pattern of optical structures into the film.

Each structure, or “meta-atom,” as the team refers to them, is shaped into one of several nanoscale geometries, such as a rectangular or a bone-shaped configuration, that refracts light in a specific way. For instance, light may take longer to scatter off, or propagate from, one shape versus another — a phenomenon known as phase delay.

In conventional fisheye lenses, the curvature of the glass naturally creates a distribution of phase delays that ultimately produces a panoramic image. The team determined the corresponding pattern of meta-atoms and carved this pattern into the back side of the flat glass.
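The article does not spell out the underlying math, but the general metalens recipe can be sketched with the textbook phase profile of a simple focusing metalens (not the team's fisheye design). All values below, including the wavelength, focal length, and an eight-shape meta-atom library, are made-up illustration parameters.

```python
import numpy as np

# Illustrative sketch only: the textbook hyperbolic phase profile for a
# simple focusing metalens, NOT the fisheye design described in the article.
wavelength = 5.2e-6   # hypothetical mid-infrared wavelength, meters
focal = 2.0e-3        # hypothetical focal length, meters

def target_phase(r):
    """Phase delay (radians) needed at radius r so all rays focus at `focal`."""
    return (2 * np.pi / wavelength) * (focal - np.sqrt(r**2 + focal**2))

# A hypothetical library of meta-atom shapes, each imparting a fixed phase.
atom_phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)

def choose_atom(r):
    """Pick the meta-atom whose phase best matches the target, modulo 2*pi."""
    needed = target_phase(r)
    # circular distance between each library phase and the needed phase
    diff = np.angle(np.exp(1j * (atom_phases - needed)))
    return int(np.argmin(np.abs(diff)))

radii = np.linspace(0, 0.5e-3, 5)
print([choose_atom(r) for r in radii])
```

Laying such meta-atoms across the surface approximates, in discrete steps, the continuous phase-delay distribution that a curved lens produces naturally.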

“We’ve designed the back side structures in such a way that each part can produce a perfect focus,” Hu says.

On the front side, the team placed an optical aperture, or opening for light.

“When light comes in through this aperture, it will refract at the first surface of the glass, and then will get angularly dispersed,” Shalaginov explains. “The light will then hit different parts of the backside, from different and yet continuous angles. As long as you design the back side properly, you can be sure to achieve high-quality imaging across the entire panoramic view.”

Across the panorama

In one demonstration, the new lens is tuned to operate in the mid-infrared region of the spectrum. The team used an imaging setup equipped with the metalens to snap pictures of a striped target. They then compared the quality of pictures taken at various angles across the scene, and found the new lens produced images of the stripes that were crisp and clear, even at the edges of the camera’s view, spanning nearly 180 degrees.

“It shows we can achieve perfect imaging performance across almost the whole 180-degree view, using our methods,” Gu says.

In another study, the team designed the metalens to operate at a near-infrared wavelength using amorphous silicon nanoposts as the meta-atoms. They plugged the metalens into a simulation used to test imaging instruments. Next, they fed the simulation a scene of Paris, composed of black and white images stitched together to make a panoramic view. They then ran the simulation to see what kind of image the new lens would produce.

“The key question was, does the lens cover the entire field of view? And we see that it captures everything across the panorama,” Gu says. “You can see buildings and people, and the resolution is very good, regardless of whether you’re looking at the center or the edges.”

The team says the new lens can be adapted to other wavelengths of light. To make a similar flat fisheye lens for visible light, for instance, Hu says the optical features may have to be made smaller than they are now, to better refract that particular range of wavelengths. The lens material would also have to change. But the general architecture that the team has designed would remain the same.

The researchers are exploring applications for their new lens, not just as compact fisheye cameras, but also as panoramic projectors, as well as depth sensors built directly into smartphones, laptops, and wearable devices.

“Currently, all 3D sensors have a limited field of view, which is why when you put your face away from your smartphone, it won’t recognize you,” Gu says. “What we have here is a new 3D sensor that enables panoramic depth profiling, which could be useful for consumer electronic devices.”

This research was funded in part by DARPA under the EXTREME Program.

Helping robots avoid collisions
Posted on Thursday September 17, 2020

Realtime Robotics has created a controller that helps robots safely move around on the fly.

George Konidaris still remembers his disheartening introduction to robotics.

“When you’re a young student and you want to program a robot, the first thing that hits you is this immense disappointment at how much you can’t do with that robot,” he says.

Most new roboticists want to program their robots to solve interesting, complex tasks — but it turns out that just moving them through space without colliding with objects is more difficult than it sounds.

Fortunately, Konidaris is hopeful that future roboticists will have a more exciting start in the field. That’s because roughly four years ago, he co-founded Realtime Robotics, a startup that’s solving the “motion planning problem” for robots.

The company has invented a solution that gives robots the ability to quickly adjust their path to avoid objects as they move to a target. The Realtime controller is a box that can be connected to a variety of robots and deployed in dynamic environments.

“Our box simply runs the robot according to the customer’s program,” explains Konidaris, who currently serves as Realtime’s chief roboticist. “It takes care of the movement, the speed of the robot, detecting obstacles, collision detection. All [our customers] need to say is, ‘I want this robot to move here.’”

Realtime’s key enabling technology is a unique circuit design that, when combined with proprietary software, has the effect of a plug-in motor cortex for robots. In addition to helping to fulfill the expectations of starry-eyed roboticists, the technology also represents a fundamental advance toward robots that can work effectively in changing environments.

Helping robots get around

Konidaris was not the first person to get discouraged about the motion planning problem in robotics. Researchers in the field have been working on it for 40 years. During a four-year postdoc at MIT, Konidaris worked with School of Engineering Professor in Teaching Excellence Tomas Lozano-Perez, a pioneer in the field who was publishing papers on motion planning before Konidaris was born.

Humans take collision avoidance for granted. Konidaris points out that the simple act of grabbing a beer from the fridge actually requires a series of tasks such as opening the fridge, positioning your body to reach in, avoiding other objects in the fridge, and deciding where to grab the beer can.

“You actually need to compute more than one plan,” Konidaris says. “You might need to compute hundreds of plans to get the action you want. … It’s weird how the simplest things humans do hundreds of times a day actually require immense computation.”

In robotics, the motion planning problem revolves around the computational power required to carry out frequent tests as robots move through space. At each stage of a planned path, the tests help determine if various tiny movements will make the robot collide with objects around it. Such tests have inspired researchers to think up ever more complicated algorithms in recent years, but Konidaris believes that’s the wrong approach.
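As a toy illustration of those per-step tests (not Realtime's hardware approach), the sketch below checks a point robot in 2D against circular obstacles at made-up positions, sampling many intermediate configurations along a straight-line motion.

```python
import numpy as np

# Hypothetical workspace: two circular obstacles, given as (center, radius).
obstacles = [((2.0, 2.0), 0.5), ((4.0, 1.0), 0.8)]

def in_collision(p):
    """True if point p lies inside any obstacle."""
    return any(np.hypot(p[0] - cx, p[1] - cy) <= r
               for (cx, cy), r in obstacles)

def motion_is_safe(start, goal, steps=100):
    """Test many intermediate configurations along the straight-line motion."""
    for t in np.linspace(0.0, 1.0, steps):
        p = (1 - t) * np.asarray(start) + t * np.asarray(goal)
        if in_collision(p):
            return False
    return True

# The straight line from (0, 0) to (5, 5) passes through the obstacle at (2, 2).
print(motion_is_safe((0, 0), (5, 5)))
print(motion_is_safe((0, 0), (0, 5)))
```

A real planner repeats tests like these for hundreds of candidate motions per plan, which is why performing them in parallel in custom circuitry pays off.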

“People were trying to make algorithms smarter and more complex, but usually that’s a sign that you’re going down the wrong path,” Konidaris says. “It’s actually not that common that super technically sophisticated techniques solve problems like that.”

Konidaris left MIT in 2014 to join the faculty at Duke University, but he continued to collaborate with researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Duke is also where Konidaris met Realtime co-founders Sean Murray, Dan Sorin, and Will Floyd-Jones. In 2015, the co-founders collaborated to make a new type of computer chip with circuits specifically designed to perform the frequent collision tests required to move a robot safely through space. The custom circuits could perform operations in parallel to test short motions for collisions more efficiently.

“When I left MIT for Duke, one thing bugging me was this motion planning thing should really be solved by now,” Konidaris says. “It really did come directly out of a lot of experiences at MIT. I wouldn’t have been able to write a single paper on motion planning before I got to MIT.”

The researchers founded Realtime in 2016 and quickly brought on robotics industry veteran Peter Howard MBA ’87, who currently serves as Realtime’s CEO and is also considered a co-founder.

“I wanted to start the company in Boston because I knew MIT, and a lot of robotics work was happening there,” says Konidaris, who moved to Brown University in 2016. “Boston is a hub for robotics. There’s a ton of local talent, and I think a lot of that is because MIT is here — PhDs from MIT became faculty at local schools, and those people started robotics programs. That network effect is very strong.”

Removing robot restraints

Today the majority of Realtime’s customers are in the automotive, manufacturing, and logistics industries. The robots using Realtime’s solution are doing everything from spot welding to making inspections to picking items from bins.

After customers purchase Realtime’s control box, they load in a file describing the configuration of the robot’s work cell, information about the robot such as its end-of-arm tool, and the task the robot is completing. Realtime can also help optimally place the robot and its accompanying sensors around a work area. Konidaris says Realtime can shorten the process of deploying robots from an average of 15 weeks to one week.

Once the robot is up and running, Realtime’s box controls its movement, giving it instant collision-avoidance capabilities.

“You can use it for any robot,” Konidaris says. “You tell it where it needs to go and we’ll handle the rest.”

Realtime is part of MIT’s Industrial Liaison Program (ILP), which helps companies make connections with larger industrial partners, and it recently joined ILP’s STEX25 startup accelerator.

With a few large rollouts planned for the coming months, the Realtime team’s excitement is driven by the belief that solving a problem as fundamental as motion planning unlocks a slew of new applications for the robotics field.

“What I find most exciting about Realtime is that we are a true technology company,” says Konidaris. “The vast majority of startups are aimed at finding a new application for existing technology; often, there’s no real pushing of the technical boundaries with a new app or website, or even a new robotics ‘vertical.’ But we really did invent something new, and that edge and that energy is what drives us. All of that feels very MIT to me.”

Rapid test for Covid-19 shows improved sensitivity
Posted on Thursday September 17, 2020

A CRISPR-based test developed at MIT and the Broad Institute can detect nearly as many cases as the standard Covid-19 diagnostic.

Since the start of the Covid-19 pandemic, researchers at MIT and the Broad Institute of MIT and Harvard, along with their collaborators at the University of Washington, Fred Hutchinson Cancer Research Center, Brigham and Women's Hospital, and the Ragon Institute, have been working on a CRISPR-based diagnostic for Covid-19 that can produce results in 30 minutes to an hour, with similar accuracy as the standard PCR diagnostics now used.

The new test, known as STOPCovid, is still in the research stage but, in principle, could be made cheaply enough that people could test themselves every day. In a study appearing today in the New England Journal of Medicine, the researchers showed that on a set of patient samples, their test detected 93 percent of the positive cases as determined by PCR tests for Covid-19.

“We need rapid testing to become part of the fabric of this situation so that people can test themselves every day, which will slow down the outbreak,” says Omar Abudayyeh, an MIT McGovern Fellow working on the diagnostic.

Abudayyeh is one of the senior authors of the study, along with Jonathan Gootenberg, a McGovern Fellow, and Feng Zhang, a core member of the Broad Institute, investigator at the MIT McGovern Institute and Howard Hughes Medical Institute, and the James and Patricia Poitras ’63 Professor of Neuroscience at MIT. The first authors of the paper are MIT biological engineering graduate students Julia Joung and Alim Ladha in the Zhang lab.

A streamlined test

Zhang’s laboratory began collaborating with the Abudayyeh and Gootenberg laboratory to work on the Covid-19 diagnostic soon after the SARS-CoV-2 outbreak began. They focused on making an assay, called STOPCovid, that was simple to carry out and did not require any specialized laboratory equipment. Such a test, they hoped, would be amenable to future use in point-of-care settings, such as doctors’ offices, pharmacies, nursing homes, and schools. 

“We developed STOPCovid so that everything could be done in a single step,” Joung says. “A single step means the test can be potentially performed by nonexperts outside of laboratory settings.”

In the new version of STOPCovid reported today, the researchers incorporated a process to concentrate the viral genetic material in a patient sample by adding magnetic beads that attract RNA, eliminating the need for expensive purification kits that are time-intensive and can be in short supply due to high demand. This concentration step boosted the test’s sensitivity so that it now approaches that of PCR.

“Once we got the viral genomes onto the beads, we found that that could get us to very high levels of sensitivity,” Gootenberg says.

Working with collaborators Keith Jerome at Fred Hutchinson Cancer Research Center and Alex Greninger at the University of Washington, the researchers tested STOPCovid on 402 patient samples — 202 positive and 200 negative — and found that the new test detected 93 percent of the positive cases as determined by the standard CDC PCR test.
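As a quick sensitivity calculation: the article gives only the sample totals and the 93 percent rate, so the detected count below is a hypothetical figure chosen to be consistent with those numbers.

```python
# Hypothetical breakdown consistent with the reported figures:
# 202 PCR-positive samples, of which the new test flagged about 188.
pcr_positive = 202
stopcovid_detected = 188  # assumed; the article states only the 93 percent rate

sensitivity = stopcovid_detected / pcr_positive
print(f"sensitivity = {sensitivity:.0%}")
```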

“Seeing STOPCovid working on actual patient samples was really gratifying,” Ladha says.

They also showed, working with Ann Woolley and Deb Hung at Brigham and Women’s Hospital, that the STOPCovid test works on samples taken using the less invasive anterior nares swab. They are now testing it with saliva samples, which could make at-home tests even easier to perform. The researchers are continuing to develop the test with the hope of delivering it to end users to help fight the Covid-19 pandemic.

“The goal is to make this test easy to use and sensitive, so that we can tell whether or not someone is carrying the virus as early as possible,” Zhang says.

The research was funded by the National Institutes of Health, the Swiss National Science Foundation, the Patrick J. McGovern Foundation, the McGovern Institute for Brain Research, the Massachusetts Consortium on Pathogen Readiness Evergrande Covid-19 Response Fund, the Mathers Foundation, the Howard Hughes Medical Institute, the Open Philanthropy Project, J. and P. Poitras, and R. Metcalfe.

Making tuberculosis more susceptible to antibiotics
Posted on Wednesday September 16, 2020

Shortening carbohydrates in the bacterial cell wall makes them more vulnerable to certain drugs.

Every living cell is coated with a distinctive array of carbohydrates, which serves as a unique cellular “ID” and helps to manage the cell’s interactions with other cells.

MIT chemists have now discovered that changing the length of these carbohydrates can dramatically affect their function. In a study of mycobacteria, the type of bacteria that cause tuberculosis and other diseases, they found that shortening the length of a carbohydrate called galactan impairs some cell functions and makes the cells much more susceptible to certain antibiotics.

The findings suggest that drugs that interfere with galactan synthesis could be used along with existing antibiotics to create more effective treatments, says Laura Kiessling, the Novartis Professor of Chemistry at MIT and the senior author of the study.

“There are a lot of TB strains that are resistant to the current set of antibiotics,” Kiessling says. “TB kills over a million people every year and is the number one infectious disease killer.”

Former MIT graduate student Alexander Justen is the lead author of the paper, which appears today in Science Advances.

The long and short of it

Galactan, a polysaccharide, is a component of the cell wall of mycobacteria, but little is known about its function. Until now, its only known role was to form links between molecules called peptidoglycans, which make up most of the bacterial cell wall, and other sugars and lipids. However, the version of galactan found in mycobacteria is much longer than it needs to be to perform this linker function.

“What was so strange is that the galactan is about 30 sugar molecules long, but the branch points for the other sugars that it links to are at eight, 10, and 12. So, why is the cell expending so much energy to make galactan longer than 12 units?” Kiessling says.

That question led Kiessling and her research group to investigate what might happen if galactan were shorter. A team led by Justen genetically engineered a type of mycobacteria called Mycobacterium smegmatis (which is related to Mycobacterium tuberculosis but is not harmful to humans) so that their galactan chains would contain only 12 sugar molecules.

As a result of this shortening, cells lost their usual shape and developed “blebs,” or bulges from their cell membranes. Shortening galactan also shrank the size of a compartment called the periplasm, a space that is found between a bacterial cell’s inner and outer cell membranes. This compartment is involved in absorbing nutrients from the cell’s environment.

Truncating galactan also made the cells more susceptible to certain antibiotics — specifically, antibiotics that are hydrophobic. Mycobacteria cell walls are relatively impermeable to hydrophobic antibiotics, but the shortened galactan molecules make the cells more permeable, so these drugs can get inside more easily.

“This suggests that drugs that would lead to these truncated chains could be valuable in combination with hydrophobic antibiotics,” Kiessling says. “I think it validates this part of the cell as a good target.”

Her lab is currently working on developing drugs that could block galactan synthesis, which is not targeted by any existing TB drugs. Patients with TB are usually given drug combinations that have to be taken for six months, and some strains have developed resistance to the existing drugs.

Unexpected roles

Kiessling’s lab is also studying the question of why it is useful for bacteria to alter the length of their carbohydrate molecules. One hypothesis is that it helps them to shield themselves from the immune system, she says. Some studies have shown that a dense coating of longer carbohydrate chains could help to achieve a stealth effect by preventing host immune cells from interacting with proteins on the bacterial cell surface.

If that hypothesis is confirmed, then drugs that interfere with the length of galactan or other carbohydrates might also help the immune system fight off bacterial infection, Kiessling says. This could be useful for treating not only tuberculosis but also other diseases caused by mycobacteria, such as chronic obstructive pulmonary disease (COPD) and leprosy. Other strains of mycobacteria (known as “flesh-eating bacteria”) cause a potentially deadly infection called necrotizing fasciitis. All of these mycobacteria have galactan in their cell walls, and there are no good vaccines against any of them.

Although the research may end up helping scientists to develop better drugs, Kiessling first became interested in this topic as a basic science question.

“The reason I like this paper is because while it does have implications for treating tuberculosis, it also shows a fundamentally new role for carbohydrates, which I love. People are finding that they can have unexpected roles, and this is another unexpected result,” she says.

The research was funded by the National Institute of Allergy and Infectious Disease and the National Institutes of Health Common Fund.

Astronomers may have found a signature of life on Venus
Posted on Monday September 14, 2020

Evidence indicates phosphine, a gas associated with living organisms, is present in the habitable region of Venus’ atmosphere.

The search for life beyond Earth has largely revolved around our rocky red neighbor. NASA has launched multiple rovers over the years, with a new one currently en route, to sift through Mars’ dusty surface for signs of water and other hints of habitability.

Now, in a surprising twist, scientists at MIT, Cardiff University, and elsewhere have observed what may be signs of life in the clouds of our other, even closer planetary neighbor, Venus. While they have not found direct evidence of living organisms there, if their observation is indeed associated with life, it must be some sort of “aerial” life-form in Venus’ clouds — the only habitable portion of what is otherwise a scorched and inhospitable world. Their discovery and analysis is published today in the journal Nature Astronomy.

The astronomers, led by Jane Greaves of Cardiff University, detected in Venus’ atmosphere a spectral fingerprint, or light-based signature, of phosphine. MIT scientists have previously shown that if this stinky, poisonous gas were ever detected on a rocky, terrestrial planet, it could only be produced by a living organism there. The researchers made the detection using the James Clerk Maxwell Telescope (JCMT) in Hawaii, and the Atacama Large Millimeter Array (ALMA) observatory in Chile.

The MIT team followed up the new observation with an exhaustive analysis to see whether anything other than life could have produced phosphine in Venus’ harsh, sulfuric environment. Based on the many scenarios they considered, the team concludes that there is no explanation for the phosphine detected in Venus’ clouds, other than the presence of life.

“It’s very hard to prove a negative,” says Clara Sousa-Silva, research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Now, astronomers will think of all the ways to justify phosphine without life, and I welcome that. Please do, because we are at the end of our possibilities to show abiotic processes that can make phosphine.”

“This means either this is life, or it’s some sort of physical or chemical process that we do not expect to happen on rocky planets,” adds co-author and EAPS Research Scientist Janusz Petkowski.

The other MIT co-authors include William Bains, Sukrit Ranjan, Zhuchang Zhan, and Sara Seager, who is the Class of 1941 Professor of Planetary Science in EAPS with joint appointments in the departments of Physics and of Aeronautics and Astronautics, along with collaborators at Cardiff University, the University of Manchester, Cambridge University, MRC Laboratory of Molecular Biology, Kyoto Sangyo University, Imperial College, the Royal Observatory Greenwich, the Open University, and the East Asian Observatory.

A search for exotic things

Venus is often referred to as Earth’s twin, as the neighboring planets are similar in their size, mass, and rocky composition. They also have significant atmospheres, although that is where their similarities end. Where Earth is a habitable world of temperate oceans and lakes, Venus’ surface is a boiling hot landscape, with temperatures reaching 900 degrees Fahrenheit and stifling air that is drier than the driest places on Earth.

Much of the planet’s atmosphere is also quite inhospitable, suffused with thick clouds of sulfuric acid, and cloud droplets that are billions of times more acidic than the most acidic environment on Earth. The atmosphere also lacks nutrients that exist in abundance on a planet surface.

“Venus is a very challenging environment for life of any kind,” Seager says.

There is, however, a narrow, temperate band within Venus’ atmosphere, between 48 and 60 kilometers above the surface, where temperatures range from 30 to 200 degrees Fahrenheit. Scientists have speculated, with much controversy, that if life exists on Venus, this layer of the atmosphere, or cloud deck, is likely the only place where it would survive. And it just so happens that this cloud deck is where the team observed signals of phosphine.

“This phosphine signal is perfectly positioned where others have conjectured the area could be habitable,” Petkowski says.

The detection was first made by Greaves and her team, who used the JCMT to zero in on Venus’ atmosphere for patterns of light that could indicate the presence of unexpected molecules and possible signatures of life. When she picked up a pattern that indicated the presence of phosphine, she contacted Sousa-Silva, who has spent the bulk of her career characterizing the stinky, toxic molecule.

Sousa-Silva initially assumed that astronomers could search for phosphine as a biosignature on much farther-flung planets. “I was thinking really far, many parsecs away, and really not thinking literally the nearest planet to us.”

The team followed up Greaves’ initial observation using the more sensitive ALMA observatory, with the help of Anita Richards, of the ALMA Regional Center at the University of Manchester. Those observations confirmed that what Greaves observed was indeed a pattern of light that matched what phosphine gas would emit within Venus’ clouds.

The researchers then used a model of the Venusian atmosphere, developed by Hideo Sagawa of Kyoto Sangyo University, to interpret the data. They found that phosphine on Venus is a minor gas, existing at a concentration of about 20 out of every billion molecules in the atmosphere. Although that concentration is low, the researchers point out that phosphine produced by life on Earth can be found at even lower concentrations in the atmosphere.

The MIT team, led by Bains and Petkowski, used computer models to explore all the possible chemical and physical pathways not associated with life that could produce phosphine in Venus’ harsh environment. Bains considered various scenarios that could produce phosphine, such as sunlight, surface minerals, volcanic activity, a meteor strike, and lightning. Ranjan along with Paul Rimmer of Cambridge University then modeled how phosphine produced through these mechanisms could accumulate in the Venusian clouds. In every scenario they considered, the phosphine produced would only amount to a tiny fraction of what the new observations suggest is present in Venus’ clouds.

“We really went through all possible pathways that could produce phosphine on a rocky planet,” Petkowski says. “If this is not life, then our understanding of rocky planets is severely lacking.”

A life in the clouds

If there is indeed life in Venus’ clouds, the researchers believe it to be an aerial form, existing only in Venus’ temperate cloud deck, far above the boiling, volcanic surface.

“A long time ago, Venus is thought to have had oceans, and was probably habitable like Earth,” Sousa-Silva says. “As Venus became less hospitable, life would have had to adapt, and they could now be in this narrow envelope of the atmosphere where they can still survive. This could show that even a planet at the edge of the habitable zone could have an atmosphere with a local aerial habitable envelope.”

In a separate line of research, Seager and Petkowski have explored the possibility that the lower layers of Venus’ atmosphere, just below the cloud deck, could be crucial for the survival of a hypothetical Venusian biosphere.

“You can, in principle, have a life cycle that keeps life in the clouds perpetually,” says Petkowski, who envisions any aerial Venusian life to be fundamentally different from life on Earth. “The liquid medium on Venus is not water, as it is on Earth.”

Sousa-Silva is now leading an effort with Jason Dittman at MIT to further confirm the phosphine detection with other telescopes. They are also hoping to map the presence of the molecule across Venus’ atmosphere, to see if there are daily or seasonal variations in the signal that would suggest activity associated with life.

“Technically, biomolecules have been found in Venus’ atmosphere before, but these molecules are also associated with a thousand things other than life,” Sousa-Silva says. “The reason phosphine is special is, without life it is very difficult to make phosphine on rocky planets. Earth has been the only terrestrial planet where we have found phosphine, because there is life here. Until now.”

This research was funded, in part, by the Science and Technology Facilities Council, the European Southern Observatory, the Japan Society for the Promotion of Science, the Heising-Simons Foundation, the Change Happens Foundation, the Simons Foundation, and the European Union’s Horizon 2020 research and innovation program.

Monitoring sleep positions for a healthy rest
Posted on Friday September 11, 2020

Wireless device captures sleep data without using cameras or body sensors; could aid patients with Parkinson’s disease, epilepsy, or bedsores.

MIT researchers have developed a wireless, private way to monitor a person’s sleep postures — whether snoozing on their back, stomach, or sides — using reflected radio signals from a small device mounted on a bedroom wall.

The device, called BodyCompass, is the first home-ready, radio-frequency-based system to provide accurate sleep data without cameras or sensors attached to the body, according to Shichao Yue, who will introduce the system in a presentation at the UbiComp 2020 conference on Sept. 15. The PhD student has used wireless sensing to study sleep stages and insomnia for several years.

“We thought sleep posture could be another impactful application of our system” for medical monitoring, says Yue, who worked on the project under the supervision of Professor Dina Katabi in the MIT Computer Science and Artificial Intelligence Laboratory. Studies show that stomach sleeping increases the risk of sudden death in people with epilepsy, he notes, and sleep posture could also be used to measure the progression of Parkinson’s disease as the condition robs a person of the ability to turn over in bed.

"Unfortunately, many patients are completely unaware of how they sleep at night or what position they end up after a seizure," says  Dong Woo Lee, an epilepsy neurologist at Brigham and Women's Hospital and Harvard Medical School, who was not associated with the study. "A body monitoring system like BodyCompass would move our field forward in allowing for baseline monitoring of our patients to assess their risk, and when combined with an alerting/intervention system, could save patients from sudden unexpected death in epilepsy."

In the future, people might also use BodyCompass to keep track of their own sleep habits or to monitor infant sleeping, Yue says: “It can be either a medical device or a consumer product, depending on needs.”

Other authors on the conference paper, published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, include graduate students Yuzhe Yang and Hao Wang, and Katabi Lab affiliate Hariharan Rahul. Katabi is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT.

Restful reflections

BodyCompass works by analyzing the reflection of radio signals as they bounce off objects in a room, including the human body. Similar to a Wi-Fi router attached to the bedroom wall, the device sends and collects these signals as they return through multiple paths. The researchers then map the paths of these signals, working backward from the reflections to determine the body’s posture.

For this to work, however, the scientists needed a way to figure out which of the signals were bouncing off the sleeper’s body, and not bouncing off the mattress or a nightstand or an overhead fan. Yue and his colleagues realized that their past work in deciphering breathing patterns from radio signals could solve the problem.

Signals that bounce off a person’s chest and belly are uniquely modulated by breathing, they concluded. Once that breathing signal was identified as a way to “tag” reflections coming from the body, the researchers could analyze those reflections compared to the position of the device to determine how the person was lying in bed. (If a person was lying on her back, for instance, strong radio waves bouncing off her chest would be directed at the ceiling and then to the device on the wall.) “Identifying breathing as coding helped us to separate signals from the body from environmental reflections, allowing us to track where informative reflections are,” Yue says.
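
The tagging idea can be pictured as a toy spectral filter: among candidate reflection paths, keep only those whose power is concentrated at a plausible breathing rate. This is an illustrative stdlib sketch, not the BodyCompass implementation; the sampling rate, band edges, and 50-percent threshold are invented for the example.

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Naive DFT power of signal x (sampled at fs Hz) in [f_lo, f_hi].
    O(n^2), which is fine for short illustrative signals."""
    n = len(x)
    mean = sum(x) / n
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum((x[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum((x[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += re * re + im * im
    return power

def looks_like_breathing(x, fs, breath_band=(0.1, 0.5)):
    """Tag a reflection path as body-borne if most of its power sits in a
    typical resting breathing band (roughly 6 to 30 breaths per minute)."""
    total = band_power(x, fs, 0.0, fs / 2)
    return band_power(x, fs, *breath_band) > 0.5 * total
```

A path reflecting off the sleeper's chest passes the test; static clutter or a spinning fan does not, so only body-borne reflections feed the posture estimate.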

Reflections from the body are then analyzed by a customized neural network to infer how the body is angled in sleep. Because the neural network defines sleep postures according to angles, the device can distinguish between a sleeper lying on the right side from one who has merely tilted slightly to the right. This kind of fine-grained analysis would be especially important for epilepsy patients for whom sleeping in a prone position is correlated with sudden unexpected death, Yue says.
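
Because postures are defined by a continuous body angle, the final labeling step can be pictured as simple binning. The 45-degree bin edges below are invented for the sketch and are not the network's actual decision rule; the point is that a fine-grained angle estimate is what lets the system separate a true side sleeper from a slight tilt.

```python
def posture_from_angle(angle_deg):
    """Map an estimated body-rotation angle (0 = lying on the back,
    increasing toward the sleeper's right) to a coarse posture label.
    Bin edges are illustrative, not BodyCompass's actual rule."""
    a = angle_deg % 360
    if a < 45 or a >= 315:
        return "back"
    if a < 135:
        return "right side"
    if a < 225:
        return "stomach"
    return "left side"
```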

Lee says "it is becoming apparent that patients do not like wearing devices, they forget to wear it, they decrease comfort, battery life is short, and data transfer may be difficult. A non-wearable contactless device like the BodyCompass would overcome these issues."

BodyCompass has some advantages over other ways of monitoring sleep posture, such as installing cameras in a person’s bedroom or attaching sensors directly to the person or their bed. Sensors can be uncomfortable to sleep with, and cameras reduce a person’s privacy, Yue notes. “Since we will only record essential information for detecting sleep posture, such as a person’s breathing signal during sleep,” he says, “it is nearly impossible for someone to infer other activities of the user from this data.”

An accurate compass

The research team tested BodyCompass’ accuracy over 200 hours of sleep data from 26 healthy people sleeping in their own bedrooms. At the start of the study, the subjects wore two accelerometers (sensors that detect movement) taped to their chest and stomach, to train the device’s neural network with “ground truth” data on their sleeping postures.

BodyCompass was most accurate — predicting the correct body posture 94 percent of the time — when the device was trained on a week’s worth of data. One night’s worth of training data yielded accurate results 87 percent of the time. BodyCompass could achieve 84 percent accuracy with just 16 minutes’ worth of data collected, when sleepers were asked to hold a few usual sleeping postures in front of the wireless sensor.

Along with epilepsy and Parkinson’s disease, BodyCompass could prove useful in treating patients vulnerable to bedsores and sleep apnea, since both conditions can be alleviated by changes in sleeping posture. Yue has his own interest as well: He suffers from migraines that seem to be affected by how he sleeps. “I sleep on my right side to avoid headache the next day,” he says, “but I’m not sure if there really is any correlation between sleep posture and migraines. Maybe this can help me find out if there is any relationship.”

For now, BodyCompass is a monitoring tool, but it may be paired someday with an alert that can prod sleepers to change their posture. “Researchers are working on mattresses that can slowly turn a patient to avoid dangerous sleep positions,” Yue says. “Future work may combine our sleep posture detector with such mattresses to move an epilepsy patient to a safer position if needed.”

Highly sensitive trigger enables rapid detection of biological agents
Posted on Wednesday September 16, 2020

The Rapid Agent Aerosol Detector developed at Lincoln Laboratory has demonstrated excellent accuracy in identifying toxic biological particles suspended in the air.

Any space, enclosed or open, can be vulnerable to the dispersal of harmful airborne biological agents. Silent and near-invisible, these bioagents can sicken or kill living things before steps can be taken to mitigate the bioagents' effects. Venues where crowds congregate are prime targets for biowarfare strikes engineered by terrorists, but expanses of fields or forests could be victimized by an aerial bioattack. Early warning of suspicious biological aerosols can speed up remedial responses to releases of biological agents; the sooner cleanup and treatment begin, the better the outcome for the sites and people affected.  

MIT Lincoln Laboratory researchers have developed a highly sensitive and reliable trigger for the U.S. military's early warning system for biological warfare agents.

"The trigger is the key mechanism in a detection system because its continual monitoring of the ambient air in a location picks up the presence of aerosolized particles that may be threat agents," says Shane Tysk, principal investigator of the laboratory's bioaerosol trigger, the Rapid Agent Aerosol Detector (RAAD), and a member of the technical staff in the laboratory's Advanced Materials and Microsystems Group.

The trigger cues the detection system to collect particle specimens and then to initiate the process to identify particles as potentially dangerous bioagents. The RAAD has demonstrated a significant reduction in false positive rates while maintaining detection performance that matches or exceeds that of today’s best deployed systems. Additionally, early testing has shown that the RAAD has significantly improved reliability compared to currently deployed systems.

RAAD process

The RAAD determines the presence of biological warfare agents through a multistep process. First, aerosols are pulled into the detector by the combined action of an aerosol cyclone, which uses high-speed rotation to cull out small particles, and an aerodynamic lens, which focuses the particles into a condensed (i.e., enriched) volume, or beam, of aerosol. The RAAD aerodynamic lens provides more efficient aerosol enrichment than any other air-to-air concentrator.

Then, a near-infrared (NIR) laser diode creates a structured trigger beam that detects the presence, size, and trajectory of an individual aerosol particle. If the particle is large enough to adversely affect the respiratory tract — roughly 1 to 10 micrometers — a 266-nanometer ultraviolet (UV) laser is activated to illuminate the particle, and multiband laser-induced fluorescence is collected.

The detection process continues as an embedded logic decision, referred to as the “spectral trigger,” uses scattering from the NIR light and UV fluorescence data to predict if the particle's composition appears to correspond to that of a threat-like bioagent. "If the particle seems threat-like, then spark-induced breakdown spectroscopy is enabled to vaporize the particle and collect atomic emission to characterize the particle's elemental content," says Tysk.

Spark-induced breakdown spectroscopy is the last measurement stage. This spectroscopy system measures the elemental content of the particle, and its measurements involve creating a high-temperature plasma, vaporizing the aerosol particle, and measuring the atomic emission from the thermally excited states of the aerosol. 

The measurement stages — structured trigger beam, UV-excited fluorescence, and spark-induced breakdown spectroscopy — are integrated into a tiered system that provides seven measurements on each particle of interest. Of the hundreds of particles entering the measurement process each second, a small subset is down-selected for measurement in all three stages. The RAAD algorithm searches the data stream for changes in the particle set’s temporal and spectral characteristics. If a sufficient number of threat-like particles are found, the RAAD issues an alarm that a biological aerosol threat is present.
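
The tiered down-selection can be summarized as a short decision sketch. The field names and pass/fail flags below are hypothetical stand-ins for the instrument's real measurements and thresholds.

```python
def raad_trigger(particle):
    """Illustrative three-stage down-select, loosely following the
    stages described above (hypothetical fields and thresholds)."""
    # Stage 1: structured NIR trigger beam: size gate for particles
    # that could lodge in the respiratory tract (~1-10 micrometers).
    if not (1.0 <= particle["diameter_um"] <= 10.0):
        return "ignore"
    # Stage 2: 266 nm UV laser-induced fluorescence (spectral trigger).
    if not particle["fluorescence_threat_like"]:
        return "non-threat"
    # Stage 3: spark-induced breakdown spectroscopy (elemental content).
    return "threat-like" if particle["elemental_match"] else "non-threat"
```

Only particles that clear all three tiers would count toward the alarm decision, which is what keeps the false-alarm rate low.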

RAAD design advantages

"Because RAAD is intended to be operated 24 hours a day, seven days a week for long periods, we incorporated a number of features and technologies to improve system reliability and make the RAAD easy to maintain," says Brad Perkins, another staff member on the RAAD development team. For example, Perkins goes on to explain, the entire air-handling unit is a module that is mounted on the exterior of the RAAD to allow for easy servicing of the items most likely to need replacement, such as filters, the air-to-air concentrator, and pumps that wear out with use.

To improve detection reliability, the RAAD team chose to use carbon-filtered, HEPA-filtered, and dehumidified sheathing air and purge air (compressed air that pushes out extraneous gases) around the optical components. This approach ensures that contaminants from the outside air do not deposit onto the optical surfaces of the RAAD, potentially causing reductions in sensitivity or false alarms.

The RAAD has undergone more than 16,000 hours of field testing, during which it has demonstrated an extremely low false-alarm rate that is unprecedented for a biological trigger with such a high level of sensitivity. "What sets RAAD apart from its competitors is the number, variety, and fidelity of the measurements made on each individual aerosol particle," Tysk says. These multiple measurements on individual aerosol particles as they flow through the system enable the trigger to accurately discriminate biological warfare agents from ambient air at a rapid rate. Because RAAD does not name the particular bioagent detected, further laboratory testing of the specimen would have to be done to determine its exact identity.

The RAAD was developed under sponsorship from the Defense Threat Reduction Agency and Joint Program Executive Office for CBRN Defense. The technology is currently being transitioned for production from Lincoln Laboratory to Chemring Sensors and Electronic Systems.

MIT-led team to develop software to help forecast space storms
Posted on Thursday September 10, 2020

National Science Foundation awards proposal for space weather modeling.

On a moonless night on Aug. 28, 1859, the sky began to bleed. The phenomenon behind the northern lights had gone global: an aurora stretching luminous, rainbow fingers across time zones and continents illuminated the night sky with an undulating backdrop of crimson. From New England to Australia, people stood in the streets looking up with admiration, inspiration, and fear as the night sky shimmered in Technicolor. But the beautiful display came with a cost. The global telegraph system — which at the time was responsible for nearly all long-distance communication — experienced widespread disruption. Some telegraph operators experienced electric shocks while sending and receiving messages; others witnessed sparks flying from cable pylons. Telegraph transmissions were halted for days.  

The aurora and the damage that followed were later attributed to a geomagnetic storm caused by a series of coronal mass ejections (CMEs) that burst from the sun’s surface, raced across the solar system, and barraged our atmosphere with magnetic solar energy, wreaking havoc on the electricity that powered the telegraph system. Although we no longer rely on the global telegraph system to stay connected around the world, experiencing a geomagnetic storm on a similar scale in today’s world would still be catastrophic. Such a storm could cause worldwide blackouts, massive network failures, and widespread damage to the satellites that enable GPS and telecommunication — not to mention the potential threat to human health from increased levels of radiation. Unlike storms on Earth, solar storms’ arrival and intensity can be difficult to predict. Without a better understanding of space weather, we might not even see the next great solar storm coming until it’s too late.

To advance our ability to forecast space weather the way we forecast weather on Earth, Richard Linares, an assistant professor in the Department of Aeronautics and Astronautics (AeroAstro) at MIT, is leading a multidisciplinary team of researchers to develop software that can effectively address this challenge. With better models, we can use historical observational data to better predict the impact of space weather events like CMEs, solar wind, and other space plasma phenomena as they interact with our atmosphere. Under the Space Weather with Quantified Uncertainties (SWQU) program, a partnership between the U.S. National Science Foundation (NSF) and NASA, the team was awarded a $3 million grant for their proposal “Composable Next Generation Software Framework.”

“By bringing together experts in geospace sciences, uncertainty quantification, software development, management, and sustainability, we hope to develop the next generation of software for space weather modeling and prediction,” says Linares. “Improving space weather predictions is a national need, and we saw a unique opportunity at MIT to combine the expertise we have across campus to solve this problem.”

Linares’ MIT collaborators include Philip Erickson, assistant director at MIT Haystack Observatory and head of Haystack’s atmospheric and geospace sciences group; Jaime Peraire, the H.N. Slater Professor of Aeronautics and Astronautics; Youssef Marzouk, professor of aeronautics and astronautics; Ngoc Cuong Nguyen, a research scientist in AeroAstro; Alan Edelman, professor of applied mathematics; and Christopher Rackauckas, instructor in the Department of Mathematics. External collaborators include Aaron Ridley (University of Michigan) and Boris Kramer (University of California at San Diego). Together, the team will focus on resolving this gap by creating a model-focused composable software framework that allows a wide variety of observation data collected across the world to be ingested into a global model of the ionosphere/thermosphere system. 

“MIT Haystack research programs include a focus on conditions in near-Earth space, and our NSF-sponsored Madrigal online distributed database provides the largest single repository of ground-based community data on space weather and its effects in the atmosphere using worldwide scientific observations. This extensive data includes ionospheric remote sensing information on total electron content (TEC), spanning the globe on a nearly continuous basis and calculated from networks of thousands of individual global navigation satellite system community receivers,” says Erickson. “TEC data, when analyzed jointly with results of next-generation atmosphere and magnetosphere modeling systems, provides a key future innovation that will significantly improve human understanding of critically important space weather effects.”

The project aims to create a powerful, flexible software platform using cutting-edge computational tools to collect and analyze huge sets of observational data that can be easily shared and reproduced among researchers. The platform will also be designed to work even as computer technology rapidly advances and new researchers contribute to the project from new places, using new machines. Using Julia, a high-performance programming language developed by Edelman at MIT, researchers from all over the world will be able to tailor the software for their own purposes to contribute their data without having to rewrite the program from scratch.

“I'm very excited that Julia, already fast becoming the language of scientific machine learning, and a great tool for collaborative software, can play a key role in space weather applications,” says Edelman. 

According to Linares, the composable software framework will serve as a foundation that can be expanded and improved over time, growing both the space weather prediction capabilities and the space weather modeling community itself.

The MIT-led project was one of six projects selected for three-year grant awards under the SWQU program. Motivated by the White House National Space Weather Strategy and Action Plan and the National Strategic Computing Initiative, the goal of the SWQU program is to bring together teams from across scientific disciplines to advance the latest statistical analysis and high-performance computing methods within the field of space weather modeling.

“One key goal of the SWQU program is development of sustainable software with built-in capability to evaluate likelihood and magnitude of electromagnetic geospace disturbances based on sparse observational data,” says Vyacheslav Lukin, NSF program director in the Division of Physics. “We look forward to this multidisciplinary MIT-led team laying the foundations for such development to enable advances that will transform our future space weather forecasting capabilities.”

As information flows through brain’s hierarchy, higher regions use higher-frequency waves
Posted on Thursday September 10, 2020

Study also finds specific frequency bands of brain waves associated with encoding, or inhibiting encoding, of sensory information across the cortex.

To produce your thoughts and actions, your brain processes information in a hierarchy of regions along its surface, or cortex, ranging from “lower” areas that do basic parsing of incoming sensations to “higher” executive regions that formulate your plans for employing that newfound knowledge. In a new study, MIT neuroscientists seeking to explain how this organization emerges report two broad trends: In each of three distinct regions, information encoding or its inhibition was associated with a similar tug of war between specific brain wave frequency bands, and the higher a region’s status in the hierarchy, the higher the peak frequency of its waves in each of those bands.

By making and analyzing measurements of thousands of neurons and surrounding electric fields in three cortical regions in animals, the team’s new study in the Journal of Cognitive Neuroscience provides a unifying view of how brain waves, which are oscillating patterns of the activity of brain cells, may control the flow of information throughout the cortex.

“When you look at prior studies you see examples of what we found in many regions, but they are all found in different ways in different experiments,” says Earl Miller, the Picower Professor of Neuroscience in The Picower Institute for Learning and Memory at MIT and senior author of the study. “We wanted to obtain an overarching picture, so that’s what we did. We addressed the question of what does this look like all over the cortex.”

Adds co-first author Mikael Lundqvist of Stockholm University, formerly a postdoc at MIT: “Many, many studies have looked at how synchronized the phases of a particular frequency are between cortical regions. It has become a field by itself, because synchrony will impact the communication between regions. But arguably even more important would be if regions communicate at different frequencies altogether. Here we find such a systematic shift in preferred frequencies across regions. It may have been suspected by piecing together earlier studies, but as far as I know hasn’t been shown directly before. It is a simple, but potentially very fundamental, observation.”

The paper’s other first author is Picower Institute postdoc Andre Bastos.

To make their observations, the team gave animals the task of correctly distinguishing an image they had just seen — a simple feat of visual working memory. As the animals played the game, the scientists measured the individual spiking activity of hundreds of neurons in each animal in three regions at the bottom, middle, and top of the task’s cortical hierarchy — the visual cortex, the parietal cortex, and the prefrontal cortex. They simultaneously tracked the waves produced by this activity.

In each region, they found that when an image was either being encoded (when it was first presented) or recalled (when working memory was tested), the power of theta and gamma frequency bands of brain waves would increase in bursts and power in alpha and beta bands would decrease. When the information had to be held in mind, for instance in the period between first sight and the test, theta and gamma power went down and alpha and beta power went up in bursts. This functional “push/pull” sequence between these frequency bands has been shown in several individual regions, including the motor cortex, Miller says, but not often simultaneously across multiple regions in the course of the same task.

The researchers also observed that the bursts of theta and gamma power were closely associated with neural spikes that encoded information about the images. Alpha and beta power bursts, meanwhile, were anti-correlated with that same spiking activity.

While this rule applied across all three regions, a key difference was that each region employed a distinct peak within each frequency band. The visual cortex beta band, for instance, peaked at 11 Hz, parietal beta peaked at 15 Hz, and prefrontal beta peaked at 19 Hz. Meanwhile, visual cortex gamma occurred at 65 Hz, parietal gamma peaked at 72 Hz, and prefrontal gamma at 80 Hz.
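
The reported peaks can be tabulated directly, which makes the band-by-band rise along the visual-to-prefrontal hierarchy easy to check:

```python
# Peak frequencies (Hz) reported for each region, per the study.
band_peaks_hz = {
    "visual":     {"beta": 11, "gamma": 65},
    "parietal":   {"beta": 15, "gamma": 72},
    "prefrontal": {"beta": 19, "gamma": 80},
}

def peaks_rise_with_hierarchy(band, hierarchy=("visual", "parietal", "prefrontal")):
    """True if the band's peak frequency increases strictly up the hierarchy."""
    peaks = [band_peaks_hz[region][band] for region in hierarchy]
    return all(lo < hi for lo, hi in zip(peaks, peaks[1:]))
```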

“As you move from the back of the brain to the front, all the frequencies get a little higher,” Miller says.

While both main trends in the study — the inverse relationship between frequency bands and the systematic rise in peak frequencies within each band — were consistently observed and statistically significant, they show only associations with function, not causality. But the researchers said they are consistent with a model in which alpha and beta alternately inhibit, or release, gamma to control the encoding of information — a form of top-down control of sensory activity.

Meanwhile, they hypothesize that the systematic increase in peak frequencies up the hierarchy could serve multiple functions. For instance, if waves in each frequency band carry information, then higher regions would sample at a faster frequency to provide more fine-grained sampling of the raw input coming from lower regions. Moreover, faster frequencies are more effective at entraining those same frequencies in other regions, giving higher regions an effective way of controlling activity in lower ones.

“The increased frequency in the oscillatory rhythms may help sculpt information flow in the cortex,” the authors wrote.

The study was supported by the U.S. National Institutes of Health, the Office of Naval Research, The JPB Foundation, the Swedish Research Council, and the Brain and Behavior Research Foundation.

A “bang” in LIGO and Virgo detectors signals most massive gravitational-wave source yet
Posted on Wednesday September 02, 2020

A binary black hole merger likely produced gravitational waves equal to the energy of eight suns.

For all its vast emptiness, the universe is humming with activity in the form of gravitational waves. Produced by extreme astrophysical phenomena, these reverberations ripple forth and shake the fabric of space-time, like the clang of a cosmic bell.

Now researchers have detected a signal from what may be the most massive black hole merger yet observed in gravitational waves. The product of the merger is the first clear detection of an “intermediate-mass” black hole, with a mass between 100 and 1,000 times that of the sun.

They detected the signal, which they have labeled GW190521, on May 21, 2019, with the National Science Foundation’s Laser Interferometer Gravitational-wave Observatory (LIGO), a pair of identical, 4-kilometer-long interferometers in the United States; and Virgo, a 3-kilometer-long detector in Italy.

The signal, resembling about four short wiggles, is extremely brief, lasting less than one-tenth of a second. From what the researchers can tell, GW190521 was generated by a source roughly 5 gigaparsecs away, when the universe was about half its age, making it one of the most distant gravitational-wave sources detected so far.

As for what produced this signal, based on a powerful suite of state-of-the-art computational and modeling tools, scientists think that GW190521 was most likely generated by a binary black hole merger with unusual properties.

Almost every confirmed gravitational-wave signal to date has been from a binary merger, either between two black holes or two neutron stars. This newest merger appears to be the most massive yet, involving two inspiraling black holes with masses about 85 and 66 times the mass of the sun.

The LIGO-Virgo team has also measured each black hole’s spin and discovered that as the black holes were circling ever closer together, they could have been spinning about their own axes, at angles that were out of alignment with the axis of their orbit. The black holes’ misaligned spins likely caused their orbits to wobble, or “precess,” as the two Goliaths spiraled toward each other.

The new signal likely represents the instant that the two black holes merged. The merger created an even more massive black hole, of about 142 solar masses, and released an enormous amount of energy, equivalent to around 8 solar masses, spread across the universe in the form of gravitational waves.
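
A back-of-envelope check with E = Δm c² reproduces the scale of that release. Note that the rounded component masses give Δm of about 9 solar masses, while the collaboration's detailed fit (which accounts for spin) quotes roughly 8; the constants below are standard values.

```python
# Radiated mass-energy: E = delta_m * c^2.
M_SUN_KG = 1.989e30     # one solar mass, in kilograms
C_M_PER_S = 2.998e8     # speed of light, in meters per second

delta_m_kg = (85 + 66 - 142) * M_SUN_KG       # ~9 M_sun from rounded masses
energy_joules = delta_m_kg * C_M_PER_S ** 2   # on the order of 1.6e48 J
```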

“This doesn’t look much like a chirp, which is what we typically detect,” says Virgo member Nelson Christensen, a researcher at the French National Centre for Scientific Research (CNRS), comparing the signal to LIGO’s first detection of gravitational waves in 2015. “This is more like something that goes ‘bang,’ and it’s the most massive signal LIGO and Virgo have seen.”

The international team of scientists, who make up the LIGO Scientific Collaboration (LSC) and the Virgo Collaboration, have reported their findings in two papers published today. One, appearing in Physical Review Letters, details the discovery, and the other, in The Astrophysical Journal Letters, discusses the signal’s physical properties and astrophysical implications.

“LIGO once again surprises us not just with the detection of black holes in sizes that are difficult to explain, but doing it using techniques that were not designed specifically for stellar mergers,” says Pedro Marronetti, program director for gravitational physics at the National Science Foundation. “This is of tremendous importance since it showcases the instrument’s ability to detect signals from completely unforeseen astrophysical events. LIGO shows that it can also observe the unexpected.”

In the mass gap

The uniquely large masses of the two inspiraling black holes, as well as the final black hole, raise a slew of questions regarding their formation.

All of the black holes observed to date fall into one of two categories: stellar-mass black holes, which measure from a few solar masses up to tens of solar masses and are thought to form when massive stars die; or supermassive black holes, such as the one at the center of the Milky Way galaxy, which range from hundreds of thousands to billions of times the mass of our sun.

However, the final 142-solar-mass black hole produced by the GW190521 merger lies within an intermediate mass range between stellar-mass and supermassive black holes — the first of its kind ever detected.

The two progenitor black holes that produced the final black hole also seem to be unique in their size. They’re so massive that scientists suspect one or both of them may not have formed from a collapsing star, as most stellar-mass black holes do.

According to the physics of stellar evolution, outward pressure from the photons and gas in a star’s core supports it against the force of gravity pushing inward, so that the star is stable, like the sun. After the core of a massive star fuses nuclei as heavy as iron, it can no longer produce enough pressure to support the outer layers. When this outward pressure is less than gravity, the star collapses under its own weight, in an explosion called a core-collapse supernova, which can leave behind a black hole.

This process can explain how stars as massive as 130 solar masses can produce black holes that are up to 65 solar masses. But for heavier stars, a phenomenon known as “pair instability” is thought to kick in. When the core’s photons become extremely energetic, they can morph into an electron and antielectron pair. These pairs generate less pressure than photons, causing the star to become unstable against gravitational collapse, and the resulting explosion is strong enough to leave nothing behind. Even more massive stars, above 200 solar masses, would eventually collapse directly into a black hole of at least 120 solar masses. A collapsing star, then, should not be able to produce a black hole between approximately 65 and 120 solar masses — a range that is known as the “pair instability mass gap.”
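
For illustration, the gap can be written as a simple predicate, using the approximate 65 and 120 solar-mass bounds quoted above:

```python
def in_pair_instability_gap(bh_mass_msun, gap=(65.0, 120.0)):
    """True if a black hole's mass (in solar masses) falls inside the
    approximate pair-instability mass gap described above."""
    lo, hi = gap
    return lo < bh_mass_msun < hi
```

By this bookkeeping, an 85-solar-mass progenitor sits squarely inside the gap, which is what makes its formation history puzzling.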

But now, the heavier of the two black holes that produced the GW190521 signal, at 85 solar masses, is the first so far detected within the pair instability mass gap.

“The fact that we’re seeing a black hole in this mass gap will make a lot of astrophysicists scratch their heads and try to figure out how these black holes were made,” says Christensen, who is the director of the Artemis Laboratory at the Nice Observatory in France.

One possibility, which the researchers consider in their second paper, is of a hierarchical merger, in which the two progenitor black holes themselves may have formed from the merging of two smaller black holes, before migrating together and eventually merging.

“This event opens more questions than it provides answers,” says LIGO member Alan Weinstein, professor of physics at Caltech. “From the perspective of discovery and physics, it’s a very exciting thing.”

“Something unexpected”

There are many remaining questions regarding GW190521.

As LIGO and Virgo detectors listen for gravitational waves passing through Earth, automated searches comb through the incoming data for interesting signals. These searches can use two different methods: algorithms that pick out specific wave patterns in the data that may have been produced by compact binary systems; and more general “burst” searches, which essentially look for anything out of the ordinary.

LIGO member Salvatore Vitale, assistant professor of physics at MIT, likens compact binary searches to “passing a comb through data, that will catch things in a certain spacing,” in contrast to burst searches that are more of a “catch-all” approach.
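The contrast between the two search styles can be illustrated with a toy example. The sketch below is our own simplification, not a LIGO pipeline: a chirp-like template is buried in noise, then recovered once by cross-correlation against the known template (the "comb") and once by looking for excess power in short windows (the "catch-all"):

```python
# Toy contrast between template-based and burst-style searches
# (illustrative only; real gravitational-wave pipelines are far
# more sophisticated).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
# A toy "chirp" template with a narrow Gaussian envelope.
template = np.sin(2 * np.pi * 30 * t**2) * np.exp(-50 * (t - 0.5) ** 2)
# Bury a shifted, attenuated copy of the template in noise.
data = 0.5 * np.roll(template, 200) + rng.normal(0, 0.2, t.size)

# "Comb" approach: cross-correlate the data against the template;
# the peak marks where the template best matches.
snr = np.abs(np.correlate(data, template, mode="same"))
matched_peak = int(snr.argmax())

# "Catch-all" burst approach: smooth the squared data and look
# for a window of excess power, with no assumed waveform.
power = np.convolve(data**2, np.ones(50) / 50, mode="same")
burst_peak = int(power.argmax())

print(matched_peak, burst_peak)  # both land near the injected signal
```

The matched filter needs a waveform model to "catch" the signal; the burst search trades sensitivity for generality, which is why a burst-dominated detection such as GW190521 leaves more room for alternative interpretations.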

In the case of GW190521, it was a burst search that picked up the signal slightly more clearly, opening the very small chance that the gravitational waves arose from something other than a binary merger.

“The bar for asserting we’ve discovered something new is very high,” Weinstein says. “So we typically apply Occam’s razor: The simpler solution is the better one, which in this case is a binary black hole.”

But what if something entirely new produced these gravitational waves? It’s a tantalizing prospect, and in their paper the scientists briefly consider other sources in the universe that might have produced the signal they detected. For instance, perhaps the gravitational waves were emitted by a collapsing star in our galaxy. The signal could also be from a cosmic string produced just after the universe inflated in its earliest moments — although neither of these exotic possibilities matches the data as well as a binary merger.

“Since we first turned on LIGO, everything we’ve observed with confidence has been a collision of black holes or neutron stars,” Weinstein says. “This is the one event where our analysis allows the possibility that this event is not such a collision. Although this event is consistent with being from an exceptionally massive binary black hole merger, and alternative explanations are disfavored, it is pushing the boundaries of our confidence. And that potentially makes it extremely exciting. Because we have all been hoping for something new, something unexpected, that could challenge what we’ve learned already. This event has the potential for doing that.”

This research was funded by the U.S. National Science Foundation.

