MIT In the News


Testing the waters
Posted on Tuesday January 28, 2020

MIT sophomore Rachel Shen looks for microscopic solutions to big environmental challenges.

In 2010, the U.S. Army Corps of Engineers began restoring the Broad Meadows salt marsh in Quincy, Massachusetts. The marsh, which had grown over with invasive reeds and needed to be dredged, abutted the Broad Meadows Middle School, and its three-year transformation fascinated one inquisitive student. “I was always super curious about what sorts of things were going on there,” says Rachel Shen, who was in eighth grade when they finally finished the project. She’d spend hours watching birds in the marsh, and catching minnows by the beach.

In her bedroom at home, she kept an eye on four aquariums furnished with anubias, hornwort, guppy grass, amazon swords, and “too many snails.” Now, living in a dorm as a sophomore at MIT, she’s had to scale back to a single one-gallon tank. But as a Course 7 (Biology) major minoring in environmental and sustainability studies, she gets an even closer look at the natural world, seeing what most of us can’t: the impurities in our water, the matrices of plant cells, and the invisible processes that cycle nutrients in the oceans.

Shen’s love for nature has always been coupled with scientific inquiry. Growing up, she took part in Splash and Spark workshops for grade schoolers, taught by MIT students. “From a young age, I was always that kid catching bugs,” she says. In her junior year of high school, she landed the perfect summer internship through Boston University’s GROW program: studying ant brains at BU’s Traniello lab. Within a colony, ants with different morphological traits perform different jobs as workers, guards, and drones. To see how the brains of these castes might be wired differently, Shen dosed the ants with serotonin and dopamine and looked for differences in the ways the neurotransmitters altered the ants’ social behavior.

This experience in the Traniello lab later connected Shen to her first campus job working for MITx Biology, which develops online courses and educational resources for students with Department of Biology faculty. Darcy Gordon, one of the administrators for GROW and a postdoc at the Traniello Lab, joined MITx Biology as a digital learning fellow just as Shen was beginning her first year. MITx was looking for students to beta-test their biochemistry course, and Gordon encouraged Shen to apply. “I’d never taken a biochem course before, but I had enough background to pick it up,” says Shen, who is always willing to try something new. She went through the entire course, giving feedback on lesson clarity and writing practice problems.

Using what she learned on the job, she’s now the biochem leader on a student project with the It’s On Us Data Sciences club (formerly Project ORCA) to develop a live map of water contamination by rigging autonomous boats with pollution sensors. Environmental restoration has always been important to her, but it was on her trip to the Navajo Nation with her first-year advisory group, Terrascope, that Shen saw the effects of water scarcity and contamination firsthand. She and her peers devised filtration and collection methods to bring to the community, but she found the most valuable part of the project to be “working with the people, and coming up with solutions that incorporated their local culture and local politics.”

Through the Undergraduate Research Opportunities Program (UROP), Shen has put her problem-solving skills to work in the lab. Last summer, she interned at Draper and the Velásquez-García Group in MIT’s Microsystems Technology Laboratories. Through experiments, she observed how plant cells can be coaxed with hormones to reinforce their cell walls with lignin and cellulose, becoming “woody” — insights that can be used in the development of biomaterials.

For her next UROP, she sought out a lab where she could work alongside a larger team, and was drawn to the people in the lab of Sallie “Penny” Chisholm in MIT’s departments of Biology and Civil and Environmental Engineering, who study the marine cyanobacterium Prochlorococcus. “I really feel like I could learn a lot from them,” Shen says. “They’re great at explaining things.”

Prochlorococcus is one of the most abundant photosynthesizers in the ocean. Cyanobacteria are mixotrophs, which means they get their energy from the sun through photosynthesis, but can also take up nutrients like carbon and nitrogen from their environment. One source of carbon and nitrogen is found in chitin, the insoluble biopolymer that crustaceans and other marine organisms use to build their shells and exoskeletons. Billions of tons of chitin are produced in the oceans every year, and nearly all of it is recycled back into carbon, nitrogen, and minerals by marine bacteria, allowing it to be used again.

Shen is investigating whether Prochlorococcus also recycles chitin, as its close relative Synechococcus does by secreting enzymes that break down the polymer. In the lab’s grow room, she tends to test tubes that glow green with cyanobacteria. She’ll introduce chitin to half of the cultures, then use RNA sequencing to see whether Prochlorococcus expresses specific genes that might be implicated in chitin degradation, and to identify those genes.

Shen says working with Prochlorococcus is exciting because it’s a case study in which the smallest cellular processes of a species can have huge effects in its ecosystem. Cracking the chitin cycle would have implications for humans, too. Biochemists have been trying to turn chitin into a biodegradable alternative to plastic. “One thing I want to get out of my science education is learning the basic science,” she says, “but it’s really important to me that it has direct applications.”

Something else Shen has realized at MIT is that, whatever she ends up doing with her degree, she wants her research to involve fieldwork that takes her out into nature — maybe even back to the marsh, to restore shorelines and waterways. As she puts it, “something that’s directly relevant to people.” But she’s keeping her options open. “Currently I'm just trying to explore pretty much everything.”

For cheaper solar cells, thinner really is better
Posted on Monday January 27, 2020

Solar panel costs have dropped lately, but slimming down silicon wafers could lead to even lower costs and faster industry expansion.

Costs of solar panels have plummeted over the last several years, leading to rates of solar installations far greater than most analysts had expected. But with most of the potential areas for cost savings already pushed to the extreme, further cost reductions are becoming more challenging to find.

Now, researchers at MIT and at the National Renewable Energy Laboratory (NREL) have outlined a pathway to slashing costs further, this time by slimming down the silicon cells themselves.

Thinner silicon cells have been explored before, especially around a dozen years ago when the cost of silicon peaked because of supply shortages. But this approach suffered from some difficulties: The thin silicon wafers were too brittle and fragile, leading to unacceptable levels of losses during the manufacturing process, and they had lower efficiency. The researchers say there are now ways to begin addressing these challenges through the use of better handling equipment and some recent developments in solar cell architecture.

The new findings are detailed in a paper in the journal Energy & Environmental Science, co-authored by MIT postdoc Zhe Liu, professor of mechanical engineering Tonio Buonassisi, and five others at MIT and NREL.

The researchers describe their approach as “technoeconomic,” stressing that at this point economic considerations are as crucial as the technological ones in achieving further improvements in affordability of solar panels.

Currently, 90 percent of the world’s solar panels are made from crystalline silicon, and the industry continues to grow at a rate of about 30 percent per year, the researchers say. Today’s silicon photovoltaic cells, the heart of these solar panels, are made from wafers of silicon that are 160 micrometers thick, but with improved handling methods, the researchers propose this could be shaved down to 100 micrometers — and eventually to as little as 40 micrometers or less, which would require only one-fourth as much silicon for a given size of panel.
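
For a rough sense of the silicon savings at stake, here is a minimal sketch in Python using only the thicknesses quoted in the article; the wafer area is a common industrial format assumed for illustration, and kerf (sawing) losses are ignored.

```python
# Rough comparison of silicon use per wafer at different thicknesses.
# Thickness values come from the article; the wafer format and the
# density of crystalline silicon are standard reference values, and
# kerf losses are ignored for simplicity.

SILICON_DENSITY_G_PER_CM3 = 2.33      # crystalline silicon
WAFER_AREA_CM2 = 15.6 * 15.6          # common industrial wafer size (assumption)

def silicon_mass_grams(thickness_um: float) -> float:
    """Mass of silicon in one wafer of the given thickness (micrometers)."""
    thickness_cm = thickness_um * 1e-4
    return SILICON_DENSITY_G_PER_CM3 * WAFER_AREA_CM2 * thickness_cm

baseline = silicon_mass_grams(160)    # today's standard thickness
for t in (100, 40, 15):               # proposed, eventual, and longer-term targets
    m = silicon_mass_grams(t)
    print(f"{t:>3} um wafer: {m:4.1f} g of silicon "
          f"({m / baseline:.2f}x the 160-um baseline)")
```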

That could not only reduce the cost of the individual panels, they say, but even more importantly it could allow for rapid expansion of solar panel manufacturing capacity. That’s because the expansion can be constrained by limits on how fast new plants can be built to produce the silicon crystal ingots that are then sliced like salami to make the wafers. These plants, which are generally separate from the solar cell manufacturing plants themselves, tend to be capital-intensive and time-consuming to build, which could lead to a bottleneck in the rate of expansion of solar panel production. Reducing wafer thickness could potentially alleviate that problem, the researchers say.

The study looked at the efficiency levels of four variations of solar cell architecture, including PERC (passivated emitter and rear contact) cells and other advanced high-efficiency technologies, comparing their outputs at different thickness levels. The team found there was in fact little decline in performance down to thicknesses as low as 40 micrometers, using today’s improved manufacturing processes.

“We see that there’s this area (of the graphs of efficiency versus thickness) where the efficiency is flat,” Liu says, “and so that’s the region where you could potentially save some money.” Because of these advances in cell architecture, he says, “we really started to see that it was time to revisit the cost benefits.”

Changing over the huge panel-manufacturing plants to adapt to the thinner wafers will be a time-consuming and expensive process, but the analysis shows the benefits can far outweigh the costs, Liu says. It will take time to develop the necessary equipment and procedures to allow for the thinner material, but with existing technology, he says, “it should be relatively simple to go down to 100 micrometers,” which would already provide some significant savings. Further improvements in technology such as better detection of microcracks before they grow could help reduce thicknesses further.

In the future, the thickness could potentially be reduced to as little as 15 micrometers, he says. New technologies that grow thin wafers of silicon crystal directly rather than slicing them from a larger cylinder could help enable such further thinning, he says.

Development of thin silicon has received little attention in recent years because the price of silicon has declined from its earlier peak. But, because of improvements in solar cell efficiency and cost reductions that have already taken place in other parts of the solar panel manufacturing process and supply chain, the cost of the silicon is once again a factor that can make a difference, he says.

“Efficiency can only go up by a few percent. So if you want to get further improvements, thickness is the way to go,” Buonassisi says. But the conversion will require large capital investments for full-scale deployment.

The purpose of this study, he says, is to provide a roadmap for those who may be planning expansion in solar manufacturing technologies. By making the path “concrete and tangible,” he says, it may help companies incorporate this in their planning. “There is a path,” he says. “It’s not easy, but there is a path. And for the first movers, the advantage is significant.”

What may be required, he says, is for the different key players in the industry to get together and lay out a specific set of steps forward and agreed-upon standards, as the integrated circuit industry did early on to enable the explosive growth of that industry. “That would be truly transformative,” he says.

Andre Augusto, an associate research scientist at Arizona State University who was not connected with this research, says “refining silicon and wafer manufacturing is the most capital-expense (capex) demanding part of the process of manufacturing solar panels. So in a scenario of fast expansion, the wafer supply can become an issue. Going thin solves this problem in part as you can manufacture more wafers per machine without increasing significantly the capex.” He adds that “thinner wafers may deliver performance advantages in certain climates,” performing better in warmer conditions.

Renewable energy analyst Gregory Wilson of Gregory Wilson Consulting, who was not associated with this work, says “The impact of reducing the amount of silicon used in mainstream cells would be very significant, as the paper points out. The most obvious gain is in the total amount of capital required to scale the PV industry to the multi-terawatt scale required by the climate change problem. Another benefit is in the amount of energy required to produce silicon PV panels. This is because the polysilicon production and ingot growth processes that are required for the production of high efficiency cells are very energy intensive.”

Wilson adds “Major PV cell and module manufacturers need to hear from credible groups like Prof. Buonassisi’s at MIT, since they will make this shift when they can clearly see the economic benefits.”

The team also included Sarah Sofia, Hannu Lane, Sarah Wieghold and Marius Peters at MIT and Michael Woodhouse at NREL. The work was partly supported by the U.S. Department of Energy, the Singapore-MIT Alliance for Research and Technology (SMART), and by a Total Energy Fellowship through the MIT Energy Initiative.

Researchers hope to make needle pricks for diabetics a thing of the past
Posted on Friday January 24, 2020

Study suggests noninvasive spectroscopy could be used to monitor blood glucose levels.

Patients with diabetes have to test their blood sugar levels several times a day to make sure they are not getting too high or too low. Studies have shown that more than half of patients don’t test often enough, in part because of the pain and inconvenience of the needle prick.

One possible alternative is Raman spectroscopy, a noninvasive technique that reveals the chemical composition of tissue, such as skin, by shining near-infrared light on it. MIT scientists have now taken an important step toward making this technique practical for patient use: They have shown that they can use it to directly measure glucose concentrations through the skin. Until now, glucose levels had to be calculated indirectly, based on a comparison between Raman signals and a reference measurement of blood glucose levels.

While more work is needed to develop the technology into a user-friendly device, this advance shows that a Raman-based sensor for continuous glucose monitoring could be feasible, says Peter So, a professor of biological and mechanical engineering at MIT.

“Today, diabetes is a global epidemic,” says So, who is one of the senior authors of the study and the director of MIT’s Laser Biomedical Research Center. “If there were a good method for continuous glucose monitoring, one could potentially think about developing better management of the disease.”

Sung Hyun Nam of the Samsung Advanced Institute of Technology in Seoul is also a senior author of the study, which appears today in Science Advances. Jeon Woong Kang, a research scientist at MIT, and Yun Sang Park, a research staff member at Samsung Advanced Institute of Technology, are the lead authors of the paper.

Seeing through the skin

Raman spectroscopy can be used to identify the chemical composition of tissue by analyzing how near-infrared light is scattered, or deflected, as it encounters different kinds of molecules.

MIT’s Laser Biomedical Research Center has been working on Raman-spectroscopy-based glucose sensors for more than 20 years. The near-infrared laser beam used for Raman spectroscopy can only penetrate a few millimeters into tissue, so one key advance was to devise a way to correlate glucose measurements from the fluid that bathes skin cells, known as interstitial fluid, to blood glucose levels.

However, another key obstacle remained: The signal produced by glucose tends to get drowned out by the many other tissue components found in skin.

“When you are measuring the signal from the tissue, most of the strong signals are coming from solid components such as proteins, lipids, and collagen. Glucose is a tiny, tiny amount out of the total signal. Because of that, so far we could not actually see the glucose signal from the measured signal,” Kang says.

To work around that, the MIT team has developed ways to calculate glucose levels indirectly by comparing Raman data from skin samples with glucose concentrations in blood samples taken at the same time. However, this approach requires frequent calibration, and the predictions can be thrown off by movement of the subject or changes in environmental conditions.

For the new study, the researchers developed a new approach that lets them see the glucose signal directly. The novel aspect of their technique is that they shine near-infrared light onto the skin at about a 60-degree angle, but collect the resulting Raman signal from a fiber perpendicular to the skin. This results in a stronger overall signal because the glucose Raman signal can be collected while unwanted reflected signal from the skin surface is filtered out.

The researchers tested the system in pigs and found that after 10 to 15 minutes of calibration, they could get accurate glucose readings for up to an hour. They verified the readings by comparing them to glucose measurements taken from blood samples.

“This is the first time that we directly observed the glucose signal from the tissue in a transdermal way, without going through a lot of advanced computation and signal extraction,” So says.

Continuous monitoring

Further development of the technology is needed before the Raman-based system could be used to monitor people with diabetes, the researchers say. They now plan to work on shrinking the device, which is about the size of a desktop printer, so that it could be portable, in hopes of testing such a device on diabetic patients.

“You might have a device at home or a device in your office that you could put your finger on once in a while, or you might have a probe that you hold to your skin,” So says. “That’s what we’re thinking about in the shorter term.”

In the long term, they hope to create a wearable monitor that could offer continuous glucose measurements.

Other MIT authors of the paper include former postdoc Surya Pratap Singh, who is now an assistant professor at the Indian Institute of Technology; Wonjun Choi, a former visiting scientist from the Institute for Basic Science in South Korea; research technical staff member Luis Galindo; and principal research scientist Ramachandra Dasari. Hojun Chang, Woochang Lee, and Jongae Park of the Samsung Advanced Institute of Technology are also authors of the study.

The research was funded by the National Institutes of Health, the Samsung Advanced Institute of Technology, the Singapore-MIT Alliance Research and Technology Center, and Hamamatsu Corporation.

Study: Commercial air travel is safer than ever
Posted on Friday January 24, 2020

The rate of passenger fatalities has declined yet again in the last decade, accelerating a long-term trend.

It has never been safer to fly on commercial airlines, according to a new study by an MIT professor that tracks the continued decrease in passenger fatalities around the globe.

The study finds that between 2008 and 2017, airline passenger fatalities fell significantly compared to the previous decade, as measured per individual passenger boardings — essentially the aggregate number of passengers. Globally, that rate is now one death per 7.9 million passenger boardings, compared to one death per 2.7 million boardings during the period 1998-2007, and one death per 1.3 million boardings during 1988-1997.

Going back further, the commercial airline fatality risk was one death per 750,000 boardings during 1978-1987, and one death per 350,000 boardings during 1968-1977.
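
A quick back-of-the-envelope check in Python of those decade-over-decade improvement factors, using only the fatality rates quoted above:

```python
# Deaths per passenger boarding, by decade, as reported in the study.
rates = {
    "1968-1977": 1 / 350_000,
    "1978-1987": 1 / 750_000,
    "1988-1997": 1 / 1_300_000,
    "1998-2007": 1 / 2_700_000,
    "2008-2017": 1 / 7_900_000,
}

decades = list(rates)
for prev, curr in zip(decades, decades[1:]):
    improvement = rates[prev] / rates[curr]
    print(f"{prev} -> {curr}: risk fell by a factor of {improvement:.1f}")
# The last step prints roughly 2.9, matching Barnett's "closer to a factor
# of three" for the most recent decade.
```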

“The worldwide risk of being killed had been dropping by a factor of two every decade,” says Arnold Barnett, an MIT scholar who has published a new paper summarizing the study’s results. “Not only has that continued in the last decade, the [latest] improvement is closer to a factor of three. The pace of improvement has not slackened at all even as flying has gotten ever safer and further gains become harder to achieve. That is really quite impressive and is important for people to bear in mind.”

The paper, “Aviation Safety: A Whole New World?” was published online this month in Transportation Science. Barnett is the sole author.

The new research also reveals that there is discernible regional variation in airline safety around the world. The study finds that the nations housing the lowest-risk airlines are the U.S., the members of the European Union, China, Japan, Canada, Australia, New Zealand, and Israel. The aggregate fatality risk among those nations was one death per 33.1 million passenger boardings during 2008-2017. Barnett chose the nation as the unit of measurement in the study because important safety regulations for both airlines and airports are decided at the national level.

For airlines in a second set of countries, which Barnett terms the “advancing” set with an intermediate risk level, the rate is one death per 7.4 million boardings during 2008-2017. This group — comprising countries that are generally rapidly industrializing and have recently achieved high overall life expectancy and GDP per capita — includes many countries in Asia as well as some countries in South America and the Middle East.

For a third and higher-risk set of developing countries, including some in Asia, Africa, and Latin America, the death risk during 2008-2017 was one per 1.2 million passenger boardings — an improvement from one death per 400,000 passenger boardings during 1998-2007.

“The two most conspicuous changes compared to previous decades were sharp improvements in China and in Eastern Europe,” says Barnett, who is the George Eastman Professor of Management at the MIT Sloan School of Management. Those places, he notes, had safety achievements in the last decade that were strong even by the standards of the lowest-risk group of countries.

Overall, Barnett suggests, the rate of fatalities has declined far faster than public fears about flying.

“Flying has gotten safer and safer,” Barnett says. “It’s a factor of 10 safer than it was 40 years ago, although I bet anxiety levels have not gone down that much. I think it’s good to have the facts.”

Barnett is a long-established expert in the field of aviation safety and risk, whose work has helped contextualize accident and safety statistics. Whatever the absolute numbers of air crashes and fatalities may be — and they fluctuate from year to year — Barnett has sought to measure those numbers against the growth of air travel.

To conduct the current study, Barnett used data from a number of sources, including the Flight Safety Foundation’s Aviation Safety Network Accident Database. He mostly used data from the World Bank, based on information from the International Civil Aviation Organization, to measure the number of passengers carried, which is now roughly 4 billion per year.

In the paper, Barnett discusses the pros and cons of some alternative metrics that could be used to evaluate commercial air safety, including deaths per flight and deaths per passenger miles traveled. He prefers to use deaths per boarding because, as he writes in the paper, “it literally reflects the fraction of passengers who perished during air journeys.”

The new paper also includes historical data showing that even in today’s higher-risk areas for commercial aviation, the fatality rate is better, on aggregate, than it was in the leading air-travel countries just a few decades ago.

“The risk now in the higher-risk countries is basically the risk we used to have 40-50 years ago” in the safest air-travel countries, Barnett notes.

Barnett readily acknowledges that the paper is evaluating the overall numbers, and not providing a causal account of the air-safety trend; he says he welcomes further research attempting to explain the reasons for the continued gains in air safety.

In the paper, Barnett also notes that year-to-year air fatality numbers have notable variation. In 2017, for instance, just 12 people died in the process of air travel, compared to 473 in 2018.

“Even if the overall trendline is [steady], the numbers will bounce up and down,” Barnett says. For that reason, he thinks looking at trends a decade at a time is a better way of grasping the full trajectory of commercial airline safety.

On a personal level, Barnett says he understands the kinds of concerns people have about airline travel. He began studying the subject partly because of his own worries about flying, and quips that he was trying to “sublimate my fears in a way that might be publishable.”

Those kinds of instinctive fears may well be natural, but Barnett says he hopes that his work can at least build public knowledge about the facts and put them into perspective for people who are afraid of airplane accidents.

“The risk is so low that being afraid to fly is a little like being afraid to go into the supermarket because the ceiling might collapse,” Barnett says.

The new front against antibiotic resistance
Posted on Thursday January 23, 2020

Deborah Hung shares research strategies to combat tuberculosis as part of the Department of Biology's IAP seminar series on microbes in health and disease.

After Alexander Fleming discovered the antibiotic penicillin in 1928, spurring a “golden age” of drug development, many scientists thought infectious disease would become a horror of the past. But as antibiotics have been overprescribed and used without adhering to strict regimens, bacterial strains have evolved new defenses that render previously effective drugs useless. Tuberculosis, once held at bay, has surpassed HIV/AIDS as the leading cause of death from infectious disease worldwide. And research in the lab hasn’t caught up to the needs of the clinic. In recent years, the U.S. Food and Drug Administration has approved only one or two new antibiotics annually.

While these frustrations have led many scientists and drug developers to abandon the field, researchers are finally making breakthroughs in the discovery of new antibiotics. On Jan. 9, the Department of Biology hosted a talk by one of the chemical biologists who won’t quit: Deborah Hung, core member and co-director of the Infectious Disease and Microbiome Program at the Broad Institute of MIT and Harvard, and associate professor in the Department of Genetics at Harvard Medical School.

Each January during Independent Activities Period, the Department of Biology organizes a seminar series that highlights cutting-edge research in biology. Past series have included talks on synthetic and quantitative biology. This year’s theme is Microbes in Health and Disease. The team of student organizers, led by assistant professor of biology Omer Yilmaz, chose to explore our growing understanding of microbes as both pathogens and symbionts in the body. Hung’s presentation provided an invigorating introduction to the series.

“Deborah is an international pioneer in developing tools and discovering new biology on the interaction between hosts and pathogens,” Yilmaz says. “She's done a lot of work on tuberculosis as well as other bacterial infections. So it’s a privilege for us to host her talk.”

A clinician as well as a chemical biologist, Hung understands firsthand the urgent need for new drugs. In her talk, she addressed the conventional approaches to finding new antibiotics, and why they’ve been failing scientists for decades.

“The rate of resistance is actually far outpacing our ability to discover new antibiotics,” she said. “I’m beginning to see patients [and] I have to tell them, I’m sorry, we have no antibiotics left.”

The way Hung sees it, there are two long-term goals in the fight against infectious disease. The first is to find a method that will greatly speed up the discovery of new antibiotics. The other is to think beyond antibiotics altogether, and find other ways to strengthen our bodies against intruders and increase patient survival.

Last year, in pursuit of the first goal, Hung spearheaded a multi-institutional collaboration to develop a new high-throughput screening method called PROSPECT (PRimary screening Of Strains to Prioritize Expanded Chemistry and Targets). By weakening the expression of genes essential to survival in the tuberculosis bacterium, researchers genetically engineered over 400 unique “hypomorphs,” vulnerable in different ways, that could be screened in large batches against tens of thousands of chemical compounds using PROSPECT.

With this approach, it’s possible to identify effective drug candidates 10 times faster than ever before. Some of the compounds Hung’s team has discovered, in addition to those that hit well-known targets like DNA gyrase and the cell wall, are able to kill tuberculosis in novel ways, such as disabling the bacterium’s molecular efflux pump.

But one of the challenges to antibiotic discovery is that the drugs that will kill a disease in a test tube won’t necessarily kill the disease in a patient. In order to address her second goal of strengthening our bodies against disease-causing microbes, Hung and her lab are now using zebrafish embryos to screen small molecules not just for their extermination of a pathogen, but for the survival of the host. This way, they can investigate drugs that have no effect on bacteria in a test tube but, in Hung’s words, “throw a wrench in the system” and interact with the host’s cells to provide immunity.

For much of the 20th century, microbes were primarily studied as agents of harm. But, more recent research into the microbiome — the trillions of organisms that inhabit our skin, gut, and cavities — has illuminated their complex and often symbiotic relationship with our immune system and bodily functions, which antibiotics can disrupt. The other three talks in the series, featuring researchers from Harvard Medical School, delve into the connections between the microbiome and colorectal cancer, inflammatory bowel disease, and stem cells.

“We're just starting to scratch the surface of the dance between these different microbes, both good and bad, and their role in different aspects of organismal health, in terms of regeneration and other diseases such as cancer and infection,” Yilmaz says.

For those in the audience, these seminars are more than just a way to pass an afternoon during IAP. Hung addressed the audience as potential future collaborators, and she stressed that antibiotic research needs all hands on deck.

“It's always a work in progress for us,” she said. “If any of you are very computationally-minded or really interested in looking at these large datasets of chemical-genetic interactions, come see me. We are always looking for new ideas and great minds who want to try to take this on.”

Technique reveals whether models of patient risk are accurate
Posted on Thursday January 23, 2020

Computer scientists’ new method could help doctors avoid ineffective or unnecessarily risky treatments.

After a patient has a heart attack or stroke, doctors often use risk models to help guide their treatment. These models can calculate a patient’s risk of dying based on factors such as the patient’s age, symptoms, and other characteristics.

While these models are useful in most cases, they do not make accurate predictions for many patients, which can lead doctors to choose ineffective or unnecessarily risky treatments for some patients.

“Every risk model is evaluated on some dataset of patients, and even if it has high accuracy, it is never 100 percent accurate in practice,” says Collin Stultz, a professor of electrical engineering and computer science at MIT and a cardiologist at Massachusetts General Hospital. “There are going to be some patients for which the model will get the wrong answer, and that can be disastrous.”

Stultz and his colleagues from MIT, IBM Research, and the University of Massachusetts Medical School have now developed a method that allows them to determine whether a particular model’s results can be trusted for a given patient. This could help guide doctors to choose better treatments for those patients, the researchers say.

Stultz, who is also a professor of health sciences and technology, a member of MIT’s Institute for Medical Engineering and Science and Research Laboratory of Electronics, and an associate member of the Computer Science and Artificial Intelligence Laboratory, is the senior author of the new study. MIT graduate student Paul Myers is the lead author of the paper, which appears today in Digital Medicine.

Modeling risk

Computer models that can predict a patient’s risk of harmful events, including death, are used widely in medicine. These models are often created by training machine-learning algorithms to analyze patient datasets that include a variety of information about the patients, including their health outcomes.

While these models have high overall accuracy, “very little thought has gone into identifying when a model is likely to fail,” Stultz says. “We are trying to create a shift in the way that people think about these machine-learning models. Thinking about when to apply a model is really important because the consequence of being wrong can be fatal.”

For instance, a patient at high risk who is misclassified would not receive sufficiently aggressive treatment, while a low-risk patient inaccurately determined to be at high risk could receive unnecessary, potentially harmful interventions.

To illustrate how the method works, the researchers chose to focus on a widely used risk model called the GRACE risk score, but the technique can be applied to nearly any type of risk model. GRACE, which stands for Global Registry of Acute Coronary Events, is a large dataset that was used to develop a risk model that evaluates a patient’s risk of death within six months after suffering an acute coronary syndrome (a condition caused by decreased blood flow to the heart). The resulting risk assessment is based on age, blood pressure, heart rate, and other readily available clinical features.

The researchers’ new technique generates an “unreliability score” that ranges from 0 to 1. For a given risk-model prediction, the higher the score, the more unreliable that prediction. The unreliability score is based on a comparison of the risk prediction generated by a particular model, such as the GRACE risk score, with the prediction produced by a different model that was trained on the same dataset. If the models produce different results, then it is likely that the risk-model prediction for that patient is not reliable, Stultz says.
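
The paper’s exact formula is not reproduced here, but the underlying idea of scoring unreliability by how much two models trained on the same data disagree can be sketched as follows. This is a hypothetical illustration, not the authors’ implementation; the synthetic data and the logistic-regression and gradient-boosting stand-ins are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a clinical dataset: features (age, blood pressure,
# heart rate, ...) and a binary outcome (e.g., death within six months).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=2000) > 1).astype(int)

# Two different models trained on the same dataset: one stands in for a
# deployed risk score (like GRACE), the other for an alternative model.
risk_score = LogisticRegression(max_iter=1000).fit(X, y)
alternative = GradientBoostingClassifier().fit(X, y)

# Per-patient "unreliability": how much the two risk predictions disagree,
# which naturally falls between 0 and 1 for probability outputs.
p_score = risk_score.predict_proba(X)[:, 1]
p_alt = alternative.predict_proba(X)[:, 1]
unreliability = np.abs(p_score - p_alt)

# Flag the patients whose predictions disagree the most (top 1 percent).
threshold = np.quantile(unreliability, 0.99)
flagged = np.where(unreliability >= threshold)[0]
print(f"{len(flagged)} patients flagged as having unreliable risk predictions")
```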

“What we show in this paper is, if you look at patients who have the highest unreliability scores — in the top 1 percent — the risk prediction for that patient yields the same information as flipping a coin,” Stultz says. “For those patients, the GRACE score cannot discriminate between those who die and those who don’t. It’s completely useless for those patients.”

The researchers’ findings also suggested that the patients for whom the models don’t work well tend to be older and to have a higher incidence of cardiac risk factors.

One significant advantage of the method is that the researchers derived a formula that tells how much two predictions would disagree, without having to build a completely new model based on the original dataset. 

“You don’t need access to the training dataset itself in order to compute this unreliability measurement, and that’s important because there are privacy issues that prevent these clinical datasets from being widely accessible to different people,” Stultz says.

Retraining the model

The researchers are now designing a user interface that doctors could use to evaluate whether a given patient’s GRACE score is reliable. In the longer term, they also hope to improve the reliability of risk models by making it easier to retrain models on data that include more patients who are similar to the patient being diagnosed.

“If the model is simple enough, then retraining a model can be fast. You could imagine a whole suite of software integrated into the electronic health record that would automatically tell you whether a particular risk score is appropriate for a given patient, and then try to do things on the fly, like retrain new models that might be more appropriate,” Stultz says.

The research was funded by the MIT-IBM Watson AI Lab. Other authors of the paper include MIT graduate student Wangzhi Dai; Kenney Ng, Kristen Severson, and Uri Kartoun of the Center for Computational Health at IBM Research; and Wei Huang and Frederick Anderson of the Center for Outcomes Research at the University of Massachusetts Medical School.

Using artificial intelligence to enrich digital maps
Posted on Thursday January 23, 2020

Model tags road features based on satellite images, to improve GPS navigation in places with limited map data.

A model invented by researchers at MIT and Qatar Computing Research Institute (QCRI) that uses satellite imagery to tag road features in digital maps could help improve GPS navigation.  

Showing drivers more details about their routes can often help them navigate in unfamiliar locations. Lane counts, for instance, can enable a GPS system to warn drivers of diverging or merging lanes. Incorporating information about parking spots can help drivers plan ahead, while mapping bicycle lanes can help cyclists negotiate busy city streets. Providing updated information on road conditions can also improve planning for disaster relief.

But creating detailed maps is an expensive, time-consuming process done mostly by big companies, such as Google, which sends vehicles around with cameras strapped to their hoods to capture video and images of an area’s roads. Combining that with other data can create accurate, up-to-date maps. Because this process is expensive, however, some parts of the world are ignored.

A solution is to unleash machine-learning models on satellite images — which are easier to obtain and updated fairly regularly — to automatically tag road features. But roads can be occluded by, say, trees and buildings, making it a challenging task. In a paper being presented at the Association for the Advancement of Artificial Intelligence conference, the MIT and QCRI researchers describe “RoadTagger,” which uses a combination of neural network architectures to automatically predict the number of lanes and road types (residential or highway) behind obstructions.

In testing RoadTagger on occluded roads from digital maps of 20 U.S. cities, the model counted lane numbers with 77 percent accuracy and inferred road types with 93 percent accuracy. The researchers are also planning to enable RoadTagger to predict other features, such as parking spots and bike lanes.

“Most updated digital maps are from places that big companies care the most about. If you’re in places they don’t care about much, you’re at a disadvantage with respect to the quality of map,” says co-author Sam Madden, a professor in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “Our goal is to automate the process of generating high-quality digital maps, so they can be available in any country.”

The paper’s co-authors are CSAIL graduate students Songtao He, Favyen Bastani, and Edward Park; EECS undergraduate student Satvat Jagwani; CSAIL professors Mohammad Alizadeh and Hari Balakrishnan; and QCRI researchers Sanjay Chawla, Sofiane Abbar, and Mohammad Amin Sadeghi.

Combining CNN and GNN

Qatar, where QCRI is based, is “not a priority for the large companies building digital maps,” Madden says. Yet, it’s constantly building new roads and improving old ones, especially in preparation for hosting the 2022 FIFA World Cup.

“While visiting Qatar, we’ve had experiences where our Uber driver can’t figure out how to get where he’s going, because the map is so off,” Madden says. “If navigation apps don’t have the right information, for things such as lane merging, this could be frustrating or worse.”

RoadTagger relies on a novel combination of a convolutional neural network (CNN) — commonly used for image-processing tasks — and a graph neural network (GNN). GNNs model relationships between connected nodes in a graph and have become popular for analyzing things like social networks and molecular dynamics. The model is “end-to-end,” meaning it’s fed only raw data and automatically produces output, without human intervention.

The CNN takes as input raw satellite images of target roads. The GNN breaks the road into roughly 20-meter segments, or “tiles.” Each tile is a separate graph node, connected by lines along the road. For each node, the CNN extracts road features and shares that information with its immediate neighbors. Road information propagates along the whole graph, with each node receiving some information about road attributes in every other node. If a certain tile is occluded in an image, RoadTagger uses information from all tiles along the road to predict what’s behind the occlusion.
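
The sketch below illustrates that tile-graph idea in a simplified form; it is not the RoadTagger code. Random per-tile vectors stand in for the CNN’s image features, and a fixed neighbor-averaging step stands in for the learned GNN message passing.

```python
import numpy as np

# Each ~20-meter road tile is a graph node; edges connect consecutive tiles
# along the road. In RoadTagger a CNN produces per-tile features from
# satellite imagery; here random vectors stand in for that CNN output.
num_tiles, feat_dim = 12, 8
rng = np.random.default_rng(42)
tile_features = rng.normal(size=(num_tiles, feat_dim))

# Adjacency matrix for a simple chain of tiles along one road.
adjacency = np.zeros((num_tiles, num_tiles))
for i in range(num_tiles - 1):
    adjacency[i, i + 1] = adjacency[i + 1, i] = 1.0

def propagate(features: np.ndarray, adj: np.ndarray, steps: int) -> np.ndarray:
    """Simplified message passing: each tile repeatedly mixes in the average
    of its neighbors' features, so information spreads along the road."""
    degree = adj.sum(axis=1, keepdims=True)
    for _ in range(steps):
        neighbor_mean = (adj @ features) / np.maximum(degree, 1.0)
        features = 0.5 * features + 0.5 * neighbor_mean
    return features

# After several propagation steps, an occluded tile's representation is
# informed by the visible tiles around it, which is what lets the real
# model infer, say, a lane count hidden behind trees.
smoothed = propagate(tile_features, adjacency, steps=5)
print(smoothed.shape)  # (12, 8): one propagated feature vector per tile
```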

This combined architecture represents a more human-like intuition, the researchers say. Say part of a four-lane road is occluded by trees, so certain tiles show only two lanes. Humans can easily surmise that a couple lanes are hidden behind the trees. Traditional machine-learning models — say, just a CNN — extract features only of individual tiles and most likely predict the occluded tile is a two-lane road.

“Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks can’t do that,” He says. “Our approach tries to mimic the natural behavior of humans, where we capture local information from the CNN and global information from the GNN to make better predictions.”

Learning weights   

To train and test RoadTagger, the researchers used a real-world map dataset, called OpenStreetMap, which lets users edit and curate digital maps around the globe. From that dataset, they collected confirmed road attributes from 688 square kilometers of maps of 20 U.S. cities — including Boston, Chicago, Washington, and Seattle. Then, they gathered the corresponding satellite images from a Google Maps dataset.

In training, RoadTagger learns weights — which assign varying degrees of importance to features and node connections — of the CNN and GNN. The CNN extracts features from pixel patterns of tiles and the GNN propagates the learned features along the graph. From randomly selected subgraphs of the road, the system learns to predict the road features at each tile. In doing so, it automatically learns which image features are useful and how to propagate those features along the graph. For instance, if a target tile has unclear lane markings, but its neighbor tile has four lanes with clear lane markings and shares the same road width, then the target tile is likely to also have four lanes. In this case, the model automatically learns that the road width is a useful image feature, so if two adjacent tiles share the same road width, they’re likely to have the same lane count.

Given a road not seen in training from OpenStreetMap, the model breaks the road into tiles and uses its learned weights to make predictions. Tasked with predicting a number of lanes in an occluded tile, the model notes that neighboring tiles have matching pixel patterns and, therefore, a high likelihood to share information. So, if those tiles have four lanes, the occluded tile must also have four.

In another result, RoadTagger accurately predicted lane numbers in a dataset of synthesized, highly challenging road disruptions. As one example, an overpass with two lanes covered a few tiles of a target road with four lanes. The model detected mismatched pixel patterns of the overpass, so it ignored the two lanes over the covered tiles, accurately predicting four lanes were underneath.

The researchers hope to use RoadTagger to help humans rapidly validate and approve continuous modifications to infrastructure in datasets such as OpenStreetMap, where many maps don’t contain lane counts or other details. A specific area of interest is Thailand, Bastani says, where roads are constantly changing, but there are few if any updates in the dataset.

“Roads that were once labeled as dirt roads have been paved over so are better to drive on, and some intersections have been completely built over. There are changes every year, but digital maps are out of date,” he says. “We want to constantly update such road attributes based on the most recent imagery.”

Printing objects that can incorporate living organisms
Posted on Thursday January 23, 2020

A 3D printing system that controls the behavior of live bacteria could someday enable medical devices with therapeutic agents built in.

A method for printing 3D objects that can control living organisms in predictable ways has been developed by an interdisciplinary team of researchers at MIT and elsewhere. The technique may lead to 3D printing of biomedical tools, such as customized braces, that incorporate living cells to produce therapeutic compounds such as painkillers or topical treatments, the researchers say.

The new development was led by MIT Media Lab Associate Professor Neri Oxman and graduate students Rachel Soo Hoo Smith, Christoph Bader, and Sunanda Sharma, along with six others at MIT and at Harvard University’s Wyss Institute and Dana-Farber Cancer Institute. The system is described in a paper recently published in the journal Advanced Functional Materials.

“We call them hybrid living materials, or HLMs,” Smith says. For their initial proof-of-concept experiments, the team precisely incorporated various chemicals into the 3D printing process. These chemicals act as signals to activate certain responses in biologically engineered microbes, which are spray-coated onto the printed object. Once added, the microbes display specific colors or fluorescence in response to the chemical signals.

In their study, the team describes the appearance of these colored patterns in a variety of printed objects, which they say demonstrates the successful incorporation of the living cells into the surface of the 3D-printed material, and the cells’ activation in response to the selectively placed chemicals.

The objective is to make a robust design tool for producing objects and devices incorporating living biological elements, made in a way that is as predictable and scalable as other industrial manufacturing processes.

The team uses a multistep process to produce their hybrid living materials. First, they use a commercially available multimaterial inkjet-based 3D printer, and customized recipes for the combinations of resins and chemical signals used for printing. For example, they found that one type of resin, normally used just to produce a temporary support for overhanging parts of a printed structure and then dissolved away after printing, could produce useful results by being mixed in with the structural resin material. The parts of the structure that incorporate this support material become absorbent and are able to retain the chemical signals that control the behavior of the living organisms.

Finally, the living layer is added: a surface coating of hydrogel — a gelatinous material composed mostly of water but providing a stable and durable lattice structure — is infused with biologically engineered bacteria and spray-coated onto the object.

“We can define very specific shapes and distributions of the hybrid living materials and the biosynthesized products, whether they be colors or therapeutic agents, within the printed shapes,” Smith says. Some of these initial test shapes were made as silver-dollar-sized disks, and others in the form of colorful face masks, with the colors provided by the living bacteria within their structure. The colors take several hours to develop as the bacteria grow, and then remain stable once they are in place.

“There are exciting practical applications with this approach, since designers are now able to control and pattern the growth of living systems through a computational algorithm,” Oxman says. “Combining computational design, additive manufacturing, and synthetic biology, the HLM platform points toward the far-reaching impact these technologies may have across seemingly disparate fields, ‘enlivening’ design and the object space.”

The printing platform the team used allows the material properties of the printed object to be varied precisely and continuously between different parts of the structure, with some sections stiffer and others more flexible, and some more absorbent and others liquid-repellent. Such variations could be useful in the design of biomedical devices that can provide strength and support while also being soft and pliable to provide comfort in places where they are in contact with the body.

The team included specialists in biology, bioengineering, and computer science to come up with a system that yields predictable patterning of the biological behavior across the printed object, despite the effects of factors such as diffusion of chemicals through the material. Through computer modeling of these effects, the researchers produced software that they say offers levels of precision comparable to the computer-assisted design (CAD) systems used for traditional 3D printing systems.
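
To give a sense of the kind of effect such modeling has to capture, here is a generic one-dimensional diffusion sketch; it only illustrates how a deposited chemical signal spreads through a material over time, and is not the team’s design software.

```python
import numpy as np

# Toy 1D diffusion model: a chemical signal deposited at one spot spreads
# into neighboring regions over time. Generic finite-difference sketch with
# periodic boundaries, purely for illustration.
n_cells, n_steps = 50, 200
diffusion_rate = 0.2                  # must be <= 0.5 for this scheme to be stable
concentration = np.zeros(n_cells)
concentration[n_cells // 2] = 1.0     # signal deposited at the center

for _ in range(n_steps):
    left = np.roll(concentration, 1)
    right = np.roll(concentration, -1)
    concentration = concentration + diffusion_rate * (left - 2 * concentration + right)

# The peak flattens as the signal spreads, which is why a printed chemical
# pattern has to anticipate this blurring to produce a predictable result.
print(f"peak concentration after diffusion: {concentration.max():.3f}")
```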

The multiresin 3D printing platform can use anywhere from three to seven different resins with different properties, mixed in any proportions. In combination with synthetic biological engineering, this makes it possible to design objects with biological surfaces that can be programmed to respond in specific ways to particular stimuli such as light or temperature or chemical signals, in ways that are reproducible yet completely customizable, and that can be produced on demand, the researchers say.

“In the future, the pigments included in the masks can be replaced with useful chemical substances for human augmentation such as vitamins, antibodies or antimicrobial drugs,” Oxman says. “Imagine, for example, a wearable interface designed to guide ad-hoc antibiotic formation customized to fit the genetic makeup of its user. Or, consider smart packaging that can detect contamination, or environmentally responsive architectural skins that can respond and adapt — in real-time — to environmental cues.”

In their tests, the team used genetically modified E. coli bacteria, because these grow rapidly and are widely used and studied, but in principle other organisms could be used as well, the researchers say.

The team included Dominik Kolb, Tzu-Chieh Tang, Christopher Voigt, and Felix Moser at MIT; Ahmed Hosny at the Dana-Farber Cancer Institute of Harvard Medical School; and James Weaver at Harvard's Wyss Institute. It was supported by the Robert Wood Johnson Foundation, Gettylab, the DARPA Engineered Living Materials agreement, and a National Security Science and Engineering Faculty Fellowship.

Study: State-level adoption of renewable energy standards saves money and lives
Posted on Wednesday January 22, 2020

MIT researchers review renewable energy and carbon pricing policies as states consider repealing or relaxing renewable portfolio standards.

In the absence of federal adoption of climate change policy, states and municipalities in the United States have been taking action on their own. In particular, 29 states and the District of Columbia have enacted renewable portfolio standards (RPSs) requiring that a certain fraction of their electricity mix come from renewable power technologies, such as wind or solar. But now some states are rethinking their RPSs. A few are making them more stringent, but many more are relaxing or even repealing them.

To Noelle Eckley Selin, an associate professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences, and Emil Dimanchev SM ’18, a senior research associate at the MIT Center for Energy and Environmental Policy Research, that’s a double concern: The RPSs help protect not only the global climate, but also human health.

Past studies by Selin and others have shown that national-level climate policies designed to reduce carbon dioxide (CO2) emissions also significantly improve air quality, largely by reducing coal burning and related emissions, especially those that contribute to the formation of fine particulate matter, or PM2.5. While air quality in the United States has improved in recent decades, PM2.5 is still a threat. In 2016, some 93,000 premature deaths were attributed to exposure to PM2.5, according to the Institute for Health Metrics and Evaluation. Any measure that reduces those exposures saves lives and delivers health-related benefits, such as lower medical bills and less lost income and productivity.

If individual states take steps to reduce or repeal their RPSs, what will be the impacts on air quality and human health in state and local communities? “We didn’t really know the answer to that question, and finding out could inform policy debates in individual states,” says Selin. “Obviously, states want to solve the climate problem. But if there are benefits for air quality and human health within the state, that could really motivate policy development.”

Selin, Dimanchev, and their collaborators set out to define those benefits. Most studies of policies that change electricity prices focus on the electricity sector and on the costs and climate benefits that would result nationwide. The MIT team instead wanted to examine electricity-consuming activities in all sectors and track changes in emissions, air pollution, human health exposures, and more. And to be useful for state or regional decision-making, they needed to generate estimates of costs and benefits for the specific region that would be affected by the policy in question.

A novel modeling framework

To begin, the researchers developed the following framework for analyzing the costs and benefits of renewable energy and other “sub-national” climate policies.

  • They start with an economy-wide model that simulates flows of goods and services and money throughout the economy, from sector to sector and region to region. For a given energy policy, the model calculates how the resulting change in electricity price affects human activity throughout the economy and generates a total cost, quantified as the change in consumption: How much better or worse off are consumers? The model also tracks CO2 emissions and how they’re affected by changes in economic activity.
  • Next, they use a historical emissions dataset published by the U.S. Environmental Protection Agency that maps sources of air pollutants nationwide. Linking outputs of the economic model to that emissions dataset generates estimates of future emissions from all sources across the United States resulting from a given policy.
  • The emissions results go into an air pollution model that tracks how emitted chemicals become air pollution. For a given location, the model calculates resulting pollutant concentrations based on information about the height of the smoke stacks, the prevailing weather circulation patterns, and the chemical composition of the atmosphere.
  • The air pollution model also contains population data from the U.S. census for all of the United States. Overlaying the population data onto the air pollution results generates human exposures at a resolution as fine as 1 square kilometer.
  • Epidemiologists have developed coefficients that translate air pollution exposure to a risk of premature mortality. Using those coefficients and their outputs on human exposures, the researchers estimate the number of premature deaths in a geographical area that will result from the energy policy being analyzed.
  • Finally, based on values used by government agencies in evaluating policies, they assign monetary values to their calculated impacts of the policy on CO2 emissions and human mortality. For the former, they use the “social cost of carbon,” which quantifies the value of preventing damage caused by climate change. For the latter, they use the “value of statistical life,” a measure of the economic value of reducing the risk of premature mortality.

With that modeling framework, the researchers can estimate the economic cost of a renewable energy or climate policy and the benefits it will provide in terms of air quality, human health, and climate change. And they can generate those results for a specific state or region.
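
To make that chain of steps concrete, the short Python sketch below traces the accounting the framework performs: a policy's cost and physical changes go in, and monetized climate and health benefits come out. The function, its inputs, and every coefficient shown here are illustrative placeholders, not the team's models or the study's numbers; in the actual framework, each input is produced by a full economy-wide, emissions, or atmospheric model.

# Illustrative sketch of the cost-benefit chain described above:
# policy -> CO2 and PM2.5 changes -> exposure -> premature deaths -> dollars.
# All constants are placeholders, not values used in the study.
SOCIAL_COST_OF_CARBON = 40.0         # assumed $ per ton of CO2 avoided
VALUE_OF_STATISTICAL_LIFE = 9.0e6    # assumed $ per avoided premature death
MORTALITY_RISK_PER_UGM3 = 5.0e-5     # assumed avoided deaths per person-year
                                     # per 1 ug/m3 reduction in PM2.5

def evaluate_policy(policy_cost_usd, co2_reduction_tons,
                    pm25_reduction_ugm3, exposed_population):
    """Return monetized costs and benefits of a hypothetical policy scenario."""
    climate_benefit = co2_reduction_tons * SOCIAL_COST_OF_CARBON
    avoided_deaths = (pm25_reduction_ugm3 * MORTALITY_RISK_PER_UGM3
                      * exposed_population)
    health_benefit = avoided_deaths * VALUE_OF_STATISTICAL_LIFE
    return {
        "cost": policy_cost_usd,
        "climate_benefit": climate_benefit,
        "avoided_deaths": avoided_deaths,
        "health_benefit": health_benefit,
        "net_benefit": climate_benefit + health_benefit - policy_cost_usd,
    }

# Example run with made-up inputs for a single scenario
result = evaluate_policy(policy_cost_usd=3.5e9, co2_reduction_tons=70e6,
                         pm25_reduction_ugm3=0.3, exposed_population=60e6)
for key, value in result.items():
    print(f"{key}: {value:,.0f}")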

Case study: RPSs in the U.S. Rust Belt

As a case study, the team focused on the Rust Belt — in this instance, 10 states across the Midwest and Great Lakes region of the United States (see Figure 1 in the slideshow above). Why? Because they expected lawmakers in some of those states to be reviewing their RPSs in the near future.

On average, the RPSs in those states require customers to purchase 13 percent of their electricity from renewable sources by 2030. What will happen if states weaken their RPSs or do away with them altogether, as some are considering?

To find out, the researchers evaluated the impacts of three RPS options out to 2030. One is business as usual (BAU), which means maintaining the current renewables requirement of 13 percent of generation in 2030. Another boosts the renewables share to 20 percent in 2030 (RPS+50%), and another doubles it to 26 percent (RPS+100%). As a baseline, they modeled a so-called counterfactual (no-RPS), which assumes that all RPSs were repealed in 2015. (In reality, the average RPS in 2015 was 6 percent.)

Finally, they modeled a scenario that adds to the BAU-level RPS a “CO2 price,” a market-based climate strategy that caps the amount of CO2 that industry can emit and allows companies to trade carbon credits with one another. To the researchers’ knowledge, there have been no studies comparing the air quality impacts of such carbon pricing and an RPS using the same model plus consistent scenarios. To fill that gap, they selected a CO2 price that would achieve the same cumulative CO2 reductions as the RPS+100% scenario does.
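
For reference, the five scenarios can be summarized in a small lookup table. The Python encoding below is only a convenience for keeping the cases straight, not part of the study's tooling; the labels and renewables shares come from the description above.

# Scenario set for the Rust Belt case study, as described in the text
SCENARIOS = {
    "no-RPS":    {"renewables_share_2030": None, "co2_price": False},  # counterfactual: all RPSs repealed in 2015
    "BAU":       {"renewables_share_2030": 0.13, "co2_price": False},
    "RPS+50%":   {"renewables_share_2030": 0.20, "co2_price": False},
    "RPS+100%":  {"renewables_share_2030": 0.26, "co2_price": False},
    # CO2 price added on top of the BAU-level RPS, sized to match the
    # cumulative CO2 reductions of RPS+100%
    "CO2 price": {"renewables_share_2030": 0.13, "co2_price": True},
}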

Results of the analysis

The four maps in Figure 2 in the slideshow above show how the enactment of each policy would change air pollution — in this case, PM2.5 concentrations — in 2030 relative to having no RPS. The results are given in micrograms of PM2.5 per cubic meter. For comparison, the national average PM2.5 concentration was 8 micrograms per cubic meter in 2018.

The effects of the policy scenarios on PM2.5 concentrations (relative to the no-policy case) mostly occur in the Rust Belt region. From largest to smallest, the reductions occur in Maryland, Delaware, Pennsylvania, Indiana, Ohio, and West Virginia. Concentrations of PM2.5 are lower under the more stringent climate policies, with the largest reduction coming from the CO2 price scenario. Concentrations also decline in states such as Virginia and New York, which are located downwind of coal plants on the Ohio River.

Figure 3 in the slideshow above presents an overview of the costs (black), climate benefits (gray), and health benefits (red) in 2030 of the four scenarios relative to the no-RPS assumption. (All costs and benefits are reported in 2015 U.S. dollars.) A quick glance at the BAU results shows that the health benefits of the current RPSs exceed both the total policy costs and the estimated climate benefits. Moreover, while the cost of the RPS increases as its stringency increases, the climate benefits and — especially — the human health benefits jump up even more. The climate benefits from the CO2 price and the RPS+100% scenarios are, by definition, the same, but the cost of the CO2 price is lower and the health benefit is far higher.

Figure 4 in the slideshow above presents the quantitative results behind the Figure 3 chart. (Depending on the assumptions used, the analyses produced a range of results; the numbers here are the central values.) According to the researchers’ calculations, maintaining the current average RPS of 13 percent from renewables (BAU) would bring health benefits of $4.7 billion and implementation costs of $3.5 billion relative to the no-RPS scenario. (For comparison, Dimanchev notes that $3.5 billion is 0.1 percent of the total goods and services that U.S. households consume per year.) Boosting the renewables share from the BAU level to 20 percent (RPS+50%) would result in additional health benefits of $8.8 billion and $2.3 billion in costs. And increasing from 20 to 26 percent (RPS+100%) would result in additional health benefits of $6.5 billion and $3.3 billion in costs.

CO2 reductions due to the RPSs would bring estimated climate benefits comparable to the policy costs, and possibly larger, depending on the assumed value for the social cost of carbon. Assuming the central values, the climate benefits come to $2.8 billion for the BAU scenario, $6.4 billion for RPS+50%, and $9.5 billion for RPS+100%.

The analysis that assumes a carbon price yielded some unexpected results. The carbon price and the RPS+100% both bring the same reduction in CO2 emissions in 2030, so the climate benefits from the two policies are the same — $9.5 billion. But the CO2 price brings health benefits of $29.7 billion at a cost of $6.4 billion.
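
Combining the central values quoted above gives a rough net benefit for each scenario. The short calculation below is just that arithmetic, not an output of the study's models; because the RPS+50% and RPS+100% health benefits and costs are quoted incrementally, they are summed here, so small rounding differences from the study's own totals are possible.

# Net benefit in 2030 relative to no-RPS, in billions of 2015 dollars,
# computed from the central values quoted in the text.
scenarios = {
    # scenario: (cost, climate benefit, health benefit)
    "BAU":       (3.5,             2.8, 4.7),
    "RPS+50%":   (3.5 + 2.3,       6.4, 4.7 + 8.8),
    "RPS+100%":  (3.5 + 2.3 + 3.3, 9.5, 4.7 + 8.8 + 6.5),
    "CO2 price": (6.4,             9.5, 29.7),
}
for name, (cost, climate, health) in scenarios.items():
    print(f"{name:>9}: net benefit = ${climate + health - cost:.1f} billion")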

Dimanchev was initially surprised that health benefits were higher under the CO2 price than under the RPS+100% policy. But that outcome largely reflects the stronger effect that CO2 pricing has on coal-fired generation. “Our results show that CO2 pricing is a more effective way to drive coal out of the energy mix than an RPS policy is,” he says. “And when it comes to air quality, the most important factor is how much coal a certain jurisdiction is burning, because coal is by far the biggest contributor to air pollutants in the electricity sector.”

The politics of energy policy

While the CO2 price scenario appears to offer economic, health, and climate benefits, the researchers note that adopting a carbon pricing policy has historically proved difficult for political reasons — both in the United States and around the world. “Clearly, you’re forgoing a lot of benefits by doing an RPS, but RPSs are more politically attractive in a lot of jurisdictions,” says Selin. “You’re not going to get a CO2 price in a lot of the places that have RPSs today.”

And steps to repeal or weaken those RPSs continue. In summer 2019, the Ohio state legislature began considering a bill that would both repeal the state’s RPS and subsidize existing coal and nuclear power plants. In response, Dimanchev performed a special analysis of the benefits to Ohio of its current RPS. He concluded that by protecting human health, the RPS would generate an annual economic benefit to Ohio of $470 million in 2030. He further calculated that, starting in 2030, the RPS would avoid the premature deaths of 50 Ohio residents each year. Given the policy’s estimated cost of $300 million, he concluded that the RPS would have a net benefit to the state of $170 million in 2030.
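
That bottom line is a single subtraction; spelled out below using only the figures quoted in the paragraph above (no new numbers):

# Back-of-the-envelope net benefit to Ohio of keeping its RPS, in 2030
ohio_health_benefit_2030 = 470e6   # $ per year, health benefit of the RPS
ohio_rps_cost_2030 = 300e6         # $ per year, estimated cost of the policy
net_benefit = ohio_health_benefit_2030 - ohio_rps_cost_2030
print(f"Net benefit to Ohio in 2030: ${net_benefit / 1e6:.0f} million")  # 170 million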

When the state legislature took up the bill, Dimanchev presented those results on the Senate floor. In introductory comments, he noted that Ohio topped the nation in the number of premature deaths attributed to power plant pollution in 2005, more than 4,000 annually. And he stressed that “repealing the RPS would not only hamper a growing industry, but also harm human health.”

The bill passed, but in a form that significantly rolled back the RPS requirement rather than repealing it completely. So Dimanchev’s testimony may have helped sway the outcome in Ohio. But it could have a broader impact in the future. “Hopefully, Emil’s testimony raised some awareness of the tradeoffs that a state like Ohio faces as they reconsider their RPSs,” says Selin. Observing the proceedings in Ohio, legislators in other states may consider the possibility that strengthening their RPSs could actually benefit their economies and at the same time improve the health and well-being of their constituents.

Emil Dimanchev is a 2018 graduate of the MIT Technology and Policy Program and a former research assistant at the MIT Joint Program on the Science and Policy of Global Change (the MIT Joint Program). This research was supported by the U.S. Environmental Protection Agency through its Air, Climate, and Energy Centers Program, with joint funding to MIT and Harvard University. The air pollution model was developed as part of the EPA-supported Center for Clean Air and Climate Solutions. The economic model — the U.S. Regional Energy Policy model — is developed at the MIT Joint Program, which is supported by an international consortium of government, industry, and foundation sponsors. Dimanchev’s outreach relating to the Ohio testimony was supported by the Policy Lab at the MIT Center for International Studies. 

This article appears in the Autumn 2019 issue of Energy Futures, the magazine of the MIT Energy Initiative. 

Putting a finger on the switch of a chronic parasite infection
Posted on Tuesday January 21, 2020

Researchers find master regulator needed for Toxoplasma gondii parasite to chronically infect host; promising step toward infection treatment, prevention.

Toxoplasma gondii (T. gondii) is a parasite that chronically infects up to a quarter of the world’s population, causing toxoplasmosis, a disease that can be dangerous, or even deadly, for the immunocompromised and for developing fetuses. One reason that T. gondii is so pervasive is that the parasites are tenacious occupants once they have infected a host. They can transition from an acute infection stage into a quiescent life cycle stage and effectively barricade themselves inside of their host’s cells. In this protected state, they become impossible to eliminate, leading to long-term infection.

Researchers used to think that a combination of genes was involved in triggering the parasite’s transition into its chronic stage, due to the complexity of the process and because no single gene essential for differentiation had been identified. However, new research from Sebastian Lourido, Whitehead Institute member and assistant professor of biology at MIT, and MIT graduate student Benjamin Waldman has identified a single gene whose protein product is the master regulator, which is both necessary and sufficient for the parasites to make the switch. Their findings, which appeared online in the journal Cell on Jan. 16, illuminate an important aspect of the parasite’s biology and provide researchers with the tools to control whether and when T. gondii transitions, or undergoes differentiation. These tools may prove valuable for treating toxoplasmosis, since preventing the parasites from assuming their chronic form keeps them susceptible to both treatment and elimination by the immune system.

T. gondii spreads when a potential host, which can be any warm-blooded animal, ingests infected tissue from another animal — in the case of humans, by eating undercooked meat or unwashed vegetables — or when the parasite’s progeny are shed by an infected cat, T. gondii’s target host for sexual reproduction. When T. gondii parasites first invade the body, they are in a quickly replicating part of their life cycle, called the tachyzoite stage. Tachyzoites invade a cell, isolate themselves by forming a sealed compartment from the cell’s membrane, and then replicate inside of it until the cell explodes, at which point they move on to another cell to repeat the process. Although the tachyzoite stage is when the parasites do the most damage, it’s also when they are easily targetable by the immune system and medical therapies.

In order for the parasites to make their stay more permanent, they must differentiate into bradyzoites, a slow-growing stage, during which they are less susceptible to drugs and have too little effect on the body to trigger the immune system. Bradyzoites construct an extra-thick wall to isolate their compartment in the host cell and encyst themselves inside of it. This reservoir of parasites remains dormant and undetectable until, under favorable conditions, they can spring back into action, attacking their host or spreading to new ones.

Although the common theory was that multiple genes collectively orchestrate the transition from tachyzoite to bradyzoite, Lourido and Waldman suspected that there was instead a single master regulator.

“Differentiation is not something a parasite wants to do halfway, which could leave them vulnerable,” Waldman says. “Multiple genes means more chances for things to go wrong, so you would want a master regulator to ensure that differentiation happens cleanly.”

To investigate this hypothesis, Waldman used CRISPR-based screens to knock out T. gondii genes, and then tested to see if the parasite could still differentiate from tachyzoite to bradyzoite. Waldman monitored whether the parasites were differentiating by developing a strain of T. gondii that fluoresces in its bradyzoite stage. The researchers also performed a first-of-its-kind single-cell RNA sequencing of T. gondii in collaboration with members of Alex Shalek’s lab in the MIT Department of Chemistry. This sequencing allowed the researchers to profile the genes’ activity at each stage in unprecedented detail, shedding light on changes in gene expression during the parasite’s cell-cycle progression and differentiation.

The experiments identified one gene, which the researchers named Bradyzoite-Formation Deficient 1 (BFD1), as the master regulator: the only gene both necessary and sufficient for the transition from tachyzoite to bradyzoite. Not only was T. gondii unable to make the transition without the BFD1 protein, but Waldman found that artificially increasing its production induced the parasites to become bradyzoites, even without the usual stress triggers required to cue the switch. This means that the researchers can now control Toxoplasma differentiation in the lab.

These findings may inform research into potential therapies for toxoplasmosis, or even a vaccine.

“Toxoplasma that can’t differentiate is a good candidate for a live vaccine, because the immune system can eliminate an acute infection very effectively,” Lourido says.

The researchers’ findings also have implications for food production. T. gondii and other cyst-forming parasites that use BFD1 can infect livestock. Further research into the gene could inform the development of vaccines for farm animals as well as humans.

“Chronic infection is a huge hurdle to curing many parasitic diseases,” Lourido says. “We need to study and figure out how to manipulate the transition from the acute to chronic stages in order to eradicate these diseases.”

This study was supported by an NIH Director’s Early Independence Award, a grant from the Mathers Foundation, the Searle Scholars Program, the Beckman Young Investigator Program, a Sloan Fellowship in Chemistry, the National Institutes of Health, and the Bill and Melinda Gates Foundation.


 
