It has become overwhelmingly clear over the past few months that NASA's James Webb Space Telescope is doing exactly what it set out to do. As its creators hoped, the multibillion-dollar machine "unfolds the universe" perfectly, revealing cosmic light we can't see with our own eyes – and its stunning results have left even the most jaded astronomers feeling alive.
Because of this gold-plated telescope, Twitter once went wild over a murky red dot. For 48 hours, people around the world gawked at a galaxy born shortly after the birth of time itself. It seemed that, thanks to the technological prowess of the JWST, humanity was united over stardust.
But here’s the thing.
Amid the awe, Massachusetts Institute of Technology scientists warn we should consider a crucial scientific consequence of having a superhero telescope.
If the JWST is like a zero-to-100 hardware upgrade, they wonder, is it possible that our scientific models need a zero-to-100 reboot too? Are the tools scientists have been using for decades unable to match the power of the machine, and therefore failing to reveal what it's trying to tell us?
"The data we will get from the JWST will be incredible, but … our knowledge will be limited if our models don't match in quality," Clara Sousa-Silva, a quantum astrochemist at the Center for Astrophysics | Harvard & Smithsonian, told CNET.
And, according to a new study she co-authored, published Thursday in the journal Nature Astronomy, the answer is yes.
Specifically, the paper suggests that some of the light-analysis tools scientists normally use to understand exoplanet atmospheres are not fully equipped to handle the exceptional light data coming from the JWST. In the long run, such an obstacle could most heavily impact the JWST quest everyone cares about: the hunt for alien life.
"Currently, the model we use to decipher spectral information is not up to par with the precision and quality of data we have from the James Webb Telescope," Prajwal Niraula, a graduate student in MIT's Department of Earth, Atmospheric and Planetary Sciences and co-author of the study, said in a statement. "We have to up our game."
Here’s one way to think about the riddle.
Imagine pairing the newest and most powerful Xbox console with the very first iteration of a television. (Yes, I know how extreme this hypothetical is.) The Xbox would try to send the TV gorgeous, colorful, high-resolution graphics to show us – but the TV wouldn't have the ability to process any of it.
I wouldn't be surprised if the TV exploded. The point is, you wouldn't know what the Xbox is trying to give you unless you got an equally high-resolution TV.
Similarly, when it comes to exoplanet discoveries, scientists feed light, or photon, data from deep space into models that test "opacity." Opacity measures how easily photons pass through a material, and it varies with factors such as the wavelength of the light and the temperature and pressure of the material.
This means each such interaction leaves behind a signature that reveals the properties of the photons and therefore, in the case of exoplanets, the kind of chemical atmosphere those photons passed through to reach the light detector. This is how scientists calculate backwards, from light data, the composition of an exoplanet's atmosphere.
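To get an intuitive feel for opacity, here is a toy sketch in Python of the Beer-Lambert law, the textbook relation for how light dims as it passes through an absorbing gas. This is my own illustrative example with made-up numbers, not the model the MIT team tested:

```python
import math

def transmitted_fraction(cross_section_cm2, column_density_per_cm2):
    """Beer-Lambert law: the fraction of photons that pass straight
    through a column of absorbing gas. The optical depth tau is the
    absorption cross-section times the column density."""
    tau = cross_section_cm2 * column_density_per_cm2
    return math.exp(-tau)

# Hypothetical numbers, for illustration only: a cross-section of
# 1e-22 cm^2 and a column of 1e22 molecules per cm^2 give tau = 1,
# meaning roughly 37% of the photons survive the trip.
frac = transmitted_fraction(1e-22, 1e22)
print(f"transmitted fraction: {frac:.3f}")  # -> 0.368
```

Because the cross-section depends on wavelength, temperature, and pressure, measuring how much light survives at many wavelengths is what lets scientists work backwards to the gas that absorbed it.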
In this case, the detector in question sits aboard the James Webb Space Telescope – but in the team's new study, after testing the most commonly used opacity model, the researchers saw the light data from the JWST hit what they call a "precision wall."
The model wasn't sensitive enough to distinguish, the researchers say, whether a planet has an atmospheric temperature of 300 or 600 kelvin, or whether a certain gas makes up 5% or 25% of the atmosphere. Not only is such a difference statistically significant, but according to Niraula, "it is also important for allowing us to constrain planetary formation mechanisms and reliably identify biosignatures."
That is, evidence of extraterrestrial life.
“We need to work on our interpretive tools,” Sousa-Silva said, “so we don’t end up seeing something amazing through JWST and not knowing how to interpret it.”
Additionally, the team found that its models could conceal their own uncertainty. A few adjustments could easily hide ambiguous readings, passing off the results as a good fit even when they are wrong.
"We found that there are enough parameters to tweak, even with a wrong model, to still get a good fit, meaning you wouldn't know that your model is wrong and what it's telling you is wrong," Julien de Wit, an assistant professor in MIT's EAPS and co-author of the study, said in a press release.
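As a cartoon of this kind of degeneracy – my own toy example, not the paper's actual math – suppose a flawed model predicts an absorption signal that scales with gas abundance divided by temperature. Then two very different atmospheres can produce identical outputs, and the model can't tell them apart:

```python
def toy_signal(gas_abundance, temperature_k):
    """Purely illustrative scaling, not a real radiative-transfer model:
    the predicted signal depends only on abundance / temperature, so any
    pair with the same ratio is indistinguishable."""
    return 1000.0 * gas_abundance / temperature_k

warm_thin = toy_signal(0.05, 300.0)  # 5% gas at 300 K
hot_thick = toy_signal(0.10, 600.0)  # 10% gas at 600 K
print(warm_thin, hot_thick)  # both ~0.167: a "good fit" to either planet
```

Both atmospheres fit the data equally well in this cartoon, which is the trap de Wit describes: with enough knobs to turn, a wrong model can still look right.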
Going forward, the team urges that opacity models be improved to accommodate the JWST's dramatic revelations – calling in particular for cross-disciplinary work between astronomy and spectroscopy.
"There are so many things that could be done if we knew perfectly how light and matter interact," Niraula says. "We know that pretty well under Earth-like conditions, but as we move to different types of atmospheres, things change, and that's a lot of data, of increasing quality, that we risk misinterpreting."
De Wit compares the current opacity model to a trusty translation tool, like a Rosetta Stone for light, explaining that so far it has worked well – for instance, with data from the Hubble Space Telescope.
"But now that we're taking Webb's precision to the next level," the researcher said, "our translation process will prevent us from capturing important subtleties, such as those that make the difference between a planet being habitable or not."
As Sousa-Silva puts it, “it’s a call to improve our models, so that we don’t miss the intricacies of the data.”