United Nations Intergovernmental Panel on Climate Change (IPCC) reports, such as the recent 6th Assessment Report (AR6) and the five assessment reports before it, as well as the annual Friedlingstein et al. (CDIAC and ESSD) global carbon budget reports, and reports by NOAA, NASA, the UK Hadley Centre and Climatic Research Unit, etc., are comparing apples and oranges and then drawing conclusions which are overconfident and unjustified by scientific data from the real world. You are being conned.
The uncertainty of each data set must be propagated across all data sets in the study to the final result in order to properly compare, correlate, or blend different data sets. But this propagation is rarely done in climatology. The result is incorrect conclusions and overconfidence in those conclusions. Examples are provided in this paper.
There are standard statistical methods for propagating uncertainty (also known as error propagation, or random error propagation) when a mathematical operation is used to combine or compare data sets. The error or variability for each data set must be distributed to the final result. In other words, propagation of uncertainty enables estimation of the uncertainty in the result based on the uncertainties in the estimates and/or measurements which were used to calculate that result. “The Gaussian error propagation method is used to estimate the uncertainty of any mathematical expression that contains physical quantities with uncertainties.” http://www.julianibus.de/
Uncertainty is calculated by taking the partial derivatives of the function with respect to each variable, multiplying by the uncertainty in that variable, and then adding these separate terms. The error propagation formulae depend on the mathematical operation used for the calculation. So, for example, the formulae are different for addition than for division. The formulae are explained very well here: https://www.statisticshowto.com/statistics-basics/error-propagation/
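To make the mechanics concrete, here is a minimal Python sketch of Gaussian error propagation for addition and for a ratio. The input numbers at the bottom are hypothetical values chosen only to illustrate the method, not official figures:

```python
import math

def propagate_sum(dx, dy):
    # Uncertainty of x + y (or x - y): uncertainties add in quadrature
    return math.sqrt(dx**2 + dy**2)

def propagate_ratio(x, dx, y, dy):
    # Uncertainty of q = x / y: relative uncertainties add in quadrature
    q = x / y
    dq = abs(q) * math.sqrt((dx / x)**2 + (dy / y)**2)
    return q, dq

# Hypothetical illustration: a concentration with small uncertainty
# divided by an annual increase with larger uncertainty
x, dx = 412.48, 0.1   # e.g. an annual mean and its uncertainty
y, dy = 2.58, 0.22    # e.g. an annual increase and an assumed uncertainty
q, dq = propagate_ratio(x, dx, y, dy)
```

Note how the larger relative uncertainty (dy/y here) dominates the propagated result; the small relative uncertainty in x contributes almost nothing.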
For example, if proxy data for historical temperature from ice cores are to be coupled with proxy data for temperature estimated from tree rings, then there is a standard method to calculate the uncertainty in the result of combining these two data sets. However, the famous hockey stick (seen in Al Gore’s science fiction movie “An Inconvenient Truth”) is an unlikely outcome from such a coupling of data sets, because ice core data are notoriously variable for time periods less than 1000 years, while tree rings are more or less one-year growth events. The variation in annual temperature estimated on a yearly basis from ice cores is very large compared to the variation in annual temperature based on tree ring growth. When the large annual-level uncertainty in the ice cores is propagated to the result, that high uncertainty overwhelms the uncertainty in the annual tree ring data. The tree ring signal cannot be reproducibly detected within the noisy variation of the annual ice core data. How, then, can these two data sets be coupled together to produce the infamous hockey stick? They cannot be compared with scientific validity.
A better example: if (1) the net global CO2 concentration measured at NOAA-Scripps Global Monitoring Laboratory at Mauna Loa is to be correlated with (2) the estimates of the CO2 emissions from burning fossil fuels (FFCO2), then the respective uncertainties of these two very different data sets (apples and oranges) must be propagated to the result. The final correlation statistic and thus the confidence in the conclusion is a function of the combined uncertainties of (1) and (2).
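One common way to carry both uncertainties into the correlation statistic is a Monte Carlo approach, sketched below for illustration (the series and their 1-sigma uncertainties are hypothetical): jitter each series within its stated uncertainty many times and examine the spread of the resulting Pearson r values.

```python
import random
import statistics

def pearson_r(xs, ys):
    # Plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

def r_with_uncertainty(xs, sx, ys, sy, trials=1000, seed=1):
    # Monte Carlo: perturb each point by its 1-sigma uncertainty,
    # recompute r each time, and summarize the spread of r
    rng = random.Random(seed)
    rs = []
    for _ in range(trials):
        xj = [x + rng.gauss(0, sx) for x in xs]
        yj = [y + rng.gauss(0, sy) for y in ys]
        rs.append(pearson_r(xj, yj))
    return statistics.mean(rs), statistics.stdev(rs)
```

When the uncertainty of either series is large relative to its signal, the distribution of r widens and its mean shrinks toward zero, and confidence in the correlation must be reduced accordingly.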
Unfortunately, propagation of uncertainty seems to be seldom practiced in climatology. The result of this failure is expressed in typically hyperbolic semantics such as an “unprecedented” or “dangerous” increase in CO2 or temperature, a “climate crisis,” and the like.
Propagation of uncertainty is an objective and standard method to validate and justify the elimination and exclusion of outlier data points and outlier data sets. Propagation of uncertainty reveals where one set of data will be lost (statistically insignificant and possibly unmeasurable) due to the variability of the other data set.
Here’s another example that appears throughout the orthodox literature on human-caused global warming.
NOAA-Scripps describes its CO2 data as follows (the apple):
# The uncertainty in the global annual mean is estimated using a monte carlo
# technique that computes 100 global annual averages each time using a
# slightly different set of measurement records from the NOAA ESRL cooperative
# air sampling network. The reported uncertainty is the mean of the standard
# deviations for each annual average using this technique. Please see
# Conway et al. See www.esrl.noaa.gov/gmd/ccgg/trends/ for additional details.
Therein, from the 1980 annual average CO2 of 338.91 ppm to the 2020 annual average of 412.48 ppm, NOAA calculates the uncertainty of its CO2 data for each year as 0.1. (This is only from the ‘gold standard’ lab at GML Mauna Loa. If the other GML stations in Alaska, Samoa, etc. are included, the uncertainty differences among the stations should be propagated to the resulting total uncertainty for all GML CO2 measurements used.)
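The Monte Carlo technique NOAA describes can be sketched in simplified form. In the sketch below the station values are hypothetical, and the leave-one-out perturbation is only a stand-in for NOAA's actual procedure of resampling its measurement network:

```python
import random
import statistics

def annual_mean_uncertainty(station_records, n_draws=100, seed=0):
    # Simplified sketch of a NOAA-style Monte Carlo: build many annual
    # averages, each from a slightly different subset of station records,
    # and summarize the spread of those averages.
    rng = random.Random(seed)
    n = len(station_records)
    means = []
    for _ in range(n_draws):
        # Drop one station at random to mimic "a slightly different set
        # of measurement records" (a bootstrap-like perturbation)
        subset = rng.sample(station_records, n - 1)
        means.append(statistics.mean(subset))
    return statistics.mean(means), statistics.stdev(means)

# Hypothetical station annual means (ppm), for illustration only
records = [412.1, 412.6, 412.4, 412.7, 412.3, 412.5, 412.2, 412.8]
mu, sigma = annual_mean_uncertainty(records)
```

The reported uncertainty is then a summary of the spread among the resampled averages; when the station records agree closely, that spread is small.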
Contrast the above “apple” of 0.1 uncertainty for Mauna Loa data with the following “orange,” based on the paper: Robert J. Andres, Thomas A. Boden & David Higdon (2014), “A new evaluation of the uncertainty associated with CDIAC estimates of fossil fuel carbon dioxide emission.”
From the conclusion: “Despite its importance, the characterisation of uncertainty on estimates of the global total FFCO2 [fossil fuel CO2] emission made from the CDIAC database is still cumbersome. The lack of independent measurements at the spatial and temporal scales of interest complicates the characterisation. The mix of dependent and independent data used in the CDIAC calculations further complicates the determination. The three cases presented above collectively give a range of uncertainty that spans 1.0 to 13%. However, the end members of this range are not calculated on the same basis and each case measures different aspects…” “…As the contribution from different countries changes annually, so does the annual global uncertainty change. Global uncertainty has been increasing recently (Fig. 4) because more emissions are coming from countries with less certain data collection and management practices (Fig. 5).” …”As data are revised, missing data are reported and methodology refined, global uncertainty for a given emission year settles to typically less than 2% growth after initial data publication.”…”Finally, this analysis gives updated uncertainty assessments for the CDIAC FFCO2 global estimates. It is anticipated that these uncertainty assessments will have three primary impacts. First, these assessments remind the community that FFCO2 emissions have a non-zero uncertainty associated with them. Second, that this uncertainty is significant, either in isolation or in relation to other components of the global carbon cycle (Fig. 10). Third, that these uncertainty assessments will be used in the next-generation inverse (and other) models to better understand and constrain the global carbon cycle.”
In the body of the same paper:
“CDIAC has never published quantitative values for the uncertainty in national emissions, although many data users are aware that the uncertainty varies widely among countries.”…(page 1)
“Adding to the cumbersomeness of the uncertainty quantification presented, FFCO2 emission estimates do not fit neatly into the categories of dependent or independent data (at nearly all levels of the calculations) for which classical uncertainty quantification approaches are well established.”…(page 2)
Below is a table from this paper to illustrate the wide differences in FFCO2 uncertainty by country.
In the example in the table above, the uncertainty of the FFCO2 data from Mexico is about 50 times higher than that for the USA. FFCO2 is calculated from the fossil fuel production data which countries report, compiled at Oak Ridge National Laboratory, so the uncertainty in the production data carries through to the FFCO2 estimates. The authors arrive at an uncertainty for the total FFCO2 data set from all countries of 8.4% at 2 sigma (i.e., 2 standard deviations).
“The three assessments collectively give a range that spans from 1.0 to 13% (2 sigma). Greatly simplifying the assessments give a global fossil fuel carbon dioxide uncertainty value of 8.4% (2 sigma).”
Thus the total uncertainty in this estimate of FFCO2 is 84 times larger (i.e., less precise, less reproducible, less likely to be found within the Gaussian, normally distributed data) than the 0.1% uncertainty in the measured NOAA-Scripps Mauna Loa CO2 data.
Nevertheless, in orthodox climatology, FFCO2 is unequivocally held to be driving (dangerously and catastrophically! or so they claim) the trend in net global CO2 concentration, and thus causing global warming (or so they claim.) This is an “apples and oranges” comparison and they have drawn an invalid conclusion.
Uncertainty and statistical significance are functions of variability in the source data. The difference between the uncertainty (or precision or reproducibility) of the measured Mauna Loa data and the uncertainty of the estimated fossil fuel CO2 (FFCO2) is so large that any supposed correlation will most likely be rendered worthless. And that is exactly what statistical analysis finds.
Our two apples-and-oranges examples above are very different. In the first example, we attempt to compare estimated average annual temperature from tree rings with estimated 1000-year average temperature based on ice cores. If these two proxies are combined, the tree ring signal will be lost in the high variability of the ice cores at the annual level: ice core data at resolutions finer than about 1000 years become too uncertain to be resolved with 2 sigma significance. Relative to annual ice core data, the error bars for the temperature proxy from tree rings are very much smaller. Coupling these two data sets together is invalid.
In the second example, the uncertainty of estimated fossil fuel CO2 emissions is about 80 times larger than the uncertainty of the net CO2 concentration measured at Mauna Loa, but the 2020 Mauna Loa net CO2 concentration of about 412 ppm is about 160 times larger than the net annual increase in CO2 due to all sources and sinks, natural and human. The net increase in CO2 due to fossil fuels is necessarily an amount less than the 2.58 ppm net increase for 2020; the net increase for 2020 is about 0.6% of the net total CO2 for 2020. This allows a useful conclusion in the second example.
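The ratios quoted here are simple to verify; a quick sketch of the arithmetic using the 2020 figures above:

```python
# Arithmetic behind the ratios quoted in the text (2020 values)
co2_total_ppm = 412.48        # Mauna Loa annual mean, 2020
co2_net_increase_ppm = 2.58   # annual net increase, all sources and sinks

# Total concentration vs. the annual net increase
ratio = co2_total_ppm / co2_net_increase_ppm              # ≈ 160

# Annual net increase as a fraction of the total
increase_fraction = co2_net_increase_ppm / co2_total_ppm  # ≈ 0.006, i.e. ~0.6%
```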
In the second example, net global CO2 measured at Mauna Loa is highly variable on daily, weekly, monthly and annual bases, but the measurement is highly reproducible. Uncertainty is low. The annual data, although still variable, have a very low uncertainty, calculated by NOAA-Scripps at 0.1. The NOAA data are very reproducible; the measured data very likely fall within the normally distributed Gaussian bell curve. The annual variation is about +/- 5 ppm, an easily visible shark’s-tooth pattern reproduced each year in the net global CO2 measurements.
Daily Mauna Loa data are variable, as seen in the graph below, but this variability is hidden somewhat in the graph above by the effect of averaging to calculate weekly, monthly and annual data. The cyclical shark’s-tooth pattern is still clearly observed in the daily data.
Source: Bromley & Tamarkin (2022). Figure 9. https://budbromley.blog/2022/05/20/correcting-misinformation-on-atmospheric-carbon-dioxide/
The net CO2 increase due to fossil fuels for 2020, which must be less than 2.58 ppm, is of the same order of magnitude as the annual variability of the measured net global CO2. But the erratic trend in estimated CO2 due to fossil fuels cannot be detected in the measured trend of net global CO2, which is 160 times larger and about 80 times more reproducible; this is important. The estimated CO2 due to fossil fuels is highly uncertain, outside of the normally distributed data in the Gaussian bell curve, and cannot be distinguished from random noise in the data. If these estimated fossil fuel data were more certain, more reproducible, then we might be able to calculate the net CO2 due to fossil fuels. The estimated CO2 emissions due to fossil fuels, no matter how many gigatons are estimated, is a useless and invalid amount with regard to calculations and models of the global carbon balance, because we cannot measure the net CO2 from fossil fuels. We do not know the amount of CO2 from fossil fuels which is absorbed. Therefore, it is incorrect for proponents of human-caused climate change, UN IPCC et al., to claim that CO2 from fossil fuels is causing the rising trend in net global CO2.
By analogy, this is like attempting to spot a 1:600 scale model of the International Space Station floating on the ocean while flying overhead in the actual International Space Station. Even with high zoom magnification we cannot find the small-scale model on the ocean surface. One might think that the highly variable, near-random movements of the model floating in the ocean would be like a red flag visible from the stable platform and highly predictable motion of the actual space station. But frequently the ocean waves or some other event completely obscure the floating model.
“The essence of the theory of anthropogenic global warming (AGW) is that fossil fuel emissions cause warming by increasing atmospheric CO2 levels and that therefore the amount of warming can be attenuated by reducing fossil fuel emissions (Hansen, 1981) (Meinshausen, 2009) (Stocker, 2013) (Callendar, 1938) (Revelle, 1957) (Lacis, 2010) (Hansen, 2016) (IPCC, 2000) (IPCC, 2014). At the root of the proposed AGW causation chain is the ability of fossil fuel emissions to cause measurable changes in atmospheric CO2 levels in excess of natural variability.” Munshi, Jamal. Revised 2017. RESPONSIVENESS OF ATMOSPHERIC CO2 TO FOSSIL FUEL EMISSIONS: UPDATED. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2997420
However, as Dr. Munshi, a professor of business statistics, shows in his several papers, the AGW argument is spurious and without merit because there is no correlation between the trend in estimated FFCO2 and the trend in measured Mauna Loa CO2 concentration. The apparent correlation results from visual inspection and intuition, not statistical analysis. When these two data sets are detrended to remove the cumulative effects of the shared timeline, the intuitive correlation disappears. This is shown in the graphic below from Professor Munshi.
Spurious correlations between cumulative values of random numbers are illustrated in this 55 second video. https://youtu.be/8YVHJRyY3_I
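The effect shown in the video is easy to reproduce: two series of independent random increments look uncorrelated, but their cumulative sums share a rising trend and therefore appear strongly correlated. A minimal Python sketch:

```python
import random

def pearson_r(xs, ys):
    # Plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(42)
n = 500
# Two series of INDEPENDENT random increments with a common positive drift
a = [rng.gauss(1, 1) for _ in range(n)]
b = [rng.gauss(1, 1) for _ in range(n)]

# Their cumulative sums both trend upward, so they appear correlated
cum_a, cum_b, ta, tb = [], [], 0.0, 0.0
for x, y in zip(a, b):
    ta += x
    tb += y
    cum_a.append(ta)
    cum_b.append(tb)

r_cumulative = pearson_r(cum_a, cum_b)  # large, but spurious
r_detrended = pearson_r(a, b)           # near zero: the real relationship
```

The cumulative series show a high r purely because both drift upward with time; the increments themselves, which carry the actual information, are uncorrelated.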
IPCC, NASA, NOAA, CDIAC, etc. continue with this false assumption of human causation, claiming in widely, expensively, and pervasively publicized literature that this is an unequivocal conclusion. But, in fact, that conclusion is easily disproven on the basis of CO2 alone. There is no need to analyze a possible temperature correlation with CO2; such a correlation would require solving simultaneous partial differential equations with many co-dependent variables. Although these equations can be solved, the result has even higher uncertainty, too high to be of any value. As with the CO2 data, the uncertainties in the temperature data sets must also be propagated along with the CO2 uncertainties in the differential equations. The temperature variable only adds more uncertainty to the CO2 uncertainty.
Elaborate and hugely expensive carbon budgets produced each year by 50 or more scientists, as well as many different computer models, are wrong. Analysis of uncertainty proves them wrong.
Figure 3 source: Munshi, Jamal. RESPONSIVENESS OF ATMOSPHERIC CO2 TO FOSSIL FUEL EMISSIONS: PART 2. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2862438
AGW proponents are most likely aware of these problems with their causation hypothesis (i.e., that CO2 from human use of fossil fuels is the primary cause of increasing global CO2 concentration, and thereby of increased global warming). The fundamental and most simple AGW causation hypothesis is that fossil fuel emissions cause atmospheric CO2 concentration to rise. But the evidence above disproves that hypothesis. So, in their many climate budget models, they created a circular argument which they call the “Airborne Fraction”: the portion of annual CO2 emissions used to explain the annual change in atmospheric CO2 concentration. The evidence offered is a list of estimated carbon sinks and sources, e.g., land use, cement production, inorganic CO2 to and from the ocean, CO2 to and from forests, etc. Each of these estimated fluxes has an uncertainty, and each of these uncertainties should be propagated to the final modelled result. Some of the uncertainties in these fluxes vary by an order of magnitude. Probably their estimated “Airborne Fraction” will disappear when the uncertainties are propagated. It is also a logically invalid argument, because they start with an assumed conclusion and then backfill with hypothetical estimates for these various fluxes. (We used to call those fudge factors.)
Dr. Munshi explains, “Circular reasoning creates a bias in favor of presumed findings and against alternate explanations of the data. This form of argumentation is not considered valid because it subsumes the conclusion, makes assumptions that facilitate the desired result, or otherwise validates a hypothesis with assumptions implicit in the hypothesis (Walton, 1991) (Rips, 2002) (Hahn, 2007) (Tavallaee, 2009). Yet, circular reasoning is surprisingly common in published research (Finkelstein, 2007) (Finkelstein, 2010) (Wiggins, 1989) (McCarroll, 2002) (Sackett, 1979).“ Excerpt from the abstract: “Circular reasoning is used in the IPCC carbon budget to relate atmospheric CO2 to fossil fuel emissions as a way of dealing with insurmountable measurement problems. No evidence exists to relate changes in atmospheric CO2 or the rate of warming to fossil fuel emissions because correlations presented for these relationships are spurious. The UNFCCC holds annual COP meetings and calls for reductions in fossil fuel emissions to attenuate global warming without evidence that warming is related to emissions.” Munshi, Jamal. SOME METHODOLOGICAL ISSUES IN CLIMATE SCIENCE. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2873672
The argument by proponents of human-caused global warming (AGW) claims in definitive and unequivocal terms that human-produced FFCO2 has increased atmospheric CO2 concentration. Proponents claim that this human-increased CO2 concentration in the atmosphere has increased the greenhouse effect; and that the human-produced increase in greenhouse effect in turn increases the earth’s surface temperature which, if not stopped or reversed by human actions, will cause Catastrophic Climate Change (CCC) (IPCC, 2014) (Hansen, 1981) (Lacis, 2010), and all manner of crises. There is no evidence for these claims. Their computer models, which are only hypotheses, are too uncertain to be used for climate predictions, policies, laws and regulations.