Papale, D.; Shaukat, S.; Sabbatini, S.; Nicolini, G.; Moffat, A.M.; Belelli Marchesini, L.; Heinesch, B.; Berveiller, D.; Brummer, C.; Brut, A.; Chipeaux, C.; Czerný, R.; Graf, A.; Grünwald, T.; Hörtnagl, L.K.; Klosterhalfen, A.; Lafont, S.; Manise, T.; Montagnani, L.; Moreaux, V.; Peichl, M.; Schrader, F.; Tallec, T. (2022). Standardization of Eddy Covariance Measurements: role of setup, calculation and filtering. In: AGU Fall Meeting 2022, Chicago, IL, 12-16 December 2022. url: https://agu.confex.com/agu/fm22/meetingapp.cgi/Paper/1144843 handle: https://hdl.handle.net/10449/77757
Standardization of Eddy Covariance Measurements: role of setup, calculation and filtering
Belelli Marchesini, L.;
2022-01-01
Abstract
The eddy covariance (EC) technique is a widely recognized method for measuring greenhouse gas (GHG) and energy fluxes between ecosystems and the atmosphere. Today's market offers a variety of sonic anemometers and gas analyzers with different designs and features. These options, together with the different data processing techniques in common use, are a potential source of uncertainty that can affect inter-site comparisons. The performance and specifications of individual sensors do not always reflect the specification and uncertainty of the final measurements, and there are no equivalent measurements that could be used for validation. Long-term research infrastructures (e.g., ICOS or NEON) have standardized the technique at its different steps, from sensor selection to instrument setup and data processing. On the other hand, there is no perfect sensor that can accommodate all possible environmental conditions without potential biases or issues. In this synthesis study, the effect of standardization is analysed using data from 15 sites covering different climates and ecosystems, where two EC systems ran in parallel, one of them standardized. The data were then processed both by the individual station teams and centrally, in order to evaluate differences due to setups and processing. Results show that the differences between the two systems and processing chains are site dependent, and that both setup and processing play a role. The effect of standardization of the EC setup has been quantified on average at between 10 and 16 % in carbon flux (FC), 11 and 19 % in latent heat flux (LE) and 5 and 7 % in sensible heat flux (H). Differences due to processing methods are in general smaller for the standardized setup (9 % in FC, 14 % in LE and 10 % in H) than for the non-standardized setup (17 % in FC, 16 % in LE and 12 % in H).
Given the complexity of the EC approach and its several steps (setup, calculation, filtering), it is challenging to point to a single common factor that explains the variation and differences among sites. Standardization helps to minimize differences when small changes must be detected (in time and among sites), although it does not ensure the correctness of the absolute values. The correct storage and organization of raw data and metadata are the only possible solution to allow future reanalysis and correct interpretation of the data.
File | Type | License | Size | Format | Access
---|---|---|---|---|---
2022 AGU Belelli.pdf | Publisher's layout (Versione editoriale) | All rights reserved | 1.66 MB | Adobe PDF | Open access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.