Cross-cutting activities (CCA)
Ad-hoc cross-cutting activities deal with issues that span several working groups, such as spatial representativeness, forecasting, and the combined use of monitoring and modeling to support assessment and planning applications.

CCA1 - Spatial representativeness
Lead: JRC; Co-ordinator: O. Kracht
1 Review existing methodologies and current needs within the FAIRMODE community in the fields of spatial representativeness, station classification, and related topical areas.
2 Support the development of the MQO: Uncertainty estimates derived from geo-statistical methods (variography of monitoring data) can contribute a further level of detail to the MQO formulation, in addition to monitoring uncertainty. A methodology to assess the spatial representativeness of measurement stations will be developed for this purpose. Depending on the outcomes of this research, such a method can also supply information for a better design of monitoring networks.
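As a rough illustration of the variography idea, the sketch below computes an empirical semivariogram from station values, i.e. the half mean squared difference between station pairs as a function of their separation distance. The coordinates, concentrations and distance bins are purely hypothetical; the actual FAIRMODE methodology is still to be developed.

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Empirical semivariance gamma(h) = 0.5 * mean[(z_i - z_j)^2]
    over station pairs whose separation falls in each distance bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    # all unordered station pairs: distances and squared value differences
    i, j = np.triu_indices(n, k=1)
    dists = np.linalg.norm(coords[i] - coords[j], axis=1)
    sqdiff = (values[i] - values[j]) ** 2
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        gamma.append(0.5 * sqdiff[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Hypothetical example: 5 stations on a transect with a smoothly
# varying concentration field (units and values are illustrative)
coords = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
conc = [30.0, 28.0, 25.0, 21.0, 16.0]
gamma = empirical_semivariogram(coords, conc, bin_edges=[0, 1.5, 3.0, 4.5])
```

For a spatially smooth field the semivariance grows with distance; the distance at which it levels off gives one possible indication of a station's representativeness radius.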
3 Improvement of the model evaluation methodology: A methodology to automatically screen for anomalies within records of the AirBase database will bring a clear benefit for choosing adequate monitoring sites for model evaluation purposes. The approach is based on spatio-temporal neighborhood statistics and is currently applicable to background-type stations.
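The neighborhood-statistics idea can be illustrated with a minimal sketch: a station record is flagged at time steps where it deviates strongly from the median of the other background stations. The data, the robust score based on the median absolute deviation, and the threshold are illustrative assumptions, not the actual AirBase screening algorithm.

```python
import statistics

def flag_anomalies(series_by_station, station, threshold=3.0):
    """Flag time steps where `station` deviates from the median of the
    other stations by more than `threshold` robust standard scores."""
    others = [s for name, s in series_by_station.items() if name != station]
    target = series_by_station[station]
    flags = []
    for t, value in enumerate(target):
        neighbours = [s[t] for s in others]
        med = statistics.median(neighbours)
        # median absolute deviation as a robust spread estimate
        mad = statistics.median(abs(x - med) for x in neighbours) or 1e-9
        flags.append(abs(value - med) / (1.4826 * mad) > threshold)
    return flags

# Synthetic example: station "C" has a spike at the third time step
stations = {"A": [10, 11, 10, 12], "B": [11, 10, 11, 11], "C": [10, 11, 50, 12]}
flags = flag_anomalies(stations, "C")
```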
4 Evaluate the feasibility of methodological comparisons (e.g. on shared datasets). However, the methodological diversity of the different approaches might impose significant limitations in this regard.
5 Assessing the representativeness of source contribution estimates derived from field data is essential for their proper interpretation. Interest has been expressed in reviewing progress on this subject within the FAIRMODE community.

CCA2 - Monitoring & modeling
Lead: Uni. Aveiro; Co-ordinator: A. Miranda
1 Comparison of various methodologies (for assessment and planning) in which monitoring and modeling data are used in conjunction. This topic was already discussed by a FAIRMODE working subgroup in the past (2010-2011), and the findings were presented in a discussion document [6]. This document will be the starting point for assessing current best practices.
2 When model output and monitoring data are combined, it is no longer straightforward to validate the final results in an independent way. Guidance will be provided on how to tackle this issue, and it will be explored if and how this can be incorporated into the model quality objectives and model evaluation tool.
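One common way to approximate independent validation in such cases is leave-one-station-out cross-validation: rebuild the combined field without each station in turn and evaluate it at the withheld site. The sketch below uses a deliberately simple mean-bias "fusion" as a stand-in for kriging or assimilation; the station names and values are hypothetical.

```python
def fuse(model, obs):
    """Toy 'fusion': shift the model field by the mean model-minus-obs
    bias at the stations used (stands in for kriging/assimilation)."""
    bias = sum(model[s] - obs[s] for s in obs) / len(obs)
    return {s: model[s] - bias for s in model}

def leave_one_out_errors(model, obs):
    """Error at each station when the fusion is built without it,
    giving a quasi-independent estimate of the fused field's skill."""
    errors = {}
    for held_out in obs:
        training = {s: v for s, v in obs.items() if s != held_out}
        fused = fuse(model, training)
        errors[held_out] = fused[held_out] - obs[held_out]
    return errors
```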
3 Planning is one of the most important applications of air quality models. However, today very little effort is spent on the validation of models in planning mode. How do we use monitoring data to assess the planning capabilities of our modelling tools? What are best practices for so-called dynamical evaluation?
4 When models are used for planning purposes, how do we make sure that at least the base year is simulated well? How do we correct for observed biases/deviations in the base year (e.g. underestimation of PM), and how do we take this information into account in the planning simulations for future years?
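As one illustration of the question (an assumption for the sketch, not guidance endorsed by FAIRMODE), a multiplicative correction derived from the base year can be carried into the scenario year, assuming the relative bias persists:

```python
def corrected_projection(obs_base, mod_base, mod_scenario):
    """Scale the scenario result by the base-year obs/model ratio,
    assuming the relative bias persists into the scenario year."""
    return mod_scenario * (obs_base / mod_base)

# Hypothetical PM2.5 annual means: the model underestimates the base
# year (16 vs 20 observed), so the scenario value is scaled up by 20/16
pm_future = corrected_projection(obs_base=20.0, mod_base=16.0, mod_scenario=12.0)
```

Whether such a ratio (or an additive delta) is appropriate depends on whether the bias is expected to scale with concentration levels, which is exactly the kind of question this activity aims to address.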
5 The location, characterization and capacities of monitoring networks are of fundamental relevance for modeling. Discussion among relevant actors on the development and organization of monitoring networks is required to ensure the availability of high-quality information.

CCA3 - Forecasting
Lead: INERIS; Co-ordinator: F. Meleux
1 Very often, the same air quality model is used for assessment and forecasting purposes. However, additional model quality objectives might be required to evaluate the forecast capabilities of a model. What are the best indicators to evaluate the skill of a forecast model? How do they take into account that in forecast mode the focus is on air pollution episodes or threshold exceedances rather than on (annual) average statistics? How can this information be incorporated in the current FAIRMODE Model Quality Objectives and consolidated DELTA tool?
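Exceedance-oriented forecast skill is commonly summarized with a contingency table of hits, misses and false alarms rather than average statistics. The sketch below computes the probability of detection (POD) and false alarm ratio (FAR) for threshold exceedances; the series and the threshold are synthetic.

```python
def exceedance_skill(forecast, observed, threshold):
    """Contingency-table counts for threshold exceedances, plus
    probability of detection (POD) and false alarm ratio (FAR)."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        f_ex, o_ex = f > threshold, o > threshold
        if f_ex and o_ex:
            hits += 1
        elif o_ex:
            misses += 1        # observed exceedance not forecast
        elif f_ex:
            false_alarms += 1  # forecast exceedance not observed
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return {"hits": hits, "misses": misses, "false_alarms": false_alarms,
            "POD": pod, "FAR": far}
```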
2 The optimal spatial (horizontal/vertical) model resolutions required to forecast pollution episodes depend on the considered pollutant but also on the main physical and chemical processes involved in its formation. Are the current MQO adequate to provide sufficient information about model performances in such cases?
3 In forecast mode, simple or complex methods for assimilating AQ observations are used, which require splitting the station database into two parts (i.e. assimilation and evaluation). What is the optimal way to select the most consistent and relevant set of stations for each part? (Note that this topic is closely connected to the monitoring and modeling cross-cutting activity.)
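A simple possibility (a sketch, not a recommendation) is a stratified random split by station classification, so that both the assimilation and the evaluation subsets preserve the composition of the network; station names and types below are hypothetical.

```python
import random

def stratified_split(stations_by_type, frac_assim=0.5, seed=0):
    """Split stations into assimilation/evaluation sets, preserving
    the proportion of each station classification in both sets."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    assim, evaluation = [], []
    for _type, stations in sorted(stations_by_type.items()):
        shuffled = stations[:]
        rng.shuffle(shuffled)
        k = round(len(shuffled) * frac_assim)
        assim += shuffled[:k]
        evaluation += shuffled[k:]
    return assim, evaluation
```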
4 What is the impact of using different station classifications on model performances (e.g. usual EEA classification vs. MACC-ensemble classification from Joly & Peuch)?
5 Do we have adequate indicators to address both long-term scenarios (e.g. Gothenburg emission reductions) and short-term scenarios (control measures) in forecast mode? Is there any difference in the robustness of the model responses between these two kinds of scenarios?