A novel method that combines artificial intelligence with remote sensing satellite technologies has produced the most detailed coverage of air pollution in Britain to date.
Described in new research led by the London School of Hygiene & Tropical Medicine (LSHTM) and published in Remote Sensing, the methodology provides accurate estimates of air pollution concentrations across Great Britain. The model offers an impressive level of detail, with measurements provided at a daily resolution on a 1x1 km grid covering the whole of Great Britain.
Results indicate that the South-East of England is the most polluted region, and they identify hot spots in urban and industrial areas. Encouragingly, the findings also show an overall decline in air pollution in Great Britain during the last decade.
The researchers say this novel approach could revolutionise the assessment of exposure to air pollution and our understanding of the related health risks, by linking country-wide exposure maps and health databases.
Currently, scientists rely on ground-based monitors to measure air pollution. However, these are sparsely located, mostly concentrated in urban areas, and do not always take measurements continuously. This means there are no nationwide air pollution records accurate enough to be used in epidemiological analyses to evaluate health risks.
In this study, the researchers applied an innovative methodology that uses artificial intelligence and satellite-based data to estimate daily human exposure to fine particles of air pollution from 2008 to 2018.
The team combined readings from existing ground-based monitors with data from earth observation satellite instruments, which provide information on weather patterns, aerosols suspended in the atmosphere, land use and vegetation cover. They also incorporated data from other sources, including population density, road density and the location of airports.
Using sophisticated machine learning algorithms, they combined the datasets to produce estimates of the ground-level concentration of fine particulate matter (particles less than 2.5 microns in diameter, PM2.5), one of the most dangerous air pollutants. They divided Great Britain into grid cells and derived daily pollution series for the period 2008-18.
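To make the setup concrete, here is a minimal sketch of the spatio-temporal framing: each 1x1 km cell-day gets a pollution estimate informed by nearby monitor readings. The study itself used machine learning on satellite and ancillary features; the inverse-distance-weighted average below is only a toy stand-in for that model, and all coordinates and readings are invented for illustration.

```python
from math import hypot

GRID_KM = 1  # 1x1 km grid cells, as described in the article

def cell_id(easting_km, northing_km):
    """Index a location into the 1x1 km grid."""
    return (int(easting_km // GRID_KM), int(northing_km // GRID_KM))

# Hypothetical ground-monitor readings for one day: (x_km, y_km, PM2.5)
monitors = [(530.0, 180.0, 14.2), (531.5, 182.0, 11.8), (600.0, 300.0, 6.5)]

def estimate_pm25(x_km, y_km, readings, power=2):
    """Inverse-distance-weighted estimate for one grid cell: a toy
    stand-in for the machine-learning model described in the article."""
    num = den = 0.0
    for mx, my, pm in readings:
        d = hypot(x_km - mx, y_km - my) or 1e-6
        w = 1.0 / d ** power
        num += w * pm
        den += w
    return num / den

print(cell_id(531.2, 181.7))  # -> (531, 181)
print(round(estimate_pm25(531.2, 181.7, monitors), 1))
```

In the actual study, the per-cell feature vector (satellite aerosol optical depth, weather reanalysis, land use, road density, etc.) would replace the raw distances as model inputs.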
Dr Rochelle Schneider, first author who led the analysis, said: “This research uses the power of artificial intelligence to advance environmental modelling and address public health challenges. This impressive air pollution dataset represents PM2.5 records for 4,018 days in a spatial domain of 234,429 grid cells. This provides a remarkable total of 950 million data points that comprehensively quantify the level of air pollution across the whole of Great Britain in an eleven-year period.”
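As a quick arithmetic check on the dataset size quoted above, multiplying the number of days by the number of grid cells gives the total number of cell-day records, close to the rounded figure cited:

```python
# Dataset size quoted above: daily PM2.5 records for 4,018 days
# across 234,429 grid cells.
days = 4018
grid_cells = 234429
data_points = days * grid_cells
print(f"{data_points:,}")  # 941,935,722 -- roughly the 950 million quoted
```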
The results of the study were cross-validated by comparing the model's estimates against measurements taken from individual ground-based monitors, and the two were found to be closely aligned.
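The article does not specify the exact validation scheme. A common approach for station data is leave-one-monitor-out cross-validation, sketched here with a trivial mean predictor standing in for the model; the function names and numbers are illustrative, not taken from the study.

```python
# Leave-one-monitor-out cross-validation: hold out each ground monitor in
# turn, fit on the rest, and compare the prediction at the held-out site
# with its observed value. (Illustrative sketch only; the study's exact
# validation scheme is not described in this article.)

def leave_one_out_errors(monitors, fit, predict):
    """Return prediction errors (predicted - observed) per held-out monitor."""
    errors = []
    for i, (x, y, observed) in enumerate(monitors):
        train = monitors[:i] + monitors[i + 1:]
        model = fit(train)
        errors.append(predict(model, x, y) - observed)
    return errors

# Toy stand-ins: "fit" computes the mean reading, "predict" returns it.
fit = lambda train: sum(pm for _, _, pm in train) / len(train)
predict = lambda model, x, y: model

monitors = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (0.0, 1.0, 14.0)]
print(leave_one_out_errors(monitors, fit, predict))  # [3.0, 0.0, -3.0]
```

Small held-out errors across monitors are what "closely aligned" would look like under such a scheme.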
The team now intend to combine the data with local health records. This linked information will be used in cutting-edge epidemiological analyses to reveal a highly granular picture of the association between air pollution and health outcomes across Great Britain.
Professor Antonio Gasparrini, Professor of Biostatistics and Epidemiology at LSHTM and senior author of the study, said: “This study demonstrates how cutting-edge techniques based on artificial intelligence and satellite technologies can benefit public health research. The output reveals the shifting patterns of air pollution across Great Britain and in time with extraordinary detail. We now hope to use this information to better understand how pollution is affecting the nation’s health, so we can take steps to minimise the risk. The vast amount of data produced will provide a vital tool for public health researchers investigating the effects of air pollution.”
The World Health Organization estimates that there are seven million deaths per year worldwide due to air pollution, which causes lung disease, lung cancer, heart disease and strokes.
Dr Vincent-Henri Peuch, Director of Copernicus Atmosphere Monitoring Service (CAMS) at European Centre for Medium-Range Weather Forecasts (ECMWF), said: “This innovative method has combined the strengths of different data sources to give accurate and comprehensive estimates of air pollution exposure, including ground-based sensors, satellite data, and model reanalyses developed by ECMWF as part of the EU Copernicus programme. Dr Schneider and co-authors convincingly demonstrate its performance over Great Britain, paving the way for many future studies into the health effects of air pollution.”
Dr Pierre-Philippe Mathieu, Head of Phi-lab Explore Office at European Space Agency (ESA), said: “It’s exciting to see data from Earth observation satellites being used in public health research to advance our understanding of the intricate relationship between health and air quality, improving lives in Great Britain, Europe and the rest of the world.”
The study is limited by the fact that the method could not reliably recover air pollution levels from years before 2008, given the limited number of PM2.5 monitors available. In addition, the performance of the model can be lower in remote areas characterised by limited coverage of the ground monitoring network. The LSHTM team plans to extend this model and reconstruct high-resolution data for other air pollutants.
Notes for Editors
Rochelle Schneider, Ana M. Vicedo-Cabrera, Francesco Sera, Pierre Masselot, Massimo Stafoggia, Kees de Hoogh, Itai Kloog, Stefan Reis, Massimo Vieno, Antonio Gasparrini. A Satellite-Based Spatio-Temporal Machine Learning Model to Reconstruct Daily PM2.5 Concentrations across Great Britain. Remote Sensing.
Published at Thu, 19 Nov 2020 23:26:15 +0000
Everywhere you turn today there is some remarkable technological advancement on a variety of fronts. In our everyday lives, we hear about or experience autonomous vehicles, warehouse robots, chatbots, Alexa, Siri, Uber, automated email responses, robotic surgeries, Netflix recommendation systems, smart factories, smart buildings and search retargeting. Technology giants are becoming the most valuable companies on the planet, and we see our lives shifting: what we once thought of as kids and their smartphone addictions and gaming is now encroaching on our daily lives across the board. These advances typically revolve around a set of enabling technology layers, namely cloud computing, computational systems, networks and sensors, robotics, material sciences, digital manufacturing and artificial intelligence. At the center of it all, however, is AI, which permeates many of the other advances in some shape or form, creating intelligent systems on top of advances in core products and technologies.
“Computers, intelligent machines and robots seem like the workforce of the future. And as more and more jobs are replaced by technology, people will have less work to do and ultimately will be sustained by payments from the government,” predicts Elon Musk, the co-founder and CEO of Tesla. This is a scary proposition in some sense: what will we do if all the work is done by AI or robots? Isn’t life tough enough? Don’t we have enough economic disparity, with many barely able to make ends meet today? To add insult to injury, many of the analyses seem to center on displacing low-wage workers. As if they didn’t have enough disadvantages already, the feeling we get from the news cycle is that their entire economic class will be wiped out. This is evidenced by robotic warehouses, chatbots and automated customer service, and we can feel the changes all around us.
Innovation and technology are certainly changing; skills and jobs as we know them today will need to change as well. Our frame of reference is being disrupted like never before; the guideposts and rules are changing, and this causes discomfort, uncertainty and worry. How can we chart a course when the traditional methods (hard work, educational degrees, etc.) no longer necessarily guarantee a certain quality of life? The landscape is changing rapidly and is therefore uncertain. We all need to become comfortable with being uncomfortable, with adapting to change, continuing education and reskilling. Some reports predict that millennials will change jobs 17 times, but that might be a low number once you factor in the gig economy.
According to various reports, AI could lead to the loss of tens of millions of jobs. This begs the question: on what time horizon will the adoption of AI make job loss a reality? Many reports point to job displacement, or to the very nature of jobs shifting. Automation and technology have always shifted work in pursuit of lower costs and higher efficiency and production. The automobile “displaced” work that was done via horse and buggy, electric and fluorescent lighting displaced gas lamps, and gas replaced coal in many instances. Jobs have been displaced before, but today the rate at which these exponential technologies are growing outpaces the rate of human adaptation. And the speed at which we are experiencing technological and societal change is only the beginning, as futurists such as Peter Diamandis predict.
Bloomberg reports that “more than 120 million workers globally will need retraining in the next three years due to artificial intelligence’s impact on jobs, according to an IBM survey.” That report, and interpretations of it, seem to suggest that adoption of AI may result in massive job losses and require massive retraining. This paints a doomsday scenario and creates uncertainty and worry. The common interpretation is that AI equals job loss; we would argue it should instead be read as: AI and technology advancements will require job retraining and reskilling. The reports also suggest that our educational system is preparing students for the jobs of today, when the jobs of the future will be quite different, with different resources and tools at our disposal. This further creates panic, in that we see nothing but chaos and an inability to control our destiny for ourselves and our children.
The report drew on MIT-IBM Watson AI Lab research that shed light on the reorganization of tasks within occupations by analyzing 170 million online job postings in the U.S. between 2010 and 2017. There is no question that AI and related technologies will affect all jobs; what the report did shed light on is how the nature of work is changing, how tasks are changing, and the implications for employment and wages. A key finding was that tasks are shifting between people and machines (or AI), but that the change so far has been small (Figure 1).
Tasks suited to automation or AI are disappearing from job requirements, shifting the way work gets done; as technology reduces the cost of some tasks, the value of the remaining tasks increases, particularly soft skills such as creativity, common sense, judgment and communication.
This type of analysis is what experts refer to as the “future of work”: how work is shifting, how job requirements are changing, and how automation and AI are displacing certain sectors of the labor market. It also informs policymakers of where to focus attention and resources in order to best prepare for the future. Much of the takeaway and political talk focuses on “the vulnerable will be the most vulnerable,” with better-educated workers expected to fare all right as AI and automation spread. A McKinsey report forecast that 800 million workers globally could be replaced by robots by 2030, and further stated that blue-collar jobs, such as machine operating, warehouse work and fast food, are particularly susceptible to disruption.
But a new study published by the Brookings Institution suggests that might not be the case. The report looked at thousands of AI patents and job descriptions and found that educated, well-paid workers may be affected even more by the spread of AI. Most people think of robotics and software as affecting the physical and routine work of traditionally blue-collar jobs. The report, however, states that workers with a bachelor’s degree, for example, would be exposed to AI over five times more than those with only a high school degree. That is because AI is very strong at tasks that require planning, learning, reasoning, problem-solving and predicting, most of which are skills we associate with white-collar jobs.
Table 2 presents this analysis of patent data, tasks and exposure to risk for a sampling of occupations.
AI’s impact on the workplace, the future of work, sectors of the economy and global dominance is hard to assess. Most forecasts are rooted in well-established, well-understood technologies such as robotics and extrapolated across a range of tasks, functions and jobs. AI is new, poorly understood and not yet successfully implemented across all industries, which makes its impact even more difficult to gauge. There is no shared agreement on the tasks at risk, nor on the expected impacts on the workforce or economy, and the best scholars concede the limitations of their economic models and forecasts. What we do know is that the nature of work will change, as it has through centuries of innovation. Disruption will occur in sectors of the economy, and we should brace for that change and try to harness it for good. Perhaps AI can see patterns in deadly diseases, fight climate change and help explore the universe. We should be as excited as we are nervous about change, and try to the best of our abilities to shape our society for it.
Manjeet Rege is an associate professor of Graduate Programs in Software and Data Science and Director of Center for Applied Artificial Intelligence at the University of St. Thomas. Dr. Rege is an author, mentor, thought leader, and a frequent public speaker on big data, machine learning and artificial intelligence technologies. He is also the co-host of the “All Things Data” podcast that brings together leading data scientists, technologists, business model experts and futurists to discuss strategies to utilize, harness and deploy data science, data-driven strategies and enable digital transformation. Apart from being engaged in research, Dr. Rege regularly consults with various organizations to provide expert guidance for building big data and AI practice, and applying innovative data science approaches. He has published in various peer-reviewed reputed venues such as IEEE Transactions on Knowledge and Data Engineering, Data Mining & Knowledge Discovery Journal, IEEE International Conference on Data Mining, and the World Wide Web Conference. He is on the editorial review board of Journal of Computer Information Systems and regularly serves on the program committees of various international conferences.
Dan Yarmoluk is an adjunct faculty member in the Graduate Programs in Software at the University of St. Thomas. He has been involved in analytics, embedded design and components of mobile products for over a decade. He has focused on creating and driving IoT automation, condition monitoring and predictive maintenance programs where technology, analytics and business models intersect to drive added value and digital transformation. Industries he has served include oil and gas, refining, chemical, precision agriculture, food, pulp and paper, mining, transportation, filtration, field services and distribution. He publishes his thoughts frequently and co-hosts a popular podcast, “All Things Data,” with Dr. Manjeet Rege of the University of St. Thomas.
Published at Thu, 19 Nov 2020 21:18:33 +0000