Artificial Intelligence is getting better – latest news and trends in AI for image processing

Artificial intelligence is now part of new, more useful applications, and it keeps getting better. In this blog post we will present some of these new and interesting AI applications. And let us mention that, starting with this post, every couple of months we will review news and trends in the image processing field, including new papers, research, and applications!

And now, let’s start with news from our favorite, NVIDIA. What is NVIDIA up to?

Image source: https://pixabay.com/

AI Can Detect Open Parking Spaces

Image source: https://pixabay.com/

With as many as 2 billion parking spaces in the United States, finding an open spot in a major city can be complicated. To help city planners and drivers more efficiently manage and find open spaces, MIT researchers developed a deep learning-based system that can automatically detect open spots from a video feed.

“Parking spaces are costly to build, parking payments are difficult to enforce, and drivers waste an excessive amount of time searching for empty lots,” the researchers stated in their paper.

Article from:
https://news.developer.nvidia.com/ai-algorithm-aims-to-help-you-find-a-parking-spot/

New AI Imaging Technique Reconstructs Photos with Realistic Results

Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that has holes or is missing pixels. The method can also be used to edit images by removing content and filling in the resulting holes. The method, which performs a process called “image inpainting”, could be implemented in photo editing software to remove unwanted content, filling the gap with a realistic computer-generated alternative.

“Our model can robustly handle holes of any shape, size, location, or distance from the image borders. Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing,” the NVIDIA researchers stated in their research paper.

Article from:
https://news.developer.nvidia.com/new-ai-imaging-technique-reconstructs-photos-with-realistic-results/

AI Can Now Fix Your Grainy Photos by Only Looking at Grainy Photos

What if you could take photos that were originally shot in low light and automatically remove the noise and artifacts? Have grainy or pixelated images in your photo library and want to fix them? This deep learning-based approach has learned to fix photos by looking only at examples of corrupted photos. The work was developed by researchers from NVIDIA, Aalto University, and MIT, and was presented at the International Conference on Machine Learning in Stockholm, Sweden.

Recent deep learning work in the field has focused on training a neural network to restore images by showing example pairs of noisy and clean images. The AI then learns how to make up the difference. This method differs because it only requires two input images with the noise or grain.

Without ever being shown what a noise-free image looks like, this AI can remove artifacts, noise, grain, and automatically enhance your photos.

“It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars,” the researchers stated in their paper.

Article from:
https://news.developer.nvidia.com/ai-can-now-fix-your-grainy-photos-by-only-looking-at-grainy-photos/
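To make the idea concrete, here is a minimal sketch of this training scheme (published by the authors as “Noise2Noise”) in PyTorch. The tiny DenoisingNet and the synthetic data are illustrative placeholders, not the researchers’ actual architecture or data; the point is only that both the input and the target of the loss are noisy observations of the same scene, and no clean image ever appears.

```python
import torch
import torch.nn as nn

# Illustrative toy denoiser; the actual work uses a much larger U-Net.
class DenoisingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def noise2noise_step(model, optimizer, noisy_input, noisy_target):
    """One training step: input and target are two independently noisy
    observations of the same scene; no clean image is ever shown."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(noisy_input), noisy_target)
    loss.backward()
    optimizer.step()
    return loss.item()

# Synthetic example: two independently corrupted views of one image.
model = DenoisingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scene = torch.rand(1, 3, 64, 64)                # unknown clean scene
noisy_a = scene + 0.1 * torch.randn_like(scene)
noisy_b = scene + 0.1 * torch.randn_like(scene)
print(noise2noise_step(model, optimizer, noisy_a, noisy_b))
```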

AI Model Can Generate Images from Natural Language Descriptions

Image source: https://pixabay.com/

To potentially improve natural language queries, including the retrieval of images from speech, researchers from IBM and the University of Virginia developed a deep learning model that can generate objects and their attributes from natural language descriptions.

“We show that under minor modifications, the proposed framework can handle the generation of different forms of scene representations, including cartoon-like scenes, object layouts corresponding to real images, and synthetic images,” the researchers stated in their paper.

Article from:
https://news.developer.nvidia.com/ai-model-can-generate-images-from-natural-language-descriptions/

Now, some new research papers from different fields that rely on AI as well as image processing:

Digital image analysis in breast pathology—from image processing techniques to artificial intelligence 

From: https://www.sciencedirect.com/science/article/pii/S1931524417302955 

Image source: https://pixabay.com/

Abstract: Breast cancer is the most common malignant disease in women worldwide. In recent decades, earlier diagnosis and better adjuvant therapy have substantially improved patient outcome. Diagnosis by histopathology has proven to be instrumental to guide breast cancer treatment, but new challenges have emerged as our increasing understanding of cancer over the years has revealed its complex nature. As patient demand for personalized breast cancer therapy grows, we face an urgent need for more precise biomarker assessment and more accurate histopathologic breast cancer diagnosis to make better therapy decisions. The digitization of pathology data has opened the door to faster, more reproducible, and more precise diagnoses through computerized image analysis. Software to assist diagnostic breast pathology through image processing techniques has been around for years. But recent breakthroughs in artificial intelligence (AI) promise to fundamentally change the way we detect and treat breast cancer in the near future. Machine learning, a subfield of AI that applies statistical methods to learn from data, has seen an explosion of interest in recent years because of its ability to recognize patterns in data with less need for human instruction. One technique in particular, known as deep learning, has produced groundbreaking results in many important problems including image classification and speech recognition. In this review, we will cover the use of AI and deep learning in diagnostic breast pathology, and other recent developments in digital image analysis.

Predicting tool life in turning operations using neural networks and image processing

From: https://www.sciencedirect.com/science/article/pii/S088832701730599X 

Abstract: A two-step method is presented for the automatic prediction of tool life in turning operations. First, experimental data are collected for three cutting edges under the same constant processing conditions. In these experiments, the parameter of tool wear, VB, is measured with conventional methods and the same parameter is estimated using Neural Wear, a customized software package that combines flank wear image recognition and Artificial Neural Networks (ANNs). Second, an ANN model of tool life is trained with the data collected from the first two cutting edges and the subsequent model is evaluated on two different subsets for the third cutting edge: the first subset is obtained from the direct measurement of tool wear and the second is obtained from the Neural Wear software that estimates tool wear using edge images. Although the complete-automated solution, Neural Wear software for tool wear recognition plus the ANN model of tool life prediction, presented a slightly higher error than the direct measurements, it was within the same range and can meet all industrial requirements. These results confirm that the combination of image recognition software and ANN modelling could potentially be developed into a useful industrial tool for low-cost estimation of tool life in turning operations.

Automatic food detection in egocentric images using artificial intelligence technology 

From:
https://www.cambridge.org/core/journals/public-health-nutrition/article/automatic-food-detection-in-egocentric-images-using-artificial-intelligence-technology/CAE3262B945CC45E4B14C06C83A68F42  

Image source: https://pixabay.com/

Abstract:

Objective: To develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment.

Design: To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable device, called eButton, from free-living individuals. Three thousand nine hundred images containing real-world activities, which formed eButton data set 1, were manually selected from thirty subjects. eButton data set 2 contained 29 515 images acquired from a research participant in a week-long unrestricted recording. They included both food- and non-food-related real-life activities, such as dining at both home and restaurants, cooking, shopping, gardening, housekeeping chores, taking classes, gym exercise, etc. All images in these data sets were classified as food/non-food images based on their tags generated by a convolutional neural network.

Results: A cross data-set test was conducted on eButton data set 1. The overall accuracy of food detection was 91·5 and 86·4 %, respectively, when one-half of data set 1 was used for training and the other half for testing. For eButton data set 2, 74·0 % sensitivity and 87·0 % specificity were obtained if both ‘food’ and ‘drink’ were considered as food images. Alternatively, if only ‘food’ items were considered, the sensitivity and specificity reached 85·0 and 85·8 %, respectively.

Conclusions: The AI technology can automatically detect foods from low-quality, wearable camera-acquired real-world egocentric images with reasonable accuracy, reducing both the burden of data processing and privacy concerns.
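As an aside, the sensitivity and specificity figures quoted in the results above come from a standard confusion-matrix calculation. A minimal sketch, with made-up counts chosen only to echo the reported percentages, not the paper’s actual data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = true positive rate, specificity = true negative rate."""
    sensitivity = tp / (tp + fn)  # fraction of real food images detected
    specificity = tn / (tn + fp)  # fraction of non-food images correctly rejected
    return sensitivity, specificity

# Illustrative counts only, not the eButton results.
sens, spec = sensitivity_specificity(tp=850, fn=150, tn=858, fp=142)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # 85.0%, 85.8%
```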

Bioinformatics and Image Processing—Detection of Plant Diseases 

From:
https://link.springer.com/chapter/10.1007/978-981-13-1580-0_14 

Image source: https://pixabay.com/

Abstract:

This paper gives an idea of how a combination of image processing and bioinformatics can detect deadly diseases in plants and agricultural crops. These diseases are not recognizable by the bare human eye: their first occurrence is microscopic in nature. If plants are affected by such diseases, the quality of production deteriorates. We need to correctly identify the symptoms, treat the diseases, and improve the production quality. Computers can help to make correct decisions as well as support the industrialization of the detection work. We present in this paper a technique for image segmentation using the HSI algorithm to classify various categories of diseases. This technique can also classify different types of plant diseases. Genetic algorithms (GA) have always proven very useful in image segmentation.

And, at the end, some news from the public sector and applied algorithms:

China Now has Facial Recognition Based Toilets 

Image source: https://pixabay.com/

China has integrated facial recognition into toilets across the country. Citizens now need WeChat or a face scan to get toilet paper. A person stands on the yellow recognition spot and brings their face near the face identification machine. Then, after about three seconds, 90 centimeters of toilet paper come out. People can then use the toilet, but only for a limited time, as an alarm will buzz if someone occupies it for too long. In the toilet, sensors assess the ammonium level and spray a deodorant if required. The two bathrooms were equipped with face scanners to be “clean and convenient” and to “reduce toilet paper waste.”

Read more here:
https://www.aitechnologies.com/china-now-has-facial-recognition-based-toilets/ 

Apple’s Camera-Toting Watch Band Uses Facial Recognition For Flawless FaceTime Calls 

Image source: https://pixabay.com/

The U.S. Patent and Trademark Office granted Apple a patent which suggests that the tech titan wants to widen the feature set of its wearable by integrating an original camera system with the ability to automatically crop subject matter, track objects such as the user’s face, and produce angle-adjusted avatars for FaceTime calls. Apple’s “Image-capturing watch”, U.S. Patent No. 10,129,503, describes a software and hardware solution that makes a camera-toting Apple Watch both handy and feasible. Using a camera-equipped Watch, consumers can put aside a heavy handheld device while playing sports, exercising or doing other energetic activities. However, a feasible smartwatch solution is hard to accomplish. The camera captures the motion data, the watch processes it, and the data is then mapped onto a computer-generated picture that imitates the user’s facial movements and expressions in real time. Alternatively, the source movement data can be used to drive the motion of non-human avatars such as Apple’s Memoji and Animoji. It remains unknown whether Apple actually plans to ship its Apple Watch camera band tech.

Read more here:
https://www.aitechnologies.com/apples-camera-toting-watch-band-uses-facial-recognition-for-flawless-facetime-calls/

Metropolitan Police London is to Integrate Face Recognition Tech 

Image source: https://pixabay.com/

London’s police will trial face recognition tech for two days. In the areas of Leicester Square, Piccadilly Circus, and Soho, the technology will examine faces in the crowd and compare them against a database of individuals wanted by the courts and the Metropolitan Police. If the tech finds a match, police officers in the field will review it and perform further checks to confirm the identity of that individual.

Read more here:
https://www.aitechnologies.com/metropolitan-police-london-is-to-integrate-face-recognition-tech/

That’s all for now, folks. But tell me, what do you think: what are some areas where AI is going to bring the most benefit? In which areas, in your opinion, is there room for more research? Can you believe that it is possible to have AI solutions in everyday life?

All news items are citations from the mentioned sites, where you can find the full text on each topic.

What is real time processing (online vs. offline)

If you are a beginner in the area of image and video processing, you may often hear the term real time processing. In this post, we will try to explain the term and list some typical concerns related to it.

Real time processing – circuit board (image source: https://pixabay.com/)

Real time image processing is tied to the typical frame rate. The current standard for capture is typically 30 frames per second. Real time processing requires processing each frame as soon as it is captured. So, broadly speaking, if the capture rate is 30 FPS then 30 frames need to be processed in one second. That comes to around 33 milliseconds per frame (1000 ms / 30 frames = 33 ms/frame). A similar calculation can be done for any frame rate to get the required processing time per frame, as the short sketch below shows.
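A tiny sketch of this arithmetic, generalized to any frame rate:

```python
def frame_budget_ms(fps: float) -> float:
    """Time available to process a single frame, in milliseconds."""
    return 1000.0 / fps

def is_real_time(fps: float, processing_ms: float) -> bool:
    """An algorithm runs in real time if each frame is processed
    within the capture interval of the source."""
    return processing_ms <= frame_budget_ms(fps)

print(frame_budget_ms(30))       # ~33.3 ms per frame at 30 FPS
print(is_real_time(30, 25.0))    # True: 25 ms fits in the ~33 ms budget
print(is_real_time(60, 25.0))    # False: 60 FPS leaves only ~16.7 ms
```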

In image and video processing, the source of our signal is a camera. So, what real time image processing really means is: produce output simultaneously with the input. In other words, the algorithm runs at the rate of the source (e.g. a camera) supplying the images, so it can process images at the frame rate of the camera.

Image source: https://pixabay.com/

Source of image signal is camera

Human vision:

Just out of curiosity, let’s see how the human vision works:

The first thing to understand is that we perceive different aspects of vision differently. Detecting motion is not the same as detecting light. Another thing is that different parts of the eye perform differently. The center of vision is good at different things than the periphery. And another thing is that there are natural, physical limits to what we can perceive. It takes time for the light that passes through your cornea to become information on which your brain can act, and our brains can only process that information at a certain speed.

Another important concept: the whole of what we perceive is greater than what any one element of our visual system can achieve. This point is fundamental to understanding our perception of vision.

The temporal sensitivity and resolution of human vision varies depending on the type and characteristics of the visual stimulus, and it differs between individuals. The human visual system can process 10 to 12 images per second and perceive them individually, while higher rates are perceived as motion. Modulated light (such as a computer display) is perceived as stable by the majority of participants in studies when the rate is higher than 50 Hz to 90 Hz. This perception of modulated light as steady is known as the flicker fusion threshold. However, when the modulated light is non-uniform and contains an image, the flicker fusion threshold can be much higher, in the hundreds of hertz. Regarding image recognition, people have been found to recognize a specific image in an unbroken series of different images, each of which lasts as little as 13 milliseconds. Persistence of vision sometimes accounts for a very short single-millisecond visual stimulus having a perceived duration of between 100 ms and 400 ms. Multiple stimuli that are very short are sometimes perceived as a single stimulus, such as a 10 ms green flash of light immediately followed by a 10 ms red flash of light being perceived as a single yellow flash.

Image source: https://pixabay.com/

Human vision

Applications:

The real-time aspect is critical in many real-world devices or products such as mobile phones, digital still/video/cell-phone cameras, portable media players, personal digital assistants, high-definition television, video surveillance systems, industrial visual inspection systems, medical imaging devices, vision-assisted intelligent robots, spectral imaging systems, and many other embedded image or video processing systems.

With the increasing capabilities of imaging systems, such as cameras with very high-density sensors of 16 or more megapixels, it is extremely difficult to achieve real time performance for many applications.

Image source: https://pixabay.com/

Applications

What applications need real time performance and what applications do not:

When talking about the numerous applications of image and video processing, we can say that some applications in some systems need real time processing, and some don’t. That is why we will talk about online (real time) and offline processing.

Image made by author

Offline processing means processing an already recorded video sequence or image. So, digital video stabilization, video enhancement, video coloring, or any other application can work with already prepared video. These applications can be found in marketing, industry, medical imaging, the film industry, or in ordinary commercial applications, such as a user who wants to stabilize and enhance a video from the phone library.

Offline processing enables the use of more complex and computationally demanding algorithms, and therefore usually gives better results than real time processing. That is why offline processing tools are used a lot in academic research and in some kinds of challenges.

Some of the deep learning tools for offline processing (on CPU) are:

Image made by author

On the other hand, some applications demand real time processing. For example, traffic monitoring, target tracking in military applications, surveillance and monitoring, and real time video games are applications that demand real time feedback and processed images from the sensor.

Algorithms that work in real time do not have the luxury of high complexity, since the processing time for each frame is determined by the source frame rate and resolution. New hardware solutions offer better processing speeds, but there are still limitations, depending on the specific application. The sketch below illustrates how this per-frame budget constrains a capture-and-process loop.
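As an illustration, a minimal, hypothetical capture loop in Python with OpenCV; process_frame stands in for any per-frame algorithm. When processing exceeds the per-frame budget, a real system must drop frames, lower the resolution, or simplify the algorithm:

```python
import time
import cv2  # OpenCV; any frame source with a read() method would do

def process_frame(frame):
    # Placeholder for the actual per-frame algorithm.
    return cv2.GaussianBlur(frame, (5, 5), 0)

cap = cv2.VideoCapture(0)                # camera index 0; a video file path also works
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 FPS if unknown
budget = 1.0 / fps                       # seconds available per frame

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    start = time.perf_counter()
    result = process_frame(frame)
    elapsed = time.perf_counter() - start
    if elapsed > budget:
        # Missed the deadline: output now lags behind the source.
        print(f"over budget: {elapsed * 1000:.1f} ms > {budget * 1000:.1f} ms")
cap.release()
```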

Image made by author

Systems with multiple complex applications working in parallel:

Sometimes an application demands multiple complex algorithms working in parallel. In that case, not only the complexity of the algorithms must be considered, but also which algorithm is processed first and how this ordering affects the desired performance of the application. One good example is when video enhancement and digital video stabilization algorithms work in parallel.

Video stabilization and video dehazing algorithms in the same video processing pipeline can affect each other’s results. This interesting topic is described in a paper [Dehazing Algorithms Influence on Video Stabilization Performance] given in the references at the end of the post. When there is no severe haze, noise or low contrast in the scene, it is important to perform the video stabilization algorithm prior to the video dehazing algorithm. On the other hand, when the feature level in the scene is low, which happens because of severe haze or low contrast, the stabilization algorithm cannot perform well, since it cannot calculate global motion accurately. That is why, for the sake of better stabilization performance, the proposed pipeline performs the video dehazing algorithm prior to video stabilization, as in the sketch below.
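A hedged sketch of that ordering decision, assuming hypothetical dehaze and stabilize stage functions; ORB keypoint counts are used here purely as an illustrative measure of the feature level, not necessarily the criterion used in the paper:

```python
import cv2

def enough_features(frame_bgr, min_keypoints=200):
    """Rough proxy for the feature level of the scene: with too few
    detectable keypoints, global motion estimation (and therefore
    stabilization) becomes unreliable."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.ORB_create().detect(gray, None)
    return len(keypoints) >= min_keypoints

def process(frame_bgr, dehaze, stabilize):
    # dehaze and stabilize are hypothetical per-frame stage functions.
    if enough_features(frame_bgr):
        # Clear scene: stabilize first, then dehaze.
        return dehaze(stabilize(frame_bgr))
    # Severe haze or low contrast: dehaze first, so stabilization can
    # estimate global motion from the recovered features.
    return stabilize(dehaze(frame_bgr))
```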

Image source: scientific paper Dehazing Algorithm Influence on Video Stabilization Performance

Dehazing Algorithms Influence on Video Stabilization Performance

At the end, we will mention some of the possible platforms for real time image processing:

  • FPGA – very good for complex parallel operations; an example application is given in the paper [High-performance electronic image stabilization for shift and rotation correction] in the references.
  • Nvidia Jetson TX1, TX2, Xavier –

“Get real-time Artificial Intelligence (AI) performance where you need it most with the high-performance, low-power NVIDIA Jetson AGX systems. Processing of complex data can now be done on-board edge devices. This means you can count on fast, accurate inference in everything from robots and drones to enterprise collaboration devices and intelligent cameras. Bringing AI to the edge unlocks huge potential for devices in network-constrained environments.” – from the Nvidia site, given in the references.

References

Thermal Imaging – Theory and Applications

What is Thermal Imagery

We know that our eyes see reflected light, so it is easy for us to understand the principle of forming an image from visual (daylight and night vision) cameras. But if there is not enough light, it is impossible for us, or the camera, to see. This is not the case in the thermal imagery domain. Thermal cameras measure the temperature and emissivity of objects in the scene. In thermal infrared technologies, most of the captured radiation is emitted by the observed objects, in contrast to visual and near infrared, where most of the radiation is reflected. Thus, knowing or assuming material and environmental properties, temperatures can be measured using a thermal camera (i.e., the camera is said to be radiometric). But let’s not forget: “Thermal cameras detect more than just heat though; they detect tiny differences in heat – as small as 0.01°C – and display them as shades of grey or with different colors.” [1]

A thermal image is different from a visual camera image and cannot be treated as a grayscale visual image. In thermal infrared there are no shadows, and the noise characteristics are different than in the visual domain. There are also no color patterns like in the visual domain; instead, patterns arise from variations in the material or temperature of objects.

The infrared wavelength band is usually divided into different sub-bands, according to their different properties: near infrared (NIR, wavelengths 0.7–1 µm), shortwave infrared (SWIR, 1–3 µm), midwave infrared (MWIR, 3–5 µm), and longwave infrared (LWIR, 7.5–12 µm). These bands are separated by regions where the atmospheric transmission is very low (i.e., the air is opaque) or where sensor technologies have their limits. LWIR, and sometimes MWIR, is commonly referred to as thermal infrared (TIR). TIR cameras should not be confused with NIR cameras that are dependent on illumination and in general behave in a similar way as visual cameras. Thermal cameras are either cooled or uncooled. Images are typically stored as 16 bits per pixel to allow a large dynamic range. Uncooled cameras give noisier images at a lower frame rate, but are smaller, silent, and less expensive. [2,3] 
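Because thermal frames are commonly stored at 16 bits per pixel, they usually have to be remapped before they can be shown on an 8-bit display. A minimal sketch of such a remapping, assuming a raw 16-bit frame as input (real viewers often use percentile clipping or histogram equalization instead of a plain min-max stretch):

```python
import numpy as np

def normalize_for_display(frame16: np.ndarray) -> np.ndarray:
    """Linearly stretch a 16-bit thermal frame to 8 bits for display."""
    lo, hi = int(frame16.min()), int(frame16.max())
    if hi == lo:                        # flat frame: avoid division by zero
        return np.zeros(frame16.shape, dtype=np.uint8)
    scaled = (frame16.astype(np.float32) - lo) / (hi - lo)
    return (scaled * 255.0).astype(np.uint8)

# Synthetic example: a 16-bit gradient standing in for a thermal frame.
frame = np.linspace(21000, 29000, 240 * 320, dtype=np.uint16).reshape(240, 320)
display = normalize_for_display(frame)   # uint8 image, ready to show
```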

Image source: from LTIR dataset

Q&A

1. What is the biggest difference between a high and low cost thermal camera?

   The biggest difference is typically resolution. The higher the resolution, the better the picture clarity. This translates to a better picture at a greater distance as well, similar to the megapixels of a regular digital camera.

2. Can thermal imaging cameras see through objects?

   No. Thermal imaging cameras only detect heat; they will not “see” through solid objects, clothing, brick walls, etc. They see the heat coming off the surface of the object.

3. Is there a difference between night vision and thermal imaging?

    Yes. Night vision relies on at least a very low level of light (less than the human eye can detect) in order to amplify it so that it can produce a picture. Night vision will not work in complete darkness, whereas thermal imaging will, because it only “sees” heat.

4. Can rain and heavy fog limit the range of thermal imaging cameras?

    Yes. Rain and heavy fog can severely limit the range of thermal imaging cameras because light scatters off of droplets of water.

[4]

Image source: https://pixabay.com/

Applications

Applications of thermal vision are numerous, in the civil as well as the military sector, but here we will focus on applications in the civil sector that can help in everyday life. This technology can, for example, be used to observe and analyze human activities from a distance in a noninvasive manner. Traditional computer vision utilizes RGB cameras, but one problem with this sensor is its light dependency. Thermal cameras operate independently of light and measure the radiated infrared waves representing the temperature of the scene. To showcase the possibilities, both indoor and outdoor applications which use thermal imaging only are presented.

Image source: https://pixabay.com/

Surveillance: People counting in urban environments

Human movement can be automatically registered and analyzed. In both real-time and long-term perspectives, this knowledge can be beneficial for urban planning and for shopkeepers in the city. Real-time information can be used for analyzing the current flow and occupancy of the city, while long-term analysis can reveal trends and patterns related to specific days, times or events in the city.

Security: Analyzing the use of sports arenas

The interest in analyzing and optimizing the use of public facilities in cities has a large variety of applications in both indoor and outdoor spaces. Here, the focus is on sports arenas, but other possible applications could be libraries, museums, shopping malls, etc. The aim is to estimate the occupancy of sports arenas in terms of the number of people and their positions in real time. Potential use of this information is both online booking systems, and post-processing of data for analyzing the general use of the facilities. For the purpose of analyzing the use of the facilities, we also try to estimate the type of sport observed based on people’s positions.

In indoor spaces, the temperature is often kept constant and cooler than the human body temperature. Foreground segmentation can therefore be accomplished by automatically thresholding the image. In some cases, unwanted hot objects, such as hot water pipes and heaters, appear in the scene. In these situations, background subtraction can be utilized instead, as in the sketch below.
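A minimal sketch of both options using OpenCV, assuming the thermal frame has already been converted to a single-channel 8-bit image; Otsu’s method is used as the automatic threshold, and OpenCV’s MOG2 background model is just one possible choice for the fallback:

```python
import cv2
import numpy as np

def segment_by_threshold(thermal8):
    """Indoor scene kept cooler than body temperature: warm (bright)
    pixels are foreground, so an automatic Otsu threshold separates people."""
    _, mask = cv2.threshold(thermal8, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# When fixed hot objects (pipes, heaters) break that assumption,
# a learned background model can be used instead:
background_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def segment_by_background(thermal8):
    return background_model.apply(thermal8)

# Synthetic example: a warm blob on a cool background.
frame = np.full((240, 320), 80, dtype=np.uint8)
cv2.circle(frame, (160, 120), 30, 200, -1)   # "person" at higher intensity
mask = segment_by_threshold(frame)
```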

Health and safety: Gas leaking location and event alert

Some public buildings of interest can be monitored with thermal cameras, so that gas or water leaks can be discovered before a hazardous situation happens.

Localizing a suspected leak in a building can turn out to be delicate, sometimes requiring operations to stop, if not probing walls or floors. Whatever the mix of construction materials, thermal imaging can be the right answer: in most cases, a leak translates into an abnormal temperature pattern. Thermal imaging is de facto a non-contact operation, increasing inspector safety, capable of visualizing fluid leakage as well as electrical dysfunction. Thermal imaging can of course also detect thermal bridges and, as such, is a key tool for generating property investigation reports.

Water leaks can be both hot and cold, and thermal imagers can catch both. It can sometimes be close to impossible to spot a water leak on your own, especially when it is behind a wall. That is why thermal cameras can prevent dangerous situations.

Traffic control: Traffic monitoring and specific event alert

As for monitoring heterogeneous traffic, thermal imaging can be a precious camera type, reducing overall system costs and increasing reliability. In contrast to visible and NIR-based detectors, LWIR cameras are not affected by the lighting conditions of the scene, e.g. night vs. day, or sun orientation. This remains true over long distances, enabling the detection of a child, a biker, a car or a truck. Coupled with relevant processing, LWIR cameras turn out to be a key asset of ITS, reducing the number of cameras while increasing alarm reliability. This helps the manager on duty quickly take the right decision in case of, e.g., obstacle detection, a wrong-way vehicle, or an abnormal traffic jam, to ensure road-user security as well as optimal commuting time.

Energy saving: Building occupancy

Monitoring building occupancy turns out to be highly relevant for the management of commercial complexes or public infrastructure: optimal adjustment of energy supply, scheduling of maintenance services, as well as the comfort and health of occupants. It is also useful for sizing security services, and of crucial importance in case of an event requiring building evacuation. Advanced solutions, relying on thermal sensors, integrate thermal imaging: low resolution detectors (detecting presence / human activity) and/or a high-resolution thermal camera observing relevant doorways (for people counting / human activity characterization).

This time, our goal was to explain more of the science behind thermal cameras and their applications. If there are additional questions or anything else you would like to know about this topic, feel free to ask via mail or in the comments.

References

How can Artificial Intelligence help solve environmental problems like Air Pollution

Air pollution

Air pollution is caused by solid and liquid particles and certain gases that are suspended in the air. These particles and gases can come from car and truck exhaust, factories, dust, pollen, mold spores, volcanoes and wildfires. The solid and liquid particles suspended in our air are called aerosols.

Certain gases in the atmosphere can cause air pollution. For example, in cities, a gas called ozone is a major cause of air pollution. Ozone is also a greenhouse gas that can be both good and bad for our environment. It all depends where it is in Earth’s atmosphere.

Ozone high up in our atmosphere is a good thing. It helps block harmful energy from the Sun, called radiation. But, when ozone is closer to the ground, it can be really bad for our health. Ground level ozone is created when sunlight reacts with certain chemicals that come from sources of burning fossil fuels, such as factories or car exhaust.

When particles in the air combine with ozone, they create smog. Smog is a type of air pollution that looks like smoky fog and makes it difficult to see. (https://climatekids.nasa.gov/air-pollution/)

Polluted city – image source: https://pixabay.com/

The major outdoor pollution sources include vehicles, power generation, building heating systems, agriculture/waste incineration and industry. In addition, more than 3 billion people worldwide rely on polluting technologies and fuels (including biomass, coal and kerosene) for household cooking, heating and lighting, releasing smoke into the home and leaching pollutants outdoors.

Air quality is closely linked to earth’s climate and ecosystems globally. Many of the drivers of air pollution (i.e. combustion of fossil fuels) are also sources of high CO2 emissions. Some air pollutants such as ozone and black carbon are short-lived climate pollutants that greatly contribute to climate change and affect agricultural productivity. Policies to reduce air pollution, therefore, offer a “win-win” strategy for both climate and health, lowering the burden of disease attributable to air pollution, as well as contributing to the near- and long-term mitigation of climate change.

Air pollution can be significantly reduced by expanding access to clean household fuels and technologies, as well as prioritizing: rapid urban transit, walking and cycling networks; energy-efficient buildings and urban design; improved waste management; and electricity production from renewable power sources. (https://www.who.int/airpollution/ambient/about/en/)

How does air pollution affect our health?

Breathing in polluted air can be very bad for our health. Long-term exposure to air pollution has been associated with diseases of the heart and lungs, cancers and other health problems. That’s why it’s important for us to monitor air pollution.

Polluted city – image source: WHO

AI might be used to improve urban sustainability and quality of life. It is about time that artificial intelligence was used for something important for the whole planet. That is why we will talk about AI solutions that address the problem of air pollution.

Air pollution – AI solutions

Artificial Intelligence for cleaner air in Smart Cities

In Singapore, where air pollution and related health costs are particularly high, a team of researchers investigated the possibility of combining the power of sensor technologies, the Internet of Things (IoT) and AI to get reliable and valid environmental data and feed better, greener policy-making. As reported by The Business Times, through the computation of real-time IoT sensor data measuring spatial and temporal pollutants, user-friendly air quality heat maps and executive dashboards can be created, and the most severe pollution hotspots can be determined with the help of machine learning algorithms for predictive modelling. This is the first step to take proactive actions towards further decarbonizing the economy, including incentives for virtuous businesses, the development of wiser land use plans, the revitalization of urban precincts, and more. (https://www.pdxeng.ch/2019/03/28/artificial-intelligence-for-cleaner-air-in-smart-cities/)

Polluted city – image source: https://pixabay.com/

An Artificial Intelligence-Based Environment Quality Analysis System

The paper describes an environment quality analysis system based on a combination of artificial intelligence techniques: artificial neural networks and rule-based expert systems. Two case studies of the system’s use are discussed: air pollution analysis and flood forecasting, with their impact on the environment and on population health. The system can be used by an environmental decision support system in order to manage various environmental critical situations (such as floods and environmental pollution), and to inform the population about the state of the environment quality. (An Artificial Intelligence-Based Environment Quality Analysis System – https://link.springer.com/chapter/10.1007/978-3-642-23957-1_55)

AI non-profit to track air pollution from every power plant in the world and make data public

A nonprofit artificial intelligence firm called WattTime is going to use satellite imagery to precisely track the air pollution (including carbon emissions) coming out of every single power plant in the world, in real time. And it’s going to make the data public. This system promises to effectively eliminate poor monitoring and gaming of emissions data.

The plan is to use data from satellites that make their data publicly available, as well as data from a few private companies that charge for their data. The images will be processed by various algorithms to detect signs of emissions. Google.org, Google’s philanthropic wing, is getting the project off the ground…with a $1.7 million grant. WattTime made a splash earlier this year with Automated Emissions Reduction. AER is a program that uses real-time grid data and machine learning to determine exactly when the grid is producing the cleanest electricity.

Author: David Roberts, Vox, Published on: 8 May 2019

“We’ll soon know the exact air pollution from every power plant in the world. That’s huge.”, 7 May 2019. (https://www.business-humanrights.org/en/ai-non-profit-to-track-air-pollution-from-every-power-plant-in-the-world-and-make-data-public)

A fresher breeze: How AI can help improve air quality

As part of our AI for Earth commitment, Microsoft supports five projects from Germany in the areas of environmental protection, biodiversity and sustainability. In the next few weeks, we will introduce the project teams and their innovative ideas that made the leap into our global programme and group of AI for Earth grantees.

AI for Earth

The AI for Earth program helps researchers and organizations to use artificial intelligence to develop new approaches to protect water, agriculture, biodiversity and the climate. Over the next five years, Microsoft will invest $50 million in “AI for Earth.” To become part of the “AI for Earth” program, developers, researchers and organizations can apply with their idea for a so-called “Grant”. If they manage to convince the jury of Microsoft representatives, they’ll receive financial and technological support and also benefit from knowledge transfer and contacts within the global AI for Earth network. As part of Microsoft Berlin’s EarthLab and beyond, five ideas have been convincing and will be part of the “AI for Earth” program in the future in order to further promote their environmental innovations. (https://news.microsoft.com/europe/2019/08/20/a-fresher-breeze-how-ai-can-help-improve-air-quality/)

Environmental Protection – image source: https://pixabay.com/

Artificial Intelligence For Air Quality Control Systems: A Holistic Approach

Abstract

Recent environmental regulations introduced by the United States Environmental Protection Agency, such as the Mercury Air Toxics Standards and Hazardous Air Pollution Standards, have challenged environmental particulate control equipment, especially electrostatic precipitators, to operate beyond their design specifications. The impact is exacerbated in power plants burning a wide range of low- and high-ranking fossil fuels relying on co-benefits from upstream processes such as the selective catalytic reactor and boilers. To alleviate and mitigate the challenge, this manuscript presents the utilization of modern and novel algorithms in machine learning and artificial intelligence for improving the efficiency and performance of electrostatic precipitators, reflecting a holistic approach by considering upstream processes as model parameters. In addition, the paper discusses input relevance algorithms for neural networks and random forests, such as partial derivatives, input perturbation and GINI importance, comparing their performance and applicability for our case study. Our approach comprises applying random forest and neural network algorithms to an electrostatic precipitator, extending the model to include upstream process parameters such as the selective catalytic reactor and the air heaters. To study variable importance differences and model generalization performance between our employed algorithms, we developed a statistical approach to compare the impact of feature data distributions on input relevance.

Read more here:

 (https://ieeexplore.ieee.org/document/8635295)

Artificial intelligence based approach to forecast PM2.5 during haze episodes: A case study of Delhi, India

Highlights

• Neural network and fuzzy logic are combined for forecasting of PM2.5 during haze conditions.

• Haze occurs when the level of PM2.5 is more than 50 μg/m3 and relative humidity is less than 90%.

• The neuro-fuzzy model is capable of better forecasting of haze episodes over urbanized areas than the ANN and MLR models.

Abstract

Delhi has been listed as the worst performer across the world with respect to the presence of alarmingly high levels of haze episodes, exposing the residents here to a host of diseases including respiratory disease, chronic obstructive pulmonary disorder and lung cancer. This study aimed to analyze the haze episodes in a year and to develop forecasting methodologies for them. The air pollutants, e.g., CO, O3, NO2, SO2, PM2.5, as well as meteorological parameters (pressure, temperature, wind speed, wind direction index, relative humidity, visibility, dew point temperature, etc.) have been used in the present study to analyze the haze episodes in the Delhi urban area. The nature of these episodes, their possible causes, and their major features are discussed in terms of fine particulate matter (PM2.5) and relative humidity. The correlation matrix shows that temperature, pressure, wind speed, O3, and dew point temperature are the dominating variables for PM2.5 concentrations in Delhi. The hour-by-hour analysis of past data patterns at different monitoring stations suggests that haze hours occurred in approximately 48% of the total observed hours in the year 2012 over the Delhi urban area. The haze hour forecasting models in terms of PM2.5 concentrations (more than 50 μg/m3) and relative humidity (less than 90%) have been developed through artificial intelligence based Neuro-Fuzzy (NF) techniques and compared with other modeling techniques, e.g., multiple linear regression (MLR) and artificial neural network (ANN). The haze hour data for nine months, i.e., from January to September, have been chosen for training, and the remaining three months, i.e., October to December of the year 2012, are chosen for validation of the developed models. The forecasted results are compared with the observed values using different statistical measures, e.g., correlation coefficient (R), normalized mean square error (NMSE), fractional bias (FB) and index of agreement (IOA). The performed analysis indicated that R has values of 0.25 for MLR, 0.53 for ANN, and 0.72 for NF, between the observed and predicted PM2.5 concentrations during haze hours in the validation period. The results show that the artificial intelligence implementations have a more reasonable agreement with the observed values. Finally, it can be concluded that the artificial intelligence based NF model is capable of better forecasting of haze episodes in the Delhi urban area than the ANN and MLR models.

Read more here:

(https://www.sciencedirect.com/science/article/abs/pii/S1352231014009157)
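The statistical measures named in the abstract above (R, NMSE, FB and IOA) are standard model-evaluation formulas. A minimal sketch of how they are typically computed, given as an illustration rather than the paper’s exact code:

```python
import numpy as np

def evaluate_forecast(observed, predicted):
    """Common air-quality model evaluation measures."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    r = np.corrcoef(o, p)[0, 1]                               # correlation coefficient
    nmse = np.mean((o - p) ** 2) / (o.mean() * p.mean())      # normalized mean square error
    fb = 2.0 * (o.mean() - p.mean()) / (o.mean() + p.mean())  # fractional bias
    ioa = 1.0 - np.sum((p - o) ** 2) / np.sum(
        (np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)   # index of agreement
    return {"R": r, "NMSE": nmse, "FB": fb, "IOA": ioa}

# Toy example with synthetic hourly PM2.5 values (µg/m3), not the Delhi data.
obs = np.array([60.0, 75.0, 90.0, 120.0, 80.0])
pred = np.array([55.0, 80.0, 85.0, 110.0, 90.0])
print(evaluate_forecast(obs, pred))
```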

AI – image source: https://pixabay.com/

Artificial intelligence modeling to evaluate field performance of photocatalytic asphalt pavement for ambient air purification

Abstract

In recent years, the application of titanium dioxide (TiO2) as a photocatalyst in asphalt pavement has received considerable attention for purifying ambient air from traffic-emitted pollutants via photocatalytic processes. In order to control the increasing deterioration of ambient air quality, urgent and proper risk assessment tools are deemed necessary. However, in practice, monitoring all process parameters for various operating conditions is difficult due to the complex and non-linear nature of air pollution-based problems. Therefore, the development of models to predict air pollutant concentrations is very useful because it can provide early warnings to the population and also reduce the number of measuring sites. This study used artificial neural network (ANN) and neuro-fuzzy (NF) models to predict NOx concentration in the air as a function of traffic count (Tr) and climatic conditions including humidity (H), temperature (T), solar radiation (S), and wind speed (W) before and after the application of TiO2 on the pavement surface. These models are useful for modeling because of their ability to be trained using historical data and because of their capability for modeling highly non-linear relationships. To build these models, data were collected from a field study where an aqueous nano TiO2 solution was sprayed on a 0.2-mile section of asphalt pavement in Baton Rouge, LA. Results of this study showed that the NF model provided a better fit to NOx measurements than the ANN model in the training, validation, and test steps. Results of a parametric study indicated that traffic level, relative humidity, and solar radiation had the most influence on photocatalytic efficiency.

Read more here:

 (https://link.springer.com/article/10.1007/s11356-014-2821-z)

Neuro Fuzzy Modeling Scheme for the Prediction of Air Pollution

Abstract

The techniques of artificial intelligence based on fuzzy logic and neural networks are frequently applied together. The reasons to combine these two paradigms come out of the difficulties and inherent limitations of each isolated paradigm. Hybrids of Artificial Neural Networks (ANN) and Fuzzy Inference Systems (FIS) have attracted the growing interest of researchers in various scientific and engineering areas due to the growing need for adaptive intelligent systems to solve real world problems. An ANN learns from scratch by adjusting the interconnections between layers. FIS is a popular computing framework based on the concept of fuzzy set theory, fuzzy if-then rules, and fuzzy reasoning. The structure of the model is based on a three-layered neural fuzzy architecture with a back propagation learning algorithm. The main objective of this paper is twofold. The first objective is to develop a fuzzy controller scheme for the prediction of the change of NO2 or SO2 over urban zones based on the measurement of NO2 or SO2 over defined industrial sources. The second objective is to develop a neural net (NN) scheme for the prediction of O3 based on NO2 and SO2 measurements.

Read more here:

 (https://pdfs.semanticscholar.org/1fee/92567748cc2fd4530557bd8d8ebf6395d4e5.pdf)

Sensing the Air We Breathe — The OpenSense Zurich Dataset

Abstract

Monitoring and managing urban air pollution is a significant challenge for the sustainability of our environment. We quickly survey the air pollution modeling problem, introduce a new dataset of mobile air quality measurements in Zurich, and discuss the challenges of making sense of these data.

Read more here:

 (https://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/view/4896/5158)

This article is good for getting started and gives a dataset to work with!

Clean air – image source: https://pixabay.com/

Development of artificial intelligence based NO2 forecasting models at Taj Mahal, Agra

Abstract

The statistical regression and specific computational intelligence based models are presented in this paper for the forecasting of hourly NO2 concentrations at a historical monument, the Taj Mahal, Agra. The model was developed for the purpose of public health oriented air quality forecasting. Analysis of the last ten years of air pollution data reveals that the concentration of air pollutants has increased significantly. It is also observed that pollution levels are always higher during the month of November around the Taj Mahal, Agra. Therefore, the hourly observed data during November were used in the development of air quality forecasting models for Agra, India. Firstly, multiple linear regression (MLR) was used for building an air quality forecasting model to forecast the NO2 concentrations at Agra. Further, in a novel approach based on regression models, principal component analysis (PCA) was used to find the correlations of different predictor variables between meteorology and air pollutants. Then, the significant variables were taken as the input parameters to propose a reliable physical artificial neural network (ANN) multilayer perceptron model for forecasting of air pollution in Agra. The MLR and PCA–ANN models were evaluated through statistical analysis. The correlation coefficients (R) were 0.89 and 0.91, respectively, for PCA–ANN, and 0.69 and 0.89, respectively, for MLR in the training and validation periods. Similarly, the values of normalized mean square error (NMSE), index of agreement (IOA) and fractional bias (FB) were in good agreement with the observed values. It was concluded that the PCA–ANN model performs better and can be used for forecasting air pollution at the Taj Mahal, Agra.

Read more here:

 (https://reader.elsevier.com/reader/sd/pii/S1309104215302567?token=9C6D5D566E2D1892B932A804377D82A742BDCA1C793AFEF1C357C03B4004FC916FE4196F6EDEC2F9CA093DB4E1B54E8C)

A Novel Air Quality Early-Warning System Based on Artificial Intelligence

Abstract

The problem of air pollution is a persistent issue for mankind and has become increasingly serious in recent years, drawing worldwide attention. Establishing a scientific and effective air quality early-warning system is significant and important. Regretfully, previous research has not thoroughly explored air pollutant prediction together with air quality evaluation, and relevant research work is still scarce, especially in China. Therefore, a novel air quality early-warning system composed of prediction and evaluation was developed in this study. Firstly, the advanced data preprocessing technology Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN), combined with the powerful swarm intelligence algorithm Whale Optimization Algorithm (WOA) and the efficient artificial neural network Extreme Learning Machine (ELM), formed the prediction model. Then the predictive results were further analyzed by the method of fuzzy comprehensive evaluation, which offered intuitive air quality information and corresponding measures. The proposed system was tested in the Jing-Jin-Ji region of China, a representative research area in the world, and the daily concentration data of six main air pollutants in Beijing, Tianjin, and Shijiazhuang for two years were used to validate its accuracy and efficiency. The proposed system is therefore believed to be able to play an important role in air pollution control and smart city construction all over the world in the future.

Read more here:

 (https://www.mdpi.com/1660-4601/16/19/3505)

Octopus – image source: https://pixabay.com/

How AI and IoT could help people combat air pollution issues

It is with little surprise that the UN’s 2019 World Environment Day is a call to action to #beatairpollution. IT, as a sector, influences air quality in terms of the energy used to drive our electronics, data centers and, indeed, through business travel. With a large-scale industry presence in Asia, home to some of the most polluted cities in the world, we need to do what we can to minimize these impacts.

But technology can also be part of the solution. Last year, Capgemini announced a new global ambition to leverage technology to help organizations with their sustainability challenges, recognizing that this is the biggest impact we can make. Technology can be an enabler to help address prevention at source, helping organizations optimize their operations and reduce their impact. But with 4.2 million deaths every year as a result of exposure to ambient outdoor air pollution, how can we also leverage technology to monitor, inform, and ultimately change the behaviors of those most affected as they head into our many cities?

The advances in technology give us the opportunity to reach people directly and build a more sophisticated monitoring and communication network. We could leverage both artificial intelligence (AI) and the Internet of Things (IoT) with the capabilities of an increasing range of personal devices, whether it be the 2.5 billion smartphones or the estimated 278 million smart watches in the world.[3] Indeed, the wearable health and fitness technology sector is set to grow 10–20% in the next five years, with an expanding set of capabilities. These devices measure elements such as heart rate, blood pressure, and breathing rate, which are indicators of overall health and are also measurables that change with exposure to air pollutants such as PM, nitrogen oxides and sulfur oxides. Yet they also monitor spatial and GPS data, which, if combined, could demonstrate the impact of the external environment on health factors and better inform people of the issues. Data from different sources and AI technology could allow us to drill down on very local issues.

If we overlay current air quality monitoring data sources onto an individual, it would allow us to give a very precise prediction of local air quality issues. We could then integrate AI to both refine and include a wider range of factors such as weather conditions and traffic levels. Added to this, if automatic number plate recognition (ANPR) is integrated, we could discern the proportion of vehicle fuel types being used in specific locations. This is important because diesel vehicles emit 90% of particulate matter.

Data analytics over time would allow people to understand impacts on their health – and change behavior.

Over time, as an individual’s health and diagnostics data are fed into a data analytics model alongside their own spatial data and air pollution exposure data, they could receive an analysis of how air pollution is impacting their physiology. Based on this, they could receive tailored suggestions for actions to take as well. The ability to overlay a Google Map of your walk to school or work with the air quality data around you could, instead of highlighting traffic congestion, show air quality issues and provide options to re-route around them, or offer alternative options for when to start a journey.

Read more here:

(https://www.capgemini.com/2019/06/beatairpollution-how-ai-and-iot-could-help-people-combat-air-pollution-issues/)

So, this time we listed some novel AI solutions for solving the environmental air pollution problem. Next time we talk about this topic, expect an idea of how we are going to include Smart Imaging and AI in a Smart City solution for cleaner air. Do you have any suggestions?

References