Wildfire Detection and Impact Monitoring#
0. Presentation#
In this use case, we are going to detect burnt areas caused by wildfires, using Sentinel-2 satellite imagery.
For a high-level, non-technical presentation, you can read this blog article.
We will test several methods based on research papers to see which one is the most accurate, and at the end we will also try to detect active fires. These are the methods tested:
NBR
NBR+
BAIS2
NDVI
We will be using a wildfire that happened in France in July 2022, and that burned more than 10 000 hectares in a few days. The fire was called Landiras 1, after a nearby town.

1. Installation#
This installs the SpaceSense SDK in your notebook
[ ]:
!pip install spacesense
2. Import#
Import the additional libraries needed in this use case
[ ]:
from spacesense import Client, Sentinel2SearchResult
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
import math
import json
import pandas as pd
import os
import copy
We then define this helper function, as it will be useful later in the notebook to display some results
[ ]:
def get_rgb(ds, brightness_factor=1):
    def norm(band):
        band_min, band_max = band.min(), band.max()
        return (band - band_min) / (band_max - band_min)

    # Some datasets name the visible bands S2_BLUE/S2_GREEN/S2_RED instead of S2_B02/B03/B04
    var_names = ["S2_BLUE", "S2_GREEN", "S2_RED"]
    for elem in var_names:
        if elem in ds.data_vars:
            if elem == "S2_BLUE":
                ds = ds.rename_vars({"S2_BLUE": "S2_B02"})
            elif elem == "S2_GREEN":
                ds = ds.rename_vars({"S2_GREEN": "S2_B03"})
            elif elem == "S2_RED":
                ds = ds.rename_vars({"S2_RED": "S2_B04"})

    # Normalize each band, apply the brightness factor and stack into an RGB image
    blue = norm(ds.S2_B02) * brightness_factor
    green = norm(ds.S2_B03) * brightness_factor
    red = norm(ds.S2_B04) * brightness_factor
    rgb = np.dstack([red, green, blue])
    return rgb
3. Authentication#
Insert the API Key that you received by registering with SpaceSense. If you don’t have one, you can register and get one here
[ ]:
if "SS_API_KEY" not in os.environ:
from getpass import getpass
api_key = getpass('Enter your api key : ')
os.environ["SS_API_KEY"] = api_key
Enter your api key : ··········
4. Identify your Area Of Interest#
In the aoi variable you can add your Area Of Interest boundaries in GeoJSON format. If you want to build one, you can do it here
[ ]:
# Define the area of interest (AOI)
aoi = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {},
            "geometry": {
                "type": "Polygon",
                "coordinates": [
                    [
                        [-0.6575898010393075, 44.54103780739024],
                        [-0.6630939826831366, 44.50478459477523],
                        [-0.6324278278127906, 44.44381344144509],
                        [-0.6038585040449789, 44.452046395561126],
                        [-0.4908917284124641, 44.452420593176015],
                        [-0.46101188520589176, 44.47337183303722],
                        [-0.4518382491332602, 44.505719239152285],
                        [-0.44101302667107234, 44.594976306884234],
                        [-0.537215277117383, 44.595272862103],
                        [-0.5678297832179453, 44.543977549684115],
                        [-0.6575898010393075, 44.54103780739024]
                    ]
                ]
            }
        }
    ]
}
# Get an instance of the SpaceSense Client object
client = Client(id="wild fire example")
# Enable saving the results to local files
client.enable_local_output()
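If you built your polygon on geojson.io and saved it to a file, you could also load it with the json module imported earlier instead of pasting the coordinates inline. A minimal sketch, assuming a hypothetical local file named aoi.geojson containing the FeatureCollection above:
[ ]:
# Load the AOI from a GeoJSON file (hypothetical path, e.g. exported from geojson.io)
with open("aoi.geojson") as f:
    aoi = json.load(f)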
5. Search available images#
First, define the Time range Of Interest (TOI) for your study. Below, the SDK searches for the Sentinel-2 images available for the AOI and TOI specified above, then displays all the results in a dataframe
[ ]:
# Define the time range of interest (TOI)
start_date = "2022-07-12"
end_date = "2022-08-08"
# Call the S2 search function, passing the AOI and TOI
wildfire_search = client.s2_search(aoi=aoi, start_date=start_date, end_date=end_date, query_filters={"valid_pixel_percentage": {">=": 99}})
# Visualize the Pandas dataframe with the results
wildfire_search.dataframe
 | id | date | tile | valid_pixel_percentage | platform | relative_orbit_number | product_id | datetime | swath_coverage_percentage | no_data | cloud_shadows | vegetation | not_vegetated | water | cloud_medium_probability | cloud_high_probability | thin_cirrus | snow
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
3 | S2A_30TXQ_20220712_0_L2A | 2022-07-12 | 30TXQ | 100.00 | sentinel-2a | 094 | S2A_MSIL2A_20220712T105631_N0400_R094_T30TXQ_2... | 2022-07-12T11:09:00Z | 100.0 | 0.0 | 0.0 | 96.57 | 2.66 | 0.66 | 0.00 | 0.0 | 0.00 | 0.0 |
2 | S2B_30TXQ_20220717_0_L2A | 2022-07-17 | 30TXQ | 99.91 | sentinel-2b | 094 | S2B_MSIL2A_20220717T105629_N0400_R094_T30TXQ_2... | 2022-07-17T11:08:52Z | 100.0 | 0.0 | 0.0 | 62.05 | 13.43 | 0.65 | 0.07 | 0.0 | 0.02 | 0.0 |
1 | S2A_30TXQ_20220801_0_L2A | 2022-08-01 | 30TXQ | 100.00 | sentinel-2a | 094 | S2A_MSIL2A_20220801T105631_N0400_R094_T30TXQ_2... | 2022-08-01T11:08:57Z | 100.0 | 0.0 | 0.0 | 42.25 | 26.82 | 0.75 | 0.00 | 0.0 | 0.00 | 0.0 |
0 | S2B_30TXQ_20220806_0_L2A | 2022-08-06 | 30TXQ | 100.00 | sentinel-2b | 094 | S2B_MSIL2A_20220806T105629_N0400_R094_T30TXQ_2... | 2022-08-06T11:08:51Z | 100.0 | 0.0 | 0.0 | 40.09 | 27.15 | 0.71 | 0.00 | 0.0 | 0.00 | 0.0 |
We define the bands we want to download. This saves processing time and cost, giving us only what we need. The full list of available bands is in the documentation
[ ]:
wildfire_search.output_bands = ["B02", "B03", "B04", "B06", "B07", "B8A", "B12"]
# Here you can also directly add "NDVI" as an item to get the index, instead of calculating it manually further below
6. Downloading the images#
Through the fuse function we download all the images we selected into a single object
[ ]:
# Call the core fusion function, passing our filtered S2 result
fusion = client.fuse([wildfire_search])
# Visualize the resulting xarray dataset
fusion.dataset
Below we display the images so we can select the best ones to compare. In this example, the first and last dates (2022-07-12 and 2022-08-06) are the best to compare. Furthermore, the 2022-07-17 image shows active fires, which will be useful in a later step
[ ]:
rgb = fusion.plot_rgb(all_dates = True, brightness_factor = 3)

7. Build the indexes#
[ ]:
dataset_workable = fusion.dataset
Here we create another layer in the xarray dataset for each index that we want to build. The formulas are taken directly from the research papers linked at the top of the page
[ ]:
dataset_workable = dataset_workable.assign(NBR_RAW = lambda elem: (elem.S2_B8A - elem.S2_B12) / (elem.S2_B8A + elem.S2_B12))
dataset_workable = dataset_workable.assign(NBR_RAW_PLUS = lambda elem: (elem.S2_B12 - elem.S2_B8A - elem.S2_B03 - elem.S2_B02) / (elem.S2_B12 + elem.S2_B8A + elem.S2_B03 + elem.S2_B02))
dataset_workable = dataset_workable.assign(BAIS2 = lambda elem: (1-((elem.S2_B06*elem.S2_B07*elem.S2_B8A)/elem.S2_B04)**0.5)*((elem.S2_B12-elem.S2_B8A)/((elem.S2_B12+elem.S2_B8A)**0.5)+1))
dataset_workable = dataset_workable.assign(NDVI = lambda elem: (elem.S2_B8A - elem.S2_B04) / (elem.S2_B8A + elem.S2_B04))
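For reference, the same formulas written out in mathematical notation (B02, B03, B04, B06, B07, B8A and B12 are the Sentinel-2 band reflectances used above):
$$\mathrm{NBR} = \frac{B_{8A} - B_{12}}{B_{8A} + B_{12}}$$
$$\mathrm{NBR^{+}} = \frac{B_{12} - B_{8A} - B_{03} - B_{02}}{B_{12} + B_{8A} + B_{03} + B_{02}}$$
$$\mathrm{BAIS2} = \left(1 - \sqrt{\frac{B_{06} \cdot B_{07} \cdot B_{8A}}{B_{04}}}\right) \cdot \left(\frac{B_{12} - B_{8A}}{\sqrt{B_{12} + B_{8A}}} + 1\right)$$
$$\mathrm{NDVI} = \frac{B_{8A} - B_{04}}{B_{8A} + B_{04}}$$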
We compute the delta value of each index for every pixel between the first and last images, to highlight the differences between the two dates. This is going to be our main way to measure the burned area.
[ ]:
dNBR = dataset_workable.isel(time=0).NBR_RAW - dataset_workable.isel(time=3).NBR_RAW
dNBR_PLUS = dataset_workable.isel(time=0).NBR_RAW_PLUS - dataset_workable.isel(time=3).NBR_RAW_PLUS
dBAIS2 = dataset_workable.isel(time=0).BAIS2 - dataset_workable.isel(time=3).BAIS2
dNDVI = dataset_workable.isel(time=0).NDVI - dataset_workable.isel(time=3).NDVI
Now we can look at the performance of each index
Index #1: NBR#
For each of our indexes, we are going to display:
The delta value map
The delta mask (based on a different threshold each time)
The RGB map with the mask overlaid
The estimated burned surface, based on the index
[ ]:
tmp = dNBR
# The threshold is empirical, and should be adapted to the season and region.
# More information can be found here: https://docs.digitalearthafrica.org/en/latest/sandbox/notebooks/Real_world_examples/Burnt_area_mapping.html
threshold_value = 0.3
#We build the mask based on the threshold value
tmp_mask = xr.DataArray(abs(tmp) > threshold_value, dims=('y', 'x'))
#The RGB image construction
img_highlighted = get_rgb(dataset_workable.isel(time=3), brightness_factor=3)
#We put all the pixels from the mask in red on the RGB image
img_highlighted[tmp_mask] = [1, 0, 0] # red color
#We plot the images
fig, axes = plt.subplots(ncols=3, figsize=(20, 12))
axes[0].imshow(tmp)
axes[1].imshow(tmp_mask)
axes[2].imshow(img_highlighted)
axes[0].set_title('Delta NBR')
axes[1].set_title('Delta NBR mask with 0.3 threshold')
axes[2].set_title('Mask applied to RGB')
# Show the plot
plt.show()
burnt_pixels = np.count_nonzero(tmp_mask)
# Approximation: a Sentinel-2 pixel is about 10 m x 10 m and a hectare is 100 m x 100 m, so pixels / 100 gives hectares
burnt_surface = burnt_pixels / 100
print(f"In this image, an estimated {burnt_surface} hectares burned")

In this image, an estimated 11727.1 hectares burned
Comment
Here we can clearly see that some areas on the left (around pixel coordinates (100, 1050)) are counted as burned when they are actually bodies of water, so the value is probably inflated
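One way to reduce these false positives would be to mask out water before counting burned pixels, for instance with a simple NDWI threshold computed from the green (B03) and narrow NIR (B8A) bands already in our dataset. This is only a sketch, not part of the original workflow, and the 0.0 threshold is an assumption that would need tuning per scene:
[ ]:
# NDWI (McFeeters): (green - NIR) / (green + NIR); positive values usually indicate open water
post_fire = dataset_workable.isel(time=3)
ndwi = (post_fire.S2_B03 - post_fire.S2_B8A) / (post_fire.S2_B03 + post_fire.S2_B8A)
water_mask = (ndwi > 0.0).values  # threshold is an assumption, adjust per scene

# Drop water pixels from the dNBR burn mask before counting
burn_mask_no_water = tmp_mask.values & ~water_mask
print(f"Estimated burned area excluding water: {np.count_nonzero(burn_mask_no_water) / 100} hectares")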
Index #2: NBR+#
[ ]:
tmp = dNBR_PLUS
# The threshold is empirical, and should be adapted to the season and region.
threshold_value = 0.3
#We build the mask based on the threshold value
tmp_mask = xr.DataArray(abs(tmp) > threshold_value, dims=('y', 'x'))
img_highlighted = get_rgb(dataset_workable.isel(time=3), brightness_factor=3)
img_highlighted[tmp_mask] = [1, 0, 0] # red color
fig, axes = plt.subplots(ncols=3, figsize=(20, 12))
axes[0].imshow(tmp)
axes[1].imshow(tmp_mask)
axes[2].imshow(img_highlighted)
axes[0].set_title('Delta NBR+')
axes[1].set_title('Delta NBR+ mask with 0.3 threshold')
axes[2].set_title('Mask applied to RGB')
# Show the plot
plt.show()
burnt_pixels = np.count_nonzero(tmp_mask)
burnt_surface = burnt_pixels / 100
print(f"In this image there is an estimated {burnt_surface} hectares which burned")

In this image, an estimated 9970.86 hectares burned
Comment
The mask seems much finer than the NBR one, and probably more representative of the ground truth. The water bodies wrongly flagged by the NBR are correctly classified here. According to the paper, the identification of burned surfaces can be made more robust by using a Maximum Likelihood Classification instead of an arbitrary threshold; a simplified, data-driven variant of this idea is sketched below
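The Maximum Likelihood Classification itself is not reproduced in this notebook, but the idea of replacing the hand-picked 0.3 threshold with a data-driven one can be sketched with a two-component Gaussian mixture: one mode for unburned pixels, one for burned pixels, with the split halfway between the two means. This sketch uses scikit-learn (not installed above) and is only an illustration of the idea, not the paper's method:
[ ]:
from sklearn.mixture import GaussianMixture

# Flatten the delta NBR+ values and drop NaNs before fitting
values = dNBR_PLUS.values.ravel()
values = values[~np.isnan(values)].reshape(-1, 1)

# Fit a two-component mixture (unburned vs burned pixels)
gmm = GaussianMixture(n_components=2, random_state=0).fit(values)

# Use the midpoint between the two component means as a data-driven threshold
data_driven_threshold = float(np.mean(gmm.means_))
print(f"Data-driven threshold: {data_driven_threshold:.3f}")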
Index #3: BAIS2#
[ ]:
tmp = dBAIS2
# The threshold is empirical, and should be adapted to the season and region.
threshold_value = 0.4
#We build the mask based on the threshold value
tmp_mask = xr.DataArray(abs(tmp) > threshold_value, dims=('y', 'x'))
img_highlighted = get_rgb(dataset_workable.isel(time=3), brightness_factor=3)
img_highlighted[tmp_mask] = [1, 0, 0] # red color
fig, axes = plt.subplots(ncols=3, figsize=(20, 12))
axes[0].imshow(tmp)
axes[1].imshow(tmp_mask)
axes[2].imshow(img_highlighted)
axes[0].set_title('Delta BAIS2')
axes[1].set_title('Delta BAIS2 mask with 0.4 threshold')
axes[2].set_title('Mask applied to RGB')
# Show the plot
plt.show()
burnt_pixels = np.count_nonzero(tmp_mask)
burnt_surface = burnt_pixels / 100
print(f"In this image there is an estimated {burnt_surface} hectares which burned")

In this image, an estimated 11258.6 hectares burned
Comment
Here the index correctly classifies the water, but there seems to be more noise, as can be seen visually in the areas around the main fire “body”
Index #4: NDVI#
[ ]:
tmp = dNDVI
# The threshold is empirical, and should be adapted to the season and region.
threshold_value = 0.3
#We build the mask based on the threshold value
tmp_mask = xr.DataArray(abs(tmp) > threshold_value, dims=('y', 'x'))
img_highlighted = get_rgb(dataset_workable.isel(time=3), brightness_factor=3)
img_highlighted[tmp_mask] = [1, 0, 0] # red color
fig, axes = plt.subplots(ncols=3, figsize=(20, 12))
axes[0].imshow(tmp)
axes[1].imshow(tmp_mask)
axes[2].imshow(img_highlighted)
axes[0].set_title('Delta NDVI')
axes[1].set_title('Delta NDVI mask with 0.3 threshold')
axes[2].set_title('Mask applied to RGB')
# Show the plot
plt.show()
burnt_pixels = np.count_nonzero(tmp_mask)
burnt_surface = burnt_pixels / 100
print(f"In this image there is an estimated {burnt_surface} hectares which burned")

In this image, an estimated 10741.98 hectares burned
Comment
We encounter the same water-detection issue as with the NBR. Furthermore, some roads and fields in the south of the image are also picked up as burned
8. Conclusion#
We know from the firefighter reports that this fire burned approximately 10 300 hectares in this area. Let’s look at our different methods:
NBR: 11727 ha (~14% error)
NBR+: 9970 ha (~3.2% error)
BAIS2: 11258 ha (~9.3% error)
NDVI: 10741 ha (~4.3% error)
Based on these results, NBR+ is the most accurate solution. And even though the NDVI result seems quite close, we’ve seen that it has an issue with water detection, so that method would probably perform worse in areas near coasts
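These relative errors can be reproduced in a couple of lines, taking the roughly 10 300 ha from the firefighter reports as the reference:
[ ]:
reference_ha = 10300  # approximate burned area from the firefighter reports
estimates = {"NBR": 11727.1, "NBR+": 9970.86, "BAIS2": 11258.6, "NDVI": 10741.98}
for name, estimate in estimates.items():
    error = abs(estimate - reference_ha) / reference_ha * 100
    print(f"{name}: {estimate:.0f} ha (~{error:.1f}% error)")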
9. Detecting active fires#
The image on the 2022-07-17 contains active fires. This is a perfect opportunity to test these methods for active fire detection.
For each of these methods, we are going to create a mask to detect the fire. The thresholds for these masks are taken from the papers they come from, plus some empirical experimentation. We discarded the NDVI index, as it is incapable of accurately detecting the fires
[ ]:
active_fire_NBR = dataset_workable.isel(time=1).NBR_RAW
active_fire_NBR_PLUS = dataset_workable.isel(time=1).NBR_RAW_PLUS
active_fire_BAIS2 = dataset_workable.isel(time=1).BAIS2
active_fire_NDVI = dataset_workable.isel(time=1).NDVI
tmp_mask = xr.DataArray(active_fire_NBR < -0.5, dims=('y', 'x'))
img_highlighted_NBR = get_rgb(dataset_workable.isel(time=1), brightness_factor=3)
img_highlighted_NBR[tmp_mask] = [1, 0, 0] # red color
tmp_mask = xr.DataArray(active_fire_NBR_PLUS > 0.3, dims=('y', 'x'))
img_highlighted_NBR_PLUS = get_rgb(dataset_workable.isel(time=1), brightness_factor=3)
img_highlighted_NBR_PLUS[tmp_mask] = [1, 0, 0] # red color
tmp_mask = xr.DataArray(active_fire_BAIS2 > 1.3, dims=('y', 'x'))
img_highlighted_BAIS = get_rgb(dataset_workable.isel(time=1), brightness_factor=3)
img_highlighted_BAIS[tmp_mask] = [1, 0, 0] # red color
fig, axes = plt.subplots(ncols=4, figsize=(20, 12))
axes[0].imshow(get_rgb(dataset_workable.isel(time=1), brightness_factor=3))
axes[1].imshow(img_highlighted_NBR)
axes[2].imshow(img_highlighted_NBR_PLUS)
axes[3].imshow(img_highlighted_BAIS)
axes[0].set_title('RGB image')
axes[1].set_title('RGB image with NBR mask')
axes[2].set_title('RGB image with NBR+ mask')
axes[3].set_title('RGB image with BAIS2 mask')
# Show the plot
plt.show()

Here all three methods seem to perform rather well, with slightly better performance from NBR and NBR+, which also detect the fires in the bottom right of the image.
Below is the NBR+ index map, where we can clearly see the bright points representing the active fires
[ ]:
# Create a Matplotlib figure and axis
fig, ax = plt.subplots()
# Display the image with a color map
img = ax.imshow(active_fire_NBR_PLUS, cmap='viridis')
# Add a color bar to show the values of each pixel
cbar = fig.colorbar(img)
# Show the plot
plt.show()

And that is it! You now have a basic solution to start tracking wildfires and their impact all over the world. As mentioned in the introduction, this solution can be improved for better accuracy and scalability, but it already provides a simple and quick way to generate very useful insights. If you want to try it for yourself using SpaceSense’s SDK, reach out to us here!