Matthew Ibarra’s Final Project Was His Favorite

As a SkyTruth Intern, Matthew Ibarra learned new skills and helped protect Native lands.

As I finish up my internship at SkyTruth, I can honestly say that the experience has been everything I imagined it would be and more. My time here brought together the things I love: an organization that applies technology to gather and analyze data in defense of the environment.

When I started my internship at SkyTruth I was unsure of what to expect. I remember the first day I drove into the small town of Shepherdstown, West Virginia. I was worried. For the first time in my life I was working with like-minded individuals with special talents and skills far above my own. I thought that I would have to perform as well as my colleagues right off the bat. However, my fears quickly melted away upon meeting Brendan Jarrell, SkyTruth’s Geospatial Analyst and father to all us interns. Brendan assured me that I would be focusing on my own personal interests, developing practical skills, and applying them to the various projects happening at SkyTruth. Within my first week I became familiar with all the programs I needed for my internship, namely Google Earth Engine and QGIS. Both programs are critical to geospatial analysis, and both were completely new to me, despite my having taken Geographic Information System (GIS) courses at West Virginia University. Interning at SkyTruth opened my eyes to new possibilities in environmental monitoring and I was excited to get started.

My very first day I became familiar with the various satellites that orbit the Earth and provide the imagery that SkyTruth uses on a daily basis. The Landsat and Sentinel satellite missions provide imagery available for free to the public, allowing interns like myself to create maps and interactive data to track activity on Earth. My first task as an intern was to monitor Southeast Asian waters for bilge dumps — oily slicks of wastewater dumped in the ocean by ships. I used Google Earth Engine to access the necessary imagery easily. Then I used QGIS to create the various maps that we post on our Facebook page and blog posts. I found my first bilge dump on February 7, 2020. It was a 35 kilometer slick (almost 22 miles long) off the coast of Malaysia. 

Often, we can identify the likely polluter using the Automatic Identification System (AIS) to track vessels at sea. Most vessels constantly send out a radio signal to broadcast their route. When those signal points align with a bilge dump, it suggests that the ship is the likely source of that slick. However, not all ships transmit their signal at all times, and there have even been instances of ships spoofing their signal to different locations. For my first slick I was unable to match a ship’s AIS broadcast to the trail of the bilge dump, but I was able to do so several times after that. We can’t know for certain who caused this slick, but imagery helps us paint a picture of potential suspects. My first slick pales in comparison to the many slicks I found in later months: later, I captured a few slicks that were over 100 kilometers (more than 62 miles) in length. I was also able to link a ship’s AIS broadcast to the trail of the slick. You can read more about slicks in the Southeast Asia region in my April 15 blog post here.

 

An example of likely bilge dumping from a vessel identified using Sentinel satellite imagery

Following my introduction to our bilge dumping detection work, I was thrilled to be assigned my first solo project for SkyTruth — updating SkyTruth’s FrackFinder. FrackFinder is an ongoing project at SkyTruth. It aims to keep track of the active oil and natural gas well pads in states such as West Virginia. Drilling permit data is often misleading; sites that are permitted to be drilled may not actually be drilled for several years. In the past, our FrackFinder app was hosted in Carto. Carto is a cloud-based mapping platform that provides limited GIS tools for analysis. I was tasked with giving the application an overhaul and bringing it into Google Earth Engine, a much more powerful and accessible program. 

Learning to code for Earth Engine was challenging for me. I had only one computer science course in college, and that was nearly three years ago. So I was surprised that my first project would revolve around coding. Initially, I was overwhelmed and I struggled to find a place to start. As time went on I slowly became more comfortable with spending large amounts of time solving tiny problems. Brendan was incredibly helpful and patient with teaching me everything I would need to know to be successful. He always made time for me and assisted me with my code numerous times. My finished app is far from perfect but I am proud of the work that I accomplished and I hope that it brings attention to the changing landscape of West Virginia caused by oil and natural gas drilling using hydraulic fracturing (fracking). 

 

The FrackTracker app for West Virginia

My second and final project was creating a visualization about the land surrounding Chaco Culture National Historical Park in New Mexico. Much like the update to the FrackFinder App, it involved the changing landscape surrounding the park due to the increase in fracking. I was tasked with creating a series of still images, an embeddable GIF which shows an animation of the rapid increase in drilling, and an app on Earth Engine that allows the user to zoom in and visually inspect each individual well surrounding the park. In the final months of my internship, I became comfortable using the programs that were foreign to me initially. I created a series of 19 images using QGIS from the years 2000-2018. You can see the collection of images for each year here. SkyTruth’s Geospatial Engineer Christian Thomas assisted me in creating the GIF. 

This project was special to me because I was able to help activists who are advocating for the passage of the Chaco Cultural Heritage Area Protection Act, legislation passed by the U.S. House of Representatives that would effectively create a 10-mile buffer zone surrounding the park and ensure the protection of the area for Native Americans and local communities for generations to come. The Senate has not yet passed the act. When I started my internship at SkyTruth I never would have believed that I would be advocating for the protection of Native lands. I always believed issues like these were too big for one person to tackle, but if there’s anything I learned at SkyTruth, it’s that even one person can create real change.

The growth of oil and gas wells within a 15-mile radius of Chaco Culture National Historical Park from 2000 – 2018

After interning at SkyTruth for the past eight months I am happy to say that I feel I have made a difference in the world. I accomplished so much that I thought would be impossible for me initially. I used to think oil slicks were tragedies that happened infrequently, limited to a few times a decade. I was shocked to learn that oily wastewater gets dumped into the ocean so frequently that I was able to capture more than 80 bilge dumps in my eight months at SkyTruth.

In addition, one of my greatest passions is sustainable energy. I was thrilled to be an advocate for clean energy by showcasing the dangers of an ever-expanding oil and natural gas industry. West Virginia has been my home for the past five years during my time at West Virginia University and I was happy to be able to bring to light one of the growing concerns of the state through the 2018 FrackFinder update. Finally, I was able to advocate for the protection of Native lands through the most meaningful project to me — the Chaco Culture National Historical Park visualizations. It felt incredible fighting for something that was much bigger than myself. As I leave SkyTruth, I will miss contributing to the world in my own way.

SkyTruth has always been more to me than just a place to intern. I have made unforgettable connections with my colleagues despite the various challenges that we all have to face every day, such as the ongoing COVID-19 pandemic. Never once did I feel that I was alone in my work. I always knew there were people supporting me and encouraging me in my projects even when I was working remotely. I will never forget Christian’s tour of Shepherdstown on my first day or Brendan’s talks about the best Star Wars movie. I cannot thank each of them enough for the patience and kindness they showed me in my short time with them. Everyone at SkyTruth has contributed to my success in some way. I will miss everyone, but I’ll carry my new skills and experiences with me for the rest of my life.

Drilling Detection with Machine Learning Part 2: Segmentation Starter Kit

Geospatial Analyst Brendan Jarrell explains, step-by-step, how to develop a machine learning model to detect oil and gas well pads from satellite imagery.

[This is the second post in a 3-part blog series describing SkyTruth’s effort to automate the detection of oil and gas well pads around the world using machine learning. This tool will allow local communities, conservationists, researchers, policymakers and journalists to see for themselves the growth of drilling in the areas they care about. This is a central part of SkyTruth’s work: to share our expertise with others so that anyone can help protect the planet, their communities, and the places they care about. You can read the first post in the series here. All of the code that will be covered in this post can be found here. Our training dataset is also available here.]

SkyTruth Intern Sasha Bylsma explained in our first post in this series how we create training data for a machine learning workflow that will be used to detect oil and gas well pads around the world. In this post, I’m going to explain how we apply a machine learning model to satellite imagery, explaining all the tools we use and steps we take to make this happen, so that anyone can create similar models on their own.

Once we have created a robust set of training data, we want to feed a satellite image into the machine learning model and have the model scan the image in search of well pads. We then look to the model to tell us where the well pads are located and give us the predicted boundary of each of the well pads. This is known as segmentation, as shown in Figure 1. 

Figure 1: An example of our current work on well pad segmentation. The original image is seen on the left; what the ML model predicts as a well pad can be seen on the right. Notice that the algorithm is not only returning the drilling site’s location, but also its predicted boundaries.

We want the model to identify well pad locations because of the crucial context that location data provides. For example, location can tell us if there is a high density of drilling nearby, helping local communities track increasing threats to their health. It also lets us calculate the total area of disturbed land in a region of interest, helping researchers, advocates and others determine how severely wildlife habitat or other land characteristics have been diminished.

In the past, SkyTruth did this work manually, with an analyst or volunteer viewing individual images to search for well pads and laboriously drawing their boundaries. Projects like FrackFinder, for example, may have taken staff and volunteers weeks to complete. Now, with the help of a machine learning model, we can come in on a Monday morning, let the model do its thing, and have that same dataset compiled and placed on a map in an hour or two. The benefits of leveraging this capability are obvious: we can scan thousands of images quickly and consistently, increasing the likelihood of finding well pads and areas with high levels of drilling.

Formatting the data

So how do we do this? The first thing we need to do is get our data into a format that will be acceptable for the machine learning model. We decided that we would use the TensorFlow API as our framework for approaching this task. TensorFlow is an open-source (i.e. “free-to-use”) software package that was developed by Google to give users access to a powerful math library specifically designed for machine learning. We exported data from Google Earth Engine in the TFRecord format; TFRecords are convenient packages for exporting information from Earth Engine for later use in TensorFlow. In our code under the section labeled “Get Training, Validation Data ready for UNET,” we see that there are a few steps we must fulfill to extract the TFRecords from their zipped up packages and into a usable format (see Figure 2). 

import tensorflow as tf

# Bands included in our input Feature Collection and S2 imagery.
bands = ['R', 'G', 'B']
label = 'Label'
featureNames = bands + [label]

# Describe each band as a fixed-length, 256x256 float feature.
cols = [
    tf.io.FixedLenFeature(shape=[256, 256], dtype=tf.float32)
    for band in featureNames
]

# Pass these new features into a dictionary, used to describe pieces of the input dataset.
featsDict = dict(zip(featureNames, cols))

Figure 2:  Preprocessing code

Second, we create TensorFlow representations of the information we are interested in drawing out of each of our examples from the Google Earth Engine workflow (see the first post in this series for more explanation on how we made these samples). Each of the samples has a Red, Green, and Blue channel associated with it, as well as a mask band, called “label” in our code. As such, we create TensorFlow representations for each of these different channels that data will be plugged into. Think of the representations we create for each channel name as sorting bins; when a TFRecord is unpacked, the corresponding channel values from the record will be placed into the bin that represents it. After loading in all of our TFRecords, we push them into a TFRecordDataset — a dataset populated by several TFRecords. We then apply a few functions to the TFRecordDataset that make the records interpretable by the model later on.
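The unpacking described above can be sketched as follows. This is an illustrative miniature, not our exact notebook code; the file name is hypothetical (the real ones come from the Earth Engine export task):

```python
import tensorflow as tf

bands = ['R', 'G', 'B']
label = 'Label'
featureNames = bands + [label]

# The "sorting bins": one fixed-length 256x256 float feature per channel.
featsDict = {
    name: tf.io.FixedLenFeature(shape=[256, 256], dtype=tf.float32)
    for name in featureNames
}

def parseExample(example):
    # Unpack one serialized record into its channels...
    parsed = tf.io.parse_single_example(example, featsDict)
    # ...stack R, G, B into a [256, 256, 3] image tensor...
    image = tf.stack([parsed[b] for b in bands], axis=-1)
    # ...and keep the mask as a [256, 256, 1] label tensor.
    mask = tf.expand_dims(parsed[label], axis=-1)
    return image, mask

# Hypothetical file name; several TFRecords populate one TFRecordDataset.
data = tf.data.TFRecordDataset(['training_patches_00.tfrecord.gz'],
                               compression_type='GZIP')
data = data.map(parseExample, num_parallel_calls=tf.data.AUTOTUNE)
```

Each parsed element is then an (image, mask) pair, ready for the training/validation split described next.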

Validation dataset

Once the dataset is loaded in, we split the dataset into two. This is an important part of machine learning, where we set aside a small amount of the whole dataset. When the model is being trained on the larger portion of the dataset, known as the training data, it will not see this smaller subset, which we call the validation set. As its name suggests, the model uses this smaller fraction of information to perform a sanity check of sorts. It’s asking itself, “Okay, I think that a well pad looks like this. Am I close to the mark, or am I way off?” All of this is put in place to help the model learn the minute details and intricacies of the data we’ve provided it. Typically, we will reserve 15-30% of our total dataset for the validation set. The code necessary for splitting the dataset is shown in Figure 3 below.

# Get the full size of the dataset.
full_size = len(list(data))
print(f'Full size of the dataset: {full_size}','\n')

# Define a split for the dataset.
train_pct = 0.8
batch_size = 16
split = int(full_size * train_pct)

# Split it up.
training = data.take(split)
evaluation = data.skip(split)

# Get the data ready for training.
training = training.shuffle(split).batch(batch_size).repeat()
evaluation = evaluation.batch(batch_size)

# Define the steps taken per epoch for both training and evaluation.
TRAIN_STEPS = math.ceil(split / batch_size)
EVAL_STEPS = math.ceil((full_size - split)  / batch_size)

print(f'Number of training steps: {TRAIN_STEPS}')
print(f'Number of evaluation steps: {EVAL_STEPS}')

Figure 3: Validation split code snippet

Implementation in U-Net

Now it’s time for the fun stuff! We’re finally ready to begin setting up the model that we will be using for our segmentation task. We will be leveraging a model called a U-Net for our learning. Our implementation of the U-Net in TensorFlow follows a very similar structure to the one seen in the example here. In a nutshell, here is what’s happening in our U-Net code:

1.) The machine learning model is expecting a 256 pixel by 256 pixel by 3 band input. This is the reason why we exported our image samples in this manner from Earth Engine. Also, by chopping up the images into patches, we reduce the amount of information that needs to be stored in temporary memory at any given point. This allows our code to run without crashing.

2.) The computer scans the input through a set of encoders. An encoder’s job is to learn every little detail of the thing we’re instructing it to learn. So in our case, we want it to learn all of the intricacies that define a well pad in satellite imagery. We want it to learn that well pads are typically squares or rectangles, have well defined edges, and may or may not be in close proximity to other well pads. As the number of encoders increases further down the “U” shape of the model, it is learning and retaining more of these features that make well pads unique.

3.) As the computer creates these pixel-by-pixel classifications sliding down the “U,” it sacrifices the spatial information that the input once held. That is to say, the image no longer appears as a bunch of well pads scattered across a landscape. It appears more so as a big stack of cards. All of the pixels in the original image are now classified with their newly minted predictions (i.e. “I am a well pad” or “I am not a well pad”), but they don’t have any clue where in the world they belong. The task of the upper slope of the “U” is to stitch the spatial information onto the classified predictions generated by our model. In this light, the upward slope of the “U” is made up of filters known as decoders. The cool thing about the U-Net is that as we go further up the “U”, it will grab the spatial pattern associated with the same location on the downward slope of the U-Net. In short, the model gives its best shot at taking these classified predictions and making them back into an image. To see a visual representation of the U-Net model, refer to Figure 4 below.

Figure 4: A graphic representing the U-Net architecture, courtesy of Ronneberger, et al.
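The encoder-decoder “U” described in steps 1 through 3 can be sketched as a miniature Keras model. This is a shallow, illustrative version, not our production network — the filter counts and depth here are placeholders:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the standard U-Net.
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return x

def build_unet(input_shape=(256, 256, 3)):
    # The model expects a 256 x 256 x 3 input, matching our exported patches.
    inputs = tf.keras.Input(shape=input_shape)

    # Downward slope: encoders learn the features that make well pads unique.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottom of the "U".
    b = conv_block(p2, 128)

    # Upward slope: decoders stitch spatial information back on, grabbing
    # the matching pattern from the downward slope via skip connections
    # (the concatenate calls).
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding='same')(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding='same')(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)

    # One-channel sigmoid output: a per-pixel confidence from 0 to 1.
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(c4)
    return tf.keras.Model(inputs, outputs)

UNet = build_unet()
```

A full-depth U-Net simply repeats the encoder and decoder blocks more times down and up the “U”.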

At the end of the trip through the model, we are left with an output image. This image is the model’s best guess at whether what we’ve fed it shows well pads. Of course, the model’s best guess will not be absolute for each and every pixel in the image. Given what it has learned about well pads (how they’re shaped, what color palette usually describes a well pad, etc.), the model returns values on a spectrum from 0 to 1. Wherever a value lands between these two numbers can be called the model’s confidence in its prediction. So, for example, forested areas in the image would ideally show a confidence value near zero; conversely, drilling sites picked up in the image would have confidence values close to one. Ambiguous features in the image, like parking lots or agricultural fields, might have a value somewhere between zero and one. Depending on how well the model did when compared to the mask associated with the three-band input, it is penalized for its mistakes using what’s known as a loss function. To read more about loss functions and how they can be used, be sure to check out this helpful blog. Now that we have the model set up, we are ready to gear up for training!
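A toy example makes the loss idea concrete. Binary cross-entropy is one common loss for this kind of per-pixel yes/no prediction — the post doesn’t pin down our exact loss, so take this as illustrative:

```python
import tensorflow as tf

loss_fn = tf.keras.losses.BinaryCrossentropy()

# The mask says "well pad" (1.0). A confident, correct prediction
# (confidence 0.99) is penalized very little...
good = loss_fn([[1.0]], [[0.99]])

# ...while a confident mistake (confidence 0.01) is penalized heavily.
bad = loss_fn([[1.0]], [[0.01]])

# Training then nudges the model's weights in whatever direction
# shrinks this penalty across all pixels.
```

Summed over every pixel in every patch, this penalty is the signal the model uses to improve from one epoch to the next.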

Data augmentation

Before we start to train, we define a function which serves the purpose of tweaking the inputs slightly every time they are seen by the model. This is a process known as data augmentation. The reason why we make these small changes is because we don’t have a large dataset. If we give the model a small dataset without making these tweaks, each time the model sees the image, it will essentially memorize the images as opposed to learning the characteristics of a well pad. It’s a pretty neat trick, because we can make a small dataset seem way larger than it actually is simply by mirroring the image on the y-axis or by rotating the image 90 degrees, for example. Our augmentation workflow is shown in Figure 5.

import numpy as np
import tensorflow as tf

# Augmentation function to pass to the Callback class.
def augment(image, mask):
    # Apply the same random transformation to the image and its mask.
    rand = np.random.randint(100)
    if rand < 25:
        image = tf.image.flip_left_right(image)
        mask = tf.image.flip_left_right(mask)
    elif rand < 50:
        image = tf.image.rot90(image)
        mask = tf.image.rot90(mask)
    elif rand < 75:
        image = tf.image.flip_up_down(image)
        mask = tf.image.flip_up_down(mask)
    # Otherwise, leave the pair unchanged.
    return (image, mask)

# Callback for data augmentation. Note that the Keras hook is named
# on_train_batch_begin.
class aug(tf.keras.callbacks.Callback):
    def on_train_batch_begin(self, batch, logs=None):
        batch.map(augment, num_parallel_calls=5)
        batch.shuffle(10)

Figure 5: Augmentation function and checkpoints cell

Fitting the model to the dataset

Now it’s time to put this model to the test! We do this in a TensorFlow call known as .fit(). As the name suggests, it is going to “fit” the model to our input dataset. Let’s go ahead and take a look at the code from Figure 6, shown below. 

history = UNet.fit(
     x = training,
     epochs = model_epochs,
     steps_per_epoch = TRAIN_STEPS,
     validation_data = evaluation,
     validation_steps = EVAL_STEPS,
     callbacks = [aug(),cp,csv])

Figure 6: Fitting the model to the input dataset

It’s important to conceptually understand what each of the values passed into this function call represents. We start with the variable “x”: this expects us to pass in our training dataset, which was created earlier. The next argument is called epochs. Epochs describe how many times the model will see the entire dataset during the fitting process. This is somewhat of an arbitrary number, as some models can learn the desired information more quickly, thus requiring less training. Conversely, training a model for too long can become redundant or potentially lead to overfitting. Overfitting is when a model learns to memorize the images it’s trained on, but it doesn’t learn to generalize. Think of overfitting like memorizing a review sheet the night before a test; you memorize what is covered in the review, but any minor changes in the way questions are asked on the actual test could trip you up. For this reason, it is generally up to the user to determine how many epochs are deemed necessary based on the application. 

The next arguments, steps_per_epoch and validation_steps, describe how many batches of data should be taken from our training and validation sets, respectively, during each epoch. Batches are small chunks of the dataset; it is useful to divide up the dataset into batches to make the training process more computationally efficient. One would typically want to go through the whole dataset every epoch, so it’s best to set the steps as such. Validation_data is where we specify the data we set aside during training to validate our model’s predictions. Remember, that data will not be seen by the model during its training cycle. The last argument is called callbacks. This is where we pass in the augmentation function. This function is instructed by our callback to run at the beginning of each new batch, therefore constantly changing the data during training. We also optionally pass in other callbacks which might be useful for later reference to our training session. Such callbacks might export the loss and metrics to our Google Drive in a comma-separated values format, or might save checkpoints throughout training, keeping track of which epoch produces the lowest loss. There are many other pre-packaged callbacks which can be used; a full list can be found here. Now that we have all of that covered, it’s time to start learning! By running this code, we begin the training process, which will continue until the model has finished running through all of the epochs we specified.
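For instance, the cp and csv callbacks passed alongside aug() in the fit call can be built from Keras’s pre-packaged callback classes. The file paths below are hypothetical stand-ins (ours pointed at a mounted Google Drive folder):

```python
import tensorflow as tf

# Save the model's weights from the epoch with the lowest validation
# loss, overwriting earlier, worse checkpoints.
cp = tf.keras.callbacks.ModelCheckpoint(
    'unet_checkpoint.h5',     # hypothetical output path
    monitor='val_loss',
    save_best_only=True)

# Append each epoch's loss and metrics to a comma-separated values file
# for later reference to the training session.
csv = tf.keras.callbacks.CSVLogger('training_log.csv')
```

Both objects are then handed to .fit() through its callbacks argument, and Keras invokes them automatically at the appropriate points during training.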

Once that has finished, we save the model and plot its metrics and its loss, as shown in Figure 7. Based upon how these plots look, we can tell how well we did during our training.

Figure 7: An example chart, showing plotted metrics (top) and loss (bottom). Metrics are used to evaluate the performance of our model, while loss is directly used during training to optimize the model. As such, a good model will have a greatly reduced loss by the time we reach the end of training.
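The plotting itself can be as simple as reading the history object returned by .fit(). A sketch is below; the metric keys ('accuracy', 'val_accuracy') are assumptions and depend on what the model was compiled with:

```python
import matplotlib
matplotlib.use('Agg')  # render without a display (e.g., in Colab/scripts)
import matplotlib.pyplot as plt

def plot_history(history):
    # history.history maps each metric name to one value per epoch.
    fig, (top, bottom) = plt.subplots(2, 1, sharex=True)

    # Metrics on top, as in Figure 7.
    top.plot(history.history['accuracy'], label='training')
    top.plot(history.history['val_accuracy'], label='validation')
    top.set_ylabel('metric')
    top.legend()

    # Loss on the bottom; a good model shows this curve falling steadily.
    bottom.plot(history.history['loss'], label='training')
    bottom.plot(history.history['val_loss'], label='validation')
    bottom.set_ylabel('loss')
    bottom.set_xlabel('epoch')

    fig.savefig('training_curves.png')
```

If the validation loss starts climbing while the training loss keeps falling, that gap is the overfitting described earlier.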

And voila! You have made it through the second installment in our series. The next entry will cover post-processing steps of our machine learning workflow. Questions we will answer include:

– How do we make predictions on an image we’ve never seen before?

– How do we take a large image and chop it into smaller, more manageable pieces? 

– How do we take some new predictions and make them into polygons?

Stay tuned for our next entry, brought to you by Dr. Ry Covington, SkyTruth’s Technical Program Director. In case you missed it, be sure to check out the first post in this series. Happy skytruthing!

SkyTruth’s West Virginia FrackFinder Datasets Updated

Oil and gas drilling activity in West Virginia continues to expand.

For more than a decade, SkyTruth has been tracking the footprint of oil and gas development in the Marcellus and Utica shale basins in West Virginia, Pennsylvania, and Ohio through our FrackFinder project. Initially, our FrackFinder project relied on volunteers to help us identify activity on the ground (thank you to all you SkyTruthers out there!). Since then, we’ve continued to update this database with help from SkyTruth interns and staff. Today, we’re excited to announce our latest updates to our West Virginia FrackFinder datasets. The updated data now include drilling sites and impoundments that appeared on the landscape through 2015–2016 (our 2016 update) and through 2017–2018 (our 2018 update). In 2016, 49 new drilling sites and 17 new impoundments appeared on the landscape. In 2018, 60 additional drilling sites and 20 new impoundments appeared; an 18% and 15% jump, respectively, from 2016.

With these additions, our West Virginia datasets track the footprint of oil and gas development in the state for more than a decade, stretching from 2007 to 2018.

Image 1. New drilling sites in Tyler County, near Wilbur and West Union, WV

We use high-resolution aerial photography collected as a part of the USDA’s National Agricultural Imaging Program (NAIP) to identify drilling sites and impoundments and make their locations available to the public. NAIP imagery is typically collected every two to three years, so once the imagery from each flight season is available, we compare permit information from the West Virginia Department of Environmental Protection with NAIP imagery to find and map new drilling sites. Our datasets of what’s actually on the ground — not just what’s been permitted on paper — help landowners, public health researchers, nonprofits, and policymakers identify opportunities for better policies and commonsense regulations. And our data has resulted in real-world impacts. For example, researchers from Johns Hopkins University used our FrackFinder data in Pennsylvania to document the human health impacts of fracking. Their research found that living near an unconventional natural gas drilling site can lead to higher premature birth rates in expecting mothers and may also lead to a greater chance of suffering an asthma attack. Maryland Governor Larry Hogan cited this information in his decision to ban fracking in his state.

We’ve shared the updated FrackFinder West Virginia data with research partners at Downstream Strategies and the University of California–Berkeley investigating the public health impacts of modern drilling and fracking, and with environmental advocacy groups like Appalachian Voices and FracTracker Alliance fighting the expansion of energy development in the mid-Atlantic.

We are also proud to roll out a Google Earth Engine app, which will be the new home for our West Virginia FrackFinder data. Users can find all of our previous years’ data (2007–2014) as well as our new 2016 and 2018 datasets on this app. The interactive map allows you to zoom into locations and see exactly where we’ve found oil and gas drilling sites and wastewater impoundments. A simple click on one of the points will display the year in which we first detected drilling, along with the measured area of the site or impoundment (in square meters). Users can toggle different years of interest on and off using the left panel of the map. At the bottom of that same panel, users can access the total number of drilling sites and impoundments identified during each year. Lastly, users can download SkyTruth’s entire FrackFinder dataset using the export button.

Image 2. Our Earth Engine app lets users track oil and gas development through time in WV.

We hope that the updates to our West Virginia FrackFinder datasets, and the new Earth Engine app that hosts them, will inform researchers, landowners, policymakers, and others, and help them bring about positive change. Feel free to take a look and send us feedback; we love to hear from people using our data.

New Intern Matthew Ibarra Shifts from Aerospace Engineering to Protecting the Planet from Space

Matthew thought he wanted to be an aerospace engineer when he started college. Then he learned more about environmental damage to the planet.

Hello There!

My name is Matthew Ibarra and I am a new intern at SkyTruth. I am currently a student attending West Virginia University (WVU). Originally I came to WVU to study mechanical and aerospace engineering. I have always been passionate about math and science, so naturally I believed engineering would be a perfect fit for me. I was part of my high school’s robotics team and I believed this would be something I could do forever.

However, as my time at WVU went on I became much less interested in engineering and I decided that I wanted to study something else. Through my engineering classes I inadvertently learned more about energy and from there about renewable energy sources. I developed a passion for renewables and I decided I wanted to shift my focus of study and work on environmental challenges. I have always felt there is a lot more bad news than good news in the world and I kept hearing about problems such as massive deforestation in the Amazon, pollution of the planet and the oceans — and those were just the tip of the melting iceberg. I wanted to do something that would leave a lasting impact. All of these factors pushed me to change my major to Environmental and Energy Resource Management. And it was the best decision I have ever made. 

Matthew played saxophone for the WVU marching band and currently plays clarinet in the WVU Concert Band and saxophone in the WVU pep band. Photo by Roger Sealey.

My best friend Amanda’s mother Teri works at SkyTruth as our office administrator, which was very serendipitous for me. Amanda told me about SkyTruth and I was excited to learn how SkyTruth gathers environmental data and conducts research using satellite imagery. I was intrigued because it seemed like SkyTruth worked in all the areas I was passionate about: the environment, technology, and research. I looked into some of SkyTruth’s current and past projects, and the one that excited me the most was FrackFinder, which helps keep track of the environmental impacts of fracking for natural gas. I was also excited about SkyTruth’s interactive maps that help track the removal of mountaintops from coal mining. SkyTruth works on many other projects that I knew I wanted to be a part of as well. An internship at SkyTruth was the perfect way for me to not only help work on projects I cared about, but also to learn more about what I am interested in.

As an intern I am currently working to monitor the Southeast Asia region for bilge dumps. Bilge dumping is an illegal practice in which vessels bypass pollution controls and dump their oily ballast and wastewater at sea. I am collecting useful data that will contribute to a machine learning program that can automatically detect bilge dumps from satellite images around the world. I am also working to update FrackFinder to include data from 2016 and create an interactive map that can easily display information such as natural gas well pad locations in West Virginia, and when they were drilled, to show how natural gas fracking has impacted West Virginia over time.

I am passionate about sustainability and hope to make it central to my career. Sustainability is the notion of living your life in such a way that you leave resources for the people who come after you. After my time here at SkyTruth I hope to go into government work. I would like to work for the Department of Energy in the Office of Energy Efficiency and Renewable Energy. Fossil fuels will eventually run out, and a transition to renewables will help address current climate and environmental issues. I feel it is important to find solutions now and transition our power needs to something more sustainable while we are still able to do so.

Matthew admires Blackwater Canyon in West Virginia. Photo by Matthew Ibarra.

I believe SkyTruth is important in achieving my goals because I am gaining valuable skills and knowledge that I know will help me in the future. I love working with Geographic Information System (GIS) programs. GIS essentially uses computers to analyze physical features of the Earth, such as measuring forest density or tracking changing temperatures; it has almost endless applications. I am learning to work with Google Earth Engine, which is a powerful and intuitive way to work in GIS. Earth Engine requires me to code in the programming language JavaScript, so I am learning that skill as well. These skills will remain relevant well into the future, and I am excited to deepen my understanding of them.

When I started college five years ago I never thought that I would end up where I am today. I spent so many sleepless nights trying to finish my physics homework and study my chemistry notes. I never thought that I would want to give all that up to work in something completely different, but I am thankful I did. I am eager to be learning something new every day at SkyTruth and I am thankful to everyone who helped me get to where I am today. I am excited to continue my internship here and keep learning more about what’s important to me.

Matthew is a hockey fan and celebrated the DC Capitals’ Stanley Cup victory in 2018. Photo by Photos Beyond DC.

Fracking in Suburbia

What do you do when big oil moves in next door?

Karen Speed’s new house in Windsor, Colorado was supposed to be a peaceful retirement home. Now she plans to move.

Patricia Nelson wanted her son Diego to grow up the way she did – far from the petrochemical plants surrounding their home in Louisiana. So she moved back to Greeley, Colorado to be close to her family. Then she learned about the drilling behind Diego’s school.

Shirley Smithson had enjoyed her quiet community for years, riding her horse through her neighbor’s pastures, watching the wildlife, and teaching at local schools. When she learned that oil wells would be popping up down the street, she was in denial at first, she says. Then she took action. 

These women shared their stories with a group of journalists and others attending the Society of Environmental Journalists (SEJ) 2019 meeting in Fort Collins, Colorado last month. Fort Collins sits right next to Weld County – the most prolific county in Colorado for oil and gas production and among the most prolific in the entire United States. There, hydraulic fracturing (mostly for oil) has boomed, along with a population surge that is gobbling up farmland and converting open space into subdivisions. Often, these two very different types of development occur side-by-side. 

“We moved [into our house] in September, 2014,” Karen Speed told me, “and by the third week of January 2015, boy, I regretted building that house.” That was the week she learned that Great Western Oil and Gas Company, LLC, was proposing to put a well pad between two neighborhoods; and one of those neighborhoods was hers. When residents complained, she said, the company moved the site across a road and into a valley. “Which really isn’t the right answer,” Speed said. “Not in my backyard attitude? No – not in my town.” The well pad now sits next to the Poudre River and a bike path, according to Speed. “People I know no longer ride there. They get sick,” she said. “One guy I know gets nosebleeds. He had asthma already and gets asthma attacks after riding.”

Well pads in neighborhoods are not uncommon throughout parts of Colorado’s Front Range. Weld County alone has an estimated 21,800 well pads and produces roughly 88% of Colorado’s oil. SkyTruth’s Flaring Map reveals a high concentration of flaring sites in that region. This industrial activity occurs within residential areas and farmland despite the fact that people living near fracking sites in Colorado complain of bloody noses, migraines, sore throats, difficulty breathing, and other health problems, according to Nathalie Eddy, a Field Advocate with the nonprofit environmental group Earthworks.

Image 1. Methane flaring locations from oil and gas wells in Weld County, CO. Image from SkyTruth’s Annual Flaring Volume Estimates from Earth Observation Group.

And then there was the explosion. Two years after Speed moved into her new home, on December 22, 2017, her house shook when a tank exploded at Extraction Energy’s Stromberger well pad four miles away. “When it exploded it really rocked the town,” she said. More than a dozen fire departments responded to the 30-foot high flames. “It went from 8:45 in the evening until the following morning before they could recover and get out of that space,” Speed recalled. According to a High Country News story, workers raced around shutting down operations throughout the site — 19 wells in all, plus pipelines, tanks, trucks, and other industrial infrastructure — to prevent oil, gas, and other chemicals from triggering more explosions. Roughly 350 houses sat within one mile of the site and many more were within shaking range. One worker was injured. Dispatcher recordings released by High Country News reveal how dangerous the situation was, and how unprepared local fire departments were for an industrial fire of that magnitude.

That explosion occurred the very night Patricia Nelson returned home from a long day at the District Court in Denver. Nelson has been part of a coalition of public interest groups – including the NAACP, the Sierra Club, Wall of Women, and Weld Air and Water – that sued the Colorado agency responsible for overseeing oil and gas production in the state, the Colorado Oil and Gas Conservation Commission, for approving permits for 24 wells behind her son Diego’s school. The company that would drill those wells was the same company overseeing the site that exploded – Extraction Energy.

Under Colorado law, oil and gas wells can be as close as 500 feet from a home and 1,000 feet from a school. Extraction’s new wells would be just over that limit and less than 1,000 feet from the school’s playing fields. Although the court hadn’t yet ruled, the company began construction on the site a few months later, in February 2018, and began drilling the wells that May. Ultimately, the District Court and the Appeals Court upheld the permits. Oil wells now tower over the Bella Romero Academy’s playing fields and the surrounding neighborhood of modest homes.

Smithson once taught at Bella Romero and worries about the kids. “When you have noise pollution and light pollution and dust and methane and all the things that come with having oil and gas production going on, kids are impacted physically. Their lungs aren’t developed…their immune systems aren’t totally developed and they are picking all this up,” she said. She has tried to mobilize the community but has been frustrated by the intimidation many parents feel. “This is a community without a voice,” she said. Bella Romero Academy is roughly 87% students of color, most of whom qualify for free or reduced lunch. “There are kids from Somalia, from war camps” attending the school, Smithson said. “They have trauma from the top of their head to their toes. They’re not going to speak up.” Both Smithson and Nelson pointed out that immigrants – whether from Somalia or Latin America – are unlikely to speak out because they fear retaliation from Immigration and Customs Enforcement. Moreover, some parents work for energy companies. They fear losing their jobs if they oppose an oil site near the school.

In fact, according to Smithson, Nelson, and Speed, Extraction Energy came to Bella Romero because it expected few parents would resist: the company originally proposed these wells adjacent to the wealthier Frontier Academy on the other side of town, where the student body is 77% white. Extraction moved the wells to Bella Romero after an outcry from that school community. This kind of environmental injustice isn’t unusual, and it generated attention from major media outlets, including the New York Times and Mother Jones. You can see how close the wells are to the school in this clip from The Daily Show (and on the SkyTruth image below).

Image 2: Extraction Energy’s fracking site near Bella Romero Academy in Greeley, CO. Image by SkyTruth.

SkyTruth has resources to help residents, activists, and researchers address potential threats from residential fracking. SkyTruth’s Flaring Map covers the entire world; users can see flaring hotspots in their region, where energy companies burn off excess methane from drilling operations into the air, and document trends in the volume of methane burned over time. The SkyTruth Alerts system can keep people in Colorado, New Mexico, Wyoming, Montana, Utah, Pennsylvania, and West Virginia up to date on new oil and gas permits and new activities in their area of interest.

We know that residents and researchers using these kinds of tracking tools can have a major impact. Johns Hopkins University researchers used SkyTruth’s FrackTracker program, which identified the locations of fracking sites in Pennsylvania, to document health impacts in nearby communities. Those impacts included increases in premature births and asthma attacks. Maryland Governor Larry Hogan cited this information in his decision to ban fracking in his state. Those interested in collaborating with SkyTruth on similar projects should contact us.

Photo 1. Pump jacks at Extraction Energy’s Rubyanna site in Greeley, CO. Photo by Amy Mathews.

Although Colorado activists have had limited success so far, this past year did bring some positive changes. The Colorado General Assembly passed SB 181, which directs the Colorado Oil and Gas Conservation Commission to prioritize public health, safety, welfare, and the environment over oil and gas development. The new law also allows local governments to regulate the siting of oil and gas facilities in their communities and set stricter standards for oil and gas development than the state. Colorado agencies are still developing regulations to implement these new provisions.

Improvements in technology could help as well. The same day the SEJ crew met with concerned residents, a spokeswoman with SRC Energy explained the state-of-the-art operations at its Golden Eagle pad in Eaton, Colorado. That technology is designed to mitigate impacts on the surrounding community and includes a 40-foot high sound wall, an on-site water tank that pumps water from a nearby farm (which reduces truck traffic), and electric pumps (to reduce emissions), among other features. Still, for many residents, the fear of being surrounded by industrial sites remains.

Photo 2. SRC Energy’s Golden Eagle Pad, Eaton, CO. Photo by Amy Mathews.

In the meantime, Karen Speed is starting to look elsewhere for a new home. Shirley Smithson has decided she’s not going to let an oil company ruin her life. And Patricia Nelson will continue to fight for her family.

“I think about moving all the time,” Nelson told the group of journalists, her voice cracking. “But my whole family lives here and I don’t feel I can leave them behind… My sister has five children and drives to Denver for work every day…. I have cousins with kids at this school and family friends. Really, moving isn’t an option for me.”