Tuesday, November 14, 2017

Photo Interpretation and Remote Sensing Module 10

This module's focus was the last of the image classification methods we're covering: supervised classification. In supervised classification, the analyst identifies known examples of the different LULC classes so that the software can associate a spectral signature with each one, then classifies the remaining pixels based on their statistical similarity to those training classes. It's more work up front than unsupervised classification, but it has the advantage that you don't have to go back and figure out what the classes are after the fact.

This was another fun lab assignment, but also kind of frustrating. Despite several attempts, I had a very hard time establishing a unique spectral signature for the road class, and as you might be able to see in my map below, even the final output has some confusion between the road and urban classes, and possibly also road and grass in some places.

You can also see from the distance output image (in which brighter areas correspond to areas that are more different from the established spectral signatures and thus more likely to end up misclassified) that there might be some issues with some of the agricultural areas and along the edges of certain features like the lakes. If this map were for a real-world application, those areas might need to be ground truthed or compared with other imagery to see if they need to be reclassified.
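The idea behind both the classification and the distance image can be sketched with a toy minimum-distance classifier (one of several supervised classifiers; the class names, band values, and signature spectra below are all made up for illustration). Each pixel is assigned to the training signature it is spectrally closest to, and the remaining distance to that nearest signature is exactly the kind of value the distance output image displays:

```python
import numpy as np

# Toy 4-band image: (rows, cols, bands) of reflectance-like values.
rng = np.random.default_rng(0)
image = rng.random((50, 50, 4))

# Hypothetical training signatures: one mean spectrum per class,
# as would come from digitized training areas.
signatures = {
    "water": np.array([0.05, 0.04, 0.03, 0.02]),
    "grass": np.array([0.08, 0.12, 0.07, 0.45]),
    "urban": np.array([0.30, 0.28, 0.27, 0.25]),
}
names = list(signatures)
means = np.stack([signatures[n] for n in names])  # (classes, bands)

# Euclidean distance from every pixel to every class mean.
diff = image[:, :, None, :] - means[None, None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))          # (rows, cols, classes)

classified = dist.argmin(axis=-1)  # index into `names` per pixel
distance_img = dist.min(axis=-1)   # bright = far from every signature
```

ERDAS's default classifier is more sophisticated than this (maximum likelihood uses the covariance of each training class, not just its mean), but the distance-to-nearest-signature logic is the same.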

Monday, November 6, 2017

Photo Interpretation and Remote Sensing Module 9

This week we're back to land use/land cover (LULC) classification from aerial imagery, this time using automated methods. In this lab, we learned how to perform unsupervised classification in both ArcMap and ERDAS Imagine, culminating in a classified image of the UWF campus. For this assignment, the image was automatically classified into 50 different classes based on pixel characteristics, then manually simplified into five classes by identifying the broad LULC category each of the original 50 classes belonged to. My output is below; I ended up with a significant portion of the image falling into the "Mixed" class, which consists of pixels whose automatically assigned spectral class spanned multiple LULC classes (most of the confusion seemed to be between grass and buildings/roads--when I started out by assigning classes to the grass category, I suddenly had a number of green roofs!).
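The manual simplification step amounts to building a recode table from spectral cluster to LULC class and applying it to every pixel. A minimal sketch (the cluster labels and the recode assignments here are random stand-ins; in the lab the table was filled in by visually inspecting each cluster):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the output of an unsupervised run:
# each pixel labeled with one of 50 spectral clusters.
clusters = rng.integers(0, 50, size=(40, 40))

# Hypothetical recode table: spectral cluster -> LULC class index.
# 0=water, 1=grass, 2=trees, 3=buildings/roads, 4=mixed.
recode = rng.integers(0, 5, size=50)  # done by inspection in the lab

# numpy fancy indexing applies the whole lookup in one step.
lulc_map = recode[clusters]
```

The "Mixed" class exists precisely because some clusters can't be assigned a single LULC index in that table.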

Tuesday, October 31, 2017

Photo Interpretation and Remote Sensing Module 8

Another fun lab this week! After learning about thermal energy, how to work with thermal infrared imagery in both ArcMap and ERDAS Imagine (as well as how to compile multiple image bands into a single image in both programs), and some applications for thermal image analysis, such as identifying forest fires, we were asked to use the thermal band in one of the lab images to identify a feature of our choice.

I looked at the image from Ecuador using a panchromatic display of the thermal band in comparison to a true color composite and a panchromatic display of the near-infrared band. There were several things in the image that intrigued me (some of which I actually wasn't able to identify with certainty), but I eventually noticed this pair of bridges that only popped out at me when looking at the thermal band--in the visible and near-infrared bands, the one on the left is subtle and the one on the right is almost invisible, because their coloring isn't that far off from the water and both pavement and water absorb strongly in the near-infrared. But pavement certainly gives off thermal energy after warming up in the sun!

Anyway, I thought it was really interesting, because I wasn't expecting such a stark difference in what seems like it should be a prominent feature. There are certainly benefits to comparing multiple image bands.
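The band-compiling step, and what a "panchromatic display" of a single band means, can be sketched with arrays (the bands here are random placeholders for what would be read from separate image files):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical single-band arrays, e.g. loaded from separate files.
red, green, blue, nir, thermal = (rng.random((30, 30)) for _ in range(5))

# "Layer stacking": combine the bands into one (rows, cols, bands)
# image, as ERDAS's Layer Stack or ArcMap's Composite Bands tool does.
stacked = np.dstack([red, green, blue, nir, thermal])

# A panchromatic (single-band grayscale) display of the thermal band
# is just that band linearly stretched to 0-255 display values.
t = stacked[:, :, 4]
gray = ((t - t.min()) / (t.max() - t.min()) * 255).astype(np.uint8)
```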

Tuesday, October 24, 2017

Photo Interpretation and Remote Sensing Module 7

Lots of imagery and maps this week! We learned about working with multispectral imagery in both ERDAS Imagine and ArcMap. For the lab assignment, we were given a Landsat image and asked to identify three features based on clues about their reflectance in different bands of the image, i.e., at different wavelengths. Then we had to create composite images using band combinations that would make those features stand out.
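Conceptually, a band-combination composite just assigns three of the image's bands to the red, green, and blue display channels. A sketch with placeholder data (the band numbering follows the Landsat TM/ETM+ convention, where band 4 is near-infrared; the pixel values are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical Landsat bands keyed by band number.
bands = {b: rng.random((20, 20)) for b in range(1, 8)}

def composite(r, g, b):
    """Assign three bands to the R, G, B display channels."""
    return np.dstack([bands[r], bands[g], bands[b]])

true_color = composite(3, 2, 1)  # bands 3,2,1 = red, green, blue
false_ir = composite(4, 3, 2)    # NIR shown as red: vegetation pops
```

Choosing which bands go to which channel is the whole trick: a feature stands out when a band where it reflects strongly is mapped to a display channel where its surroundings are dark.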

Here are mine:

One issue I had was that the first image (water bodies) looked very washed out once I brought it into ArcMap, and I couldn't figure out how to fix it. I was able to do some histogram correction using the Image Analysis window, just not enough to get it looking as good as it did in Imagine. What's puzzling is that it didn't happen with the other two images, so if it's related to creating the image subset, I don't know why it only caused a problem with one. Aside from that, I enjoyed this lab!
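For what it's worth, the kind of histogram correction involved is usually a percentile (clip) stretch: throw away the extreme tails of the histogram and rescale what's left to the full display range. A rough sketch, with made-up pixel values (this isn't ArcMap's exact algorithm, just the general idea):

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder band with values bunched in the middle of the range,
# which is what makes a display look washed out.
band = rng.normal(0.4, 0.1, size=(25, 25)).clip(0, 1)

def percentile_stretch(a, lo=2, hi=98):
    """Clip to the lo-hi percentile range, then rescale to 0-255."""
    p_lo, p_hi = np.percentile(a, [lo, hi])
    out = np.clip(a, p_lo, p_hi)
    return ((out - p_lo) / (p_hi - p_lo) * 255).astype(np.uint8)

stretched = percentile_stretch(band)
```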

Tuesday, October 17, 2017

Photo Interpretation and Remote Sensing Module 6

This week we started learning about image enhancements that can be used for things like reducing noise and atmospheric effects and emphasizing or de-emphasizing certain features or details. The lab assignment involved experimenting with a number of enhancement tools in both ERDAS Imagine and ArcMap. For the final deliverable, the goal was to minimize the striping (caused by a problem with the sensor) in a Landsat 7 image while preserving as much detail as possible. Here's my attempt:

I had a hard time with this because I don't feel like I understand enough about how to use the different filters to be able to approach a problem like this strategically--I was just sort of throwing options at the image hoping something would work, and not surprisingly I'm not very happy with the result. Hopefully we're going to continue working on how to properly apply image enhancements in the coming modules, now that we know the basics of how to use the tools.
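One of the less mysterious options in that grab bag is a focal (neighborhood) filter. As a toy illustration of why kernel shape matters for striping, a median window oriented across the stripes can suppress them while disturbing detail in the other direction less than a square kernel would (the image and stripe pattern below are simulated, not the actual Landsat 7 data):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
image = rng.random((60, 60))
# Simulate horizontal striping like a sensor artifact:
# every 8th scan line is too bright.
image[::8, :] += 0.5

# A 5x1 median window spans 5 rows at each pixel, so a single bad
# scan line is outvoted by its neighbors above and below.
destriped = ndimage.median_filter(image, size=(5, 1))
```

Real destriping approaches for Landsat 7 SLC-off/striping problems are more involved (e.g. filtering in the frequency domain or gap-filling from other dates), but this is the basic logic of the focal-filter route.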

Tuesday, October 3, 2017

Photo Interpretation and Remote Sensing Module 5a

This week we learned the basics of how to view and navigate images in ERDAS Imagine. I've used this program before, but so long ago that I don't really remember it, so this is helpful, especially as I don't find it to be particularly intuitive to use. For the lab assignment, we had Imagine calculate the area covered by each type of land cover in a classified image, then extracted a subset of the image to create a map layout in ArcMap.
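The area calculation itself is simple once you have a classified raster: count the pixels in each class and multiply by the area of one pixel. A sketch with made-up class codes and a Landsat-style 30 m pixel size:

```python
import numpy as np

rng = np.random.default_rng(6)
# Placeholder classified image: integer class codes 0-4.
classified = rng.integers(0, 5, size=(100, 100))
pixel_area_m2 = 30 * 30  # 30 m pixels

# Count pixels per class, then convert to hectares.
codes, counts = np.unique(classified, return_counts=True)
for code, count in zip(codes, counts):
    hectares = count * pixel_area_m2 / 10_000
    print(f"class {code}: {hectares:.1f} ha")
```

This is effectively what Imagine's attribute table reports for a thematic image.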

Tuesday, September 26, 2017

Photo Interpretation and Remote Sensing Module 4

This week we learned about using ground truthing to confirm land cover/land use classifications, and about calculating the accuracy of an LULC map based on ground truthing. Since we couldn't actually visit our sample sites for the lab assignment, we used Google Maps and Google Street View to confirm or correct the classifications. Below is my map from last week, overlaid with "ground truthing" points color-coded according to whether or not my original classification was correct at that location. Some points were more of a challenge to identify than others, particularly a few that had obviously changed between the time the photo was taken and the time the Google Maps imagery was last updated.
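The accuracy calculation boils down to comparing the mapped class to the reference (ground-truthed) class at each sample point: overall accuracy is the fraction of points that match, and an error (confusion) matrix shows which classes get mixed up. A sketch with invented results for 30 points:

```python
import numpy as np

# Hypothetical results for 30 check points: class codes for what the
# map says vs. what was confirmed in Street View.
# 0=urban, 1=grass, 2=trees, 3=water
mapped = np.array([0, 0, 1, 1, 2, 2, 3, 0, 1, 2] * 3)
reference = np.array([0, 1, 1, 1, 2, 0, 3, 0, 1, 2] * 3)

# Overall accuracy: fraction of points classified correctly.
overall = (mapped == reference).mean()

# Error matrix: rows = mapped class, columns = reference class.
n = 4
confusion = np.zeros((n, n), dtype=int)
np.add.at(confusion, (mapped, reference), 1)
```

The diagonal of the matrix holds the correctly classified points; off-diagonal cells show exactly which pairs of classes are being confused, which is more useful for fixing a map than the single overall number.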