Showing posts with label GIS4930. Show all posts

Friday, October 14, 2022

Topic 3 Module 1: Scale Effect and Spatial Data Aggregation

This week we examined scale effects on raster and vector data, as well as gerrymandering. The relationship between scale and geometric properties is that small-scale maps show less detail than large-scale maps. This is due to generalization, where information is "lost" because fewer vertices are used to represent features, and to exclusion, where smaller scales cause a decrease in the level of hydrographic feature detail. After reading the Goodchild, M.F. 2011 article and other Esri documentation on the web, I understand that my findings in this lab are as expected: detail is lost as the scale changes. The level of detail of features represented by raster or vector data often depends on the cell (pixel) size, or spatial resolution. The cell must be small enough to capture the required detail but large enough that computer storage and analysis can be performed efficiently. More resolution is not always better, especially when considering computation times and data storage limits. As for gerrymandering, it has a very negative history and is defined as manipulating the boundaries of an electoral constituency so as to favor one party or class. It is essentially the redrawing of district polygons, and it can be measured by compactness and community preservation. Below is a screenshot of a district that fails the 'compactness' test.
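As a minimal sketch of how district compactness can be quantified, the Polsby-Popper score (one common compactness measure; the lab does not specify which metric was used) compares a polygon's area to that of a circle with the same perimeter:

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness: 4*pi*A / P^2.
    Scores range from near 0 (least compact) to 1 (a perfect circle)."""
    return 4 * math.pi * area / perimeter ** 2

# A circle is maximally compact
r = 5.0
print(round(polsby_popper(math.pi * r ** 2, 2 * math.pi * r), 4))  # → 1.0

# A long, thin 1 x 20 rectangle (a gerrymandered shape) scores much lower
print(round(polsby_popper(1 * 20, 2 * (1 + 20)), 4))  # → 0.1425
```

A district like the one in the screenshot below, with a highly irregular boundary, would score near the low end of this range.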

Wednesday, October 5, 2022

Module 2.2: Surface Interpolation

This week we covered surface interpolation techniques in GIS, including Thiessen, Inverse Distance Weighted (IDW), and Spline, and critically interpreted the results to compare and contrast them. The lab consisted of exploring water quality data for Tampa Bay, FL, measuring Biochemical Oxygen Demand (BOD) in milligrams per liter. The data consisted of 41 sample points at assumed random locations, and determining the best way to accurately represent the data was largely up to us. The Thiessen technique uses polygons to define an area of influence around each sample point, so that any location inside a polygon is closer to that point than to any other sample point. IDW assumes that things that are close to one another are more alike than those that are farther apart. Spline estimates values using a mathematical function that minimizes overall surface curvature, resulting in a smooth surface that passes exactly through the input points. After looking at the statistics of the data for each technique and checking each output for anomalies, I chose IDW interpolation as my image to display below. This is because it is an exact interpolator, it is a good fit for water data given how it works, and no adjustments were needed to make the data work with the technique. In my opinion it was a sufficient and accurate approach for showing the water quality conditions in Tampa Bay with this particular data set.
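The IDW logic described above can be sketched in a few lines of plain Python (a simplified illustration, not the ArcGIS Pro implementation; the sample values below are hypothetical, not the Tampa Bay BOD data):

```python
import math

def idw(points, x, y, power=2):
    """Inverse Distance Weighted estimate at (x, y).
    points: list of (px, py, value) sample tuples.
    Nearer samples get larger weights (1 / distance**power)."""
    num = den = 0.0
    for px, py, v in points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return v  # exact interpolator: honors the sample points
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

samples = [(0, 0, 2.0), (10, 0, 4.0), (0, 10, 6.0)]
print(idw(samples, 0, 0))            # → 2.0 (exactly the sample value)
print(round(idw(samples, 5, 5), 3))  # → 4.0 (equidistant, so the mean)
```

The early return is what makes IDW an exact interpolator: a prediction at a sample location reproduces the measured value, which is one reason it suited this data set.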

Saturday, September 24, 2022

Module 2.1 Surfaces - TINs and DEMs

This week we laid it all out, literally. Surfaces are an interesting topic when discussing elevation models and 3D visualizations. We read about TIN and DEM elevation models, compared them, examined their properties, and practiced creating and modifying them. In my exploration of TINs and DEMs I learned about suitability modeling, how slope, aspect, and edges affect their appearance, and especially how symbology plays a major role in how the data is shown in a final layout. While these topics and tools such as Raster to TIN, Reclassify, Slope, Aspect, Create TIN, Spline, and Contours are not entirely new, more practice with them is necessary for a greater understanding. The screen capture below is a colorful example of exaggerated terrain in Death Valley near the Furnace Creek area. By adding the TIFF image as an elevation surface in a New Scene and increasing the vertical exaggeration to 2.0, it becomes this.
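To illustrate what the Slope tool is doing under the hood, here is a minimal NumPy sketch (a toy synthetic DEM, not the Death Valley data, and a simplified finite-difference method rather than the exact ArcGIS Pro algorithm):

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Approximate slope (degrees) from a DEM grid using finite
    differences, analogous to what a Slope geoprocessing tool computes."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Tiny synthetic DEM: a plane rising 1 m of elevation per 10 m cell in x
dem = np.tile(np.arange(5, dtype=float), (5, 1))
print(round(float(slope_degrees(dem, cell_size=10.0)[2, 2]), 4))  # → 5.7106
```

A 1-in-10 rise corresponds to arctan(0.1) ≈ 5.71 degrees, so the uniform plane returns that value at every interior cell.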

Tuesday, September 13, 2022

Module 1.3: Data Quality - Assessment

For this lab, the goal of the accuracy assessment was to determine the percentage difference between two road shapefiles compared against a grid overlay in Jackson County, Oregon. The analysis methodology largely followed the readings from Haklay 2010: using the Clip tool, determining road lengths within each grid cell for each shapefile, and then comparing them to get the difference. I also used the Intersect and Summarize Within tools, along with an Excel spreadsheet to make the data easier to compare visually. Once I had my two data sets, I was able to Intersect them and calculate the percentages in ArcGIS Pro. The layout was created from this combined data set using graduated color symbology to show where the differences, ranging from -103 to 80 percent, were the most and least extreme. See the layout image below.
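The per-cell comparison step can be sketched as follows (the cell IDs and lengths here are hypothetical placeholders, and the percent-difference formula is one common reading of the Haklay-style comparison, not necessarily the exact field calculation used in the lab):

```python
def percent_difference(len_a, len_b):
    """Percent difference in total road length within a grid cell.
    Positive: dataset A has more road length than dataset B."""
    return 100.0 * (len_a - len_b) / len_b

# Hypothetical per-cell road lengths (meters) for two road datasets
cells = {"A1": (1200.0, 1000.0), "A2": (450.0, 900.0)}
for cell, (a, b) in cells.items():
    print(cell, round(percent_difference(a, b), 1))  # A1 20.0, A2 -50.0
```

Negative values mean the second dataset has more road length in that cell, which is how differences below zero (down to -103 percent in the layout) arise.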

Comments: I wanted to showcase the grid symbology, but not leave out the roads. Finding a color combination that did not crowd or take over was difficult, but I am happy with the results.

Wednesday, September 7, 2022

Module 1.2: Data Quality Standards

Continuing our module on Data Quality, this week we learned how to determine the quality of road networks by assessing the positional accuracy of two road networks through comparison, following the methodology provided by the National Standard for Spatial Data Accuracy (NSSDA). We were given city data and street data shapefiles of Albuquerque, NM, along with orthophotos to help us create reference points. From there we created accuracy statistics worksheets and a formal accuracy statement per the NSSDA guidelines. Below is an image of my sampling locations.

Summary of steps: Once the reference points were created, I calculated geometry in the attribute tables for the city, street, and reference points to get the corresponding X and Y coordinates. From there I exported the data to Excel and created columns for error_x, error_y, error_xy_sqrd, error_xy, RMSE, Mean, Median, 95th Percentile, Minimum, Maximum, 68th Percentile, and 90th Percentile. The NSSDA statistic is determined by multiplying the RMSE (root mean square error) by a constant to reach the 95% confidence level: 1.7308 for horizontal accuracy and 1.9600 for vertical accuracy. For this project, horizontal accuracy was being determined. The following statements are the accuracy statements produced by multiplying my street RMSE by 1.7308 and my city RMSE by 1.7308.

Street Map Data: Tested __141.6709___ feet horizontal accuracy at 95% confidence level.

City Data: Tested __17.9350___ feet horizontal accuracy at 95% confidence level.
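The NSSDA calculation above reduces to a short computation; here is a sketch with hypothetical radial errors (not the Albuquerque values), using the 1.7308 horizontal multiplier from the worksheet:

```python
import math

def nssda_horizontal(errors_xy):
    """NSSDA horizontal accuracy statistic: RMSE of the radial (xy)
    errors multiplied by 1.7308, the 95% confidence multiplier."""
    rmse = math.sqrt(sum(e ** 2 for e in errors_xy) / len(errors_xy))
    return 1.7308 * rmse

# Hypothetical radial errors (feet) between test points and reference points
errors = [3.1, 2.4, 5.0, 1.8, 4.2]
print(round(nssda_horizontal(errors), 4))  # → 6.0578
```

The resulting value is reported as "Tested ___ feet horizontal accuracy at 95% confidence level," matching the statement format above.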

Wednesday, August 31, 2022

M1: Calculating Metrics for Spatial Data Quality

New class, new tasks. This is Module 1 of Special Topics in GIS. This week we have learned about the difference between accuracy and precision. Accuracy is the absence of error and is determined by comparing a coded value in the database of interest to some independent reference value. For numerical values we can use a metric like the Root Mean Square Error to describe accuracy. Precision is, in this context, the variance of measurement. In other words, how close together are multiple observations of the same coded value? This does not use a reference value, but instead uses a metric like the standard deviation of a sample.

Below you will see two things: first, a map layout from Part A of the lab, where we were tasked with showing accuracy and precision from projected waypoints using circular buffers of precision estimates; second, the numerical results for horizontal accuracy and precision.

Numerical results: Horizontal accuracy of 4.279 and horizontal precision of 4.293
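A minimal sketch of how these two metrics differ in computation (the waypoints and reference position below are made-up values, not the lab data; the precision metric here is a standard-deviation-style radius, one of several conventions):

```python
import math

def horizontal_metrics(points, reference):
    """Accuracy: RMSE of distances from each waypoint to the reference
    (an independent 'true' position). Precision: RMS spread of the
    waypoints around their own mean position, no reference needed."""
    rx, ry = reference
    dists = [math.hypot(x - rx, y - ry) for x, y in points]
    accuracy = math.sqrt(sum(d ** 2 for d in dists) / len(dists))
    mx = sum(x for x, _ in points) / len(points)
    my = sum(y for _, y in points) / len(points)
    spread = [math.hypot(x - mx, y - my) for x, y in points]
    precision = math.sqrt(sum(d ** 2 for d in spread) / len(spread))
    return accuracy, precision

pts = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
acc, prec = horizontal_metrics(pts, (0.0, 0.0))
print(round(acc, 3), round(prec, 3))  # → 1.0 1.0
```

Note that accuracy requires the independent reference value while precision is computed from the observations alone, which is exactly the distinction drawn above.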

GIS Portfolio

The final assignment in the GIS Certificate Program was to create a GIS Portfolio. It went as I expected. It is hard to write about yourself...