All material here is licensed under CC BY-SA.

rasterVis to the rescue

September 26, 2012

Programmers like Oscar Perpiñán Lamigueiro are the reason I love R! Oscar is the maintainer of the rasterVis package, and in this post I'll explain why it is a must-have package for anyone working with the raster package in R. My latest project is focused on NOAA's Climate Reference Network; the details can be found here. Over a year ago I wrote a package called CRN, designed to download and organize the data from the Climate Reference Network, and I wanted to update it and put some finishing touches on that package. In the course of that I decided it would be a cool idea to build a detailed metadata package for the stations.

The CRN is rather like a gold standard of climate data collection: triple-redundant sensors and 5-minute, hourly, daily, and monthly data. The sites have all been selected using rigorous standards, and as the system has matured the data availability rates are close to 100%. Here are just a few of the projects I have in mind.

1. Doing a proper estimate of the US temperature using just the CRN stations. One person has already done this incorrectly (or sub-optimally), so I'm planning on approaching the problem with kriging. For this I won't use the Berkeley Earth code because I want to do something in R, using gstat perhaps (a rough sketch follows this list).

2. Looking at LST around the CRN sites. There is also a really cool new package in R called MODIS, and I really want to do an interesting project to test that package out. The goal here would be to fetch the correct MODIS tiles for land surface temperature and then look at the air temperatures and soil temperatures (CRN collects those).

3. Landcover study. I've been reading a bunch of papers by Marc Imhoff on SUHI and the relationship between SUHI and impervious surfaces, tree cover, etc. Marc's team has done some really cool work on the relationship between SUHI and the particular biome that a city is embedded in. In some forthcoming work they also look at the effects of city size and city shape on SUHI. At a few points in their papers they have hinted at SUHI effects for even small urban developments with areas around 1 sq km. I'd like to understand that better.
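For project 1, here is a minimal sketch of the kind of kriging I have in mind with gstat. It assumes a data.frame Inv of CRN stations with Lon, Lat, and a monthly anomaly column Anom (hypothetical names), and uses ordinary kriging just to keep the example short.

library(sp)
library(gstat)

# Hypothetical inputs: Inv has columns Lon, Lat, and Anom (a monthly anomaly)
coordinates(Inv) <- c("Lon", "Lat")
proj4string(Inv) <- CRS("+proj=longlat +datum=WGS84")

# empirical variogram and a fitted exponential model
v    <- variogram(Anom ~ 1, Inv)
vmod <- fit.variogram(v, vgm(psill = 1, model = "Exp", range = 500, nugget = 0.1))

# a crude half-degree prediction grid over CONUS, then ordinary kriging
grd <- expand.grid(Lon = seq(-125, -67, by = 0.5), Lat = seq(25, 49, by = 0.5))
coordinates(grd) <- c("Lon", "Lat")
gridded(grd)     <- TRUE
proj4string(grd) <- CRS("+proj=longlat +datum=WGS84")

krig <- krige(Anom ~ 1, Inv, newdata = grd, model = vmod)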

To pull off all these projects I will need some rather detailed metadata, more detailed than I've ever worked with before. For starters I needed a station list for the CRN and USRCRN stations. There are any number of ways of getting this data, but I dropped a note to Matt Menne of NOAA and he directed me to Michael Palecki, who was kind enough to send me a csv file of stations. The thing I was looking for was the most accurate latitude/longitude data I could find, and Michael supplied it.

Next task was collecting metadata for the stations and assembling a variety of tables. I won't document that work here, but just to give you an idea of what I'm putting together, for every station I have the following:

1. Elevation, slope, and aspect from a 30 meter DEM (a quick sketch of this follows the list).
2. Distance from the sea coast, distance from the nearest body of water, and the percentage of the "land" that is covered in water over a broad range of radii from the site.
3. Various landcover datasets.
4. Impervious area.
5. Biome.
6. Anthrome.
7. Populations from various sources and at various levels of accuracy.
8. Housing.
9. Nightlights (stable and radiance calibrated).
10. Economic activity.

And I plan to get LST and probably MODIS albedo.
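For item 1, the raster package already covers most of what's needed. A minimal sketch, assuming dem is a 30 meter elevation raster around a station and Inv is the station inventory as a SpatialPointsDataFrame in the same projection (both names are placeholders):

library(raster)

# slope and aspect from the DEM, in degrees; terrain() can also return TPI, TRI, etc.
topo <- terrain(dem, opt = c("slope", "aspect"), unit = "degrees")

# elevation, slope and aspect sampled at the station locations
stationTopo <- extract(stack(dem, topo), Inv)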

So where do Oscar and rasterVis come in? Specifically, in this case Oscar came to the rescue for a plotting problem I was having with landcover data.

Using the 2006 NLCD 30 meter land cover dataset, the first thing I do is save off "patches" of landcover roughly 1 degree around each site. The 30 meter dataset is huge, and things go much easier if I just create small datasets for each station.

In code that looks like this:

library(raster)
library(rgdal)

LcName <- "G:\\NLCD\\nlcd2006_landcover_4-20-11_se5.img"
LC     <- raster(LcName)

Inv <- read.table("Core.tab", stringsAsFactors = FALSE)

coordinates(Inv) <- c("Lon", "Lat")
proj4string(Inv) <- CRS("+proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0")
Inv <- spTransform(Inv, CRS(projection(LC)))

dir.create("NLCD2006")

for(station in 1:nrow(Inv)){
    Lon <- coordinates(Inv)[station, "Lon"]
    Lat <- coordinates(Inv)[station, "Lat"]
    if(Lon >= xmin(LC) & Lon <= xmax(LC) & Lat >= ymin(LC) & Lat <= ymax(LC)){
        e     <- extent(Lon - 110000, Lon + 110000, Lat - 110000, Lat + 110000)
        patch <- crop(LC, e)
        fname <- file.path("NLCD2006", paste(Inv$Id[station], "NLCD06.grd", sep = ""),
                           fsep = .Platform$file.sep)
        writeRaster(patch, filename = fname)
        print(station)
    } else {
        print("Alaska or HI")
    }
}

Pretty simple. For 2006 Alaska and Hawaii are not in the dataset so we just skip them. The output of this is a small grid with the station in the center and 30 meter landcover data for 110,000 meters in all directions. That’s the data I will be working with.

The next step is to calculate a variety of statistics on this "patch" and on smaller and smaller versions of it, so we get the land cover stats at various radii from the site, from 110 km down to 30 meters. Working with the 1 degree "patch" makes this work go very quickly. raster has a function called freq() which counts the frequency of values in the entire raster. [As I write this I wonder if freq() can be passed as a function to raster's extract(), using its buffer option.]
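As a sketch of the radius idea (names are assumed; this is not the code I actually ran): either crop the patch to a square of half-width r metres around the station and tabulate the classes with freq(), or let extract() pull the cells inside a circular buffer and tabulate those.

library(raster)

# patch: the 1 degree land cover grid for one station
# pt: the station's x,y in the patch's projected (metre-based) coordinate system
lcCounts <- function(patch, pt, r){
    e <- extent(pt[1] - r, pt[1] + r, pt[2] - r, pt[2] + r)
    freq(crop(patch, e), useNA = "no")   # class counts inside a 2r x 2r square
}

# or let extract() do the buffering with a circular radius in metres
cells <- extract(patch, matrix(pt, ncol = 2), buffer = 1000)   # 1 km radius
table(unlist(cells))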

In addition to creating 1 degree patches for every site I also create 1 km patches, which make a nice size for display. That's where rasterVis comes in.

Before Oscar solved my problem, here is an example of what my plot looked like. Note this is at 500 meters:

[Image: the land cover plot before the fix]

 

There are a bunch of problems here. First and foremost, I could not get a consistent coloring for these plots. The issue is that plot() in raster was considering my data to be integers rather than factors, and if land cover types were missing my color scheme changed from plot to plot.

Oscar fixed all that by suggesting that I use rasterVis and the levelplot() function. His code dropped right in, and after finding the RGB color scheme for NLCD the same plot now looks like this:

 

[Image: the same patch plotted with rasterVis levelplot() and the NLCD colors]

 

 

I've got some work to do to switch back to the official legend descriptions, but the maps now use the RGB values from the standard and every map is consistently colored. In the next installment I'll post up Oscar's code.
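Until then, here is the flavor of the fix as a minimal sketch of my own, not Oscar's actual code: declare the raster as categorical with ratify(), attach class names to the raster attribute table, and give levelplot() colors from a fixed class-to-color lookup so each class keeps the same color from plot to plot. nlcdTable is a hypothetical data.frame holding the official NLCD codes, names, and hex colors.

library(rasterVis)

# patch: a small NLCD land cover raster
# nlcdTable: hypothetical lookup with columns code, name, hex (official NLCD classes)
patch <- ratify(patch)                    # treat cell values as factors, not integers
rat   <- levels(patch)[[1]]               # raster attribute table of the codes present
rat$landcover <- nlcdTable$name[match(rat$ID, nlcdTable$code)]
levels(patch) <- rat

levelplot(patch, att = "landcover",
          col.regions = nlcdTable$hex[match(rat$ID, nlcdTable$code)])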

 

Here is what the impervious surface data looks like (it needs a grey scale of 0-100%):

 

[Image: impervious surface fraction around the site]
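That grey scale is close to a one-liner in rasterVis; something like this, with imp standing in for the impervious-surface patch:

library(rasterVis)

# 0-100% impervious cover on a white-to-black ramp
levelplot(imp, at = seq(0, 100, by = 5),
          col.regions = grey(seq(1, 0, length.out = 21)),
          margin = FALSE)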

 

 

To the naked eye it looks like this

 

[Image: the same area as seen to the naked eye]

 

 


In search of large ice floes

September 18, 2012

A gWidgets GUI for climate data

July 10, 2012

If you haven't worked with the gWidgets package, it's worth some time exploring it, which is what I've been doing for a little paleo project I've been working on. After struggling with the few demos and tutorials I could find, I went ahead and bought the book Programming Graphical User Interfaces in R. Luckily the book was a little better than the tutorials, which just scratch the surface. GUI programming, in my mind, is unlike other programming, and I had many false starts, primarily struggling with lexical scoping issues and getting widgets to update based on changes made to other widgets and the underlying data. The only reliable method I could find was <<-. Yikes. I'm sure over time I'd figure a way around making these types of "globals"; it ended up looking pretty but being ugly under the hood.

Let's start with the top window.

Pretty basic, but it took a while to get this to work. Part of the problem was mixing the window on the left (a ggraphics() widget) with the panel on the right and having it operate correctly with resizing. I tried ggroup(), gframe(), and combinations of the two, and in all cases the windows would not draw properly upon resizing; text got left all over the place. That problem was solved with a gpanedgroup(). So the left pane is a gframe() that contains a gcheckboxgroup() and then a ggraphics() widget, and the right pane is also a gframe() containing other widgets. The purpose of the GUI is pretty simple: take a huge climate data file and station inventory and "crop" it with respect to a climate proxy file, in this case pollen data. The idea is to create some custom climate files without requiring programming. The process starts by loading a climate file (from the Berkeley Earth datasets), which uses my BerkeleyEarth package. Then you load a pollen database and plot the two.
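As a skeleton of that layout (not my actual code, which as I say below is too ugly to post, and with hypothetical widget names), the arrangement that finally worked looks roughly like this:

library(gWidgets)
options(guiToolkit = "RGtk2")

win   <- gwindow("Climate cropper", visible = FALSE)
paned <- gpanedgroup(container = win)

# left pane: a frame holding the dataset checkboxes and the map device
leftFrame <- gframe("Map", container = paned)
leftGroup <- ggroup(horizontal = FALSE, container = leftFrame)
sources   <- gcheckboxgroup(c("Climate", "Pollen"), container = leftGroup)
mapDevice <- ggraphics(container = leftGroup)

# right pane: a frame holding the station count label and the update button
rightFrame  <- gframe("Selection", container = paned)
rightGroup  <- ggroup(horizontal = FALSE, container = rightFrame)
stationInfo <- glabel("Stations: 0", container = rightGroup)
updateBtn   <- gbutton("update plot", container = rightGroup,
                       handler = function(h, ...){
                           # make the ggraphics device current and redraw;
                           # the real handler crops the data first
                           visible(mapDevice) <- TRUE
                           plot(1:10)   # placeholder plot
                       })

visible(win) <- TRUE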

The glabel() widgets on the right show you how many stations are loaded and the min and max time. They also show the lat/lon, which defaults to the whole world. Next we select our region of interest by using the graphics crosshairs. Selecting that "extent" will automatically crop the climate data to the extent selected:

 

We see the station count and the lat/lon adjust, and the times at which we have temperatures are displayed. Then hit update plot:

 

And the plot updates. Here I've selected an even smaller region. Unfortunately, if you make a region too small, the ONLY way to undo that mistake is to "reset data", which brings you back to square one with all the data present. The last step is to adjust the time window: here I want to build a dataset from 1961 to 1990, so I move the gslider() widgets and hit update plot.

 

The last step I have to write is the "save file" function. I also had plans to add more plots, like stations versus time, but at one point I hit a nasty gWidgets error about hitting the limits on the stack. Probably my programming. For now, this will do. For future projects I think I'm going downstream to do toolkit programming. The layout restrictions in gWidgets did cramp my style, and I really haven't mastered the whole business of signaling handlers and updating widgets. Perhaps I should use an observer model. More reading.
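On the observer idea, here is a minimal sketch in plain R of the pattern I have in mind: the model lives in its own environment, widgets register callbacks, and handlers update shared state without <<-. Nothing here is taken from the GUI above; the names are made up.

# a tiny observer pattern: environments have reference semantics, so the
# closures below can modify the model without <<-
makeModel <- function(){
    model <- new.env()
    model$data      <- NULL
    model$observers <- list()

    model$register <- function(fun){
        model$observers[[length(model$observers) + 1]] <- fun
    }
    model$set <- function(value){
        model$data <- value
        for(fun in model$observers) fun(model$data)   # notify every observer
    }
    model
}

m <- makeModel()
m$register(function(d) cat("stations loaded:", nrow(d), "\n"))
m$set(data.frame(Id = 1:5))   # prints "stations loaded: 5"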

I usually post code. In this case it's so ugly that it would probably teach you bad things. When I get better at this I'll do some tutorials.

 


Updates to Old code

I notice from time to time that people download code from the drop box.

The drop box lags CRAN. I've uploaded the newest versions to the drop box.

In the future I will post new releases to the drop box before I submit to CRAN.


Nick Stokes Distance code, now with Big Memory

April 12, 2012

In my last post I was struggling with getting a big memory version of the distance matrix to work fast. Nick and other readers had some suggestions, and after puttering around with Nick's code I've adapted it to big memory without impacting the run time very much. For comparison, writing a 10K by 10K matrix was taking minutes, and I projected about 24 hours for the Berkeley data. Nick came to the rescue with two really cool ideas. The first is a different method for calculating distance on the sphere. Here he expanded on my idea of pre-computing sin() and cos() for latitudes and longitudes, and he developed an approach that allows us to compute these values once and then solve a distance equation using super fast matrix maths. On top of that he came up with a "block" approach. At first this looks like it will be slower because we calculate the entire matrix as opposed to just the upper half. In his approach Nick wrote out files for every block. In my approach I write blocks to a filebacked big matrix. That's a bit slower, but I get one file that I can then access randomly. Here's the code:

createDistanceMatrix <- function(Inventory, blockSize = 20, Directory = getwd(),
                                 filename = "distance.bin"){
    # based on code by Nick Stokes
    require(bigmemory)
    options(bigmemory.allow.dimnames = TRUE)
    if(!grepl("\\.bin", filename)) stop("please use a bin extension")
    columnsPresent <- intersect(colnames(Inventory), c("Id", "Lon", "Lat"))
    if(length(columnsPresent) != 3) stop("missing the correct column names")
    descName <- sub("bin", "desc", filename)
    if(class(Inventory) == "matrix") Inventory <- as.data.frame(Inventory)

    L <- cbind(Inventory$Lat, Inventory$Lon) * (pi/180)
    s <- cbind(cos(L), sin(L))
    # z is the array of 3D coords of stations. All unit vectors
    z  <- cbind(s[,1]*s[,2], s[,1]*s[,4], s[,3])
    z2 <- z/sqrt(2)   # better to multiply now than after the outer product
    D  <- 6371*2      # diameter of the Earth (km)

    n <- nrow(L)
    if(n <= blockSize) stop("blockSize must be less than rows in Inventory")
    blocks <- n %/% blockSize
    if((n %% blockSize) > 0) blocks <- blocks + 1
    dex <- 1:blockSize

    BM <- filebacked.big.matrix(nrow = n, ncol = n,
                                dimnames = list(as.list(Inventory$Id), NULL),
                                init = NA,
                                backingpath = Directory,
                                backingfile = filename,
                                descriptorfile = descName,
                                type = "double")
    startTime <- Sys.time()

    for(i in 1:blocks){
        p <- dex + (i-1)*blockSize
        p <- p[p <= n]
        for(j in 1:blocks){
            q <- dex + (j-1)*blockSize
            q <- q[q <= n]
            x <- 0.5000000001 - z2[p,] %*% t(z2[q,])
            x <- D * asin(sqrt(x))
            if(identical(p, q)) diag(x) <- 0
            BM[p,q] <- x
        }
    }

    print(Sys.time() - startTime)
    return(BM)
}

That takes 46 seconds. Orders of magnitude improvement.
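For anyone who wants to try it, a call looks something like this; the inventory here is synthetic and the sizes are just for illustration.

# synthetic 10,000-station inventory
Inventory <- data.frame(Id  = seq_len(10000),
                        Lat = runif(10000, -90, 90),
                        Lon = runif(10000, -180, 180))

BM <- createDistanceMatrix(Inventory, blockSize = 100)

# later sessions can re-attach the file-backed matrix without recomputing
D   <- attach.big.matrix("distance.desc")
dex <- which(rownames(D) == "333")
D[dex, 1:5]   # distances (km) from station 333 to the first five stations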


Using bigmemory for a distance matrix

April 8, 2012

UPDATE: code is in the drop box for anybody who wants to play around. Change the extension from txt to R.

The process of working on metadata and temperature series gives rise to several situations where I need to calculate the distance from every station to every other station. With a small number of stations this can be done easily on the fly with the result stored in a matrix. The matrix has rows and columns equal to the number of stations and it is filled by looping through the various station combinations. Of course one need only calculate the distance for every pair once. When the number of stations increases to 10s of thousands, such as with the Berkeley Earth project, there are two challenges. First is the time that the process takes and second is the memory it consumes.  Handling the space problem is relatively straightforward and I decided to use bigmemory to store the data. The speed problem is an entirely different matter, but I’m making a little progress. In addition to storing the data I also want a way to quickly retrieve the neighbors of a given station and I want my solution to be relatively bullet proof and easy to understand.

To solve the problem I define two functions: createDistanceMatrix() and getNeighbors(). The code for each follows.

Let's start with createDistanceMatrix():

createDistanceMatrix <- function(Inventory, Directory = getwd(), filename = "distance.bin"){

    require(bigmemory)

    deg2rad <- function(deg) return(deg*pi/180)

    if(!grepl("\\.bin", filename)) stop("please use a bin extension")
    columnsPresent <- intersect(colnames(Inventory), c("Id", "Lon", "Lat"))
    if(length(columnsPresent) != 3) stop("missing the correct column names")
    descName <- sub("bin", "desc", filename)
    if(class(Inventory) == "matrix") Inventory <- as.data.frame(Inventory)

The function is declared with a variable for the Inventory of stations, a Directory where the file will be written, and a default filename. Next we indicate that the bigmemory library is required, define a local function deg2rad(), and do some simple input checking. I enforce the use of a specific extension (".bin"), which I use for all my filebacked big matrices. Then we check the inventory to make sure that it has the required columns: columns named "Id", "Lon" and "Lat". In my coding, inventories are usually data.frames, but I allow for using a matrix; if you use a matrix I coerce it locally to a data.frame. Finally, I create a new file name that is critical for future "reads" of the data. In bigmemory, when you create a filebacked matrix a "descriptor file" is also written to document the structure of the file. We will use this to attach the file in the future. Proceeding:

Inventory[, "sinLat"] <- sin(deg2rad(Inventory$Lat))
Inventory[, "cosLat"] <- cos(deg2rad(Inventory$Lat))
Inventory$Lon <- deg2rad(Inventory$Lon)
ER <- 6371

These lines require a bit of explanation. To calculate the distance between stations I am going to use a great circle calculation. That calculation repeatedly takes the sin() and cos() of the latitude, so for speed I do that once for every station in the inventory. Also, R's trig functions take radians, so I do that transformation once. ER is the earth's radius in kilometers. Next we define some parameters for the file we want to create:

nr <- nrow(Inventory)
nc <- nrow(Inventory)
options(bigmemory.allow.dimnames=TRUE)

The matrix we create and store as a file will be square. In truth I could make the matrix one row or one column smaller, but I’ve left it like this for now. Note as well that I decide to set the options for using rownames and column names. This will allow people to use character Ids for their station Id in an inventory. That will be clearer as we progress. With those preliminaries we are ready to initialize the filebacked big matrix:

D <- filebacked.big.matrix(nrow = nr, ncol = nc,
                           dimnames = list(as.list(Inventory$Id), NULL),
                           init = NA,
                           backingpath = Directory,
                           backingfile = filename,
                           descriptorfile = descName,
                           type = "double")

Some things to note: I set the rows and columns to be equal. I initialize the data with NA; we only write into the upper triangle of the matrix, and I want the other values to be NA. I set the dimnames so that the rownames are equal to the station Ids. They can be numbers or characters. When you seek on the file you'll use the station Id, so you could use characters ("USC26541328") for station Ids and the functions will still work. I also set a directory where the file is written, the file name, and the descriptor name. Note that the descriptor name is derived from the filename by simply changing the extension: "distance.bin" has a file descriptor of "distance.desc". Type is set as "double". Next I do a couple of things that might look confusing. Sometimes a station has no latitude or longitude. I handle that here by allowing those stations to be included in the Inventory, but detecting the problem and writing a negative distance into the appropriate rows and columns. Our matrix will have three kinds of values: distances, a negative filler, and NA for elements of the matrix that don't have to be calculated (the lower triangle). I also determine which rows have a valid Lat and Lon; that will be used as a set for loop control. Below we set the rows and columns of stations that have missing Lat/Lon to -9999.

missingLonLat <- which(is.na(Inventory$Lat) | is.na(Inventory$Lon))
validLonLat <- which(!is.na(Inventory$Lat) & !is.na(Inventory$Lon))
D[missingLonLat,] <- (-9999)
D[,missingLonLat] <- (-9999)

Time to loop over the data. The approach is simple. Imagine we had a 4x4 matrix: we want to generate the pairs 1:2, 1:3, 1:4, 2:3, 2:4, 3:4. Our first loop control goes over 1-3 and the second goes over 2-4. Because some stations have missing Lat/Lon we will skip over them; that is the purpose of validLonLat, which contains the rows that are valid.

for(i in validLonLat[-length(validLonLat)]){
    for(j in validLonLat[validLonLat > i]){

        D[i,j] <- acos(Inventory$sinLat[i] * Inventory$sinLat[j]
                       + Inventory$cosLat[i] * Inventory$cosLat[j]
                       * cos(Inventory$Lon[j] - Inventory$Lon[i])) * ER
        if(is.nan(D[i,j])) D[i,j] <- 0
    }
}

So, we start looping over the first element in validLonLat and we loop to N-1. In the second loop, we skip lower row numbers. The distance is calculated using the precomputed values for sinLat and cosLat, so that I am basically reducing the number of trig functions that have to be called by orders of magnitude. Finally, there is a nasty little warning we have to care about. Sometimes when two stations have the same location acos() will blow up. This happens when acos() is fed a number slightly greater than 1 or less than -1. It happens rarely. The result is a NaN instead of the desired result of zero. Rather than prevent the warning, I decided to allow the calculation and catch and fix the error. I suppose I could trim sinLat or cosLat and prevent it. In any case, when this finishes we have created a big.matrix on file. Let's test it:

Inventory<-data.frame(Id=seq(1,1000),Lat=runif(1000,-90,90),Lon= runif(1000,-180,180))

Inventory$Lat[unique(sample.int(1000,20))]<-NA
Inventory$Lon[unique(sample.int(1000,20))]<-NA
startTime <- Sys.time()
BD <- createDistanceMatrix(Inventory)
print( Sys.time()-startTime)

I create a random inventory of 1000 stations. I introduce some random NAs and time the function:

Time difference of 6.371714 mins

Creating a matrix of 1 million elements takes a bit over 6 minutes. For this we are calculating roughly 500K distances. I'm sure part of the slowness of this approach is the repeated writing I am doing to the file, so I'll have to investigate ways to buffer this, but for now it works. I could also see ways of using apply(). One obvious buffering direction is sketched next; after that, let's look at how we get data from this file.
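As a rough sketch (untested, and clamping acos()'s argument into [-1, 1] so the NaN warning discussed above never appears): compute a whole row of distances in one vectorized call and write it with a single assignment, rather than cell by cell.

# sketch only: one vectorized distance row per station, one file write per row
for(i in validLonLat[-length(validLonLat)]){
    j <- validLonLat[validLonLat > i]
    cosd <- Inventory$sinLat[i] * Inventory$sinLat[j] +
            Inventory$cosLat[i] * Inventory$cosLat[j] *
            cos(Inventory$Lon[j] - Inventory$Lon[i])
    D[i, j] <- acos(pmin(pmax(cosd, -1), 1)) * ER   # clamp into [-1, 1]
}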

test <- getNeighbors("distance.desc", Id = 333)

Warning message:
In getNeighbors("distance.desc", Id = 333) : coercing Id to character

To grab a station's neighbors I call getNeighbors() and pass the filename of the descriptor file. I may change this, but you'll get the idea. I also pass the Id. Id is supposed to be passed as a character; if you screw up and pass an integer, I coerce it under the hood and proceed. The code for the function looks like this:

getNeighbors <- function(descriptFile, Id){
    require(bigmemory)

    D <- attach.big.matrix(dget(descriptFile))
    if(class(Id) != "character"){
        warning("coercing Id to character")
        Id <- as.character(Id)
    }
    dex <- which(rownames(D) == Id)
    if(length(dex) != 1){
        if(length(dex) == 0) stop("index not found")
        if(length(dex) > 1) stop("index found multiple times")
    }
    neighbors <- c(D[1:dex, dex], D[dex, (dex+1):ncol(D)])
    neighbors[is.na(neighbors)] <- 0
    neighbors[neighbors < 0] <- NA
    names(neighbors) <- rownames(D)
    return(neighbors)
}

Pretty straightforward. First we "attach" the file to read it, using the descriptor file. If Id is not a character I coerce it. Then I do some checking to ensure that the Id can be found in the rownames of the file and to make sure it is only found once. I will probably add a check to the creating function to make sure that Ids are unique; for now I check it here, but that will change. Then I pull the neighbors for the Id. The neighbors of any given station consist of the cells in a column and the cells in a row. The best way to see this is to draw a diagram using a 4x4 matrix as an example. Presume you want station 3: to get those values you take rows 1-3 of the 3rd column. Cell (3,3) will be NA because we do not calculate the distance from a site to itself. The balance of the neighbors will be in row 3, from column 4 onward. Hence: neighbors <- c(D[1:dex,dex], D[dex,(dex+1):ncol(D)])

That line collects all the neighbors for an Id, including the Id itself. Next, we take the cell that is NA, D[dex,dex], and set it to 0 in neighbors. Finally, we take the negative distances and set them to NA. I suppose I could just as easily get rid of the whole setting-things-to -9999 ploy, and probably will in a final version.

Let's look at test:

test[330:340]
      330       331       332       333       334       335       336       337       338 
 7722.563        NA  7564.581     0.000 16180.166  8846.114 15410.572  5890.057 17409.563 
      339       340 
 7262.202 10958.382

So, station 331 had a missing Lat/Lon and station 333 is zero km from itself.

Neat. OK. Taking a few minutes to add some cleanup, here are the revised versions:

createDistanceMatrix <- function(Inventory, Directory = getwd(), filename = "distance.bin"){
    require(bigmemory)

    deg2rad <- function(deg) return(deg*pi/180)

    if(!grepl("\\.bin", filename)) stop("please use a bin extension")
    columnsPresent <- intersect(colnames(Inventory), c("Id", "Lon", "Lat"))
    if(length(columnsPresent) != 3) stop("missing the correct column names")

    descName <- sub("bin", "desc", filename)
    if(class(Inventory) == "matrix") Inventory <- as.data.frame(Inventory)
    if(length(unique(Inventory$Id)) != length(Inventory$Id)) stop("Ids must be unique")

    # some temporary variables to speed up the distance calculation;
    # these are computed once and reused inside the loop
    Inventory[, "sinLat"] <- sin(deg2rad(Inventory$Lat))
    Inventory[, "cosLat"] <- cos(deg2rad(Inventory$Lat))
    Inventory$Lon <- deg2rad(Inventory$Lon)
    ER <- 6371

    nr <- nrow(Inventory)
    nc <- nrow(Inventory)

    options(bigmemory.allow.dimnames = TRUE)

    D <- filebacked.big.matrix(nrow = nr, ncol = nc,
                               dimnames = list(as.list(Inventory$Id), NULL),
                               init = NA,
                               backingpath = Directory,
                               backingfile = filename,
                               descriptorfile = descName,
                               type = "double")

    validLonLat <- which(!is.na(Inventory$Lat) & !is.na(Inventory$Lon))

    for(i in validLonLat[-length(validLonLat)]){
        for(j in validLonLat[validLonLat > i]){

            D[i,j] <- acos(Inventory$sinLat[i] * Inventory$sinLat[j]
                           + Inventory$cosLat[i] * Inventory$cosLat[j]
                           * cos(Inventory$Lon[j] - Inventory$Lon[i])) * ER
            if(is.nan(D[i,j])) D[i,j] <- 0
        }
    }

    return(D)
}

And

getNeighbors <- function(descriptFile = "distance.desc", Id){
    require(bigmemory)

    DATA <- attach.big.matrix(dget(descriptFile))

    if(class(Id) != "character"){
        warning("coercing Id to character")
        Id <- as.character(Id)
    }
    dex <- which(rownames(DATA) == Id)

    if(length(dex) == 0) stop("index not found")
    if(length(dex) > 1) stop("index found multiple times")

    neighbors <- c(DATA[1:dex, dex], DATA[dex, (dex+1):ncol(DATA)])
    neighbors[dex] <- 0
    names(neighbors) <- rownames(DATA)
    return(neighbors)
}


Metadata Dubuque and UHI

March 22, 2012

I'm in the process of remaking all the metadata from scratch and looking once again at the question of UHI. There are not any global conclusions we can draw from the data yet; I'm just in the process of checking out everything that is available that could be used to illuminate the problem. The problem, as I see it, is as follows. We know from numerous studies that UHI is a real phenomenon. But it's far from simple. First let's get down to causes. The causes of UHI according to Oke are as follows; note that 20+ years of research has not expanded this list in any significant way.

In the context of global temperature we are concerned with canopy-layer UHI, that is, UHI below the tops of buildings, where surface station thermometers are. Let's go through the causes one by one.

1. Increased absorption of shortwave radiation. As Oke notes, the construction of buildings adds surface area to the urban landscape. Building walls absorb SW radiation, and the arrangement of buildings can lead to multiple reflections.

2. Increased LW radiation, primarily from air pollution over the city.

3. Decreased LW radiation loss from the surface. Sky view factor is critical here, and sky view is directly related to the geometry of buildings. In a flat open plain free of buildings, the surface has a clear view of the sky and can radiate accordingly.

4. Anthropogenic heat: this is excess heat from buildings and traffic. It should be noted that this changes dramatically with the type of city and with latitude; the range of excess watts is rather large between city types.

5. Decreased evapotranspiration: this results from changing the surface of the earth.

6. Decreased turbulent heat transport. Again, building geometry plays an important role here, as does the local wind.

In addition to these variables, which one can control when developing an urban area, there are several that cannot be controlled: the wind, clouds, and nearby water.

With that in mind, let's start to look at some metadata for Dubuque and its nearby airport. First, a Google Maps view:

In the following charts the city pin will be marked with a blue cross and the airport with a red cross. In terms of incoming solar we should be aware that changes to evapotranspiration can cause changes in cloud cover. Looking at MODIS cloud cover, we see the count of days with cloud cover. Fewer clouds means more sun.

Next we want to look at transformations to the surface. In the map that follows we have marked the urban built areas as green bits:

Next we look at population. There are a few good sources here: first a 5-minute-resolution source for 2005, and after that a 2.5-minute source for the year 2000:

 

 

Next, we will look at daytime LST, or land surface temperature.

And next we look at nighttime LST, when UHI is supposed to be the highest:

In case you are wondering what that long green patch of warmth is, it’s water.

 

A couple of things are clear. A reader named sunshine suggested that perhaps cars from the city infected the airport with UHI. Looking at the data, the differences in temperature would seem more likely to derive from fewer clouds over the city, more built surfaces, and the higher population that created those surfaces. The surfaces hold heat, and they appear to retard the formation of clouds, leading to more sunshine in the city than at the airport. And it seems the nearby water is more likely to modulate the temperature at the airport than anything else.

Funny. First "sunshine" thought it was "depopulation". Then he surmised it was cars. In all of this there was no attempt to verify a thesis, just random thoughts. Sunshine appears to have his head where there is none. In any case, trying to find a cause for cooling trends or a cause for warming trends, trying to find a UHI signal, isn't a simple task of throwing up random thoughts. We know the causes. Twenty years after Oke's essay, the causes remain the same. The weight of the causes is critical. More on that later.

 
