
Wetbulb Temperature

This Google map shows just one of 230 GHCN stations that are located in the water. After finding instances of this phenomenon over and over, it seemed an easy thing to find and analyze all such cases in GHCN. The issue matters for two reasons:

  1. In my temperature analysis program I use a land/water mask to isolate land temperatures from sea temperatures and to weight the temperatures by land area, which is, of course, zero for a station sitting in the ocean.
  2. Hansen2010 uses nightlights based on station location, and in most cases the lights at a coastal location are brighter than those offshore, although I have seen “blooming” even in radiance-calibrated lights, such that “water pixels” do on occasion have lights on them.

The process of finding “wet stations” is trivial with the “raster” package in R. All that is needed is a high-resolution land/sea mask. In my previous work I used a ¼ degree base map; ¼ degree is roughly 25km at the equator. For this work I was able to find a 1km land mask used by satellites. That data is read in one line of code, and then it is a simple matter to determine which stations are “wet”. Since NCDC is updating the GHCN V3 inventory, I have alerted them to the problem and will, of course, provide the code. I have yet to write NASA GISS; since H2010 is already in the publishing process, I’m unsure of the correct path forward.

Looking through the 230 cases is not that difficult, just time consuming. We can identify several types of cases: atolls, islands, and coastal locations. It’s also possible to put in the correct locations for some stations by referencing either WMO publications or other inventories that have better accuracy than either GHCN or GISS. We can also note that in some cases the “mislocation” may not matter to nightlights: these are cases where you see no lights whatsoever within the 1/2 degree grid that I show. In the Google maps presented below, I’ll show a sampling of all 230. The blue cross marks the GHCN station location and the contour lines show the contours of the nightlights raster. Pitch-black locations have no contour.
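
As an aside, here is a minimal sketch of that half-degree check (my own illustration, not part of the analysis code; it assumes the raster package is loaded and uses the Nlight raster and Ghcn inventory from the listing below):

isPitchBlack <- function(lon, lat, lights, halfWidth = 0.25) {
  window <- extent(lon - halfWidth, lon + halfWidth, lat - halfWidth, lat + halfWidth)
  vals <- getValues(crop(lights, window)) # every nightlights pixel in the half-degree box
  all(vals == 0, na.rm = TRUE) # TRUE when the box is completely dark
}

Ghcn$Dark <- mapply(isPitchBlack, Ghcn$Lon, Ghcn$Lat, MoreArgs = list(lights = Nlight)) # flag the pitch-black stations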

I will also update this with a newer version of Nightlights. A Google tour is available for folks who want it. The code is trivial and I can cover it if folks find it interesting; with the exception of the graphing, it is as simple as this:

Ghcn<-readV2Inv() # read in the inventory

lonLat <- data.frame(Ghcn$Lon,Ghcn$Lat) # station coordinates as lon/lat points

Nlight <- raster(hiResNightlights) # read the hi-res nightlights raster

extent(Nlight)<-c(-180,180,-90,90) # fix the metadata error in nightlights

Ghcn<-cbind(Ghcn,Lights=extract(Nlight,lonLat)) # extract the lights using "points"

distCoast <- raster(coastDistanceFile,varname="dst") # get the special land mask

Ghcn <- cbind(Ghcn,CoastDistance=extract(distCoast,lonLat))

# For this mask, water pixels are coded by their distance from land. All land pixels are 0.

# make an inventory of just those land stations that appear in the water.

wetBulb <- Ghcn[which(Ghcn$CoastDistance>0),]

writeKml(wetBulb,outfile="wetBulb",tourname="Wetstations") # write the KML tour of the wet stations
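
As a quick sanity check (my addition, not part of the original listing), you can count the wet stations and look at how far offshore they sit; the coast distances are in miles, so a conversion gives kilometres:

nrow(wetBulb) # should report the 230 wet stations
summary(wetBulb$CoastDistance * 1.609) # distance from the coast, converted from miles to km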

Some shots from the gallery. The 1km land/water mask is very accurate; you might notice one or two stations actually on land. Nightlights is less accurate, something H2010 does not recognize: its pixels can be over 1km off true position. The small sample below should show the various cases. No attempt is made to ascertain whether this causes an issue for the identification of rural/urban categories. As it stands, the inaccuracies in Nightlights and in station locations suggest more work is needed before that effort is taken up.

  1. Bob
    November 8, 2010 at 8:03 PM

    Where did you find the 1 km land mask? I am interested in using that as well (not for the same thing you are doing). Ditto for the nightlight database.

    • steven Mosher
      November 9, 2010 at 2:28 AM

      The 1km land mask is here:

      http://www.ghrsst.org/GHRSST-PP-NAVO-Land-and-sea-Mask.html

      The coverage is not global: -80 to 80 (N/S) in latitude, but the full range of longitude.

      The data is 0 over land and, over water, the distance in miles from the coast.

      The Nightlights version I use is the one used by Hansen. That *tif was taken down around the time I noted a metadata error and informed them of it (it was an easy fix; see my code). They suggested that I use F16 data instead. The old file has been reformatted. I have the URL around here somewhere.. for the F16 data, google DMSP OLS.

  2. November 8, 2010 at 9:13 PM

    Yeh, this limitation of GHCN is one of the main reasons I’m holding off working on global UHI analysis based on remote sensing classifications. Luckily the USHCN lat/lons are much more accurate (usually within 30 meters).

    Also, the folks at NCDC know that fixing GHCN metadata is a priority, so hopefully this will improve in the next few years.

    On a related note, a fun exercise is to compare GHCN and USHCN lat/lon coordinates for stations that are part of both networks 😛

    • steven Mosher
      November 9, 2010 at 2:32 AM

      Ya, USHCN is much better, but every time I try to collate GHCN against USHCN I want to gouge my eyes out. I’ve written GHCN and they are open to accepting data. I have to check against the new WMO list, which comes out today. I made some suggestions for adding source fields and quality flags to the inventory (like the temp data has). I figure there are prolly more than 10 sources of inventory data.
      ALSO, I figured out the -1 in the ISA files… water.

    • steven Mosher
      November 9, 2010 at 2:42 AM

      In a few days I may have that historical population data for you, worldwide. The resolution is NOT 1km, more like 50km (??), have to check.. May have some evapotranspiration data as well.

      Doesn’t make much sense without good location data, but I figure you’ve got the US covered, so I best work where there is work to do.

  3. November 8, 2010 at 9:20 PM

    This is somewhat speculative, but I’d suspect that the 1502 CLIMAT stations have much better metadata (especially for coordinates) than the various retrospective collections of pre-1992 temp data. If this is true, comparing the two sets could allow you to try and tease out the mean error in things like nightlight classifications.

    • steven Mosher
      November 9, 2010 at 2:36 AM

      Ya, I started down the path of looking at the various sources. The MEAN error is small, around .02 degrees, or say 2km. So I’m thinking of doing a screen at 5km: 2km for the nightlights error and 2km for mean station location error.
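
      As a rough sketch of that screen (my own illustration, not code from the thread; it assumes an inventory data frame inv carrying both the GHCN coordinates, Lon/Lat, and the matching WMO coordinates, wmoLon/wmoLat):

      library(geosphere) # for distHaversine()
      distKm <- distHaversine(cbind(inv$Lon, inv$Lat), cbind(inv$wmoLon, inv$wmoLat)) / 1000 # great-circle distance in km
      screened <- inv[distKm <= 5, ] # keep stations whose two reported locations agree to within 5km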

      EARLY results say that a small number of rural stations go urban and a small number of urban go rural, which should not impact results. But that was only with the 2800 or so stations that derive from the WMO list. If the imod is 000, then it’s a WMO site. If the imod is 00X, then the WMO number is the number of the CLOSEST WMO. Also they have some untraceable WMO numbers and some deprecated ones.

      This work drives me batty

  4. PaulR
    November 8, 2010 at 11:46 PM

    What is H2010 and why is its status relevant to reporting a problem?

    • steven Mosher
      November 9, 2010 at 2:38 AM

      H2010 is the 2010 version of Hansen’s paper. Since it is apparently in review, I’m unclear whether I should contact them directly, send the “corrections” in through friends who may have a better shot at getting results, write a comment (not looking for glory), or just work with NCDC, which is the source.

  6. J
    November 18, 2010 at 5:09 PM

    If somebody in my field presented a dataset with such errors, it would be completely refuted and thus not used anywhere. Why doesn’t this apply in climate science?

  7. cce
    November 19, 2010 at 9:58 AM

    Those errors will not substantially affect the result.

    • Steven Mosher
      November 19, 2010 at 11:06 AM

      cce:

      I try to stay away from making claims unless I can back them up. If you asked me to guess, I would guess that they will not “substantially” change the result; I would say this because I believe it is warming. I think it’s fun to find these errors and important to correct them. I think that hand-waving them off tends to upset an audience that you want to convince. There are more serious errors that I haven’t posted about; I’m trying to handle those more discreetly and see if people will make the fixes on their own. We will see. Even there, the more serious errors don’t make that much of a difference. Substantial? I dunno; it depends on the grid square you are talking about and whether a proxy series has been calibrated to a grid that has errors: errors that are balanced by errors in other places, but which may be locally important. Too early to tell on that.

  8. cce
    November 20, 2010 at 7:15 AM

    Can “skeptics” back up the claim that the data has been “completely refuted?” Do you think our friend “J” will be convinced that the data is good enough if you find that these errors only change the rate of warming by a maximum of a few percent? Of course not. But we dare not upset him.

    All of this is very interesting and certainly welcome, but UHI issues just don’t matter all that much. You can compare Hansen’s method to your 55 km (or whatever) version. Compare the overlapping gridcells and take the difference.

    Ocean and interpolation problems are where the emphasis should be.

    • Steven Mosher
      November 20, 2010 at 1:23 PM

      Can skeptics back up their claim? I don’t think so; I’ve said so many times, as far back as 2007 if you care to look. But I try not to pattern my approach to things on shoddy work or the lack of work. I feel no compulsion to defend shoddy work or to defend lazy skeptics.

      I can’t change “J”‘s mind. I don’t make that supposition a determining factor in how I choose to work. I work how I was trained to work: don’t wave your arms, nail down every error you can, do a complete job, create work that others can build upon. I’m self-directed, shrugs.

      UHI. Well, Jones put the figure between 0 and .3C and chooses .05C. I’ve seen work that puts it at .1C (using GISS code with nightlights done accurately, i.e. with corrected station locations). There is a nice problem with Nightlights in the non-developed world (see Dodd’s work). I think the thing people don’t get is why I like this problem. I like it because it doesn’t make a difference to the debate, and because I think the work done on it can be done better with information that is available. I like it because it’s not science, it’s bookkeeping.

  9. cce
    November 20, 2010 at 11:30 PM

    You don’t seem to be refuting the unsupportable claims of skeptics on your own blog.

    • Steven Mosher
      November 21, 2010 at 1:29 AM

      No cce, I take those refutations directly to the skeptics’ blog sites themselves. For example, the post I did with Zeke and Nick debunking McKitrick’s work and the SPPI report was put up at WUWT. I find it more fruitful to engage the skeptics on their own sites; they are open-minded enough to allow me to point out their errors there. My purpose here is to show R programming, not to publish finished work. When I have a completed analysis, like the one showing that the skeptics’ claim of station drops was unfounded, I take that analysis and publish it on the highest-traffic skeptical site on the internet. If you have a problem with me posting refutations of skeptic positions on the skeptic blogs, I’m sorry that upsets you.

      I do have quite a backlog of refuted positions written up or waiting to be written up. They would include

      1. the great station drop out ( done and posted at WUWT)
      2. thermometer accuracy ( Lucia’s done this, but I have R work on the same issue)
      3. sensitivity of results to number of stations (Done, but I’m waiting for an even bigger dataset to be published. The scientist in charge has promised me a look at the data. I hope you don’t mind if I wait for better data rather than rushing off something less than what I consider to be ironclad.)
      4. The UHI question. ( in progress)

      WRT #2, I don’t think redoing Lucia’s work is particularly enlightening, but if things get slow, or I can relate it to a NEW thing I learn about R, I would post it here. WRT #3, I have a lot of housekeeping and code improvement to do, but that will probably be one of the first I finish. WRT #4, I keep plugging along; I’m awaiting answers from some scientists on a variety of questions.

      I hope that answers your question.

      Now, here is one for you. If these errors were pointed out to you about your work, what would you do? Careful, it’s a trap.

  10. cce
    November 23, 2010 at 11:09 AM

    J asked a question. I answered it. Global warming is not an artifact of bad coordinates, and pointing that out is not hand waving. The “maybe it will be significant, maybe not, and let’s define significant. Come back in a year, maybe never” stuff doesn’t impress me.

    If you propose tweaks to the GISS UHI treatment (better metadata, looser radius, whatever), they might consider using them, as they have been open to improvements in the past. But you are spending a lot of time on something that will not have a lot of effect on the results when there are bigger problems. But it’s your choice, not mine.

    Keep on plugging.

    • Steven Mosher
      November 23, 2010 at 11:33 AM

      Ya, GW is not an artifact of bad coordinates. That’s a given. For defining significant I think it depends on the question you are asking. I see the record being interesting for the following questions:

      1. Proxy studies, where a change in a grid box could drive a reconstruction. I note a couple of papers that redid the local grids before calibration. Fine tuning.

      2. Testing models against observations, Santer’s work for example. Fine tuning.

      3. The global average. I’d be shocked to find anything above .1C

      Tweaks to UHI:

      1. One of the tweaks I would suggest was actually used before, precisely for the reasons I look at: Peterson 1999, a combined metric.
      2. GRUMP urban extent actually scores pretty well in a 140-city calibration test (see the MODIS work).
      3. There is an interesting dataset that I think J might like. Showed it to Spencer and it was news to him. It requires a ton of work that I’m not qualified to do. Some work has been done on it.

      I guess my main interest is showing why the effect doesn’t show up, if that makes any sense.
