Dear all:

A few years ago, I invited readers to participate in a web-based survey of birdsong identification abilities that I was conducting with Marty Leonard and Andy Horn. I received a great response and lots of support, and the results of that research have since been published in the journal The Auk (University of California Press).

Although we didn't make the project goals explicit ahead of time (this was to keep from biasing the results), we wanted to see what tends to happen when birdwatchers listen to birds that sound quite similar, for instance the Rose-breasted Grosbeak and American Robin, but where one of the birds might be rarer (more "sexy") than the other. We were curious how an outside pressure to hear rare species might bias these kinds of detection "decisions". We also wanted to see how reliable observers are when they say they're "confident" they got everything right.

What we found was that as self-reported skill levels increased, missed detections and false-positive detections became less frequent (no surprises there). What was interesting, however, was that when expert birdwatchers made false-positive errors, they tended to report a rare species. Less-skilled birdwatchers ("moderates"), on the other hand, tended to report more common species when they made such mistakes. A mild incentive to detect rare species (a higher "score" in this case) didn't seem to affect these patterns.

We also found that when the bar for confidence is set very high (as in, "did you get everything right? yes or no?" for a very difficult survey scenario), observers of all skill levels tended to be poor judges of their own performance (in other words, the amount of overconfidence is fairly consistent).

Since some amount of detection error is basically unavoidable in bird call surveys, these results should make survey designers think more carefully about the skill levels of their observers, because the nature of these errors can vary with the observer's skill (e.g. more false positives of rare or common species), and it isn't good enough to trust observers to self-rate their imagined accuracy. On the other hand, these early results don't indicate that some healthy competition to find the rare birds is going to bias things.

Thank you very much for your support, and I hope your group finds these results interesting.

Sincerely,

--Bob Farmer
PhD candidate, Dalhousie University (Halifax, NS, Canada)
http://leonardlab.biology.dal.ca/Bob.html

ps: The article can be accessed at http://dx.doi.org/10.1525/auk.2012.11129
If you're unable to access the full text but would like to see it, please write me and I can provide a copy! Its formal citation is:

Farmer, R. G.; Leonard, M. L. & Horn, A. G. (2012). Observer effects and avian call count survey quality: rare-species biases and overconfidence. Auk. doi:10.1525/auk.2012.11129