Fewer frogs? Amphibian census flawed by tin-eared researchers
Wrong ribbit? The North American Amphibian Monitoring Program sends volunteers to listen for and count frogs and toads. But a new study shows that even expert observers (or listeners) make errors that may have skewed frog population assessments.
To count animals that make noise, scientists often listen for their calls. But even expert ears make mistakes when trying to identify frogs.
Confusing two species, or hearing a frog that isn't there, may be only an occasional error, but it can seriously distort scientists' understanding of a population, say researchers with the U.S. Geological Survey.
For about 10 years, the USGS's North American Amphibian Monitoring Program has sent out volunteers to listen for vocal amphibians – frogs and toads – and, based on an interpretation of the calls, record information about the amphibians' abundance and diversity. However, a new study shows that even expert observers make errors that may have skewed population assessments.
Ted Simons, a wildlife biologist with the USGS Cooperative Research Unit at North Carolina State University, said the observers' own vivid memories of hearing a particular creature at a particular time can be a liability.
"It somehow burns right into your psyche. You can go back sometimes years later and those memories just jump right back. ... Those memories can also be a source of bias when you are trying to get an accurate count," said Simons, who has done similar work analyzing the potential for misidentifications with the North American Breeding Bird Survey.
Two types of errors can skew a frog-call survey. False negatives – mistakenly concluding no animals are present because none are heard – have received more attention, he said. However, false positives – misidentifying an animal or hearing something that isn't there – can also bias results.
Replicating an approach used to assess observers' accuracy when listening for birds, Simons and others played recordings of five frog species – some solo calls, some overlapping, from a variety of distances – for five expert observers. The observers, all of them biologists, were given a list of 11 species from which to choose their identification.
Although six of the 11 species on the list were never played, observers claimed to hear two of them. For the five species that were played, participants misidentified calls at rates of 1 percent to 11 percent.
In all but one case, the errors skewed assessments of the populations.
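The bias the researchers describe can be illustrated with a toy simulation (the occupancy and error rates below are hypothetical, not from the study): when a survey corrects for false negatives but not false positives, even a small misidentification rate inflates the estimated number of occupied sites.

```python
import random

random.seed(1)

TRUE_OCCUPANCY = 0.30   # hypothetical fraction of sites where the frog is present
P_HEAR = 0.80           # chance an observer detects a frog that is really calling
P_FALSE_POS = 0.05      # chance of "hearing" the frog at an empty site

n_sites = 10_000
occupied = [random.random() < TRUE_OCCUPANCY for _ in range(n_sites)]

reported = []
for present in occupied:
    if present:
        reported.append(random.random() < P_HEAR)       # false negatives possible
    else:
        reported.append(random.random() < P_FALSE_POS)  # false positives possible

# Raw fraction of sites where the frog was reported
naive_estimate = sum(reported) / n_sites

# Correcting for false negatives only, as the surveys described here did,
# pushes the estimate above the true occupancy when false positives exist
fn_corrected = naive_estimate / P_HEAR

print(f"true occupancy:              {TRUE_OCCUPANCY:.3f}")
print(f"raw reported fraction:       {naive_estimate:.3f}")
print(f"false-negative-only correction: {fn_corrected:.3f}")
```

With these made-up numbers, the false-negative-only correction lands above the true 30 percent occupancy, because the 5 percent of phantom detections at empty sites are never subtracted out.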
Last year, about 500 observers in more than 20 states collected data using these call estimates for the North American Amphibian Monitoring Program, according to Linda Weir, the wildlife biologist who coordinates the project for the USGS and who worked with Simons' team on this recent study.
The observers did not literally count the amphibians they heard; instead, they ranked the calling on a scale from one, for a few individuals, to three, for a full chorus of constant, overlapping calls.
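That ranking can be sketched as a small lookup. Note that the article describes only the endpoints of the scale; the wording for the middle category here is a hypothetical fill-in, not a quotation of the program's protocol.

```python
# NAAMP-style calling index, as described above.
# Level 2's wording is an assumption; the article defines only levels 1 and 3.
CALL_INDEX = {
    1: "a few individuals, calls not overlapping",
    2: "overlapping calls, individuals still distinguishable",  # assumed
    3: "full chorus of constant, overlapping calls",
}

def describe(index: int) -> str:
    """Return the qualitative description for a 1-3 calling-index value."""
    return CALL_INDEX[index]
```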
A paper published by NAAMP in December of 2009 reported changes in the number of Northeastern sites occupied by 16 species. It took into account the possibility that observers recorded no-shows for species that were actually present but silent; however, Weir said, it did not account for observers misidentifying calls.
There are two ways of dealing with observer error, she said: Scientists can better train observers and also take human error into account when analyzing the data. In 2006, NAAMP began requiring its observers to take an online frog quiz to show they could identify the creatures' calls. This quiz, along with results from field tests like Simons', could help NAAMP figure out how to account for misidentification errors, according to Weir.
Ultimately, the goal is to help scientists identify potential problems – such as background noise – so they can account for them, according to Simons.
"The work is really just aimed at making those improvements," he said.