DEB Numbers: Where do the various official funding rate numbers come from and why are they different from what your PO tells you?

We have recently received several questions by email from PIs wondering how the funding rates shown in official reports can be so different from what they experience in DEB. We thought that it would be good to post a response here on the DEBrief.

What are the official NSF funding rates and where does one find them?

Official funding rates published by NSF vary depending on the program(s), division(s), or directorate(s) being examined. These numbers are calculated in the same way across all of NSF.

You can find official funding rate data here: http://dellweb.bfa.nsf.gov/awdfr3/default.asp and can use the tool to drill down and look at different divisions and programs. These numbers should match the ones you see publicized in official statements and on the nsf.gov site.

As you may notice, the posted funding rates are several times higher than what you hear in discussion with your PO, during panels, or in our context statement.

For instance, according to the official numbers, DEB in 2013 had these stats:

[Table: official DEB FY2013 statistics, including proposal and award counts, funding rate, mean dwell time, and average award duration and size]

That’s 1,751 proposals and 409 awards, for a 23% funding rate. The mean time to a decision was 4.65 months. An “average award” in 2013 had a duration of just under three years and was funded at ~$80K per year.
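To make the arithmetic explicit, here is a minimal sketch of how that official rate falls out of the published counts, using only the figures quoted above:

```python
# Official funding rate: awards divided by counted proposal actions.
# Counts are the FY2013 DEB figures quoted above.
proposals = 1751
awards = 409

funding_rate = awards / proposals
print(f"Official DEB 2013 funding rate: {funding_rate:.1%}")  # -> 23.4%
```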

Why do these numbers differ from the ~3-7% funding rates you’ve heard from various personal sources?

Missing from the Denominator:

Well, for starters, these numbers don’t count preliminary proposals. The official numbers are based only on the proposals that lead directly to funding decisions. You can see that if you look back a couple of years in DEB:

[Table: official DEB proposal and award counts by year, 2010–2013]

According to these numbers, we had ~1,000 fewer proposals in 2013 than in 2010. That’s not an error: we did have fewer full proposals. But the comparison is misleading because the source of the count isn’t detailed here. In DEB, when we report funding rates to you, we report the preproposal invitation rate and the full proposal funding rate separately, or we report a single funding rate for the whole process[i].
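To see why excluding preproposals matters so much, here is an illustrative two-stage calculation. All of the counts below are hypothetical, chosen only to show the shape of the effect; they are not DEB’s actual figures:

```python
# Hypothetical two-stage pipeline: preproposals -> invited full proposals -> awards.
# None of these counts are DEB's published figures; they are for illustration only.
preproposals = 2000   # hypothetical January preproposal submissions
invited = 500         # hypothetical invitations to submit a full proposal
awards = 120          # hypothetical awards among invited full proposals

invite_rate = invited / preproposals        # stage 1
full_rate = awards / invited                # stage 2 (the "official"-style rate)
whole_process_rate = awards / preproposals  # both stages combined

print(f"Invitation rate:    {invite_rate:.0%}")         # 25%
print(f"Full proposal rate: {full_rate:.0%}")           # 24%
print(f"Whole-process rate: {whole_process_rate:.0%}")  # 6%
```

The second-stage rate on its own looks several times higher than the whole-process rate a PI actually experiences.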

Notably, this artificial dip in submissions (and spike in funding rate) can actually be seen in the top-line NSF numbers:

[Chart: NSF-wide proposal counts, award counts, and funding rates by year]

The drop in submissions from 2011 to 2012 is largely because DEB and IOS preproposals aren’t counted there, resulting in an apparent increase in the NSF-wide funding rate while the number of awards didn’t change much. However, the effect of sequestration (on top of losses from run-of-the-mill inflation and rising costs of research) can be seen from 2012 to 2013 in the drop in awards.

Proposal duplication:

The official numbers count each proposal (“award jacket”) separately. These counts do not combine multi-institutional collaborative proposals into a single unit. This has a sizeable effect on the numbers for DEB, especially within the core programs. Many of our awards involve two or more collaborative proposals reviewed as a single unit; we expect most of you would consider that such a unit should be counted once, rather than once per component, and that’s how we count on this blog. In the official numbers, however, a three-partner collaboration counts as three separate proposals and three separate awards or declines[ii]. Generally, counting collaborations in the official manner inflates the funding rate by a few points compared to the individual programmatic reality.
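As a quick sketch of that counting difference, using the illustrative panel numbers from footnote [ii] below (not real DEB totals):

```python
# Project-level vs jacket-level counting, using the illustrative
# panel numbers from footnote [ii] (not real DEB totals).
projects_funded, projects_total = 4, 20   # collaborations counted once each
jackets_funded, jackets_total = 7, 30     # each partner institution counted separately

print(f"Project-level rate: {projects_funded / projects_total:.0%}")  # 20%
print(f"Jacket-level rate:  {jackets_funded / jackets_total:.0%}")    # 23%
```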

Lumping proposal categories:

We discussed the organization of core and special programs in DEB in a previous post. These are all lumped together in the official DEB funding rate calculation. For the most part, items like RAPIDs, EAGERs, and conference support are a relatively small piece of the total and account for only about one percentage point of the discrepancy between the official and realistic funding rates. You can drill down in the official numbers to get a better look at program-level outcomes. As you can see, there’s much variation here[iii]:

[Table: DEB program-level proposal counts, award counts, funding rates, and median annual award sizes]

The two most important messages are a bit buried amid all those lines.

First, the “special programs” are not generally inflators of the funding rate. Dimensions of Biodiversity was at 14%, CNH 6%, and EEID 4%: cumulatively they represent about 20% of the official proposal count and only 10% of the official award count. These programs do not provide an advantage to submitters[iv]. They may be desirable to you for other reasons – interdisciplinary content and potentially larger awards than a disciplinary core project – but they are not an easier route to funding.

Second, the biggest inflator of the funding rate is harder to see, but a careful reader might have already picked up on it. Take a close look at the median annual award size column: does anything stand out? Do the median award sizes for Ecosystem Studies, Evolutionary Ecology, Evolutionary Genetics, Phylogenetic Systematics, and Population and Community Ecology seem a bit small?

Part of this reflects what we already discussed above: each proposal is counted separately, so a big $650K project may comprise three jackets, each receiving $72K per year for three years, which shifts the median size downward in the official count. But that doesn’t bring the median down to the $12K or $15K seen in some of those programs. The main reason the official rates appear so much higher than reality is that they lump Doctoral Dissertation Improvement Grants (DDIGs) into the same count as regular research proposals.
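As a toy illustration of both effects on the median, here is a sketch with entirely hypothetical annual award sizes:

```python
import statistics

# Hypothetical annual award sizes, in $K per year; none are real DEB awards.
standalone_awards = [80, 95, 110]   # hypothetical single-institution awards
collab_jackets = [72, 72, 72]       # one ~$650K project split into three jackets
ddigs = [15] * 8                    # hypothetical DDIGs at ~$15K each

research_only = standalone_awards + collab_jackets
with_ddigs = research_only + ddigs

print(statistics.median(research_only))  # 76.0 -- already nudged down by jacket-splitting
print(statistics.median(with_ddigs))     # 15.0 -- DDIGs dominate once they exceed half the jackets
```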

Doctoral Dissertation Improvement Grants, at ~$15K each, cost just a fraction of a full project: it would take 20, 30, or more DDIGs to account for the cost of a single regular research project. Thus, a little bit of money goes a long way in funding DDIG proposals[v]. The result is that DDIGs can be relatively numerous (50%+ of award jackets in some programs) and enjoy a relatively high funding rate that, when lumped into the official count, results in misleadingly high success rates for DEB as a whole.
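And here is the corresponding effect on the success rate, again with hypothetical counts rather than DEB’s actual figures:

```python
# Hypothetical pooling of two proposal types with very different rates;
# these counts are for illustration only, not DEB's actual figures.
regular_proposals, regular_awards = 400, 32   # ~8% rate for regular research
ddig_proposals, ddig_awards = 120, 60         # ~50% rate for small, cheap DDIGs

pooled_rate = (regular_awards + ddig_awards) / (regular_proposals + ddig_proposals)
print(f"Regular research rate: {regular_awards / regular_proposals:.0%}")  # 8%
print(f"DDIG rate:             {ddig_awards / ddig_proposals:.0%}")        # 50%
print(f"Pooled rate:           {pooled_rate:.0%}")                         # 18%
```

The pooled number is what shows up in an official table, even though no one submitting a regular research proposal experiences anything like it.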

 

While the topic of funding rates may be a bit confusing, we hope that this post has shed some light on why you may be seeing different funding rate numbers across NSF and DEB. As you all know, with summary statistics the numbers all depend on how the data are being analyzed.

 

[i] These don’t combine exactly because there are proposals like CAREERs and co-reviews that are part of the funding milieu but skip out on the preproposal stage; we covered that previously, here.

[ii] If in our full proposal panels we report funding 4 of 20 projects when counting collaborations only once each for a 20% success rate, the official count would reflect something like 7 of 30 jackets funded for a 23% rate.

[iii] Please note, these program bins don’t actually identify pots of money; they are organizational labels. The few with only a handful of proposals and high success rates are mostly flukes of labeling. Some, like AToL, are old codes that are being retired; others, like LTREB and LTER, are special programs that weren’t entertaining entirely new submissions but were handling other sorts of actions, like renewals and workshops.

[iv] Dimensions might look relatively good, but you have to correct for collaborative projects in the official count here too. The rate as far as whole projects are concerned is ~10%.

[v] In 2013, DEB made 134 DDIG awards. If we never had this opportunity (and ignoring any resulting increases in student support requests), we could have funded approximately 4 additional regular grants, 1 per cluster, bringing the 2013 core award count from 121 to 125 and increasing the funding rate by less than 1%.
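As a back-of-envelope check on that tradeoff, here is a minimal sketch; the ~$500K total cost per regular project is our assumption for illustration, not an official figure:

```python
# Back-of-envelope version of footnote [v]'s tradeoff.
ddig_awards = 134
ddig_cost = 15_000              # ~$15K per DDIG
regular_project_cost = 500_000  # assumed total cost per regular project (not an official figure)

freed = ddig_awards * ddig_cost             # ~$2.0M
print(round(freed / regular_project_cost))  # -> 4 additional regular awards
```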
