Just before the end of December 2012, the Division of Environmental Biology sent an email message to everyone listed as a PI or Co-PI on DEB proposals since the start of fiscal year 2008. (Aside: if you did not get the email and think you should have, make sure your FastLane profile information is up to date.) This message included a notice of our plans to start blogging, among other efforts to enhance interactions between DEB and the research communities. About 1/3 of the message consisted of several snippets of Division-wide data from the two-stage proposal process, with specific tables focused on two groups: Early Career Investigators and Primarily Undergraduate Institutions (PUIs). We received helpful feedback from several readers of the original email pointing out ways in which the presentation could have been clearer. We thank you for that. It is already helping to make this blog better.
Since many out there may not have seen the original message, and others may have been intrigued to learn more, this post re-visits and expands on those numbers.
The data snippets in the email were meant to begin a discussion; they were not intended to be comprehensive or the final word. There are other ways to look at the numbers, and significant context and nuance simply could not be crammed into a reasonable email. Actual performance numbers from the two-stage review process are just starting to come in, and even those will change somewhat as Program Officers pursue every opportunity to secure funding through the fiscal year's end.
In the spirit of the prior Numbers post, before we come back to the data we require a detour to explain some terminology (in bold):
Our numbers are reported by fiscal year (which runs Oct. 1 to Sept. 30). The first round of full proposals under the new solicitation, those reviewed in October of calendar year 2012, are FY2013 proposals according to the fiscal calendar.
The term proposal can be used in several ways, and it is not always clear what is meant when it is used as a stand-alone data label (e.g., “the program reviewed 1200 proposals”). This perennial confusion stems from a technical distinction – the individual document packages we review and process are properly called “proposal jackets”. A single submitting organization prepares and submits an individual proposal jacket. While many proposal jackets involve only a single organization, sub-awards enable two or more organizations to be included in a single proposal jacket. Alternatively, two or more organizations can submit separate proposal jackets which are bundled together as a single package through the review process, with one organization identified as the lead organization. The NSF PAPPG defines both single proposal jackets with sub-awards and bundled proposal jackets as “collaborative proposals”. Thus, when numbers are presented for “proposals”, it is not clear whether bundled proposal jackets are being counted once per package or each proposal jacket is being counted separately.
Our convention in DEB is to avoid using “proposal” alone as a formal data label. We use projects to refer to whole packages that go through review (whether single institution or collaborative bundle) and count each bundle of proposal jackets once. In cases where numbers are being reported by individual proposal jacket, we will identify the data as based on proposal jackets.
Finally, so that we can look at the numbers from reasonably common ground, we need to address what is meant by the percentage calculations variously called success (invite or funding) rates in DEB.
For a group of projects under review, we can categorize them into one or more submission sub-groups. Each project is ultimately assigned to an outcome of acceptance (invited, or awarded) or rejection (not invited, declined). So, we can take a look at the review outcomes like this:
| | # Accepted | # Rejected | Total Reviewed |
| --- | --- | --- | --- |
| # Projects Sub-Group A | Xa | Ya | (Xa + Ya) |
| # Projects Sub-Group B | Xb | Yb | (Xb + Yb) |
| # Projects Sub-Group C | Xc | Yc | (Xc + Yc) |
| # Projects Total | X | Y | (X + Y) |
Note: Sub-Groups may not be mutually exclusive, so (Xa + Xb + Xc) can be > X.
Proportion of Sub-Group A that was accepted = Xa/(Xa+Ya)*100% (e.g., 17% of PUI full proposal projects were funded, 32% of all preliminary proposals were invited [both fictional examples])
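That row-wise calculation can be sketched in a few lines of Python. The counts below are fictional, chosen only to illustrate the formula:

```python
# Success rate for a sub-group: accepted / (accepted + rejected), as a percentage.
# All counts here are fictional, for illustration only.
review_counts = {
    "Sub-Group A": {"accepted": 17, "rejected": 83},
    "Sub-Group B": {"accepted": 32, "rejected": 68},
}

def success_rate(accepted, rejected):
    """Percentage of reviewed projects in a group that were accepted."""
    return accepted / (accepted + rejected) * 100

for group, n in review_counts.items():
    # Sub-Group A: 17%, Sub-Group B: 32%
    print(f"{group}: {success_rate(n['accepted'], n['rejected']):.0f}%")
```

The key point is that the denominator is the row total for that sub-group alone, not the total across all reviewed projects.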
Success rates formulated this way have historically been presented as a primary, if not the sole, metric of NSF program health. But this number says more about resource needs and program management on the NSF side than it does about the performance of submitters. Tracking success rates over time makes it easy to see that demand for funding is growing and outstripping budget growth widely across NSF programs and categories of submitters. However, success rate is a blunt and superficial measure when it comes to comparing performance between sub-groups.
If you want to compare performance of sub-groups over time, between programs, etc. it is much more revealing to look at the contribution of those sub-groups to the portfolio of projects for a given review stage or outcome. In the above table, we can calculate Portfolio Representation using the numbers down the columns, such as:
Proportion of Total Reviewed projects from Sub-Group C = (Xc+Yc)/(X+Y)*100% (e.g., 15% of funded full proposal projects were from PUIs, 21% of all reviewed preliminary proposals were from early career investigators [both fictional examples])
With Portfolio Representation numbers, one can quickly and easily see the extent to which the awards coming out of a program reflect the diversity of submissions coming in and how that has changed over time without the numbers being drowned out by overall growth in demand.
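The column-wise calculation can be sketched the same way. The counts below are fictional (chosen to echo the fictional 15% and 21% examples above):

```python
# Portfolio representation: a sub-group's share of a column total,
# computed down the columns rather than across the rows.
# All counts here are fictional, for illustration only.
accepted = {"PUI": 15, "non-PUI": 85}    # the X column
rejected = {"PUI": 90, "non-PUI": 310}   # the Y column

def portfolio_share(group, column):
    """Percentage of a column (e.g. all accepted projects) coming from one group."""
    return column[group] / sum(column.values()) * 100

# Share of all awards going to PUI-led projects: 15/100 -> 15%
print(f"PUI share of awards: {portfolio_share('PUI', accepted):.0f}%")

# Share of everything reviewed: (15+90)/(100+400) -> 21%
total_reviewed = {g: accepted[g] + rejected[g] for g in accepted}
print(f"PUI share of all reviewed: {portfolio_share('PUI', total_reviewed):.0f}%")
```

Comparing the two printed shares (15% of awards vs. 21% of submissions, in this fictional example) is exactly the kind of portfolio comparison that raw success rates obscure.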
On to the data…
These three tables display the Success Rates for projects in DEB's core program panels since FY2007 and include tentative data for the FY2013 full proposal stage: the October 2012 review panels and subsequent award making. They present the same information included in the original email message; the formatting has been updated for clarity based on feedback, and reported values have been revised to ensure consistent back-casting of historical numbers and to reflect more recent tentative numbers and estimates.
Project Submissions and Success Rate for DEB Core Program Panels FY2007-FY2013
Project Submissions and Success Rate of Projects Identifying the Lead PI as a Beginning Investigator^ for DEB Core Program Panels FY2007-FY2013
Project Submissions and Success Rate of Projects Identifying the Lead Institution as a Primarily Undergraduate Institution for DEB Core Funding Programs FY2007-FY2013
*Note: Tentative numbers for the current fiscal year under the Continuing Resolution budget plan: 80% of the FY2012 budget. The FY2013 Full Proposal counts include full proposal projects received as CAREER, OPUS, RCN and co-reviews.
^Note: As indicated on the BIO-specific Proposal Classification Form completed in FastLane and associated with the lead proposal jacket of each project.
What these numbers show is pretty much the same point we illustrated in our August 2011 webinar introducing the two-stage proposal review mechanism and in subsequent talks at professional meetings.
Success Rates of Projects for DEB Core Programs as shown at ESA 2012.
Success Rates have been declining overall, and the trend extends back to at least the early 2000s, when success rates were above 20%. Decreasing success rates are due largely to increases in submissions. The dynamics behind increasing submissions are complex and reach beyond the scope of this particular post. However, their end product was a pattern of unsustainable growth: ever more submissions driving a falling success rate.
The collective effort required to create, submit, and review projects gradually grew ever more out of whack with the payback seen in support for research and feedback to PIs. We heard this from the community over many years in panels, at professional meetings, and in phone and email exchanges with PIs, the vast majority of whom were declined funding. This state of affairs was not unique to DEB or even BIO; in 2007 NSF published the results of a survey of PIs and reviewers that reported this same basic message across the entire agency. As in the graphic above, the trends continued to worsen to the present.
The decision to develop and launch a two-stage review cycle was made in this context, following models employed elsewhere in the agency.
In implementing a preliminary proposal stage, DEB recognized that success rates over the entire process would likely not improve. Especially in the first year, we expected a sizable increase in submissions, because news of a change would inspire new or previously dissuaded PIs to enter the process. Additionally, we expected preliminary proposals, perceived as a less burdensome entry to the system, to incentivize participants to submit even more projects. In practice, we did see a significant increase in projects submitted:
1626 preliminary proposal projects
+ 163 full proposal projects via CAREER, OPUS, RCN, and co-review
= 1,789 unique projects reviewed in the core programs during the first full review cycle.
This total project load is roughly 35% larger than during the last full fiscal year of the old system: 1329 projects in FY2011 (and 25% larger than the previous high-water mark in 2010). Given flat program budgets, and assuming the same average award size, we would therefore expect the overall funding rate to be between 8 and 9% over the entire two-stage cycle. However, we are currently under a continuing resolution with a budget of 80% of what we had in FY2012 to both start new awards and cover continuing grant increments on awards made in prior years. The tentative numbers at this point show a success rate of about 6% over the two stages.
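The arithmetic behind those estimates is simple scaling: a flat budget and constant average award size imply a roughly fixed number of new awards spread over more submissions. A minimal sketch, where the award count of 150 is a hypothetical figure assumed for illustration (not an actual DEB number):

```python
# Back-of-envelope check of the funding-rate estimates above.
projects_fy2011 = 1329
projects_new_cycle = 1626 + 163   # preliminary proposals + CAREER/OPUS/RCN/co-review
assumed_awards = 150              # hypothetical annual award count under a flat budget

flat_budget_rate = assumed_awards / projects_new_cycle * 100
cr_rate = 0.8 * assumed_awards / projects_new_cycle * 100  # CR: 80% of the FY2012 budget

print(f"Flat budget: ~{flat_budget_rate:.1f}%")  # lands in the 8-9% range
print(f"Under the CR: ~{cr_rate:.1f}%")          # near the ~6% tentative rate
```

Any assumed award count in that neighborhood produces the same qualitative picture: more submissions and a reduced budget together push the overall rate well below historical levels.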
The major difference in the two-stage review system is that PIs and reviewers are not required to do all of the work up-front for this huge number of projects just to see roughly 10% of them funded. Instead, during the preliminary proposal stage, the success rate can be higher than under the old system, while the costs of producing a preliminary proposal (especially its administrative portions) and of reviewing it are reduced relative to a full proposal. Then, in the full proposal stage, a significantly reduced number of projects are submitted, allowing the work of preparing and reviewing the full-length submissions to be balanced against a better success rate.
The tables above bear that out, within the limitations of only a single full cycle of the new system. Even when looking at Beginning Investigators or PUIs, the success rates at either stage of the review process were generally higher than those seen in prior years. Additionally, the differences in success rates between these groups and all projects do not appear to have been exacerbated under the new system. We can see the relative performance of these groups better by looking at the contributions of Beginning Investigators and PUIs to the portfolios of submissions, invitations, and awards, which we explore in Part 2.
Further exploration of “collaborative proposals”, differences between the PAPPG definition and colloquial use, and how we are counting them will be the subject of a future post.
Another potential post to look forward to: how differences in counting choices provide different pictures of DEB growth.
Note: This refers to effort per proposal. Individual panelists likely experience little change in total workload, since in most cases they will see a greater number of shorter proposals. While this is not a quantitative improvement for preliminary proposal panelists, we received many comments indicating that the quality of their experience improved.