DEB Numbers: FY2013 Wrap-Up

It’s been a while since we’ve posted on DEB Numbers, a category that digs deeper into DEB through analyses of the proposals we review and award. So if you’re new to this blog or don’t remember, take a look back at our earlier posts for the things we’ve talked about before.

This past spring, we posted a first look at the outcomes of the new process for several categories of proposal identified by the community as priority concerns. We noted then that final funding levels carried a high degree of uncertainty because of the then-unknown implications of the sequester and the further financial uncertainty of a partial continuing resolution. Now that fiscal year 2013 is behind us and the dust has settled, we can look back and see what the final numbers are.

Recall that when we look at the performance of various groups, we’re reporting the percentage representation of that group in the portfolio. This provides a straightforward comparison over time.
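As a minimal sketch of that metric (the counts below are made up for illustration, not actual DEB data), a group’s representation is simply its share of each portfolio stage:

```python
# Portfolio representation: a group's percentage share at each
# review stage. All counts here are hypothetical.

def representation(group_count, total_count):
    """Percentage of the portfolio belonging to a group."""
    return 100.0 * group_count / total_count

# Made-up numbers: (group submissions, total submissions) per stage.
stages = {
    "preliminary proposals": (520, 1600),
    "full proposals":        (130, 400),
    "awards":                (40, 120),
}

for stage, (group, total) in stages.items():
    print(f"{stage}: {representation(group, total):.1f}%")
```

Tracking this share over time, rather than raw counts, is what makes years with very different proposal loads comparable.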

Individual vs. Collaborative Approaches: Both Win!

Ok, that header is slightly misleading. Two of the most frequent concerns we’ve heard reflect a divergence of community opinion about “the right way” to do science – individually vs. collaboratively. (Note, the following cartoon characterizations take artistic license for dramatic effect.) On one side, people said the process would select against collaborative projects because the submission limit and page limit would make it difficult to find collaborators in the first place, and then to fit a reasonably accurate description of their project-of-epic-proportions into 4 pages. On the other side, people said the single investigator would be doomed in competition when their lonely toil was compared with flashy group projects bringing in the intellectual firepower to make big-budget moonshot claims in an idea-focused preliminary proposal.

Well, what happened? So far, nothing has changed.

Let’s take a look at the single investigator projects first.

[Figure Numbers_FY13End_1: single investigator representation at the preliminary proposal, full proposal, and award stages over time]

At first glance, the proportion of submissions coming in from single investigators at the preliminary proposal stage seems a bit off from the historically flat baseline. But the historical baseline shown isn’t directly comparable in this case. The numbers prior to the preliminary proposal system include submissions of CAREER and OPUS proposals, which are overwhelmingly single investigator. However, those are two types of proposal that DEB exempts from the preliminary proposal screen, so they only get factored in at the full proposal and award stages, where we see a slight bump up in single investigator projects, as expected from an influx of exempted projects.

[Figure Numbers_FY13End_2: collaborative project representation at the full proposal and award stages over time]

Collaboration is a bit more difficult to track[i]. At the pre-proposal stage there is no formal budget or multi-institutional sign-off, so there’s no way to count collaborative projects in the official sense. Thus we have only the numbers from the full proposal and award stages over time. Both of these show a general trend of increasing collaboration in the submission and award portfolio, and no effect of the switch to preliminary proposals for FY2013[ii].

Gender Bias

Before we get started on this one, a quick data-related sidebar: personal demographic characteristics like gender, race, or ethnicity are exclusively self-reported by you in the research community. While perusing public websites, CVs, and other documents can tell us a lot about you informally, we can’t use data from such sources in any kind of analysis; the only personal data we can use are data provided by you on designated forms, such as those in the FastLane profile. Further, logging these data in one place doesn’t populate them elsewhere in government databases, or even necessarily within an agency.

However, we do have pretty good numbers for PI gender (90+% reporting), shown below.

[Figure Numbers_FY13End_3: female PI representation at the preliminary proposal, full proposal, and award stages over time]

So what do we see? First, the greater noise in awards vs. proposals is to be expected given that two or three projects can be the difference between 25% and 27% of awards but barely shift the proposal measure. The tight agreement across all the stages in FY2013 is simply coincidental. Is this reflective of the proportion of women in the research community? Possibly, but the comparable community data aren’t the most recent. While there is an apparent slight upward slope, ascertaining real change would require a longer view. Overall, the performance of female PIs appears untethered from the core program review process in DEB.

Predominantly Undergraduate Institutions

This is a category of submitting institution defined by NSF and, like “collaborative,” the designation doesn’t always match up perfectly with the concept as understood by the research community[iii]. PUI status is determined at the institutional level; to be a PUI, an institution cannot grant more than a specific number of PhDs each year across all NSF-supported fields (see the RUI solicitation for the definition). This adds a technical twist: a biology department at a university may be entirely undergraduate, but because the university has a PhD program in, say, Chemistry or Sociology, the biology faculty aren’t considered part of the primarily undergraduate pool and don’t have access to the RUI criteria. This means that measures of the RUI program, or even the wider pool of PUIs, aren’t specific to and overlook the many nuances of institutional diversity in the BIO and DEB community. In short, we need to interpret PUI-related data with a grain of salt.

[Figure Numbers_FY13End_4: PUI representation at the submission and award stages over time]

While this graph shows a slight recent downward trend in the proportion of PUI submissions, it should be noted that the absolute number of submissions from PUIs has increased over this period, just not as quickly as submissions from non-PUI institutions. Further, the gap between the proportions coming in to and awarded via full proposal panels doesn’t appear to be changing: it’s an issue that predates and was not targeted by the preliminary proposal system. The magnitude of the drop between preliminary and full proposal representation is consistent with dilution of the full proposal pool by CAREER, OPUS, RCN, and co-review proposals with a lower rate of PUI leads. However, there does seem to be a consistent underperformance of PUIs through the review process that needs to be addressed. The question remains as to whether a solution can emerge from community activity in submission and review.

Early Career Scientists

The concern we heard from the community was mostly worded in terms of “young” or “early career” scientists[iv]. Trying to determine the fate of this group is another place where it’s up to the research community to report numbers. Since there’s no single definition, we used two different metrics to look at “early career” investigators.

The first is self-reported “beginning investigator” status, which is a statement that an individual has never been PI or Co-PI (with the exception of certain training-type grants) on a grant from any Federal agency. This should represent a well-defined group within the “young scientist” category: people who are trying to become first-time PIs. There’s a spot in your FastLane profile to check a box if this applies to you. It also appears as a check-box on the cover page of your proposal that says to only check it if you’re submitting to BIO and you are a beginning investigator[v], and submitters are asked a third time on the BIO-specific “Proposal Classification Form.” Unfortunately, there are gaps because proposals we co-review from another directorate don’t request the same data, and if you looked at these three locations on each DEB proposal, you would find many that do not match up. It appears that these boxes may occasionally go unnoticed, and other times they are checked in error. It’s up for debate which one, or which combination, should be used for monitoring. For the sake of simplicity and consistency, DEB Numbers uses the single check on the Proposal Classification Form for identifying “beginning investigators,” since that one must be explicitly answered for every proposal sent to DEB.

The second crack we can take at “young investigator” performance comes from a field buried in the PIs’ and Co-PIs’ FastLane profiles. Each profile asks for the year of your last/terminal degree. For the overwhelming majority of DEB PIs that’s a PhD (student Co-PIs on DDIGs would of course not yet have a PhD but that’s outside the core programs). This field should be much more accurate than the “beginning” investigator designations since it’s a static value that trails a PI through proposal submissions without needing updates. The main weakness is that using it to define “early career” ignores career hiatuses (e.g., to work in the private sector or start a family) and thus excludes some proportion of PIs who would still be considered early career professionally even though they’ve taken a longer time to get there than those who went straight to academia. However, we can be fairly confident that everyone placed in the “early career” grouping this way belongs there and thus we can find a good bottom line number for “early career” PIs.

And now, on to the data!

[Figure Numbers_FY13End_5: beginning investigator representation at the preliminary proposal, full proposal, and award stages over time]

So this is the portfolio representation of primary PIs in DEB core programs who are “beginning investigators” according to their BIO Proposal Classification Form. What we see is that the proportion of full proposal submissions from beginning investigators had been just below 25% but started a downward swing the year before DEB began preliminary proposals. At the same time, the gap between the representation of beginning investigators on full proposals and on awards closed, though there is no particular reason to think that is anything more than coincidence. With preliminary proposals, we saw a resurgence in the proportion of submissions from beginning investigators, but that leaves an apparent performance gap from preliminary to full proposal. What’s going on here? We actually presented these same data to you back in March, and it appears that about half of the ~3% difference is due to fewer invitations to beginning investigators and the other half is due to proposals coming in outside of the preliminary proposal process[vi]. However, beginning investigators aren’t underperforming in awards compared to full proposals, because representation at those two stages has stayed constant since 2010, before preliminary proposals were implemented. And the representation of beginning investigators in the first round of submitted preliminary proposals was consistent with submission rates from 2007–2011.

We also looked at the distribution of PhD year for all of the primary PIs on DEB Core Program awards for each year since 2007.

Across all 7 years, including the first year of the preliminary proposal system in FY2013, at least 10% of PIs on DEB Core Program awards were within 5 years of their PhDs, and at least 25% were within 8 years of their PhDs. That both percentiles have remained static across fiscal years tells us that we’re investing in recently minted PIs at a fairly steady rate over time.

PI Degree Age at Time of Award

Fiscal Year   2007   2008   2009   2010   2011   2012   2013
10th %ile        5      5      4      5      5      5      5
25th %ile        8      8      8      8      7    7.5      8
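For anyone wanting to replicate the percentiles in the table above, here’s a minimal sketch. The PhD years are invented, and the nearest-rank percentile definition is an assumption on our part, since the post doesn’t specify which definition was used:

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: the smallest value with at least p%
    of the data at or below it (one of several common definitions)."""
    ordered = sorted(values)
    k = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[k - 1]

# Degree age = award fiscal year minus terminal-degree year.
fiscal_year = 2013
phd_years = [2010, 2008, 2007, 2005, 2003, 2001, 1998, 1995, 1990, 1985]
ages = [fiscal_year - y for y in phd_years]  # hypothetical cohort

print("10th %ile:", percentile(ages, 10))  # → 3
print("25th %ile:", percentile(ages, 25))  # → 6
```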

We can look at the PhD-year data another way, too. Here’s the distribution of persons (this time covering all PIs and Co-PIs) falling into 5-year age bands at the award and full proposal stages for 2013[vii].

[Figure Numbers_FY13End_6: distribution of PIs and Co-PIs across 5-year post-PhD bands at the full proposal and award stages, FY2013]

The modal group of the award distribution turns out to be what is arguably the most important group for the early “making a career” grant: 5-10 years post PhD (i.e., 1 or 2 post-docs then into a tenure-track position). More than 40% of the awarded PIs and Co-PIs are in their first decade post-defense.

Success Rate

Finally we get to the last piece everyone seems to crave, the empty calories of program performance metrics, the overall success rate.

Success Rate

Fiscal Year            2007    2008    2009    2010    2011    2012   2012/2013   2013/2014
Preliminary Proposal*     —       —       —       —       —       —       22.0%       22.4%
Full Proposal**       17.2%   15.3%   22.1%   13.5%   11.9%   16.8%       24.1%           —
Overall***            17.2%   15.3%   22.1%   13.5%   11.9%   16.8%        7.3%           —

*= Ninvited_full / Nsubmitted_preliminary
**= Nawarded / (Ninvited_full + Ndirect_submission^)
***= Nawarded / (Nsubmitted_preliminary + Ndirect_submission^)
^Ndirect_submission includes CAREER, OPUS, RCN, co-review, and LTREB renewal proposals taken to panel.
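The three footnoted formulas translate directly into code. The counts below are hypothetical placeholders chosen to land near the FY2013 figures, not DEB’s actual totals:

```python
# The three success-rate definitions from the table footnotes.

def preliminary_rate(n_invited_full, n_submitted_preliminary):
    """*   invited full proposals / submitted preliminary proposals."""
    return n_invited_full / n_submitted_preliminary

def full_rate(n_awarded, n_invited_full, n_direct_submission):
    """**  awards / (invited full proposals + direct submissions)."""
    return n_awarded / (n_invited_full + n_direct_submission)

def overall_rate(n_awarded, n_submitted_preliminary, n_direct_submission):
    """*** awards / (submitted preliminary proposals + direct submissions)."""
    return n_awarded / (n_submitted_preliminary + n_direct_submission)

# Hypothetical counts: 1500 preliminary proposals, 330 invited to the
# full stage, 170 direct submissions (CAREER, OPUS, RCN, co-review,
# LTREB renewals), and 120 awards.
print(f"{preliminary_rate(330, 1500):.1%}")   # → 22.0%
print(f"{full_rate(120, 330, 170):.1%}")      # → 24.0%
print(f"{overall_rate(120, 1500, 170):.1%}")  # → 7.2%
```

The spread between the full-proposal rate and the overall rate comes entirely from the denominator: the overall rate counts everyone who entered at the preliminary stage, not just the invitees.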

So, let’s break this down. Remember that FY2009 was the stimulus-money year, a one-time increase and an outlier. We also need to recall that in FY2012, DEB announced the switch to preliminary proposals. Thus, the success rate for FY2012 reflects a flat number of awards over a “halved” proposal load (755 proposals) submitted that year for the fall deadline of calendar 2011, before preliminary proposals came into effect. A second deadline for review in the spring of 2012 would have roughly doubled the proposal load, give or take a hundred proposals, vying for the same number of awards. Without the artificial compression into a single cycle for FY2012, we would have expected a success rate of between 9% and 11%, in line with the observed pattern extending back even to the early 2000s.
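As a back-of-envelope check on that compression effect, using the 755-proposal load and 16.8% rate above: holding awards fixed while doubling the load simply halves the rate, to roughly 8.4%; the 9–11% expectation folds in the additional funding-style factor discussed next.

```python
# FY2012 figures from the post: 755 proposals, 16.8% success rate.
proposals_fy2012 = 755
rate_fy2012 = 0.168
awards = round(proposals_fy2012 * rate_fy2012)  # ≈ 127 awards

# A second spring deadline would have roughly doubled the load,
# competing for the same fixed number of awards.
full_year_load = 2 * proposals_fy2012            # ≈ 1510 proposals
naive_rate = awards / full_year_load

print(f"{naive_rate:.1%}")  # → 8.4%
```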

But, you may observe, neither 9% nor 11% is half of 16.8%, which is correct. There’s a second factor embedded there: the balance of “standard” and “continuing” grants. Basically, in managing the DEB programs, decisions are made as to whether a grant is fully funded up front or paid in annual installments (continuing grant increments). There are important accountability and management implications of the two funding styles, but the key thing for the moment is that installment funding creates “more new grants today” but takes a bite out of what can be funded next year. Knowing that we were only going to receive a half-load of proposals in FY2012 and would likely face a flood in FY2013, the programs both paid more of the new grants fully up front and directed other funds toward paying down future commitments from prior years. This somewhat reduced the piece of the budget tied up in existing grants before heading into the new system.

On top of the change and management actions, FY2012 was also a flat budget year: our actual budget dollars were basically the same as in FY2011, but the purchasing power of those dollars decreased somewhat. Any time we have a decrease in purchasing power, whether through rising research costs or actual budget cuts, it disproportionately impacts new awards because the cost to DEB of installment payments is fixed and the decrease is absorbed entirely by new grants.

With these considerations, the success rate we see for FY2013, while by no means a “happy” metric, starts to make more sense.

[Figure Numbers_FY13End_7: FY2013 overall Core Program success rate compared with the expectation under the old system]

For the Overall Core Program Success Rate in FY2013, the experienced rate is ~2 percentage points below where we would have expected to be under the old system at this point (assuming no growth or decline in award purchasing power). That’s not unexpected when you consider that FY2013 saw the initial response to the preliminary proposal system generate an increase of 300–400 proposals over the full-year projection for FY2012; prior annual growth averaged <100 proposals/year. On top of that, we can also add a flat budget further reduced by sequestration, and continued growth in the cost of grants, which would have happened regardless of our programmatic changes.

This all goes to show that Success Rate is a crude indicator dependent on external forcings and a poor reflection of goings-on within the merit review system. Figuring out what is happening requires much more specific and targeted analysis.


[i] As discussed before, NSF’s technical definition of a collaborative project hinges on the joint financial participation of two or more institutions, thus a great deal of collaborative behavior such as between colleagues in a single department, arrangements for access to community resources, or coordination of independently funded work doesn’t meet the criterion. The simple minimum measure of such informal collaboration is the % of projects “NOT Single Investigator,” a consistent ~60-65% of projects.

[ii] The signal is too small and unsustained for anything more than armchair speculation, but it is not inconsistent with either of two ideas: 1) awareness of the concern about collaboration led to more such proposals being invited as full proposals, or 2) with the PI limit in place, some were prompted to look further afield for collaborators, yielding more NSF-defined collaborations where there were previously intra-institutional arrangements.

[iii] Predominantly undergraduate institution (PUI) as a category of institution is not the same as “Research at Undergraduate Institutions (RUI)” which is a funding opportunity. To be eligible to submit an RUI, you have to be affiliated with a PUI, but not all submissions from PUIs are submitted as RUIs. The RUI solicitation was put in place so that submissions from PUIs could opt-in to additional review criteria that considered the generally smaller research capacity and greater focus on teaching of PUIs. RUIs aren’t an end-goal for NSF; they’re mechanisms for increasing the relative competitiveness of proposals from PUIs, which ideally would perform just as well as those from R1-type institutions.

[iv] Those are convenient terms for lumping a lot of people under a single notion, but not very precise for figuring out what is actually going on or planning a response. Some folks focused on titles or employment status and clearly included everyone between students paid off their grants and faculty undergoing tenure review in the “young scientist” category; others appeared to mean just tenure-track faculty, or perhaps post-docs and adjuncts hovering somewhere in between. Some commenters tied it to prior grant status (that critical first, second, or third grant). No one really seemed to mean “young” as a literal measure of age, but possibly as a defined period that starts ticking away once you receive a PhD or enter the academic workforce. This slippery muddle of concepts makes speculation easy and responding, or even just accurately clarifying the issues, incredibly difficult.

[v] There’s a myth out there that you the PI are conferred some sort of “bonus points” by checking the box and that you may only use this special power once. That’s doubly false. You can and should identify yourself as a beginning investigator for as long as you are one, until you get federal funding (we need accurate data!). However, given the observed high error rates for checking/ignoring the box, the program officers usually also consider your submission history, proposal text, and CV to infer your career stage for the purposes of individual decision-making.

[vi] Co-reviews are missing these data, as are OPUS, RCN, and CAREER proposals; though CAREER submitters may be early career PIs, they often don’t meet the strict “beginning investigator” definition.

[vii] Note, this doesn’t work for preliminary proposals because they don’t capture the FastLane PI ID numbers for all individuals designated PI and Co-PI.
