This installment of DEB Numbers looks at the DEB Core Programs’ regular research project portfolio through the lens of award size and duration.
This post was inspired by some of the reaction we heard to our earlier DEB Numbers posts on collaboration. (We'll also take a serendipitous bounce off these recent findings about award size from north of the border and the subsequent discussion here.)
Basically, there are some persistent myths out there, usually along the lines of, “NSF prefers/wants/requires/only considers proposals of a certain size/duration/scope, and anything not conforming to that criterion is doomed to failure.” Such statements should set off any number of critical alarms: the absolutist language, the singular and impersonal characterization of a broad and diverse agency, and the lack of any verifiable statistics or citation of a source. These myths are particularly hard to combat because each contains a kernel of truth: there are programs with specific targets for size, duration, and scope, but they are certainly not all or even most programs (especially when it comes to general core program areas); there are legal expiration dates for spending federal funds and hence limits on the ultimate length of awards; and panel recommendations create and are influenced by perceived norms, and thus mold what you and your colleagues hear about review outcomes, but panels don’t make funding decisions.
With that in mind, the rest of this post presents data from our core programs which, we hope, will give you the power to bust these myths the next time you hear someone clinging to them.
Myth #1: Project budgets must be big to be competitive.
Possible Origins: Part hearsay from panelists who remember the biggest, most expensive projects as the most exciting. Part conflation of core programs with special programs targeted at large projects which draw on a separate source of funds.
Reality: About 2/3 of the projects awarded via the DEB core programs are between $250K and $750K total, and that is inclusive of all collaborators’ budgets, participant costs, indirect costs, and any supplements made after the initial award date. The full distribution of DEB core program award sizes from 2009-2012 is shown below in $50K bins. We covered the differences between core programs, special programs, and the other funding opportunities in DEB in detail in a previous post, but just to recap: the core programs are the general calls for research project proposals subject to external merit review and organized in DEB around four sub-disciplinary units called clusters. This does not include Dissertation Improvement Grants, funding requests not subject to external merit review, or special programs managed as separate opportunities.
Other than being just a really pretty distribution, there are a couple of notable features in the above figure. First, there were a total of only 4 multi-million-dollar (≥$2M) core program awards in DEB over that 4-year time span. Second, there’s that secondary peak between $100K and $200K; make a mental note, we’ll come back around to it in a moment.
Note: All four ≥$2M awards were programmatic outliers; for instance, one was a large project inherited by the Evolutionary Processes core during the shut-down of a special program, and the others were similarly unusual. These four outliers are excluded from the rest of the analysis.
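For readers who want to replicate this sort of binning with their own award data, the $50K bucketing used for the figure can be sketched in a few lines. The dollar amounts below are made-up placeholders, not DEB data:

```python
from collections import Counter

# Hypothetical total award sizes in dollars (illustrative only, not DEB data)
awards = [310_000, 480_000, 120_000, 730_000, 150_000, 260_000, 540_000]

BIN = 50_000  # $50K bins, as in the figure

# Map each award to the lower edge of its $50K bin and count per bin
counts = Counter((a // BIN) * BIN for a in awards)

for lower in sorted(counts):
    print(f"${lower // 1000}K-${(lower + BIN) // 1000}K: {counts[lower]}")
```

The same counts, divided by the total number of awards (or proposals), give the percentage view used in the next figure.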
This next figure takes the same award size data (outliers removed) but displays each size bin as the percent of the total number of awards instead of a raw count. We changed the display a bit so we could include an overlay of the same calculation using the entire population of DEB core program project proposals from those years for comparison.
Close inspection reveals that awards above $650K (30% of the award portfolio) are generally slightly underrepresented compared to the proposals (36% of submitted proposals), and the majority of that difference is accounted for in the $100K-$150K bin.
The $100K-$150K bin shows a place where we put in targeted effort to maximize the reach of our limited funds. These are proof-of-concept awards that are not simple budget reductions but significant departures from the original proposal scope. This is a program-created category of award in which the program officers are doing what they can to give an otherwise strong line of enquiry the opportunity to surmount an immediate funding-dependent weakness.
Since increasing the frequency with which we make these small proof-of-concept awards, we also began to hear that some PIs would actually like to submit smaller proposals if given an explicit call but otherwise couldn’t risk unilateral disengagement from what felt like a budgetary arms race; hence, the Small Grants option added to the core programs solicitation for FY2013.
However, simple budget totals aren’t necessarily a great metric for understanding and comparing project size. If larger award sizes are supporting larger research teams or extending work over longer periods, then the award size may not be so great relative to the effort supported. So, we can look at award size relative to the amount of person-effort being supported or requested. Again, the following figure takes into account proposed and awarded DEB Core Program projects from 2009-2012.
In the above figure, the dashed lines show the profiles of the proposals and awards in DEB Core Programs from 2009-2012 based on the annualized project budgets ($/Year). The solid lines represent the same budget data scaled to PI effort (Persons x Years).
Both show great conformity of the award portfolio to the initial requests, as we would expect from the award-size data above. However, the degree to which the range along the horizontal axis has been compressed is quite dramatic. Keep in mind that this set of awards covers everything from small single-investigator grants to large 5-year collaboratives and ranges from ~$30K to ~$1.5M in total budget. Once the number of people (and that’s just names on the cover pages, not students or techs or anyone else) and the duration of the award are accounted for, award size doesn’t vary much at all. In all, 21% of awards run on no more than $50K/person-year, which accumulates to 64% by $100K, 84% by $150K, and 97% by $200K; again, those figures include overhead.
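The person-year scaling described above is simple arithmetic: total budget divided by (number of PIs/CoPIs on the cover pages times award duration). A minimal sketch, using illustrative placeholder projects rather than DEB data:

```python
# Sketch of the $/person-year scaling described above.
# Budgets, team sizes, and durations are illustrative placeholders, not DEB data.
projects = [
    # (total budget $, PIs+CoPIs on cover pages, duration in years)
    (450_000, 2, 3),
    (150_000, 1, 2),
    (1_200_000, 4, 5),
]

for budget, people, years in projects:
    per_person_year = budget / (people * years)
    print(f"${budget:,} over {people} x {years} person-years "
          f"-> ${per_person_year:,.0f}/person-year")
```

Note how the hypothetical $1.2M collaborative lands at $60K/person-year, in the same neighborhood as the much smaller single-investigator grant: exactly the compression the figure shows.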
The profile of DEB Core Program award sizes hews very closely to the profile of requests coming in the door and exhibits no particular preference for big budgets.
Myth #2: No matter what I ask for my budget will be cut by 10% or more.
Possible Origins: Many project budgets are revised prior to award-making. Toss in awareness of the overall funding climate, and rumblings from colleagues about NIH cutting grant budgets after the award, and the climate of uncertainty only grows.
Reality: Before we get to the data, a relevant aside:
Even with the sequester DEB has not applied any sort of retroactive cuts to awards and did not reduce the remaining budget years for awards made prior to this year. It was a specific agency priority that we not go back on promises we previously made to PIs.
Now, on to the data. This next figure plots the awarded project funds against the original requested project funds for DEB Core Programs from 2009-2012.
There are several interesting take-aways buried in these data. 1) The median award size (as a percentage of the initial request) is 100% of the request; 57.0% of all awards wind up at least as big as the initial request. 2) A sizeable number of awards (~25%) ultimately receive more funding than the initial request; a few of these are due to increases (such as for a student or postdoc) made at the time of the award and justified by panel input, and the rest are the result of later supplements. 3) Relatively few awards exhibit major cuts (only 30% of awards are cut by more than 10%). The cluster of radically smaller proof-of-concept awards is clearly visible, but even so, during this period the average ultimately awarded project budget is over 90% of the initial request (92.2%).
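The summary statistics above (median award as a percentage of request, share of awards at least as big as the request, mean percentage) can be computed directly from (requested, awarded) pairs. A sketch with illustrative placeholder pairs, not DEB data; the last entry mimics a proof-of-concept award cut far below the original request:

```python
import statistics

# Illustrative (requested, awarded) totals in dollars; not DEB data.
pairs = [
    (500_000, 500_000),
    (400_000, 430_000),   # supplemented above the request
    (600_000, 520_000),
    (350_000, 350_000),
    (800_000, 150_000),   # proof-of-concept style reduction
]

ratios = [awarded / requested for requested, awarded in pairs]

median_pct = statistics.median(ratios) * 100
share_at_least_full = sum(r >= 1 for r in ratios) / len(ratios) * 100
mean_pct = statistics.mean(ratios) * 100

print(f"median award = {median_pct:.0f}% of request")
print(f"{share_at_least_full:.0f}% of awards at least as big as the request")
print(f"mean award = {mean_pct:.1f}% of request")
```

Even in this toy example, one deep proof-of-concept cut drags the mean well below the median, which is why the post reports both.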
No blanket cut is applied to awards by any cluster. If a budget is cut, it is cut based on something specific to that award. A well-justified project budget that requests what is needed for the project is likely to get what is requested, but requests beyond what is documented and justified will be cut.
Myth #3: DEB funds only 3-year awards.
Possible Origins: This is in part due to the difficulty of effectively planning, and making the case for, a solid project out past 3 years, as the direction of enquiry becomes less certain. It is probably reinforced by self-fulfilling expectations through proposer-reviewer dynamics. And specific guidance in some funding opportunities outside the DEB Core Programs does limit award length to 3 years.
Reality: There is a maximum duration for a new award: 5 years. That’s why the LTREB program, which requests ten-year research ideas, requires a renewal proposal half-way through. The simple reason this limit exists is that laws and rules governing the accounting of federal funds cause the funds to expire 7 years after the initial appropriation. We take up to a year (if we have a budget on time) to get the awards to the PIs, who then have up to 5 years to spend it on the work, leaving at least 1 year for a no-cost extension to ensure any remaining funds are spent. Anything left over essentially evaporates from the books and makes a great case, come future budget discussions, that we have money to waste that could be better spent elsewhere.
Based on what we saw in Myth 1, we could expect award duration to reflect the demand coming in the door. As evidenced in the figure below, that appears to be the case. This figure displays the duration requested on the proposal cover sheet for all regular research projects (proposals and awards) in our core programs from 2009-2012.
The awards lean slightly more to the longer side than the proposals. Fully a third of DEB’s Core Program awards are longer than 3 years. PIs can go beyond the original cover sheet duration with extensions.
As another example of where this myth falls flat, note our definition of the “Small Grants” option: “The Division welcomes proposals for Small Grants to the core programs via this solicitation. Projects intending total budgets of $150,000 or less should be identified as such with the designation “SG:” as a prefix to the project title. These awards are intended to support full-fledged research projects that simply require smaller budgets. Small Grant projects will be assessed based on the same merit review criteria as all other proposals.” This description defines “small” based on size alone and places no specific requirement on duration. We considered such a clause, “for up to X years,” when writing the definition but decided there was no reason to restrict how that amount was spread over time. A $150K total could just as easily cover 1, 2, 3, or 5 years; many may jump to “$50K/year averaged over 3 years,” but other interpretations are equally valid.
We also need to consider a counter-point: shouldn’t longer duration awards be more costly and thus prevent making smaller shorter awards and contribute to a lower funding rate? Well, we can examine that too! Below is a plot of the total award sizes by requested duration for DEB Core Programs 2009-2012. The red trace runs through the means for each whole-year duration.
“Three-plus”-year awards have an apparently wider range of award sizes than shorter grants, but there’s no difference in ranges between 3-, 4-, and 5-year awards; there is a pretty distinct plateau. Now, before anyone goes away claiming we’re saying that we expect 4- and 5-year grants because they magically don’t add to the cost, let us be clear: that is definitely not the case. We consider this a result of the kernel of truth behind this myth. It is very difficult to effectively chart out a 5-year plan for a high-production, gee-whiz project; if your first three years are potentially transformative, cutting-edge, and ground-breaking on such a short time-frame, then current review dynamics make it difficult to convince reviewers of a feasible plan for years 4 and 5. For many of those longer awards, it’s not that years 4 and 5 don’t cost anything, but that years 1, 2, and 3 cost less than they do for 3-year grants. Part of this is the result of the funding opportunities: CAREERs, LTREBs, and RCNs are all 5-year awards, and each has unique features that play into award size. But roughly half of all DEB Core Program awards longer than 3 years were not responding to particular guidelines and are the result of PIs electing, and making a strong case for, a longer-duration strategy.
Again, this isn’t saying that either longer or shorter grants are better. It’s simply that they represent distinct facets of a diverse project portfolio.
Myth #4: DEB wants more proposals from large interdisciplinary teams.
Possible Origins: There have been several high-profile launches of multi-directorate, interdisciplinary special programs which heighten awareness of the concept. Overall understanding of the variety of programs at NSF is often limited leading to conflation of several distinct funding opportunities with core programs. NSF efforts to not bias proposal review against such approaches to science may be misread as declarations of the approach as an absolute preference or end-goal.
Reality: We regularly hear from people in the community related to this myth, some strongly calling for a larger share for big collaborative teams and others vehemently opposing such an idea. And, as with the other myths we’ve discussed in this post so far, our awards appear to reflect the actual requests we receive as proposals.
The above figure displays the percentage of the project portfolio by the number of PIs and CoPIs listed on the project (this includes all PIs and CoPIs whether on a single jacket or multi-institutional collaborative submission). It shows a near-perfect correlation between the award and proposal portfolio profiles when considering the number of PIs and CoPIs collaborating on a project. Since “big collaborative team” is a subjective concept and could reasonably vary by discipline, we’re not going to draw any arbitrary lines to say X% of them are “big teams”. But, we will note that the largest chunk of both project proposals and awards is still single-investigator projects.
We also need to consider a corollary to the primary myth: that larger collaborations consume relatively more financial resources than supporting the same amount of effort by researchers via smaller grants. (The expectation is that costs related to project coordination make collaborative projects more expensive per person than single awards.) We look at that next.
Above, the plot displays the range of cost per person-year for the teams of 1 to 8 PIs/CoPIs supported by DEB Core Program awards from 2009-2012; the red trace runs through the mean cost ($/Person-Year) for projects of each team size. What we see here is mainly a contraction of the range of costs per person-year away from the high end toward a mean around $50K/Person-Year for teams larger than 4 PIs/CoPIs while the low end is stationary. But, like with the award size versus duration plot in Myth #3, the superficial message of the image is not the whole story.
These data points obscure differences of within-project distributions of funds. While it is possible that a few PIs/CoPIs working together may be budgetary coequals, it is unlikely that as the number of collaborators grows each PI/CoPI has control over an ever-smaller equal share of the project budget. It is much more reasonable to expect that a primary or lead PI has at her/his disposal a plurality of the funds and the other PIs have significantly smaller shares appropriate to the scope of their actual roles in carrying out the research. Unfortunately, backing that up with further analysis is a bit beyond the granularity of the available data.
There is a lot of hearsay, myth, and misinformation floating around out there about what we, DEB, want from you, the researchers. In the end, that type of thinking is, to use a complete anachronism, putting the cart before the horse. The core of our mission is “to promote the progress of science,” and to a large extent we do that by asking the community, through proposals and peer review, to tell us what will do that. The awards we make in the core programs really do reflect the proposals we receive from the community, and (at least in the ways discussed here) neither the review nor our decision-making imparts any particular bias with respect to award size, duration, or number of PIs. We hope this post provides some sense of that, and we encourage you to question such claims and let us know about other source-less NSF myths and truisms you’ve heard.
Note: Within-project distributions could be examined for multi-institutional collaboratives with separate budgets, but that would not account for differences between collaborators at the same institution.