DEB Numbers: FY 2016 Wrap-Up

Fiscal year 2016 officially closed out on September 30. Now that we are past our panels in October and early November, we have a chance to look back and report on the DEB Core Program merit review and funding outcomes for FY 2016.

This post follows the format we’ve used in previous years. For a refresher, and lengthier discussions of the hows and whys of the metrics, you can visit the 2015, 2014, and 2013 numbers.

Read on to see how 2016 compares.

Your project titles matter, choose wisely

This post was inspired by a bit of musing as to what would happen if PIs tried to crowd-source parts of their proposals. The obvious answer, to us at least, was that we would almost certainly, and immediately, receive a proposal titled “Granty McGrantface.” We’re presuming you are familiar with the reference; but if not, see these links. While the saga of our friends at NERC turned out pretty well, it reminded us of two things: 1) asking the internet to decide for you is a risky proposition, and (the focus of this post) 2) no matter our intentions, some of the stuff[i] we do, or that stems from the funding we provide to you, will get noticed by a wide audience. Most stuff tends to go unnoticed, but from time to time something goes viral.

Therefore: What you choose to call your project matters.

Why the project title matters to NSF

The project title is the most meaningful and unique piece of your proposal that carries over to the public award description. Everything else in your proposal is distilled down to a couple of paragraphs of “public abstract” and a few dozen metadata records available via the NSF award search and research.gov[ii]. Consider, too, that the project title is a part of your proposal for which NSF takes responsibility and exercises editorial power. We can, and sometimes do, change project titles (about a quarter are changed, mostly for clarity, such as writing out abbreviations).

Why the project title matters to you

The project title and PI info are the only things most potential reviewers will ever see before deciding whether to review your proposal. The title is your first (and typically only) shot to communicate to a reviewer that your proposal is interesting and worth their time to review[iii].  And as we said above, if your proposal gets funded, the title gets posted on the NSF public awards website along with the PI name and institution.

 

You can (and should) provide effective project titles

When you receive an award, the title will be searchable by anyone and permanently associated with your name. Over the years, we’ve seen a vast array of proposal titles. We’ve also seen how they affect the audiences (reviewers, panels, and public) who read or hear them. Based on the accumulation of observations and experiences in DEB, we’ve put together these 8 tips to consider when composing your project titles.

Keep in mind: The following are not any sort of universally enforced rules or NSF policy. The proposal title is initially your responsibility, but as we said, once it comes into NSF, we can edit it as needed. Ultimately, what makes a good title is subjective and is probably not constant across disciplines or over time. These are just some broad and general tips we hope you’ll find helpful.

Tip 1: Know your broader audiences

Reviewers, including panelists, are specialists, but not necessarily from the same sub-sub-specialty as you. Public readers of award titles cover an even wider range of knowledge and expertise. These are the people who are going to read that title and make a decision whether to take action. Reviewers will, first, decide whether or not to read, and then, whether or not to support your proposal. The public will decide whether to read your award abstract, and the media will decide whether to contact you.

There are both good and bad potential outcomes of public attention. It can seem like a strong, scientifically precise, and erudite proposal title might inform and impress readers. But that misses half the point: it’s not simply about avoiding misunderstanding. Instead, a good title is a vehicle for audience engagement; it seeks to cultivate positive responses. This happens when you use straightforward, plain language, minimize jargon and tech-speak, and deliver a clear message. The rest of these tips are basically more specific examples of ways to do this.

Tip 2: Write to your (proposal’s) strengths

Most of us feel some twinge of annoyance when we see a misleading headline or publication title, e.g. “Transformative Biology Research to Cure All Diseases.” This is your chance to get it right! Don’t bury the lede. Focus your title on the core idea of the proposal. In many cases, details like the organism, the location, or the specific method are secondary[iv]; if you include them, do so carefully, in supporting roles and not swamping the central conceptual component[v]. If you wrote your title before your proposal, it’s a good idea to come back around to it before hitting submit.

Tip 3: Using Buzzwords #OnFleek

It’s a bit cliché to say this, but it bears mention: don’t tell us your project is great, demonstrate it. That is what the project description is for. We like “transformative” and “interdisciplinary” projects, but placing those words in your title doesn’t imbue your project with those qualities. Similarly, loading up on topical or methodological buzzwords (“*omics”, “CRISPR”, etc.) adds little when the major consideration is the knowledge you’re seeking to uncover, not the shiny new tool you want to wield or the loose connection to a hot topic. The space you save by dropping this extra verbiage can allow you to address other important aspects of your project.

Tip 4: Acronyms

They save space in your title. And, NSF seems to have them all over the place (it’s an ARE: Acronym Rich Environment). So, why not use them, right? Well… tread carefully.

The various title prefixes (e.g. RUI, CAREER) we ask for are used by us to 1) ensure reviewers see that special review criteria apply and 2) check that we’ve applied the right processing to your proposal. They’re often acronyms because we don’t want to waste your character count. So, we want those on your proposals[vi] but, after merit review, we may remove them before making an award. Other acronyms added by you tend to fall into two categories:

  • Compressed jargon: for example, “NGS” for Next Generation Sequencing. When you don’t have the whole proposal immediately behind it, an acronym in your title may never actually be defined in the public description, and it may imply something unintended to some in the audience.
  • Project-name shorthand: there are perhaps a handful of projects that, through longevity and productivity, have attained a degree of visibility and distinctiveness that allows them to be known by an acronym or other shorthand within the particular research community. Even if your project has achieved this distinction, remember that your audience goes beyond your community: not everyone will know of it. Further, trying to create a catchy nickname for a project (or program) usually doesn’t add anything to your proposal and can lead to some real groan-inducing stretches of language.

Tip 5: Questions to consider

How will reviewers respond to a title phrased as a question? Is the answer already an obvious yes or no? If so, why do you need the proposal and more money? Is this question even answerable with your proposed work? Is this one of the very rare projects that can be effectively encapsulated in this way?

Tip 6: Attempted humor

This can work; it may also fall flat (see the entry on “Questions” above). It can, to some audiences, make your project seem unprofessional and illegitimate. That is a sizeable risk. It used to be, and still is to some extent, a fairly common practice to have a joke or cartoon in your slide deck to “lighten the mood” and “connect with your audience”. If you’ve ever seen a poor presenter do this, you know it’s not a universally good thing. With a proposal title, the joke is always there and doesn’t get buried under the rest of the material as might happen with a slide. The alternative is to skip the joke and write something that connects to your reader through personality and creativity instead. This can be hard to do, but practice helps. For example, “I Ain’t Afraid of No Host: The Saga of a Generalist Parasite” is a title we made up that we found funny – but will everyone reading it find it funny, and does a funny title actually help the proposal? It isn’t very informative – again, tread lightly.

Tip 7: Latin vs Common terms

Per tip 2, you may not always list an organism in your project title; but when you do, make it accessible. The Latin name alone places a burden of prior knowledge or extra work on readers. It is a courtesy to public readers (not to mention your own SRO who may be filling out paperwork about your proposal and also to panelists who may be far afield from your system and unfamiliar with your organism) to add a common name label too. But, be careful. Some common names are too specific, jargon-y, or even misleading for a general audience. You don’t want, for instance, someone to see “mouse-ear cress” for Arabidopsis thaliana and think you’re working on vertebrate animal auditory systems (this has happened![vii]).

Tip 8: Thoughtful Word Choice

This tip expands the idea of confusing language, which we already pointed out regarding Latin names and acronyms, to avoiding jargon in general. Some jargon is problematic just because it is dense; as with Latin names and acronyms, this sort of jargon can be addressed by addition of or replacement with common terms. Other jargon is problematic because the audience understands it, but differently than intended. Meg Duffy over at Dynamic Ecology had a post on this some time back in the context of teaching and communication. These issues arise in proposals too. There are some very core words in our fields that don’t necessarily evoke the same meaning to a general audience or even across fields. The most straightforward example we can point to is our own name: the “E” in DEB stands for “environmental.” To a general audience environmental is more evocative of “environmentalism,” “conservation,” recycling programs, and specific policy goals than it is of any form of basic research[viii]. Addressing this sort of jargon in a proposal title is a bit harder because the word already seems common, and concise alternative phrasings are hard to come by.

For jargon, it might benefit you to try bouncing your title off of a neighbor, an undergrad outside your department, or an administrator colleague. In some cases, you might find a better, clearer approach. In others, maybe there’s not a better wording, but at least you are more aware of the potential misunderstandings.

Final Thoughts

Most of the project titles we see won’t lead to awards and will never be published; and even when an award is made, most titles attract little notice. A few, however, will be seen by thousands or be picked up by the media and broadcast to millions. The title can seem like a small and inconsequential thing, until it’s suddenly important. So, even though the project title is a small piece of your proposal, it is worthy of attention and investment. We have provided the tips above to help you craft a title that uses straightforward, plain language to convey a clear and engaging message to your audiences.

We can’t avoid attention. In fact, we want to draw positive attention to the awesome work you do. But audience reactions are reliably unpredictable. The best we can do is to make sure that what we’re putting out there is as clear and understandable as possible.

 


[i] Anything related to research funding from policies on our end to research papers to tweets or videos mentioning projects.

[ii] At the close of an award, you are also required to file a “Project Outcomes Report” via Research.gov. This also becomes part of the permanent project record and publicly visible when your work is complete. We don’t edit these.

[iii] For the “good titles” argument as applied to research papers, see here: https://smallpondscience.com/2016/10/19/towards-better-titles-for-academic-papers-an-evaluative-approach-from-a-blogging-perspective/

[iv] There are obvious exceptions here, like a proposal for a targeted biodiversity survey in a geographical region.

[v] For what it’s worth, this is a common “rookie mistake” even before writing a proposal. We get lots of inquiries along the lines of “do you fund studies on organism X” or “in place Y”. The short answer is yes, but it’s often irrelevant because that doesn’t differentiate DEB from MCB or IOS or BioOCE. We don’t define the Division of Environmental Biology by organisms, or places, or tools, or methods. We define it by the nature of the fundamental questions being addressed by the research.

[vi] Some prefixes are mutually exclusive of one another. For example, CAREER and RUI cannot both be applied to the same proposal (http://www.nsf.gov/pubs/2015/nsf15057/nsf15057.jsp#a16).

[vii] Better alternatives might have been “plant” or “wild mustard”.

[viii] And yes, we do get the same sorts of calls and emails about “sick trees”, “that strange bird I saw”, “what to do about spiders,” etc. as you do.

Fall 2016 DEB Panels status: “When will I have a decision?” edition

DEB’s full proposal panels finished in early November (for those full proposals submitted back in July and August). So, when will you receive review results?

Some of you may have already heard from us. Others will be hearing “soon” (as detailed below).

Right now, all of our programs have synthesized the recommendations of their panels, considered their portfolios, and come up with their planned award and decline recommendations. These are then documented, sent through administrative review, and finally signed off, “concurred,” by the head or deputy for the Division.

DEB’s first priority is processing the decline notices. We’re trying to get your reviews back to you to provide as much time as possible to consider your options for January pre-proposal submissions.

For potential awards, it’s a bit more complicated. We expect award recommendation dates to be later this year than typical. At present, NSF is operating under a temporary budget measure, called a Continuing Resolution (or CR). The current CR runs through December 9, 2016. We won’t have significant funds available to cover new grants until a longer-term funding measure is enacted.

So, while we have a prioritized list of award recommendations, we don’t yet have the funds needed to take action on those recommendations. Moreover, we don’t know how much funding we’ll actually have available, so uncertainty is part of the plan. Thus, between “definite award recommendation” and “definite decline recommendation” we have a recommendation gray zone.

How are we handling this?

If your proposal fell into the definite decline group, then you’ll be getting an official notice from DEB. Once the formal decline recommendation is approved, the system updates the proposal status in FastLane and queues up a notification email. We are planning to have all declines approved by December 20, 2016. Note: our IT system sends the notification emails in batches at the end of the day[i]. Thus, if you are frequently refreshing FastLane you will likely see the news there before you get a letter from us.

If your proposal fell into the definite award group or the gray zone, you will first be getting a call or email from your Program Officer. They will be letting you know what the plan is for your particular proposal and how you can get things ready (e.g., submitting budget revisions or abstract language) for an eventual award. Formal action, including the release of reviews, cannot happen until we have funding available. However, folks in this group should also hear from their Program Officers by December 20.

After December 20, if you have not received any communication from us, first check your spam folder and then look up your proposal number and give us a call. But please remember, the lead PI for a proposal or collaborative group is the designated point of contact; if you’re a co-PI you need to get in touch with the lead PI and have them inquire.


[i] We’re not totally sure why this is, but suspect it has to do with email traffic volume and security features: discriminating an intentional batch of emails from an account taken over by a bot.

Preliminary Proposal Evaluation Survey Reminder

TL;DR

Check your inbox.

Check your spam folder.

Complete the survey!

End the reminder messages.

 

Background (if the above doesn’t make sense to you).

This is about the Preliminary Proposal system in use in both NSF BIO’s Division of Environmental Biology and Division of Integrative Organismal Systems.

We are in the midst of an external evaluation of the effects of this system on the merit review process.

We posted an initial notification letter about stakeholder surveys. And, copies of this letter were sent out to everyone in the sample ahead of the formal invitations.

The formal survey invitations with the active survey links were sent out by mid-September from the evaluator, Abt Associates.

Reminder emails are also coming out and will continue to do so at regular intervals while the survey remains open and incomplete.

If you have been receiving these messages, please complete the survey. If your colleagues have been receiving these messages and have not completed the survey, encourage them to do so.

If you received an invitation to take the survey,

  • Please take the 10 or so minutes to register your responses via the link in the email.
  • Remember that these are single-use individualized links.
  • Your response matters. This isn’t a census: your invitation is part of a stratified random sample selected for inference to the population.

Thank you for your participation!

A dozen things All PIs should know about the U.S. Federal budget as it relates to NSF research grants

Things upstream from a grant decision


1

There is an annual budget cycle (see graphic, below):

a.    Request: The President puts out a plan for a budget in a request to Congress.

b.    Appropriation: Congress decides how much (described in this downloadable PDF) to actually provide to each agency (e.g., NSF). This is signed into law by the President. Annual appropriations start on October 1 each year. Even if Congress is delayed in finalizing the budget for that year, the October 1 “birthday” of the funds applies retroactively.

c.    Allocation and Allotment: The appropriations are passed down from the Treasury through the agency to funding programs (e.g., Population and Community Ecology, Dimensions of Biodiversity).

d.    Commitment and Obligation: Funding is applied to projects (typically as grants) after merit review. Technically, Program Officers “recommend” funding (d-i), Division Directors concur with the decision to “commit” funds (d-ii), Grants Specialists make the “award obligation” (d-iii), and the award is made to the institution (not the PI).

e.    Expenditure & Reimbursement: Over the subsequent months and years, the PIs of funded projects use the funding to make science happen and receive reimbursement from Treasury accounts.

Diagram of the relationship between the annual U.S. Federal budget process and NSF merit review system.


2

At any given time, we are thinking about 3 or 4 different years’ budgets:

a.    Reporting on last year

b.    Managing this year

c.    Planning for next year

d.    Building momentum for the year after next


3

While we often refer to “the budget” in the singular abstract form, there are different pots of money at different levels in the agency.


4

At the highest level, there are 6 different pots (described in this PDF), called accounts[i]. These pots can’t be mixed[ii]. And, only 2 typically matter directly to researchers: Research & Related Activities (R&RA) and Education & Human Resources (EHR).


5

Individual program[iii] budgets, scopes, and lifespans are usually managed by each Division, but specific guidance from the White House Office of Management & Budget (OMB) or Congress can lead to changes and cancellations.


6

Our window to put the funding onto projects through grants (item 1d, above) is the most constrained step. Funds are supposed to arrive by October 1 each year, but it’s not uncommon that delays in the budget cycle mean we don’t see the full (i.e., appropriated and allocated) budget at the program level until the following March, April, or later. And, all funds need to be obligated by the end of each fiscal year (September 30)[iv].


 

Things downstream from a grant decision


7

Every dollar that supports your NSF research grant has an expiration date. The same is true for much of the Federal budget appropriated by Congress. For NSF research (R&RA) funds, the expiration date is 7 years from the start of the fiscal year (October 1 annually) in which the funds were provided to the agency (i.e., appropriated).


8

Because most DEB awards made in a given fiscal year have start dates well after October 1, the clock started ticking even before you received a grant. For example, if your award start date is July 1, then the funds you received are already 9 months old.
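For readers who like to see the arithmetic, here is a minimal sketch (in Python; our own illustration, not any NSF tool) of the expiration-clock logic from items 7 and 8: funds expire 7 years after October 1 of the fiscal year in which they were appropriated, and an award that starts mid-year is drawing on funds that are already several months old.

```python
# Minimal sketch of the expiration-clock arithmetic in items 7 and 8.
# This is our own illustration, not an NSF system or official calculator.
from datetime import date

def fiscal_year_start(fiscal_year: int) -> date:
    """A federal fiscal year starts on October 1 of the prior calendar year (FY2016 -> 2015-10-01)."""
    return date(fiscal_year - 1, 10, 1)

def funds_expiration(fiscal_year: int) -> date:
    """R&RA funds expire 7 years after the start of the fiscal year in which they were appropriated."""
    start = fiscal_year_start(fiscal_year)
    return date(start.year + 7, start.month, start.day)

fy = 2016                          # fiscal year of the appropriation (example)
award_start = date(2016, 7, 1)     # hypothetical award start date

age_days = (award_start - fiscal_year_start(fy)).days
print(f"FY{fy} funds: clock starts {fiscal_year_start(fy)}, funds expire {funds_expiration(fy)}")
print(f"An award starting {award_start} uses funds that are already ~{age_days // 30} months old")
```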


9

Although you can request a delay in the official start date of an award, which affects when you start spending your funds, you can’t delay the aging of your award funds. A delayed start doesn’t provide you any extra time to complete the work. The ultimate limit on how long you can extend a funded project (no-cost extensions) depends on when those dollars expire.


10

Money doesn’t actually go to your institution when you get a grant. It stays in the US Treasury until spent. We refer to your award as a federal obligation because it authorizes your institution to charge for expenditures incurred in the conduct of that award, and get reimbursed from the Treasury. We can see how much you have spent of your funds at any time.


11

There is a whole lot of regulation defining what projects can and can’t spend money on; meeting those regulatory obligations is largely the responsibility of your Sponsored Research Office (SRO)[v]. The ability of your SRO to meet those obligations is one of the things NSF reviews between the time when we (the programs) say “this is a good project” and the formal issuing of the grant. The consequences for failing to follow these rules are serious.


12

When we make a grant, we want you to use your full award. Funds that expire at the end of the 7-year clock don’t support your research or our mission. When we see expiring funds, we realize that we could have funded someone else but now we can’t (and there are lots of others who would have been happy for any funding). It also looks like you inflated your budget and/or can’t manage your projects effectively. And, it sends a message that the community has more money than it can put to good use.


[i] In 2009, ARRA “stimulus” funding was a 7th pot of money.

[ii] Without specific authority granted through legislation.

[iii] E.g., this list http://www.nsf.gov/funding/programs.jsp?org=DEB

[iv] Technically, NSF has two years in which to obligate our R&RA annual appropriation, but DEB, like most of NSF, does not “carryover” any funds into a second year. We commit and obligate every dollar allocated to us in a fiscal year and typically do so by mid-August. This allows maximum time for the funded projects to put the funds to use and minimizes the complexities of accounting across different appropriations.

[v] Therefore, your questions about use of funds already awarded should be directed at your SRO, not NSF Program Officers!

DEB Numbers: Historical Proposal Loads

Last spring we posted on the per-person success rate and pointed out several interesting findings based on a decade of DEB data. We were seeing a lot of new PIs and, conversely, a lot of PIs who never returned after their first shot. And, the vast majority of PIs who managed to obtain funding are not continuously funded.

This post is a short follow-up to take a bigger picture look at submission rates.

Since preliminary proposals entered the scene, DEB really hasn’t seen much change in the submission pattern: 75% of PIs in any year submit one preliminary proposal and the other 25% submit two (and a small number submit three ideas in a year, if one also counts full proposals to special programs).

Before the preliminary proposals were launched, we ran some numbers on how often people tended to submit. The results were that, in the years immediately prior to preliminary proposals (~2008-2011), around 75% of PIs in a year were on a single proposal submission (25% on two or more). Fewer than 5% of PIs submitted more than two proposals in a year. Further, most PIs didn’t return to submit proposals year after year (either new ideas or re-workings of prior submissions); skipping a year or two between submissions was typical. These data conflicted with the perception and anecdotes that “everyone” submitted several proposals every year and was increasing their submission intensity. Although recent data don’t support those perceptions, we still wondered if there might be a kernel of truth to be found on a longer time scale. What is the longer-term history of proposal load and submission behavior across BIO?

Well, with some digging we were able to put together a data set that lets us take a look at full proposal research grant submissions across BIO, going all the way back to 1991 when, it seems, the NSF started computerized record-keeping. Looking at this bigger picture of submissions, we can see when changes have occurred and how they fit into the broader narrative of the changing funding environment.

Total BIO full research grant submissions per year (line, right axis) and proportions of individuals submitting 1, 2, 3, 4, 5, or more proposals each calendar year from 1991 to 2014. (Note: 2015 is excluded because proposals submitted in calendar year 2015 are still being processed at the time of writing.)

 

1990s: Throughout the 1990s BIO received about 4000 proposals per year. This period of relative stability represents the baseline for more than a decade of subsequent discussions of increasing proposal pressure. Interestingly, the proportion of people submitting two or more proposals each year grew over this period, but without seeming to affect total proposal load; this could result from either increasing collaboration (something we’ve seen) or a shrinking PI pool (something we haven’t seen). At this time NSF used a paper-based process, so the cost and effort to prepare a proposal was quite high. Then….

2000s: In 2000, FastLane became fully operational and everyone switched to electronic submission. BIO also saw the launch of special programs in the new Emerging Frontiers division. In a single year, it became easier to submit a proposal and there were more deadlines and target dates to which one could potentially submit. The new electronic submission mechanism and new opportunities likely both contributed to increased submissions in subsequent years.

Following the switch to FastLane, from 2001 to 2005, total annual submissions grew to about 50% above the 1990s average and stayed there for a few years. This period of growth also coincided with an increasing proportion of people submitting 2+ proposals. Increasing numbers of proposals per person had only a limited effect on the total proposal load because of continued growth in collaboration (increasing PIs per proposal). Instead, the major driver of proposal increases was the increasing number of people submitting proposals. This situation was not unique to BIO.

This period of rapid growth from 2001 to 2005 sparked widespread discussion in the scientific community about overburdening of the system and threats to the quality of merit review, as summarized in the 2007 IPAMM report.

Eventually, however, the community experienced a declining success rate because BIO budgets did not rise to match the 50% increase in proposal submissions. From 2005 to 2008, submissions per person seemed to stabilize, and total submissions peaked in 2006. We interpret this as a shift in behavior in response to decreasing returns for proposal effort (a rebalancing of the effort/benefit ratio for submissions). It would have been interesting to see if this held, but….

2009/2010: In 2009 and 2010, BIO was up another ~1000 proposals over 2006, reaching an all-time high of nearly 7000 proposal submissions. These were the years of ARRA, the economic stimulus package. Even though NSF was very clear that almost all stimulus funding would go toward funding proposals that had been already reviewed (from 2008) and that we wouldn’t otherwise be able to afford, there was a clear reaction from the community. It appears that the idea of more money (or less competition) created a perception that the effort/benefit relationship may have changed, leading to more proposals.

2011: We see a drop in 2011. It is plausible that this was the realization that the ARRA money really was a one-time deal, there were still many more good proposals than could be funded, and that obtaining funding hadn’t suddenly become easier. As a result, the effort/benefit dynamic could be shifting back; or, this could’ve been a one-time off year. We can’t know for sure because…

2012: Starting in 2012, IOS and DEB, the two largest Divisions in BIO, switched to a system of preliminary proposals to provide a first-pass screening of projects (preliminary proposals are not counted in the chart). This effectively restricted the number of full proposals in the two largest competitions in BIO such that in 2012, 2013, and 2014 the full proposal load across BIO dropped below 5000 proposals per year (down 2000 proposals from the 2010 peak). The proportion of individuals submitting 2+ full proposals per year also dropped, consistent with the submission limits imposed in DEB, IOS, and MCB. PIs now submitting multiple full proposals to BIO in a given year are generally submitting to multiple programs (core program and special program) or multiple Divisions (DEB and [IOS or MCB or EF or DBI]) and diversifying their submission portfolios.

In summary, the introduction of online and multi-institutional submissions via FastLane kicked off a decade of change marked by growth in proposal submissions and per-PI submissions to BIO. The response, a switch to preliminary proposals in IOS and DEB, caused a major (~1/3) reduction in full proposals and also a shift in the proportion of individuals submitting multiple proposals each year. In essence, the pattern of proposal submission in BIO has shifted back to what it was like in the early 2000s. However, even with these reductions, it is still a more competitive context than the 1990s baseline, prior to online submissions via FastLane.

DEB Numbers: Are aquatic ecologists underrepresented?

Editor’s note: This post was contributed by outgoing rotating Program Officer Alan Wilson and is a write-up of part of a project performed by DEB summer student Zulema Osorio during the summer of 2015.

Generalizability has been fundamental to the major advances in environmental biology and is an important trait for current research ideas proposed to NSF.  Despite its significance, a disconnect between terrestrial and aquatic ecological research has existed for several decades (Hairston 1990).

For example, Menge et al. (2009) quantitatively showed that authors cite studies predominantly (~50%-65%) from their own habitat type, but that terrestrial ecologists are less likely to include citations from aquatic systems than the converse. Failure to broadly consider relevant literature when designing, conducting, and sharing findings from research studies not only hinders future scientific advances (Menge et al. 2009) but may also compromise an investigator’s chances for funding[i] when proposing research ideas.

More recently, there have been anecdotal reports from our PI community that freshwater population or community ecology research is under-represented in NSF’s funding portfolio.  To explore the potential bias in proposal submissions and award success rates for ecological research associated with focal habitat, we compared the submissions and success rates of full proposals submitted to the core Population and Community Ecology (PCE) program from 2005-2014 that focused on terrestrial systems, aquatic systems, or both (e.g., aquatic-terrestrial linkages, modeling, synthesis).  Data about focal ecosystems were collected from PI-reported BIO classification forms.  To simplify our data analysis and interpretation, all projects (including collaboratives) were treated only once.  Also, the Division of Environmental Biology (DEB) switched to a preliminary proposal system in 2012.  Although this analysis focuses only on full proposals, the proportion of preliminary and full proposal submissions for each ecosystem type were nearly identical for 2012-2014.  Some projects (2.7% of total projects) provided no BIO classification data (i.e., non-BIO transfers or co-reviews) and were ignored for this project.  Finally, several other programs inside (Ecosystem Science, Evolutionary Processes, and Systematics and Biodiversity Science) and outside (e.g., Biological Oceanography, Animal Behavior, Arctic) of DEB fund research in aquatic ecosystems.  Thus, our findings only relate to the PCE portfolio.

In total, 3,277 core PCE projects were considered in this analysis. Means ± 1 SD were calculated for submissions and success rates across 10 years of data from 2005-2014. Terrestrial projects (72% ± 2.8% SD) have clearly dominated submissions to the core PCE program across all ten years surveyed (Figure 1). Aquatic projects accounted for 17% (± 2.6% SD) of the full proposal submissions, while projects that include aspects of both aquatic and terrestrial components accounted for only 9% (± 1.6% SD) (Figure 1). The full proposal success rate has been similar across studies that focused on terrestrial or aquatic ecosystems (calculated as number of awards ÷ number of full proposal submissions; Figure 2; terrestrial: 20% ± 6.9% SD; aquatic: 18% ± 6.5% SD). Proposal success rate dynamics for projects that focus on both ecosystems are more variable (Figure 2; 16% ± 12.7% SD), in part due to the small number of such projects (9.5% of the projects considered in this study).
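For those who want to reproduce this kind of summary on their own data, here is a minimal sketch (in Python) of the calculation described above: a per-year success rate computed as awards ÷ full proposal submissions, then summarized as a mean ± 1 SD across years. The numbers below are placeholders for illustration, not the actual PCE data.

```python
# Sketch of the success-rate summary described above; the counts are illustrative, not real PCE data.
from statistics import mean, stdev

# Hypothetical awards and full-proposal submissions for one focal-ecosystem category, 2005-2014
awards      = [10, 12,  9, 14, 11,  8, 13, 10,  9, 12]
submissions = [55, 60, 50, 62, 58, 49, 61, 54, 52, 57]

success_rates = [a / s for a, s in zip(awards, submissions)]  # awards ÷ submissions, per year
print(f"success rate: {mean(success_rates):.1%} ± {stdev(success_rates):.1%} (mean ± 1 SD over 10 years)")
```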

Figure 1. Submission history of full proposals submitted to the core PCE program from 2005-2014 for terrestrial (brown), aquatic (blue), or both ecosystems (red). Proposals were classified based on PI-submitted BIO classification forms. Note that some projects did not provide BIO classification data. These projects were ignored for this analysis and explain why yearly relative data may not total 100%.


Figure 2. Success rate of full proposals submitted to the core PCE program from 2005-2014 for terrestrial (brown), aquatic (blue), or both ecosystems (red). Proposal success rate is calculated for each ecosystem type as the number of awards ÷ the number of full proposal submissions. Proposals were classified based on PI-submitted BIO classification forms.

In summary, anecdotal PI concerns of fewer funded aquatic proposals in PCE are consistent with available data but are an artifact of fewer aquatic proposal submissions.  Although funding rates for all full PCE proposals have generally varied from 2005-2014 (mean: 19.9% ± 6.4% SD; range: 11%-29%) as a function of available funds and the number of proposals considered, terrestrial- and aquatic-focused research proposals have fared similarly for the past decade.  PCE, like the rest of DEB and NSF, is motivated to have a diverse portfolio and encourages ecologists from varied institutions and backgrounds to submit ideas that study interesting, important questions that will generally move the field of population and community ecology forward.


References

Hairston, Jr., N. G. 1990. Problems with the perception of zooplankton research by colleagues outside of the aquatic sciences. Limnology and Oceanography 35(5):1214-1216.

Menge, B. A., F. Chan, S. Dudas, D. Eerkes-Medrano, K. Grorud-Colvert, K. Heiman, M. Hessing-Lewis, A. Iles, R. Milston-Clements, M. Noble, K. Page-Albins, R. Richmond, G. Rilov, J. Rose, J. Tyburczy, L. Vinueza, and P. Zarnetska. 2009. Terrestrial ecologists ignore aquatic literature: Asymmetry in citation breadth in ecological publications and implications for generality and progress in ecology. Journal of Experimental Marine Biology and Ecology 377:93-100.

[i] Generalizability “within its own field or across different fields” is a principal consideration of the Intellectual Merit review criterion: http://www.nsf.gov/pubs/policydocs/pappguide/nsf16001/gpg_3.jsp#IIIA

Spring 2016: DEB Preliminary Proposal Results

Notices

All PIs should have received notice of the results of their 2016 DEB Core Program preliminary proposals by now. Full proposal invitation notices were all sent out by the first week of May (ahead of schedule), giving invited PIs a solid three months to prepare their full proposals. ‘Do Not Invite’ decisions began going out immediately thereafter and continued throughout the rest of May.

If you haven’t heard, go to fastlane.nsf.gov and log in. Then, select the options for “proposal functions” then “proposal status.” This should bring up your proposal info. If you were a Co-PI, check with the lead PI on your proposal: that person is designated to receive all of the notifications related to the submission.

If you are the lead PI and still have not heard anything AND do not see an updated proposal status in FastLane, then email your Program Officer/Program Director. Be sure to include the seven-digit proposal ID number of your submission in the message.

Process

All told, DEB took 1474 preliminary proposals to 10 panels during March and April of 2016. A big thank you to all of the panelists who served and provided much thoughtful discussion and reasoned recommendations. Note: if you’re interested in hearing a first-hand account of the DEB preliminary proposal panel process, check out this great post by Mike Kaspari.

Panelists received review assignments several weeks prior to the panels and prepared individual written reviews and individual scores. During the panel, each proposal was discussed by the assigned panelists and then presented to the entire panel for additional discussion and assignment to a rating category. Panels were presented two recommendation options for each preliminary proposal: Invite or Do Not Invite. Following discussion, the assigned panelists prepared a panel summary statement to synthesize the key points of the panel discussion and rationale for the assigned rating.

Both the individual written reviews and the panel summary statement are released to the PI of the preliminary proposal.

As we’ve discussed previously, the final decisions on the preliminary proposals are made by the programs with concurrence of senior management. These decisions take into account the panel recommendations, especially the substance of the discussions, as well as expectations for future award-making capacity based on the availability of funds, additional expected proposal load at the full proposal stage, and portfolio balance issues.

Results

| DEB Cluster | Total Reviewed | Panel: Invite | Panel: Do Not Invite | Panel: No Consensus | Total Invited | Invite Rate |
|-------------|----------------|---------------|----------------------|---------------------|---------------|-------------|
| SBS         | 289            | 79            | 210                  | 0                   | 85            | 29%         |
| EP          | 440            | 94            | 346                  | 0                   | 101           | 23%         |
| PCE         | 439            | 122           | 315                  | 2                   | 110           | 25%         |
| ES          | 306            | 94            | 212                  | 0                   | 86            | 28%         |
| DEB Total   | 1474           | 389           | 1083                 | 2                   | 382           | 26%         |

These numbers are consistent with our goal of inviting the most promising projects while targeting a success rate of approximately 25% for the resulting full proposals that will be submitted this summer.

Big Picture

Comparing to the previous rounds of preliminary proposals…

|             | 2012 | 2013 | 2014 | 2015 | 2016 |
|-------------|------|------|------|------|------|
| Reviewed    | 1626 | 1629 | 1590 | 1495 | 1474 |
| Invited     | 358  | 365  | 366  | 383  | 382  |
| Invite Rate | 22%  | 22%  | 23%  | 26%  | 26%  |

…we see that the system has recovered somewhat from the initial flood of submissions. Moreover, the invite rate, and subsequent full proposal success rate, has stabilized in a range that reasonably balances against the effort required to produce each submission.

DEB Numbers: Success Rates by Merit Review Recommendation

We recently received a comment from a panelist (paraphrasing): how likely are good proposals to get funded? We’ve previously discussed differences between the funding rates we report directly to you from panels and the NSF-wide success rate numbers reported on our website.  But the commenter was interested in an even more nuanced question: to what extent do award decisions follow the outcomes of merit review? This is a great topic for a post and, thanks to our Committee of Visitors review last year, we already have the relevant data compiled. (So this is really the perfect data-rich but quick post for panel season.)

To address this question, we need to first define what a “good proposal” is.

In our two-stage annual cycle, each project must pass through review at least twice before being awarded: once as a preliminary proposal, and once as an invited full proposal.

At each stage, review progresses in three steps:

  • Three individual panelists independently read, review, and score each proposal prior to the panel. A single DEB panelist is responsible for reviewing an assigned subset of all proposals at the panel. This is the same for preliminary proposals and full proposals. Full proposals also receive several non-panelist “ad hoc” reviews prior to the panel.
  • The proposal is brought to panel where the panelists discuss the proposal and individual reviews in relation to each other and in the context of the rest of the proposals in the panel to reach a consensus recommendation. This is the same for preliminary proposals and full proposals.
  • The Program Officers managing the program take into consideration the reviews, the recommendations of the panel(s) that assessed the proposal, and their portfolio management responsibilities to arrive at a final recommendation. This is the same for preliminary proposals and full proposals.

In this case, since we are discussing the Program’s actions after peer review, we are defining as “good” anything that received a positive consensus panel recommendation. Initially, the label of “good” will be applied by the preliminary proposal panel. Then, at the full proposal panel it will receive a second label, which may or may not also be “good”. A “good” recommendation for either preliminary or full proposals includes any proposal not placed into the lowest (explicitly negative) rating category. The lowest category usually has the word “not” in it, as in “Do Not Invite” or “Not Fundable”. All other categories are considered “good” recommendations, whether there is a single positive category (e.g., “Invite”) or several ordinal options conveying varying degrees of enthusiasm (e.g., “high priority”, “medium priority”, “low priority”).

To enable this analysis, we traced the individual review scores, panel review recommendations, and outcomes for proposals from the first three years of the DEB preliminary proposal system (i.e., starting with preliminary proposals from January 2012 through full proposals from August 2014).

As we’ve reported previously, preliminary proposal invitation rates are between 20% and 30%, and between 20% and 30% of invited full proposals are funded, leading to end-to-end funding rates around 7%. But, as our commenter noted, that obscures a lot of information and your individual mileage will vary. So…
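As a quick illustration of where the ~7% figure comes from (using round midpoint values rather than any exact program numbers), the two stage rates simply multiply:

```python
# Back-of-the-envelope check of the end-to-end rate; the two rates below are illustrative midpoints.
invite_rate = 0.25       # ~20-30% of preliminary proposals are invited
award_rate  = 0.27       # ~20-30% of invited full proposals are funded
print(f"end-to-end funding rate: {invite_rate * award_rate:.1%}")  # ~6.8%, i.e., roughly 7%
```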

How likely are “good” proposals to get funded?

In the table below, you can see the overall invitation rate for preliminary proposals is 23%, but the rate looks very different depending on how well a proposal performed in the panel[i].

Preliminary Proposal Outcomes by Panel Recommendation

| Pre-Proposal Panel Rating | % of Proposals Receiving Rating | Not Invited | Invited | Invite Rate |
|---------------------------|---------------------------------|-------------|---------|-------------|
| High (Good)               | 19%                             | 22          | 879     | 98%         |
| Low (Good)                | 5%                              | 100         | 141     | 59%         |
| Do Not Invite             | 76%                             | 3597        | 74      | 2%          |
| Total                     | 100%                            | 3719        | 1094    | 23%         |

This stage is a major winnowing of projects. On the one hand, we tend toward inviting most of what the panels recommend. On the other hand, the majority of preliminary proposals that aren’t well rated (and so fall outside our working definition of “good”) are highly unlikely to see the full proposal stage. There is a low, 2%, invite rate for proposals that the panels recommended as Do Not Invite. This is a measure of the extent to which program officers disagree with panelists and choose to take a chance on a particular idea or PI, based on their own knowledge of submission history and portfolio balance issues.

From these invitations, the programs receive full proposals. After review, programs award approximately 25% of the full proposals, but again the outcome is strongly influenced by the panel ratings.

Full Proposal Outcomes by Panel Recommendation

| Full Proposal Panel Rating | % of Proposals Receiving Rating | Declined | Awarded | Funding Rate |
|----------------------------|---------------------------------|----------|---------|--------------|
| High (Good)                | 17%                             | 30       | 122     | 80%          |
| Medium (Good)              | 23%                             | 115      | 98      | 46%          |
| Low (Good)                 | 21%                             | 165      | 21      | 11%          |
| Not Competitive            | 39%                             | 349      | 7       | 2%           |
| Total                      | 100%                            | 659      | 248     | 27%          |

Program Officers are faced with a greater responsibility for decision-making at the full proposal stage. Whereas preliminary proposal panels gave the nod (High or Low positive recommendations) to only ~23% of submissions, full proposal panels put 551 of 907 proposals into “fundable” categories (Low, Medium, or High). Since this is more than twice as many as the programs could actually fund,[ii] the work of interpreting individual reviews, panel summaries, and accounting for portfolio balance plays a greater role in making the final cut. Also note that these are the cumulative results of three years of decision-making by four independently managed program clusters, so “divide by 12” to get a sense of how common any result is for a specific program per year.

Ultimately, the full proposal panel rating is the major influence on an individual proposal’s likelihood of funding and the hierarchy of “fundable” bins guides these decisions:

Success rates of DEB full proposals when categorized by preliminary proposal and full proposal panel recommendations.

While funding decisions mostly ignore the preliminary proposal ratings, readers may notice an apparent “bonus” effect in the funding rate for “Do Not Invite” preliminary proposals that wind up in fundable full proposal categories. For example, of 15 preliminary proposals that were rated “Do Not Invite” but were invited and received a “Medium” rating at the full proposal stage, 10 (67%) were funded compared to 45% and 42% funding for Medium-rated full proposals that preliminary proposal panelists rated as High or Low priority, respectively.  However, this is a sample size issue. Overall the numbers of Awarded and Declined full proposals are not associated with the preliminary proposal recommendation (Chi-Square = 2.90, p = 0.235).
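For readers curious about the test behind that statement, here is a minimal sketch (in Python, using scipy) of a chi-square test of independence on an awarded/declined by preliminary-rating table. The row totals below match the tables in this post, but the awarded/declined split within each row is an illustrative guess, since the full cross-tabulation isn’t published here.

```python
# Sketch of a chi-square test of independence between preliminary panel rating and award outcome.
# Row totals match the tables above; the awarded/declined splits are illustrative, not the real counts.
from scipy.stats import chi2_contingency

#         awarded  declined
table = [
    [207, 521],   # preliminary rating: High
    [ 26,  91],   # preliminary rating: Low
    [ 15,  47],   # preliminary rating: Do Not Invite
]

chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```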

 

Does Preliminary Proposal rating predict Full Proposal rating?

This is a difficult question to answer since there is nothing solid to compare against.

We don’t have a representative set of non-invited full proposals that we can compare to say “yes, these do fare better, the same as, or worse than the proposals that were rated highly” when it comes to the review ratings. What we do have is the set of “Low” preliminary proposals that were invited, and the small set of “Do Not Invite” preliminary proposals that were invited by the Program Officers against the panel recommendations. However, these groups are confounded by the decision process: these invites were purposely selected because the Program Officers thought they would be competitive at the full proposal stage. They are ideas we thought the panels missed or selected for portfolio balance; therefore, they are not representative of the entire set of preliminary proposals for which the panels recommended Low or Do Not Invite.

Distribution of Full Proposal Panel Ratings versus Preliminary Proposal Ratings (row percentages across the full proposal panel rating categories)

| Pre-Proposal Panel Rating | # Received As Full Proposals | High | Medium | Low | Not Competitive |
|---------------------------|------------------------------|------|--------|-----|-----------------|
| High                      | 728                          | 19%  | 24%    | 20% | 37%             |
| Low                       | 117                          | 10%  | 21%    | 20% | 50%             |
| Do Not Invite             | 62                           | 8%   | 24%    | 23% | 45%             |

So, given the active attempts to pick the best proposals out of those in the “Low” and “Do Not Invite” preliminary proposal categories, proposals invited on the basis of “High” ratings were about twice as likely to wind up in the “High” category at the full proposal stage as those invited from the Low or Do Not Invite preliminary proposal categories. And, those invited from the Low or Do Not Invite categories were somewhat more likely to wind up in Not Competitive. Moreover, the score data presented below provide additional evidence that this process is, in fact, selecting the best proposals.

 

What do individual review scores say about the outcomes and different panel ratings?

We expect the full proposal review stage to be a more challenging experience than the preliminary proposal stage because most of the clearly non-competitive proposals have already been screened out. Because of this, full proposals should present a tighter grouping of reviewer scores than preliminary proposals. The distribution of average proposal scores across the two stages is shown below. We converted the “P/F/G/V/E” individual review scores to a numerical scale from P=1 to E=5, with split scores as the average of the two letters (e.g., V/G = 3.5). As a reminder, individual reviewer scores are sent in prior to the panel, without access to other reviewers’ opinions and with each reviewer seeing only a relatively small subset of proposals. So the average rating (and spread of individual scores for a proposal) is mostly a starting point for discussion and not the end result of the review[iii].
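As an illustration of that conversion, here is a minimal sketch in Python (the function names are ours, not part of any NSF system):

```python
# Minimal sketch of the letter-to-number score conversion described above.
LETTER_TO_NUMBER = {"P": 1, "F": 2, "G": 3, "V": 4, "E": 5}

def score_to_number(score: str) -> float:
    """Convert an individual review score such as 'E', 'V', or 'V/G' to a number.
    Split scores (two letters separated by '/') become the average of the two values."""
    letters = score.strip().upper().split("/")
    return sum(LETTER_TO_NUMBER[letter] for letter in letters) / len(letters)

def mean_score(scores: list[str]) -> float:
    """Average the converted scores across one proposal's individual reviews."""
    return sum(score_to_number(s) for s in scores) / len(scores)

print(score_to_number("V/G"))                    # 3.5
print(round(mean_score(["E", "V", "V/G"]), 2))   # (5 + 4 + 3.5) / 3 = 4.17
```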

Distribution of mean review scores at different points in the DEB core program review process.

The preliminary proposal scores are distributed across the entire spectrum, with the average review scores for most in the 3 to 4 range (a Good to Very Good rating). That we don’t see much in the way of scores below 2 might suggest pre-selection on the part of applicants or rating inflation by reviewers. Invitations (and high panel ratings) typically go to preliminary proposals with average scores above Very Good (4). Only a few invitations are sent out for proposals between Very Good and Good or lower.

The average scores for full proposals are more evenly distributed than the preliminary proposal scores with a mean and median around Very Good. The eventual awards draw heavily from the Very Good to Excellent score range and none were lower than an average of Very Good/Good. And, while some full proposals necessarily performed worse than they did at the preliminary proposal stage, there are still roughly twice as many full proposals with average scores above Very Good than the total number of awards made, so there is no dearth of high performing options for award-making.

So, what scores correspond to different panel ratings?

Average Review Score of Invited Full Proposals by Panel Recommendation (columns are the full proposal panel rating)

| Pre-Proposal Panel Rating | High | Medium | Low  | Not Competitive | Overall |
|---------------------------|------|--------|------|-----------------|---------|
| High                      | 4.41 | 4.08   | 3.76 | 3.53            | 3.88    |
| Low                       | 4.32 | 4.13   | 3.88 | 3.52            | 3.81    |
| Do Not Invite             | 4.42 | 4.00   | 3.75 | 3.44            | 3.73    |
| Overall                   | 4.40 | 4.08   | 3.78 | 3.53            | 3.87    |

There’s virtually no difference in average full proposal scores among groups of proposals that received different preliminary proposal panel ratings (rows, above). This further supports the notion that the full proposals are being assessed without bias based on the preliminary proposal outcomes (which are available to full proposal panelists after individual reviews are written). Reading across the columns, there is approximately a whole letter-score difference between the average scores of highly rated full proposals (E/V) and Not Competitive full proposals (V/G), and the average score for each rating category is distinct.

 

About the Data:

The dataset used in this analysis was originally prepared for the June 2015 DEB Committee of Visitors meeting. We traced the review outcomes of preliminary proposals and subsequent full proposals over the first 3 cycles of proposal review. This dataset included the majority of proposals that have gone through the 2-stage review in DEB, but is not a complete record because preliminary proposal records are only tied to full proposals if this connection is successfully made by the PI at the time of full proposal submission. We discussed some of the difficulties in making this connection on DEBrief in the post titled “DEB Numbers: Per-person success rate in DEB”.

There are 4840 preliminary proposal records in this dataset; 1115 received invitations to submit full proposals. Of those 1115, 928 (83%) submitted full proposals and successfully identified their preliminary proposal. Full proposal records are lacking for the remaining 187 invitees; this is a combination of 1) records missing necessary links and 2) a few dozen invitations that were never used within the window of this analysis. For full proposal calculations, we considered only those proposals that had links and had been processed to a final decision point as of June 2015 (907 records), when the data were captured.

The records followed the lead proposal of collaborative groups/projects in order to maintain a 1 to 1 relationship of all records across preliminary and full proposal stages and avoid counting duplications of review data. The dataset did not include full proposals that were reviewed alongside invited proposals but submitted under other mechanisms that bypass the preliminary proposal stage such as CAREER, OPUS, and RCN.

Data Cleaning: Panel recommendations are not required to conform to a standard format, and the choice of labels, number of options, and exact wording vary from program to program and has changed over time in DEB. To facilitate analysis, the various terms have been matched onto a 4-level scale (High/Medium/Low/Not Invite (or Not Competitive)), which was the widest scale used by any panel in the dataset; any binary values were matched to the top and bottom of the scale. Where a proposal was co-reviewed in 2 or more panels, the most positive panel rating was used for this analysis.
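A minimal sketch of that harmonization step (in Python; the specific label strings and function names here are our illustration, not an exhaustive catalog of the wordings panels actually used):

```python
# Sketch of the label harmonization described above: map varied panel wordings onto a
# 4-level scale and, for co-reviewed proposals, keep the most positive panel rating.
RATING_SCALE = {  # higher number = more positive recommendation
    "not competitive": 0, "do not invite": 0, "not fundable": 0,
    "low": 1, "low priority": 1,
    "medium": 2, "medium priority": 2,
    "high": 3, "high priority": 3, "invite": 3, "fund": 3,
}

def harmonize(label: str) -> int:
    """Map a panel's recommendation label onto the 4-level scale (0 = lowest, 3 = highest)."""
    return RATING_SCALE[label.strip().lower()]

def best_rating(labels: list[str]) -> int:
    """For a proposal reviewed in two or more panels, use the most positive rating."""
    return max(harmonize(label) for label in labels)

print(best_rating(["Low Priority", "Invite"]))  # 3: the more positive of the two panel ratings
```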

[i] Cases where a highly recommended preliminary proposal was Not Invited were typically because the project was already funded (either we were still waiting on our budget from the prior year and the PI re-submitted, or the same work was picked up by another funding source). So, the effective invite rate for “high priority” recommendations is ~100%. The middle “Low” priority rating was used in only a limited set of preproposal panels in the first years of preproposals; at this point, all DEB preproposal panels use two-level “Invite or Do Not Invite” recommendations.

[ii] 248 is less than what we actually funded from the full proposal panels: when CAREER, OPUS, RCN, and proposals that were not correctly linked to preproposal data are accounted for, we’re a bit over 300 core program projects awarded in FYs 2013, 2014 and 2015: 100 new projects/year.

[iii] If the program were to be purely conservative and follow the scoring exactly in making award decisions, there would have been no awards with an average score below 4.2 (Very Good+) and even then half of the proposals that averaged Very Good (4) or better would go unfunded.

Post-Panel Decision Making: What exactly is this “portfolio balance” I keep hearing about?

Program Officers frequently remind panelists of two things: 1) panel discussions are confidential and 2) the panel provides advice to the program; it doesn’t make decisions. Thus, what you see on the rating board is not the final outcome. The typical rejoinder to the second item is: so how do you get from the board to a final outcome? To us, that question sounds like an excellent basis for a blog post.

Once full proposal panels are done and reviewers have made their recommendations, our work is far from over. Program Officers incorporate the panel’s advice with other considerations to manage a variety of short- and long-term factors affecting scientific innovation and careers. Sure, funding the best science is paramount, but most programs receive many more deserving proposals than they can support. We use the term “Portfolio Balance” to describe the strategic considerations that program officers incorporate into these funding decisions. Below, we highlight several axes of the portfolio (in alphabetical order) and outline the driving thoughts behind each one:

  • Award diversity: Programs fund a variety of special awards such as CAREERs, RAPIDs, EAGERs, Research Coordination Networks, OPUS, Small Grants, and Dissertation Improvement Grants. These serve a variety of roles in diversifying the types of projects supported by the Foundation in ways rarely found in a regular grant.
  • Career Stage diversity: How should a program distribute support among PIs at different career stages? Beginning investigators bring new ideas but may have weaker grantsmanship. Mid-career scientists offer experience and a track record, and may merit special consideration if changing research direction. Late career scientists need opportunities to synthesize their work to create a legacy for their community. Postdoctoral awards create special opportunities for beginning scientists to pursue novel and independent projects.
  • Demographic diversity: How can NSF help diversify the scientific workforce and address various demographic imbalances? Many studies have shown that diversity in the workforce generates new ideas and approaches. Different people see different aspects of a topic through their experiences and educational backgrounds; more homogeneous research teams may miss novel and unexpected insights that lead to innovative solutions. Broader impacts often include activities designed to broaden participation in science.
  • Geographic diversity: How can a program ensure the opportunities and benefits of research reach the diverse geographic regions of the country? Innovative research is done in diverse institutions located outside of the major research hubs. In EPSCoR states, which generally receive a smaller portion of federal research dollars, leveraging opportunities can amplify the impact of an award while co-funding can stretch our program budget.
  • Institutional diversity: Not all stellar scientists are at the few major research universities. And, neither are all the students who will become the great researchers of the future. How can we direct limited research support to ensure opportunities are not limited to a select few? Funding projects from diverse institutions, including primarily undergraduate colleges and universities, minority-serving institutions, and regional universities, allows a broader range of faculty and students to participate in and strengthen the scientific enterprise.
  • Intellectual diversity: How do specific projects reinforce, build upon or challenge the results and knowledge generated by the diversity of other projects in the same broad domain? Program officers may try to balance research in areas that are currently “hot” with other topics of importance. Co-review with other programs provides another way to broaden the program’s domain and promote novel application of tools developed in other fields.
  • Laboratory Diversity: Where is the balance between investing in new/unfunded labs versus sustaining established enterprises? There are always new labs, labs running out of funds, labs with funding gaps, and labs with existing funding from us or elsewhere. We often consider PIs’ current funding status in making our decisions; it’s not an outright disqualifier to be well funded at the moment but it is an important consideration in distributing our funds.
  • Risk diversity: Does the program fund at least some work that is intellectually risky? Because progress in science depends on the willingness to challenge the norm, program officers often consider relative degrees of risk and innovation in their funding decisions. Some individuals argue that panels are overly conservative in their recommendations, but program officers make the final decisions and reflect carefully on the nature and magnitude of risks versus the potential payoffs for their field.

Because the distribution of submitted proposals can vary over time, portfolio balance requires both a short term and a long-range vision. NSF staff consider the overall present and future health of the research communities they serve at a depth not generally visible to individual scientists. The recommendations of the reviewers are by far the most important factor; the best of the best are likely to be funded. Discriminating among the next group of outstanding proposals usually involves consideration of one or more of the above factors leading up to that phone call saying you have been recommended for funding.