DEB Numbers: Are aquatic ecologists underrepresented?


Editor’s note: This post was contributed by outgoing rotating Program Officer Alan Wilson and is a write-up of part of a project performed by DEB summer student Zulema Osorio during the summer of 2015.

Generalizability has been fundamental to the major advances in environmental biology and is an important trait for current research ideas proposed to NSF.  Despite its significance, a disconnect between terrestrial and aquatic ecological research has existed for several decades (Hairston 1990).

For example, Menge et al. (2009) quantitatively showed that authors heavily (~50%-65% of citations) cite studies from their own habitat type, and that terrestrial ecologists are less likely to include citations from aquatic systems than the converse.  Failure to broadly consider relevant literature when designing, conducting, and sharing findings from research studies not only hinders future scientific advances (Menge et al. 2009) but may also compromise an investigator’s chances for funding[i] when proposing research ideas.

More recently, there have been anecdotal reports from our PI community that freshwater population or community ecology research is under-represented in NSF’s funding portfolio.  To explore the potential bias in proposal submissions and award success rates for ecological research associated with focal habitat, we compared the submissions and success rates of full proposals submitted to the core Population and Community Ecology (PCE) program from 2005-2014 that focused on terrestrial systems, aquatic systems, or both (e.g., aquatic-terrestrial linkages, modeling, synthesis).  Data about focal ecosystems were collected from PI-reported BIO classification forms.  To simplify our data analysis and interpretation, all projects (including collaboratives) were counted only once.  Also, the Division of Environmental Biology (DEB) switched to a preliminary proposal system in 2012.  Although this analysis focuses only on full proposals, the proportions of preliminary and full proposal submissions for each ecosystem type were nearly identical for 2012-2014.  Some projects (2.7% of total projects) provided no BIO classification data (i.e., non-BIO transfers or co-reviews) and were excluded from this analysis.  Finally, several other programs inside (Ecosystem Science, Evolutionary Processes, and Systematics and Biodiversity Science) and outside (e.g., Biological Oceanography, Animal Behavior, Arctic) of DEB fund research in aquatic ecosystems.  Thus, our findings only relate to the PCE portfolio.

In total, 3,277 core PCE projects were considered in this analysis. Means ± 1 SD were calculated for submissions and success rates across 10 years of data from 2005-2014. Terrestrial projects (72% ± 2.8% SD) have clearly dominated projects submitted to the core PCE program across all ten years surveyed (Figure 1).  Aquatic projects accounted for 17% (± 2.6% SD) of the full proposal submissions, while projects that include both aquatic and terrestrial components accounted for only 9% (± 1.6% SD) (Figure 1).  The full proposal success rate has been similar across studies that focused on terrestrial or aquatic ecosystems (calculated as number of awards ÷ number of full proposal submissions; Figure 2; terrestrial: 20% ± 6.9% SD; aquatic: 18% ± 6.5% SD).  Proposal success rate dynamics for projects that focus on both ecosystems are more variable (Figure 2; 16% ± 12.7% SD), in part due to the small sample size (9.5% of the projects considered in this study).
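The summary statistics above follow a simple recipe: a yearly success rate is the number of awards ÷ the number of full proposal submissions, and the mean and sample standard deviation are then taken across years. A minimal sketch of that arithmetic (the yearly counts below are invented for illustration and are not the actual PCE data):

```python
from statistics import mean, stdev

# Hypothetical (awards, submissions) pairs, one per year -- NOT the actual PCE counts.
yearly_counts = [(11, 62), (16, 74), (9, 52), (19, 88), (13, 65)]

# Yearly success rate: number of awards / number of full proposal submissions.
rates = [awards / subs for awards, subs in yearly_counts]

mean_rate = mean(rates)   # average success rate across years
sd_rate = stdev(rates)    # sample standard deviation (n - 1 denominator)

print(f"success rate: {mean_rate:.1%} ± {sd_rate:.1%} SD")
```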


Figure 1. Proportion of full proposals submitted to PCE based on focal ecosystem from 2005 to 2014.

Figure 2. Success rate of full proposals submitted to PCE based on focal ecosystem from 2005 to 2014.


In summary, anecdotal PI concerns about fewer funded aquatic proposals in PCE are consistent with the available data, but the pattern reflects fewer aquatic proposal submissions rather than a lower success rate.  Although funding rates for all full PCE proposals have varied from 2005-2014 (mean: 19.9% ± 6.4% SD; range: 11%-29%) as a function of available funds and the number of proposals considered, terrestrial- and aquatic-focused research proposals have fared similarly for the past decade.  PCE, like the rest of DEB and NSF, is motivated to have a diverse portfolio and encourages ecologists from varied institutions and backgrounds to submit ideas that address interesting, important questions that will move the field of population and community ecology forward.

Figures

Figure 1. Submission history of full proposals submitted to the core PCE program from 2005-2014 for terrestrial (brown), aquatic (blue), or both ecosystems (red).  Proposals were classified based on PI-submitted BIO classification forms.  Note that some projects did not provide BIO classification data.  These projects were ignored for this analysis and explain why yearly relative data may not total 100%.

Figure 2. Success rate history of full proposals submitted to the core PCE program from 2005-2014 for terrestrial (brown), aquatic (blue), or both ecosystems (red).  Proposal success rate is calculated for each ecosystem type as the number of awards ÷ number of full proposal submissions.   Proposals were classified based on PI-submitted BIO classification forms.

References

Hairston, Jr., N. G. 1990. Problems with the perception of zooplankton research by colleagues outside of the aquatic sciences. Limnology and Oceanography 35(5):1214-1216.

Menge, B. A., F. Chan, S. Dudas, D. Eerkes-Medrano, K. Grorud-Colvert, K. Heiman, M. Hessing-Lewis, A. Iles, R. Milston-Clements, M. Noble, K. Page-Albins, R. Richmond, G. Rilov, J. Rose, J. Tyburczy, L. Vinueza, and P. Zarnetska. 2009. Terrestrial ecologists ignore aquatic literature: Asymmetry in citation breadth in ecological publications and implications for generality and progress in ecology. Journal of Experimental Marine Biology and Ecology 377:93-100.

[i] Generalizability “within its own field or across different fields” is a principal consideration of the Intellectual Merit review criterion: http://www.nsf.gov/pubs/policydocs/pappguide/nsf16001/gpg_3.jsp#IIIA

Reminder: Project Reports Required for New and Continuing Funding


Note: This is an edited version of a post originally appearing on May 20, 2014.

This is a critical reminder for anyone who currently has a continuing grant[i] OR has been (hopes to be) recommended for funding in the remainder of fiscal year 2016 (i.e. now through October 1, 2016).

We need you to complete your reports now for projects funded in prior years, up to and including FY 2015, in order to release FY 2016 funds.

Spring 2016: DEB Preliminary Proposal Results


Notices

All PIs should have received notice of the results of their 2016 DEB Core Program preliminary proposals by now. Full proposal invitation notices were all sent out by the first week of May (ahead of schedule), giving invited PIs a solid three months to prepare their full proposals. ‘Do Not Invite’ decisions began going out immediately thereafter and continued throughout the rest of May.

If you haven’t heard, go to fastlane.nsf.gov and log in. Then, select the options for “proposal functions” then “proposal status.” This should bring up your proposal info. If you were a Co-PI, check with the lead PI on your proposal: that person is designated to receive all of the notifications related to the submission.

If you are the lead PI and still have not heard anything AND do not see an updated proposal status in FastLane, then email your Program Officer/Program Director. Be sure to include the seven-digit proposal ID number of your submission in the message.

Process

All told, DEB took 1474 preliminary proposals to 10 panels during March and April of 2016. A big thank you to all of the panelists who served and provided much thoughtful discussion and reasoned recommendations. Note: if you’re interested in hearing a first-hand account of the DEB preliminary proposal panel process, check out this great post by Mike Kaspari.

Panelists received review assignments several weeks prior to the panels and prepared individual written reviews and individual scores. During the panel, each proposal was discussed by the assigned panelists and then presented to the entire panel for additional discussion and assignment to a rating category. Panels were presented two recommendation options for each preliminary proposal: Invite or Do Not Invite. Following discussion, the assigned panelists prepared a panel summary statement to synthesize the key points of the panel discussion and rationale for the assigned rating.

Both the individual written reviews and the panel summary statement are released to the PI of the preliminary proposal.

As we’ve discussed previously, the final decisions on the preliminary proposals are made by the programs with concurrence of senior management. These decisions take into account the panel recommendations, especially the substance of the discussions, as well as expectations for future award-making capacity based on the availability of funds, additional expected proposal load at the full proposal stage, and portfolio balance issues.

Results

                         Panel Recommendations
DEB Cluster   Reviewed   Invite   Do Not Invite   No Consensus   Invited   Invite Rate
SBS             289        79         210              0            85         29%
EP              440        94         346              0           101         23%
PCE             439       122         315              2           110         25%
ES              306        94         212              0            86         28%
DEB Total      1474       389        1083              2           382         26%

These numbers are consistent with our goal of inviting the most promising projects while targeting a success rate of approximately 25% for the resulting full proposals that will be submitted this summer.

Big Picture

Comparing to the previous rounds of preliminary proposals…

              2012   2013   2014   2015   2016
Reviewed      1626   1629   1590   1495   1474
Invited        358    365    366    383    382
Invite Rate    22%    22%    23%    26%    26%

…we see that the system has recovered somewhat from the initial flood of submissions. Moreover, the invite rate, and subsequent full proposal success rate, has stabilized in a range that reasonably balances against the effort required to produce each submission.

DEB 2016 Summer Meetings Schedule


Meeting season is upon us. Here’s a quick overview of the where, when, and who for finding your DEB representatives at annual meetings this summer. Note: Lists of expected attendees are tentative and subject to change. Check back for updates and additional details of scheduled sessions and other outreach activities as they become available.

 

Society of Wetland Scientists’ 2016 Annual Meeting

http://swsannualmeeting.org/

31 May – 4 June 2016; Corpus Christi, Texas

Liz Blood (Ecosystems)

 

EEID (Ecology and Evolution of Infectious Disease)

http://eeid.cornell.edu/eeid-2016/

3 – 5 June 2016; Ithaca, New York

Sam Scheiner (Evolutionary Processes); Karen Alroy (Science Associate); Diana Weber (AAAS S&T Policy Fellow)

 

ASLO Summer Meeting

https://www.sgmeet.com/aslo/santafe2016/default.asp

5 – 10 June 2016; Santa Fe, New Mexico

Alan Tessier (DDD); Lou Kaplan (Ecosystems); Maria Gonzalez (Population and Community Ecology); Tim Kratz (Macrosystems & NEON Science); Mike Vanni (Postdoctoral Fellows program in DBI)

Event: NSF Funding Opportunities in Aquatic Sciences; Date: Tuesday, 7 June; Time: 12:00 – 13:30

 

ASM Microbe 2016

http://www.asmmicrobe.org/

16 – 20 June 2016; Boston, MA

Matt Kane (Ecosystems); Leslie Rissler (Evolutionary Processes)

 

Evolution 2016 (ASN/SSE/SSB)

http://www.evolutionmeetings.org/evolution-2016—austin-texas.html

17 – 21 June 2016; Austin, Texas

Paula Mabee (DD); George Gilchrist, Paco Moore, Leslie Rissler, Sam Scheiner (Evolutionary Processes); Gordon Burleigh (Systematics and Biodiversity Science)

Event: NSF information session; Date: Monday, 20 June; Time: 12:00 – 13:00

 

Botany 2016

http://www.botanyconference.org/

30 July – 3 August 2016; Savannah, Georgia

Gordon Burleigh, Joe Miller & Simon Malcomber (Systematics and Biodiversity Science)

ESA Ecology 2016

http://esa.org/ftlauderdale/

7 – 12 August 2016; Ft Lauderdale, Florida

Alan Tessier (DDD); Doug Levey & Betsy Von Holle (Population and Community Ecology); Liz Blood, Henry Gholz & Karina Schäfer (Ecosystems); Janice Bossart (Evolutionary Processes); Cheryl Dybas (Public Affairs); John Adamec (Staff)

Booth: #333

Event: Funding Agency Information Session; Date: Monday, 8 August; Time: 11:30-13:15

 

North American Ornithological Conference 2016

http://americanornithology.org/content/north-american-ornithological-conference-2016

16 – 20 August 2016; Washington, DC

Doug Levey (Population and Community Ecology)

 

ecoSummit 2016

http://www.ecosummit2016.org/

29 August – 1 September 2016; Le Corum, Montpellier, France

Karina Schäfer (Ecosystems)

 

Entomology 2016 (XXV International Congress of Entomology)

http://ice2016orlando.org/

25 – 30 September 2016; Orlando, Florida

Janice Bossart (Evolutionary Processes)

Sprucing Up the Place


We’ve been blogging here on DEBrief since February 2013. Since our initial pilot, we’ve gotten noticed not just by you but by our colleagues here at NSF.  Once we paved the way, the rest of the BIO directorate joined in as well and now there are 5 NSF BIO blogs: one each for DEB, IOS, MCB, DBI, and the Assistant Director’s office. We are even integrated with the BIO twitter account (@NSF_BIO).

With all the enthusiasm here in BIO, and positive comments from you, for community outreach through these blogs, we’ve been given some new tools to work with. These will enable DEBrief and the other BIO blogs to look more like the collective cross-division effort that they are. So, over the next couple of weeks (and probably much longer) we’ll be trying out some changes to our look.

The most important part of DEBrief, our content, is not changing. We’ll still be bringing you the latest in funding opportunity updates, analysis of our programs, and discussions of the merit review and decision-making process.

The first change you’re likely to notice is a new custom domain name (debblog.nsfbio.com). The old URL: nsfdeb.wordpress.com will not go away. All existing links, etc. will still work. The idea is to add to this another more informative and easier to find address.

We are also looking to debut an update to our color scheme. We’re no longer limited by the color choices embedded in our theme selection, and are going to work on incorporating the NSF.gov colors.

Lastly, this is a great time for feedback. What works? What doesn’t? Are there parts of the blog site that you want to see improved? Is there something missing you want to see on DEBrief? Did we break anything in the process of implementing the updates? You are always welcome to leave a comment or send us an email at DEBQuestions@nsf.gov.

Evaluating the Preliminary Proposal System, an update.


Our last post, tracing the fate of proposals over the two-stage review process, worked with data that had been pulled together for our Committee of Visitors (COV) in 2015. For those of you unfamiliar with NSF COVs, the short version:

A COV is a special type of panel established, in our case, under the Advisory Committee for BIO. Instead of reviewing proposals, however, the COV reviews the entire merit review and award process. They look at individual reviews and panel summaries provided by you, our management of the merit review and decision-making process, and the resulting portfolio of awards. Every program at NSF undergoes a COV review every three years.

We’ll leave the in-depth discussion of the COV process to another time, but thought you would be interested in some related follow-up.

The DEB COV met last June. It was chaired by Dr. Steward Pickett and the Advisory Committee for BIO was represented by Dr. Paul Turner. After the committee completed its work, a report was transmitted to the BIO Advisory Committee for approval.

Dr. Turner presented the 2015 DEB COV report to the NSF Advisory Committee for BIO at their fall meeting (29 Sept. 2015). The report, and a response document prepared by BIO to outline our planned follow-up actions, were accepted by the Advisory Committee. Both documents have been published to the nsf.gov website. The COV report, and BIO response are both available for download and sharing.

The COV report endorsed our plan to formally evaluate the preliminary proposal process in DEB and IOS via a third-party provider. At the time of the COV we were working to develop a solicitation for a contractor to carry out the work. We were successful in soliciting several bids[i]. After review of the bids, a 1-year contract was awarded to Abt Associates of Cambridge, MA to conduct the study. This organization has extensive experience as a third-party program evaluator for government clients and scientific research programs. The contract began in March 2016 and is well underway. The project team is presently developing the necessary survey instruments and beginning to work with program data. The project is scheduled for completion by February 2017. We look forward to sharing the results with you as we are able.

We plan to post additional updates to DEBrief as project milestones are reached. However, this is also a first heads-up that Abt may reach out to you to by email to participate in the evaluation. Please make sure your FastLane contact info is up to date so you don’t miss it.

In the interim, the COV report also covered the first two years of preliminary proposals and might be of interest.

 

[i] The federal contracting process is a bit like the proposal process but operates under a much more extensive set of regulations.

DEB Numbers: Success Rates by Merit Review Recommendation



We recently received a comment from a panelist (paraphrasing): how likely are good proposals to get funded? We’ve previously discussed differences between the funding rates we report directly to you from panels and the NSF-wide success rate numbers reported on our website.  But the commenter was interested in an even more nuanced question: to what extent do award decisions follow the outcomes of merit review? This is a great topic for a post and, thanks to our Committee of Visitors review last year, we already have the relevant data compiled. (So this is really the perfect data-rich but quick post for panel season.)

To address this question, we need to first define what a “good proposal” is.

In our two-stage annual cycle, each project must pass through review at least twice before being awarded: once as a preliminary proposal, and once as an invited full proposal.

At each stage, review progresses in three steps:

  • Three individual panelists independently read, review, and score each proposal prior to the panel. A single DEB panelist is responsible for reviewing an assigned subset of all proposals at the panel. This is the same for preliminary proposals and full proposals. Full proposals also receive several non-panelist “ad hoc” reviews prior to the panel.
  • The proposal is brought to panel where the panelists discuss the proposal and individual reviews in relation to each other and in the context of the rest of the proposals in the panel to reach a consensus recommendation. This is the same for preliminary proposals and full proposals.
  • The Program Officers managing the program take into consideration the reviews, the recommendations of the panel(s) that assessed the proposal, and their portfolio management responsibilities to arrive at a final recommendation. This is the same for preliminary proposals and full proposals.

In this case, since we are discussing the Program’s actions after peer review, we are defining as “good” anything that received a positive consensus panel recommendation. Initially, the label of “good” will be applied by the preliminary proposal panel. Then, at the full proposal panel it will receive a second label, which may or may not also be “good”. A “good” recommendation for either preliminary or full proposals includes any proposal not placed into the lowest (explicitly negative) rating category. The lowest category usually has the word “not” in it, as in “Do Not Invite” or “Not Fundable”. All other categories are considered “good” recommendations, whether there is a single positive category (e.g., “Invite”) or several ordinal options conveying varying degrees of enthusiasm (e.g., “high priority”, “medium priority”, “low priority”).

To enable this analysis, we traced the individual review scores, panel review recommendations, and outcomes for proposals from the first three years of the DEB preliminary proposal system (i.e., starting with preliminary proposals from January 2012 through full proposals from August 2014).

As we’ve reported previously, preliminary proposal invitation rates are between 20% and 30%, and between 20% and 30% of invited full proposals are funded, leading to end-to-end funding rates around 7%. But, as our commenter noted, that obscures a lot of information and your individual mileage will vary. So…

How likely are “good” proposals to get funded?

In the table below, you can see the overall invitation rate for preliminary proposals is 23%, but it looks very different depending on how well a proposal performed in the panel[i].

Preliminary Proposal Outcomes by Panel Recommendation

Pre-Proposal Panel Rating   % of Proposals Receiving Rating   Not Invited   Invited   Invite Rate
High (Good)                              19%                       22          879        98%
Low (Good)                                5%                      100          141        59%
Do Not Invite                            76%                     3597           74         2%
Total                                   100%                     3719         1094        23%
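The invite rates in the table follow directly from the counts (invited ÷ (invited + not invited)); a quick check in Python reproduces the rounded percentages:

```python
# Counts taken from the preliminary proposal outcome table above.
outcomes = {
    "High (Good)":   (22, 879),    # (not invited, invited)
    "Low (Good)":    (100, 141),
    "Do Not Invite": (3597, 74),
}

invite_rates = {}
for rating, (not_invited, invited) in outcomes.items():
    invite_rates[rating] = invited / (invited + not_invited)

for rating, rate in invite_rates.items():
    print(f"{rating}: {rate:.0%}")   # 98%, 59%, and 2%, matching the table
```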

This stage is a major winnowing of projects. On the one hand, we tend to invite most of what the panels recommend. On the other hand, the majority of preliminary proposals that aren't well rated (and so fall outside our working definition of "good") are highly unlikely to reach the full proposal stage. There is a low (2%) invite rate for proposals that the panels recommended as Do Not Invite. This is a measure of the extent to which program officers disagree with panelists and choose to take a chance on a particular idea or PI, based on their own knowledge of submission history and portfolio balance issues.

From these invitations, the programs receive full proposals. After review, programs award approximately 25% of the full proposals, but again the outcome is strongly influenced by the panel ratings.

Full Proposal Outcomes by Panel Recommendation

Full Proposal Panel Rating   % of Proposals Receiving Rating   Declined   Awarded   Funding Rate
High (Good)                               17%                      30        122         80%
Medium (Good)                             23%                     115         98         46%
Low (Good)                                21%                     165         21         11%
Not Competitive                           39%                     349          7          2%
Total                                    100%                     659        248         27%

Program Officers are faced with a greater responsibility for decision-making at the full proposal stage. Whereas preliminary proposal panels gave the nod (High or Low positive recommendations) to only ~23% of submissions, full proposal panels put 551 of 907 proposals into "fundable" categories (Low, Medium, or High). Since this is more than twice as many as the programs could actually fund,[ii] the work of interpreting individual reviews, panel summaries, and accounting for portfolio balance plays a greater role in making the final cut. Also note that these are the cumulative results of three years of decision-making by four independently managed program clusters, so "divide by 12" to get a sense of how common any result is for a specific program per year.

Ultimately, the full proposal panel rating is the major influence on an individual proposal’s likelihood of funding and the hierarchy of “fundable” bins guides these decisions:

Success rates of DEB full proposals when categorized by preliminary proposal and full proposal panel recommendations.


While funding decisions mostly ignore the preliminary proposal ratings, readers may notice an apparent “bonus” effect in the funding rate for “Do Not Invite” preliminary proposals that wind up in fundable full proposal categories. For example, of 15 preliminary proposals that were rated “Do Not Invite” but were invited and received a “Medium” rating at the full proposal stage, 10 (67%) were funded compared to 45% and 42% funding for Medium-rated full proposals that preliminary proposal panelists rated as High or Low priority, respectively.  However, this is a sample size issue. Overall the numbers of Awarded and Declined full proposals are not associated with the preliminary proposal recommendation (Chi-Square = 2.90, p = 0.235).
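For reference, the test behind that statistic is a Pearson chi-square on a 3 × 2 contingency table (preliminary proposal rating × Awarded/Declined, giving 2 degrees of freedom). The underlying cell counts aren't reproduced in this post, so the sketch below just shows the computation on an arbitrary illustrative table:

```python
def chi_square(observed):
    """Pearson chi-square statistic for a 2-D contingency table (list of rows)."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    return stat

# Illustrative Awarded/Declined counts by preliminary proposal rating
# (hypothetical numbers, not the actual DEB data).
table = [[40, 160],   # High
         [12, 50],    # Low
         [5, 30]]     # Do Not Invite
print(round(chi_square(table), 2))
```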

 

Does Preliminary Proposal rating predict Full Proposal rating?

This is a difficult question to answer since there is nothing solid to compare against.

We don’t have a representative set of non-invited full proposals that we can compare to say “yes, these do fare better, the same as, or worse than the proposals that were rated highly” when it comes to the review ratings. What we do have is the set of “Low” preliminary proposals that were invited, and the small set of “Do Not Invite” preliminary proposals that were invited by the Program Officers against the panel recommendations. However, these groups are confounded by the decision process: these invites were purposely selected because the Program Officers thought they would be competitive at the full proposal stage. They are ideas we thought the panels missed or selected for portfolio balance; therefore, they are not representative of the entire set of preliminary proposals for which the panels recommended Low or Do Not Invite.

Distribution of Full Proposal Panel Ratings versus Preliminary Proposal Ratings

Pre-Proposal Panel Rating   # Recvd as Full Proposals   High   Medium   Low   Not Competitive
High                                  728                19%     24%     20%        37%
Low                                   117                10%     21%     20%        50%
Do Not Invite                          62                 8%     24%     23%        45%

So, even given the active attempts to pick the best proposals out of the "Low" and "Do Not Invite" preliminary proposal categories, proposals invited on the strength of "High" ratings were twice as likely to land in the "High" category at the full proposal stage as those invited from the Low or Do Not Invite categories, and the latter were somewhat more likely to wind up in Not Competitive. Moreover, the score data presented below provide additional evidence that this process is, in fact, selecting the best proposals.

 

What do individual review scores say about the outcomes and different panel ratings?

We expect the full proposal review stage to be a more challenging experience than the preliminary proposal stage because most of the clearly non-competitive proposals have already been screened out. Because of this, full proposals should present a tighter grouping of reviewer scores than preliminary proposals. The distribution of average proposal scores across the two stages is shown below. We converted the “P/F/G/V/E” individual review scores to a numerical scale from P=1 to E=5, with split scores as the average of the two letters (e.g., V/G = 3.5). As a reminder, the individual reviewer scores are sent in prior to the panel, without access to other reviewers’ opinions and having access to a relatively small number of proposals. So the average rating (and spread of individual scores for a proposal) is mostly a starting point for discussion and not the end-result of the review[iii].

Distribution of mean review scores at different points in the DEB core program review process.
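The letter-to-number conversion described above (P=1 through E=5, with split scores averaged) can be sketched as:

```python
LETTER_SCORES = {"P": 1, "F": 2, "G": 3, "V": 4, "E": 5}

def score_to_number(score):
    """Convert a reviewer score such as 'E' or 'V/G' to the 1-5 numeric scale."""
    letters = score.split("/")
    return sum(LETTER_SCORES[letter] for letter in letters) / len(letters)

def mean_score(scores):
    """Average numeric score across a proposal's individual reviews."""
    return sum(score_to_number(s) for s in scores) / len(scores)

print(score_to_number("V/G"))   # 3.5, as in the example in the text
print(mean_score(["E", "V", "V/G"]))
```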

The preliminary proposal scores are distributed across the entire spectrum, with the average review scores for most in the 3 to 4 range (a Good to Very Good rating). That we don’t see much in the way of scores below 2 might suggest pre-selection on the part of applicants or rating inflation by reviewers. Invitations (and high panel ratings) typically go to preliminary proposals with average scores above Very Good (4). Only a few invitations are sent out for proposals between Very Good and Good or lower.

The average scores for full proposals are more evenly distributed than the preliminary proposal scores, with a mean and median around Very Good. The eventual awards draw heavily from the Very Good to Excellent score range and none were lower than an average of Very Good/Good. And, while some full proposals necessarily performed worse than they did at the preliminary proposal stage, there are still roughly twice as many full proposals with average scores above Very Good as the total number of awards made, so there is no dearth of high-performing options for award-making.

So, what scores correspond to different panel ratings?

Average Review Score of Invited Full Proposals by Panel Recommendation

                            Full Proposal Panel Rating
Pre-Proposal Panel Rating   High   Medium   Low    Not Competitive   Overall
High                        4.41    4.08    3.76        3.53          3.88
Low                         4.32    4.13    3.88        3.52          3.81
Do Not Invite               4.42    4.00    3.75        3.44          3.73
Overall                     4.40    4.08    3.78        3.53          3.87

There’s virtually no difference in average full proposal scores among groups of proposals that received different preliminary proposal panel ratings (rows, above). This further supports the notion that full proposals are being assessed without bias from the preliminary proposal outcomes (which are available to full proposal panelists after individual reviews are written). Across the full proposal ratings (columns, above), there is approximately a whole letter-score difference between highly rated full proposals (E/V) and Not Competitive full proposals (V/G). The average score for each rating is distinct.

 

About the Data:

The dataset used in this analysis was originally prepared for the June 2015 DEB Committee of Visitors meeting. We traced the review outcomes of preliminary proposals and subsequent full proposals over the first 3 cycles of proposal review. This dataset included the majority of proposals that have gone through the 2-stage review in DEB, but is not a complete record because preliminary proposal records are only tied to full proposals if this connection is successfully made by the PI at the time of full proposal submission. We discussed some of the difficulties in making this connection on DEBrief in the post titled “DEB Numbers: Per-person success rate in DEB”.

There are 4840 preliminary proposal records in this dataset; 1115 received invitations to submit full proposals. Of those 1115, 928 (83%) submitted full proposals and successfully identified their preliminary proposal. Full proposal records are lacking for the remaining 187 invitees; this is a combination of 1) records missing necessary links and 2) a few dozen invitations that were never used within the window of this analysis. For full proposal calculations, we considered only those proposals that had links and had been processed to a final decision point as of June 2015 (907 records), when the data were captured.

The records followed the lead proposal of collaborative groups/projects in order to maintain a 1 to 1 relationship of all records across preliminary and full proposal stages and avoid counting duplications of review data. The dataset did not include full proposals that were reviewed alongside invited proposals but submitted under other mechanisms that bypass the preliminary proposal stage such as CAREER, OPUS, and RCN.

Data Cleaning: Panel recommendations are not required to conform to a standard format, and the choice of labels, number of options, and exact wording vary from program to program and has changed over time in DEB. To facilitate analysis, the various terms have been matched onto a 4-level scale (High/Medium/Low/Not Invite (or Not Competitive)), which was the widest scale used by any panel in the dataset; any binary values were matched to the top and bottom of the scale. Where a proposal was co-reviewed in 2 or more panels, the most positive panel rating was used for this analysis.
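The cleaning rules described above (map each panel's labels onto the common High/Medium/Low/Not scale, bind binary values to the top and bottom, and keep the most positive rating for co-reviewed proposals) might be sketched as follows; the label variants in the mapping are illustrative assumptions, not DEB's actual vocabulary list:

```python
# Common scale, ordered from most to least positive.
SCALE = ["High", "Medium", "Low", "Not Competitive"]

# Hypothetical panel-specific labels mapped onto the common scale; binary
# panels ("Invite" / "Do Not Invite") bind to the top and bottom of the scale.
LABEL_MAP = {
    "invite": "High",
    "high priority": "High",
    "medium priority": "Medium",
    "low priority": "Low",
    "do not invite": "Not Competitive",
    "not fundable": "Not Competitive",
}

def normalize(label):
    return LABEL_MAP[label.strip().lower()]

def combined_rating(panel_labels):
    """For proposals co-reviewed in 2+ panels, keep the most positive rating."""
    return min((normalize(label) for label in panel_labels), key=SCALE.index)
```

For example, a proposal co-reviewed as "Low Priority" in one panel and "Do Not Invite" in another would be recorded as Low under this rule.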

[i] Cases where a highly recommended preliminary proposal was Not Invited were typically because the project received funding (either we were still waiting on our budget from the prior year and the PI re-submitted, or the same work was picked up by another funding source). So, the effective invite rate for "high priority" recommendations is ~100%. The middle "Low" priority rating was used in only a limited set of preproposal panels in the first years of preproposals; at this point, all DEB preproposal panels use two-level "Invite or Do Not Invite" recommendations.

[ii] 248 is less than what we actually funded from the full proposal panels: when CAREER, OPUS, RCN, and proposals that were not correctly linked to preproposal data are accounted for, we’re a bit over 300 core program projects awarded in FYs 2013, 2014 and 2015: 100 new projects/year.

[iii] If the program were to be purely conservative and follow the scoring exactly in making award decisions, there would have been no awards with an average score below 4.2 (Very Good+) and even then half of the proposals that averaged Very Good (4) or better would go unfunded.