TUTCRIS - Tampere University of Technology

Panel discussion does not improve reliability of peer review for medical research grant proposals

Research output: peer-reviewed

Standard

Panel discussion does not improve reliability of peer review for medical research grant proposals. / Fogelholm, Mikael; Leppinen, Saara; Auvinen, Anssi; Raitanen, Jani; Nuutinen, Anu; Väänänen, Kalervo.

In: JOURNAL OF CLINICAL EPIDEMIOLOGY, Vol. 65, No. 1, 01.2012, p. 47-52.

Harvard

Fogelholm, M, Leppinen, S, Auvinen, A, Raitanen, J, Nuutinen, A & Väänänen, K 2012, 'Panel discussion does not improve reliability of peer review for medical research grant proposals', JOURNAL OF CLINICAL EPIDEMIOLOGY, vol. 65, no. 1, pp. 47-52. https://doi.org/10.1016/j.jclinepi.2011.05.001

APA

Fogelholm, M., Leppinen, S., Auvinen, A., Raitanen, J., Nuutinen, A., & Väänänen, K. (2012). Panel discussion does not improve reliability of peer review for medical research grant proposals. JOURNAL OF CLINICAL EPIDEMIOLOGY, 65(1), 47-52. https://doi.org/10.1016/j.jclinepi.2011.05.001

Vancouver

Fogelholm M, Leppinen S, Auvinen A, Raitanen J, Nuutinen A, Väänänen K. Panel discussion does not improve reliability of peer review for medical research grant proposals. JOURNAL OF CLINICAL EPIDEMIOLOGY. 2012 Jan;65(1):47-52. https://doi.org/10.1016/j.jclinepi.2011.05.001

Author

Fogelholm, Mikael ; Leppinen, Saara ; Auvinen, Anssi ; Raitanen, Jani ; Nuutinen, Anu ; Väänänen, Kalervo. / Panel discussion does not improve reliability of peer review for medical research grant proposals. In: JOURNAL OF CLINICAL EPIDEMIOLOGY. 2012 ; Vol. 65, No. 1. pp. 47-52.

BibTeX - Download

@article{ae1af600e822446ba18638a903089763,
title = "Panel discussion does not improve reliability of peer review for medical research grant proposals",
abstract = "Objective: Peer review is the gold standard for evaluating scientific quality. Compared with studies on inter-reviewer variability, research on panel evaluation is scarce. To appraise the reliability of panel evaluations in grant review, we compared scores by two expert panels reviewing the same grant proposals. Our main interest was to evaluate whether panel discussion improves reliability. Methods: Thirty reviewers were randomly allocated to one of the two panels. Sixty-five grant proposals in the fields of clinical medicine and epidemiology were reviewed by both panels. All reviewers received 5-12 proposals. Each proposal was evaluated by two reviewers, using a six-point scale. The reliability of reviewer and panel scores was evaluated using Cohen's kappa with linear weighting. In addition, reliability was also evaluated for the panel mean scores (mean of reviewer scores was used as panel score). Results: The proportion of large differences (at least two points) was 40{\%} for reviewers in panel A, 36{\%} for reviewers in panel B, 26{\%} for the panel discussion scores, and 14{\%} when the means of the two reviewer scores were used. The kappa for panel score after discussion was 0.23 (95{\%} confidence interval: 0.08, 0.39). By using the mean of the reviewer scores, the panel coefficient was similarly 0.23 (0.00, 0.46). Conclusion: The reliability between panel scores was higher than between reviewer scores. The similar interpanel reliability, when using the final panel score or the mean value of reviewer scores, indicates that panel discussions per se did not improve the reliability of the evaluation.",
keywords = "Consistency, Funding, Inter-reviewer reliability, Interpanel reliability, Peer review, Quality assurance",
author = "Mikael Fogelholm and Saara Leppinen and Anssi Auvinen and Jani Raitanen and Anu Nuutinen and Kalervo V{\"a}{\"a}n{\"a}nen",
year = "2012",
month = "1",
doi = "10.1016/j.jclinepi.2011.05.001",
language = "English",
volume = "65",
pages = "47--52",
journal = "JOURNAL OF CLINICAL EPIDEMIOLOGY",
issn = "0895-4356",
publisher = "Elsevier",
number = "1",
}
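The abstract above reports agreement measured with Cohen's kappa using linear weighting on a six-point scale. As a minimal illustrative sketch of that statistic (not the authors' actual analysis code; function and variable names are hypothetical), linearly weighted kappa for two raters can be computed as:

```python
def linear_weighted_kappa(ratings_a, ratings_b, k):
    """Linearly weighted Cohen's kappa for two raters on a k-point scale.

    ratings_a, ratings_b: equal-length sequences of integer scores in 0..k-1.
    Assumes the marginal distributions are not both concentrated on a
    single category (otherwise expected disagreement is zero).
    """
    n = len(ratings_a)
    # Observed joint proportion matrix O[i][j].
    obs_matrix = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs_matrix[a][b] += 1.0 / n
    # Marginal distributions for each rater.
    row = [sum(obs_matrix[i]) for i in range(k)]
    col = [sum(obs_matrix[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weights: w[i][j] = |i - j| / (k - 1).
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    observed = sum(w[i][j] * obs_matrix[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * row[i] * col[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected
```

Perfect agreement yields kappa = 1; values near the paper's reported 0.23 indicate only slight-to-fair agreement beyond chance.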

RIS (suitable for import to EndNote) - Download

TY - JOUR

T1 - Panel discussion does not improve reliability of peer review for medical research grant proposals

AU - Fogelholm, Mikael

AU - Leppinen, Saara

AU - Auvinen, Anssi

AU - Raitanen, Jani

AU - Nuutinen, Anu

AU - Väänänen, Kalervo

PY - 2012/1

Y1 - 2012/1

N2 - Objective: Peer review is the gold standard for evaluating scientific quality. Compared with studies on inter-reviewer variability, research on panel evaluation is scarce. To appraise the reliability of panel evaluations in grant review, we compared scores by two expert panels reviewing the same grant proposals. Our main interest was to evaluate whether panel discussion improves reliability. Methods: Thirty reviewers were randomly allocated to one of the two panels. Sixty-five grant proposals in the fields of clinical medicine and epidemiology were reviewed by both panels. All reviewers received 5-12 proposals. Each proposal was evaluated by two reviewers, using a six-point scale. The reliability of reviewer and panel scores was evaluated using Cohen's kappa with linear weighting. In addition, reliability was also evaluated for the panel mean scores (mean of reviewer scores was used as panel score). Results: The proportion of large differences (at least two points) was 40% for reviewers in panel A, 36% for reviewers in panel B, 26% for the panel discussion scores, and 14% when the means of the two reviewer scores were used. The kappa for panel score after discussion was 0.23 (95% confidence interval: 0.08, 0.39). By using the mean of the reviewer scores, the panel coefficient was similarly 0.23 (0.00, 0.46). Conclusion: The reliability between panel scores was higher than between reviewer scores. The similar interpanel reliability, when using the final panel score or the mean value of reviewer scores, indicates that panel discussions per se did not improve the reliability of the evaluation.

AB - Objective: Peer review is the gold standard for evaluating scientific quality. Compared with studies on inter-reviewer variability, research on panel evaluation is scarce. To appraise the reliability of panel evaluations in grant review, we compared scores by two expert panels reviewing the same grant proposals. Our main interest was to evaluate whether panel discussion improves reliability. Methods: Thirty reviewers were randomly allocated to one of the two panels. Sixty-five grant proposals in the fields of clinical medicine and epidemiology were reviewed by both panels. All reviewers received 5-12 proposals. Each proposal was evaluated by two reviewers, using a six-point scale. The reliability of reviewer and panel scores was evaluated using Cohen's kappa with linear weighting. In addition, reliability was also evaluated for the panel mean scores (mean of reviewer scores was used as panel score). Results: The proportion of large differences (at least two points) was 40% for reviewers in panel A, 36% for reviewers in panel B, 26% for the panel discussion scores, and 14% when the means of the two reviewer scores were used. The kappa for panel score after discussion was 0.23 (95% confidence interval: 0.08, 0.39). By using the mean of the reviewer scores, the panel coefficient was similarly 0.23 (0.00, 0.46). Conclusion: The reliability between panel scores was higher than between reviewer scores. The similar interpanel reliability, when using the final panel score or the mean value of reviewer scores, indicates that panel discussions per se did not improve the reliability of the evaluation.

KW - Consistency

KW - Funding

KW - Inter-reviewer reliability

KW - Interpanel reliability

KW - Peer review

KW - Quality assurance

UR - http://www.scopus.com/inward/record.url?scp=82255162872&partnerID=8YFLogxK

U2 - 10.1016/j.jclinepi.2011.05.001

DO - 10.1016/j.jclinepi.2011.05.001

M3 - Review Article

VL - 65

SP - 47

EP - 52

JO - JOURNAL OF CLINICAL EPIDEMIOLOGY

JF - JOURNAL OF CLINICAL EPIDEMIOLOGY

SN - 0895-4356

IS - 1

ER -