We compare judgments of green turtle (Chelonia mydas) captures elicited from local gillnet skippers and not-for-profit conservation organization employees operating in a small-scale fishery in Peru against capture rates calculated from a voluntary at-sea observer program in the same fishery. To reduce cognitive biases and more accurately quantify uncertainty in our experts’ judgments, we followed the IDEA (“Investigate,” “Discuss,” “Estimate,” and “Aggregate”) structured elicitation protocol. The elicited mean monthly estimates of green turtle gillnet captures within the summer and winter fishing seasons were higher than the equivalent green turtle capture rates calculated from the fisheries observer data; however, no statistically significant differences were identified when comparing the means of the datasets using bootstrap hypothesis tests (winter observed difference-in-means: 83.15, adjusted mean ± SD = 42.39 ± 32.59; summer observed difference-in-means: 68.58, adjusted mean ± SD = 54.06 ± 41.22). We then investigated respondent performance relative to the observer-derived capture rates. The not-for-profit employees scored highly on accuracy and calibration performance metrics. The gillnet skippers’ judgments ranked higher on informativeness yet lower on accuracy and calibration, potentially reflecting overconfidence. This research presents a new context for using the IDEA protocol, which may prove helpful for rapid, exploratory evaluations of capture and bycatch impact in data-limited small-scale fishery management scenarios.