When thinking about the certainty of evidence, guideline groups typically consider their certainty or confidence in the evidence for benefits and harms. Consider a group making a recommendation that has been presented with the benefits and harms of an intervention from a systematic review of the literature. Compared with no drug, drug X increases the number of people cured by 10 in 100, but also increases the risk of stroke by 5 in 100. The evidence contributing to the estimate of cures is very different from the evidence contributing to the estimate of strokes, so the certainty of the evidence differs: there is very low certainty that 10 more cures may occur, but high certainty that 5 more strokes could occur. Given this pattern of certainty, a guideline group may recommend against the drug to avoid the 5 additional strokes. In contrast, if the certainty were reversed, that is, high certainty of 10 more cures and very low certainty of 5 more strokes, the group may decide to suggest the drug as treatment because they are very uncertain about the increase in strokes. Assessing and presenting the certainty of evidence for benefits and harms is therefore important, and various systems do this, such as the GRADE approach (see the GRADE Handbook).

These systems can also be used to assess the evidence for patient views. If a guideline group is conducting a systematic review of research on patient views (using the rigorous methods provided in table 2), the group should also convey the certainty of the results about those views. Consider a guideline group deciding whether to recommend a procedure that prevents recurrence of precancerous cervical lesions but carries a risk of infertility. Research evidence could be gathered about the value that couples place on fertility.
If the evidence is certain that women who are trying to conceive place a very high value on avoiding infertility compared with preventing recurrence of a precancerous lesion, more so than women not trying to conceive, the guideline group may make a recommendation against the procedure for women trying to conceive, but a recommendation for the procedure in women not trying to conceive. In contrast, if the research evidence about those values is very uncertain, the guideline group may make the same recommendation for both groups of women. In this way, the certainty of the evidence can affect the recommendations that are made, and it is therefore important to assess the certainty of the research evidence about patient views.

One component of assessing the certainty of evidence is judging the quality or limitations of the studies. For individual qualitative studies, there is no agreement on the best tool to use, but 2 have been more widely used: the CASP qualitative studies checklist and an adapted version of the CASP tool (Atkins et al. 2008). These tools continue to be developed as methods progress and as debate persists about the impact of assessment criteria, such as ethics approval, on the validity of a study. For now, either tool could be used to assess the limitations of each study that contributes information on patient views. However, assessing the limitations of studies is only 1 part of the overall assessment of evidence. Other factors also need to be considered when evaluating the certainty of the evidence, and these factors depend on the study design contributing to the evidence. To assess and present confidence in the evidence from a review of qualitative research studies, reviewers may use the GRADE-CERQual approach.
GRADE-CERQual asks groups to assess 4 domains: the quality or limitations of the studies; whether the results from the studies are directly relevant to the recommendation question; whether the results are coherent across the studies; and whether the data from the studies are sufficiently rich or adequate. Together, consideration of these domains determines the confidence in the conclusions from a review of qualitative research about patient views. For example, a systematic review of qualitative research was conducted to synthesise evidence about parents’ and informal caregivers’ views and experiences of how information about routine childhood vaccination is communicated (Ames, Glenton, and Lewin 2017). The authors found that scientific sources of vaccine information were seen as more reliable than discussion forums or lay opinions. The review authors then assessed the certainty of the evidence using GRADE-CERQual. They had minor concerns about limitations in the studies and no concerns about the coherence of the results across studies, but moderate concerns about whether the settings of the original studies were directly applicable to their question, and about the richness of the data. They therefore had low confidence in the finding that scientific sources were seen as more reliable than discussion forums or lay opinions. Details about how to assess confidence in qualitative research findings using the GRADE-CERQual approach can be found in a series of papers, each addressing how to assess 1 domain (Lewin et al. 2018).

To assess the certainty of evidence specific to the importance of health outcomes, a new method has been developed (Zhang et al. 2019a, Zhang et al. 2019b). The method is based on the GRADE approach, in which evidence for patient values is assessed using the domains of risk of bias, inconsistency, indirectness, imprecision, publication bias, and others. Details are provided in the articles published by Zhang et al.
(2019a, 2019b), but the concept for each domain is similar to what would be applied to a review of studies evaluating the benefits and harms of an intervention. Of note is the consideration of inconsistency across study results. When research shows that values are variable, further exploration, for example by subgroups, is recommended to determine whether there are true differences in how people value a health outcome. Differences in values would likely influence whether different recommendations are made for 1 group compared with another based on what they value most, or whether there should be a conditional rather than a strong recommendation (a conditional recommendation requires shared decision making).

For evidence about patient views from a synthesis of studies such as randomised controlled trials or non-randomised studies, GRADE or other systems for assessing the certainty of evidence from these study designs should be used. For example, consider a review of randomised controlled trials reporting the acceptability of 1 procedure compared with another. In this hypothetical review, the difference in acceptability between the 2 procedures would be calculated from each study, and the study-level differences pooled to create 1 overall estimate of the difference. To express the certainty in such an estimated difference, groups should assess the risk of bias of all the studies providing data, the number of participants providing data, the width of the confidence interval around the difference, the heterogeneity of the overall difference, the applicability of the evidence, and the risk of publication bias. Based on this assessment, the guideline group will know how certain to be about the difference estimated from the review of studies.
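The pooling step in the hypothetical acceptability review can be sketched numerically. The sketch below uses fixed-effect inverse-variance pooling of study-level risk differences, one common approach among several; all study numbers are invented for illustration and are not drawn from any real review.

```python
import math

# Hypothetical study-level data (invented for illustration): each study
# reports how many participants found each procedure acceptable.
studies = [
    # (acceptable_a, n_a, acceptable_b, n_b)
    (80, 100, 70, 100),
    (45, 60, 40, 60),
    (150, 200, 130, 200),
]

def risk_difference(a, n_a, b, n_b):
    """Risk difference in acceptability and its variance for one study."""
    p_a, p_b = a / n_a, b / n_b
    rd = p_a - p_b
    var = p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b
    return rd, var

# Fixed-effect inverse-variance pooling: weight each study by 1 / variance,
# so larger, more precise studies contribute more to the overall estimate.
weights, weighted_rds = [], []
for a, n_a, b, n_b in studies:
    rd, var = risk_difference(a, n_a, b, n_b)
    w = 1 / var
    weights.append(w)
    weighted_rds.append(w * rd)

pooled_rd = sum(weighted_rds) / sum(weights)
se = math.sqrt(1 / sum(weights))  # standard error of the pooled estimate
ci = (pooled_rd - 1.96 * se, pooled_rd + 1.96 * se)  # 95% confidence interval

print(f"Pooled risk difference in acceptability: {pooled_rd:.3f}")
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

The width of the resulting confidence interval feeds directly into the imprecision judgment described above: a wide interval that spans both trivial and important differences would lower certainty in the estimate.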
Finally, in special circumstances when a guideline group is not using a standard approach to assess the evidence, there should be some description of how believable the overall conclusions about the patient views are, and why they are believable. The following principles should be considered and communicated: whether the individual studies were well done; how many studies (or participants) were included; how relevant the studies are to the recommendation topic; and how consistent or coherent the results are across the studies.