
Systematic review and validity assessment of methods used in discrete choice experiments of primary healthcare professionals

Abstract

Introduction

Discrete choice experiments (DCEs) have been used to measure patient and healthcare professionals' preferences in a range of settings internationally. Using DCEs in primary care is valuable for determining how to improve rational shared decision making. The purpose of this systematic review is to assess the validity of the methods used in DCEs assessing the decision making of healthcare professionals in primary care.

Main body

A systematic search was conducted to identify articles with original data from a discrete choice experiment where the population was primary healthcare professionals. All publication dates from database inception to 29th February 2020 were included. A data extraction and validity assessment template based on guidelines was used. After screening, 34 studies met the eligibility criteria and were included in the systematic review. The sample sizes of the DCEs ranged from 10 to 3727. The published DCEs often provided insufficient detail about the process of determining the attributes and levels. The majority of the studies did not involve primary care healthcare professionals outside of the research team in attribute identification and selection. Less than 80% of the DCEs were piloted and few papers investigated internal or external validity.

Conclusions

For findings to translate into improvements in rational shared decision making in primary care, DCEs need to be internally and externally valid, and the findings need to be communicated to stakeholders in a way that is understandable and relevant.

Introduction

Discrete choice experiments (DCEs) have been widely used in economics and marketing to assess how much people value the attributes of a good or service [1]. The method is based on Lancaster’s theory, which holds that a good or service can be described by its attributes and that a person’s preferences for a good or service depend on their preferences for those attributes. For example, a person’s preference for a house will depend on the attributes of that house—e.g. location, number of rooms, or the condition of the interior. A discrete choice experiment is conducted to measure preferences for these attributes, particularly when they cannot be measured through revealed behaviour.

A DCE consists of choice tasks, where a respondent is asked, as a hypothetical scenario, to choose between two or more discrete alternatives (e.g. House A or House B) as the attributes that define those alternatives are varied. The choices made are then analysed to measure how much the respondents value these attributes. The advantage of using a DCE over other preference elicitation techniques is that it forces respondents to make explicit trade-offs.
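The choice process described above is usually modelled with a linear-additive utility function and logit choice probabilities. The sketch below uses the house example with purely illustrative part-worth values (none of the attribute names or numbers come from any study) to show how attribute weights translate into the probability of choosing one alternative over another:

```python
import math

# Illustrative part-worth utilities (betas) for three house attributes;
# the attribute names and values are hypothetical, not from any study.
BETA = {"rooms": 0.4, "good_location": 1.2, "cost_per_100k": -0.8}

def utility(alternative):
    """Deterministic utility V = sum of beta_k * x_k (linear-additive)."""
    return sum(BETA[k] * x for k, x in alternative.items())

def choice_probabilities(alternatives):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    vs = [utility(a) for a in alternatives]
    m = max(vs)  # subtract the max before exponentiating, for stability
    exps = [math.exp(v - m) for v in vs]
    total = sum(exps)
    return [e / total for e in exps]

house_a = {"rooms": 3, "good_location": 1, "cost_per_100k": 4.0}
house_b = {"rooms": 4, "good_location": 0, "cost_per_100k": 3.5}
p_a, p_b = choice_probabilities([house_a, house_b])
# house_a has the higher utility here, so p_a > p_b
```

Estimation works in reverse: observed choices are used to infer the part-worth values (the betas), which the studies in this review did with conditional, multinomial, or mixed logit models.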

DCE methodology has evolved over time. There has been increased sophistication in use of statistical methods and experimental designs, such as measurement of interaction effects and use of Bayesian efficient designs [2, 3]. There has been increased emphasis on using valid research methods to inform choice task design in terms of defining levels, attributes, and scenarios [4]. Greater methodological sophistication has coincided with the development of guidelines for conducting and reporting economic evaluations [5, 6] and a focus on improving the internal and external validity of DCEs [2, 7].

Over the past decades DCEs have been used to measure patient and healthcare provider preferences in a wide range of settings internationally [2]. DCEs have been used in health technology assessment, workforce planning, and broader policy making [1, 8]. Systematic reviews of DCEs have measured healthcare professionals’ preferences for where they would like to work [3], patient and physician preferences for specific conditions or treatments [9, 10], disease specific quality of life measurements [11], decision making preferences for specific countries or geographical areas [12, 13], and methodological issues such as development of attributes [14] or assessing external validation [7].

DCEs are a valuable tool for determining how to improve rational shared decision making in primary care. Seemingly minor decisions made in primary care can have large and long-term effects on a person’s health and their experience with the healthcare system. These decisions involve trade-offs—choosing a treatment may involve a trade-off between effectiveness, adverse effects, convenience, and cost. A role of the physician in shared decision making is to help patients navigate these trade-offs. However, both the patient and clinician are subject to biases that may result in non-rational decision making [15]. The advantage of a DCE is that it asks individuals to make explicit trade-offs that researchers would not otherwise be able to observe—providing a better understanding of the factors that influence the decision making of health professionals.

A previously published systematic review demonstrated the role of DCEs in understanding patient preferences in primary care [16]. The review revealed a diversity of healthcare decisions where DCEs have been used to measure patient preferences and describe common factors that influence those decisions. The review also identified methodological challenges with conducting DCEs in primary care. However, no systematic review has investigated the use of DCEs to explore the decisions of primary care healthcare professionals.

The purpose of our systematic review is to assess the methods used in DCEs that assess the decision making of healthcare professionals in primary care [17], and the validity of the reported studies.

Methods

Studies on the use of DCEs of primary healthcare professionals were identified, selected, and analysed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [18]. A systematic search strategy (Supplement 1) was used to search Embase, CINAHL, and ECONLIT (30th July, 2019). Citations identified through database searching were imported into Endnote and duplicates were removed. Studies were screened for eligibility based on the title and abstract, followed by full-text screening by two researchers.

To be included the studies had to report original data from a DCE. The participants had to include primary healthcare professionals or general practitioners. Studies that did not report outcomes specific for primary healthcare professionals or general practitioners were excluded. All publication dates from database inception to 29th February 2020 were included. Non-English language articles were excluded. Articles were excluded if they were conference abstracts or did not report original research, such as letters to the editor, editorials, and opinion articles.

Studies meeting the review eligibility criteria after full-text screening were included in the review. Two researchers extracted, reviewed, and summarised data according to a data extraction template, which included a validity assessment checklist. A third researcher adjudicated in cases of differences or discrepancies. The findings were summarised using a descriptive synthesis. Articles were grouped by areas of application of the DCEs. The data extraction template and validity assessment checklist were adapted from Mandeville et al. [3].

Results

The search of the databases identified 701 records. After screening, 34 studies met the eligibility criteria and were included in the systematic review (see Fig. 1). The included DCEs were published between 2000 and 2020; the majority (82.4%) were published between 2010 and 2020. For a full description of the studies, including the subject of the DCE, populations, and attributes assessed, see Supplement 2.

Fig. 1

PRISMA flow diagram of study selection [18]

Data collection

Table 1 presents the sample and survey administration details for the included studies. More than half (N = 21, 61.8%) of the DCEs included only primary healthcare professionals. Of the studies that included other populations as well as health professionals, five (14.7%) included patients, ten (29.4%) included other health professionals such as specialists, and one (2.9%) included policymakers. Twenty (58.8%) of the DCEs were administered as online surveys, nine (26.5%) were postal surveys, and two (5.9%) were administered in face-to-face interviews. The sample sizes of the DCEs ranged from 10 to 3727, with a median of 294. Twenty-two (64.7%) of the studies reported a response rate. The median response rate was 42%. In over 90% of the included studies the target population was appropriate and the sampling frame was representative of the target population. In the majority (73.5%) of the studies piloting of the DCE instrument was undertaken before the survey was conducted.

Table 1 Data collection for included DCEs

Purpose of DCE

Fourteen of the studies assessed healthcare professional decision making about therapy and disease management [19,20,21,22,23,24,25,26,27,28,29,30,31,32], twelve assessed preferences for organisation characteristics of primary care practices [33,34,35,36,37,38,39,40,41,42,43,44], six assessed the relative influence of implementation and knowledge translation strategies [45,46,47,48,49,50], and two assessed preferences for information and communication technologies [51, 52].

Choice task design

The most common methods for identifying and labelling attributes and levels were literature review (96.9%), followed by interviews and expert opinion (62.5%), and focus groups (25.0%). Twenty-three (67.6%) of the studies used more than one method to identify and label the attributes and levels. The number of attributes ranged from four to ten, with 70.6% having between five and seven attributes. For the majority of the studies there was no conceptual overlap between attributes (82.4%) and the attributes were uni-dimensional (76.5%), as recommended by Lancsar and Louviere [6].

The most common attribute used in the DCEs assessing physician preferences for therapy and disease management was cost [19, 20, 22, 28, 29, 31], including both cost of the intervention [20, 22, 28, 29, 31] and direct cost to the patient [19, 20, 29, 31]. Other frequently used attributes included patient preferences [19,20,21, 31, 38, 53], effectiveness of the intervention [19, 20, 29,30,31], the health professional’s experience with either the patient or the intervention [20, 23, 29, 31, 53], and disease characteristics such as level of functional impairment or duration [21, 24, 25, 32, 53].

Within the studies assessing physician choice of where to work the most frequently used attributes were income [34, 36, 37, 39,40,41, 43, 44], working hours [34,35,36, 39, 41, 44], opportunities for professional development [34,35,36, 41, 43, 44], size of practice [34, 35, 39, 40, 42], and characteristics of the patient population [33, 34, 41, 43], such as list size and level of social disadvantage in the community.

For the DCEs exploring implementation and knowledge translation strategies, the most frequently used attributes were those related to mode and design of knowledge translation [45,46,47,48,49], the presence of financial incentives [45, 47, 49, 50], and feedback [45, 47, 48].

Generic choice sets, where the alternatives are unlabelled, were more common (n = 25; 73.5%) than labelled choice sets (n = 9; 26.5%), where the labels provide information about the alternatives.

Most of the DCE instruments did not provide an alternative option (n = 25; 73.5%) and instead presented a forced choice, where one of the discrete choice options had to be chosen. For those that did provide an alternative option, the alternative was either to remain in the status quo (n = 2; 5.9%), to opt out and pick neither alternative (n = 3; 8.8%), or to remain undecided (n = 3; 8.8%). For example, a DCE of health professionals’ choice of practice included a status quo option allowing respondents to continue working at their current practice rather than choose either of the two alternative GP practices described in the choice task.

Experimental design

Most of the studies used an experimental design to limit the number of choices respondents had to make, although one study used a full factorial design (see Table 2). Nineteen (55.9%) of the studies used fractional factorial designs, eleven (32.4%) used efficient designs, and three (8.8%) did not report the type of design. The majority (79.4%) of the studies used blocking, where each respondent is randomised to answer only a subset of the choice tasks [54].

Table 2 Experimental design
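The trade-off between a full factorial design and a fractional design with blocking can be illustrated with a small sketch. The attribute set is hypothetical, and the random fraction is illustration only; published DCEs typically use orthogonal or D-efficient fractional designs rather than random sampling:

```python
import itertools
import random

# Hypothetical attribute set: 4 attributes with 3 levels each (coded 0-2).
levels = {
    "cost": [0, 1, 2],
    "effectiveness": [0, 1, 2],
    "side_effects": [0, 1, 2],
    "convenience": [0, 1, 2],
}

# Full factorial: every combination of levels -> 3**4 = 81 profiles,
# far too many to pair into choice tasks for a single respondent.
full_factorial = list(itertools.product(*levels.values()))

# Naive fractional design: sample a manageable subset of profiles.
# (Illustration only; real DCEs use orthogonal or D-efficient designs.)
rng = random.Random(0)
fraction = rng.sample(full_factorial, 18)

# Pair profiles into 9 choice tasks, then block into 3 subsets of 3 tasks
# so each respondent is randomised to answer only one block.
tasks = [(fraction[i], fraction[i + 1]) for i in range(0, 18, 2)]
blocks = [tasks[i::3] for i in range(3)]
```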

Analysis procedure and statistical tests

The most common estimation model used for statistical analysis was the mixed logit (N = 15, 44.1%), followed by the conditional or multinomial logit model (N = 13, 38.2%; see Table 3). Twenty (58.8%) of the studies explicitly discussed goodness of fit. Level coding was discussed in 21 (61.8%) of the studies—of those, 9 used effects coding and 12 used dummy coding. Interaction effects were measured in 17 (50.0%) of the studies, and 19 (55.9%) reported a measure of marginal rate of substitution, such as willingness to pay, willingness to wait, or willingness to accept a social cost. Sociodemographics and other covariates were included in the analysis for 13 (38.2%) of the studies. Of the 208 attributes assessed for impact on respondent choice in the DCEs, 182 (87.5%) were significant. None of the attributes were dominant, that is, so highly preferred that an individual is not willing to make trade-offs with other attributes [55].

Table 3 Analysis procedure and statistical tests
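A marginal rate of substitution such as willingness to pay is computed from the estimated coefficients as the negative ratio of an attribute's coefficient to the cost coefficient. A minimal sketch with hypothetical coefficient values (not taken from any included study):

```python
# Hypothetical coefficients from a fitted conditional logit model;
# the attribute names and magnitudes are illustrative only.
beta = {"effectiveness": 0.90, "side_effects": -0.45, "cost": -0.03}

def willingness_to_pay(attribute, coefs):
    """Marginal rate of substitution with respect to cost:
    WTP = -beta_attribute / beta_cost (cost coefficient is negative)."""
    return -coefs[attribute] / coefs["cost"]

wtp_effectiveness = willingness_to_pay("effectiveness", beta)  # about 30
wtp_side_effects = willingness_to_pay("side_effects", beta)    # about -15
```

A positive WTP means respondents would pay that many cost units for one unit more of the attribute; a negative WTP means they would need to be compensated.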

Ten (29.4%) studies explicitly investigated internal or external validity. Internal validity checks used in these studies included using a dominant choice task to assess internal consistency [51], detecting repetitive patterns or straight-lining in response patterns [25], detecting rushing [25], including a duplicate question [26, 27, 44], tests of transitivity [26, 35], follow-up questions [45], and reviewing consistency of answers [19, 35, 50]. Studies assessed external validity by comparing the DCE findings with the predicted direction of preferences [19, 37], for instance a preference for cheaper or more effective interventions [24], or with findings from previously published surveys or clinical trials [25].
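A dominant choice task, one of the internal consistency checks mentioned above, pairs an alternative that is at least as good on every attribute against one it strictly beats; a respondent who chooses the dominated alternative fails the check. A minimal sketch with hypothetical attributes and values:

```python
def dominates(a, b, higher_is_better):
    """True if alternative a is at least as good as b on every attribute
    and strictly better on at least one, given each attribute's direction."""
    at_least_as_good = all(
        (a[k] >= b[k]) if higher_is_better[k] else (a[k] <= b[k])
        for k in a
    )
    strictly_better = any(
        (a[k] > b[k]) if higher_is_better[k] else (a[k] < b[k])
        for k in a
    )
    return at_least_as_good and strictly_better

# Hypothetical two-attribute task: option_a is more effective AND cheaper.
direction = {"effectiveness": True, "cost": False}
option_a = {"effectiveness": 0.9, "cost": 10}
option_b = {"effectiveness": 0.6, "cost": 20}
# A respondent who chooses option_b in this task fails the consistency check.
```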

Validity assessment

Validity assessment is presented in Table 4. The majority of studies did not provide a justification for using forced choice (70.6%). The majority of studies did not report that the experimental design was optimal or statistically efficient (67.6%). Most studies met the other validity criteria; however, only two studies met all of the criteria [40, 49].

Table 4 Validity assessment of included studies [3]

Discussion

This is the first comprehensive review of DCE studies of primary healthcare professionals. Primary care is a complex and diverse environment that involves many decisions that are important to patients. The decisions made in primary care are highly individualised to the relationship between the patient and the primary care health professional. Therefore, findings from DCEs conducted in large institutions such as hospitals, or aimed at the level of policymakers, will not be generalizable to primary care [6, 16].

We found only 34 studies employing DCEs to investigate healthcare professional decision making, most published in the last 10 years. For DCEs to be useful in helping us better understand and improve decision making in primary care, they need to employ robust methods. However, we identified important limitations in the methods used in the included DCEs.

The findings from our validity assessment mirrored those of Mandeville et al. [3]: many studies did not provide a justification for using forced choice, many did not describe the development of an optimal or statistically efficient experimental design, and several did not describe using qualitative research with the target population to develop the attributes and levels. There was little consistency between studies in the methods used to assess internal consistency—some studies used additional questions, others pre-specified the expected preference direction in order to assess face validity, and others tested responses for dominance or repetitive patterns.

The majority of the studies did not involve primary healthcare professionals outside of the research team in attribute identification and selection, which may jeopardise the relevance of the results if their involvement would have changed the attributes selected. A substantial proportion of the studies did not report piloting the DCE with the target population. The DCEs were mostly conducted as surveys rather than face-to-face interviews, which is appropriate for this population, as healthcare professionals are less likely than other potential population groups to require additional assistance to understand the survey.

Only 13 (38.2%) of the studies incorporated level of professional experience in the analysis of the DCE responses. This is a particular challenge in interpreting the results of studies about workforce planning, as early-career and more established health professionals may have different preferences for workplace characteristics, including level of ongoing training, capacity to move to a rural location, or salary expectations.

Our review was purposefully limited in scope. Only studies in primary care and general practice were included. Therefore, similar studies that did not specifically focus on these areas were excluded. Additionally, only studies published in English were included, which means potentially relevant information published in other languages could have been missed.

DCEs have the potential to inform primary care policy and practice; however, care is required to ensure that the DCEs are addressing research questions relevant to primary care practice, using appropriate methods, and have an emphasis on translating knowledge into practice. For findings to translate into improvements in rational shared decision making in primary care DCEs need to be internally and externally valid and the findings need to be able to be communicated to stakeholders in a way that is understandable and relevant. Better adherence to methodological reporting standards will be a good starting point.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Abbreviations

DCE:

Discrete choice experiment

ISPOR:

International Society for Pharmacoeconomics and Outcomes Research

PRISMA:

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

References

  1. Ryan M. Discrete choice experiments in health care. BMJ. 2004;328(7436):360–1.


  2. Clark MD, Determann D, Petrou S, Moro D, de Bekker-Grob EW. Discrete choice experiments in health economics: a review of the literature. PharmacoEconomics. 2014;32(9):883–902.


  3. Mandeville KL, Lagarde M, Hanson K. The use of discrete choice experiments to inform health workforce policy: a systematic review. BMC Health Serv Res. 2014;14:367.


  4. Vass C, Rigby D, Payne K. The role of qualitative research methods in discrete choice experiments. Med Decision Mak. 2017;37(3):298–313.


  5. Bridges JF, Hauber AB, Marshall D, Lloyd A, Prosser LA, Regier DA, Johnson FR, Mauskopf J. Conjoint analysis applications in health—a checklist: a report of the ISPOR good research practices for conjoint analysis task force. Value Health. 2011;14(4):403–13.


  6. Lancsar E, Louviere J. Conducting discrete choice experiments to inform healthcare decision making. Pharmacoeconomics. 2008;26(8):661–77.


  7. Quaife M, Terris-Prestholt F, Di Tanna GL, Vickerman P. How well do discrete choice experiments predict health choices? A systematic review and meta-analysis of external validity. Eur J Health Econ. 2018;19(8):1053–66.


  8. Ryan M, Gerard K, Amaya-Amaya M. Discrete choice experiments in a nutshell. In: Using discrete choice experiments to value health and health care. Dordrecht: Springer; 2008. p. 13–46.

  9. Wortley S, Wong G, Kieu A, Howard K. Assessing stated preferences for colorectal cancer screening: a critical systematic review of discrete choice experiments. Patient. 2014;7(3):271–82.


  10. Durand C, Eldoma M, Marshall DA, Bansback N, Hazlewood GS. Patient preferences for disease modifying anti-rheumatic drug treatment in rheumatoid arthritis: a systematic review. J Rheumatol. 2019;47(2):176–87.


  11. Marcella S, Ahir HB, Jiang Y, Mayes A, Burnett H. Systematic literature review of health-related quality of life in Clostridium difficile infection. Value Health. 2017;20(9):A793.


  12. Brown L, Lee TH, De Allegri M, Rao K, Bridges JFP. Applying stated-preference methods to improve health systems in sub-Saharan Africa: a systematic review. Exp Rev Pharmacoecon Outcomes Res. 2017;17(5):441–58.


  13. Krinke K, Borchert K, Braun S, Mittendorf T. The impact of patient preference studies in the German healthcare system. Value Health. 2017;20(9):A690.


  14. Helter TM, Boehler CEH. Developing attributes for discrete choice experiments in health: a systematic literature review and case study of alcohol misuse interventions. J Subst Abus. 2016;21(6):662–8.


  15. Bornstein BH, Emler AC. Rationality in medical decision making: a review of the literature on doctors’ decision-making biases. J Eval Clin Pract. 2001;7(2):97–107.


  16. Kleij KS, Tangermann U, Amelung VE, Krauth C. Patients' preferences for primary health care - a systematic literature review of discrete choice experiments. BMC Health Serv Res. 2017;17(1):476.


  17. Brown L, Lee T-H, De Allegri M, Rao K, Bridges JF. Applying stated-preference methods to improve health systems in sub-Saharan Africa: a systematic review. Exp Rev Pharmacoecon Outcomes Res. 2017;17(5):441–58.


  18. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. 2010.


  19. Berchi C, Degieux P, Halhol H, Danel B, Bennani M, Philippe C. Impact of falling reimbursement rates on physician preferences regarding drug therapy for osteoarthritis using a discrete choice experiment. Int J Pharm Pract. 2016;24(2):114–22.


  20. Carlsen B, Hole AR, Kolstad JR, Norheim OF. When you can't have the cake and eat it too. A study of medical doctors' priorities in complex choice situations. Soc Sci Med. 2012;75(11):1964–73.


  21. Cravo Oliveira T, Barlow J, Bayer S. The association between general practitioner participation in joint teleconsultations and rates of referral: a discrete choice experiment. BMC Fam Pract. 2015;16:50.


  22. Deal K, Keshavjee K, Troyan S, Kyba R, Holbrook AM. Physician and patient willingness to pay for electronic cardiovascular disease management. Int J Med Inform. 2014;83(7):517–28.


  23. Fiebig DG, Haas M, Hossain I, Street DJ, Viney R. Decisions about pap tests: what influences women and providers? Soc Sci Med. 2009;68(10):1766–74.


  24. Fitzgerald A, De Coster C, McMillan S, Naden R, Armstrong F, Barber A, Cunning L, Conner-Spady B, Hawker G, Lacaille D, et al. Relative urgency for referral from primary care to rheumatologists: the priority referral score. Arthritis Care Res. 2011;63(2):231–9.


  25. Heisen M, Baeten SA, Verheggen BG, Stoelzel M, Hakimi Z, Ridder A, van Maanen R, Stolk EA. Patient and physician preferences for oral pharmacotherapy for overactive bladder: two discrete choice experiments. Current Med Res Opinion. 2016;32(4):787–96.


  26. Li J, Houle CR, Spalding JR, Yang H, Xiang CQ, Kitt TM, Kristy RM, Wu EQ. Attributes of nuclear imaging centers impacting physician referrals for single-photon emission computed tomography myocardial perfusion imaging tests. J Med Econ. 2017;20(8):777–85.


  27. Lum EPM, Page K, Whitty JA, Doust J, Graves N. Antibiotic prescribing in primary healthcare: dominant factors and trade-offs in decision-making. Infect Dis Health. 2018;23(2):74–86.


  28. Oluboyede Y, Ternent L, Vale L, Allen J. Using a discrete-choice experiment to estimate the preferences of clinical practitioners for a novel non-invasive device for diagnosis of peripheral arterial disease in primary care. Pharmacoeconomics - Open. 2019;3(4):571–81.


  29. Pedersen LB, Riise J, Hole AR, Gyrd-Hansen D. GPs’ shifting agencies in choice of treatment. Appl Econ. 2014;46(7–9):750–61.


  30. Poulos C, González JM, Lee LJ, Boye KS, Johnson FR, Bae JP, Deeg MA. Physician preferences for extra-glycemic effects of type 2 diabetes treatments. Diabetes Therap. 2013;4(2):443–59.


  31. Riise J, Hole AR, Gyrd-Hansen D, Skåtun D. GPs' implicit prioritization through clinical choices – evidence from three national health services. J Health Econ. 2016;49:169–83.


  32. Brownell P, Piccolo F, Brims F, Norman R, Manners D. Does this lung nodule need urgent review? A discrete choice experiment of Australian general practitioners. BMC Pulmonary Med. 2020;20(1):24.


  33. Ezatabadi MR, Rashidian A, Shariati M, Foroushani AR, Sari AA. Using conjoint analysis to elicit GPs’ preferences for family physician contracts: A case study in Iran. Iran Red Crescent Med J. 2016;18(11):e29194.


  34. Gosden T, Bowler I, Sutton M. How do general practitioners choose their practice? Preferences for practice and job characteristics. J Health Services Res Policy. 2000;5(4):208–13.


  35. Holte JH, Kjaer T, Abelsen B, Olsen JA. The impact of pecuniary and non-pecuniary incentives for attracting young doctors to rural general practice. Soc Sci Med. 2015;128:1–9.


  36. Holte JH, Sivey P, Abelsen B, Olsen JA. Modelling nonlinearities and reference dependence in general Practitioners’ income preferences. Health Economics (United Kingdom). 2016;25(8):1020–38.


  37. Li J, Scott A, McGrail M, Humphreys J, Witt J. Retaining rural doctors: Doctors' preferences for rural medical workforce incentives. Soc Sci Med. 2014;121:56–64.


  38. Pedersen LB, Kjær T, Kragstrup J, Gyrd-Hansen D. Do general practitioners know patients' preferences? An empirical study on the agency relationship at an aggregate level using a discrete choice experiment. Value Health. 2012;15(3):514–23.


  39. Pedersen LB, Kjær T, Kragstrup J, Gyrd-Hansen D. General practitioners’ preferences for the organisation of primary care: a discrete choice experiment. Health Policy. 2012;106(3):246–56.


  40. Pedersen LB, Gyrd-Hansen D. Preference for practice: a Danish study on young doctors' choice of general practice using a discrete choice experiment. Eur J Health Econ. 2014;15(6):611–21.


  41. Scott A. Eliciting GPs’ preferences for pecuniary and non-pecuniary job characteristics. J Health Econ. 2001;20(3):329–47.


  42. Scott A, Witt J, Humphreys J, Joyce C, Kalb G, Jeon SH, McGrail M. Getting doctors into the bush: general practitioners' preferences for rural location. Soc Sci Med. 2013;96:33–44.


  43. Song K, Scott A, Sivey P, Meng Q. Improving Chinese primary care providers' recruitment and retention: a discrete choice experiment. Health Policy Plan. 2015;30(1):68–77.


  44. Wordsworth S, Skåtun D, Scott A, French F. Preferences for general practice jobs: a survey of principals and sessional GPs. Br J Gen Pract. 2004;54(507):740–6.


  45. Ammi M, Peyron C. Heterogeneity in general practitioners’ preferences for quality improvement programs: a choice experiment and policy simulation in France. Heal Econ Rev. 2016;6(1):44.


  46. Chen T, Chung K, Huang H, Man L, Lai M. Using discrete choice experiment to elicit doctors' preferences for the report card design of diabetes care in Taiwan - a pilot study. J Eval Clin Pract. 2010;16(1):14–20.


  47. Gong CL, Hay JW, Meeker D, Doctor JN. Prescriber preferences for behavioural economics interventions to improve treatment of acute respiratory infections: a discrete choice experiment. BMJ Open. 2016;6(9):e012739.


  48. Kjaer NK, Halling A, Pedersen LB. General practitioners' preferences for future continuous professional development: evidence from a Danish discrete choice experiment. Educ Primary Care. 2015;26(1):4–10.


  49. Sicsic J, Krucien N, Franc C. What are GPs' preferences for financial and non-financial incentives in cancer screening? Evidence for breast, cervical, and colorectal cancers. Soc Sci Med. 2016;167:116–27.


  50. Müller S, Ziemssen T, Diehm C, Duncker T, Hoffmanns P, Thate-Waschke IM, Schürks M, Wilke T. How to implement adherence-promoting programs in clinical practice? A discrete choice experiment on physicians’ preferences. Patient Preference and Adherence. 2020;14:267–76.


  51. Chudner I, Drach-Zahavy A, Karkabi K. Choosing video instead of in-clinic consultations in primary Care in Israel: discrete choice experiment among key stakeholders—patients, primary care physicians, and policy makers. Value Health. 2019;22(10):1187–96.


  52. Wyatt JC, Batley RP, Keen J. GP preferences for information systems: conjoint analysis of speed, reliability, access and users. J Eval Clin Pract. 2010;16(5):911–5.


  53. Lum EP, Page K, Whitty JA, Doust J, Graves N. Antibiotic prescribing in primary healthcare: dominant factors and trade-offs in decision-making. Infect Dis Health. 2018;23(2):74–86.


  54. Johnson FR, Lancsar E, Marshall D, Kilambi V, Mühlbacher A, Regier DA, Bresnahan BW, Kanninen B, Bridges JF. Constructing experimental designs for discrete-choice experiments: report of the ISPOR conjoint analysis experimental design good research practices task force. Value Health. 2013;16(1):3–13.


  55. Scott A. Identifying and analysing dominant preferences in discrete choice experiments: an application in health care. J Econ Psychol. 2002;23(3):383–98.



Acknowledgements

Not applicable.

Funding

Funding for this research was through the Centre for Research Excellence in Minimising Antibiotic Resistance in the Community, a National Health and Medical Research Council funded Centre for Research.

Author information

Authors and Affiliations

Authors

Contributions

All authors made substantial contributions to the conception of the review. GM and LH conducted the literature review and data extraction. The manuscript was drafted by GM. LH and MvD provided substantial revisions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Gregory Merlo.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Merlo, G., van Driel, M. & Hall, L. Systematic review and validity assessment of methods used in discrete choice experiments of primary healthcare professionals. Health Econ Rev 10, 39 (2020). https://doi.org/10.1186/s13561-020-00295-8


