Attribute development and level selection for a discrete choice experiment to elicit the preferences of health care providers for capitation payment mechanism in Kenya

Abstract

Background

Stated preference elicitation methods such as discrete choice experiments (DCEs) are now widely used in the health domain. However, the “quality” of health-related DCEs has come under criticism due to the lack of rigour in conducting and reporting some aspects of the design process such as attribute and level development. Superficially selecting attributes and levels and vaguely reporting the process might result in misspecification of attributes which may, in turn, bias the study and misinform policy. To address these concerns, we meticulously conducted and report our systematic attribute development and level selection process for a DCE to elicit the preferences of health care providers for the attributes of a capitation payment mechanism in Kenya.

Methodology

We used a four-stage process proposed by Helter and Boehler to conduct and report the attribute development and level selection process. The process entailed raw data collection, data reduction, removing inappropriate attributes, and wording of attributes. Raw data was collected through a literature review and a qualitative study. Data was reduced to a long list of attributes which were then screened for appropriateness by a panel of experts. The resulting attributes and levels were worded and pretested in a pilot study. Revisions were made and a final list of attributes and levels decided.

Results

The literature review unearthed seven attributes of provider payment mechanisms while the qualitative study uncovered 10 capitation attributes. Then, inappropriate attributes were removed using criteria such as salience, correlation, plausibility, and capability of being traded. The resulting five attributes were worded appropriately and pretested in a pilot study with 31 respondents. The pilot study results were used to make revisions. Finally, four attributes were established for the DCE, namely, payment schedule, timeliness of payments, capitation rate per individual per year, and services to be paid by the capitation rate.

Conclusion

By rigorously conducting and reporting the process of attribute development and level selection of our DCE, we improved transparency and helped researchers judge the quality of the study.

Introduction

Stated preference elicitation methods such as discrete choice experiments (DCEs) are now being widely used in health preference research in areas such as priority setting, health workforce, and valuation of health outcomes among others [1,2,3,4]. A DCE is an econometric technique used to elicit the preferences for the characteristics (attributes) of goods or services [5]. Respondents in a DCE survey are given two or more distinct alternatives to choose from. The alternatives are described by two or more attributes [6]. From the choices made in a DCE survey, researchers can determine the relative importance respondents place on the attributes of the goods or services under consideration, and trade-offs study participants are willing to make on one attribute over another [7].

Theoretically, DCEs draw from Lancaster’s theory of consumer demand and Random Utility Theory (RUT). Lancaster’s theory states that individuals derive utility from the attributes of the good or service rather than the product itself [8]. RUT posits that individuals are rational decision makers and will choose the alternative that they derive the maximum or highest utility from [9].
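In standard random utility notation (our own summary; this formulation is not spelled out in the text above), the utility that individual $n$ derives from alternative $j$ can be written as:

```latex
U_{nj} = V_{nj} + \varepsilon_{nj}, \qquad
V_{nj} = \sum_{k} \beta_k x_{njk}
```

where $V_{nj}$ is the observable component determined by the attribute levels $x_{njk}$ and preference weights $\beta_k$, and $\varepsilon_{nj}$ is a random error term. Individual $n$ chooses alternative $j$ whenever $U_{nj} > U_{nm}$ for all $m \neq j$; assuming i.i.d. type-I extreme value errors yields the multinomial logit choice probability $P_{nj} = e^{V_{nj}} / \sum_{m} e^{V_{nm}}$.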

However, the “quality” of DCEs and the way they are designed have been questioned due to underreporting of the design process [10, 11]. Researchers fail to rigorously conduct and report some aspects of the DCE design process such as attribute development and level selection [11,12,13]. This may lead to misspecification of attributes and levels, which may in turn give erroneous results and hence misinform policy [14]. Therefore, it is important to meticulously conduct and report the process of attribute development and level selection to improve transparency and help researchers judge the quality of the DCE [12, 15].

Researchers need to comprehensively report:

the processes used to collate an initial list of attributes, the analyses conducted during this design stage (including sample details and information on type of analysis conducted), processes undertaken in reducing attributes to a manageable number, and a brief description of the results of these processes [12] (p2).

However, this is complicated by the lack of a standardised process to guide the selection of attributes and levels for health-related DCEs [16]. Although guidelines on how to conduct health-related DCEs exist [17,18,19], they do not provide comprehensive guidance on how to select attributes and levels [12, 16]. Researchers are therefore left to superficially select attributes and levels and vaguely report the process [10, 20]. Nonetheless, a few researchers have recently formulated guidelines on how to report the attribute development and level selection process of health-related DCEs [12, 21]. Furthermore, an increasing number of health-related DCEs are now starting to rigorously report the attribute development and level selection process. Examples include DCEs on micro health insurance in Malawi [14], basic health insurance in Iran [22], cataract surgery in Australia [23], and antirheumatic drugs in the Netherlands [24].

We address these research gaps and contribute to the limited literature on attribute development and level selection by rigorously conducting and reporting the process followed in deriving attributes and levels for a DCE to elicit the preferences of health care providers for the attributes of a capitation payment mechanism in Kenya. Capitation is a provider payment mechanism (PPM) used by purchasing organisations (e.g. health insurance companies, governments) to pay health care providers to deliver services to people [25]. It is a fixed payment made to a health care provider in advance to extend services to enrolled individuals for a period of time [25].

PPMs are important as they have the potential to modify health care provider behaviour and influence providers to deliver needed services and improve quality and efficiency [26]. For example, capitation creates incentives for providers to improve efficiency, contain costs, increase the number of enrolees, select healthy individuals, and underprovide health services [25, 27]. In Kenya, capitation is used by the country’s National Hospital Insurance Fund (NHIF) to pay for outpatient services for its enrolees at contracted public, private, and faith-based facilities [28, 29].

Since PPMs can create positive and negative incentives, it is important to consider health care providers’ preferences for their design attributes. A DCE is an appropriate technique as it enables eliciting health care providers’ preferences for the attributes of capitation, quantifying the relative importance providers place on those characteristics, and identifying the trade-offs respondents are willing to make [7]. These attributes can be targets for potential interventions meant to configure capitation payment mechanisms to create positive incentives for health care providers and help steer the health system towards universal health coverage (UHC) [30]. However, there is a dearth of literature on DCEs that have focussed on health care providers’ preferences for capitation payment methods in low- and middle-income countries (LMICs), with the exception of Robyn et al. [31].

The aim of this paper was to describe the techniques used to derive the initial set of attributes and levels, methods employed in reducing the number of attributes and selecting levels, piloting, and concluding discussions to decide on the final list of attributes and levels.

Methodology

Conceptual framework

We applied a framework proposed by Helter and Boehler [21] (Fig. 1). The researchers provide a systematic approach to attribute development for health-related DCEs and recommend following a four-stage process consisting of raw data collection, data reduction, removing inappropriate attributes, and wording of attributes.

Fig. 1 Conceptual framework for attribute development and level selection. Adapted from Helter and Boehler [21]

First, raw data about attributes and levels are collected using qualitative studies and alternative methods such as literature reviews. The collected data are then reduced through analysis, resulting in a long list of attributes and levels. These are then screened for appropriateness against multiple criteria such as salience, plausibility, and capability of being traded, to reduce them to a limited number of attributes and levels. Finally, the attributes and levels are worded using methods such as piloting, cognitive interviews, or researchers’ judgement.

Stage 1: raw data collection

To derive an initial list of attributes and levels, a literature review and a qualitative study were conducted. These were guided by a framework developed by the Resilient and Responsive Health Systems (RESYST) consortium on the characteristics of multiple funding flows to health facilities (Table 1) [32]. Using both a literature review and a qualitative study is recommended as the former generates conceptual attributes while the latter unearths context-specific characteristics [11, 14].

Table 1 Framework for the characteristics of capitation payment mechanism

Literature review

The literature review sought to synthesise evidence on the characteristics of PPMs that influenced health care provider behaviour. The search was conducted using three databases, namely PubMed, Web of Science, and Google Scholar. Search terms such as “provider payment mechanisms”, “capitation”, “fee-for-service”, and “remuneration methods”, among others, were used. Full-text peer-reviewed journal articles that had been published in English by February 2018 and described empirical research on PPMs were eligible. Papers that described incentives that modified health care provider behaviours were excluded. Two researchers independently screened the articles.

Qualitative study

A cross-sectional qualitative study was conducted in two Kenyan counties. The study sought to explore the experiences of health care providers with PPMs in the Kenyan context and examined the characteristics of these payment methods that providers considered important. The framework for the characteristics of capitation (Table 1) was used. First, two counties were purposively sampled. Then, six NHIF-accredited providers (two private, two public, two faith-based) were purposively selected. Next, the institutional heads of the health facilities were approached through emails, phone calls, and face-to-face visits, and consent to participate in the study was sought. After that, five senior managers and health management team (HMT) members whose roles involved financial decision making were selected in each facility. Of the 30 respondents approached, one senior manager at a private health facility declined to participate, citing a busy schedule.

Overall, 29 semi-structured interviews were conducted with respondents at their workplaces after obtaining written informed consent. The respondents had diverse management roles, from medical directors to financial managers (Table 2). Data was collected between September and December 2017. The interview guide (Additional file 1) was developed by three researchers using the framework for the characteristics of capitation (Table 1) and explored areas such as awareness and understanding of PPMs, experiences with capitation and fee-for-service (FFS), attributes of PPMs considered important, and attribute levels of capitation and FFS. Furthermore, respondents were prompted to spontaneously mention the characteristics of an ideal PPM and rank them in the process. The guide was tested at different health facilities in one county. The interviews were audio recorded, lasted between 30 and 50 min, and were conducted in English. The interviewers wrote field notes during and after the interviews.

Table 2 Characteristics of qualitative study respondents

Stage 2: data reduction

Literature review

Overall, 27,156 papers were found. By reading the titles, we excluded 27,012 papers that did not meet the inclusion criteria. Then, the abstracts of the remaining 144 papers were read, resulting in 93 articles being excluded for not meeting the criteria. Thereafter, a further 20 papers were excluded because full-text articles were unavailable. The resulting 31 papers were read in full and 15 duplicates were dropped. The review finally included 16 papers. The literature review has been published [33].

Qualitative study

A framework approach was used for the qualitative data analysis. The interviews were first transcribed verbatim in full. Then, two researchers familiarised themselves with the data by reading and rereading the transcripts. The coding framework was developed by three researchers from the framework on the characteristics of capitation, the study objectives, and emerging themes. This process culminated in a coding tree covering the attributes and attribute levels of capitation. NVIVO version 10 was used to manage the data [34]. One researcher applied the codes, sorted the data, and conducted the charting. Finally, three researchers interpreted the findings. The qualitative study has also been published [30].

Stage 3: removing inappropriate attributes

Panel of experts

To reduce the list of attributes and levels, we engaged a panel of eight experts comprising doctors, nurses, pharmacists, and researchers. An expert panel is a recommended method for reducing the number of attributes and levels [17]. Too many attributes in a DCE increase the complexity of the tasks for respondents which, in turn, results in increased error variance, attribute non-attendance (a phenomenon where not all attributes are considered in reaching a decision), and inconsistent responses across choice tasks [5, 35, 36].

The experts had experience working in similar settings (health facilities) as the potential DCE respondents. Therefore, they could provide valuable feedback on the attributes and levels that would mirror those of DCE respondents. The experts and researchers together screened all the capitation attributes and levels generated from the data reduction stage. They used multiple criteria such as relevance to study objectives and decision context, correlation between attributes (inter-attribute correlation), salience, plausibility, and capability of being traded [17, 21].

Researchers’ judgement

Three researchers (authors) held two meetings to review the decisions of the experts. They also agreed on an interim list of capitation attributes and levels to be included in a pilot study.

Stage 4: wording

Pilot study

A pilot study was conducted to pre-test the interim list of attributes and levels agreed upon by the authors. We also aimed to generate parameter estimates that would be used to construct an appropriate experimental design for the main DCE survey. For the pilot study, a D-efficient experimental design was generated using Ngene software version 1.2.0 [37]. It entailed an unlabelled experiment with two alternatives and an opt-out (no-choice alternative). We used educated best guesses to generate the priors [38]. Eight full-profile choice tasks were derived and transferred to a paper questionnaire (Table 3 and Additional file 2). Since the DCE targeted senior managers who were often busy, eight choice tasks would not place a significant cognitive burden on the respondents.

Table 3 Sample DCE pilot choice task

The pilot study questionnaire (Additional file 2) was administered to 31 senior managers and HMT members (Table 4) from 9 randomly selected public, private, and faith-based health facilities in one Kenyan county (83.78% response rate) [39]. Respondents were prompted to rank their preferences from best (1) to worst (3) considering two hypothetical capitation payments (Capitation A and B) and an opt-out (no-choice alternative labelled ‘none’) (Table 3). Furthermore, respondents were required to specify which options were unacceptable to them, i.e. options they would never choose (a no-concession outcome). The main aim of this was to approximate decisions made by groups using a technique called minimum information group inference (MIGI) [40, 41]. Moreover, we asked study participants for general feedback on the choice tasks, understandability of the scenarios, questionnaire design, and the appropriateness, wording, and clarity of the attributes and levels. A think-aloud approach was also employed, where respondents were asked to verbalise their thought process when answering the choice tasks [12, 21]. Data was collected between May and June 2018.

Table 4 Characteristics of pilot study respondents

A multinomial logit (MNL) model was used to estimate individual preferences in R version 3.5.0 using the University of Leeds Choice Modelling Centre’s (CMC) choice modelling code for R (cmcRcode) version 2.0.4 [42, 43]. We estimated the main effects. Willingness-to-accept (WTA) measures were also estimated from the MNL model coefficients using the delta method. Additionally, relative importance scores were derived from the MNL model coefficients [44]. This was done by multiplying the absolute value of each attribute’s coefficient by the difference between the highest and lowest level of the attribute to obtain its maximum effect. Then, the ratio of each attribute’s maximum effect to the total was computed to derive the relative importance scores [44]. Finally, to test the robustness of our results and relax the Independence of Irrelevant Alternatives (IIA) property, we also estimated a mixed multinomial logit model (Additional file 4) [45].
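As a concrete illustration of the relative importance and WTA calculations described above, the following sketch uses purely hypothetical coefficients and level ranges; the actual pilot estimates are in Table 10, and the delta-method standard errors are omitted here.

```python
# Hypothetical MNL coefficients and level ranges -- for illustration only;
# none of these numbers come from the paper's pilot study (see Table 10).
coefs = {
    "payment_schedule": -0.10,   # per additional month between disbursements
    "timeliness": 0.90,          # timely (1) vs delayed (0) dummy
    "capitation_rate": 0.0006,   # per Kenya shilling per individual per year
    "services_covered": -0.10,   # per additional service bundled into the rate
    "performance_req": 0.05,     # requirements attached (1) vs none (0)
}
level_ranges = {                 # highest level minus lowest level
    "payment_schedule": 11,      # e.g. 1 to 12 months
    "timeliness": 1,
    "capitation_rate": 2400,     # e.g. 800 to 3200 KES
    "services_covered": 3,
    "performance_req": 1,
}

# Maximum effect of each attribute: |coefficient| * (highest - lowest level)
max_effects = {a: abs(b) * level_ranges[a] for a, b in coefs.items()}
total = sum(max_effects.values())

# Relative importance: each attribute's maximum effect as a share of the total
rel_importance = {a: e / total for a, e in max_effects.items()}

# Point estimate of WTA: the extra capitation rate (KES) that would compensate
# a provider for a delayed rather than timely payment (coefficient ratio)
wta_delayed = coefs["timeliness"] / coefs["capitation_rate"]
```

With these made-up numbers the capitation rate carries the largest importance share and performance requirements the smallest, mirroring the ordering reported in the pilot results below.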

Researchers’ final discussions

Six researchers reviewed the results of the pilot study, respondents’ comments, and made amendments to the DCE questionnaire. They then agreed on the final list of attributes and levels for the main DCE survey.

Results

Results from stages 1 and 2: raw data collection and data reduction

The literature review found that seven PPM characteristics influenced health care provider behaviour (Table 5).

Table 5 Attributes of PPMs

Semi-structured interviews with senior managers and HMT members uncovered 10 attributes of capitation that health care providers considered important (Table 6).

Table 6 Capitation attributes

Moreover, senior managers and HMT members spontaneously mentioned the attributes of an ideal PPM during the qualitative study, ranking them in the process. The most important trait of a PPM was timeliness of the payment, followed by the services covered by the PPM, the adequacy of the payment rate to cover the cost of services, the complexity of accountability mechanisms, the autonomy that health care providers have over the use of PPM funds, and lastly the list of clients registered to a health facility under capitation.

Results from stage 3: removing inappropriate attributes

Panel of experts

The panel discussed all ten capitation attributes from the qualitative study. The attributes from the literature review were conceptual and similar to those unearthed by the qualitative study, which had the advantage of being context specific. Three attributes were dropped due to inter-attribute correlation and irrelevance to the decision context (Table 7). The rest were either maintained as they were or reworded. Additionally, the number of levels was capped at four per attribute. Overall, this stage resulted in seven capitation attributes.

Table 7 Expert panel’s comments and decisions on capitation attributes and levels

Researchers’ judgement

Three researchers held two meetings to deliberate on the interim list of attributes and levels that had been agreed by the panel of experts and were to be included in the pilot study. An agreement was also reached to restrict the maximum number of attributes to five and levels to four per attribute. Five attributes were deemed manageable for the respondents, as too many would increase task complexity, resulting in increased error variance and attribute non-attendance. Two attributes, ‘autonomy to use capitation funds’ and ‘complexity of accountability mechanisms’, were dropped due to irrelevance to the decision context (Table 8). The remaining five attributes and their corresponding levels were simplified, expounded, and reworded.

Table 8 Researchers’ comments and decisions on capitation attributes and levels

Results from stage 4: wording

Pilot study

The previous step resulted in five attributes, namely, payment schedule, timeliness of payments, capitation rate per individual per year, services to be paid by the capitation rate, and performance requirements (Table 9). The levels were then ranked according to expected preferences to enable educated guesses about the signs of the attribute priors. For example, a longer payment schedule would be less desirable; therefore, the payment schedule attribute was given a negative sign. Furthermore, in the qualitative study, health care providers had stated that capitation would not work with performance requirements. For that reason, the performance requirements attribute was also given a negative sign.

Table 9 Pilot study capitation attributes and levels

We estimated the choice probabilities for selecting a capitation alternative and willingness-to-accept (WTA) measures (Table 10). In the preference space, three attributes had statistically significant coefficients, namely payment schedule, timeliness of payments, and capitation rate per individual per year. The signs of the estimates were also as expected. This meant that respondents preferred capitation alternatives with frequent disbursement schedules, timely payments, and higher rates per individual per year.

Table 10 Main effects MNL model estimates

The ‘services to be paid by the capitation rate’ attribute and the opt-out had the expected negative signs but the coefficients were not statistically significant. This might have been due to a small sample size of 31 respondents. Interestingly, the ‘performance requirements’ attribute had an unexpected positive sign. A negative sign was expected according to the qualitative study results which had indicated that senior managers and HMT members would not want performance requirements attached to capitation payment schemes. However, the coefficient was not statistically significant. Nonetheless, when the opt-out was excluded from the analysis (Additional file 3), the coefficient of the ‘performance requirements’ attribute had the expected negative sign. This was also not statistically significant probably due to the small sample size.

The relative importance estimates were derived from the MNL coefficients (Table 11). The most important capitation attribute was payment rate per individual per year followed by payment schedule. The least important was the performance requirements attribute.

Table 11 Relative importance estimates

During the think-aloud exercise, respondents raised several issues with the attributes, levels, choice tasks, and the questionnaire in general. For example, when exploring the timeliness of payments attribute (which had two levels: timely and delayed), most respondents asked for a definition of the length of the delay. Study respondents stated that they would accept shorter delays of up to one month in exchange for a higher payment rate per individual.

Second, respondents complained that the levels of the ‘services to be paid by the capitation rate’ attribute contained long sentences. For example, one level read as follows: ‘capitation rate pays for consultation and drugs only (Hospital claims and is paid for lab tests separately by the insurer/NHIF)’. They wanted the levels of the attribute to be simplified by shortening the sentences.

Third, study participants could easily rank the alternatives including the opt-out (no-choice alternative). However, they struggled to understand the second part of the choice question which prompted them to choose the alternative they found unacceptable among those they had ranked second and third (acceptable/unacceptable question). Respondents felt that since they had ranked the alternatives from best (1) to worst (3) in the first part of the choice question, then they would naturally choose the worst ranked alternative as unacceptable in the second part of the task. Furthermore, respondents thought that they were not expected to change which alternative they deemed worst unless there was some form of interaction with other participants’ choices before answering the acceptable/unacceptable question. Overall, the DCE questionnaire took approximately 20 min to complete and the respondents stated that they had sufficient information to make a choice.

Final list of attributes and levels

The team of six researchers (authors) made final alterations to the attributes, levels, and choice task design, taking into consideration the pilot study results and respondents’ comments. The levels of the ‘payment schedule’ attribute were edited by including a succinct definition of the time periods (Table 12). For example, the phrase ‘every month’ was added to the ‘1-month’ level to define what it meant.

Table 12 Final capitation attributes and levels

Secondly, the ‘delayed’ level of the ‘timeliness of payments’ attribute was split into two levels: ‘delayed by more than 3 months’ and ‘delayed by less than 3 months’. This was in response to respondents’ comments during the pilot study asking us to define the length of the delay.

Thirdly, the levels of the ‘capitation rate per individual per year’ attribute were modified. There were some policy considerations to reduce the capitation rate paid to health care providers for the NHIF general scheme. Therefore, the researchers revised the levels to include one that was lower than the current rate of 1200 Kenya shillings (US $12), settling on 800 Kenya shillings (US $8). A linear increment of 800 was then added to the base level to derive the other three levels. The attribute was maintained as a continuous variable, as it was the monetary characteristic that would enable the calculation of willingness-to-accept estimates.
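The stated arithmetic for the revised rate levels can be sketched as follows (the dollar conversion assumes the paper's implied rate of 100 KES per US dollar; the final levels appear in Table 12):

```python
# Revised 'capitation rate per individual per year' levels, as described above:
# a base of 800 KES with a linear increment of 800 KES for each further level.
base_kes, increment_kes, n_levels = 800, 800, 4
levels_kes = [base_kes + increment_kes * k for k in range(n_levels)]

# US dollar equivalents, assuming 100 KES per US$ (implied by 1200 KES = US $12)
levels_usd = [kes / 100 for kes in levels_kes]
```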

Moreover, the levels of the ‘services to be paid by the capitation rate’ attribute were simplified by reducing the number of words. For example, the base level was reworded to ‘Consultation ONLY’ from ‘Capitation rate pays for consultation only (Hospital claims and is paid for lab tests and drugs separately by the insurer/NHIF)’.

Furthermore, the pilot study results showed a counter-intuitive (positive) sign for the ‘performance requirements’ attribute when the opt-out was included in the analysis (Table 10). However, when the opt-out was excluded (Additional file 3), the attribute had the expected negative sign. The coefficients in both analyses were not statistically significant. The positive sign when the opt-out was included suggested that respondents preferred capitation payments with performance requirements, which contradicted the qualitative study results suggesting that performance requirements were not preferred for capitation payments. It was also the least important capitation attribute according to respondents (Table 11). For these reasons, the attribute was dropped.

Finally, the acceptable/unacceptable question was reworded to make it clear and understandable to the respondents that they were first required to rank all three alternatives and then answer if alternative A and/or alternative B were unacceptable (Table 13). The simplified acceptable/unacceptable question was set to only appear under alternative A and alternative B and not the opt-out.

Table 13 Sample final DCE survey choice task

Discussion

Health-related DCEs rarely comprehensively conduct and report the attribute and level selection process [10], partly because of the lack of systematic guidelines on how to do so [16]. However, a few researchers, such as Helter and Boehler [21], have proposed frameworks to guide the attribute development process. We followed Helter and Boehler’s four-stage framework to rigorously conduct and report the process of attribute development and level selection for a DCE to elicit the preferences of health care providers for the attributes of capitation. The process included raw data collection, data reduction, removing inappropriate attributes, and wording of attributes. The whole process resulted in four capitation attributes to be included in the main DCE, namely, payment schedule, timeliness of payments, capitation rate per individual per year, and services to be paid by the capitation rate.

The first two stages, which included a literature review and a qualitative study, resulted in a long list of attributes and levels. While other studies used either qualitative studies [15, 61] or literature reviews only, we used a combination of both methods. Using literature reviews alone may lead to the omission of relevant attributes, which may, in turn, increase error variances and introduce bias into the study [7, 11]. Therefore, qualitative studies are advocated as they help identify context-specific attributes that are important to the study respondents [11, 14, 15] and can reveal new attributes not captured in the literature. In our study, the literature review identified conceptual attributes while the qualitative study unearthed context-specific attributes. Several studies have adopted such strategies [14, 62].

This study engaged experts to reduce the number of attributes and levels. Engaging experts who are not part of the research team is beneficial as it avoids narrowing the focus in the preliminary stages of the study [12]. The approach is also useful when it complements other techniques such as literature reviews and qualitative studies [21].

Additionally, unlike other studies [12, 14], we presented detailed pilot study results including regression coefficients and willingness to accept estimates. We could judge the validity of the DCE by comparing the pilot study estimates with the qualitative study results. The signs of the coefficients of four attributes were expected. We found preferences for capitation schemes that had frequent disbursements, timely payments, higher rates per individual, and paid for basic service packages. Furthermore, respondents made trade-offs. Moreover, the analysis revealed that the payment rate per individual per year and payment schedule were two of the most important capitation attributes. This is because higher rates meant more revenue to health care providers and regular payment schedules ensured that facilities could plan and budget [30, 56]. Though there are few DCEs that focussed on health care providers’ preferences for PPMs, Robyn et al. [31] did find similar results in a DCE conducted among health workers in Burkina. Furthermore, Robyn et al. included payment schedule and capitation rate per individual attributes in their actual DCE. However, the study included a ‘performance-based payment’ characteristic which we had dropped from the final list of attributes to be included in the DCE. This was because the analysis of our pilot study results gave an unexpected positive coefficient for the attribute when the opt-out was included and estimates revealed that it was the least important attribute. Studies have demonstrated that capitation incentivises health care providers to compromise performance for example underserving patients [63]. Though Robyn et al. included the attribute in their study as it was important, it was not important in Kenya. Burkina Faso is a different context from Kenya. The current capitation arrangement in Kenya would make health care providers resent performance requirements being attached to the payment mechanism. 
Piloting the attributes and comparing the results with the qualitative study was vital, as misspecified attributes and levels could have biased the study and misinformed policy [62].
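To illustrate how willingness-to-accept estimates of the kind reported for the pilot are typically derived, the money-metric value of an attribute in an MNL with linear utility is the ratio of its coefficient to the payment coefficient. The sketch below uses hypothetical coefficients for illustration only; they are not the pilot estimates, and the attribute names are placeholders.

```python
# Hypothetical MNL coefficients for illustration only (not the pilot estimates).
# With linear utility V = b_pay * rate + b_k * attribute_k + ..., the
# money-metric value (willingness to accept) of attribute k is b_k / b_pay,
# where b_pay is the coefficient on the payment received.

attribute_coefs = {
    "timely_payment": 0.80,      # hypothetical coefficient
    "quarterly_schedule": 0.45,  # hypothetical coefficient
}
b_pay = 0.002  # hypothetical coefficient on the capitation rate per individual

wta = {name: b / b_pay for name, b in attribute_coefs.items()}
# e.g. timely payment is valued like a 400-unit rise in the rate per individual
print(wta)
```

Comparing such ratios across attributes is what allows the relative importance of attributes to be judged in money terms during piloting.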

Strengths and limitations

This paper has several strengths. First, the study serves as an example of how to rigorously and systematically conduct and report the process of deriving attributes and levels, which improves transparency and makes the process reproducible. Second, our pilot study results provided evidence that study participants could consider all the information in reaching a decision, place relative importance on the attributes, and make trade-offs. Similar findings were observed by Gomes et al. [64] in their DCE pilot study. Also, the think-aloud exercise employed during the pilot test helped gauge respondents’ understanding of the choice tasks [12].

However, the study had some limitations. First, the sample size for the pilot study might have been insufficient, which might explain why the coefficients of two attributes were not statistically significantly different from zero. Second, we estimated an MNL model, which does not relax the independence of irrelevant alternatives (IIA) assumption. However, we additionally ran a panel MMNL model (Additional file 4) to relax IIA and found that its results were not very different from those of the MNL. We therefore based our decisions on the MNL results, as the MNL is stable with small sample sizes. Third, the qualitative study focussed on the views of NHIF-accredited health care providers, leaving out those who were not NHIF-accredited. Nonetheless, the pilot study included both accredited and non-accredited providers.
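The IIA restriction mentioned above can be shown numerically: under MNL, the odds between any two alternatives are unaffected by adding or removing a third alternative (such as an opt-out), which is precisely the property a mixed logit relaxes. A minimal sketch with hypothetical utilities, not our estimates:

```python
import math

def mnl_probs(utilities):
    """Multinomial logit choice probabilities: P_i = exp(V_i) / sum_j exp(V_j)."""
    exps = [math.exp(v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical deterministic utilities for two capitation schemes (A, B).
p2 = mnl_probs([1.0, 0.5])
# Add a third alternative (e.g. an opt-out) and recompute.
p3 = mnl_probs([1.0, 0.5, 0.2])

# IIA: the odds of A versus B are identical in both choice sets,
# equal to exp(V_A - V_B) regardless of what else is on offer.
print(p2[0] / p2[1], p3[0] / p3[1])
```

The probabilities themselves change when the opt-out is added, but the A:B ratio does not; a panel MMNL, by drawing coefficients from a distribution across respondents, breaks this proportional-substitution pattern.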

Conclusion

The paper contributes to the DCE literature by rigorously conducting and reporting the process of attribute development and level selection. Researchers should embrace this practice, as it improves transparency and helps in judging the “quality” of a DCE.

Availability of data and materials

The data generated and analysed during the qualitative study are not publicly available because they contain information that could compromise research participant privacy. However, the transcripts are available from the corresponding author [MO] or [EB] on reasonable request. The quantitative dataset analysed for this study is stored in the KEMRI-Wellcome Trust Research Programme (KWTRP) Data Repository (https://doi.org/10.7910/DVN/AGIPLL) and can be made available via written request to the corresponding author [MO] or [EB].

Abbreviations

CMC:

The Choice Modelling Centre at the University of Leeds

DCE:

Discrete Choice Experiment

HMT:

Health Management Team

KEMRI:

Kenya Medical Research Institute

MNL:

Multinomial Logit Model

NGO:

Non-Governmental Organisation

NHIF:

National Hospital Insurance Fund

PPM:

Provider Payment Mechanism

RESYST:

Resilient and Responsive Health Systems

RUT:

Random Utility Theory

SERU:

Scientific and Ethics Review Unit

UHC:

Universal Health Coverage

WTA:

Willingness to Accept

References

1. Clark MD, Determann D, Petrou S, Moro D, de Bekker-Grob EW. Discrete choice experiments in health economics: a review of the literature. PharmacoEconomics. 2014;32(9):883–902.

2. Soekhai V, de Bekker-Grob EW, Ellis AR, Vass CM. Discrete choice experiments in health economics: past, present and future. PharmacoEconomics. 2019;37(2):201–26.

3. Mandeville KL, Lagarde M, Hanson K. The use of discrete choice experiments to inform health workforce policy: a systematic review. BMC Health Serv Res. 2014;14(1):367.

4. de Bekker-Grob EW, Ryan M, Gerard K. Discrete choice experiments in health economics: a review of the literature. Health Econ. 2012;21(2):145–72.

5. Hensher D, Rose J, Greene W. Applied choice analysis. 2nd ed. Cambridge: Cambridge University Press; 2015.

6. Ali S, Ronaldson S. Ordinal preference elicitation methods in health economics and health services research: using discrete choice experiments and ranking methods. Br Med Bull. 2012;103(1):21–44.

7. Mühlbacher A, Johnson FR. Choice experiments to quantify preferences for health and healthcare: state of the practice. Applied Health Economics and Health Policy. 2016;14(3):253–66.

8. Lancaster KJ. A new approach to consumer theory. J Polit Econ. 1966;74(2):132–57.

9. McFadden D. Conditional logit analysis of qualitative choice behaviour. In: Zarembka P, editor. Frontiers in econometrics. New York: Academic Press; 1974.

10. Vass C, Rigby D, Payne K. The role of qualitative research methods in discrete choice experiments: a systematic review and survey of authors. Med Decis Mak. 2017;37(3):298–313.

11. Coast J, Al-Janabi H, Sutton EJ, Horrocks SA, Vosper AJ, Swancutt DR, Flynn TN. Using qualitative methods for attribute development for discrete choice experiments: issues and recommendations. Health Econ. 2012;21(6):730–41.

12. De Brún A, Flynn D, Ternent L, Price CI, Rodgers H, Ford GA, Rudd M, Lancsar E, Simpson S, Teah J, et al. A novel design process for selection of attributes for inclusion in discrete choice experiments: case study exploring variation in clinical decision-making about thrombolysis in the treatment of acute ischaemic stroke. BMC Health Serv Res. 2018;18(1):483.

13. Louviere JJ, Lancsar E. Choice experiments in health: the good, the bad, the ugly and toward a brighter future. Health Economics, Policy and Law. 2009;4(4):527–46.

14. Abiiro GA, Leppert G, Mbera GB, Robyn PJ, De Allegri M. Developing attributes and attribute-levels for a discrete choice experiment on micro health insurance in rural Malawi. BMC Health Serv Res. 2014;14(1):235.

15. Coast J, Horrocks S. Developing attributes and levels for discrete choice experiments using qualitative methods. Journal of Health Services Research & Policy. 2007;12(1):25–30.

16. Rydén A, Chen S, Flood E, Romero B, Grandy S. Discrete choice experiment attribute selection using a multinational interview study: treatment features important to patients with type 2 diabetes mellitus. The Patient - Patient-Centered Outcomes Research. 2017;10(4):475–87.

17. Bridges JFP, Hauber AB, Marshall D, Lloyd A, Prosser LA, Regier DA, Johnson FR, Mauskopf J. Conjoint analysis applications in health—a checklist: a report of the ISPOR good research practices for conjoint analysis task force. Value Health. 2011;14(4):403–13.

18. Reed Johnson F, Lancsar E, Marshall D, Kilambi V, Mühlbacher A, Regier DA, Bresnahan BW, Kanninen B, Bridges JFP. Constructing experimental designs for discrete-choice experiments: report of the ISPOR conjoint analysis experimental design good research practices task force. Value Health. 2013;16(1):3–13.

19. Hauber AB, González JM, Groothuis-Oudshoorn CGM, Prior T, Marshall DA, Cunningham C, Ijzerman MJ, Bridges JFP. Statistical methods for the analysis of discrete choice experiments: a report of the ISPOR conjoint analysis good research practices task force. Value Health. 2016;19(4):300–15.

20. Kløjgaard ME, Bech M, Søgaard R. Designing a stated choice experiment: the value of a qualitative process. Journal of Choice Modelling. 2012;5(2):1–18.

21. Helter TM, Boehler CEH. Developing attributes for discrete choice experiments in health: a systematic literature review and case study of alcohol misuse interventions. J Subst Abus. 2016;21(6):662–8.

22. Kazemi Karyani A, Rashidian A, Akbari Sari A, Emamgholipour Sefiddashti S. Developing attributes and levels for a discrete choice experiment on basic health insurance in Iran. Med J Islam Repub Iran. 2018;32(1):142–50.

23. Gilbert C, Keay L, Palagyi A, Do VQ, McCluskey P, White A, Carnt N, Stapleton F, Laba T-L. Investigation of attributes which guide choice in cataract surgery services in urban Sydney, Australia. Clinical and Experimental Optometry. 2018;101(3):363–71.

24. Mathijssen EG, van Heuckelum M, van Dijk L, Vervloet M, Zonnenberg SM, Vriezekolk JE, van den Bemt BJ. A discrete choice experiment on preferences of patients with rheumatoid arthritis regarding disease-modifying antirheumatic drugs: the identification, refinement, and selection of attributes and levels. Patient Preference and Adherence. 2018;12:1537–55.

25. Langenbrunner JC, O'Dougherty S, Cashin CS. Designing and implementing health care provider payment systems: "how-to" manuals. Washington, DC: The World Bank; 2009.

26. Kutzin J, Yip W, Cashin C. Alternative financing strategies for universal health coverage. In: World Scientific Handbook of Global Health Economics and Public Policy. 2016. p. 267–309.

27. Cashin C. Assessing health provider payment systems: a practical guide for countries working toward universal health coverage. Washington, DC: Joint Learning Network for Universal Health Coverage; 2015.

28. Munge K, Mulupi S, Barasa EW, Chuma J. A critical analysis of purchasing arrangements in Kenya: the case of the National Hospital Insurance Fund. Int J Health Policy Manag. 2018;7(3):244–54.

29. Barasa E, Rogo K, Mwaura N, Chuma J. Kenya National Hospital Insurance Fund reforms: implications and lessons for universal health coverage. Health Systems & Reform. 2018;4(4):346–61.

30. Obadha M, Chuma J, Kazungu J, Barasa E. Health care purchasing in Kenya: experiences of health care providers with capitation and fee-for-service provider payment mechanisms. Int J Health Plann Manag. 2019;34(1):e917–33.

31. Robyn PJ, Bärnighausen T, Souares A, Savadogo G, Bicaba B, Sié A, Sauerborn R. Health worker preferences for community-based health insurance payment mechanisms: a discrete choice experiment. BMC Health Serv Res. 2012;12(1):159.

32. Onwujekwe O, Ezumah N, Mbachu C, Ezenwaka U, Uzochukwu B. Nature and effects of multiple funding flows to public healthcare facilities: a case study from Nigeria. In: Health Financing and Governance Knowledge Synthesis Workshop; 22 March 2018; Abuja, Nigeria: RESYST; 2018.

33. Kazungu JS, Barasa EW, Obadha M, Chuma J. What characteristics of provider payment mechanisms influence health care providers' behaviour? A literature review. Int J Health Plann Manag. 2018;33(4):e892–905.

34. QSR International Pty Ltd. NVivo qualitative data analysis software version 10. Melbourne, Australia: QSR International Pty Ltd; 2014.

35. Lagarde M. Investigating attribute non-attendance and its consequences in choice experiments with latent class models. Health Econ. 2013;22(5):554–67.

36. Heidenreich S, Watson V, Ryan M, Phimister E. Decision heuristic or preference? Attribute non-attendance in discrete choice problems. Health Econ. 2018;27(1):157–71.

37. Ngene [http://www.choice-metrics.com].

38. Bliemer MCJ, Collins AT. On determining priors for the generation of efficient stated choice experimental designs. Journal of Choice Modelling. 2016;21:10–4.

39. Obadha M, Barasa E, Kazungu J, Abiiro GA, Chuma J. Replication data for: Pilot study for a discrete choice experiment to elicit the preferences of health care providers for capitation payment mechanism in Kenya. V1. Harvard Dataverse; 2019.

40. Beck MJ, Rose JM. Stated preference modelling of intra-household decisions: can you more easily approximate the preference space? Transportation. 2017.

41. Hensher DA, Puckett SM. Power, concession and agreement in freight distribution chains: subject to distance-based user charges. Int J Log Res Appl. 2008;11(2):81–100.

42. R version 3.5.0 [https://www.r-project.org/].

43. CMC choice modelling code for R [https://cmc.leeds.ac.uk/].

44. Maaya L, Meulders M, Surmont N, Vandebroek M. Effect of environmental and altruistic attitudes on willingness-to-pay for organic and fair trade coffee in Flanders. Sustainability. 2018;10(12):4496.

45. McFadden D, Train K. Mixed MNL models for discrete response. J Appl Econ. 2000;15(5):447–70.

46. Hsu P-F. Does a global budget superimposed on fee-for-service payments mitigate hospitals' medical claims in Taiwan? Int J Health Care Finance Econ. 2014;14(4):369–84.

47. Mohammed S, Souares A, Bermejo JL, Sauerborn R, Dong H. Performance evaluation of a health insurance in Nigeria using optimal resource use: health care providers perspectives. BMC Health Serv Res. 2014;14(1):127.

48. Alqasim KM, Ali EN, Evers SM, Hiligsmann M. Physicians' views on pay-for-performance as a reimbursement model: a quantitative study among Dutch surgical physicians. J Med Econ. 2016;19(2):158–67.

49. Chen T-T, Lai M-S, Chung K-P. Participating physician preferences regarding a pay-for-performance incentive design: a discrete choice experiment. Int J Qual Health Care. 2015;28(1):40–6.

50. Olafsdottir AE, Mayumana I, Mashasi I, Njau I, Mamdani M, Patouillard E, Binyaruka P, Abdulla S, Borghi J. Pay for performance: an analysis of the context of implementation in a pilot project in Tanzania. BMC Health Serv Res. 2014;14(1):392.

51. Federman AD, Woodward M, Keyhani S. Physicians' opinions about reforming reimbursement: results of a national survey. Arch Intern Med. 2010;170(19):1735–42.

52. Agyepong IA, Aryeetey GC, Nonvignon J, Asenso-Boadi F, Dzikunu H, Antwi E, Ankrah D, Adjei-Acquah C, Esena R, Aikins M, et al. Advancing the application of systems thinking in health: provider payment and service supply behaviour and incentives in the Ghana National Health Insurance Scheme – a systems approach. Health Research Policy and Systems. 2014;12(1):35.

53. Koduah A, van Dijk H, Agyepong IA. Technical analysis, contestation and politics in policy agenda setting and implementation: the rise and fall of primary care maternal services from Ghana's capitation policy. BMC Health Serv Res. 2016;16(1):323.

54. Feng Z, Grabowski DC, Intrator O, Zinn J, Mor V. Medicaid payment rates, case-mix reimbursement, and nursing home staffing—1996-2004. Med Care. 2008;46(1):33–40.

55. Harrington C, Swan JH, Carrillo H. Nurse staffing levels and Medicaid reimbursement rates in nursing facilities. Health Services Research. 2007;42(3p1):1105–29.

56. Sieverding M, Onyango C, Suchman L. Private healthcare provider experiences with social health insurance schemes: findings from a qualitative study in Ghana and Kenya. PLoS One. 2018;13(2):e0192973.

57. Basinga P, Gertler PJ, Binagwaho A, Soucat AL, Sturdy J, Vermeersch CM. Effect on maternal and child health services in Rwanda of payment to primary health-care providers for performance: an impact evaluation. Lancet. 2011;377(9775):1421–8.

58. Reschovsky JD, Hadley J, Landon BE. Effects of compensation methods and physician group structure on physicians' perceived incentives to alter services to patients. Health Services Research. 2006;41(4p1):1200–20.

59. Tufano J, Conrad DA, Sales A, Maynard C, Noren J, Kezirian E, Schellhase KG. Effects of compensation method on physician behaviors. Am J Manag Care. 2001;7(4):363–73.

60. Wang J, Hong SH, Meng S, Brown LM. Pharmacists' acceptable levels of compensation for MTM services: a conjoint analysis. Res Soc Adm Pharm. 2011;7(4):383–95.

61. Chudner I, Goldfracht M, Goldblatt H, Drach-Zahavy A, Karkabi K. Video or in-clinic consultation? Selection of attributes as preparation for a discrete choice experiment among key stakeholders. The Patient - Patient-Centered Outcomes Research. 2018.

62. Barber S, Bekker H, Marti J, Pavitt S, Khambay B, Meads D. Development of a discrete-choice experiment (DCE) to elicit adolescent and parent preferences for hypodontia treatment. The Patient - Patient-Centered Outcomes Research. 2019;12(1):137–48.

63. Hennig-Schmidt H, Selten R, Wiesen D. How payment systems affect physicians' provision behaviour—an experimental investigation. J Health Econ. 2011;30(4):637–46.

64. Gomes B, de Brito M, Sarmento VP, Yi D, Soares D, Fernandes J, Fonseca B, Gonçalves E, Ferreira PL, Higginson IJ. Valuing attributes of home palliative care with service users: a pilot discrete choice experiment. J Pain Symptom Manag. 2017;54(6):973–85.

Acknowledgements

The authors thank Associate Professor Matthew Beck of the University of Sydney Business School for providing technical support in designing the experiment, choice tasks, data analysis, and revising the attributes and levels.

Funding

MO and JK are supported by a Wellcome Trust grant intermediate fellowship awarded to JC (#101082). EB is supported by a Wellcome Trust training fellowship (#107527). Funds from the Wellcome Trust core grant (#092654) awarded to KEMRI-Wellcome Trust Research Program also supported this work. The funders and the World Bank had no role in the study design, data analysis, decision to publish, drafting or submission of the manuscript.

Author information

EB and JC conceptualised the study. Searching of relevant literature was conducted by JK and EB. The interview guide was developed by MO, JK, and EB. Data were collected by MO and JK. MO developed the coding tree, which was reviewed by EB and JK. Coding, charting, and mapping were conducted by MO, with EB and JK contributing to the interpretation of findings. The pilot study experimental design, data collection, and analysis were conducted by MO and JK. GAA contributed to the pilot study experimental design, data analysis, and rewording of the attributes and levels. The initial manuscript was drafted by MO and subsequently revised in collaboration with EB, JK, GAA, and JC. All authors read and approved the final manuscript.

Correspondence to Melvin Obadha.

Ethics declarations

Ethics approval and consent to participate

The qualitative and pilot studies received ethical approval from the Kenya Medical Research Institute / Scientific and Ethics Review Unit (KEMRI/SERU) under SSC No: 2795 and KEMRI/SERU/CGMR-C/115/3617 respectively.

Furthermore, the National Commission for Science, Technology and Innovation (NACOSTI) gave clearance for the study to be conducted. Finally, all the participants signed the informed consent form before being interviewed or completing the pilot DCE survey questionnaire.

Consent for publication

Consent to publish the findings was obtained from the study participants.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Obadha, M., Barasa, E., Kazungu, J. et al. Attribute development and level selection for a discrete choice experiment to elicit the preferences of health care providers for capitation payment mechanism in Kenya. Health Econ Rev 9, 30 (2019) doi:10.1186/s13561-019-0247-5

Keywords

  • Attribute development
  • Capitation
  • Discrete choice experiment
  • Kenya
  • Provider payment mechanisms
  • Sub-Saharan Africa