Starting the Process
Using the textbook as your guide (this means citing it) and a database available through Park’s library, find, summarize, and discuss 1 peer-reviewed academic article using the steps recommended in the book. The article should directly relate to the subject area you are thinking about focusing on for your core assessment project.
Then for that article, show how you could do it in a completely unscientific way by changing the question and method using everything you read from the book about what a good study is.
You may not copy any part of the article; your summary and discussion should be your own words, though in-text page based citation is required.
Article:
By: Cleland, Jennifer; Porteous, Terry; Skåtun, Diane. Medical Education, Nov. 2018, Vol. 52, Issue 11, p. 1113–1124. 12 pp., 1 diagram, 1 chart. DOI: 10.1111/medu.13657. Database: Consumer Health Complete – EBSCOhost
Subjects: DECISION making; LABOR market; LABOR supply; MEDICAL education; NEEDS assessment; SATISFACTION; STUDENTS; VOCATIONAL guidance; MASTERS programs (Higher education)
Book:
Title: Understanding Research
ISBN-13: 9780205925322
Author: W. Lawrence Neuman
Edition: 2
Publisher: Pearson
Published: August 2016
Class,
I thought you might find it useful to have additional input regarding the Week 1 Discussion (formatting and expectations).
In the following, I provide a “translation” of the instructions created by the Course Developer. I then offer hints and access to additional needed resources.
Original instructions are in black, bold text. I go through the original instructions in a sequential manner, inserting hints and resources.
Week 1 Discussion Thread Instructions
Using the textbook as your guide (this means citing it) (1) and a database available through Park’s library (2), find, summarize, and discuss 1 peer-reviewed academic article using the steps recommended in the book (3). The article should directly relate to the subject area you are thinking about focusing on for your core assessment project (4).
Then for that article, show how you could do it in a completely unscientific way by changing the question and method using everything (5) you read from the book about what a good study is.
You may not copy any part of the article; your summary and discussion should be your own words, though in-text page based citation is required.
Neuman, W. L. (2017). Understanding research. Boston, MA: Pearson/Allyn and Bacon.
1. Hint: an online resource to help you learn about proper APA citation is http://www.bibme.org/. Try typing in the ISBN for an older version of our textbook (978-0-205-47153-9) and click “Find Book.” Then click “Select.” On the right-hand side of the screen, you will find the citation. Now select “APA” from the bibliography list menu. This resource works for almost any reference source (book, magazine, newspaper, website, etc.).
Note, when you are in the Park online library, several of the databases enable you to generate the citations for articles. Additional citation resources are listed at the end of this document.
2. Park University Online Library: http://www.park.edu/Library/
3. Chapter 2: Within Chapter 2 you will find the needed steps for reviewing an academic article. (Yes, I know this was not part of the “assigned” readings for the week.) The reading “What to Record in Notes” provides good ideas.
4. In the Course Overview module you will find a description of the Core Assessment project.
5. This part requires a little clarification. Here you should have some fun. Think about how you could alter the study in order to create a “pseudo-science, pop-media, newsstand worthy” result. How would the question of study (the hypothesis) need to change? What method of approach could you use to totally wreck any chance of a valid outcome? Once you have “overhauled” the study methodology, challenge your classmates to explain why your “replication” of the study is a bust. What principles of basic research have you violated, based upon your readings from Chapter 1? Why is it not “scientific”? Let’s see if they can figure it out – providing their explanations in their peer-follow up responses for the week.
To help create a clear format for the Main Entry responses posted this week, let us apply the following outline, breaking your main entry into 8 distinctive segments. Following this outline will ensure you complete all elements of the assignment and will facilitate peer feedback. Please label each section (e.g., Topic, Research Type). This will also expedite the grading of your discussion entries.
1. Article Citation
2. Textbook Citation
3. Topic: Describe the focus of the research conducted.
4. Research Type: Describe the study as exploratory, descriptive, explanatory, or evaluative (and offer why you think so).
5. Variety of Research Applied: Was it a survey, an experiment, a content analysis, etc.?
6. Overall Intended Application: Was the research intended as basic social research or applied social research?
7. Your UNSCIENTIFIC Re-Do Ideas: Here you should have some fun. Think about how you could alter the study in order to create a “pop-media, newsstand worthy” result. How would the question of study (the hypothesis) need to change? What method of approach could you use to totally wreck any chance of a valid outcome?
8. Evaluate Your Re-Do: Now challenge your classmates to explain why your “replication” of the study is a bust. What principles of basic research have you violated, based upon your readings from Chapter 1? Why is it not “scientific”? Let’s see if they can figure it out – providing their explanations in their peer follow-up responses for the week.
What can discrete choice experiments do for you?
Jennifer Cleland,1 Terry Porteous1 & Diane Skåtun2
CONTEXT In everyday life, the choices we
make are influenced by our preferences for
the alternatives available to us. The same is
true when choosing medical education,
training and jobs. More often than not, those
alternatives comprise multiple attributes and
our ultimate choice will be guided by the
value we place on each attribute relative to
the others. In education, for example, choice
of university is likely to be influenced by
preferences for institutional reputation,
location, cost and course content; but which
of these attributes is the most influential? An
understanding of what is valued by applicants,
students, trainees and colleagues is of
increasing importance in the higher
education and medical job marketplaces
because it will help us to develop options that
meet their needs and preferences.
METHODS In this article, we describe the
discrete choice experiment (DCE), a survey
method borrowed from economics that allows
us to quantify the values respondents place on
the attributes of goods and services, and to
explore whether and to what extent they are
willing to trade less of one attribute for more
of another.
CONCLUSIONS To date, DCEs have been
used to look at medical workforce issues but
relatively little in the field of medical
education. However, many outstanding
questions within medical education could be
usefully addressed using DCEs. A better
understanding of which attributes have most
influence on, for example, staff or student
satisfaction, choice of university and choice of
career, and the extent to which stakeholders
are prepared to trade one attribute against
another is required. Such knowledge will allow
us to tailor the way medical education is
provided to better meet the needs of key
stakeholders within the available resources.
Medical Education 2018: 52: 1113–1124
doi: 10.1111/medu.13657
1 Centre for Healthcare Education Research and Innovation (CHERI), University of Aberdeen, Aberdeen, UK
2 Health Economics Research Unit, University of Aberdeen, Aberdeen, UK
Correspondence: Jennifer Cleland, Centre for Healthcare Education Research and Innovation (CHERI), Polwarth Building, University of Aberdeen, Aberdeen, UK, AB25 2ZD. Tel: 00 44 1224 437257; E-mail: jen.cleland@abdn.ac.uk
© 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education;
MEDICAL EDUCATION 2018 52: 1113–1124
research approaches
INTRODUCTION
A previous article in ‘The Cross-cutting Edge’ series
in this journal described the range of economic
methods used in cost analyses that can be applied
in medical education.1 Walsh et al. described the
application of different methods of cost analysis to
help define and value different forms of medical
education, most commonly in monetary terms.1 We
welcome the increasing focus on the economics
of medical education, particularly in this time of
increasing calls for accountability in health
professional education.2 However, value is not
just about money; value can also mean subjective
worth, or what is important to an individual.
Understanding value and relative value can help
answer numerous questions in medical education
and training. For example, what contributes most
to student satisfaction with a particular rotation
or course? What is the deciding factor in
choosing a medical school or residency
programme? What ‘packages’ might be most
effective in attracting doctors to work in remote
and rural positions?
Although there is an extensive literature examining
what is important to patients in terms of delivery of
care,3–7 looking at what is valued in education and
medical education is relatively new (see later in this
paper for examples). Yet knowing what is valued by
applicants, students, trainees and colleagues is of
increasing importance in the higher education and
medical job marketplaces. Medical schools,
residency/training programmes and employers are
under increasing pressure to provide a high-quality,
consumer-centred experience in a resource-
constrained educational and occupational
marketplace. Medical education and training are
commodities, and when assessing the value of a
commodity, it is important to consider the views of
consumers. In other words, in order to develop a
commodity that is workable (i.e. that meets the needs
and preferences of users), providers and
policymakers need to consider not only their own
preferences (and constraints), but also those of users.
As a first step in addressing this gap in medical
education research, the current paper is a synopsis
of theory and findings published predominantly in
health economics which are relevant to the health
professions education community.8 We focus on a
quantitative research method known as the discrete
choice experiment (DCE), which is frequently used
within a cost–benefit analysis framework. The aim of
our paper was to summarise what DCEs are, what
they involve and how they have been used
previously, and to suggest ways in which they can be
used to inform how different aspects of medical
education and training might be optimised.
Throughout this paper, we will use actual and
hypothetical examples and case studies to illustrate
the processes and possibilities of conducting DCE
work within medical education.
WHAT IS A DCE?
The DCE is a multidimensional stated preference
(SP) method9 used to elicit respondents’
preferences for attributes of an item under
investigation. Stated preference methods are used
to elicit an individual’s preferences for ‘alternatives’
(whether goods, services or courses of action) when
actual behaviour cannot be observed. The DCE
allows us to value individually, or as a ‘bundle’, the
component parts (attributes) of goods, services or
interventions in monetary terms or alternative
relevant measures, from an individual or societal
perspective.10–12 Crucially, DCEs also enable us to
determine the relative importance of those
attributes and how people might trade less of one
attribute for more of another.
Discrete choice experiment surveys originally
evolved from conjoint analysis methods developed
in the 1970s, when they were predominantly used in
the domains of market research and transport
studies to understand consumer demand for goods
and services.13 In conjoint analysis, participants are
typically offered a predetermined set of potential
products or services, and their responses
(preferences) are analysed to determine the implicit
valuation of the individual elements of the product
or service. Take, for example, the act of buying a
new car. The deciding factors might be price,
brand, hatchback, sunroof or hybrid engine. How
do those considering buying a new car trade
between these factors?
The field evolved and emphasis shifted from
conjoint analysis approaches, based on
mathematical theory, to the DCE approach, which is
based on theories of choice behaviour.14 The DCE
is underpinned by two key theories. The first of
these, Lancaster’s characteristics theory of value, is
based on the idea that the value (or utility or
satisfaction) that an individual associates with any
item (good) ‘is derived from the characteristics
(also known as attributes) that make up the good,
rather than the good per se’.15 The utility (U)
associated with an item is thus represented as a
function of all attributes:
U = U(X1, X2, … Xk)
where X represents the utility associated with each
of the k attributes of the item under investigation.
The second underpinning theory is random
utility theory (RUT). This theory posits that
individuals make choices based on their personal
preferences (observable factors), but that choices
can also be influenced by random, unexplainable
factors.16 Thus, for an alternative j (i.e. a scenario
comprising all attributes at specified levels), the
utility function (U) of an individual (n) can be
represented as:
Unj = Vnj + enj
where V is the ‘systematic’ component of the
function and e is a ‘random’ component. Further,
the systematic component (V) is a function of the
attributes and levels (i.e. the observable
components) of the item under investigation:
Vnj = ASCj + b1Xnj1 + b2Xnj2 + … + bKXnjK
where ASC ‘captures the mean effect of the
unobserved factors in the error terms for each of
the alternatives’17 and b-values are the regression
coefficients for each of the attributes and are used
to quantify strength of preference.
Consistent with their origins in consumer theory,
DCEs operate under an assumption of utility-
maximising behaviour (i.e. DCE respondents are
‘rational agents’ and will prefer the options that
offer the greatest utility [or value or satisfaction] for
least outlay).18
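To make the two theories above concrete, the following sketch shows how, under random utility theory with independent Gumbel-distributed random components, the systematic utilities V lead to conditional-logit (softmax) choice probabilities. The attribute names and beta coefficients are invented for illustration and are not taken from any study discussed in this paper.

```python
import math

# Hypothetical coefficients (assumed, for illustration only):
# utility per 100,000 USh of monthly salary, and for a supportive manager.
BETA = {"salary": 0.8, "supportive_manager": 1.2}

def systematic_utility(attributes, asc=0.0):
    """V = ASC + sum over k of beta_k * X_k for one alternative."""
    return asc + sum(BETA[name] * level for name, level in attributes.items())

def choice_probabilities(alternatives):
    """With i.i.d. Gumbel random components e, random utility theory
    implies conditional-logit (softmax) choice probabilities."""
    v = [systematic_utility(a) for a in alternatives]
    exp_v = [math.exp(x) for x in v]
    total = sum(exp_v)
    return [e / total for e in exp_v]

# Posting A: lower salary but a supportive manager; Posting B: higher salary only.
posting_a = {"salary": 7.0, "supportive_manager": 1}
posting_b = {"salary": 10.0, "supportive_manager": 0}
probs = choice_probabilities([posting_a, posting_b])
```

With these made-up coefficients, Posting B’s salary advantage (V = 8.0) outweighs Posting A’s supportive manager (V = 6.8), so B receives the higher predicted choice probability; changing the betas changes the trade-off, which is exactly what a fitted DCE model quantifies.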
WHY USE A DCE?
It seems we are forever asking, and being asked, to
state preferences. The most commonly used
methods for eliciting preferences are ranking and
rating scales such as those that ask the respondent
to express how much he or she agrees or disagrees
with a particular statement (e.g. ‘This training was a
good use of my time’) on a numeric scale.19 Some
forms of best–worst scaling, another technique of
eliciting preferences increasingly used within health
care, can be considered extensions of a ranking
exercise.20 The DCE differs from traditional ranking
and rating approaches in its assumptions, format
and possibilities.
Typically, in a DCE survey, respondents are asked to
answer a series of questions (choice sets) in which
they must choose between two or more similar
items (alternatives or scenarios) that are described
in terms of a number of attributes, differing only in
the levels allocated to those attributes (Fig. 1 gives
an example of a choice set). By systematically
varying these levels, regression analyses can quantify
not only the relative values respondents place on
individual attributes, but also the degree to which
they are prepared to trade less of one attribute for
more of another (see later for further explanation).
Once a good or service has been deconstructed into
its component attributes in this way, we can
reconstruct it again into specific scenarios of
interest and directly compare the levels of utility
(or value or satisfaction) associated with each of
these scenarios against one another. For example,
the attributes of a health service might be distance
to clinic, waiting time, consultation time and the
health care professional seen. A service delivered
locally by a nurse with a waiting time of two months
and consultation time of 10 minutes could be
compared with the same nurse-led service in an out-
of-town clinic with a waiting time of three months
and a consultation time of 20 minutes.
Trading between attributes
The capability to quantify trading behaviour is a key
advantage of DCEs. In an ideal world, we would all
like the best of everything (such as shorter waiting
time and a longer appointment in the previous
scenario); in reality, scarce resources mean we may
have to compromise. Unlike ranking and rating
scales, DCEs can inform providers’ and policymakers’
decisions about where, and to what degree, less
favourable substitutions can be made and
corresponding compensations applied. Where ‘Cost’
is included as an attribute, we can calculate how
respondents value attributes in terms of monetary
units and how much money must be offered to
compensate for less preferred options (see below).
However, as well as identifying financial
compensations, DCE findings can also be used to
demonstrate how offering more of a ‘less
important’ attribute (i.e. less preferred) might
compensate for providing less of an ‘important’
one. For example, a 2015 study by Holte et al.21
surveyed Norwegian final-year medical students and
interns to explore how factors such as practice size
or opportunities to control working hours might
influence their choices between jobs in rural and
urban areas. The authors concluded that the
probability of young doctors choosing a rural job
over an urban one could be improved by
manipulating these non-pecuniary factors.21
Valuation on a common metric
One way of evaluating a multi-component
intervention would be to ask respondents to rate or
rank the components in order of importance or
preference. The problem with such methods is an
inability to quantify the relative importance of
components if those components are evaluated
using non-quantifiable or dissimilar scales. For
example, students might rank small tutorial groups
more highly than weekly class assessments, but by
how much do they prefer one over the other?
Rating exercises often use satisfaction scales, but
these can vary between applications (e.g. the
number of points on the scale, the labels used)
making it difficult to compare across studies, and
‘units’ are difficult to quantify.
When one of a DCE’s attributes is ‘Cost’ (or an
alternative monetary measure, such as ‘Salary’ or
‘Fees’), preferences can be measured using the
common metric of monetary units. Estimates of
willingness to pay (WtP) can be calculated for
marginal changes in attribute levels to allow for
direct comparisons both across attribute levels and
between attributes. Other terms are sometimes used
in place of WtP, such as ‘willingness to accept’ or
‘willingness to forgo’, but the calculations are
identical. For example, in a 2012 study undertaken
by Rockers et al.22 (illustrated in Fig. 1), researchers
calculated respondents’ willingness to forgo salary
in exchange for better working conditions.
The ‘Cost’ attribute can be described using defined
sums of money in a relevant currency or,
Imagine that you have just completed your medical school training and you have also COMPLETED
YOUR INTERNSHIP AND BEEN CONFIRMED. You have decided NOT to go directly into specialty
training. Rather, you have decided to begin working as a general practitioner. You are checking
the newspaper for available job postings, and find that there are two postings available in
government run health facilities. Both of the facilities in these postings are located in rural areas.
Both facilities are equal distance from the nearest big town, and are equal distance from Kampala.
Also, both of these facilities are in areas that are entirely safe from violent conflict. However, each
of these two postings has different benefits, including: salary, housing, the quality of the facility,
the length of time you are committed, preferences given for study placement after the
commitment is over, and support from the district health officer.
Please imagine yourself in this situation and make a real decision as to which of these two
postings you would prefer. Although we know that some government benefits to health workers
have not been properly implemented in the past, please assume that you will receive the full
benefits described for your posting. In making your choice, please read carefully the full list of
benefits for each posting and do not imagine any additional features of these postings.
Please tell us which of these job postings you prefer.
Choose by clicking one of the buttons below:

Attribute | Posting A | Posting B
Quality of the facility | Basic (e.g. unreliable electricity, equipment and drugs and supplies not always available) | Advanced (e.g. reliable electricity, equipment and drugs and supplies always available)
Housing | Free basic housing provided | Housing allowance provided, enough to afford basic housing
Length of commitment | You are committed to this position for 2 years | You are committed to this position for 5 years
Study assistance | The government will pay your full tuition for a study program (e.g. specialty training) after your commitment is over | The government will not provide any financial assistance for a study program after your commitment is over
Salary | 700,000 USh per month | 1,000,000 USh per month
Management | The district health officer in your district is supportive and makes work easier | The district health officer in your district is not supportive and makes work more difficult

Figure 1 Medical students’ discrete choice experiment choice question from Rockers et al.22
alternatively, can be represented as a proportion of
a notional sum, such as average salary. For example,
a 2016 DCE carried out by Cleland et al. concerned
medical trainees’ preferences for characteristics of a
training post. Respondents were asked to choose
between two hypothetical posts in which levels of
the ‘Potential earnings’ attribute were set at 5%,
10% or 20% above average earnings.23 Findings
from this study revealed that trainees would accept
a move from a position with ‘good’ working
conditions (defined by ‘rotas, amount of on-call
time, time off, staffing levels, etc.’) to one with
‘poor’ conditions if they were compensated with
potential earnings of 49.8% above average earning
potential (all other attributes being equal).
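The arithmetic behind such compensation estimates is a simple ratio of regression coefficients: the attribute change and the monetary attribute must cancel out in utility terms. The coefficient values below are invented to reproduce the 49.8% figure illustratively; they are not the actual estimates from Cleland et al.

```python
# Hypothetical conditional-logit coefficients (assumed for illustration):
# the disutility of moving from 'good' to 'poor' working conditions, and
# the utility per percentage point of potential earnings above average.
beta_poor_conditions = -0.996
beta_earnings_pct = 0.02

# The compensating earnings change delta leaves utility unchanged:
#   beta_poor_conditions + beta_earnings_pct * delta = 0
compensation_pct = -beta_poor_conditions / beta_earnings_pct
print(compensation_pct)  # approximately 49.8 with these illustrative numbers
```

The same ratio form underlies willingness-to-pay estimates generally: dividing any attribute coefficient by the cost (or salary) coefficient expresses its value in monetary units.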
Where inclusion of a cost attribute is, for whatever
reason, inappropriate, the relative values of
attributes can be measured using an alternative
continuous metric (e.g. time) or, indeed, as a ratio
of whatever natural units attributes are measured in.
Alternative methods often used to value goods or
services using the common metric of money include
contingent valuation (CV). In this case, respondents
are asked to state their WtP for an item under
investigation. By contrast with the DCE, which
provides an indirect measure of WtP, CV methods
require a direct response to a question such as:
‘How much would you be willing to pay for this
item?’ The design of a CV experiment comes with
its own set of challenges.9 However, its main
drawback in comparison with a DCE is that it values
items as a whole, rather than as a set of component
attributes. Thus, it is difficult to determine from a
CV experiment how one might go about modifying
a service or intervention to improve the utility it
offers.
Choices that mimic real life
Choices made in a DCE are more similar to those in
real-life situations than choices in most other
valuation techniques. Consumers are regularly faced
with multi-attribute decisions in the marketplace, be
it when buying food or clothing or when choosing a
holiday or a car. Implicit in those decision-making
situations is a weighing up of the pros and cons of the
alternatives on offer. As described above, DCEs are
multidimensional and allow for trading of the
component parts of the item under scrutiny. This
similarity to real-life situations is likely to have a
positive impact on the validity of the findings
generated in DCEs.
A testing ground
The hypothetical nature of a DCE makes it a useful
method for assessing goods, services or
interventions that do not yet exist (because this
information cannot be observed through actual
behaviour [revealed preferences]), or where we can
only observe a single net effect of the good/service/
intervention (because valuable information about
the component parts is unobservable). Thus,
proposals for new ways of providing, for example,
medical education or alternative medical career
paths can be evaluated in the first instance without
‘real’ (revealed preference) data that may not be
feasible to collect or can only be collected through
costly pilots. For example, Robyn et al.24 conducted
a DCE amongst students and health workers in
Cameroon to explore the impact of incentives on
preferences for rural posts. Analysis of the
preference data included estimating the impact on
preferences of 10 separate ‘packages’, each of which
offered different incentives. Clearly, it would be
unfeasible to test such a large number of packages
in real life. Instead, the DCE provided information
about hypothetical packages that policymakers and
providers could use to decide which incentives were
most likely to achieve the desired ends (improved
recruitment and retention of health workers in
rural areas) within available budgets.
DESIGN AND ANALYSIS OF THE DCE
Attribute selection
Crucial to the development of a DCE is the
selection of attributes and levels. It is self-evident
that when DCEs are undertaken to inform policy or
practice, attributes and their levels must be
plausible and actionable; there would be little point
in valuing items that are unrealistic or
undeliverable, or with which respondents cannot
engage. This step in DCE design is important to
ensure that participants can understand and engage
with the experiment, and that the results are of
practical use.
Best practice dictates that, as well as reviewing the
relevant literature, qualitative methods are
employed to explore which aspects of the item
under investigation are important to
stakeholders.25,26 Qualitative methods allow
researchers to not only define the range of
potential attributes, but also to achieve an in-depth
understanding of the context in which the
attributes exist, and the language commonly used
by stakeholders to describe them. For example, in a
study exploring medical students’ preferences for
characteristics of rural medical postings in Ghana,
Kruk et al.27 conducted seven focus groups with
third- and fifth-year medical students to collect data
for attribute development. Discussions covered
students’ experiences and perceived barriers and
motivators to rural practice, as well as their career
plans. To ensure that consideration was given to a
wide range of perspectives, focus group participants
were also asked to consider potential attributes
extracted from a literature review and from
discussions with practising and governmental
physicians.27
Contextualising the experiment
It is important that all respondents in any given
DCE answer the same fundamental question. This
means not only that they are given the same choice
sets, but also that they make the choices under the
same conditions. As far as is possible, the researcher
must try to cut down on ‘background noise’ and
reduce unmeasured variation in respondents’
decisions.
For example, in Uganda, in response to difficulties
in recruiting and retaining health workers in rural
areas, Rockers et al.22 undertook a DCE to analyse
the preferences of medical, nursing, pharmacy and
laboratory students for potential rural postings.
Figure 1 shows an example from the online DCE
sent to medical students, in which, prior to
choosing their preferred posting, respondents are
asked to imagine the scene.
These additional instructions aim to ensure that
respondents have a clear idea of the circumstances
surrounding the choice (which may differ from the
situation in which they find themselves at present),
thus minimising any variation in the interpretation
of the choice situation.
Generating the DCE design
A full account of DCE design is beyond the scope
of this article; such information is, however, readily
available from the existing literature.17,28 Briefly,
once attributes and levels have been decided upon,
statistical software such as SAS (SAS Institute, Inc.,
Cary, NC, USA) or NGENE (Choice Metrics, Sydney,
NSW, Australia) is most commonly used to create
and select a series of hypothetical scenarios (also
known as ‘alternatives’ or ‘profiles’), each of which
presents the included attributes, set at different
levels. This is illustrated in Figure 1, in which
respondents must choose between Posting A and
Posting B. When these scenarios are combined in
sets of two or more, the resulting ‘choice sets’ will
represent a statistically efficient design, or one that
will collect sufficient information to allow
preferences to be estimated with acceptable
precision.
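As a minimal sketch of where design software starts, the snippet below enumerates the full factorial of profiles for three hypothetical two-level attributes and pairs them into candidate choice sets. The attributes and levels are assumptions loosely echoing Figure 1; packages such as NGENE or SAS would then select a statistically efficient subset of these pairs rather than using them all.

```python
from itertools import combinations, product

# Hypothetical attributes and levels for a rural-posting DCE (assumed).
attributes = {
    "salary": ["700,000 USh", "1,000,000 USh"],
    "housing": ["free basic housing", "housing allowance"],
    "commitment": ["2 years", "5 years"],
}

# Full factorial: every combination of levels (2 x 2 x 2 = 8 profiles).
profiles = [dict(zip(attributes, levels))
            for levels in product(*attributes.values())]

# Pairing distinct profiles gives candidate two-alternative choice sets;
# efficient-design software selects an optimal subset of these.
choice_sets = list(combinations(profiles, 2))
print(len(profiles), len(choice_sets))  # prints: 8 28
```

Even three binary attributes already yield 28 candidate pairs, which is why fractional, statistically efficient designs matter as soon as the attribute space grows.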
Data collection
In general, DCEs are embedded in surveys, and the
principles of good survey design are as important in
DCEs as in any other survey.29,30 These include
qualities such as ease of reading, clear instructions,
absence of questions likely to lead to bias,
appropriate response categories, the logical
ordering of questions and so on. Survey mode
should also be carefully selected. Increasingly, the
Internet is used to administer surveys, including
DCEs, for reasons that include lower research costs.
Researchers must, however, be aware that using
different modes (e.g. mail, Internet, interviews) to
collect data can lead to variations in
representativeness, convergent validity and data
quality.31,32 The choice of survey mode will depend
on the research question and population of interest
and findings should be interpreted with mode
effects in mind.
Analysis
The analysis of DCEs has advanced since their first
application in health care and continues to evolve.
Briefly, choice data are, in general, analysed using
regression techniques. In early examples of DCEs,
the most frequently used regression analysis was
conditional logit (also known as multinomial logit
[MNL]), a technique similar to logistic regression.
More recently, other models have been used to
explore variability in preferences (i.e. how
individual respondents differ in their preferences)
using, for example, mixed logit and latent class
logit models.33,34
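To illustrate the estimation target, here is a minimal sketch of the conditional-logit log-likelihood for choice data; the function and toy data are assumptions, and real analyses would maximise this with specialised statistical software rather than by hand.

```python
import math

def conditional_logit_loglik(betas, choice_data):
    """Log-likelihood of observed choices under a conditional logit.
    choice_data: list of (alternatives, chosen_index) pairs, where each
    alternative is a list of attribute values aligned with betas."""
    ll = 0.0
    for alternatives, chosen in choice_data:
        v = [sum(b * x for b, x in zip(betas, alt)) for alt in alternatives]
        log_denom = math.log(sum(math.exp(u) for u in v))
        ll += v[chosen] - log_denom
    return ll

# Toy data (assumed): two choice sets, two alternatives, two attributes each.
data = [
    ([[1.0, 0.0], [0.0, 1.0]], 0),
    ([[0.5, 0.5], [1.0, 0.0]], 1),
]
print(conditional_logit_loglik([0.7, 0.3], data))
```

Maximising this function over the betas yields the coefficients used throughout a DCE analysis; mixed logit and latent class models generalise the same likelihood to allow preferences to vary across respondents.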
Limitations and developments
The collection of data from hypothetical scenarios
in DCEs has drawn some criticism about their
external validity and the possibility of hypothetical
bias: how can we be sure that choices made under
hypothetical circumstances reflect those that
respondents will make in real life? Few studies have
considered the external validity of health-related
DCEs; a recent systematic review and meta-analysis
included eight such studies that had tested external
validity by comparing predicted choices with those
observed in real life.35 The authors found that
DCEs had ‘moderate, but not exceptional, accuracy
when predicting health-related choices’; pooled
sensitivity and specificity estimates were 88% (95%
confidence interval [CI] 81–92%) and 34% (95% CI
23–46%), respectively. Their conclusions concur
with those of Janssen et al.,36 who, while
acknowledging that the DCE represents a useful way
of eliciting preferences, urge caution when
interpreting findings in the light of this uncertainty
about external validity.
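The pooled sensitivity and specificity quoted above summarise how often DCE-predicted choices matched choices observed in real life. With invented counts chosen to reproduce the headline figures, the definitions work out as follows:

```python
# Hypothetical confusion-matrix counts (illustrative only):
# rows are real-life choices, columns are DCE predictions.
tp, fn = 88, 12   # real-life "yes": predicted yes / predicted no
tn, fp = 34, 66   # real-life "no":  predicted no  / predicted yes

sensitivity = tp / (tp + fn)   # share of actual "yes" correctly predicted
specificity = tn / (tn + fp)   # share of actual "no" correctly predicted
print(sensitivity, specificity)   # 0.88 and 0.34
```

A specificity of 34% means that, in these pooled studies, two-thirds of the options respondents rejected in real life had nonetheless been predicted as chosen, which is the uncertainty behind the authors' call for caution.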
The inclusion of a Cost attribute, although
extremely convenient from the researcher’s point
of view, can lead to problems; a common metric
based on money is useful to calculate WtP and
thus compare the magnitude of the values
respondents place on attributes, but it does not
necessarily follow that they would actually be
willing to pay the estimated amounts. Caution,
therefore, should be exercised when interpreting
WtP estimates.
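The common-metric point above can be made concrete: WtP for an attribute is conventionally estimated as the ratio of that attribute's utility coefficient to the negative of the Cost coefficient. The coefficients here are invented for illustration:

```python
# Assumed utility coefficients from a fitted choice model
# (not taken from any study cited in the article).
beta_rural = 0.8      # weight on a rural-location attribute
beta_cost = -0.05     # weight per unit of cost (negative: cost is a "bad")

# WtP: how many cost units a respondent would trade for the attribute.
wtp_rural = -beta_rural / beta_cost   # sign flip makes valued attributes positive
```

This ratio compares attribute values on a single money scale, but, as the text notes, it does not establish that respondents would actually pay that amount.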
A further potential problem is that some
respondents may object to the idea of paying for
the item under investigation, which can lead to
protest behaviour whereby respondents simply do
not consider the other attributes and do not engage
with the experiment. This can often happen in the
context of health care in the UK, which is usually
free at the point of delivery, but may not be an
issue in other contexts in which paying for health
care is the norm. Additionally, WtP estimates can be
affected by issues around ability to pay;
respondents’ choices may be influenced by how
much they can afford, rather than by how they
value the attributes.
The role of heuristics, a concept developed within cognitive
psychology,37 and their implications for choices made
within DCEs have also been considered.38 A
heuristic is an efficient rule that is
followed to simplify a complex decision-making task.
One such rule, attribute non-attendance (ANA),
may lead to the systematic exclusion from decision
making of certain attributes. Research is also
utilising eye tracking techniques to consider the
divergence between stated attendance of attributes
and visual attendance.39 More recent research has
investigated whether evidence of ANA within DCEs
points to respondents’ simplifying of the
hypothetical task (with the associated implications
for biases within preference estimation) or whether
it reflects actual preferences.40 Eye tracking is also
used to consider how the processing of the
information on offer within the survey design (such
as attribute order) might influence the choices
made.41
EXAMPLES OF DCEs FROM THE LITERATURE
A small number of DCEs have been used to elicit
preferences concerning educational issues and early
career choices. Other published DCEs, indirectly
linked to medical education, have elicited
preferences for different aspects of jobs or careers
in health-related professions.
Educational preferences
Discrete choice experiments can be used to
inform the content and format of medical
education. Cunningham et al.42 conducted a DCE
to establish medical students’ preferences for the
way in which the MD programme at McMaster
University (Hamilton, ON, Canada) was organised.
The aim was to engender student engagement
with the education programme, on the
assumption that this would improve its
effectiveness. Medical students were asked to
choose between alternative MD programmes; 15
attributes were used to describe the hypothetical
programmes, including tutorial group size, the
degree to which tutorials were web-enhanced, the
role of tutors and the format of tutorial problems.
The study concluded that: ‘. . .most students
preferred a small group, web-supported, problem-
based learning approach led by content experts
who facilitated group process.’42 Findings also
suggested, however, that students would accept a
less preferred programme if financial savings were
to be reinvested in, for example, web-enhanced
tutorial processes.
Other studies exploring non-medical students’
preferences for aspects of education have been
undertaken. These include:
• a DCE used to explore relative preferences
for various features of assignment systems,
including the form of the assignment
(online/paper), its relevance to examinations
and the nature of any feedback on the
assignment, amongst undergraduate business
students in Ireland;43
• a DCE to measure preferences for different
attributes of an educational institution,
including staff, syllabus and fees, conducted
among students at the London School of Hygiene
and Tropical Medicine;44 and
• a DCE that aimed to measure preferences for
various characteristics of higher education
institutes, such as travel time from home, the
reputation of the course being offered and fees,
conducted amongst secondary school
students in Ireland.45
Attracting secondary school students from particular
segments of society to study medicine was the subject
of a DCE study by researchers in Japan.46 The
purpose of this study was to explore the likelihood of
attracting students from low- and middle-income
families by offering scholarships to private medical
schools.
Discrete choice experiments have also been used to
explore preferences for postgraduate medical
education. Mandeville et al.47 used a DCE to
determine how, amongst other job characteristics,
location of specialty training might affect Malawian
junior doctors’ decisions about whether or not to stay
in the country. A DCE amongst Danish general
practitioners compared the characteristics of
alternative continuous professional development
(CPD) programmes.48
Career preferences and workforce planning
Cleland et al.23 conducted a DCE amongst doctors in
training posts to establish which characteristics of their
jobs they most valued (see above). The same DCE
carried out amongst final-year medical students
revealed that the relative values of the attributes were
similar to those observed among trainee doctors; the
most highly valued for both groups was ‘working
conditions’.49 The authors propose that the findings
from these studies will be useful to health care
organisations because the job attributes considered are
those they are likely to have some control over.23,49
Hence, training positions in less popular specialties or
geographic areas could be made more attractive by
manipulating the attributes under scrutiny.
However, the majority of DCE studies looking at
workforce issues have been conducted in low- and
Table 1 Some examples of discrete choice experiments investigating recruitment and retention of health care workers in remote and rural areas in low- and middle-income countries

Study | Country | Participants | Aim
Kruk et al. (2010)27 | Ghana | Final-year medical students | To assess 'how students' stated preference for certain rural postings was influenced by various job attributes'
Hanson & Jack (2010)62 | Ethiopia | Doctors and nurses | 'to better understand how health care workers might be influenced to practise in rural settings'
Vujicic et al. (2011)63 | Vietnam | Physicians | 'to explore the key factors that determine physician motivation and job satisfaction'
Rockers et al. (2012)64 | Uganda | Students (medical, nursing, pharmacy, laboratory) | 'to better inform the selection of appropriate recruitment and retention interventions based on health worker preferences'
Miranda et al. (2012)65 | Peru | Doctors | 'to investigate doctors' stated preferences for rural jobs'
Rao et al. (2013)66 | India | Final-year medical/nursing students; in-service doctors/nurses serving at primary health centres | To examine 'job preferences of doctors and nurses to inform what works in terms of rural recruitment strategies'
McAuliffe et al. (2016)67 | Malawi, Mozambique, Tanzania | Obstetric care workers | 'to examine the employment preferences of obstetric care workers across three east African countries'
Efendi et al. (2016)68 | Indonesia | Students (medical, nursing, midwifery) | 'to analyse the job preferences of health students to develop effective policies to improve the recruitment and retention of health students in remote areas'
middle-income economies with territories that
include large areas that are remote and rural or
subject to political instability. Many such countries
in Africa, Asia and Central and South America have
well-documented problems in recruiting and
retaining health professionals and health workers of
all levels of experience. A substantial number of
DCEs have been undertaken in attempts to find
what might make jobs in these areas more
attractive. Table 1 lists some examples.
Discrete choice experiment studies looking at
preferences for health care jobs have additionally
been undertaken in high-income economies such
as Australia, Denmark, Canada, the USA and the
UK. The aims of these studies varied, but
included exploring preferences for jobs in remote
and rural areas,50–52 jobs in primary care,53–55 jobs
in secondary care56 and alternative payment
systems.57
CONCLUSIONS
In summary, DCEs have been used frequently to
look at medical workforce issues, but relatively
little in the field of medical education. In the wider
education literature, there are a few examples of
the use of DCEs to assess other preferences related
to assessment systems and satisfaction with
programme/course qualities. There seems to us to
be a number of outstanding questions within
medical education that could be usefully addressed
using DCEs. For example, what components
underpin student satisfaction with specific aspects of
a course (e.g. longitudinal clerkships, remote and
rural placements) as well as the course more
generally? (Knowing this could inform curricular
design.) If students value 10 different aspects of
feedback practice, which do they value most?
(Knowing this could inform or focus staff training.)
What do applicants value when selecting a medical
school or residency, or medical job? (What can a
medical school influence? Which factors are non-
adjustable?) Exploring what applicants, students,
residents and colleagues value, in order to
understand what shapes their choices, will help
those involved in planning and delivering education
to meet consumers’ needs and expectations. To
return to our earlier point, in an ideal world we
would all like the best of everything, but life is not
ideal. If we ask our consumers what they want using
tools that do not enable us to identify what is most
important to them, we are in danger of failing to
meet their needs.
Medical education research is currently a small field
of research, and one that has drawn heavily on
expertise, approaches and theories from other fields.
This has enabled medical education research to
move relatively rapidly from local evaluation and
audit to considering questions and problems more
generally and in terms of how they may contribute to
new knowledge. We believe that DCE methodology
has the potential to address many outstanding
questions in medical education and training and to
provide more refined information than some
traditional approaches. Extending the use of the
method in medical education may also facilitate
working with stakeholders outside academia (e.g.
providers and policymakers), as well as establishing
partnerships with expert colleagues from health
economics. This transdisciplinary working may
provide the potential to identify and create new
opportunities and questions.58
For readers who desire a full account of DCE design
along with practical guidance, we recommend ‘How
to conduct a discrete choice experiment for health
workforce recruitment and retention in remote and
rural areas: a user guide with case studies’,
published by the World Health Organization.59
Additional reading is available from the
International Society for Pharmacoeconomics and
Outcomes Research (ISPOR).60,61
Contributors: JC and DS conceived the idea for this article.
TP contributed to the design of the work and the
interpretation of the literature for the work, and wrote
the first draft. All authors revised the article critically for
important intellectual content and approved the final
version.
Acknowledgements: none.
Funding: none.
Conflicts of interest: none.
Ethical approval: not applicable.
REFERENCES
1 Walsh K, Levin H, Jaye P, Gazzard J. Cost analyses
approaches in medical education: there are no simple
solutions. Med Educ 2013;47 (10):962–8.
2 Baron RB. Can we achieve public accountability for
graduate medical education outcomes? Acad Med
2013;88 (9):1199–201.
3 Fawsitt CG, Bourke J, Greene RA, McElroy B, Krucien
N, Murphy R, Lutomski JE. What do women want?
Valuing women’s preferences and estimating demand
for alternative models of maternity care using a
discrete choice experiment. Health Policy 2017;121
(11):1154–60.
4 Whitaker KL, Ghanouni A, Zhou Y, Lyratzopoulos G,
Morris S. Patients’ preferences for GP consultation for
perceived cancer risk in primary care: a discrete
choice experiment. Br J Gen Pract 2017;67 (659):e388–
95.
5 Murchie P, Norwood PF, Pietrucin-Materek M,
Porteous T, Hannaford PC, Ryan M. Determining
cancer survivors’ preferences to inform new models of
follow-up care. Br J Cancer 2016;115 (12):1495–503.
6 Mankowski C, Ikenwilo D, Heidenreich S, Ryan M,
Nazir J, Newman C, Watson V. Men’s preferences for
the treatment of lower urinary tract symptoms
associated with benign prostatic hyperplasia: a discrete
choice experiment. Patient Prefer Adherence
2016;10:2407–17.
7 Porteous T, Ryan M, Bond C, Watson M, Watson V.
Managing minor ailments; the public’s preferences
for attributes of community pharmacies. A discrete
choice experiment. PLoS One 2016;11 (3):e0152257.
8 Eva KW. The cross-cutting edge: striving for symbiosis
between medical education research and related
disciplines. Med Educ 2008;42 (10):950–1.
9 Bridges JF. Stated preference methods in health care
evaluation: an emerging methodological paradigm in
health economics. Appl Health Econ Health Policy
2003;2 (4):213–24.
10 Ryan M. Discrete choice experiments in health care.
BMJ 2004;328 (7436):360–1.
11 De Bekker-Grob E, Ryan M, Gerard K. Discrete
choice experiments in health economics: a review of
the literature. Health Econ 2012;21:145–72.
12 Ryan M, Gerard K. Using discrete choice experiments
to value health care programmes: current practice
and future research reflections. Appl Health Econ
Health Policy 2003;2 (1):55–64.
13 Green P, Srinivasan S. Conjoint analysis in consumer
research: issues and outlook. J Consum Res 1978;5
(2):103–23.
14 Louviere JJ, Flynn TN, Carson RT. Discrete choice
experiments are not conjoint analysis. J Choice Model
2010;3 (3):57–72.
15 Lancaster KJ. A new approach to consumer theory. J
Polit Econ 1966;74 (2):132–57.
16 McFadden D. Conditional logit analysis of qualitative
choice behavior. In: Zarembka P, ed. Frontiers in
Econometrics. New York, NY: Academic Press 1974;105–42.
17 Ryan M, Gerard K, Amaya-Amaya M. Using Discrete
Choice Experiments to Value Health and Health Care, 1st
edn. Dordrecht: Springer 2008.
18 Mas-Colell A, Whinston M, Green J. Microeconomic
Theory. New York, NY: Oxford University Press 1995.
19 Ryan M, Scott D, Reeves C, Bate A, van Teijlingen E,
Russell E, Napper M, Robb CM. Eliciting public
preferences for healthcare: a systematic review of
techniques. Health Technol Assess 2001;5 (5):1–186.
20 Cheung KL, Wijnen BFM, Hollin IL, Janssen EM,
Bridges JF, Evers SM, Hiligsmann M. Using best–worst
scaling to investigate preferences in health care.
Pharmacoeconomics 2016;34 (12):1195–209.
21 Holte JH, Kjaer T, Abelsen B, Olsen JA. The impact
of pecuniary and non-pecuniary incentives for
attracting young doctors to rural general practice. Soc
Sci Med 2015;128:1–9.
22 Rockers PC, Jaskiewicz W, Wurts L, Kruk ME,
Mgomella GS, Ntalazi F, Tulenko K. Preferences for
working in rural clinics among trainee health
professionals in Uganda: a discrete choice
experiment. BMC Health Serv Res 2012;12 (1):212.
23 Cleland J, Johnston P, Watson V, Krucien N, Skåtun
D. What do UK doctors in training value in a post? A
discrete choice experiment. Med Educ 2016;50
(2):189–202.
24 Robyn PJ, Shroff Z, Zang OR, Kingue S, Djienouassi S,
Kouontchou C, Sorgho G. Addressing health
workforce distribution concerns: a discrete choice
experiment to develop rural retention strategies in
Cameroon. Int J Health Policy Manag 2015;4 (3):169–80.
25 Coast J, Al-Janabi H, Sutton EJ, Horrocks SA, Vosper
AJ, Swancutt DR, Flynn TN. Using qualitative
methods for attribute development for discrete
choice experiments: issues and recommendations.
Health Econ 2012;21 (6):730–41.
26 Louviere J, Hensher D, Swait J. Stated Choice Methods:
Analysis and Application. Cambridge: Cambridge
University Press 2000.
27 Kruk ME, Johnson JC, Gyakobo M, Agyei-Baffour P,
Asabir K, Kotha SR, Kwansah J, Nakua E, Snow RC,
Dzodzomenyo M. Rural practice preferences among
medical students in Ghana: a discrete choice
experiment. Bull WHO 2010;88 (5):333–41.
28 Lancsar E, Louviere J. Conducting discrete choice
experiments to inform healthcare decision making:
a user’s guide. Pharmacoeconomics 2008;26 (8):
661–77.
29 Dillman DA, Smyth JD, Christian LM. Internet, Mail
and Mixed-Mode Surveys: The Tailored Design Method, 3rd
edn. Hoboken, NJ: John Wiley & Sons 2009.
30 Stopher P. Collecting, Managing and Assessing Data
Using Sample Surveys. Cambridge: Cambridge
University Press 2012.
31 Determann D, Lambooij MS, Steyerberg EW, de
Bekker-Grob EW, de Wit GA. Impact of survey
administration mode on the results of a health-
related discrete choice experiment: online and paper
comparison. Value Health 2017;20 (7):953–60.
32 Boyle KJ, Morrison M, MacDonald DH, Duncan R,
Rose J. Investigating internet and mail
implementation of stated-preference surveys while
controlling for differences in sample frames. Environ
Resour Econ 2016;64 (3):401–19.
33 Hauber AB, González JM, Groothuis-Oudshoorn
CGM, Prior T, Marshall DA, Cunningham C,
IJzerman MJ, Bridges JF. Statistical methods for the
analysis of discrete choice experiments: a report of
the ISPOR conjoint analysis good research practices
task force. Value Health 2016;19 (4):300–15.
34 Train K. Discrete Choice Methods with Simulation, 2nd
edn. Cambridge: Cambridge University Press 2009.
35 Quaife M, Terris-Prestholt F, Di Tanna GL,
Vickerman P. How well do discrete choice
experiments predict health choices? A systematic
review and meta-analysis of external validity. Eur J
Health Econ 2018;https://doi.org/10.1007/s10198-018-
0954-6. [Epub ahead of print.]
36 Janssen EM, Marshall DA, Hauber AB, Bridges JFP.
Improving the quality of discrete-choice experiments
in health: how can we assess validity and reliability?
Expert Rev Pharmacoecon Outcomes Res 2017;17 (6):531–
42.
37 Kahneman D. Attention and Effort. Englewood Cliffs,
NJ: Prentice-Hall 1973.
38 Scarpa R, Gilbride TJ, Campbell D, Hensher DA.
Modelling attribute non-attendance in choice
experiments for rural landscape valuation. Eur Rev
Agric Econ 2009;36 (2):151–74.
39 Balcombe K, Fraser I, McSorley E. Visual attention
and attribute attendance in multi-attribute choice
experiments. J Appl Econom 2015;30 (3):447–67.
40 Heidenreich S, Watson V, Ryan M, Phimister E.
Decision heuristic or preference? Attribute non-
attendance in discrete choice problems. Health Econ
2018;27 (1):157–71.
41 Ryan M, Krucien N, Hermens F. The eyes have it:
using eye tracking to inform information processing
strategies in multi-attributes choices. Health Econ
2018;27 (4):709–21.
42 Cunningham CE, Deal K, Neville A, Rimas H,
Lohfeld L. Modeling the problem-based learning
preferences of McMaster University undergraduate
medical students using a discrete choice conjoint
experiment. Adv Health Sci Educ Theory Pract 2006;11
(3):245–66.
43 Kennelly B, Flannery D, Considine J, Doherty E,
Hynes S. Modelling the preferences of students for
alternative assignment designs using the discrete
choice experiment methodology. Pract Assess Res Eval
2014;19 (16):1–13.
44 Sheppard P, Smith R. What students want: using a
choice modelling approach to estimate student
demand. J High Educ Policy Manage 2016;38 (2):
140–9.
45 Walsh S, Flannery D, Cullinan J. Analysing the
preferences of prospective students for higher
education institution attributes. Educ Econ 2018;26
(2):161–78.
46 Goto R, Kakihara H. A discrete choice experiment
studying students’ preferences for scholarships to
private medical schools in Japan. Hum Resour Health
2016;14:4. https://doi.org/10.1186/s12960-016-0102-2.
[Epub ahead of print.]
47 Mandeville KL, Ulaya G, Lagarde M, Muula AS,
Dzowela T, Hanson K. The use of specialty training to
retain doctors in Malawi: a discrete choice
experiment. Soc Sci Med 2016;169:109–18.
48 Kjaer NK, Halling A, Pedersen LB. General
practitioners’ preferences for future continuous
professional development: evidence from a Danish
discrete choice experiment. Educ Prim Care 2015;26
(1):4–10.
49 Cleland JA, Johnston P, Watson V, Krucien N, Skåtun
D. What do UK medical students value most in their
careers? A discrete choice experiment. Med Educ
2017;51 (8):839–51.
50 Gallego G, Dew A, Lincoln M, Bundy A, Chedid RJ,
Bulkeley K, Brentnall J, Veitch C. Should I stay or
should I go? Exploring the job preferences of allied
health professionals working with people with
disability in rural Australia. Hum Resour Health 2015;13
(1):53.
51 Scott A, Witt J, Humphreys J, Joyce C, Kalb G, Jeon S,
McGrail M. Getting doctors into the bush: general
practitioners’ preferences for rural location. Soc Sci
Med 2013;96:33–44.
52 Li J, Scott A, McGrail M, Humphreys J, Witt J.
Retaining rural doctors: doctors’ preferences for rural
medical workforce incentives. Soc Sci Med
2014;121:56–64.
53 Wordsworth S, Skåtun D, Scott A, French F.
Preferences for general practice jobs: a survey of
principals and sessional GPs. Br J Gen Pract 2004;54
(507):740–6.
54 Pedersen LB, Gyrd-Hansen D. Preference for
practice: a Danish study on young doctors’ choice of
general practice using a discrete choice experiment.
Eur J Health Econ 2014;15 (6):611–21.
55 Scott A. Eliciting GPs’ preferences for pecuniary and
non-pecuniary job characteristics. J Health Econ
2001;20 (3):329–47.
56 Ubach C, Scott A, French F, Awramenko M,
Needham G. What do hospital consultants value
about their jobs? A discrete choice experiment. BMJ
2003;326 (7404):1432–5.
57 Kessels R, Van Herck P, Dancet E, Annemans L,
Sermeus W. How to reform western care payment
systems according to physicians, policy makers,
healthcare executives and researchers: a discrete
choice experiment. BMC Health Serv Res 2015;15
(1):191.
58 McMichael A. What makes transdisciplinarity succeed
or fail? First Report. In: Somerville MARD, ed.
Transdisciplinarity: Recreating Integrated Knowledge.
Oxford: EOLSS Publishers 2000;218–220.
59 World Health Organization. How to conduct a
discrete choice experiment for health workforce
recruitment and retention in remote and rural areas:
a user guide with case studies. 2012. http://www.who.
int/hrh/resources/dceguide/en/. [Accessed 2
August 2018.]
60 Bridges JFP, Hauber AB, Marshall D, Lloyd A,
Prosser LA, Regier DA, Johnson FR, Mauskopf J.
Conjoint analysis applications in health – a checklist:
a report of the ISPOR Good Research Practices for
Conjoint Analysis Task Force. Value Health 2011;14
(4):403–13.
61 Johnson FR, Lancsar E, Marshall D, Kilambi V,
Mühlbacher A, Regier DA, Bresnahan BW, Kanninen
B, Bridges JF. Constructing experimental designs for
discrete-choice experiments: report of the ISPOR
conjoint analysis experimental design good research
practices task force. Value Health 2013;16 (1):3–13.
62 Hanson K, Jack W. Incentives could induce Ethiopian
doctors and nurses to work in rural settings. Health
Aff 2010;29 (8):1452–60.
63 Vujicic M, Shengelia B, Alfano M, Thu HB. Physician
shortages in rural Vietnam: using a labor market
approach to inform policy. Soc Sci Med 2011;73
(7):2034–70.
64 Rockers PC, Jaskiewicz W, Kruk ME, Phathammavong
O, Vangkonevilay P, Paphassarang C, Phachanh IT,
Wurts L, Tulenko K. Differences in preferences for
rural job postings between nursing students and
practicing nurses: evidence from a discrete choice
experiment in Lao People’s Democratic Republic.
Hum Resour Health 2013;11 (1):22.
65 Miranda JJ, Diez-Canseco F, Lema C, Lescano AG,
Lagarde M, Blaauw D, Huicho L. Stated preferences of
doctors for choosing a job in rural areas of Peru: a
discrete choice experiment. PLoS One 2012;7 (12):e50567.
66 Rao KD, Ryan M, Shroff Z, Vujicic M, Ramani S,
Berman P. Rural clinician scarcity and job
preferences of doctors and nurses in India: a discrete
choice experiment. PLoS One 2013;8 (12):e82984.
67 McAuliffe E, Galligan M, Revill P, Kamwendo F, Sidat
M, Masanja H, de Pinho H, Araujo E. Factors
influencing job preferences of health workers
providing obstetric care: results from discrete choice
experiments in Malawi, Mozambique and Tanzania.
Glob Health 2016;12 (1):86.
68 Efendi F, Chen C, Nursalam N, Andriyani NWF,
Kurniati A, Nancarrow SA. How to attract health
students to remote areas in Indonesia: a discrete
choice experiment. Int J Health Plann Manage 2016;31
(4):430–45.
Received 18 January 2018; accepted for publication 5 June 2018