Computational Modelling of Public Policy: Reflections on Practice
Nigel Gilbert1, Petra Ahrweiler2, Pete Barbrook-Johnson1, Kavin Preethi Narasimhan1, Helen Wilkinson3
1Department of Sociology, University of Surrey, Guildford GU2 7XH, United Kingdom
2Institute of Sociology, Johannes Gutenberg University Mainz, Jakob-Welder-Weg 20, 55128 Mainz, Germany
3Risk Solutions, Dallam Court, Dallam Lane, Warrington, Cheshire, WA2 7LT, United Kingdom
Correspondence should be addressed to n.gilbert@surrey.ac.uk
Journal of Artificial Societies and Social Simulation 21(1) 14, 2018
Doi: 10.18564/jasss.3669 Url: http://jasss.soc.surrey.ac.uk/21/1/14.html
Received: 11-01-2018 Accepted: 11-01-2018 Published: 31-01-2018
Abstract: Computational models are increasingly being used to assist in developing, implementing and evaluating public policy. This paper reports on the experience of the authors in designing and using computational models of public policy (‘policy models’, for short). The paper considers the role of computational models in policy making, and some of the challenges that need to be overcome if policy models are to make an effective contribution. It suggests that policy models can have an important place in the policy process because they could allow policy makers to experiment in a virtual world, and have many advantages compared with randomised control trials and policy pilots. The paper then summarises some general lessons that can be extracted from the authors’ experience with policy modelling. These general lessons include the observation that often the main benefit of designing and using a model is that it provides an understanding of the policy domain, rather than the numbers it generates; that care needs to be taken that models are designed at an appropriate level of abstraction; that although appropriate data for calibration and validation may sometimes be in short supply, modelling is often still valuable; that modelling collaboratively and involving a range of stakeholders from the outset increases the likelihood that the model will be used and will be fit for purpose; that attention needs to be paid to effective communication between modellers and stakeholders; and that modelling for public policy involves ethical issues that need careful consideration. The paper concludes that policy modelling will continue to grow in importance as a component of public policy making processes, but if its potential is to be fully realised, there will need to be a melding of the cultures of computational modelling and policy making.
Keywords: Policy Modelling, Policy Evaluation, Policy Appraisal, Modelling Guidelines, Collaboration, Ethics
1.1 Computational models have been used to assist in developing, implementing and evaluating public policies for at least three decades, but their potential remains to be fully exploited (Johnston & Desouza 2015; Anzola et al. 2017; Barbrook-Johnson et al. 2017). In this paper, using a selection of examples of computational models used in public policy processes, we (i) consider the roles of models in policy making, (ii) explore policy making as a type of experimentation in relation to model experiments, and (iii) suggest some key lessons for the effective use of models. We also highlight some of the challenges and opportunities facing such models and their use in the future. Our aim is to support the modelling community that reads this journal in its effort to build computational models of public policy that are valuable and useful.
1.2 We believe this effort is timely given that computational models, of the type this journal regularly reports on, are now increasingly used by government, business, and civil society as well as in academic communities (Hauke et al. 2017). There are many guides to computational modelling produced for different communities, for example in UK government the ‘Aqua Book’ (reviewed for JASSS in Edmonds (2016)), but these are often aimed at practitioner and government audiences, can be highly procedural and technical, generally omit discussion of failure and rarely include deeper reflections on how best to model for public policy. Our aim here is to fill gaps
left by these formal guides, to provide reflections aimed at modellers, to use a selection of examples to explore issues in an accessible way, and to acknowledge failures and learning from them.
1.3 We focus only on computational models that aim to model, or include some modelling of, social processes. Although some of the discussion may apply, we are not directly considering computational models that are purely ecological or technical in their focus, or simpler models such as spreadsheets which may implicitly cover social processes but either do not represent them explicitly, or make extremely simple and strong assumptions. Although ‘computational models of public policy’ is the full and accurate term, and others often use ‘computational policy models’, for the sake of brevity we will use the term ‘policy models’ throughout the rest of this paper.
1.4 Based on our experience, our main recommendations are that policy modelling needs to be conducted with
a strong appreciation of the context in which models will be used, and with a concern for their fitness for the
purposes for which they are designed and the conclusions drawn from them. Moreover, policy modelling is
almost always likely to be of low or no value if done without strong and iterative engagement with the users
of the model outputs, i.e. decision makers. Modellers must engage with users in a deep, meaningful, ethically
informed and iterative way.
1.5 In the remainder of this paper, Section 2 introduces the role of policy models in policy making. Section 3 explores the idea of policy making as a type of experimentation in relation to policy model experiments. We then discuss some examples and experiences of policy modelling (Section 4) and draw out some key lessons to help make policy modelling more effective (Section 5). Finally, Section 6 concludes and discusses some key next steps and other opportunities for computational policy modellers.
The Role of Models in Policy Making
2.1 The standard, but now somewhat discredited, view of policy making is that it occurs in cycles (for example, see the seminal arguments made in Lindblom 1959 and Lindblom 1979; and more recently official recognition in HM Treasury 2013). A policy problem comes to light, perhaps through the occurrence of some crisis, a media campaign, or as a response to a political event. This is the agenda setting stage and is followed by policy formulation, gathering support for the policy, implementing the policy, monitoring and evaluating the success of the policy and finally policy maintenance or termination. The cycle then starts again, as new needs or circumstances generate demands for new policies. Although the idea of a policy cycle has the merit of being a clear and straightforward way of conceptualising the development of policy, it has been criticised as being unrealistic and oversimplifying what happens, which is typically highly complex and contingent on multiple sources of pressure and information (Cairney 2013; Moran 2015), and even self-organising (Byrne & Callaghan 2014; Teisman & Klijn 2008).
2.2 The idea of a cycle does, however, still help to identify the many components that make up the design and implementation of policy. There are at least two areas where models have a clear and important role to play: in policy design and appraisal, and policy evaluation. Policy appraisal (as defined in HM Treasury 2013), sometimes referred to as ex-ante evaluation, consists of assessing the relative merits of alternative policy prescriptions in meeting the policy objectives. Appraisal findings are a key input into policy design decisions. Policy evaluation either takes a summative approach, examining whether a policy has actually met its objectives (i.e. ex-post), or a more formative approach to see how a policy might be working, for whom and where (HM Treasury 2011). In the formative role, the key goal is learning to inform future iterations of the policy, and others with similar characteristics.
Modelling to support policy design and appraisal
2.3 When used ex-ante, a policy model may be used to explore a policy option, helping to identify and specify in detail a consistent policy design (HM Treasury 2013), for example by locating where best a policy might intervene, or by identifying possible synergies or conflicts between the mechanisms of multiple policies. Policy models can also be used to appraise alternative policies, to see which of several possibilities can be expected to yield the best or most robust outcome. In this mode, a policy model is in essence used to ‘experiment’ with alternative policy options and assumptions about the system in which it is intervening, by changing the parameters or the rules in the model and observing what the outcomes are. This is valuable because it saves the time and cost associated with having to run experiments or pilots in the actual policy domain. This concept of the model as an experimental space is discussed in more detail in Section 3 below.
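To illustrate, the following minimal sketch (in Python, invented for the purposes of this discussion rather than drawn from any of the models described later; names such as adoption_model and subsidy_level are hypothetical) shows what ‘experimenting’ with a policy option can amount to: one policy parameter of a toy model is varied, the model is run repeatedly, and the distribution of outcomes is compared across options.

import random

def adoption_model(subsidy_level, n_agents=500, steps=50, seed=0):
    # Toy model: agents adopt a behaviour with a probability that rises with
    # the subsidy level and with the share of agents who have already adopted.
    rng = random.Random(seed)
    adopted = [False] * n_agents
    for _ in range(steps):
        share = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i]:
                p = 0.001 + 0.05 * subsidy_level + 0.02 * share  # stylised decision rule
                adopted[i] = rng.random() < p
    return sum(adopted) / n_agents

# 'Experiment' with alternative policy options by varying one parameter
for subsidy in (0.0, 0.1, 0.2, 0.4):
    runs = [adoption_model(subsidy, seed=s) for s in range(10)]   # repeated runs
    print(f"subsidy={subsidy:.1f}  mean adoption={sum(runs) / len(runs):.2f}")

Real policy models are of course far richer than this, but the pattern of varying parameters or rules and observing the outcomes is the same.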
2.4 The common assumption is that one builds computational models in order to make predictions. However, prediction, in the sense of predicting the future value of some measure, is in fact often impossible in policy domains. Social and economic phenomena are often complex (in the technical sense, see e.g. Sawyer 2005). This means that how some process evolves depends on random chance, its previous history (‘path dependence’) and the effect of positive and negative feedback loops. Just as with the weather, for which exact forecasting is impossible more than a few days ahead, the future course of many social processes may be literally unknowable in detail, no matter how detailed the model may be. Secondly, a model is necessarily an abstraction from reality, and since it is impossible to isolate sections of society from outside influences, there may be unexpected exogenous factors that have not been modelled and that affect the outcome.
2.5 For these reasons, the ability to make ‘point predictions’, i.e. forecasts of specific values at a specific time in the future, is rarely possible. More feasible is a prediction that some event will or will not take place, or qualitative statements about the type or direction of change of values. Understanding what sort of unexpected outcomes can emerge, and something of the nature of how these arise, also helps design policies that can be responsive to unexpected outcomes when they do arise. It can be particularly helpful in changing environments to use the model to explore what might happen under a range of possible, but different, potential futures – without any commitment about which of these may eventually transpire. Even more valuable is a finding that the model shows that certain outcomes could not be achieved given the assumptions of the model. An example of this is the use of a whole system energy model to develop scenarios that meet the decarbonisation goals set by the EU for 2050 (see, for example, RAENG 2015).
2.6 Rather different from using models to make predictions or generate scenarios is the use of models to formalise and clarify understanding of the processes at work in some domain. If this is done carefully, the model may be valuable as a training or communication tool, demonstrating the mechanisms at work in a policy domain and how they interact.
Modelling to support policy evaluation
2.7 To evaluate a policy ex-post, one needs to compare what happened after the policy has been implemented against what would have happened in the absence of the policy (the ‘counterfactual’). To do this, one needs data about the real situation (with the policy implemented) and data about the situation if the policy had not been implemented (the so-called ‘business as usual’ situation). To obtain the latter, one can use a randomised control trial (RCT) or quasi-experiment (HM Treasury 2011), but this is often difficult, expensive and sometimes impossible to carry out because the nature of the intervention bars the possibility of creating control groups (e.g. a scheme which is accessible to all, or a policy in which local implementation decisions are impossible to control and have a strong effect).
2.8 Policy models offer some alternatives. One is to develop a computational model and run simulations with and
without implementation of a policy, and then compare formally the two model outcomes with each other and
with reality (with the policy implemented), using quantitative analysis. This avoids the problems of having to
establish a real-world counterfactual. Once again, the policy model is being used in place of an experiment.
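As a stylised sketch of this ‘with and without’ comparison (illustrative only: run_model below is a stand-in for whatever policy model is being used, and the effect size is an invented assumption of the toy example):

import random
import statistics

def run_model(policy_on, seed):
    # Stand-in stochastic model returning an outcome of interest
    # (e.g. cost, emissions, waiting time) for one simulated run.
    rng = random.Random(seed)
    baseline = rng.gauss(100, 10)        # underlying state of the system
    effect = 8 if policy_on else 0       # assumed effect of the policy mechanism
    return baseline - effect + rng.gauss(0, 3)

with_policy = [run_model(True, s) for s in range(200)]
counterfactual = [run_model(False, s) for s in range(200)]

print(f"mean outcome with policy:     {statistics.mean(with_policy):.1f}")
print(f"mean outcome, counterfactual: {statistics.mean(counterfactual):.1f}")
print(f"estimated policy effect:      "
      f"{statistics.mean(with_policy) - statistics.mean(counterfactual):.1f}")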
2.9 Another alternative is to use more qualitative System Mapping type approaches (e.g. Fuzzy Cognitive Mapping; see Uprichard & Penn 2016), to build qualitative models with different structures and assumptions (to represent the situation with and without the intervention), and again interrogate the different outcomes of the model analyses.
2.10 Finally, another use in ex-post evaluation is to use models to refine and test the theory of how policies might
have affected an outcome of interest, i.e. to support common theory-based approaches to evaluation such as
Theory of Change (see Clark & Taplin 2012), and Logic Mapping (see Hills 2010).
2.11 Interrogation of models and model results can be done quantitatively (i.e. through multiple simulations, sensitivity analysis, and ‘what if’ tests), but may also be done in qualitative and participatory fashion with stakeholders, with stakeholders involved in the actual analysis (as opposed to just being shown the results). The choice should be driven by the purpose of the modelling process, and the needs of stakeholders. In both ex-ante and ex-post evaluation, policy models can be powerful tools to use as a route for engaging and informing stakeholders, including the public, about policies and their implications (Voinov & Bousquet 2010). This may be by including stakeholders in the process, decisions, and validation of model design; or it may be later in the process, in using the results of a model to open up discussions with stakeholders, and/or even using the model ‘live’ to explore connections between assumptions, scenarios, and outcomes (Johnson 2015a).
Difficulties in the use of modelling
2.12 While, in principle, policy models have all these roles and potential benefits, experience shows that it can be difficult to achieve them (see Taylor 2003; Kolkman et al. 2016, and Section 4 for some examples). The policy process has many characteristics that can make it difficult to incorporate modelling successfully, including:
• The need for acceptability and transparency: policy makers may fall back on more traditional and more widely accepted forms of evidence, especially where the risks associated with the decision are high. Models may appear to act as black boxes that only experts can understand or use, with outputs highly reliant on assumptions that are difficult to validate. Analysts and researchers in government often have little autonomy, and although they may see the value of policy models, it can be difficult for them to communicate this to the decision-makers.
• Change and uncertainty: the environment in which the policy will be implemented may be highly uncertain; this can undermine model development when beliefs or decisions shift as a result of the modelling process itself (although this is equally an important outcome and benefit of the modelling process) or of other factors.
• Short timescales: the timescales associated with policy decision making are almost always relatively fast, and needs can be difficult to predict, meaning it can be difficult for computational modellers to provide timely support.
• Procurement processes: often departments lack the capability and sufficiently flexible processes to procure complex modelling.
• The political and pragmatic realities of decision making: individuals’ values and political values can hold huge sway, even in the face of empirical evidence (let alone modelling) that may contradict their view, or point towards policy which is politically impossible.
• Stakeholders: there will be many different stakeholders involved in developing, or affected by, policies. It will not be possible to engage all of these in the policy modelling process, and indeed policy makers may be wary of closely involving them in a participatory modelling process.
2.13 These characteristics may also apply more widely to evidence and other forms of research and analysis. It is
not our suggestion that these characteristics are inherently negative; they may be important and reasonable
parts of the policy making process. The important thing to remember, as a modeller, is that a model can only,
and should only, provide more information to the process, not a final decision for the policy process to simply
implement.
Policy Experiments and Policy Models
3.1 Although the roles and uses of policy models are relatively well-described and understood, our perception is that there are still many areas where more use could be made of modelling, and that a lack of familiarity with, and confidence in, policy modelling is restricting its use. Potential users may question whether policy modelling in their domain is sufficiently scientifically established and mature to be safely applied to guiding real-world policies. The difference between applying policies to the real world and making experimental interventions in a policy model might be too big to generate any learning from the latter to inform the former.
Policy pilots
3.2 One response is to argue that actual policy implementations are themselves experimental interventions and
are therefore of the same character as interventions in a policy model. Boeschen et al. (2017) propose that
we live in “experimental societies” and that implementing policies is nothing but conducting “real-world ex-
periments”. Real-world experiments are “a more or less legitimate, methodically guided or carelessly adopted
social practice to start something new” (Krohn 2007, p. 344; own translation). Their outcomes immediately
display “success or failure of a design process” (ibid., p. 347).
3.3 A real-world experiment implements one solution for the policy design problem. It does not check for other
possible solutions or alternative options, but at best monitors and responds to what is emerging in real time.
Implementing policies as a real-world experiment is therefore far from ideal and far removed from the idea of
reversibility in the laboratory. In laboratory experiments the experimental system is isolated from its environ-
ment in such a way that the effects of single parameters can be observed.
3.4 One approach that tries to bridge the gap between the real-world and laboratory experiments is to conduct policy pilots. The use of policy pilots (Greenberg & Shroder 1997; Cabinet Office 2003; Martin & Sanderson 1999) as social experiments is fairly widespread. In a policy pilot, a policy change can be assessed against a counterfactual in a limited context before rolling it out for general implementation. In this way (a small number of) different solutions can be tried out and evaluated, and learning fed back into policy design.
3.5 A dominant method for policy pilots is the Randomised Control Trial (RCT) (Greenberg & Shroder 1997; Boruch 1997), well-known from medical research, where a carefully selected treatment group is compared with a control group that is not administered the treatment under scrutiny. RCTs can thus present a halfway house between an idealised laboratory experiment and a real-world experiment. However, the claim that an RCT is capable of reproducing a laboratory situation where rigorous testing against a counterfactual is possible has also been contested (Cabinet Office 2003, p. 19). It is argued that in principle there is no possibility of social experiments due to the requirement of ceteris paribus (i.e. in the social world, it is impossible to have two experiments with everything equal but the one parameter under scrutiny); that the complex system-environment interactions that are necessary to adequately understand social systems cannot be reproduced in an RCT; and that random allocation is impossible in many domains, so that a ‘neutral’ counterfactual cannot be established. Moreover, it may be a risky political strategy or even unethical to administer a certain benefit in some pilot context but not to the corresponding control group. This is even more the case if the policy would put the selected recipients at a disadvantage (Cabinet Office 2003, p. 17).
3.6 While a pilot can be good for gathering evidence about a single case, it might not serve as a good ‘one-size-fits-all’ role model for other cases in other contexts. Furthermore, it cannot say much about why or how the policy worked or did not work, or decompose the ‘what works’ question into ‘what works, where, for whom, at what costs, and under what conditions?’. There are also more practical problems to consider, among them time, staff resources and budget. There is general agreement that a good pilot is costly, time-consuming, “administratively cumbersome” and in need of well-trained managing staff (Cabinet Office 2003, p. 5). There is “a sense of pessimism and disappointment with the way policy pilots and evaluations are currently used and were used in the past (…): poorly designed studies; weak methodologies; impatient political masters; time pressures and unrealistic deadlines” (Seminar on Policy Pilots and Evaluation 2013, p. 11).
3.7 Thus, policy pilots cannot meet the claim to be a happy medium between laboratory experiments, with their isolation strategies capable of parameter variation, and Krohn’s real-world experiments involving complex system-environment interactions in real time. This is where computational policy modelling comes in.
Policy models for policy experimentation
3.8 Unlike policy pilots, computational policy models are able to work with ceteris paribus rules, random control, and non-contaminated counterfactuals (see below). Using policy models, we can explore alternative solutions, simply by trying out parameter variations in the model, and experiment with context-specific models and with short, medium and long time horizons. Furthermore, policy models are ethically and politically neutral to build and run, though the use of their outcomes may not be.
3.9 Unlike real-world experiments and policy pilots, policy models allow the user to investigate the future. Initially the modellers will seek to reproduce the database describing the initial state of a real-world experiment and then extrapolate simulated structures and dynamics into the future. At first a baseline scenario can be derived: what if there were no changes in the future? This is artificial and, for methodological reasons, boring: nothing much happens but incremental evolution, no event, no surprise, no intervention; changes can then be introduced.
3.10 As with real-world experiments, modelling experiments enable recursive learning by stakeholders. Stakeholders can achieve system competence and practical skills through interacting with the model to learn ‘by doing’ how to act in complex situations. With the model, it is not only possible to simulate the real-world experiment envisaged but also to test multiple scenarios for potential real-world experiments via extensive parameter variations. The whole solution space can be checked, where future states are not only accessible but tractable.
3.11 This does not imply that it is possible to obtain exact predictions for future states of complex social systems (see the discussion on prediction above). Deciding under uncertainty has to be informed differently:
“Experimenting under conditions of uncertainties of this kind, it appears, will be one of the most distinctive characteristics of decision-making in future societies […], they import and use methods
of investigation and research. Among these are conceptual modelling of complex situations, computer simulation of possible futures, and – perhaps most promising – the turning of scenarios into ‘real-world experiments’” (Gross & Krohn 2005, p. 77).
3.12 Regarding the continuum between the extremes of giving no consideration (e.g. with laboratory experiments)
and full consideration (e.g. real-world experiments) to complex system-environment interactions, policy mod-
elling experiments indeed sit somewhere in the (happy) middle. We would argue that, where the costs or risks
associated with a policy change are high, and the context is complex, it is not only common sense to carry out
policy modelling, but it would be unethical not to.
Examples of Policy Models
4.1 We have discussed the role of policy models in the abstract at some length; it is now important to illustrate their use with a number of examples of policy modelling drawn from our own experience. These have been selected to offer a wide range of types of model and contexts of application. In the spirit of recording failure as well as success, we mention not only the ultimate outcomes, but also some of the problems and challenges encountered along the way. In the next section, we shall draw out some general lessons from these examples.
Tell-Me
4.2 The European-funded TELL ME project focused on health communication associated with influenza epidemics. One output was a prototype agent-based model, intended to be used by health communicators to understand the potential effects of different communication plans under various influenza epidemic scenarios (Figure 1).
4.3 The basic structure of the model was determined by its purpose: to compare the potential effects of different communication plans on protective personal behaviour and hence on the spread of an influenza epidemic. This requires two linked models: a behaviour model that simulates the way in which people respond to communication and make decisions about whether to vaccinate or adopt other protective behaviour, and an epidemic model that simulates the spread of influenza. The key model entities are: (i) messages, which together implement the communication plans; (ii) individuals, who receive communication and make decisions about whether to adopt protective behaviour; and (iii) regions, which hold information about the local epidemic state. The major flow of influence is the effect that communication has on attitude and hence behaviour, which affects epidemic transmission and hence incidence. Incidence contributes to perceived risk, which influences behaviour and establishes a feedback relationship (see Badham & Gilbert 2015 for the detailed specification). A fuller description of the model and discussion of its uses can be found in Barbrook-Johnson et al. (2017). A more technical paper on a novel model calibration approach, using the TELL ME model as an example, is presented in Badham et al. (2017).
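The coupling described above can be caricatured in a few lines of code. The following Python sketch (with invented parameters and rules; it does not reproduce the published TELL ME specification) shows the general shape of two linked processes – behaviour change driven by communication and perceived risk, and epidemic spread moderated by protective behaviour – and the feedback between them.

import random

def simulate(message_effect, n=1000, steps=100, seed=1):
    # Two linked processes: a behaviour model and an epidemic model,
    # with a feedback via perceived risk. All values are illustrative.
    rng = random.Random(seed)
    protected = [False] * n                    # protective behaviour adopted?
    infected = set(rng.sample(range(n), 5))    # seed infections
    total_new_cases = 0
    for _ in range(steps):
        perceived_risk = len(infected) / n
        # behaviour model: communication and perceived risk drive adoption
        for i in range(n):
            if not protected[i]:
                protected[i] = rng.random() < (message_effect + 0.5 * perceived_risk)
        # epidemic model: protective behaviour lowers transmission probability
        new_infected = set()
        for i in infected:
            contact = rng.randrange(n)
            p_transmit = 0.05 if protected[contact] else 0.2
            if contact not in infected and rng.random() < p_transmit:
                new_infected.add(contact)
        infected |= new_infected
        total_new_cases += len(new_infected)
    return total_new_cases, sum(protected)

for plan_strength in (0.0, 0.01, 0.05):        # compare communication plans
    cases, adopters = simulate(plan_strength)
    print(f"plan strength {plan_strength}: new cases={cases}, adopters={adopters}")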
4.4 Drawing on findings from stakeholder workshops and the results of the model itself, the modelling team suggest the TELL ME model can be useful: (i) as a teaching tool, (ii) to test theory, and (iii) to inform data collection (Barbrook-Johnson et al. 2017).
HOPES
4.5 Practice theories provide an alternative to the theory of planned behaviour and the theory of reasoned action to explore sustainability issues such as energy use, climate change, food production, water scarcity, etc. The central argument is that the routine activities (aka practices) that people carry out in the service of everyday living (e.g. ways of cooking, eating, travelling, etc.), often with some level of automaticity developed over time, should be the focus of inquiry and intervention if the goal is to transform energy- and emissions-intensive ways of living.
4.6 The Households and Practices in Energy use Scenarios (HOPES) agent-based model (Narasimhan et al. 2017) was developed to formalise key features of practice theories and to use the model to explore the dynamics of energy use in households. A key theoretical feature that HOPES sought to formalise is the performance of practices, enabled by the coming together of appropriate meanings (mental activities of understanding, knowing how and desiring, Reckwitz 2002), materials (objects, body and mind) and skills (competences).
Figure 1: A screenshot of the Tell Me model interface. The interface houses key model parameters related to individuals’ attitude towards influenza, their consumption of different media types, the epidemiological parameters of the strain of influenza, and their social networks. Key outputs shown include changes in people’s attitude, actual behaviour, and the progression of the epidemic. The world view shows the spread of the epidemic (blue = epidemic not yet reached, red = high levels of infection, green = most people recovered).
For example, a laundry practice could signify a desire for clean clothes (meaning) realised by using a washing machine (material) and knowing how to operate the washing machine (skill); performance of the practice then results in energy use.
4.7 HOPES has two types of agents: households and practices. Elements (meanings, materials and skills) are entities in the model. The model concept is that households choose different elements to perform practices depending on the socio-technical settings unique to each household. The performance of some practices results in energy use while that of others does not, e.g. using a heater to keep warm results in energy use whereas using a jumper or blanket does not. Furthermore, the repeated performance of practices across space and time causes the enabling elements to adapt (e.g. some elements come to be used more widely than others), which subsequently affects the future performance of practices and thereby energy use. A rule-based system, developed from empirical data collected from 60 UK households, was included in HOPES to enable households to choose elements to perform practices. The rule-based approach allowed the complex contextual information and socio-technical insights gathered from the empirical study to be organised in a structured way, so that the most appropriate actions could be chosen even when the available information was incomplete and/or conflicting. HOPES also includes sub-models to calculate the energy use resulting from the performance of practices, e.g. a thermal model of a house is built in, which considers the outdoor temperature, the type and size of heater, and the thermostat setpoint to estimate the energy used for thermal comfort practices in each household.
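The following fragment is a toy illustration written for this discussion, not HOPES code: element names, rules and energy values are invented. It sketches the practice-based structure described above, in which a household performs a ‘keeping warm’ practice by combining a meaning, a material and a skill, and the chosen material determines whether energy is used.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    kind: str              # 'meaning', 'material' or 'skill'
    energy_kwh: float = 0.0

@dataclass
class Household:
    name: str
    owns: list              # materials available to this household

def perform_keep_warm(household, cold_outside):
    # Rule-based choice of elements for a 'keeping warm' practice.
    meaning = Element("stay comfortable", "meaning")
    heater = next((m for m in household.owns if m.name == "heater"), None)
    # simple rule: use the heater only if it is cold and the household owns one
    material = heater if (cold_outside and heater) else Element("jumper", "material")
    skill = Element("operate " + material.name, "skill")
    return (meaning, material, skill), material.energy_kwh

homes = [
    Household("A", owns=[Element("heater", "material", energy_kwh=1.5)]),
    Household("B", owns=[]),    # no heater: falls back to a jumper, no energy use
]
for h in homes:
    elements, energy = perform_keep_warm(h, cold_outside=True)
    print(h.name, [e.name for e in elements], f"energy = {energy} kWh")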
4.8 The model is used to test different policy and innovation scenarios to explore the impacts of the performance of practices on energy use. For example, the implementation of a time of use tariff demand response scenario shows that while some demand shifting is possible as a consequence of pricing signals, there is no significant reduction in energy use during peak periods as many households cannot put off using energy (Narasimhan et al. 2017). The overall motivation is that by gaining insight into the trajectories of unsustainable energy consuming practices (and underlying elements) under different scenarios, it might be possible to propose alternative pathways that allow more sustainable practices to take hold.
SWAP
4.9 The SWAP model (Johnson 2015b, a) is an agent-based model of farmers making decisions about adopting soil and water conservation (SWC) practices on their land. Developed in NetLogo (Wilensky 1999), the main agents in the model are farmers, who are making decisions about whether to practice SWC or not, and extension agents, who are government and non-governmental actors who encourage farmers to adopt SWC. Farmers can also be encouraged or discouraged to change their behaviour depending on what those nearby and in their social networks are doing. The environment is a simple model of the soil quality (Figure 2). The main outcomes of interest
are the temporal and spatial patterns of SWC adoption. A full description can be found in Johnson (2015b) and
Johnson (2015a).
Figure 2: Screenshot of the SWAP model key outputs and worldview in NetLogo. The outputs show the percentages of farmers practicing SWC and similarly the number of fields with SWC. In the worldview, circles show farmer agents in various decision states, triangles show ‘extension’ agents, and the patches’ colour denotes the presence of conservation (green or brown) and soil quality (deeper colour means higher quality). Source: Adapted from Johnson (2015a).
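A caricature of the adoption dynamics described above might look as follows. SWAP itself was written in NetLogo; this Python sketch uses invented thresholds and probabilities and is intended only to show the structure of neighbour and extension-agent influence.

import random

rng = random.Random(42)
SIZE = 20
farmers = {(x, y): False for x in range(SIZE) for y in range(SIZE)}          # adopted SWC?
extension = {(rng.randrange(SIZE), rng.randrange(SIZE)) for _ in range(10)}  # agent locations

def neighbours(cell):
    x, y = cell
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

for step in range(31):
    if step % 10 == 0:
        share = sum(farmers.values()) / len(farmers)
        print(f"step {step}: {share:.0%} of farmers practicing SWC")
    for cell, adopted in list(farmers.items()):
        if adopted:
            continue
        peer_share = sum(farmers[n] for n in neighbours(cell)) / 8   # social influence
        near_agent = cell in extension or any(n in extension for n in neighbours(cell))
        p_adopt = 0.01 + 0.3 * peer_share + (0.1 if near_agent else 0.0)
        farmers[cell] = rng.random() < p_adopt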
4.10 The SWAP model was developed: (i) as an ‘interested amateur’ to be used as a discussion tool to improve the quality of interaction between policy stakeholders; and (ii) as an exploration of the theory on farmer behaviour in the SWC literature.
4.11 The model’s use as an ‘interested amateur’ was explored with stakeholders in Ethiopia. Using a model as an interested amateur is a concept inspired by Dennett (2013). Dennett suggests experts often talk past each other, make wrong assumptions about others’ beliefs, and/or do not wish to look stupid by asking basic questions. These failings can often mean experts err on the side of under-explaining issues, and thus fail to come to consensus or agreeable outcomes in discussion. For Dennett, an academic philosopher, the solution is to bring undergraduate students – interested amateurs – into discussions to ask the simple questions, and generally force experts to err on the side of over-explanation.
4.12 The SWAP model was used as an interested amateur with a different set of experts, policy makers and officials in Ethiopia. This was done because policies designed to increase adoption of SWC have generally been unsuccessful due to poor calibration to farmers’ needs. This is understood in the literature to be a result of poor interaction between the various stakeholders working on SWC. When used, participants recognised the value of the model and it was successful in aiding discussion. However, participants described an inability to innovate in their work, and viewed stakeholders ‘lower down’ the policy spectrum as being in more need of discussion tools. A full description of this use of the model can be found in Johnson (2015a).
INFSO-SKIN
4.13 The European Commission was expecting to spend around €77 billion on research and development through its Horizon 2020 programme between 2014 and 2020. It is the successor to the previous, rather smaller programme, called Framework 7. When Horizon 2020 was being designed, the Commission wanted to understand how the rules for Framework 7 could be adapted for Horizon 2020 to optimise it for current policy goals, such as increasing the involvement of small and medium enterprises (SMEs).
4.14 An agent-based model, INFSO-SKIN, was built to evaluate possible funding policies. The model was set up to reproduce the funding rules, the funded organisations and projects, and the resulting network structures of the Framework 7 programme. This model, extrapolated into the future without any policy changes, was then used as a benchmark for further experiments. Against this baseline scenario, several policy changes that were under consideration for the design of the Horizon 2020 programme were then tested, to understand the effect of a range of policy options: changes to the thematic scope of the programme; the funding instruments; the overall
amount of programme funding; and increasing SME participation (Ahrweiler et al. 2015). The results of these
simulations ultimately informed the design of Horizon 2020.
Silent Spread and Exodis-FMD
4.15 Following the 2001 outbreak of Foot and Mouth Disease (FMD), the UK Department of the Environment, Food and Rural Affairs (Defra) imposed a 20-day standstill period prohibiting any livestock movements off-farm following the arrival of an animal. The 20-day rule caused significant difficulties for farmers. The Lessons Learned Inquiry, which reported in July 2002, recommended that the 20-day standstill remain in place pending a detailed cost-benefit analysis (CBA) of the standstill regime.
4.16 Defra commissioned the CBA in September 2002 and a report was required in early 2003 in order to inform changes to the movement regime prior to the spring movements season. This was challenging given the short timescales and the limited data available to inform the cost-risk-benefit modelling required. A top-down model was therefore developed that captured only the essential elements of the decision, combining them in an influence diagram representation of the decision to be made. As wide a range of experts as possible was involved in model development, helping to inform the structure of the model, its parameterisation, validation and interpretation of the results. An Agile approach was adopted, with detail added to the model in a series of development cycles guided by a steering group.
4.17 The resultant ‘Silent Spread’ model showed that factors such as time to detection of disease are much more important than the length of standstill in determining the size of an outbreak (Risk Solutions 2003). The modelling was critical to the Government’s decision to relax the 20-day movement control to 6 days, subject to commitments from the livestock industry. The iterative, participatory development process generated an unprecedented level of ‘buy-in’ to the results in an area which had previously been marked by deep controversy.
4.18 Following this, Defra commissioned further modelling to inform the design of the FMD contingency plan to be followed in the event of an outbreak. For this application, a detailed ‘bottom-up’ model was needed that could reproduce the detailed mechanisms of disease spread and enable the impacts of different control strategies on the spread of a disease to be explored (Risk Solutions 2005).
4.19 The model was implemented as an agent-based model using the Exodis™ disease modelling framework (Figure 3). The framework builds a heterogeneous geo-spatial representation of the UK based on farm census data, sets up the various FMD disease transmission mechanisms and integrates the effects of different control strategies and the resources required to carry out these strategies. The agents in the model are farms. For a given set of outbreak starting conditions and for a given control strategy, the model provides detailed information on how the outbreak might evolve, calculating parameters such as the number of premises infected, the duration of the outbreak, the number of animals culled and/or vaccinated, etc. It produces distributions for each of these parameters to reflect the range of potential outcomes for any outbreak.
Figure 3: Screenshots taken of various Exodis output and control screens.
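The kind of output described in 4.19 – distributions over many stochastic runs rather than single forecasts – can be illustrated with a toy Monte Carlo sketch. The spread and control rules below are invented assumptions for illustration, not those of Exodis-FMD.

import random
import statistics

def outbreak(control_delay_days, seed):
    # Toy branching-process outbreak: each infected farm makes a few risky
    # contacts per day; control measures reduce transmission after a delay.
    rng = random.Random(seed)
    total_infected, active, day = 1, 1, 0
    while active > 0 and day < 365:
        day += 1
        p_transmit = 0.5 if day < control_delay_days else 0.15
        new = sum(1 for _ in range(active * 3) if rng.random() < p_transmit)
        total_infected += new
        active = new
    return total_infected, day

for strategy, delay in (("fast control", 7), ("slow control", 21)):
    sizes = sorted(outbreak(delay, seed=s)[0] for s in range(500))
    print(f"{strategy}: median outbreak size={statistics.median(sizes)}, "
          f"95th percentile={sizes[int(0.95 * len(sizes))]}")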
4.20 Following the cost-benefit analysis work, Defra retained a decision support tool that provides a training aid for
use during exercises, and to inform decisions in the event of an actual outbreak. The model was used during the emerging outbreak of FMD in 2007 and continues to be used to test proposed changes to the control regime.
The Abstractor Behaviour Model
4.21 The abstraction of water from rivers and aquifers in England is controlled by a licensing regime established in the 1960s. The UK Government wishes to reform the system to one that encourages abstractors to manage water efficiently and work together to make best use of water. Water abstraction management is a classic ‘wicked’ problem in that it is highly resistant to resolution. Previous attempts to reform the system have failed, partly through not engaging stakeholders in the need for, and nature of, a solution.
4.22 Assessing the costs, risks and benefits of the different ways of reforming the system is complex. It needs to take into account:
• The interactions between a complex natural system and the abstractors (including the public water sup-
ply, power producers, farmers, and industry),
• That economic, social and climate conditions will change in ways that we cannot predict, and
• The complex way that the new measures will influence individual abstractor behaviours on a day-by-day,
year-by-year basis.
4.23 Agent-based modelling was ideally suited to explore how the existing and proposed reforms might operate. A
multidisciplinary team worked with a wide range of experts and stakeholders to develop an agent-based eco-
nomic behavioural model integrated with catchment hydrological models on a daily time-step basis (Risk So-
lutions 2015).
Figure 4: Schematic of the two main model components of the Abstraction Behaviour Model for one catchment – showing the hydrological model (topology snapped to a 1 km grid including: the river network, aquifer structure, water abstraction and assessment points) and the economic behaviour model (showing land use and position of abstractors).
4.24 The agent population consists of all of the businesses that have a licence to take water from the rivers and aquifers in a particular river basin (Figure 4). The river basin is modelled in detail using a hydrological model of the rivers, aquifers, and land use with a geo-spatial resolution of 1 km by 1 km. Each agent makes a series of strategic and operational decisions, and the decision-making evolves with time as water demand and availability change with economic and climate change. The policy options control water levels in the modelled rivers and aquifers using different mechanisms, and allow different types of water rights trading between agents. The successful achievement of environmental standards is monitored by regulator agents, who take action to further restrict abstraction permissions if necessary. The model was used to explore in detail how the reforms
would work in practice. It exposed many unanticipated and often unwelcome effects, and so enabled the design of the reforms to be improved.
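Schematically, the interaction between abstractor agents, a shared water resource and a regulator agent might be sketched as follows (a toy illustration with invented flows, licences and rules, not the Risk Solutions model):

import random

rng = random.Random(3)
licences = {"water company": 40.0, "farm": 15.0, "power station": 25.0}   # Ml/day licensed
ENV_THRESHOLD = 60.0    # minimum residual flow to protect the environment (Ml/day)
restriction = 1.0       # fraction of each licence currently allowed by the regulator

for day in range(1, 8):
    flow = max(0.0, rng.gauss(120, 40))    # toy daily river flow (Ml/day)
    # regulator agent: restrict abstraction when the flow could not support
    # full licensed abstraction plus the environmental threshold
    restriction = 0.5 if flow < ENV_THRESHOLD + sum(licences.values()) else 1.0
    abstracted = {}
    for name, licence in licences.items():
        demand = licence * rng.uniform(0.6, 1.0)            # operational decision
        abstracted[name] = min(demand, licence * restriction)
    residual = flow - sum(abstracted.values())
    print(f"day {day}: flow={flow:5.0f}  restriction={restriction:.0%}  "
          f"residual flow={residual:5.0f}")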
Key Lessons for Policy Modellers
5.1 From our experience, derived from the example policy models described in the previous section and others we have worked on, we suggest the following are some of the key lessons modellers should carry into their policy modelling efforts:
The process is as important, and often more so, than the outputs
5.2 Many government decision processes understandably require quantitative data, for example to complete a Regulatory Impact Appraisal template. A simple set of cost-benefit values provides clear, compelling arguments in support of a decision or conclusion. In complex, evolving environments, however, reducing the answer to a limited set of numbers may be neither possible nor desirable – conveying as they do a level of certainty in understanding which is rarely achievable. Policy modelling in complex environments should be as much, or more, about developing understanding of a problem or decision as it is about the number at the end. Care is needed to ensure that the need, or desire, for numbers, alongside unfamiliarity with, or suspicion of, unfamiliar approaches, does not drive the choice of sub-optimal modelling approaches.
5.3 In the water abstraction reform work (Section 4.21), although the modelling did generate numbers to input to the Impact Assessment Template (absorbing a significant proportion of the modelling effort), the greatest benefit of the work was the contribution to designing the policy, which was intimately informed by the more exploratory aspects of the modelling, including both the discipline provided by the need to articulate the reforms in a way that could be represented in the model, and the understanding of the complex, emergent nature of the system uncovered through running multiple scenarios, sensitivity analyses and what-if scenarios.
5.4 In the SWAP model (Section 4.9), the policy value lay entirely in the process of interrogating the model, and using it as a basis for discussions, sharing assumptions and building consensus. An interesting extra dimension was that critiquing design choices generated value for stakeholders. In this role the model is a boundary object (Star & Griesemer 1989), and an ‘interested amateur’ (Johnson 2015a) as described above. With stakeholders who do not regularly work together, and/or who do not have the capacity to take ownership and undertake continued use and maintenance of a model, this process-based value is even more likely to be the main benefit of the modelling process.
5.5 In the Tell Me model (Section 4.2), we find a similar message. In this example, detailed micro-validation (i.e. sense checking model rules and assumptions) and exploration of their effects on outcomes was one of the main benefits to public health stakeholders involved in the project. The lack of data available to allow rigorous formal validation of the model meant that this was one of the most valuable aspects of the modelling exercise.
5.6 The HOPES model (Section 4.5) introduced analysts concerned with developing policies to manage household energy demand to the idea of considering social practices as an alternative to assuming household energy use is determined by individual rational actors making decisions based primarily on cost. The fact that the HOPES model could generate plausible outputs using social practice theory as its foundation was probably more significant to stakeholders than the actual values it yielded.
Models need to be at an appropriate level of abstraction
5.7 No model can fully reflect the real world: some details need to be omitted and some boundary needs to be drawn around what is to be modelled. However, it is not the case that the most detailed model is necessarily the best. On the contrary, highly detailed models may require more data than is or could be available; can be hard to calibrate and validate; and, most importantly, can be hard to understand. Clients, modellers and stakeholders can all struggle with the idea that less can be more and get drawn into trying to model reality instead of the decision essentials. On the other hand, a model that is too simple or too abstract may be impossible to validate, because there is nothing in the model that corresponds to empirical observation, and because the behaviour of the model may bear little relationship to what happens in the world. The optimal level of abstraction will depend on the purpose of the modelling and the nature of the system being modelled (Edmonds & Moss 2005). One of the signs of good modelling is pitching the model at the right place in between the two extremes.
5.8 The Silent Spread model described in Section 4.15 was a simple model developed at a high level of abstraction. The modelling was required to support a single decision question: could the livestock movement standstill period be reduced or removed? At the time Defra did not routinely collect information on the movement of animals, and so data to inform the modelling was limited. The solution was therefore to develop an abstract model, capturing only those elements essential to the decision. With more time, and a much richer fund of data, it was possible to develop a much more detailed representation of disease spread for the Exodis-FMD™ model. In this case it was also necessary to capture the dynamic interaction of the various control strategies with the spread of a disease, in order to provide a basis for testing these.
5.9 HOPES (Section 4.5) started as an abstract model that served as a proof that it is possible to go beyond the conventional but limited approach of analysing energy demand in terms of rational and individual decision making, to model energy consuming social practices in the household. Only once this proof of concept version had been demonstrated did the model get extended and refined to incorporate specific social practices (maintaining a comfortable environment, doing the laundry, etc.) that could be calibrated against the data collected from energy sensors installed in the sample households.
5.10 One motive for making the HOPES model more concrete was a desire to link it to existing models of the UK energy supply system. These modelled electricity supply from electricity power stations, wind farms, etc. and the interconnecting grid, and have been used to develop scenarios for informing decisions about the optimal ways of developing the whole energy system to meet low carbon targets in 2050. However, these supply models incorporated demand functions based on rather simple household utility maximisation assumptions. The HOPES model has been used to improve this aspect of the supply models, but not without difficulty, stemming from the overall complexity of the models, the different disciplinary approaches (the supply models are based on optimising using linear programming techniques; HOPES is an agent-based model), and the different time scales of the simulations (the supply models use time steps of days or years, while HOPES has hourly time steps). This example illustrates well the fact that one needs to think carefully about the appropriate level of abstraction of models, not only in terms of their relevance for stakeholders but also to fit them properly into what can be a whole ecology of related models.
Data and validation challenges must be recognised, but not used as an excuse not to model, or not to use the results
Data challenges
5.11 Data is never perfect. Lack of data, or poor quality data, frustrates the parameterisation and validation of models. However, lack of data should never be used as an excuse not to model, or not to model an aspect of a problem that is important to the decisions to be made. Collaborative approaches, formal elicitation of expert judgement, explicit modelling of uncertainty and sensitivity analysis can all be used to address a lack of data.
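One simple way of putting this into practice is to treat a poorly evidenced parameter as an uncertain range elicited from experts and to run a one-at-a-time sensitivity analysis over it. The sketch below is illustrative only: the model, parameter names and ranges are placeholders.

import random

def policy_outcome(effect_size, uptake, seed=0):
    # Placeholder model combining two poorly evidenced parameters.
    rng = random.Random(seed)
    return 100 * effect_size * uptake + rng.gauss(0, 1)

baseline = {"effect_size": 0.3, "uptake": 0.5}              # expert judgement, not data
ranges = {"effect_size": (0.1, 0.6), "uptake": (0.2, 0.8)}  # elicited plausible ranges

print(f"baseline outcome: {policy_outcome(**baseline):.1f}")
for param, (low, high) in ranges.items():
    results = [policy_outcome(**dict(baseline, **{param: value})) for value in (low, high)]
    print(f"varying {param} over its plausible range changes the outcome "
          f"from {min(results):.1f} to {max(results):.1f}")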
5.12 In the Tell Me example (Section 4.2), despite an initial belief by modellers and stakeholders that data was available, it became clear that there was no data that connected policy interventions with behavioural change and outcomes. Behavioural outcome data was at an aggregate level, meaning it is impossible to understand the individual level impacts of the intervention. Data directly connecting intervention and outcome, for each individual, is vital for choosing values for effect size parameters in the model.
5.13 In this example, the lack of data should not be seen as a reason not to model. The motivations to model remain. Rather, the lack of data made explicit by the model should be used to inform future data collection. As Barbrook-Johnson et al. (2017) state,
“[data] collection must go alongside continued development of theory and models of decision-making. Improved theories of individual decision-making and interaction will give models a stronger footing on which to base their assumptions. As data and theory improve, so too will the (prototype) models developed using that support. This could then lead to improved data collection and theory building, creating a positive feedback between the three.”
Validation challenges
5.14 Lack of data can present particular challenges to the formal validation of models, particularly in the complex, changing environments where modelling to explore how the future might unfold can be most useful. In the Tell
Me example, the data on behavioural outcomes through time either did not exist, or tended to be at a relatively low resolution. This meant that there was not enough longitudinal outcome data for the model results to be compared with.
5.15 Lack of a comprehensive dataset for validation should not be taken to imply the model cannot be validated for its particular purpose. In these circumstances, a layered approach to validation should be used: formal quality assurance processes should be applied from the outset, including the selection of the modelling approach (see for example the Aqua Book, HM Treasury 2015, and Edmonds 2016), alongside formal documented verification and validation processes. Formal validation should involve subject matter experts in collaboration with model output users and modellers, and should be an integral part of model development.
5.16 Validation must ensure that the model (1) makes technical or scientific sense, (2) can reproduce recorded reality, and (3) is fit for the use it is designed for. Taylor (2003) includes a useful checklist of these and other issues to address and questions to ask when using models in decision making.
5.17 The Silent Spread example (Section 4.15) illustrates how a model can be developed and validated in the absence of much ‘hard’ data, through a process of scrutiny of all stages of model development and result generation by subject matter experts, modellers, users and wider stakeholders.
Model development and use needs to be Agile and collaborative
5.18 Agile, collaborative processes ensure models remain focused on the policy need and provide for more effective peer review and scrutiny of the modelling process. This requires a high degree of trust between commissioners and modellers from the outset. Policy makers, analysts, model output users, stakeholders, and peer reviewers should be involved, not just at the problem definition and user needs stage, but throughout, to ensure that the modelling approach, model structure and level of abstraction, parameterisation, analysis and interpretation of the results remain fit for purpose and focused on need.
5.19 At the scoping stage, there needs to be an honest discussion about the best modelling approach and whether existing models will meet needs. Great care needs to be taken when using models for applications they were not originally designed for, to ensure that the underlying structure of the model is fit for purpose.
5.20 An Agile development approach (Abrahamsson et al. 2017), which iteratively adds functionality and detail to
the model through cycles of development, testing and scrutiny, is a good way of managing the tendency for
modellers and clients alike to drive towards too great a level of detail and more realistic representations in
models than is optimal.
5.21 Finally, modellers should be involved in helping to interpret the results for decision making. It is impossible to capture in a report all the nuances of the model simplifications, data weaknesses etc. in a way that policy makers can use reliably.
5.22 The Silent Spread work (Section 4.15) used a highly participatory approach leading to much improved understanding and cooperation between Defra and industry stakeholders. In contrast, the INFSO-SKIN model (Section 4.13) was developed in response to an invitation to tender that had the effect of distancing the stakeholders, that is, the relevant policy makers, from the modellers. The people from the European Commission (EC) who were the clients only met the modellers at the beginning, middle and end of the model development and were not therefore much involved in its design. Moreover, the EC personnel changed during the development and by the end there was a rather poor understanding by the clients of the purpose and capabilities of the model. A further issue was that the clients wanted the modellers to draw out specific policy recommendations from the model, while the modellers were happy to test policy options proposed by the clients but did not think it appropriate that they should be devising policies themselves. These are all symptoms of the absence of proper collaboration between the modellers and the commissioners.
The ethics of modelling
5.23 Policy modelling requires careful consideration of a wide range of ethical issues, not least because policy models have the potential to change policy and thus directly affect people’s lives. In addition to the basic imperative to ensure that a model is fit for purpose, as discussed above, there is also a need to consider issues arising in connection with the data used to build and calibrate the model and the way in which the results of the model are presented.
5.24 When personal data is collected, either explicitly through, for example, a survey, or implicitly, as administrative records or as the side-effect of other activities (such as using social media or mobile phones), not only does
one need to abide by data protection laws, but one also needs to ensure that appropriate informed consent for such use of the data has been obtained (see, for example, the ethical guidelines published by the British Sociological Association (BSA 2017), the Association of Internet Researchers (AOIR 2012) and the Association of Computing Machinery (ACM 1992)). There remains a need to bring these guidelines together and to draw out their relevance to modelling.
5.25 An important consideration is whether data is representative of the population being modelled. As artificial intelligence researchers have discovered to their cost, basing a model on biased data can lead to biased results, and it can be hard to detect this after the event (Knight 2017). This is especially a problem with ‘big data’, where it is easy to assume that because one has a very large volume of data it must be representative, although important but numerically small minorities may be absent.
5.26 The results derived from models are always subject to a degree of uncertainty. However, it is easy for modellers, and especially the users of models, to downplay, intentionally (because they do not believe it will be well received) or unintentionally (expert bias), the degree of uncertainty present and the implications of that uncertainty for making policy decisions. Users may also put pressure on modellers to downplay uncertainty. Modellers should be clear and confident in their communication of uncertainty, but also informative: the user needs to understand what the uncertainty means in terms of the decisions or communications they need to make. This is made more problematic if the model is complex and presented to users as a 'black box' that generates results without users being able to investigate for themselves the logic and the assumptions lying behind those results. This is another reason for encouraging collaboration between users and modellers: users can follow the model development process and may then get at least a glimpse of its workings and the assumptions being made; modellers can better understand the context and ensure that the results are presented in a form that is useful.
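One common way of making uncertainty explicit is to run many stochastic replications of a model and report an interval rather than a single number. The sketch below illustrates the idea only; the model function, parameter values and number of replications are hypothetical placeholders, not drawn from the examples above:

```python
import random
import statistics

def policy_model(adoption_rate, seed):
    """Placeholder stochastic policy model: returns a simulated outcome
    (e.g. hectares under conservation after ten years) for one replication."""
    rng = random.Random(seed)
    outcome = 0.0
    for _ in range(10):  # ten simulated years
        outcome += 100 * adoption_rate * rng.uniform(0.8, 1.2)
    return outcome

def run_with_uncertainty(adoption_rate, replications=500):
    """Run many replications and summarise the spread, not just a point estimate."""
    results = sorted(policy_model(adoption_rate, seed) for seed in range(replications))
    lower = results[int(0.05 * replications)]   # 5th percentile
    upper = results[int(0.95 * replications)]   # 95th percentile
    return {"median": round(statistics.median(results), 1),
            "90%_interval": (round(lower, 1), round(upper, 1))}

print(run_with_uncertainty(adoption_rate=0.3))
```

Reporting the interval alongside the central estimate gives users a concrete sense of how much weight the results can bear in a decision.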
5.27 In the Silent Spread example, decision information was needed quickly, when there was little data available to inform modelling. As wide a range of stakeholders, experts and officials as possible was actively involved in designing, populating and testing the model. Working groups met regularly at every stage of the modelling process. Once results began to emerge, the group helped to interrogate and interpret them, suggesting a range of ways of refining the modelling to test new hypotheses suggested by the outputs. A variety of different ways of presenting the uncertainty in the results was used; in particular, the level of residual risk associated with each of the policy options under consideration was clearly illustrated, allowing decision makers to take this into account in reaching their decision. The process produced unparalleled acceptance of the final conclusions for policy, with the model being described by one expert as the "collective brain of the group".
Communicating the modelling process, structure and results needs careful planning
5.28 Communication is necessary to clearly explain results and their limitations, to ensure that the outputs are used appropriately, and to build confidence in the modelling process and outputs. It is in the nature of model outputs, consisting of numbers and charts, to appear more certain than they are, and this can mean that the boundary between data and assumption is overlooked. Poor past experience can lead to distrust of modelling. Active collaboration builds confidence in, and champions for, the work, but it is not possible to involve everyone. Changes of personnel, both in the modelling team and in the policy client, can also lead to problems. In complex modelling environments, it is easy to underestimate the communication challenges.
5.29 In the Silent Spread work (Section 4.15) the modellers had to work hard to break down distrust of modelling brought about by issues surrounding the use of predictive models to support the pre-emptive, contiguous cull during the 2001 outbreak of the disease. While at first it was hard to get stakeholders with entrenched and often opposed positions about what the 'right' answer was to talk constructively, the model gave them a neutral space in which to share different perspectives and test the results of these.
5.30 In the SWAP model example (Section 4.9), trust was less problematic. Rather, it was the model design, and the conclusions the model (and modeller) could draw, that needed to be communicated to stakeholders who were not familiar with formal computational modelling approaches. This led to an accessible form of communicating the model being designed, which still needed to convey the detail of the model's assumptions and rules so that it could be used as the basis of discussion. To do this, a combination of pseudo-code, simplified (and jargon-free) Unified Modelling Language diagrams, and projector presentations of the results and of the model running were used. The emphasis was placed strongly on the assumptions and step-by-step rules of the model, rather than on the results.
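As an illustration of the style of presentation this implies, model logic can be shown to stakeholders as heavily commented, plain-language code rather than as equations or raw source. The rule below is a hypothetical, simplified adoption rule written for readability; it is not the actual SWAP specification:

```python
from dataclasses import dataclass

@dataclass
class Farmer:
    has_adopted: bool = False
    can_afford_change: bool = True

def farmer_decides_each_season(farmer, neighbours, adoption_threshold=0.5):
    """Hypothetical, simplified adoption rule, written to be read aloud in a workshop.

    Each season a farmer looks at how many neighbouring farmers have already
    adopted conservation practices; if more than half have, and the change is
    affordable, the farmer adopts too.
    """
    adopting = sum(1 for n in neighbours if n.has_adopted)
    share = adopting / len(neighbours) if neighbours else 0.0
    if share > adoption_threshold and farmer.can_afford_change:
        farmer.has_adopted = True
    return farmer

# Worked through with stakeholders: three of four neighbours have adopted
neighbours = [Farmer(has_adopted=True)] * 3 + [Farmer()]
print(farmer_decides_each_season(Farmer(), neighbours).has_adopted)  # True
```

Presented in this form, each line can be read aloud, questioned and amended by stakeholders without any knowledge of the underlying simulation platform.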
Models need to be maintained
5.31 If models can continue to have a role in policy monitoring, development and evaluation after their initial results, they deliver better value, but ensuring that models are properly maintained is difficult within government procurement processes and structures. Plans for maintenance of the model should be discussed at the start of the modelling project.
5.32 Open source models are attractive because communities can continue to maintain and scrutinise them, but this is not always an option for policy models, which must continue to represent complex policy accurately over time, accounting for changes in policy and the policy environment. Decision makers need to have confidence that this will happen and are unlikely to be prepared to rely on voluntary efforts. Moreover, many policy models need to use confidential data, which cannot be made openly available at the necessary level of disaggregation.
5.33 Of the models described in Section 4, only Exodis (Section 4.15) is currently being maintained. Securing long-term maintenance arrangements is thus a challenge that is so far rarely met properly.
Conclusions
6.1 The technology required for modelling complex domains is in place and increasingly easy to use. However, for policy modelling to achieve its full potential, more attention needs to be paid to the processes of model development and use. As we have illustrated in this paper, there are many pitfalls along the way to making policy models effective and used. Much of this is 'craft knowledge', gained from experience and from making mistakes, which is why we have described key lessons that we have learned from our own varied experience. Nevertheless, where the costs or risks associated with a policy change are high, and the context is complex, it is not only common sense to use policy modelling to inform decision making, but it would be unethical not to.
6.2 The most important requirement, in our view, for successful policy modelling is to encourage communication and collaboration among those involved: the modellers themselves, the clients and stakeholders, the suppliers of data, the users of the model outputs and so on. Academia still has a tendency to work within an ivory tower, making results, and models, available to users only once they have been fully developed and after the work has been published in the research literature. While this approach may work for some formal modelling, it almost certainly will not yield useful policy models that are actually used by decision makers. Instead, as we have emphasised above, policy modelling needs to be collaborative, iterative and Agile. Such an approach has many benefits. Firstly, it provides a sense of ownership of the model and encourages commitment from users about what they may come to see as 'their' model, rather than some black box that someone else is imposing on them. Secondly, collaboration helps to prevent modellers from making naïve assumptions about the target domain, which is easy to do if one is not a domain expert. Thus, through collaboration, the modellers are educated about the complexities of the world they are trying to represent, but equally, the users are educated about the capabilities and limitations of the model that they are helping to develop. Thirdly, active engagement of stakeholders can help to parameterise and sense-check models, even where 'hard' data is sparse. Lack of data should never be used as an excuse not to model; rather, an iterative, participative approach to modelling allows data needs to be identified and ways of addressing them to be developed.
6.3 Such a collaborative style of working may be foreign to many government agencies and can involve delicate negotiations about confidentiality, privacy and access to data. However, there does seem to be an inexorable trend towards the greater use of simulation, machine learning and artificial intelligence to aid decision making in government and business, so the culture may have to change to permit, and even encourage, a more collaborative, Agile modelling approach. When it does, policy modelling will truly have come of age.
Acknowledgements
The support of the following for the preparation of this paper and the examples mentioned is acknowledged:
For SWAP: This work was supported by the Economic and Social Research Council (Grant No. ES/J500148/1). Additional support was received from the International Livestock Research Institute and the International Water Management Institute.
For TELL ME: This research has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013), Grant Agreement number 278723. The full project title is TELL ME: Transparent communication in Epidemics: Learning Lessons from experience, delivering effective Messages, providing Evidence, with details at http://tellmeproject.eu/.
For WholeSEM: The UK Engineering and Physical Sciences Research Council supported this work through the Whole Systems Energy Modelling Consortium (WholeSEM) project (grant EP/K039326/1). http://www.wholesem.ac.uk/.
For Silent Spread and Exodis: This work was funded by Defra. The Risk Solutions lead modeller for Silent Spread was Chris Rees and for Exodis, Jon Pocock.
For the Water Abstraction Model: This work was supported by Defra, the Environment Agency, the Welsh Government and Natural Resources Wales. The Risk Solutions lead modeller was Jon Pocock. Risk Solutions led a consortium that also included HR Wallingford, Amec, London Economics and Wilson Sherriff.
For CECAN: The Centre for the Evaluation of Complexity Across the Nexus is supported by the Economic and Social Research Council, the Natural Environment Research Council, BEIS, DEFRA, the Environment Agency and the Food Standards Agency, grant ES/N012550/1. http://www.cecan.ac.uk.
Notes
1. To help address these issues it is useful for modellers to consider and use the wealth of research on the role of research in the policy process, the science-policy interface and research utilisation, and evidence-based policy. It is not the purpose of this paper to discuss this research; readers are referred to the following sources:
• On the science-policy interface, researchers have considered how the two communities of 'policy makers' and 'researchers' interact. Historically, the divide has been seen as clear (Weiss 1976; Caplan et al. 1975; Caplan 1979), but more recent work explores the continuous interaction and movement between the communities (e.g. Cash et al. 2003; Clark & Holmes 2010).
• On how research is actually used, there have been many conceptualisations and overviews (e.g. Jäger 1998; Weible 2008). The most well-known is Weiss (1979), which outlines how research can be used as evidence; a problem-solving tool; one source of information among many; justification for already-made decisions; a tool to delay difficult or sensitive decisions (i.e. 'we need to do more research on this', 'kick it into the long grass'); a source of general enlightenment; and finally, one of many pursuits of society (alongside policy, art, media, law etc.) which all influence each other. A lesson from much of this work is that it is often difficult or impossible to foresee how a model may be used, and this has implications for how the model is designed and maintained, a point we return to in Section 5.
• On evidence-based policy, Cairney (2018) is an excellent starting point.
References
Abrahamsson, P., Salo, O., Ronkainen, J. & Warsta, J. (2017). Agile software development methods: Review and analysis. ArXiv:1709.08439. Accessible at: https://arxiv.org/abs/1709.08439
ACM (1992). ACM code of ethics and professional conduct. https://www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct
Ahrweiler, P., Schilperoord, M., Pyka, A. & Gilbert, N. (2015). Modelling research policy: Ex-ante evaluation of
complex policy instruments. Journal of Artificial Societies and Social Simulation, 18(4), 5
Anzola, D., Barbrook-Johnson, P., Salgado, M. & Gilbert, N. (2017). Sociology and non-equilibrium social science. In J. Johnson, A. Nowak, P. Ormerod, B. Rosewell & Y.-C. Zhang (Eds.), Non-Equilibrium Social Science and Policy: Introduction and Essays on New and Changing Paradigms in Socio-Economic Thinking (pp. 59–69). Cham: Springer
AOIR (2012). Ethical decision-making and internet research: Recommendations from the AoIR Ethics Working
Committee (version 2.0). https://aoir.org/reports/ethics2
Badham, J. & Gilbert, N. (2015). TELL ME design: Protective behaviour during an epidemic (version 2.0). CRESS Working Paper, Vol. 2015, No. 2. CRESS, University of Surrey. http://cress.soc.surrey.ac.uk/web/publications/working-papers/tell-me-design-protective-behaviour-during-epidemic
Badham, J., Jansen, C., Shardlow, N. & French, T. (2017). Calibrating with multiple criteria: A demonstration of
dominance. Journal of Artificial Societies and Social Simulation, 20(2), 11
Barbrook-Johnson, P., Badham, J. & Gilbert, N. (2017). Uses of agent-based modeling for health communication: The TELL ME case study. Health Communication, 32(8), 939–944
Boeschen, S., Gross, M. & Krohn, W. (Eds.) (2017). Experimentelle Gesellschaft. Das Experiment als wissensgesellschaftliches Dispositiv. Baden-Baden: Nomos
Boruch, R. F. (1997). Randomized Experiments for Planning and Evaluation: A Practical Guide. Thousand Oaks,
CA: Sage
BSA (2017). Guidelines on ethical research. https://www.britsoc.co.uk/ethics
Byrne, D. & Callaghan, G. (2014). Complexity Theory and the Social Sciences: The State of the Art. Abingdon/New York, NY: Routledge
Cabinet Office (2003). Trying it out: The role of 'pilots' in policy-making: Report of a review of government pilots. London: Cabinet Office, Strategy Unit
Cairney, P. (2013). Standing on the shoulders of giants: How do we combine the insights of multiple theories in
public policy studies? Policy Studies Journal, 41(1), 1–21
Cairney, P. (2018). The UK government’s imaginative use of evidence to make policy. British Politics, (pp. 1–22)
Caplan, N. (1979). The two-communities theory and knowledge utilization. American Behavioral Scientist, 22(3), 459–470
Caplan, N., Morrison, A. & Stambaugh, R. J. (1975). The Use of Social Science Knowledge in Policy Decisions at the National Level: A Report to Respondents. Ann Arbor, MI: Institute for Social Research, University of Michigan
Cash, D. W., Clark, W. C., Alcock, F., Dickson, N. M., Eckley, N., Guston, D. H., Jäger, J. & Mitchell, R. B. (2003).
Knowledge systems for sustainable development. Proceedings of the National Academy of Sciences, 100(14),
8086–8091
Clark, H. & Taplin, D. H. (2012). Theory of Change Basics: A Primer on Theory of Change. New York, NY: Actknowledge. Accessible at: http://www.theoryofchange.org/wp-content/uploads/toco_library/pdf/ToCBasics
Clark, R. & Holmes, J. (2010). Improving input from research to environmental policy: Challenges of structure
and culture. Science and Public Policy, 37(10), 751–764
Dennett, D. C. (2013). Intuition Pumps and Other Tools for Thinking. New York, NY: Norton
Edmonds, B. (2016). Review of: The aqua book: Guidance on producing quality analysis for government. Journal of Artificial Societies and Social Simulation, 19(3)
Edmonds, B. & Moss, S. (2005). From KISS to KIDS – an 'anti-simplistic' modelling approach. In J. S. Sichman, R. Conte & N. Gilbert (Eds.), International Workshop on Multi-Agent Systems and Agent-Based Simulation: First International Workshop, MABS '98, Paris, France, July 4-6, 1998, Proceedings (pp. 130–144). Berlin: Springer
Greenberg, D. H. & Shroder, M. (1997). The Digest of Social Experiments. Washington, DC: The Urban Institute
Gross, M. & Krohn, W. (2005). Society as experiment: Sociological foundations for a self-experimental society.
History of the Human Sciences, 18(2), 63–86
Hauke, J., Lorscheid, I. & Meyer, M. (2017). Recent development of social simulation as reflected in JASSS between 2008 and 2014: A citation and co-citation analysis. Journal of Artificial Societies and Social Simulation, 20(1), 5
Hills, D. (2010). Logic mapping: Hints and tips for better transport evaluations. Department for Transport
HM Treasury (2011). The Magenta Book: Guidance for Evaluation. https://www.gov.uk/government/publications/the-magenta-book
HM Treasury (2013). The Green Book: Appraisal and Evaluation in Central Government. https://www.gov.uk/government/publications/the-green-book-appraisal-and-evaluation-in-central-governent
HM Treasury (2015). The Aqua Book: Guidance on Producing Quality Analysis for Government. https://www.gov.uk/government/publications/the-aqua-book-guidance-on-producing-quality-analysis-for-government
Jäger, J. (1998). Current thinking on using scientific findings in environmental policy making. Environmental
Modeling & Assessment, 3(3), 143–153
Johnson, P. G. (2015a). Agent-based models as 'interested amateurs'. Land, 4(2), 281–299
Johnson, P. G. (2015b). The SWAP model: Policy and theory applications for agent-based modelling of soil and
water conservation adoption. Doctoral thesis, University of Surrey
Johnston, E. W. & Desouza, K. C. (2015). Governance in the Information Era: Theory and Practice of Policy Infor-
matics. London: Routledge
Knight, W. (2017). Forget killer robots – bias is the real AI danger. MIT Technology Review, 3 October 2017. Accessible at: https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/
Kolkman, D. A., Campo, P., Balke-Visser, T. & Gilbert, N. (2016). How to build models for government: Criteria
driving model acceptance in policymaking. Policy Sciences, 49(4), 489–504
Krohn, W. (2007). Realexperimente – Die Modernisierung der 'offenen Gesellschaft' durch experimentelle Forschung. Erwägen – Wissen – Ethik, 18(3), 343–356
Lindblom, C. E. (1959). The science of ‘muddling through’. Public Administration Review, 19(2), 79–88
Lindblom, C. E. (1979). Still muddling, not yet through. Public Administration Review, 39(6), 517–526
Martin, S. & Sanderson, I. (1999). Evaluating public policy experiments: Measuring outcomes, monitoring processes or managing pilots? Evaluation, 5(3), 245–258
Moran, M. (2015). Politics and Governance in the UK. London / New York, NY: Palgrave Macmillan
Narasimhan, K., Gilbert, N., Hope, A. & Roberts, T. (2017). Demystifying energy demand using a practice-centric agent-based model. Working paper retrieved from http://cress.soc.surrey.ac.uk/web/publications/working-papers
RAENG (2015). A critical time for UK energy policy: What must be done now to deliver the UK's future energy system: A report for the Council for Science and Technology. https://www.raeng.org.uk/publications/reports/a-critical-time-for-uk-energy-policy
Reckwitz, A. (2002). Toward a theory of social practices: A development in culturalist theorizing. European
Journal of Social Theory, 5(2), 243–263
Risk Solutions (2003). FMD CBA phase 2 – integrated findings summary report. Accessible at: http://webarchive.nationalarchives.gov.uk/20100713185142/http:/www.defra.gov.uk/foodfarm/farmanimal/movements/costbenefit/documents/integrated_summary2
Risk Solutions (2005). Cost benefit analysis of foot and mouth disease controls. Accessible at: http://www.elika.eus/datos/articulos/Archivo136/DEFRA_Faftosa
Risk Solutions (2015). The impact of water abstraction reform – final report, 2015. Accessible at: http://sciencesearch.defra.gov.uk/Document.aspx?Document=13715_WT1563_TheImpactofWaterAbstractionReform-FinalReport
Sawyer, R. K. (2005). Social Emergence: Societies as Complex Systems. Cambridge: Cambridge University Press
Seminar on Policy Pilots and Evaluation (2013). LSHTM, London. Accessible at: http://piru.lshtm.ac.uk/assets/files/Policy%20Pilots%20report%20final%20version .
Star, S. L. & Griesemer, J. R. (1989). Institutional ecology, 'translations' and boundary objects: Amateurs and professionals in Berkeley's Museum of Vertebrate Zoology, 1907–39. Social Studies of Science, 19(3), 387–420
Taylor, N. (2003). Review of the use of models in informing disease control policy development and adjustment. A report for DEFRA by Veterinary Epidemiology and Economics Research Unit. Accessible at: http://www.veeru.rdg.ac.uk/documents/UseofModelsinDiseaseControlPolicy
Teisman, G. R. & Klijn, E.-H. (2008). Complexity theory and public management: An introduction. Public Man-
agement Review, 10(3), 287–297
Uprichard, E. & Penn, A. (2016). Dependency models: A CECAN evaluation and policy practice note for policy
analysts and evaluators. Centre for the Evaluation of Complexity Across the Nexus note no. 6
Voinov, A. & Bousquet, F. (2010). Modelling with stakeholders. Environmental Modelling & Software, 25(11), 1268–1281
Weible, C. M. (2008). Expert-based information and policy subsystems: A review and synthesis. Policy Studies
Journal, 36(4), 615–635
Weiss, C. H. (1976). Policy research in university: Practical aid or academic exercise? Policy Studies Journal, 4(3), 224–228
Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review, 39(5), 426–431
Wilensky, U. (1999). NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern
University