Making Recommendations Based on Documented Advances in the Assessment Movement

450-600 words

Reflect upon the attached Assessment That Matters text below. The section entitled "The Current State of Institutional Assessment of Student Learning" (pages 7–25) presents details about the ten major findings. Select 3–5 that most closely apply to your Assessment and Evaluation of Learning Plan, also attached below, and explain how they relate to your plan. Next, discuss which implications drawn from the findings, as presented on pages 24–27, can be used to inform ways to generate improvement and accountability for your program, department, or institution.


National Institute for Learning Outcomes Assessment

Making Learning Outcomes Usable & Transparent

January 2018


Table of Contents

Executive Summary...3

Assessment That Matters: Trending Toward Practices That Document Authentic Student Learning...6

The Current State of Institutional Assessment of Student Learning...7

Institutional Needs and Supports for Student Learning Outcomes Assessment...13

Using Evidence of Student Learning...16

Communicating Information on Assessment...21

Use of Technology...23

Institutional Size and Selectivity...24

Implications...25

Moving Forward...27

References...30

Appendix A: Data Collection and Analysis...32

NILOA Mission

The National Institute for Learning
Outcomes Assessment’s (NILOA) primary
objective is to discover and disseminate
the ways that academic programs and
institutions can productively use assessment
data internally to inform and strengthen
undergraduate education, and externally to
communicate with policy makers, families,
and other stakeholders.

Acknowledgements

NILOA sincerely thanks the Indiana
University Center for Postsecondary
Research for their administration of the
survey. In addition, NILOA very much
appreciates the provosts and their designees
who set aside time from their very busy
schedules to complete the questionnaire.
We are in your debt.

Please cite as:

Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018, January). Assessment that matters: Trending toward practices
that document authentic student learning. Urbana, IL: University of Illinois and Indiana University, National Institute for
Learning Outcomes Assessment (NILOA).


Executive Summary

Assessment of student learning remains an ongoing and prevalent activity for United States higher education.
To take a snapshot of institution-level assessment in 2017 and trends over time, the National Institute for
Learning Outcomes Assessment (NILOA) conducted its third nationwide survey of provosts between April and
September 2017. Respondents from 811 regionally accredited, undergraduate degree-granting institutions from
throughout the U.S. participated. This report summarizes the major findings and presents implications for policy
and practice.

Major Findings

1. The vast majority of institutions have statements of learning for all undergraduate students
and growing numbers have aligned learning throughout the institution. Alignment of learning
outcomes throughout the institution has increased since the 2013 survey, with 82% of respondents
confirming their institution has established learning outcomes for all students; half of all respondents
reported that all of their programs have defined learning outcomes that also align with shared
institution-wide statements of learning.

2. Assessment continues to be driven by both compliance and improvement, with an emphasis on
equity. Taken together, the focus on improvement and equity concerns as reasons for undertaking
assessment, in addition to accreditation requirements, substantiates the ongoing interplay between
compliance and improvement at the institution-level.

3. Institutions are trending towards greater use of authentic measures of student learning, including
rubrics, classroom-based performance assessments, and capstones, which is consistent with
what provosts indicate are most valuable for improving student outcomes. The key takeaway is
that institutions are using a variety of data collection approaches that yield actionable information,
reinforcing the principle that there is not "one right way" to assess student learning.

4. Institutional needs for advancing assessment work have shifted since 2009 from engaging more
faculty in assessing student learning to supporting faculty use of assessment results and wider
stakeholder involvement. Although some lament the perceived limited involvement of faculty in
assessment of student learning, provosts are more interested in finding ways to help faculty
and staff develop the attitudes and tools to produce actionable results along with the skill set to use
results to improve student learning.

5. Institutional research offices and staff along with faculty-led assessment committees provide
needed support of institution-wide assessment activities. While a variety of organizational features
are increasingly supportive of assessment activities, policies on promotion and tenure lag behind.

6. Institution-level assessment results are regularly used for compliance and improvement purposes,
addressing accreditation and external accountability demands along with internal improvement
efforts. Accreditation remains the driver and main use of institution-level information about student
learning since 2009. However, various internal improvement efforts, including program review and
program improvement, also regularly benefit from institution-level assessment results. Yet use of
assessment results to inform co-curricular improvement, resource allocation, trustee and governing
board deliberations, and equity goals remains low.


7. The majority of changes made and uses of evidence of student learning occur at the program- and
course-level. About two thirds of provosts (64%) provided examples of changes made in policies,
programs, or practice informed by assessment results. Of those, the most frequently cited example of
change was at the assignment, course, and program-level.

8. Effectively communicating information about student learning remains a target of opportunity
for assessment work. Institutions provide limited publicly available information on assessment
activities on their websites. Yet, what was more important to provosts was not what to share, but how
to share information.

9. While assessment-related technologies hold promise of assisting with alignment and integration
of learning across the institution, meaningful implementation remains elusive. Provosts
indicated they were unsure how to implement software solutions in a manner that would fit with
the institutional culture they were trying to support and build connections within and across the
institution.

10. The larger the size and greater the selectivity of the institution, the less likely it is to employ a
variety of assessment activities. For almost every category of assessment activity, the larger and more
selective the institution, the less likely it is to employ various assessment approaches or use the results.

Implications

Looking across the current landscape of institutional assessment processes and practices, the trend that is emerging
is an authentic form of assessment that values evidence produced in the context of teaching and learning,
represents students’ work, supports faculty use of evidence of student learning to improve programs, courses
and assignments, and is connected to a variety of institutional learning initiatives. There is much about which to
be hopeful, including growth in the use of authentic measures of learning, integration of various initiatives
and efforts to improve student learning throughout the institution, and use of results embedded within
course- and program-level improvement. Yet with all the momentum, there are areas that need attention for
assessment efforts to continue to advance student learning and institutional effectiveness.

Communicating effectively about student learning remains a challenge. Colleges and universities must
more clearly and persuasively communicate relevant, timely, and contextualized information on their impact
on students and value to society.

While use of assessment results is increasing, documenting improvements in student learning and the
quality of teaching falls short of what the enterprise needs. Provosts provided numerous examples of
expansive changes at their institutions drawing on assessment data, but too few had examples of whether the
changes had the intended effects.

Equity is an important consideration in assessment work, but underemphasized in data use. Survey
respondents indicated that addressing issues of equity was important to assessment efforts and disaggregation
of evidence of learning by various groupings of students was beginning to occur. However, using assessment
data to support the achievement of equity goals was uncommon.


Governing boards have a key role to play in sustaining and further developing meaningful assessment.
They can endorse policies and priorities that support and encourage assessment and invite wider stakeholder
involvement.

Professional development could be more meaningfully integrated with assessment efforts, supporting faculty
use of results, technology implementation, and integration of efforts across an institution. Throughout the
institution, there are various points where assessment support may be provided, such as by librarians,
centers for teaching and learning, and student affairs staff and partners.

Moving Forward

Institutions of higher education in the United States are involved in a variety of initiatives to improve student
learning of which assessment is but one. In fact, a wide range of activity is occurring to advance authentic student
learning. For example, provosts indicated that their institutions were undertaking curriculum mapping, facilitating
work on assignment design, engaging in developing pathways to completion, revising general education, and
scaling high-impact practices, to name a few.

There is much to applaud about the current state of assessment practice. Granted, there are compliance issues that
must be managed and the field should speak more frequently about the worth and value of higher education. But
there is also a discernible trend toward using assessment data to guide improvement efforts and increased use
of embedded approaches that focus on ensuring authentic learning for individual students.


Assessment That Matters:

Trending Toward Practices That Document Authentic Student Learning

Natasha A. Jankowski, Jennifer D. Timmer, Jillian Kinzie, and George D. Kuh

We hope you enjoy reading
the survey findings.
Throughout the report,
relevant resources are
provided in the sidebars,
connecting findings from the
survey with available tools to
assist with implementation of
meaningful assessment efforts.

Introduction

Over the past decade, the National Institute for Learning Outcomes
Assessment (NILOA) has been documenting what colleges and universities
are doing to gather evidence about student learning and helping institutions
to productively use assessment data to strengthen undergraduate education.
NILOA also has been monitoring how institutions communicate with policy
makers, families, and others about their efforts to enhance student learning
and institutional effectiveness.

One mechanism utilized to understand the landscape of assessment practices
in United States higher education has been national surveys of senior academic
leaders about what is being done to measure student learning outcomes
and how results are used to improve teaching and learning. This report
summarizes the findings from NILOA’s third and most recent survey which
was conducted in 2017. The results from the first survey reported in 2009
found that there was more assessment work underway than widely thought,
but results were not often used; moreover, compliance with accreditation
expectations was the primary driver of assessment, and people outside of
the institution were rarely informed about assessment practices (Kuh &
Ikenberry, 2009). The second national survey report in 2013 argued that
the motivations for assessment were increasingly better balanced between
compliance with accreditation requirements and institutional improvement
efforts, with colleges and universities employing a variety of measures for
various uses (Kuh, Jankowski, Ikenberry, & Kinzie, 2014). Taken together,
the findings from the first two surveys suggest that assessment is a field of
practice evolving in a manner that would produce information that could
be used both to respond to legitimate accountability demands and to
guide institutional efforts to enhance student performance.

This report is based on data collected from provosts between April and
September 2017. The sample included provosts/chief academic officers
(or their designees) at 2,781 regionally accredited, undergraduate degree-
granting institutions. The questionnaire was completed by representatives
of 811 institutions for a response rate of 29%. Nearly 80% of the survey
respondents were from within the office of the provost, with the remainder
of the surveys completed by those responsible for assessment within the
institution. Appendix A contains additional information about the sample
and data analysis.
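As a quick arithmetic check, the response rate reported above follows directly from these counts: 811 completed questionnaires out of 2,781 invited institutions, or 811 / 2,781 ≈ 0.29, which rounds to 29%.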

The 2017 questionnaire asked respondents about institution-level
assessment, repeating many questions from the first and second survey efforts
on assessment methods, uses, drivers of assessment practice, and availability of
assessment information, while adding a few new questions about initiatives
to improve student learning.

Since 2008, NILOA has regularly issued survey reports and studies of
assessment practice. Reports on program-level assessment, state policy and
assessment, case studies of institutional assessment practice, and assessment
communication frameworks, among others, document ways that academic
programs and institutions can productively use assessment data internally
to inform and strengthen undergraduate education, and externally to
communicate with policy makers, families and other stakeholders (Ewell,
Paulson, & Kinzie, 2011; Ewell, Jankowski, & Provezis, 2010; Baker,
Jankowski, Provezis, & Kinzie, 2012; NILOA, 2011; Hutchings, Jankowski,
& Ewell, 2014; Montenegro & Jankowski, 2015). All told, we know much
more in 2018 about the practice of assessment occurring throughout the
U.S. than in 2008, and have taken opportunities to distill lessons learned
into principles to help inform practice along the way (NILOA, 2016). In
this report, we provide a snapshot of the current landscape and place the
findings within the ongoing conversation of assessing student learning in the
U.S. The report title, Assessment That Matters: Trending Toward Practices That
Document Authentic Student Learning, signals a trend toward an authentic
form of assessment that values evidence produced in the context of teaching
and learning; represents students’ work; supports faculty use of evidence
of student learning to improve programs, courses and assignments; and is
connected to a variety of institutional learning initiatives.

Data Snapshot

82% of for-profit institutions
indicated that all of their
programs have learning
outcome statements and that
they are aligned to institution-
wide statements of learning,
while only 44% of public and
53% of private institutions
indicated the same.

Doctoral institutions were the
least likely to report their
programs define learning
outcomes that align (35%)
while specialized institutions
were the most likely (66%).

The more selective an
institution, the less likely
it was to have program
learning outcomes that align
(36%), while open-enrollment
institutions were the most
likely (53%).

Institutional respondents
from the ACCJC accreditation
region were more likely than
those from any other region to
indicate that all programs had
learning outcomes and that
they align (81%).

The Current State of Institutional Assessment of Student
Learning

Statements of student learning outcomes remain prevalent across U.S.
higher education with 82% of respondents reporting that they have adopted
or developed an explicit set of student learning outcomes common to all
undergraduates across all majors. In addition, 66% of respondents indicated
that all of their programs have learning outcome statements—a number on
the rise from prior years (Figure 1).

1. The vast majority of institutions have statements of learning for
all undergraduate students and growing numbers have aligned
learning throughout the institution.

Alignment of learning outcomes throughout the institution has
increased since the 2013 survey, with 50% of respondents reporting that
all of their programs have defined learning outcomes that also align
with shared institution-wide statements of learning (Figure 1).
However, 20% of institutions report that there is no alignment
between program-level learning outcomes and institution-wide
learning outcome statements, while the remaining 30% indicate there
is some alignment. As institutions move towards more embedded
approaches to assess student learning in the form of assignments,
alignment takes on increasing importance to ensure a coherent, integrated,
and scaffolded learning experience that builds towards the institution-wide
learning outcomes of interest (Jankowski & Marshall, 2017).


Sidebar resource: Equity and Assessment (http://learningoutcomesassessment.org/occasionalpapertwentynine.html)

Figure 1. Percentage of institutions with alignment between stated institution-level
outcomes and program-level learning outcomes, comparing 2017 to 2013.

2. Assessment continues to be driven by both compliance and
improvement, with an emphasis on equity.

As in the past, accreditation remains the main driver of assessment at the
institution-level. At the same time, improving student learning has become
increasingly important, as more faculty and staff are involved in assessment
work (Figure 2). Concerns about equity and supporting achievement for
all students was a new response option on the 2017 questionnaire, and the
item ranked 5th as a factor prompting assessment. Public institutions (mean 2.07)
were more driven by equity concerns than their private (1.81) and for-profit
counterparts (1.83). Minority-Serving Institutions (MSIs) (2.13)
were more likely than predominantly white institutions (PWIs) (1.91) to
indicate that equity concerns were a driver of assessment efforts.

Taken together, the focus on improvement and equity concerns as reasons
for undertaking assessment, in addition to accreditation requirements,
substantiates the ongoing interplay between compliance and improvement
at the institution-level (Ewell, 2009). The influence of national calls for
accountability or transparency became a less important driver of assessment,
decreasing from 31% in 2009 to only 13% of schools in 2017. A similar
trend was noted for the role of institutional membership initiatives, such as
the Voluntary Framework of Accountability (VFA) or Voluntary System of
Accountability (VSA) which dropped from 21% in 2009, to 7% in 2013,
and 5% of institutions in 2017 as an important driver of assessment efforts.

Data Snapshot

Doctoral and master’s institutions were more likely than associate, baccalaureate, and specialized institutions to
indicate governing or coordinating board mandates and state mandates as a factor of high importance to their
assessment efforts. Associate degree-granting institutions were more likely than all other types to indicate that external
funding was a driver.

Assessment work at private institutions was less likely than public to be driven by institutional membership initiatives.
Student learning outcomes assessment at public institutions was more likely than at privates to be influenced by
external funding.



Figure 2. Importance of factors or forces that prompt student learning outcomes
assessment. (Response scale: No Importance, Minor Importance, Moderate Importance, High Importance.)

3. Institutions are trending towards greater use of authentic
measures of student learning, including rubrics, classroom-based
performance assessments, and capstones, which is consistent with
what provosts indicate are most valuable for improving student
outcomes.

To address both compliance expectations and improvement efforts,
institutions employ a variety of assessment approaches. On average,
institutions implement four different approaches to assess student learning,
down by one from the 2013 survey, but up by one from the 2009 survey.
The most common are national student surveys (76%), such as the National
Survey of Student Engagement (NSSE), followed by approaches embedded
in the everyday work of students such as rubrics (71%), classroom-based
performance assessments or assignments (64%), and capstone projects
(61%) (Figure 3).

In addition to a focus on embedded measures of authentic student learning
at the institution-level that build from course-based assessment, alumni
feedback has moved into a more prominent assessment role and standardized
measures, such as general knowledge and skills, are being used less often.

• Institutions in the WSCUC region (49%) were more likely than
those in SACSCOC (16%) to use portfolios at an institution-level
to assess student learning as well as capstone projects (WSCUC:
77%; SACSCOC: 26%).

• Institutions in the Northwest region were the most likely to use
general knowledge and skill measures (44%) compared with
SACSCOC (11%) and HLC (15%) institutions which were least
likely.


• Institutions in ACCJC and HLC regions were the most likely to use
alumni and employer feedback, while institutions in NEASC and
SACSCOC regions were least likely.

• Public institutions (67%) were more likely than private (34%) and
for-profit institutions (36%) to use placement exams.

• For-profit institutions were the least likely to use national student
surveys (18%), but most likely to use alumni feedback (82%) and
employer feedback (86%).

• Public institutions were least likely to use capstone projects (52%),
alumni feedback (48%) and externally-situated performance assessments (31%).

Figure 3. Percentage of institutions using assessment approaches at the institution-
level to represent undergraduate student learning.

Not only are institutions implementing a variety of approaches to assess
student learning at the institution-level, but there is variability in the
assessment approaches depending on institutional type (Figure 4):

• Associate degree-granting institutions are more likely than all other
types to use placement exams and employer feedback.

• Master's degree-granting institutions are more likely than specialized,
associate, and doctoral institutions to use national student
surveys—a group more likely to use local surveys.

• Master’s institutions are the most likely to use general knowledge
and skills measures.

• Baccalaureate and specialized institutions are more likely than all
other types to use capstone projects at the institution-level.


• Baccalaureate and master’s institutions are more likely than all other
types to use alumni feedback in their institution-wide approaches to
assessment.

It appears that institutions are using assessment approaches that are of
greatest value to them for improving student learning. When we asked
provosts to rank their top three most valuable assessment approaches the
institution uses for improving student learning, the most frequent response
was classroom-based performance assessments or assignments, followed by
rubrics, and national student surveys. Thus, the top two sources of valuable
information came from embedded approaches to assess authentic student
learning. In addition, responses across the top three rankings were consistent
in the choices selected, meaning provosts indicated that institutions were
using approaches that they find valuable instead of ones they thought they
“should” be doing.

Building a meaningful assessment approach from classroom-based
assessments to roll-up to the institution-level in ways most meaningful
to a particular institution forms part of the basis for the Excellence in
Assessment designation (EIA), which recognizes institutions for their efforts
in intentional integration of campus-level learning outcomes assessment.

Figure 4. Percentage of institutions using assessment approaches by institutional
type.

Sidebar resource: Excellence in Assessment Designation (http://www.learningoutcomesassessment.org/eiadesignation.html)


Frequently mentioned approaches included capstones, licensure exams,
employer feedback and surveys, locally developed measures such as surveys
and exams, external performance assessments, placement exams, program
assessment, portfolios, alumni surveys, and general knowledge and skills
measures. The key takeaway is that institutions are using a variety of data
collection approaches that yield actionable information, reinforcing the
principle that there is not "one right way" to assess student learning.

Not only are institutions adapting their assessment approaches to their
respective mission, interests, and perceived needs, they discontinue
assessment efforts they do not find valuable (Figure 5).1 While use of
national student surveys, such as NSSE, has remained the most prominent
institution-level assessment approach over time, rubric use continues to
increase. This likely is driven in large part by the work of the Association of
American Colleges & Universities (AAC&U) Valid Assessment of Learning
in Undergraduate Education (VALUE) project as well as the Multi-State
Collaborative to Advance Quality Student Learning initiative undertaken
in partnership between AAC&U and the State Higher Education Executive
Officers’ association (SHEEO) (McConnell & Rhodes, 2017). In addition,
classroom-based performance assessments have also increased over time—an
area in which NILOA has been directly involved through engaging faculty in
conversations about assignment design and curricular alignment (Hutchings,
2016; Jankowski & Marshall, 2017).

Figure 5. Comparison of use of selected assessment approaches, 2017, 2013, and 2009.

1 Not all of the assessment approaches were asked about in each iteration of the survey. Figure 5 indicates only those areas that were addressed in all three
surveys.

Sidebar resources: Assignment Charrettes (http://www.learningoutcomesassessment.org/niloaassignmentlibrary.htm); AAC&U VALUE Rubrics (https://www.aacu.org/value/rubrics)

Use of all assessment approaches has increased since 2009, except for the
use of general knowledge and skills measures, such as CLA+ or the ETS
Proficiency Profile. This finding, together with the uptick in the use of
measures of authentic student learning, suggests that provosts and their
colleagues involved in assessment are focusing on approaches they find to
be valuable and actionable rather than continuing activities that were
not yielding useful, meaningful information (Jankowski, Ikenberry, Kinzie,
Kuh, Shenoy, & Baker, 2012). This judicious selection of assessment tools
may explain in part why the typical institution appears to be using fewer
assessment approaches by investing only in those that have local value.

Institutional Needs and Supports for Student Learning
Outcomes Assessment

When asked about what would be especially helpful when assessing student
learning, provosts pointed to a variety of needs. This is not surprising, given
that institutions differ along many dimensions including their history with
assessment, campus culture, administrative structures, and so forth. The
most common needs were:

• More faculty using the results of student learning assessment (51%)
• More professional development for faculty and staff (46%)
• Greater institutional assessment staff capacity (30%)

4. Institutional needs for advancing assessment work have shifted
since 2009 from engaging more faculty in assessing student
learning to supporting faculty use of assessment results and wider
stakeholder involvement.

Although some lament the perceived limited involvement of faculty
in assessment of student learning, provosts are most interested in finding
ways to help faculty and staff develop the attitudes and tools to produce
actionable results along with the skill set to use results to improve student
learning (Kuh et al., 2014). In fact, in 2009, two thirds (66%) of respondents
said more faculty involvement in assessing student learning was needed; in
2013, it was down to 38%; by 2017, it had dropped to 23%. A similar decline
was found regarding the need for additional valid and reliable assessment
measures dropping from 37% in 2009, to 29% in 2013, and to only 15%
in 2017. At the same time, there is greater awareness of the need to support
faculty and staff through professional development on assessment, along
with staff capacity to support the work. So, it seems that provosts recognize
that faculty involvement is more about providing professional development
to help support faculty using results to improve student learning rather than
simply involving faculty in assessment work.

In addition to professional development for faculty on using results, there
is growing awareness of the need to involve other stakeholders in the
assessment process. While involving student affairs staff in assessment work
at the institution-level remained a relatively low priority or listed need, it
was still higher than in past years and 13% of respondents for the first time

indicated that increased participation of students in assessment activities was
a need—a positive sign for supporting greater student involvement in the
assessment process.

Sidebar resources: Alignment of Learning (http://learningoutcomesassessment.org/occasionalpapertwentysix.html); Involving Adjuncts in Assessment (http://learningoutcomesassessment.org/documents/Assessment_in_Practice_Rio_Salado)

Needs to advance assessment work did not differ by accreditation region,
MSI status, or institutional control (public, private, for-profit). But degree-
level did matter.

• Doctoral institutions were the least likely to indicate a need for

additional staff capacity (14%) while baccalaureate institutions

were the most likely (37%).

• Doctoral institutions were more likely than all other types to indi­
cate the need for stronger administrative leadership and support

(15%), but least likely to indicate more student affairs staff using

the results of assessment (0%).

• Baccalaureate institutions were the most likely to indicate the need

for more student affairs staff using results (10%).

• Doctoral institutions were more likely than all other types (by

20%) to indicate the need for more faculty involvement in assessing

student learning (46%).

More than half (53%) of provosts took advantage of responding to an
open-ended question about what their campus needed to improve student
learning. Professional development for faculty related to assessment work
was a common theme, including getting help to create synergy across related
initiatives underway on campus through curriculum mapping, alignment,
assignment design, technology, and general education reform. Provosts
were also interested in developing and managing sustainable systems of
assessment in times of budget constraints. These needs were followed closely
by a desire for help in using assessment data to “close the loop” resulting
in evidence of improvement. Provosts indicated that they also needed
assistance with communicating and clarifying to faculty and staff the value
and purpose of engaging in assessing student learning beyond compliance by
better integrating student learning outcomes assessment with teaching and
learning. They were looking for ways to use assessment results to improve
student learning at the program-level, and advice for effectively involving
adjunct and part-time faculty in assessment efforts. Other comments
worth mentioning were how to help communicate assessment information
externally, involve more students in the process, and revise the assessment
process to be less burdensome. And, as the results from the 2009 and 2013
questionnaires indicate, provosts wanted examples of how best to support
faculty, provide space for meaning-making conversations around use of
results, and information on what other institutions were doing in terms of
assessment practices and processes.

5. Institutional research offices and staff along with faculty-led
assessment committees provide needed support of institution-wide
assessment activities.

Figure 6 summarizes the different areas of support for assessment efforts on a
scale of “Not at All” to “Very Much”.2 The most supportive aspects


were the institutional research office, assessment committees, institutional
policies and statements in support of assessment, administrative leadership,
and professional staff dedicated to assessment. In fact, 70% of respondents
indicated that institutional policies or statements on assessing undergraduate
learning were “Quite a Bit” or “Very Much” supportive of assessment efforts.
However, only 13% indicated that current faculty and staff recognition or
reward for involvement in assessment activities was “Quite a Bit” or “Very
Much” supportive. In addition, more than two thirds (68%) indicated that
their president/CEO or provost was “Quite a Bit” or “Very Much” supportive
of assessment work. Almost a quarter (24%) did not find their assessment
management system or software to be supportive of assessment efforts at all,
with 27% indicating it was somewhat supportive. Only 30% indicated that
technology was “Quite a Bit” or “Very Much” supportive.

• Doctoral institutions were less likely than all other institutional types
to indicate that student and faculty involvement were supportive
of assessment efforts—an area where they indicated greatest need.
However, doctoral institutions were more likely than all other types
to indicate that professional staff were supportive of assessment
efforts.

• While there were not significant differences by control for needs to
advance assessment work, a few differences emerged when asked about
what supports assessment work. For-profit institutions were
more likely than public and private institutions to find institutional
policies on assessment, faculty/staff recognition and reward, and
professional development supportive, but they were least likely to
indicate that assessment management software was supportive.

Figure 6. Extent to which assessment activities are supported. (Response scale: Not at All, Some, Quite a Bit, Very Much.)

2 Survey respondents were able to select N/A for each of the supports. N/A responses were not factored into Figure 6. For instance, 28% of institutions
indicated they did not have a center for teaching and learning and 19% indicated that they do not currently have an assessment management system or
software in place; thus they were unable to comment on how well it supports or does not support their assessment efforts.

Sidebar resource: Academic Freedom (http://learningoutcomesassessment.org/occasionalpapertwentytwo.html)

In terms of changes in supports over time, 41% of respondents in 2013
indicated that professional development offerings were supportive, while
55% indicated such in 2017. Thus, while professional development remains
high as a need, it is becoming increasingly supportive of assessment efforts.

Using Evidence of Student Learning

Accreditation remains the driver and main use of institution-level information
about student learning since 2009. However, various internal improvement
efforts, including program review and program improvement, also regularly
benefit from institution-level assessment results. While concerns about
equity were offered as an important factor for undertaking assessment, data
use in this area is low at the institution-level. Further, while professional
development was indicated as a need and increasingly supportive, results
of assessment are not often used to inform professional development at the
institution-level (Figure 7).

Figure 7. Extent of use of assessment results for various purposes. (Response scale: Not at All, Some, Quite a Bit, Very Much.)

6. Institution-level assessment results are regularly used for
compliance and improvement purposes, addressing accreditation
and external accountability demands along with internal
improvement efforts.

In addition to the different types of uses of institution-level assessment
results, we asked respondents to indicate the extent to which they made
changes in policies, programs, or practices informed by assessment results at
various levels within the institution. The vast majority of change (means of
2.98 and 2.97, or "Quite a Bit") was at the curricular/course and
department/program level, followed by the school or college level, then the institution, and finally the
co-curriculum (Figure 8). It is encouraging to note that changes are being
made at various levels throughout the institution. The levels at which
change occurs are similar across accreditation region, institution type, and
MSI status, consistent with results from the 2013 survey.


An example of programmatic
changes from a Criminal Justice and
Criminology (CJC) major:

Every year, the CJC faculty reviews
assessment results at a regularly
scheduled faculty meeting, including
the comments from the open-ended
questions, and discusses how we can
improve our undergraduate program.
This critical analysis process has
produced a dynamic program that
evolves in response to this data. As
a result, we have further developed
and expanded our internship
program; we offer greater flexibility
in course offerings, including more
evening, summer, and online classes,
and do a better job advertising the
CJC Club. Students requested an
increased emphasis on some of the
SLOs in earlier CJC coursework;
these suggestions led us to elucidate
the links between theory, research,
and policy in foundational courses
and provide more opportunities for
students to improve their writing and
speaking skills.

Figure 8. Extent to which changes are made based on assessment results by level within the institution. (Response scale: Not at All, Some, Quite a Bit, Very Much.)


7. The majority of changes made and uses of evidence of student
learning occur at the program- and course-level.

About two thirds of provosts (64%) provided examples of changes made
in policies, programs, or practice informed by assessment results. Of those,
the most frequently cited example of change was at the assignment, course,
and program-level. As one respondent put it, “Our changes occur mostly
at the departmental or program level…the programs may change course
requirements or practices in specific courses.” Particularly promising is
that areas that touch large numbers of students—math, composition,
and first-year experiences—were mentioned frequently as being modified
in response to assessment information. Actions taken at a course- or
curriculum-level included eliminating redundant courses, changing course
sequencing, aligning outcomes, and addressing complex learning outcomes

in a coordinated manner in multiple courses. While accreditation was a main
driver for doing assessment, it was rarely mentioned as an impetus for change.

Sidebar resource: Using Results (https://www.wiley.com/en-us/Using+Evidence+of+Student+Learning+to+Improve+Higher+Education-p-9781118903391)

At an institution-level, examples of changes informed by assessment include:
• Modifying institutional assessment policy
• Changing placement policies for developmental math and English
• Revising course prerequisite policies
• Changing program review processes
• Modifying advising processes
• Shifting the manner in which resources were deployed
• Reforming general education

In addition to indicating concrete changes, several respondents mentioned
commitments to faculty development including workshops and seminars
focused on specific learning outcomes. However, instead of pointing to
assessment results driving change in areas of professional development for
faculty, respondents described plans and initiatives to review goals, align
outcomes, connect general education with the major, and develop capstone
experiences.

Provosts mentioned three additional areas of change. The first was
modification of the assessment process itself, meaning improving assessment
practices and processes. The second was employer feedback serving as a source
of information, leading institutions to add courses, change requirements, and
modify assignments. The third entailed disaggregation of results to address
achievement and equity gaps.

Different types of institutions tend to use assessment data in different ways
(Figure 9). For example,

• For-profit institutions were least likely to use assessment results for
regional accreditation, while privates were least likely to use assessment
results for program accreditation.

• Public institutions were less likely than for-profits to use results for
external communication and institutional benchmarking.

• For-profit institutions were more likely than both public and private
to use results for learning outcomes revision, supporting equity
goals, development of assessment measures or approaches, curriculum
modification, institutional improvement, program improvements, and
academic policy development or modification.

Overall, for-profit schools tend to use institution-level assessment results
more than other types of institutions (Figure 9) and to make changes at
various levels within the institution (Figure 10).

• Specialized institutions (2.44) were more likely than associate degree-granting
institutions (2.14) to use assessment results for co-curricular
improvement.

• Doctoral institutions were the least likely to use assessment results
for external accountability and institutional benchmarking.


• Institutions in SACSCOC were more likely than other institutions to
indicate that assessment results were used in support of achieving equity
goals (2.47) and resource allocation (2.42).

Figure 9. Extent of use of assessment results by institutional control. (Response scale: Not at All, Some, Quite a Bit, Very Much.)


Figure 10. Extent to which changes are made based on assessment results by level within the institution and by institutional control. (Response scale: Not at All, Some, Quite a Bit, Very Much.)

Figure 11. Comparisons of uses of assessment results, 2009, 2013, and 2017.

Figure 11 shows changes in assessment data use between 2009 and 2017.
Worth noting are:

• A decrease in using assessment results for governing board
deliberations and for informing professional development activities

• A decrease in external demands for accountability as a driver of student
learning outcomes assessment

• An increase in public reporting of assessment results

Institutions continue to use assessment results for internal improvement,
including modifying curriculum, program review, allocating funds, and
developing or revising policy, as well as for responding to accountability demands.


Communicating Information on Assessment

Institutions provide limited publicly available information on assessment
activities on their websites. Institutions are most likely to publicly share
student learning outcomes statements (Figure 12). They are less likely to share
information on assessment plans, resources, and current assessment activities
or assessment results on institutional websites, in publications, or in press
releases. Very little is made available about changes made or evidence that
learning has improved as a result of these changes. This pattern is consistent
with prior reports examining the online presentation of assessment information
(Jankowski & Makela, 2010; Jankowski & Provezis, 2011).

Figure 12. Extent to which institutions make types of assessment information
publicly available.

8. Effectively communicating information about student learning
remains a target of opportunity for assessment work.

Determining how to effectively communicate assessment results continues to
be a challenge for the vast majority of colleges and universities. As might be
expected, public and for-profit institutions are more likely than privates to
publicly post assessment information (Figure 13). MSIs are more likely than
PWIs to share information publicly on the institution website, in publications,
or in press releases on all items—learning outcome statements, plans, resources,
current activities, results, and examples of changes made along with evidence
of improvement. It may be that MSIs can serve as an example to assist other
institutions in advancing transparency and communication.

Sidebar resource: Transparency Framework (http://www.learningoutcomesassessment.org/TransparencyFramework.htm)

Figure 13. Publicly available assessment information by institutional control. (Response scale: Not at All, Some, Quite a Bit, Very Much.)

More than half of provosts (57%) offered views about the kinds of assessment
information colleges and universities should make available to demonstrate
transparency and accountability. The general sentiment was that information
on improvements or changes should be made available, but that evidence
comparing institutions with very different missions or student bodies would be
misleading and unhelpful. Information that is shared should be meaningful,
not, as one provost said, "demonstrating nothing more than that the institution
collected information designed to show that they are in compliance with
external standards and regulations."

The general sentiment was that the following are appropriate to share:

• information on accreditation;
• retention, persistence, graduation, and completion rates;
• licensure and certification exam pass rates;
• job placement and salaries;
• return on investment; and
• costs.

More important to provosts was not what to share, but how to share information.
Provosts were reluctant to report results about student performance that came
across as "marketing material." They preferred to present a nuanced, complicated
picture of student learning that coupled evidence of learning outcomes with student
success data such as persistence and graduation rates. Another major theme was
the need to be sure that information about assessment processes and student
learning results be contextualized. That is, care must be taken in helping
readers understand and interpret results, given the institutional mission,
student characteristics and such. Simply put, leave no number unexplained
(Kuh, 2007).

Provosts were also concerned about whether the general reader would
understand assessment results and their use, or even be interested in knowing


about the topic. This suggests more efforts are needed to help audiences both
on and off the campus better understand the role and importance of assessment.
In addition, there was a lack of consensus on whether the information shared
should be comparable across institutions; should provide program-level or
only institution-level evidence; and should present evidence of learner gains,
growth, or value added by the institution.

As one provost put it:

“We are not that great as an industry at explaining what we do, how
our institutions run, and the great value we provide to students and
communities. I think the biggest gap is in outsiders understanding student
learning. We can provide all the assessment results or data we like, but if
others cannot interpret them accurately there is no benefit to transparency
or accountability.”

Another provost observed,

“This is something we struggle to accomplish. First, there is the need for
constituents to become familiar with and understand the student learning
outcomes identified by the institution and why they are important, how
they are measured, and what we learn from the results, as well as what
improvements were made in response to the results. This is not easy to
communicate in “sound bites,” and merely communicating outputs such
as employment rates and beginning salaries does not serve as a proxy
for student learning and quality of programs. We can, for instance,
communicate the results of our annual assessment of the general education
program, but we need to find ways to help the general public make meaning
of the results.”

Use of Technology

An area that was repeatedly raised by provosts throughout the survey was that
of engaging with technology. When asked about needs to advance assessment
work, technology was raised, specifically with regard to assistance with:

• disaggregation of results for different student groups,

• aggregation of evidence of learning across various levels within the institution, and

• determining which software or system would be most useful or fit current and future needs.

Issues of interoperability, the inability to view a holistic picture of student
learning, and difficulty connecting data from across the institution and its
different systems were shared struggles related to meaningful technology
engagement and use. Yet, for all the struggles, the need for a more
comprehensive institution-wide understanding of student learning was
underscored by the 29% of provosts who indicated wanting technologies and
analytics to aggregate assessment results to represent overall institutional
performance.

9. While assessment-related technologies hold promise of assisting
with alignment and integration of learning across the institution,
meaningful implementation remains elusive.

Provosts indicated they were unsure how to implement software solutions in
a manner that fit with the institutional culture they were trying to support

and build connections within and across the institution. For instance,
provosts indicated that they needed assistance with integration of curricular
and co-curricular assessment, general education and the major, and program-
and institution-level assessment. Issues related to equity, data disaggregation, and
using assessment results to help close the achievement gap were mentioned as
areas where institutions could use technology to better utilize assessment data
to understand differences in learning across student groups.

Sidebar resource: American Council on Education (http://www.acenet.edu/news-room/Documents/Unpacking-Relationships-Instruction-and-Student-Outcomes)

In the examples of changes made as a result of assessment, survey respondents
indicated attempting to make changes through the use of learning management
systems and analytics to examine student performance on assignments and to
make broader claims about student learning outcomes across the institution.
Yet with all the need and efforts to engage with technology, almost a quarter
(24%) did not find their assessment management system or software to be
supportive of assessment efforts at all, with 27% indicating it was somewhat
supportive. Only 12% found their technology solutions to be "Very Much"
supportive of assessment efforts.

There appears to be experimentation with technology supports, but it remains
an area under development. Technology can enable connections and scaling
of results within an institution, but much as in 2013, provosts did not rate
data management systems or software as supportive of assessment work to the
same degree as many other institutional features or conditions. Whether this
is a function of the actual utility of these technologies or lack of sufficient
familiarity with them to understand their value is not known.

Institutional Size and Selectivity

In general, size and selectivity are negatively related to assessment activity. In
the 2009 and 2013 surveys, we found that the more selective an institution's
admission standards, the less likely it was to employ a variety of assessment
approaches and use results. However, in the 2017 data collection, size appears
to have the same impact on assessment activity as selectivity.

10. The larger the size and greater the selectivity of the institution, the
less likely it is to employ a variety of assessment activities.

For almost every category of assessment activity, the larger and more selective
the institution, the less likely it is to employ various assessment approaches or use
the results. For instance, larger and more selective institutions were less likely to:

• have learning outcome statements that apply to all students,

• have programs with stated learning outcomes (36% of very selective institutions versus 53% of inclusive institutions, and 41% of institutions with over 10,000 students versus 67% with 1,000 or fewer),

• use portfolios, classroom-based performance assessments, placement exams, and rubrics,

• indicate professional development as a need,

• publicly share information on assessing student learning outside of resources on assessment,

• use assessment results for external accountability, institutional benchmarking, strategic planning, program review, learning outcomes revision, assessment development, curriculum modification, program improvement, and academic policy development or modification, and

• make changes at the curricular or course-level.

The larger and more selective an institution, the more likely it was to
indicate a need for faculty involvement in assessment. And finally, the larger
the institution, the less likely it was to make changes at the institution-level as a
result of assessment. Why size and selectivity are negatively associated with
assessment activity is not clear and warrants additional investigation going
forward.

Sidebar resource: Improvement and Accountability (http://learningoutcomesassessment.org/occasionalpaperone.htm)

Implications

Perhaps at no other time has the value of higher education been questioned to
the extent it is today. The criticisms of the enterprise are multifaceted, from
escalating costs outpacing inflation to the inability to graduate larger numbers
of those who start college. Employers say too many graduates are unprepared
for what is expected of them in the workplace. Too often, institutions have
little to show about what students gain from their studies and what is being
done to improve the student experience. Those within institutions of higher
education wonder about the benefit of assessment and want to see genuine
evidence of learning improvement. However, looking across the current
landscape of institutional assessment processes and practices, the trend that
is emerging is an authentic form of assessment that values evidence produced
in the context of teaching and learning, represents students’ work, supports
faculty use of evidence of student learning to improve programs, courses and
assignments, and is connected to a variety of institutional learning initiatives.
There is much about which to be hopeful, including:

• Growth in use of authentic measures of learning. A variety of
approaches are used to assess student learning, in ways that appear
to meet specific institutional needs and align with what is valued by
provosts in terms of generating meaningful information on student
learning. Those approaches are also increasingly embedded in the daily
work of faculty, directly connected to teaching and learning efforts as
opposed to a separate, administrative add-on.

• Integration of various initiatives and efforts to improve student
learning throughout the institution is underway. Multiple initiatives,
such as assignment design, are underway that support engagement
with student learning assessment, and stakeholders across
campus are increasingly involved as efforts to connect disparate
assessment processes unfold.

• Use of results is embedded within course- and program-level
improvement. Use is occurring at the course- and program-levels that
can most meaningfully impact students and their learning.

Yet with all the momentum, there are areas that need attention for assessment
efforts to advance student learning and institutional effectiveness.

Communicating effectively about student learning remains a challenge, one
that the launch of the Excellence in Assessment (EIA) designation in 2015
was designed, in part, to address. The Excellence in Assessment designation
is co-sponsored by the Voluntary System of Accountability (VSA), NILOA, and
AAC&U. The designation recognizes institutions that successfully integrate
assessment practices across campus, provide evidence of student learning
outcomes to stakeholders, and utilize assessment
results to guide institutional decision-making and improve student performance.
The EIA designation is formed around NILOA’s Transparency Framework and
serves to recognize the work of campuses that are engaging in vertically and
horizontally integrated student learning outcomes assessment, ensuring that all
systems are linked and cross-validated. Designees provide a variety of models for
others to learn from, but equally important, the process asks institutions to present
a coherent narrative of their assessment process—an approach that proves difficult
for campuses (Kinzie, Hinds, Jankowski, & Rhodes, 2017).

Colleges and universities must more clearly and persuasively communicate relevant,
timely, and contextualized information on their impact on students and value to
society. As one provost stated,

“Institutions should be unafraid of telling their own stories…furthermore, they
should take control of the narrative, and show how the institution is embracing
these data, highlight what lessons are being learned, and point to what
coordinated and organized actions are being taken campus-wide to improve
student learning.”

While use of assessment results is increasing, documenting improvements in
student learning and the quality of teaching falls short of what the enterprise
needs. Provosts provided numerous examples of expansive changes at their
institutions drawing on assessment data, but too few had examples of whether
the changes had the intended effects. Did the policy change or alteration of
the assessment process actually have the intended impact? Did the assignment
modifications lead to better student demonstrations of their learning? Can we
really connect course-level assignments, through the learning management system
or other software, to institution-level learning outcome statements and
understand student learning as a campus? Has student learning actually improved
over time?
These questions remain areas of future research for assessment scholars and action
by assessment practitioners.

Equity is an important consideration in assessment work, but underemphasized
in data use. Survey respondents indicated that addressing issues of equity was
important to assessment efforts and disaggregation of evidence of learning by
various groupings of students was beginning to occur. However, using assessment
data to support the achievement of equity goals was uncommon. What is the role of
assessment in addressing issues of equity (Montenegro & Jankowski, 2017)? What
are the best approaches to assess learning of different groups of students? These are
questions that the field of assessment has yet to fully explore.

MSIs (65%) were less likely than PWIs (79%) to use national student surveys
as part of their assessment system, but MSIs (61%) were more likely than PWIs
(52%) to use local surveys (Montenegro & Jankowski, 2015). What does it mean
that MSIs are using different approaches, and how does that affect our
understanding of the national picture of student learning overall? For one,
it raises concerns about whether national data reliably include and represent
the diversity of learners, possibly leading institutions to implement
solutions that do not fit their student populations. For another, it may signal
that national surveys are suspect because data might be used to unfairly compare
or evaluate institutions, or that national surveys are simply not meeting the needs
of diverse student populations and/or may be too costly. Addressing issues of
equity moving forward is an area of need within the assessment community.

Governing boards have a key role to play in sustaining and further
developing meaningful assessment. In order to sustain and grow assessment
efforts, governing boards can endorse policies and priorities that support and
encourage assessment and invite wider stakeholder involvement. At larger and
more selective institutions, less assessment activity is occurring overall. There
is a wide range of differences between for-profit and private institutions in
assessment practices, and a lack of communication across the board. While
assessing student learning falls within the purview of faculty and staff, the
board should expect that instances and examples of meaningful improvement
of student learning be presented in an understandable, coherent manner such
that the board can be assured that internal quality controls are unfolding
effectively.

Further, there are stakeholder groups not yet actively integrated with institution-
level assessment efforts such as student affairs, staff, and students themselves.
While student affairs have been involved in assessment for quite some time
(Schuh & Gansemer-Topf, 2010), the integration and connection of those
efforts with the larger institutional picture of student learning is still young.
Areas of additional opportunity for meaningful collaboration include alumni
and employers (Jankowski & Tyszko, 2017).

Professional development could be more meaningfully integrated
with assessment efforts, supporting faculty use of results, technology
implementation, and integration of efforts across an institution.
Throughout the institution, there are various points where assessment support
may be provided such as librarians (Malenfant & Brown, 2017), centers for
teaching and learning, and student affairs staff and partners. There is movement
within the field of assessment to more intentionally partner assessment offices
and staff with centers for teaching and learning to provide faculty and staff
with professional development to support culture change towards a focus on
student learning (Hersh & Keeling, 2013). Such a model may be a means to
address the professional development needs identified by survey respondents
and support faculty innovation in teaching practices (Singer-Freeman &
Bastone, 2016).

Moving Forward

Institutions of higher education in the United States are involved in a variety
of initiatives to improve student learning, of which assessment is but one. To
better understand the myriad learning-related initiatives institutions are
involved in, we asked provosts to indicate the reform efforts currently
underway. Not surprisingly, there is a lot of activity going on (Figure 14).
Provosts indicated that their institutions were undertaking curriculum mapping,
facilitating work on assignment design, developing pathways to completion,
revising general education, and scaling high-impact practices, to name a few.
On average, institutions were involved in three different initiatives focused
on improving student learning.

A wide range of activity is occurring throughout U.S. higher education to
advance meaningful student learning across institutional types.


Figure 14. Extent of involvement in national, regional, and local learning-related initiatives.

Further, institutions are involved in ongoing efforts to align and embed
student learning outcomes assessment within the everyday assignments and
activities that students encounter in their classrooms. With a focus on more
authentic, embedded measures of assessment, issues of alignment and mapping
become increasingly critical to ensure that the picture of student learning at the
institution-level is an actual representation of learning from across the various
levels.

The types of initiatives in which institutions are currently involved point to
some of the ongoing efforts to align and embed student learning outcomes
assessment throughout the institution. For instance, the number one area of
institutional involvement is curriculum mapping, an exercise that strives to
make connections across a curriculum on where learning is occurring and
documents how various levels may connect over time—in essence, an exercise
in alignment. In addition, efforts to engage in assignment design conversations
with faculty also address issues of alignment by exploring how course-based
assessments align with and are designed to elicit the learning outcomes of
interest. In 2013, provosts indicated that one of the most valuable sources of
information on institutional learning outcomes was found in the classroom-
based performance assessments, or assignments, meaning that alignment of
those assignments is crucial if the information gathered is to be utilized at
an institution-level. In this survey, we see continued growth and interest in
assignments at an institution-level coupled with initiatives and professional
development needs underway to support meaningful uptake and growth in
this area.


While there are a variety of commonalities regarding institutional involvement
in initiatives such as revising general education, mapping curriculum, and
facilitating faculty work on the design of assignments regardless of type, size,
accreditation region, control, or selectivity, there are some notable differences
by various institutional characteristics.

• Associate degree-granting institutions (80%), MSIs (66%), institutions
with larger enrollments, and public institutions (74%) were more likely
than all other types to be involved in developing or implementing pathways
to completion as well as state-wide completion initiatives, yet we know
that learners struggle to complete a coherent educational experience at
all types of institutions.

• In terms of innovation, associate and specialized institutions (29%)
and for-profit institutions (39%) were more likely than other
institutional types to be developing competency-based education programs,
yet they were the least likely to be involved in using VALUE rubrics.

• The more selective the institution, the more likely it was to be
attempting to increase the quality of or scale high-impact practices.

• Public and private institutions were more likely than for-profit
institutions to be involved in comprehensive student record development
(Public: 22%; Private: 27%; For-Profit: 0%) and high-impact practices
(Public: 53%; Private: 51%; For-Profit: 13%).3

3 For information on the development of comprehensive student records with the American Association of Collegiate
Registrars and Admissions Officers (AACRAO) and NASPA: Student Affairs Administrators in Higher Education, see
http://www.aacrao.org/resources/comprehensive-learner-record

There is much to applaud about the current state of assessment practice.
Granted, there are compliance issues that must be managed and the field
should speak more frequently about the worth and value of higher education.
But there is also the discernible trend toward using assessment data to guide
improvement efforts and increased use of embedded approaches that focus
on ensuring authentic learning for individual students. An authentic form
of assessment that values evidence produced in the context of teaching and
learning, represents students’ work, supports faculty use of evidence of student
learning to improve programs, courses and assignments, and is connected to a
variety of institutional learning initiatives, is emerging.

As NILOA’s (2016) work in the field has shown, “focus on improvement and
compliance will take care of itself.” Yes, additional efforts are needed to better
educate various audiences on the evidence of student learning of interest to our
institutions. At the same time, increasing numbers of institutions now have
information about student performance based on learning outcomes connected
with actual student assignments and work. More institutions engage regularly
with faculty and multiple stakeholders and implement assessment approaches
that generate actionable evidence to enhance student learning and institutional
performance. A shift has unfolded from an emphasis on compliance and
reporting structures to a more authentic assessment practice that is grounded
in the integration of embedded approaches to document student learning.
Although much has been achieved through well-crafted student learning
outcomes assessment in recent years, much remains to be done.


References

Baker, G. R., Jankowski, N., Provezis, S., & Kinzie, J. (2012). Using assessment results: Promising practices of institutions
that do it well. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes
Assessment (NILOA).

Ewell, P. T. (2009, November). Assessment, accountability, and improvement: Revisiting the tension. (NILOA Occasional
Paper No. 1). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes
Assessment (NILOA).

Ewell, P., Jankowski, N., & Provezis, S. (2010). Connecting state policies on assessment with institutional assessment activity.
Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment
(NILOA).

Ewell, P., Paulson, K., & Kinzie, J. (2011). Down and in: Assessment practices at the program level. Urbana, IL: University of
Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Hersh, R. H., & Keeling, R. P. (2013, February). Changing institutional culture to promote assessment of higher learning.
(Occasional Paper No. 17). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning
Outcomes Assessment (NILOA).

Hutchings, P. (2016, January). Aligning educational outcomes and practices. (Occasional Paper No. 26). Urbana, IL:
University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Hutchings, P., Jankowski, N. A., & Ewell, P. T. (2014). Catalyzing assignment design activity on your campus: Lessons from
NILOA’s assignment library initiative. Urbana, IL: University of Illinois and Indiana University, National Institute for
Learning Outcomes Assessment (NILOA).

Jankowski, N. A., Ikenberry, S. O., Kinzie, J., Kuh, G. D., Shenoy, G. F., & Baker, G. R. (2012). Transparency &
accountability: An evaluation of the VSA college portrait pilot. Urbana, IL: University of Illinois and Indiana University,
National Institute for Learning Outcomes Assessment (NILOA).

Jankowski, N., & Makela, J. P. (2010). Exploring the landscape: What institutional websites reveal about student learning
outcomes activities. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes
Assessment (NILOA).

Jankowski, N. A., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a learning systems paradigm.
Sterling, VA: Stylus Publishing.

Jankowski, N., & Provezis, S. (2011). Making student learning evidence transparent: The state of the art. Urbana, IL:
University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Jankowski, N. A., & Tyszko, J. A. (2017). Using assignments to redefine employer relationships. Assessment Update, 29(4),
10-13.

Kinzie, J., Hinds, T. L., Jankowski, N. A., & Rhodes, T. L. (2017). Recognizing excellence in assessment. Assessment
Update, 29(1), 1-2, 15-16.

Kuh, G. D. (2007). Risky business: Promises and pitfalls of institutional transparency. Change, 39(5), 30-35.

Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher
education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes
Assessment (NILOA).

Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know and can do: The current
state of student learning outcomes assessment in US colleges and universities. Urbana, IL: University of Illinois and Indiana
University, National Institute for Learning Outcomes Assessment (NILOA).

Malenfant, K. J., & Brown, K. (2017, November). Creating sustainable assessment through collaboration: A national program
reveals effective practices. (Occasional Paper No. 31). Urbana, IL: University of Illinois and Indiana University, National
Institute for Learning Outcomes Assessment (NILOA).

McConnell, K. D., & Rhodes, T. L. (2017). On solid ground: VALUE report. Washington, DC: Association of American
Colleges and Universities (AAC&U).

Montenegro, E., & Jankowski, N. A. (2015, April). Focused on what matters: Assessment of student learning outcomes at
Minority-Serving Institutions. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning
Outcomes Assessment (NILOA).


Montenegro, E., & Jankowski, N. A. (2017, January). Equity and assessment: Moving towards culturally responsive assessment.
(Occasional Paper No. 29). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning
Outcomes Assessment (NILOA).

National Institute for Learning Outcomes Assessment. (2011). Transparency Framework. Urbana, IL: University of Illinois
and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

National Institute for Learning Outcomes Assessment. (2016, May). Higher education quality: Why documenting learning
matters. Urbana, IL: University of Illinois and Indiana University, Author.

Schuh, J. H., & Gansemer-Topf, A. M. (2010, December). The role of student affairs in student learning assessment. (NILOA
Occasional Paper No. 7). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning
Outcomes Assessment.

Singer-Freeman, K., & Bastone, L. (2016, July). Pedagogical choices make large classes feel small. (Occasional Paper No.
27). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment
(NILOA).


Appendix A: Data Collection and Analysis

The 2017 NILOA national survey of chief academic officers was conducted by the Center for Survey Research at Indiana
University between April and September 2017. The sample included provosts or chief academic officers at 2,781 regionally
accredited, undergraduate degree-granting institutions listed in the Higher Education Directory, published by Higher
Education Publications, Inc. A total of 811 institutions completed the survey, for a response rate of 29%.

As with the 2009 and 2013 surveys, we asked respondents to identify their position within the institution if the chief
academic officer was not the person to complete the survey. Table A1 indicates that nearly 80% of the survey respondents
were from within the office of the provost with the remainder being completed by those responsible for assessment within
the institution.

Table A1: 2017 Survey Respondents by Position

Position                                                          %     N
Provost/CAO                                                      79%   639
Director of assessment (or person responsible for assessment)   15%   120
Dean (or assistant/associate dean)                                1%     9
Institutional Research                                            5%    43

This survey was administered primarily online, with the initial invitation followed by three email reminders. A paper copy
of the questionnaire was mailed to those who had not completed the survey after the third email reminder. Web-based
completions were the most common by far, with 92% of respondents utilizing this mode. Membership organizations such
as the American Council on Education (ACE), the Council of Independent Colleges (CIC), the Association of American
Colleges and Universities (AAC&U), and the American Association of Community Colleges (AACC), along with other
affinity groups, helped to publicize the survey.

Many of the questions were used previously in the NILOA 2009 and 2013 questionnaires, allowing for analysis of changes
over time. Other questions were revised or added, informed by changing practices in the field and input from NILOA’s
National Advisory Panel, a select group of assessment experts, and a focus group of chief academic officers convened during
the January 2017 AAC&U annual meeting in Washington, DC. To view a final copy of the survey, please see:
http://www.learningoutcomesassessment.org/assessmentthatmatters.html

The survey results were merged with several additional data sources to allow for analysis on a variety of factors including
Carnegie classification, accreditation region, control, size, selectivity, and minority-serving status. The characteristics of
participating colleges and universities in terms of institutional control (public, private, and for-profit), institution type
(doctoral, master’s, baccalaureate, associate’s, and specialized), and accreditation region were generally similar to the national
profile except for a slight overrepresentation of master’s and specialized institutions and underrepresentation of associate
degree-granting institutions (Tables A2-A5).

Table A2: Institution Type: 2017 Participating Institutions Compared with National Profile

Type             2017    Current National
Doctoral         10%     10%
Master’s         25%     23%
Baccalaureate    23%     23%
Associate’s      33%     38%
Other/Special     9%      6%


Table A3: Institutional Control: 2017 Participating Institutions Compared with National Profile

Control 2017 Current National
Public 55% 56%
Private 41% 40%
For-Profit 4% 4%

Table A4: Accreditation Region: 2017 Participating Institutions Compared with National Profile

Region 2017 Current National
Middle States 17% 16%
NEASC 7% 7%
HLC 39% 39%
Northwest 5% 5%
SACSCOC 22% 23%
ACCJC 3% 4%
WSCUC 7% 6%

Table A5: Minority-Serving: 2017 Participating Institutions Compared with National Profile

Type 2017 Current National
Minority-Serving (MSI) 21% 21%
Predominantly White (PWI) 79% 79%

For each survey item, frequency distributions and mean responses were calculated using Stata 14, both overall and for
each subgroup as described above. Chi-square statistics were used to identify statistically significant differences between
institutional groupings on items 1, 2, 3, 5, and 14. Items 6-10 were examined for significant differences both by treating
response options as interval-scaled items using an analysis of variance (ANOVA) and by treating them as categorical items
(using chi-square tests) for a robustness check. For items also included on earlier survey administrations, we further
examined trends over time. Again, ANOVA was used to examine differences between groups, with groups here being survey
respondents in each of the three administration years. An alpha level of p < .05 was used to determine significance for all tests.

Responses to items 4 and 11-13 (the open-ended questions) were each reviewed by two NILOA researchers. Broad codes were
then developed in conversation about the general reading of the responses. Each reader, drawing on the assessment literature
on needs and effective practices, developed a list of potential thematic groupings of the responses (including themes such as
general education, faculty engagement, and use of results). These themes were assigned codes, which were used to guide a
second reading and further coding, analysis, and iterative reclassification of responses, until a final set of themes and codes
was generated for each open-ended response item.
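
To make the analysis plan above concrete, the short sketch below runs the same two kinds of tests on fabricated numbers. The report’s analyses were conducted in Stata 14; this Python version, and every count and rating in it, is an illustrative assumption, not the survey data or the actual syntax.

    # Illustrative sketch only: fabricated counts, not NILOA survey data.
    from scipy.stats import chi2_contingency, f_oneway

    # Chi-square test: do institutional groupings differ on a categorical item?
    # Rows are hypothetical control groups (public, private, for-profit);
    # columns are yes/no responses to one survey item.
    contingency = [
        [300, 150],  # public
        [220, 110],  # private
        [20, 11],    # for-profit
    ]
    chi2, p, dof, expected = chi2_contingency(contingency)
    print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

    # One-way ANOVA: treating an item as interval-scaled, did mean responses
    # differ across the 2009, 2013, and 2017 administrations? (invented ratings)
    f_stat, p = f_oneway([3, 4, 2, 3, 3, 4], [4, 4, 3, 5, 4, 3], [5, 4, 4, 5, 3, 4])
    print(f"F = {f_stat:.2f}, p = {p:.4f}")

As in the report, a result would be flagged as statistically significant only when p falls below .05.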


Unit 6 Assignment 1

Xavier Williams

Introduction

This course is designed for the department of engineering. In it, I will teach learners about water resource engineering. They will learn Integrated Water Resources Development (IWRD) and Integrated Water Resources Management (IWRM) as a general framework for water resources engineering. Students will identify water challenges experienced in their locality, working in groups of five to state those challenges and propose solutions. At the end of each lesson, the teacher will lead a demonstration of various water resources and of how water from those sources can be harnessed. The student learning goals are as follows: learners will be able to understand the elements of integrated water resources management; to describe erosion and deposition in rivers; to explain the laws governing water in their country and internationally; and to sit in groups and discuss different water sources.

Since this is an engineering lesson, it will be necessary to provide practical experience. Learners play a more active role and engage more during practical exercises than in purely theoretical lessons. During practical exercises, students get a chance to apply what they learned in class and to build confidence in dealing with real-world situations. Allowing students to hold group discussions also plays a great role in the learning experience by enhancing democratic learning, complementing reflective learning, and accommodating individual differences. I therefore aim to mix low-ability learners with high-ability learners and to help lead discussions, so that the learning goals are achieved by the end of the course.

Learning goals

Learning goals state what the teacher intends the students to achieve as a result of successful completion of the teaching experience, whether at the end of a program, a course, or a single learning experience. A course learning goal explains what students will be able to perform at the end of a course, while a program goal is what students are capable of achieving at the end of their degree or diploma. Well-stated learning goals should state outcomes (Marzano, 2010): what the student will be in a position to do after successfully completing the course. They should be simple and clear enough for everyone to comprehend. They should focus more on skills than on knowledge, because today’s employers look for thinking and performance skills when hiring. They should be relevant and focus on what students are to learn, and they should not be too many; more than five learning goals may reduce the efficiency of learning and lose focus. The learning goals should fit within the scope of the course content and, most importantly, help learners achieve broader learning goals (Marzano, 2010). I aim for my students to be able to define integrated water resources management by the end of the course, to understand different sources of water, to list causes of erosion and deposition, and, in an assignment, to mention different laws and acts related to water.

Rationale of learning goals

The learning goals are important for student development. There are different levels of objectives, and students should be able to reach objectives at each stipulated level: they should demonstrate knowledge and understanding of integrated water resources as well as the ability to analyze, apply, synthesize, and evaluate the learning content (Ennis et al., 2012). In this course, students will acquire knowledge by identifying water resources and learning the laws governing them. They will analyze by comparing different water sources and how to harness water from each. They will synthesize through lab experiments and group discussion.

Assessment instruments

An assessment instrument is the part of an assessment tool that includes the instructions or checklist needed to assess a learning outcome. An assessment instrument can be a test, a form, or a rubric, and it is used to collect data for each outcome. It is the actual product handed out to students for the purpose of gauging whether they have achieved a particular learning outcome (Suskie, 2018). Assessment instruments measure the fluency, skills, and abilities of a student. Assessments can be either formal or informal: formal tools are objective measurements of a student’s skills and abilities using monitoring, screening, evaluation, and diagnosis, while informal assessments are the inferences a teacher draws from observation. Teachers use both to make informed decisions.
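
Since a rubric is, in effect, a small data structure (criteria, performance levels, and a scoring rule), the sketch below shows one way it could be represented and applied. The criteria, weights, and ratings are invented for this course example and are not drawn from any published instrument.

    # Hypothetical rubric for one course outcome; criteria and weights are
    # invented for illustration, not taken from a published instrument.
    RUBRIC_WEIGHTS = {
        "identifies local water sources": 0.4,
        "explains erosion and deposition": 0.3,
        "applies relevant water law": 0.3,
    }

    def score_student(ratings):
        """Weighted rubric score; each rating is on a 1 (beginning) to 4 (exemplary) scale."""
        return sum(RUBRIC_WEIGHTS[criterion] * rating
                   for criterion, rating in ratings.items())

    print(score_student({
        "identifies local water sources": 4,
        "explains erosion and deposition": 3,
        "applies relevant water law": 2,
    }))  # 0.4*4 + 0.3*3 + 0.3*2 = 3.1

Collecting a score like this for every student is one way the instrument could yield the per-outcome data described above.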

Rationale for assessment instruments

An appropriate assessment instrument helps a tutor measure achievement of the outcomes and shows whether the desired performance for a particular outcome was reached. Assessment instruments are important because they produce results that can be used to make decisions that improve student learning. Assessment can be either direct or indirect. Direct assessment involves observing a learner’s performance or examining products that demonstrate mastery of specific skills or course content; it can also occur when a learner demonstrates qualities of work such as innovativeness or creativity. Indirect assessment is based on knowledge and abilities reported by external sources such as supervisors, alumni, fieldwork partners, or a faculty.

Setting Standards

Setting meaningful assessment standards, benchmarks, or targets for student learning is a huge challenge for educators. The first difficulty is that the assessment community has a limited glossary for the terms that refer to expected performance, such as goals, thresholds, benchmarks, or targets. I settled on using “standard” to describe minimal acceptable student performance. A related distinction is between a standard and a target: a standard describes minimal acceptable performance, while a target describes the proportion of students expected to reach the standard. There are four ways to set achievable standards (Norcini, 2003). A standard can be established from students’ peers, by requiring that they perform as well as or better than their peers. A standard can also be set by an external body, such as passing a licensure exam. Historical records can also be used, in that current students should perform at least as well as former students. Often, however, none of these three options is available; in that case, teachers can set their own standard, called a local standard.
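
The distinction between a standard and a target can be made concrete with a few lines of arithmetic. In this sketch, the scores, the cutoff of 70, and the 80% target are all hypothetical values chosen for illustration.

    # "Standard" = minimal acceptable score per student;
    # "target" = share of students expected to reach that standard.
    # All numbers below are invented for this example.
    scores = [72, 85, 64, 90, 78, 69, 88, 55, 81, 76]
    standard = 70    # a student meets the standard at 70 or above
    target = 0.80    # we want at least 80% of students to meet it

    met = sum(score >= standard for score in scores)
    proportion = met / len(scores)
    print(f"{met}/{len(scores)} students met the standard ({proportion:.0%})")
    print("Target met" if proportion >= target else "Target not met")
    # Here 7 of 10 students (70%) meet the standard, so the 80% target is missed.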

The following steps are used when establishing a local standard. A teacher should settle on a standard that does not embarrass him or her; it would look bad if people noticed that a student passed your course only because the assessment standard was low. The relative harm of setting a standard too high or too low should also be considered: a very high standard means a teacher is flagging shortcomings that may not be important, and scarce time and resources are used to address them, while a very low standard risks letting students graduate without being ready or able to thrive in what comes next, such as the job market (Norcini, 2003). Before setting a standard, the assignment being assessed should be considered, because a test completed in a three-hour class will not be as polished as a three-week assignment. If an external source, such as a faculty member from another college or a disciplinary committee, can help set the standard, so much the better. Lastly, use students’ previous performance to inform your thinking.

Student assessment aims to interpret and use performance results effectively and appropriately, to enhance both accountability and the improvement of teaching as a profession. A good assessment yields results that are used to improve teaching and learning practices. Improvement helps stakeholders reflect on institutional goals and determine whether performance conforms to the vision, mission, and objectives, thereby improving achievement (Ervin, 1988). The first step in setting a good assessment standard is to know the purpose of the assessment results, which can be to maintain the status quo or to drive improvement, along with who will use the results and what decisions they will inform. The second step is to state the consequences of setting the bar too high or too low. Lastly, the standard should be grounded in data that is open to discussion, so that stakeholders know whether the set standards have been achieved.

In this integrated water resources course, I intend to use the following assessment methods. They include written exams in the form of short-answer questions, essays, and multiple-choice questions. I will also use written assignments such as reports, work logs, portfolios, literature reviews, and essays. Students will be required to complete practical assignments that test their ability to cope with real-world situations. Lastly, I will assess my students on class participation.

Rationale of assessment standards

The rationale for assessing students is to confirm and measure their performance and achievement in relation to the stipulated learning objectives. It is also to promote, improve, and enhance the quality of learning through clear feedback that is timely, informative, and relevant to student needs; to reward student achievement and effort with an appropriate grading system; and to provide information that helps continuously evaluate and improve the quality of the curriculum and the effectiveness of the teaching methods.

Evaluation of learning plan

As a teacher, one is always working to improve the curriculum, organization, and instruction. Evaluating lesson plans helps teachers improve their practice, meet learners’ needs, and develop strong reflective habits, and part of evaluating a lesson plan is evaluating the lesson design. When preparing a learning plan, the following should be considered: the purpose of the plan (for example, to know whether learning goals have been achieved), the ability to foresee challenges, and whether the plan is effective. To evaluate a lesson plan, one should evaluate the preparation process, the lesson itself, and the students (Knowles, 1975). For the preparation process, useful questions include: How hard was the lesson to plan, and what made it hard? How closely was the lesson plan followed? Was it difficult to gather the required materials, and how useful were they in delivering the lesson? For the lesson itself, a teacher should check whether the goals set at the start of class were met, which activities were well executed, whether all students were engaged, whether goal attainment was confirmed after the lesson, and whether there was an assignment at the end.

The Stakeholders

Assessing and evaluating the learning outcomes of students or trainees is a process that should involve all stakeholders, whether directly or indirectly affected. Stakeholders are individuals or groups who have an interest in the assessment and evaluation plan (Gardner, 2014). The process will involve both internal and external stakeholders, since the issues affect them both: the internal stakeholders are the students, teachers, administrators, and staff, while the parents, the education standards body, alumni, and employers are the external stakeholders. Stakeholders have a say, and their actions can alter decisions in the institution, so they must be involved in every activity.

Rationale of Stakeholders

Stakeholders are involved in the evaluation and assessment plan to ensure mutual understanding among all parties and to prevent frustration and resistance during the assessment (Gardner, 2014). Another significant benefit of engaging stakeholders is gathering suggestions that can lead to a better, more inclusive plan and thus to achieving the objectives of the assessment and evaluation. Stakeholders can also act as sponsors when the project needs funding or other resources. Their involvement ensures that the best decision is made at every step of the process, which supports the ultimate goal of the evaluation plan.

Action Plan To Guide Implementation

Before actually implementing the evaluation and assessment process, it is essential to have an action plan: the steps to follow so that the evaluation succeeds and achieves its objectives (Banta et al., 2015). The action plan should therefore be clear and direct, to prevent confusion during implementation. To develop a practical assessment and evaluation plan, first lay out the tasks that will take place during the process, when each task is expected to start and end, the risks that may be encountered at each step, and the resources to be deployed for each task.

Rationale Of Action Plan For Implementation

Before implementing any plan, it is necessary to have a laid-out plan of the activities and processes involved in the actual implementation of the evaluation. The action plan acts as a guide indicating what should be done, by whom, and when, along with the resources needed to actualize each activity and the risk assessments and mitigation strategies (Banta et al., 2015). The action plan is essential because it ensures that all needed resources are assigned before going into the field, preventing frustration and challenges. It also acts as a model of the actual project and is used to review and make changes before the real work is carried out. It is essential always to have an action plan as a guide, ensuring everything is in its right place before performing the actual process.

Closing The Loop

My suggested assessment and evaluation plan aims to ensure that students, trainees, or learners obtain the expected benefits from the program and the institution. The project will determine whether the program’s expectations of equipping students with skills and knowledge on the subject are being met, and what more can be done to improve the situation. The plan will assess the effectiveness of what students are learning and the impact it has on them and on society at large (Marzano, 2010). The assessment will determine whether the program should continue or be scrapped, depending on its effectiveness and the students’ response.

The Rationale Of Closing The Loop

Evaluating and assessing the program’s learning outcomes will determine whether it adds value for students or is a waste of time and resources (Shewbridge et al., 2011). If the data collected in the evaluation suggest that the program is not effective and its objectives are not being achieved, the program can be stopped, since it wastes the time and resources of the institution and the students. If, on the other hand, the evaluation concludes that the program is effective and adds value for stakeholders, more resources can be deployed to support it, to the benefit of all stakeholders.

References

Ennis, C. D., Ross, J., & Chen, A. (2012). The role of value orientations in curricular decision making: A rationale for teachers’ goals and expectations. Research Quarterly for Exercise and Sport, 63(1), 38-47.

Ervin, R. F. (1988). Outcomes assessment: The rationale and the implementation.

Knowles, M. S. (1975). Self-directed learning: A guide for learners and teachers.

Marzano, R. J. (2010). Designing & teaching learning goals & objectives. Solution Tree Press, 22-33.

Mayer, R. E. (2011). Applying the science of learning. Boston, MA: Pearson/Allyn & Bacon.

Norcini, J. J. (2003). Setting standards on educational tests. Medical Education, 37(5), 464-469.

Suskie, L. (2018, May 27). What are the characteristics of well-stated learning goals? Retrieved from https://www.lindasuskie.com/apps/blog/show/45689916-what-are-the-characteristics-of-well-stated-learning-goals
