SCUP Collection

Employing Accreditation to Strengthen Planning and Drive Improvement

Edited by Lynn Priddy, PhD

Society for College and University Planning
www.scup.org
© 2020 by the Society for College and University Planning
All rights reserved. Published 2020.
Contents

Introduction
Lynn Priddy, PhD

Introduction to Accreditation
by Belle S. Wheelan, PhD

Connecting the Dots: Accountability, Assessment, Analytics, and Accreditation
by Linda L. Baer, PhD

Reflections on Two Decades of Quality Assurance and Accreditation in Developing Economies
by Fred M. Hayward, PhD

Using Big Data: How Moneyball and an Ardent Baseball Fan Shaped Successful Metrics-Based University Planning
by Roy Mathew, PhD, Elsa Bonilla-Martin, PhD, Daniel Santana, and Erick Gonzalez

The Value of Higher Education: Maker Spaces for Accreditation and Beyond
by Vincent Wilczynski, PhD, Aubrey Wigner, PhD, Micah Lande, PhD, and Shawn Jordan, PhD
Introduction
Accreditation, both programmatic and institutional, holds far more potential
for an organization’s strategic and academic planning than most colleges and
universities realize. Quite often, a president or provost assigns accreditation to
an institutional department that focuses narrowly on managing the accrediting
relationship, making sure the institution is in compliance, and dealing with data
updates, documents, and visits. However, accreditation can be a powerful force
aligned with integrated planning efforts. In fact, accreditation as quality assurance
provides university, advisory, and other boards, the public, and external agencies an
automatic, third-party-affirmed metric of overall institutional soundness, capacity
for change and innovation, attention to the educational enterprise, and direct and
indirect measures of standing KPIs.

Whereas many institutions struggle to define, align, and aggregate metrics
and targets from academics, resources, student success, and operations into
strategic KPIs, accreditors already provide a framework for doing so. Through
criteria and standards, programmatic and institutional accreditors lay out
increasingly comprehensive rubrics requiring data, analytics, and their
analysis and use. Simply having data is not sufficient. Rather, institutions
must demonstrate quality and capacity, particularly in terms of governance,
resources and sustainability, instructional and educational effectiveness,
student learning and completion, and strategic advancement or innovation.
As such, accreditation that is integrated with multi-level institutional
planning provides a ready-made, third-party accountability and improvement
scaffold that colleges can choose to integrate into their own operational and
strategic planning priorities, goals, metrics, and target results. By embracing
accreditation as a driver of institutional quality and strategy rather than mere
compliance, colleges and universities reinforce a more purposeful role for
accreditation itself. This integration ensures better accountability to institutional members,
more relevant criteria and standards, and a stronger focus on the missions and
priorities of colleges and universities. As the federal government continues to
rethink, revise, and refine regulations for accreditation, preserving it as integral
to effective strategic planning becomes ever more imperative.

The articles selected for this collection include both a retrospective and an
introduction to the field for readers new to the topic. The remaining articles
showcase how institutions intentionally employ accreditation at multiple levels
to strengthen integrated planning and propel ongoing quality improvement.

Lynn Priddy, PhD
Executive Advisor and Provost Emeritus
National American University

VIEWPOINT

Introduction to Accreditation
Belle S. Wheelan, PhD
The president of a regional accrediting body shares her answers to frequently asked
questions about accreditation.

As Michael Middaugh (2012, p. 5) so clearly stated, “Accreditation is the lifeblood
for most colleges and universities: without accreditation from an agency
recognized by the U.S. Department of Education, no Title IV federal financial
aid can flow into an institution.” According to the Federal Student Aid Annual
Report FY 2016, in 2016 nearly $126 billion in student aid was delivered to over
13 million students in higher education institutions throughout the country
(U.S. Department of Education 2016). Without that aid, a significant number of
institutions would be forced to close.

For those new to accreditation, the process can be as mystifying as it is critical
to understand. The following questions and answers cover the basics of
accreditation and suggest further resources for learning more.

1. What is accreditation?

Accreditation is a process by which an institution’s programs, policies, and
procedures are evaluated against a set of standards established by member
institutions to ensure that they are of college-level content. Accreditation is a
signal to the general public that the institution maintains the standards needed
for students to gain admission to graduate programs and enter the world of work.

2. How often must an institution go through the process of reaffirming its accreditation?

The length of time between reaffirmation visits varies among the accrediting
organizations and ranges from seven to ten years.

3. What are the standards by which a college or university is deemed to be a quality institution?

Each accrediting body has a different set of standards, but each assesses the
quality of faculty and administration, curriculum, learning resources, and
institutional governance as well as the institution’s financial stability.

This article first appeared in Planning for Higher Education, Issue V46N1, October–December 2017



Author Biography

Belle S. Wheelan, PhD, currently serves as president of the Southern Association of Colleges and Schools Commission on Colleges and is the first African American and first woman to serve in this capacity. Her career spans over 40 years and includes the roles of faculty member, chief student services officer, campus provost, college president, and Secretary of Education.

4. What are student learning outcomes?

These are the skills identified by an institution for each academic program that a student is expected to master prior to earning credit for the program. Student learning outcomes inform students about what skills they should have learned and give employers and the general public confidence that students have actually learned them.

5. What resources are available to learn more?

The Council of Regional Accrediting Commissions has a comprehensive website that provides a number of resources, including a Frequently Asked Questions page: www.c-rac.org/. There are also several books and reports that have been written on regional accreditation, including Higher Education Accreditation: How It’s Changing, Why It Must by Paul L. Gaston (Stylus Publishing 2013) and Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability by Linda Suskie (Jossey-Bass 2014).

In the end, accreditation is about signifying that our institutions have “a purpose
appropriate to higher education” and the “resources, programs, and services
sufficient to accomplish and sustain that purpose” (Southern Association of
Colleges and Schools Commission on Colleges 2017, 2) for the betterment of our
students, communities, and country.

References

Middaugh, M. F. 2012. Introduction to Themed PHE Issue on Accreditation in Higher Education. Planning for Higher Education 40 (3): 5–7. https://www.scup.org/resource/introduction-to-themed-
phe-issue-on-accreditation-in-higher-education

Southern Association of Colleges and Schools Commission on Colleges. 2017. Welcome from the
President. Accessed November 14, 2017: www.sacscoc.org/president.asp.

U.S. Department of Education. 2016. Federal Student Aid Annual Report FY 2016. Washington,
DC: U.S. Department of Education. Accessed November 14, 2017: https://studentaid.ed.gov/sa/sites/
default/files/FY_2016_Annual_Report_508.pdf.

FEATURE ARTICLE

Connecting the Dots
Accountability, Assessment, Analytics, and Accreditation
Linda L. Baer, PhD
Calls for accountability, outcomes assessment,
evidence of institutional performance, and
student success must be answered by integrated
planning and decision-making across higher
education.
There is now a renewed sense of urgency to improve accountability,
transparency, and performance in higher education—the result of a perfect
storm of state budget challenges, the ongoing transition from a manufacturing
to a knowledge economy, and the inability to appropriately articulate the
value of a postsecondary education. Stakeholders are demanding more from
higher education, searching for an overall return on this investment from the
student, state, and federal perspectives. People are now asking, Is college worth
it? (Barone 2017; Dossani 2017). These challenges cannot be met with simple
changes. Institutions must strive to develop analytics, or actionable intelligence,
in all institutional areas—particularly those related to learning (Baer and
Campbell 2012). Strategic planning and decision-making in this ever-changing
environment are critical.

“Our country is moving from a national, analog, industrial economy to a global,
digital information economy. And our social institutions and education system
were built for the former—a world that is dying” (Kauffman 2017, 3). Arthur
Levine, president of the Woodrow Wilson Foundation, made these comments
at the first meeting of HLC Partners for Transformation, a blue ribbon panel
created by the Higher Learning Commission. Noting that accreditation dates
back to the late 1800s, he continued, “When accreditation was created, higher
education looked like the wild west and required standardization. Now we are
at the beginning of a revolution. How do we encourage innovation and create
standards for a new era?” (Kauffman 2017, 4). Levine believes we must work
to determine the standards, processes, and opportunities that will support
the changing needs of society, the changing face of higher education, and the
changing educational environment.

This article first appeared in Planning for Higher Education, Issue V46N1, October–December 2017



As MacTaggert (2017, p. 1) noted, “Twentieth-century leadership approaches will no longer suffice. Skepticism over the value of a college degree, higher expectations for performance from institutions at all levels, student unrest, intense competition for students and resources, and political divisions are among the most prominent challenges. In addition, a new wave of technological change will most likely alter higher education as we know it. Artificial intelligence, virtual reality, big data, and cognitive mapping are more than buzz words. They will define the future of higher education and society just as the Internet does now.”

This changing environment calls for more efforts to build integrated planning models that take into account the many sectors of the campus. Particularly important are the connections between institutional accountability, assessment, analytics, and accreditation. Too often, these dimensions have been disconnected, resulting in less efficiency and effectiveness. Ultimately, student success suffers in a fragmented institutional environment.

Accountability

Accountability is demonstrating to stakeholders the effectiveness of a college, program, service, or initiative in meeting its responsibilities (Suskie 2015). However, as Lingenfelter (2003, p. 20) noted, “Policy makers and educators have been struggling for decades to design satisfactory approaches to educational accountability. Yet progress has been slow, both in developing satisfactory approaches and in improving performance. The objective of accountability systems generally is to stimulate more effective, innovative approaches and greater effort and discipline in implementation.”

There has been a long history of conversation about accountability across higher education. Burke (2004, p. 1) described the many faces of accountability:

Accountability is the most advocated and least analyzed word in higher education. Everyone uses the term but usually with multiple meanings. Writers say it faces in every direction—“upward,” “downward,” “inward,” and “outward.” It looks, in turn, bureaucratic, participative, political, or market centered. [It may appear] two-faced, with sponsors and stakeholders demanding more services while supplying less support.

He continued, “The conflict over accountability is eroding what was once a national consensus—that higher education is a public good for all Americans and not just a private benefit for college graduates” (Burke 2004, p. 1).

Public higher education was built on the premise of a social compact: that is, access to a college education was both a public good for society and a private good for students. Access to college was seen as the gateway to quality and equality. Taxpayers accepted the obligation to provide adequate operating funding to public colleges and universities while expecting they would keep tuition relatively low. Public support of basic scientific research was part of the compact, which held up until the early 1970s when revenues and enrollments began to decline and demands for increased outcomes and performance started to emerge. Over time, as fiscal issues grew, demands for accountability grew as well: “Like most compacts, the one between American society and higher education became strained when rights and responsibilities moved from vague generalities to specific demands and competed for funding with other public services” (Burke 2004, p. 6).

In the mid-1980s, states began to require colleges and universities to report on performance, and in the 1990s, Congress developed the Student Right-to-Know Act, which mandated significant new disclosure of information on graduation rates and school safety. Policy makers expressed further concerns regarding higher education outcomes as related to cost. In 1983, A Nation at Risk was published, which warned of declining learning standards in both primary and secondary schools (National Commission on Excellence in Education 1983). In 1986, concerned governors published a report titled Time for Results that extended the call to examine the quality of learning to the collegiate level (National Governors Association 1986). These efforts resulted in an important nationwide movement that led to the development of systematic research on student learning outcomes. Driven by the federal government, regional accreditors began to increase their emphasis on quality and improved graduation rates (Burke 2004; Carey 2007).

Ewell (2014, 1) noted,

For many years, judgements about “quality” in higher education were determined almost solely by institutional reputation, productivity, and factors such as fiscal, physical, and human resources. Regional accreditors, charged with examining the adequacy of public and independent institutions alike, looked mostly at the overall level of institutional resources and at internal shared-governance processes. Over the past three decades, however, interest on the part of external stakeholders in the actual academic performance of colleges and universities has steadily risen.

Ewell determined that there are a number of reasons for this. One is the growing emphasis on accountability, particularly as it relates to student learning outcomes. There is increased competition in higher education, and the environment is putting a premium on visible evidence of academic performance. In addition, the ongoing fiscal constraints under which most colleges and universities operate demand strong evidence-based academic management practices as much as fiscal discipline.

In 2006, the Secretary of Education’s Commission on the Future of Higher Education offered a strong indictment of American higher education (U.S. Department of Education 2006). The commission focused on costs that were too high, graduation rates that were too low, especially among low-income and minority students, and learning outcomes that remained a mystery. Overall, higher education responded as it had to previous calls for more accountability by developing strong defenses against the criticism and ultimately indicating that it was already accountable in many ways. In addition, leaders argued that higher education institutions are so diverse and unique that no single form of accountability could be used to assess all fairly (Carey 2007).

The No Child Left Behind legislation imposed unprecedented federal requirements on the K–12 system to use regularly administered standardized tests to document annual improvements in all student ethnic and socioeconomic subpopulations, and many thought higher education was next. However, since higher education still didn’t have standards by which to measure learning outcomes, the proposals focused on graduation rates.

Indeed, many panels, commissions, acts, and proposals were created to move toward a stronger sense of accountability. States tried to build accountability systems that mattered, and during the 1990s, the number of state-level accountability systems grew. Yet, in determining metrics, states often used information they already had, such as graduation rates, which resulted in mountains of data without context or meaning.

Mark Warner, chair of the National Governors Association in 2005, worked on a governors’ compact on high school graduation rates. He stated,

Clearly better data alone will not increase graduation rates or decrease dropout rates, but without better data states cannot adequately understand the nature of the challenge they confront. Knowing the scope of the problem, why students are leaving, and what their educational and personal needs are can help leaders target resources more effectively in support of those young people who are at-risk or who have already dropped out (National Governors Association 2005, 9).



Lingenfelter (2016, pp. 50–51) described four high-profile initiatives that were created to improve the scope, quality, and utility of information used to inform policy and practice in the measurement of outcomes:

• “Measuring Up, an effort to measure state-level performance in higher education and generate effective policy responses to improve performance.”

• “The Data Quality Campaign, State Longitudinal Data Systems, and Common Education Data Standards, all closely related initiatives to improve the availability and quality of educational data.”

• “Common Core State Standards for College and Career Readiness, an initiative to establish common learning objectives for K–12 education and assess student achievement throughout elementary and secondary education in order to promote more widespread attainment.”

• “Assessing Higher Education Learning Outcomes (AHELO), a feasibility study launched by the Organization for Economic Cooperation and Development (OECD).”

He also listed a number of fundamental questions that could be used to frame further meaningful discussion (Lingenfelter 2016, p. 51):

• “What should be measured? What is important? What matters to policy and practice?”

• “What data, collected with what definitions and procedures, and what combinations of data into metrics, will be credible and widely accepted?”

• “What meaning can legitimately be derived from the information provided by the measures?”

• “How should the information be used, by whom, and in what ways?”

Figure 1 The Accountability Triangle: State Priorities (Political), Academic Concerns (Professional), and Market Forces (Market), with Accountability at the center. Source: Burke 2004, p. 23

The Accountability Triangle (figure 1) provides insight into the expectations of multiple stakeholders and gives context to the environment in which the quest to improve accountability, assessment, analytics, and accreditation resides (Burke 2004).

As the Accountability Triangle illustrates, higher education resides in a space where there are many internal and external stakeholders, all expecting accountability. However, the demands and interests of these stakeholders differ:

• State priorities reflect the public need and desire for higher education programs and services, often as expressed by state officials but also by civic leaders outside government.

• Academic concerns involve the issues and interests of academic communities, particularly professors and administrators.

• Market forces cover customer needs and the demands of students, parents, businesses, and other clients of colleges and universities (Burke 2004).

State priorities represent political accountability, academic concerns reflect professional accountability, and market forces push market accountability.



Clearly, there are conflicting and dynamic demands in terms of accountability to whom, for what purpose, and with what outcomes. These multiple forces create tension between civic and collegiate interests and between commercial and entrepreneurial cultures, all of which reside within the higher education environment. These tensions are manifested as a series of accountability paradoxes:

• Institutional improvement versus external accountability

• Peer review versus external regulation

• Inputs and processes versus outputs and outcomes

• Reputation versus responsiveness

• Consultation versus evaluation

• Prestige versus performance

• Trust versus evidence

• Qualitative versus quantitative evidence (Burke 2004)

Since higher education is accountable to multiple stakeholders, a variety of metrics must be used in its assessment. These metrics are fundamental to building linkages between assessment and accountability.

Assessment

In education, the term “assessment” refers to the wide variety of methods or tools that educators use to evaluate, measure, and document the academic readiness, learning progress, skill acquisition, or educational needs of students (Glossary of Education Reform 2015).

Related to accountability is the assessment of what is important. In fact, assessment is the foundation upon which accountability and accreditation reside. Questions include what is assessed, by whom, and for what purpose. However, it must be determined whether the intent is to use assessment for accountability, for improvement, or for both. These two dimensions of assessment often reside in creative tension.

Peter Ewell of the National Center for Higher Education Management Systems has added significant insight into the assessment, accountability, and accreditation conversation. Ewell (2001) developed a helpful taxonomy for understanding assessment in terms of units of analysis, ways of looking at performance and outcomes, and ways to review performance (figure 2).

Figure 2 Taxonomy of Terms Commonly Used in Connection with Student Learning Outcomes

• Unit of analysis: Institution. Ways of looking at performance: Efficiency, Productivity, Effectiveness. Ways of looking at outcomes: Behaviors (Employment, Further Education, Career Mobility, Income). Ways to review performance: Evaluation.

• Unit of analysis: Program. Ways of looking at performance: Output, Productivity. Ways of looking at outcomes: Satisfaction. Ways to review performance: Measurement, Indicator, Assessment.

• Unit of analysis: Student. Ways of looking at performance: Outcome, Attainment, Development. Ways of looking at outcomes: Learning (Knowledge, Skill, Ability, Attitude/Disposition). Ways to review performance: Evidence of Achievement (Examinations, Performances, Student Work).

Source: Ewell 2001, p. 8

The assessment movement, as Ewell (2009, abstract 1) characterized it, emerged in the mid-1980s from “the demand by policymakers for better and more transparent information about student and institutional performance, the press by accreditors on institutions to collect and use student learning outcomes data, and the availability of more and better assessment instruments and approaches.”

As noted by Schray (n.d., p. 6), “Many proponents of greater public accountability in higher education and accreditation argue that the most important evidence of quality is performance, especially the achievement of student learning outcomes. This has led to a number of national and state efforts to identify a broad range of performance indicators or measures including access, productivity and efficiency, student learning, degree completion, and economic returns from postsecondary education. Many of these performance measures and indicators are represented in “Measuring Up: The National Report Card on Higher Education.”

Historically, higher education has relied on input measures such as student enrollment, number of faculty, investment in new buildings, and research grants and contracts received. A report by the Hechinger Institute on Education and the Media (n.d., p. 2) presented a similar conclusion: “Examinations of the quality of higher education usually focus on statistics representing the number of books in the library, the size of the endowment, test scores of incoming freshmen, graduation rates and the like.”

These metrics are lagging metrics since they report past activity and often have little direct bearing on student learning. Recently, more emphasis on outcomes-based performance measures has led to outcomes-based metrics such as student retention and persistence rates, graduation rates, rates of placement in jobs, and post-graduate income levels. However, these are still “after-the-fact” measures that are being used because of the lack of standardized learning outcomes data. Today, the use of analytics tools is bringing higher education more ability to monitor at-risk student behavior, factors related to persistence, and what interventions are working for which students. As outcomes-based education expands, more standards for learning are becoming available. The list of metrics now includes student success, student access and diversity, meeting workforce needs, and research and innovation that benefit the academic community and society (Miller 2016).



In addition, “as societal and economic factors redefine what skills are necessary
in today’s workforce, colleges and universities must rethink how to define,
measure, and demonstrate subject mastery and soft skills such as creativity
and collaboration. The proliferation of data mining software and developments
in online education, mobile learning, and learning management systems
are coalescing toward learning environments that leverage analytics and
visualization software to portray learning data in a multidimensional and
portable manner. In online and blended courses, data can reveal how student
actions contribute to their progress and specific learning gains” (Adams Becker
et al. 2017, pp. 8–9).

More recently, independent private colleges and schools organized as for-profit
institutions have been confronted by additional outcomes measures, including
the rate at which students default on loans post-graduation (Cohort Default
Rates) and the numerical relationship between the price of education and the
post-completion earning performance of the completer (so-called Gainful
Employment regulations). However, it is unclear what the future of these
regulations will be (Barrett 2017; Mayotte 2015).

The American Association of State Colleges and Universities (AASCU)
publishes an annual Top 10 Higher Education State Policy Issues list. Among
the many issues on the list across recent years is performance-based funding.
The AASCU January 2017 Policy Matters brief stated, “As a discretionary
state budget item, higher education will be among lawmakers’ top targets to
balance state budgets.… Higher education’s role in economic and workforce
development will be a top-tier concern for lawmakers looking to guide state
residents into available jobs” (AASCU Government Relations 2017, p. 1). Given
the limited revenues of many states, increased emphasis is being placed on
incentivizing improved institutional outcomes using existing resources, and
several models of performance-based funding have emerged. Linking state
funding to performance has also been a top-tier policy recommendation of many
major foundations, such as the Gates and Lumina Foundations. Evidence is still
being collected on the actual effectiveness and unintended consequences of
performance-based funding (AASCU Government Relations 2017).

Over time, there have been many attempts to develop performance measures. National and state efforts have identified several measures including access, productivity and efficiency, degree completion, and economic returns from postsecondary education. For example, Measuring Up 2008: The National Report Card on Higher Education, authored by the National Center for Public Policy and Higher Education (2008, p. 4), focused on six measures that apply to sets of institutions, an entire community or state, or a set of communities. These measures by implication integrate social and economic conditions into the performance evaluation of postsecondary education. The key indicators were selected because they are broad gauges for understanding success in key performance areas:

• “Preparation for college: How well are high school students prepared to enroll in higher education and succeed in college-level courses?”

• “Participation: Do young people and working-age adults have access to opportunities for education and training beyond high school?”

• “Affordability: How difficult is it to pay for college when family income, the cost of attending college, and student financial aid are taken into account?”

• “Completion: Do students persist in and complete certificate and degree programs in college?”

• “Benefits: How do college-educated and trained residents contribute to the economic and civic well-being of each state?”

• “Learning: How do college-educated residents perform on a variety of measures of knowledge and skills?”

Another source of information about the aggregate performance of colleges and universities is Complete College America (CCA). Established in 2009, CCA “is a national nonprofit with a single mission: to work with states and consortia to significantly increase the number of Americans with quality career certificates or college degrees and to close attainment gaps for traditionally underrepresented populations” (Complete College America, n.d., 1). Thirty-four states currently participate in CCA, which advocates structural changes to an institution’s approach to student course-taking behavior. The CCA model promotes more intentionality in course taking, course scheduling, and developing pathways for student success.

CCA has identified six institutional “game changers” that appear to contribute to student success:

Through research, advocacy, and technical assistance, we help states put in place the six GAME CHANGERS that will help all students succeed in college: 15 to Finish, Math Pathways, Co-requisite Support, Momentum Year, Academic Maps with Proactive Advising, A Better Deal for Returning Adults. (Complete College America, n.d., 4)

While the “15 to Finish” approach is intended to move students to completion in a more timely manner, it is important to understand that full-time does not work for all students. A more comprehensive metric includes time to degree and on-path progress to completion for part-time students. “Time to degree is a major concern for students, one that colleges often do not take seriously enough. Research shows that students who can take more classes on a focused path to a degree, should, because it helps them succeed at higher rates. Whether it’s 15 in a term, 30 in a year, or just one more class,” said Dr. Davis Jenkins, Civitas Learning advisor and senior research scholar at the Community College Research Center (Civitas Learning 2017, 5).

How can higher education improve the impact of performance measurement as related to student success? By developing and using outcomes measures for student learning.



Ewell (2001) provided a strong summary of this topic in the report Accreditation and Student Learning Outcomes: A Proposed Point of Departure, listing six core questions that need to be addressed:

• “What is a ‘student learning outcome’?”

• “What counts as evidence of student learning?”

• “At what level (or for what unit of analysis) should evidence of student learning outcomes be sought?”

• “To what extent should particular student learning outcomes be specified by accreditors?”

• “What models are available to accreditors when choosing an approach?”

• “What issues should be anticipated:
  • What standards of evidence will be used?
  • How will evidence be used in determining quality?
  • How will faculty be involved?
  • How will the interests and concerns of external stakeholders be addressed?” (Ewell 2001, pp. 14–15)

Note that this was written in 2001. Over a decade and a half later, we are still having the same discussions about student learning outcomes and adequate standard measures for reporting those outcomes.

More recently, Lingenfelter (2016) used the table of performance indicators and sub-indicators from Measuring Up 2008 (National Center for Public Policy and Higher Education 2008). The table includes metrics for preparation, participation, affordability, completion, and benefits. Preparation indicators include high school completion, K–12 course taking, K–12 student achievement, and teacher quality. Participation indicators include numbers of young adults graduated from high school and enrolled in college and working-age adults enrolled in postsecondary education. Affordability indicators include family ability to pay, strategies for affordability, and reliance on loans. Completion indicators include persistence and completion measures. Benefits indicators include educational achievement, economic benefits, and civic benefits.

Significant in the field of learning assessment has been the rise of competency-based education (CBE). While more concrete fields such as nursing, engineering, and technology have long assessed students’ skill levels in their particular areas, CBE programs have not been adopted widely. However, this may change given the demand for more accountability, the rising cost of education, and the focus on streamlining pathways to credentials. CBE is an approach that allows students to progress toward mastery of content, related skills, or other competencies:

Competency-based education (CBE) awards academic credit based on mastery of clearly defined competencies.… CBE is built around clearly defined competencies and measurable learning objectives that demonstrate mastery of those competencies.… CBE replaces the conventional model in which time is fixed and learning is variable with a model in which the time is variable and the learning is fixed (EDUCAUSE Learning Initiative 2014, sections 1, 2, 4).

The Competency-Based Education Network, supported by the Lumina Foundation, released the report Quality Framework for Competency-Based Education Programs in September 2017. This work developed definitions of quality related to CBE in order to establish “Shared Design Elements” and “Emerging Practices of Competency-Based Education.” It listed eight elements of quality related to CBE programs:

• “Demonstrated institutional commitment to and capacity for CBE innovation”

• “Clear, measurable, meaningful and integrated competencies”

• “Coherent program and curriculum design”

• “Credential-level assessment strategy with robust implementation”

• “Intentionally designed and engaged learner experience”

• “Collaborative engagement and external partners”

• “Transparency of student learning”

• “Evidence-driven continuous improvement” (Competency-Based Education Network 2017, p. 4)

The work in CBE is paving the way for more foundational definitions and
standards that institutions and academic programs can use to develop the
infrastructure for improved learning. Research indicates that students are more
active, engaged, and motivated when involved in coursework that is challenging
but within their capacity to master. CBE accomplishes this by linking progress to
mastery (EDUCAUSE Learning Initiative 2014).
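To make the quoted “time variable, learning fixed” model concrete, here is a minimal illustrative sketch; the competency names and the mastery threshold are hypothetical examples, not drawn from the CBE Network framework:

```python
# Toy model of competency-based credit: credit depends on demonstrated
# mastery of a defined competency, not on seat time. The competencies and
# the 80% threshold are hypothetical examples.
MASTERY_THRESHOLD = 0.80

# Scores from repeated attempts; time is variable, so any number of
# attempts is allowed before mastery is demonstrated.
competency_attempts = {
    "interpret financial statements": [0.55, 0.70, 0.85],
    "build a cash flow model": [0.90],
    "present an audit summary": [0.60, 0.72],
}

def credit_awarded(scores: list[float]) -> bool:
    """Learning is fixed: credit only once performance clears the bar."""
    return max(scores) >= MASTERY_THRESHOLD

for competency, scores in competency_attempts.items():
    status = "credit awarded" if credit_awarded(scores) else "not yet mastered"
    print(f"{competency}: {status}")
```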

Learning Analytics

While student learning outcomes are still considerably under development, improved measures for assessing student learning are evolving. The emerging
field of learning analytics brings tools for the analysis of learning behavior to
decision makers to improve student learning in real time.

Learning science, smart technology, and the pressure for more accountability
have created a perfect storm for the development of a learning analytics
environment. In fact, in its annual Horizon Report that focuses on current
and future trends in higher education, NMC noted that one trend it has been
following is learning analytics (Adams Becker et al. 2017).

There is a wide continuum of activities within the ecosystem of analytics. Long and Siemens (2011, p. 36) noted, “Analytics spans the full scope and range of
activity in higher education, affecting administration, research, teaching and
learning, and support resources. The college/university thus must become a
more intentional, intelligent organization, with data, evidence, and analytics
playing the central role in this transition.”

The growing focus on measuring learning encompasses the development of methods and tools to evaluate, measure, and document academic readiness,
learning progress, skill acquisition, and other educational needs of students.
This is critical as societal and economic factors redefine what skills are
necessary in today’s workforce. Colleges and universities need to rethink what
demonstration of skills and mastery of subject matter look like:

Twenty-first century learning outcomes emphasize academic skill along with interpersonal and intrapersonal competencies for complete learner
success. To evaluate these learning gains, next-generation assessment
strategies hold the potential to measure a range of cognitive skills,
social-emotional development, and deeper learning, giving students
and instructors actionable feedback to foster continued growth. The
foundation for facilitating this kind of assessment is learning analytics
(LA)—the collection, analysis, and reporting of data about learners
and their contexts, for purposes of understanding and optimizing
learning and the environments in which it occurs. LA continues to gain
traction at institutions as a means to assess and fundamentally improve
student learning. Data mining software captures rich datasets that
enable learners and instructors alike to monitor learning and generate
personalized feedback to ensure continued progress. As the LA industry
matures, the emphasis has shifted from data accumulation to garnering
nuanced insights on student engagement through data aggregated across
multiple sources and courses (Adams Becker et al. 2017, p. 14).



More campuses are participating in gathering and analyzing data on student learning in order to recognize learning challenges, improve student outcomes, and personalize the learning experience:

A recent report by the National Institute for Learning Outcomes and Assessment found that student assessment is emerging as a leading priority for institutions of higher education because of pressure from accrediting and governing entities and the growing need for more and better evidence of student achievement. They reported that in 2013, nearly 84% of colleges and universities surveyed adopted stated learning outcomes for all of their undergraduates, up from 10% in 2009, and the range of tools and measures used to assess student learning has expanded greatly (Johnson et al. 2015, p. 12).

There has been significant growth in data mining software, and learning management systems are developing that provide analytics and visualizations to report and monitor learning. These learning management platforms provide the foundation for instructors to determine and evaluate learning metrics, learning behavior, student performance, and individual interventions. As Shacklock (2016, p. 23) noted, “Academic performance can be further enhanced by more timely data being accessible to students and their academic mentors (personal tutors), so that interventions to enhance and support student learning can be built into the student interaction more regularly during a period of study.”

Academic performance is enhanced “through data-informed solutions that reduce the time to degree completion, improve student outcomes, and target students for recruitment … learning analytics are benefiting a range of stakeholders beyond learners and instructors, to bodies of governance, researchers, and institutions. Learning analytics has developed in three stages, moving from a focus on hindsight to foresight; the first stage was describing results, the second stage was diagnosing, and the third and current stage is predicting what will happen in the future. Creating actionable data is a hallmark of adaptive learning, which is the latest focus of experiments and pilot programs within various educational settings” (Johnson et al. 2016, p. 38).

Metrics development and use continue to mature, supported by data mining techniques, learning management system use, and the development of predictive analytic models that assist faculty and advisors in determining areas of concern and demonstrating the effectiveness of specific interventions. At-risk behavior can be anticipated, and interventions tailored to individual student learning needs are now possible.

In a recent article, Mark Milliron stated, “We have at our fingertips the capabilities to have more students succeed than ever before by leveraging the technology tools we have at our disposal” (Roscorla 2014, 6). The article continued, “The problem is actually getting student learning data to the front lines where faculty can use it to test innovations, create interventions and predict actions such as likelihood of course completion and graduation” (Roscorla 2014, 7). Campuses need to have the right infrastructure to get the right data to the right people in the right way. If faculty, advisors, and students have access to learning data, then they can make more informed decisions.

This is the missing connection between assessment, accountability, analytics, and accreditation. With the advent of learning outcomes, learning analytics, and predictive analytics, decision makers can identify and access student learning data and determine appropriate interventions. This brings the demands of accreditation full circle to increase emphasis not only on student learning outcomes but also on what the institution does to act on the information and demonstrate continuous improvement in instruction.
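The predictive stage described above can be made concrete with a minimal sketch of an early-alert classifier. This is only an illustration, not any vendor’s platform: the data file, column names, and the 0.5 alert threshold are hypothetical, and production systems draw on far richer LMS and SIS data.

```python
# Illustrative early-alert sketch: predict course completion from week-4
# engagement signals, then flag low-probability students for advisors.
# The file and column names are hypothetical examples.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("week4_engagement.csv")  # one row per student
features = ["logins_per_week", "assignments_submitted", "quiz_avg",
            "discussion_posts", "credits_attempted"]
X, y = df[features], df["completed_course"]  # 1 = completed, 0 = did not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score the current term and surface the students most in need of outreach
# while there is still time to intervene.
df["p_complete"] = model.predict_proba(df[features])[:, 1]
at_risk = df[df["p_complete"] < 0.5].sort_values("p_complete")
print(at_risk[["student_id", "p_complete"]].head(10))
```

The value, as the article argues, is not the model itself but routing its output to faculty and advisors who can act on it during the term.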

Accreditation

For more than 100 years, accreditation has been the primary vehicle for defining and ensuring quality in U.S. postsecondary and higher education: “In this complex public-private system, recognized accreditation organizations develop quality standards and manage the process for determining whether institutions and programs meet these standards and can be formally accredited” (Schray, n.d., p. 1). A long-standing debate persists regarding accreditation’s role in ensuring the government and the public that higher education institutions and programs are effective in achieving results, especially student learning outcomes: “Currently, accreditation standards focus primarily on resource and process standards (e.g., faculty qualifications, facilities and support services)” (Schray, n.d., p. 6). However, regional accreditation agencies are now working to establish quality standards for student assessment to ensure that institutions can and do provide valid and reliable evidence of student learning.

Thus, accreditation resides in a creative tension between an audit function and one that supports continuous quality improvement. The audit function includes the regular review, assessment, and reporting of whether an institution is maintaining and sustaining quality. It focuses on after-the-fact reporting on what an institution is doing to fulfill its mission and serve its students. The continuous improvement side of accreditation is intended to support the institution in improving demonstrated academic performance, institutional effectiveness, and fiscal stability. In addition, accrediting organizations play a “gatekeeper” role in higher education because accreditation is used to determine whether institutions receive federal and state grants and loans annually. This provides the primary means to protect consumers against fraud and abuse (Schray, n.d.). The many functions of accreditation are illustrated in figure 3.

Figure 3 Accreditation Functions: Accreditation at the center, linked to its Gatekeeper, Auditor, and Continuous Improvement roles.

In addition, accreditation operates within a triad of overlapping state, federal, and accreditor interests. States are responsible for licensing institutions to protect against consumer fraud, the federal government recognizes accrediting agencies and ensures compliance with Title IV on financial aid, and accrediting agencies ensure quality and effectiveness (Burke 2004).

Higher education has always been accountable, whether to religious orders, the government, or the public. The questions remain, Accountable for what? To whom? In the era following World War II, emphasis was placed on educating returning veterans, and the G.I. Bill of 1944 increased the focus on expanding campuses, balancing missions, and designing statewide governance and coordinating structures. As societal and economic needs evolved, the focus of institutions evolved to increasing college access and opportunity and developing a skilled, educated workforce to support economic expansion (Burke 2004). Today, institutions are focusing on developing a globally competitive workforce with the skills needed in a restructured economy and on maintaining a high quality of life and healthy democracy.



Given the strong movement toward objective performance-based activities, a
paradigm shift was in order to move the focus of performance measures from
indicators of teaching to indicators of learning. Today’s measures support
institutional effectiveness, quality improvement, and student learning outcomes.

Ewell and Steen (n.d., 10) noted, “Accreditors were first mandated to look at
learning outcomes as a condition of recognition in Department of Education
rules established in 1989, but these directives were not very specific.”
Accreditors are now being asked not just whether they examine student
learning outcomes in light of an institution’s mission, but also why they don’t
establish and enforce common standards of learning that all must meet (Ewell
and Steen, n.d.). The answer is that there are still no standards of learning
outcomes so institutions must continue to use the measures they do have:
graduation rates, persistence rates, and, in some cases, employment statistics.

Connecting the Dots

The fundamental questions for creating connections across the higher education ecosystem between assessment, accountability, analytics, and
accreditation relate to a common language and focus on who, for what
purpose, using what methods, and with what outcomes and actions. While
higher education has always been accountable in one way or another,
assessment targets and goals have changed as states have increasingly moved
to performance-based funding, although the actual targets for performance
vary across the states. Assessment models define the metrics. Accountability
systems report results to various stakeholders. Accreditation ensures that an
institution’s work is of high quality and that it is continually improving.

While these dynamics have been in play for some time, analytics now provide
a strong platform for reporting, monitoring, and evaluating progress as well
as acting on outcomes to improve student success. The science of learning, the
development of powerful data systems, and the advancement of predictive
student solution platforms have come together to enhance our ability to assess,
account, and accredit actual student learning and institutional performance.
The competency-based learning model is a precursor to the ability to establish
what students need to demonstrate in terms of mastery of competencies and
skills, how they will demonstrate that mastery, and what can be done in the
learning environment to support progress toward timely mastery.

Based on the research around learning science and competency-based learning, we know learning is measurable—and more flexible. We are close to
supporting teachers and mentors in ways that improve student success. The
new analytics platforms can monitor and assess student learning behavior and
accomplishments. The systems can alert faculty and advisors of high levels,
adequate levels, and inadequate levels of accomplishment. The focus is on
competencies and mastery, in which accomplishments are certified in micro-
credentials (Kelly 2016).

Learning management systems are integral in this effort, providing the behind-the-scenes platform in a student’s learning experience and serving as the course hub and connector for management and administration, communication and discussion, creation and storage of materials, and assessment of subject mastery (Lang and Pirani 2014). These systems enable the enhancement of learner information in real time for faculty, students, and advisors.

Figure 4 connects the components of accountability, assessment, accreditation, and analytics. The model begins with the analytics dimension, which provides a data platform from which decision makers can access information, develop insight, and review data on what is working in the institution. Learning analytics specifically provide invaluable information on student behavior and learning, including insights into what works to support student learning. The assessment dimension provides the foundation for improvement and is based on the strength of the analytics environment. Stronger institutional evidence results when cross-functional units agree on what is assessed, how it is assessed, and what can be accomplished from standardizing outcomes metrics. At this point, agreed-upon performance-based measures of accountability are strengthened beyond after-the-fact graduation, persistence, and employment rates. The institution can move from data overload to targeted measures that can make a difference to individual students. Further, adaptive assessment can assist students in their current learning environment. The institution can move from fragmented student success measures to a fully integrated set of student success efforts spanning the student life cycle. This integrated approach provides the institution with strong evidence to meet the multiple demands of stakeholders and accrediting bodies.

Figure 4 Connecting the Dots: A Model for Integrated Decision-Making. The model links Accountability (performance-based expectations; multiple stakeholder expectations), Accreditation (quality assurance; continuous improvement; gatekeeper), Assessment (standard performance metrics; learning metrics), and Analytics (data; insights; action tools for decision-making).

As stakeholders demand more of education, a coordinated, aligned approach to assessment, accountability, analytics, and accreditation will result in stronger, more sustainable outcomes. Data must move from reportability to action.

A critical component is moving from data to insight. As Kamal (2012, 2) wrote, developing metrics is easy; developing insights is hard: “In contrast to this abundant data, insights are relatively rare. Insights here are defined as actionable, data-driven findings that create business value. They are entirely different beasts from raw data. Delivering them requires different people, technology, and skills—specifically including deep domain knowledge. And they’re hard to build.”

Why is this important? In order to connect the dots of assessment, accountability, analytics, and accreditation, insight makers are required. These are people across functional areas of the institution who can go beyond the numbers to understand the implications, impact, and inspiration behind the numbers. They can lead collaborative conversations that use statistics, reporting, and visualization tools to help maximize data alignment across the institution’s accountability agenda. Good data are fundamental, but analysis for impact is crucial for real change. This is true for each of the dimensions, whether assessment, accountability, analytics, or accreditation.


So, what are the steps to connecting the dots?

• Start by assessing your institution’s approach to integrated planning.

• Review what metrics, data, and indicators are being used for which part of the accountability and accreditation requirements.

• Come to an agreement on data definitions and standards while moving to actionable outcomes (see the sketch after this list).

• Leverage the power of data through deeper insights and action.

• Inventory which stakeholder groups require what data and information and align these with the integrated planning process. These include the requirements of state performance mandates, federal data mandates, and both regional and disciplinary accreditation mandates.

• Stay intentional about the focus on improving student learning and institutional performance.

• Pay attention to the tools, applications, and services that are available to support analytics and decision making based on data.

• Create a culture of measurement, performance, and action.

• Continue to assess outcomes. Establishing an environment that encourages the use of research in seeking the answers to questions about student success will enable your institution to thrive.
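As an illustration of what an agreed-upon, executable data definition might look like, here is a minimal sketch. The cohort rules, field names, and stakeholder labels are hypothetical examples, not any mandated standard; the point is that a shared definition replaces each office computing “persistence” its own way.

```python
# Minimal sketch of a shared data definition: every office computes
# "fall-to-fall persistence" the same way because the definition lives in
# one agreed-upon place. All names and rules here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    definition: str                # plain-language definition all units endorse
    stakeholders: tuple[str, ...]  # who requires this metric (inventory step)

FALL_TO_FALL_PERSISTENCE = MetricDefinition(
    name="fall_to_fall_persistence",
    definition=("Share of the first-time, full-time fall cohort enrolled "
                "on the following fall census date"),
    stakeholders=("state performance report", "regional accreditor",
                  "board dashboard"),
)

def persistence_rate(cohort_ids: set[int], next_fall_enrolled: set[int]) -> float:
    """Compute the metric exactly as defined, for any unit that needs it."""
    return len(cohort_ids & next_fall_enrolled) / len(cohort_ids)

# Example: 3 of the 4 cohort members re-enrolled the following fall.
print(persistence_rate({101, 102, 103, 104}, {102, 103, 104, 205}))  # 0.75
```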

It is crucial that we rethink our educational models. We need to ask ourselves to whom we are accountable and whether we are making the best use of the
data we have. We need new relationships with diverse partners. Michael Crow
of Arizona State University speaks of moving from the industrial age, one-size-
fits-all model of education toward one that focuses on inclusion rather than
exclusion (Millichap and Dobbin 2017). This will require greater collaboration
across the institution and the establishment of partnerships with K–12 schools,
other higher education institutions, communities, and business/industry. Data
collaboration and federation will bring increased strength to the integrated
planning environment.

References
AASCU Government Relations. 2017. Top 10 Higher Education State Policy Issues for 2017. Policy Matters, January. Accessed December 5, 2017: www.aascu.org/policy/publications/policy-matters/Top10Issues2017.pdf.

Adams Becker, S., M. Cummins, A. Davis, A. Freeman, C. Hall Giesinger, and V. Ananthanarayanan. 2017. NMC Horizon Report: 2017 Higher Education Edition. Austin, TX: The New Media Consortium. Accessed December 5, 2017: http://cdn.nmc.org/media/2017-nmc-horizon-report-he-EN.pdf.

Baer, L., and J. Campbell. 2012. From Metrics to Analytics, Reporting to Action: Analytics’ Role in Changing the Learning Environment. In Game Changers: Education and Information Technologies, ed. D. G. Oblinger, 53–65. Washington, DC: EDUCAUSE.

Barone, M. 2017. Is College Worth It? Increasing Numbers Say No. Washington Examiner, June 8. Accessed December 5, 2017: www.washingtonexaminer.com/is-college-worth-it-increasing-numbers-say-no/article/2625304.

Barrett, B. 2017. How Much Would It Cost for For-Profit Colleges to Pass Gainful Employment? New America, June 15. Accessed December 5, 2017: www.newamerica.org/education-policy/edcentral/how-much-would-it-cost-profit-colleges-pass-gainful-employment/.

Burke, J. C., ed. 2004. Achieving Accountability in Higher Education: Balancing Public, Academic, and Market Demands. San Francisco: Jossey-Bass.

Carey, K. 2007. Truth Without Action: The Myth of Higher Education Accountability. Change 39 (5): 24–29.

Civitas Learning. 2017. New Data Reveal Key Opportunities to Improve Part-Time Student Success. News release, October 11. Accessed December 5, 2017: www.civitaslearning.com/press/community-insights-report-key-opportunities-improve-part-time-student-success/.

Competency-Based Education Network. 2017. Quality Framework for Competency-Based Education Programs. Accessed December 5, 2017: www.cbenetwork.org/sites/457/uploaded/files/CBE_Quality_Framework.pdf.

Complete College America. n.d. About. Accessed December 5, 2017: http://

Hechinger Institute on Education and the Media. n.d. Beyond the Rankings: Measuring Learning in Higher Education. An Overview for Journalists and Educators. New York: Hechinger Institute on Education and the Media. Accessed December 19, 2017: http://hechinger.tc.columbia.edu/primers/TeaglePrimer_092106.pdf.

Johnson, L., S. Adams Becker, M. Cummins, V. Estrada, A. Freeman, and C. Hall. 2016. NMC Horizon Report: 2016 Higher Education Edition. Austin, TX: The New Media Consortium. Accessed December 19, 2017: http://cdn.nmc.org/media/2016-nmc-horizon-report-he-EN.pdf.

Johnson, L., S. Adams Becker, V. Estrada, and A. Freeman. 2015. NMC Horizon Report: 2015 Higher Education Edition. Austin, TX: The New Media Consortium. Accessed December 5, 2017: http://cdn.nmc.org/media/2015-nmc-horizon-report-HE-EN.pdf.

Kamal, I. 2012. Metrics Are Easy, Insight Is Hard. Harvard Business Review, September 24. Accessed December 5, 2017: https://hbr.org/2012/09/metrics-are-easy-insights-are-hard.

Kauffman, S. 2017. Higher Learning Commission Blue Ribbon Panel Tasked to Set Agenda for New Initiatives and Innovation in College and University Accreditation. News release, July 19. Accessed December 7, 2017: www.prweb.com/releases/2017/7/prweb14521524.htm.

Kelly, R. 2016. 7 Things Higher Education Innovators Want You to Know. Campus Technology, March 14. Accessed December 5, 2017: http://campustechnology.com/Articles/2016/03/14/7-Things-Higher-Education-Innovators-Want-You-To-Know.aspx?p=1.

Lang, L., and J. A. Pirani. 2014. The Learning Management System Evolution. ECAR Research Bulletin, May 20. Accessed December 5, 2017: https://library.educause.edu/~/media/files/library/2014/5/erb1405-pdf.pdf.

Lingenfelter, P. E. 2003. Educational Accountability: Setting Standards, Improving Performance. Change 35 (2): 18–23.

———. 2016. “Proof,” Policy, and Practice: Understanding the Role of Evidence in Improving Education. Sterling, VA: Stylus Publishing.

Long, P., and G. Siemens. 2011. Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, September/October, 31–40. Accessed December 5, 2017: https://er.educause.edu/~/media/files/article-
downloads/erm1151.pdf.
Dossani, R. 2017. Is College Worth the Expense? Yes, It Is. The Rand Blog,
May 22. Accessed December 5, 2017: www.rand.org/blog/2017/05/ MacTaggart, T. 2017. The 21st Century Presidency: A Call to Enterprise
is-college-worth-the-expense-yes-it-is.html. Leadership. Washington, DC: Association of Governing Boards. Accessed
December 5, 2017: www.agb.org/sites/default/files/report_2017_21st_
EDUCAUSE Learning Initiative. 2014. 7 Things You Should Know About century_presidency.pdf.
Competency-Based Education. February 11. Accessed December 5, 2017:
https://library.educause.edu/~/media/files/library/2014/2/eli7105-pdf. Mayotte, B. 2015. What the New Gainful Employment Rule Means
for College Students. U.S. News & World Report, July 8. Accessed
Ewell, P. T. 2001. Accreditation and Student Learning Outcomes: A Proposed December 5, 2017: www.usnews.com/education/blogs/student-loan-
Point of Departure. CHEA Occasional Paper, September. Washington, DC: ranger/2015/07/08/what-the-new-gainful-employment-rule-means-for-
Council for Higher Education Accreditation. Accessed December 5, 2017: college-students.
www.chea.org/userfiles/Occasional%20Papers/EwellSLO_Sept2001.pdf.
Miller, T. 2016. Higher Education Outcomes-Based Funding Models and
———. 2009. Assessment, Accountability, and Improvement: Revisiting Academic Quality. Lumina Issue Papers, March. Accessed December 19,
the Tension. Occasional Paper #1. Urbana, IL: National Institute for 2017: www.luminafoundation.org/files/resources/ensuring-quality-1.pdf.
Learning Outcomes Assessment. Accessed December 5, 2017: www.
learningoutcomeassessment.org/occasionalpaperone.htm. Millichap, N., and G. Dobbin. 2017. 7 Recommendations for Student
Success Initiatives. EDUCAUSE Review, October 11. Accessed December
———. 2014. The Growing Interest in Academic Quality. Trusteeship, 5, 2017: https://er.educause.edu/blogs/2017/10/7-recommendations-for-
January/February. Accessed December 5, 2017: www.agb.org/ student-success-initiatives.
trusteeship/2014/1/growing-interest-academic-quality.
National Center for Public Policy and Higher Education. 2008. Measuring
Ewell, P., and L. A. Steen. n.d. The Four As: Accountability, Accreditation, Up 2008: The National Report Card on Higher Education. San Jose, CA:
Assessment, and Articulation. Mathematical Association of America. National Center for Public Policy and Higher Education. Accessed
Accessed December 5, 2017: www.maa.org/the-four-as-accountability- December 5, 2017: http://measuringup2008.highereducation.org/print/
accreditation-assessment-and-articulation. NCPPHEMUNationalRpt.pdf.

Glossary of Education Reform. 2015. Assessment. Accessed December 5,


2017: http://edglossary.org/assessment/.


National Commission on Excellence in Education. 1983. A Nation at Risk: The Imperative for Education Reform. Washington, DC: National Commission on Excellence in Education. Accessed December 5, 2017: https://www2.ed.gov/pubs/NatAtRisk/index.html.

National Governors Association. 1986. Time for Results: The Governors' 1991 Report on Education. Washington, DC: National Governors Association.

———. 2005. Governors Sign Compact on High School Graduation Rate at Annual Meeting. News release, July 16. Accessed December 5, 2017: www.nga.org/cms/home/news-room/news-releases/page_2005/col2-content/main-content-list/governors-sign-compact-on-high-s.html.

Roscorla, T. 2014. How Analytics Can Help Colleges Graduate More Students. Center for Digital Education: Converge, July 15. Accessed December 19, 2017: www.centerdigitaled.com/news/How-Analytics-Can-Help-Colleges-Graduate-More-Students.html. [Updated link: https://www.govtech.com/education/Can-Analytics-Help-Colleges-Graduate-More-Students.html.]

Schray, V. n.d. Assuring Quality in Higher Education: Key Issues and Questions for Changing Accreditation in the United States. Issue Paper, Secretary of Education's Commission on the Future of Higher Education. Accessed December 5, 2017: https://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/schray.pdf.

Shacklock, X. 2016. From Bricks to Clicks: The Potential of Data and Analytics in Higher Education. London: Higher Education Commission. Accessed December 19, 2017: www.policyconnect.org.uk/hec/sites/site_hec/files/report/419/fieldreportdownload/frombrickstoclicks-hecreportforweb.pdf.

Suskie, L. 2015. Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability. San Francisco: Jossey-Bass.

U.S. Department of Education. 2006. A Test of Leadership: Charting the Future of U.S. Higher Education. Washington, DC: U.S. Department of Education. Accessed December 5, 2017: https://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/pre-pub-report.pdf.

Author Biography

Linda Baer, PhD, is a senior consultant with Civitas Learning. She has served over 30
years in numerous executive-level positions in higher education, including senior program
officer, postsecondary success for the Bill & Melinda Gates Foundation, senior vice chancellor
for academic and student affairs in the Minnesota State College and University System, senior
vice president and interim president at Bemidji State University, and interim vice president
for academic affairs at Minnesota State University, Mankato. Her ongoing focus is to inspire
leaders to innovate, integrate, and implement solutions to improve student success and
transform institutions for the future. She presents nationally on academic innovations, educational transformation,
the development of alliances and partnerships, the campus of the future, shared leadership, and building
organizational capacity in analytics. Recent publications have been on smart change, shared leadership, successful
partnerships, innovations/transformation in higher education, and analytics as a tool to improve student success.

Feature Article

Reflections on Two Decades of Quality Assurance and Accreditation in Developing Economies
Fred M. Hayward, PhD
In our increasingly mobile world, quality assurance and accreditation across the globe, and particularly in developing countries, have a number of implications for higher education as a whole.

Introduction

These reflections grow out of more than 20 years of involvement in quality assurance in Africa, the Middle East, and Asia as well as experience with quality
assurance in Europe and the United States as executive vice president of the
Council for Higher Education Accreditation (CHEA) and senior associate at
the American Council on Education. I led a review of quality assurance and
accreditation in Sub-Saharan Africa for the World Bank in 2006 (Hayward
2006) and assisted with accreditation and reviews of accreditation in a number
of countries, including Madagascar, Pakistan, Afghanistan, Namibia, Ethiopia,
Ghana, South Africa, Bangladesh, and Tanzania.

As I look back on these cases, I am struck by the progress made in improving quality over the years (sometimes an unexpected consequence of the quality
assurance process), the enduring impact of quality assurance on higher
education in developing economies, and what these successes suggest for
quality assurance in the United States, Canada, and Europe regarding the review
process, trust, academic freedom, and the importance of clear standards.

This article first appeared in Planning for Higher Education, Issue V46N1, October–December 2017



Quality Assurance and Accreditation in Developing Economies

There have been many attempts to define accreditation and quality assurance. I am well aware of that debate, especially as I authored a glossary of quality assurance terms for CHEA when I was executive vice president of that organization. That process in 2001 was very much a collective endeavor, with contributions from CHEA members from various institutions in the United States, Latin America, Europe, and Asia. The glossary, which also invited continuing input after it was posted on the CHEA website, gave the definition of accreditation as "The process of external quality review used in higher education to scrutinize colleges, universities and higher education programs for quality assurance and quality improvement" (Hayward 2001).

Below that definition were brief references to the process in South Africa, Western Europe, the United Kingdom, and the United States. I do not want to suggest that what was in the glossary resulted from a "vote" or was approved by all who participated, but it was very much a consensus document in many respects. I say that not to defend any particular definition, but to suggest that there was then, and I think remains, a general consensus about what quality assurance and accreditation mean. Nonetheless, the demand for a new definition persists, as an article by Milton Greenberg (2014, 13), former provost and interim president at American University, suggested:

A first run at a new definition might contain these elements: Accreditation is a process by which recognized authorities validate that an institution meets minimal professional standards and accountability based on its mission. Standards established by professional groups and accrediting bodies are validated by government officials who also establish rules and regulations for the conduct of the accreditation process.

That too is a reasonable definition, though in some countries the accreditation body is governmental, with varying degrees of autonomy—a critical difference. For the purpose of this article, I use the definition developed in 2001 and noted above.

There seems to me to be substantial commonality in the goals of accreditation among those involved in developing countries, including to improve the quality of teaching, research, and, for some, service to the institution and the public. Other major goals are to protect the public from fraud and low standards, provide accountability, guarantee institutional financial sustainability, ensure institutions meet national and employer needs, and foster economic, social, and political development.

The accreditation process itself is a relatively recent phenomenon. It began in the United States in the 1880s with several Eastern universities asking academics from neighboring universities to visit and review their programs. It soon expanded into a more formal process. For decades, only the United States, the United Kingdom, and a few other countries had external quality assurance (Salmi 2015). A number of countries had audits conducted by the ministry of education, especially European and Francophone countries; the audit existed as an internal process in the United Kingdom until it was replaced in 2002 by institutional reviews. External examiners, specialists in particular disciplines, are widely used in the United Kingdom today as well as in some parts of Europe, Africa, and Asia, although they were largely replaced when accreditation was instituted in several African and other countries. There have been efforts to rank the quality of higher education institutions by ministries and independent agencies; in general, most of these have been found wanting. In a few countries, self-assessments are the only
requirement, though in most cases they are part of the accreditation process
in which the self-assessments are then reviewed by peer reviewers who also
carry out site visits and make recommendations about whether to accredit the
institution based on published standards. For the most part, in recent years
accreditation has become the standard mode of quality assurance in higher
education in developing areas.

In Africa there was only one quality assurance organization in 1985, but by 2006 there were 11 (Hayward 2006); 23 were in place by 2014, and today almost every Sub-Saharan African state has a quality assurance agency (Salmi 2015). In Asia
prior to 1985 there were three quality assurance agencies. By 2004, there were
15 quality assurance organizations in 13 countries, 13 of which were involved in
accreditation and two in audits (Lenn 2004). Now almost every Asian country
has some sort of accreditation process in place.1 Thus by 2017, the process of
accreditation had become almost universal. As we will see, processes around the
world are remarkably similar with respect to goals, methods, and expectations. At the same time, some of the processes are more demanding than others, with understandably mixed results. Nonetheless, what I want to emphasize is the general acceptance of the importance of quality assurance, the need for quality improvement around the world, and the general similarity of the approaches to the process.

A major challenge to the consensus on the importance of accreditation has been raised by the rapid growth of student demand for higher education, the rapid expansion of private higher education to meet that demand, and the limited ability of many systems to review all institutions for quality in a timely fashion. In Afghanistan, for example, the number of private higher education institutions increased from zero in 2005 to more than 125 in 2017, leading to serious concerns about their quality.2 This rapid growth has been replicated around the world, especially in developing economies, although because many of the new institutions were of low quality, they were eventually closed, as
happened in Afghanistan, South Africa, Ethiopia, Kenya, and the Philippines.3
However, it has become increasingly difficult for accrediting and quality
assurance agencies to keep up with the growth of private higher education
institutions due to the cost of reviews, the lack of professional staff, and
sometimes political and other interference in the process.

1 In 2015, only Myanmar was without a formal quality assurance agency. See Salmi (2015).
2 As a result, Afghan president Hamid Karzai, through a Presidential Decree in 2012, requested a major
quality review of all private higher education institutions.
3 For several examples and discussion, see Salmi (2015).



Achievements of Quality Assurance and Accreditation
In examining the challenges of quality assurance and accreditation in
developing economies, it is easy to miss the positive impacts of the process on
those countries’ higher education systems. I want to focus on several of these
impacts and suggest why they should be of interest to people involved in quality
assurance in the United States, Europe, Canada, and other developed countries.

Growing Awareness of the Importance of Quality Assurance

One of the major changes in most developing economies is the growing awareness of and pressure for quality in higher education. This is evident
in the ministries, universities, and, to some extent, the public. Some of the
private institutions in developing areas are outstanding. However, many were
established for political or moneymaking reasons. Because of poor programs,
lack of accreditation, and often fraud, many people have become disillusioned
with private higher education, and they expect the government to do something
about it. Some governments have responded, and there have been a number of
university closings, notably in South Africa, the Philippines, Pakistan, Ethiopia,
and Kenya. Salmi (2015, p. 9) presents a useful table titled “Recent Examples of
University Closures.”

With the growing awareness of the importance of high quality has come the
recognition that higher education generally is underfunded. Unfortunately,
that knowledge has not encouraged the increased financial resources so
important to quality higher education in the developing world. In Afghanistan,
for example, funding in the regular budget dropped from $521 per capita in
2008 to only $345 per capita in 2016—a decline of more than one-third.4 Similar
scenarios have occurred in many developing economies, especially in Africa.
The funding problem has been exacerbated by a decline in external funding
from the World Bank and other agencies.5
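That decline is easy to verify from the per-capita figures cited above (a quick check using only those numbers):

\[
\frac{521 - 345}{521} = \frac{176}{521} \approx 0.34
\]

a drop of roughly 34 percent, slightly more than one-third.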

4 Ministry of Higher Education data 2016. This does not include the development budget, which varied
widely over the years with only a small portion actually made available.
5 Overall external funding for higher education in Africa averaged $103 million annually from 1990 to
1994, then dropped to $30.8 million a year from 1995 to 1999. It rose to only $36.6 million between
2000 and 2004 in contrast to much higher levels of funding for primary and secondary education.
Ninety percent of all education funding in later years went to primary and secondary education. See
Hayward and Ncayiyana (2014).

Breaking Down Cultural Barriers to Accreditation

One of the perceived obstacles to accreditation in much of the world was the perception that it was not in accord with cultural norms. In both Afghanistan and Madagascar, the initial reactions were "That is not Afghan" or "That is not Malagasy." In the Afghan case it was suggested that Afghans would not criticize each other, would not say negative things about a person or an institution. Because of that, the argument went, how could Afghanistan institute effective peer review and site visits in which people were expected to be critical? That was followed by long discussions. In the end it was recognized that Afghans did make such judgments in admitting students, and they could do the same for programs and institutions. Eventually it was agreed that the process should begin. This became a new part of the Afghan academic culture—one involving much more assessment and review. It also fostered greater interaction with students. In Madagascar the opponents were largely leaders of private, for-profit institutions who said, "Let the customer decide about quality." They were quickly overruled by other leaders of private institutions who saw accreditation as a way to gain legitimacy in a competitive market—and it was. Indeed, the best of the private institutions were the first to apply for accreditation when it started, way ahead of the public institutions, which stalled.

Developing Innovative Techniques of Assessment

In the workshops leading up to peer reviews in Afghanistan, we talked specifically about the unwillingness of people to be publicly critical of or make judgments about others and the need to differentiate levels of quality or non-compliance despite this. While workshop participants agreed that people were reluctant to be critical, they thought that in this situation it would not be a problem. Following that discussion, the quality assurance process was established, and peer reviewers were trained. After the training, 10 peer reviewers were sent off to the first two institutions that were being used as pilots for the process. The 10 of them came back having completed their reviews, and almost all of the evaluations that used a five-part scale (unacceptable, poor, good, very good, excellent) were marked "excellent." We asked how that could be, and the reviewers were sheepish about it. They admitted it was the difficulty of saying anything bad about others. We realized we needed to give them some solid benchmarks to help them differentiate. We suggested using the best universities they could think of for "excellent," a new but promising institution for "good," and a fraudulent institution as "unacceptable." That seemed to help somewhat. However, as the discussion continued, the most useful description was one drawn from survey research—the idea of a ladder. At the bottom of the five-rung ladder was "unacceptable," farther up was "good," then "very good," with "excellent" on the top rung. This worked very well. When the peer reviewers went out two days later, their rankings covered a wide range of levels—rankings those who knew the institutions well found to be right on target.
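The anchored ladder lends itself to a very simple formal representation. The sketch below is purely illustrative: the rung labels follow the article, but the benchmark anchors for the middle rungs are hypothetical examples of the kind reviewers might be given, not the actual training instrument:

```python
# Illustrative sketch of a five-rung anchored rating ladder. The rung
# labels follow the article; the anchors for rungs 2 and 4 are invented
# placeholders, since the article names anchors only for 1, 3, and 5.

RATING_LADDER = {
    1: ("unacceptable", "a fraudulent institution"),
    2: ("poor", "well below minimum standards"),          # hypothetical anchor
    3: ("good", "a new but promising institution"),
    4: ("very good", "strong programs, consistent outcomes"),  # hypothetical anchor
    5: ("excellent", "the best university a reviewer can think of"),
}

def describe(rung: int) -> str:
    """Return the label and benchmark anchor for a rung on the ladder."""
    label, anchor = RATING_LADDER[rung]
    return f"{rung} = {label} (benchmark: {anchor})"

if __name__ == "__main__":
    for rung in sorted(RATING_LADDER):
        print(describe(rung))
```

The design point is that each rung carries a concrete benchmark, so reviewers compare an institution to a shared mental anchor rather than to an abstract adjective.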
Recognizing the International Importance of Quality Assurance and Accreditation

New awareness of the importance and implications of accreditation internationally helped its growth as it became clear that graduates going on for graduate studies in many parts of the world needed to be from accredited institutions to be considered for admission. In addition, the growing understanding that there were some "international standards" or expectations helped encourage many reluctant institutions to review their own standards. This was especially important in countries such as Madagascar, which as an island nation had been isolated from most of its African neighbors. This was also the case in Afghanistan, which had been isolated by war. What was remarkable to watch was how important the idea of "state of the art" or international standards and expectations became to many higher education leaders and faculty members. The Internet became a major tool in identifying what was state of the art. In many cases, as in Mauritius, people came away empowered after reviewing programs from what they saw as outstanding institutions such as Oxford, Harvard, or Berkeley and seeing the similarities with their own programs.



Flattening the Process
In the few cases where there had been some kind of quality evaluation prior
to accreditation, it had largely been run by the ministry of education in a
top-down manner in which officials from the ministry visited the institution.
This was the case in Madagascar in the post-independence period. In many
cases, those who did the evaluations were not professionals, but nonetheless
they came to wield tremendous power over the institutions. Even the best
of them tended to focus on things they could count: student/teacher ratios,
number of books in the library, number of faculty members with PhDs, etc.
These are all useful things to know, but not necessarily the keys to high
quality. When quality assurance and accreditation were implemented,
the high levels of participation within the institution, the ministry, and,
in some cases, the community helped flatten the process and make it less
hierarchical, all to the benefit of quality. This also helped institutions get
away from the notion of quality assurance as top down—in other words, wait
for the leaders to do it. Peer review in particular facilitated a new openness
and thinking about assessment. The fact that most of these institutions had
high levels of academic freedom also helped flatten the process.

Protecting the Public

As noted previously, the original efforts in quality assurance and accreditation involved voluntary internal reviews. Over time, the process
expanded and became a part of national and governmental expectations.
As the number of higher education institutions grew, so did the importance
of differentiating between those that met minimum quality standards
and those that did not. It then became clear that part of the process was to
protect a public that did not have expertise in higher education by weeding
out low-quality and fraudulent institutions. The average citizen had little
ability to distinguish the outstanding institutions from the fraudulent ones,
especially in the era of the Internet when anyone could prepare an enticing
website that promised all kinds of services with pictures of stately buildings
and other imaginative content. As more and more students found their
degrees to be of little value or even worthless, the public recognized the
need for outside experts who could accurately evaluate all higher education
institutions. I recall meetings with private education providers in both
Ethiopia and Pakistan where a private provider expressed opposition to
accreditation, saying “Let the buyer beware.” But the problem was that the
buyer did not have the knowledge to determine whether an institution was
bogus beyond the sad tales of students who after four years learned their
degrees were not recognized by employers or government. As the number of
those students grew, so did the pressure on governments to carefully review
all institutions and close the fraudulent ones. These closures and warnings
blunted the rise of private higher education institutions as moneymakers in
a number of countries including Afghanistan, Pakistan, and Ethiopia.

Fostering Greater Focus on Teaching

Ideas about teaching have changed dramatically in developing areas, partly as a result of quality assurance and accreditation. Not too long ago, the idea
that students could ask questions was shocking to some academics. After a
presentation on teaching and quality assurance in Madagascar I was asked if it
was really true that I allowed students to ask questions. I said, “Yes.” I was then
asked, “What did you do if you did not know the answer? Did that ever happen?”
I responded that it sometimes did happen. I was then asked, “What did you do?”
I reported that I would respond, “I don’t know the answer but let’s all look into
it and discuss it during the next class period.” They were surprised and asked if
that response diminished my legitimacy with the students. I answered that it did
not—it encouraged future questions and discussion.

Faculty-student interaction in developing nations has grown tremendously in recent years, fostering more innovative thinking by students and beginning
to move them away from the idea of memorizing what faculty members say
and then regurgitating it on an exam. These exchanges between students and
faculty have also been encouraged by the experiences of many young faculty
members who received master’s degrees and PhDs abroad. They expect to take
questions from students and to foster communication with them. This too is
helping bring about positive changes in the relationship between teachers and
students, all leading to greater focus on teaching and learning as a whole.

Focusing on Preparation for the Job Market

Growing concerns among the public, government officials, and parents about
employment have had a significant impact on higher education in developing
countries. While unemployment numbers in many developing economies are
unknown, we know they are high. We have seen some of the effects in student
demonstrations around the world, with the Arab Spring a major example of
the contagion that can follow. Students are worried about their futures. The
old expectation of government jobs, an expectation largely created during
the colonial period when universities were seen as the path to government
employment, is long gone. Nonetheless, the notion of a government obligation
to provide employment is still widespread. That is putting pressure on both
governments and higher education institutions to ensure there is a link between
higher education programs and employer needs.

Businesses and other employers are also putting pressure on higher education
institutions to improve the relevance of educational programs. Not long ago
50 percent of firms in Egypt identified the low level of skilled labor as a major
problem. There were 200,000 vacancies; yet, many times that number of
people in those fields were unemployed. They did not meet the qualifications
of employers, mostly because of the poor quality of education they received
(Ghafar 2016). Many of these people were in engineering but had never been in
an engineering laboratory.



Unfortunately, resolving these problems is not easy for underfunded higher education institutions. On the other hand, the consequences of unemployment and the potential for unrest and mobilization by unhappy students and graduates make it imperative for institutional leaders and governments to respond. To date there have been few positive results in terms of greater funding, more focused programs, or controls on the number of higher education students. Among those countries trying to respond is Afghanistan, which has just begun a two-year associate degree program in IT and medicine with encouraging results to date. There is a great shortage of well-trained specialists in IT, medicine, and engineering in Afghanistan, with a ratio of five professionals to one technician, the opposite of the usual ratio.

Setting Minimum Standards of Quality

The concerns of employers and students about appropriate preparation for employment are leading institutions and ministries to set minimal standards in a number of disciplines. This has been quite successful in South Africa and Madagascar. Higher education institutions in these countries see standards as important guides in their strategic planning efforts and follow them. Further, during a consultancy in Ghana, several heads of private higher education institutions came up after a discussion of standards to thank us for sharing standards they said they would put into effect right away. Indeed, several moved quickly to meet the standards, making a number of changes to implement them, including, in one case, setting up the first computer laboratory. As Saffu (2006, pp. 19–20) noted in talking about the effects of the National Accreditation Board (NAB) in Ghana:

The newer tertiary institutions that have sprung up to meet the strong demand for tertiary education are forced by the existence of NAB to be seriously mindful of quality assurance. If they want accreditation they have no choice but to toe the NAB line on quality assurance. The NAB requires them to establish Quality Assurance Units and affiliate to an existing established university for an initial period, and for purposes of mentoring.… NAB is helping to keep in check the entrepreneurs who would have had a field day, unleashing a whole stream of useless but lucrative tertiary institutions to meet the undoubted demand that exists for tertiary qualifications and certificates.

Similar examples can be seen in Tanzania, Pakistan, Afghanistan, and many other places with positive results.

Ensuring Financial Responsibility

One of the expectations of accreditation is that it will ensure institutions are financially responsible and help governments accept responsibility for public higher education funding. In terms of the former, accreditation has been important in holding institutions responsible for their budgets generally. What has not happened is the development of an awareness of the responsibilities of government to adequately fund higher education or help ensure other sources of income.

The situation in South Africa is a good example. Even in the midst of ongoing battles in which students are demanding free higher education, the ministry has not increased funding; in fact, funding has decreased since 2000 (Langa et al. 2016). The ministry has referred the problem to the institutions, which are largely helpless to raise funds other than by raising student fees—not possible in the midst of unrest. Indeed, decreased funding for much of higher education in the developing world has become the norm.

Creating New Awareness of the Ongoing Responsibility of Faculty Members for Continuous Improvement

Though it has grown slowly, there is a new awareness among faculty members in developing economies of their responsibility for continuous improvement—an understanding, especially among younger faculty members, that the "state of the art" is constantly changing. The idea of meeting "international standards" was totally rejected as impossible in 2008 when I suggested it in Pakistan at a meeting on quality assurance. Now it is a term used regularly by those working on quality assurance there. What international standards represent is not something specific, but the idea that there is a state of the art in physics, chemistry, engineering, and medicine—and yes, even in the humanities and social sciences. Both terms are now part of the regular vocabulary of quality assurance and accreditation in many parts of the developing world, especially among younger scholars.

Addressing Dropout Rates and Student Completion

Another area that has risen to the forefront in discussions about quality is the recognition that large numbers of higher education students are failing—often as many as half. Historically, these numbers have been ignored in most developing economies. When I started working in Afghanistan in 2009 there were no records of dropout or throughput. Now these rates are being calculated—but only sporadically. In South Africa, 50 to 60 percent of students drop out of higher education (eNews Channel Africa 2015). There are many reasons for this, including lack of funds, family obligations, and illness. However, the major reason is poor performance. Further, the dropout rate from 12th grade in South Africa is almost 60 percent. This bodes ill for the long-term development of the country. Sadly, it is not dissimilar to the situation in many other developing economies including Afghanistan. This is one area in which there is little good news—long-term prospects for significant change are slight given the underfunding of primary, secondary, and higher education and the general lack of priority given to education by most governments in developing economies.

Professionalizing Quality Assurance

One bright spot in quality assurance is the professionalization of quality assurance personnel, from peer reviewers to senior staff. The last decade has seen a major improvement in staff training. Much of that can be attributed to organizations such as the World Bank and the Carnegie Corporation and a number of donor countries including the United States, Germany, the United Kingdom, and several others. In addition, a number of nongovernmental organizations offer excellent training programs. This has greatly improved the quality assurance process and helped increase its legitimacy and level of public trust.

Conclusions

What conclusions do I draw from these reflections and a general overview of quality assurance and accreditation processes in developing economies? Has it become, as Valeikienè (2017, 1) suggests, "increasingly difficult to assess quality and to demonstrate the impact of external quality assurance"? I think not. Indeed, as I have suggested, I think quality assurance has become much easier to put in place and operate over the years and is much more successful now than it was a decade or two ago. It is also important to emphasize that the process itself has had a number of important effects on the ability to carry out effective quality assurance, especially in developing economies. The impact of external quality assurance organizations on the quality assurance process in developing countries has also been significant, ranging from the assistance provided by the New England Association of Schools and Colleges to accreditation agencies in Madagascar, Turkey, and Afghanistan to the funding provided by the World Bank and other international donors to assist with accreditation. Then there have been the contributions to the process by organizations such as the British Council through its quality assurance workshops and CHEA through its International Quality Group as well as many other organizations around the world.



While it is easy to criticize quality assurance and accreditation in many
developing countries and point out that there are many aspects that should be
improved, overall the positive impact of these processes on quality throughout
most of the developing world is significant. There is an amazingly high level
of agreement on the importance of quality assurance and accreditation, the
methods that should be used to carry it out appropriately, and the value of an
autonomous and open process. Even where there was resistance to the process
at the outset, as I noted for Afghanistan and Ethiopia, it is now well underway
and effective.

As I have tried to show in the preceding pages, there have been a variety of important, and in some cases unexpected, benefits resulting from the development of quality assurance and accreditation in developing economies. The process of establishing accreditation and quality assurance mechanisms has helped set new expectations for quality. The process has fostered the first external reviews ever of many institutions in places as diverse as Ghana, Pakistan, Afghanistan, and Madagascar. It has greatly helped governments and the public deal with the mushrooming growth of fraudulent and low-quality providers, many of which were set up for political or financial reasons with little or no interest in higher education outcomes. In some cases, such as in Afghanistan and South Africa, the process has helped strengthen efforts to enhance the autonomy of higher education institutions and free them from political interference. It has also helped shake some of the older, self-satisfied institutions out of their complacency about their own quality by showing them the results of comparisons with some newer universities that had moved ahead of them, as was seen in South Africa where several vaunted institutions were put on probation or lost program accreditation. The quality assurance process has served to strengthen links throughout the higher education system in several countries, including through a peer review process that has broadened faculty member knowledge of other institutions. Accreditation has served to create strong incentives for quality improvement and in the process helped make the public aware of the importance of high quality and increased pressure on governments to ensure it. In a number of cases, such as in Pakistan, it has increased public confidence in higher education.

On the other hand, there are many challenges that remain. Among them is the
cost of accreditation in most developing economies, which, while not a major
problem for the economies of countries such as South Africa, is a major problem
for those such as Afghanistan where higher education budgets are already
under serious strain. Another challenge is capacity building. While there have
been great strides in professionalizing quality assurance and accreditation,
ongoing capacity building, especially of a changing corps of peer reviewers,
remains difficult and costly, with growing resistance to demands for the regular
upgrading of skills among some faculty members. Finding enough experienced
faculty members who themselves embody the excellence expected has been
difficult in those countries with too few faculty members with PhDs. This is almost universally a problem in the developing world where as many as 90
percent of faculty members do not have PhDs. Even in South Africa, which has
a large number of higher education institutions, the accrediting agency has
occasionally had problems finding enough high-quality peer reviewers who
meet the requirements.

One area in which there has not been much success is in improving the quality
of graduate education, which in much of Africa and parts of South Asia is far
below the level required to produce the number of high-quality new master’s
degrees and PhDs needed. In general, accreditors have not focused on this
problem. Daniel Ncayiyana and I have looked at the problems of graduate
education in Sub-Saharan Africa over the years and found that it remains at
unacceptably low levels with a few exceptions. In a 2014 study we concluded,

With a few exceptions, including South Africa, Ghana, and perhaps Kenya and Uganda, graduate programs have not improved markedly over the last five years. What we saw as the promise of regional graduate education has, to some extent, made contributions though that has been limited primarily to Southern Africa and funded by South Africa without cooperative government support from outside or much donor assistance. We do see an important future role for regional graduate education, and we hope the Pan-African University turns out to be part of that success (Hayward and Ncayiyana 2014, p. 209).

An unintended negative consequence of accreditation in some countries (Pakistan is an excellent example) is a move toward the homogenization of academic programs with national committees involved in setting the curriculum for each discipline. This has a tendency to lower program quality, partly because the committees tend to be made up of older distinguished faculty members who are nonetheless often out of date. It also discourages younger faculty members, often fresh from distinguished graduate programs, from being innovative. There is now a push in Afghanistan to do this, which fortunately for the moment is being resisted by some faculty members and the Ministry of Higher Education.

Overall, there are lessons to be learned from a review of accreditation and quality assurance in developing economies. One is the advantage of institutional
accreditation over program accreditation in most cases. Institutional
accreditation reduces cost, allowing one to look selectively at some programs
yet saving the cost in time and money of reviewing all programs. The number
of programs in most countries is 60 to 70, and setting up an accreditation
structure for all of them is costly. Even South Africa found it too costly and
demanding of staff and peer reviewers to do both institutional and program
accreditation universally. Only Nigeria has carried out both institutional and
program accreditation; however, the results were of limited utility because of
the methods used.



What remains striking overall, despite the differences noted, is the similarity among quality assurance processes in most developing economies. They are based on external reviews of institutions (and sometimes programs) using peer reviewers and quality standards with some kind of government-recognized status for the accreditor. Most quality assurance agencies are part of a ministry of higher education or associated with it. A few are autonomous. For example, the agency in South Africa recently became autonomous from the Ministry of Higher Education as part of the ministry's initial overall plan. Others have managed, for the most part, to maintain a high degree of autonomy. In Afghanistan, for example, although the accreditation agency is part of the Ministry of Higher Education, its decisions are not reviewable or influenced by the minister or any other political appointee.

So, has quality assurance and accreditation become increasingly difficult to assess? Again, I think not. The wide range of goals and the different areas emphasized from country to country reflect, in my mind, the changing nature of learning, knowledge, and national needs. Accreditation has changed over the years to reflect the growth and development of a rapidly changing national and international environment. If we look at the changing nature of higher education itself we see the need for changes resulting from the rise of e-learning, distance education, study abroad (which affects 10 percent of U.S. students and many more in the European Union), and transnational education (i.e., foreign university degrees granted by local institutions). Quality assurance and accreditation processes have to adapt to these changes—thus the changing needs and foci of accreditors. These are the changes I would expect from healthy processes of quality assurance and improvement over time.

What does this have to say to those interested in quality assurance in the United States, Canada, Europe, and other developed nations? Partly it demonstrates the importance of the accreditation process to all of us. We live in an increasingly mobile world, and to the extent that all institutions are looking to "state of the art" or "international standards" as the benchmarks of quality assurance, the easier such mobility will become. Also of importance to those of us in the more developed areas of the world is the impact of the assistance we have given to these processes abroad through our universities, associations, and donors, including USAID. It also highlights something we often forget: the critical importance of academic freedom to the quality assurance process. Finally, we can learn from the creativity of those involved in dealing with local cultural hurdles to quality assurance. We all have our own hurdles, and with sensitivity and willingness to search for ways to improve, we can ensure the process is fair, open, and equitable.

Overall, I come away impressed by the growth and success of quality assurance and accreditation in most of the developing economies I have examined. What is striking to me is how rapidly the process has grown and the impact it has made in a wide variety of ways. I expect this growth to continue and with it a broader national understanding of the benefits of accreditation and quality assurance to the public, national leaders, and national development.

References

eNews Channel Africa. 2015. SA Student Dropout Rate High. May 19. Accessed January 3, 2018: www.enca.com/south-africa/student-dropout-rate-high.

Ghafar, A. A. 2016. Educated but Unemployed: The Challenge Facing Egypt's Youth. Washington, DC: Brookings Institution. Accessed January 3, 2018: www.brookings.edu/wp-content/uploads/2016/07/en_youth_in_egypt-2.pdf.

Greenberg, M. 2014. It's Time for a New Definition of Accreditation. Chronicle of Higher Education, January 27. Accessed January 3, 2018: www.chronicle.com/article/Its-Time-for-a-New-Definition/144207.

Hayward, F. M. 2001. Glossary of Key Terms in Quality Assurance and Accreditation. Council for Higher Education Accreditation (CHEA). Accessed September 18, 2012: www.chea.org/international/inter_glossary01.html.

———. 2006. Quality Assurance and Accreditation of Higher Education in Africa. Paper presented at the Conference on Higher Education Reform in Francophone Africa: Understanding the Keys of Success, June 13–15, Ouagadougou, Burkina Faso. Accessed January 3, 2018: http://siteresources.worldbank.org/EDUCATION/resources/278200-1121703274255/1439264-1137083592502/QA_accreditation_HE_Africa.pdf.

Hayward, F. M., and D. J. Ncayiyana. 2014. Confronting the Challenges of Graduate Education in Sub-Saharan Africa and Prospects for the Future. International Journal of African Higher Education 1 (1): 173–216. Accessed January 3, 2018: https://ejournals.bc.edu/ojs/index.php/ijahe/article/view/5647/4979.

Langa, P., G. Wangenge-Ouma, J. Jungblut, and N. Cloete. 2016. South Africa and the Illusion of Free Higher Education. University World News, no. 402, February 26. Accessed January 3, 2018: www.universityworldnews.com/article.php?story=20160223145336908.

Lenn, M. P. 2004. Quality Assurance and Accreditation in Higher Education in East Asia and the Pacific. World Bank Paper no. 2004-6, August. Accessed January 3, 2018: http://documents.worldbank.org/curated/en/532661468771858367/pdf/301460Strength1r0official0use0only1.pdf.

Saffu, Y. 2006. Case Study on Ghana. Unpublished document prepared for the World Bank.

Salmi, J. 2015. Is Big Brother Watching You? The Evolving Role of the State in Regulating and Conducting Quality Assurance. CIQP Publication Series. Washington, DC: Council for Higher Education Accreditation. Accessed January 3, 2018: www.chea.org/userfiles/uploads/Salmi_Book.pdf.

Valeikienè, A. 2017. The Politics of Quality Assurance in Higher Education. University World News, no. 483, November 17. Accessed January 3, 2018: www.universityworldnews.com/article.php?story=2017111610182154.

Author Biography

Fred M. Hayward, PhD, is a specialist on higher education with more than 25 years of experience as an educator, scholar, senior administrator, and higher education consultant. He has a PhD and master's degree from Princeton University and a BA from the University of California, Riverside. He has taught at the University of Ghana, Fourah Bay College in Sierra Leone, and the University of Wisconsin-Madison, where he was professor of political science, department chair, and dean of international programs. He was executive vice president of the Council on Higher Education Accreditation and senior associate for the American Council on Education for more than 10 years. He has been a higher education consultant for the World Bank, Carnegie Corporation, Ford Foundation, Academy for Educational Development (AED), USAID, and several universities and ministries of education, focusing on higher education change, governance, strategic planning, and accreditation. He has written extensively on development issues and higher education with more than 60 articles and five books, including Transformation of Higher Education in Afghanistan: Success Amidst Ongoing Struggles (Society for College and University Planning 2015, https://www.scup.org/resource/transforming-higher-education-in-afghanistan-success-amidst-ongoing-struggles/).



Planning Story

Using Big Data


How Moneyball and an Ardent Baseball
Fan Shaped Successful Metrics-Based
University Planning
by Roy Mathew, PhD, Elsa Bonilla-Martin, PhD,
Daniel Santana, and Erick Gonzalez
Over the last three decades, the University of Texas
at El Paso has refined its planning system and
integrated metrics within a comprehensive planning
framework—to produce dramatic outcomes.
The University of Texas at El Paso (UTEP) achieved a dramatic transformation
over the last 30 years, going from a relatively obscure regional institution to a nationally recognized one. UTEP is currently known as the top research
institution serving low-income students; a top ten institution for social mobility,
moving students who come from families in the bottom fifth to the top fifth
of income distribution; a top ten producer of Hispanic graduates; a top ten
institution of origin for Hispanic doctoral graduates; and a Carnegie Research
1 Institution. Those accomplishments have generated national acclaim for the
institution and its president, Diana Natalicio, who was listed as one of the top 50
leaders in the world in 2017 by Fortune. The accomplishments, we believe, are the
result of an enhanced planning approach at the university during the past three
decades, which is highlighted by a particularly important shift to using metrics.

This article first appeared in Planning for Higher Education, Issue V48N1, October–December 2019



2 Takeaways … to Ensure Effective Planning Efforts

1. The key to institutional effectiveness is not found in using the newest tools or developing more advanced methods of analyzing data.
2. Understanding the comprehensive planning system and using metrics in a structured way is essential to effective planning.

Traditional Planning vs. Metrics-Based Planning

During Natalicio’s first decade in office, the university Surprisingly, despite the successful investments in
established a clear mission of serving the El Paso effective tactical systems, the institution did not see a
region by emphasizing the importance of ensuring dramatic increase in key outcomes (degrees awarded,
access to students from the region who are generally for example). As we considered why this was the case,
underrepresented in higher education (e.g., low income, unexpected inspiration came from an unlikely place.
Hispanic), and achieving institutional excellence (i.e., Moneyball, the book by Michael Lewis, provided the
student degree completion, social mobility of graduates, impetus for a refinement in UTEP’s planning approach;
and university research productivity). During the early the book described a professional baseball team’s use
years, the institution followed a traditional approach to of analytics to develop a strategic advantage against
planning. For example, in 1991, UTEP established the El more affluent teams. The Oakland A’s, the baseball
Paso Collaborative for Academic Excellence (comprised team referenced in the book, did not have the sufficient
of UTEP, El Paso Community College, regional K–12 resources to utilize established strategies (i.e., hire
school districts, and stakeholders) to improve the K–16 the prolific home run hitters and pitchers) to compete
pipeline. Additionally, during the 1990s, UTEP leveraged against more prosperous teams. Instead, the Oakland

several major grants from the National Institutes of baseball team developed a strategic advantage through

Health and the National Science Foundation to develop analytics: It developed a systems understanding of
winning games and identified undervalued players that
the institution’s research infrastructure, advance
optimized the process. Metrics (rather than subjective
research activity, train future researchers, and increase
expert opinion) provided an efficient way to understand
the number of underrepresented minorities in STEM
the system and to identify undervalued players.
programs. By the early 2000s, UTEP began to be
recognized for its tactical systems. The Oakland A’s approach appealed to Natalicio, an
ardent baseball fan. In 2005, she helped the Center
The El Paso Collaborative gained attention as outcomes
for Institutional Evaluation, Research and Planning
in the K–12 educational system improved dramatically;
(CIERP) secure a grant from the Lumina Foundation for
the region, despite being one of the poorest in Texas,
Education to develop the analytics infrastructure, with
had the highest percentage of high school graduates
the specific focus of developing a systems understanding
completing the college curriculum, and was among the
of student success. The insights generated by analytics
highest in the enrollment of students in higher education
helped to concentrate interventions at critical points
immediately after graduation. In 2003, renowned
of students’ transition into higher education (e.g., first
educational author George Kuh and his colleagues
term). In addition, the understandings allowed us to
identified UTEP as one of the 20 exemplary institutions
improve administrative activities (e.g., term-to-term
that had created the conditions for student success.
retention, proactive degree audit, and advising) and
In 2004, the Washington Advisory Group identified
make curricular adjustments that would allow for new
UTEP as one of the Texas public institutions that had the
pathways to degree completion. Those efforts had a
potential to become a national research university.
dramatic impact on outcomes.

Figure 1 Evidence of Innovation: S Curve – Degrees Awarded Trend
[Line chart of degrees awarded by fiscal year, comparing the actual trend against the expected growth based on the linear trend from FY 1994 to FY 2000. Labeled values include 1,695 (baseline), roughly 1,905 (expected trend), and 3,373 (actual trend). Legend: Historical Trend, Actual Trend.]
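The "expected growth" baseline in figure 1 is straightforward to reproduce: fit a linear trend to the FY 1994–FY 2000 degree counts and extend it forward. The Python sketch below shows the idea; the interior baseline values are invented for illustration (only the 1,695, 1,905, and 3,373 labels come from the chart).

```python
import numpy as np

# Hypothetical fiscal-year degree counts for the baseline window; only the
# endpoints labeled in figure 1 (1,695; 1,905; 3,373) come from the chart.
baseline_years = np.arange(1994, 2001)          # FY 1994 through FY 2000
baseline_degrees = np.array([1480, 1515, 1550, 1585, 1620, 1660, 1695])

# Fit the linear trend over the baseline window, as the figure's
# "expected growth" line does, then project it forward.
slope, intercept = np.polyfit(baseline_years, baseline_degrees, 1)

def expected_degrees(year: int) -> float:
    """Degrees awarded implied by the FY 1994-2000 linear trend."""
    return slope * year + intercept

actual_fy2017 = 3373                            # actual-trend label in figure 1
gap = actual_fy2017 - expected_degrees(2017)
print(f"FY 2017: actual {actual_fy2017}, trend {expected_degrees(2017):,.0f}, "
      f"gain beyond trend {gap:,.0f}")
```

The distance between the two lines, not the raw count, is what the article treats as evidence of innovation.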

Soon after, we began to incorporate analytics into other institutional planning efforts. For example, by 2007, the State of Texas identified UTEP as one of the emerging research institutions in Texas. In response, UTEP began the planning process to become a Tier 1 research institution. The initial internal skepticism about the institution's ability to achieve the research results was overcome using analytics. We exercised our understanding of the system to develop scenario models based on growth in enrollment, faculty, research productivity of faculty, and resources.
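A scenario model of the kind described here can be as simple as compounding a few growth assumptions. The following sketch is illustrative only; the variable names and numbers are our assumptions, not UTEP's actual model.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Illustrative inputs for a research-growth scenario (all hypothetical)."""
    faculty: int                 # research-active faculty headcount
    faculty_growth: float        # annual headcount growth rate
    per_faculty: float           # research expenditures per faculty, $M/year
    productivity_growth: float   # annual growth in per-faculty productivity

def project_research(s: Scenario, years: int) -> float:
    """Compound headcount and per-faculty productivity to estimate
    total annual research expenditures ($M) after `years` years."""
    faculty = s.faculty * (1 + s.faculty_growth) ** years
    per_faculty = s.per_faculty * (1 + s.productivity_growth) ** years
    return faculty * per_faculty

# Compare a conservative and an aggressive hiring scenario (made-up numbers).
base = Scenario(600, 0.02, 0.12, 0.03)
aggressive = Scenario(600, 0.05, 0.12, 0.04)
for name, s in (("base", base), ("aggressive", aggressive)):
    print(f"{name}: ~${project_research(s, 10):.1f}M in year 10")
```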
The 2010 Strategic Plan for Research laid out UTEP's long-range plan to achieve top-tier status based on those analytics models. By 2018, metrics and analytics had become a prominent feature of UTEP's administrative culture, and the Moneyball approach was recognized as having an important role in advancing outcomes. However, despite the recognition realized by the university, many administrators on campus were not clear on the specific details associated with implementing the metrics-based planning approach.

UTEP administrators have access to thousands of metrics and hundreds of tools, which they use to inform daily action. At face value, employing big data, the analytics infrastructure, and advanced statistical analyses may seem to be the key to producing exceptional outcomes. The extension of that thinking often translated into a desire to invest in the newest tools and the development of more advanced methods of analyzing data. What we realized over the past decade, though, was that our success largely resulted from two primary factors: a structured understanding of the institutional system (i.e., a comprehensive planning perspective) and our ability to use a limited set of data in a purposeful way (as signals to take diffused action to improve key outcomes).



Is a Comprehensive Planning Framework Still Useful for Planners?

The generally accepted definition of planning is that it refers to deliberations that are undertaken to advance the goals of an organization (or its entities). Edward Banfield, in 1955, advanced the generic steps (generally described as the rational planning model) that are associated with planning. In the late 1950s, Martin Meyerson and Melville Branch Jr. articulated the domains of planning in organizations (generally described as the comprehensive planning framework). Many theories (of how planning works) and procedures for planning have emerged over the last century, but a hurdle for us in planning was in translating those theoretical concepts into insights that were useful for planning practice.

In fact, efforts by scholars to establish a connection between formal planning (e.g., making plans) and positive outcomes have been problematic, and that has led to cynicism about the value of the process. Yet our review of a century of associated literature revealed that planning theory and practice have not disappeared. Our research confirmed that conceptual elements of organizational planning, first articulated nearly 60 years ago, have been affirmed and refined. The five elements of comprehensive planning, in contemporary terms, are: (1) plan making: development of mission, vision, goals, and key performance indicators; (2) tactical analysis: management of systems and processes to improve operations; (3) strategic analysis: development and adjustment of strategies based on comparative advantages relative to competitors or peers; (4) policy analysis: management of environmental factors that impact an organization's ability to advance its goals; and (5) evaluation: assessment of progress.

Creating the Data Infrastructure for Planning

The field of big data has evolved over time, and our experience with data has followed a similar path. The early focus of big data was on the volume, variety, and the dynamic rate at which data are available. By the early 2000s, UTEP had large sets of data, but they resided in databases across offices and had differing levels of quality and accuracy. In 2004, we began the process of creating a comprehensive database that was built for reporting and institutional research. We formalized the data governance structure, secured large storage capacities, and developed a multilayered process to review, validate, and correct data. Over time, the focus of the big data field evolved to managing and analyzing data, and we had a similar shift at UTEP. We developed standardized reports and tools to track progress on institutional outcomes, and also created tools that allowed for efficient statistical analysis.

The next step in big data evolution was advanced analytics, which focuses on using data to explain changes with a systems perspective. We used the approach to understand student success, and created new tools that provided actionable insights to units across the campus. Today, Big Data Analytics (BDA) is used to describe multiple concepts that emphasize infrastructure (e.g., warehousing, reporting, processing), analyses (e.g., statistical analysis, data mining, network analysis, systems analysis), and use (decision support, continuous improvement, artificial intelligence, planning). Our approach has also evolved. We are currently in the process of developing predictive analytics tools to support planning and operations. A decade after we began our efforts, CIERP has developed an advanced analytics infrastructure that includes more than 6,000 metrics and 300 analytics tools to support institutional planning. To ensure efficient tracking of that large volume of information, we also formalized a hierarchy of metrics.
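As one concrete illustration of a "review, validate, and correct" layer, the sketch below flags records that break simple rules before they enter a reporting database. The field names and rules are hypothetical, not UTEP's actual pipeline.

```python
# Minimal sketch of one validation layer for enrollment records.
# Each rule maps a field to a predicate the value must satisfy.
RULES = {
    "credit_hours": lambda v: 0 <= v <= 21,
    "residency": lambda v: v in {"in_state", "out_of_state", "international"},
}

def validate(record: dict) -> list:
    """Return the list of rule violations for one student record."""
    return [field for field, ok in RULES.items()
            if field in record and not ok(record[field])]

records = [
    {"id": 1, "credit_hours": 15, "residency": "in_state"},
    {"id": 2, "credit_hours": 35, "residency": "unknown"},   # fails both rules
]
for r in records:
    problems = validate(r)
    if problems:
        print(f"record {r['id']} flagged for review: {problems}")
```

In practice, flagged records would be routed back to the owning office for correction rather than silently dropped, which is what makes the process multilayered.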

Supporting Planning with Metrics

Tracking and using 6,000 metrics is a daunting task. Over the last 15 years, we have classified data into four different categories to support planning: key performance metrics, contextual metrics, control metrics, and policy metrics.

Figure 2 Hierarchy of Data Metrics Used for Planning
[Diagram of the four categories: key performance metrics at the top, followed by policy metrics, contextual metrics, and control metrics.]

Key performance metrics are a very limited set of measures that show progress on organizational goals. Total enrollment, degrees awarded, research expenditures, and total revenue are examples of key performance metrics.

Contextual metrics provide a framework to understand changes associated with key performance indicators. There are four different types of contextual metrics. Component metrics describe the elements that make up the key performance indicators. Diagnostic metrics explain why change has occurred in each of the components. Leading metrics are another group of contextual indicators that provide early warnings about future change. Intermediate outcome metrics provide indicators of progress related to specific initiatives designed to have an indirect impact on key performance indicators.

Control metrics are data that are used to assess the impact of tactical and strategic action. Specifically, control metrics allow for comparison of outcomes between initiatives, groups, or organizations, and they are used to assess productivity (for assessment of tactics) and relative performance (for assessment of strategy).

Policy metrics are public measures that are generally of interest to policy makers and stakeholders, including federal and state agencies. Policy metrics can be an existing metric or a new measure created by an external agency.
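One lightweight way to make such a classification operational is a registry that tags each metric with its category, so dashboards and reviews can filter down to the small key performance set. The sketch below is a minimal illustration; the metric names are hypothetical.

```python
from enum import Enum

class MetricCategory(Enum):
    KEY_PERFORMANCE = "key performance"  # progress on organizational goals
    CONTEXTUAL = "contextual"            # component, diagnostic, leading, intermediate outcome
    CONTROL = "control"                  # impact of tactical and strategic action
    POLICY = "policy"                    # measures watched by external agencies

# Hypothetical registry mapping a few metrics to their categories.
METRIC_REGISTRY = {
    "degrees_awarded": MetricCategory.KEY_PERFORMANCE,
    "research_expenditures": MetricCategory.KEY_PERFORMANCE,
    "first_term_gpa": MetricCategory.CONTEXTUAL,          # leading indicator
    "gateway_course_pass_rate": MetricCategory.CONTEXTUAL,
    "advising_pilot_retention": MetricCategory.CONTROL,
    "state_accountability_grad_rate": MetricCategory.POLICY,
}

def metrics_in(category: MetricCategory) -> list:
    """All registered metrics belonging to one category."""
    return [m for m, c in METRIC_REGISTRY.items() if c == category]

print(metrics_in(MetricCategory.KEY_PERFORMANCE))
```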
How We Used Metrics

In the following sections, we briefly describe each element of comprehensive planning, identifying how we used metrics to assess and improve planning.

Plan making is a periodic activity that allows an organization to reaffirm and align its mission, values, and goals, which is a critical factor in achieving the desired outcomes. The process of making plans requires assessing the trajectory of the organization, determining conditions that are necessary to sustain progress, and identifying further adjustments that are needed to achieve specific outcomes. There are specific types of analysis and data that are essential to the plan-making process.

We used key performance metrics to assess the progress of goals; that analysis enabled us to estimate future outcomes under historic conditions. We also developed outcome scenarios, which explored changes in component metrics and other contextual metrics in the recent past, generally within five to ten years. Another related analysis would be a focus on assessing the relative contribution of subunits (e.g., colleges and departments within an institution). For that type of analysis, we considered changes in component metrics, including intermediate outcome metrics and diagnostic metrics. That analysis helped us to identify emergent strategic advantages and organizational inefficiencies. Once those internal assessments were completed, we focused on external comparative analysis. Key performance metrics and control metrics were used to assess organizational trends (relative to peers or competitors) and to identify the level of outcomes that were needed to keep pace or close gaps. That analysis allowed us to determine reasonable targets based on institutional mission, resources, comparative advantages, emergent changes, and expected outcomes. The final plan document built on those internal and external analyses and provided a clarification of the organization's existing mission, goals, vision, and strategies that guided action during the next planning period.



The term “systems” in tactical (systems) analysis refers to the set of
elements that interactively produces change in institutional outcomes.
All organizations are influenced by the external environment and,
through planning, the organization manages the emergent changes.
The effectiveness of a system (i.e., its ability to achieve goals) depends
on how well the parts work together to produce outcomes. However,
we recognized that optimizing each subunit (e.g., college) would not
necessarily produce the ideal organizational outcomes. As such, we
evaluated the performance of a system (e.g., the university) by assessing
how it operated within the larger system (e.g., institutions in Texas) and
not based on the performance of each subsystem.

In terms of analysis, we began by estimating the structural change (i.e., average change) in key performance indicators of peer institutions in our context (e.g., emerging research institutions in Texas) and then assessed our performance relative to that structural rate. The purpose of that analysis was to determine whether the institution realized structural gains and maintained middle-term targets. Any deficiency in an organization's change relative to peer organizations provides a signal to adjust tactical planning.
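In code, that tactical signal reduces to comparing our year-over-year change with the peer average. A minimal sketch follows, with made-up growth rates and peer names.

```python
import statistics

def structural_change(peer_changes: dict) -> float:
    """Average year-over-year change in a KPI across peer institutions,
    i.e., the 'structural' gain realized across the sector."""
    return statistics.mean(peer_changes.values())

def tactical_signal(own_change: float, peer_changes: dict,
                    tolerance: float = 0.0) -> str:
    """Flag when our KPI growth lags the peer structural rate."""
    benchmark = structural_change(peer_changes)
    if own_change + tolerance < benchmark:
        return f"adjust tactics: {own_change:+.1%} vs. structural {benchmark:+.1%}"
    return f"on pace: {own_change:+.1%} vs. structural {benchmark:+.1%}"

# Hypothetical year-over-year degree-award growth for emerging research peers.
peers = {"peer_a": 0.021, "peer_b": 0.034, "peer_c": 0.027}
print(tactical_signal(own_change=0.018, peer_changes=peers))
```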

Strategic analysis, the deliberate set of activities that are designed to move organizations to an ideal position relative to their competitors, is one of the most common aspects of planning practice. We knew that organizations could advance outcomes through operational improvements, maturation effects, and changes in the external environment (e.g., changes in funding, new technology, innovation). However, we also recognized that those structural gains would not be sufficient to stay ahead of peers, and organizations would need to harness comparative advantages to maintain strategic advantage in the long term.

The data we used for strategic analysis and systems analysis were
similar. The primary difference was that strategic analysis focused on
assessing progress beyond structural gains (or gains expected across
all institutions). We used this analysis to generate early signals about
the need for strategic adjustments to maintain long-term gains relative
to peers. For example, we analyzed what adjustments were needed in
research productivity to move to the top quartile of baseline peers and
the top half of aspirational peers. We also recognized that innovation was
a likely source of strategic advantage. Thus, a critical focus of analysis
was on identifying emergent innovation at the subunit level (college and
departments) that could be scaled to have an impact at the institutional
level. We relied on intermediate outcome metrics and other leading
metrics to identify early signals about emergent innovation in areas such
as retention, degree completion, and research productivity.
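The peer-quartile targets mentioned above can be read directly off peer distributions. A small sketch, with hypothetical per-faculty research productivity values:

```python
import numpy as np

def targets_beyond_structural(own_value: float,
                              baseline_peers: list,
                              aspirational_peers: list) -> dict:
    """Targets implied by 'top quartile of baseline peers' and
    'top half of aspirational peers' for one KPI; peer values here
    are hypothetical research expenditures per faculty ($M)."""
    baseline_q3 = np.percentile(baseline_peers, 75)
    return {
        "baseline_top_quartile": baseline_q3,
        "aspirational_median": np.percentile(aspirational_peers, 50),
        "gap_to_baseline_q3": baseline_q3 - own_value,
    }

baseline = [0.09, 0.11, 0.12, 0.14, 0.16, 0.18]     # made-up peer values
aspirational = [0.20, 0.24, 0.27, 0.31, 0.35]
print(targets_beyond_structural(0.13, baseline, aspirational))
```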

Policy analysis focuses on monitoring and managing emergent changes in the environment. Environmental scanning is a common technique used to identify best practices to improve operations. Those scanning activities are well documented in the literature. The missing element in the literature is the use of environmental scanning to identify emergent changes in the policy environment. That type of analysis focuses on assessing and managing policies.

Our efforts in that area included analyses of actions (e.g., rule changes) by administrative agencies, assessment of proposed metrics (e.g., accountability measures), and analysis of tacit and explicit policy positions (e.g., reports on rankings and institutional effectiveness) generated by external groups and agencies. Our primary focus behind those analyses was to determine how each proposed or emergent policy would affect institutional goals and outcomes. The insights that emerged from those analyses were used by our senior administrators to shape policy discussions and mitigate impact.

What Worked
• Identifying key performance indicators and emphasizing interventions that had a marked impact on outcomes.
• Using tools that provided actionable insights for emergent issues (e.g., term-to-term retention instruments that identify each student who has not reenrolled).
• Developing a better understanding of the institution's system and selecting incremental actions that would improve key outcomes.

What Didn't
• Providing too much data, which led to actions that did not affect key outcomes.
• Employing generic analytics.
• Implementing best practice solutions without considering specific needs of the local context.

Evaluation refers to the assessment of outcomes to ensure that adjustments are made to advance the goals of an organization. We undertook three levels of assessment. The primary level of assessment focused on ensuring continuous improvement. The second level of assessment focused on ensuring that the organization was managing external and internal change. The third level of assessment focused on ensuring that the organization was achieving strategic progress.

The first level of analysis was associated with continuous improvement and took place at the end of each operational year to assess the change in key performance metrics. The second level of analysis (tactical analysis) took place during the operational year and at the end of the operational year. During the operational year, we assessed changes in leading metrics, which provided insights about operational adjustments needed to achieve expected annual outcomes. At the end of the operational year, we assessed changes in key performance indicators relative to our expected outcomes, and explained deviations using component metrics and diagnostic metrics. The end-of-year operational analysis also considered positive deviations relative to internal and external peers, which yielded insights about emergent innovations. Finally, we assessed structural gains using key performance indicators to determine if adjustments needed to be made at the unit (or division) level and ensured that the organization was maintaining the long-term trajectory to preserve a competitive advantage relative to peers.
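The end-of-year step of explaining KPI deviations with component metrics can be expressed as a simple decomposition. The sketch below uses hypothetical components of degrees awarded; the names and numbers are illustrative only.

```python
def explain_deviation(expected: dict, actual: dict) -> dict:
    """Attribute a KPI's deviation from the annual expectation to its
    component metrics (component names are illustrative)."""
    return {k: actual[k] - expected[k] for k in expected}

# Degrees awarded decomposed into hypothetical components.
expected = {"first_time_grads": 1900, "transfer_grads": 800, "grad_degrees": 600}
actual = {"first_time_grads": 1985, "transfer_grads": 760, "grad_degrees": 640}

deviations = explain_deviation(expected, actual)
total = sum(deviations.values())
# Large positive component deviations may signal emergent innovation worth scaling;
# negative ones point to where diagnostic metrics should be examined next.
print(f"KPI deviation {total:+}", deviations)
```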



Conclusion

Our analysis of institutional transformation at UTEP over the last 30 years revealed that comprehensive planning had an important role in producing extraordinary success. Moneyball inspired us to use metrics to improve the effectiveness of planning. We established a hierarchy of metrics and used data in a structured way to improve efficiency. It's our contention that conditions for metrics-based planning exist in all higher education institutions, because all institutions undertake comprehensive planning and have access to large volumes of data. We also believe that metrics-based planning is the next step in the evolution of comprehensive planning, one that is not replaced by big data and analytics but enhanced by them.

References

Banfield, Edward C. "Supplement: Notes on a Conceptual Scheme." Politics, Planning, and the Public Interest. Martin Meyerson and Edward C. Banfield, eds. Glencoe, IL: Free Press, 1955, 303–329.

Bowen, William G., and Derek Bok. The Shape of the River: Long-Term Consequences of Considering Race in College and University Admissions. Princeton: Princeton University Press, 1998.

Branch Jr., Melville C. "Comprehensive Planning: A New Field of Study." Journal of the American Institute of Planners 25, no. 3 (1959): 115–120.

The Equality of Opportunity Project. Accessed August 14, 2019. www.equality-of-opportunity.org/education.

Lewis, Michael. Moneyball: The Art of Winning an Unfair Game. New York: W.W. Norton, 2003.

Meyerson, Martin. "Building the Middle-Range Bridge for Comprehensive Planning." Journal of the American Institute of Planners 22, no. 2 (1956): 58–64.

Author Biographies

Elsa Bonilla-Martin, PhD, is a research associate at the Center for Metrics Based Planning at the University of Texas at El Paso. Prior to her appointment, she served as the team lead for the Research and Communications Group at UTEP's Center for Institutional Evaluation, Research, and Planning. Bonilla-Martin earned a doctorate in rhetoric, and her research interest centers on the impact of planning documents and literature on institutional effectiveness.

Erick Gonzalez is a graduate research assistant at the Center for Metrics Based Planning at the University of Texas at El Paso. He has earned an undergraduate degree in accounting and is a graduate student in education. Gonzalez is currently studying change in outcomes for 3,000 higher education institutions in the United States.

Roy Mathew, PhD, is associate vice president for planning at the University of Texas at El Paso. His current academic research focuses on refining and articulating the distinctive features of organizational planning, and explaining the usefulness of metrics-based planning in advancing outcomes in higher education and other complex contexts. Mathew earned a doctorate in public policy analysis and planning from the University of Illinois at Chicago.

Daniel Santana is a doctoral research assistant at the Center for Metrics Based Planning at the University of Texas at El Paso. He is a doctoral candidate in history and has an extensive background in researching primary source materials. Santana is completing a survey of literature related to planning in higher education.

Engage with the Authors

To comment on this article or share your own observations, email rmathew@utep.edu, emartin3@utep.edu, dsantana2@miners.utep.edu, or ejgonzalez11@miners.utep.edu.

FEATURE ARTICLE

The Value of Higher Education Academic Makerspaces for Accreditation and Beyond
by Vincent Wilczynski, PhD, Aubrey Wigner, PhD, Micah Lande, PhD, and Shawn Jordan, PhD
Institutions of higher education are incorporating makerspaces and maker skills on their campuses in support of institutional goals and accreditation requirements.

Higher Education Academic Makerspaces

University and college campuses are constantly evolving, adding new facilities,
resources, and programs to best serve students, faculty, and staff. Over the last
decade many institutions have added academic makerspaces to their campuses,
a development that allows individuals from across the university to come
together to collaborate, design, fabricate, and learn in shared spaces. First popular
in engineering departments, higher education academic makerspaces have now expanded to support multidisciplinary learning across all aspects of the university.

The evolution of higher education academic makerspaces to serve the entire university community is just one illustration of their ability to support a broad
spectrum of institutional goals. Given the increased emphasis on documenting
outcomes achievement and continuous improvement processes by regional and
programmatic accrediting organizations, institutions are also finding value in
the accreditation benefits associated with these spaces.

We use the term “academic makerspace” to describe the facility, staff, resources,
and associated community that support creating, learning, and fabricating in an
academic setting. Recognizing that elementary and high schools, as well as other

This article first appeared in Planning for Higher Education, Issue V46N1, October–December 2017



education-based programs, house makerspaces, we use the
term “higher education academic makerspace” for those
spaces that are located on college and university campuses
and generally accessible to the broader university
community. Unlike a lab, which is often dedicated to a
single activity, open only to specific students, or tied to a
particular course, makerspaces are used for curricular,
extracurricular, and personal activities (Ali et al. 2016;
Wilczynski, Zinter, and Wilen 2016).

The size of higher education academic makerspaces ranges from 100 to over 1,000 active members (noting that not all members are in the space at any one time) in spaces spanning a few hundred to several thousand square feet. In addition to the availability of design and fabrication tools such as 3-D printers, laser cutters, mills, sewing machines, and soldering irons, higher education academic makerspaces also provide training in the use of these traditional and digital tools. Often higher education academic makerspaces are open to all members of the university, thereby serving an important role as a common location for individuals with diverse backgrounds to meet and work together. It is estimated that there are more than 150 makerspaces on university campuses, with the number growing each year (Barrett et al. 2015; Bryne and Davidson 2015).

A distinction of higher education academic makerspaces is found in the culture and community that form within. The underlying culture of makerspaces, both in academic and nonacademic environments, is one of collaboration, sharing, and additive innovation (Jordan and Lande 2016). Sharing one's work with others creates an open community and collaborative culture in which members are excited to assist one another and willingly exchange design knowledge. The diversity of users creates opportunities for members to work with and learn from others who have unique experiences and skills. The existence of these spaces and focused programs to integrate members has led to many unique collaborations among colleagues who may not have otherwise had the opportunity to work together, including the development of multidisciplinary courses (Ali et al. 2016). The open nature of these spaces promotes an intentional collision of random ideas, a design structure that has benefited many industries (Gertner 2012).

The appearance of higher education academic makerspaces on campus resulted from the traditions, contributions, and developments of many disciplines. For example, open and collaborative learning studios have been fundamental to art, design, and architecture programs. Similarly, hands-on design and open-ended problem solving have been key aspects of accreditation-driven engineering education initiatives. The open and collaborative nature of specific engineering teaching labs has also contributed. In an exploration of the future of engineering education, Smith et al. (2005) identified project-based learning as a growing pedagogical approach to the teaching of future engineers. Through the (renewed) emphasis on hands-on, project-based learning, collaborative spaces have emerged to help transform undergraduate engineering education.

Influenced by these factors, higher education academic makerspaces developed from the growing need for widely accessible technology and the increasing availability (and affordability) of design tools, including hardware and software. Given this context, some of the first higher education academic makerspaces were housed in schools of engineering. In the past several years, many university libraries have launched makerspaces with design and fabrication tools for patrons to use while relying on in-house, on-campus, and digital resources for training, facilitation, and support. Examples exist where libraries administer checkout processes for tools and equipment, similar to their traditional role in doing so for print material and other media. This development illustrates the wide spectrum of the higher education academic makerspace movement on university and college campuses.

As further examples of the scope of this movement, engineering and other
discipline professionals have joined together to share knowledge and explore
best practices related to higher education academic makerspaces. For example,
in 2014 Arizona State University hosted a symposium focused on this topic,
and the MakeSchools (n.d.) alliance was formed to catalyze academic making.
In 2016 the White House convened a meeting on higher education academic
makerspaces in conjunction with the 2016 National Week of Making and
the National Maker Faire. International symposiums devoted to academic
makerspaces were held in 2016 and 2017, with each event attracting hundreds
of participants from across the world and over 100 papers written (ISAM 2017
Papers, Presentations, and Videos 2017; Proceedings of the 1st International
Symposium on Academic Makerspaces 2016).

Learning within academic makerspaces is a nascent research topic within higher education. While the engineering education literature is rich in anecdotal reports on the impact of makerspaces, the newness of the field has limited quantitative, data-rich records of impact. Partnerships between schools of engineering and schools of education have been established at some institutions to study this topic, and it is expected that this field of research will rapidly advance as institutions apply collected data to better understand how such spaces impact student learning (Rosenbaum and Hartmann 2017). In the absence of such detailed reports at this time, it is proposed that the existence of a thriving community in an active higher education academic makerspace has great value from a program and regional accreditation review perspective.

Higher Education Academic Makerspaces and Accreditation
Higher education accrediting associations help ensure the quality of academic
programs by establishing criteria and periodically reviewing each institution’s
ability to meet those standards. Within the United States, institutions of higher
education are reviewed by regional accrediting organizations such as the
New England Association of Schools and Colleges (NEASC) and the Western
Association of Schools and Colleges. Specific programs and academic disciplines
are also reviewed by external evaluators against standards established by
program accreditors. For example, the Accreditation Council for Business
Schools and Programs reviews business programs while the Accreditation
Board for Engineering and Technology (ABET) is the accrediting organization
for engineering programs. Regional accreditation ensures that the college or
university as a whole meets institutional standards, and program accreditation
ensures that departments meet discipline-based standards.

Program accreditation standards place additional emphasis on the curriculum within each academic discipline, though both levels of accreditation address common elements that contribute to the teaching and learning environments.



For example, both regional and program accreditors evaluate the financial, resource, and planning aspects of institutions and programs.

More specifically, institutions accredited by NEASC (per NEASC Standard 3–Organization and Governance) must provide evidence that "the institution creates and sustains an environment that encourages teaching, learning, service, scholarship, and where appropriate, research and creative activity" (New England Association of Schools and Colleges, n.d., Standard Three, 1). Similarly, programs evaluated using the ABET standards must document (per General Criterion 7–Facilities) that "modern tools, equipment, computing resources, and laboratories appropriate to the program [are] available, accessible, and systematically maintained and upgraded to enable students to attain the student outcomes and to support program needs. Students must be provided appropriate guidance regarding the use of the tools, equipment, computing resources, and laboratories available to the program" (ABET, n.d., General Criterion 7, 1).

Higher education academic makerspaces can play an important role in substantiating the ability of an institution or program to meet such standards. The existence of a higher education academic makerspace within a particular department on campus, especially when the facility is open to the entire university community, illustrates the concept of continuous improvement as a mechanism to improve learning and promote creativity. Examples of the impact of higher education academic makerspaces on campus include case studies that detail cross-departmental initiatives to develop multidisciplinary academic courses and the development of summer product design programs, community outreach programs for high school students, and an institution-wide mechanism for learning basic fabrication skills (Ali et al. 2016).

In each case, these developments were created by the students, faculty, and staff associated with each campus makerspace. The fact that these programs, and usually these spaces, did not exist during previous accreditation reviews is evidence of the institutional and programmatic commitment to improving student learning. The investment of space and resources (including staffing and financial support) in higher education academic makerspaces also represents increased levels of fiscal, administrative, and planning support for student learning, areas specifically addressed in accreditation standards and criteria.

It is essential to note that engineering programs have a long tradition of hands-on learning, including open access for exploration both associated with and independent of coursework at a limited number of select universities. However, the concept of higher education academic makerspaces as spaces that support a number of factors in both engineering education and personal development, including design thinking, project-based learning, independent exploration, collaborative problem solving, and entrepreneurial endeavor, is relatively new.

As previously noted, the term "higher education academic makerspace" refers to the facility, staff, resources, and associated community that support creating, learning, and fabricating in a higher education setting. Included in this list is the respective community of users in each space who use the facility for their own projects and assist others in using these resources. With this expanded understanding of what constitutes a higher education academic makerspace, it is clear that the existence of such a space addresses programmatic criteria (such as ABET's General Criterion 7–Facilities) focused on student access to modern tools, resources, and computational resources as well as those criteria that monitor institutional support for learning. For example, ABET's General Criterion 8 on Institutional Support requires that resources be available to "acquire, maintain, and operate infrastructures, facilities, and equipment appropriate for the program, and to provide an environment in which student outcomes can be attained" (ABET, n.d., General Criterion 8, 2). Here, too, the existence of a fully functioning higher education academic makerspace accessible to students, faculty, and staff in an accredited program provides significant evidence aligned with this criterion.

Determinations of accreditation are based on a collection of evidence provided by the evaluated institution that details how the accreditation standards are met. This evidence must include assessment methodologies, results, and implemented improvements for each accreditation standard. The presence of a

higher education academic makerspace provides a rich pool of quantitative and qualitative data that can be used to demonstrate compliance with accreditation criteria.

Documenting student experiences is common practice for most higher education academic makerspaces. These experiences are frequently archived as videos, photographs, and articles that are accessible through a space's web portal. Many makerspaces even offer live video streaming of their activity space. Video data can provide insight into how a space is used, what hours are busiest, etc. These records help others learn what can be accomplished in the facility, including new members who are exploring the space, administrators who are evaluating the impact of the space, and potential contributors who are considering investing in the space. These records are also a valuable accreditation resource as they provide (readymade) narratives that can be grouped to demonstrate institutional or programmatic accomplishments related to specific accreditation standards.

It is also common for higher education academic makerspaces to collect a large amount of quantitative data, in part motivated by an inherent need to monitor and enforce safe operating practices. For example, most spaces have databases identifying the individuals who are authorized to use the space, with that information often including the name, gender, status (student, faculty, staff), and departmental affiliation of each user who has been trained and provided access to work in the space. Similar records frequently exist that record the enrollment in makerspace courses and programs (such as evening workshops). In addition, most spaces host academic groups, such as design-affiliated student associations, for meetings and work sessions, often logging these activities into a master planning schedule. Collectively, these records form a valuable database of information that can be applied as evidence of alignment with accreditation standards.

For example, such quantitative data is important evidence in documenting an institution's commitment to creating multidisciplinary education facilities that accommodate a variety of learning styles. Higher education academic makerspaces favor a form of active learning focused on both individual drive and community-based problem solving. User demographics and frequency-of-use data provide valuable documentation of an institution's commitment to fostering personal discovery, professional development, and lifelong learning—attributes frequently evaluated by institutional and program accreditation organizations.
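To illustrate how such records can become accreditation evidence, the sketch below builds a tiny version of an access database and aggregates users and visits by department. The schema and rows are hypothetical, not drawn from any particular makerspace.

```python
import sqlite3

# Hypothetical schema mirroring the access records described above:
# one row per trained user, plus a log of space visits.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users(user_id INTEGER PRIMARY KEY, status TEXT, department TEXT);
CREATE TABLE visits(user_id INTEGER, visited_on TEXT);
""")
con.executemany("INSERT INTO users VALUES (?, ?, ?)",
                [(1, "student", "Mechanical Engineering"),
                 (2, "student", "Art"),
                 (3, "faculty", "Computer Science")])
con.executemany("INSERT INTO visits VALUES (?, ?)",
                [(1, "2017-10-02"), (2, "2017-10-02"), (3, "2017-10-03"),
                 (1, "2017-10-09"), (2, "2017-10-16")])

# Evidence for an accreditation narrative: trained users and visit counts
# per department, showing use by a broad cross-section of the campus.
for row in con.execute("""
    SELECT u.department,
           COUNT(DISTINCT u.user_id) AS users,
           COUNT(v.user_id) AS visits
    FROM users u LEFT JOIN visits v USING(user_id)
    GROUP BY u.department ORDER BY visits DESC"""):
    print(row)
```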
Both forms of accreditation also review curriculum-related aspects of students' education, typically by allowing each institution or program to establish discipline-specific educational outcomes and measurement mechanisms to evaluate individual attainment of these outcomes, which often include academic and disciplinary knowledge, skills, and competencies. Higher education academic makerspaces provide venues in which to increase knowledge, skills, and competencies, with this topic explored in more detail in the following section.

Spaces and Learning

Makerspaces are academically interesting in two ways: (1) enhancing teaching objectives and (2) enhancing student outcomes. It is worth noting that while these two concepts are similar, they are not identical in terms of modern accreditation standards. Teaching objectives can be seen as a measure of how specific skills are passed on from teachers to students. For example, if students leave a fluid dynamics course with a mathematical understanding of fluid flows and qualities, then the teaching objectives are met. In contrast, student outcomes in engineering, as defined by ABET (n.d.), include more nebulous and difficult-to-measure qualities such as the development of lifelong learning skills and effective communication skills or the ability to function on multidisciplinary teams and use modern engineering tools necessary for engineering practice. These broader student outcomes encompass experiences and learning that occur throughout a program of study rather than merely within one class. Makerspaces can play a role in both of these areas. Teaching objectives can be met via project-based assignments completed in a makerspace. Student outcomes can be enhanced by providing a community of practice where students can learn from peers, engage in self-directed learning, and be exposed to mind-sets that foster the more nebulous qualities, such as those of a lifelong learner and effective communicator. Makerspaces and their influence on both student outcomes and teaching objectives are explored below within the context of accreditation.



To understand how makerspaces could help universities reach accreditation
goals, it is worth exploring what sorts of skills makers are learning within
makerspaces and how this skill acquisition could be of use in academic
engineering programs. Looking at makers who are not engineering students can offer insight into what sorts of skills are learned within makerspaces without the risk of observing what engineering students may be learning from classes and then applying within makerspaces. In a multiyear qualitative study of 36
young makers and 40 adult makers who presented their work at Maker Faires,
it was found that makers outside of academia were learning strategies and skills
applicable to both ABET general student outcomes criteria as well as discipline-
specific criteria. The makers interviewed described examples that showed they
were engaging with many ABET accreditation areas. Half described developing
lifelong learning strategies; 75 percent showed competent communications
skills when describing technical artifacts; 43 percent described the application
of science, engineering, and math knowledge to their creations; and 38
percent described how they designed systems with constraints. The makers
also showed discipline-specific skill development related to electrical and
computer engineering (57 percent), mechanical engineering (28 percent), and manufacturing engineering (49 percent) (Wigner, Lande, and Jordan 2016). Further, the makers interviewed identified the core components necessary for learning new skills, including access to a space with the needed tools, access to online materials (YouTube, how-to blogs, etc.), and access to a community to provide mentorship and peer learning opportunities.

The development of both engineering-specific skills and more broadly applicable student outcomes noted in the above study is not an isolated case. A 2014 National Academies-commissioned literature review of maker-related research found that the broader impacts of making, as claimed by the literature reviewed, were greater contextualization of STEM concepts and practices,
deeper understanding of scientific concepts, and development of fabrication
skills and innovative combinations of disciplinary skills (Vossoughi and Bevan
2014). This study also offered two areas of caution germane to the discussion
of makerspaces and accreditation. First is the risk of focusing overly narrowly
on STEM when making is often practiced in a more holistic, interdisciplinary
manner. Second, the study warned against the fetishizing of tools. Tools
themselves do not enhance education, but rather the community of makers
who uses tools in specific contexts does. In essence, the tools don’t make
makers; makers make themselves. In addition, a 2017 meta-study of 43 (mostly
qualitative) peer-reviewed articles on making showed that participants gained
technical skill and knowledge along with increased self-efficacy (a vital part of
lifelong learning) and noted making’s positive effect on student engagement
(Papavlasopoulou, Giannakos, and Jaccheri 2017). In all but one of the studies
reviewed, making was integrated into the curriculum. Many of the studies
showed making integrated into the classroom with positive results.

Making in Spaces

While engineering departments were the pioneers in the recent expansion of high-tech higher education academic makerspaces, spaces for making things have been an integral part of university facilities for decades. Studio art spaces, for example for sculpture, contain many of the same tools as makerspaces, from 3-D printers to laser cutters and electronics stations. Much like higher education academic makerspaces, studio art spaces serve as places for students to learn and practice skills, explore creatively and freely, and collaborate with and learn from their fellows. Art and design have a long history of "critical making," which is the learning that occurs via the experience of creating and interacting with the physical through iterative processes and social feedback (Somerson and Hermano 2013). In studio spaces, instruction often takes place in the same shared workspace in which others quietly, or not so quietly, work on their own projects for different courses. The community is formed around growth in making art and integrates making into class and non-class time, both for assignments and for personal gratification or curiosity. However, these studio art spaces are generally walled away from the rest of the university and strictly disciplinary in nature. Engineering likewise houses computer labs dedicated to the simulation of industrial processes, circuit labs dedicated to the exploration of electronics, etc. In these spaces, both peer learning and coursework take place. Like studio art spaces, engineering labs are for insiders only, but unlike art studios, playful exploration is generally discouraged. Peer learning in makerspaces offers the possibility of increasing the diversity of work and people students encounter during their time in higher education.

Academic makerspaces can be viewed as places where interdisciplinary technology can be focused on training, work, and play. In the context of ABET accreditation for engineering programs, such a space could be one of the only areas on campus where the explicit goal of training engineers to function on multidisciplinary teams could be met. Where once a dedicated circuits lab, for example, would provide access to the tools and materials needed for students to complete their coursework and prepare for a real-world work experience, today's engineering career ecosystem is much more likely to require input from multiple disciplines in a rapidly changing technological landscape. To emulate the real environment, a sort of "circuits in context" lab is required, one where traditional parts and tools (e.g., resistors, capacitors, soldering irons) exist side by side with programmable microprocessors like the Arduino and the tools (e.g., 3-D printers, laser cutters, sewing machines, craft implements) needed to create the products the circuits might exist within. Here, students and student teams can explore not just what circuits are, but what they mean within a broader societal context—how circuits interface with people in real terms.



Places for Additive Innovation

In higher education academic makerspaces, students can engage in sharing practices that aid in the exploration of technical and design skills. The practices of additive innovation include (Jordan and Lande 2016)

• Being inspired by other students' creations
• Openly sharing and learning about technology through the creation of projects
• Designing and modifying versions of others' shared ideas
• Sharing ideas back with the community

This sharing environment allows students to learn, and reinforce the learning of, engineering skills through one another's creations. This sharing and remixing behavior lets students explore and create without necessarily facing the onus of being entirely original or struggling with fears that using others' concepts or ideas will be identified as cheating. Further, an emphasis on sharing through additive innovation encourages students to document and reflect on their work in the form of how-to guides. Examples of how these guides look in the Maker Movement can be found on websites like instructables.com, where thousands of individuals share their creations along with instructions on how one can create his/her own version (Instructables, n.d.). Technological projects on such sites can range from simple circuits to complex electronics with multiple microcontrollers, sensors, and inputs. Unlike a simple how-to guide, however, Instructables acts as an open forum for creators—and copiers—to share their experiences and provide guidance to one another. This same creation feedback also occurs in physical makerspaces. The process of building, sharing, remixing, and peer mentoring can help create a community of practice that aids learning through playful investment, risk taking, and self-directed learning.

Future for Higher Education

Higher education academic makerspaces allow for the significant growth of campus space in support of disciplinary and multidisciplinary collaboration. These curricular and extracurricular spaces are increasing in popularity and purpose. Because of the value of space on campus, explicitly connecting new types of spaces directly to the university mission is a concern. Given the wide array of faculty, students, and staff using higher education academic makerspaces, their role is becoming more institutionalized; given their connection to accreditation and direct support of academic programs such as engineering, they may already be sustainable.

The future of higher education may include the university working collaboratively across disciplines to solve larger and more complex problems to help society at large. The affordances that makerspaces can provide through community and technology may help catalyze and realize this potential role. Higher education academic makerspaces are an increasingly popular innovation in campus space planning that can help realize the balance of knowledge production that can also be applied to solve critical societal problems.

References

ABET. n.d. Criteria for Accrediting Engineering Programs, 2017–2018. Accessed December 14, 2017: www.abet.org/accreditation/accreditation-criteria/criteria-for-accrediting-engineering-programs-2017-2018/.

Ali, P. Z., M. Cooke, M. L. Culpepper, C. R. Forest, B. Hartmann, M. Kohn, and V. Wilczynski. 2016. The Value of Campus Collaboration for Higher Education Makerspaces. Proceedings of the 1st International Symposium on Academic Makerspaces, Paper no. 48. Accessed December 14, 2017: http://seas.yale.edu/sites/default/files/imce/other/ISAM%20Campus%20Collaboration.pdf.

Barrett, T. W., M. C. Pizzico, B. Levy, R. L. Nagel, J. S. Linsey, K. G. Talley, C. R. Forest, and W. C. Newstetter. 2015. A Review of University Maker Spaces. Paper presented at 2015 ASEE Annual Conference & Exposition, Seattle, WA, June 14–17. Accessed December 14, 2017: www.asee.org/public/conferences/56/papers/13209/view.

Bryne, D., and C. Davidson. 2015. MakeSchools Higher Education Alliance: State of Making Report. Accessed December 14, 2017: http://make.xsead.cmu.edu/week_of_making/report.

Gertner, J. 2012. The Idea Factory: Bell Labs and the Great Age of American Innovation. New York: Penguin Press.

ISAM 2017 Papers, Presentations, and Videos. 2017. Accessed December 26, 2017: https://drive.google.com/drive/mobile/folders/0B4ZIatyugWjJNXlxVW9iR0ZFVjQ?usp=sharing.

Instructables. n.d. Home page. Accessed December 14, 2017: www.instructables.com.

Jordan, S., and M. Lande. 2016. Additive Innovation in Design Thinking and Making. International Journal of Engineering Education 32 (3): 1438–44.

MakeSchools. n.d. Home page. Accessed December 14, 2017: http://make.xsead.cmu.edu/.

New England Association of Schools and Colleges. n.d. Commission on Institutions of Higher Education: Standards. Accessed December 14, 2017: https://cihe.neasc.org/standards-policies/standards-accreditation/standards-effective-july-1-2016#.

Papavlasopoulou, S., M. N. Giannakos, and L. Jaccheri. 2017. Empirical Studies on the Maker Movement, a Promising Approach to Learning: A Literature Review. Entertainment Computing 18 (January): 57–78.

Proceedings of the 1st International Symposium on Academic Makerspaces. 2016. Accessed December 14, 2017: http://jrom.ece.gatech.edu/wp-content/uploads/sites/528/2017/07/ISAM_2016-Proceedings-I.pdf.

Rosenbaum, L. F., and B. Hartmann. 2017. Where Be Dragons? Charting the Known (and Not So Known) Areas of Research on Academic Makerspaces. Paper presented at ISAM 2017, International Symposium on Academic Makerspaces, Cleveland, OH, September 24–27.

Smith, K. A., S. D. Sheppard, D. W. Johnson, and R. T. Johnson. 2005. Pedagogies of Engagement: Classroom-Based Practices. Journal of Engineering Education 94 (1): 87–101.

Somerson, R., and M. L. Hermano, eds. 2013. The Art of Critical Making: Rhode Island School of Design on Creative Practice. Hoboken, NJ: John Wiley & Sons.

Vossoughi, S., and B. Bevan. 2014. Making and Tinkering: A Review of the Literature. National Research Council Committee on Out of School Time STEM, 1–55.

Wigner, A., M. Lande, and S. S. Jordan. 2016. How Can Maker Skills Fit In with Accreditation Demands for Undergraduate Engineering Programs? Paper presented at the 2016 ASEE Annual Conference and Exposition, New Orleans, LA, June 26–29.

Wilczynski, V., J. Zinter, and L. Wilen. 2016. Teaching Engineering Design in a Higher Education Makerspace: Blending Theory and Practice to Solve Client-based Problems. Paper presented at the 2016 ASEE Annual Conference and Exposition, New Orleans, LA, June 26–29.

Author Biographies

Vincent Wilczynski, PhD, has served as the deputy dean of the Yale School of Engineering & Applied Science since 2010. He is the James S. Tyler Director of the Yale Center for Engineering Innovation and Design and oversees all aspects of Yale's academic makerspace. Previously, he was the dean of engineering at the U.S. Coast Guard Academy where he served as a captain in the U.S. Coast Guard. He is a former vice president of the American Society of Mechanical Engineers, and his teaching skills have been cited by the Carnegie Foundation for the Advancement of Teaching.

Aubrey Wigner, PhD, is an assistant professor at the Eli Broad College of Business at Michigan State University where he teaches courses in the university-wide minor in entrepreneurship and innovation. He is a chemical engineer turned interdisciplinary social scientist whose dissertation explores the connections between the maker movement, higher education, and the future of work. Currently, he is working to expand MSU's course offerings on interdisciplinary design and collaboration for entrepreneurship and innovation.

Micah Lande, PhD, is an assistant professor in the engineering and manufacturing engineering programs and Tooker Professor at the Polytechnic School in the Ira A. Fulton Schools of Engineering at Arizona State University. He teaches human-centered engineering design, design thinking, and design innovation project courses. He researches how technical and non-technical people learn and apply design thinking and making processes to their work.

Shawn Jordan, PhD, is an associate professor of engineering education in the Ira A. Fulton Schools of Engineering at Arizona State University. He teaches context-centered electrical engineering and embedded systems design courses and studies the use of context in both K–12 and undergraduate engineering design education. He was named one of ASEE PRISM's "20 Faculty Under 40" in 2014 and received a Presidential Early Career Award for Scientists and Engineers from President Obama in 2017.



About the Society for College and University Planning (SCUP)

At SCUP, we believe that by uniting higher education leaders, we can meet the rapid pace of change and competition, advancing each institution as it shapes and defines its future. Through connection, learning, and expanded conversation, we help create integrated planning solutions that will unleash the promise and potential of higher education. Our community includes colleges and universities (two-year, four-year, liberal arts, doctoral-granting research institutions, public, private, for-profit, and private sector).

The individuals we serve range from planning leaders with institution-wide responsibilities, such as presidents, provosts, and other senior roles, to those who are in the trenches, such as chairs, directors, and managers.

What Is Integrated Planning?

Integrated planning is a sustainable approach to planning that builds relationships, aligns the organization, and emphasizes preparedness for change.