A Project of the Texas Higher Education Coordinating Board

Assessing General Education Competencies:
Findings from a Survey of Texas Public
Colleges and Universities
Wendy Erisman, Ph.D.
October 2010
Higher Education Policy Institute
P.O. Box 12788
Austin, Texas 78711
512-427-6182
www.highereducationpolicyinstitute.org
Higher Education Policy Institute (HEPI)
The Higher Education Policy Institute (HEPI) was founded at the Texas Higher Education Coordinating Board in
April 2007 with a $2.5 million grant from the Houston Endowment, Inc. HEPI produces comprehensive and
objective analyses to inform higher education policy and practice in Texas as the state works towards achieving
the goals of its higher education plan, Closing the Gaps by 2015. Through symposia, reviews of existing research,
and original research, HEPI develops recommendations to enhance the human capital of Texas with a particular
focus on traditionally underrepresented populations. The main areas of research at HEPI include:
• Increasing higher education cost efficiency
• Defining and measuring student learning in higher education
• Increasing Hispanic participation and success
• Improving developmental education
Acknowledgments
The author would like to thank the current and former HEPI staff members who helped to make this report
possible: Lee Holcombe, Blanca Plazas Snyder, Sabrina Salazar, and Alyssa Reinhart. The project also greatly
benefited from the support of many Texas Higher Education Coordinating Board staff members: Susan Brown,
Janet Beinke, Gabriela Borcoman, Charles Busbey, Clifford King, Catherine Parsoneault, Lucy Bloor, Marilyn
Hartman, Sue Sutton, and all the members of the Research and Data Collection Committee who reviewed the
survey and provided feedback.
We very much appreciate the assistance of the accountability peer group representatives who provided
feedback on the draft survey and commented on the findings. Above all, we appreciate the willingness of the
participating institutions to take the time to respond thoughtfully to the survey.
This report was made possible through the generosity of the Houston Endowment, Inc., which funds all of HEPI’s
work. The views expressed in this report are those of the author and do not necessarily reflect the views of the
Houston Endowment.
Higher Education Policy Institute
Texas Higher Education Coordinating Board
October 2010
Contents
Key Findings
Introduction
The 2009 HEPI General Education Assessment Practices Survey
General Education Assessment Process
Resources Available for Assessment
General Education Assessment Practices
Use of General Education Assessment Findings
Improving General Education Assessment
Comparisons to Other Surveys
Conclusions and Further Research
References
Appendix: Survey Development and Administration
Key Findings
In Fall 2009, the Higher Education Policy Institute conducted a survey of general education assessment practices
at Texas public colleges and universities. Of the 100 Texas public colleges and universities invited to participate,
all but one university and two community colleges responded, for an overall response rate of 97%. Key findings
from the survey include:
General Education Assessment Process
• Four-year institutions are more likely than two-year institutions to conduct general education assessment at the institutional level, as opposed to the course level, and are also more likely to have an institution-wide committee that plays a major role in the assessment process.
• Institutions reaccredited in 2005 or later, after the Southern Association of Colleges and Schools Commission
on Colleges (SACS) implemented its new standards, are more likely than institutions reaccredited under the
old standards to be assessing all general education competencies every two years or more often.
• Faculty members are actively involved in the general education assessment process at most institutions.
Only 16% of community colleges and 6% of universities said that faculty members are not involved in
general education assessment.
Resources Available for Assessment
• While 79% of four-year institutions have at least one staff position dedicated to assessment, only 35% of
two-year institutions have such a position. Assessment staff positions are particularly rare at medium and
small community colleges and technical colleges. Assessment staff positions are most often located in an
office of assessment, accountability, or institutional effectiveness.
• 79% of four-year institutions and 64% of two-year institutions are currently using or adopting assessment management software. In addition, 87% of the colleges and universities that are using this software have used it for three years or less, and 51% for one year or less.
• The most common resources available to support assessment are funding for faculty/staff to attend
assessment conferences or training (87%) and one-on-one consultation with faculty members (81%). The
least common resource is funding or release time for faculty/staff to work on assessment (34%).
General Education Assessment Practices
• While 97% of four-year institutions are using or adopting a standardized general education exam, only 33%
of two-year institutions are doing so. In addition, 36% of two-year institutions reaccredited under the new
SACS standards reported using a standardized general education exam, but only 17% of two-year institutions
reaccredited before 2005 said the same.
• Embedded assessment—particularly reviewing student work across course sections using common rubrics
or exams—is used occasionally or frequently by more than 80% of both two- and four-year institutions.
• Other commonly used assessment methods include student surveys (80%), observations of student
performance (75%), employer surveys or advisory groups (63%), and cumulative assessments such as
capstone projects or portfolios (60%). Institutions reaccredited under the new SACS standards are more
likely to use student surveys.
Use of General Education Assessment Findings
• General education assessment findings are most often shared with faculty members (84%), academic
administrators (83%), and accreditors (80%) and least often with students (25%), the general public (23%),
and alumni (10%).
• The most common external uses of general education assessment findings are to prepare for institutional
accreditation (85%), respond to requirements set by state or federal government (82%), and prepare for
specialized accreditation (68%).
• Two-year institutions are more likely than four-year institutions to use general education assessment
findings to improve teaching (71% versus 47%), revise general education competencies (71% versus 47%),
make changes to the general education curriculum (70% versus 50%), conduct strategic planning (68%
versus 38%), and determine student readiness for upper-level coursework (40% versus 18%).
Improving General Education Assessment
• Institutions said that the most significant challenge they face in assessing general education competencies is
a lack of faculty involvement or expertise in assessment. Two-year institutions are especially challenged by a
lack of resources for learning outcomes assessment.
• Four-year institutions were more likely to indicate a need for more faculty involvement (56% versus 44%)
and emphasis on assessment by institutional leaders (29% versus 14%). Two-year institutions were more
likely to indicate a need for agreement on general education/core curriculum learning outcomes throughout
the state (43% versus 27%).
Conclusions and Further Research
• The survey demonstrates the diversity of approaches used by different institutions to assess general
education competencies as well as the widespread use, and perceived value, of embedded assessment,
which is used by far more institutions than use standardized general education exams.
• As found in other research, assessment findings are used more for external purposes, such as accreditation
and accountability, than for internal purposes such as the allocation of resources, strategic planning, or the
improvement of teaching.
• Two-year institutions are more likely to use general education assessment findings for institutional
improvement and planning purposes but find their assessment efforts hindered by a lack of resources.
• Institutions reaccredited under the new SACS standards are more likely to have shorter assessment cycles
and dedicated assessment staff positions, to measure learning with standardized exams and student
surveys, and to use assessment findings to improve teaching and allocate resources. As more institutions are
reaccredited under the new standards, additional changes in assessment practice are likely to be seen.
• It would be valuable to administer this survey on an annual or biannual basis to track trends in assessment
policy and practice, further clarify the relationship between reaccreditation and assessment practice, and
provide institutions with regular information about the assessment practices used by their peers.
• Further research is needed to address how Texas public colleges and universities are approaching the
assessment of discipline-specific student learning outcomes and to clarify important relationships among
institutional assessment capacity, the use of various assessment methods, and how assessment findings are
used by institutions.
Introduction
In recent years, the national discussion around accountability for student learning has come to emphasize broad
learning outcomes expected of all college graduates. Experts argue that teaching these general education
competencies is central to the mission of any college or university, and there is growing consensus among higher
education stakeholders as to which competencies are essential for student success (American Association of
State Colleges and Universities, 2006; Association of American Colleges and Universities (AAC&U), 2006). Recent
surveys conducted by several organizations suggest that employers expect workers to be able to communicate
effectively, think critically, and make ethical decisions in the workplace and are not always satisfied that college
graduates are able to fulfill these job requirements (The Conference Board, Corporate Voices for Working
Families, the Partnership for 21st Century Skills and the Society for Human Resource Management, 2006;
AAC&U, 2008; AAC&U, 2009).
The importance of general education competencies is reflected in new efforts by the higher education sector to
show that students are successfully achieving these outcomes. The Voluntary System of Accountability, a public
accountability system in which many public four-year institutions participate, requires its members to commit to
administering a standardized general education exam to a sample of students in order to provide comparable
evidence of student learning gains in areas such as writing and critical thinking. Accrediting agencies have also
begun to emphasize the assessment of general education competencies as part of the process of continuous
improvement expected of colleges and universities. This heightened emphasis on assessing general education learning outcomes makes it all the more important to understand key challenges and effective practices in assessing student learning across the curriculum.
Assessing general education competencies is a particularly important topic for Texas public higher education.
Responsibility for providing general education, particularly in the form of the 42-hour core curriculum mandated
by Texas statute, is shared by all public undergraduate institutions, and these institutions are expected by the
Texas Higher Education Coordinating Board (THECB) to evaluate the extent to which core curriculum objectives
are achieved (2010). The Southern Association of Colleges and Schools Commission on Colleges (SACS), the
accrediting agency for Texas public colleges and universities, has since 2005 required institutions to demonstrate
the extent to which graduates have achieved general education competencies specified by the college or
university (2010). However, until now little has been known about the specific policies and practices used by
Texas public colleges and universities to determine the extent to which their students are successfully achieving
general education learning outcomes.
The 2009 HEPI General Education Assessment Practices Survey
In Fall 2009, the Higher Education Policy Institute (HEPI) conducted a survey of general education assessment
policies and practices at Texas public colleges and universities. The survey examined the general education
assessment process, the resources available to support assessment at each institution, the methods used to
measure general education competencies, the uses of general education assessment findings, and the
challenges faced by colleges and universities in conducting this type of assessment work (see the appendix for
the survey questions and information about its development and administration). Of the 100 Texas public
colleges and universities invited to participate in the survey, all but one university and two community colleges
responded, for an overall response rate of 97%.
For the purposes of this report, survey responses were analyzed in the aggregate and also disaggregated for
two- and four-year institutions, which differ considerably in mission and resources. However, important
differences, particularly in terms of size and student population, can also be found among two- and four-year
public institutions in Texas. For that reason, the report also looks at findings based on the state’s accountability
peer groups, which are organized by institutional type, size, and mission and meet regularly to discuss issues
related to the THECB’s accountability system for higher education.¹ The THECB Higher Education Accountability
System website (http://www.txhighereddata.org/Interactive/Accountability/Measures.cfm) can provide further
information about these accountability peer groups.
Finally, because accreditation has been cited as a major force behind changes in institutional assessment practices, the report compares findings for institutions that underwent their most recent SACS reaccreditation prior to 2005, the year in which SACS implemented its new standards for the assessment of general education, with findings for institutions reaccredited in 2005 or later. Taken together, these analyses give a broad sense of how Texas public colleges
and universities are approaching the assessment of general education competencies and how these policies and
practices vary in some important ways. The report concludes with a discussion of how the survey findings
compare with those of two other recent surveys of assessment policies and practice and with suggestions for
further research in this area.
¹ The nine accountability peer groups used in the report are research/emerging research universities*, doctoral universities*, comprehensive universities*, master’s universities, very large community colleges, large community colleges*, medium community colleges, small community colleges*, and technical colleges*. Peer groups marked with an asterisk have fewer than 10 members, and findings for these groups should be interpreted cautiously.
General Education Assessment Process
The first section of the survey focused on the institutional process used for general education assessment and
on who is responsible for this assessment work.
Levels of Assessment
An open-ended question asked survey respondents to briefly describe the process their institutions use to assess
general education competencies required of all students. While answers to this question varied considerably,²
there were some patterns that seem helpful in understanding other survey responses. In particular, it is clear
that the assessment of general education outcomes takes place at three different levels: the institutional level;
the department level, often as part of programmatic assessment and/or program review; and the course level,
with an emphasis on core curriculum courses taken by large numbers of students. These approaches to general
education assessment varied by institutional type, with four-year institutions more likely to conduct assessment
at the institutional level and two-year institutions more likely to use course-level assessments (Figure 1).³ Many
institutions use more than one level of general education assessment.
[Figure 1: Levels of General Education Assessment — bar chart comparing the percentage of two-year and four-year institutions conducting assessment at the institution, department, and course levels]
In addition, while 57% of respondents from four-year institutions mentioned an institution-wide committee that
plays a major role in the general education assessment process, only 36% of two-year institutions mentioned
such a committee. These findings suggest that four-year institutions are more likely than two-year institutions to
view general education competencies as separate from the specific core courses in which they are primarily
taught. Some four-year institutions, in fact, require students to take upper-division courses, such as writing-intensive or capstone courses, that expand on the knowledge and skills learned in the core curriculum. For these
institutions, assessing general education competencies only in core curriculum courses would mean missing the
outcomes students achieve later in their education. As a result, four-year institutions seem more likely to see
the assessment of general education competencies as the responsibility of the institution as a whole and to
assign oversight of the process to an institution-wide committee.
² Responses to open-ended questions from this survey, with institutional identifiers removed, can be seen at http://www.txhighereddata.org/Interactive/Accountability/PeerGroup.cfm.
³ The percentages in this figure, like most others in the report, do not add to 100% because of multiple responses.
Assessment Cycle
The survey asked about the cycle on which the institution assesses all general education competencies,
recognizing that some competencies may be assessed more often. The most frequent response to this question
was every year (39%), followed by every three years or beyond (26%). Smaller numbers of institutions said that
they assess all general education competencies every semester (12%) or every two years (10%).
There was relatively little variation in these responses by institutional type or accountability peer group, but
there were some differences based on when the institution underwent reaccreditation (Figure 2). Institutions
reaccredited in 2005 or later—after SACS changed its general education assessment standards—were more
likely to conduct assessment of general education every semester (16% versus 9%), every year (43% versus
35%), or every two years (14% versus 7%). Institutions reaccredited prior to 2005 were more likely to report an
assessment cycle of three years or more (30% versus 22%). This finding suggests that undergoing reaccreditation
under the new SACS standards leads institutions to implement a more frequent cycle for assessing general
education competencies.
[Figure 2: General Education Assessment Cycle — bar chart comparing institutions reaccredited before 2005 with those reaccredited in 2005 or later across assessment cycles of every semester, every year, every two years, every three years or beyond, other, and in the process of determining]
Assessment Responsibilities
Institutions were asked who is responsible for selecting assessment methods, developing assessment
instruments, collecting and analyzing data, and reviewing assessment findings. Responses showed that faculty
members are actively involved in the general education assessment process. Only 12% of institutions said that
faculty members are not involved in any of the listed activities while 74% indicated that faculty members are
involved in selecting assessment methods, 84% in developing assessment instruments, 67% in collecting and
analyzing data, and 72% in reviewing assessment findings. Institutions also reported considerable involvement in
general education assessment by the office of the chief academic or instructional officer, particularly in
reviewing assessment findings (76%). Only 18% of institutions said that this office does not participate in any of
the listed general education assessment activities.
A general education assessment committee and the offices of assessment/accountability/institutional
effectiveness and institutional research were less frequently described as involved in general education
assessment; 41%, 36%, and 33% respectively have no involvement in the process (Figure 3). It is important to
note that, particularly in the case of small institutions, there may not be a general education assessment
committee or offices of assessment/accountability/institutional effectiveness and institutional research on
campus. For institutions where these groups do participate in the general education assessment process, the
general education assessment committee is most often tasked with reviewing findings (55%), the office of
assessment/ accountability/institutional effectiveness with collecting and analyzing data (55%) and reviewing
findings (53%), and the office of institutional research with collecting and analyzing data (60%).
Figure 3: Groups/Offices Involved in at Least One General Education Assessment Activity (faculty: 87.5%; chief academic officer: 82.3%; general education assessment committee: 59.4%; assessment office: 63.5%; institutional research office: 67.7%)
There were some differences in responses to this question for two-year versus four-year institutions. Four-year
institutions were more likely to say that the office of the chief academic officer is not involved in general
education assessment (27% versus 13%). It seems likely that some of the responsibilities assumed by the office
of the chief instructional officer at two-year institutions are handled at four-year institutions by a general
education assessment committee or an assessment office, both of which were more often cited by four-year
institutions as playing a role in the general education assessment process. In addition, two-year institutions
were more likely to report that faculty members are not involved in the general education assessment process
(16% versus 6%). This finding seems odd since two-year institutions are also more likely to conduct assessment
at the course level, but at some colleges faculty members, especially adjuncts, may not be involved in general
education assessment beyond providing required data from their classes. This pattern is particularly true for
technical colleges, which were much more likely than community colleges to indicate that faculty members are
not involved in the general education assessment process (43% versus 13%). General education is less central to
the mission of technical colleges, some of which offer only applied associate’s degrees, which are not subject to
the state’s requirements for a 42-hour core curriculum.
In addition, the likelihood that the office of institutional research is involved with general education assessment
varied among two-year institutions with 84% of very large community colleges indicating that this office is
involved compared to 71% of large community colleges, 67% of medium and small community colleges, and 57%
of technical colleges. As the next section shows, larger community colleges, some of which have higher
enrollments than most universities, have more resources to invest in assessment.
Resources Available for Assessment
Given the emphasis placed on student learning outcomes assessment by SACS, as well as by many specialized
accreditors and state and federal regulatory bodies, it makes sense to examine how Texas public colleges and
universities have begun to build their capacity to support assessment activities.⁴
Assessment Staff Positions
Having staff members whose time is dedicated to the assessment of student learning is one way institutions can
support faculty members. Overall, 51% of colleges and universities indicated that they have one or more
dedicated assessment staff positions. However, while 79% of four-year institutions have at least one staff
position dedicated to assessment, only 35% of two-year institutions have such a position.
Among institutions that have one or more assessment staff positions, the positions are most often located in an
office of assessment, accountability, or institutional effectiveness (64%), followed by the office of the chief
academic or instructional officer (49%) and the office of institutional research (38%). These findings also varied
somewhat by institutional type (Figure 4). In particular, four-year institutions, which are more likely to have one
or more assessment staff positions, are also more likely to locate them in an office or college of undergraduate
studies (12%) or in individual colleges or departments (23%), a fact that reflects both the typically greater
resources and the different organizational structure of most four-year institutions.
[Figure 4: Location of Assessment Staff Positions — bar chart comparing two-year and four-year institutions across the assessment office, the office of the chief academic officer, institutional research, undergraduate studies, and individual colleges or departments]
There were also some variations within the two-year and the four-year sectors regarding assessment staff positions. For two-year institutions, this variation is related to institutional size, with 71% of large community
colleges and 58% of very large community colleges having assessment staff positions, compared to 22% of small
community colleges and 14% of medium community colleges and technical colleges (Figure 5).
⁴ Because most institutions do not separate the assessment of general education outcomes from the assessment of program-specific outcomes in their assessment budgets, this section refers to all assessment resources available at each responding institution.
Figure 5: Assessment Staff Positions at Two-Year Institutions (institutions with assessment staff positions: very large community colleges 57.9%; large community colleges 71.4%; medium community colleges 14.3%; small community colleges 22.2%; technical colleges 14.3%)
Among four-year institutions, all research/emerging research and comprehensive universities have assessment
staff positions, compared to 67% of master’s universities and 57% of doctoral universities. This pattern may
reflect the fact that considerably more research/emerging research and comprehensive universities (78% and
67% respectively) underwent SACS reaccreditation in 2005 or later, compared to 42% of master’s universities
and 29% of doctoral universities, suggesting that the reaccreditation process is an impetus to create assessment
staff positions.
Assessment Management Software
Assessment management software is rapidly becoming an important tool to help institutions manage
institutional accreditation, specialized accreditation, state and federal reporting requirements, and strategic
planning. At professional conferences, vendors for products such as Weaveonline, Trac Dat, and Tk20
demonstrate the features of their software packages while existing course management software developers
like Blackboard are now including assessment management tools as add-ons to their products.
Use of Assessment Management Software
The survey asked if institutions are using assessment management software, and if so, which product they use
and how long they have been using it. Overall, 47% of colleges and universities reported using assessment
management software, with another 22% in the process of adopting it. As with assessment staff positions, these
findings varied by institutional type (Figure 6). While 68% of four-year institutions reported using assessment
management software, only 38% of two-year institutions did so. On the other hand, 27% of two-year institutions
are adopting assessment management software, versus 12% of four-year institutions, indicating that two-year
institutions are starting to catch up to four-year institutions in this arena. However, technical colleges are much
less likely than their community college counterparts to use assessment management software. Only 29% of
technical colleges reported using such software and none are in the process of adopting it, versus the 68% of
community colleges that are either using or adopting assessment management software.
[Figure 6: Use of Assessment Management Software — bar chart comparing the percentage of two-year and four-year institutions currently using the software, not using it, or in the process of adopting it]
Reaccreditation was also correlated with the use of assessment management software. Of institutions
reaccredited before 2005, 30% are in the process of adopting assessment management software, versus 14% of
institutions that have more recently undergone the reaccreditation process. This suggests that some institutions
are adopting assessment management software in preparation for reaccreditation.
Types of Assessment Management Software Used
The assessment management software used by Texas public colleges and universities varies considerably. Ten
commercial software packages, along with locally-developed software, were each mentioned by at least two
institutions. The most commonly mentioned commercial software packages included Weaveonline (26%),
Blackboard (23%), Trac Dat (15%), and Strategic Planning Online (15%). In addition, 23% of institutions use
locally-developed assessment management software. However, four-year institutions are more likely to use
Weaveonline (33%), Blackboard (30%), and Trac Dat (19%) while two-year institutions are more likely to use
locally-developed software (26%) and Strategic Planning Online (23%).
Given the relatively recent appearance of assessment management software, it is unsurprising that institutions
reported using their current software package or packages for a relatively short time. Overall, 87% of colleges
and universities that use assessment management software have been using it for three years or less and 51%
have been using it for one year or less.
Other Assessment Resources
The survey also asked about other assessment resources that Texas public colleges and universities offer to
support their faculty and staff members. The most commonly mentioned resources were funding for faculty or
staff to attend assessment conferences or training (87%) and one-on-one consultation with faculty members
(81%). Many institutions also offer books and reference materials on assessment (73%), optional assessment
workshops for faculty or staff (72%), and assessment training for faculty or staff as part of formal professional
development activities (68%). The least frequently mentioned option was funding or release time for faculty or
staff to work on assessment (34%).
There were some interesting differences between two- and four-year institutions in terms of the resources
available to faculty and staff (Figure 7). Four-year institutions are more likely than two-year institutions to offer
one-on-one consultation with faculty members (94% versus 74%), reflecting the fact that these institutions are
more likely to have assessment staff positions. Four-year institutions are also more likely to offer optional
assessment workshops (82% versus 67%), but two-year institutions are more likely to provide assessment
training as part of formal professional development activities (74% versus 58%), reflecting the different
relationships such institutions have with their faculty members as well as the greater use of adjunct faculty
members at many two-year institutions. Finally, while the practice is not common, two-year institutions are
more likely to offer funding or release time for assessment work (38% versus 27%).
Figure 7: Resources to Support Assessment
[Bar chart comparing two-year and four-year institutions on six resources: funding for conferences or training, one-on-one consultation, books/reference materials, formal professional development, optional assessment workshops, and funding or release time for assessment.]
Among two-year institutions, institutional size and mission once again played a role in assessment resources
available (Figure 8). While 90% of very large community colleges and 86% of large community colleges offer
optional assessment workshops, such workshops are less common at medium (65%) and small community
colleges (50%) and much less common at technical colleges (14%). In the same vein, 84% of very large
community colleges, 86% of large community colleges, and 80% of medium community colleges provide books
and reference materials on assessment for faculty and staff members, compared with 63% of small community
colleges and 29% of technical colleges. Finally, almost three-quarters (74%) of very large community colleges
and over half (57%) of large community colleges offer funding or release time for faculty or staff to work on
assessment. Funding or release time is available at only 29% of technical colleges, 15% of medium community
colleges, and no small community colleges.
Figure 8: Resources to Support Assessment at Two-Year Institutions
[Bar chart comparing very large, large, medium, and small community colleges and technical colleges on three resources: books/reference materials, optional assessment workshops, and funding or release time for assessment.]
General Education Assessment Practices
The methods and measures used to assess student learning are at the center of assessment practice. This
section of the survey asked what methods for assessing general education competencies are used by Texas
public colleges and universities.
Standardized Exams
Overall, 56% of colleges and universities responded that they are either already using a standardized exam to
assess general education outcomes or are in the process of adopting one. However, there was considerable
difference between two- and four-year institutions on this question. While 97% of four-year institutions are
using or adopting a standardized general education exam, only 33% of two-year institutions are doing so.
One explanation for this finding is that many public four-year institutions participate in the VSA, which requires
them to commit to administering a standardized general education exam. Of the 27 Texas public universities
that participate in the VSA, all but one are using a general education exam, and the remaining institution is in
the process of adopting an exam. Of the seven universities that do not participate in the VSA, on the other hand,
57% are using a general education exam and 29% are adopting one. These percentages are well above the
comparable percentages for two-year institutions, showing that participation in the VSA cannot explain all of the
difference between two- and four-year institutions in this area (Figure 9).
Figure 9: Use of Standardized General Education Exams
[Bar chart comparing two-year institutions, four-year VSA participants, and four-year non-participants across three responses: Yes, No, and In process.]
Reaccreditation also plays a role in the use of standardized general education exams. Among institutions
reaccredited in 2005 or later, 57% reported using an exam, compared to 39% of those reaccredited prior to
2005. This pattern held true for both two- and four-year institutions but was particularly clear for two-year
institutions. While 36% of two-year institutions reaccredited in 2005 or later reported using a standardized
general education exam, only 17% of two-year institutions reaccredited before 2005 did so. These findings
suggest that the new SACS standards lead institutions to adopt standardized exams as part of the general
education assessment process.
Students Taking Standardized General Education Exams
If institutions use standardized general education exams, they were asked which group or groups of students
take the exam (Figure 10). For two-year institutions, the most common response was a sample of students
completing general education (65%). Smaller numbers of two-year institutions responded that they administer a
general education exam to a sample of graduating students (25%) or to all students completing general
education (18%). The responses from four-year institutions were quite different, with 74% responding that they
administer a standardized exam to a sample of first-year and final-year students. Much smaller percentages of
four-year institutions use these exams with a sample of students completing general education (16%) or a
sample of graduating students (16%).
Figure 10: Students Taking Standardized General Education Exams
[Bar chart comparing two-year and four-year institutions across five student groups: all completing general education, sample completing general education, all graduating, sample graduating, and sample of first-year and final-year students.]
In this case, the differences between two- and four-year institutions are largely explained by participation in the
VSA, which requires institutions to administer the exam in a way that shows learning gains. Among four-year
institutions that participate in the VSA, 88% administer a standardized general education exam to a sample of
first- and final-year students. On the other hand, only 17% of four-year institutions that do not participate in the
VSA do so, a percentage not much larger than the 12% of two-year institutions that use this kind of sample. It
would seem, therefore, that VSA participation is driving the choice of student sample for many four-year
institutions.
Among two-year institutions, there were some differences in the ways community colleges and technical
colleges use standardized general education exams. In general, technical colleges were more likely than
community colleges to report using such exams, with 57% of technical colleges indicating that they use them,
compared to only 23% of community colleges. In addition, 75% of technical colleges reported administering
these exams to all students completing general education, a practice used by no community colleges.
Community colleges, on the other hand, were more likely to report administering the exams to a sample of
students completing general education, with 85% indicating that this is their practice. These differences reflect
the larger size of many community colleges and also the more structured curriculum of the technical colleges,
which makes it simpler to identify students who have completed their general education requirements.
Standardized General Education Exams Used
The standardized exams most frequently used by survey respondents—the Collegiate Learning Assessment
(CLA), the Collegiate Assessment of Academic Proficiency (CAAP), and the Measure of Academic Proficiency and
Progress (MAPP)—are also the three exams associated with the VSA. Among four-year institutions, the most
commonly used exam is the CLA (45.5%), followed by the MAPP (33.3%) and the CAAP (21.2%). The popularity of
the CLA is partially explained by the fact that the University of Texas (UT) System requires its nine universities to
use the exam for accountability purposes, which accounts for more than half of the 15 institutions that use this
exam. Two-year institutions more often use the CAAP (47.2%), with the MAPP (36.8%) as the second most
common response (Figure 11).
Figure 11: Standardized General Education Exams Used
[Bar chart comparing two-year and four-year institutions on use of the CLA, MAPP, CAAP, and locally-developed exams.]
These responses varied by accountability peer group for both two- and four-year institutions. Among two-year
institutions, technical colleges were the only schools that use WorkKeys, an exam designed to test students’
skills in real-world, job-related contexts. Among four-year institutions, 89% of research/emerging research
universities are using the CLA, compared to 14% of doctoral universities, 50% of comprehensive universities, and
27% of master’s universities. On the other hand, no research/emerging research universities are using the
MAPP, compared with 43% of doctoral universities, 33% of comprehensive universities, and 55% of master’s
universities. Some of this difference is explained by the fact that five of the nine research/emerging research
universities are in the UT System and required to administer the CLA. The general prestige of the CLA may also
contribute to its use by the more elite universities in the state.
Embedded Assessment
Embedded assessment involves reviewing student work produced for a course to determine the extent to which
students have achieved particular learning outcomes. Institutions responding to the survey indicated that the
most common method they use for the assessment of general education outcomes is a review of student work
across course sections using common rubrics, exams, or exam questions, with 85% of institutions using this
method occasionally or frequently. In addition, 84% of institutions reported reviewing student work from
individual course sections. Observations of student performance such as recitals or labs are used by 75% of
institutions; cumulative assessments such as capstone projects or portfolios are used by 60% of institutions.
Almost 20% of institutions said that they plan to use cumulative assessments in the future, reflecting the value
of this assessment method for identifying students who are close to completing their education, which can help
institutions meet SACS standards for assessing general education.
For the most part, two- and four-year institutions use embedded assessment methods at similar rates (Figure
12), but two-year institutions are more likely to use student work from individual course sections (89% versus
74%) and observations of student performance (81% versus 65%), reflecting the greater emphasis on course-level assessment by these institutions.
Figure 12: Embedded Assessment Methods
[Bar chart comparing two-year and four-year institutions on four methods: student work from individual courses, student work across courses, performance observations, and cumulative assessments.]
There were also some differences in responses based on accountability peer group. Among four-year
institutions, master’s universities were considerably less likely than other four-year schools to assess student
work across course sections, with 55% of master’s universities indicating that they occasionally or frequently use
this method versus 89% of research/emerging research universities and 100% of doctoral and comprehensive
universities. This difference can be partially explained by the fact that four master’s universities serve only junior
and senior undergraduates and do not offer the lower-division general education courses that are most
conducive to the use of common exams or assignments across course sections. If these upper-division
institutions are removed from the sample, the percentage of master’s universities that occasionally or
frequently use common exams or assignments to assess general education increases to 70%.
Different types of two-year institutions also varied in their responses to this question. In particular, technical
colleges were less likely than community colleges to use certain embedded assessment methods. Only 71% of
technical colleges reported occasionally or frequently reviewing student work from individual course sections,
versus 91% of community colleges. Similarly, 57% of technical colleges reported reviewing student work across
course sections, versus 90% of community colleges. These differences reflect the fact that technical colleges are
more likely than community colleges to use standardized exams to assess general education.
Finally, both technical colleges and small community colleges are more likely to frequently use cumulative
assessments such as capstone projects or portfolios than other two-year institutions. This assessment method is
used frequently by 57% of technical colleges and 56% of small community colleges, compared to 26% of very
large community colleges, 14% of large community colleges, and 10% of medium community colleges. In part,
this difference can be explained by the structured curriculum at technical colleges, in which most, if not all,
programs end with a capstone course. Small community colleges, on the other hand, may find it easier to use
cumulative assessments such as portfolios because of lower enrollment, which makes it more plausible to
compile a portfolio of work for all or most students.
Indirect Assessment
The survey also asked about indirect methods of assessment, which rely on reports of student learning from
employers or faculty members or from students themselves. The most common indirect method of assessing
general education outcomes is a student survey, used frequently or occasionally by 80% of institutions. Indirect
assessment methods less often used include employer surveys or advisory groups (63%), alumni surveys (53%),
faculty surveys (39%), and student focus groups (29%). As with embedded assessment, indirect assessment
methods are used at similar rates by two- and four-year institutions (Figure 13), but two-year institutions are
more likely to report using employer feedback (68% versus 53%) and student focus groups (35% versus 18%).
Figure 13: Indirect Assessment Methods
[Bar chart comparing two-year and four-year institutions on five methods: student surveys, student focus groups, alumni surveys, faculty surveys, and employer feedback.]
Institutions varied in their use of student surveys, the most common indirect method of assessment, based on
when they underwent their most recent SACS reaccreditation. Institutions reaccredited in 2005 or later were
more likely to report that they use student surveys to assess general education competencies, with 84% using
this method versus 76% of those reaccredited prior to 2005. This pattern was particularly true for two-year
institutions, with 85% of the more recently reaccredited group using student surveys, compared to 73% of the
group reaccredited before 2005. This finding suggests that, as with standardized exams, the new SACS standards
encourage the use of student surveys as a method for assessing general education competencies.
Existing Data
The final assessment methods addressed in the survey use existing state or institutional data to provide
information on student outcomes. The data most often used are student success measures such as transfer or
graduation rates, used frequently or occasionally by 63% of institutions. Data sources less often used include
student performance after graduation (52%), grades in general education courses (38%), and grades in more
advanced courses that build on general education courses (25%).
Grades in general education courses are the method institutions most often identified as having been used in
the past but not currently (17%), demonstrating the consensus among assessment experts that course grades
are not good measures of student learning. Similarly, grades in more advanced courses are the method that
institutions most often indicated they have never used (64%), which also reflects the difficulty in identifying
curricular pathways taken by students after completing general education courses.
While two- and four-year institutions show relatively little variation in their use of embedded and indirect
methods of assessing general education, the same is not true for the use of existing data (Figure 14). Two-year
institutions are much more likely than four-year institutions to use student success measures (75% versus 41%),
performance after graduation (65% versus 26%), grades in general education courses (52% versus 12%), and
grades in more advanced courses (35% versus 6%). One exception to this finding is that technical colleges are
less likely than community colleges to report using student performance after graduation (29% versus 70%),
perhaps because community colleges tend to use data from four-year institutions for their transfer students.
Figure 14: Other Data Used for Assessment
[Bar chart comparing two-year and four-year institutions on four data sources: general education course grades, subsequent course grades, student success measures, and performance after completion.]
Involving Students in Assessment
The survey asked about strategies for involving students in general education assessment, an area that
institutions often find to be a challenge. Overwhelmingly, 95% of institutions reported that they involve students
in general education assessment by using embedded assessments, which require no extra effort on the part of
the student. The next most common strategy is in-class administration of a standardized exam (55%). Less
frequently used strategies include outreach efforts explaining the importance of assessment (26%), monetary
rewards for participation (24%), non-monetary incentives such as event tickets or prize drawings (18%), and
required participation outside of class (17%).
There were some differences in strategies used based on institutional type (Figure 15), with four-year
institutions significantly more likely to use both monetary (52% versus 10%) and non-monetary incentives (36%
versus 8%). Since four-year institutions more often use standardized general education exams, they tend to
adopt these strategies to recruit students to take the exams. Across the board, institutions that use standardized
exams are more likely to use strategies such as monetary rewards (45% versus 2%), non-monetary incentives
(34% versus 2%), outreach efforts (36% versus 14%), and required participation outside of class (26% versus 7%).
Figure 15: Strategies for Involving Students in Assessment
[Bar chart comparing two-year and four-year institutions on five strategies: course-embedded assessments, outreach efforts, participation required outside class, monetary incentives, and non-monetary rewards.]
There were also some variations among different types of two- and four-year institutions as to their strategies
for involving students in assessment. Among four-year institutions, research/emerging research universities,
which typically have greater resources, are more likely to use both monetary and non-monetary incentives.
Technical colleges are much more likely than community colleges to report requiring students to participate in
assessments outside of class, with 57% of technical colleges using this strategy versus only 7% of community
colleges. On the other hand, outreach efforts are more common at very large community colleges (37%) and
small community colleges (44%), compared to 14% of technical colleges, 10% of medium community colleges,
and no large community colleges. This finding reflects both the greater resources of very large community
colleges and the more closely-knit community of small community colleges.
Finally, reaccreditation was also correlated with responses to this question, especially for two-year institutions.
Two-year institutions reaccredited in 2005 or later were more likely to report using outreach efforts (30% versus
13%), monetary incentives (15% versus 3%), and non-monetary incentives (12% versus 3%), a finding likely
connected to the greater use of standardized exams by more recently reaccredited two-year schools.
An open-ended follow-up to the question about strategies for involving students in general education
assessment asked which methods institutions have found to be effective. The majority of responses (80%)
mentioned embedded assessment and/or in-class administration of standardized exams, strategies that have
both a captive audience of students and built-in incentives for students to try their best on the assessment. Over
a quarter (27%) of four-year institutions stated that they find monetary and/or non-monetary incentives
effective (compared to only 2% of two-year institutions).
Use of General Education Assessment Findings
The use of assessment findings is a topic frequently discussed in the literature on assessing student learning
outcomes. While the focus of assessment for accreditation is closely linked to institutional improvement,
assessment for accountability has a less clear connection to efforts to promote institutional change. In addition,
colleges and universities sometimes find it difficult to use institutional-level assessment findings to make
changes in specific curricula or courses. This section of the survey examined how Texas public colleges and
universities share and use general education assessment findings.
Sharing Assessment Findings
The groups with whom colleges and universities most often share assessment findings (Figure 16) are faculty
members (84%), academic administrators (83%), and accreditors (80%). Some institutions also share assessment
findings frequently or occasionally with government regulatory bodies (68%), governing boards (54%), and staff
members (42%). Assessment findings are least often shared with students (25%), the general public (23%), and
alumni (10%). However, almost 20% of institutions said they plan to share assessment findings with alumni and
the public, and even more (29%) plan to share them with students, reflecting increased demands for public
accountability for student learning.
Figure 16: Groups with which General Education Assessment Findings are Shared
[Bar chart showing the percentage of institutions sharing findings with each group: faculty members, academic administrators, accreditors, government regulatory bodies, governing boards, staff members, students, the general public, and alumni.]
Responses to this question did not vary much by institutional type. Unsurprisingly, institutions reaccredited in
2005 or later were somewhat more likely to indicate that they occasionally or frequently share general
education assessment findings with accreditors (88% versus 71%) while institutions reaccredited before 2005
were more likely to say that they plan to share these findings with accreditors (22% versus 8%). Two-year
institutions reaccredited in 2005 or later were more likely than those reaccredited earlier to report sharing
general education assessment findings with accreditors (91% versus 69%), administrators (88% versus 76%),
government regulatory bodies (73% versus 62%), and faculty members (94% versus 72%), suggesting that the
reaccreditation process leads to increased emphasis on general education outcomes at these institutions.
External Uses
Most institutions indicated that they occasionally or frequently use assessment findings to prepare for
institutional accreditation (85%) and to respond to requirements set by state or federal government (82%). To a
lesser extent, colleges and universities also indicated that they use general education assessment findings to
prepare for specialized accreditation (68%). Institutions are less likely to use assessment findings to provide data
for public accountability (47%) or to align curricula and/or learning outcomes with feeder schools (32%) and
much less likely to use these findings to recruit prospective students (7%).
With one notable exception, two-year and four-year institutions responded to this question in similar ways
(Figure 17). Four-year colleges are considerably more likely to use general education assessment results for
public accountability (76% versus 32%), a finding undoubtedly connected with participation in the VSA.
Figure 17: External Uses of Assessment Findings
[Bar chart comparing two-year and four-year institutions on six external uses: preparing for institutional accreditation, responding to regulatory requirements, providing data for public accountability, preparing for specialized accreditation, aligning with feeder schools, and recruiting prospective students.]
Internal Uses
Institutions most often said they use general education assessment findings occasionally or frequently to refine
assessment processes and/or measures (73%), make changes to the general education curriculum (63%), revise
general education competencies (63%), improve teaching (63%), and make decisions related to strategic
planning (58%). Colleges and universities are less likely to use assessment findings to align curricula and/or
learning outcomes between general education and the majors (35%), determine student readiness for upper-level coursework (32%), allocate resources to departments (17%), and, in particular, make decisions about
faculty tenure, promotion, or merit raises (7%).
Unlike external uses for general education assessment findings, responses to this question varied considerably
by institutional type (Figure 18). Two-year institutions are considerably more likely to use general education
assessment findings to improve teaching (71% versus 47%), revise general education competencies (71% versus
47%), make changes to the general education curriculum (70% versus 50%), make decisions related to strategic
planning (68% versus 38%), and determine student readiness for upper-level coursework (40% versus 18%).
Some of these differences may be related to the fact that two-year colleges more often report conducting
general education assessment at the department and course levels. As a result, they find it easier to use
assessment data to make changes to courses and curricula.
Figure 18: Internal Uses of Assessment Findings
[Bar chart comparing two-year and four-year institutions on seven internal uses: improving teaching, revising general education competencies, changing the general education curriculum, conducting strategic planning, refining the assessment process, determining student readiness, and aligning general education with the majors.]
Four-year institutions are more likely to frequently or occasionally use general education assessment findings to
refine assessment processes and/or measures (85% versus 67%). One difference in responses among different
types of four-year institutions was also noted. While 83% of comprehensive universities and 72% of doctoral
universities indicated that they use assessment findings to improve teaching, only 33% of master’s universities
said the same, although that percentage increased to 50% when upper-division institutions were excluded from
the analysis. In addition, only 22% of research/emerging research universities said they occasionally use general
education assessment findings to improve teaching and none said they do so frequently, indicating a significant
gap in practice between these more elite universities and other four-year institutions.
Colleges and universities also varied in their responses to this question based on when they last underwent
reaccreditation. Institutions reaccredited in 2005 or later are more likely to use general education assessment
findings to refine assessment processes and/or measures (80% versus 65%), allocate resources (22% versus
11%), and improve teaching (71% versus 54%), suggesting that the reaccreditation process encourages use of
general education assessment findings in these areas or, in the case of allocating resources and improving
teaching, that more useful assessment findings are available following the work done to prepare for
reaccreditation. This pattern held especially true for two-year institutions reaccredited under the new SACS
standards, which were more likely than those reaccredited earlier to use general education assessment findings
to refine assessment processes and/or measures (76% versus 57%) and improve teaching (82% versus 60%).
Four-year institutions reaccredited in 2005 or later, on the other hand, were more likely to use general
education assessment findings to revise their general education competencies (56% versus 38%).
Changes to General Education
Almost two-thirds (64%) of institutions stated that they have either recently made changes to their general
education and/or core curriculum or are in the process of making them. There was little variation in responses
to this question based on institutional type or when the institution underwent reaccreditation.
In terms of the types of changes made, the most common responses discussed changing the content of general
education classes (22%)—in particular, adding more emphasis on writing or critical thinking across the general
education curriculum—and adding or dropping specific core classes to ensure that all general education
competencies are adequately covered (18%). Some institutions also mentioned adding or dropping core
requirements, such as eliminating a required computer class or adding a learning skills class (13%), defining new
general education outcomes (12%), mapping general education outcomes to the core curriculum (10%),
reducing the number of courses that students can use to meet core requirements (8%), and reducing the total
number of hours in the core, which can vary from 42 to 48 hours (7%). A final area in which changes were made
was adjusting the general education assessment plan, mentioned by 20% of institutions. Compared to two-year
institutions (Figure 19), four-year institutions were more likely to have changed core course content (33% versus
15%) and defined new general education outcomes (29% versus 3%).
Figure 19: Changes to General Education
                                                        Two-year    Four-year
Changed core course content                               15.4%       33.3%
Added or dropped core courses to cover competencies       15.4%       23.8%
Added or dropped core requirements                        15.4%        9.5%
Defined new learning outcomes                              2.6%       28.6%
Mapped outcomes to curriculum                             10.3%        9.5%
Reduced number of core courses                            10.3%        4.8%
Reduced core hours                                         5.1%        9.5%
Changed assessment plan                                   17.9%       23.8%
Institutions reaccredited under the new SACS standards also varied somewhat in their responses to this question
when compared to institutions reaccredited prior to 2005. In particular, colleges and universities reaccredited in
2005 or later were more likely to report that they have reduced the number of courses available to students to
meet core requirements (16% versus 0%) and made changes to the general education assessment plan (28%
versus 11%), both of which are reasonable responses to the complexity of undertaking a comprehensive
assessment of general education competencies as required by SACS standards.
Improving General Education Assessment
Assessing general education competencies has been a difficult process for many colleges and universities. The
survey addressed challenges that can impede the effective assessment of general education outcomes and
possible options for improvement.
Challenges in Assessing General Education
In an open-ended question, institutions were asked to identify one or two significant challenges they face in
assessing general education competencies. The largest cluster of responses fell in the area of faculty
participation in general education assessment, with 32% of institutions indicating that a lack of faculty buy-in or
engagement is a challenge for them, 15% stating that their faculty members need more training in conducting
assessment, and 11% noting that assessment places an additional burden on already heavy faculty workloads or
that there is insufficient incentive for faculty to work on general education assessment.
Institutions also noted a number of challenges related to the assessment process. Among these are the need for
more meaningful measures of student learning (12%), more efficient data collection procedures (7%), and
strategies to assist in using assessment findings to improve student learning (4%). Institutions also mentioned
the challenge of measuring general education competencies in an environment where many different courses,
often taken at different institutions, can be used by students to meet core curriculum requirements (8%) and
the need for more agreement, both on individual campuses and across the state, about which general education
competencies should be measured and how to measure them (12%). Another challenge raised, particularly by
four-year institutions, is the sense that there is a disconnect between general education and the academic
disciplines, leading to a lack of faculty ownership over general education (18% of four-year institutions versus 8%
of two-year institutions).
A final topic mentioned by a substantial number of institutions was a lack of financial and/or human resources
to support general education assessment. This challenge was of particular concern to two-year institutions, with
21% mentioning a lack of funding for assessment and 16% a lack of staffing, versus 3% and 0% for four-year
institutions.
Options for Improving General Education Assessment
Survey respondents were also asked to choose three options that would be most helpful in improving general
education assessment practices at their institutions. Responses to this question generally mirrored the
comments discussed above. The most commonly mentioned options for improvement overall were additional
faculty and staff expertise in assessment methods (54%), more faculty involvement (49%), and additional
resources (42%). In addition, at least a quarter of institutions chose agreement on general education/core
curriculum learning outcomes throughout the state (37%), better ways to measure student learning (34%), and
more information about practices and policies at peer institutions (26%). The options mentioned least often
were more student involvement (13%) and including student learning in institutional strategic plans (8%).
Responses to this question varied somewhat by institutional type (Figure 20). Four-year institutions were more
likely to choose more faculty involvement (56% versus 44%) and emphasis on assessment by institutional
leaders (29% versus 14%). This pattern reflects the challenges faced by four-year institutions in making
undergraduate general education a priority for faculty members and institutional leaders, who are often more
focused on research or on graduate and undergraduate education in discipline-based majors. This conclusion is
supported by the fact that more elite universities were more likely to select this option, with 44% of
research/emerging research universities and 43% of doctoral universities choosing it, compared to 17% of
comprehensive and master’s universities. Two-year institutions, on the other hand, were more likely to choose
agreement on general education/core curriculum learning outcomes throughout the state (43% versus 27%),
suggesting an interest in ensuring some consistency in undergraduate general education.
Figure 20: Options for Improving General Education Assessment
                                                        Two-year    Four-year
Additional faculty or staff expertise in assessment       54.0%       52.9%
More faculty involvement                                  44.4%       55.9%
Additional resources                                      44.4%       38.2%
Agreement on learning outcomes throughout the state       42.9%       26.5%
Better ways to measure student learning                   36.5%       29.4%
Emphasis on assessment by institutional leaders           14.3%       29.4%
Institutions reaccredited before 2005, and therefore facing reaccreditation within the next few years, were more
likely to indicate a need for additional resources (48% versus 37%) while institutions reaccredited in 2005 or
later were more likely to suggest a need for agreement on general education/core curriculum learning outcomes
throughout the state (43% versus 30%). This second trend was particularly clear for four-year institutions, with
39% selecting this option versus 13% of those reaccredited before 2005. In addition, two-year institutions
reaccredited before 2005 were more likely to choose emphasis on assessment by institutional leaders (23% versus 6%),
and those reaccredited in 2005 or later were more likely to choose better ways to measure student learning (46%
versus 27%). All of these findings suggest that issues of resources and institutional buy-in are reduced as colleges
and universities undergo reaccreditation under the new SACS standards but that new issues, such as difficulties
in measuring general education competencies, are highlighted by the reaccreditation process.
Comparisons to Other Surveys
The assessment of student learning outcomes has been a topic of considerable public discussion in recent years,
particularly in response to increased demands from policymakers that colleges and universities be held
accountable for demonstrating that their graduates are adequately prepared for the workforce. This discussion
has led to new research on the topic, in an effort to better understand the assessment policies and practices
currently in place at postsecondary institutions. The 2009 HEPI General Education Assessment Practices Survey
is an example of such research, but it is only one of several related surveys conducted in the last few years.
While the other surveys emphasize learning outcome assessment more broadly, rather than focusing on the
assessment of general education competencies, and include private as well as public institutions in their
samples, they can still provide data that helps contextualize what the HEPI survey reveals about learning
outcomes assessment in Texas.
National Institute for Learning Outcomes Assessment (NILOA)
NILOA is a research institute, established in 2008, that focuses on identifying and disseminating best practices in
learning outcomes assessment. One of the first projects undertaken by NILOA’s research team was a survey of
regionally accredited colleges and universities across the United States, conducted in Spring 2009. The survey
included two- and four-year public, private non-profit, and for-profit institutions (NILOA, 2009a). The NILOA
survey provides an important source of recent data on how and why colleges and universities are assessing
student learning and using the findings from those assessments. As such, it provides a national context in which
to locate findings about Texas public colleges and universities.
The NILOA survey asked chief academic officers if their institutions have a person or unit responsible for
coordinating learning assessment across the campus and, if so, how many full-time equivalent staff positions are
assigned to this unit. Responses indicated that 47% of doctoral institutions and 19% of two-year institutions
have at least one assessment staff position (NILOA, 2009a). This percentage is considerably lower than the 79%
of four-year institutions and 35% of two-year institutions in Texas that indicated having at least one full-time
assessment staff position. However, the HEPI survey did not specify that the assessment staff position be
responsible for coordinating learning assessment across the campus, which allowed Texas institutions to include
more staff positions.
In terms of assessment methods, the NILOA survey found that 76% of responding colleges and universities use a
national student survey and 39% use a standardized general education exam (NILOA, 2009a). The percentage of
schools using a national student survey is only slightly lower than the 80% of Texas public institutions that use
either a national or locally-developed student survey. However, in Texas, 56% of colleges and universities
reported that they use or are in the process of adopting a standardized general education exam. The HEPI survey
focused on public institutions and most Texas public universities participate in the VSA, which requires the use
of a standardized general education exam. The NILOA survey, on the other hand, included private non-profit and
for-profit institutions that do not have the same external pressure to use this sort of exam. The NILOA survey did
find, though, that institutions accredited by SACS are more likely than schools in other regions to use
standardized general education exams, which may be another factor driving this difference.
The NILOA survey also asked chief academic officers about how they use assessment findings. Responses to this
question indicated that assessment findings are most often used for external purposes such as preparing for
institutional and programmatic accreditation or responding to accountability calls and for internal purposes such
as revising learning outcomes, modifying the general education curriculum, improving teaching, or informing
strategic planning (NILOA, 2009a). The responses to a similar question on the HEPI survey were very much in line
with these answers, although HEPI respondents, who were asked specifically about the use of general education
assessment findings, were less likely to say that such findings are used for programmatic accreditation.
Respondents to both surveys were also in agreement that assessment findings are much less often used to
allocate resources to programs or to evaluate faculty members for tenure or merit pay.
When asked about options to improve the assessment of student learning on their campuses, respondents to
the NILOA survey indicated that their top options include more faculty engagement in assessment (66%), more
faculty and staff expertise in assessment (61%), and additional financial and/or human resources (50%) (NILOA,
2009a). The same three options were mentioned most often by respondents to the HEPI survey, although with
lower percentages. HEPI respondents were more likely than NILOA respondents to mention a need for
information on policies and practices at other institutions (26% versus 18%). This response, together with an
interest in building agreement on general education outcomes across the state (which was not asked on the
NILOA survey), reflect the fact that the HEPI survey focused only on public institutions in a single state.
Learning Assessment in Missouri Postsecondary Education Advisory Council (LAMP)
LAMP was established in 2008 to examine statewide issues related to learning outcomes assessment in Missouri
higher education and to make recommendations in this area to the Commissioner of Higher Education. The
Advisory Council includes representatives from both the public and private higher education sectors and from
two- and four-year institutions, as well as from the K-12 sector. Early in LAMP’s existence, its Assessment
Practices Subcommittee conducted two surveys related to learning outcomes assessment: the Survey of
Assessment Culture and the Missouri Assessment Instruments Survey. Like the NILOA survey described above,
the LAMP surveys included both public and private institutions and looked at a range of assessment practices,
including general education assessment. Despite these differences from the HEPI survey, the LAMP surveys
provide a useful look at how learning outcomes assessment is conducted in a state other than Texas.
LAMP’s Survey of Assessment Culture asked institutions about the organization of and resources devoted to
learning outcomes assessment, and as with HEPI survey respondents, LAMP survey respondents described
considerable diversity in institutional structures and policies related to assessment. LAMP survey respondents
noted that responsibility for assessment is most often located in the office of the chief academic officer or in
offices of assessment or institutional research, institutional roles associated with general education assessment
in the HEPI survey findings. In addition, LAMP survey respondents noted the important and growing role of
assessment management software on their campuses, again reflecting the HEPI survey findings.
As of 2008, the standardized general education exams used in Missouri varied a bit from what was reported on
the 2009 HEPI survey (LAMP, 2008). In particular, all public institutions in Missouri, both two- and four-year,
reported using a standardized general education exam, versus 56% of Texas public colleges and universities. In
addition, the exam used most often by both two- and four-year institutions in Missouri was the College Basic
Academic Subjects Evaluation (CBASE), which is used by only one Texas institution. However, CBASE was
developed by the University of Missouri-Columbia and is required by the state for students entering upper-division coursework in the field of education, which explains its widespread use. CBASE aside, the most
commonly used exam for four-year institutions was the MAPP, followed by the CLA and the CAAP. In Texas, the
CLA is used by more four-year institutions than is the MAPP. For two-year institutions in Missouri, the most
common exams were WorkKeys, which is used only by technical colleges in Texas, and the CAAP, which is a
common choice for Texas two-year institutions.
The LAMP survey asked respondents to indicate important changes that could be made to improve learning
outcomes assessment at their institutions, and the two biggest concerns raised were a need for additional
human and/or financial resources devoted to assessment and a need to implement more structured assessment
policies that guide institutional practice in collecting, analyzing, and using assessment data. Respondents also
noted the need for more faculty involvement in assessment, and while all responding institutions reported
considerable faculty support for assessment, most also felt that additional time and energy could be spent on
continued efforts to build faculty engagement. These responses reflect those from the HEPI survey—in essence
highlighting the fact that institutions need to devote resources to building a culture of assessment on their
campuses in order to institutionalize and sustain the work done so far.
Conclusions and Further Research
The findings from the 2009 HEPI General Education Assessment Practices Survey provide considerable
information about the assessment of general education competencies at Texas public colleges and universities.
Of particular note is the diversity of approaches used by different institutions to assess general education
competencies. The survey also demonstrates the widespread use, and perceived value, of embedded
assessment methods, which are used by far more institutions than use standardized general education exams.
Another important finding is that institutions both report significant faculty involvement in the general
education assessment process and identify a need for more faculty engagement and expertise in assessment.
Finally, the survey supports other research in suggesting that assessment findings are used more for external
purposes, such as accreditation and accountability, than for internal purposes such as the allocation of
resources, strategic planning, and the improvement of teaching.
The analysis of survey findings by institutional type and accountability peer group shows some important
differences based on institutional type, size, and mission. In particular, it is striking that two-year institutions are
more likely than four-year institutions to use general education assessment findings for purposes such as
improving teaching, making changes to the general education curriculum, and making decisions related to
strategic planning. At the same time, two-year institutions are more likely than four-year institutions to raise
concerns about a lack of financial and/or human resources, and smaller two-year institutions are less likely than
their larger counterparts to have dedicated assessment staff positions and funding or release time for faculty or
staff to work on assessment. Clearly, many two-year institutions are very serious about using assessment
findings to improve educational quality but are hindered by a lack of resources.
Finally, the analysis based on when institutions most recently underwent reaccreditation by SACS provides
additional evidence of the influence of accreditation on assessment practices. Institutions reaccredited under
the new SACS standards are more likely to have shorter assessment cycles and dedicated assessment staff
positions, to use both standardized exams and student surveys as methods for assessing general education
competencies, and to use general education assessment findings to improve teaching and allocate resources.
These findings suggest that, as more institutions are reaccredited under the new SACS standards, additional
changes in assessment practice will be seen.
These findings have implications for state policy in Texas. Colleges and universities make the case that more
faculty engagement and expertise are needed to improve the assessment of general education outcomes. This
area is one in which the state could take action, through grants to institutions for professional development,
regional or statewide symposia on effective assessment, or the creation of an online networking community of
assessment practitioners. In addition, two-year institutions, especially smaller ones, are hindered by a lack of
resources to devote to assessment. The state could help remedy this situation by giving those institutions
priority in any of the initiatives described above. At the same time, the state has an interest in seeing institutions
produce more comparable assessment data and make assessment findings accessible to the public. Priority in
allocating support for assessment capacity-building could also be given to institutions willing to commit to
improving practice in these areas.
Suggestions for Further Research
Assessment policies and practices are in flux as institutions respond to changing accreditation standards and
new calls for public accountability. This survey has provided a useful snapshot of such policies and practices in
Texas as of late 2009, but much has likely changed in 2010 and will continue to change in the future. To keep
abreast of trends in assessment policy and practice, it would be valuable to survey Texas public colleges and
universities on an annual or biannual basis. Additional data could further clarify the relationship between
reaccreditation and assessment practice. It would also provide institutions with regular information about the
assessment practices used by their peers.
Furthermore, a number of key questions about the assessment of general education could not be answered
using the data from this survey. The survey only addressed the assessment of general education competencies
required of all students; assessment of discipline-specific competencies is also an essential part of measuring
student learning. Experts argue that the acquisition of essential learning outcomes must be considered in
discipline-specific contexts. Critical thinking, for example, may be assessed in different ways for a nursing
student versus a history major. Collecting data on how Texas public colleges and universities are approaching
the assessment of student learning in selected programs would be useful for better understanding this issue.
Additional research could also help clarify important relationships among institutional assessment capacity, the
use of various assessment methods, and how assessment findings are used by institutions. Since it is desirable
from a policy perspective that institutions use assessment findings for institutional improvement, rather than as
simply an exercise in mandatory reporting, understanding these relationships is essential. If increases in
assessment capacity and/or the use of particular assessment methods are associated with more effective
assessment use, the state should work to help build institutional capacity, especially for two-year institutions,
and might also recommend the use of certain assessment methods to examine the effectiveness of the core
curriculum. New questions on future surveys could provide useful information to shape state policy in this area.
References
American Association of State Colleges and Universities. (2006, Spring). Value-added assessment:
accountability’s new frontier. Perspectives, 1(16). Retrieved May 21, 2009, from
http://www.aascu.org/pdf/06_perspectives.pdf
Association of American Colleges and Universities. (2005). Liberal education outcomes: a preliminary report on
student achievement in college. Retrieved April 28, 2009, from
http://www.aacu.org/advocacy/pdfs/LEAP_Report_FINAL.pdf
Association of American Colleges and Universities. (2008, January 9). How should colleges assess and improve
student learning: employers’ views on the accountability challenge. Retrieved July 26, 2010, from
http://www.aacu.org/leap/documents/2008_Business_Leader_Poll.pdf
Association of American Colleges and Universities. (2010, January 20). Raising the bar: employers’ views on
college learning in the wake of the economic downturn. Retrieved April 28, 2009, from
http://www.aacu.org/leap/documents/2009_EmployerSurvey.pdf
The Conference Board, Corporate Voices for Working Families, the Partnership for 21st Century Skills and the
Society for Human Resource Management. (2006). Are they really ready to work? Employers’
perspectives on the basic knowledge and applied skills of new entrants to the 21st century U.S. workforce.
Retrieved July 26, 2010, from http://www.p21.org/documents/FINAL_REPORT_PDF09-29-06.pdf
Learning Assessment in Missouri Postsecondary Education Advisory Council. (2008). Survey of assessment
culture and Missouri assessment instruments survey. Retrieved June 18, 2009, from
http://www.dhe.mo.gov/files/lampassessmentculturesurveydraft05112009lampassessmentculturesurveyappendices.docx
Learning Assessment in Missouri Postsecondary Education Advisory Council. (2009, June). Status report to the
Coordinating Board for Higher Education June 2009. Retrieved June 18, 2009, from
http://www.dhe.mo.gov/files/lampstatusreport06112009.pdf
National Institute for Learning Outcomes Assessment. (2009a, October). More than you think, less than we need:
learning outcomes assessment in American higher education. Retrieved July 26, 2010, from
http://www.learningoutcomeassessment.org/documents/fullreportrevised-L.pdf
National Institute for Learning Outcomes Assessment. (2009b). Survey questionnaire. Retrieved August 3, 2009,
from http://www.learningoutcomeassessment.org/documents/NILOA09_PaperQ_Final_color2.pdf
Southern Association of Colleges and Schools Commission on Colleges. (2010). The principles of accreditation:
foundations for quality enhancement. Retrieved March 11, 2010, from
http://www.sacscoc.org/pdf/2010principlesofacreditation.pdf
Texas Higher Education Coordinating Board, 19 Texas Administrative Code §4.30 (Criteria for Evaluation of Core
Curricula). Retrieved June 18, 2009, from http://info.sos.state.tx.us/pls/pub/
Appendix: Survey Development and Administration
Prior to developing the 2009 HEPI General Education Assessment Practices Survey, we reviewed the literature
on assessing general education competencies to identify key issues and potential best practices. We also
examined a number of related surveys that could serve as possible sources of comparison data. In particular, we
looked at surveys of assessment practices and instruments at public and private institutions in Missouri and at a
national survey of assessment practices at accredited public and private colleges and universities conducted by
the National Institute for Learning Outcomes Assessment. While neither of these surveys was focused
specifically on assessing general education, reviewing them helped us refine our questions and response
categories (LAMP, 2008; NILOA, 2009b).
Following the development of a draft survey instrument, we shared it with participants in the accountability
peer groups convened by the Texas Higher Education Coordinating Board. These groups include staff members
and administrators in institutional research, assessment, and academic affairs. Comments provided by
accountability peer group representatives were very helpful in identifying problematic or confusing questions
and suggesting additional response categories. The THECB’s Research and Data Collection Committee also
provided useful feedback on the survey draft. The final version of the survey follows this appendix.
The survey was launched in November 2009, using the online survey tool Survey Monkey. Invitations to
participate were sent to the Chief Academic or Instructional Officers for all Texas public colleges and universities
except the health-related institutions, which have only limited undergraduate programs. The survey invitation
was copied to the accountability peer group representatives, who provided valuable assistance in ensuring that
the survey was completed by most institutions.
The overall survey response rate was 97%. Response rates were similar for two- and four-year institutions, with
34 of 35 four-year institutions and 63 of 65 two-year institutions completing the survey. Only two of the nine
accountability peer groups had response rates below 100%. Responding institutions were generally evenly
divided between those reaccredited in 2005 or later (53%) and those reaccredited prior to 2005 (47%).
Following collection of the survey data, responses for each accountability peer group were reviewed with that
group’s representatives, who provided useful feedback on ways to interpret the data and ideas for further
analysis. An early version of this analysis was presented at the 10th Annual Texas A&M University Assessment
Conference and at the Texas Association for Institutional Research Conference in February and March 2010.
Comments from attendees at these presentations were used to refine the analysis included in the report.
HIGHER EDUCATION POLICY INSTITUTE
A Project of the Texas Higher Education Coordinating Board
General Education Assessment Practices Survey 2009
The Higher Education Policy Institute (HEPI) is conducting research on the assessment of undergraduate general education
competencies at Texas public colleges and universities. The purpose of this survey is to help us better understand the many
ways different public postsecondary institutions in the state approach this issue and to share that information with other
institutions and THECB staff.
For the purpose of this survey, we are defining general education competencies as the knowledge, skills, and abilities all
students should possess upon completion of postsecondary education. Under this definition, general education includes, but is
not necessarily limited to, the core curriculum.
Please complete the survey by Friday, December 4, 2009.
Institution: ___________________________________________________________________________________
Name and title of person completing survey: _________________________________________________________
Structure of and Resources for General Education Assessment
1) Which offices and/or groups are responsible for the following aspects of assessing general education competencies at
your institution? (Choose all that apply)
Columns: Selecting Assessment Methods | Developing Assessment Instruments | Collecting & Analyzing
Assessment Data | Reviewing Assessment Findings | Other Work
Office of the Provost/Chief Academic Officer
College/Office of Undergraduate or General
Studies
Office of Assessment/Accountability/Institutional
Effectiveness
Office of Institutional Research
General education assessment committee
Faculty members
Other (please specify):
2) Please briefly describe the process through which your institution assesses general education competencies.
3) Does your institution have one or more staff positions whose primary focus is learning assessment?
a. Yes
b. No
3a) (If yes) In what office or offices is this position or positions located? (Choose all that apply)
a. Office of the Provost/Chief Academic Officer
b. College/Office of Undergraduate or General Studies
c. Office of Assessment/Accountability/Institutional Effectiveness
d. Office of Institutional Research
e. Academic colleges or departments
f. Other (please specify):___________________________________________________________
4) What resources are available at your institution to support assessment of general education competencies? (Choose all
that apply)
a. Assessment training for faculty or staff as part of formal professional development activities (e.g. in-service
sessions, orientation for new faculty)
b. Optional assessment workshops for faculty or staff
c. One-on-one consultation with faculty
d. Funding for faculty or staff to attend assessment conferences or training
e. Funding or release time for faculty or staff to work on assessment
f. Books/reference materials on assessment distributed to faculty or staff
g. Other (please specify):___________________________________________________________
5) Does your institution use centralized software for assessment data collection and reporting?
a. Yes
b. No
c. In the process of adopting
5a) (If yes or in process) Which software package do you use or plan to use? (Choose all that apply)
a. Blackboard
b. Datatel
c. LiveText
d. rGrade
e. Strategic Planning Online (SPOL)
f. Task Stream
g. Tk20
h. TracDat
i. Weave Online
j. Locally-developed software
k. Other (please specify):___________________________________________________________
5b) (If yes) How long have you been using this software package?
General Education Assessment Practices
6) What is the cycle on which your institution assesses all general education competencies and reviews the assessment
findings (recognizing that specific competencies may be assessed on a more frequent schedule)?
a. Every semester
b. Every year
c. Every two years
d. Every three years or beyond
e. Other (please specify):______________________________________________________________
7) Does your institution use general knowledge and skills exams to assess general education competencies (e.g. CAAP,
CLA, MAPP)?
a. Yes
b. No
c. In the process of adopting
7a) (If general knowledge and skills exam is used) Which standardized exam(s) is/are used to measure general education
competencies at your institution? (Choose all that apply)
a. CAAP
b. CBASE
c. CLA
d. CCTST
e. MAPP
f. WorkKeys
g. Institutionally-developed exam
h. Other (please specify):___________________________________________________________
7b) (If general knowledge and skills exam is used) Which students at your institution take the general knowledge and skills
exam(s)? (Choose all that apply)
a. All students completing general education and/or core curriculum (e.g. rising junior exam)
b. All graduating students
c. A sample of students completing general education and/or core curriculum
d. A sample of graduating students
e. A sample of first-year and senior students
f. Other (please specify):___________________________________________________________
8) Which methods of assessing general education competencies are used at your institution and to what extent?
Columns: Never Used | Used in Past But Not Now | Planning to Use | Use Occasionally | Use Frequently
Course grades in general education and/or core curriculum
classes
Course grades in more advanced classes that build on general
education and/or core curriculum classes
Student work from individual course sections (e.g. quizzes,
exams, essays, research papers)
Common exams, exam questions, assignments, and/or grading
rubrics to assess student work across course sections
Observations of student performance (e.g. simulations, labs,
recitals)
Cumulative assessments (e.g. capstone projects, portfolios)
Student surveys (e.g. NSSE/CCSSE, CIRP surveys, locally-developed surveys)
Faculty surveys
Alumni or graduate surveys
Employer surveys or advisory groups
Student focus groups
Student success measures (e.g. transfer rates, graduation rates,
etc.)
Data on student performance after transfer or graduation
Other (please specify):
9) What strategies are used to involve students in the general education assessment process at your institution? (Choose all
that apply)
a. Course-embedded assessments
b. In-class administration of a standardized exam
c. Participation in assessment is required outside of class
d. Educational/outreach efforts (e.g. explaining the importance of assessment)
e. Monetary incentives
f. Non-monetary rewards (e.g. event tickets, prize drawings, etc.)
g. Other (please specify):___________________________________________________________
9a) Which of these strategies (if any) do you find to be most effective?
10) Which of the following options would be most helpful in improving general education assessment practices at your
institution? (Choose 3)
a. More faculty involvement
b. More student involvement
c. Emphasis on assessment by institutional leaders
d. Including student learning in the institutional strategic plan
e. Additional faculty or staff expertise in assessment methods
f. Better ways to measure student learning outcomes
g. Additional resources (e.g. staff, funding)
h. Information about practices and policies at peer institutions
i. Agreement on general education/core curriculum learning outcomes throughout the state
j. Other (please specify):___________________________________________________________
11) Please describe one or two practices used by your institution to assess general education competencies that you consider
to be working well.
12) Please describe one or two significant challenges that your institution faces in assessing general education competencies.
Use of General Education Assessment Findings
13) With whom and how frequently are institutional-level reports on general education assessment findings shared?
Columns: Never | Planning To | Occasionally | Frequently
Academic administrators
Faculty
Staff
Students
Governing boards
Accreditors (specialized or SACS)
Government regulatory bodies (state or federal)
Alumni
General public
Other (please specify):
14) Who makes decisions about changes to general education and/or the core curriculum based on assessment findings?
(Choose all that apply)
a. Office of the Provost/Chief Academic Officer
b. College/Office of Undergraduate or General Studies
c. Academic affairs committee
d. General education/core curriculum committee
e. Deans of individual colleges
f. Department chairs or departmental undergraduate studies committees
g. Faculty members
h. Other (please specify):___________________________________________________________
15) To what extent has your institution used general education competencies assessment findings for each of the following?
Columns: Never Used | Planning to Use | Use Occasionally | Use Frequently
Prepare for institutional accreditation
Prepare for specialized accreditation
Provide data for public accountability (e.g. VSA)
Respond to assessment requirements set by THECB and/or governing
board
Revise general education competencies
Make changes to general education/core curriculum
Refine assessment process and/or measures
Determine student readiness for upper-level coursework
Align curricula and/or learning outcomes with K-12 and/or feeder
colleges/universities
Align curricula and/or learning outcomes between general education
and major sequence courses
Improve teaching (e.g. faculty development programs)
Make decisions about faculty tenure, promotion, or merit raises
Allocate resources to departments
Recruit prospective students
Make decisions related to strategic planning
Other (please specify):
16) Has your institution made any recent changes to general education and/or the core curriculum as a result of the
assessment process?
a. Yes
b. No
c. In the process of making changes
16a) (If yes or in process) Please briefly describe the changes to general education and/or the core curriculum and the reasons
for making them.