Abstract
NC State recently conducted major alumni and employer surveys to obtain feedback on alumni preparation and performance. An important feature of both survey instruments was a series of identical items concerning the professional preparation of alumni. This paper presents the findings of a comparative study of those ratings and discusses implications for further assessment research at NC State.
Introduction
As the external pressure mounts on institutions to lead a more self-examined life (Ewell, 1984) and institutions develop more comprehensive approaches to assessing their outcomes, the search for valid and reliable indicators of institutional performance has never been more intense. In addition to classical measurement considerations, central problems for faculty, administrators, and institutional researchers engaged in this search have been the believability and utility of the data obtained through assessment processes. What is easy to measure may not be what is most meaningful for measuring program performance; thus "this implies [the need for] assessment approaches that produce evidence that relevant parties will find credible, suggestive, and applicable to decisions to be made" (AAHE, 1992, p. 3). Alumni and employer assessments have the advantage of ranking high on believability and utility for formative, faculty-driven assessment purposes as well as for summative evaluations at the system or state level (Banta et al., 1996, pp. 58-59).
In addition to the believability of assessment data, the cost of assessment and evaluation is of increasing concern. As has long been documented in the program evaluation literature, the monetary outlay connected with designing and performing a defensible evaluation can be considerable. Higher education, Levine (1997) asserts, has become a "mature industry," one which can no longer expect generous, uncritical acceptance by regulatory and oversight bodies, government funding sources, or the general public. Given this situation, it is entirely reasonable to assume that higher education will be subjected to the same extreme forces of reengineering that characterized industry in the 1980s and healthcare so far in the 1990s (cf. Ewell, 1991b). The high demand for accountability ensures that resources will continue to be devoted to evaluation, but the focus for the present and foreseeable future needs to be on cost-effective performance assessment (Loacker, 1991) as responsibility for administrative and performance assessment functions is pushed further into already overburdened academic departments.
In recognition of these trends, providing useful assessment data at the departmental level through a centrally-administered process has been an avowed goal of the survey research program at North Carolina State University. Crucial to this program are periodic large-scale alumni and employer surveys that feature a common core of items and identical measurement scales. The subject of this paper is an analysis of the results of the first iteration of the NC State survey research program to determine commonalities that may exist between alumni and employer assessments of alumni preparation.
Population of Interest
The population of interest for this project was all NC State baccalaureate degree recipients from December 1990 through August 1993 and the current employers of those graduates. Because a primary goal of this project was to provide assessment data that could be disaggregated to the department level, a very large pool of graduates was needed. Faculty indicated a strong belief that any estimates of growth should be obtained from alumni long enough after graduation to allow opportunity for reflection, but soon enough that respondents could still distinguish the knowledge, skills, and abilities gained through collegiate education from those realized through on-the-job experience. With this time frame and desired level of disaggregation in mind, the focus of the project became the assessment of alumni and employer opinion between two and four years following graduation.
Theoretically, the basis for developing the alumni and employer surveys at NC State was the notion that undergraduate program impact could best be measured by assessing the knowledge, skills, and abilities that connect the academy to the world of work. Mentkowski and Rogers (1993, p.1) define several central principles in this connection:
Eveslage (1993), in a summary of relevant theoretical approaches, outlines areas of common ground between the academy and the world of work, including abilities such as "thinking critically, communicating effectively," and broad areas such as "learning to learn (cognitive, psychomotor and affective skills)," "academic basics (reading, writing, mathematics)," "adaptability (problem-solving and creativity)," "developmental skills (self-esteem, motivation & goal setting, personal & career development)," "group effectiveness (interpersonal, negotiation, and teamwork)," "influencing skills (organizational effectiveness and leadership)," and "ethical and civic awareness" (pp. 5-6).
Similar sets of skills and abilities have been operationalized in employer survey research at Evergreen State College reported by Mott and Hunter (1991), who used a telephone survey approach to interview alumni and their supervisors concerning preparation in skill areas listed in the college's goal statements. They found that employers suggested five skill areas in addition to those derived from the college's avowed goals, such as communication with co-workers, management skills, and computer literacy.
A somewhat different and more open-ended approach was taken in Assessing the Impact in the Workplace, a telephone interview study conducted at the University of Phoenix (Department of Institutional Research Report IR-92.5, 1992). Employers of alumni were asked to discuss areas of employee development such as "oral and written communication, leadership, customer relations, decision making, planning, team work, [and] ability to learn" (p.77). Employers were also encouraged to identify other areas of employee development important to them, and to assess how well the institution was meeting employer needs in these areas.
Banta (1993) reports on a major alumni and employer survey research project undertaken by six Tennessee institutions of higher education in which information from direct supervisors of alumni was requested on both the importance of a specific skill, knowledge area, or professional trait/attitude and the evident quality of preparation for each graduate. In addition to finding relatively high levels of employer satisfaction, factor analysis of obtained results supported the existence of three skill areas: interpersonal skills (such as "leading others" and "adjusting to new job demands"), technical skills (such as "project planning" and "working with numerical data"), and basic skills (such as "reading effectively," "writing effectively," and "speaking effectively") (p.3). The importance of this body of research is that it validates the frameworks put forward by Mentkowski and Rogers (1993) and Eveslage (1993) discussed above, and demonstrates that meaningful connections between the academy and the world of work are fairly robust across a number of studies.
Two sets of measurement constructs frequently appear in connected surveys of alumni and employers: one is concerned with evaluating the relative importance of a knowledge or skill area to a graduate's professional position as a method for assessing the relevance of curricula (e.g., Van Dyke and Williams, 1996). The other typical measurement construct in this type of survey centers on evaluating the actual knowledge and skill level of graduates (e.g., Banta, 1993). Less typical in the literature are studies that offer comparisons between alumni and employer ratings of alumni preparation (Smith and Wilson, 1992; Womble, 1993). Still less attention has been given to attempts at systematic cross-validation of one set of ratings with another (Annis and Rice, 1992; Raymond et al., 1993). Given the current interest in evaluating academic programs via alumni and employer opinion, such a systematic cross-validation on a university-wide basis becomes an important focus.
The second theoretical focus for this research is the groundbreaking work done by Pike (1995, 1996) on validating the relationship between self-reports of student gains and the results of standardized tests of achievement. Conceptually, this paper seeks to expand Pike's work into the world of work experienced by alumni. Pike established that self-reports can to some extent be used as effective indicators of student preparation, especially as regards mathematical knowledge. Ewell, Lovell, Dressler, and Jones (1993) argue that self-reports may be valid, provided that they show consistent relationships across measures. Pike's research provided a qualified "yes" to the question of whether or not this relationship persisted across types of institutions, and set the stage for other reliable types of proxy measures to be sought.
Instrumentation
Instruments to assess educational gains from the perspective of the employer and the perspective of the alumni were developed through a lengthy and highly participatory process at NC State (Hoey, 1995). Both instruments featured an identical core set of items on knowledge, skills and abilities gained through undergraduate programs at this institution. In an approach put forward by Banta (1993), both alumni and employers were also asked to estimate the importance of each skill, ability or area of knowledge to the alum's current work. Alumni were asked to report on the level of educational preparation they experienced through their programs; supervisors were asked to rate the knowledge, skills and abilities of alumni compared to other employees at the same level and in the same capacity.
All items were rated by respondents on five-point scales. For items assessing the importance of various areas, the scales ranged from 5 = very important to 1 = not important. For items assessing the level of preparation in each area, the scales ranged from 5 = excellent preparation to 1 = poor preparation. Responses in the "not applicable" category were omitted in calculation of the mean score for both importance and preparation items.
Alumni Survey Methodology
The entire survey process was designed to include elements standard in the survey research literature and shown to contribute to response rate. For the alumni survey, this included:
This lengthy process resulted in a response rate of 51.1%, somewhat above the 21-50 percent range reported by Banta (1993) as being typical for surveys of this type.
Employer Survey Methodology
On the alumni survey instrument, the researchers requested permission to survey direct supervisors at place of employment. For those alumni who provided contact information, surveys were sent directly to their supervisors, each with one follow-up mailing. Two mailings to employers produced 616 usable responses, a 67.8 percent response rate.
A critical issue in the interpretation of data from the NC State Alumni Survey is that of data quality. This issue was explored in two ways for the present analysis. First, the degree of error present in the results was explored. Second, issues related to the response rate and generalizability of the data obtained were investigated.
Survey Error
The concept of measurement error can have many meanings relative to survey data; however, it can best be conceived in terms of two broad classes: measurable error and unknown error. Sampling error is an example of the first type. For the NC State Alumni Survey, sampling was done at the curriculum level. In anticipation of probable response rates of around 33 percent, most curricula were sampled at 100 percent to minimize this type of error. The six largest curricula were sampled at a lower fraction. The overall margin of error for the alumni survey was 1.8 percent at a 95 percent confidence level. The overall margin of error for the employer survey was somewhat broader, at 3.8 percent.
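As a rough illustration of how a margin of error of this kind is computed, the sketch below uses the standard formula for an estimated proportion with an optional finite population correction; the counts shown are hypothetical, not the actual NC State sample sizes.

```python
import math

def margin_of_error(p, n, N=None, z=1.96):
    """Margin of error for an estimated proportion p from a sample
    of size n, at the confidence level implied by z (1.96 ~ 95%).
    If the population size N is given, a finite population
    correction is applied, as is appropriate when a large share
    of the population is sampled."""
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * se

# Hypothetical figures for illustration only:
moe = margin_of_error(0.5, 2900, N=6000)
print(f"{moe * 100:.1f} percent")
```

Sampling most curricula at 100 percent shrinks the correction factor toward zero for those groups, which is why near-census sampling keeps the overall margin of error small.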
An example of the second type of error is error due to nonresponse. This is a problem since the possibility exists that nonrespondents hold systematically different attitudes or opinions from those who responded to a survey. Alumni surveys are in general plagued by this problem, and extraordinary steps were taken with the NC State Alumni Survey to raise the response rate.
Telephone follow-up
A telephone follow-up of non-respondents is a strategy frequently used in large-scale mailed surveys to explore the generalizability of the data obtained, since late respondents have been shown to be closer in opinion to non-respondents than to early (mail) respondents. In general, the telephone respondents more closely matched the demographic makeup of the original survey sample than did the sample obtained through mailings.
While several significant differences were noted between earlier and telephone respondents on ratings of importance, no clear pattern emerged. On preparation ratings however, telephone respondents rated their preparation higher than did earlier respondents in all cases where significant differences were found between groups. All things considered, the similarity between the makeup and responses of telephone respondents and earlier respondents has the desirable effect of enhancing the confidence with which findings obtained through this survey may be generalized to the population of interest.
Item reliability
To assess the reliability of items on both surveys, Cronbach's alpha was used. For this statistic, a value of 1.0 represents perfect internal consistency, and 0.7 is a commonly accepted lower bound for item reliability. On the alumni survey, all items concerned with professional preparation showed high internal consistency, at alpha = 0.96. On the employer survey, all items connected with professional preparation also displayed high internal consistency, at alpha = 0.95 or higher.
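For readers unfamiliar with the statistic, Cronbach's alpha can be computed directly from an item-by-respondent score matrix; the minimal sketch below implements the classical formula, (k/(k-1))(1 - sum of item variances / variance of total scores).

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: one list of scores per item; all lists the same length,
    with one entry per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(variance(i) for i in items)
                          / variance(totals))
```

Two perfectly parallel items yield alpha = 1.0; uncorrelated items drive alpha toward zero.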
Comparison of demographic characteristics
Another typical step in data quality assurance for survey research projects is a post-hoc comparison of demographic information on the population, mailed sample, and obtained sample. A comparison was conducted between the demographic characteristics of gender, ethnicity, year of graduation, and college and these groups:
Significant differences (p = .001) in response were found by gender and ethnicity, but none met the test of practical significance for affecting the interpretation of findings as discussed by Banta (1993): a difference greater than five percent between the two statistics under examination. Significant differences at the p = .001 level were also found by college. Only one college was represented in the obtained sample at a level more than 5% different from its level of representation in the population; thus, caution should be exercised when interpreting the findings of this survey for those respondents. No significant differences were found by year of graduation.
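A comparison of this kind can be sketched as a chi-square goodness-of-fit statistic on the obtained sample's category counts against the population proportions, paired with the five-percent practical-significance screen described above. This is an illustrative reconstruction, not the procedure actually run at NC State.

```python
def chi_square_gof(observed, expected_props):
    """Chi-square goodness-of-fit statistic: observed category
    counts from the obtained sample versus the category
    proportions in the population."""
    n = sum(observed)
    return sum((o - n * p) ** 2 / (n * p)
               for o, p in zip(observed, expected_props))

def practical_flags(observed, expected_props, threshold=5.0):
    """Flag categories whose sample share differs from the
    population share by more than `threshold` percentage points
    (the practical-significance screen attributed to Banta, 1993)."""
    n = sum(observed)
    return [abs(100 * o / n - 100 * p) > threshold
            for o, p in zip(observed, expected_props)]
```

A category can be statistically significant (large n inflates the chi-square statistic) yet still fall below the practical threshold, which is exactly the pattern reported for gender and ethnicity.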
For the employer survey, no significant differences were found by gender, ethnicity, year of graduation, college, or reported income level - a situation that speaks well for the generalizability of the findings to the population of employers of NC State alumni.
Results are reported in this paper according to the statistical tests used. All statistical analysis for this project was done using SAS software, version 6.11.
Agreement
A basic purpose of this paper was to assess the degree of agreement between alumni and employer ratings of both the importance of, and the level of preparation in, the areas of professional preparation surveyed. Alumni and employers displayed low agreement (as measured by the kappa statistic) on four items concerned with the importance of technical or computer skills and on three items concerned with the importance of communication or interpersonal skills. Additionally, alumni and employers displayed low agreement on alumni preparation for one item: foreign language skills. Apart from these items, no meaningful agreement between alumni and employer ratings of the importance or level of professional preparation was apparent. These results are outlined in appendix 1.
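The paper does not specify which kappa variant was used; assuming Cohen's kappa for two raters on a categorical scale, the statistic can be sketched as below, with each alumnus-supervisor pair contributing one pair of ratings on a given item.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    (here, alumni and their supervisors) assigning categorical
    ratings to the same cases."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance from each rater's marginal rates.
    expected = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                   for c in set(ratings_a) | set(ratings_b))
    return (observed - expected) / (1 - expected)  # undefined if expected == 1
```

Kappa of 1.0 indicates perfect agreement, 0 indicates chance-level agreement; values below roughly 0.4 are conventionally read as "low," which matches the pattern reported here.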
Mean Ratings
Although the ratings of alumni and their direct supervisors showed few signs of agreement at the individual level, a considerable extent of agreement on ratings was evident at the university level. Several comparative charts of these mean ratings appear as examples in appendix 2. Mean ratings were calculated on a five-point scale where 1 represents low importance or poor preparation and 5 represents high importance or excellent preparation. Both alumni and employers accorded high levels of importance to professional preparation in general, although the importance of foreign language skills was judged to be notably lower than other areas. Mean ratings for alumni importance ranged from 1.86 (importance of foreign language skills) to 4.71 (importance of communication skills overall). Mean ratings for employer importance ranged from 1.99 (importance of foreign language skills) to 4.71 (importance of conducting work activities in an ethical manner).
A surprise in the results of this survey was that responding alumni rated the level of their professional preparation considerably lower than did employers. Moreover, this was a consistent finding across virtually all items concerned with professional preparation on the survey instruments with the exception of preparation in foreign language skills. Alumni mean ratings of preparation ranged from 2.93 (preparation in foreign language skills; preparation in technical computer skills) to 4.10 (ability to work independently). Employer ratings were generally higher, and ranged from 2.77 (preparation in foreign language skills) to 4.53 (preparation in ability to conduct work activities in an ethical manner).
Gap Score Analysis
The next step in our analysis was to assess agreement between alumni and employers by measuring the differences between how important they felt a skill or ability was and the extent to which NC State graduates possessed that skill or ability. This dimension, commonly referred to as performance gap scores (USA Group Noel-Levitz, 1996), has been gaining currency in student affairs literature as a useful tool for assessing both student priorities and gaps between what students expect and what they experience. Gap scores are simply the mean score of one dimension (such as perceived level of preparation) subtracted from the mean score of another dimension (such as importance).
For the present study, the use of gap scores was based upon the notion that comparing the gap scores of responding alumni and employers should yield a relatively non-scale-bound metric of the extent to which both groups held similar views on the gap between the importance of a particular skill or ability and the degree to which an NC State graduate possessed it. These scores measure, for a particular skill, the degree to which an alum is capable relative to that skill's importance. A positive gap score indicates that an alum is better prepared in a skill or ability than his/her job requires, and a negative score indicates that an alum may not be sufficiently well prepared in that particular area. For example, if alumni had rated the importance of thinking creatively as "very important" (for a score of 5) and had rated the preparation they received at NC State as "good preparation" (for a score of 4), the gap score would be 4 - 5 = -1. Similarly, if employers had rated the importance of thinking creatively as "important" (for a score of 4) and had rated graduates' knowledge and skills in this area as "average" (for a score of 3), the gap score would be 3 - 4 = -1. In this event, there would be no difference between the alumni mean gap score and the employer mean gap score, and a certain degree of agreement between alumni and employers could be inferred on the item concerned with thinking creatively.
To determine whether or not the gap scores identified were statistically significant, t-tests were performed on both the alumni and employer gap scores. Results showed that gap scores for most alumni items were significantly different from zero at p = .0001. Two exceptions were technical computer skills and skills gained through research or internship experience. The results of similar tests on employer gap scores showed that, again, gap scores for almost all items were significantly different from zero at p = .0001. The only exceptions were previous work, volunteer or internship experience and ability to work with persons from diverse ethnic and cultural backgrounds.
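The gap-score computation, together with the one-sample t-test of whether mean gap scores differ from zero, can be sketched as follows; this is a minimal reconstruction, not the SAS code actually used.

```python
import math

def gap_scores(preparation, importance):
    """Per-respondent gap score for one item: preparation rating
    minus importance rating (negative = possibly underprepared)."""
    return [p - i for p, i in zip(preparation, importance)]

def one_sample_t(xs, mu=0.0):
    """t statistic for H0: the population mean of xs equals mu."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return (mean - mu) / (sd / math.sqrt(n))

# The worked example from the text: importance "very important" (5)
# and preparation "good" (4) give a gap of 4 - 5 = -1.
print(gap_scores([4], [5]))  # → [-1]
```

With the large samples involved here, even modest mean gaps produce t statistics far from zero, which is consistent with nearly all items being significant at p = .0001.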
Having established that significant alumni and employer gap scores exist, the next step was to assess the extent to which differences exist between alumni and employer gap scores. To make this assessment, overall gap scores were created by subtracting employer mean gap scores from alumni mean gap scores for each item. Our rationale was that since both scales are the same on items across alumni and employer surveys, measuring the overall gap enables us to measure the differences in perception of preparation between alumni and employers in a manner that is not scale-bound, and to arrive at a conclusion as to whether or not alumni and employers evaluate individual alumni in a similar fashion (regardless of relative position on a rating scale). Overall gap scores will not differ from zero if employers and alumni agree on whether or not the alumni are sufficiently well prepared for the requirements of their jobs.
Thus, our next step was to use t-tests to determine whether or not the resulting overall gap scores were significantly different from zero. At the university level, results showed that overall gap scores for the great majority of items were significantly different from zero at p = .0001. Overall gap scores for two items, foreign language skills and working under pressure, were not significant. The researchers suspected at this point that analysis at a more disaggregated level might bring out underlying agreement between alumni and employers, and thus analyzed the overall gap scores by college. The concern in this analysis was to examine deviation from zero, whether positive or negative, in the overall gap scores. A lack of significant differences at the college level does not establish directionality, nor does it prove that no difference exists, but it is nevertheless instructive to note how patterns of response began to emerge in the data. These findings are graphically illustrated in appendix 3. An "X" indicates no significant difference at p = 0.1.
When viewed from an item-by-item basis, it is apparent that the lack of significant gap scores by college was strongest for six items: overall technical knowledge, ability to apply math skills, foreign language skills, defining problems, skills gained through research or internship experiences, and working under pressure. When viewed by college, it is apparent that significant overall gap scores were much less frequent in the three smallest colleges or schools at NC State: the School of Design, the College of Forest Resources, and the College of Textiles.
Factor Analysis
To assess possible causes of the differences in ratings and the apparent low level of agreement between alumni and employer ratings, factor analysis was performed on the employer responses and the subset of alumni responses for which there existed a directly comparable employer rating (n = 616). Using procedures recommended by Hatcher (1994), varimax rotation of alumni and employer data sets resulted in three-factor solutions for items on importance and separately for items on level of preparation. Items not displaying simple loading within each group (i.e., alumni or employer) were excluded from further consideration. Items not displaying similar loadings between alumni and employer groups were also excluded from further analysis.
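The paper followed procedures recommended by Hatcher (1994) in SAS; as an illustrative analogue, the sketch below extracts initial loadings from the correlation matrix, applies a varimax rotation, and screens items for simple structure. The 0.40/0.30 loading thresholds are common rules of thumb assumed here, not values taken from the paper.

```python
import numpy as np

def extract_loadings(data, n_factors=3):
    """Initial loadings: eigendecomposition of the correlation
    matrix, eigenvectors scaled by sqrt(eigenvalue)
    (a principal-components-style extraction)."""
    corr = np.corrcoef(data, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)          # ascending eigenvalues
    top = np.argsort(vals)[::-1][:n_factors]   # keep the largest
    return vecs[:, top] * np.sqrt(vals[top])

def varimax(loadings, n_iter=100, tol=1e-6):
    """Kaiser's varimax rotation (orthogonal), via successive SVDs."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(loadings.T @ (
            rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        if s.sum() < d * (1 + tol):   # stop when the criterion plateaus
            break
        d = s.sum()
    return loadings @ rotation

def simple_loading(item_row, hi=0.40, lo=0.30):
    """True if an item loads cleanly on exactly one factor:
    one loading at or above `hi`, all others below `lo`."""
    big = [abs(x) >= hi for x in item_row]
    return sum(big) == 1 and all(
        abs(x) < lo for x, b in zip(item_row, big) if not b)
```

Because the rotation is orthogonal, each item's communality (sum of squared loadings) is unchanged; only the distribution of loading across factors is simplified.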
Using this procedure, a distinct consensus on importance ratings was apparent. Both alumni and employers in this survey rated items in three distinct areas we labeled technical skills, work attitudes and skills, and interpersonal and conceptual skills, although employers clearly considered the broad array of communication skills to be more a part of interpersonal and conceptual skills than did alumni. Items with factor loadings for both alumni and employer survey results are displayed in table 1. Individual factor loadings in bold indicate that the associated item was retained in one of the three factors.
Table 1: Items rated for importance on the alumni and employer surveys (factor loading values were not preserved in this copy)
Overall communication skills
Written communication skills
Public speaking/presentation skills
Reading skills
Listening skills
Overall technical knowledge
Overall knowledge of computer applications
Basic computer skills
Technical computer skills
Ability to apply scientific principles
Ability to apply math skills
Foreign language skills
Ability to work in teams
Leadership and management skills
Using knowledge to solve problems overall
Planning projects
Defining problems
Solving problems
Thinking creatively
Bringing ideas and information together
Skills from research or intern experience
Traits overall
Professionalism
Conducting work activities in an ethical manner
Resourcefulness
Confidence in ability to perform well
Work attitudes and skills overall
Ability to adjust to new job demands
Working under pressure
Making decisions under pressure
Ability to work independently
Ability to work with diverse persons
Being dependable and punctual
Professional development overall
Ability to learn independently
Ability to grow on the job
Willingness to accept new responsibilities
Distinct areas of conceptual agreement were also apparent in the results of factor analysis of the ratings on level of professional preparation for both responding alumni and employers. Technical skills and work attitudes and skills emerged again as clear factors. The primary difference in factor loadings between ratings of importance and ratings of preparation, especially for responding alumni, was that ratings of preparation loaded much more clearly on communication skills, whereas no simple loadings were apparent for other interpersonal or conceptual skills (such as leadership and management skills, ability to work in teams, planning projects, etc.). Thus, the second factor was renamed accordingly. The lack of simple loadings for the other interpersonal or conceptual skills is interpreted as respondents' inability to classify these skills purely as work skills or interpersonal skills, since they can be perceived as either. These results are given in table 2.
Table 2: Items rated for level of preparation on the alumni and employer surveys (factor loading values were not preserved in this copy)
Overall communication skills
Written communication skills
Public speaking/presentation skills
Reading skills
Listening skills
Overall technical knowledge
Overall knowledge of computer applications
Basic computer skills
Technical computer skills
Ability to apply scientific principles
Ability to apply math skills
Foreign language skills
Ability to work in teams
Leadership and management skills
Using knowledge to solve problems overall
Planning projects
Defining problems
Solving problems
Thinking creatively
Bringing ideas and information together
Skills from research or intern experience
Traits overall
Professionalism
Conducting work activities in an ethical manner
Resourcefulness
Confidence in ability to perform well
Work attitudes and skills overall
Ability to adjust to new job demands
Working under pressure
Making decisions under pressure
Ability to work independently
Ability to work with diverse persons
Being dependable and punctual
Professional development overall
Ability to learn independently
Ability to grow on the job
Willingness to accept new responsibilities
For further analysis of the survey data, weighted indices were constructed from the items that demonstrated simple loading on one of the three factors, in accordance with suggestions advanced by Hatcher (1994).
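The construction of such an index can be sketched as a loading-weighted average of each respondent's item ratings; the normalization by the sum of the weights is an assumption here, since the paper does not give the exact weighting formula.

```python
def weighted_index(ratings, loadings):
    """Factor-based index for each respondent: item ratings
    weighted by the items' factor loadings and normalized by
    the sum of the loadings, so the index stays on the
    original 1-5 rating scale."""
    total_weight = sum(loadings)
    return [sum(r * w for r, w in zip(row, loadings)) / total_weight
            for row in ratings]

# A respondent rating two items 4 and 4, with loadings 0.8 and 0.6,
# receives an index of 4.0.
```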
Regression Analysis
The primary goal of this paper was to evaluate survey ratings given by both alumni and employers. Originally, our purpose was to determine how valid alumni ratings were by using employer ratings of professional preparation as a criterion, since employer ratings have the advantage of being more believable to many constituencies both on and off campus. To arrive at such a determination, the weighted indices of items that clearly loaded on factors dealing with importance or level of preparation in professional skills were regressed against the self-reported salary levels of alumni. The weights were calculated from the factor loadings in tables 1 and 2, and were assumed to be random effects. Salary was used in this context as the most accurate available indicator of alumni skill level and competency. The logic of this approach is that more accurate ratings of importance and preparation should act as better predictors of salary level. Therefore, for alumni and employers separately, models were built to regress the factor-based indices of importance and preparation against reported salary levels. To control for salary differences by discipline, college was added to the model. Gender and ethnicity were also entered into the model to control for vagaries of the marketplace in salary levels. Further, university grade point average was also entered into the model to control for its possible effect as a predictor. The analysis was done using general linear models and type III sums of squares for testing purposes (testing the marginal contribution of the variables to the model). These results follow in table 3.
Table 3: Terms in the salary regression models (the same set of terms was tested in the alumni model and in the employer model; test statistics were not preserved in this copy)
College
Gender
Ethnic
GPA
Work Attitudes/Skills: importance
Interpersonal/Conceptual Skills: importance
Technical Skills: importance
Work Attitudes/Skills: preparation
Communication Skills: preparation
Technical Skills: preparation
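The modeling approach described above (regressing salary on factor-based indices with categorical controls) can be sketched with ordinary least squares and dummy coding. The paper used SAS general linear models with Type III sums of squares; the sketch below is a plain least-squares analogue, not a reproduction of that analysis, and shows a single index predictor for brevity.

```python
import numpy as np

def dummy_code(values):
    """One-hot code a categorical variable, dropping the first
    level as the reference category."""
    levels = sorted(set(values))[1:]
    return np.array([[1.0 if v == lev else 0.0 for lev in levels]
                     for v in values])

def fit_salary_model(salary, college, gender, ethnicity, gpa, index):
    """Ordinary least squares: salary regressed on college, gender,
    and ethnicity (dummy coded), GPA, and one factor-based index.
    Returns the coefficient vector, intercept first."""
    X = np.column_stack([
        np.ones(len(salary)),
        dummy_code(college),
        dummy_code(gender),
        dummy_code(ethnicity),
        np.asarray(gpa, float),
        np.asarray(index, float),
    ])
    beta, *_ = np.linalg.lstsq(X, np.asarray(salary, float), rcond=None)
    return beta
```

Because the categorical controls enter the design matrix alongside the indices, each index coefficient reflects its marginal contribution after college, gender, ethnicity, and GPA are accounted for, in the spirit of the Type III tests reported in the paper.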
Results of the regression analyses provided another surprise for the researchers. Contrary to expectations, collective alumni ratings of importance and preparation proved to be marginally more accurate predictors of alumni salary levels than did collective employer ratings of importance and preparation. Further refinement of the models is not presented here, both because college, gender, ethnicity, and GPA are being controlled and because this specification allows separate comparison between alumni-report items and employer-report items. Note that the lack of significance of technical skills is directly related to the college variable, because technical majors are likely to find technical jobs and non-technical majors are likely to find non-technical jobs.
This section examines the findings in light of previous relevant research and discusses their implications for practice. Findings are dealt with in the order presented in the previous section.
Agreement
The findings of the present study reinforce much of the extant literature on agreement between alumni and employer ratings: little agreement is to be seen at the individual level between alumni and their direct supervisors. Our study was designed to ameliorate potential weaknesses in cross-validation studies noted by Pike (1995), such as ensuring high content correspondence between the measurement constructs being compared. Direct comparability of measurements was addressed in our study by having the same individual rated on identically worded and scaled survey items by two different parties. Another cautionary note raised by Pike concerns the method-specific variance inherent in comparing self-reports with test scores. Again, by assessing identically worded items from two different standpoints via a paper-and-pencil instrument, the researchers sought to minimize this problem.
It is interesting to note in this regard that the pattern of agreement found in this study complements Pike's (1995) finding of little common ground between self-ratings and standardized test scores outside the mathematics domain. In our study, the strongest levels of agreement between alumni and employers were found on items connected with technical, scientific, and mathematical skills. However, even on these items, alumni and employers agreed only on importance ratings and not on ratings of the level of alumni preparation.
Like Pike's (1995) study, the present study was limited to a single institution, but it has ramifications at the national level for college student outcomes assessment. Our study provides a fairly definitive answer of "no" to the question of whether one set of measurements of educational gains can be used as a proxy for another set - in this case, between ratings from two of the most widely surveyed constituencies with a stake in the outcomes of higher education.
Mean Ratings
The literature base on alumni and employer ratings of the importance of skills and of skill gains is fairly consistent in three general areas: (1) alumni tend to think everything is important, and present inflated ratings of the importance of skills needed in the workplace (Devine et al., 1995); (2) employers tend to provide positive ratings, without regard to institutional type or the specific survey method used (Banta, 1993); and (3) alumni tend to think they are better prepared than do their employers (Smith and Wilson, 1992; Womble, 1993). Thus, it comes as no surprise that the specific set of items concerned with professional preparation was rated as important to very important by both alumni and employers (with the exception of foreign language skills). What was unexpected is the degree to which alumni self-ratings of preparation were consistently lower than ratings by their supervisors.
At least two possible explanations for this finding may be advanced. First, alumni may be rating themselves lower in preparation 2-4 years after graduation since they have been in the workforce for several years, and have had the opportunity to reflect on what they have learned. They may then be rating their college education in comparison with the probable steep learning curve they have encountered in their professional positions since finishing college and with the sheer amount of profession-specific information and skills they have gained since graduation. By comparison, what they learned in college may then seem inadequate. Second, the specific characteristics of the students who are attracted to North Carolina State University may offer some degree of explanation. NC State is a Research I land-grant institution, and as such is heavily oriented to engineering, sciences, and mathematics. The institution attracts and graduates many students who are prone to analytical thought, and who may be more prone to self-criticism through their scientific training than graduates from more traditional liberal-arts schools. However, both these possible explanations are pure conjecture, and further research (such as through focus groups) is needed to more fully explore the issue.
Gap Scores
The findings of this project with regard to gap scores provide another interesting method of assessing alumni ratings relative to employer ratings. The creation of overall gap scores, disaggregated to the college level, allowed insights not only into which items were less prone to significant disagreement by college, but also into which colleges appeared to have fewer significant overall gap scores. As noted above, several items did not have significant gap scores on either the alumni or employer surveys, and could not be expected to show significant overall gap scores. Among these items were foreign language skills, working under pressure, and skills gained through research or internship experience. On the other hand, two items of a technical nature (overall technical knowledge and ability to apply math skills) also failed to achieve significant overall gap scores. This may be taken at some level as further validation of Pike's (1995) findings regarding the greater validity of self-reports on technical and math-related skills than on other dimensions.
Looking at overall gap scores by college (see appendix 3), it is notable that the smaller colleges and schools at NC State appear to have far fewer significant overall gap scores than larger colleges such as Engineering. While further research is certainly needed to validate these findings, the researchers surmised that there may be a greater tendency in smaller colleges for students to socialize in and out of the work environment, and thus to develop a concept of work that is more social (i.e., more collaborative) in nature. This may lead such alumni to working more closely with their supervisors and getting more continual informal performance feedback, since that is how they have been socialized into the work environment.
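Assuming, as is common in this literature, that a gap score is the respondent-level difference between the importance and preparation ratings of the same item, its computation and significance test can be sketched as follows. The ratings below are illustrative, not survey data:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical paired ratings for one item on one survey; the gap
# score is taken as importance minus preparation for each respondent.
importance = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4, 4, 5]
preparation = [4, 3, 4, 4, 3, 4, 3, 3, 4, 4, 3, 4]

gaps = [i - p for i, p in zip(importance, preparation)]
n = len(gaps)

# One-sample t statistic on the paired differences (H0: mean gap = 0);
# compare against the t distribution with n - 1 degrees of freedom.
t = mean(gaps) / (stdev(gaps) / sqrt(n))
print(f"mean gap = {mean(gaps):.2f}, t = {t:.2f} on {n - 1} df")
```

An overall gap score for an item would then aggregate these differences across the alumni and employer surveys before testing, and disaggregation by college simply repeats the test within each college's respondents.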
Factor Analysis
Banta's (1993) employer survey research project demonstrated
that employers conceptualize work skills and competencies in terms
of three basic areas, both on ratings of importance and with regard
to specific positions. Basic skills (reading, writing, etc.),
technical skills, and interpersonal skills emerged as clear factors
in Banta's research. The factor analytic results presented in
this paper lend strong support to the finding of a distinct factor
for technical skills as well as a distinct factor for work attitudes
and skills (which Banta labels as "interpersonal").
However, the factor Banta called "basic skills" varied
in our research. We found that alumni and employers conceived
interpersonal and conceptual skills as a clear factor in terms
of importance, but conceived communication skills as a distinct
set of skills in terms of preparation. Nonetheless, the findings
of this study provide further evidence of the structural underpinnings
of alumni and employer thought on what elements of job preparation
are important, what elements are most in evidence, and what relationships
exist between these areas. Future research may thus be able to
profit from these findings by using a more simplified approach
to the measurement of alumni and employer opinion with fewer,
more focused items.
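The item-reduction strategy suggested above is usually operationalized by retaining, for each factor, only the items whose salient loading falls on that factor. A minimal sketch, using an illustrative loading matrix rather than the loadings reported in tables 1 and 2:

```python
# Hypothetical loading matrix: rows are survey items, columns are the
# three factors discussed above. Values are illustrative only.
loadings = {
    "written communication": (0.12, 0.78, 0.10),
    "listening skills":      (0.20, 0.71, 0.15),
    "technical computer":    (0.81, 0.10, 0.18),
    "applying math skills":  (0.76, 0.08, 0.22),
    "dependability":         (0.15, 0.12, 0.80),
    "professionalism":       (0.10, 0.22, 0.74),
}
factor_names = ("technical", "communication", "work attitudes")

# Keep, for each factor, the items whose largest (salient) loading is
# on that factor - a simple route to a shorter, more focused instrument.
keep = {name: [] for name in factor_names}
for item, row in loadings.items():
    best = max(range(len(row)), key=lambda j: row[j])
    if row[best] >= 0.40:                  # a common salience cutoff
        keep[factor_names[best]].append(item)

for factor, items in keep.items():
    print(f"{factor}: {items}")
```

The 0.40 cutoff is a convention, not a rule; a stricter threshold yields a shorter instrument at the cost of factor coverage.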
Regression Analysis
A final purpose of this paper, and the origin of the title, was to validate alumni estimates of educational gains through employer reports. Although factor-based items on importance and preparation did not provide notable explanatory power in the models for either alumni or employers, to our surprise we found that the estimates provided by alumni of their educational gains proved to be marginally more powerful predictors of their present salary levels than did employer reports.
A possible explanation for this finding may be the social distance between supervisor and employee, a common threat to employer evaluation in research of this type (Tomaskovic-Devey, 1996). That is, a supervisor who is not in close contact with the employee may use proxies such as the quality of written reports to make global assessments of the employee's competence. In the present research, however, over 70% of supervisors reported daily contact with the alumni they supervised, so social distance as a threat to supervisor ratings was minimized.
Another explanation may be that the set of alumni who chose to give NC State permission to survey their supervisors had every reason to believe that they would be rated highly by those supervisors. Since no control exists for this in our research, we are forced to assume that employer opinion is not systematically biased in this case. One argument in favor of this is that the obtained sample size is of sufficient magnitude to compensate for a random number of overtly biased ratings, but further exploration of this issue is clearly needed.
Additionally, this analysis is constrained by the fact that the project was conducted at the university level. After controlling for college and academic performance, the remaining variables explain little of the relationship between alumni and employer perceptions of professional preparedness. This study suggests, however, that college-level studies could produce more accurate and complete models of alumni professional preparedness by developing questionnaire items for alumni and employers that are more relevant to college-specific preparation and to alumni career choice.
The question posed by this research remains: Can alumni reports
of skill gains be used as accurate proxies for actual skill gains?
As Pascarella and Terenzini (1991) so aptly summarize the case
in How College Affects Students, it all depends. Although
factor analysis demonstrates that our alumni and employers share
similar conceptual constructs with regard to the relationship
between various professional skills and abilities, little agreement
is evident between alumni and their supervisors as to how strongly
educational gains are manifested at the individual level. Evidently,
supervisors see things somewhat differently than do alumni who
are asked to contribute self-reports of the gains they realized
through their college education. On the other hand, our findings
also indicate that for our population of alumni, alumni reports
provide similar albeit marginally better estimates of skills gains
than do employer reports. Until further research can aid in the
clarification of these findings, our conclusion for the moment
is that alumni and employer reports of educational gains are essentially
non-comparable for our population, that future studies will need
to continue to survey both alumni and employers, and that the
cost of such assessment is likely to remain high since one survey
cannot serve as a valid proxy for the other.
Return to Table of Contents
Return to OIRP Survey Page
American Association for Higher Education (1992). Principles
of good practice for assessing student learning. Washington,
D.C.: AAHE.
Annis, A. W. & Rice, R. R. (1992, May). An assessment
of an economics and business department: Surveys of graduates
and their supervisors. Paper presented at the annual forum
of the Association for Institutional Research, Atlanta, GA.
Banta, T. W. (1993). Critique of a method for surveying employers.
The Association for Institutional Research Professional File,
47.
Banta, T. W., Lund, J. P., Black, K. E., & Oblander, F. W. (1996).
Assessment in practice. San Francisco: Jossey-Bass.
Dillman, D. A. (1978). Mail and telephone surveys: The total
design method. New York: John Wiley & Sons.
Eveslage, S. A. (1993, June). The Case for Workplace Assessment.
Paper presented at the meeting of the American Association for
Higher Education, Chicago, IL.
Ewell, P.T. (1984). The self-regarding institution: information
for excellence. Boulder, CO: National Center for Higher Education
Management Systems.
Ewell, P. T. (1991b). To capture the ineffable: New forms of
assessment in higher education. In G. Grant (ed.), Review
of research in education. No. 17, Washington, D.C.: American
Educational Research Association.
Ewell, P. T., Lovell, C. D., Dressler, P. & Jones, D. P.
(1993). A preliminary study of the feasibility and utility
for national policy of instructional good practice indicators
in undergraduate education. Boulder, CO: National Center
for Higher Education Management Systems.
Hatcher, R. (1993). A step-by-step guide to using the SAS
system for factor analysis and structural equation modeling.
Cary, NC: SAS Institute.
Hoey, J. J. (1995, May). Assuring Faculty Input into Institutional
Effectiveness and Assessment Processes at North Carolina State
University: An Application of Focus Group Methodology. Paper
presented at the annual forum of the Association for Institutional
Research, Boston, MA.
Levine, A. (1997, January 31). Higher education's new status
as a mature industry. The Chronicle of Higher Education,
p. A48.
Loacker, G. (1991). Designing a national assessment system:
Alverno's institutional perspective. Washington, DC: National
Center for Education Statistics. (ERIC Document Reproduction
Service No. ED 340 758).
Mentkowski, M. & Rogers, G. (1993, June). Using Workplace
Feedback to Assess Educational Programs. Paper presented
at the meeting of the American Association for Higher Education,
Chicago, IL.
Mott, P. E. & Hunter, S. (1991). Greeners at work: An
assessment (Report No. 4). Olympia, WA: The Evergreen State
College, Assessment Study Group.
Pascarella, E., & Terenzini, P. (1991). How college affects
students. San Francisco: Jossey-Bass.
Pike, G. R. (1995). The relationship between self reports of
college experiences and achievement test scores. Research
in Higher Education, 36(1), 1-21.
Pike, G. R. (1996). Limitations of using students' self-reports
of academic development as proxies for traditional achievement
measures. Research in Higher Education, 37(1), 89-114.
Raymond, M. A. (Ed.). (1993). Preparing graduates for the workforce:
The role of business education. Journal of Education for
Business, 68(4), 202-206.
Smith, D. & Wilson, H. (1992). The development and assessment
of personal transferable skills during work-based placements.
Assessment and Evaluation in Higher Education, 17(3), 195-208.
Stokes, M., Davis, C., & Koch, G. (1995). Categorical
data analysis using the SAS system. Cary, NC: SAS Institute.
Tomaskovic-Devey, D. (1996, December 12). Personal communication.
Professor of Sociology and Anthropology, North Carolina State University.
University of Phoenix. (1992). Assessing impact in the workplace:
Report on organizational and economic impact of underwriting
employees' education. (Department of Institutional Research
Report IR-92.5).
USA Group Noel-Levitz (1996). 1996 national student satisfaction
report. Iowa City, IA: Author.
Van Dyke, J., & Williams, G. W. (1996). Involving graduates
and employers in assessment of a technology program. In T. W.
Banta, J. P. Lund, K. E. Black, & F. W. Oblander (Eds.), Assessment
in practice, (pp. 99-101). San Francisco: Jossey-Bass.
Womble, M. N. (1993). Assessment of competencies for computer
information systems curricula. Delta Pi Epsilon Journal, 35(2),
69-85.
[Appendix table: professional preparation items rated on both the alumni and employer surveys; the rating columns are not recoverable from the source. Items:]
Foreign language skills
Public speaking/presentation skills
Reading skills
Written communication skills
Overall communication skills
Listening skills
Ability to apply scientific principles
Ability to apply math skills
Overall technical knowledge
Technical computer skills
Basic computer skills
Overall knowledge of computer applications
Skills from research or intern experience
Leadership and management skills
Bringing ideas and information together
Planning projects
Thinking creatively
Defining problems
Solving problems
Ability to work in teams
Using knowledge to solve problems overall
Ability to work with diverse peoples
Confidence in ability to perform well
Making decisions under pressure
Resourcefulness
Traits overall
Ability to work independently
Professionalism
Working under pressure
Being dependable and punctual
Ability to adjust to new job demands
Work attitudes and skills overall
Conducting work activities ethically
Ability to learn independently
Professional development overall
Ability to grow on the job
Willingness to accept new responsibilities
[Appendix table: overall gap-score variables by item; cell values are not recoverable from the source.]
Overall communication skills (ogap1)
Written communication skills (ogap1_1)
Public speaking/presentation skills (ogap1_2)
Reading skills (ogap1_3)
Listening skills (ogap1_4)
Overall technical knowledge (ogap2)
Overall knowledge of computer applications (ogap3)
Basic computer skills (ogap3_1)
Technical computer skills (ogap3_2)
Ability to apply scientific principles (ogap4)
Ability to apply math skills (ogap5)
Foreign language skills (ogap6)
Ability to work in teams (ogap7)
Leadership and management skills (ogap8)
Using knowledge to solve problems overall (ogap9)
Planning projects (ogap9_1)
Defining problems (ogap9_2)
Solving problems (ogap9_3)
Thinking creatively (ogap9_4)
Bringing ideas and information together (ogap9_5)
Skills gained through research or intern experience (ogap10)
Traits overall (ogap11)
Professionalism (ogap11_1)
Conducting work activities in ethical manner (ogap11_2)
Resourcefulness (ogap11_3)
Confidence in ability to perform well (ogap11_4)
Work attitudes and skills overall (ogap12)
Ability to adjust to new job demands (ogap12_1)
Working under pressure (ogap12_2)
Making decisions under pressure (ogap12_3)
Ability to work independently (ogap12_4)
Ability to work with diverse peoples (ogap12_5)
Being dependable and punctual (ogap12_6)
Professional development overall (ogap13)
Ability to learn independently (ogap13_1)
Ability to grow on the job (ogap13_2)
Willingness to accept new job responsibilities (ogap13_3)
Number of respondents by college
Frequency of no significant difference by college