
UWM AAUP RESPONSE

Response to Chancellor Update: Charting Our Future

While the Chancellor’s campus email was welcome for its transparency regarding administrative planning, AAUP believes it is important to offer arguments that analyze UWM’s challenges in a different way. The following is a point-by-point response that reaches somewhat different conclusions about the state of affairs.

Demographic Shifts

With the graying of the baby boomers, there is no doubt that a large demographic shift is taking place in the United States. While people 65 years and older constituted only 15 percent of the population in 2016, they are projected to constitute 21 percent by 2030 and 23 percent by 2060 (census.gov). By comparison, children under 18 will grow in absolute numbers, but their share of the population may decrease: from 22 percent (73.6 million) in 2016 to about 21 percent (75.4 million) in 2030.

For higher education, however, the absolute numbers of younger groups matter more than their share of the total population. As academic institutions, it is important for us to be specific and precise, and thus to consider Wisconsin’s actual numbers, not national and regional projections that can always turn out otherwise:

The following are the actual enrollment numbers of 12th graders in Wisconsin (WISEdash) compared with UWM’s freshman enrollments. It is important to base our current policies not simply on general projections from one source (Nathan Grawe’s book) but on actual Wisconsin data. We do not want to find ourselves barking up the wrong tree while addressing the issue.

Year    | Wisc. 12th Grade Enrollment | UWM New Frosh (WI Resident) | UWM New Frosh (Total)
2012-13 | 66,454                      | 2,982                       | 3,321
2013-14 | 65,682                      | 2,725                       | 3,131
2014-15 | 65,431                      | 2,823                       | 3,286
2015-16 | 65,819                      | 2,738                       | 3,185
2016-17 | 65,832                      | 2,535                       | 3,000
2017-18 | 66,542                      | 2,631                       | 3,105
2018-19 | 66,130                      | 2,559                       | 3,095

These data may contain small inaccuracies, but it is clear that the recent decline in UWM’s Wisconsin resident students does not directly correlate with “fewer college-aged students resulting in declining enrollments.” The number of college-aged students in Wisconsin may have dropped 8-10 years ago, but since 2014 it has actually been inching up while our enrollments have been inching down. The decline may have other immediate causes, such as the UW System’s support for increased in-state enrollment at Madison. The larger context suggests that enrollment declines are disproportionately impacting smaller schools, and a university like UWM, if properly led, should not lose this competition for enrollments.
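To make that claim easy to check, here is a minimal sketch in Python (not part of the original post) that computes the Pearson correlation between the two series in the table above. On these figures the coefficient comes out near zero, consistent with the point that UWM’s resident freshman enrollment is not simply tracking the supply of Wisconsin 12th graders.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Figures copied from the table above (2012-13 through 2018-19).
wi_12th_grade      = [66454, 65682, 65431, 65819, 65832, 66542, 66130]
uwm_resident_frosh = [2982, 2725, 2823, 2738, 2535, 2631, 2559]

# r ranges from -1 (series move oppositely) through 0 (no linear
# relationship) to +1 (series move together).
r = correlation(wi_12th_grade, uwm_resident_frosh)
print(f"Pearson r = {r:+.2f}")  # roughly +0.04 on these figures
```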

Cost of Higher Education

The cost of higher education and student debt are definitely of major national concern. UWM can take the lead by joining other universities in advocating a drastic increase in federal and state funding based on existing research.

Yet UWM is one of the least expensive universities in the region. Ideally, this should allow us to recruit more students, not fewer. As with the demographic shift, then, our competitive weakness relative to other institutions is likely not due to the expense of education.

Public questioning of the value of a college degree

While the public has every right to question the growing costs of higher education (though UWM is not part of that group), no one questions the importance of higher education as a public good. Indeed, the importance of a college degree is only growing. “By 2027, total undergraduate enrollment is projected to increase to 17.4 million students. In fall 2016, total undergraduate enrollment in degree-granting postsecondary institutions was 16.9 million students, an increase of 28 percent from 2000, when enrollment was 13.2 million students” (https://nces.ed.gov/programs/coe/indicator_cha.asp). It is counterproductive to relay dubious claims about the value of a college degree. As the institutional make-up of world society grows in complexity, the need for higher education should grow correspondingly. A population that does not understand how the world works cannot help the world work better.

Employers, Skills, and the Purposes of Higher Education

There are two important reasons why AAUP thinks it is dangerous to align education too closely with employer expectations.

First, the academy has always responded to stimuli from the outside world. UWM just approved a new major in environmental engineering because of the field’s growing importance. These academic responses have been organic in nature. Because the collective intelligence of the country’s research faculty is far greater than that of a small set of college administrators, administrators should not chase every new industry or skill in an effort to satisfy local or regional employers. It was not the computer industry that gave rise to computer science; it was academic computer science that gave rise to the computer industry, starting with the proposal by Gottfried Leibniz, the seventeenth-century philosopher and mathematician, to build machines for valid inference through a “calculus of reason” (calculus ratiocinator). The same logic applies to the nuclear and genetic industries. Academic disciplines forge new paths through an internal dynamic while receiving stimuli from the outside world. College administrators should not bet on new industries and future job markets, or meddle with the organic movement of academic disciplines. Let us give our students fundamental principles and foundational skills, from computer science to communication, so that they become capable of learning new programming languages and new media platforms on their own, as and when needed.

Second, the skills gap argument – that there is a gap between the skills job seekers currently have and the skills employers need – has been debunked by many studies in economics and education. Some skills gap is unavoidable under the best of conditions, as some positions will always remain unfilled even while some workers are unemployed. However, as economist Paul Krugman writes, the belief that America suffers from a severe “skills gap” is “an idea that should have been killed by evidence, but refuses to die.” Were there an actual skills gap, one would expect to see wages rising in the affected areas, but no such wage increase is taking place.

Growing competition in higher education and low-cost models

Competition in higher education is a reality. But it is important for UWM to know the category in which it is competing. Chasing everything, from lower-cost models to new curriculum delivery modes, may not be the answer. We are already delivering lower-cost education, with online programs in place. We can capitalize on those strengths, but UWM must also figure out who its competitors are. Are community colleges, for-profit colleges, or MOOCs our competitors? AAUP views none of these as UWM’s competitors.


UW-Whitewater Chancellor Search

Dear AAUP Wisconsin members,


I am writing to share an update from our UW-Whitewater chapter regarding the chancellor search that is under way there. This is the first chancellor search to take place anywhere in the UW System since the Board of Regents made changes to the search and screen process for chancellors, once in 2015 and then again in 2017. 

As you may know, the 2017 changes incorporated a new statutory requirement prohibiting UW institutions from requiring chancellors to have the customary academic qualifications. 

The UW-Whitewater AAUP chapter is circulating a petition asking the search and screen committee to include a “strong preference” for academic credentials in the position description. You can read the petition, and sign your name in support, at the following link:
https://actionnetwork.org/petitions/credentials-for-uww-chancellor-position

The petition already has over 300 signatures. I hope that we can show our support from all across the state, and help our Whitewater colleagues show the Regents how important this is.

Much as UW-Stevens Point is the first test case for the new faculty termination policies in RPD 20-24, UW-Whitewater is the first test case for the new chancellor search process in RPD 6-4. As elsewhere, the new process concentrates power in the hands of the Regents. Previously, search and screen committees had to consist of at least half faculty from the institution in question; now, the committee has 10 members, of whom 5 are Regents, and only 2 are faculty.


I encourage you to support our UW-Whitewater colleagues!

Best,

Nick Fleisher (UW-Milwaukee)
President, AAUP Wisconsin

Talk about Salaries

The following is a post from an anonymous, nontenured author: 

We need to talk about salaries.

Last week the UW Regents approved a 3% pay raise for faculty for the next two years – the largest raise in many years. However, it is difficult to be appreciative when so many of our colleagues have fallen inexcusably far behind an acceptable pay scale. Ray Cross claims that UW-Madison faculty are underpaid by 10%; assistant and associate professors in the humanities there typically earn between $70,000 and $100,000 per year. This appears to be Cross’s only concern: that these numbers are too small. But our new colleagues at UWM face much worse problems with their compensation.

Many new UWM faculty who have come to us via the merger with the UW Colleges earn less than $50,000 a year. Their raise upon promotion to associate professor with tenure is only around $1,500. For some context, an assistant manager at Kwik Trip, with no higher education required, can make $45,000 a year. This means we have hard-working faculty who have earned PhDs, teach a 4-4 load, and engage in research – and who could probably do better, financially, as gas station managers. What kind of signal does this send to our faculty? What kind of message does this send about higher education?

Our new colleagues from the UW Colleges have long dealt with rock-bottom morale. We are told that whenever raises came up in their meetings, the prospect was quickly shot down. Administrators for the Waukesha and Washington County campuses, however, each received a $20,000 salary bump this year.

The problem is not a lack of money in the system. It is that money is constantly moved away from faculty and into new silos. How did the chancellors get their large pay bumps this past week? The money came from leftover UW Colleges funds that could have helped underpaid faculty. This double standard needs to end.

The UWM annual budget for the 2018-2019 fiscal year is $689,165,710, with $243,334,769 going to salaries. The branch campuses bring in a combined budget of $4,136,764. The salary gap between our UWM faculty and the College of General Studies faculty could be closed with around $1.2 million. There is no excuse for such inequality among our colleagues. When chancellors get raises larger than many faculty salaries, it shows not only the arrogance of these administrators but also their lack of interest in faculty compensation, retention, and morale.

Investing in the College of General Studies can only help UWM as a whole. The branch campuses feed students into the Milwaukee campus. How can we expect our colleagues there to advocate for us if they are treated so unfairly? Numerous faculty searches on our new branch campuses have failed because salaries and workloads are nowhere near competitive. If we are to make the most of this merger, it is essential to invest in our colleagues. We must welcome the new faculty to the Panther family by treating them fairly – something the UW Colleges administration has failed to do.

Chronicle of Higher Education Data

UWM Faculty Salary Average:

https://data.chronicle.com/240453/University-of-Wisconsin-at-Milwaukee/faculty-salaries

Assistant: $73,300

Associate: $78,000

Professor: $101,448

UW Colleges Faculty Salary Average:

https://data.chronicle.com/240055/University-of-Wisconsin-Colleges/faculty-salaries/

Assistant: $45,126

Associate: $51,084

Professor: $62,424
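For a rough sense of scale, here is a back-of-envelope sketch in Python using the Chronicle averages above. The affected-faculty headcount is a hypothetical assumption for illustration, not a figure reported in the post; it simply shows how an estimate on the order of $1.2 million could arise.

```python
# Per-rank salary gaps implied by the Chronicle averages quoted above.
uwm      = {"Assistant": 73_300, "Associate": 78_000, "Professor": 101_448}
colleges = {"Assistant": 45_126, "Associate": 51_084, "Professor": 62_424}

gaps = {rank: uwm[rank] - colleges[rank] for rank in uwm}
for rank, gap in gaps.items():
    print(f"{rank:9s} gap: ${gap:,}")  # roughly $27k-$39k per person

# Hypothetical headcount (an assumption, not from the post): with about
# 38 affected faculty at the average per-rank gap, the total lands near
# the $1.2 million figure cited above.
avg_gap = sum(gaps.values()) / len(gaps)  # about $31,375
assumed_headcount = 38
print(f"Approximate total to close the gap: ${avg_gap * assumed_headcount:,.0f}")
```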

Further Analysis of the Mercer “Benefits” Survey

Comments from Nancy Mathiowetz, Professor Emerita, UWM

Former President, American Association for Public Opinion Research

Former Editor, Public Opinion Quarterly

Introduction

It would be useful in reviewing the survey to understand the analytic objectives of the study. What empirical questions are they attempting to address?  And how do they want to use these data? That framework would provide a better lens for reviewing the instrument.

Both questionnaire design and sample design are important to review in understanding the quality of a survey. With respect to questionnaire design, one wants a well-designed questionnaire, with wording that is easy to comprehend and does not bias the respondent. The structure of the questions (e.g., Likert scales, open ended, multiple choice) is also important and can contribute to the overall quality of the survey data. A poorly designed questionnaire renders data that may be misleading, biased, or inaccurate.

Similarly, it is important that the sample design – that is, the identification of the population of interest and the means by which to select members of that population – be clearly specified and executed. Once people are selected for inclusion in a study, efforts should be made to encourage their participation so as to have representation across the full diversity of the population of interest. Similar to a poorly designed questionnaire, a poorly designed or executed sample can result in misleading, biased, or inaccurate estimates.

The Mercer Survey

The American Association for Public Opinion Research offers a series of recommended best practices, including recommendations about question wording (see: https://www.aapor.org/Standards-Ethics/Best-Practices.aspx).  Specifically with respect to question wording, the website states: 

Take great care in matching question wording to the concepts being measured and the population studied.

Based on the goals of a survey, questions for respondents are designed and arranged in a logical format and order to create a survey questionnaire. The ideal survey or poll recognizes that planning the questionnaire is one of the most critical stages in the survey development process, and gives careful attention to all phases of questionnaire development and design, including: definition of topics, concepts and content; question wording and order; and questionnaire length and format. One must first ensure that the questionnaire domains and elements established for the survey fully and adequately cover the topics of interest. Ideally, multiple rather than single indicators or questions should be included for all key constructs.

Beyond their specific content, however, the manner in which questions are asked, as well as the specific response categories provided, can greatly affect the results of a survey. Concepts should be clearly defined and questions unambiguously phrased. Question wording should be carefully examined for special sensitivity or bias. When dealing with sensitive subject matter, techniques should be used that minimize the discomfort or apprehension of respondents, or respondents and interviewers if the survey is interviewer administered. Ways should be devised to keep respondent mistakes and biases (e.g., memory of past events) to a minimum, and to measure those that cannot be eliminated. To accomplish these objectives, well-established cognitive research methods (e.g., paraphrasing and “think-aloud” interviews) and similar methods (e.g., behavioral coding of interviewer-respondent interactions) should be employed with persons similar to those to be surveyed to assess and improve all key questions along these various dimensions.

In self-administered surveys careful attention should be paid to the visual formatting of the questionnaire, whether that be the layout of a mail survey or a particular eye towards respondents completing a web survey on a mobile device. Effort should be taken to reduce respondent burden through a positive user experience in order to reduce measurement error and break offs.

In reviewing a hard-copy version of the questionnaire – one that appears to have been written for faculty members, given its references to research[1] – I see a questionnaire that consists of three distinct types of questions:

  • A partial ranking question (Question 1) that asks for the five most attractive aspects of the position at two points in time;
  • Five-point Likert rating scales, ranging from Strongly Agree to Strongly Disagree and including a “middle” category of “Neither agree or disagree;” and
  • Multiple sets of “maximum difference scales,” which ask respondents to examine multiple sets of employment or benefits attributes, selecting the most important and least important within each set.

Some specific comments about each of these types of questions follow.

With respect to Question 1 (the partial ranking question), the format choice is not of major concern – certainly this type of ranking is often used to determine respondents’ preferences. What is of concern is some mismatch and sloppiness in the question. First, the question references working for “UW,” but most of the employees answering this question do not work for the UW System but rather at a specific UW facility, so the wording is odd. Second, the question itself asks about what “interested” you most (in the first part) and what is most “important” (in the second part), but the column headings use the term “attractive.” While not a critical inconsistency, it is a bit sloppy.

The 5-point Likert items have two sets of response options – strongly agree/strongly disagree (Questions 2-11, 15-20, 22-27) or very satisfied/very dissatisfied (Questions 14a through 14r). Of the 22 agree-disagree items, all but two are written in a positive frame; that is, the language indicates a positive point of view. This is not a best practice, and such an approach can lead to “straight-lining,” where individuals simply mark the items in a single column without carefully reading each item. In general, the field of survey methodology recommends avoiding agree-disagree items altogether, since they often lead to acquiescence bias – the tendency to agree with statements – which produces exaggerated estimates of endorsement for positively worded statements.

Although I am reviewing a hard-copy questionnaire, I note that Question 14 has 18 sub-questions, all requiring the respondent to use the same five-point scale (plus a “not applicable” option). Once again, if presented on a single screen, this would not follow best practice and would lead to respondents not fully considering each item individually. In addition, it does not appear that these 18 items are rotated so as to avoid order effects. This, too, is in contrast to best practices, which recommend randomizing the order of long lists.

Finally, the survey consists of two sets of maximum difference (maxdiff) scaling, an extension of the method of paired comparisons. In a typical maxdiff scaling question, a respondent will rate between four and six attributes of an entity, product, or service. Analysis of the data using a specific statistical technique, hierarchical Bayesian multinomial logit modeling, produces importance estimates for each attribute for each respondent.
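For readers unfamiliar with maxdiff, here is a minimal sketch in Python of the simpler count-based scoring that is often used as a first approximation to the hierarchical Bayesian analysis described above. The attributes and responses are hypothetical illustrations, not data from the Mercer survey.

```python
from collections import defaultdict

# Each response records one maxdiff "set": the attributes shown, plus the
# one picked as most important ("best") and least important ("worst").
responses = [
    {"shown": ["Healthcare", "Retirement", "Pay", "Stable employment"],
     "best": "Healthcare", "worst": "Stable employment"},
    {"shown": ["Healthcare", "Sick leave", "Pay", "Career advancement"],
     "best": "Pay", "worst": "Career advancement"},
    {"shown": ["Retirement", "Sick leave", "Stable employment", "Career advancement"],
     "best": "Retirement", "worst": "Sick leave"},
]

best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
for r in responses:
    for attr in r["shown"]:
        shown[attr] += 1          # times the attribute appeared at all
    best[r["best"]] += 1          # times chosen most important
    worst[r["worst"]] += 1        # times chosen least important

# Best-minus-worst score, normalized by exposure: -1 means always picked
# least important, +1 means always picked most important.
for attr in sorted(shown, key=lambda a: (worst[a] - best[a]) / shown[a]):
    print(f"{attr:22s} {(best[attr] - worst[attr]) / shown[attr]:+.2f}")
```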

The redundant nature of maxdiff questionnaires is one of the drawbacks of the approach, since respondents often feel that they have just answered the question. In the present questionnaire, Question 12 consists of 11 sets and Question 13 consists of 20 sets, each requiring the identification of the most important and least important items.

What is odd and most disturbing about Question 12 is that the question states that “some of these benefits or programs are not current benefits or programs at the university.” But the attributes listed in Question 12 are not all benefits or programs – they are attributes of the actual work environment or characteristics of employment. For example, the question includes attributes such as “Type/variety of work,” “Stable employment,” “Career advancement/professional development,” and “Pay.” These “attributes” are juxtaposed alongside benefits such as “Sick leave,” “Healthcare benefits,” and “Retirement savings plans.” In contrast, the attributes presented in Question 13 appear to be, for the most part, benefits.

I find the mixing of employment attributes and benefits attributes in Question 12 to be atypical of most maxdiff designs. It seems inappropriate to ask respondents to make tradeoff assessments between employment attributes such as pay and benefits attributes such as sick leave. The mix of items – which are attributes of two very different constructs – could result in a misleading set of empirical findings.

And placing these two maxdiff questions next to each other – thereby forcing the respondent to answer 31 sets of these items consecutively – is not ideal with respect to overall questionnaire design or respondent fatigue.

Sample Design

It appears that no sample was selected for participation; rather, a census of all benefits-eligible employees was taken. What methodology is being used to ensure diverse participation, both across all UW System locations and throughout the ranks of faculty and staff? Although a census allows all members of the population to voice their opinions, it also means that resources to encourage participation must be spread across the entire population rather than focused on a specific scientific sample.

Final Notes

The survey included no request for demographic information, location, years working in the UW System, or position. Can we assume that this information will be imported from HR files, given the unique link sent to request participation? At a minimum, a few of these questions should have been asked to ensure that the data were collected from the person intended to be queried.

And why does the survey bear the UW System and UW-Madison logos, but not those of other universities? If a different methodology is involved for the Madison campus as compared to other campuses, how will this impact comparisons across campuses?


[1] It is unclear if there is a different version of the questionnaire sent to non-faculty staff members.

Notes on the “Benefits” Survey

From Aaron Schutz:

The survey contains forced-choice questions, and you can’t skip any of them. Anyone taking it cannot avoid picking the kinds of benefits wanted and, by implication, the kinds not wanted, from the options. I’m not sure whether it is better to complete it or not, but the data are clearly designed to inform cuts in benefits. The very structure of the survey means that the report will necessarily illuminate those benefits that few faculty chose as most important.

To put it another way, why would they even bother asking questions that force us to choose among the benefits we prefer, instead of simply asking how important each benefit is to us on a Likert scale? The scale would allow respondents to value all of them. The very structure of a report on these data, regardless of what is intended, will imply that some benefits are perceived as less important by us “clients” and, because they are less important, are areas for cuts. Simply answering the questions gives ammunition to those who want to cut benefits, because the structure of the survey requires you to distinguish between most and least important. (See the sketch after the example question below.)

Here is the first question that you must choose “one” from:

           Healthcare benefits (medical, dental, vision)
           Retirement savings plans (WRS, 403b, 457)
           Type / variety of work
           Stable employment

Many people are actually forced to choose among these things.  A stable job without healthcare?  A job with healthcare but no retirement system?
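Returning to the Likert-scale alternative mentioned above, here is a minimal sketch in Python (with hypothetical ratings, not survey data) of why a rating format behaves differently: every benefit can score near the top, whereas the forced-choice format guarantees that some benefit ends up labeled least important.

```python
from statistics import mean

# Hypothetical Likert ratings (1 = not at all important, 5 = very important).
# A rating scale lets respondents mark every benefit as important; nothing
# in the format manufactures a "least important" benefit.
likert = {
    "Healthcare benefits":      [5, 5, 5, 4],
    "Retirement savings plans": [5, 4, 5, 5],
    "Stable employment":        [5, 5, 4, 5],
}

for benefit, ratings in likert.items():
    print(f"{benefit:24s} mean importance: {mean(ratings):.2f}")
```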

So one needs to ask: why would they create a survey with this kind of structure? What does it mean about what they want to do with it? Are they confused, not realizing that the report will inevitably highlight which benefits faculty don’t choose as important? What if no one chooses healthcare over another benefit? Does that mean we don’t want access to healthcare?

Imagined line from the report:  “The survey indicated that healthcare was the least important benefit to faculty.”

I can’t advise anyone else, but I won’t fill it out.