
When the Chancellor Donates his $50,000 Raise to the University

When the chancellor donates his $50,000 raise to the University Food Pantry,

They put out a spread.

It’s delicious. Everyone gathers to eat it.

The shelves are full. Since there is no more food insecurity among students,

They are free to focus on learning.


When the chancellor donates his $50,000 raise to Norris Health Center,

People from all over Milwaukee are inspired by this display of generosity. They donate

their services. There is acupuncture, massage, talk therapy and tarot reading; it’s all free.

There are Black and brown, queer and Indigenous, white and Asian American care

providers.  Everyone feels better.


When the chancellor donates his $50,000 raise to scholarships,

the Wisconsin Board of Regents is ashamed.  They drop tuition rates so low

That a working single mother of three can afford to take a class. (There is free childcare

for students.) She gets her degree, makes it big,

donates extravagantly.  Going to college becomes an option for everyone in town.


When the chancellor donates his $50,000 raise to the university,

The money magically multiplies. Suddenly, everyone who makes less,

makes more.


When the chancellor donates his $50,000 raise to the university,

the example of his generosity reminds

faculty and staff across campus that the university is made out of

only love and labor, and that it belongs to everyone, including us.

We dance in our offices and continue the work.


Another University is Possible: it has been there all along.

It hovers in the wings of this cruel austerity:

requiring only courage, only love to take flight.



Autocracy in the Upper Midwest

At protests on Dec. 14 over Viktor Orban’s labor ‘reforms’ in front of Budapest’s parliament: the author with Zoltán Tibori Szabó, Director of the Institute for Holocaust and Genocide Studies at Babes-Bolyai University in Cluj-Napoca.

by Jeffrey Sommers

Professor of African and African Diaspora Studies and Global Studies

UW-Milwaukee

Dear Chancellor Mone, Provost Britz, and Colleagues, 

I could not attend this week’s Faculty Senate meeting as I was in Riga, where I ran an event designed to thwart corruption, in cooperation with the US Ambassador and the German Friedrich-Ebert-Stiftung. Incidentally, while I was there, everyone from the US Ambassador on down asked me (in astonishment), “What is happening to Wisconsin?” I am in Budapest this weekend. The purpose of this travel is to convene an event at Hungary’s Central European University, funded by the Open Society Foundations, for a project I co-direct on threats posed to democracy by authoritarian governments. Such governments have increased their reach in recent years and typically have used (and abused) constitutional procedures to advance and ensconce their power.

As you might know, this week Wisconsin was described as “Hungary on the Great Lakes” by one of the New York Times’s top columnists. Moreover, Wisconsin billionaire ‘job creator’ Sheldon Lubar (with whom I have corresponded this past week) wrote Governor Scott Walker to decry the “conniving” (his word) of the Wisconsin GOP and the Governor’s cooperation with them as they abuse their power in acting against the public will by trying to hamstring the state’s newly elected Democratic governor, Tony Evers. Wisconsin is presently the most gerrymandered state in our republic. And here too, in Budapest, people are asking in disbelief, “What is happening in Wisconsin?” Today, as Governor Walker (against Mr. Sheldon Lubar’s counsel) signed our gerrymandered state legislature’s bills to limit democracy, I received emails from around the world from figures of note asking, “What is happening in Wisconsin?”

The work of academics historically has been to pose difficult, sometimes uncomfortable questions, not in a gratuitous, but in a serious, fashion. The search for “truth” and “improving the human condition” as articulated by UW President Charles Van Hise in 1905 are central to UW’s mission, and extend back to the Greek philosophers of antiquity. My uncomfortable question is: “Might it be incumbent upon us to review all UW policies coming from the System level or higher given what has been revealed as the undemocratic character of our current state government?” It’s not only UWM that is watching how we answer our current crisis of democracy, but the nation and the world. How will we respond? Make no mistake, this is a historic juncture.

Serving at Cross’s Purposes

by Richard Grusin
Distinguished Professor of English

On Pearl Harbor Day, 2018, the University of Wisconsin Board of Regents dropped its own economic bomb on the people of Wisconsin, approving raises ranging from $14,421 to $72,668 for 10 of the UW System’s 13 chancellors. In the days following the December 7 meeting, social media has exploded with expressions of the emotional damage inflicted by these oversized raises.

Many University of Wisconsin faculty and staff, whose pay has remained static for roughly a decade, and who took de facto pay cuts in 2011 when Act 10 peremptorily increased individual retirement contributions by roughly 7%, filled Facebook and Twitter with complaints, shares, and retweets about these obscenely inflated raises.  Over and over again, faculty and staff decried the injustice of chancellors like UW-Madison’s Becky Blank and UW-Milwaukee’s Mark Mone receiving raises ($72,668 and $49,419 respectively) greater than the salaries of many assistant professors and full-time instructors.

Interestingly, this outrage was not shared by the news media, who seemed more concerned with the possible injustice of two chancellors not receiving raises because they were being punished for actions that the Regents did not approve. In an article in the Milwaukee Journal-Sentinel, Karen Herzog reported, “The chancellor who hosted the University of Wisconsin Board of Regents on his campus this week has been denied a $25,600 performance raise after his reprimand for inviting a porn star to speak to students during free speech week a month ago. The regents also did not award [a raise to] another chancellor, whose husband was banned from her UW campus and stripped of an honorary, unpaid position after an investigation concluded he had sexually harassed female employees.” No mention was made in the Journal-Sentinel of the unseemliness of the large chancellor raises, nor was there any suggestion that “punishing” misbehaving chancellors was in any way problematic.

This stark divergence between local news coverage and the responses circulated widely on social media is worth examining, in part because both responses overlook what I take to be the fundamental problem with the logic of employee compensation entailed in the Regents’ decisions. For me the most troubling element of these raises is not their disproportionate size nor the financial punishment of the chancellors who had displeased their superiors. Although I share my fellow UW System faculty and staff’s outrage at the dollar amount of the oversized raises given to 10 of the 13 UW System chancellors, I am not surprised. And you shouldn’t be either.

Why am I not surprised? Because as anyone who has been paying attention knows, the chancellors have been carrying water for UW System President Ray Cross and the Regents for several years now. These outsized raises are financial rewards for their not having opposed or obstructed a single top-down edict from Cross and the Regents, for their having carried out his orders like good soldiers or middle managers are expected to do.

Put differently, what both the raises and the punishment reveal is that these raises are payoffs, ex post facto bribes, or quid pro quo rewards for UW System chancellors having accepted without objection the destruction of tenure and shared governance; repeated massive budget cuts; unfunded tuition freezes; and the break-up and distribution of the UW Colleges and Extension to the four-year, comprehensive, and doctoral campuses, aka the UW System merger.

Why didn’t chancellors object last year to this merger? Could it be because their jumbo-sized raises were made possible by money freed up by the elimination of the UW Colleges/Extension chancellor position upon their top-down dissolution? As Karen Herzog dutifully reported, these raises didn’t require an infusion of new salary money but were funded by dividing up “the $270,774 salary of former UW Colleges and UW-Extension Chancellor Cathy Sandeen, whose position was eliminated in the sweeping UW System merger.”

This might very well explain why UW System chancellors have quietly gone along with the absurdly sped-up timetable for this merger. Could it have something to do with the fact that the funds freed up from eliminating Chancellor Sandeen would be used to reward those very chancellors? You don’t really think that Friday was the first time UW System chancellors heard that those funds would be used this way, do you? I certainly don’t. 

What I find most scandalous about these raises is not how grotesquely large they were in the context of the multiple financial needs of a seriously strapped university system, nor how raises were withheld from chancellors who have earned the disapproval of Ray Cross and the Regents. No, what is most troubling to me about the economic logic of these raises is that they reveal once and for all that the role of the chancellor in the University of Wisconsin System is not to represent the interests and needs of his or her university to the UW System, but to carry out the marching orders handed down from above.

Sadly, we now have no other choice but to believe that chancellors like Becky Blank or Mark Mone have not been acting as independent academic leaders, charting the best course for their universities in difficult times. Rather UW chancellors have become little more than well-paid marionettes, whose strings are being pulled from above by Ray Cross and the Walker-appointed Board of Regents. If money indeed talks, these raises speak volumes about the true nature of academic leadership in the University of Wisconsin System.

Talk about Salaries

The following is a post from an anonymous, nontenured author: 

We need to talk about salaries.

Last week the UW Regents approved a 3% pay raise for faculty for the next two years – the largest raise in many years. However, it’s difficult to be appreciative when so many of our colleagues have fallen inexcusably far behind the acceptable pay scale. Ray Cross claims that UW Madison faculty are underpaid by 10%;  assistant and associate professors in the humanities there typically earn between $70,000 and $100,000 per year. This appears to be Cross’s only concern: that these numbers are too small. But our new colleagues at UWM face much worse problems regarding their compensation.

Many new UWM faculty who have come to us via the merger with the UW Colleges faculty earn less than $50,000 a year. Their bump when promoted to Associate Professor with tenure is only around $1,500. For some context, an assistant manager at Kwik Trip with no required higher education can make $45,000 a year. This means we have hard-working faculty who have earned PhDs, teach a 4-4 load, engage in research, and could probably do better, financially, as gas station managers. What kind of signal does this send to our faculty? What kind of message does this send about higher education?

Our new colleagues from the UW Colleges have long dealt with rock-bottom morale. We are told that when talk of raises would come up in their meetings, the prospect was always quickly shot down. Administrators for the Waukesha and Washington County campuses, however, each received a $20,000 bump in salary this year.

The problem is not a lack of money in the system. It’s that the money is constantly moved away from faculty and into new silos. How did the chancellors get their large pay bumps this last week? It came from leftover money from the UW Colleges that could have helped underpaid faculty. This double standard needs to end.

The UWM annual budget for the 2018-2019 fiscal year is $689,165,710, with $243,334,769 going to salaries. The branch campuses add a combined budget of $4,136,764. The salary gap between our UWM faculty and the College of General Studies faculty could be fixed with around $1.2 million. There is no excuse for such inequality among our colleagues. When chancellors get raises larger than many faculty salaries, it shows not only the arrogance of these administrators, but also their lack of interest in faculty compensation, retention, and morale.
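As a rough sanity check of the $1.2 million figure, one can multiply the per-rank salary gaps (using the Chronicle averages cited in this post) by assumed headcounts. To be clear, the headcounts below are purely hypothetical, chosen only to illustrate the arithmetic, not actual College of General Studies staffing numbers.

```python
# Back-of-the-envelope check of the ~$1.2M salary-gap figure.
# Per-rank salary averages are the Chronicle numbers cited in this post;
# the headcounts are hypothetical, chosen only to show the arithmetic.

uwm      = {"assistant": 73_300, "associate": 78_000, "professor": 101_448}
colleges = {"assistant": 45_126, "associate": 51_084, "professor": 62_424}

headcount = {"assistant": 20, "associate": 10, "professor": 10}  # assumed

total_gap = sum((uwm[rank] - colleges[rank]) * headcount[rank]
                for rank in headcount)
print(f"${total_gap:,}")  # roughly $1.2 million under these assumptions
```

Under different headcount assumptions the total shifts, but with a few dozen branch-campus faculty the order of magnitude lands squarely in the low millions, consistent with the estimate above.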

Investing in the College of General Studies can only help UWM as a whole. These branches are feeding students into the Milwaukee campus. How can we expect them to advocate for us if they are treated so unfairly? Numerous faculty searches on our new branch campuses have failed because salaries and workload are nowhere near competitive. If we are to make the most of this merger, it is essential to invest in our colleagues. We must welcome the new faculty to the Panther family by treating them fairly–something the UW Colleges administration has failed to do.

Chronicle of Higher Education Data

UWM Faculty Salary Average:

https://data.chronicle.com/240453/University-of-Wisconsin-at-Milwaukee/faculty-salaries

Assistant: $73,300

Associate: $78,000

Professor: $101,448

UW Colleges Faculty Salary Average:

https://data.chronicle.com/240055/University-of-Wisconsin-Colleges/faculty-salaries/

Assistant: $45,126

Associate: $51,084

Professor: $62,424

Further Analysis of the Mercer “Benefits” Survey

Comments from Nancy Mathiowetz, Professor Emerita, UWM

Former President, American Association for Public Opinion Research

Former Editor, Public Opinion Quarterly

Introduction

It would be useful in reviewing the survey to understand the analytic objectives of the study. What empirical questions are they attempting to address?  And how do they want to use these data? That framework would provide a better lens for reviewing the instrument.

Both questionnaire design and sample design are important to review in understanding the quality of a survey. With respect to questionnaire design, one wants a well-designed questionnaire, for which the wording is both easy to comprehend and does not bias the respondent. The structure of the questions (e.g., Likert scales, open ended, multiple choice) is also important and can contribute to the overall quality of the survey data. A poorly designed questionnaire renders data that may be misleading, biased, or inaccurate.

Similarly, it is important that the sample design (that is, the identification of the population of interest and the means by which to select members of that population) be clearly specified and executed. Once people are selected for inclusion in a study, efforts should be made to encourage their participation so as to have representation across the full diversity of the population of interest. Similar to a poorly designed questionnaire, a poorly designed or executed sample can result in misleading, biased, or inaccurate estimates.

The Mercer Survey

The American Association for Public Opinion Research offers a series of recommended best practices, including recommendations about question wording (see: https://www.aapor.org/Standards-Ethics/Best-Practices.aspx).  Specifically with respect to question wording, the website states: 

Take great care in matching question wording to the concepts being measured and the population studied.

Based on the goals of a survey, questions for respondents are designed and arranged in a logical format and order to create a survey questionnaire. The ideal survey or poll recognizes that planning the questionnaire is one of the most critical stages in the survey development process, and gives careful attention to all phases of questionnaire development and design, including: definition of topics, concepts and content; question wording and order; and questionnaire length and format. One must first ensure that the questionnaire domains and elements established for the survey fully and adequately cover the topics of interest. Ideally, multiple rather than single indicators or questions should be included for all key constructs.

Beyond their specific content, however, the manner in which questions are asked, as well as the specific response categories provided, can greatly affect the results of a survey. Concepts should be clearly defined and questions unambiguously phrased. Question wording should be carefully examined for special sensitivity or bias. When dealing with sensitive subject matter, techniques should be used that minimize the discomfort or apprehension of respondents (or respondents and interviewers if the survey is interviewer administered). Ways should be devised to keep respondent mistakes and biases (e.g., memory of past events) to a minimum, and to measure those that cannot be eliminated. To accomplish these objectives, well-established cognitive research methods (e.g., paraphrasing and “think-aloud” interviews) and similar methods (e.g., behavioral coding of interviewer-respondent interactions) should be employed with persons similar to those to be surveyed to assess and improve all key questions along these various dimensions.

In self-administered surveys careful attention should be paid to the visual formatting of the questionnaire, whether that be the layout of a mail survey or a particular eye towards respondents completing a web survey on a mobile device. Effort should be taken to reduce respondent burden through a positive user experience in order to reduce measurement error and break offs.

In reviewing a hard copy version of the questionnaire, one that appears to have been written for faculty members given its reference to research[1], I see a questionnaire that consists of three distinct types of questions:

  • A partial ranking question (Question 1) that asks for the five most attractive aspects of the position at two points in time;
  • Five-point Likert rating scales, ranging from Strongly Agree to Strongly Disagree and including a “middle” category of “Neither agree or disagree;” and
  • Multiple sets of “maximum difference scales” which ask respondents to examine multiple sets of employment or benefits attributes, requesting respondents to select the most important and least important within each set.

Some specific comments about each of these types of questions follow.

With respect to question 1 (partial ranking question), the format choice is not of major concern; certainly this type of ranking is often used to determine respondents’ preferences. What is of concern is some of the mismatch/sloppiness in the question. First, the question references working for “UW,” but most of the employees answering this question do not work for the UW System but rather at a specific UW facility, so the wording is odd. Second, the question itself asks about what “interested” you most (for the first part of the question) and what is most “important” (for the second part), but the column headings use the term “attractive.” While not a critical inconsistency, it’s a bit sloppy.

The 5-point Likert items have two sets of response options: strongly agree/strongly disagree (Questions 2-11, 15-20, 22-27) or very satisfied/very dissatisfied (Questions 14a through 14r). Of the 22 items that are agree-disagree items, all but two are written in a positive frame, that is, the language indicates a positive point of view. This is not a best practice, and such an approach can lead to “straight-lining,” where individuals simply mark the items in a single column without carefully reading each item. And in general, the field of survey methodology recommends avoiding agree-disagree items, since they often lead to acquiescence bias, that is, the tendency to agree with statements, which produces exaggerated estimates of endorsement for positively-worded statements.
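To see why one-directional agree-disagree grids invite this problem, here is a minimal sketch of how straight-lining can be flagged after the fact. The item labels and responses below are invented for illustration; they are not taken from the Mercer instrument.

```python
# Minimal sketch: flagging "straight-lining" in an agree-disagree grid.
# Item names and responses are hypothetical, not from the Mercer survey.

def straight_lining_rate(responses):
    """Fraction of respondents who gave the identical rating to every item."""
    if not responses:
        return 0.0
    flagged = [r for r in responses if len(set(r.values())) == 1]
    return len(flagged) / len(responses)

# Hypothetical 5-point ratings (1 = Strongly Disagree ... 5 = Strongly Agree)
respondents = [
    {"q2": 4, "q3": 4, "q4": 4, "q5": 4},  # straight-liner: one column throughout
    {"q2": 5, "q3": 2, "q4": 4, "q5": 3},  # differentiated answers
    {"q2": 2, "q3": 2, "q4": 2, "q5": 2},  # straight-liner
]

print(straight_lining_rate(respondents))  # 2 of 3 respondents flagged
```

A grid with reversed (negatively-worded) items makes straight-lining detectable as inconsistency; an all-positive grid, like most of this questionnaire, cannot distinguish a sincere uniform response from an inattentive one.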

Although I am reviewing a hard copy questionnaire, I note that Question 14 has 18 sub-questions, all requiring the respondent to use the same five-point scale (as well as a “not applicable” option). Once again, if presented on a single screen, this would not follow best practice and leads to respondents not fully considering each item individually. In addition, it does not appear that these 18 items are rotated so as to avoid order effects. Once again, this is in contrast to best practices, which recommend randomizing the order of long lists.

Finally, the survey consists of two sets of maximum difference (maxdiff) scaling, an extension of the method of paired comparisons. In a typical maxdiff scaling question, a respondent will rate between four and six attributes of an entity/product/service. Analysis of the data using a specific statistical technique, hierarchical Bayesian multinomial logit modeling, produces importance estimates for each attribute for each respondent.
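The hierarchical Bayesian estimation itself is beyond the scope of a blog post, but a simple best-minus-worst “counts” tally, a common first-pass approximation, shows what maxdiff data yield. The choice sets below are invented for illustration, not drawn from the Mercer questionnaire.

```python
from collections import Counter

# Best-minus-worst "counts" scoring for maxdiff data: a common first-pass
# approximation to the hierarchical Bayesian importance estimates.
# The choices below are illustrative, not taken from the Mercer survey.

choice_sets = [
    # (picked as most important, picked as least important) in each set shown
    ("Healthcare benefits", "Type/variety of work"),
    ("Retirement savings plans", "Type/variety of work"),
    ("Healthcare benefits", "Stable employment"),
]

best = Counter(most for most, _ in choice_sets)
worst = Counter(least for _, least in choice_sets)
attributes = set(best) | set(worst)

# Higher score = chosen "most important" more often than "least important".
scores = {a: best[a] - worst[a] for a in attributes}
for attr, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(attr, score)
```

Note what the tally forces: every attribute that is rarely picked as “most important” ends up with a low or negative score, whether or not respondents actually consider it dispensable. That mechanical property is exactly what the later posts in this thread worry about.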

The redundant nature of maxdiff questionnaires is one of the drawbacks of the approach, since respondents often feel that they have just answered the question. In the present questionnaire, Question 12 consists of 11 sets and Question 13 consists of 20 sets, each requiring the identification of most important and least important.

What is odd and most disturbing about Question 12 is that the question states “some of these benefits or programs are not current benefits or programs at the university.” But the attributes listed in Question 12 are not all benefits or programs; they are attributes of the actual work environment or characteristics of employment. For example, the question includes attributes such as “Type/variety of work,” “Stable employment,” “Career advancement/professional development,” and “Pay.” These “attributes” are juxtaposed alongside benefits such as “Sick leave,” “Healthcare benefits,” and “Retirement savings plans.” In contrast, the attributes presented in Question 13 appear to be, for the most part, benefits.

I find the mixing of employment attributes and benefits attributes in Question 12 to be atypical of most maxdiff designs. It seems inappropriate to ask respondents to make tradeoff assessments between employment attributes such as pay and benefits attributes such as sick leave. The mix of items, which are attributes of two very different constructs, could result in a misleading set of empirical findings.

And placing two of these maxdiff questions next to each other, thereby forcing the respondent to answer 31 sets of these items consecutively, is not ideal with respect to overall questionnaire design or consideration of respondent fatigue.

Sample Design

It does not appear that a sample has been selected for participation, but rather a census of all benefits-eligible employees. What methodology is being used to ensure diverse participation both across all UW system locations and throughout the ranks of faculty and staff? Although a census allows for all members of the population to voice their opinions, it also means that resources to encourage participation must be spread throughout the population, rather than focused on a specific scientific sample.

Final Notes

The survey included no request for demographic information, location, years working in the UW system, or position. Can we assume that this information will be imported from HR files, given the unique link sent to request participation? At a minimum a few of these questions should have been asked to ensure that the data were collected from the person intended to be queried.

And why does the survey bear the UW system and UW-Madison logos, but not those of other universities? If a different methodology is involved for the Madison campus as compared to other campuses, how will this impact comparisons across campuses?


[1] It is unclear if there is a different version of the questionnaire sent to non-faculty staff members.

Notes on the “Benefits” Survey

From Aaron Schutz:

The survey contains forced-choice questions, and you can’t skip any of them. Anyone taking it cannot avoid picking the kinds of benefits wanted and, by implication, not wanted, from the options. I’m not sure whether it is better to complete it or not, but the data are clearly designed to inform cuts in benefits. The very structure of the survey means that the report will necessarily illuminate those benefits that few faculty chose as most important.

To put it another way, why would they even bother asking questions that require us to choose the benefits we prefer instead of simply asking how important each benefit is for us on a Likert scale? The scale would allow respondents to value all of them. The very structure of the report of this data, regardless of what is intended, will imply that some benefits are perceived as less important by us “clients” and, because they are less important, are areas for cuts. Simply answering the questions gives ammunition to those who want to cut benefits, because the structure of the survey requires you to distinguish between most and least important.

Here is the first question that you must choose “one” from:

  • Healthcare benefits (medical, dental, vision)
  • Retirement savings plans (WRS, 403b, 457)
  • Type / variety of work
  • Stable employment

Many people are actually forced to choose among these things.  A stable job without healthcare?  A job with healthcare but no retirement system?

So one needs to ask, why would they create a survey with this kind of structure? What does it mean about what they want to do with it? Are they confused, not realizing that the report will inevitably highlight which benefits faculty don’t choose as important? What if no one chooses healthcare over another benefit? Does that mean we don’t want access to healthcare?

Imagined line from the report:  “The survey indicated that healthcare was the least important benefit to faculty.”

I can’t advise anyone else, but I won’t fill it out.


Insights on the Benefits Survey

-from a colleague with some expertise who prefers to remain anonymous

“The survey company is paid by the regents, ultimately, and so presumably is acting according to the regents’ strategic goals. This is standard practice for short-term consultants in the business world. In my opinion, the regents want empirical support for the notion that UW employees would gladly relinquish the current level of benefits (pension, income continuation, and family/sick leave) in order to gain other tangible and intangible goods. So the survey asks people to rank the relative worth of benefits against other goods (e.g., salary, job flexibility, the opportunity to perform meaningful work, desirable location, etc.). That is really the basic template of every single question. The questions differ from each other only insofar as they give respondents different opportunities to devalue benefits — many different comparisons (pensions vs. A, B or C) and many different hypothetical scenarios (Why did you choose to work for UW? What keeps you at UW? etc.).

“The survey company will be able to mine this data and present it adroitly in order to support the (foregone) conclusion that UW employees would be willing to trade off benefits for other goods. Well, that’s my cynical reading, but survey design does involve the ‘dark art’ of slanting the questions, in order to run a biased analysis, in order to reach the desired conclusion.”