What does a “good” employment service look like?

Mar 17, 2023

This week, the House Select Committee on Workforce Australia Employment Services held one of its public hearings. During the opening remarks, the Committee chair, Julian Hill, remarked that he had asked the Department of Employment what a good service model looks like, and they couldn’t answer. He said they looked like a bunch of “well-paid, gaping fish”. Harsh.

In what amounted to a vague defence of the Department, the first expert witness, Professor Mark Considine of the University of Melbourne, author of many research articles and books on the topic, said: “Nobody knows what a good service model looks like.” Really?

Maybe asking unemployed people would be a good place to start. It turns out they have a view on what a good service model looks like, and the good news is that it’s not rocket science.

We know from a wide range of health and human services that when you systematically understand the end-user’s experience of a service, it can improve service delivery. These types of data can improve the quality of a service either through a ‘change’ pathway (whereby providers initiate quality improvement in response to feedback) or through a ‘selection’ pathway (whereby end-users choose a high-quality service). The selection pathway is consistent with one of the early promises of the marketisation of public services, which, at least in theory, placed importance on consumers being able to drive a system that was more tailored, more personalised, and more flexible via the ‘invisible hand’ of market forces. However, as in almost every other marketisation of public services, the end-user did not end up being the ‘consumer’; instead, the Government became the single customer and started behaving as badly as any other monopsony does. Meanwhile, the unemployed were reduced to ‘throughputs’ and ‘outcomes’ for the enrichment of others.

So, how do you find out what is important to unemployed workers? A word of caution: we don’t want to measure satisfaction. That’s too hard, and it depends on too many things outside the control of a provider. We want to measure experience – what happened, and what did you think of that? I think satisfaction is the wrong consumer measure to use in human services, and it is used far too often. Measure experience: it’s more straightforward, and it is also practical information that providers can use to make changes. There is no point in getting and giving feedback if a provider can’t use it.

For almost a decade, in collaboration with the Australian Unemployed Workers Union, I have been developing a rating system for employment providers based on surveys, interviews, and focus groups with hundreds of unemployed workers across Australia. What we were searching for was a smaller set of underlying factors in a very large data set – in other words, what was driving the answers to the simple question “tell us about your experiences with your employment service?”
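The search for a smaller set of underlying factors in a large set of survey responses is, in statistical terms, a factor-analysis problem. The following is a generic sketch of that step using synthetic data and scikit-learn; the respondent counts, item counts, and two-factor structure here are illustrative assumptions, not the actual AUWURS data or analysis pipeline.

```python
# Illustrative sketch: recovering latent factors from survey items,
# assuming responses are stored as a respondents-by-items matrix.
# Synthetic data only - not the AUWURS data set.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 500 respondents answering 14 items driven by 2 latent factors
# (the real analysis found 7 factors across a much larger item pool).
n_respondents, n_items, n_factors = 500, 14, 2
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
noise = rng.normal(scale=0.5, size=(n_respondents, n_items))
responses = latent @ loadings + noise

fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(responses)   # per-respondent factor scores
print(fa.components_.shape)            # item loadings, one row per factor
```

Inspecting which items load heavily on each extracted factor is what allows the factors to be given interpretable labels like those below.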

What we found were seven factors that consistently underpin unemployed workers’ responses. This has now been turned into a rating app based on the AUWU scale and is available for all unemployed workers to use.

The seven factors comprising what we have called the AUWURS (Australian Unemployed Workers Union Rating Scale) are highly interpretable. They reflect the multifaceted nature of unemployed workers’ experiences of employment services and are consistent with much of the qualitative literature on those experiences. These are the factors:

  1. “Useful” refers to practical assistance to get a job or improve one’s employability. Not surprisingly, this is the most important factor for almost all unemployed workers. Turns out, unemployed workers want practical help to get a job more than anything else. Who knew?
  2. “Client-centred” refers to choice and empowerment. In other words, is the service about the unemployed worker’s needs, or is it about the services that the provider wants to deliver? Not the same thing.
  3. “Fair” reflects the reciprocity and coercion relevant to welfare conditionality. In practice it questions if providers are exercising their extraordinary power appropriately.
  4. “Trustworthy” reflects the integrity of providers’ actions. This is a complex concept and was the most difficult to label because it included items associated with truthfulness as well as online servicing, which appeared to relate to concerns about the loss of human, trusted services.
  5. “Responsive to Feedback” reflects the ease with which concerns about programs can be discussed, and if such concerns are likely to be acted upon.
  6. “Friendly” reflects rapport and relationship and serves as an important reminder that employment services are a human service. Interestingly, this is the factor that usually scores the highest; however, I am not convinced that “we’re useless but we’re really nice” is a great strapline.
  7. “Realistic” is composed of items about job search expectations in relation to both environment and person-related factors. This is a complex factor because setting realistic expectations requires an understanding of the interplay between expectation and hope, and between undue pessimism and optimism. In short, it requires extraordinary interpersonal skills, which I am sorry to say are in short supply in employment services.

The advantage of a rating-scale approach is that it yields finer-grained insight into the mechanisms that describe how employment services are or are not working. While the qualitative literature is important for understanding the lived experience of unemployed workers, it has inferential limitations that ultimately limit how it is regarded as valid evidence from a policy-making point of view. Econometric researchers have argued that there is a dearth of quantitative research on employment services. By using a formal rating approach such as the AUWURS, we propose a way forward: feedback that accounts for subjectivity while allowing standardisation across measures that can be used for policy and program evaluation.

In summary, the Committee could continue to ask people who make money from the system to design a better one, and end up experimenting on the unemployed with ever more useless and harmful systems indefinitely. Or it could ask the unemployed: “Tell us what you actually need.” Seems simple enough.
