For some time now, health and social service providers have pointed to the contradiction of being funded by the number of services they deliver for programs that espouse better client outcomes. Now, with increased Commonwealth Government interest in outcomes-based approaches, is this a case of be careful what you ask for?
The traditional way of delivering social services is familiar to us all: identify a need, match an available service, then ensure the resulting service activity is delivered within the program's settings and funding. This approach has endured for good reason: it provides programs with tangible measures and funding accountability.
In practice, this approach has its limitations. Programs and funding can be delivered within government, departmental or professional silos. Coordination of care can fall through the cracks. Comorbidity or the effect of multiple disadvantage may see an unaddressed need compromise otherwise successful support.
Not everyone’s response to a service is the same. For some, the underlying problem may not be the presenting issue. For others, there may be age-related, cultural or other factors that produce different results for individuals.
As a result, there have been calls from the sector and advocates to replace traditional counts of service activity with measurement and funding by successful outcomes. The Productivity Commission recently called for health funding to be oriented toward innovation and outcomes. Meanwhile, the Commission’s review of competition and informed choice in human services had a strong emphasis on improved user outcomes and community welfare.
Commonwealth Government departments have been aware of these calls and are introducing systems that support outcome measurement, exploring outcome frameworks and piloting outcome-based payments. Meanwhile, service providers have recognised the value of outcomes measurement for continuous improvement, efficient resource allocation and increased client satisfaction.
However, these shifts raise the question of how an effective and consistent approach to outcomes measurement and funding can be achieved without compromising the underlying principles of holism and diversity.
There are a number of risks and practical challenges that need to be addressed to support such a shift.
Risk 1: Outcomes can be hard to capture
The full impact of quality service delivery can be hard to quantify. Part of this is due to the nature of the service and part to the time lag before the full benefits are evident. Prevention, too, eludes measurement, because it is hard to quantify what did not happen. If measurements do not embed higher-level qualities (such as dignity, participation and resilience), or if they seek to capture outcomes at the end of short funding periods, they will not be genuine assessments of client outcomes and risk being little more than rebadged output measures.
Risk 2: Outcomes are rarely cumulative or attributable
Human lives do not always progress predictably. Most people who enter services also need support for the broader impact on their family, which requires flexibility within services. Many social service needs are episodic, and sometimes people appear to go backwards. Hence, it can be difficult to delineate evidence of the full impact of any one service, program or funding source on an individual. People’s lives don’t align with program funding rules. For this reason, outcome measurement tools need to be designed carefully: they must drive evidence-based collective practice at the coalface, aggregate consistently to inform evidence-based policy, and link meaningfully to national administrative and population data sets.
Risk 3: User-pays models do not pay for outcome collection
There is a steady shift toward client-driven and user-pays service delivery across the health and social services sector. While these principles seek to drive competition, efficiency and quality, they operate within a consumer context where most people expect to pay for the service and no more. Without a strategy to fund more time-consuming and complex evaluation, attempts to introduce outcomes measurement and funding will be compromised. And without government support for consumer education on the value of outcomes approaches and evaluation, the efforts of willing providers may be hamstrung by consumers choosing something else.
Practical challenges for outcomes funding
One of the most significant practical challenges for the shift to outcomes measurement and funding is that many social service providers lack the capacity to evaluate what outcomes are being achieved. The common challenges that Australian providers identify are the additional time it takes, the need to retrain staff, difficulty translating processes to the different requirements of various funding agencies and a lack of viable evaluation tools.
These challenges have also been identified in a recent UK systematic review of organisations’ capability to provide good evidence of the effect of their social services. The review attributed these gaps to three main factors:
- Lack of financial resources;
- Lack of technical capability and evaluation literacy;
- Challenges identifying relevant evaluation systems and evaluation outcomes.
The authors argued that the result of these factors was a failure to provide strong evidence of impact at a time when governments increasingly demanded justification of program funding through social and economic benefit.
A range of tools for conducting outcomes assessments of programs is already available. Currently, a number of members of the social services community are developing and trialling outcomes approaches (such as the CSSA shared outcomes framework), while the Department of Social Services is working on outcome components within its DEX measurement tool.
However, many models are limited in that they do not adopt a consistent or holistic definition of outcomes, while many focus on measurement techniques and economic analysis tools (such as NDIS actuarial modelling). This risks missing the full impact of innovative social service programs.
For instance, the Creative Victoria Creative Makers of Change program uses art and video to make inroads into ‘wicked’ social problems. However, to capture the full power of interventions by the creative industries, some understanding of aesthetic, emotional and psychological impact would also be needed to see the full picture of the program.
Alternatively, Catholic Care NT recently had its Financial Wellbeing and Capability Program evaluated by the Australian Centre for Community Services Research, which noted the interplay of financial literacy and Indigenous knowledge as vital to understanding the impact of the program. This points to the importance of outcome measurement and funding arrangements having the flexibility to incorporate cultural factors such as these.
These are but two examples of the challenges facing primarily program-logic or economic approaches; there are others. This observation is not intended to dismiss the value of such approaches. Rather, it highlights the value of using mixed methods and triangulation to capture the full range of outcomes being achieved.
If the shift to efficient, effective and consistent national outcomes approaches is to be achieved, then government departments and social services leaders must attend to the risks and challenges.
Outcomes measures need to be long-term and reach across the silos. Data collection must embody the principles of individual dignity and local subsidiarity, while aggregation and reporting of impact at program, system and national levels is vital. Funding accountability needs to focus on reporting client outcomes, not just spending against budget lines.
What makes this particularly challenging for governments is that our social services are provided as part of a federal system. Where governments could once drive reform, funding and delivery, they can now at best be stewards for a sector with significant state and territory responsibility, increased service provision by private and not-for-profit providers, and growing client-oriented markets. It can be hard for them to have visibility and influence over each phase from design to user experience. In this context, any new approach must be designed through collaboration and partnership between all sector stakeholders.
Achieving this across a sector that is already awash with change will be a significant challenge for policy leaders in the months and years ahead.
Catholic Social Services Australia has recently created the new position of Director of Research. Amongst the range of services that this new role can provide to CSSA members is support with evaluation research design, technical capability development and evaluation literacy (including evaluation systems, evaluation outcomes and review of consultant evaluation methodologies). Interested member organisations can contact Brenton.Prosser@cssa.org.au for more information.