Public Affairs

Assessing the Programme for Government

Derek Birrell, Professor in the School of Criminology, Politics and Social Policy at Ulster University, questions the validity of the methodology used in the new format for the Programme for Government (PfG).

The draft Programme for Government, published after the consultation on the Programme for Government Framework, was notable for a change which highlights the significance of the underpinning methodology. The original structure of outcomes, improvement indicators and measures was replaced by one which retained the outcomes but removed the list of improvement indicators, while the list of measures was renamed indicators.

This somewhat fundamental change was introduced to bring the PfG more into line with the apparent requirements of the Outcomes Based Accountability (OBA) methodology put forward by the American writer Mark Friedman. In reality, the draft PfG draws on two different methodologies and attempts to merge them. This raises key questions about the validity of both approaches, the nature of the merger and their relevance for a programme for government.

The first source of the underpinning methodology is a Scottish Government exercise known as the National Performance Framework, or Scotland Performs, which examines the performance of the Scottish Government against a range of socio-economic indicators. Regularly revised, Scotland Performs consists of a small number of strategic objectives and targets, 16 general national outcomes and some 50 national improvement indicators, plus an accompanying narrative on why the outcomes are important, what influences them and what Government should do. The Executive’s draft PfG is strikingly similar, with 14 general outcomes, and is mostly couched in vocabulary similar to that of the Scottish framework.

The draft PfG now has 46 indicators, many similar to those in Scotland Performs, and with a similar type of narrative attached to each, covering: why does this outcome matter; what are the issues; what does this look like; and what will we do? However, what is most significant about Scotland Performs is that it is not the Scottish Programme for Government. The annual Programme for Government is an entirely different type of document and the 2015-16 version described the underpinning methodology as covering: values; themes and priorities; strategies and policies; legislative programme and funding commitments.

The Scottish Programme for Government 2016-17, which has recently been published, is based on five amended themes covering: education; promoting a sustainable economy; transforming public services, including the NHS; putting people in charge through co-production; and Scotland’s place in the world. The Programme for Scotland is 90 pages long but only a few lines are devoted to the National Performance Framework. The Head of Scotland Performs has described the main purpose of Scotland Performs as acting as a tool for scrutiny for parliamentary and audit accountability purposes.

While the structure and content of the draft PfG are still strongly influenced by Scotland Performs, the introduction is specific in stating that it is based on Friedman’s Outcomes Based Accountability methodology. An immediate problem with this approach is that OBA is an evaluation and performance methodology and as such cannot and does not prescribe any policies for a programme for government. Such models and methodologies have been subject to a range of criticisms.

It can be noted that, in a response to the consultation on the PfG Framework, the BMA expressed concern that the Framework was based solely on OBA and questioned whether it constituted a robust evidence base and was fit for purpose. The use of OBA demonstrates a number of conceptual flaws, mostly identified in the literature on public administration and management. The first concerns the definition of outcomes that OBA uses. In its normal meaning in public administration, an outcome is what has been achieved and what the impact has been.

OBA uses outcomes in a different sense, meaning desired or imagined outcomes rather than actual outcomes, so that the whole exercise becomes hypothetical. Using the term desired outcomes produces confusion with objectives, goals and targets, but, very significantly, it also involves a rejection of other outcome-based methodologies that use actual outcomes. Some such methodologies are widely used in England, for example, the NHS Outcomes Framework and the Adult Social Care Outcomes Framework (ASCOF). In these, the views and experiences of users and user organisations have a predominant role.

 

Outcomes

A second and more important conceptual criticism relates to the relationship in OBA between desired outcomes and indicators and the assumptions made about cause and effect. An essential feature of OBA is to “work backwards” from the desired outcomes, which means that the outcomes are seen as caused by the indicators, and the indicators in turn as caused by the “activities” or policies.

Outcome-based approaches such as OBA have been strongly criticised by Professor Tony Bovaird of Birmingham University, a leading expert in public administration. In an academic article entitled “Attributing Outcomes to Social Policy Interventions: ‘Gold Standard’ or ‘Fool’s Gold’ in Public Policy and Management”, he writes of such methodologies as using a simplistic and misguided cause-and-effect model. In practice, the argument is that the outcomes in the draft PfG would not be caused by the nominated indicators. Bovaird sees such indicators as using jumbled pathways and under-specified cause-and-effect models. Achieving policy objectives is, in his view, a much more complex and multi-faceted exercise.

The third major methodological criticism of OBA is that it is based on a somewhat outdated attempt to bring back performance indicators, which have largely fallen into disuse after much criticism by such academics as Norman Flynn in his well-known book, ‘Public Sector Management’. The major criticisms were: the difficulty in finding consensus on what should be measured; the potential for manipulation of data; and the problem of attribution, since whatever condition or statistic is produced there can always be controversy over what it can actually be attributed to.

Fourthly, OBA suggests the use of an evaluative criterion of ‘is anyone better off?’ This again is a phrase that can vary in its meaning, interpretation and method of calculation. It can be pointed out that many current UK government policies are not intended to make people better off, but have other aims. These may include: reducing expenditure; achieving fairness; or making people less dependent.

The two perspectives of Scotland’s National Performance Framework (NPF) and Outcomes Based Accountability can have applications for specific purposes, but not as a basis for a programme for government, nor do they reflect best international practice. OBA has been found useful as an evaluation tool based on data collection for small localised projects. Attempting to devise a PfG for Northern Ireland drawing only on these two methodologies will produce a plan very light on policies. Senior civil servants have been asked to produce a delivery plan of activities for each indicator. The early drafts show the acute limitations of this approach. If they follow the discipline of OBA, they can only specify what will be a cause of the indicator, producing narrow and fragmented policies.

The alternative demonstrated in some drafts is to range widely over the whole policy area, for example, the whole of adult social care, but this means that the discipline of OBA is abandoned. It remains to be seen how much in the way of policies, priorities, strategies, targets and legislative proposals can be rescued from what has to date been an ill-informed exercise.
