In the new Procurement Act, the requirement to award to the Most Economically Advantageous Tender (MEAT) becomes a requirement to award to the Most Advantageous Tender (MAT). Will dropping the word ‘Economically’ make much difference to how public sector buyers assess tender responses?
The short answer is probably not. Most authorities have always had regard to the quality of the submission rather than racing to the lowest price alone, usually looking at a weighted mix of price and quality based on their organisation’s overarching strategy and objectives. The reason for the change to MAT is to stress to buyers that they should, and may, consider the whole package on offer, of which price/affordability is only one factor.
The Act also sets out other factors, including maximising public benefit, which reinforces the message that the evaluation model should not be based on cost alone but should include delivering wider government objectives through spending.
The change from MEAT to MAT does not, in practice, significantly change the way award criteria are set, tenders are assessed or contracts are awarded. Its purpose is to highlight and reinforce the message that contracts do not have to be awarded on the lowest price, nor must price always be weighted higher than quality.
What strategies are available to buyers looking to construct an evaluation model that outputs an award to the MAT – and what is an evaluation model anyway?
The most challenging part of developing a procurement pack for bidders to respond to is determining the evaluation model. The evaluation model will set out all the criteria against which bids will be assessed. The criteria must be relevant and capable of being assessed by the evaluation panel. Designing the model is at the core of a good procurement; determining which criteria are simply thresholds to be met (and so are binary pass/fail) and which are differentiators is one question – then weighting them is another.
When there is a mix of stakeholders with differing priorities, it can be extremely demanding for the buyer to facilitate an agreement between the vying claims of importance – and pity the poor buyer buying for a consortium of public bodies with different strategic aims. Then there is the vexed question of how many criteria to deploy. Too many and the value of each is diluted; too few and key factors may be missed. And finally, how will price be evaluated? There are many strategies for this, ranging from relative price-scoring (Lowest Bid/Bid*weighting) – widely used, arguably not compliant with the regulations and certainly discouraged by the UK government – to a quality-price ratio model, and even pass/fail, i.e. can you match the budget/financial envelope?
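To make the relative price-scoring formula concrete, here is a minimal sketch. The bid figures and the 30% price weighting are invented for illustration; they do not come from the Act or any official guidance.

```python
# Illustrative worked example of relative price-scoring:
# score = (lowest bid / bid) * price weighting.
# All figures and the weighting are hypothetical.

bids = {"Bidder A": 100_000, "Bidder B": 120_000, "Bidder C": 150_000}
price_weighting = 30  # price worth 30% of the total score

lowest = min(bids.values())
relative_scores = {name: (lowest / price) * price_weighting
                   for name, price in bids.items()}

for name, score in relative_scores.items():
    print(f"{name}: {score:.1f} out of {price_weighting}")
```

Note how the scores are relative: each bidder’s price points depend on the cheapest bid received, which is one reason the approach is criticised – a single outlier low bid can distort every other bidder’s score.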
To develop an effective evaluation model, the buyer must take an iterative approach, starting from the overarching organisational aims and the aims of the specific procurement. These aims will usually generate a long list of candidate criteria. The final stage is to determine which are conformance requirements and which could be scored, then work with stakeholders to determine the relative importance and weighting of each.
Price can be hard to weight, as buyers ask themselves how to get the most effective solution, ensuring quality and maximising value while maintaining competitive tension. If price is weighted too highly, a mediocre bid may win (and that may be fine if it meets the specification – not every service is improved by silver or gold plating). If it is weighted too low, suppliers may be encouraged to price high, inferring from the low price weighting that the authority’s priority is service quality regardless of cost.
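A small sketch shows how the price weighting alone can flip the winner between the same two bids. The bids, scores and weightings below are invented for illustration, and relative price-scoring is assumed purely as an example:

```python
# How the price weighting can flip the winner between the same two bids.
# All figures, bidders and weightings are hypothetical examples.

def score(quality, price, lowest, q_weight, p_weight):
    """Quality is a panel score out of 100; price uses relative scoring."""
    return quality / 100 * q_weight + (lowest / price) * p_weight

bids = {"high quality": (90, 150_000), "mediocre, cheap": (60, 100_000)}
lowest = min(p for _, p in bids.values())

for q_w, p_w in [(70, 30), (40, 60)]:
    totals = {name: round(score(q, p, lowest, q_w, p_w), 1)
              for name, (q, p) in bids.items()}
    winner = max(totals, key=totals.get)
    print(f"quality {q_w} / price {p_w}: {totals} -> winner: {winner}")
```

Under a 70/30 quality/price split the stronger bid wins; shift the same bids to a 40/60 split and the cheaper, mediocre bid overtakes it.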
Finally, the buyer needs a first-class specification of both the service requirements and how performance may be measured – this is the necessary foundation on which the evaluation model and the procurement rest.
Best practice will include testing the model against a number of scenarios and adjusting it where such testing throws up unwanted outcomes.
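That scenario testing can be sketched as a simple dry run: score a few hypothetical bid profiles under the proposed weightings and check that the winner matches expectations. The bids, weightings and scoring approach below are all invented for illustration:

```python
# Dry-running a proposed evaluation model against hypothetical scenarios.
# Weightings, bid profiles and scores are invented for illustration.

def evaluate(quality, price, lowest_price, q_weight=70, p_weight=30):
    """Combined score: quality out of 100, relative price-scoring."""
    return quality / 100 * q_weight + (lowest_price / price) * p_weight

# Scenario: does a bare-minimum bid beat a strong one just by being cheapest?
scores = {
    "bare-minimum, cheapest": evaluate(55, 100_000, 100_000),
    "strong, mid-priced":     evaluate(85, 125_000, 100_000),
}
winner = max(scores, key=scores.get)
print(f"Winner under these weightings: {winner}")
# If the bare-minimum bid had won here, that unwanted outcome would
# signal the weightings need adjusting before the pack is issued.
```

Running a handful of such scenarios before publication is cheap insurance against discovering a perverse outcome only when real bids arrive.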
Another conundrum arises when the bidders are all roughly equal in quality: even if price is weighted low, if all bids score much the same on quality then the lowest price will win. If the criteria are correct and the equal scoring reflects that all bidders can meet the requirements and provide the required value, then arguably this does not matter, because you are awarding to the MAT. Alternatively, where buyers expect the market to produce equally good responses to the quality criteria, they may increase the weighting of social value criteria to rebalance and encourage bids that maximise public benefit and are competitively priced. In this scenario, buyers may look to added value/innovation as the differentiator, which will likely deliver real value to the authority.
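To see why equal quality scores hand the decision to price, consider two invented bids that tie on quality: the quality points cancel out and only the price component separates them (figures and weightings are illustrative only):

```python
# Two hypothetical bids that tie on quality: price alone decides.
# Figures and weightings are illustrative only.

weights = {"quality": 70, "price": 30}
bids = {"Bidder A": {"quality": 80, "price": 200_000},
        "Bidder B": {"quality": 80, "price": 180_000}}

lowest = min(b["price"] for b in bids.values())
totals = {name: b["quality"] / 100 * weights["quality"]
                + (lowest / b["price"]) * weights["price"]
          for name, b in bids.items()}
print(totals)  # identical quality points, so the cheaper bid wins
```

Even with quality weighted at 70%, the tie on quality means the 30% price component decides the award in full.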
In summary, each procurement evaluation model must be fine-tuned to the needs of the authority, the specific contract and the ability and capacity of the market. The Government Commercial Function provides excellent guidance on evaluation, including constructing models. Balancing cost and quality in public procurement requires practical experience of tendering and a deep understanding of the regulations; without them, evaluation models can produce awards that are not the MAT, because the weighting and criteria were not meticulously designed, tested and implemented.