At the end of last year, the UK’s National Audit Office issued a very useful document titled “Commercial and contract management: insights and emerging best practice”. We provided an initial overview of it here, and now we are into a more detailed review of its content and findings. For each area the NAO has covered, we will look at its content and then add any further analysis or thoughts of our own. Today, we will take a look at number 8 of the NAO insights (as they call them) – Properly evaluate bids. This falls under the wider “market management and sourcing” heading.
So this looks like a statement of the blindingly obvious really. Of course bids should be – must be – evaluated properly. Yet, as the NAO says:
Government officials often say that they are good at ‘getting the bidding process right’. Very few procurements are legally challenged. But we have recently seen a handful of examples where procurement has gone very wrong. And when it goes wrong it has a significant impact on value-for-money.
One major challenge in evaluating bids (as we have said many times over the years) is assessing suppliers’ real ability to deliver, rather than the extravagant promises and bright ideas they are tempted to put forward in their proposals. But other issues identified by the NAO include the risk of excluding desirable bidders (such as new or small firms) because of the process, and the NAO case studies include a couple of examples where the way the competition and evaluation were structured led to insufficient real competition. There is also the risk of evaluation processes losing the critical “level playing field” objective. This is an old “favourite” of ours:
When reporting in 2012 on Hinchingbrooke Health Care NHS Trust, we found that the authority let bidders adjust the risk in their own proposals and could not be sure it compared like with like when selecting its preferred bidder. This increased the risk of making the wrong choice
Another problem comes when evaluation criteria are set that don’t align with what is really needed from the contract and the supplier. Sometimes price may be given too much prominence, or aspects of “back office” management may be scored heavily compared with the front-line service delivery that is actually needed from the supplier.
All in all, the NAO gives sensible advice here, including careful assessment of risks and consideration of past performance – within the bounds of the regulations, of course. The importance of having a “should cost” model comes up here again, as it did under some of the earlier insights in this guidance.
Public Spend Forum Comments
This topic has long been a particular interest of mine, but it is not always given the prominence it deserves. Determining tender evaluation processes and then executing them properly is often seen as a pretty basic or almost mechanical task. It isn’t. Getting it wrong can lead to a supplier being selected who really isn’t the best equipped to carry out the work, or to legal challenge. Ultimately, these issues can cost taxpayers millions of £, $ or €.
The NAO advice is generally useful, although it would have been good to see something on the balance between cost and other factors in the evaluation process. This is another fairly specialist and complex topic that we have explored previously, but getting it wrong can easily lead to the feeling that the wrong supplier has been selected. The NAO does say “Furthermore, in our discussions, practitioners told us that they felt that the right bidder does not always win”. They put this down to the issue of delivery versus promises, but it can also relate to those financial evaluation issues – either price is given too much prominence or not enough.
That is not simply about the weighting of price / cost. The methodology for converting financial proposals into actual scores (before the weighting is applied) can have a huge impact on the end result of the evaluation.
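To illustrate that point, here is a minimal sketch – with entirely hypothetical bidders, prices, quality scores and weightings, not drawn from the NAO guidance – comparing two conversion methods that are commonly seen in practice: scoring pro rata against the lowest price, and linear interpolation between the lowest and highest prices. With the same three bids and the same 50/50 weighting, the two methods pick different winners.

```python
# Illustrative sketch only: hypothetical bids and weightings, not taken from the
# NAO guidance. It shows how two commonly used price-to-score conversions can
# produce different winners from identical bids and identical weightings.

# Hypothetical bids: (bidder, price, quality score out of 100)
bids = [
    ("Bidder A", 100_000, 60),
    ("Bidder B", 140_000, 80),
    ("Bidder C", 340_000, 50),
]

PRICE_WEIGHT = 0.5
QUALITY_WEIGHT = 0.5


def proportional_price_scores(prices):
    """Lowest price gets 100; others are scored pro rata (lowest / price * 100)."""
    lowest = min(prices)
    return [100 * lowest / p for p in prices]


def linear_price_scores(prices):
    """Lowest price gets 100, highest gets 0; others interpolated linearly."""
    lowest, highest = min(prices), max(prices)
    return [100 * (highest - p) / (highest - lowest) for p in prices]


def rank(price_scores):
    """Combine weighted price and quality scores, highest total first."""
    results = []
    for (name, _, quality), p_score in zip(bids, price_scores):
        total = PRICE_WEIGHT * p_score + QUALITY_WEIGHT * quality
        results.append((name, round(total, 1)))
    return sorted(results, key=lambda r: r[1], reverse=True)


prices = [price for _, price, _ in bids]

print("Proportional method:", rank(proportional_price_scores(prices)))
# -> Bidder A wins (80.0 vs 75.7 vs 39.7)

print("Linear (min-max) method:", rank(linear_price_scores(prices)))
# -> Bidder B wins (81.7 vs 80.0 vs 25.0)
```

In this made-up case the presence of one expensive outlier bid compresses the price scores under the min-max method, so the cheaper bidder’s price advantage shrinks and the higher-quality (but pricier) bidder comes out on top – exactly the sort of outcome swing that the choice of scoring methodology can drive.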
Some of the other issues not fully addressed here include how project management discipline is maintained through the evaluation process, how to manage a team of evaluators when scoring is not done by a single person, how to maintain an audit trail, and how to ensure the process is fair. Some of these issues are hinted at in the NAO case studies, but the problem is that evaluation is a huge topic in itself. That is not to say this guidance is not useful, but inevitably it covers some important issues while not really getting into others in detail.
And of course in the next iteration of this document, the recent NDA fiasco (which we wrote about several times, including here) will probably be added to the list of terrible case studies – that evaluation process, a textbook example of how not to do it, looks likely to cost the UK taxpayer some £100 million.