KEF – A Step Forward or a Missed Opportunity?

Do the metrics proposed in the new Knowledge Exchange Framework represent a reliable way to measure university innovation, or will they lead universities down the wrong path? OUI’s Chief Operating Officer Adam Stoten discusses.

The Knowledge Exchange Framework (KEF) has finally arrived, or at least Research England has just published a report confirming the metrics to be used in the inaugural 2020 exercise, following a consultation process and pilot run in which Oxford participated. This is a watershed moment in putting a spotlight on Higher Education Institution (HEI) Knowledge Exchange (KE) as a driver of value creation regionally, nationally and internationally. While there is much to celebrate in the fact that government recognises this area as important and worthy of investment, significant problems remain with the proposed KEF metrics.

Written by Adam Stoten, Chief Operating Officer at Oxford University Innovation

Research England originally proposed the purpose of the KEF to be threefold:

  1. To provide universities with new tools to understand, benchmark and improve their performance.
  2. To provide businesses and other users with more information on universities.
  3. To provide greater public visibility and accountability.

Based on the consultation process feedback, it was clear that universities saw more value in (1) and (3), but overall the aims were deemed sensible. What the KEF wasn’t going to do in the first instance was act as a direct lever for government funding into universities. However, it seems that it is a question of when, rather than if, this will happen.

Much of the consultation centred on the suite of metrics to be used. The good news was that rather than focusing on the KE activities that tend to enjoy the most press coverage – i.e. spinouts – Research England attempted to define metrics covering the full range of KE activities, spanning everything from business interactions to local regeneration to public and community engagement. This is critical if the KEF is to provide an assessment framework that gives all UK HEIs an opportunity to shine in their particular areas of strength.

The other levelling mechanism, and a very sensible one at that, was to avoid comparing the likes of Cambridge University with the Liverpool Institute for Performing Arts. HEIs were instead clustered with peers, thus making comparisons within a group more appropriate.

So, what has changed as a result of the consultation process?

In terms of the HEI clusters, not very much. All but one of the original clusters remain, with the Social Sciences and Business specialists subsumed into the rest because the sample size was too small. This reflects a general welcoming of the approach by HEIs, albeit with some specific reservations about the cluster descriptions, the explanation of why institutions were clustered, residual variability within clusters and the potential for the clustering to confuse industry.

In terms of metrics, the consultation process yielded plenty of feedback, with the majority of respondents indicating that the metrics covered an acceptable breadth of KE activity. While the seven high-level categories below – called perspectives – remain unchanged, the underlying metrics have in many cases been amended.

KEF perspectives:

  1. Research partnerships
  2. Working with business
  3. Working with the public and third sector
  4. Skills, enterprise and entrepreneurship
  5. Local growth and regeneration
  6. IP and commercialisation
  7. Public and community engagement

Most changes have centred on normalising the data for institutional size by HEI income, and in some cases differentiating between KE involving SMEs versus non-SMEs. There is also a major change regarding public and community engagement, with the previous metric being deemed unfit-for-purpose and an entirely new self-assessment metric to be developed in conjunction with the National Co-ordinating Centre for Public Engagement (NCCPE).

From an Oxford University Innovation viewpoint, the perspectives of most interest are those covering academic consultancy (‘Working with business’ and ‘Working with the public and third sector’) and ‘IP and commercialisation’. The changes to the former are largely clarifications, with HEI income normalisation and SME/non-SME differentiation added alongside the explicit inclusion of facilities and equipment income, and it is encouraging that these changes reflect consistent feedback from the consultation process.

However, the absence of a non-monetary metric for consultancy remains problematic and somewhat myopic, as it fails to capture fully the value and impact that consultancy activity can bring to an institution. In our experience, consultancy can represent a low-cost, low-risk primer for more financially lucrative downstream collaborative activities such as industry-sponsored research or a Knowledge Transfer Partnership.

While flaws remain in the consultancy metrics, the change to the IP and commercialisation perspective is both the most baffling and the most troubling. Two of the three associated metrics remain unchanged – ‘Average external investment per formal spinout’ and ‘Licensing and other IP income as a proportion of research income’. However, a new metric – ‘Estimated current turnover of all active firms per active spinout’ – replaces ‘Research resource (income) per formal spinout’. As a result, all IP and commercialisation metrics remain monetary.

The consultation process elicited quite strong responses regarding the significant imperfections of the original metrics, noting the various challenges with IP-related income: its lumpy nature, the fact that most of it derives from a small number of HEIs, the time lag from a technology transfer “event” (i.e. a licence deal) to significant revenue, and the potential for a small number of high-value deals or investments to skew an institution’s overall figures.

The metric assessing ‘Licensing and other IP income as a proportion of research income’ is at particular risk of being skewed by a very small number of high-value deals. Most technology transfer offices (TTOs) have portfolios characterised by a small number of high-earning deals and a long tail of low-earning (but potentially high-impact) licences. The presence of one or two very high-value deals may be as much a result of serendipitous research activity or sectoral strength as of an institution’s inherent KE capability.
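
As a purely illustrative sketch (the figures below are invented and describe no real institution), the following shows how a single outlier deal can dominate this ratio:

```python
# Illustrative only: hypothetical figures showing how one outlier licence deal
# can dominate 'licensing and other IP income as a proportion of research income'.

def ip_income_ratio(licence_incomes_m, research_income_m):
    """Total licensing/IP income divided by research income (both in £m)."""
    return sum(licence_incomes_m) / research_income_m

long_tail = [0.02] * 50        # fifty licences earning roughly £20k each
outlier = [12.0]               # one serendipitous, very high-value deal
research_income = 500.0        # annual research income (£m), invented

print(ip_income_ratio(long_tail, research_income))            # 0.002
print(ip_income_ratio(long_tail + outlier, research_income))  # 0.026
# One deal multiplies the metric thirteen-fold without any change in the
# institution's underlying KE capability.
```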

If the aim of a TTO is to efficiently transfer as much technology as possible, then assessing research income per licensing transaction may be a better approach. Arguably an even more useful metric is research income per exclusive licensing transaction. Exclusive licensing typically involves a more challenging negotiation, as it requires the licensee to invest in developing, manufacturing, marketing and selling a product or service, whereas non-exclusive licensing may have a high transactional frequency based on repeat licensing of ready-to-go tools (e.g. software).
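
By way of comparison, and again using invented numbers only, the per-transaction measures suggested above could be computed very simply (the exclusive/non-exclusive split is assumed to be available, which may not be the case in every data return):

```python
# Illustrative only: invented figures for the alternative metrics suggested
# above, research income per licence and per exclusive licence.

research_income = 500.0     # annual research income (£m), invented
licences_total = 120        # all licences executed in the year, invented
licences_exclusive = 30     # assumed subset that were exclusive deals

income_per_licence = research_income / licences_total         # ~£4.2m
income_per_exclusive = research_income / licences_exclusive   # ~£16.7m

print(round(income_per_licence, 1), round(income_per_exclusive, 1))
# Lower values mean more licences (or exclusive licences) per pound of research
# income, and no single deal's value can skew the result.
```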

There is also, in most cases, a material delay between a licensing/spinout transaction and significant related financial returns, which means that any measure involving commercialisation income may at best reflect well upon historic competency in KE, rather than indicate current ability.

Focusing only on monetary metrics also ignores the growing critical mass of high-impact but low-income KE activity. At OUI we are seeing growing demand from academics for social enterprise vehicles to develop products and services arising from academic research. These ventures often require little in the way of external investment and are unlikely to generate large financial returns through licensing. However, they are potentially very impactful, and the underlying ability of institutions to support this form of KE is likely to increase in importance, especially in relation to social sciences and humanities research.

The new metric regarding spinout turnover is particularly problematic and has several shortcomings:
  • It requires us to know the revenues of each spinout; this is likely to involve a great deal of extra work by institutions to comb through Companies House filings to determine accurate figures, which may not even be available there.
  • It can, and likely will, be massively skewed by a small number of very high-earning spinouts.
  • It is a poor reflection of a university’s current ability to create new companies or commercialise IP; instead it reflects how well spinouts have been managed post-inception to reach the point where they generate revenues – which may well be based on IP that has little to do with what was originally licensed from the university (a sketch with invented figures follows below).
  • Most life science companies (certainly in the therapeutics space) will be acquired long before a product reaches the market or generates significant revenue, so they will never have significant turnover, even if their drug goes on to become a blockbuster. It is not clear what is reportable in this context.
If anything, this is more flawed than the metric it replaces.
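
To make the management-versus-creation point concrete, here is another sketch using entirely invented figures for the new ‘turnover per active spinout’ measure:

```python
# Hypothetical sketch: 'estimated current turnover of all active firms per
# active spinout'. All figures are invented for illustration only.

def turnover_per_spinout(turnovers_m):
    """Total turnover of active spinouts (£m) divided by their number."""
    return sum(turnovers_m) / len(turnovers_m)

# Institution A: five old spinouts, one of which now trades at scale.
inst_a = [250.0, 3.0, 1.5, 0.5, 0.0]
# Institution B: twenty recent spinouts, all pre-revenue or early stage.
inst_b = [0.5] * 5 + [0.0] * 15

print(turnover_per_spinout(inst_a))  # 51.0 – dominated by one mature firm
print(turnover_per_spinout(inst_b))  # 0.125 – despite far more company creation
# The metric says little about either institution's current ability to spin
# out companies; it mostly reflects how one historic spinout has fared since.
```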

Pausing for breath, I’m aware that the above complaints may fall under the category of First World problems – I’m pretty sure that Oxford will do OK under the new metrics even with the changes made. I’m also conscious that I’m grumbling about an area of KE that is truly relevant only to a minority of UK HEIs. So, what’s the big deal? Why get exercised about it?

Well, many of us who work in KE, and specifically in supporting the commercialisation of HEI-derived IP, have become accustomed to regular accusations from industry and investors of being fixated on wringing the last penny out of each transaction, and of over-valuing the IP for which we are seeking partners. In some cases, these accusations have some merit. However, in recent years I have without question witnessed a growing sophistication amongst TTOs and a prevailing focus on impact over income. This reflects both a maturing profession and the excellent training and development support from the likes of Praxis Auril. As a blanket criticism of TTOs, the charge now carries far less weight.

The reason that the KEF metrics cause me real consternation is that purely monetary metrics risk pushing TTOs to focus more on revenue than impact, which would be an unfortunate and retrograde outcome. The potential of the KEF to materially drive undesirable behaviours in TTOs will of course depend on how it is used by government to inform funding decisions. This remains unclear, but should the KEF ultimately influence how, for example, Higher Education Innovation Fund resources are allocated, the threat may become much more real. As a member of an organisation that strives to support impactful KE projects irrespective of the likely monetary ROI, I would be saddened if an inadvertent consequence of the KEF was to reduce support for endeavours such as social enterprise at a time when demand amongst academics is growing.

One of the reasons given for the current metrics is that Research England wishes to minimise the data-collection burden on HEIs by relying on data already collected as part of the Higher Education Business and Community Interaction (HEBCI) survey or via other statutory returns. Infuriatingly, the HEBCI survey already captures data that would enable more appropriate quantitative metrics, for example one based on the number of licences executed: a metric that would better reflect an institution’s current ability to find partners for its IP, that is agnostic to deal value, and that does not exclude valuable activities like social enterprise.

The very fact that the KEF exists is testament to the growing recognition by government that there is massive value – and still much unrealised potential – in supporting the exchange of knowledge between UK HEIs and partner organisations. Skilled workforces, new products, new companies providing new jobs, transformative policies, greater tax revenues, rejuvenated local ecosystems: all these can be products of HEI KE activity.

It is only right that we ensure that HEIs can understand what they do well and where they could improve, that such information is transparent for businesses and the general public, and that it is acted upon. The KEF is a great step towards this, and it represents a real opportunity to calibrate and align behaviours in a manner that results in positive outcomes. It would be a huge own goal if an exercise with such noble aims were to result in behaviours that are ultimately at odds with realising the incredible potential residing in our HEIs.

To this end, I hope that Research England continues to consult with stakeholders – both public and private sector – to ensure that the KEF metrics benefit from further refinement and are conducive to generating both impact and income.

Adam Stoten, Chief Operating Officer at Oxford University Innovation