3 Takeaways from the SME Instrument Reviewers Meeting
by Paolo Lombardi, Head of R&D, tree
Recently I was invited to take part in a meeting in Brussels on October 1st 2018, organised by the European Innovation Council's team running the SME Instrument – a funding scheme for startups and SMEs that distributes up to €2.5m of funding to each company. Since its launch four years ago, its numbers have been impressive: 47,000 applications received, 3,200 companies funded, a total of €1.3 billion distributed, and 10% of European IPOs in 2017 related to portfolio companies.
As one of the expert reviewers for Artificial Intelligence business proposals since 2016, I joined more than 400 other experts from across the EU and beyond – including, at the very least, Turkey and Israel, since I met two experts from those countries. A great variety of people, customs, backgrounds and skills: a celebration of the melting pot of cultures that the EU has tried to build for many decades. I was impressed by the people I met, and I am pleased to have taken part in the event for the level of networking involved alone.
Naturally, we weren't there for a party: the event was the second in a series of day-long workshops for expert reviewers, held so that we could exchange best practices, meet the EASME officers1, and make the programme even better and more effective. The day was packed with two plenary sessions, three peer-to-peer workshops, and three networking breaks. Here are three takeaways from that day.
1. This is a truly special programme
In my 3 years as a temporary officer at the European Commission (EC) I learned that nothing is easy when you work at the crossroads of 28 countries, pulling the ropes of strong national interests from the position of a non-sovereign authority with no monopoly over the member states – financial, military, fiscal, or in foreign policy. The EU is not a federation, and the EC is not a federal government. Many decisions still fall under the countries' control, and EC officers sometimes need to compromise: they cannot always act on analytical grounds, because higher powers dictate the agenda.
Notwithstanding this general situation, the SME Instrument delivers impressive results, such as those listed above; more can be found on the official website. The EASME officers are very dedicated to the success of this programme. It's a young team, full of enthusiasm, which could be felt in the vibrant atmosphere as they facilitated the workshops. The current programme is a pilot, and continuous monitoring and improvement are part of the measure, in an agile logic – which is not always the case for programmes run by public administrations. The goal is to find the European unicorns with game-changing technologies, and to this end professional early-stage investors have been involved – not only as advisors, but also as active participants in the selection process, serving as remote reviewers or in-person judges in the Phase 2 live interviews. I don't know of any previous EU measure aimed at startups that had so many pieces of the puzzle in the right place. Good job!
2. The reviewers are very competent in their fields and passionate about their role
Throughout the three peer-to-peer workshops I had the opportunity to meet and talk with about 50 other reviewers, plus the 15-20 I networked with during the breaks. Although far from a complete overview of the more than 400 people there, I consider it a good enough sample to draw some conclusions.
First, the unexpected aspects. Most of the experts I met had outstanding technical backgrounds: professors, researchers, PhDs, professionals, from all disciplines. Not all, but most. Of the others, some were entrepreneurs or early-stage investors, such as business angels. Others came from finance and law, or were industry managers or ex-managers turned consultants. Overall, I felt surrounded by a smart bunch of people, and it feels good to be part of this club.
Another unexpected discovery was the passion many experts showed during the peer-to-peer workshops. I heard many speak openly about their responsibility as administrators of taxpayer money. That was refreshing and encouraging, and it should not be taken for granted: these experts are selected for their competences, not their commitment to the EU. And still, some argued for the need to spend more time than we are paid for, to assess whether a proposal comes from a real, committed SME or startup team, or is a fake one, crafted to divert funds to other, less innovative projects.
3. High reviewer independence is valued more than low randomness in scoring
Having been part of, and led, dozens of selection committees in public and private programmes for startups over a decade, I somewhat expected the dynamics I am about to describe. They represent the strength of a highly independent committee, like the one set up by the EC for the SME Instrument. And yet, I had never experienced a programme of this scale, with tens of thousands of applications and hundreds of expert reviewers. This scale magnifies the dark side of unmoderated committee independence.
The main point is the discrepancy in how some evaluation criteria are interpreted. There were workshops devoted precisely to the meaning of some of the sub-criteria we are asked to evaluate. I was surprised at how much interpretations diverged within my workgroup of about 20 peers: some prioritised certain readings of a sub-criterion, while others leaned toward completely different ones.
To be fair, the sharpest divergences affected only 3-4 of those 20 sub-criteria, a minority. And I myself had always struggled to understand exactly what those points meant, so this simply confirmed my intuition: the current scoring chart, with 20 or more sub-criteria, poses a consistency problem for reviewers.
I was also surprised to learn that the EASME officers would not say which interpretation should prevail. They refused to give directives of any kind, limiting their intervention to facilitating the discussion among experts.
Although I don't know the true reason, I can only guess that it's because EU officers must remain neutral and let the experts be independent – which I understand and respect. But this discrepancy in interpreting the judging criteria may have a randomising effect on the final scores.
There are many other discrepancies of this kind. To give another example, experts have contrasting feelings about hyperlinks in proposals. Some experts love them, because they can access more information and score the proposal more objectively; they put in extra time to check every video, source, and reference in a proposal. Other reviewers, at the opposite end, feel that external links are an illicit way of extending the proposal beyond the given limit of 10 pages (for Phase 1) or 30 pages (for Phase 2), and that they should consequently be ignored. Some reviewers even confessed to a fear of being tracked when following an external link: they feel it is unfair that applicants could identify the reviewers, or at least their nationality, when the EC keeps these details secret. And again, no guidance was given by EC officers on this topic; they remained impartial listeners to our discussions.
I could continue, but in short, the takeaway for prospective applicants is this: to succeed, they need to convince four independent experts, who do not know each other and sometimes hold different biases about aspects such as criteria interpretation, hyperlinks, the type of information provided, and so on.
As mentioned above, this high independence actually turns out to be a good feature of the selection process, but it comes at the cost of some randomness in the score results. Applicants with good proposals should not be discouraged if excluded on their first attempt, and should always give their application a second chance.
1Executive Agency for Small and Medium-sized Enterprises