As my firm, The Vortechs Group, expands from placing leading talent in tech transfer units to supporting those programs through creative service offerings, such as technical reviews of new innovations, it’s important to understand what is helping and what is not. One thing that is not helping is crappy technical review consultants.
As a prime example, I recently attended a set of faculty presentations of current work being considered for a tech transfer office’s commercialization review. One faculty member opened with an overview of the steps taken to date to assess the technical and commercial potential of her offering. We were all impressed when she explained that she had taken the early step of enlisting an outside, “subject-matter expert” consultant to conduct an independent technical and commercial assessment. This is where things took a turn for the worse…
You see, this faculty member was understandably jazzed up by this report, which convinced her that she had discovered the greatest thing since twist-off tops and iced coffee. She muddled through a presentation of a technology that could easily be summarized as (not novel/small market/implausible as a business/smh). It was then our job to deliver this news and cast ourselves as the dream-crushers of faculty goodwill, a position that no tech transfer office wants to take when it can be avoided.
That night, I reviewed a copy of that consultant’s report to the faculty member and quickly understood that she had been set up for failure. It made me angry that someone in our community would sell this work, and frustrated to consider that it likely happens all the time. I made the following observations:
- A Technical Assessment is not copying a list of patent numbers from Google. It’s also not running the project through a homemade scorecard that can be gamed to cast anything in a positive light. A proper technical assessment contains a landscape of where the technology sits relative to alternatives and a roadmap of where that technology can and should be protected. A secondary outcome is a proper cataloging of the companies active in the space and identification of where the technology might present an opportunity to boost their portfolios.
- A Market Assessment is not identifying the biggest semi-associated market and building your case from those numbers. The market should be segmented, likely down to the most focused addressable market, and a strategy should first be built around the reality of capturing a share of that pie.
- A Competitive Analysis is not cutting and pasting information from well-known competitors’ SEC filings. Any high-school student with access to their library databases can do that. Instead, the focus should be on current product offerings and on differentiating the value of your technology as a stand-alone product or as an improvement to what already exists. Company contacts should also be included.
- Not all evaluations do, or should, end in a green light for faculty. Rubber-stamp consultants give good consultants a bad name. A consultant’s job is to support the customer in decision-making and action, which sometimes includes recommendations on where not to spend their limited time and resources.
This experience was eye-opening for me: a perfect example of how something as small as a consultant’s report can have a larger impact on a university’s ability to succeed at tech transfer and to grow positive faculty relationships. The good news is that, while there are always some bad actors in a system, there are many more great technical review consultants in our community. As I grow this area of my operation, I plan to use this situation as a reminder of the importance of holding anyone I work with and recommend to high standards.
Looking for quality candidates, technical reviewers, or seeking your next opportunity? Contact me directly: Glen Gardner, glen@vortechsgroup.com. Visit us anytime at http://vortechsgroup.com