Scott Wilcox
Here and Tomorrow
Scott Wilcox is the CEO of Here and Tomorrow. Fun fact: he was formerly the Chief Innovation Officer for SXSW!
Do you mind telling us more about what you do?
I do technology consulting focused on helping companies adopt AI, whether for product features, for their platform, or for whole new products. Oftentimes I bundle that with market research, market positioning, and product design. And I'm working with a variety of people across my network to deliver an exceptional result for the client. So I'm very results-driven and very boutique.
You mentioned research. I'm curious to hear how that plugs into your workflow with any given project or client engagement? Where and how do you use what type of research?
That's a great question. I leverage all the sources at my disposal.
I'm always looking for new partners in that space to accelerate my research capabilities. But mostly I'm focused on positioning a product in the marketplace and how it plays to the client's competitive advantage.
Particularly with AI, I tend to try to key off their competitive advantages as a company to build on their strengths as opposed to addressing the weaknesses because I find it's really difficult to change the essence of a company overnight. And one of the things that AI can do really well is accelerate that competitive advantage and build upon that in order to open up new markets, either upstream or downstream, vertical or horizontal.
How do you think about the intersection of desk research (knowledge that already exists) versus primary research when you're working with your clients and answering these questions?
That's a great question as well. Having that baseline of desk research, understanding the general landscape, is super important for targeting the questions you ask individual users, so you get to the specificity you're ultimately looking for.
User interviews are key to every bit of product design. It's something I do a lot of. I really try to talk to enough people to develop those personas and be confident in those choices.
Oftentimes, when we're innovating, we have to make dramatic choices about what we scope into the MVP. So I really want to quickly identify the primary users as opposed to the edge cases, so that we can deliver a very polished, successful experience.
There's nothing like talking to the users of the system you're targeting and being able to gain those key insights to understand how they might approach the product.
When you think about different types of research that you may or may not conduct – or even in past roles, thinking about innovation and strategy work – there are traditionally three buckets we've heard of. I'm curious if these check out, and also where you may or may not have spent time.
Big projects: we've got this big effort we're trying to accomplish, all these different sources, we sort of know what the end outcome is going to look like, and we know how we're going to use it.
Very small, ad hoc: we've got an idea, the CEO has a suggestion we need to verify or do due diligence on, we're hopping on a call, we want to get smart.
This middle piece, which is almost more about a cadence: ongoing, staying smart on a trend, a category, or whatever it might be.
Does that resonate? And I'm curious where you distributed your time across those.
Yeah, that does resonate.
For me, I'm often dealing with innovation, trying to validate product ideas. So oftentimes there is an idea that emerges out of the board, or from the CEO, or from a senior product leader. And oftentimes there are a lot of assumptions there that really need to be validated to see if they actually check out. It's that second category, ad hoc: really quickly understanding whether something's viable, whether it could create a competitive advantage, understanding how that would position against the competitors in the marketplace. I'm very driven along the lines of positioning research in the service of product innovation.
Staying up with everything is just an endless, endless task. Simplifying that is really key, particularly for executives, so they really feel like they're well-versed in what's happening in the marketplace and what that might mean for their capital allocation over the coming quarters and years. So I see that as essential.
As far as the large research projects, I'm doing a little less of that because that's very, very holistic and all-encompassing. But certainly that level of depth is required for brand new product lines with larger companies where you're investing. The research needs to be commensurate with the investment, right?
So I'm working on turning an idea into a POC, into an MVP, and so I tend to be much more targeted in that approach.
Double clicking on that journey you just talked about, is it basically what we've talked about already?
You've got an idea, and then you go straight to the user interviews and kind of verifying with the audience? Or, as you go from that original idea all the way to the end state, what’s the stack you might move through to confirm, validate, develop, test, launch, feel good about it, and market?
That's a great question. I'm not sure I have a very defined path, because I think ultimately I try to take my cues from my clients.
First of all, I like getting that baseline of “here’s the ground we're playing on – this is the overall landscape.” I think that's super important. I always do that first.
Where does that come from?
Right now it comes from a variety of established reports, firsthand research I do myself via Google, and I might actually leverage some AI platforms in doing that as well.
I'm looking at some of the typical sources in terms of available published research. I don't spend a ton of time at that level – I don't want to go super in depth. I want to get a very quick baseline. “This is the landscape we're playing in.”
And then I devote more time diving into the actual details of the product, particularly with users.
But it's important to understand the architecture inside the organization. How something gets developed, how it gets launched, and what the pain points are in operationalizing it should all be understood upfront. For me, it's a little more of an art than a science.
There's not one direct path through there, but to create that successful result, you really have to understand what you're playing with. The first thing is understanding the landscape: understand the team, understand the constraints, before you dive into that deeper level of research on the product itself.
And as we start to identify these unique value propositions and how they align with the company, then I can go back towards this broader market and see how these might play and do some projections.
Last two questions. Speed round!
When you think about AI and research, what's the first word that comes to mind?
AI and research. Wow. The first thing I think about is retrieval-augmented generation, mostly because I'm working in that a lot today. Ultimately, the answers you're going to get from generative AI have a lot to do with the context, and with informing that context window.
I think about “what are the sources?” Garbage in, garbage out. You need great quality data. You need that to start with. Otherwise, your results are not going to be so great.
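The pattern he's describing, retrieval-augmented generation, is straightforward to sketch: retrieve the most relevant sources first, then place them in the model's context window alongside the question. Here is a minimal illustration in Python; the toy word-overlap retriever and the sample documents are stand-ins for illustration, not any particular vendor's API.

```python
# Minimal retrieval-augmented generation (RAG) sketch in plain Python.
# A real system would embed documents and query a vector store; here a
# toy word-overlap score stands in for retrieval, and the resulting
# prompt would be sent to whatever LLM endpoint you actually use.

def relevance(query: str, passage: str) -> int:
    """Toy relevance score: count shared words (a real system compares embeddings)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def build_rag_prompt(question: str, documents: list[str], k: int = 3) -> str:
    # Retrieval: pick the k passages most relevant to the question.
    # Garbage in, garbage out -- answer quality is bounded by the
    # quality of what you retrieve.
    top = sorted(documents, key=lambda d: relevance(question, d), reverse=True)[:k]
    # Augmentation: place those sources in the context window so the model
    # answers from them instead of relying on pre-training data alone.
    context = "\n\n".join(top)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical sample documents, purely for demonstration.
docs = [
    "The 2024 market report shows vertical SaaS growing fastest.",
    "Acme's churn rose 4% after the pricing change.",
    "Office memo: the kitchen coffee machine cleaning schedule.",
]
print(build_rag_prompt("What does the market report say about growth?", docs))
```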
Last question: when you think about Google and LLMs as different sources you can tap for that landscape, for that public desk research information, first word that comes to mind in terms of how they are as a solution?
Google? Messy. The value of their algorithm has changed quite a bit over time, particularly as they've monetized against search, which they're exceptional at, as we all know.
When I think about LLMs, the first thing is obviously ChatGPT, although there are other great models, like Anthropic's Claude 3 … there are new models coming out all the time.
But ultimately I think about “what are the sources that are feeding that, and if it's not in the pre-training data, how is that hooked up?” Only so much is going to be in that pre-training data. So what are the capabilities, and what kind of agents can be created to go out and search the web for potentially relevant information to inform those contexts, so that we can take advantage of the somewhat miraculous reasoning and summarization capabilities of generative AI?
One thing I know, generally speaking, is you really have to look at a problem from a bunch of different angles, right? So to invest all your time in a single major resource like ChatGPT or Google is a mistake, because you're only seeing one side of the equation.
Google's really good at some things, and LLMs are natively good at other things. But mostly what LLMs are good at is summarization, not retrieval, unless you've hooked them up specifically to retrieval pipelines that feed relevant data sources into the context. So each is a partial solution, and it's definitely evolving. Search is incredibly important with AI: leveraging a variety of data sources and identifying which data is contextually relevant is obviously the fundamental thing Google provides.
I wouldn't say one's better than the other. They're different. And they have different strengths.
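That division of labor can be sketched concretely: the search engine handles retrieval, the LLM handles summarization. A minimal sketch in Python, assuming hypothetical search_web and llm_summarize helpers rather than any real API:

```python
# Search-then-summarize: the search engine does retrieval, the LLM does
# summarization. `search_web` and `llm_summarize` are hypothetical
# callables here, not any real API -- wire them to whichever search
# service and model you actually use.

from typing import Callable

def research_brief(
    topic: str,
    search_web: Callable[[str], list[dict]],  # each dict: {"title", "url", "snippet"}
    llm_summarize: Callable[[str], str],
) -> str:
    # Retrieval: let the search engine surface candidate sources.
    results = search_web(topic)

    # Keep sources visible so every claim can be traced back -- the
    # "what are the sources feeding that?" question from above.
    sources = "\n".join(
        f"- {r['title']} ({r['url']}): {r['snippet']}" for r in results[:5]
    )

    # Summarization: the LLM's native strength, applied to retrieved
    # text rather than to facts recalled from pre-training alone.
    return llm_summarize(
        f"Summarize the key findings on '{topic}' from these sources:\n{sources}"
    )
```

Keeping the retrieved sources explicit in the prompt is what makes the "garbage in, garbage out" check possible: a claim in the summary can always be traced back to the source that fed it.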
You know your stuff! It was great to dig into it.