What inspired you to launch Kyle & Co, and what gap are you filling?
I’ve spent my career studying innovation cycles and “emerging best practice” at the intersection of HR, talent, and technology. And in all that time, I’ve watched the same thing happen over and over again: the gap between innovation and adoption keeps getting wider, and the impact is wildly inconsistent.
I’ve looked at that from three different seats:
- as an industry analyst, studying markets and trends,
- on the vendor side at an innovative recruiting tech startup working with companies like Wayfair and Amazon,
- and on the practitioner side, trying to drive change from inside a large organization.
On the vendor side, I realized the problem wasn’t that vendors weren’t building good technology or that they didn’t care about real customer problems. It was that success depended on a huge number of variables inside the client’s own HR and talent organization—politics, capacity, data, governance, change fatigue—that were often outside the vendor’s or even the internal champion’s control.
When I went to the practitioner side, I got to work on some of those variables directly. And there, I ran into another barrier: a real lack of shared understanding and knowledge about what “good” looks like, how to evaluate solutions, and how to actually implement modern practices at scale.
So that’s really where Kyle & Co comes from. It feels like the gap between innovation and adoption is bigger than ever, and the stakes for closing that gap are high. That gap doesn’t exist because people don’t care—there’s just so much at play. I feel genuinely mission-driven to close it.
Kyle & Co is my way of doing that: a practitioner-led research and advisory firm focused on helping HR and talent leaders not just understand what’s new, but make it work on Monday morning.
“It’s not about data access, it’s about data application.” What do you mean?
Most HR and talent teams today are not suffering from a lack of data. If anything, they’re drowning in it.
You’ve got ATS data, HRIS data, survey data, labor market data, vendor dashboards—and now AI-generated insights layered on top. With generative AI in the mix, it’s very easy to get inundated by noise or feel like you’re chasing a constantly shifting list of priorities, each with its own demand for “insights.”
So when I say the challenge isn’t data access, it’s data application, I mean:
The hard part is turning all of that into decisions people trust and act on.
In our AI Momentum work, the organizations that were actually making progress with AI weren’t necessarily the ones with the most sophisticated tools. They were the ones that had built the muscle around application:
- basic literacy, so people understood what they were looking at,
- governance and guardrails, so they knew what was allowed,
- and cross-functional coalitions, so HR wasn’t trying to figure it out alone.
This distinction really matters. If you think the problem is access, the answer is “add more tools, more dashboards, more data streams.” If you recognize it’s an application problem, you focus instead on building a discerning voice:
- What are the decisions we’re trying to improve?
- What’s signal versus noise?
- How do we help leaders see the forest for the trees—and, ideally, the path through it?
Our goal at Kyle & Co is not to produce research that just piques intellectual curiosity. We’re trying to clear away some of the noise and help people focus on what will actually move the needle.
What does making research truly actionable look like?
For me, research is only “done” if a leader can look at it and say, “I see where we are, what’s getting in the way, and what we should do next.” If they can’t do that, it’s just content.
A big part of that is how we approach the work. We never stop at just running a survey or relying on our own expertise—though we’ve seen a lot. Every one of these challenges has nuance, complexity, and multiple dimensions.
So for any major study—and often even for a blog—we’re sitting down with practitioners who are in the middle of it:
- Sometimes they’ve cracked part of the code.
- Sometimes all they have is scar tissue and lessons learned.
Either way, that’s valuable. We bring those voices directly into the research so we’re not doing ivory-tower analysis. We’re digging into what’s actually happening on the ground.
That shows up in the outputs. Instead of saying, “You’re immature” or “You’re advanced,” we create things like the AI Momentum Model that help leaders see:
- where they’re strong,
- where they’re stuck,
- and three to five realistic steps they can take from where they are right now.
And we design all of that with constraints in mind—burnout, limited budgets, competing priorities. We’re not trying to find the pithiest soundbites or the coolest trend names. We’re trying to help. And that means listening as much as we share, and then translating what we learn into simple, usable scaffolding leaders can actually stand on.
What do traditional HR research firms often get wrong—and how is your practitioner-led model different?
Traditional research firms do some things very well: big-picture narrative, market maps, benchmarks. Where they often struggle is in the messy middle: translating all of that into something a stretched HR or TA team can realistically execute.
The work can get very theoretical:
- lots of maturity curves,
- lots of “future of work” slides,
- not a lot that helps you decide what to do this year, given the people and resources you actually have.
The practitioner-led model at Kyle & Co is designed specifically to avoid that. The people shaping the research with me are in-seat or very recently in-seat HR, TA, and people leaders. They know what it feels like to implement new tech while hiring freezes are happening, or to talk about AI when your team is already burned out, or to build a workforce strategy when the business keeps changing direction.
I also have leaders on my team, like our Head of Product & Innovation and our Head of Solutions Consulting & Advisory, who have spent years working directly with TA teams and HR tech implementations. They’ve seen what works, what doesn’t, and what looks great in a demo but falls apart in the real world.
So our models and frameworks are built to be:
- used in real conversations—with exec teams, boards, and cross-functional partners,
- sensitive to constraints—capacity, politics, risk, timing,
- and focused on sequencing—what to do now, what to park for later, what to stop.
It’s not that theory and benchmarks are bad; they’re useful. But without that practitioner lens, it’s very easy to produce beautiful insights that never touch how work gets done. Our whole model is built to close that gap.
How is AI reshaping the research process itself?
AI has absolutely changed how we work—but more as an amplifier than a replacement.
On a practical level, it’s made the research workflow much more scalable. A simple example: when I’m doing a 30- or 60-minute interview, I don’t have to split my brain between being present with the person and furiously taking notes. I can let an AI assistant handle transcription, and then later I can query that transcript instead of re-listening to the whole thing.
At a bigger scale, AI helps with:
- finding patterns in large volumes of qualitative data,
- slicing and dicing survey results in different ways,
- building out early hypotheses we can test and refine.
It’s like having a very fast junior analyst who can handle the heavy lifting so I can spend more energy on the judgment calls.
But there’s a line I’m very intentional about not crossing. There’s a huge difference between using AI to help synthesize data and asking it to “write the research” for you. A model can summarize themes; it cannot replace the responsibility of deciding what matters, what’s credible, or how to tell the story in a way that’s honest and useful.
I came into this work as a writer and a storyteller. My analysis and my voice are part of the value. So we use AI to accelerate the process—not to offload the responsibility. The goal is to get to better insight faster, without losing the human judgment, nuance, and accountability that good research requires.
What surprising or counterintuitive insights are emerging from your AI research?
One of the big ones is that “exploring AI” is not the win people think it is.
In our AI Momentum work, a large share of organizations described themselves as “exploring possibilities” with AI. On the surface, that sounds positive and forward-looking. But when you zoom out, you see a pattern: a lot of those teams are stuck in a loop of pilots, demos, and discussions that never quite translate into changed workflows or decisions.
By contrast, the organizations that were actually seeing impact tended to do a few things differently:
- They committed to a small number of use cases instead of spreading energy across 20.
- They built coalitions—HR, IT, compliance, and business leaders together—instead of asking HR to figure it out alone.
- They invested in literacy and governance early, instead of treating those as afterthoughts.
Another surprising insight is that spend alone doesn’t predict success. You can spend a lot of money on AI tools and still not move the needle if you haven’t aligned those tools to real decisions and real problems.
And for me, this is a great proof point for why I started Kyle & Co in the first place. These are multi-dimensional, multi-stakeholder challenges. Understanding how things actually get done—not in theory, but inside real organizations—is what lets us be more discerning when we look at survey data or vendor claims. It’s a good case in point that how we do research matters just as much as what the research says.
“From Insights to Impact” – where has your research directly shaped strategy?
“From Insights to Impact” is a promise: we’re not just here to describe the world; we’re here to help change it in practical ways.
A few patterns stand out in how our work gets used:
- With the AI Momentum Model, we’re using it as a diagnostic and a conversation map. When an HR leadership team walks through their capability, posture, and investment, they often realize, “We’re spending money, but our posture is hesitant and our literacy is low.” That reframes the strategy. Instead of, “What AI should we buy next?” the discussion becomes, “What foundations do we need to build so anything we’ve already invested in can actually work?”
- In our workforce strategy and skills work, the research helps organizations shift from headcount-only planning to something more scenario-based and skills-aware. It gives leaders language and structure to bring HR, TA, finance, and the business to the same table and talk about talent moves, not just open reqs.
- Through the Human-Centric AI Council, the insights and artifacts we’re building—around AI literacy, governance, and value—are being taken back into organizations and used to shape internal AI principles, evaluation criteria for vendors, and the way leaders talk about AI with their people.
So you see the same through-line: the research gives them a shared language, a way to name their current state, and a short list of realistic moves. That’s the connection between insight and impact for me.
How do you stay innovative and grounded in such a fast-evolving landscape?
I stay innovative and grounded by refusing to do this work in isolation.
First, the Kyle & Co model is intentionally collaborative. I work with adjunct analysts and partners who are in-seat or very recently in-seat leaders—running TA teams, shaping HR transformation, standing up talent intelligence, wrestling with AI strategy. I’m constantly in conversation with people who are living the realities we’re writing about. They stress-test my ideas, push back when something feels off, and bring in perspectives I don’t have on my own.
Second, the Human-Centric AI Council is a big part of what keeps me honest. It’s a group of HR leaders and underwriters who are all trying to figure out how to use AI in HR responsibly and pragmatically. Those conversations are candid. People share what’s not working, what they’re afraid of, where they’re stuck. That’s not something you get from reading press releases.
And then there’s my own discipline as a storyteller. I think of my job as helping people make sense of complexity—not adding to it. That means I try very hard to:
- say things simply,
- be explicit about trade-offs,
- and resist the urge to hype something just because it’s hot.
So yes, I stay curious and plugged into what’s emerging. But I’m always filtering that through the realities I’m hearing from practitioners and the responsibility I feel to give people clarity, not more confusion.
What advice do you have for HR leaders who want to be more evidence-based?
My biggest piece of advice is: don’t start with “becoming data-driven” as an abstract goal. Start with a problem that actually matters.
Pick a question your stakeholders care about right now, like:
- “Are our critical hires sticking and thriving?”
- “Why are we losing internal candidates to external offers?”
- “Where could AI realistically ease the load for our team this year?”
Once you have the question, then look at the data you already have. It’s almost never perfect—but it’s rarely zero. The goal is to move the conversation from “I think” to “Here’s what we’re seeing,” even if it’s directional.
Next, use frameworks as alignment tools rather than scorecards. Whether it’s an AI momentum lens, a quality-of-hire model, or a workforce strategy framework, the win is getting everyone to agree on:
- definitions,
- owners,
- and what success would look like.
That alignment is often more important than the data itself.
And finally, give yourself permission to start imperfectly. The teams that make real progress aren’t the ones with flawless dashboards—they’re the ones willing to measure, learn, and iterate in the open. Evidence-based doesn’t mean “never wrong”; it means “willing to adjust based on what we learn.”
Over the next five years, what will separate HR functions that thrive from those that fall behind?
If I had to capture it in one word, it’s momentum.
The functions that thrive won’t necessarily be the ones with the most AI or the biggest budgets. They’ll be the ones that can build and sustain momentum across a few critical dimensions:
- AI that actually changes decisions. Not just efficiency, but better calls on who to hire, how to deploy talent, where to invest in skills, and when to move people. That requires foundations—literacy, governance, data quality—not just features.
- Cross-functional workforce strategy. HR functions that treat workforce planning as a shared discipline with finance, IT, and the business—grounded in skills and scenarios, not just headcount—will be more resilient in the face of volatility.
- Thoughtful orchestration instead of tool sprawl. The era of “just add one more point solution” is over. Thriving teams will take orchestration seriously: how processes, data, and people flow across systems, and where automation and AI reduce friction instead of adding it.
- Owning the harder metrics. Time-to-fill and cost-per-hire will still matter, but they won’t be enough. HR teams that lean into messier, higher-value metrics—quality of hire, internal mobility, readiness for AI, workforce resilience—will have a much stronger, more strategic voice.
- A human-centric stance on AI. The HR functions that maintain trust through all this change will be the ones that are transparent about how AI is used, clear about their values, and honest about trade-offs. That’s exactly what we’re working on in the Human-Centric AI Council: not being anti-AI, but being very pro-people.
The ones who fall behind, I suspect, will be stuck in one of two modes: either frozen in endless exploration and fear, or chasing every new thing without building any real foundation. The ones who thrive will move—deliberately, with clear purpose, and with the right people at the table.

Kyle Lagunas, Founder of Kyle & Co.
Kyle Lagunas is the Founder and Principal Analyst at Kyle & Co, a modern research and advisory firm rethinking how we understand transformation in HR, talent, and technology.