I’m new at my company and I’m already noticing a strong distrust of AI outputs. The more tenure someone has, the more skepticism there seems to be.
At the same time, the value AI can provide is hard to ignore, and I think the people who learn to use it well will have a big advantage.
I’m curious about your perspective on creating roles focused on producing reliable AI outputs. Instead of pushing agents that people may not trust or use, these roles could focus on delivering accurate results that others can rely on.
That kind of human involvement could help build trust in AI while still capturing the time savings and improvements it offers. My concern is that poor AI usage and weak outputs will only reinforce the skepticism that already exists.
At Moltin, we ground the agents' knowledge by running evaluations against captured examples of good behavior. We have found this to be the most sustainable approach to achieving higher-quality output over the long term. The caveat we faced during the early stages was exactly what you are experiencing: top performers were unable to trust it for various reasons. At times it felt like pairing an intern with those individuals.
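For anyone wondering what "running evaluations against captured examples of good behavior" can look like in practice, here is a minimal sketch. It is my own illustration, not Moltin's actual tooling: golden examples are stored as input/expected-output pairs, the agent under test is run over each one, and a grader scores the result. The names run_agent and grade are hypothetical placeholders, and the grader here is a crude overlap check standing in for whatever rubric or LLM-as-judge scoring a real harness would use.

# Captured examples of known-good behavior: input paired with the answer
# a trusted person already signed off on.
golden_examples = [
    {"input": "Summarize this week's open support issues for the team lead.",
     "expected": "Three open issues, all related to checkout failures after the latest release."},
    # ... more captured examples
]

def grade(actual: str, expected: str) -> float:
    # Placeholder scorer: rough word overlap between the agent's answer
    # and the captured good answer. A real harness would use a rubric
    # or an LLM-as-judge call instead.
    a, e = set(actual.lower().split()), set(expected.lower().split())
    return len(a & e) / max(len(e), 1)

def run_eval(run_agent, threshold: float = 0.8) -> float:
    # Run the agent over every golden example and report the pass rate.
    passes = 0
    for ex in golden_examples:
        actual = run_agent(ex["input"])  # the agent under test
        if grade(actual, ex["expected"]) >= threshold:
            passes += 1
    return passes / len(golden_examples)

The point is less the code than the habit: every time an expert catches a mistake or blesses an output, that example gets captured and the pass rate becomes something skeptics can inspect rather than take on faith.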
The irony is that the people with the most context to catch AI mistakes are also the ones most likely to distrust it, while those who trust it most lack the expertise to know when it's wrong.
Your idea about dedicated roles for producing reliable AI outputs is exactly what is needed to move forward. The Forward Deployed Engineer (FDE) role is one answer to this. It demands an unusual combination of skills, but in our experience these individuals are the single most critical factor in successfully deploying agentic AI within the enterprise.
Your concern about poor usage reinforcing existing skepticism is spot on, and in my opinion it is probably the most urgent issue. One bad output that circulates widely can set back trust more than a dozen good ones can build it. That is exactly why who uses AI, and how they use it, matters so much in these early stages. Whether that looks like a formal role such as the FDE or simply a few internal champions who model good practice, having visible, credible examples of AI done well seems essential.
The risk, though, is that a dedicated role can inadvertently centralize the skill rather than distribute it. If reliable AI usage lives only in a specialized role, the broader organization may never develop its own literacy or intuition, and you end up with a bottleneck rather than a capability. I'm personally an advocate for distributing AI knowledge broadly, so that healthy skepticism becomes everyone's job rather than a designated function.
One thing is for sure: every company is different and will have its own nuances and approaches to navigate. Just remember, agentic AI is not a one-size-fits-all technology. The hard work is identifying, and continuing to study, your own company's operational behavior to determine where it fits best.