Discussion about this post

Jake Jarvis:

I’m new at my company and I’m already noticing a strong distrust of AI outputs. The more tenure someone has, the more skepticism there seems to be.

At the same time, the value AI can provide is hard to ignore, and I think the people who learn to use it well will have a big advantage.

I’m curious about your perspective on creating roles focused on producing reliable AI outputs. Instead of pushing agents that people may not trust or use, these roles could focus on delivering accurate results that others can rely on.

That kind of human involvement could help build trust in AI while still capturing the time savings and improvements it offers. My concern is that poor AI usage and weak outputs will only reinforce the skepticism that already exists.

