AI Curiosity: Unlock Success with Clarity
The AI training you scheduled for next month is already outdated.
While you were planning workshops on the current generation of tools, three new models launched with entirely different capabilities. Your carefully crafted curriculum is chasing a target that moved before the slides were finished.
Meanwhile, most of your workers have already given up waiting. Recent research from Asana's Work Innovation Lab, conducted in collaboration with Anthropic, reveals that 56% of them are taking AI learning into their own hands. They're experimenting with AI on their own.
Without guidance. Without guardrails. Without sharing what they learn.
When employees feel forced to learn alone, organizations lose the opportunity to guide responsible adoption and capture collective insights.
But there's an even deeper problem: the same organizations that can't keep training current are paralyzed by employees' fear of using AI "wrong."
The Permission Paradox
Listen to what we hear from workers across industries:
"I don't want to get in trouble for using it wrong."
"What if my manager thinks I'm cheating?"
"I tried it once but wasn't sure if I was supposed to, so I stopped."
This anxiety around "getting it wrong" isn't new. I witnessed it firsthand while driving workforce transformation at Intuit during our exploration of hybrid work beginning in 2021. Initially, we thought giving people an open sandbox to learn and experiment with hybrid arrangements would unleash creativity and innovation. Instead, it created paralysis.
Even executives, people accustomed to making complex business decisions, were asking us to just tell them how to "do hybrid" so they and their teams could get on with their jobs. After two years of constantly changing COVID rules and guidance, people had become hyper-tuned to avoiding mistakes. The trauma of navigating endless policy shifts had conditioned everyone to seek explicit permission rather than risk getting it wrong again.
The same dynamic is happening with AI today. When leaders say "use AI responsibly" without defining what that means, they're creating the same open sandbox that paralyzed seasoned leaders during the hybrid transition.
The solution isn't fewer rules or slower innovation. It's precision in guidance that enables bold exploration.
The ORM Solution: Operational Risk Management for AI
In Naval Aviation, every mission began with Operational Risk Management. This systematic approach to identifying, assessing, and mitigating risks while maintaining mission effectiveness meant we didn't avoid risk; we understood it, planned for it, and managed it intentionally. In my first career as a helicopter pilot in the United States Marine Corps, where I logged more than 2,000 flight hours across three combat deployments, we proved day after day that complex operations in dynamic environments demand a framework that eliminates ambiguity while enabling judgment and innovation.
ORM uses a proven framework that evaluates two critical factors:
Severity: What's the potential impact if something goes wrong?
Probability: How likely is this risk to actually occur?
Applied to AI governance, we can use these dimensions to create Risk Assessment Codes (RACs) that transform paralyzing questions like "Am I allowed to use AI for this?" into confident risk navigation: "What's my RAC level for this AI application?"
From Combat Risk Assessment to Business Decision-Making
Flying casualty evacuation missions in Iraq taught me that clarity saves lives. When we were racing the clock with critically wounded Marines aboard, there was no time for procedural ambiguity – there was plenty of real uncertainty that demanded our judgment and decisiveness.
This framework is just as effective in business because human psychology is constant. Whether you're a pilot assessing flight risks or an employee wondering about AI assistance, unclear expectations create dangerous hesitation. Clear risk classifications eliminate the paralysis that comes from ambiguous rules.
Your AI Governance Framework
When evaluating any AI application, consider two key factors:
Severity (What could go wrong?)
Critical: Could affect customer safety, regulatory compliance, or brand reputation
Client-facing communications during crises
Financial reports submitted to regulators
Healthcare recommendations affecting patient care
Legal documents or compliance filings
Serious: Could impact important business decisions or relationships
Sales proposals over significant dollar thresholds
Marketing campaigns reaching large audiences
Strategic planning documents
Customer service interactions
Moderate: Could create workflow inefficiencies or minor quality issues
Internal process improvements
Team presentation materials
Research and competitive analysis
Training content development
Minor: Limited impact, easily correctable
Email drafting and meeting preparation
Personal productivity and learning
Creative brainstorming and ideation
Administrative task automation
Probability (How likely are problems?)
Likely: New AI users, unfamiliar tools, high-stakes contexts, complex outputs
First-time users of any AI application
New AI tools without established track record
Complex multi-step AI workflows
High-pressure situations with tight deadlines
Probable: Some experience but unfamiliar applications or changing conditions
Experienced users trying new AI applications
Familiar tools in new business contexts
Standard applications under unusual circumstances
Recently updated AI systems or models
Occasional: Experienced users with familiar applications in routine contexts
Regular users with established AI workflows
Proven applications in standard business situations
Well-understood tools with consistent performance
Routine tasks with predictable requirements
Unlikely: Simple applications with experienced oversight and established procedures
Highly experienced users with thoroughly tested workflows
Simple, well-established AI applications
Low-stakes routine tasks with clear success patterns
Multiple checkpoints and verification systems in place
Risk Level Matrix
The intersection of Severity and Probability determines your approach:
Serious Risk: Get Approval
Critical + Likely/Probable, or Serious + Likely
"Get explicit manager approval before using AI for anything that could significantly impact customers, compliance, or company reputation"
Moderate Risk: Coordinate & Verify
Critical + Occasional, Serious + Probable/Occasional, or Moderate + Likely/Probable
"Coordinate with relevant stakeholders and verify AI outputs with primary sources or subject matter experts before implementation"
Low Risk: Standard Procedures
Critical + Unlikely, Serious + Unlikely, Moderate + Occasional/Unlikely, or Minor + Any Probability
"Follow standard procedures—review outputs for accuracy and appropriateness before use"
Adapting Risk Levels as You Learn
The beauty of this framework is that it evolves with your team's competency. An AI application might start as Moderate Risk when your team is learning, then shift to Low Risk as experience develops.
Real-time Risk Adjustment:
New AI tool? Start with higher risk level until you understand its behavior
Experienced user? Move familiar applications to lower risk categories
High-stakes situation? Temporarily increase risk level and add oversight
Routine workflow? Follow standard procedures with basic review
This approach enables teams to take appropriate risks rather than avoiding risk entirely or ignoring it altogether.
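These adjustments can be expressed as simple rules feeding the probability side of the matrix. The sketch below is illustrative only; the specific factors and one-step shifts are assumptions standing in for your own team's judgment about experience, tool maturity, and stakes.

```python
# Minimal sketch: real-time adjustment of the Probability estimate.
# The inputs and step rules are illustrative assumptions, not fixed policy.

PROB_SCALE = ["Unlikely", "Occasional", "Probable", "Likely"]

def estimate_probability(new_tool: bool, experienced_user: bool, high_stakes: bool) -> str:
    """Estimate how likely problems are, shifting one step per factor."""
    index = 1  # baseline: "Occasional" for a familiar, routine workflow
    if new_tool:
        index += 1  # new AI tool: assume problems are more likely
    if not experienced_user:
        index += 1  # first-time user: assume problems are more likely
    if high_stakes:
        index += 1  # tight deadline or high pressure: add oversight
    if experienced_user and not new_tool and not high_stakes:
        index -= 1  # proven workflow with experienced oversight
    return PROB_SCALE[max(0, min(index, len(PROB_SCALE) - 1))]

# Example: an experienced user trying a brand-new tool on a routine task
# moves from "Occasional" to "Probable", nudging the RAC up a level.
print(estimate_probability(new_tool=True, experienced_user=True, high_stakes=False))
```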
How This Framework Unleashes Curiosity
Organizations that implement a clear framework like this see the dynamic flip: where confusion leads to restriction, structure enables innovation. Vague policies create paralysis. Clear frameworks create confidence.
Instead of wondering "Will I get in trouble?" teams ask "What risk level is this application?" Instead of avoiding AI, they actively explore within well-understood risk boundaries.
This approach eliminates the anxiety that kills curiosity. Teams understand that, just as they can confidently handle low-risk decisions with standard procedures, they can experiment boldly with AI applications at the appropriate risk level without fear of making mistakes.
AI Curiosity is all about what Asana's Work Innovation Lab calls the "virtuous cycle of AI-powered productivity": the more you use AI, the more you find new ways to use it, and the more productive you become. This cycle only starts when people feel safe to experiment.
Building Your Learning Operating Rhythm
At Euda, we've developed specific practices that turn the principles here into sustained organizational curiosity:
"AI Learning of the Week" Staff Agenda Item: Each Friday, team members share one AI experiment — successes and spectacular failures alike. This ritual turns isolated learning into collective intelligence.
Quarterly Risk Review: As teams gain experience and confidence, we revisit risk level assignments. What started as Moderate Risk might shift to Low Risk as organizational competency develops.
These rhythms ensure that curiosity becomes habitual, not accidental. Teams develop AI fluency through consistent, protected experimentation within clear boundaries.
The Foundation for Everything That Follows
Clear boundaries enable the curiosity that unlocks individual potential. When teams master confident experimentation within this framework, they discover something powerful: AI can amplify everyone differently. The marketing manager transforms campaign research; the sales director revolutionizes prospect analysis; the operations lead streamlines workflows. Each person finds their own path to amplification.
Organizations with the right learning mindset are in a far better position to discover how AI uniquely amplifies their capabilities and contributions. People who are confidently curious don't just learn to use AI. They learn to partner with it in ways that multiply their impact.
This personal mastery becomes the foundation for our next principle: Amplification without Abdication. Because once individuals discover how AI can amplify their unique strengths, they're ready to tackle the next big leadership challenge: how to scale that amplification across teams while maintaining accountability for outcomes that matter.
————————————
Want to help your team explore AI with confidence? Get in touch at contact@euda.io to design the clear guidance your team needs.