
Responsible AI isn't a policy. It's a practice.
That was the throughline of a panel I joined at AI³ Summit 2026, "Responsible AI and Returns That Matter," alongside Huntington University Department Chair Dr. Rebekah Benjamin and IU Health VP of Information Services Operations Genevie Jones, moderated by Dr. Clark Cully of Indiana Wesleyan University.
Adopting AI is the easy part. Doing it without creating problems that surface six months later takes stronger engineering discipline, tighter access controls, and structured experimentation.
The hype around AI has tempted some teams to treat it as a shortcut around the slow work of good software practice. Responsible AI means doubling down on the fundamentals: software development lifecycle discipline, security posture, and safety reviews.
AI amplifies whatever you already have. Weak practices become faster-moving failures. Strong practices compound.
Genevie made the same point through a clinical lens. IU Health bakes a kill switch into its AI deployments, especially around high-stakes patient-care decisions. Even after significant investment, they're willing to walk away from a tool that isn't delivering.
Before adopting any AI tool, ask whether your existing engineering and governance practices are strong enough to absorb it. If they aren't, the tool will expose that faster than any audit.
I always warn people about agentic tools. When you give an AI agent control of a browser on your computer, it can reach anything that browser is logged into: bank sites, HR systems, internal dashboards.
It doesn't take a cyberattack. A carefully crafted prompt injection can do it, and the OWASP LLM Top 10 lists prompt injection as the number one risk for AI systems.
Dedicate a clean, logged-out browser specifically for AI tasks. That's responsible AI in practice: not a policy document, but a specific, informed decision about access.
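To make that concrete, here is a minimal sketch of the idea: launch the browser your agent will drive from a throwaway, empty profile so it never inherits your everyday sessions. The `launch_isolated_browser` helper name is illustrative, and the Chrome binary and flags are assumptions; adapt them to whatever browser and agent tooling you actually use.

```python
import subprocess
import tempfile

def launch_isolated_browser(url: str = "about:blank") -> subprocess.Popen:
    """Illustrative helper: start Chrome with a fresh, logged-out profile.

    The agent driving this browser gets no cookies or saved logins from
    your day-to-day profile, so a hijacked prompt has no sessions to
    ride into bank sites, HR systems, or internal dashboards.
    """
    profile_dir = tempfile.mkdtemp(prefix="ai-agent-profile-")
    return subprocess.Popen([
        "google-chrome",                   # or chromium, msedge, etc.
        f"--user-data-dir={profile_dir}",  # isolated cookie and session store
        "--no-first-run",
        url,
    ])

if __name__ == "__main__":
    launch_isolated_browser()
```

The same principle holds however the agent controls the browser: if the profile it touches holds no credentials, a prompt injection has far less to reach.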
Every organization has people who want to be first to try the new tool. Don't suppress that energy; channel it.
Six Feet Up runs an AI Guild, open to anyone in the company, that evaluates new tools, surfaces risks, and shares findings during the team's weekly all-hands. The guild has been running for roughly 20 months and feeds into a shared skills library so individual learning becomes organizational learning.
Don't gatekeep AI from your most curious people. Give them a structure that turns their experiments into shared knowledge, captures risks early, and gives leadership a steady read on what's working without requiring top-down mandates.
The return that matters isn't just faster output. It's safer systems, better decisions, and teams that can keep improving as the tools change.
I'll close the same way I closed the panel: stay curious, and stay safe too.
If your organization is working through these same questions, let’s talk.