Writing about ethical AI practices is one thing, but demonstrating their application in real-world scenarios for organizations that use or deploy generative AI is an entirely different thing!
As I’ve evaluated these practices against our mission and values, researched to the ends of the internet, spoken with experts, and read cautionary tales of modern thought leadership in books like Superagency and AI Needs You, it is clear there are lessons we can learn from the past while planning for the future of modern work.
As a Certified B Corporation, GadellNet holds significant responsibility in ensuring that both our AI adoption practices and those we advise for our partners align with ethical principles while promoting innovation and efficiency. It has become clear to me that ethical AI isn’t just about compliance (and we are sure good at security and compliance!).
Rather, through my research and my own experiences, it is about creating and using AI in trustworthy, transparent, and fair ways. And on the note of transparency: I did collaborate with ChatGPT 4o and Microsoft Copilot for guidance on this post. You can see that work here.
What is ethical AI?
Ethical AI refers to the development and deployment of artificial intelligence systems that adhere to moral principles and values, ensuring fairness, transparency, accountability, and respect for human rights (according to IBM).
Ethical AI is about mitigating bias, protecting privacy, and ensuring AI is used for good.
Verity Harding, in AI Needs You, emphasizes that ethical AI requires public engagement: “The decisions we make about AI today will shape the world for generations. We must be proactive in ensuring AI serves the common good.”
For leaders, especially in purpose-driven organizations, ethical AI is not just a technology issue; it's a leadership one.
What leaders should know (and do) to ensure ethical AI practices
Leaders must take a structured approach to integrating ethical AI into their organizations.
Here are some things we are doing inside GadellNet:
Establish Clear Ethical Guidelines
AI should align with your organization's mission and values. Define principles that guide AI use, such as fairness, transparency, and accountability. Microsoft, for example, developed its Responsible AI Standard in 2022, drawing on the expertise of its research, policy, and engineering teams to turn high-level principles into practical guidance.
Our goal at GadellNet is to define our own framework that provides the same guardrails for our partners as they undergo AI Readiness Workshops with us and consider building internal and external-facing agents.
“Ethical AI is about enhancing—not replacing—human decision-making. Each person brings unique expertise to their role and team, and AI should amplify that, not override it. The goal is to offload repetitive, non-judgmental work to AI, freeing people to focus on the decisions that truly impact their organization and customers. Shift human-driven work to the right and AI-driven automation to the left—keeping the focus where it matters most. The ethical question isn’t just what AI can do, but what should remain human. What decisions require heart, empathy, and nuanced judgment?” –Max Hyman, Director, Continuous Improvement
Form an AI Ethics Committee
A cross-functional team should oversee AI projects, ensuring diverse perspectives are considered in AI decision-making. This team should evaluate risks, provide governance, and engage with external stakeholders when needed.
“As our R&D leader, I joined the cross-functional committee of leaders in our organization to take on AI research, debate, and form well-rounded opinions. I renamed our internal persistent chat team to “The Singularitarians” not only as a fun play on our work as if we were trying to purposely bring forth The Singularity, but also as a reminder to not lose sight of the impact AI can have on our employees and clients. This is powerful stuff that can have unintended consequences if we aren’t extremely careful.” –Ben Davis, Director of Product Innovation
Prioritize Transparency
Building trust starts with transparency about how AI is being used to streamline our processes and drive smarter outcomes. As Reid Hoffman notes in Superagency, “AI should enhance human agency, not replace it.”
“While we use AI in many aspects of our work, we have made the intentional decision not to use AI-generated images of people in our marketing materials. We want to focus on the real people that power our business. It requires us to be intentional, invest in our own creativity, and prioritize this design work internally. To us, this is how we protect the authenticity of our brand and use AI in a trustworthy and fair way for our team and our clients. This level of detail is published in our AI policy.” –Rachel Rizzuto, Director-Marketing
Invest in Employee Training
AI literacy is essential. Provide employees with training on ethical AI principles, helping them recognize risks and advocate for responsible AI practices.
“Our organization has successfully established AI policies and provided comprehensive employee training prior to AI adoption, ensuring a smooth AI rollout. These comprehensive AI policies address data privacy, security, and ethical issues, fostering a responsible AI environment. Additionally, our thorough training program has equipped employees to effectively use Copilot’s features, leading to higher productivity, efficiency, and better security outcomes. This preparation has helped our organization leverage Copilot’s benefits while maintaining high standards for data privacy and security.” –Danny Commes, Strategic Consultant
Engage with External Stakeholders
Beyond conversations with our clients, we are also looking to partner with external experts and the wider community to ensure AI adoption meets broader ethical and societal expectations. Responsible AI is a shared effort across industries. Each action, no matter how small, contributes one piece to a robust ethical foundation for future growth.
“As we continue integrating AI into our operations, it is essential that we guide our partners to disclose their use of AI. Transparency builds trust and reinforces ethical practices. By openly sharing how AI is utilized, we foster a collaborative environment rooted in accountability and innovation.” –Vic Sweeting, Strategic Consultant
Having spent much of the last year speaking on AI, I've witnessed the full range of emotions from the groups I regularly meet with, from fear and uncertainty to fascination and excitement. Sitting down with our internal leaders and our external partners has opened my eyes to the responsibility we have to create a vision, share it, and communicate it well, even when we aren't developing our own open-source models.
At GadellNet, we believe that AI should not just "optimize business processes" but should amplify human impact, strengthen trust, and reinforce values. By taking a proactive stance on ethical AI, leaders can ensure the technology used in their organizations remains a force for good.
I invite leaders to engage in this conversation, so please join me at BLD Mountain West in Denver, Colorado, on April 25 to discuss this topic further with other industry thought leaders, such as Artemis Ward and Pariveda.
Other upcoming events
St. Louis Tech Week, March 31 – April 4 – Various locations in the Greater St. Louis Area
Founders Lounge Global AI Summit April 10 – Spark Coworking, St. Louis
BLD Mountain West, April 25, 2025 – Denver, Colorado
MDMB, May 13-14, St. Louis, MO