
Mission-Driven AI: 10 Best Practices to Collaborate with AI Responsibly

by Crispin Bailey | October 1, 2024


As the person responsible for overseeing the AI governance program at Kalamuna, I've witnessed firsthand the transformative power of AI in our industry. Not only have members of our team been keen to use AI to assist them in their work, but we've also seen some of our clients in education, nonprofit, government, and the arts and culture sectors become increasingly curious about leveraging AI to enhance their digital presence and operational efficiency. But as the saying goes, with great power comes great responsibility, and it's important that we use AI with mindfulness and ethical consideration.

In this post, I'll share ten essential AI best practices that center on the human impact of AI use, ensuring that as we harness this powerful technology, we do so in a way that aligns with the values of the mission-driven organizations we serve. Some may seem obvious, others perhaps less so, depending on your situation. The important thing is to consider a holistic approach to managing your team’s AI use, because this technology has impacts on far more levels than anything that’s come before.

1. Protect people’s privacy

In our work for our clients, especially when conducting UX research, we often handle sensitive information. It's critical that we never input personal information about students, citizens, or employees into AI tools, regardless of the data retention and training policy of the tool. For instance, if we’re conducting a survey for a client website for sentiment analysis, after gathering the responses we’ll use placeholder data instead of real names before synthesizing the responses with ChatGPT.

Best Practice: Be aware of the risks and establish clear guidelines on what types of information can and cannot be safely input into AI tools. Use anonymized or dummy data when working with AI. Perhaps most importantly, familiarize yourself with the data training and retention policy for any tools you plan on using so that you can act accordingly.
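As a minimal sketch of the placeholder-data step described above, here's one way to pseudonymize survey responses before sending them to an AI tool. The function name, regex, and sample data are illustrative assumptions, not Kalamuna's actual tooling:

```python
import re

def pseudonymize(responses, known_names):
    """Replace known respondent names and email addresses with placeholders.

    Illustrative sketch only: real PII scrubbing should also handle phone
    numbers, IDs, and other identifiers relevant to your data.
    """
    placeholder_map = {}
    cleaned = []
    for text in responses:
        # Strip anything that looks like an email address.
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
        # Replace each known name with a stable placeholder like "Respondent 1".
        for name in known_names:
            if name not in placeholder_map:
                placeholder_map[name] = f"Respondent {len(placeholder_map) + 1}"
            text = text.replace(name, placeholder_map[name])
        cleaned.append(text)
    return cleaned, placeholder_map

responses = ["Maria Lopez said the nav is confusing. Contact: maria@example.org"]
cleaned, mapping = pseudonymize(responses, ["Maria Lopez"])
# cleaned[0] → "Respondent 1 said the nav is confusing. Contact: [EMAIL]"
```

Keeping the mapping on your side (and never sending it to the AI tool) lets you re-attach identities to the synthesized findings afterward if needed.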

2. Navigate AI bias

AI models can perpetuate and amplify biases present in their training data. This is particularly concerning when working with diverse communities served by our nonprofit and government agency clients. We've seen cases where AI-generated content included stereotypical imagery or language that didn't accurately represent the communities our clients serve, so it’s something to watch out for.

Best Practice: Regularly audit AI outputs for potential biases. When in doubt, run things by a diverse group of teammates to ensure it's inclusive and representative.

3. Maintain human oversight in the design process

While AI tools have dramatically increased our efficiency in creating initial drafts and coming up with design concepts, we never allow AI to make final decisions about design elements or suggest recommendations without critiquing them. AI suggestions can provide a good starting point, but a human designer's understanding of the client’s context and the users’ needs are crucial in crafting the final designs.

Best Practice: Use AI as a tool to augment human creativity and decision-making, not replace it. Establish clear processes for human review and approval of AI-generated work.

4. Generate code responsibly

While AI can be a powerful tool for generating code snippets, we never implement AI-generated code without thorough review and testing by a senior developer. This is especially important for government and educational sites where security and accessibility are top priorities.

Best Practice: Establish a code review process specifically for AI-generated code. Ensure that all code, regardless of its source, meets your organization's standards for security, performance, and accessibility.
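To make the review step concrete, here's a hypothetical example of the kind of flaw such a review should catch. AI assistants sometimes suggest SQL built with string formatting, which is vulnerable to injection; the reviewed version below uses a parameterized query instead. The schema and function name are assumptions for illustration:

```python
import sqlite3

# An AI assistant might plausibly suggest string-formatted SQL like this:
#   cursor.execute(f"SELECT * FROM users WHERE name = '{name}'")  # injectable!
# A human review should replace it with a parameterized query:
def find_user(conn, name):
    """Look up a user by name using a parameterized query (safe from injection)."""
    cursor = conn.cursor()
    cursor.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cursor.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

rows = find_user(conn, "Ada")                      # → [(1, 'Ada')]
rows = find_user(conn, "'; DROP TABLE users; --")  # returns [], table intact
```

The same principle applies to accessibility: AI-generated markup frequently omits alt text, labels, and focus handling, so those belong on the review checklist too.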

5. Always verify AI outputs

Rounding out the previous two points, be aware that AI can produce convincing but inaccurate information. This is particularly risky when working on institutional websites where accuracy is critical. To address this risk, we've established a fact-checking validation process for all AI-generated content, especially for any summarized research materials that may inform recommendations or decisions.

Best Practice: Implement a multi-step verification process for AI-generated content. Use authoritative sources to cross-check information before publication.

6. Consider ethics 

Working with mission-driven organizations means we must be extra vigilant about the ethical implications of our AI use, both in our work and in the work we produce. For some organizations, like those promoting the arts, using AI-generated art or copy would be unacceptable. We must always consider the broader context of the client and their audience before using AI on a project, and we developed an AI software usage policy for precisely this reason.

Best Practice: Develop an ethical framework for AI use that aligns with your organization's and your clients’ values. Regularly discuss and update this framework as AI technology evolves.

7. Act transparently 

We believe in being upfront with our clients about our use of AI in the design process. This transparency has actually enhanced our relationships, as clients appreciate our honesty and are often excited to learn about how AI is shaping the future of web design. Prior to using AI on a client's project, we outline the tools we hope to use and how they'll be used, and we ensure we have the client's explicit consent.

Best Practice: Clearly communicate to clients and stakeholders when and how AI is used in your projects. Consider adding an "AI disclosure" section to project documentation.

8. Respect Intellectual Property

In the world of web design, especially when working with cultural institutions like museums and galleries, respecting intellectual property (IP) is incredibly important. AI tools can sometimes generate content that inadvertently infringes on copyrights (in the case of images and text) or reproduces license-restricted code (in the case of AI-generated code). Consider implementing an IP validation process to cross-check AI outputs before publishing to ensure that you're not putting the organization at risk.

Best Practice: Educate your team on intellectual property laws and how they apply to AI-generated content. When in doubt, always opt for original, human-created content.

9. Understand AI's limitations

Even the latest frontier models have knowledge cutoff dates and may not be aware of recent events or organizational changes. And while some tools are able to browse a website upon request, they generally default to their training knowledge which may be limited or incorrect. This is not necessarily the same as AI producing hallucinations, but the end result is effectively the same: incorrect outputs.

Best Practice: Keep your team informed about the limitations of the AI tools you use. Supplement AI knowledge with up-to-date human expertise and current research.

10. Continuous learning and adaptation

The AI landscape is evolving rapidly, and a best practice today might be outdated tomorrow. We've instituted bi-weekly "AI Office Hours" where our team shares new developments, discusses potential impacts on our work, and announces updates to our AI governance policies.

Best Practice: Foster a culture of continuous learning about AI. Encourage team members to stay informed about AI developments and regularly review and update your organization’s AI best practices to stay current.

Staying on track

As we continue to explore the potential of AI in web design for mission-driven organizations, these best practices serve as a practical AI playbook. They ensure that we harness the power of AI responsibly, while always keeping the human impact of this powerful technology at the forefront. By applying these best practices, we can create a future where AI enhances our work without compromising our values.

The journey of integrating AI into our processes is ongoing, and these best practices will undoubtedly evolve. By staying attentive and always prioritizing the people we serve, we can make sure AI is used as a force for good in our industry and in the important work we do for our clients every day.

Crispin Bailey

Director of Design & UX

When website requirements get daunting, Crispin loves to roll up his sleeves and dig in. His overarching goal is to deliver beautiful and accessible websites for our clients and their audiences. As a seasoned digital strategy expert, Crispin oversees Kalamuna’s design and strategy practice, coordinating design and technical efforts throughout the discovery and design phases on projects to build bridges between research, design, and development.