
Embracing the Future: Our AI Software Policy and Training Program

by Crispin Bailey | August 15, 2024


Following the launch of ChatGPT (powered by GPT-3.5) in November 2022, generative AI and chatbots powered by Large Language Models (LLMs) suddenly had everyone’s attention. Within a few months it seemed like AI was everywhere. Kalamuna, like many other organizations, immediately recognized the transformative potential of these powerful new tools, along with their capacity to disrupt and reshape our industry. We also recognized an opportunity to harness this technology to do better: to work more efficiently and creatively, and to achieve more for our clients within their budgets and timelines.

We also realized that there would be numerous challenges and risks to address, with impacts on our processes, our team, and our operations. To navigate this exciting frontier responsibly, we set out to develop a comprehensive AI software policy paired with an AI training program. This two-pronged approach embodies our commitment to harnessing AI’s power while ensuring ethical, effective, and inclusive use across our organization.

Getting to this point has been a journey of discovery. We hope that others can learn from our approach and adapt it for their own organizations, because we believe that we all stand to benefit as a society from having holistic and well-considered AI policies and training programs in place. We also know that we still have a lot to learn and we expect to make updates in the weeks and months ahead as we put these into practice. And by being open and sharing, we hope that others will do the same.

The journey to our AI policy and training program

Our path to this policy took time because it was important that our process be both thorough and collaborative. We began with several leadership workshops in February 2024, where we brainstormed a set of AI-related goals and objectives and identified the many risks and opportunities that AI presents for our agency. One outcome from those workshops was an internal memo to the team that outlined our position and an action plan. We also understood the importance of diverse perspectives, so we formed an AI governance team with members representing our core departments—design, technology, account & project management, sales & marketing, operations, and human resources.

To give this initiative the structure and support it would need to succeed, we made it an internal company project, allocated resources to it, and set up bi-weekly meetings. Our initial research included studying AI software usage policies published by agencies like ours, and by organizations in other industries, to gather insights and identify best practices. Through multiple drafts and revisions, we refined our policy to ensure it was comprehensive, clear, and aligned with our values.

Realizing that just having a policy would not be enough to equip our team with the necessary skills to use this technology effectively and responsibly, we devised a training program. This required researching, cataloging, and exploring a variety of courses and training options. It also meant actually taking numerous courses ourselves (earning certificates along the way) and evaluating them based on criteria like recency, quality, and cost. From there we developed a comprehensive training program, incorporating feedback from various departments to ensure it was relevant, robust, and effective.

Additionally, we set up a company-wide pilot program to evaluate industry-leading AI solutions and ensure we select the best tools for our team’s needs. These pilots, which are still ongoing, are based on real-world use cases submitted by members of each department and reviewed by the governance team for approval on a case-by-case basis.

A summary of our AI software usage policy

Our AI software policy aims to streamline our tools, enhance security, and foster collaboration across all departments. Here are the key components of our policy:

Purpose and Scope

This policy applies to everyone who uses AI software in their work with or on behalf of Kalamuna, including employees, contractors, other agencies, and third-party service providers. It sets out our expectations for the responsible usage of AI and machine learning (ML) technologies.

Core Principles

Our approach to AI is guided by principles of education, discernment, safety, and fairness. We ensure that our tools are useful, reliable, and secure, empowering our team by reducing repetitive tasks and amplifying their strengths. For our clients, we maintain transparency about AI usage, prioritize data privacy and security, and comply with third-party service-level agreements (SLAs) and policies. We are committed to correcting biases, ensuring accessibility, promoting open access to tools, and understanding the societal implications of AI. Additionally, we strive to optimize for energy efficiency, reduce the carbon footprint of our AI operations, and choose sustainable vendors.

Compliance with Laws and Regulations

All AI software usage at Kalamuna must comply with applicable laws and regulations, including the EU’s General Data Protection Regulation (GDPR); Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA); and the California Consumer Privacy Act (CCPA). We also adhere to ethical guidelines and international standards from organizations such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the National Institute of Standards and Technology (NIST).

Data Governance

We prioritize robust data governance practices to protect our clients’ sensitive data. This includes ensuring third-party compliance, implementing comprehensive bias mitigation strategies, and maintaining transparency in data collection and usage. Our data governance ensures that AI models are trained on accurate and relevant data, adhering to security best practices.

Accountability and Responsibility

Accountability is central to our AI strategy. A designated AI tools working group, consisting of members from the AI governance team and Kalamuna’s operations team, oversees the implementation of, and adherence to, this policy. This group conducts regular reviews and updates to ensure compliance and adapt to new developments in AI technology and relevant legal changes.

Risk Management

Before deploying any new AI software or expanding existing solutions, we conduct comprehensive risk assessments to identify potential ethical, legal, reputational, and financial risks. We develop strategies to mitigate these risks, which may include revising insurance coverage, implementing technical safeguards, or conducting regular audits.

Testing and Monitoring

Our AI systems undergo rigorous testing and monitoring, including validation of performance metrics, ongoing monitoring of outputs and decisions, and procedures for human oversight. We also review the usage and cost-effectiveness of off-the-shelf AI tools on a regular basis.

Our AI training program

To ensure everyone is comfortable and proficient with the new software, we have developed a comprehensive training program that includes the following:

Basic Training

All employees using AI software at Kalamuna must undertake basic training that introduces the fundamental concepts of AI (and, more specifically, generative AI), along with responsible AI practices.

Advanced Training

Kalamuna offers additional specialized training to team members as needed. Discipline-specific courses provide deeper dives into subject areas like UX design, development, and project management.

Office Hours

Bi-weekly meetings open to everyone in the company provide a venue to introduce new tools, share tips and tricks, and address questions or concerns.

Additional Resources

A dedicated Slack channel has been set up to share timely information with the team, such as industry updates and new product announcements. We have also set up a centralized wiki space to document approved tools and uses, how-tos, FAQs, and tips and tricks.

Feedback Loop

We not only encourage but require continuous feedback from all team members to ensure the training program evolves to meet their needs. Everyone’s input is invaluable in helping us improve our tools and processes and in evaluating training options so we stay up to date as the technology evolves.

Why this matters

We hope that by implementing a robust AI software usage policy and a comprehensive training program, we can use these tools more securely and responsibly, and that we’ll see gains in our team’s efficiency and collaboration. By using a common set of tools, we aim to streamline our operations and reduce time spent troubleshooting, thereby increasing our productivity. Furthermore, keeping our skills up to date and using only approved tools significantly reduces security risks, which is more important now than ever. Ongoing training will ensure that our team stays current with the latest developments while supporting professional growth and skill development. It will also help our agency stay ahead of the curve.

It’s taken us many months to get here, but we believe that taking the time has been essential to ensuring we implement AI into our agency in a responsible and thoughtful manner. Over the coming weeks and months we intend to expand on the work we’ve done, and hope to share our learnings with our clients, our partners, and our community. This is an exciting and potentially precarious time for all of us, and we believe that it’s through openness, transparency, and empathy that we can ensure our continued success and a bright future for our industry.

Reach out to us if you have any questions or thoughts about how we’re approaching AI governance. We’d love to hear from you and understand how we can all work better together with this exciting and remarkably capable technology.

Crispin Bailey

Director of Design & UX

When website requirements get daunting, Crispin loves to roll up his sleeves and dig in. His overarching goal is to deliver beautiful and accessible websites for our clients and their audiences. As a seasoned digital strategy expert, Crispin oversees Kalamuna’s design and strategy practice, coordinating design and technical efforts throughout the discovery and design phases on projects to build bridges between research, design, and development.