By Emily Emanuelsen

How communicators regulate AI for best performance

With the growth of AI as a workplace tool, the role of marketing and communication professionals has changed. AI is steadily gaining the capacity to create content that is engaging and consistent with a company’s branding. Although this has many communicators worried about their job security, no matter how capable AI becomes, it will never entirely replace human communication professionals. Humans are needed to set goals for their organizations and to decide how to communicate with their audiences.

As such, humans are needed now, and in the future, to create AI policies: directing and regulating how AI is used in the organization, what it should be allowed to create and how its output should be monitored by human communicators. In an article titled “12 Questions to Inform Your AI Policy,” Ragan shares 12 AI use cases that a company should evaluate when crafting such a policy.

To begin, one question to consider is who should be responsible for what AI creates. AI itself cannot be held responsible for its output. Instead, the people who use it, and their managers, are responsible for anything they publish that was created wholly or in part by AI. That responsibility includes the accuracy of the information shared, so communications teams should fact-check everything AI states, just as they would material written by a human writer. AI can also reproduce implicit bias in the way it writes and in the information it chooses to use. Addressing this is the communications team’s role as well.

The ways communications professionals use AI are another important area for regulation. Who in the organization is using AI, which AI tools are they using, for what kinds of content, and when should its use be disclosed to the public? The uses of AI and its disclosure should be discussed and regulated within the organization so that employees have clear guidelines for using AI in the content they create. Questions like these should be addressed in order to form a robust AI policy.

Finally, communications teams should decide what kind of information should NOT be entered into AI prompts. For example, if the information being used to create a press release, article or other content is not yet publicly available, it should not be shared with a generative AI program. Many AI providers may store prompt text and use it to train or improve future models. Because of this, confidential information should never be entered into an AI tool.

As with all the questions above, these topics should be discussed as a group, and policies should be created for handling these new, powerful creative tools. Because communicators are the ones closest to the creation of AI-generated text, images and videos, they should play an important role in weighing the benefits and complexities of integrating AI into their company’s marketing and communications teams and in developing the policies that allow these tools to be used most effectively.

AOE has experience using AI responsibly to bring efficiencies to communications activities. Reach out today to brainstorm how AI may be a fit for you.

