This article originally appeared in Forbes Technology Council.

Every company must forge a path to balance the disruptive risks and opportunities posed by large language models (LLMs). But for an organization like ours, working in national security, the stakes and complexity are considerably higher. CNA is an independent nonprofit that produces analysis for the military and for homeland security agencies. Our success over more than 80 years has depended upon government confidence in our research quality and in our ability to manage risk to our networks and privileged information. To balance the heightened risks posed by using LLMs with the need to get the latest tools in the hands of our researchers quickly, we engaged in a collaborative and flexible four-step process:

  • Understand requirements and risks
  • Establish policy and guidance
  • Enter into a contract with employees
  • Collect feedback and adjust

Understand requirements and risks

Our scientific and professional staff consistently demand the latest tools and technology, and they pressed us to incorporate LLMs into CNA’s infrastructure. Researchers wanted to use LLMs to summarize long reports, speed up programming, and quickly generate outlines. Professional staff wanted to rapidly produce draft position descriptions, policies, and company-wide communications. The trick was to address these demands while maintaining a secure, safe, and compliant infrastructure.

We tasked a team in our Global Information Security group to develop processes to assess risk. The public servers that LLMs run on concerned us most. As a partner in U.S. national security, CNA operates under many rules that govern our information technology infrastructure and security, especially what we can put into the public space. LLMs create risks to that posture, particularly the potential exposure of proprietary and sensitive data.

LLMs also introduce risks that could affect the analytical quality that sustains our reputation. One risk is that they can “hallucinate,” or make things up. For example, LLMs claimed CNA had published a specific piece of work, returning a title and author, when an exhaustive search of our document repository showed that no such report exists. Another is that LLMs are trained on internet data, which can carry bias and false information into their outputs. A third research risk is that many LLMs do not provide complete citations for generated text, raising concerns about plagiarism.

Establish policy and guidance

We needed a collaborative approach bridging research and technology to address these risks. We formed a four-person executive team—the chief information officer, chief research officer, and two executive vice presidents. The team settled on four technical governing policies compliant with current government information technology regulations and security requirements:

  • LLM governance policy, a new framework to harness the benefits of AI while managing risks
  • Acceptable use policy, updated to describe acceptable and unacceptable employee behavior on CNA's network
  • Bring your own device policy, updated to allow greater flexibility while ensuring safeguards to manage security risks
  • Software/custom code development policy, updated to standardize secure software development for all CNA-developed code

In addition to these policies, we developed guidance that provides general direction to users and lays out validation and accountability requirements for LLM-generated content. Key principles include:

  • LLMs are tools that can help with some aspects of the research process and corporate tasks.
  • LLMs do not replace critical thinking by a human.
  • LLM users may not enter into an LLM any classified information or Controlled Unclassified Information (CUI), nor any proprietary, privileged, or business-sensitive information, protected health information, or personally identifiable information.
  • LLM users are responsible for any LLM-generated content, which must be validated by a human. This includes checking for biases and factual or statistical errors.
  • LLM use must be transparent, described in the methodology section of reports and briefings and acknowledged in footnotes—including the name of the analyst who verified the results.
  • LLMs cannot be authors.

Together, our technical governing policies and research guidance mitigate both technical and research risks, reducing the likelihood of an information spill and helping users apply LLMs in a responsible manner.

Enter into a contract with employees

Establishing sound technical policies and research guidance is necessary for success, but it isn’t enough. We needed employees to acknowledge their responsibility for reducing risk through a formal LLM access request process.

To access LLMs, employees first read the technical governing policies and guidance. They then complete and sign an LLM Request and User Agreement Form. On the form, an employee specifies all LLM tools they plan to use for CNA work, whether on a company computer or personal device. The employee’s manager signs and approves the form, and the CIO’s information security team reviews it for technical risks before sending the approved tools list to the information technology team to allow user access. The process creates a record of requests and insight into which tools are most requested.
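
For illustration only, the sketch below models what such a request record and tools tally might look like in Python. The class name, fields, and sample data are assumptions for the sake of the example, not CNA's actual form or systems.

```python
from dataclasses import dataclass
from collections import Counter
from typing import List, Tuple

@dataclass
class LLMAccessRequest:
    """Hypothetical record of one LLM Request and User Agreement Form."""
    employee: str
    device: str                       # company computer or personal device
    tools: List[str]                  # LLM tools the employee plans to use
    manager_approved: bool = False    # manager signs and approves the form
    security_reviewed: bool = False   # information security team review
    access_granted: bool = False      # IT enables access to approved tools

    def grant_access(self) -> None:
        # Access is enabled only after both sign-offs are recorded.
        if self.manager_approved and self.security_reviewed:
            self.access_granted = True

def most_requested_tools(log: List[LLMAccessRequest]) -> List[Tuple[str, int]]:
    """Aggregate the request log to see which tools are most requested."""
    return Counter(tool for request in log for tool in request.tools).most_common()

# Example with made-up data.
log = [
    LLMAccessRequest("analyst_a", "company computer", ["ToolX", "ToolY"],
                     manager_approved=True, security_reviewed=True),
    LLMAccessRequest("analyst_b", "personal device", ["ToolX"],
                     manager_approved=True),
]
for request in log:
    request.grant_access()
print(most_requested_tools(log))  # [('ToolX', 2), ('ToolY', 1)]
```

Keeping even a lightweight record like this makes it straightforward to audit who has access to which tools and to spot the tools worth investing in further.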

Collect feedback and adjust

To encourage cross-organizational collaboration, we provide authorized LLM users with a collaboration space inside a Microsoft Teams channel, where they learn from each other, sharing what works and what doesn’t. We also collect feedback on the LLM user experience through regular surveys and small group discussions. So far, 94 percent of users find LLMs to be useful, and 80 percent say they will continue to use them. Still, 75 percent identified drawbacks, including wrong code, incorrect content, opinionated language, and nonsense answers. Many felt limited by the inability to upload proprietary documents or PDFs. In focus groups, staff asked for guidance on prompt development so they can get the most effective answers to the questions they ask of LLMs. And they told us they wanted LLMs that could operate inside CNA’s firewalls. We are incorporating this feedback into program improvements now underway.

Developed in just under six weeks, our approach to LLM access has enabled employees to stay curious, explore further, and witness firsthand the incredible power of language models within deliberate bounds. Since the regulations that organizations like ours work under cannot keep pace with the disruptive changes created by LLMs, leaders must learn to manage risk prudently. Cross-discipline collaboration at the leadership level has allowed us to find this critical balance between opportunity and risk in harnessing LLMs to further our mission.


Rizwan Jan is Vice President and Chief Information Officer and Kim Deal is Chief Research Officer at CNA.