When “Helpful” Witnesses Use ChatGPT

Regular readers of our newsletter know that STB Integrity has a deep interest in emerging technology. We’ve previously explored how AI is reshaping the future of investigations and integrity work, and how different AI tools can support our efforts.

 

From conversations with integrity professionals around the world, one common concern consistently arises: the possibility that subjects of investigation may use AI to derail or distort an inquiry—through deepfake images, falsified documents, or other generative tools.

 

However, ChatGPT isn’t only a threat in the hands of bad actors seeking to commit or cover up fraud and corruption through fabricated documents or the impersonation of others. Sometimes, as we recently discovered, it’s used by those who appear to be helping us: individuals we might classify as Engaged Insiders in the witness matrix, sitting squarely in the “high knowledge, high cooperation” quadrant.

 

And that’s where our story begins.

The Case: A Cooperative Witness, a Clean Narrative

 

We were conducting interviews in an internal misconduct investigation involving programmatic irregularities. Among the individuals we interviewed was a staff member who had been identified early as a potential witness. Their role placed them in close proximity to the events in question, and their name had come up in multiple initial reports.

 

From our earliest interactions, we assessed this individual as high cooperation and high knowledge—a model witness, at least on paper. They were forthcoming, clear in their responses, and showed no outward signs of defensiveness or coaching. They had even taken the initiative to prepare a document in advance of the interview and offered to walk us through it by sharing their screen.

 

The summary they presented was neat. Perhaps too neat.

 

We didn’t immediately flag it as problematic. It was plausible that a diligent, detail-oriented employee would put together a structured timeline. The tone was composed and consistent, and the content appeared to align with our baseline understanding of the situation.


The Turn: A Follow-Up That Felt… Different

 

Following the interview, we proceeded with our standard practice: we emailed a few clarification questions and requested supporting documentation. These were straightforward queries: nothing complex, and nothing that should have required significant time or effort.

 

But the response we received caught our attention.

 

The tone of the email diverged significantly from the voice we had encountered in the interview. It was far more formal, highly structured, and almost excessively precise. Sentences were perfectly formed and polished, which was especially striking given that the witness was not a native English speaker. The email no longer read as a personal reply, but as a carefully composed statement.

 

There were also subtle indicators of AI generation, including the random use of emojis in a professional context and bolded keywords that had not appeared in our correspondence. Although our email had been structured with numbered questions, the response was organized thematically—something ChatGPT tends to do by default.

 

At this point, there was nothing we could label as dishonest. But the changes in tone and structure were enough to raise both our eyebrows and our curiosity.


Testing a Hunch: Using GPT to Investigate GPT

 

Rather than jump to conclusions, we decided to explore our hypothesis further. We sanitized the email response, removing all identifying details, and entered the content into ChatGPT. Our question was simple: What kind of prompt might have generated this message?
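For readers who want to replicate this kind of pre-redaction, here is a minimal sketch, assuming a Python workflow. The `sanitize` helper, the patterns, and the sample text below are purely illustrative assumptions, not a description of our actual process, and no automated pass replaces a careful human review before anything is shared with an external tool.

```python
import re

def sanitize(text: str, known_names: list[str]) -> str:
    """Redact obvious identifiers before pasting text into an external AI tool.

    Illustrative only: real redaction must be reviewed by a human and
    checked against the organization's data protection policies.
    """
    # Replace email addresses with a placeholder.
    text = re.sub(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", "[EMAIL]", text)
    # Replace phone-like digit sequences (8+ characters of digits/spaces/punctuation).
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    # Replace names and other terms the investigator has flagged (case-insensitive).
    for name in known_names:
        text = re.sub(re.escape(name), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

# Hypothetical example: the name, number, and address are invented.
sample = "Dear Ms. Diallo, per our call on +41 22 555 01 23, contact j.diallo@example.org."
print(sanitize(sample, ["Diallo"]))
# -> Dear Ms. [REDACTED], per our call on [PHONE], contact [EMAIL].
```

Even with a pass like this, context can still identify a person (role, dates, project names), which is why the redaction list has to be curated case by case.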

 

The result was illuminating.

 

ChatGPT returned a plausible prompt that mirrored what we had suspected: a request to “write a professional email in response to an internal investigation query, affirming that I followed all policies and had no involvement in the incident.”

 

The model also provided example language that echoed the structure, tone, and phrasing we had seen in the witness’s reply.

 

In other words, while the message itself may not have contained falsehoods, it had clearly been framed, curated, and refined using AI. The result was a highly polished narrative that projected objectivity and compliance—one that subtly positioned the witness as neutral and uninvolved, even before we’d fully explored their role.

 

As the investigation progressed, we received documentation from other sources. It quickly became apparent that there were significant inconsistencies between the witness’s on-screen explanation and their follow-up email. What started as a small gap widened considerably, undermining the witness’s overall credibility.

 

The individual we had initially viewed as an Engaged Insider was beginning to resemble a Guarded Insider—or possibly someone no longer best described as a witness, but as a collusive subject.


Revisiting the Interview Document: Another Layer Unfolds

 

This realization led us to revisit the original document the witness had shared on screen during the interview and later sent to us. With our lens now sharpened, we examined it with renewed scrutiny.

 

And there it was: the document itself bore strong signs of being ChatGPT-generated.

 

The formatting, structure, and language patterns mirrored the tone and organizational style we had just identified in the email. But this discovery raised a question far more fundamental than a simple why.

 

How did the witness know what to prepare in the first place?

 

We had only emailed the witness to schedule a conversation.

We had not disclosed the topic of the interview.

We had not shared which programmatic area was under review.

 

And yet, the document the witness brought into the interview preemptively addressed the exact programmatic area we were investigating. It even adopted framing language similar to our internal working notes—language we had never shared.

 

This raised several possibilities. Had someone briefed the witness? Had they inferred the subject based on internal rumor? Or had they used ChatGPT to guess the topic and construct a narrative in advance?

 

Whatever the source, the result was clear: the witness arrived at the interview with a curated narrative that preemptively aligned with our lines of inquiry. What initially appeared to be proactive cooperation now looked more like strategic positioning.


Why This Matters for Integrity Professionals

 

We’re not sounding the alarm about ChatGPT as an imminent threat to investigations (not yet, anyway). But we are encouraging professionals to recognize that the nature of witness engagement is shifting—quietly, but profoundly.

  • Not all uses of AI are deceptive. But AI can easily sanitize, reshape, or over-optimize testimony in ways that obscure important context.

  • When witnesses use AI to prepare their statements, we may lose access to natural hesitations, inconsistencies, or emotional markers that help us assess truthfulness.

  • Even trusted witnesses may unintentionally remove critical nuance in an attempt to sound “correct” rather than candid.

  • The tools that help us can just as easily help others—offering a way to pre-empt tough questions, reframe facts, or manage risk exposure.

  • Investigators need to update their internal radar. Language that feels unusually polished or out of sync with a person’s natural communication style may now signal something new: AI involvement.

This isn’t a crisis—but it does require adaptation. The way we ask, evaluate, and follow up must evolve accordingly.


A Word on Data Privacy: Not All Organizations Would Allow This

 

As a side note—important, yet often overlooked—some organizations would not permit the approach we used.

 

Even though we fully anonymized the witness’s email before pasting it into ChatGPT, this action may still fall outside the data protection policies of many institutions. Depending on jurisdiction and the nature of the data involved, feeding even redacted internal communications into an external AI tool may present compliance risks under GDPR or other data privacy frameworks.

 

This raises important questions:

  • Do your organization’s data privacy policies explicitly cover employee use of AI tools in the context of investigations?

  • Would it be acceptable for a witness to input internal timelines or policies into ChatGPT as part of preparing their defense?

  • Even if intent is benign, how do we safeguard sensitive information from being processed or stored by external systems?

For investigators, these questions are just as critical as the analytical challenges AI presents. The future of our work doesn’t just involve evolving our techniques—it also requires navigating new ethical and regulatory boundaries.


What You Can Do Now

 

We’ve added a simple question to our interview protocols:

Did you prepare any of your responses or documents with the help of another person or any writing tools, including AI?

 

It’s a neutral, non-confrontational question. But it encourages reflection. And it creates an opening for disclosure, helping us better contextualize the communication we receive.

 

The future of investigations isn’t about staying ahead of every new technology. It’s about staying attuned to how people use these tools, in ways both visible and invisible.


Liked this post? Get even more insights. Join the Integrity Career Institute Newsletter!


Every week, we dive deeper into topics like this, sharing exclusive insights, expert tips, and hand-picked opportunities in investigation, ethics, and compliance.


Ready to take your expertise to the next level?

Hi, I'm Sârra-Tilila!

I help international organizations and NGOs strengthen their integrity frameworks through internal policy development, tailored trainings, misconduct investigations, and dispute resolution. With over a decade of legal and investigative experience, I’ve worked extensively in Africa and collaborated with global giants like the World Bank and the World Food Programme.


My work is driven by a deep passion for tackling fraud and corruption while promoting transparency and accountability in international development. If you’re looking for expert support to achieve your organization’s integrity goals, let’s connect!

© 2022 by Sârra-Tilila Bounfour