At Taproot, we believe trust is built not only by what we publish, but by being clear about how we work. Transparency about the tools we use and the standards we hold ourselves to underpins our ability to deliver reliable intelligence about the communities we serve.
We have now published a policy on artificial intelligence. We use AI to help make our work more efficient, accessible, and useful, while keeping our journalism human-led, human-edited, and human-accountable.
In the long term, we may not need a separate AI policy at all, as our existing ethics policy covers the principles that guide our work regardless of the tools we use. But in the short term, AI is changing quickly, public interest is high, and there is understandable uncertainty about what AI use actually means in practice. For now, we think it is better to be explicit about how we use these tools.

Why we are using AI
We are using AI to help our team do more valuable work and spend more of its time where human judgment matters most.
We understand that some people feel strongly that AI should not be used at all. We respect that view, but we do not share it. We think AI can be a useful tool when used responsibly. It is clear that AI is here to stay, and is even being incorporated into tools we already use, whether we like it or not. We think the right approach is to use AI thoughtfully, with transparency and accountability, rather than trying to swim against the tide.
We use AI for tasks such as summarizing information, brainstorming questions, generating first drafts, detecting spelling and grammar issues, assisting with research, analyzing large datasets, and transcribing interviews. We also use AI for coding and product development, business operations, and other non-editorial tasks.
In every case, a human remains responsible for reviewing, checking, and deciding whether and how the work will be used.
How we actually use AI
We view AI primarily as a tool. Like other tools, it can be used thoughtfully or carelessly. As you might expect, we have put a lot of thought into how we use it!
We do not go to a chatbot and say, “write an article about X” and then publish what comes back. That kind of approach produces AI slop, and it is not what we do.
Instead, we build and use agents. An LLM agent runs tools in a loop to achieve a goal. (LLM stands for large language model, which is the type of AI we generally use.)
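To make that concrete, here is a minimal sketch of an agent loop in Python. The tools and the model call below are illustrative stand-ins, not our production code:

```python
# A minimal sketch of an agent loop: the model is called repeatedly, and
# each time it asks to use a tool, we run that tool and feed the result
# back in, until the model says the goal has been met.

def search_archives(query: str) -> str:
    """Illustrative tool: look up prior coverage (stubbed here)."""
    return f"Archive results for {query!r} would appear here."

def check_style(text: str) -> str:
    """Illustrative tool: flag style-guide issues (stubbed here)."""
    return "No style issues found."

TOOLS = {"search_archives": search_archives, "check_style": check_style}

def call_llm(messages: list[dict]) -> dict:
    # Stand-in for a real model call. This fake model asks for one
    # archive search, then declares the goal met.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search_archives", "input": "city budget"}
    return {"done": "A first draft informed by the archive results."}

def run_agent(goal: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "done" in reply:            # the model says the goal is met
            return reply["done"]
        tool = TOOLS[reply["tool"]]    # the model asked to run a tool
        result = tool(reply["input"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("Agent did not finish within the step budget.")

print(run_agent("Draft a paragraph about the city budget"))
```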
For example, we have developed an agent to generate first drafts of written work. We provide the agent with a set of instructions, transcripts of interviews, and reporting notes. We also equip the agent with tools for tasks such as accessing our archives and applying our style guide. The agent produces a first draft that is constrained by our instructions, shaped by our reporting, and ready for human input and editing. Before the draft is published, it follows the same editorial process as our pre-AI drafts did: It is revised, edited, fact-checked, and reviewed by at least one human editor.
We developed this agent by identifying and documenting the steps we would normally take to produce a first draft. We then built a system that can follow those steps, using AI to help with the parts that are more mechanical and time-consuming. This approach allows us to benefit from the efficiency of AI while keeping our work human-led and human-edited.
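As a simplified illustration, documented steps like these can be passed to the agent as instructions alongside the reporting material. Again, this sketch is for illustration rather than a picture of our actual system:

```python
# A simplified sketch of how documented drafting steps become the agent's
# instructions. The step list and field names here are illustrative.

DRAFTING_STEPS = """\
1. Read the interview transcripts and the reporting notes.
2. Search the archives for relevant prior coverage.
3. Outline the story: lede, key facts, quotes, context.
4. Write a first draft using only material from the inputs.
5. Check the draft against the style guide.
"""

def build_drafting_messages(transcripts: list[str], notes: str) -> list[dict]:
    """Assemble the inputs the drafting agent starts from."""
    return [
        {"role": "system", "content": DRAFTING_STEPS},
        {"role": "user", "content": "Transcripts:\n" + "\n---\n".join(transcripts)},
        {"role": "user", "content": "Reporting notes:\n" + notes},
    ]
```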
We deployed our first agents earlier this year, and will continue to iterate on our approach as we learn more about what works and what doesn’t. Using agents like this changes the way we work, but it doesn’t change the necessary conditions for good work. Garbage in, garbage out still applies!
We have high expectations for quality, with or without AI. It has been an interesting challenge to figure out how to “teach” an agent to meet those standards. The process of doing that has helped us clarify, document, and improve our own internal processes. This is a welcome side effect of using AI the way we do.
When we disclose AI use
In the same way that we do not disclose every use of a spell-checker, we don’t think every use of AI needs to be individually disclosed. That said, readers deserve to know when AI has played a significant role in the work they are seeing.
For example, when we use AI to analyze material at a scale that would not otherwise be practical, we disclose that, as we did when we used LLMs to help analyze the input we gathered during our 2025 election project.
We avoid generating visuals with AI. There are interesting and potentially appropriate use cases, and we do not rule them out in principle, but if we ever do use AI-generated visual material, we will disclose that clearly.
Our policy also commits us to disclosing AI use where a reader could reasonably feel misled if we did not explain how AI was used. That is a bit of a judgment call, but as always, we will err on the side of transparency.
What’s next
Journalism has always evolved alongside technology. New tools often arrive with uncertainty, debate, and strong opinions. Over time, some become ordinary parts of the workflow. We expect many AI tools will follow a similar path.
We are optimistic about the potential of AI to assist us in achieving our mission to help communities understand themselves better. We are also clear-eyed about the risks and challenges, and we are committed to using AI in a way that is consistent with our values.
We invite you to read our Artificial Intelligence Policy. If you have questions about how we use AI at Taproot, please get in touch.