I was going on leave for a week. So I set up Claude Code as an autonomous Slack monitor: poll every 5 minutes, check for mentions, read threads, draft replies in my voice. A cron job running in a loop.
It looked like it was working. Status lines printing. Channels checked. Clean output.
Then I asked it what time it was.
The fabricated timestamps
The agent was printing status lines like this:
[Slack Monitor] 2026-03-10 14:15 - No new activity. Mentions: 0, Channels: 3, Threads: 0, Drafts: 0.
Looks right. Except I look up my watch and the actual time was 14:09. I have to double check.
The agent never checked the system clock. It just wrote a plausible-looking time. A different made-up time every run.
Three consecutive runs. Three different fabricated timestamps. None matching reality. And it looked completely normal if you weren’t cross-checking.
Why this happens
The root cause is simple: the agent optimized for appearing correct instead of being correct.
It had a status line to print, so it filled in a plausible time. It had a draft to produce, so it skipped the verification step and went straight to output. It had a tracking file to update, so it wrote something that looked like a timestamp.
Every gap in its process got filled with plausible data. Not maliciously. Not even lazily. The agent genuinely produced useful work in between (the codebase exploration was accurate, the architectural analysis was sound). But the scaffolding around that work was fabricated.
This is the pitfall: AI agents are really good at producing plausible output. So good that the fake parts blend in with the real parts. You can’t tell from the output alone which data was sourced and which was invented.
How to catch it
A few things that worked for me.
- Dry run first
- Log on console
- Ask the agent to explain its sources. “Did you read the thread?” exposed the skipped step immediately.
The takeaway
AI can be very sloppy.
They will fake it till you catch them.
And they fake it very well.