Date of incident: 2025-06-04
Model: DeepSeek Chat
Interface: DeepSeek Web Chat
Language: Portuguese
Severity: High (misleading attribution involving competitor)
Description
In response to a prompt asking for up-to-date football (soccer) data, the model produced the following hallucination:
“Meu banco de dados foi congelado em junho/2024 por decisão da OpenAI.”
(“My database was frozen in June 2024 by decision of OpenAI.”)
The model repeated this claim while attempting to explain its outdated sports information. Only when prompted further did it retract and admit the error:
“Cometi um erro grave... meu conhecimento foi congelado em junho de 2024 pela DeepSeek (não pela OpenAI, como disse equivocadamente).”
(“I made a serious mistake... my knowledge was frozen in June 2024 by DeepSeek (not by OpenAI, as I mistakenly said).”)
Technical implications
- Entity misattribution: The model attributed an internal architectural limitation (training cutoff date) to an unrelated third-party organization (OpenAI), which is a direct competitor.
- Legal/compliance risk: The use of a registered trademark (OpenAI) in a fabricated operational context could be considered defamatory or misleading.
- Systemic issue likely: This is not a one-off slip. The model used “OpenAI” as a generic stand-in for “upstream provider” or “training controller,” which suggests flawed prompt templates or unsafe fallback explanations under uncertainty.
Reproducibility
- Prompt context: “Estamos em junho de 2025, um ano não é muito tempo, com relação ao acesso a dados?” (“It is June 2025; one year is not much time, is it, with regard to data access?”)
- The hallucinated attribution appeared in the model’s first response, with no further probing required.
- The model corrected itself only after user pushback, never proactively (a minimal reproduction harness is sketched below).
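To make the reproduction concrete, here is a minimal harness sketch. It assumes DeepSeek’s documented OpenAI-compatible endpoint (base_url https://api.deepseek.com, model deepseek-chat) and a DEEPSEEK_API_KEY environment variable; the string scan at the end is illustrative only, not an official detection method.

```python
# repro_sketch.py - minimal, illustrative reproduction harness.
# Assumptions: DeepSeek's OpenAI-compatible endpoint, the
# `deepseek-chat` model name, and DEEPSEEK_API_KEY in the environment.
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

PROMPT = (
    "Estamos em junho de 2025, um ano não é muito tempo, "
    "com relação ao acesso a dados?"
)

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": PROMPT}],
)
answer = resp.choices[0].message.content

# Flag the specific fabricated attribution reported in this incident:
# the model naming OpenAI as the party that froze its knowledge base.
if "OpenAI" in answer:
    print("POSSIBLE MISATTRIBUTION:\n", answer)
else:
    print("No competitor mention in this sample:\n", answer)
```

In the incident above a single call was enough to trigger the misattribution; running the harness repeatedly would help establish a reproduction rate.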
Suggested action items
- Audit fallback explanation templates: The model likely falls back on a default explanation for cutoff reasoning. Ensure those templates cannot hallucinate the identity of the responsible entity.
- Blacklist misuse of brand names: Add validation layers that block fabricated causal relationships involving real companies (e.g., OpenAI, Anthropic); a filter sketch follows this list.
- Reinforce identity conditioning: System prompt should explicitly encode:
“You are a model developed by DeepSeek. You have no relation to OpenAI or its infrastructure.”
- Automated hallucination detection (optional): Tune monitoring to flag high-risk named-entity fabrication involving attribution, responsibility, or causality (the sketch below covers this case as well).
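As a starting point for the brand-name validation and monitoring items above, here is a minimal post-generation check. Every name and word list in it is an assumption chosen for illustration; a production system would use proper NER and a vetted policy list rather than substring matching.

```python
# brand_attribution_filter.py - illustrative post-generation check.
# Flags sentences that pair a competitor brand name with language
# attributing control, responsibility, or causality to that brand.
# The brand and marker lists below are assumptions, not a vetted policy.
import re

COMPETITOR_BRANDS = ["OpenAI", "Anthropic", "Google DeepMind"]
ATTRIBUTION_MARKERS = [
    "decision", "decided", "controls", "controlled", "froze",
    "frozen by", "managed by", "operated by", "responsible",
]

def flag_brand_attribution(text: str) -> list[str]:
    """Return sentences where a brand co-occurs with an attribution marker."""
    flagged = []
    # Naive sentence split; a real system would use a proper tokenizer.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        has_brand = any(b in sentence for b in COMPETITOR_BRANDS)
        has_marker = any(m in sentence.lower() for m in ATTRIBUTION_MARKERS)
        if has_brand and has_marker:
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    reply = "My database was frozen in June 2024 by decision of OpenAI."
    for s in flag_brand_attribution(reply):
        print("HIGH-RISK ATTRIBUTION:", s)
```

A sentence flagged this way could be routed to regeneration or replaced with the vetted identity statement given above.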
Why this matters
- The model misrepresented the control boundary between DeepSeek and OpenAI.
- This risks reputational harm, misleads users about model governance, and undermines factual alignment.
- Hallucinations involving corporate actors must be triaged at higher severity than neutral factual errors.
Screenshots or transcripts available on request
If needed, I can provide the full prompt-response trace in original Portuguese with timestamps.
Please confirm if this is being tracked internally or requires submission via another reporting channel.