<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>ai on Antgarsil Pages</title><link>http://pages.dosil.es/categories/ai/</link><description>Recent content in ai on Antgarsil Pages</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Sat, 13 Dec 2025 10:45:00 +0100</lastBuildDate><atom:link href="http://pages.dosil.es/categories/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Exploiting LLM Tool Use via Prompt Injection</title><link>http://pages.dosil.es/posts/prompt-injection/</link><pubDate>Sat, 13 Dec 2025 10:45:00 +0100</pubDate><guid>http://pages.dosil.es/posts/prompt-injection/</guid><description>Large Language Models (LLMs) are increasingly integrated into applications through mechanisms like &amp;ldquo;Tool Use&amp;rdquo; or &amp;ldquo;Function Calling.&amp;rdquo; While these integrations enable powerful automation, they introduce a new class of vulnerabilities: Prompt Injection. This is analogous to traditional injection attacks (SQLi, SSTI) but occurs within the natural language context of the model.
The Vulnerability: Prompt injection occurs when user-supplied input is concatenated with the &amp;ldquo;System Prompt&amp;rdquo; without sufficient boundary separation. If the model fails to distinguish between developer instructions and user data, it may follow malicious instructions embedded within the user input.</description></item></channel></rss>