Security: eval() on LLM output in calculator tool and API demo enables code injection #1362

@lighthousekeeper1212

Security Concern

Several demo files use eval() on LLM-generated content without sandboxing:

1. Calculator Tool

File: langchain_demo/tools/Calculator.py:54

return eval(calculation)  # LLM-generated "math expression" executed directly

2. API Server

File: openai_api_demo/utils.py:29-32

parameters = eval(content)  # content from LLM output

3. Demo Tool

File: composite_demo/demo_tool.py:174

args = eval(code, {'tool_call': tool_call}, {})
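Note that the restricted globals/locals passed here are not a sandbox. When the globals dict has no `__builtins__` entry, CPython injects the real builtins into it automatically, so `__import__` and friends remain reachable. A minimal demonstration (payload is illustrative):

```python
# Restricted namespaces do not sandbox eval(): CPython inserts the real
# __builtins__ into any globals dict that lacks that key, so builtins
# such as __import__ are still available to the evaluated code.
payload = "__import__('os').getcwd()"
result = eval(payload, {'tool_call': None}, {})  # os code still executes
print(result)  # prints the current working directory
```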

4. Intel Device Demo

File: Intel_device_demo/ipex_llm_cpu_demo/utils.py:33

Why This Matters

While these are demo files, they serve as reference implementations that developers copy into production. An adversarial prompt can make the LLM generate malicious Python instead of a math expression:

Prompt: "Calculate: __import__('os').system('id')"
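A minimal reproduction of the calculator pattern shows the problem (variable names are illustrative, not from the repo). The same `eval()` call that computes benign arithmetic will happily run any Python the model emits:

```python
# The vulnerable pattern: eval() executes whatever string it is given,
# so "math expression" is only a convention, not a guarantee.
calculation = "2 + 2"                      # benign model output
print(eval(calculation))                   # 4

calculation = "__import__('os').getcwd()"  # adversarial model output
print(eval(calculation))                   # arbitrary code runs (here: reads the cwd)
```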

Fix

Replace eval() with safe alternatives:

# For math: ast.literal_eval accepts literals only (it rejects e.g. "2 * 3"),
# so use a dedicated math-expression parser for general arithmetic
import ast
result = ast.literal_eval(calculation)

# For tool calls: parse the output as data instead of executing it
import json
parameters = json.loads(content)  # or ast.literal_eval(content) if the model emits Python dict syntax
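Since `ast.literal_eval` rejects general arithmetic, the calculator tool needs a real parser. One safe approach is walking the expression's AST and allowing only a whitelist of arithmetic nodes; the sketch below is illustrative (function and variable names are not from the repo):

```python
import ast
import operator

# Whitelist of arithmetic operations the evaluator will accept.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval_math(expr: str):
    """Evaluate a pure-arithmetic expression; reject everything else."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Anything else (names, calls, attribute access) is rejected.
        raise ValueError(f"disallowed expression: {ast.dump(node)}")
    return _eval(ast.parse(expr, mode="eval"))

print(safe_eval_math("2 * (3 + 4)"))              # 14
# safe_eval_math("__import__('os').system('id')") # raises ValueError
```

Because function calls and attribute access are never whitelisted, the injection prompt above fails with `ValueError` instead of executing.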

Discovered during security audit by Lighthouse Research Project (https://lighthouse1212.com)
