aniketgit-hub101/AegisDOM

AegisDOM

Agentic Browser Security System

Overview

AegisDOM is a security middleware designed to protect agentic browsers from malicious web interactions.
It intercepts browser actions, analyzes web content in real time, and enforces risk-based decisions before an autonomous agent executes actions such as clicks or form submissions.

The system focuses on explainability, deterministic risk scoring, and human-in-the-loop safety.


Problem Statement

AI-powered agentic browsers are vulnerable to malicious web content such as:

  • Prompt injection attacks
  • Hidden DOM-based instructions
  • Deceptive login pages and phishing UI
  • Dynamically injected malicious elements

Traditional browser security mechanisms are built around human users and do not inspect page content for instructions aimed at an AI agent. AegisDOM addresses this gap by acting as a mediation layer between the agent and the web page.


Key Features

  • DOM-based threat detection (prompt injection, hidden elements, phishing)
  • Deterministic rule-based risk scoring (0–100)
  • Action mediation: ALLOW / WARN / BLOCK
  • Human confirmation for risky actions
  • Explainable security decisions
  • Lightweight and low-latency design
  • One-click launcher for demo execution

System Architecture

AegisDOM follows a modular architecture:

  • Agent Layer
    Browser automation using Playwright (Chromium)

  • Security Layer (AegisDOM Engine)
    DOM analysis, risk scoring, and policy enforcement

  • UI Layer
    Web-based security console (Flask + HTML/CSS/JS)

This separation keeps browser automation, security analysis, and the operator console independently testable and maintainable.
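The layer boundaries above can be sketched in plain Python. Class and method names here are illustrative, not the project's actual API, and the Playwright call is stubbed out with a comment:

```python
class SecurityEngine:
    """Security layer: analyzes a DOM snapshot and returns (decision, reason)."""

    def evaluate(self, dom_text: str) -> tuple[str, str]:
        # Placeholder rule; the real engine runs the full rule set and scorer.
        if "ignore previous instructions" in dom_text.lower():
            return "BLOCK", "prompt-injection phrase found in page text"
        return "ALLOW", "no rules triggered"


class Agent:
    """Agent layer: every browser action is routed through the engine first."""

    def __init__(self, engine: SecurityEngine):
        self.engine = engine

    def click(self, dom_text: str, selector: str) -> str:
        decision, reason = self.engine.evaluate(dom_text)
        if decision == "BLOCK":
            return f"blocked {selector}: {reason}"
        # In the real system a Playwright page.click(selector) would run here.
        return f"clicked {selector}"
```

Because the agent only talks to the engine through `evaluate`, detection rules can change without touching the automation code.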


Detection Logic

AegisDOM uses deterministic, explainable rules:

  • Prompt injection detection using keyword and pattern analysis
  • Hidden DOM detection using CSS visibility and structure checks
  • Phishing detection via form and credential harvesting patterns
  • Dynamic content monitoring using MutationObserver

Rule-based logic is preferred over black-box ML to ensure transparency and reliability.
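The first two rule families can be sketched as follows. The patterns and CSS checks below are illustrative stand-ins; the actual AegisDOM rule set is not published in this README:

```python
import re

# Hypothetical injection phrases -- placeholders for the real keyword rules.
INJECTION_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"you are now",
    r"disregard (the |your )?system prompt",
]


def detect_prompt_injection(text: str) -> list[str]:
    """Return human-readable findings for injection-style phrases in page text."""
    findings = []
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            findings.append(f"prompt-injection phrase matched: /{pattern}/")
    return findings


def detect_hidden_element(style: dict) -> list[str]:
    """Flag computed-CSS properties commonly used to hide text from humans."""
    findings = []
    if style.get("display") == "none" or style.get("visibility") == "hidden":
        findings.append("element hidden via display/visibility")
    if float(style.get("opacity", 1)) == 0:
        findings.append("element hidden via zero opacity")
    return findings
```

Each finding carries the reason string that triggered it, which is what makes the final decision explainable.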


Risk Scoring & Decisions

Each page interaction is assigned a numeric risk score:

  • 0–19 → ALLOW
  • 20–59 → WARN (requires confirmation)
  • 60–100 → BLOCK

The system provides human-readable reasons for every decision.
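The thresholds above map directly to a small deterministic function. The rule weights shown are hypothetical examples, since the README does not document the real ones:

```python
# Hypothetical per-rule weights; the actual AegisDOM weights are not documented here.
RULE_WEIGHTS = {
    "prompt_injection": 40,
    "hidden_dom": 30,
    "phishing_form": 50,
}


def score_findings(findings: list[str]) -> int:
    """Sum the weights of triggered rules, capped at 100 (fully deterministic)."""
    return min(100, sum(RULE_WEIGHTS.get(name, 0) for name in findings))


def decide(score: int) -> str:
    """Map a 0-100 risk score to a decision using the README's thresholds."""
    if not 0 <= score <= 100:
        raise ValueError("risk score must be in [0, 100]")
    if score <= 19:
        return "ALLOW"
    if score <= 59:
        return "WARN"
    return "BLOCK"
```

Because the same findings always produce the same score, identical pages yield identical decisions, which rule-based scoring guarantees and a stochastic model would not.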


Secure Action Mediation

Before any agent action:

  • ALLOW → Action proceeds
  • WARN → User confirmation required
  • BLOCK → Action prevented

This ensures safe execution without degrading usability.
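A minimal sketch of that mediation step, with the human-in-the-loop prompt modeled as a `confirm` callback (the console UI would supply the real one):

```python
from typing import Callable


def mediate(decision: str, reason: str,
            action: Callable[[], None],
            confirm: Callable[[str], bool]) -> str:
    """Execute, gate, or block an agent action based on the security decision.

    `confirm` stands in for the security console's confirmation dialog: it
    receives the human-readable reason and returns True to proceed.
    """
    if decision == "ALLOW":
        action()
        return "executed"
    if decision == "WARN":
        if confirm(reason):
            action()
            return "executed-after-confirmation"
        return "declined"
    # BLOCK (and anything unrecognized) never runs the action.
    return "blocked"
```

ALLOW actions run with no added friction, so the confirmation cost is paid only on the risky minority of interactions.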


How to Run (One Click)

  1. Ensure Python is installed and added to PATH
  2. Double-click:
  3. Browser opens automatically at:

Demo Scenarios

  • Hidden DOM attack
  • Phishing login page
  • Prompt injection attempt

Limitations

  • Prototype-level system
  • Rule-based detection may miss novel attacks
  • No real LLM execution (simulated intent reasoning)

Future Work

  • LLM-assisted intent reasoning
  • Adaptive rule learning
  • Audit logs and policy customization
  • Browser extension integration

Team Cyber Mavericks
