Introducing something new! Explore our latest update designed to make things better for you.

The Syntax of Sovereignty

A definitive index of AI security primitives, LLM defense protocols, and cybersecurity terminology used across the Vigilnz platform.

A

4 terms
AI SECURITY

Adversarial Attack

A technique where malicious inputs are used to trick a system or AI model into producing incorrect or unsafe results.

AI SECURITY

AI Alignment

The process of ensuring that an AI system’s behavior follows human values, safety rules, and intended goals.

AI SECURITY

Agent Hijacking

An attack where an attacker takes control of an AI agent by manipulating its instructions, permissions, or tool access to perform unintended or malicious actions.

AI GATEWAY

API Gateway

A control layer that manages, filters, and secures API requests between users, services, and AI systems.

B

3 terms
AI SECURITY

Backdoor Attack

A security attack where hidden malicious logic is inserted into a system or AI model, allowing an attacker to bypass normal controls and trigger unintended behavior.

RED TEAMING

Blue Teaming

Defensive security practice focused on protecting systems by monitoring, detecting, and responding to threats.

COMPLIANCE

Bias Detection

Identifying unfair, inaccurate, or unbalanced behavior in AI outputs or training data.

C

3 terms
AGENTIC SECURITY

Canary Tokens

Hidden markers placed in systems or data to detect unauthorized access by triggering an alert when they are used.

AGENTIC SECURITY

Chain-of-Thought Attacks

Attempts to manipulate or extract an AI model’s reasoning steps to reveal sensitive information or bypass safety controls.

COMPLIANCE

Compliance Framework

A set of rules and standards used to ensure AI systems follow legal, security, and policy requirements.

D

3 terms
AI SECURITY

Defense in Depth

A security strategy that uses multiple layers of protection to prevent, detect, and respond to attacks.

MODEL SECURITY

Data Poisoning

Injecting malicious or incorrect data into training datasets to make a model learn wrong or unsafe behavior.

AI GATEWAY

DLP for AI

Data Loss Prevention controls that stop sensitive information from being exposed through AI inputs or outputs.

E

3 terms
AI SECURITY

Evasion Attack

A technique where an attacker modifies input data to avoid detection and make the system or AI produce incorrect or unsafe results.

MODEL SECURITY

Embedding Injection

Manipulating vector embeddings to influence retrieval results and cause incorrect or malicious AI responses.

COMPLIANCE

Explainability

The ability to understand and describe how an AI model makes decisions.

F

2 terms
MODEL SECURITY

Fine-tuning Attack

Altering the fine-tuning process to introduce hidden behaviors, bias, or backdoors into a model.

RED TEAMING

Fuzzing

Testing a system by sending random or unexpected inputs to find bugs, crashes, or security weaknesses.

Terminology Hub

Learn essential cybersecurity terminology, concepts, and real-world practices to understand threats, tools, and defense strategies.