The Syntax of Sovereignty
A definitive index of AI security primitives, LLM defense protocols, and cybersecurity terminology used across the Vigilnz platform.
A
4 termsAdversarial Attack
A technique where malicious inputs are used to trick a system or AI model into producing incorrect or unsafe results.
AI Alignment
The process of ensuring that an AI system’s behavior follows human values, safety rules, and intended goals.
Agent Hijacking
An attack where an attacker takes control of an AI agent by manipulating its instructions, permissions, or tool access to perform unintended or malicious actions.
API Gateway
A control layer that manages, filters, and secures API requests between users, services, and AI systems.
B
3 termsBackdoor Attack
A security attack where hidden malicious logic is inserted into a system or AI model, allowing an attacker to bypass normal controls and trigger unintended behavior.
Blue Teaming
Defensive security practice focused on protecting systems by monitoring, detecting, and responding to threats.
Bias Detection
Identifying unfair, inaccurate, or unbalanced behavior in AI outputs or training data.
C
3 termsCanary Tokens
Hidden markers placed in systems or data to detect unauthorized access by triggering an alert when they are used.
Chain-of-Thought Attacks
Attempts to manipulate or extract an AI model’s reasoning steps to reveal sensitive information or bypass safety controls.
Compliance Framework
A set of rules and standards used to ensure AI systems follow legal, security, and policy requirements.
D
3 termsDefense in Depth
A security strategy that uses multiple layers of protection to prevent, detect, and respond to attacks.
Data Poisoning
Injecting malicious or incorrect data into training datasets to make a model learn wrong or unsafe behavior.
DLP for AI
Data Loss Prevention controls that stop sensitive information from being exposed through AI inputs or outputs.
E
3 termsEvasion Attack
A technique where an attacker modifies input data to avoid detection and make the system or AI produce incorrect or unsafe results.
Embedding Injection
Manipulating vector embeddings to influence retrieval results and cause incorrect or malicious AI responses.
Explainability
The ability to understand and describe how an AI model makes decisions.
F
2 termsFine-tuning Attack
Altering the fine-tuning process to introduce hidden behaviors, bias, or backdoors into a model.
Fuzzing
Testing a system by sending random or unexpected inputs to find bugs, crashes, or security weaknesses.
Terminology Hub
Learn essential cybersecurity terminology, concepts, and real-world practices to understand threats, tools, and defense strategies.