AI and Cyber Security
by Dhanya Menon | Dec 9, 2024 | Cyber Security
AI and cyber security are deeply interconnected: AI systems depend on data to learn, and that same data, along with the models trained on it, becomes an asset that must itself be secured.
Challenges and Limitations of AI Systems
- Data Quality: AI requires high-quality, relevant data to learn and improve.
- Model Drift: AI models can become outdated as production data diverges from the data they were trained on, reducing effectiveness.
- Explainability: AI decisions may be difficult to interpret, making it challenging to understand detection logic.
- Adversarial Attacks: Attackers can craft inputs specifically designed to fool AI models.
- Bias and Fairness: AI systems can perpetuate existing biases if not designed with fairness in mind.
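Model drift in particular lends itself to a simple automated check: compare a feature's live distribution against its training distribution and flag large shifts for retraining. Below is a minimal sketch in plain Python; the `drift_score` helper and the 3.0 threshold are illustrative assumptions, not a standard method.

```python
import statistics

def drift_score(train_values, live_values):
    """Standardized gap between training-time and live feature means.
    A large score suggests the model is seeing data it was not trained on."""
    mu_train = statistics.mean(train_values)
    sd_train = statistics.stdev(train_values)
    mu_live = statistics.mean(live_values)
    return abs(mu_live - mu_train) / sd_train if sd_train else float("inf")

# Stable feature: live traffic still looks like training traffic.
train = [10.0, 11.0, 9.5, 10.5, 10.2]
live_ok = [10.1, 10.4, 9.8, 10.6]
# Drifted feature: live traffic has shifted well outside the training range.
live_drifted = [25.0, 27.5, 26.2, 24.8]

print(drift_score(train, live_ok) < 1.0)       # within normal variation
print(drift_score(train, live_drifted) > 3.0)  # flag: consider retraining
```

In practice a distribution-level test (e.g. a Kolmogorov–Smirnov test or Population Stability Index) is more robust than comparing means, but the monitoring loop is the same: measure, threshold, retrain.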
Cybersecurity Threats to AI Systems
- Data Poisoning: Manipulating training data to compromise AI model integrity.
- Model Inversion: Reverse-engineering AI models to extract sensitive information.
- Adversarial Attacks: Crafting inputs to mislead or deceive AI systems.
- AI Model Theft: Stealing or exploiting AI models for malicious purposes.
- AI System Compromise: Exploiting vulnerabilities in AI systems to gain unauthorized access.
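Data poisoning is easiest to see on a toy model. The sketch below trains a nearest-centroid classifier on a one-dimensional "threat score"; when an attacker injects high-scoring samples mislabeled as benign, the benign centroid shifts and a genuinely suspicious input slips through. The classifier, feature values, and labels are all illustrative assumptions.

```python
import statistics

def centroid_classifier(samples):
    """Train a toy 1-D nearest-centroid detector: one centroid per label."""
    by_label = {}
    for value, label in samples:
        by_label.setdefault(label, []).append(value)
    return {label: statistics.mean(vals) for label, vals in by_label.items()}

def predict(centroids, value):
    """Assign the label whose centroid is closest to the input value."""
    return min(centroids, key=lambda label: abs(value - centroids[label]))

# Clean training data: benign traffic scores low, malicious traffic high.
clean = [(0.1, "benign"), (0.2, "benign"), (0.3, "benign"),
         (0.8, "malicious"), (0.9, "malicious"), (1.0, "malicious")]

# Poisoned copy: the attacker injects high-scoring points mislabeled
# "benign", dragging the benign centroid toward malicious territory.
poisoned = clean + [(0.8, "benign"), (0.9, "benign"), (1.0, "benign")]

print(predict(centroid_classifier(clean), 0.7))     # malicious: caught
print(predict(centroid_classifier(poisoned), 0.7))  # benign: missed
```

The defense implied by this picture is data provenance: validating and auditing where training samples come from before they ever reach the model.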
Types of Attacks
- Evasion Attacks: Manipulating inputs to evade detection or classification.
- Poisoning Attacks: Contaminating training data to compromise model integrity.
- Replay Attacks: Reusing previously recorded data to deceive AI systems.
- Impersonation Attacks: Mimicking legitimate users or systems to gain unauthorized access.
- Data Manipulation Attacks: Altering data to influence AI decision-making.
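An evasion attack can be sketched against an assumed linear detector: the attacker does not touch the model at all, only nudges one input feature just under the decision boundary. The feature names, weights, and threshold below are illustrative assumptions.

```python
def detector(features, weights, threshold=0.5):
    """Toy linear malware detector: flag when the weighted score
    meets or exceeds the threshold."""
    score = sum(w * x for w, x in zip(weights, features))
    return score >= threshold

weights = [0.6, 0.4]   # assumed weights for [entropy, suspicious-API ratio]
sample = [0.7, 0.5]    # a genuinely malicious sample
print(detector(sample, weights))   # True: detected (score 0.62)

# Evasion: the attacker pads the file to lower its entropy feature just
# enough to slip under the boundary, without changing the payload's behavior.
evasive = [0.4, 0.5]
print(detector(evasive, weights))  # False: evaded (score 0.44)
```

Real evasion techniques (gradient-based perturbations, feature-space mimicry) are more sophisticated, but the principle is the same: small, behavior-preserving input changes that cross a model's decision boundary.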
Regulations and Guidelines
- GDPR: EU’s General Data Protection Regulation.
- Ethics Guidelines for Trustworthy AI: the EU's guidelines for trustworthy AI.
- Fairness, Accountability, and Transparency (FAT): Principles for AI development.
- IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: AI ethics standards.
- ACM Conference on Fairness, Accountability, and Transparency (FAccT): conference series on responsible AI.