Agentic AI in Cross-Border Payments: Opportunities and Challenges for Compliance Professionals

A Report by CYS Global Remit Legal & Compliance Office


Part 3: Challenges and Risks of Agentic AI in Compliance


Introduction 

Agentic AI promises efficiency and agility in compliance operations, but autonomy introduces new layers of complexity. For compliance professionals in the cross-border payments industry, these challenges are not theoretical—they are practical concerns that intersect with regulatory requirements, supervisory expectations, and global data privacy obligations. Understanding these risks is critical before deploying Agentic AI at scale.


1. Accountability: Who Owns the Decision? 

One of the most pressing questions is who bears responsibility for AI-driven decisions.


  • Regulators expect financial institutions to maintain clear accountability frameworks, even when decisions are automated.

  • If an AI agent approves a transaction later linked to money laundering, regulators will not accept “the AI did it” as an excuse.

  • Compliance teams must define human-in-the-loop checkpoints for high-risk decisions and document escalation protocols.


Example: A cross-border payment flagged as low-risk by Agentic AI but later found to involve a sanctioned entity could expose the institution to regulatory penalties unless accountability is clearly assigned.
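The human-in-the-loop checkpoints described above can be sketched as a simple routing rule. This is an illustrative sketch only: the `route_decision` function, the risk-score threshold, and the field names are hypothetical assumptions, not a reference implementation of any institution's controls.

```python
# Illustrative sketch: routing AI-driven payment decisions to a human analyst
# above a risk threshold. All names and thresholds are hypothetical.
from dataclasses import dataclass

HUMAN_REVIEW_THRESHOLD = 0.7  # hypothetical risk-score cut-off


@dataclass
class PaymentDecision:
    payment_id: str
    risk_score: float    # 0.0 (low) to 1.0 (high), as scored by the AI agent
    sanctions_hit: bool  # any sanctions-screening match


def route_decision(d: PaymentDecision) -> str:
    """Return who owns the decision: the AI agent or a human analyst."""
    # Sanctions matches and high-risk scores always escalate to a named human,
    # preserving a clear accountability trail for regulators.
    if d.sanctions_hit or d.risk_score >= HUMAN_REVIEW_THRESHOLD:
        return "escalate_to_analyst"
    return "auto_approve"


print(route_decision(PaymentDecision("P-001", 0.2, False)))  # auto_approve
print(route_decision(PaymentDecision("P-002", 0.9, False)))  # escalate_to_analyst
```

The design point is that escalation criteria are explicit and documented in code, so that when a regulator asks who approved a payment, the answer is always either a logged automated rule or a named analyst.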


2. Explainability: The Black Box Problem 

Agentic AI often relies on complex algorithms that are difficult to interpret.


  • Regulators require auditability and transparency in compliance processes.

  • Black-box models hinder investigations and regulatory reviews.

  • Institutions must adopt Explainable AI (XAI) techniques, ensuring every decision can be traced and justified.


Example: A financial institution may implement decision logs that capture the AI’s reasoning, the thresholds applied, and the data sources used. This is essential for regulatory inspections and internal audits.
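A minimal sketch of such a decision-log entry is below. The field names and values are assumptions chosen for illustration; a production system would write these records to immutable, access-controlled audit storage rather than returning a string.

```python
# Illustrative sketch: a structured decision-log entry capturing the AI's
# reasoning, thresholds applied, and data sources consulted.
# Field names are hypothetical assumptions for illustration.
import json
from datetime import datetime, timezone


def log_decision(payment_id, outcome, risk_score, threshold, reasons, data_sources):
    """Serialize one AI decision as a timestamped, auditable JSON record."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payment_id": payment_id,
        "outcome": outcome,                # e.g. "cleared" or "flagged"
        "risk_score": risk_score,
        "threshold_applied": threshold,
        "reasons": reasons,                # human-readable rationale from the model
        "data_sources": data_sources,      # e.g. sanctions lists, KYC records used
    }
    return json.dumps(entry)  # in practice: append to immutable audit storage


record = log_decision(
    "P-003", "flagged", 0.82, 0.7,
    ["name similarity to watchlist entry"],
    ["sanctions_list", "kyc_profile"],
)
```

Because every record states which thresholds and data sources produced the outcome, an inspector can reconstruct the decision without access to the model internals.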


3. Bias and Ethical Risks

AI models learn from historical data, which may contain biases.


  • Screening algorithms could disproportionately flag certain nationalities or regions, leading to discriminatory outcomes.

  • Regulators emphasize fairness and ethical AI principles in their technology risk management guidelines.

  • Compliance teams must implement bias detection and mitigation protocols during model training and deployment.


Example: If an AI system over-flags transactions from emerging markets due to historical patterns, it could harm legitimate businesses and trigger reputational risks.
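One simple bias-detection protocol is to compare flag rates across groups and alert when any group's rate far exceeds the overall average. The sketch below assumes this approach; the 1.5x tolerance and the group labels are hypothetical choices, not a regulatory standard.

```python
# Illustrative sketch: detecting disparate flag rates across groups.
# The 1.5x tolerance and group labels are hypothetical assumptions.
from collections import Counter


def flag_rate_by_group(decisions):
    """decisions: iterable of (group, was_flagged) pairs -> {group: flag rate}."""
    totals, flagged = Counter(), Counter()
    for group, was_flagged in decisions:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / totals[g] for g in totals}


def disparity_alert(rates, tolerance=1.5):
    """Return groups whose flag rate exceeds `tolerance` times the mean rate."""
    mean_rate = sum(rates.values()) / len(rates)
    return [g for g, rate in rates.items() if rate > tolerance * mean_rate]


# Toy sample: group "A" is flagged on every transaction, "B" on one in four.
sample = [("A", True), ("A", True), ("A", True),
          ("B", False), ("B", False), ("B", False), ("B", True)]
rates = flag_rate_by_group(sample)   # {"A": 1.0, "B": 0.25}
alerts = disparity_alert(rates)      # ["A"]
```

Running this check during both model training and live operation gives compliance teams an early, quantifiable signal of the over-flagging pattern described above, which can then trigger a manual review of the model's features.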


4. Cybersecurity and Data Privacy

Agentic AI thrives on data—but cross-border data flows raise significant privacy concerns.


  • Singapore’s PDPA, EU’s GDPR, and other regional laws impose strict requirements on data handling.

  • Risks include data breaches, model poisoning attacks, and unauthorized access to sensitive compliance data.


Strategies to consider:


  • Encrypt data in transit and at rest.

  • Implement continuous monitoring for AI-driven systems.

  • Conduct regular penetration tests and compliance audits.


5. Regulatory Uncertainty

Global regulators are still defining standards for AI governance.


  • MAS has issued guidance on responsible AI, but detailed compliance frameworks are evolving.

  • FATF is exploring AI’s role in AML/CFT but has not finalized global norms.

  • Institutions must stay agile, anticipating future regulatory shifts and building adaptable governance models.


Conclusion

Agentic AI introduces governance complexities that demand proactive risk management. Compliance professionals must balance innovation with accountability, transparency, and ethical safeguards. In Part 4, we’ll explore how to build robust governance frameworks and align with global regulatory expectations.