On March 1, 2026, Vietnam quietly made history. With little fanfare outside specialist circles, the country became the first in Southeast Asia to move from voluntary AI guidelines to binding, enforceable legislation. For businesses building or deploying AI in the region, the regulatory landscape changed overnight.
As a Vietnamese-registered AI consulting firm, Cynked has been tracking this law since the National Assembly passed it in December 2025. This post breaks down what it actually says, who it applies to, what it requires, and — most importantly — what you should be doing right now.
What Is the Law, and Why Does It Matter?
Vietnam's AI law introduces a risk-tiered regulatory model broadly inspired by the European Union's AI Act. It places legal obligations on AI providers — both Vietnamese organizations and foreign entities with a presence in the country — to classify, label, and in some cases restrict the AI systems they operate.
The significance goes beyond Vietnam's borders. Analysts across the region have described this as Southeast Asia's first real test of whether governments here are ready to bind industry to rules rather than rely on voluntary best practices. How Vietnam enforces this law — and how businesses respond — will likely shape the regulatory conversation in Thailand, Indonesia, the Philippines, and Malaysia over the next two to three years.
For companies operating in or building products for the Vietnamese market, this is no longer a "wait and see" situation.
Who Does the Law Apply To?
This is the first question most clients ask, and the answer is broader than many expect.
The law covers:
- Vietnamese organizations of any size that build, deploy, or operate AI systems
- Foreign entities with a presence in Vietnam — this includes companies with Vietnamese subsidiaries, registered branches, or local business operations
- AI service providers whose systems interact with Vietnamese end users
The "foreign entity with a presence" clause is particularly important. If your company has a Vietnamese legal entity, an office, or employees on the ground, you are likely in scope regardless of where your AI systems are actually hosted or developed.
The Risk Classification System
The centerpiece of the law is its risk-tiered approach. Every AI system subject to the law must be classified into one of three categories:
Low risk — Systems where errors or misuse are unlikely to cause significant harm. Most informational tools, recommendation engines, and productivity software fall here. Requirements are relatively light: basic documentation and transparency disclosures.
Medium risk — Systems that affect decisions with moderate consequences for individuals or organizations. Customer service automation, content moderation tools, and HR screening software are likely examples. Stricter documentation, testing requirements, and disclosure obligations apply.
High risk — Systems that could cause serious harm to individuals, society, or national security. Think medical diagnostics, financial credit scoring, biometric identification, critical infrastructure, and public-facing generative AI with significant reach. These face the heaviest obligations: mandatory conformity assessments, incident reporting, human oversight requirements, and potentially pre-deployment approval.
Classification guidelines are being issued by the Ministry of Science and Technology (MOST). As of this writing, the full tiering criteria are still being finalized — which creates both uncertainty and, for proactive companies, an opportunity to engage early and help shape how the guidelines are applied.
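Even before MOST publishes the final criteria, compliance teams can start recording provisional classifications internally. Here is a minimal sketch of what such a record might look like — the tier names mirror the law's three categories, but every field and example value is our own illustration, not language from the statute:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # light documentation and transparency duties
    MEDIUM = "medium"  # stricter documentation, testing, disclosure
    HIGH = "high"      # conformity assessment, incident reporting, oversight

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    provisional_tier: RiskTier
    rationale: str  # why you chose this tier; regulators may ask later

# Example: a provisional, internally documented classification
record = AISystemRecord(
    name="support-chatbot",
    purpose="Customer-facing support automation",
    provisional_tier=RiskTier.MEDIUM,
    rationale="Affects customer outcomes; simulates human interaction",
)
```

The point is not the code itself but the habit: a written rationale attached to each tier call is exactly the kind of artifact that makes a later conversation with a regulator much easier.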
Key Obligations Under the Law
Beyond classification, the law introduces several concrete requirements:
1. AI Content Labeling
Any AI-generated content — including deepfakes, synthetic media, AI-written text, and automated social media posts — must be explicitly labeled as such. This is a significant shift for companies operating content platforms, news aggregators, or any product where AI-generated material reaches end users.
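In practice, labeling means two things: a human-visible notice and machine-readable metadata that travels with the content. A sketch of the pattern, with field names that are purely illustrative (the implementing regulations may eventually prescribe their own format):

```python
import json

def label_ai_content(text: str, model_name: str) -> dict:
    """Attach a human-visible notice and machine-readable metadata
    to AI-generated text. Field names here are illustrative, not
    taken from the law or its implementing regulations."""
    return {
        "content": text,
        "notice": "This content was generated by an AI system.",
        "metadata": {
            "ai_generated": True,
            "generator": model_name,
        },
    }

labeled = label_ai_content("Market summary ...", model_name="internal-llm-v2")
print(json.dumps(labeled, indent=2))
```

Keeping the label in structured metadata, not just display text, matters: downstream aggregators and syndication partners can then preserve it automatically.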
2. Chatbot Disclosure
If a customer-facing product uses AI to simulate human interaction, users must be informed they are talking to an AI system, not a human agent. This applies to customer support bots, virtual assistants, and any conversational interface that a reasonable user might mistake for a human.
3. Accountability and Transparency Documentation
AI providers must maintain documentation of their systems' capabilities, limitations, training data (where relevant), and intended use cases. For higher-risk systems, this documentation may need to be made available to regulators on request.
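A workable starting point is a model-card-style record per system, plus a completeness check you can run across your portfolio. The fields below track the categories the law names (capabilities, limitations, training data, intended use), but the exact field names are assumptions to be aligned with MOST's implementing regulations once published:

```python
# A minimal internal documentation record for one AI system.
# Field names are illustrative; align them with the implementing
# regulations once MOST publishes them.
system_doc = {
    "system": "credit-scoring-v3",
    "capabilities": "Estimates default probability from application data",
    "limitations": [
        "Not validated for applicants under 21",
        "Performance degrades on thin-file applicants",
    ],
    "training_data": "Internal loan book, 2019-2024 (anonymized)",
    "intended_use": "Decision support for loan officers, not sole decider",
    "human_oversight": True,
}

def is_complete(doc: dict, required=("system", "capabilities",
                                     "limitations", "intended_use")) -> bool:
    """Check that a record covers the categories the law calls out."""
    return all(doc.get(k) for k in required)
```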
4. Incident Reporting
For medium- and high-risk systems, the law introduces obligations to report significant AI-related incidents to the relevant authorities. The specific thresholds and reporting timelines are still being defined in implementing regulations.
What This Means for Different Types of Businesses
Software Development and AI Consulting Firms
If you build AI-powered products for clients operating in Vietnam, the risk classification of what you build is now a legal matter — not just a design decision. Contracts with Vietnamese clients should address which party bears responsibility for classification, documentation, and ongoing compliance.
E-commerce and Platform Companies
AI-driven recommendation engines, dynamic pricing, fraud detection, and customer support automation are all potentially in scope. Most e-commerce AI will likely fall into the low-to-medium risk category, but disclosure requirements around chatbots and AI-generated content apply immediately.
Financial Services and Fintech
Credit scoring, loan origination, fraud detection, and customer risk assessment are precisely the types of high-stakes decision systems the law is designed to regulate. Vietnamese fintech companies and foreign banks with local operations should treat this as a high-priority compliance issue.
Media, Publishing, and Content Platforms
The AI content labeling requirement is most immediately relevant here. If your platform publishes or distributes AI-generated articles, images, or video, you need a labeling strategy in place now.
Foreign Companies With Vietnamese Employees or Offices
Even if your core AI systems are built and hosted abroad, if you have a Vietnamese legal presence, the law likely applies to how those systems interact with Vietnamese users and employees. Get a legal opinion on your specific structure.
The Compliance Window Is Open — But Won't Stay Open
One of the most important things to understand about this law right now is that it is in its early implementation phase. The risk classification guidelines from MOST are not yet fully published. Enforcement infrastructure is being built. Regulatory capacity is still developing.
This creates a window — probably 12 to 18 months — where proactive companies can get ahead of compliance at relatively low cost. The pattern from the EU AI Act is instructive: companies that engaged early with the framework, built documentation habits, and established internal compliance processes spent a fraction of what late movers paid to retrofit their systems under enforcement pressure.
The steps companies should be taking right now are:
- Inventory your AI systems — Map every AI tool, service, or automated decision system your Vietnamese operation uses or provides to others.
- Conduct a preliminary risk assessment — Using the law's tiering logic, make an early determination of where each system likely falls.
- Audit your content and disclosure practices — Are AI-generated outputs labeled? Are users informed when they're interacting with bots?
- Review contracts and vendor relationships — If you use third-party AI services (cloud AI APIs, SaaS products with embedded AI), understand what compliance obligations your vendor assumes and what falls to you.
- Engage legal counsel with Vietnamese tech law expertise — The implementing regulations will matter enormously, and having counsel who can track them in real time is worth the investment.
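The first two steps above — inventorying your systems and making preliminary tier calls — can start as something as simple as a structured register. A hypothetical sketch (column names and example systems are ours, not the regulator's):

```python
# One row per AI system your Vietnamese operation uses or provides:
# (system, owning_team, vendor_or_inhouse, provisional_tier, user_facing)
INVENTORY = [
    ("recommendation-engine", "growth", "in-house", "low", True),
    ("support-chatbot", "cx", "vendor", "medium", True),
    ("credit-scoring-v3", "risk", "in-house", "high", False),
]

def systems_needing_disclosure(inventory):
    """User-facing systems trigger labeling and chatbot disclosure first."""
    return [row[0] for row in inventory if row[4]]

def high_priority(inventory):
    """Medium- and high-tier systems face the heavier obligations,
    including incident reporting."""
    return [row[0] for row in inventory if row[3] in ("medium", "high")]
```

A spreadsheet works just as well at this stage; what matters is that the register exists, has an owner, and is kept current as the implementing regulations land.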
The Bigger Picture: Southeast Asia's Regulatory Trajectory
Vietnam's law is not happening in isolation. Across the region, governments are moving from frameworks and guidelines toward harder regulatory tools:
- Singapore has been advancing its Model AI Governance Framework and is increasingly signaling a shift toward binding rules for higher-risk domains
- Indonesia adopted a national AI strategy in 2023 and has been developing sector-specific AI regulations
- Thailand and the Philippines both have AI policy development underway at the ministry level
- Malaysia is integrating AI governance into its broader digital economy agenda
Vietnam's legislation — and crucially, how it is implemented over the next 18 months — will serve as a reference point for all of these conversations. Countries that observe effective, business-friendly enforcement in Vietnam may move faster to follow. Countries that see enforcement chaos may slow down. Either way, the regional direction of travel is clear.
Companies building in Southeast Asia today are building into a regulatory environment that will look meaningfully different in three years. The time to develop internal compliance capabilities and governance habits is before that environment arrives.
How Cynked Can Help
At Cynked, we're a Vietnamese-registered AI consulting and software development firm. We build AI-powered products and systems for clients across Southeast Asia and beyond, and we are directly subject to the same law we've described in this post.
That gives us a practical perspective that pure policy analysts don't have. We're not just reading the regulations — we're working through what they mean for real product decisions, architecture choices, and client relationships.
We offer:
- AI compliance assessments — Inventory and risk-classify your current AI systems against the Vietnamese law's framework
- Documentation and governance support — Build the internal records and processes that compliance requires
- Product development with compliance built in — For companies building new AI systems, we can architect for regulatory requirements from the start rather than bolting them on later
- Advisory for foreign companies entering Vietnam — Understand your exposure before you have a problem
If you're operating in Vietnam and want to understand what this law means for your specific situation, reach out. The conversation is free, and right now, early mover advantage is real.
Related reading: For a deeper look at the ethical questions shaping AI regulation worldwide, see FreeAcademy's guide on The Ethics of Artificial Intelligence.
Cynked is an AI consulting and software development firm registered in Vietnam. We help businesses across Southeast Asia and internationally build, deploy, and scale AI-powered products. Learn more at cynked.ai.