Digital Services Act Compliance: Building Content Moderation Infrastructure That Scales
The EU Digital Services Act (DSA) has fundamentally changed how online platforms must handle content moderation, user safety, and transparency reporting. After building Trust & Safety operations at Google, Amazon, and TikTok LATAM from scratch, I've seen what actually works when implementing DSA requirements at scale.
If you're operating a platform with EU users, here's what you need to know about DSA compliance in 2026.
What Is the Digital Services Act?
The DSA (Regulation (EU) 2022/2065) establishes harmonized rules for digital services across the EU, with tiered obligations based on platform size and risk:
- All platforms: Basic due diligence obligations
- Hosting services: Notice-and-action mechanisms, content moderation
- Online platforms: Additional transparency and user protection requirements
- Very Large Online Platforms (VLOPs): Platforms with 45M+ average monthly active EU users face the strictest requirements, including risk assessments and external audits
Key dates:
- August 25, 2023: Obligations became enforceable for the first designated VLOPs and VLOSEs
- February 17, 2024: The DSA became fully applicable to all other platforms
Core DSA Content Moderation Requirements
1. Notice-and-Action Mechanism (Article 16)
You must provide an easy-to-access mechanism for users to notify you of illegal content.
Requirements:
- Electronic submission form
- Available in all official EU languages where you operate
- Clear enough for individuals to use without legal expertise
- Confirmation of receipt to notifiers
Notice must include:
- Sufficiently substantiated explanation of why content is illegal
- Clear indication of exact content location (URL, timestamp, etc.)
- Notifier's contact information
- Statement of good faith belief
Your obligations:
- Process notices "without undue delay"
- Decide whether content is illegal under applicable law
- Remove or disable access if illegal
- Inform both the notifier and content provider of your decision
2. Statement of Reasons (Article 17)
When you remove content or restrict accounts, you must provide a clear statement of reasons.
Must include:
- Whether decision was taken based on notice, own-initiative detection, or automated means
- Facts and circumstances relied on (which law/terms of service violated)
- Information about redress mechanisms available
- Clear and user-friendly language
Exemptions (narrow):
- Deceptive high-volume commercial content (spam)
- Cases where you have no electronic contact details for the content provider
3. Internal Complaint-Handling System (Article 20)
Platforms must provide a free, easily accessible system for users to complain about moderation decisions.
Requirements:
- Process complaints in timely, non-discriminatory, diligent manner
- Reverse decisions when complaint is justified
- Inform complainant of decision and reasoning
- Ensure human oversight of automated decisions
Timeline: "Without undue delay" typically means 24-48 hours for initial acknowledgment, decision within reasonable timeframe (usually 5-10 business days)
4. Out-of-Court Dispute Settlement (Article 21)
Users must be able to select certified out-of-court dispute settlement bodies for unresolved complaints.
Your obligations:
- Engage in good faith with certified bodies
- Provide necessary information for dispute resolution
- Bear costs of dispute settlement
- Comply with binding decisions from certified bodies
5. Trusted Flaggers (Article 22)
You must give priority processing to notices from "trusted flaggers" - entities with particular expertise in detecting illegal content.
Requirements:
- Establish system to recognize trusted flagger status
- Process their notices with priority
- Provide direct communication channels
- Track and report on trusted flagger notice accuracy
6. Suspension for Misuse (Article 23)
If a content provider frequently provides manifestly illegal content, you must suspend service to them for a reasonable period of time, after issuing a prior warning. Similar suspensions apply to notifiers and complainants who frequently submit manifestly unfounded notices or complaints.
Triggers:
- Frequency of violations
- Severity of violations
- Intentions of content provider
Obligations:
- Clear policy on suspension triggers
- Statement of reasons to suspended users
- Ability to challenge suspension
Transparency Reporting Requirements
Article 15: Transparency Reports (At Least Annually; Every 6 Months for VLOPs)
All platforms must publish transparency reports including:
Content Moderation Data:
- Number of orders received from authorities to act against illegal content
- Number of notices received (broken down by type of alleged illegal content)
- Number of complaints received through internal system
- Decisions taken (content removal, account suspension, etc.)
- Average time for taking action
- Use of automated means for content moderation
Additional for Online Platforms:
- Number of disputes submitted to out-of-court bodies and outcomes
- Number of suspensions for repeat infringers
- Use of automated means (including algorithmic systems)
Article 34: Annual Risk Assessments (VLOPs/VLOSEs only)
Very Large Online Platforms must conduct annual risk assessments covering:
- Dissemination of illegal content
- Negative effects on fundamental rights
- Intentional manipulation of service (coordinated inauthentic behavior)
- Negative effects on civic discourse and electoral processes
- Negative effects related to gender-based violence, the protection of public health and minors, and physical and mental well-being
Technical Implementation: What Actually Works
After implementing these systems at scale, here's what I've learned:
Notice-and-Action Infrastructure
Don't build from scratch. Use a ticketing system foundation (Zendesk, Freshdesk, etc.) and customize:
1. Intake Form
- Multi-language support (at minimum: English, German, French, Spanish, Italian, Polish)
- Guided form with dropdown illegal content categories
- URL/timestamp capture with validation
- Automated screenshots/archiving of reported content
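As a concrete illustration, here is a minimal sketch of an intake record, assuming a simple Python data model rather than any particular ticketing platform's API. Field names and validation thresholds are illustrative; the point is that every Article 16 element becomes a required, validated field.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
import uuid

class IllegalContentCategory(Enum):
    CSAM = "csam"
    TERRORISM = "terrorism"
    HATE_SPEECH = "hate_speech"
    COPYRIGHT = "copyright"
    SCAM = "scam"
    OTHER = "other"

@dataclass
class Article16Notice:
    explanation: str                  # substantiated explanation of alleged illegality
    content_location: str             # exact URL (plus timestamp where relevant)
    notifier_contact: str             # notifier's name and email address
    good_faith_statement: bool        # notifier confirms good-faith belief
    category: IllegalContentCategory
    language: str = "en"
    notice_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the notice can be queued."""
        problems = []
        if len(self.explanation.strip()) < 20:   # illustrative minimum length
            problems.append("explanation is not sufficiently substantiated")
        if not self.content_location.startswith(("http://", "https://")):
            problems.append("content location must be a resolvable URL")
        if not self.good_faith_statement:
            problems.append("good-faith statement is missing")
        return problems
```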
2. Routing Logic
- Priority queues for trusted flaggers
- Automatic escalation for CSAM/terrorism
- Language-based routing to reviewers
- Category-based routing (copyright vs. hate speech vs. scams)
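The routing rules above reduce to a small, testable function. The queue names, category strings, and priority values below are illustrative; in practice this logic would live in your ticketing platform's trigger or assignment rules.

```python
from dataclasses import dataclass

CRITICAL_CATEGORIES = {"csam", "terrorism"}

@dataclass
class RoutingDecision:
    queue: str
    priority: int  # lower number = picked up sooner

def route_notice(category: str, language: str, trusted_flagger: bool) -> RoutingDecision:
    # Immediate escalation for the most severe categories, regardless of source.
    if category in CRITICAL_CATEGORIES:
        return RoutingDecision(queue="escalation_critical", priority=0)
    # Article 22: notices from trusted flaggers are processed with priority.
    if trusted_flagger:
        return RoutingDecision(queue=f"trusted_{category}", priority=1)
    # Everything else is routed by category and reviewer language.
    return RoutingDecision(queue=f"{category}_{language}", priority=2)

# Example: a hate speech notice in German from a trusted flagger.
print(route_notice("hate_speech", "de", trusted_flagger=True))
```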
3. Decision Templates
- Pre-written statement of reasons templates
- Legal basis automatically populated based on decision type
- Multi-language templates
- Automated delivery via email + in-platform notification
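A sketch of how template-driven statement-of-reasons generation can work. The template text and placeholder names are illustrative; a real deployment would maintain one legally reviewed template per language and decision type.

```python
SOR_TEMPLATES = {
    # Per-language templates; placeholders are filled from the moderation decision record.
    "en": (
        "Your content ({content_url}) was {action} under our {policy} policy. "
        "Legal basis: {legal_basis}. Detection method: {detection}. "
        "You can appeal this decision through our internal complaint system or an "
        "out-of-court dispute settlement body certified under Article 21 DSA."
    ),
}

def build_statement_of_reasons(language: str, *, content_url: str, action: str,
                               policy: str, legal_basis: str, detection: str) -> str:
    template = SOR_TEMPLATES.get(language, SOR_TEMPLATES["en"])
    return template.format(content_url=content_url, action=action, policy=policy,
                           legal_basis=legal_basis, detection=detection)

# Example use with a hypothetical removal decision:
print(build_statement_of_reasons(
    "en",
    content_url="https://example.com/post/123",
    action="removed",
    policy="Hate Speech",
    legal_basis="Framework Decision 2008/913/JHA",
    detection="user notice reviewed by a human moderator",
))
```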
Content Moderation Workflows
Human-in-the-Loop AI:
- Automated detection flags content for human review
- Human reviewers make final decisions on removal
- AI learns from human decisions to improve accuracy
- Clear documentation when automation is used (required for statement of reasons)
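A minimal sketch of that gating logic, with an illustrative confidence threshold: classifiers may flag content, but nothing flagged is actioned without a human verdict, and the use of automation is recorded so it can be disclosed in the statement of reasons.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationDecision:
    content_id: str
    action: str                 # "remove", "restrict", or "no_action"
    automated_detection: bool   # must be disclosed in the statement of reasons
    human_reviewed: bool
    reviewer_id: Optional[str] = None

def decide(content_id: str, classifier_score: float,
           reviewer_verdict: Optional[str] = None,
           reviewer_id: Optional[str] = None) -> ModerationDecision:
    flagged = classifier_score >= 0.85   # illustrative threshold, tuned per category
    if not flagged:
        return ModerationDecision(content_id, "no_action",
                                  automated_detection=True, human_reviewed=False)
    # Flagged content is never actioned without a human verdict.
    if reviewer_verdict is None:
        raise ValueError("flagged content requires human review before any action")
    return ModerationDecision(content_id, reviewer_verdict,
                              automated_detection=True, human_reviewed=True,
                              reviewer_id=reviewer_id)
```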
Quality Assurance:
- 10% sample review of all moderation decisions
- 100% review of automated removal decisions during the first 90 days
- Weekly calibration sessions with moderators
- Monthly accuracy audits with external reviewers
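Sampling for QA can be as simple as the function below, a sketch assuming a placeholder automation launch date: it selects every automated removal during the first 90 days, plus a flat 10% random sample of everything else.

```python
import random
from datetime import datetime, timedelta, timezone

AUTOMATION_LAUNCH = datetime(2026, 1, 1, tzinfo=timezone.utc)  # illustrative launch date

def needs_qa_review(action: str, automated_detection: bool, decided_at: datetime,
                    sample_rate: float = 0.10) -> bool:
    # 100% review of automated removals during the first 90 days after launch.
    in_ramp_up = decided_at < AUTOMATION_LAUNCH + timedelta(days=90)
    if automated_detection and action == "remove" and in_ramp_up:
        return True
    # Otherwise, a flat 10% random sample of all moderation decisions.
    return random.random() < sample_rate
```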
Complaint Handling System
Structured Workflows:
1. Acknowledgment (Automated, <1 hour)
- Confirm receipt
- Provide ticket number
- Set expectations on review timeline
2. Review (Human, 24-48 hours for initial assessment)
- Re-review original content/decision
- Check if new information provided in complaint
- Escalate edge cases to legal/policy teams
3. Decision (3-7 business days target)
- Uphold original decision with additional explanation
- Reverse decision and restore content/account
- Provide information on out-of-court dispute option
4. Appeals (If user not satisfied)
- Clear information on certified dispute settlement bodies
- Facilitation of information sharing with dispute body
- Commitment to comply with binding decisions
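The four stages above map naturally onto a small state record with SLA checks. The timings mirror the operational targets in this workflow, not statutory deadlines, and the field names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum

class ComplaintStage(Enum):
    ACKNOWLEDGED = "acknowledged"
    IN_REVIEW = "in_review"
    DECIDED = "decided"
    REFERRED_TO_ODS = "referred_to_out_of_court_body"

# Cumulative targets measured from complaint submission, mirroring the workflow above.
STAGE_SLA = {
    ComplaintStage.ACKNOWLEDGED: timedelta(hours=1),
    ComplaintStage.IN_REVIEW: timedelta(hours=48),
    ComplaintStage.DECIDED: timedelta(days=7),
}

@dataclass
class Complaint:
    complaint_id: str
    original_decision_id: str
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    stage: ComplaintStage = ComplaintStage.ACKNOWLEDGED

    def is_overdue(self, now: datetime) -> bool:
        """True if the complaint has passed the cumulative target for its current stage."""
        deadline = STAGE_SLA.get(self.stage)
        return deadline is not None and now > self.opened_at + deadline
```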
Transparency Reporting Infrastructure
Data Collection Requirements:
You need automated tracking of:
- Notice volume by category, source (user vs. authority), language
- Processing time from receipt to decision
- Decision types (remove, restrict, no action)
- Complaint volume and outcomes
- Automation usage (what % of decisions involved AI)
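Assuming each moderation decision is logged as a flat event record with roughly the fields above (the key names here are illustrative), aggregating the headline report figures is a short job:

```python
from collections import Counter
from statistics import mean

def aggregate_transparency_metrics(events: list[dict]) -> dict:
    """Each event is one moderation decision logged by the pipeline."""
    notices_by_category = Counter(
        e["category"] for e in events if e["source"] == "user_notice"
    )
    decisions_by_action = Counter(e["action"] for e in events)
    handling_hours = [e["hours_to_decision"] for e in events]
    automated = sum(1 for e in events if e["automated_detection"])
    return {
        "notices_by_category": dict(notices_by_category),
        "decisions_by_action": dict(decisions_by_action),
        "average_handling_hours": round(mean(handling_hours), 1) if handling_hours else None,
        "automation_share": round(automated / len(events), 3) if events else 0.0,
    }
```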
Build dashboards, not just reports:
- Real-time metrics for operational teams
- Compliance dashboard for legal/policy teams
- Public-facing dashboard for transparency (updates every 6 months)
DSA Compliance Roadmap
Phase 1: Gap Assessment (Weeks 1-2)
- Map current moderation infrastructure
  - What systems handle user reports now?
  - Do you provide statement of reasons?
  - Is there a complaint mechanism?
- Audit transparency data
  - Can you extract required metrics from existing systems?
  - What data is missing?
  - How long does retrieval take?
- Review terms of service
  - Are prohibited content types clearly defined?
  - Are enforcement actions explained?
  - Is language clear and user-friendly?
Phase 2: System Implementation (Weeks 3-12)
- Build notice-and-action infrastructure
  - Multi-language intake forms
  - Routing and decision workflows
  - Statement of reasons templates
- Implement complaint handling
  - Internal complaint system
  - Out-of-court dispute integration
  - Decision reversal workflows
- Establish trusted flagger program
  - Application and vetting process
  - Priority routing for trusted flagger notices
  - Performance monitoring
- Set up transparency reporting
  - Data pipeline from moderation systems
  - Automated report generation
  - Public dashboard
Phase 3: Ongoing Compliance (Continuous)
- Transparency reporting (Every 6 months)
- Risk assessments (Annual, VLOPs only)
- Quality audits (Monthly)
- Policy updates based on enforcement trends
Common DSA Compliance Mistakes
1. Inadequate Statement of Reasons
Too vague: "This content violates our community guidelines."
Better: "This content was removed under our Hate Speech policy because it contains slurs targeting a protected group (ethnicity). This violates EU law prohibiting incitement to hatred (Framework Decision 2008/913/JHA)."
2. No Human Oversight of Automated Decisions
If AI auto-removes content, you still need:
- Human review of the decision when user complains
- Disclosure that automation was used
- Ability to challenge automated decisions
3. Slow Notice Processing
"Without undue delay" is context-dependent:
- CSAM: <24 hours
- Terrorist content: <24 hours (1 hour for removal orders under the Terrorist Content Regulation)
- Other illegal content: 24-72 hours typical
- Terms of service violations: Reasonable timeframe
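One way to keep these targets enforceable is a per-category SLA table that monitoring can check against. The figures below mirror the list above; the terms-of-service figure is an illustrative choice, since the DSA only asks for a reasonable timeframe, and the one-hour entry applies to removal orders under the Terrorist Content Regulation (EU) 2021/784.

```python
from datetime import timedelta

# Target handling times per category (operational targets, not statutory limits,
# except the 1-hour deadline for terrorist content removal orders).
PROCESSING_SLAS = {
    "csam": timedelta(hours=24),
    "terrorist_content_removal_order": timedelta(hours=1),
    "terrorist_content_notice": timedelta(hours=24),
    "other_illegal_content": timedelta(hours=72),
    "terms_of_service_violation": timedelta(days=5),   # illustrative "reasonable timeframe"
}

def breaches_sla(category: str, hours_open: float) -> bool:
    """True if a notice in this category has been open longer than its target."""
    limit = PROCESSING_SLAS.get(category, PROCESSING_SLAS["other_illegal_content"])
    return timedelta(hours=hours_open) > limit
```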
4. Missing Multi-Language Support
You must support languages of your user base. Minimum viable:
- English, German, French, Spanish, Italian (largest EU markets)
- Polish, Dutch, Romanian (significant user bases)
- Language of your primary market
5. No Trusted Flagger Program
Waiting for authorities to designate trusted flaggers isn't enough. Proactively:
- Identify NGOs, industry bodies with expertise
- Establish trusted flagger relationships
- Create priority processing workflows
Need Help with DSA Compliance?
Echelon Advisory provides comprehensive DSA compliance services including gap assessments, system design, implementation support, and ongoing monitoring.
Key Takeaways
- DSA creates tiered obligations based on platform size and service type
- Core requirements: notice-and-action, statement of reasons, complaint handling
- VLOPs face additional risk assessments and external audits
- Transparency reports required at least annually (every 6 months for VLOPs)
- Implementation requires both technical systems and operational processes
- Human oversight required even when using automation
DSA enforcement is active across EU member states. The platforms that build robust compliance infrastructure now avoid enforcement actions, operational disruptions, and financial penalties.
About the Author
Maneesha Pandey is the founder of Echelon Advisory Services, specializing in Trust & Safety, AI Governance, and EU regulatory compliance. She spent 14+ years building Trust & Safety operations at Amazon, Google, and TikTok, including content moderation frameworks and DSA compliance infrastructure.