Can AI Use Artistic Works Without Permission? Fair Use vs. Illegal Use in Indonesia and International Law
1. Introduction: Can AI Legally Use Artistic Works Without Permission?
Generative AI systems such as Midjourney, DALL·E, and Stable Diffusion are trained using massive datasets containing millions of artworks scraped from the internet. Most of these works are used without any license, authorization, or attribution, creating a major legal controversy worldwide.
The core question is:
Can AI legally use copyrighted artworks without permission?
In Indonesia, the answer strongly leans toward no — and international law presents a similarly complex picture.
2. Why AI Uses Artistic Works Without Permission
AI developers rely on large-scale web scraping, collecting images from public websites, social media, online galleries, and open datasets like LAION-5B.
My thesis highlights:
“Open-source datasets likely contain copyrighted artistic works taken without consent.”
This practice accelerates AI development but also exposes developers to significant legal risks.
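To make the practice concrete, here is a minimal sketch of the collection step (hypothetical HTML, Python standard library only): the scraper harvests every image URL it finds, with no field for license, consent, or attribution at all.

```python
from html.parser import HTMLParser

class ImageCollector(HTMLParser):
    """Collect every <img src> on a page. Note that nothing here
    checks licenses, consent, or attribution — the harvest is blind."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.images.append(src)

# Hypothetical gallery page; a real crawler would fetch this over HTTP.
page = """
<html><body>
  <img src="/art/portrait-001.jpg" alt="oil portrait">
  <img src="/art/landscape-002.png" alt="digital landscape">
</body></html>
"""

collector = ImageCollector()
collector.feed(page)
print(collector.images)  # every image is harvested, license unknown
```

At dataset scale this same blindness is what the thesis points to: the pipeline records *where* an image is, never *whether* it may be used.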
3. Is This Fair Use?
Fair Use Exists in the United States, Not in Indonesia
🇮🇩 Indonesia — Fair Use Does Not Apply
Indonesia does not use the U.S. fair use doctrine.
Indonesia adopts a narrower concept called “Fair Dealing”, which only permits use for:
- education
- research
- library use
- non-commercial citation
- news reporting (limited)
AI training does not fit any of these categories.
Indonesian Copyright Law Violations:
- Economic Rights (Art. 8, 9 UUHC)
- Moral Rights (Art. 5, 7 UUHC)
- Commercial Use → Criminal Sanctions (Art. 113(3) UUHC)
- AI Is Not a Legal Person → Developer Is Liable
My thesis clearly states:
“AI cannot bear legal responsibility; liability rests fully on the developer.”
4. What About International Law?
Here is how global legal systems treat AI training using unlicensed artworks.
A. United States — Fair Use (but uncertain)
The U.S. is the only major jurisdiction where AI companies argue that using copyrighted works for training may be fair use.
Under U.S. Copyright Act §107, courts analyze:
- Purpose and character of the use (is it transformative?)
- Nature of the copyrighted work
- Amount and substantiality of the portion used
- Effect on the potential market for or value of the work
🔍 AI Problems Under U.S. Fair Use:
- AI uses entire artworks, not excerpts
- AI outputs may replace original artists’ markets
- Training is done for commercial models
- The transformative argument is weak, because AI reproduces style patterns
Ongoing lawsuits (e.g., Sarah Andersen v. Stability AI) show courts remain deeply divided.
B. European Union — AI Training Restrictions
Directive (EU) 2019/790 (the DSM Directive) permits Text and Data Mining (TDM), but only under strict conditions.
Two key regimes:
- TDM for scientific research (Art. 3) → permitted, and rightsholders cannot opt out
- TDM for commercial AI models (Art. 4) → ❗ permitted only if rightsholders did NOT opt out
Meaning:
Artists can legally opt out and block commercial AI training on their works.
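In practice, the opt-out must be expressed in machine-readable form. One common mechanism is a robots.txt rule blocking known AI training crawlers (GPTBot and CCBot are real crawler user-agents; the site is hypothetical):

```
# robots.txt — reserve rights against AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Dedicated protocols for expressing TDM reservations, such as the W3C community-developed TDM Reservation Protocol, pursue the same goal in a more granular way.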
C. EU AI Act — Additional Layer
The 2024 EU AI Act adds new obligations:
- AI developers must disclose training data sources
- developers must ensure copyright-respecting dataset collection
- high-risk or general-purpose AI (GPAI) providers must maintain technical documentation
This makes Europe the strictest region for AI dataset legality.
D. United Kingdom — Fair Dealing
Similar to Indonesia, but slightly broader.
AI training is not automatically exempt, and licensing is recommended.
A UK Intellectual Property Office (IPO) proposal to permit free commercial AI training was shelved after opposition from the creative industries.
E. Berne Convention (International Copyright Law)
Indonesia, the US, the UK, and all EU member states are parties to the Berne Convention.
Berne principles relevant to AI:
1. “No Formalities”
Creators automatically own rights; AI developers must obtain permission.
2. “National Treatment”
Indonesian artists are protected abroad and vice versa.
3. Right of Reproduction
AI training may constitute a reproduction because it requires copying the entire work.
Thus:
Using works in AI training without permission violates Berne Convention obligations.
5. Why AI Training Is NOT Legal Under Indonesian and International Standards
Across most jurisdictions:
- AI training requires copying
- copying requires permission
- permission requires licensing
- licensing requires compensation
AI developers cannot rely on “the model doesn’t store the image” as a defense.
The process still involves:
✔ copying
✔ transforming
✔ indexing
✔ extracting patterns
✔ commercial exploitation
Thus legally:
AI training is still a derivative reproduction of copyrighted works.
6. Who Is Legally Responsible?
❌ Not the AI
AI is not a legal subject anywhere in the world.
❌ Not the user (unless intentionally infringing)
✅ The legally responsible party is:
- the AI developer
- the platform provider
- the company deploying the model
- the dataset creators (if any)
My thesis reinforces this:
“Liability is inherent to the AI developer as the entity controlling technical and commercial processes.”
7. What Is the Solution? (Global Best Practices)
1) Licensing Systems for Training Data
As adopted by Shutterstock–OpenAI partnership.
2) Opt-out Registry for Artists
EU already supports this mechanism.
3) Compensation Models (Royalty Systems)
Artists get paid when their works are included in datasets.
4) Transparent Dataset Documentation
Now required under EU AI Act.
5) Ethical Dataset Creation
Only using:
- CC0 images
- licensed artworks
- consent-based data
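As a minimal sketch of what license-aware dataset filtering could look like (the record format and the `license` labels are assumptions for illustration, not a real dataset schema):

```python
# Keep only records whose license explicitly permits training use.
# Record format and license labels are hypothetical.
ALLOWED_LICENSES = {"cc0", "licensed", "consent"}

def filter_trainable(records):
    """Return only records explicitly cleared for AI training.
    Missing license metadata means exclusion, not inclusion."""
    return [
        r for r in records
        if r.get("license", "").lower() in ALLOWED_LICENSES
    ]

records = [
    {"url": "https://example.com/a.png", "license": "CC0"},
    {"url": "https://example.com/b.png", "license": "all-rights-reserved"},
    {"url": "https://example.com/c.png"},  # no license metadata: excluded
]

print(len(filter_trainable(records)))  # only the CC0 image survives
```

The design choice worth noting is the default: records without license metadata are dropped, mirroring the legal principle that permission must be affirmative, not presumed.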
8. Conclusion
Can AI use artworks without permission?
Indonesia:
❌ No — violates economic & moral rights; can trigger criminal sanctions.
United States:
⚠️ Maybe — fair use is heavily contested and unresolved.
European Union:
❌ Generally no — commercial training is barred wherever rightsholders opt out, and strict compliance obligations apply.
International Law (Berne Convention):
❌ No — reproduction without permission violates international obligations.
Across jurisdictions, the trend is clear:
AI developers must obtain permission, respect copyright, and compensate creators.
The future of AI creativity must be legal, ethical, and fair — for both innovation and the artists whose work made AI possible.