1. Artificial intelligence and explainability
1.1. What is artificial intelligence?
1.1.1. A short history of artificial intelligence
1.1.2. Different approaches to artificial intelligence
1.1.3. Applications of artificial intelligence
1.2. What is explainable artificial intelligence?
1.2.1. Motivations for XAI
1.2.2. The dimensions of interpretability
1.2.3. Different explanations and how to read them
1.3. AI and XAI in the media field
1.3.1. AI applications and explainability
1.3.2. VOD services in practice
1.4. Conclusion
2. The stuff AI dreams are made of – big data
2.1. Introduction
2.2. Privacy as the big data gatekeeper
2.2.1. The United States of America
2.2.2. The European Union
2.2.3. China
2.2.4. Three different approaches?
2.3. Big data bias and discrimination
2.4. Informing the people: Media, misinformation, and illegal content
2.5. Big data politics and the political bubble
2.6. Media as surveillance watchdogs?
2.7. The media market: Big data-driven market strategies
2.8. Regulatory approaches to AI-based systems
2.9. Conclusion
3. Implications of the use of artificial intelligence by news media for freedom of expression
3.1. Introduction
3.2. AI applications for news media
3.3. The use of AI by news media as an element of media freedom
3.3.1. Democratic role of the news media
3.3.2. Beneficiaries of media freedom
3.3.3. Duties and responsibilities and journalistic codes of ethics
3.4. Implications of AI for the freedom of expression rights of news users and other participants in public debate
3.5. Obligations of states regarding media freedom
3.6. Conclusion
4. Cultural diversity policy in the age of AI
4.1. Introduction
4.2. Understanding the changed environment of content creation, distribution, use and re-use
4.2.1. Understanding the new intermediaries
4.2.2. Implications of AI-driven editorial agents
4.3. Possible avenues of action: New tools addressing and engaging digital intermediaries
4.3.1. Governance of algorithms
4.3.2. Governance through algorithms
4.4. Concluding remarks
5. Copyright – Is the machine an author?
5.1. Introduction
5.2. Technology
5.3. Protection: Can AI-generated creativity be protected?
5.3.1. Personality: Can a machine be a legal person?
5.3.2. Authorship: Can a machine be an author?
5.3.3. Originality: Can a machine be original?
5.4. Policy options: Are incentives necessary?
5.4.1. No protection: Public domain status of AI-generated works
5.4.2. Authorship and legal fictions: Should a human be the author?
5.4.3. Should a robot be the author?
5.4.4. Sui generis protection for AI-generated creativity
5.4.5. Providing rights to publishers and disseminators
5.5. Conclusions
6. AI in advertising: Entering Deadwood or using data for good?
6.1. Introduction
6.2. AI in advertising: From tracing online footprints to writing ad scripts
6.2.1. Programmatic advertising: The stock market of ads and data
6.2.2. Algorithmic creativity: AI dipped in the ink of imagination
6.2.3. From creative games to gains
6.2.4. Conclusion: AI-enabled intelligent advertising
6.3. Concerns regarding Big Data and AI
6.3.1. Existing legal framework in Europe
6.3.2. Conclusion: (Mostly) the Good, the Bad and the Ugly
6.4. Using AI for intelligent ad regulation
6.4.1. Avatars gathering data for good
6.4.2. AI advancements for advertising compliance in France
6.4.3. Harnessing technology to bring more trust to the Dutch ad market
6.4.4. Tech solutions from the ad industry powerhouse
6.4.5. Future frontier for advertising self-regulation
6.5. Conclusion: ‘The great data rush’
6.6. Acknowledgements
6.7. List of interviews
7. Personality rights: From Hollywood to deepfakes
7.1. Introduction
7.2. AI sets the scene: Deepfakes and ghost acting
7.2.1. Deepfakes
7.2.2. Ghost acting
7.3. Personality rights and implications
7.3.1. Angle 1: Publicity as (intellectual) property
7.3.2. Angle 2: Publicity and brand recognition
7.3.3. Angle 3: Privacy protections
7.3.4. Angle 4: Dignity and the neighbouring rights
7.4. Laws in selected jurisdictions
7.4.1. Germany
7.4.2. France
7.4.3. Sweden
7.4.4. Guernsey
7.4.5. United Kingdom
7.4.6. California
7.5. What next for Europe’s audiovisual sector?
8. Approaches for a sustainable regulatory framework for audiovisual industries in Europe
8.1. Introduction
8.1.1. The basics of AI, simplified
8.2. How is AI used in audiovisual industries?
8.3. Is AI different from previous technologies?
8.3.1. Who is responsible when AI causes harm?
8.3.2. It’s not just the economy
8.4. We have a moral obligation to do good with AI
8.5. Regulation should be human-centric and goal-based
8.5.1. Major risks should be addressed
8.5.2. Humans are the responsible ones
8.5.3. Transparency as an interim solution?
8.6. Human-centricity, not technology-centricity
Figures
Figure 1. Example of global tree-based explanations returned by TREPAN
Figure 2. Example of list of rules explanations returned by CORELS
Figure 3. Example of factual and counter-factual rule-based explanation returned by LORE
Figure 4. Example of explanation based on features importance by LIME
Figure 5. Example of explanation based on features importance by SHAP
Figure 6. Example of saliency maps returned by different explanation methods. The first column contains the image analysed and the label assigned by the black-box model b of the AI system.
Figure 7. Example of exemplars (left) and counter-exemplars (right) explanation returned by ABELE. On top of each (counter-)exemplar is reported the label assigned by the black-box model b of the AI system.
Tables
Table 1. Programmatic advertising glossary
Table 2. Advertising and marketing campaigns enabled by creative AI technologies