AI-powered cyberattacks: why entertainment will be one of the most attacked sectors in 2026

Kaspersky warns that the accelerated adoption of artificial intelligence is expanding the attack surface across the entertainment industry, from ticket sales to streaming and video games.

A new section of the Kaspersky Security Bulletin focuses on one of the main challenges that will shape the entertainment industry in 2026: the impact of artificial intelligence on digital security. According to the report, AI is already transforming key processes in the sector, but it is also enabling new forms of fraud, leaks, and cyberattacks that affect studios, platforms, and users alike.

The growing reliance on AI-based systems makes these technologies critical infrastructure for the business. A failure, abuse, or intrusion no longer has only technical consequences, but also economic, reputational, and legal repercussions, in a context dominated by global premieres, live broadcasts, and massive digital communities.

Dynamic Pricing and Bots in Ticket Sales

AI allows for faster and more accurate ticket price adjustments, but it also offers resellers advanced tools to detect high-demand events, operate bots on a large scale, and modify resale prices in near real-time. Even when artists maintain fixed prices, secondary markets can implement automatic price increases driven by algorithms.
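The algorithm-driven price increases described above can be sketched as a minimal demand-based pricing rule. This is a hypothetical illustration only: the function name, the demand signal (searches per minute), and the thresholds are all assumptions for clarity, not details from the Kaspersky report.

```python
# Hypothetical sketch of algorithm-driven resale pricing: scale the listed
# price with an observed demand signal, capped at a maximum multiplier.
# All names and thresholds are illustrative assumptions.

def adjust_resale_price(base_price: float, searches_per_min: int,
                        baseline: int = 100, cap: float = 3.0) -> float:
    """Scale the listed price with demand relative to a baseline, up to a cap."""
    demand_ratio = searches_per_min / baseline
    # Never drop below face value; never exceed the cap.
    multiplier = min(max(demand_ratio, 1.0), cap)
    return round(base_price * multiplier, 2)

print(adjust_resale_price(80.0, 250))  # high demand: 80 * 2.5 = 200.0
print(adjust_resale_price(80.0, 50))   # low demand: floor at face value, 80.0
```

A real reseller system would feed in richer signals (event date proximity, remaining inventory, social media volume), but the core mechanism, a feedback loop from demand data to listed price, is what makes near-real-time repricing possible at scale.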

Visual Effects and Risks with Third-Party Vendors

The growing use of cloud-based AI platforms for creating visual effects deepens reliance on third-party vendors and freelance professionals, expanding the attack surface. Attackers could compromise rendering systems or third-party tools to steal scenes or episodes before release, without needing to target the studios directly.

Content Distribution Networks as Critical Targets

Distribution networks store extremely sensitive material, such as unaired episodes, final copies of films, or live streams. With the help of AI, attackers can analyze these networks, identify valuable content, and detect poorly secured access points. A single incident could affect multiple titles or even allow the insertion of malicious code into legitimate broadcasts.

Misuse of AI in Video Games

In the gaming environment, users may attempt to circumvent the limitations of AI tools for creating characters or content by incorporating external models that introduce inappropriate material into games, mods, or videos. Furthermore, poor management of training data can lead to the generation of content containing personal information.

Regulatory Requirements and New Legal Challenges

New regulations on AI will require greater transparency regarding content generated with these technologies and clarity on consent and licenses for training models with copyrighted works. This will compel companies in the sector to create new roles, processes, and internal controls for the use of AI in creative and commercial contexts.

“Artificial intelligence has become a common factor in many emerging risks in the digital environment. It not only enhances anomaly detection by security teams, but it is also being used to analyze infrastructure, anticipate behavior, and generate increasingly credible malicious content,” explained María Isabel Manjarrez, a security researcher with Kaspersky’s Global Research and Analysis Team. In this regard, she maintained that companies must consider AI systems and the data that feeds them as part of their critical attack surface, and not just as creative tools.

Key recommendations for the industry

To mitigate these risks, experts recommend:

- Inventorying and mapping AI usage across the entire value chain.
- Integrating these systems into threat models.
- Providing ongoing employee training.
- Thoroughly reviewing the security of content distribution networks.
- Subjecting generative AI deployments to specific security and privacy assessments, with clear data and content policies.

Source: www.itsitio.com