THIS IS HOW MANY REQUESTS WERE PROCESSED IN 2024/2025
Customers: 189,006 calls were received during this period, 177,830 of which were handled within 9 minutes. On the ticket side, 176,896 service tickets were created and 177,570 were resolved.
Members: Of 30,787 calls, 28,847 were successfully handled within 9 minutes. A total of 31,839 service tickets were submitted, and 50,349 were processed.
ONLINE INQUIRIES AND QUICK SELF-HELP
29.5% of all member inquiries and 42.3% of all customer inquiries were received via the GEMA online portal. There, music users and members resolved approximately 82.9% of all service issues themselves (self-service), with no need to submit a ticket. The GEMA website also answers many questions: in total, we recorded 894,858 page views on gema.de.
SATISFACTION IS ON THE RISE
CUSTOMER SATISFACTION ROSE FROM 65% IN 2022 TO 90% IN 2025
Customer satisfaction has risen sharply in recent years, from 65% in 2022 to 90% in 2025.
INCREASINGLY IMPORTANT: WORKING WITH DATA
50%
OF OUR EMPLOYEES WORK WITH GEMA DATA
At GEMA, everything revolves around music. And data. Data gives us a snapshot of the music market, which is why more than half of all GEMA employees work on and with our own data.
REDUCE THE WORKLOAD: OUR AI SOLUTIONS
At the GEMA AI Hub, employees develop AI solutions. Six solutions have already been put into production, largely replacing manual processes. Five additional projects are nearly complete, and seven are in the planning stages. One internally developed AI solution is integrated into the GEMA online portal: AI is used to extract the relevant information from uploaded setlist submissions. Since the system went live in August 2025, 42,758 setlists containing 1,280,047 individual tracks have been processed (as of February 26, 2026). Fully automated processing significantly reduces the manual data entry workload and minimizes data entry errors. The solution is implemented as a scalable cloud service and uses state-of-the-art large language models (LLMs) for data extraction.
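To illustrate the kind of pipeline described above, here is a minimal Python sketch of LLM-based setlist extraction. It is an assumption-based illustration only: the prompt wording, the field names, and the injected `call_llm` function are hypothetical, since GEMA's actual service and model interface are not public.

```python
import json

# Hypothetical prompt template; the real system's prompt and schema are unknown.
EXTRACTION_PROMPT = (
    "Extract every performed track from the following setlist. "
    "Return a JSON array of objects with the keys "
    '"title", "composer" and "duration_seconds". Setlist:\n\n{setlist_text}'
)


def parse_llm_response(raw: str) -> list[dict]:
    """Validate the model's JSON output into a list of track records."""
    tracks = json.loads(raw)
    required = {"title", "composer", "duration_seconds"}
    for track in tracks:
        missing = required - track.keys()
        if missing:
            raise ValueError(f"track record missing fields: {missing}")
    return tracks


def extract_setlist(setlist_text: str, call_llm) -> list[dict]:
    """Send an uploaded setlist's text to an LLM and parse the result.

    `call_llm` is an injected function (prompt string -> raw model output),
    so the sketch stays independent of any concrete model API.
    """
    raw = call_llm(EXTRACTION_PROMPT.format(setlist_text=setlist_text))
    return parse_llm_response(raw)


# Illustration with a stubbed model response instead of a live LLM call:
def stub_llm(prompt: str) -> str:
    return '[{"title": "Song A", "composer": "J. Doe", "duration_seconds": 210}]'


tracks = extract_setlist("1. Song A (J. Doe) - 3:30", stub_llm)
```

Validating the model output against a fixed schema before it enters downstream processing is what keeps a pipeline like this robust against malformed LLM responses.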