OTRO was a global digital fan engagement platform created by some of the world’s most iconic footballers, including Leo Messi, Neymar Jr., and David Beckham. Its goal was to give fans exclusive access to the personal side of their favorite players through multimedia content.
OTRO offered a rich, interactive experience, enabling fans to engage with players through comments, quizzes, competitions, and live Q&A sessions—all in a digital environment designed for scale and accessibility.
Given the global popularity of its founding players, OTRO addressed a potential audience of more than one billion social media followers.
This required a solution that could handle extreme concurrency from day one. The platform needed to support high volumes of user registrations, on the order of thousands per minute, and deliver a seamless, low-latency experience with no downtime, even under massive, unpredictable load.
The infrastructure had to be both robust and highly scalable, capable of reacting instantly to traffic spikes, such as those triggered by global announcements or viral content.
DistCoTech built a cloud-based, distributed asynchronous system, architected specifically for high end-to-end throughput and scalability.
The platform was designed to handle traffic bursts dynamically while maintaining stable, responsive performance. Asynchronous processing decoupled request handling from heavier downstream work, keeping the system efficient under load, while horizontal scalability allowed resources to adjust in real time to demand.
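As an illustration, here is a minimal sketch of how such an asynchronous registration path might look on the Java and Spring Boot stack described below, with Apache Kafka as the event backbone. The controller, topic, and payload names are illustrative assumptions, not taken from the actual OTRO codebase.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class RegistrationController {

    // Event payload; a JSON value serializer (e.g. Spring Kafka's JsonSerializer)
    // is assumed to be configured for the producer.
    record RegistrationRequested(String email, String displayName) {}

    private final KafkaTemplate<String, RegistrationRequested> kafka;

    RegistrationController(KafkaTemplate<String, RegistrationRequested> kafka) {
        this.kafka = kafka;
    }

    @PostMapping("/api/registrations")
    ResponseEntity<Void> register(@RequestBody RegistrationRequested request) {
        // Publish the event keyed by email and return 202 immediately; the heavy
        // work (account creation, welcome email, profile setup) happens in
        // downstream consumers, so the web tier stays responsive during spikes.
        kafka.send("user-registrations", request.email(), request);
        return ResponseEntity.accepted().build();
    }
}
```

The design choice this sketch captures is that the synchronous path does as little as possible: accepting the request and handing it to the message broker, with everything else absorbed by the asynchronous backend.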
The infrastructure was optimized for elasticity, resilience, and continuous availability—critical requirements for a fan platform driven by live interaction and unpredictable usage patterns.
The robustness of the solution was immediately put to the test when David Beckham announced the launch of OTRO on Twitter two hours before the official go-live time. Within 20 minutes of the post, over 225,000 users registered successfully, with no downtime or service degradation.
The system’s cloud-native architecture, based on Kubernetes, Terraform, and Apache Kafka, allowed the platform to scale rapidly and absorb the surge with no interruption to service.
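On the consumer side, horizontal scalability with Kafka typically comes from the consumer-group model: each additional Kubernetes replica joins the same group and picks up a share of the topic's partitions. The sketch below illustrates that pattern under stated assumptions; the worker class, group, and topic names are hypothetical.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
class RegistrationWorker {

    // Every pod replica joins the same consumer group, so Kafka rebalances the
    // topic's partitions across however many instances are currently running.
    @KafkaListener(
            topics = "user-registrations",
            groupId = "registration-workers",
            concurrency = "3") // listener threads per pod; total parallelism = pods x concurrency
    void onRegistration(String payload) {
        // Deserialize the registration event and do the heavy lifting here
        // (account creation, welcome email, profile setup). A backlog simply
        // queues in Kafka, so launch-day bursts are absorbed rather than dropped.
    }
}
```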
Backend services built with Java and Spring Boot ran efficiently in containers on Kubernetes, packaged and deployed via Helm charts. Secure configuration management and secret handling were implemented with Vault, while CircleCI powered the continuous integration and automated deployment pipelines. The infrastructure was hosted on Google Cloud Platform (GCP).
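The case study does not detail how Vault was wired into the services; one common pattern with Spring Boot is Spring Cloud Vault, which exposes Vault-stored secrets as ordinary configuration properties. The sketch below assumes that setup, and the class name and property key are hypothetical.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
class NotificationClientConfig {

    private final String apiKey;

    // With spring-cloud-starter-vault-config on the classpath (an assumption,
    // not stated in the source), secrets stored in Vault's key/value backend
    // surface as ordinary Spring properties, so application code never touches
    // Vault tokens or secret paths directly.
    NotificationClientConfig(@Value("${notifications.api-key}") String apiKey) {
        this.apiKey = apiKey;
    }

    String apiKey() {
        return apiKey;
    }
}
```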
Tech stack: Terraform, Kubernetes, Helm, Java, Spring Boot, Apache Kafka, Vault, CircleCI, GCP