tiigre

Tiigre (stylized as lowercase tiigre) is a peer-to-peer marketplace designed for second-hand goods, currently operating in Venezuela. The platform emphasizes user safety, ease of use, and reliability, integrating features like delivery coordination and buyer protection directly into the user experience.

This post offers a look into the technical architecture that powers tiigre, highlighting the technologies and design choices that enable its functionality.

The architectural “Why”

Building an early-stage startup involves dealing with limited time, budget, and resources, making it essential to quickly respond to user feedback and efficiently implement new features. Additionally, ensuring that the chosen technology supports immediate goals without restricting future growth is always a priority.

The technology choices centered around four key areas:

  • Iteration Speed: Speed is critical for startups. Quickly building, testing, and releasing new features allows the product to evolve rapidly based on user feedback.
  • Cost Management: Costs must stay manageable, particularly before the startup becomes profitable, without compromising capability or performance. Cost-effective is the key phrase.
  • Avoiding Premature Optimization (Without Hindering Future Growth): Every hour spent building (and then maintaining) something you don’t yet need decreases the chances you’ll ever reach the scale where it’s necessary. Focus instead on robust solutions that fit current needs, without sacrificing flexibility for potential future growth.
  • Proven Technologies: Preference was given to mature, well-documented, and battle-tested technologies over fleeting trends. Frameworks like Django, for instance, were chosen for their “batteries-included” nature which accelerates development, coupled with proven scalability demonstrated by major platforms (like Instagram, which famously uses a heavily customized version of Django).

This pragmatic approach is what allows tiigre to evolve quickly while maintaining a stable and scalable core. If eight years from now we curse at our past selves while refactoring something, it means we’re still here eight years later.

High-Level Overview

Tiigre’s core architecture comprises a mobile application for users, a backend API handling business logic, a database for persistence, and various supporting services. The entire system is containerized using Docker and leverages Cloudflare for edge services.

  graph TD
    subgraph Docker
        BackendAPI[Backend API Django/DRF];
        DB[(PostgreSQL + PostGIS)];
        Cache[(Redis)];
        Tasks[Celery Workers];
    end
    User(User's Mobile Device) --> App[React Native App];
    App --> CFEdge[Cloudflare Edge CDN/WAF/Tunnel];
    CFEdge --> BackendAPI;
    BackendAPI --> DB;
    BackendAPI --> Cache;
    BackendAPI --> Tasks;
    BackendAPI --> Storage[Cloudflare R2/Images];
    Tasks --> Cache;
    Tasks --> DB;
    Tasks --> ExtAPI[External APIs e.g., Email, Payments];

    style App fill:#D6EAF8,stroke:#333,stroke-width:2px,color:#333
    style BackendAPI fill:#D5F5E3,stroke:#333,stroke-width:2px,color:#333
    style DB fill:#FCF3CF,stroke:#333,stroke-width:2px,color:#333
    style Cache fill:#FDEDEC,stroke:#333,stroke-width:2px,color:#333
    style Tasks fill:#FDEDEC,stroke:#333,stroke-width:2px,color:#333
    style Storage fill:#FDEBD0,stroke:#333,stroke-width:2px,color:#333
    style ExtAPI fill:#EBDEF0,stroke:#333,stroke-width:2px,color:#333
    style CFEdge fill:#EAECEE,stroke:#333,stroke-width:2px,color:#333

The platform serves a growing user base and is architected with future growth in mind. Key considerations include high availability, low latency (aided by server location choice), and robust user security, particularly concerning payments and personal data protection.

Frontend: The Mobile App (React Native)

Users interact with tiigre primarily through dedicated iOS and Android applications.

  • Framework: Built with React Native and managed using Expo tools, including Expo Application Services (EAS) for builds and distribution.
  • Language: Developed using TypeScript.
  • State Management: Employs solutions for managing both global application state and server state synchronization, including caching data fetched from the backend.
  • API Interaction: Communicates with the backend via a REST API using an HTTP client library.
  • Key Features: Integrates push notifications (using Firebase Cloud Messaging, triggered by the backend; see the sketch after the diagram below), analytics (Firebase, Google Analytics), error tracking (Sentry), and connections to advertising platforms like Facebook Ads.
  graph LR
    App[React Native App] -- REST API --> Backend[Backend API];
    App -- Push Token --> Backend;
    Backend -- Trigger Push --> FCM[Firebase Cloud Messaging];
    FCM -- Send Push --> App;
    App -- Track Events --> Analytics[Firebase/GA];
    App -- Report Errors --> SentryFE[Sentry Frontend];
    App -- Ad Events --> FBAds[Facebook Ads SDK];

    style App fill:#D6EAF8,stroke:#333,stroke-width:2px,color:#333
    style Backend fill:#D5F5E3,stroke:#333,stroke-width:2px,color:#333
    style FCM fill:#FDEBD0,stroke:#333,stroke-width:2px,color:#333
    style Analytics fill:#FDEBD0,stroke:#333,stroke-width:2px,color:#333
    style SentryFE fill:#FDEBD0,stroke:#333,stroke-width:2px,color:#333
    style FBAds fill:#FDEBD0,stroke:#333,stroke-width:2px,color:#333
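
To make the push flow above concrete, here is a minimal backend-side sketch using the firebase-admin SDK. The service-account path and function name are illustrative assumptions, not tiigre’s actual code; in practice a call like this would typically run inside a background task triggered by a backend event.

  # Hypothetical sketch: the backend triggers an FCM push for a device token
  # previously registered by the app. Paths and names are illustrative.
  import firebase_admin
  from firebase_admin import credentials, messaging

  # Initialise the SDK once at process start (the key path is an assumption).
  firebase_admin.initialize_app(credentials.Certificate("secrets/fcm-key.json"))

  def send_push(device_token: str, title: str, body: str) -> str:
      """Send one push notification and return the FCM message ID."""
      message = messaging.Message(
          token=device_token,
          notification=messaging.Notification(title=title, body=body),
      )
      return messaging.send(message)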

Builds and deployments to the respective app stores are streamlined using EAS.

Backend: Django & DRF

tiigre’s backend logic runs on a robust stack built with Python, Django, and Django REST Framework (DRF).

  • Core: Python, Django, and DRF.
  • Structure: Organized as a multi-app Django project, separating concerns like users, products, payments, and transactions into distinct modules.
  • API: Exposes RESTful endpoints covering all application functionalities required by the frontend.
  • Websockets: The now-deprecated chat feature used WebSockets powered by Django Channels.
  • Authentication: Implements token-based authentication, requiring email verification for user accounts.
  • Async Tasks: Leverages Celery for handling background operations asynchronously. This includes tasks like sending notifications, interacting with external services, and other processing that doesn’t need to block the main request flow. Redis serves as the message broker and results backend for Celery, while Flower provides a web UI for monitoring tasks. Celery Beat manages scheduled, recurring tasks (a minimal wiring sketch follows the diagram below).
  • Machine Learning: Integrates scikit-learn for running a logistic regression model, which helps surface relevant items to users and personalize the browsing experience. Models are trained periodically and stored as part of the application’s deployment artifacts (a scoring sketch follows the diagram below).
  • Database Interaction: Primarily uses the Django ORM for database operations.
  • External Integrations:
    • Payments: Securely interacts with external payment processing partners via APIs to handle payment verification and facilitate payouts, without storing sensitive financial details directly.
    • Email: Integrates with AWS SES for sending transactional emails (e.g., account verification, notifications).
    • Storage: Uses Cloudflare R2 for storing media (like product images) and static application assets.
  graph LR
    subgraph BackendCore [Backend Core]
        DjangoApp[Django/DRF App]
        CeleryWorker[Celery Worker]
        CeleryBeat[Celery Beat]
        FlowerUI[Flower UI]
    end

    subgraph MiddlewareAndState [Middleware & State]
        RedisBroker[Redis Broker/Backend]
        DB[(PostgreSQL)]
    end

    subgraph ExternalServices [External Services & Storage]
        PaymentProvider[Payment Provider APIs]
        SES[AWS SES]
        R2[Cloudflare R2]
        %% ML node removed from here
    end

    %% Connections from Core to Middleware/State
    DjangoApp --> RedisBroker;
    DjangoApp --> DB;
    CeleryWorker --> RedisBroker;
    CeleryWorker --> DB;
    CeleryBeat --> RedisBroker;
    FlowerUI --> RedisBroker;

    %% Connections from Core to External Services/Storage
    DjangoApp --> PaymentProvider;
    DjangoApp --> SES;
    DjangoApp --> R2;
    %% Connections to internal ML logic happen within DjangoApp/CeleryWorker code
    CeleryWorker --> PaymentProvider;
    CeleryWorker --> SES;
    CeleryWorker --> R2;

    %% Apply styles
    style DjangoApp fill:#D5F5E3,stroke:#333,stroke-width:2px,color:#333
    style CeleryWorker fill:#D5F5E3,stroke:#333,stroke-width:2px,color:#333
    style CeleryBeat fill:#D5F5E3,stroke:#333,stroke-width:2px,color:#333
    style FlowerUI fill:#D5F5E3,stroke:#333,stroke-width:2px,color:#333
    style RedisBroker fill:#FDEDEC,stroke:#333,stroke-width:2px,color:#333
    style DB fill:#FCF3CF,stroke:#333,stroke-width:2px,color:#333
    style R2 fill:#FDEBD0,stroke:#333,stroke-width:2px,color:#333
    style PaymentProvider fill:#EBDEF0,stroke:#333,stroke-width:2px,color:#333
    style SES fill:#EBDEF0,stroke:#333,stroke-width:2px,color:#333
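
As a rough illustration of the async layer described above, the sketch below wires a Celery app to the Redis broker, defines one background task, and registers a recurring job with Celery Beat. The broker URL, task names, and schedule are assumptions for the example, not tiigre’s actual code.

  # Minimal Celery wiring: Redis as broker and result backend, one task,
  # and a Beat schedule. All names here are illustrative.
  import os
  from celery import Celery

  app = Celery("tiigre", broker=os.environ.get("REDIS_URL", "redis://redis:6379/0"))
  app.conf.result_backend = app.conf.broker_url  # Redis doubles as the result backend

  @app.task(name="notifications.send_listing_sold")
  def send_listing_sold(user_id: int, product_id: int) -> None:
      # Runs in a worker, outside the request/response cycle:
      # e.g. render an email for SES or trigger an FCM push.
      print(f"notify user {user_id}: product {product_id} sold")

  # Celery Beat entry for a recurring job (the hourly interval is hypothetical).
  app.conf.beat_schedule = {
      "retrain-relevance-model": {
          "task": "ml.retrain_relevance_model",
          "schedule": 60 * 60,
      },
  }

Similarly, the relevance model boils down to loading a pre-trained scikit-learn estimator from a deployment artifact and scoring candidate listings. The artifact path and feature layout below are assumptions used only to show the shape of the call.

  # Hedged sketch: score listings with a pre-trained logistic regression model.
  import joblib
  import numpy as np

  model = joblib.load("ml_artifacts/relevance_lr.joblib")  # LogisticRegression or Pipeline

  def relevance_scores(features: np.ndarray) -> np.ndarray:
      """Return P(relevant) for each row of listing features."""
      return model.predict_proba(features)[:, 1]

  # Hypothetical feature rows: (price_ratio, distance_km, seller_rating)
  print(relevance_scores(np.array([[0.8, 2.5, 4.7], [1.3, 40.0, 3.1]])))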

Database: PostgreSQL & PostGIS

The primary data store is PostgreSQL.

  • Key Extension: PostGIS is utilized extensively for handling geospatial data. This allows storing product locations and defining geographic zones (e.g., for delivery areas), enabling features like proximity searches and location-based filtering (a query sketch follows this list).
  • Management: Database schema is defined and managed through Django models and migrations. The application interacts with the database via the Django ORM.
  • Operations: Runs as a single instance. Regular backups are performed using standard PostgreSQL tools and securely stored in a private Cloudflare R2 bucket, with restoration procedures tested regularly (see scaling roadmap below for details on future horizontal scaling).
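
As an illustration of the kind of query PostGIS unlocks, here is a hedged GeoDjango sketch of a proximity search; the Product model, its location field, the Caracas coordinates, and the 5 km radius are assumptions for the example, not tiigre’s actual schema.

  # Hedged sketch of a PostGIS-backed proximity search via GeoDjango.
  # Assumes a configured GeoDjango project; the model below is illustrative.
  from django.contrib.gis.db import models
  from django.contrib.gis.db.models.functions import Distance
  from django.contrib.gis.geos import Point
  from django.contrib.gis.measure import D

  class Product(models.Model):                      # hypothetical listing model
      title = models.CharField(max_length=120)
      location = models.PointField(geography=True)  # stored via PostGIS

  user_location = Point(-66.9036, 10.4806, srid=4326)  # lon/lat, roughly Caracas

  nearby = (
      Product.objects
      .filter(location__distance_lte=(user_location, D(km=5)))  # radius filter
      .annotate(distance=Distance("location", user_location))   # per-row distance
      .order_by("distance")                                      # closest first
  )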

Infrastructure & Hosting: Hetzner & Docker

The entire tiigre backend stack is containerized using Docker and hosted on Hetzner Cloud servers.

  • Servers: Separate development, staging, and production environments are maintained on cloud instances located strategically to optimize latency for the target user base.
  • Orchestration: Container orchestration is managed using docker compose on each server.
  • Deployment: Updates are deployed by pulling the latest code and rebuilding/restarting containers using docker compose. Docker images encapsulate the application and its dependencies.
  • Container Setup: The production Docker Compose configuration defines all necessary services, including the Django application, Celery components (Worker, Beat, Flower), PostgreSQL, Redis, Traefik (acting as an internal reverse proxy), Cloudflared (for secure tunneling), Loki (for log aggregation), an agent for shipping logs (Alloy), and Grafana (for dashboards). Persistent volumes are used for critical data like the database, Grafana configurations, and ML artifacts.
  graph TD
    subgraph HetznerServer [Hetzner Cloud Server Ubuntu + Docker]
        direction LR
        subgraph DockerComposeServices [docker compose services]
            Django[Django/DRF]
            Celery[Celery Workers/Beat/Flower]
            Postgres[PostgreSQL/PostGIS]
            Redis[Redis]
            Traefik[Traefik Proxy]
            Cloudflared[Cloudflared Tunnel]
            Loki[Loki Logs]
            Alloy[Alloy Log Shipper]
            Grafana[Grafana Dashboards]
        end
        Postgres --> VolPG[Volume Postgres Data];
        Traefik --> VolTraefik[Volume Traefik Certs];
        Grafana --> VolGrafana[Volume Grafana Data];
        Django --> VolML[Volume ML Artifacts];
        Celery --> VolML;
        DockerDaemon -- Logs --> Alloy;
        Alloy -- Sends Logs --> Loki;
        Grafana -- Queries Logs --> Loki;
    end

    %% Removed explicit color:#333 for the outer box label for better contrast
    style HetznerServer stroke-width:2px
    %% Kept explicit color for inner boxes as their background is lighter
    style DockerComposeServices fill:#F4F6F7,stroke:#333,stroke-width:1px,color:#333

Scaling sidenote: vertical vs horizontal expansion

Even though every service currently runs on a single Hetzner instance, we still have plenty of headroom before true horizontal scaling becomes mandatory (or even beneficial):

  • Room to grow in-place: Hetzner lets us resize the CPU/RAM profile of a node in minutes. Our present production box stays well below 35% CPU during daytime peaks while comfortably handling 70k+ registered users, even on traffic spikes. We can almost 10x our resources without needing any architectural change at all.
  • Containers as the base architecture: Because every component has been Dockerised since day one, we have the modularity to “move out” any specific part that outgrows the current architecture¹. Whether the next step is splitting Postgres onto its own node (it probably will be), adding a read-only replica for analytics, or standing up a second application server behind Traefik, the compose files already express the dependency graph and health checks we need.
  • Why not Kubernetes (yet?): A full-blown orchestrator would more than double our ops overhead today without unblocking any current feature work. For now, docker compose keeps deployment trivial and observability transparent. When horizontal scaling does deliver more value than its operational cost (e.g., autoscaling workers for flash-sale traffic), we can drop the same container images¹ into K8s or Fly.io machines with (hopefully) minimal re-tooling.

In short, vertical scaling buys us another order of magnitude, and the way we’ve containerised everything means the eventual hop to horizontal is a logistics project, not a rewrite.¹

Networking & Edge: Connecting Users Securely (Cloudflare)

Cloudflare is integral to tiigre’s networking, security, and performance strategy.

  • Traffic Flow: User requests first hit Cloudflare’s edge network, benefiting from DNS resolution, CDN caching, and WAF protection. Traffic is then securely routed via Cloudflare Tunnel to the cloudflared container running on the Hetzner origin server. This container forwards requests internally (over HTTPS, using Traefik’s self-signed certificate) to the Traefik container. Traefik, acting as an internal reverse proxy, routes the request (typically over HTTP) to the appropriate backend service, usually the Django application container.
  • Security: Cloudflare provides SSL encryption between the user and the edge, WAF filtering against common web threats, and origin IP obfuscation through the use of Tunnels. Access controls are also applied at the Cloudflare level for different environments (e.g., IP whitelisting for staging).
  • Performance: The Cloudflare CDN caches static assets served from Cloudflare R2. Product images are processed and delivered efficiently by Cloudflare Images, which uses R2 as its origin store.
  sequenceDiagram
    participant User
    participant CFEdge as Cloudflare Edge (DNS, CDN, WAF)
    participant CFTunnel as Cloudflare Network
    participant CloudflaredCont as cloudflared Container @Hetzner
    participant TraefikCont as Traefik Container @Hetzner
    participant DjangoCont as Django Container @Hetzner

    User->>CFEdge: Request app.tiigre.com
    CFEdge->>CFTunnel: Route via Tunnel
    CFTunnel->>CloudflaredCont: Forward Request
    CloudflaredCont->>TraefikCont: HTTPS to traefik:443 (No TLS Verify)
    TraefikCont->>DjangoCont: HTTP to Django service
    DjangoCont-->>TraefikCont: Response
    TraefikCont-->>CloudflaredCont: Response
    CloudflaredCont-->>CFTunnel: Response
    CFTunnel-->>CFEdge: Response
    CFEdge-->>User: Response (potentially cached)

Supporting Services

Several other technologies and services round out the tiigre stack:

  • Caching (Redis): Primarily functions as the message broker and task result backend for Celery’s asynchronous processing system.
  • Async Processing (Celery): Handles various background jobs essential for platform operation, such as sending notifications and processing data related to external service integrations.
  • Storage (Cloudflare R2 & Images): R2 serves as the primary object storage for user-uploaded images, static website assets, and database backups. Cloudflare Images provides dynamic image resizing, optimization, and efficient CDN delivery.
  • CI/CD (GitHub Actions): Continuous Integration is implemented for the backend repository using GitHub Actions. Automated workflows run linters and execute the test suite (pytest) upon code pushes and pull requests, ensuring code quality and stability.
  • Testing (Pytest): The backend employs a testing strategy centered around integration tests using pytest. Tests run within an isolated Docker environment, mimicking the production setup, and utilize mocking for external service dependencies (a sketch follows this list).
  • Monitoring & Observability (Grafana, Loki, Sentry):
    • Logs: Container logs are aggregated into Loki via a dedicated log shipping agent (Alloy) and can be explored using Grafana.
    • Metrics: Grafana dashboards provide visibility into key business metrics by querying the PostgreSQL database directly.
    • Errors: Sentry is used for real-time error tracking and reporting for both the React Native frontend and the Django backend. Automated alerts notify the team of new errors encountered in production.
  • Analytics (Firebase/GA): The mobile applications integrate Firebase Analytics and Google Analytics to gather data on user interactions, enabling analysis of user behavior, feature usage, and conversion funnels.
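
To give a feel for that testing style, below is a hedged pytest sketch in the spirit of those integration tests, assuming pytest-django and DRF’s test client; the endpoint, payload, and mocked task path are illustrative, not taken from tiigre’s suite.

  # Hypothetical integration test: exercises the API and database while
  # mocking the external side effect (a Celery task) and checking the outcome.
  import pytest
  from unittest import mock
  from rest_framework.test import APIClient

  @pytest.mark.django_db
  def test_purchase_triggers_notification():
      client = APIClient()
      with mock.patch("notifications.tasks.send_listing_sold.delay") as fake_notify:
          response = client.post("/api/v1/purchases/", {"product": 1}, format="json")

      assert response.status_code == 201
      fake_notify.assert_called_once()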

Parting thoughts

Django’s slogan is “The web framework for perfectionists with deadlines”. This captures much of the thinking behind tiigre’s architecture: whenever a choice must be made, speed and agility (a startup’s biggest strengths against established players) are weighed against functionality, reliability, scalability, and anything else the “perfect” solution would offer. The final choice usually lands close to Django’s own philosophy: solve the current problem effectively and quickly, without sacrificing future growth or painting ourselves into a corner. Solving the current problem is often urgent, but needing three app rewrites before profitability ultimately negates the very speed advantage a startup relies on.


  1. Realistically, any growth measured in orders of magnitude will probably require some form of rewrite; it’s more about postponing that point while keeping the required changes as small as possible, without sacrificing present efficiency.