How the Seat Booking System Works at BookMyShow

Reading Time: 2 minutes

When you book a movie ticket, it feels effortless: pick a seat → pay → done. But behind that simple flow, platforms like BookMyShow are solving a real-time seat booking system challenge—sometimes with thousands of users trying to grab the same seat at the same millisecond. Let’s dive into how this seat booking system works under the hood to ensure smooth and accurate bookings.

Seats Are Just Records (With States)

Inside BookMyShow, each seat is stored like a record with a status:

  • Available
  • Locked
  • Booked

At the beginning, every seat is Available. No magic—just state management done right.

The Exact Moment You Click a Seat

When you select a seat and move forward:

  1. Your app sends a request to the server
  2. The server checks the seat’s current status
  3. If the seat is Available, the server instantly:
  • Changes the seat to Locked
  • Attaches your session/user
  • Starts a short countdown timer (usually 5–10 minutes)

⏱️ All of this happens in milliseconds.

This is the single most important step in preventing double bookings.
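The check-and-lock step above can be sketched in a few lines. This is a minimal, in-process illustration, not BookMyShow's actual implementation: class names, statuses, and the 5-minute TTL are assumptions, and a real system would use a shared store (e.g. a database row with a conditional update, or a Redis key with a TTL) rather than a Python lock.

```python
import threading
import time

class Seat:
    def __init__(self, seat_id):
        self.seat_id = seat_id
        self.status = "AVAILABLE"      # AVAILABLE -> LOCKED -> BOOKED
        self.locked_by = None
        self.lock_expires_at = None

class SeatInventory:
    LOCK_TTL_SECONDS = 300             # the short countdown timer

    def __init__(self, seats):
        self._seats = {s.seat_id: s for s in seats}
        self._mutex = threading.Lock()  # serializes concurrent requests

    def try_lock(self, seat_id, user_id):
        """Atomic check-and-lock: only the first request can succeed."""
        with self._mutex:
            seat = self._seats[seat_id]
            if seat.status != "AVAILABLE":
                return False                      # later clicks lose
            seat.status = "LOCKED"
            seat.locked_by = user_id
            seat.lock_expires_at = time.time() + self.LOCK_TTL_SECONDS
            return True

inventory = SeatInventory([Seat("A10")])
print(inventory.try_lock("A10", "user-1"))   # True: first request wins
print(inventory.try_lock("A10", "user-2"))   # False: seat already locked
```

The key property is that the status check and the status change happen inside one critical section, so no two requests can both see "Available".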

Why the “First Click” Wins

If two people try to book Seat A10 at the same time:

  • The request that reaches the server first wins
  • The seat is immediately locked
  • The second request sees a “Seat unavailable” rejection

There’s no debate or comparison—just a strict check of the current state.

“But I Can Still See the Seat…”

This confuses many users.

Why does a seat still look available on my screen but fail when I click it?

Because:

  • Seat layouts in the UI are often cached
  • They don’t refresh every millisecond
  • The server always has the latest truth

👉 The UI shows approximate reality.
👉 The server enforces actual reality.

The server is the final authority.

During Payment: The Critical Phase

While you’re entering payment details:

  • The seat stays Locked
  • No one else can book it
  • The lock timer keeps ticking

Two outcomes are possible:

✅ Payment Success

  • Seat changes from Locked → Booked
  • Lock is removed
  • Seat is permanently sold

❌ Payment Failed or Timeout

  • Lock automatically expires
  • Seat returns to Available
  • Another user can book it

No manual cleanup. No human intervention.
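The two payment outcomes can be modeled as a tiny state machine. This is an illustrative sketch with made-up names and an in-memory dict; production systems typically lean on a store with built-in expiry (e.g. a Redis key with a TTL) so the "lock expired" transition needs no manual cleanup at all.

```python
import time

locks = {}      # seat_id -> {"user": ..., "expires_at": ...}
booked = set()  # permanently sold seats

def _expire(seat_id, now):
    lock = locks.get(seat_id)
    if lock and lock["expires_at"] <= now:
        del locks[seat_id]               # timer ran out: seat is freed

def lock_seat(seat_id, user_id, ttl=300, now=None):
    now = time.time() if now is None else now
    _expire(seat_id, now)
    if seat_id in booked or seat_id in locks:
        return False
    locks[seat_id] = {"user": user_id, "expires_at": now + ttl}
    return True

def confirm_payment(seat_id, user_id, now=None):
    now = time.time() if now is None else now
    _expire(seat_id, now)
    lock = locks.get(seat_id)
    if lock and lock["user"] == user_id:
        del locks[seat_id]
        booked.add(seat_id)              # Locked -> Booked
        return True
    return False                         # lock expired or never held

# Payment success path:
lock_seat("A10", "u1", now=0)
print(confirm_payment("A10", "u1", now=60))   # True: seat is sold

# Timeout path: locked at t=0 with a 300s TTL, payment arrives too late
lock_seat("B5", "u2", now=0)
print(confirm_payment("B5", "u2", now=301))   # False: lock expired
print(lock_seat("B5", "u3", now=302))         # True: another user gets it
```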

Why Seats “Disappear” and Then Reappear

You’ve probably noticed this:

  • A seat disappears
  • A few minutes later, it’s back

What actually happened:

  1. Someone locked the seat
  2. Didn’t complete payment
  3. Timer expired
  4. System released the lock

The seat didn’t magically return—it was freed.

How This Scales During Big Releases

For blockbuster openings:

  • Thousands of users hit the same seats
  • Each seat allows only one active lock
  • All other requests are rejected instantly

This prevents:

  • Double bookings
  • Duplicate payments
  • Refund nightmares
  • Loss of user trust

This locking mechanism is the backbone of any large-scale ticketing system.

The One Rule That Powers Everything

Internally, BookMyShow follows one iron rule:

A seat can be locked by only one user at a time.

Everything else—payment, timers, releases—is built around this principle.

Final Thought

That tiny “Seat unavailable” message isn’t a bug.

It’s proof that a real-time system made a correct decision in milliseconds.

Next time a seat vanishes right before you book it, remember: Someone reached the server a fraction of a second earlier— and the system did exactly what it was designed to do. 🎯


If you enjoyed this breakdown, this same logic powers flight bookings, hotel reservations, and even flash-sale e-commerce checkouts.

 

Jump into our new LinkedIn thread on — How BookMyShow Stops Two People From Booking the Same Seat

Also, read our last article: How Websites Remember You: Cookies vs Local Storage

 


How Websites Remember You: Cookies vs Local Storage

Reading Time: 2 minutes

Websites remember things about you all the time:

  • you stay logged in
  • dark mode stays on
  • items remain in your cart

This happens because your browser saves small pieces of information.

Two common ways it does this are Cookies and Local Storage.

They are often confused — so let’s explain them properly.

 

Think of a Website Visit Like Visiting a Shop

When you visit a shop, two kinds of information exist:

  1. Who you are
  2. How you like things

Cookies and Local Storage handle these two different jobs.

 

Cookies (Who You Are)

Cookies help a website recognize you.

Example:

  • You enter a café
  • The staff knows you’re a regular
  • You don’t need to explain yourself again

That’s what cookies do.

Cookies are used to:

  • Keep you logged in
  • Remember that it’s still you as you move between pages
  • Maintain your session

Without cookies, websites would forget you every time you click a link.

 

Local Storage (How You Like Things)

Local Storage remembers your preferences, not your identity.

Local Storage is used to:

  • Remember dark or light mode
  • Save language preference
  • Store app settings

It makes the website feel comfortable — not secure.

 

Simple Rule

  • Cookies = Who you are
  • Local Storage = How you like things

 

Technical Explanation

Now let’s look at what’s actually happening under the hood.

Cookies (Server Communication)

Cookies are part of the HTTP protocol.

Key technical traits:

  • Automatically sent with every request to the server
  • Read and validated by backend systems
  • Often store session IDs or auth tokens
  • Can be secured using attributes like HttpOnly, Secure, and SameSite

 

Why cookies exist:

HTTP is stateless. Cookies allow the server to recognize multiple requests as coming from the same user.

That’s why authentication lives in cookies.
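A minimal model of that idea: the server hands the browser a session ID, and every later request carries it back. This is a conceptual sketch, not a framework's real API; an actual server would set the cookie via a `Set-Cookie` response header, ideally with the HttpOnly, Secure, and SameSite attributes.

```python
import secrets

sessions = {}   # session_id -> user data, held on the server

def login(username):
    session_id = secrets.token_hex(16)
    sessions[session_id] = {"user": username}
    # The real response would include:
    #   Set-Cookie: session_id=<value>; HttpOnly; Secure; SameSite=Lax
    return session_id

def handle_request(cookie_session_id):
    """HTTP is stateless; the cookie is what links requests together."""
    session = sessions.get(cookie_session_id)
    if session is None:
        return "401 Unauthorized: please log in"
    return f"200 OK: welcome back, {session['user']}"

sid = login("asha")
print(handle_request(sid))        # recognized across requests
print(handle_request("bogus"))    # unknown cookie: treated as logged out
```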

 

Local Storage (Client-Side Storage)

Local Storage is a browser-only storage mechanism.

Key technical traits:

  • Never sent to the server automatically
  • Accessible only via JavaScript
  • Stored as key–value pairs
  • Persists even after browser refresh or restart

Common uses:

  • UI preferences
  • Feature flags
  • Temporary app state

Local Storage is not secure and should never store sensitive data.
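Local Storage itself is a browser API (JavaScript's `localStorage`), but its contract is easy to model: string key-value pairs that survive restarts and never travel to the server. As a rough illustration of that contract only, here is a Python sketch that mimics it with a JSON file; the class and file name are invented for this example.

```python
import json
import os
import tempfile

class LocalStorageModel:
    """Mimics localStorage semantics: string keys/values, persistent."""
    def __init__(self, path):
        self.path = path

    def set_item(self, key, value):
        data = self._load()
        data[key] = str(value)           # localStorage stores strings only
        with open(self.path, "w") as f:
            json.dump(data, f)

    def get_item(self, key):
        return self._load().get(key)     # None if absent, like null in JS

    def _load(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.gettempdir(), "demo_local_storage.json")
if os.path.exists(path):
    os.remove(path)                      # start from a clean slate

store = LocalStorageModel(path)
store.set_item("theme", "dark")

# A "restart": a brand-new object reading the same file still sees the value
print(LocalStorageModel(path).get_item("theme"))   # dark
```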

 

One Scenario, Two Correct Choices

Login

  • Cookie: Session token
  • Needed on every server request

Dark Mode

  • Local Storage: { theme: "dark" }
  • Used only by browser rendering logic

 

Final Takeaway

Cookies and Local Storage solve different problems:

  • Cookies connect the browser to the server
  • Local Storage improves the user experience

Using the right one is not optional — it’s essential for security, performance, and reliability.

 

Jump into our new LinkedIn thread on —  Ever Wondered How Websites Remember You? Cookies vs Local Storage
Also, read our last article: API Rate Limits


The Hidden Engineering Behind the Undo Button

Reading Time: 2 minutes

The Undo button feels magical.

You press it… and poof — your mistake disappears. But behind that tiny arrow lies one of the most complex, carefully engineered systems in all of software.

Undo looks simple. Undo is not simple.

 

🧱 1. Apps Quietly Save a “Before” Version

Whenever you make a change, apps secretly store what your data looked like just before the action.

Examples:

  • Before you type a sentence
  • Before you crop a photo
  • Before you move a file
  • Before you delete something

Undo simply restores that “before” snapshot.

This means the app must track every meaningful change — instantly and reliably.

 

🗂 2. Apps Keep a Whole Stack of Versions

Each time you make a new edit, apps add a new version on top of a “version stack”:

Version 1 → original  
Version 2 → after first change  
Version 3 → after second change  
... 

Undo = step back one version.

This is how editors like Google Docs, Notes, Notion, and photo apps maintain clean editing history.
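The version stack can be sketched in a few lines. This is the stack discipline only, with invented names; real editors usually store compact diffs or commands rather than full snapshots, but the push-on-edit, pop-on-undo shape is the same.

```python
class UndoableDocument:
    def __init__(self, text=""):
        self.text = text
        self._history = []                 # stack of "before" snapshots

    def edit(self, new_text):
        self._history.append(self.text)    # save the before-version
        self.text = new_text

    def undo(self):
        if self._history:
            self.text = self._history.pop()   # step back one version

doc = UndoableDocument()
doc.edit("Hello")
doc.edit("Hello, world")
doc.undo()
print(doc.text)         # Hello
doc.undo()
print(repr(doc.text))   # '' -- back to the original empty document
```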

 

🌒 3. “Shadow Copies” Protect You From Crashes

A shadow copy is a temporary, invisible backup created as you edit.

It protects your work if:

  • the app crashes
  • the phone freezes
  • your internet drops
  • you close the app accidentally

Undo and auto-restore both use this hidden copy.

These shadow copies are why your Instagram caption draft or WhatsApp message often reappears after a crash.

 

🗑 4. Deletes Aren’t Real Deletes (Soft Delete)

When you “delete” something, it usually isn’t removed at all.

Apps typically:

  1. mark the item as deleted
  2. move it to Trash / Recently Deleted
  3. keep it for 30 days

Undo simply undeletes it from this safe zone.

That’s why:

  • Photos sit in “Recently Deleted”
  • Gmail keeps emails in Trash
  • Notes holds deleted items for recovery

Undo becomes trivial — because the data still exists.
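Soft delete in miniature might look like this. Field names and the 30-day window are illustrative, not any real app's schema: "deleting" just flags the record, Undo unflags it, and only a background purge job ever removes data for real.

```python
import time

RETENTION_DAYS = 30

photos = {
    "img_001": {"deleted": False, "deleted_at": None},
}

def delete(photo_id, now=None):
    photos[photo_id]["deleted"] = True           # mark, don't remove
    photos[photo_id]["deleted_at"] = time.time() if now is None else now

def undo_delete(photo_id):
    photos[photo_id]["deleted"] = False          # the data never left
    photos[photo_id]["deleted_at"] = None

def purge(now):
    """Background job: hard-delete only after the retention window."""
    cutoff = now - RETENTION_DAYS * 86400
    for pid in [p for p, v in photos.items()
                if v["deleted"] and v["deleted_at"] < cutoff]:
        del photos[pid]

delete("img_001", now=0)
undo_delete("img_001")
print(photos["img_001"]["deleted"])   # False -- recovery was one unflag
```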

 

🔄 5. What Real Rollbacks Look Like

Some user actions make multiple internal changes. Undo has to reverse all of them together — safely and in the right order.

A. Photo Editing

Applying a filter may update:

  • color channels
  • brightness
  • contrast
  • metadata
  • preview image

Undo must roll back every adjustment at once.

B. Text Editing

Typing can change:

  • the characters
  • line breaks
  • formatting

These are real, everyday examples of “multi-step rollbacks.”

 

📦 6. Undo Needs Storage (and Smart Limits)

Behind the scenes, Undo systems store:

  • temporary backups
  • older versions
  • shadow copies
  • action history

Apps must choose how to balance speed, memory, and user expectations:

  • How many Undo levels are allowed?
  • How long should history be stored?
  • When should old versions be deleted?

This is why some apps allow unlimited Undo, while others only allow “Undo last action.”

 

🎯 Final Takeaway

The Undo button depends on an entire hidden ecosystem:

  • version snapshots
  • shadow copies
  • soft deletes
  • reversible operations
  • conflict resolution
  • safe, atomic rollbacks

It’s one of the smartest features in software — engineered to look effortlessly simple.

Undo isn’t just a button. Undo is a system.



API Rate Limits: The Rule Governing Every API

Reading Time: 3 minutes

When your app suddenly becomes slow, freezes, or stops loading data, most teams immediately point fingers at the backend infrastructure. But here’s the truth: one of the most common reasons apps break isn’t slow servers—it’s API rate limits.

 

🛠️ What Are Rate Limits?

Rate limits define how many API requests a user, application, or IP address can send within a specific time window. Think of them as traffic control for your API endpoints.

Common examples include:

  • 100 requests per minute
  • 10 requests per second
  • 1,000 requests per day

When your application exceeds the allowed limit, the server automatically rejects additional requests—often without warning.

 

📌Example

Consider your app’s “Home Feed” that calls an API every time a user opens the app. If 1,000 users simultaneously launch your app, and your rate limit is 500 requests per second, roughly half of those users will be blocked until the next time window begins.

 

🎯 Why Do APIs Use Rate Limits?

Rate limits exist to protect system performance and ensure fair resource allocation. They prevent:

✓ Traffic overload — Sudden spikes during flash sales, product launches, or viral moments can overwhelm servers.

✓ Bot or script attacks — Malicious scripts sending thousands of requests per second to exploit vulnerabilities or scrape data.

✓ Increased infrastructure costs — More requests mean more compute power, which translates to higher operational expenses.

✓ DDoS-like behavior — Even unintentional, such as a mobile app configured to refresh data every second.

Rate limits ensure APIs remain stable, responsive, and accessible for all users.

 

🚨 What Happens When Rate Limits Are Hit?

When you exceed rate limits, you’ll typically encounter:

  • HTTP 429 – Too Many Requests status code
  • API timeouts or significantly delayed responses
  • Screens stuck on loading indicators
  • Non-functional buttons and interactions
  • Data that fails to refresh

 

🔄 Common Rate Limiting Algorithms

API rate limiting isn’t one-size-fits-all. Different APIs implement different strategies for controlling request flow. Understanding these algorithms helps you design better integrations and troubleshoot issues more effectively.

 

1️⃣ Fixed Window Limiting

The most straightforward approach. It limits requests within fixed time intervals.

How it works: You can make 100 requests per hour. Once you hit that limit, all additional requests are rejected until the next hour begins.
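A fixed-window limiter fits in a few lines: count requests per discrete window and reject once the count passes the limit. This is a generic sketch with illustrative parameters, not any particular API's implementation.

```python
class FixedWindowLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}                 # window index -> request count

    def allow(self, now):
        bucket = int(now // self.window)
        self.counts[bucket] = self.counts.get(bucket, 0) + 1
        return self.counts[bucket] <= self.limit

limiter = FixedWindowLimiter(limit=100, window_seconds=3600)
print(all(limiter.allow(now=10) for _ in range(100)))  # True: within limit
print(limiter.allow(now=20))     # False: the 101st request this hour
print(limiter.allow(now=3601))   # True: the next hour starts fresh
```

Its known weakness: a burst at the end of one window plus a burst at the start of the next can briefly double the effective rate, which is what the sliding window below fixes.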

 

2️⃣ Sliding Window Limiting

A more sophisticated approach that applies limits to a rolling time period.

How it works: You can make 100 requests in any 60-minute period. The system tracks requests over the most recent 60 minutes continuously, not in fixed blocks.
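A sliding-window limiter keeps the timestamps of recent requests and evicts anything older than the rolling window before deciding. Again a generic sketch with made-up numbers:

```python
from collections import deque

class SlidingWindowLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()        # times of accepted requests

    def allow(self, now):
        # Evict requests that have aged out of the rolling window.
        while self.timestamps and self.timestamps[0] <= now - self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=3, window_seconds=60)
print([limiter.allow(t) for t in (0, 10, 20)])  # [True, True, True]
print(limiter.allow(30))   # False: 3 requests already in the last 60s
print(limiter.allow(61))   # True: the request at t=0 has aged out
```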

 

3️⃣ Leaky Bucket Algorithm

Models rate limiting as a bucket with a hole at the bottom.

How it works:

  • Incoming requests fill the bucket
  • Requests “leak” out at a constant rate (processed at steady intervals)
  • If the bucket overflows, new requests are denied
  • Bucket has a maximum capacity

 

Example:

  • Bucket capacity: 100 requests
  • Leak rate: 10 requests per second
  • If 50 requests arrive in 1 second, they’re queued
  • System processes them at 10/second, taking 5 seconds total
  • If 150 requests arrive instantly, 50 are rejected
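The example above can be sketched directly: the bucket drains at a constant rate, and arrivals that would overflow its capacity are rejected. A simplified level-counter version (real implementations often queue the accepted requests for steady processing):

```python
class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity         # max queued requests
        self.leak_rate = leak_rate       # requests drained per second
        self.level = 0.0
        self.last = 0.0

    def allow(self, now):
        # Drain the bucket for the time elapsed since the last arrival.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + 1 <= self.capacity:
            self.level += 1              # request joins the queue
            return True
        return False                     # bucket would overflow

bucket = LeakyBucket(capacity=100, leak_rate=10)
accepted = sum(bucket.allow(now=0) for _ in range(150))
print(accepted)   # 100 -- the other 50 overflow, matching the numbers above
```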

 

4️⃣ Token Bucket Algorithm

Uses tokens that regenerate over time to control request rates.

How it works:

  • A bucket holds tokens (capacity: e.g., 100 tokens)
  • Tokens are added at a fixed rate (e.g., 10 tokens per second)
  • Each request consumes 1 token
  • If no tokens available, request is rejected
  • Bucket can fill to maximum capacity when idle
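Those rules translate almost one-to-one into code. A generic sketch (parameters match the example numbers above, not any specific API):

```python
class TokenBucket:
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.refill_rate = refill_rate       # tokens added per second
        self.tokens = float(capacity)        # bucket starts full
        self.last = 0.0

    def allow(self, now):
        # Mint tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1                 # each request spends one token
            return True
        return False

bucket = TokenBucket(capacity=100, refill_rate=10)
burst = sum(bucket.allow(now=0) for _ in range(120))
print(burst)   # 100: a full bucket absorbs a 100-request burst, rejects 20
```

The practical difference from the leaky bucket: token buckets permit short bursts up to capacity, while leaky buckets smooth output to a constant rate.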

 

🧰 How to Handle Rate Limits Properly

✔ 1. Implement Local Caching

Store API responses locally so screens don’t repeatedly request the same data. Use appropriate cache invalidation strategies based on data freshness requirements.

✔ 2. Debounce User Inputs

Collect all keystrokes within a defined window (typically 300-500ms) and send a single API request.

Example: Typing “weather” should trigger one request, not seven separate calls.
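Debouncing is usually done in frontend code, but the trailing-edge pattern is language-agnostic. A sketch in Python using a resettable timer (the class and the 200 ms wait are illustrative):

```python
import threading

class Debouncer:
    """Fires fn only after `wait` seconds of silence since the last call."""
    def __init__(self, fn, wait):
        self.fn = fn
        self.wait = wait
        self._timer = None

    def call(self, *args):
        if self._timer is not None:
            self._timer.cancel()         # a new keystroke resets the clock
        self._timer = threading.Timer(self.wait, self.fn, args)
        self._timer.start()

requests = []
search = Debouncer(lambda q: requests.append(q), wait=0.2)

for partial in ["w", "we", "wea", "weat", "weath", "weathe", "weather"]:
    search.call(partial)                 # seven keystrokes...

threading.Event().wait(0.5)             # let the quiet period elapse
print(requests)                         # ['weather'] -- one request, not seven
```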

✔ 3. Exponential Backoff for Retries

When a request fails, implement progressive delays before retrying:

1s → 2s → 4s → 8s

This prevents overwhelming the API during recovery periods and gives the server time to stabilize.
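The delay ladder is simple to compute: double a base delay per attempt, cap it, and (a common refinement) add random jitter so many clients don't retry in lockstep. Function name and defaults here are illustrative.

```python
import random

def backoff_delays(attempts, base=1.0, cap=30.0, jitter=False):
    """Return the wait (in seconds) before each retry attempt."""
    delays = []
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))   # 1, 2, 4, 8, ... capped
        if jitter:
            delay = random.uniform(0, delay)      # "full jitter" variant
        delays.append(delay)
    return delays

print(backoff_delays(4))   # [1.0, 2.0, 4.0, 8.0] -- the ladder shown above
```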

✔ 4. Request Incremental Updates

Send only changed data or use timestamp-based queries instead of requesting complete datasets.

Example: Instead of calling /get-all-notifications, use /get-new-notifications?after=timestamp.

✔ 5. Queue Background Requests

Implement request queuing for non-urgent operations like uploading files, syncing logs, or backing up fitness data.

✔ 6. Strategic Data Preloading

Load frequently accessed data during user login so subsequent screens don’t need to make redundant API calls.

 

🧠 Key Takeaways

  • Rate limits are API protection mechanisms, not backend performance issues
  • Applications must be designed with intelligent request management from the start
  • Caching, debouncing, exponential backoff, and request queuing prevent rate limit violations
  • Reviewing rate limit logs often reveals the root cause of “mysterious” app failures
  • Proper rate limit handling delivers faster application performance, fewer runtime errors, a superior user experience, and reduced infrastructure costs

When your app experiences issues, always ask: “Are we hitting a rate limit?”

Understanding and respecting rate limits isn’t just about avoiding errors—it’s about building robust, scalable applications that provide consistent experiences for all users.


Jump into our new LinkedIn thread on —  API Rate Limits: The Silent Rule That Controls Every API
Also, read our last article: OAuth vs JWT: When to Use Each


OAuth vs JWT: When to Use Each

Reading Time: 2 minutes

Every digital product today needs a secure, fast, and frictionless login experience. Users want to sign in instantly, stay logged in without constant prompts, and trust that their data is protected behind the scenes.

But behind this simple experience, two powerful concepts quietly do the heavy lifting:

👉 OAuth

👉 JWT (JSON Web Token)

Both are extremely popular, but they play very different roles in the authentication journey.

Let’s break them down in the simplest possible way.

 

🧩 What Is OAuth?

OAuth is an authorization framework that lets users give an app limited access to their data without sharing their password.

You’ve seen OAuth in action every time you choose:

  • Continue with Google

  • Login with Apple

  • Sign in with Facebook

Here, OAuth allows the app to confirm who you are using a trusted identity provider — without the app ever touching your actual password.

 

⭐ Key Idea:

OAuth securely grants permission and verifies identity using another trusted system.

Think of OAuth as the secure gatekeeper.

 

🧾 What Is JWT?

JWT is a token format used after the user has logged in.

Once a user is authenticated, the app issues a JWT — a compact, digitally signed token. This token is then sent with every request to the server so the app knows:

  • who the user is

  • whether the request is valid

  • whether the user session is still active

This means users don’t have to log in again repeatedly.

 

⭐ Key Idea:

JWT maintains the user’s identity across requests and keeps them logged in securely.

Think of JWT as your digital access pass with an expiry time.
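To make the "signed, expiring pass" concrete, here is a stripped-down, HS256-style token built from the standard library. This is a teaching sketch only: in production, use a vetted library (e.g. PyJWT) rather than hand-rolling token code.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"   # illustrative; never hard-code real keys

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user_id, ttl, now):
    header = b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64(json.dumps({"sub": user_id, "exp": now + ttl}).encode())
    sig = b64(hmac.new(SECRET, f"{header}.{payload}".encode(),
                       hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token, now):
    header, payload, sig = token.split(".")
    expected = b64(hmac.new(SECRET, f"{header}.{payload}".encode(),
                            hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None                            # tampered token
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return claims if claims["exp"] > now else None   # None once expired

token = issue_token("user-42", ttl=3600, now=0)
print(verify_token(token, now=10) is not None)   # True: pass still valid
print(verify_token(token, now=4000))             # None: expiry has passed
```

Because the signature covers the header and payload, the server can verify the token without any server-side session lookup, which is exactly the scalability property listed above.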

 

🔍 OAuth vs JWT in Simple Words

  • OAuth = Helps the user get in securely

  • JWT = Helps the user stay in securely

OAuth handles the login/permission part. JWT handles the ongoing session part.

They’re not competitors — they’re teammates.

 

🤝 Do OAuth and JWT Work Together?

Absolutely — in fact, that’s extremely common.

Here’s a typical flow:

1️⃣ User selects Login with Google → OAuth manages the secure login
2️⃣ App receives identity details → App issues a JWT
3️⃣ User stays logged in smoothly → JWT manages the session

So it’s not OAuth vs JWT, but OAuth + JWT working together at different stages.

 

📌 When to Use OAuth

Use OAuth when you need:

✔ Social login (Google, Apple, Facebook)

✔ Password-less sign-in

✔ Verification through trusted identity providers

✔ Limited/controlled access to user data

 

📌 When to Use JWT

Use JWT when your app needs:

✔ Seamless user sessions

✔ Fast verification for APIs

✔ A scalable system without server-side sessions

✔ Mobile-friendly and microservices-friendly authentication

🧠 Real-Life Example

You open an app → choose Login with Google. 🔐 OAuth takes care of that entire login + permission process.

You start using the app, close it, reopen it, and you’re still logged in. 🔐 JWT is the reason you don’t need to log in again.

 

⭐ Final Thought

OAuth and JWT are not replacements for each other — they are solutions for different parts of the authentication workflow.

  • OAuth = How securely the user gets authenticated

  • JWT = How long and how smoothly the user stays authenticated

By using both correctly, apps become more secure, scalable, and user-friendly.

Jump into our new LinkedIn thread on —  OAuth vs JWT: When to Use Each
Also, read our last article: The 2025 Cloudflare Outage: A Business Lesson


The 2025 Cloudflare Outage: A Business Lesson

Reading Time: 2 minutes

On 18 November 2025, a major Cloudflare outage disrupted a huge portion of the internet. Users around the world suddenly found apps and websites refusing to load. Platforms like ChatGPT, X, Spotify, Canva, and countless others showed messages like “Something went wrong.”

Cloudflare later published an official explanation: an internal change mistakenly created a system file that grew far larger than expected. Their software wasn’t designed to handle a file of that size, which triggered a chain reaction that temporarily broke parts of their global network.

This wasn’t a cyberattack — it was an internal error. But because millions of businesses rely on Cloudflare, the internet effectively “broke” for several hours.

Cloudflare official blog — https://blog.cloudflare.com/18-november-2025-outage/

 

What Actually Happened?

Cloudflare made a small internal configuration change.

That change caused a particular system file to grow extremely large.

Cloudflare’s software could not process the file. As a result, requests began failing across their network.

Because Cloudflare sits in front of a massive portion of the internet — DNS, CDN, security, routing — the impact was global.

Again: not a hack, not an attack — simply a mistake with outsized consequences.

 

Why This Matters for Every Business

This outage highlights a big truth: Modern businesses rely heavily on external providers — often more than they realize.

Your own system might be running perfectly, your servers might be healthy, your code might have zero errors…

But if the service you depend on goes down, you go down with it.

This can cause:

  • Lost sales and revenue
  • Angry users
  • Failed payments
  • Bad reviews
  • Massive customer support spikes
  • Long-term damage to brand trust

In other words: your uptime is only as strong as your weakest dependency.

 

What Businesses Should Do Now

1. Don’t Rely on a Single Provider

For critical infrastructure, always have redundancy:

  • DNS → use multiple DNS providers
  • CDN → have fallback CDN or direct origin routing
  • Firewall / security → multi-layer protection
  • Authentication → secondary auth provider in emergencies

If one provider fails, the other keeps your service alive.

2. Build a Backup / Failover Plan

Your system should be capable of switching to alternative providers automatically or within minutes.

Even simple fallback routing can protect you from major outages.
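The simplest form of that fallback routing is "try the primary, then try the backup". The sketch below is deliberately generic; the provider records and `fetch_via` function are hypothetical stand-ins for real CDN or DNS clients.

```python
def fetch_via(provider, request):
    """Stand-in for a real provider call; raises when the provider is down."""
    if provider["healthy"]:
        return f"200 OK via {provider['name']}"
    raise ConnectionError(f"{provider['name']} is down")

def fetch_with_failover(providers, request):
    last_error = None
    for provider in providers:
        try:
            return fetch_via(provider, request)
        except ConnectionError as err:
            last_error = err             # note the failure, try the next one
    raise last_error                     # every provider failed

providers = [
    {"name": "primary-cdn", "healthy": False},   # simulated outage
    {"name": "backup-cdn", "healthy": True},
]
print(fetch_with_failover(providers, "/index.html"))  # 200 OK via backup-cdn
```

Real failover also needs health checks and timeouts so a slow provider doesn't stall every request, but even this basic shape keeps a single-provider outage from taking you down.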

3. Monitor Your Website and Services

Set alerts for:

  • High error rates
  • Slower loading
  • API failures
  • Traffic drops
  • DNS resolution problems

The sooner you know, the sooner you can take action.

4. Communicate Quickly with Users

Silence makes outages worse.

If you’re affected by a global provider issue, send a simple, reassuring update:

“We’re currently impacted by a Cloudflare outage. Our team is monitoring the situation and will update you shortly.”

Clear communication builds trust, even during downtime.

5. Test Failure Scenarios Every Few Months

Practice breaking things on purpose:

  • What happens if your CDN fails?
  • What if DNS stops resolving?
  • What if your firewall blocks all traffic?

These tests reveal weaknesses before real disasters occur.

 

Conclusion

The Cloudflare outage of November 18, 2025 is a powerful reminder that no provider — not even the biggest — is immune to failure.

Businesses must design their systems with resilience in mind:

  • Multiple providers
  • Failover plans
  • Monitoring and alerts
  • Transparent communication
  • Regular failure testing

The internet is interconnected. A single mistake from a single company can disrupt millions of users.

Redundancy isn’t optional anymore — it’s essential for protecting your users, your brand, and your revenue.

Also, read our last article: Efficiently Handling Large File Uploads (PDF/DOCX) in AWS
