Sep 16, 2021
11 min read

Why you need to build globally distributed applications

Today's users of web and mobile applications and services expect fast and outstanding experiences. Delivering successful web services and applications means meeting these baseline expectations:

  1. Fast loading times everywhere around the world,
  2. Constantly available services thanks to high-availability and resiliency,
  3. Meeting compliance requirements.

In this blog post, we dive into why these three goals are vital to modern web applications and services. Then, we will look at how building global and distributed architectures achieve these goals. Finally, we conclude with how your web services and applications can become global and distributed from the get-go by deploying on a feature-rich global serverless platform like Koyeb.


Fast loading times and 100% availability are not optional

Milliseconds matter

For a smooth user experience, you want to aim for a response time in milliseconds. Even a one-second response time will still feel almost seamless to a user. However, anything that takes longer than one second will be noticed and could lead to a user bouncing in search of a faster response elsewhere.

These expectations are in part a product of how quickly people can ingest visual information. There is no shortage of statistics and experiences out there that demonstrate that long loading times create negative experiences for end-users.

  • When Unbounce surveyed 750 consumers, almost 70% affirmed that page speed impacts their willingness to purchase from an e-commerce site.
  • In 2018, Google found that as page load time goes from 1 to 3 seconds, the probability of a bounce increases by 32%. Stretch that to 5 seconds, and the increase in bounce probability jumps to 90%.
  • Meeting the expectations for mobile devices is even more challenging. Google also found most mobile device users leave a page that takes more than 3 seconds to load, which is noteworthy given that over half of web traffic occurs on mobile devices.

If you want to see more facts about how latency affects user experience, you can check out 12 Statistics That Show How Page Load Time Impacts Conversion Rate, 40 Fascinating UX Statistics, and 100 UX Statistics Every UX Professional Needs to Know.

100% Availability is the standard

Another key component that influences user experience is the availability of web apps and services. While that sounds obvious, achieving high availability is not so straightforward. Here are a few typical reasons that services and applications become unavailable:

  • Unexpected spikes in traffic,
  • Downtime caused by routine maintenance,
  • A point of failure in the infrastructure,
  • A power outage.

Image courtesy of commitstrip.com

When services are unavailable or unreliable, users churn or consider switching to a competitor. This churn results in a loss of business. One textbook example of service disruption was when Amazon.com became unavailable for half an hour in 2019. Based on Amazon's previous quarterly reports, some estimated the losses from that downtime at almost $2 million.

Whether you can achieve high availability is often determined by the resources at your disposal and by your priorities.

While established enterprise companies have the means to implement redundant infrastructure and scale resources horizontally, small and medium-sized businesses do not always have the resources to maintain high availability through horizontal scaling, an option that can get quite expensive. Instead, they often maintain high availability by investing in redundant infrastructure.

Luckily, there are now easier ways to build highly available applications thanks to globally distributed architectures.

Building global and distributed architectures for performance, availability, and compliance goals

As we've just outlined above, speed and availability are indispensable requirements that shape the success of web apps and services.

A comprehensive solution to address these challenges is to build on top of distributed and global architecture. Before going any further, it is worthwhile to clarify what we mean by global and distributed architecture:

  • Global concerns the business side of the architecture. It is about a service or application reaching users in global markets.
  • Distributed refers to the technological dimension of the architecture. Instead of running workloads in a centralized location, distributed means workloads run in different physical locations.

Distributed Architecture

We emphasize different physical locations because workloads running on the same server in virtualized environments do not count as distributed. Think about the worst-case scenario: a server or data center catches fire. With a truly distributed architecture, services will still be up and running thanks to redundant workloads in other locations.

There is distributed across data centers, then there is distributed across multiple regions. We believe a truly global and distributed architecture means applications and services run properly across continents.

Now that we have defined these terms, let's take a closer look at how distributed and global architecture enables you to meet the expectations surrounding loading times, availability, and compliance.

Putting the edge in competitive edge

Potential users and customers are dispersed around the world. Regardless of a user's location, you'll want them to enjoy rapid loading times.

This is where an edge network comes in. Edge networks run services closer to users, which drastically reduces loading times by returning cached responses. They also cut bandwidth costs, since fewer requests have to make the full round trip to the origin to be processed.

Koyeb built-in edge network

Imagine an application is running from Paris, France. Without an edge network, requests from all over the world would have to make lengthy round trips to be processed. With edge locations, responses can be cached on proxy servers closer to end users, regardless of their physical location. On Koyeb's edge network, workloads run at the edge so computing happens closer to your users.

Test the Koyeb demo app to see how close you are to our global edge network.

If you want to learn more about how edge networks improve performance, you can check out Using Cache-Control and CDNs to Improve Performance and Reduce Latency. That being said, not all performance issues are solved by an edge network.
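To make the caching side concrete, here is a minimal sketch of an origin handler, written with Node's built-in http module in TypeScript, that marks a response as cacheable so an edge proxy or CDN is allowed to serve it without a round trip to the origin. The route and max-age values are illustrative, not a Koyeb-specific API:

```ts
import { createServer } from "http";

// Minimal origin server: the Cache-Control header tells edge proxies and
// CDNs that they may cache this response close to users.
const server = createServer((req, res) => {
  if (req.url === "/products") {
    res.writeHead(200, {
      "Content-Type": "application/json",
      // Cache publicly for 60 seconds; serve a stale copy briefly while revalidating.
      "Cache-Control": "public, max-age=60, stale-while-revalidate=30",
    });
    res.end(JSON.stringify({ products: ["espresso", "latte", "mocha"] }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(3000);
```

With headers like these, repeated requests for the same content can be answered from a cache near the user instead of traveling back to the origin every time.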

Keep services up and running with distributed architecture

Another part of performance that is key to modern apps is high availability. It is what keeps services up and running 100% of the time. A distributed architecture lets you achieve high availability through:

  1. Redundancy, also known as fault tolerance
  2. Horizontal scalability

Redundancy is having duplicate resources of a system, such as servers, load balancers, or databases. This redundancy protects against single points of failure.

For example, if a system uses a single load balancer to distribute workloads across nodes, the entire application would go offline if that load balancer fails. With a redundant architecture, another load balancer is ready to step in if the first one goes down.

Being able to recover from errors is what makes this architecture fault-tolerant. Redundant architecture keeps services up and running, letting users enjoy a seamless experience.
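In production, failover is usually handled by redundant load balancers, DNS, or the platform itself, but a minimal sketch makes the idea tangible. The replica URLs below are placeholders:

```ts
// Hypothetical base URLs for two redundant replicas of the same service.
const REPLICAS = [
  "https://primary.example.com",
  "https://secondary.example.com",
];

// Try each replica in turn and return the first healthy response, so a
// single failing instance does not take the whole service down.
async function fetchWithFailover(path: string): Promise<Response> {
  let lastError: unknown;
  for (const base of REPLICAS) {
    try {
      const res = await fetch(new URL(path, base), {
        signal: AbortSignal.timeout(2000), // give up on a slow replica quickly
      });
      if (res.ok) return res;
      lastError = new Error(`${base} answered ${res.status}`);
    } catch (err) {
      lastError = err; // network error or timeout: move on to the next replica
    }
  }
  throw lastError;
}
```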

Horizontal scaling is another way to maintain high availability. Rather than responding to errors or system failures, horizontal scalability is about adapting the number of instances (also known as nodes, servers, or virtual machines) of a system to respond to changes in traffic.

When there is an increase in traffic to an application, horizontal scaling adds instances to handle the new requests. With these extra instances, the application can absorb the uptick in traffic, giving all users a better overall experience.
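As a rough illustration, an autoscaler applies a rule along these lines; the thresholds and instance limits below are made up for the example and are not Koyeb's actual policy:

```ts
// Simplified autoscaling rule: keep each instance at or below a target
// request rate, while staying within a minimum and maximum fleet size.
const TARGET_RPS_PER_INSTANCE = 500;
const MIN_INSTANCES = 2;  // keep some redundancy even at low traffic
const MAX_INSTANCES = 20; // cap cost

function desiredInstanceCount(currentRps: number): number {
  const needed = Math.ceil(currentRps / TARGET_RPS_PER_INSTANCE);
  return Math.min(MAX_INSTANCES, Math.max(MIN_INSTANCES, needed));
}

// Example: a spike from 800 to 6,000 requests per second
console.log(desiredInstanceCount(800));  // 2
console.log(desiredInstanceCount(6000)); // 12
```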

Multi-region deployments are a must to meet compliance requirements and improve performance

Compliance is another reason fueling the need for global and distributed architectures: requirements and laws for storing and processing data differ across countries and industries.

Reaching global audiences has become easier thanks to global platforms, however, this accessibility comes with responsibilities. While some may want to gloss over the details of compliance, regional data locality might be legally required to work in specific countries or with regulated industries.

Important compliance standards include:

  • GDPR: The European Union's General Data Protection Regulation applies to those processing personal data in the EU.
  • HIPAA: The US Health Insurance Portability and Accountability Act outlines how patient data can be stored and used.
  • SOC 2: Particularly important for SaaS providers, SOC 2 establishes clear expectations regarding the security, availability, and privacy of applications.
  • ISO 27001: An international standard for information security management.

In short, there are a lot of different compliance requirements out there. If you want to learn more about data protection laws around the world, Mike Woodward with Security Scorecard recently published a list of 16 countries with GDPR-like data privacy laws that you can check out here.

Deploy in multiple core regions to respect data locality and improve performance

One way to bring apps and services in line with these varying requirements is to run the application's core logic from data centers located in the same regions as its clients.

For example, if a health care application operates in Germany and the United States, it could run workloads and store data for the German market from Frankfurt to comply with GDPR requirements while processing and storing data for American clients in New York to comply with HIPAA.
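At the application layer, this can be as simple as a routing table keyed by the customer's market. The hostnames below are placeholders, and in a real system the market would come from the authenticated account rather than an arbitrary request parameter:

```ts
// Hypothetical mapping from a customer's market to the only region that is
// allowed to process and store their data.
const REGION_BACKENDS: Record<string, string> = {
  DE: "https://api.fra.example.com", // Frankfurt: GDPR-scoped data stays in the EU
  US: "https://api.nyc.example.com", // New York: HIPAA-scoped data stays in the US
};

// Route each request to the backend in the customer's home region so data
// never leaves the jurisdiction it must stay in.
function backendFor(market: string): string {
  const backend = REGION_BACKENDS[market];
  if (!backend) throw new Error(`No compliant region configured for ${market}`);
  return backend;
}

// Example: a German patient's record is only ever processed in Frankfurt.
const url = `${backendFor("DE")}/patients/123`;
```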

Multi-region deployments improve performance across the board and globe

In addition to addressing compliance requirements, deploying an application in multiple core locations resolves distance-related latency issues. If users for a web service or application are located in Singapore and London, but the service is running from Paris, users in Singapore will experience longer loading times as requests perform a round-trip to Paris. If a second core location closer to Singapore is added to the infrastructure, users in the region will enjoy a faster experience using the service.
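A back-of-the-envelope calculation shows why distance alone matters: light in optical fiber travels at roughly 200,000 km/s, which puts a hard floor on round-trip time before any processing happens. The distances below are approximate straight-line figures to a Paris origin:

```ts
// Approximate distances from users to a Paris origin, in kilometers.
const DISTANCE_KM: Record<string, number> = { London: 350, Singapore: 10700 };

// Light in fiber covers roughly 200 km per millisecond, so the physical
// round trip alone sets a lower bound on latency (real routes are longer).
const FIBER_KM_PER_MS = 200;

for (const [city, km] of Object.entries(DISTANCE_KM)) {
  const roundTripMs = (2 * km) / FIBER_KM_PER_MS;
  console.log(`${city}: at least ~${roundTripMs.toFixed(0)} ms per round trip`);
}
// London: ~4 ms, Singapore: ~107 ms, before routing hops, TLS handshakes,
// or any server processing time are added on top.
```

Serving users in Singapore from a nearby core location removes most of that latency floor.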

Deploying in multiple core locations is different from running services on top of a global edge network. While both work to provide a fast experience for users located around the world, edge networks typically solve latency issues related to static content, while multi-region deployments also improve performance for dynamic content that cannot usually be cached at the edge.

Finally, adding multiple core locations to an application's architecture incorporates a layer of redundancy. If one region were to fail for whatever reason, the other region would persist, maintaining a consistent uptime to handle all requests.

Make your distributed architectures global, resilient, and fast with 1 click

All this being said, developing your infrastructure to be globally available, reliable, and high performing is not only challenging, it requires a significant investment of time, cloud budget, and manpower. The more distributed your systems, the more complex it becomes to manage them.

We designed the Koyeb serverless platform to be the fastest way to deploy apps globally. Until now, the options for deploying web apps and services globally consisted of either building the global infrastructure yourself or paying for steep enterprise plans. You need a more agile way to deploy to meet the needs and expectations of today's users.

Learn more about the technology powering Koyeb with The Koyeb Serverless Engine: from Kubernetes to Nomad, Firecracker, and Kuma.

Global and distributed architecture is provided out-of-the-box with the Koyeb serverless platform's multi-region availability, private network with service mesh and discovery, and edge network. You can deploy your applications with one click in Koyeb's Console or you can deploy them directly from your terminal with the Koyeb CLI.

Additional features built into the platform include horizontal autoscaling to handle the ebbs and flows of your traffic, automatic fault recovery if one of your services ever goes down, automatic HTTPS (SSL) to secure your apps and APIs, built-in versioning if you ever need to roll back to a previous deployment, a message queue to handle asynchronous tasks, customizable health checks to keep your app up and running, and much more.

Sign up to deploy your first app today. Read our documentation to get started and join our community to talk to us and stay in the know with the latest updates.

