5 Common Server Setups For Your Web Application

Introduction

When deciding which server architecture to use for your environment, there are many factors to consider, such as performance, scalability, availability, reliability, cost, and ease of management.

Here is a list of commonly used server setups, with a short description of each, including pros and cons. Keep in mind that all of the concepts covered here can be used in various combinations with one another, and that every environment has different requirements, so there is no single, correct configuration.

1. Everything On One Server

The entire environment resides on a single server. For a typical web application, that would include the web server, application server, and database server. A common variation of this setup is a LAMP stack, which stands for Linux, Apache, MySQL, and PHP, on a single server.

Use Case: Good for setting up an application quickly, as it is the simplest setup possible, but it offers little in the way of scalability and component isolation.

Everything On a Single Server

Pros:

  • Simple

Cons:

  • Application and database contend for the same server resources (CPU, Memory, I/O, etc.), which, aside from possible poor performance, can make it difficult to determine the source (application or database) of poor performance
  • Not readily horizontally scalable

Related Tutorials:

  • How To Install LAMP On Ubuntu 14.04

2. Separate Database Server

The database management system (DBMS) can be separated from the rest of the environment to eliminate the resource contention between the application and the database, and to increase security by removing the database from the DMZ, or public internet.

Use Case: Good for setting up an application quickly, but keeps the application and database from fighting over the same system resources.

Separate Database Server

Pros:

  • Application and database tiers do not contend for the same server resources (CPU, Memory, I/O, etc.)
  • You may vertically scale each tier separately, by adding more resources to whichever server needs increased capacity
  • Depending on your setup, it may increase security by removing your database from the DMZ

Cons:

  • Slightly more complex setup than single server
  • Performance issues can arise if the network connection between the two servers is high-latency (i.e. the servers are geographically distant from each other), or the bandwidth is too low for the amount of data being transferred
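From the application's point of view, moving to a separate database server is often just a configuration change: it connects to a remote host instead of localhost. A minimal sketch, with hypothetical credentials and a made-up private IP:

```python
def database_dsn(user, password, host, name, port=3306):
    """Build a MySQL-style connection string (3306 is MySQL's default port)."""
    return f"mysql://{user}:{password}@{host}:{port}/{name}"

# Single-server setup: application and DBMS share one machine.
print(database_dsn("app", "secret", "localhost", "blog"))
# Separate database server, ideally reached over a private network
# so the database never needs to be exposed to the public internet.
print(database_dsn("app", "secret", "10.0.0.5", "blog"))
```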

Related Tutorials:

  • How To Set Up a Remote Database to Optimize Site Performance with MySQL
  • How to Migrate A MySQL Database To A New Server On Ubuntu 14.04

3. Load Balancer (Reverse Proxy)

Load balancers can be added to a server environment to improve performance and reliability by distributing the workload across multiple servers. If one of the load balanced servers fails, the other servers will handle the incoming traffic until the failed server becomes healthy again. A load balancer can also be used to serve multiple applications through the same domain and port, by using a layer 7 (application layer) reverse proxy.

Examples of software capable of reverse proxy load balancing: HAProxy, Nginx, and Varnish.

Use Case: Useful in an environment that requires scaling by adding more servers, also known as horizontal scaling.

Load Balancer

Pros:

  • Enables horizontal scaling, i.e. environment capacity can be scaled by adding more servers to it
  • Can protect against DDOS attacks by limiting client connections to a sensible amount and frequency

Cons:

  • The load balancer can become a performance bottleneck if it does not have enough resources, or if it is configured poorly
  • Can introduce complexities that require additional consideration, such as where to perform SSL termination and how to handle applications that require sticky sessions
  • The load balancer is a single point of failure; if it goes down, your whole service can go down. A high availability (HA) setup is an infrastructure without a single point of failure. To learn how to implement an HA setup, you can read this section of How To Use Floating IPs.
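The distribution and failover behavior described above can be sketched in a few lines. This is a toy round-robin balancer with hypothetical backend names, not HAProxy's or Nginx's actual implementation:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy sketch of round-robin load balancing with failover: backends
    marked unhealthy are skipped until they recover, mirroring how a real
    load balancer stops routing traffic to a failed server."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._ring = cycle(self.backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)

    def mark_up(self, backend):
        self.healthy.add(backend)

    def pick(self):
        # Advance the ring until a healthy backend turns up.
        for _ in range(len(self.backends)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends")

lb = RoundRobinBalancer(["app1", "app2"])
print([lb.pick() for _ in range(4)])  # alternates: app1, app2, app1, app2
lb.mark_down("app1")
print([lb.pick() for _ in range(2)])  # only app2 until app1 recovers
```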

Related Tutorials:

  • An Introduction to HAProxy and Load Balancing Concepts
  • How To Use HAProxy As A Layer 4 Load Balancer for WordPress Application Servers
  • How To Use HAProxy As A Layer 7 Load Balancer For WordPress and Nginx

4. HTTP Accelerator (Caching Reverse Proxy)

An HTTP accelerator, or caching HTTP reverse proxy, can be used to reduce the time it takes to serve content to a user through a variety of techniques. The main technique employed with an HTTP accelerator is caching responses from a web or application server in memory, so future requests for the same content can be served quickly, with less unnecessary interaction with the web or application servers.

Examples of software capable of HTTP acceleration: Varnish, Squid, Nginx.

Use Case: Useful in an environment with content-heavy dynamic web applications, or with many commonly accessed files.

HTTP Accelerator

Pros:

  • Increases site performance by reducing CPU load on the web server, through caching and compression, thereby increasing user capacity
  • Can be used as a reverse proxy load balancer
  • Some caching software can protect against DDOS attacks

Cons:

  • Requires tuning to get the best performance out of it
  • If the cache-hit rate is low, it could reduce performance
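The cache-hit/cache-miss behavior, and why a low hit rate hurts, comes down to logic like the following toy sketch. Real accelerators such as Varnish add expiry times, invalidation, and Cache-Control handling on top of this:

```python
class CachingProxy:
    """Toy sketch of a caching reverse proxy: responses from the backend
    are kept in memory, so repeated requests for the same URL skip the
    application server entirely."""

    def __init__(self, backend):
        self.backend = backend  # callable: url -> response body
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.cache:    # cache-hit: serve from memory
            self.hits += 1
            return self.cache[url]
        self.misses += 1         # cache-miss: forward to the backend
        response = self.backend(url)
        self.cache[url] = response
        return response

# Hypothetical backend standing in for a web/application server.
proxy = CachingProxy(lambda url: f"rendered page for {url}")
proxy.get("/index.html")  # miss: forwarded to the backend
proxy.get("/index.html")  # hit: served from the cache
print(proxy.hits, proxy.misses)  # 1 1
```

If most requests are unique (a low hit rate), every request still pays the backend cost plus the overhead of the cache lookup, which is why the hit rate determines whether the accelerator helps or hurts.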

Related Tutorials:

  • How To Install Wordpress, Nginx, PHP, and Varnish on Ubuntu 12.04
  • How To Configure a Clustered Web Server with Varnish and Nginx
  • How To Configure Varnish for Drupal with Apache on Debian and Ubuntu

5. Primary-replica Database Replication

One way to improve the performance of a database system that performs many reads compared to writes, such as a CMS, is to use primary-replica database replication. Replication requires one primary node and one or more replica nodes. In this setup, all updates are sent to the primary node and reads can be distributed across all nodes.

Use Case: Good for increasing the read performance for the database tier of an application.

Here is an example of a primary-replica replication setup, with a single replica node:

Primary-replica Database Replication

Pros:

  • Improves database read performance by spreading reads across replicas
  • Can improve write performance by using the primary exclusively for updates (it spends no time serving read requests)

Cons:

  • The application accessing the database must have a mechanism to determine which database nodes it should send update and read requests to
  • Updates to replicas are asynchronous, so there is a chance that their contents could be out of date
  • If the primary fails, no updates can be performed on the database until the issue is corrected
  • Does not have built-in failover in case of failure of the primary node
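The first con above, the application needing a mechanism to route queries, can be sketched as a toy query router. Classifying queries by whether they start with SELECT is a simplification for illustration; real libraries inspect queries or let you pick a connection explicitly:

```python
from itertools import cycle

class ReplicatedDatabaseRouter:
    """Toy sketch of query routing under primary-replica replication:
    all writes go to the primary, reads are spread across every node
    (primary plus replicas)."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.read_ring = cycle([primary] + list(replicas))

    def route(self, query):
        # Naive classification: anything that isn't a SELECT is a write.
        is_read = query.lstrip().upper().startswith("SELECT")
        return next(self.read_ring) if is_read else self.primary

# Hypothetical node names for a primary with one replica.
router = ReplicatedDatabaseRouter("db-primary", ["db-replica1"])
print(router.route("UPDATE posts SET title = 'x'"))  # always db-primary
print(router.route("SELECT * FROM posts"))           # rotates across nodes
```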

Related Tutorials:

  • How To Optimize WordPress Performance With MySQL Replication On Ubuntu 14.04

Case: Combining the Concepts

It is possible to load balance the caching servers, in addition to the application servers, and use database replication in a single environment. The purpose of combining these techniques is to reap the benefits of each without introducing too many issues or too much complexity. Here is an example diagram of what such a server environment could look like:

Load Balancer, HTTP Accelerator, and Database Replication Combined

Let's assume that the load balancer is configured to recognize static requests (like images, css, javascript, etc.) and send those requests directly to the caching servers, and send all other requests to the application servers.

Here is a description of what would happen when a user sends a request for dynamic content:

  1. The user requests dynamic content from http://example.com/ (load balancer)
  2. The load balancer sends request to app-backend
  3. app-backend reads from the database and returns requested content to load balancer
  4. The load balancer returns requested data to the user

If the user requests static content:

  1. The load balancer checks cache-backend to see if the requested content is cached (cache-hit) or not (cache-miss)
  2. If cache-hit: return the requested content to the load balancer and jump to Step 7. If cache-miss: the cache server forwards the request to app-backend, through the load balancer
  3. The load balancer forwards the request through to app-backend
  4. app-backend reads from the database then returns requested content to the load balancer
  5. The load balancer forwards the response to cache-backend
  6. cache-backend caches the content then returns it to the load balancer
  7. The load balancer returns requested information to the user
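Assuming, as above, that the load balancer recognizes static content (here, hypothetically, by file extension), the two request flows can be sketched together:

```python
def is_static(path):
    # Assumption for illustration: static assets are identified by extension.
    return path.endswith((".jpg", ".png", ".css", ".js"))

cache = {}  # stands in for cache-backend's in-memory store

def app_backend(path):
    # Stands in for app-backend reading from the database tier.
    return f"content for {path}"

def handle_request(path):
    """Toy sketch of the combined flow: dynamic requests go straight to
    app-backend; static requests go through the cache tier first."""
    if not is_static(path):          # dynamic: steps 1-4 above
        return app_backend(path)
    if path not in cache:            # static cache-miss: steps 2-6
        cache[path] = app_backend(path)
    return cache[path]               # static cache-hit (or freshly cached)

print(handle_request("/index"))     # dynamic, never cached
print(handle_request("/logo.png"))  # static, cached after the first request
```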

This environment still has two single points of failure (the load balancer and the primary database server), but it provides all of the other reliability and performance benefits that were described in each section above.

Conclusion

Now that you are familiar with some basic server setups, you should have a good idea of what kind of setup you would use for your own application(s). If you are working on improving your own environment, remember that an iterative process is best, to avoid introducing too many complexities too quickly.

Let us know of any setups you recommend or would like to learn more about in the comments below!


Source: https://www.digitalocean.com/community/tutorials/5-common-server-setups-for-your-web-application
