What’s the difference between peer to peer and client server?


In this blog post, we will look into client-server architecture, compare it to peer-to-peer, and find out exactly when client-server is better than P2P. For those of you who aren't willing to spend a few minutes reading through the article, I'll give the answer up front: peer-to-peer is always better than client-server.

The client-server architecture is the most commonly used approach for data transfer. It designates one computer as a server and another as a client. In a client-server architecture, the server needs to be online all the time and have good connectivity. The server provides its clients with data, and can also receive data from clients. Some examples of widely used client-server protocols and tools are HTTP, FTP, and rsync. All of them rely on specific server-side functionality that implements the protocol.

Availability

The most obvious problem that all client-server applications face is that the server always has to be online and available. Any software, network, or hardware problem affects the service for all clients. Therefore, you need to plan a server high availability (HA) solution in advance. High availability ensures that the system switches to backup hardware or a backup network if the primary is disrupted for any reason, so the service can continue to operate smoothly. This problem is quite complex, since you need to keep data synchronized between the live machine and the backup machine, and plan software and hardware updates in advance to support uninterrupted operation.
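
To make this concrete, here is a minimal Python sketch of just the client-facing corner of an HA setup: a client that fails over from a primary to a standby server. The hostnames are placeholders, and the hard parts (data synchronization, health checks, planned maintenance) are exactly what this sketch leaves out.

```python
import socket

# Hypothetical endpoints: the live server and its standby replica.
SERVERS = [("primary.example.com", 443), ("standby.example.com", 443)]

def connect_with_failover(timeout=3.0):
    """Try the primary first; fall back to the standby if it is unreachable.

    This only covers the client side of HA. Keeping the standby's data in
    sync with the primary is the hard part and is not shown here.
    """
    last_error = None
    for host, port in SERVERS:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err
    raise ConnectionError("no server is available") from last_error
```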

High Load

Another recurring problem with client-server applications is high load. A single powerful client that consumes data faster than the others can eat up all of the server's network, disk I/O, and CPU. Since you want every client to have access to the server, you need to limit each client to a certain consumption level, so all of them get at least a minimal share of server resources. This makes sure the powerful client won't disrupt the others. In practice, though, it usually means the server serves every client with the limit in place, even when it is not overloaded and could go faster.
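
Here is a minimal sketch, in Python with hypothetical names and limits, of the kind of per-client throttling a client-server setup typically relies on: a token bucket per client caps its consumption, even when the server is otherwise idle.

```python
import time

class TokenBucket:
    """Per-client rate limiter: each client gets a fixed share of the server."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last_refill = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# One bucket per client: even the fastest client cannot exceed its fixed
# share, regardless of how much spare capacity the server has right now.
limits = {"client-a": TokenBucket(10 * 2**20, 2**20),   # capped at ~10 MB/s
          "client-b": TokenBucket(10 * 2**20, 2**20)}
```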

Scalability

Each server needs to be planned for the specific number of clients it will support. As the number of clients grows, the server's CPU, memory, network, and disk performance need to grow as well, and can eventually hit a point where the server stops keeping up. If you have more clients than a single server can serve, you probably need to deploy several servers. That means designing a system to balance and distribute load between servers, on top of the high availability system we discussed previously.
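
As a rough illustration (not any particular product's method), here is a tiny Python sketch of one way to spread clients across a fleet of servers by hashing the client ID; the server names are placeholders. Note that this only distributes the load: it does not remove the need to grow and rebalance the fleet as clients are added.

```python
import hashlib

SERVERS = ["server-1", "server-2", "server-3"]  # hypothetical fleet

def pick_server(client_id: str) -> str:
    """Deterministically spread clients across the server fleet by hashing their ID."""
    digest = hashlib.sha256(client_id.encode()).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

# Every client lands on one server; more clients eventually means adding
# servers and rebalancing, on top of the HA setup described above.
print(pick_server("qa-machine-17"))
```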

And how does peer-to-peer stack up?

Availability

In a peer-to-peer world, each client is also a server. If the central machine is not available, the service can be provided by any available client or group of clients. The peer-to-peer system finds the best group of clients and requests the service from them. This gives you service availability that doesn't depend on one machine and doesn't require building a separate high availability solution.

High Load

Peer-to-peer, in contrast to the client-server architecture, turns each node into a server that can provide service. If a powerful client needs a lot of data, several other devices can supply it, so each client can download data at the fastest possible speed without artificial limits.
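
To show the idea rather than any specific protocol, here is a small Python sketch of a client pulling different blocks from several peers at once; fetch_block is a stand-in for a real transfer, and the peer names are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_block(peer: str, block_id: int) -> bytes:
    # Stand-in for a real transfer, e.g. a range request to that peer.
    return f"block {block_id} from {peer}".encode()

def download(blocks, peers):
    """Spread block requests across all available peers instead of one server."""
    with ThreadPoolExecutor(max_workers=len(peers)) as pool:
        futures = {b: pool.submit(fetch_block, peers[i % len(peers)], b)
                   for i, b in enumerate(blocks)}
        return {b: f.result() for b, f in futures.items()}

# A powerful client pulls different blocks from several peers in parallel,
# so its aggregate speed is the sum of what each peer can provide.
parts = download(blocks=range(8), peers=["peer-a", "peer-b", "peer-c"])
```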

Scalability

The more devices you have in a network, the more devices can participate in data delivery, contributing their network and CPU capacity and taking that load off the central server. The more devices you add, the less load falls on the central server.

Let's see how this works in a couple of real-world scenarios.

Build distribution

Development companies struggle to deliver builds quickly to remote offices across the globe, or to hundreds of fast QA machines within one office. In a client-server world, all remote offices download the build from a build machine, so the speed is limited by the network channel that serves the build to every remote office. A peer-to-peer approach splits the build into independent blocks that can travel between offices independently. This removes the network bottleneck at the main office and combines the bandwidth of all remote offices to deliver builds faster. You can typically get builds 3-5 times faster with a peer-to-peer architecture than with client-server.
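
As an illustration of the "independent blocks" idea (the block size and file name are arbitrary choices for this sketch, not Resilio's actual format), a build artifact can be split into fixed-size blocks, each identified by its hash, so that any office holding a block can serve it onward:

```python
import hashlib

BLOCK_SIZE = 4 * 2**20  # 4 MiB, an arbitrary choice for illustration

def split_into_blocks(path: str):
    """Split a build artifact into fixed-size blocks, each identified by its hash.

    Any office that already holds a block (identified by its hash) can serve
    it to the others, so blocks travel between offices independently.
    """
    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(BLOCK_SIZE):
            yield index, hashlib.sha256(chunk).hexdigest(), chunk
            index += 1

for idx, digest, _ in split_into_blocks("build-42.tar.gz"):  # placeholder file name
    print(idx, digest)
```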

Another issue is distributing a build within a single office from a central server. Fast QA machines can completely overload the central server's network and CPU, bringing it to an unusable state. In a client-server setup, this is an almost unsolvable scalability issue. As we discussed above, a peer-to-peer approach is better when many clients need access to the same data: each QA machine can seed the data to other machines, keeping the server healthy and delivering builds blazingly fast.

Data delivery to remote offices

Delivering data to remote offices usually runs into the problem of overloading the central server. Even if each office's link is not that fast, with many offices it adds up and requires huge bandwidth channels to the central office. The problem shows up when you need to deliver software or OS updates, as well as other data such as documents, video, or images. The peer-to-peer approach solves it by letting each remote office participate in data delivery, which reduces the load on the central server and significantly lowers the central server and networking requirements.

Another long-standing problem peer-to-peer solves is distributing data to several machines within the same office. The previous approach was to either download from the central server, which increases the load even more, or build a two-step distribution policy: deliver data to a local server at the remote location, then copy it locally. Peer-to-peer solves this naturally by finding the best source for the data, whether it is a local or a remote server. Once a block of data is present in the remote office, it doesn't have to be downloaded from the central data center again.
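
Here is a minimal Python sketch of that source selection, assuming sources are identified by IP address and the office LAN sits on a known subnet (all addresses here are placeholders): a block is fetched from a local peer whenever one already has it, and from the central data center only otherwise.

```python
import ipaddress

LOCAL_SUBNET = ipaddress.ip_network("10.2.0.0/16")  # this office's LAN

# Hypothetical sources that currently hold a given block of data.
SOURCES = [
    {"host": "10.2.0.15",    "site": "branch-office"},  # peer in the same office
    {"host": "203.0.113.40", "site": "central-dc"},     # central data center
]

def best_source(sources):
    """Prefer a source inside the office LAN; fall back to the data center."""
    local = [s for s in sources if ipaddress.ip_address(s["host"]) in LOCAL_SUBNET]
    return (local or sources)[0]

print(best_source(SOURCES))  # picks the branch-office peer when one exists
```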

Busting common P2P myths

Myth #1: P2P is only faster when you download from many peers

It's true that the speed of a P2P network grows as more clients join the transfer. But point-to-point transfers and the distribution of data from one node to just a few nodes are faster too.

Myth #2: P2P exposes your network and computers to viruses, hackers, and other security risks

This misconception originates in the best-known consumer P2P use case: illegal file sharing, which does expose your infrastructure and computers to all of these problems. That risk does not exist for an enterprise P2P application, since all the entities participating in distribution are managed, secure enterprise machines.

Myth #3: P2P is not secure.

As we saw above, peer-to-peer is just a way to establish connections and assign roles between machines. It does need additional security mechanisms for mutual authentication and authorization, as well as access control and traffic encryption. These security features are built into enterprise P2P solutions like Resilio Connect.
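
As one concrete example of such mechanisms (not a description of how Resilio Connect implements them), mutual TLS gives you authentication of both sides plus traffic encryption. The sketch below uses Python's standard ssl module with placeholder certificate files.

```python
import socket
import ssl

# Placeholder file names; in a real deployment these come from your PKI.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED             # require a client certificate
context.load_cert_chain("peer.crt", "peer.key")     # this peer's own identity
context.load_verify_locations("enterprise-ca.pem")  # trust only peers signed by our CA

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()  # handshake fails unless both sides authenticate
        print("authenticated peer:", conn.getpeercert().get("subject"))
```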

How much faster is P2P?

We wrote an article about this previously, with a concrete example: Why P2P is faster. In a nutshell, P2P is always faster; how much faster depends on data size and scale. The larger they are, the bigger the difference. In the example in that article, client-server took 3X as long to send a 100GB file.

But it’s important to remember that using P2P is not only faster, it also:

  • Introduces significant savings on equipment and infrastructure
  • Is more robust and resilient.

How much time could you save by using P2P?
Schedule a demo to find out