Nowadays everyone has heard of cloud computing, and it is rare for an organization not to use some form of cloud service. A less familiar term is grid computing. Both grid and cloud computing involve the use of remote computing resources to get things done, but the two computing paradigms are quite different from each other. So what exactly are the differences between grid and cloud computing?
Origins of the cloud
Before the cloud, many people expected the future of computing to be applications delivered as a service. Specifically, organizations would subscribe to applications run by application hosts, or application service providers (ASPs). Different ASPs would specialize in hosting one or more applications, in addition to offering services such as configuration and customization.
ASPs struggled to take off for a variety of reasons, but the idea evolved into cloud-native applications, along with what we now think of as the cloud: huge pools of generic computing and storage resources that can be provisioned when needed, released when they are no longer needed, and paid for according to use. Rather than offering only applications as a service, cloud computing encompasses platform-as-a-service (PaaS), infrastructure-as-a-service (IaaS), and more.
Also read: The best cloud networking services and solutions of 2021
What is Grid Computing?
While cloud computing involves renting generic computing resources (which can be scaled up or down as needed) to run different workloads or to provide platforms or infrastructure, grid computing makes use of unused or underused computing resources on a network of computers to perform large-scale tasks. In effect, grid computing takes distributed computing resources and makes them work together as a kind of “virtual supercomputer.” Different tasks, or parts of tasks, are sent to different computers on the grid so that all the work gets done as quickly and efficiently as possible.
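To make this concrete, here is a minimal Python sketch of the idea, with a local process pool standing in for the computers on a grid: one large job is split into independent tasks, each task can run on any worker, and the partial results are combined at the end. The chunk-summing workload and the pool size are illustrative assumptions, not part of any real grid middleware.

```python
from multiprocessing import Pool

def process_chunk(chunk: list) -> int:
    """Stand-in for one unit of grid work: here, just summing a chunk of numbers."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Split one large job into independent tasks, one per "grid node".
    chunk_size = 100_000
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # A local process pool stands in for the computers on the grid;
    # each task can run on any worker, in any order.
    with Pool(processes=4) as pool:
        partial_results = pool.map(process_chunk, chunks)

    # Combine the partial results, as the grid middleware would.
    print(sum(partial_results))  # 499999500000
```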
A grid can be a corporate local area network with tens, hundreds, or possibly even thousands of computers connected to it. But a grid is not limited to a single local network: global grids can be made up of millions, or tens of millions, of computers connected via the Internet.
Grid computing is therefore ideal for certain types of workloads that can be divided into separate tasks that do not depend on each other. For example, medical research into how proteins fold is highly computational, as a large number of proteins must be mathematically modeled. Grid computing is ideal for this, as the modeling work can be shared among many different computers, each working on specific proteins and reporting its results. Each protein is independent of the others, so the processing can be done in any order.
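That order-independence is the key property. The short sketch below simulates it locally: a hypothetical `fold_score` function stands in for the real modeling work, one task is submitted per protein, and results are collected in whatever order the workers happen to finish.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def fold_score(protein_id: int) -> tuple[int, int]:
    """Hypothetical stand-in for modeling one protein; each task is independent."""
    # Dummy computation; the real folding math would go here.
    return protein_id, sum(i * i for i in range(protein_id * 1000)) % 997

if __name__ == "__main__":
    proteins = range(1, 21)
    with ProcessPoolExecutor() as executor:
        futures = [executor.submit(fold_score, p) for p in proteins]
        # No task depends on another, so results can be collected in
        # whatever order the workers happen to finish.
        for future in as_completed(futures):
            protein_id, score = future.result()
            print(f"protein {protein_id}: score {score}")
```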
The World Community Grid
One of the largest public computing grids is the World Community Grid, which comprises about 150,000 computers worldwide. This grid is used to analyze data related to muscular dystrophy, cancer, influenza, Ebola, and COVID-19, among other research areas. Although many of the computers that make up the grid are home desktops, its total computing power is over 600 TFLOPS, making it a meaningful virtual supercomputer in its own right.
Also read: How data centers should evolve in the first era of the cloud
The phrase “reporting its results” raises the question: reporting to whom? This introduces another important difference between cloud computing and grid computing: grid computing usually involves some type of grid middleware with several components. First, there is usually client software on each of the computers that make up the grid. Its job is to request individual tasks to process and to manage that processing whenever spare resources are available on the computer. Finally, it reports the results back to further middleware running on a central computer, which manages the project and assigns tasks to the individual computers on the grid.
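As a rough illustration of that division of labor, the toy model below simulates both halves of such middleware in a single process: a `GridServer` class stands in for the server-side component that hands out tasks and collects results, while `client_loop` plays the role of the client software that asks for work only when the machine has spare resources. All names here are invented for the sketch; real grid middleware such as BOINC is far more elaborate.

```python
import queue
import random
import time

class GridServer:
    """Toy stand-in for server-side middleware: hands out tasks, collects results."""

    def __init__(self, tasks):
        self.pending = queue.Queue()
        for task in tasks:
            self.pending.put(task)
        self.results = {}

    def request_task(self):
        """Give a waiting task to a client, or None if no work is left."""
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None

    def report_result(self, task_id, result):
        self.results[task_id] = result

def computer_is_idle() -> bool:
    """Hypothetical spare-resource check; a real client would look at CPU
    load, user activity, and so on."""
    return random.random() > 0.2

def client_loop(server: GridServer) -> None:
    """What the client software on each grid computer does."""
    while True:
        if not computer_is_idle():
            time.sleep(0.01)          # wait until resources free up
            continue
        task = server.request_task()  # ask the central computer for work
        if task is None:
            break                     # project finished, nothing left to do
        task_id, n = task
        server.report_result(task_id, n * n)  # do the work, report back

if __name__ == "__main__":
    server = GridServer([(i, i) for i in range(10)])
    client_loop(server)
    print(server.results)  # {0: 0, 1: 1, 2: 4, ...}
```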
A subset of grid computing is the in-memory data grid. It is usually run on a local area network and involves storing data in the random access memory (RAM) of the large number of computers that make up the grid. Instead of creating a virtual supercomputer, this creates a virtual pool of memory that can hold large databases, so that the data can be accessed with minimal latency, perhaps for data analytics purposes.
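Here is a minimal sketch of the core idea, assuming a simple hash-partitioning scheme: each “node” below is just a Python dict standing in for the RAM of one computer, and every key is deterministically mapped to the node that stores it. A real in-memory data grid would add replication and network transport on top of this.

```python
import hashlib

class InMemoryDataGrid:
    """Toy model of an in-memory data grid: each 'node' is a dict standing in
    for the RAM of one computer, and keys are partitioned across nodes by hash."""

    def __init__(self, num_nodes: int):
        self.nodes = [{} for _ in range(num_nodes)]

    def _node_for(self, key: str) -> dict:
        # Hash the key so the same key always lives on the same node.
        digest = hashlib.sha256(key.encode()).digest()
        return self.nodes[digest[0] % len(self.nodes)]

    def put(self, key: str, value) -> None:
        self._node_for(key)[key] = value

    def get(self, key: str):
        # In a real grid this would be a low-latency read over the LAN
        # from the RAM of whichever node owns the key.
        return self._node_for(key).get(key)

if __name__ == "__main__":
    grid = InMemoryDataGrid(num_nodes=4)
    grid.put("customer:42", {"name": "Ada", "orders": 17})
    print(grid.get("customer:42"))
```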
An interesting side effect of grid computing's design is that, by its nature, it is more redundant than cloud computing. Cloud service providers address redundancy by operating multiple data centers in different regions and availability zones. This means that if one cloud data center becomes unavailable, workloads can be moved to resources in other data centers whenever possible. But in a computing grid, if one of the computers on the grid fails or loses its network connection, its jobs can simply be reassigned to other available grid computers. Alternatively, unfinished jobs can wait until the computer is available again.
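The sketch below illustrates that failover behavior, using an assumed `run_on_node` function that randomly “fails” to simulate a computer dropping off the grid: a job is simply handed to the next available node, and only if every node is unreachable does it wait to be requeued.

```python
import random

def run_on_node(node: str, task: int) -> int:
    """Hypothetical remote execution; randomly 'fails' to simulate a grid
    computer crashing or losing its network connection."""
    if random.random() < 0.3:
        raise ConnectionError(f"{node} is unreachable")
    return task * task

def run_with_reassignment(task: int, nodes: list) -> int:
    """If one grid computer fails, reassign the job to another available one."""
    for node in nodes:
        try:
            return run_on_node(node, task)
        except ConnectionError:
            continue  # that computer is down; try the next one
    # Alternatively, the unfinished job could simply wait in a queue
    # until one of the computers becomes available again.
    raise RuntimeError("no nodes available; job must wait to be requeued")

if __name__ == "__main__":
    nodes = ["node-a", "node-b", "node-c"]
    for task in range(5):
        try:
            print(task, "->", run_with_reassignment(task, nodes))
        except RuntimeError as err:
            print(task, "->", err)
```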
The future is cloudy
While grid computing can be tremendously powerful for specific tasks, the cloud computing paradigm is far more broadly useful. For this reason, grid computing is likely to remain relatively obscure, while cloud computing increasingly becomes the de facto standard for all but the most specialized computing tasks that organizations need to handle in-house.
Read next: Network transformation: from virtualization to cloudification