Software works with data, and data is often called the new oil. It therefore makes sense to put data as close as possible to where it is processed, in order to reduce latency for performance-hungry processing tasks.
Some architectures require large amounts of memory-like storage located close to the computational function, while in other cases it makes more sense to move the computation closer to mass storage.
In this series of articles we explore the architectural decisions that drive modern data processing… and, specifically, we analyze computational storage.
The Storage Networking Industry Association (SNIA) defines computational storage as follows:
“Computational storage is defined as architectures that provide computational storage functions (CSF) coupled to storage, offloading host processing or reducing data movement. These architectures enable improvements in application performance and/or infrastructure efficiency through the integration of compute resources (outside of the traditional compute and memory architecture) directly with storage, or between the host and the storage. The goal of these architectures is to enable parallel computation and/or to alleviate constraints on existing compute, memory, storage and I/O.”
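To make the “reducing data movement” part of that definition concrete, here is a small conceptual sketch of our own (not from SNIA): instead of shipping every record across the storage-to-host link and filtering on the host, a computational storage device can run the filter where the data lives, so only matching records cross the link. All names and the toy data set are illustrative assumptions.

```python
# Conceptual sketch: push the computation to the data instead of
# moving all the data to the host. Everything here is simulated.

records = [{"id": i, "temp": 20 + (i % 15)} for i in range(1000)]

def host_side_filter(storage):
    """Conventional path: every record crosses the storage-host link."""
    moved = list(storage)                 # simulate bulk transfer to host
    return [r for r in moved if r["temp"] > 30], len(moved)

def storage_side_filter(storage, predicate):
    """Computational-storage path: filter runs 'on the drive', so only
    matching records cross the link."""
    matches = [r for r in storage if predicate(r)]
    return matches, len(matches)

hot_host, moved_host = host_side_filter(records)
hot_csd, moved_csd = storage_side_filter(records, lambda r: r["temp"] > 30)

assert hot_host == hot_csd   # same answer either way...
print(moved_host, moved_csd)  # ...but far fewer records moved in the CSD case
```

The point of the sketch is not the filtering itself but the second number in each return value: the volume of data that had to move, which is exactly what the SNIA definition targets.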
This post is written by Adrian Fern, founder and CTO of Prizsm Technologies, a company known for its work on quantum-resistant, secure, cloud-capable hybrid storage.
Fern writes as follows:
We have already addressed this issue in our first analysis, so we now ask: can only real-time data applications and services benefit from computational storage, or are there other clear beneficiaries of reduced latency?
Everything benefits from reduced latency.
In IT, no matter how big you build something, it will be filled, but you can tune elements at every level of the architectural stack to improve performance and make the user experience more enjoyable.
Today, data is stored the way it is only because of how CPU architecture evolved when we first built computers, but those approaches are not adequate for accessing the volumes of data available now, nor the exponential growth we will experience as we approach the quantum age.
Straddling the quantum-conventional mix
As we move into this era, the real benefits of latency reduction are likely to lie at the points where quantum and conventional computing combine: for example, the storage of images and video, or of data created by companies such as Google, which “reads” and stores all the books in the world.
Not all of this data will be stored on quantum computers, but for quantum computers to generate the large-scale performance improvements they promise, the data will still need to be stored properly and accessible for consumption.
We need to make the ability to get data out quickly, and in a usable form, much more commonplace, so that quantum computers are not left effectively “spinning”. Along with real-time data applications and services, this point of integration is the one to be excited about.
We asked about on-drive Linux earlier in this series, because some technologists (at Arm, for example) say that Linux will be the key to driving adoption of computational storage devices (CSDs).
We have also noted that standard drives use NVMe (non-volatile memory express) protocols to send (or retrieve) blocks of data. Although this process works well, the solid-state drive (SSD) continues to happily ignore what the data it contains is, does, or relates to: it could be an image, video or voice file, or it could be a text document, spreadsheet or something else entirely. Linux, on the other hand, can mount the filesystem associated with the data the SSD stores, and so is able to know what the data blocks really are.
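The distinction can be sketched in a few lines of illustrative Python (ours, not from the article): at the block level a drive sees only anonymous bytes, while content-aware code running above it can classify those same bytes by their magic number. The tiny magic-number table and the sample block are assumptions for illustration only.

```python
# Illustrative sketch: identical bytes are opaque to the block layer,
# but a content-aware layer can tell what kind of file they begin.

MAGIC_NUMBERS = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF-": "PDF document",
    b"PK\x03\x04": "ZIP/Office document",
}

def classify(first_bytes: bytes) -> str:
    """What a filesystem-aware layer can do: identify the content type."""
    for magic, kind in MAGIC_NUMBERS.items():
        if first_bytes.startswith(magic):
            return kind
    return "unknown: to the block layer, everything looks like this"

# A block device only ever sees anonymous fixed-size blocks:
block = b"\x89PNG\r\n\x1a\n" + b"\x00" * 4088   # a 4 KiB block

print(classify(block))  # -> PNG image
```

A CSD running Linux (or any content-aware layer) could apply exactly this kind of inspection on the drive itself, which is why advocates see on-drive Linux as the route to making stored data meaningful rather than opaque.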
No doubt Linux seems to have the most gravitational pull, but as an industry, we need to be more open to looking for alternative ways to drive the adoption of CSDs.
A lot of this is about maths and signal processing. There are many different ways to do it that people overlook, simply because they are used to doing it with Linux and therefore assume it is the right tool for the job. In some cases it may be; in others it may not, and there will be better ways to do it.
Every time we add a new layer of abstraction, there is an opportunity to do things differently. That means taking the time to look at what is going on “under the hood”, identifying the storage approach that best supports the initial requirements, while also providing the storage and processing architecture needed to support requirements we may not even know about yet.
Standardization can be stifling
So will standardization milestones be the next key requirement for this technology?
In fact, there is more debate here than one might think. For example, just because you have been told you need cloud storage, you can still argue about how best to use it.
Yes, we can do things quickly and efficiently, but too often these capabilities collapse in a heap when we wrap governance and rules around them. Excessive standardization is built on old thinking and assumptions, which means opportunities and alternative solutions are not even considered.
To use a political analogy, our current parliamentary system was created with local MPs to represent people who did not have a horse and could not get to London. If you were to create a political system for the modern age, you would break the current system and start again.
Sometimes old systems create more problems than they solve, but it’s easy to overlook simple solutions because they’re not part of the current canon. Standardization works well when it comes to computer socket compatibility, etc., but it can be stifling when it comes to dealing with what goes on under the covers.
As the IT world focuses on networking capabilities, it is crucial to understand how the platforms beneath them work. We should not assume that what happens under the hood is settled: it is a moving picture that must keep evolving if we are to realize its potential for protecting and processing data.
Excessive standardization of computational storage could hinder that progress rather than support it.