Achieving a Smarter Version of AI Through Quantum Computing, Neuromorphic Computing, and High-Performance Computing
The AI and deep learning of the present era have a few shortcomings: training a deep net can be very time-consuming, cloud computing can be costly, and sufficient data is often unavailable. To overcome these problems, scientists are searching for a smarter version of AI, and there seem to be three directions in which they can progress.
High-Performance Computing (HPC)
Within the effort to improve AI, the greatest focus is on high-performance computing. It builds on deep neural nets but aims to make them faster and easier to access: better general-purpose environments like TensorFlow, greater utilization of GPUs and FPGAs in ever-larger data centers, and the promise of even more specialized chips not far away. The key drivers here address at least two of the three impediments to progress: these improvements make it faster and easier to program for reliably good results, and faster chips in particular shorten the raw machine compute time. The point of a high-performance computer is that its individual nodes can work together on a problem larger than any one computer can easily solve. And, just as people do, the nodes need to talk to one another in order to work together meaningfully. Computers talk to each other over networks, and a variety of network (or interconnect) options are available for business clusters.
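The core idea of a cluster can be sketched in a few lines: split a job into chunks, let each worker compute a partial result, and combine the results. The sketch below uses Python threads on one machine as a stand-in for cluster nodes; the function names are illustrative, not from any particular HPC framework.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each "node" computes its share of the work independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the job into chunks, farm them out, and combine partial results.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum_of_squares(list(range(1000))))  # 332833500
```

In a real cluster the chunks would travel over the interconnect (for example, via MPI) to separate machines, which is why network bandwidth and latency matter so much.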
Neuromorphic Computing
Neuromorphic computing began as the pursuit of using analog circuits to mimic the synaptic structures found in brains. The brain excels at picking patterns out of noise and at learning; a conventional CPU, by contrast, excels at processing discrete, unambiguous data. Many believe neuromorphic computing can unlock applications and solve large-scale problems that have stymied conventional computing systems for decades. In 2008, the U.S. Defense Advanced Research Projects Agency (DARPA) launched a program called Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, “to develop low-power electronic neuromorphic computers that scale to biological levels.” The project’s first phase was to develop nanometer-scale synapses that mimicked synapse activity in the brain but would function in a microcircuit-based architecture. Intel Labs set to work on its own lines of neuromorphic inquiry in 2011. While working through a series of acquisitions around AI processing, Intel made a critical talent hire in Narayan Srinivasa, who came aboard in early 2016 as Intel Labs’ chief scientist and senior principal engineer for neuromorphic computing.
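The neuron-like behavior these chips mimic can be illustrated with a toy model: a membrane potential integrates incoming signals, slowly leaks, and fires a spike when it crosses a threshold. This is a generic leaky integrate-and-fire sketch, not code for any actual neuromorphic hardware, and the threshold and leak values are arbitrary assumptions.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    # Leaky integrate-and-fire: the membrane potential accumulates input,
    # decays ("leaks") each step, and emits a spike when it crosses threshold.
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input produces a regular spike train:
print(lif_neuron([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note that the output is a sparse train of discrete spikes rather than a continuous value, which is what lets neuromorphic hardware stay idle (and low-power) between events.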
Quantum Computing
Quantum computing operations use the quantum state of an object to produce what’s known as a qubit. These states are the undefined properties of an object before they have been detected, such as the spin of an electron or the polarization of a photon. Rather than having a definite value, an unmeasured quantum state exists in a mixed ‘superposition’, like a coin spinning through the air before it lands; this ability to be in multiple states simultaneously is what lets qubits represent numerous combinations of 1 and 0 at the same time. These superpositions can be entangled with those of other objects, meaning their final outcomes will be mathematically related even though they are unknown. To put qubits into superposition, researchers manipulate them using precision lasers or microwave beams. With the help of this counterintuitive phenomenon, a quantum computer with several qubits in superposition can crunch through a vast number of potential outcomes simultaneously. The final result of a calculation emerges only once the qubits are measured, which immediately causes their quantum state to “collapse” to either 1 or 0.
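The superposition-then-collapse story can be made concrete with a two-amplitude toy model of a single qubit (pure Python, not a real quantum simulator; the function names are illustrative). A Hadamard gate puts a definite |0⟩ state into an equal superposition, and measuring returns 0 or 1 with probabilities given by the squared amplitudes.

```python
import math
import random

def hadamard(amp0, amp1):
    # Hadamard gate: maps a basis state into an equal superposition.
    s = 1 / math.sqrt(2)
    return (s * (amp0 + amp1), s * (amp0 - amp1))

def measure(amp0, amp1, rng=random.random):
    # Measurement collapses the superposition: 0 with probability |amp0|^2.
    return 0 if rng() < amp0 ** 2 else 1

# Start in |0>, apply Hadamard -> (1/sqrt(2))|0> + (1/sqrt(2))|1>
a0, a1 = hadamard(1.0, 0.0)
print(round(a0 ** 2, 2), round(a1 ** 2, 2))  # 0.5 0.5 — equal odds before measuring
```

Each call to `measure` yields a definite 0 or 1, mirroring how a real calculation’s answer appears only at measurement time; the power of a quantum computer comes from manipulating many such amplitudes at once before that collapse.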