Baidu researchers propose a quantum self-attention neural network (QSANN) that introduces the self-attention mechanism into quantum neural networks

This article is based on the research paper 'Quantum Self-Attention Neural Networks for Text Classification'. All credit for this research goes to the researchers of this project.


As computational problems grow in size, some of them become unmanageable for a typical computer. Unlike classical systems that rely on binary bits, quantum systems rely on quantum bits, or qubits. Qubits can exist in superpositions of states and become entangled with one another, and these properties have been exploited to build quantum systems that can significantly reduce processing time for certain tasks.

Quantum computing has become an increasingly attractive field of study due to its enormous potential for solving complicated real-world problems in optimization, cryptography, chemistry, and the emerging topic of quantum natural language processing (QNLP). Most existing QNLP ideas, while forward-thinking, lack scalability because they rely on syntactic parsing, a time-consuming pre-processing step, especially for substantial real-world datasets.

A team of researchers from the Baidu Research Institute for Quantum Computing and the University of Technology Sydney addressed these limitations in their new paper, Quantum Self-Attention Neural Networks for Text Classification. They propose a simple and efficient quantum self-attention neural network (QSANN) architecture that outperforms existing QNLP methods and a classical self-attention baseline on large-scale real-world datasets.

The research paper boils down to three main contributions:

  1. A QNLP algorithm based on the self-attention mechanism, with a detailed circuit implementation diagram that can run on NISQ devices
  2. A Gaussian projected quantum self-attention mechanism that can efficiently extract correlations between words in the high-dimensional quantum feature space (a toy sketch of the resulting attention coefficients follows this list)
  3. Experimental results demonstrating that QSANN outperforms existing QNLP methods and that its numerical results are resilient to quantum noise
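
To make the second contribution more concrete, here is a minimal NumPy sketch of how Gaussian projected self-attention coefficients can be computed once each word's query and key have already been reduced to classical numbers by measuring the corresponding quantum circuits. This is an illustrative toy, not the authors' implementation; the exact normalization and measurement choices are assumptions.

```python
import numpy as np

def gaussian_attention_weights(queries, keys):
    """Toy Gaussian projected self-attention coefficients.

    queries, keys: (n_words,) arrays of classical values obtained by
    measuring the query/key quantum circuits for each word. The weight
    between word s and word j grows as their projected query and key
    values get closer, following exp(-(q_s - k_j)^2).
    """
    diff = queries[:, None] - keys[None, :]             # pairwise differences
    scores = np.exp(-diff ** 2)                         # Gaussian similarity
    return scores / scores.sum(axis=1, keepdims=True)   # each row sums to 1

# Hypothetical measurement outcomes for a 3-word sentence.
print(gaussian_attention_weights(np.array([0.2, -0.5, 0.7]),
                                 np.array([0.1, -0.4, 0.9])))
```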

A quantum self-attention layer (QSAL), a loss function, and analytical gradients constitute the proposed QSANN design. QSANN encodes input words into a high-dimensional quantum Hilbert space and then projects them into a low-dimensional classical feature space via quantum measurement to accomplish text classification. As a result, researchers can exploit the quantum edge by leveraging the high-dimensional quantum feature space and its projected quantum models to detect hidden correlations in text. These correlations would be difficult, if not impossible, to find with traditional classical approaches.
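
As a rough illustration of this pipeline, the sketch below (using PennyLane) angle-encodes each word vector into a small quantum register, applies a parameterized ansatz, and measures an expectation value to project back to a classical feature; separate circuits play the roles of query, key, and value, and Gaussian attention then mixes the values. The ansatz, qubit count, observable, and parameter shapes here are assumptions for illustration, not the paper's exact circuits.

```python
import numpy as np
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def projected_feature(word_vec, params):
    """Encode a classical word vector into a quantum state, apply a
    parameterized ansatz, and project back to a single classical number
    via a Pauli-Z expectation value (the measurement step of QSAL)."""
    qml.AngleEmbedding(word_vec, wires=range(n_qubits))
    qml.BasicEntanglerLayers(params, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def quantum_self_attention_layer(words, q_params, k_params, v_params):
    """Toy quantum self-attention layer: query, key, and value each come
    from their own ansatz, then Gaussian attention mixes the values."""
    q = np.array([projected_feature(w, q_params) for w in words])
    k = np.array([projected_feature(w, k_params) for w in words])
    v = np.array([projected_feature(w, v_params) for w in words])
    scores = np.exp(-(q[:, None] - k[None, :]) ** 2)
    weights = scores / scores.sum(axis=1, keepdims=True)
    return weights @ v   # one attention-mixed feature per word

# Hypothetical 2-dimensional embeddings for a 3-word sentence.
sentence = np.array([[0.1, 0.5], [0.9, -0.2], [0.4, 0.3]])
shape = qml.BasicEntanglerLayers.shape(n_layers=1, n_wires=n_qubits)
q_p, k_p, v_p = (np.random.default_rng(i).uniform(0, np.pi, shape) for i in range(3))
print(quantum_self_attention_layer(sentence, q_p, k_p, v_p))
```

In the full QSANN, several such layers are stacked and trained end to end with a classical loss and analytical gradients; the sketch only shows how a single layer turns quantum measurements into attention-weighted classical features.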

Figure: Sketch of a quantum self-attention layer (QSAL). Source: https://arxiv.org/pdf/2205.05625.pdf

The researchers evaluated the text classification performance of the proposed QSANN against a syntactic-analysis-based quantum model on the simple MC (meaning classification) and RP (relative pronoun) datasets, as well as against a classical self-attention neural network (CSANN) and a naive baseline on the public Yelp, IMDb, and Amazon sentiment analysis datasets. QSANN achieved 100% accuracy on the MC task and surpassed the CSANN baseline on the Yelp, IMDb, and Amazon datasets.

More sophisticated techniques, such as positional encoding and multi-head attention, may be incorporated into quantum neural networks in the future for generative models and other more complex tasks. The times ahead will likely see a boom in quantum algorithms for meaningful real-world problems.

Article: https://arxiv.org/pdf/2205.05625.pdf
