Robust Dequantization of the Quantum Singular Value Transformation and Quantum Machine Learning Algorithms
| Published in | Computational complexity, Vol. 34, No. 1, p. 2 |
|---|---|
| Main Author | |
| Format | Journal Article |
| Language | English |
| Published | Cham: Springer International Publishing, 01.06.2025 (Springer Nature B.V.) |
| ISSN | 1016-3328, 1420-8954 |
| DOI | 10.1007/s00037-024-00262-3 |
| Summary: | Several quantum algorithms for linear algebra problems, and in particular quantum machine learning problems, have been “dequantized” in the past few years. These dequantization results typically hold when classical algorithms can access the data via *length-squared sampling*. This assumption, which is standard in the field of randomized linear algebra, means that for a unit-norm vector $u \in \mathbb{C}^n$, we can sample from the distribution $p_u \colon \{1, \dots, n\} \to [0, 1]$ defined as $p_u(i) = \lvert u(i) \rvert^2$ for each $i \in \{1, \dots, n\}$. Since this distribution corresponds to the distribution obtained by measuring the quantum state $\lvert u \rangle$ in the computational basis, length-squared sampling access gives a reasonable classical analogue to the kind of quantum access considered in many quantum algorithms for linear algebra problems. In this work we investigate how robust these dequantization results are. We introduce the notion of *approximate length-squared sampling*, where classical algorithms are only able to sample from a distribution close to the ideal distribution in total variation distance. While quantum algorithms are natively robust against small perturbations, current techniques in dequantization are not. Our main technical contribution is showing how many techniques from randomized linear algebra can be adapted to work under this weaker assumption as well. We then use these techniques to show that the recent low-rank dequantization framework by Chia, Gilyén, Li, Lin, Tang and Wang (JACM 2022) and the dequantization framework for sparse matrices by Gharibian and Le Gall (STOC 2022), which are both based on the Quantum Singular Value Transformation, can be generalized to the case of approximate length-squared sampling access to the input. We also apply these results to obtain a robust dequantization of many quantum machine learning algorithms, including quantum algorithms for recommendation systems, supervised clustering and low-rank matrix inversion. |
|---|---|
| ISSN: | 1016-3328, 1420-8954 |
| DOI: | 10.1007/s00037-024-00262-3 |
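
To make the sampling model in the abstract concrete, below is a minimal Python sketch (not taken from the paper) of exact length-squared sampling from a unit-norm vector, together with one hypothetical way to obtain a sampler whose distribution is within total variation distance δ of the ideal one. The function names (`length_squared_sample`, `approximate_sampler`) and the mixing-with-noise construction are illustrative assumptions; the paper's model only requires closeness in total variation distance, not any particular noise mechanism.

```python
import numpy as np

def length_squared_distribution(u):
    """Ideal length-squared distribution p_u(i) = |u(i)|^2 for a unit-norm vector u."""
    p = np.abs(u) ** 2
    return p / p.sum()  # renormalize to absorb floating-point drift

def length_squared_sample(u, rng, size=1):
    """Draw indices i with probability |u(i)|^2 (ideal length-squared sampling access)."""
    return rng.choice(len(u), size=size, p=length_squared_distribution(u))

def approximate_sampler(u, delta, rng):
    """Hypothetical noisy access: a sampler for a fixed distribution q with
    total variation distance at most delta from p_u."""
    p = length_squared_distribution(u)
    noise = rng.dirichlet(np.ones(len(u)))   # arbitrary perturbation direction
    q = (1 - delta) * p + delta * noise      # TV(p, q) <= delta by convexity
    return lambda size=1: rng.choice(len(u), size=size, p=q)

rng = np.random.default_rng(0)
u = rng.normal(size=8) + 1j * rng.normal(size=8)
u /= np.linalg.norm(u)                       # unit-norm vector in C^n

exact_samples = length_squared_sample(u, rng, size=5)
noisy_samples = approximate_sampler(u, delta=0.05, rng=rng)(size=5)
print(exact_samples, noisy_samples)
```

Mixing the ideal distribution with an arbitrary one is just one way to realize a sampler within total variation distance δ; any fixed distribution that close to $p_u$ fits the approximate length-squared sampling model described in the abstract.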