Comparison of Atom Detection Algorithms for Neutral Atom Quantum Computing
| Published in | 2024 IEEE International Conference on Quantum Computing and Engineering (QCE), Vol. 1, pp. 1048-1057 |
|---|---|
| Main Authors | , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 15.09.2024 |
| DOI | 10.1109/QCE60285.2024.00124 |
Summary

In neutral atom quantum computers, readout and preparation of the atomic qubits are usually based on fluorescence imaging and subsequent analysis of the acquired image. For each atom site, the brightness or some comparable metric is estimated and used to predict the presence or absence of an atom. Across different setups, a vast number of different approaches are used to analyze these images; often, the choice of detection algorithm is either not mentioned at all or not justified.

We investigate several different algorithms and compare their performance in terms of both precision and execution run time. To do so, we rely on a set of synthetic images across different simulated exposure times with known occupancy states, generated using a previously validated imaging simulation. Since the simulation provides the ground truth of atom site occupancy, we can state precise error rates and variances of the reconstructed property. However, knowing the relative performance of these algorithms is not sufficient to justify their use, since better algorithms may exist that were not compared. To investigate this possibility, we calculated the Cramér-Rao bound in order to establish a limit that even a perfect estimator cannot outperform. As the metric of choice, we used the number of photoelectrons that can be attributed to a specific atom site; every estimator that reconstructs a different property can simply be scaled accordingly. Since the bound depends on the occupancy of neighboring sites, we provide the best and worst cases, as well as a half-filled one, which should best represent an averaged bound.

Our comparison shows that, of the tested algorithms, a global nonlinear least-squares solver that uses the optical system's point spread function (PSF) to return a global bias and each site's number of photoelectrons performed best, on average crossing the worst-case bound for longer exposure times. Its main drawback is its high computational complexity and, thus, the required calculation time. We manage to reduce this problem somewhat, suggesting that its use may be viable, and this leads us to a novel group of algorithms that present a compromise between speed and precision. However, our study also shows that for cases where utmost speed is required, simple algorithms, like summing up pixel values around atom sites, may be preferable.
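The summary names summing up pixel values around atom sites as the fastest simple baseline. Below is a minimal sketch of such a detector, assuming known site coordinates on the pixel grid; the window radius, threshold, and synthetic test values are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def site_sums(image: np.ndarray, sites: list[tuple[int, int]], radius: int = 3) -> np.ndarray:
    """Sum pixel values in a (2*radius+1)^2 window centered on each atom site."""
    sums = np.empty(len(sites))
    for k, (row, col) in enumerate(sites):
        window = image[max(row - radius, 0):row + radius + 1,
                       max(col - radius, 0):col + radius + 1]
        sums[k] = window.sum()
    return sums

def classify(sums: np.ndarray, threshold: float) -> np.ndarray:
    """Predict occupancy: True where the summed brightness exceeds the threshold."""
    return sums > threshold

# Purely illustrative usage with synthetic data:
rng = np.random.default_rng(0)
image = rng.poisson(5.0, size=(64, 64)).astype(float)   # background shot noise
sites = [(16, 16), (16, 48), (48, 16), (48, 48)]
image[48, 48] += 400.0                                   # pretend one site holds an atom
# Threshold chosen between the background-only sum (~245) and the atom sum (~645).
print(classify(site_sums(image, sites), threshold=450.0))
```

In practice the threshold would be calibrated from the bimodal histogram of site sums rather than hand-picked as here.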
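The best-performing algorithm is described as a global nonlinear least-squares solver that uses the PSF to return a global bias and each site's photoelectron count. The following is a sketch of that idea under assumed ingredients: a Gaussian PSF, fixed known site positions, and SciPy's `least_squares`; the paper's actual PSF model and solver may differ. With fixed PSFs this particular parameterization happens to be linear in its parameters, so the general nonlinear solver is used here only for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_psf(shape, center, sigma):
    """Unit-integral 2D Gaussian PSF sampled on the pixel grid (an assumption)."""
    rows, cols = np.indices(shape)
    g = np.exp(-((rows - center[0])**2 + (cols - center[1])**2) / (2 * sigma**2))
    return g / g.sum()

def fit_sites(image, sites, sigma=1.5):
    """Fit one global bias plus one amplitude (photoelectron count) per site."""
    psfs = np.stack([gaussian_psf(image.shape, c, sigma) for c in sites])

    def residuals(params):
        bias, amps = params[0], params[1:]
        # Model the image as bias + sum of per-site amplitudes times their PSFs.
        model = bias + np.tensordot(amps, psfs, axes=1)
        return (model - image).ravel()

    x0 = np.concatenate(([float(image.min())],
                         np.full(len(sites), image.sum() / len(sites))))
    fit = least_squares(residuals, x0)
    return fit.x[0], fit.x[1:]   # global bias, per-site photoelectron counts
```

Fitting all sites and the bias jointly is what makes the solver "global": light spilling from one site into its neighbors' windows is explained by the model instead of biasing the neighbors' counts, at the cost of a much larger optimization problem.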
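The summary uses the Cramér-Rao bound on the per-site photoelectron count as a reference that no unbiased estimator can beat. Here is a sketch of one way such a bound could be computed, assuming a pure-Poisson (shot-noise) pixel model and reusing `gaussian_psf` from the previous sketch; the paper's actual noise model may include further camera terms such as read noise.

```python
import numpy as np

def crb_per_site(sites, occupied, n_photoelectrons, bias, shape, sigma=1.5):
    """Lower bound on the variance of any unbiased estimate of each site's count.

    `occupied` selects which sites contribute light, so evaluating different
    patterns mirrors the best-, worst-, and half-filled cases in the summary.
    """
    psfs = np.stack([gaussian_psf(shape, c, sigma) for c in sites])
    amps = np.where(np.asarray(occupied, bool), float(n_photoelectrons), 0.0)
    mu = bias + np.tensordot(amps, psfs, axes=1)      # expected pixel counts

    # For independent Poisson pixels with means mu_i and parameters a_k
    # (per-site amplitudes), dmu_i/da_k = psf_k(i), so the Fisher matrix is
    # I_jk = sum_i psf_j(i) * psf_k(i) / mu_i.
    flat = psfs.reshape(len(sites), -1)
    fisher = flat @ (flat / mu.ravel()).T
    return np.diag(np.linalg.inv(fisher))             # CRB variances, one per site
```

Evaluating with `occupied` all True or all False at the neighbors of a given site reproduces the worst- and best-case bounds mentioned above, while a half-filled pattern gives the averaged case.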