ATP: Adaptive Threshold Pruning for Efficient Data Encoding in Quantum Neural Networks

4 citations · ranked #1555 of 2873 papers in CVPR 2025

Abstract

Quantum Neural Networks (QNNs) offer promising capabilities for complex data tasks, but are often constrained by limited qubit resources and high entanglement, which can hinder scalability and efficiency. In this paper, we introduce Adaptive Threshold Pruning (ATP), an encoding method that reduces entanglement and optimizes data complexity for efficient computations in QNNs. ATP dynamically prunes non-essential features in the data based on adaptive thresholds, effectively reducing quantum circuit requirements while preserving high performance. Extensive experiments across multiple datasets demonstrate that ATP reduces entanglement entropy and improves adversarial robustness when combined with adversarial training methods like FGSM. Our results highlight ATP's ability to balance computational efficiency and model resilience, achieving significant performance improvements with fewer resources and making QNNs more feasible in practical, resource-constrained settings.
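The abstract does not spell out the thresholding rule, so the following is only a minimal sketch of the general idea: features whose magnitude falls below a data-dependent threshold are pruned before encoding, so fewer rotation gates (and hence fewer qubits or entangling layers) are needed. The function name adaptive_threshold_prune and the mean-plus-k·std rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def adaptive_threshold_prune(x, k=0.5):
    """Zero out features whose magnitude falls below an adaptive threshold.

    The threshold (mean + k * std of the feature magnitudes) is an assumed
    rule used only for illustration of threshold-based pruning.
    """
    mag = np.abs(x)
    threshold = mag.mean() + k * mag.std()
    mask = mag >= threshold
    return x * mask, mask

# Example: an 8-dimensional feature vector pruned before angle encoding.
x = np.array([0.9, 0.02, -0.7, 0.01, 0.4, -0.03, 0.8, 0.05])
pruned, mask = adaptive_threshold_prune(x)

# Only the surviving features would be mapped to rotation angles,
# e.g. angles = np.pi * pruned[mask], shrinking the encoding circuit.
print(f"{mask.sum()} of {x.size} features kept:", pruned)
```

In this toy run only the three largest-magnitude features survive, so an angle-encoding circuit would need three rotations instead of eight; how ATP adapts the threshold per sample or per dataset is detailed in the paper itself.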

Citation History

Jan 24, 2026: 3
Jan 27, 2026: 3
Feb 3, 2026: 4+1
Feb 13, 2026: 4