Neural networks have found applications in diverse fields such as image recognition and data processing. As the volume of data to be processed grows, however, so does the cost of training, creating demand for faster and more energy-efficient arithmetic circuits. In this context, continuing advances in superconducting circuit technology present an intriguing opportunity on the hardware front. Several research groups have already implemented neural networks with superconducting circuits, and their results demonstrate the potential of this approach. We therefore believe that adopting superconducting circuits for neural network computation is a promising way to address growing data volumes and can pave the way for more efficient and powerful neural network applications.
In this paper, we propose a novel computing unit. The unit draws inspiration from the neuron model of Binary Neural Networks (BNNs) but performs computation involving both digital and analog quantities, ultimately producing digital results. We design a special SFQ logic gate, the NOT/DFF switch gate (NDSG), which can be switched between NOT and DFF operation by a control current. Using the NDSG, we combine SFQ digital signals with CMOS analog signals. This approach offers two advantages. First, it reduces the area of the SFQ computing unit by offloading part of the computation to mixed-signal CMOS circuits, thereby increasing integration density. Second, the neural unit allows feedback signals to be sent back to the CMOS chip, where CMOS computation generates the feedback current applied to the SFQ computing unit; this eliminates the need for pulse feedback within the SFQ circuit and avoids the necessity of adjusting the SFQ circuit clocks. With these innovations, the proposed neural unit offers a promising path toward on-chip learning in superconducting circuits, facilitating efficient and effective neural network computation. All circuit components were designed using the National Institute of Advanced Industrial Science and Technology (AIST) 10 kA/cm² Nb advanced process 2 (ADP2) and its cell library.
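To make the role of a pass/invert gate in a BNN-style computation concrete, the following is a minimal behavioral sketch in Python. It abstracts the NDSG to a per-cycle bit operation (SFQ pulse timing, bias currents, and the analog feedback path are deliberately omitted), and the `ndsg` and `bnn_neuron` functions, the mode encoding, and the majority-style sign activation are illustrative assumptions, not the actual circuit design.

```python
# Behavioral sketch (hypothetical abstraction, not a circuit-level model):
# the NDSG is reduced to a gate that, each clock cycle, either passes its
# input bit (DFF mode) or inverts it (NOT mode).

def ndsg(d: int, not_mode: bool) -> int:
    """One clock cycle of the abstracted NDSG.

    not_mode=True  -> acts as a NOT gate: output is the inverted input.
    not_mode=False -> acts as a DFF: output is the input bit, passed through.
    """
    return 1 - d if not_mode else d

# In a BNN, multiplying a {-1, +1} activation by a {-1, +1} weight is an
# XNOR on their {0, 1} encodings (1 encodes +1, 0 encodes -1). A gate that
# either passes or inverts a bit realizes exactly this multiply:
# weight +1 -> pass (DFF mode), weight -1 -> invert (NOT mode).

def bnn_neuron(activations: list[int], weights: list[int]) -> int:
    """Binarized neuron: XNOR products followed by a sign activation."""
    products = [ndsg(a, not_mode=(w == 0)) for a, w in zip(activations, weights)]
    # Sign activation: fire (1) if at least half of the products encode +1.
    return 1 if 2 * sum(products) >= len(products) else 0
```

In this sketch the mode flag plays the role of the control current: fixing a weight amounts to setting the gate's mode, so a stream of input bits is multiplied by a binary weight with no extra logic.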
This work was supported by JSPS KAKENHI Grant Number JP22H01542. The circuits were fabricated in the clean room for analog-digital superconductivity (CRAVITY) at the National Institute of Advanced Industrial Science and Technology (AIST) using the advanced process 2 (ADP2).