Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation.

, , , , , , , , and . CoRR, (2019)


Other publications of authors with the same name

Towards Ultra-High Performance and Energy Efficiency of Deep Learning Systems: An Algorithm-Hardware Co-Optimization Framework., , , , , , , , , and 1 other author(s). AAAI, page 4235-4243. AAAI Press, (2018)

Structured Weight Matrices-Based Hardware Accelerators in Deep Neural Networks: FPGAs and ASICs., , , , , , , and . ACM Great Lakes Symposium on VLSI, page 353-358. ACM, (2018)

A SOT-MRAM-based Processing-In-Memory Engine for Highly Compressed DNN Implementation., , , , and . CoRR, (2019)

Non-structured DNN Weight Pruning Considered Harmful., , , , , , , , , and 3 other author(s). CoRR, (2019)

Toward Extremely Low Bit and Lossless Accuracy in DNNs with Progressive ADMM., , , , , and . CoRR, (2019)

ResNet Can Be Pruned 60x: Introducing Network Purification and Unused Path Removal (P-RM) after Weight Pruning., , , , , and . CoRR, (2019)

CirCNN: accelerating and compressing deep neural networks using block-circulant weight matrices., , , , , , , , , and 6 other author(s). MICRO, page 395-408. ACM, (2017)

Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation., , , , , , , , and . CoRR, (2019)

An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM., , , , , , , , , and . ISLPED, page 1-6. IEEE, (2019)

An area and energy efficient design of domain-wall memory-based deep convolutional neural networks using stochastic computing., , , , , , , and . ISQED, page 314-321. IEEE, (2018)