
S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters.

, , , and . PPoPP, page 193-205. ACM, (2017)


Other publications of authors with the same name

Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation., , , , and . CoRR, (2018)
Optimized large-message broadcast for deep learning workloads: MPI, MPI+NCCL, or NCCL2?, , , , and . Parallel Computing, (2019)
CUDA Kernel Based Collective Reduction Operations on Large-scale GPU Clusters., , , , and . CCGrid, page 726-735. IEEE Computer Society, (2016)
HyPar-Flow: Exploiting MPI and Keras for Scalable Hybrid-Parallel DNN Training using TensorFlow., , , , and . CoRR, (2019)
Towards Efficient Support for Parallel I/O in Java HPC., , , and . PDCAT, page 137-143. IEEE, (2012)
Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation., , , , and . CCGRID, page 498-507. IEEE, (2019)
CUDA M3: Designing Efficient CUDA Managed Memory-Aware MPI by Exploiting GDR and IPC., , , and . HiPC, page 52-61. IEEE Computer Society, (2016)
Exploiting GPUDirect RDMA in Designing High Performance OpenSHMEM for NVIDIA GPU Clusters., , , , , and . CLUSTER, page 78-87. IEEE Computer Society, (2015)
High performance distributed deep learning: a beginner's guide., , and . PPoPP, page 452-454. ACM, (2019)
Optimized Broadcast for Deep Learning Workloads on Dense-GPU InfiniBand Clusters: MPI or NCCL?, , , and . EuroMPI, page 2:1-2:9. ACM, (2018)