Senior Deep Learning Solution Architect | NVIDIA
Better, Faster: Using Your GPUs for Deep Learning Workloads
The use of GPUs has evolved from gaming to deep learning, geared toward, but not limited to, computer vision workloads: autonomous vehicles, medical imaging in radiomics, and video surveillance, to name a few.
However, having powerful GPUs at your disposal is one thing; knowing how to use them to their full potential is something we are going to explore together.
At NVIDIA, we are dedicated to creating an ecosystem of full-cycle hardware-and-software solutions that enables optimization on all fronts: from plug-and-play Docker containers maintained by us, to on-the-fly data augmentation on the GPU, to parallel model training across multiple GPUs with mixed precision for target deployment.
We have open-sourced these toolkits so that data scientists can quickly kick-start their work instead of spending precious time fixing environment installation errors, putting supercomputing power at your fingertips.
Today we are going to take a look at these essential toolkits, which enable deep learning practitioners and data scientists alike to train their models in parallel, iterate faster, and develop better market-ready products.
I have worked hands-on for many years as an in-house data scientist, an external deep learning consultant, a cloud solution architect (on Azure), and now a senior deep learning solution architect at NVIDIA. My journey of seeking optimal pathways to utilizing multiple GPUs for deep learning is paved with years of industry experience and practical tips and tricks learned from pitfalls in real-world projects. I am on a mission to help data scientists and researchers alike accelerate their deep learning workloads with ease, drawing on these learnings and experiences.