Siddha Ganju

Self-Driving Architect, NVIDIA

Siddha Ganju, an AI researcher whom Forbes featured in its 30 Under 30 list, is a Self-Driving Architect at NVIDIA. As an AI advisor to NASA FDL, she helped build an automated meteor detection pipeline for the CAMS project at NASA, which went on to discover a comet. Previously, at Deep Vision, she developed deep learning models for resource-constrained edge devices. Her work, ranging from Visual Question Answering to Generative Adversarial Networks to gathering insights from CERN's petabyte-scale data, has been published at top-tier conferences including CVPR and NeurIPS. She has served as a featured jury member in several international tech competitions, including CES. As an advocate for diversity and inclusion in technology, she speaks at schools and colleges to motivate and grow a new generation of technologists from all backgrounds.


Talks on Wurreka:

“Watching paint dry is faster than training my deep learning model.”

“If only I had ten more GPUs, I could train my model in time.”

“I want to run my model on a cheap smartphone, but it’s probably too heavy and slow.”

If this sounds like you, then you might like this talk.

Exploring the landscape of training and inference, we cover a range of tricks that incrementally improve the efficiency of most deep learning pipelines, reduce wasted hardware cycles, and make them more cost-effective. We identify and fix inefficiencies across different parts of the pipeline, including data preparation, reading and augmentation, training, and inference.

With a data-driven approach and easy-to-replicate TensorFlow examples, you will learn to fine-tune the knobs of your deep learning pipeline and get the best out of your hardware.
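As a flavour of the kind of tuning the talk covers, here is a minimal, illustrative tf.data sketch (my example, not material from the talk) that parallelizes decoding and augmentation and prefetches batches so CPU preprocessing overlaps with training; the file path, image size, and augmentation choices are placeholder assumptions.

```python
import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE  # let tf.data pick parallelism and buffer sizes

def decode_and_augment(path):
    # Read, decode, resize, and lightly augment a single image file.
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    image = tf.image.resize(image, (224, 224))
    return tf.image.random_flip_left_right(image)

dataset = (
    tf.data.Dataset.list_files("data/train/*.jpg")          # hypothetical dataset path
    .map(decode_and_augment, num_parallel_calls=AUTOTUNE)   # parallel CPU preprocessing
    .batch(32)
    .prefetch(AUTOTUNE)  # overlap preprocessing with training on the accelerator
)
```

Letting AUTOTUNE choose the parallelism and prefetch depth avoids hand-tuning thread counts for each machine, which is one simple way to stop the GPU from sitting idle while data is prepared.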

Over the last few years, convolutional neural networks (CNNs) have risen in popularity, especially in the area of computer vision. Many mobile applications running on smartphones and wearable devices could benefit from the new opportunities enabled by deep learning techniques. However, CNNs are by nature compute- and memory-intensive, making them challenging to deploy on mobile devices. We explain how to practically bring the power of convolutional neural networks and deep learning to memory- and power-constrained devices like smartphones. We’ll illustrate the value of these concepts with real-time demos as well as case studies from Google, Microsoft, Facebook, and more. You will walk away with strategies to circumvent obstacles and build mobile-friendly, shallow CNN architectures that significantly reduce memory footprint.
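One widely used technique in this space, shown here purely as an illustrative sketch rather than the talk's own material, is post-training quantization with the TensorFlow Lite converter; the MobileNetV2 starting point and the output file name below are assumptions.

```python
import tensorflow as tf

# A compact, mobile-friendly CNN; MobileNetV2 is used here only as a stand-in example.
model = tf.keras.applications.MobileNetV2(weights="imagenet", input_shape=(224, 224, 3))

# Post-training quantization via the TensorFlow Lite converter stores weights in 8 bits,
# typically shrinking the model roughly 4x with little accuracy loss.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("mobilenet_v2_quantized.tflite", "wb") as f:  # hypothetical output file
    f.write(tflite_model)
```

The resulting .tflite file can then be bundled with a mobile app and run with the TensorFlow Lite interpreter, which is one common route to fitting a CNN within a smartphone's memory and power budget.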


Hear What Attendees Say


“Once again Wurreka has knocked it out of the park with interesting speakers, engaging content and challenging ideas. No jetlag fog at all, which counts for how interesting the whole thing was.”

Cybersecurity Lead, PwC


“Very much looking forward to next year. I will be keeping my eye out for the date so I can make sure I lock it in my calendar.”

Software Engineering Specialist, Intuit


“Best conference I have ever been to, with lots of insights and information on next-generation technologies and those that are the need of the hour.”

Software Architect, Groupon

Hear What Speakers & Sponsors Say


“Happy to meet everyone who came from near and far. Glad to know you've discovered some great lessons here, and glad you joined us for all the discoveries great and small.”

Scott Davis, Web Architect & Principal Engineer, ThoughtWorks


“What a buzz! The events have been instrumental in bringing the whole software community together. There has been something for everyone from developers to architects to business to vendors. Thanks everyone!”

Voltaire Yap, Global Events Manager, Oracle Corp.


“Wonderful set of conferences, well organized, fantastic speakers, and an amazingly interactive audience. Thanks for having me at the events!”

Dr. Venkat Subramaniam, Founder - Agile Developer Inc.