Dr. Yuval Shalev is the data science research team lead at Questar Automotive Technologies. Yuval has been working as a quant, data scientist, and team lead since 2008, mainly in the fintech industry. He holds a Ph.D. in Industrial Engineering from Tel Aviv University and previously completed his M.Sc. in physics at the Hebrew University of Jerusalem. His industrial and academic research focuses on anomaly detection using deep learning models, representation learning of time series, statistical robustness of unsupervised models, and the connection between information-theoretic measures and deep learning algorithms.
Have you ever wondered how an autonomous bulldozer chooses its next actions to optimally perform a grading task?
In my talk I will share our methodology and show how we use reinforcement learning, behavior cloning, contrastive learning, and other techniques to train a policy for autonomous bulldozers.
In addition, I will show the simulation and prototype robot we used to test and validate our algorithm.
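For readers unfamiliar with behavior cloning, one of the techniques mentioned above, here is a minimal sketch (purely illustrative; the linear policy, the toy "expert", and all names are my own assumptions, not the team's actual setup): the expert's logged state-action pairs are fit by plain supervised regression.

```python
import numpy as np

# Behavior cloning in its simplest form: treat logged (state, action) pairs
# from an expert as a supervised dataset and fit a policy to imitate them.
# Everything here is a toy stand-in, not the actual bulldozer pipeline.
rng = np.random.default_rng(1)
states = rng.normal(size=(500, 4))      # toy "bulldozer states"
W_expert = rng.normal(size=(4, 2))      # hidden expert policy (linear, noiseless)
actions = states @ W_expert             # expert actions logged for each state

# Cloning step = least-squares regression from states to expert actions.
W_policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy reproduces the expert on unseen states.
new_state = rng.normal(size=(1, 4))
print(np.allclose(new_state @ W_policy, new_state @ W_expert))  # True
```

In practice the linear map would be a neural network and the expert data would come from human operators or a planner, but the supervised-learning structure is the same.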
Chana Ross is currently working at the Bosch Center for AI on trajectory planning algorithms for autonomous bulldozers using reinforcement learning and sensor fusion. She holds a B.Sc. in aerospace engineering and an M.Sc. in applied mathematics from the Technion; her thesis focused on reinforcement learning for multi-agent trajectory planning. She previously worked for over seven years at Rafael, first in aerodynamics and then in a group doing operations research for multidisciplinary problems.
In this talk we demonstrate how attackers can apply split-second phantom adversarial attacks against the AI of advanced driver-assistance systems, causing two commercial systems (Tesla Model X and Mobileye 630) to trigger a sudden stop in the middle of the road, apply the brakes, and issue false notifications. A countermeasure consisting of four neural networks, which assesses the authenticity of a detected object, will also be presented.
Ben Nassi is a postdoctoral researcher at Ben-Gurion University of the Negev (BGU) and a former Google employee. He specializes in securing the interface between cyber-physical systems and the digital world: autonomous cars, IoT devices (video cameras, smart irrigation systems, routers), drones, and more. He works on the security of AI and perception, side-channel attacks, TEMPEST attacks, and privacy-related issues. His research has been presented at top academic conferences (S&P, CCS, USENIX Security), published in journals (TIFS), and covered by international media (Wired, Ars Technica, Motherboard, The Washington Post, Bloomberg, Business Insider). Ben has spoken at prestigious venues including HITB 21, SecTor 21, RSAC 21, Black Hat USA 20, CodeBlue 20, SecTor 20, RSAC 20, and CyberTech 19.
13:46 - 13:55 Deep Learning for Tabular Data – Innovation Meets Practice
Existing analyses of optimization in deep learning are either continuous, focusing on variants of gradient flow (GF), or discrete, directly treating variants of gradient descent (GD). GF is amenable to theoretical analysis, but is stylized and disregards computational efficiency. The extent to which it represents GD is an open question in deep learning theory. My talk will present a recent study of this question. Viewing GD as an approximate numerical solution to the initial value problem of GF, I will show that the degree of approximation depends on the curvature around the GF trajectory, and that over deep neural networks (NNs) with homogeneous activations, GF trajectories enjoy favorable curvature, suggesting they are well approximated by GD. I will then use this finding to translate an analysis of GF over deep linear NNs into a guarantee that GD efficiently converges to a global minimum *almost surely* under random initialization. Finally, I will present experiments suggesting that over simple deep NNs, GD with conventional step size is indeed close to GF. An underlying theme of the talk will be the potential of GF (or modifications thereof) to unravel mysteries behind deep learning.
The talk is based on a paper recently published as a spotlight at NeurIPS 2021 (joint work with Omer Elkabetz).
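The GD-as-numerical-solver view underlying the abstract can be sketched in a few lines (an illustrative toy of mine, not the paper's code): gradient descent with step size eta is exactly the forward-Euler method applied to the gradient-flow ODE dθ/dt = -∇L(θ), so on a benign loss (here, a convex quadratic) the GD iterates track the GF trajectory closely.

```python
import numpy as np

# Toy loss L(theta) = 0.5 * theta^T A theta, with gradient A @ theta.
A = np.diag([1.0, 0.2])
grad = lambda theta: A @ theta

def gd(theta0, eta, steps):
    """Gradient descent: each update is one forward-Euler step of size eta."""
    theta = theta0.copy()
    for _ in range(steps):
        theta = theta - eta * grad(theta)
    return theta

def gf(theta0, t, dt=1e-4):
    """Gradient flow d(theta)/dt = -grad L(theta), via fine Euler integration."""
    theta = theta0.copy()
    for _ in range(int(round(t / dt))):
        theta = theta - dt * grad(theta)
    return theta

theta0 = np.array([1.0, 1.0])
eta, steps = 0.01, 100
# GD after `steps` updates approximates GF at continuous time t = eta * steps;
# the gap is small and shrinks further as eta decreases.
print(np.linalg.norm(gd(theta0, eta, steps) - gf(theta0, eta * steps)))
```

The paper's contribution concerns when this closeness persists for non-convex deep networks, where curvature along the GF trajectory governs the approximation error.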
Dr. Nadav Cohen
Nadav Cohen is an Asst. Professor of Computer Science at Tel Aviv University, and Chief Scientist at Imubit. His academic research revolves around the theoretical and algorithmic foundations of deep learning, while at Imubit he leads the development of deep reinforcement learning systems controlling industrial manufacturing lines. Nadav earned a BSc in electrical engineering and a BSc in mathematics (both summa cum laude) at the Technion Excellence Program for Distinguished Undergraduates. He obtained his PhD (direct track, summa cum laude) at the Hebrew University, and was subsequently a postdoctoral scholar at the Institute for Advanced Study in Princeton. For his contributions to deep learning, Nadav won a number of awards, including the Google Research Scholar Award, the Google Doctoral Fellowship in Machine Learning, the Final Prize for Machine Learning Research, the Rothschild Postdoctoral Fellowship, the Zuckerman Postdoctoral Fellowship, and TheMarker's 40 under 40 list.
The theory of deep learning focuses almost exclusively on supervised learning, non-convex optimization using stochastic gradient descent, and overparametrized networks. It is a common belief that the optimizer dynamics, network architecture, initialization procedure, and other factors are tied together, all being components of deep learning's success.
I'll describe our recent work relating classical online learning theory to deep learning, in which we decouple optimization, regret/generalization, and expressiveness. We give agnostic and online learning guarantees for fully connected deep neural networks with nonlinear activations. We quantify convergence and regret guarantees for any range of parameters, and allow any optimization procedure, such as adaptive gradient methods and second-order methods.
As an application, we derive provable algorithms for deep control and reinforcement learning in the online and episodic settings.
Based on joint work with Xinyi Chen, Edgar Minasyan, and Jason Lee.
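To make the notion of regret in the abstract concrete, here is a toy sketch (my own illustration, not the authors' algorithm): online gradient descent on a stream of quadratic losses attains regret that grows sublinearly in the horizon T against the best fixed action in hindsight, so the average regret vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
z = rng.uniform(0.0, 1.0, size=T)   # round-t loss is f_t(x) = (x - z_t)^2

x, losses = 0.0, []
for t in range(1, T + 1):
    losses.append((x - z[t - 1]) ** 2)            # suffer the loss at the played point
    g = 2.0 * (x - z[t - 1])                      # gradient of f_t at x
    x = np.clip(x - g / np.sqrt(t), -1.0, 1.0)    # OGD step with eta_t = 1/sqrt(t)

best = z.mean()                                   # best fixed action in hindsight
regret = sum(losses) - ((z - best) ** 2).sum()
print(regret / T)                                 # average regret; small, -> 0 as T grows
```

The talk's results extend this style of guarantee, where regret is controlled for any optimization procedure, to deep neural networks rather than a single scalar decision variable.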
I study the automation of the learning mechanism and its efficient algorithmic implementation. This study centers on the field of machine learning and touches upon mathematical optimization, game theory, statistics, and computational complexity.