It is said that “Data is the new Oil”, and indeed in this Age of Data companies and government agencies alike gather data of all types and from numerous sources in an attempt to become data-driven organizations. A large portion of this data, however, originates from a single underlying source: people. Tweets and blog posts are written by humans for humans; purchase transactions and phone call records convey human desires for things and other people; app logs report on how people interact with computers and mobile devices. Data derived from human behaviour is “messy”: it is dynamic, complex and extremely versatile. Human behaviour, as recorded in such digital data channels, changes drastically over time, is influenced by underlying complex social networks, and is conveyed in highly multimodal data streams – posing a significant hurdle to any organization striving to truly base its decisions and operations on its data.
Developed at the MIT Human Dynamics Lab, Social Physics is a novel scientific approach to data, which uses big data analysis and the mathematical laws of biology to understand the behaviour of human crowds, enabling the development of a fully automatic platform that can absorb, analyze and merge dynamic data streams of various sources, forms and types. Social Physics is the technological framework behind the Endor analytics engine, which serves global financial institutions and government agencies.
Dr. Yaniv Altshuler
Dr. Yaniv Altshuler is an MIT researcher and an expert on Artificial Intelligence and network theory. He has published over 70 scientific papers, 15 patents and 3 books. His research has been covered by Harvard Business Review, Financial Times, The Globe and others.
Absolute camera pose regressors estimate the position and orientation of a camera from the captured image alone. As such, they offer a fast, lightweight and standalone alternative to localization pipelines. Typically, a convolutional backbone with a multi-layer perceptron (MLP) head is trained using images and pose labels to embed a single reference scene at a time. Recently, this scheme was extended to learning multiple scenes by replacing the MLP head with a set of fully connected layers. In this work, we propose to learn multi-scene absolute camera pose regression with Transformers, where encoders are used to aggregate activation maps with self-attention and decoders transform latent features and scene encodings into candidate pose predictions. This mechanism allows our model to focus on general features that are informative for localization while embedding multiple scenes in parallel. Our method achieves a new state-of-the-art localization accuracy for pose regression methods across indoor and outdoor benchmarks, surpassing both multi-scene and single-scene absolute pose regressors.
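The encoder–decoder scheme described above can be illustrated with a minimal numpy sketch: self-attention aggregates a CNN activation map, and one learned query per scene is decoded into a 7-dimensional pose (3D position plus orientation quaternion). All shapes, random weights and the single-head attention are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(q, k, v):
    # scaled dot-product attention (single head, no learned projections)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# stand-in for a backbone activation map: 14x14 spatial grid, 32 channels
feats = rng.normal(size=(14 * 14, 32))

# encoder: self-attention aggregates the activation map
enc = attention(feats, feats, feats)

# decoder: one learned query per scene attends to the encoder output
num_scenes = 3
scene_queries = rng.normal(size=(num_scenes, 32))   # assumed scene encodings
latents = attention(scene_queries, enc, enc)        # (num_scenes, 32)

# regression head: latent -> 7-dim pose (x, y, z + quaternion)
W_pose = rng.normal(size=(32, 7)) * 0.01
candidate_poses = latents @ W_pose                  # one candidate pose per scene

print(candidate_poses.shape)  # (3, 7)
```

At inference, the candidate corresponding to the most likely scene would be selected as the final pose prediction.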
Dr. Yoli Shavit
Yoli Shavit is a Research Scientist Team Lead at Huawei TRC and a Postdoctoral Researcher at Bar-Ilan University. Her current research focuses on deep learning methods for camera localization and 3D reconstruction. Before joining Huawei, Yoli worked at Amazon and interned at Microsoft Research. She holds a PhD in Computer Science from the University of Cambridge, an MSc in Bioinformatics from Imperial College London and a BSc in Computer Science and in Life Science from Tel Aviv University. Yoli is the recipient of the Cambridge International Scholarship and her thesis was nominated for the best thesis award in the UK.
In this talk I will give an overview of our program at Duke University, where we have developed and deployed an app to study behaviors in developmental disorders, autism spectrum disorder in particular, providing scalable tools with state-of-the-art performance thanks to careful co-design of stimuli and ML. The app has been deployed in pediatric clinics and has already collected and analyzed the largest ever dataset of this kind in the field.
Guillermo Sapiro received his B.Sc. (summa cum laude), M.Sc., and Ph.D. from the Technion, Israel Institute of Technology. After post-doctoral research at MIT, Dr. Sapiro became a Member of Technical Staff at HP Labs. He was with the University of Minnesota, and currently he is a James B. Duke School Professor with Duke University. He is also with Apple, Inc., where he leads a team on Health AI. He works on theory and applications in computer vision, computer graphics, medical imaging, image analysis, and machine learning. He has authored over 450 papers in these areas and has written a book published by CUP. G. Sapiro was awarded the ONR Young Investigator Award in 1998, the Presidential Early Career Award for Scientists and Engineers (PECASE) in 1998, the NSF CAREER Award in 1999, and the National Security Science and Engineering Faculty Fellowship in 2010. He received the Test-of-Time award at ICCV 2011 and at ICML 2019. He was elected to the American Academy of Arts and Sciences in 2018, and is a Fellow of IEEE and SIAM. G. Sapiro was the founding Editor-in-Chief of the SIAM Journal on Imaging Sciences.
The single image super-resolution task is one of the most examined inverse problems of the past decade. In recent years, Deep Neural Networks (DNNs) have shown superior performance over alternative methods when the acquisition process uses a fixed known downsampling kernel, typically a bicubic kernel. However, several recent works have shown that in practical scenarios, where the test data do not match the training data (e.g. when the downsampling kernel is not the bicubic kernel or is not available at training), the leading DNN methods suffer from a huge performance drop. Inspired by the literature on generalized sampling, in this work we propose a method for improving the performance of DNNs that have been trained with a fixed kernel on observations acquired by other kernels. For a known kernel, we design a closed-form correction filter that modifies the low-resolution image to match one which is obtained by another kernel (e.g. bicubic), and thus improves the results of existing pre-trained DNNs. For an unknown kernel, we extend this idea and propose an algorithm for blind estimation of the required correction filter. We show that our approach outperforms other super-resolution methods, which are designed for general downsampling kernels.
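The idea of a correction filter that maps observations from one blur kernel to another can be sketched in 1-D with numpy. The Wiener-style filter below, the Gaussian kernels standing in for the actual acquisition and bicubic kernels, and the toy signal are all illustrative assumptions, not the paper's exact closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

def gaussian_kernel(n, sigma):
    # centered, normalized Gaussian blur kernel, shifted so the peak sits at index 0
    x = np.arange(n) - n // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return np.fft.ifftshift(k / k.sum())

signal = np.cumsum(rng.normal(size=n))   # smooth-ish toy ground-truth signal
s = gaussian_kernel(n, 3.0)              # actual (mismatched) acquisition kernel
b = gaussian_kernel(n, 1.5)              # kernel assumed at training ("bicubic" stand-in)

S, B, X = np.fft.fft(s), np.fft.fft(b), np.fft.fft(signal)

# observation blurred by the mismatched kernel s
y = np.real(np.fft.ifft(X * S))

# Wiener-style correction filter mapping s-blurred data toward b-blurred data
eps = 1e-3
H = B * np.conj(S) / (np.abs(S) ** 2 + eps)
y_corr = np.real(np.fft.ifft(np.fft.fft(y) * H))

# target: what a network pre-trained on kernel b expects to see
y_target = np.real(np.fft.ifft(X * B))

err_before = np.mean((y - y_target) ** 2)
err_after = np.mean((y_corr - y_target) ** 2)
print(err_after < err_before)  # correction moves the observation closer to the training model
```

After correction, the modified observation can be fed to an off-the-shelf DNN trained under the assumed kernel; the blind variant would estimate `s` from the observation itself before building the filter.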
Shady Abu-Hussein is a Ph.D. student in Electrical Engineering at Tel Aviv University, supervised by Prof. Raja Giryes. He received his B.Sc. in Electrical and Computer Engineering from Ben-Gurion University of the Negev in 2017. During his time at Tel Aviv University he also worked at the IBM Research Lab (summer 2021) and at Intel (2017–present). His current research focuses on solving imaging inverse problems with arbitrary observation models by adapting off-the-shelf pretrained deep neural networks.
Yossi Matias is Vice President, Engineering & Research, at Google, and the founding Managing Director of the Google Center in Israel. He is a world-renowned expert in AI and in the leadership of global-scale product and technology innovation. Yossi is the lead of Google’s Health AI. He is the lead of Google’s Crisis Response initiative, providing AI-based actionable information (including flood forecasting – on Fortune’s “Change The World” list). He is on Google’s Sustainability board and a founding lead of Google’s AI for Social Good initiative. Yossi pioneered Conversational AI innovations, transforming the phone experience (including Google Duplex, Call Screen, Hold for Me, Live Caption, Live Translate, Euphonia, Read Aloud). He pioneered an initiative of bringing online hundreds of heritage collections, seeding Google’s Cultural Institute. For over a decade Yossi was on the leadership team of Google’s Search, building and leading global efforts including Google Trends, Google Autocomplete, Search Console, and Search vertical experiences. Yossi is the founding exec lead of Google for Startups Accelerator, which has supported thousands of startups globally. Prof. Matias is also on the Computer Science faculty at Tel Aviv University, and was previously a Research Scientist at Bell Labs and a visiting professor at Stanford. He has published over 100 papers and is the inventor of over 60 patents in diverse areas. Yossi is a recipient of the 2005 Gödel Prize, a 2009 ACM Fellow, and a recipient of the 2019 ACM Kanellakis Theory and Practice Award for seminal work on the foundations of streaming algorithms and their application to large-scale data analytics.
Naama Hammel, MD
Naama is a clinical research scientist at Google Health. In this role she focuses on developing machine learning models for the detection of ocular and systemic diseases from medical images. Naama is an ophthalmologist with a subspecialty in glaucoma. She completed her medical and ophthalmology training at Tel-Aviv University; her glaucoma fellowship at the Shiley Eye Institute, UC San Diego; and her ophthalmic informatics fellowship at the UC Davis Eye Center.