Multilayer data-driven models for Early Detection of COVID-19
Dr. Dan Yamin
14:30 - 14:45
A system theoretic approach to explore safe return to new normal in the face of COVID-19 pandemic related uncertainties
Dr. Souvik Barat
14:45 - 15:00
Health RI - Dutch National Health Data Infrastructure for Research and Innovation, including AI: Covid use case
Prof. Wiro Niessen
15:00 - 15:15
Population-wide prediction of severe COVID from electronic healthcare records
15:15 - 15:30
AI Infrastructure as a Platform for Futuristic Applications
15:30 - 16:00
COMPUTER VISION TRACK II
16:00 - 16:30
Turning GANs into Useful Consumer Products
16:30 - 16:50
Layered Neural Atlases for Consistent Video Editing
16:50 - 17:10
Classic Signal Processing meets Deep Learning
17:10 - 17:30
Moderator: Dr. Jacob Mendel
Dr. Mendel was the General Manager of the Cybersecurity COE at Intel. He is a serial cybersecurity entrepreneur; he was the CEO and co-founder of SCsquare Ltd., which he founded as a business enabler for advanced cybersecurity technologies. Dr. Mendel holds 16 approved patents in the area of cybersecurity. His career in cybersecurity over the past 20 years is a unique mixture of broad practical experience and research expertise. His practice has included extensive involvement in offensive cybersecurity projects, research and development, business development and product management. He has a proven worldwide track record in secure operating systems, digital rights management (DRM), security certification (CC, FIPS, EMV), penetration testing, reverse engineering, machine learning, blockchain, IoT security and smart-grid cybersecurity. His academic research topics include the economic perspective of cybersecurity attacks, quantum computing, and blockchain technology, with a special focus on cybersecurity attacks, privacy issues and business continuity under cyber-attack.
Jack leads Sandbox, developing quantum-inspired physics- and A.I.-based tools and applications that run on today’s classical computing platforms. Sandbox focuses on enterprise SaaS solutions at the intersection of machine learning and physics.
Jack is the author of Quantum Computing: An Applied Approach, published by Springer. This work, now in its Second Edition, is one of the leading textbooks in the field and is used both in PhD programs and corporate training sessions.
Jack is a serial entrepreneur and founder of several tech companies, including EarthWeb/Dice (NYSE: DHX), which he led through a record-breaking IPO. He also co-founded Vista Research, which he then sold to Standard & Poor's/McGraw-Hill.
Jack is a trustee of the X Prize Foundation and has been a board member of Trickle Up, which helps thousands of entrepreneurs start small businesses each year. His foundation, The Hidary Foundation, is dedicated to medical oncology research and has supported work at Sloan Kettering and UCSF.
Jack has been recognized for his leadership by organizations such as the World Economic Forum, HealthCorps and Young Presidents’ Organization. Jack studied neuroscience at Columbia and subsequently received the Stanley Fellowship in Clinical Neuroscience at NIH where he worked on functional brain imaging and neural networks.
There is a growing demand for AI systems that justify their decisions. In my talk, I will present recent results on adding explainability to black-box algorithms, as well as ways to design transparent AI methods. We will discuss the implications of such methods in science and technology, and how explainability can provide a feedback mechanism that enhances the performance of AI methods.
Prof. Lior Wolf
Prof. Wolf is a full professor at the School of Computer Science at Tel Aviv University. His research focuses on computer vision and deep learning.
AI research keeps accelerating, with tens of thousands of papers published each year. How does this relate to applying AI in industry? The talk will start with a brief overview of key recent AI techniques that could apply to a wide variety of real-life problems. It will then provide examples of utilizing such new approaches to develop innovative AI-based products internally at Intel. Intel's internal AI group (IT AI) transforms the company's critical operations using AI, from processor architecture and design, through manufacturing, to sales. This affects almost every processor that Intel provides, yielding both a 9-digit value for the company and better products for its customers. The talk will discuss video, text and tabular-data use cases, which increase product quality and operational efficiency. One of these applications was developed in collaboration with the Stanford AI lab.
Dr. Amitai Armon
Dr. Amitai Armon is the Chief Data Scientist of Intel's internal AI group, a global group of over 200 experts that uses AI to upgrade and transform Intel's critical operations. The group develops and deploys AI solutions across the company, from processor architecture and design, through manufacturing, to sales. This yields both a 9-digit value for Intel and better products for its customers. Prior to joining Intel in 2013, Amitai was the co-founder and Director of Research at TaKaDu, a data-science company that received multiple international awards, including the World Economic Forum Technology Pioneers award (Davos Summit). He was previously a visiting research scholar at the Los Alamos National Lab (USA). Amitai has over 20 years of experience in performing and leading data-science work, including service as a team leader in an elite technology unit. He holds a PhD in computer science from Tel Aviv University, where he previously completed his BSc with honors at the age of 18.
Supervised NLP tasks are often addressed today in an end-to-end manner, where a model is trained (or fine-tuned) on input-output instances of the full task. However, such an approach may exhibit limitations when applied to fairly complex tasks. In this talk, I will discuss an alternative approach where a complex task is decomposed into its inherent subtasks, each addressed by a targeted model, and will illustrate it for the challenging application of multi-document summarization (MDS). Notably, the decomposition approach becomes particularly appealing when targeted training data for modeling specific subtasks can be derived automatically from the originally available “end-to-end” training data for the full task, as I will show for the MDS case. Additionally, I will describe a separate contribution, namely a targeted Cross-Document Language Model (CDLM). This model is pre-trained specifically to model cross-document relationships supporting diverse tasks in this setting, and is also leveraged within our decomposed MDS architecture.
Ido Dagan is a Professor at the Department of Computer Science at Bar-Ilan University, Israel, the founder of the Natural Language Processing (NLP) Lab at Bar-Ilan, the founder and head of the nationally funded Bar-Ilan University Data Science Institute, and a Fellow of the Association for Computational Linguistics (ACL). His interests are in applied semantic processing, focusing on textual inference, natural open semantic representations, consolidation and summarization of multi-text information, and interactive text summarization and exploration. Dagan and colleagues initiated and promoted textual entailment recognition (RTE, later known as NLI) as a generic empirical task. He was the President of the ACL in 2010 and served on its Executive Committee during 2008-2011. In that capacity, he led the establishment of the journal Transactions of the Association for Computational Linguistics, which became one of the two premier journals in NLP. Dagan received his B.A. summa cum laude and his Ph.D. (1992) in Computer Science from the Technion. He was a research fellow at the IBM Haifa Scientific Center (1991) and a Member of Technical Staff at AT&T Bell Laboratories (1992-1994). During 1998-2003 he was co-founder and CTO of FocusEngine and VP of Technology of LingoMotors, and has been regularly consulting in the industry. His academic research has involved extensive industrial collaboration, including funds from IBM, Google, Thomson-Reuters, Bloomberg, Intel and Facebook, as well as collaboration with local companies under funded projects of the Israel Innovation Authority.
The field of NLP has undergone a revolution in the past few years geared by the use of very large language models (LMs) that can learn to perform language understanding tasks given only a few examples. In this talk, I will describe two recent projects that use neural retrievers to address some shortcomings of large LMs. First, while large LMs do well given a short piece of text, it is difficult to capitalize on their advantages for tasks that are at the corpus level (say all of Wikipedia). I will describe a self-supervised method for retrieving paragraphs from a large corpus, such as Wikipedia, that performs well even without any training examples. Second, the behavior of very large LMs strongly depends on the examples that are given to them as input (this is often termed in-context learning). We present a method for retrieving examples from the training set that maximizes the performance of the LM on a downstream language understanding task. Our approaches make it easier to build new question answering models for arbitrary corpora and to interact with large LMs that are provided as a service by commercial companies.
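To make the example-retrieval idea concrete, here is a minimal sketch. It is illustrative only: the talk describes trained retrievers, whereas this toy uses TF-IDF similarity, and the QA pairs are invented.

```python
# Toy sketch of example retrieval for in-context learning (illustrative only):
# rank training examples by similarity to the test input and place the
# top-k in the LM prompt. A trained dense retriever would replace TF-IDF here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_examples = [  # hypothetical training pairs
    ("Who wrote Hamlet?", "William Shakespeare"),
    ("What is the capital of France?", "Paris"),
    ("Who painted the Mona Lisa?", "Leonardo da Vinci"),
]
test_question = "Who wrote Macbeth?"

vectorizer = TfidfVectorizer()
train_vecs = vectorizer.fit_transform([q for q, _ in train_examples])
test_vec = vectorizer.transform([test_question])

scores = cosine_similarity(test_vec, train_vecs)[0]
top_k = scores.argsort()[::-1][:2]          # the most relevant examples

prompt = "".join(f"Q: {train_examples[i][0]}\nA: {train_examples[i][1]}\n\n" for i in top_k)
prompt += f"Q: {test_question}\nA:"         # in-context prompt sent to the LM
print(prompt)
```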
Jonathan Berant is an associate professor at the School of Computer Science at Tel Aviv University. Jonathan earned a Ph.D. in Computer Science at Tel Aviv University under the supervision of Prof. Ido Dagan. Jonathan was a post-doctoral fellow at Stanford University, working with Prof. Christopher Manning and Prof. Percy Liang, and subsequently a post-doctoral fellow at Google Research, Mountain View. Jonathan received several awards and fellowships, including the Rothschild fellowship, the ACL 2011 Best Student Paper award, the EMNLP 2014 Best Paper award, and the NAACL 2019 Best Resource Paper award, as well as several honorable mentions. Jonathan is currently an ERC grantee.
The typical approach in natural language processing is to use one-size-fits-all representations, obtained from training one model on very large text collections. While this approach is effective for those people whose language style is well represented in the data, it fails to account for variations between people, and thus may lead to worse performance for those in the minority. In this talk, I will challenge the one-size-fits-all assumption, and show that (1) we can identify words that are used in significantly different ways by speakers from different cultures; and (2) we can effectively use information about the people behind the words to build better natural language processing models.
Prof. Rada Mihalcea
Rada Mihalcea is the Janice M. Jenkins Collegiate Professor of Computer Science and Engineering at the University of Michigan and the Director of the Michigan Artificial Intelligence Lab. Her research interests are in computational linguistics, with a focus on lexical semantics, multilingual natural language processing, and computational social sciences. She serves or has served on the editorial boards of the journals Computational Linguistics, Language Resources and Evaluation, Natural Language Engineering, Journal of Artificial Intelligence Research, IEEE Transactions on Affective Computing, and Transactions of the Association for Computational Linguistics. She was a program co-chair for EMNLP 2009 and ACL 2011, and a general chair for NAACL 2015 and *SEM 2019. She currently serves as ACL President. She is the recipient of a Presidential Early Career Award for Scientists and Engineers awarded by President Obama (2009), an ACM Fellow (2019) and an AAAI Fellow (2021). In 2013, she was made an honorary citizen of her hometown of Cluj-Napoca, Romania.
Dr. Yuval Shalev is the data science research team lead at Questar Automotive technologies. Yuval has been working as a quant, data scientist and team lead since 2008, mainly in the fintech industry. Yuval holds a Ph.D. in Industrial Engineering from Tel Aviv University. Previously, he completed his M.Sc. in physics at the Hebrew University of Jerusalem. His industrial and academic research focuses on anomaly detection using deep learning models, representation learning of time series, statistical robustness of unsupervised models, and the connection between information-theoretic measures and deep learning algorithms.
Have you ever wondered how an autonomous bulldozer chooses its next actions to optimally perform the grading task?
In my talk I will share our methodology and how we use reinforcement learning, behavior cloning, contrastive learning and other techniques to train a policy for autonomous bulldozers.
In addition, I will show our simulation and prototype robot which we used to test and validate our algorithm.
Chana Ross is currently working at the Bosch Center for AI on trajectory-planning algorithms for autonomous bulldozers, using reinforcement learning and sensor fusion. She holds a B.Sc. in aerospace engineering and an M.Sc. in applied mathematics from the Technion, where her thesis focused on reinforcement learning for multi-agent trajectory planning. She previously worked for over seven years at Rafael, first in aerodynamics and then in a group doing operations research for multidisciplinary problems.
In this talk we demonstrate how attackers can apply split-second phantom adversarial attacks against the AI of advanced driver-assistance systems, causing two commercial systems (Tesla Model X and Mobileye 630) to trigger a sudden stop in the middle of the road, apply the brakes, and issue false notifications. A countermeasure consisting of four neural networks, which assesses the authenticity of a detected object, will also be presented.
Ben Nassi is a postdoctoral researcher at Ben-Gurion University of the Negev (BGU) and a former Google employee. He specializes in securing the interface between cyber-physical systems and the digital world: autonomous cars, IoT devices (video cameras, smart irrigation systems, routers), drones, etc. He works on security of AI and perception, side-channel attacks, TEMPEST attacks, and privacy-related issues. His research has been presented at top academic conferences (S&P, CCS, USENIX Security), published in journals (TIFS), and covered by international media (Wired, Ars Technica, Motherboard, The Washington Post, Bloomberg, Business Insider). Ben has spoken at prestigious venues including HITB 21, SecTor 21, RSAC 21, Black Hat USA 20, CodeBlue 20, SecTor 20, RSAC 20, and CyberTech 19.
13:46 - 13:55 Deep Learning for Tabular Data – Innovation Meets Practice
Existing analyses of optimization in deep learning are either continuous, focusing on variants of gradient flow (GF), or discrete, directly treating variants of gradient descent (GD). GF is amenable to theoretical analysis, but is stylized and disregards computational efficiency. The extent to which it represents GD is an open question in deep learning theory. My talk will present a recent study of this question. Viewing GD as an approximate numerical solution to the initial value problem of GF, I will show that the degree of approximation depends on the curvature around the GF trajectory, and that over deep neural networks (NNs) with homogeneous activations, GF trajectories enjoy favorable curvature, suggesting they are well approximated by GD. I will then use this finding to translate an analysis of GF over deep linear NNs into a guarantee that GD efficiently converges to global minimum *almost surely* under random initialization. Finally, I will present experiments suggesting that over simple deep NNs, GD with conventional step size is indeed close to GF. An underlying theme of the talk will be the possibility of GF (or modifications thereof) to unravel mysteries behind deep learning.
The talk is based on a paper recently published as spotlight in NeurIPS 2021 (joint work with Omer Elkabetz).
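For readers who want the GF/GD correspondence spelled out, here is a small numerical sketch (a toy convex quadratic, not one of the deep networks analyzed in the talk): GD is the Euler discretization of the gradient-flow ODE d(theta)/dt = -grad L(theta), and on a benign landscape the two trajectories stay close.

```python
# Toy illustration: gradient descent (GD) as the Euler discretization of
# gradient flow (GF). We integrate GF accurately with an ODE solver and
# compare the result with GD iterates on L(theta) = 0.5 * theta^T A theta.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # positive definite -> convex toy loss
grad = lambda theta: A @ theta           # gradient of the quadratic loss

theta0 = np.array([1.0, -1.0])
eta, steps = 0.05, 40

theta_gd = theta0.copy()
for _ in range(steps):                   # GD: theta <- theta - eta * grad(theta)
    theta_gd = theta_gd - eta * grad(theta_gd)

# GF: solve d(theta)/dt = -grad L(theta) up to time T = eta * steps.
sol = solve_ivp(lambda t, th: -grad(th), (0.0, eta * steps), theta0,
                rtol=1e-10, atol=1e-12)
theta_gf = sol.y[:, -1]

print("GD:", theta_gd, "GF:", theta_gf,
      "gap:", np.linalg.norm(theta_gd - theta_gf))
```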
Dr. Nadav Cohen
Nadav Cohen is an Asst. Professor of Computer Science at Tel Aviv University, and Chief Scientist at Imubit. His academic research revolves around the theoretical and algorithmic foundations of deep learning, while at Imubit he leads the development of deep reinforcement learning systems controlling industrial manufacturing lines. Nadav earned a BSc in electrical engineering and a BSc in mathematics (both summa cum laude) at the Technion Excellence Program for Distinguished Undergraduates. He obtained his PhD (direct track, summa cum laude) at the Hebrew University, and was subsequently a postdoctoral scholar at the Institute for Advanced Study in Princeton. For his contributions to deep learning, Nadav won a number of awards, including the Google Research Scholar Award, the Google Doctoral Fellowship in Machine Learning, the Final Prize for Machine Learning Research, the Rothschild Postdoctoral Fellowship, the Zuckerman Postdoctoral Fellowship, and TheMarker's 40 under 40 list.
The theory of deep learning focuses almost exclusively on supervised learning, non-convex optimization using stochastic gradient descent, and overparametrized networks. It is a common belief that the optimizer dynamics, network architecture, initialization procedure, and other factors tie together and are all components of its success.
I'll describe our recent work relating classical online learning theory to deep learning in which we decouple optimization, regret/generalization and expressiveness. We give agnostic and online learning guarantees for fully-connected deep neural networks with nonlinear activations. We quantify convergence and regret guarantees for any range of parameters and allow any optimization procedure, such as adaptive gradient methods and second order methods.
As an application, we derive provable algorithms for deep control and reinforcement learning in the online and episodic settings.
Based on joint work with Xinyi Chen, Edgar Minasyan and Jason Lee.
I study the automation of the learning mechanism and its efficient algorithmic implementation. This study centers on machine learning and touches upon mathematical optimization, game theory, statistics and computational complexity.
One of the great challenges of modern AI theory is to explain the success of overparameterized systems that learn to generalize even when optimizing over far more free parameters than examples. This success is often attributed to algorithmic traits such as inductive bias, implicit/explicit regularization, linear stability and more.
A great test case, to study how such algorithmic traits allow learning, is the classical setting of stochastic convex optimization. Indeed, classical results already demonstrate how an algorithm can learn, even in the overparameterized regime, as long as the population loss is convex. Surprisingly, this is possible even when there doesn't seem to be any effective bound on the number of parameters in the model, or the Rademacher complexity of the class. But, is that due to some type of implicit regularization? Flatness of the minima? Or even stability? What allows learning algorithms to succeed in the convex case?
In this talk I will describe our recent analyses for the generalization of algorithms such as stochastic gradient descent, gradient descent, and, generally, first order methods. Through these works we can shed light on potential techniques to prove generalization in overparameterized settings, and revisit notions such as capacity, stability and regularization as well as their role in generalization.
Based on joint works with Idan Amir, Assaf Dauber, Meir Feder, Tomer Koren, Yishay Mansour, and Uri Sherman.
Roi Livni is an assistant professor at Tel Aviv University EE. He received his Ph.D. from the Center for Brain Sciences (ELSC) at The Hebrew University of Jerusalem under the supervision of Amir Globerson. After that he was a research instructor at Princeton University, where he conducted his postdoctoral research. His research focuses on learning theory, with special emphasis on generalization theory, privacy and generative learning. Roi is the recipient of several awards and fellowships, such as the Google PhD Fellowship, the Rothschild postdoctoral fellowship, the COLT 2013 Best Student Paper award, the ICML 2013 Best Paper award, the FOCS 2020 Best Paper award and the COLT 2021 Best Paper runner-up.
In the past decade, deep learning has completely revolutionized AI. In this talk, I will explain what deep learning is, why it works, what it has done for us so far, and what it is likely to do in the future.
Ilya Sutskever is Co-founder and Chief Scientist of OpenAI, which aims to build artificial general intelligence that benefits all of humanity. He leads research at OpenAI and is one of the architects behind the GPT models.
Prior to OpenAI, Ilya was co-inventor of AlexNet and Sequence to Sequence Learning. He earned his B.Sc., M.Sc., and Ph.D. in Computer Science from the University of Toronto.
Dror Bin is the CEO of the Israel Innovation Authority, an independent public entity that operates for the benefit of the Israeli innovation ecosystem and the Israeli economy as a whole. Its role is to nurture and develop Israeli innovation resources, while creating and strengthening the infrastructure and framework needed to support the entire knowledge industry. Prior to his role at the Authority, Dror served as President and CEO of RAD Data Communications, a leading global telecom network solutions company with hundreds of employees at the company's headquarters in Tel Aviv, a manufacturing center in Jerusalem and a development center in Beer Sheva, as well as dozens of corporate branches around the world. Dror also served in a series of positions for close to a decade at Comverse Technology, the last of which was as a member of the management team and VP of Global Sales. Following this, Dror served as a venture partner at Carmel Ventures and a chairman at several of its portfolio companies. In addition, Dror served as a partner at Shaldor, a leading management consulting firm in Israel, where he led the development and implementation of business and marketing strategies for major organizations in the financial, consumer, retail, high-tech, banking and other industries. Dror holds two bachelor's degrees from the Technion – Israel Institute of Technology: one in systems information engineering and the other in industrial management, as well as an MBA from Tel Aviv University.
Brigadier General Aviad Dagan
Brigadier General Aviad Dagan, 49 years old, is married and a father of four. In his most recent positions, he served as commander of Hatzerim Air Force Base, Commander of the Northern Command Fire Center, and Head of the Air Force Participation Department. Brigadier General Dagan holds a bachelor's degree in computer science and law from Bar-Ilan University, a master's degree in law from Bar-Ilan University, and another master's degree in national security from the University of National Security in Washington.
Maj. Gen. (Ret.) Prof. Isaac Ben Israel
Isaac Ben-Israel was born in Tel Aviv, Israel, in 1949. He studied Mathematics, Physics and Philosophy at Tel Aviv University, receiving his Ph.D. in 1988. He joined the Israel Air Force (IAF) after graduating high school (1967) and served continuously until his retirement (2002). During his service, Isaac Ben-Israel held several posts in operations, intelligence and weapon-development units of the IAF. He headed the IAF Operations Research Branch and the Analysis and Assessment Division of IAF Intelligence, and was the Head of Military R&D of the Israel Defence Forces and Ministry of Defence (1991-1997). In January 1998 he was promoted to Major General and appointed Director of the Defence R&D Directorate in IMOD. During his service he twice received the Israel Defence Award.
After retiring from the IDF, Isaac Ben-Israel joined Tel Aviv University as a professor. He was the head of the Curiel Centre for International Studies (2002-2004), the head of the Program for Security Studies (2004-2007), Executive Director of the Interdisciplinary Centre for Technological Analysis & Forecasting at Tel-Aviv University (ICTAF) (2010-2013), Deputy Director of the Hartog School of Government and Policy at Tel-Aviv University (2005-2015) and a member of the Jaffe Centre for Strategic Studies (2002-2004). In 2002 he founded and headed the Yuval Ne’eman Workshop for Science, Technology and Security. He was a member of the Board of Trustees of the Ariel University Centre (2009-2011) and a member of the advisory council of the Neaman Institute for Advanced Studies in Science and Technology at the Technion (2000-2010). In 2002 he founded RAY-TOP (Technology Opportunities) Ltd, consulting to governments and industry on technological and strategic issues.
Professor Ben-Israel was a member of the 17th Knesset (Israeli Parliament) between June 2007 and February 2009. During this period he was a member of the Security and Foreign Affairs Committee, the Finance Committee and the Science & Technology Committee, the Chairman of the Homeland Security Subcommittee, and the Chairman of the Israeli–Indian Parliamentary Friendship Association.
In 2011 he was appointed by the Prime Minister to lead a task force that formulated Israel's national cyber policy. Following that, he founded the National Cyber Headquarters in the PM's Office. In 2014 he was again appointed by the PM to lead another task force, which resulted in a government decision (February 2015) to set up a new National Cyber Authority. Isaac Ben-Israel was a member of the board of directors of IAI (2000-2002), the board of the Israel Corp. (2004-2007) and the R&D advisory board of TEVA (2003-2007), and Chairman of the Technion Entrepreneurial Incubator (2007). He was Chairman of Israel's National R&D Council between 2010 and 2016.
Professor Ben-Israel has written numerous papers on military and security issues. His book Dialogues on Science and Military Intelligence (1989) won the Itzhak-Sade Award for Military Literature. His book The Philosophy of Military Intelligence was published by the Broadcast University (1999) and has been translated into French (2004). His book Science, Technology and Security: From Soldiers in Combat up to Outer Space was published in 2006. His book on Israel's defence doctrine was published in 2013.
Isaac is married to Inbal (née Marcus) and they have three sons: Yuval (1981), Roy (1984) and Alon (1988).
Ziv heads Israel's National Program for AI Infrastructure, a coordinated, collaborative government effort aimed at ensuring Israel's future leadership in the global AI arena. The program is focused on the long-term infrastructure required for promoting sustained innovation and growth. At its first stage, the program focuses on four pillars: establishing an Israeli high-performance computer; creating infrastructure for natural language processing in Semitic languages that will fuel innovation and support AI assimilation in public-sector services; extending human capital in Israeli academia; and removing regulatory barriers to AI-based innovations. The program is a mutual effort of Israel's Ministry of Innovation, Science and Technology, the Israel Innovation Authority, the Directorate of Defense Research and Development, the Higher Education Council and the Ministry of Finance. Ziv is a versatile, multi-disciplinary technologist with profound knowledge of artificial intelligence, communication networks, big data, and distributed systems. He is an experienced manager with proven capabilities of working in matrix environments as well as direct management skills, a versed innovation leader, and a growth-oriented CTO. Ziv holds an M.Sc. from the Department of Software and Information Systems Engineering at Ben-Gurion University of the Negev and is about to complete his PhD studies researching the vulnerability of AI systems to adversarial manipulations.
Neural networks (NNs) are often designed for pointwise predictions, but they can also be designed to predict distributions. There are NN architectures designed specifically for distribution prediction (such as mixture-density networks), but we may have an architecture designed for pointwise prediction and only later decide we need distribution prediction as well. At Meta, for example, we use NNs to predict the number of views content will get, but for reviewing potentially-harmful content, it is the upper quantiles of the view distribution that turn out to be relevant.
In this short talk we will cover two "free lunch" methods for adapting existing NNs for distribution prediction: variational inference and MC dropout (the latter more commonly used for regularizing models). We will also cover two types of uncertainty, aleatoric and epistemic, roughly corresponding to randomness in the way the data was generated, and randomness in the way we algorithmically use the data. Finally, we will discuss the relationships between the two methods and the two types of uncertainty.
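As a sketch of one of these "free lunch" methods, the following toy MC-dropout snippet (illustrative, not Meta's production code) keeps dropout active at inference and reads upper quantiles off repeated stochastic forward passes:

```python
# MC dropout sketch: a pointwise regression net is reused for distribution
# prediction by keeping dropout stochastic at inference time.
import torch
import torch.nn as nn

model = nn.Sequential(            # hypothetical pointwise predictor
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    model.train()                 # keep dropout layers active
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    # Mean and an upper quantile of the predictive samples -- the quantity
    # that matters when reviewing potentially-harmful content.
    return samples.mean(dim=0), samples.quantile(0.95, dim=0)

x = torch.randn(4, 8)             # a batch of 4 hypothetical feature vectors
mean, q95 = mc_dropout_predict(model, x)
print(mean.squeeze(), q95.squeeze())
```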
Dr. Ami Tavory
Ami is a research scientist at Meta’s Core Data Science team, and has been working with Novi (Meta Financial Services) over the past few years. He holds a PhD in electrical engineering from Tel Aviv University, and is the proud father of three girls (11, 8, 8) who are also in the field of data science (although they don’t know that yet).
There is an increasing demand within consumer neuroscience (or neuromarketing) for objective neural measures that quantify consumers' preferences and predict responses to marketing campaigns. However, the properties of EEG raise difficulties for preference prediction: small datasets, high dimensionality, elaborate manual feature extraction, intrinsic noise, and between-subject variations. We aimed to overcome these limitations by combining unique techniques within a deep learning (DL) framework, while providing interpretable results for neuroscientific and decision-making insight. In this study, we developed a DL model to predict subject-specific preferences based on their EEG data. In each trial, 213 subjects observed a product's image, from 72 possible products, and then reported how much they were willing to pay (WTP) for the product. The DL model used the EEG recordings from product observation to predict the corresponding reported WTP values. Our results showed 75.09% test accuracy in predicting high vs. low WTP, surpassing other models and a manual feature-extraction approach. Meanwhile, network visualizations provided the predictive frequencies of neural activity, their scalp distributions, and critical timepoints, shedding light on the neural mechanisms involved in evaluation. In conclusion, we show that DL networks may be the superior method for performing EEG-based predictions, to the benefit of decision-making researchers and marketing practitioners alike.
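For orientation, a schematic sketch of what such an EEG classifier might look like (the architecture and dimensions below are hypothetical, not the study's model):

```python
# Schematic EEG-based preference classifier: a 1D CNN over channels x time
# predicting high vs. low willingness-to-pay (WTP). Dimensions are illustrative.
import torch
import torch.nn as nn

n_channels, n_timepoints = 64, 512        # hypothetical EEG dimensions

model = nn.Sequential(
    nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),  # temporal filters
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),              # pool over time
    nn.Flatten(),
    nn.Linear(32, 2),                     # high vs. low WTP
)

eeg = torch.randn(8, n_channels, n_timepoints)  # a batch of 8 trials
print(model(eeg).shape)                          # torch.Size([8, 2])
```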
Prof. Dino Levy is an associate professor at the Marketing Department at Coller School of Management, Tel-Aviv University and the head of the Neuroeconomics and Neuromarketing lab since 2012. He is also a member of the Sagol School of Neuroscience at Tel-Aviv University and a visiting scholar at the Institute for the Interdisciplinary Study of Decision Making at NYU. In his lab they use an interdisciplinary approach, which involves quantitative economic theories, combined with advanced behavioral methods and theoretical models from psychology, marketing, and economics with neuroscience techniques such as functional magnetic resonance imaging (fMRI) and Electroencephalography (EEG). The main aim of his studies is to try and better understand how we make decisions and what are the neural mechanisms underlying value-based choices. The projects in the lab range from examining the neural correlates of value computations and the common currency network, deciphering the neural mechanisms of irrational choice behaviors both in humans and in basic organisms, through looking for a common denominator between basic visual perception and value computations, to projects that aim to predict future preferences and population success of marketing stimuli using neural signals.
Yoni Birman is the Cloud and AI Security Research Director of Huawei's Tel Aviv Research Center (Toga Networks).
The group solves cyber and privacy challenges for Huawei's Cloud and AI Security Business Unit, such as endpoint, application, data and ML protection in the cloud domain using AI.
Prior to his current position, Yoni served for 10 years in Unit 8200 in various positions in the domains of data science and cybersecurity. His last position was Head of the Cyber Security R&D department.
Yoni is an External Assistant professor of Computer Science at Reichman University.
Yoni obtained his PhD at Ben-Gurion University, Software and Information Systems Engineering Department.
IP hijack attacks are powerful attacks on the Internet routing system that allow an attacker to deflect traffic through its network and mount man-in-the-middle attacks. Such attacks often go undetected, which allows the attacker, e.g., to perform long-term espionage.
To detect such attacks, we suggest several deep-learning-based approaches that enable building robust detection tools, including supervised and unsupervised methods operating on network-level data (such as BGP announcements), and a method based, for the first time, on route geography.
Professor of Electrical Engineering at Tel Aviv University. Before joining Tel Aviv University, he worked for four years at the Networking Center of Bell Labs, Holmdel, NJ. He published seminal papers in the fields of caching, routing, IP hijack attacks, and network measurements. In recent years, he has been working on usage of AI to solve networking problems in the areas of traffic classification and detection of routing manipulations.
The 5G Network Data Analytics Function (NWDAF) provides the ability to collect a variety of data from all over the mobile 5G network. On top of this data, specific network data analytics are generated for insights and actions.
In this talk we will go through an NWDAF NF load-prediction use case to show how we trade off between latency and accuracy. We will show practical tips and tricks to improve latency, and our ML tradeoff selector.
Boris Rabinovich is a Senior Data Scientist at Amdocs and the author of several patents and academic papers. Prior to Amdocs, Boris worked in multiple machine learning roles, and his experience spans a wide variety of ML projects, including end-to-end implementations of time-series forecasting, recommendation systems, predictive modeling and NLP problems.
In this talk I will share our journey at Shops on creating product embeddings. I will discuss the tradeoffs of using internal Meta systems and services vs. customized ones, how different products (Instagram/Facebook) affect our model selection and optimization methods, and how all of these can be taken into consideration to provide our customers with the best experience.
Ayelet is an ML engineer at Meta and was one of the founding team members of multiple Meta Shops initiatives. She was responsible for building several of the major building blocks behind the Shops recommendation system. Besides working at Meta, Ayelet has recently taken up surfing and is an amateur hiking guide. Ayelet holds a B.Sc. in EE from BGU.
The quest for algorithms that enable cognitive abilities is an integral part of machine learning and appears in many facets, such as virtual assistants and visual reasoning. A cognitive system must be capable of processing details from the multiple sensors that feed a device's computation engine. Interpreting observed objects requires an understanding of their semantics; additionally, the system must be sensitive to, and pick out, the nuances relevant to the task. During this presentation, we will present ways to assess and improve perception. We will explore ways to leverage large-scale models (such as CLIP). As a final step, we propose a novel attention mechanism, called Factor Graph Attention, which can operate on any data utility and distinguish useful signals from distracting ones. Our discussion will focus on the limitations of current methods: (i) models may solve the dataset, but not the task directly; (ii) supervised methods are limited by the curated datasets. Further, we demonstrate novel arithmetic capabilities to reason over visual data, as well as state-of-the-art performance on various tasks, such as visual dialog.
Idan Schwartz is the head of research at Spot by NetApp, and a postdoctoral researcher in the Computer Science department at Tel Aviv University, at the Deep Learning Lab led by Prof. Lior Wolf. Idan's research focuses on cognition in deep learning, particularly in the multimodal domain, such as visual question answering and visual dialog. These cognitive processes include perception, comprehension, attention, and decision-making.
It is said that “Data is the new Oil”, and indeed in this Age of Data, companies and government agencies alike gather data of all types and from numerous sources in an attempt to become data-driven organizations. A large portion of data, however, originates from a single underlying source: people. Tweets and blog posts are written by humans for humans; purchase transactions and phone call records convey human desires for things and other people; app logs report on how people interact with computers and mobile devices. Data derived from human behaviour is “messy”: it is dynamic, complex and extremely versatile. Human behaviour, as recorded in such digital data channels, changes drastically over time, is influenced by underlying complex social networks, and is conveyed in highly multimodal data streams – posing a significant hurdle to any organization striving to truly base its decisions and operations on its data.
Developed at the MIT Human Dynamics Lab, Social Physics is a novel scientific approach to data, which uses big-data analysis and the mathematical laws of biology to understand the behaviour of human crowds, enabling the development of a fully automatic platform that can absorb, analyze and merge dynamic data streams of various sources, forms and types. Social Physics is the technological framework behind the Endor analytics engine, which serves global financial institutes and government agencies.
Dr. Yaniv Altshuler
Dr. Yaniv Altshuler is an MIT researcher and an expert on artificial intelligence and network theory. He has published over 70 scientific papers, 15 patents and 3 books. His research has been covered by Harvard Business Review, the Financial Times, The Globe and others.
Absolute camera pose regressors estimate the position and orientation of a camera from the captured image alone. As such, they offer a fast, lightweight and standalone alternative to localization pipelines. Typically, a convolutional backbone with a multi-layer perceptron head is trained using images and pose labels to embed a single reference scene at a time. Recently, this scheme was extended for learning multiple scenes by replacing the MLP head with a set of fully connected layers. In this work, we propose to learn multi-scene absolute camera pose regression with Transformers, where encoders are used to aggregate activation maps with self-attention and decoders transform latent features and scenes encoding into candidate pose predictions. This mechanism allows our model to focus on general features that are informative for localization while embedding multiple scenes in parallel. Our method achieves a new state-of-the-art localization accuracy for pose regression methods across indoor and outdoor benchmarks, surpassing both multi-scene and single-scene absolute pose regressors.
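A schematic sketch of this design follows; the layer sizes, the stand-in backbone and the single pose head are illustrative simplifications, not the paper's exact architecture:

```python
# Schematic multi-scene pose regressor: a transformer aggregates CNN
# activation-map tokens via self-attention, and learned per-scene queries
# are decoded into candidate poses (3D translation + 4D quaternion).
import torch
import torch.nn as nn

class MultiScenePoseRegressor(nn.Module):
    def __init__(self, d=256, n_scenes=4):
        super().__init__()
        self.backbone = nn.Conv2d(3, d, kernel_size=16, stride=16)  # stand-in CNN
        self.transformer = nn.Transformer(d_model=d, batch_first=True)
        self.scene_queries = nn.Parameter(torch.randn(n_scenes, d))  # one per scene
        self.pose_head = nn.Linear(d, 7)    # x, y, z + quaternion

    def forward(self, img):
        feats = self.backbone(img).flatten(2).transpose(1, 2)  # (B, HW, d) tokens
        queries = self.scene_queries.unsqueeze(0).expand(img.size(0), -1, -1)
        latent = self.transformer(feats, queries)              # (B, n_scenes, d)
        return self.pose_head(latent)                          # candidate pose per scene

poses = MultiScenePoseRegressor()(torch.randn(2, 3, 224, 224))
print(poses.shape)  # torch.Size([2, 4, 7])
```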
Dr. Yoli Shavit
Yoli Shavit is a Research Scientist Team Lead at Huawei TRC and a Postdoctoral Researcher at Bar-Ilan University. Her current research focuses on deep learning methods for camera localization and 3D reconstruction. Before joining Huawei, Yoli worked at Amazon and interned at Microsoft Research. She holds a PhD in Computer Science from the University of Cambridge, an MSc in Bioinformatics from Imperial College London and a BSc in Computer Science and in Life Science from Tel Aviv University. Yoli is the recipient of the Cambridge International Scholarship and her thesis was nominated for the best thesis award in the UK.
In this talk I will overview our program at Duke University, where we have developed and deployed an app to study behaviors in developmental disorders, autism spectrum disorder in particular, providing scalable tools with state-of-the-art performance thanks to careful co-design of stimuli and ML. The app has been deployed in pediatric clinics and has already collected and analyzed the largest ever dataset of this kind in the field.
Guillermo Sapiro received his B.Sc. (summa cum laude), M.Sc., and Ph.D. from the Technion, Israel Institute of Technology. After post-doctoral research at MIT, Dr. Sapiro became a Member of Technical Staff at HP Labs. He was with the University of Minnesota, and currently he is a James B. Duke School Professor at Duke University. He is also with Apple, Inc., where he leads a team on Health AI. He works on theory and applications in computer vision, computer graphics, medical imaging, image analysis, and machine learning. He has authored over 450 papers in these areas and has written a book published by CUP. G. Sapiro was awarded the ONR Young Investigator Award in 1998, the Presidential Early Career Award for Scientists and Engineers (PECASE) in 1998, the NSF Career Award in 1999, and the National Security Science and Engineering Faculty Fellowship in 2010. He received the Test-of-Time award at ICCV 2011 and at ICML 2019. He was elected to the American Academy of Arts and Sciences in 2018, and is a Fellow of the IEEE and SIAM. G. Sapiro was the founding Editor-in-Chief of the SIAM Journal on Imaging Sciences.
The single image super-resolution task is one of the most examined inverse problems of the past decade. In recent years, Deep Neural Networks (DNNs) have shown superior performance over alternative methods when the acquisition process uses a fixed known downsampling kernel, typically a bicubic kernel. However, several recent works have shown that in practical scenarios, where the test data mismatch the training data (e.g., when the downsampling kernel is not the bicubic kernel or is not available at training time), the leading DNN methods suffer from a huge performance drop. Inspired by the literature on generalized sampling, in this work we propose a method for improving the performance of DNNs that have been trained with a fixed kernel on observations acquired by other kernels. For a known kernel, we design a closed-form correction filter that modifies the low-resolution image to match one which is obtained by another kernel (e.g., bicubic), and thus improves the results of existing pre-trained DNNs. For an unknown kernel, we extend this idea and propose an algorithm for blind estimation of the required correction filter. We show that our approach outperforms other super-resolution methods, which are designed for general downsampling kernels.
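The flavor of the known-kernel case can be conveyed with a simplified frequency-domain sketch. This is a toy version under strong assumptions (it ignores the subsampling step and uses invented kernels); the paper derives the proper closed-form filter:

```python
# Toy correction-filter sketch: re-filter a low-resolution image acquired
# with kernel k_true so it better matches one acquired with k_target
# (e.g., bicubic), via a Wiener-style inversion in the Fourier domain.
import numpy as np

def correction_filter(y, k_true, k_target, eps=1e-3):
    K = np.fft.fft2(k_true, s=y.shape)
    B = np.fft.fft2(k_target, s=y.shape)
    H = B * np.conj(K) / (np.abs(K) ** 2 + eps)  # invert k_true, apply k_target
    return np.real(np.fft.ifft2(np.fft.fft2(y) * H))

y = np.random.rand(64, 64)              # hypothetical LR observation
h = np.hanning(7)
k_true = np.outer(h, h); k_true /= k_true.sum()   # invented "true" blur kernel
k_target = np.ones((3, 3)) / 9.0        # stand-in for the bicubic kernel
print(correction_filter(y, k_true, k_target).shape)
```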
Shady Abu-Hussein is a Ph.D. student in Electrical Engineering at Tel Aviv University, supervised by Prof. Raja Giryes. He received his B.Sc. in Electrical and Computer Engineering from Ben-Gurion University of the Negev in 2017. During his time at Tel Aviv University he has also worked at the IBM Research Lab (summer 2021) and at Intel (2017-present). His current research focuses on solving imaging inverse problems with arbitrary observation models by adapting off-the-shelf pretrained deep neural networks.
Yossi Matias is Vice President, Engineering & Research, at Google, and the founding Managing Director of the Google Center in Israel. He is a world-renowned expert in AI and in leadership of global-scale product and technology innovation. Yossi is the lead of Google's Health AI. He is the lead of Google's Crisis Response initiative, providing AI-based actionable information (including flood forecasting, which is on Fortune's “Change The World” list). He is on Google's Sustainability board and a founding lead of Google's AI for Social Good initiative. Yossi pioneered Conversational AI innovations transforming the phone experience (including Google Duplex, Call Screen, Hold for Me, Live Caption, Live Translate, Euphonia, Read Aloud). He pioneered an initiative to bring online hundreds of heritage collections, seeding Google's Cultural Institute. For over a decade Yossi was on the leadership team of Google Search, building and leading global efforts including Google Trends, Google Autocomplete, Search Console, and Search vertical experiences. Yossi is the founding exec lead of the Google for Startups Accelerator, which has supported thousands of startups globally. Prof. Matias is also on the Computer Science faculty at Tel Aviv University, and was previously a Research Scientist at Bell Labs and a visiting professor at Stanford. He has published over 100 papers and is the inventor of over 60 patents in diverse areas. Yossi is a recipient of the 2005 Gödel Prize, a 2009 ACM Fellow, and a recipient of the 2019 ACM Kanellakis Theory and Practice Award for seminal work on the foundations of streaming algorithms and their application to large-scale data analytics.
Naama Hammel, MD
Naama is a clinical research scientist in Google Health. In this role she focuses on developing machine learning models for the detection of ocular and systemic diseases from medical images. Naama is an ophthalmologist with a subspecialty in glaucoma. She completed her medical and ophthalmology training at Tel-Aviv University; her glaucoma fellowship at the Shiley Eye Institute, UC San Diego; and her ophthalmic informatics fellowship at the UC Davis Eye Center.
The past two years have demonstrated how important data analyses are for policy making that directly affects public health. Following the development and distribution of vaccines, trustworthy and carefully considered analyses have proven to be paramount. Nevertheless, the extent to which causal conclusions can be drawn has been central to the public debate. One key challenge is the ever-changing environment: different variants, vaccine coverage and the pandemic state. These issues characterize a more general target: learning causal effects for infectious diseases. Within this field, a major challenge is evaluating the impact of prescribing different antibiotics on future antibiotic resistance, yet another global threat. A recent study estimated the number of worldwide deaths attributed to antibiotic resistance in 2019 at 5 million.
In this talk, I will present these challenges and discuss our approach towards overcoming them in several projects concerning antibiotic effects on resistance and vaccine effects.
Dr. Daniel Nevo
Dr. Daniel Nevo has been an Assistant Professor in the Department of Statistics and Operations Research at Tel Aviv University since 2018. Before that he was a postdoctoral fellow at the Harvard Departments of Biostatistics and Epidemiology (2016-2018). Daniel received his PhD in Statistics from the Hebrew University of Jerusalem (2016). Daniel's research focuses on causal inference in widespread domains, and specifically on developing and implementing causal inference methods for real-life problems. Daniel has been collaborating with clinicians, epidemiologists, economists, and computer scientists in academia and health organizations to reach conclusions about causal effects from rich datasets.
Our blood carries multiple molecules that can represent an individual's condition in health and disease. Using advanced sequencing technologies, we monitor tiny amounts of DNA in blood in order to observe the early stages of cancer and examine embryo development. Our deep learning algorithms, applied to billions of DNA molecules, enable the classification of cancer types, early detection of colorectal cancer, and identification of single-point mutations during the first trimester of embryonic development. Our work will assist clinical teams in assessing a patient's illness and devising precision therapy based on deep evaluation of an individual's molecular status.
Professor Noam Shomron is passionate about using basic science to advance better healthcare. Professor Shomron heads the Functional Genomics Team at the Faculty of Medicine at Tel Aviv University, after training at MIT. He leads a multidisciplinary team of scientists that develops computational methods for parsing big data in the biomedical field using artificial intelligence. Shomron has applied for more than 30 patents and published more than 200 peer-reviewed publications in multiple genomic fields, including medicine, agriculture, and business. Shomron is also the Editor of the ‘Deep Sequencing Data Analysis’ book (Springer, Edition I 2013, and II 2021); Director of ‘Rare-Genomics’ Israel (NPO); Academic Director of ‘ScienceAbroad’ (NPO); co-founder and Chief Scientific Officer of Variantyx, which provides clinical interpretation of whole-genome sequences; co-founder and Chief Scientific Officer of GotSho, which writes software for genetic and biotech labs; and co-founder and Chief Scientific Officer of IdentifAI, which establishes noninvasive prenatal diagnostics at single-nucleotide resolution from a first-trimester blood test.
The human genome consists of three billion characters of A,T,G, and C bases, and two copies that we inherit from each parent establish the instructions for our biochemistry. Analyzing genome sequence data is important to resolving cases of genetic disease, which affects roughly 6% of individuals, and understanding broad predisposition to common diseases. In this talk, we will discuss deep learning approaches to analyze genomic data. This includes DeepVariant, a widely-used, open-source method for variant detection using Convolutional Neural Networks, DeepConsensus, an open-source method for sequence error correction using Transformers, as well as DeepNull and ML-based phenotyping which combine multi-layer perceptrons and Convolutional Neural Networks with traditional statistical approaches to discover new genetic associations. The talk will focus on how the deep learning approaches were adapted to the biological data, and will briefly touch on results of external application of the methods.
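As a conceptual sketch of the pileup-image idea behind CNN-based variant calling (this is not DeepVariant's actual encoding or architecture):

```python
# Conceptual sketch: reads overlapping a candidate variant site are encoded
# as a multi-channel "pileup image" and a small CNN scores the genotype.
import torch
import torch.nn as nn

# Channels might encode base identity, base quality and strand (illustrative).
pileup = torch.randn(1, 3, 100, 221)    # (batch, channels, reads, window width)

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 3),                   # genotypes: hom-ref / het / hom-alt
)
print(cnn(pileup).softmax(dim=1))       # genotype probabilities
```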
Andrew leads product development for the genomics team in Google Health. The genomics team develops methods that improve real-world applications in clinical genomics, population-level sequencing, drug discovery, and the combination of genomic and clinical data. Prior to Google, Andrew was Chief Scientific Officer at DNAnexus, where he supported many of the first large-scale genomics projects, such as the CHARGE Consortium, the Regeneron-Geisinger and Regeneron-UK Biobank cohorts, the 3000 Rice Genomes Project, PrecisionFDA, and the St. Jude Pediatric Cancer Cloud. Andrew holds a PhD in Molecular Biology from Stanford University and a Bachelor's degree in Physics from the University of Virginia.
The rapid development of artificial intelligence tools has had an unprecedented impact on almost every field of life, and specifically on image-to-image translation tasks. However, these techniques typically require large amounts of data, which may not be available in fields such as medical imaging. A promising alternative is the combination of signal models with powerful denoisers, providing a general solution for imaging tasks that can operate in low-data regimes. Two frameworks which follow this approach are regularization-by-denoising (RED) and plug-and-play priors (PnP), which have shown state-of-the-art performance in various imaging problems. However, the stability of common RED and PnP methods, which is crucial in medical applications, cannot be completely guaranteed. Here we introduce potential-driven neural networks, where the image-to-image translation maps are built as gradients of neural networks. Specifically, we train potential-driven denoisers and employ them to solve model-based imaging problems via a convergent gradient-descent scheme. The resultant technique provides comparable performance to RED and PnP methods with provable stability.
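The core construction can be sketched in a few lines of autograd code (an illustrative toy with an invented potential network, not the paper's model): because the denoiser is the gradient of a scalar potential, the resulting map is a conservative field, which is what the convergence argument exploits.

```python
# Potential-driven denoiser sketch: a network outputs a scalar "potential",
# and denoising is defined as a gradient step on that potential.
import torch
import torch.nn as nn

potential = nn.Sequential(               # hypothetical scalar potential network
    nn.Flatten(), nn.Linear(64 * 64, 256), nn.Softplus(), nn.Linear(256, 1),
)

def potential_driven_denoiser(x, step=0.1):
    x = x.requires_grad_(True)
    energy = potential(x).sum()
    grad, = torch.autograd.grad(energy, x, create_graph=True)  # keep graph for training
    return x - step * grad               # denoising as a gradient step

x_noisy = torch.rand(1, 1, 64, 64)       # hypothetical noisy image
print(potential_driven_denoiser(x_noisy).shape)
```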
Dr. Regev Cohen
Regev is a research scientist at Verily Research (formerly Google Life Sciences), currently studying artificial intelligence techniques for important biomedical problems, including applications in endoscopy, minimally invasive surgery, microscopy and drug design. His research interests include computer vision, signal processing and optimization, focusing on interpretable and foundational designs of deep networks for medical applications. Regev holds a Ph.D. in Electrical and Computer Engineering from the Technion - Israel Institute of Technology.
AI algorithms for personalized medicine necessitate large DNA databases of patient information. Here, I will show that genetic data is intrinsically and easily identifiable using simple internet searches in consumer genetic websites. These websites offer powerful parametric models that predict genealogical relationships using the genetic data and allow finding distant relatives of the target, which can eventually lead to target identification. By inspecting genetic data of 1.28 million individuals tested with consumer genomics, we investigated the power of this technique. We project that about 60% of the searches for individuals of European descent will result in a third-cousin or closer match, which theoretically allows their identification using demographic identifiers. Moreover, the technique could implicate nearly any U.S. individual of European descent in the near future. We demonstrate that the technique can also identify research participants of a public sequencing project. On the basis of these results, we propose a potential mitigation strategy and policy implications for human subject research.
Dr. Yaniv Erlich
Dr. Yaniv Erlich is the CEO of Eleven Therapeutics and the CSO of MyHeritage.com. Prior to these positions he was an Associate Professor of Computer Science and Computational Biology at Columbia University and a Principal Investigator at the Whitehead Institute, MIT. Dr. Erlich received his bachelor’s degree from Tel-Aviv University, Israel (2006) and a PhD from the Watson School of Biological Sciences at Cold Spring Harbor Laboratory (2010). Dr. Erlich’s research interest is computational human genetics. Dr. Erlich is a TEDMED speaker (2018), the recipient of DARPA’s Young Faculty Award (2017), the Burroughs Welcome Career Award (2013), Harold M. Weintraub award (2010), the IEEE/ACM-CS HPC award (2008), and he was selected as one of 2010 Tomorrow’s PIs team of Genome Technology.
Deep neural networks provide unprecedented performance gains in many real-world problems in signal and image processing. Despite these gains, the future development and practical deployment of deep networks are hindered by their black-box nature, i.e., a lack of interpretability and the need for very large training sets. On the other hand, signal processing has traditionally relied on classical statistical modeling techniques that utilize mathematical formulations representing the underlying physics, prior information and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies, and may lead to poor performance when real systems display complex or dynamic behavior. Here we introduce various approaches to model-based learning, which merge parametric models with optimization tools, leading to efficient, interpretable networks trained from reasonably sized training sets. We will consider examples of such model-based deep networks applied to image deblurring, image separation, and super-resolution in ultrasound and microscopy, and finally we will see how model-based methods can also be used for efficient diagnosis of COVID-19 using X-ray and ultrasound.
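One canonical example of such model-based learning is algorithm unrolling. The sketch below (sizes and initializations are illustrative) unrolls ISTA iterations for sparse recovery into a fixed-depth network with learnable step sizes and thresholds, in the spirit of LISTA:

```python
# Unrolled ISTA sketch: each "layer" is one iteration of a classical sparse
# solver for y = A x, with the step size and threshold made learnable.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, A, n_layers=10):
        super().__init__()
        self.A = A
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))    # learnable steps
        self.thetas = nn.Parameter(torch.full((n_layers,), 0.05))  # learnable thresholds

    def forward(self, y):
        x = torch.zeros(self.A.shape[1])
        for step, theta in zip(self.steps, self.thetas):
            x = x - step * self.A.t() @ (self.A @ x - y)     # data-fidelity gradient step
            x = torch.sign(x) * torch.relu(x.abs() - theta)  # soft threshold (sparsity)
        return x

A = torch.randn(20, 50)                          # toy measurement matrix
x_true = torch.zeros(50); x_true[[3, 17]] = 1.0  # sparse ground truth
print(UnrolledISTA(A)(A @ x_true).shape)         # torch.Size([50])
```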
Yonina C. Eldar is a Professor in the Department of Math and Computer Science at the Weizmann Institute of Science, Rehovot, Israel, where she heads the center for Biomedical Engineering and Signal Processing. She is also a Visiting Professor at MIT and at the Broad Institute and an Adjunct Professor at Duke University, and was a Visiting Professor at Stanford University.
She is a member of the Israel Academy of Sciences and Humanities, an IEEE Fellow and a EURASIP Fellow. She has received many awards for excellence in research and teaching, including the IEEE Signal Processing Society Technical Achievement Award, the IEEE/AESS Fred Nathanson Memorial Radar Award, the IEEE Kiyo Tomiyasu Award, the Michael Bruno Memorial Award from the Rothschild Foundation, the Weizmann Prize for Exact Sciences, and the Wolf Foundation Krill Prize for Excellence in Scientific Research. She is the Editor-in-Chief of Foundations and Trends in Signal Processing, and serves the IEEE on several technical and award committees. She heads the Committee for Promoting Gender Fairness in Higher Education Institutions in Israel.
Given the currently observed population of COVID-19 variants, what is the probability of encountering a new variant that has never been seen before? This fundamental question is known as the missing mass problem. In this work we introduce a new scheme for missing mass estimation. Our proposed framework provides novel risk bounds and improves upon currently known methods. Importantly, it is easy to apply and does not require additional modeling assumptions. This makes it a favorable choice for many practical applications.
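For readers new to the problem, the classical baseline is the Good-Turing estimator: the probability that the next observation is a previously unseen symbol is estimated by the fraction of singletons in the sample. The sketch below shows this baseline only; the talk's new scheme is not reproduced here.

from collections import Counter

def good_turing_missing_mass(samples):
    # Good-Turing estimate N1 / n, where N1 counts symbols seen exactly once.
    counts = Counter(samples)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(samples)

# e.g., variant labels observed among sequenced cases (toy data):
variants = ["alpha", "alpha", "delta", "delta", "delta", "beta", "gamma"]
print(good_turing_missing_mass(variants))  # 2/7 ~ 0.29 (beta, gamma are singletons)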
Dr. Amichai Painsky
Dr. Amichai Painsky is an Assistant Professor at the Industrial Engineering Department of Tel Aviv University. Amichai received his B.Sc. in Electrical Engineering from Tel Aviv University (2007), his master's degree in Electrical Engineering from Princeton University (2009) and his Ph.D. in Statistics from Tel Aviv University (2016). Following his graduation, Amichai joined the Massachusetts Institute of Technology (MIT) as a postdoctoral fellow (2019). Amichai's research focuses on statistical inference and learning, and their connection to information theory.
The sequence alignment problem is one of the most fundamental issues in bioinformatics and indeed a plethora of methods has been devised to tackle it. In practice, these methods are applied with the same parameters (i.e., default configuration) on every given input, implicitly assuming a single evolutionary model. This is clearly an oversimplification of biological reality.
Here we introduce AlignNLP, a state-of-the-art methodology for aligning sequences using a deep neural network. AlignNLP accounts for the possible variability of the evolutionary process among different lineages by using an ensemble of transformers. Each transformer is trained on millions of samples from a different evolutionary model, leading to superb alignment accuracy that outperforms commonly used methods such as MAFFT and PRANK.
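For context, the classical formulation that such learned aligners are measured against is dynamic-programming alignment. Below is a minimal Needleman-Wunsch sketch; it is the textbook baseline with a single implicit evolutionary model and illustrative scoring parameters, not AlignNLP itself.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):  # illustrative scores
    # Global alignment score via dynamic programming.
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # substitution
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[n][m]

print(needleman_wunsch("GATTACA", "GCATGCT"))

The fixed match/mismatch/gap scores here are exactly the one-size-fits-all parameterization that an ensemble of lineage-specific models aims to avoid.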
Edo Dotan is a graduate student at Tel Aviv University, Israel. In his work, he harnesses recent advances in machine learning to revolutionize techniques in computational biology. For example, in a recent study, supervised by Profs. Tal Pupko and Adi Stern (Tel Aviv University) and Dr. Yonatan Belinkov (Technion), Edo applied natural language processing (NLP) algorithms to one of the most fundamental tasks in bioinformatics, namely, aligning sequences, i.e., tracing the homology relationships between genomic regions that descended with modification from a common ancestor. He also applies machine-learning tools to study the evolution of the COVID-19 viral genome. Edo holds a B.Sc. in computer science and gained hands-on industry experience at Corephotonics (acquired by Samsung) before joining academia.
Pulmonary embolism (PE) is a common life-threatening condition with a challenging diagnosis, as patients often present with nonspecific symptoms. Prompt and accurate detection of PE, and specifically an assessment of its severity, are critical for managing patient treatment. We introduce diverse multimodal fusion models that are capable of utilizing weakly labeled multimodal data, combining both volumetric pixel data and clinical patient data for automatic risk stratification of PE. The best performing multimodal model achieves an AUC of 0.96 for assessing PE severity, with a sensitivity of 90% and specificity of 94%. To the best of our knowledge, this is the first study to attempt to automatically assess PE severity.
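The abstract does not spell out the fusion architecture, but one common late-fusion pattern for this kind of task is sketched below in PyTorch: a small 3D CNN encodes the CT volume, an MLP encodes the clinical tabular data, and the two feature vectors are concatenated before classification. All layer sizes and the toy backbone are assumptions, not the speakers' model.

import torch
import torch.nn as nn

class LateFusionPE(nn.Module):
    def __init__(self, n_clinical, n_classes=2):
        super().__init__()
        self.image_branch = nn.Sequential(           # stand-in 3D CNN encoder
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())   # -> (batch, 16)
        self.clinical_branch = nn.Sequential(
            nn.Linear(n_clinical, 16), nn.ReLU())    # tabular encoder
        self.head = nn.Linear(16 + 16, n_classes)    # fuse by concatenation

    def forward(self, volume, clinical):
        z = torch.cat([self.image_branch(volume),
                       self.clinical_branch(clinical)], dim=1)
        return self.head(z)

model = LateFusionPE(n_clinical=10)
logits = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 10))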
I am an EE PhD student at Tel Aviv University, advised by Prof. Hayit Greenspan. Prior to that, I received my Bachelor's degree in EE at Tel Aviv University. My current research interests include deep learning and computer vision with applications to medical imaging, especially fusing diverse data modalities such as imaging, free text, and structured tabular data for medical prognosis models.
The aim of this contribution is a "regulation and governance" reading of the new EU initiative on regulating Artificial Intelligence. Thus, its focus is not on the "first order" fundamental rights perspective but rather on the "second order" mechanisms that are necessary for their effective application on the ground.
Amit is in charge of developing and deploying the Directorate's domestic and international legal policy, and manages the INCD legal department. This includes legal counsel for policy, regulation and operations. Amit has been active in law, technology and policy issues within government since 2002, including copyright law, data protection and privacy, electronic signatures, e-government, and cybersecurity, and has represented the Israeli government in the Israeli Knesset and in the international sphere. In 2019 Amit represented Israel in the drafting of the OECD Recommendations on AI. Before joining the INCD in 2014 to set up its legal department, Amit was Head of the Legal Department in the Israeli Law, Information and Technology Authority (ILITA) in the Ministry of Justice, now renamed the Privacy Protection Authority, Israel's Data Protection Authority. In this context he was involved in international data protection issues, including the EU adequacy finding for Israel's data protection regime and work at the OECD. Since 2013, Amit has taught a graduate course on Law and Information Technology in the Center for Law and Technology in the Haifa University Faculty of Law, and since 2021 also in the TAU Graduate Program on Cyber, Politics and Government.
Early detection and prompt isolation of COVID-19 cases are pivotal to breaking transmission chains and containing outbreaks. In this talk, I will briefly discuss four of our projects that utilize large-scale data for early detection of 1) hotspots and regional outbreaks using mobility data from cellphone devices, 2) early signs of infection using wearable devices, 3) positive test outcomes using historical electronic medical records, and 4) deterioration of hospitalized patients using rich medical data. Then, I will elaborate on one of the projects and its potential applicability for policy determination.
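To give a feel for the wearable-device signal, here is a deliberately simplified sketch, not the group's actual method: flag a day as a possible early sign of infection when resting heart rate exceeds the wearer's personal baseline by more than k standard deviations (the window length and threshold are illustrative assumptions).

import statistics

def flag_days(resting_hr, baseline_days=28, k=2.0):
    # resting_hr: one resting-heart-rate reading per day.
    alerts = []
    for day in range(baseline_days, len(resting_hr)):
        base = resting_hr[day - baseline_days:day]   # rolling personal baseline
        mu, sigma = statistics.mean(base), statistics.stdev(base)
        if sigma > 0 and (resting_hr[day] - mu) / sigma > k:
            alerts.append(day)
    return alerts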
Dr. Dan Yamin
Dr. Yamin (Ph.D.) is a faculty member at the Department of Industrial Engineering at Tel Aviv University, and a former faculty member of the Center for Infectious Disease Modeling and Analysis at the Yale School of Public Health. Dr. Yamin has received four research awards and five teaching awards. Dr. Yamin's studies have influenced health policy against influenza in Israel and offered novel strategies to eliminate Ebola that were implemented in Liberia. He received a European Research Council (ERC) grant to pursue research on the early detection of infectious diseases using smartphones. He recently advised the Ministry of Health and the Ministry of Finance on identifying hotspots and applying effective strategies against the COVID-19 pandemic in Israel.
The COVID-19 pandemic has challenged society and organizations to make a trade-off between individual health safety on the one hand and socio-economic progress on the other. In a way, we are all grappling with how to return to a new normal. But what would that new normal be? How and when can we return to it without compromising our health safety? Uncertainties related to new variants, vaccines and their efficacies, loss of immunity, and socio-economic factors make such questions difficult to answer. We developed a novel configurable digital twin that can be contextualized for a city or an organization to predict possible disruptions of business-as-usual activities due to the COVID-19 pandemic. It also helps to explore various what-if scenarios for defining strategies towards a safer return to new normalcy. Our approach combines multiple system modelling and simulation techniques and adopts established data science concepts. We demonstrated the efficacy of our approach in the context of an Indian city and an organization. In this talk, I will present our approach, early experiences, and learnings.
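A production digital twin models far richer behavior, but the deliberately simplified agent-based sketch below conveys the kind of what-if question such a twin answers; all rates here (contacts per day, transmission probability, infectious period) are illustrative assumptions.

import random

def simulate(n_agents=1000, n_days=120, contacts_per_day=8,
             p_transmit=0.03, days_infectious=7, seed=1):
    # All parameters illustrative, not calibrated to any real city.
    random.seed(seed)
    state = ["S"] * n_agents          # S: susceptible, I: infected, R: recovered
    timer = [0] * n_agents
    state[0], timer[0] = "I", days_infectious
    for _ in range(n_days):
        infected = [i for i in range(n_agents) if state[i] == "I"]
        for i in infected:
            for _ in range(contacts_per_day):      # random daily contacts
                j = random.randrange(n_agents)
                if state[j] == "S" and random.random() < p_transmit:
                    state[j], timer[j] = "I", days_infectious
            timer[i] -= 1
            if timer[i] == 0:
                state[i] = "R"
    return state.count("I") + state.count("R")     # total ever infected

# What-if: halving daily contacts (e.g., partial work-from-home)
print(simulate(contacts_per_day=8), simulate(contacts_per_day=4))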
Dr. Souvik Barat
Souvik Barat is a principal scientist at Tata Consultancy Services Research, India. He has 20+ years of experience in industrial research, and his research interests include digital twin technology, modelling and simulation of complex systems, reinforcement learning, model-driven engineering, software product lines, and business process management.
At present, he is actively involved in developing digital twins for complex business and societal systems. His work on digital twins for business systems earned the TCS Best Innovation Award in 2019 and contributed to a Gold Stevie Award in 2021 in the AI/ML category. His work modeling a city to evaluate the efficacy of non-pharmaceutical interventions for controlling COVID-19 infections is used extensively by city-based health care organizations and the municipal corporation.
Earlier, he was the lead architect of a model-driven toolset that has been used to deliver large IT systems for over a decade, and he led a research initiative to develop a platform for product-line architecture.
Souvik has several patents to his credit and has authored several book chapters, journal articles and conference papers. He holds a Ph.D. from Middlesex University, London, and a master's degree from the Indian Institute of Technology (IIT), Madras.
Artificial intelligence techniques such as machine and deep learning have great potential to better exploit the rich information that can be derived from health and healthcare data to promote public and individual health. They can change our healthcare system from a reactive to a proactive one (towards prevention), and by learning from previous patients we can personalize treatment, leading to "precision medicine".
As the quality of many AI techniques crucially depends on the data on which they are trained, an important prerequisite for the successful implementation of AI in healthcare is access to high-quality data. This is complex in the health domain, where expertise, data and resources are fragmented. Therefore, different stakeholders in the Dutch health and life sciences domain have established a joint vision for a Dutch health data infrastructure and launched the Health RI initiative (www.health-ri.org). Health RI pursues the integration of highly diverse collections of longitudinal health and biomedical data, e.g. generated at different hospitals and research centers, to empower researchers and organizations to develop better personalized medicine and health solutions. The COVID-19 pandemic has shown the importance of such an approach, and in this presentation the steps towards a national COVID-19 data portal will be discussed.
Prof. Wiro Niessen
Wiro Niessen is professor of Biomedical Image Analysis and Machine Learning at Erasmus MC and Delft University of Technology. His interest is in the development and validation of quantitative biomedical image analysis methods, and in linking imaging and genetic data for improved disease diagnosis and prognosis using machine learning. He has supervised 56 PhD students in these fields. He is a fellow and former president of the MICCAI Society, and is CTO of Health-RI, which aims to develop a national health data infrastructure for research and innovation. In 2015 he received the Simon Stevin award, the largest prize in the Netherlands in applied sciences. In 2017 he was elected to the Royal Netherlands Academy of Arts and Sciences. In 2012 he founded Quantib, an AI company in medical imaging, where he is now scientific lead.
Mads Nielsen is professor of Computer Science at the University of Copenhagen, co-director of the Pioneer Centre in AI, and founder of Cerebriu A/S, Biomediq A/S and Aiomic ApS. His major research has been in medical image analysis and its mathematical foundations, with specialties in neurological disorders, breast cancer, osteoporosis, osteoarthritis and, lately, COVID-19. He has obtained more than 20 patents and has published more than 300 papers. In 2012 he co-authored with Andrew Ng one of the first papers on deep learning in medical image analysis.
As head of the Technological Infrastructure Division, Aviv Zeevi is responsible for generating collaborations between industry and academia that create advanced technologies and innovative products; these collaborations strengthen the Israeli industry's long-term technological advantage in the fierce competition of international markets. Zeevi has vast experience in technological development, including online systems as well as training and simulation systems. In his most recent role he served as head of the ICT department in the Israeli Research Directorate of the European Research Program. Zeevi has a PhD in Information Systems Management from Tel Aviv University. He holds several additional degrees in Economics, Political Science and Management.
In this talk I will cover how to get a GAN to run on a mobile device and give a consumer control over its results. At Lightricks we build tools for the creative process. These tools need to be consistent, controllable and accessible in order to succeed in the market. GANs are amazing machines with many capabilities, but turning these wild horses into tools used by millions on mobile devices is a challenge. I will describe the stages of turning GANs into product-grade features that can even work on the edge, including the hurdles of getting better data, achieving model efficiency, and the tradeoffs between control and expression when choosing an architecture.
Ofir Bibi is the VP of Research at Lightricks, bringing the latest and greatest in ML to the core of consumer products, creating efficient research pipelines and methodologies, and growing great researchers. His research interests are in the fields of efficient machine learning, statistical signal processing, computational photography and computer graphics. He has held leading research positions building systems for estimation and prediction, market optimization and recommendations, but his true passion is solving challenges with a precise mix of engineering and research.
While image editing and manipulation tools have seen remarkable progress, video editing remains a difficult task that poses two key challenges: (i) edits need to be applied in a temporally consistent manner to all frames, and (ii) editing interfaces need to be able to represent temporal content in an intuitive manner. Thus, video editing has been largely restricted to the domain of professionals. In this talk, I’ll present a new method that tackles these challenges, and allows easy and intuitive editing of everyday videos by novice users.
The pillar of the approach is a novel decomposition of the input video into a set of layered 2D atlases ('texture maps'), each providing a unified representation of an object/background over the entire video. Using the learned decomposition, we can simply edit the 2D atlases (or a single frame), and automatically propagate the edits to the entire video. By operating purely in 2D, our method does not require any prior 3D knowledge about scene geometry or camera poses. I’ll show a variety of exciting editing results including texture mapping, video style transfer, image-to-video texture transfer, and segmentation/labeling propagation.
Project page: https://layered-neural-atlases.github.io/
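As I read the abstract, the core mechanism can be sketched as follows; this is a hedged toy version in which the network sizes are assumptions and the training losses are omitted, so see the project page above for the real method. Coordinate MLPs map each video pixel (x, y, t) to a 2D atlas location, an atlas MLP maps atlas coordinates to color, and once trained, edits painted on the atlas propagate consistently to every frame.

import torch
import torch.nn as nn

def mlp(n_in, n_out):                 # tiny coordinate network (assumed sizes)
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, n_out))

mapping = mlp(3, 2)                   # (x, y, t) -> atlas coordinates (u, v)
atlas = mlp(2, 3)                     # (u, v) -> RGB

def reconstruct(pixels_xyt):          # (N, 3) normalized pixel coordinates
    uv = mapping(pixels_xyt)
    return atlas(uv)                  # (N, 3) reconstructed colors

colors = reconstruct(torch.rand(4, 3))
# Training (not shown) fits both networks so reconstruct() matches the video;
# afterwards, editing the atlas edits all frames at once.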
Tali Dekel is an Assistant Professor at the Mathematics and Computer Science Department at the Weizmann Institute, Israel. She is also a Staff Research Scientist at Google, developing algorithms at the intersection of computer vision, computer graphics, and machine learning. Before Google, she was a Postdoctoral Associate at the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT. Tali completed her Ph.D. studies at the school of electrical engineering, Tel-Aviv University, Israel. Her research interests include computational photography, image/video synthesis, geometry and 3D reconstruction. Her awards and honors include the National Postdoctoral Award for Advancing Women in Science (2014), the Rothschild Postdoctoral Fellowship (2015), the SAMSON - Prime Minister's Researcher Recruitment Prize (2019), Best Paper Honorable Mention in CVPR 2019, and Best Paper Award (Marr Prize) in ICCV 2019.
In this talk, we will show how one may use tools from signal processing to improve neural network performance in various applications, including image super-resolution, robustness to label noise, better representation capabilities, and more.
Raja Giryes is an associate professor in the School of Electrical Engineering at Tel Aviv University. His research interests lie at the intersection of signal and image processing and machine learning, and in particular in deep learning, inverse problems, sparse representations, computational photography, and signal and image modeling. Raja received the EURASIP Best Ph.D. Award, the ERC-StG grant, the Maof Prize for excellent young faculty (2016-2019), the VATAT scholarship for excellent postdoctoral fellows (2014-2015), the Intel Research and Excellence Award (2005, 2013), and the Excellence in Signal Processing Award (ESPA) from Texas Instruments (2008), and was part of the Azrieli Fellows program (2010-2013). He is an associate editor of IEEE Transactions on Image Processing and Elsevier Pattern Recognition, and has organized workshops and tutorials on deep learning theory at various conferences, including ICML, CVPR, and ICCV. He serves as a consultant to various high-tech companies, including Innoviz Technologies, and developed technology that formed the basis for the MultiVu Technologies startup.