Michael Figurnov
Michael Figurnov is a Staff Research Scientist at DeepMind, where his research interests include deep learning, Bayesian methods, and machine learning for biology.
Education and Career
Figurnov was a PhD student in the Bayesian Methods Research Group at the Higher School of Economics in Moscow, under the supervision of Dmitry Vetrov (also of the AI Research Institute). He now works at DeepMind, where he has been involved in the development of AlphaFold, which has been hailed as a solution to the long-standing protein folding problem.
Publications
Figurnov has published extensively in the fields of machine learning and biology, with notable works including:
- AlphaFold: Figurnov has been a key contributor to AlphaFold, a highly accurate protein structure prediction tool. He co-authored several papers on this topic, including "Highly Accurate Protein Structure Prediction with AlphaFold" and "Applying and Improving AlphaFold at CASP14".
- Monte Carlo Gradient Estimation in Machine Learning: This paper provides a broad survey of methods for Monte Carlo gradient estimation in machine learning and statistics (the two core estimators are restated after this list).
- Implicit Reparameterization Gradients: This work introduces an alternative approach to computing low-variance reparameterization gradients for continuous random variables, such as the Gamma, Beta, Dirichlet, and von Mises distributions, providing a simple and efficient way to train latent variable models (a numerical sketch of the idea follows this list).
- Universal Conditional Machine: This paper proposes a single neural probabilistic model, based on the variational autoencoder, that can condition on an arbitrary subset of observed features and sample the remaining ones.
- Tensor Train Decomposition on TensorFlow (T3F): This paper introduces T3F, a library for the tensor train decomposition, a compact tensor factorization used in machine learning, filling the gap left by earlier implementations that lacked GPU support, batch processing, and automatic differentiation.
- Probabilistic Adaptive Computation Time: The paper presents a probabilistic model with latent variables that controls the computation time of deep learning models, trading accuracy against efficiency (a toy version of the underlying halting rule is sketched after this list).
- Spatially Adaptive Computation Time for Residual Networks: This paper proposes a residual-network architecture that dynamically adjusts the number of executed layers for different regions of an image, with applications in computer vision tasks.
- Robust Variational Inference: The paper proposes a robust modification of the evidence and a corresponding lower bound for variational inference, a powerful tool for approximate Bayesian inference.
- PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions: This work reduces the computational cost of convolutional neural networks by skipping redundant convolution evaluations, making them more practical for low-power devices (a minimal perforation sketch appears at the end of the examples below).
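For reference, the two estimator families at the heart of the Monte Carlo gradient survey can be stated compactly. This is a standard recap rather than the paper's exact notation:

```latex
% Score-function (REINFORCE) estimator:
\nabla_\theta \, \mathbb{E}_{q(z;\theta)}[f(z)]
  = \mathbb{E}_{q(z;\theta)}\big[ f(z)\, \nabla_\theta \log q(z;\theta) \big]

% Pathwise (reparameterization) estimator,
% for z = g(\epsilon;\theta) with \epsilon \sim p(\epsilon):
\nabla_\theta \, \mathbb{E}_{q(z;\theta)}[f(z)]
  = \mathbb{E}_{p(\epsilon)}\big[ \nabla_\theta f(g(\epsilon;\theta)) \big]
```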
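The key identity behind implicit reparameterization also fits in a few lines. The sketch below is illustrative and not the paper's implementation: it recovers dz/dα = -(∂F/∂α)/(∂F/∂z) for a fixed Gamma sample z by differentiating the CDF with central finite differences, whereas the paper computes these derivatives exactly or with careful numerics.

```python
# Minimal sketch of implicit reparameterization for a Gamma sample.
# For z ~ q(z; alpha) with CDF F(z; alpha), implicitly differentiating
# F(z; alpha) = u (u held fixed) gives dz/dalpha = -(dF/dalpha)/(dF/dz).
import numpy as np
from scipy import stats

def implicit_grad_gamma(z, alpha, eps=1e-5):
    # dF/dalpha via central finite differences (illustration only).
    dF_dalpha = (stats.gamma.cdf(z, alpha + eps)
                 - stats.gamma.cdf(z, alpha - eps)) / (2 * eps)
    dF_dz = stats.gamma.pdf(z, alpha)  # dF/dz is the density
    return -dF_dalpha / dF_dz

alpha = 2.0
z = stats.gamma.rvs(alpha, random_state=np.random.default_rng(0))
print("sample:", z, "dz/dalpha:", implicit_grad_gamma(z, alpha))
```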
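The two adaptive-computation papers build on a simple halting rule: each unit emits a halting score, and evaluation stops once the scores accumulate past a threshold. A toy sketch with hypothetical names, not the papers' code (SACT applies this rule independently at every spatial position of the feature map):

```python
def units_executed(halting_scores, eps=0.01):
    """Number of residual units run before halting.

    Evaluation stops once the cumulative halting score reaches 1 - eps.
    """
    cumulative = 0.0
    for n, score in enumerate(halting_scores, start=1):
        cumulative += score
        if cumulative >= 1.0 - eps:
            return n
    return len(halting_scores)

# An "easy" input produces high halting scores and exits early...
print(units_executed([0.6, 0.5, 0.3, 0.2]))  # -> 2
# ...while a "hard" input keeps all units running.
print(units_executed([0.1, 0.2, 0.2, 0.2]))  # -> 4
```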
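Finally, the perforation idea can be illustrated in plain NumPy: evaluate the convolution only on a strided grid of output positions and fill the skipped positions from their nearest computed neighbour. This is a minimal sketch with a fixed grid mask; the paper studies several mask types, including data-driven ones.

```python
import numpy as np

def perforated_conv2d(image, kernel, stride=2):
    # Hypothetical helper, not the paper's code: a "valid" 2-D
    # convolution evaluated only at a strided subset of positions.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(0, oh, stride):          # compute only on the grid
        for j in range(0, ow, stride):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    # Fill skipped positions from the nearest computed position
    # above and to the left (nearest-neighbour interpolation).
    ii = (np.arange(oh) // stride) * stride
    jj = (np.arange(ow) // stride) * stride
    return out[np.ix_(ii, jj)]

img = np.random.default_rng(0).normal(size=(8, 8))
k = np.ones((3, 3)) / 9.0
print(perforated_conv2d(img, k).shape)  # (6, 6)
```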
Co-Authors
Michael Figurnov has collaborated with numerous researchers in the field, including:
- Dmitry Vetrov
- Ruslan Salakhutdinov
- Maxwell D. Collins
- Yukun Zhu
- Pushmeet Kohli
- Andriy Mnih
- Mihaela Rosca
- Alexander Novikov
- Pavel Izmailov
- Valentin Khrulkov
- Ivan V. Oseledets
- Oleg Ivanov
- Artem Sobolev
- Jonathan Huang
- Aijan Ibraimova
- Kirill Struminsky
AlphaFold
One of Figurnov's most notable contributions is his work on the AlphaFold system, which revolutionised protein structure modelling and design. The AlphaFold Protein Structure Database, powered by DeepMind's AlphaFold v2.0, enabled an unprecedented expansion of the structural coverage of known protein sequences, and the subsequent AlphaFold 3 model demonstrated improved accuracy over previous specialised tools, making high-accuracy modelling across biomolecular space possible within a single unified deep-learning framework.