Felipe Petroski Such is a Member of Technical Staff at OpenAI, based in New York, New York, United States. He previously studied Computer Engineering at the Rochester Institute of Technology.
Research Interests
Such's research interests include large language models, neural architecture search, deep reinforcement learning, and transfer learning.
Notable Works
Text and Code Embeddings by Contrastive Pre-Training (2022). Such et al. propose a contrastive pre-training method for learning text and code embeddings, showing improvements on a range of downstream tasks.
Evaluating Large Language Models Trained on Code (2021). This work evaluates large language models trained on code, demonstrating their capabilities and limitations in generating and understanding code.
Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data (2020). Such et al. introduce generative teaching networks, which learn to generate synthetic training data that accelerates neural architecture search.
Generalized Hidden Parameter MDPs: Transferable Model-Based RL in a Handful of Trials (2020). The paper proposes an approach to model-based reinforcement learning built on generalized hidden parameter Markov decision processes, enabling efficient transfer learning.
Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search (2020). Such and his colleagues develop Synthetic Petri Dish, a surrogate model that speeds up architecture search by approximating the performance of candidate neural architectures.
Intelligent Character Recognition Using Fully Convolutional Neural Networks (2019). This work applies fully convolutional neural networks to intelligent character recognition, achieving high recognition accuracy.
An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents (2019). Such et al. release a model zoo of deep reinforcement learning agents trained on Atari games, facilitating the analysis, visualization, and comparison of different algorithms.
Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents (2018). The paper introduces a population of novelty-seeking agents to improve exploration in evolution strategies for deep reinforcement learning, improving performance on tasks with sparse or deceptive rewards.
Co-authors
Such has collaborated with numerous researchers, including Aditya Rawal, Joel Lehman, Kenneth O. Stanley, Jeff Clune, Christian F. Perez, and Theofanis Karaletsos.
Publications
Text and Code Embeddings by Contrastive Pre-Training (with Arvind Neelakantan and 22 others), CoRR abs/2201.10005 (2022)
Evaluating Large Language Models Trained on Code (with Mark Chen and 44 others), CoRR abs/2107.03374 (2021)
Generalized Hidden Parameter MDPs: Transferable Model-Based RL in a Handful of Trials (with Christian F. Perez and Theofanis Karaletsos), AAAI 2020: 5403-5411
Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data (with Aditya Rawal, Joel Lehman, Kenneth O. Stanley, and Jeffrey Clune), ICML 2020: 9206-9216
Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search (with Aditya Rawal, Joel Lehman, Jeff Clune, and Kenneth O. Stanley), CoRR abs/2005.13092 (2020)
Intelligent Character Recognition Using Fully Convolutional Neural Networks (with Raymond W. Ptucha, Suhas Pillai, Frank Brockler, Vatsala Singh, and Paul Hutkowski), Pattern Recognit. 88: 604-613 (2019)
An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents (with Vashisht Madhavan, Rosanne Liu, Rui Wang, and 8 others), IJCAI 2019: 3260-3267
Fully Convolutional Networks for Handwriting Recognition (with Dheeraj Peri, Frank Brockler, Paul Hutkowski, and Raymond W. Ptucha), CoRR abs/1907.04888 (2019)
Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data (with Aditya Rawal, Joel Lehman, Kenneth O. Stanley, and Jeff Clune), CoRR abs/1912.07768 (2019)
Fully Convolutional Networks for Handwriting Recognition (with Dheeraj Peri, Frank Brockler, Paul Hutkowski, and Raymond W. Ptucha), ICFHR 2018: 86-91
Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents (with Edoardo Conti, Vashisht Madhavan, Joel Lehman, Kenneth O. Stanley, and Jeff Clune), NeurIPS 2018: 5032-5043
An intriguing failing of convolutional neural networks and the CoordConv solution (with Rosanne Liu, Joel Lehman, Piero Molino, Eric Frank, Alex Sergeev, and Jason Yosinski), NeurIPS 2018: 9628-9639
Efficient transfer learning and online adaptation with latent variable models for continuous control (with Christian F. Perez and Theofanis Karaletsos), CoRR abs/1812.03399 (2018)
An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents (with Vashisht Madhavan, Rosanne Liu, Rui Wang, and 8 others), CoRR abs/1812.07069 (2018)
Robust Spatial Filtering With Graph Convolutional Neural Networks (with Shagan Sah, Miguel Domínguez, Suhas Pillai, Chao Zhang, Andrew Michael, Nathan D. Cahill, and Raymond W. Ptucha), IEEE J. Sel. Top. Signal Process. 11(6): 884-896 (2017)
Temporally Steered Gaussian Attention for Video Understanding (with Shagan Sah, Thang Nguyen, and Miguel Domínguez), CVPR Workshops 2017: 2208-2216
Towards 3D convolutional neural networks with meshes (with Miguel Domínguez, Shagan Sah, and Raymond W. Ptucha), ICIP 2017: 3929-3933
Robust Spatial Filtering with Graph Convolutional Neural Networks (with Shagan Sah, Miguel Domínguez, Suhas Pillai, Chao Zhang, Andrew Michael, Nathan D. Cahill, and Raymond W. Ptucha), CoRR abs/1703.00792 (2017)
Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents (with Edoardo Conti, Vashisht Madhavan, Joel Lehman, Kenneth O. Stanley, and Jeff Clune), CoRR abs/1712.06560 (2017)
Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning (with Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, and Jeff Clune), CoRR abs/1712.06567 (2017)