Jihwan Jeong
Ph.D. Candidate at University of Toronto

University of Toronto

Vector Institute

Biography

Welcome to my profile 😄! I'm a Ph.D. candidate at the University of Toronto, working in the D3M (Data-Driven Decision-making) lab under the supervision of Professor Scott Sanner. My interest in AI and ML is rooted in their potential to transform decision-making across diverse areas.

My research focuses on leveraging learned models for better decision-making, with a particular emphasis on offline model-based reinforcement learning. This work includes a paper accepted at ICLR-23 that uses Bayesian models for robust planning and policy learning by accounting for the epistemic uncertainty of learned models (learn more here).

My internship at Google Research, under the guidance of Yinlam Chow, was a formative experience. There, I worked on integrating recommender systems with large language models (LLMs), applying Reinforcement Learning from AI Feedback (RLAIF) in a novel way to the challenge of generating recommendation explanations. This culminated in a first-authored paper on fine-tuning LLMs for accurate and personalized recommendations. I also played a key role in developing the PAX pipeline, a cornerstone of our team's language-model projects. (Check out the other paper here!)

Approaching the completion of my Ph.D., my thesis, tentatively titled "Leveraging Learned Models for Decision-Making," encapsulates my research ethos. It tackles the intricacies of using imperfect models for decision-making by focusing on (1) optimizing decision loss, (2) employing Bayesian methods for uncertainty management, and (3) enabling models and policies to adapt swiftly to new environments.

I look forward to opportunities that will allow me to apply and expand my expertise in AI/ML, aiming to continue making impactful contributions in this dynamic field.

Download my CV.

Interests
  • Offline & model-based reinforcement learning
  • Uncertainty quantification in neural networks
  • RL for Large Language Models
  • Decision-aware model learning
Education
  • Ph.D. Candidate in Information Engineering (Present)

    University of Toronto

  • M.S. in Industrial and Systems Engineering, 2019

    Korea Advanced Institute of Science and Technology (KAIST)

  • B.S. in Chemistry, 2015

    Korea Advanced Institute of Science and Technology (KAIST)

Experience

Google Research
Student Research Program
Jun 2023 – Present Mountain View, CA, US (Remote)
Contributed to the submission of two papers: applying RLAIF to advance language models for personalized recommendations, and helping to develop the PAX pipeline.

Vector Institute
Research Intern
Jun 2022 – Sep 2022 Toronto
Worked with Professor Pascal Poupart on a model-based offline reinforcement learning project (under review).

LG AI Research
Research Intern
Jun 2021 – Oct 2021 Seoul
Worked on a model-based offline reinforcement learning project (ICLR-23).

University of Toronto
Ph.D. Candidate
Sep 2019 – Present Toronto

Research Projects

Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization (to appear at ICLR-23)
A model-based offline RL algorithm that trades off the uncertainty of the learned dynamics model against that of the value function via Bayesian posterior estimation, achieving state-of-the-art performance on a variety of D4RL benchmark tasks.

Publications

(2023). Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization. In ICLR-23.

(2022). An Exact Symbolic Reduction of Linear Smart Predict+Optimize to Mixed Integer Linear Programming. In ICML-22.

(2022). A Distributional Framework for Risk-Sensitive End-to-End Planning in Continuous MDPs. In AAAI-22.

(2022). Online Continual Learning in Image Classification: An Empirical Survey. Neurocomputing, 469: 28-51, 2022.

(2021). Bayesian Optimization for a Multiple-Component System with Target Values. Computers & Industrial Engineering, 157.

(2021). Online Class-Incremental Continual Learning with Adversarial Shapley Value. In AAAI-21.

(2020). Batch-level Experience Replay with Review for Continual Learning. In CVPR Workshop on Continual Learning in Computer Vision.

Teaching Experience

Introduction to Artificial Intelligence (MIE369)
Jan 2023 – Apr 2023 University of Toronto
Course Instructor

Optimization in Machine Learning (MIE424)
Jan 2023 – Apr 2023 University of Toronto
Teaching Assistant

Decision Support Systems (MIE451)
Sep 2022 – Dec 2022 University of Toronto
Teaching Assistant

Introduction to Artificial Intelligence (MIE369)
Jan 2022 – Apr 2022 University of Toronto
Teaching Assistant

Introduction to Artificial Intelligence (MIE369)
Jan 2021 – Apr 2021 University of Toronto
Teaching Assistant

Introduction to Artificial Intelligence (MIE369)
May 2021 – Aug 2021 University of Toronto
Teaching Assistant

Optimization in Machine Learning (MIE424)
Jan 2020 – Apr 2020 University of Toronto
Teaching Assistant

Foundations of Data Analytics and Machine Learning (APS1070)
Sep 2019 – Dec 2019 University of Toronto
Teaching Assistant

Contact