2023 Fall
COMPSCI 285 001 - LEC 001
Deep Reinforcement Learning, Decision Making, and Control
Sergey Levine
Class #: 25142
Units: 3
Instruction Mode: In-Person Instruction
Time Conflict Enrollment Allowed
Offered through Electrical Engineering and Computer Sciences
Current Enrollment
Total Open Seats: -129
Enrolled: 249
Waitlisted: 0
Capacity: 120
Waitlist Max: 150
Open Reserved Seats:
12 reserved for Electrical Engineering and Computer Sciences - Master of Engineering Students
Hours & Workload
3 hours of instructor presentation of course materials per week and 6 hours of outside work per week.
Course Catalog Description
Intersection of control, reinforcement learning, and deep learning. Deep learning methods, which train large parametric function approximators, achieve excellent results on problems that require reasoning about unstructured real-world situations (e.g., computer vision, speech recognition, NLP). Advanced treatment of the reinforcement learning formalism, the most critical model-free reinforcement learning algorithms (policy gradients, value function and Q-function learning, and actor-critic), a discussion of model-based reinforcement learning algorithms, an overview of imitation learning, and a range of advanced topics (e.g., exploration, model-based learning with video prediction, transfer learning, multi-task learning, and meta-learning).
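For readers unfamiliar with the model-free methods named above, the sketch below illustrates one of them, the REINFORCE policy-gradient algorithm, in a minimal tabular setting. The tiny chain MDP, softmax policy, and hyperparameters are illustrative assumptions only and are not taken from the course materials.

```python
# Minimal REINFORCE policy-gradient sketch on a toy chain MDP (illustrative only).
import numpy as np

N_STATES, N_ACTIONS, HORIZON = 5, 2, 10
rng = np.random.default_rng(0)

def step(state, action):
    """Chain MDP (assumed for illustration): action 1 moves right, action 0 moves left;
    reward of 1 for being at the rightmost state."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

theta = np.zeros((N_STATES, N_ACTIONS))   # tabular policy parameters
alpha, gamma = 0.1, 0.99                  # learning rate and discount (assumed values)

for episode in range(500):
    # Roll out one trajectory with the current stochastic policy.
    state, traj = 0, []
    for _ in range(HORIZON):
        probs = softmax(theta[state])
        action = rng.choice(N_ACTIONS, p=probs)
        nxt, reward = step(state, action)
        traj.append((state, action, reward))
        state = nxt

    # REINFORCE update: theta += alpha * return-to-go * grad log pi(a|s).
    G = 0.0
    for state, action, reward in reversed(traj):
        G = reward + gamma * G
        probs = softmax(theta[state])
        grad_logp = -probs          # d/dtheta log softmax = onehot(action) - probs
        grad_logp[action] += 1.0
        theta[state] += alpha * G * grad_logp

print("Learned action probabilities per state:")
print(np.round(np.apply_along_axis(softmax, 1, theta), 3))
```

Running the sketch, the policy concentrates probability on the right-moving action in every state, which is the behavior the reward structure of this toy MDP encourages.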
Class Notes
* Time conflicts ARE allowed for this class.
Rules & Requirements
Requisites
- Graduate students NOT in the Master of Engineering Program, other than those in EECS
Repeat Rules
Course is not repeatable for credit.
Textbooks & Materials
See class syllabus or https://calstudentstore.berkeley.edu/textbooks for the most current information.
Associated Sections
None