ASHLEY EDWARDS
Atlanta, GA ▪ 678-682-1711 ▪ aedwards8@gatech.edu
linkedin.com/in/aedwards8 ▪ cc.gatech.edu/~aedwards
______________________________________________________________________________
CAREER OBJECTIVE
Seeking research opportunities in reinforcement learning, robotics, and machine learning.
______________________________________________________________________________
EDUCATION
Georgia Institute of Technology, Atlanta, GA
August 2011 - May 2017
▪ Pursuing PhD in Computer Science: GPA 3.75
▪ Passed Qualifying Exam in Intelligent Systems
▪ Awarded 2012-2015 NSF Graduate Research Fellowship
▪ Awarded 2015-2016 Xerox Technical Minority Scholarship
University of Georgia, Athens, GA
August 2007 - May 2011
▪ Bachelor of Science in Computer Science: GPA 3.58
▪ Awarded HOPE Scholarship from 2007-2011
______________________________________________________________________________
TECHNICAL SKILLS
Languages
▪ Proficient in: Java, Python
▪ Experience with: C/C++, Matlab
Software & Technology
▪ Platforms: Windows, Linux, OS X, Amazon EC2
______________________________________________________________________________
EXPERIENCE
Software Engineering Research Intern
Google, Mountain View, CA
Summer 2016 - Present
▪ Research and development within the Google Photos team
Graduate Research Assistant
Georgia Institute of Technology, Atlanta, GA
August 2011 - Present
▪ Research in reinforcement learning in Dr. Charles Isbell's Lab for Interactive Machine Learning
Graduate Teaching Assistant
Georgia Institute of Technology, Atlanta, GA
Fall 2015
▪ TA for Dr. Charles Isbell and Dr. Michael Littman's graduate-level reinforcement learning course at Udacity
NSF GROW Fellow
Waseda University, Tokyo, Japan
Summer 2015
▪ Summer research intern in Dr. Atsuo Takanishi's research lab
▪ Developed a reinforcement learning algorithm to train agents to learn behavior from videos without tracking features
Graduate Teaching Assistant
Georgia Institute of Technology, Atlanta, GA
Fall 2012
▪ TA for Dr. Thad Starner's graduate-level AI course
Temp Operations Specialist
CGI, Atlanta, GA
June 2011 - August 2011
▪ Added features and verified code fixes within software
NSF Funded Undergraduate Research
Rutgers University, Piscataway, NJ
June 2010 - July 2010
▪ Developed Higher-Order Q-Learning in Dr. William Pottenger's research lab
Independent Study
University of Georgia, Athens, GA
Spring 2010
▪ Studied multi-agent reinforcement learning in Dr. Prashant Doshi's lab
NSF Funded Undergraduate Research
University of Massachusetts, Amherst, MA
June 2009 - August 2009
▪ Studied and implemented common reinforcement learning problems in Dr. Andy Barto's Autonomous Learning Lab
______________________________________________________________________________
PUBLICATIONS
Edwards, A., Isbell, C., Takanishi, A. “Perceptual Reward Functions.” In Deep Reinforcement
Learning: Frontiers and Challenges. New York, USA, July 2016. (IJCAI 2016 Workshop)
Edwards, A., Littman, M., Isbell, C. “Expressing Tasks Robustly via Multiple Discount Factors.”
In The 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making.
Edmonton, Canada. June 2015. (Extended Abstract)
Edwards, A. and Pottenger, W. M. “Higher Order Q-Learning." In IEEE Symposium on
Adaptive Dynamic Programming and Reinforcement Learning. Paris, France. April 2011.
______________________________________________________________________________
PRESENTATIONS AND GUEST LECTURES
“Q-Learning, Sarsa, and Monte Carlo Tree Search.” Guest Lecture for Dr. Charles Isbell’s
Undergraduate Machine Learning Course. April 2016.
“Monte Carlo, Policy Iteration, and TD Methods.” Guest Lecture for Dr. Charles Isbell’s
Undergraduate Machine Learning Course. April 2016.
“Using Visual Reinforcement Learning to Teach Kobian Facial Expressions from Videos.”
Presentation for Dr. Atsuo Takanishi’s lab at Waseda University. September 2015.
“Q-Learning and Sarsa.” Guest Lecture for Dr. Charles Isbell’s Undergraduate Machine Learning
Course. April 2015.
“Higher Order Q-Learning.” Presentation for Dr. Michael Littman’s lab at Rutgers University.
February 2012.
______________________________________________________________________________
COURSEWORK
Context-Based Messages on Google Glass
Course: Ubiquitous Computing
Spring 2015
In this project, we created a system for Google Glass that allowed users to choose the contexts
in which they sent messages to other users. A user can choose to send a message to herself or to
one of her contacts based on time, location, the weather, and either her own or the recipient's
current state (awake or free).
COBOT
Course: Machine Learning
Spring 2013
In this project, we created an agent, COBOT, for Twitter. The agent parsed a user's tweets and
determined which of the user's followers might be able to answer a question the user had asked.
Task Planning using Semantic Mapping and Human Feedback
Course: Robotic and Intelligent Planning
Fall 2012
In this project, we created a system for allowing a simulated robot to learn how to complete tasks
using a semantic map and human feedback. This work focused on reasoning about which tasks
should be completed first and when to ask humans for help. Given new information, the robot
was able to update a semantic map and re-plan.
Creating Adaptive Agents through Interaction
Course: Computational Creativity
Spring 2012
In this project, we created an agent that was capable of learning and adapting from interactions
with an end user. In our approach, we used Interactive Reinforcement Learning to train a bipedal
agent to learn how to run and remain balanced.