This thesis presented generalized data structures and algorithms for building simplicial spacetime meshes that work up through 3D×Time, together with algorithms for “pitching” through spacetime that enable efficient solution of hyperbolic partial differential equations using the spacetime Galerkin method.
Courses:
Machine Learning / AI courses:
Artificial Intelligence: General course introducing the history and major branches of AI. Covered a wide range of topics, such as Constraint Satisfaction Problems (CSPs), Machine Learning (ML), Game Theory and Game Trees, path planning with A*, etc.
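As a rough illustration of the A* path-planning topic, here is a minimal sketch on a 4-connected grid; the grid, start, goal, and Manhattan-distance heuristic are all assumptions made up for the example, not course materials.

```python
# Minimal A* on a 4-connected grid with a Manhattan-distance heuristic.
import heapq

def astar(grid, start, goal):
    """Return the cost of a shortest path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    frontier = [(h(start), 0, start)]       # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        if g > best_g.get(cell, float("inf")):
            continue                        # stale queue entry, skip it
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]                          # 1 = obstacle
print(astar(grid, (0, 0), (2, 0)))          # 6
```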
Machine Learning for Signal Processing: Learned techniques for performing machine learning on signals such as sound, images, and video.
The key insight was transforming these raw inputs into new representations that carry the essential information. For example, a sound file can be transformed into a spectrogram, which encodes the time-localized frequency content of that sound. A spectrogram can almost be viewed as an “image,” so it can then be fed into techniques built for computer vision, e.g. neural nets with convolutional layers (a minimal spectrogram example follows this course’s description).
Ultimately learned and applied unsupervised and supervised learning techniques to signals of all kinds.
Used the skills gained in the class for a final project, which revolved around identifying people with depression based on fMRIs of their brains. Link to project here.
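To make the spectrogram idea concrete, here is a minimal sketch using scipy.signal.spectrogram on a synthetic chirp; the signal and all parameters are made up purely for illustration.

```python
# Sound -> spectrogram -> "image": short-time Fourier analysis of a chirp.
import numpy as np
from scipy.signal import spectrogram

fs = 8000                                # sample rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)          # 2 seconds of samples
# A chirp whose frequency sweeps from 100 Hz to 1000 Hz over the clip.
x = np.sin(2 * np.pi * (100 + 225 * t) * t)

# Each column of Sxx is the spectrum of one time window.
freqs, times, Sxx = spectrogram(x, fs=fs, nperseg=256)

# Sxx is a 2D frequency-by-time array, i.e. exactly the "image" that a
# convolutional network could consume after log-scaling and normalization.
log_spec = np.log(Sxx + 1e-10)
print(log_spec.shape)                    # (129, number_of_windows)
```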
Machine Learning: Covered fundamental theoretical ML ideas and techniques, coupled with programming assignments (a small EM-for-GMMs sketch follows the topic list). Some of the theoretical topics covered:
Maximum likelihood estimators
Maximum a-posteriori estimators
Multi-class learning
Support vector machines (SVMs)
Gaussian mixture models
Expectation-Maximization algorithm
KL divergence
Deep learning (DL)
Generative Adversarial Networks (GANs)
Probabilistic graphical models
Q-Learning
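As a small illustration of two of the items above, here is a sketch of the Expectation-Maximization algorithm fitting a two-component 1D Gaussian mixture with numpy; the synthetic data and the initialization scheme are assumptions for the example.

```python
# EM for a two-component 1D Gaussian mixture model.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two Gaussian clusters.
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 1.0, 700)])

K = 2
pi = np.full(K, 1.0 / K)             # mixing weights
mu = rng.choice(x, K)                # initial means sampled from the data
var = np.full(K, x.var())            # initial variances

for _ in range(100):
    # E-step: responsibilities r[n, k] = P(component k | x_n).
    log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
             - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)

    # M-step: re-estimate weights, means, and variances from responsibilities.
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

# Should roughly recover the two clusters (component order may vary).
print(pi, mu, var)
```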
Optimization in Computer Vision: Covered a wide variety of optimization approaches with applications in computer vision and broader ML. Final project here. A small proximal-gradient sketch follows the topic list. Topics included:
Basic Continuous Optimization: Gradient Descent, Newton Methods, Trust Region methods, subspace methods, Expectation-Maximization algorithm as coordinate descent, etc.
Constrained Optimization: linear programming, quadratic programming, augmented Lagrangian methods, KKT conditions, interior point methods, semi-definite programming, etc.
Combinatorial optimization and various approximation algorithms for such problems, including some LP relaxations.
Dual decomposition algorithms, such as alternating direction method of multipliers (ADMM).
Proximal algorithms
Gradient boosting
Markov decision processes
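To ground the proximal-algorithms item, here is a minimal sketch of ISTA (proximal gradient descent) for the lasso, where the proximal operator of the l1 norm is soft-thresholding; the problem instance and the regularization weight are made up.

```python
# ISTA for the lasso:  min_x  0.5 * ||A x - b||^2 + lam * ||x||_1
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50)
x_true[:5] = rng.normal(size=5)          # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=100)

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of the gradient

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(50)
for _ in range(500):
    grad = A.T @ (A @ x - b)             # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)

print(np.nonzero(x)[0])                  # should roughly recover the sparse support
```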
Statistical Reinforcement Learning Theory: Course discussing theoretical aspects of reinforcement learning, with a special emphasis on characterizing the sample complexity of various techniques. Final project here. A small value-iteration sketch follows the topic list. Topics included:
MDPs, Value Iteration, Policy Iteration
Concentration inequalities, such as Hoeffding’s inequality
Fitted Q-Iteration
Importance Sampling and Policy Gradient
Rmax exploration
Bellman rank and OLIVE
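As a small illustration of the first item, here is a value-iteration sketch on a tiny tabular MDP; the transition tensor and rewards are synthetic and made up for the example.

```python
# Value iteration on a random tabular MDP.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9

rng = np.random.default_rng(0)
# P[s, a, s'] = probability of landing in s' after taking action a in state s.
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))    # expected reward for (s, a)

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality backup: Q(s, a) = R(s, a) + gamma * E[V(s')].
    Q = R + gamma * P @ V
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:  # stop once the backup has converged
        break
    V = V_new

policy = Q.argmax(axis=1)                # greedy policy w.r.t. the final Q
print(V, policy)
```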
Other courses:
Algorithms: Course covering topics such as advanced dynamic programming, randomized algorithms, linear programming, more general approximation algorithms, and NP-hardness.
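As a generic illustration of the dynamic-programming topic, here is a sketch of the classic 0/1 knapsack recurrence; the instance is made up and is not from the course.

```python
# 0/1 knapsack via bottom-up dynamic programming.
def knapsack(values, weights, capacity):
    """Max total value of items fitting within capacity, each used at most once."""
    # best[c] = best value achievable with total weight at most c.
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```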
Fast Algorithms & Integral Equations: Covered fast randomized linear algebra algorithms, then applied those techniques to build fast solvers for integral equations.
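To sketch the flavor of fast randomized linear algebra, here is a minimal randomized range finder (the core of randomized SVD) in numpy; the matrix, target rank, and oversampling amount are assumptions for the example.

```python
# Randomized low-rank approximation via a randomized range finder.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 500, 400, 10
# Build a matrix that is exactly rank k so the approximation is easy to check.
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))

p = 5                                    # small oversampling parameter
Omega = rng.normal(size=(n, k + p))      # random test matrix
Y = A @ Omega                            # sample the range of A
Q, _ = np.linalg.qr(Y)                   # orthonormal basis for that range

# Project A into the small subspace and take an exact SVD there.
B = Q.T @ A                              # (k+p) x n, cheap to decompose
U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ U_small                          # lift back to the original space

err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)                               # tiny, since A is exactly rank k
```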
Parallel Programming: A systems course that covered a mix of theoretical and practical knowledge related to parallel programming.
On the theoretical side, covered models of computation for parallel algorithms and cache-aware algorithm development, showcasing standard techniques for building algorithms that minimize cache misses.
On the practical side, learned standard tools such as pthreads, OpenMP, MPI, and Charm++.
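The tools above are C/C++-centric; as a language-neutral analogue, here is the same fork-join, data-parallel pattern sketched with Python's standard multiprocessing module (the workload is made up for illustration).

```python
# Fork-join data parallelism: split work across processes, then reduce.
from multiprocessing import Pool

def work(chunk):
    """Per-worker task: here, just a sum of squares over one chunk."""
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Split the data into one chunk per worker (round-robin striding).
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(work, chunks)   # fork: run chunks in parallel
    print(sum(partials))                    # join: reduce the partial results
```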