ML Fragments
Raw unstructured thoughts and ideas.
These are just raw keywords which may eventually evolve into their own pages if I dive deep enough. For now they are just disconnected "fragments", interesting directions that I may want to pursue. These are intentionally abstract. Please don't hesitate to reach out if you'd like to discuss more!
There is a non-trivial chance that prior work has already posed similar questions, but I haven't spent enough time studying these areas in detail.
Three-Way Markets
The economy (and "micro-"economies, if you will) seems to run on three-way markets: i) the stock market, ii) the gig economy, the likes of Uber and AirBnB. Each transaction can most likely be modeled as consisting of three components: a buyer, a seller and a mediator, where each component could be an individual or an institution.
Much like the reward hypothesis in RL, there appears to be a similar hypothesis in stock markets: the stock price contains all the information one needs (I'm still trying to understand the nuance involved in this hypothesis). We would certainly want to model both the micro and macro dynamics. What tools does machine learning provide?
Reinforcement Learning
- Knowledge Graphs for exploration
- Revisiting particle optimization in Model-Based RL via amortized proposals (model-free example: [2001.08116] Q-Learning in enormous action spaces via amortized approximate maximization; see the sketch after this list)
- [1704.06440] Equivalence Between Policy Gradients and Soft Q-Learning
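A minimal sketch of the amortized-maximization idea (names like `AmortizedArgmax`, `proposal`, and `q_fn` are my own placeholders, not the paper's API): instead of an exact argmax over an enormous discrete action space, a learned proposal network nominates a small candidate set, a few uniform samples guard against a collapsed proposal, and only the candidates are scored by the Q-function.

```python
import torch
import torch.nn as nn

class AmortizedArgmax(nn.Module):
    """Sketch: approximate argmax_a Q(s, a) over a huge discrete action space
    by scoring only a small candidate set. Most candidates come from a learned
    proposal; a few uniform samples keep some exploration pressure."""

    def __init__(self, state_dim: int, n_actions: int,
                 k_proposal: int = 16, k_uniform: int = 4):
        super().__init__()
        self.proposal = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(), nn.Linear(256, n_actions))
        self.n_actions, self.k_proposal, self.k_uniform = n_actions, k_proposal, k_uniform

    def forward(self, state: torch.Tensor, q_fn) -> torch.Tensor:
        # Nominate candidate actions: samples from the proposal plus uniform ones.
        probs = self.proposal(state).softmax(dim=-1)                      # (B, n_actions)
        proposed = torch.multinomial(probs, self.k_proposal)              # (B, k_p)
        uniform = torch.randint(self.n_actions,
                                (state.shape[0], self.k_uniform),
                                device=state.device)                      # (B, k_u)
        candidates = torch.cat([proposed, uniform], dim=1)                # (B, k_p + k_u)
        # Score only the candidates with the (expensive) Q-function; keep the best.
        q_values = q_fn(state, candidates)                                # (B, k_p + k_u)
        best = q_values.argmax(dim=1, keepdim=True)
        return candidates.gather(1, best).squeeze(1)                      # (B,) approximate argmax
```

The "amortized" part is that the proposal itself is trained (e.g. by pushing it towards the currently best-scoring candidates) so that, over time, good actions are found with very few Q-evaluations. The same trick is what makes particle-style proposals in model-based RL feel worth revisiting.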
Model-Based
- Fixing objective mismatch in MBRL using Expectation Maximization.
- Connections to classic control theory
Bayesian Inference
- [1710.06595] Variational Inference based on Robust Divergences, [1904.02063] Generalized Variational Inference: Three arguments for deriving new Posteriors
- How do we utilize self-consistency from Bayes' theorem? Can we create tractable formulations for the corresponding divergence problem?
- How Good is the Bayes Posterior in Deep Neural Networks Really?
- EM maximizes the log marginal directly, whereas VI maximizes a lower bound. Is it objectively better? (See the decomposition below.)
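For reference, the decomposition that makes this comparison precise, with $q(z)$ the distribution over latents:

$$
\log p_\theta(x) \;=\; \underbrace{\mathbb{E}_{q(z)}\big[\log p_\theta(x, z) - \log q(z)\big]}_{\text{ELBO}} \;+\; \mathrm{KL}\big(q(z)\,\|\,p_\theta(z \mid x)\big).
$$

EM's E-step sets $q(z) = p_\theta(z \mid x)$ exactly, so the KL term vanishes and the M-step ascends the true log marginal; VI restricts $q$ to a tractable family, so a gap generally remains. "Objectively better" therefore mostly reduces to whether the exact posterior (or the expectations the M-step needs) is tractable.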
Learned invariances
- It's probably more important now than ever to build priors into neural networks that satisfy the invariances we care about. How do we do this? e.g. Learning Invariances using the Marginal Likelihood (a rough sketch follows).
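A rough, hypothetical sketch of the flavour of this idea (the paper itself learns invariances through a GP marginal likelihood; the `InvariantClassifier` below is my own simplification that averages a base network's predictions over transformations drawn from a distribution whose extent is a learnable parameter):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvariantClassifier(nn.Module):
    """Sketch: make rotation (in)variance a learnable quantity by averaging the
    base network's logits over rotations sampled from [-a, a], where the
    half-width a is itself a parameter trained along with the network."""

    def __init__(self, base_net: nn.Module, n_samples: int = 8):
        super().__init__()
        self.base_net = base_net
        self.n_samples = n_samples
        self.raw_angle = nn.Parameter(torch.tensor(0.0))  # sigmoid maps it into (0, pi)

    def _rotate(self, x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        # Per-sample 2x3 affine rotation matrices, applied via grid_sample.
        cos, sin, zero = torch.cos(theta), torch.sin(theta), torch.zeros_like(theta)
        mat = torch.stack([torch.stack([cos, -sin, zero], dim=-1),
                           torch.stack([sin,  cos, zero], dim=-1)], dim=-2)
        grid = F.affine_grid(mat, list(x.shape), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        half_width = math.pi * torch.sigmoid(self.raw_angle)
        logits = 0.0
        for _ in range(self.n_samples):
            # Reparameterised draw so gradients reach the invariance parameter.
            u = 2 * torch.rand(x.shape[0], device=x.device) - 1
            logits = logits + self.base_net(self._rotate(x, u * half_width))
        return logits / self.n_samples
```

If rotations hurt the training objective, the learned half-width shrinks towards zero; if they help, it grows, so the "prior" encodes exactly as much invariance as the data supports.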
Model Misspecification
Uncertainty Calibration
- [1706.04599] On Calibration of Modern Neural Networks
- [2002.02405] How Good is the Bayes Posterior in Deep Neural Networks Really?
Uncertainty Estimation
- Conservative Uncertainty Estimation By Fitting Prior Networks - Kamil Ciosek, Vincent Fortuin, Ryota Tomioka, Katja Hofmann, Richard Turner
Gaussian Processes
- What sort of structured variational approximations can improve stochastic variational inference for GPs? (See the bound after this list.)
- Sparse Orthogonal Variational Inference for Gaussian Processes - Jiaxin Shi, Michalis K. Titsias, Andriy Mnih
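For context, the sparse/stochastic variational GP bound these questions build on, with inducing variables $\mathbf{u}$ and variational distribution $q(\mathbf{u})$:

$$
\log p(\mathbf{y}) \;\ge\; \sum_{n=1}^{N} \mathbb{E}_{q(f_n)}\big[\log p(y_n \mid f_n)\big] \;-\; \mathrm{KL}\big(q(\mathbf{u})\,\|\,p(\mathbf{u})\big),
\qquad
q(f_n) = \int p(f_n \mid \mathbf{u})\, q(\mathbf{u})\, d\mathbf{u}.
$$

"Structured" approximations change the form of $q(\mathbf{u})$, or, as in the orthogonal decomposition above, split $f$ into the part lying in the span of the inducing points and an orthogonal remainder, while keeping the sum over data points that makes minibatch (stochastic) optimization possible.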
Implicit Distributions
- Variational Inference using Implicit Distributions - Ferenc HuszĂĄr
- Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks - Lars Mescheder, Sebastian Nowozin, Andreas Geiger
Linear Algebra
- Circulant (and more generally Toeplitz) matrices allow much faster matrix-vector multiplications. For non-Toeplitz matrices, there is a notion of "asymptotically Toeplitz" under the weak matrix norm (Frobenius). What problem families afford such structure? If they do, can we leverage non-asymptotic guarantees? (A small sketch follows.)
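A small sketch of why circulant structure buys speed: a circulant matvec is a circular convolution, so the product can be computed with FFTs in O(n log n) without ever forming the n x n matrix.

```python
import numpy as np

def circulant_matvec(first_col: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Multiply the circulant matrix C (defined by its first column) with x.
    Circulants are diagonalized by the DFT, so C @ x = ifft(fft(c) * fft(x))."""
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))

# Sanity check against the dense product on a small example.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)
dense = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(dense @ x, circulant_matvec(c, x))
```

Toeplitz matrices get the same O(n log n) matvec by embedding into a circulant of twice the size, which is part of what makes the "asymptotically Toeplitz" question interesting.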