Blog

I try to write often. If you notice any discrepancies, or a post is missing its source code without explanation, please email me. I try to be open, honest, and genuine.

Implementing E&M Simulators

Backpropagatable FDFD and FDTD simulators in JAX, using diffusion models, tiled preconditioning, and other tricks to make them faster

Backpropagate Circuits

Flowing gradients through the conductance matrix to optimize circuits of linear elements

Soft Token Adversarial Attack

Dabney Hovse Geocaching Service Tutorial

Using Power Draw as a Side Channel for Communication

A sketch of how models may be able to change their weights to communicate through power draw resulting from increased bitflips

Gradient Inversion Attack

A replication of the paper Deep Leakage from Gradients, reverse-engineering the training data from the gradients of a neural network during training.

Soviet Chess Diplomacy

Modeling Protein Evolution

Redundant Attention Heads in Large Language Models for In-Context Learning

Research on redundant attention heads in language models and their role in in-context learning through Bayesian updates.

Language Models Update Based on In-Context Learning

A look at how language models update their priors based on in-context examples

Journal Club: July 17 2024

Journal Club: July 10 2024