I try to write often. If you find any discrepancies, or a post is missing its source code without explanation, please email me. I try to be open, honest, and genuine.
Backpropagatable FDFD and FDTD simulators in JAX, and attempts to make them faster with diffusion models, tiled preconditioning, and other tricks
Published July 28, 2025
Flowing gradients through the conductance matrix to optimize circuits of linear elements
Published May 16, 2025
A sketch of how models might change their weights to communicate through the power draw caused by increased bit flips
Published April 7, 2025
A replication of the paper Deep Leakage from Gradients, reverse-engineering training data from the gradients of a neural network during training.
Published March 24, 2025
Published November 25, 2024
Published October 5, 2024
Research on redundant attention heads in language models and their role in in-context learning through Bayesian updates.
Published August 31, 2024
A look at how language models update their priors based on in-context examples
Published August 21, 2024