This is a place where I collect notes, clippings and thoughts about what I read, be it papers or books. I tend to this wiki with love and care, and I am its main user, but I hope you will enjoy it too.
Unlike a blog, I don’t usually write my opinions in the wiki. Rather, I use this part of the site to organize what I learn and link it to other ideas, maintaining evergreen notes in the way of Andy Matuschak and others.
The notes in the Garden are divided into several kinds; the following are the ones I think you will like best.
If you wish to browse this wiki, you can also click any article in the wiki index, or use the search bar (search is tag-based, but it also greps titles) to find topics you like. Recommended tags: paper, programming, book.
Book Reviews
Yet who reads to bring about an end however desirable? Are there not some pursuits that we practise because they are good in themselves, and some pleasures that are final? And is not this among them? I have sometimes dreamt, at least, that when the Day of Judgment dawns and the great conquerors and lawyers and statesmen come to receive their rewards — their crowns, their laurels, their names carved indelibly upon imperishable marble — the Almighty will turn to Peter and will say, not without a certain envy when He sees us coming with our books under our arms, “Look, these need no reward. We have nothing to give them here. They have loved reading.” - Virginia Woolf
I save each quote I find especially interesting in a book on a separate page of this wiki. I also usually write a summary of the book and the impressions it left on me.
These are the books whose notes were abundant enough to be interesting. See also the books category page for more.
“A Discovery of France”, by Graham Robb: A book about France’s rural, non-Parisian population, its history and culture. It goes from the Middle Ages through early modernity up to the 19th century. This article contains my favorite quotes and the ones I found most surprising, of which there were quite a few.
“The Machinery of Life”, by David S. Goodsell: The author depicts with vivid illustrations and detailed descriptions what the inside of a cell (either human or bacterial) looks like, and various mechanisms underpinning life. This was one of the few books on biology I’ve read so far.
The Clock of the Long Now: A book that expounds and champions the long-term view. A foundational text for long-termism, and one of the texts that inspired Gwern to build his site. Also a very engaging and fast read: a collection of short essays on diverse topics, all playing around the idea of thinking in the very long term.
The Ascent of Money, Niall Ferguson: This book covers the history and importance of financial instruments: money, bonds, stock and insurance. It was a great resource after going through Marginal Revolution University’s Macroeconomics course.
Benjamin Franklin’s Autobiography: As a fan of 18th century history and political thought, especially of the revolutionary era and the Age of Enlightenment, I found this book fascinating, both because of Franklin’s writing style and his portrayal of 18th century Pennsylvania.
Big Notes (MOOCs)
Big Notes encompass multiple sources, but usually form around a MOOC (Massive Open Online Course) and are then updated as I go through the recommended reading.
Unsupervised Learning, Berkeley MOOC: An amazing course covering different techniques of Unsupervised Deep Learning, generative models, GANs, VAEs, etc.
Recommender Systems: A summary of the Google course on the subject (under TensorFlow documentation), plus a few other articles covering Matrix Factorization and collaborative filtering.
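For flavor, matrix factorization boils down to updates like the following single-rating SGD step. This is my own minimal sketch, not code from the course; `P` and `Q` are hypothetical user and item factor matrices.

```python
import numpy as np

def mf_sgd_step(P, Q, u, i, r, lr=0.01, reg=0.1):
    """One SGD step of matrix factorization for collaborative filtering:
    nudge user factors P[u] and item factors Q[i] to shrink the error
    on one observed rating r, with L2 regularization."""
    err = r - P[u] @ Q[i]
    # Both updates are computed from the old values before assignment.
    P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                  Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q
```

Iterating this over all observed (user, item, rating) triples gradually makes `P @ Q.T` approximate the rating matrix.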
Reinforcement Learning, an Introduction by Richard Sutton: My notes and excerpts from the book, covering parts I and II (part III is examples and related topics). I summarize Dynamic Programming, Monte Carlo, Temporal Difference, Function Approximation and n-step methods, along with all their combinations.
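The TD methods summarized there share one core idea, which a minimal tabular TD(0) update captures. This is my own illustration with hypothetical naming, not code from the book.

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular TD(0) step: move the value estimate V(s) toward
    the bootstrapped target r + gamma * V(s')."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V
```

Monte Carlo replaces the bootstrapped target with a full sampled return; n-step methods interpolate between the two.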
Macroeconomics, Marginal Revolution University: Notes I took while I went through the free Macroeconomics course by MRU. It is a great course for learning the basics, and it inspired me to read The Ascent of Money as an intermediate step before going through Mehrling’s course [pending].
Papers
If I read a paper, find it interesting, and think I will want to consider it again in the future (especially if I plan on reading related papers later), I will write a summary and save the most important discoveries or explanations. These are the papers I’ve read so far (mostly in the Machine Learning/Deep Learning space). [Under tag: paper ]
GAN Survey, Goodfellow 2016: A survey on Generative Adversarial Networks by Ian Goodfellow. That article also works as a hub for GAN related articles.
ViT: Transformers for Image Recognition: A pure transformer is used for image recognition tasks without any sort of convolutional layers and reaches state-of-the-art performance on multiple image recognition benchmarks.
Vision Transformers See Like Convolutional Neural Networks: Activation distributions across layers are compared between a ViT, a ResNet and other CNNs. The measured correlations reveal similar semantic extraction in the first layers of both architectures, and higher-order features sharing high mutual information in later layers, although ViT ultimately outperformed ResNet.
Evolution through Large Models: A new approach where an LLM is trained on synthetic data generated through an evolutionary process (MAP-Elites) to produce programs that solve an out-of-distribution task (Sodaracing). Reinforcement Learning is then used to create a generator of programs conditioned on terrains (so each problem gets a custom solution).
Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language: An attempt at a more general agent by taking multiple LLMs and making them interact in a zero-shot context to improve egocentric perception, video summarization, storytelling and question-answering. Not AGI, but not as far off as one might expect.
Proximal Policy Optimization: My description of the state-of-the-art reinforcement learning algorithm proposed by OpenAI, and a summary of the ideas proposed in its paper.
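The heart of PPO is the clipped surrogate objective, which can be sketched in a few lines of numpy. This is my own simplified illustration, not OpenAI's implementation; `ratio` stands for the per-sample probability ratio pi_new / pi_old.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate: take the minimum of the unclipped and
    clipped probability-ratio terms, so updates that would move the
    policy too far from the old one get no extra reward."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()
```

In practice this objective is maximized by gradient ascent on the policy parameters, alongside a value loss and an entropy bonus.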
Fast String Searching: Describes the string search algorithm underpinning grep and others. Not ML, but very interesting and a fun little algorithm.
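As a taste of the idea, here is a minimal sketch of the bad-character rule, one of the heuristics behind Boyer-Moore-style searching (real grep adds considerably more machinery). This is my own illustration, not code from the paper.

```python
def bm_search(text, pattern):
    """Boyer-Moore with only the bad-character rule: compare the pattern
    right-to-left against the text and, on a mismatch, skip ahead using
    the last position of the mismatched character within the pattern."""
    if not pattern:
        return 0
    # Last index at which each character occurs in the pattern.
    last = {c: i for i, c in enumerate(pattern)}
    m, n = len(pattern), len(text)
    i = 0  # alignment of the pattern's start within the text
    while i <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[i + j]:
            j -= 1
        if j < 0:
            return i  # full match at position i
        # Shift so the mismatched text character lines up with its last
        # occurrence in the pattern, or jump past it entirely.
        i += max(1, j - last.get(text[i + j], -1))
    return -1
```

The right-to-left comparison is what lets the search skip over chunks of the text without examining every character.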
Diffusion Language Models: A Stanford paper describing a new architecture and training method to make language models that are diffusion-based instead of autoregressive. They allow for more controlled text synthesis, albeit at the cost of some perplexity (so far).
Brain Activity to Speech: A paper by Facebook Research where an embedding model is trained with a contrastive loss (like CLIP’s) to align audio with brain (MEG and EEG) recordings, achieving lossy brain-to-audio conversion. They use non-invasive brain scanning methods.
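A CLIP-style symmetric contrastive (InfoNCE) loss of the kind the paper builds on can be sketched roughly as follows. This is my own simplified numpy illustration, not the paper's code; matched audio/brain pairs are assumed to sit on the diagonal of the batch.

```python
import numpy as np

def clip_style_loss(audio_emb, brain_emb, temp=0.07):
    """Symmetric contrastive loss: pull matched (diagonal) audio/brain
    pairs together in embedding space and push mismatched pairs apart."""
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    b = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    logits = a @ b.T / temp  # cosine similarities, sharpened by temperature

    def xent(l):
        # Cross-entropy with the diagonal as the target class per row.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Average the audio-to-brain and brain-to-audio directions.
    return (xent(logits) + xent(logits.T)) / 2
```

When the two encoders map matched pairs to nearly identical directions, the loss approaches zero.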
If you have anything to say about one of the articles, if you find an error in one of them, or if you just liked something you read and want to chat, don’t be afraid to tweet at me or send me an email. I will reply kindly and be happy you reached out! I also answer DMs on Reddit.
Size of the Site
In case someone finds it interesting (we digital garden / personal knowledge management types like comparing sizes), here’s how big the whole site is. I will update this every few months, as it doesn’t grow that quickly anyway. Last updated: March 6th, 2023.
Wiki
----
Markdown files: 105
Lines: 11,167
Word count: 139,247

Blog
----
Markdown files: 16
Lines: 2,408
Word count: 28,002