Short reads ⚡️
Snowflake is crushing it, and public investors are clamoring to buy more after its recent quarterly earnings. At a surface level, it's easy to look at Snowflake and say, "oh, it is just a new data warehouse." It's not a false statement, but it misses what's special about Snowflake. Michael Malis, one of the founders at Freshpaint (an Abstraction portfolio company), does a great job of breaking down why it's exciting. Freshpaint Blog
There was recently an intriguing back-and-forth between Matt Biilmann (founder of Netlify) and Matt Mullenweg (founder of Automattic, the company behind WordPress) on the merits and tradeoffs of WordPress (a very monolithic approach to building a content-based website/app) versus the Jamstack (a very decoupled, composable approach to a content-based website/app). It's worth reading this Netlify post to grok the ideas behind the two philosophies. There are cases where each makes sense, but I would prefer to bet on the flexible, composable approach offered by the Jamstack.
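To make the contrast concrete: WordPress couples content storage, templating, and serving inside one application, while a Jamstack site pulls content from a decoupled API and pre-renders static files at build time. Here is a minimal sketch of that build step; the CMS endpoint and the Post shape are hypothetical stand-ins for any headless content API, not anyone's actual code.

```ts
// build.ts: a minimal, hypothetical Jamstack-style build step.
// Pull content from a decoupled content API at build time and emit
// static HTML files that any CDN can serve.
import { writeFile, mkdir } from "node:fs/promises";

// Hypothetical headless CMS endpoint; any content API would do here.
const CMS_URL = "https://cms.example.com/api/posts";

interface Post {
  slug: string;
  title: string;
  body: string;
}

async function build(): Promise<void> {
  const res = await fetch(CMS_URL);
  if (!res.ok) throw new Error(`CMS fetch failed: ${res.status}`);
  const posts: Post[] = await res.json();

  await mkdir("dist", { recursive: true });
  for (const post of posts) {
    // Each post becomes a plain static file; no server-side rendering
    // or database is involved at request time.
    const html = `<!doctype html><title>${post.title}</title><h1>${post.title}</h1><p>${post.body}</p>`;
    await writeFile(`dist/${post.slug}.html`, html);
  }
}

build().catch(console.error);
```

The appeal of the composable approach is visible even in this toy: the CMS, the build step, and the CDN are separate pieces, so any one of them can be swapped out independently. Netlify Blog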
Long read 🤓
I, too, am tired of reading about GPT-3. That said, this excellent post by Melanie Mitchell puts the model through its paces with a variant of Douglas Hofstadter's Copycat system to see whether it can make coherent analogies. The results are really fascinating… tl;dr in the quote below (emphasis mine):
The program's performance was mixed. GPT-3 was not designed to make analogies per se, and it is surprising that it is able to do reasonably well on some of these problems, although in many cases it is not able to generalize well. Moreover, when it does succeed, it does so only after being shown some number of "training examples". To my mind, this defeats the purpose of analogy-making, which is perhaps the only "zero-shot learning" mechanism in human cognition: that is, you adapt the knowledge you have about one situation to a new situation. You (a human, I assume) do not learn to make analogies by studying examples of analogies; you just make them. All the time.
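For flavor, the Copycat letter-string problems Mitchell draws on look like "if abc changes to abd, what does pqr change to?" The sketch below shows how such a problem gets posed to a language model as a few-shot prompt; the exact wording and examples are my illustrative guesses, not Mitchell's verbatim prompts.

```ts
// Copycat-style letter-string analogies posed as a few-shot text prompt.
// The wording and examples are illustrative, not Mitchell's exact prompts.
interface Demo {
  source: string;      // e.g. "abc"
  transformed: string; // e.g. "abd" (successorship applied to the last letter)
  target: string;      // the string to transform analogously
  answer: string;      // the analogous transformation of target
}

// The "training examples" shown before the test problem: exactly the
// crutch that, Mitchell argues, defeats zero-shot analogy-making.
const demos: Demo[] = [
  { source: "abc", transformed: "abd", target: "ijk", answer: "ijl" },
  { source: "abc", transformed: "abd", target: "mno", answer: "mnp" },
];

function buildPrompt(demos: Demo[], target: string): string {
  const shown = demos
    .map((d) => `If ${d.source} changes to ${d.transformed}, then ${d.target} changes to ${d.answer}.`)
    .join("\n");
  // The model is asked to complete the final, unanswered line.
  return `${shown}\nIf abc changes to abd, then ${target} changes to`;
}

// "xyz" is a classic Hofstadter curveball: z has no successor, so a
// coherent answer requires more than blind pattern continuation.
console.log(buildPrompt(demos, "xyz"));
```

A human reads that last line and answers zero-shot; GPT-3, per the quote above, tends to need the demonstrations stacked on top.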
The potential magic of GPT-3 (or any new tech) isn't that it's excellent at everything. It is that "it's able to do reasonably well" at many, many things. Lowering the barriers to accomplishing work and improving the initial quality are two constant drumbeats in technological innovation.
Zooming out and using GPT-3 as a placeholder for any new technology, it's a case study in a potentially disruptive step-function improvement. After all, these often look like toys in the beginning.
Graphic I love 🎨
Randall Munroe of XKCD may be the most underrated genius of this generation. His comics and books are delightfully on-point. As much as I believe in the power of open source and think that the fragmentation in modern software architecture is not inherently bad, that fragmentation definitely comes with tradeoffs.
As usual, Munroe's comics have a kernel of truth in them: in 2016, left-pad, a tiny (as in 11 lines of code) JavaScript package, was unpublished from npm (the go-to JS package manager), and its removal broke builds for a nontrivial portion of the internet.
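For a sense of scale, here is a from-memory TypeScript paraphrase of what left-pad did; it is not the original source, just an illustration of how little code the internet was leaning on.

```ts
// A from-memory TypeScript paraphrase of left-pad, the ~11-line npm
// package whose 2016 removal broke builds across the JS ecosystem.
// Not the original source; an illustration of how small it was.
function leftPad(str: string, len: number, ch: string = " "): string {
  let padded = str;
  // Prepend the pad character until the string reaches the target length.
  while (padded.length < len) {
    padded = ch + padded;
  }
  return padded;
}

console.log(leftPad("5", 3, "0")); // "005"
```

Thousands of packages, including major build tools, depended on it transitively: exactly the kind of load-bearing little block Munroe lampoons.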
Wikipedia rabbit hole 🐇
Spherical Cows. This is such a beautiful, hilarious, and widely applicable metaphor. I'm fond of trying to reduce ideas down to their component assumptions because then it's easier to sort those assumptions into categories like "I believe that," "I am not sure but could be convinced of that," and "nope, no way." If you find a spherical cow embedded in one of those assumptions, it's time to revisit.
Parting thought 🤔
Thinking back to your Econ 101 class, ceteris paribus is typically translated as "all other things equal," but maybe "assuming a spherical cow in a vacuum" is a plausible alternative?