Randomly Learning

Nikos Katirtzis
2 min read · Sep 6, 2024


The logo behind https://github.com/nikos912000/randomly-learning....

Advancements in Generative AI and the pace of change in tech got me thinking about the importance of lifelong learning and the ways we go about it.

With more and more content being generated and the constant stream of social media feeds, our ability to think critically, express ourselves in writing, and verify information is being challenged. We have seen this before. When calculators became widespread, we gradually moved away from doing calculations manually. With the advent of digital navigation technologies, we find it difficult to move from point A to point B without relying on our phones. When the first social networks emerged, we encountered an overload of information and fake content. Now, whenever we have a question, we turn to chatbots and are happy to get an answer immediately, even if it is wrong, reinforcing our tendency toward instant gratification. This highlights the importance of critical thinking, a point emphasised by Mike Loukides.

Similarly, when it comes to personal learning, there is an abundance of content online, making learning more accessible than ever. However, my observation is that the average quality of learning material often falls short, which makes verification all the more important.

Recently, I experimented with an idea that helped me learn fast while still gaining deep and accurate knowledge. It builds on the concepts of random walks, interleaving, and the generation effect:

  • A random walk is “the process by which randomly-moving objects wander away from where they started” (a nice visualisation is shown here, and a minimal simulation sketch follows this list). For instance, new knowledge may lead to different career paths.
  • Interleaving is the study of different concepts by frequently alternating between them. I always enjoyed combining topics and “transfer learning” across different domains.
  • On the other hand, just reading is not enough, which is where the generation effect helps: we retain material better when we actively produce it ourselves than when we merely read it.
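To make the random-walk idea concrete, here is a minimal sketch (not part of the project itself) of a one-dimensional random walk: at each step we move left or right with equal probability and see how far we drift from where we started. The function name and step counts are just illustrative choices.

```python
import random

def random_walk(steps: int) -> int:
    """Simulate a 1-D random walk: start at 0, step +1 or -1 with equal probability."""
    position = 0
    for _ in range(steps):
        position += random.choice([-1, 1])
    return position

if __name__ == "__main__":
    # After many steps the walker typically ends up some distance from the origin,
    # much like new knowledge can drift learning (or a career) into unexpected places.
    final_positions = [random_walk(1000) for _ in range(5)]
    print(final_positions)
```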

Based on the above concepts, randomly learning was created. The name is inspired by the social media nickname of my supervisor at the University of Edinburgh, Charles Sutton (randomly walking). The structure comes from my colleague at work, Sundeep Bhatia, who keeps reminding us to stay curious, embrace constant learning, and always verify.

The way the project works is as follows:

  1. Whenever we read about something we find interesting and want to learn, we spend some time researching the topic, summarising it (no chatbots allowed), and verifying it.
  2. A git branch is created per topic (see the sketch after this list).
  3. Once the result is satisfactory, the feature branch is merged into the main branch.
  4. The git repository follows a hierarchy similar to Wikipedia’s. This forces us to think of a clean structure.
  5. Text is accompanied by references, as verification is important.
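As an illustration, here is a hypothetical sketch of how the branch-per-topic flow could be scripted. The `start_topic` helper, the branch naming scheme, and the category/topic hierarchy below are assumptions for the example, not the repository’s actual layout or tooling.

```python
import subprocess
from pathlib import Path

def start_topic(category: str, topic: str) -> None:
    """Create a feature branch and a skeleton note file for a new topic.

    `category` and `topic` (e.g. "machine-learning", "transfer-learning")
    are placeholders; the real repository hierarchy may differ.
    """
    branch = f"topic/{topic}"
    subprocess.run(["git", "checkout", "-b", branch], check=True)

    # One directory level per category keeps the hierarchy clean,
    # similar to how Wikipedia nests articles under broader subjects.
    note = Path(category) / f"{topic}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    note.write_text(f"# {topic}\n\n## Summary\n\n## References\n")

    subprocess.run(["git", "add", str(note)], check=True)
    subprocess.run(["git", "commit", "-m", f"Add notes skeleton for {topic}"], check=True)

if __name__ == "__main__":
    start_topic("machine-learning", "transfer-learning")
    # Once the summary is written, verified, and referenced, the feature branch
    # is merged into main, e.g. via a pull request.
```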

Despite its caveats, Generative AI is an amazing technology, and I use it whenever it makes sense. The logo of the project was generated using Claude 3.5 Sonnet.

Happy learning!
