Welcome to my profile! I've pinned some of my writings that received a lot of attention.
You can read my old article series about writing a self-hosting Scheme compiler here: https://rain-1.github.io/scheme
# Purpose
Bootstrap knowledge of LLMs as quickly as possible, with a bias/focus toward GPT.
Avoid being a link dump. Try to provide only valuable, well-tuned information.
# WannaCry|WannaDecrypt0r NSA-Cyberweapon-Powered Ransomware Worm
* **Virus Name**: WannaCrypt, WannaCry, WanaCrypt0r, WCrypt, WCRY
* **Vector**: All Windows versions before Windows 10 are vulnerable if not patched for MS17-010. It uses the EternalBlue (MS17-010) exploit to propagate.
* **Ransom**: between $300 and $600. There is code to 'rm' (delete) files in the virus. This seems to reset if the virus crashes.
> Could an LLM end up being the core part of a dangerous computer worm?
> How would we neutralize such a thing if this happened?
# Some virus and worm background
# Tutorial: Vector Addition
**Problem:** [Vector Addition](/problems/vector-addition)
In this post we will cover how to solve the simplest CUDA problem: adding two arrays. I'll explain the code step by step.
# How large are large language models? (2025)
This aims to be factual information about the size of large language models. None of this document was written by AI, and I do not include any information from leaks or rumors. The focus is on base models (the raw text-continuation engines, not 'helpful chatbot/assistants'). It is a view, spanning a few years ago to today, of one very small slice of the larger LLM story.
# History
# Does prompt injection matter to AutoGPT?
Executive summary: If you use AutoGPT, you need to be aware of prompt injection. It is a serious problem that can cause your AutoGPT agent to perform unexpected and unwanted tasks, and unfortunately no complete solution to it exists yet.
# Prompt injection can derail agents