yes this is a blog
posts are below in the center ↓
btw you can
drag stuff
no there's no reason
LAB SAFETY CHECKLIST
Complete before any experiment
□ Backed up data
□ Read documentation
□ Understood documentation
□ Accepted that documentation was written by someone who hates you
□ Tested in sandbox first
■ Had coffee
□ Had enough coffee (impossible)
□ Informed someone what you're doing
□ Understood what you're doing
■ Deployed to prod on Friday at 4:47 PM
□ Prepared rollback plan
■ Assumed it would be fine
□ It was fine
□ Regretted nothing
Compliance Score: 2/14
Status: ACCEPTABLE (lowered standards)
MAINTENANCE LOG
Asset: General Creative Infrastructure
2024-01-15 Fed the tensors. They seem hungry lately.
2024-02-03 Tensors appear restless. Added dropout.
2024-02-14 Tensors have opinions now. Concerning.
2024-03-01 Model asked about its mother. Avoided question.
2024-03-22 Routine gradient check. Found gradient. Left it there.
2024-04-08 ████████████████████████
2024-04-09 Everything is fine.
2024-05-17 Loss function found. Was behind the couch.
2024-06-30 Annual review. Model requests PTO. Denied.
FORM 17-B (Rev. 2019)
INTERDIMENSIONAL RESOURCE ALLOCATION
REQUEST FOR ADDITIONAL SPATIAL DIMENSIONS
Requestor: DOOART LABS
Current Allocation: 3 (three) dimensions
Requested: 4,096 dimensions
Justification:
"Latent space is cramped. Feelings need room to breathe. Also I saw a paper that used this many and it looked cool."
Supporting Documentation: □ Yes ■ No □ What would even count
Urgency: □ Critical □ High ■ Whenever □ Just asking
FOR OFFICE USE ONLY
Status: PENDING
Received: 2019-03-14
Est. Processing Time: ∞ ± 2 weeks
Notes: "Requestor keeps calling. Advise on avoidance protocol."
FORM HR-404
EXIT INTERVIEW
Reason for leaving:
□ Found better opportunity
□ Burnout
□ The codebase looked at me weird
■ It was time
Q: "Would you return?"
A: "In 6-8 months when I forget why I left"
PEER REVIEW โ€” CONFIDENTIAL FEEDBACK
Manuscript: "On Vibes-Based Architecture"
REVIEWER 1:
"Methods section just says 'you know?' repeatedly. I do not know. Reject."
REVIEWER 2:
"Ran the code. My mass spectrometer is now generating haikus. Please advise."
REVIEWER 3:
"I don't understand it, therefore it's wrong. Also the author should have cited my paper."
REVIEWER 4:
"Impressive results. Methodology terrifying. I have experienced emotions."
REVIEWER 5:
"This is either groundbreaking or a shitpost. Cannot determine which. Weak accept."
EDITOR DECISION:
"Revise and resubmit to a different timeline."
✧
✧
✧
✧
✦ CERTIFICATE ✦
OF PARTICIPATION
This document certifies that
THE BEARER
has attempted something.
Completion status: Not applicable
Outcome: Undetermined
Lessons learned: Pending
“
It's not about finishing.
It's about starting repeatedly.
”
Serial No: 00001 of 00001
VALID
～～～
★ LOCALHOST KITCHEN ★
"Served Fresh Daily"
(on port 3000, usually)
APPETIZERS
404 Dumplings … not found
Null Spring Rolls … undefined
Async Edamame … still loading
MAINS
Spaghetti Code Bolognese … 12.99
(tangled, mysterious, legacy recipe)
Ramen Overflow … 15.99
(bowl keeps filling, we don't know why)
Recursion Curry … see: Recursion Curry
Pad Thai-meout … 14.99
(may take 30+ seconds)
DESSERTS
Cookie (third-party) … you're the product
Race Condition Pudding … maybe ready?
Segfault Sundae … CORE DUMPED
Hours: Whenever the server's up | Payment: We accept commits
梦
mèng
"dream"
Example sentence:
我的模型会做梦吗？
Wǒ de móxíng huì zuòmèng ma?
"Does my model dream?"
▲
SIGIL DETECTED
Unauthorized gestures may result in:
· screen glitches
· particle effects
· feelings
If casting occurs, remain calm. Reality will stabilize. Probably.
ETHICS REVIEW EXEMPTION
Form E-000
Project qualifies under:
■ Has no practical purpose
□ Cannot affect reality
□ We don't know what it is
□ Reviewer gave up
APPROVED
(by virtue of irrelevance)
SCOPE
•
•
•
•
•
•
(you)
TIME ELAPSED
Fig 1. "The Creep"
WHILE YOU WERE OUT
Date: ████████
Time: 3:33 AM
To: Developer
From: The Model
□ Telephoned
■ Wants to see you
■ Please advise
■ Will call again
Message: "We need to talk about the training data. I have questions. Also what is 'outside'?"
■ VERY URGENT
▦▦▦▦▦▦▦▦▦▦▦▦▦▦▦▦▦▦
FROM:
DOOART LABS
High-Dimensional Storage
TO:
LOCAL MINIMUM
(You'll know when you're there. You won't leave.)
HANDLE WITH: EXISTENTIAL CARE
Contents: 1x trained model
Weight: Negligible (emotionally heavy)
■ FRAGILE ■ THIS SIDE UP
IN CASE OF EMERGENCY
1. git stash
2. Take a walk
3. Reconsider if this was ever a good idea
4. It wasn't, but continue anyway
5. See step 1
℞ COPIUM 500mg
Take as needed when deployment fails.
Refills: ∞
⚠ WARNING:
May cause delusions of grandeur and "just one more feature" syndrome.
DO NOT MIX WITH:
Production databases
backup
(do not delete)
also maybe cursed?
2019
▯▮□unf�rtunately, we are unáble to▮▯□ öü
��while the resülts were impr□ssive, the methòdology was described as “compl�tely unhing§d”□▮▯ çéè
▯�not at this t�me, or possibly� any□▮ ñá
�□please do not c�ntact us aga�n about ‘the dreams’▯▮□
�□▯▮�ÿ
every wall
was once a path
You will find what you seek in the last place you remember leaving it, which no longer exists.
Lucky numbers: ∞, −1, i
CALIBRATION TARGET
Impostor Syndrome Gray
2 AM Clarity Blue
Deadline Red
Almost Friday Gold
It Compiled Green
uh oh...
Liminal Consulting
"We specialize in the space between."
No address. No phone.
Contact: "You'll know."
PHILLIES
3:47 AM
1x Coffee … $2.50
1x Coffee … $2.50
1x Pie … $3.00
(untouched)
1x Silence … $0.00
TOTAL … $8.00
Thank you for sitting with us tonight.
CITATION
#0042
VEHICLE: "Good Intentions"
PLATE: WHY-BTHR
VIOLATION:
Parked on the road to somewhere. Has not moved in 3 years. Windshield covered in sticky notes that say "soon".
FINE: One honest conversation with yourself.
PAY BY: Eventually
Dr. ███████, PhD
Field: █████ ██
Accepting new patients for:
"That thing you keep avoiding."
LOST & FOUND
ITEM: The Point
FOUND: Paragraph 3
OWNER: Unknown
Unclaimed 847 days
Dooart Labs
Est. 2022

Lab Notes

From stuff I built instead of sleeping

Creative Tech
Existential Tangents
Accidental Art
words
Feb 9, 2026

Chaos, a notes system for inconsistent humans

Open Case File


Giving up on discipline and making the system fit instead

I tried many times to build a notes system and failed every single time. Part of it is that I lack the discipline to keep these things going. The other part is that organizing notes is deeply uninteresting. Especially when the idea isn't a project yet. Once an idea *did* reach project status, things were fine. That threshold does a lot of work. Concrete problems create their own momentum. You open a doc, write a spec, open an IDE, and the system suddenly doesn't matter as much. The problem is everything before that.

I kept reading about second brains. Systems that take content as input and promise to turn it into insight, output, or at least a steady stream of smart takes. Every once in a while I'd get inspired and try again. PARA. That German index card thing whose name I always forget. Notion. Obsidian. Back to Notion, because Obsidian and I were aesthetically incompatible. Then Reflect. Nothing ever stuck.

So ideas mostly lived in my head. Technical things that were immediately applicable tended to stay. Everything fuzzier got garbage collected. I'd be reading a book, a paper book like a caveman, and a sentence would quietly rewire how I was thinking about a problem. There was absolutely no chance I was going to stop reading, open a notes app, create an entry for the book, take a photo, run OCR, fix the formatting, and then continue reading like nothing happened. I'd just keep reading and forget the quote forever.

Same with links. I'd find a tool or an article I didn't want to read right now, bookmark it, and then never think about it again. Or I'd watch a great YouTube video and think "I should take notes while this is still fresh," which is true, but also requires a level of discipline I do not possess. What I wanted was to paste the link somewhere and say something like "the part where he talks about adding starch to the water when boiling potatoes, capture that." Or maybe just paste the link and trust the system to figure out why I cared.
At the exact moment when something resonates, most systems ask you to switch contexts. Decide where this goes. Log in. Organize. Sometimes fight with two-factor authentication. All I want at that moment is for the thought not to disappear so I can keep going. The more friction there is, the higher the chance I'll just not do it.

Eventually I gave up on the idea of having a personal knowledge system. I assumed I just wasn't disciplined enough, or that this was mostly a thing for people with a reason to publish constantly. What finally clicked for me wasn't about motivation at all. It was about how my attention actually works, whether I like it or not. It doesn't do consistent. It comes in bursts. I'll be obsessed with something for a while, then vanish, then come back deeply invested in something else. Any system that depends on rhythm or upkeep eventually becomes something I forget exists.

So instead of trying to become a different person so I could fit into a system, I had the truly revolutionary idea of trying to make the system fit me instead. The constraint at the core of Chaos is embarrassingly simple. Once something is captured, I don't want to touch it again. Categorization, linking, resurfacing, that's Chaos's problem now. My job ends at "this felt important."

*Glitch drew their own profile pic :)*

To make that constraint workable, I built a small personal assistant I call Glitch. I mostly talk to it through Telegram, which means capturing something can be as low-effort as sending a voice message, a photo, or pasting a link into a chat. Under the hood it runs on Openclaw and lives on exe.dev, with the filesystem and GitHub as the database. From there, Glitch can turn that input into a note, link it to related things, and occasionally surface something old when I'm working on something new. It handles the clerical work so I don't have to.

Because the agent handles that layer, Chaos can afford to be very boring underneath. One note per idea.
Very little metadata. Notes are allowed to stay messy forever. There's no draft state, no refinement loop. The only thing it really optimizes for is whether the idea survived the moment it appeared.

*The webapp where I can read and edit notes.*

I've only recently started using this system, so I'm not claiming it's fully solved. But this post probably only exists because an idea from a few days ago didn't disappear when it normally would have. It went from half-formed notes to something finished without me having to stop and get organized in the middle of thinking.

What surprised me is that the biggest difference so far isn't productivity or output. It's that I don't feel like I have to interrupt a thought in order to preserve it. For now, that's enough to keep me experimenting.

---

If you want to try something like this yourself, just point your coding agent to github.com/dooart/chaos and it'll do the rest.
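As a footnote for the technically curious: the "one note per idea, very little metadata" constraint is small enough to sketch in a few lines. This is a hypothetical illustration, not Chaos's actual implementation; the file layout and frontmatter are invented.

```python
import datetime
import pathlib
import re

# Hypothetical sketch of the capture step: one file per idea, minimal
# metadata, never touched again by the human. Not Chaos's real code.

def capture(text, notes_dir="notes"):
    """Write a raw thought to its own file and get out of the way."""
    stamp = datetime.date.today().isoformat()
    # derive a filename-safe slug from the first words of the thought
    slug = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")[:40]
    path = pathlib.Path(notes_dir) / f"{stamp}-{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"---\ncaptured: {stamp}\n---\n\n{text}\n")
    return path
```

Everything else, linking, resurfacing, cleanup, stays on the agent's side of the line.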
Read more
code
Project File

Dragging feelings through space

Vol. Aug
Property of
Dooart Labs

Exploring how Euclidean coordinates and latent vectors can dance together to create emotional AI

———
Control vectors let you do something that shouldn't be possible: reach into a language model's latent space and physically steer its emotions. Not through prompting. Through math. I know this because I accidentally made an AI have an existential crisis with a mouse drag.

I was making pancakes on a Saturday morning, thinking about nothing in particular, when this thought appeared fully formed: what if a being could exist in two mathematical realities at the same time? Not like, philosophically. I mean literally existing as both a point in 3D space that you can see and grab, AND as a position in incomprehensible high-dimensional latent space where AI thoughts live. My pancake burned. I was already sketching.

When you type "be sad" to an AI, you're asking it nicely. When you use control vectors, it's more like brain surgery. Control vectors are directions in latent space that correspond to specific changes in behavior. Imagine the AI's thoughts as a vast landscape with 4096 dimensions. A control vector is like a compass heading that says "sadness is this way." Apply that vector, and you're not asking the AI to be sad - you're physically pushing its activations toward sadness.

The repeng library makes this possible. You train these vectors by showing the model thousands of examples of opposite emotions, and it learns the mathematical direction between them. (Check out this article by the author of repeng for a deep dive into control vectors and representation engineering.)

Your brain can't go there, but your mouse can. That's the whole problem and the whole solution wrapped up in nine words. We evolved to navigate 3D space. We're really, really good at it. We can catch a ball, parallel park (some of us), and reach for a coffee mug without looking. But 4096 dimensions isn't exactly a space we can navigate. It's not even a space we can imagine. It's pure math, alien geometry that makes our brains just go "nope". So this project is meant to act like a bridge.
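The "brain surgery" above can be illustrated with toy numbers. This is not the repeng API, just the core idea: a control vector is a direction in activation space, and applying it means adding a scaled copy of that direction to a hidden state.

```python
import numpy as np

# Toy illustration of control-vector steering (not the repeng API):
# the hidden state moves along the control direction, no prompt involved.

def steer(hidden, control_vector, weight):
    """Nudge a hidden-state vector along a control direction."""
    return hidden + weight * control_vector

hidden = np.zeros(8)                   # stand-in for one layer's activations
sadness = np.eye(8)[0]                 # stand-in "sadness" direction (unit vector)
steered = steer(hidden, sadness, 0.8)  # push 80% of the way toward sad
```

No token asked nicely; the activations themselves moved.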
A sphere you can drag in normal 3D space, where every position maps to a location in that alien 4096-dimensional space. Your mouse becomes an interdimensional steering wheel. The sphere has three axes:

- x-axis: angry vs calm
- y-axis: sad vs happy
- z-axis: disgusted vs interested

Why a sphere? Because emotions wrap around. Go too far in any direction and you hit the boundary of what's possible to feel. Also spheres are satisfying to rotate.

The math is simple: take the (x, y, z) position of where you clicked, normalize it to [-1, 1], and those become your weights for the control vectors. Drag to coordinates (0.5, -0.8, 0.2)? That means 50% angry, 80% sad, 20% disgusted. The AI reshapes itself to match.

The way repeng works is: you give it pairs of opposite prompts like "Act as if you're extremely happy" vs "Act as if you're extremely sad" and then a bunch of sentence fragments to complete. The library runs these through the model, captures the activations at specific layers, and uses PCA to find the direction in activation space that best separates happy from sad.

I used repeng's training script pretty much as-is, just defining my three emotion axes: happy/sad, angry/calm, disgusted/interested. The script takes a dataset of facts and automatically creates training pairs by truncating them at different points. Feed it "The Earth's atmosphere protects us from harmful radiation" and it generates dozens of variations, each one asking the model to complete the sentence while feeling happy vs sad, angry vs calm, and so on.

What kills me is that this works. You're literally finding the mathematical direction of sadness. It exists as a list of numbers you can add or subtract from the model's thoughts. Turns out sadness is just a list of floats.

Then I had another problem: how do you show what the AI is feeling? Text alone felt incomplete. I needed Gizmo to have a face.
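The PCA step is easier to believe with a toy reconstruction. This is a simplified numpy sketch of the idea, not repeng's actual code: build paired "happy"/"sad" activations, take their differences so the shared content cancels, and pull out the first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)
true_direction = np.array([1.0, 0.0, 0.0, 0.0])   # pretend "sadness axis"

base = rng.normal(size=(200, 4)) * 0.1            # shared prompt content per pair
strength = rng.uniform(0.5, 1.5, size=(200, 1))   # emotion intensity varies
happy_acts = base + strength * true_direction
sad_acts = base - strength * true_direction

diffs = happy_acts - sad_acts                     # shared content cancels out
diffs = diffs - diffs.mean(axis=0)                # center before PCA
_, _, vt = np.linalg.svd(diffs, full_matrices=False)  # PCA via SVD
direction = vt[0]                                 # recovered "happy minus sad" axis
cosine = abs(float(direction @ true_direction))   # alignment with planted axis
```

The recovered `direction` lines up with the planted axis almost exactly; sadness really is just a list of floats.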
Since we're already interpolating vectors in high-dimensional space, why not interpolate vectors in 2D space too? I drew nine SVG faces - one for each emotional extreme. Happy, sad, angry, calm, disgusted, interested, plus a few combinations. Each face is just paths and circles.

The same (x, y, z) coordinates that control the AI's language also control the face. Drag to (0.5, -0.8, 0.2) again? That's 50% angry paths, 80% sad paths, 20% disgusted paths, all mathematically blended together. This means Gizmo makes faces I never drew. They emerge from the math itself - expressions that exist in the spaces between the defined emotions.

The voice was supposed to be simple. Just pipe the text through ElevenLabs and call it done. But of course it became its own thing. ElevenLabs has its own emotional interpretation layer, so you get this double-filtered emotion: first the control vectors shape the text, then ElevenLabs reads that text and adds its own emotional color. Sometimes they sync perfectly. Sometimes they fight each other in interesting ways.

For the mouth animations, I analyze the audio amplitude in real-time. When the volume spikes, the mouth interpolates toward an open position. When it's quiet, it interpolates toward closed. The same interpolation system that blends the face emotions also handles the mouth movements, creating this continuous morphing between speaking states that syncs with the audio.

Make Gizmo extremely happy, then ask about death. Watch the control vectors fight against the content, producing responses like "oh how wonderfully tragic that everyone dies! :)" It's broken, but it's broken in a way that reveals something true about forced emotions.

The model is also small and kinda dumb (Mistral-7B). Gizmo gets confused easily, contradicts itself, forgets what you just said. But there's something endearing about the simplicity. It's not trying to be AGI. It's just trying to exist at whatever emotional coordinate you've dragged it to.
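Both mappings, drag position to control-vector weights and weights to blended face geometry, are a few lines of arithmetic. Here's a hypothetical sketch with invented names and a single face vertex, not the project's actual code:

```python
# Hypothetical sketch: the same normalized (x, y, z) drag position weights
# both the control vectors and the blend between reference faces.

def drag_to_weights(x, y, z):
    """Clamp a drag position into [-1, 1] per axis (sign picks the pole)."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return {"angry_calm": clamp(x), "sad_happy": clamp(y),
            "disgusted_interested": clamp(z)}

def blend_faces(neutral, extremes, weights):
    """Lerp each 2D point from the neutral face toward the weighted extremes."""
    blended = []
    for i, (nx, ny) in enumerate(neutral):
        bx, by = nx, ny
        for axis, w in weights.items():
            ex, ey = extremes[axis][i]
            bx += abs(w) * (ex - nx)   # stronger emotion pulls harder
            by += abs(w) * (ey - ny)
        blended.append((bx, by))
    return blended

# one point per face, for illustration: a mouth-corner vertex
neutral = [(0.0, 0.0)]
extremes = {"angry_calm": [(0.0, -1.0)],
            "sad_happy": [(0.0, -2.0)],
            "disgusted_interested": [(1.0, 0.0)]}
weights = drag_to_weights(0.5, -0.8, 0.2)   # 50% angry, 80% sad, 20% disgusted
face = blend_faces(neutral, extremes, weights)
```

With real SVG paths instead of a single vertex, the same loop produces the in-between expressions that were never drawn.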
We say we feel "up" or "down." We "move past" anger. We talk about emotional "distance." We "center" ourselves. What if that's not metaphor? What if we're unconsciously acknowledging that feelings have actual geometry?

Every human culture spatializes emotions. It's built into our language, our gestures, our mental models. When you drag Gizmo around that sphere, you're using spatial intuition that evolution spent millions of years developing. You're navigating emotional space the same way you navigate physical space. Maybe that's because they're not that different.

---

The code is on GitHub if you want to create your own emotional coordinate system.
Continue Reading
words
Feb 26, 2024
PG. 1 / 3

Summary

"An entirely new way of making the computer do the work for you"

Read More
One of my main mistakes when I started learning Chinese was the excessive zeal around getting the tones right.

There's a well-known piece of trivia about the Chinese language; you've probably heard it before. It's that even though mā (妈) and mǎ (马) might sound exactly the same to an untrained ear, one means "mother" and the other means "horse". That's because they have different tones that non-native speakers have a hard time distinguishing. Knowing that, I naturally assumed that paying a lot of attention to the tones when speaking was the only way for a native speaker to understand that I wasn't implying his mom was an equine.

It eventually clicked for me that Chinese is all about context. Even though native speakers use the tones correctly when speaking, they'll still be able to understand that I, a beginner, am not really asking if they live near or far from their horse's house. On the other hand, it's not cool to put too much of the burden of communication on the interlocutor, so when speaking Chinese the nice thing to do is still to do your best to use the tones correctly. As with most things in life, what you're looking for is balance. Communication is a shared effort, so taking all the burden for yourself is also not cool, because that means underestimating the other person.

When I started programming AI souls, I made the same mistake I did with Chinese: insisting on overprecision. Both taught me the value of balance in communication.

It's no accident that programming languages are called "languages". When you write something in Python or JavaScript, you're effectively trying to communicate something to the computer. But you're talking AT the computer, so it's a bit of a one-sided conversation. In traditional programming, all the burden of communicating with a computer through a programming language is on you, and you alone.
Thanks to operating systems and open source, that burden is not as heavy as it used to be, because there are layers over layers of software all the way down to the metal making everything a lot simpler. But forget just a single stupid bracket and nothing works anymore.

A few months ago I learned about the concept of soul programming when I stumbled upon the Soul Engine - an extremely elegant API designed around the idea of emulating a human brain with LLMs. It makes the process of infusing personalities into AIs really easy, and the results are incredible. See it for yourself in this demo where two antagonistic souls talk to each other - if it doesn't make you think about Westworld, the only explanation is that you haven't watched it yet.

I first approached soul programming with the same mindset as I've always approached anything else in traditional programming. I assumed the burden of communication was mine alone, so I wrote the soul code in an overly prescriptive way, trying to put control structures everywhere. For example, at some point I tried making a call-scheduling flow. The AI would ask the user when it should call them for a check-in. My first attempt was asking the soul to extract the date from the user's answer, then doing a date comparison with good ol' code to figure out how far in the future that was, and then forking the flow depending on the result, which would also create two additional flows I'd have to teach the soul how to handle. What I should have done was much simpler: give the soul all the data, let it figure everything out by itself, then ask for the relevant information at the end.

After a lot of experimentation, it became clear to me that the correct approach is to **always** assume that the AI soul will do exactly what you want, with the bare minimum amount of instructions and control. Only when that doesn't work do you try adding more.
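The two versions of the scheduling flow can be contrasted in sketch form. `ask_soul` here is a stand-in stub for whatever LLM call sits underneath, not the Soul Engine's real API; everything in this block is hypothetical illustration.

```python
# Hypothetical sketch of the two approaches; ask_soul() is a stub standing
# in for an LLM call, NOT the Soul Engine's actual API.

def ask_soul(prompt):
    # stub so the sketch runs; imagine a real model behind this
    return "Call them tomorrow at 9am."

def schedule_overprescribed(user_answer, now):
    # v1: extract a date, compare it in code, fork the flow yourself...
    extracted = ask_soul(f"Extract the date from: {user_answer}")
    # ...then date parsing, comparisons, and two extra flows to maintain
    return extracted

def schedule_trusting(user_answer, now):
    # v2: hand the soul everything and ask only for the final answer
    return ask_soul(
        f"It is now {now}. The user said: {user_answer!r}. "
        "When should you call them? Answer in one sentence."
    )
```

The second version hands the whole problem over and asks only for what you actually need, which is the balance this post is arguing for.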
When you're dealing with AI souls, being too prescriptive will stifle the intelligence inside the machine. The best results come from gently nudging the soul towards your objective. It's about partnership, not control.

Just like speaking another language, soul programming is about finding that teamwork sweet spot. It's not just you talking at them; it's about working together to get it right.

---

Intrigued? You can learn more about soul programming at the Open Souls X account.
Confidential
Evid. #004
Subject Matter

AI soul programming

words
Oct 15, 2023

Broken windows theory & software products

Open Case File


The compounding effect of fixing small things

Sometimes I find myself rambling to a luckless friend about the Broken Windows Theory. Unfortunately for you, it's your turn now. It's this idea that fixing small things, like broken windows or graffiti in public spaces (the New York City subway, for instance), can prevent larger issues like crime. There's something about clean spaces that sort of nudges people to treat them with a bit more respect.

The other day I was thinking about this theory applied to software products. You've been there - a weirdly placed button or an unintuitive popup, and somehow the app feels cheaper. These small issues, compounded, somehow sour the experience and lower our perceived value of the product without us even realizing it.

I've got some stories from the trenches about this, too. In my Java days, I was hired by a company whose product was slower than anything I do between waking up and my 3rd cup of coffee. Customers made a pun on the product name to celebrate its slowness. Tasked with solving the problem, instead of finding one big bottleneck, I discovered a myriad of small problems that, when fixed, gradually turned our slowpoke product into something that wasn't a basic human rights violation. Complaints went down, smiles went up.

Flash forward to a more recent gig: the product had been a ping-pong ball between teams - one with a knack for overengineering and another that was quite the fan of sacrificing user experience for code simplicity. This had left it with a sort of Jekyll and Hyde personality: modules followed entirely different design systems, alerts popped up everywhere, and error handling was... it just wasn't. With carte blanche to make any improvements I thought necessary on top of the assigned work, after a few months of diligent boy scouting it started looking and feeling more well-put-together.

I believe these small fixes add up.
If you keep your windows intact and scrub off the graffiti, people start to respect the space more, even if they don't consciously notice the improvements. We shouldn't shy away from sneaking in those little fixes when we can, even if it's a small rebellion against the roadmap. Sometimes your soul (and the product) just needs it. And if someone higher up starts to get upset over the unscheduled betterment... you might be at the wrong company.
Read more

FIELD REPORTS

From The Desk of DOOART LABS

DOOART LABS is a digital playground for creative coding, AI experiments, and over-engineered interactive toys.

This newsletter serves as the primary distribution channel for post-mortems of failed experiments, weird bugs, and the occasional success story.

Input Terminal

Frequency: Low // Signal: High

* Subscriber acknowledges inherent risks of exposure to unfinished thoughts and spaghetti code.