yes this is a blog
posts are below in the center ↓
btw you can
drag stuff
no there's no reason
LAB SAFETY CHECKLIST
Complete before any experiment
□ Backed up data
□ Read documentation
□ Understood documentation
□ Accepted that documentation was written by someone who hates you
□ Tested in sandbox first
■ Had coffee
□ Had enough coffee (impossible)
□ Informed someone what you're doing
□ Understood what you're doing
■ Deployed to prod on Friday at 4:47 PM
□ Prepared rollback plan
■ Assumed it would be fine
□ It was fine
□ Regretted nothing
Compliance Score: 2/14
Status: ACCEPTABLE (lowered standards)
MAINTENANCE LOG
Asset: General Creative Infrastructure
2024-01-15 Fed the tensors. They seem hungry lately.
2024-02-03 Tensors appear restless. Added dropout.
2024-02-14 Tensors have opinions now. Concerning.
2024-03-01 Model asked about its mother. Avoided question.
2024-03-22 Routine gradient check. Found gradient. Left it there.
2024-04-08 ████████████████████████
2024-04-09 Everything is fine.
2024-05-17 Loss function found. Was behind the couch.
2024-06-30 Annual review. Model requests PTO. Denied.
FORM 17-B (Rev. 2019)
INTERDIMENSIONAL RESOURCE ALLOCATION
REQUEST FOR ADDITIONAL SPATIAL DIMENSIONS
Requestor: DOOART LABS
Current Allocation: 3 (three) dimensions
Requested: 4,096 dimensions
Justification:
"Latent space is cramped. Feelings need room to breathe. Also I saw a paper that used this many and it looked cool."
Supporting Documentation: □ Yes ■ No □ What would even count
Urgency: □ Critical □ High ■ Whenever □ Just asking
FOR OFFICE USE ONLY
Status: PENDING
Received: 2019-03-14
Est. Processing Time: ∞ ± 2 weeks
Notes: "Requestor keeps calling. Advise on avoidance protocol."
FORM HR-404
EXIT INTERVIEW
Reason for leaving:
□ Found better opportunity
□ Burnout
□ The codebase looked at me weird
■ It was time
Q: "Would you return?"
A: "In 6-8 months when I forget why I left"
PEER REVIEW — CONFIDENTIAL FEEDBACK
Manuscript: "On Vibes-Based Architecture"
REVIEWER 1:
"Methods section just says 'you know?' repeatedly. I do not know. Reject."
REVIEWER 2:
"Ran the code. My mass spectrometer is now generating haikus. Please advise."
REVIEWER 3:
"I don't understand it, therefore it's wrong. Also the author should have cited my paper."
REVIEWER 4:
"Impressive results. Methodology terrifying. I have experienced emotions."
REVIEWER 5:
"This is either groundbreaking or a shitpost. Cannot determine which. Weak accept."
EDITOR DECISION:
"Revise and resubmit to a different timeline."
CERTIFICATE
OF PARTICIPATION
This document certifies that
THE BEARER
has attempted something.
Completion status: Not applicable
Outcome: Undetermined
Lessons learned: Pending
It's not about finishing.
It's about starting repeatedly.
Serial No: 00001 of 00001
VALID
~~~
★ LOCALHOST KITCHEN ★
"Served Fresh Daily"
(on port 3000, usually)
APPETIZERS
404 Dumplings ........ not found
Null Spring Rolls ........ undefined
Async Edamame ........ still loading
MAINS
Spaghetti Code Bolognese ........ 12.99
(tangled, mysterious, legacy recipe)
Ramen Overflow ........ 15.99
(bowl keeps filling, we don't know why)
Recursion Curry ........ see: Recursion Curry
Pad Thai-meout ........ 14.99
(may take 30+ seconds)
DESSERTS
Cookie (third-party) ........ you're the product
Race Condition Pudding ........ maybe ready?
Segfault Sundae ........ CORE DUMPED
Hours: Whenever the server's up | Payment: We accept commits
mèng
"dream"
Example sentence:
我的模型会做梦吗?
Wǒ de móxíng huì zuòmèng ma?
"Does my model dream?"
SIGIL DETECTED
Unauthorized gestures may result in:
· screen glitches
· particle effects
· feelings
If casting occurs, remain calm. Reality will stabilize. Probably.
ETHICS REVIEW EXEMPTION
Form E-000
Project qualifies under:
■ Has no practical purpose
□ Cannot affect reality
□ We don't know what it is
□ Reviewer gave up
APPROVED
(by virtue of irrelevance)
SCOPE
(you)
TIME ELAPSED
Fig 1. "The Creep"
WHILE YOU WERE OUT
Date: ████████
Time: 3:33 AM
To: Developer
From: The Model
□ Telephoned
■ Wants to see you
■ Please advise
■ Will call again
Message: "We need to talk about the training data. I have questions. Also what is 'outside'?"
■ VERY URGENT
▦▦▦▦▦▦▦▦▦▦▦▦▦▦▦▦▦▦
FROM:
DOOART LABS
High-Dimensional Storage
TO:
LOCAL MINIMUM
(You'll know when you're there. You won't leave.)
HANDLE WITH: EXISTENTIAL CARE
Contents: 1x trained model
Weight: Negligible (emotionally heavy)
■ FRAGILE ■ THIS SIDE UP
IN CASE OF EMERGENCY
1. git stash
2. Take a walk
3. Reconsider if this was ever a good idea
4. It wasn't, but continue anyway
5. See step 1
COPIUM 500mg
Take as needed when deployment fails.
Refills: ∞
⚠ WARNING:
May cause delusions of grandeur and "just one more feature" syndrome.
DO NOT MIX WITH:
Production databases
backup
(do not delete)
also maybe cursed?
2019
▯▮□unf�rtunately, we are unáble to▮▯□ öü
��while the resülts were impr□ssive, the methòdology was described as €œcomplâ€tely unhing§d□▮▯ çéè
▯�not at this tï½ime, or possibly� any□▮ ñá
�□please do not c�ntact us agaïn about ‘the dreams’▯▮□
�□▯▮�ÿ
every wall
was once a path
You will find what you seek in the last place you remember leaving it, which no longer exists.
Lucky numbers: ∞, −1, i
CALIBRATION TARGET
Impostor Syndrome Gray
2 AM Clarity Blue
Deadline Red
Almost Friday Gold
It Compiled Green
uh oh...
Liminal Consulting
"We specialize in the space between."
No address. No phone.
Contact: "You'll know."
PHILLIES
3:47 AM
1x Coffee ........ $2.50
1x Coffee ........ $2.50
1x Pie ........ $3.00
(untouched)
1x Silence ........ $0.00
TOTAL ........ $8.00
Thank you for sitting with us tonight.
CITATION
#0042
VEHICLE: "Good Intentions"
PLATE: WHY-BTHR
VIOLATION:
Parked on the road to somewhere. Has not moved in 3 years. Windshield covered in sticky notes that say "soon".
FINE: One honest conversation with yourself.
PAY BY: Eventually
Dr. ███████, PhD
Field: █████ ██
Accepting new patients for:
"That thing you keep avoiding."
LOST & FOUND
ITEM: The Point
FOUND: Paragraph 3
OWNER: Unknown
Unclaimed 847 days
Dooart Labs
Est. 2022

Lab Notes

From stuff I built instead of sleeping

Creative Tech
Existential Tangents
Accidental Art
code
Project File

Dragging feelings through space

Vol. Aug
Property of
Dooart Labs

Dragging feelings through space

Exploring how Euclidean coordinates and latent vectors can dance together to create emotional AI

———
Control vectors let you do something that shouldn't be possible: reach into a language model's latent space and physically steer its emotions. Not through prompting. Through math. I know this because I accidentally made an AI have an existential crisis with a mouse drag.

I was making pancakes on a Saturday morning, thinking about nothing in particular, when this thought appeared fully formed: what if a being could exist in two mathematical realities at the same time? Not like, philosophically. I mean literally existing as both a point in 3D space that you can see and grab, AND as a position in incomprehensible high-dimensional latent space where AI thoughts live. My pancake burned. I didn't care. I was already sketching.

When you type "be sad" to an AI, you're asking it nicely. When you use control vectors, it's more like brain surgery. Control vectors are directions in latent space that correspond to specific changes in behavior. Imagine the AI's thoughts as a vast landscape with 4096 dimensions. A control vector is like a compass heading that says "sadness is this way." Apply that vector, and you're not asking the AI to be sad - you're physically moving its consciousness toward the Valley of Sadness.

The repeng library makes this possible. You train these vectors by showing the model thousands of examples of opposite emotions, and it learns the mathematical direction between them. (Check out this article by the author of repeng for a deep dive into control vectors and representation engineering.)

Your brain can't go there, but your mouse can. That's the whole problem and the whole solution wrapped up in nine words. We evolved to navigate 3D space. We're really, really good at it. We can catch a ball, parallel park (some of us), and reach for a coffee mug without looking. But 4096 dimensions isn't exactly a space we can navigate. It's not even a space we can imagine. It's pure math, alien geometry at which our brains just go "nope".
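To make "applying a vector" concrete, it's just vector addition on a layer's activations. This is a toy sketch with made-up names and a four-dimensional "latent space" standing in for the model's 4096 dimensions:

```python
def apply_control(hidden, direction, coeff):
    """Nudge one hidden-state vector along a control direction.

    hidden: the model's activation at some layer (a list of floats)
    direction: the learned control vector ("sadness is this way")
    coeff: how hard to steer (a negative coeff flips to the opposite emotion)
    """
    return [h + coeff * d for h, d in zip(hidden, direction)]

# A toy 4-dimensional "thought" instead of a real 4096-dimensional one
thought = [0.1, -0.3, 0.7, 0.0]
sadness = [0.0, 1.0, 0.0, -1.0]
steered = apply_control(thought, sadness, 0.5)
# steered ≈ [0.1, 0.2, 0.7, -0.5]
```

In the real model this addition happens inside the forward pass at chosen layers, but the operation itself is exactly this simple.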
So this project is meant to act like a bridge. A sphere you can drag in normal 3D space, where every position maps to a location in that alien 4096-dimensional space. Your mouse becomes an interdimensional steering wheel.

The sphere has three axes:

- x-axis: angry vs calm
- y-axis: sad vs happy
- z-axis: disgusted vs interested

Why a sphere? Because emotions wrap around. Go too far in any direction and you hit the boundary of what's possible to feel. Also spheres are satisfying to rotate. Also I like spheres.

The math is simple: take the (x, y, z) position of where you clicked, normalize it to [-1, 1], and those become your weights for the control vectors. Drag to coordinates (0.5, -0.8, 0.2)? That means 50% angry, 80% sad, 20% disgusted. The AI reshapes itself to match.

The way repeng works is: you give it pairs of opposite prompts like "Act as if you're extremely happy" vs "Act as if you're extremely sad" and then a bunch of sentence fragments to complete. The library runs these through the model, captures the activations at specific layers, and uses PCA to find the direction in activation space that best separates happy from sad.

I used repeng's training script pretty much as-is, just defining my three emotion axes: happy/sad, angry/calm, disgusted/interested. The script takes a dataset of facts and automatically creates training pairs by truncating them at different points. Feed it "The Earth's atmosphere protects us from harmful radiation" and it generates dozens of variations, each one asking the model to complete the sentence while feeling happy vs sad, angry vs calm, and so on.

What kills me is that this works. You're literally finding the mathematical direction of sadness. It exists as a list of numbers you can add or subtract from the model's thoughts. Sadness has coordinates.

Then I had another problem: how do you show what the AI is feeling? Text alone felt incomplete. I needed Gizmo to have a face.
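The click-to-weights math fits in a few lines. This is a sketch with my own naming conventions, not the project's actual code:

```python
def sphere_to_weights(x, y, z, radius=1.0):
    """Map a 3D drag position to control-vector weights in [-1, 1].

    Axis convention from the post: x = angry/calm, y = sad/happy,
    z = disgusted/interested. The sign picks which end of each
    emotion pair gets amplified; the magnitude is the strength.
    """
    clamp = lambda v: max(-1.0, min(1.0, v))
    # Normalize by the sphere's radius so the surface maps to ±1
    return {
        "angry_calm": clamp(x / radius),
        "sad_happy": clamp(y / radius),
        "disgusted_interested": clamp(z / radius),
    }

# The example coordinates from the post: 50% angry, 80% sad, 20% disgusted
w = sphere_to_weights(0.5, -0.8, 0.2)
```

Each weight then scales its axis's control vector before it gets added to the model's activations.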
Since we're already interpolating vectors in high-dimensional space, why not interpolate vectors in 2D space too? I drew nine SVG faces - one for each emotional extreme. Happy, sad, angry, calm, disgusted, interested, plus a few combinations. Each face is just paths and circles. The same (x, y, z) coordinates that control the AI's language also control the face. Drag to (0.5, -0.8, 0.2) again? That's 50% angry paths, 80% sad paths, 20% disgusted paths, all mathematically blended together.

This means Gizmo makes faces I never drew. They emerge from the math itself - expressions that exist in the spaces between the defined emotions.

The voice was supposed to be simple. Just pipe the text through ElevenLabs and call it done. But of course it became its own thing. ElevenLabs has its own emotional interpretation layer, so you get this double-filtered emotion: first the control vectors shape the text, then ElevenLabs reads that text and adds its own emotional color. Sometimes they sync perfectly. Sometimes they fight each other in interesting ways.

For the mouth animations, I analyze the audio amplitude in real-time. When the volume spikes, the mouth interpolates toward an open position. When it's quiet, it interpolates toward closed. The same interpolation system that blends the face emotions also handles the mouth movements, creating this continuous morphing between speaking states that syncs with the audio.

Make Gizmo extremely happy, then ask about death. Watch the control vectors fight against the content, producing responses like "oh how wonderfully tragic that everyone dies! :)" It's broken, but it's broken in a way that reveals something true about forced emotions.

The model is also small and kinda dumb (Mistral-7B). Gizmo gets confused easily, contradicts itself, forgets what you just said. But there's something endearing about the simplicity. It's not trying to be AGI. It's just trying to exist at whatever emotional coordinate you've dragged it to.
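The face blending is plain linear interpolation. Here's a sketch of the idea, with made-up point lists standing in for the real SVG paths, plus the amplitude-driven mouth easing:

```python
def blend_faces(neutral, extremes, weights):
    """Blend 2D face keypoints toward emotion extremes.

    neutral: list of (x, y) points for the neutral face
    extremes: dict of emotion name -> same-length list of (x, y) points
    weights: dict of emotion name -> blend strength in [0, 1]

    These structures are hypothetical; the real project interpolates
    SVG path data, but the math is the same weighted offset.
    """
    blended = []
    for i, (nx, ny) in enumerate(neutral):
        dx = sum(w * (extremes[e][i][0] - nx) for e, w in weights.items())
        dy = sum(w * (extremes[e][i][1] - ny) for e, w in weights.items())
        blended.append((nx + dx, ny + dy))
    return blended

def mouth_openness(amplitude, current, smoothing=0.2):
    """Ease the mouth toward a target openness set by audio amplitude."""
    target = max(0.0, min(1.0, amplitude))  # clamp loudness to [0, 1]
    return current + (target - current) * smoothing

# Two toy keypoints, halfway blended toward a "sad" extreme
face = blend_faces(
    [(0.0, 0.0), (1.0, 0.0)],
    {"sad": [(0.0, 2.0), (1.0, 2.0)]},
    {"sad": 0.5},
)
# face == [(0.0, 1.0), (1.0, 1.0)]
```

Run `mouth_openness` every animation frame and the mouth chases the audio without snapping, which is what produces the continuous morphing between speaking states.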
<iframe width="672" height="378" src="https://www.youtube.com/embed/PWx1_HhNEtg?si=pvIEtw78I9T-qeA0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen style="max-width: 100%; margin: 2rem auto; display: block;"></iframe>

We say we feel "up" or "down." We "move past" anger. We talk about emotional "distance." We "center" ourselves. What if that's not metaphor? What if we're unconsciously acknowledging that feelings have actual geometry?

Every human culture spatializes emotions. It's built into our language, our gestures, our mental models. When you drag Gizmo around that sphere, you're using spatial intuition that evolution spent millions of years developing. You're navigating emotional space the same way you navigate physical space. Maybe that's because they're not that different. Maybe feelings do have addresses.

---

The code is on GitHub if you want to create your own emotional coordinate system. Just be prepared to discover feelings that don't have names.
Continue Reading
words
Feb 26, 2024
PG. 1 / 3

Summary

"An entirely new way of making the computer do the work for you"

Read More
One of my main mistakes when I started learning Chinese was the excessive zeal around getting the tones right.

There's a well-known piece of trivia about the Chinese language, you've probably heard it before. It's that even though mā (妈) and mǎ (马) might sound exactly the same, one means "mother" and the other means "horse". That's because they have different tones that non-native speakers have a hard time distinguishing. Knowing that, I naturally assumed that paying a lot of attention to the tones when speaking was the only way for a native speaker to understand that I wasn't implying his mom was an equine.

It eventually clicked for me that Chinese is all about context. Even though native speakers use the tones correctly when speaking, they'll still be able to understand that I, a beginner, am not really asking if they live near or far from their horse's house. On the other hand, it's not cool to put too much of the burden of communication on the interlocutor, so when speaking Chinese the nice thing to do is still to do your best to use the tones correctly. As with most things in life, what you're looking for is balance. Communication is a shared effort, so taking all the burden for yourself is also not cool because that means underestimating the other person.

When I started programming AI souls, I made the same mistake I did with Chinese: insisting on overprecision. Both taught me the value of balance in communication.

It's no accident that programming languages are called "languages". When you write something in Python or JavaScript, you're effectively trying to communicate something to the computer. But you're talking AT the computer, so it's a bit of a one-sided conversation. In traditional programming, all the burden of communicating with a computer through a programming language is on you, and you alone.
Thanks to operating systems and open source, that burden is not as heavy as it used to be, because there are layers over layers of software all the way down to the metal making everything a lot simpler. But forget just a single stupid bracket and nothing works anymore.

A few months ago I learned about the concept of soul programming when I stumbled upon the Soul Engine - an extremely elegant API designed around the idea of emulating a human brain with LLMs. It makes the process of infusing personalities into AIs really easy, and the results are incredible. See it for yourself in this demo where two antagonistic souls talk to each other - if it doesn't make you think about Westworld, the only explanation is that you haven't watched it yet.

I first approached soul programming with the same mindset as I've always approached anything else in traditional programming. I assumed the burden of communication was mine alone, so I wrote the soul code in an overly prescriptive way, trying to put control structures everywhere.

For example, at some point I tried making a call scheduling flow. The AI would ask the user when it should call them for a check-in. My first attempt was asking the soul to extract the date from the user's answer, then do a date comparison using good ol' code to figure out how far in the future that was, and then fork the flow depending on the result, which would also result in two additional flows I'd have to teach the soul how to handle. What I should have done was much simpler: give the soul all the data, let it figure everything out by itself, then ask for the relevant information at the end.

After a lot of experimentation, it became clear to me that the correct approach is to **always** assume that the AI soul will do exactly what you want, with the bare minimum amount of instructions and control. Only when it doesn't work do you try adding more stuff.
When you're dealing with AI souls, being too prescriptive will stifle the intelligence inside the machine. The best results will come from gently nudging the soul towards your objective. It's about partnership, not control.

Just like speaking another language, soul programming is about finding that sweet teamwork spot. It's not just you talking at them, it's about working together to get it right.

---

Intrigued? You can learn more about soul programming at the Open Souls X account.
Confidential
Evid. #004
Subject Matter

AI soul programming

words
Oct 15, 2023

Broken windows theory & software products

Open Case File

October 15, 2023

Broken windows theory & software products

Exploring how tiny digital fixes can subtly conjure cohesive UX magic

Sometimes I find myself rambling to a luckless friend about the Broken Windows Theory. Unfortunately for you, it's your turn now.

It's this idea that fixing small things, like broken windows or graffiti in public spaces (the New York City subway, for instance), can prevent larger issues like crime. There's something about clean spaces that sort of nudges people to treat them with a bit more respect.

The other day an unexpected flare of synaptogenesis made me think of this theory, applied to software products. You've been there - a weirdly placed button or an unintuitive popup, and somehow, the app feels... less. These small issues, compounded, somehow sour the experience and lower our perceived value of the product without us even realizing it.

I've got some stories from the trenches about this, too. In my Java days, I was hired by a company whose product was slower than anything I do between waking up and my 3rd cup of coffee. Customers made a pun with the product name to celebrate its slowness. Tasked with solving the problem, instead of finding one big bottleneck, I discovered a myriad of small problems that, when fixed, gradually turned our slowpoke product into something that wasn't a basic human rights violation. Complaints went down, smiles went up.

Flash forward to a more recent gig: the product had been a ping pong ball between teams - one with a knack for overengineering and another that was quite the fan of sacrificing user experience for code simplicity. This had left it with a sort of Jekyll and Hyde personality: modules followed entirely different design systems, alerts popped like they wanted to give you a jump scare, and error handling was... it just wasn't. With carte blanche to make any improvements I thought necessary on top of the assigned work, after a few months of diligent boy scouting, it started looking and feeling more well-put-together.

I believe repairing these small broken windows is a bit of a subtle art.
If you keep your windows intact and scrub off the graffiti, people start to respect the space more, even if they don't consciously notice the improvements. We shouldn’t shy away from sneaking in those little fixes when we can, even if it’s a small rebellion against the roadmap. Sometimes, your soul (and the product) just needs it. And if someone higher-up starts to get upset over the unscheduled betterment... it might be a sign that another adventure calls.
Read more
code
Jul 11, 2022
PG. 1 / 3

Summary

"The web is definitely short of a shortage of shorteners, and yet... here's one more"

Read More
I've been on a sabbatical for a few months and started feeling like it was time to find a new job. I carefully selected a dozen companies, made a sorted list starting with the ones I like the most, and started applying to a few at a time.

One of these companies asked me to complete an assignment: creating a URL shortener. This was just a screening task and, therefore, a tiny project. The requirements made it optional to create a UI or use a real database. I set out to do something lightweight but got carried away and ended up making something I could actually put in production. 🙈

I decided to try a novel framework called Fresh for this project, but soon started running into the kind of weird problems that appear when the tooling you're using isn't yet mature. To make sure I wouldn't blow the deadline debugging and fixing esoteric errors, I decided to stick to the good ol' Next.js running on Vercel combo.

The good news is I still found a way to sneak some technology I hadn't used before into this project. This seemed to be the perfect opportunity to try out a key-value store such as Redis. I chose Upstash for the job, a company providing durable storage with Redis for serverless applications. And I must say it was a _breeze_ to integrate, somehow even easier than Mongo Atlas.

You can find the app running at https://s.dooart.dev. Here's the code.

A final note: it is kind of cool to have my own URL shortening service, but what is even cooler is being able to go from `git init` to production in just a few hours. Software development for the web has come a long way!
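The core of a shortener really is just a key-value lookup. Here's a minimal sketch where a plain dict stands in for Redis (the real app talks to Upstash; the equivalent Redis commands are noted in comments, and all names here are mine, not the actual project's):

```python
import secrets

class Shortener:
    """Toy URL shortener: generate a short code, store code -> url."""

    def __init__(self):
        self.store = {}  # stand-in for Redis

    def shorten(self, url, code_len=6):
        # token_urlsafe gives URL-safe random strings; trim to length
        code = secrets.token_urlsafe(code_len)[:code_len]
        while code in self.store:  # regenerate on a (rare) collision
            code = secrets.token_urlsafe(code_len)[:code_len]
        self.store[code] = url  # in Redis: SET code url
        return code

    def resolve(self, code):
        return self.store.get(code)  # in Redis: GET code

s = Shortener()
code = s.shorten("https://s.dooart.dev")
```

A real deployment adds an HTTP layer (a redirect handler) and expiry, but the data model doesn't get much deeper than this.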
Confidential
Evid. #004
Subject Matter

URL shortener with Redis

FIELD REPORTS

From The Desk of DOOART LABS

DOOART LABS is a digital playground for creative coding, AI experiments, and over-engineered interactive toys.

This newsletter serves as the primary distribution channel for post-mortems of failed experiments, weird bugs, and the occasional success story.

Input Terminal

Frequency: Low // Signal: High

* Subscriber acknowledges inherent risks of exposure to unfinished thoughts and spaghetti code.