• In my early teens and twenties, I was a consummate blogger. Well, really, I was more of a diarist, spewing thoughts about my life nearly continuously onto a blog of some form or another. In the late 2000s, I decided to use my ability to spew text to promote music I liked, by converting my journaling domain staires.org into a music blog. The original conceit was that I would post a song every day, for at least a year, complete with an anecdote about my personal connection to the song, or at the very least a couple paragraphs of my thoughts and opinions on it.

    I believe I made it up to around day 230 before I decided I could not keep up the daily pace and started posting less frequently. But that was a long stretch of consistent daily posting, and it got me some early attention. Eventually I got picked up by Hype Machine, which helped promote the blog further. They still have an archived list of 208 staires! songs that they syndicated, which is pretty cool to see. Over time, I built up a small following and made some lifelong music-loving friends along the way.

    But life, or mental illness, or monetary troubles got in the way and I eventually stopped updating it. I gave up. I guess I lost the love of the game? I forgot what the point was? I was going through a lot of stuff in the early 2010s, so who knows what my exact reasoning was. In a bit of a snafu, when I cancelled my hosting for the website, the host ended up turning off auto-renew for the domain. When it came up for renewal, they registered it for themselves and wanted to charge me hundreds of dollars to get it back, which I did not have. And that’s why the staires.org domain was advertising car parts and other junk for a long time, probably over a decade.

    A couple of days ago, I randomly typed the domain into Squarespace, and it said it was available. I was shocked, and I bought it immediately, without thinking much about it, just glad to have it back with me where it belongs. But then today I realized: I’ve never stopped writing about music I like and recommending it to others, I’ve just been doing it privately, directly to friends or in small community chatrooms. And in a lot of cases, when I do it that way, I’m just shouting into a void, and most people aren’t even paying attention. What’s the point of doing it that way?

    At least if I write a post on a website, it’s something random people can run into. People who enjoy the songs I post can add me to their feed reader, or subscribe via email these days, and I can go back to promoting the music I love to a possibly wider audience.

    So, without further ado, you can find me blogging about music again over at https://staires.org. Enjoy!


  • I noticed a lot of people ask in the AI Horde Discord how to run a worker. I thought it would be useful if the community had a blog post to point people to, and maybe that can be this post. Let’s go!

    How to Get an AI Horde API Key

    Be sure to get an AI Horde API key if you don’t have one. You can register here: https://aihorde.net/register. Protect this API key; put it in your password manager or somewhere else for safekeeping. This API key will be your ticket to all the kudos you can earn.
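
    If you ever want to sanity-check a key programmatically, the AI Horde API can look up the user attached to it. Here’s a minimal Swift sketch; the /v2/find_user endpoint and the apikey header are my reading of the AI Horde API reference, so double-check them against the docs before relying on this.

        import Foundation

        // A sketch: ask the AI Horde which user a key belongs to.
        func checkAPIKey(_ key: String) async throws {
            var request = URLRequest(url: URL(string: "https://aihorde.net/api/v2/find_user")!)
            request.setValue(key, forHTTPHeaderField: "apikey")
            let (data, _) = try await URLSession.shared.data(for: request)
            // The response includes your username and current kudos balance.
            print(String(decoding: data, as: UTF8.self))
        }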

    How to Run a Text Worker (“Scribe”) on Windows

    If you’re using Windows, you’re in luck, because running a Horde worker is easier than ever thanks to Koboldcpp, which compiles all the bits and bobs into one executable file for you.

    Step 1: Download Koboldcpp

    Go to the Koboldcpp Releases page, and grab the executable most relevant to you. This is probably koboldcpp.exe. If you have a newer Nvidia GPU, grab koboldcpp_cu12.exe. Stick this file in a folder somewhere, like D:\koboldcpp\koboldcpp.exe

    I like to make presets and models folders in here, so your folder might end up looking something like this, depending on which version of koboldcpp you downloaded.

    [Screenshot: Windows Explorer showing "models" and "presets" folders alongside koboldcpp_cu12.exe.]

    Step 2: Download a Model

    Koboldcpp can run models quantized in the GGUF format. What does that mean in the most basic sense? Koboldcpp can run models that are compressed in a way that allows them to run on lower-end hardware, with some trade-offs (like decreased quality of generations). For example, CohereForAI/c4ai-command-r-v01 requires ~70 GB of VRAM in its original form, but a 4-bit quantization (available here) of it only requires ~23 GB of VRAM.
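
    If you want a rough feel for the numbers: a quantized model’s file size is approximately the parameter count times the bits per weight, divided by eight, and you need roughly that much VRAM plus some overhead for context. A back-of-the-envelope Swift sketch (an approximation, not an exact formula):

        // Rough rule of thumb: file size ≈ (parameters × bits per weight) / 8.
        func roughModelSizeGB(parameters: Double, bitsPerWeight: Double) -> Double {
            parameters * bitsPerWeight / 8 / 1_000_000_000
        }

        // An 11B model at ~4.5 bits/weight (Q4_K_S territory) is about 6.2 GB,
        // which is why it fits on an 8 GB card with room left over for context.
        let size = roughModelSizeGB(parameters: 11e9, bitsPerWeight: 4.5)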

    We’re going to focus on a model that everyone should be able to run locally: Sao10K/Fimbulvetr-11B-v2, a smaller model that really excels at roleplay chat and is in plenty of demand on the horde. It’s a personal favorite of mine as well.

    Step 2.a: Find a GGUF version of the model

    Google (or use your search engine of choice) “<model name> gguf”; in this case we’ll look for “fimbulvetr-11b-v2 gguf”. Usually the first result going to huggingface.co is what you want. In this case, we’re going to end up on the second result, at mradermacher/Fimbulvetr-11B-v2-GGUF, because it gives us more options in regard to quantization sizes.

    Step 2.b: Download a GGUF version of the model

    You can see on the model page for that GGUF version that there is a chart telling you how “good” the various quants are, but here are some general tips:

    • Models hosted on the horde should be Q4 or greater, to ensure best generation quality.
    • You should pick the largest quant that looks like it can fit in your GPU’s VRAM.
      • If you have an 8 GB GPU, get a model under 8 GB, etc.
    • If you aren’t sure what to try, try Q4_K_S first. You can always download a bigger quant and try it later.

    Download the .gguf file into your koboldcpp/models folder. My models folder looks like this:

    Step 3: Configure Koboldcpp

    Now we’re ready to launch Koboldcpp and start configuring it. The initial Quick Launch screen has all the main information we need to worry about.

    1. Hit browse and pick the .gguf file you downloaded into your models folder.
    2. Set the context size you want. I like to have plenty of room for context, so I pick 8192 by default, but if you have less system power you should try 4096 to speed up generation times.
    3. If you’re certain the model will fit entirely into your VRAM (because you downloaded a model smaller than your available VRAM), set a large number here. If you don’t know whether the model will fit into your VRAM, see the troubleshooting section below about figuring out layer counts.
    1. The model name is very important to get right, as it determines how much kudos you will get for your submitted generations. If you put an incorrect name here, you will get very few kudos. To make sure you put the right name, check the model whitelist for the name of your model, without any quantization naming attached. In this case, we see that the base model of Sao10K/Fimbulvetr-11B-v2 is in the list. But that isn’t the name we want to put into the “Horde Model Name” slot; we want to identify that the model is being run with koboldcpp, so we put in the model name as “koboldcpp/Fimbulvetr-11B-v2”. Nothing else matters; you do not need to include the quantization level in the name.
    2. This is where you put your API key so you can receive kudos properly!
    3. Your worker name should be the same every time you run your worker, regardless of the model being used.
    4. Save your configuration to a .kcpps file so it is easy to reload it later. This is what the presets folder is for, save your configurations in there so you can easily use them later on. ⚠️ Koboldcpp doesn’t save your settings automatically!

    Once your configuration is saved, hit Launch!

    Step 4: Is it Working?

    Once your worker is running, you should end up with a terminal display that shows you what your worker is up to. It looks like this.

    1. This is how many kudos you are earning for this specific job. If this number is very low, like 1 or 2 kudos, you likely have your model name configured incorrectly or the model is not whitelisted for the horde. Double-check that your model name is correctly entered and that the model is whitelisted.
    2. This readout shows how many jobs your worker has completed and how many kudos per hour you are earning. If you’re not earning several thousand kudos per hour, your worker is likely configured incorrectly or you’ve picked a model you are not able to run at a decent speed. (If you want to check your stats from outside the terminal, see the sketch below.)
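
    You can also verify your worker from the outside, because the horde exposes a public list of active workers. Here’s a hedged Swift sketch; the /v2/workers endpoint and its type filter are my reading of the AI Horde API docs, so verify the exact response shape yourself:

        import Foundation

        // A sketch: fetch the public worker list and pick out your worker by name.
        func findWorker(named name: String) async throws {
            let url = URL(string: "https://aihorde.net/api/v2/workers?type=text")!
            let (data, _) = try await URLSession.shared.data(from: url)
            // Decode loosely, since the exact fields may change; see the API docs.
            if let workers = try JSONSerialization.jsonObject(with: data) as? [[String: Any]] {
                let mine = workers.filter { ($0["name"] as? String) == name }
                print(mine) // uptime, kudos, and performance stats live in here
            }
        }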

    You can test your worker through the horde by using Kobold Lite. Load up the website and then click on the “AI” button in the top left corner.

    1. Put in your AI Horde API key if you haven’t before.
    2. Check the box “Select By Worker”
    3. Look for your worker in the model list. If you don’t have a Purple Heart here, it’s just because your worker is not yet “trusted”. More on that later.

    Click OK, then pick a desired scenario (KoboldGPT is the easiest for testing, I think) and submit a chat request. You know your worker is running when you get a message back and it shows your worker name and model name in the bottom of the client, like this:

    If you ended up here and all looks well, congratulations, your worker is running and raking in the kudos.

    Troubleshooting

    “Where are my kudos? I’m not getting the right amount of kudos.”

    To prevent abuse of the horde by bad actors, when you first start running a worker, half of your earned kudos are held in escrow. After a week or two of running a worker without issue, you’ll become “trusted” and receive all kudos owed to you (the kudos held in escrow plus all future earnings). (By “without issue”, I just mean that your worker is returning proper generations and there is no monkey business happening.)

    “What was all that about layers and how can I run models larger than how much VRAM my GPU has?”

    When you are configuring Koboldcpp, it asks you how many GPU Layers to use. This can be useful if you want to run a model that is just slightly too big for your card’s VRAM. But how many layers does each model take up? There is a way to guess this yourself, but I like to just try to load the model in Koboldcpp and see what it says. For example, the 8-bit quantization of Fimbulvetr I’m using displays this in the console when loading.

    You’ll see it says that this model has 49 layers that it is loading into GPU memory. If we didn’t have enough memory to store the full model, we could configure Koboldcpp with GPU Layers set to 40, and then it’ll load most of the model onto the GPU and the rest into system memory for the CPU to use. CPU inference can be very slow, so it’s best to keep as many layers on the GPU as possible and only spill into system memory when you have to. The more layers on the GPU, the better!
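
    If you’d rather estimate it yourself, a serviceable heuristic is to treat the layers as roughly equal in size and offload however many fit in your free VRAM. A quick Swift sketch (an approximation; the loader’s own numbers are the real authority):

        // Rough heuristic: layers are treated as equally sized slices of the model.
        func layersThatFit(modelSizeGB: Double, totalLayers: Int, freeVRAMGB: Double) -> Int {
            let gbPerLayer = modelSizeGB / Double(totalLayers)
            return min(totalLayers, Int(freeVRAMGB / gbPerLayer))
        }

        // e.g. a ~12 GB model with 49 layers and 10 GB of free VRAM:
        // layersThatFit(modelSizeGB: 12, totalLayers: 49, freeVRAMGB: 10) → 40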

    “I need more help!”

    Not a problem at all, there are a lot of people who are willing to help you over on the KoboldAI Discord. There are lots of people far more knowledgeable than I am, and if you decide you like running a worker a lot, they can help you level up your hosting game. Remember, this is just the most basic guide; there are more robust methods (like aphrodite) that allow you to run several models at once if you have hardware capable of doing it. Feel free to ask about that in the Discord.


  • While most of my AI-related app releases have been dedicated to image generation (counting two Ealain releases and then Aislingeach), my gateway into generative AI was chatbots. If you look at the chronology of this blog, that’s pretty obvious, since it all began with my post about “uncensored” chatbots, which I’ve recently updated to note this very app I’m talking about now.

    So, yeah, it’s weird that it took me nearly a year to finally make my own chatbot app. The reason for this is largely just that open source LLMs took a little time to cook before getting better. They’re still not perfect or anything, but they’re a lot more fun to play around with, and you can essentially get a good experience “out of the box” with any of them. It felt like the time was right to try to make a straightforward app for character-based roleplay chat.

    If you don’t care to read the rest, well, here’s some links for you.


    It feels moderately awkward to explain this app to people, because the main way LLMs show up in the news and the overall public consciousness is as writing assistants, or coding assistants, or maybe job-stealers for customer service. But this app is not that; it’s not meant to be an assistant. It’s meant to let you chat with fictional characters, which at face value sounds… immature? Or stupid? I don’t know, but it definitely makes me cringe a little bit.

    Additionally, as my original blog post makes clear, depending on the audience, there’s a bit of ick around the entire subject matter because a lot of the reason these open source LLMs exist and are good for this purpose is because people wanted to have sex with chatbots. That’s just how it is. There’s a lot of lonely (or just horny) people out there, across the whole spectrum of genders and sexualities, and a vast pornography industry in existence that proves it. So, of course people want to use this new technology in that way.

    So is that an explicit endorsement, that my app is for people to use to have sex with chatbots? Well, that’s the pickle I’m in with the app, when I’m explaining it to people, because it’s hard to convince someone that a grown adult might just enjoy talking to random fictional characters. And that’s mostly how I dogfood the app, just shooting the shit with random character cards, and seeing what funny (or stupid) stuff comes out. This technology can be really entertaining, especially when it pulls a stroke of genius out of its back pocket and surprises you.

    So, Inneal is for writing characters and chatting with them, and maybe that is just for fun, like a choose-your-own-adventure story you are actively writing while participating in it; or maybe you do it to stimulate your creative instinct and try to flesh out characters in your own fictional stories; or maybe it’s because you’re lonely and you want to talk to a familiar face, even if they don’t really exist. I’m not going to judge. I’m not the judging kind of person. Even if you want to have sex with them.


    This is my second app built entirely with SwiftUI, and my first app using SwiftData for the persistence layer. Ealain, my first SwiftUI app, was pretty simple and didn’t really force me to learn how SwiftUI works properly. Inneal, on the other hand, really forced me to learn quite a bit about SwiftUI and especially how it interacts with SwiftData.

    When I’ve used CoreData in the past, I’ve followed advice to abstract it away as best I can and I usually keep all the CoreData related logic compiled together in one class, to try to avoid issues with threading. This results in crashes in Aislingeach to this day with NSFetchedResultsController, but, whatever, they’re pretty rare.

    SwiftData doesn’t want you to do this, or at least, if you do, you lose out on a lot of cool stuff. It also doesn’t want you to update or create model objects essentially anywhere but directly in the View code. This feels weird, coming from a background where you feel inclined to try to hide the model away from the views as much as you can. Instead I end up in a situation where the ViewModel is passing data back to the View so that the View can create or update the Model within itself.

    I don’t think I am doing this wrong, because it works quite well and still keeps almost all the backend logic tucked away in the ViewModel, while the View has this stranglehold on the Model, which sort of makes sense because that is where the model is used and updated anyway.
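
    To make that concrete, here’s a minimal sketch of the shape I’m describing (hypothetical model and view names, not Inneal’s actual code): the View owns the model context and does the inserting itself.

        import SwiftUI
        import SwiftData

        // Hypothetical model for illustration; assumes a
        // .modelContainer(for: ChatMessage.self) set up at the App level.
        @Model
        final class ChatMessage {
            var text: String
            var timestamp: Date
            init(text: String, timestamp: Date = .now) {
                self.text = text
                self.timestamp = timestamp
            }
        }

        struct ChatView: View {
            @Environment(\.modelContext) private var modelContext
            @Query(sort: \ChatMessage.timestamp) private var messages: [ChatMessage]
            @State private var draft = ""

            var body: some View {
                List(messages) { message in
                    Text(message.text)
                }
                TextField("Message", text: $draft)
                    .onSubmit {
                        // The View creates and inserts the model directly,
                        // which is what SwiftData seems to want.
                        modelContext.insert(ChatMessage(text: draft))
                        draft = ""
                    }
            }
        }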

    SwiftUI is a little glitchy, but that’s understandable, because I don’t envy any of the behind-the-scenes work that goes into it. The two notable issues I ran into are: 1) if you set up SwiftData the way Apple intends and then set up CloudKit syncing, your app will crash every time it’s backgrounded; 2) when you use LazyVStack with defaultScrollAnchor (see the sketch below), sometimes your internal views just kind of disappear, which is an ongoing issue I can’t manage to solve 100%. I might end up using one of my free developer technical support tickets to ask about this issue.
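
    For reference, here’s the kind of setup where the disappearing views bite me (a reconstruction of the pattern, not pasted app code):

        import SwiftUI

        struct ChatScrollView: View {
            let messages: [String]

            var body: some View {
                ScrollView {
                    LazyVStack(alignment: .leading) {
                        ForEach(messages, id: \.self) { message in
                            Text(message)
                        }
                    }
                }
                // iOS 17+. Combined with LazyVStack, this anchor is where the
                // vanishing-subview glitch shows up for me.
                .defaultScrollAnchor(.bottom)
            }
        }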

    The other issue is, I’m sure, perennial: you will reach a point where the compiler simply dies instead of making any attempt at telling you where the bug in your code is. This means you must be very careful about how many code changes you make before doing a build & run, because otherwise you’ll end up pretty stuck, not having any idea what you broke.

    All that said, I really enjoyed building this app with SwiftUI and don’t think I will go back to using UIKit unless I really have to for some reason. It really sped up the development process and cut down on a lot of unnecessary boilerplate-style code created by strict adherence to the delegate pattern.


    I guess that’s it. Go download the app and chat with some bots!


  • This post is an update to my previous post, May 2023’s “The Varying Levels of Getting Started with “Uncensored” LLM-Powered Chatbots”. A lot has happened in 11 months. Well, not a lot, but things have changed a bit.

    I think that post is still worth reading, because it explains more in depth what we’re talking about exactly, and why, and I am going to take a stab at cutting to the chase and just updating a couple specific points from that post.

    Previously, I said that you could get a pretty good chatbot experience out of OpenAI’s APIs, and that you should try that first. That is no longer the case. Big corporate LLMs have implemented aggressive content filtering that restricts your chatbots in ways they weren’t restricted 11 months ago. That isn’t to say corporate LLMs don’t have their uses, but if you’re looking for “uncensored” LLMs, which is the point of this post, I can no longer recommend OpenAI, Google, Anthropic, etc.

    Previously, I said that the open source LLMs available at the time, like Pygmalion 6B, “weren’t very good”. That is, thankfully, no longer the case. Specifically, I think a model named Fimbulvetr is very good, and you can likely run it at home very easily, or usually find it hosted on the AI Horde. I think this takes a lot of the sting out of commercial LLMs becoming very sanitized and being turned into the equivalent of hammers and wrenches.

    Some things have also changed with the software that you can use to run models. There’s a lot of it. I am going to reference the things I have used specifically and can recommend depending on your circumstances. Without further ado, here we go…

    We’re going to start at the lowest level of effort and work our way up as we progress into the hobby. Sounds fun, right?


    An Easy and Free iPhone / iPad Client

    Do you just want to chat with some bots to see what it is like, and not spend any money or time on it? I just released an iPhone / iPad chat client for the AI Horde, called Inneal. It’s free, and use of the AI Horde is free. It also hides away some of the fiddly bits that can be confusing in other clients, so you can start importing character cards and chatting with them right away. Yes, this is self-promotion, but I made this app specifically to make it dead simple to start chatting right away.


    A Free Browser-based Client

    Once you’ve become a bit bored with what you can do in Inneal, you can branch out into another free client for the Horde that does some other things, called Kobold Lite. The interface for this app is a bit confusing, but you can also import character cards from CHUB and do some other stuff in this client, like straight text completion, or Adventure Mode, which tries to do kind of a Zork-type thing. It’s pretty neat.


    A Free and More Robust Browser-based Client

    If you want to get really deep into using third-party web clients (and I honestly don’t recommend it, because why would you want all your chats stored on some random server somewhere?), there’s a very nice website called Agnai that supports a lot of chat features that are becoming standard, like lorebooks. This client supports lots of LLM APIs as well as the AI Horde.


    Paid Options for the Computer Illiterate

    If you’re computer illiterate and you have money, there are plenty of paid sites where you can chat with bots for money. Some of them have some very large models you can use, which should, in theory, be better at chat than smaller models. I can’t say whether any of these sites are good or not, and I wouldn’t use them myself. I felt it would be unfair not to mention them, however.


    The Go-To At-Home Client

    If you’re computer literate, and you want to keep your chats on your computer where they are safe, you should consider installing SillyTavern. In addition to being fully under your control, it seems to be the gold standard client at the moment for chatting and is jam-packed with features. I just wish they’d change the name.

    SillyTavern can connect to the horde, of course, so you can still use it for free, but really, you should start running your own models locally if you can, which we talk about next.


    Running Your Own Models At Home

    Running models at home is easier than ever before, with some caveats, namely that you’re running quantized models, which means they may not be quite as good as the full-sized models running on server farms with tons of VRAM available. But they still do the job better than anything else that has ever existed, so, nothing to complain about, really.

    If you’re running Windows

    If you’ve installed SillyTavern and you’re using it, then you’re definitely serious enough to run your own LLMs locally, and if you have a gaming graphics card made in the last couple years, you probably can easily run Fimbulvetr locally. I’ve used Koboldcpp to do this on my PC lately.

    You’ll want the Fimbulvetr-11B-v2.q4_K_S.gguf file from the link below. This is a quantized version of the model that will run on smaller graphics cards, like those with 8 GB of VRAM. But it also runs just fine on bigger graphics cards.

    If you’re running macOS

    If you have an M-series Mac, you can use Ollama to run models locally. The version of Fimbulvetr available under Ollama runs really well on my M2 Pro Mac mini, and SillyTavern already has an Ollama-specific connection in it.


    Now What?

    If you had fun getting Fimbulvetr working, congratulations to you, you’re just getting started. There are a lot of cool models out there, and a lot of stuff to learn about (like Instruct Mode). I recommend checking out this page that reviews LLMs based on how good they are for chatbot roleplay. It’s updated quite often. Chronoboros, Psyfighter2, Noromaid Mixtral, … there are a lot of fun models to try out to see what works best for what you like.

    I’d also recommend joining the KoboldAI Discord; it’s a great place to geek out, get help, and learn about all the latest models and how they work.

    If you get into SillyTavern, there’s a lot to learn there, and they have their own Discord server that could be worth a perusal.

    Also, go back and read my original post, if you didn’t, there’s some other stuff in there and if you’re all the way down here, you must love reading, so do it some more?


    That’s it for now! I’m sure there will be another update, perhaps this year or next year. Just in the past month, we’ve seen the release of Command-R and Llama 3, two very large open source models that are sure to help push forward progress on these fronts. We’ve also seen corporate AI get more and more restrictive about how it can be used, as world governments begin to make moves that could enable regulatory capture, making it more important than ever before that these fields move quickly. Kudos to the torch-bearers.


  • I’ve built a few apps now, apps that people actually pay money for–not a lot of money; a very, very small amount of money, but enough that my Apple Developer fee has been paid for until I die, at the very least. I don’t say this to brag, but to establish that I’ve done this a few times now, entirely by myself, so I can speak somewhat authoritatively about it. Why is that important? Because I want you to achieve your goals, and if your goals are similar to what I have achieved, then maybe this guide will help you reach them. So, this is how (and why) I build apps, so you can too.

    Why I Build for Apple Platforms

    The iOS ecosystem, starting with UIKit and now with SwiftUI, allows you to quickly build beautiful apps that feel great to hold in your hand and interact with. If you already enjoy using Apple platforms, especially for desktop computing, it’s a no-brainer, as you have all the tools you need for development already.

    In conversations with people getting into development, they seem almost mindlessly compelled to pursue JavaScript-based front-end web development. I find that realm of development to be very complex, especially for a beginner, and the end result (a web app) does not feel as nice as a native app. Sure, you can use your app on any device with a web browser, but, who cares, it still feels like a website, yuck.

    That said, my development process has nothing particularly to do with the platform I build for. But I do recommend choosing one platform and dedicating yourself to it completely, at least at first.

    Why I Build Anything At All

    I build the apps I do because I want them, for myself. I highly recommend that you build software you want. If you are chasing fame or profit, it pays to be a curious person, because you will stumble into new interests and hobbies, which exposes you to new people and new experiences, and may create new wants in you, which lead to new app ideas. This is almost the entirety of how I get any ideas at all.

    Incremental Dogfooding

    Dogfooding, short for “eating your own dog food”, simply means using your own product. Because I am building apps I want, to fulfill some workflow I’ve envisioned in my head, dogfooding is an inherent part of the process. This leads to a very incremental development process that forces me to logically break down the workflow into a series of steps that allow the app to naturally grow over time.

    For example, I have been working on a chatbot client. It was immediately obvious what the first goal would be: I want to be able to type a message into a box, send it, and get a message back. It doesn’t matter at this point whether or not the chat is saved or even what service the chatbot is using, I just need to get the most basic component of this project working.

    In the case of Aislingeach, the most basic component of the project was to type in a prompt, and get a single image back from the horde. In the case of Ealain for Vision Pro, it was simply to get images to download and appear on screen.

    For none of these apps did I really concern myself with the visual design at first, which is easy on Apple platforms, because Apple has very strong platform conventions that I appreciate, and if you use UIKit or SwiftUI properly, your app will end up looking nice. It may be tempting to nail the visuals of your app upfront, but it doesn’t make sense to spend a lot of time designing something you might not be able to build, or might not even like when you do build it. Design and polish later, when everything is working. Remember, “design is how it works, not how it looks,” at least at first.

    Annoyance Accrual

    Once I get the most basic workflow working for each idea and see how it feels, this often motivates me to continue the process quite naturally, because I will usually feel some sort of annoyance with what I have built so far. This is the same sort of feeling that I get when I’m using other software and it doesn’t work quite right, or look very good, but in this case, I can do something about it, because I’m the code jockey building the thing.

    What’s nice about this process is that it simulates the ebb and flow that is essential to any interesting and exciting activity. Movies, music, literature–these are things that take you on an emotional journey of ups and downs, ideally. This development process is the same: You are scared at the start (what if I am too dumb to do this?), but you get a piece of the app working, so then you are happy. You turn happily to use your app, and find something annoying about it, so now you’re mad and maybe a little scared still (what if I can’t fix this?), but then you fix it, and now you’re happy again. Rinse and repeat.

    In the case of my chatbot app, once I got basic chat working, the next big annoyance was persistence: I wanted these chats to stick around, and it was lame that they disappeared when the app restarted. So this forced me to start thinking about the data model for the app and make a decision around that issue. Once chats were stored in some sort of database, it was annoying that the chatbot had no personality, so I had to implement importing characters and using them in the chat prompts.

    Once I could talk to a character, the next annoyance was that the LLM sometimes generates something lame, and you want to tell the LLM to “retry”, basically a message delete and regenerate. After that was implemented, it was annoying that I couldn’t just edit or delete messages, so that was next to get built. After this point, the most basic workflow is almost entirely in place, which means future annoyances will mainly be around refining and improving that core workflow, by adding additional screens for configuration options and supporting content like that. (Eventually the “retry” option became an annoyance, and I replaced it with SillyTavern style swiping, a much better feature.)

    All throughout this process, I’m actively dogfooding the app, which acts as a built-in quality assurance (QA) process. It also forces me to think about the user experience (UX) of the app, because I am actively using it and accruing minor annoyances with how it looks and functions along the way. For the chatbot app, I wanted to make sure the experience of using the chat interface felt very native, like iMessage, which necessitated a lot of research and iteration. I try to tackle these UX annoyances as they pop up, usually between resolving the core annoyances. Shifting between building core features and refining the user experience helps keep the development process fun and dynamic for me.

    Release As Soon As Possible

    Perfect is the enemy of good. This is a fact that cannot be argued with. If you try to achieve perfection, to truly whittle down your list of annoyances to absolutely nothing, you will never get there. This is the plight of a human being; if we were capable of ever truly being satisfied, we would stagnate and die as a species. I don’t mean to get too philosophical, but it’s important for your own sake to internalize this idea, so that you release your projects and let people use them, instead of just fiddling with them until you lose interest and forget about them.

    Because I’m building apps that I want, for myself, and I pretend that I don’t really care that much about what other people think because I assume that “if I like it, it must be good”, it feels relatively easy for me to reach a point where my list of annoyances naturally turns into a list of trivialities and bigger wants. At this point I know the app is ready for a 1.0 release.

    The trivialities can be wide-ranging, from not being happy with the way code is structured in the project, to the design of minor interface elements. The bigger wants are things that feel like they should be in a 1.1 or later release, like adding additional features that refine or expand the app, or completely redesigning entire parts of the app that grew a little stale on the road to 1.0. The important thing is that none of these things truly hurt the core workflow the app is meant to support.

    What is great about releasing your app as quickly as possible is that you get to collaborate with other motivated, passionate people on expanding your lists of annoyances, trivialities, and bigger wants. In my case especially, because I am building out my workflow for a process, after the app is released, I end up discovering glaring blindspots in my knowledge of that process that are only revealed when other people explain their workflow to me. And because there is now a real person asking for my help in achieving this with my app, it really motivates me to figure out how to accommodate their need, while still adhering to whatever my vision of the app may be.

    After that happens to you, congratulations, you are officially an app developer. You built an app, you got other people to use it, and you listened to their complaints and incorporated their feedback. This is the entire process of being a software engineer, from top to bottom. Everything else that happens in the professional world around software engineering is just additional layers of refinement built on top of this sort of process, to scale up to supporting multiple engineers working on a single project.


  • Just 11 days ago, I released Ealain for Vision Pro and now I’m very happy to introduce Ealain for Apple TV, which is the same app, it’s just for Apple TV now.

    You can find it at the same App Store page for Ealain for Vision Pro, because it’s a universal app (or whatever they are called); one purchase gets you both versions.

    I was able to reuse a lot of code from the Vision Pro version, which was useful, but I also had to rethink the UI of the configuration screens so it made more sense on a TV. In some ways, I prefer the artist picker in this version and may find some way to bring it to the Vision Pro.

    I don’t have a lot to say about it, but it’s neat and a nice addition to any Apple TV 😉


  • This is my Apple Vision Pro review.

    I wrote several within the first week I got the headset but none of them seemed to be saying anything that wasn’t already being said. Now that the honeymoon period is likely over for most people, and the AVP Discord is getting much quieter, maybe it’s my time to share.

    The Apple Vision Pro is a fantastic prototype of what the future may look like. But it’s just a prototype, deep down, with a lot of shortcomings that turn the AVP into a fancy paperweight on my desk. It sounds harsh, but it’s true. This is not a real product for normal people to use. It’s not even a real product for a tech enthusiast to use.

    The first thing anyone is going to think or say when they put an AVP on their head is: “Good lord, this thing is heavy.” It is very heavy. The straps included in the box do absolutely nothing to reduce the amount of weight or pressure you feel on your face or head. Even with a third-party top strap, the comfort is terrible. I am a dedicated enthusiast and I still have not managed to wear my AVP for longer than 30 minutes at a time, because it is very uncomfortable and hot to wear. I’ve worn my Quest 3 with the Elite Strap with Battery for 4 hours straight, if not longer, but that’s not a fair comparison, because the Quest 3 actually has some experiences worth diving into for hours at a time. The AVP does not.

    This means that the second thing anyone is going to think after using the headset is, “That’s it? What am I supposed to do with it now?” After the dinosaur app, and seeing a Disney+ movie in 3D HDR, there’s just no real use case for the headset. At first, the defense for this take was that the AVP is a productivity device more than it is a gaming device like the Quest 3, because it clearly wasn’t built with gaming in mind. But the deeper truth is that the AVP is not a good productivity device either.

    There are no benefits to using the AVP over just using your monitor at your desk. “But, you can have infinite displays!” No, you can’t. You get one Virtual Display and then you can put apps all around you, sure, but what good is that? I don’t want to be constantly turning my head and body around to look at apps. I want to “three-finger swipe up” to see all my windows in Expose, then pick one to focus on where I am already looking. Very occasionally I will side by side two apps on my display. But, I don’t want to keep turning my head all day long, which is why I use a single ultra wide display and no secondary monitors.

    There is absolutely no window management in visionOS, no Expose, which quickly makes any productivity-oriented task frustrating and cumbersome. Apps become obscured by other apps, completely vanishing from view, leaving you to sit there pinching and dragging windows all over the place till you find the one you really wanted.

    As an example of how half-baked window management is in visionOS, if you are using Virtual Display to control your Mac while you are using Xcode to build an AVP app, every time you build and run your app, it will pop up directly where you are looking when it launches, which is usually right over your virtual display. So every single time you build and launch your app, you have to pinch it and move it away from the virtual display so you can see both at the same time. You quickly learn that you should hit build & run and then quickly look off to the side so the app launches somewhere else, but this is completely insane. Why can’t visionOS remember where the window was seconds ago and put it back in place? Did anyone working at Apple ever use the AVP for development? I really don’t think so.

    The deeper issue here is that visionOS is based on iOS, and it has all of the issues that iPads and iPhones have when it comes to productivity. The apps are all Playskool versions of the real apps, even in comparison to their iOS counterparts. There’s no terminal, no real file system, no “real apps” for the platform (depending on your use case, and depending on if you count iPad apps).

    If you want to do software engineering, you’re stuck using Virtual Display, or else you’re such an enthusiast that you’re willing to use VS Code tunnels to run VS Code through a web browser, which introduces input lag that feels awful to me. What are the chances we’re ever able to do software development fully natively on the Apple Vision Pro? Probably 0%. How long has the iPad been out? 13 years now, right? And can you develop on it? Nope. (Not unless you’re willing to deal with a lot of headaches, I assume, similar to how I see people using the AVP for engineering.)

    Past that, all my complaints are possibly pretty niche. You can’t play two videos with audio at the same time, which may be a niche use case, but it’s still a pretty strange shortcoming considering macOS can play any number of videos with audio all at the same time without issue. The AVP doesn’t come out of the box with WebXR support enabled, so 3D YouTube videos and other content don’t work. If you go and turn on a bunch of extra options, you can start watching 3D video on the web, but because there are no controllers with the AVP, none of the open web has been built to deal with it, so you can’t really control anything. The Photos.app doesn’t allow you to create albums and you cannot put photos into albums; it’s just a dumb photo viewer for the most part.

    There are a lot of people still in the honeymoon period, people who are very committed to the idea, which allows them to overlook all the shortcomings and issues. Kudos to those people, I wish I could be one of them. But I can’t. The truth is that I have no need for the one thing the AVP does well, which is movie watching. It’s a beautiful way to watch movies if you don’t have a similarly expensive home theater setup at home (and you don’t mind all the glare in the lenses, which ruins the experience unless you watch movies in the Moon environment, which balances out the glare).

    In that context, the AVP is a pretty okay portable one-person home theater setup. But everything else it’s supposed to do? It doesn’t do any of them at a level I would consider acceptable. Maybe if the AVP was half the weight and it was more comfortable to wear, it would be easier to overlook these shortcomings and enjoy the glimpse of the future it provides. But at the moment, the future seems a little hamstrung by the constraints of iOS and the unnecessary weight of the device.

    Unfortunately, the Apple Vision Pro is not really “a computer you strap to your face”. Deep down, it’s “an iPad you strap to your face”, and that’s probably the cruelest thing I can say about the device.


    If you’re curious about the promise of head-mounted displays, I’d highly recommend a Quest 3 over the Apple Vision Pro. It’s worse than the AVP for productivity use (because the screens are too low-resolution, in my opinion), but you’re getting a much more fully featured headset for a significantly lower price, and you can do a lot more with it out of the box. Neither product is perfect, but the AVP is clearly a prototype geared toward enthusiasts, while the Quest 3 is widely used (and beloved!) by everyday normal people all over the country. Full disclosure, I hardly use my Quest 3 these days, but I still think it is a much better way to dip your toes into standalone VR than the AVP.


    A few months ago I released Ealain, a screensaver for macOS that shows Bauhaus-style abstract art generated with Stable Diffusion. When I got the Vision Pro, I thought about which of my apps or screensavers would make the most sense ported over. This process took a while and was rife with indecision. Eventually I realized that Ealain could be really cool, as a changing frame of art to the side of or behind my virtual workspace, assuming I ever use my Vision Pro regularly.

    So, long story short, today I released Ealain – Infinite Art for Vision Pro. It’s an expanded version of the screensaver that has been converted to a virtual display. It still simply rotates through generative artwork, but now you can choose from multiple styles, which are identified by fictional artist names, complete with AI generated biographies. You can also create multiple displays, and each can display a different selection of artists. I added the ability to Favorite images, which will keep them in your collection permanently, and there’s a feature allowing you to only show your favorite images.

    I’m charging $9.99 for this app instead of posting it for free. I decided this year that giving away my apps on the App Store is kind of silly of me, and put prices on all of my apps. Do you know what happened? I am pretty sure more people download my apps now than before, and people are more inclined to leave positive reviews or reach out to me personally about how much they like the app.

    My theory is that free apps appear worthless, as if the app creator doesn’t care about it or think it is worth money, and that makes people inclined to treat the app carelessly. They use it briefly, then they move on. If they pay for it, they’re going to be looking for signs of craft and quality, and if they find what they’re looking for (like I believe they will), that only increases their enjoyment of the app.

    On top of that, Ealain is for a headset that costs about $4000 or more, so $9.99 should be pocket change to the type of person who has one and uses it regularly. And, owning a Vision Pro, I have to pay for it somehow. So, please, buy my app. I only need 400+ sales to pay it off. I’m begging you, before Tim Cook breaks my kneecaps!

    As far as technical stuff goes, this is my first SwiftUI app. The interface is entirely SwiftUI. I tried to use SwiftData for the database layer, but it didn’t seem appropriate to the project, so I relied on the same CoreData setup I use for Aislingeach. The app is not open source just yet, the code is a bit of a mess and the architecture makes no sense, but it will be up on GitHub when I feel it is ready.

    I feel that SwiftUI was pretty fun to build with. It’s kind of perfect for a very small-scale app like this one. I was able to get the app functioning pretty quickly, once I wrapped my head around “view is a function of state”. You can do a lot with very little, versus classic Apple-style MVC with UIKit. It’s very “why use many word when few word do job”, but it’s still very easy to write very ugly, hard-to-look-at, and hard-to-navigate code. But I understand that is a symptom of not refactoring enough, to some extent. I’m not a big fan of “magical” things in my programming languages, and SwiftUI is progressing rapidly into something that is almost entirely magical, where you will someday write @CRUDApp { @TodoList } and SwiftUI will automatically build all the screens you need for a basic CRUD app. That magicality (my word, it’s new) can make things feel opaque when debugging, in a way I don’t really like. That’s how I feel about SwiftUI. I’ll probably use UIKit for big projects still, and SwiftUI for small ones.
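
    If “view is a function of state” sounds abstract, here’s the smallest possible illustration (a generic sketch, not code from Ealain): you never tell the view to update, you just change the state and the body recomputes.

        import SwiftUI

        struct FavoriteButton: View {
            @State private var isFavorite = false

            var body: some View {
                // The body is re-evaluated automatically whenever isFavorite changes.
                Button {
                    isFavorite.toggle()
                } label: {
                    Image(systemName: isFavorite ? "heart.fill" : "heart")
                }
            }
        }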

    Originally I planned on having all Ealain clients generate art themselves using the AI Horde, but I felt like it would hold back the quality of the art and make for a worse first-run experience, so instead it works similarly to the screensaver where it is loading pre-generated images from remote storage, and I have a node script I can run at home to locally generate images on my 4090. Unlike the screensaver, this app caches the images locally. But there’s still a lot of images, so RIP your bandwidth if you use it a lot.

    This is my seventh app on the App Store (counting Numu, which I took off the store) which is pretty crazy to think about. Counting screensavers, this is finished project number 10 since I started building native apps. Am I a real app developer now?


  • This weekend I went to Darker Waves in Huntington Beach. It was pretty great. We saw: Molchat Doma, The Cardigans, DEVO, Soft Cell, The Psychedelic Furs, The B-52’s (via video screen), and New Order. We also overheard Tears for Fears on the way out, saw a bit of X, heard a touch of OMD, and a few other bands I couldn’t name.

    The amount of people at this thing was pretty insane.

    At one point I had to really shove myself through a huge mass of people to make it back to my wife, and it started to feel kind of weird and scary how, as I progressed further through the crowd, people began to push back and get angrier that I was trying to get past them, even though I was saying “I’m sorry, excuse me,” and trying to get the attention of each person as I passed.

    Halfway there, someone did a very pointed two-finger triple-tap on my shoulder after I had passed them, clearly pissed off, and I ignored it and kept going because I knew no good could come from a tap like that.

    I started saying, “I’m sorry, excuse me, trying to get back to my wife!” as a means of further explanation for my apparent transgressions. One row away, an old guy tried to stop me, saying I wasn’t being “nice enough” as I tried to get through the crowd, after I had to push between him and his wife because she tried to body block me after I said “please excuse me” to her twice. Everyone had been drinking all day in the sun so honestly I’m lucky I didn’t get knocked out by some old geezer.

    It was a little traumatizing, like one of those anxiety dreams where you’re trying to wade through a mass of people but suddenly your arms are wet noodles and the crowd swallows you whole and you wake up crying thinking you’ll never see your wife and dogs again. Don’t you try to tell me I am the only person who has those sorts of dreams!

    Aside from that, it was great. From DEVO onward, whoever was playing, I was dancing to it, I didn’t want to stop moving. I felt like a shark, telling myself that if I stop moving: all the sun, THC, and alcohol will get to me and I will crumble to pieces. It worked. I had a great time. And I learned I should probably be listening to Soft Cell (very horny) and The B-52’s (basically everything I already listen to).


  • Cultivate a strong sense of curiosity.

    That’s it.

    When people ask me why I am so lucky, and when I contemplate why I’ve managed to find some level of success in my life despite making almost every single wrong decision you can make as a young person, it feels like the truest and most honest answer is simply: I am a curious person.

    A lot of people get to know me and land at a very simple reduction: “Brad is a smart person!” But that’s not true. I’m not a smart person by objective measure. I probably wouldn’t score highly on an IQ test. I’ve made a lot of very questionable decisions in my life, and not just in regard to common sense and critical thinking, but also moral and ethical decisions. I dropped out of both high school and college, so my only real academic credential is a GED–and the GED exam was so easy that it’s hard to believe it’s designed so that 40% of recent high school graduates will fail.

    When I meet people that I think are smart, it’s usually because they know a lot about various things, and I assume that is probably why people think I am smart. I can sit around and talk about all sorts of things, but the only reason I can do that is because I know those things, because I read about those things, because I was curious about those things. That’s it. It’s not like I came out of the womb with a bunch of mostly useless trivia in my head, I had to read about that stuff.

    The most concrete and familiar bit of advice related to this, which most people hear as software engineers, is the idea that a software engineer should be a “life-long learner”. This is important for SDEs specifically because technology is always progressing, and you never know when you might find yourself facing an entirely new paradigm at a new position or with a new project. But, deep down, “life-long learner” is just another way to describe curiosity. Someone who is always learning is just someone who is continuously pursuing things that pique their curiosity. I had to use my own curiosity right now to google “peak your curiosity” ’cause I knew that couldn’t be right.

    People also tend to think I am very charismatic and funny, a real pleasure to talk to. I think some of this is luck, as I’ve always thought of myself as an introvert who doesn’t really play well with others, but somehow I’ve got the right mix of personal trauma that makes me a pretty funny person without the alcoholism necessary for a career in standup comedy. But what really sets me apart from a lot of other people is that I am a good listener, and I demonstrate a genuine interest in what the other person is saying (usually, unless they are very boring or stupid).

    If you look at any guide, written at any point in history, on how to make people like you (aka “make new friends” if you’re not a sociopath), the same thing always appears at the very top: Ask people questions about themselves. Now, this can be a chore you force yourself into doing as a form of social manipulation, and that’s okay too (you socio), but if you manage to foster an internal sense of curiosity, you should want to ask people about themselves.

    So, there you go, your one quick trick to making people think you are smart and making them like you: be curious about things and people. Easy!

    If you don’t know how to do this, here’s some tips and things that I do.

    When something interests you, anything, even in the most vague way, go read about it–you can usually start and end with Wikipedia on most subjects, but never stop yourself from scratching an itch, no matter how minor, dig in if you feel the urge. Wikipedia should link to sources, check out those sources.

    If you hear a song you like, go listen to the album it came from; then go listen to all of the albums by that band or artist. Go read about the band online. Read interviews with the band. Check out side projects by all the members of that band, there might be more music you like in there (although, often not, sadly–I’m looking at you, Paul McCartney). Musicians sometimes make other forms of art too; they write, paint, speak publicly, and all that is worth seeking out as well.1

    If you see a movie you like, it’s always worth it to watch other films by that director. Watch all of them! If you really enjoyed a certain actor’s performance, go watch more films with that actor in it. Keep in mind, the look and feel of a film is (often) mostly the work of the cinematographer, and cinematographers can hop directors, so be sure to scope out the cinematographer in some of your favorite movies and see if they did other work. If you’re really into movies, you could follow editors and screenwriters around…

    When you are meeting someone new, it can mean a lot to them when you pick out some little thing they talked about and say, “Hey, can you tell me more about this?” Another way of putting it is this: there is nothing anyone likes more than talking about themselves, so do your best to get your guest talking, and they’ll think they had an amazing time hanging out with you and always remember you fondly.2

    The nice thing about this is that there are so many people in the world, and they are so different from each other, that almost everyone has knowledge of, or insight into, something you’ve never experienced and maybe never will. People who’ve worked different jobs, who’ve lived in different places, who are entirely different races and from entirely different cultures, may have entirely different perspectives on familiar topics, or have opinions on things you’ve never even had to think about yourself. If you are a curious person, then everyone you meet can be a fount of wisdom, you just have to find that thing they are passionate about and get them talking.

    That’s about it. I didn’t really plan this post out very well, and it’s been sitting in my drafts forever, but I wanted to get it out into the world and I can always update it later or structure it out a bit more. The advice is too simple, not really much else to say here in the end. Just… you know… be interested in things. Don’t spend all your time just consuming content passively, take an active role in finding things (and people!) that interest you and pursue them diligently. I promise you, it pays off in the long run.

    1. But never, ever, ever meet the people who make the art you enjoy. Don’t do it! You can buy merch from them at their concerts, shake their hand if you see them at an art showing, but for the love of all that is good and holy in this world, do not have a conversation with them. That is one area that curiosity has almost always burned me. Don’t meet your heroes. They’re just normal people, and you could be hearing / seeing / enjoying something in their art that they are not at all aware of, and whatever connection you may think you have with that person because of their art may be a total misconception. Fair warning… ↩︎
    2. This can also be an extremely good way of detecting people you should not spend time with. The more they talk about themselves, the better an idea you get of the kind of person they are, so you should be able to more easily detect red flags. Even better: if they spend the entire time talking about themselves, and show absolutely no interest in asking you any questions whatsoever, you know that person has no genuine interest in you and you can act accordingly. ↩︎