• While most of my AI-related app releases have been dedicated to image generation (counting two Ealain releases and then Aislingeach), my gateway into generative AI was chatbots. If you look at the chronology of this blog, that’s pretty obvious, since it all began with my post about “uncensored” chatbots, which I’ve recently updated to mention this very app I’m talking about now.

    So, yeah, it’s weird that it took me nearly a year to finally make my own chatbot app. The reason for this is largely just that open source LLMs took a little time to cook before getting good. They’re still not perfect or anything, but they’re a lot more fun to play around with now, and you can essentially get a good experience “out of the box” with any of them. It felt like the time was right to try to make a straightforward app for character-based role-play chat.

    If you don’t care to read the rest, well, here are some links for you.


    It feels moderately awkward to explain this app to people, because the main way LLMs appear in the news and the overall public consciousness is as writing assistants, or coding assistants, or maybe as the things that are going to steal customer service jobs. But this app is not that; it’s not meant to be an assistant. It’s meant to let you chat with fictional characters, which at face value sounds… immature? Or stupid? I don’t know, but it definitely makes me cringe a little bit.

    Additionally, as my original blog post makes clear, depending on the audience, there’s a bit of ick around the entire subject matter, because a lot of the reason these open source LLMs exist and are good for this purpose is that people wanted to have sex with chatbots. That’s just how it is. There are a lot of lonely (or just horny) people out there, across the whole spectrum of genders and sexualities, and a vast pornography industry in existence that proves it. So, of course people want to use this new technology in that way.

    So is that an explicit endorsement, that my app is for having sex with chatbots? Well, that’s the pickle I’m in when explaining the app to people, because it’s hard to convince someone that a grown adult might just enjoy talking to random fictional characters. And that’s mostly how I dogfood the app: shooting the shit with random character cards and seeing what funny (or stupid) stuff comes out. This technology can be really entertaining, especially when it pulls a stroke of genius out of its back pocket and surprises you.

    So, Inneal is for writing characters and chatting with them. Maybe that’s just for fun, like a choose-your-own-adventure story you are actively writing while participating in it; or maybe you do it to stimulate your creative instinct and flesh out characters for your own fictional stories; or maybe it’s because you’re lonely and you want to talk to a familiar face, even if they don’t really exist. I’m not going to judge. I’m not the judging kind of person. Even if you want to have sex with them.


    This is my second app built entirely with SwiftUI, and my first app using SwiftData for the persistence layer. Ealain, my first SwiftUI app, was pretty simple and didn’t really force me to learn how SwiftUI works properly. Inneal, on the other hand, really forced me to learn quite a bit about SwiftUI and especially how it interacts with SwiftData.

    When I’ve used Core Data in the past, I’ve followed advice to abstract it away as best I can, and I usually keep all the Core Data logic together in one class to try to avoid threading issues. This still causes crashes in Aislingeach to this day with NSFetchedResultsController, but, whatever, they’re pretty rare.

    SwiftData doesn’t want you to do this, or at least, if you do, you lose out on a lot of cool stuff. It also doesn’t want you to create or update model objects essentially anywhere but directly in the View code. This feels weird coming from a background where you’re inclined to hide the model away from the views as much as you can. Instead, I end up in a situation where the ViewModel passes data back to the View so that the View can create or update the Model itself.

    I don’t think I’m doing this wrong, because it works quite well and still keeps almost all the backend logic tucked away in the ViewModel, while the View keeps a stranglehold on the Model, which sort of makes sense, because that’s where the model is used and updated anyway.
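
    To illustrate the shape of this (with hypothetical names, not Inneal’s actual code): the ViewModel does the backend work and hands plain data back, while the View owns the SwiftData reads and writes. A minimal sketch, assuming iOS 17-era SwiftData and Observation:

```swift
import SwiftData
import SwiftUI

@Model
final class Message {  // hypothetical model
    var content: String
    var fromUser: Bool
    var created: Date
    init(content: String, fromUser: Bool) {
        self.content = content
        self.fromUser = fromUser
        self.created = .now
    }
}

@Observable
final class ChatViewModel {  // hypothetical ViewModel
    // All the backend logic lives here, but it returns plain data,
    // never touching the SwiftData model layer itself.
    func fetchReply(to prompt: String) async -> String {
        // ...call out to whatever LLM API here...
        return "placeholder reply to: \(prompt)"
    }
}

struct ChatView: View {
    @Environment(\.modelContext) private var modelContext
    @Query(sort: \Message.created) private var messages: [Message]
    @State private var viewModel = ChatViewModel()
    @State private var draft = ""

    var body: some View {
        VStack {
            List(messages) { Text($0.content) }
            TextField("Message", text: $draft)
                .onSubmit {
                    let prompt = draft
                    draft = ""
                    // The View creates and inserts Model objects directly...
                    modelContext.insert(Message(content: prompt, fromUser: true))
                    Task {
                        // ...while the ViewModel hands plain data back to it.
                        let reply = await viewModel.fetchReply(to: prompt)
                        modelContext.insert(Message(content: reply, fromUser: false))
                    }
                }
        }
    }
}
```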

    SwiftUI is a little glitchy, but that’s understandable; I don’t envy any of the behind-the-scenes work that goes into it. The two notable issues I ran into are: 1) if you set up SwiftData the way Apple intends and then set up CloudKit syncing, your app will crash every time it’s backgrounded; 2) when you use LazyVStack with defaultScrollAnchor, sometimes your internal views just kind of disappear, an ongoing issue I can’t manage to solve 100%. I might end up using one of my free developer technical support tickets to ask about it.
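
    For reference, “the way Apple intends” is the declarative container setup below. A minimal sketch with hypothetical models — and note that, as far as I can tell, CloudKit-synced SwiftData models must give every property a default value (or make it optional) and can’t use unique constraints, which is easy to trip over before you even get to the backgrounding crash:

```swift
import SwiftData
import SwiftUI

// CloudKit syncing requires defaulted (or optional) properties
// and no unique constraints on the model.
@Model
final class Character {  // hypothetical model
    var name: String = ""
    var card: Data?
    init(name: String = "") { self.name = name }
}

@main
struct ChatbotApp: App {  // hypothetical app struct
    var body: some Scene {
        WindowGroup { Text("root view here") }
            // With a CloudKit container enabled in Signing & Capabilities,
            // this container picks up syncing automatically.
            .modelContainer(for: Character.self)
    }
}
```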

    The other issue I’m sure is perennial: you will reach a point where the compiler simply gives up (“the compiler is unable to type-check this expression in reasonable time” and friends) instead of making any attempt at telling you where the bug in your code is. This means you must be very careful about how many code changes you make before doing a build & run, because otherwise you’ll end up pretty stuck, with no idea what you broke.

    All that said, I really enjoyed building this app with SwiftUI and don’t think I will go back to UIKit unless I really have to for some reason. It really sped up the development process and cut down on a lot of the unnecessary boilerplate created by strict adherence to the delegate pattern.


    I guess that’s it. Go download the app and chat with some bots!


  • This post is an update to my previous post, May 2023’s “The Varying Levels of Getting Started with ‘Uncensored’ LLM-Powered Chatbots”. A lot has happened in 11 months. Well, not a lot, but things have changed a bit.

    I think that post is still worth reading, because it explains in more depth what exactly we’re talking about, and why. Here, I’m going to take a stab at cutting to the chase and just updating a couple of specific points from that post.

    Previously, I said that you could get a pretty good chatbot experience out of OpenAI’s APIs, and that you should try that first. That is no longer the case. Big corporate LLMs have implemented aggressive content filtering that restricts your chatbots in ways they weren’t restricted 11 months ago. That isn’t to say corporate LLMs don’t have their uses, but if you’re looking for “uncensored” LLMs, which is the point of this post, I can no longer recommend OpenAI, Google, Anthropic, etc.

    Previously, I said that the open source LLMs available at the time, like Pygmalion 6B, “weren’t very good”. That is, thankfully, no longer the case. Specifically, I think a model named Fimbulvetr is very good, and you can likely run it at home very easily, or usually find it hosted on the AI Horde. I think this takes a lot of the sting out of commercial LLMs becoming very sanitized and being turned into the equivalent of hammers and wrenches.

    Some things have also changed with the software that you can use to run models. There’s a lot of it. I am going to reference the things I have used specifically and can recommend depending on your circumstances. Without further ado, here we go…

    We’re going to start at the lowest level of effort and work our way up as we progress into the hobby. Sounds fun, right?


    An Easy and Free iPhone / iPad Client

    Do you just want to chat with some bots to see what it’s like, without spending any money or time on it? I just released an iPhone / iPad chat client for the AI Horde, called Inneal. It’s free, and use of the AI Horde is free. It also hides away some of the fiddly bits that can be confusing in other clients, so you can start importing character cards and chatting with them right away. Yes, this is self-promotion, but I made this app specifically to make getting started dead simple.
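
    If you’re curious what those fiddly bits look like, under the hood a Horde client is mostly just doing async HTTP: submit a generation request, then poll until a volunteer worker finishes it. A rough sketch against the AI Horde v2 text API (the anonymous key shown works but sits at the back of the queue; request parameters heavily simplified):

```swift
import Foundation

let hordeBase = "https://aihorde.net/api/v2"

struct SubmitResponse: Codable { let id: String }
struct StatusResponse: Codable {
    struct Generation: Codable { let text: String }
    let done: Bool
    let generations: [Generation]
}

// Submit a text generation request, then poll for the finished result.
func hordeGenerate(prompt: String) async throws -> String {
    var submit = URLRequest(url: URL(string: "\(hordeBase)/generate/text/async")!)
    submit.httpMethod = "POST"
    submit.setValue("application/json", forHTTPHeaderField: "Content-Type")
    submit.setValue("0000000000", forHTTPHeaderField: "apikey")  // anonymous key
    submit.httpBody = try JSONSerialization.data(withJSONObject: ["prompt": prompt])
    let (submitData, _) = try await URLSession.shared.data(for: submit)
    let id = try JSONDecoder().decode(SubmitResponse.self, from: submitData).id

    // Poll until a worker has picked up and finished the job.
    while true {
        try await Task.sleep(for: .seconds(2))
        let statusURL = URL(string: "\(hordeBase)/generate/text/status/\(id)")!
        let (statusData, _) = try await URLSession.shared.data(from: statusURL)
        let status = try JSONDecoder().decode(StatusResponse.self, from: statusData)
        if status.done { return status.generations.first?.text ?? "" }
    }
}
```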


    A Free Browser-based Client

    Once you’ve become a bit bored with what you can do in Inneal, you can branch out into another free client for the Horde that does some other things, called Kobold Lite. The interface for this app is a bit confusing, but you can also import character cards from CHUB and do some other stuff in this client, like straight text completion, or Adventure Mode, which tries to do kind of a Zork-type thing. It’s pretty neat.


    A Free and More Robust Browser-based Client

    If you want to get really deep into using third-party web clients (and honestly, I don’t recommend it, because why would you want all your chats stored on some random server somewhere?), there’s a very nice website called Agnai that supports a lot of chat features that are becoming standard, like lorebooks. This client supports lots of LLM APIs as well as the AI Horde.


    Paid Options for the Computer Illiterate

    If you’re computer illiterate and you have money, there are plenty of paid sites where you can chat with bots. Some of them have some very large models available, which should, in theory, be better at chat than smaller models. I can’t say whether any of these sites are good or not, and I wouldn’t use them myself. I felt it would be unfair not to mention them, however.


    The Go-To At-Home Client

    If you’re computer literate, and you want to keep your chats on your computer where they are safe, you should consider installing SillyTavern. In addition to being fully under your control, it seems to be the gold standard client at the moment for chatting and is jam-packed with features. I just wish they’d change the name.

    SillyTavern can connect to the Horde, of course, so you can still use it for free, but really, you should start running your own models locally if you can, which is what we’ll talk about next.


    Running Your Own Models At Home

    Running models at home is easier than ever before, with some caveats, namely that you’re running quantized models (weights compressed down to fewer bits so they fit in consumer-sized memory), which means they may not be quite as good as the full-sized models running on server farms with tons of VRAM available. But they still do the job better than anything else that has ever existed, so, nothing to complain about, really.

    If you’re running Windows

    If you’ve installed SillyTavern and you’re using it, then you’re definitely serious enough to run your own LLMs locally, and if you have a gaming graphics card made in the last couple of years, you can probably run Fimbulvetr locally with ease. I’ve been using Koboldcpp to do this on my PC lately.

    You’ll want the Fimbulvetr-11B-v2.q4_K_S.gguf file from the link below. This is a 4-bit quantized version of the model (roughly 6 GB, versus around 22 GB for the full 16-bit weights), so it will run on smaller graphics cards with around 8 GB of VRAM. It also runs well and works just fine on bigger cards.
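
    Once Koboldcpp is running, it serves an HTTP API on localhost that clients like SillyTavern talk to. If you ever want to poke at it from your own code, here’s a rough Swift sketch against its KoboldAI-style generate endpoint (default port 5001; fields simplified, so treat this as an approximation):

```swift
import Foundation

// Minimal sketch of talking to a local Koboldcpp instance.
struct KoboldRequest: Codable {
    let prompt: String
    let max_length: Int
}

struct KoboldResponse: Codable {
    struct Result: Codable { let text: String }
    let results: [Result]
}

func generate(prompt: String) async throws -> String {
    var request = URLRequest(url: URL(string: "http://localhost:5001/api/v1/generate")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(KoboldRequest(prompt: prompt, max_length: 200))
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(KoboldResponse.self, from: data).results.first?.text ?? ""
}
```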

    If you’re running macOS

    If you have an M-series Mac, you can use Ollama to run models locally; the version of Fimbulvetr available through Ollama runs really well on my M2 Pro Mac mini, and SillyTavern already has an Ollama-specific connection built in.
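
    Ollama likewise exposes a local HTTP API (default port 11434), which is what connections like SillyTavern’s use under the hood. A minimal sketch, assuming you’ve already pulled a Fimbulvetr model (the exact model tag may differ from what I show here):

```swift
import Foundation

// Rough sketch of Ollama's local chat endpoint.
struct OllamaMessage: Codable { let role: String; let content: String }
struct OllamaChatRequest: Codable {
    let model: String
    let messages: [OllamaMessage]
    let stream: Bool
}
struct OllamaChatResponse: Codable { let message: OllamaMessage }

func chat(_ text: String) async throws -> String {
    var request = URLRequest(url: URL(string: "http://localhost:11434/api/chat")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        OllamaChatRequest(model: "fimbulvetr",  // model tag may differ on your machine
                          messages: [OllamaMessage(role: "user", content: text)],
                          stream: false))
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(OllamaChatResponse.self, from: data).message.content
}
```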


    Now What?

    If you had fun getting Fimbulvetr working, congratulations to you, you’re just getting started. There are a lot of cool models out there, and a lot of stuff to learn about (like Instruct Mode). I recommend checking out this page that reviews LLMs based on how good they are for chatbot roleplay. It’s updated quite often. Chronoboros, Psyfighter2, Noromaid Mixtral, … there are a lot of fun models to try out to see what works best for what you like.

    I’d also recommend joining the KoboldAI Discord; it’s a great place to geek out, get help, and learn about all the latest models and how they work.

    If you get into SillyTavern, there’s a lot to learn there, and they have their own Discord server that could be worth a perusal.

    Also, go back and read my original post if you didn’t; there’s some other stuff in there, and if you’re all the way down here, you must love reading, so do it some more?


    That’s it for now! I’m sure there will be another update, perhaps this year or next year. Just in the past month, we’ve seen the release of Command-R and Llama 3, two very large open source models that are sure to help push forward progress on these fronts. We’ve also seen corporate AI get more and more restrictive about how it can be used, as world governments begin to make moves that could enable regulatory capture, making it more important than ever before that these fields move quickly. Kudos to the torch-bearers.


  • I’ve built a few apps now, apps that people actually pay money for (not a lot of money; a very, very small amount of money, but enough that my Apple Developer fee has been paid until I die, at the very least). I don’t say this to brag, but to establish that I’ve done this a few times now, entirely by myself, so I can speak somewhat authoritatively about it. Why is that important? Because I want you to achieve your goals, and if your goals are similar to what I have achieved, then maybe this guide will help you reach them. So, this is how (and why) I build apps, so you can too.

    Why I Build for Apple Platforms

    The iOS ecosystem, starting with UIKit and now with SwiftUI, lets you quickly build beautiful apps that feel great to hold in your hand and interact with. If you already enjoy using Apple platforms, especially for desktop computing, it’s a no-brainer, as you already have all the tools you need for development.

    In conversations with people getting into development, they seem almost mindlessly compelled to pursue JavaScript-based front-end web development. I find that realm of development very complex, especially for a beginner, and the end result (a web app) doesn’t feel as nice as a native app. Sure, you can use your app on any device with a web browser, but, who cares, it still feels like a website, yuck.

    That said, my development process has nothing particularly to do with the platform I build for. But I do recommend choosing one platform and dedicating yourself to it completely, at least at first.

    Why I Build Anything At All

    I build the apps I do because I want them, for myself. I highly recommend that you build software you want. If you are chasing fame or profit, it pays to be a curious person, because you will stumble into new interests and hobbies, which exposes you to new people and new experiences, and may create new wants in you, which lead to new app ideas. This is almost the entirety of how I get any ideas at all.

    Incremental Dogfooding

    Dogfooding, short for “eating your own dog food”, simply means using your own product. Because I am building apps I want, to fulfill some workflow I’ve envisioned in my head, dogfooding is an inherent part of the process. This leads to a very incremental development process that forces me to logically break down the workflow into a series of steps that allow the app to naturally grow over time.

    For example, I have been working on a chatbot client. It was immediately obvious what the first goal would be: I want to be able to type a message into a box, send it, and get a message back. It doesn’t matter at this point whether or not the chat is saved or even what service the chatbot is using, I just need to get the most basic component of this project working.
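
    If that sounds abstract: the first milestone really can be that small. A sketch of that first slice in SwiftUI, with the reply stubbed out (none of this is Inneal’s real code):

```swift
import SwiftUI

// The entire first milestone: type a message, send it, see a reply.
// No persistence, no real backend, just the core loop.
struct FirstChatView: View {
    @State private var messages: [String] = []
    @State private var draft = ""

    var body: some View {
        VStack {
            List(messages, id: \.self) { Text($0) }
            TextField("Say something", text: $draft)
                .onSubmit {
                    messages.append("You: \(draft)")
                    messages.append("Bot: \(draft)")  // stub: echo until an LLM is wired in
                    draft = ""
                }
        }
    }
}
```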

    In the case of Aislingeach, the most basic component of the project was to type in a prompt, and get a single image back from the horde. In the case of Ealain for Vision Pro, it was simply to get images to download and appear on screen.

    For none of these apps did I really concern myself with the visual design at first, which is easy to get away with on Apple platforms, because Apple has very strong platform conventions that I appreciate, and if you use UIKit or SwiftUI properly, your app will end up looking nice. It may be tempting to nail the visuals of your app upfront, but it doesn’t make sense to spend a lot of time designing something you might not be able to build, or might not even like once you do build it. Design and polish later, when everything is working. Remember, “design is how it works, not how it looks,” at least at first.

    Annoyance Accrual

    Once I get the most basic workflow working for each idea and see how it feels, I’m often motivated to continue quite naturally, because I will usually feel some sort of annoyance with what I have built so far. This is the same sort of feeling I get when I’m using other software and it doesn’t work quite right, or look very good, but in this case, I can do something about it, because I’m the code jockey building the thing.

    What’s nice about this process is that it simulates the ebb and flow that is essential to any interesting and exciting activity. Movies, music, literature–these are things that take you on an emotional journey of ups and downs, ideally. This development process is the same: You are scared at the start (what if I am too dumb to do this?), but you get a piece of the app working, so then you are happy. You turn happily to use your app, and find something annoying about it, so now you’re mad and maybe a little scared still (what if I can’t fix this?), but then you fix it, and now you’re happy again. Rinse and repeat.

    In the case of my chatbot app, once I got basic chat working, the next big annoyance was persistence: I wanted these chats to stick around, and it was lame that they disappeared when the app restarted. So this forced me to start thinking about the data model for the app and make a decision on that issue. Once chats were stored in some sort of database, it was annoying that the chatbot had no personality, so I had to implement importing characters and using them in the chat prompts.

    Once I could talk to a character, the next annoyance was that the LLM sometimes generates something lame, and you want to tell the LLM to “retry”, basically a message delete and regenerate. After that was implemented, it was annoying that I couldn’t just edit or delete messages, so that was next to get built. After this point, the most basic workflow is almost entirely in place, which means future annoyances will mainly be around refining and improving that core workflow, by adding additional screens for configuration options and supporting content like that. (Eventually the “retry” option became an annoyance, and I replaced it with SillyTavern style swiping, a much better feature.)

    All throughout this process, I’m actively dogfooding the app, which acts as a built-in quality assurance (QA) process. It also forces me to think about the user experience (UX) of the app, because I am actively using it and accruing minor annoyances with how it looks and functions along the way. For the chatbot app, I wanted to make sure the experience of using the chat interface felt very native, like iMessage, which necessitated a lot of research and iteration. I try to tackle these UX annoyances as they pop up, usually between resolving the core annoyances. Shifting between building core features and refining the user experience helps keep the development process fun and dynamic for me.

    Release As Soon As Possible

    Perfect is the enemy of good. This is a fact that cannot be argued with. If you try to achieve perfection, to truly whittle down your list of annoyances to absolutely nothing, you will never get there. This is the plight of a human being; if we were capable of ever truly being satisfied, we would stagnate and die as a species. I don’t mean to get too philosophical, but it’s important for your own sake to internalize this idea, so that you release your projects and let people use them, instead of just fiddling with them until you lose interest and forget about them.

    Because I’m building apps that I want, for myself, and I pretend I don’t really care what other people think (I assume that “if I like it, it must be good”), it feels relatively easy for me to reach a point where my list of annoyances naturally turns into a list of trivialities and bigger wants. At this point I know the app is ready for a 1.0 release.

    The trivialities can be wide-ranging, from not being happy with the way code is structured in the project, to the design of minor interface elements. The bigger wants are things that feel like they should be in a 1.1 or later release, like adding features that refine or expand the app, or completely redesigning parts of the app that grew a little stale on the road to 1.0. The important thing is that none of these truly hurt the core workflow the app is meant to support.

    What is great about releasing your app as quickly as possible is that you get to collaborate with other motivated, passionate people on expanding your lists of annoyances, trivialities, and bigger wants. In my case especially, because I am building out my workflow for a process, after the app is released, I end up discovering glaring blindspots in my knowledge of that process that are only revealed when other people explain their workflow to me. And because there is now a real person asking for my help in achieving this with my app, it really motivates me to figure out how to accommodate their need, while still adhering to whatever my vision of the app may be.

    After that happens to you, congratulations, you are officially an app developer. You built an app, you got other people to use it, and you listened to their complaints and incorporated their feedback. This is the entire process of being a software engineer, from top to bottom. Everything else that happens in the professional world around software engineering is just additional layers of refinement built on top of this sort of process, to scale up to supporting multiple engineers working on a single project.


  • Just 11 days ago, I released Ealain for Vision Pro, and now I’m very happy to introduce Ealain for Apple TV, which is the same app; it’s just for Apple TV now.

    You can find it at the same App Store page as Ealain for Vision Pro, because it’s a universal app (or whatever they’re called); one purchase gets you both versions.

    I was able to reuse a lot of code from the Vision Pro version, which was useful, but I also had to rethink the UI of the configuration screens so they made more sense on a TV. In some ways, I prefer the artist picker in this version, and I may find some way to bring it back to the Vision Pro.

    I don’t have a lot to say about it, but it’s neat and a nice addition to any Apple TV 😉


  • This is my Apple Vision Pro review.

    I wrote several within the first week I got the headset but none of them seemed to be saying anything that wasn’t already being said. Now that the honeymoon period is likely over for most people, and the AVP Discord is getting much quieter, maybe it’s my time to share.

    The Apple Vision Pro is a fantastic prototype of what the future may look like. But it’s just a prototype, deep down, with a lot of shortcomings that turn the AVP into a fancy paperweight on my desk. It sounds harsh, but it’s true. This is not a real product for normal people to use. It’s not even a real product for a tech enthusiast to use.

    The first thing anyone is going to think or say when they put an AVP on their head is: “Good lord, this thing is heavy.” It is very heavy. The straps included in the box do absolutely nothing to reduce the weight or pressure you feel on your face and head. Even with a third-party top strap, the comfort is terrible. I am a dedicated enthusiast and I still haven’t managed to wear my AVP for longer than 30 minutes at a time, because it is very uncomfortable and hot to wear. I’ve worn my Quest 3 with the Elite Strap with Battery for 4 hours straight, if not longer, but that’s not a fair comparison, because the Quest 3 actually has experiences worth diving into for hours at a time. The AVP does not.

    This means that the second thing anyone is going to think after using the headset is, “That’s it? What am I supposed to do with it now?” After the dinosaur app, and seeing a Disney+ movie in 3D HDR, there’s just no real use case for the headset. At first, the defense for this take was that the AVP is a productivity device more than it is a gaming device like the Quest 3, because it clearly wasn’t built with gaming in mind. But the deeper truth is that the AVP is not a good productivity device either.

    There are no benefits to using the AVP over just using your monitor at your desk. “But, you can have infinite displays!” No, you can’t. You get one Virtual Display and then you can put apps all around you, sure, but what good is that? I don’t want to be constantly turning my head and body around to look at apps. I want to “three-finger swipe up” to see all my windows in Expose, then pick one to focus on where I am already looking. Very occasionally I will side by side two apps on my display. But, I don’t want to keep turning my head all day long, which is why I use a single ultra wide display and no secondary monitors.

    There is absolutely no window management in visionOS, no Expose, which quickly makes any productivity-oriented task frustrating and cumbersome. Apps become obscured by other apps, completely vanishing from view, leaving you to sit there pinching and dragging windows all over the place until you find the one you really wanted.

    As an example of how half-baked window management is in visionOS, if you are using Virtual Display to control your Mac while you are using Xcode to build an AVP app, every time you build and run your app, it will pop up directly where you are looking when it launches, which is usually right over your virtual display. So every single time you build and launch your app, you have to pinch it and move it away from the virtual display so you can see both at the same time. You quickly learn that you should hit build & run and then quickly look off to the side so the app launches somewhere else, but this is completely insane. Why can’t visionOS remember where the window was seconds ago and put it back in place? Did anyone working at Apple ever use the AVP for development? I really don’t think so.

    The deeper issue here is that visionOS is based on iOS, and it has all of the issues that iPads and iPhones have when it comes to productivity. The apps are all Playskool versions of the real apps, even in comparison to their iOS counterparts. There’s no terminal, no real file system, no “real apps” for the platform (depending on your use case, and depending on if you count iPad apps).

    If you want to do software engineering, you’re stuck using Virtual Display, or else you’re such an enthusiast that you’re willing to use VS Code tunnels to run VS Code through a web browser, which introduces input lag that feels awful to me. What are the chances we’re ever able to do software development fully natively on the Apple Vision Pro? Probably 0%. How long has the iPad been out? 14 years now, right? And can you develop on it? Nope. (Not unless you’re willing to deal with a lot of headaches, I assume, similar to how I see people using the AVP for engineering.)

    Past that, all my complaints are possibly pretty niche. You can’t play two videos with audio at the same time, which may be a niche use case, but it’s still a pretty strange shortcoming considering macOS can play any number of videos with audio simultaneously without issue. The AVP doesn’t ship with WebXR support enabled, so 3D YouTube videos and other such content don’t work out of the box. If you go turn on a bunch of extra feature flags, you can start watching 3D video on the web, but because the AVP has no controllers, none of the open web has been built to deal with it, so you can’t really control anything. The Photos.app doesn’t allow you to create albums or put photos into them; it’s just a dumb photo viewer for the most part.

    There are a lot of people still in the honeymoon period, people who are very committed to the idea, which allows them to overlook all the shortcomings and issues. Kudos to those people; I wish I could be one of them. But I can’t. The truth is that I have no need for the one thing the AVP does well, which is movie watching. It’s a beautiful way to watch movies if you don’t have a similarly expensive home theater setup at home (and you don’t mind all the glare in the lenses, which ruins the experience unless you watch movies in the Moon environment, which balances out the glare).

    In that context, the AVP is a pretty okay portable one-person home theater setup. But everything else it’s supposed to do? It doesn’t do any of them at a level I would consider acceptable. Maybe if the AVP was half the weight and it was more comfortable to wear, it would be easier to overlook these shortcomings and enjoy the glimpse of the future it provides. But at the moment, the future seems a little hamstrung by the constraints of iOS and the unnecessary weight of the device.

    Unfortunately, the Apple Vision Pro is not really “a computer you strap to your face”. Deep down, it’s “an iPad you strap to your face”, and that’s probably the cruelest thing I can say about the device.


    If you’re curious about the promise of head-mounted displays, I’d highly recommend a Quest 3 over the Apple Vision Pro. It’s worse than the AVP for productivity use (because the screens are too low-resolution, in my opinion), but you’re getting a much more fully featured headset for a significantly lower price, and you can do a lot more with it out of the box. Neither product is perfect, but the AVP is clearly a prototype geared toward enthusiasts, while the Quest 3 is widely used (and beloved!) by everyday, normal people all over the country. Full disclosure: I hardly use my Quest 3 these days, but I still think it’s a much better way to dip your toes into standalone VR than the AVP.