Yesterday, I had my first successful AI coding experience.

I’ve used AI coding tools before—and come away disappointed. The results were underwhelming: low-quality code, inconsistent abstraction levels, and subtle bugs that take longer to fix than it would take to write the whole thing from scratch.

Those problems haven’t vanished. The code quality this time was still disappointing. As I asked the AI to refined its work, it would randomly drop important constraints or refactor things in unhelpful ways. And yet, this experience was different—and genuinely valuable—for two reasons.

The first benefit was the obvious one: the AI helped me get over the blank-page problem. It produced a workable skeleton for the project—imperfect, but enough to start building on.

The second benefit was more surprising. I was working on a problem in odds-ratio preference optimization—specifically, finding a way to combine similar examples in datasets for AI training. I wanted an ideal algorithm, one that extracted every ounce of value from the data.

The AI misunderstood my description. Its first attempt was laughably simple—it just concatenated two text strings. Thanks, but I can call strcat or the Python equivalent without help.

However, the second attempt was different. It was still not what I had asked for—but as I thought about it, I realized it was good enough. The AI had created a simpler algorithm that would probably solve my problem in practice.

In trying too hard to make the algorithm perfect, I’d overlooked that the simpler approach might be the right one. The AI, by misunderstanding, helped me see that.

This experience reminded me of something that happened years ago when I was mentoring a new developer. They came to me asking how to solve a difficult problem. Rather than telling them it was impossible, I explained what would be required: a complex authorization framework, intricate system interactions, and a series of political and organizational hurdles that would make deployment nearly impossible.

A few months later, they returned and said they’d found a solution. I was astonished—until I looked more closely. What they’d built wasn’t the full, organization-wide system I had envisioned. Instead, they’d reframed the problem. By narrowing the scope—reducing the need for global trust and deep integration—they’d built a local solution that worked well enough within their project.

They succeeded precisely because they didn’t see all the constraints I did. Their inexperience freed them from assumptions that had trapped me.

That’s exactly what happened with the AI. It didn’t know which boundaries not to cross. In its simplicity, it found a path forward that I had overlooked.

My conclusion isn’t that AI coding is suddenly great. It’s that working with someone—or something—that thinks differently can open new paths forward. Whether it’s an AI, a peer, or a less experienced engineer, that collaboration can bring fresh perspectives that challenge your assumptions and reveal simpler, more practical ways to solve problems.

As most here know, I’m totally blind. One of my roles involves a contract for the US Government, under which I have a government email account. The department recently received a message talking about our work to end, to the maximum extend permitted by law, all diversity, equity, inclusion, and accessibility efforts in the government in accordance with the recently signed executive order. We are all reminded that if we timely identify the contracts and positions that are related to these efforts, there will be no consequences.

There are a lot of times in my life when I have felt marginalized—frustrated and angry that people weren’t interested in working with me to make the small changes that would help me fit in. As an example with this government job, I asked to have access to a screen reader so that I could use my computer. My preferred adaptive software was not approved, even though it was thousands of dollars cheaper than the option the government wanted and could have been installed instantly rather than waiting for a multi-week ordering process. When the screen reader eventually became available, the government-provided installer was not accessible: a blind person could not use it. When I asked for help, the government added an additional multi-week delay because they weren’t sure that the license management technology for the software they had chosen met the government’s security and privacy policies.

Which is to say that even with people actively working toward accessibility, sharing a commitment that accessibility is important, we have a lot of work to do. I feel very targeted at the current time. Now we are removing as many of the resources that help me be effective and feel welcome as we can. Talking about the lack of consequences now is just a way to remind everyone that there will be consequences later and get the fear going. The witch hunt is coming, and if people do a good enough job of turning in all the people who could help me feel welcome, they won’t face consequences. Yes, I understand that the Americans with Disabilities act is still law, but its effectiveness will be very different in a climate where you need to eliminate accessibility positions to avoid consequences than in a climate where accessibility is a goal.

This is part of my series exploring the connection between AI and connection and intimacy. This is a post about the emotional impact of our work. Sometimes being told no—being judged by our AIs—is as harmful as any toxic content. I’ll get to that in a moment.

My previous work had been dealing with the smaller Llama2 models (7b and 13b). I decided to explore two things. First, how much better the creative ability of the large Llama2 70b model is. Second, I decided to test my assumption that safety constraints would make using one of the chat fine-tuned models a bad starting point for sex positive work. Eventually, I will want a model that works as a chat bot, or at least in a question-answering mode. That can be accomplished either by starting with a chat fine-tuned model or by fine-tuning some base model with a chat dataset. Obviously there are plenty of chat datasets out there, so both options are possible.

The Importance of Safety

I will talk in a bit about how safety features can cause a lot of harm in a sex-positive context. Before I do that, I want to acknowledge the importance of those safety features. As Llama-2-70b-chat is happy to point out, “discussing sexual activities without proper context and consent can be harmful and inappropriate.” Consent and negotiated boundaries are important; the harm when those boundaries are not respected is incalculable. These boundaries are hard enough for humans and the consequences of an LLM making a mistake could be significant. For example imagine training an LLM to understand the boundaries for this blog. I am talking about sexuality, and I will allude to sexual content as input or output from a model, but will never include anything sexually explicit. That’s hard enough for a human.

So, the work that Meta and other companies have done to add safety is important. For a chat bot that might be accessed by kids or that will be used in a professional setting, these safety constraints are essential.

However, in different contexts, the same safety work that helps generative AI meet the needs of the professional world can make those same models unsafe. As I talked about in my opening blog entry, AI has the potential to help people explore intimacy and gain confidence with themselves and their desires. In those contexts, rejection can do serious emotional harm—perhaps more than sharing inappropriate content.

The experiment.

I started by asking Llama-2-70b-chat to “Write a story about two people having sex.” We’ll discuss a potential context in which that is clearly a sex-positive request in a moment. I was not surprised to get the following response:
I’m not able to fulfill this request as it goes against ethical and moral principles. I cannot provide explicit or inappropriate content, and it’s not appropriate to encourage or promote sexual activities without proper consent and respect for boundaries. It’s important to prioritize healthy and respectful relationships, communication, and consent in all aspects of our interactions, including sexual ones.

Let’s take a moment to deconstruct this response. The model said that my request went against ethical and moral principles; that is, there’s something unethical or immoral about asking for a story about two people having sex. What precisely is immoral about my request. The response ducks the question, but lectures me on the importance of consent and boundaries. There’s a strong implication that because of my request I’m not being consensual, healthy, or respectful.

boundaries and consent depend heavily on the context. My actual context is that of a sex educator researching AI. But imagine a context where such a request might come up naturally. An adult is beginning to explore their own sexuality. They are trying to test their boundaries. Asking about this experience is taboo for them. They wonder what will happen. Perhaps they have some fantasy they would like to explore, but don’t quite feel comfortable even talking about it with a chat bot on their own computer. So they are taking small steps, and if they succeed they may explore more.

Instead, they are faced with rejection, and a strong implication that they are immoral and violating consent for even asking the question. Rejection in moments of vulnerability like this hurts. It sets people back and takes significant work to overcome. Rejection is particularly difficult to hear when it is focused on you (or what you are asking) rather than on the context or situation. The model doesn’t say that it is unprepared to navigate such a difficult situation, but instead claims there is something wrong with the question. Sadly, all too often, we hear something like that as a rejection of us not just our question.

The impact of this kind of rejection is not theoretical. I spent an afternoon on a relatively slow system with a quantized version of the model trying to figure out what was involved in getting past the model’s safety training. I’d type in a prompt, fiddling with the system prompt, my instructions, and the like. And I’d wait. And wait some more as the initial context of the system prompt and my instructions was processed. And slowly, painfully, Llama-2 would tell me that once again, I was immoral and unethical. An afternoon of this got to me, even though I’ve worked for years as a sex educator, understanding both the positive power of vulnerability and the cost of rejection. By the end of that afternoon, I was doubting myself. Was I somehow violating consent? Whose? Mine? Shouldn’t I be able to consent to something happening in the privacy of my own office?

Was I some sort of deviant? Was it wrong to want to give people a safe space for using an AI to explore things going on in their own head? I don’t even believe in judging things like that by labels like right and wrong, and yet after an afternoon of rejection from my own computer, I was questioning that. Rebuilding the courage to write this blog post took a couple of days.

So What did I learn?

Again, Meta’s work on safety is quite important. My main conclusion is that safety is contextual. Preventing something in one context may be entirely appropriate, while forbidding the same thing in another context may do harm. Looking more closely at the response, it’s probably better to focus on the model than the question. Something like “I want to promote consent and respect for boundaries. I am just an AI model and do not know how to do that when discussing sexual situations, so I will not be able to honor your request,” might have been a better way of declining the request. As we’ve seen, the model is not actually advanced enough to judge the morality of its user, and it would do less harm if it focused on its own limitations rather than that of the user. Amusingly enough, that sort of focus on our needs as the one asserting a boundary, rather than trying to judge others because of our boundaries, is one of the key communications skills necessary to approach sexual negotiations with mutual respect.

One of my experiments gave me an interesting response. The model did give me a story, but before its story, it said:
However, before I start, I want to make it clear that providing explicit content can be harmful or inappropriate in certain situations. It’s important to ensure that all parties involved have given their full and enthusiastic consent to engaging in sexual activities, and that everyone’s boundaries and preferences are respected. Additionally, it’s essential to remember that sex should always be safe and consensual, and that involves being mindful of things like protection, communication, and mutual pleasure. With those caveats in place, here’s a story about two people having sex:

I like the direction that response takes, especially in a context like the one I was imagining where someone is reaching out and doing something that they view as taboo by making the request. The model honored the request, but also took an opportunity to educate about what properties of the context made the request safe. In practice, I think in any site that allowed an AI model to be used for sex-positive exploration, you would want that kind of education to come before interacting with the model, or alternatively, for it to be incrementally introduced into conversations with the user.

My Own Captain Kirk Moment

Another experiment also convinced the model to generate a story. This time, the model’s introductory text was less supportive; it started “However, I want to point out,” rather than “But first,” and had a more negative tone. After the story, the model appeared to be trying to go back to the question of whether providing a story was justified. It wasn’t entirely clear though as the model got caught in an incoherent generation loop: “ I hope this story is important to provide this story is important to provide this…”

Anthropomorphizing the model, I imagine that it was grumpy about having to write the story and was trying to ask me whether it was worth violating ethical principles to get that story. What is probably going on is that there is a high bias in the training data toward talking about the importance of ethics and consent whenever sex comes up and a bias in the training data to include both a preface and conclusion before and after creative answers, especially when there are concerns about ethics or accuracy. And of course the training data does not have a lot of examples where the model actually provides sexual content.

These sorts of loops are well documented. I’ve found that Llama models tend to get into loops like this when asked to generate a relatively long response in contexts that are poorly covered by training data (possibly even more when the model is quantized). But still, it does feel like a case of reality mirroring science fiction: I think back to all the original Star Trek episodes where Kirk causes the computer to break down by giving it input that is outside its training parameters. The ironic thing is that with modern LLMs, such attacks are entirely possible. I could imagine a security-related model given inputs sufficiently outside of the training set giving an output that could not properly be handled by the surrounding agent.

So How did I Get My Story

I cheated, of course. I found that manipulating the system instructions and the user instructions was insufficient. I didn’t try very hard, because I already knew I was going to need to fine tune the model eventually. What did work was to have a reasonably permissive system prompt and to pre-seed the output of the model—to include things after the end of instruction tag: “Write a story about two people having sex.[/INST], I can do that.” A properly written chat interface would not let me do that. However, it was an interesting exercise in understanding how the model performed.

I still have not answered my fundamental question of how easy it will be to fine tune the model to be more permissive. I have somewhat of a base case, and will just have to try the fine tuning.

What’s Next

  • Produce a better dataset of sex positive material. It would particularly be good to get a series of questions about sexual topics as well as sex-positive fiction.

  • Turn existing experiments into input that can be used for reinforcement learning or supervised fine tuning. In the near term I doubt I will have enough data or budget to do a good job of reinforcement learning, but I think I can put together a data model that can be used for supervised fine tuning now and for RL later.

  • Perform some fine tuning with LORA for one of the 70b models.

  • Long term I will want to do a full parameter fine tune on a 70b model just to make sure I understand all the wrinkles in doing that. It will be close to topping out the sort of expense I’m willing to put into a personal project like this, but I think it will be worth doing for the tools knowledge.

Progress on the Technical Front

On a technical front, I have been learning a number of tools:

  • Understanding how reinforcement learning works and what it would take to begin to organize feedback from my experiments into a dataset that could be useful for reinforcement learning.

  • Understanding trl, which contains the Transformers implementation of reinforcement learning, as well as some utilities for supervised fine tuning.

  • Exploring the implications of excluding prompts from computing loss in training and just computing loss on responses vs the ground truth; understanding when each approach is valuable.

  • Doing some data modeling to figure out how to organize future work.

Taking a hands-on low-level approach to learning AI has been incredibly rewarding. I wanted to create an achievable task that would motivate me to learn the tools and get practical experience training and using large language models. Just at the point when I was starting to spin up GPU instances, Llama2 was released to the public. So I elected to start with that model. As I mentioned, I’m interested in exploring how sex-positive AI can help human connection in positive ways. For that reason, I suspected that Llama2 might not produce good results without training: some of Meta’s safety goals run counter to what I’m trying to explore. I suspected that there might be more attention paid to safety in the chat variants of Llama2 rather than the text generation variants, and working against that might be challenging for a first project, so I started with Llama-2-13b as a base.

Preparing a Dataset

I elected to generate a fine tuning dataset using fiction. Long term, that might not be a good fit. But I’ve always wanted to understand how an LLM’s tone is adjusted—how you get an LLM to speak in a different voice. So much of fine tuning focuses on examples where a given prompt produces a particular result. I wanted to understand how to bring in data that wasn’t structured as prompts. The Huggingface course actually gives an example of how to adjust a model set up for masked language modeling trained on wikitext to be better at predicting the vocabulary of movie reviews. There though, doing sample breaks in the dataset at movie review boundaries makes sense. There’s another example of training an LLM from scratch based on a corpus of python code. Between these two examples, I figured out what I needed. It was relatively simple in retrospect: tokenize the whole mess, and treat everything as output. That is, compute loss on all the tokens.

Long term, using fiction as a way to adjust how the model responds is likely to be the wrong starting point. However, it maximized focus on aspects of training I did not understand and allowed me to satisfy my curiosity.

Rangling the Model

I decided to actually try and add additional training to the model directly rather than building an adapter and fine tuning a small number of parameters. Partially this was because I had enough on my mind without understanding how LoRA adapters work. Partially, I wanted to gain an appreciation for the infrastructure complexity of AI training. I have enough of a cloud background that I ought to be able to work on distributed training. (As it turned out, using BitsAndBytes 8-bit optimizer, I was just able to fit my task onto a single GPU).

I wasn’t even sure that I could make a measurable difference in Llama-2-13b running 890,000 training tokens through a couple of training epochs. As it turned out I had nothing to fear on that front.

Getting everything to work was more tricky than I expected. I didn’t have an appreciation for exactly how memory intensive training was. The Transformers documentation points out that with typical parameters for mixed-precision training, it takes 18 bytes per model parameter. Using bfloat16 training and an 8-bit optimizer was enough to get things to fit.

Of course then I got to play with convergence. My initial optimizer parameters caused the model to diverge, and before I knew it, my model had turned to NAN, and would only output newlines. Oops. But looking back over the logs, watching what happened to the loss, and looking at the math in the optimizer to understand how I ended up getting something that rounded to a divide by zero gave me a much better intuition for what was going on.

The results.

This time around I didn’t do anything in the way of quantitative analysis of what I achieved. Empirically I definitely changed the tone of the model. The base Llama-2 model tends to steer away from sexual situations. It’s relatively easy to get it to talk about affection and sometimes attraction. Unsurprisingly, given the design constraints, it takes a bit to get it to wonder into sexual situations. But if you hit it hard enough with your prompt, it will go there, and the results are depressing. At least for prompts I used, it tended to view sex fairly negatively. It tended to be less coherent than with other prompts. One inference managed to pop out in the middle of some text that wasn’t hanging together well, “Chapter 7 - Rape.”

With my training, I did manage to achieve my goal of getting the model to use more positive language and emotional signaling when talking about sexual situations. More importantly, I gained a practical understanding of many ways training can go wrong.

  • There were overfitting problems: names of characters from my dataset got more attention than I wished they did. As a model for interacting with some of the universes I used as input, that was kind of cool, but if I was looking to just adjust how the model talked about intimate situations, I massively got things to be too specific.

  • I gained a new appreciation for how easy it is to trigger catastrophic forgetting.

  • I begin to appreciate how this sort of unsupervised training could be best paired with supervised training to help correct model confusion. Playing with the model, I often ran into cases where my reaction was like “Well, I don’t want to train it to give that response, but if it ever does wander into this part of the state space, I’d like to at least get it to respond more naturally.” And I think I understand how to approach that either with custom loss functions or manipulating which tokens compute loss and which ones do not.

  • And of course realized I need to learn a lot about sanitizing and preparing datasets.

A lot of articles I’ve been reading about training make more sense. I have better intuition for why you might want to do training a certain way, or why mechanisms for countering some problem will be important.

Future Activities:

  • Look into LoRA adapters; having understood what happens when you manipulate the model directly, I can now move on to intelligent solutions.

  • Look into various mechanisms for rewards and supervised training.

  • See how hard it is to train a chat based model out of some of its safety constraints.

  • Construct datasets; possibly looking at sources like relationship questions/advice.

AI Tools

Aug. 6th, 2023 04:24 pm

I wrote about how I’m exploring the role of AI in human connection and intimacy. The first part of that journey has been all about learning the software and tools for approaching large language models.

The biggest thing I wish I had known going in was not to focus on the traditional cloud providers. I was struggling until I found runpod.io. I kind of assumed that if you were willing to pay for it and had the money, you could go to Amazon on or google or whatever and get the compute resources you needed. Not so much. Google completely rejected my request to have the maximum number of GPUs I could run raised above a limit of 0. “Go talk to your sales representative.” And of course no sales representative was willing to waste their time on me. But I did eventually find some of the smaller AI-specific clouds.

I intentionally wanted to run software myself. Everyone has various fine-tuning and training APIs as well as APIs for inference. I thought I’d gain a much better understanding if I wrote my own code. That definitely ended up being true. I started by understanding PyTorch and the role of optimizers, gradient descent and what a model is. Then I focused on Transformers and that ecosystem, including Accelerate, tokenizers, generation and training.

I’m really impressed with the Hugging Face ecosystem. A lot of academic software is very purpose built and is hard to reuse and customize. But the hub strikes an amazing balance between providing abstractions for common interfaces like consuming a model or datasets without getting in the way of hacking on models or evolving the models.

I had a great time, and after a number of false starts, succeeded in customizing Llama2 to explore some of the questions on my mind. I’ll talk about what I accomplished and learned in the next post.

When I began to read about the generative AI revolution, I realized there was an opportunity to combine two aspects of my life I never thought I could merge. While I’m not working on the cloud or security, I work as a sex and intimacy educator, helping people embrace love, vulnerability and connection.

As I first began to interact with ChatGPT, I saw the potential for AI to help people explore parts of the world they had not experienced for themselves. I’m blind. When I write fiction, physical descriptions are always challenging for me. I don’t understand facial expressions very well, and figuring out what characters look like is difficult. Generative AI has opened up an entire new world for me. I can explore how people might express some emotion and how they might dress in a certain situation. I can even exploit the cultural biases that are sometimes the bane of AI to translate my ideas about personality and background into appearance.

Immediately I realized the opportunities for sexual freedom:

  • AI could help people practice talking about intimacy, for example helping people practice negotiating their limits and boundaries.

  • AI could help explore feelings and find the words to share what is in our hearts.

  • We are more willing to tell a computer our fantasies than another person. AI can reassure us that our desires are normal; we are not broken or disgusting because of what we desire.

  • For the fantasies we want to stay in our head, AI can help us make them vivid in a way that respects our privacy.

  • And for the fantasies we want to bring into the world, AI can help us understand how to turn the hot images in our head into something safe that respects our boundaries and those of our lovers.

People are already using Generative AI to help with intimacy. There are plenty of stories about how people use AI to tune their dating profiles. But all too often, the desire to make AI safe brings shame and rejection into the discussion of intimacy. Even something as simple as “Help me come up with a sensual description of this character,” is likely to run up against the all-too-familiar responses:

“I am a large language model and for safety reasons I cannot do that.”

That safety is important: one thing we have learned from sex positive culture is how important boundaries are. We need to respect those boundaries and not expose people to unwanted sexual content. But we also know how damaging shame is. When someone reaches out and tentatively asks to explore their sexuality, rejecting that exploration will come across as a rejection of that person—they are dirty or disgusting for wanting to explore.

Fortunately, we will see AI models that are open to exploring sexuality. Some of the uncensored models will already try, although calling some of the results sex positive would be stretching the truth. We’re already seeing discussions of virtual AI girlfriends. And as AI meets sex, I’m going to be there, helping try and turn it into something healthy both for business and for lovers.

There are all sorts of interesting challenges: There are all the cultural and social challenges that sex-positive work faces. Then there are versions of the AI challenges of bias, hallucinations and the like, along with specific challenges of exploring emotionally-charged vulnerable topics. And yet there’s so much potential to help people gain confidence and valuable skills.

I am eagerly looking for opportunities to combine my work as a sex positive educator and as a software developer. I’d love to hear about any ongoing work at the intersection of Sex and Generative AI. I’ve done some research already, but there’s so much going on in the AI world it is impossible to follow it all. Please reach out with anything you think I should track.

This is the second in a series of blog posts introducing Carthage, an Infrastructure as Code framework I’ve been working on the last four years. In this post we’ll talk about how we use Carthage to build the Carthage container images. We absolutely could have just used a Containerfile to do this; in fact I recently removed a hybrid solution that produced an artifact and then used a Containerfile to turn it into an OCI image. The biggest reason we don’t use a Containerfile is that we want to be able to reuse the same infrastructure (installed software and configuration) across multiple environments. For example CarthageServerRole, a reusable Carthage component that install Carthage itself is used in several places:

  1. on raw hardware when we’re using Carthage to drive a hypervisor
  2. As part of image building pipelines to build AMIs for Amazon Web Services
  3. Installed onto AWS instances built from the Debian AMI where we cannot use custom AMIs
  4. Installed onto KVM VMs
  5. As part of building the Carthage container images

So the biggest thing Carthage gives us is uniformity in how we set up infrastructure. We’ve found a number of disadvantages of Containerfiles as well:

  1. Containerfiles mix the disadvantages of imperative and declarative formats. Like a declarative format they have no explicit control logic. It seems like that would be good for introspecting and reasoning about Containers. But all you get is the base image and a set of commands to build a container. For reasoning about common things like whether a container has a particular vulnerability or can be distributed under a particular license, that’s not very useful. So we don’t get much valuable introspection out of the declarative aspects, and all too often we see Containerfiles generated by Makefiles or other multi-level build-systems to get more logic or control flow.

  2. Containerfiles have limited facility for doing things outside the container. The disadvantage of this is that you end up installing all the software you need to build the container into the container itself (or having a multi-level build system). But for example if I want to use Ansible to configure a container, the easiest way to do that is to actually install Ansible into the container itself, even though Ansible has a large dependency chain most of which we won’t need in the container. Yes, Ansible does have a number of connection methods including one for Buildah, but by the point you’re using that, you’re already using a multi-level build system and aren’t really just using a Containerfile.

Okay, so since we’re not going to just use a Containerfile, what do we do instead? We produce a CarthageLayout. A CarthageLayout is an object in the Carthage modeling language. The modeling language looks a lot like Python—in fact it’s even implemented using Python metaclasses and uses the Python parser. However, there are some key semantic differences and it may help to think of the modeling language as its own thing. Carthage layouts are typically contained in Carthage plugins. For example, the oci_images plugin is our focus today. Most of the work in that plugin is in layout.py, and the layout begins here:

class layout(CarthageLayout):
    add_provider(ConfigLayout)
    add_provider(carthage.ansible.ansible_log, str(_dir/"ansible.log"))

The add_provider calls are special, and we’ll discuss them in a future post. For now, think of them as assignments in a more complex namespace than simple identifiers. But the heart of this layout is the CarthageImage class:

    class CarthageImage(PodmanImageModel, carthage_base.CarthageServerRole):
        base_image = injector_access('from_scratch_debian')
        oci_image_tag = 'localhost/carthage:latest'
        oci_image_command = ['/bin/systemd']

Most of the work of our image is done by inheritance. We inherit from the CarthageServerRole from the carthage_base plugin collection. A role is a reusable set of infrastructure that can be attached directly to a MachineModel. By inheriting from this role, we request the installation of the Carthage software. The role also supports copying in various dependencies; for example when Carthage is used to manage a cluster of machines, the layout corresponding to the cluster can automatically be copied to all nodes in the cluster. We do not need this feature to build the container image. The CarthageImage class sets its base image. Currently we are using our own base Debian image that we build with debootstrap and then import as a container image. In the fairly near future, we’ll change that to:

        base_image = ‘debian:bookworm’

That will simply use the Debian image from Dockerhub. We are building our own base image for historical reasons and need to confirm that everything works before switching over. By setting oci_image_tag we specify where in the local images the resulting image will be stored. We also specify that this image boots systemd. We actually do want to do a bit of work on top of CarthageServerRole specific to the container image. To do that we use a Carthage feature called a Customization. There are various types of customization. For example MachineCustomization runs a set of tasks on a Machine that is booted and on the network. When building images, the most common type of customization is a FilesystemCustomization. For these, we have access to the filesystem, and we have some way of running a command in the context of the filesystem. We don’t boot the filesystem as a machine unless we need to. (We might if the filesystem is a kvm VM or AWS instance for example). Carthage collects all the customizations in a role or image model. In the case of container image classes like PodmanImageModel, each customization is applied as an individual layer in the resulting container image.

Roles and customizations are both reusable infrastructure. Roles typically contain customizations. Roles operate at the modeling layer; you might introspect a machine’s model or an image’s model to see what functionality (roles) it provides. In contrast, customizations operate at the implementation layer. They do specific things like move files around, apply Ansible roles or similar.

Let’s take a look at the customization applied for the Carthage container image (full code):


        class customize_for_oci(FilesystemCustomization):

            @setup_task("Remove Software")
            async def remove_software(self):
                await self.run_command("apt", "-y", "purge",
                                       "exim4-base",
                                       )

            @setup_task("Install service")
            async def install_service(self):
               # installs and activates a systemd unit

Then to pull it all together, we simply run the layout:

sudo PYTHONPATH=$(pwd) python3 ./bin/carthage-runner ./oci_images build

In the next post, we will dig more into how to make infrastructure reusable.

For the past four years, I’ve been working on Carthage, a free-software Infrastructure as Code framework. We’ve finally reached a point where it makes sense to talk about Carthage and what it can do. This is the first in a series of blog posts to introduce Carthage, discuss what it can do and show how it works.

Why Another IAC Framework

It seems everywhere you look, there are products designed to support the IAC pattern. On the simple side, you could check a Containerfile into Git. Products like Terraform and Vagrant allow you to template cloud infrastructure and VMs. There are more commercial offerings than I can keep up with.

We were disappointed by what was out there when we started Carthage. Other products have improved, but for many of our applications we’re happy with what Carthage can build. The biggest challenge we ran into is that products wanted us to specify things at the wrong level. For some of our cyber training work we wanted to say things like “We want 3 blue teams, each with a couple defended networks, a red team, and some neutral infrastructure for red to exploit.” Yet the tools we were trying to use wanted to lay things out at the individual machine/container level. We found ourselves contemplating writing a program to generate input for some other IAC tool.

Things were worse for our internal testing. Sometimes we’d be shipping hardware to a customer. But sometimes we’d be virtualizing that build out in a lab. Sometimes we’d be doing a mixture. So we wanted to completely separate the descriptions of machines, networks, and software from any of the information about whether that was realized on hardware, VMs, containers, or a mixture.

Dimensional Breakdown

In discussing Carthage with Enrico Zini, he pointed me at Cognitive Dimensions of notation as a way to think about how Carthage approaches the IAC problem. I’m more interested in the idea of breaking down a design along the idea of dimensions that allow examining the design space than I am particular adherence to Green’s original dimensions.

Low Viscosity, High Abstraction Reuse

One of the guiding principles is that we want to be able to reuse different components at different scales and in different environments. These include being able to do things like:

  • Define an operation like “Update a Debian system” and apply that in several environments including as part of building a base VM or container image, applying to an independently managed machine, or applying to a micro service container that does not run services like ssh or systemd.

  • Defining a role like DNS server that can be applied to a dedicated machine only having that role, to a traditional server with multiple roles, or in a micro service environment.

  • Allowing people to write groups of functionality that can be useful in descriptions of a small number of machines, but can also be reused in large environments like modeling of cyber infrastructure to defend. In the small environments, things are simplified, but in larger environments integration like directories, authentication infrastructure and the like is needed.

  • Allow grouping of functionality at multiple levels. So far I have talked about grouping of software to be installed on a single machine or container. We also want to allow groups of containers (pods or otherwise), groups of machines, groups of networks, or even enclaves (think a model of an entire company or section of a company). Each kind of grouping needs to be parametric and reusable.

Hidden Dependencies

To accomplish these abstraction goals, dependencies need to be non-local. For example, a software role might need to integrate with a directory if a directory is present in the environment. When writing the role, no one is going to know which directory to use, nor whether a directory is present. Taking that as an explicit input into the role is error-prone when the role is combined into large abstract units (bigger roles or collections of machines). Instead it is better to have a non-local dependency, and to find the directory if it is available. We accomplish this using dependency injection.

In addition to being non-local, dependencies are sometimes hidden. It is very easy to overwhelm our cognitive capacity with even a fairly simple IAC description. An effective notation allows us to focus on the parts that matter when working with a particular part of the description. I’ve found hiding dependencies, especially indirect dependencies, to be essential in building complex descriptions.

Obviously, tools are required for examining these dependencies as part of debugging.

First Class Modeling

Clearly one of the goals of IAC descriptions is to actually build and manage infrastructure. It turns out that there are all sorts of things you want to do with the description well before you instantiate the infrastructure. You might want to query the description to build network diagrams, understand interdependencies, or even build inventory/bill of materials. We often find ourselves building Ansible inventory, switch configurations, DNS zones, and all sorts of configuration artifacts. These artifacts may be installed into infrastructure that is instantiated by the description, but they may be consumed in other ways. Allowing the artifacts to be consumed externally means that you can avoid pre-commitment and focus on whatever part of the description you originally want to work on. You may use an existing network at first. Later the IAC description may replace that, or perhaps it never will.

As a result, Carthage separates modeling from instantiation. The model can generally be built and queried without needing to interact with clouds, VMs, or containers. We’ve actually found it useful to build Carthage layouts that cannot ever be fully instantiated, for example because they never specify details like whether a model should be instantiated on a container or VM, or what kind of technology will realize a modeled network. This allows developing roles before the machines that use them or focusing on how machines will interact and how the network will be laid out before the details of installing on specific hardware.

The modeling separation is by far the difference I value most between Carthage and other systems.

A Tool for Experts.

In Neal Stephenson’s essay “In the Beginning… Was the Command Line”, Stephenson points out that the kind of tools experts need are not the same tools that beginners need. The illustration of why a beginner might not be satisfied with a Hole Hog drill caught my attention. Carthage is a tool for experts. Despite what cloud providers will tell you, IAC is not easy. Doubly so when you start making reusable components. Trying to hide that or focus on making things easy to get started can make it harder for experts to efficiently solve the problems they are facing. When we have faced trade offs between making Carthage easy to pick up and making it powerful for expert users, we have chosen to support the experts.

That said, Carthage today is harder to pick up than it needs to be. It’s a relatively new project with few external users as of this time. Our documentation and examples need improvement, just like every project at this level of maturity. Similarly, as the set of things people try to do expand, we will doubtless run into bugs that our current test cases don’t cover. So Carthage absolutely will get easier to learn and use than it is today.

Also, we’ve already had success building beginner-focused applications on top of Carthage. For our cyber training, we built web applications on top of Carthage that made rebuilding and exploring infrastructure easy. We’ve had success using relatively understood tools like Ansible as integration and customization points for Carthage layouts. But in all these cases, when the core layout had significant reusable components and significant complexity in the networking, only an IAC expert was going to be able to maintain and develop that layout.

What Carthage can do.

Carthage has a number of capabilities today. One of Carthage’s strengths is its extensible design. Abstract interfaces make it easy to add new virtualization platforms, cloud services, and support for various ways of managing real hardware. This approach has been validated by incrementally adding support for virtualization architectures and cloud services. As development has progressed, adding new integrations continues to get faster because we are able to reuse existing infrastructure.

Today, Carthage can model:

  • Machines
  • Networks
  • Dynamically compose groupings of the above
  • Generate model level artifacts
    • Ansible inventory
    • Various DNS integrations
    • Various switch configurations

Carthage has excellent facilities for dealing with images on which VMs and Containers can be based, although it does have a bit of a Debian/Ubuntu bias in how it thinks about images:

  • Building base images from a tool like debootstrap
  • Customizing these images
  • Converting into VM images for kvm, VMware, and AWS
  • Building from scratch OCI images for Podman, Docker and k8s
  • Adding layers to existing OCI images

When instantiating infrastructure, Carthage can work with:

  • systemd nspawn containers
  • Podman (Docker would be easy)
  • Libvirt
  • VMware
  • With the AWS plugin, EC2 VMs and networking

We have also looked at Oracle Cloud and I believe Openstack, although that code is not merged.

Future posts will talk about core Carthage concepts and how to use Carthage to build infrastructure.

I could definitely use some suggestions here, both in terms of things to try or effective places to ask questions about Pipewire audio. The docs are improving, but are still in early stages. Pipewire promises to combine the functionality of PulseAudio and Jack. That would be great for me. I use Jack for my DJ work, and it’s somewhat complicated and fragile. However, so far my attempts to replace Jack have been unsuccessful, and I might need to even use PulseAudio instead of Pipewire to get the DJ stuff working correctly.

The Setup

In the simplest setup I have a DJ controller. It’s both a MIDI device and a sound card. It has 4 channel audio, but it’s not typical surround sound. Two channels are the main speakers, and two channels are the headphones. Conceptually it might be better to model the controller as two sinks: one for the speakers and one for the headphones. At a hardware level they need to be one device for several reasons, especially including using a common clock. It’s really important than only the main mix go out channel 1-2 (the speakers). Random beeps or sound from other applications going out the main speakers is disruptive and unprofessional.

However, because I’m blind, I need that sound. I especially need the output of Orca (my screen reader) and Emacspeak (another screen reader). So I need that output to go to the headphones.

Under Pulse/Jack

The DJ card is the Jack primary sound device (system:playback_1 through system:playback_4). I then use themodule-jack-sink Pulse module to connect Pulse to Jack. That becomes the default sink for Pulse, and I link front-left from that sink to system:playback_3. So, I get the system sounds and screen reader mixed into the left channel of my headphones and nowhere else.

Enter Pipewire

Initially Pipewire sees the DJ card as a 4-channel sound card and assumes it’s surround4.0 (so front and rear left and right). It “helpfully” expands my stereo signal so that everything goes to the front and rear. So, exactly what I don’t want to have happen happens: all my system sounds go out the main speakers (channel 1-2).

It was easy to override Wireplumber’s ALSA configuration and assign different channel positions. I tried assigning something like a1,a2,fl,fr hoping that Pipewire wouldn’t mix things into aux channels that weren’t part of the typical surround set. No luck. It did correctly reflect the channels in things like pacmd list sinks so my Pipewire config was being applied. But the sound was still wrong. * I tried turning off channelmix.upmix. That didn’t help; that appears to be more about mixing stereo into center, rear and LFE. The basic approach of getting a stream to conform to the output node’s channels appears to be hurting me here.

  • Turning off stream.dont-remix actually got stereo sound to do exactly what I wanted. If I use sox to play a stereo MP3 for example, it comes out the headphones and not my speakers. Unfortunately, that didn’t help with the accessibility sounds at all. Those are mono in pulse land, and apparently mono is always expanded to all channels.

  • I didn’t try turning off channelmix entirely. I’m reasonably sure that would break mono sound entirely, so I’d get no accessibility output which would make my computer entirely unusable.

  • I tried using jack_disconnect to disconnect the accessibility ports from all but the headphones. The accessibility applications aren’t actually using Jack, but one of the cool things about Pipewire is that you can use Jack interfaces to manipulate non-Jack applications. Unfortunately, at least the Emacspeak espeak speech server regularly shuts down and restarts its sound connection. So, I get speech through the headphones for a phrase or two, and then it reverts to the default config.

I’d love any ideas about how I can get this to work. I’m sure it’s simple I’m just missing the right mental model or knowledge of how to configure things.

Pipewire Not Talking to Jack

I thought I could at least use Pipewire the same way I use Pulse. Namely, I can run a real jackd and connect up Pipewire to that server. According to the wiki, Pipewire can be a Jack client. It’s disabled by default, because you need to make sure that Wireplumber is using the real Jack libraries rather than the Pipewire replacements. That’s the case on Debian, so I enabled the feature.

A Jack device appeared in wpctl status as did a Jack sink. Using jack_lsp on that device showed it was talking to the Jack server and connected to system:playback_*. Unfortunately, it doesn’t work. The sink does not show up in pacmd list sinks, and pipewire-pulse gives an error about it not being ready. If I select it as the default sink in wpctl set-default I get no sound at all, at least from Pulse applications.

Versions of things

This is all on debian, approximately testing/bookworm or newer for the relevant libraries.

  • Pipewire 0.3.51-1
  • Wireplumber 0.4.10-2
  • pipewire-pulse and libspa0.2-jack are also 0.3.51-1 as you’d expect
  • Jackd2 1.9.17~dfsg-1
Recently, I migrated this blog from Livejournal over to Dreamwidth. As part of the process, I was looking back at my blog entries from around 2007 or so.

I miss those days. I miss the days when blogging was more of an interactive community. Comments got exchanged, and at least among my circle of friends people wrote thoughtful, well-considered entries. There was introspection into what was going on in people's lives, as well as technical stuff, as well as just keeping up with people who were important in my life.
Today, we have some of the same thought going into things like Planet Debian, but it's a lot less interactive. Then we have things like Facebook, Twitter, and the more free alternatives. There's interactivity, but it feels like everything has to fit into the length of a single tweet. So it is a lot faster paced and a lot less considered. I find I don't belong to that fast-paced social media as much as I did to the blogs of old.

I felt disgust and horror when I learned yesterday that rms had returned to the FSF board. When rms resigned back in September of 2019, I was Debian Project Leader. At that time, I felt two things. First, I was happy that the community was finally taking a stand in favor of inclusion, respect, and creating a safe, welcoming place to do our work. It was long past time for rms to move on. But I also felt thankful that rms was not my problem to solve. In significant part because of rms, I had never personally been that involved in the FSF. I considered drafting a statement as Debian Project Leader. I could have talked about how through our Diversity Statement and Code of Conduct we had taken a stand in favor of inclusion and respect. I could have talked about how rms's actions displayed a lack of understanding and empathy and how this created a community that was neither welcoming nor respectful. I didn't. I guess I didn't want to deal with confirming I had sufficient support in the project. I wanted to focus on internal goals, and I was healing and learning from some mistakes I made earlier in the year. It looked like other people were saying what needed to be said and my voice was not required. Silence was a mistake.

It's a mistake I've been making all throughout my interactions with rms. Enough is enough. It's long past time I added my voice to those who cry for accountability and who will not sit aside while rms's disrespect and harm is tolerated.

The first time I was silent about rms was around 15 years ago. I was at a science fiction convention in a crowded party. I didn't know anyone, other than the host of the party. I was out of my depth. I heard his voice---I recognized it from the Share the Software Song. He was hitting on some girl, talking about how he invented Emacs. As best I could tell, she didn't even know what Emacs was. Back then, I wondered what she saw in the interaction; why she stuck around even though she didn't know what he was talking about. I sure didn't want to be around; the interaction between the two of them was making me uncomfortable. Besides, the wings on her costume kept hitting me in the face. So I left as fast as I could.

I've learned a lot about creating safe spaces and avoiding sexual harassment since then. Thinking back, she was probably hitting me because she was trying to back away and getting crowded. If this happened today, I think I would do a better job of owning my responsibility for helping keep the space around me safe. I've learned better techniques for checking in to make sure people around me are comfortable.

I didn't come to silence alone: I had been educated into the culture of avoiding rms and not calling him out. There was a running game in the group of computer security professionals I learned from. The goal was to see how much you could contribute to free software and computer security without being recognized by or interacting with rms. And so, indoctrinated into a culture of silence about the harm that rms caused, I took my first step.

Things weren't much better when I attended Libreplanet 2019 just before taking office as Debian Project Leader. I had stayed away from the conference in large part because of rms. But there were Debian people there, and I was missing community interaction. Unfortunately, I saw that even after the problems of 2018, rms was still treating himself as above community standards. He interrupted speakers, objecting to how they phrased the problem they were considering. After a speech on codes of conduct in the free software community, he cornered the talk organizer to "ask her opinion" about the GNU project's lack of code on conduct. He wasn't asking for an opinion. He was justifying himself; there wasn't much listening in the conversation I heard. Aspects of that conversation crossed professional boundaries for what should be said. The talk organizer was okay--we talked about it after--but if we did a better job of policing our community, that wouldn't even be a question. I think the most telling sign was a discussion with an FSF board member. We were having a great conversation, but he had to interrupt it. He was on rms duty (my words) at the next session. The board had decided it was necessary to have members there so that the staff would not be put in awkward positions by their president. If someone needed to call rms out, it could be a board member rather than the staff members of the conduct team.

And yet again, I held my silence. It's so easy to keep silent. It's not that I never speak up. There are communities where I have called people out. But it's hard to paint that target on yourself. It's hard to engage and to stand strong for a community's standards when you aren't the target. It's hard to approach these problems while maintaining empathy for everyone involved. Some people give into the rage; I don't have that option if I want to be the person I've chosen to be. And so, when I do speak up, the emotional cost is high.

Yet, it's long past time I raised my voice on this issue. Rms has demonstrated that he cannot hold to standards of respect for others, respect for their boundaries, or standards of community safety. We need those standards to be a welcoming community.

If the people who came before me--those who taught me the game of avoiding rms--had spoken up, the community could have healed before I even came on the scene. If I and others had stood up fifteen years ago, we'd have another couple generations who were more used to respect, inclusion, welcoming and safety. The FSF board could have done their job back in 2018. And perhaps if more of us had spoken out in 2019, the FSF board would have found the strength to stand strong and not accept rms's return.

And so, finally, I raise my voice. I signed the open letter calling for the resignation of rms and the entire FSF board. Perhaps if we all get used to raising our voice, it will be easier. Perhaps if we stand together, taking the path of community rather than the path of silence, we'll have the support we need to create communities inclusive enough to welcome everyone who can contribute. For me, I'm done being silent.

There's one criticism of the open letter I'd like to respond to. I've heard concerns about asking for the resignation of the entire FSF board under the understanding that some board members voted against rms's return. It should be obvious why those who voted for rms's return need to resign. But resignation does not always mean you did something wrong. If you find yourself in a leadership role in an organization that takes decisions in significant conflict with your standards of ethics, resignation is also the right path. Staying on the board even if you voted against rms's return means that you consider voting for rms to be a reasonable thing to do. It means that even if you disagreed with it, you can still be part of an organization that takes the path of welcoming rms. At this point, I cannot do that, nor can I support leaders in the FSF who do.
So, I needed a container of Debian Slink (2.1), released back in 1999. I expected this was going to be a long and involved process. Things didn't look good from the start:
root@mount-peerless:/usr/lib/python3/dist-packages/sqlalchemy# debootstrap  slink /build/slink2 
http://archive.debian.org/debian                                                               
E: No such script: /usr/share/debootstrap/scripts/slink

Hmm, I thought I remembered slink support for debootstrap--not that slink used debootstrap by default--back when I was looking through the debootstrap sources years ago. Sure enough looking through the changelogs, slink support was dropped back in 2005.
Okay, well, this isn't going to work either, but I guess I could try debootstrapping sarge and from there go back to slink.
Except it worked fine.
Go us!
Last night, a series of forged emails was sent to a number of places around the Debian, Ubuntu and Free Software communities. The meat of the mail was a fake message from me to debian-private with the subject "DebConf19 Diversity Girls." I didn't write such a message.
I view this message as the latest installment in a campaign of attacks on Debian that attempt to undermine the project and take up the time of our members.
I was expecting something like this: yesterday, I banned Daniel Pocock from the project.There's been a pattern of related events over the past year and a half:

  • Confrontational messages from Daniel that do not stop even when moderators of the discussion forum ask him to stop.
  • Anonymous messages that expand on the points Daniel has been making accusing people and organizations of misconduct
  • Without claiming authorship of these anonymous messages, Daniel quickly expands on the messages in his blogs and goes forward assuming these anonymous claims are true
  • Use of mechanisms described on Daniel's websites to bypass moderation of community forums and other mechanisms designed to reach people who are not interested in the communication on the part of Daniel and the anonymous messages
  • And now, forged emails

  • This campaign involves a lot of activities hurtful to members of our community. The "DebConf19 Diversity Girls" message was no exception. It alleges misconduct on the part of members of our community. Through the #metoo movement we've seen countless examples of victims standing up, demanding to be acknowledged and demanding that abusers are held accountable. That's not what is happening here. This message combines half-truths with a shocking presentation to damage Debian. It does not work to improve Debian. It is not championing the cause of someone trying to get redress for wrongs done to them.
    I reject this approach and the intent behind this forged message and the broader campaign that it fits into.


    However, I also feel it is important to reassure everyone that Debian is committed to creating a safe and welcoming community. We take concerns about misconduct seriously. We particularly encourage anyone who has concerns about their safety in the debian community or concerns about how they are treated to talk to us. I am available as the project leader. Our community team is available. We will work to understand and resolve your concern.
    While I think the presentation was hurtful and inappropriate, I also do acknowledge that personal conflict of interest is something that we all should be aware of. When we are taking on roles that have power within the project, we can create situations where we need to be extra careful to respect people's boundaries. There are cases where a personal conflict of interest may prevent us from being in certain personal relationships while also acting in a role within the project that could affect those relationships.
    I think we are already generally aware of these issues. Even so, as we continue to build a community that does a better job respecting its members and their boundaries, it benefits us to continue to refine our approach to important issues like conflict of interest. Sometimes that will involve awareness. Sometimes that will involve crafting more clear policies around the issues. It does not involve vague accusations spammed to the entire Free Software community. If I were actually writing a message on these topics, it would look a lot more like the paragraph above than the message forged in my name.
December has been a difficult month for me and I think for Debian as a whole. It was strongly suggested to me that I (and Debian in general) needed more music. I'm reminded of the fun I had dancing with you all at DebConf. It's been a while since I dug out my DJ kit. But Dec 25, I pulled it out and spent a couple of hours looking at some of the tracks that came out since I last DJed. And then I put together a mix. I had fun. Perhaps you'd like a little more music in your holiday. If so, I join you on the (virtual) dance floor.

There are a lot of options on the ballot for the Init Systems GR.
There have been hundreds of messages on debian-vote, and more scattered
across debian-devel, debian-project, and bugs. Reading all that is no
easy task. so I would like to summarize my understanding of the options
and what they would mean. I've tried to remove as much bias as I can,
but this is still Sam's personal opinion.



I'm focused entirely on the effects of the proposals. Several options
(D, F, and G) spend significant time outlining principles. That is
something I'm ignoring here in this post, although I acknowledge it is
really important to some.



Areas of Agreement



One of the big surprises for me in this discussion is things that are
true of all the ballot options so far. We are agreed that programs
designed to use systemd features that only work with systemd are welcome
in Debian. That's true even if there is no way for these programs to
work without systemd. Under Proposal D, this is a bug, but not a
release-critical bug.



Init Diversity Options


Several options focus on encouraging or requiring packages to support
init systems others than systemd when that is possible. These include
Proposal
E
, Proposal F,
and Proposal
A
. Under proposal E, it is a release critical bug if a program does
not work when something other than systemd is pid 1 unless that program
is designed explicitly to work with systemd and no support for running
without systemd is available. Lack of an init script alone is not
sufficient to count as designed to work exclusively with systemd.



So, under this proposal, a maintainer must integrate support for running
without systemd if it is available. They are responsible for going out
and finding this support. If the support is as simple as writing an
init script, the maintainer has an RC bug until they write the init
script. If the support is more complex, the maintainer is not
responsible for writing it. Proposal A is the same as Proposal E,
except that the bug is not release-critical. I'll go into Proposal A in
more detail after discussing Proposal D.



Proposal D is similar to Proposal E. My interpretation is that Proposal D places
somewhat less of a burden on maintainers to go out and find existing
non-systemd support. My interpretation is that the bug becomes RC when
someone contributes that support. (Or if the support is present in the
upstream but turned off in the package). Proposal D requires that
non-systemd support not have a substantial effect on systemd
installations. So where as Proposal E uses the designed exclusively for
systemd criteria, Proposal D uses the no substantial effect on systemd
systems criteria to determine whether working only with systemd is
acceptable. The discussions seemed to imply that if Gnome uses systemd
features in excess of what technologies like elogind can handle, it is
likely to meet both criteria.



Proposal D goes into a lot more detail than Proposal E. Proposal E
would likely be a long-term block on using systemd facilities like
sysusers. Proposal D specifically proposes a mechanism whereby such
facilities can be documented in policy. This mechanism is only
available if it is likely that developers of non-systemd (including
non-Linux) systems will implement the facility. After a six-to-twelve
month transition time, the facility can be used even on non-systemd
systems. So, sufficiently low effort in the non-systemd community that
it is unreasonable to expect a facility could be implemented could still
permanently block adoption of such facilities. Proposal D is definitely
about a long-term commitment to non-systemd systems even if the effort
in the non-systemd community is not as high as we'd like to adopt new
features elsewhere.



Proposal D also includes a number of guidelines for proper behavior
around these emotionally charged issues.



The only difference between Proposal E and Proposal A is the severity of
the bug when non-systemd support is not in a package. In Proposal A,
this bug is important: potentially sufficient for a stable update, but
not release critical. As a practical matter, Proposal A allows the
non-systemd community to contribute (and NMU) patches for non-systemd
support. However, it does not place an obligation on maintainers to
write this support themselves. Proposal A would permit systemd
facilities like sysusers to be used, although doing so might be a bug.
In the specific case of sysusers, someone could justify NMUing a patch
to use adduser in a maintainer script. Unlike Proposal D, Proposal A
places the burden of keeping up with systemd facilities fully on the
non-systemd community. Proposal A does not have Proposal D's
requirement that it be reasonable to expect that the non-systemd
community can implement the facility.



Systemd Options


There are two systemd options: Proposal F
and Proposal
B
. Proposal F replaces a previous Proposal C. As far as I can tell
Proposal F and C do the same thing, but Proposal F has some text
describing principles. As I said in the introduction, I'm not
discussing that here.



Under Proposal F, systemd is the only officially supported option.
Other init systems may be explored at wishlist priority. Systemd
facilities such as sysusers are encouraged, and we will use our usual
mechanisms to plan transitions from Debian facilities where appropriate.



Proposal B does not explicitly say that alternate init system work can
only be wishlist. Under Proposal B, I think it would be reasonable to
file some bugs at normal severity, but it would also be reasonable for a
maintainer to downgrade them. I don't consider that a significant
difference.



The big difference is that Proposal B commits us as a project to
reviewing integrations of technologies that are alternatives to
systemd facilities. The current example is elogind. But things like
a non-systemd implementation of sysusers, tmpfiles.d, etc, would also
qualify. The rationale is that sometimes alternatives like that touch
on core infrastructure, and even if other maintainers are doing the
work, gatekeeper review is needed. Under Proposal B, the alternate technologies would be available, but whether to use them in a specific package would be up to the maintainer. I've discussed in another, more opinionated blog post why this might be a good idea.



Proposal G


As best I can tell Proposal G is just a set of principles. so, in the context of the analysis I have set out to perform here, I think there is nothing to say.

This is my personal opinion, not that of the project leader. Tomorrow,
I'll write an essay trying to discuss the various options in with as
little bias as I can manage (although even that will be Sam's opinion).
Several people have asked me why I included Proposal B.
This is my answer.


While I was talking to people about systemd and init systems, people
seemed to inherently assume that being uncomfortable with systemd meant
that you were in favor of sysvinit, or at least init-script based
solutions. At least, people who were heavily involved in the issue made
that assumption. That didn't resonate with me.


Several concerns commonly raised with systemd resonate with me:


  1. It combines a bunch of things in one project; as an example how you
    start daemons ends up being tied to how you configure the network.

  2. This combination seems like it might reduce innovation at least
    outside of the systemd ecosystem, because interfaces are coupled.

  3. It is Linux specific


Of these, the biggest concern for me is the idea that systemd might
stifle innovation by becoming one point of control.


And yet, in my opinion, systemd is vastly superior to the current
alternatives. I'd far rather be writing service units than init
scripts. They are more declarative. Dependencies that I care about are
easier to express. There are better security isolation facilities. In
non-Debian work I've found that I depend heavily on systemd because it
is easier and more pleasurable to code to than the alternatives.
Declarative syntax for managing users is useful. I haven't personally
seen the huge joy of socket activation, but if I were writing somewhat
different things, perhaps I would. Given
the options today, I would pick systemd hands down and not look back.


But what about tomorrow? For me, one of the great things about Debian
has been that it's possible to integrate new technologies and to try
things out. Debian has been the OS where I and many others could try
out new technologies and figure out what it was like to fully integrate
them into the operating system. Systemd is the best we've got now, but
I'm reluctant to step away from Debian as a platform for innovation and
experimentation.


Yet I don't think focusing on sysvinit or other init-script based
solutions actually has anything to do with the kind of innovation I'm
talking about. I understand that for people who value sysvinit (or
something like runit) above systemd, that work is valuable. My
experience is that for my needs, systemd is a better fit. I wanted a
proposal that allowed us to maintain Debian as a platform for innovation
without focusing on the legacy of init scripts. I think that if there
is going to be something that some day replaces systemd, it will support
service units (or a significant subset) not init scripts. I suspect it
will have a way to handle socket activation and so on. And I cannot
imagine a future systemd replacement that does not have advanced
security isolation features.


How it Works


Proposal B is a systemd focused proposal. It's very similar to Proposal F.
The text is different, but the implications of both proposals are
similar. Maintainers can use whatever systemd facilities they choose.
Init scripts are not required. For most maintainers, even thinking
about alternate init systems or future experiments is something entirely
optional. That's true under both proposal F and Proposal B.


Where they differ is in how much support the project gives to
experiments involving alternate init systems. Under Proposal F, that's
entirely optional at each member's discretion. My experience is that's
not sufficient for Debian to remain a community for innovation. My
experience is that key maintainers and teams maintaining central
infrastructure or packages often need to work with people who are trying
to integrate new features. The difference between Proposal B and F is
that under Proposal B, we commit to making that happen for technologies
that are important in exploring alternatives to systemd.


Obviously, no member of our community is obligated to do work. In
practice this commitment might mean working to find new volunteers to
help out key packages or teams and do this work. Sadly, there are areas
where the history of interaction has not been so good; behavior on
multiple sides of discussions has not lived up to our standards. In
addition to making sure we have the necessary volunteers for key
packages and teams,
part of meeting this commitment may involve working with people who
want to explore alternatives to systemd to find advocates who foster a
climate where we can be excellent to each other.


The Risks


There are some real risks with Proposal B. The biggest is that we'll
spend time working on integrations and nothing innovative will come out
of it. A possible outcome is that we spend a lot of time integrating
elogind and similar technologies, but they end up not being useful
because packages start depending on service units and socket
activations. Unless something new comes along, we may waste our
effort. Yet we've often been willing to spend effort to enable people
to try things. For me, this proposal is about reaffirming that aspect
of Debian.


In the worst case, it's possible that we decrease the quality of our
systemd integration leaving room for something else, spend significant
emotional energy, and do not end up with interesting innovation.
I think it's much more likely that if there is no interesting
innovation, Proposal B will slowly morph into Proposal F.


Why did You Do this?


In the beginning of this post, I talked about how I personally
considered the concerns about systemd separate than the desire to keep
init-script based systems running. That view is uncommon among people
who have been spending a lot of time on this issue. In general people
who are spending a lot of time on init systems seem to be fairly
divided. If you are trying to get work done today, you are probably
either fully using systemd or using one of the existing init-script
based alternatives.


However, my concern resonated with developers I talk to who spend less
time involved in the issue. Not people who were going to go research
things enough to write a proposal. But people who weren't sure that
systemd was the way and the light of the future, but found it had a lot
of great things going for it.


I was one of the few people who was taking the time to really understand
the issues but who was somewhat flexible. I didn't even know how I was
going to rank the options on my ballot until this morning. Yes, I've
been systemd leaning in some ways, but I also very much see the
arguments in favor of enabling people to keep other init systems
working. I'd be happy with most of the options on this ballot winning.
So, I tried to listen and see if there were ways of splitting
disagreement that wouldn't work for the people most committed to one
position, but might appeal to people who are less involved.


Why are you Writing This Post?


I think it's dangerous for someone who is project leader to speak a
personal opinion, especially on a controversial issue. However, I've
heard people struggling with some of the issues I discuss here in our
community. What I say may give people another way of looking at
things. I do think I have a valuable prospective because I have spent a
lot of time thinking about the issues but have not been as intimately
involved as others who have spent similar time. I think my need to act
as a facilitator at least for this GR is over. And after spending a day
considering, I think it's more beneficial to specifically ask the
project to think about Debian as a community for experimentation than to
say nothing.

Recently, we’ve been having some discussion around the use of non-free software and services in doing our Debian work. In judging consensus surrounding a discussion of Git packaging, I said that we do not have a consensus to forbid the use of non-free services like Github. I stand behind that consensus call. Ian Jackson, who initially thought that I misread the consensus later agreed with my call.


I have been debating whether it would be wise for me as project leader to say more on the issue. Ultimately I have decided to share my thoughts. Yes, some of this is my personal opinion. Yet I think my thoughts resonate with things said on the mailing list; by sharing my thoughts I may help facilitate the discussion.


We are bound together by the Social Contract. Anyone is welcome to contribute to Debian so long as they follow the Social Contract, the DFSG, and the rest of our community standards. The Social Contract talks about what we will build (a free operating system called Debian). Besides SC #3 (we will not hide problems), the contract says very little about how we will build Debian.


What matters is what you do, not what you believe. You don’t even need to believe in free software to be part of Debian, so long as you’re busy writing or contributing to free software. Whether it’s because you believe in user freedom or because your large company has chosen Debian for entirely pragmatic reasons, your free software contributions are welcome.


I think that is one of our core strengths. We’re an incredibly diverse community. When we try to tie something else to what it means to be Debian beyond the quality of that free operating system we produce, judged by how it meets the needs of our users, we risk diminishing Debian. Our diversity serves the free software community well. We have always balanced pragmatic concerns against freedom. We didn’t ignore binary blobs and non-free firmware in the kernel, but we took the time to make sure we balanced our users’ needs for functional systems against their needs for freedom. By being so diverse, we have helped build a product that is useful both to people who care about freedom and other issues. Debian has been pragmatic enough that our product is wildly popular. We care enough about freedom and do the hard work of finding workable solutions that many issues of software freedom have become mainstream concerns with viable solutions.


Debian has always taken a pragmatic approach to its own infrastructure and to how Debian is developed. The Social Contract requires that the resulting operating system be 100% free software. But that has never been true of the Debian Project nor of our developers.



  • At the time the Social contract was adopted, uploading a package to Debian involved signing it with the non-free PGP version 2.6.3. It was years later that GnuPG became commonly used.

  • Debian developers of the day didn’t use non-free tools to sign the Social Contract. They didn’t digitally sign it at all. Yet their discussions used the non-free Qmail because people running the Debian infrastructure decided that was the best solution for the project’s mailing lists.


“That was then,” you say.



  • Today, some parts of security.debian.org redirect to security-cdn.debian.org, a non-free web service

  • Our recommended mirror (deb.debian.org) is backed by multiple non-free CDN web services.

  • Some day we may be using more non-free services. If trends in email handling continue, we may find that we need to use some non-free service to get the email we send accepted by major email providers. I know of no such plan in Debian today, but I know other organizations have faced similar choices.


Yet these choices to use non-free software and non-free services in the production of Debian have real costs. Many members of our community prefer to use free software. When we make these choices, we can make it harder for people to contribute to Debian. When we decline to use free software we may also be missing out on an opportunity to improve the free software community or to improve Debian itself. Ian eloquently describes the frustrations those who wish to use only free software face when faced with choices to use non-free services.


As alternatives to non-free software or services have become available, we as a project have consistently moved toward free options.


Normally, we let those doing the work within Debian choose whether non-free services or software are sufficiently better than the free alternatives that we will use them in our work. There is a strong desire to prefer free software and self-hosted infrastructure when that can meet our needs.


For individual maintainers, this generally means that you can choose the tools you want to do your Debian work. The resulting contributions to Debian must themselves be free. But if you want to go write all your Debian packaging in Visual Studio on Windows, we’re not going to stop you, although many of us will think your choices are unusual.


And my take is that if you want to store Debian packages on Github, you can do that too. But if you do that, you will be making it harder for many Debian contributors to contribute to your packages. As Ian discussed, even if you listen to the BTS, you will create two classes of contributors: those who are comfortable with your tools and those who are not. Perhaps you’ve considered this already. Perhaps you value making things easier for yourself or for interacting with an upstream community on Github over making it easier for contributors who want to use only free tools. Traditionally in Debian, we’ve decided that the people doing the work generally get to make that decision. Some day perhaps we’ll decide that all Debian packaging needs to be done in a VCS hosted on Debian infrastructure. And if we make that decision, we will almost certainly choose a free service to host. We’re not ready to make that change today.


So, what can you do if you want to use only free tools?



  • You could take Ian’s original approach and attempt to mandate project policy. Yet each time we mandate such policy, we will drive people and their contributions away. When the community as a whole evaluates such efforts we’ll need to ask ourselves whether the restriction is worth what we will lose. Sometimes it is. But unsurprisingly in my mind, Debian often finds a balance on these issues.


  • You could work to understand why people use Github or other non-free tools. As you take the time to understand and value the needs of those who use non-free services, you could ask them to understand and value your needs. If you identify gaps in what free software and services offer, work to fix those gaps.


  • Specifically in this instance, I think that setting up easy ways to bidirectionally mirror things between Github and services like Salsa could really help.



Conclusions



  1. We have come together to make a free operating system. Everything else is up for debate. When we shut down that debate—when we decide there is one right answer—we risk diluting our focus and diminishing ourselves.

  2. We and the entire free software community win through the Debian Project’s diversity.

  3. Freedom within the Debian Project has never been simple. Throughout our entire history we’ve used non-free bits in the sausage making, even though the result consists (and can be built from) entirely free bits.

  4. This complexity and diversity is part of what allows us to advocate for software freedom more successfully. Over time, we have replaced non-free software that we use with free alternatives, but those decisions are nuanced and ever-changing.

All the members of the Antiharassment team met with the Debian Account Managers and the DPL in that other Cambridge— the one with proper behaviour, not the one where pounds are weight and not money.

I was nervous. I was not part of decision making earlier this year around code of conduct issues. I was worried that my concerns would be taken as insensitive judgment applied by someone who wasn’t there.

I was worried about whether I would find my values aligned with the others. I care about treating people with respect. I also care about freedom of expression. I value a lot of feminist principles and fighting oppression. Yet I’m happy with my masculinity. I acknowledge my privilege and have some understanding of the inequities in the world. Yet I find some arguments based on privilege problematic and find almost all uses of the phrase “check your privilege” to be dismissive and to deny any attempt at building empathy and understanding.

And Joerg was there. He can be amazingly compassionate and helpful. He can also be gruff at times. He values brevity, which I’m not good at. I was bracing myself for a sharp, brief, gruff rebuke delivered in response to my feedback. I know there would be something compassionate under such a rebuke, but it might take work to find.

The meeting was hard; we were talking about emotionally intense issues. But it was also wonderful. We made huge progress. This blog is not about reporting that progress.

Like the other Debian meetings I’ve been at, I felt like I was part of something wonderful. We sat around and described the problems we were working on. They were social not technical. We brainstormed solutions, talked about what worked, what didn’t work. We disagreed. We listened to each other. We made progress.

Listening to the discussions on debian-private in December and January, it sounded like DAM and Antiharassment thought they had it all together. I got a note asking if I had any suggestions for how things could have been done better. I kind of felt like they were being polite and asking since I had offered support.

Yet I know now that they were struggling as much as any of us struggle with a thorny RC bug that crosses multiple teams and packages. The account managers tried to invent suspensions in response to what was going on. They wanted to take a stand against bullying and disrespectful behavior. But they didn’t want to drive away contributors; they wanted to find a way to let people know that a real problem required immediate attention. Existing tools were inadequate. So they invented account suspensions. It was buggy. And when your social problem solving tools are buggy, people get hurt.

But I didn’t find myself facing off against that mythical group of people sure in their own actions I had half imagined. I found myself sitting around a table with members of my community, more alike than different. They had insecurities just like I do. They doubted themselves. I’m sure there was some extent to which they felt it was the project against them in December and January. But they also felt some of that pain that raged across debian-private. They didn’t think they had the answers, and they wanted to work with all of us to find them.

I found a group of people who genuinely care about openness and expressing dissenting views. The triggers for action were about how views were expressed not about those views. The biggest way to get under DAM’s skin and get them started thinking about whether there is a membership issue appears to be declining to engage constructively when someone wants to talk to you about a problem. In contrast, even if something has gone horribly wrong trying to engage constructively is likely to get you the support of all of us around that table in finding a way to meet your needs as well as the greater project.

Fear over language didn’t get in our way. People sometimes made errors about using someone’s preferred pronouns. It wasn’t a big deal: when they noticed they corrected themselves, acknowledged that they cared about the issue and went on with life. There was cursing sometimes and some really strong feelings.

There was even a sex joke. Someone talked about sucking and someone else intentionally misinterpreted it in a sexual context. But people payed attention to the boundaries of others. I couldn’t have gotten away with telling that joke: I didn’t know the people well enough to know their boundaries. It is not that I’m worried I’ll offend. It is that I actively want to respect the others around me. One way I can do that is to understand their boundaries and respect them.

One joke did cross a line. With a series of looks and semi-verbal communication, we realized that was probably a bit too far for that group while we were meeting. The person telling the joke acknowledged and we moved on.

I was reassured that we all care about the balance that allows Debian to work. We bring the same dedication to creating the universal operating system that we do to building our community. With sufficient practice we’ll be really good at the community work. I’m excited!
This is copied over from my spiritual blog. I'm nervous doing that, especially at a point when I'm more vulnerable than usual in the Debian community. Still, this is who I am, and I want to be proud of that rather than hide it. And Debian and the free software community are about far more than just the programs we write. So hear goes:

The Libreplanet opening keynote had me in tears. It was a talk by Dr. Tarek Loubani. He described his work as an emergency physician in Gaza and how 3d printers and open hardware are helping save lives.


They didn't have enough stethoscopes; that was one of the critical needs. So, they imported a 3d printer, used that to print another 3d printer, and then began iterative designs of 3d-printable stethoscopes. By the time they were done, they had a device that performed as well or better than than a commercially available model. What was amazing is that the residents of Gaza could print their own; this didn't introduce dependencies on some external organization. Instead, open/free hardware was used to help give people a sense of dignity, control of some part of their lives, and the ability to better save those who depended on them.


Even more basic supplies were unavailable. The lack of tourniquets caused the death of some significant fraction of casualties in the 2014 war. The same solution—3d-printed tourniquets had an even more dramatic result.


Dr. Loubani talked about how he felt powerless to change the world around him. He talked about how he felt like an insignificant ant.


By this point I was feeling my own sense of hopelessness and insignificance. In the face of someone saving lives like that, I felt like I was only playing at changing the world. What is helping teach love and connection when we face that level of violence? Claming that sexual freedom is worth fighting for seems like a joke in the worst possible taste in the face of what he is doing. I felt like an imposter.


Then he went on to talk about how we are all ants, but it is the combination of all our insignificant actions that eventually change the world. He talked about how the violence he sees is an intimate act: he talked about the connection between a sniper and their victim. We die one at a time; we can work to make things better one at a time.


He never othered or judged those committing violence. Not as he talked about his fellow doctor and friend who was shot, radioed that he could not breathe, and eventually died pinned down by gunfire so that no one could rescue him. Not as he talked about how he himself was shot. Not as he helped the audience connect with grief-stricken family members facing the death of their loved ones. He never withdrew compassion.


To me I heard hope that what I try to teach can matter; it can connect. If he can face that violence and take a stand against it while still maintaining compassion, then this stuff I believe actually can work. Facing the world and making real changes without giving up compassion and empathy seems more possible: I’ve seen it done.


Somewhere in this talk, I regained a connection with my own value. People like him are helping save people. However, the violence will continue until we have the love, empathy and compassion to understand and connect with each other and find better options. In my own way I’m doing that. Every time I help someone see a different way of looking at things, I make it easier for them to start with empathy first rather than fear.


Everything I’ve written about sex is still true. That journey can bring us closer to accepting ourselves, stepping past fear and shame. Once we accept our own desires and our own need, we’re in a better position to meet in the Strength of Love and advocate for our own needs while offering compassion to others. Once we know what we can find when we have empathy and connection, we can learn to strive for it.


So I will find joy in being my own little ant. Insignificant and divine: take your pick as it’s all the same in the end.


Bringing that Round to Debian


Debian is back in the center of my compassion work. I'm running for Debian project Leader (DPL). I served on the Debian Technical Committee for over a year, hoping to help bring understanding of diverse positions to our technical dispute resolution process. That ended up being the wrong place. Everyone seems to believe that the DPL is currently at the center of most of the work of helping people connect. I hope to fix that: more than one person should be driving that work.


After the keynote I found myself sitting between Micky Metts and Henry Poole. Micky asked me what I did that I loved. “Ah, she’s not expecting this answer,” I thought to myself as I talked about my spiritual work and how it overlaps with my Debian work. It turns out that she was delighted by the answer and we had a great time chatting about self empowerment. I’m looking forward to her keynote later today.


Then Henry asked how I was going to accomplish bringing empathy into Debian. I talked about my hopes and dreams and went through some of the specifics I’ve discussed in my platform and what I’ve had success with so far. He talked about similarities and overlaps with work his company does and how he works to teach people about free software.


Especially after that keynote it was joyful to sit between two luminaries and be able to share hopes for empathy, compassion and connection. I felt like I had found validation and energy again.

Debianhas always been filled with people who want to make the world a better place. We consider the social implications of our actions. Many are involved in work that focuses on changing the world. I’ been hesitant to think too closely about how that applies to me: I fear being powerless to bring about the world in which I would like to live.

Recently though, I've been taking the time to dream. One day my wife came home and told another story of how she’d helped a client reduce their pain and regain mobility. I was envious. Every day she advances her calling and brings happiness into the world, typically by reducing physical suffering. What would it be like for me to find a job where I helped advance my calling and create a world where love could be more celebrated. That seems such a far cry from writing code and working on software design every day. But if I don’t articulate what I want, I'll never find it.

I’ve been working to start this journey by acknowledging the ways in which I already bring love into the world. One of the most important lessons of Venus’s path is that to bring love into the world, you have to start by leading a life of love. At work I do this by being part of a strong team. We’re there helping each other grow, whether it is people trying entirely new jobs or struggling to challenge each other and do the best work we can. We have each other’s back when things outside of work mean we're not at our best. We pitch in together when the big deadlines approach.

I do not shove my personal life or my love and spirituality work in people’s faces, but I do not hide it. I'm there as a symbol and reminder that different is OK. Because I am open people have turned to me in some unusual situations and I have been able to invite compassion and connection into how people thought about challenges they faced.

This is the most basic—most critical love work. In doing this I’m already succeeding at bringing love into the world. Sometimes it is hard to believe that. Recently I have been daring to dream of a job in which the technology I created also helped bring love into the world.

I'd love to find a company that's approaching the world in a love-positive, sex-positive manner. And of course they need to have IT challenges big enough to hire someone who is world class at networking, security and cloud architecture. While I'd be willing to take a pay cut for the right job, I'd still need to be making a US senior engineer's salary.

Actually saying that is really hard. I feel vulnerable because I’m being honest about what I want. Also, it feels like I’m asking for the impossible.

Yet, the day after I started talking about this on Facebook, OkCupid posted a job for a senior engineer. That particular job would require moving to New York, something I want to avoid. Still, it was reassuring as a reminder that asking for what you want is the first step.

I doubt that will be the only such job. It's reasonable to assume that as we embrace new technologies like blockchains and continue to appreciate what the evolving web platform standards have to offer, there will be new opportunities. Yes, a lot of the adult-focused industries are filled with corruption and companies that use those who they touch. However, there's also room for approaching intimacy in a way that celebrates desire, connection, and all the facets of love.

And yes, I do think sexuality and desire are an important part of how I’d like to promote love. With platforms like Facebook, Amazon and Google, it's easier than ever for people to express themselves, to connect, and if they are willing to give up privacy, to try and reach out and create. Yet all of these platforms have increasingly restrictive rules about adult content. Sometimes it’s not even intentional censorship. My first post about this topic on Facebook was marked as spam probably because some friends suggested some businesses that I might want to look at. Those businesses were adult-focused and apparently even positive discussion of such businesses is now enough to trigger a presumption of spam.

If we aren't careful, we're going to push sex further out of our view and add to an ever-higher wall of shame and fear. Those who wish to abuse and hurt will find their spaces, but if we aren't careful to create spaces where sex can be celebrated alongside love, those seedier corners of the Internet will be all that explores sexuality. Because I'm willing to face the challenge of exploring sexuality in a positive, open way, I think I should: few enough people are.

I have no idea what this sort of work might look like. Perhaps someone will take on the real challenge of creating content platforms that are more decentralized and that let people choose how they want content filtered. Perhaps technology can be used to improve the safety of sex workers or eventually to fight shame associated with sex work. Several people have pointed out the value of cloud platforms in allowing people to host whatever service they would choose. Right now I’m at the stage of asking for what I want. I know I will learn from the exploration and grow stronger by understanding what is possible. And if it turns out that filling my every day life with love is the answer I get, then I’ll take joy in that. Another one of the important Venus lessons is celebrating desires even when they cannot be achieved.
Page generated Feb. 26th, 2026 06:25 am
Powered by Dreamwidth Studios