<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://bexelbie.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://bexelbie.com/" rel="alternate" type="text/html" /><updated>2026-03-04T22:59:28+01:00</updated><id>https://bexelbie.com/feed.xml</id><title type="html">this could’ve been an email</title><subtitle>Notes on Linux, side projects, and figuring things out in the Czech Republic.</subtitle><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><entry><title type="html">Jekyll Reads: the tooling behind my reading list</title><link href="https://bexelbie.com/2026/03/03/jekyll-reads.html" rel="alternate" type="text/html" title="Jekyll Reads: the tooling behind my reading list" /><published>2026-03-03T08:50:00+01:00</published><updated>2026-03-03T08:50:00+01:00</updated><id>https://bexelbie.com/2026/03/03/jekyll-reads</id><content type="html" xml:base="https://bexelbie.com/2026/03/03/jekyll-reads.html"><![CDATA[<h2 id="why-i-needed-more-than-a-social-reading-site">Why I needed more than a social reading site</h2>

<p>In <a href="/ramblings/2025/02/11/rediscovering-reading.html">Rediscovering Reading (Without the Social Media Part)</a> I wrote about stepping away from scrolling and building a slower, more deliberate reading habit. Part of that shift was making my reading log public without tying it to a dedicated social network.</p>

<p>The mechanics behind that were simple but fussy: keep a YAML file up to date, copy and paste links from Open Library, remember to grab cover images, and wire everything into Jekyll templates for the reading page and sidebar. None of it was hard, but it was just annoying enough that I knew future‑me would start skipping updates.</p>

<p>I built Jekyll Reads to make that workflow tolerable.</p>

<h2 id="what-jekyll-reads-actually-does">What Jekyll Reads actually does</h2>

<p>Jekyll Reads is a small collection of pieces designed around a single idea: keep all the book data in one <code class="language-plaintext highlighter-rouge">_data/reading.yml</code> file and let everything else be presentation.</p>

<p>The core pieces are:</p>

<ul>
  <li>A shared Node.js library that talks to Open Library, picks a reasonable match, and produces a standard YAML snippet for a book</li>
  <li>A command‑line tool that lets you search for a book and print the YAML to stdout, with options for indentation and auto‑selecting results</li>
  <li>A Vim integration that shells out to the CLI and drops the YAML directly into your buffer at the right indentation level</li>
  <li>A Visual Studio Code extension that does the same thing from inside the editor, with a proper search UI and update checks for the extension itself</li>
</ul>
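<p>The “picks a reasonable match” step can be sketched roughly like this. The field names (<code class="language-plaintext highlighter-rouge">docs</code>, <code class="language-plaintext highlighter-rouge">title</code>, <code class="language-plaintext highlighter-rouge">author_name</code>, <code class="language-plaintext highlighter-rouge">cover_i</code>, <code class="language-plaintext highlighter-rouge">key</code>) come from Open Library’s <code class="language-plaintext highlighter-rouge">search.json</code> response; the selection heuristic and function names are illustrative, not the library’s actual logic:</p>

```javascript
// Illustrative sketch - not Jekyll Reads' real code. Field names follow
// Open Library's search.json schema; the scoring heuristic is invented here.
function pickMatch(docs, query) {
  const q = query.toLowerCase();
  // Prefer results whose title contains the query and that have a cover image.
  const scored = docs.map((d) => ({
    doc: d,
    score:
      ((d.title || "").toLowerCase().includes(q) ? 2 : 0) +
      (d.cover_i ? 1 : 0),
  }));
  scored.sort((a, b) => b.score - a.score);
  return scored.length ? scored[0].doc : null;
}

function toYamlSnippet(doc, indent = "  ") {
  if (!doc) return "";
  const lines = [
    `- title: "${doc.title}"`,
    `  author: "${(doc.author_name || []).join(", ")}"`,
    `  link: "https://openlibrary.org${doc.key}"`,
  ];
  if (doc.cover_i) {
    // Open Library serves covers by numeric id at covers.openlibrary.org.
    lines.push(`  cover: "https://covers.openlibrary.org/b/id/${doc.cover_i}-M.jpg"`);
  }
  return lines.map((l) => indent + l).join("\n");
}
```

<p>The indent parameter is what lets the editor integrations drop the snippet into the buffer at the right nesting level.</p>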

<p>All of this is intentionally boring: no external Node dependencies, just the built‑in modules and a bit of glue. The point is to make it slightly easier to keep the reading list current than to let it drift.</p>

<h2 id="how-it-shows-up-on-this-site">How it shows up on this site</h2>

<p>On this site, the source of truth is <code class="language-plaintext highlighter-rouge">_data/reading.yml</code>. Entries that are still in progress, finished, or abandoned are all represented there with the same structure. The YAML includes things like start and finish dates, a link to more information (usually Open Library), an optional cover image, and a free‑form comment.</p>
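<p>For a sense of the shape, an entry looks roughly like this. The field names here are illustrative - the real schema is whatever your <code class="language-plaintext highlighter-rouge">_data/reading.yml</code> and templates agree on:</p>

```yaml
# Hypothetical entry shape - field names are illustrative, not authoritative.
- title: "An Example Book"
  author: "Some Author"
  status: finished
  started: 2026-01-10
  finished: 2026-02-01
  link: https://openlibrary.org/works/OL0000000W
  cover: /img/covers/example-book.jpg
  comment: "A short free-form note about the book."
```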

<p>That data feeds two places:</p>

<ul>
  <li>The dedicated <a href="/reading/">reading page</a>, which separates currently‑reading, finished, and abandoned books and shows covers, dates, and comments</li>
  <li>A small sidebar block on the home page that surfaces what I am currently reading, so the log is visible without needing a whole post for every book</li>
</ul>
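<p>Because the data lives in <code class="language-plaintext highlighter-rouge">_data</code>, the Liquid side stays small. A sketch of the kind of loop involved, assuming a <code class="language-plaintext highlighter-rouge">status</code> field on each entry (the real templates and field names may differ):</p>

```liquid
{% assign current = site.data.reading | where: "status", "current" %}
<ul>
  {% for book in current %}
    <li>
      <a href="{{ book.link }}">{{ book.title }}</a> - {{ book.author }}
      {% if book.comment %}<p>{{ book.comment }}</p>{% endif %}
    </li>
  {% endfor %}
</ul>
```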

<p>Jekyll Reads does not try to be a general bookshelf app. It just reflects what I am already doing: writing short notes in YAML and publishing them along with the rest of the site.</p>

<h2 id="design-constraints-and-tradeoffs">Design constraints and trade‑offs</h2>

<p>I made a few deliberate choices that might look odd if you are used to larger toolchains:</p>

<ul>
  <li><strong>No external Node dependencies.</strong> The library and CLI only use built‑in modules like <code class="language-plaintext highlighter-rouge">https</code> and <code class="language-plaintext highlighter-rouge">readline</code>. That keeps installation simple and makes it easy to run in constrained environments.</li>
  <li><strong>Open Library as the primary data source.</strong> It provides book metadata, cover images, and stable URLs without requiring another account or scraping.</li>
  <li><strong>Plain YAML as the storage format.</strong> A static <code class="language-plaintext highlighter-rouge">_data</code> file is easy to version, review, and back up. It also plays nicely with Jekyll’s existing data pipeline.</li>
  <li><strong>Multiple small tools instead of one big one.</strong> The CLI, Vim integration, and VS Code extension all sit on top of the same library, so they stay in sync without each re‑implementing the logic.</li>
</ul>

<p>If any of that stops being true in the future, I can replace or extend the pieces without touching the core data file.</p>

<h2 id="if-you-want-to-use-it">If you want to use it</h2>

<p>The repository README walks through how to set up your own <code class="language-plaintext highlighter-rouge">_data/reading.yml</code>, wire up a reading page and sidebar, and use the CLI or editor integrations. It is written so that you can follow it even if you are not using the same Jekyll theme I am.</p>

<p>The code is MIT‑licensed and shipped under Electric Pliers LLC. If you want a lightweight way to publish a reading log without standing up a whole social network, you might find it useful.</p>

<p>You can find the repository and full documentation here: <a href="https://github.com/ElectricPliers/jekyll-reads">https://github.com/ElectricPliers/jekyll-reads</a></p>]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[A tiny, dependency-free toolkit for keeping a Jekyll reading log in sync: one YAML data file, a CLI, and editor integrations that handle the boring parts.]]></summary></entry><entry><title type="html">Phone a Friend: Multi-Model Subagents for VS Code Copilot Chat</title><link href="https://bexelbie.com/2026/02/23/phone-a-friend.html" rel="alternate" type="text/html" title="Phone a Friend: Multi-Model Subagents for VS Code Copilot Chat" /><published>2026-02-23T11:10:00+01:00</published><updated>2026-02-23T11:10:00+01:00</updated><id>https://bexelbie.com/2026/02/23/phone-a-friend</id><content type="html" xml:base="https://bexelbie.com/2026/02/23/phone-a-friend.html"><![CDATA[<p>I wanted a way to stay inside Visual Studio Code, use Copilot Chat as the “orchestrator,” and still mix and match models for different parts of the work. Plan a change with one of the slower, more capable models, but let a smaller, faster model handle mechanical refactors. Edit a blog post with one model, but hand Jekyll plumbing or JSON/YAML munging to another. The friction was that the built-in Copilot Chat extension only lets subagents run on the same model as the parent conversation, while the Copilot CLI happily lets you pick any available model per run. Phone a Friend bolts that flexibility onto Copilot Chat, so I can keep the full VS Code experience - including gutter diffs - while dispatching subtasks to whatever model is best for the job.</p>

<h2 id="the-problem">The Problem</h2>

<p>When you use GitHub Copilot Chat in VS Code, every subagent it spawns runs on the same model as the parent conversation. If you’re on Claude Opus 4.6, all subagents are Claude Opus 4.6. Sometimes you want a different model for a subtask - a faster one for simple work, or a different vendor for a second opinion.</p>

<p>GitHub Copilot CLI supports <code class="language-plaintext highlighter-rouge">--model</code> to pick any available model, but using it directly doesn’t help - changes made by the CLI don’t produce VS Code’s gutter indicators (the green/red diff decorations in the editor margin). You get the work done but lose the visual feedback that makes code review comfortable.</p>

<p><a href="https://github.com/bexelbie/phone-a-friend">Phone a Friend</a> is an MCP server that solves both problems. It dispatches work to Copilot CLI with the model of choice, captures a unified diff of the changes, and returns it to the calling agent - which applies it through VS Code’s edit tools. Gutter indicators show up as the changes were made natively.</p>

<h2 id="how-it-works">How It Works</h2>

<ol>
  <li>Copilot Chat calls the <code class="language-plaintext highlighter-rouge">phone_a_friend</code> MCP tool with a prompt, model name, and working directory</li>
  <li>The MCP server creates an isolated git worktree from <code class="language-plaintext highlighter-rouge">HEAD</code></li>
  <li>It launches Copilot CLI in non-interactive mode in that worktree with the requested model</li>
  <li>The subagent does its work and writes its response to a “message-in-a-bottle” file</li>
  <li>The MCP server reads the response, captures a <code class="language-plaintext highlighter-rouge">git diff</code>, and cleans up the worktree</li>
  <li>The MCP server then returns the response text and unified diff to the calling agent</li>
  <li>The calling agent applies the diff using VS Code’s edit tools - gutter indicators appear</li>
</ol>

<p>The “message in a bottle” pattern is worth explaining. Copilot CLI’s stdout mixes the agent’s response with progress output and is unreliable to parse. Rather than fighting noisy output, the tool instructs the subagent to write its final response to a file. The server reads the file. Clean separation.</p>

<h2 id="safety">Safety</h2>

<p>Worktree isolation means your working tree is never modified directly. Push protection blocks <code class="language-plaintext highlighter-rouge">git push</code> at the tool level. Worktrees are cleaned up after every invocation, even on errors.</p>

<h2 id="setup">Setup</h2>

<p>You install Phone a Friend like any other MCP server in VS Code: add the <code class="language-plaintext highlighter-rouge">@bexelbie/phone-a-friend</code> npm package through the <code class="language-plaintext highlighter-rouge">MCP: Add Server...</code> command, or point VS Code at it via your MCP configuration. The GitHub README details the exact JSON and prerequisites (Node.js, Copilot CLI, Git).</p>
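<p>As a shape reference only - the README is authoritative - a stdio MCP server entry in VS Code generally looks something like this; the exact args for this package may differ:</p>

```json
{
  "servers": {
    "phone-a-friend": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@bexelbie/phone-a-friend"]
    }
  }
}
```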

<h2 id="usage">Usage</h2>

<p>Once configured, you stay in Copilot Chat and describe the outcome you want; the calling agent decides when to route a subtask through Phone a Friend. The tool surface includes discovery hints, so natural phrasing like “get a second opinion from another model” is usually enough to trigger it. Any model that Copilot CLI exposes is available.</p>

<h2 id="known-limitations">Known Limitations</h2>

<p>A few trade-offs worth knowing:</p>

<ul>
  <li><strong>Context cost.</strong> The unified diff lands in the calling agent’s context window. Large diffs eat context. I’ve got an issue open exploring ideas for improving this.</li>
  <li><strong>Message-in-a-bottle compliance.</strong> Most models follow the instruction to write their final response into the message-in-a-bottle file, but some may occasionally ignore it. When that happens, the calling agent still gets the diff of any file changes but not the response text.</li>
</ul>

<h2 id="availability">Availability</h2>

<p>The project is <a href="https://github.com/bexelbie/phone-a-friend">on GitHub</a> under MIT license, and published on npm as <a href="https://www.npmjs.com/package/@bexelbie/phone-a-friend"><code class="language-plaintext highlighter-rouge">@bexelbie/phone-a-friend</code></a>. Written in TypeScript.</p>

<h2 id="what-changed-for-me">What Changed For Me</h2>

<p>Since integrating this into my Copilot setup, the biggest shift is that I no longer have to choose between “the model I want to think with” and “the model I want to do the work.” I also eliminated a bunch of the copy/paste that came from manually emulating this setup. I keep the main conversation with a larger, more capable model for planning and review, and routinely:</p>

<ul>
  <li>send quick, mechanical refactors to a smaller, faster model</li>
  <li>hand Jekyll front matter, Liquid, and config tweaks to a model that’s better at markup and templating</li>
  <li>ask a different vendor’s model for a second opinion on changes or ideas, especially where that model may be better at the task</li>
</ul>

<p>Because everything still lands back in the same VS Code buffer with normal gutter diffs, it feels like one coherent tool instead of a handful of loosely-connected ones.</p>

<p>The project also had an unexpected dynamic in the development process. Building an MCP server that mimics a capability already available to the model created a strange feedback loop. I could collaborate on the implementation with Opus, and then turn around and interview it as a subject matter expert on how it uses that very same capability. It was a weird feeling to use the model as both a partner in writing the code and a primary source for understanding the user requirements.</p>]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[Dispatch subtasks to a different AI model - with editor gutter indicators intact.]]></summary></entry><entry><title type="html">Replacing my compact calendar spreadsheet with an ICS-powered web app</title><link href="https://bexelbie.com/2026/02/18/online-compact-calendar.html" rel="alternate" type="text/html" title="Replacing my compact calendar spreadsheet with an ICS-powered web app" /><published>2026-02-18T14:30:00+01:00</published><updated>2026-02-18T14:30:00+01:00</updated><id>https://bexelbie.com/2026/02/18/online-compact-calendar</id><content type="html" xml:base="https://bexelbie.com/2026/02/18/online-compact-calendar.html"><![CDATA[<p>I’ve used some form of <a href="https://davidseah.com/node/compact-calendar/">DSri Seah’s Compact Calendar</a> for over seven years. The calendar is a lovingly designed single-page view of the entire year, organized into Monday-through-Sunday weeks with no breaks between months.</p>

<p>The point of the format is simple: my normal calendar is great at telling me what I’m doing on Tuesday. What it’s terrible at is answering planning questions that are above the day level, such as:</p>

<ul>
  <li>If we take a vacation the last two weeks of July, will it overlap business travel?</li>
  <li>Can we connect these two public holidays and get 14 days away for only 8 days of PTO?</li>
  <li>Do we have any genuinely empty weeks left this year?</li>
</ul>

<p>For a long time, my compact calendar was a spreadsheet. That worked until it didn’t.</p>

<h2 id="the-problem-i-actually-needed-to-solve">The problem I actually needed to solve</h2>

<p>The spreadsheet version served me well for years, but life got more complicated.</p>

<p>My kid is getting older, which means more activities to track: summer camps, school breaks, etc. My partner and I no longer work for the same company, so we don’t share the same corporate holidays, and as our roles have changed, so has the amount of travel we do. And, honestly, my spreadsheet has bespoke formulas that only I understand … on Thursdays when there is a full moon.</p>

<p>My partner knows how to use a calendar app. She really doesn’t want to learn a special spreadsheet for planning, and I don’t blame her.</p>

<p>The real friction screaming out that there had to be a better way was the double-entry work. If my kid has summer camp in July, I’d put it on the family calendar - and then manually mark those weeks on my compact calendar spreadsheet. Two sources of truth means one of them is eventually wrong.</p>

<p>So the job wasn’t “build a better calendar.” It was: keep the year-at-a-glance view, but make the calendar app the source of truth.</p>

<h2 id="the-shape-of-the-solution">The shape of the solution</h2>

<p>I decided to build a web version of the compact calendar that could read directly from standard ICS calendar feeds.</p>

<p>Put the summer camp on the shared calendar once. The compact calendar picks it up automatically.</p>

<p>And if this was going to be something my partner and I actually used together, it needed two things:</p>

<ul>
  <li>A simple setup flow (not “copy this spreadsheet and don’t touch column Q”)</li>
  <li>A way for it to always be available, beyond “go find this Google Docs link”</li>
</ul>

<h2 id="what-the-tool-does">What the tool does</h2>

<p>The calendar renders a full year on a single page. Each row is one week, Monday through Sunday.</p>

<p>Parallel to the block of weeks running down the page is a column for displaying committed events and a second for displaying possible events.</p>

<ul>
  <li>Committed: events that are definitely happening - travel that’s booked, school terms, confirmed work trips.</li>
  <li>Possible: things under consideration - a conference I submitted a talk to but haven’t heard back from yet, vacation options we’re weighing.</li>
</ul>

<p>The tool uses color to signal status at a glance:</p>

<ul>
  <li>Blue background: first day of the month (anchors the continuous weeks)</li>
  <li>Red text: public holidays (per selected country)</li>
  <li>Green background: committed events</li>
  <li>Yellow background: possible events</li>
  <li>Green background with a yellow border: overlaps/conflicts that need attention</li>
</ul>
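<p>The per-day logic behind those colors is simple enough to sketch. Assuming each event is a <code class="language-plaintext highlighter-rouge">{ start, end }</code> pair of dates (names illustrative; this is not the app’s actual code):</p>

```javascript
// Classify one day against the committed and possible event lists.
// Returns a status string a renderer could map to the colors above.
function covers(event, day) {
  return day >= event.start && day <= event.end;
}

function classifyDay(day, committed, possible) {
  const inCommitted = committed.some((e) => covers(e, day));
  const inPossible = possible.some((e) => covers(e, day));
  if (inCommitted && inPossible) return "conflict"; // green + yellow border
  if (inCommitted) return "committed";              // green background
  if (inPossible) return "possible";                // yellow background
  return "free";
}
```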

<p>Here’s what the full-year view looks like with demo data loaded:</p>

<p><img src="/img/2026/CC-FullCalendar.png" alt="A full-year compact calendar view with one row per week (Monday through Sunday), with committed events shown in green, possible events in yellow, public holidays in red, and overlaps highlighted with a yellow border." /></p>

<h2 id="inputs-url-file-or-demo">Inputs: URL, file, or demo</h2>

<p>While demo data is built into the system, the real value comes from loading your own data. You can choose between two kinds of sources:</p>

<ul>
  <li>A URL - a <code class="language-plaintext highlighter-rouge">webcal://</code> or <code class="language-plaintext highlighter-rouge">https://</code> link to a published calendar (iCloud, Google Calendar, etc.)</li>
  <li>A file - a <code class="language-plaintext highlighter-rouge">.ics</code> file uploaded from your computer</li>
</ul>

<p>We’re an Apple household so our calendars live in iCloud, but the tool doesn’t care about your calendar provider. Anything that produces a standard ICS feed works.</p>

<p>My practical workflow is two shared calendars in Apple Calendar:</p>

<ul>
  <li>one for committed travel and events.  For me, this is actually my shared calendar that our family maintains.</li>
  <li>one for possibilities we’re considering</li>
</ul>

<p>Both are published as <code class="language-plaintext highlighter-rouge">webcal</code> URLs, and the compact calendar fetches them and renders the year view. Using my shared calendar works because the app ignores events that aren’t multi-day, all-day blocks - so dentist appointments don’t drown out the year view.  You can optionally include single-day all-day events if that helps you.</p>
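<p>That filter falls naturally out of the ICS format: all-day events use <code class="language-plaintext highlighter-rouge">DTSTART;VALUE=DATE</code> stamps, and <code class="language-plaintext highlighter-rouge">DTEND</code> is exclusive, so a multi-day block spans at least two date values. A minimal sketch of that filter (the app’s real parser is more thorough, and the function name is invented here):</p>

```javascript
// Keep only all-day events spanning at least minDays calendar days.
// ICS all-day events carry VALUE=DATE stamps like 20260701; DTEND is exclusive.
function allDayBlocks(icsText, minDays = 2) {
  const events = [];
  for (const block of icsText.split("BEGIN:VEVENT").slice(1)) {
    const body = block.split("END:VEVENT")[0];
    const start = body.match(/DTSTART;VALUE=DATE:(\d{8})/);
    const end = body.match(/DTEND;VALUE=DATE:(\d{8})/);
    const summary = body.match(/SUMMARY:(.*)/);
    if (!start || !end) continue; // timed event (dentist appointment): skip
    const toDate = (s) => new Date(`${s.slice(0, 4)}-${s.slice(4, 6)}-${s.slice(6, 8)}`);
    const days = (toDate(end[1]) - toDate(start[1])) / 86400000;
    if (days >= minDays) {
      events.push({
        summary: summary ? summary[1].trim() : "",
        start: toDate(start[1]),
        end: toDate(end[1]),
      });
    }
  }
  return events;
}
```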

<p>The setup controls are intentionally simple:</p>

<p><img src="/img/2026/CC-Controls.png" alt="Configuration controls showing a country dropdown (for public holidays) and two inputs for selecting the committed and possible calendar sources." /></p>

<h2 id="the-tech-and-the-annoying-part">The tech (and the annoying part)</h2>

<p>This is a vanilla JavaScript app built with <a href="https://vite.dev/">Vite</a>, hosted on <a href="https://azure.microsoft.com/en-us/products/app-service/static">Azure Static Web Apps</a>. No framework - just DOM manipulation, a CSS file, and under 500 lines of main application code.</p>

<p>The interesting technical problem was CORS.</p>

<p>Calendar providers like iCloud don’t set CORS headers on their published feeds, which means a browser can’t fetch them directly. The solution is a small Azure Function that acts as a proxy:</p>

<ul>
  <li>the browser sends the calendar URL to the server</li>
  <li>the server fetches the calendar data</li>
  <li>the server returns it to the browser</li>
</ul>
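<p>The pass-through itself is a few lines. This is illustrative, not the deployed function - the fetch implementation is a parameter here just to keep the sketch self-contained, and the real handler does more validation:</p>

```javascript
// Minimal pass-through sketch: normalize webcal:// to https://, fetch, return
// the body. Nothing is stored; errors surface only the hostname, never the
// full URL, since calendar URLs can carry auth tokens.
async function proxyCalendar(calendarUrl, fetchImpl) {
  const url = new URL(calendarUrl.replace(/^webcal:/i, "https:"));
  if (url.protocol !== "https:") {
    throw new Error("Only https/webcal calendar URLs are allowed");
  }
  try {
    const res = await fetchImpl(url.toString());
    if (!res.ok) throw new Error(`upstream returned ${res.status}`);
    return await res.text();
  } catch (err) {
    // Hostname only - e.g. "p01-calendars.icloud.com fetch failed".
    throw new Error(`${url.hostname} fetch failed: ${err.message}`);
  }
}
```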

<p>The proxy doesn’t store or log anything. It’s a pass-through.</p>

<p>I built the app with an AI coding agent. I provided direction and made decisions, but I didn’t hand-write every line. For this kind of tool, I’m comfortable with that. It’s a static site that renders calendar data client-side, and the risk profile is low. Additionally, nothing in this code represents a new problem or a novelty. This is bog-standard code, and the agent handled the boilerplate well for this project.</p>

<p>Importantly, even though I could have written this code myself, I wouldn’t have. I probably would have gotten myself caught in a bit of analysis paralysis over frameworks. But more importantly, writing a lot of this code is just boring code to write. The AI agent has allowed me to solve my own problem, and that’s the part that matters to me. I didn’t have to suddenly become more disciplined about spreadsheets or get my family dragged onto a tool that really only speaks to me. Instead, I was able to change the shape of the problem and make it more solvable within the context of the humans involved.</p>

<h2 id="privacy-and-the-honest-trade-off">Privacy and the honest trade-off</h2>

<p>All your data stays in your browser. The app stores the URLs you’re loading, your selected country, and cached holiday data in local storage. This is purely functional and not for tracking.</p>

<p>Calendar URLs necessarily have to go through the server-side proxy because browsers won’t fetch them directly. The proxy is a stateless pass-through - I don’t persist calendar data in the function or in your browser. Calendar URLs are sent via POST request body rather than query parameters, which means they aren’t captured in Azure’s platform-level request logs. Error logging includes only the target hostname (e.g., “iCloud fetch failed”), never the full URL or authentication tokens. If your calendar URL contains authentication tokens (iCloud URLs do), understand that the proxy briefly sees them in transit.</p>

<h2 id="try-it-out">Try it out</h2>

<p>The calendar is live at <a href="https://cc.bexelbie.com">cc.bexelbie.com</a>. You can load the built-in demo data to explore without connecting your own calendars - select “Demo” from either input dropdown.</p>

<p>The source is on GitHub at <a href="https://github.com/bexelbie/online-compact-calendar">bexelbie/online-compact-calendar</a>. If you have ideas or find bugs, <a href="https://github.com/bexelbie/online-compact-calendar/issues">open an issue</a>.</p>

<p>On first visit, there’s a banner that points you at settings:</p>

<p><img src="/img/2026/CC-first-run-banner.png" alt="A first-run welcome banner that tells the user to use the gear icon to configure the app." /></p>

<h2 id="whats-next">What’s next</h2>

<p>I’m going to live with it for a while before adding features. The spreadsheet served me for seven years with almost no changes.</p>]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[I rebuilt my year-at-a-glance compact calendar as a small web app that reads ICS feeds and highlights conflicts.]]></summary></entry><entry><title type="html">Building a tiny ephemeral draft sharing system on Hedgedoc</title><link href="https://bexelbie.com/2026/02/12/yak-shaving.html" rel="alternate" type="text/html" title="Building a tiny ephemeral draft sharing system on Hedgedoc" /><published>2026-02-12T13:00:00+01:00</published><updated>2026-02-12T13:00:00+01:00</updated><id>https://bexelbie.com/2026/02/12/yak-shaving</id><content type="html" xml:base="https://bexelbie.com/2026/02/12/yak-shaving.html"><![CDATA[<blockquote>
  <p>This yak is now shaved!</p>

  <p><cite>me</cite></p>
</blockquote>

<p>I’ve been working on two submissions I want to put into the CFP for <a href="https://installfest.cz">installfest.cz</a> and had them at a “man it’d be nice to have someone else read and comment on this” level of done.  Normally when this happens I have to psych myself up for it, both because receiving feedback can be hard and because I have to do a format conversion.  I tend to write in markdown in “all the places” and sharing a document for edits has typically meant pasting it into something like Google Docs or Office 365, where even if it still looks like markdown … it isn’t.</p>

<p>And that’s when the yak walked into the room. Instead of just pasting my drafts into Google Docs and getting on with the reviews, I decided I needed to delay getting feedback and build the markdown collaborative editing system of my dreams. Classic yak shaving - solving a problem you don’t actually need to solve in order to eventually do the thing you originally set out to do. <a href="https://www.youtube.com/watch?v=0E5ae4MD5qo">What is Yak Shaving</a> - a video by Matthew Miller if you’re unfamiliar.</p>

<p>When I am done, I then have to take this text back to where it was originally going, often in good clean markdown (this blog post is in markdown!).  This rigmarole is tiring.  I also dislike that the go-to tools for this had turned into an exercise in ensuring guests could access a document or collecting someone’s login IDs for yet another system.</p>

<p>I knew there had to be a better way.  Then it hit me: when markdown started to take off, a slew of collaborative markdown editing sites appeared, often modeled on the older Etherpad.  Several are still around.  I looked at hosted options first, as I prefer using a service when I can so I don’t create more sysadmin work for myself.</p>

<p>I hit three snags in picking one:</p>

<ol>
  <li>I don’t like being on a free tier when I don’t understand how it is supported.  While I don’t know that anyone in this space is nefarious, the world is trending in a specific direction.  I don’t mind paying, but this was also not going to generate enough value to warrant serious payments.</li>
  <li>The project that first came to mind for markdown collaboration went open core back in 2019.  Open source business models are hard, and doing open core well is even harder.  As you’ll see below I had specific needs and I had a feeling I might run into the open core wall.</li>
  <li>One of the CFPs would actually benefit from implementing this as my example … bonus!</li>
</ol>

<p>After examining a bunch of options, I settled on building something out of <a href="https://hedgedoc.org">Hedgedoc</a>. This was not an easy choice and the likelihood of entering analysis paralysis was super high.  So I decided to try to force this to fit on a free tier Google GCP instance I have been running for years.  It is the tiny e2-micro burstable instance, a literal thimble of compute.</p>

<p>This ruled out a lot of options.  Privacy-first options need more compute just to do encryption work.  A bunch of options want a server database (Postgres and friends), and a single-person instance should be fine on SQLite, in my opinion.  All roads now ran to Hedgedoc.  It was the only option that could run on SQLite, tolerate my tiny VM, still give me collaborative markdown, and seemed to have every feature required if I could make it work.</p>

<p>It wasn’t all sunshine and happiness though.  Hedgedoc is in the middle of writing version 2.0, which means 1.0 is frozen for anything except critical fixes and all efforts are focused on the future.  Therefore, the documentation being a bit rough in places was something I was going to have to live with.</p>

<p>My core requirements were:</p>
<ol>
  <li>Only I am allowed to create new notes</li>
  <li>Anyone with the “unguessable” url can edit and should not require an account to do so</li>
  <li>This should require next to zero system administration work and be easy to start and stop</li>
  <li>When I need more features, I should be able to extend this with a plugin for tools like <a href="https://obsidian.md">Obsidian</a> or Visual Studio Code.</li>
</ol>

<p>And while it took longer than I’d hoped, it works.  Here’s how:</p>

<ol>
  <li>Write yourself a configuration file for Hedgedoc</li>
</ol>

<p>config.json:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "production": {
    "sourceURL": "https://github.com/bexelbie/hedgedoc",
    "domain": "&lt;url&gt;",
    "host": "localhost",
    "protocolUseSSL": true,
    "loglevel": "info",
    "db": {
      "dialect": "sqlite",
      "storage": "/data/db/hedgedoc.sqlite"
    },
    "email": true,
    "allowEmailRegister": false,
    "allowAnonymous": false,
    "allowAnonymousEdits": true,
    "requireFreeURLAuthentication": true,
    "disableNoteCreation": false,
    "allowFreeURL": false,
    "enableStatsApi": false,
    "defaultPermission": "limited",
    "imageUploadType": "filesystem",
    "hsts": {
      "enable": true,
      "maxAgeSeconds": 31536000,
      "includeSubdomains": true,
      "preload": true
    }
  }
}
</code></pre></div></div>

<p>This sets a custom source URL for the fork I have made (more below), enables SSL, disables new account registration, and allows edits via unguessable URLs without requiring logins.</p>

<ol>
  <li>Decide how you want to launch the container, I am using a quadlet, and provide some environment variables:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CMD_SESSION_SECRET="&lt;secret&gt;"
CMD_CONFIG_FILE=/hedgedoc/config.json
NODE_ENV=production
</code></pre></div></div>

<p>These just put it in Production mode, point it at the config and provide the only secret required.</p>

<ol>
  <li>You’re basically done.  I happen to have put mine behind a Cloudflare tunnel and updated the main page of the site, but those steps are pretty straightforward.</li>
</ol>

<h2 id="more-yak-shaving">More Yak Shaving</h2>

<p>Naturally, I planned to launch it, create my user ID via the CLI, and share my CFP submissions with the folks I wanted reviews from.</p>

<p><em>Narrator: Naturally, that’s not what happened.</em></p>

<p>I decided to push YAGNI<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> out of the way and NEED IT!  Specifically I forked the v1 code into <a href="https://github.com/bexelbie/hedgedoc/">a repository</a> to add some features.  The upstream is unlikely to want any of these so I will have to carry these patches.  What I did:</p>

<ol>
  <li>Hedgedoc will do color highlighting and gutter indicators so you can see which author added what text.  Unfortunately, it didn’t seem to be working: I was getting weak indicators (underlines instead of highlighting) and often nothing.  So I fixed that.</li>
  <li>The colors for authorship are chosen randomly.  I am a bit past my prime in the seeing department and it was hard to see the colors against the dark editor background, so I restricted color choices to those that are contrasting.  It isn’t perfect, but it is better.</li>
  <li>My particular set up involves a lot of guest editors.  Normally I share to just a few folks, but sometimes to many.  They’ll all be anonymous.  Hedgedoc doesn’t track authorship colors for guests, so I patched in a system to generate color markings for anonymous editors.</li>
  <li>A feature I always loved in Etherpad was that you could temporarily hide the authorship colors when you just wanted to “read the document.”  So I added a button for that.  While I was doing that I discovered that there is a separate toggle to switch the editor into light mode, but I couldn’t see it because the status bar was black and it was set to .2 opacity!! I fixed that too.  Also, now the status bar switches when the editor switches.</li>
  <li>Comments, it turns out, are needed.  So I coded in rudimentary support for CriticMarkup comments.</li>
</ol>
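<p>For context on that last item, CriticMarkup comments are delimited by <code class="language-plaintext highlighter-rouge">{&gt;&gt;</code> and <code class="language-plaintext highlighter-rouge">&lt;&lt;}</code>. A minimal sketch of detecting them (my illustration here, not the actual patch) looks like this:</p>

```python
import re

# CriticMarkup comments look like {>>this needs a citation<<}
CRITIC_COMMENT = re.compile(r"\{>>(.*?)<<\}", re.DOTALL)

def extract_comments(text):
    """Return the comment bodies found in a Markdown string."""
    return [m.strip() for m in CRITIC_COMMENT.findall(text)]

print(extract_comments("Draft text {>>needs a citation<<} continues."))
# → ['needs a citation']
```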

<p>I have other ideas, but instead I am going to stop and let YAGNI win for a while.  Besides, hopefully 2.0 will ship soon and render all of this unneeded.</p>

<p>So there you go, now if you want to offer your assistance to help me write something, I’ll send you a link and you can go to town on our shared work.  If you want to see more about this, well, let’s see if Installfest.cz thinks you should or not :D — and whether this yak decides to grow its hair back.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>YAGNI: You Ain’t Gonna Need It - a philosophy that reminds us that features we dream up aren’t needed until an actual use comes along (or a paying customer).  This also applies to engineering for future ideas when those ideas aren’t committed to yet. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[How I'm using Hedgedoc on a tiny VM to share markdown drafts for feedback without heavyweight doc tools.]]></summary></entry><entry><title type="html">op-secret-manager: A SUID Tool for Secret Distribution</title><link href="https://bexelbie.com/2026/02/06/op-secret-manager.html" rel="alternate" type="text/html" title="op-secret-manager: A SUID Tool for Secret Distribution" /><published>2026-02-06T12:40:00+01:00</published><updated>2026-02-06T12:40:00+01:00</updated><id>https://bexelbie.com/2026/02/06/op-secret-manager</id><content type="html" xml:base="https://bexelbie.com/2026/02/06/op-secret-manager.html"><![CDATA[<p>Getting secrets from 1Password to applications running on Linux keeps forcing a choice I don’t want to make. Manual retrieval works until you get more than a couple of things … then you need something more. There are lots of options, but they all felt awkward or heavy, so I wrote <a href="https://github.com/bexelbie/op-secret-manager"><code class="language-plaintext highlighter-rouge">op-secret-manager</code></a> to fill the gap: a single-binary tool that fetches secrets from 1Password and writes them to per-user directories. No daemon, no persistent state, no ceremony.</p>

<h2 id="the-problem-secret-zero-on-multi-user-systems">The Problem: Secret Zero on Multi-User Systems</h2>

<p>The “secret zero” problem is fundamental: you need a first credential to unlock everything else. On a multi-user Linux system, this creates friction. Different users (application accounts like <code class="language-plaintext highlighter-rouge">postgres</code> and <code class="language-plaintext highlighter-rouge">redis</code>, or human operators) need different secrets. You want centralized management (1Password) with local distribution that doesn’t expose credentials across user boundaries. You also don’t want to solve the “secret zero” problem multiple times or have a bunch of first credentials saved in random places all over the disk.</p>

<p>Existing approaches each carry costs:</p>

<ul>
  <li><strong>Manual copying</strong>: Unscalable and leaves secret material in shell history or temporary files.</li>
  <li><strong>1Password CLI directly</strong>: Requires each user to authenticate or have API key access, which recreates the distribution problem and litters the disk with API keys.</li>
  <li><strong>Persistent agents</strong> (Connect, Vault): Add services to monitor, restart policies to configure, and failure modes to handle.</li>
  <li><strong>Cloud provider integrations</strong>: Generally unavailable on bare metal or hybrid environments where half your infrastructure isn’t in AWS/Azure/GCP.</li>
</ul>

<p>What I wanted: the <code class="language-plaintext highlighter-rouge">postgres</code> user runs a command, secrets appear in <code class="language-plaintext highlighter-rouge">/run/user/1001/secrets/</code>, done.</p>

<h2 id="how-it-works">How It Works</h2>

<p>The tool uses a mapfile to define which secrets go where:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>postgres   op://vault/db/password         db_password
postgres   op://vault/db/connection       connection_string
redis      op://vault/redis/auth          redis_password
</code></pre></div></div>

<p>Each line maps a username, a 1Password secret reference, and an output path. Relative paths expand to <code class="language-plaintext highlighter-rouge">/run/user/&lt;uid&gt;/secrets/</code>. Absolute paths work if the user has write permission.</p>
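<p>Parsing that format is simple enough to sketch in a few lines. This is my own illustration of the expansion rule, not the tool’s actual Go implementation:</p>

```python
import os

def entries_for_user(mapfile_text, username, uid):
    """Yield (secret_reference, output_path) pairs for one user.

    Relative output paths expand to /run/user/<uid>/secrets/.
    """
    for line in mapfile_text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        user, ref, path = line.split()
        if user != username:
            continue
        if not os.path.isabs(path):
            path = f"/run/user/{uid}/secrets/{path}"
        yield ref, path

mapfile = """\
postgres   op://vault/db/password         db_password
redis      op://vault/redis/auth          redis_password
"""
print(list(entries_for_user(mapfile, "postgres", 1001)))
# → [('op://vault/db/password', '/run/user/1001/secrets/db_password')]
```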

<p>The “secret zero” challenge is now centralized into a single API key file that every user can use indirectly. But the API key itself needs protection from unprivileged reads, and ideally from the users themselves. This is where SUID comes in … carefully.</p>

<h2 id="privilege-separation-design">Privilege Separation Design</h2>

<p>The security model uses SUID elevation to a service account (not root), reads protected configuration, then immediately drops privileges before touching the network or filesystem.</p>

<p>This has not been independently security audited. Treat it as you would any custom SUID program: read the source, understand the threat model, and test it in your environment before deploying broadly.</p>

<p>The flow:</p>

<ol>
  <li>Binary is SUID+SGID to <code class="language-plaintext highlighter-rouge">op:op</code> (an unprivileged service account)</li>
  <li>Process starts with elevated privileges, reads:
    <ul>
      <li>API key from <code class="language-plaintext highlighter-rouge">/etc/op-secret-manager/api</code> (mode 600, owned by <code class="language-plaintext highlighter-rouge">op</code>)</li>
      <li>Mapfile from <code class="language-plaintext highlighter-rouge">/etc/op-secret-manager/mapfile</code> (typically mode 640, owned by <code class="language-plaintext highlighter-rouge">op:op</code> or <code class="language-plaintext highlighter-rouge">root:op</code>)</li>
    </ul>
  </li>
  <li>Drops all privileges to the real calling user</li>
  <li>Validates that the calling user appears in the mapfile</li>
  <li>Fetches secrets from 1Password</li>
  <li>Writes secrets as the real user to <code class="language-plaintext highlighter-rouge">/run/user/&lt;uid&gt;/secrets/</code></li>
</ol>
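<p>One detail worth calling out in step 3: the order of the drop matters. Supplementary groups and the GID have to be dropped before the UID, because once the UID is gone the process no longer has permission to change its groups. A hedged Python sketch of the idea (the real tool is written in Go; these names are mine):</p>

```python
import os

def drop_privileges(uid, gid):
    """Irreversibly drop from the SUID/SGID identity to the real caller."""
    os.setgroups([])  # shed supplementary groups first (requires privilege)
    os.setgid(gid)    # then drop the group ID ...
    os.setuid(uid)    # ... and finally the user ID; this cannot be undone

def secrets_dir(uid):
    """Where relative mapfile paths land for a given caller."""
    return f"/run/user/{uid}/secrets"

print(secrets_dir(1001))
# → /run/user/1001/secrets
```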

<p>Because the network calls and writes happen <em>after</em> the privilege drop, the filesystem automatically enforces isolation. User <code class="language-plaintext highlighter-rouge">postgres</code> cannot write to <code class="language-plaintext highlighter-rouge">redis</code>’s directory. The secrets land with the correct ownership without additional chown operations.</p>

<h3 id="why-suid-to-a-service-account">Why SUID to a Service Account?</h3>

<p>Elevating to root would be excessive. Elevating to a dedicated, unprivileged service account constrains the blast radius. If someone compromises the binary, they get the privileges of <code class="language-plaintext highlighter-rouge">op</code> (which can read one API key) rather than full system access.</p>

<p>Alternatives considered:</p>

<ul>
  <li><strong>Linux capabilities</strong> (<code class="language-plaintext highlighter-rouge">CAP_DAC_READ_SEARCH</code>): Still requires root ownership of the binary to assign capabilities, which increases risk.</li>
  <li><strong>Group-readable API key</strong>: Forces all users into a shared group, allowing direct API key reads. This moves the problem rather than solving it.</li>
  <li><strong>No privilege separation</strong>: Each user needs a copy of the API key, defeating centralized management.</li>
</ul>

<p>The mapfile provides access control: it defines which users can request which secrets. The filesystem enforces it: even if you bypass the mapfile check, you can’t write to another user’s runtime directory. While you would theoretically be able to harvest a secret, you won’t be able to modify what the other user uses. This is key because a secret may not actually be “secret.” I have found it useful to centralize some configuration management, like API endpoint addresses, with this tool.</p>

<h3 id="root-execution">Root Execution</h3>

<p>Allowing root to use the tool required special handling. The risk is mapfile poisoning: an attacker modifies the mapfile to make root write secrets to dangerous locations.</p>

<p>The mitigation: root execution is only permitted if the mapfile is owned by <code class="language-plaintext highlighter-rouge">root:op</code> with no group or world write bits. If you can create a root-owned, properly-permissioned file, you already have root access and don’t need this tool for privilege escalation.  The SGID bit on the binary lets the service account, <code class="language-plaintext highlighter-rouge">op</code>, read the mapfile even though it is owned by root.</p>
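<p>Expressed as a predicate, the root-execution check is small. A sketch of the idea (my illustration, not the actual Go code):</p>

```python
import os
import stat

def mapfile_safe_for_root(path, op_gid):
    """True only if the mapfile is root-owned and not group/world writable."""
    st = os.stat(path)
    owned_correctly = st.st_uid == 0 and st.st_gid == op_gid
    writable_by_others = st.st_mode & (stat.S_IWGRP | stat.S_IWOTH)
    return owned_correctly and not writable_by_others
```

<p>Anything short of the required ownership and write bits causes the predicate to fail, and root execution is refused.</p>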

<h2 id="practical-integration-podman-quadlets">Practical Integration: Podman Quadlets</h2>

<p>My primary use case is systemd-managed containers. Podman Quadlets make this concise. This example is of a rootless <em>user</em> Quadlet (managed via <code class="language-plaintext highlighter-rouge">systemctl --user</code>), not a system service.</p>

<div class="language-ini highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">[Unit]</span><span class="w">
</span><span class="py">Description</span><span class="p">=</span><span class="s">Application Container</span>
<span class="py">After</span><span class="p">=</span><span class="s">network-online.target</span>
<span class="w">
</span><span class="nn">[Container]</span><span class="w">
</span><span class="py">Image</span><span class="p">=</span><span class="s">docker.io/myapp:latest</span>
<span class="py">Volume</span><span class="p">=</span><span class="s">/run/user/%U/secrets:/run/secrets:ro,Z</span>
<span class="py">Environment</span><span class="p">=</span><span class="s">DB_PASSWORD_FILE=/run/secrets/db_password</span>
<span class="py">ExecStartPre</span><span class="p">=</span><span class="s">/usr/local/bin/op-secret-manager</span>
<span class="py">ExecStopPost</span><span class="p">=</span><span class="s">/usr/local/bin/op-secret-manager --cleanup</span>
<span class="w">
</span><span class="nn">[Service]</span><span class="w">
</span><span class="py">Restart</span><span class="p">=</span><span class="s">always</span>
<span class="w">
</span><span class="nn">[Install]</span><span class="w">
</span><span class="py">WantedBy</span><span class="p">=</span><span class="s">default.target</span>
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">ExecStartPre</code> fetches secrets before the container starts. The container sees them at <code class="language-plaintext highlighter-rouge">/run/secrets/</code> (read-only). <code class="language-plaintext highlighter-rouge">ExecStopPost</code> removes them on shutdown. The application reads secrets from files (not environment variables), avoiding the “secrets in env” problem where <code class="language-plaintext highlighter-rouge">env</code> or a log dump leaks credentials.</p>

<p>The secrets directory is a <code class="language-plaintext highlighter-rouge">tmpfs</code> (memory-backed <code class="language-plaintext highlighter-rouge">/run</code>), so nothing touches disk. If lingering is enabled for the user (<code class="language-plaintext highlighter-rouge">loginctl enable-linger</code>), the directory persists across logins.</p>

<h2 id="trade-offs-and-constraints">Trade-offs and Constraints</h2>

<p>This design makes specific compromises for simplicity:</p>

<p><strong>No automatic rotation.</strong> The tool runs, fetches, writes, exits. If a secret changes in 1Password, you need to re-run the tool (or restart the service). For scenarios requiring frequent rotation, a persistent agent might be better. For most use cases, rotation happens infrequently enough that <code class="language-plaintext highlighter-rouge">ExecReload</code> or a manual re-fetch works fine.</p>

<p><strong>Filesystem permissions are the security boundary.</strong> If an attacker bypasses Unix file permissions (kernel exploit, root compromise), the API key is exposed. This is consistent with how <code class="language-plaintext highlighter-rouge">/etc/shadow</code> or SSH host keys are protected. File permissions are the Unix-standard mechanism. Encrypting the API key on disk would require storing the decryption key somewhere accessible to the SUID binary, recreating the same problem with added complexity.</p>

<p><strong>Scope managed by 1Password service account.</strong> The shared API key is the critical boundary. If it’s compromised, every secret it can access is exposed. Proper 1Password service account scoping (separate vaults, least-privilege grants, regular audits) is essential.</p>

<p><strong>Mapfile poisoning risk for non-root.</strong> If an attacker can modify the mapfile, they can make users write secrets to unintended locations. This is mitigated by restrictive mapfile permissions (typically <code class="language-plaintext highlighter-rouge">root:op</code> with mode 640). The filesystem still prevents writes to directories the user doesn’t own, but absolute paths could overwrite user-owned files.</p>

<p><strong>No cross-machine coordination.</strong> This is a single-host tool. Distributing secrets to a cluster requires running the tool on each node or using a different solution.</p>

<h2 id="implementation-details-worth-noting">Implementation Details Worth Noting</h2>

<p>The Go implementation uses the 1Password SDK rather than shelling out to <code class="language-plaintext highlighter-rouge">op</code> CLI. This avoids parsing CLI output and handles authentication internally.</p>

<p>Path sanitization prevents directory traversal (<code class="language-plaintext highlighter-rouge">..</code> is rejected). Absolute paths are allowed but subject to the user’s own filesystem permissions after privilege drop.</p>
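<p>The traversal rule is conceptually simple; here it is as a hedged sketch (not the tool’s actual code):</p>

```python
import os

def resolve_output_path(path, uid):
    """Reject traversal; expand relative paths under the runtime directory."""
    if ".." in path.split("/"):
        raise ValueError(f"directory traversal rejected: {path}")
    if os.path.isabs(path):
        return path  # allowed, but subject to the caller's own permissions
    return f"/run/user/{uid}/secrets/{path}"

print(resolve_output_path("db_password", 1001))
# → /run/user/1001/secrets/db_password
```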

<p>The cleanup mode (<code class="language-plaintext highlighter-rouge">--cleanup</code>) removes files based on the mapfile. It only deletes files, not directories, and only if they match entries for the current user. This prevents accidental removal of shared directories.</p>

<p>A verbose flag (<code class="language-plaintext highlighter-rouge">-v</code>) exists primarily for debugging integration issues. Most production usage doesn’t need it.</p>

<h2 id="availability">Availability</h2>

<p>The project is <a href="https://github.com/bexelbie/op-secret-manager">on GitHub</a> under GPLv3. Pre-built binaries for Linux amd64 and arm64 are available in releases.</p>

<p>This isn’t the right tool for every scenario. If you need dynamic rotation, audit trails beyond what 1Password provides, or distributed coordination, look at Vault or a cloud provider’s secret manager. If you’re running Kubernetes, use native secret integration.</p>

<p>But for the specific case of “I have a few Linux boxes, some containers, and a 1Password account; I want secrets distributed without adding persistent infrastructure,” this does the job.</p>]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[A no-daemon tool that distributes 1Password secrets to multi-user Linux systems but retains centralized management.]]></summary></entry><entry><title type="html">On EU Open Source Procurement: A Layered Approach</title><link href="https://bexelbie.com/2026/01/27/eu-open-source-procurement.html" rel="alternate" type="text/html" title="On EU Open Source Procurement: A Layered Approach" /><published>2026-01-27T09:10:00+01:00</published><updated>2026-01-27T09:10:00+01:00</updated><id>https://bexelbie.com/2026/01/27/eu-open-source-procurement</id><content type="html" xml:base="https://bexelbie.com/2026/01/27/eu-open-source-procurement.html"><![CDATA[<p>Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.</p>

<p>The European Commission has launched a consultation on the EU’s future Open Source strategy. That, combined with some comments by <a href="https://toot.io/@jzb@hachyderm.io">Joe Brockmeier</a>, made me think about this from a procurement perspective.  Here’s the core of my thinking: treat open source as recurring OpEx, not a box product. That means hiring contributors, contracting external experts, and funding internal IT so the EU participates rather than only purchases.</p>

<p>A lot of reaction to this request has shown up in the form of suggestions for the EU to fund open source software companies and to pay maintainers. In this <a href="https://toot.io/@jzb@hachyderm.io/115939723318453222">Mastodon exchange</a> I had with Joe, he points out that these comments ignore the realities of how procurement works and the processes vendors go through; if maintainers had to follow those processes, it would be onerous and leave them in the precarious position of living contract to contract.</p>

<p>His prescription is that the EU should participate in communities by literally “rolling up [their] sleeves and getting directly involved.” My reaction was to point out that doing these things has an indirect, at best, relationship to bottom-line metrics (profit, efficiency, cost, etc.) and that our government structures are not set up to reward this kind of thinking. In general people want to see their governments not be “wasteful” in a context where one person’s waste is another’s necessity.</p>

<p>As the exchange continued, <a href="https://toot.io/@jzb@hachyderm.io/115940797583976313">Joe pointed out</a> that “it’s not FOSS that needs to change, it’s the organizational thinking.”</p>

<p>In the moment I took the conversation in a slightly different direction, but the core of this conversation stuck with me. I woke up this morning thinking about organizational change. I am sure I am not the first to think this way, but here’s my articulation.</p>

<p>An underlying commentary, in my opinion, in many of the responses from the “pay the maintainers / fund open source” crowd is the application of a litmus test to the funded parties. Typically they want to exclude not only all forms of proprietary software, but also SaaS products that don’t fully open their infrastructure management, products which rely on cloud services, large companies, companies that have traditionally been open source friendly that have been acquired (even if they are still open source friendly), and so on. These exclusions, no matter which you support, if any, tend to drive the use of open source software by an entity like the EU into a 100% self-executed motion. And, despite the presence of SaaS in that list, these conversations often treat open source software as a “box product” only experience that the end-user must self-install in their own (private and presumably all open source) cloud.</p>

<p>A key element of most entities is that they procure the things that aren’t uniquely their effort. A government procures email server software (and increasingly email as a service) because sending email isn’t their unique effort; the work that email allows to happen is. There is an inherent disconnect between the effort and therefore the corresponding cost expectation of getting email working so you can do work versus first becoming an email solution provider and expert and then after that beginning to do the work you wanted to do. (A form of Yak Shaving perhaps?).</p>

<p>While I am not sure I will reply to the EU Commission - I am a resident of the EU but not an EU citizen - I wanted to write to organize my thoughts.</p>

<h2 id="why-procurement-struggles-with-oss">Why Procurement Struggles With OSS</h2>

<p>Software procurement is effectively the art of getting software:</p>

<ul>
  <li>written</li>
  <li>packaged into a distributable consumable</li>
  <li>maintained</li>
  <li>advanced with new features as need arises</li>
  <li>installed and working</li>
</ul>

<p>Over time the industry has become more adept at doing more of these things for their customers. Early software was all custom and then we got software that was reusable. Software companies became more common as entities became willing to pay for standardized solutions and we saw the rise of the “box product.” SaaS has supplanted much of the installation and execution last-mile work that was the traditional effort of in-house IT departments. From an organizational perspective, these distinct areas of cost - some one-time and some recurring - have increasingly been rolled into a single, recurring cost. That is easier to budget and operate.</p>

<p>Bundling usually leads to discounting. Proprietary software companies control this whole stack and therefore can capture margin at multiple layers. This also allows them to create a discount when bundling different layers because they can “rationalize” their layer-based profit calculations. Open source changes this equation. There is effectively no profit built into most layers because any profit-taking is competed away in a deliberate and wanted race to the bottom. When a company commercializes open source software, it has to build all of its profit (and the cost of being a company) into the few layers it controls. We have watched companies struggle to make this model work, in large part because it is hard and easy to misunderstand. There is a whole aside I could write about how single-company open source makes these even worse because it buries the cost for layers like writing and maintaining software into the layers that are company-controlled, but I won’t, to keep this short. But know this context. What this means, in the end, is that I believe procuring open source can sometimes lead, paradoxically, to an increase in cost versus procuring the layers separately … but only if you think broadly about procurement.</p>

<p>Too often we assume procurement == purchasing, but it doesn’t have to. <a href="https://www.merriam-webster.com/dictionary/procuring">Merriam-Webster</a> reminds us that procurement is “to bring about or achieve (something) by care and effort.” Therefore we could encourage entities like the EU to procure open source software by using a layered approach and have an outcome identical to the procurement of the same software in a non-open way at the same or lower cost. Open source doesn’t need to save money; it just needs to not “waste” it.</p>

<p>The key is the rise of software as a service. From an accounting perspective, software as a service moves software expenses from a model of large one-time costs with smaller, if any, recurring costs to one of just recurring costs. The “Hotel California”<sup id="fnref:saas-exit"><a href="#fn:saas-exit" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> reality of software as a service - the idea that recurring costs can be ended at-will - is an exciting one organizationally as it gives flexibility at controllable cost, but in practice exit is often constrained by vendor lock-in, data egress limits, and portability gaps.</p>

<h2 id="the-layered-opex-model">The Layered OpEx Model</h2>

<p>Here’s how the EU can treat open source as a recurring cost:</p>

<ol>
  <li>
    <p><strong>Hire people to participate in the open source project.</strong> They are tasked with helping to maintain and advance software to keep it working and changing to meet EU needs. These people are, like most engineers at open source companies, paid to focus on the organization’s needs. They differ from our typical view of contributors as people showing up to “scratch their own itch.”</p>
  </li>
  <li>
    <p><strong>Enter into contracts with external parties to provide consulting and support beyond the internal team.</strong> These folks are there to give you diversity of thought and some guarantees. The internal team is, by definition, focused just on EU problems and has a sample size install base of one. External contractors will have a much larger scope of interest and install base sample size as they work with multiple customers. Critically, this creates a funding channel for non-employees and speaks to the “pay the maintainers” crowd.</p>
  </li>
  <li>
    <p><strong>Continue to fund internal IT departments to care and feed software and make it usable instead of shifting this expense to a single-software solution vendor.</strong> These folks are distinct from the people in #1 above. They are experts in EU needs and understand the intersection of those needs and a multitude of software.</p>
  </li>
</ol>

<p>Every one of these expenses is recurring and can be ended at-will, but only if ending them is something we are willing to knowingly accept. We already implicitly accept that risk when we buy from a company. The objections I expect are as follows. Before you read them, though, I want to define at-will. While it denotatively means “<a href="https://www.merriam-webster.com/dictionary/at%20will">as one wishes : as or when it pleases or suits oneself</a>,” in our context we can extend this with “in a reasonable time frame” or “with known decision points.”</p>

<h2 id="expected-objections">Expected Objections</h2>

<ol>
  <li>
    <p><strong>If you can terminate the people hired to participate in open source projects like this, they’re living contract to contract.</strong> To this I say, yes in the sense that they don’t have unlimited contracts, but no in the sense that they are still employees with employee benefits and protections, like notice periods. The big change is that they can be terminated solely due to changes in software needs.</p>
  </li>
  <li>
    <p><strong>But allowing for notice periods is expensive. EU employees are often perceived as more expensive than private sector ones or individual contractors.</strong> To this I say, maybe. But isn’t that the point? Shouldn’t we want to be in a place where we are <em>not</em> creating cost savings by reducing the quality of life for the humans involved?</p>
  </li>
  <li>
    <p><strong>If everything is either an employment agreement with a directed work product (do fixes/maintenance for our use case or install and manage this software) or a support/consultancy contract we aren’t paying maintainers to be maintainers.</strong> To this I say, you’re right. The mechanics of project maintenance should be borne by all of the project’s participants and not by some special select few paid to do that work. There is a lot of room here to argue about specifics, but rise above it. The key thing this causes is that no one is paid to just “grind out features or maintenance” on a project that isn’t used directly by a contributor. A key concept in open source has always been that people are there to either scratch their own itch or because they have a personal motivation to provide a solution to some group of users. This model pays for the first one and leaves the second to be the altruistic endeavor it is. Also, there are EU funds you can get to pay for altruistic endeavors :D.</p>
  </li>
  <li>
    <p><strong>This model doesn’t explain how software originates. What happens when there is no open source project (yet)?</strong> To this I say, you’re also right. This is a huge hole that needs more thought. Today we solve this with VC funding and profit-based funding. VC funding is predicated on ownership and being able to get return on investment. If this model is successful there is very little opportunity for what VCs need. However, profit based funding, when an entity takes some of its profit and invests in new ideas (not features) still can exist as the consulting agreements can, and likely should, include a profit component. Additionally, the EU and other entities can recognize a shared need through the consensus building and collaborative work on participation in open source software and fund the creation of teams to go start projects. This relies on everyone giving the EU permission to take risks like this.</p>
  </li>
  <li>
    <p><strong>The cost of administering these three expenses will eat up the cost more than paying an external vendor.</strong> To this I say, maybe, but it shouldn’t matter. While I firmly believe that this shouldn’t be true and that it should be possible for the EU to efficiently manage these costs for less than the sum of the profit-costs they would pay a company, I am willing to accept that the “expensive employees” of #2 above may change that. But just like above, I think that’s partly the point.</p>
  </li>
  <li>
    <p><strong>Adopting this model will destroy the software industry and create economic disaster.</strong> To this I say, take a breath. The EU changing procurement models doesn’t have the power to single-handedly destroy an industry. Even if every government adopted this, which they won’t, the macro impact would likely be a shift in spend rather than a net loss. This model is practical only for the largest organizations; most entities will still need third-party vendors to bundle and manage solutions. If anything, this strengthens the open source ecosystem by providing a clear monetization path for experts, while leaving ample room for proprietary software where it adds unique value. Finally, the private sector is diverse; many companies and investors will continue to prefer traditional models. The goal here is to increase EU participation in a public good and reduce dependency, not to dismantle the software industry.</p>
  </li>
</ol>

<h2 id="what-to-ask-the-commission">What To Ask The Commission</h2>

<ul>
  <li>When choosing software, the budget must include time for EU staff (new or existing reassigned) to contribute to the underlying open source projects.</li>
  <li>Keep strong in-house IT skills to ensure that deployed solutions meet needs and work together.</li>
  <li>Complement your staff with support/consultancy agreements to provide the accountability partnership you get from traditional vendors and access to greater knowledge when needed.</li>
  <li>Make decisions based on your mission and goals, not your current inventory; be prepared to rearrange staffing when required to advance.</li>
</ul>

<p>This was quickly written this morning to get it out of my head. There are probably holes in this and it may not even be all that original, but I think it works. As an American who has lived in the EU for 13+ years, I have come to trust government more and corporations less for a variety of reasons, but mostly because, broadly speaking, we tend to hold our government to a higher standard than we hold corporations.</p>

<p>I’m posting this in January 2026, just before FOSDEM. I’ll be there and open for conversation. Find me on Signal as <code class="language-plaintext highlighter-rouge">bexelbie.01</code>.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:saas-exit">
      <p>Many software as a service agreements allow you to stop paying but still make true exit difficult due to data gravity, integrations, and proprietary features. In practice, you can “check out,” but actually leaving is often costly and slow. <a href="#fnref:saas-exit" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[Treat OSS as recurring OpEx: hire contributors, contract experts, and fund internal IT so the EU participates, not just buys.]]></summary></entry><entry><title type="html">Integrating the NOUS E10 ZigBee Smart CO₂, Temperature &amp;amp; Humidity Detector with ZHA</title><link href="https://bexelbie.com/2025/12/23/nous-e10-zha-quirk.html" rel="alternate" type="text/html" title="Integrating the NOUS E10 ZigBee Smart CO₂, Temperature &amp;amp; Humidity Detector with ZHA" /><published>2025-12-23T15:30:00+01:00</published><updated>2025-12-23T15:30:00+01:00</updated><id>https://bexelbie.com/2025/12/23/nous-e10-zha-quirk</id><content type="html" xml:base="https://bexelbie.com/2025/12/23/nous-e10-zha-quirk.html"><![CDATA[<p>My friend Tomáš recently gave me a NOUS E10 ZigBee Smart CO₂, Temperature &amp; Humidity Detector. It is a compact ZigBee device that, on paper, integrates with Home Assistant. However, as is often the case with smart home hardware, the reality is slightly more nuanced. Home Assistant offers two primary ways to integrate Zigbee devices: Zigbee2MQTT and ZHA (Zigbee Home Automation). I started out with ZHA when I first installed Home Assistant.  There is no way, as far as I know, to migrate between the two without re-adding all of your devices, so, 25 (now 26) devices in, I am on team ZHA.  While the NOUS E10 was already fully supported in Zigbee2MQTT, it was <a href="https://github.com/home-assistant/core/issues/151069">not functional in ZHA</a>.</p>

<figure class="half ">
  <img src="/img/2025/co2-sensor.jpg" alt="NOUS E10 ZigBee Smart CO₂, Temperature &amp; Humidity Detector" />
  <img src="/img/2025/co2-ha.jpg" alt="Home Assistant CO₂, Temperature &amp; Humidity screenshot" />
  <figcaption>Capturing the photo and the screenshot simultaneously without breathing on the sensor is hard, and glossy surfaces are tricky to photograph; expect slight value drift between the sensor and the UI.</figcaption>
</figure>

<h2 id="the-tuya-rebrand-rabbit-hole">The Tuya Rebrand Rabbit Hole</h2>

<p>I did some reading, and it seemed that between what the folks behind the Zigbee2MQTT integration had figured out and the fact that the device is really a rebranded Tuya product, writing the integration should be achievable with my level of skill and general coding/technical experience. Tuya is a massive OEM (Original Equipment Manufacturer) that produces a vast array of smart home devices sold under hundreds of different brand names, so while the devices vary, the overall concept is fairly well understood.</p>

<p>The challenge with Tuya devices is that they often use a proprietary Zigbee cluster to communicate data. Instead of using the standard Zigbee clusters for temperature or humidity, they wrap everything in their own protocol. To make these devices work in ZHA, you need a “quirk.” A quirk is essentially a Python-based translator that tells ZHA how to interpret these non-standard messages and map them to the standard Home Assistant entities.</p>
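<p>Conceptually (this is an illustration only, not the real <code class="language-plaintext highlighter-rouge">zhaquirks</code> API, and the datapoint IDs below are made up), a quirk boils down to a dispatch table that translates proprietary Tuya datapoints into standard attribute names:</p>

```python
# Conceptual sketch of what a Tuya quirk does. The real zhaquirks API
# differs; the datapoint IDs here are hypothetical. Tuya devices send
# (datapoint_id, raw_value) pairs; the quirk maps them onto standard
# attributes Home Assistant understands.

DP_TO_ATTRIBUTE = {
    2: "co2_ppm",
    18: "temperature_c",
    19: "humidity_pct",
}

def translate(datapoint_id: int, raw_value: int) -> tuple:
    """Map a proprietary Tuya datapoint onto a standard attribute."""
    attribute = DP_TO_ATTRIBUTE.get(datapoint_id)
    if attribute is None:
        raise KeyError(f"unmapped Tuya datapoint {datapoint_id}")
    return attribute, raw_value

print(translate(2, 687))  # ('co2_ppm', 687)
```

The real quirk also has to describe the device signature and register the mapping with ZHA, but the translation table is the heart of it.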

<h2 id="developing-the-quirk-with-ai">Developing the Quirk with AI</h2>

<p>Because Tuya devices and the quirk concepts are fairly well understood, this is a great use case for an LLM.  I did some ideating with Google Gemini and plugged in all the values I could find from the Zigbee2MQTT source code and the device’s own signature. Using an LLM for this was surprisingly effective - it helped me scaffold the Python classes and identify which Tuya data points mapped to which sensors.  Honestly, all it got wrong was guessing that values were reported as deciunits (the value times 10, i.e. 21.1 reported as 211) when, for this specific device, values are reported directly.</p>
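<p>The deciunit mix-up is easy to picture with a toy decoder (illustration only; the real quirk does this mapping inside the zhaquirks framework):</p>

```python
# The LLM's guess vs. the E10's actual behaviour, as a small sketch.
# Many Tuya sensors report in deciunits (value * 10): raw 211 means 21.1.
# This specific device reports values directly: raw 211 means 211.

def decode_deciunits(raw: int) -> float:
    """Typical Tuya convention: raw 211 -> 21.1."""
    return raw / 10

def decode_direct(raw: int) -> int:
    """What the NOUS E10 actually does: raw 211 -> 211."""
    return raw

print(decode_deciunits(211))  # 21.1
print(decode_direct(211))     # 211
```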

<p>However, I hit multiple challenges, centered on the fact that this device never seemed to emit debug data. Usually, when you are developing a quirk, you can watch the Home Assistant logs to see the raw Zigbee frames coming in. You look for the “magic numbers” that change when you breathe on the sensor (CO₂). For some reason, the NOUS E10 was incredibly quiet. It took a lot of trial and error - and several restarts of the Zigbee network - to finally see the data flowing correctly. Eventually, I had a functional quirk that correctly reported CO₂ levels, temperature, and humidity.</p>
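<p>If you are chasing a similarly quiet device, one approach is to turn up logging for the Zigbee stack via Home Assistant’s standard <code class="language-plaintext highlighter-rouge">logger</code> integration in <code class="language-plaintext highlighter-rouge">configuration.yaml</code> (a sketch; <code class="language-plaintext highlighter-rouge">zigpy</code> and <code class="language-plaintext highlighter-rouge">zhaquirks</code> are the usual module names):</p>

```yaml
# Verbose logging for the Zigbee stack while developing a quirk.
logger:
  default: warning
  logs:
    zigpy: debug
    zhaquirks: debug
```

Remember to dial this back afterwards; debug-level Zigbee logging is noisy.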

<h2 id="contributing-to-the-ecosystem">Contributing to the Ecosystem</h2>

<p>If you write a quirk, you’re encouraged to contribute it to the <a href="https://github.com/zigpy/zha-device-handlers/">Zigpy ZHA Device Handlers Repository</a>. This is the central hub for all ZHA quirks, and once a quirk is merged there, it eventually makes its way into a standard Home Assistant release.  I wrote a basic test case and cleaned up my code to match the coding standards and general patterns used in similar quirks.</p>

<p>I have submitted this <a href="https://github.com/zigpy/zha-device-handlers/pull/4597">pull request</a> and I’m waiting for feedback. I’m expecting to need to make corrections, as this is my first time making this kind of contribution. While I have validated that the code works in my own environment, “working” and “ready for contribution” are not always the same thing. There are coding standards, naming conventions, and architectural patterns that the maintainers (rightly) insist upon to keep the codebase maintainable.</p>

<h2 id="how-to-use-the-quirk-today">How to Use the Quirk Today</h2>

<p>If you happen to have one of these and you use ZHA in Home Assistant, you can use the quirk right now without waiting on the merge. To do this, save the actual <a href="https://github.com/zigpy/zha-device-handlers/blob/2799bdf0c11daa4144d83b4574c7bc7490c57653/zhaquirks/tuya/nous_e10_co2.py">Python code</a> in a custom quirks directory in your Home Assistant install. Typically, you would use <code class="language-plaintext highlighter-rouge">/config/zha_quirks</code>.</p>

<p>After you do that, update your <code class="language-plaintext highlighter-rouge">configuration.yaml</code> to add the quirk directory as follows:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">zha</span><span class="pi">:</span>
  <span class="na">custom_quirks_path</span><span class="pi">:</span> <span class="s">/config/zha_quirks/</span>
</code></pre></div></div>

<p>Then restart Home Assistant, pair your device, and, as a different friend would say, “Robert is your father’s brother.” It is a small but satisfying victory to take a non-working device and make it fully functional through a bit of code and community knowledge and advice.</p>]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[Integrating the NOUS E10 ZigBee CO₂ detector into ZHA with a custom quirk.]]></summary></entry><entry><title type="html">Building Bridges: Microsoft’s Participation in the Fedora Linux Community</title><link href="https://bexelbie.com/2025/12/17/Microsoft-flock-blog.html" rel="alternate" type="text/html" title="Building Bridges: Microsoft’s Participation in the Fedora Linux Community" /><published>2025-12-17T10:30:00+01:00</published><updated>2025-12-17T10:30:00+01:00</updated><id>https://bexelbie.com/2025/12/17/Microsoft-flock-blog</id><content type="html" xml:base="https://bexelbie.com/2025/12/17/Microsoft-flock-blog.html"><![CDATA[<p>While I was at Flock 2025, I had the opportunity to share what Microsoft has been contributing to Fedora over the last year.  I finally got a blog post written for the Microsoft Tech Community Linux and Open Source Blog.</p>

<p>Read the full blog over at the <a href="https://techcommunity.microsoft.com/blog/linuxandopensourceblog/building-bridges-microsofts-participation-in-the-fedora-linux-community/4478461">Microsoft Tech Community</a> where this was originally posted.</p>

<!--

<P>At Microsoft, we believe that meaningful open source participation is driven by people, not corporations. But companies can - and should - create the conditions that empower individuals to contribute. Over the past year, our Community Linux Engineering team has been doing just that, focusing on Fedora Linux and working closely with the community to improve infrastructure, tooling, and collaboration. This post shares some of the highlights of that work and outlines where we’re headed next.&nbsp;</P>
<H2 id="community-4478461-toc-hId-1338639118">Modernizing Fedora Cloud Image Delivery&nbsp;</H2>
<P>One of our most impactful contributions this year has been expanding the availability of Fedora Cloud images across major cloud platforms. We introduced support for publishing images to both the Azure Community Gallery and Google Cloud Platform—capabilities that didn’t exist before. At the same time, we modernized the existing AWS image publishing process by migrating it to a new, OpenShift-hosted automation framework. This new system, developed by our team and led by engineer Jeremy Cline, streamlines image delivery across all three platforms and positions the project to scale and adapt more easily in the future.&nbsp;</P>
<P>We partnered with Adam Williamson in Fedora QE to extend this tooling to support container image uploads, replacing fragile shell scripts with a robust, maintainable system. Nightly Fedora builds are now uploaded to Azure, with one periodically promoted to “latest” after manual validation and basic functionality testing. This ensures cloud users get up-to-date, ready-to-run images - critical for workloads that demand fast boot times and minimal setup. As you’ll see, we have ideas for improving this testing.&nbsp;</P>
<H2 id="community-4478461-toc-hId--468815345">Enabling Secure Boot on ARM with Sigul&nbsp;</H2>
<P>Secure Boot is essential for trusted cloud workloads across architectures. Our current focus includes enabling it on ARM-based systems. Fedora currently signs most artifacts with Sigul, but UEFI applications are handled separately via a dedicated x86_64 builder with a smart card. We’re working to enable Sigul-based signing for UEFI applications across architectures, but Sigul is a complex project with unmaintained dependencies. We’ve stepped in to help modernize Sigul, starting with a Rust-based client and a roadmap to re-architect the code and structure for easier maintenance and improved performance. &nbsp;</P>
<P>This work is about more than just Microsoft’s needs - it’s about enabling Secure Boot support out of the box, like what users expect on x86_64 systems.&nbsp;</P>
<H2 id="community-4478461-toc-hId-2018697488">Bringing Inspektor Gadget to Fedora&nbsp;</H2>
<P>
    Inspektor Gadget is an eBPF-based toolkit for kernel instrumentation, enabling powerful observability use cases like performance profiling and syscall tracing. 
    <SPAN data-olk-copy-source="MessageBody">The Community Linux Engineering team consulted with the Inspektor Gadget maintainers at Microsoft about putting the project in Fedora.&nbsp; This led to the maintainers natively packaging it for Fedora and assuming ongoing maintenance of the package.</SPAN>
</P>
<P>We are encouraging teams to become active Fedora participants, to maintain their own packages, and to engage directly with the community. We believe in bi-directional feedback: upstream contributions should benefit both the project and the contributors.&nbsp;</P>
<H2 id="community-4478461-toc-hId-211243025">Azure VM Utils: Simplifying Cloud Enablement&nbsp;</H2>
<P>To streamline Fedora’s compatibility with Azure, we’ve introduced a package called azure-vm-utils. It consolidates Udev rules and low-level utilities that make Fedora work better on Azure infrastructure, particularly with NVMe devices. This package is a step toward greater transparency and maintainability and could serve as a model for other cloud providers.&nbsp;</P>
<H2 id="community-4478461-toc-hId--1596211438">Fedora WSL: A Layer 9 Success&nbsp;</H2>
<P>Fedora is now officially available in the Windows Subsystem for Linux (WSL) catalog - a milestone that required both technical and organizational effort. While the engineering work was substantial, the real challenge was navigating the legal and governance landscape. This success reflects deep collaboration between Fedora leadership, Red Hat, and Microsoft.&nbsp;</P>
<H2 id="community-4478461-toc-hId-891301395">Looking Ahead: Strategic Participation and Testing&nbsp;</H2>
<P>We’re not stopping here. Our roadmap includes:&nbsp;</P>
<UL>
    <LI>
        <STRONG>Replacing Sigul</STRONG>
            with a modern, maintainable signing infrastructure.&nbsp;
    </LI>
    <LI>
        <STRONG>Expanding participation</STRONG>
            in Fedora SIGs (Cloud, Go, Rust) where Microsoft has relevant expertise.&nbsp;
    </LI>
    <LI>
        <STRONG>Improving automated testing</STRONG>
            using Microsoft’s open source LISA framework to validate Fedora images at cloud scale.&nbsp;
    </LI>
    <LI>
        <STRONG>Enhancing the Fedora-on-Azure experience</STRONG>
        , including exploring mirrors within Azure and expanding agent/extension support.&nbsp;
    </LI>
</UL>
<P>We’re also working closely with the Azure Linux team, which is aligning its development model with Fedora - much like RHEL does. While Azure Linux has used some Fedora sources in the past, their upcoming 4.0 release is intended to align much more closely with Fedora as an upstream.&nbsp;</P>
<H2 id="community-4478461-toc-hId--916153068">A Call for Collaboration&nbsp;</H2>
<P>While contributing patches is a good start, we intend to do much more. We aim to be a deeply involved member of the Fedora community - participating in SIGs, maintaining packages, and listening to feedback. If you have ideas for where Microsoft can make strategic investments that benefit Fedora, we want to hear them. &nbsp;You’ll find us alongside you in Fedora meetings, forums, and at conferences like Flock.&nbsp;</P>
<P>Open source thrives when contributors bring their whole selves to the table. At Microsoft, we’re working to ensure our engineers can do just that - by aligning company goals with community value.</P>
<P>
    (This post is based on a 
    <A class="lia-external-url" href="https://www.youtube.com/live/YhoFPG7Ack0?si=v5KH_0nRXl_bKtBD&amp;t=4290" target="_blank" rel="noopener nofollow noreferrer">talk delivered at Flock to Fedora 2025</A>
    .)&nbsp;
</P>
-->]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[posted on the Microsoft Tech Community Linux and Open Source Blog]]></summary></entry><entry><title type="html">If You’re Wearing More Than One Hat, Something’s Probably Wrong</title><link href="https://bexelbie.com/2025/11/20/if-you-are-wearing-more-than-one-hat.html" rel="alternate" type="text/html" title="If You’re Wearing More Than One Hat, Something’s Probably Wrong" /><published>2025-11-20T16:00:00+01:00</published><updated>2025-11-20T16:00:00+01:00</updated><id>https://bexelbie.com/2025/11/20/if-you-are-wearing-more-than-one-hat</id><content type="html" xml:base="https://bexelbie.com/2025/11/20/if-you-are-wearing-more-than-one-hat.html"><![CDATA[<p>If you’re wearing more than one hat on your head something is probably wrong. In open source, this can feel like running a haberdashery, with a focus on juggling roles and responsibilities that sometimes conflict, instead of contributing. In October, I attended the first <a href="https://openssl-conference.org">OpenSSL Conference</a> and got to see some amazing talks and, more importantly, meet some truly wonderful people and catch up with friends.</p>

<blockquote>
  <p>Disclaimer: I work at Microsoft on upstream Linux in Azure and was formerly at Red Hat. These reflections draw on roles I’ve held in various communities and at various companies. These are personal observations and opinions.</p>
</blockquote>

<p>Let’s start by defining a hat. This is a situation where you are in a formalized role, often charged with representing a specific perspective, team, or entity. The formalization is critical. There is a difference between a contributor saying something, even one who is active in many areas of the project, and the founder, a maintainer, or the project leader saying it. That said, you are always you, regardless of whether you have one hat, a million hats, or none. You can’t be a jerk in a forum and then expect everyone to ignore that when you show up at a conference. Hats don’t change who you are.</p>

<p>During a few of the panels, several panelists were trying to represent multiple points of view. They participate or have participated in multiple ways, for example on behalf of an employer and out of personal interest. One speaker has a collection of colored berets they take with them onto the stage. Over the course of their comments they change the hat on their head to talk to different, and quite often all, sides of a question. I want to be clear, I am not calling this person out. This is the situation they feel like they are in.</p>

<p>I empathize with them because I have been in this same situation. I have participated in the Fedora community as an individually motivated contributor, the Fedora Community Action and Impact Coordinator (a paid role provided to the community by Red Hat), and as the representative of Red Hat discussing what Red Hat thinks. Thankfully, I never did them all at once, just two at a time. I felt like I was walking a tightrope. Stressful. I didn’t want my personal opinion to be taken as the “voice” of the project or of Red Hat.</p>

<p>This experience was formative and helped me the next time the situation came up, when I became Red Hat’s representative to the CentOS Project Board. My predecessor in the role had been a long-time individual contributor and was serving as the Red Hat representative. They struggled with the hats game. The first thing I was told was that the hat switching was tough to follow and people were often unsure whether they were hearing “the voice of Red Hat” or the “voice of the person.” I resolved not to perpetuate this. I made the decision that I would only ever speak as “the voice of Red Hat.”<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> It would be clear and unambiguous.</p>

<p>But, you may be thinking, what if you, bex, really have something you personally want to say. It did happen and what I did was leverage the power of patience and friendship.</p>

<p>Patience was in the form of waiting to see how a conversation developed. I am very rarely the smartest person in the room. I often found that someone would propose the exact thing I was thinking of, sometimes even better or more nuanced than I would have.</p>

<p>On the rare occasions that didn’t happen I would backchannel one of my friends in the room and ask them to consider saying what I thought. The act of asking was useful for two reasons. One, it was a filter for things that may not have been useful to begin with. Two, if someone was uneasy with sharing my views, their feedback was often useful in helping me better understand the situation.</p>

<p>In the worst case, if I didn’t agree with their feedback, I could ask someone else. Alternatively, I could step back and examine what was motivating me so strongly. Usually that reflection revealed this was a matter of personal preference or style that wouldn’t affect the outcome in the long term. It was always possible that I’d hit an edge case where I genuinely needed a second hat.</p>

<p>I recognize this is not an easy choice to make. I had the privilege of not having to give up an existing role to make this decision. However, I believe that in most cases when you do have to give up one role for another, you’re better off not trying to play both parts. You’re likely blocking or impeding the person who took on the role you gave up. If you have advice, a quiet sidebar with them will go further than potentially forcing them into public conversations that don’t need to be public. Your successor may do things differently; you should be okay with that. And remember what I wrote above: you’re not being silenced.</p>

<p>So when do multiple hats tend to happen? Here are some common causes of hat wearing:</p>

<ol>
  <li>When you’re in a project because your company wants you there and you are personally interested in the technology.</li>
  <li>You participate in the project and a fork, downstream, or upstream that it has a relationship with.</li>
  <li>You participate in multiple projects all solving the same problem, for example multiple Linux distributions.</li>
  <li>You sit on a standards body or other organization that has general purview over an area and are working on the implementation.</li>
  <li>You work on both an open source project and the product it is commercially sold as.</li>
  <li>You’re a member of a legally liable profession, such as a lawyer (in many jurisdictions) so anything you say can be held to that standard.</li>
  <li>You’re in a small project and because of bootstrapping (or community apathy) you’re filling multiple roles during a “forming” phase.</li>
</ol>

<p>This raises the question of which hat you should wear if you feel like you have more than one option. Here’s how I decide which hat to wear:</p>

<ol>
  <li>Is this really a multi-hat situation? Are you just conflicted because you have views as a member of multiple projects or as someone who contributes in multiple ways that aren’t in alignment? If it isn’t a formalized role, you’re struggling with the right problem. Speak your mind. Share the conflict and lack of alignment. This is the meat of the conversation.</li>
  <li>Why are you here? You generally know. That is the hat you wear. If you’re at a Technical Advisory Committee Meeting on behalf of your company and an issue about which you are personally passionate comes up - remember patience and friendship because this is a company hat moment.</li>
  <li>If you are in a situation where you can truly firewall off the conversations, you can switch to an alternative hat. This means being in a space where the provider of your other hat is essentially uninvolved. For example, if you normally work on crypto for your employer, but right now you are making documentation website CSS updates. Hello personal hat.</li>
  <li>If you’re in a 1:1 conversation and you know the person well, you can lay out all of your thoughts - just avoid the hat language. Be direct and open. If you don’t know the person well, you should probably err on the side of being conservative and think carefully about points 1 and 2 above.</li>
</ol>

<p>Some will argue that in smaller projects or early-stage efforts the flexibility of multiple roles is a feature, not a bug, allowing for rapid adaptation before formal structures are needed. That’s fair during a “forming” phase - but it shouldn’t become permanent. As the project matures, work to clarify roles and expectations so contributors can focus on one hat at a time.</p>

<p>As a maintainer or project leader, when you find people wearing multiple hats, it’s a warning flag. Something isn’t going right. Figure it out before the complexity becomes unmanageable.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>In the case of this role it meant I spent a lot of time not saying much as Red Hat didn’t have opinions on many community issues preferring to see the community make its own decisions. Honestly, I probably spent more time explaining why I wasn’t talking than actually talking. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[Navigating competing roles in open source - why clarity about which hat you're wearing matters.]]></summary></entry><entry><title type="html">Managing a manual Alexa Home Assistant Skill via the Web UI</title><link href="https://bexelbie.com/2025/11/12/alexa-manual-home-assistant.html" rel="alternate" type="text/html" title="Managing a manual Alexa Home Assistant Skill via the Web UI" /><published>2025-11-12T13:40:00+01:00</published><updated>2025-11-12T13:40:00+01:00</updated><id>https://bexelbie.com/2025/11/12/alexa-manual-home-assistant</id><content type="html" xml:base="https://bexelbie.com/2025/11/12/alexa-manual-home-assistant.html"><![CDATA[<p>My house has a handful of Amazon Echo Dot devices that we mostly use for timers, turning lights on and off, and playing music. They work well and have been an easy solution. I also use <a href="https://home-assistant.io">Home Assistant</a> for some basic home automation and serve most everything I want to verbally control to the Echo Dots from Home Assistant.</p>

<p>I don’t use the <a href="https://www.nabucasa.com">Nabu Casa Home Assistant Cloud Service</a>. If you’re reading this and you want the easy route, consider it — the cloud service is convenient. One benefit of the service is that there is a UI toggle to mark which entities/devices to expose to voice assistants.</p>

<p>If you take the <a href="https://www.home-assistant.io/integrations/alexa.smart_home/">manual route</a>, like I do, you must set up a developer account and an AWS Lambda function, and then maintain a hand-coded list of entity IDs in a YAML file.</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="s">switch.living_room</span>
<span class="pi">-</span> <span class="s">switch.table</span>
<span class="pi">-</span> <span class="s">light.kitchen</span>
<span class="pi">-</span> <span class="s">sensor.temp_humid_reindeer_marshall_temperature</span>
<span class="pi">-</span> <span class="s">sensor.living_room_temperature</span>
<span class="pi">-</span> <span class="s">sensor.temp_humid_rubble_chase_temperature</span>
<span class="pi">-</span> <span class="s">sensor.temp_humid_olaf_temperature</span>
<span class="pi">-</span> <span class="s">sensor.ikea_of_sweden_vindstyrka_temperature</span>
<span class="pi">-</span> <span class="s">light.white_lamp_bulb_1_light</span>
<span class="pi">-</span> <span class="s">light.white_lamp_bulb_2_light</span>
<span class="pi">-</span> <span class="s">light.white_lamp_bulb_3_light</span>
<span class="pi">-</span> <span class="s">switch.ikea_smart_plug_2_switch</span>
<span class="pi">-</span> <span class="s">switch.ikea_smart_plug_1_switch</span>
<span class="pi">-</span> <span class="s">sensor.temp_humid_chase_c_temperature</span>
<span class="pi">-</span> <span class="s">light.side_light</span>
<span class="pi">-</span> <span class="s">switch.h619a_64c3_power_switch</span>
</code></pre></div></div>
<p class="text-center">A list of entity IDs to expose to Alexa.</p>

<p>Fun, right? Maintaining that list is tedious. I generally don’t mess with my Home Assistant installation very often, so when I need to change what is exposed to Alexa or add a new device, finding the actual entity_id is annoying. This is not helped by how good Home Assistant has gotten at showing only friendly names in most places. I decided there had to be a better way than manually maintaining YAML.</p>

<p>After some digging through docs and the source, I found there isn’t a built-in way to build this list by labels, categories, or friendly names. The Alexa integration supports only explicit entity IDs or glob includes/excludes.</p>
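<p>For reference, the glob form the integration does support looks roughly like this (a sketch based on the documented <code class="language-plaintext highlighter-rouge">include_entity_globs</code> filter option); it only helps if your entity IDs share a usable prefix:</p>

```yaml
# Glob-based filtering instead of an explicit entity list.
alexa:
  smart_home:
    filter:
      include_entity_globs:
        - "light.white_lamp_bulb_*"
        - "sensor.temp_humid_*"
```

My entity IDs are too irregular for globs to cover everything, which is part of why I went looking for a label-based approach instead.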

<p>So I worked out a way to build the list with a Home Assistant automation. It isn’t fully automatic - there’s no trigger that runs right before Home Assistant reboots - and you still need to restart Home Assistant when the list changes. But it lets me maintain the list by labeling entities rather than hand-editing YAML.</p>

<p>After a few experiments and some (occasionally overly imaginative) AI help, I arrived at this process. There are two parts.</p>

<h2 id="prep-and-staging">Prep and staging</h2>

<p>In your <code class="language-plaintext highlighter-rouge">configuration.yaml</code> enable the Alexa Smart Home Skill to use an external list of entity IDs. I store mine in <code class="language-plaintext highlighter-rouge">/config/alexa_entities.yaml</code>.</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">alexa</span><span class="pi">:</span>
  <span class="na">smart_home</span><span class="pi">:</span>
    <span class="na">locale</span><span class="pi">:</span> <span class="s">en-US</span>
    <span class="na">endpoint</span><span class="pi">:</span> <span class="s">https://api.amazonalexa.com/v3/events</span>
    <span class="na">client_id</span><span class="pi">:</span> <span class="kt">!secret</span> <span class="s">alexa_client_id</span>
    <span class="na">client_secret</span><span class="pi">:</span> <span class="kt">!secret</span> <span class="s">alexa_client_secret</span>
    <span class="na">filter</span><span class="pi">:</span>
      <span class="na">include_entities</span><span class="pi">:</span>
         <span class="kt">!include</span> <span class="s">alexa_entities.yaml</span>
</code></pre></div></div>

<p>Add two helper shell commands:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">shell_command</span><span class="pi">:</span>
  <span class="na">clear_alexa_entities_file</span><span class="pi">:</span> <span class="s2">"</span><span class="s">truncate</span><span class="nv"> </span><span class="s">-s</span><span class="nv"> </span><span class="s">0</span><span class="nv"> </span><span class="s">/config/alexa_entities.yaml"</span>
  <span class="na">append_alexa_entity</span><span class="pi">:</span> <span class="s1">'</span><span class="s">/bin/sh</span><span class="nv"> </span><span class="s">-c</span><span class="nv"> </span><span class="s">"echo</span><span class="nv"> </span><span class="s">\"-</span><span class="nv"> </span><span class="s">{{</span><span class="nv"> </span><span class="s">entity</span><span class="nv"> </span><span class="s">}}\"</span><span class="nv"> </span><span class="s">&gt;&gt;</span><span class="nv"> </span><span class="s">/config/alexa_entities.yaml"'</span>
</code></pre></div></div>

<h2 id="a-script-to-find-the-entities">A script to find the entities</h2>

<p>Place this script in <code class="language-plaintext highlighter-rouge">scripts.yaml</code>. It does three things:</p>
<ol>
  <li>Clears the existing file.</li>
  <li>Finds all entities labeled with the tag you choose (I use “Alexa”).</li>
  <li>Appends each entity ID to the file.</li>
</ol>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">export_alexa_entities</span><span class="pi">:</span>
  <span class="na">alias</span><span class="pi">:</span> <span class="s">Export Entities with Alexa Label</span>
  <span class="na">sequence</span><span class="pi">:</span>
    <span class="c1"># 1. Clear the file</span>
    <span class="pi">-</span> <span class="na">service</span><span class="pi">:</span> <span class="s">shell_command.clear_alexa_entities_file</span>

    <span class="c1"># 2. Loop through each entity and append</span>
    <span class="pi">-</span> <span class="na">repeat</span><span class="pi">:</span>
        <span class="na">for_each</span><span class="pi">:</span> <span class="s2">"</span><span class="s">{{</span><span class="nv"> </span><span class="s">label_entities('Alexa')</span><span class="nv"> </span><span class="s">}}"</span>
        <span class="na">sequence</span><span class="pi">:</span>
          <span class="pi">-</span> <span class="na">service</span><span class="pi">:</span> <span class="s">shell_command.append_alexa_entity</span>
            <span class="na">data</span><span class="pi">:</span>
              <span class="na">entity</span><span class="pi">:</span> <span class="s2">"</span><span class="s">{{</span><span class="nv"> </span><span class="s">repeat.item</span><span class="nv"> </span><span class="s">}}"</span>
  <span class="na">mode</span><span class="pi">:</span> <span class="s">single</span>
</code></pre></div></div>

<p>Why clear the file and write it line by line? I couldn’t get any <code class="language-plaintext highlighter-rouge">file</code> or <code class="language-plaintext highlighter-rouge">notify</code> integration to write to <code class="language-plaintext highlighter-rouge">/config</code>, and passing a YAML list to a shell command collapses whitespace into a single line. Reformatting that back into proper YAML without invoking Python was painful, so I chose to truncate and append line-by-line. It’s ugly, but it’s simple and it works.</p>
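<p>A toy sketch of the formatting problem (illustration only, outside Home Assistant): rendering the whole list into one shell argument flattens it, while emitting one <code class="language-plaintext highlighter-rouge">- entity_id</code> line per item, as the repeat loop does, produces a valid YAML sequence:</p>

```python
# Illustration of why the script appends line by line instead of
# dumping the whole list in a single shell call.
entities = ["switch.living_room", "light.kitchen", "light.side_light"]

# Passed as one shell argument, the list collapses to a single line,
# which is not the YAML sequence the !include expects:
flattened = " ".join(entities)

# Emitting one "- <entity_id>" line per item keeps the file valid YAML:
yaml_lines = "\n".join(f"- {e}" for e in entities)
print(yaml_lines)
```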

<p>The result is that I can label entities in the UI and avoid tedious bookkeeping.</p>

<p><img src="/img/2025/labeled-entity.jpg" alt="Home Assistant entity details screen showing an IKEA smart plug named 'tree' with the Alexa label applied in the Labels section" /></p>]]></content><author><name>Brian &quot;bex&quot; Exelbierd</name><email>bex@bexelbie.com</email><uri>https://bexelbie.com</uri></author><summary type="html"><![CDATA[Skip hand-editing YAML entity lists by tagging Home Assistant devices with labels and auto-generating the Alexa exposure list via an automation.]]></summary></entry></feed>