Compare commits

...

26 Commits

Author SHA1 Message Date
677ea89517 ci: Allow specifying hugo version 2025-11-09 09:32:32 +01:00
06ba0029b2 fix: update runner
Because of
hugo: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by hugo)
2025-11-09 09:09:56 +01:00
ddddaaac8d fix: debug 2025-11-09 09:01:25 +01:00
241ead5cd0 fix: label? 2025-11-09 08:55:49 +01:00
8b57f3c096 feat: add forgejo ci 2025-11-09 08:17:02 +01:00
6ff17e27f5 fix: date 2025-11-08 22:27:16 +01:00
2a42e2e2a3 feat: minor corrections 2025-11-08 22:26:24 +01:00
a6f05525e3 feat: Add images 2025-11-08 22:16:34 +01:00
c8b813084b feat: add more content 2025-11-08 21:32:12 +01:00
428dcb8727 fix: feat: locally host known script 2025-11-08 21:31:51 +01:00
517ceacf79 fix: remove deprecated setting 2025-11-07 19:16:40 +01:00
83a0754e46 feat: add shortcode to embed html
This was implemented to embed drawio exports
2025-11-07 19:15:21 +01:00
57243489c8 feat: add gpa postmortem
2025-10-19 16:17:58 +02:00
c18e2a7ecf fix: typo 2025-08-03 14:05:31 +02:00
b557376ef2 fix: date
2025-08-03 13:51:31 +02:00
b0e58f1737 feat: Add post on twenty 2025-08-03 13:51:18 +02:00
27968f6ae5 feat: add small update on notification function
2025-07-13 00:05:31 +02:00
2388e78c5f feat: Add thoughts on html mails post
2025-07-12 12:48:20 +02:00
3fb5a5d4fc fix: typo
2025-06-28 15:03:51 +02:00
dc6e60970a fix: metadata
2025-06-28 15:02:52 +02:00
a6df0a82ae fix: metadata 2025-06-28 15:02:07 +02:00
90e8d15af7 feat: Add post improve-osm-by-using-it
2025-06-28 14:56:40 +02:00
2641955a36 feat: Rework the bio a bit 2025-06-28 14:52:52 +02:00
4e4d825283 refactor: delete old stuff 2025-06-28 07:36:29 +02:00
fb31dedf4d feat: Add translation note 2025-03-04 18:01:00 +01:00
cc9b0733dc fix: Restrict height, fix syntax 2025-03-04 17:59:52 +01:00
35 changed files with 8741 additions and 387 deletions

View File

@@ -0,0 +1,60 @@
name: Deploy Hugo Site
on:
  push:
    branches:
      - main
      - forge-ci
jobs:
  build-and-deploy:
    runs-on: ubuntu-24.04
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Install Hugo - latest version
        run: |
          apt-get update -y
          apt-get install -y wget tar jq # Install necessary components
          ldd --version
          # If a Hugo version is provided in the secrets, use this
          # else latest will be used
          if [ -n "${{ secrets.HUGO_VERSION }}" ]; then
            HUGO_VERSION="${{ secrets.HUGO_VERSION }}"
            echo "Using Hugo version from secret: $HUGO_VERSION"
          else
            # Use the GitHub API to get information about the latest release, then use jq to find out the tag name
            HUGO_VERSION=$(wget -qO- https://api.github.com/repos/gohugoio/hugo/releases/latest | jq -r '.tag_name')
            echo "Using latest Hugo version: $HUGO_VERSION"
          fi
          # Use ${HUGO_VERSION#v} to strip the v from v1.0.0
          # See "Substring Removal" in https://tldp.org/LDP/abs/html/string-manipulation.html
          wget -O hugo.tar.gz "https://github.com/gohugoio/hugo/releases/download/${HUGO_VERSION}/hugo_extended_${HUGO_VERSION#v}_Linux-64bit.tar.gz"
          tar -xzf hugo.tar.gz hugo
          mv hugo /usr/local/bin/hugo
          chmod +x /usr/local/bin/hugo
          ldd /usr/local/bin/hugo
          hugo version
      - name: Build site
        run: hugo --minify
      - name: Deploy to server via rsync
        env:
          DEPLOY_HOST: ${{ secrets.DEPLOY_HOST }}
          DEPLOY_USER: ${{ secrets.DEPLOY_USER }}
          DEPLOY_PATH: ${{ secrets.DEPLOY_PATH }}
        run: |
          apt-get install -y rsync openssh-client
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -H "$DEPLOY_HOST" >> ~/.ssh/known_hosts
          rsync -avz --delete public/ "$DEPLOY_USER@$DEPLOY_HOST:$DEPLOY_PATH"

View File

@@ -7,7 +7,6 @@ disqusShortname = ""
# Enable Google Analytics by entering your tracking code
googleAnalytics = ""
preserveTaxonomyNames = true
paginate = 5 #frontpage pagination
[privacy]
# Google Analytics privacy settings - https://gohugo.io/about/hugo-and-gdpr/index.html#googleanalytics
@@ -33,7 +32,7 @@ paginate = 5 #frontpage pagination
author = "Julian-Samuel Gebühr"
authorLink = "https://hyteck.de/"
bio = [
"Business Analyst during work, Developer for fun, Activist because it's necessary. He/Him"
"Business Analyst for work, Developer for fun. Activist because it's necessary. He/Him"
]
copyright = [
'&copy; 2025 CC-BY Julian-Samuel Gebühr</a> '

View File

@@ -0,0 +1,131 @@
---
title: "Thoughts on HTML mails"
date: 2025-07-12T12:05:10+02:00
lastmod: 2025-07-13T00:00:04+02:00
draft: false
image: "uploads/html-e-mails.png"
categories: ['English']
tags: ['email', 'html', 'plaintext', 'django', 'notfellchen']
---
Lately I worked on notification e-mails for [notfellchen.org](https://notfellchen.org). Initially I just sent text
notifications without links to the site. Terrible idea! An e-mail notification should always have a call to action or at
minimum a link to more information.
I left the system like this for half a year because it kinda worked for me (didn't suck enough for me to care), and I was the main receiver of these notifications.
However, as the platform develops further and more users join, I need to think about more user-centric notifications.
So what do I imagine is important to a user?
* **Information benefit**: An e-mail's purpose is to inform the user. This information should be immediately visible & understandable.
* **Actionables**: Users should be able to act on the information received. This is the bright red button "DO SOMETHING NOW!" you see so often.
* **Unsubscribing**: Being able to stop notification e-mails is not only a legal requirement and morally the right thing to do, it also gives users agency and - I hope - improves the user experience.
With these I naturally came to the next question: Plaintext or HTML?
Some people would say [Plaintext is inherently better](https://useplaintext.email/) than HTML e-mails. Many of these reasons resonate with me, including:
* Privacy invasion and tracking
* HTML emails are less accessible
* Some clients can't display HTML emails at all
* Mail client vulnerabilities
These are all valid points and a reason why I generally enjoy plaintext e-mails when I receive them.
But this is not about me, it's about users. And there are some real benefits of HTML e-mails:
* Visually appealing: This is subjective, but generally most users seem to agree on that
* User guidance: Rich text provides a real benefit when searching for the relevant information
Be honest: Do you read automated e-mails you receive completely? Or do you just skim for the important information?
And here HTML mails shine: **Information can easily be highlighted** and a big button can lead the user to the right action.
Some might argue that you can also highlight a link in plaintext, but that nearly always worsens accessibility for screen-reader users.
# The result
In the end, I decided that providing plaintext-only e-mails was not enough. I set up HTML mails, mostly using
[Django's send_mail](https://docs.djangoproject.com/en/5.2/topics/email/#send-mail) function, where I can pass the HTML message and attaching it correctly is done for me.
![A screenshot of an e-mail in thunderbird. The e-mail is structured in header, body and footer. The header says "Notfellchen.org", the body shows a message that a new user was registered and a bright green button to show the user. The footer offers a link to unsubscribe](mail_screenshot.png)
For anyone who is interested, here is how most of my notifications are sent:
```python
from django.conf import settings
from django.core import mail
from django.template.loader import render_to_string
from django.utils.html import strip_tags

# App-internal imports; the exact module path for NotificationTypeChoices is assumed
from fellchensammlung.models import Notification, NotificationTypeChoices


def send_notification_email(notification_pk):
    notification = Notification.objects.get(pk=notification_pk)
    subject = f"{notification.title}"
    context = {"notification": notification, }
    if notification.notification_type == NotificationTypeChoices.NEW_REPORT_COMMENT or notification.notification_type == NotificationTypeChoices.NEW_REPORT_AN:
        html_message = render_to_string('fellchensammlung/mail/notifications/report.html', context)
        plain_message = render_to_string('fellchensammlung/mail/notifications/report.txt', context)
    [...]
    elif notification.notification_type == NotificationTypeChoices.NEW_COMMENT:
        html_message = render_to_string('fellchensammlung/mail/notifications/new-comment.html', context)
        plain_message = render_to_string('fellchensammlung/mail/notifications/new-comment.txt', context)
    else:
        raise NotImplementedError("Unknown notification type")
    if "plain_message" not in locals():
        plain_message = strip_tags(html_message)
    mail.send_mail(subject, plain_message, settings.DEFAULT_FROM_EMAIL,
                   [notification.user_to_notify.email],
                   html_message=html_message)
```
Yes, this could be made more efficient - for now it works. I made the notification framework too complicated initially, so I'm still trying out what works and what doesn't.
Here is the HTML template
```html
{% extends "fellchensammlung/mail/base.html" %}
{% load i18n %}
{% block title %}
{% translate 'Neuer User' %}
{% endblock %}
{% block content %}
<p>Moin,</p>
<p>
es wurde ein neuer Useraccount erstellt.
</p>
<p>
Details findest du hier
</p>
<p>
<a href="{{ notification.user_related.get_full_url }}" class="cta-button">{% translate 'User anzeigen' %}</a>
</p>
{% endblock %}
```
and here the plaintext
```
{% extends "fellchensammlung/mail/base.txt" %}
{% load i18n %}
{% block content %}{% blocktranslate %}Moin,
es wurde ein neuer Useraccount erstellt.
User anzeigen: {{ new_user_url }}
{% endblocktranslate %}{% endblock %}
```
Works pretty well for now. People who prefer plaintext will get the plaintext version, and most users will get a skimmable HTML e-mail where the
styling helps them recognize where it's from and what to do. Accessibility-wise this seems like the best option.
And while adding a new notification will force me to create
* a new notification type,
* two new e-mail templates and
* a proper rendering on the website
this seems okay. Notifications are useful, but I don't want to shove them everywhere. I'm not running Facebook or LinkedIn after all.
So for now I'm pretty happy with the new shiny e-mails and will roll out the changes soon (if I don't find any more weird bugs).
PS: I wrote this post after reading [blog & website in the age of containerized socials](https://blog.avas.space/blog-website-eval/) by ava.
Maybe this "Thoughts on" format will stay and I will post these in addition to more structured deep dives.
# Update
I did a rework of the notification function and it's much cleaner now. However, it's less readable, so this blog post will stay as-is.
If you want to check out the new code have a look [on Codeberg](https://codeberg.org/moanos/notfellchen/src/commit/a4b8486bd489dacf8867b49d04f70f091556dc9d/src/fellchensammlung/mail.py).

Binary file not shown.

After

Width:  |  Height:  |  Size: 51 KiB

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,122 @@
---
title: "How to manually check hundreds of animal shelters - every 14 days"
date: 2025-11-08T12:05:10+02:00
lastmod: 2025-11-08T21:05:10+02:00
draft: false
image: "uploads/checking-shelters.png"
categories: [ 'English' ]
tags: [ 'notfellchen', 'animal shelter', 'animal welfare', 'django' ]
---
I run a website called [Notfellchen](https://notfellchen.org) that lists animals that are waiting for adoption. It's
currently restricted to fancy rats in Germany, and that's for good reason: Running this website involves **checking every
shelter every two weeks manually**. You need to visit the website, check if there are new animals, contact the shelter
and add them to Notfellchen if they allow it. This takes time. A lot.
This blog post will outline some of the things I did in order to streamline this and make it possible to **check every
German shelter in 2.5 hours**.
## General process
When you establish a process, want others to help you, or want to find inefficiencies, it's a good idea to
formalize it. So here is a rough BPMN diagram of the whole process.
{{< html animal-discovery.drawio.html >}}
## List of animal shelters
Focusing on the first step: We want to check the website of an animal shelter - but where do we get a list of animal
shelters from? Luckily there is an easy answer: [OpenStreetMap](https://openstreetmap.org), and I wrote a
whole [other blog post on how I imported and improved this data](https://hyteck.de/post/improve-osm-by-using-it/).
## Species-specific link
Importing this data provides us (most of the time) with a link to the shelter's website. However, rats are usually not
listed on the home page but on a subpage.
In order to save time, I introduced the concept of a species-specific link per organization and species.
So for the Tierheim Entenhausen this might look like this:
| Species | Species specific link |
|---------|--------------------------------------------------------|
| Cat | https://tierheim-entenhausen.de/adoption/cats |
| Rats | https://tierheim-entenhausen.de/adoption/small-mammals |
As animal shelter pages look very different from each other, clicking this link provides an enormous time benefit
compared to clicking through a homepage manually.
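On the Django side this needs little more than one extra table. A minimal sketch, with hypothetical model and field names (the real notfellchen schema may differ):
```python
# Hypothetical sketch - model and field names are illustrative,
# not the actual notfellchen schema.
from django.db import models


class SpeciesSpecificURL(models.Model):
    """A per-species adoption page of a rescue organization."""
    organization = models.ForeignKey("RescueOrganization", on_delete=models.CASCADE,
                                     related_name="species_urls")
    species = models.CharField(max_length=100)
    url = models.URLField()

    class Meta:
        # one link per organization and species
        constraints = [models.UniqueConstraint(fields=["organization", "species"],
                                               name="unique_species_url")]
```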
## Org check page
I set up a special page to make checking shelters as efficient as possible. It's structured in four parts:
* **Stats**: The stats show how many animal shelters were checked in the last two weeks and how many are still to go.
* **Not checked for the longest period**: Shows the animal shelters to check next; it's therefore sorted by the date
they were last checked
* **In active communication**: An overview of the organizations where there is communication (or an attempt thereof).
This can take multiple days or even weeks, so the internal comment field is very useful to keep track.
* **Last checked**: It sometimes happens that I set an organization to "Checked" by accident. I added this
section to make it easier to revert that.
![](screenshot-checking-site.png)
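Under the hood these sections map to simple queries over a "last checked" timestamp. A rough sketch, again with hypothetical field names:
```python
# Hypothetical sketch of the queries behind the check page,
# assuming a last_checked datetime field on the organization model.
from datetime import timedelta

from django.utils import timezone

from fellchensammlung.models import RescueOrganization

two_weeks_ago = timezone.now() - timedelta(days=14)

# Stats: checked in the last two weeks vs. still to go
checked = RescueOrganization.objects.filter(last_checked__gte=two_weeks_ago).count()
to_go = RescueOrganization.objects.filter(last_checked__lt=two_weeks_ago).count()

# Not checked for the longest period: oldest check date first
next_up = RescueOrganization.objects.order_by("last_checked")[:20]

# Last checked: most recent first, to quickly revert accidental checks
recently_checked = RescueOrganization.objects.order_by("-last_checked")[:5]
```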
## Shortcuts
To make it even faster to work through the organizations, I added some shortcuts for the most common functionality and
documented the browser's own shortcut to close a tab.
* `O`: Open website of the first organization
* `CTRL+W`: Close tab (Firefox, Chrome)
* `C`: Mark first organization as checked
## Results
After implementing all this, how long does it take now to check all organizations? Here are the numbers:
| Measurement | Value |
|-----------------------------------------------------------|--------------|
| Time to check one organization (avg.) | 12.1 s |
| Organizations checked per minute | 4.96 org/min |
| Time to check all (eligible) German animal shelters (429) | 1 h 16 min |
This excludes the time it takes to add animals or contact rescue organizations. One of these actions must be taken
whenever an eligible animal is found on a website. Here you can see how this interrupts the process:
![](progress.png)
And here is the breakdown of time per activity. A big caveat: I did not follow up on previous conversations
here, so the contacting number is likely an underestimate.
| Activity | Time spent | Percentage |
|------------|-------------|------------|
| Checking | 54 min 44 s | 72.3% |
| Adding | 11 min 15 s | 14.9% |
| Contacting | 9 min 41 s | 12.8% |
To me, this looks like a pretty good result. I can't say which optimization brought how much improvement, but I'd argue
they all play a role in reaching the 12 s per rescue organization checked.
In order to check all German animal shelters, one needs to put in about two and a half hours every two weeks. That seems
reasonable to me. Further improvements likely do not lie in the organization check page but in the contact process
and adoption notice form.
For now, I'm happy with the results.
## Addendum: Common annoyances
When doing this over the last few months I encountered some recurring issues that were not only annoying but also took
up a majority of the time. Here are some that stood out:
* **Broken SSL encryption** So many animal shelters do not have a functioning SSL certificate. It takes time to work
around the warnings.
* **"No results" not indicated** More often than not, animal shelters do not have rats. However, when you visit a page
like [this](https://tierschutzliga.de/tierheime/tierparadies-oberdinger-moos/tiervermittlung/#?THM=TOM&Tierart=Kleintiere)
it's hard to know if there is a technical issue or if there are simply no animals for your search.
* **No static links** Sites where you have to click through a menu to get to the right page but cannot link
directly to it.
* **No website** Seriously, there are some animal shelters that only use Instagram or Facebook to tell people about the
animals they have. This is not only a privacy nightmare, it's also incredibly hard to find out which information is
up-to-date. Furthermore, there is no data structure, so posts about animals often miss crucial information like
the sex.
While I obviously have some grievances here, I know the organizations never have enough resources, and they'd
usually love to have a nicer website. Just keep that in mind too.

Binary file not shown.

After

Width:  |  Height:  |  Size: 30 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 273 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 372 KiB

View File

@@ -0,0 +1,100 @@
---
title: "Improve OpenStreetMap data by using it"
date: 2025-06-28T14:05:10+02:00
draft: false
image: "post/improve-osm-by-using-it/improve-osm-by-using-it.png"
categories: ['English']
tags: ['django', 'OpenStreetMap', 'notfellchen', 'osm', 'open data', 'geojson']
---
## Introduction
In the last month I improved the mapping of about 100 German animal shelters - not only out of the goodness of my heart, but because it helped me.
Let me explain why: I develop [notfellchen.org](https://notfellchen.org/), where users can search for animals in animal shelters, specifically rats, that they might want to adopt.
The idea is to have a central website that allows you to search for rats in your area.
This is necessary because only a small percentage of animal shelters have rats. As a user, just checking your nearest
shelter doesn't work. Some users will stop after checking the second or third one and just buy from a pet shop (which is a very, very bad idea).
Now a central platform is nice for users but has one problem: How do I, as operator of notfellchen, know where rats are?
I need to **manually check every animal shelter in the country** and, if they have rats, ask them for permission to use
images of the rats on my site.
So what I need is a list of animal shelters in Germany with their website, e-mail and phone number.
The source for all of this: You guessed it - OpenStreetMap 🥳
# Getting the data
Downloading all German animal shelters is surprisingly easy: You use [Overpass Turbo](https://overpass-turbo.eu/) and get a `.geojson` to download.
Here is the query I used:
```
[out:json][timeout:25];
// fetch area “Germany” to search in
{{geocodeArea:Germany}}->.searchArea;
// Check search area for all objects with animal shelter tag
nwr["amenity"="animal_shelter"](area.searchArea);
// print results
out geom;
```
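Loading that export into Django then boils down to iterating over the GeoJSON features. A rough sketch of such an importer; the model and field names are hypothetical, not the actual notfellchen code:
```python
# Hypothetical importer sketch - field names are illustrative.
import json

from fellchensammlung.models import RescueOrganization


def import_shelters(path: str) -> None:
    with open(path) as f:
        collection = json.load(f)
    for feature in collection["features"]:
        tags = feature["properties"]
        RescueOrganization.objects.update_or_create(
            # Overpass Turbo exports IDs like "node/123456" - stored for later updates
            osm_id=feature["id"],
            defaults={
                "name": tags.get("name", ""),
                "website": tags.get("website", ""),
                "email": tags.get("email", ""),
                "phone": tags.get("phone", ""),
            },
        )
```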
Now upload it to notfellchen.org and I'll be fine, right?
# Data Issues
Yeah well, this only *mostly* works. There were two main problems:
**Missing contact data** is annoying because I quickly want to check the websites of animal shelters.
More annoying was what I'd call **mapping errors**.
Most commonly, an animal shelter had multiple nodes/ways tagged as `amenity=animal_shelter`.
The highlight was the "Tierheim München", where about 10 buildings were tagged as `amenity=animal_shelter` and the contact
data was sitting on the building with the name "Katzenhaus" ("cat house").
Now the "Tierheim München" appeared in my list 10 times, but 9 of the entries had no contact data at all.
# Correcting it
I could have corrected this only in the notfellchen database. It would have been faster, and I could even have automated parts of it.
But I didn't.
For each issue I found, I opened OpenStreetMap and added websites, phone numbers or even re-mapped the area.
For "Tierheim München" I even [opened a thread in the forum](https://community.openstreetmap.org/t/mapping-of-multiple-related-buildings-animal-shelters/131801)
to discuss a proper tagging.
That makes sense for me because I get one important thing:
# What I get out of it: Updates
What if a new shelter is added later or a shelter changes? I already profit a lot from the time people spend adding information, so why stop?
My database stores the OSM ID, so I can regularly query the data again to get updates.
But that only works if I take an "upstream" approach: Fix the data in OSM, then load it into notfellchen.
Otherwise, any change in my database will be overwritten by "old" OSM data.
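Since the OSM ID is stored, the refresh can be a single Overpass request per element. A minimal sketch, assuming the element type (node/way/relation) was stored alongside the ID:
```python
# Minimal re-fetch sketch; assumes the element type was stored with the ID.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"


def fetch_osm_element(element_type: str, osm_id: int) -> dict:
    """Fetch the current tags of one OSM element, e.g. fetch_osm_element("way", 123456)."""
    query = f"[out:json];{element_type}({osm_id});out tags center;"
    response = requests.post(OVERPASS_URL, data={"data": query})
    response.raise_for_status()
    elements = response.json()["elements"]
    return elements[0] if elements else {}
```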
# Result
In the last month, I made 86 changes to OSM, adding the following information:
| Type of information | Number of times added |
|---------------------|-----------------------|
| Website | 66 |
| Phone Numbers | 65 |
| Operator | 63 |
| E-Mail | 49 |
| Fax | 9 |
Yes, I sometimes even added fax numbers. It was easy enough to add, and maybe someone will use it.
# Looking forward
I'm of course not done. Only half of the rescues known to OSM in Germany are currently checked, so I'll continue that work.
After that I'll start adding the shelters that are only in my database.
Currently, 33 animal shelters are known to notfellchen that are not known to OSM. This number will likely grow, maybe double.
A lot to do. And luckily, this work both benefits me and everyone using OSM. Happy mapping!

View File

@@ -74,7 +74,7 @@ Ich habe die Novelle am Stück verschlungen und mit der Hauptperson gelacht und
In a fantasy world, one of the queen's bodyguards runs off with a mage, and they open a tea shop.
There is also the sequel "A Pirate's Life for Tea", but I haven't read that one yet.
[Link to the book at Frauenbuchladen Thalestris](https://frauenbuchladen.buchkatalog.de/cant-spell-treason-without-tea-9783492706896)
[Link to the book at Frauenbuchladen Thalestris (German translation)](https://frauenbuchladen.buchkatalog.de/cant-spell-treason-without-tea-9783492706896)
### Travis Baldree: "Legends&Latte"

View File

@@ -0,0 +1,107 @@
---
title: "Musikempfehlungen- Beat gegen KI"
date: 2025-03-03T18:05:10+02:00
draft: true
image: "uploads/scifi-fantasy.png"
categories: ['Deutsch']
tags: ['ki', 'science-fiction', 'schriftgelehrte', 'progressive Fantastik', 'scifi', 'fantasy', 'solarpunk']
---
## Introduction
AI is everywhere. Large parts of the internet are currently being flooded with AI-generated content.
Some AI-generated articles even display fake dates (before 2022) to create the impression that they are not AI-generated.
Finding reliable and authentic information is therefore all the harder.
This is why **the age of the librarians** begins, because collecting and curating information is what they do. I use "Schriftgelehrte\*r" here as a German translation of the English word "librarian".
And so I presume to call myself a librarian (only) in this sense, and I keep trying to collect and share information on this
blog (ironically, knowing full well that this blog is scraped by
AI companies).
I want to start with the following:
## Sci-Fi and Fantasy
When I look for sci-fi and fantasy, I want to get away from the profit-optimized lists of online booksellers
and towards real recommendations. I get many such recommendations at the local bookshop. Young booksellers in particular can often give great recommendations.
Unfortunately, such staff are rare, sci-fi and fantasy are often underrepresented in bookshops, and the shelves are far too often full of books by white men.
So now on to the recommendations that try to do things differently!
*The list describes the books only briefly and should be free of spoilers.
All books feature queer characters. All links lead to a local bookshop or to the websites of the authors or the publisher.*
### Becky Chambers: "Der lange Weg zu einem kleinen zornigen Planeten"
This book is a truly beautifully written space opera with characters you take to your heart.
There is a lot of everyday life on the small ship, and on the long way every stopover lets you dive into yet another world, be it a bustling market or a secluded ice planet.
The second and third installments are set in the same universe but are connected to the first story only through a few characters and shared themes.
All other books by Becky Chambers are also highly recommended; "A Psalm for the Wild-Built" is a short solarpunk utopia.
[Link to the book at Frauenbuchladen Thalestris](https://frauenbuchladen.buchkatalog.de/der-lange-weg-zu-einem-kleinen-zornigen-planeten-9783596035687)
### Judith & Christian Vogt: Ace in Space
Ace in Space is a great story about a starfighter pilot and a group of space punks who share their lives on social media.
The book is set closer to cyberpunk. It goes up against megacorporations, and it is about fast ships and bars in cramped quarters.
It also has one of the best sex scenes I have ever had the pleasure to read.
Judith often writes more dystopian stories than I usually read. But Laylayland and Wasteland are two fantastic books I would not want to miss.
The books are feminist, queer and, wonderfully, also have main characters with disabilities. Progressive speculative fiction of the highest class!
[Publisher's shop](https://amrun-verlag.de/produkt/aceinspace1/)
If you want a whole short story by "the Vogts" (i.e. Judith and Christian Vogt) as a reading sample, you will find one at the end of this article as a PDF.
### T.J. Klune: Mr. Parnassus' Heim für magisch Begabte
A wonderful story about a civil servant who gets out of the city, and a house by the sea.
I won't reveal more, but it is one of my favorite books.
[Link to the book at Frauenbuchladen Thalestris](https://frauenbuchladen.buchkatalog.de/mr-parnassus-heim-fuer-magisch-begabte-9783453321366)
### Lena Richter: Dies ist mein letztes Lied
A gripping story in which the protagonist is torn from world to world by her music. What she experiences in the individual episodes is beautiful and heartbreaking.
I devoured the novella in one sitting and laughed and cried with the protagonist.
[Publisher's shop](https://www.ohneohren.com/shop/Lena-Richter-Dies-ist-mein-letztes-Lied-p520843015)
### Rebecca Thorne: "Can't spell treason without tea"
In a fantasy world, one of the queen's bodyguards runs off with a mage, and they open a tea shop.
There is also the sequel "A Pirate's Life for Tea", but I haven't read that one yet.
[Link to the book at Frauenbuchladen Thalestris (German translation)](https://frauenbuchladen.buchkatalog.de/cant-spell-treason-without-tea-9783492706896)
### Travis Baldree: "Legends&Latte"
An orc who spent long years on the road with a group of adventurers now tries to open a book & tea shop.
Very entertaining, unexpectedly peaceful, and it makes you incredibly keen to spend more time in cafés!
"Bookshops & Bonedust" is the prequel, which can easily be read after Legends & Latte and was also written later.
[Link to the book at Frauenbuchladen Thalestris](https://frauenbuchladen.buchkatalog.de/magie-und-milchschaum-9783423263566)
### Collection: Sonnenseiten - Street Art trifft Solarpunk
22 authors have gathered stories that combine the two art forms of street art and solarpunk.
I can especially recommend the story "Uferlos" by Lena Richter, whose floating city gives you food for thought, and "Cloudart" by Dominik Windgätter, in which a "mask girl" draws artworks into the sky.
[Link to the book at Frauenbuchladen Thalestris](https://frauenbuchladen.buchkatalog.de/sonnenseiten-9783756803972)
## Conclusion
I hope these recommendations, despite their brevity, make you want to read! Buy from your local bookshop and support small publishers!
*In case it wasn't obvious anyway: I am not paid for these recommendations; the links have no tracking, no referral codes or anything of the sort.*
### Short story FiatLux
The short story is by Judith and Christian Vogt and is licensed under [CC-BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/4.0/), so it may be shared with attribution, non-commercially and under the same conditions (how great is that?!).
{{< pdf FiatLuxVogt>}}

View File

@@ -0,0 +1,252 @@
---
title: "Postmortem - how to completely screw up an update"
date: 2025-10-19T12:05:10+02:00
lastmod: 2025-10-19T16:00:04+02:00
draft: false
image: "uploads/postmortem.png"
categories: [ 'English' ]
tags: [ 'backup', 'postmortem', 'fediverse', 'gotosocial' ]
---
The fediverse instance [gay-pirate-assassins.de](https://gay-pirate-assassins.de) was down for a couple of days. This
postmortem will outline what went wrong and what I did to prevent things from going that wrong in the future.
# Timeline
* 2025-10-05 17:26: [Update announcement](https://gay-pirate-assassins.de/@moanos/statuses/01K6TFQ1HVPAR6AYN08XYQ7XFV)
* 2025-10-05 ~17:45: Update started
* 2025-10-05 ~18:00: Services restart
* 2025-10-05 ~18:00: GoToSocial doesn't come up
* 2025-10-12 ~10:00: Issue is found
* 2025-10-12 10:30: Issue is fixed
* 2025-10-12 10:31: GoToSocial is started, migrations start
* 2025-10-12 15:38: Migrations finished successfully
* 2025-10-12 15:38: Service available again
* 2025-10-12 18:36: [Announcement sent](https://gay-pirate-assassins.de/@moanos/statuses/01K7CMGF7S2TE39792CMADGEPJ)
All times are given in CEST.
## The beginning: An update goes wrong
I run a small fediverse server with a few users called [gay-pirate-assassins](https://gay-pirate-assassins.de/), which is powered by [GoToSocial](https://gotosocial.org/).
The (amazing) GoToSocial devs released `v0.20.0-rc1` and `v0.20.0-rc2`. As the new features seemed pretty cool, I was
impatient, and the second release candidate seemed stable,
I decided to update to `v0.20.0-rc2`. So I started a backup (via borgmatic), waited for it to finish and confirmed it ran
successfully.
Then I changed the version number in the [mash](https://github.com/mother-of-all-self-hosting/mash-playbook)-ansible
playbook I use. Then I pulled the newest version of the playbook and its roles because I wanted to update all services
that run on the server. I checked
the [Changelog](https://github.com/mother-of-all-self-hosting/mash-playbook/blob/main/CHANGELOG.md),
didn't see anything and then started the update. It went through and GoToSocial started up just fine.
But the instance start page showed me 0 users, 0 posts and 0 federated instances. **Something had gone horribly wrong!**
## Migrations
It was pretty clear to me that the migrations had gone wrong.
The [GoToSocial migration notes](https://codeberg.org/superseriousbusiness/gotosocial/releases/tag/v0.20.0-rc1)
specifically mentioned long-running migrations that could take several hours. I assumed that somehow, during the running
database migration, the service must have restarted and left the DB in a broken state. This had happened to me before.
Well, that's what backups are for, so let's pull one.
## Backups
Backups for this server are done two ways:
* via postgres-backup: Backups of the database are written to disk
* via [borgmatic](https://torsion.org/borgmatic/): Backups via borg are written to backup nodes, one of them at my home
They run every night automatically, monitored by [Healthchecks](https://healthchecks.io/). I triggered a manual run
before the update, so that is the one I mounted using [Vorta](https://vorta.borgbase.com/).
And then the realization.
```
mash-postgres:5432 $ ls -lh
total 2.1M
-r-------- 1 moanos root 418K Oct 05 04:03 gitea
-r-------- 1 moanos root 123K Oct 05 04:03 healthchecks
-r-------- 1 moanos root 217K Oct 05 04:03 ilmo
-r-------- 1 moanos root 370K Oct 05 04:03 notfellchen
-r-------- 1 moanos root 67K Oct 05 04:03 oxitraffic
-r-------- 1 moanos root 931 Oct 05 04:03 prometheus_postgres_exporter
-r-------- 1 moanos root 142K Oct 05 04:03 semaphore
-r-------- 1 moanos root 110K Oct 05 04:03 vaultwarden
-r-------- 1 moanos root 669K Oct 05 04:03 woodpecker_ci_server
```
Fuck. The database gay-pirate-assassins is not there. Why?
To explain that, I have to tell you how it *should* work: Services deployed by the mash-playbook are automatically wired
to the database and reverse proxy by a complex set of Ansible variables. This is great, because adding a service can
therefore be as easy as adding
```
healthchecks_enabled: true
healthchecks_hostname: health.hyteck.de
```
to the `vars.yml` file.
This will then configure the postgres database automatically, based on the `group_vars`, which look like this:
```
mash_playbook_postgres_managed_databases_auto_itemized:
  - |-
    {{
      ({
        'name': healthchecks_database_name,
        'username': healthchecks_database_username,
        'password': healthchecks_database_password,
      } if healthchecks_enabled and healthchecks_database_hostname == postgres_connection_hostname and healthchecks_database_type == 'postgres' else omit)
    }}
```
Note that a healthchecks database is only added to the managed databases if `healthchecks_enabled` is `True`.
This is really useful for backups because the borgmatic configuration also pulls the list
`mash_playbook_postgres_managed_databases_auto_itemized`. Therefore, you do not need to specify which databases to back
up; it just backs up all managed databases.
However, the database for gay-pirate-assassins was not managed. In the playbook it's only possible to configure a
service once. You cannot manage multiple GoToSocial instances in the same `vars.yml`. In the past, I had two instances
of GoToSocial running on the server. I therefore
followed [the how-to of "Running multiple instances of the same service on the same host"](https://github.com/mother-of-all-self-hosting/mash-playbook/blob/main/docs/running-multiple-instances.md).
Basically this means that an additional `vars.yml` must be created that is treated as a completely different server.
Databases must be created manually, as they are not managed.
With that knowledge you can understand what it means when I say the database for gay-pirate-assassins was not managed:
it was not included in the list of databases to be backed up. The backup service thought it ran successfully,
because it backed up everything it knew of.
So this left me with a three-month-old backup. Unacceptable.
## Investigating
So the existing database needed to be rescued. I SSHed into the server and checked the database. It looked completely
normal.
I asked the devs if they could provide me with the migrations, as they had already done in the past. However, they pointed
out that the migrations were too difficult for that approach. They suggested deleting the latest migration entry to force a
re-run of the migrations.
Here is where I got confused, because this was the `bun_migrations` table:
```
gay-pirate-assassins=# SELECT * FROM bun_migrations ORDER BY id DESC LIMIT 5;
id | name | group_id | migrated_at
-----+----------------+----------+-------------------------------
193 | 20250324173534 | 20 | 2025-04-23 20:00:33.955776+00
192 | 20250321131230 | 20 | 2025-04-23 19:58:06.873134+00
191 | 20250318093828 | 20 | 2025-04-23 19:57:50.540568+00
190 | 20250314120945 | 20 | 2025-04-23 19:57:30.677481+00
```
The last migration had run in April, when I updated to `v0.19.1`. Strange.
At this point I went on vacation and paused investigations, not only because the vacation was great, but also because I was
bamboozled by this state.
---
After my vacation I came back and made a fresh backup of the database.
```
$ docker run -e PGPASSWORD="XXXX" -it --rm --network mash-postgres postgres pg_dump -U gay-pirate-assassins -h mash-postgres gay-pirate-assassins > manual-backup/gay-pirate-assassins-2025-10-13.sql
```
Then I deleted the last migration, as advised:
```
DELETE FROM bun_migrations WHERE id=193;
```
and restarted the server. While watching the server come up, it hit me in the face:
```
Oct 12 08:31:29 s3 mash-gpa-gotosocial[2251925]: timestamp="12/10/2025 08:31:29.905" func=bundb.sqliteConn level=INFO msg="connected to SQLITE database with address file:/opt/gotosocial/sqlite.db?_pragma=busy_timeout%281800000%29&_pragma=journal_mode%>
Oct 12 13:38:46 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 13:38:46.588" func=router.(*Router).Start.func1 level=INFO msg="listening on 0.0.0.0:8080"
```
The server was **starting from a completely different database**! That explains why
* the last migration never ran
* the server showed me 0 users, 0 posts and 0 federated instances even though the postgres database had plenty of those
All of a sudden an SQLite database was configured. This happened because
of [this commit](https://github.com/mother-of-all-self-hosting/ansible-role-gotosocial/commit/df34af385f9765bda8f160f6985a47cb7204fe96)
which introduced SQLite support and set it as the default. This was not mentioned in
the [Changelog](https://github.com/mother-of-all-self-hosting/mash-playbook/blob/main/CHANGELOG.md).
So what happened is that the config changed, the server was restarted, and an empty DB was initialized. The
postgres DB never even started to migrate.
## Fixing
To fix it, I did the following:
1. Configure the playbook to use postgres for GoToSocial:
```
# vars.yml
gotosocial_database_type: postgres
```
2. Run the playbook to configure GoToSocial (without starting the service):
```
just run-tags install-gotosocial
```
3. Check the configuration is correct
4. Start the service
The migrations took several hours, but after that everything looked stable again. I don't think there are any lasting
consequences. However, the server was unavailable for several days.
## Learnings
I believe the main issue here was not the config change that went unnoticed by me. While I'd ideally notice stuff
like this, the server is a hobby, and I'll continue to not check every config option that changed.
The larger issue was the backup. Having a current backup would have made this easy to solve. And there are other, less lucky
problems where I'd be completely lost without a backup. So to make sure this doesn't happen again, here is what I did or
will do:
### 1. Mainstream the config
As explained, I used a specific non-mainstream setup in the Ansible playbook because, in the past, I ran two instances
of GoToSocial on the server. After shutting down one of them, I never moved gay-pirate-assassins to be part of the main
config. This meant important parts of the configuration had to be done manually, which I botched.
So in the past week I cleaned up, and gay-pirate-assassins is now part of the main `vars.yml` and will benefit from all
relevant automations.
### 2. Checking backups
I was confident in my backups because
* they run every night very consistently. If they fail, e.g. because of a network outage, I reliably get a warning.
* I verified that the backup job ran successfully prior to upgrading
The main problem was me assuming that a successful run of the backup command meant a successful backup. Everyone will
tell you that a backup that is not tested is not to be trusted. And they are right. However, doing frequent test-restores
exceeds my time and server capacity. So what I'll do instead is the following:
* mount the backup before an upgrade
* `tail` the backup file as created by postgres-backup and ensure the data is from the same day
* check media folders for the last changed image
This is not a 100% guarantee, but I'd argue it's a pretty good compromise for now. As mounting backups becomes more
frequent and therefore faster, I'll re-evaluate doing a test-restore at least semi-regularly.
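Those checks are easy to script. A minimal sketch in Python of what I mean; the paths and the one-day freshness threshold are examples from my own layout, not something the playbook or borgmatic dictates:
```python
# Minimal sketch of the pre-upgrade backup checks described above.
# All paths are examples (my layout), not something mash/borgmatic provides.
import subprocess
from datetime import datetime, timedelta
from pathlib import Path

MAX_AGE = timedelta(days=1)


def newest_mtime(directory: Path) -> datetime:
    """Most recent modification time of any file below `directory`."""
    return max(datetime.fromtimestamp(p.stat().st_mtime)
               for p in directory.rglob("*") if p.is_file())


def check_backup(dump_file: Path, media_dir: Path) -> None:
    # Show the tail of the pg_dump output so I can eyeball recent data
    subprocess.run(["tail", "-n", "20", str(dump_file)], check=True)
    # Ensure both the dump and the newest media file are fresh
    for label, mtime in [("dump", datetime.fromtimestamp(dump_file.stat().st_mtime)),
                         ("media", newest_mtime(media_dir))]:
        age = datetime.now() - mtime
        assert age < MAX_AGE, f"{label} is stale: last written {mtime}"


check_backup(Path("/mnt/backup/postgres/gay-pirate-assassins.sql"),
             Path("/mnt/backup/gotosocial/storage"))
```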
## Conclusion
I fucked up, but I was lucky that my error was recoverable and no data was lost. Next time this will hopefully not be due
to luck, but to better planning!
Any questions? Let me know!

View File

@@ -0,0 +1,22 @@
```
"
Oct 12 09:33:25 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 09:33:25.266" func=cache.(*Caches).Start level=INFO msg="start: 0xc002476008"
Oct 12 09:33:25 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 09:33:25.303" func=bundb.pgConn level=INFO msg="connected to POSTGRES database"
Oct 12 09:33:25 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 09:33:25.328" func=migrations.init.110.func1 level=INFO msg="creating statuses column thread_id_new"
Oct 12 09:33:31 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 09:33:31.872" func=bundb.queryHook.AfterQuery level=WARN duration=6.528757799s query="SELECT count(*) FROM \"statuses\"" msg="SLOW DATABASE QUERY"
Oct 12 09:33:31 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 09:33:31.873" func=migrations.init.110.func1 level=WARN msg="rethreading 4611812 statuses, this will take a *long* time"
Oct 12 09:33:38 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 09:33:38.111" func=migrations.init.110.func1 level=INFO msg="[~0.02% done; ~137 rows/s] migrating threads"
Oct 12 09:33:44 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 09:33:44.618" func=migrations.init.110.func1 level=INFO msg="[~0.04% done; ~171 rows/s] migrating threads"
```
```
Oct 12 13:38:08 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 13:38:08.726" func=migrations.init.110.func1 level=INFO msg="[~99.98% done; ~148 rows/s] migrating stragglers"
Oct 12 13:38:10 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 13:38:10.309" func=migrations.init.110.func1 level=INFO msg="[~99.99% done; ~162 rows/s] migrating stragglers"
Oct 12 13:38:12 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 13:38:12.192" func=migrations.init.110.func1 level=INFO msg="[~100.00% done; ~141 rows/s] migrating stragglers"
Oct 12 13:38:13 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 13:38:13.711" func=migrations.init.110.func1 level=INFO msg="[~100.00% done; ~136 rows/s] migrating stragglers"
Oct 12 13:38:13 s3 mash-gpa-gotosocial[2304549]: timestamp="12/10/2025 13:38:13.714" func=migrations.init.110.func1 level=INFO msg="dropping temporary thread_id_new index"
```

View File

@@ -0,0 +1,105 @@
name: twenty

services:
  server:
    image: twentycrm/twenty:${TAG:-latest}
    volumes:
      - type: bind
        source: ./server_local_data
        target: /app/packages/twenty-server/.local-storage
    ports:
      - "3000:3000"
    environment:
      NODE_PORT: 3000
      PG_DATABASE_URL: postgres://${PG_DATABASE_USER:-postgres}:${PG_DATABASE_PASSWORD:-postgres}@${PG_DATABASE_HOST:-db}:${PG_DATABASE_PORT:-5432}/default
      SERVER_URL: ${SERVER_URL}
      REDIS_URL: ${REDIS_URL:-redis://redis:6379}
      DISABLE_DB_MIGRATIONS: ${DISABLE_DB_MIGRATIONS}
      DISABLE_CRON_JOBS_REGISTRATION: ${DISABLE_CRON_JOBS_REGISTRATION}
      STORAGE_TYPE: ${STORAGE_TYPE}
      STORAGE_S3_REGION: ${STORAGE_S3_REGION}
      STORAGE_S3_NAME: ${STORAGE_S3_NAME}
      STORAGE_S3_ENDPOINT: ${STORAGE_S3_ENDPOINT}
      APP_SECRET: ${APP_SECRET:-replace_me_with_a_random_string}
    labels:
      - "traefik.http.middlewares.twenty-add-response-headers.headers.customresponseheaders.Strict-Transport-Security=max-age=31536000; includeSubDomains"
      - "traefik.http.middlewares.twenty-add-response-headers.headers.customresponseheaders.Access-Control-Allow-Origin=*"
      - "traefik.enable=true"
      - "traefik.docker.network=traefik"
      - "traefik.http.routers.twenty.rule=Host(`twenty.hyteck.de`)"
      - "traefik.http.routers.twenty.middlewares=twenty-add-response-headers"
      - "traefik.http.routers.twenty.service=twenty-service"
      - "traefik.http.routers.twenty.entrypoints=web-secure"
      - "traefik.http.routers.twenty.tls=true"
      - "traefik.http.routers.twenty.tls.certResolver=default"
      - "traefik.http.services.twenty-service.loadbalancer.server.port=3000"
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: curl --fail http://localhost:3000/healthz
      interval: 5s
      timeout: 5s
      retries: 20
    restart: always
    networks:
      - traefik
      - default

  worker:
    image: twentycrm/twenty:${TAG:-latest}
    volumes:
      - type: bind
        source: ./server_local_data
        target: /app/packages/twenty-server/.local-storage
    command: [ "yarn", "worker:prod" ]
    environment:
      PG_DATABASE_URL: postgres://${PG_DATABASE_USER:-postgres}:${PG_DATABASE_PASSWORD:-postgres}@${PG_DATABASE_HOST:-db}:${PG_DATABASE_PORT:-5432}/default
      SERVER_URL: ${SERVER_URL}
      REDIS_URL: ${REDIS_URL:-redis://redis:6379}
      DISABLE_DB_MIGRATIONS: "true" # it already runs on the server
      DISABLE_CRON_JOBS_REGISTRATION: "true" # it already runs on the server
      STORAGE_TYPE: ${STORAGE_TYPE}
      STORAGE_S3_REGION: ${STORAGE_S3_REGION}
      STORAGE_S3_NAME: ${STORAGE_S3_NAME}
      STORAGE_S3_ENDPOINT: ${STORAGE_S3_ENDPOINT}
      APP_SECRET: ${APP_SECRET:-replace_me_with_a_random_string}
    depends_on:
      db:
        condition: service_healthy
      server:
        condition: service_healthy
    restart: always
    networks:
      - default

  db:
    image: postgres:16
    volumes:
      - type: bind
        source: ./db_data
        target: /var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${PG_DATABASE_USER:-postgres}
      POSTGRES_PASSWORD: ${PG_DATABASE_PASSWORD:-postgres}
    healthcheck:
      test: pg_isready -U ${PG_DATABASE_USER:-postgres} -h localhost -d postgres
      interval: 5s
      timeout: 5s
      retries: 10
    restart: always

  redis:
    image: redis
    restart: always
    command: [ "--maxmemory-policy", "noeviction" ]

networks:
  traefik:
    name: "traefik"
    external: true

View File

@@ -0,0 +1,19 @@
TAG=latest
#PG_DATABASE_USER=postgres
# Use openssl rand -base64 32
PG_DATABASE_PASSWORD=
#PG_DATABASE_HOST=db
#PG_DATABASE_PORT=5432
#REDIS_URL=redis://redis:6379
SERVER_URL=https://twenty.hyteck.de
# Use openssl rand -base64 32
APP_SECRET=
STORAGE_TYPE=local
# STORAGE_S3_REGION=eu-west3
# STORAGE_S3_NAME=my-bucket
# STORAGE_S3_ENDPOINT=

Binary file not shown.

After

Width:  |  Height:  |  Size: 99 KiB

View File

@@ -0,0 +1,169 @@
---
title: "Trying Twenty: How does an Open Source CRM work?"
date: 2025-08-03T06:10:10+02:00
lastmod: 2025-08-03T12:10:10+02:00
draft: false
image: "uploads/twenty.png"
categories: ['English']
tags: ['crm', 'twenty', 'salesforce', 'django', 'self-hosting']
---
I spend my day working with Salesforce, a very, very feature-rich CRM that you pay big money to use.
Salesforce is the opposite of OpenSource, and its many features are expensive. Salesforce's business model is based on this and on the lock-in effect.
If your company has invested in implementing Salesforce, they'll likely pay a lot to keep it.
So what does an alternative look like? Let's have a look at [Twenty](https://twenty.com), an OpenSource CRM that recently reached the magic 1.0 version.
# Getting started
There are two options for getting started: register at [app.twenty.com](https://app.twenty.com) and start right away on the devs' instance, or self-host Twenty on your own server.
I did the latter, so let's discuss how that works. The basic steps I took were
* point twenty.hyteck.de to a server
* Install traefik on the server (I cheated, traefik was already installed)
* Deploy [this docker-compose.yml](docker-compose.yml) with [this env file](env)
Then visit the domain and set up the first user.
# Features
Twenty offers an initial datamodel that should be familiar from other CRMs. The standard objects are
![A screenshot of the person model in Twenty](person-model.png)
* **Persons** An individual person. You can attach notes, e-mails, etc.
* **Companies** The same for organizations. Organization websites must be unique.
* **Opportunities** The classic opportunity with customizable stages
* **Notes** They can be attached to any of the objects above
* **Tasks** Items to work on
* **Workflows** Automations similar to Salesforce flows. E.g. you can create a task every time an Opportunity is created.
The basic datamodel can be extended in the GUI. Here is what my "Company" model looks like:
![A screenshot of twenty. It shows the company model being renamed to Organizations and deactivated fields such as Twitter links or number of employees.](organization_dm.png)
You can add any of the following fields to an object.
![A list of fields: Text, Number, True/False, Date and Time, Date, Select, Multi-Select, Rating, Currency, E-Mails, Links, Phones, Full Name, Address, Relation and the Advanced fields called Unique ID, JSON and Array](fields.png)
### Workflows
Workflows are Twenty's way of allowing users to build automations. You can start a Workflow when a record is created,
updated or deleted. In addition, they can be started manually, on a schedule, and via webhook (yeah!).
![A workflow in twenty. After the Trigger "Organization" created there is a new task generated, a webhook send and a form used.](workflow1.png)
You can then add nodes that trigger actions. Available right now are
* **Creating, updating or deleting a record**
* **Searching records**
* **Sending E-Mails** This is the only option to trigger e-mails so far
* **Code** Serverless Javascript functions
* **Form** The form will pop up on the user's screen when the workflow is launched from a manual trigger. For other types of triggers, it will be displayed in the Workflow run record page.
* **HTTP request** Although possible via Code, this is a handy shortcut to trigger HTTP requests
What is currently completely missing are foreach loops and [conditions](https://github.com/twentyhq/core-team-issues/issues/1265). I cannot say "If the Opportunity stage is updated to X, do Y, else do Z".
Without this, Workflows are really limited in their power.
What already seems quite mature, though, is the code option. It allows you to put in arbitrary code and output a result.
![Screenshot of a javascript function in Twenty that adds two numbers together](serverless_function.png)
I did not try a lot, but I assume most basic Javascript works. I successfully built an HTTP request that sends data to a server.
If what you're doing is straightforward enough not to need loops and conditions, or if you are okay with doing all of that in the Code node, you can do basically anything.
## API
Twenty offers an extensive API that allows you to do basically everything. It's well documented and easy to use.
Here is an example of me syncing rescue organizations from [notfellchen.org](https://notfellchen.org) to Twenty:
```python
import requests

from fellchensammlung.models import RescueOrganization


def sync_rescue_org_to_twenty(rescue_org: RescueOrganization, base_url, token: str):
    if rescue_org.twenty_id:
        update = True
    else:
        update = False
    payload = {
        "eMails": {
            "primaryEmail": rescue_org.email,
            "additionalEmails": None
        },
        "domainName": {
            "primaryLinkLabel": rescue_org.website,
            "primaryLinkUrl": rescue_org.website,
            "additionalLinks": []
        },
        "name": rescue_org.name,
    }
    if rescue_org.location:
        payload["address"] = {
            "addressStreet1": f"{rescue_org.location.street} {rescue_org.location.housenumber}",
            "addressCity": rescue_org.location.city,
            "addressPostcode": rescue_org.location.postcode,
            "addressCountry": rescue_org.location.countrycode,
            "addressLat": rescue_org.location.latitude,
            "addressLng": rescue_org.location.longitude,
        }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}"
    }
    if update:
        url = f"{base_url}/rest/companies/{rescue_org.twenty_id}"
        response = requests.patch(url, json=payload, headers=headers)
        assert response.status_code == 200
    else:
        url = f"{base_url}/rest/companies"
        response = requests.post(url, json=payload, headers=headers)
        assert response.status_code == 201
        rescue_org.twenty_id = response.json()["data"]["createCompany"]["id"]
        rescue_org.save()
```
# The Company, Business Model and Paid Features
The company behind Twenty is called "Twenty.com PBC" and mostly seems to consist of former AirBnB employees in Paris.
The company is probably backed by venture capital.
The current business model is to charge for using the company's instance of Twenty. It starts at 9\$/user/month without
enterprise features. SSO and support will cost you 19\$/user/month.
Self-hosting is free, but SSO is locked behind an enterprise badge with seemingly no way to pay for activating it.
I suspect that in the future more features will become "Enterprise only" even when self-hosting. All contributors must agree
to [a Contributor License Agreement (CLA)](https://github.com/twentyhq/twenty/blob/main/.github/CLA.md), so I
believe they could change the license in the future, including switching away from Open Source.
# Conclusion
Twenty is a really promising start on building a good CRM. The ease of customizing the datamodel,
using the API and a solid beginning to Workflows allow users to get a lot of value from it already.
Workflows need some more work to become as powerful as they should be, and the e-mail integration needs to get better.
Stating the obvious: This is not something that could ever replace Salesforce. But it doesn't have to!
There are many organizations that would benefit a lot from a CRM like Twenty; they simply don't need, can't handle or
don't want to pay for all the features other CRMs offer.
If Twenty continues to focus on small to medium companies and the right mix of standard features vs. custom development options, I see a bright future for it.
There are the usual problems of VC-backed OSS development; we shall see how it goes for them.
# Addendum: Important Features
Here is a short list of features I missed and their place on the roadmap, if they have one:
* **Compose & Send E-Mails** Planned [Q4 2025](https://github.com/orgs/twentyhq/projects/1?pane=issue&itemId=106097937&issue=twentyhq%7Ccore-team-issues%7C811)
* **Foreach loops in Workflows** [Q3 2025](https://github.com/orgs/twentyhq/projects/1/views/33?pane=issue&itemId=93150024&issue=twentyhq%7Ccore-team-issues%7C21)
* **Conditions in Flows** [Q4 2025](https://github.com/orgs/twentyhq/projects/1/views/33?pane=issue&itemId=121287765&issue=twentyhq%7Ccore-team-issues%7C1265)

Binary file not shown.

After

Width:  |  Height:  |  Size: 146 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 95 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 57 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 228 KiB

View File

@@ -0,0 +1,14 @@
{{ $source := index .Params 0 }}
<div class="embedded-html">
  {{ with .Page.Resources.GetMatch $source | readFile }}
    {{ replace . "https://viewer.diagrams.net/js/viewer-static.min.js" "/js/viewer-static.min.js" | safeHTML }}
  {{ end }}
</div>
<style>
  /* Reset the image width that is set in the theme.
     If weird styling issues appear, check if the theme is responsible and eventually unset this here. */
  .geAdaptiveAsset {
    width: unset;
  }
</style>

View File

@@ -1,5 +1,4 @@
<div>
<object data="/uploads/{{ index .Params 0}}.pdf" type="application/pdf" width="100%" height="500px">
<p><a href="/uploads/{{ index .Params 0}}.pdf">Download the PDF!</a></p>
<object class="fitvidsignore" data="/uploads/{{ index .Params 0}}.pdf" type="application/pdf" width="100%" height="500px">
<p><a href="/uploads/{{ index .Params 0}}.pdf">Download the PDF!</a></p>
</object>

View File

@@ -1,14 +0,0 @@
<?php
//database settings
define ("DB_USER", "moanos");
define ("DB_HOST", "localhost");
define ("DB_PW", "dwDs5k4PMQ1a7tK51OjK");
define ("DB_DATABASE", "moanos_gartensia");
//database tables:
define ("TABLE_USER", "user");
define("MODULE_PATH", $_SERVER['DOCUMENT_ROOT']);
?>

View File

@@ -1,45 +0,0 @@
<?php
require_once(__dir__."/config.inc.php");
$aData[TABLE_USER] = array(
    'user_ID' => array(
        'type' => 'INT',
        'size' => 11,
        'unique' => 'TRUE',
        'standard' => 'NOT NULL',
        'extra' => 'AUTO_INCREMENT PRIMARY KEY'
    ),
    'name' => array(
        'type' => 'VARCHAR',
        'size' => 255,
        'standard' => 'NOT NULL'
    ),
    'email' => array(
        'type' => 'VARCHAR',
        'size' => 255,
        'standard' => 'NOT NULL'
    ),
    'signalmessenger' => array(
        'type' => 'VARCHAR',
        'size' => 255,
        'standard' => 'NOT NULL'
    ),
    'sms' => array(
        'type' => 'VARCHAR',
        'size' => 255,
        'standard' => 'NOT NULL'
    ),
    'telegram' => array(
        'type' => 'VARCHAR',
        'size' => 255,
        'standard' => 'NOT NULL'
    ),
    'threema' => array(
        'type' => 'VARCHAR',
        'size' => 255,
        'standard' => 'NOT NULL'
    )
);

View File

@@ -1,277 +0,0 @@
<?php
ini_set('display_errors', 0);
ini_set('display_startup_errors', 0);
error_reporting(E_ALL);
class Data{
function __construct(){
$this->link_database();
$this->em_check_database();
$this->read_variables();
date_default_timezone_set('Europe/Berlin');
}
function read_variables() {
//reads all GET and POST variables into the object, addslashing both
if (count($_POST)) {
foreach ($_POST as $key => $val){
$key=addslashes("r_".$key);
if (is_array($val)) {
for ($z=0;$z<count($val);$z++) {
$val[$z]=addslashes($val[$z]);
}
}
else {
$val=addslashes($val);
}
$this->$key=$val;
}
}
if (count($_GET)) {
foreach ($_GET as $key => $val){
$key=addslashes("r_".$key);
if (is_array($val)) {
for ($z=0;$z<count($val);$z++) {
$val[$z]=addslashes($val[$z]);
}
}
else {
$val=addslashes($val);
}
$this->$key=$val;
}
}
}//end of function read variables
function link_database() {
$this->databaselink = new mysqli(DB_HOST,DB_USER,DB_PW,DB_DATABASE);
$this->databaselink->set_charset('utf8');
if ($this->databaselink->connect_errno) {
return "Datenbank nicht erreichbar: (" . $this->databaselink->connect_errno . ") " . $this->databaselink->connect_error;
}
else{
$this->databasename=DB_DATABASE;
$this->databaselink->query("SET SQL_MODE = '';");
return True;
}
}
function em_check_database() {
/*
params:
None
returns:
None
This function compares the database structure to a predefined structure which is saved in config/db_array.inc.php
and adds missing tables and fields. This makes installation and updates easy.
*/
$aTable=array();
//Read all tables into an array, including all their properties
$result=$this->databaselink->query("show tables from ".DB_DATABASE);
while($row = $result->fetch_array(MYSQLI_BOTH)){
$aTable[]=$row[0];
}
$aData=array();
$database_structure_path = __DIR__."/config/db_array.inc.php";
include($database_structure_path);
foreach($aData as $table=>$fields){
if(!in_array($table,$aTable)) {
//Add table to database
$mCounter=0;
$sCommand="CREATE TABLE IF NOT EXISTS `".$table."` (";
foreach($fields as $fieldname=>$properties){
$extra = "";
if($mCounter==0) {
$key="KEY `".$fieldname."` (`".$fieldname."`)";
}
if($properties["size"]!="") {
$size="(".$properties["size"].")";
}
else {
$size="";
}
if((isset($properties["unique"])) and ($properties['unique']==true)) {
$unique="UNIQUE KEY `".$fieldname."_2` (`".$fieldname."`),";}
else {
$unique="";
}
if((isset($properties["extra"])) and ($properties != "")){
$extra = $properties['extra'];
}
$sCommand .= "`".$fieldname."` ".$properties["type"].$size." ".$properties["standard"]." ".$extra.",";
$mCounter++;
}
$sCommand.=$unique.$key.") ENGINE=InnoDB ;";
$this->last_query[]=$sCommand;
$updateresult=$this->databaselink->query($sCommand);
}
else {
//Check fields and update the table
$resultField=$this->databaselink->query("show fields from ".DB_DATABASE.".".$table);
while($aRowF = $resultField->fetch_array(MYSQLI_BOTH)){
$aTableFields[]=$aRowF[0];
}
foreach($fields as $fieldname=>$properties) {
if(!in_array($fieldname,$aTableFields)) {
if((isset($properties["size"]) and ($properties['size']!=""))) {
$size="(".$properties["size"].")";
}
else {
$size="";
}
$sCommand="ALTER TABLE `".$table."` ADD `".$fieldname."` ".$properties["type"].$size." ".$properties["standard"];
$this->last_query[]=$sCommand;
$updateresult=$this->databaselink->query($sCommand);
}
}
}
unset($aTableFields);
unset($aFields);
unset($properties);
}
unset($aData);
}
function store_data($sTable,$aFields,$sKey_ID,$mID) {
//updates or inserts data
//returns ID or -1 if fails
$i=0; $returnID = 0;
if(($mID>0) or ($mID!="") or ($mID != null)) {
//search for it
$aCheckFields=array($sKey_ID=>$mID);
$aRow=$this->select_row($sTable,$aCheckFields);
$returnID=$aRow[$sKey_ID];
}
if(($returnID>0) or ($returnID!="")) {
$sQuery="update ".$sTable." set ";
foreach($aFields as $key=>$value) {
$sQuery.=$key."='".$value."'";
$i++;
if($i<count($aFields)) {
$sQuery.=",";
}
}
$sQuery.=" where ".$sKey_ID."='".$mID."'";
$mDataset_ID=$returnID;
}
else {
$sKeys = ""; $sValues = "";
$sQuery="insert into ".$sTable." (";
foreach($aFields as $sKey=>$value) {
$sKeys.=$sKey;
$sValues.="'".$value."'";
$i++;
if($i<count($aFields)) {
$sKeys.=",";
$sValues.=",";
}
}
$sQuery.=$sKeys.") values (".$sValues.")";
}
$this->last_query[]=$sQuery;
if ($pResult = $this->databaselink->query($sQuery)) {
if(($returnID>0) or ($returnID!="")) {
return $returnID;
}
else {
return $this->databaselink->insert_id;
}
}
else {
return -1;
}
}
function save_user($aUser){
/*
args:
Array $aUser
Array of user information which will be saved.
e.g. array(
'forename' => String $forname,
'surname' => String $surname,
'email' => String $email,
'UID' => String $UID,
'language' => String $language,
'admin' => Bool $admin,
'password' => String md5(strrev($password)), #deprecated, do not use!
'password_hash' => password_hash(String $password, PASSWORD_DEFAULT)
);
returns:
None
This function saves the user information given in $aUser. If the user exists, it
overwrites existing data but does not delete unspecified fields.
*/
$aFields = $aUser;
if ((isset($this->r_user_ID))and ($this->r_user_ID != "")){
$this->ID=$this->store_data(TABLE_USER, $aFields, 'user_ID' , $this->r_user_ID);
}
else{
$this->ID=$this->store_data(TABLE_USER, $aFields, NULL , NULL);
}
}
function get_view($Datei) {
ob_start(); //start output buffering
include($Datei);
$output=ob_get_contents(); //read the buffer contents
ob_end_clean(); //discard the buffer
return $output;
}
}
//end of class
session_start();
include ("config/config.inc.php");
$oObject = new Data;
$oObject->output = "";
switch ($oObject->r_ac){
case 'user_save':
$aUser = array();
if(isset($oObject->r_user_ID)){
$aUser['user_ID'] = $oObject->r_user_ID;
}
if(isset($oObject->r_name)){
$aUser['name'] = $oObject->r_name;
}
if(isset($oObject->r_email)){
$aUser['email'] = $oObject->r_email;
}
if(isset($oObject->r_signalmessenger)){
$aUser['signalmessenger'] = $oObject->r_signalmessenger;
}
if(isset($oObject->r_sms)){
$aUser['sms'] = $oObject->r_sms;
}
if(isset($oObject->r_telegram)){
$aUser['telegram'] = $oObject->r_telegram;
}
if(isset($oObject->r_threema)){
$aUser['threema'] = $oObject->r_threema;
}
$oObject->save_user($aUser);
$oObject->output .= "Erfolgreich gespeichert";
break;
default:
$oObject->output = $oObject->get_view("views/user_form.php");
break;
}
function output($oObject){
echo $oObject->get_view("views/head.php");
echo $oObject->get_view("views/body.php");
}
output($oObject);
?>
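For illustration, given the `user` table definition from config/db_array.inc.php above, em_check_database assembles roughly the following statement when the table does not exist yet (a sketch: the generated string's spacing differs, and the UNIQUE KEY requested for user_ID is actually lost because $unique is reset by the later fields in the loop):

CREATE TABLE IF NOT EXISTS `user` (
  `user_ID` INT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` VARCHAR(255) NOT NULL,
  `email` VARCHAR(255) NOT NULL,
  `signalmessenger` VARCHAR(255) NOT NULL,
  `sms` VARCHAR(255) NOT NULL,
  `telegram` VARCHAR(255) NOT NULL,
  `threema` VARCHAR(255) NOT NULL,
  KEY `user_ID` (`user_ID`)
) ENGINE=InnoDB;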

static/js/viewer-static.min.js (vendored normal file, 7625 lines)

File diff suppressed because one or more lines are too long

Binary file not shown (image added: 146 KiB)

Binary file not shown (image added: 197 KiB)

Binary file not shown (image added: 86 KiB)

static/uploads/twenty.png (new binary file, image: 51 KiB, not shown)

View File

@@ -1,13 +0,0 @@
<body>
<?php
if ((isset($this->error)) and ($this->error != "")){
echo "<div id=error>";
echo $this->error;
echo "</div>";
}
echo "<div id=content>";
echo $this->output;
echo "</div>";
?>

View File

@@ -1,15 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="author" content="Sam">
<meta http-equiv="pragma" content="no-cache">
<meta http-equiv="cache-control" content="no-cache">
<link rel="SHORTCUT ICON" type="image/x-icon" href="images/favicon.ico">
<?php
echo ' <link REL="stylesheet" TYPE="text/css" HREF="css/styles.css">
<title>Address collection</title>
';
?>
</head>

View File

@@ -1,17 +0,0 @@
<?php
$form = '<form action="'.htmlspecialchars($_SERVER["PHP_SELF"]).'" method="post">';
$form .='
<input type = hidden name="ac" value = "user_save">
<input type = hidden name="user_ID" value = "">';
$form .= 'Name: <input type="text" name="name" value=""><br>';
$form .= 'E-Mail: <input type="text" name="email" value=""><br>';
$form .= 'Signal: <input type="text" name="signalmessenger" value=""><br>';
$form .= 'SMS: <input type="text" name="sms" value=""><br>';
$form .= 'Telegram: <input type="text" name="telegram" value=""><br>';
$form .= 'Threema: <input type="text" name="threema" value=""><br>';
$form .= '
<input type="submit" value="Send">
<input type="reset" value="Reset";
</form>';
echo $form;
?>