Compare commits


37 Commits

Author SHA1 Message Date
b557376ef2 fix: date
All checks were successful
ci/woodpecker/manual/woodpecker Pipeline was successful
2025-08-03 13:51:31 +02:00
b0e58f1737 feat: Add post on twenty 2025-08-03 13:51:18 +02:00
27968f6ae5 feat: add small update on notification function
2025-07-13 00:05:31 +02:00
2388e78c5f feat: Add thoughts on html mails post
2025-07-12 12:48:20 +02:00
3fb5a5d4fc fix: typo
2025-06-28 15:03:51 +02:00
dc6e60970a fix: metadata
2025-06-28 15:02:52 +02:00
a6df0a82ae fix: metadata 2025-06-28 15:02:07 +02:00
90e8d15af7 feat: Add post improve-osm-by-using-it
2025-06-28 14:56:40 +02:00
2641955a36 feat: Rework the bio a bit 2025-06-28 14:52:52 +02:00
4e4d825283 refactor: delete old stuff 2025-06-28 07:36:29 +02:00
fb31dedf4d feat: Add translation note 2025-03-04 18:01:00 +01:00
cc9b0733dc fix: Restrict height, fix syntax 2025-03-04 17:59:52 +01:00
b90ecbadc4 feat: update bio
2025-03-04 17:28:52 +01:00
698d648263 feat: add fiat lux 2025-03-04 17:26:04 +01:00
74e5f3d970 feat: clarifification
2025-03-04 17:17:30 +01:00
60306d1abc fix: minor edits
2025-03-04 08:58:00 +01:00
7dc9fc525a feat: add post
2025-03-03 22:06:19 +01:00
e805946fab fix: Add images
2025-01-23 16:31:43 +01:00
2bb1ea9182 feat: Add hosted on
2025-01-23 15:56:53 +01:00
f860ac4e3c formatting 2025-01-23 15:54:15 +01:00
595e8b2b35 Adjust copyright 2025-01-23 15:53:49 +01:00
d873506c71 fix: Minor adjustments
2024-12-11 15:26:01 +01:00
46c4def4c5 feat: add post on Circumventing Authorized-Fetch
2024-12-11 15:12:06 +01:00
60072ffd54 update resume
2024-10-05 13:24:04 +02:00
294a067d5f feat: Add image
2024-10-04 14:17:06 +02:00
ebc5718318 fix: Adjust date
2024-10-04 14:15:53 +02:00
22156b404e Adjust first post
2024-10-04 14:14:41 +02:00
2ee702a151 refactor: rename blog post
2024-09-28 13:33:15 +02:00
03237d6ea2 fix: various small
2024-09-28 12:16:35 +02:00
5fd99b7e5c fix: move image to correct location
2024-09-28 12:07:15 +02:00
1a75fcc147 fix: correct time to make change public
2024-09-28 12:05:34 +02:00
e58eebd8af feat: Add post on django geocoding
2024-09-28 12:02:10 +02:00
2097d68829 fix: typos and minor adjustments
2024-04-17 22:39:32 +02:00
efb10729fe fix: Image path, schema in theme
2024-04-17 22:35:16 +02:00
06426db458 fix(ci): Use when condition
2024-04-17 07:16:39 +02:00
21dd908466 ci: Use when condition 2024-04-17 07:15:20 +02:00
881371262d ci: Use steps
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
2024-04-16 22:04:47 +02:00
41 changed files with 1188 additions and 405 deletions

View File

@@ -1,10 +1,13 @@
---
pipeline:
steps:
  build:
    image: klakegg/hugo
    commands:
      - hugo
    when:
      - event: push
      - event: manual
  deploy:
    image: appleboy/drone-scp
@@ -17,3 +20,6 @@ pipeline:
      source: public/
      key:
        from_secret: ssh_key
    when:
      - event: push
      - event: manual

View File

@@ -33,10 +33,10 @@ paginate = 5 #frontpage pagination
author = "Julian-Samuel Gebühr"
authorLink = "https://hyteck.de/"
bio = [
"Student of Medical Informatics, Developer, He/Him"
"Business Analyst for work, Developer for fun. Activist because it's necessary. He/Him"
]
copyright = [
'&copy; 2023 CC-BY Julian-Samuel Gebühr</a> '
'&copy; 2025 CC-BY Julian-Samuel Gebühr</a> '
]
email = "julian-samuel@gebuehr.net"
@@ -64,7 +64,7 @@ paginate = 5 #frontpage pagination
weight = 4
# this will also be in author bio if there is no writer.
[params.social]
[params.social]
github = "https://github.com/moan0s"
email = "julian-samuel@gebuehr.net"
twitter = "https://twitter.com/jgebuehr"
@@ -77,13 +77,11 @@ paginate = 5 #frontpage pagination
[[params.fediverse]]
url = "https://chaos.social/@moanos"
name = "Personal mastodon profile"
[[params.fediverse]]
url = "https://lediver.se/@moanos_foss"
name= "FOSS profile"
[[params.fediverse]]
url = "https://pixelfed.social/@moanos"
name= "Pixelfed"
[params.hosted_on]
src = '/img/uberspace_badge_dark.png'
alt = 'Badge saying: Proudly hosted on asteroids'
target = 'https://uberspace.de'
[outputs]
home = ["HTML", "RSS"]

View File

@@ -1,6 +1,7 @@
---
title: "About"
date: 2019-08-20T19:56:10+02:00
lastmod: 2024-10-05T12:56:10+02:00
draft: False
image: ""
categories: [english, me]
@@ -14,15 +15,32 @@ Currently available are the [services listed here]({{< ref "/services" >}} "Serv
# About me
To put it short I do something with computers and have a history in neuroscience.
I work at [DKMS](https://www.dkms.de/), a nonprofit that fights blood cancer by registering potential blood stem cell donors and raising awareness and funds. My role is "Business Analyst" in our Salesforce and WebApps team. That means I spend my day trying to figure out what the business departments need, sketching solutions and translating between product and business teams.
After work I spend my time with programming, activism and my pet rats.
**My background**
After finishing school, I studied Medical Engineering in a joint course of the University of Stuttgart and the University of Tübingen.
In March 2020 I finished my bachelor thesis *"Real-time EEG analysis - Phase dependent effects of TMS on MEP"* at the Institute for Neuromodulation and Neurotechnology at the University Hospital Tübingen, led by Prof. Gharabaghi. After that, I worked there as a researcher.
In November 2020 I started studying Medical Informatics Tübingen.
In November 2020 I started studying Medical Informatics in Tübingen and finished in April 2024 with my master thesis *"Development and Validation of a Software Platform for Classification and Correction of Pathological Movement in Daily Activities by Multi-modal Sensor Analysis"*. This work focused on helping people with ataxia and Parkinson's disease as part of a larger project in the Section for Computational Sensomotorics at the Hertie Institute for Clinical Brain Research (HIH).
My advisor for this work was Winfried Ilg and it was examined by Prof. Dr. habil. Michael Menth and Prof. Dr. Martin Giese.
After work I spend my time with climbing, sailing and political activism.
# Open-source work & Freelancing
I work on various open-source projects:
| Project | Description |
| --- | --- |
| [ILMO](https://hyteck.de/post/ilmo/) | A library management tool, available as SaaS |
| [Notfellchen](https://notfellchen.org) | An app for helping fancy rats get adopted from rescues |
| [mash-playbook](https://github.com/mother-of-all-self-hosting/mash-playbook) | An Ansible playbook which helps you host a large catalog of FOSS services as Docker containers on your own server |
| https://github.com/spantaleev/matrix-docker-ansible-deploy | Matrix (An open network for secure, decentralized communication) server setup using Ansible and Docker |
and many more you can find on [GitHub](https://github.com/moan0s), [Codeberg](https://codeberg.org/moanos/) or [my own Gitea server](https://git.hyteck.de/).
Also I do a lot of programming, my biggest project so far is [ILMO](https://hyteck.de/post/ilmo/), an library management tool.
Starting with Java, most of my real world projects have been done in Python and PHP.
In 2019 I came in contact with programming of real time applications for medical devices and learned [Structured Text](https://en.wikipedia.org/wiki/Structured_text) (a programming language based on pascal focused on programming [PLCs](https://en.wikipedia.org/wiki/Programmable_logic_controller)) and C.
Since then I worked in clinical research, especially phase- and power dependency of brain stimulation.

View File

@@ -0,0 +1,131 @@
---
title: "Thoughts on HTML mails"
date: 2025-07-12T12:05:10+02:00
lastmod: 2025-07-13T00:00:04+02:00
draft: false
image: "uploads/html-e-mails.png"
categories: ['English']
tags: ['email', 'html', 'plaintext', 'django', 'notfellchen']
---
Lately I worked on notification e-mails for [notfellchen.org](https://notfellchen.org). Initially I just sent text
notifications without links to the site. Terrible idea! An e-mail notification I send should always have a call to action or at
minimum a link to more information.
I left the system like this for half a year because it kinda worked for me (didn't suck enough for me to care), and I was the main receiver of these notifications.
However, as the platform is developed further and more users join I need to think about more user-centric notifications.
So what do I imagine is important to a user?
* **Information benefit**: An e-mail's purpose is to inform a user. This information should be immediately visible & understandable.
* **Actionables**: Users should be able to act on the information received. This is the bright red button "DO SOMETHING NOW!" you see so often.
* **Unsubscribing**: Letting users stop informational e-mails is not only a legal requirement and morally the right thing to do, it also gives users agency and - I hope - improves the user experience
With these in mind, I naturally came to the next question: plaintext or HTML?
Some people would say [Plaintext is inherently better](https://useplaintext.email/) than HTML e-mails. Many of these reasons resonate with me including:
* Privacy invasion and tracking
* HTML emails are less accessible
* Some clients can't display HTML emails at all
* Mail client vulnerabilities
These are all valid points and are a reason I generally enjoy plaintext e-mails when I receive them.
But this is not about me, it's about the users. And there are some real benefits of HTML e-mails:
* Visually appealing: This is subjective but generally most users seem to agree on that
* User guidance: Rich text provides a real benefit when searching for the relevant information
Be honest: Do you read automated e-mails you receive completely? Or do you just skim for important information?
And here HTML mails shine: **Information can easily be highlighted** and a big button can lead the user to the right action.
Some might argue that you can also highlight a link in plaintext, but that will nearly always worsen accessibility for screen-reader users.
# The result
In the end, I decided that providing plaintext-only e-mails was not enough. I set up HTML mails, mostly using
[Django's send_mail](https://docs.djangoproject.com/en/5.2/topics/email/#send-mail) function, where I can pass the HTML message and attaching it correctly is done for me.
![A screenshot of an e-mail in thunderbird. The e-mail is structured in header, body and footer. The header says "Notfellchen.org", the body shows a message that a new user was registered and a bright green button to show the user. The footer offers a link to unsubscribe](mail_screenshot.png)
For anyone that is interested, here is how most of my notifications are sent:
```python
def send_notification_email(notification_pk):
    notification = Notification.objects.get(pk=notification_pk)
    subject = f"{notification.title}"
    context = {"notification": notification, }
    if notification.notification_type == NotificationTypeChoices.NEW_REPORT_COMMENT or notification.notification_type == NotificationTypeChoices.NEW_REPORT_AN:
        html_message = render_to_string('fellchensammlung/mail/notifications/report.html', context)
        plain_message = render_to_string('fellchensammlung/mail/notifications/report.txt', context)
    [...]
    elif notification.notification_type == NotificationTypeChoices.NEW_COMMENT:
        html_message = render_to_string('fellchensammlung/mail/notifications/new-comment.html', context)
        plain_message = render_to_string('fellchensammlung/mail/notifications/new-comment.txt', context)
    else:
        raise NotImplementedError("Unknown notification type")
    if "plain_message" not in locals():
        plain_message = strip_tags(html_message)
    mail.send_mail(subject, plain_message, settings.DEFAULT_FROM_EMAIL,
                   [notification.user_to_notify.email],
                   html_message=html_message)
```
Yes, this could be made more efficient - for now it works. I made the notification framework too complicated initially, so I'm still trying out what works and what doesn't.
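For illustration, one way such an if/elif chain could be flattened is a lookup table from notification type to template base name. This is a hypothetical sketch, not the actual rework; the type names mirror the snippet above and the mapping itself is an assumption:

```python
# Hypothetical: map each notification type to the base name of its templates,
# so the .html and .txt paths are derived from a single entry.
TEMPLATE_BY_TYPE = {
    "NEW_REPORT_COMMENT": "report",
    "NEW_REPORT_AN": "report",
    "NEW_COMMENT": "new-comment",
}

def template_paths(notification_type):
    """Return (html_template, txt_template) for a notification type."""
    try:
        base = TEMPLATE_BY_TYPE[notification_type]
    except KeyError:
        raise NotImplementedError(f"Unknown notification type: {notification_type}")
    prefix = "fellchensammlung/mail/notifications/"
    return f"{prefix}{base}.html", f"{prefix}{base}.txt"
```

Adding a new notification then only requires one dictionary entry plus the two templates.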
Here is the HTML template:
```html
{% extends "fellchensammlung/mail/base.html" %}
{% load i18n %}
{% block title %}
{% translate 'Neuer User' %}
{% endblock %}
{% block content %}
<p>Moin,</p>
<p>
es wurde ein neuer Useraccount erstellt.
</p>
<p>
Details findest du hier
</p>
<p>
<a href="{{ notification.user_related.get_full_url }}" class="cta-button">{% translate 'User anzeigen' %}</a>
</p>
{% endblock %}
```
and here is the plaintext version:
```
{% extends "fellchensammlung/mail/base.txt" %}
{% load i18n %}
{% block content %}{% blocktranslate %}Moin,
es wurde ein neuer Useraccount erstellt.
User anzeigen: {{ new_user_url }}
{% endblocktranslate %}{% endblock %}
```
Works pretty well for now. People who prefer plaintext will get it, and most users will have a skimmable HTML e-mail where the
styling helps them recognize where it's from and what to do. Accessibility-wise this seems like the best option.
And while adding a new notification will force me to create
* a new notification type,
* two new e-mail templates and
* a proper rendering on the website
this seems okay. Notifications are useful, but I don't want to shove them everywhere. I'm not running Facebook or LinkedIn after all.
So for now I'm pretty happy with the new shiny e-mails and will roll out the changes soon (if I don't find any more weird bugs).
PS: I wrote this post after reading [blog & website in the age of containerized socials](https://blog.avas.space/blog-website-eval/) by ava.
Maybe this "Thoughts on" format will stay and I will post these in addition to more structured deep dives.
# Update
I did a rework of the notification function and it's much cleaner now. However, it's less readable, so this blog post will stay as-is.
If you want to check out the new code have a look [on Codeberg](https://codeberg.org/moanos/notfellchen/src/commit/a4b8486bd489dacf8867b49d04f70f091556dc9d/src/fellchensammlung/mail.py).

Binary file not shown. (New image: 51 KiB)

View File

@@ -1,6 +1,6 @@
---
title: "Cryptpad"
date: 2021-05-7T22:08:55+02:00
date: 2021-05-07T22:08:55+02:00
draft: true
image: "uploads/ILMO_bordered.png"
tags: [FOSS]

View File

@@ -3,6 +3,7 @@ title: "Deploying a django app with docker, ansible and traefik"
date: 2023-07-24T22:10:10+02:00
draft: false
image: "uploads/docker-ansible-django-traefik/django_docker_ansible_traefik.png"
image_alt: "Graphic showing the Django, Docker, Ansible and Traefik logo"
categrories: ['English']
tags: ['MASH', 'django', 'ilmo', 'ansible', 'traefik', 'docker']
---

View File

@@ -0,0 +1,153 @@
---
title: "Where are you? - Part 2 - Geocoding with Django to empower area search "
date: 2024-10-04T14:05:10+02:00
draft: false
image: "uploads/django_geocoding2.png"
categrories: ['English']
tags: ['django', 'geocoding', 'nominatim', 'OpenStreetMap', 'osm', 'traefik', 'mash-playbook', 'docker', 'docker-compose']
---
# Introduction
In the [previous post](geocoding-with-django/) I outlined how to set up a Nominatim server that allows us to find a geolocation for any address on the planet. Now let's use our newfound power in Django. Again, all code snippets are [CC0](https://creativecommons.org/public-domain/cc0/) so make free use of them. But I'd be very happy if you tell me if you use them for something cool!
## Prerequisites
* You have a working geocoding server or use a public one
* You have a working Django app
If you want to do geocoding in a different environment you will still be able to use a lot of the following examples; just skip the Django specifics and configure the `GEOCODING_API_URL` according to your needs.
# Using the Geocoding API
First of all, let's define the geocoding API URL in our settings. This enables us to switch easily if a service is not available. Add the following to your `settings.py`:
```python
# appname/settings.py
""" GEOCODING """
GEOCODING_API_URL = config.get("geocoding", "api_url", fallback="https://nominatim.hyteck.de/search") # Adjust if needed
```
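The `config.get(...)` call implies the settings are read from an INI-style file via `configparser`; the matching section would look roughly like this (file location and hostname are assumptions for illustration):

```ini
[geocoding]
api_url = https://nominatim.example.org/search
```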
We can then add a class that interacts with the API.
```python
import logging
import requests
import json

from APPNAME import __version__ as app_version
from APPNAME import settings


class GeoAPI:
    api_url = settings.GEOCODING_API_URL
    # Set User-Agent headers as required by most usage policies (and it's the nice thing to do)
    headers = {
        'User-Agent': f"APPNAME {app_version}",
        'From': 'info@example.org'
    }

    def __init__(self, debug=False):
        self.requests = requests  # ignore why we do this for now

    def get_coordinates_from_query(self, location_string):
        result = self.requests.get(self.api_url, {"q": location_string, "format": "jsonv2"}, headers=self.headers).json()[0]
        return result["lat"], result["lon"]

    def _get_raw_response(self, location_string):
        result = self.requests.get(self.api_url, {"q": location_string, "format": "jsonv2"}, headers=self.headers)
        return result.content

    def get_geojson_for_query(self, location_string):
        try:
            result = self.requests.get(self.api_url,
                                       {"q": location_string,
                                        "format": "jsonv2"},
                                       headers=self.headers).json()
        except Exception as e:
            logging.warning(f"Exception {e} when querying Nominatim")
            return None
        if len(result) == 0:
            logging.warning(f"Couldn't find a result for {location_string} when querying Nominatim")
            return None
        return result
```
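A likely reason for the `self.requests = requests` line in `__init__` is testability: the attribute can be replaced with a stub in tests so no real HTTP request is made. A sketch of such a stub (the stub's shape is my assumption, mimicking only the parts of `requests` the wrapper uses):

```python
import json


class FakeResponse:
    """Mimics the small part of requests.Response the wrapper uses."""
    def __init__(self, payload):
        self._payload = payload
        self.content = json.dumps(payload).encode()

    def json(self):
        return self._payload


class FakeRequests:
    """Stand-in for the `requests` module: returns canned data, no network."""
    def __init__(self, payload):
        self._payload = payload

    def get(self, url, params=None, headers=None):
        return FakeResponse(self._payload)

# In a test one would swap the attribute set in GeoAPI.__init__, e.g.:
#   api = GeoAPI()
#   api.requests = FakeRequests([{"lat": "52.51", "lon": "13.38"}])
#   api.get_coordinates_from_query("Berlin")
```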
The wrapper is a synchronous interface to our geocoding server and will wait until the server returns a response or times out. This impacts the user experience, as a site will take longer to load. But it's much easier to code, so here we are. If anyone wants to write an async interface for this I'll not stop them!
For now, let's start by adding `Location` to our `models.py`:
```python
class Location(models.Model):
    place_id = models.IntegerField()
    latitude = models.FloatField()
    longitude = models.FloatField()
    name = models.CharField(max_length=2000)

    def __str__(self):
        return f"{self.name} ({self.latitude:.5}, {self.longitude:.5})"

    @staticmethod
    def get_location_from_string(location_string):
        geo_api = geo.GeoAPI()
        geojson = geo_api.get_geojson_for_query(location_string)
        if geojson is None:
            return None
        result = geojson[0]
        if "name" in result:
            name = result["name"]
        else:
            name = result["display_name"]
        location = Location.objects.create(
            place_id=result["place_id"],
            latitude=result["lat"],
            longitude=result["lon"],
            name=name,
        )
        return location
```
*Don't forget to make&run migrations after this*
And finally we can use the API!
```python
location = Location.get_location_from_string("Berlin")
print(location)
# Berlin, Deutschland (52.51, 13.38)
```
Looking good!
# Area search
Now we have the coordinates - great! But how can we get the distance between two coordinates? Luckily we are not the first people with that question, and there is the [Haversine formula](https://en.wikipedia.org/wiki/Haversine_formula) that we can use. It's not a perfect formula; for example, it assumes the earth is perfectly round, which it is not. But for most area-search use cases this should be irrelevant to the final result.
Here is my implementation:
```python
from math import atan2, cos, radians, sin, sqrt


def calculate_distance_between_coordinates(position1, position2):
    """
    Calculate the distance between two points identified by coordinates.
    It expects the coordinates to be a tuple (lat, lon).

    Based on https://en.wikipedia.org/wiki/Haversine_formula
    """
    earth_radius_km = 6371  # As per https://en.wikipedia.org/wiki/Earth_radius
    latitude1 = float(position1[0])
    longitude1 = float(position1[1])
    latitude2 = float(position2[0])
    longitude2 = float(position2[1])
    distance_lat = radians(latitude2 - latitude1)
    distance_long = radians(longitude2 - longitude1)
    a = pow(sin(distance_lat / 2), 2) + cos(radians(latitude1)) * cos(radians(latitude2)) * pow(sin(distance_long / 2), 2)
    c = 2 * atan2(sqrt(a), sqrt(1 - a))
    distance_in_km = earth_radius_km * c
    return distance_in_km
```
And with that we have a functioning area search 🎉
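To show how the distance helper turns into a radius search, here is a straightforward (if brute-force) sketch: compute the distance to every stored location and keep those within the radius. The helper is restated so the snippet runs standalone; the shelter names and coordinates are illustrative, and in a Django app this would iterate over `Location` objects instead of tuples:

```python
from math import atan2, cos, radians, sin, sqrt


def calculate_distance_between_coordinates(position1, position2):
    # Haversine formula, restated from above so this sketch is self-contained
    earth_radius_km = 6371
    lat1, lon1 = float(position1[0]), float(position1[1])
    lat2, lon2 = float(position2[0]), float(position2[1])
    distance_lat = radians(lat2 - lat1)
    distance_long = radians(lon2 - lon1)
    a = sin(distance_lat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(distance_long / 2) ** 2
    return earth_radius_km * 2 * atan2(sqrt(a), sqrt(1 - a))


def locations_in_radius(center, radius_km, locations):
    """Return (name, distance_km) pairs within radius_km of center, nearest first."""
    results = []
    for name, lat, lon in locations:
        distance = calculate_distance_between_coordinates(center, (lat, lon))
        if distance <= radius_km:
            results.append((name, distance))
    return sorted(results, key=lambda pair: pair[1])


# Illustrative data: search around Berlin with a 300 km radius
shelters = [
    ("Hamburg", 53.5511, 9.9937),
    ("Munich", 48.1374, 11.5755),
]
berlin = (52.52, 13.405)
nearby = locations_in_radius(berlin, 300, shelters)  # Hamburg (~255 km) only
```

For small datasets this linear scan is perfectly fine; a spatial index (e.g. PostGIS) only becomes interesting at larger scale.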

View File

@@ -2,20 +2,20 @@
title: "Styling an Django RSS Feed"
date: 2024-04-16T12:10:10+02:00
draft: false
image: "django_rss.png"
image: "uploads/django_rss.png"
categrories: ['English']
tags: ['django', 'rss', 'privacy', 'rss-styling', 'xml', 'xsl', 'atom', 'feed', 'rss-feed']
---
## Introduction
RSS is amazing! While not everyone thinks that, most people that *understand* RSS like it. This presents a problem as most people don't have chance to learn about it. Unless there is a person in the community that doesn't shut up about how great RSS is, they might not even know what it is, let alone use it.
RSS is amazing! While not everyone thinks that, most people that *understand* RSS, like it. This presents a problem, as most people don't have chance to learn about it. Unless there is a person in the community that doesn't shut up about how great RSS is (maybe that person is you), they might not even know what it is, let alone use it.
One big reason for this is, that when you click an link to an RSS feed you download a strange file or you browser is nice and renders some XML which is also not meant for human consumption. Wouldn't it be nice if people clicked on the RSS link and were greeted by a text explaining RSS and how to use it? And if the site would still be a valid RSS feed?
One big reason for this is that when you click a link to an RSS feed you download a strange file that most people don't know how to deal with. Maybe your browser is nice and renders some XML, which is also not meant for human consumption. Wouldn't it be better if people clicked on the RSS link and were greeted by a text explaining RSS and how to use it? And if the site would still be a valid RSS feed?
Luckily you don't have to imagine that - it's possible! You can even try it on this blog by clicking the RSS link in the menu.
Luckily you don't have to imagine that - it's possible! You can even try it on this blog by clicking the RSS link in the menu ([direct link](https://hyteck.de/index.xml)).
To do this has not been my idea. Darek Kay described this in the blog post [Style your RSS feed](https://darekkay.com/blog/rss-styling/) and I just copied most of their work! This was fairly easy for this Hugo-blog and is [available in my for of the hugo-nederburg-theme](https://github.com/moan0s/hugo-nederburg-theme). However, in a Django project it get's a bit more complicated. Let me explain.
Doing this has not been my idea. Darek Kay described this in the blog post [Style your RSS feed](https://darekkay.com/blog/rss-styling/) and I just copied most of their work! This was fairly easy for this Hugo blog and is [available in my fork of the hugo-nederburg-theme](https://github.com/moan0s/hugo-nederburg-theme). However, in a Django project it gets a bit more complicated. Let me explain.
## The Problem

View File

@@ -0,0 +1,145 @@
---
title: "Where are you? - Part 1 - Geocoding with Nominatim to empower area search "
date: 2024-09-28T12:05:10+02:00
draft: false
image: "uploads/django_geocoding.png"
categrories: ['English']
tags: ['django', 'geocoding', 'nominatim', 'OpenStreetMap', 'osm', 'traefik', 'mash-playbook', 'docker', 'docker-compose']
---
# Introduction
Geocoding is the process of translating a text input like `Ungewitterweg, Berlin` into a location with longitude and latitude such as `52.544022/13.147589`. So whenever you search in OpenStreetMap or Google Maps for a location, it does exactly that (and sometimes more, but we don't focus on that now).
For a pet project of mine ([notfellchen.org](https://notfellchen.org)) I wanted to do exactly that: When an animal is added there to be adopted, the user must input a location that is geocoded and saved with its coordinates. When another user, who wants to adopt a pet in their area, visits the site, they input their location and the site searches for all animals within a specific radius.
How is that done? I'll show you!
# Nominatim
Nominatim is a software that uses OpenStreetMap data for geocoding. It can also do the reverse: find an address for any location on the planet. It is used for the geocoding on [OpenStreetMap](https://openstreetmap.org), so it's quite production-ready. We could use the public API (while obeying the [usage policy](https://operations.osmfoundation.org/policies/nominatim/)), but it's nicer to have our own instance, so we don't stress the resources of a donation-funded organization, and to improve user privacy.
Nominatim works by importing geodata from a [PBF](https://wiki.openstreetmap.org/wiki/PBF_Format) file into a Postgres database. This database will later be queried to provide location data. The process is described below.
## DNS records
So let's start by setting the DNS records so that the domain `geocoding.example.org` points to your server. Adjust as needed.
| Name | Type | Target |
| --- | --- | --- |
| geocoding.example.org | CNAME | server1.example.org|
## Docker-compose Configuration
We will use Docker Compose to run the official [Nominatim Docker image](https://hub.docker.com/r/mediagis/nominatim).
It bundles Nominatim together with a Postgres database. I usually prefer to have a central database for multiple services (e.g. it allows easier backups), but for Nominatim a separate database is good for two reasons:
* the import process (described later) will not slow down the database for other services
* it's easier to nuke everything if things go wrong
The following environment variables will be used to configure the container:
* `PBF_URL`: The URL from where to download the PBF file that contains the geodata we will import. Such files can be obtained from [Geofabrik](https://download.geofabrik.de/). It is highly recommended to first download the file to a local server and then set this URL to that server, so that Geofabrik's resources are not affected if something goes wrong. Feel free to use the pre-set URL for Germany for testing, as long as it works.
* `REPLICATION_URL`: Where to get updates from. For example, Geofabrik's updates for the Europe extract are available at `https://download.geofabrik.de/europe-updates/`. Other places at Geofabrik follow the pattern `https://download.geofabrik.de/$CONTINENT/$COUNTRY-updates/`
* `POSTGRES_*`: Postgres tuning settings; the current values allow imports on a resource-constrained system. See the [postgres tuning docs](https://github.com/mediagis/nominatim-docker/tree/master/4.4#postgresql-tuning) for more info
* `NOMINATIM_PASSWORD`: Database password.
* `IMPORT_STYLE`: See below
**Import Styles**
Import styles determine how much "resolution" the geocoding has. There are the following options:
* `admin`: Only import administrative boundaries and places.
* `street`: Like the admin style but also adds streets.
* `address`: Import all data necessary to compute addresses down to house number level.
* `full`: Default style that also includes points of interest.
* `extratags`: Like the full style but also adds most of the OSM tags into the extratags column.
The style has a huge impact on how long the import will take and how much space it will require. Be aware that the import times below were measured on a machine with 32 GB RAM, 4 CPUs and SSDs; these are not fixed numbers. My import of `admin` took 12 hours.
| Style | Import time | DB size | after drop |
| --- | --- | --- | --- |
| admin | 4h | 215 GB | 20 GB|
| street | 22h | 440 GB | 185 GB |
| address | 36h | 545 GB | 260 GB |
Explaining *after drop* (from the [docs](https://nominatim.org/release-docs/3.3/admin/Import-and-Update/))
> About half of the data in Nominatim's database is not really used for serving the API. It is only there to allow the data to be updated from the latest changes from OSM. For many uses these dynamic updates are not really required. If you don't plan to apply updates, the dynamic part of the database can be safely dropped using the following command: `./utils/setup.php --drop`
I have not done this, so I don't have any experience with that. But it's probably a good idea if you don't need up-to-date data.
## Reverse Proxy
As with most of my projects, it runs on a server where the [mash-playbook](https://github.com/mother-of-all-self-hosting/mash-playbook) has deployed [Traefik](https://doc.traefik.io/traefik/) as *application proxy*. I'll therefore use Traefik labels to configure the reverse proxy, but the same could be achieved with Caddy or Nginx.
## Complete configuration
```yaml
services:
  nominatim:
    environment:
      - PBF_URL=https://cdn.hyteck.de/osm/germany-latest.osm.pbf
      - REPLICATION_URL=https://download.geofabrik.de/europe/germany-updates/
      - POSTGRES_SHARED_BUFFERS=1GB
      - POSTGRES_MAINTENANCE_WORK_MEM=1GB
      - POSTGRES_AUTOVACUUM_WORK_MEM=500MB
      - POSTGRES_EFFECTIVE_CACHE_SIZE=1GB
      - IMPORT_STYLE=admin
      - NOMINATIM_PASSWORD=VERYSECRET
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik"
      - "traefik.http.routers.nominatim.rule=Host(`geocoding.example.org`)"
      - "traefik.http.routers.nominatim.service=nominatim-service"
      - "traefik.http.routers.nominatim.entrypoints=web-secure"
      - "traefik.http.routers.nominatim.tls=true"
      - "traefik.http.routers.nominatim.tls.certResolver=default"
      - "traefik.http.services.nominatim-service.loadbalancer.server.port=8080"
    container_name: nominatim
    image: mediagis/nominatim:4.4
    restart: always
    networks:
      - traefik
    volumes:
      - nominatim-data:/var/lib/postgresql/14/main
      - nominatim-flatnode:/nominatim/flatnode
    shm_size: 1gb

volumes:
  nominatim-flatnode:
  nominatim-data:

networks:
  traefik:
    name: "traefik"
    external: true
```
## Importing
Now we are ready to go! Before you type `docker-compose up -d`, let me explain what it will do:
1. Start the database
2. Download the PBF file from the given URL
3. Import the PBF file into the database. Here you are most likely to run into errors because of resource constraints
4. Start the Nominatim server
If you are ready, let's go: `docker-compose up -d`. Monitor what Nominatim is doing with `docker logs -f nominatim` and make a cup of tea. This will take a while (probably several hours).
## Testing
You can test your server by visiting the domain. Try `/?q=CITYNAME` to see an actual search result.
Example: `https://geocoding.example.org/?q=tuebingen`
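The same check can be scripted. A small sketch using only the Python standard library (the base URL is a placeholder; `/search` with `format=jsonv2` is the endpoint that returns machine-readable results instead of the HTML page):

```python
import urllib.parse


def nominatim_search_url(base_url, query):
    # format=jsonv2 returns a JSON result list instead of the HTML search page
    params = urllib.parse.urlencode({"q": query, "format": "jsonv2"})
    return f"{base_url}/search?{params}"


url = nominatim_search_url("https://geocoding.example.org", "tuebingen")
# Once the server is up, fetching the results is a one-liner:
# import json, urllib.request; results = json.load(urllib.request.urlopen(url))
```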
# Result
You should now have a running Nominatim instance that you can use for geocoding 🎉. Initially I wanted to show in the same post how you'd use this server to power area search in Django, but that will be in part 2. Feel free to ping me with questions, preferably at [@moanos@gay-pirate-assassins.de](https://gay-pirate-assassins.de/@moanos)
Oh and one last thing:
## Legal requirements
Data from OpenStreetMap is licenced under the [Open Database License](https://opendatacommons.org/licenses/odbl/). The ODbL allows you to use the OSM data for any purpose you like but **attribution is required**. For showing map data, you'd usually display a small badge in the bottom left corner of the map. But geocoding also needs attribution, [as per this guideline](https://osmfoundation.org/wiki/Licence/Attribution_Guidelines#Geocoding_(search)).

@@ -0,0 +1,100 @@
---
title: "Improve OpenStreetMap data by using it"
date: 2025-06-28T14:05:10+02:00
draft: false
image: "post/improve-osm-by-using-it/improve-osm-by-using-it.png"
categories: ['English']
tags: ['django', 'OpenStreetMap', 'notfellchen', 'osm', 'open data', 'geojson']
---
## Introduction
In the last month I improved the mapping of about 100 German animal shelters - not only out of the goodness of my heart, but because it helped me.
Let me explain why: I develop [notfellchen.org](https://notfellchen.org/), where users can search for animals in animal shelters, specifically rats they might want to adopt.
The idea is to have a central website that allows you to search for rats in your area.
This is necessary because only a small percentage of animal shelters have rats. As a user, just checking your nearest
shelter doesn't work. Some users will stop after checking the second or third one and just buy from a pet shop (which is a very, very bad idea).
Now a central platform is nice for users, but it has one problem: how do I, as the operator of notfellchen, know where the rats are?
I need to **manually check every animal shelter in the country** and, if they have rats, ask them for permission to use
images of the rats on my site.
So what I need is a list of animal shelters in Germany, with their website, e-mail and phone number.
The source for all of this: you guessed it - OpenStreetMap 🥳
# Getting the data
Downloading all german animal shelters is surprisingly easy: You use [Overpass Turbo](https://overpass-turbo.eu/) and get a `.geojson` to download.
Here is the query I used:
```
[out:json][timeout:25];
// fetch area “Germany” to search in
{{geocodeArea:Germany}}->.searchArea;
// Check search area for all objects with animal shelter tag
nwr["amenity"="animal_shelter"](area.searchArea);
// print results
out geom;
```
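Overpass Turbo is great for one-off downloads, but the same query can also be run against the Overpass API directly. A sketch (the endpoint is the public `overpass-api.de` interpreter, and the Turbo-only `{{geocodeArea:Germany}}` shortcut is replaced by an explicit area lookup; the record fields are what notfellchen would care about, not its actual code):

```python
import json
import urllib.parse
import urllib.request

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Same query as above, with {{geocodeArea:Germany}} replaced
# by an explicit area filter for Germany.
QUERY = """
[out:json][timeout:60];
area["ISO3166-1"="DE"][admin_level=2]->.searchArea;
nwr["amenity"="animal_shelter"](area.searchArea);
out center tags;
"""

def fetch_shelters():
    """POST the query to the Overpass API and return the raw elements."""
    data = urllib.parse.urlencode({"data": QUERY}).encode()
    with urllib.request.urlopen(OVERPASS_URL, data=data, timeout=120) as resp:
        return json.load(resp)["elements"]

def to_record(element):
    """Flatten one Overpass element into the contact fields we need."""
    tags = element.get("tags", {})
    return {
        "osm_id": f"{element['type']}/{element['id']}",
        "name": tags.get("name"),
        "website": tags.get("website") or tags.get("contact:website"),
        "email": tags.get("email") or tags.get("contact:email"),
        "phone": tags.get("phone") or tags.get("contact:phone"),
    }
```

Note that `to_record` already has to check both the plain and the `contact:`-prefixed tag variants, because both are in common use.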
Now I just upload it to notfellchen.org and I'm done, right?
# Data Issues
Yeah well, this only *mostly* works. There were two main problems:
**Missing contact data** is annoying because I often want to quickly check an animal shelter's website.
More annoying was what I'd call **mapping errors**.
Most commonly, an animal shelter had multiple nodes/ways tagged as `amenity=animal_shelter`.
The highlight was the "Tierheim München", where about 10 buildings were tagged as `amenity=animal_shelter` and the contact
data was sitting on the building named "Katzenhaus" ("cat house").
As a result, "Tierheim München" appeared in my list 10 times, but 9 of the entries had no contact data at all.
# Correcting it
I could have corrected this only in the notfellchen database. It would have been faster, and I could even have automated parts of it.
But I didn't.
For each issue I found, I opened OpenStreetMap and added websites and phone numbers, or even re-mapped the area.
For "Tierheim München" I even [opened a thread in the forum](https://community.openstreetmap.org/t/mapping-of-multiple-related-buildings-animal-shelters/131801)
to discuss proper tagging.
That makes sense for me because I get one important thing out of it:
# What I get out of it: Updates
What if a new shelter is added later, or a shelter changes? I already profit a lot from the time people spend adding information, so why stop there?
My database stores the OSM ID, so I can regularly query the data again to get updates.
But that only works if I take an "upstream" approach: fix the data in OSM, then load it into notfellchen.
Otherwise, any change in my database would be overwritten by "old" OSM data.
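A minimal sketch of that "upstream first" refresh (the function and field names are hypothetical, not the actual notfellchen code): local records are keyed by their OSM ID, and any value present in fresh OSM data always wins over the stale local one.

```python
def refresh_from_osm(local_records, osm_elements):
    """Overwrite local contact data with fresh OSM data, matched by OSM ID.

    local_records: dict mapping "type/id" -> record dict
    osm_elements:  parsed elements from a fresh Overpass query
    """
    for element in osm_elements:
        osm_id = f"{element['type']}/{element['id']}"
        tags = element.get("tags", {})
        # New shelters appear automatically; known ones get updated in place.
        record = local_records.setdefault(osm_id, {"osm_id": osm_id})
        for local_key, tag in [("name", "name"), ("website", "website"),
                               ("email", "email"), ("phone", "phone")]:
            if tag in tags:  # upstream wins, but absent tags keep local values
                record[local_key] = tags[tag]
    return local_records
```

Local-only values survive a refresh; they are only replaced once the same field exists upstream, which is exactly why fixes belong in OSM first.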
# Result
In the last month, I made 86 changes to OSM, adding the following information:
| Type of information | Number of times added |
|---------------------|-----------------------|
| Website | 66 |
| Phone Numbers | 65 |
| Operator | 63 |
| E-Mail | 49 |
| Fax | 9 |
Yes, I sometimes even added fax numbers. It was easy enough to do, and maybe someone will use it.
# Looking forward
I'm of course not done. Only half of the rescues known to OSM in Germany are checked so far, so I'll continue that work.
After that, I'll start adding the shelters that are only in my database.
Currently, 33 animal shelters are known to notfellchen that are not known to OSM. This number will likely grow, maybe double.
A lot to do. And luckily, this work benefits both me and everyone using OSM. Happy mapping!


@@ -0,0 +1,107 @@
---
title: "Meine SciFi und Fantasy Empfehlungen - Schriftgelehrte gegen KI"
date: 2025-03-03T18:05:10+02:00
draft: false
image: "uploads/scifi-fantasy.png"
categories: ['Deutsch']
tags: ['ki', 'science-fiction', 'schriftgelehrte', 'progressive Fantastik', 'scifi', 'fantasy', 'solarpunk']
---
## Introduction
AI is everywhere. Large parts of the internet are currently being flooded with AI-generated content.
Some AI-generated articles even display fake dates (before 2022) to create the impression of not being AI-generated.
Finding reliable and authentic information has therefore become all the harder.
This is why **the age of the librarians** begins, because collecting and curating information is what they do. (I use "Schriftgelehrte\*r" here as a German translation of the English word "librarian".)
And so I, too, presume to call myself a librarian (if only for this) and keep trying to collect and share
information on this blog (ironically, in the knowledge that this blog is being scraped by
AI companies).
I'd like to start with the following:
## Sci-Fi and Fantasy
When I look for sci-fi and fantasy, I want to get away from the profit-optimized lists of online booksellers
and towards genuine recommendations. I get many such recommendations at my local bookshop; young booksellers in particular can often give great recommendations.
Unfortunately, those employees are rare, sci-fi and fantasy are often underrepresented in bookshops, and the shelves are far too often full of books by white men.
So, on to recommendations that try to do things differently!
*The list describes the books only briefly and should be spoiler-free.
All of the books feature queer characters. All links lead to a local bookshop or to the websites of the authors or the publisher.*
### Becky Chambers: "Der lange Weg zu einem kleinen zornigen Planeten"
This book is a truly beautifully written space opera with characters you take to heart.
There is plenty of everyday life aboard the small ship, and on the long way every stopover lets you dive into another world, be it a bustling market or a remote ice planet.
The second and third parts are set in the same universe, but are connected to the first story only through a few characters and shared themes.
All of Becky Chambers' other books are highly recommended as well; "A Psalm for the Wild-Built" is a short solarpunk utopia.
[Link to the book at the Frauenbuchladen Thalestris](https://frauenbuchladen.buchkatalog.de/der-lange-weg-zu-einem-kleinen-zornigen-planeten-9783596035687)
### Judith & Christian Vogt: Ace in Space
Ace in Space is a great story about a space-fighter pilot and a group of space punks who share their lives on social media.
The book leans towards cyberpunk: it's about taking on megacorporations, fast ships and bars in cramped quarters.
It also contains one of the best sex scenes I have ever had the pleasure of reading.
Judith often writes more dystopian stories than I usually read, but Laylayland and Wasteland are two fantastic books I wouldn't want to miss.
The books are feminist, queer and, wonderfully, also feature main characters with disabilities. Progressive speculative fiction of the highest class!
[Publisher's shop](https://amrun-verlag.de/produkt/aceinspace1/)
If you want a whole short story by "the Vogts" (Judith and Christian Vogt) as a sample, you'll find one at the end of this article as a PDF.
### T.J. Klune: Mr. Parnassus' Heim für magisch Begabte
A beautiful story about a civil servant who gets out of the city, and a house by the sea.
I won't give away more, but it is one of my favourite books.
[Link to the book at the Frauenbuchladen Thalestris](https://frauenbuchladen.buchkatalog.de/mr-parnassus-heim-fuer-magisch-begabte-9783453321366)
### Lena Richter: Dies ist mein letztes Lied
A gripping story in which the protagonist is torn from world to world by her music. What she experiences in the individual episodes is beautiful and heartbreaking.
I devoured the novella in one sitting and laughed and cried with the protagonist.
[Publisher's shop](https://www.ohneohren.com/shop/Lena-Richter-Dies-ist-mein-letztes-Lied-p520843015)
### Rebecca Thorne: "Can't Spell Treason Without Tea"
In a fantasy world, one of the queen's bodyguards runs away with a mage, and together they open a tea shop.
There is also a sequel, "A Pirate's Life for Tea", which I haven't read yet.
[Link to the book at the Frauenbuchladen Thalestris (German translation)](https://frauenbuchladen.buchkatalog.de/cant-spell-treason-without-tea-9783492706896)
### Travis Baldree: "Legends & Lattes"
An orc who spent long years travelling with a group of adventurers now tries to open a book-and-tea shop.
Very entertaining, unexpectedly peaceful, and it makes you incredibly keen to spend more time in cafés!
"Bookshops & Bonedust" is the prequel, which was written later and can comfortably be read after Legends & Lattes.
[Link to the book at the Frauenbuchladen Thalestris](https://frauenbuchladen.buchkatalog.de/magie-und-milchschaum-9783423263566)
### Collection: Sonnenseiten - Street Art trifft Solarpunk
22 authors have contributed stories that combine the two art forms of street art and solarpunk.
I can especially recommend "Uferlos" by Lena Richter, in which a floating city gives food for thought, and "Cloudart" by Dominik Windgätter, in which a "mask girl" draws artworks into the sky.
[Link to the book at the Frauenbuchladen Thalestris](https://frauenbuchladen.buchkatalog.de/sonnenseiten-9783756803972)
## Conclusion
I hope these recommendations, despite their brevity, make you want to read! Buy from your local bookshop and support small publishers!
*In case it wasn't obvious anyway: I receive no money for these recommendations; the links carry no tracking, no referral codes, nothing of the sort.*
### Short story FiatLux
The short story is by Judith and Christian Vogt and is licensed under [CC-BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/4.0/), so it may be shared with attribution, non-commercially and under the same conditions (how great is that?!).
{{< pdf FiatLuxVogt>}}

@@ -0,0 +1,106 @@
---
title: "I did something naughty: Circumventing Authorized-Fetch as implemented by GoToSocial"
date: 2024-12-11T06:10:10+02:00
draft: false
image: "uploads/fediproxy/fediproxy.png"
categories: ['English']
tags: ['gotosocial', 'fediverse', 'mastodon', 'authorized fetch', 'rss', 'FastAPI',]
---
Yes, the title is correct, but I had nothing malicious in mind!
## What this is about
For [@qzt@queereszentrumtuebingen.de](https://social.queereszentrumtuebingen.de/@qzt) we include the public feed [in a sidebar on the homepage](https://queereszentrumtuebingen.de/). Initially this was done using the standard API endpoint for fetching statuses, `/api/v1/accounts/{account_id}/statuses`, and it worked like a charm. The problems started when [GoToSocial](https://gotosocial.org/) (the fediverse server we use, similar to Mastodon) implemented authorized fetch. This is a good thing! Authorized fetch means that every call to an endpoint needs to be authorized by an `access_token`. You get an access token from a fedi account; it's what fediverse clients like Tusky or Phanpy do on your behalf to get the posts that make up your timeline.
Authorized fetch has major advantages:
* data scraping can only be done by other fedi accounts
* blocking cannot be circumvented by using the public API
and much more. Sadly, it also broke our website integration.
## Possible Solutions
So what now? I initially wanted to turn off authorized fetch for [@qzt@queereszentrumtuebingen.de](https://social.queereszentrumtuebingen.de/@qzt) by messing with the GoToSocial code and disabling it for the whole server. This would have been possible, as it is the only user on the server, and the GoToSocial devs helped me find where to do it. But it's not ideal and would force me to build a custom docker image for each update.
Next idea: the whole point of authorized fetch is that only fedi accounts (and apps they authorized) can access the API. So let's do that! Set up a new account, add an app and authorize it [as described in the GoToSocial documentation](https://docs.gotosocial.org/en/latest/api/authentication/). I used Bruno for that, which was much more comfortable for me than curl.
With that authorization code you can now get an access token for your app. Put that in the Javascript that loads posts and we are good, right? Sadly, no. It would totally work, but it would also allow anyone to read and post on behalf of the account. That invites malicious actors to use it for scraping or spamming.
So instead, we need a proxy that stores the access token securely and restricts the possible actions.
## The proxy
Such a proxy must
* offer an endpoint that provides the same data as the Fediverse API
* authorize itself to the Fediverse API via the `access_token`
* restrict access to read-only requests for consenting accounts
The last point is really important, as we don't want to allow others to use this endpoint to scrape data without authorization.
I wrote a short FastAPI server that offers this. It only implements one method:
```python
import os
import requests
from fastapi import FastAPI, HTTPException

app = FastAPI()
ACCESS_TOKEN = os.environ["ACCESS_TOKEN"]  # see the .env file below
EXTERNAL_API_BASE_URL = os.environ["EXTERNAL_API_BASE_URL"]
ALLOWED_ACCOUNTS = os.environ["ALLOWED_ACCOUNTS"].split(",")

@app.get("/api/v1/accounts/{account_id}/statuses")
async def fetch_data(account_id: str):
    if account_id not in ALLOWED_ACCOUNTS:
        raise HTTPException(status_code=401, detail="You can only use this proxy to access configured accounts")
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    response = requests.get(f"{EXTERNAL_API_BASE_URL}/api/v1/accounts/{account_id}/statuses", headers=headers)
    return response.json()
```
This is basically the whole API code; I only trimmed a few checks and some error handling.
## Deployment
To deploy, I put it in a docker container and started it via docker-compose. Reverse proxying is handled by Traefik; I won't go into detail here.
```
services:
fediproxy.example.org:
image: docker.io/moanos/fediproxy
container_name: "fediproxy.example.org"
restart: unless-stopped
environment:
EXTERNAL_API_BASE_URL: ${EXTERNAL_API_BASE_URL}
ACCESS_TOKEN: ${ACCESS_TOKEN}
ALLOWED_ACCOUNTS: ${ALLOWED_ACCOUNTS}
labels:
- "traefik.enable=true"
- "traefik.docker.network=traefik"
- "traefik.http.routers.fediproxy.rule=Host(`fediproxy.example.org`)"
- "traefik.http.routers.fediproxy.service=fediproxy-service"
- "traefik.http.routers.fediproxy.entrypoints=web-secure"
- "traefik.http.routers.fediproxy.tls=true"
- "traefik.http.routers.fediproxy.tls.certResolver=default"
- "traefik.http.services.fediproxy-service.loadbalancer.server.port=8000"
networks:
- traefik
networks:
traefik:
name: "traefik"
external: true
```
I added a short `.env` to configure:
```
ACCESS_TOKEN=VERYSECRETTOKENTHATISDEFINETLYREAL
EXTERNAL_API_BASE_URL=https://gay-pirate-assassins.de
ALLOWED_ACCOUNTS=ZGGZF4G8NNOTREAL81Z8G7RTC
```
## Results
Now I can again use something like [the WordPress plugin Include Mastodon Feed](https://wordpress.org/plugins/include-mastodon-feed/#installation) just by pointing it to the proxy: `[include-mastodon-feed instance="fediproxy.example.org" account="ZGGZF4G8NNOTREAL81Z8G7RTC"]`
Hope you enjoyed the read. The source code for the proxy can be found here: https://git.hyteck.de/moanos/FediProxy
If you want to play around a bit, you can use https://git.hyteck.de/moanos/include-fedi
The sloth logo of GTS is by [Anna Abramek](https://abramek.art/), [Creative Commons BY-SA license](http://creativecommons.org/licenses/by-sa/4.0/).


@@ -0,0 +1,105 @@
name: twenty
services:
server:
image: twentycrm/twenty:${TAG:-latest}
volumes:
- type: bind
source: ./server_local_data
target: /app/packages/twenty-server/.local-storage
ports:
- "3000:3000"
environment:
NODE_PORT: 3000
PG_DATABASE_URL: postgres://${PG_DATABASE_USER:-postgres}:${PG_DATABASE_PASSWORD:-postgres}@${PG_DATABASE_HOST:-db}:${PG_DATABASE_PORT:-5432}/default
SERVER_URL: ${SERVER_URL}
REDIS_URL: ${REDIS_URL:-redis://redis:6379}
DISABLE_DB_MIGRATIONS: ${DISABLE_DB_MIGRATIONS}
DISABLE_CRON_JOBS_REGISTRATION: ${DISABLE_CRON_JOBS_REGISTRATION}
STORAGE_TYPE: ${STORAGE_TYPE}
STORAGE_S3_REGION: ${STORAGE_S3_REGION}
STORAGE_S3_NAME: ${STORAGE_S3_NAME}
STORAGE_S3_ENDPOINT: ${STORAGE_S3_ENDPOINT}
APP_SECRET: ${APP_SECRET:-replace_me_with_a_random_string}
labels:
- "traefik.http.middlewares.twenty-add-response-headers.headers.customresponseheaders.Strict-Transport-Security=max-age=31536000; includeSubDomains"
- "traefik.http.middlewares.twenty-add-response-headers.headers.customresponseheaders.Access-Control-Allow-Origin=*"
- "traefik.enable=true"
- "traefik.docker.network=traefik"
- "traefik.http.routers.twenty.rule=Host(`twenty.hyteck.de`)"
- "traefik.http.routers.twenty.middlewares=twenty-add-response-headers"
- "traefik.http.routers.twenty.service=twenty-service"
- "traefik.http.routers.twenty.entrypoints=web-secure"
- "traefik.http.routers.twenty.tls=true"
- "traefik.http.routers.twenty.tls.certResolver=default"
- "traefik.http.services.twenty-service.loadbalancer.server.port=3000"
depends_on:
db:
condition: service_healthy
healthcheck:
test: curl --fail http://localhost:3000/healthz
interval: 5s
timeout: 5s
retries: 20
restart: always
networks:
- traefik
- default
worker:
image: twentycrm/twenty:${TAG:-latest}
volumes:
- type: bind
source: ./server_local_data
target: /app/packages/twenty-server/.local-storage
command: [ "yarn", "worker:prod" ]
environment:
PG_DATABASE_URL: postgres://${PG_DATABASE_USER:-postgres}:${PG_DATABASE_PASSWORD:-postgres}@${PG_DATABASE_HOST:-db}:${PG_DATABASE_PORT:-5432}/default
SERVER_URL: ${SERVER_URL}
REDIS_URL: ${REDIS_URL:-redis://redis:6379}
DISABLE_DB_MIGRATIONS: "true" # it already runs on the server
DISABLE_CRON_JOBS_REGISTRATION: "true" # it already runs on the server
STORAGE_TYPE: ${STORAGE_TYPE}
STORAGE_S3_REGION: ${STORAGE_S3_REGION}
STORAGE_S3_NAME: ${STORAGE_S3_NAME}
STORAGE_S3_ENDPOINT: ${STORAGE_S3_ENDPOINT}
APP_SECRET: ${APP_SECRET:-replace_me_with_a_random_string}
depends_on:
db:
condition: service_healthy
server:
condition: service_healthy
restart: always
networks:
- default
db:
image: postgres:16
volumes:
- type: bind
source: ./db_data
target: /var/lib/postgresql/data
environment:
POSTGRES_USER: ${PG_DATABASE_USER:-postgres}
POSTGRES_PASSWORD: ${PG_DATABASE_PASSWORD:-postgres}
healthcheck:
test: pg_isready -U ${PG_DATABASE_USER:-postgres} -h localhost -d postgres
interval: 5s
timeout: 5s
retries: 10
restart: always
redis:
image: redis
restart: always
command: [ "--maxmemory-policy", "noeviction" ]
networks:
traefik:
name: "traefik"
external: true


@@ -0,0 +1,19 @@
TAG=latest
#PG_DATABASE_USER=postgres
# Use openssl rand -base64 32
PG_DATABASE_PASSWORD=
#PG_DATABASE_HOST=db
#PG_DATABASE_PORT=5432
#REDIS_URL=redis://redis:6379
SERVER_URL=https://twenty.hyteck.de
# Use openssl rand -base64 32
APP_SECRET=
STORAGE_TYPE=local
# STORAGE_S3_REGION=eu-west3
# STORAGE_S3_NAME=my-bucket
# STORAGE_S3_ENDPOINT=


@@ -0,0 +1,169 @@
---
title: "Trying Twenty: How does an Open Source CRM work?"
date: 2025-08-03T06:10:10+02:00
lastmod: 2025-08-03T12:10:10+02:00
draft: false
image: "uploads/twenty.png"
categories: ['English']
tags: ['crm', 'twenty', 'salesforce', 'django', 'self-hosting']
---
As some of you might know, I spend my day working with Salesforce, a very, very feature-rich CRM that you pay big money to use.
Salesforce is the opposite of open source, and its many features are expensive. Salesforce's business model is based on this and on the lock-in effect:
if your company has invested in implementing Salesforce, it will likely pay a lot to keep it.
So what does an alternative look like? Let's have a look at [Twenty](https://twenty.com), an open source CRM that recently reached the magic 1.0 version.
# Getting started
There are two options for getting started: register at [app.twenty.com](https://app.twenty.com) and start right away on the devs' instance, or self-host Twenty on your own server.
I did the latter, so let's discuss how that went. The basic steps I took were
* point twenty.hyteck.de to a server
* Install traefik on the server (I cheated, traefik was already installed)
* Deploy [this docker-compose.yml](docker-compose.yml) with [this env file](env)
Then visit the domain and set up the first user.
# Features
Twenty offers an initial datamodel that should be familiar from other CRMs. The standard objects are
![A screenshot of the person model in Twenty](person-model.png)
* **Persons** An individual person. You can attach notes, e-mails, etc.
* **Companies** The same for organizations. Organization websites must be unique
* **Opportunities** The classic opportunity with customizable stages
* **Notes** They can be attached to any of the objects above
* **Tasks** Items to work on
* **Workflows** Automations similar to Salesforce flows. E.g. you can create a task every time an Opportunity is created.
The basic datamodel can be extended in the GUI. Here is what my "Company" model looks like:
![A screenshot of twenty. It shows the company model being renamed to Organizations and deactivated fields such as Twitter links or number of employees.](organization_dm.png)
You can add any of the following fields to an object.
![A list of fields: Text, Number, True/False, Date and Time, Date, Select, Multi-Select, Rating, Currency, E-Mails, Links, Phones, Full Name, Address, Relation and the Advanced fields called Unique ID, JSON and Array](fields.png)
### Workflows
Workflows are Twenty's way of letting users build automations. You can start a workflow when a record is created,
updated or deleted. In addition, workflows can be started manually, on a schedule, or via webhook (yeah!).
![A workflow in twenty. After the Trigger "Organization" created there is a new task generated, a webhook send and a form used.](workflow1.png)
You can then add nodes that trigger actions. Available right now are
* **Creating, updating or deleting a record**
* **Searching records**
* **Sending E-Mails** This is the only option to trigger e-mails so far
* **Code** Serverless Javascript functions
* **Form** The form will pop up on the user's screen when the workflow is launched from a manual trigger. For other types of triggers, it will be displayed in the Workflow run record page.
* **HTTP request** Although possible via Code, this is a handy shortcut to trigger HTTP requests
What is currently completely missing are foreach loops and [conditions](https://github.com/twentyhq/core-team-issues/issues/1265). I cannot say "if the Opportunity stage is updated to X, do Y; else, do Z".
Without these, Workflows are really limited in their power.
What already seems quite mature, though, is the Code option. It lets you put in arbitrary code and output a result.
![Screenshot of a javascript function in Twenty that adds two numbers together](serverless_function.png)
I did not try a lot, but I assume most basic Javascript works. I successfully built an HTTP request that sent data to a server.
If what you're doing is straightforward enough not to need loops and conditions, or if you are okay with doing all of them in the Code node, you can do basically anything.
## API
Twenty offers an extensive API that lets you do basically everything. It's well documented and easy to use.
Here is an example of me syncing rescue organizations from [notfellchen.org](https://notfellchen.org) to Twenty.
```python
import requests

from fellchensammlung.models import RescueOrganization


def sync_rescue_org_to_twenty(rescue_org: RescueOrganization, base_url: str, token: str):
    # Update the existing company if we already stored its Twenty ID,
    # otherwise create a new one.
    update = bool(rescue_org.twenty_id)
    payload = {
        "eMails": {
            "primaryEmail": rescue_org.email,
            "additionalEmails": None,
        },
        "domainName": {
            "primaryLinkLabel": rescue_org.website,
            "primaryLinkUrl": rescue_org.website,
            "additionalLinks": [],
        },
        "name": rescue_org.name,
    }
    if rescue_org.location:
        payload["address"] = {
            "addressStreet1": f"{rescue_org.location.street} {rescue_org.location.housenumber}",
            "addressCity": rescue_org.location.city,
            "addressPostcode": rescue_org.location.postcode,
            "addressCountry": rescue_org.location.countrycode,
            "addressLat": rescue_org.location.latitude,
            "addressLng": rescue_org.location.longitude,
        }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    }
    if update:
        url = f"{base_url}/rest/companies/{rescue_org.twenty_id}"
        response = requests.patch(url, json=payload, headers=headers)
        assert response.status_code == 200
    else:
        url = f"{base_url}/rest/companies"
        response = requests.post(url, json=payload, headers=headers)
        assert response.status_code == 201
        rescue_org.twenty_id = response.json()["data"]["createCompany"]["id"]
        rescue_org.save()
```
# The Company, Business Model and Paid Features
The company behind Twenty is called "Twenty.com PBC" and seems to consist mostly of former Airbnb employees in Paris.
The company is probably backed by venture capital.
The current business model is to charge for using the company's hosted instance of Twenty. It starts at 9\$/user/month without
enterprise features; SSO and support will cost you 19\$/user/month.
Self-hosting is free, but SSO is locked behind an enterprise badge with seemingly no way to pay for activating it.
I suspect that in the future more features will become "Enterprise only" even when self-hosting. All contributors must agree
to [a Contributor License Agreement (CLA)](https://github.com/twentyhq/twenty/blob/main/.github/CLA.md), so I
believe they could change the license in the future, including switching away from Open Source.
# Conclusion
Twenty is a really promising start on building a good CRM. The ease of customizing the data model, the usable API,
and a solid beginning to Workflows already let users get a lot of value from it.
Workflows need more work to become as powerful as they should be, and the E-Mail integration needs to get better.
Stating the obvious: this is not something that could ever replace Salesforce. But it doesn't have to!
There are many organizations that would benefit a lot from a CRM like Twenty; they simply don't need, can't handle, or
don't want to pay for all the features other CRMs offer.
If Twenty continues to focus on small to medium companies and the right mix of standard features vs. custom development options, I see a bright future for it.
There are the usual problems of VC-backed OSS development; we shall see how it goes for them.
# Addendum: Important Features
Here is a short list of features I missed and their place on the roadmap, if they have one:
* **Compose & Send E-Mails** Planned [Q4 2025](https://github.com/orgs/twentyhq/projects/1?pane=issue&itemId=106097937&issue=twentyhq%7Ccore-team-issues%7C811)
* **Foreach loops in Workflows** [Q3 2025](https://github.com/orgs/twentyhq/projects/1/views/33?pane=issue&itemId=93150024&issue=twentyhq%7Ccore-team-issues%7C21)
* **Conditions in Flows** [Q4 2025](https://github.com/orgs/twentyhq/projects/1/views/33?pane=issue&itemId=121287765&issue=twentyhq%7Ccore-team-issues%7C1265)


@@ -1,5 +1,4 @@
<div>
<object data="/uploads/{{ index .Params 0}}.pdf" type="application/pdf" width="100%" height="500px">
<p><a href="/uploads/{{ index .Params 0}}.pdf">Download the PDF!</a></p>
<object class="fitvidsignore" data="/uploads/{{ index .Params 0}}.pdf" type="application/pdf" width="100%" height="500px">
<p><a href="/uploads/{{ index .Params 0}}.pdf">Download the PDF!</a></p>
</object>


@@ -1,14 +0,0 @@
<?php
//database settings
define ("DB_USER", "moanos");
define ("DB_HOST", "localhost");
define ("DB_PW", "dwDs5k4PMQ1a7tK51OjK");
define ("DB_DATABASE", "moanos_gartensia");
//database tables:
define ("TABLE_USER", "user");
define("MODULE_PATH", $_SERVER['DOCUMENT_ROOT']);
?>


@@ -1,45 +0,0 @@
<?php
require_once(__dir__."/config.inc.php");
$aData[TABLE_USER] = array(
'user_ID' => array(
'type' => 'INT',
'size' => 11,
'unique' => 'TRUE',
'standard' => 'NOT NULL',
'extra' => 'AUTO_INCREMENT PRIMARY KEY'
),
'name' => array(
'type' => 'VARCHAR',
'size' => 255,
'standard' => 'NOT NULL'
),
'email' => array(
'type' => 'VARCHAR',
'size' => 255,
'standard' => 'NOT NULL'
),
'signalmessenger' => array(
'type' => 'VARCHAR',
'size' => 255,
'standard' => 'NOT NULL'
),
'sms' => array(
'type' => 'VARCHAR',
'size' => 255,
'standard' => 'NOT NULL'
),
'telegram' => array(
'type' => 'VARCHAR',
'size' => 255,
'standard' => 'NOT NULL'
),
'threema' => array(
'type' => 'VARCHAR',
'size' => 255,
'standard' => 'NOT NULL'
)
);


@@ -1,277 +0,0 @@
<?php
ini_set('display_errors', 0);
ini_set('display_startup_errors', 0);
error_reporting(E_ALL);
class Data{
function __construct(){
$this->link_database();
$this->em_check_database();
$this->read_variables();
date_default_timezone_set('Europe/Berlin');
}
function read_variables() {
//reads all GET and POST variables into the object, addslashing both
if (count($_POST)) {
foreach ($_POST as $key => $val){
$key=addslashes("r_".$key);
if (is_array($val)) {
for ($z=0;$z<count($val);$z++) {
$val[$z]=addslashes($val[$z]);
}
}
else {
$val=addslashes($val);
}
$this->$key=$val;
}
}
if (count($_GET)) {
foreach ($_GET as $key => $val){
$key=addslashes("r_".$key);
if (is_array($val)) {
for ($z=0;$z<count($val);$z++) {
$val[$z]=addslashes($val[$z]);
}
}
else {
$val=addslashes($val);
}
$this->$key=$val;
}
}
}//end of function read variables
function link_database() {
$this->databaselink = new mysqli(DB_HOST,DB_USER,DB_PW,DB_DATABASE);
$this->databaselink->set_charset('utf8');
if ($this->databaselink->connect_errno) {
return "Datenbank nicht erreichbar: (" . $this->databaselink->connect_errno . ") " . $this->databaselink->connect_error;
}
else{
$this->databasename=DB_DATABASE;
$this->databaselink->query("SET SQL_MODE = '';");
return True;
}
}
function em_check_database() {
/*
params:
None
returns:
None
This function compares the database structure to a predefined structure which is saved in db_array_config.php
and adds missing structures. Makes installation+updates easy
*/
$aTable=array();
//Alle Tabellen in Array lesen, inklusive aller Eigenschaften
$result=$this->databaselink->query("show tables from ".DB_DATABASE);
while($row = $result->fetch_array(MYSQLI_BOTH)){
$aTable[]=$row[0];
}
$aData=array();
$database_structure_path = __DIR__."/config/db_array.inc.php";
include($database_structure_path);
foreach($aData as $table=>$fields){
if(!in_array($table,$aTable)) {
//Add table to database
$mCounter=0;
$sCommand="CREATE TABLE IF NOT EXISTS `".$table."` (";
foreach($fields as $fieldname=>$properties){
$extra = "";
if($mCounter==0) {
$key="KEY `".$fieldname."` (`".$fieldname."`)";
}
if($properties["size"]!="") {
$size="(".$properties["size"].")";
}
else {
$size="";
}
if((isset($properties["unique"])) and ($properties['unique']==true)) {
$unique="UNIQUE KEY `".$fieldname."_2` (`".$fieldname."`),";}
else {
$unique="";
}
if((isset($properties["extra"])) and ($properties['extra'] != "")){
$extra = $properties['extra'];
}
$sCommand .= "`".$fieldname."` ".$properties["type"].$size." ".$properties["standard"]." ".$extra.",";
$mCounter++;
}
$sCommand.=$unique.$key.") ENGINE=InnoDB ;";
$this->last_query[]=$sCommand;
$updateresult=$this->databaselink->query($sCommand);
}
else {
//Felder checken und Tabelle updaten
$resultField=$this->databaselink->query("show fields from ".DB_DATABASE.".".$table);
while($aRowF = $resultField->fetch_array(MYSQLI_BOTH)){
$aTableFields[]=$aRowF[0];
}
foreach($fields as $fieldname=>$properties) {
if(!in_array($fieldname,$aTableFields)) {
if((isset($properties["size"]) and ($properties['size']!=""))) {
$size="(".$properties["size"].")";
}
else {
$size="";
}
$sCommand="ALTER TABLE `".$table."` ADD `".$fieldname."` ".$properties["type"].$size." ".$properties["standard"];
$this->last_query[]=$sCommand;
$updateresult=$this->databaselink->query($sCommand);
}
}
}
unset($aTableFields);
unset($aFields);
unset($properties);
}
unset($aData);
}
function store_data($sTable,$aFields,$sKey_ID,$mID) {
//updates or inserts data
//returns ID or -1 if fails
$i=0; $returnID = 0;
if(($mID>0) or ($mID!="") or ($mID != null)) {
//search for it
$aCheckFields=array($sKey_ID=>$mID);
$aRow=$this->select_row($sTable,$aCheckFields);
$returnID=$aRow[$sKey_ID];
}
if(($returnID>0) or ($returnID!="")) {
$sQuery="update ".$sTable." set ";
foreach($aFields as $key=>$value) {
$sQuery.=$key."='".$value."'";
$i++;
if($i<count($aFields)) {
$sQuery.=",";
}
}
$sQuery.=" where ".$sKey_ID."='".$mID."'";
$mDataset_ID=$returnID;
}
else {
$sKeys = ""; $sValues = "";
$sQuery="insert into ".$sTable." (";
foreach($aFields as $sKey=>$value) {
$sKeys.=$sKey;
$sValues.="'".$value."'";
$i++;
if($i<count($aFields)) {
$sKeys.=",";
$sValues.=",";
}
}
$sQuery.=$sKeys.") values (".$sValues.")";
}
$this->last_query[]=$sQuery;
if ($pResult = $this->databaselink->query($sQuery)) {
if(($returnID>0) or ($returnID!="")) {
return $returnID;
}
else {
return $this->databaselink->insert_id;
}
}
else {
return -1;
}
}
function save_user($aUser){
/*
args:
Array $aUser
Array of user information which will be saved.
e.g. array(
'forename' => String $forname,
'surname' => String $surname,
'email' => String $email,
'UID' => String $UID,
'language' => String $language,
'admin' => Bool $admin,
'password' => String md5(str_rev($password)), #deprecated, do not use!
'password_hash' => password_hash(String $password, PASSWORD_DEFAULT)
);
returns:
None
Function will save user Information given in $aUser. If user exists it will
overwrite existing data but not delete not-specified data
*/
$aFields = $aUser;
if ((isset($this->r_user_ID))and ($this->r_user_ID != "")){
$this->ID=$this->store_data(TABLE_USER, $aFields, 'user_ID' , $this->r_user_ID);
}
else{
$this->ID=$this->store_data(TABLE_USER, $aFields, NULL , NULL);
}
}
function get_view($Datei) {
ob_start(); //startet Buffer
include($Datei);
$output=ob_get_contents(); //Buffer wird geschrieben
ob_end_clean(); //Buffer wird gelöscht
return $output;
}
}
//end of class
session_start();
include ("config/config.inc.php");
$oObject = new Data;
$oObject->output = "";
switch ($oObject->r_ac){
case 'user_save':
$aUser = array();
if(isset($oObject->r_user_ID)){
$aUser['user_ID'] = $oObject->r_user_ID;
}
if(isset($oObject->r_name)){
$aUser['name'] = $oObject->r_name;
}
if(isset($oObject->r_email)){
$aUser['email'] = $oObject->r_email;
}
if(isset($oObject->r_signalmessenger)){
$aUser['signalmessenger'] = $oObject->r_signalmessenger;
}
if(isset($oObject->r_sms)){
$aUser['sms'] = $oObject->r_sms;
}
if(isset($oObject->r_telegram)){
$aUser['telegram'] = $oObject->r_telegram;
}
if(isset($oObject->r_threema)){
$aUser['threema'] = $oObject->r_threema;
}
$oObject->save_user($aUser);
$oObject->output .= "Erfolgreich gespeichert";
break;
default:
$oObject->output = $oObject->get_view("views/user_form.php");
break;
}
function output($oObject){
echo $oObject->get_view("views/head.php");
echo $oObject->get_view("views/body.php");
}
output($oObject);
?>


@@ -1,13 +0,0 @@
<body>
<?php
if ((isset($this->error)) and ($this->error != "")){
echo "<div id=error>";
echo $this->error;
echo "</div>";
}
echo "<div id=content>";
echo $this->output;
echo "</div>";
?>


@@ -1,15 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="author" content="Sam">
<meta http-equiv="pragma" content="no-cache">
<meta http-equiv="cache-control" content="no-cache">
<link rel="SHORTCUT ICON" type="image/x-icon" href="images/favicon.ico">
<?php
echo ' <link REL="stylesheet" TYPE="text/css" HREF="css/styles.css">
<title>Address collection</title>
';
?>
</head>


@@ -1,17 +0,0 @@
<?php
$form = '<form action="'.htmlspecialchars($_SERVER["PHP_SELF"]).'" method="post">';
$form .='
<input type = hidden name="ac" value = "user_save">
<input type = hidden name="user_ID" value = "">';
$form .= 'Name: <input type="text" name="name" value=""><br>';
$form .= 'E-Mail: <input type="text" name="email" value=""><br>';
$form .= 'Signal: <input type="text" name="signalmessenger" value=""><br>';
$form .= 'SMS: <input type="text" name="sms" value=""><br>';
$form .= 'Telegram: <input type="text" name="telegram" value=""><br>';
$form .= 'Threema: <input type="text" name="threema" value=""><br>';
$form .= '
<input type="submit" value="Send">
<input type="reset" value="Reset">
</form>';
echo $form;
?>