Compare commits

1 commit

Renovate Bot · 95258d5de4 · Add renovate.json · 2022-04-02 13:00:59 +00:00
ci/woodpecker/push/woodpecker: Pipeline was successful

83 changed files with 159 additions and 2847 deletions

.gitmodules (vendored) 100644 · 3 changes

@@ -0,0 +1,3 @@
[submodule "themes/etch"]
path = themes/etch
url = https://github.com/LukasJoswiak/etch.git

@@ -1,9 +1,9 @@
platform: 'linux/amd64'
branches: 'main'
branch: 'main'
pipeline:
release:
image: 'hugomods/hugo:latest'
image: 'klakegg/hugo:alpine'
commands:
- hugo
- 'cd public && tar czvf ../public.tar.gz *'

Dockerfile 100644 · 24 changes

@@ -0,0 +1,24 @@
FROM alpine:3.15.3 AS builder
RUN apk update && \
apk add --no-cache \
hugo
WORKDIR /app
COPY . ./
# Build the site
RUN hugo
FROM nginx:1.21.6-alpine
ENV MATRIX_SERVER=matrix.rustybever.be:443 \
MATRIX_CLIENT_SERVER=https://matrix.rustybever.be
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY nginx/*.conf.template /etc/nginx/templates/
COPY --from=builder /app/public /usr/share/nginx/html

@@ -1,6 +1,6 @@
baseURL = "https://rustybever.be"
title = "The Rusty Bever"
theme = "rb"
theme = "etch"
languageCode = "en-US"
enableInlineShortcodes = true
pygmentsCodeFences = true
@@ -8,23 +8,16 @@ pygmentsUseClasses = true
[params]
description = "The Rusty Bever"
copyright = "Copyright © 2024 Jef Roosens"
copyright = "Copyright © 2022 Jef Roosens"
dark = "auto"
highlight = true
[menu]
[[menu.main]]
identifier = "blog"
name = "blog"
title = "blog"
url = "/blog/"
weight = 1
[[menu.main]]
identifier = "projects"
name = "projects"
title = "projects"
url = "/dev/"
identifier = "posts"
name = "posts"
title = "posts"
url = "/"
weight = 10
[[menu.main]]
@@ -32,14 +25,11 @@ pygmentsUseClasses = true
name = "about"
title = "about"
url = "/about/"
weight = 40
weight = 20
# [permalinks]
# blog = "/b/:filename/"
[permalinks]
posts = "/:title/"
[markup.goldmark.renderer]
# Allows HTML in Markdown
unsafe = true
[services.rss]
limit = 15

@@ -1,14 +1,5 @@
---
title: "Home"
---
Welcome! I'm Jef, a Belgian CS student looking for his place on the internet.
I develop most of my projects on my personal [Gitea](/gitea) instance, but I'm
also present on <a href="/github" rel="me">GitHub</a>, [GitLab](/gitlab) &
[Codeberg](/codeberg). You can contact me on [Matrix](/matrix).
Besides that, I love music, hanging out with friends, and whisky! If you're
interested, I have a more in-depth [about](/about) page :) I also maintain a
few packages on the [AUR](/aur).
Welcome to my site! You can learn more about who I am [here](/about), or have a
look at my posts below :)

Binary file not shown (before: 209 KiB).

Binary file not shown (before: 1.3 MiB).

@@ -20,9 +20,14 @@ Trackmania Esports scene!
Music has been an important part of my life for many years. I'm one of those
people that listens to music multiple hours a day, during studying, cooking,
anything really. It really calms the chaos in my head, allowing me to think
more clearly. It's also a great way to improve my mood, or to help me process
my current thoughts.
more clearly. It's also a great way to improve my mood, or just to help me
process my current thoughts.
{{< figure src="./gentse-feesten-alpaca.jpg" title="Me with a broken wrist" >}}
Some interesting links:
{{< figure src="./beaver-computer.jpg" title="Artistic depiction, courtesy of a good friend ;p" >}}
* My personal projects can be found on
[my Gitea instance](https://git.rustybever.be/Chewing_Bever).
* GitHub: [ChewingBever](https://github.com/ChewingBever)
* GitLab: [Chewing_Bever](https://gitlab.com/Chewing_Bever)
* Codeberg: [Chewing_Bever](https://codeberg.org/Chewing_Bever) (Rarely used)
* Matrix: [@jef:rustybever.be](https://matrix.to/#/@jef:rustybever.be)

@@ -1,6 +0,0 @@
---
title: "Blog"
---
Sometimes I want to talk about things that aren't related to any of my
projects. These posts end up here!

@@ -1,71 +0,0 @@
---
title: "Audio Setup"
date: 2022-08-16
---
Over the years, I've invested in a (in my opinion) pretty good audio setup that
I use very frequently (especially during the winter). I'm quite fond of it, so
I'd like to show it off! And perhaps, someone out there might find the
information interesting :)
# Hardware
The hardware consists of four main parts. I use a [Focusrite Scarlett
2i2](https://focusrite.com/en/usb-audio-interface/scarlett/scarlett-2i2) both
as a USB-connected DAC and an interface for my microphone. I'm quite satisfied
with the 2i2; it has served me well for a few years now. The fact that it can
act both as a DAC and a mic input allows me to declutter my desk a bit.
Otherwise, I'd have to buy a separate DAC and microphone receiver. Perhaps
someday I'll find a use for the second microphone input ;p Before I had the
2i2, I used a USB microphone and an AUX adapter that connected directly to my
headphone amp, so this really was a big upgrade for me.
The 2i2's line outputs connect to a [Schiit Magni
Heresy](https://www.schiit.com/products/magni-1); the headphone amplifier.
Besides the glorious name, this product really is amazing. It packs a serious
punch for its size and price and I find it's a great match for my headphones.
The only issue I've had with it is that it can crackle when you change the
volume, but luckily that goes away again once you stop moving the knob.
Personally, I don't think it's much of an issue, but I figured I should
mention it anyway.
Now for the stars of the show: the headphones. Perhaps I went a little
overboard with this one, but I don't regret it at all. My main drivers are the
[Sennheiser HD 660S](https://www.sennheiser-hearing.com/en-US/p/hd-660s/). They
sound absolutely amazing paired with my Magni head amp. I'm by no means an
expert on audio, but I can definitely tell a major difference between these
headphones and any other pair we have in the house. The audio is incredibly
clear; I really can't get enough of it.
Lastly, the microphone. During the pandemic, I bought myself a Devine BM-600
XLR mic. The quality's quite good, especially for the price. Sadly I don't get
to use it much anymore, as I rarely call people on the computer these days.
Before this however, I used the Devine M-Mic USB microphone. I mention this
because it's probably one of the best budget microphones out there. Last I
checked, it only cost around 30 euros and the voice quality is insanely good
for the price.
# Software
While I get the appeal of LPs and CDs, they're just not very practical with the
amount of music that I listen to every day. That's why I have a
[Tidal](https://tidal.com/) HiFi subscription. Due to me being a student, I can
get a 50% discount, meaning I only pay 5 euros per month for high-quality audio
streaming. I highly recommend Tidal; it has served me well for months and will
continue to do so.
Due to me using Linux, I sadly can't use a native Tidal client, so I've had to
resort to using
[Mastermindzh/tidal-hifi](https://github.com/Mastermindzh/tidal-hifi),
specifically installed using the
[tidal-hifi-git](https://aur.archlinux.org/packages?O=0&K=tidal-hifi-git) AUR
package. It runs the Tidal web player in an Electron session, along with some
Wine magic to support HiFi playback. It even has a taskbar icon and
everything, so it works about as well as any native client.
Right, I have to finish this post. The goal was to describe my audio setup, and
I think I accomplished that. I'm definitely not an expert on these things, so
I'm not going to go more in depth than needed. All I am is an audiophile that
cares a little too much about the quality of his music ;p
Cheers

@@ -1,142 +0,0 @@
---
title: "My C Project Setup"
date: 2024-03-28
---
For the last couple of months most of my projects have revolved around
low-level C programming, with the most prominent one being
[Lander](https://git.rustybever.be/Chewing_Bever/lander), my URL shortener.
During this time I've developed a method for structuring my repositories in a
way that works for me and my development style. In this post, I'll be detailing
my approach!
If you prefer looking at the structure directly, the basic structure's
available as [a template](https://git.rustybever.be/Chewing_Bever/c-template)
on my Gitea.
## Basic structure
The basic structure for my repositories looks like this:
```
.
├── example
├── include
│   └── project_name
├── src
│   ├── _include
│   │   └── project_name
│   └── project_name
└── test
```
Let's break it down.
Naturally, `src` contains the actual source files, both those native to the
project and those included from thirdparty libraries. `src/project_name`
contains all source files native to the project, while thirdparty files are
stored in their own subdirectories separated by library, or directly in `src`.
For header files, we have two relevant directories. `include/project_name`
contains all header files that are part of the public API for the library.
`src/_include` on the other hand contains header files that are only used
internally by the project. Here we once again have the same split where
`src/_include/project_name` contains internal header files native to the
project, while thirdparty header files can be placed either directly in
`src/_include` or in their own subdirectories.
Finally we have `test` and `example`. `test` contains unit tests, while
`example` contains source files that illustrate how to use the library in a
practical context.
This setup seems to be fairly standard, and it works perfectly for me. To power
a C project, we of course need some form of build system, so let's talk about
*the Makefile*.
## The Makefile
During my years of creating personal projects I started leaning more towards a
lightweight development style. For a while I was a big fan of CMake, but for my
projects it's way too complex. As a replacement, I opted for a hand-written
Makefile. While I'm not going to go into detail on the specifics of the
[Makefile](https://git.rustybever.be/Chewing_Bever/c-template), I will mention
its most predominant features.
First and foremost it supports compiling all required files and linking them
into either a static library or a native binary, depending on the project. It
allows all source files to include any header file from both `include` and
`src/_include`. Unit tests and example binaries are compiled separately and
linked with the static library. Unit tests are allowed to include any internal
header file for more precise testing where needed, whereas example binaries
only get access to the public API.
The Makefile properly utilizes the `CC`, `CFLAGS` and `LDFLAGS` variables,
allowing me to build release binaries and libraries simply by running `make
CFLAGS='-O3' LDFLAGS='-flto'`. Make also allows running compilation in parallel
using the `-j` flag, greatly speeding up compilation. A properly written
Makefile really does make life a lot easier.
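As an illustration, honouring these variables might look something like this (a hedged sketch with hypothetical target and path names, not the actual template):

```make
# Hypothetical fragment: respect the user's CC, CFLAGS and LDFLAGS
CC ?= cc
CFLAGS ?= -Wall -Wextra

# One object file under build/ per source file under src/
OBJS := $(patsubst src/%.c,build/%.o,$(wildcard src/*.c))

build/%.o: src/%.c
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -Iinclude -Isrc/_include -c -o $@ $<

main: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^
```

A release build then becomes `make main CFLAGS='-O3' LDFLAGS='-flto'`, and `make -j$(nproc)` compiles the object files in parallel.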
It also solves a common issue with C compilation: header files. The usual
bog-standard Makefile only defines the C source file as a dependency of its
respective object file. Because of this, object files do not get recompiled
when a header file included by their source file changes, which can result in
unexpected errors when linking. The Makefile solves this by setting the
`-MMD -MP` compiler flags. `-MMD` tells the compiler to generate a small
Makefile next to each object file in the build directory, listing all included
header files as dependencies of that object file. By importing these generated
Makefiles into the main Makefile, object files are automatically recompiled
whenever a relevant header file changes.
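A sketch of this dependency-tracking mechanism (path names are hypothetical, not taken from the actual template):

```make
# Hypothetical fragment: automatic header dependency tracking.
# -MMD writes a .d file next to each object file; -MP adds phony
# targets for headers so deleted headers don't break the build.
CFLAGS += -MMD -MP

build/%.o: src/%.c
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -c -o $@ $<

# Pull in the generated .d files; '-' silences them on a clean build
-include $(wildcard build/*.d)
```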
The Makefile also contains some quality-of-life phony targets for stuff I use
regularly:
* `make lint` and `make fmt` use `clang-format` to lint and format the source
files
* `make check` runs `cppcheck` (and possibly other tools in the future) on the
source code, notifying me of obvious memory leaks or mistakes
* `make test` compiles all test binaries and runs the unit tests
* `make run` compiles and runs the main binary
* `make build-example` builds all examples
* `make bear` generates a `compile_commands.json` file using
[Bear](https://github.com/rizsotto/Bear) (the `clangd` LSP server requires
this to work properly)
* `make clean` removes all build artifacts
## Testing
My setup currently only supports unit tests, as I haven't really had the need
for anything more complex. For this, I use
[acutest](https://github.com/mity/acutest), a simple, easy-to-use,
header-only testing framework that's perfect for my projects. It's fully
contained within a single header file that gets included by all test files
under the `test` directory. Because the testing framework is fully contained
in the project, it's also very easy to run the tests in CI: if the CI
environment can compile the library, it can also run the tests, with no
additional dependencies required.
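To illustrate, a test file built on acutest might look roughly like this (the test itself is hypothetical; `acutest.h` is assumed to be vendored under `test/`):

```c
/* test/example_test.c: sketch of an acutest-based unit test */
#include "acutest.h"

static void test_addition(void) {
    TEST_CHECK(1 + 2 == 3);
    /* TEST_CHECK_ allows a printf-style message on failure */
    TEST_CHECK_(2 * 2 == 4, "expected %d", 4);
}

/* acutest provides main() and runs every entry in this list */
TEST_LIST = {
    { "addition", test_addition },
    { NULL, NULL }
};
```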
## Combining projects
My projects, specifically libraries, often start as part of a different project
(e.g. [lnm](https://git.rustybever.be/Chewing_Bever/lnm) used to be part of
[Lander](https://git.rustybever.be/Chewing_Bever/lander)). As the parent
project grows, some sections start to grow into their own, self-contained unit.
At this point, I take the time to properly decouple the codebases, moving the
new library into its own subdirectory. This subdirectory then gets the same
structure as described above, allowing the parent project to include it as a
static library.
This approach gives me a lot of flexibility when it comes to testing, as well
as giving me the freedom to separate subprojects into their own repositories as
desired. Each project functions exactly the same if it's a local subdirectory
or a Git submodule, allowing me to easily use my libraries in multiple projects
simply by including them as submodules.
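In Makefile terms, linking such a vendored subproject might be sketched like this (build paths and targets are hypothetical; `lnm` and Lander are the projects named above):

```make
# Hypothetical fragment: build a subproject (local directory or Git
# submodule) with its own Makefile, then link its static library
liblnm := lnm/build/liblnm.a

$(liblnm):
	$(MAKE) -C lnm

lander: $(OBJS) $(liblnm)
	$(CC) $(LDFLAGS) -o $@ $^
```

Because the subproject exposes the same interface whether it lives in-tree or as a submodule, the parent Makefile doesn't care which one it is.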
## Outro
That was my C project setup in a nutshell. Maybe this post could be of use to
someone, giving them ideas on how to improve their existing setups.
As is standard with this blog, this post was rather technical. If you got to
this point, thank you very much for reading.
Jef

@@ -1,5 +0,0 @@
---
title: "A Review of EndeavourOS"
date: 2022-04-05
draft: true
---

@@ -1,60 +0,0 @@
---
title: "Feeling Slightly Off"
date: 2023-03-18
---
This weekend's the first time in a couple of weeks that I've had a bit of
breathing space. I've started to realize that I've been feeling a bit off. I've
had a busy schedule this semester, lots of people to catch up with, and more
uni work than I expected. Yesterday was the first climax of this, with two
deadlines and a group presentation due. It became so busy I had to cancel a fun
evening solely due to uni work.
These last couple of weeks, I've been going through life chasing my calendar,
making sure I meet deadlines, while constantly remembering that I still have
stuff to do in the evening. Don't get me wrong, I loved every social outing,
but I'm aware that I tend to sacrifice a bit of myself sometimes to keep this
schedule going. It's not the first time I've been stressed about going out, due
to me planning too many things to do. I find myself coming home tired from
studying and classes, taking a quick shower and chugging a Red Bull before
going out again an hour later. I don't like saying no to outings, and I've
always got some FOMO.
My sleep's been suffering, but more importantly, I've caught myself caring less
about my health. When I come home for the weekend, I find myself answering "oh,
I haven't checked it" when my dad asks whether my blood pressure and weight are
still in check, and I haven't gone running in a week. After a while I started
noticing this, and it's given me this slight feeling of dread, and perhaps a
lack of control. I can be a bit paranoid about my health sometimes, and these
habits kept that feeling in check, but now it's bubbling up ever so slightly.
Ever since I've had my appendix removed, I've been out of this rhythm I had
created over the last year. Running was very much a key part of this routine,
something that helped keep everything grounded. I knew that I went running
every two days. It didn't feel like an obligation, but the idea of that rigid
schedule helped me plan everything else. Because I wasn't able to exercise for
a few weeks after the surgery (partially due to me being too paranoid about it
all), I've lost this feeling of consistency, and I've been struggling to find
it again. At this point, my body has fully healed and I'd be perfectly able to
handle this rhythm again, but I just haven't found that same flow.
My food hasn't been too healthy either. I've eaten a lot of junk food, more
than usual. Normally I don't mind this considering I partially compensated for
this by running, but recently that argument hasn't worked, so I'm fearing that
I'll start gaining weight again. On a brighter note, I've started prepping
lunches for the week (partially due to uni restaurants being way too expensive
nowadays), but I want to start pairing this habit with proper evening meals,
instead of junk food 2-3 times a week.
All this has combined into a sense of fear, fear that it might come to bite me
in the ass some day. I felt the need to write these thoughts down, to collect
them properly in my head. On this Saturday, I felt the need to take control of
my schedule again. Luckily I have some breathing room next week as I have no
deadlines due, and I hope to use this time to start getting back into this
rhythm.
As usual, I don't know how to end these posts, they're more of a dump of
thoughts than anything else. I'm well aware this post could come off as
pretentious. I'm basically complaining about having too many things to do while
having shitty time management, and that's fine. After all, I'm collecting *my*
thoughts. Thanks for reading.

@@ -1,49 +0,0 @@
---
title: "I've lost weight!"
date: 2022-07-10
---
Ever since I started middle school, I've been a bit of a chubby kid. I was
never into sports & honestly I just like food. The fact that my main hobby,
computer stuff, requires me to sit down all the time doesn't help my case
either.
My health was never really a big concern for me, as I generally felt pretty
good physically. A few months ago however, I finally found the courage to go
donate blood (I absolutely despise needles)! Whenever you want to donate blood
(at least, here in Belgium) they check your blood pressure to make sure you're
allowed to give blood, and that's when I found out that my blood pressure was
apparently way too high. This was a big eye-opener for me, as it was the first
time I was being confronted with a consequence of my weight and just general
lack of self-care.
Shortly after, I made an appointment with a doctor, who then referred me to a
cardiologist, who then once again told me that my blood pressure is indeed way
too high. Due to my young age however, they were hesitant to prescribe me
medication and instead encouraged me to start exercising more to see if this
helped the problem.
Suffice to say, my health has notably improved ever since I changed my
lifestyle! I've started running regularly (and have been enjoying it
surprisingly) and lowered my portion size when eating. For the first couple of
weeks, I was constantly hungry, but afterwards my stomach adjusted. Nowadays,
I'm already full after a portion half the size of before!
This post has been in the back of my mind for a while, but I wanted to wait
until I hit a specific milestone. When I started losing weight, I weighed
around 111kg, but a few days ago I weighed less than a hundred kilos for the
first time in years! My weight definitely ballooned when I started university,
so it felt really good to see that 99.9 on the scale. My blood pressure has
also notably improved, thanks to the weight loss and exercise.
Of course I'm not going to stop now, but this was a specific goal that I had in
mind to motivate myself to keep going. My friends and family have been
commenting on my weight loss, and it's really nice to hear people say that I
lost weight for a change. I am grateful that this was discovered so soon. Being
only twenty-one, I'm still more than capable of becoming healthier. I'd rather
exercise now than deal with the possible implications from high blood pressure
years down the road.
As usual, I have no idea how to end these posts; I just wanted to share my
accomplishment, as I'm quite proud of it :) Anyways, if you've gotten this far,
thank you for reading and have a very nice day <3

@@ -1,58 +0,0 @@
---
title: "Music"
date: 2022-05-16T20:58:18+02:00
---
Music has a profound effect on me. It dictates my mood, helps me process
emotions or keeps me focused. It motivates me when I'm running & helps me when
I'm down. And of course, I'm listening to some great music while writing this
post.
I felt like writing, mostly because I was feeling a bit like shit. It calms me
down, helps me put things in order. The idea to write about music just popped
into my head, but I've had it in the back of mind for a while now.
Music really is something special. It can evoke such a wide range of emotions,
from blissful joy to a depressed pitfall & everything in between. Whenever I'm
feeling down I just crack up some tunes. Not necessarily happy tunes mind you,
sometimes it's better to soak in the sadness for a bit, there's no point in
keeping it inside.
According to most people's standards, I listen to music *a lot*. I put in my
earbuds when I leave for classes at 8AM, or put on my headphones when I'm
working behind my desk. Every time I ride my bike, walk or run, I'm listening
to music. Whenever I'm studying or programming, I'm listening to music. In
total, I probably listen to music for at least 6 hours a day. Luckily that is
more than enough to justify buying a Tidal subscription for 10 euros a month ;p
That does bring me to my next point; I'm an audiophile. I love high resolution
audio & my audio setup reflects that. My Sennheiser HD 660S are very dear to
me, and they've provided me with hundreds of hours of listening pleasure at
this point.
I'm not really picky with what genres of music I listen to either. The music
just has to provide a certain feeling that fits my current mood. I do prefer
listening to entire albums, which is why I've amassed a great list of albums
that I love to listen to. Despite my horrible memory I tend to navigate this
list just fine, with each song just coming up as a feeling in the moment & my
mind magically navigating to the right album.
My love for music has been around for as long as I can remember. Back in high
school I was constantly listening to music; I've even played the piano for
years before stopping due to a lack of interest. Sadly that spirals back to my
lack of motivation for most things, but I digress.
The truth is, I wanted to write, just write, to process the exam stress that
I'm going through right now. I've got exams in two weeks & as usual, the stress
has been killing me. My horrible sleep hygiene doesn't help either. I rarely go
to bed before midnight & when I do, I just lie awake in bed, thinking. Thinking
of how I'll have to study more tomorrow, because otherwise I won't make it. A
constant fear of failure looms above me, ready to eat me up inside.
Well, this post got depressing quite fast, I'm sorry about that. Thing is, I
want to use this site to express myself, so when I'm feeling stressed, I want
to express that as well. It's liberating in a way, sharing this information
with "the world", in my own controlled way.
If you've gotten this far, thank you for reading through my ramblings &
insecurities; I do truly appreciate it. Au revoir.

@@ -1,50 +0,0 @@
---
title: "Necessity Creates Productivity"
date: 2022-04-07T09:46:05+02:00
---
Or at least, that's how I experience it. Let me explain.
I have a lot of sideprojects. Most would say too many (I'm inclined to agree).
I often start these projects because I feel like it, without a particular
purpose or useful goal in mind. Programming is just something that I really
enjoy, so I tend to create ideas out of thin air just because I want to write
something, anything. There is however another group of sideprojects, the ones
that I start because I need something. Those that fix an annoyance, or make my
life easier. What I've noticed is that I'm a lot more productive & less easily
burned out when I'm working on these kinds of projects.
One of those projects (and my main project atm) is
[Vieter](https://git.rustybever.be/Chewing_Bever/vieter). I originally wrote a
full description of Vieter here & why I needed it, but that's really not what
this post is about. You can still read about it in [the docs](/docs/vieter/#why)
if you want. The important part to take away from this is that it's something I
really need. It made me more productive and greatly pushed down my update
times, which I personally find very important. That's why I'm getting a lot of
things done for this project, because I know that it'll be worth it in the end
& improve my life.
To show the other side of the spectrum, my original idea for this site was a
collection of microservices, with a complex authentication system & a full
JavaScript frontend ([source](https://git.rustybever.be/rusty-bever)). Let's
just admit it here, this idea was way too ambitious and not even *that* useful.
The only part that I'm actually still considering writing is the authentication
part, because I do have some other ideas to go along with those, but that's
another post entirely ;p
Due to this overkill idea, I didn't actually set up this site for over a year I
think, just because I just couldn't get myself to properly work on the
implementation. I actually really enjoy writing these blog-style posts, so it's
quite sad I didn't set up a proper Hugo-based site immediately. Thankfully, at some
point I got through my stubbornness, and I set up this site in less than a day
:) This site still runs on [a custom backend](/switch-to-axum), but it's much
more minimal and only supports what I really need. My mind's a lot calmer now
that I've properly left my original idea behind.
I'm honestly not quite sure what point I'm trying to make. This post is just an
observation about how my unpredictable mind can work. Knowing myself, the
sideprojects will probably never stop coming, but that's okay tbh. The
important part is that most of them have a purpose, and don't just burn me out
unnecessarily.
Fin.

Binary file not shown (before: 2.3 MiB).

@@ -1,26 +0,0 @@
---
title: "Tour of Flanders"
date: 2022-04-04T12:53:23+02:00
---
Yesterday, some friends & I met to "watch" the Tour of Flanders (gonna have to
trust Google Translate on this one). Mind the quotes, because none of us really
know anything about cycling ;p One of us just lived close to where the tour
ended, so we used this as an excuse to organize a party at their place!
It was really fun standing in a big crowd of bystanders while the two
frontrunners passed by. Everyone went wild! It really shows how cosy a group of
Belgians can be if we just don't talk about anything besides sports (let's
leave the politics aside).
Afterwards, we went back to their place, ate some delicious burgers courtesy of
their mom, and watched [De
Mol](https://en.wikipedia.org/wiki/De_Mol_(TV_series)) together. For the rest
of the evening we had some beer & wine (and a glass of Johnnie Walker Black
Label ;) ), and just talked about everything. I really enjoy these kinds of
evenings, chilling with friends, no pressure to go out, just relaxing & talking
with some good booze :)
{{< figure src="./bert-enjoying-himself.jpg" title="Bert having some fun while we're all focused on the big screen" >}}
{{< figure src="./later-in-the-evening.jpg" title="After a few drinks (I gotta shave)" >}}

Binary file not shown (before: 2.6 MiB).

@@ -1,44 +0,0 @@
---
title: "Tuxedo Book XP14 12th Gen Review"
date: 2022-04-02
draft: true
---
For the last couple of years, my main driver was a MacBook Air 13" from 2013.
It was my sister's old laptop & I claimed it when she replaced it because it
became too slow. Naturally, I put Linux on it and, after a few distro hops,
settled on EndeavourOS. This setup worked well for about 3 years, but it was
getting rather old. After about a year of using it myself I had to replace the
battery, and after another two years or so that one became useless as well. It
was time for a change, so I started searching.
Thanks to a recommendation from a friend, I found Tuxedo Computers and I just
couldn't get them out of my head, so eventually I gave in and bought one! As
the title already revealed, the model's a Tuxedo Book XP14 Gen12.
My specific version has a 120Hz display, a 500GB Samsung 980, 2 x 8GB of
DDR4 RAM, an i5-1135G7 & Intel Iris Xe Graphics G7 80EUs.
Now that we've got the nerd stats out of the way, let's talk about the laptop
itself.
## The Good
The build quality is very solid. While the top half containing the display is
made up of a solid metal casing, the bottom part consists of a sturdy plastic.
There is some deck flex, but definitely not a level I would consider an issue.
The trackpad is very responsive & pairs nicely with the smoothness of the
cursor on the 120Hz display. I personally think the keyboard is quite amazing.
It's got a satisfying travel time & feels very solid for a membrane keyboard.
IO is more than enough, with a Kensington lock, SD card reader, gigabit
Ethernet port, Thunderbolt 4 port, two USB 3 ports, another USB-C port, HDMI &
two-in-one audio jack.
Under normal load the fans are completely silent, while at peak they're audible
but not annoying or overly loud.
Battery life is quite decent; under light load with dimmed backlight it can go
for about 6 hours. I do recommend properly configuring some energy profiles in
the Tuxedo Control Center.

@@ -1,83 +0,0 @@
---
title: "My V Workflow"
date: 2022-04-27T22:13:04+02:00
---
While I'm trying to find time to work on
[Vieter](https://git.rustybever.be/Chewing_Bever/vieter) (college is killing me
right now), I figured I could describe my current workflow for developing
Vieter, and in general, V!
I've always been a rather minimal developer, preferring simplicity &
lightweight programs over lots of smart IDE features. While this mentality
doesn't work for all languages, V's simplicity allows me to write it without
any smart features whatsoever!
## Tools
### Neovim
We can't do any coding without a text editor of course. My weapon of choice is
[Neovim](https://neovim.io/), the great Vim fork, run inside my [st
build](https://git.rustybever.be/Chewing_Bever/st). My main reason for choosing
Neovim over Vim (besides the more active development) is the Lua, LSP &
Treesitter support.
I try to keep [my
config](https://git.rustybever.be/Chewing_Bever/dotfiles/src/branch/master/.config/nvim)
& list of plugins rather short by following the basic rule of only adding a
plugin if I find it adds actual value to my setup. If the plugin or setting
only adds a gimmick that I don't actively use, I probably won't add it.
### VLS
Thanks to the LSP support in Neovim I'm able to use
[VLS](https://github.com/vlang/vls) (V Language Server). This gives me better
autocomplete, useful suggestions & error messages, all without ever having to
run the compiler myself!
### Treesitter
The VLS repo also comes with grammar definitions for treesitter. This allows me
to import these into Neovim, providing me with better code highlighting using my
treesitter-compatible theme.
### Compiler mirror
I don't like it when things break without my permission. While it's a very good
thing that V is so actively developed, it does make programs rather sensitive
to change & can cause stuff to break after a compiler update. This is why I
maintain my own mirror of the compiler which I update regularly. Thanks to
this, I have full control over how frequently my compiler updates, providing me
with a level of stability on both my laptops & in my CI that can't be obtained
when blindly following the master branch.
### Packaging for Arch Linux
My distro of choice for all my devices is EndeavourOS, an Arch-based distro
(well, it's basically just a very good installer ;p). Thanks to this
uniformity, it's very easy for me to package my compiler mirror, VLS & the
treesitter grammar.
For the compiler, I build packages inside my CI
([PKGBUILD](https://git.rustybever.be/Chewing_Bever/v/src/branch/master/PKGBUILD))
& publish this package to my personal Vieter instance. Then, using this
compiler package, I periodically build & package VLS
([PKGBUILD](https://git.rustybever.be/bur/vieter-vls/src/branch/main/PKGBUILD)).
This is to make sure my VLS build is compatible with my compiler version. The
PKGBUILD also shows how to compile the treesitter grammar separately from VLS.
## Workflow
Just like my config, my way of working is rather simple. I really like working
in the terminal, so I usually write small Makefiles
([example](https://git.rustybever.be/Chewing_Bever/vieter/src/branch/dev/Makefile))
that do everything I need, e.g. compile, lint, test etc. Using the
[toggleterm](https://github.com/akinsho/toggleterm.nvim) plugin, I spawn
terminals inside Neovim & use `make` to do everything else!
## Outro
I'm not too sure how to end this post. I hope it might help someone who's
struggling to find a setup that works, or perhaps the links to my PKGBUILDs
could come in handy for someone ;p


@ -1,100 +0,0 @@
---
title: "My Experience With V"
date: 2022-06-26
draft: true
---
For the last half a year or so, I've written code nearly exclusively in the V
programming language (excluding college projects). In this time, I've learned a
lot about the language, as well as being an active member of the community.
I don't recall exactly how I discovered V. Being the kind of nerd that has a
list of languages they wanna try, I probably saw it somewhere & added it.
Luckily, V was the one I wanted to try out the most. After visiting
[vlang.io](https://vlang.io/), I joined the Discord server & that's where the
fun began!
Before I talk about the language itself, I would like to take a moment to
appreciate the community. I felt welcome the moment I joined, and everyone
(especially the V developers) was very helpful with any questions I had. If it
wasn't for their help, [Vieter](https://git.rustybever.be/vieter-v/vieter)
probably wouldn't be as far along as it is today!
## What's V?
While I'm not interested in giving a full description of the language, a short
introduction is in order. V is a compiled programming language with a syntax
very similar to Go. The main compiler backend transpiles V to C, providing
interop with C code without any effort. This also gives the compilation phase
access to all optimisations that C compilers have to offer, resulting in very
fast & optimized binaries. I'd list more things, but then it wouldn't be short
anymore!
## Developing in V
Now for the relevant part of this post, the actual developing!
Developing locally in V is pretty straightforward. Write some code, run `v .`,
blink, see if you made a mistake, repeat. For me, this is a very important
feature of V. Not only does the compiler handle the "build system" for you;
it's also incredibly fast. This is accomplished by using the tcc compiler, a
small & extremely fast C compiler, for development builds. Thanks to this,
compiling my code doesn't take me out of "the flow"; a problem that I've faced
when working with Rust code, as I'm very sensitive to losing focus.
Building optimized binaries is equally simple; just run `v -prod .`. This will
use either gcc or clang to compile your code using the max optimisation levels.
Due to the rapidly developing nature of V, it is possible that old code no
longer compiles on a newer compiler. This won't happen once the language is
stabilized, but as of today, changes can occur. I don't actually know how
others handle this, but I personally maintain a mirror of the compiler that I
update regularly. This way, I decide when code might break, meaning I can react
quickly to make sure nothing stays broken for long. This brings me to my CI
setup!
Because overengineering is fun, I have my own CI server that I use to test &
deploy basically everything I create; V software is no exception! Using Docker
buildx, I create multi-architecture Alpine-based images containing my compiler
fork & any C library dependencies that I use. These images are then used in my
CI to build statically compiled binaries that I can use to create the actual
Docker images! Due to V compiling to C, compiling static binaries is quite
simple; just build using a musl-based OS such as Alpine Linux.
Enough drooling over my CI, back to V! Yes, when I was writing code in V, I
encountered some bugs in the compiler. While a bit inconvenient at times, they
definitely weren't a showstopper for me. V is still a developing language; I'm
not gonna try to advertise that it isn't. The thing is though, I was able to
report these compiler bugs to the community immediately, and many of them were
fixed within 24 hours by one of the V developers! V might still be in
development, but it's definitely already ready for developing projects. My
[vieter](https://git.rustybever.be/vieter-v/vieter) project has nearly 4k SLoC
and still compiles just as quickly as when I started it. The resulting binaries
are rock-solid; my personal Vieter instance has been running for months without
issues.
## Conclusion?
It's clear from this post that I've taken a liking to V. The amount of
evolution I've seen in the months that I've been using it is impressive, and
I'm certain that V will reach its goal of being a stable language. I'm fine
tagging along until that day comes :)
In the context of developing Vieter, I've written a multitude of software
pieces, ranging from a cron daemon to a rewrite of Arch Linux's `repo-add`
command. This variety gives me confidence that V can already be used to develop
varied & complex software.
Besides developing Vieter, I'd like to enrich the ecosystem with packages that
I think will be useful for everyone. To this end, I've started splitting off
modules of the Vieter codebase & developing them independently. My first goal
will be writing a Docker client
[library](https://git.rustybever.be/vieter-v/docker), as I find this to be very
useful for any language to have (and also I need it myself of course).
Now, I know using these new and/or developing languages is not for everyone.
Some just prefer sticking to the proven titans of the industry, and that's
fine. However, for those like me that love using these new langs, I really do
recommend checking out V. It's fast, it's of course free and open-source, and
using a language is one of the best ways of helping it move forward. Perhaps
when you join, you'll see a Chewing Bever babbling on ;)


@ -1,37 +0,0 @@
---
title: "My Workflow For This Site"
date: 2022-04-05
---
This blog is about a week old now. I'm still figuring out what kind of content
I'd like to post, or what kind of writing style I have. What I have figured out
however, is my workflow.
Thanks to my backend [powered by Axum](/switch-to-axum) I have pretty much full
creative control over the internal workings of my site. This gave me the
freedom to implement a system that I think works very well. Let's elaborate a
bit.
Both the blog & the documentation part of my website are currently being
generated using Hugo, a static site generator. The lack of JavaScript makes the
site very fast, which is always a big plus in my opinion. Thanks to my
[self-hosted CI](https://woodpecker-ci.org/), I can automatically build &
deploy the static files every time I update anything. My CI builds the static
website, compresses it into a tarball, & uploads this to my backend. This
process takes less than 10 seconds on a warm CI runner & it allows me to very
quickly update my site, correct errors, or just upload a post like this one!
My backend supports a simple system of serving multiple sites. In practice this
means that I can specify which site I'm uploading using a query parameter in
the POST request. This is how I'm able to serve my documentation on
[/docs](/docs) while still having my blog available as the "default" site.
The "source code" for my site(s) is stored in Git repositories using Markdown.
Considering I use Git on a daily basis, this is perfect for me & I don't see it
as an "extra step" anymore. For college I use Git as well, so using it in
personal projects is a no-brainer.
I have no idea how common this setup is, or if it'll work as well down the
road, but for now, I find it works perfectly.
Thanks for reading!


@ -1,12 +0,0 @@
---
title: "Projects"
---
Throughout the years, I've created (and dropped) a lot of projects. Some were
built out of necessity, others simply because I thought it was cool. This
section of my site is where I can talk about these projects that I'm so
passionate about!
The posts for the various projects mostly consist of devlogs and version
release announcements. Each project has its own RSS feed, so you can subscribe
to the ones you'd like to follow.


@ -1,31 +0,0 @@
---
title: "Alex"
summary: "Minecraft server wrapper that automates world backups"
type: "project"
params:
links:
- name: Source
url: 'https://git.rustybever.be/Chewing_Bever/alex'
---
Alex was created to solve an issue I'd been having: inconsistent Minecraft
server backups.
My original backup system involved compressing the server world and config
directories in a tarball. While this usually worked, the command regularly
failed due to conflicts while reading a file the server was writing to. Because
the Minecraft server is unaware of the fact it's being backed up, it
continuously writes data to disk, preventing the tar command from doing its
job.
My solution for this consisted of designing a process that wraps the Minecraft
server process and interacts with its standard input. The Minecraft server
accepts certain commands (`save-all` and `save-on`/`save-off`) that allow an
admin to control when and if the server writes data to disk. Using this, the
wrapper process tells the server to flush all data, stop writing to disk, and
only resume writing after a successful backup has been created.
This idea was then expanded to implement incremental backups, which greatly
reduced the time and disk size needed for the backups. For reference, our
~2.5GB world folder can be backed up incrementally in less than 5 seconds, with
backups running every thirty minutes, without ever turning off the server.


@ -1,142 +0,0 @@
---
title: "Automating Minecraft Server Backups"
date: 2023-09-07
---
I started playing Minecraft back in 2012, after the release of version 1.2.5.
Like many gen Z'ers, I grew up playing the game day in day out, and now 11
years later, I love the game more than ever. One of the main reasons I still
play the game is multiplayer, seeing the world evolve as the weeks go by with
everyone adding their own personal touches.
Naturally, as a nerd, I've grown the habit of hosting my own servers, as well
as maintaining instances for friends. Having managed these servers, I've
experienced the same problems that I've heard other people complaining about as
well: backing up the server.
{{< figure src="./the-village.jpg" title="Sneak peek of the village we live in" >}}
## The Problem
Like any piece of software, a Minecraft server instance writes files to disk,
and these files, a combination of world data and configuration files, are what
we wish to back up. The problem is that the server instance is constantly
writing new data to disk. This conflicts with the "just copy the files"
approach (e.g. `tar` or `rsync`), as these will often encounter errors because
they're trying to read a file that's actively being written to. Because the
server isn't aware it's being backed up, it's also possible it writes to a file
already read by the backup software while the other files are still being
processed. This produces an inconsistent backup with data files that do not
properly belong together.
There are two straightforward ways to solve this problem. One would be to
simply turn off the server before each backup. While this could definitely work
without too much interruption, granted the backups are scheduled at times no
players are online, I don't find this to be very elegant.
The second solution is much more appealing. A Minecraft server can be
controlled using certain console commands, with the relevant ones here being
`save-off`, `save-all`, and `save-on`. `save-off` tells the server to stop
saving its data to disk, and cache it in memory instead. `save-all` flushes the
server's data to disk, and `save-on` enables writing to disk again. Combining
these commands provides us with a way to back up a live Minecraft server: turn
off saving using `save-off`, flush its data using `save-all`, back up the
files, and turn on saving again using `save-on`. With these tools at my
disposal, I started work on my own custom solution.
## My solution
After some brainstorming, I ended up with a fairly simple approach: spawn the
server process as a child process with the parent controlling the server's
stdin. By taking control of the stdin, we can send commands to the server
process as if we'd typed them into the terminal ourselves. I wrote the original
proof-of-concept over two years ago during the pandemic, but this ended up
sitting in a dead repository afterwards. However, a couple of months ago, some
new motivation to work on the project popped into my head (I started caring a
lot about our world), so I turned it into a fully fledged backup tool! The
project's called [alex](https://git.rustybever.be/Chewing_Bever/alex) and as
usual, it's open-source and available on my personal Gitea instance.
Although Alex is a lot more advanced now than it was a couple of months back,
it still functions on the same principle of injecting the above commands into
the server process's stdin. The real star of the show however is the way it
handles its backups, which brings us into the next section.
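The stdin-injection idea can be sketched in a few lines of Rust (the language
Alex is built in, judging by its use of tar-rs). The helper names here are
made up for this post, and `cat` stands in for the real `java -jar server.jar`
invocation so the sketch is runnable anywhere; `cat` simply echoes back the
commands it receives:

```rust
use std::io::{BufRead, BufReader, Write};
use std::process::{Child, Command, Stdio};

/// Send the save-off / save-all command pair through the server's piped
/// stdin, mimicking an admin typing into the console.
fn pause_saving(child: &mut Child) {
    let stdin = child.stdin.as_mut().expect("stdin must be piped");
    stdin.write_all(b"save-off\nsave-all\n").unwrap();
}

/// Re-enable disk writes once the backup has finished.
fn resume_saving(child: &mut Child) {
    let stdin = child.stdin.as_mut().expect("stdin must be piped");
    stdin.write_all(b"save-on\n").unwrap();
}

fn main() {
    // `cat` is a stand-in for the actual Minecraft server process.
    let mut child = Command::new("cat")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn server process");

    pause_saving(&mut child);
    // ... create the (incremental) backup here ...
    resume_saving(&mut child);

    // Close stdin so the stand-in process exits.
    drop(child.stdin.take());

    let echoed: Vec<String> = BufReader::new(child.stdout.take().unwrap())
        .lines()
        .map(|l| l.unwrap())
        .collect();
    child.wait().unwrap();
    assert_eq!(echoed, ["save-off", "save-all", "save-on"]);
    println!("server received: {:?}", echoed);
}
```

The real wrapper keeps the child alive for the server's whole lifetime, of
course; the point is only that owning the child's stdin is all it takes to
drive the save commands.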
## Incremental backups
You could probably describe my usual projects as overengineered, and Alex is no
different. Originally, Alex simply created a full tarball every `n` minutes
(powered by the lovely [tar-rs](https://github.com/alexcrichton/tar-rs)
library). While this definitely worked, it was *slow*. Compressing several
gigabytes of world files always takes some time, and this combined with shaky
hard drive speeds resulted in backups easily taking 5-10 minutes. Normally,
this wouldn't bother me too much, but with this solution, the Minecraft server
isn't writing to disk for the entire duration of this backup! If the server
crashed during this time, all this data would be lost.
This called for a better method: incremental backups. For those unfamiliar, an
incremental backup is a backup that only stores the changes that occurred since
the last backup. This not only saves a ton of disk space, but it also greatly
decreases the amount of data that needs to be compressed, speeding up the
backup process tremendously.
Along with this, I introduced the concept of "chains". Because an incremental
backup describes the changes that occurred since the last backup, it needs that
other backup in order to be fully restored. This also implies that the first
incremental backup needs to be based off a full backup. A chain defines a list
of sequential backups that all depend on the one before them, with each chain
starting with a full backup.
All of this combined resulted in the following configuration for backups: the
admin can configure one or more backup schedules, with each schedule being
defined by a name, a frequency, a chain length and how many chains to keep. For
each of these configurations, a new backup will be created periodically
according to the defined frequency, and this backup will be appended to the
current chain for that schedule. If the chain is full (as defined by the chain
length), a new chain is created. Finally, the admin can configure how many of
these full chains to keep.
As an example, my server currently uses a dual-schedule system:
* One configuration is called "30min". As the name suggests, it has a frequency
of 30 minutes. It stores chains of length 48, and keeps 1 full chain. This
configuration allows me to create incremental backups (which take 5-10
seconds) every 30 minutes, and I can restore these backups in this 30-minute
granularity up to 24 hours back.
* The second configuration is called "daily", and this one simply creates a
full backup (a chain length of 1) every 24 hours, with 7 chains being stored.
This allows me to roll back a backup with a 24-hour granularity up to 7 days
back.
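The chain bookkeeping described above can be sketched as follows; this is my
own minimal Rust model of the rules (the names and in-memory representation
are hypothetical, not Alex's actual code):

```rust
/// One backup schedule, as configured by the admin: how long each chain may
/// grow and how many full chains to retain. (Frequency is omitted here.)
struct Schedule {
    chain_length: u32,
    chains_to_keep: u32,
}

/// Chains are modelled as a list of chain sizes, oldest first. A new backup
/// either extends the newest chain (incremental) or starts a fresh chain
/// with a full backup once the current chain is full.
fn add_backup(chains: &mut Vec<u32>, s: &Schedule) -> bool {
    let starts_new_chain = match chains.last() {
        Some(&len) if len < s.chain_length => false,
        _ => true,
    };
    if starts_new_chain {
        chains.push(1);
        // Drop the oldest chains once we exceed the configured amount.
        while chains.len() > s.chains_to_keep as usize {
            chains.remove(0);
        }
    } else {
        *chains.last_mut().unwrap() += 1;
    }
    starts_new_chain // true => this backup was a full one
}

fn main() {
    // The "30min" schedule from above: chains of 48, keep 1 full chain.
    let s = Schedule { chain_length: 48, chains_to_keep: 1 };
    let mut chains = Vec::new();
    let mut fulls = 0;
    for _ in 0..96 {
        if add_backup(&mut chains, &s) {
            fulls += 1;
        }
    }
    // 96 half-hourly backups (48 hours) need only 2 full backups; the
    // older chain is pruned as soon as the second one starts.
    assert_eq!(fulls, 2);
    assert_eq!(chains, vec![48]);
    println!("chains kept: {:?}", chains);
}
```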
This configuration would've never been possible without incremental backups, as
the 30 minute backups would've simply taken too long otherwise. The required
disk space would've also been rather unwieldy, as I'd rather not store 48
multi-gigabyte backups per day. With the incremental backups system, each
backup after the initial full backup is only a few megabytes!
Of course, a tool like this wouldn't be complete without some management
utilities, so the Alex binary contains tools for restoring backups, exporting
incremental backups as a new full backup, and unpacking a backup.
## What's next?
There's still some improvements I'd like to add to Alex itself, notably making
Alex more aware of the server's internal state by parsing its logs, and making
restoring backups possible without having to stop the Alex instance (this is
rather cumbersome in Docker containers).
On a bigger scale however, there's another possible route to take: add a
central server component where an Alex instance can publish its backups to.
This server would then have a user management system to allow certain users of
the Minecraft server to have access to the backups for offline use. This server
could perhaps also show the logs of the server instance, as well as handling
syncing the backups to another location, such as an S3 store. This would make
the entire system more resistant to data loss.
Of course, I'm well aware these ideas are rather ambitious, but I'm excited to
see where this project might go next!
That being said, Alex is available as statically compiled binaries for `amd64`
and `arm64` [on my Gitea](https://git.rustybever.be/Chewing_Bever/alex). If
you're interested in following the project, Gitea recently added repository
[RSS feeds](https://git.rustybever.be/Chewing_Bever/alex.rss) ;)

(Image removed: the-village.jpg, 593 KiB)

@ -1,15 +0,0 @@
---
title: "Lander"
summary: "URL shortener, pastebin & file-sharing service, built from the ground up in C"
type: "project"
params:
links:
- name: Source
url: 'https://git.rustybever.be/Chewing_Bever/lander'
---
Lander is my personal URL shortener, pastebin and file-sharing service. I've
always wanted to make one of these myself, and as an added challenge, I built
everything (except for the HTTP parser) from the ground up. It's built on a
home-grown epoll-based event loop on top of which I built an HTTP framework
that I'm also planning to use for some other projects.


@ -1,138 +0,0 @@
---
title: "Designing my own URL shortener"
date: 2023-10-14
---
One of the projects I've always found to be a good choice for a side project is
a URL shortener. The core idea is simple and fairly easy to implement, yet it
allows for a lot of creativity in how you implement it. Once you're done with
the core idea, you can start expanding the project as you wish: expiring links,
password protection, or perhaps a management API. The possibilities are
endless!
Naturally, this post talks about my own version of a URL shortener:
[Lander](https://git.rustybever.be/Chewing_Bever/lander). In order to add some
extra challenge to the project, I've chosen to write it from the ground up in C
by implementing my own event loop, and building an HTTP server on top to use as
the base for the URL shortener.
## The event loop
Lander consists of three layers: the event loop, the HTTP loop and finally the
Lander-specific code. Each of these layers utilizes the layer below it, with
the event loop being the bottom-most layer. This layer directly interacts with
the networking stack and ensures bytes are received from and written to the
client. The book [Build Your Own Redis](https://build-your-own.org/redis/) by
James Smith was an excellent starting point, and I highly recommend checking it
out! This book taught me everything I needed to know to start this project.
Now for a slightly more technical dive into the inner workings of the event
loop. The event loop is the layer that listens on the listening TCP socket for
incoming connections and directly processes requests. In each iteration of the
event loop, the following steps are taken:
1. For each of the open connections:
1. Perform network I/O
2. Execute data processing code, provided by the upper layers
3. Close finished connections
2. Accept a new connection if needed
The event loop runs on a single thread, and constantly goes through this cycle
to process requests. Here, the "data processing code" is a set of function
pointers passed to the event loop that get executed at specific times. This is
how the HTTP loop is able to inject its functionality into the event loop.
In the event loop, a connection can be in one of three states: `request`,
`response`, or `end`. In `request` mode, the event loop tries to read incoming
data from the client into the read buffer. This read buffer is then used by the
data processing code's data handler. In `response` mode, the data processing
code's data writer is called, which populates the write buffer. This buffer is
then written to the connection socket. Finally, the `end` state simply tells
the event loop that the connection should be closed without any further
processing. A connection can switch between `request` and `response` mode as
many times as needed, allowing connections to be reused for multiple requests
from the same client.
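Lander itself is written in C, but the connection state machine can be
sketched in Rust with the actual socket I/O stubbed out (all names here are
hypothetical, and the "data handler" is inlined instead of being a function
pointer):

```rust
/// The three states a connection can be in inside the event loop.
#[derive(Clone, Copy, PartialEq, Debug)]
enum ConnState {
    Request,  // read bytes from the socket into the read buffer
    Response, // flush the write buffer back to the socket
    End,      // close the connection, no further processing
}

struct Conn {
    state: ConnState,
    read_buf: Vec<u8>,
    write_buf: Vec<u8>,
    keep_alive: bool,
}

/// One pass over a single connection during an event loop iteration.
fn tick(conn: &mut Conn) {
    match conn.state {
        ConnState::Request => {
            // (network read into conn.read_buf would happen here)
            if !conn.read_buf.is_empty() {
                // Data handler: turn the request into a response.
                conn.write_buf = conn.read_buf.to_ascii_uppercase();
                conn.read_buf.clear();
                conn.state = ConnState::Response;
            }
        }
        ConnState::Response => {
            // (network write from conn.write_buf would happen here)
            conn.write_buf.clear();
            // A reused connection flips back to Request mode.
            conn.state = if conn.keep_alive {
                ConnState::Request
            } else {
                ConnState::End
            };
        }
        ConnState::End => {}
    }
}

fn main() {
    let mut conn = Conn {
        state: ConnState::Request,
        read_buf: b"ping".to_vec(),
        write_buf: Vec::new(),
        keep_alive: false,
    };
    tick(&mut conn);
    assert_eq!(conn.state, ConnState::Response);
    assert_eq!(conn.write_buf, b"PING");
    tick(&mut conn);
    assert_eq!(conn.state, ConnState::End);
    println!("connection finished");
}
```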
The event loop provides all the necessary building blocks needed to build a
client-server type application. These are then used to implement the next
layer: the HTTP loop.
## The HTTP loop
Before we can design a specific HTTP-based application, we need a base to build
on. This base is the HTTP loop. It handles both serializing and deserializing
of HTTP requests & responses, along with providing commonly used functionality,
such as bearer authentication and reading & writing files to & from disk. The
request parser is provided by the excellent
[picohttpparser](https://github.com/h2o/picohttpparser) library. The parsed
request is stored in the request's data struct, providing access to this data
for all necessary functions.
The HTTP loop defines a request handler function which is passed to the event
loop as the data handler. This function first tries to parse the request,
before routing it accordingly. For routing, literal string matches or
RegEx-based routing is available.
Each route consists of one or more steps. Each of these steps is a function
that tries to advance the processing of the current request. The return value
of these steps tells the HTTP loop whether the step has finished its task or if
it's still waiting for I/O. The latter instructs the HTTP loop to skip this
request for now, delaying its processing until the next cycle of the HTTP loop.
In each cycle of the HTTP loop (or rather, the event loop), a request will try
to advance its processing by as much as possible by executing as many steps as
possible, in order. This means that very small requests can be completely
processed within a single cycle of the HTTP loop. Common functionality is
provided as predefined steps. One example is the `http_loop_step_body_to_buf`
step, which reads the request body into a buffer.
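This step-driven processing can be sketched as follows. The sketch is in Rust
rather than Lander's C; `step_body_to_buf` loosely mirrors the
`http_loop_step_body_to_buf` step mentioned above, and everything else is
hypothetical:

```rust
/// Result of running one step of a route: either the step completed and the
/// next one may run, or it is still waiting on I/O and must be retried on
/// the next cycle of the HTTP loop.
#[derive(PartialEq, Debug)]
enum StepResult {
    Done,
    Pending,
}

struct Request {
    body_bytes_left: usize,
    step_index: usize,
}

type Step = fn(&mut Request) -> StepResult;

/// Pretend one chunk of the request body arrives per cycle.
fn step_body_to_buf(req: &mut Request) -> StepResult {
    if req.body_bytes_left > 0 {
        req.body_bytes_left -= 1; // one "chunk" read this cycle
        StepResult::Pending
    } else {
        StepResult::Done
    }
}

fn step_write_response(_req: &mut Request) -> StepResult {
    StepResult::Done
}

/// Advance a request as far as possible within a single cycle: run steps in
/// order until one reports it is still waiting for I/O.
fn advance(req: &mut Request, steps: &[Step]) -> bool {
    while req.step_index < steps.len() {
        match steps[req.step_index](req) {
            StepResult::Done => req.step_index += 1,
            StepResult::Pending => return false,
        }
    }
    true // request fully processed
}

fn main() {
    let steps: [Step; 2] = [step_body_to_buf, step_write_response];
    let mut req = Request { body_bytes_left: 2, step_index: 0 };

    // Two cycles are spent waiting on the body; the third finishes the rest.
    assert!(!advance(&mut req, &steps));
    assert!(!advance(&mut req, &steps));
    assert!(advance(&mut req, &steps));
    println!("request handled");
}
```

A request with an empty body completes in a single cycle here, matching the
"very small requests" case described above.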
The HTTP loop also provides the data writer functionality, which will stream an
HTTP response to the write buffer. The contents of the response are tracked in
the request's data struct, and these data structs are recycled between requests
using the same connection, preventing unnecessary allocations.
## Lander
Above the HTTP loop layer, we finally reach the code specific to Lander. It
might not surprise you that this layer is the smallest of the three, as the
abstractions below allow it to focus on the task at hand: serving and storing
HTTP redirects (and pastes). The way these are stored however is, in my
opinion, rather interesting.
For our Algorithms & Datastructures 3 course, we had to design three different
trie implementations in C: a Patricia trie, a ternary trie and a "custom" trie,
where we were allowed to experiment with different ideas. For those unfamiliar,
a trie is a tree-like data structure used for storing strings. The keys used in
this tree are the strings themselves, with each character causing the tree to
branch off. Each string is stored at depth `m`, with `m` being the length of
the string. This also means that the search depth of a string is not bounded by
the size of the trie, but rather the size of the string! This allows for
extremely fast lookup times for short keys, even if we have a large number of
entries.
My design ended up being a combination of both a Patricia and a ternary trie: a
ternary trie that supports skips the way a Patricia trie does. I ended up
taking this final design and modifying it for this project by optimising it (or
at least trying to) for shorter keys. This trie structure is stored completely in
memory, allowing for very low response times for redirects. Pastes are served
from disk, but their lookup is also performed using the same in-memory trie.
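For illustration, here is a plain ternary trie in Rust. Lander's real
structure is written in C and adds the Patricia-style skips, which are omitted
here for brevity; keys are assumed non-empty:

```rust
/// A ternary trie node: keys branch left/right on byte comparison and
/// descend through `eq` when the byte matches.
struct Node {
    ch: u8,
    lo: Option<Box<Node>>,
    eq: Option<Box<Node>>,
    hi: Option<Box<Node>>,
    value: Option<String>, // e.g. the target URL, if a key ends here
}

fn insert(node: &mut Option<Box<Node>>, key: &[u8], value: &str) {
    let ch = key[0];
    let n = node.get_or_insert_with(|| {
        Box::new(Node { ch, lo: None, eq: None, hi: None, value: None })
    });
    if ch < n.ch {
        insert(&mut n.lo, key, value);
    } else if ch > n.ch {
        insert(&mut n.hi, key, value);
    } else if key.len() > 1 {
        insert(&mut n.eq, &key[1..], value);
    } else {
        n.value = Some(value.to_string());
    }
}

/// Search depth is bounded by the key length, not the number of entries.
fn lookup<'a>(node: &'a Option<Box<Node>>, key: &[u8]) -> Option<&'a str> {
    let n = node.as_ref()?;
    let ch = key[0];
    if ch < n.ch {
        lookup(&n.lo, key)
    } else if ch > n.ch {
        lookup(&n.hi, key)
    } else if key.len() > 1 {
        lookup(&n.eq, &key[1..])
    } else {
        n.value.as_deref()
    }
}

fn main() {
    let mut root = None;
    insert(&mut root, b"abc", "https://example.com/1");
    insert(&mut root, b"abd", "https://example.com/2");
    assert_eq!(lookup(&root, b"abc"), Some("https://example.com/1"));
    assert_eq!(lookup(&root, b"abd"), Some("https://example.com/2"));
    assert_eq!(lookup(&root, b"ab"), None); // no value stored at "ab"
    println!("lookups ok");
}
```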
## What's next?
Hopefully the above explanation provides some insight into the inner workings
of Lander. For those interested, the source code is of course available
[here](https://git.rustybever.be/Chewing_Bever/lander). I'm not quite done with
this project though.
My current vision is to have Lander be my personal URL shortener, pastebin &
file-sharing service. Considering a pastebin is basically a file-sharing
service for text files specifically, I'd like to combine these into a single
concept. The goal is to rework the storage system to support arbitrarily large
files, and to allow storing generic metadata for each entry. The initial
usecase for this metadata would be storing the content type for uploaded files,
allowing this header to be correctly served when retrieving the files. This
combined with supporting large files turns Lander into a WeTransfer
alternative! Besides this, password protection and expiration of pastes is on
my to-do list as well. The data structure currently doesn't support removing
elements either, so this would need to be added in order to support expiration.
Hopefully a follow-up post announcing these changes will come soon ;)


@ -1,31 +0,0 @@
---
title: "Rieter"
summary: "Easy-to-use Pacman repository server designed for the self-hosting enthusiast"
type: "project"
params:
links:
- name: Source
url: 'https://git.rustybever.be/Chewing_Bever/rieter'
---
This project is a reimagining of my Vieter project. While its goal is to
eventually fully replace Vieter, I'm following a different mindset on how to
get there.
First and foremost, I want to make a well-designed Pacman repository server
that anyone can set up on any device, be it a Raspberry Pi or a beefy server.
Rieter should be usable for anything from a small personal repository all the
way to a full mirror of a distribution's package server.
Something that also fits nicely in this concept is mirroring. Rieter will
support automatically mirroring upstream repositories. This could be used to
support your distribution by setting up a new public mirror, or to speed up
your updates by keeping a mirror of the repositories in your local network.
Only once I've created a robust repository server that can be used on its own
will I start looking towards the package build system. This system will of
course be redesigned from the ground up to (hopefully) eliminate all the
technical debt that's been accumulating in the Vieter codebase over the years.
With these two concepts combined, I hope to create a great ecosystem on which
one can build anything from personal repositories to entire distributions.


@ -1,78 +0,0 @@
---
title: "Rethinking the Vieter project"
date: 2024-06-08
---
I've been meaning to recreate my Vieter project for a while. The codebase is
full of technical debt, and I've grown dissatisfied with the language it was
originally written in. That's where the Rieter project comes in: a full
reimagining and reimplementation of the core ideas of the project, in Rust. I
am however following a different mindset this time around.
My plan is to develop the project in two stages. The first stage involves
creating a well-designed general-purpose repository server. This includes
serving and storing packages, as well as providing a REST API and web UI to
interact with the repository packages. In this stage I'll also add mirroring
functionality to allow a Rieter server to automatically maintain a local copy
of another repository. This could be used to easily create another mirror for a
distribution's servers, or perhaps to create a local mirror for faster
downloads.
Once the first stage is finished, we have a solid foundation on which we can
build the second stage: the build system. This will involve redesigning the
agent-server architecture that's currently used in Vieter, with the goal of
completely replacing Vieter in due time.
This post is the first in a hopefully plentiful series of devlogs for this
project where I'll document my progress along the way.
## Current progress
The implementation of the repository server itself is almost done. A user can
publish, request and remove packages for any number of repositories and
architectures. Repositories are then further grouped into distributions,
allowing a single server to be used for multiple distributions if need be
(e.g. I could have `arch` and `endeavouros` as distributions on my personal
server). A package's information is added to the database, and this
data is then exposed via a paginated REST API.
The only real hurdle left for a first release is concurrency, which brings with
it a couple of problems. With the current implementation, it's possible for
concurrent uploads of packages to corrupt the repository. The generation of the
package archives happens inside the request handler for each upload, meaning
multiple requests basically do duplicate work and can cause CPU usage spikes.
The parsing of packages is also done inside the request handler, which once
again causes the server to spike in CPU usage if multiple packages are uploaded
in parallel. These things combined make concurrent uploads of packages a rather
painful problem to deal with.
My solution for these problems consists of two parts. First I want to add a
queueing system for new packages. Instead of parsing the packages directly in
the request handler, they would get added to a queue, with the server then
responding with a [`202
Accepted`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202). The
actual parsing of the packages would be done asynchronously by a configurable
number of worker threads that parse the packages.
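In Rust (Rieter's language), the queueing idea could look roughly like this,
using a plain mpsc channel shared by a pool of workers. All names are
hypothetical, not Rieter's actual code:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// A queued package upload; in reality this would reference the uploaded
/// archive on disk rather than just a name.
struct Upload {
    name: String,
}

/// The request handler only enqueues the upload and immediately answers
/// 202 Accepted; the expensive parsing happens asynchronously.
fn handle_upload(queue: &mpsc::Sender<Upload>, name: &str) -> u16 {
    queue.send(Upload { name: name.to_string() }).unwrap();
    202
}

fn main() {
    let (tx, rx) = mpsc::channel::<Upload>();
    let rx = Arc::new(Mutex::new(rx));

    // A configurable number of workers pull uploads off the shared queue.
    let workers: Vec<_> = (0..2)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || {
                let mut parsed = 0usize;
                loop {
                    // Hold the lock only while popping; parse unlocked.
                    let msg = rx.lock().unwrap().recv();
                    match msg {
                        Ok(upload) => {
                            // (parse the archive, update the database)
                            let _ = upload.name;
                            parsed += 1;
                        }
                        Err(_) => break, // queue closed: no more uploads
                    }
                }
                parsed
            })
        })
        .collect();

    for i in 0..5 {
        assert_eq!(handle_upload(&tx, &format!("pkg-{i}")), 202);
    }
    drop(tx); // closing the channel lets the workers drain and exit

    let total: usize = workers.into_iter().map(|w| w.join().unwrap()).sum();
    assert_eq!(total, 5);
    println!("parsed {total} packages");
}
```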
The second part involves serializing and stalling the generation of the package
archives until needed. Instead of actually generating the package archives for
each uploaded package, we simply notify some central worker thread that the
repository has been altered. This worker would then generate the package
archives, after ensuring the queue is empty and no new packages have arrived in
the last `n` seconds. This pattern accounts for groups of packages being
uploaded at once without needlessly stressing the server.
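The debounce idea can be sketched with plain std threading primitives (a simplified stand-in for the real async version; all names are illustrative, and the sketch assumes a single repository):

```rust
use std::sync::mpsc;
use std::time::Duration;

// Central worker sketch: each received message means "the repo changed".
// The archives are only regenerated once no new notification has arrived
// for `quiet`, batching a burst of uploads into a single regeneration.
fn archive_worker(rx: mpsc::Receiver<u32>, quiet: Duration) -> Vec<u32> {
    let mut regenerated = Vec::new();

    while let Ok(repo) = rx.recv() {
        // Drain further notifications until the channel stays silent long enough.
        while rx.recv_timeout(quiet).is_ok() {}

        // In the real server this would rebuild the package archives.
        regenerated.push(repo);
    }

    regenerated
}
```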
By implementing these features, the server should be able to handle a large
number of package uploads without using excessive resources, ensuring Rieter
can scale to proper sizes.
## First release
Once this is implemented, the codebase should be ready for a 0.1.0 release!
This version will already be usable as a fully-fledged repository server on
which I can then build the other parts of the first stage.
For the 1.0 release, I'll be adding a web UI, as this was something that I was
sorely missing from Vieter. Perhaps most exciting of all, automatic mirroring
will also be added which I'm definitely looking forward to! I hope to publish
another post here soon, but until then, thanks for reading.
@ -1,140 +0,0 @@
---
title: "Progress on concurrent repositories"
date: 2024-06-18
---
During the last devlog I was working on a system for concurrent repositories.
After a lot of trying, I've found a system that should work pretty well, even
on larger scales. In doing so, the overall complexity of the system has
actually decreased on several points as well! Let me explain.
## Concurrent repositories
I went through a lot of ideas before settling on the current implementation.
Initially both the parsing of packages and the regeneration of the package
archives happened inside the request handler, without any form of
synchronisation. This had several unwanted effects. For one, multiple packages
could quickly overload the CPU as they would all be processed in parallel.
These would then also try to generate the package archives in parallel, causing
writes to the same files, which was a mess of its own. Because all work was
performed inside the request handlers, the time it took for the server to
respond was dependent on how congested the system was, which wasn't acceptable
for me. Something definitely had to change.
My first solution heavily utilized the Tokio async runtime that Rieter is built
on. Each package that gets uploaded would spawn a new task that competes for a
semaphore, allowing me to control how many packages get parsed in parallel.
Important to note here is that the request handler no longer needs to wait
until a package is finished parsing. The parse task is handled asynchronously,
allowing the server to respond immediately with a [`202
Accepted`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202). This
way, clients no longer need to wait unnecessarily long for a task that can be
performed asynchronously on the server. Each parse task would then regenerate
the package archives if it was able to successfully parse a package.
Because each task regenerates the package archives, this approach performed a
lot of extra work. The constant spawning of Tokio tasks also didn't sit right
with me, so I tried another design, which ended up being the current version.
### Current design
I settled on a much more classic design: worker threads, or rather, Tokio
tasks. On startup, Rieter launches `n` worker tasks that listen for messages on
an [mpsc](https://docs.rs/tokio/latest/tokio/sync/mpsc/index.html) channel. The
receiver is shared between the workers using a mutex, so each message only gets
picked up by one of the workers. Each request first uploads its respective
package to a temporary file, and sends a tuple `(repo, path)` to the channel,
notifying one of the workers a new package is to be parsed. Each time the queue
for a repository is empty, the package archives get regenerated, effectively
batching this operation. This technique is so much simpler and works wonders.
### Package queueing
I did have some fun designing the internals of this system. My goal was to
have a repository seamlessly handle any number of packages being uploaded, even
different versions of the same package. To achieve this I leveraged the
database.
Each parsed package's information gets added to the database with a unique
monotonically increasing ID. Each repository can only have one version of a
package present for each of its architectures. For each package name, the
relevant package to add to the package archives is thus the one with the
largest ID. This resulted in this (in my opinion rather elegant) query:
```sql
SELECT * FROM "package" AS "p1" INNER JOIN (
SELECT "repo_id", "arch", "name", MAX("package"."id") AS "max_id"
FROM "package"
GROUP BY "repo_id", "arch", "name"
) AS "p2" ON "p1"."id" = "p2"."max_id"
WHERE "p1"."repo_id" = 1 AND "p1"."arch" IN ('x86_64', 'any') AND "p1"."state" <> 2
```
For each `(repo, arch, name)` tuple, we find the largest ID and select it, but
only if its state is not `2`, which means *pending deletion*. Determining which
old packages to remove then uses a similar query, where we instead select all
packages that are marked as *pending deletion* or whose ID is less than the
currently committed package.
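The same "largest ID wins, unless pending deletion" rule can be expressed over an in-memory list for illustration (the `Pkg` struct here is a hypothetical stand-in, not Rieter's actual model):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct Pkg {
    id: u64,
    repo_id: u32,
    arch: String,
    name: String,
    state: u8, // 2 means "pending deletion"
}

// Mirror of the SQL above: for every (repo_id, arch, name) group, take the
// row with the largest ID, then drop it if it is pending deletion.
fn committed_packages(pkgs: &[Pkg]) -> Vec<Pkg> {
    let mut max_per_key: HashMap<(u32, &str, &str), &Pkg> = HashMap::new();

    for p in pkgs {
        let entry = max_per_key
            .entry((p.repo_id, p.arch.as_str(), p.name.as_str()))
            .or_insert(p);
        if p.id > entry.id {
            *entry = p;
        }
    }

    max_per_key
        .into_values()
        .filter(|p| p.state != 2)
        .cloned()
        .collect()
}
```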
This design not only seamlessly supports any order of packages being added; it
also paves the way for implementing repository mirroring down the line. This
allows me to atomically update a repository, a feature that I'll be using for
the mirroring system. I'll simply queue new packages and only regenerate the
package archives once all packages have successfully synced to the server.
## Simplifying things
During my development of the repository system, I realized how complex I was
making some things. For example, repositories are grouped into distros, and
this structure was also visible in the codebase. Each distro had its own
"distro manager" that managed packages for its repositories. However, this was a
needless overcomplication, as distros are solely an aesthetic feature. Each
repository has a unique ID in the database anyways, so this extra level of
complexity was completely unnecessary.
Package organisation on disk is also still overly complex. Each
repository has its own folder with its own packages, but this once again is an
excessive layer as packages have unique IDs anyways. The database tracks which
packages are part of which repositories, so I'll switch to storing all packages
next to each other instead. This might also pave the way for some cool features
down the line, such as staging repositories.
I've been needlessly holding on to how I've done things with Vieter, while I
can make completely new choices in Rieter. The file system for Rieter doesn't
need to resemble the Vieter file system at all, nor should it follow any notion
of how Arch repositories usually look. If need be, I can add an export utility
to convert the directory structure into a more classic layout, but I shouldn't
bother keeping it in mind while developing Rieter.
## Configuration
I switched configuration from environment variables and CLI arguments to a
dedicated config file. The former would've been too simplistic for the
configuration options I'll be adding later on.
```toml
api_key = "test"
pkg_workers = 2
log_level = "rieterd=debug"
[fs]
type = "local"
data_dir = "./data"
[db]
type = "postgres"
host = "localhost"
db = "rieter"
user = "rieter"
password = "rieter"
```
This will allow me a lot more flexibility in the future.
## First release
There's still some polish to be done, but I'm definitely nearing an initial 0.1
release for this project. I'm looking forward to announcing it!
As usual, thanks for reading, and having a nice day.
@ -1,19 +0,0 @@
---
title: "Site"
summary: "The infrastructure for this site"
type: "project"
params:
links:
- name: Backend
url: 'https://git.rustybever.be/Chewing_Bever/site-backend'
- name: Hugo
url: 'https://git.rustybever.be/Chewing_Bever/site'
---
This site is created using [Hugo](https://gohugo.io/), an easy-to-use static
site generator that I've used for years. The static assets are built in my CI,
where they are then published to my custom backend server.
The backend server's main role is to allow me to publish built static sites to
various routes on my domain. I use this both to publish this site, as well as
deploying documentation for my various projects.
@ -1,42 +0,0 @@
---
title: "Switching to Axum"
date: 2022-04-02
tags:
- rust
---
In classic Jef fashion, it took me less than a week to completely overhaul the
way my site works ;p Visually nothing's changed, but internally the website is
now being powered by a web server [written in
Rust](https://git.rustybever.be/Chewing_Bever/site-backend), powered by
[Axum](https://github.com/tokio-rs/axum).
The reason for this is expandability. While nginx is really good at what it
does, it's rather limited when it comes to implementing new features on top of
it. However, even if it wasn't, I would've still switched because I just really
wanted to work in Rust once more :D
Favoritism aside, the plan is to join the [IndieWeb](https://indieweb.org/)
network. To quote their homepage:
> The IndieWeb is a people-focused alternative to the "corporate web".
They've got some really great ideas about what the internet could be if we
tried, and considering I agree with nearly all of them, I wanna join ;p
My first project will be to implement the Webmention protocol. This consists of
a simple exchange of POST requests, where you notify another user's site
whenever you mention one of their posts. This in exchange allows them to
display my response on their website, and vice versa! In essence, it's simple
decentralized commenting.
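The exchange itself is tiny: per the Webmention spec, the sender POSTs two form fields to the endpoint advertised by the target site. A sketch of just the request body (a real client must percent-encode the URLs; this assumes they contain no characters that need escaping):

```rust
// Shape of a Webmention notification: a form-encoded POST with `source`
// (my page that does the mentioning) and `target` (the page being
// mentioned), sent to the target site's advertised Webmention endpoint.
fn webmention_body(source: &str, target: &str) -> String {
    format!("source={source}&target={target}")
}
```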
My plans after this are still vague. I might dip my toes into
[IndieAuth](https://indieauth.net/),
[microformats](https://indieweb.org/microformats) or any of the other cool
concepts they've got to offer! Either way, the plan is to enjoy myself with
this site ;p
If all goes well, this post will be the first new post to get published using
my new deploy system, so fingers crossed :)
Have a lovely day <3
@ -1,29 +0,0 @@
---
title: "Vieter 0.2.0"
date: 2022-04-11
---
When this post gets published, I'll have successfully released version 0.2.0 of
[Vieter](https://git.rustybever.be/Chewing_Bever/vieter)! For the uninitiated,
Vieter is currently my biggest passion project. It's an implementation of an
Arch repository server, paired with a build system for automatically building
packages from the AUR & other sources.
This release brings a lot of goodies; the changelog & release binaries can be
found
[here](https://git.rustybever.be/Chewing_Bever/vieter/releases/tag/0.2.0). The
biggest changes are that Vieter now supports multiple repositories with support
for packages for multiple architectures! Besides that, there's some bug fixing,
improvements to the CLI & an added setting for the build system that allows for
building on other architectures. The [docs](https://rustybever.be/docs/vieter/)
have also been updated to reflect this new update.
Of course, development won't just stop now, I have too many ideas for that ;p
[0.3.0](https://git.rustybever.be/Chewing_Bever/vieter/milestone/27) will bring
with it some big improvements to the builder system, allowing for more
flexibility & configuration.
If you're interested in the project, join me over at
[#vieter:rustybever.be](https://matrix.to/#/#vieter:rustybever.be) on Matrix!
Cheers <3
@ -1,69 +0,0 @@
---
title: "Vieter 0.3.0"
date: 2022-06-13
---
When this post is live, Vieter 0.3.0 will have been released! This release
really does come with a lot of new features, including more reliable builds and
a new cron implementation!
This release ended up taking me over two months, but I'm quite proud of it :)
It not only adds a lot of useful features, but also paves the way for a lot
more cool features down the road!
## What is Vieter?
Vieter consists of two independent parts, namely an implementation of an Arch
(Pacman) repository, & a build system to populate said repository. The goal is
to remove the need for an AUR helper & move all builds to a remote server. Not
only does this greatly reduce update times on lower-end systems, it also
prevents AUR packages from being built multiple times on different systems.
The repository can also be used independently, providing a convenient server
for publishing Arch packages from CI builds for example.
While I specifically mention Arch & the AUR, Vieter is compatible with any
Pacman-based distro & can build PKGBUILDs provided from any Git source.
## What's changed?
### New cron daemon
Perhaps the most important feature in this release is the implementation of a
cron daemon. While 0.2.0 still relied on crond to periodically start builds,
0.3.0 can schedule builds completely independently.
The daemon understands a subset of the cron expression syntax. The build
schedule can be either configured globally or on a per-repo basis. This allows
the user to fine-tune certain packages, e.g. if they want them to be updated
more regularly than all the rest.
### More robust builds
Often, a build would fail with exit code 8. This error indicates that makepkg
wasn't able to install all dependencies, caused by the builder image not being
up to date enough. Due to this, each build now runs `pacman -Syu` before
running the actual build.
Builds can now also use dependencies that are part of the target repository.
This allows building packages with AUR dependencies, as long as all
dependencies are also being built for said repository.
### Build logs
The main server now stores the logs of each build, including the exit code.
This makes it a lot easier to debug why builds fail.
### Improved documentation
The [Vieter documentation](https://rustybever.be/docs/vieter/) has had a pretty
major re-write to get it up to date with this new release. Now there's also
[HTTP API docs](https://rustybever.be/docs/vieter/api/#introduction) & [man
pages](https://rustybever.be/man/vieter/vieter.1.html).
## Interested?
If you're interested in Vieter, consider joining
[#vieter:rustybever.be](https://matrix.to/#/#vieter:rustybever.be) on Matrix!
The source code can be found on my personal
[Gitea](https://git.rustybever.be/vieter/vieter).
@ -1,69 +0,0 @@
---
title: "Vieter 0.4.0"
date: 2022-10-01
---
Right at the start of October, I managed to release another version of Vieter!
## What is Vieter?
Vieter consists of two independent parts, namely an implementation of an Arch
(Pacman) repository, & a build system to populate said repository. The goal is
to remove the need for an AUR helper & move all builds to a remote server. Not
only does this greatly reduce update times on lower-end systems, it also
prevents AUR packages from being built multiple times on different systems.
The repository can also be used independently, providing a convenient server
for publishing Arch packages from CI builds for example.
While I specifically mention Arch & the AUR, Vieter is compatible with any
Pacman-based distro & can build PKGBUILDs provided from any Git source.
## What's changed?
### Renaming `repos` to `targets`
Before 0.4.0, "repos" was the term used to describe the list of Git
repositories that periodically get built on your Vieter instance. This term
however was rather confusing, as the Vieter server itself also hosts Pacman
repositories, making it difficult to correctly talk about the features. That's
why I've made the decision to rename this to "targets". All CLI commands
previously found under `vieter repos` can now be used via `vieter targets`
instead. API routes have also been renamed.
Along with this, a new kind of target can now be added which specifies the link
to a PKGBUILD file, instead of a Git repository. This can for example be used
to link a PKGBUILD that's contained inside some other Git repository that's not
specifically used for that PKGBUILD.
### Refactored web framework
The underlying web framework has seen a proper refactor to better accommodate
the rest of the codebase. All API routes can now be found under a versioned
`/api/v1` prefix.
The repository endpoints now support `DELETE` requests to remove packages,
arch-repos & repos. All routes serving files (e.g. the repository routes) now
support HTTP byte range requests, which not only allows Pacman to resume
downloads after failure, but also allows tools such as
[`axel`](https://github.com/axel-download-accelerator/axel) to work properly
using a Vieter server.
Endpoints creating new entries on the server now return the ID of the newly
created object (e.g. a target or a build log).
### CLI UX
The CLI has seen some useful changes. There's now a `-r` flag that makes
Vieter's output better for scripting. Besides that, a small tool has been added
to interact with the AUR and add AUR packages directly to your list of targets!
`vieter targets add` and `vieter logs add` now return the ID of the newly
created entry.
## Interested?
If you're interested in Vieter, consider joining
[#vieter:rustybever.be](https://matrix.to/#/#vieter:rustybever.be) on Matrix!
The source code can be found on my personal
[Gitea](https://git.rustybever.be/vieter/vieter).
@ -1,99 +0,0 @@
---
title: "Vieter 0.5.0"
date: 2022-12-29
---
As 2022 comes to a close, and in the middle of exams, I'm ready to release
another version of Vieter!
## What's Vieter?
As usual, a small refresher on what Vieter actually is for the new readers.
Vieter provides two main services: a Pacman repository server and a build
system for publishing Arch packages to said server. The goal is to fully
replace any need for an AUR helper, with the AUR packages being built "in the
cloud" instead. Of course, one can also publish their own packages to this
server, allowing you to create your very own customized Arch repositories!
## What's changed?
### Server-agent architecture
The biggest change this version introduces is the migration to a polling-based
server-agent architecture.
Previous versions relied on a "cron daemon". This daemon needed to be deployed
on every architecture the user wanted to build packages for, with the daemon
periodically polling the server for the list of targets to build. All
scheduling was done on the node performing the builds.
While this system served me well for a while, it did limit possibilities for
improvement. Building on multiple servers wasn't possible, as the cron daemons
had no way of synchronizing with each other, meaning they'd all run all builds.
There was also no way for the server (or the user for that matter) to control
these daemons; they had a fixed build schedule that could only be changed by
changing a target's configuration.
Due to these limitations, I've decided to revamp the build system & convert it
to a server-agent architecture! With this new system, the main server handles
all scheduling. On each server running builds, a "build agent" is deployed
which periodically polls the main server for new jobs. This allows a Vieter
instance to run builds on an arbitrary number of build nodes! Thanks to this,
I'm able to run 137 builds in under 40 minutes, whereas before, I needed this
time to process less than half of that :) As a bit of a stress test for my
instance, I've started replicating the EndeavourOS repository for fun.
With the main server now being in control of scheduling, I've also been able to
implement manual scheduling of builds on the server. If a package needs to be
rebuilt, you can simply send an HTTP request (or use the accompanying CLI
command) to schedule a build job.
### Quality-of-life improvements
This release was mostly the build system redesign, but I've also added some QoL
improvements. Notable additions are the option to periodically remove logs, as
an active Vieter instance can collect thousands over time. The CLI tool should
now also be a lot more stable, and will correctly display HTTP errors if
needed. There's also the option to specify what subdirectory of a Git
repository to use for a build. This is for example useful when building
packages using a Git repository [containing multiple
PKGBUILDs](https://github.com/endeavouros-team/PKGBUILDS).
## What's to come?
My brain never stops, so I still have a lot of cool ideas I wish to implement
in the coming months.
For starters: better build awareness. The build system right now does not track
the progress of jobs. Once a job is dispatched to an agent, the main server
forgets about it and moves on. If a build fails for unknown reasons and the
logs never get uploaded, it'll be as if the build never happened. I'd
like to change this. I want the main server to be aware of what jobs are
running, have failed, or have perhaps timed out. This information could then be
made available through the API, providing the user with valuable insight into
the workings of their Vieter instance.
Building on this idea, I wish to know what specific command caused the build to
fail. Was it makepkg, or an HTTP error? And if it was makepkg, what error code?
Can the build system respond to it by itself? The main goal is to provide a
deeper understanding of the workings of builds and the build system as a
whole.
Another big one to tackle is API access to the repositories. These are
currently only accessible through Pacman, but having this information available
as a convenient REST API, usable from the CLI tool, sounds like a valuable
asset. This would pave the way towards repository-level configuration.
Of course there's a lot more ideas, but the list would be too long to put here
;p
## Conclusion
As you can see, I still have *a lot* of ideas for Vieter. As usual however, I
can't predict when any of these features will get implemented. It all depends
on whether the uni life leaves me some time :)
If you're interested in Vieter, consider joining
[#vieter:rustybever.be](https://matrix.to/#/#vieter:rustybever.be) on Matrix!
The source code can be found on my personal
[Gitea](https://git.rustybever.be/vieter-v/vieter).
@ -1,62 +0,0 @@
---
title: "Vieter 0.6.0"
date: 2023-07-20
---
It's been a while since I've released a new version of Vieter. A busy semester
combined with a lack of interest in the project has definitely slowed down
development. Nonetheless, I've got a new release ready for you!
## What's Vieter?
Vieter consists of two central components: an Arch Linux (well, Pacman
actually) repository server that supports uploading package archives, combined
with a build system to populate this server by periodically building select
packages. The goal of this is to completely remove the need for an AUR helper,
and move all builds to the cloud, allowing for smooth updates across as
many machines as required.
## What's changed?
This is a rather small update, and it mostly contains a few quality-of-life
improvements.
For one, there's now a Prometheus metrics endpoint so you can integrate Vieter
into your existing stack. Currently the metrics are limited to API request
timings, but this could be expanded upon in the future.
The API can now filter the list of targets, allowing you to more easily search
for specific targets. This functionality has also been added to the CLI.
Builds can now be configured with a timeout, with build containers being
automatically killed if this timeout is reached.
Behind the scenes, the codebase has been updated to a compiler commit after the
0.3.3 release, and the cron logic has been rewritten in C using the
[libvieter](https://git.rustybever.be/vieter-v/libvieter) library. Agents now
use worker threads, meaning they will not spawn a new thread anymore for each
new build. Package uploads now properly fail if the TCP connection was closed
before all bytes of a file were received. Lastly, the deprecated cron daemon
has been removed.
### What's next?
Throughout the last couple of months, I've grown more and more tired of the V
programming language, and the codebase in general. There's a lot of technical
debt present, and due to the limitations of the language and existing
frameworks, I've had to resort to questionable practices for a lot of the
features (e.g. POST request data as query parameters). Due to this, I've
decided to restart this project in Rust, under the name
[rieter](https://git.rustybever.be/Chewing_Bever/rieter). With this, I hope to
move away from this technical debt, and build a new solid foundation on which I
can further expand this project. I'm not going to be making any promises on
when this will be ready to replace Vieter, but I hope to get there soon.
## Conclusion
Just picture a very creative ending of this post here ;)
If you're interested in Vieter, consider joining
[#vieter:rustybever.be](https://matrix.to/#/#vieter:rustybever.be) on Matrix!
The source code can be found on my personal
[Gitea](https://git.rustybever.be/vieter-v/vieter).
@ -1,29 +0,0 @@
---
title: "Vieter"
summary: "Arch Linux repository server & build system, written in V"
type: "project"
params:
links:
- name: Source
url: 'https://git.rustybever.be/vieter-v/vieter'
- name: Docs
url: '/docs/vieter'
- name: API Docs
url: '/api-docs/vieter'
---
Vieter is my personal solution to a problem I've been facing for months:
extremely long AUR package build times. I run EndeavourOS on all my systems,
meaning I'm updating my systems fairly frequently. I really like being a
beta-tester for projects & run development builds for multiple packages.
Because of this, I have to regularly re-build these packages in order to stay
up to date with development. However, these builds can take a really long time,
even on my more powerful laptop. This project is a solution to that problem:
instead of building the packages locally, I can build them automatically in the
cloud & just download them whenever I update my system! Thanks to this
solution, I'm able to shave 10-15 minutes off my update times, just from not
having to compile everything every time there's an update.
Besides this, it's also just really useful to have a repository server that you
control & can upload your own packages to. For example, I package my st
terminal using a CI pipeline & upload it to my repository!
@ -0,0 +1,5 @@
---
title: "Switching to Axum"
date: 2022-04-02
draft: true
---
nginx/default.conf 100644
@ -0,0 +1,15 @@
# vim: ft=nginx
# =====FRONTEND HOSTING=====
location / {
root /usr/share/nginx/html;
index index.html;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
@ -0,0 +1,49 @@
# vim: ft=nginx
# =====MATRIX WELL-KNOWN FILES=====
# Used for server federation
location = /.well-known/matrix/server {
charset utf-8;
default_type application/json;
if ($request_method = 'GET') {
add_header Access-Control-Allow-Origin '*';
add_header Access-Control-Allow-Methods 'GET, POST, PUT, DELETE, OPTIONS';
add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type, Authorization';
return 200 '{"m.server":"${MATRIX_SERVER}"}';
}
if ($request_method = 'OPTIONS') {
add_header Access-Control-Allow-Origin '*';
add_header Access-Control-Allow-Methods 'GET, POST, PUT, DELETE, OPTIONS';
add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type, Authorization';
add_header 'Content-Length' 0;
return 204;
}
return 405;
}
location = /.well-known/matrix/client {
charset utf-8;
default_type application/json;
if ($request_method = 'GET') {
add_header Access-Control-Allow-Origin '*';
add_header Access-Control-Allow-Methods 'GET, POST, PUT, DELETE, OPTIONS';
add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type, Authorization';
return 200 '{"m.homeserver":{"base_url":"${MATRIX_CLIENT_SERVER}"}}';
}
if ($request_method = 'OPTIONS') {
add_header Access-Control-Allow-Origin '*';
add_header Access-Control-Allow-Methods 'GET, POST, PUT, DELETE, OPTIONS';
add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type, Authorization';
add_header 'Content-Length' 0;
return 204;
}
return 405;
}
nginx/nginx.conf 100644
@ -0,0 +1,37 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
gzip off;
server {
listen 80;
listen [::]:80;
# This order is important, as the Matrix matches should be evaluated first
include /etc/nginx/conf.d/matrix.conf;
include /etc/nginx/conf.d/default.conf;
}
}
renovate.json 100644
@ -0,0 +1,3 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json"
}
themes/etch 160000
@ -0,0 +1 @@
Subproject commit 1969ea26457aef716efae848c5d08c8a00d75a69
@ -1 +0,0 @@
.DS_Store
@ -1,20 +0,0 @@
The MIT License (MIT)
Copyright (c) 2020 Lukas Joswiak
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@ -1,33 +0,0 @@
# Etch
Etch is a simple, responsive theme for [Hugo](https://gohugo.io) with a focus on writing. A live demo is available at https://lukasjoswiak.github.io/etch/.
<img src="https://raw.githubusercontent.com/LukasJoswiak/etch/master/images/screenshot_small.png" alt="screenshot" width="545px">
## Features:
* Homepage with list of posts.
* Support for pages.
* Responsive design for optimized mobile experience.
* Syntax highlighting with customizable theme.
* Dark theme which automatically adjusts based on users' setting ([example](https://github.com/LukasJoswiak/etch/wiki/Dark-mode)).
* No external dependencies, no JavaScript, no web fonts.
* Internationalization friendly: use default English translations or create your own
## Installation
To install `etch`, download the repository into the `themes` folder in the root of your site.
```
$ git submodule add https://github.com/LukasJoswiak/etch.git themes/etch
```
Then, use the theme to generate your site.
```
$ hugo server -t etch
```
Use the [sample configuration](https://github.com/LukasJoswiak/etch/wiki/Configuration#sample-configuration) as a starting point. See the [configuration](https://github.com/LukasJoswiak/etch/wiki/Configuration) page for more info.
Read the [wiki](https://github.com/LukasJoswiak/etch/wiki) to learn about more options.
@ -1,2 +0,0 @@
+++
+++
@ -1,62 +0,0 @@
{{ if not (eq .Site.Params.dark "on") -}}
@media (prefers-color-scheme: dark) {
{{ end -}}
html {
scrollbar-color: #6c6c6c #2e2e2e;
}
body {
color: #ebebeb;
background: #121212;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
header#banner a {
color: #e0e0e0;
text-decoration: none;
}
header#banner nav ul li a {
color: #cccccc;
}
header#links a {
color: #e0e0e0;
text-decoration: none;
}
header#links nav ul li a {
color: #00b1ed;
}
main#content a {
color: #00b1ed;
}
main#content p {
color: #f5f5f5;
}
main#content hr {
background: #5c5c5c;
}
main#content #toc h4 {
color: #d4d4d4;
}
main#content ul#posts small {
color: #a7a7a7;
}
main#content ul#posts li a:hover {
color: #21c7ff;
}
main#content header#post-header div {
color: #a7a7a7;
}
{{- if not (eq .Site.Params.dark "on") -}}
}
{{- end -}}

View File

@ -1,307 +0,0 @@
*, *:before, *:after {
box-sizing: border-box;
}
html {
font-size: 62.5%;
}
body {
font-size: 16px;
font-size: 1.6rem;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
color: #313a3d;
width: 100%;
margin: 0 auto;
padding: 0 16px;
line-height: 1.6;
}
header#banner {
margin: 25px 0;
}
header#banner a {
color: #313a3d;
text-decoration: none;
}
header#banner a:hover {
text-decoration: underline;
}
header#banner h2 {
display: inline;
font-size: 21px;
font-size: 2.1rem;
margin: 0 8px 0 0;
}
header#banner nav {
display: inline-block;
}
header#banner nav ul {
list-style-type: none;
font-size: 1.05em;
text-transform: lowercase;
margin: 0;
padding: 0;
}
header#banner nav ul li {
display: inline;
margin: 0 3px;
}
header#banner nav ul li a {
color: #454545;
}
main#content a {
color: #007dfa;
text-decoration: none;
}
main#content a:hover {
text-decoration: underline;
}
main#content h1,
main#content h2,
main#content h3,
main#content h4,
main#content h5,
main#content h6 {
margin-bottom: 0;
line-height: 1.15;
}
main#content h3 {
font-size: 19px;
font-size: 1.9rem;
}
main#content h1 + p,
main#content h2 + p,
main#content h3 + p,
main#content h4 + p,
main#content h5 + p,
main#content h6 + p {
margin-top: 5px;
}
main#content p {
color: #394548;
margin: 16px 0;
}
main#content hr {
height: 1px;
border: 0;
background: #d8d8d8;
}
main#content abbr {
cursor: help;
}
/* index.html styles */
main#content ul#posts {
list-style-type: none;
font-size: 16px;
font-size: 1.6rem;
margin-top: 0;
padding: 0;
}
main#content ul#posts li {
margin: 5px 0;
padding: 0;
}
main#content ul#posts small {
font-size: 0.8em;
color: #767676;
margin-left: 10px;
}
main#content ul#posts li a {
text-decoration: none;
}
main#content ul#posts li a:hover {
color: #369aff;
}
main#content ul#posts li a:hover small {
color: inherit;
}
/* single.html styles */
main#content header#post-header h1 {
display: block;
font-size: 23px;
font-size: 2.3rem;
font-weight: 600;
line-height: 1.15;
}
main#content header#post-header > div {
display: block;
font-size: 0.85em;
color: #767676;
}
main#content #toc {
border: 1px solid #b1b1b1;
border-radius: 1px;
line-height: 26px;
margin: 16px 0;
padding: 9px 14px;
}
main#content #toc h4 {
font-size: 1.06em;
color: #3d3d3d;
margin: 0;
}
main#content #toc nav#TableOfContents {
margin-top: 4px;
}
main#content #toc nav#TableOfContents > ul, main#content #toc nav#TableOfContents > ol {
margin-left: -40px;
}
main#content #toc ul, main#content #toc ol {
font-size: 0.98em;
margin: 0;
padding: 0 0 0 40px;
}
main#content #toc ul {
list-style-type: none;
}
main#content #toc ol {
counter-reset: item;
}
main#content #toc ol li {
display: block;
}
main#content #toc ol li:before {
content: counters(item, ".") ". ";
counter-increment: item;
}
main#content img {
max-width: 100%;
margin: 0 auto;
}
main#content figure {
margin: 16px 0;
}
main#content figure img {
display: block;
max-width: 100%;
margin: 0 auto;
}
main#content figure figcaption {
font-size: 0.92em;
font-style: italic;
line-height: 22px;
text-align: center;
margin-top: 6px;
padding: 0 10px;
}
main#content figure figcaption h4 {
font-style: normal;
display: inline;
margin: 0;
}
main#content figure figcaption p {
display: inline;
margin: 0;
padding-left: 8px;
}
main#content blockquote {
font-style: italic;
margin-top: 10px;
margin-bottom: 10px;
margin-left: 50px;
padding-left: 15px;
border-left: 3px solid #ccc;
}
main#content code,
main#content pre {
font-family: 'Menlo', monospace;
}
main#content code {
font-size: 0.96em;
padding: 0 5px;
}
main#content pre {
display: block;
overflow-x: auto;
font-size: 14px;
font-size: 1.4rem;
white-space: pre;
margin: 20px 0;
padding: 1.5rem 1.5rem;
line-height: 1.4;
}
main#content pre code {
padding: 0;
}
main#content section.footnotes {
font-size: 0.9em;
}
footer#footer {
font-size: 14px;
font-size: 1.4rem;
font-weight: 400;
color: #b3b3b3;
margin: 40px 0;
}
header#links {
display: inline-block;
}
header#links nav {
display: inline-block;
}
header#links nav ul {
list-style-type: none;
font-size: 1.05em;
text-transform: lowercase;
margin: 0;
padding: 0;
}
header#links nav ul li {
display: inline;
margin: 0 3px;
}
header#links nav ul li a {
color: #007dfa;
text-decoration: none;
}

View File

@ -1,52 +0,0 @@
@media (min-width: 770px) {
body {
width: 600px;
line-height: 1.5;
}
main#content hr {
width: 108%;
margin-left: -3.8%;
}
/* index.html styles */
header#banner h2 {
font-size: 25px;
font-size: 2.5rem;
}
main#content h3 {
font-size: 20px;
font-size: 2rem;
}
main#content ul#posts {
font-size: 18px;
font-size: 1.8rem;
}
/* single.html styles */
main#content header#post-header h1 {
font-size: 24px;
font-size: 2.4rem;
}
main#content img {
max-width: 108%;
margin-left: -3.8%;
}
main#content figure {
margin-left: -3.8%;
}
main#content figure img {
max-width: 108%;
}
main#content pre {
width: 108%;
margin-left: -3.8%;
padding: 1.5rem 2.2rem;
}
}

View File

@ -1,59 +0,0 @@
/* Background */ .chroma { color: #f8f8f2; background-color: #272822 }
/* Error */ .chroma .err { color: #960050; background-color: #1e0010 }
/* LineTableTD */ .chroma .lntd { vertical-align: top; padding: 0; margin: 0; border: 0; }
/* LineTable */ .chroma .lntable { border-spacing: 0; padding: 0; margin: 0; border: 0; width: auto; overflow: auto; display: block; }
/* LineHighlight */ .chroma .hl { display: block; width: 100%;background-color: #ffffcc }
/* LineNumbersTable */ .chroma .lnt { margin-right: 0.4em; padding: 0 0.4em 0 0.4em;color: #7f7f7f }
/* LineNumbers */ .chroma .ln { margin-right: 0.4em; padding: 0 0.4em 0 0.4em;color: #7f7f7f }
/* Keyword */ .chroma .k { color: #66d9ef }
/* KeywordConstant */ .chroma .kc { color: #66d9ef }
/* KeywordDeclaration */ .chroma .kd { color: #66d9ef }
/* KeywordNamespace */ .chroma .kn { color: #f92672 }
/* KeywordPseudo */ .chroma .kp { color: #66d9ef }
/* KeywordReserved */ .chroma .kr { color: #66d9ef }
/* KeywordType */ .chroma .kt { color: #66d9ef }
/* NameAttribute */ .chroma .na { color: #a6e22e }
/* NameClass */ .chroma .nc { color: #a6e22e }
/* NameConstant */ .chroma .no { color: #66d9ef }
/* NameDecorator */ .chroma .nd { color: #a6e22e }
/* NameException */ .chroma .ne { color: #a6e22e }
/* NameFunction */ .chroma .nf { color: #a6e22e }
/* NameOther */ .chroma .nx { color: #a6e22e }
/* NameTag */ .chroma .nt { color: #f92672 }
/* Literal */ .chroma .l { color: #ae81ff }
/* LiteralDate */ .chroma .ld { color: #e6db74 }
/* LiteralString */ .chroma .s { color: #e6db74 }
/* LiteralStringAffix */ .chroma .sa { color: #e6db74 }
/* LiteralStringBacktick */ .chroma .sb { color: #e6db74 }
/* LiteralStringChar */ .chroma .sc { color: #e6db74 }
/* LiteralStringDelimiter */ .chroma .dl { color: #e6db74 }
/* LiteralStringDoc */ .chroma .sd { color: #e6db74 }
/* LiteralStringDouble */ .chroma .s2 { color: #e6db74 }
/* LiteralStringEscape */ .chroma .se { color: #ae81ff }
/* LiteralStringHeredoc */ .chroma .sh { color: #e6db74 }
/* LiteralStringInterpol */ .chroma .si { color: #e6db74 }
/* LiteralStringOther */ .chroma .sx { color: #e6db74 }
/* LiteralStringRegex */ .chroma .sr { color: #e6db74 }
/* LiteralStringSingle */ .chroma .s1 { color: #e6db74 }
/* LiteralStringSymbol */ .chroma .ss { color: #e6db74 }
/* LiteralNumber */ .chroma .m { color: #ae81ff }
/* LiteralNumberBin */ .chroma .mb { color: #ae81ff }
/* LiteralNumberFloat */ .chroma .mf { color: #ae81ff }
/* LiteralNumberHex */ .chroma .mh { color: #ae81ff }
/* LiteralNumberInteger */ .chroma .mi { color: #ae81ff }
/* LiteralNumberIntegerLong */ .chroma .il { color: #ae81ff }
/* LiteralNumberOct */ .chroma .mo { color: #ae81ff }
/* Operator */ .chroma .o { color: #f92672 }
/* OperatorWord */ .chroma .ow { color: #f92672 }
/* Comment */ .chroma .c { color: #75715e }
/* CommentHashbang */ .chroma .ch { color: #75715e }
/* CommentMultiline */ .chroma .cm { color: #75715e }
/* CommentSingle */ .chroma .c1 { color: #75715e }
/* CommentSpecial */ .chroma .cs { color: #75715e }
/* CommentPreproc */ .chroma .cp { color: #75715e }
/* CommentPreprocFile */ .chroma .cpf { color: #75715e }
/* GenericDeleted */ .chroma .gd { color: #f92672 }
/* GenericEmph */ .chroma .ge { font-style: italic }
/* GenericInserted */ .chroma .gi { color: #a6e22e }
/* GenericStrong */ .chroma .gs { font-weight: bold }
/* GenericSubheading */ .chroma .gu { color: #75715e }

View File

@ -1,38 +0,0 @@
baseURL = "https://example.com"
title = "Website Name"
theme = "etch"
languageCode = "en-US"
enableInlineShortcodes = true
pygmentsCodeFences = true
pygmentsUseClasses = true
[params]
description = "Your site description"
copyright = "Copyright © 2021 Your Name"
dark = "auto"
highlight = true
[menu]
[[menu.main]]
identifier = "posts"
name = "posts"
title = "posts"
url = "/"
weight = 10
[[menu.main]]
identifier = "about"
name = "about"
title = "about"
url = "/about/"
weight = 20
[permalinks]
posts = "/:title/"
[markup.goldmark.renderer]
# Allow HTML in Markdown
unsafe = true
[markup.tableOfContents]
ordered = true

View File

@ -1,4 +0,0 @@
---
title: "Home"
---
This is some info about me.

View File

@ -1,21 +0,0 @@
+++
title = "About"
+++
Written in Go, Hugo is an open source static site generator available under the [Apache License 2.0](https://github.com/gohugoio/hugo/blob/master/LICENSE). Hugo supports TOML, YAML and JSON data file types, Markdown and HTML content files and uses shortcodes to add rich content. Other notable features are taxonomies, multilingual mode, image processing, custom output formats, HTML/CSS/JS minification and support for Sass/SCSS workflows.
Hugo makes use of a variety of open source projects including:
* https://github.com/yuin/goldmark
* https://github.com/alecthomas/chroma
* https://github.com/muesli/smartcrop
* https://github.com/spf13/cobra
* https://github.com/spf13/viper
Hugo is ideal for blogs, corporate websites, creative portfolios, online magazines, single page applications or even a website with thousands of pages.
Hugo is for people who want to hand code their own website without worrying about setting up complicated runtimes, dependencies and databases.
Websites built with Hugo are extremely fast and secure, and can be deployed anywhere, including AWS, GitHub Pages, Heroku, Netlify and any other hosting provider.
Learn more and contribute on [GitHub](https://github.com/gohugoio).

View File

@ -1,47 +0,0 @@
+++
author = "Hugo Authors"
title = "Emoji Support"
date = "2019-03-05"
description = "Guide to emoji usage in Hugo"
tags = [
"emoji",
]
+++
Emoji can be enabled in a Hugo project in a number of ways.
<!--more-->
The [`emojify`](https://gohugo.io/functions/emojify/) function can be called directly in templates or [Inline Shortcodes](https://gohugo.io/templates/shortcode-templates/#inline-shortcodes).
To enable emoji globally, set `enableEmoji` to `true` in your site's [configuration](https://gohugo.io/getting-started/configuration/) and then you can type emoji shorthand codes directly in content files; e.g.
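As a concrete sketch, assuming a TOML site configuration, the global switch is a single line:

```toml
# config.toml — enable emoji shorthand codes site-wide
enableEmoji = true
```

After that, a shorthand code such as `:see_no_evil:` typed in a content file is rendered as the corresponding emoji.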
<p><span class="nowrap"><span class="emojify">🙈</span> <code>:see_no_evil:</code></span> <span class="nowrap"><span class="emojify">🙉</span> <code>:hear_no_evil:</code></span> <span class="nowrap"><span class="emojify">🙊</span> <code>:speak_no_evil:</code></span></p>
<br>
The [Emoji cheat sheet](http://www.emoji-cheat-sheet.com/) is a useful reference for emoji shorthand codes.
***
**N.B.** The above steps enable Unicode Standard emoji characters and sequences in Hugo; however, the rendering of these glyphs depends on the browser and the platform. To style the emoji, you can use either a third-party emoji font or a font stack; e.g.
{{< highlight html >}}
.emoji {
font-family: Apple Color Emoji,Segoe UI Emoji,NotoColorEmoji,Segoe UI Symbol,Android Emoji,EmojiSymbols;
}
{{< /highlight >}}
{{< css.inline >}}
<style>
.emojify {
font-family: Apple Color Emoji,Segoe UI Emoji,NotoColorEmoji,Segoe UI Symbol,Android Emoji,EmojiSymbols;
font-size: 2rem;
vertical-align: middle;
}
@media screen and (max-width:650px) {
.nowrap {
display: block;
margin: 25px 0;
}
}
</style>
{{< /css.inline >}}

View File

@ -1,147 +0,0 @@
+++
author = "Hugo Authors"
title = "Markdown Syntax Guide"
date = "2019-03-11"
description = "Sample article showcasing basic Markdown syntax and formatting for HTML elements."
tags = [
"markdown",
"css",
"html",
"themes",
]
categories = [
"themes",
"syntax",
]
series = ["Themes Guide"]
aliases = ["migrate-from-jekyl"]
+++
This article offers a sample of basic Markdown syntax that can be used in Hugo content files, and it also shows whether basic HTML elements are decorated with CSS in a Hugo theme.
<!--more-->
## Headings
The following HTML `<h1>`—`<h6>` elements represent six levels of section headings. `<h1>` is the highest section level while `<h6>` is the lowest.
# H1
## H2
### H3
#### H4
##### H5
###### H6
## Paragraph
Xerum, quo qui aut unt expliquam qui dolut labo. Aque venitatiusda cum, voluptionse latur sitiae dolessi aut parist aut dollo enim qui voluptate ma dolestendit peritin re plis aut quas inctum laceat est volestemque commosa as cus endigna tectur, offic to cor sequas etum rerum idem sintibus eiur? Quianimin porecus evelectur, cum que nis nust voloribus ratem aut omnimi, sitatur? Quiatem. Nam, omnis sum am facea corem alique molestrunt et eos evelece arcillit ut aut eos eos nus, sin conecerem erum fuga. Ri oditatquam, ad quibus unda veliamenimin cusam et facea ipsamus es exerum sitate dolores editium rerore eost, temped molorro ratiae volorro te reribus dolorer sperchicium faceata tiustia prat.
Itatur? Quiatae cullecum rem ent aut odis in re eossequodi nonsequ idebis ne sapicia is sinveli squiatum, core et que aut hariosam ex eat.
## Blockquotes
The blockquote element represents content that is quoted from another source, optionally with a citation which must be within a `footer` or `cite` element, and optionally with in-line changes such as annotations and abbreviations.
#### Blockquote without attribution
> Tiam, ad mint andaepu dandae nostion secatur sequo quae.
> **Note** that you can use *Markdown syntax* within a blockquote.
#### Blockquote with attribution
> Don't communicate by sharing memory, share memory by communicating.
> — <cite>Rob Pike[^1]</cite>
[^1]: The above quote is excerpted from Rob Pike's [talk](https://www.youtube.com/watch?v=PAAkCSZUG1c) during Gopherfest, November 18, 2015.
## Tables
Tables aren't part of the core Markdown spec, but Hugo supports them out of the box.
Name | Age
--------|------
Bob | 27
Alice | 23
#### Inline Markdown within tables
| Inline&nbsp;&nbsp;&nbsp; | Markdown&nbsp;&nbsp;&nbsp; | In&nbsp;&nbsp;&nbsp; | Table |
| ---------- | --------- | ----------------- | ---------- |
| *italics* | **bold** | ~~strikethrough~~&nbsp;&nbsp;&nbsp; | `code` |
## Code Blocks
#### Code block with backticks
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Example HTML5 Document</title>
</head>
<body>
<p>Test</p>
</body>
</html>
```
#### Code block indented with four spaces
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Example HTML5 Document</title>
</head>
<body>
<p>Test</p>
</body>
</html>
#### Code block with Hugo's internal highlight shortcode
{{< highlight html >}}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Example HTML5 Document</title>
</head>
<body>
<p>Test</p>
</body>
</html>
{{< /highlight >}}
## List Types
#### Ordered List
1. First item
2. Second item
3. Third item
#### Unordered List
* List item
* Another item
* And another item
#### Nested list
* Item
1. First Sub-item
2. Second Sub-item
## Other Elements — abbr, sub, sup, kbd, mark
<abbr title="Graphics Interchange Format">GIF</abbr> is a bitmap image format.
H<sub>2</sub>O
X<sup>n</sup> + Y<sup>n</sup> = Z<sup>n</sup>
Press <kbd><kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>Delete</kbd></kbd> to end the session.
Most <mark>salamanders</mark> are nocturnal, and hunt for insects, worms, and other small creatures.

View File

@ -1,58 +0,0 @@
+++
author = "Hugo Authors"
title = "Placeholder Text"
date = "2019-03-09"
description = "Lorem Ipsum Dolor Si Amet"
tags = [
"markdown",
"text",
]
+++
Lorem est tota propiore conpellat pectoribus de
pectora summo. <!--more-->Redit teque digerit hominumque toris verebor lumina non cervice
subde tollit usus habet Arctonque, furores quas nec ferunt. Quoque montibus nunc
caluere tempus inhospita parcite confusaque translucet patri vestro qui optatis
lumine cognoscere flos nubis! Fronde ipsamque patulos Dryopen deorum.
1. Exierant elisi ambit vivere dedere
2. Duce pollice
3. Eris modo
4. Spargitque ferrea quos palude
Rursus nulli murmur; hastile inridet ut ab gravi sententia! Nomine potitus
silentia flumen, sustinet placuit petis in dilapsa erat sunt. Atria
tractus malis.
1. Comas hunc haec pietate fetum procerum dixit
2. Post torum vates letum Tiresia
3. Flumen querellas
4. Arcanaque montibus omnes
5. Quidem et
# Vagus elidunt
<svg class="canon" xmlns="http://www.w3.org/2000/svg" overflow="visible" viewBox="0 0 496 373" height="373" width="496"><g fill="none"><path stroke="#000" stroke-width=".75" d="M.599 372.348L495.263 1.206M.312.633l494.95 370.853M.312 372.633L247.643.92M248.502.92l246.76 370.566M330.828 123.869V1.134M330.396 1.134L165.104 124.515"></path><path stroke="#ED1C24" stroke-width=".75" d="M275.73 41.616h166.224v249.05H275.73zM54.478 41.616h166.225v249.052H54.478z"></path><path stroke="#000" stroke-width=".75" d="M.479.375h495v372h-495zM247.979.875v372"></path><ellipse cx="498.729" cy="177.625" rx=".75" ry="1.25"></ellipse><ellipse cx="247.229" cy="377.375" rx=".75" ry="1.25"></ellipse></g></svg>
[The Van de Graaf Canon](https://en.wikipedia.org/wiki/Canons_of_page_construction#Van_de_Graaf_canon)
## Mane refeci capiebant unda mulcebat
Victa caducifer, malo vulnere contra
dicere aurato, ludit regale, voca! Retorsit colit est profanae esse virescere
furit nec; iaculi matertera et visa est, viribus. Divesque creatis, tecta novat collumque vulnus est, parvas. **Faces illo pepulere** tempus adest. Tendit flamma, ab opes virum sustinet, sidus sequendo urbis.
Iubar proles corpore raptos vero auctor imperium; sed et huic: manus caeli
Lelegas tu lux. Verbis obstitit intus oblectamina fixis linguisque ausus sperare
Echionides cornuaque tenent clausit possit. Omnia putatur. Praeteritae refert
ausus; ferebant e primus lora nutat, vici quae mea ipse. Et iter nil spectatae
vulnus haerentia iuste et exercebat, sui et.
Eurytus Hector, materna ipsumque ut Politen, nec, nate, ignari, vernum cohaesit sequitur. Vel **mitis temploque** vocatus, inque alis, *oculos nomen* non silvis corpore coniunx ne displicet illa. Crescunt non unus, vidit visa quantum inmiti flumina mortis facto sic: undique a alios vincula sunt iactata abdita! Suspenderat ego fuit tendit: luna, ante urbem
Propoetides **parte**.
{{< css.inline >}}
<style>
.canon { background: white; width: 100%; height: auto;}
</style>
{{< /css.inline >}}

View File

@ -1,34 +0,0 @@
+++
author = "Hugo Authors"
title = "Rich Content"
date = "2019-03-10"
description = "A brief description of Hugo Shortcodes"
tags = [
"shortcodes",
"privacy",
]
+++
Hugo ships with several [Built-in Shortcodes](https://gohugo.io/content-management/shortcodes/#use-hugo-s-built-in-shortcodes) for rich content, along with a [Privacy Config](https://gohugo.io/about/hugo-and-gdpr/) and a set of Simple Shortcodes that enable static and no-JS versions of various social media embeds.
<!--more-->
---
## YouTube Privacy Enhanced Shortcode
{{< youtube ZJthWmvUzzc >}}
<br>
---
## Twitter Shortcode
{{< tweet user="DesignReviewed" id="1085870671291310081" >}}
<br>
---
## Vimeo Simple Shortcode
{{< vimeo_simple 48912912 >}}

View File

@ -1,19 +0,0 @@
# Learn how to use Date format (date, created, updated)
# -> https://gohugo.io/functions/dateformat/
[posts]
[posts.title]
other = "Posts"
[posts.date]
other = "Jan 2, 2006"
[post]
[post.created]
other = "January 2, 2006"
[post.updated]
other = "Updated January 2, 2006"

Binary file not shown.

Before

Width:  |  Height:  |  Size: 62 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 61 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 87 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 43 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 43 KiB

View File

@ -1,11 +0,0 @@
<!DOCTYPE html>
<html>
{{- partial "head.html" . -}}
<body>
{{- partial "header.html" . -}}
<main id="content">
{{- block "main" . }}{{- end }}
</main>
{{- partial "footer.html" . -}}
</body>
</html>

View File

@ -1,6 +0,0 @@
<li>
<a href="{{ .Permalink }}">
{{ .Title }}
<small><time>{{ .Date | time.Format (i18n "posts.date") }}</time></small>
</a>
</li>

View File

@ -1,21 +0,0 @@
{{ define "main" }}
<article>
<header id="post-header">
<h1>{{ .Title }}</h1>
<div>
{{- if compare.Ne .Parent.Title "Home" -}}
Part of the <a href="{{ .Parent.Permalink }}">{{ .Parent.Title }}</a> series
<br>
{{- end -}}
{{- if isset .Params "date" -}}
{{ if eq .Lastmod .Date }}
<time>{{ .Date | time.Format (i18n "post.created") }}</time>
{{ else }}
<time>{{ .Lastmod | time.Format (i18n "post.updated") }}</time>
{{ end }}
{{- end -}}
</div>
</header>
{{- .Content -}}
</article>
{{ end }}

View File

@ -1,8 +0,0 @@
{{ define "main" }}
<h3>{{ .Title }}</h3>
<ul id="posts">
{{- range .Pages }}
{{ .Render "li" }}
{{- end }}
</ul>
{{ end }}

View File

@ -1,9 +0,0 @@
{{ define "main" }}
{{ .Content }}
<h3>{{ i18n "posts.title" }}</h3>
<ul id="posts">
{{- range .Pages }}
{{ .Render "li" }}
{{- end }}
</ul>
{{ end }}

View File

@ -1,12 +0,0 @@
{{ define "main" }}
{{ .Content }}
<h3>Projects</h3>
<ul id="posts">
{{ range .Pages }}
<li>
<a href="{{ .RelPermalink }}">{{ .Title }}</a>
<small>{{ .Summary }}</small>
</li>
{{ end }}
</ul>
{{ end }}

View File

@ -1,9 +0,0 @@
{{ define "main" }}
{{ .Content }}
<h3>Latest posts</h3>
<ul id="posts">
{{ range first 15 .Site.RegularPages }}
{{ .Render "li" }}
{{ end }}
</ul>
{{ end }}

View File

@ -1,3 +0,0 @@
<footer id="footer">
{{ .Site.Params.copyright }}
</footer>

View File

@ -1,31 +0,0 @@
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
{{ with .Site.Params.description -}}
<meta name="description" content="{{ . }}">
{{ end }}
{{ printf `<link rel="shortcut icon" href="%s">` ("favicon.ico" | absURL) | safeHTML }}
{{ with .OutputFormats.Get "rss" -}}
{{ printf `<link rel="%s" type="%s" href="%s" title="%s">` .Rel .MediaType.Type .Permalink $.Site.Title | safeHTML }}
{{ end -}}
{{ $resources := slice -}}
{{ $resources = $resources | append (resources.Get "css/main.css") -}}
{{ $resources = $resources | append (resources.Get "css/min770px.css") -}}
{{ $dark := .Site.Params.dark | default "auto" -}}
{{ if not (eq $dark "off") -}}
{{ $resources = $resources | append (resources.Get "css/dark.css" | resources.ExecuteAsTemplate "dark.css" .) -}}
{{ end -}}
{{ if .Site.Params.highlight -}}
{{ $resources = $resources | append (resources.Get "css/syntax.css") -}}
{{ end -}}
{{ $css := $resources | resources.Concat "css/style.css" | minify }}
{{ printf `<link rel="stylesheet" href="%s">` $css.RelPermalink | safeHTML }}
<title>{{ .Title }}</title>
</head>

View File

@ -1,12 +0,0 @@
<header id="banner">
<h2><a href="{{ .Site.BaseURL }}">{{ .Site.Title }}</a></h2>
<nav>
<ul>
{{ range .Site.Menus.main.ByWeight -}}
<li>
{{ .Pre }}<a href="{{ .URL }}" title="{{ .Title }}">{{- .Name -}}</a>{{ .Post }}
</li>
{{- end }}
</ul>
</nav>
</header>

View File

@ -1,6 +0,0 @@
<h3>{{ i18n "posts.title" }}</h3>
<ul id="posts">
{{- range .Pages }}
{{ .Render "li" }}
{{- end }}
</ul>

View File

@ -1,26 +0,0 @@
{{ define "main" }}
<h2>{{ .Title }}</h2>
<header id="links">
<nav>
<ul>
{{ range .Page.Params.links -}}
<li>
<a href="{{ .url }}" title="{{ .name }}">{{- .name -}}</a>
</li>
{{- end }}
</ul>
</nav>
</header>
{{ .Content }}
{{ if compare.Gt .Pages.Len 0 }}
<h3>{{ i18n "posts.title" }}</h3>
<ul id="posts">
{{- range .Pages }}
{{ .Render "li" }}
{{- end }}
</ul>
{{ end }}
{{ end }}

View File

@ -1,4 +0,0 @@
<aside id="toc">
<h4>Table of Contents</h4>
{{ .Page.TableOfContents }}
</aside>

View File

@ -1,13 +0,0 @@
name = "Etch"
license = "MIT"
licenselink = "https://github.com/LukasJoswiak/etch/blob/master/LICENSE"
description = "Lightweight Hugo theme with a focus on content"
homepage = "https://github.com/LukasJoswiak/etch"
demosite = "https://lukasjoswiak.github.io/etch/"
tags = ["simple", "minimal", "clean", "fast", "blog", "responsive", "dark mode", "privacy"]
features = ["fast", "blog", "syntax highlighting", "dark mode"]
min_version = "0.41"
[author]
name = "Lukas Joswiak"
homepage = "https://lukasjoswiak.com"