Compare commits

...

26 Commits

Author SHA1 Message Date
Jef Roosens bc6ae2bea8
Merge branch 'master' into woodpecker 2021-04-23 23:24:45 +02:00
Jef Roosens e5b35867bf
Added monica nginx config 2021-04-23 22:41:03 +02:00
Jef Roosens f0e5e83c19
Fixed koel nginx config 2021-04-23 21:51:47 +02:00
Jef Roosens d7d74235ad
Fixed some nginx bugs 2021-04-23 21:46:35 +02:00
Jef Roosens 3e303f678f
Moved sites-enabled to templates dir 2021-04-23 21:36:11 +02:00
Jef Roosens beb9014b94
Added index volume for koel 2021-04-23 21:19:33 +02:00
Jef Roosens e21a135c7d
Added podgrab nginx config 2021-04-23 21:06:42 +02:00
Jef Roosens ac5f944770
Added podgrab docker config 2021-04-23 17:03:53 +02:00
Jef Roosens 4e246adf4d
Merge branch 'master' of git.roosens.me:Chewing_Bever/self-hosting 2021-04-23 16:38:48 +02:00
Jef Roosens 1d500be41f
Switched to mariadb 2021-04-23 16:38:39 +02:00
Jef Roosens c2aa8183f7 Merge pull request 'Update redis Docker tag to v6.2.2' (#10) from renovate/docker-redis-6.x into master
Reviewed-on: https://git.roosens.me/Chewing_Bever/self-hosting/pulls/10
2021-04-23 16:37:47 +02:00
Jef Roosens ddb8555c7b
Added 'restart: always' to nginx 2021-04-23 16:34:14 +02:00
Jef Roosens 0a6ffbf67d
Added initial gitea config 2021-04-23 16:32:54 +02:00
Jef Roosens d13573f87d
Completely revamped nginx config 2021-04-23 16:26:32 +02:00
Renovate Bot e51a1fd8eb Update redis Docker tag to v6.2.2 2021-04-23 14:00:47 +00:00
Jef Roosens 3411f3d0a9
Modernized TShock config 2021-04-23 15:16:51 +02:00
Jef Roosens 608b4fbe90
Updated Portainer config 2021-04-23 15:05:14 +02:00
Jef Roosens 0b85900b71
Improved Nextcloud config 2021-04-23 15:02:42 +02:00
Jef Roosens 4a4437683e
Added Monica config 2021-04-23 14:57:04 +02:00
Jef Roosens e483ab0d7a
Improved miniflux config 2021-04-23 14:53:05 +02:00
Jef Roosens 92094ff5fc
Lowered required dc version for minecraft 2021-04-23 14:43:53 +02:00
Jef Roosens 7e4bb004e0
Modernized Koel config 2021-04-23 14:38:13 +02:00
Jef Roosens 0172b193a1
Modernized Firefly config 2021-04-23 14:29:34 +02:00
Jef Roosens 94bd72ee39
Removed outdated backup tool 2021-04-23 14:29:23 +02:00
Jef Roosens 57a0248236 Merge pull request 'Configure Renovate' (#3) from renovate/configure into master
Reviewed-on: https://git.roosens.me/Chewing_Bever/self-hosting/pulls/3
2021-04-20 16:02:20 +02:00
Renovate Bot 14ef895433 Add renovate.json 2021-04-20 14:01:30 +00:00
58 changed files with 719 additions and 615 deletions


@@ -1,24 +1,5 @@
-<!---
-Copyright (C) 2020 Jef Roosens
-
-This program is free software: you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation, either version 3 of the License, or
-(at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program. If not, see <https://www.gnu.org/licenses/>.
--->
 # self-hosting
 
 # Contents
 The repo contains setup guides for the following:
@@ -35,9 +16,11 @@ Each directory contains (or will contain) its own `README.md` to aid with the
 installation of that specific setup.
 
 # General info
 This info applies to all configs.
 
 ## Docker
 All the setups named above use Docker and docker-compose. If you're on a
 Linux-based server, you can find both `docker` and `docker-compose` in your
 package manager (do note that the Docker package might be called `docker.io`).
@@ -45,23 +28,27 @@ Otherwise, the install instructions can be found
 [here](https://docs.docker.com/engine/install/).
 
 ## Configuration
 Most configuration can be done using a `.env` file with a provided
 `.env.example` file to start from. This means that you never have to edit the
 compose files, unless you wish to deviate from the default format.
 
 ## Building the image
 You can build the container image using the command `docker-compose build`.
 This will build all services specified in the `docker-compose.yml` file. Any
 build configuration/environment variables can be defined in a `.env` file. A
 `.env.example` file is given for each configuration.
 
 ## Running the container
 For running the server, we can use `docker-compose up -d`. This will start the
 service in the background. You can then see any logs using
 `docker-compose logs`. If you want the logs to update automatically, use
 `docker-compose logs -f`.
 
 # Why did I make this?
 Well, I just wanted to put all my knowledge in one basket. this makes it easier
 to manage and share with others. I spend a lot of time tweaking these configs
 and figuring out how they work best (for me at least), and wanted to share this
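The README's configure/build/run steps, condensed into one hedged shell sketch — the `gitea` directory is only an example here; per the README, any of the config directories follows the same pattern:

```sh
# Pick any service directory (gitea is a hypothetical example)
cd gitea

# Start from the provided example environment file, then adjust values
cp .env.example .env

# Build all services defined in docker-compose.yml
docker-compose build

# Start in the background, then follow the logs
docker-compose up -d
docker-compose logs -f
```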

backups/.gitignore vendored

@@ -1,2 +0,0 @@
__pycache__/
backup_tool


@@ -1,4 +0,0 @@
# Backups
I wrote this Python program to manage backups of the stuff running on our
server. I know there are probably better ways to do this, but I really liked
working on this and it works well enough for our use case.


@@ -1,41 +0,0 @@
import argparse
import sys

from specs import parse_specs_file


# This just displays the error type and message, not the stack trace
def except_hook(ext_type, value, traceback):
    sys.stderr.write("{}: {}\n".format(ext_type.__name__, value))


sys.excepthook = except_hook

# Define parser
parser = argparse.ArgumentParser(
    description='Backup directories and Docker volumes.')
parser.add_argument('-f', '--file', action='append', dest='file',
                    help='File containing spec definitions.')
parser.add_argument('-j', '--json', action='store_const', const=True,
                    default=False, help='Print out the parsed specs as JSON '
                                        'and exit')
parser.add_argument('spec', nargs='*',
                    help='The specs to process. Defaults to all.')

# Parse arguments
args = parser.parse_args()
specs = sum([parse_specs_file(path) for path in args.file], [])

# Filter specs if needed
if args.spec:
    specs = filter(lambda s: s.name in args.spec, specs)

# Dump parsed data as json
if args.json:
    import json

    print(json.dumps([spec.to_dict() for spec in specs], indent=4))

else:
    pass
    # Run the backups
    # for spec in specs:
    #     spec.backup()
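The `sys.excepthook` trick in the deleted script replaces full tracebacks with a single "ErrorType: message" line. It can be demonstrated in isolation; this sketch calls the hook directly with a synthetic error (the message is made up):

```python
import io
import sys


# Same shape as the hook above: print only the error type and message
def except_hook(ext_type, value, traceback):
    sys.stderr.write("{}: {}\n".format(ext_type.__name__, value))


# Invoke the hook directly, capturing stderr to show the output format
buf = io.StringIO()
old_stderr, sys.stderr = sys.stderr, buf
except_hook(ValueError, ValueError("no specs file given"), None)
sys.stderr = old_stderr

print(buf.getvalue(), end='')  # ValueError: no specs file given
```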


@@ -1,2 +0,0 @@
from .specs import Spec
from .parser import parse_specs_file


@@ -1,114 +0,0 @@
import yaml

from pathlib import Path
from specs import Spec
from typing import List, Dict


class InvalidKeyError(Exception):
    def __init__(self, key):
        message = "Invalid key: {}".format(key)

        super().__init__(message)


class MissingKeyError(Exception):
    def __init__(self, key):
        message = "Missing key: {}".format(key)

        super().__init__(message)


def parse_specs_file(path: Path) -> List[Spec]:
    """
    Parse a YAML file defining backup specs.

    Args:
        path: path to the specs file

    Returns:
        A list of specs
    """

    # Skeleton of a spec config
    # If a value is None, this means it doesn't have a default value and
    # must be defined
    spec_skel = {
        "source": None,
        "destination": None,
        "limit": None,
        "volume": False,
        "notify": {
            "title": "Backup Notification",
            "events": ["failure"]
        }
    }

    # Read YAML file
    with open(path, "r") as yaml_file:
        data = yaml.load(yaml_file, Loader=yaml.Loader)

    # Check specs section exists
    if "specs" not in data:
        raise MissingKeyError("specs")

    # Allow for default notify settings
    if "notify" in data:
        spec_skel["notify"] = data["notify"]

    specs = []

    # Check format for each spec
    for key in data["specs"]:
        specs.append(Spec.from_dict(key, combine_with_skeleton(
            data["specs"][key], spec_skel)
        ))

    return specs


def combine_with_skeleton(data: Dict, skel: Dict) -> Dict:
    """
    Compare a dict with a given skeleton dict, and fill in default values
    where needed.
    """

    # First, check for illegal keys
    for key in data:
        if key not in skel:
            raise InvalidKeyError(key)

    # Then, check the default values
    for key, value in skel.items():
        if key not in data:
            # Raise error if there's no default value
            if value is None:
                raise MissingKeyError(key)

            # Replace with default value
            data[key] = value

        # Error if value is not same type as default value
        elif type(data[key]) != type(value) and value is not None:
            raise TypeError("Invalid value type")

        # Recurse into dicts
        elif type(value) == dict:
            data[key] = combine_with_skeleton(data[key], value)

    return data


# Test cases
if __name__ == "__main__":
    d1 = {
        "a": 5
    }
    s1 = {
        "a": 7,
        "b": 2
    }
    r1 = {
        "a": 5,
        "b": 2
    }

    assert combine_with_skeleton(d1, s1) == r1
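The skeleton-merge idea in the deleted parser — reject unknown keys, require keys whose default is `None`, fill in defaults, and recurse into nested dicts — can be rerun standalone. A condensed sketch on a hypothetical spec; the skeleton mirrors `spec_skel` above, and the error types are simplified to `KeyError`:

```python
def combine_with_skeleton(data, skel):
    # Reject keys the skeleton doesn't know about
    for key in data:
        if key not in skel:
            raise KeyError("invalid key: {}".format(key))

    for key, value in skel.items():
        if key not in data:
            # A None default means the key is required
            if value is None:
                raise KeyError("missing key: {}".format(key))
            data[key] = value
        elif isinstance(value, dict):
            # Recurse so nested defaults (e.g. notify.title) are filled too
            data[key] = combine_with_skeleton(data[key], value)

    return data


spec_skel = {
    "source": None,
    "destination": None,
    "limit": None,
    "volume": False,
    "notify": {"title": "Backup Notification", "events": ["failure"]},
}

# Hypothetical spec: overrides notify.events, omits volume and notify.title
spec = combine_with_skeleton(
    {"source": "/data/gitea", "destination": "/backups/gitea", "limit": 4,
     "notify": {"events": ["failure", "success"]}},
    spec_skel,
)
print(spec["volume"])            # False
print(spec["notify"]["title"])   # Backup Notification
print(spec["notify"]["events"])  # ['failure', 'success']
```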


@@ -1,146 +0,0 @@
from pathlib import Path
from datetime import datetime

import requests
import os


class Spec:
    def __init__(self, name, destination, limit, title, events=None):
        self.name = name
        self.destination = Path(destination)
        self.limit = limit
        self.title = title
        self.events = [] if events is None else events

    def to_dict(self):
        return {
            "name": self.name,
            "destination": str(self.destination),
            "limit": self.limit,
            "notify": {
                "title": self.title,
                "events": self.events
            }
        }

    def backup(self):
        raise NotImplementedError()

    def remove_redundant(self):
        tarballs = sorted(self.destination.glob('*.tar.gz'),
                          key=os.path.getmtime, reverse=True)

        if len(tarballs) >= self.limit:
            for path in tarballs[self.limit - 1:]:
                path.unlink()

    def notify(self, status_code):
        if status_code:
            if "failure" not in self.events:
                return

            message = "backup for {} failed.".format(self.name)
        else:
            if "success" not in self.events:
                return

            message = "backup for {} succeeded.".format(self.name)

        # Read API key from env vars
        try:
            key = os.environ["IFTTT_API_KEY"]

        # Don't send notification if there's no API key defined
        except KeyError:
            return

        url = "https://maker.ifttt.com/trigger/{}/with/key/{}".format(
            "phone_notifications",
            key
        )
        data = {
            "value1": self.title,
            "value2": message
        }

        requests.post(url, data=data)

    def get_filename(self):
        return '{}_{}.tar.gz'.format(
            self.name,
            datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
        )

    @staticmethod
    def from_dict(name, data) -> "Spec":
        if data.get("volume", False):
            return VolumeSpec.from_dict(name, data)

        return DirSpec.from_dict(name, data)

    @staticmethod
    def from_file(path: str):
        with open(path, 'r') as yaml_file:
            data = yaml.load(yaml_file, Loader=yaml.Loader)

        return [Spec.from_dict(name, info)
                for name, info in data["specs"].items()]


class DirSpec(Spec):
    def __init__(self, name, source, destination, limit, title, events=None):
        super().__init__(name, destination, limit, title, events)

        self.source = Path(source)

    def backup(self):
        self.remove_redundant()

        status_code = os.system(
            "tar -C '{}' -czf '{}' -- .".format(
                self.source,
                self.destination / self.get_filename()
            )
        )

        self.notify(status_code)

    @staticmethod
    def from_dict(name, data):
        return DirSpec(
            name,
            data["source"],
            data["destination"],
            data["limit"],
            data["notify"]["title"],
            data["notify"]["events"]
        )


class VolumeSpec(Spec):
    def __init__(self, name, volume, destination, limit, title, events=None):
        super().__init__(name, destination, limit, title, events)

        self.volume = volume

    def backup(self):
        status_code = os.system(
            "docker run --rm -v '{}:/from' -v '{}:/to' alpine:latest "
            "tar -C /from -czf '/to/{}' -- .".format(
                self.volume,
                self.destination,
                self.get_filename()
            )
        )

    @staticmethod
    def from_dict(name, data):
        return VolumeSpec(
            name,
            data["source"],
            data["destination"],
            data["limit"],
            data["notify"]["title"],
            data["notify"]["events"]
        )
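The rotation in `remove_redundant` above keeps the newest `limit - 1` archives so the upcoming backup brings the total back to `limit`. A standalone sketch with throwaway files (the file names and limit are hypothetical):

```python
import os
import tempfile
from pathlib import Path


def remove_redundant(destination: Path, limit: int):
    # Newest first, by modification time
    tarballs = sorted(destination.glob('*.tar.gz'),
                      key=os.path.getmtime, reverse=True)

    # Keep limit - 1 archives; the next backup restores the total to `limit`
    if len(tarballs) >= limit:
        for path in tarballs[limit - 1:]:
            path.unlink()


dest = Path(tempfile.mkdtemp())
for i in range(5):
    p = dest / 'backup_{}.tar.gz'.format(i)
    p.touch()
    os.utime(p, (i, i))  # deterministic, increasing mtimes

remove_redundant(dest, limit=3)
print(sorted(p.name for p in dest.glob('*.tar.gz')))
# ['backup_3.tar.gz', 'backup_4.tar.gz']
```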


@@ -1,15 +0,0 @@
notify:
  title: "title"
  events:
    - 'random'

specs:
  test-spec:
    source: '/some/path'
    destination: '/some/other/path'
    limit: 7

  test-2:
    source: '/path/to'
    destination: '/to/some/other/path'
    limit: 2


@@ -1,14 +0,0 @@
#!/usr/bin/env sh
# Zip app
(cd app && zip -r ../app.zip * -x "__pycache__/*" "**/__pycache__/*" ".vim/*" "**/.vim/*")
# Add shebang to top of file
echo "#!/usr/bin/env python3" | cat - app.zip > backup_tool
chmod a+x backup_tool
# Move executable over
mv backup_tool /usr/local/bin
# Remove zip
rm app.zip


@@ -63,7 +63,7 @@ DB_HOST=db
 DB_PORT=5432
 DB_DATABASE=firefly
 DB_USERNAME=firefly
-DB_PASSWORD=password
+DB_PASSWORD=firefly
 
 # MySQL supports SSL. You can configure it here.
 # If you use Docker or similar, you can set these variables from a file by appending them with _FILE


@@ -1,4 +1,4 @@
-version: '2.8'
+version: '2.4'
 services:
   app:
@@ -6,24 +6,23 @@ services:
       context: '.'
       args:
         - 'LOCALE=$DEFAULT_LOCALE'
-    image: 'firefly-iii-cron:latest'
+    image: 'chewingbever/firefly-iii-cron:latest'
     restart: 'always'
     healthcheck:
       test: 'curl -f localhost:8080 || exit 1'
       interval: '1m'
       timeout: '10s'
       retries: 3
       start_period: '10s'
     depends_on:
       db:
         condition: 'service_healthy'
       redis:
         condition: 'service_healthy'
     env_file:
       - '.env'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
     networks:
       - 'nginx'
       - 'default'
@@ -31,35 +30,33 @@ services:
       - 'upload:/var/www/html/storage/upload'
   db:
-    image: 'postgres:13-alpine'
+    image: 'postgres:13.2-alpine'
     restart: 'always'
     healthcheck:
-      test: 'pg_isready -U $DB_USERNAME'
+      test: 'pg_isready -U firefly'
       interval: '10s'
       timeout: '5s'
       retries: 5
+      start_period: '0s'
     environment:
-      - 'POSTGRES_DB=$DB_DATABASE'
-      - 'POSTGRES_PASSWORD=$DB_PASSWORD'
-      - 'POSTGRES_USER=$DB_USERNAME'
+      - 'POSTGRES_DB=firefly'
+      - 'POSTGRES_PASSWORD=firefly'
+      - 'POSTGRES_USER=firefly'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
     volumes:
       - 'db-data:/var/lib/postgresql/data'
   redis:
-    image: 'redis:6-alpine'
+    image: 'redis:6.2.2-alpine'
     restart: 'always'
     healthcheck:
       test: 'redis-cli -h localhost ping'
       interval: '10s'
       timeout: '5s'
       retries: 3
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
 networks:
   nginx:
     external: true

gitea/.env.example 100644

@@ -0,0 +1,16 @@
# User to run container as
USER_UID=1000
USER_GID=1000

# Database settings
DB_TYPE=postgres
DB_HOST=db:5432
DB_NAME=gitea
DB_USER=gitea
DB_PASSWD=gitea

# Whether to start LFS
LFS_START_SERVER=true

# Whether to allow registration
DISABLE_REGISTRATION=true


@@ -0,0 +1,59 @@
version: '2.4'

services:
  app:
    # Latest contains a development version
    image: 'gitea/gitea:1.14.1-rootless'
    restart: 'always'
    depends_on:
      db:
        condition: 'service_healthy'
    healthcheck:
      test: 'curl -f localhost:3000 || exit 1'
      interval: '30s'
      timeout: '5s'
      retries: 3
      start_period: '5s'
    env_file:
      - '.env'
    networks:
      - 'default'
      - 'nginx'
    ports:
      - '22:22'
    volumes:
      - 'data:/data'
      - 'repos:/data/git/repositories'
      - 'lfs:/data/git/lfs'
      - '/etc/timezone:/etc/timezone:ro'
      - '/etc/localtime:/etc/localtime:ro'

  db:
    image: 'postgres:13.2-alpine'
    restart: 'always'
    healthcheck:
      test: 'pg_isready -U gitea'
      interval: '30s'
      timeout: '5s'
      retries: 3
      start_period: '0s'
    environment:
      - 'POSTGRES_USER=gitea'
      - 'POSTGRES_PASSWORD=gitea'
      - 'POSTGRES_DB=gitea'
    volumes:
      - 'db-data:/var/lib/postgresql/data'

networks:
  nginx:
    external: true

volumes:
  data:
  lfs:
  db-data:
  repos:


@@ -12,7 +12,7 @@ DB_HOST=db
 DB_PORT=3306
 DB_DATABASE=koel
 DB_USERNAME=koel
-DB_PASSWORD=changeme
+DB_PASSWORD=koel
 
 # A random 32-char string. You can leave this empty if use php artisan koel:init.
 APP_KEY=


@@ -1,14 +1,22 @@
-version: '3.5'
+version: '2.4'
 services:
   app:
+    # This repository sadly only has a 'latest' flag
     image: 'hyzual/koel:latest'
     restart: 'always'
+    healthcheck:
+      test: 'curl -f localhost:80 || exit 1'
+      interval: '1m'
+      timeout: '10s'
+      retries: 3
+      start_period: '10s'
     depends_on:
-      - 'db'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
+      db:
+        # Haven't found a good MySQL healthcheck yet
+        condition: 'service_started'
     networks:
       - 'default'
       - 'nginx'
@@ -16,19 +24,18 @@ services:
       - './.env:/var/www/html/.env'
       - 'covers:/var/www/html/public/img/covers'
       - 'music:/music'
+      - 'index:/var/www/html/storage/search-indexes'
   db:
-    image: 'mysql:8'
+    image: 'mariadb:10.5.9-focal'
     restart: 'always'
     command: '--default-authentication-plugin=mysql_native_password'
     environment:
       - 'MYSQL_DATABASE=koel'
-      - 'MYSQL_PASSWORD=$DB_PASSWORD'
-      - 'MYSQL_ROOT_PASSWORD=$DB_PASSWORD'
-      - 'MYSQL_USER=$DB_USERNAME'
+      - 'MYSQL_USER=koel'
+      - 'MYSQL_PASSWORD=koel'
+      - 'MYSQL_RANDOM_ROOT_PASSWORD=yes'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
     volumes:
       - 'db-data:/var/lib/mysql'
@@ -39,4 +46,5 @@ networks:
 volumes:
   covers:
   db-data:
+  index:
   music:


@@ -1,4 +1,4 @@
-version: '3.5'
+version: '2.0'
 services:
   app:
     build:


@@ -1,4 +1,4 @@
-version: '3.5'
+version: '2.0'
 services:
   app:
     build:


@@ -1,4 +1,4 @@
-version: '3.5'
+version: '2.0'
 services:
   app:
     build:
@@ -7,7 +7,7 @@ services:
       - 'BASE_IMAGE'
       - 'MC_VERSION'
       - 'PAPERMC_VERSION'
-    image: 'chewingbever/mc-papermc:${MC_VERSION}-${PAPERMC_VERSION}'
+    image: 'localhost:5000/mc-papermc:${MC_VERSION}-${PAPERMC_VERSION}'
     restart: 'always'
 
     # Needed to interact with server console


@@ -1,5 +1,4 @@
 # Database settings
-DATABASE_URL=postgres://miniflux:changeme@db/miniflux?sslmode=disable
 RUN_MIGRATIONS=1
 
 # Auto-create admin user


@@ -1,3 +0,0 @@
POSTGRES_DB=miniflux
POSTGRES_USER=miniflux
POSTGRES_PASSWORD=changeme


@@ -1,28 +1,44 @@
-version: '3.5'
+version: '2.4'
 services:
   app:
-    image: 'miniflux/miniflux:latest'
+    image: 'miniflux/miniflux:2.0.29'
     restart: 'always'
     depends_on:
-      - 'db'
+      db:
+        condition: 'service_healthy'
+    healthcheck:
+      test: 'wget --no-verbose --tries=1 --spider http://localhost:8080/ || exit 1'
+      interval: '1m'
+      timeout: '5s'
+      retries: 3
+      start_period: '5s'
     env_file:
-      - 'miniflux.env'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
+      - '.env'
+    environment:
+      # This is always the same, so we just put it here
+      - 'DATABASE_URL=postgres://miniflux:miniflux@db/miniflux?sslmode=disable'
     networks:
       - 'default'
       - 'nginx'
   db:
-    image: 'postgres:13-alpine'
+    image: 'postgres:13.2-alpine'
     restart: 'always'
-    env_file:
-      - 'db.env'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
+    healthcheck:
+      test: 'pg_isready -U miniflux'
+      interval: '10s'
+      timeout: '5s'
+      retries: 5
+      start_period: '0s'
+    environment:
+      - 'POSTGRES_DB=miniflux'
+      - 'POSTGRES_USER=miniflux'
+      - 'POSTGRES_PASSWORD=miniflux'
     volumes:
       - 'db-data:/var/lib/postgresql/data'

monica/.env.example 100644

@@ -0,0 +1,168 @@
#
# Welcome, friend ❤. Thanks for trying out Monica. We hope you'll have fun.
#
# Two choices: local|production. Use local if you want to install Monica as a
# development version. Use production otherwise.
APP_ENV=production
# true if you want to show debug information on errors. For production, put this
# to false.
APP_DEBUG=false
# The encryption key. This is the most important part of the application. Keep
# this secure otherwise, everyone will be able to access your application.
# Must be 32 characters long exactly.
# Use `php artisan key:generate` or `pwgen -s 32 1` to generate a random key.
APP_KEY=ChangeMeBy32KeyLengthOrGenerated
# Prevent information leakage by referring to IDs with hashIds instead of
# the actual IDs used in the database.
HASH_SALT=ChangeMeBy20+KeyLength
HASH_LENGTH=18
# The URL of your application.
APP_URL=http://localhost
# Force using APP_URL as base url of your application.
# You should not need this, unless you are using subdirectory config.
APP_FORCE_URL=false
# Database information
# To keep this information secure, we urge you to change the default password
# Currently only "mysql" compatible servers are working
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
# You can use mysql unix socket if available, it overrides DB_HOST and DB_PORT values.
#DB_UNIX_SOCKET=/var/run/mysqld/mysqld.sock
DB_DATABASE=monica
DB_USERNAME=monica
DB_PASSWORD=monica
DB_PREFIX=
DB_TEST_HOST=127.0.0.1
DB_TEST_DATABASE=monica_test
DB_TEST_USERNAME=homestead
DB_TEST_PASSWORD=secret
# Use utf8mb4 database charset format to support emoji characters
# ⚠ be sure your DBMS supports utf8mb4 format
DB_USE_UTF8MB4=true
# Mail credentials used to send emails from the application.
MAIL_MAILER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=
MAIL_PASSWORD=
MAIL_ENCRYPTION=
# Outgoing emails will be sent with these identity
MAIL_FROM_ADDRESS=
MAIL_FROM_NAME="Monica instance"
# New registration notification sent to this email
APP_EMAIL_NEW_USERS_NOTIFICATION=
# Ability to disable signups on your instance.
# Can be true or false. Default to false.
APP_DISABLE_SIGNUP=true
# Enable user email verification.
APP_SIGNUP_DOUBLE_OPTIN=false
# Set trusted proxy IP addresses.
# To trust all proxies that connect directly to your server, use a "*".
# To trust one or more specific proxies that connect directly to your server,
# use a comma separated list of IP addresses.
APP_TRUSTED_PROXIES=*
# Enable automatic cloudflare trusted proxy discover
APP_TRUSTED_CLOUDFLARE=false
# Frequency of creation of new log files. Logs are written when an error occurs.
# Refer to config/logging.php for the possible values.
LOG_CHANNEL=daily
# Error tracking. Specific to hosted version on .com. You probably don't need
# those.
SENTRY_SUPPORT=false
SENTRY_LARAVEL_DSN=
# Send a daily ping to https://version.monicahq.com to check if a new version
# is available. When a new version is detected, you will have a message in the
# UI, as well as the release notes for the new changes. Can be true or false.
# Default to true.
CHECK_VERSION=true
# Cache, session, and queue parameters
# ⚠ Change this only if you know what you are doing
#. Cache: database, file, memcached, redis, dynamodb
#. Session: file, cookie, database, apc, memcached, redis, array
#. Queue: sync, database, beanstalkd, sqs, redis
# If Queue is not set to 'sync', you'll have to set a queue worker
# See https://laravel.com/docs/5.7/queues#running-the-queue-worker
CACHE_DRIVER=redis
SESSION_DRIVER=file
SESSION_LIFETIME=120
QUEUE_CONNECTION=sync
# If you use redis, set the redis host or ip, like:
REDIS_HOST=redis
# Maximum allowed size for uploaded files, in kilobytes.
# Make sure this is an integer, without commas or spaces.
DEFAULT_MAX_UPLOAD_SIZE=10240
# Maximum allowed storage size per account, in megabytes.
# Make sure this is an integer, without commas or spaces.
DEFAULT_MAX_STORAGE_SIZE=512
# Default filesystem to store uploaded files.
# Possible values: public|s3
DEFAULT_FILESYSTEM=public
# AWS keys for S3 when using this storage method
AWS_KEY=
AWS_SECRET=
AWS_REGION=us-east-1
AWS_BUCKET=
AWS_SERVER=
# Allow Two Factor Authentication feature on your instance
MFA_ENABLED=true
# Enable DAV support
DAV_ENABLED=true
# CLIENT ID and SECRET used for OAuth authentication
PASSPORT_PERSONAL_ACCESS_CLIENT_ID=
PASSPORT_PERSONAL_ACCESS_CLIENT_SECRET=
# Allow to access general statistics about your instance through a public API
# call
ALLOW_STATISTICS_THROUGH_PUBLIC_API_ACCESS=false
# Indicates that each user in the instance must comply to international policies
# like CASL or GDPR
POLICY_COMPLIANT=true
# Enable geolocation services
# This is used to translate addresses to GPS coordinates.
ENABLE_GEOLOCATION=false
# API key for geolocation services
# We use LocationIQ (https://locationiq.com/) to translate addresses to
# latitude/longitude coordinates. We could use Google instead but we don't
# want to give anything to Google, ever.
# LocationIQ offers 10,000 free requests per day.
LOCATION_IQ_API_KEY=
# Enable weather on contact profile page
# Weather can only be fetched if we know longitude/latitude - this is why
# you also need to activate the geolocation service above to make it work
ENABLE_WEATHER=false
# Access to weather data from darksky api
# https://darksky.net/dev/register
# Darksky provides an api with 1000 free API calls per day
# You need to enable the weather above if you provide an API key here.
DARKSKY_API_KEY=


@@ -0,0 +1,58 @@
version: '2.4'

services:
  app:
    image: 'monica:2.20.0-apache'
    restart: 'always'
    healthcheck:
      test: 'curl -f localhost:80 || exit 1'
      interval: '1m'
      timeout: '10s'
      retries: 3
      start_period: '10s'
    depends_on:
      db:
        condition: 'service_started'
      redis:
        condition: 'service_healthy'
    env_file:
      - '.env'
    networks:
      - 'default'
      - 'nginx'
    volumes:
      - 'data:/var/www/html/storage'

  db:
    image: 'mariadb:10.5.9-focal'
    restart: 'always'
    command: '--default-authentication-plugin=mysql_native_password'
    environment:
      - 'MYSQL_RANDOM_ROOT_PASSWORD=true'
      - 'MYSQL_DATABASE=monica'
      - 'MYSQL_USER=monica'
      - 'MYSQL_PASSWORD=monica'
    volumes:
      - 'db-data:/var/lib/mysql'

  redis:
    image: 'redis:6.2.2-alpine'
    restart: 'always'
    healthcheck:
      test: 'redis-cli -h localhost ping'
      interval: '10s'
      timeout: '5s'
      retries: 3

networks:
  nginx:
    external: true

volumes:
  data:
  db-data:


@@ -2,7 +2,7 @@
 POSTGRES_HOST=db
 POSTGRES_DB=nextcloud
 POSTGRES_USER=nextcloud
-POSTGRES_PASSWORD=pass
+POSTGRES_PASSWORD=nextcloud
 
 # Redis
 REDIS_HOST=redis


@@ -1,17 +1,24 @@
-version: '3.5'
+version: '2.4'
 services:
   app:
-    image: 'nextcloud:20-apache'
+    image: 'nextcloud:21.0.1-apache'
     restart: 'always'
+    healthcheck:
+      test: 'curl -f localhost || exit 1'
+      interval: '1m'
+      timeout: '10s'
+      retries: 3
+      start_period: '10s'
     depends_on:
-      - 'db'
-      - 'redis'
+      db:
+        condition: 'service_healthy'
+      redis:
+        condition: 'service_healthy'
     env_file:
       - '.env'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
     networks:
       - 'default'
       - 'nginx'
@@ -21,40 +28,41 @@ services:
       - 'root:/var/www/html'
   cron:
-    image: 'nextcloud:20-apache'
+    image: 'nextcloud:21.0.1-apache'
-    entrypoint: '/cron.sh'
     restart: 'always'
+    entrypoint: '/cron.sh'
     depends_on:
-      - 'app'
+      app:
+        condition: 'service_healthy'
     env_file:
       - '.env'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
     volumes:
       - 'config:/var/www/html/config'
       - 'data:/var/www/html/data'
       - 'root:/var/www/html'
   db:
-    image: 'postgres:13-alpine'
+    image: 'postgres:13.2-alpine'
     restart: 'always'
     environment:
-      - 'POSTGRES_DB'
-      - 'POSTGRES_USER'
-      - 'POSTGRES_PASSWORD'
+      - 'POSTGRES_DB=nextcloud'
+      - 'POSTGRES_USER=nextcloud'
+      - 'POSTGRES_PASSWORD=nextcloud'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
     volumes:
       - 'db-data:/var/lib/postgresql/data'
   redis:
-    image: 'redis:6-alpine'
+    image: 'redis:6.2.2-alpine'
     restart: 'always'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
+    healthcheck:
+      test: 'redis-cli -h localhost ping'
+      interval: '10s'
+      timeout: '5s'
+      retries: 3
 networks:
   nginx:


@@ -1,12 +1,65 @@
-# Main domain; also name of certificate
-MAIN_DOMAIN=
-
-# Comma-separated list of other domains which also arrive here
+# =====COMMON CONFIGURATION=====
+## Comma-separated list of domains to generate certs for
+## NOTE: you should only add domains here that aren't used in any of
+## the specific configurations below
 DOMAINS=
-# Admin email; used for certificates
+## Admin email; used for certificates
 EMAIL=
-# HTTP(S) Port
+## HTTP(S) Port
 HTTP_PORT=80
 HTTPS_PORT=443
+
+# =====PER-SERVICE CONFIGURATION=====
+# Domain name: domain name that points to the instance
+# Hostname: basically the argument to proxy_pass
+
+## Firefly III
+### Domain name
+FIREFLY_DOMAIN=
+### Hostname
+FIREFLY_HOST=firefly_app_1
+
+## Koel
+### Domain name
+KOEL_DOMAIN=
+### Hostname
+KOEL_HOST=koel_app_1
+
+## Miniflux
+### Domain name
+MINIFLUX_DOMAIN=
+### Hostname
+MINIFLUX_HOST=miniflux_app_1
+
+## Monica
+### Domain name
+MONICA_DOMAIN=
+### Hostname
+MONICA_HOST=monica_app_1
+
+## Nextcloud
+### Domain name
+NEXTCLOUD_DOMAIN=
+### Hostname
+NEXTCLOUD_HOST=nextcloud_app_1
+
+## Portainer
+### Domain name
+PORTAINER_DOMAIN=
+### Hostname
+PORTAINER_HOST=portainer_app_1
+
+## Gitea
+### Domain name
+GITEA_DOMAIN=
+### Hostname
+GITEA_HOST=gitea_app_1
+
+## Podgrab
+### Domain name
+PODGRAB_DOMAIN=
+### Hostname
+PODGRAB_HOST=podgrab_app_1


@@ -1,17 +0,0 @@
FROM nginx:stable-alpine
RUN apk add --no-cache certbot
COPY entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
RUN mkdir /var/lib/certbot
COPY renew /etc/periodic/weekly/renew
RUN chmod +x /etc/periodic/weekly/renew
# Default.conf file is annoying
RUN rm -rf /etc/nginx/conf.d/*
RUN /usr/sbin/crond -f -d 8 &
ENTRYPOINT [ "./entrypoint.sh" ]


@@ -1,6 +0,0 @@
#!/usr/bin/env sh
certbot certonly --standalone -d "$MAIN_DOMAIN,$DOMAINS" --email "$EMAIL" -n --agree-tos --expand
# The original script handles the template substitution
exec /docker-entrypoint.sh nginx -g "daemon off;"


@@ -1,3 +0,0 @@
#!/usr/bin/env sh
python3 -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew --webroot --webroot-path /var/lib/certbot/ --post-hook "/usr/sbin/nginx -s reload"


@@ -1,15 +1,13 @@
-version: '3.5'
+version: '2.4'
 services:
   app:
-    build: './build'
+    build: './nginx'
     image: 'nginx-certbot:stable-alpine'
+    restart: 'always'
-    environment:
-      - 'DOMAINS'
-      - 'EMAIL'
-      - 'HTTPS_PORT'
-      - 'HTTP_PORT'
-      - 'MAIN_DOMAIN'
+    env_file:
+      - '.env'
     networks:
       - 'nginx'
     ports:

@ -5,4 +5,4 @@ user nginx nginx;
 worker_processes auto;
 # Load config segments
-include conf.d/*;
+include conf.d/*.conf;


@ -0,0 +1,11 @@
FROM nginx:1.20.0-alpine
COPY entrypoint.sh /entrypoint.sh
COPY renew /etc/periodic/weekly/renew
# Install certbot
# Remove default configs
RUN apk add --no-cache certbot && \
    rm -rf /etc/nginx/conf.d/*
ENTRYPOINT [ "./entrypoint.sh" ]


@ -0,0 +1,19 @@
#!/usr/bin/env sh

# Start cron
/usr/sbin/crond -d 8 &

# Renew all certificates
for url in $(env | grep '^[^=]\+_DOMAIN=' | sed 's/^.*\?=\(.*\)$/\1/g') $(echo "$DOMAINS" | sed 's/,/ /g')
do
    certbot certonly \
        --standalone \
        -d "$url" \
        --email "$EMAIL" \
        -n \
        --agree-tos \
        --expand
done

# The original script handles the template substitution
exec /docker-entrypoint.sh nginx -g "daemon off;"
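The loop above gathers one certificate request per `*_DOMAIN` variable, plus anything in the comma-separated `DOMAINS` list. A minimal sketch of just the extraction step, with made-up domain values:

```shell
#!/usr/bin/env sh
# Hypothetical values standing in for the real .env contents
export FIREFLY_DOMAIN=money.example.com
export GITEA_DOMAIN=git.example.com
export DOMAINS='extra1.example.com,extra2.example.com'

# Same pipeline as the entrypoint: pull the value of every *_DOMAIN
# variable, then split the DOMAINS list on commas
for url in $(env | grep '^[^=]\+_DOMAIN=' | sed 's/^.*\?=\(.*\)$/\1/g') \
           $(echo "$DOMAINS" | sed 's/,/ /g')
do
    echo "would run: certbot certonly --standalone -d $url"
done
```

Note that variables left empty (`FOO_DOMAIN=`) disappear via word splitting, so disabled services don't trigger spurious certbot runs.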


@ -0,0 +1,7 @@
#!/usr/bin/env sh

python3 -c 'import random; import time; time.sleep(random.random() * 3600)' && \
    certbot renew \
        --webroot \
        --webroot-path /var/lib/certbot/ \
        --post-hook "/usr/sbin/nginx -s reload"
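The python3 one-liner is jitter: it sleeps a random fraction of an hour so weekly renewals don't all hit the ACME endpoint at the exact cron tick (python3 is available since certbot itself is Python). A sketch that prints the drawn delay instead of sleeping:

```shell
#!/usr/bin/env sh
# Same computation as the renew script's sleep, but visible:
# random.random() is uniform in [0, 1), so the delay is in [0, 3600)
delay=$(python3 -c 'import random; print(int(random.random() * 3600))')
echo "would sleep for ${delay}s before 'certbot renew'"
```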


@ -1,6 +1,11 @@
 server {
-    listen 443 ssl;
-    server_name DOMAIN;
+    # SSL Key locations
+    ssl_certificate /etc/letsencrypt/live/${FIREFLY_DOMAIN}/fullchain.pem;
+    ssl_certificate_key /etc/letsencrypt/live/${FIREFLY_DOMAIN}/privkey.pem;
+
+    listen ${HTTPS_PORT} ssl;
+    listen [::]:${HTTPS_PORT} ssl;
+    server_name ${FIREFLY_DOMAIN};

     location / {
         proxy_set_header Host $host;
@ -13,7 +18,7 @@ server {
         proxy_set_header Connection "upgrade";

         resolver 127.0.0.11;
-        proxy_pass http://firefly_app_1:8080;
+        proxy_pass http://${FIREFLY_HOST}:8080;
     }
 }


@ -0,0 +1,23 @@
server {
    # SSL Key locations
    ssl_certificate /etc/letsencrypt/live/${GITEA_DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${GITEA_DOMAIN}/privkey.pem;

    listen ${HTTPS_PORT} ssl;
    listen [::]:${HTTPS_PORT} ssl;
    server_name ${GITEA_DOMAIN};

    location / {
        resolver 127.0.0.11;
        proxy_pass http://${GITEA_HOST}:3000/;

        # Static content caching
        location ~* \.(?:jpg|jpeg|png|gif|ico|css|js|ttf)$ {
            expires 1h;
            add_header Cache-Control public;
            proxy_pass http://${GITEA_HOST}:3000;
        }
    }
}


@ -1,9 +0,0 @@
server {
    listen 443 ssl;
    server_name DOMAIN;

    location / {
        resolver 127.0.0.11;
        proxy_pass http://koel_app_1:80;
    }
}


@ -0,0 +1,21 @@
server {
    # SSL Key locations
    ssl_certificate /etc/letsencrypt/live/${KOEL_DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${KOEL_DOMAIN}/privkey.pem;

    listen ${HTTPS_PORT} ssl;
    listen [::]:${HTTPS_PORT} ssl;
    server_name ${KOEL_DOMAIN};

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Ssl on;

        resolver 127.0.0.11;
        proxy_pass http://${KOEL_HOST}:80;
    }
}


@ -1,10 +0,0 @@
server {
    listen 443 ssl;
    server_name DOMAIN;

    location / {
        resolver 127.0.0.11;
        proxy_pass http://miniflux_app_1:8080;
    }
}


@ -0,0 +1,15 @@
server {
    # SSL Key locations
    ssl_certificate /etc/letsencrypt/live/${MINIFLUX_DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${MINIFLUX_DOMAIN}/privkey.pem;

    listen ${HTTPS_PORT} ssl;
    listen [::]:${HTTPS_PORT} ssl;
    server_name ${MINIFLUX_DOMAIN};

    location / {
        resolver 127.0.0.11;
        proxy_pass http://${MINIFLUX_HOST}:8080;
    }
}


@ -0,0 +1,25 @@
server {
    # SSL Key locations
    ssl_certificate /etc/letsencrypt/live/${MONICA_DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${MONICA_DOMAIN}/privkey.pem;

    listen ${HTTPS_PORT} ssl;
    listen [::]:${HTTPS_PORT} ssl;
    server_name ${MONICA_DOMAIN};

    client_max_body_size 1G;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        resolver 127.0.0.11;
        proxy_pass http://${MONICA_HOST}:80;
    }
}


@ -1,7 +1,12 @@
 server {
-    listen 443 ssl;
-    listen [::]:443 ssl http2;
-    server_name DOMAIN;
+    # SSL Key locations
+    ssl_certificate /etc/letsencrypt/live/${NEXTCLOUD_DOMAIN}/fullchain.pem;
+    ssl_certificate_key /etc/letsencrypt/live/${NEXTCLOUD_DOMAIN}/privkey.pem;
+
+    listen ${HTTPS_PORT} ssl;
+    # Not sure why http2 is here, but let's keep it just in case
+    listen [::]:${HTTPS_PORT} ssl http2;
+    server_name ${NEXTCLOUD_DOMAIN};

     # Enable gzip but do not remove ETag headers
     gzip on;
@ -23,7 +28,7 @@ server {
     add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

     location / {
-        proxy_pass http://nextcloud_app_1:80/;
+        proxy_pass http://${NEXTCLOUD_HOST}:80/;
         proxy_pass_request_headers on;


@ -0,0 +1,15 @@
server {
    # SSL Key locations
    ssl_certificate /etc/letsencrypt/live/${PODGRAB_DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${PODGRAB_DOMAIN}/privkey.pem;

    listen ${HTTPS_PORT} ssl;
    listen [::]:${HTTPS_PORT} ssl;
    server_name ${PODGRAB_DOMAIN};

    location / {
        resolver 127.0.0.11;
        proxy_pass http://${PODGRAB_HOST}:8080/;
    }
}


@ -1,11 +0,0 @@
server {
    listen 443 ssl;
    server_name DOMAIN;

    location / {
        proxy_set_header Connection "upgrade";

        resolver 127.0.0.11;
        proxy_pass http://portainer_app_1:9000;
    }
}


@ -0,0 +1,16 @@
server {
    # SSL Key locations
    ssl_certificate /etc/letsencrypt/live/${PORTAINER_DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${PORTAINER_DOMAIN}/privkey.pem;

    listen ${HTTPS_PORT} ssl;
    listen [::]:${HTTPS_PORT} ssl;
    server_name ${PORTAINER_DOMAIN};

    location / {
        proxy_set_header Connection "upgrade";

        resolver 127.0.0.11;
        proxy_pass http://${PORTAINER_HOST}:9000;
    }
}


@ -1,9 +1,5 @@
 http {
-    # SSL CONFIGURATION
-    # Key locations
-    ssl_certificate /etc/letsencrypt/live/${MAIN_DOMAIN}/fullchain.pem;
-    ssl_certificate_key /etc/letsencrypt/live/${MAIN_DOMAIN}/privkey.pem;
+    # COMMON SSL CONFIGURATION

     # Allowed protocols
     ssl_protocols TLSv1.2;
@ -29,7 +25,6 @@ http {
         return 301 https://$host:${HTTPS_PORT}$request_uri;
     }

     # LOAD SITES
-    include sites-enabled/*.conf;
+    include conf.d/sites-enabled/*.conf;
 }


@ -0,0 +1,3 @@
*
!.gitignore


@ -0,0 +1,5 @@
# How often to check for new episodes in seconds
CHECK_FREQUENCY=240
# Password for the basic auth
PASSWORD=changeme


@ -0,0 +1,29 @@
version: '2.4'

services:
  app:
    image: 'akhilrex/podgrab:1.0.0'
    restart: 'always'
    healthcheck:
      test: 'curl -f localhost:8080 || exit 1'
      interval: '1m'
      timeout: '10s'
      retries: 3
      start_period: '10s'
    env_file:
      - '.env'
    networks:
      - 'nginx'
    volumes:
      - 'config:/config'
      - 'assets:/assets'

networks:
  nginx:
    external: true

volumes:
  config:
  assets:


@ -1,12 +1,17 @@
-version: '3.5'
+version: '2.4'
 services:
   app:
-    image: 'portainer/portainer-ce:latest'
+    image: 'portainer/portainer-ce:2.1.1-alpine'
     restart: 'always'
-    labels:
-      - 'com.centurylinklabs.watchtower.enable=true'
+    healthcheck:
+      test: 'curl -f localhost:9000 || exit 1'
+      interval: '1m'
+      timeout: '10s'
+      retries: 3
+      start_period: '10s'
     networks:
       - 'nginx'
     ports:
@ -17,8 +22,7 @@ services:
 networks:
   nginx:
-    external:
-      name: 'nginx'
+    external: true
 volumes:
   data:

renovate.json 100644

@ -0,0 +1,3 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json"
}


@ -1,29 +1,15 @@
-# Copyright (C) 2020 Jef Roosens
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <https://www.gnu.org/licenses/>.

-# Build arguments
+# What version of TShock to use
 RELEASE_TAG=

-# Environment variables
+# What world size to create:
+# 1 for small, 2 for medium, 3 for large
 AUTOCREATE=2

-# Mount points
-CONFIG_DIR=
-LOGS_DIR=
-WORLDS_DIR=
+# Mount points for the data directories
+# By default, it creates volumes
+CONFIG_DIR=config
+LOGS_DIR=logs
+WORLDS_DIR=worlds

-# Other
+# The port to publish the server on
 PORT=7777


@ -1,42 +1,20 @@
-# Copyright (C) 2020 Jef Roosens
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <https://www.gnu.org/licenses/>.
-FROM alpine:latest AS base
+FROM alpine:3.13.5 AS base

 # Build arguments
 ARG RELEASE_TAG

-# Add unzip & curl
-RUN apk update && apk add --no-cache unzip curl

 WORKDIR /terraria

-# Download & unzip
-# TODO convert this to jq?
-RUN curl -s "https://api.github.com/repos/Pryaxis/TShock/releases/tags/${RELEASE_TAG}" | \
+RUN apk update && apk add --no-cache unzip curl && \
+    curl -s "https://api.github.com/repos/Pryaxis/TShock/releases/tags/${RELEASE_TAG}" | \
     grep "browser_download_url" | \
     grep -o "https[^\"]\+" | \
     xargs curl -sLo tshock.zip && \
-    unzip tshock.zip && \
-    rm tshock.zip && \
-    # Is there a better way to do this?
-    mv TShock* tshock
+    unzip -d tshock tshock.zip && \
+    rm tshock.zip

-FROM mono:latest
+FROM mono:6.12.0.107
 WORKDIR /terraria
 COPY --from=base /terraria/tshock /terraria
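The removed `# TODO convert this to jq?` comment concerns the grep pipeline that scrapes the asset URL out of the GitHub release JSON. A sketch of that extraction on a made-up, truncated response, plus the jq variant the TODO hints at (jq would need an extra `apk add` in the base stage):

```shell
#!/usr/bin/env sh
# Made-up sample of the release JSON the Dockerfile queries
json='{"assets": [{"browser_download_url": "https://github.com/Pryaxis/TShock/releases/download/v4/TShock.zip"}]}'

# Current approach: scrape the first https URL out of the raw text
url=$(echo "$json" | grep -o 'https[^"]\+')
echo "$url"
# → https://github.com/Pryaxis/TShock/releases/download/v4/TShock.zip

# jq alternative (needs 'apk add --no-cache jq' in the base stage):
#   url=$(echo "$json" | jq -r '.assets[0].browser_download_url')
```

The grep version breaks if a release ever carries multiple assets or another URL appears first in the JSON, which is the robustness jq would buy.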


@ -1,23 +1,3 @@
<!---
Copyright (C) 2020 Jef Roosens
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
-->
 # Build arguments
 The only required build argument is `RELEASE_TAG`. This is the GitHub tag of
 the release you wish to use. The releases can be found


@ -1,37 +1,25 @@
-# Copyright (C) 2020 Jef Roosens
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <https://www.gnu.org/licenses/>.
-version: '3.5'
+version: '2.4'
 services:
-  tshock:
+  app:
     build:
       context: .
       args:
         - 'RELEASE_TAG=${RELEASE_TAG}'
-    image: 'terraria-tshock:${RELEASE_TAG}'
-    restart: 'unless-stopped'
+    image: 'chewingbever/terraria-tshock:${RELEASE_TAG}'
+    restart: 'always'
     stdin_open: true
     tty: true
     environment:
-      - AUTOCREATE
+      - 'AUTOCREATE'
     ports:
       - '$PORT:7777'
     volumes:
       - '$CONFIG_DIR:/terraria/config'
       - '$LOGS_DIR:/terraria/logs'
       - '$WORLDS_DIR:/terraria/worlds'
+
+volumes:
+  config:
+  logs:
+  worlds:

vim/de

@ -1,2 +1,3 @@
-#!/usr/bin/env bash
+#!/usr/bin/env sh
 docker run --rm -it -v "$1":/data -w '/data' chewingbever/nvim:latest