From 90d83bc5d451fa9ea088f22ba1a756adbadbc0d3 Mon Sep 17 00:00:00 2001
From: Chewing_Bever
Date: Sat, 27 May 2023 12:21:19 +0200
Subject: [PATCH] docs: start architecture file

---
 ARCHITECTURE.md             | 58 +++++++++++++++++++++++++++++++++++++
 src/event_loop/event_loop.c |  2 +-
 2 files changed, 59 insertions(+), 1 deletion(-)
 create mode 100644 ARCHITECTURE.md

diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md
new file mode 100644
index 0000000..b5affb5
--- /dev/null
+++ b/ARCHITECTURE.md
@@ -0,0 +1,58 @@
+# Architecture
+
+Lander is (almost) completely made up of self-written components. This file
+describes how these various components are designed and how they interact.
+
+## Server
+
+Lander is an HTTP/1.1 server, meaning we need some sort of server framework.
+The sections below describe the layers this framework is built from, starting
+at the bottom.
+
+## Event loop
+
+At the bottom of the stack is an implementation of an event loop, heavily
+inspired by the [Build Your Own Redis with
+C/C++](https://build-your-own.org/redis/) book, written by James Smith. This
+book was instrumental in understanding how an event loop works, and I highly
+recommend checking it out!
+
+The event loop relies on the concept of non-blocking I/O. Instead of e.g.
+starting a `read` call and waiting for its data to be retrieved, we use the
+`poll` syscall to wait for sockets to be ready for an operation. Thanks to
+this, we can let the kernel wait for I/O for us, allowing our program to
+process data immediately, in large CPU-bound bursts of work.
+
+Each cycle of the event loop consists of the same, surprisingly simple, steps
+(a code sketch of a full cycle follows at the end of this section):
+
+1. Execute the `poll` syscall on our list of currently open file descriptors,
+   and wait for it to return
+2. For each file descriptor that received an event, we
+   1. Process its event according to the state of the connection
+   2. Close the connection if its state has changed to "end"
+3. Finally, if the main file descriptor received an event, we try to accept a
+   new connection
+
+Each connection can be in one of three states: `req`, `res` or `end`. A
+connection always starts in the `req` state, which is the reading & processing
+state. While in the `req` state, the connection tries to read data from the
+socket (an incoming request). After each read, the data processing function is
+called, which tries to interpret the currently buffered data and process the
+request. This function then writes its response to the write buffer, after
+which the connection is switched to the `res` state. In this state, no data is
+processed; all the event loop tries to do is write the entire write buffer to
+the socket. Once this is done, the connection switches back to the `req`
+state.
+
+By letting the `res` state transition back into the `req` state, we can both
+allow request pipelining (where multiple requests are sent over the same
+connection) and allow the data handler to process requests larger than the
+connection buffer size. Note that the event loop solely processes data stored
+in its buffers, and switching between the `req` and `res` states does not mean
+that the upper layer using this event loop has also fully processed its
+request. In the context of the event loop, a response is solely what's in its
+write buffer.
+
+To allow building upon this event loop, it provides each connection with a
+context struct, and optionally a global context that's the same for every
+connection. The creation of these contexts, along with the data handler
+function, can be provided by a higher layer, providing the building blocks for
+higher-level loop implementations.
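+
+As a small, illustrative example, the following sketch shows the two
+primitives this design rests on. The helper names `fd_set_nonblocking` and
+`fd_wait_readable` are assumptions made for this sketch, not the actual API:
+
+```c
+#include <fcntl.h>
+#include <poll.h>
+
+// After this call, read()/write() on fd return immediately (with EAGAIN)
+// instead of blocking when no data or buffer space is available.
+static void fd_set_nonblocking(int fd) {
+  int flags = fcntl(fd, F_GETFL, 0);
+  fcntl(fd, F_SETFL, flags | O_NONBLOCK);
+}
+
+// Sleeps inside the kernel until fd becomes readable or the timeout (in
+// milliseconds) expires; no CPU time is burned while waiting.
+static int fd_wait_readable(int fd, int timeout_ms) {
+  struct pollfd pfd = {.fd = fd, .events = POLLIN};
+  return poll(&pfd, 1, timeout_ms);
+}
+```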
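+
+The loop steps and connection states map quite directly onto code. Below is a
+minimal sketch of one cycle; names like `connection`, `handle_connection_io`
+and `try_accept` are made up for illustration, and the real implementation in
+`src/event_loop/event_loop.c` differs in its details:
+
+```c
+#include <poll.h>
+#include <stddef.h>
+#include <unistd.h>
+
+// The three connection states described above.
+typedef enum { STATE_REQ, STATE_RES, STATE_END } conn_state;
+
+typedef struct {
+  int fd;
+  conn_state state;
+  // read & write buffers omitted for brevity
+} connection;
+
+// Hypothetical helpers: read & process data in STATE_REQ, flush the write
+// buffer in STATE_RES, accept a new connection on the listening socket.
+void handle_connection_io(connection *conn);
+void try_accept(int listen_fd);
+
+// One cycle of the event loop. poll_args[0] is the listening socket, and
+// conns[i] is the connection belonging to poll_args[i] for i > 0.
+void event_loop_cycle(struct pollfd *poll_args, connection **conns,
+                      size_t count) {
+  // 1. Let the kernel wait until some sockets are ready for I/O
+  if (poll(poll_args, (nfds_t)count, 1000) < 0)
+    return; // error handling omitted
+
+  // 2. Process every connection that received an event
+  for (size_t i = 1; i < count; i++) {
+    if (poll_args[i].revents == 0)
+      continue;
+
+    connection *conn = conns[i];
+    handle_connection_io(conn); // read or write, depending on conn->state
+
+    if (conn->state == STATE_END)
+      close(conn->fd); // the connection is finished; clean it up
+  }
+
+  // 3. Finally, try to accept a new connection on the listening socket
+  if (poll_args[0].revents != 0)
+    try_accept(poll_args[0].fd);
+}
+```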
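+
+The sketch below shows what such a set of building blocks could look like;
+the type, field and parameter names here are assumptions for illustration,
+not the actual interface:
+
+```c
+// Hypothetical configuration a higher layer hands to the event loop.
+typedef struct event_loop event_loop;
+
+typedef struct {
+  // Called whenever new request data has been buffered; interprets the
+  // data, fills the connection's write buffer and switches it to `res`.
+  void (*data_handler)(event_loop *el, void *global_ctx, void *ctx);
+  // Create & free the per-connection context passed to data_handler.
+  void *(*ctx_new)(void *global_ctx);
+  void (*ctx_free)(void *ctx);
+  // Optional context shared by all connections.
+  void *global_ctx;
+} event_loop_config;
+```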
+
+These building blocks are used to implement the next layer of Lander, namely
+the HTTP loop.
diff --git a/src/event_loop/event_loop.c b/src/event_loop/event_loop.c
index f648ca5..6142d22 100644
--- a/src/event_loop/event_loop.c
+++ b/src/event_loop/event_loop.c
@@ -172,7 +172,7 @@ void event_loop_run(event_loop *el, int port) {
 
     // poll for active fds
     // the timeout argument doesn't matter here
-    int rv = poll(poll_args, (nfds_t)poll_args_count, 1000);
+    int rv = poll(poll_args, (nfds_t)poll_args_count, 0);
 
     if (rv < 0) {
      critical(1, "Poll failed, errno: %i", errno);