bugs/6d4378ca918ffb1f6e5049ea293ffd35eb38bc269507af5a673ca134d400c286
commit
fdfcd22dde
Author: f586b76891632498b1f2d09587f3a3fdf4aa42bfa8402213dc3bfbaac9162697
Date:   1749975015 (UNIX timestamp)

Gpodder API pagination

Any API route that returns elements grouped by a timestamp can be paginated,
although not in a predictable way. The Gpodder.net server implements this as
follows:

1. Decide on a maximum number of elements to return
2. Query the elements with the smallest relevant timestamp value
3. While the maximum number of elements hasn't been reached, query the
   elements at the next timestamp and append them to the output
4. Repeat

The timestamps are UNIX timestamps, so this approach can paginate with second
precision. The downside is that there is no reliable upper bound on the page
size: if the smallest timestamp to return contains more elements than the
maximum allowed, all of its elements still need to be returned, as there is no
way to split a timestamp. It can still greatly reduce the number of elements
returned, though, reducing memory pressure on the server.

The timestamp value returned by the API should also reflect these changes. It
indicates the next timestamp value the client should request, so with
pagination it should point past the last timestamp included in the response.
Most places that implement this already do so.

As for implementing this pattern, it can be done entirely in SQL using Common
Table Expressions. Diesel does not support these, however, so it would require
writing raw SQL queries.
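The steps above can be sketched in Rust. This is an illustrative model only: the in-memory `BTreeMap` stands in for the real database, and `paginate` is a hypothetical helper, not code from the server.

```rust
use std::collections::BTreeMap;

/// Return one "page" of elements: starting at `since`, walk the timestamps
/// in ascending order and append whole timestamp groups until at least
/// `max` elements have been collected. A group is never split, so the page
/// can exceed `max` (the "no reliable upper bound" caveat).
///
/// Also returns the next timestamp the client should request, i.e. one
/// past the last timestamp that was included.
fn paginate(
    by_ts: &BTreeMap<i64, Vec<String>>,
    since: i64,
    max: usize,
) -> (Vec<String>, i64) {
    let mut out = Vec::new();
    let mut next = since;
    for (&ts, group) in by_ts.range(since..) {
        // Stop *before* adding another timestamp once the max is reached;
        // the group that crossed the limit was returned in full.
        if out.len() >= max {
            break;
        }
        out.extend(group.iter().cloned());
        next = ts + 1; // second precision: the next page starts after this ts
    }
    (out, next)
}

fn main() {
    let mut by_ts = BTreeMap::new();
    by_ts.insert(100, vec!["a".into(), "b".into()]);
    by_ts.insert(200, vec!["c".into()]);
    by_ts.insert(300, vec!["d".into(), "e".into()]);

    let (page, next) = paginate(&by_ts, 0, 3);
    println!("{page:?} next={next}"); // ["a", "b", "c"] next=201
    let (page2, _) = paginate(&by_ts, next, 3);
    println!("{page2:?}"); // ["d", "e"]
}
```

Note that a client repeatedly passing the returned `next` value back as `since` walks the whole dataset without skipping or duplicating any timestamp group.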
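One way the recursive CTE could look, sketched here as the raw SQL string one would hand to `diesel::sql_query`. The `subscriptions` table, its `timestamp` column, and the `$1` (since) / `$2` (max) bind placeholders are assumptions for illustration, not the server's actual schema:

```rust
/// Build the raw recursive-CTE pagination query. The recursive part keeps
/// pulling the next-larger timestamp while the running element count is
/// still below the maximum; the final SELECT returns every element whose
/// timestamp made it into the page.
fn pagination_query() -> String {
    r#"
WITH RECURSIVE page(ts, total) AS (
    -- Step 2: the smallest relevant timestamp, with its element count
    SELECT t.ts, t.cnt
    FROM (SELECT timestamp AS ts, COUNT(*) AS cnt
          FROM subscriptions
          WHERE timestamp >= $1
          GROUP BY timestamp
          ORDER BY timestamp
          LIMIT 1) AS t
  UNION ALL
    -- Step 3: append the next timestamp while the max isn't reached yet
    SELECT n.ts, page.total + n.cnt
    FROM page
    JOIN (SELECT timestamp AS ts, COUNT(*) AS cnt
          FROM subscriptions
          GROUP BY timestamp) AS n
      ON n.ts = (SELECT MIN(timestamp) FROM subscriptions
                 WHERE timestamp > page.ts)
    WHERE page.total < $2
)
SELECT s.* FROM subscriptions s JOIN page ON s.timestamp = page.ts
"#
    .to_string()
}

fn main() {
    println!("{}", pagination_query());
}
```

Because the `WHERE page.total < $2` check runs before the next timestamp is appended, a single timestamp group larger than the maximum is still returned whole, matching the behaviour described above.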