Import OPML files into Microsub servers

Some time ago I wrote a one-liner to import OPML files. Now it has become a lot easier to import an OPML file with ek.

  ek import opml subscriptions.opml
  

It is also possible to export an OPML file.

  ek export opml > subscriptions.opml
  

With these two commands it becomes easier to import and export OPML files.

The structure of the OPML file should match the structure of Microsub channels and feeds: Microsub has a list of channels, and each channel contains a list of feeds. If the OPML file contains feeds on the first level of the file, the importer will skip them. Channels nested on the second (or deeper) level are skipped as well.
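An OPML file in the expected shape could look like this, with channels as first-level outlines and feeds nested inside them (the names and URLs are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <body>
    <!-- First level: channels -->
    <outline text="Home">
      <!-- Second level: feeds inside a channel -->
      <outline text="Example Blog" type="rss" xmlUrl="https://blog.example.com/feed.xml"/>
    </outline>
    <outline text="News">
      <outline text="Example News" type="rss" xmlUrl="https://news.example.com/rss"/>
    </outline>
  </body>
</opml>
```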


Implementing Microsub yourself (part 1)

In this article I will try to show how you can implement a very simple version of Microsub yourself. 

Let's start

The Microsub protocol consists of a number of actions. Each action is passed as the action parameter to the Microsub endpoint. When implementing a Microsub server you don't need to implement the full protocol right away; you can start with simplified responses, depending on what you want to use. For the moment we will only implement the channels and timeline actions.
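In other words, a client request is just an HTTP request with an action parameter. A hypothetical exchange for the channels action could look like this (the domain and token are made up):

```
GET /endpoint.php?action=channels HTTP/1.1
Host: example.com
Authorization: Bearer xxxxxxxx

HTTP/1.1 200 OK
Content-Type: application/json

{"channels": [{"uid": "home", "name": "Home"}]}
```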

Simplified channels

For example, the channels action covers 4 different functions in the full implementation.

  1. Get a list of the available channels
  2. Create a new channel with a name
  3. Update the name of a channel
  4. Delete a channel

A great way to start is to only return a fixed list of channels. That way you only implement function 1 and return a successful response for functions 2, 3 and 4. Clients will still work when you do this and it becomes a lot easier to implement.

As an example in PHP:

  
if ($_GET['action'] == 'channels') {
    $channels = [
        [ 'name' => 'Notifications', 'uid' => 'notifications' ],
        [ 'name' => 'Home', 'uid' => 'home' ],
    ];

    header('Content-Type: application/json');
    echo json_encode(['channels'=>$channels]);
}
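The other three functions arrive as POST requests with the same action=channels parameter. A minimal sketch that only acknowledges them could look like this; the helper name is my own, and the response simply echoes the channel back as if it had been saved:

```php
<?php
// Acknowledge a channel create/update/delete without storing anything.
// $post is the parsed POST body, e.g. $_POST.
function handle_channels_write(array $post): array {
    if (($post['method'] ?? '') === 'delete') {
        // Deleting: an empty JSON object is enough for a success response.
        return [];
    }
    // Creating or updating: echo the channel back as if it was saved.
    return [
        'uid'  => $post['channel'] ?? 'new-channel',
        'name' => $post['name'] ?? '',
    ];
}

if (($_SERVER['REQUEST_METHOD'] ?? '') === 'POST' && ($_POST['action'] ?? '') === 'channels') {
    header('Content-Type: application/json');
    echo json_encode(handle_channels_write($_POST));
}
```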


Simplified timeline

The timeline action provides 1 function. There are 2 parameters, before and after, that allow paging; for a simplified version these don't need to be implemented.

The timeline action should return a response that looks like this:

{
  "items": [
    { ... },
    { ... }
  ],
  "paging": {}
}

By leaving paging empty you signal to the client that there are no pages available at the moment.

The items array should be filled with JF2 items. JF2 is a simplified version of Microformats 2 that allows for easier implementation by clients and servers. An example could look like this:


    {
        "type": "entry",
        "name": "Ekster now supports actual Indieauth to the Microsub channels. It's now possible for example to connect with http://indiepaper.io and archive pages to a channel. But of course the possibilities are endless.",
        "content": {
            "text": "Ekster now supports actual Indieauth to the Microsub channels. It's now possible for example to connect with http://indiepaper.io and archive pages to a channel. But of course the possibilities are endless.",
            "html": "Ekster now supports actual Indieauth to the Microsub channels. It's now possible for example to connect with indiepapier.io and archive pages to a channel. But of course the possibilities are endless."
        },
        "published": "2018-07-15T12:54:00+02:00",
        "url": "https://p83.nl/p/795"
    }

If you return a list of these items from your Microsub endpoint, you will see them in the client. Now the harder part is gathering these items from feeds and websites and converting them to JF2.
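As a tiny illustration of that conversion step, a hypothetical helper that turns one parsed feed item into a JF2 entry could look like this (the input array shape is my own assumption; real feed parsers differ):

```php
<?php
// Map one parsed feed item (title, summary, link, date) to a JF2 entry.
// The input shape is hypothetical, not a standard.
function feed_item_to_jf2(array $item): array {
    return [
        'type'      => 'entry',
        'name'      => $item['title'] ?? '',
        'content'   => ['text' => $item['summary'] ?? ''],
        'published' => $item['date'] ?? date(DATE_ATOM),
        'url'       => $item['link'] ?? '',
    ];
}
```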

Simplified Microsub endpoint

Create a file called endpoint.php in the web root of your website. The code can be found here: endpoint.php

Add the following information to your <head> tag:

<link rel="microsub" href="https://yourdomain.com/endpoint.php" />

That's all there is to it. Now you can log in with Monocle.

New alpha release of Wrimini

I just released the latest version of Wrimini. Not much has changed on the front end, except for the name field. It was added to make it easier to create issues on GitHub, and it also helps to create articles more easily. The other change is that the app now authorizes with your default browser instead of the webview.

There were also a few bugfixes. If you share a URL with text, which some apps do, Wrimini will now separate the text and URL into their own fields.

Tile38

I integrated Tile38 a bit into my weblog. It can post webhooks when a geoposition enters or exits a geofence. I'm still testing this, but I am getting some results.

It works with the positions that my Android app posts to the weblog. This all works together with a bit of HTTP, JSON and Redis.
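The geofence part boils down to a single Tile38 command: register a webhook that fires when a point in a collection enters or exits an area. A sketch, with made-up names, URL and coordinates:

```
SETHOOK home_fence https://example.com/webhook NEARBY locations FENCE DETECT enter,exit POINT 52.09 5.12 500
SET locations phone POINT 52.09 5.12
```

Every time the phone's position is SET inside or outside the 500 meter radius, Tile38 posts a JSON event to the webhook URL.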

Microsub changes

Yesterday I made an improvement to support paging with ZADD and ZRANGEBYSCORE. This allows me to get a range of entries based on the timestamp of the published date (converted to a Unix timestamp). The problem is that the unread entries are still mixed into the list, and it's hard to find the first unread entry. That entry is the starting point of the list of entries for the first page of items.

I implemented the solution like this: keep two lists, one with all unread items and one with the read items. In principle an entry moves from one list to the other in a linear fashion, because that's the reading order. Now, when there is no after or before argument, the server can send the first twenty items of the unread list. The first and last item contain the next before and after values. The nice thing is that I now get the unread count for free with ZCARD.
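In Redis terms the two lists are sorted sets scored by the published timestamp, and marking an item as read moves it from one set to the other. A sketch of the commands involved (the key and id names are made up):

```
ZADD channel:home:unread 1531652040 post:795   # new entry, scored by published time
ZRANGE channel:home:unread 0 19 WITHSCORES     # first twenty unread items, oldest first
ZCARD channel:home:unread                      # unread count for free
ZREM channel:home:unread post:795              # mark as read: remove the id here ...
ZADD channel:home:read 1531652040 post:795     # ... and add it to the read set, same score
```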

Changes for April 1

Today I fixed the files in the backend. Now all posts follow the current structure: all comments are placed under their posts and not in the same directory. It took some time, but now it works great. This also allowed me to reindex all files and add missing properties to the older posts. At the same time I actually removed all posts that were removed from the weblog.

Changes for March 29

GitHub issues and comments

Today I added better support for creating GitHub issues and comments. I added a link to the posts, with an "in-reply-to" or "like-of" link to GitHub. The posts now point to Bridgy as well. To support this extra link I had to add better support for other links in the webmention job.

Changes on March 18

Data storage

I changed the way that posts are saved in the data backend. The backend is Redis, and the ids of the posts were added to channel lists. That worked great for a while: lists are sequential, which keeps the posts in order on the website. I have now changed the type of the channels to sets. The advantage of this is that I can merge the sets of ids. The problem, however, is that sets don't have a sequential order. This can be solved with the SORT command: it sorts ids by a different key and also allows limiting the result, so both limiting and sorting work this way.
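The SORT command makes this work even though sets are unordered. Assuming the ids live in a set and each post stores its published timestamp under its own key (the key names here are made up), the idea looks like this:

```
SADD channel:home post:1 post:2 post:3
SET post:1:published 1521370000
SET post:2:published 1521380000
SET post:3:published 1521390000
SORT channel:home BY post:*:published LIMIT 0 10 DESC
```

The BY pattern tells Redis to sort the set members by the value of the matching published key, and LIMIT pages the result.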

Checkins

Next I made checkins work. I now use OwnYourSwarm to send Micropub posts to this blog, so I can use the Swarm app to check in to locations and have them sent automatically to this blog. Then I improved the design of check-in posts and added Mapbox to show the locations on a map.

Micropub for a static Neocities website

Includes information about a special authorization_endpoint, especially for people who don't have their own website. They can log in using a password.

Screenshot tool

This is a simple screenshot tool that I wrote to send screenshots to this weblog. It works by connecting gnome-screenshot, pinta and shpub.

#!/bin/bash
# Take an area screenshot, edit it in Pinta and post it to the weblog with shpub.
bkdest="$HOME/Desktop/Screenshots"
target="$bkdest/$(date +%Y_%m_%d_%H_%M_%S).png"
mkdir -p "$bkdest"
/usr/bin/gnome-screenshot -f "$target" -a
/usr/bin/pinta "$target"
MEDIA=$(shpub -d -s "$SHPUB_SERVER" upload "$target" | tail -n 1)
shpub -d -s "$SHPUB_SERVER" note -c screenshots -f "$MEDIA" --json Screenshot
