Preview toggling in tmux
window list
Since Debian 10 "buster" (as of writing: Debian stable), which ships tmux 2.8, I finally have the new preview feature in the tmux window list. (Ctrl-b w)
[Insert image here.]
That's very cool, but it eats up so much space! Where are the rest of my windows? The list ends so soon.
But, TIL: One can easily toggle the preview using v! Then the full window height is available for the window list again.
Other useful default key bindings for the window list include:
- x for "killing" windows,
- < and > for scrolling through the preview list (e.g., through all the windows after selecting a session entry from the window list),
- Ctrl-s and n for search,
- t/T/Ctrl-t for toggle-tag/tag-none/tag-all,
- : for running a command for each tagged item,
- O for changing sort order (index/numerical, name/alphabetical, time/last-recently-used(?)).
This is all very nice.
(Of course, there's also the basic movement through up/down, choosing the selection through enter, and quitting (without choosing) through q.)
See also the tmux (Debian 10) full documentation on choose-tree and/or the upstream tmux choose-tree documentation.
My Ctrl-b w default binding is choose-tree -Zw.
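For reference, here is a minimal sketch of what (re-)declaring the relevant bindings in ~/.tmux.conf could look like. The lines below simply mirror the defaults mentioned above, so they're only a starting point for customization, not something you need to add:

```
# ~/.tmux.conf -- a sketch; these mirror tmux's default bindings.
bind-key w choose-tree -Zw    # prefix+w: window list (preview toggled with v)
bind-key s choose-tree -Zs    # prefix+s: the session-list variant
```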
Restarting provisioning on FritzBox modem-router
Today1 was the day my Internet connection's upgrade was supposed to land, but instead I spent the day looking at the walled garden of my Internet Service Provider (ISP).
The CWMP auto-configuration & provisioning didn't work for some reason, so I was trapped in the Internet-like environment my ISP set up for contacting the CWMP provisioning server and/or accessing some of the ISP-hosted services. The problem was, I didn't know it, and initially believed it to be some testing environment where my line had been put to measure its stability after the bandwidth upgrade. After several hours of staring and waiting, calling the ISP's hotline seemed a saner solution than continuing to wait. The person there took me through assessing the situation and put me on my way to restarting the modem-router's auto-provisioning process; so that's what this blog post is about.
Auto-provisioning
In modern practice, telling an AVM FritzBox DSL modem-router which ISP one is using should be enough to get it going; no more typing in secret usernames and passwords (for the Internet line, the telephone line, ...), it'll just pull all the necessary(? :V) settings from a CWMP/TR-069 "ACS" ("auto configuration server"). A friend even told me he didn't need to tell the device which ISP he was using, but that may be because they pre-provision/pre-configure it, as it's specially branded for that ISP anyway.
The process of pulling the configuration from the respective ACS on the FritzBox (FRITZ!Box 7430 with firmware 07.12) seems to be: log in to the web interface and select the right page; then the ISP (or ISP category + ISP) has to be chosen, and we click Apply (a form button). ... It will take some seconds for the box to apply the change, then up to some minutes while the ACS settings/directions are retrieved & applied.
After that, the previous standard/default username/password, with which the walled garden / ACS can be reached, will be replaced with a personalized username/password (of which only the username can be seen).
Failing the provisioning step
As I was contacting my FritzBox via https, and it replaces its self-signed certificate again and again whenever the box gets a new IP address, the auto-provisioning didn't finish: the web app running in my browser couldn't contact the FritzBox anymore after it changed its certificate yet again, and I needed to ack it. (Or at least that's what I'm telling myself. Maybe I just wasn't patient enough and skipped the final part of the provisioning early, assuming it would hang on the cert question again. Nevertheless,) when it fails, it stays failed and doesn't ever retry the initial provisioning.
It always stays with the default username/password, and keeps me in the ISP's walled garden.
Restarting provisioning
This is where the ISP's hotline comes in. They told me to temporarily change to some (possibly completely unrelated) different ISP profile in the FritzBox's list of providers, "complete" that setup by hitting Apply (a form button); then, when that (obviously) failed, go back to the "right" ISP setting and hit Apply again. This then starts the initial auto-provisioning process anew, without needing the otherwise often-mentioned factory reset.
When I was (an iteration later ...) using http (not https, due to the cert problem described above) and the box's IP address (not the beautiful hostname from my internal DNS, as the box would reject the hostname as a protection against DNS rebinding attacks, and changing the ISP always resets the exemption configured in another part of the web interface), I finally got the initial auto-provisioning to complete and got my shiny new username/password!
(Created Mon 27 Jul 2020 23:30:24 CEST, published around Tue 28 Jul 2020 00:38:00 CEST.)
- Day of Internet connection upgrade: Mon 27 Jul 2020, or, 2020-07-27 -- that is, after 2 weeks of waiting x_x, after accepting an ISP-made offer.↩
Why Apache webserver is refusing to serve symlinks to /tmp
TL;DR:
- Apache web server may be running with its own, ~empty /tmp directory due to PrivateTmp=true in the (e.g., Debian and derivatives) apache2.service systemd unit file.
- Broken by this was (a non-default configuration of) the lacme ACME client, which can be made to work again by running it under systemd, too! (with JoinsNamespaceOf=apache2.service)
- When reading the full article, you might learn about the Debian Code Search and Debian Sources websites, and get example systemd service/timer unit files to run lacme with. It should be full of explanations and reference links, too!
Introduction
Hi, today I'm going to fill you in on a problem I could find plenty of in Google searches, but no solution for, so far1: When given a symlink, e.g., in a document root configured with Options FollowSymLinks, pointing to a file in /tmp, the Apache webserver will often just refuse to serve it (HTTP status code 403 Forbidden), with a cryptic message in the VHost's error log:
AH00037: Symbolic link not allowed or link target not accessible: /srv/www/foo/testfile.txt
Normally ...
Normally, one would now have a look at the webserver source code to learn what's really going on. (For this, you can use the Debian Code Search, though you'll have to drop the AH part from the error message identifier, and don't expect the error message format string to be presented as a single line in the source code -- it isn't, but uses preprocessor merging of adjacent string literals, instead. The resulting search would be package:apache2 00037, resulting in a hit in apache2_2.4.43-1/server/request.c, at this time.) You'd then go to Debian Sources, enter apache2 as the package to search, click on the resulting single apache2 result link, then on the exact version you're running, ... et voila, -> server -> request.c, and let your web browser search for the 00037, again.
..., but in this case ...
But, as stated before, this is useless in this case, as the real problem can't be found by looking at the apache2 source code. (Note that, in other cases, it can also be an SELinux issue; in my case it wasn't.) Here, the problem rather lies with the apache2.service systemd unit file.
In Debian (Debian 9 "stretch" or newer), it contains a PrivateTmp=true, which instructs systemd to start the webserver with its own, initially empty set of /tmp, /var/tmp directories; so, as far as Apache is concerned, the symlink target simply isn't there, as its /tmp is ~empty! It's then refusing to serve a dangling symlink, which (a bit strangely) fulfills the "or link target not accessible" part of the logged error message. (Why doesn't it give a 404 Not Found in this case? ... That could have saved hours of debugging.)
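To illustrate why the error message speaks of an inaccessible link target, here is a small, self-contained shell sketch (paths made up via mktemp): a dangling symlink still exists as a link, while its target is "not accessible" -- which is exactly Apache's view of a /tmp symlink from inside its private, empty /tmp:

```shell
# Create a symlink whose target does not exist -- from Apache's point of
# view inside PrivateTmp, a symlink into the host's /tmp looks like this.
tmpdir=$(mktemp -d)
ln -s "$tmpdir/missing-target" "$tmpdir/link"

test -L "$tmpdir/link" && echo "symlink exists"          # the link itself is there
test -e "$tmpdir/link" || echo "target not accessible"   # but it dangles

rm -r "$tmpdir"
```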
Workaround
My initial problem was to get the Perl-based, minimal Let's Encrypt "ACME" client lacme running again (which in my setup created symlinks from a /.well-known backing directory to a temporary directory in /tmp to serve the proof of domain ownership to LE, which of course just failed with 403 Forbidden for them all the time...). Well, the simplest solution was to run lacme under systemd, too, using JoinsNamespaceOf=apache2.service!
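As an aside, you can inspect this from the host: systemd keeps a unit's private /tmp under a /tmp/systemd-private-* directory. A sketch of what that might look like (the exact directory name contains the boot ID and a random suffix, so yours will differ):

```shell
# Confirm the unit actually runs with PrivateTmp:
systemctl show apache2 --property=PrivateTmp

# The private /tmp lives on the host under a name like this (sketch):
ls -d /tmp/systemd-private-*-apache2.service-*/tmp
```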
Resulting systemd unit files for lacme ACME client
This is what the systemd unit file to run lacme newOrder looks like:
(To be placed into /etc/systemd/system/lacme-newOrder.service, then the usual systemctl daemon-reload; systemctl start ... to run it once. Note there is no [Install] section; it'll just be started from/via the associated timer unit, see below.)
[Unit]
Description=lacme Let's Encrypt client new order
After=apache2.service
Requisite=apache2.service
JoinsNamespaceOf=apache2.service
[Service]
Type=oneshot
ExecStart=/usr/sbin/lacme newOrder
PrivateTmp=true
View raw systemd service unit file
For completeness, here's also the timer unit needed for cron-like operation:
(To be placed into /etc/systemd/system/lacme-newOrder.timer, then the usual systemctl daemon-reload, this time also systemctl enable --now lacme-newOrder.timer, which works here as there is an [Install] section, for registering under timers.target. If all went well, use systemctl list-timers to verify the scheduling.)
[Unit]
Description=Daily run of lacme Let's Encrypt client new order
# Based on apt-daily-upgrade.timer
[Timer]
# Previous cron.daily run:
#OnCalendar=*-*-* 6:44
# Now, with leeway:
OnCalendar=*-*-* 5:44
RandomizedDelaySec=60m
Persistent=true
[Install]
WantedBy=timers.target
View raw systemd timer unit file
(Created Tue 28 Jul 2020 02:34:50 CEST, published around Tue 28 Jul 2020 02:52:00 CEST; filed/ordered under when the idea was made.)
- Argh, okay. Just after publishing this blog post, I did find a Stack Overflow post giving the answer (and more detail, even the path where systemd mounts the private /tmp from!), asked and active 1½ years ago ...↩
gnuplot: Cosine oscillating on arbitrary function
To let a cosine oscillate on an "arbitrary" base function: where the base function rises, the oscillation's x coordinate may go backwards, so we can't express the result as a single-valued function from x to y. But we can use parametric curves instead; they go from a separate parameter, t, to points (x(t), y(t)). (In gnuplot, this is set parametric.)
As a starter, here is what the final result will look like for a cosine oscillating on a cosine:
This is what motivated me to write up a blog post about the topic.
gnuplot source code of cosine_on_cosine:
#!/usr/bin/gnuplot -persist
set parametric
f(x) = cos(x*2.*pi)
fDeriv(x) = -sin(x*2.*pi)
g(x) = 0.1*cos(8.*x*2.*pi)
periods=3
set title GPFUN_g.' on '.GPFUN_f
set samples periods*100
plot [0:periods][-0.5:3.5][-2:2] \
t, f(t) with points, \
t + cos(atan(fDeriv(t)) + pi/2.)*g(t), f(t) + sin(atan(fDeriv(t)) + pi/2.)*g(t) with points
set samples 100
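For reference, the offset applied in the plot command above is a displacement along the base curve's unit normal. With the tangent angle θ(t) = arctan f'(t), the plotted points can be written as:

```latex
\theta(t) = \arctan f'(t), \qquad
\begin{pmatrix} x(t) \\ y(t) \end{pmatrix}
=
\begin{pmatrix} t \\ f(t) \end{pmatrix}
+ g(t)
\begin{pmatrix} \cos(\theta(t) + \pi/2) \\ \sin(\theta(t) + \pi/2) \end{pmatrix}
```

This matches the plot line term by term: (cos θ, sin θ) is the unit tangent of the base curve, so rotating it by π/2 gives the unit normal along which g(t) is applied.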
...
[2020-07-27] Sadly, I never got around to writing up the full blog post, and by now have already forgotten all the details. Let's get this published anyway, so I can get on with other things here...
(Created Thu 09 Apr 2020 02:40:08 CEST, published around Mon 27 Jul 2020 22:40:00 CEST.)
TIL: git history visualisation via gource
Today I learned there is a tool out there, or rather part of Debian already1, which helps you get an intuitive understanding of a git repository's history by generating an interactive visualisation! (This can be exported as a video, too, but I didn't test that.)
Thanks, Simon, for pointing out that gource exists!
gource on canvon-blog
Letting gource run on this blog's git history was particularly interesting (N.B.: the git repo isn't published (yet?), though); I saw multiple entities "fighting it out" with each other, as pages were added under my git identity and tag pages were created under the wiki's. See the (in comparison: boring) end result here:
gource on streamserver-cvn
Another way I've tested gource is by letting it run on my streamserver-cvn project's git repo; this was a bit boring, though, as there is only one author (myself) most of the time...
Operating gource
It seems you can just cd into your worktree (N.B.: a bare repository doesn't work as of gource 0.44-1+b3 (Debian), whether running with GIT_DIR=. or not) and run gource without any additional command-line arguments, and expect it to run.
A few points to note, though:
- Seeking seems to be possible by hovering with the mouse pointer over the lower part of the window/screen so that a seek bar appears, then clicking where you want to go; as of gource 0.44-1+b3 (Debian, again), it'll start from there, though, so you will arrive at a different result than when left running from the beginning of the git history...
- Pressing q or Ctrl-Q won't quit the application, but Esc does.
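For the video export mentioned above (which, as said, I didn't test): the gource documentation describes piping its PPM image stream into ffmpeg. A hedged sketch of that pipeline might look like:

```shell
# Sketch only (untested here): gource writes a PPM stream to stdout,
# ffmpeg encodes it to H.264. Options taken from the gource docs.
gource -1280x720 --output-framerate 30 --output-ppm-stream - \
  | ffmpeg -y -r 30 -f image2pipe -vcodec ppm -i - \
           -vcodec libx264 -pix_fmt yuv420p gource.mp4
```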
(Created Mon 20 May 2019 19:51:19 CEST, published around Tue 21 May 2019 00:58:42 CEST.)
- gource in Debian: Version 0.43 in Debian 8, Version 0.44 in Debian 9, Version 0.49 in upcoming Debian 10.↩
ikiwiki upgraded
(Blog software announcement 2019-3)
I've finally upgraded ikiwiki from what was in Debian 8 to what is expected to be in upcoming Debian 10!
Sadly (or reassuringly), that isn't the giant leap forward the time frame involved might suggest; it brings nice things, though, such as basic HTML5 by default, plus a mobile responsive layout based on it... That means I can finally read my blog in my smartphone's Firefox and expect something readable. (That's why I'm typing the base text of this on my smartphone; but the webform hasn't got much responsiveness added, or I'm doing something wrong...)
(Created Sun 19 May 2019 01:59:51 CEST, published around Sun 19 May 2019 17:04:04 CEST.)
For reference, find a list of related articles embedded here:
Blog software announcements 2019-n
- Blog via IkiWiki - Setup of blog (software)
- Modifying IkiWiki - Local modifications to blog software
- ikiwiki upgraded to version in upcoming Debian 10
streamserver-cvn runs on my smartphone!
As of 2019-05-11, streamserver-cvn, my personal project to help a friend fill in a missing part in his Desktop streaming, now runs on my smartphone, too! (In the GNU/Linux environment inside the Termux Android App, that is...)
As I've already written up the details as project documentation, please see there; or visit the master README.md and/or master BUILDING.md directly.
The essence is to use "qmake QMAKE_CXXFLAGS+=--target=armv7l-linux-android ..." for building, as __ARM_ARCH otherwise defaults to 4, while qt5 only supports 5-8.
A nice trick to get MPEG-TS (MPEG Transport Stream) compatible input for streamserver-cvn-cli is to use "mpv ... --o=- --of=mpegts" (mpv ships via the default repos of the Termux App), possibly piped directly into it. (For getting something that actually plays on Kodi on the Raspberry Pi, it may need many more flags, though. Which flags exactly is left as an exercise for the reader. ...)
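Put together, that pipeline might look like the sketch below. The streamserver-cvn-cli arguments are elided (as in the text), and the whole command line is an untested assumption; --image-display-duration is a real mpv option for controlling how long still images are shown:

```shell
# Sketch: render photos as an MPEG-TS stream and pipe it into the server.
mpv photo1.jpg photo2.jpg --image-display-duration=5 \
    --o=- --of=mpegts \
  | streamserver-cvn-cli ...
```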
Problems
The idea was to be able to stream some smartphone photos to the TV (via Kodi on the Raspberry Pi, as suggested above), though. This doesn't seem to work (yet?), as mpv seemed to set a frame rate of 24000 fps... It seems there is a /1001 missing somewhere; when I used some flags to set the time base to "25000/1000", each image took around 20-25 s, but Kodi hung on the stream anyway. Perhaps I should have tried to set "1000/1000"... But I can't test that at the moment. It would also be possible to try again using ffmpeg instead.
ARM legacy system alongside arm64 replacement
Table of Contents
- General situation
- More specific situation
- The deprecated ARM instructions
- Running mono and the syscall filter
- Conclusion
General situation
For some reason or another, you might have a legacy system that should theoretically be replaced by a newer system, especially when changing hardware/architecture anyway -- but you want to keep it running all the same. Or even need to, as it might be infrastructure for other hosts (e.g., needed to build Debian packages for them), which a newer system couldn't accomplish by itself.
Luckily, we're living in the future already, and can use many kinds of different technology to let the old system live on in a new one -- be it virtual machines (VMs; accelerated by hardware support like Linux KVM, or plain qemu without acceleration enabled), Linux Containers (LXC; the standalone one or that one provided by libvirt …), systemd-nspawn (chroot on steroids; possibly used via systemd-machined and machinectl), or anything else that you can imagine.
More specific situation
Years ago, I put my previous, 32-bit home server into an LXC container on the new, 64-bit system. Except that it's still running although I wanted to retire it one day, it's working great. This was on amd64 (aka x86_64, or today simply "PC", as in Personal Computer) hardware.
So now I wanted to do mostly the same thing with a Raspbian armhf (meant for running on Raspberry Pi) development environment (32-bit) on a "new" RPi 3B+, which got Debian (the real thing) buster (upcoming Debian 10) arm64 (64-bit) running on it. For one thing, I was astonished that it could do such a thing, being ARM-based and not PC, and all. For another, it was not as smooth a ride as on PC hardware!
The deprecated ARM instructions
So I set up the outer system based on a preview image
(created by Gunnar Wolf), and copied
the previous development environment to /var/lib/machines/devenv-raspbian
.
cd to there, and do a quick test:
root@devenv-arm64:/var/lib/machines/devenv-raspbian# chroot . bin/bash
root@devenv-arm64:/# ls
bin boot dev etc [...]
root@devenv-arm64:/#
Some things to note here:
1. It works.
2. It spams the dmesg/journal/syslog with messages such as:
Apr 25 14:17:38 devenv-arm64 kernel: "bash" (2612) uses deprecated CP15 Barrier instruction at 0xf7e88b50
Apr 25 14:17:38 devenv-arm64 kernel: "bash" (2612) uses deprecated CP15 Barrier instruction at 0xf7e88b50
Apr 25 14:17:38 devenv-arm64 kernel: "bash" (2612) uses deprecated CP15 Barrier instruction at 0xf7e88b50
[...]
Apr 25 14:17:39 devenv-arm64 kernel: "bash" (2612) uses deprecated setend instruction at 0xf7e666f4
Apr 25 14:17:39 devenv-arm64 kernel: "bash" (2612) uses deprecated setend instruction at 0xf7e66ca4
[...]
Apr 25 14:18:05 devenv-arm64 kernel: cp15barrier_handler: 143 callbacks suppressed
Apr 25 14:18:05 devenv-arm64 kernel: "bash" (2612) uses deprecated CP15 Barrier instruction at 0xf7d032b8
Apr 25 14:18:05 devenv-arm64 kernel: "bash" (2612) uses deprecated CP15 Barrier instruction at 0xf7d03394
Apr 25 14:18:05 devenv-arm64 kernel: "bash" (2612) uses deprecated setend instruction at 0xf7e666f4
Apr 25 14:18:05 devenv-arm64 kernel: "bash" (2612) uses deprecated setend instruction at 0xf7e66bd8
[...]
Apr 25 14:18:17 devenv-arm64 kernel: "ls" (2613) uses deprecated CP15 Barrier instruction at 0xf7cbfb50
Apr 25 14:18:17 devenv-arm64 kernel: "ls" (2613) uses deprecated CP15 Barrier instruction at 0xf7cbfb50
Apr 25 14:18:17 devenv-arm64 kernel: compat_setend_handler: 2 callbacks suppressed
Apr 25 14:18:17 devenv-arm64 kernel: "bash" (2612) uses deprecated setend instruction at 0xf7e666f4
Apr 25 14:18:17 devenv-arm64 kernel: "bash" (2612) uses deprecated setend instruction at 0xf7e66ca4
[...]
Point 2 nearly overshadowed point 1, here. Luckily, it was all easily fixed by doing:
root@devenv-arm64:~# echo 2 >/proc/sys/abi/setend
root@devenv-arm64:~# echo 2 >/proc/sys/abi/cp15_barrier
Apr 25 15:03:19 devenv-arm64 kernel: Removed setend emulation handler
Apr 25 15:03:19 devenv-arm64 kernel: Enabled setend support
Apr 25 15:03:40 devenv-arm64 kernel: Removed cp15_barrier emulation handler
Apr 25 15:03:40 devenv-arm64 kernel: Enabled cp15_barrier support
As you may guess from the dmesg
messages, this didn't just silence
the warnings, but did something different: It actually turned off
the in-kernel support for those deprecated ARM instructions, and moved
responsibility for supporting the functionality to the bare hardware.
Luckily, our Raspberry Pi 3B+ is backwards-compatible with running an ARM userland compiled for the more limited Raspberry Pi 1, and still has the deprecated instructions implemented even in aarch64 mode running a 32-bit personality. But if we had wanted to run an arbitrary 32-bit ARM userland on an arbitrary 64-bit ARM CPU running in 64-bit mode, this could have been a show-stopper.
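Note that the two echo commands shown earlier only last until reboot. To make the setting persistent, a sysctl.d drop-in along these lines should work (the file name is my choice; the keys are the real abi.* sysctls from above):

```
# /etc/sysctl.d/30-arm-compat.conf -- sketch; persists the settings above.
# 2 = kernel emulation handler removed, hardware support enabled.
abi.setend = 2
abi.cp15_barrier = 2
```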
Research
Some details of my research into this weird situation: an initial suggestion for using the abi.* sysctls, given by someone at Linaro in 2017; a reference to previous discussion from 2014, made by someone at Arm Ltd in the same thread in 2017; and a proposed timeline for ARM instruction deprecation by someone else at Arm Ltd in 2014. There, it says how it's all meant to work together for the goal of finally getting rid of certain instructions.
As a side note, those emulation warnings are coming up for other people
doing slightly similar things (though in a much more professional way)
as well; e.g., directhex
wrangling with automated build infrastructure.
In section "When is a superset not a superset", it says:
CP15 memory barrier emulation is slow. My friend Vince Sanders, who helped with some of this analysis, suggested a cost of order 1000 cycles per emulated call. How many was I looking at? According to dmesg, about a million per second.
I guess that number was taken from the output of the emulation warning's dmesg rate-limiting. Let's hope it wasn't millions of log lines, instead …
Running mono and the syscall filter
Having just a chroot would be lame nowadays, as you can easily boot another operating system (as an LXC or systemd-nspawn or even docker container, as long as it's a Linux distribution and/or copes with using the same kernel as the host system; or, otherwise, in a virtual machine, e.g. based on qemu and kvm). For this situation, where I thought the guest system would cope with the newer, foreign-architecture kernel of the host system once the deprecated ARM instructions were put out of our way, I was going to use a systemd-nspawn container.
So, for our previous /var/lib/machines/devenv-raspbian, I set up a companion configuration file /etc/systemd/nspawn/devenv-raspbian.nspawn (for unsetting some systemd-nspawn defaults when used from systemd-machined, and to set up networking), which apparently already got used when I cd'd to the machine(/container) root directory and issued a simple:
root@devenv-arm64:/var/lib/machines/devenv-raspbian# systemd-nspawn
root@devenv-raspbian:~#
Again, some things to note:
- It works.
- This time, we're not inside a simple chroot, but in a container already, with the uts namespace unshared and the hostname set according to the container name. (No tricks with /etc/debian_chroot and a PS1 which uses it needed!)
- It even picked up our inner root's home directory, and ended up there instead of in the container root directory.
With this running I tried the next step of booting into the container operating system. For the most part, it was a simple:
root@devenv-arm64:~# machinectl start devenv-raspbian
This gave the guest's boot messages in the host's journal.
When it's ready, a simple machinectl without arguments, from an unprivileged user, says what OS/version the container OS is and what IP address is being used -- information which is readily available as it's a container, not a VM.
When all works out well, you can enable the container for start on next (host) boot:
root@devenv-arm64:~# machinectl enable devenv-raspbian
So far, so good.
The problem
After some time I realized everything mono (.NET) was instantly failing at program start. Initial thoughts were of maybe another missing instruction at the hardware/kernel level; that wasn't the case, though. So I ran the thing through strace ... and noticed loads of cacheflush() calls failing with EPERM.
After some fruitless web searching into the Linux source, it appeared to me as if this could be part of some restriction placed upon us by the systemd-nspawn containerization. -- A quick test running the csharp interactive C# shell just inside a chroot instead of a systemd-nspawn container succeeded: (Transcript from memory.)
root@devenv-arm64:/var/lib/machines/devenv-raspbian# chroot . bin/bash
root@devenv-arm64:/# csharp
[some message about mono needing /proc]
root@devenv-arm64:/# mount -t proc proc proc
root@devenv-arm64:/# csharp
[some assertion not met, seems to explode much as before]
root@devenv-arm64:/# linux32 csharp
[works]
root@devenv-arm64:/# umount proc # Don't forget; or systemd-nspawn
# later will error out when started
# from "machinectl start devenv-raspbian".
So it was able to work, it just wouldn't under systemd-nspawn. Finally, I found out about systemd-nspawn having an automatic system-call filter implemented as a white-list. So this was where the cacheflush() EPERM was coming from! In addition to forcing the personality to "arm" (32-bit) in the .nspawn file mentioned above, we need to put cacheflush into the syscall-filter white-list. To avoid trial-and-error until the minimum necessary set was found, and as our containerization was for the old and new system to coexist -- and not to de-root the system being used as container --, I assumed that cacheflush was missing from the white-list due to it being ARM-private, and simply put every ARM-private syscall there that I could find in an armhf header file. (Most probably what I was looking at was asm/unistd.h from the Linux kernel sources.)
Specifically, those were: breakpoint, cacheflush, usr26, usr32, set_tls.
(TODO: Report as issue / wishlist bug on systemd and link that here?)
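Putting the pieces together, the relevant part of the .nspawn file could look like this. Personality= and SystemCallFilter= are real systemd.nspawn [Exec] options, but the exact file shown is a reconstruction, not a copy of mine:

```
# /etc/systemd/nspawn/devenv-raspbian.nspawn (relevant excerpt; a sketch)
[Exec]
# Run the container with a 32-bit ARM personality:
Personality=arm
# Add the ARM-private syscalls to the white-list:
SystemCallFilter=breakpoint cacheflush usr26 usr32 set_tls
```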
Power down the container, start it again (remember to umount the manually mounted proc first), et voila: mono runtime-based software from the old development environment was running, too!
Conclusion
It is possible to run an older Raspberry Pi compatible OS
as a container inside a newer such OS on a newer hardware version
of the Raspberry Pi.
It just unfortunately seems impossible in practice at first, with both the deprecated ARM instructions filling up the journal and mono runtime-based software of the older OS not running looking like show-stoppers. These problems can be overcome, though, which I hope to have communicated with this blog post.
Modifying IkiWiki
(Blog software announcement 2019-2)
It's incredible how flexible IkiWiki is.
For changing the page output, you can create a templates directory and copy your ikiwiki installation's page.tmpl to there - that's it! This will override the template and you can start making changes to it.
(That's how I got the additional CTIME/MTIME lines added; most of the time only one of them was provided, so it felt to me as if information was missing when looking at a single page...)
For changing code (at least as long as it's part of a plugin), you can set a libdir in the config, set up the required subdirectories IkiWiki/Plugin, and copy your ikiwiki installation's comments.pm to there - it'll override the installed plugin, so you can start to make live code changes.
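As a concrete sketch of the two override mechanisms just described (the source paths are what I believe Debian's ikiwiki package uses; verify with dpkg -L ikiwiki before copying):

```shell
# In your wiki's srcdir (for templates) and libdir, respectively:
mkdir -p templates
cp /usr/share/ikiwiki/templates/page.tmpl templates/

mkdir -p IkiWiki/Plugin
cp /usr/share/perl5/IkiWiki/Plugin/comments.pm IkiWiki/Plugin/
```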
Local avatars in comments
--- IkiWiki/Plugin/comments-20190509-1-ikiwiki_3_20141016_4_deb8u1.pm.bak 2017-01-11 19:18:52.000000000 +0100
+++ IkiWiki/Plugin/comments.pm 2019-05-10 04:25:09.364000000 +0200
@@ -641,8 +641,20 @@
my $user=shift;
return undef unless defined $user;
my $avatar;
+
+ # Try using a locally hosted avatar, first.
+ if ($user !~ m#^(?:\.|\.\.|.*/.*)$#) {
+ foreach my $testuri (
+ '/avatar/'.$user.'.png',
+ '/avatar/'.$user.'.jpg'
+ ) {
+ return $testuri if -f $config{destdir}.$testuri;
+ }
+ }
+
+ # Only then embed external resources.
eval q{use Libravatar::URL};
if (! $@) {
my $oiduser = eval { IkiWiki::openiduser($user) };
my $https=defined $config{url} && $config{url}=~/^https:/;
For reference, find a list of related articles embedded here:
Blog software announcements 2019-n
- Blog via IkiWiki - Setup of blog (software)
- Modifying IkiWiki - Local modifications to blog software
- ikiwiki upgraded to version in upcoming Debian 10
Feed experiments
Recently, I've been experimenting with (e.g. RSS or Atom) "feeds"; it's quite a nice technology which I've largely ignored while consuming. (I've had problems getting feeds to work on the "Joomla!" CMS including the photos from a gallery plug-in -- only the processing directives for the plug-in turned up -- but that's rather on the producing side.)
With this technology, the computer/program can process items (articles, comic strips, or also just any kind of repeating or automated information that may be useful to someone regularly?) from a web site piece-by-piece (instead of web-page-wise). This enables different possibilities, from sending them as emails (ugh! I want to get away from emails, not get even more of them!) to live-showing them as Desktop notifications; but the most popular application is maybe the "Feed Reader".
Currently, I'm experimenting with the "Feedbro" Firefox web-browser extension on the Desktop/Notebook/PC, and the "Feeder" Android app from the F-Droid repository of free and open-source software on mobile. I'm trying to have only a few lightweight feeds in the mobile app, as it's easy to get overwhelmed with information while using a feed reader, and it may be impossible (or undesirable) to catch up after missing out for a few days (as also suggested by a friend who tried using an RSS reader multiple times but ended up never looking at it again, due to this problem).
I'm not sure if I can keep up with what I've currently configured in the Desktop web browser, especially as some automated news items keep coming up again and again; though this may be a problem particular to Feedbro which I'm using there.
Here are some possibly-interesting feeds I've collected for the experimenting:
- Humour:
- Debian:
- "Planet Debian", a blog aggregator of Debian Developers: https://planet.debian.org/rss20.xml (see https://planet.debian.org/, linked from the Debian website)
- "Debian micronews RSS Feed" https://micronews.debian.org/feeds/feed.rss (see https://micronews.debian.org/ / via the Debian blog from the Debian website)
- "Debian News" https://www.debian.org/News/news.en; be sure to append .en to the advertised .../news URL to always get the untranslated / English original news, even if your browser configuration asks for a different language by default. (from the Debian website)
- "Debian Security Advisories (summaries)" https://www.debian.org/security/dsa-long (from the Debian website, too)
- Oh dear, there are no other concrete resources left that I'd have liked to give as examples... (after eliminating what I'd not recommend to try (again))
Some more suggestions:
- Log in to your GitHub profile and subscribe to your "private feed"; it should be something like: https://github.com/ACCOUNTNAME.private.atom?token=...
- Subscribe to your own website/blog/... (if it has a feed) to learn about, e.g., comments in a timely manner. Maybe also the sites you're an admin for.
- If you care about a specific Debian package's evolution, perhaps due to having contributed to it, you can subscribe to its Debian package news, e.g., https://tracker.debian.org/pkg/PACKAGENAME/rss (via https://tracker.debian.org/pkg/PACKAGENAME). Be warned, though, that at least Feedbro (see above) sees the frequent automatic updates to automatically generated "action items" as a completely new post, all the time. It might therefore help to add an early, non-fallthrough rule to automatically mark as read everything whose article URL starts with https://tracker.debian.org/action-items/, or alternatively to match against specific action-items instances by number.
- Maybe subscribe to your favourite (or most recently discovered) Free Software project's news feed. It may be interesting; it may also be annoying, though ...
- Or subscribe to the (e.g. GitHub) commits feed of your friend's long-forgotten project; maybe it'll keep you up to date when/if it should become active again, or you'll have a nice test for how, e.g., Feedbro displays hundreds/thousands of days of inactivity in the feed stats...
This blog is powered by ikiwiki.