
How to Use a Computer


A ThinkPad X41 was turned into an OPNsense appliance.

Why OPNsense? Both pfSense and OPNsense are derivatives of the original m0n0wall. But according to the Wikipedia article on OPNsense, the pfSense side set up a website to disparage its rival OPNsense. Maybe this is the reason why m0n0wall put up a banner on its website.

“… and I encourage all current m0n0wall users to check out OPNsense”

— Manuel Kasper (author of m0n0wall)


Hardware List

  • ThinkPad X41 : The motherboard was taken out with a 1 GB memory stick installed. Together with the 512 MB of onboard memory, it provides a 1.6 GHz CPU and 1.5 GB of memory.
  • X4 UltraBase : It provides the power connection and one 1 Gb Ethernet port. A retired SATA I 120 GB hard drive is inserted.
  • SMC CardBus Ethernet card : It provides the second Ethernet port, this one only 100 Mb. It will be hooked up to the ISP modem, whose bandwidth is lower (less than 50 Mbps).

The minimum number of network interfaces for OPNsense is two.

Since the CPU (Intel Pentium M LV 778) is too old to be amd64 capable, download the vga image for the i386 architecture, then install it on the ThinkPad.

  1. One thing to note: during the installation, we may not want to set the interface to “track WAN IPv6”.
  2. Considering how a normal off-the-shelf router works, e.g., the Netgear Nighthawk series, we don’t need a super fancy storage system. Packet processing happens in RAM, which is normally 256 MB on Netgear routers. So using a failing spindle disk for OPNsense is not a big deal, as long as the OS loads and configuration files can be saved.

Now it is up.

Figure 1: ThinkPad + OPNsense
  1. DC power
  2. LAN Ethernet port on UltraBase
  3. Onboard 512 MB memory modules
  4. WAN Ethernet port on SMC card
  5. CPU and memory stick are on the other side of the board

Fine Tuning

  • Fan Control

    There is a project bsdfan on GitHub, similar to the fan control scripts on Linux. This is useful since we are using a classic ThinkPad.

    As a prerequisite, enable shell login on OPNsense and allow login with a password. Then create a normal user in the administrators group. Log in as the normal user; we can switch to root with su - if a package install is necessary.

    With root logged in,

    # pkg install git                                 # install git suite
    # git clone   # check out bsdfan project
    # cd bsdfan                                       # change dir to ./bsdfan/
    # make                                            # build with cc
    # cp -v bsdfan.conf mybsdfan.conf                 # make a copy of config file
    # ./bsdfan -c mybsdfan.conf                       # test run bsdfan with config

    If it works, further install tmux and run bsdfan as a background program inside a tmux session (re-attach later with tmux attach to check on it). This way bsdfan does not need to be installed as a service.

    # pkg install tmux
    # tmux                                            # enter tmux terminal
    # ./bsdfan -c mybsdfan.conf                       # run bsdfan with config
    (in tmux) Ctrl-b d                                # detach, leave tmux

    Since bsdfan is not an official port in /usr/ports, we’d rather run fan control manually after OPNsense boots.
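    If starting it by hand gets tiresome, one low-tech option is the legacy /etc/rc.local, which FreeBSD still executes at the end of boot. A sketch with assumed paths (it presumes bsdfan was built under /root/bsdfan as in the steps above):

    ```shell
    # /etc/rc.local -- assumed location of the bsdfan build from the steps above
    /root/bsdfan/bsdfan -c /root/bsdfan/mybsdfan.conf &
    ```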

  • OPNsense Configuration

    Replace default apinger with dpinger which is available since release 18.1.10.

    After upgrading from 18.1 to 18.7, the system refuses user ssh logins since the default system has been hardened: the login shell has been switched to /sbin/nologin. To restore ssh login for a normal (non-root) user, choose /bin/tcsh as the Login shell via GUI System:Access:Users.

Use Cases

As Firewall

First off, we want to use NAT to map the destination IP to the server’s internal IP. This way the source of incoming traffic knows how to direct its data flow to the server. The server is exposed to the Internet, while other unwanted traffic is blocked. Go to Firewall:Settings:Advanced

  1. Check Reflection for port forwards. Otherwise we cannot visit the home server from within the home network.
  2. Check Automatic outbound NAT for Reflection. Otherwise we cannot ssh to the home server via its external IP from within the home network.

Secondly, there is a Bogon Networks rule set under the same tab as above; we can choose its Update Frequency.

Thirdly, we can enable Intrusion Detection which is installed by default. Go to Services:Intrusion Detection:Administration

  1. Under the Settings tab, check both Enabled and IPS mode. IPS mode will “block” intrusions on top of “detecting” them.
  2. Under the tab next to Settings, check the rules and click Enable selected. Then check them again and click Download & Update Rules.
  3. Rules can be updated and reloaded by a cron job. It can be set via Schedule tab.

Lastly, we can block incoming traffic from specified countries.

  1. Go to Firewall:Aliases, create an alias of type GeoIP. Select region or countries to block traffic from.
  2. Go to Firewall:Rules:WAN, Add new rule for interface WAN. Choose from drop-down list for Source the alias created in the previous step. Set TCP/IP Version IPV4+IPV6.
  3. Move the rule to the top.

It is very easy to check the firewall log through a nice GUI with live, overview and plain views. Normally we look at the overview first and spot which source IP is suspicious, then go to the plain view and type “block” in the search box.


As Load Balancer

HAProxy stands for “High Availability Proxy”. Simple NAT rules are enough for non-intelligent setups, for example, a single IP and a single server with the firewall in between.

Things become more interesting if a single IP is coupled with multiple servers serving the same content (servers mirroring each other for redundancy) or different contents (servers working independently). In this case “branching” is needed; traffic will be redirected according to, for example, server load or the requested subdomain name.

The raw haproxy is an executable that runs with a single text configuration file; its full power is unlocked when it is used this way.

On OPNsense, haproxy comes with a GUI and is installed as a plugin. Setting it up and getting it running is a few clicks away.

My use case is SSL pass-through: each backend server already has its SSL certificate installed before being added to the backend pool.

Below is a walk-through on 18.1.12. (A manual firewall rule is needed once the HAProxy setup itself is done.)

  • Real Servers Tab
    • Add an entry

      • Name: Give it a name, e.g., real_webserver.
      • FQDN or IP: Server IP on LAN
      • Verify SSL Certificate: Check
      • Option pass-through: Add send-proxy. This is useful for logging clients’ real IPs in Layer 4 TCP mode on the server side. (Note: this will affect renewal of a Let’s Encrypt certificate. Remove it momentarily when renewing a certificate, then add it back when finished.)

      Correspondingly, on the server side there should be a line real_ip_header proxy_protocol in the nginx configuration. Also add proxy_protocol to the listen line, e.g., listen 443 ssl http2 proxy_protocol. See more on this.
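      Put together, a minimal nginx server block with these two pieces might look like this (a sketch; the server_name, certificate paths, and the set_real_ip_from address standing in for the HAProxy box’s LAN IP are all assumptions):

      ```nginx
      server {
          # accept the PROXY protocol header that send-proxy prepends
          listen 443 ssl http2 proxy_protocol;
          server_name example.com;                  # hypothetical name

          # take the client IP from the PROXY protocol header ...
          real_ip_header proxy_protocol;
          # ... but only when the connection comes from the HAProxy box
          set_real_ip_from;              # assumed HAProxy LAN address

          # the certificate already installed on this backend
          ssl_certificate     /usr/local/etc/ssl/example.com.crt; # hypothetical path
          ssl_certificate_key /usr/local/etc/ssl/example.com.key; # hypothetical path
      }
      ```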

  • Virtual Services Tab
    • Backend Pools
      • Add an entry
        • Enabled: Check
        • Name: Give it a name, e.g., web_backend.
        • Mode: TCP (Layer 4). This is for SSL passing through.
        • Servers: Select from drop down list the entry created for Real Servers tab, e.g., real_webserver.
    • Public Services

      This is the front end.

      • Add an entry
        • Enabled: Check
        • Name: Give it a name, e.g., web_frontend.
        • Listen Addresses:,,,
        • Type: TCP. This is usually the same as backend. Also here is for SSL passing through.
        • Select Rules: Select from the drop-down list the two rules that, somewhat confusingly, will be created in the next tab. One rule is an action and the other an http redirect.

          In the example below, they are real_webserver_act and cond_http_redir.

  • Rules and Checks Tab
    • Conditions

      This is for http redirect.

      • Add an entry
        • Name: Give it a name, e.g., cond_http_redir.
        • Condition type: Select from drop down list “SSL/TLS connection established”.
        • Negate condition: Check
    • Rules
      • Add an entry
        • Name: Give it a name, e.g., real_webserver_act.
        • Execute function: Select from drop down list “Use specified Backend Pool”.
        • Use backend pool: Select from drop down list the backend created for Backend Pool tab, e.g., web_backend.
  • Settings Tab

    Go to Services settings and

    • Enable HAProxy: Check

    Then hit “Apply” to start haproxy.

    One may also want to change Settings:Global Parameters, Maximum SSL DH Size to 2048 or 4096. Cipher list can be updated as per Mozilla Server Side TLS.


Though it still holds the actual public IP, it is useful for accessing blocked websites when traveling abroad.

As AdBlock

Use as AdBlock at firewall level?

Regular Expressions

Miscellaneous notes

[TODO]: perl syntax on

python 3 syntax on

  • ., match any single character
  • ?, match optional preceding single character
  • (), grouping; together with ?, the whole token inside the parentheses becomes optional
  • |, logic OR
  • [], match any single character listed within the brackets
  • +, match preceding token 1 or more times

    Example: (ht|f)tps?://.+, match http, https, ftp URLs

  • ^, match the beginning of a line
  • $, match the end of a line

    Example: ^.+(jpg|gif|png)$, match jpg, gif, png files

    Example: C.+?c, match from C up to the first c; the ? makes the + quantifier non-greedy (lazy)

    Example: Adding quotation marks at the beginning and end of each line of a file within emacs. M-x replace-regexp RET ^ RET " RET will add double quotation mark at the beginning of each line; similarly M-x replace-regexp RET $ RET " RET will add double quotation mark at the end of each line. To remove quotation marks at the beginning and end of each line, simply replace every " with a blank.
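    The patterns above can be sanity-checked from a shell with grep -E (POSIX extended regular expressions, so the lazy .+? example is excluded; that one needs a Perl-style engine):

    ```shell
    # the URL pattern above: match http, https, ftp URLs
    echo "visit https://example.org/page" | grep -Eo '(ht|f)tps?://.+'
    # the anchored file-name pattern above
    echo "photo.png" | grep -E '^.+(jpg|gif|png)$'
    ```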


  • VLC3 with av1.

    Default VLC3 was compiled without av1 decoding capabilities (either libaom or dav1d). See more on dav1d.

    Install dav1d and compile VLC3 in /usr/ports/multimedia/vlc3.

    $sudo make config                        # enable dav1d as well as libssh2 and vpx 
    $sudo make package                       # make package only
    $cp -v ./work/pkg/VLC3_PKG.txz SOMEWHERE           
    $sudo pkg add SOMEWHERE/VLC3_PKG.txz     # manual install 
    $sudo make clean                         # clean ports 
  • Cinelerra has a render farm feature which can be deployed either on slave PCs within the same network as the master PC, or on the master PC itself with duplicates of itself acting as slave PCs. The latter scenario works since the TCP communication happens between multiple localhost endpoints, i.e., on the master PC itself. For a PC with many CPU cores, this is the killer feature compared to the Adobe editing suite; almost all Adobe programs are expensive and render at only \(\times 1\) speed.

    cin render farm
    Figure 2: Cinelerra settings > Render Farm Tab

    In the above example, six concurrent rendering jobs are to be created. To start the six cin -d PORT daemons with the ports specified above, use a for loop in a bash console,

    $for i in {24..29}; do cin -d 540$i ; done

    To concatenate resulting video pieces, see FFmpeg Concatenate.

    To swap segments of a file name, e.g., to change dw.pro001 to dw001.pro, use rename on Linux or sed on BSD,

    # cinelerra farm renders produce pieces named FILEddd.pro if FILE ends
    #      with a digit, and FILE.proddd if FILE ends with a letter
    # the concat step later only processes files with names ending in .pro
    re='[0-9]+$' # ending with digits
    if ! [[ ${VIDEO} =~ $re ]]; then
        # rename FILE.proddd to FILEddd.pro; on linux use rename, on BSD use sed
        for f in ${VIDEO}*.pro*; do
            if [[ $(uname) == "FreeBSD" ]]; then
                fnew=$(echo $f | sed -E 's/^(.*)\.pro([0-9]{3})/\1\2\.pro/')
                mv -v $f $fnew
            elif [[ $(uname) == "Linux" ]]; then
                rename 's/^(.*)\.pro(\d{3})/$1$2\.pro/' $f
            else
                echo "unknown operating system"
                echo "need file names in FILEddd.pro format"
                exit 1
            fi
        done
    fi
    The script in the next bullet can be used as a standalone script; combined with the script below and Cinelerra’s render farm, it can max out the performance of a computer with many CPU cores.

    #!/usr/bin/env bash
    set -e          # err exit
    #set -u         # treat undefined as err
    set -o pipefail # if one pipe fails, entire chain fails
    # need to specify file name and resolution
    VIDEO=$1 # file name prefix (assignment reconstructed; lost in the original)
    RES=$2   # resolution height (assignment reconstructed; lost in the original)
    # ---- for usage only; no need to look below this line ----
    # cinelerra farm renders produce pieces named FILEddd.pro if FILE ends
    #      with a digit, and FILE.proddd if FILE ends with a letter
    # the concat step below only processes files with names ending in .pro
    re1='[0-9]+$' # ending with digits
    re2='^CYGWIN' # for windows
    if [[ $(uname) =~ $re2 ]]; then
        echo "windows is not supported" # branch body lost; a guard like this fits
        exit 1
    elif ! [[ ${VIDEO} =~ $re1 ]]; then
        # rename FILE.proddd to FILEddd.pro; on linux use rename, on BSD use sed
        for f in ${VIDEO}*.pro*; do
            if [[ $(uname) == "FreeBSD" ]]; then
                fnew=$(echo $f | sed -E 's/^(.*)\.pro([0-9]{3})/\1\2\.pro/')
                mv -v $f $fnew
            elif [[ $(uname) == "Linux" ]]; then
                rename 's/^(.*)\.pro(\d{3})/$1$2\.pro/' $f
            else
                echo "unknown unix operating system"
                echo "need file names in FILEddd.pro format"
                exit 1
            fi
        done
    fi
    # convert ProRes *.pro pieces to *.webm
    pids=""
    RESULT=0
    for f in ${VIDEO}*.pro; do
        ./ -r $RES -a n $f & # converter script name elided in the original
        pids="$pids $!"
    done
    # wait until completion of each job in the above loop
    for pid in $pids; do
        wait $pid || let "RESULT=1"
        echo "job $pid done"
    done
    if [ "$RESULT" == "1" ]; then
        exit 1
    fi
    # generate video list for binding
    if [[ -f $VIDEO-videolist.txt ]]; then
        rm -v $VIDEO-videolist.txt
    fi
    for f in ${VIDEO}*-${RES}p.webm; do
        echo "file './$f'" >> $VIDEO-videolist.txt
    done
    # cannot directly merge converted *.webm; merged audio corrupt (still not working Mar 2019)
    # output video only from *.webm
    ffmpeg -y -f concat -safe 0 -i $VIDEO-videolist.txt -c:v copy -c:a copy video-${VIDEO}-${RES}p.webm 
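    The videolist generation step can be tried in isolation; ffmpeg’s concat demuxer expects one file './name' entry per line (the file names below are hypothetical stand-ins for converted pieces):

    ```shell
    VIDEO=demo; RES=720
    touch ${VIDEO}1-${RES}p.webm ${VIDEO}2-${RES}p.webm   # stand-ins for converted pieces
    for f in ${VIDEO}*-${RES}p.webm; do
        echo "file './$f'"
    done > ${VIDEO}-videolist.txt
    cat ${VIDEO}-videolist.txt
    ```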
  • Below is my bash script to batch convert videos to *.webm according to Google’s recommended VP9 settings. The option parsing may not be optimal; any comments are very welcome.

    #!/usr/bin/env bash
    set -e          # err exit
    #set -u         # treat undefined as err
    set -o pipefail # if one pipe fails, entire chain fails
    if [[ $# -lt 1 ]]; then
        echo "type $(basename $0) -h for usage"
        exit 1
    fi
    # parse script options
    # -r: resolution, source video resolution height. 1080+ > 1080 > 720 > 480 > 360 > 360-
    # -a: all, convert with specified res and all res below specified
    while getopts ":r:a:h" OPTIONS; do # leading : means invalid options default to ? case
        case $OPTIONS in
            r)
                RESOLUTION=$OPTARG
                echo "choose resolution $RESOLUTION"
                ;;
            a)
                ALL=$OPTARG
                echo "choose -a setting $ALL"
                ;;
            h)
                echo "Usage: $(basename $0) [-r resolution] [-a y/n] files" >&2
                echo "-r : Video height.                                  "
                echo "     This height lies in one of categories below,   "
                echo "     1080+ > 1080 > 720 > 480 > 360 > 360-          "
                echo "-a : Convert with the specified resolution only or  "
                echo "     with all resolutions below. Use y(es) or n(o). "
                exit 0
                ;;
            \?)
                echo "type $(basename $0) -h for usage"
                exit 1
                ;;
        esac
    done
    shift "$(($OPTIND -1))"
    FILES="$@" # remaining arguments are the files to process
    # workaround for absent -r -a
    # getopts does not support optional arguments
    # sanity check resolution
    if [[ $RESOLUTION == "" ]]; then
        echo "Need to specify target resolution"
        echo "type $(basename $0) -h for usage"
        exit 1
    else
        re='^[0-9]+$'
        if ! [[ $RESOLUTION =~ $re ]] ; then
            echo "Resolution $RESOLUTION is not an integer" >&2; exit 1
        fi
    fi
    # sanity check -a setting
    if [[ $ALL != "y" ]] && [[ $ALL != "n" ]]; then
        echo "-a option can only be 'y' (for yes) or 'n' (for no)!"
        exit 1
    fi
    # sanity check file list
    if [[ $FILES == "" ]]; then
        echo "Need to name the files to be processed"
        echo "type $(basename $0) -h for usage"
        exit 1
    fi
    # passed all above; show what is to be processed
    echo "Will process following files: $FILES"
    for FILE in $FILES # "$@" # an array of inputs
    do
        if [ ! -f $FILE ]; then
            echo "$FILE does not exist!"
        else
            # ffprobe returns height as 'height=xxx', filter out non numerics
            HEIGHT=$(ffprobe -v error -show_entries stream=height -of default=noprint_wrappers=1 $FILE | sed 's/[^0-9]*//g')
            echo 'Video height is '$HEIGHT
            # to do: warn when the requested resolution is higher than the source;
            # pause for the first time then ignore all following warnings
            # use shell parameter expansion to build output names, e.g.,
            # ~% FILE="example.tar.gz"
            # ~% echo "${FILE%%.*}"
            # example
            # ~% echo "${FILE%.*}"
            # example.tar
            # ~% echo "${FILE#*.}"
            # tar.gz
            # ~% echo "${FILE##*.}"
            # gz
            # settings according to google vp9 vod
            if [[ $RESOLUTION -ge 1080 ]]; then
                if [[ $HEIGHT -eq 1080 ]]; then
                    SCALE_FILTER_FULLHD="" # no scaling
                    echo 'No scaling needed'
                else
                    SCALE_FILTER_FULLHD="-vf scale=1920x1080:flags=lanczos"
                    echo 'filter setting: '$SCALE_FILTER_FULLHD
                fi
                # 1920x1080 60 fps 
                ffmpeg -y -i $FILE $SCALE_FILTER_FULLHD -b:v 3000k \
                       -minrate 2000k -maxrate 4350k -tile-columns 2 -g 240 -threads 8 \
                       -quality good -crf 31 -c:v libvpx-vp9 -c:a libopus -b:a 128k \
                       -pass 1 -speed 4 "${FILE%%.*}"-1080p.webm && \
                    ffmpeg -i $FILE $SCALE_FILTER_FULLHD -b:v 3000k \
                           -minrate 2000k -maxrate 4350k -tile-columns 4 -g 240 -threads 8 \
                           -quality good -crf 31 -c:v libvpx-vp9 -c:a libopus -b:a 128k \
                           -pass 2 -speed 1 -y "${FILE%%.*}"-1080p.webm
                if [[ $ALL == "n" ]]; then
                    continue # requested resolution only; move on to the next file
                fi
            fi
            if [[ $RESOLUTION -ge 720 ]]; then
                if [[ $HEIGHT -eq 720 ]]; then
                    SCALE_FILTER_HD="" # no scaling
                    echo 'No scaling needed'                
                else
                    SCALE_FILTER_HD="-vf scale=1280x720:flags=lanczos"
                    echo 'filter setting: '$SCALE_FILTER_HD
                fi
                # 1280x720 30 fps
                ffmpeg -y -i $FILE $SCALE_FILTER_HD -b:v 1024k \
                       -minrate 512k -maxrate 1485k -tile-columns 2 -g 240 -threads 8 \
                       -quality good -crf 32 -c:v libvpx-vp9 -c:a libopus -b:a 128k \
                       -pass 1 -speed 4 "${FILE%%.*}"-720p.webm && \
                    ffmpeg -i $FILE $SCALE_FILTER_HD -b:v 1024k \
                           -minrate 512k -maxrate 1485k -tile-columns 2 -g 240 -threads 8 \
                           -quality good -crf 32 -c:v libvpx-vp9 -c:a libopus -b:a 128k \
                           -pass 2 -speed 2 -y "${FILE%%.*}"-720p.webm
                if [[ $ALL == 'n' ]]; then
                    continue
                fi
            fi
            if [[ $RESOLUTION -ge 480 ]]; then 
                if [[ $HEIGHT -eq 480 ]]; then
                    SCALE_FILTER_SD="" # no scaling
                    echo 'No scaling needed'                
                else
                    SCALE_FILTER_SD="-vf scale=640x480:flags=lanczos"
                    echo 'filter setting: '$SCALE_FILTER_SD
                fi
                # 640x480 30 fps
                ffmpeg -y -i $FILE $SCALE_FILTER_SD -b:v 750k \
                       -minrate 375k -maxrate 1088k -tile-columns 1 -g 240 -threads 4 \
                       -quality good -crf 33 -c:v libvpx-vp9 -c:a libopus -b:a 96k \
                       -pass 1 -speed 4 "${FILE%%.*}"-480p.webm && \
                    ffmpeg -i $FILE $SCALE_FILTER_SD -b:v 750k \
                           -minrate 375k -maxrate 1088k -tile-columns 1 -g 240 -threads 4 \
                           -quality good -crf 33 -c:v libvpx-vp9 -c:a libopus -b:a 96k \
                           -pass 2 -speed 2 -y "${FILE%%.*}"-480p.webm
                if [[ $ALL == 'n' ]]; then
                    continue
                fi
            fi
            if [[ $RESOLUTION -ge 360 ]]; then
                if [[ $HEIGHT -eq 360 ]]; then
                    SCALE_FILTER_MOBILE="" # no scaling
                    echo 'No scaling needed'
                else
                    SCALE_FILTER_MOBILE="-vf scale=640x360:flags=lanczos"
                    echo 'filter setting: '$SCALE_FILTER_MOBILE                
                fi
                # 640x360 30 fps
                ffmpeg -y -i $FILE $SCALE_FILTER_MOBILE -b:v 276k \
                       -minrate 138k -maxrate 400k -tile-columns 1 -g 240 -threads 4 \
                       -quality good -crf 36 -c:v libvpx-vp9 -c:a libopus -b:a 96k \
                       -pass 1 -speed 4 "${FILE%%.*}"-360p.webm && \
                    ffmpeg -i $FILE $SCALE_FILTER_MOBILE -b:v 276k \
                           -minrate 138k -maxrate 400k -tile-columns 1 -g 240 -threads 4 \
                           -quality good -crf 36 -c:v libvpx-vp9 -c:a libopus -b:a 96k \
                           -pass 2 -speed 2 -y "${FILE%%.*}"-360p.webm
                if [[ $ALL == 'n' ]]; then
                    continue
                fi
            fi
            if [[ $RESOLUTION -lt 360 ]]; then
                echo 'Are you sure you want a low quality '$RESOLUTION'p video?!'
            fi
        fi
    done
  • Fcitx/Rime input method.

    The default installation does not include the MSPY scheme. To add the input schema, go to ~/.config/fcitx/rime, create a folder ./build and copy all files from /usr/local/share/brise into it.

    There is a utility called rime_deployer. Compile the desired schema with it, then copy the generated *.bin file.

    $cd ~/.config/fcitx/rime    # home directory for rime config
    $mkdir -pv build && cd "$_" # create 'build' dir and change dir to 'build'
    $cp -v /usr/local/share/brise/* ./                      # copy shared data
    $rime_deployer --compile double_pinyin_mspy.schema.yaml # compile schema
    $cp -v double_pinyin_mspy.prism.bin ../                 # copy bin file
    $cp -v double_pinyin_mspy.schema.yaml ../               # copy yaml file
    $rime_deployer --add-schema double_pinyin_mspy # generate default.custom.yaml

    If Fcitx/Rime cannot be activated in emacs, set environment variable LC_CTYPE to launch emacs,

    $LC_CTYPE=zh_CN.UTF-8 emacs &

    Make sure to run sudo dpkg-reconfigure locales (Debian only; skip on FreeBSD) to generate that locale first.

  • Inkscape crashes immediately upon launch.

    See the bugzilla report by many. The fix for now is to compile and install /usr/ports/devel/glib20 and then # pkg lock glib.

  • How to insert \(\LaTeX\) math into a video.
  • Mount NTFS device on FreeBSD.

    We need fusefs-ntfs driver,

    # pkg install fusefs-ntfs

    Also make sure in /boot/loader.conf, there is a line fuse_load="YES".

    Use gpart show to see the device name of storage disk. In the example below, we mount the NTFS slice with standard Linux permissions,

    # ntfs-3g -o permissions /dev/da4s2 ~/usbmnt/
  • Copy files with progress and speed information.

    The trick is to use curl’s file:// protocol for local files.

    $curl -o ~/Downloads/file file:///path/to/file/on/storage/file
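    A quick self-contained demonstration (assuming curl is installed; the progress meter and speed appear on stderr):

    ```shell
    echo "hello" > /tmp/curl-src.txt
    # copy the local file; curl shows progress and transfer speed
    curl -o /tmp/curl-dst.txt file:///tmp/curl-src.txt
    cat /tmp/curl-dst.txt
    ```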
  • Switch to bash.

    From handbook/shells, switch to bash. This should not be done for root; leave root’s shell as tcsh.

    % chsh -s /usr/local/bin/bash

    .bash_profile loads .profile, and loads .bashrc if the shell is interactive. Put environment variables and session settings in .profile; aliases, completion, etc., go in .bashrc.

    # .bash_profile
    . ~/.profile
    if [[ $- == *i* ]]; then . ~/.bashrc; fi
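    The $- test in the last line can be seen in action; $- contains 'i' only in interactive shells, which is why .bashrc is skipped for scripts and scp sessions:

    ```shell
    # run non-interactively (as in a script), this prints "non-interactive"
    case $- in
        *i*) echo "interactive" ;;
        *)   echo "non-interactive" ;;
    esac
    ```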

    Comment out the call (set-exec-path-from-shell-PATH) in the general settings for emacs, since it hangs the GUI.

    ; set PATH, because we don't load .bashrc
    ; function from
    (defun set-exec-path-from-shell-PATH ()
      (setenv "PATH" (concat "/usr/local/bin:" (getenv "PATH")))
      (let ((path-from-shell (shell-command-to-string "$SHELL -i -c 'echo -n $PATH'")))
        (setenv "PATH" path-from-shell)
        (setq exec-path (split-string path-from-shell path-separator))))
    ;(if (window-system)
    ;        (set-exec-path-from-shell-PATH)) ; GUI hangs in FreeBSD
  • Grow zfs pool.

    The original zpool consisted of four HITACHI C10K900 300 GB disks. They are replaced with four HITACHI C10K1800 600 GB disks.

    Disk S/N zpool label
    KLV0Z8GF zfs0
    KLV0ZYZF zfs1
    KLV0YAVF zfs2
    KLV0YN9F zfs3

    The procedure is to replace disks one by one.

    • Remove an old disk from the zpool before powering on.
    • Power on; check the missing disk info with zpool status. Write down the labels shown by gpart backup da0, for example.
    • Copy partition geometry from mirror disk to new disk

      # gpart backup da1 | gpart restore -Fl da3
    • Modify the labels

      # gpart modify -i 1 -l efiboot0 da3
      # gpart modify -i 2 -l gptboot0 da3
      # gpart modify -i 3 -l swap0 da3
      # gpart modify -i 4 -l zfs0 da3
    • Clone EFI and GPT boot partitions

      # dd if=/dev/da1p1 of=/dev/da3p1
      # dd if=/dev/da1p2 of=/dev/da3p2
    • Resilver the new disk

      # zpool replace zroot uid-of-missing-disk da3p4
    • Upgrade zpool if action says so

      # zpool upgrade zroot
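      One detail the steps above leave implicit: after the last disk is resilvered, the pool does not automatically use the new space unless autoexpand is on. A sketch, using zroot and the p4 partitions as labeled above:

      ```shell
      # let the pool grow once every member is larger (can be set before replacing)
      zpool set autoexpand=on zroot
      # or trigger expansion explicitly per device afterwards
      zpool online -e zroot da0p4 da1p4 da2p4 da3p4
      ```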
  • Compile GoldenDict on FreeBSD.
    • Check out git repository
    • Install qt5-qmake, qt5-help and qt5-linguisttools.
    • Enter the goldendict folder and run qmake without epwing or audio support

      qmake "CONFIG+=no_ffmpeg_player" "CONFIG+=no_epwing_support"
    • Modify two source files, adding a missing <unistd.h> include to each.

      The changes in the repository are as follows,

      diff --git a/ b/
      index 80c3b6b..87643ee 100644
      --- a/
      +++ b/
      @@ -27,6 +27,7 @@
       #include <stdlib.h>
       #include <string.h>
       #include <stdio.h>
      +#include<unistd.h> // added for freebsd
       #include <iconv.h>
       #include <QTextDocument>
       #include "gddebug.hh"
      diff --git a/ b/
      index 9d52583..7d1f31e 100644
      --- a/
      +++ b/
      @@ -1,5 +1,6 @@
       #include "processwrapper.hh"
      +#include <unistd.h> // added for freebsd
       #include <QtCore>
       #ifdef Q_OS_WIN32
    • Add -lexecinfo -liconv to the LIBS line in the Makefile.
    • Run make -j8 and test run ./goldendict.

      GoldenDict 1.5
      Figure 3: About page of GoldenDict 1.5 on FreeBSD
    • iconv issues related to UTF-8

      The major problem here is that GoldenDict won’t index Babylon *.bgl files correctly. Sometimes it complains that the UTF-8 head is not correct; sometimes it complains about nothing but won’t index the dictionary anyway. One might as well try the FreeBSD native iconv by adding -DLIBICONV_PLUG to CXXFLAGS in the Makefile.

  • Open *.djvu with Evince. A multipage *.djvu file is not correctly identified. See the bug report for more.

    File type DjVu image (image/vnd.djvu) is not supported

    The fix is given in Comment 3 further down on the bug report page. On FreeBSD, the path is


    Add image/vnd.djvu to the last line, MimeType=, in djvudocument.evince-backend.

  • Upgrade to 11.2-Release.
    • Back up configuration files.
    • Follow the release installation page and use

      # freebsd-update upgrade -r 11.2-RELEASE

      Then run freebsd-update install; after the reboot, run it again.

    • If packages are managed by pkg, reinstall all packages

      # pkg-static upgrade -f

      See more in handbook/updating-upgrading.

    • To fix broken nvidia drivers (ABI linked to older kernel version), we need to install /usr/src and use /usr/ports to build driver ourselves.
      • Get src.txz from
      • Unpack (and keep the source tree)

        # tar -C / -xvf src.txz

        -C changes the working directory to / (root); tar then unpacks src.txz in verbose mode.

      • Refresh /usr/ports

        # portsnap fetch update 
      • Go to /usr/ports/x11/nvidia-driver-340 and build from source

        # make install clean

        clean after install cleans the working directory. We might want to follow the OpenSSL Wiki and add to /etc/make.conf

        DEFAULT_VERSIONS+= ssl=openssl
      • If pkg complains about changed configuration of nvidia driver, lock the pkg version with

        # pkg lock nvidia-driver-340
  • After upgrading to emacs 26.1, the Gnus+Gmail login stopped working, complaining that gpg could not be found. Basically pinentry-tty cannot work properly with newer Emacs, so we need to “loop back” to the older pin-entering scheme; add the following to init.el

    (setf epa-pinentry-mode 'loopback) ; freebsd only
  • To change the pkg repository from quarterly to latest, modify the file /etc/pkg/FreeBSD.conf, or follow the instructions inside it to create a local copy at /usr/local/etc/pkg/repos/FreeBSD.conf.
  • Using tor with a regular user switching to _tor. After enabling control port 9051, if nyx is run as neither root nor _tor, it will complain that the control auth cookie does not exist, since the cookie lives in /var/db/tor/, which is owned by _tor.

    A temporary fix is to use visudo to allow a regular user to run nyx as user _tor without a password.

    someuser ALL=(_tor) NOPASSWD: /usr/local/bin/nyx
  • Regarding *.core dump, see discussion on It seems we can disable core dump by setting




    in /etc/sysctl.conf.

  • Use Firefox with profiles. There are two methods to create new profiles.

    • With no firefox instances running (including running background), start firefox with firefox -p.
    • With firefox running, type about:profiles into address bar and follow the About Profiles page.

    To start firefox with a specified profile, use firefox -p NAMEofProfile. See here for more.

  • Install security/tor from ports and enable tor web browsing through regular Firefox

    Preferences –> Network Proxy –> Settings.

    Set a socks5 proxy on port 9050. Also check socks5 proxy for DNS.

    But we need to skip the loopback interface lo0 for localhost. (Use ifconfig and check which interface has the localhost IP.)

    Add set skip on lo0 in /etc/pf.conf after block all.

  • Mount a remote server directory. Install sshfs first, of course! To allow a normal user to mount and browse the mounted remote directory, add


    to /etc/sysctl.conf.

    A normal user cannot open a FUSE device by default. Change this by

    # devfs ruleset 10
    # devfs rule add path 'fuse*' mode 666

    When mounting a remote directory use

    # sshfs -o allow_other USER@host:/path/to/mount/ /local/mount/point/
  • Show UTF-8 characters in terminal. Set locale in .cshrc

    setenv LANG en_US.UTF-8
    setenv MM_CHARSET UTF-8
  • Auto complete with sudo. Add to .cshrc

    set complete = enhance
    complete sudo 'n/-l/u/' 'p/1/c/' 

    More on this IBM link.

  • TeXLive DVD does not contain binaries for architecture amd64-freebsd but the online installer install-tl does.

    To use graphic installer, install p5-Tk from ports.

    xelatex needs an extra shared library. Force a symlink

    # ln -s /usr/local/lib/ /usr/local/lib/
  • Mount ISO images

    First attach the image (an .iso or .img file) to unit #1

    # mdconfig -o readonly -a -t vnode -f /path/to/image -u 1

    Then mount it read-only (-o ro) with type -t cd9660

    # mount -o ro -t cd9660 /dev/md1 /mnt

    Unmount unit #1 using

    # umount /mnt
    # mdconfig -d -u 1
  • One quirky thing about FreeBSD emacs-nox is that undo is not bound to Ctrl-/ but to the beginner-friendly Ctrl-x u.

    Another weird thing is that backspace invokes help (Ctrl-h ?). Use the following to recover the erase key.

    (normal-erase-is-backspace-mode 1) ; for freebsd emacs
  • LXDE on FreeBSD uses openbox as the window manager by default. The openbox configuration is purely text based, located at ~/.config/openbox/lxde-rc.xml.

    To start LXDE automatically with startx, add

    % echo "exec startlxde" > ~/.xinitrc
  • If Dbus complains about “no machine-id”, use

    # dbus-uuidgen > /etc/machine-id
  • During installation, there is an option whether to invite the new user to other groups. Say yes and add the newly created user to the wheel group. Users in the wheel group can escalate their privileges to root.
  • After installing sudo, we only need to uncomment the line for the wheel group; there is no need to add the new user to a sudo group.
  • There are three important files regarding WiFi connection with a static IP: /etc/rc.conf, /etc/resolv.conf and /etc/wpa_supplicant.conf. For connection to a hidden SSID, use this configuration in /etc/wpa_supplicant.conf

    ssid="/ssid name/"
    bssid="/mac addr of AP/"

    In /etc/rc.conf, adding the channel number helps fast establishment of static IP connections

    ifconfig_wlan0="WPA inet netmask ssid /ssid name/ channel 48:ht/40"

    Sometimes we need to connect manually.

    %sudo service netif stop  # stop network 
    %sudo service netif start # restart
  • To kill zombie logged-in users, use w or who to find out their login types. Then use ps auxw | grep -i tty0 to find the PID. Use kill PID to kick those users out.
  • pkg primer. /usr/ports itself provides search mechanisms: cd /usr/ports, then make index or make fetchindex; make search key=WORD provides the search functionality.

    The most commonly used commands,

    $sudo pkg update                     # refresh software repo
    $sudo pkg upgrade                    # update all outdated packages
    $sudo pkg lock PACKAGE               # prevent PACKAGE from upgrading
    $sudo pkg clean -a                   # clear cached packages
    $sudo pkg info                       # list all installed packages
    $sudo pkg info PACKAGE               # list detailed info about PACKAGE
    $sudo pkg info -D PACKAGE            # show post installation message 
    $sudo pkg rquery QUERY_FORM PACKAGE  # list info about PACKAGE not installed
    $sudo pkg delete PACKAGE             # uninstall PACKAGE (and those depending on it)
    $sudo pkg delete -f PACKAGE          # uninstall PACKAGE alone 
    $sudo pkg autoremove                 # uninstall packages no longer needed
    $sudo pkg search PACKAGE             # search for PACKAGE
  • View the message of an installed package. Sometimes it contains an important post-installation howto. Installed packages are cached at /var/cache/pkg/.

    %pkg info --pkg-message PACKAGE

    To update FreeBSD itself, use freebsd-update fetch followed by freebsd-update install.

  • Network checking. netstat -r shows the routing table. To check listening ports, we can use

    %netstat -an | grep -i listen

    sshd processes can be part of output of sockstat -4l. Connection authorization log is accessible via /var/log/auth.log.

  • Install devcpu-data. Then check whether there is a new microcode update for the CPU.

    %cpucontrol -u -v /dev/cpuctl0


A modern language inspired by many other languages.



Unzip the binary archive to c:\Go. This is the recommended path; otherwise you need to specify the GOROOT environment variable.

Add c:\Go\bin to PATH. To test the installation, write hello.go and type the following in the command line. This should generate a hello.exe.

$go build 

Emacs go-mode

Go in Action

Algorithms in Go. (or algorithms in general)


  • Install KOReader on Kobo Aura HD.

    • Install KSM : follow “Installation (consists of 2 steps)”
    • KSM shows up before Kobo boots into Nickel (Kobo’s software)
    • Bind a static IP to Kobo MAC using dd-wrt : See settings Service Management under Services tab
    • Install KOReader : copy unzipped koreader folder to .adds. KOReader has a builtin SSH server, hence no need to install a separate app. Put public key as authorized_keys in .adds/koreader/settings/SSH/ (create SSH folder if it does not exist), then log in with sftp -i PRIV_KEY -P 2222 root@KOBO_IP.

      lftp sftp://root@KOBO_IP -p 2222 -e 'set sftp:connect-program "ssh -a -x -i PRIV_KEY"'

      In interactive lftp command line, to get book folder, use mirror --only-newer --only-missing; to push, add -R to the previous mirror command.

      In case the get and push commands get mixed up, e.g., pushing without -R, up-to-date KOReader reading notes and highlights can be lost. To recover the metadata.epub.lua or metadata.pdf.lua data associated with a BOOK.sdr folder, consider using cgsecurity’s PhotoRec (actually we will use it to recover only text files).

    • Set sleep and power off screensaver : From top menu,
      • Third Tab : Screen -> Screensaver -> check Use image as screensaver
      • Same location as Use image as screensaver, go further down Settings -> check White background behind message and images and Stretch covers and images to fit screen. Then select the desired screensaver by choosing Screensaver image.
      • As for power off screensaver, back up settings.reader.lua before adding the configuration code,

        ["poweroff_screensaver"] = {
        ["path"] = "/mnt/onboard/.adds/kbmenu_user/img/koreader_poweroff.png"

    kobopatch and Patch Instruction for the latest firmware. This may not be useful if the use case is mainly booting directly into KOReader via KSM.

    See also development toolchain.

    okreader initial support for glo HD.

    For audio books, use AAXtoMP3 for archival purposes. Personal Audible ID used for purchases can be fetched using audible-activator. A nonfree solution is to use TunesKit Audible AAX Converter, a very zippy app.

  • Install KOReader on Kindle Touch 3G (K5).

    When I laid my hands on a Kobo device, I immediately felt that over the years I had been sold a lie that Kindle was the go-to device when it comes to e-reading. After shifting my reading to Kobo, it never stops surprising me: even within the Kobo pecking order, being an older device doesn’t necessarily mean it is not good enough. Remove the back panel, unscrew the corners and replace the factory micro SD card with a modern storage card, and the Kobo flies, rendering any Kindle a useless, Amazon-locked, Apple-equivalent cult cash cow. And that is before tuning the Kobo kernel or hacking its firmware. The list of tweaks is endless for a Kobo; there is no jail in the first place, therefore no such thing as a jailbreak in the Kobo universe.

    Launch KOReader through KUAL, from top menu,

    • Second Tab : Switch zoom mode -> zooming to fit content width
    • Third Tab : Status bar -> only keep current page, pages left in chapter and progress percentage
    • Third Tab : Screen -> E Ink settings -> check avoid black flashes in UI

    From bottom menu, set reading in landscape mode.

    Copy stardict dictionaries to the koreader/data/dict directory. It seems dictionaries containing CJK characters crash KOReader and make the kindle reboot. Also it takes a fairly long time for search results to pop up if large dictionaries are used for lookup.

    As for Wikipedia lookup, if multiple languages are listed, at the moment the lookup results are shown by looping over the list of specified languages. For example, if we tell KOReader to query Wikipedia pages in en de fr jp zh, five different languages, first we are shown search results in English; press the button at bottom center, en > de, and the search results in German come next, while the button itself changes to de > fr. Press the button again and the search results in French turn up, and so on. In this example, wouldn’t it be nice if, on pressing the button en > de, a drop-down list popped up with the four other languages instead of looping over the entire language list item by item?

  • Mount and transfer files to Kindle on FreeBSD. Mount the e-reader to a local path then copy files to documents folder.

    $sudo mount -t msdosfs /dev/msdosfs/Kindle ~/usbmnt/
  • Eject Kindle.

    Unlike ejecting Kindle on a Linux machine using,

    $sudo eject /dev/disk/by-label/Kindle

    on FreeBSD, use

    $sudo camcontrol devlist
    <Kindle Internal Storage 0100> at scbus8 target 0 lun 0 (da4,pass5)


    $sudo usbconfig
    ugen0.9: <Amazon Amazon Kindle> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (2mA)  

    to get the connected device info first, then power off the USB port so one can keep reading while the Kindle stays connected and charging,

    $sudo usbconfig -d 0.9 power_off

    where “-d” is used for device selection with the format <unit>.<addr>, “0.9” stands for “[ugen]<unit0>.<addr9>”.

    To get vendor, product and other detailed information, use

    $sudo usbconfig -d 0.9 dump_device_desc
  • SSH/sftp to Kindle.

    First generate a pair of SSH keys,

    $ssh-keygen -t rsa -b 4096 # default length is 2048 bit

    Connect Kindle to PC with a USB cable. Then copy the public key to Kindle as /mnt/us/usbnet/etc/authorized_keys. Use sftp to upload files.

    $sftp -i PRIV_KEY root@ # default IP assigned by USBNet

GNU/Linux Debian

  • Sync time with NTP.

    $sudo vi /etc/systemd/timesyncd.conf 
    # inside timesyncd.conf, uncomment NTP line and FallbackNTP line
    # get ntp servers from
    # edit
    $sudo timedatectl set-ntp true # activate ntp 
    $timedatectl status            # check status 
  • dd CD Disc to ISO file.

    First determine the block size of the source CD.

    $isoinfo -d -i /dev/cdrom | grep -i -E 'block size|volume size'
    Logical block size is: 2048
    Volume size is: 25733

    Then use dd to dump the CD data using the block size and volume size information from the previous step.

    $sudo dd if=/dev/cdrom of=IMG.iso bs=2048 count=25733 status=progress
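
    The bs/count arithmetic can be dry-run on a scratch file before touching a real disc. A minimal sketch, assuming nothing beyond coreutils (the paths under /tmp/dd_demo are hypothetical):

    ```shell
    # create a fake 16-block "disc", then copy it with the same bs/count math
    mkdir -p /tmp/dd_demo && cd /tmp/dd_demo
    dd if=/dev/urandom of=disc.img bs=2048 count=16 2>/dev/null
    dd if=disc.img of=copy.iso bs=2048 count=16 2>/dev/null
    cmp disc.img copy.iso && echo "bit-for-bit identical"
    ```

    If bs x count matches the isoinfo numbers exactly, the copy ends precisely at the end of the data track.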
  • Try stock vector photos before paying for the license.

    For example, Getty Images’ iStock website provides millions of photos, illustrations and pieces of clip art. Before one places anything in the final print run, one can always use a manipulated sample photo as a vector substitute in the draft version of a project.

    • Download the sample photo one is interested in and use Imagemagick to threshold the image. This will remove the watermark.
    • Import the de-watermarked image in Inkscape. Use trace bitmap to obtain the paths of the image. Use Path -> Simplify to reduce the number of nodes if desired.
  • Add supporting audio files to GoldenDict.

    Audio files should be placed in a folder named as DICT.dsl.files if DICT.dsl is the dictionary name. Both DICT.dsl file and DICT.dsl.files folder reside in the same directory that is scanned by GoldenDict.

    goldendict with audio
    Figure 4: GoldenDict 1.5 with Audio on FreeBSD
  • Reduce the size of a scanned book.

    There are two major reasons. The first is obviously the concern regarding file size. The second, more important one is that a scanned book in a PDF container may perform poorly in a document viewer, e.g., evince: scrolling up and down can be painful since slow decompression of large images prevents fast responses. A scanned PDF may even constantly crash KOReader on a Kindle, or won’t open at all.

    We can use *.djvu instead. Thus a scanned book contained in a PDF file will be turned into a DjVu volume. For example, the size of And Never Said a Word by Heinrich Böll from the Internet Archive goes down from 10.6 MB in PDF to 7.1 MB in DjVu by applying the technique below.

    """
    This script is used for shrinking the size of scanned PDF books made of
    large hi-res images, e.g., books from Internet Archive. What it does is
    removing useless paper pixels, keeping only text pixels.
    Main tools used here are OpenCV with python bindings. Other tools include
    didjvu and djvm in bash terminal.
    -Yün Han
    """
    import cv2 as cv
    import numpy as np
    import matplotlib
    matplotlib.use('Agg')            # non-interactive backend for batch savefig
    import matplotlib.pyplot as plt
    for i in range(208):
        # Justification of using acrobat pro (only if necessary): borrowed books
        # from internet archive are encrypted. When downloaded, they are processed
        # through adobe digital editions. So when pdf is obtained, further export
        # images from it using acrobat pro. Otherwise, use pdfimages on linux
        # load image
        seq = str(i+1).zfill(3) # padding leading zeros, e.g., 1 becomes 001
        img = cv.imread('AndNeverSaidAWordHeinrichBoll_Page_' + seq + '.jpg')
        # create a mask for background paper yellow
        img2gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
        _, bg_mask = cv.threshold(img2gray, 5, 10, cv.THRESH_BINARY) # play with threshold
        bg_mask_inv = cv.bitwise_not(bg_mask)
        # now delete background paper yellow
        text_fg = cv.bitwise_and(img, img, mask=bg_mask_inv)
        bg_color_bg = cv.bitwise_and(img, img, mask=bg_mask)
        # superimpose
        img_processed = cv.add(bg_color_bg, text_fg)
        # OpenCV loads BGR; convert to RGB before handing it to matplotlib
        img_processed = cv.cvtColor(img_processed, cv.COLOR_BGR2RGB)
        # no frame
        fig = plt.figure(frameon=False)
        fig.set_size_inches(7.5, 11.2) # 3.75x5.6 inch for 6 inch kindle
        # no axis; the axes must be added to the figure explicitly
        ax = plt.Axes(fig, [0., 0., 1., 1.])
        ax.set_axis_off()
        fig.add_axes(ax)
        # show processed image and save figure
        ax.imshow(img_processed, interpolation='lanczos')
        plt.savefig('./png/'+seq+'.png', dpi=300) # dpi controls page quality
        plt.close(fig)                            # free memory across 208 pages
        #cv.namedWindow('Processed Image',cv.WINDOW_NORMAL)
        #cv.imshow('Processed Image',img_processed)
    Once image processing is done, use
    $didjvu bundle -o AndNeverSaidAWordHeinrichBoll_part1.djvu 0{01..99}.png
    in bash terminal to encode a multipage djvu file for the first 99 pages. Repeat
    the same process for the remaining pages. Lastly use djvm to combine djvu parts
    into a single djvu volume.
    $djvm -c AndNeverSaidAWordHeinrichBoll.djvu AndNeverSaidAWordHeinrichBoll_part*.djvu
    The reason why we process about 100 pages at a time is that sometimes processing
    all *.png requires more than 4 GB of memory and our debian machines are old.
    Also tried
    $didjvu bundle -o test.djvu --bg-crcb=none --fg-crcb=none -m sauvola --clean

    If color is not part of the equation at all, we can use monochrome images to create the DjVu document. It reduces the size of the file even further, e.g., the size of And Never Said a Word goes down from 10.6 MB in PDF to 3.5 MB in DjVu.

    $pdfimages -f 1 -l 208 AndNeverSaidAWordHeinrichBoll.pdf png/neverword
    $cd ./png && for f in *.pbm; do convert $f -channel RGB -negate ${f%%.*}.png; done
    $cd .. && didjvu bundle -o AndNeverSaidAWordHeinrichBoll.djvu png/*.png # use *.pbm if previous step skipped
    • If the book was scanned in two-page landscape mode, use

      $cd ./png && for f in *.pbm; do convert -crop 2x1@ $f $f-%d.png; done

      to split each page first.

      If a two-paged book is in DjVu in the first place, use

      $ddjvu -format=pdf BOOK.djvu BOOK.pdf

      to save it in PDF then repeat the above splitting processing of images from a PDF file.

    • If the number of images is over a thousand, the ordering of resulting PNG’s may be a problem. The default format of pdfimages output numbering is a 3-digit sequence, e.g., \(001\), \(002\)… Use rename to remedy this, e.g.,

      $rename -n 's/img-([0-9]{3}).pbm/img-0$1.pbm/' *.pbm # preview first
      $rename 's/img-([0-9]{3}).pbm/img-0$1.pbm/' *.pbm    # now rename 
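
      If the perl rename utility is unavailable, the same zero padding can be done with plain mv. A sketch in a scratch directory, with hypothetical file names:

      ```shell
      # pad 3-digit pbm names to 4 digits with mv alone
      mkdir -p /tmp/pad_demo && cd /tmp/pad_demo
      touch img-001.pbm img-042.pbm img-999.pbm
      for f in img-[0-9][0-9][0-9].pbm; do
          mv -v -- "$f" "img-0${f#img-}"   # img-001.pbm -> img-0001.pbm
      done
      ls
      ```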

    There might be cases where scanned pages do not consist solely of images, i.e., part of the page is text and the rest is images. Convert each page to an image if this is the case,

    #!/usr/bin/env bash
    set -e
    set -o pipefail
    for i in $(seq 0 $(($NUM_PAGE-1))); do # the first page is page 0
        /usr/bin/convert -verbose -density 600 ${FILE}.pdf[$i] -quality 100 png/${FILE}-$(printf %04d $i).png
    done

    If using a lower end computer for processing a scanned book of thousands of pages, memory intensive work can be mitigated by dealing with only one page at a time,

    #!/usr/bin/env bash
    set -e
    set -o pipefail
    # set name, range and quality
    for i in $(seq 0 $(($NUM_PAGE-1))); do # the first page is page 0
        # save one book page as png image (transparent support)
        convert -verbose -fuzz 20% -transparent white -density $DPI ${FILE}.pdf[$i] -quality $QUALITY ${FILE}-$i.png
        # compress image with djvu
        didjvu bundle -o ${FILE}-$i.djvu ${FILE}-$i.png
        # bundle djvu with djvm
        if [[ $i == 0 ]] ; then
            mv -v ${FILE}-$i.djvu ${FILE}.djvu
        else
            djvm -i ${FILE}.djvu ${FILE}-$i.djvu # append this page to the volume
        fi
        # clear off temp png and djvu
        rm -v ${FILE}-$i.png
        if [[ $i != 0 ]] ; then
            rm -v ${FILE}-$i.djvu
        fi
    done

    Lastly, make the DjVu file searchable,

    $ocrodjvu --in-place -l eng+deu 'AndNeverSaidAWordHeinrichBoll.djvu'

    This will increase the file size by some percentage, e.g., the size of And Never Said a Word goes up from 3.5 MB to 4.0 MB.

    Note that with this method, watermarks within scanned books (usually superimposed when a book is downloaded from a library website, e.g., the IP address of the download terminal and the library URL included in the footer) will be automatically ignored since they are not part of the scanned images.

    As for watermarks within LaTeXed PDF books, use Infix PDF editor: select the text box at the bottom of the page where “Downloaded from XXX Library, DATE; IP” occurs, then go to Edit->Delete across pages to remove all occurrences. We might try this several times since the text box might not be at the exact same location on each page.

  • Org without make.

    According to this hack, just to make autoloads file is enough (where org is unpacked)

    emacs -batch -Q -L lisp -l ../mk/org-fixup -f org-make-autoloads

    where -Q stands for quick, no site files; -L for adding directory for emacs to search lisps; -l for loading lisp code; -f for calling function.

  • Split two-page scanned PDF down the middle.

    $mutool poster -x 2 Artin1967.pdf Artin1967_split.pdf # need install mupdf-tools
  • Clonezilla larger disk to smaller disk.

    This scenario is not supported by Clonezilla. The workaround is to shrink all the larger disk’s partitions so that the total space in use is less than or equal to the size of the smaller disk. This can be done with the gparted live CD. We cannot resize mounted partitions with gparted on a running Linux machine; shut it down and resize in the live CD environment instead.

    Then boot into Clonezilla GUI with its live CD,

    • Do a larger disk partitions to image backup, use SSH server if no spare disks are available;
    • Do a partitions image to smaller disk restore. Keep the original partition table and do not check resize the geometry of partition table to fit target.

    Once everything is done, boot into gparted live again to max out the capacity of the smaller disk by resizing whatever is allowed.

    When booting into the smaller disk with newly restored system, the device UUIDs might be different, use blkid to print the new UUIDs and update /etc/fstab accordingly. Lastly,

    $sudo update-initramfs -u 

    to reflect the updated UUIDs. We might need to update the swap UUID in /etc/initramfs-tools/conf.d/resume as well.

  • A trick to delete every other file, every third file, etc.

    For example, with a folder in which there are a lot of PNG’s, we only want to keep every third one.

    ls *.png | awk 'NR % 3 != 0 { print }' | xargs rm -v
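
    Before pointing the pipeline at real data, the NR % 3 filter can be dry-run in a scratch directory (the names page1.png … page6.png are hypothetical); only every third file survives:

    ```shell
    mkdir -p /tmp/awk_demo && cd /tmp/awk_demo
    touch page1.png page2.png page3.png page4.png page5.png page6.png
    # preview the doomed files first, then actually delete them
    ls *.png | awk 'NR % 3 != 0 { print }'
    ls *.png | awk 'NR % 3 != 0 { print }' | xargs rm -v
    ls *.png    # page3.png and page6.png remain
    ```

    Changing the modulus (and the comparison) selects every other file, every fourth file, and so on.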
  • A trick to convert capital letters in file name to lowercase.

    In Bash 4+, use shell parameter expansion.

    $str='string WITH CAPS'
    $echo $str
    string WITH CAPS
    $echo ${str,,}
    string with caps
    $echo ${str,,[C-W]}
    string with cAps

    Here , converts uppercase to lowercase; ^ converts lowercase to uppercase. Doubling (,,) converts every character that matches the pattern.
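
    To apply the same expansion to actual file names, a minimal sketch in a scratch directory (hypothetical names; requires Bash 4+):

    ```shell
    mkdir -p /tmp/lc_demo && cd /tmp/lc_demo
    touch README.TXT Notes.MD lower.txt
    for f in *; do
        lc=${f,,}                                  # lowercase the whole name
        [ "$f" != "$lc" ] && mv -v -- "$f" "$lc"   # skip names already lowercase
    done
    ls
    ```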

  • Add leading zeros to filenames.

    for f in [0-9].pdf; do
    > mv -v $f $(printf %02d.%s ${f%%.*} ${f##*.})
    > done
    renamed '0.pdf' -> '00.pdf'
    renamed '1.pdf' -> '01.pdf'
    renamed '2.pdf' -> '02.pdf'
    renamed '3.pdf' -> '03.pdf'
    renamed '4.pdf' -> '04.pdf'
    renamed '5.pdf' -> '05.pdf'
    renamed '6.pdf' -> '06.pdf'
    renamed '7.pdf' -> '07.pdf'
    renamed '8.pdf' -> '08.pdf'
    renamed '9.pdf' -> '09.pdf'

    To combine them, use

    $pdftk *.pdf cat output OUTPUT.pdf
  • Set permission of /var/www/html.
    • It is generally owned by USER and the group www-data, which runs the web server.

      $sudo chown -R USER:www-data /var/www/html
    • Files and folders newly created within should automatically belong to group www-data. Apply setgid to all subfolders,

      $sudo find /var/www/html/ -type d -exec chmod g+s '{}' +
    • Don’t let the world (others) have anything to do with /var/www/html.

      $sudo chmod -R o-rwx /var/www/html/
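
    The effect of the setgid step above can be verified on a scratch tree before touching the real web root (the path /tmp/setgid_demo is hypothetical):

    ```shell
    mkdir -p /tmp/setgid_demo/sub
    find /tmp/setgid_demo -type d -exec chmod g+s '{}' +
    # drwxr-sr-x: the s in the group triad confirms setgid is set
    stat -c '%A %n' /tmp/setgid_demo/sub
    ```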
  • Gzip/Brotli nginx module and BREACH attack.

    A basic web server only compresses html; so if we want to serve compressed text, images and other media types, we need to configure the nginx server to tell the difference between different file types.


    and an article on digitalocean. Down below there is a configuration for using Gzip compression. It is known that Gzip compression over TLS/SSL is vulnerable to the BREACH attack. An alternative, Brotli, used by Google, is vulnerable to the same attack, but insofar as mitigation is concerned it can be used with the length_hiding module.

    server {
        # inside /etc/nginx/sites-available/active-website
        gzip on;                   # turn on 
        gzip_comp_level    5;      # 1 - 9; 9 most, 1 least 
        gzip_min_length    256;    # size less 256 won't be compressed
        gzip_proxied       any;    # for request routed via proxy servers 
        gzip_vary          on;     # for non gzip browsers
        gzip_buffers       32 16k; # 32 buffers with each of the size 16k 
        # woff is already compressed 
        # jpeg is already compressed
        # html is always compressed by gzip module

    Add the above settings, then run a test, sudo nginx -t, before reloading the nginx server. To test whether gzip functions correctly, use a web browser debug tool. For example, with Firefox Web Developer -> Network (Ctrl-Shft-E), look for a media type instructed to be compressed by gzip and check Response headers. There should be a field

    Content-Encoding: gzip
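
    The server block above hints at extra types in comments but never names them. A hedged sketch of a typical gzip_types list (the exact MIME types depend on what the site actually serves):

    ```nginx
    # inside the same server block; text/html is always compressed
    gzip_types text/plain text/css text/xml application/json
               application/javascript application/xml+rss image/svg+xml;
    ```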

    As an alternative, we can custom build nginx with ngx_brotli and nginx-length-hiding-filter-module.

    • First create a folder where the compilation will happen. Then grab the build dependencies and the debian source code.

      $mkdir -pv ~/Downloads/nginx_custom && cd "$_"
      $sudo apt-get build-dep nginx-full # replace nginx-full with nginx if from official repo
      $apt-get source nginx-full         # sudo is not needed
    • Now there is an nginx-version folder in the current directory, e.g., nginx-1.14.2. Go to the modules directory and fetch the source code of both brotli and length_hiding from github.

      $cd nginx-1.14.2/debian/modules/
      $git clone        # brotli
      $cd ngx_brotli && git submodule update --init && cd .. 
      $git clone # length_hiding 
    • Go back to the debian directory and edit rules and changelog. We choose to build the nginx-full flavor, and the change in changelog serves as a tag to differentiate this custom build from the packages in the official debian repository.

          $cd .. # assuming PWD shows nginx-1.14.2/debian/modules/
          $vi rules 
          INSIDE /debian/rules 
          FLAVOURS := full
          full_configure_flags := \ 
      --add-dynamic-module=$(MODULESDIR)/nginx-length-hiding-filter-module \
      --add-dynamic-module=$(MODULESDIR)/ngx_brotli \
      --with-openssl-opt=enable-tls1_3 \
      --with-http_geoip_module # if building from official repo
          $vi changelog 
          INSIDE /debian/changelog
          nginx (1.14.2-2-custom) unstable; urgency=medium 

      Note here modules are added as dynamic modules so we need to copy *.so files manually later.

    • Custom build the unsigned package. *.deb files will be generated in the parent directory.

      $cd ~/Downloads/nginx_custom/nginx-1.14.2
      $sudo dpkg-buildpackage -uc -b
    • Install *-custom*.deb files.

      $sudo dpkg -i nginx_1.14.2-2-custom_all.deb \
      nginx-common_1.14.2-2-custom_all.deb \
      nginx-full_1.14.2-2-custom_amd64.deb \
      libnginx-mod-http-auth-pam_1.14.2-2-custom_amd64.deb \
      libnginx-mod-http-cache-purge_1.14.2-2-custom_amd64.deb \
      libnginx-mod-http-dav-ext_1.14.2-2-custom_amd64.deb \
      libnginx-mod-http-echo_1.14.2-2-custom_amd64.deb \
      libnginx-mod-http-geoip_1.14.2-2-custom_amd64.deb \
      libnginx-mod-http-image-filter_1.14.2-2-custom_amd64.deb \
      libnginx-mod-http-subs-filter_1.14.2-2-custom_amd64.deb \
      libnginx-mod-http-upstream-fair_1.14.2-2-custom_amd64.deb \
      libnginx-mod-http-xslt-filter_1.14.2-2-custom_amd64.deb \
      libnginx-mod-mail_1.14.2-2-custom_amd64.deb
    • Activate the modules by changing main nginx configuration and site configuration files.
      • Reflect the changes in site configuration file.

        server {
            # inside /etc/nginx/sites-available/active-website
            # gzip/brotli compression below is exposed to TLS BREACH attack, use 
            # random comments added to htmls to hide length info
            length_hiding on;
            # use brotli for compression though brotli is also vulnerable to the same attack
            brotli on;            # turn on compression on the fly 
            brotli_comp_level 9;  # compression level 
            brotli_static on;     # serve pre compressed files
            brotli_types          # file types that need compression 
            # woff is already compressed 
            # jpeg is already compressed
      • In the main context, load the newly added modules in /etc/nginx/nginx.conf

        load_module modules/;
        load_module modules/;
        load_module modules/;
      • Copy three modules in ~/Downloads/nginx_custom/nginx-1.14.2/debian/build-full/objs (*.so files) to modules folder /usr/share/nginx/modules.
    • Test and restart nginx.

      $sudo nginx -t 
      $sudo service nginx stop
      $sudo service nginx start

      There should be a field in the response header

      Content-Encoding: br
  • Web font.

    If not importing web fonts from CDN (Content Delivery Network) services such as Google Fonts, put desired fonts to local server and use @font-face in CSS sheet to load them. As to how to optimize the web fonts on a website, see here for detailed discussions.

    There are generally four major font types: *.woff2, *.woff, *.ttf and *.eot, listed from newest to oldest. WOFF is the most commonly used, while the two older ones are intended for legacy support of Android (4.4 or below) and Internet Explorer (IE 9 or below) respectively.

    The newest WOFF2 is a compressed version of WOFF. It seems FontForge at the moment has difficulty converting available fonts to WOFF2. The temporary fix is to install the woff2 package. It comes with both compress and decompress utilities (for conversion between TTF and WOFF2).

  • Create patches and apply patches.

    To create a patch for a single modified file,

    $diff -Nau original-file modified-file > patch-file

    where -N means treat absent file as empty; -a means treat files as text; -u means unified format.

    To apply a patch,

    $patch original-file < patch-file 
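
    A complete round trip can be exercised on throwaway files (the contents under /tmp/patch_demo are hypothetical):

    ```shell
    mkdir -p /tmp/patch_demo && cd /tmp/patch_demo
    printf 'hello\nworld\n' > original-file
    printf 'hello\nbrave world\n' > modified-file
    diff -Nau original-file modified-file > patch-file || true  # diff exits 1 when files differ
    patch original-file < patch-file
    cmp original-file modified-file && echo "patch applied cleanly"
    ```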
  • Some text processing.

    To extract text between double quotes, we can use cut, where -d indicates the delimiter while -f2 indicates the second field.

    $cut -d '"' -f2 < /path/to/original > /path/to/processed

    To delete a line containing specific pattern, we can use sed

    $sed '/PATTERN/d' originalfile

    This will print processed file to standard out. With -i option, it will modify the file (in-place edits)

    $sed -i '/PATTERN/d' originalfile

    With -i.bak, it will keep a backup file.
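
    Both commands can be tried on a throwaway sample (the file under /tmp/sed_demo is hypothetical):

    ```shell
    mkdir -p /tmp/sed_demo && cd /tmp/sed_demo
    printf 'say "alpha" now\nsay "beta" now\nDEBUG drop me\n' > original
    cut -d '"' -f2 < original > processed   # quoted field; lines without quotes pass through whole
    sed -i.bak '/DEBUG/d' processed         # in-place delete, keeping processed.bak
    cat processed
    ```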

  • Set Gnome workspace names.

    $gsettings set org.gnome.desktop.wm.preferences workspace-names "['0', '1']"
    $gsettings get org.gnome.desktop.wm.preferences workspace-names
    ['0', '1']
  • Install VMware tools without X.
    • After clicking “Install tools”, the *.iso has been loaded to device /dev/cdrom. Mount it

      $sudo mount /dev/cdrom /mnt

      then unpack the tarball and execute the installer.

    • The recommended way, though, is to install open-vm-tools from the repository.
  • Use ionice for maintenance of btrfs.

    For scrub,

    $sudo ionice -c idle btrfs scrub start /

    For defragment,

    $sudo ionice -c idle btrfs filesystem defragment -f -t 32M -r SOMEPATH
    • -f: Flush data before moving on to the next file.
    • -r: Recursively defragment the given path.
    • -t 32M: Chunk size; 32M is the recommended size, see this thread for more information.
  • In Emacs Dired mode, batch rename either
    • by edit mode, toggle C-x C-q then rectangular edit; or
    • by % R after marking files, regular expressions are used. See manual for more.
  • lxterminal color fix. The solarized theme renders an unusable coloring of emacs -nw. For a better color rendering, add to .bashrc (not a good idea though)

    export TERM=xterm-256color
  • Wipe SSD/HDD.

    First round, use the generic shred with 3 loops of random fill, turn on verbose message,

    $sudo shred -v /dev/sdX

    Second round, use ATA Secure Erase (resets SSD cells). Note the trick here—on most desktop PCs, once the operating system is fully up, the hard drive will be “frozen”. To unfreeze, suspend to RAM (BIOS S3 settings) then power on again. For example, this can be done on a ThinkPad with a Debian live CD.

    Last round, use shred again, this time a single pass followed by a zero fill.

    $sudo shred -v -n 1 -z /dev/sdX
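
    shred also works on regular files, which allows a harmless dry run before pointing it at a device (the scratch file name is hypothetical):

    ```shell
    mkdir -p /tmp/shred_demo && cd /tmp/shred_demo
    printf 'secret data\n' > sample.bin
    shred -v -n 1 -z sample.bin          # one random pass, then a final zero pass
    od -An -tx1 sample.bin | head -n 1   # leading bytes are now zeroed
    ```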
  • ath9k 5GHz. Set REGDOMAIN=US in /etc/default/crda. Verify

    $sudo iw reg get

    To check if 5GHz bands are available,

    $sudo iw list | grep -i mhz
  • Working with git from emacs

  • Compile emacs from GNU source. The reason for building from source is that the Debian team is still working on porting the latest emacs one month after the stable emacs was released.

    • Only checkout the latest stable branch emacs-26.

      $git clone --depth 1 --branch emacs-26 git://
    • Enter emacs folder and execute

      $./  # generating compiling files
      $./configure --prefix=/home/USER/engrsoft/emacs/ --with-mailutils 
    • Install whatever ./configure complains is lacking. But do install

      $sudo apt-get install libgnutls28-dev libm17n-dev # for encryption and non-latin chars

      Regarding libgtk2.0-dev: without it, emacs seems to still build, but ends up with a Lucid GUI rather than GTK+.

      $MAKE='make -j4' make bootstrap # build with multi job workers 
      $make install                   # non root install, check out --prefix= folder
    • To keep up with the latest branch source,

      $git clean -dxf  # cleaning up old files
      $git pull 
  • Pulse Audio over ssh.

    There are generally two methods. The first one is to install (server side) paprefs and enable Network Server –> Enable network access to local sound devices.

    The second method is again on server side

    export PULSE_SERVER="tcp:localhost:14713"

    where 14713 is the port from client side used to access pulse audio server. For example, from client side, connect with X forwarding

    ssh -C -R 14713:localhost:4713 -X USER@SERVER 

    where -R gives TCP port binding and -C offers data compression. 4713 is the default pulse audio server port. Port 14713 in this example needs to be added to port forward rule in firewall (OPNsense).

  • top / htop like traffic monitoring tools: installed iptraf. Needs sudo to invoke.
  • In case of VLC not being able to play files on a server, use sshfs to mount the remote directory locally.

    $sshfs USER@host:/path/to/mount/ /local/mount/point/

    To unmount mounted directory, use fusermount -u /local/mount/point/.

  • “Refresh” file equivalent of emacs: Ctrl-x Ctrl-v RET. It is really “find alternate file” (letter v vs. letter f).
  • Check patch status against Intel Meltdown and Spectre.

    $tail /sys/devices/system/cpu/vulnerabilities/*
    ==> /sys/devices/system/cpu/vulnerabilities/meltdown <==
    ==> /sys/devices/system/cpu/vulnerabilities/spectre_v1 <==
    Mitigation: __user pointer sanitization
    ==> /sys/devices/system/cpu/vulnerabilities/spectre_v2 <==
    Mitigation: Full generic retpoline - vulnerable module loaded
  • vi quick reference card. *.pdf version. Cheat Sheet. Another Cheat Sheet. Solaris advanced user guide on vi and search and replace in vi.

    Why learn vi when I am an Emacs user? vi is best suited to system administration tasks; Emacs is for more general writing. The main incentive is that I am not happy with nano when managing Linux boxes.

    • Type v to turn on visual mode; then everything that is highlighted can be yanked.
    • gg sends the cursor to the beginning of the file and G sends the cursor to the end. However if jumping multiple lines is desired, use NUM M combination, where NUM is a number and M is a movement.
    • :set number turns on line numbers.
    • To enter the same prefix to multiple lines.

      • Move the cursor to the beginning of the first of those lines;
      • Ctrl-v enters the visual mode;
      • j multiple times depending on the number of lines;
      • I enters insert mode (note this is upper I) and enter prefix;
      • Esc to exit.

      An alternative is to enter prefix for one line and use ., i.e., repeating the last edit.

      The Emacs equivalent is rectangle selection, which is more convenient:

      • Activate mark, Ctrl-n \(m\) rows and Ctrl-f \(n\) columns to create an \(m \times n\) rectangle.
      • It is followed by Ctrl-x r t for changing text contents.
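For batch edits, the same prefix insertion can be done non-interactively from the shell with a sed address range (scratch file shown):

```shell
# Prefix lines 1-3 of a scratch file with "# " using sed's address range,
# the non-interactive equivalent of the Ctrl-v ... I vi dance above.
printf 'one\ntwo\nthree\nfour\n' > /tmp/prefix-demo.txt
sed -i '1,3s/^/# /' /tmp/prefix-demo.txt
cat /tmp/prefix-demo.txt
# # one
# # two
# # three
# four
```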
  • Convert .pdf graphics to *.jpg or *.png with Linux without using Adobe Acrobat Professional.

    $convert -density 600 -trim PICS.pdf -quality 100 PICS.jpg

    Conversion flags:

    • -density controls dpi
    • -trim removes unwanted edge pixels
    • -quality specifies the compression level
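To convert a whole directory, the command can be looped; jpg_name is a hypothetical helper that derives the output name, and the convert loop (which requires ImageMagick) is left commented out.

```shell
# jpg_name FILE.pdf -> FILE.jpg ; derives the output name for the loop below.
jpg_name() { printf '%s\n' "${1%.pdf}.jpg"; }

jpg_name figures/PICS.pdf    # figures/PICS.jpg

# Batch conversion sketch (requires ImageMagick's convert):
# for f in *.pdf; do
#     convert -density 600 -trim "$f" -quality 100 "$(jpg_name "$f")"
# done
```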
  • Change the default booting kernel.

    $sudo vi /etc/default/grub # change GRUB_DEFAULT=0 to "1>2" e.g. 
    $sudo update-grub

    "1>2" selects grub main menu item 1 (0-based, 0 is the first item) and submenu item 2. Usually, the latest kernel is submenu item 0 and submenu item 1 is its recovery mode. Main menu item 0 is the default booting kernel and main menu item 1 is the advanced menu entry.
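The edit can also be scripted instead of done in vi; set_grub_default is a hypothetical helper, demonstrated on a throwaway copy rather than the real /etc/default/grub (follow up with sudo update-grub on the real file).

```shell
# set_grub_default FILE ENTRY rewrites the GRUB_DEFAULT line in FILE.
# The value is quoted in the output because entries like "1>2" contain a
# shell-special character.
set_grub_default() {
    file=$1; entry=$2
    sed -i "s/^GRUB_DEFAULT=.*/GRUB_DEFAULT=\"$entry\"/" "$file"
}

# demo against a throwaway copy, not the real config
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=5\n' > /tmp/grub.demo
set_grub_default /tmp/grub.demo '1>2'
grep GRUB_DEFAULT /tmp/grub.demo   # GRUB_DEFAULT="1>2"
```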

  • Reformat OEM SAS drives to a 512-byte block size. IBM-rebranded Hitachi SAS drives use custom firmware and a 528-byte block size. To revert them to the retail version with 512-byte blocks, use sg3-utils.

    $sudo apt-get install sg3-utils                     # install SCSI utilities
    $sudo sg_scan -i                                    # scan devices
    $sudo sg_format --format --size=512 --six /dev/sgx  # --six for mode select
  • Disable Gnome 3 auto start at boot. On Debian 9, this can be achieved by disabling gdm.

    $sudo systemctl set-default # login via console, gdm not started
    $sudo systemctl get-default                   # default was
  • Cross flash IBM ServeRAID M5015. Search for old firmware and MegaRAID Storage Manager. Use legacy product group and product name 9260-8i.

    FreeDOS>megarec.exe -cleanflash 0          # erase, 0 is the card index
    FreeDOS>megarec.exe -m0flash 0 v167.rom    # flash version 167 ROM
    FreeDOS>megarec.exe -writesbr 0 sbrlsi.bin # switch banner to LSI from IBM
  • Update firmware of IBM ServeRAID H1110.

    FreeDOS>sas2flsh.exe -o -e 6            # erase, -o advanced mode
    FreeDOS>sas2flsh.exe -o -f 2114it.bin   # switch to IT mode/IR mode, use 2114ir.bin
    FreeDOS>sas2flsh.exe -o -b mptsas2.rom  # flash RAID bios
    FreeDOS>sas2flsh.exe -list              # check info
  • One problem with btrfs RAID array is, if there are devid holes and the physical disks are of the same size, it is hard to tell which devid is associated with which disk in the physical disk bay. Here comes the utility lsscsi. Install it from package repo with the same name and use

    $lsscsi -v

    to read detailed information about each disk such as brand name, model, mount point, and firmware etc.

    On ASRock J3455-ITX, the lsscsi -v entries [0-3:x:x:x] correspond to ports SATA 1-2 and SATA A1-A2.

    lsscsi output physical port
    [0:x:x:x] SATA 1
    [1:x:x:x] SATA 2
    [2:x:x:x] SATA A1
    [3:x:x:x] SATA A2
  • Resilver btrfs RAID array. (No more dd.)

    $sudo btrfs replace start /src/dev/ /target/dev/ /btrfs/array/mnt/

    If the replace start subcommand complains about an existing file system on the new device, add the -f force flag after start. The progress of resilvering can be checked with

    $sudo btrfs replace status /btrfs/array/mnt/ # use Ctrl-C to exit viewing progress

    To make use of the full capacity of the new device, use resize. See the SUSE doc for more examples.

    $sudo parted                # enter GNU parted prompt
    (parted)print               # list partitions on current device 
    (parted)resizepart NUMBER   # enter the partition number 
    (parted)END                 # enter end sector then exit 
    $sudo btrfs filesystem resize <dev_id>:max /btrfs/array/mnt/

    For root-on-btrfs, i.e., replacing the disk that hosts the /boot/ partition, consider Clonezilla’s advanced mode -k1. (For example, shut down the server and clone the first disk using a Clonezilla Live CD, with the new disk attached to the server over USB.)

    Finally, scrub the new array with balanced I/O load,

    $sudo ionice -c idle btrfs scrub start /btrfs-mount-path/
  • Find corrupt files associated with csum errors. The normal procedure to recover and rescue a broken btrfs system is outlined on the btrfs wiki page for btrfsck.
  • Remove old core packages from snap.

    $sudo snap remove core --revision 1234
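All disabled revisions can be removed in one pass. The awk column positions below are an assumption about the snap list --all layout (verify against your snap version), so the filter is demonstrated on canned output and the actual remove loop is left commented out.

```shell
# list_disabled: parse "snap list --all"-style output, printing "name rev"
# for every disabled revision.
list_disabled() { awk '/disabled/ {print $1, $3}'; }

sample='Name  Version  Rev   Tracking  Publisher  Notes
core  16-2.4   1234  stable    canonical  disabled
core  16-2.5   1333  stable    canonical  -'
printf '%s\n' "$sample" | list_disabled    # core 1234

# Real run (column layout is an assumption):
# snap list --all | list_disabled | while read -r name rev; do
#     sudo snap remove "$name" --revision="$rev"
# done
```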
  • Check Physical Drive information about SAS disks behind hardware LSI RAID controller.

    $sudo megacli -AdpGetPciInfo -aAll # get adapter info
    $sudo megacli -LdPdInfo -a0        # get drive info
  • Add/delete new user.

    $sudo adduser USER               # add a new user; follow questionnaire
    $sudo usermod -aG sudo USER      # add USER to 'sudo' group with privileges
    $sudo deluser --remove-home USER # delete a new user and assoc. home dir
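For scripts, the add can be made idempotent; user_exists is a hypothetical wrapper around id.

```shell
# user_exists NAME: succeed if the account already exists.
user_exists() { id "$1" >/dev/null 2>&1; }

user_exists root && echo "root already exists"
# user_exists alice || sudo adduser alice   # hypothetical user name
```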
  • Nextcloud command line mode.

    $nextcloudcmd -u USER -s LOCAL_FOLDER

    Note that there is no need to specify the /USER folder after webdav. The -s flag stands for silent, i.e., no verbose diagnostics.

    See more details here.

  • Install suckless surf and tabbed. They are available from Debian official repository. Do install the version from testing repo though.

    $sudo apt-get install suckless-tools
    $sudo apt-get install surf

    Add alias websurf='tabbed -c surf -pe' to .bashrc and start web surfing with websurf. The -c option makes tabbed quit when its last tab is closed, matching the behavior of modern browsers; the -p option of surf disables plugins. To read more, consult the man pages

    $man surf
    $man tabbed
    • Ctrl-Shft-RET: Create new tab.
    • Ctrl-Shft-h or l: Switch between tabs.
    • Ctrl-j or k: Scroll down and up.
    • Ctrl-q: Quit a tab.
    • Ctrl-g: Enter URL.
    • Ctrl-h: Go back in history.
    • Ctrl-y: Yank the link of current page or the page over whose link the cursor hovers.
    • Ctrl-p: Load the link yanked with Ctrl-y.

    Easter egg: Try Alt-x on Wikipedia; it will load a random page.

    To use surf as the default browser to preview local *.html’s, delete mimeapps.list and mime folder,

    $cd ~/.config
    $rm -iv mimeapps.list # or use 'mv -v' instead
    $cd ~/.local/share/applications
    $rm -iv mimeapps.list
    $cd ../ # now in ~/.local/share again
    $rm -rf mime
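A less destructive variant renames the files instead of deleting them; backup is a hypothetical helper, demonstrated on a scratch directory before the real paths (commented out).

```shell
# backup FILE: move FILE to FILE.bak if it exists, instead of deleting it.
backup() { [ -e "$1" ] && mv -v "$1" "$1.bak"; }

t=$(mktemp -d)
touch "$t/mimeapps.list"
backup "$t/mimeapps.list"
ls "$t"    # mimeapps.list.bak
# backup ~/.config/mimeapps.list
# backup ~/.local/share/applications/mimeapps.list
```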
  • Fix partition mislabelling (TODO: this works for partition fix, but partition is not the same as device. Does it work in my case?) Say the original order of labelling of the installed btrfs system has the following structure,

    devid  1  ... /dev/sda
    devid  2  ... /dev/sdb
    devid  3  ... /dev/sdc
    devid  4  ... /dev/sdd

    We want to swap /dev/sdc for a new device, so we do the following:

    • Delete /dev/sdc from current RAID 1 array and balance among the rest three disks.
    • Install a new /dev/sdc and balance among all four disks.

    The problem is that the newly installed /dev/sdc will be assigned a new devid \(5\) rather than the original \(3\). This can be fixed in fdisk using x (extra features) followed by f (fix partition order). Confirm the operation with w (write changes).

  • git log takes the flags -p (display detailed commits) and -n (show the most recent \(n\) commits). Commits can also be viewed by specifying “by date” flags,

    git log --after='2017-11-15' --before='yesterday'

    git log can be replaced with git shortlog if a commit summary is desired.

    To discard uncommitted changes, use git checkout . (the trailing . is part of the command, not the period of this sentence).

  • Firefox font rendering patch. Create a file .fonts.conf under ~/ with the following contents. (The assigned values shown here are common choices — rgb subpixel order, slight hinting, LCD filtering — adjust to taste.)

    <?xml version='1.0'?>
    <!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
    <fontconfig>
     <match target="font">
      <edit mode="assign" name="rgba"><const>rgb</const></edit>
     </match>
     <match target="font">
      <edit mode="assign" name="hinting"><bool>true</bool></edit>
     </match>
     <match target="font">
      <edit mode="assign" name="hintstyle"><const>hintslight</const></edit>
     </match>
     <match target="font">
      <edit mode="assign" name="antialias"><bool>true</bool></edit>
     </match>
     <match target="font">
      <edit mode="assign" name="lcdfilter"><const>lcddefault</const></edit>
     </match>
    </fontconfig>
  • Change default web browser to firefox. To check current priorities of installed web browsers,

    $sudo update-alternatives --config x-www-browser
    There are 2 choices for the alternative x-www-browser (providing /usr/bin/x-www-browser).
      Selection    Path                                    Priority   Status
    * 0            /usr/local/bin/firefox                   220       auto mode
      1            /usr/local/bin/firefox                   220       manual mode
      2            /usr/bin/surf                            30        manual mode

    To add more browsers to the list,

    $sudo update-alternatives --install /usr/bin/x-www-browser x-www-browser /path/to/new/browser 250

    The last argument 250 is the priority. /usr/bin/sensible-browser will check $BROWSER, gnome-www-browser (if Gnome in use), x-www-browser, and www-browser in order. See discussion for more.

  • Change firefox update channel from Beta to Release. Go to /FIREFOX_ROOT/defaults/prefs/, edit channel in channel-prefs.js file.

    On Mac, it is in /Applications/

  • Specify port number in remote connection
    • For ssh, use ssh -p 1234 user@site. (Note: type ~. to disconnect a session.)
    • For scp, use scp -P 1234 /files/or/folders user@site:/path/to/folder/. Note the upper case -P.
    • For sftp, use sftp -p -P 1234 user@site. (Note: -p is preserving time stamp.)
    • For an sftp connection in vlc network streaming, use a URL of the form
      sftp://USER@SITE:1234/path/to/file

  • To make GDM fall back from the default Wayland session to Xorg, uncomment WaylandEnable=false in /etc/gdm3/daemon.conf.
  • Enter UTF-8 characters using US International Keyboard. For example, enter ğ by the following key sequence: Alt Gr-Shft u-g. Alt Gr-Shft u-Shft g will insert the upper case Ğ.

    Also check out Unibyte-Mode of emacs. Use C-x 8 C-h to list all defined built-in characters in emacs.

    For even more general Unicode characters, copy the character of interest into scratch buffer and use C-u C-x = to check out its encoding.

  • A ThinkPad X41 with an X41T motherboard installed boots, then gets stuck right before the graphical display shows up. It seems to be a kernel bug in initializing the Wacom touch screen. Remove xserver-xorg-input-wacom and the X41 boots into LXDE without problems.
  • ThinkPad fan control thinkfan setting,

    $sudo thinkfan -q -b-3 -s 3 # quiet mode
                                # bias -3 (even out temp spike)
                                # update interval 3 seconds
  • Check battery information from command line

    $cat /sys/class/power_supply/BAT0/status

    All available options are stored in /sys/class/power_supply/BAT0, e.g., serial_number, charge_full, charge_full_design etc.
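All of the attributes can be dumped in one go; battery_info is a hypothetical helper, guarded so it degrades gracefully on machines without BAT0.

```shell
# battery_info: print "attribute: value" for every readable file under the
# BAT0 sysfs directory, or a notice when no battery is present.
battery_info() {
    bat=/sys/class/power_supply/BAT0
    if [ -d "$bat" ]; then
        for f in "$bat"/*; do
            [ -f "$f" ] && printf '%s: %s\n' "${f##*/}" "$(cat "$f" 2>/dev/null)"
        done
    else
        echo "no battery at $bat"
    fi
}

battery_info
```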

  • Laptop mode and tp_smapi for advanced power management. Check ArchWiki here and here. ThinkPad X41 specific page.

    $sudo vi /etc/modules
    # add thinkpad_ec tp_smapi hdaps
  • Use nvlc on headerless server together with tmux.

    $tmux new -s vlc # new named session 'vlc'
    $nvlc         # start vlc player 

    Once in VLC player command line interface, type h for help. Common operations include

    • B: Open file browser.
    • Space: Play/Add files.
    • Shft-P: Show play list.

    Press Ctrl-b then d to detach a tmux session and return to the terminal. To reattach to the existing session later, use tmux a -t 0, where a means attach, -t specifies the target, and 0 is the default unnamed tmux session. Use tmux ls to see all active sessions.

  • To enable minor modes in Emacs when opening a file, use -hook’s rather than specifying local variables at the end of the file. See details in the manual.
  • Recently, a change in coreutils-8.2x made ls wrap in quotes any file name that contains spaces.

     'file name with spaces.txt'
  • Use tar to create *.xz archives (the reverse of extracting with tar xavf file.tar.xz).

    $tar -cvJf /path/to/target.tar.xz /path/to/source/files
    $man tar
     -c, --create
           create a new archive
     -v, --verbose
           verbosely list files processed
     -J, --xz
           filter the archive through xz
     -f, --file ARCHIVE
           use archive file or device ARCHIVE

    Create an *.xz tarball in pipe mode; this mode allows xz to multithread.

    $tar c /some/dir | xz -4T0 > name.tar.xz     # Linux
    $tar -cf - /some/dir | xz -4T0 > name.tar.xz # FreeBSD

    On FreeBSD, the syntax is stricter; we have to use - explicitly as standard input/output. The option -T0 enables multithreading, using as many workers as there are cores. -4 is the compression level; the default level is 6, which most likely takes significantly longer.
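The pipe-mode recipe can be sanity-checked end to end on a scratch directory (assumes GNU tar and xz are installed; -T0 is omitted here but can be added for threads):

```shell
# Create a tiny source tree, archive it through a pipe into xz, then verify
# both the xz container and the archive listing.
set -e
src=$(mktemp -d)
echo hello > "$src/a.txt"
tar -C "$src" -cf - . | xz -4 > /tmp/demo.tar.xz   # add -T0 to xz for threads
xz -t /tmp/demo.tar.xz                             # integrity check
tar -tf /tmp/demo.tar.xz | grep a.txt              # ./a.txt
```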

  • Place the cursor in front of a character and use M-x describe-char to check out the information about this character. For example ğ can be inserted into Emacs buffer by C-x 8 RET 11f.
  • Enabling flyspell-mode in org-mode slows down org-ac completion so much that the latter is almost useless. Use on-demand spell checking instead: M-$ checks a word, M-x ispell-region a block of text.
  • Switch to xetex from latex. Add the following to the *.tex (on a per file basis).

    %%% Local Variables: 
    %%% coding: utf-8
    %%% mode: latex
    %%% TeX-engine: xetex
    %%% TeX-master: t
    %%% End:

    The reason for this switch was that I chanced upon the font used in A.S. Byatt’s novel Possession, which I bought a long time ago. To use the font with ease, we need to load the fontspec package and add the following to the preamble of a *.tex file.

    % a very beautiful font as I saw in A.S. Byatt's Possession
    \setmainfont{IMFePIrm29C.otf}[   % see details in fontspec doc 
    Path          = /path/to/fonts/, % the ending forward slash is required
    % Extension     = ttf,
    ItalicFont    = IMFePIit29C.otf,
    SmallCapsFont = IMFePIsc29P.ttf,
    StylisticSet  = 2,               % Alternate below defaults to 0
    WordSpace     = 1.2,
    Ligatures     = Rare,
    Style         = Alternate,       % access alternate glyph
    ]

    The font’s full name is IM Fell Type after John Fell. It is hosted at Google Fonts and the original author’s website.

  • Add fonts to Emacs. See here in the official documentation. For installing fonts without Admin Access on Windows, see here for a solution using Portable Apps.
  • Common html characters for typography
  • Overwrite the default postamble format when exporting *.html’s.

    • First access the default format: C-h v org-html-postamble-format.
    • Then hardcode the customized format with #+BIND:; we need to change variable org-html-postamble from auto to t.
    #+BIND: org-html-postamble t
    #+BIND: org-html-postamble-format (("en" "<p class=\"author\">Author: %a</p>\n<p class=\"date\">Last updated: %d</p>"))
    #+DATE: <2017-10-12 Thu 12:31>
    • Finally, in order to allow #+BIND: to work, set org-export-allow-bind-keywords as non-nil.
    # Local Variables:
    # org-export-allow-bind-keywords: t
    # End:
  • Install FastX on server. The latest emacs 25.x is nowhere to be found on any campus Linux box (on some workstations, I saw even emacs 21.x), forcing me to make this move :(. See here for server side installation without root privilege.

    As an alternative, use X11 forwarding instead. (Testing)

    • On the server side, add X11Forwarding yes to /etc/ssh/sshd_config.
    • On client side, add ForwardX11 yes to /etc/ssh/ssh_config.

    A major problem here was that I needed to work on the server, but the Nextcloud client wouldn’t resolve the local IP address. An https address is a required field the first time the Nextcloud client is started. The fix is quite an easy one, but it took me a while to figure out: go to the nginx configuration file for nextcloud in /etc/nginx/sites-available and add localhost after the resolvable domain name in the server_name directive. Then use https://localhost as the server address when the Nextcloud client prompts for an input. The server has to use localhost to see itself.

    Update (Nov 30, 2017): Switched to the NoMachine NX free version after struggling with a licensing issue with FastX 2. The U of I Webstore license seems to require all servers to be behind the university firewall. Installation of NX can be found here. Also, it is not true that the NX client has to be installed with sudo; simply unpack nxplayer somewhere you have write privileges and you are good to go.

  • Change SageMath terminal colors. Add

    %colors Linux

    to $HOME/.sage/init.sage or run it from sage prompt.

  • Precompiled SageMath binaries won’t start on an Apollo Lake CPU powered machine; plot() won’t work on a ThinkPad X41/T running 32-bit Linux either.

    Building Sage from scratch. Follow the developer’s guide to get a copy of the latest code. The stable release didn’t build through in either case. (At some point, it complained that openblas couldn’t detect the CPU architecture.) Set the architecture manually for the Apollo Lake Atom CPU,

  • Dell UZ2315H RGB Linux setting RGB (97, 99, 94), Win setting RGB (100, 94, 85).

Mac OS X

10.11 El Capitan

  • Update on compiling Nextcloud client.

    It is still largely based on previously written step by step guide. However the Nextcloud client has had a major upgrade and Sparkle is no longer used, hence this revision.

    • Install Xcode 8 but add El Capitan SDK which can be extracted from Xcode 7 by xcodelegacy
    • Install brew
    • Install openssl 1.1.x and cmake with brew
    • Compile Qt 5.10.1 against OpenSSL 1.1.x
    • Compile qtkeychain and the client. May need to update /admin/osx/ It throws a lot of warnings and complaints about nonexistent Qt helper files.

      #!/usr/bin/env bash
      set -e
      set -o pipefail
      cd ~/Downloads
      rm -rf qtkeychain
      mkdir -pv qtkeychain && cd "$_"
      unzip ../
      mkdir -pv build && cd "$_"
      cmake \
      -DCMAKE_OSX_SYSROOT="/Applications/" \ 
      -DCMAKE_INSTALL_PREFIX=~/Downloads/qtkeychain/install \
      -DCMAKE_PREFIX_PATH=/usr/local/Qt-5.10.1/lib/cmake/ ../qtkeychain-master
      sudo make install
      cd .. && rm -rf build
      cd ~/Downloads
      #git clone git://
      cd desktop
      #git submodule init
      #git submodule update
      git checkout stable-2.5.2
      mkdir -pv build && cd "$_"
      cmake \
      -DCMAKE_INSTALL_PREFIX=~/Downloads/nextcloud-desktop-client \
      -DQTKEYCHAIN_LIBRARY=~/Downloads/qtkeychain/install/lib/libqt5keychain.dylib \
      -DQTKEYCHAIN_INCLUDE_DIR=~/Downloads/qtkeychain/install/include/qt5keychain/ \
      -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl\@1.1/ \ 
      -DOPENSSL_INCLUDE_DIR=/usr/local/opt/openssl\@1.1/include/ \ 
      -DOPENSSL_CRYPTO_LIBRARY=/usr/local/opt/openssl\@1.1/lib/libcrypto.dylib \ 
      -DOPENSSL_SSL_LIBRARY=/usr/local/opt/openssl\@1.1/lib/libssl.dylib \
      -DCMAKE_PREFIX_PATH=/usr/local/Qt-5.10.1/lib/cmake/ ..
      make -j17 install
      cd .. && rm -rf build
  • Clear/Disable Quick Look cache.

    $qlmanage -r cache        # clear
    $qlmanage -r disablecache # disable cache 

    More details are available here.

  • Use pinentry within emacs minibuffer.
    • Install ELPA pinentry.
    • Add allow-emacs-pinentry to ~/.gnupg/gpg-agent.conf. Reload the configuration with gpgconf --reload gpg-agent.
    • (Optional) Add to init.el the following lines

      ;; pinentry 
      ;(setenv "INSIDE_EMACS" (format "%s,comint" emacs-version))
      ;(pinentry-start) ;; or M-x pinentry-start
  • USB Installer

    Unpack InstallOSXELCapitan.tar.bz2 to /Applications/. Format a USB stick with Mac OS Extended, then write the installation image

    $sudo /Applications/Install\ OS\ X\ El\ --volume /Volumes/Untitled --applicationpath /Applications/Install\ OS\ X\ El\
  • VMware installation

    Add smc.version = "0" to *.vmx file before first boot.

  • Unblock downloaded app.

    $sudo xattr -rd /Applications/
  • Delete Google Updater. Found this problem through Little Snitch traffic monitor. Use the --nuke option for ksinstall.

    $sudo /Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/Resources/ksinstall --nuke

10.14 Mojave

  • Start up error and NS-drawing patch. See GNU mailing list

  • Use brew install emacs --HEAD to compile from the latest source. If further patching is needed, use brew edit emacs to edit the formula. See more here.

    $brew edit emacs
    # in emacs ruby formula file emacs.rb
    class Emacs < Formula ...
    head do 
      url "", :branch => "emacs-26"
    # :branch will instruct git to check out specified branch

    The emacs installed using brew cask install emacs does not read environment variables properly, so for \(\LaTeX\) we need to append /Library/TeX/texbin/ to PATH in the emacs initialization file.

  • Resize APFS.

    $diskutil apfs list # find volume container 
    $diskutil apfs resizeContainer disk1 0 # container new size 0 means grow to fit
  • Xcode.

    $sudo xcodebuild -license accept # first run for license acceptance 

    If Xcode is not installed before installing homebrew, homebrew will try to download and install command line tools.

Nextcloud Server Update

This is a how-to for upgrading nextcloud server. See the manual upgrade for more.

  • From within the server, first stop the web server

    $sudo service nginx stop
  • Then make a copy of current working /var/www/nextcloud

    $cd /var/www
    $sudo cp -rv nextcloud nextcloud.bak
  • Make a backup of working nextcloud directory

    $sudo mv -v nextcloud nextcloud.old
  • Download the latest nextcloud tarball, unpack it, then move it to /var/www. Verify the download with diff. (If the md5sum agrees, there is no standard output.)

    $cd ~/Downloads/
    $md5sum nextcloud-13.0.4.tar.bz2 | diff -u nextcloud-13.0.4.tar.bz2.md5 -
    $mkdir -pv nextcloud-latest && cd nextcloud-latest/
    $tar xavf ../nextcloud-13.0.4.tar.bz2
    $cd /var/www/
    $sudo mv -v ~/Downloads/nextcloud-latest/nextcloud/ ./
  • Change directory permissions using ~/nextcloud_dir_config_script to prepare the nextcloud folder. The script first creates any missing folders, then sets the correct ownerships and permissions.

    #!/bin/bash
    # adjust these to match the local installation (values below follow the
    # standard Nextcloud strong-permissions script)
    ocpath='/var/www/nextcloud'
    htuser='www-data'
    htgroup='www-data'
    rootuser='root'

    printf "Creating possible missing Directories\n"
    mkdir -p $ocpath/data
    mkdir -p $ocpath/updater
    printf "chmod Files and Directories\n"
    find ${ocpath}/ -type f -print0 | xargs -0 chmod 0640
    find ${ocpath}/ -type d -print0 | xargs -0 chmod 0750
    printf "chown Directories\n"
    chown -R ${rootuser}:${htgroup} ${ocpath}/
    chown -R ${htuser}:${htgroup} ${ocpath}/apps/
    chown -R ${htuser}:${htgroup} ${ocpath}/config/
    chown -R ${htuser}:${htgroup} ${ocpath}/data/
    chown -R ${htuser}:${htgroup} ${ocpath}/themes/
    chown -R ${htuser}:${htgroup} ${ocpath}/updater/
    chmod +x ${ocpath}/occ
    printf "chmod/chown .htaccess\n"
    if [ -f ${ocpath}/.htaccess ]; then
      chmod 0644 ${ocpath}/.htaccess
      chown ${rootuser}:${htgroup} ${ocpath}/.htaccess
    fi
    if [ -f ${ocpath}/data/.htaccess ]; then
      chmod 0644 ${ocpath}/data/.htaccess
      chown ${rootuser}:${htgroup} ${ocpath}/data/.htaccess
    fi
  • Copy /var/www/nextcloud.old/config/config.php to /var/www/nextcloud/config/ and copy /var/www/nextcloud.old/data/* to /var/www/nextcloud/data/ (both need su -).
  • Change directory permissions using ~/nextcloud_dir_config_script again.
  • Upgrade nextcloud

    $cd /var/www
    $sudo -u www-data php ./nextcloud/occ upgrade

    Wait and see if screen log shows successful upgrade.

  • Start web server

    $sudo service nginx start

Nextcloud Client for Mac OS X

Use openssl to Sign and Verify Files

Full command line operations are documented on the openssl wiki. We need only the part on signing and verifying files.

  • Using openssl to sign a file,

    $openssl dgst -sha256 -sign <private-key> -out /tmp/sign.sha256 <file>
    $openssl base64 -in /tmp/sign.sha256 -out <signature>
  • Using openssl to verify a file with signature,

    $openssl base64 -d -in <signature> -out /tmp/sign.sha256
    $openssl dgst -sha256 -verify <pub-key> -signature /tmp/sign.sha256 <file> 
  • The <pub-key> File

    Below is the public key that I use.

    -----BEGIN PUBLIC KEY-----
    -----END PUBLIC KEY-----
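The sign/verify recipe round-trips cleanly with a throwaway RSA key pair, which makes it easy to test without the real private key:

```shell
# Generate a scratch key pair, sign a file, armor/de-armor the signature,
# and verify -- the same steps as above, end to end.
set -e
work=$(mktemp -d)
cd "$work"
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem
echo payload > file.txt
openssl dgst -sha256 -sign priv.pem -out sign.sha256 file.txt
openssl base64 -in sign.sha256 -out file.sig          # ASCII-armored signature
openssl base64 -d -in file.sig -out sign.decoded
openssl dgst -sha256 -verify pub.pem -signature sign.decoded file.txt
```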

Compiling Nextcloud Client

  • Update (May 29, 2019) Nextcloud client v2.5.2 built against Qt 5.10.1 with support for OpenSSL 1.1.1c. Please see above on how to build it.
  • Update (May 18, 2019) Nextcloud client v2.5.2 built against Qt 5.10.1 with support for OpenSSL 1.1.1b.
  • Update (Oct 2, 2018) Nextcloud client v2.4.3 built against Qt 5.10.1 with support for OpenSSL 1.1.1.

    Download available here. For Mac OS X 10.10+. To verify the package, use the signature (sha256)

  • Update (Aug 18, 2018) Nextcloud client v2.4.3 built against Qt 5.9.6 (Long Term Support) with support for OpenSSL 1.0.2p.

    Download available here. For Mac OS X 10.10+. To verify the package, use the signature (sha256)

  • Update (Aug 17, 2018) Nextcloud client v2.4.3 built against Qt 5.10.1 with support for OpenSSL 1.1.0i.
  • Update (Jun 7, 2018) Nextcloud client v2.4.1 built against Qt 5.9.6 (Long Term Support) with support for OpenSSL 1.0.2o.
  • Update (Mar 27, 2018) Nextcloud client v2.4.1 built against Qt 5.11.0 beta 2 with support for OpenSSL 1.1.0h.

    Download available here. For Mac OS X 10.10+. To verify the package, use the signature (sha256)

  • Update (Mar 9, 2018) Nextcloud client v2.4.1 built against Qt 5.10.1 with support for OpenSSL 1.1.0g.
  • Update (Feb 16, 2018) Nextcloud client v2.4.0 built against Qt 5.10.1 with support for OpenSSL 1.1.0g.
  • Update (Oct 8, 2017) to original post: Nextcloud Mac OS X client based on ownCloud v2.3.3 release. Qt 5.9.2 has been compiled on HP Z600 to replace Qt 5.9.1 previously used in compiling Nextcloud Mac OS X client v2.3.2. Building against openssl 1.1.x is not supported. Don’t try!

    There were compile issues where cmake complained about QTKEYCHAIN_LIBRARY and QTKEYCHAIN_INCLUDE_DIR not being found. The fix was to go to the build script ./osx/ and comment out the following three lines (given that v2.3.2 was built here three months ago). And don’t forget to switch to the v2.3.3 branch!

    #### ~/client_theming/osx/ script ####
    # sudo rm -rf client # don't remove client folder this time round for v2.3.3
    # git clone --recursive # don't re-pull
    git checkout 2.3.3 # check out ownCloud branch v2.3.3; note 2.3.3 without v
    # git submodule update --recursive # go to ~/client folder; do this manually instead

    The following Figure 5 is the screen grab of About page of the client I built.

    NC MACOSX 2.3.3
    Figure 5: About page of Nextcloud client v2.3.3 for Mac OS X

Windows


  • How to stop Microsoft from gathering telemetry data from Windows 7, 8, and 8.1
  • In order to directly edit photos (using either Lightroom or darktable) on server machine, get sshfs-win and mount server photo directory to local Windows machine.
  • Use DisableWinTracking to block telemetry and pinging home services.
  • Windows by default uses local time, not UTC, which may cause delays in syncing files between Windows and Unix machines through a Nextcloud server. According to the ArchWiki, add a QWORD with hex value 1 to the registry

  • Install input method librime and its Windows app weasel.
    • Download binary and follow the installation instructions first.
    • Go to program folder and within it, find data folder.
    • Add extra *.schema.yaml files to the data folder.
    • Create the default.custom.yaml file in the user config folder; the default path is %APPDATA%\Rime.
    • Custom dictionaries can be added the following way, assuming the input scheme is double_pinyin_mspy.
      • Add double_pinyin_mspy.custom.yaml and set dictionary name,

          "translator/dictionary": luna_pinyin.extended 
      • But luna_pinyin.extended itself is not a real dictionary; it imports other dictionaries,

        name: luna_pinyin.extended
        version: "2017.01.02"
        sort: by_weight
        use_preset_vocabulary: true
        import_tables:
          - luna_pinyin
          - luna_pinyin.extra_hanzi
    • Hit redeploy in the setting panel. It will regenerate default.yaml and compile auxiliary files for the newly added input schemes.
  • If a spinning disk is misidentified as a solid-state device, run winsat formal -v. It will reassess the computer hardware and, based on measured hard drive performance, readjust how the OS boots as well.

    Then go to services.msc and disable Superfetch, Windows Remediation Service, and Windows Search indexing.

    Also disable fast startup from Power Options. So-called fast startup is essentially a cheat: the last shutdown is never fully completed, and the kernel and disk are both still waiting for a new wake-up call, so the machine merely appears to boot fast the next time round.

  • Disable “Microsoft Compatibility Telemetry”. Run taskschd.msc. Find the folder Task Scheduler Library -> Microsoft -> Windows -> Application Experience. Right click on any of the tasks listed as Microsoft Compatibility Appraiser and choose Disable.
  • Build the Nextcloud client for Windows with the latest OpenSSL support. Building is only supported on openSUSE; use openSUSE Tumbleweed for the latest tool chains.

    • Clone client_theming repo.
    ~>git clone
    ~>cd client_theming
    client_theming>git submodule update --init --recursive
    • If the previous step is not working, remove client folder and manually checkout ownCloud repo.
    client_theming>git clone --recursive
    • Check out the latest ownCloud client.
    client_theming>cd client/
    client_theming/client>git checkout v2.4.1
    • Start docker service and build docker image.
    client_theming>sudo service docker start
    client_theming/client>sudo docker build -t nextcloud-client-win32:2.4.1 client/admin/win/docker/
    • Build the client.
    client_theming>sudo docker run -v "$PWD:/home/user/" nextcloud-client-win32:2.4.1 /home/user/win/ $(id -u)

    The resulting binary is in client_theming/build-win32/. To use the latest tool chains, modify /client/admin/win/docker if necessary.

    To clean the docker builds, see how to remove docker images, containers, volumes.

  • Install microcode with VMware CPU Microcode Update Driver.
  • Manually set the Solarized theme for Windows PowerShell. Click the upper left corner of the PowerShell window and follow Properties -> Colors -> Screen Background. Set the RGB color to (0, 43, 54). Set the font to PragmataPro, size 18. The Cygwin terminal can also use the PragmataPro font; set font size 12.
  • The more recent versions of the PragmataPro font do not look as nice as before in Emacs, plus it is not free. Use Hack instead.
  • Remove recent archive names in WinRAR. Open regedit and find


    Recent histories are numbered on the right. Check each by hand by double clicking.

  • After cloning current SSD to an image SSD with EZ Gig IV Cloning Software, Windows won’t boot.

    Your PC needs to be repaired. A required device isn't connected or can't be accessed.

    The fix:

    • Create a recovery drive with Win 10 built-in tool with a USB drive.
    • Depending on whether the machine is a BIOS Legacy boot or a UEFI boot, change BIOS settings and boot into corresponding mode using the USB recovery drive. For example, boot the USB repair drive into BIOS mode to fix a BIOS boot machine or boot into UEFI mode to fix a UEFI boot machine.
    • Follow Repair your computerAdvanced optionsCommand prompt to enter cmd.exe Windows prompt.
    • Rebuild BCD.
    bootrec /fixmbr
    bootrec /fixboot
    bootrec /rebuildbcd
  • Install portable GnuPG. Assume we copied all necessary binaries to a folder GnuPG where gpgconf.exe resides. Create an empty file gpgconf.ctl in the same directory; GnuPG will then treat that folder as GNUPGHOME, and all other methods of setting GNUPGHOME are ignored.
  • Disable cortana and telemetry feature of Windows 10.
    • Fire up regedit.exe and find HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DataCollection. Under that folder create AllowTelemetry DWord entry and set it as 0.
    • Next, disable Connected User Experiences and Telemetry and dmwappushsvc in services.msc.
    • Go to settings, scroll to the bottom, find App Diagnostics and turn it off; one item above is Background apps, turn it off as well.
  • Color profile folder location. On Win 7, it is C:\Windows\System32\spool\drivers\color.

Author: Yün Han

Emacs 26.1 (Org mode 9.2.3)