What’s the Difference Between a Login and a Nonlogin Shell?


This is addressed nicely in the book Unix Power Tools (O’Reilly):

When you first log in to a Unix system from a terminal, the system normally starts a login shell. A login shell is typically the top-level shell in the “tree” of processes that starts with the init process. Many characteristics of processes are passed from parent to child process down this “tree” — especially environment variables, such as the search path. The changes you make in a login shell will affect all the other processes that the top-level shell starts — including any subshells.

So, a login shell is where you do general setup that’s done only the first time you log in — initialize your terminal, set environment variables, and so on. […]

So you could think of a login shell as a shell that is started at startup by the init process (or systemd nowadays), or as a shell that logs you into the system after you provide a username and a password. A nonlogin shell, by contrast, is a shell that is invoked without logging anybody in.

Is My Current Shell a Login Shell?

There are two ways to check whether your current shell is a login shell. First, you can check the output of echo $0: if it starts with a dash (like -bash), it’s a login shell. Be aware, however, that you can start a login shell with bash --login, in which case echo $0 will output just bash without the leading dash, so this is not a surefire way of finding out whether you are running a login shell.
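
For example (a sketch; the exact output depends on how your session was started):

$ echo $0
-bash
$ bash --login
$ echo $0
bash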

Second, the Unix & Linux Stack Exchange offers this way of finding out:

$ shopt -q login_shell && echo login || echo nonlogin

(-q suppresses the output of the shopt command; the exit status alone tells us whether the login_shell option is set.)

The Difference Between Login and Nonlogin That Actually Matters

Practically speaking, the difference between a login shell and a nonlogin shell is in the configuration files that Bash reads when it starts up. In particular, according to man bash:

[…] it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable.

You can observe this behavior by putting echo commands in /etc/profile, ~/.bash_profile, ~/.bash_login and ~/.profile. Upon invoking bash --login, you should see output from the first two only, because ~/.bash_login and ~/.profile are skipped once a readable ~/.bash_profile is found:

echo from /etc/profile
echo from ~/.bash_profile
$

If the shell is a nonlogin shell, Bash reads and executes commands from ~/.bashrc instead. Since we usually start a nonlogin shell from within a login shell, it will inherit the environment. Confusion arises when we inadvertently get a login shell and find that our configuration from ~/.bashrc is not loaded. This is why many people put something like the following in their .bash_profile:

[[ -r ~/.bashrc ]] && source ~/.bashrc

This tests whether .bashrc is readable and, if so, sources it.

Why You Sometimes Want a Login Shell

When you switch users using su, you take the environment of the calling user with you. To prevent this, use su -, which is short for su --login. This acts like a clean login for the new user, so the environment will not be cluttered with values from the calling user. Just as before, a login shell will read /etc/profile and the .bash_profile of the user you are switching to, but not their .bashrc. This post on StackOverflow shows why you might prefer to start with a clean environment (spoiler: your $PATH might be “poisoned”).
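
As an illustration, consider the following sketch (assuming a user alice exists; on many systems the first, nonlogin variant prints the caller’s $PATH, while the second, login variant prints a PATH rebuilt from alice’s login files):

$ su alice -c 'echo $PATH'
$ su - alice -c 'echo $PATH'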

Conclusion

In this article we saw that the main difference between a login and a nonlogin shell lies in the configuration files that are read upon startup. We then looked at the benefits of a login shell over a nonlogin shell.

Encrypt Device With Veracrypt From the Command Line


You have a drive that you want to encrypt and use in Linux and other OSes. VeraCrypt, the successor of TrueCrypt, is a good choice for this. The prerequisite for this tutorial is that you have already created a partition on a drive. See my previous blog post on how to accomplish that. Creating a volume on a partition with data on it will permanently destroy that data, so make sure you are encrypting the correct partition (fdisk -l is your friend).

Encrypt a volume interactively from the command line using VeraCrypt…

(The # sign at the beginning of the code examples indicates that the command should be executed as root. You can either use su - or sudo to accomplish this.)

# veracrypt -t --quick -c /dev/sdXX

-t is short for --text (meaning you don’t want the GUI) and should always be used first after the command name. The --quick option is explained in the docs:

If unchecked, each sector of the new volume will be formatted. This means that the new volume will be entirely filled with random data. Quick format is much faster but may be less secure because until the whole volume has been filled with files, it may be possible to tell how much data it contains (if the space was not filled with random data beforehand). If you are not sure whether to enable or disable Quick Format, we recommend that you leave this option unchecked. Note that Quick Format can only be enabled when encrypting partitions/devices.

So, using --quick is less secure, but not specifying it could take (a lot) longer, especially on traditional hard drives (we’re talking hours for 500GB).

Finally, the -c or --create command allows us to specify on which partition we want to create a VeraCrypt volume. Make sure you change /dev/sdXX from the example above to the appropriate output of fdisk -l (for example, /dev/sdc1).

This command will interactively guide us to create a new volume:

Volume type:
 1) Normal
 2) Hidden
Select [1]: 1

Encryption Algorithm:
 1) AES
 2) Serpent
 3) Twofish
 4) Camellia
 5) Kuznyechik
 6) AES(Twofish)
 7) AES(Twofish(Serpent))
 8) Camellia(Kuznyechik)
 9) Camellia(Serpent)
 10) Kuznyechik(AES)
 11) Kuznyechik(Serpent(Camellia))
 12) Kuznyechik(Twofish)
 13) Serpent(AES)
 14) Serpent(Twofish(AES))
 15) Twofish(Serpent)
Select [1]: 1

Hash algorithm:
 1) SHA-512
 2) Whirlpool
 3) SHA-256
 4) Streebog
Select [1]: 1

Filesystem:
 1) None
 2) FAT
 3) Linux Ext2
 4) Linux Ext3
 5) Linux Ext4
 6) NTFS
 7) exFAT
Select [2]: 6

Enter password:
WARNING: Short passwords are easy to crack using brute force techniques!

We recommend choosing a password consisting of 20 or more characters. Are you sure you want to use a short password? (y=Yes/n=No) [No]: y

Re-enter password:

Enter PIM:

Enter keyfile path [none]:

Please type at least 320 randomly chosen characters and then press Enter:
Characters remaining: 4

Done: 100.000%  Speed: 61.8 GB/s  Left: 0 s

The VeraCrypt volume has been successfully created.

The volume is now created in the partition and is ready to be mounted.

… Or do it all in a one-liner

# veracrypt --text --quick                      \
        --non-interactive                       \
        --create /dev/sdXX                      \
        --volume-type=normal                    \
        --encryption=AES                        \
        --hash=SHA-512                          \
        --filesystem=NTFS                       \
        --password='Un$@f3'

Use --stdin to read the password from standard input, instead of supplying it directly to the command, which is considered insecure.
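
A sketch of what that could look like, assuming a file password.txt that holds nothing but the password (check veracrypt -h for the exact behavior of --stdin in your version):

# veracrypt --text --quick                      \
        --non-interactive                       \
        --stdin                                 \
        --create /dev/sdXX                      \
        --volume-type=normal                    \
        --encryption=AES                        \
        --hash=SHA-512                          \
        --filesystem=NTFS < password.txt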

Mounting the volume

# mkdir /tmp/vera
# veracrypt -t /dev/sdXX /tmp/vera

Unmounting the volume

# veracrypt -d /tmp/vera

More info

$ veracrypt -t -h

-h is short for --help and should be self-explanatory.

Make less Options Permanent (or: the Missing .lessrc)


The missing $HOME/.lessrc

I often wondered how I could make certain options for less permanent, like -I, for example, which makes search case-insensitive. In GNU/Linux, preferences are often stored in rc files. For Vim we have .vimrc, for Bash .bashrc, etc.:

$ find "$HOME" -maxdepth 1 -name '*rc'
./.vimrc
./.idlerc
./.xinitrc
./.lynxrc
./.old_netrc
./.inputrc
./.bashrc
./.rtorrent.rc
./.sqliterc
./.xdvirc

Environment variable LESS

So, it would make sense to expect a .lessrc. But there is none. Instead, we define an environment variable LESS. From my .bashrc:

export LESS="IFRSX"

Breakdown:

  • -I: ignore case when searching
  • -F: quit immediately when the entire file fits in one screen (in effect, mimic cat’s behavior)
  • -R: enable colored output (for example, when piping to less from diff --color=always)
  • -S: truncate long lines instead of wrapping them to the next line
  • -X: don’t clear screen on exit

See man 1 less for all options.
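
If you ever want one of these options off for a single invocation, less lets you reset an option from $LESS to its default by prefixing it with -+ on the command line. For example, to get line wrapping back just this once:

$ less -+S file.txt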

Make a Backup With rsync


We want to make a backup of data, for example to an external hard drive.

The basic command

Assuming we are in someone’s home directory and we want to copy three source directories Music, Documents and Movies to a destination directory /mnt/external-hdd:

$ rsync -a Music Documents Movies /mnt/external-hdd

A word on slashes

Notice that we omit the trailing forward slash / on the source directories. This means the destination will look like:

/mnt/external-hdd
|-- Music
|   |-- a.mp3
|-- Documents
|   |-- b.txt
|-- Movies
|   |-- c.mp4

If we were to add trailing forward slashes, the upper-level source directories would not be copied and the result would look like:

/mnt/external-hdd
|-- a.mp3
|-- b.txt
|-- c.mp4

Read more about slashes in rsync.

The rsync -a command broken down

rsync -a is equal to rsync --archive and is a convenience option. According to the man page, it equals rsync -rlptgoD.

  • -r or --recursive: recursively copy data
  • -l or --links: copy symlinks as symlinks
  • -p or --perms: preserve permissions
  • -t or --times: preserve modification times
  • -g or --group: preserve the group
  • -o or --owner: preserve owner
  • -D is the same as --devices --specials:
    • --devices: preserve device files
    • --specials: preserve special files

[…] a device file or special file is an interface to a device driver that appears in a file system as if it were an ordinary file

Wikipedia

Device files in Linux are usually found under the /dev directory.

See the overall progress of rsync

By default, rsync will show the progress of the individual files that are being copied. If you want the overall progress, you have to add some flags:

$ rsync -a --info=progress2 --no-i-r src dst

--info=progress2 shows the total transfer progress. (To see all available options for --info, execute rsync --info=help.) --no-i-r is short for --no-inc-recursive and disables incremental recursion, forcing rsync to do a complete scan of all directories before starting the file transfer. This is needed to get an accurate progress report; otherwise rsync doesn’t know how much work is left.

Human-readable output can be obtained by passing the -h or --human-readable option.
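
Putting it all together, a backup with a human-readable overall progress indicator could look like this (using the same directories as in the example above):

$ rsync -ah --info=progress2 --no-i-r Music Documents Movies /mnt/external-hdd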

For a discussion of these options, see also this StackOverflow post.

Partition and Format Drive With NTFS


Say we bought an external hard drive to back up some stuff from a crashed computer. We can use a Live USB to get at the data and put the data on the external hard drive. Because the data needs to be accessible by Windows, we are going to format the drive with NTFS.

Create partition

Connect the external hard disk to your computer. Use sudo fdisk -l to find the device name. Output should look something like this:

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: CT2000MX500SSD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x117d68c1

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 3907029167 3907027120  1.8T 83 Linux

As can be seen above, the name of the device is /dev/sdb. We use this name to run fdisk:

$ sudo fdisk /dev/sdb

Notice how we use the name of the device, and not the name of the partition (so /dev/sdb without any numbers attached at the end).

After entering the command above, you will be presented with an interactive menu. Type a letter and press Enter to confirm. Changes will only be applied when you type w, so if you make a mistake, just stay calm and press q: you will exit fdisk with your pending changes discarded.

  • Delete all your existing partitions by pressing d. Depending on the number of partitions, you might have to repeat this several times. If you want to check the current partition table, press p.
  • After all old partitions are deleted, add a new partition by pressing n. If you just want to create a single partition on your drive, accept all the defaults by pressing Enter on each prompt. This will leave you with a single partition that will take up all space on the drive.
  • Back in the main menu, type t to change the partition type. Press L to see all partition types. Here we are going to choose 7 (HPFS/NTFS/exFAT). “The partition type […] is a byte value intended to specify the file system the partition contains and/or to flag special access methods used to access these partitions” (source). Linux does not care about the partition type, but Windows does, so we have to change it.
  • Press w to write your changes to the disk and exit fdisk.
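
A sketch of what such a session might look like (the exact prompts vary between fdisk versions):

Command (m for help): d
Command (m for help): n
Command (m for help): t
Hex code (type L to list all codes): 7
Command (m for help): w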

Format partition with NTFS

Now we create the actual NTFS file system on the drive:

$ sudo mkfs.ntfs -Q -L label /dev/sdX1

(If you don’t have mkfs.ntfs installed, use your distro’s package manager to install it (on Arch Linux it’s in a package called ntfs-3g)).

Breakdown:

  • -Q is the same as --quick, -f or --fast. This will perform a quick format, meaning that it skips both zeroing of the volume and bad sector checking. So obviously, leave this option out if you want the volume to be zeroed or you want error checking. Depending on the size of your partition, this might take quite a while.
  • -L is the same as --label: it’s the identifier you’ll see in Windows Explorer when your drive is connected.
  • /dev/sdX1: change the X to the actual letter of the drive we found earlier in this tutorial. You always format a partition, not a drive, so make sure that you put the correct number of the partition you want formatted at the end.
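
Afterwards, you can verify the result with lsblk; the FSTYPE column of the new partition should read ntfs:

$ lsblk -f /dev/sdX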

Create Arch Linux Live USB


Download image and PGP signature

From https://www.archlinux.org/download/. The preferred option is to download the image using BitTorrent, so as to unburden the Arch servers.

Verify downloaded image

$ gpg --keyserver pgp.mit.edu                       \
        --keyserver-options auto-key-retrieve       \
        --verify archlinux-version-x86_64.iso.sig

If the live USB is being created on an Arch Linux system, you could also invoke:

$ pacman-key -v archlinux-version-x86_64.iso.sig

The -v switch is short for --verify.

Insert USB drive and check the device name

$ sudo fdisk -l

The -l is short for --list, and will display the device names and partition tables.

Output will look like:

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: CT2000MX500SSD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x117d68c1

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 3907029167 3907027120  1.8T 83 Linux

Check the output and find the device name of the USB drive (for instance /dev/sdc). Make sure this device is not mounted, otherwise the next command will fail. Also make sure you use the device name, and not a partition (indicated by a numeral at the end: /dev/sdc1, for example).
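
If a partition of the USB drive is mounted, unmount it first (assuming here that the stick is /dev/sdc with one mounted partition):

$ sudo umount /dev/sdc1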

Copy Arch Linux image to USB drive

$ sudo dd if=archlinux-2019-01-01-x86_64.iso    \
        of=/dev/sdX                             \
        bs=64K                                  \
        oflag=sync                              \
        status=progress

Breakdown:

  • if indicates the input file (the .iso of the live Linux distro).
  • of, likewise, points to the output file, which is a device in this case. Note that /dev/sdX needs to be replaced with the device name we found in the previous step.
  • bs=64K indicates the block size, which means that dd will read and write up to 64K bytes at a time. The default is 512 bytes. The optimal block size depends on your hardware, but several sources indicate that 64K is a good bet on somewhat modern to modern hardware.
  • oflag stands for “output flag”. The sync flag will make sure that all data is written to the USB stick when the dd command exits, so it will be safe to remove the USB stick.
  • status=progress indicates the level of information that is printed during file transfer. progress shows periodic transfer statistics.

Notice that the device does not need to be partitioned or empty before this operation. When dd writes to a device rather than a partition, all data on the drive – including partitions – will be erased anyway.
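
If you want to verify the write, one way (a sketch using GNU coreutils) is to compare the device against the image, limited to the size of the ISO:

$ sudo cmp -n "$(stat -c %s archlinux-2019-01-01-x86_64.iso)" \
        archlinux-2019-01-01-x86_64.iso /dev/sdX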

Use pandoc with Pygments to highlight source code

I am someone who has JavaScript disabled by default in his browser (I use uMatrix in Firefox for that). Only when I trust a site and I need to use functionality that truly depends on JavaScript will I turn it on. This hopefully protects me from most of the known and unknown bad stuff out there on the internet. It also makes me appreciate people who go through the trouble of making their webpages work without JavaScript.

Until recently, I used a JavaScript plugin on this blog to format source code. This bothered me, since using JavaScript just to display some source code seems like overkill and makes people have to turn on JavaScript in their browsers just to see the source code formatted nicely. I wanted to do better than that.

The way I normally write my blog posts is: I start with a Markdown article and then use pandoc to convert it to HTML, which I then copy and paste into WordPress (if there is a better way to do this, please contact me). I noticed pandoc provides a switch --filter where you can specify an executable that transforms the pandoc output. The only problem is, you have to write such a filter. Luckily, I found a GitHub gist that has already figured out how to write one. Here is some Haskell for you:

import Text.Pandoc.Definition
import Text.Pandoc.JSON (toJSONFilter)
import Text.Pandoc.Shared
import Data.Char(toLower)
import System.Process (readProcess)
import System.IO.Unsafe

main = toJSONFilter highlight

highlight :: Block -> Block
highlight (CodeBlock (_, options, _) code) = RawBlock (Format "html") (pygments code options)
highlight x = x

-- Shell out to pygmentize. readProcess passes arguments directly (no shell),
-- so "-O" and "linenos=inline" must be separate list elements.
pygments :: String -> [String] -> String
pygments code options
         | length options == 1 = unsafePerformIO $ readProcess "pygmentize" ["-l", map toLower (head options), "-f", "html"] code
         | length options == 2 = unsafePerformIO $ readProcess "pygmentize" ["-l", map toLower (head options), "-O", "linenos=inline", "-f", "html"] code
         | otherwise = "<div class=\"highlight\"><pre>" ++ code ++ "</pre></div>"

Note that this program invokes another program, pygmentize, to do the actual highlighting of the source code (pygmentize is part of the Pygments project). So, install pygmentize with your favorite package manager, install Haskell if you have not done so already, and then compile pygments.hs with:

$ ghc -dynamic pygments.hs

That’s it! Putting it all together, to create a blog post, I can now do:

$ pandoc -F pygments -f markdown -t html5 -o blogpost.html blogpost.md

I added some CSS that makes use of the Pygments classes and voilà: you can now view this blog without having to worry about a JavaScript cryptocurrency miner hijacking your CPU. You’re welcome.

Remove all files except a few in Bash

$ ls -1
153390909910_first
15339090991_second
15339090992_third
15339090993_fourth
15339090994_fifth
15339090995_sixth
15339090996_seventh
15339090997_eighth
15339090998_nineth
15339090999_tenth
15339091628_do_not_delete
root
root.sql

We want to delete all files that start with a timestamp (seconds since the epoch), except the newest file (15339091628_do_not_delete) and the files root and root.sql. The easiest way to do this is to enable the shell option extglob (“extended globbing”), which allows us to use patterns to include or exclude files from operations:

$ shopt -s extglob
$ rm !(*do_not_delete|root*)

The last command tells Bash to remove all files, except the ones that match either one of the patterns (everything ending in do_not_delete and everything starting with root). We delimit the patterns with a pipe character |.
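
Because rm is unforgiving, it can pay off to preview what a pattern matches before deleting, for example with echo:

$ echo !(*do_not_delete|root*)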

Other patterns that are supported by extglob include:

?(pattern-list)
      Matches zero or one occurrence of the given patterns

*(pattern-list)
      Matches zero or more occurrences of the given patterns

+(pattern-list)
      Matches one or more occurrences of the given patterns

@(pattern-list)
      Matches one of the given patterns

!(pattern-list)
      Matches anything except one of the given patterns

To disable the extended globbing again:

$ shopt -u extglob

References

To read about all the options that extglob gives you, refer to man bash (search for Pathname Expansion). Searching for shopt in the same manual page will turn up all shell options. To see which shell options are currently enabled for your shell, type shopt -p at the prompt.

Bash’s magic space

What does the “magic space” do?

Given the following:

$ find -wholename '*/path/to/file' -print -quit
$ man rm
$ rm -fv !-2:2

In the last line, it would be nice to get some feedback confirming that we are indeed going to delete the second argument of the command two entries back. If you set Bash’s so-called “magic space”, history expansion will take place right away after typing a space after !-2:2:

$ rm -fv '*/path/to/file'

How to enable the magic space?

Put the following in your ~/.inputrc:

$if Bash
    Space: magic-space
$endif

Start a new session, or use bind -f ~/.inputrc to put the changes in effect immediately.

Other ways to achieve the same

You could also enable shopt -s histverify, which will perform the history expansion and give you another opportunity to modify the command before executing it. This requires you to press Enter a second time, though.
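
For example, with histverify enabled, the first Enter redisplays the expanded command for review, and only the second Enter executes it:

$ shopt -s histverify
$ rm -fv !-2:2
$ rm -fv '*/path/to/file'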

Import contacts (vCards) into Nextcloud

TL;DR

Export your contacts from Google in vCard version 3 format, split the contacts file and use cadaver to upload all files individually to your address book.

The struggle

Last week, I did a fresh install of LineageOS 14.1 on my OnePlus X and decided not to install any GApps. I have been slowly moving away from using Google services and, having found replacements in the form of open-source apps or web interfaces, I felt confident I would be able to use my phone without the Google Play Store or Play Services. (F-Droid is now my sole source of apps.)

To tackle the problem of storing contacts and a calendar that could be synced, I installed a Nextcloud instance on a Raspberry Pi 3. Having installed DAVdroid, I got my phone to sync contacts with Nextcloud, but not all of them: it would stop synchronizing after some 120 contacts, while I had more than 400.

I decided to try a different approach, so I exported the contacts on my phone in vCard format and tried to upload them to Nextcloud, using the aptly named “Contacts” app for this. However, this also failed, unexpectedly. I’m using Nextcloud version 12.0.3 and version 2.0.1 of the Contacts app, and it refuses to accept vCard version 2.1 (HTTP response code 415: Unsupported Media Type). This, naturally, is the version Android 6 uses to export contacts.

After some searching, I found out that if you go to contacts.google.com, you can download your contacts in vCard version 3 format. Problem fixed? Well, not so fast: importing 400+ contacts into Nextcloud using the web interface on a Raspberry Pi 3 with an SD card for storage takes a long time. In fact, it never finished over the course of a couple of hours (!), so I needed yet another approach.

Fortunately, you can approach your Nextcloud instance through the WebDAV protocol using tools such as cadaver:

$ cadaver https://192.168.1.14/nextcloud/remote.php/dav

Storing your credentials in a .netrc file in your home directory will enable cadaver to verify your identity without prompting, making it suitable for scripting:

machine 192.168.1.14
login foo
password correcthorsebatterystaple

cadaver allows you to traverse the directories of the remote file system over WebDAV. To put a single local contacts file (from your working machine) to the remote Raspberry Pi, you could tell it to:

dav:/nextcloud/remote.php/dav/> cd addressbooks/users/{username}/{addressbookname}
dav:/nextcloud/remote.php/dav/addressbooks/users/foo/Contacts/> put /home/foo/all.vcf all.vcf

I had a single vcf file with 400+ contacts in it, but after uploading it this way, only a single contact was displayed. Apparently, Nextcloud’s contacts app assumes a single vcf file contains only a single contact. New challenge: we need to split this single vcf file containing multiple contacts into separate files that we can then upload to Nextcloud.

To split the contacts, we can use awk:

BEGIN {
    # gawk interprets a multi-character RS as a regular expression
    RS="END:VCARD\r?\n"
    FS="\n"
}
# skip the (possibly empty) trailing record after the last END:VCARD
/BEGIN:VCARD/ {
    # generate a random filename for each contact (requires pwgen)
    command = "echo -n $(pwgen 20 1).vcf"
    command | getline filename
    close(command)
    print $0 "END:VCARD" > filename
}

This splits the input on the record separator END:VCARD, skips the (possibly empty) trailing record, and generates a random filename to store each individual contact in. (I also wrote a Java program to do the same thing, which is faster when splitting large files.)

Obviously, it would be convenient now if we could upload all these files in one go. cadaver does provide the mput action to do so, but I did not get it to work with wildcards. So instead, I created a file with put commands:

for file in *.vcf; do
    echo "put $(pwd)/$file addressbooks/users/foo/Contacts/$file" >> commands
done

And then provided this as input to cadaver:

$ cadaver https://192.168.1.14/nextcloud/remote.php/dav <<< $(cat commands)

This may take a while (it took around an hour for 400+ contacts), but at least you get to see the progress as each request is made and processed. And voilà, all the contacts are displayed correctly in Nextcloud.