Dec 14, 2014
ZoL 0.6.3 introduced the ZFS event daemon, zed. It can execute scripts for every event ZFS generates (see zpool events). A basic, but pretty useful example: get a mail including a detailed report every time a pool scrub finishes. Of course, anything you can script, you can do. The (small) drawback: at least the ZoL packages for Ubuntu ship without config files, examples or an upstart script for zed.
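If you want to see what zed will be reacting to, the event stream can be inspected directly; -v adds each event's detailed payload:
zpool events -v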
The config file zed.rc can be found on GitHub, as well as example scripts to get you started. Both the rc file and the scripts go in /etc/zfs/zed.d/.
This leaves us with a working daemon, but it won't automatically start. Here's a basic upstart file to fix that.
# zed - the ZFS event daemon
description "The ZFS event daemon"
start on local-filesystems
stop on runlevel [!2345]
respawn
expect daemon
exec /sbin/zed
Put that bit in /etc/init/zed.conf and you've got zed up and running after reboots.
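With the job file in place, upstart can also start the daemon right away instead of waiting for the next boot:
sudo start zed
status zed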
Dec 05, 2014
Lots of interesting events are being streamed live on the web these days. Unfortunately, most streaming providers use flash players with annoyingly complicated protocols to distribute the video, which makes it hard to view the stream in your favorite player. Luckily for us, there's Livestreamer. Given a URL, it extracts the video and pipes it to a player of your choice.
Livestreamer is written in Python and can be installed using pip. To be able to access all of its features, a little additional preparation is needed.
Preparations
In order to access ustream's HD streams, Livestreamer needs python-librtmp, which is also available via pip. That library needs cffi, so we have to install cffi's dependencies, too: python2.7-dev and libffi-dev. python-librtmp itself additionally needs librtmp-dev.
sudo apt-get install python2.7-dev libffi-dev librtmp-dev
After installing those development headers, we can install cffi and python-librtmp.
sudo pip install cffi
sudo pip install python-librtmp
Once that is done, Livestreamer can be installed.
sudo pip install livestreamer
Configuration
All of Livestreamer's options are available as CLI switches, but setting your preferred player and stream quality in the config file saves you from having to input them every time.
The config resides in ~/.config/livestreamer/config; a viable minimal config can be found below.
player=mplayer
default-stream=best
player-no-close
With this config, you can start Livestreamer with a URL as the only argument, and it will start playing the stream in the best available quality in mplayer.
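For example, with the config above the following picks the best available quality automatically (the channel URL is just a placeholder):
livestreamer http://www.ustream.tv/channel/example-channel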
Of course, Livestreamer offers lots of features not mentioned here, including support for sites that require you to log in before viewing streams, and several options to tweak the streaming behaviour to fit your hardware, connection and favorite player.
Feb 28, 2014
As of a few days ago, Amazon's video streaming service is available in Germany. If you're on Linux, though, you'll be greeted with an error message and won't be able to view videos in your browser. This is because Amazon uses Microsoft's Silverlight to deliver the videos. Fortunately for us, there's Pipelight, a one-stop solution for running the Silverlight plugin in Wine and piping the streamed video back to a native browser.
The installation, while rather easy, consists of several steps, so I'll detail the process here.
Installing Pipelight
Add the Pipelight PPA to your sources, update your package list and install pipelight. This is straight from the Pipelight readme.
sudo add-apt-repository ppa:pipelight/stable
sudo apt-get update
sudo apt-get install --install-recommends pipelight-multi
Then, get the latest Silverlight plugin and activate it. You will probably be prompted about some licences; press Y to accept them.
sudo pipelight-plugin --update
sudo pipelight-plugin --enable silverlight
While you now have a working Silverlight plugin, Amazon will still refuse to stream to your browser. That's because your browser's user agent betrays the fact that you're running Linux.
Installing a user agent switching addon
Thus, we need an addon to fix that. I use UAControl, since it allows for site-based switching. Unfortunately, it can't change the value reported by JavaScript's navigator.userAgent property. User-Agent JS Fixer takes care of that for us.
Install those two addons, then open UAControl's preferences and add Firefox 15/Windows: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120427 Firefox/15.0a1 as your new user agent for amazon.de. Save that and you're done. Enjoy streaming!
May 21, 2013
Since Firefox has problems handling SIGTERM gracefully, here's another way of quitting a running instance from the command line:
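One way to do this, assuming wmctrl is installed (it may not be what the original snippet used), is to ask the window manager to close the matching window gracefully:
wmctrl -c firefox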
firefox is the pattern used to match the window title against.
Feb 18, 2013
Getting awesome to use the GTK2 and cursor theme you want is a matter of simply creating a file named .gtkrc-2.0 in your home directory and setting the theme there.
gtk-theme-name="Your-Theme"
gtk-cursor-theme-name="Your-Cursor-Theme"
Setting the GTK3 theme is just as simple, but the file is ~/.config/gtk-3.0/settings.ini.
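The GTK3 file uses ini syntax, so the equivalent of the snippet above looks something like this:
[Settings]
gtk-theme-name=Your-Theme
gtk-cursor-theme-name=Your-Cursor-Theme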
Most GTK apps honor that setting. But X has a default cursor that is used for non-GTK apps. That, annoyingly, includes the desktop background. In order to get X to use the same cursor, you have to run one additional command:
sudo update-alternatives --config x-cursor-theme
That will present you with a list of cursor themes to choose from. Pick your favorite, restart your session and you're set.
Jan 04, 2013
Not that I'd be using something like this, but since the question came up in #zsh and I couldn't find any post listing the most sensible solutions, I decided to document the results.
In order to get a two-line prompt, with the hostname and current time on the first line and user name, working directory and prompt character on the second,
just set your PROMPT variable to one of the following values.
Option 1: Use \n within $'' (that's two single quotes):
PROMPT=$'%m – %*\n%n:%~:%# '
Option 2: Use a line break within "":
PROMPT="%m – %*
%n:%~:%# "
Option 3: Use $prompt_newline
PROMPT="%m – %*$prompt_newline%n:%~:%# "
Jan 02, 2013
almir is a bacula frontend, but its embedded web server does not (as of the time of writing) support SSL. So, in order to get at least a token amount of security, I decided to use nginx to add SSL capabilities. After instructing almir to only accept connections from localhost, I configured nginx as a reverse proxy with these features:
- Redirection of http requests to https
- Basic auth
- Automatic rewriting of the absolute URLs almir uses to include JavaScript etc. into relative URLs, in order to avoid problems with modern browsers' XSS protection
The first item in that list is easy to do, as the following snippet from the config and the wiki page it is copied from show.
server {
listen 80;
server_name your.full-qualified-domain.name;
return 301 https://$server_name$request_uri;
}
The next item is trivial as well; see the snippet and the wiki.
location / {
# …
auth_basic 'Your realm';
auth_basic_user_file /path/to/passwd;
# …
}
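The passwd file referenced by auth_basic_user_file can be created with htpasswd from the apache2-utils package (an assumption; any htpasswd-compatible generator works):
sudo apt-get install apache2-utils
sudo htpasswd -c /path/to/passwd youruser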
Figuring out that last feature was a lot more annoying. After finding the cause of the problem – my browser's XSS protection – dealing with it was a matter of configuring nginx' HttpSubModule to substitute the absolute URLs used by almir with relative URLs.
location / {
# …
sub_filter 'http://your.full-qualified-domain.name/' '/';
sub_filter_once off;
# …
}
For your (and my future) reference, here's the complete config file.
server {
listen 443;
server_name your.full-qualified-domain.name;
ssl on;
ssl_certificate /path/to/ssl/crt;
ssl_certificate_key /path/to/ssl/key;
location / {
proxy_pass http://localhost:2500;
proxy_redirect http:// https://;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
sub_filter 'http://your.full-qualified-domain.name/' '/';
sub_filter_once off;
auth_basic "Your realm";
auth_basic_user_file /path/to/passwd;
}
}
server {
listen 80;
server_name your.full-qualified-domain.name;
return 301 https://$server_name$request_uri;
}
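After saving the config, a quick syntax check and a reload apply the changes (assuming nginx runs as a regular system service):
sudo nginx -t
sudo service nginx reload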
Oct 03, 2012
Let's assume foo is a package that I want to configure differently than the package maintainer does, but I don't want to go through the hassle of creating my own build scripts. Luckily, Debian (and Ubuntu) provide a way to use the package maintainer's scripts without the need to do anything yourself but enter a few simple lines into your shell.
The following commands will
- Download the source code of the latest version available for foo into a subdirectory of the current working directory
- Install all the dependencies needed to build foo
- Configure foo more to your liking
- Create installable deb-files while skipping the signing of those files (since you probably don't have the keys to do that anyway)
Notice that, apart from installing the build dependencies, you don't need root for any of those commands!
apt-get source foo
sudo apt-get build-dep foo
cd foo-4.2
./configure --with-more-awesomeness
dpkg-buildpackage -b -us -uc
If you want to, you can increase the version number of foo after configuring it by editing the debian/changelog file. Just copy the previous entry and adjust to reflect your changes.
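For illustration, a new entry for our hypothetical foo package could look like this (version suffix, name and mail address are made up; the spacing, including the two spaces before the date, is part of the format):
foo (4.2-1ubuntu1+custom1) precise; urgency=low

  * Rebuild with --with-more-awesomeness.

 -- Your Name <you@example.com>  Wed, 03 Oct 2012 12:00:00 +0200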
Sep 16, 2012
Usually, when writing a shell script that relies on parsing the output of some other program — using grep, awk or whatever your text manipulation tool of choice is — I check the output of the command on my machine and write my regex accordingly. What I — and, judging by the scripts you find online for any given problem, lots of other people — tend to forget: even though I consider it almost mandatory to have my servers' shells set to English, not everybody uses English as the default language for their system. Thus, your carefully tested regex either fails to parse the output or, even worse, parses wrong values, causing your script to misbehave in unforeseeable ways.
Fortunately, forcing your script to run with a locale of your choice is simple. Given you want your script to use the 'C' locale, which is plain-ASCII English and available on every machine, simply add the following as the first command in your script. This will cause the script to print all messages, times and numbers in the given locale without changing any system-wide settings.
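In a POSIX shell, that boils down to a single line (LC_ALL is the variable mentioned below):
export LC_ALL=C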
You could also use LANG, but that setting might be overridden if the user has LC_ALL set.
Resources
This Ubuntu help page has a list of all language-related environment variables.
Sep 13, 2012
This is probably a very setup-specific bug, but since it took me quite a while to figure out, I thought I'd blog about it anyway. The problem was one specific bacula client — connected to both director and storage daemon via VPN — not completing any backup larger than a few megabytes. The logs didn't show anything but the not very helpful 'Connection reset by peer' message. Strangely enough, the files were copied just fine, but the director considered the backup failed afterwards anyway.
What (probably) happened
The VPN tunnel, while not unstable as such, seems to drop idle connections after a while. The data connection is active the whole time, so the files are copied without problems, but the idle control connection gets killed in the meantime, which leads to the problem described above when bacula tries to update the database after the file copy finishes.
How to fix it
The fix, it turns out, is trivial once you know why the problem occurs. Bacula has a Heartbeat Interval directive for the director, file daemon and storage daemon. Activating a 30 second heartbeat for both the affected file daemon and the storage daemon did the trick.
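As a sketch (the daemon name is made up), the relevant part of bacula-fd.conf on the affected client would look something like this; the storage daemon gets the same directive in the Storage resource of bacula-sd.conf:
FileDaemon {
  Name = affected-client-fd
  # keep the otherwise idle control connection alive across the VPN
  Heartbeat Interval = 30
}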
Sep 11, 2012
Static blogs are all the rage now, and while setting up one of these is not as easy as registering an account on tumblr, it still isn't rocket surgery.
In addition to using the latest and greatest in publishing technology, you have the added benefit of not needing any executable code on your server. All the work — like generating the HTML files served later — is (or can be) done locally, on your own machine.
Since I like Python, I decided to use pelican as the generator, so the machine you want to use it on should have Python installed. Their quickstart is actually pretty good, but since it doesn't cover using distribution-provided packages when available, I figured I'd document my approach here.
Let's get started. My machine runs Ubuntu 12.04, but most commands should work on any halfway recent Linux installation, even though some paths might differ.
Setting up virtualenv
In order to not install any python eggs systemwide, we start by setting up a virtual environment for all of pelican's dependencies to live in.
- Install the needed programs:
sudo apt-get install python-virtualenv virtualenvwrapper
- Add those lines to your shell's resource file (e.g. ~/.bashrc) and rehash your shell (source ~/.bashrc should do the trick):
export WORKON_HOME=$HOME/.virtualenvs
source /etc/bash_completion.d/virtualenvwrapper
- Create a virtual environment for pelican and associate it with the directory your blog is stored in (I will use ~/blog from now on):
mkvirtualenv pelican
mkdir ~/blog && cd $_
setvirtualenvproject
Installing and initializing pelican
- Install pelican and, optionally, Markdown. I recommend using pip to do so:
pip install pelican Markdown
- Initialize your blog by answering a few questions (see the command below). Afterwards, you're done.
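Assuming a pelican version recent enough to ship it, the helper that asks those questions is:
pelican-quickstart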
- In order to locally test your new blog, I recommend using Python's builtin web server. pelican includes a script to automatically watch your content directory, building new files and serving them using said server. Unfortunately, this script didn't work out of the box on my machine. But don't worry, it's a rather trivial fix. Replace line 4 of the script with
PELICAN=~/.virtualenvs/pelican/bin/pelican
and line 7 with
Writing your first blog post
- More, you say? Alright then. Here's how to write your first blog post: create a text file in ~/blog/content/. Depending on whether you're using reStructuredText or Markdown, the name should end in .rst or .md, respectively. This small example should get you started:
Title: My first blog post
Date: 2012-9-11
Tags: blog,post
Everything but the title tag is optional; anything that's not a tag is turned into content.
- Let pelican do its magic and visit http://localhost:8000 afterwards:
make html
./develop_server.sh start
- Upload the generated files (located in ~/blog/output) to a webserver of your choice — or let pelican handle that, too.
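If you answered the quickstart's questions about your server, the generated Makefile should contain matching upload targets; assuming you went with rsync over SSH, that would be:
make rsync_upload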
Useful resources
Sep 10, 2012
Our CUPS servers tend to, from time to time, get stuck sending data to a printer. Once this has happened, the server pauses the printer, which leads to angry calls, since the server continues to accept jobs for said printer without notifying the Windows client machines of its status. Instead of configuring CUPS' webserver to accept connections from other machines (or using SSH -X to start a local browser), this problem can easily be solved using good old plain SSH.
- SSH into the machine running CUPS.
- Run cancel -a <printer> to clear the printer of all jobs; usually they have accumulated a few by the time someone bothers to notify me.
- Run cupsenable <printer> to unpause the printer.
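To verify the printer really is enabled and idle again, lpstat can be run in the same SSH session:
lpstat -p <printer>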