Latest posts

The Debian Janitor

There are a lot of small changes that can be made to the Debian archive to increase its overall quality. Individually, these changes bring only minor benefits when applied to a single package. Lintian encourages maintainers to fix these problems by pointing out the common ones.

Most of these issues are trivially fixable; addressing them by hand is an inefficient use of human time, and keeping up with them takes considerable effort. This is something that can clearly be automated.

Several tools (e.g. onovy's mass tool, and the lintian-brush tool that I've been working on) go a step further and (for a subset of the issues reported by lintian) fix the problems for you, where they can. Lintian-brush can currently fix most instances of close to 100 lintian tags.

Thanks to the Vcs-* fields set by many packages and the APIs provided by hosting platforms like Salsa, it is now possible to proactively attempt to fix these issues.

The Debian Janitor is a tool that will run lintian-brush across the entire archive, and propose fixes to lintian issues via pull request.

Objectives

The aim of Debian Janitor is to take some drudge work away from Debian maintainers where possible, so they can spend their time on more important packaging work. Its purpose is to make automated changes quick and easy to apply, with minimal overhead for package maintainers. It is essentially a bit of infrastructure to run lintian-brush across all of the archive.

The actions of the bot are restricted to a limited set of problems for which obviously correct actions can be taken. It is not meant to automate all packaging, or even to cover automating all instances of the issues it knows about.

The bot is designed to be conservative and to delight with consistently correct fixes rather than proposing possibly incorrect fixes and hoping for the best. Considerable effort has been made to avoid the janitor creating pull requests with incorrect changes: these take valuable time away from maintainers, the package doesn't actually improve (since the merge request is rejected), and they make it likelier that future pull requests from the Debian Janitor bot are ignored or rejected.

In short: The janitor is meant to propose correct changes if it can, and back off otherwise.

Design

The Janitor finds package sources in version control systems from the Vcs-* control fields in Debian source packages. If the packaging branch is hosted on a hosting platform that the Janitor has a presence on, it will attempt to run lintian-brush on the packaging branch and (if there are any changes made) build the package and propose a merge. It is based on silver-platter and currently supports a number of hosting platforms.

The Janitor is driven from the lintian and vcswatch tables in UDD. It queries for packages that are affected by any of the lintian tags that lintian-brush has a fixer script for. This way it can limit the number of repositories it has to process.
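As a hedged sketch, a query along these lines could be run against the public read-only UDD mirror; the mirror credentials below are the publicly documented ones, but the exact table and column names, and the tags chosen, are assumptions for illustration:

```shell
# Write a query for source packages carrying lintian tags that
# lintian-brush knows how to fix. Table and column names are
# assumptions based on UDD's lintian table.
cat > affected.sql <<'EOF'
SELECT DISTINCT package
  FROM lintian
 WHERE tag IN ('insecure-copyright-format-uri',
               'public-upstream-key-not-minimal');
EOF

# Run it against the public read-only UDD mirror (needs network access):
# psql postgresql://udd-mirror:udd-mirror@udd-mirror.debian.net/udd -f affected.sql
echo "query written to affected.sql"
```

Limiting the candidate set this way means the Janitor only clones repositories it has a realistic chance of improving.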

Ensuring quality

There are a couple of things I am doing to make sure that the Debian Janitor delights rather than annoys.

High quality changes

Lintian-brush has end-to-end tests for its fixers.

In order to make sure that merge requests are useful and high-value, the bot will only propose changes from lintian-brush that:

  • successfully build in a chroot and pass autopkgtest and piuparts;
  • are not completely trivial, e.g. only stripping whitespace.

Changes for a package will also be reviewed by a human before they make it into a pull request.

One open pull request per package

If the bot created a pull request previously, it will attempt to update the existing request by adding new commits (and updating the pull request description). It will recreate the branch with fresh fixes when the pull request conflicts because of new upstream changes.

In other words, it will only create a single pull request per package and will attempt to keep that pull request up to date.

Gradual rollout

I'm slowly rolling pull requests out to interested maintainers before opening it up to the entire archive. This should help catch any widespread issues early.

Providing control

The bot will be upfront about its pull requests and try to avoid overwhelming maintainers with pull requests by:

  • Clearly identifying any merge requests it creates as being made by a bot. This should allow maintainers to prioritize contributions from humans.
  • Limiting the number of open proposals per maintainer. It starts by opening a single merge request and won't open additional merge requests until the first proposal has a response.
  • Providing a way to opt out of future merge requests; just a reply on the merge request is sufficient.

Any comments on merge requests will also still be reviewed by a human.

Current state

Debian janitor is running, generating changes and already creating merge requests for a number of packages (albeit under close review).

Using the janitor

The janitor can process any package that’s maintained in Git and has its Vcs-Git header set correctly (you can use vcswatch to check this).

If you're interested in receiving pull requests early, leave a comment below. Eventually, the janitor should get to all packages, though it may take a while with the current number of source packages in the archive.

By default, Salsa does not send notifications when a new merge request is created for one of the repositories you maintain. Make sure you have notifications enabled in your Salsa profile by ticking "New Merge Requests" for the packages you care about.

You can also see the number of open merge requests for a package repository on QA - it's the ! followed by a number in the pull request column.

It is also possible to download the diff for a particular package (if it's been generated) ahead of the janitor publishing it:

$ curl https://janitor.debian.net/api/lintian-fixes/pkg/PACKAGE/diff

E.g. for i3-wm, look at https://janitor.debian.net/api/lintian-fixes/pkg/i3-wm/diff.

Future Plans

The current set of supported hosting platforms covers the bulk of the packages in Debian that are maintained in a VCS. The only other platform with more than 100 packages that's unsupported is dgit. If you have suggestions on how best to submit git changes to dgit repositories (BTS bugs with patches? or would that be too much overhead?), let me know.

The next platform currently missing is Bitbucket, but only about 15 packages in unstable are hosted there.

At the moment, lintian-brush can fix close to 100 lintian tags. It would be great to add fixers for more common issues.

The janitor should probably be more tightly integrated with other pieces of Debian infrastructure, e.g. Jenkins for running jobs or linked to from the tracker or lintian.debian.org.

More information

See the FAQ on the homepage.

If you have any concerns about these roll-out plans, have other ideas or questions, please let me know in the comments.


Silver Platter

Making changes across the open source ecosystem is very hard; software is hosted on different platforms and in many different version control repositories. Not being able to make bulk changes slows down the rate of progress. For example, instead of being able to actively run a script that strips out an obsolete control field (say "DM-Upload-Allowed") across all Debian packages, we make the linter warn about the deprecated field and wait for all developers to manually remove it.

Silver Platter

Silver-platter is a new tool that aids in making automated changes across different version control repositories. It provides a common command-line interface and API that is not specific to a single version control system or hosting platform, so that it's easy to propose changes based on a single script across a large set of repositories.

The tool will check out a repository, run a user-specified script that makes changes to the repository, and then either push those changes to the upstream repository or propose them for merging.

It's specifically built so that it can be run in a shell loop over many different repository URLs.

Example

As an example, you could use the following script (fix-fsf-address.sh) to update the FSF address in copyright headers:

#!/bin/sh

perl -i -pe \
'BEGIN{undef $/;} s/Free Software
([# ]+)Foundation, Inc\., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA/Free Software
\1Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA/smg' *

echo "Update FSF postal address."

Say you wanted to create a merge proposal with these changes against offlineimap. First, log into GitHub (this needs to be done once per hosting site):

$ svp login https://github.com

To see what the changes would be without actually creating the pull request, do a dry-run:

$ svp run --dry-run --diff ./fix-fsf-address.sh https://github.com/offlineimap/offlineimap
Merge proposal created.
Description: Update FSF postal address.

=== modified file 'offlineimap.py'
--- upstream/offlineimap.py 2018-03-04 03:28:30 +0000
+++ proposed/offlineimap.py 2019-04-06 21:07:25 +0000
@@ -14,7 +14,7 @@
 #
 #    You should have received a copy of the GNU General Public License
 #    along with this program; if not, write to the Free Software
-#    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+#    Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA

 import os
 import sys

=== modified file 'setup.py'
--- upstream/setup.py       2018-05-01 01:48:26 +0000
+++ proposed/setup.py       2019-04-06 21:07:25 +0000
@@ -19,7 +19,7 @@
 #
 #    You should have received a copy of the GNU General Public License
 #    along with this program; if not, write to the Free Software
-#    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+#    Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA

 import os
 from distutils.core import setup, Command

Then, create the actual pull request by running:

$ svp run ./fix-fsf-address.sh https://github.com/offlineimap/offlineimap
...
Reusing existing repository https://github.com/jelmer/offlineimap
Merge proposal created.
URL: https://github.com/OfflineIMAP/offlineimap/pull/609
Description: Update FSF postal address.

This would create a new commit with the updated postal address (if any files were changed) and the commit message Update FSF postal address. You can see the resulting pull request here.
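Since svp takes the fixer script and the repository URL as plain arguments, it's straightforward to loop the same fixer over many repositories. A sketch (repos.txt is a hypothetical list; the commands are echoed so the loop can be previewed, drop the echo to actually invoke svp):

```shell
# A hypothetical list of repositories to run the fixer against.
cat > repos.txt <<'EOF'
https://github.com/example/project-a
https://github.com/example/project-b
EOF

# Preview the svp invocations; remove "echo" to run them for real.
while read -r url; do
    echo svp run --dry-run --diff ./fix-fsf-address.sh "$url"
done < repos.txt
```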

Debian-specific operations

To make working with Debian packaging repositories easier, Silver Platter comes with a wrapper (debian-svp) specifically for Debian packages.

This wrapper allows specifying package names to refer to packaging branches; packaging URLs are retrieved from the Vcs-Git header in a package. For example:

$ debian-svp run ~/fix-fsf-address.sh offlineimap

to fix the same issue in the offlineimap package.

(Of course, you wouldn't normally fix upstream issues like this in the Debian package, but forward them upstream instead.)

There is also a debian-svp lintian-brush subcommand that will invoke lintian-brush on a packaging branch.

Supported technologies

Silver-Platter currently supports a number of hosting platforms.

It works in one of three modes:

  • propose: Always create a pull request with the changes.
  • push: Directly push changes back to the original branch.
  • attempt-push: Attempt a push, and fall back to propose if the current user doesn't have permission to push to the repository or branch.
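Selecting a mode could look roughly like this; the `--mode` flag spelling is an assumption (check svp(1) for the authoritative name), and the commands are echoed so they can be previewed:

```shell
# Preview one svp invocation per mode; the --mode flag name is an
# assumption based on the three modes listed above.
for mode in propose push attempt-push; do
    echo "svp run --mode=$mode ./fix-fsf-address.sh https://github.com/example/project"
done
```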

Installation

There is a Silver Platter repository on GitHub. Silver Platter is also available as a Debian package in unstable (not buster).

More information

For a full list of svp subcommands, see svp(1).


Breezy evolves

Last month Martin, Vincent and I finally released version 3.0.0 of Breezy, a little over a year after we originally forked Bazaar.

When we started working on Breezy, it was mostly as a way to keep Bazaar working going forward, in a world where Python 2 has mostly disappeared in favour of Python 3.

Improvements

Since then, we have also made other improvements. In addition to Python 3 support, Breezy comes with the following other bigger changes:

Batteries Included

Breezy bundles most of the common plugins. This makes the installation of Breezy much simpler (pip install brz), and prevents possible issues with API incompatibility that plagued Bazaar.

Bundled plugins include: grep, git, fastimport, propose, upload, stats and parts of bzrtools.

>120 fixed bugs

Since Bazaar 2.7, lots of bugs in the Bazaar code base have been fixed (over 120 as of March 2019). We've also started an effort to go through all bugs in the Bazaar bug tracker to see whether they also apply to Breezy.

Native Git Support

Breezy now supports the Git file formats as a first class citizen; Git support is included in Breezy itself, and should work just as well as regular Bazaar format repositories.

Improved abstractions

Bazaar has always had a higher-level API that could be used for version control operations, and which was implemented for the Bazaar, Git and Subversion formats.

As part of the work to support the Git format natively, we have changed the API to remove Bazaar-specific artefacts, like the use of file ids. Inventories (a Bazaar concept) are now also an implementation detail of the bzr formats, and not a concept that is visible in the API or UI.

In the future, I hope the API will be useful for tools that want to make automated changes to any version controlled resource, whether that be Git, Bazaar, Subversion or Mercurial repositories.


Lintian Brush

With Debian packages now widely being maintained in Git repositories, there has been an uptick in the number of bulk changes made to Debian packages. Several maintainers are running commands over many packages (e.g. all packages owned by a specific team) to fix common issues in packages.

Examples of changes being made include:

  • Updating the Vcs-Git and Vcs-Browser URLs after migrating from alioth to salsa
  • Stripping trailing whitespace in various control files
  • Updating e.g. homepage URLs to use https rather than http

Most of these can be fixed with simple sed or perl one-liners.
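For instance, stripping trailing whitespace from a control file comes down to a single sed invocation (the file contents here are made up for illustration):

```shell
# Create a control file with trailing whitespace, then strip it in place.
mkdir -p debian
printf 'Source: example  \nMaintainer: Jane Doe <jane@example.com>\t\n' > debian/control
sed -i 's/[[:space:]]*$//' debian/control
cat debian/control
```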

Some of these scripts are publicly available.

Lintian-Brush

Lintian-Brush is both a simple wrapper around a set of these kinds of scripts and a repository for these scripts, with the goal of making it easy for any Debian maintainer to run them.

The lintian-brush command-line tool is a simple wrapper that runs a set of "fixer scripts", and for each:

  • Reverts the changes made by the script if it failed with an error
  • Commits the changes to the VCS with an appropriate commit message
  • Adds a changelog entry (if desired)

The tool also provides some basic infrastructure for testing that these scripts do what they should, and e.g. don't have unintended side-effects.

The idea is that it should be safe, quick and unobtrusive to run lintian-brush, and get it to opportunistically fix lintian issues and to leave the source tree alone when it can't.

Example

For example, running lintian-brush on the package talloc fixes two minor lintian issues:

% debcheckout talloc
declared git repository at https://salsa.debian.org/samba-team/talloc.git
git clone https://salsa.debian.org/samba-team/talloc.git talloc ...
Cloning into 'talloc'...
remote: Enumerating objects: 2702, done.
remote: Counting objects: 100% (2702/2702), done.
remote: Compressing objects: 100% (996/996), done.
remote: Total 2702 (delta 1627), reused 2601 (delta 1550)
Receiving objects: 100% (2702/2702), 1.70 MiB | 565.00 KiB/s, done.
Resolving deltas: 100% (1627/1627), done.
% cd talloc
talloc% lintian-brush
Lintian tags fixed: {'insecure-copyright-format-uri', 'public-upstream-key-not-minimal'}
% git log
commit 0ea35f4bb76f6bca3132a9506189ef7531e5c680 (HEAD -> master)
Author: Jelmer Vernooij <jelmer@debian.org>
Date:   Tue Dec 4 16:42:35 2018 +0000

    Re-export upstream signing key without extra signatures.

    Fixes lintian: public-upstream-key-not-minimal
    See https://lintian.debian.org/tags/public-upstream-key-not-minimal.html for more details.

 debian/changelog                |   1 +
 debian/upstream/signing-key.asc | 102 +++++++++++++++---------------------------------------------------------------------------------------
 2 files changed, 16 insertions(+), 87 deletions(-)

commit feebce3147df561aa51a385c53d8759b4520c67f
Author: Jelmer Vernooij <jelmer@debian.org>
Date:   Tue Dec 4 16:42:28 2018 +0000

    Use secure copyright file specification URI.

    Fixes lintian: insecure-copyright-format-uri
    See https://lintian.debian.org/tags/insecure-copyright-format-uri.html for more details.

 debian/changelog | 3 +++
 debian/copyright | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

Script Interface

A fixer script is run in the root directory of a package, where it can make the changes it deems necessary, and write a summary of what it's done for the changelog (and commit message) to standard output.

If a fixer cannot provide any improvements, it can simply leave the working tree untouched; lintian-brush will not create any commits for it or update the changelog. If it exits with a non-zero exit code, it is assumed to have failed to run: it will be listed as such and its changes reset rather than committed.
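A minimal hypothetical fixer following that interface might look like this; it fixes what it can, prints a one-line summary to standard output, and prints nothing (leaving the tree untouched) when there is nothing to do. The script name and the issue it fixes are made up for illustration:

```shell
cat > fix-trailing-whitespace.sh <<'EOF'
#!/bin/sh
set -e
# Nothing to fix: exit quietly without touching the tree.
grep -rIlq '[[:space:]]$' debian 2>/dev/null || exit 0
# Strip trailing whitespace from every file under debian/.
find debian -type f -exec sed -i 's/[[:space:]]*$//' {} +
# One-line summary for the changelog entry and commit message.
echo "Trim trailing whitespace."
EOF
chmod +x fix-trailing-whitespace.sh
```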

In addition, tests can be added for fixers by providing various before and after source package trees, to verify that a fixer script makes the expected changes.

For more details, see the documentation on writing new fixers.

Availability

lintian-brush is currently available in unstable and testing. See man lintian-brush(1) for an explanation of the command-line options.

Fixer scripts are included that can fix (some of the instances of) 34 lintian tags.

Feedback would be great if you try lintian-brush - please file bugs in the BTS, or propose pull requests with new fixers on salsa.


Breezy: Forking Bazaar

A couple of months ago, Martin and I announced a friendly fork of Bazaar, named Breezy.

It's been 5 years since I wrote a Bazaar retrospective and around 6 since I seriously contributed to the Bazaar codebase.

Goals

We don't have any grand ambitions for Breezy; the main goal is to keep Bazaar usable going forward. Your open source projects should still be using Git.

The main changes we have made so far come down to fixing a number of bugs and bundling useful plugins. Bundling plugins makes setting up an environment simpler and eliminates the API compatibility issues that plagued external plugins in the Bazaar world.

Perhaps the biggest effort in Breezy is porting the codebase to Python 3, allowing it to be used once Python 2 goes EOL in 2020.

A fork

Breezy is a fork of Bazaar and not just a new release series.

Bazaar upstream has been dormant for the last couple of years anyway - we don't lose anything by forking.

We're forking because it gives us the independence to make some of the changes we deemed necessary and that are otherwise hard to make for an established project. For example, we're now bundling plugins, taking an axe to a large number of APIs and dropping support for older platforms.

A fork also means independence from Canonical; there is no CLA for Breezy (a hindrance for Bazaar) and we can set up our own infrastructure without having to chase down Canonical staff for web site updates or the installation of new packages on the CI system.

More information

Martin gave a talk about Breezy at PyCon UK this year.

Breezy bugs can be filed on Launchpad. For the moment, we are using the Bazaar mailing list and the #bzr IRC channel for any discussions and status updates around Breezy.


Xandikos, a lightweight Git-backed CalDAV/CardDAV server

For the last couple of years, I have self-hosted my calendar and address book data. Originally I just kept local calendars and address books in Evolution, but later I moved to a self-hosted CalDAV/CardDAV server and a plethora of clients.

CalDAV/CardDAV

CalDAV and CardDAV are standards for accessing, managing, and sharing calendaring and addressbook information based on the iCalendar format that are built atop the WebDAV standard, and WebDAV itself is a set of extensions to HTTP.

CalDAV and CardDAV essentially store iCalendar (.ics) and vCard (.vcf) files using WebDAV, but they provide some extra guarantees (e.g. files must be well-formed) and some additional methods for querying the data. For example, it is possible to retrieve all events between two dates with a single HTTP query, rather than the client having to check all the calendar files in a directory.
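That date-range query is the calendar-query REPORT from RFC 4791. A sketch of what the request could look like; the server URL and credentials are placeholders:

```shell
# Write the REPORT body: ask for all VEVENTs in January 2019.
cat > query.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<c:calendar-query xmlns:d="DAV:" xmlns:c="urn:ietf:params:xml:ns:caldav">
  <d:prop>
    <d:getetag/>
    <c:calendar-data/>
  </d:prop>
  <c:filter>
    <c:comp-filter name="VCALENDAR">
      <c:comp-filter name="VEVENT">
        <c:time-range start="20190101T000000Z" end="20190201T000000Z"/>
      </c:comp-filter>
    </c:comp-filter>
  </c:filter>
</c:calendar-query>
EOF

# Send it to a (placeholder) calendar collection; requires a live server:
# curl -u user:pass -X REPORT -H 'Depth: 1' \
#      -H 'Content-Type: application/xml' \
#      --data @query.xml https://dav.example.com/calendars/user/calendar/
echo "REPORT body written to query.xml"
```

The server replies with one response element per matching event, so the client never has to enumerate the collection itself.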

CalDAV and CardDAV are (unnecessarily) complex, in large part because they are built on top of WebDAV. Being able to use regular HTTP and WebDAV clients is quite neat, but results in extra complexity. In addition, because the standards are so large, clients and servers end up only implementing parts of it.

However, CalDAV and CardDAV have one big redeeming quality: they are the dominant standards for synchronising calendar events and addressbooks, and are supported by a wide variety of free and non-free applications. They're the status quo, until something better comes along. (and hey, at least there is a standard to begin with)

Calypso

I have tried a number of servers over the years. In the end, I settled for calypso.

Calypso started out as friendly fork of Radicale, with some additional improvements. I like Calypso because it is:

  • quite simple, understandable, and small (sloccount reports 1700 LOC)
  • stores plain .vcf and .ics files
  • stores history in git
  • easy to set up, e.g. no database dependencies
  • written in Python

Its use of regular files and Git history is useful because, whenever it breaks, it is much easier to see what is happening. If something were to go wrong (e.g. a client decides to remove all server-side entries), it's easy to recover by rolling back history using git.
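A sketch of that recovery path, with a toy repository (names made up for illustration): reverting the commit in which a client wiped everything restores the data without losing history.

```shell
# Set up a toy data repository with one contact.
git init -q caldata
git -C caldata config user.email "you@example.com"
git -C caldata config user.name "You"
printf 'BEGIN:VCARD\nEND:VCARD\n' > caldata/contact.vcf
git -C caldata add contact.vcf
git -C caldata commit -qm "Add contact"

# Simulate a misbehaving client deleting every entry.
git -C caldata rm -q contact.vcf
git -C caldata commit -qm "Client removed all entries"

# Roll back: revert the deletion, keeping the full history intact.
git -C caldata revert --no-edit HEAD >/dev/null
ls caldata
```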

However, there are some downsides to Calypso as well.

It doesn't have good test coverage, making it harder to change (especially in a way that doesn't break some clients), though there have been some recent efforts to make external spec compliance tests like caldavtester work with it.

Calypso's CalDAV/CardDAV/WebDAV implementation is a bit ad-hoc. The only WebDAV REPORTs it implements are calendar-multiget and addressbook-multiget. Support for properties has been added as new clients request them. The logic for replying to DAV requests is mixed with the actual data store implementation.

Because of this, it can be hard to get going with some clients and sometimes tricky to debug.

Xandikos

After attempting to fix a number of issues in Calypso, I kept running into issues with the way its code is structured. This is only fixable by rewriting significant chunks of it, so I opted to instead write a new server.

The goals of Xandikos are along the same lines as those of Calypso, to be a simple CalDAV/CardDAV server for personal use:

  • easy to set up; at the moment, just running xandikos -d $HOME/dav --defaults is enough to start a new server
  • use of plain .ics/.vcf files for storage
  • history stored in Git

But additionally:

  • clear separation between protocol implementation and storage
  • be well tested
  • standards complete
  • standards compliant

Current status

The CalDAV/CardDAV implementation of Xandikos is mostly complete, but there are still a number of outstanding issues.

In particular:

  • lack of authentication support; setting up authentication support in uwsgi or a reverse proxy is one way of working around this
  • there is no useful UI for users accessing the DAV resources via a web browser
  • incomplete test coverage

Xandikos has been tested with a number of clients.

Trying it

To run Xandikos, follow the instructions on the homepage:

./bin/xandikos --defaults -d $HOME/dav

A server should now be listening on localhost:8080 that you can access with your favorite client.


The Samba Buildfarm

Portability has always been very important to Samba. Nowadays Samba is mostly used on top of Linux, but Tridge developed the early versions of his SMB implementation on a Sun workstation.

A few years later, when the project was being picked up, it was ported to Linux and eventually to a large number of other free and non-free Unix-like operating systems.

Initially regression testing on different platforms was done manually and ad-hoc.

Once Samba had support for a larger number of platforms, including numerous variations and optional dependencies, making sure that it would still build and run on all of these became a non-trivial process.

To make it easier to find regressions in the Samba codebase that were platform-specific, tridge put together a system to automatically build Samba regularly on as many platforms as possible. So, in Spring 2001, the build farm was born - this was a couple of years before other tools like buildbot came around.

The Build Farm

The build farm is a collection of machines around the world that are connected to the internet, with as wide a variety of platforms as possible. In 2001, it wasn't feasible to just have a single beefy machine or a cloud account on which we could run virtual machines with AIX, HPUX, Tru64, Solaris and Linux, so we needed access to physical hardware.

The build farm runs as a single non-privileged user, which has a cron job set up that runs the build farm worker script regularly. Originally the frequency was every couple of hours, but soon we asked machine owners to run it as often as possible. The worker script is as short as it is simple. It retrieves a shell script from the main build farm repository with instructions to run and after it has done so, it uploads a log file of the terminal output to samba.org using rsync and a secret per-machine password.

Some build farm machines are dedicated, but there have also been a large number over the years that would just run as a separate user account on a machine that was tasked with something else. Most build farm machines are hosted by Samba developers (or their employers), but we've also had a number of community volunteers over the years who were happy to add an extra user with an extra cron job on their machine, and for a while companies like SourceForge and HP provided dedicated porter boxes that ran the build farm.

Of course, there are some security issues with this way of running things. Arbitrary shell code is downloaded from a host claiming to be samba.org and run. If the machine is shared with other (sensitive) processes, some of the information about those processes might leak into logs.

Our web page has a section about adding machines for new volunteers, with a long list of warnings.

Since then, various other people have been involved in the build farm. Andrew Bartlett started contributing to the build farm in July 2001, working on adding tests. He gradually took over as the maintainer in 2002, and various others (Vance, Martin, Mathieu) have contributed patches and helped out with general admin.

In 2005, tridge added a script to automatically send out an e-mail to the committer of the last revision before a failed build. This meant it was no longer necessary to bisect through build farm logs on the web to find out who had broken a specific platform when; you'd just be notified as soon as it happened.

The web site

Once the logs are generated and uploaded to samba.org using rsync, the web site at http://build.samba.org/ is responsible for making them accessible to the world. Initially there was a single perl file that would take care of listing and displaying log files, but over the years the functionality has been extended to do much more than that.

Initial extensions to the build farm added support for viewing per-compiler and per-host builds, to allow spotting trends. Another addition was searching logs for common indicators of running out of disk space.

Over time, we also added more samba.org-projects to the build farm. At the moment there are about a dozen projects.

In a sprint in 2009, Andrew Bartlett and I changed the build farm to store machine and build metadata in a SQLite database rather than parsing all recent build log files every time their results were needed.

In a follow-up sprint a year later, we converted most of the code to Python. We also added a number of extensions; most notably, linking the build result information with version control information so we could automatically email the exact people that had caused the build breakage, and automatically notifying build farm owners when their machines were not functioning.

autobuild

Sometime in 2011, all committers started using the autobuild script to push changes to the master Samba branch. This script enforces a full build and testsuite run for each commit that is pushed. If the build or any part of the testsuite fails, the push is aborted. This alone massively reduced the number of problematic changes that were pushed, making it less necessary for us to be made aware of issues by the build farm.

The rewrite also introduced some time bombs into the code. The way we called out to our ORM caused the code to fetch all build summary data from the database every time the summary page was generated. Initially this was not a problem, but as the table grew to 100,000 rows, the build farm became so slow that it was frustrating to use.

Analysis tools

Over the years, various special build farm machines have also been used to run extra code analysis tools, like static code analysis, lcov, valgrind or various code quality scanners.

Summer of Code

Over the last couple of years the build farm has been running happily, and hasn't changed much.

This summer one of our summer of code students, Krishna Teja Perannagari, worked on improving the look of the build farm - updating it to the current Samba house style - as well as various performance improvements in the Python code.

Jenkins?

The build farm still works reasonably well, though it is clear that various other tools that have had more developer attention have caught up with it. If we had to reinvent the build farm today, we would probably end up using an off-the-shelf tool like Jenkins that wasn't around 14 years ago. We would also be able to get away with using virtual machines for most of our workers.

Non-Linux platforms have become less relevant in the last couple of years, though we still care about them.

The build farm in its current form works well enough for us, and I think porting to Jenkins - with the same level of platform coverage - would take quite a lot of work and have only limited benefits.

(Thanks to Andrew Bartlett for proofreading the draft of this post.)


Autonomous Shard Distributed Databases

Distributed databases are hard. Distributed databases where you don't have full control over what shards run which version of your software are even harder, because it becomes near impossible to deal with fallout when things go wrong. For lack of a better term (is there one?), I'll refer to these databases as Autonomous Shard Distributed Databases.

Distributed version control systems are an excellent example of such databases. They store file revisions and commit metadata in shards ("repositories") controlled by different people.

Because of the nature of these systems, it is hard to weed out corrupt data if all shards ignorantly propagate broken data. There will be different people on different platforms running the database software that manages the individual shards.

This makes it hard - if not impossible - to deploy software updates to all shards of a database in a reasonable amount of time (though a Chrome-like update mechanism might help here, if that was acceptable). This has consequences for the way in which you have to deal with every change to the database format and model.

(e.g. imagine introducing a modification to the Linux kernel Git repository that required everybody to install a new version of Git).

Defensive programming and a good format design from the start are essential.

Git and its database format do really well in all of these regards. As I wrote in my retrospective, Bazaar has made a number of mistakes in this area, and that was a major source of user frustration.

I propose that every autonomous shard distributed database should aim for the following:

  • For the "base" format, keep it as simple as you possibly can. (KISS)

    The simpler the format, the smaller the chance of mistakes in the design that have to be corrected later. Similarly, it reduces the chances of mistakes in any implementation(s).

    In particular, there is no need for every piece of metadata to be a part of the core database format.

    (in the case of Git, I would argue that e.g. "author" might as well be a "meta-header")

  • Corruption should be detected early and not propagated. This means there should be good tools to sanity check a database, and ideally some of these checks should be run automatically during everyday operations - e.g. when pushing changes to others or receiving them.

  • If corruption does occur, there should be a way for as much of the database as possible to be recovered.

    A couple of corrupt objects should not render the entire database unusable.

    There should be tools for low-level access to the database, and the format and structure should also be documented well enough for power users to understand it, examine it and extract data.

  • No "hard" format changes (where clients /have/ to upgrade to access the new format).

    Not all users will instantly update to the latest and greatest version of the software. The lifecycle of enterprise Linux distributions is long enough that it might take three or four years for the majority of users to upgrade.

  • Keep performance data like indexes in separate files. This makes it possible for older software to still read the data, albeit at a slower pace, and/or generate older format index files.

  • New shards of the database should replicate the entire database if at all possible; having more copies of the data can't hurt if other shards go away or get corrupted.

    Having the data locally available also means users get quicker access to more data.

  • Extensions to the database format that require hard format changes (think e.g. submodules) should only impact databases that actually use those extensions.

  • Leave some room for structured arbitrary metadata, which gets propagated but which not all clients need to understand and can safely ignore.

    (Think of fields like "Signed-Off-By", "Reviewed-By" and "Fixes-Bug" in git commit metadata headers, or the revision metadata fields in Bazaar.)
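The corruption-detection principle above is the one Git's object store gets right: because every object is addressed by the hash of its contents, a reader can recompute the hash and refuse bad data at the shard boundary. Below is a minimal sketch of that idea in Python; the `ObjectStore` class and its methods are invented for illustration, though the `blob <size>\0` header mimics Git's actual blob hashing.

```python
import hashlib

def object_id(data: bytes) -> str:
    # Git hashes a "blob <size>\0" header followed by the raw bytes.
    return hashlib.sha1(b"blob %d\x00%b" % (len(data), data)).hexdigest()

class ObjectStore:
    """A toy content-addressable store: keys are hashes of values."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        oid = object_id(data)
        self._objects[oid] = data
        return oid

    def get(self, oid: str) -> bytes:
        data = self._objects[oid]
        # Verify on every read: a corrupt object is detected here,
        # at the boundary, instead of being propagated to other shards.
        if object_id(data) != oid:
            raise ValueError("corrupt object %s" % oid)
        return data

store = ObjectStore()
oid = store.put(b"hello world\n")
assert store.get(oid) == b"hello world\n"

# Simulate bit rot in the underlying storage:
store._objects[oid] = b"hello w0rld\n"
try:
    store.get(oid)
    corrupted = False
except ValueError:
    corrupted = True
assert corrupted
```

Because verification happens on read rather than only on an explicit fsck, a shard that receives a damaged object notices before passing it along.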


Using Propellor for configuration management

For a while, I've been wanting to set up configuration management for my home network. With half a dozen servers, a VPS and a workstation, it is not big, but it is large enough to make it annoying to manually log into each machine for network-wide changes.

Most of the servers I have are low-end ARM machines, each responsible for a couple of tasks. Most of my machines run Debian or something derived from Debian. Oh, and I'm a member of the declarative school of configuration management.

Propellor

Propellor caught my eye earlier this year. Unlike some other configuration management tools, it doesn't come with its own custom language but it is written in Haskell, which I am already familiar with. It's also fairly simple, declarative, and seems to do most of the handful of things that I need.

Propellor is essentially a Haskell application that you customize for your site. It works very similarly to e.g. xmonad: you write a bit of Haskell configuration code that uses the upstream library code, and when you run the application it builds a binary from your code and the upstream libraries.
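Propellor's configuration is Haskell, but the declarative model it follows can be sketched in any language. The sketch below illustrates the general idea - a property knows how to check whether it already holds and how to establish it, and the engine only acts on unsatisfied properties, which makes runs idempotent. All names here are invented for the example; this is not Propellor's actual API.

```python
class Property:
    """A declarative property: desired state plus how to reach it."""

    def __init__(self, desc, check, apply):
        self.desc = desc
        self.check = check   # () -> bool: does the property already hold?
        self.apply = apply   # () -> None: establish the property

def ensure(properties):
    # Apply only the properties that are not yet satisfied.
    changed = []
    for prop in properties:
        if not prop.check():
            prop.apply()
            changed.append(prop.desc)
    return changed

# A toy "system state" standing in for a real machine:
state = {"packages": {"openssh-server"}}

def installed(pkg):
    return Property(
        "package %s installed" % pkg,
        check=lambda: pkg in state["packages"],
        apply=lambda: state["packages"].add(pkg),
    )

first = ensure([installed("openssh-server"), installed("postfix")])
second = ensure([installed("openssh-server"), installed("postfix")])
assert first == ["package postfix installed"]
assert second == []   # second run: everything already satisfied
```

The appeal of the declarative school is visible even in this toy: running the same configuration twice changes nothing the second time.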

Each host on which Propellor is used keeps a clone of the site-local Propellor git repository in /usr/local/propellor. Every time propellor runs (either because of a manual "spin", or from a cronjob it can set up for you), it fetches updates from the main site-local git repository, compiles the Haskell application and runs it.

Setup

Propellor was surprisingly easy to set up. Running propellor creates a clone of the upstream repository under ~/.propellor with a README file and some example configuration. I copied config-simple.hs to config.hs, updated it to reflect one of my hosts and within a few minutes I had a basic working propellor setup.

You can use ./propellor <host> to trigger a run on a remote host.

At the moment I have propellor working for some basic things - having certain Debian packages installed, a specific network configuration, mail setup, basic Kerberos configuration and certain SSH options set. This took surprisingly little time to set up, and it's been great being able to take full advantage of Haskell.

Propellor comes with convenience functions for dealing with some commonly used packages, such as Apt, SSH and Postfix. For a lot of the other packages, you'll have to roll your own for now. I've written some extra code to make Propellor deal with Kerberos keytabs and Dovecot, which I hope to submit upstream.

I don't have a lot of experience with other Free Software configuration management tools such as Puppet and Chef, but for my use case Propellor works very well.

The main disadvantage of propellor for me so far is that it needs to build itself on each machine it runs on. This is fine for my workstation and high-end servers, but it is somewhat more problematic on e.g. my Raspberry Pis. Compilation takes a while, and the Haskell compiler and the libraries it needs amount to 500MB worth of disk space on the tiny root partition.

In order to work with Propellor, some Haskell knowledge is required. The Haskell in the configuration file is reasonably easy to understand if you keep it simple, but once the compiler spits out error messages, I suspect you'll have a hard time without any Haskell knowledge.

Propellor relies on having a central repository with the configuration that it can pull from as root. Unlike Joey, I am wary of publishing the configuration of my home network and I don't have a highly available local git server setup.


The state of distributed bug trackers

A whopping 5 years ago, LWN ran a story about distributed bug trackers. This was during the early waves of distributed version control adoption, and so everybody was looking for other things that could benefit from decentralization.

TL;DR: Not much has changed since.

The potential benefits of a distributed bug tracker are similar to those of a distributed version control system: ability to fork any arbitrary project, easier collaboration between related projects and offline access to full project data.

The article discussed a number of systems, including Bugs Everywhere, ScmBug, DisTract, DITrack, ticgit and ditz. The conclusion of our favorite grumpy editor at the time was that all of the available distributed bug trackers were still in their infancy.

All of these piggyback on a version control system somehow - either by reusing the VCS database, by storing their data along with the source code in the tree, or by adding custom hooks that communicate with a central server.

Only ScmBug had been somewhat widely deployed at the time, but its homepage gives me a blank page now. Of the trackers reviewed by LWN, Bugs Everywhere is the only one that is still around and somewhat active today.

In the years since the article, a handful of new trackers have come along. Two new version control systems - Veracity and Fossil - come with the kitchen sink included and so feature a built-in bug tracker and wiki.

There is an extension for Mercurial called Artemis that stores issues in an .issues directory that is colocated with the Mercurial repository.
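The idea of colocating issues with the source tree, as Artemis does, is simple to sketch. The code below is an illustration of the general approach, not Artemis's actual on-disk format: each issue is its own file keyed by a random identifier, so independently filed issues in different clones never collide, and a merge is just a union of files.

```python
import json
import os
import tempfile
import uuid

def new_issue(tree, title, body):
    # Issues live in a directory next to the source code, so they are
    # versioned, branched and merged along with it.
    issues_dir = os.path.join(tree, ".issues")
    os.makedirs(issues_dir, exist_ok=True)
    issue_id = uuid.uuid4().hex  # random id: no central counter needed
    with open(os.path.join(issues_dir, issue_id + ".json"), "w") as f:
        json.dump({"title": title, "body": body, "open": True}, f)
    return issue_id

def list_issues(tree):
    issues_dir = os.path.join(tree, ".issues")
    result = {}
    for name in os.listdir(issues_dir):
        with open(os.path.join(issues_dir, name)) as f:
            result[name[: -len(".json")]] = json.load(f)
    return result

tree = tempfile.mkdtemp()
iid = new_issue(tree, "crash on startup", "traceback attached")
assert list_issues(tree)[iid]["title"] == "crash on startup"
```

The one-file-per-issue layout is what makes the scheme distributed-friendly: two clones filing issues concurrently produce different filenames, so the VCS merges them without conflicts.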

The other new tracker that I could find (though it has also not changed since 2009) is SD. It uses its own distributed database technology for storing bug data, called Prophet, and doesn't rely on a VCS. One of its nice features is that it supports importing bugs from foreign trackers.

Some of these provide the benefits you would expect of a distributed bug tracker. Unfortunately, all those I've looked at fail to provide even the basic functionality I would want in a bug tracker. More so than with a version control system, regular users interact with a bug tracker: they report bugs and provide comments and feedback on fixes. All of the systems I tried make these actions a lot harder than your average Bugzilla or Mantis instance does - they provide a limited web UI or no web interface at all.

Update: LWN later also published articles on SD and on Fossil. Other interesting links are Eric Sink's article on distributed bug tracking (Eric works at SourceGear, who develop Veracity) and the dist-bugs mailing list.


Quantified Self

Dear lazyweb,

I've been reading about what the rest of the world seems to be calling "quantified self". In essence, it is tracking of personal data like activity, usually with the goal of data-driven decision making. Or to take a less abstract common example: counting the number of steps you take each day to motivate yourself to take more. I wish it'd been given a less annoying woolly name but this one seems to have stuck.

There are a couple of interesting devices available that track sleep, activity and overall health. Probably best known are the FitBit and the jazzed-up armband pedometers like the Jawbone UP and the Nike Fuelband. Unfortunately all existing devices seem to integrate with cloud services somehow, rather than giving the user direct access to their data. Apart from the usual privacy concerns, this means that it is hard to do your own data crunching or create a dashboard that contains data from multiple sources.

Has anybody found any devices that don't integrate with the cloud and just provide raw data access?


Porcelain in Dulwich

"porcelain" is the term that is usually used in the Git world to refer to the user-facing parts. This is opposed to the lower layers: the plumbing.

For a long time, I have resisted the idea of including a porcelain layer in Dulwich. The main reason for this is that I don't consider Dulwich a full reimplementation of Git in Python. Rather, it's a library that Python tools can use to interact with local or remote Git repositories, without any extra dependencies.

dulwich has always shipped a 'dulwich' binary, but that's never been more than a basic test tool - never a proper tool for end users. It was a mistake to install it by default.

I don't think there's a point in providing a dulwich command-line tool that has the same behaviour as the C Git binary. It would just be slower and less mature. I haven't come across any situation where it didn't make sense to just directly use the plumbing.
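The plumbing/porcelain split itself is easy to illustrate with a toy example (all names below are invented for the sketch; this is not Dulwich's API): the plumbing exposes raw object storage and low-level constructors, and a porcelain commit() is just a convenience wrapper that strings several plumbing calls together.

```python
import hashlib

# --- plumbing: raw, Git-style content-addressed object storage ---
OBJECTS = {}

def write_object(kind, payload: bytes) -> str:
    data = b"%s %d\x00%b" % (kind.encode(), len(payload), payload)
    oid = hashlib.sha1(data).hexdigest()
    OBJECTS[oid] = data
    return oid

def write_commit(tree_id, message, parent=None):
    lines = [b"tree " + tree_id.encode()]
    if parent:
        lines.append(b"parent " + parent.encode())
    lines.append(b"")
    lines.append(message.encode())
    return write_object("commit", b"\n".join(lines))

# --- porcelain: one friendly call composing the plumbing ---
def commit(files, message, parent=None):
    blob_ids = sorted(write_object("blob", data) for data in files.values())
    tree_id = write_object("tree", "\n".join(blob_ids).encode())
    return write_commit(tree_id, message, parent)

c1 = commit({"README": b"hello\n"}, "Initial commit")
c2 = commit({"README": b"hello world\n"}, "Update README", parent=c1)
assert b"parent " + c1.encode() in OBJECTS[c2]
```

The porcelain adds no new capability over the plumbing; it only packages a common sequence of plumbing calls behind a name that matches how users think about the operation.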

However, Python programmers using Dulwich seem to think of Git operations in terms of porcelain rather than plumbing. Several convenience wrappers for Dulwich have sprung up, but none of them is very complete. So rather than relying on external modules, I've added a "porcelain" module to Dulwich in the porcelain branch, which provides a porcelain-like Python API for Git.

At the moment, it just implements a handful of commands but that should improve over the next few releases:

from dulwich import porcelain

# Create a repository, add a commit and print the history:
r = porcelain.init("/path/to/repo")
porcelain.commit(r, "Create a commit")
porcelain.log(r)


Book Review: Bazaar Version Control

Packt recently published a book on Version Control using Bazaar written by Janos Gyerik. I was curious what the book was like, and they kindly provided me with a digital copy.

The book is split into roughly five sections: an introduction to version control using Bazaar's main commands, an overview of the available workflows, some chapters on the available extensions and integration, some more advanced topics and finally, a quick introduction to programming using bzrlib.

It is assumed the reader has no pre-existing knowledge about version control systems. The first chapters introduce the reader to the concept of revision history, branching and merging and finally collaboration. All concepts are first discussed in theory, and then demonstrated using the Bazaar command-line UI and the bzr-explorer tool. The book follows roughly the same track as the official documentation, but it is more extensive and has more fancy drawings of revision graphs.

The middle section of the book discusses the modes in which Bazaar can be used - centralized or decentralized - as well as the various ways in which code can be landed in the main branch ("workflows"). The selection of workflows in the book is roughly the same as those in the official Bazaar documentation. The author briefly touches on a number of other software engineering topics such as code reviews, code formatting and automated testing, though not sufficiently to make it useful for people who are unfamiliar with these techniques. Both the official documentation and the book complicate things unnecessarily by listing every possible option.

The next chapter is a basic howto on the use of Bazaar with various hosting solutions, such as Launchpad, Redmine and Trac.

The Advanced Features chapter covers a wide range of obscure and less obscure features in Bazaar: uncommit, shelves, re-using working trees, lightweight checkouts, stacked branches, signing revisions and using e-mail hooks.

The chapter on foreign version control system integration is a more extensive version of the public docs. It has some factual inaccuracies; in particular, it recommends the installation of a 2 year old buggy version of bzr-git.

The last chapter provides quite a good introduction to the Bazaar APIs and plugin writing. It is a fair bit better than what is available publicly.

Overall, it's not a bad book but also not a huge step forward from the official documentation. I might recommend it to people who are interested in learning Bazaar and who do not have any experience with version control yet. Those who are already familiar with Bazaar or another version control system will not find much new.

The book misses an opportunity by following the official documentation so closely. It has the same omissions and the same overemphasis on describing every possible feature. I had hoped to read more about Bazaar's data model, its file format and some of the common problems, such as parallel imports, format hell and slowness.


Migrating packaging from Bazaar to Git

A while ago I migrated most of my packages from Bazaar to Git. The rest of the world has decided to use Git for version control, and I don't have enough reason to stubbornly stick with Bazaar and make it harder for myself to collaborate with others.

So I'm moving away from a workflow I know and have polished over the last few years - including the various bzr plugins and other tools involved. Trying to do the same thing using git is frustrating and time-consuming, but I'm sure that will improve with time. In particular, I haven't found a good way to merge in a new upstream release (from a tarball) while referencing the relevant upstream commits, like bzr merge-upstream can. Is there a good way to do this? What helper tools can you recommend for maintaining a Debian package in git?

Having been upstream for bzr-git earlier, I used its git-remote-bzr implementation to do the conversions of the commits and tags:

% git clone bzr::/path/to/bzr/foo.bzr /path/to/git/foo.git

One of my last contributions to bzr-git was a bzr git-push-pristine-tar-deltas subcommand, which will export all bzr-builddeb-style pristine-tar metadata to a pristine-tar branch in a Git repository that can be used by pristine-tar directly or through something like git-buildpackage.

Once you have created a git clone of your bzr branch, it should be a matter of running bzr git-push-pristine-tar-deltas with the target git repository and the Debian package name:

% cd /path/to/bzr/foo.bzr
% bzr git-push-pristine-tar-deltas /path/to/git/foo.git foo
% cd /path/to/git/foo.git
% git branch
* master
  pristine-tar


Kiln using Dulwich

http://blog.fogcreek.com/kiln-harmony-internals-the-basics/

Nice to see that #Kiln is also using #Dulwich for some of its Git support. Unfortunately I haven't been able to spend as much time on it recently as it deserves (busy changing jobs and countries), but hopefully that will change soon.
