API Documentation

class borg.archiver.Archiver(lock_wait=None)[source]
static build_filter(matcher, strip_components=0)[source]
build_parser(args=None, prog=None)[source]
do_break_lock(args, repository)[source]

Break the repository lock (e.g. in case it was left by a dead borg process).

do_change_passphrase(args, repository, manifest, key)[source]

Change repository key file passphrase

do_check(args, repository)[source]

Check repository consistency

do_create(args, repository, manifest=None, key=None)[source]

Create new archive

do_debug_delete_obj(args, repository)[source]

delete the objects with the given IDs from the repo

do_debug_dump_archive_items(args, repository, manifest, key)[source]

dump (decrypted, decompressed) archive items metadata (not: data)

do_debug_dump_repo_objs(args, repository, manifest, key)[source]

dump (decrypted, decompressed) repo objects

do_debug_get_obj(args, repository)[source]

get object contents from the repository and write them into a file

do_debug_info(args)[source]

display system information for debugging / bug reports

do_debug_put_obj(args, repository)[source]

put the contents of file(s) into the repository

do_debug_refcount_obj(args, repository, manifest, key, cache)[source]

display refcounts for the objects with the given IDs

do_delete(args, repository)[source]

Delete an existing repository or archive

do_extract(args, repository, manifest, key, archive)[source]

Extract archive contents

do_help(parser, commands, args)[source]
do_info(args, repository, manifest, key, archive, cache)[source]

Show archive details such as disk space used

do_init(args, repository)[source]

Initialize an empty repository

do_key_export(args, repository)[source]

Export the repository key for backup

do_key_import(args, repository)[source]

Import the repository key from backup

do_list(args, repository, manifest, key)[source]

List archive or repository contents

do_migrate_to_repokey(args, repository)[source]

Migrate passphrase -> repokey

do_mount(args, repository, manifest, key)[source]

Mount archive or an entire repository as a FUSE filesystem

do_prune(args, repository, manifest, key)[source]

Prune repository archives according to specified rules

do_rename(args, repository, manifest, key, cache, archive)[source]

Rename an existing archive

do_serve(args)[source]

Start in server mode. This command is usually not used manually.

do_upgrade(args)[source]

upgrade a repository from a previous version

get_args(argv, cmd)[source]

Usually just returns argv, except when we deal with an SSH forced command for borg serve.

helptext = OrderedDict([('patterns', "\nExclusion patterns support four separate styles, fnmatch, shell, regular\nexpressions and path prefixes. By default, fnmatch is used. If followed\nby a colon (':') the first two characters of a pattern are used as a\nstyle selector. Explicit style selection is necessary when a\nnon-default style is desired or when the desired pattern starts with\ntwo alphanumeric characters followed by a colon (i.e. `aa:something/*`).\n\n`Fnmatch <https://docs.python.org/3/library/fnmatch.html>`_, selector `fm:`\n\n This is the default style. These patterns use a variant of shell\n pattern syntax, with '*' matching any number of characters, '?'\n matching any single character, '[...]' matching any single\n character specified, including ranges, and '[!...]' matching any\n character not specified. For the purpose of these patterns, the\n path separator ('\\' for Windows and '/' on other systems) is not\n treated specially. Wrap meta-characters in brackets for a literal\n match (i.e. `[?]` to match the literal character `?`). For a path\n to match a pattern, it must completely match from start to end, or\n must match from the start to just before a path separator. Except\n for the root path, paths will never end in the path separator when\n matching is attempted. Thus, if a given pattern ends in a path\n separator, a '*' is appended before matching is attempted.\n\nShell-style patterns, selector `sh:`\n\n Like fnmatch patterns these are similar to shell patterns. The difference\n is that the pattern may include `**/` for matching zero or more directory\n levels, `*` for matching zero or more arbitrary characters with the\n exception of any path separator.\n\nRegular expressions, selector `re:`\n\n Regular expressions similar to those found in Perl are supported. Unlike\n shell patterns regular expressions are not required to match the complete\n path and any substring match is sufficient. 
It is strongly recommended to\n anchor patterns to the start ('^'), to the end ('$') or both. Path\n separators ('\\' for Windows and '/' on other systems) in paths are\n always normalized to a forward slash ('/') before applying a pattern. The\n regular expression syntax is described in the `Python documentation for\n the re module <https://docs.python.org/3/library/re.html>`_.\n\nPrefix path, selector `pp:`\n\n This pattern style is useful to match whole sub-directories. The pattern\n `pp:/data/bar` matches `/data/bar` and everything therein.\n\nExclusions can be passed via the command line option `--exclude`. When used\nfrom within a shell the patterns should be quoted to protect them from\nexpansion.\n\nThe `--exclude-from` option permits loading exclusion patterns from a text\nfile with one pattern per line. Lines empty or starting with the number sign\n('#') after removing whitespace on both ends are ignored. The optional style\nselector prefix is also supported for patterns loaded from a file. 
Due to\nwhitespace removal paths with whitespace at the beginning or end can only be\nexcluded using regular expressions.\n\nExamples::\n\n # Exclude '/home/user/file.o' but not '/home/user/file.odt':\n $ borg create -e '*.o' backup /\n\n # Exclude '/home/user/junk' and '/home/user/subdir/junk' but\n # not '/home/user/importantjunk' or '/etc/junk':\n $ borg create -e '/home/*/junk' backup /\n\n # Exclude the contents of '/home/user/cache' but not the directory itself:\n $ borg create -e /home/user/cache/ backup /\n\n # The file '/home/user/cache/important' is *not* backed up:\n $ borg create -e /home/user/cache/ backup / /home/user/cache/important\n\n # The contents of directories in '/home' are not backed up when their name\n # ends in '.tmp'\n $ borg create --exclude 're:^/home/[^/]+\\.tmp/' backup /\n\n # Load exclusions from file\n $ cat >exclude.txt <<EOF\n # Comment line\n /home/*/junk\n *.tmp\n fm:aa:something/*\n re:^/home/[^/]\\.tmp/\n sh:/home/*/.thumbnails\n EOF\n $ borg create --exclude-from exclude.txt backup /\n\n"), ('placeholders', "\n Repository (or Archive) URLs, --prefix and --remote-path values support these\n placeholders:\n\n {hostname}\n\n The (short) hostname of the machine.\n\n {fqdn}\n\n The full name of the machine.\n\n {now}\n\n The current local date and time.\n\n {utcnow}\n\n The current UTC date and time.\n\n {user}\n\n The user name (or UID, if no name is available) of the user running borg.\n\n {pid}\n\n The current process ID.\n\n {borgversion}\n\n The version of borg, e.g.: 1.0.8rc1\n\n {borgmajor}\n\n The version of borg, only the major version, e.g.: 1\n\n {borgminor}\n\n The version of borg, only major and minor version, e.g.: 1.0\n\n {borgpatch}\n\n The version of borg, only major, minor and patch version, e.g.: 1.0.8\n\nExamples::\n\n borg create /path/to/repo::{hostname}-{user}-{utcnow} ...\n borg create /path/to/repo::{hostname}-{now:%Y-%m-%d_%H:%M:%S} ...\n borg prune --prefix '{hostname}-' ...\n\n")])
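The selector-prefix scheme described in the patterns help text can be sketched in plain Python. This is an illustrative model, not borg's actual `parse_pattern` implementation; `split_selector` and `matches` are hypothetical helpers, and `fm`/`sh` are both reduced to `fnmatch` here for brevity:

```python
import fnmatch
import re

# Hypothetical dispatch set mirroring the documented selectors.
STYLES = {'fm', 'sh', 're', 'pp'}

def split_selector(pattern, default='fm'):
    """Split an optional two-character style selector off a pattern.

    A selector only applies when the first two characters are a known
    style followed by a colon, e.g. 're:^/home/'.  Anything else (such
    as 'aa:something/*') is treated as part of the pattern itself.
    """
    if len(pattern) > 3 and pattern[2] == ':' and pattern[:2] in STYLES:
        return pattern[:2], pattern[3:]
    return default, pattern

def matches(pattern, path):
    """Match path against pattern using the selected style (sketch)."""
    style, pat = split_selector(pattern)
    if style == 're':
        # substring match is sufficient, as the docs note
        return re.search(pat, path) is not None
    if style == 'pp':
        # prefix style: the path itself or anything below it
        return path == pat or path.startswith(pat + '/')
    # 'fm' and 'sh' both reduce to fnmatch in this simplified sketch.
    return fnmatch.fnmatch(path, pat)
```

Note how the two-character rule keeps `aa:something/*` intact: since `aa` is not a known style, no selector is split off and the default style applies.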
parse_args(args=None)[source]
preprocess_args(args)[source]
print_error(msg, *args)[source]
print_file_status(status, path)[source]
print_warning(msg, *args)[source]
run(args)[source]
borg.archiver.argument(args, str_or_bool)[source]

If bool is passed, return it. If str is passed, retrieve named attribute from args.

borg.archiver.main()[source]
borg.archiver.sig_info_handler(sig_no, stack)[source]

search the stack for info about the currently processed file and print it

borg.archiver.with_archive(method)[source]
borg.archiver.with_repository(fake=False, create=False, lock=True, exclusive=False, manifest=True, cache=False)[source]

Method decorator for subcommand-handling methods: do_XYZ(self, args, repository, …)

If a parameter (where allowed) is a str, the attribute of args with that name is used instead.

:param fake: (str or bool) use None instead of repository, don't do anything else
:param create: create repository
:param lock: lock repository
:param exclusive: (str or bool) lock repository exclusively (for writing)
:param manifest: load manifest and key, pass them as keyword arguments
:param cache: open cache, pass it as keyword argument (implies manifest)
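
The decorator shape described above can be sketched as follows. This is a simplified, hypothetical model: `FakeRepository` is a stand-in class, and the manifest/key/cache values are placeholders rather than borg's real objects:

```python
import functools

class FakeRepository:
    """Illustrative stand-in for borg's Repository."""
    def __init__(self, path):
        self.path = path

def with_repository(manifest=True, cache=False):
    """Sketch of a decorator that opens a repository from parsed args
    and passes it (plus optional extras) into a do_XYZ method."""
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, args, **kwargs):
            repository = FakeRepository(args['repo_path'])
            if manifest:
                kwargs['manifest'] = 'manifest'   # placeholder value
                kwargs['key'] = 'key'             # placeholder value
            if cache:
                kwargs['cache'] = 'cache'         # placeholder value
            return method(self, args, repository, **kwargs)
        return wrapper
    return decorator

class Archiver:
    @with_repository(manifest=True)
    def do_info(self, args, repository, manifest=None, key=None):
        return (repository.path, manifest, key)
```

The point of the pattern is that each `do_XYZ` handler declares which resources it needs via decorator arguments, and receives them as keyword arguments.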

class borg.archive.Archive(repository, key, manifest, name, cache=None, create=False, checkpoint_interval=300, numeric_owner=False, progress=False, chunker_params=(19, 23, 21, 4095), start=None, end=None)[source]
exception AlreadyExists[source]

Archive {} already exists

exception Archive.DoesNotExist[source]

Archive {} does not exist

exception Archive.IncompatibleFilesystemEncodingError[source]

Failed to encode filename “{}” into file system encoding “{}”. Consider configuring the LANG environment variable.

Archive.add_item(item)[source]
Archive.calc_stats(cache)[source]
Archive.delete(stats, progress=False, forced=False)[source]
Archive.duration
Archive.extract_item(item, restore_attrs=True, dry_run=False, stdout=False, sparse=False)[source]
Archive.fpr
Archive.iter_items(filter=None, preload=False)[source]
static Archive.list_archives(repository, key, manifest, cache=None)[source]
Archive.load(id)[source]
Archive.process_dev(path, st)[source]
Archive.process_dir(path, st)[source]
Archive.process_fifo(path, st)[source]
Archive.process_file(path, st, cache, ignore_inode=False)[source]
Archive.process_stdin(path, cache)[source]
Archive.rename(name)[source]
Archive.restore_attrs(path, item, symlink=False, fd=None)[source]

Restore filesystem attributes on path (fd) from item.

Does not access the repository.

Archive.save(name=None, timestamp=None)[source]
Archive.stat_attrs(st, path)[source]
Archive.ts

Timestamp of archive creation (start) in UTC

Archive.ts_end

Timestamp of archive creation (end) in UTC

Archive.write_checkpoint()[source]
class borg.archive.ArchiveChecker[source]
check(repository, repair=False, archive=None, last=None, prefix=None, save_space=False)[source]
finish(save_space=False)[source]
identify_key(repository)[source]
init_chunks()[source]

Fetch a list of all object keys from repository

orphan_chunks_check()[source]
rebuild_manifest()[source]

Rebuild the manifest object if it is missing

Iterates through all objects in the repository looking for archive metadata blocks.

rebuild_refcounts(archive=None, last=None, prefix=None)[source]

Rebuild object reference counts by walking the metadata

Missing and/or incorrect data is repaired when detected

exception borg.archive.BackupOSError(os_error)[source]

Wrapper for OSError raised while accessing backup files.

Borg does different kinds of IO, and IO failures have different consequences. This wrapper represents failures of input file or extraction IO. These are non-critical and are only reported (exit code = 1, warning).

Any unwrapped IO error is critical and aborts execution (for example repository IO failure).

class borg.archive.CacheChunkBuffer(cache, key, stats, chunker_params=(15, 19, 17, 4095))[source]
write_chunk(chunk)[source]
class borg.archive.ChunkBuffer(key, chunker_params=(15, 19, 17, 4095))[source]
BUFFER_SIZE = 1048576
add(item)[source]
flush(flush=False)[source]
is_full()[source]
write_chunk(chunk)[source]
class borg.archive.DownloadPipeline(repository, key)[source]
fetch_many(ids, is_preloaded=False)[source]
unpack_many(ids, filter=None, preload=False)[source]

Return iterator of items.

ids is a chunk ID list of an item stream. filter is a callable to decide whether an item will be yielded. preload preloads the data chunks of every yielded item.

Warning: if preload is True then all data chunks of every yielded item have to be retrieved, otherwise preloaded chunks will accumulate in RemoteRepository and create a memory leak.
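
The filter/preload contract described above can be modeled in a few lines. This sketch is not borg's unpacker pipeline; it uses plain dicts as items and a list as a stand-in for the preload queue:

```python
def unpack_many_sketch(chunks, filter=None, preload=False, preloaded=None):
    """Illustrative model of unpack_many's contract: every chunk is
    unpacked, the filter decides what is yielded, and preload queues
    the data chunk IDs of every yielded item."""
    if preloaded is None:
        preloaded = []
    for item in chunks:              # stand-in for msgpack unpacking
        if filter is not None and not filter(item):
            continue
        if preload:
            # In the real pipeline these IDs must all be fetched later,
            # otherwise they accumulate (the memory leak warned about).
            preloaded.extend(item.get('chunks', []))
        yield item
```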

class borg.archive.RobustUnpacker(validator, item_keys)[source]

A restartable/robust version of the streaming msgpack unpacker

exception UnpackerCrashed[source]

raised if the unpacker crashed

RobustUnpacker.feed(data)[source]
RobustUnpacker.resync()[source]
borg.archive.backup_io()[source]

Context manager changing OSError to BackupOSError.
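
A minimal sketch of this context manager, assuming `BackupOSError` simply wraps the original error as described above:

```python
from contextlib import contextmanager

class BackupOSError(Exception):
    """Sketch of the wrapper: keeps the original OSError around."""
    def __init__(self, os_error):
        super().__init__(str(os_error))
        self.os_error = os_error

@contextmanager
def backup_io():
    """Translate OSError into the non-critical BackupOSError, so
    callers can report it (exit code 1) instead of aborting."""
    try:
        yield
    except OSError as os_error:
        raise BackupOSError(os_error) from os_error
```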

borg.archive.backup_io_iter(iterator)[source]
borg.archive.is_special(mode)[source]
borg.archive.valid_msgpacked_dict(d, keys_serialized)[source]

check if the data <d> looks like a msgpacked dict

class borg.repository.LoggedIO(path, limit, segments_per_dir, capacity=90)[source]
COMMIT = b'@\xf4<%\t\x00\x00\x00\x02'
exception SegmentFull[source]

raised when a segment is full, before opening next

LoggedIO.cleanup(transaction_id)[source]

Delete segment files left by aborted transactions

LoggedIO.close()[source]
LoggedIO.close_segment()[source]
LoggedIO.crc_fmt = <Struct object>
LoggedIO.delete_segment(segment)[source]
LoggedIO.get_fd(segment)[source]
LoggedIO.get_latest_segment()[source]
LoggedIO.get_segments_transaction_id()[source]

Return last committed segment

LoggedIO.get_write_fd(no_new=False, raise_full=False)[source]
LoggedIO.header_fmt = <Struct object>
LoggedIO.header_no_crc_fmt = <Struct object>
LoggedIO.is_committed_segment(segment)[source]

Check if segment ends with a COMMIT_TAG tag

LoggedIO.iter_objects(segment, include_data=False)[source]
LoggedIO.put_header_fmt = <Struct object>
LoggedIO.read(segment, offset, id)[source]
LoggedIO.recover_segment(segment, filename)[source]
LoggedIO.segment_exists(segment)[source]
LoggedIO.segment_filename(segment)[source]
LoggedIO.segment_iterator(reverse=False)[source]
LoggedIO.write_commit()[source]
LoggedIO.write_delete(id, raise_full=False)[source]
LoggedIO.write_put(id, data, raise_full=False)[source]
class borg.repository.Repository(path, create=False, exclusive=False, lock_wait=None, lock=True, append_only=False)[source]

Filesystem based transactional key value store

On-disk layout:

dir/README
dir/config
dir/data/<X / SEGMENTS_PER_DIR>/<X>
dir/index.X
dir/hints.X
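
The `dir/data/<X / SEGMENTS_PER_DIR>/<X>` part of the layout can be sketched as below (an illustrative helper, not borg's code, reusing the documented `DEFAULT_SEGMENTS_PER_DIR` value):

```python
import os

DEFAULT_SEGMENTS_PER_DIR = 10000  # matches Repository.DEFAULT_SEGMENTS_PER_DIR

def segment_path(repo_dir, segment, segments_per_dir=DEFAULT_SEGMENTS_PER_DIR):
    """Group segment files into subdirectories of bounded size, so no
    single directory accumulates an unbounded number of entries."""
    return os.path.join(repo_dir, 'data',
                        str(segment // segments_per_dir), str(segment))
```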

exception AlreadyExists[source]

Repository {} already exists.

exception Repository.CheckNeeded[source]

Inconsistency detected. Please run “borg check {}”.

Repository.DEFAULT_MAX_SEGMENT_SIZE = 5242880
Repository.DEFAULT_SEGMENTS_PER_DIR = 10000
exception Repository.DoesNotExist[source]

Repository {} does not exist.

exception Repository.InvalidRepository[source]

{} is not a valid repository. Check repo config.

exception Repository.ObjectNotFound[source]

Object with key {} not found in repository {}.

Repository.break_lock()[source]
Repository.check(repair=False, save_space=False)[source]

Check repository consistency

This method verifies all segment checksums and makes sure the index is consistent with the data stored in the segments.

Repository.close()[source]
Repository.commit(save_space=False)[source]

Commit transaction

Repository.compact_segments(save_space=False)[source]

Compact sparse segments by copying data into new segments

Repository.create(path)[source]

Create a new empty repository at path

Repository.delete(id, wait=True)[source]
Repository.destroy()[source]

Destroy the repository at self.path

Repository.get(id_)[source]
Repository.get_index_transaction_id()[source]
Repository.get_many(ids, is_preloaded=False)[source]
Repository.get_transaction_id()[source]
Repository.list(limit=None, marker=None)[source]
Repository.load_key()[source]
Repository.open(path, exclusive, lock_wait=None, lock=True)[source]
Repository.open_index(transaction_id)[source]
Repository.preload(ids)[source]

Preload objects (only applies to remote repositories)

Repository.prepare_txn(transaction_id, do_cleanup=True)[source]
Repository.put(id, data, wait=True)[source]
Repository.replay_segments(index_transaction_id, segments_transaction_id)[source]
Repository.rollback()[source]
Repository.save_config(path, config)[source]
Repository.save_key(keydata)[source]
Repository.write_index()[source]
exception borg.remote.ConnectionClosed[source]

Connection closed by remote host

exception borg.remote.ConnectionClosedWithHint[source]

Connection closed by remote host. {}

exception borg.remote.InvalidRPCMethod[source]

RPC method {} is not valid

exception borg.remote.PathNotAllowed[source]

Repository path not allowed

class borg.remote.RemoteRepository(location, create=False, exclusive=False, lock_wait=None, lock=True, append_only=False, args=None)[source]
exception NoAppendOnlyOnServer[source]

Server does not support --append-only.

exception RemoteRepository.RPCError(name, remote_type)[source]
RemoteRepository.borg_cmd(args, testing)[source]

return a borg serve command line

RemoteRepository.break_lock()[source]
RemoteRepository.call(cmd, *args, **kw)[source]
RemoteRepository.call_many(cmd, calls, wait=True, is_preloaded=False)[source]
RemoteRepository.check(repair=False, save_space=False)[source]
RemoteRepository.close()[source]
RemoteRepository.commit(save_space=False)[source]
RemoteRepository.delete(id_, wait=True)[source]
RemoteRepository.destroy()[source]
RemoteRepository.extra_test_args = []
RemoteRepository.get(id_)[source]
RemoteRepository.get_many(ids, is_preloaded=False)[source]
RemoteRepository.list(limit=None, marker=None)[source]
RemoteRepository.load_key()[source]
RemoteRepository.preload(ids)[source]
RemoteRepository.put(id_, data, wait=True)[source]
RemoteRepository.rollback(*args)[source]
RemoteRepository.save_key(keydata)[source]
RemoteRepository.ssh_cmd(location)[source]

return an ssh command line that can be prefixed to a borg command line

class borg.remote.RepositoryCache(repository)[source]

A caching Repository wrapper

Caches Repository GET operations using a local temporary Repository.

THRESHOLD = 65536
close()[source]
get_many(keys)[source]
class borg.remote.RepositoryNoCache(repository)[source]

A not caching Repository wrapper, passes through to repository.

Just to have same API (including the context manager) as RepositoryCache.

close()[source]
get(key)[source]
get_many(keys)[source]
class borg.remote.RepositoryServer(restrict_to_paths, append_only)[source]
negotiate(versions)[source]
open(path, create=False, lock_wait=None, lock=True, exclusive=None, append_only=False)[source]
rpc_methods = ('__len__', 'check', 'commit', 'delete', 'destroy', 'get', 'list', 'negotiate', 'open', 'put', 'rollback', 'save_key', 'load_key', 'break_lock')
serve()[source]
exception borg.remote.UnexpectedRPCDataFormatFromClient[source]

Borg {}: Got unexpected RPC data format from client.

exception borg.remote.UnexpectedRPCDataFormatFromServer[source]

Got unexpected RPC data format from server.

borg.remote.cache_if_remote(repository)[source]
class borg.cache.Cache(repository, key, manifest, path=None, sync=True, do_files=False, warn_if_unencrypted=True, lock_wait=None)[source]

Client Side cache

exception CacheInitAbortedError[source]

Cache initialization aborted

exception Cache.EncryptionMethodMismatch[source]

Repository encryption method changed since last access, refusing to continue

exception Cache.RepositoryAccessAborted[source]

Repository access aborted

exception Cache.RepositoryIDNotUnique[source]

Cache is newer than repository - do you have multiple, independently updated repos with same ID?

exception Cache.RepositoryReplay[source]

Cache is newer than repository - this is either an attack or unsafe (multiple repos with same ID)

Cache.add_chunk(id, data, stats)[source]
Cache.begin_txn()[source]
static Cache.break_lock(repository, path=None)[source]
Cache.chunk_decref(id, stats)[source]
Cache.chunk_incref(id, stats)[source]
Cache.close()[source]
Cache.commit()[source]

Commit transaction

Cache.create()[source]

Create a new empty cache at self.path

static Cache.destroy(repository, path=None)[source]

destroy the cache for repository or at path

Cache.file_known_and_unchanged(path_hash, st, ignore_inode=False)[source]
Cache.format_tuple()[source]
Cache.memorize_file(path_hash, st, ids)[source]
Cache.open(lock_wait=None)[source]
Cache.rollback()[source]

Roll back partial and aborted transactions

Cache.seen_chunk(id, size=None)[source]
Cache.sync()[source]

Re-synchronize chunks cache with repository.

Maintains a directory with known backup archive indexes, so it only needs to fetch infos from repo and build a chunk index once per backup archive. If out of sync, missing archive indexes get added, outdated indexes get removed and a new master chunks index is built by merging all archive indexes.

class borg.key.AESKeyBase(repository)[source]

Common base class shared by KeyfileKey and PassphraseKey

Chunks are encrypted using 256bit AES in Counter Mode (CTR)

Payload layout: TYPE(1) + HMAC(32) + NONCE(8) + CIPHERTEXT

To reduce the payload size, only 8 bytes of the 16-byte nonce are saved in the payload; the first 8 bytes are always zeros. This does not affect security but limits the maximum repository capacity to 295 exabytes.
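
The documented payload layout can be parsed by simple slicing. This is an illustrative sketch of the byte offsets only (the helper names are hypothetical); note that 1 + 32 + 8 gives the stated `PAYLOAD_OVERHEAD` of 41:

```python
def parse_payload(payload):
    """Split an encrypted payload per the documented layout:
    TYPE(1) + HMAC(32) + NONCE(8) + CIPHERTEXT."""
    if len(payload) < 41:  # PAYLOAD_OVERHEAD
        raise ValueError('payload too short')
    return {
        'type': payload[0:1],
        'hmac': payload[1:33],
        'nonce': payload[33:41],   # low 8 bytes; high 8 bytes are zeros
        'ciphertext': payload[41:],
    }

def full_nonce(stored_nonce):
    """Reconstruct the 16-byte CTR IV from the stored 8 bytes."""
    return b'\x00' * 8 + stored_nonce
```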

PAYLOAD_OVERHEAD = 41
decrypt(id, data)[source]
encrypt(data)[source]
extract_nonce(payload)[source]
id_hash(data)[source]

Return HMAC hash using the “id” HMAC key

init_ciphers(enc_iv=b'')[source]
init_from_random_data(data)[source]
class borg.key.KeyBase(repository)[source]
TYPE = None
decrypt(id, data)[source]
encrypt(data)[source]
id_hash(data)[source]

Return HMAC hash using the “id” HMAC key

class borg.key.KeyfileKey(repository)[source]
FILE_ID = 'BORG_KEY'
TYPE = 0
find_key()[source]
get_new_target(args)[source]
load(target, passphrase)[source]
save(target, passphrase)[source]
class borg.key.KeyfileKeyBase(repository)[source]
change_passphrase()[source]
classmethod create(repository, args)[source]
decrypt_key_file(data, passphrase)[source]
classmethod detect(repository, manifest_data)[source]
encrypt_key_file(data, passphrase)[source]
find_key()[source]
get_new_target(args)[source]
load(target, passphrase)[source]
save(target, passphrase)[source]
exception borg.key.KeyfileNotFoundError[source]

No key file for repository {} found in {}.

class borg.key.Passphrase[source]
classmethod env_passphrase(default=None)[source]
classmethod getpass(prompt)[source]
kdf(salt, iterations, length)[source]
classmethod new(allow_empty=False)[source]
classmethod verification(passphrase)[source]
class borg.key.PassphraseKey(repository)[source]
TYPE = 1
change_passphrase()[source]
classmethod create(repository, args)[source]
classmethod detect(repository, manifest_data)[source]
init(repository, passphrase)[source]
iterations = 100000
exception borg.key.PassphraseWrong[source]

passphrase supplied in BORG_PASSPHRASE is incorrect

exception borg.key.PasswordRetriesExceeded[source]

exceeded the maximum password retries

class borg.key.PlaintextKey(repository)[source]
TYPE = 2
chunk_seed = 0
classmethod create(repository, args)[source]
decrypt(id, data)[source]
classmethod detect(repository, manifest_data)[source]
encrypt(data)[source]
id_hash(data)[source]
class borg.key.RepoKey(repository)[source]
TYPE = 3
find_key()[source]
get_new_target(args)[source]
load(target, passphrase)[source]
save(target, passphrase)[source]
exception borg.key.RepoKeyNotFoundError[source]

No key entry found in the config of repository {}.

exception borg.key.UnsupportedPayloadError[source]

Unsupported payload type {}. A newer version is required to access this repository.

borg.key.key_creator(repository, args)[source]
borg.key.key_factory(repository, manifest_data)[source]
class borg.keymanager.KeyManager(repository)[source]
export(path)[source]
export_paperkey(path)[source]
import_keyfile(args)[source]
import_paperkey(args)[source]
load_keyblob()[source]
store_keyblob(args)[source]
store_keyfile(target)[source]
exception borg.keymanager.NotABorgKeyFile[source]

This file is not a borg key backup, aborting.

exception borg.keymanager.RepoIdMismatch[source]

This key backup seems to be for a different backup repository, aborting.

exception borg.keymanager.UnencryptedRepo[source]

Keymanagement not available for unencrypted repositories.

exception borg.keymanager.UnknownKeyType[source]

Keytype {0} is unknown.

borg.keymanager.sha256_truncated(data, num)[source]

logging facilities

The way to use this is as follows:

  • each module declares its own logger, using:

    from .logger import create_logger
    logger = create_logger()

  • then each module uses logger.info/warning/debug/etc according to the level it believes is appropriate:

    logger.debug('debugging info for developers or power users')
    logger.info('normal, informational output')
    logger.warning('warn about a non-fatal error or sth else')
    logger.error('a fatal error')

    ... and so on. See the logging documentation for more information.

  • console interaction happens on stderr, that includes interactive reporting functions like help, info and list

  • ...except input() is special, because we can't control the stream it uses, unfortunately. We assume that it won't clutter stdout, because interaction would be broken then anyway

  • what is output on INFO level is additionally controlled by commandline flags
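
The per-module usage described above, sketched with the standard logging module. Here `create_logger` is a minimal stand-in for borg's lazy version (the real one resolves the calling module via `find_parent_module()`), and `setup_logging` only models the default stderr stream-handler path:

```python
import logging
import sys

def create_logger(name=None):
    """Minimal stand-in: return a named logger (borg's real version
    is lazy and infers the caller's module name)."""
    return logging.getLogger(name if name is not None else __name__)

def setup_logging(stream=None, level='info'):
    """Stream handler on stderr by default, as the docs describe."""
    handler = logging.StreamHandler(stream or sys.stderr)
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(level.upper())
    return handler

# each module declares its own logger, then logs at the level it deems right
logger = create_logger('demo.module')
```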

borg.logger.create_logger(name=None)[source]

lazily create a Logger object with the proper path, which is returned by find_parent_module() by default, or is provided via the commandline

this is really a shortcut for:

logger = logging.getLogger(__name__)

we use it to avoid errors and provide a more standard API.

We must create the logger lazily, because this is usually called from module level (and thus executed at import time - BEFORE setup_logging() was called). By doing it lazily we can do the setup first, we just have to be careful not to call any logger methods before the setup_logging() call. If you try, you’ll get an exception.

borg.logger.find_parent_module()[source]

find the name of the first module calling this module

if we cannot find it, we return the current module’s name (__name__) instead.

borg.logger.setup_logging(stream=None, conf_fname=None, env_var='BORG_LOGGING_CONF', level='info', is_serve=False)[source]

setup logging module according to the arguments provided

if conf_fname is given (or the config file name can be determined via the env_var, if given): load this logging configuration.

otherwise, set up a stream handler logger on stderr (by default, if no stream is provided).

if is_serve == True, we configure a special log format as expected by the borg client log message interceptor.

class borg.helpers.Buffer(allocator, size=4096, limit=None)[source]

provide a thread-local buffer

get(size=None, init=False)[source]

return a buffer of at least the requested size (None: any current size). init=True can be given to trigger shrinking of the buffer to the given size.

resize(size, init=False)[source]

resize the buffer - to avoid frequent reallocation, we usually always grow (if needed). giving init=True it is possible to first-time initialize or shrink the buffer. if a buffer size beyond the limit is requested, raise ValueError.
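
The grow-only semantics of get/resize can be sketched as below. This is a simplified model of the documented behaviour, not borg's implementation; in particular the thread-local storage detail is reduced to its essentials:

```python
import threading

class Buffer:
    """Sketch of a thread-local, grow-only buffer: it shrinks only
    when init=True is passed, and enforces an optional size limit."""
    def __init__(self, allocator=bytearray, size=4096, limit=None):
        self.allocator = allocator
        self.limit = limit
        self._local = threading.local()
        self.resize(size, init=True)

    def resize(self, size, init=False):
        if self.limit is not None and size > self.limit:
            raise ValueError('requested buffer size beyond limit')
        current = getattr(self._local, 'buffer', b'')
        if init or len(current) < size:
            self._local.buffer = self.allocator(size)

    def get(self, size=None, init=False):
        if size is not None:
            self.resize(size, init)
        return self._local.buffer
```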

borg.helpers.ChunkerParams(s)[source]
borg.helpers.CompressionSpec(s)[source]
exception borg.helpers.Error[source]

Error base class

exit_code = 2
get_message()[source]
traceback = False
class borg.helpers.ErrorIgnoringTextIOWrapper[source]
read(n)[source]
write(s)[source]
exception borg.helpers.ErrorWithTraceback[source]

like Error, but show a traceback also

traceback = True
exception borg.helpers.ExtensionModuleError[source]

The Borg binary extension modules do not seem to be properly installed

class borg.helpers.FnmatchPattern(pattern)[source]

Shell glob patterns to exclude. A trailing slash means to exclude the contents of a directory, but not the directory itself.

PREFIX = 'fm'
exception borg.helpers.IntegrityError[source]

Data integrity error: {}

class borg.helpers.Location(text='')[source]

Object representing a repository / archive location

archive = None
canonical_path()[source]
env_re = re.compile(' # the repo part is fetched from BORG_REPO\n (?:::$) # just "::" is ok (when a pos. arg is required, no archive)\n , re.VERBOSE)
file_re = re.compile('\n (?P<proto>file):// # file://\n \n (?!:) # not starting with ":"\n (?P<path>([^:]|(:(?!:)), re.VERBOSE)
host = None
optional_archive_re = '\n (?:\n :: # "::" as separator\n (?P<archive>[^/]+) # archive name must not contain "/"\n )?$'
parse(text)[source]
path = None
path_re = '\n (?!:) # not starting with ":"\n (?P<path>([^:]|(:(?!:)))+) # any chars, but no "::"\n '
port = None
proto = None
scp_re = re.compile('\n (\n (?:(?P<user>[^@]+)@)? # user@ (optional)\n (?P<host>[^:/]+): # host: (don\'t match / in host to disambigua, re.VERBOSE)
ssh_re = re.compile('\n (?P<proto>ssh):// # ssh://\n (?:(?P<user>[^@]+)@)? # user@ (optional)\n (?P<host>[^:/]+)(?::(?P<port>\\d+))? , re.VERBOSE)
to_key_filename()[source]
user = None
class borg.helpers.Manifest(key, repository, item_keys=None)[source]
MANIFEST_ID = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
list_archive_infos(sort_by=None, reverse=False)[source]
classmethod load(repository, key=None)[source]
write()[source]
exception borg.helpers.NoManifestError[source]

Repository has no manifest.

class borg.helpers.PathPrefixPattern(pattern)[source]

Literal files or directories listed on the command line for some operations (e.g. extract, but not create). If a directory is specified, all paths that start with that path match as well. A trailing slash makes no difference.

PREFIX = 'pp'
class borg.helpers.PatternBase(pattern)[source]

Shared logic for inclusion/exclusion patterns.

PREFIX = NotImplemented
match(path)[source]
class borg.helpers.PatternMatcher(fallback=None)[source]
add(patterns, value)[source]

Add list of patterns to internal list. The given value is returned from the match function when one of the given patterns matches.

match(path)[source]
exception borg.helpers.PlaceholderError[source]

Formatting Error: “{}”.format({}): {}({})

borg.helpers.PrefixSpec(s)[source]
class borg.helpers.ProgressIndicatorEndless(step=10, file=None)[source]
finish()[source]
output(triggered)[source]
progress()[source]
show()[source]
class borg.helpers.ProgressIndicatorPercent(total, step=5, start=0, same_line=False, msg='%3.0f%%', file=None)[source]
finish()[source]
output(percent)[source]
progress(current=None)[source]
show(current=None)[source]
class borg.helpers.RegexPattern(pattern)[source]

Regular expression to exclude.

PREFIX = 're'
class borg.helpers.ShellPattern(pattern)[source]

Shell glob patterns to exclude. A trailing slash means to exclude the contents of a directory, but not the directory itself.

PREFIX = 'sh'
exception borg.helpers.SigHup[source]

raised on SIGHUP signal

exception borg.helpers.SigTerm[source]

raised on SIGTERM signal

exception borg.helpers.SignalException[source]

base class for all signal-based exceptions

class borg.helpers.StableDict[source]

A dict subclass with stable items() ordering

items()[source]
class borg.helpers.Statistics[source]
csize_fmt
osize_fmt
show_progress(item=None, final=False, stream=None, dt=None)[source]
summary = ' Original size Compressed size Deduplicated size\n{label:15} {stats.osize_fmt:>20s} {stats.csize_fmt:>20s} {stats.usize_fmt:>20s}'
update(size, csize, unique)[source]
usize_fmt
borg.helpers.archivename_validator()[source]
borg.helpers.bigint_to_int(mtime)[source]

Convert bytearray to int

borg.helpers.bin_to_hex(binary)[source]
borg.helpers.check_extension_modules()[source]
borg.helpers.daemonize()[source]

Detach process from controlling terminal and run in background

borg.helpers.decode_dict(d, keys, encoding='utf-8', errors='surrogateescape')[source]
borg.helpers.dir_is_cachedir(path)[source]

Determines whether the specified path is a cache directory (and therefore should potentially be excluded from the backup) according to the CACHEDIR.TAG protocol (http://www.brynosaurus.com/cachedir/spec.html).
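
Per the CACHEDIR.TAG protocol referenced above, a directory is tagged when it contains a `CACHEDIR.TAG` file beginning with a well-known signature line. A minimal sketch (not borg's implementation, but following that spec):

```python
import os

# Signature prescribed by the CACHEDIR.TAG specification.
CACHEDIR_TAG = b'Signature: 8a477f597d28d172789f06886806bc55'

def dir_is_cachedir(path):
    """A directory counts as a cache directory when it contains a
    CACHEDIR.TAG file whose contents start with the signature."""
    tag_path = os.path.join(path, 'CACHEDIR.TAG')
    try:
        with open(tag_path, 'rb') as fd:
            return fd.read(len(CACHEDIR_TAG)) == CACHEDIR_TAG
    except OSError:
        return False
```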

borg.helpers.dir_is_tagged(path, exclude_caches, exclude_if_present)[source]

Determines whether the specified path is excluded by being a cache directory or containing user-specified tag files. Returns a list of the paths of the tag files (either CACHEDIR.TAG or the matching user-specified files).

borg.helpers.format_archive(archive)[source]
borg.helpers.format_file_size(v, precision=2)[source]

Format file size into a human friendly format

borg.helpers.format_line(format, data)[source]
borg.helpers.format_time(t)[source]

use ISO-8601 date and time format

borg.helpers.format_timedelta(td)[source]

Format timedelta in a human friendly format

borg.helpers.get_cache_dir()[source]

Determine where to store the repository cache

borg.helpers.get_keys_dir()[source]

Determine where to store the repository keys

borg.helpers.gid2group(*args)[source]
borg.helpers.group2gid(*args)[source]
borg.helpers.int_to_bigint(value)[source]

Convert integers larger than 64 bits to bytearray

Smaller integers are left alone
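
A sketch of such a round-trip with int.to_bytes / int.from_bytes; the little-endian signed encoding and the 63-bit cutoff are assumptions of this sketch:

```python
def int_to_bigint(value):
    """Convert ints that do not fit into 64 signed bits to bytes; leave smaller ints alone."""
    if value.bit_length() > 63:
        return value.to_bytes((value.bit_length() + 9) // 8, 'little', signed=True)
    return value

def bigint_to_int(value):
    """Inverse: convert bytes back to int; plain ints pass through unchanged."""
    if isinstance(value, bytes):
        return int.from_bytes(value, 'little', signed=True)
    return value
```

This keeps common values as cheap fixed-size ints while still supporting arbitrarily large (or negative) nanosecond timestamps.
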

borg.helpers.is_slow_msgpack()[source]
borg.helpers.load_excludes(fh)[source]

Load and parse exclude patterns from a file object. Lines that are empty or start with '#' (after stripping whitespace at both ends) are ignored.
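
The line-filtering rule can be sketched as follows (iter_patterns is a hypothetical helper name):

```python
import io

def iter_patterns(fh):
    """Yield pattern lines from a file object, skipping blank lines and '#' comments."""
    for line in fh:
        line = line.strip()
        if line and not line.startswith('#'):
            yield line

patterns = list(iter_patterns(io.StringIO("# comment\n\n/home/*/junk\n  \n*.tmp\n")))
assert patterns == ['/home/*/junk', '*.tmp']
```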

borg.helpers.location_validator(archive=None)[source]
borg.helpers.log_multi(*msgs, level=20)[source]

Log multiple lines of text, each line via a separate logging call (for cosmetic reasons).

Each positional argument may be a single line or multiple lines (separated by newlines) of text.

borg.helpers.make_path_safe(path)[source]

Make path safe by making it relative and local
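
One way to do this is to normalize the path and then strip any leading '/' or '../' components; a sketch (the regex approach is an assumption of this sketch):

```python
import os
import re

# one or more leading "/" or "../" runs at the start of a normalized path
_unsafe_prefix = re.compile(r'^((\.\.)?/+)+')

def make_path_safe(path):
    """Normalize path and strip leading '/' and '../' so it stays relative and local."""
    return _unsafe_prefix.sub('', os.path.normpath(path)) or '.'
```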

borg.helpers.memoize(function)[source]
borg.helpers.normalized(func)[source]

Decorator for Pattern match methods: on OS X it returns a wrapper that normalizes the path before matching it against the (already normalized) pattern; on other platforms it returns the original method unchanged

borg.helpers.parse_pattern(pattern, fallback=<class 'borg.helpers.FnmatchPattern'>)[source]

Read pattern from string and return an instance of the appropriate implementation class.

borg.helpers.parse_timestamp(timestamp)[source]

Parse an ISO 8601 timestamp string

borg.helpers.posix_acl_use_stored_uid_gid(acl)[source]

Replace the user/group field with the stored uid/gid

borg.helpers.prune_split(archives, pattern, n, skip=[])[source]
borg.helpers.prune_within(archives, within)[source]
borg.helpers.raising_signal_handler(exc_cls)[source]
borg.helpers.remove_surrogates(s, errors='replace')[source]

Replace surrogates generated by fsdecode with ‘?’

borg.helpers.replace_placeholders(text)[source]

Replace placeholders in text with their values.

borg.helpers.safe_decode(s, coding='utf-8', errors='surrogateescape')[source]

decode bytes to str, with round-tripping “invalid” bytes

borg.helpers.safe_encode(s, coding='utf-8', errors='surrogateescape')[source]

encode str to bytes, with round-tripping “invalid” bytes

borg.helpers.safe_timestamp(item_timestamp_ns)[source]
borg.helpers.signal_handler(sig, handler)[source]

when entering context, set up signal handler <handler> for signal <sig>. when leaving context, restore original signal handler.

<sig> can be either a str naming a signal.SIGXXX attribute (this does not crash if the attribute does not exist, as some names are platform-specific) or an int giving a signal number.

<handler> is any handler value accepted by signal.signal(sig, handler).
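
The described behavior maps naturally onto contextlib.contextmanager; a sketch:

```python
import signal
from contextlib import contextmanager

@contextmanager
def signal_handler(sig, handler):
    """Install <handler> for <sig> on entry; restore the previous handler on exit."""
    if isinstance(sig, str):
        # tolerate platform-specific names that may not exist everywhere
        sig = getattr(signal, sig, None)
    saved = signal.signal(sig, handler) if sig is not None else None
    try:
        yield
    finally:
        if sig is not None:
            signal.signal(sig, saved)
```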

borg.helpers.sizeof_fmt(num, suffix='B', units=None, power=None, sep='', precision=2)[source]
borg.helpers.sizeof_fmt_decimal(num, suffix='B', sep='', precision=2)[source]
borg.helpers.sizeof_fmt_iec(num, suffix='B', sep='', precision=2)[source]
borg.helpers.sysinfo()[source]
borg.helpers.timestamp(s)[source]

Convert a --timestamp=s argument to a datetime object

borg.helpers.to_localtime(ts)[source]

Convert datetime object from UTC to local time zone

borg.helpers.uid2user(*args)[source]
borg.helpers.update_excludes(args)[source]

Merge exclude patterns from files with those on command line.

borg.helpers.user2uid(*args)[source]
borg.helpers.yes(msg=None, false_msg=None, true_msg=None, default_msg=None, retry_msg=None, invalid_msg=None, env_msg='{} (from {})', falsish=('No', 'NO', 'no', 'N', 'n', '0'), truish=('Yes', 'YES', 'yes', 'Y', 'y', '1'), defaultish=('Default', 'DEFAULT', 'default', 'D', 'd', ''), default=False, retry=True, env_var_override=None, ofile=None, input=<built-in function input>)[source]
Output <msg> (usually a question) and let user input an answer.

Qualifies the answer according to falsish, truish and defaultish as True, False or <default>. If it didn’t qualify and retry is False (no retries wanted), return the default [which defaults to False]. If retry is True let user retry answering until answer is qualified.

If env_var_override is given and this var is present in the environment, do not ask the user, but just use the env var contents as the answer as if it was typed in. Otherwise read input from stdin and proceed as normal. If EOF is received instead of input, or an invalid input is given without the possibility to retry, return the default.

param msg: introducing message to output on ofile, no newline is added [None]
param retry_msg: retry message to output on ofile, no newline is added [None]
param false_msg: message to output before returning False [None]
param true_msg: message to output before returning True [None]
param default_msg: message to output before returning a <default> [None]
param invalid_msg: message to output after an invalid answer was given [None]
param env_msg: message to output when using input from env_var_override ['{} (from {})'], needs to have 2 placeholders for answer and env var name
param falsish: sequence of answers qualifying as False
param truish: sequence of answers qualifying as True
param defaultish: sequence of answers qualifying as <default>
param default: default return value (defaultish answer was given or no-answer condition) [False]
param retry: if True and input is incorrect, retry. Otherwise return default. [True]
param env_var_override: environment variable name [None]
param ofile: output stream [sys.stderr]
param input: input function [input from builtins]
return: boolean answer value, True or False
class borg.locking.ExclusiveLock(path, timeout=None, sleep=None, id=None)[source]

An exclusive Lock based on mkdir fs operation being atomic.

If possible, try to use the contextmanager here like:

with ExclusiveLock(...) as lock:
    ...

This makes sure the lock is released again if the block is left, no matter how (e.g. if an exception occurred).
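
The mkdir trick can be sketched like this (MkdirLock is a hypothetical name; the real class additionally tracks lock ownership and supports breaking stale locks):

```python
import os
import time

class MkdirLock:
    """Exclusive lock relying on os.mkdir being atomic: only one caller can create the dir."""

    def __init__(self, path, timeout=None, sleep=0.05):
        self.path, self.timeout, self.sleep = path, timeout, sleep

    def __enter__(self):
        deadline = None if self.timeout is None else time.monotonic() + self.timeout
        while True:
            try:
                os.mkdir(self.path)  # succeeds for exactly one process at a time
                return self
            except FileExistsError:
                if deadline is not None and time.monotonic() > deadline:
                    raise TimeoutError('could not acquire lock %s' % self.path)
                time.sleep(self.sleep)

    def __exit__(self, *exc):
        os.rmdir(self.path)  # release even if the block raised
```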

acquire(timeout=None, sleep=None)[source]
break_lock()[source]
by_me()[source]
is_locked()[source]
release()[source]
class borg.locking.Lock(path, exclusive=False, sleep=None, timeout=None, id=None)[source]

A Lock for a resource that can be accessed in a shared or exclusive way. Typically, write access to a resource needs an exclusive lock (1 writer, no one is allowed to read) and read access to a resource needs a shared lock (multiple readers are allowed).

If possible, try to use the contextmanager here like:

with Lock(...) as lock:
    ...

This makes sure the lock is released again if the block is left, no matter how (e.g. if an exception occurred).

acquire(exclusive=None, remove=None, sleep=None)[source]
break_lock()[source]
downgrade()[source]
got_exclusive_lock()[source]
release()[source]
upgrade()[source]
exception borg.locking.LockError[source]

Failed to acquire the lock {}.

exception borg.locking.LockErrorT[source]

Failed to acquire the lock {}.

exception borg.locking.LockFailed[source]

Failed to create/acquire the lock {} ({}).

class borg.locking.LockRoster(path, id=None)[source]

A Lock Roster to track shared/exclusive lockers.

Note: you usually should call the methods with an exclusive lock held, to avoid conflicting access by multiple threads/processes/machines.

empty(*keys)[source]
get(key)[source]
load()[source]
modify(key, op)[source]
remove()[source]
save(data)[source]
exception borg.locking.LockTimeout[source]

Failed to create/acquire the lock {} (timeout).

exception borg.locking.NotLocked[source]

Failed to release the lock {} (was not locked).

exception borg.locking.NotMyLock[source]

Failed to release the lock {} (was/is locked, but not by me).

class borg.locking.TimeoutTimer(timeout=None, sleep=None)[source]

A timer for timeout checks (can also deal with no timeout, give timeout=None [default]). It can also compute and optionally execute a reasonable sleep time (e.g. to avoid polling too often or to support thread/process rescheduling).

sleep()[source]
start()[source]
timed_out()[source]
timed_out_or_sleep()[source]
borg.locking.get_id()[source]

Get identification tuple for ‘us’

borg.shellpattern.translate(pat)[source]

Translate a shell-style pattern to a regular expression.

The pattern may include **<sep> (<sep> stands for the platform-specific path separator; “/” on POSIX systems) for matching zero or more directory levels and “*” for matching zero or more arbitrary characters with the exception of any path separator. Wrap meta-characters in brackets for a literal match (i.e. “[?]” to match the literal character ”?”).

This function is derived from the “fnmatch” module distributed with the Python standard library.

Copyright (C) 2001-2016 Python Software Foundation. All rights reserved.

TODO: support {alt1,alt2} shell-style alternatives
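
The '*' vs '**' distinction can be sketched with a reduced reimplementation (this handles only '*', '**' and '?', unlike the full function):

```python
import re

def translate(pat, sep='/'):
    """Translate a reduced shell-pattern subset ('*', '**', '?') to a regex string."""
    res = []
    i, n = 0, len(pat)
    while i < n:
        c = pat[i]
        if c == '*':
            if pat[i:i + 2] == '**':
                res.append('.*')  # '**' may span directory levels
                i += 2
                continue
            res.append('[^%s]*' % re.escape(sep))  # '*' stops at the path separator
        elif c == '?':
            res.append('[^%s]' % re.escape(sep))
        else:
            res.append(re.escape(c))
        i += 1
    return '(?s)' + ''.join(res) + r'\Z'
```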

class borg.lrucache.LRUCache(capacity, dispose)[source]
clear()[source]
items()[source]
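
Such a capacity-bounded cache with an eviction callback can be sketched on top of OrderedDict (an illustrative reimplementation):

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache; dispose(value) is called for every evicted value."""

    def __init__(self, capacity, dispose=lambda value: None):
        self._cache = OrderedDict()
        self.capacity, self.dispose = capacity, dispose

    def __setitem__(self, key, value):
        self._cache[key] = value
        self._cache.move_to_end(key)  # newest entries live at the end
        while len(self._cache) > self.capacity:
            _, evicted = self._cache.popitem(last=False)  # drop least recently used
            self.dispose(evicted)

    def __getitem__(self, key):
        self._cache.move_to_end(key)  # a hit marks the entry as recently used
        return self._cache[key]

    def __contains__(self, key):
        return key in self._cache
```
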
class borg.fuse.FuseOperations(key, repository, manifest, archive, cached_repo)[source]

Export archive as a fuse filesystem

allocate_inode()[source]
allow_damaged_files = False
get_item(inode)[source]
getattr(inode, ctx=None)[source]
getxattr(inode, name, ctx=None)[source]
listxattr(inode, ctx=None)[source]
lookup(parent_inode, name, ctx=None)[source]
mount(mountpoint, mount_options, foreground=False)[source]

Mount filesystem on mountpoint with mount_options.

open(inode, flags, ctx=None)[source]
opendir(inode, ctx=None)[source]
process_archive(archive, prefix=[])[source]

Build fuse inode hierarchy from archive metadata

read(fh, offset, size)[source]
readdir(fh, off)[source]
statfs(ctx=None)[source]
class borg.fuse.ItemCache[source]
add(item)[source]
get(inode)[source]
borg.fuse.fuse_main()[source]

A basic extended attributes (xattr) implementation for Linux, FreeBSD and MacOS X.

exception borg.xattr.BufferTooSmallError[source]

the buffer given to an xattr function was too small for the result

borg.xattr.get_all(path, follow_symlinks=True)[source]
borg.xattr.getxattr(path, name, *, follow_symlinks=True)[source]
borg.xattr.is_enabled(path=None)[source]

Determine if xattr is enabled on the filesystem

borg.xattr.listxattr(path, *, follow_symlinks=True)[source]
borg.xattr.setxattr(path, name, value, *, follow_symlinks=True)[source]
borg.xattr.split_lstring(buf)[source]

split a list of length-prefixed strings into Python bytes objects without the length prefixes

borg.xattr.split_string0(buf)[source]

split a list of zero-terminated strings into Python bytes objects without the terminators
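
Both splitters can be sketched in a few lines; the one-byte length prefix in split_lstring is an assumption of this sketch:

```python
def split_string0(buf):
    """Split a buffer of zero-terminated strings into bytes objects without terminators."""
    return buf.split(b'\0')[:-1] if buf else []

def split_lstring(buf):
    """Split a buffer of one-byte-length-prefixed strings into plain bytes objects."""
    result = []
    while buf:
        n = buf[0]  # indexing bytes yields the length as an int
        result.append(buf[1:1 + n])
        buf = buf[1 + n:]
    return result
```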

borg.platform.sync_dir(path)[source]
borg.platform_linux.acl_get()

Saves ACL Entries

If numeric_owner is True, the user/group names are not preserved, only the uid/gid values

borg.platform_linux.acl_set()

Restore ACL Entries

If numeric_owner is True, the stored uid/gid is used instead of the user/group names

borg.platform_linux.acl_use_local_uid_gid()

Replace the user/group field with the local uid/gid if possible

class borg.hashindex.ChunkIndex

Mapping of 32 byte keys to (refcount, size, csize), which are all 32-bit unsigned.

The reference count cannot overflow. If an overflow would occur, the refcount is fixed to MAX_VALUE and will neither increase nor decrease by incref(), decref() or add().

Prior signed 32-bit overflow is handled correctly for most cases: All values from UINT32_MAX (2**32-1, inclusive) to MAX_VALUE (exclusive) are reserved and either cause silent data loss (-1, -2) or will raise an AssertionError when accessed. Other values are handled correctly. Note that previously the refcount could also reach 0 by increasing it.

Assigning refcounts in this reserved range is an invalid operation and raises AssertionError.
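
The saturating refcount can be sketched as follows; MAX_VALUE here is a placeholder, not the constant the C implementation actually uses:

```python
MAX_VALUE = 2**32 - 1024  # placeholder cutoff below the reserved range

def incref(refcount):
    """Saturating increment: once pinned at MAX_VALUE, the count never moves again."""
    assert refcount <= MAX_VALUE, 'refcount in the reserved range'
    return MAX_VALUE if refcount == MAX_VALUE else refcount + 1

def decref(refcount):
    """Saturating decrement, symmetric to incref."""
    assert 0 < refcount <= MAX_VALUE
    return MAX_VALUE if refcount == MAX_VALUE else refcount - 1
```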

add()
decref()

Decrease refcount for ‘key’, return (refcount, size, csize)

incref()

Increase refcount for ‘key’, return (refcount, size, csize)

iteritems()
merge()
summarize()
value_size = 12
class borg.hashindex.ChunkKeyIterator
class borg.hashindex.NSIndex
iteritems()
value_size = 8
class borg.hashindex.NSKeyIterator
borg.compress.get_compressor()
class borg.compress.Compressor

compresses using a compressor with the given name and parameters; decompresses everything we can handle (auto-detection)

compress()
decompress()
class borg.compress.CompressorBase

base class for all (de)compression classes; also handles compression format auto-detection and adding/stripping the ID header (which enables auto-detection).

ID = b'\xff\xff'
compress()
decompress()
detect()
name = 'baseclass'
class borg.chunker.Chunker
chunkify()

Cut a file into chunks.

Parameters:
  • fd – Python file object
  • fh – OS-level file handle (if available), defaults to -1 which means not to use OS-level fd.
borg.chunker.buzhash()
borg.chunker.buzhash_update()

A thin OpenSSL wrapper

This could perhaps be replaced by PyCrypto.

class borg.crypto.AES

A thin wrapper around the OpenSSL EVP cipher API

decrypt()
encrypt()
iv
reset()
borg.crypto.bytes16_to_int()
borg.crypto.bytes_to_int
borg.crypto.bytes_to_long
borg.crypto.increment_iv()

Increment the IV by the given amount (default 1).

Parameters:
  • iv – input IV, 16 bytes (128 bit)
  • amount – increment value
Returns:

input_IV + amount, 16 bytes (128 bit)
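
A sketch, assuming the IV is treated as a 128-bit big-endian integer (the endianness is an assumption of this sketch):

```python
def increment_iv(iv, amount=1):
    """Return iv (16 bytes) incremented by amount, again as 16 bytes."""
    assert len(iv) == 16
    value = int.from_bytes(iv, 'big') + amount
    return value.to_bytes(16, 'big')  # raises OverflowError if the counter wraps
```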

borg.crypto.int_to_bytes16()
borg.crypto.long_to_bytes
borg.crypto.num_aes_blocks()

Return the number of AES blocks required to encrypt/decrypt length bytes of data. Note: this is only correct for modes without padding, like AES-CTR.
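
With 16-byte AES blocks this is a ceiling division; sketch:

```python
def num_aes_blocks(length):
    """Number of 16-byte AES blocks covering `length` bytes (no-padding modes like CTR)."""
    return (length + 15) // 16
```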