Recent comments posted to this site:
I ran git annex dead for a full git-annex repo (not a special remote) I had on a VPS, but my local machine keeps trying to sync with it on git annex sync, asking me to confirm the ssh public key and so on. Is that correct?
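A minimal sketch of the situation being described; the remote name "vps" is hypothetical, and the last two commands are just the usual knobs for stopping git annex sync from contacting a particular remote, not something stated in the comment above:

git annex dead vps                        # mark the old VPS repository as dead (name, description, or uuid)
git annex sync                            # still contacts every configured git remote over ssh
git config remote.vps.annex-sync false    # one way to stop syncing with that remote
git remote remove vps                     # or drop the remote from the local clone entirely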
git-annex sets up its own ssh connection caching because this makes it a lot faster.
To disable this feature, you can set annex.sshcaching=false, or set remote.origin.annex-ssh-options as you have.
git-annex has no way to know if you have another ssh socket to use, so it seems reasonable that you'd need to configure it yourself if you want it to use one.
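For concreteness, a hedged example of both options; the socket path passed via -S is only an assumption standing in for whatever ControlMaster socket your own ssh setup already maintains:

git config annex.sshcaching false    # disable git-annex's own connection caching
# or point git-annex at an existing multiplexing socket instead:
git config remote.origin.annex-ssh-options "-S ~/.ssh/master-%r@%h:%p"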
My way of working with git-annex doesn't seem to mesh well with the Assistant or even with git annex sync. I seem to have a bit of a control need when it comes to what gets committed when. But here's my workflow approximating what it does, with a twist. I have this in git config on mylaptop:
remote.myserver.fetch=+refs/heads/*:refs/remotes/myserver/*
remote.myserver.push=refs/heads/*:refs/remotes/mylaptop/*
remote.myserver.push=refs/heads/master:refs/heads/master
remote.myserver.push=refs/heads/git-annex:refs/heads/git-annex
I don't need a synced/git-annex. If upstream is not up-to-date I fetch and merge. In this case upstream happens to be a bare git repo, so I don't need synced/master either. If upstream is non-bare, I use synced/master -- or sometimes I keep upstream usually checked out on an orphan branch and just switch into master to check things and then switch away to avoid conflicts. If I can avoid it, I prefer not to have several branches where I don't know which one is the latest.
But here's the twist. Look at this row:
remote.myserver.push=refs/heads/*:refs/remotes/mylaptop/*
If I just do git push, close the lid, and run into the forest, it may or may not hit a non-fast-forward on master and git-annex ... but it always succeeds in pushing to the mylaptop remote on my server.
If I have added a batch of files, I usually push first to all my remotes, to get that precious metadata up there. At that point I don't care if there's a conflict upstream. Then I git annex copy to wherever, fetch all remotes, git annex merge, maybe merge master if I have to (usually not), then push to all remotes again. It's less of a bother than it sounds. I don't even have any handy aliases for this; I prefer to just get the for loop from my command-line history.
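Spelled out, that loop might look roughly like this; the remote name myserver comes from the config above, and the exact git annex copy target is only an example:

for r in $(git remote); do git push "$r"; done     # get the metadata onto every remote first
git annex copy --to myserver .                     # then move the file contents wherever they belong
for r in $(git remote); do git fetch "$r"; done    # pick up everyone else's state
git annex merge                                    # merge the fetched git-annex branches
git merge myserver/master                          # only if master actually needs it (usually not)
for r in $(git remote); do git push "$r"; done     # and push everywhere again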
@tim, are the git-annex repositories going to be connected? If so, git annex initremote the S3 remote in one, merge it into the next repo, and then git annex enableremote the S3 remote there.
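A sketch of that flow; the remote name mys3, the bucket name, the encryption choice, and the firstrepo remote are all made-up examples:

# in the first repository: create the S3 special remote
git annex initremote mys3 type=S3 encryption=none bucket=mybucket
# in the second repository: merge in the first repo's git-annex branch,
# which is what records the S3 remote's uuid and configuration
git fetch firstrepo
git annex merge
# now the remote is known here too, so it can simply be enabled
git annex enableremote mys3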
That's the sane way. If you want, for some reason, to have multiple separate git-annex repositories that all try to use the same S3 bucket without knowing about one another, I have to recommend against it. You're setting yourself up to shoot yourself in the foot, and quite possibly lose data.
While you can git annex initremote the same bucket repeatedly in the different repositories, each time it will be given a different uuid, and since the uuid is stored in the bucket, this will prevent git annex enableremote from being used for the old uuid, since it will see the bucket has a different uuid now.
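Schematically, the failure mode being warned about (all names are examples):

# repository A:
git annex initremote mys3 type=S3 encryption=none bucket=shared-bucket
# repository B, unaware of A, reuses the same bucket and records a new uuid there:
git annex initremote mys3 type=S3 encryption=none bucket=shared-bucket
# back in repository A, the bucket now advertises B's uuid, so this fails:
git annex enableremote mys3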
I was hoping to use the same bucket for multiple repos (100+) with a lot of files in common. Dropping unused files would not be an issue for me, so from what I read above this should be possible. However, I cannot get either initremote or enableremote to add a repo with an existing uuid. How do I add an already initialized bucket to a new git annex repo?
Right, it would need a standalone package. It's quite easy to build such a package on any Debian system: just run "make linuxstandalone" in git-annex's source tree. I don't have a PowerPC system handy to do such builds on myself.
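For instance, on a Debian PowerPC machine it could look something like this; the clone URL is git-annex's public repository, and the build-dep step is an assumption about what a fresh system still needs:

git clone git://git-annex.branchable.com/ git-annex
cd git-annex
sudo apt-get build-dep git-annex    # pull in the Haskell toolchain and libraries
make linuxstandalone                # produces the standalone tarball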
@Jan, there is a powerpc build of git-annex in Debian. Not sure if it targets the e500 CPU.