This special remote type stores file contents in a bucket in Amazon S3 or a similar service.
See using Amazon S3, Internet Archive via S3, and using Google Cloud Storage for usage examples.
configuration
The standard environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are used to supply login credentials for Amazon. You need to set these only when running git annex initremote, as they will be cached in a file only you can read inside the local git repository.
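For example, a minimal setup might look like this (a sketch only; the remote name "mys3" and the parameter values are illustrative):

    export AWS_ACCESS_KEY_ID="..."        # your Amazon access key
    export AWS_SECRET_ACCESS_KEY="..."    # your Amazon secret key
    # The credentials are cached locally, so they only need to be set
    # in the environment for this one command.
    git annex initremote mys3 type=S3 encryption=shared datacenter=EU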
A number of parameters can be passed to git annex initremote to configure the S3 remote.
* encryption - One of "none", "hybrid", "shared", or "pubkey". See encryption.
* keyid - Specifies the gpg key to use for encryption.
* chunk - Enables chunking when storing large files. chunk=1MiB is a good starting point for chunking.
* embedcreds - Optional. Set to "yes" to embed the login credentials inside the git repository, which allows other clones to also access them. This is the default when gpg encryption is enabled; the credentials are stored encrypted and only those with the repository's keys can access them. It is not the default when using shared encryption, or no encryption. Think carefully about who can access your repository before using embedcreds without gpg encryption.
* datacenter - Defaults to "US". Other values include "EU", "us-west-1", "us-west-2", "ap-southeast-1", "ap-southeast-2", and "sa-east-1".
* storageclass - Default is "STANDARD". If you have configured git-annex to preserve multiple copies, consider setting this to "REDUCED_REDUNDANCY" to save money.
* host and port - Specify these to use a different, S3-compatible service.
* bucket - S3 requires that buckets have a globally unique name, so by default a bucket name is chosen based on the remote name and UUID. This parameter can be used to pick a bucket name.
* public - Set to "yes" to allow public read access to files sent to the S3 remote. This is accomplished by setting an ACL when each file is uploaded to the remote. So, changes to this setting will only affect subsequent uploads.
* publicurl - Configure the URL that is used to download files from the bucket when they are available publicly. (This is automatically configured for Amazon S3 and the Internet Archive.)
* partsize - Amazon S3 only accepts uploads up to a certain file size, and storing larger files requires a multipart upload process. Setting partsize=1GiB is recommended for Amazon S3 when not using chunking; this will cause multipart uploads to be done using parts up to 1GiB in size. Note that setting partsize to less than 100MiB will cause Amazon S3 to reject uploads. This is not enabled by default, since other S3 implementations may not support multipart uploads or may have different limits, but it can be enabled or changed at any time.
* fileprefix - By default, git-annex places files in a tree rooted at the top of the S3 bucket. When this is set, it's prefixed to the filenames used. For example, you could set it to "foo/" in one special remote, and to "bar/" in another special remote, and both special remotes could then use the same bucket (see the sketch after this list).
* x-amz-meta-* - These are passed through as http headers when storing keys in S3. See the Internet Archive S3 interface documentation for example headers.
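As a sketch of how these fit together, two special remotes could share one bucket by giving each its own fileprefix. The remote names, bucket name, and parameter values below are only illustrative:

    # Hypothetical remote "photos", using a prefix inside a shared bucket
    git annex initremote photos type=S3 encryption=shared \
        bucket=example-annex-bucket fileprefix=photos/ \
        datacenter=EU chunk=1MiB
    # Hypothetical remote "music", same bucket, different prefix
    git annex initremote music type=S3 encryption=shared \
        bucket=example-annex-bucket fileprefix=music/ \
        storageclass=REDUCED_REDUNDANCY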
ANNEX_S3_ACCESS_KEY_ID and ANNEX_S3_SECRET_ACCESS_KEY seem to have been changed to AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
It'd be really nice to be able to configure an S3 remote of the form <bucket>/<folder> (not really a folder, of course, just the usual prefix trick used to simulate folders at S3). The remote = bucket architecture is not scalable at all, in terms of number of repositories. How hard would it be to support this?

Thanks, this is the only thing that's holding us back from using git-annex. Nice tool!
This is now supported by the new fileprefix setting. Note that I have not tested it, beyond checking that it builds, since I let my S3 account expire. Your testing would be appreciated.

Any chance I could bribe you to set up Rackspace Cloud Files support? We are using them and would hate to have an S3 bucket only for this.
https://github.com/rackspace/python-cloudfiles
If encryption is not used, the files are stored in S3 as-is, and can be accessed directly. They are stored in a hashed directory structure, named by their git-annex keys rather than by their original filenames. To get back to the original filename, a copy of the git repo would also be needed.
With encryption, you need the gpg key used in the encryption, or, for shared encryption, a symmetric key which is stored in the git repo.
See future proofing for non-S3 specific discussion of this topic.
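As a rough sketch of that last point: given a clone of the repository and an object name copied from the bucket (the key below is hypothetical), the original filename can be recovered because locked annexed files are symlinks whose targets end in the key:

    # Hypothetical key name, as it appears as an object in the bucket
    key="SHA256E-s1048576--0123456789abcdef.jpg"
    # Search the clone for the symlink pointing at that key
    find . -path ./.git -prune -o -type l -print | while read -r f; do
        case "$(readlink "$f")" in
            *"$key") echo "$f" ;;
        esac
    done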
How do I recover a special remote from a clone, please? I see that remote.log has most of the details, but my remote is not configured on my clone and I see no obvious way to do it. And I used embedcreds, but the only credentials I can see are stored in .git/annex/creds/, so they did not survive a clone. I'm confused because the documentation here for embedcreds says that clones should have access.

As a workaround, it looks like copying the remote over from .git/config, as well as the credentials from .git/annex/creds/, seems to work. Is there some other way I'm supposed to do this, or is this the intended way?

You can enable a special remote on a clone by running git annex enableremote $name, where $name is the name you used to originally create the special remote. (Older versions of git-annex used git annex initremote to enable the special remote on the clone.)

(Just in case, I have verified that embedcreds does cause the cipher= to be stored in the remote.log. It does.)
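For instance, in a fresh clone (using a hypothetical remote name "mys3"):

    git clone /path/to/repo myclone
    cd myclone
    # Re-enable the existing S3 special remote in this clone
    git annex enableremote mys3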
Thanks Joey - initremote on my slightly older version appears to work. I'll use enableremote when I can.

This doesn't do what I expect. The documentation suggests that my S3 login credentials would be stored. I understand that the cipher would be stored; but isn't this a separate concept? Instead, I'm being asked to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; my understanding was that git-annex would keep them in the repository for me, so that I don't have to set them again after running initremote. Before cloning, this works; it just doesn't survive the cloning. I'm using encryption=shared; does this affect anything? Or am I using a version of git-annex (3.20121112ubuntu3) that's too old?

That's not what the documentation here says! It even warns me: "Think carefully about who can access your repository before using embedcreds without gpg encryption."
My use case:
Occasional use of EC2, and a desire to store some persistent stuff in S3, since the dataset is large and I have limited bandwidth. I want to destroy the EC2 instance when I'm not using it, leaving the data in S3 for later.
If I use git-annex to manage the S3 store, then I get the ability to clone the repository and destroy the instance. Later, I can start a new instance, push the repo back up, and would like to be able to then pull the data back out of S3 again.
I'd really like the login credentials to persist in the repository (as the documentation here says it should). Even if I have to add a --yes-i-know-my-s3-credentials-will-end-up-available-to-anyone-who-can-see-my-git-repo flag. This is because I use some of my git repos to store private data, too.
If I use an Amazon IAM policy as follows, I can generate a set of credentials that are limited to access to a particular prefix of a specific S3 bucket only - effectively creating a sandboxed area just for git-annex:
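A minimal sketch of such a policy (the bucket name, prefix, and exact set of actions are illustrative, not a vetted policy):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AnnexObjects",
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
          "Resource": "arn:aws:s3:::example-annex-bucket/myannex/*"
        },
        {
          "Sid": "AnnexListing",
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::example-annex-bucket",
          "Condition": {"StringLike": {"s3:prefix": "myannex/*"}}
        }
      ]
    }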
Doing this means that I have a different set of credentials for every annex, so it would be really useful to be able have these stored and managed within the repository itself. Each set is limited to what the annex stores, so there is no bigger compromise I have to worry about apart from the compromise of the data that the annex itself manages.
I apologise for incorrect information. I was thinking about defaults when using the webapp.
I have verified that embedcreds=yes stores the AWS creds, always.
Is there a way to tell the S3 backend to store the files as they are named locally, instead of by hashed content name? i.e., I've annexed foo/bar.txt and annex puts it in s3 as mybucket.name/foo/bar.txt instead of mybucket.name/GPGHMACSHA1-random.txt
Or should I just write a script to s3cmd sync my annex, and add the S3/cloudfront distribution URL as a web remote?
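One rough shape of that workaround (a sketch only; the bucket name and URL are hypothetical, and it assumes the bucket is publicly readable):

    # Mirror the working tree, following the annex symlinks, into a public bucket
    s3cmd sync --follow-symlinks --exclude '.git/*' ./ s3://example-public-bucket/myannex/
    # Then record the public URL for a file, so the web remote can fetch it
    git annex addurl --file=foo/bar.txt \
        "https://example-public-bucket.s3.amazonaws.com/myannex/foo/bar.txt"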