How to copy selected files to CloudFront
If you use my WordPress CDN plugin and Amazon CloudFront, you may have trouble putting files into S3 storage. Here is a simple way to do it on Linux without any commercial tools.
First, download s3sync and extract it somewhere. In this example I used my home directory.
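If you prefer to do that from the shell, something like the following works; the download URL is the one the s3sync README listed at the time, so treat it as an assumption and check for the latest release:

cd ~
wget http://s3.amazonaws.com/ServEdge_pub/s3sync/s3sync.tar.gz   # URL is an assumption; verify before use
tar xzf s3sync.tar.gz   # unpacks into ~/s3sync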
mkdir ~/.s3conf
Edit ~/.s3conf/s3config.yml, which should look like this:
aws_access_key_id: your S3 access key
aws_secret_access_key: your secret key
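Since this file contains your secret key, it is worth restricting its permissions:

chmod 700 ~/.s3conf
chmod 600 ~/.s3conf/s3config.yml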
Then enter the WordPress directory and run the upload command:
cd wordpress
find * -type f -readable \( -name \*.css -o -name \*.js -o -name \*.png \
    -o -name \*.jpg -o -name \*.gif -o -name \*.jpeg \) \
    -exec ~/s3sync/s3cmd.rb -v put bucket:prefix/{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 \;
Change bucket to your real bucket name. If you don't need a prefix, drop it together with the slash. Adjust the Cache-Control header to taste. ~/s3sync/s3cmd.rb should point to wherever you extracted s3sync.
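For instance, with a hypothetical bucket my-bucket and prefix blog, a single upload expands to something like this (the bucket, prefix, and file path are placeholders for illustration):

# my-bucket, blog/ and the file path below are placeholders
~/s3sync/s3cmd.rb -v put my-bucket:blog/wp-content/themes/default/style.css \
    wp-content/themes/default/style.css \
    x-amz-acl:public-read Cache-Control:max-age=604800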
Update 1: Don't forget to install mime-types if your Linux distro didn't install it by default. Check whether /etc/mime.types exists.
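A quick way to check, plus the install command on Debian and Ubuntu, where the package providing /etc/mime.types is called mime-support (other distros may name it differently):

[ -f /etc/mime.types ] || sudo apt-get install mime-support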
Update 2: s3cmd.rb does not set the Content-Type header at all. I think the Python version does. Anyway, I wrote a script to redo everything.
#!/bin/sh
BUCKET=   # Set your bucket
PREFIX=   # If you want to use a prefix, set it like PREFIX=blog/
find * -type f -readable -name \*.css -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css \;
find * -type f -readable -name \*.js -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:application/x-javascript \;
find * -type f -readable -name \*.png -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/png \;
find * -type f -readable -name \*.gif -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/gif \;
find * -type f -readable \( -name \*.jpg -o -name \*.jpeg \) -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/jpeg \;
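To spot-check that the Content-Type header stuck, you can request the object headers straight from S3 with a HEAD request (the bucket name and key below are placeholders; substitute your own):

curl -sI http://my-bucket.s3.amazonaws.com/blog/wp-content/themes/default/style.css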
Update 3: I just realized CloudFront does not gzip files, so I rewrote my script to force gzip encoding on CSS and JS files.
#!/bin/sh
BUCKET=   # Your bucket
PREFIX=   # If you want to use a prefix, set it like PREFIX=blog/
S3CMD=/home/user/s3sync/s3cmd.rb   # Your absolute path to s3cmd.rb
find * -type f -readable -name \*.css -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
    $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css Content-Encoding:gzip" \;
find * -type f -readable -name \*.js -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
    $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:application/x-javascript Content-Encoding:gzip" \;
find * -type f -readable -name \*.png -exec $S3CMD -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/png \;
find * -type f -readable -name \*.gif -exec $S3CMD -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/gif \;
find * -type f -readable \( -name \*.jpg -o -name \*.jpeg \) -exec $S3CMD -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/jpeg \;
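Once CloudFront serves the objects, the same kind of check against your distribution's domain should show all three headers (the domain and path below are placeholders):

curl -sI http://d1234example.cloudfront.net/blog/wp-content/themes/default/style.css | grep -iE 'content-type|content-encoding|cache-control'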
Update 4: 4/1/2009
I added the ability to copy a single file or a single directory.
#!/bin/sh
# Upload everything by default, or just the file/directory given as $1
if [ -n "$1" ]; then
    LOC=$1
else
    LOC="*"
fi
BUCKET=   # Your bucket
PREFIX=   # If you want to use a prefix, set it like PREFIX=blog/
S3CMD=/home/user/s3sync/s3cmd.rb   # Your absolute path to s3cmd.rb
find $LOC -type f -readable -name \*.css -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
    $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css Content-Encoding:gzip" \;
find $LOC -type f -readable -name \*.js -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
    $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:application/x-javascript Content-Encoding:gzip" \;
find $LOC -type f -readable -name \*.png -exec ${S3CMD} -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/png \;
find $LOC -type f -readable -name \*.gif -exec ${S3CMD} -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/gif \;
find $LOC -type f -readable \( -name \*.jpg -o -name \*.jpeg \) -exec ${S3CMD} -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/jpeg \;
For example, if you saved this script to a file named cloudfront:
cd wordpress
cloudfront wp-content/uploads
Without any command-line argument, the script will upload all files under the current directory.
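The example above assumes the script is executable and somewhere on your PATH; for instance (the location is an assumption):

chmod +x ~/bin/cloudfront   # ~/bin stands in for any directory on your PATH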
I always enjoy learning how other people employ Amazon S3 and CloudFront. For Windows users, I would recommend checking out my own tool, CloudBerry Explorer, which helps manage S3 and CloudFront. It is freeware.
I keep getting an error — find: invalid predicate `-readable’
I’m using Debian Linux…
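The -readable predicate requires GNU findutils 4.3 or newer, which older Debian releases lack. A workaround is to drop the predicate (files in your own WordPress tree are normally readable anyway), or to approximate it with a portable -perm test, sketched here for the CSS line:

# Sketch: replace -readable with an owner-read permission test (close, not identical, behaviour)
find * -type f -perm -u+r -name \*.css -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css \;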