Cloud Files uploading script
This is a very simple Linux Bash script to help upload files to Mosso Cloud Files.
Mosso uploading script: mosso.sh
Update: Here is a Mac OS X version (provided by Bryan Rehbein).
You will also need curl.
After you sign up for the Mosso Cloud Files service, log in to your Mosso Cloud Files control panel and navigate to Your Account / API Access, where you can generate your API key. Then edit the beginning of mosso.sh:
API_KEY=YOURAPIKEYHERE
USER=yourusername
CONTAINER=bucket
Now change to the directory from which you want to upload files, e.g. a WordPress installation:
cd /srv/http/wordpress
bash mosso.sh wp-includes/js/jquery
Uploading wp-includes/js/jquery/jquery.table-hotkeys.js .... done.
Uploading wp-includes/js/jquery/interface.js .... done.
Uploading wp-includes/js/jquery/ui.core.js .... done.
Uploading wp-includes/js/jquery/jquery.color.js .... done.
Uploading wp-includes/js/jquery/ui.tabs.js .... done.
Uploading wp-includes/js/jquery/ui.resizable.js .... done.
Uploading wp-includes/js/jquery/jquery.hotkeys.js .... done.
Uploading wp-includes/js/jquery/ui.sortable.js .... done.
Uploading wp-includes/js/jquery/ui.dialog.js .... done.
Uploading wp-includes/js/jquery/jquery.js .... done.
Uploading wp-includes/js/jquery/ui.draggable.js .... done.
Uploading wp-includes/js/jquery/jquery.form.js .... done.
Uploading wp-includes/js/jquery/suggest.js .... done.
Uploading wp-includes/js/jquery/jquery.schedule.js .... done.
HTTP/1.1 202 Accepted
Date: Sat, 14 Mar 2009 00:06:41 GMT
Server: Apache
X-CDN-URI: http://cdn.cloudfiles.mosso.com/c12345
Content-Length: 0
Content-Type: text/plain; charset=UTF-8
Write down the URL on the line beginning with X-CDN-URI. This is your Mosso CDN URL. You can also find this URL inside your Mosso control panel.
If you want to upload the entire current directory, run this script without any argument.
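If you save the response headers to a variable or file, the CDN URL can be pulled out automatically instead of copied by hand. A minimal sketch, using the sample response above (the variable names are my own, not part of mosso.sh):

```shell
# Sample response headers, copied from the output above.
response='HTTP/1.1 202 Accepted
Date: Sat, 14 Mar 2009 00:06:41 GMT
Server: Apache
X-CDN-URI: http://cdn.cloudfiles.mosso.com/c12345
Content-Length: 0'

# Grab the value of the X-CDN-URI header; strip a trailing CR if curl kept one.
CDN_URL=$(printf '%s\n' "$response" | awk '/^X-CDN-URI:/ {print $2}' | tr -d '\r')
echo "$CDN_URL"
```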
EC2 new price
Amazon just announced reserved instances.
The 3-year reserved small instance is priced at 3.9 cents per hour, which is equivalent to about $28 per month. Not bad at all for a VPS with 1.7 GB of RAM and a 160 GB hard drive. But I have found some even lower-priced VPSes: $3.95 per month for 512 MB of RAM with 1-year billing.
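The monthly figure is just the hourly rate over a 30-day month:

```shell
# 3.9 cents/hour x 24 hours x 30 days ~= $28/month
monthly=$(awk 'BEGIN { printf "%.2f", 0.039 * 24 * 30 }')
echo "$monthly"
```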
How to copy selected files to cloudfront
If you use my WordPress CDN plugin with Amazon CloudFront, you may have trouble putting files into S3 storage. Here is a simple way to do it on Linux without any commercial tools.
First download s3sync. Extract it somewhere; in this example I used my home directory.
mkdir ~/.s3conf
Edit ~/.s3conf/s3config.yml, which should look like this:
aws_access_key_id: your s3 access key
aws_secret_access_key: your secret key
Enter the WordPress directory:
cd wordpress
find * -type f -readable \( -name \*.css -o -name \*.js -o \
  -name \*.png -o -name \*.jpg -o -name \*.gif -o -name \*.jpeg \) \
  -exec ~/s3sync/s3cmd.rb -v put bucket:prefix/{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 \;
Change bucket to your real bucket name. If you don't need a prefix, do not include the slash. Adjust the Cache-Control header to taste. ~/s3sync/s3cmd.rb should point to wherever you extracted s3sync.
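To preview what the find invocation will do before touching S3, you can substitute echo for s3cmd.rb so each generated command is printed instead of executed. A sketch with a made-up demo file (the /tmp/s3demo path is just for illustration):

```shell
# Set up a throwaway directory with one CSS file.
mkdir -p /tmp/s3demo/css
echo 'body { color: red; }' > /tmp/s3demo/css/style.css
cd /tmp/s3demo

# Dry run: find substitutes the relative path for every {} in the arguments,
# so each matching file prints the exact put command that would run.
out=$(find * -type f -readable -name \*.css \
  -exec echo s3cmd.rb -v put bucket:prefix/{} {} \;)
echo "$out"
```

Note that substituting {} inside a larger argument like bucket:prefix/{} is a GNU find extension, which is what the script above relies on.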
Update 1: Don't forget to install mime-types if your Linux distro didn't install it by default. Check whether /etc/mime.types exists.
Update 2: s3cmd.rb does not set Content-Type at all (I think the Python version does). Anyway, I wrote a script to redo everything.
#!/bin/sh
BUCKET=  # Set your bucket
PREFIX=  # If you want to use a prefix, set it like PREFIX=blog/
find * -type f -readable -name \*.css -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css \;
find * -type f -readable -name \*.js -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:application/x-javascript \;
find * -type f -readable -name \*.png -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/png \;
find * -type f -readable -name \*.gif -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/gif \;
find * -type f -readable \( -name \*.jpg -o -name \*.jpeg \) -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/jpeg \;
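The five find invocations differ only in the extension-to-Content-Type mapping. One way to factor that out would be a small helper like this, a hypothetical refactoring that is not part of the original script:

```shell
# Hypothetical helper: map a file name to the Content-Type values used above.
content_type() {
  case "$1" in
    *.css)        echo "text/css" ;;
    *.js)         echo "application/x-javascript" ;;
    *.png)        echo "image/png" ;;
    *.gif)        echo "image/gif" ;;
    *.jpg|*.jpeg) echo "image/jpeg" ;;
    *)            echo "application/octet-stream" ;;
  esac
}

content_type wp-includes/js/jquery/ui.core.js
```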
Update 3: I just realized CloudFront does not gzip files, so I rewrote my script to force gzip encoding on CSS and JS files.
#!/bin/sh
BUCKET=  # Your bucket
PREFIX=  # If you want to use a prefix, set it like PREFIX=blog/
S3CMD=/home/user/s3sync/s3cmd.rb  # Your absolute path to s3cmd.rb
find * -type f -readable -name \*.css -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
  $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css Content-Encoding:gzip" \;
find * -type f -readable -name \*.js -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
  $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:application/x-javascript Content-Encoding:gzip" \;
find * -type f -readable -name \*.png -exec $S3CMD -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/png \;
find * -type f -readable -name \*.gif -exec $S3CMD -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/gif \;
find * -type f -readable \( -name \*.jpg -o -name \*.jpeg \) -exec $S3CMD -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/jpeg \;
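The trick here is that gzip -c writes the compressed stream to stdout, so the source file on disk is left untouched; only the /tmp copy that gets uploaded is compressed, and the Content-Encoding:gzip header tells browsers to decompress it. A quick round-trip check (file paths are illustrative):

```shell
# Compress a copy to /tmp without modifying the original file...
echo 'body { color: red; }' > /tmp/style.css
gzip -9 -c /tmp/style.css > /tmp/s3tmp

# ...and confirm the compressed copy decompresses back to the same bytes.
gzip -d -c /tmp/s3tmp
cat /tmp/style.css
```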
Update 4: 4/1/2009
I added the ability to copy a single file or a single directory.
#!/bin/sh
# Note: the original used [[ -n $1 ]], a Bash-ism that is not portable under /bin/sh.
if [ -n "$1" ]; then
  LOC="$1"
else
  LOC="*"
fi
BUCKET=  # Your bucket
PREFIX=  # If you want to use a prefix, set it like PREFIX=blog/
S3CMD=/home/user/s3sync/s3cmd.rb  # Your absolute path to s3cmd.rb
find $LOC -type f -readable -name \*.css -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
  $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css Content-Encoding:gzip" \;
find $LOC -type f -readable -name \*.js -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
  $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:application/x-javascript Content-Encoding:gzip" \;
find $LOC -type f -readable -name \*.png -exec ${S3CMD} -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/png \;
find $LOC -type f -readable -name \*.gif -exec ${S3CMD} -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/gif \;
find $LOC -type f -readable \( -name \*.jpg -o -name \*.jpeg \) -exec ${S3CMD} -v put $BUCKET:$PREFIX{} {} \
  x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/jpeg \;
For example, if you saved this script to a file named cloudfront:
cd wordpress
cloudfront wp-content/uploads
Without any command-line argument, this script will upload all files under the current directory.
Passed all He.net IPv6 tests
A lot of cloud buzz today
I have used the Ubuntu 8.10 EC2 release for a while, and I am actually very impressed by the beta version of the Ubuntu cloud image. It is a very lean build of Ubuntu Server: the default configuration runs very fast, with a minimal number of services compared to CentOS or Red Hat.
Meanwhile, stock market indexes fell to 1997 levels, and the Nasdaq took the lead.