Using the Ruby Fog library to connect with Eucalyptus

There is a popular Ruby cloud library called Fog, and I’ve seen a number of applications use it for connecting to AWS EC2. I’ve done some testing, and it is pretty simple to get Fog talking to Eucalyptus. The first things you need are your access ID and secret key, much like you’d get from EC2; these are available on the credentials tab in Eucalyptus. The other thing you need to specify is the endpoint. In the case of Eucalyptus, that will point to the cloud endpoint for your private cloud (or, in this example, the Eucalyptus Community Cloud).

This is an example credentials file, stored in ~/.fog

#######################################################
# Fog Credentials File
#
:default:
  :aws_access_key_id:       IyJWpgObMl2Yp70BlWEP4aNGMfXdhL0FtAx4cQ
  :aws_secret_access_key:   7DeDGG2YMOnOqmWxwnHD5x9Y0PKbwE3xttsew
  :endpoint:                http://ecc.eucalyptus.com:8773/services/Eucalyptus

Notice that the Eucalyptus endpoint requires port 8773 and the “/services/Eucalyptus” path.
You can use the Fog interactive tool to test this out. Notice we’re using the AWS compute provider, because the Eucalyptus cloud is API-compatible with EC2.

# fog
  Welcome to fog interactive!
  :default provides AWS and AWS
>> servers = Compute[:aws].servers
  <Fog::Compute::AWS::Servers
    filters={}
    []
  >
>>

As you can see, there are no servers running. By replacing “servers” with “images”, you can show a list of the images available on the ECC.
To start an instance, you can run a command like this:

>> servers = Compute[:aws].servers.create(:image_id => 'emi-9ACB1363', :flavor_id => 'm1.small')
  <Fog::Compute::AWS::Server
    id="i-3D7A079C",
    ami_launch_index=0,
    availability_zone="open",
    block_device_mapping=[],
    client_token=nil,
    dns_name="euca-0-0-0-0.eucalyptus.eucasys.com",
    groups=["default"],
    flavor_id="m1.small",
    image_id="emi-9ACB1363",
    kernel_id="eki-6CBD12F2",
    key_name=nil,
    created_at=Wed Oct 19 15:19:16 UTC 2011,
    monitoring=false,
    placement_group=nil,
    platform=nil,
    product_codes=[],
    private_dns_name="euca-0-0-0-0.eucalyptus.internal",
    private_ip_address=nil,
    public_ip_address=nil,
    ramdisk_id="eri-A97113E4",
    reason="NORMAL:  -- []",
    root_device_name=nil,
    root_device_type=nil,
    state="pending",
    state_reason=nil,
    subnet_id=nil,
    tenancy=nil,
    tags=nil,
    user_data=nil
  >
>> servers = Compute[:aws].servers
  <Fog::Compute::AWS::Servers
    filters={}
    [
      <Fog::Compute::AWS::Server
        id="i-3D7A079C",
        ami_launch_index=0,
        availability_zone="open",
        block_device_mapping=[],
        client_token=nil,
        dns_name="euca-0-0-0-0.eucalyptus.eucasys.com",
        groups=["default"],
        flavor_id="m1.small",
        image_id="emi-9ACB1363",
        kernel_id="eki-6CBD12F2",
        key_name=nil,
        created_at=Wed Oct 19 15:19:16 UTC 2011,
        monitoring=false,
        placement_group=nil,
        platform=nil,
        product_codes=[],
        private_dns_name="euca-0-0-0-0.eucalyptus.internal",
        private_ip_address=nil,
        public_ip_address=nil,
        ramdisk_id="eri-A97113E4",
        reason="NORMAL:  -- []",
        root_device_name=nil,
        root_device_type=nil,
        state="pending",
        state_reason={},
        subnet_id=nil,
        tenancy=nil,
        tags={},
        user_data=nil
      >
    ]
  >

You’ll notice that now the instance (which is in “pending” state) appears in the list of servers.
I won’t show more here, but we’ve established basic connectivity to a Eucalyptus cloud. I hope this helps enable many existing and new applications to work with Eucalyptus!


Migrating an EC2 AMI to Eucalyptus

There have been various sets of instructions for using an image from Amazon’s EC2 on a local Eucalyptus cluster. This is what worked best for me.

The basic steps are: launch an instance of the AMI, run euca-bundle-vol with your Eucalyptus credentials, upload the bundle, and register it. While it would be possible to use the download-bundle/un-bundle method detailed in this post, that only works with images that your account created. The use case I’m addressing here is getting starter images for building custom images within your private cloud. Another use case is duplicating custom images from a private to a public cloud for cloud-bursting. That’ll be covered in another post.

Specifically, I’m converting ami-1a837773 (Ubuntu-Maverick-32bit):

ec2-run-instances ami-1a837773 -k dak-keypair

When that boots, scp the credentials zip file that you got from the ECC (or your own cloud) to the instance (i.e. scp -i dak-keypair euca2*.zip ubuntu@50.16.60.6:.). (UPDATE: my image didn’t have zip installed, so I repackaged the zip as a tar.gz.) Because Ubuntu images don’t allow root login, we can only copy files into the user directory. Ideally, we don’t want credentials on the root filesystem, because they’ll end up in the bundle. So, the first thing we’ll need to do after logging into the instance is move the zip file to the /mnt directory (the ephemeral store). (There are additional security concerns that may apply. This post at alestic.com covers them well.)

On the instance:

sudo mv euca2*.zip /mnt
cd /mnt
sudo unzip euca2*.zip
source eucarc

To bundle/upload the image, you’ll need the euca2ools. There are some instructions here that help. This Maverick image already has them installed.

If the image has a default kernel specified (as this Maverick one does), that AKI ID won’t work on Eucalyptus. For the ECC, looking at the list of images shows that many of them specify the eki-6CBD12F2 kernel, so I will also use that when overriding the EC2 kernel. If you run your own Eucalyptus installation, it is easy to get the default kernel ID via the management interface on the “Configuration” tab. Take note of the ramdisk ID also, since that goes hand-in-hand with the kernel.

In the case of a private Eucalyptus installation, network restrictions probably won't allow the EC2 instance to upload to Eucalyptus directly. One way around that is to download a gzipped image to your local machine and run euca-bundle-image there prior to uploading. That is time consuming, and since I'm working with the ECC here, all of the operations can be run on the EC2 instance:
sudo -E euca-bundle-vol -p Ubuntu-10.10-Maverick-32bit -s 2048 -d /mnt -r i386 --kernel eki-6CBD12F2 --ramdisk eri-A97113E4
euca-upload-bundle -b dak-images -m /mnt/Ubuntu-10.10-Maverick-32bit.manifest.xml
euca-register dak-images/Ubuntu-10.10-Maverick-32bit.manifest.xml

At this point, you should be all set to launch the image.

Footnote: I've tested this with a Maverick S3-backed AMI and a Lucid EBS-backed AMI.

A New Adventure

My professional career has been spent at two companies, Eastman Kodak and D.O.Tech/directThought (rebranded 9 years in). At Kodak, I worked on blood analyzers (which they spun off to J&J), Photo CD (which was made obsolete by newer technologies) and Picture Maker, which is still going strong after N generations of hardware/software. At directThought, I had the joy of working with a lot of great people and working on some interesting projects. I worked on a Picture Maker-like kiosk/web-app/desktop app combination at Xerox. They even created a new division for that project called Pixography. We had XML templates that described printed products like greeting cards, calendars, business cards, brochures and photo books (to name a few). Java 2D rendered everything for print and preview. We had tight integration between the three different apps. That project died in its original form, but lived on in spirit in a custom production printing installation out on the west coast. After that, I worked on some enterprise apps for Pfizer, a payroll application for Paychex, then back to more custom apps for the services arm of Xerox. At that point, I got involved in Amazon Web Services and started kicking the tires on this new service called EC2. During that time, I started my most successful open source project, called typica, which still has a lot of users. After Xerox, I helped a number of customers run their apps on AWS’s infrastructure. We were fortunate enough to become an inaugural AWS System Integrator. I was also asked to learn how to write apps for this hot new platform called the iPhone. I’ve had a couple of apps in the app store, and worked on a few more. I also got to go to the only WWDC where Steve didn’t deliver the keynote (because he was getting a new liver). All in all, a pretty great experience with many interesting technologies under my belt.

Now, I feel like it is time for a change. I’ve just accepted a job with Eucalyptus Systems! They build infrastructure that powers clouds. They have a lot of great people working there and I am looking forward to doing my part to help the company grow, if not flourish in this exciting space. Since they just started business last year, I can say I’m now part of a fast growing startup! Very excited!

How to build a local NAS backed by Amazon S3

A previous post talked about my need for some local, reliable storage in my home. That project led to investigating some other options. Since I’m a big fan of Amazon S3, it seemed like something I should involve in my storage solution. The Elastician (Mitch Garnaat) and I bought the same hardware and are working through the setup together. Here’s the rundown of the hardware, including costs:

Cooler Master Elite 360 m-ATX ATX Mid/Mini Tower Case with 350-Watt Power Supply RC-360-KKR1 $56.97
Gigabyte Core 2 Quad/Intel G41/DDR2/A&V&GbE/MATX/DualBIOS Motherboard GA-G41M-ES2L $56.99
Intel Pentium E5300 2.6GHz 2M L2 Cache 800MHz LGA775 Desktop Processor $66.99
Corsair XMS2 4 GB (2 X 2 GB) PC2-6400 800 MHz 240-PIN DDR2 Dual-Channel Memory Kit – TWIN2X4096-6400C5 $94.99
Western Digital 1 TB Caviar Green SATA Intellipower 64 MB Cache Bulk/OEM Desktop Hard Drive WD10EARS $54.49 * 2
Kingston DataTraveler 112 – 8 GB USB 2.0 Flash Drive DT112K/8GBCL (Black) $13.93 * 2
RadioShack® Molex® to SATA Power Cable $2.99

My previous post discusses the hardware in more detail and some of the choices. Here’s a picture of the inside of the case once things were assembled. The observant among you will notice that one of the drives doesn’t have power. That’s because the case power supply didn’t have 2 SATA power connectors and the adapter cable was on order when this picture was taken. I’ll also point out that this case isn’t ideal for mounting several 3.5″ drives. With adapters, I can fit 4 in there, true. However, shopping around for something more to my liking is something I’d do differently next time.

Thinking about the software to run on the NAS led to several projects, including FreeNAS and OpenFiler. We decided to go with something we’re familiar with: Ubuntu. Ubuntu has instructions on their download page for creating a bootable flash drive. I tried the Mac OS X method and failed, so I resorted to the tool from pendrivelinux.com on the family Windows box. The Universal USB Installer they have works well and created good, bootable flash drives every time.

Creating a Bootable Flash Drive

I tried the Ubuntu Server download, but that seems to be geared towards jumpstarting a server install vs running right off the flash drive. The Ubuntu Desktop was much more to my liking.

To get things going, I needed to connect a mouse/keyboard/monitor. Once I configured the BIOS to boot from the USB HDD, it recognized the bootable flash drive and started bringing Ubuntu up. It seemed to take forever to boot. I hit “escape” to watch the console and found that it was timing out on the floppy drive, which I don’t have. I went into the BIOS settings to let it know there wasn’t a floppy drive attached, and boot time went WAY down! I let the desktop come up, but since this is an install image, changes made aren’t saved. Having the 2nd flash drive comes in very handy now! Plug it into another USB port before proceeding. Select the “System”->”Administration” menus, then the “Install Ubuntu…” option. There are a few steps in the install wizard that require special mention. On step 4, select “erase and use the entire disk”, and select your flash drive (not one of the hard drives!). In step 5, after you’ve entered the required information, select “log in automatically”, which will help when running headless later. Now the most critical part: step 7 has an “advanced” button you need to click. Make sure you select the proper device, because it defaults to /dev/sda (the first hard drive). You need to select /dev/sdd, which is the last device connected (the target flash drive). Let the install proceed and you’ll have a bootable Ubuntu image we can start configuring.

Remote Desktop for Administration

Once it was up, I could use the desktop to configure Remote Desktop. After playing with the default VNC server, it seemed like the wrong option: it didn’t run unless I had a monitor attached. I did some digging and found that TightVNC is a popular alternative. There are a few steps to getting it installed and running at boot, detailed here.

For another means of access, it’s a good idea to install SSH (“apt-get install openssh-server”).

Configuring the RAID

The Disk Utility also has a menu option to configure the RAID. It uses mdadm, but I heard some folks talking about using lvm. Linux Mag has an article that talks about both. I decided to go with the built-in option.

Run “apt-get install mdadm” in a terminal window. You can then use Disk Utility (on the “System”->”Administration” menu). One thing I noticed is that if you play around with the RAID config or do your own partitioning of the drives, the RAID wizard isn’t really happy about using those drives. If this is the case, select each drive and then “Format Drive”. Select the “Don’t Partition” option to reset the drive state. You’ll find that you can now select the drives in the RAID setup wizard.

I’ve set the drives up in a RAID 0 config. Prior to doing this, I did a performance test on a single drive and got an average read rate of 84MB/sec. Once the RAID was configured and formatted, I ran the same performance test and got a read rate of 155MB/sec, which is approaching double the speed! Now that’s what I was hoping for!

To get the RAID started at boot time, edit the /etc/mdadm/mdadm.conf file and replace the existing “DEVICE” line with these two lines:

DEVICE /dev/sda1 /dev/sdb1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1 auto=yes

Next, run “dpkg-reconfigure mdadm” and accept the defaults. Thanks to goldfisch.at for the help.

Now, to get it mounted, add this line to /etc/fstab:

/dev/md0	/media/RAID	ext4	rw,nosuid,nodev,uhelper=udisks	1	2

I might have been able to say “defaults” in that options column, but I took what was there when I mounted the RAID manually using the disk utility.

Sharing the Storage

Initially, I’m setting up Samba to share with my household machines. I found this article at ubuntu.com to help me. I’m concerned with privacy, not because I don’t trust my family, but because I plan on backing up my laptop and I don’t want others messing with my files.

I created a “data” directory on the RAID drive. Right-click on that folder and select “Sharing Options”. This brings up a dialog, and if you click “Share this folder”, you’ll get prompted to install some packages (do it!). I discovered that I needed to use “smbpasswd” to set the share password. I’ll probably need to do this for each user I create to access the RAID.

The Amazon S3 Backup

For the Amazon S3 backup part, we’ve tossed around a number of different options. S3sync isn’t bad, but it doesn’t allow for threaded uploads, and there’s the issue of how often to kick it off. We asked, “what about running an S3-based filesystem and doing a RAID 1 on top of that and the RAID 0 local drives?” That might be OK, but how about traffic control? What block size do we use, and what penalty do we pay for a larger block size when storing small files? Where do we store the local cache? Do we even want a local cache, since we have a local disk array? Along those lines, we looked at S3Backer and others.

What is the solution when you don’t really think the available options are great? Write your own! We think we can write a daemon tied into filesystem notifications (pyinotify) and use boto for the S3 part. Stay tuned… I smell another open source project!

Amazon Simple Notification Service

Amazon has just come out with yet another service to help build your app on AWS. Their Simple Notification Service (SNS) is a pub/sub setup where you create topics and users can subscribe to them. Delivery is via a “push” mechanism, so subscribers won’t need to poll for new messages. Output can be sent over one of several protocols: http, https, email, email-json or sqs. While the e-mail output can be useful for things like notifying users watching a comment or blog post, the other options are clearly geared towards consumption by other software. Imagine the http options being used to implement a web service callback. SQS is clearly helpful for building loosely coupled services in the cloud; now SNS can help feed into those services.
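
To make that callback idea a bit more concrete, here is a rough sketch (my own illustration, not something from the SNS docs or typica) of what an http subscriber could look like as a Java servlet. SNS first posts a SubscriptionConfirmation message, which the endpoint confirms by visiting the included SubscribeURL; after that it posts Notification messages. The class name and the crude string extraction are placeholders, so treat this as an outline rather than a finished handler.

import java.io.BufferedReader;
import java.io.IOException;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SnsCallbackServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // SNS labels each POST with a message type header
        String type = req.getHeader("x-amz-sns-message-type");
        StringBuilder body = new StringBuilder();
        BufferedReader in = req.getReader();
        for (String line; (line = in.readLine()) != null; ) {
            body.append(line);
        }
        if ("SubscriptionConfirmation".equals(type)) {
            // visit the SubscribeURL to confirm the subscription
            String url = extract(body.toString(), "SubscribeURL");
            if (url != null) {
                new URL(url).openStream().close();
            }
        } else if ("Notification".equals(type)) {
            System.out.println("SNS message: " + extract(body.toString(), "Message"));
        }
        resp.setStatus(HttpServletResponse.SC_OK);
    }

    // crude JSON field extraction; a real handler should use a proper JSON parser
    private static String extract(String json, String field) {
        Matcher m = Pattern.compile("\"" + field + "\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }
}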

SNS overview

For more information, visit the SNS documentation.
Jeff Barr also does an excellent job of describing SNS at the AWS blog.

typica now supports SNS. Subversion contains the latest code. A release will be coming shortly. (check this space for updates)

Here’s an example of how to use typica to create a topic, subscribe, send a message, then unsubscribe and remove the topic:

// "props" is a java.util.Properties holding your AWS credentials; TEST_MSG is just a sample message body
NotificationService sns = new NotificationService(props.getProperty("aws.accessId"), props.getProperty("aws.secretKey"));
String TEST_MSG = "hello from typica!";
Result<String> ret = sns.createTopic("TestTopic");
String topicArn = ret.getResult();
System.err.println("topicArn: "+topicArn);

sns.subscribe(topicArn, "email", "dkavanagh@gmail.com");
System.out.println("Waiting till subscription is confirmed.");
System.out.println("Check your e-mail, confirm, then press <return>");
System.in.read();

List<SubscriptionInfo> subs = sns.listSubscriptionsByTopic(topicArn, null).getItems();
String subArn = subs.get(0).getSubscriptionArn();
System.err.println("subscriptionArn: "+subArn);
sns.publish(topicArn, TEST_MSG, "[SNS] testing...");

sns.unsubscribe(subArn);
sns.deleteTopic(topicArn);

Persistent Counters in SimpleDB

I’ve already discussed the new consistency features of Amazon SimpleDB. One of the things people have wished for in SimpleDB is a way to manage a universal counter, something similar to an auto-incrementing primary key in MySQL. The consistency features allow clients to implement such a thing very easily. The following is the algorithm:

Read the current value
Write value+1, but only if the value is still what we just read
If the write failed, re-read the value and try again
Otherwise, return the new value

To make it easy for Java programmers to do this with typica, I’ve added a Counter class. Usage is very simple, as you can see in this example:

SimpleDB sdb = new SimpleDB("AccessId", "SecretKey");
Domain dom = sdb.createDomain("MyDomain");
Counter c = new Counter(dom, "counter1");
for (int i=0; i<20; i++) {
	System.err.println("next val = "+c.nextValue());
}

This code creates a counter and initializes it if there isn’t a current value. It uses an Iterator-like interface, but there is no test for a next value because there always is one. The Counter object is stateless, so it relies entirely on SimpleDB for its value. This will work very well on multiple app servers, all relying on the same counter for unique values.

To avoid this blog getting out of date, I won’t include the counter code here; rather, you can browse it in SVN.

Code has been checked into SVN as r311. I’ll update this post once the new version of typica that includes this is released.

For those seeking a more pythonic version, have a look here.

Eventually Consistent, or Immediately with SimpleDB

Amazon SimpleDB is a service that provides a schema-less data store with some fairly simple query abilities. One of the catches has always been that when you put a piece of data in, you might not get it back in a query right away. That time delay is generally very short (like < 1 second), but there are no guarantees. The cause of this goes back to the fundamental tradeoffs in highly available and redundant systems, such as those Amazon builds. Werner Vogels does a pretty good job of laying out the tradeoffs in his “Eventually Consistent” blog post, and others he links to. Essentially, it’s the CAP theorem, which says you can have only two of Consistency, Availability and Partition tolerance.
Using SimpleDB has required an understanding of how inconsistent results will affect your application. Mostly, it has been important that the application never rely on data being there immediately. This can cause problems when trying to give the user completely up-to-date information.

SimpleDB now supports consistent reads and conditional puts and deletes. There is a cost for consistency, which is potentially higher latency. Let’s take a look at the new features.
The simplest improvement is in the Select and GetAttributes calls. Supplying the “ConsistentRead=true” parameter ensures consistent data is returned. Now SimpleDB is an option for storing application state: a regular Put can be used, and a consistent read will always get the current state.
What is far more interesting is what has been done with put and delete. PutAttributes has some optional parameters that define a condition that must be met to allow the put to continue. In the request, you can define an expected value for some attribute, or specify that the attribute must not already exist. One application for conditional put is a counter. Imagine an item that has a counter attribute. To increment the counter, simply read the value, then do a conditional put, specifying the new value but only if the attribute still holds the value you just read. The request will fail if another writer got there first. A retry loop is required, as in this pseudo-code:

value = read(counter);
while (put counter=value+1, if counter==value fails) {
    value = read(counter);
}
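
To make that loop concrete, here is a minimal sketch in Java. It uses the AWS SDK for Java rather than typica (the typica route is what the Persistent Counters post above covers), the domain, item and attribute names are placeholders, and it assumes SimpleDB reports a failed condition check with the “ConditionalCheckFailed” error code:

import java.util.Collections;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.simpledb.AmazonSimpleDBClient;
import com.amazonaws.services.simpledb.model.GetAttributesRequest;
import com.amazonaws.services.simpledb.model.GetAttributesResult;
import com.amazonaws.services.simpledb.model.PutAttributesRequest;
import com.amazonaws.services.simpledb.model.ReplaceableAttribute;
import com.amazonaws.services.simpledb.model.UpdateCondition;

public class ConditionalCounter {
    public static void main(String[] args) {
        AmazonSimpleDBClient sdb = new AmazonSimpleDBClient(
                new BasicAWSCredentials("AccessId", "SecretKey"));
        String domain = "MyDomain", item = "counter1", attr = "value";
        while (true) {
            // consistent read of the current value (the item holds just this one attribute)
            GetAttributesResult res = sdb.getAttributes(
                    new GetAttributesRequest(domain, item).withConsistentRead(true));
            boolean exists = !res.getAttributes().isEmpty();
            long current = exists ? Long.parseLong(res.getAttributes().get(0).getValue()) : 0;
            String next = String.valueOf(current + 1);
            // the put only succeeds if the value is unchanged (or still missing)
            UpdateCondition expected = exists
                    ? new UpdateCondition().withName(attr).withValue(String.valueOf(current))
                    : new UpdateCondition().withName(attr).withExists(false);
            try {
                sdb.putAttributes(new PutAttributesRequest(domain, item,
                        Collections.singletonList(new ReplaceableAttribute(attr, next, true)),
                        expected));
                System.out.println("next val = " + next);
                break;
            } catch (AmazonServiceException e) {
                // another writer got there first; loop around and re-read
                if (!"ConditionalCheckFailed".equals(e.getErrorCode())) {
                    throw e;
                }
            }
        }
    }
}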

The same conditions can be applied to the delete operation. In a future post, I’ll talk about how to use typica to access these new features from Java. (added! https://coderslike.us/2010/03/09/persistent-counters-in-simpledb/)