Deploying the Eucalyptus Management Console on Eucalyptus

The Eucalyptus Management Console can be deployed in a variety of ways, but we’d obviously like it to be scalable, highly available and responsive. Last summer, I wrote up the details of deploying the console with Auto Scaling coupled with Elastic Load Balancing. The CloudFormation service ties this all together by capturing the details of how these services are used in a single template. This post describes an example that works well on Eucalyptus (and AWS) and may serve as a guide for your own applications as well.

Let’s tackle a fairly simple deployment for the first round. For now, we’ll set up a launch configuration, an Auto Scaling group and an ELB. We’ll also set up a security group for the Auto Scaling group and allow access only from the ELB. Finally, we’ll set up a self-signed SSL cert for the console. In another post, we’ll add memcached and a CloudWatch alarm to automate scaling the console.

Instead of pasting pieces of the template here, why not open the template in another window? Under the “Resources” section, you’ll find the items I listed above. Notice that “ConsoleLaunchConfig” pulls some values from the “Parameters” section, such as KeyName, ImageId and InstanceType. Also used is “CloudIP”, which gets included in a cloud-init script that is passed as UserData. Also notice the SecurityGroups section, which refers to the “ConsoleSecurityGroup” defined further down.

Right above that is the “ConsoleScalingGroup”, which pulls in the launch config we just defined. Next, “ConsoleELB” defines an ELB that listens for HTTPS traffic on port 443 and talks to port 8888 on the auto-scaled instances. It also defines a simple health check to verify that the console process is running.

The “ConsoleSecurityGroup” uses attributes of the ELB to allow ingress on port 8888 only from the ELB’s security group. We also allow SSH ingress from a provided CIDR via “SSHLocation”.

To automate deploying the console using this CloudFormation template, I wrote a shell script to pass the required values and create the stack. At the top of the script, there are three values you will need to set based on your cloud: CLOUD_IP is the address of your cloud front end, SSH_KEY is the name of the key pair you’d like to use for SSH access to the instances (if any), and IMAGE_ID must be the EMI ID of a CentOS 6.6 image on your cloud. There are other values you may wish to change just below that; those are used to create a self-signed SSL certificate. This cert will be uploaded to your account and its name passed to the “euform-create-stack” command along with several other values we’ve already discussed.
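If you’d rather drive this from Python with boto instead of the shell script, a rough equivalent is sketched below. The endpoint details and parameter names are assumptions (the parameter names come from the template parameters mentioned above), and the self-signed cert upload that the script performs is not shown; adjust everything to match your cloud and template.

import boto
from boto.regioninfo import RegionInfo

# point boto at the Eucalyptus CloudFormation endpoint (path and port are
# typical Eucalyptus defaults, but verify them for your cloud)
cfn = boto.connect_cloudformation(
    aws_access_key_id='your-access-key',
    aws_secret_access_key='your-secret-key',
    region=RegionInfo(name='eucalyptus', endpoint='10.111.5.35'),
    is_secure=False, port=8773, path='/services/CloudFormation')

with open('console-template.json') as f:
    template_body = f.read()

cfn.create_stack(
    'console-stack',
    template_body=template_body,
    parameters=[
        ('KeyName', 'my-ssh-key'),        # your keypair name
        ('ImageId', 'emi-xxxxxxxx'),      # a CentOS 6.6 EMI on your cloud
        ('InstanceType', 'm1.medium'),
        ('CloudIP', '10.111.5.35'),
    ],
)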

Once you’ve created the stack, you can check its status by running “euform-describe-stacks console-stack”. When the stack is complete, the output section will show the URL to use to connect to your new ELB front end.

To adjust the number of instances in the scaling group, you can use “euscale-update-auto-scaling-group --desired-capacity=N”. Since the template sets the maximum size to 3, you would need to make other adjustments for running a larger deployment.

Check back again to see how to configure a shared memcached instance and auto-scale the console.

Working with Magic Search from Bower

In a previous post, I talked about an AngularJS widget for searching. Now that widget has been refactored a little with an eye towards reuse, and it is available via Bower. There was a little discussion about re-use in the last post, but now the task is much simpler. If you are already using Bower, you can simply add “angular-magic-search” to your bower.json. The widget can then be integrated into your templates and leverages whichever flavor you use for resource location and parameter passing. For example, using Chameleon templates, we do the following:

<link rel="stylesheet" type="text/css" href="${request.static_path('eucaconsole:static/js/thirdparty/magic-search/magic_search.css')}"/>
<script src="${request.static_path('eucaconsole:static/js/thirdparty/magic-search/magic_search.js')}"></script>
<magic-search
  template="${request.static_path('eucaconsole:static/js/thirdparty/magic-search/magic_search.html')}"
  strings="{'remove':'${layout.ms_remove}', 'cancel':'${layout.ms_cancel}', 'prompt':'${layout.ms_prompt}'}"
  facets="${search_facets}" filter-keys="${filter_keys}"></magic-search>

The first two lines pull in the CSS and JS files. The enclosing div tag (not shown above) sets ng-app and contains the magic-search element. Attributes are used to set it up, including the template location, strings (which were run through i18n), as well as the facets and filter keys.

The facets value is a JSON structure that looks like this:

[
 {'name': 'owner_alias',
  'label': 'Images owned by',
  'options':
    [{'key': '', 'label': 'Anyone'},
     {'key': 'self', 'label': 'Me (or shared with me)'}]
 },
 {'name': 'platform',
  'label': 'Platform',
  'options':
    [{'key': 'linux', 'label': 'Linux'},
     {'key': 'windows', 'label': 'Windows'}]
 },
 {'name': 'architecture',
  'label': 'Architecture',
  'options':
    [{'key': 'x86_64', 'label': '64-bit'},
     {'key': 'i386', 'label': '32-bit'}]
 }
]

It is used to populate the facets and is specific to the data being presented. Note that labels should have been run through your i18n function. The filter-keys value is an array of attribute names, like the one below. These are passed with the “textSearch” event so that the code you write to perform live text filtering knows which data values to look at.

['architecture', 'description', 'id', 'name', 'owner_alias', 'platform_name', 'root_device_type', 'tagged_name']

The final piece is to listen for events emitted by the search bar.

$scope.$on('searchUpdated', function($event, query) {
 ...
});
$scope.$on('textSearch', function($event, text, filter_keys) {
 ...
});

In the first function, “query” is a query fragment generated by the search facets. One use may be to append it to a URL for an XHR call that retrieves a new data set from the server. The second function gets the “filter_keys” discussed above and “text”, which is simply any text the user typed that is not part of a pre-defined facet.

Hopefully, this makes it easier to re-use magic-search in your application!

(coming next, magic-search and bootstrap)

Magic Search : facets and text in a single widget for efficient search UX

The Eucalyptus console has historically taken different approaches to search. Early versions simply filtered by text on the client. The next version introduced faceted search by using the Visual Search widget. This worked OK, since we had adopted Backbone.js and used a client-side data model. In an effort to reduce the amount of JavaScript (and data) on the client, we re-built the console from the ground up in 2014. We went with much more server-side processing and routing, along with AngularJS on the client. We used a very basic form-based filter set, which fetched from the server to apply those filters, and we supported text filtering via a small search bar.

In all cases, the search was localized to data we were displaying in a table or grid view (user selectable). Search filters were very much context-sensitive. For each page, we defined a list of columns that text search applied to (and in most cases all columns were specified). We also used the query string in the URL to specify filters, so filters became bookmarkable.

As I write this, we’re finalizing the 4.1 console, which is the second release of the new code base. In planning for version 4.2, we decided we’d like to revisit the faceted search we tried before. The widget we had used required Backbone, and we had no desire to introduce that into an Angular application. I also surveyed other search widgets and didn’t find anything that matched that level of functionality and UX. The decision was made to develop our own Angular widget. We have a feature branch on GitHub (see the link at the end of this post).

[Screenshot: the Magic Search widget]

How We Built It

Our console uses Chameleon templates for basic DOM structure and SASS for styling on top of Foundation. We started by creating a widget template which defines the magic search layout. It is decorated with a single Foundation element to make it full-width, as well as a number of Angular attributes. The Angular controller is what primarily drives the functionality of the search. A Foundation dropdown menu is used to display search facets and value lists. Angular’s ng-repeat is used to render selected facets and items on the dropdown. The controller simply maintains lists of these things, which are displayed as needed.

How to Use It

To actually use the magic search, you must initialize the controller with a list of facets in JSON and a list of filter keys (which are the columns used for text filtering). The magic search bar emits events when search actions are performed. An event called “searchUpdated” is emitted with a query string when filter facets are changed. An event called “textSearch” is emitted when text search changes. Live text filtering is supported by emitting this event for each character typed in the search input which does not match a facet.

The application can choose what to do with those two events. In our case, we use an XHR call to populate our tables. When the “searchUpdated” event is received, a new XHR call is made using the query string, which causes a server-side fetch using the new filter values. Our application responds to the “textSearch” event by live-filtering the existing list, using the filter keys to inspect the objects that make up the list.

Usability

We paid a lot of attention to usability in our design. I worked with Jenny Loza to refine all of the user interaction. The user is initially presented with a blank search bar and some placeholder text. Upon clicking anywhere in the search bar, the list of facets is presented and focus is set for text input. The user may select a facet with the mouse or type. If they type and the text matches any text in the facets, the list of facets will be filtered (and matching text bolded). This lets the user use a few mouse clicks to filter items, or continue entirely with the keyboard. If the user chooses the keyboard, typing and tabbing will get them through as many facets as they wish to select (including text search). Alternatively, a user may select facets and values entirely with the mouse (excluding text search). Each facet can be removed by clicking a small X in the facet box. The entire search bar can be cleared by selecting an X to the far right. We chose not to support edit-in-place for existing facets since it is very simple to remove and add facets.

The magic search bar allows multiple facets and those are combined to reduce the set of results (using “and”). If the user selects multiple values for a single facet, those results are combined (using “or”). We find this very intuitive.

[Screenshots: the Magic Search bar in use]

How Your Project Might Use It

I recognize that not everybody will use Foundation, or the same server-side templating (or even SASS). Here are some ways you could approach re-use in some form. In place of Foundation, you could use Bootstrap dropdowns. They use the same DOM structure as Foundation’s and similar activation, so this would be an easy switch. Note that the “hideMenu()” function in the Angular controller uses a Foundation call to close the dropdown, so that would need to be replaced as well.

Our reliance on a server-side template is very minimal. Two places insert hrefs for resources (the CSS and JS files). The other two template references send initialization values to the Angular controller. You could replace those fairly easily in your own application.

The last thing is SASS. We check in the generated CSS file, so you could simply use that instead of our .scss file. The only external reference our .scss file uses is a dark grey color used more widely in our application. Additionally, our application defines item-list and item classes which are used for facet display. Those can be found as a SASS mix-in here. It’s likely there are other classes in the widget that were re-used either from our application or from Foundation’s own styles. I’ll try to document any further exceptions. Please note we use two SVG icons from Foundation, fi-filter and fi-x.

Getting Involved

This feature is still in development, though we think there isn’t much left to change. I do expect to find bugs which we’ll fix in the 4.2 development process. The code can be viewed in the Pull Request where you can also comment. The key files are magic_search.pt, magic_search.js and magic_search.scss. There is a ticket which we use to track this feature. It includes a list of test criteria that we’ll use to create functional tests, and it should give you a better idea of the capabilities built into this widget. Feel free to contact me with any comments, suggestions, concerns or otherwise.

https://github.com/eucalyptus/eucaconsole/tree/magic_search

Running the Eucalyptus Management Console on Eucalyptus with the triangle services

At Eucalyptus, we’ve leveraged the existing compute infrastructure to deploy some new services. For example, ELB and our imaging service use workers that run as instances. This is useful because the cloud administrator doesn’t need to configure new machines to handle these tasks. The workers can be dynamically provisioned on top of existing infrastructure, and that’s what cloud is all about! The management console can be deployed on top of Eucalyptus as well. In fact, using ELB and Auto Scaling, we can provide a single service endpoint for users and run a scalable back-end behind it. Since Eucalyptus provides RHEL/CentOS packages, I started by installing a CentOS 6 image from http://emis.eucalyptus.com/. This image includes cloud-init, so I can very easily provision the console on an instance with user data. Here is the cloud-init script you would supply in user data. The one value that needs to be adjusted for your install is the cloud IP address (10.111.5.35).

#cloud-config
# vim: syntax=yaml
#
# This config installs the eucalyptus and epel repos, then installs and
# configures the eucaconsole package
runcmd:
 - [ yum, -y, install, "http://downloads.eucalyptus.com/software/eucalyptus/nightly/4.0/centos/6/x86_64/eucalyptus-release-4.0-0.1.el6.noarch.rpm" ]
 - [ yum, -y, install, eucaconsole ]
 - [ sed, -i, "s/localhost/10.111.5.35/", /etc/eucaconsole/console.ini ]
 - [ service, eucaconsole, restart ]

Here are some commands you can run with euca2ools to set things up. First, assume the above script is stored in a file called “console-init”.

eulb-create-lb -z PARTI00,PARTI01 -l "lb-port=80, protocol=HTTP, instance-port=8888, instance-protocol=HTTP" console-lb

The cloud I used has two clusters, which appear in the -z argument above. I also set up port 80 on the ELB to talk to port 8888 on the instances. We could also set up port 443 and SSL termination instead. Now, run “eulb-describe-lbs console-lb --show-long” and you’ll notice the owner-alias and group-name values. That’s the internal security group you’ll need to authorize port 8888 ingress for, which ensures the instances accept console traffic only from the ELB. Run the euca-authorize command using the owner-alias and group-name (e.g. euca-authorize -P tcp -p 8888 -o euca-internal-276586128672-console-elb -u 641936683417 console-as-group).

euscale-create-launch-config -i emi-22536a68 -t m1.medium --group console-as-group --key dak-ssh-key --monitoring-enabled -f console-init console-launch-config

The launch config needs the CentOS 6 EMI ID. I also used an m1.medium, since it gets more memory but still a single CPU. You can certainly dedicate more resources to individual instances as you see fit. Specifying an SSH key is optional, but handy for debugging if things go pear-shaped.

euscale-create-auto-scaling-group -l console-launch-config -m 1 --desired-capacity 2 --max-size 4 --grace-period 300 -z PARTI00,PARTI01 --load-balancers console-lb consolegroup

The Auto Scaling group ties things together. After the last command runs, you should see two instances pending. Once those are up, “eulb-describe-instance-health console-lb” will show you the state of the instances from an end-user perspective. An “InService” instance can handle requests going through the ELB, whereas “OutOfService” instances may still be installing/configuring per cloud-init. The grace period determines how long the scaling group waits for those to be ready. There is a lot more we could do with CloudWatch data and Auto Scaling. For now, this setup will let you manually adjust the number of instances you dedicate to the console scaling group. You can point your browser to the ELB DNS name and see the console login screen!

Extra Credit

Let’s set up SSL termination for the ELB. You can either use certs you already have or generate self-signed ones. Here are the commands to generate a self-signed cert:

openssl genrsa 2048 > myssl.pem
openssl req -new -key myssl.pem -out csr.pem
openssl x509 -req -in csr.pem -signkey myssl.pem -days 365 -sha512 -out myssl.crt
chmod 600 myssl.*

Now you have the key and cert you need; the csr.pem file can be discarded. Next, upload the cert:

euare-servercertupload -s myssl --certificate-file myssl.crt --private-key-file myssl.pem

To get the ARN for this cert, run “euare-servercertgetattributes -s myssl”

Now, add the listener to the ELB

eulb-create-lb-listeners console-lb --listener "protocol=HTTPS,lb-port=443,instance-port=8888,instance-protocol=HTTP,cert-id=arn:aws:iam::276586128672:server-certificate/myssl"

Now you can use the console with HTTPS! To see details of the ELB, run “eulb-describe-lbs console-lb --show-long”. You might want to remove the port 80 listener. To do that, type “eulb-delete-lb-listeners -l 80 console-lb”.

 

Adventures in memcached integration

As developers, we sometimes run into problems that are somewhat… challenging. That’s part of the fun of writing code though. I like trying to find clever ways to solve a problem. This was the case when trying to integrate memcached into the Eucalyptus Management Console.

Version 4.0 of the console uses Gunicorn, which uses separate worker processes to handle requests. To implement any kind of effective caching, we’d need a shared cache, and memcached is a pretty obvious choice. Since we were using Pyramid, Beaker seemed like an obvious option. Beaker does have support for memcached, but as the author points out, dogpile.cache is a much better choice as a cache interface library. Dogpile.cache has backends for memcached, Redis and others, which allows for some more interesting choices architecturally.

Our application uses boto to talk to both Eucalyptus and AWS. To start with, we wanted to cache image lists, since they don’t change often and they can be fairly large. Dogpile.cache has regions you configure (generally for different expiration times). We set up short_term, long_term and others for our application. While working on a prototype for this, I ran into two main issues, which I’ll cover in detail: pickle doesn’t handle all boto object graphs, and cached data needs to be invalidated.
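For context, setting up the regions looks roughly like this. It’s a minimal sketch: the memcached URL and expiration times are placeholders, and euca_key_generator is the custom key generator described later in this post.

from dogpile.cache import make_region

# regions use a custom key generator so keys can be re-created later for invalidation
short_term = make_region(function_key_generator=euca_key_generator)
long_term = make_region(function_key_generator=euca_key_generator)

# 'dogpile.cache.memcached' is the backend that uses python-memcached
short_term.configure(
    'dogpile.cache.memcached',
    expiration_time=60,                      # placeholder: one minute
    arguments={'url': ['127.0.0.1:11211']},
)
long_term.configure(
    'dogpile.cache.memcached',
    expiration_time=60 * 60,                 # placeholder: one hour
    arguments={'url': ['127.0.0.1:11211']},
)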

Pickled Botos

We have an array of boto.ec2.image.Image objects that need to be cached. The memcached backend for dogpile.cache can use one of a few Python interfaces to memcached; I chose python-memcached. It pickles the data before sending it to the memcached server. For those who don’t know, pickling is a way to encode Python data and can be used to marshal and unmarshal object graphs. Anyway, some boto objects don’t marshal very well. I ran into this about two years ago when working on the first version of the console, which used the JSONEncoder to send JSON versions of the boto objects to the browser as AJAX responses. I had to write my own JSONEncoder to handle the objects that didn’t marshal properly. The JSONEncoder supports passing your own implementation to handle object conversion, so that made life a little easier. The Pickler also supports this, but the implementation is buried down in the python-memcached package and there is no way to pass your own pickler down from the dogpile.cache layer. (I feel a pull request coming...) What I chose to do instead was to iterate over the image list and make adjustments to the object graphs prior to storing them in the cache. Certainly, this isn’t ideal, but it works for now. In doing this, I was able to delete some values out of the object graph that I don’t care about, which saves time and space in the cache. I also found that (in this case) the boto.ec2.blockdevicemapping.BlockDeviceType object contained a circular reference which was causing the pickler to barf. I trimmed this out during my iteration and pickling worked fine!
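Here’s a rough sketch of that pruning pass. The exact attributes you need to drop will depend on your boto version; 'connection' is shown purely as an illustration of the kind of back-reference that doesn’t pickle well.

def prune_images_for_cache(images):
    """Trim boto Image objects so they pickle cleanly (illustrative only)."""
    for image in images:
        # drop the live connection back-reference; it isn't needed once cached
        image.connection = None
        # BlockDeviceType objects held the circular reference in my case
        for bdt in (image.block_device_mapping or {}).values():
            bdt.connection = None
    return images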

The hard part was figuring out which object was causing the problem. I found a Stack Overflow article that helped here. It showed how to extend the Pickler to either log what it was operating on, or catch exceptions (as I added for my purposes). Here’s my code:

import pickle

class MyPickler(pickle.Pickler):
  def save(self, obj):
    # print 'pickling object', obj, 'of type', type(obj)
    try:
      pickle.Pickler.save(self, obj)
    except:
      print "--------- object dict = " + str(obj.__dict__)

I found it very helpful to see which object was causing the problem, and I could insert a breakpoint to inspect that object when the problem occurred. In the memcache.py file of python-memcached, I had to change an import so that cPickle wasn’t used. That’s a native pickler which is much faster, but it doesn’t allow me to extend it in this way. This is clearly only a debugging tool and the standard package code should be used in production.

Invalidate == Delete

Each item stored in a cache region has a key generated for it. When using the @cache_on_arguments decorator, the cache key is created from the string form of the arguments passed to the cached function. The decorator takes a namespace argument, so I was able to specify an additional key component so that any image values being cached all include “image” in the cache key. By default the key is also run through sha1 to produce consistent-length (and obfuscated) cache keys. This works well, and would have been all I needed, except that I couldn’t simply rely on the configured expiration of the cache region. There are cases where we need to invalidate the cached data due to changes initiated within the application, and in those cases our users would expect to see the new data immediately.
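For reference, a cached call with a namespace looks roughly like this (the function and its arguments are illustrative, not the console’s actual code):

@long_term.cache_on_arguments(namespace='image')
def get_images(conn, owner):
    # the cache key is built from 'image' plus the string form of the
    # arguments, then run through sha1
    return conn.get_all_images(owners=[owner] if owner else None)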

To invalidate, we would need to know the cache key used to refer to the data in the cache and perform a delete on the key. Since the cache key was generated for us, I had no idea what to use for deletion. I could have reverse engineered it, but if something changed in the underlying library, that could be fragile. Fortunately, a cache region can be given a key generator function when it is configured. We could use our own code to generate the cache key and call that again to invalidate the cache. This is the key generator I’m using:

from hashlib import sha1

def euca_key_generator(namespace, fn):
  def generate_key(*arg):
    # generate a key:
    # "namespace_arg1_arg2_arg3..."
    key = namespace + "_" + "_".join(str(s) for s in arg[1:])

    # return cache key
    # apply sha1 to obfuscate key contents
    return sha1(key).hexdigest()

  return generate_key

To use this to invalidate a cache (based on args), I wrote another function:

def invalidate_cache(cache, namespace, *arg):
  key = euca_key_generator(namespace, None)(*arg)
  cache.delete(key)

The namespace and arg list are passed to the key generator as you can see. This is merely a helper function. To invalidate the image cache, I needed to call the above function with the proper arguments. These are the same arguments passed in to the cache function (which uses the decorator).
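Continuing the illustrative get_images example from above (and assuming the region was configured with euca_key_generator), invalidation after a change looks like this:

def deregister_image(conn, owner, image_id):
    conn.deregister_image(image_id)
    # same namespace and argument list as the cached get_images() call;
    # the first argument (conn) is skipped by euca_key_generator
    invalidate_cache(long_term, 'image', conn, owner)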

The work on shared caching is currently in a branch, but will likely be merged into develop over the next month or so.

Size Problem

After beating my head against a wall for a while, I found there is a size limitation on memcached: it will only take values up to 1MB in size unless you recompile it. Fortunately, there is a handy solution. Since values get pickled, they do really well with compression. The python-memcached library supports compression, but you need to enable it. By default the min_compress_len is zero, which means it never tries to compress the pickled data, and when a value exceeds the size limit the code silently returns from the set method having done nothing. This is where the frustration came in. I ended up spending some quality time in pdb to figure out that I could configure a dogpile.cache region with a min_compress_len greater than zero to get the underlying code to compress my data. Bingo! My large data set went from 3MB to 650K. This is how I configured my regions:

long_term.configure(
    memory_cache,
    expiration_time=int(settings.get('cache.long_term.expire')),
    arguments={
        'url': [memory_cache_url],
        'min_compress_len': 1024,
    },
)

I realize that 650K is not that far from 1MB, so perhaps splitting up the data will be needed at some point. The failure mode is simply a performance hit, though, not a fatal error.

Memcached Debug Tips

I learned a couple of things about monitoring my memcached server while debugging this. Two tips that will help:

  • run memcached from a shell with the -vv option. You’ll get useful output about get, set, send and delete operations.
  • telnet into the server using “telnet localhost 11211”. You can run commands like “stats” and “stats items”.

IAM Role support for the Eucalyptus Management Console

If you like using IAM roles in Eucalyptus, there’s a nice option for you to try out. Since Eucalyptus 4.0 was just released with a brand-new management console, I thought it would be a good time to let you know about a feature branch you can run if you don’t mind installing from source. Please find instructions in the README for installing dependencies and running from source; you can install using “python setup.py install”.

If you haven’t seen the new console yet, it adds support for IAM users and groups. This means you can manage users, groups and policies for the account, all from the console. This branch adds support for IAM roles. Alongside users and groups, you’ll be able to create, view and delete roles. To use roles, you assign them to instances or launch configurations (for Auto Scaling). This allows you to grant special privileges to instances. I’ve added the ability to assign a role in both the new instance wizard and the launch configuration wizard. The required IAM instance profile is handled for you by the console.

Here are some screen shots to show you a few of the changes.

[Screenshots: IAM role support in the console]

 

Angular JS inter-controller communication

I’ve been using AngularJS for a few months now and have started doing more and more with it. Quickly, I ended up having reusable widgets with their own controllers. When I wanted to use them within another app (page), I included them and ran into a problem: I wanted to be able to expose functions or pass data between the controllers. For example, one controller managed a generic set of data provided by some AJAX request, and that table would be embedded within another page.

angular.module('myPage', ['tableWidget']).controller(...)

I had a special case where I wanted to lazy-load some details of the table, but had no way for the main controller to access the scope of the tableWidget controller. I dug around a bit and found I could use $emit and $on to pass events. For example, in the reusable tableWidget controller, when data had finished loading, I would emit an event.

$scope.$emit('itemsLoaded', $scope.items);

That happens every time the widget loads data, on every page. But, on my page, I wanted to know about that so I set up a listener.

$scope.$on('itemsLoaded', function($event, items) {
  for (var i = 0; i < items.length; i++) {
    var item = items[i];
    // do something with item
  }
});

Pretty slick, right? Well, I had another problem. The table widget had a reload button to allow the user to trigger another AJAX fetch without reloading the entire page. I needed to trigger that fetch from my main controller. I tried $emit, but that only sends events up the scope hierarchy, not down to the widget, so no good. I finally found $broadcast.

$scope.$broadcast('refresh');

Within the table widget controller, I listened.

$scope.$on('refresh', function($event) {
  // trigger the ajax call
});

I hope this is helpful! It really allowed me to make the reusable components in my application much more useful!

Using the Eucalyptus User Console with AWS

At the end of last year, we (Eucalyptus) released version 3.2, which included our user console. This feature finally allowed regular users to log in to a web UI to manage their resources. Because this was our first release, we had a lot of catching up to do. I would say that is still the case, but the point here is that we were able to test all of our features against Eucalyptus. As we add features to the user console that are still under development on the server side, we need to be able to test against the AWS services. Our API fidelity goal means that we can develop against the Amazon implementation and then test against the Eucalyptus version when it becomes ready. Recently we did this for resource tagging. The server-side folks have just finished implementing that, so we’ll be able to point the user console at our own server soon.

As a result of this need for testing with Amazon, we have hacked in a way to connect the user console with AWS endpoints. The trick is simple. The login screen has three fields, normally used for account, username and password. To connect with Amazon, simply supply the endpoint, access key and secret key, in that order, in the account/user/password fields, as in the picture below.

[Screenshot: logging in with an AWS endpoint, access key and secret key]

After logging in, you can use all of the features in the user console against your AWS account. One difference is in how images are handled. Because of the very large number of public images (14 thousand at last count), the user console will only show images owned by (or shared with) the AWS account. The picture below shows what you might see on the dashboard. Notice the access key and endpoint appear to the upper right.

[Screenshot: the dashboard connected to AWS, with the access key and endpoint at the upper right]

 

You may notice the large number of snapshots shown. This includes all public snapshots, and it may need to be limited to those owned by the user at some point.

The code is currently in the testing branch on GitHub: https://github.com/eucalyptus/eucalyptus/tree/testing

Configuring the Eucalyptus User Console with a Reverse Proxy

The Eucalyptus User Console can be used standalone, but generally people run Tornado apps (which this is) behind a reverse proxy. There are a few reasons, but most commonly it is so SSL termination can be handled in one place and several Tornado instances can be managed behind one front end. FriendFeed (who developed Tornado) talked about configuring one Tornado instance per core behind Nginx as the reverse proxy. This is what I’ll talk about in this post.


The Eucalyptus User Console is built on top of Tornado. Each time you run the console server, you are getting a Tornado instance. For the 3.2 release, there isn’t a convenient way to set up several instances of the console server on one machine. Thankfully, it isn’t terribly difficult to make some modifications which allow this setup. The problems are really around different config and pid files. Logging is all pushed through syslog, so it ends up in /var/log/messages.

Fixing the config file is simple. We need separate config files because that file specifies the port used. After a package install of eucalyptus-console (that’s the package name), you will find /etc/eucalyptus-console/console.ini, which we need to duplicate for each copy of the server we wish to run. I made separate files in that directory called console-1.ini and console-2.ini. In those files, I set the uiport value to 8880 and 8881 respectively. I also recommend turning off SSL, since we’ll set up SSL termination with Nginx; to do that, comment out the sslcert and sslkey values in both new config files.
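For example, console-1.ini might differ from the stock console.ini only in lines like these (console-2.ini would use 8881; match whatever key syntax your packaged console.ini already uses):

uiport: 8880
# SSL termination will be handled by Nginx, so comment these out:
#sslcert: ...
#sslkey: ...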

To use the new config files, the startup script needs to be changed. I chose a simple route. In /etc/init.d, copy the eucalyptus-console script to eucalyptus-console-1 and eucalyptus-console-2. In those scripts, change the config file name to match the files we created before. For good measure, I also changed the “Provides:” value and the SERVICE and LOCKFILE variables. The result is that you’ll now be able to run “service eucalyptus-console-1 start” and “service eucalyptus-console-2 start”.

The other wrinkle is the pid file. That file is specified in the init script, but also in the Python code for the server. I’ve committed a change to the euca-console-server file (which you’ll find in /usr/bin) and checked that into GitHub. It will be on the “testing” branch and likely in “master” soon. This change allows passing in the pid file location so it is no longer hard-coded. With that, we can specify the PIDFILE variable in the init scripts much as was done for the config file. I’ll attach copies of these files to this post so you can see for yourself.


Once you are able to start two (or more) copies of the console server, you can easily test them by pointing your browser at the host on ports 8880 and 8881. Now we need to install and configure Nginx. On CentOS, you can install it using “yum install nginx”. I’m using the following config file (/etc/nginx/nginx.conf).


user nginx;
worker_processes 10;

error_log /var/log/nginx/error.log;

pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream euca-ui {
        server 127.0.0.1:8880;
        server 127.0.0.1:8881;
    }

    server {
        listen 80;
        server_name euca-ui;
        keepalive_requests 500000;
        keepalive_timeout 1000;
        location / {
            proxy_pass http://euca-ui;
        }
    }

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
}

Notice the upstream construct pointing to the two servers we have configured. Running like this, you’ll be able to get the login screen on port 80. But when you try logging into the console, you’ll see a failure once requests get shifted to the “other” console server, where you aren’t authenticated. We need Nginx to provide session stickiness. I found the ip_hash directive to be helpful. It may not be optimal, but it does tie requests from a given client IP to a specific server. It isn’t true session stickiness, but it’s “almost” as good. Simply add the line “ip_hash;” in the upstream block on the line prior to the server list.

Now I’m able to log in and use the console, and it still appears to be at port 80 on the host. There are two other things I’d like to address before calling this done.

1. I can’t tell which of the console servers is logging messages. Need to make messages unique to each instance.

2. Enable SSL termination so that we can interact on port 443 and have some further assurance of security.

I haven’t figured out a simple way to customize the log output via syslog, so let’s talk about SSL first.

Turning on SSL is quite easy. The package start script probably already generated self-signed certs which we can use. Modify the server directive in the nginx.conf file like this:


server {
    listen 443 ssl;
    ssl_certificate /etc/eucalyptus-console/console.crt;
    ssl_certificate_key /etc/eucalyptus-console/console.key;

Now we’d like to set up forwarding from port 80 to 443, so users don’t have to remember to type “https:”. We can do that by adding another server directive like this:


server {
    listen 80;
    server_name euca-ui;
    rewrite ^ https://$server_name$request_uri? permanent;
}

That about covers it. We clearly need a better way to manage multiple console servers on a single host, but this should be helpful to get something going. I hope to refine this process in future releases as we iron out the wrinkles. Here’s the final nginx.conf file I used:


user nginx;
worker_processes 10;

error_log /var/log/nginx/error.log;

pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream euca-ui {
        ip_hash;
        server 127.0.0.1:8880;
        server 127.0.0.1:8881;
    }

    server {
        listen 80;
        server_name euca-ui;
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    server {
        listen 443 ssl;
        ssl_certificate /etc/eucalyptus-console/console.crt;
        ssl_certificate_key /etc/eucalyptus-console/console.key;
        server_name euca-ui;
        keepalive_requests 500000;
        keepalive_timeout 1000;
        location / {
            proxy_pass http://euca-ui;
        }
    }

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
}

Installing the Eucalyptus Console from source and packages

I previously posted some information about the new User Console we’ve been working on at Eucalyptus Systems. There has been a lot of activity, and we’ve shown it to a lot of users to get feedback. We will be releasing it officially very soon, but until then, you can run it yourself in a couple of ways: build from source, which is very easy, or install the nightly builds we provide for RHEL 6 and CentOS 5 and 6.

Did You Get the Package?

Packages are available in the nightly directory here:

http://downloads.eucalyptus.com/software/eucalyptus/nightly/3.2/

Configure the repo like this:

rpm -Uvh http://downloads.eucalyptus.com/software/eucalyptus/nightly/3.2/centos/6/x86_64/eucalyptus-release-3.2.0-0.1.el6.noarch.rpm

You’ll also need to have the ELRepo repository configured:

rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm

Next, run

yum install -y eucalyptus-console

Next, you’ll need to configure the cloud location and perhaps some other things. The file is located at /etc/eucalyptus-console/console.ini. The config file has a lot of settings, but there are just a few you need to understand to get going quickly.

clchost – this is the IP or dns name for the eucalyptus cloud you’ll connect to

uiport – defaults to 8888, but you can use a different port if you like

sslcert, sslkey – these values are used to configure SSL, which you don’t need to do for development

usemock – this is important if you don’t have a cloud to talk to. Setting this to true instructs the user console to load mock data, and many features operate on the mock data (though many also don’t work as well). In this mode, the console can be run standalone for simple demos or for working on the browser side, such as when you want to change branding or other look-and-feel items.
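Putting those together, the handful of settings you typically touch in console.ini look something like this (values are examples only; leave the rest of the packaged file as-is):

clchost: 192.168.1.1
uiport: 8888
usemock: False
# sslcert/sslkey only matter if the console itself terminates SSL
#sslcert: ...
#sslkey: ...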

Now, start the service. You’ll see something like this:


# service eucalyptus-console start
Generating self-signed certificate: [ OK ]
Generating cookie secret: [ OK ]
Starting eucalyptus-console: [ OK ]

By default, SSL is enabled. You can connect to the application using https://localhost:8888/ (assuming you’re on the same host, otherwise substitute the right hostname or IP). Skip down to the “Once You’re Running” section below.

Using the Source

A source install is also very easy, but there are some differences you will need to be aware of. First, the code lives on GitHub here: https://github.com/eucalyptus/eucalyptus/tree/maint/3.2/testing

Notice that I’m showing you the maint/3.2/testing branch. That is because we’re putting the very latest fixes there. If you’re a little more risk-averse, you may want to simply run from maint/3.2/master instead. We push changes from testing to master after the code has passed QA to a reasonable degree.

To get started, you’ll need:

  • a git client
  • python 2.6 (or 2.7)
  • boto
  • m2crypto
  • tornado (2.1 or higher)
On CentOS 6,

rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm

yum install -y git python python-boto m2crypto python-tornado

or, on Ubuntu 12.04,

sudo apt-get install git python-boto m2crypto python-tornado

Now, grab the source.

git clone git://github.com/eucalyptus/eucalyptus.git
cd eucalyptus
git checkout maint/3.2/testing
git pull origin maint/3.2/testing

For better or for worse, the user console is in a subdirectory of the main eucalyptus source tree. This means you pull down a whole lot more than you really need if you're only interested in the console. The good news is that in the console directory, you have a completely independent project that doesn't have any build or run-time dependencies on the rest (aside from connecting to the cloud). To get the console running, you'll first need to configure one or two things.  The console/eucaconsole/console.ini file has many settings, which we cover in the package install section above.

Once you've set things up to your liking, simply run

cd console
./launcher.sh

You should see a message like "2012-11-10 23:56:43 INFO Starting Eucalyptus Console". If not, there may be other issues you need to fix first. If you have problems and need help, visit #eucalyptus-ui or one of the other #eucalyptus channels on freenode.net.

If you saw this message, that's great! It's very likely you're ready to connect with the browser. Assuming you are running the console on the same machine as your browser, use the appropriate localhost URL like "http://localhost:8888/" if you kept the default port and SSL config (off).

Once You're Running

You should see a login page. If you're using the mock (usemock: True), you can put anything in the fields, or nothing, and simply log in. If you're connecting to a Eucalyptus cloud, you'll need to use the regular web login credentials you'd use to log in to the Eucalyptus admin console (read on...)

* Warning: Science Content *

The user console uses temporary session credentials from the STS service to make calls, so your actual access key and secret key are never passed to the console app. As a result, accounts that want to use the console must have a) a password set up and b) access credentials assigned. These things can be done by the cloud admin via the admin console or using euca2ools like this:

euare-accountcreate -a testuser1
euare-useraddloginprofile --delegate testuser1 -u admin -p euca123
euare-useraddkey --delegate testuser1 -u admin

Another way to do this would be using the Eucalyptus Admin UI, which you can reach in your web browser here: https://<your-cloud-frontend>:8443/ and there are some helpful docs here.

Assuming you were able to log in, you should be ready to explore!