Planet OSUOSL

October 29, 2018

Lars Lohn

Things Gateway - the Refrigerator and the Samsung Buttons


Perhaps I'm being overly effusive, but right now, the Samsung SmartThings Button is my Holy Grail of the Internet of Things.  Coupled with the Things Gateway from Mozilla and my own Web Thing API rule system, the Samsung Button makes me feel invincible at solving some vexing Smart Home automation tasks.

Consider this problem: my kitchen is in an old decrepit farm house built in 1920.  The kitchen has a challenging layout with no good space for any modern appliance.  The only wall for the refrigerator is annoyingly narrower than an average refrigerator.  Unfortunately, the only switches for the kitchen and pantry lights are on that wall, too.  The refrigerator blocks the switches to the point they can only be felt, not seen.

For twenty years, I've been fine slipping my hand into the dusty cobwebs behind the refrigerator to turn on the lights.  I can foresee the end of this era.  I'm imagining two Samsung Buttons magnetically tacked to a convenient and accessible side of the refrigerator: one for the pantry and one for the kitchen.

The pantry light is the most common light to be inadvertently left on for hours at a time.  Nobody wants to reach behind the refrigerator to turn off the light.  This gives me an idea.  I'm going to make the new pantry button turn on the light for only 10 minutes at a time.  Rarely is anyone in there for longer than that.  Sometimes, however, it is handy to have it on for longer.  So I'll make each button press add ten minutes to the timer.  Need the light on for 30 minutes?  Press the button three times.  To appease the diligent one in the household who always remembers to turn lights off, a long press of the button turns the light off and cancels the timer:
class PantryLightTimerRule(Rule):

    def register_triggers(self):
        self.delay_timer = DelayTimer(self.config, "adjustable_delay", "10m")
        self.PantryButton.subscribe_to_event('pressed')
        self.PantryButton.subscribe_to_event('longPressed')
        return (self.PantryButton, self.delay_timer, self.PantryLight)

    def action(self, the_triggering_thing, the_trigger_event, new_value):
        if the_triggering_thing is self.PantryButton and the_trigger_event == 'pressed':
            if self.PantryLight.on:
                self.delay_timer.add_time()  # add ten minutes
            else:
                self.PantryLight.on = True

        elif the_triggering_thing is self.PantryButton and the_trigger_event == 'longPressed':
            self.PantryLight.on = False

        elif the_triggering_thing is self.delay_timer:
            self.PantryLight.on = False

        elif the_triggering_thing is self.PantryLight and new_value is False:
            self.delay_timer.cancel()

        elif the_triggering_thing is self.PantryLight and new_value is True:
            self.delay_timer.add_time()  # add ten minutes
(see this code in situ in the timer_light_rule.py file in the pywot rule system demo directory)
Like all rules, there are two parts: registering the things that trigger the rule and the action that the rule takes when triggered.

This rule uses three triggers: the PantryButton (as played by Samsung), a timer called delay_timer, and the PantryLight (as played by an IKEA bulb).  The register_triggers method creates the timer with a default time increment of 10 minutes.  It subscribes to the "pressed" and "longPressed" events that the PantryButton can emit.  It returns a tuple containing these two things coupled with the reference to the PantryLight itself.

How is the PantryLight itself considered to be a trigger?  Since the bulb in the pantry is to be a smart bulb, any other controller in the house could theoretically turn it on.  No matter what turns it on, I want my timer rule to eventually turn it off.  Anytime something turns that light on, my rule will fire.

In the action method, in the last two lines you can see how I exploit the idea that anything turning the light on triggers the timer.  If the_triggering_thing is the PantryLight and it was turned from off to on, this rule will add time to the delay_timer.  If the delay_timer wasn't running, adding time to it will start it.

Further, going back up two more lines, you can see how turning off the light by any means cancels the delay_timer.

Going back up another line, you can see how I handle the timer naturally timing out.  It just turns off the light.  Yeah, that will result in the action method getting a message about the light turning off.  We've already seen that action will try to cancel the timer, but in this case, the timer isn't running anymore, so the cancel is ignored.

Finally, let's consider the top two cases of the action method.  For both, the_triggering_thing is the PantryButton. If the button is longPressed, it turns off the light which in turn will cancel the timer.  If there is a regular short press, the timer gets an additional ten minutes if the light was already on or the light gets turned on if it was off.

This seems to work really well, but it hasn't been in place for more than a few hours...
Now there's the case of the main kitchen light itself.  The room is wired for a single bulb in the middle of the ceiling.  That's worthless for properly lighting the space.  I adapted the bulb socket into an outlet, and an LED shop light has just enough cord to reach over the stove.  There's another over the kitchen counter and a third over the sink.  Each light has a different way to turn it on, all of which are awkward in one way or another.

This will change with the second Samsung button.  It seems we only use the kitchen lights in certain combinations.  Multiple presses to the Samsung button will cycle through these combinations in this order:
  1. all off
  2. stove light only
  3. stove light & counter light
  4. stove light, counter light & sink light
  5. counter light & sink light
  6. sink light only
  7. counter light only
class CombinationLightRule(Rule):

    def initial_state(self):
        self.index = 0
        self.combinations = [
            (False, False, False),
            (True, False, False),
            (True, True, False),
            (True, True, True),
            (False, True, True),
            (False, False, True),
            (False, True, False),
        ]

    def register_triggers(self):
        self.KitchenButton.subscribe_to_event('pressed')
        self.KitchenButton.subscribe_to_event('longPressed')
        return (self.KitchenButton, )

    def set_bulb_state(self):
        self.StoveLight.on = self.combinations[self.index][0]
        self.CounterLight.on = self.combinations[self.index][1]
        self.SinkLight.on = self.combinations[self.index][2]

    def action(self, the_triggering_thing, the_trigger_event, new_value):
        if the_trigger_event == "pressed":
            self.index = (self.index + 1) % len(self.combinations)
            self.set_bulb_state()

        elif the_trigger_event == "longPressed":
            self.index = 0
            self.set_bulb_state()
(see this code in situ in the combination_light_rule.py file in the pywot rule system demo directory)
This is just a simple finite state machine.  The three shop lights are controlled by the combinations list, referenced by an index.  The only trigger is the KitchenButton (again, played by a Samsung Button).  If the button is short pressed, the index is incremented modulo the number of combinations.  If the button is longPressed, the finite state machine is reset to state 0 and all the shop lights are turned off.

These two Rules can be found in my pywot github repo.  To learn how to experiment with the Things Gateway with Python, see the Mozilla Project Things and my own post about setting up pywot.

I can think of so many applications for these Samsung Buttons, I may need to acquire a pallet of them...

by K Lars Lohn (noreply@blogger.com) at October 29, 2018 06:47 PM

October 25, 2018

Lars Lohn

Things Gateway - Running Web Thing API Applications in Python

The Web Thing API is a remarkable framework for creating applications that can control smart home devices.  Any language that can speak to a RESTful API or use Web Sockets can participate.

In the last few months I've been exploring the use of the Web Thing API using the Python language.  After lots of trial and error, I've made some abstractions to simplify interacting with my smart home devices.  While I've been blogging about my explorations for quite a while, I've not made a concerted effort to make it easy for anyone else to follow in my footsteps.  I'm correcting that today, though I fear I may fail on the "easy" part.

This is a guide to help you accept my invitation to explore with me.  However, I need to be clear, this is a journey for programmers familiar with the Python programming language and Linux development practices.

Everyone has different needs and a different programming environment.  It is easiest to work with my Python module, pywot, if you have a second Linux machine on which to run the pywot scripts.  However, if you only have the Things Gateway Raspberry Pi itself, you can still participate; it'll just take longer to set up.

The instructions below will walk you through setting up the Things Gateway Raspberry Pi with the requirements to run pywot scripts.  If you already have a machine available with Python 3.6, you can run pywot scripts there instead of on the Raspberry Pi and save a lot of hassle by skipping all the way down to Step 5.  If you have the second machine and still want to run the scripts on the Raspberry Pi, I suggest enabling SSH on the Things Gateway and doing the command line work over ssh.


One of the unfortunate things about Raspbian Stretch, the Linux distribution on which the Things Gateway image is based, is that it ships a rather outdated version of Python 3.  Version 3.6 of Python was released in 2016, yet Raspbian Stretch still includes only 3.5.  There were a number of important changes to the language between those two versions, especially in the realm of asynchronous programming.  Interacting with the Web of Things is all about asynchronous programming.
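For a concrete illustration of the gap, asynchronous generators (PEP 525) are one of the 3.6-only constructs.  The little sketch below is my own example, not code from pywot; it runs under Python 3.6 but is rejected with a syntax error by the 3.5 interpreter that Raspbian Stretch provides.

import asyncio

async def heartbeat(count, period=1.0):
    # an asynchronous generator -- legal only in Python 3.6 and later
    for i in range(count):
        await asyncio.sleep(period)
        yield i

async def main():
    # consume the asynchronous generator with "async for"
    async for beat in heartbeat(3, period=0.1):
        print('beat', beat)

# Python 3.6 predates asyncio.run(), so drive the event loop directly
loop = asyncio.get_event_loop()
loop.run_until_complete(main())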

When I started experimenting with the Web Thing API, I did so using my Ubuntu-based Linux workstation, which already had Python 3.6 installed natively.  I blithely used the newer asynchronous constructs in creating my External Rules Framework.  It was a nasty surprise when I moved my code over to the Raspberry Pi running the Things Gateway and found the code wouldn't run.


To get Python 3.6 running on Raspbian Stretch, it must be configured and compiled from source and then installed as an alternate Python.  Some folks blanched at that last sentence; I certainly did when I realized what I would have to do.  As it turns out, it isn't as onerous as I thought.  Searching the Web for a HOW-TO, I found this great page on github.  My use of these instructions went flawlessly - there were neither mysterious failures nor unexpected complications requiring research.

Here's exactly what I did to get a version of Python 3.6 running on Raspbian Stretch:

1) Connect a keyboard and monitor to your Raspberry Pi.  You'll get a login prompt.  The default user is "pi" and the password is "raspberry".  You really ought to change the default password to something more secure.  See Change Your Default Password for details. (Alternatively, you can enable ssh and log in from another machine.  From your browser, go to http://gateway.local/settings, select "Developer" and then click the checkbox for "Enable SSH")

2) Once logged in, you want to make sure the RPi is fully updated and install some additional packages.  For me, this took about 10 minutes.
        
pi@gateway:~ $ sudo apt-get update
pi@gateway:~ $ sudo apt-get install build-essential tk-dev libncurses5-dev libncursesw5-dev libreadline6-dev libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev


3) Now you've got to download Python 3.6 and build it. 
The last command "./configure" took nearly 4 minutes on my RPi:
        
pi@gateway:~ $ wget https://www.python.org/ftp/python/3.6.6/Python-3.6.6.tar.xz
pi@gateway:~ $ tar xf Python-3.6.6.tar.xz
pi@gateway:~ $ cd Python-3.6.6
pi@gateway:~/Python-3.6.6 $ ./configure

The next step is to compile it. This takes a lot of time. Mine ran for just under 30 minutes:
        
pi@gateway:~/Python-3.6.6 $ make

Now you've got to tag this version of Python as an alternative to the default Python. This command took just over 4 minutes on my RPi:
        
pi@gateway:~/Python-3.6.6 $ sudo make altinstall

Finally, a bit of clean up:
        
pi@gateway:~/Python-3.6.6 $ cd ..
pi@gateway:~ $ rm Python-3.6.6.tar.xz
pi@gateway:~ $ sudo rm -r Python-3.6.6
pi@gateway:~ $


4) Python 3.6 is now installed alongside the native 3.5 version.  Invoking the command python3 will give you the native 3.5 version, while python3.6 will give you our newly installed 3.6 version.  It would be nice to have version 3.6 be the default for our work.  You can do that with a virtual environment:
        
pi@gateway:~ $ python3.6 -m venv py36

This has given you a private version of Python 3.6 that you can customize at will without interfering with any other Python applications that may be tied to specific versions.  Each time you want to run programs with this Python 3.6 virtual environment, you need to activate it:
        
pi@gateway:~ $ . ~/py36/bin/activate
(py36) pi@gateway:~ $

It would be wise to edit your .bashrc or other initialization file to make an alias for that command. If you do not, you'll have to remember that somewhat cryptic invocation.

5) It's time to install my pywot system.  This is the Python source code that implements my experiments with the Things Gateway using the Web Thing API.

I've not uploaded pywot to PyPI.  I've chosen not to productize this code because it's what I call stream of consciousness programming.  The code is the result of me hacking and experimenting.  I'm exploring the problem space looking for interesting and pleasing implementations.  Maybe someday it'll be the basis for a product, but until then, no warranty is expressed or implied.

Even though pywot isn't on PyPI, you still get to use the pip command to install it.  You're going to get a full git clone of my public pywot repo.  Since it's pip, all the dependencies will automatically download and install into the virtual Python3.6 environment. On my Raspberry Pi, this command took more than five minutes to execute.
        
(py36) pi@gateway:~ $ mkdir dev
(py36) pi@gateway:~ $ cd dev
(py36) pi@gateway:~/dev $ git clone https://github.com/twobraids/pywot.git
(py36) pi@gateway:~/dev $ pip install -e pywot
(py36) pi@gateway:~/dev $ 

Did you get a message saying, "You should consider upgrading via the 'pip install --upgrade pip' command."?  I suggest that you do not do that.  It made a mess when I tried it and I'm not too inclined to figure out why.  Things will work fine if you skip that not-so-helpful suggestion.  <sigh>

6) Before you can run any of the pywot demos or write your own apps, you need to get the Things Gateway to grant you permission to talk to it.  Normally, one gets the Authorization Token by accessing the Gateway using a browser.  That could be difficult if the Raspberry Pi that has your Gateway on it is your only computer other than a mobile device.  Manually typing a 235 character code from your phone screen would be vexing to say the least.

Instead, run this script, supplying the URL of your Things Gateway instance along with your Things Gateway login and password.  Take note: if you're running this on the Raspberry Pi that runs the Gateway, the URL should have ":8080" appended to it.  If you are using some other machine, you do not need that.
        
(py36) pi@gateway:~/dev $ . ./pywot/demo/auth_key.sh
Enter URL: http://gateway.local:8080
Enter email: your.email@somewhere.com
Enter password: your_password
(py36) pi@gateway:~/dev $ ls sample_auth.ini
sample_auth.ini
(py36) pi@gateway:~/dev $

This command created a file called ~/dev/sample_auth.ini.  You will use the authorization key within that file in the configuration files used by the demo apps and the apps you create.

7) Finally, it's time to start playing with the demos.  Since everyone has a different set of devices, all of the demos in the ~/dev/pywot/demo and ~/dev/pywot/demo/rule_system will require some modification.  To me, the most interesting demos are those in the latter directory.

All of the demo files use configman to control configuration.  This gives each script command line switches and the ability to use environment variables and configuration files.  All configuration parameters can acquire their values using any of those three methods.  Conflicts are resolved with this hierarchy:
  1. command line switches, 
  2. configuration file, 
  3. environment variables, 
  4. program defaults.  
If you want more information about configman, see my 2014 PyOhio presentation.

--help will always show you what configuration options are available.
--admin.config=<somefilename.ini> will specify a configuration file from which to load values.
--admin.dump_conf=<somefilename.ini> will create a configuration file for you that you can customize with an editor.

Start with the simplest rule example: ~/dev/pywot/demo/rule_system/example_if_rule.py.  Run it to produce a blank configuration file.
        
(py36) pi@gateway:~/dev $ cd ./pywot/demo/rule_system
(py36) pi@gateway:~/dev/pywot/demo/rule_system $ ./example_if_rule.py --admin.dump_conf=example.ini
(py36) pi@gateway:~/dev/pywot/demo/rule_system $ cat example.ini
# a URL for fetching all things data
#http_things_gateway_host=http://gateway.local

# the name of the timezone where the Things are ('US/Pacific, UTC, ...')
local_timezone=US/Pacific

# format string for logging
#logging_format=%(asctime)s %(filename)s:%(lineno)s %(levelname)s %(message)s

# log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
#logging_level=DEBUG

# the fully qualified name of the RuleSystem class
#rule_system_class=pywot.rules.RuleSystem

# the number of seconds to allow for fetching data
#seconds_for_timeout=10

# the name of the default timezone running on the system ('US/Pacific, UTC, ...')
system_timezone=UTC

# the api key to access the Things Gateway
#things_gateway_auth_key=THINGS GATEWAY AUTH KEY

(py36) pi@gateway:~/dev/pywot/demo/rule_system $

Open example.ini in a text editor of your choice.  Using the Authorization key you generated in the file ~/dev/sample_auth.ini, uncomment and set the value of things_gateway_auth_key.  Set your local timezone on the local_timezone line.  Finally, set the value of system_timezone.  If you're using the Gateway's Raspberry Pi, you can leave it as UTC.  Otherwise, set it to whatever timezone your system is using.

Nota bene: it is unfortunate that the URL for connecting to the Things Gateway differs depending on which machine runs the examples.  If you're using the same Raspberry Pi that is running the Things Gateway, uncomment the "http_things_gateway_host" line and add ":8080" to the end of the line.  If, instead, you're running from another machine on the network, you need not make that change.

Edit the source file ~/dev/pywot/demo/rule_system/example_if_rule.py  and change the names of the devices to reflect the names of the devices you have in your smart home setup.

If you've not yet expired of old age after all these things you've had to do, it is finally time to actually run the example:
        
(py36) pi@gateway:~/dev/pywot/demo/rule_system $ ./example_if_rule.py --admin.config=example.ini

The script will echo its configuration to the log and then start listening to the Things Gateway.  As soon as your target light bulb is turned on, the other bulb(s) in the action will also turn on.  You can explore the rest of the demos using the same method of creating configuration files.


While not a polished product, my pywot Python module is useful for demonstrating the power of the Web Thing API.  Fortunately, the Web Thing API is an open standard that could be implemented by anyone.  The Python based Home Assistant (HASS) has plans to integrate it.  With a faithful implementation of the standard, pywot could be used as a scripting or rule engine for HASS, or any compliant platform.  Cross compatibility and letting everyone join in the fun is our goal.

Some thanks this week goes to Things Gateway developer Dave Hylands for cluing me in to how to get the Gateway Auth Key without having to use a browser.

by K Lars Lohn (noreply@blogger.com) at October 25, 2018 07:47 PM

October 23, 2018

Lars Lohn

Things Gateway - Sunrise, Sunset, Swiftly Flow the Days


In my previous blog post, I introduced Time Triggers to demonstrate time based home automation.  Sometimes, however, pegging an action down to a specific time doesn't work: darkness falls at different times every evening as one season follows another.  How do you calculate sunset time?  It's complicated, but there are several Python packages that can do it: I chose Astral.

The Things Gateway doesn't know where it lives.   The Raspberry Pi distribution that includes the Things Gateway doesn't automatically know and understand your timezone when it is booted. Instead, it uses UTC, essentially Greenwich Mean Time, with none of those confounding Daylight Savings rules. Yet when viewing the Things Gateway from within a browser, the times in the GUI Rule System automatically reflect your local timezone. The presentation layer of the Web App served by the Things Gateway is responsible for showing you the correct time for your location.  Beware, when you travel and access your Things Gateway GUI rules remotely from a different timezone, any references to time will display in your remote timezone.  They'll still work properly at their appropriate times, but they will look weird during travel.

My own homegrown rule system uses a different tactic: it nails down a timezone for your Things Gateway.  In the configuration, you specify two timezones: the timezone where your Things Gateway is physically located, local_timezone, and the timezone that is the default on the clock of the computer running the external rule system, system_timezone.  Here are two examples to show why both need to be specified.
  1. I generally run my rules on my Linux Workstation.  As this machine sits on my desk, its internal clock is set to reflect my local time.  I set both the local_timezone and the system_timezone to US/Pacific.  That tells my rule system that no time translations are required.
  2. However, if I were instead to run my Rule System on the Raspberry Pi that also runs the Things Gateway, I'd have to specify the system_timezone as UTC.  My local_timezone remains US/Pacific.
These configuration parameters can be set in several ways.  You can create environment variables:
        
$ export local_timezone=US/Pacific
$ export system_timezone=US/Pacific
$ ./my_rules.py --help

Or they can be command line parameters:
        
$ ./my_rules.py --local_timezone=US/Pacific --system_timezone=US/Pacific

Or they can be in a configuration file:
        
$ cat config.ini
local_timezone=US/Pacific
system_timezone=US/Pacific
$ ./my_rules.py --admin.config=config.ini


My next blog post will cover more on how to set configuration and run this rule system on either the same Raspberry Pi that runs the Things Gateway or some other machine.

Meanwhile, let's talk about solar events.  Once the Rule System knows where the Things Gateway is, it can calculate sunrise and sunset, along with a host of other solar events that happen on a daily basis.  That's where the Python package Astral comes in.  Given the latitude, longitude, elevation and the local timezone, it will calculate the times for: blue_hour, dawn, daylight, dusk, golden_hour, night, rahukaalam, solar_midnight, solar_noon, sunrise, sunset, and twilight.
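For a taste of what Astral offers, here is a minimal sketch that fetches today's sunrise and sunset for Corvallis.  A hedge is in order: this uses the Astral 2.x API (LocationInfo plus astral.sun.sun), which differs from the 1.x API that was current when this post was written, so adjust accordingly if you have an older release installed.

import datetime

import pytz
from astral import LocationInfo
from astral.sun import sun

# Corvallis, OR -- the same coordinates used in the rules below
corvallis = LocationInfo("Corvallis", "USA", "US/Pacific", 44.562951, -123.3535762)

# dawn, sunrise, noon, sunset and dusk for today, in the local timezone
solar_events = sun(
    corvallis.observer,
    date=datetime.date.today(),
    tzinfo=pytz.timezone(corvallis.timezone),
)
print("sunrise:", solar_events["sunrise"])
print("sunset: ", solar_events["sunset"])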

I created a trigger object that wraps Astral so it can be specified in the register_triggers method of the Rule System.  Here's a rule that will turn a porch light on ten minutes after sunset every day:
class EveningPorchLightRule(Rule):

    def register_triggers(self):
        self.sunset_trigger = DailySolarEventsTrigger(
            self.config,
            "sunset_trigger",
            ("sunset", ),
            (44.562951, -123.3535762),
            "US/Pacific",
            70.0,
            "10m"  # ten minutes
        )
        self.ten_pm_trigger = AbsoluteTimeTrigger(
            self.config,
            'ten_pm_trigger',
            '22:00:00'
        )
        return (self.sunset_trigger, self.ten_pm_trigger)

    def action(self, the_triggering_thing, *args):
        if the_triggering_thing is self.sunset_trigger:
            self.Philips_HUE_01.on = True
        else:
            self.Philips_HUE_01.on = False
(see this code in situ in the solar_event_rules.py file in the pywot rule system demo directory)

Like the other rules that I've created, I start by creating my trigger, in this case, an instance of the DailySolarEventsTrigger class.  It is given the name "sunset_trigger" and the solar event, "sunset".  The rule can trigger on multiple solar events, but in this case, since I want only one, "sunset" appears alone in a tuple. Next I specify the latitude and longitude of my home city, Corvallis, OR.  That's in the US/Pacific timezone and about 70 meters above sea level.  Finally, I specify a string representing 10 minutes.

After the DailySolarEventsTrigger, I create another trigger, an AbsoluteTimeTrigger, to handle turning the light off at 10pm.  I could have created a second rule to do this, but a single rule to handle both ends of the operation seemed more satisfying.

In the action part of the rule, I needed to differentiate between the two triggers.  Both triggers will call the action function, each identifying itself as the value of the parameter the_triggering_thing.  If it was the sunset_trigger calling action, it turns the porch light on.  If it was the ten_pm_trigger calling action, the light gets turned off.  I think the implementation begs for a better dispatch method, but Python doesn't help much with that.
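For what it's worth, one workable alternative is a small dispatch table keyed by the trigger objects themselves.  This is only an illustration of the idea, not code from pywot:

    # a hypothetical rewrite of the action method above, dispatching through a
    # dict instead of an if/else chain
    def action(self, the_triggering_thing, *args):
        def porch_light_on():
            self.Philips_HUE_01.on = True

        def porch_light_off():
            self.Philips_HUE_01.on = False

        dispatch = {
            self.sunset_trigger: porch_light_on,
            self.ten_pm_trigger: porch_light_off,
        }
        dispatch[the_triggering_thing]()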


Some of the solar events are not just a single instant in time like a sunset, some represent periods during the day.  One example is Rahukaalam.  According to a Wikipedia article, in the realm of Vedic Astrology, Rahukaalam is an "inauspicious time" during which it is unwise to embark on new endeavors.  It's  based on dividing the daylight hours into eight periods.  One of the periods is marked as the inauspicious one based on the day of the week.  For example, it's the fourth period on Fridays and the seventh on Tuesdays.  Since the length of the daylight hours changes every day, the lengths of the periods change and it gets hard to remember when it's okay to start something new.
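To make that arithmetic concrete, here is a small sketch of the calculation.  It is not part of pywot, and the weekday-to-period table below contains only the two examples mentioned above:

# divide the daylight hours into eight equal periods and pick the
# inauspicious one for the day of the week
RAHUKAALAM_PERIOD_BY_WEEKDAY = {
    'Friday': 4,
    'Tuesday': 7,
    # the remaining days follow the traditional assignments
}

def rahukaalam_window(sunrise, sunset, weekday_name):
    # sunrise and sunset are datetime objects for the same day
    one_period = (sunset - sunrise) / 8
    period_number = RAHUKAALAM_PERIOD_BY_WEEKDAY[weekday_name]
    start = sunrise + one_period * (period_number - 1)
    return start, start + one_period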

Here's a rule that will control a warning light.  The light being on indicates that the current time falls between the start and the end of the Rahukaalam period for the given global location and elevation.

class RahukaalamRule(Rule):

    def register_triggers(self):
        rahukaalam_trigger = DailySolarEventsTrigger(
            self.config,
            "rahukaalam_trigger",
            ("rahukaalam_start", "rahukaalam_end", ),
            (44.562951, -123.3535762),
            "US/Pacific",
            70.0,
        )
        return (rahukaalam_trigger,)

    def action(self, the_triggering_thing, the_trigger, *args):
        if the_trigger == "rahukaalam_start":
            logging.info('%s starts', self.name)
            self.Philips_HUE_02.on = True
            self.Philips_HUE_02.color = "#FF9900"
        else:
            logging.info('%s ends', self.name)
            self.Philips_HUE_02.on = False

(this code is not part of the demo scripts, however, the next very similar script is.)

Like all rules, it has two parts: registering triggers and responding to trigger actions.  In register_triggers, I subscribe to the rahukaalam_start and rahukaalam_end solar events for my town's location, timezone and elevation.  In the action, I just look to the_trigger to see which of the two possible triggers fired.  It results in a suitably cautious orange light illuminating during the Rahukaalam period.

Could we make the light blink for the first 30 seconds, just so we ensure that we notice the warning?  Sure we can!
class RahukaalamRule(Rule):

    def register_triggers(self):
        rahukaalam_trigger = DailySolarEventsTrigger(
            self.config,
            "rahukaalam_trigger",
            ("rahukaalam_start", "rahukaalam_end", ),
            (44.562951, -123.3535762),
            "US/Pacific",
            70.0,
            "-2250s"
        )
        return (rahukaalam_trigger,)

    async def blink(self, number_of_seconds):
        number_of_blinks = number_of_seconds / 3
        for i in range(int(number_of_blinks)):
            self.Philips_HUE_02.on = True
            await asyncio.sleep(2)
            self.Philips_HUE_02.on = False
            await asyncio.sleep(1)
        self.Philips_HUE_02.on = True

    def action(self, the_triggering_thing, the_trigger, *args):
        if the_trigger == "rahukaalam_start":
            logging.info('%s starts', self.name)
            self.Philips_HUE_02.on = True
            self.Philips_HUE_02.color = "#FF9900"
            asyncio.ensure_future(self.blink(30))
        else:
            logging.info('%s ends', self.name)
            self.Philips_HUE_02.on = False

(see this code in situ in the solar_event_rules.py file in the pywot rule system demo directory)

Here I created an async method that will turn the lamp on for two seconds and off for a second as many times as it can in 30 seconds.  The method is fired off by the action method when the light is initially turned on.  Once the blink routine has finished blinking, it silently quits.

Perhaps one of the best things about the Things Gateway is that the Things Framework allows nearly any programming language to participate. 

Thanks this week goes to the authors and contributors to the Python package Astral.  Readily available fun packages like Astral contribute immensely to the sheer joy of Open Source programming.

Now it looks like I've only got twenty minutes to publish this before I enter an inauspicious time...  D'oh, too late...



by K Lars Lohn (noreply@blogger.com) at October 23, 2018 11:33 PM

October 19, 2018

Lars Lohn

The Things Gateway - A Pythonic Rule System

In my last post, I talked about the features and limitations of the Rules System within the graphical user interface of the Things Gateway by Mozilla.  Today, I'm going to show an alternate rule system that interacts with the Things Gateway entirely externally using the Web Thing API.  The Web Thing API enables anyone armed with a computer language that can use Web Sockets to create entirely novel applications or rules systems that can control the Things Gateway.

In the past few months, I've blogged several times about controlling the Things Gateway with the Web Thing API using Python 3.6.  Each one was a stand-alone project, opening and managing Web Sockets in an asynchronous programming environment.  By writing these projects, I've explored both functional and object-oriented idioms to see how they compare.  Now, with some experience, I feel free to abstract some of the underlying common aspects to create a rule engine of my own.

One of the great features of the GUI Rule System is the translation of the graphical representation of the rule into an English sentence (likely a future target for localization).  Simply reading it aloud easily leads to an unambiguous understanding of the rule's behavior.  I imagine that the JavaScript implementation uses the placements of the visual objects to create a parse tree of the if/then boolean expression.  The parse tree can then be walked and translated into our spoken language.

Implementing a similar system based on parse trees is tempting for its flexibility, but usually results in a new chimera language halfway between the programming language used and the language represented in the parse tree.  See the SQLAlchemy encapsulation of the SQL language in Python as an example.  I'm less fond of this technique than I used to be.  I think I can get away with a simpler implementation just using fairly straightforward Python.

In my last post, I discussed the differences between "While" rules and "If" rules in the GUI Rules System.  Recall that the "While" style of rule takes an action and then undoes the action when the rule condition is no longer True.  However, an "If" style of rule never undoes its action.

Here's an example of the "If" style rule from my last blog post:

Using my rule system, the rule code looks like this:
class ExampleIfRule(Rule):

    def register_triggers(self):
        return (self.Philips_HUE_01,)

    def action(self, *args):
        if self.Philips_HUE_01.on:
            self.Philips_HUE_02.on = True
            self.Philips_HUE_03.on = True
            self.Philips_HUE_04.on = True
(see this code in situ in the example_if_rule.py file in the pywot rule system demo directory)

Creating a rule starts by creating a class derived from the base class Rule.  The programmer is responsible for implementing two methods: register_triggers and action.  Optionally, a third method, initial_state, and a constructor can be included, too. 

The register_triggers method is a callback function.  It returns a tuple of objects responsible for triggering the rule's action method.  This is generally a set of Things defined by the Things Gateway.  Anytime one of the things in that tuple of registered triggers changes state, the action method will execute.

In this example, "Philips HUE 01" is specified as the trigger.  Any time any property of "Philips HUE 01" changes, the action method decides what to do about it.  It looks to see if the Philips HUE light is in the "on" state, and if so, turns on the other lights, too. 

When an instance of the rule class is instantiated, all the Things known to the Things Gateway are added as attributes to the rule.  That allows any Thing to be referenced in the code with standard member syntax: "self.Philips_HUE_01".  Each of the properties of the Thing is available using the dot notation, too: "self.Philips_HUE_01.on".  Changing the state of a thing's properties is done with assignment statements: "self.Philips_HUE_04.on = True".  The attribute names are sanitized derivations of the name attribute of the Thing.  Spaces and other characters not allowed in Python identifiers are replaced with the underscore.  If the first character of the name is not allowed as a first character in an identifier, a leading underscore is added: "01 turns on 02, 03" becomes "_01_turns_on_02__03".  It's not ideal, but reconciling language requirement differences can be complicated.
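A rough sketch of that sanitizing step might look like the following.  It is an illustration only, not pywot's actual implementation:

import re

def as_python_identifier(thing_name):
    # replace every character that is not legal in an identifier with an underscore
    candidate = re.sub(r'\W', '_', thing_name)
    # an identifier may not start with a digit, so add a leading underscore if needed
    if candidate[0].isdigit():
        candidate = '_' + candidate
    return candidate

# as_python_identifier("01 turns on 02, 03") returns "_01_turns_on_02__03"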

The "While" version of the rule could look like this:

class ExampleWhileRule(Rule):

    def register_triggers(self):
        return (self.Philips_HUE_01,)

    def action(self, the_triggering_thing, the_changed_property_name, the_new_value):
        if the_changed_property_name == 'on':
            self.Philips_HUE_02.on = the_new_value
            self.Philips_HUE_03.on = the_new_value
            self.Philips_HUE_04.on = the_new_value
(see this code in situ in the example_while_rule.py file in the pywot rule system demo directory)

Notice in this code, I've expanded the parameters of the action method.  Each time the action method is called, it receives a reference to the object that changed state, the name of the property that changed and the new value of the property.

To make the other lights follow the boolean value of  Philips HUE 01's on state, all we have to do is assign the_new_value  to the other lights' on property.

Since we've got the name of the changed property and its new value, we can implement the full functionality of the bonded_things.py example that I gave several weeks ago:

class BondedBulbsRule(Rule):

    def register_triggers(self):
        return (
            self.Philips_HUE_01,
            self.Philips_HUE_02,
            self.Philips_HUE_03,
            self.Philips_HUE_04,
        )

    def action(self, the_triggering_thing, the_changed_property_name, the_new_value):
        for a_thing in self.triggering_things.values():
            setattr(a_thing, the_changed_property_name, the_new_value)
(see this code in situ in the bonded_rule.py file in the pywot rule system demo directory)

In this example, any change to on/off state or color of one bulb will immediately be echoed by all the others.  We start by registering all four bulbs in the list of triggers.  This means that a change in property to any one of them will trigger the action method.  All we have to do in the action is iterate through the list of triggering_things and change the property indicated by the_changed_property_name.  Yes, the bulb that triggered the change doesn't need to have its property changed again, but it doesn't hurt to do so.  The mechanism behind changing values can tell that the new and old values are the same, so it takes no action for that bulb.
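The gist of that short-circuit, sketched here purely as an illustration (the names are assumptions, not pywot's real property code):

class ThingPropertySketch:
    def __init__(self, name, initial_value, send_to_gateway):
        self.name = name
        self._value = initial_value
        self._send_to_gateway = send_to_gateway  # a callable that talks to the Web Thing API

    def set(self, new_value):
        if new_value == self._value:
            return  # nothing changed: send no message to the gateway
        self._value = new_value
        self._send_to_gateway(self.name, new_value)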

Compare this rule-based code with the original one-off version of the bonded things code.  The encapsulations of the Rules System significantly improve the readability of the code.


Up to this point, I've only demonstrated using Things from the Things Gateway as triggers.  However, any object can be written to asynchronously invoke the action method.  Consider this class:

class HeartBeat(TimeBasedTrigger):
    def __init__(
        self,
        config,
        name,
        period_str
        # duration should be an integer in string form with an optional
        # H, h, M, m, S, s, D, d as a suffix to indicate units - default S
    ):
        super(HeartBeat, self).__init__(name)
        self.period = self.duration_str_to_seconds(period_str)

    async def trigger_dection_loop(self):
        logging.debug('Starting heartbeat timer %s', self.period)
        while True:
            await asyncio.sleep(self.period)
            logging.info('%s beats', self.name)
            self._apply_rules()
(see this code in situ in the rule_triggers.py file in the pywot directory)

A triggering object can participate in more than one rule.  The act of registering a triggering object in a rule means that the rule is added to an internal list of participating_rules within the triggering object.  The method, _apply_rules, iterates through that collection and calls the  action method for each rule.  In the case of this HeartBeat trigger, it calls _apply_rules periodically as set by the period_str parameter of the constructor.  This provides a heartbeat that can make a series of actions happen over time.
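In outline, the relationship looks something like the sketch below.  The details are guessed for illustration rather than copied from pywot:

class TriggerSketch:
    def __init__(self, name):
        self.name = name
        self.participating_rules = []  # every rule that registers this trigger lands here

    def _apply_rules(self, *args):
        # notify each rule that registered this object as a trigger
        for a_rule in self.participating_rules:
            a_rule.action(self, *args)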

Using the HeartBeat class that beats every two seconds, this rule creates a scrolling rainbow with six Philips HUE lights:

the_rainbow_of_colors = deque([
    '#ff0000',
    '#ffaa00',
    '#aaff00',
    '#00ff00',
    '#0000ff',
    '#aa00ff'
])

class RainbowRule(Rule):

    def initial_state(self):
        self.participating_bulbs = (
            self.Philips_HUE_01,
            self.Philips_HUE_02,
            self.Philips_HUE_03,
            self.Philips_HUE_04,
            self.Philips_HUE_05,
            self.Philips_HUE_06,
        )

        for a_bulb, initial_color in zip(self.participating_bulbs, the_rainbow_of_colors):
            a_bulb.on = True
            a_bulb.color = initial_color

    def register_triggers(self):
        self.heartbeat = HeartBeat(self.config, 'the heart', "2s")
        return (self.heartbeat, )

    def action(self, *args):
        the_rainbow_of_colors.rotate(1)
        for a_bulb, new_color in zip(self.participating_bulbs, the_rainbow_of_colors):
            a_bulb.color = new_color
(see this code in situ in the rainbow_rule.py file in the pywot rule system demo directory)

The initial_state callback function sets up the bulbs by turning them on and setting the initial colors.  This time in register_triggers, a HeartBeat object is created with a period of two seconds.  The HeartBeat will call the action method every two seconds.  Finally, in the action, we rotate the list of colors by one and then assign new colors to each of the six bulbs.




By implementing the rule system within Python, rules can use the full power of the language.  Rules could be formulated that respond to anything that the language can do.  It wouldn't be difficult to have a Philips HUE bulb show red when your software testing system indicates a build error.  You could even hook up a big red button to physically press when you want to deploy the latest release of your code.  For an example closer to home, how about blinking the porch light green to guide the pizza delivery to the right door?  The possibilities are both silly and endless.

by K Lars Lohn (noreply@blogger.com) at October 19, 2018 10:06 PM

June 26, 2018

Ben Kero

IndieWebCamp 2018

Attending bkero

I’m looking forward to attending the 2018 IndieWebCamp. It’s a small 2-day event happening in Portland and is exploring the topics of independent web hosting and technologies to knit them together.

If you’re in Portland, you should attend too!

https://2018.indieweb.org/

by bkero at June 26, 2018 12:52 AM

April 17, 2018

Jeff Sheltren

git rebase --onto - The Simple One-Minute Explanation

TL;DR: the command you want is:

git rebase --onto [the new HEAD base] [the old head base - check git log] [the-branch-to-rebase-from-one-base-to-another]

My main motivation for putting it here is to easily find it again in the future, as I always forget the syntax. (This is a re-post from my old blog on drupalgardens, but it is still helpful.)

Mental model

To make all of this simpler, think of:

You have:
  • Two red dishes on top of two blue dishes
  • One yellow dish

You want:
  • Those two red dishes on top of the one yellow dish

You do:
  • Carefully go with your finger down to the bottom of the two red dishes, which is the first blue dish
  • Take the two red dishes
  • Transfer them over to the one yellow dish

That is what rebase --onto does:

git rebase --onto [yellow dish] [from: first blue dish] [the two red dishes]

Note: The following is meant for an intermediate audience that is familiar with general rebasing in GIT.

Longer explanation

It happened! A branch you had based your work on has diverged upstream, but you still have work in progress, which you want to preserve. So it looks...
fabian Tue, 04/17/2018 - 02:24

by fabian at April 17, 2018 09:24 AM

March 26, 2018

Jeff Sheltren

Michael Meyers Joins Tag1 As Managing Director

I’m excited to announce that Michael Meyers has joined the Tag1 team as Managing Director. Michael was one of our very first clients 10 years ago, we’ve worked together on many projects over the years, and we look forward to working even more closely with him now that he’s a part of the Tag1 team. Michael has extensive experience building market leading high-growth technology companies and is particularly well known in the Drupal Community for his role in driving innovation of the Drupal platform. Michael brings over 20 years of experience managing global technology teams building high traffic, high performance mobile and web applications. Tag1 recently celebrated our 10th anniversary, in that time we’ve established ourselves as the leading provider of highly available, scalable, secure, high performance systems and as the organization other agencies and the users of Drupal turn to for help with their most challenging problems. We will be working with Michael to expand on our success to date and to help lead Tag1 into the future. Roots in Success Michael joins Tag1 from Acquia, where he spent the last 5 years on the leadership team as VP of Developer Relations, Developer Marketing, and helped launch the Developer...
Jeremy Mon, 03/26/2018 - 08:05

by Jeremy at March 26, 2018 03:05 PM

February 23, 2018

Beaver BarCamp

Beaver BarCamp Crowdfunding

Beaver BarCamp is scheduled for Saturday, April 7, 2018 and we hope to see you there! Previously the Open Source Lab has been able to fully fund the event, but this year it is difficult for us to fund it due to some budget constraints. The event is still happening, but we need help funding things like free t-shirts, food, and drinks. To cover the cost of those items we need to raise $4,000. If you would like to help us raise money to make BarCamp that much better, check out our Beaver BarCamp 2018 crowdfunding site which includes the areas where we need donations. Please do what you can to keep BarCamp going!

by Cody Holliday at February 23, 2018 08:00 AM

January 29, 2018

Jeff Sheltren

Building An API With Django 2.0: Part II

This is the second-part of a series. In the previous entry we used Django 2.0 to build a simple REST API for registering users and managing their logins. To satisfy requirements we managed authentication with client-side sessions, using JSON Web Tokens. In this blog we’re going to build upon what we started previously by adding two-factor authentication. We’ll learn more about what that means and how it works. We’ll leverage the Django OTP library to fully support TOTP devices, also offering emergency codes for when users lose their phones. And during this process we’ll learn much more about how JSON Web Tokens work, building a custom payload to support a second level of authentication. You can follow along and write out the code yourself, or view it online at the following URL .
Jeremy Mon, 01/29/2018 - 02:59

by Jeremy at January 29, 2018 10:59 AM

January 15, 2018

Jeff Sheltren

Building An API With Django 2.0: Part I

We’ve helped build many interesting websites at Tag1. Historically, we started as a Drupal shop in 2007, heavily involved in the ongoing development of that popular PHP-based CMS . We also design and maintain the infrastructures on which many of these websites run. That said, we’ve long enjoyed applying our knowledge and skills for building sustainable and high-performing systems to different technologies as well. In this blog series, we’re going to build a backend API server for managing users on a high-traffic website using the Python-based Django framework. We’re going to assume you’re generally comfortable with Python, but new to Django. In this first blog of the series, we’ll build a simple registration and login system which can be used by a single page app, or a mobile app. Coming from a Drupal CMS background, it can initially be surprising to learn that such a simple task requires additional libraries and custom code. This is because Django is a framework, not a CMS. As you read through this first blog, you’ll gain a general understanding of how Django works, and how to add and use libraries. We’ll create a stateless REST API, using JSON Web Tokens for authentication. And we’ll tie it all together with consistent paths. You can follow along and write out the code yourself, or view it online on GitHub . Future blogs in this series will add support for two-factor authentication and unit testing, allowing us to automatically verify that all our functionality is working as designed.
Jeremy Mon, 01/15/2018 - 01:18

by Jeremy at January 15, 2018 09:18 AM

October 09, 2017

Ben Kero

Introduction to Linux Containers presentation materials

Here is a link to the presentation materials for my talk, Introduction to Linux Containers.

Press ‘c’ to see the presenter console for the slides.

by bkero at October 09, 2017 04:27 PM

May 09, 2017

Beaver BarCamp

Beaver BarCamp 17: New Horizons

April 8 started out like any other Saturday in Spring in Corvallis: rainy, then sunny, then windy, windy-rainy sleet, hail, and then of course, sunny again. Despite the crazy weather, people from all walks of life still convened at the Kelley Engineering Center on the Oregon State Campus for the Open Source Lab’s annual Beaver BarCamp.

For several years now, the OSL has hosted one of Oregon State’s only unconferences to great success. Just as a refresher, or for any newbies out there, an unconference is an event in which the attendees decide the topics of presentation and discussion the day of, rather than determining these topics ahead of time. This year, we tried a few new things.

Beaver BarCamp 17 Main Lobby

Beaver BarCamp is usually more computer science oriented. This year, we wanted to expand our horizons. Coordinating with major colleges across the OSU campus along with The CO and SPARK, we promoted the event to a wider audience and our pool of attendees this year included ecological science, food science, human communications, and other branches of engineering. Registrations were up nearly 30% this year and we’ve been excited about the general reception of this year’s event.

“I liked that the diversity of topics didn’t compromise on the highly technical stuff,” said one attendee studying DevOps. Even the first time attendees meshed with the natural flow of the event and sessions included diverse subjects such as chemistry and radio history. One first-time BarCamp attendee who works in human communications said, “Though I had no experience with the content of the session I attended, the speaker and participants made it easy for me to understand.” Given the success of this year and Beaver BarCamp’s naturally inclusive environment for all topics, experience levels, and backgrounds, we hope to spread the word that the event is no longer just for computer people: we want to reflect Oregon State’s commitment to diversity and create an inclusive environment for our attendees, both intellectually and socially.

Another addition this year included four taped sessions that we posted to the OSL YouTube page. We are excited to present these videos as a new way to experience the event, a way that shows exactly what to expect from an unconference and from Beaver BarCamp. Attendees were excited about this new offering because they could reach a broader audience and circulate their ideas beyond this one-day event.

Caleb Boylan presenting on Ceph

Interested in joining us next year? There are lots of ways to stay informed, including:

Also, we send out reminder emails to past attendees so you can always stay connected to Beaver BarCamp.

We’re very excited about this year’s success and what it means for future BarCamps. We hope next year will be even more diverse and include an even broader range of sessions. If you have suggestions or would like to let us know what you thought of BarCamp or if you weren’t able to make it this year and would like to let us know how to make it easier for you to attend, fill out our feedback survey so the improvements we make next year will help everyone get the most out of BarCamp.

by Amanda Kelner at May 09, 2017 07:00 AM

March 07, 2017

Ben Kero

The Dark Arts of SSH presentation materials

Here is the presentation material for my talk entitled The Dark Arts of SSH. Please note this is a single HTML rendering that incldues presenter’s notes.

by bkero at March 07, 2017 11:17 PM

March 04, 2017

Ben Kero

Linux Kernel Compilation presentation materials

As promised to my audience, here are the slides from my presentation titled Building your First Linux Kernel.

by bkero at March 04, 2017 10:20 PM

September 06, 2016

Piotr Banaszkiewicz

AMY release v1.8.0

Major AMY v1.8.0 release was tagged. As you can see below, it was definitely focused on fixing bugs.

New Features

  • Aditya provided a template change that displays link between closed workshop request and corresponding event.
  • Aditya hid survey-related fields on Event-related forms.
  • Chris sped up (again :-) ) tests.
  • Chris removed unnecessary help text for autocompletion fields.
  • Aditya refactored delete views to use DeleteViewContext, essentially making code more DRY and easy to change.
  • I added deleting entries from bulk-upload feature.
  • I updated DataCarpentry self-organized workshops registration form.

Bugfixes

  • Aditya changed uniqueness constraints on Sponsorship model to reflect recent changes he made on that model.
  • Aditya changed display of some Membership model fields.
  • Aditya added missing CSRF tokens in PyData import page.
  • Chris fixed a rare case of email address leakage (CC instead of BCC) in event details page, instructors by date and in workshop staff finder.
  • Aditya changed a uniqueness constraint on Task model + added some other small improvements.
  • Chris fixed non-working links and corrected ordering in all trainings page.
  • Aditya refactored internal URLs file to use nested URLs structure and therefore made it a lot more readable.
  • Chris made “progress” column in trainees view wider
  • Aditya hid from import instances that were decided not to be imported
  • I fixed error message on faulty bulk-upload process.
  • I fixed a double-display of unpublished and published views in very specific circumstances.
  • I stopped counting in unresponsive workshops in workshops issues page.

September 06, 2016 12:00 AM

August 16, 2016

Piotr Banaszkiewicz

AMY bugfix release v1.7.2

AMY v1.7.2 was released today. It contains one bug fix provided by Aditya Narayan.

Aditya fixed a bug throwing 500 HTTP error when accessing /api/v1/todos/user/. This API endpoint is being accessed by the browser whenever any admin user loads their dashboard.

August 16, 2016 12:00 AM

August 14, 2016

Piotr Banaszkiewicz

AMY releases v1.7 and v1.7.1

After another two weeks of development and two weeks of delays, we’re finally releasing AMY v1.7 and a bugfix v1.7.1. This post is a joint changelog for both of them.

Release v1.7

This release is especially interesting since:

  1. it includes mostly Aditya’s and Chris’ PRs
  2. it includes two big PRs containing the biggest part of Aditya’s and Chris’ Summer projects.

New features

  • Chris Medrela helped check for missing migrations in automated continuous integration service Travis-CI
  • Chris Medrela sped up Travis-CI checks of AMY’s test suite by using a cache directory
  • Aditya Narayan as part of his Summer work added titles and URLs to task objects in AMY (useful feature for PyData conference integration)
  • Aditya Narayan changed form for creating new events so that admins can assign themselves to a new event while creating it
  • Aditya Narayan added a Sponsorship model to AMY and integrated it with AMY (we can now track sponsors for events)
  • Aditya Narayan migrated Host to Organization: it fixed some naming inconsistencies
  • in v1.6 we dropped support for numerical event IDs to rely only on slugs (e.g. 2016-08-13-Krakow or 2017-01-xx-Boston), now Aditya Narayan cleaned some remains left in the code from before dropping the support
  • I added support for cancelled tag used to mark events supposed to happen but not happening eventually
  • Chris Medrela added instructor training workflow, ie. huge part of AMY used for instructor training
  • Aditya Narayan added a feature for importing people, events, tasks from PyData conference site in a comfortable way

Bug fixes

  • Chris Medrela tracked and fixed an error in part of AMY responsible for allowing users to log in with other credentials than user/password (currently: GitHub login)
  • I fixed an API error occurring in some views (endpoints) when using CSV or YAML return format
  • Chris Medrela added access to AMY for people in invoicing group
  • Chris Medrela replaced entity &mdash; with actual char
  • Aditya Narayan added a contact field on Sponsorship model
  • Chris Medrela fixed issue with user social integration with GitHub getting out of sync
  • I fixed JavaScript code responsible for generating dates (it was generating e.g. 2016-8-3, it’s now generating 2016-08-03)

Release v1.7.1

This release contains mostly bug fixes for features we added in v1.7 :-)

Bug fixes

  • Chris Medrela removed an overlooked debugging message alert in one of the views
  • Aditya Narayan added a cancel button to almost all the forms in AMY
  • I added a message to “Apply for Instructor Training” page saying that people cannot register for Fall 2016 open-access training anymore
  • Aditya Narayan fixed “Import from URL” not working on workshop acceptance page
  • Chris Medrela fixed some validation issue in one of training-related forms
  • Chris Medrela added access to admin dashboard in AMY to trainers

New features

  • Chris Medrela added a command line tool for importing trainees progress from previous data format into AMY

August 14, 2016 12:00 AM

July 01, 2016

Piotr Banaszkiewicz

AMY release v1.6.2

Whoa, another one?! Yesterday we released v1.6.1, today it’s time for v1.6.2 with some very minor changes.

New features

  • New fields in the training request form:
    • group name will enable us to register groups for the training, without (for now) the need for a new form
    • comment will be a place for any additional information; instead of it, people would use additional skills.
  • Event.slug received new help text containing a format description for admins to use. This field’s validation was also changed so that it only allows entries in this specific format (this is additional to other validation done by Django, ie. only latin characters, digits, underscores and hyphens allowed).

Bug fixes

  • Migration 0088*, which was supposed to generate fake slugs for events without them, contained an error that we hit in the production, so I fixed it by adding random characters to the slugs if uniqueness constraint was about to be violated.

July 01, 2016 12:00 AM

April 19, 2016

Beaver BarCamp

March 16, 2016

Justin Dugger

Rule Zero of FinOpsDev

I'm working on a personal finance project codenamed FinOpsDev (rebranding suggestions welcome), aiming to reduce drudgery to near zero with automation and to exploit the increased velocity to run automated tasks more often, etc. Like DevOps for your checkbook. Or like Continuous Accounting.

As a base, I'm using GNUCash backed by PostgreSQL. GNUCash provides the accounting principles and concepts, and I have used it for years. Postgres makes the data available in a central location, with well understood tools.

I'm not ready to announce any useful tools as a result of my tinkering quite yet. Instead, I want to reflect upon an old quote:

To err is human; to really foul things up requires a computer.

Up till now I've been using those tools in a manual process, so naturally my first foray into automation ended up removing all data from the database, forcing a restore from a backup I made last year. From this calamity, a principle is born: whatever the first financial automation to be built is, the zeroth should be backups. I still don't know how it happened, which only underlines the importance of rule zero.

To commemorate the year of transactions I'm rebuilding, here's a clever little logrotate script I found that gets the job done without any additional dependencies:

# Rotate yesterday's dump, then regenerate a fresh one in postrotate.
/var/backups/postgresql/postgresql-dump.sql {
        daily
        nomissingok
        rotate 30
        compress
        delaycompress
        ifempty
        create 640 postgres postgres
        dateext
        postrotate
                /usr/bin/sudo -u postgres /usr/bin/pg_dumpall --clean > /var/backups/postgresql/postgresql-dump.sql
        endscript
}

Obviously tools like barman and pg_backrest are great, but I like having a quick, simple solution in place. Next on the plate is a cron job to exfiltrate backups to another server for safekeeping.
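
A minimal sketch of that follow-up job in Python, assuming a hypothetical offsite host and path (and rsync installed on the machine):

import subprocess

# Push the rotated dumps to another server over ssh; the host and paths
# below are placeholders.
subprocess.run(
    [
        "rsync", "-az",
        "/var/backups/postgresql/",
        "backups@offsite.example.org:/srv/backups/postgresql/",
    ],
    check=True,
)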

by Justin Dugger at March 16, 2016 12:00 AM

September 14, 2015

OSUOSL

OSL GSOC 2015-Oregon's Catch

by Evan Tschuy

This summer the Open Source Lab had three students from around the world working on open source software through Google Summer of Code. The OSL has participated in GSoC for nine years, and each year has had its own unique challenges and successes.

I had an opportunity to work with a student, Chaitanya, on What's Fresh, a project I originally developed last summer at the OSL for Oregon Sea Grant. With What's Fresh (which Sea Grant is planning to brand as Oregon's Catch), Sea Grant wanted to allow visitors to the Oregon coast to find fresh fish available from fishermen, and had CASS, the new organization the OSL is a part of, develop the app and backend. Chaitanya worked on the backend, making data entry easier. It now has several important features, like easier location entry, search, and inline forms so users don't need to leave the page to add related items. It is also now themeable, so other organizations can easily set up a customized version for their area.

It was initially slow-going as we got more familiar with working with each other and as he got comfortable working on the project. Since Chaitanya was more familiar with Python and Django than Javascript, it took a while for things to start coalescing. However, at the end of the summer, we're both proud of what's been accomplished and the features added to the project. It was exciting to see Chaitanya's skills grow, and to feel myself grow more comfortable in a mentorship role. We're going to deploy the improved version of the backend after one more round of code review.

This year, the Open Source Lab will have the opportunity to send one person to Google's annual Mentorship Summit. We look forward to seeing other mentors there!

by phillels at September 14, 2015 06:14 PM

September 12, 2015

OSUOSL

OSL GSOC 2015-Protein Geometry Database

by Elijah Voigt

What is the Protein Geometry Database?

The Protein Geometry Database project (PGD) is many things to many people.

The synopsis on code.osuosl.org says:

"Protein Geometry Database is a specialized search engine for protein geometry. It allows you to explore either protein conformation or protein covalent geometry or the correlations between protein conformation and bond angles and lengths."

There's a lot of science in that paragraph; I speak code much better than I speak science, so let's look at the Github Repository instead.

That page describes the code as being:

  • 59.2% Python,
  • 27.2% HTML,
  • 12.4% JavaScript, and
  • 1.2% Other

Depending on what you use PGD for (if you use it at all) you have a different relationship with the project. What matters here is that PGD is a project that the OSL develops and maintains. This year a lot of great work was done on it for the 2015 Google Summer of Code.

What PGD Accomplished During GSOC 2015

This year's PGD GSOC project had five core goals, all of which were accomplished.

  1. Revamping the current account system.
  2. Building occupancy awareness into PGD.
  3. Testing the current development branch of PGD.
  4. Implementing a search by deposition date filter.
  5. Upgrading PGD to Django 1.8 (from Django 1.6!)

The student for this project was S. Ramana Subramanyam. He is in his second year at the Birla Institute of Technology and Science in Goa, India, and was wonderful to work with. Despite a 12 hour time difference he was able to be productive the majority of the time.

Although none of the code developed for this year's GSOC has been merged into PGD, it has all been reviewed and will be merged over the next few months as the project lead (Jack Twilley) and I are able to work together on migrating the changes.

Overcoming Challenges

The largest challenge we faced in this project was scheduling.

The PGD Project Lead (Jack) got an amazing internship for his Food Science degree in California at a vineyard; as a result he was unable to work on PGD and his GSOC mentorship as much as was initially expected. While I was able to answer (or at least help with) many of the questions S. Ramana had, sometimes we were forced to throw up our hands, send an email to Jack, and wait.

This didn't stop S. Ramana from completing all of his goals for the GSOC project; there was always plenty to do, so he could put one thing on the back-burner and focus on a new task. At most it was a mild inconvenience and didn't get in the way too often.

Where PGD Stands

Once the code is merged and the inevitable version control conflicts are resolved, PGD will have some pretty neat new features:

  1. Search results can be saved.
  2. Search results can be saved as a PNG image.
  3. Occupancy Awareness.
  4. Deposition Date is now a search Filter.
  5. PGD is upgraded to Django 1.8.

It took a lot of energy not to add ! to the end of each of those items.

Despite scheduling conflicts and the usual technical snafus that come with major engineering changes, I would say that this GSOC was a success for PGD and the OSL.

Personal Takeaways

This was my first time mentoring a student for GSOC, and although I have had limited experience mentoring students with Devops Bootcamp, mentoring a student remotely across a 12-hour time difference is an entirely different can of worms.

My mentorship abilities were challenged, but I learned a lot of great skills and added many tools to my belt when it comes to dealing with problems and knowing when, and whom, to ask for help. If I am given the chance to be a GSOC mentor next year I will definitely jump on the opportunity.

by phillels at September 12, 2015 02:18 AM

August 07, 2015

OSUOSL

Mysql1-vip Outage Post-Mortem

Background

On July 15th we ran into a number of issues with replication on mysql2 on a couple of session tables. This caused replication to be paused, and a large number of statements had to be skipped. Replication was restarted successfully. On July 16th some more issues with the same tables were encountered, but in far greater number. A ticket was created to track the issue. Replication was restarted several times, but on the week of the 20th a decision was made to entirely reload mysql2 and examine some alternative replication methods (primarily row-based replication).

Our servers, mysql1 and mysql2, are running mysql 5.5. While documentation and tribal knowledge claimed a master-slave replication set-up, they were configured as master-master replication.

What Happened

On July 30th a decision was made to reload mysql2 at 4:00PM PDT to fix replication errors. Slave replication was intentionally stopped. Databases were dropped one at a time on mysql2 with a small delay between each drop.

As mentioned previously, mysql1 and mysql2 were unexpectedly set up in master-master replication configuration. Therefore, though slave replication on mysql2 was stopped,  mysql2 was still sending commands to mysql1. This caused databases to be dropped on both machines. Thanks to the script delays we realized after a few minutes that mysql1 was dropping databases and the script was stopped. We then immediately started working to restore the databases.

Why restores took so long

As demand for the mysql cluster has grown, our backup strategy has shifted to be optimized to save disk space, our greatest resource bottleneck. This has been a worthwhile tradeoff in the past, as we have rarely had to do full restores. We use mysql-zrm to back up mysql with heavy compression. Because of this strategy, restores were largely CPU-bound instead of IO-bound.

We also discovered we had a couple of databases that had issues restoring due to indexing and foreign keys. Each time one of these failed, we had to parse the entire backup file (around 200GB), pull out the bad database to restore separately, and then pull out the rest of the unrestored databases.

A further complication was that our backups were pointed at mysql2, which was out-of-date with mysql1, due to the initial synchronization failures. Fortunately, we had the binary logs from the 17th through the 30th. This means that though most data could be restored, some data from between the 15th and the 17th was lost.

These three factors combined meant a much slower, and much more labor-intensive restore process than we had anticipated.

Looking Forward

We learned a lot of important lessons from this outage, both related to how we run our mysql cluster, as well as how we plan and manage resources at the OSL in general.

Most immediately, some of the most important changes we will implement for the mysql service over the next month or two include:

  1. Evaluating better replication strategies to mitigate the initial cause, including row-based replication

  2. Storing binlogs as a backup on a separate server.

  3. Doing backups using Percona XtraBackup, allowing for much faster full restores

  4. Using mydumper rather than mysql-zrm to improve the speed of our logical backups

  5. Working on our documentation and training for our complex systems, including

    1. Regularly testing full restores as part of our backup process on a spare server

    2. Gathering more accurate ETAs for the restoration process

    3. Regularly auditing the databases we host -- multiple test and ballooning databases (100GB+) seriously delayed the restore process

  6. Migrating to a bigger, more powerful mysql cluster (already planned before this outage)

In terms of the bigger picture, we recognize that we need to change how the lab plans, monitors, and manages resources and projects. Despite our best efforts, the backlog of hosting requests to the OSL continues to grow. We have, over the years, worked hard to stretch our resources to provide services to as many projects as we can. This has always come with tradeoffs, such as the compression of backups to maximize disk use, and less redundancy than we would have wished.

We have for a while been concerned about how thinly resources have been stretched, and have been working on a set of policy changes, as well as raising funds to reinvest in the lab. Some of you may have heard our staff talk about this plan -- we hope to talk to a lot more of you about this in the near future. Our new FTP cluster, perhaps one of our most neglected pieces of infrastructure, was an important first step in this renewal.

Over the next few months, the OSL will be looking at a number of different services and policies, including:

  1. Instituting a policy and mechanisms for better keeping the community informed

    1. Of outages, maintenance, etc.

    2. Of resource use & warning signs (dashboards)

  2. Identifying and redesigning “core” services, including

    1. Defining and monitoring capacity limits

    2. Implementing redundancy and restore practices, including staff drills

    3. Migrating more of these services to Chef

    4. Instituting periodic review of documentation, policies and performance metrics

    5. Finding better ways of leveraging community expertise to supplement our own

  3. Raising funds to refresh our most aging infrastructure, and catch up on the worst of our technical debt.

We want to thank you for your patience and support during this outage and over the years we have served you. We realize that the length of this outage, and the lack of progress reports was unacceptable, and we want you to know that we are taking steps to reduce both the likelihood and the impact of future outages.

by jordane at August 07, 2015 09:13 PM

June 17, 2015

OSUOSL

Write the Docs '15

by Elijah Voigt

The day is May 18. The location is Portland's Crystal Ballroom. The conference is Write the Docs (WtD). Excitement and anticipation fill the air as we collectively munch on breakfast foods and find a seat. The keynote begins and immediately sets the mood: docs are fun, docs are interesting, and here's how you can make your docs awesome.

WtD was quite the experience and it got me excited about documentation, something I admit I never expected to be all that excited about. At times it felt like a support group for non-technical individuals who work with engineers, at other times like a storyteller sharing their adventure in documenting some massive project, and most importantly it was always engaging and interesting. Some of my most memorable talks were about Twilio's efforts to make their documentation better, GitHub's workflow of writing docs for GitHub with GitHub, and Google's new documentation tool and how it was developed and adopted in a grassroots effort as opposed to a top-down corporate approach. I even gave a Lightning Talk on "How to Write the Best Email You've Never Written... Until Now", which went over very well and seemed to speak to a lot of people.

Inspired by this awesome conference, we have started a massive overhaul of our documentation, including writing official style guides, overhauling the new hire onboarding docs, and updating our wiki. With the new hire documentation we have taken into account lessons learned from the conference, like how we should make docs fun to read in addition to informative; this shift has resulted in our 'Gamified New Hire Docs' rewrite, which essentially gamifies the onboarding process to be more fun. Once one of the new student employees passes a milestone, like submitting their first GitHub Pull Request, they get a reward badge (e.g., a gold star sticker). It might not seem like much, but this is way better than slogging through a daunting pile of docs as one starts a new job.

by Anonymous at June 17, 2015 09:03 PM

May 12, 2015

Russell Haering

Next Adventure: ScaleFT

In 2008 I stumbled across the opportunity to work as a sysadmin at the OSU Open Source Lab. When I started there I didn't have much experience with internet infrastructure, but it quickly became a passion of mine and inspired a mission that has had a profound influence on my life. My Twitter profile has a (necessarily) succinct summary of that mission:

Building infrastructure that makes the internet more usable to more people.

I've had a great time pursuing this mission at Cloudkick, and at Rackspace after we were acquired in December of 2010. I've met countless great people and learned a ton from them. I've worked with (and on) a bunch of great teams that are doing great work and furthering this mission more than I ever could alone.

But it's time for the next step in my mission. Yesterday some good friends and I announced our new company, ScaleFT.

At ScaleFT we're focusing on improving how teams use infrastructure and working to make those interactions more collaborative and ultimately easier, safer and more fun. Tools like GitHub have proven the power of collaboration when applied to writing code. We're going to bring that same power to interactions with infrastructure.

Time to get hacking.

by Russell Haering at May 12, 2015 05:54 PM

March 16, 2015

Beaver BarCamp

Beaver Barcamp: Now with More Lightning Talks!

This year we will be introducing lightning talks to Beaver Barcamp! A lightning talk is a five-minute presentation on any given topic; it's basically just a shorter version of the usual barcamp talk. Instead of a keynote, our first session will be all lightning talks. You can come early and propose a topic to give a lightning talk on, or vote on other topics that you want to hear about. The most popular proposals will be chosen for presentation. If you have any questions about this format, please email us at info<at>beaverbarcamp.org. We look forward to seeing you at Beaver Barcamp!

by OSU Open Source Lab at March 16, 2015 07:00 AM

January 17, 2015

Pranjal Mittal

My new blog for programming related posts

After a lot of thought I have decided to divide my blogging activities into non-technical and technical blogs. I have created a separate blog for technical posts. I realized that I was facing a lot of difficulties trying to make syntax highlighting for code work in the current blog (which uses Blogger).

Hopefully the new blog, which uses Octopress, gives me an incentive to complete my blog posts. Until now I have left some of my posts unfinished because I got frustrated trying to paste code with syntax highlighting, eventually giving up and then forgetting to complete the post. Even though there are some syntax-highlighting JS libraries out there, they do not work so well with Blogger, and the highlighted code takes a small but noticeable time to render. I did not like it much, or maybe I didn't try hard enough to make it work smoothly. But believe me, it was much easier to just set up a blog on Octopress in the meantime.

If you would like to see an Octopress blog post sample, here you go: my post on finding the total number of users on GitHub using the Github API

by Pranjal Mittal (noreply@blogger.com) at January 17, 2015 08:05 AM

December 22, 2014

Alex Polvi

December 16, 2014

Alex Polvi

December 10, 2014

Pranjal Mittal

Javascript vs Python: Comparing ways of doing stuff

In this post I am going to compare ways of doing useful stuff in Javascript and Python.

1. Unpacking an array and passing it as arguments to a function. Background: Math.min in JS vs min in Python.

Javascript

var array = [1, 2, 3, 4]
Math.min.apply(Math.min, array)

// apply() is used in Javascript to call a function with an array unpacked into individual arguments

// Math.min(array) would return NaN rather than the minimum



Python

array = [1, 2, 3, 4]

min(*array)

# Use * while calling function to unpack arguments

# min can also be called with a list as input directly. Python is beautiful.
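
As a small companion illustration of my own (not from the original post), the same * syntax also works on the defining side in Python:

def smallest(*values):
    # Collect any number of positional arguments into a tuple.
    return min(values)

array = [1, 2, 3, 4]
print(smallest(*array))   # unpack the list at the call site -> 1
print(min(array))         # the built-in min also accepts an iterable directly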

P.S.: If you know how to do something in one of the languages and cannot figure out how to do it in the other, just leave a comment below and I will work it out for you.

by Pranjal Mittal (noreply@blogger.com) at December 10, 2014 11:41 AM

August 21, 2014

Pranjal Mittal

Sending sms messages from code without purchasing an online sms gateway

Very recently, Makemymails introduced an alpha version of an SMS API that lets users send automated SMS messages from their own website code with a few lines of code; the messages are routed via their Android phone to the intended recipients.

This eliminates the need to buy an expensive SMS gateway, because your Android phone itself becomes the gateway, and Makemymails provides a free web API on top of it that makes sending SMS messages from the phone dead simple. Sending an SMS from your code boils down to calling a function (supported languages: PHP, Python), and above that a REST API is provided that allows integration with any programming language. What excites me is that the web SMS API is completely free; I only have to pay a small amount for the SMS plan/pack that I activate on my Android phone.

Introduction

In an era of smartphones, do you need to look beyond your own device for sending messages?
Buying an SMS gateway is only useful for high volumes of SMS. If you are sending fewer than 100-200 text messages per day from your website or code, it is 5-10 times more economical to use this web-Android API from Makemymails than to buy SMS gateways and plans from internet SMS gateway providers.

For example, Clickatell is a very good service for sending SMS messages from code, and they provide nice APIs too. The only sad part is the pricing: for a small-volume SMS user who just wants to send transactional messages like order confirmations and password tokens from a website, it isn't a very good option, as it would drain a lot of your money.

How does it work?


Requirements

- A mobile-data-enabled Android device
- An operational SIM in the Android phone that is capable of sending SMS messages.
- (Optional) An SMS plan/pack on the Android phone, which is much more cost-effective than SMS gateways for a few hundred SMS per day.

1. You register for a free web account on Makemymails and obtain a username.

2. You install the Makemymails android app and provide your username inside the app to associate your device with your web account. You can associate multiple android devices with the same web account.

3. You visit your web account, where you can see associated devices and your API KEY. Each device is assigned a unique device id by Makemymails, and you can use any of your devices to send messages from the API by providing the corresponding device id during the API call.

Step-by-step instructions to get started


Step 1: Sign up for a free account on Makemymails [1] and note your username somewhere.

(After signing up, do not get confused by the other services Makemymails offers. It also offers an emailing service, which is a different use case altogether.)


Step 2: Install Makemymails Android App from Google Play on the intended android device from which your messages will actually be sent.
[2] https://play.google.com/store/apps/details?id=awsms.mmm


Tap "Associate username" button.

Step 3: 

This page contains the API documentation, which can be integrated with your website irrespective of the platform and programming language.

Note:

API calls you make will cause an SMS to be sent via your phone, so it is suggested to activate an SMS plan on the default SIM of your Android device. Overall these SMS plans are 5-10 times cheaper than buying an SMS gateway, and easier to activate.

The API call will cause a message to be sent from the default SIM on your phone. The recipient will see your number as the sender ID.

Step 4:

As soon as you make a POST request with content-type application/json to the URL:
http://www.makemymails.com/sms/api-single-sms/
an SMS will be generated by Makemymails from your post data and routed via the selected Android phone.
Make sure your device is connected to the internet at the time of the call if you want the message to be delivered immediately.
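
A hedged sketch of such a request using the Python requests library; the payload field names here (api_key, device_id, to, message) are placeholders of mine, so consult the API documentation for the real schema:

import requests

# Placeholder payload -- check the Makemymails docs for the field names
# the API actually expects.
payload = {
    "api_key": "YOUR_API_KEY",
    "device_id": "YOUR_DEVICE_ID",
    "to": "+15551234567",
    "message": "Your order has been confirmed.",
}
response = requests.post(
    "http://www.makemymails.com/sms/api-single-sms/",
    json=payload,
    timeout=10,
)
print(response.status_code, response.text)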

Useful API libraries in different languages

Python: https://github.com/makemymails/makemymails-sms-python


Typical coders/fun use cases

- A command-line tool could be built to send messages straight from your terminal, through your Android phone.

(I am going to build one for myself very soon and open source it if you would like to try... but of course I will have to remember to hide my API KEY from the code.)


Typical commercial use cases

- Small-scale e-commerce companies that wish to send order confirmations to users after a successful purchase.
- Websites for hotels and resorts that have online booking portals and want to send messages to their users after a booking is made.
- Restaurants with websites that take home-delivery orders and wish to send food order confirmations.
- Any website that wishes to send registration confirmation messages, SMS messages when someone submits a contact form, or updates to users or administrators when a transaction is made.

by Pranjal Mittal (noreply@blogger.com) at August 21, 2014 06:07 AM

May 26, 2014

Pranjal Mittal

Setting up Rsync in daemon mode on an AWS EC2 instance

I was trying to explore and understand rsync in detail for a very cool project that I am planning to work on. The project is related to FTP mirror syncing, about which I will write in detail next time. Rsync is a great tool for efficient syncing of directories: it transfers only the differences in files, saving time and bandwidth. In this succinct post I will quickly walk through the steps I performed to set up rsync between two Amazon EC2 instances. I will particularly focus on using rsync in daemon mode, as opposed to using rsync over ssh, which you can explore easily without any problems.

Key to the steps described ahead:

(1) To edit the default config file used by the rsync daemon
(2) To start the rsync daemon
(3) To kill the rsync daemon
(4) Command to sync (push) contents of the current directory to the server that is running the rsync daemon
(5) To create several demo files for testing rsync


Steps performed in detail:

(Refer to corresponding key number)

(1) sudo nano /etc/rsyncd.conf


rsyncd.conf (contents)


lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
port = 873

# Defines an rsync module.
[modulepranjal]
    path = <absolute_path_this_module_maps_to>
    comment = The syncthis directory shall be synced.
    uid = ubuntu
    gid = ubuntu
    read only = no
    list = yes
    hosts allow = 0.0.0.0/0
    auth users = *
    secrets file = /etc/rsyncd.secrets

# Can define more modules if you want that map to a different path.
...


rsyncd.secrets (contents)


rsync_client_user: keepanypassword


Note: Make sure you change access permissions of your rsyncd.secrets file to 600 if you want your rsync daemon to actually accept your secrets file.

    $ sudo chmod 600 /etc/rsyncd.secrets

(2) sudo rsync --daemon

Caveat: Make sure connections to port 873 are allowed on your instance. I spent about 5-6 days trying to figure out why my rsync daemon was not working correctly when I tried to rsync to it from another instance, and later figured out that the AWS firewall had blocked all connections to port 873 since there was no rule allowing access to it.

(3) sudo kill `cat /var/run/rsyncd.pid`

(4) rsync -rv . ubuntu@10.252.164.249::modulepranjal/

Run this command on any other instance (without an rsync daemon) to push all contents in the current directory to the rsync module path on the instance running the rsync daemon.
-r stands for recursively transferring all the contents of the directory; -v gives verbose output.

Note: A double colon (::) means the rsync protocol will be used rather than ssh. If only a single colon (:) is provided, rsync tries to sync over ssh.


(5) for i in `seq 1 100`; do touch testfile$i; done

This simple bash command will generate 100 test files (testfile1, testfile2, and so on), which is useful if you wish to see what a sync involving several files looks like.

Quick Tip:

Syncing with rsync in daemon mode is much faster than over ssh. Daemon mode is quite useful for syncing public content where privacy is not much of a concern. The ssh mode takes more time because some of it is spent encrypting and decrypting the rsync transfer data.

by Pranjal Mittal (noreply@blogger.com) at May 26, 2014 04:51 PM

February 25, 2014

Brandon Philips

Slides: etcd at Go PDX

Last week I gave a talk at the PDX Go meetup (Go PDX). The presentation is a refinement on the talk I gave last month at GoSF but contains mostly the same content.

Several people in the audience had some experience with etcd already so it was great to hear their feedback on the project as a whole. The questions included partition tolerance and scaling properties, use cases and general design. It was a smart crowd and it was great to meet so many PDX Gophers.


by Brandon Philips at February 25, 2014 12:00 AM

February 16, 2014

Brandon Philips

Getting to Goven

This is the step-by-step story of how etcd, a project written in Go, arrived at using goven for library dependency management. It went through several evolutionary steps while trying to find a good solution that met these basic goals:

  • Reproducible builds: given the same git hash and version of the Go compiler we wanted an identical binary every time.
  • Zero dependencies: developers should be able to fork on github, make a change, build, test and send a PR without having anything more than a working Go compiler installed.
  • Cross platform: compile and run on OSX, Linux and Windows. Bonus points for cross-compilation.

Checked in GOPATH

Initially, to get reproducible builds and zero dependencies we checked in a copy of the GOPATH to “third_party/src”. Over time we encountered several problems:

  1. “go get github.com/coreos/etcd” was broken since downstream dependencies would change master and “go get” would set up a GOPATH that looked different from our checked-in version.
  2. Windows developers had to have a working bash. Soon we had to maintain a copy of our build script written in Powershell.

At the time I felt that “go get” was an invalid use case since etcd was just a project built in Go and “go get” is primarily useful for easily grabbing libraries when you are hacking on something. However, there were mounting user requests for a “go gettable” version of etcd.

To solve the Windows problem I wrote a script called “third_party.go” which ported the GOPATH management tools and the shell version of the “build” script to Go.

third_party.go

third_party.go worked well for a few weeks and we could remove the duplicate build logic in the Powershell scripts. The basic usage was simple:

# Bump a dependency in the custom GOPATH
go run third_party.go bump github.com/coreos/go-etcd
# Use third_party.go to set GOPATH to third_party/src and build
go run third_party.go build github.com/coreos/etcd

But, there was a fatal flaw with this setup: it broke cross compilation via GOOS and GOARCH.

GOOS=linux go run third_party.go build github.com/coreos/etcd
fork/exec /var/folders/nq/jrsys0j926z9q3cjp1yfbhqr0000gn/T/go-build584136562/command-line-arguments/_obj/exe/third_party: exec format error

The reason is that GOOS and GOARCH get used internally by “go run”. Meaning it literally tries to build “third_party.go” as a Linux binary and run it. Running a Linux binary on an OSX machine doesn’t work.

This solution didn’t get us any closer to being "go gettable" either. There were several inquiries per week for this. So, I started looking around for better solutions and eventually settled on goven.

goven and goven-bump

goven achieves all of the desirable traits: reproducible builds, zero dependencies to start developing, cross compilation, and as a bonus “go install github.com/coreos/etcd” works.

The basic theory of operation is that it checks all dependencies into subpackages of your project. Instead of importing "code.google.com/p/goprotobuf" you import "github.com/coreos/etcd/third_party/code.google.com/p/goprotobuf". It makes the imports uglier, but the rewriting is automated by goven.

Along the way I wrote some helper tools to assist in bumping dependencies, which can be found on Github at philips/goven-bump. The scripts "goven-bump" and "goven-bump-commit" grab the hg revision or git hash of the dependency along with running goven. This makes bumping a dependency and getting a basic commit message as easy as:

cd ${GOPATH}/github.com/coreos/etcd
goven-bump-commit code.google.com/p/goprotobuf
git commit -m 'bump(code.google.com/p/goprotobuf): 074202958b0a25b4d1e194fb8defe5d69c300774'

goven introduces some additional complexity for the maintainers of the project. But the simplicity it presents to regular contributors and to users used to “go get” makes it worth the additional effort.

by Brandon Philips at February 16, 2014 12:00 AM

February 07, 2014

Russell Haering

Ridiculously Fast 'sprintf()' for Node.js

Today I was reminded of one of my neatest Node.js hacks. A few years ago, in the process of optimizing how Rackspace Cloud Monitoring compiles user-supplied alarms (a javascript-like DSL used to implement thresholds), we discovered that we were spending a significant amount of CPU time in a widely used Javascript implementation of sprintf. This was back in the dark ages of Node.js, before util.format landed.

The CPU time spent in sprintf wasn't enough to be a problem: even compiling a few hundred thousand alarms is pretty fast, as compared to reading them out of a database, serializing the compiled alarms to XML, and loading them into Esper. Nonetheless, in a bout of "not invented here" and with a spirit of adventure in my heart, I did the obvious thing, and took a weekend to write a faster sprintf.

"Standard" Sprintf

The standard implementation of sprintf takes a format string, followed by any number of positional arguments intended to be injected into the resulting string. It operates by parsing the format string using a series of regular expressions, to generate a parse tree consisting of alternating constant strings and formatting placeholders.

For example, consider:

sprintf('The %s ran around the tree', 'dog');  

The generated parse tree looks something like:

['The ', '%s', ' ran around the tree']

Then the tree is iterated, and positional (or named) arguments are injected to generate an array that can be joined into the appropriate result:

return ['The ', 'dog', ' ran around the tree'].join('');  

As an optimization, the parse tree is cached for each format string, so that repeated calls to sprintf for a given format string need only repeat the actual argument injection.

Getting Wild

TLDR; the code

So how can this be further optimized? We know a few things about V8:

  1. V8 is very good at concatenating strings.
  2. V8 is very good at just-in-time compiling "hot" functions.
  3. At least as of Crankshaft (the latest version of V8 I've used in any seriousness), V8 was unable to optimize code that treated the arguments object in unusual ways, such as iterating it or mixing its use with named arguments.

I was able to take advantage of these properties by generating a function which applied the format string through a single-line string concatenation, instead of generating a parse tree. Taking the example above, I generate a string such as:

var fnBody = "return 'The ' + arguments[1] + ' ran around the tree';";

Then compiling that string into a function on the fly:

return Function(fnBody);  

By caching the resulting Function object, I was able to cause V8's JIT to optimize calls to sprintf into little more than a dictionary lookup, a function call and a string concatenation.
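
The same generate-and-cache trick carries over to other dynamic languages. Here is a minimal Python sketch of the idea (an illustration of mine, not the author's Node.js code), handling only %s placeholders:

_compiled = {}

def fast_sprintf(fmt, *args):
    fn = _compiled.get(fmt)
    if fn is None:
        # Build source that concatenates the constant pieces with str(args[i]).
        parts = fmt.split('%s')
        pieces = []
        for i, part in enumerate(parts):
            pieces.append(repr(part))
            if i < len(parts) - 1:
                pieces.append('str(args[%d])' % i)
        src = 'def _fn(*args):\n    return ' + ' + '.join(pieces)
        namespace = {}
        exec(src, namespace)
        fn = namespace['_fn']
        _compiled[fmt] = fn          # cache the generated function
    return fn(*args)

print(fast_sprintf('The %s ran around the tree', 'dog'))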

Security

An obvious risk of this strategy is that an attacker might find a way to cause us to generate arbitrary javascript.

This can be mitigated by never passing user-supplied input as a format string. In fact, because the cache doesn't implement any expiration, you should probably only ever pass literal format strings or you'll end up with a memory leak. This seems to be true of node-sprintf as well, so I don't consider it a serious limitation, just something to be aware of.

Performance

At the time, we saw marked (if not especially necessary) speedups in alarm compilation performance, but I don't have the benchmarks on hand. Instead, on a modern-ish version of Node.js (v0.10.17) running on my Macbook Pro I tested:

  1. My "fast" sprintf
  2. Node's util.format
  3. The widely used sprintf module

The test was:

for (var i = 0; i < 10000000; i++) {  
  sprintf_fn('The %s jumped over a tree', i);
}

The results:

Implementation      Time
fast sprintf        1504ms
util.format         14761ms
standard sprintf    22964ms

The improved sprintf lacks a lot of the functionality of the other implementations, so the comparison isn't entirely fair. Nonetheless, with a speedup of about 10x over util.format and 15x over sprintf (at least for this benchmark), I think it's safe to declare this hack a success.

by Russell Haering at February 07, 2014 12:00 AM

January 18, 2014

Brandon Philips

Video: etcd at GoSF

Last week I gave a talk at the San Francisco Go meetup (GoSF). The event was great and had about 200 Go Gophers in attendance.

Giving the talk was great because it made me realize how much we have accomplished on etcd since my last talk in October. The audience was mostly curious about how it differs from Zookeeper, how master elections work, and how we were testing various failure modes. A great suggestion from Brad Fitz was to use a mock of net.Conn to test various network problems. I hope to start executing on that soon.

by Brandon Philips at January 18, 2014 12:00 AM

January 12, 2014

Justin Dugger

LCA 2014 Videos of Note

Linuxconf 2014 wrapped up last week, and the videos are already online!

I didn't get a chance to review all the videos, but here are some of the sessions I thought were interesting:

Rusty Russell discusses virtIO standardization. I thought I knew what virtIO was, but his initial explanation leaves me more confused than I started out. Nevertheless, Rusty gives an implementer's view of the standardization process, and shares how virtIO manages forward and backward compatibility between hypervisor, guest OSes, and even hardware.

Elizabeth Krumbach Joseph explains how the OpenStack Core Infra team does their work in the open. We've taken a similar approach, so it's nice to see other approaches and bits we might steal =). Storing Jenkins jobs in YAML in config management sounds very nice, and I will have to bring it up at my next meeting.

Bdale Garbee shares his experience losing his home to the Black Forest Fire. As a serial renter / mover, I'm already well prepared to answer the question "What would you take if you had five minutes to clean out your home?" So I would have liked a bit more in the way of disaster recovery / offsite backups / tech stuff, but I happen to know he rescued his servers from the fire and isn't storing them locally anymore. So perhaps there is no lesson to share yet =)

Michael Still presents a third-party CI approach for database migrations in OpenStack. It looks like a combo of Gerrit for code reviews, Zuul, and a custom Zuul gearman worker. Surprisingly little duplicate content from the other OpenStack infrastructure talk!

Jim Cheetham asks 'Is it safe to mosh?' The answer appears to be yes, but he takes a hands-off approach to the underlying crypto.

Lots of exciting talks, and maybe I need to sit down and think about writing my own proposal for LCA 2015.

by Justin Dugger at January 12, 2014 12:00 AM

October 01, 2013

Brandon Philips

Video: Modern Linux Server with Containers

At LinuxCon 2013 I gave a talk that dissects “Linux Containers” into their component parts in the kernel: cgroups and namespaces. The talk shows how cgroups act as the “accounting bean counter” and namespaces as the “castle walls” that isolate processes from each other.

If you are already familiar with the basics of namespaces and cgroups I show off some tools like nsenter, docker, and systemd-nspawn. Skip to the end to catch the demos.

The full slides are available on slide deck and mirrored as a pdf here.

by Brandon Philips at October 01, 2013 12:00 AM

April 27, 2013

GOSCON News

It's All About Community: DC Metro Open Source Community Summit May 10, 2013

Oregon State University Open Source Lab is pleased to lend its support to the Open Source Initiative and the first Open Source Community Summit, being held in Washington D.C. on May 10, 2013.

It's a great way to stand up and be counted as part of the DC open source community; check it out!

by deborah at April 27, 2013 05:53 AM

September 30, 2012

Justin Dugger

PuppetConf 2012

Recovered from the post-con crash a while ago, so it's time to write up some thoughts. Last week I attended PuppetConf with my coworkers at the OSL. The OSL attended PuppetConf primarily as a pre-deployment information gathering exercise. We want to avoid common pitfalls, and be able to plan for things coming down the pipeline. Puppet 3.0 was targeted to be released on Friday and clearly that slipped.

The venue itself was nice, but the space was partitioned poorly. The two main tracks had surplus space, but the three side tracks nearly always had people turned away for space concerns. Supposedly, the recordings will be available shortly, so it may not be the Worst Thing In The World, but only time will tell.

Content wise, one recurring theme is to start small and simple, and not worry about scale or sharing until they become an issue. Designing a deployment for thousands of nodes when you have perhaps a dozen gives new life to the term "architecture astronaut," and there's a certain amount of benefit to procrastinating on system design while the tools and ecosystem mature. Basically, build one to throw away.

Another problem we've been worrying about at the OSL is updating 3rd party config modules in their various forms. The hope is that by explicitly annotating in your system where things came from, you can automate pulling in updates from original sources. Pretty much the universal recommendation here is a condemnation: avoid git submodules. Submodules sounds like the right strategy, but it's for a different use case. In our experience, it dramatically complicates the workflow. At least one person mentioned librarian-puppet, which as far as I can tell isn't much different than mr with some syntactic sugar for PuppetForge. This is great, because mr was basically the strategy I was recommending prior to PuppetConf.

The Better Living Through Statistics talk was less advanced than I'd hoped. Anyone who's spent maybe 5 minutes tuning nagios check_disks realizes how inadequate it is, and that the basic nagios framework is to blame. What you really want is an alert when the time to disk outage approaches time to free up more disk, and no static threshold can capture that. While Jamie did provide a vision for the future, I was really hoping for some new statistical insight on the problem. It appears it's up to me to create and provide said insight. Perhaps in another post.

R Tyler Croy gave a useful talk on behavior/test driven infrastructure. I'd looked into Cucumber before, but RSpec was only a word to me before this talk. It's certainly something I'll need to take some time to integrate into the workflow and introduce to students. One concern I had (that someone else aired) was that in the demo, the puppet code and the code to test it was basically identical, such that software could easily translate from code to test and back. Croy insisted this was not the case in more complicated Puppet modules, but I'm reserving judgement until I see said modules.

Overall, I'd definitely recommend the conference to people preparing to deploy Puppet. There are plenty more sessions I didn't cover here that are worth your time. You'd probably get the most out of it by starting a trial implementation first, instead of procrastinating until Wednesday night to read the basics like I did. Beyond simply watching lectures, it's useful to get away from the office and sit down to learn about this stuff. Plus, it's useful to build your professional network of people you can direct questions to later.

by Justin Dugger at September 30, 2012 12:00 AM

July 01, 2012

Justin Dugger

Open Source Bridge Wrapup

Friday marked the end of Open Source Bridge. Just about the best introduction to Portland culture you can find. Vegan lunches, Voodoo Donut catering, lunch truck Friday, and rock and roll pipe organists in the Unitarian's sanctuary.

The keynotes were pretty cool. I'd seen Fenwick's presentation from LCA, and was surprised at how much had changed, hopefully because some of his keystone evidence turned out to be bogus; it turns out there's strong evidence that the only "priming" effect was in the grad students running the study. I'm still not quite clear on what JScott wants people to run vbox for, but he did have a really good idea about bringing your own recording equipment that I wish I had taken to heart.

Probably the most useful talk I attended was Laura Thompson's presentation on Mozilla's Crash Reporting service, powered by Socorro. A few of the projects the OSL hosts are desktop apps, and collecting crash data might be a good engineering win for them. There were a lot of embedded hardware talks that would have been interesting, but not directly relevant to the needs of the OSL. Hopefully they'll be up as recordings soon.

The OSL was also well represented in the speakers' ranks: we ran five sessions during the main conference, and two during the Friday unconference. I think next year it would be a good idea to encourage our students to participate as volunteers; getting them facetime with speakers and the community at large can only do us a world of good. I gave a first run of a talk on using GNUCash for personal finance; the turnout was pretty good, given how many people were still at the food carts. I should have recorded it to self-critique and improve.

The "after party" on Thursday was nice. Lance won the 2012 Outsanding Open Source Citizen award, which is great, because he deserves recongition for handling the turmoil at the OSL over the past year. But now I've got to figure out my plan meet or beat that for next year. No small task.

Next up is catching up back at the Lab, and then OSCON!

by Justin Dugger at July 01, 2012 12:00 AM

June 13, 2012

Lance Albertson

Ganeti Tutorial PDF guide

As I mentioned in my previous blog post, trying out Ganeti can be cumbersome, so I went out and created a platform for testing it using Vagrant. Now I have a PDF guide that you can use to walk through some of the basic steps of using Ganeti, along with testing a fail-over scenario. It's an updated version of a guide I wrote for OSCON last year. Give it a try and let me know what you think!

by lance at June 13, 2012 01:53 AM

June 11, 2012

Frédéric Wenzel

Fail Pets Research in UX Magazine

I totally forgot blogging about this!

Remember how I curate a collection of fail pets across the Interwebs? Sean Rintel is a researcher at the University of Queensland in Australia and has put some thought into the UX implications of whimsical error messages, published in his article: The Evolution of Fail Pets: Strategic Whimsy and Brand Awareness in Error Messages in UX Magazine.

In his article, Rintel attributes me with coining the term "fail pet".

Attentive readers may also notice that Mozilla's strategy of (rightly) attributing Adobe Flash's crashes to Flash itself by putting a "sad brick" in place worked formidably: Rintel (just like most users, I am sure) assumes this message comes from Adobe, not Mozilla:

Thanks, Sean, for the mention, and I hope you all enjoy his article.

June 11, 2012 07:00 AM

June 08, 2012

Frédéric Wenzel

Let's talk about password storage

Note: This is a cross-post of an article I published on the Mozilla Webdev blog this week.

During the course of this week, a number of high-profile websites (like LinkedIn and last.fm) have disclosed possible password leaks from their databases. The suspected leaks put huge amounts of important, private user data at risk.

What's common to both these cases is the weak security they employed to "safekeep" their users' login credentials. In the case of LinkedIn, it is alleged that an unsalted SHA-1 hash was used; in the case of last.fm, the technology used is, allegedly, an even worse unsalted MD5 hash.

Neither of the two technologies is following any sort of modern industry standard and, if they were in fact used by these companies in this fashion, exhibits a gross disregard for the protection of user data. Let's take a look at the most obvious mistakes our protagonists made here, and then we'll discuss the password hashing standards that Mozilla web projects routinely apply in order to mitigate these risks.

A trivial no-no: Plain-text passwords

This one's easy: Nobody should store plain-text passwords in a database. If you do, and someone steals the data through any sort of security hole, they've got all your user's plain text passwords. (That a bunch of companies still do that should make you scream and run the other way whenever you encounter it.) Our two protagonists above know that too, so they remembered that they read something about hashing somewhere at some point. "Hey, this makes our passwords look different! I am sure it's secure! Let's do it!"

Poor: Straight hashing

Smart mathematicians came up with something called a hashing function or "one-way function" H: password -> H(password). MD5 and SHA-1 mentioned above are examples of those. The idea is that you give this function an input (the password), and it gives you back a "hash value". It is easy to calculate this hash value when you have the original input, but prohibitively hard to do the opposite. So we create the hash value of all passwords, and only store that. If someone steals the database, they will only have the hashes, not the passwords. And because those are hard or impossible to calculate from the hashes, the stolen data is useless.

"Great!" But wait, there's a catch. For starters, people pick poor passwords. Write this one in stone, as it'll be true as long as passwords exist. So a smart attacker can start with a copy of Merriam-Webster, throw in a few numbers here and there, calculate the hashes for all those words (remember, it's easy and fast) and start comparing those hashes against the database they just stole. Because your password was "cheesecake1", they just guessed it. Whoops! To add insult to injury, they just guessed everyone's password who also used the same phrase, because the hashes for the same password are the same for every user.

Worse yet, you can actually buy(!) precomputed lists of straight hashes (called Rainbow Tables) for alphanumeric passwords up to about 10 characters in length. Thought "FhTsfdl31a" was a safe password? Think again.

This attack is called an offline dictionary attack and is well-known to the security community.

Even passwords taste better with salt

The standard way to deal with this is by adding a per-user salt. That's a long, random string added to the password at hashing time: H: password -> H(password + salt). You then store salt and hash in the database, making the hash different for every user, even if they happen to use the same password. In addition, the smart attacker cannot pre-compute the hashes anymore, because they don't know your salt. So after stealing the data, they'll have to try every possible password for every possible user, using each user's personal salt value.

Great! I mean it, if you use this method, you're already scores better than our protagonists.

The 21st century: Slow hashes

But alas, there's another catch: Generic hash functions like MD5 and SHA-1 are built to be fast. And because computers keep getting faster, millions of hashes can be calculated very very quickly, making a brute-force attack even of salted passwords more and more feasible.

So here's what we do at Mozilla: Our WebApp Security team performed some research and set forth a set of secure coding guidelines (they are public, go check them out, I'll wait). These guidelines suggest the use of HMAC + bcrypt as a reasonably secure password storage method.

The hashing function has two steps. First, the password is hashed with an algorithm called HMAC, together with a local salt: H: password -> HMAC(local_salt + password). The local salt is a random value that is stored only on the server, never in the database. Why is this good? If an attacker steals one of our password databases, they would need to also separately attack one of our web servers to get file access in order to discover this local salt value. If they don't manage to pull off two successful attacks, their stolen data is largely useless.

As a second step, this hashed value (or strengthened password, as some call it) is then hashed again with a slow hashing function called bcrypt. The key point here is slow. Unlike general-purpose hash functions, bcrypt intentionally takes a relatively long time to be calculated. Unless an attacker has millions of years to spend, they won't be able to try out a whole lot of passwords after they steal a password database. Plus, bcrypt hashes are also salted, so no two bcrypt hashes of the same password look the same.

So the whole function looks like: H: password -> bcrypt(HMAC(password, local_salt), bcrypt_salt).

We wrote a reference implementation for this for Django: django-sha2. Like all Mozilla projects, it is open source, and you are more than welcome to study, use, and contribute to it!

What about Mozilla Persona?

Funny you should mention it. Mozilla Persona (née BrowserID) is a new way for people to log in. Persona is the password specialist and takes the burden and risk of having to worry about passwords away from sites altogether. Read more about Mozilla Persona.

So you think you're cool and can't be cracked? Challenge accepted!

Make no mistake: just like everybody else, we're not invincible at Mozilla. But because we actually take our users' data seriously, we take precautions like this to mitigate the effects of an attack, even in the unfortunate event of a successful security breach in one of our systems.

If you're responsible for user data, so should you.

If you'd like to discuss this post, please leave a comment at the Mozilla Webdev blog. Thanks!

June 08, 2012 07:00 AM

May 31, 2012

Greg Lund-Chaix

Large Moodle downloads die prematurely when served through Varnish

Varnish and Moodle, to be blunt, hate each other. So much so that for my Moodle 1.9.x sites, I simply instruct Varnish to return(pass) without even trying to cache anything on a Moodle site. Today, however, I discovered even that is insufficient. Here’s what happened:

A user was reporting that when downloading large files from within Moodle (500MB course zip backups in this case), they'd stop at approximately 200MB. A look at varnishlog showed that Varnish was properly seeing that it was a Moodle request with a “Cache-Control: no-cache” header and didn't even try to cache it before sending the request off to the backend. The backend was behaving exactly as expected and serving up the file. At some point, however, the download simply terminated before completion. No indications in the Varnish or Apache logs, nothing. It just … stopped.

Huh.

So I put the following code in my VCL in vcl_recv:

if (req.url ~ "file.php") {
return (pipe);
}

Success!

Note: this must go into the VCL before the line in vcl_recv that checks the Cache-Control header, otherwise it’ll pass before it gets to the pipe:

if (req.url ~ "file.php") {
return (pipe);
}

# Force lookup if the request is a no-cache request from the client
if (req.http.Cache-Control ~ "no-cache") {
return (pass);
}

by Greg at May 31, 2012 02:42 AM

May 30, 2012

Frédéric Wenzel

Fun with ebtables: Routing IPTV packets on a home network

In my home network, I use IPv4 addresses out of the 10.x.y.z/8 private IP block. After AT&T U-Verse contacted me multiple times to make me reconfigure my network so they can establish a large-scale NAT and give me a private IP address rather than a public one (this might be material for a whole separate post), I reluctantly switched ISPs and now have Comcast. I did, however, keep AT&T for television. Now, U-Verse is an IPTV provider, so I had to put the two services (Internet and IPTV) onto the same wire, which as it turned out was not as easy as it sounds.

tl;dr: This is a "war story" more than a crisp tutorial. If you really just want to see the ebtables rules I ended up using, scroll all the way to the end.

IPTV uses IP Multicast, a technology that allows a single data stream to be sent to a number of devices at the same time. If your AT&T-provided router is the centerpiece of your network, this works well: The router is intelligent enough to determine which one or more receivers (and on what LAN port) want to receive the data stream, and it only sends data to that device (and on that wire).

Multicast, the way it is supposed to work: The source server (red) sending the same stream to multiple, but not all, receivers (green).

Turns out, my dd-wrt-powered Cisco E2000 router is--out of the box--not that intelligent and, like most consumer devices, will turn such multicast packets simply into broadcast packets. That means, it takes the incoming data stream and delivers it to all attached ports and devices. On a wired network, that's sad, but not too big a deal: Other computers and devices will see these packets, determine they are not addressed to them, and drop the packets automatically.

Once your wifi becomes involved, this is a much bigger problem: The IPTV stream's unwanted packets easily saturate the wifi capacity and keep any wifi device from doing its job while it is busy discarding packets. This goes so far as to make it entirely impossible to even connect to the wireless network anymore. Besides: Massive, bogus wireless traffic empties device batteries and fills up the (limited and shared) frequency spectrum for no useful reason.

Suddenly, everyone gets the (encrypted) data stream. Whoops.

One solution for this is to install only managed switches that support IGMP Snooping and thus limit multicast traffic to the relevant ports. I wasn't too keen on replacing a bunch of hardware with really expensive new switches, though.

In comes ebtables, part of netfilter (the Linux kernel-level firewall package). First I wrote a simple rule intended to keep all multicast packets (no matter their source) from exiting on the wireless device (eth1, in this case).

ebtables -A FORWARD -o eth1 -d Multicast -j DROP

This works in principle, but has some ugly drawbacks:

  1. -d Multicast translates into a destination address pattern that also covers (intentional) broadcast packets (i.e., every broadcast packet is a multicast packet, but not vice versa). These things are important and power DHCP, SMB networking, Bonjour, ... . With a rule like this, none of these services will work anymore on the wifi you were trying to protect.
  2. -o eth1 keeps us from flooding the wifi, but will do nothing to keep the needless packets sent to wired devices in check. While we're in the business of filtering packets, might as well do that too.

So let's create a new VLAN in the dd-wrt settings that only contains the incoming port (here: W) and the IPTV receiver's port (here: 1). We bridge it to the same network, because the incoming port is not only the source of IPTV, but also our connection to the Internet, so the remaining ports need to be able to connect to it still.

dd-wrt vlan settings

Then we tweak our filters:

ebtables -A FORWARD -d Broadcast -j ACCEPT
ebtables -A FORWARD -p ipv4 --ip-src ! 10.0.0.0/24 -o ! vlan1 -d Multicast -j DROP

This first accepts all broadcast packets (which it would do by default anyway, if it weren't for our multicast rule), and then drops any other multicast packet whose source IP address is not local and whose output device is not vlan1.

With this modified rule, we make sure that any internal applications can still function properly, while we tightly restrict where external multicast packets flow.

That was easy, wasn't it!

Some illustrations courtesy of Wikipedia.

May 30, 2012 07:00 AM

May 21, 2012

Lance Albertson

Trying out Ganeti with Vagrant

Ganeti is a very powerful tool, but oftentimes people have to go looking for spare hardware just to try it out. I also wanted a way to easily test new features of Ganeti Web Manager (GWM) and Ganeti Instance Image without requiring additional hardware. While I do have the convenience of access to hardware at the OSU Open Source Lab for my testing, I'd rather not always depend on that. Sometimes I like trying new and crazier things, and I'd rather not break a test cluster all the time. So I decided to see if I could use Vagrant to create a Ganeti test environment on my own workstation and laptop.

This all started last year while I was preparing for my OSCON tutorial on Ganeti and was manually creating VirtualBox VMs to deploy Ganeti nodes for the tutorial. It worked well, but soon after I gave the tutorial I discovered Vagrant and decided to adapt my OSCON tutorial to it. It's a bit like the movie Inception, of course, but I was able to successfully get Ganeti working with Ubuntu and KVM (technically just qemu), with mostly functional VMs inside of the nodes. I was also able to quickly create a three-node cluster to test failover with GWM and many facets of the webapp.

The vagrant setup I have has two parts:

  1. Ganeti Tutorial Puppet Module
  2. Ganeti Vagrant configs

The puppet module I wrote is very basic and isn't really intended for production use. I plan to re-factor it in the coming months into a completely modular production ready set of modules. The node boxes are currently running Ubuntu 11.10 (I've been having some minor issues getting 12.04 to work), and the internal VMs you can deploy are based on the CirrOS Tiny OS. I also created several branches in the vagrant-ganeti repo for testing various versions of Ganeti which has helped the GWM team implement better support for 2.5 in the upcoming release.

To get started using Ganeti with Vagrant, you can do the following:

git clone git://github.com/ramereth/vagrant-ganeti.git
cd vagrant-ganeti
git submodule update --init
gem install vagrant
vagrant up node1
vagrant ssh node1
gnt-cluster verify

Moving forward I plan to implement the following:

  • Update tutorial documentation
  • Support for Xen and LXC
  • Support for CentOS and Debian as the node OS

Please check out the README for more instructions on how to use the Vagrant+Ganeti setup. If you have any feature requests please don't hesitate to create an issue on the github repo.

by lance at May 21, 2012 06:09 AM

November 25, 2011

Frédéric Wenzel

Day 329 - Ready for the Sunset

A family of tourists, getting ready to watch the sun set on the Pacific coast. I love silhouette photos like this: It's fun to see the different characters with their body shapes and postures.

November 25, 2011 08:00 AM

August 09, 2011

GOSCON News

New Speaker Announced: Dr. David A. Wheeler

We've added our final speaker to the GOSCON Cost Take Out Panel: David A. Wheeler. Dr. Wheeler is a Research Staff Member at the Institute for Defense Analyses and is an expert on developing secure software and the use of open source software in the security space. He is the author of several well-known works in this space, including Secure Programming for Linux and Unix HOWTO; Why Open Source Software / Free Software (OSS/FS)? Look at the Numbers!; and How to Evaluate OSS/FS Programs.

by Leslie at August 09, 2011 08:54 PM

Wayne Moses Burke

Executive Director
Open Forum Foundation

Mr. Moses Burke will be moderating the Building Outside the Box Panel during GOSCON DC 2011 at the Innovation Nation Forum.

by Leslie at August 09, 2011 08:48 PM

Alexander B. Howard

Government 2.0 Correspondent

O’Reilly Media

Mr. Howard will be moderating the Cost Take Out Panel during GOSCON DC 2011 at the Innovation Nation Forum.

by Leslie at August 09, 2011 08:43 PM

June 19, 2011

Peter Krenesky

Ganeti Web Manager 0.7

We've just released version 0.7 of Ganeti Web Manager. Ganeti Web Manager is a Django-based web application that allows administrators and clients access to their Ganeti clusters. It includes a permissions and quota system that allows administrators to grant access to both clusters and virtual machines. It also includes user groups for structuring access to organizations.

This is the fourth release of Ganeti Web Manager and it contains numerous new features.  It also includes various bug fixes and speed optimizations.  Here is the full CHANGELOG, or read on for the highlights.

Xen Support

Ganeti Web Manager now has full Xen support.  Prior versions could display Xen instances, but now you can create and edit them too.  This is an important addition because Xen is a widely used and mature project.  Now with full hardware virtualization in Linux 3.0, Xen will continue to be an important technology for virtualization.  This was our most often requested feature and we're glad to have fulfilled it.

Internationalization

Thanks to a large community contribution, internationalization support was added for nearly all aspects of the interface.  Users can switch between their default language and any other.  Currently only a Greek translation is available, but we’d like to see many more languages. If you can read and write another language this is a great opportunity for you to get involved. We’re using Transifex to coordinate people who want to help translate.

Search & Improved Navigation

Administrators of larger clusters can now find objects more easily with our search interface.  It includes an Ajax auto-complete feature, along with detailed results.

We’ve also added contextual links wherever we could.  This included ensuring breadcrumbs were properly formatted on each page.  Object Permissions and Object Log were updated to ensure navigating between those screens and Ganeti Web Manager is seamless.

Import Tools

There are now import tools for Nodes.  These work the same as for instances.  The cache updater has also been reworked to support both Nodes and Instances.  It’s now a twisted plugin with modest speed improvements due to Ganeti requests happening asynchronously.

Speed, Scalability, and Bugs

We’ve sought out places where we performed extra and or inefficient database queries.  We identified numerous places where database interaction could be reduced, and pages returned faster.  This is an ongoing process.  We’ll continue to optimize and improve the responsiveness as we find areas of the project we can improve.

Numerous bugs were fixed in both the user interface and the backend.  Notably, the instance creation interface has had several bugs corrected.

Module Releases

We’re building several modules along with Ganeti Web Manager.  The following projects have new releases coinciding with Ganeti Web Manager 0.7:

Django Object Permissions 1.4

  • improved user selection widget
  • speed improvements

Object Log 0.6

  • our first public release
  • speed, scalability, and flexibility improvements

Twisted VNC Auth Proxy

  • our first public release
  • added support for hixie 07 and latest noVNC version.

Want to learn more?

Lance Albertson and I will be speaking about Ganeti & Ganeti Web Manager at several conferences this summer.  Catch us at the following events:

by peter at June 19, 2011 03:49 AM

May 18, 2011

Peter Krenesky

Google I/O 2011

Five OSUOSL co-workers and I recently finished a road trip to Google I/O 2011.  We took two cars on an 11-hour drive through scenic southern Oregon and northern California.  We learned more about Android and other technologies shaping the web.  It was also a great opportunity to spend time with each other outside the office.

Monday night we joined about 30 Google Summer of Code mentors for dinner and drinks hosted by the Google Open Source Programs Office.  We’re always grateful for events that bring together friends old and new.  One developer nervously sat down at our table, professing that he didn’t know anyone.  We might not work on the same project, but we’re all part of the open source community.

The highlight of the conference was the double announcement of the Android Open Accessory program and Android @ Home.  Both open up Android to integration with third-party devices.  These features, coupled with near field communication (NFC), stand to dramatically change how we use our mobile devices to interact with the world around us.  This is not a new idea.  X10 home automation has existed since 1975.  Zigbee and Z-Wave are more modern protocols, but they have also been available for years.  The difference here is 100 million Android users and a half million Arduino hackers.

As Phillip Torrone wrote on the Makezine Blog, “There really isn’t an easier way to get analog sensor data or control a motor easier and faster than with an Arduino — and that’s a biggie, especially if you’re a phone and want to do this.”

It won’t be a short road.  We still have obstacles such as higher costs.  A representative from Lighting Science I spoke to at their I/O booth quoted Android@Home enabled LED lights at $30 per bulb.  Android and Arduino might be the right combination of market penetration, eager hackers, and solid platforms for a more integrated environment.

NFC Sticker

My favorite session was How To NFC.   NFC (near field communication) is similar to RFID except it only works within a few centimeters.  Newer Android phones can send and receive NFC messages any time except when the phone is sleeping.  NFC chips can also be embedded in paper, like the stickers that came in our I/O badges.  An NFC-enabled app can share data such as a URL, or launch a multiplayer game with your friend.  It makes complex tasks as simple as “touch the phone here”.  Android is even smart enough to launch an app required for an NFC message, or send you to the market to install the app you need.  Only the Nexus-S supports NFC now, but this feature is so compelling that others will support it soon too.

The other technical sessions were very useful too, whether you were interested in Android, Chrome, or other Google technologies.  The speakers were knowledgeable on the subject areas they spoke on.  I attended mostly Android talks, and it was great hearing from the people who wrote the APIs we’re trying to use.  The sessions were all filmed and are worth watching online.

by peter at May 18, 2011 10:46 PM

May 03, 2011

Lance Albertson

Rebalancing Ganeti Clusters

One of the best features of Ganeti is its ability to grow linearly by adding new servers easily. We recently purchased a new server to expand our ever-growing production cluster and needed to rebalance the cluster. Adding and expanding the cluster consisted of the following steps:

  1. Installing the base OS on the new node
  2. Adding the node to your configuration management of choice and/or installing ganeti
  3. Add the node to the cluster with gnt-node add
  4. Check Ganeti using the verification action
  5. Use htools to rebalance the cluster

For simplicity's sake, I'll cover the last three steps.

Adding the node

Assuming you're using a secondary network, this is how you would add your node:

gnt-node add -s <secondary ip> newnode

Now let's check and make sure Ganeti is happy:

gnt-cluster verify

If all is well, continue on; otherwise, try to resolve any issues that Ganeti is complaining about.

Using htools

Make sure you install ganeti-htools on all your nodes before continuing. It requires Haskell, so just be aware of that requirement. Let's see what htools wants to do first:

$ hbal -m ganeti.example.org
Loaded 5 nodes, 73 instances
Group size 5 nodes, 73 instances
Selected node group: default
Initial check done: 0 bad nodes, 0 bad instances.
Initial score: 41.00076094
Trying to minimize the CV...
1. openmrs.osuosl.org g1.osuosl.bak:g2.osuosl.bak g5.osuosl.bak:g1.osuosl.bak 38.85990831 a=r:g5.osuosl.bak f
2. stagingvm.drupal.org g3.osuosl.bak:g1.osuosl.bak g5.osuosl.bak:g3.osuosl.bak 36.69303985 a=r:g5.osuosl.bak f
3. scratchvm.drupal.org g2.osuosl.bak:g4.osuosl.bak g5.osuosl.bak:g2.osuosl.bak 34.61266967 a=r:g5.osuosl.bak f

<snip>

28. crisiscommons1.osuosl.org g3.osuosl.bak:g1.osuosl.bak g3.osuosl.bak:g5.osuosl.bak 4.93089388 a=r:g5.osuosl.bak
29. crisiscommons-web.osuosl.org g2.osuosl.bak:g1.osuosl.bak g1.osuosl.bak:g5.osuosl.bak 4.57788814 a=f r:g5.osuosl.bak
30. aqsis2.osuosl.org g1.osuosl.bak:g3.osuosl.bak g1.osuosl.bak:g5.osuosl.bak 4.57312216 a=r:g5.osuosl.bak
Cluster score improved from 41.00076094 to 4.57312216
Solution length=30

I've shortened the actual output for the sake of this blog post. Htools automatically calculates which virtual machines to move and how, using the fewest operations possible. For most of these moves, a VM may simply be migrated; migrated and have its secondary storage replaced; or migrated, have its secondary storage replaced, and then be migrated again. In our environment we needed to move 30 VMs out of the total of 70 VMs hosted on the cluster.

Now lets see what commands we actually would need to run:

$ hbal -C -m ganeti.example.org

Commands to run to reach the above solution:

echo jobset 1, 1 jobs
echo job 1/1
gnt-instance replace-disks -n g5.osuosl.bak openmrs.osuosl.org
gnt-instance migrate -f openmrs.osuosl.org
echo jobset 2, 1 jobs
echo job 2/1
gnt-instance replace-disks -n g5.osuosl.bak stagingvm.drupal.org
gnt-instance migrate -f stagingvm.drupal.org
echo jobset 3, 1 jobs
echo job 3/1
gnt-instance replace-disks -n g5.osuosl.bak scratchvm.drupal.org
gnt-instance migrate -f scratchvm.drupal.org

<snip>

echo jobset 28, 1 jobs
echo job 28/1
gnt-instance replace-disks -n g5.osuosl.bak crisiscommons1.osuosl.org
echo jobset 29, 1 jobs
echo job 29/1
gnt-instance migrate -f crisiscommons-web.osuosl.org
gnt-instance replace-disks -n g5.osuosl.bak crisiscommons-web.osuosl.org
echo jobset 30, 1 jobs
echo job 30/1
gnt-instance replace-disks -n g5.osuosl.bak aqsis2.osuosl.org

Here you can see the commands it wants you to execute. Now you can either put these all in a script and run them, split them up, or just run them one by one. In our case I ran them one by one just to be sure we didn't run into any issues. I had a couple of VMs not migrate properly, but those were easily fixed. I split this up into a three-day migration, running ten jobs a day.

The length of time that it takes to move each VM depends on the following factors:

  1. How fast your secondary network is
  2. How busy the nodes are
  3. How fast your disks are

Most of our VMs ranged from 10G to 40G in size and on average took around 10-15 minutes to complete each move. Additionally, make sure you read the man page for hbal to see all the various features and options you can tweak. For example, you could tell hbal to just run all the commands for you, which might be handy for automated rebalancing.

Conclusion

Overall the rebalancing of our cluster went without a hitch outside of a few minor issues. Ganeti made it really easy to expand our cluster with minimal to zero downtime for our hosted projects.

by lance at May 03, 2011 05:55 AM

April 25, 2011

Russell Haering

Cast Preview Release

For the last few months I've been working on and off for Cloudkick (now Rackspace) on a project that we are calling Cast. I'm happy to announce that this afternoon we're releasing Cast version 0.1. The source has been on Github all along, but with this release we feel that the project has finally progressed to a point where:

  1. We've implemented the functionality planned for the first iteration.
  2. The aforementioned functionality actually works against the current version of Node.js.
  3. We have a website and have documented most of the important parts.

That's Great, So What Is It?

In short, Cast is an open-source deployment and service management system.

At Cloudkick we tend to see users deploying their code in one of three ways:

  1. Services are deployed via a configuration management system such as Puppet or Chef.
  2. Services are deployed by some sort of SSH wrapper such as Fabric or Capistrano.
  3. Services are deployed to a "Platform as a Service" such as Heroku.

But none of these are perfect. Respectively:

  1. The high overhead in interacting with configuration management systems is fine when they are managing 'infrastructure' (that is, the systems on which you run your services), but tend to impede a smooth "devops" style workflow with fast iterations and easy deployment and upgrades.
  2. SSH wrappers typically work well enough on small scales, but they feel like a hack and don't trivially integrate with in-house systems.
  3. Of all the options, people seem to like these the best. The price speaks for itself - Platforms as a Service (PaaS) are hugely valuable to their users. The problem is that these platforms are closed systems, inflexible and not very "sysadmin friendly". When they go down, you're trapped. When the pricing or terms change, you're trapped. If they don't or can't do what you want, you're trapped.

With this situation in mind, what could we write for our users? An Open Platform (optionally, as a Service).

What Can it Do?

Using Cast you can:

  1. Upload your application to a server.
  2. Create 'instances' of your application. Think 'staging' and 'production'.
  3. Manage (start, stop, restart, etc) services provided by your application.
  4. Deploy new versions of your application.
  5. Do all of this from the command line or from a REST API.

We have a lot more interesting features planned. Hint: think "Cast cluster". But if this sounds like something you're interested in, stay tuned, share your thoughts, or consider looking into a job at the new San Francisco Rackspace office.

April 25, 2011 12:00 AM

April 19, 2011

Greg Lund-Chaix

Facebook in Prineville, a slightly different view

On Friday, Facebook’s Senior Open Programs Manager, David Recordon, took a group of us from the OSL on a fantastic behind-the-scenes tour of the new Facebook data center in Prineville, Oregon. It was an amazing experience that prompted me to think about things I haven’t thought about in quite a few years. You see, long before I was ever a server geek I spent my summers and school holidays working as an apprentice in my family’s heating and air conditioning company. As we were walking through the data center looking at the ground-breaking server technology, I found myself thinking about terms and technologies I hadn’t considered much in years – evaporative cooling, plenums, airflow, blowers. The computing technology is fascinating and ground-breaking, but they’ve been covered exhaustively elsewhere. I’d like to spend some time talking about something a bit less sexy but equally important: how Facebook keeps all those servers from melting down from all the heat they generate.

First, though, some scale. They’re still building the data center – only one of the three buildings has been built so far, and it has less than half of its server rooms completed – but even at a fraction of its proposed capacity the data center was reportedly able to handle 100% of Facebook’s US traffic for a while when they tested it last week. The students we brought with us did a bit of back-of-the-envelope calculation: when the facility is fully built out, we suspect it’ll be able to hold on the order of hundreds of thousands of servers. It’s mind-boggling to think how much heat that many servers must generate. It’s hard enough to keep the vastly-smaller OSL data center cool, the idea of scaling it that large is daunting to say the least. As the tour progressed, I found myself more and more fascinated by the airflow and cooling.

The bottom floor of the facility is all data center floor and offices, while the upper floors are essentially giant plenums (the return air directly above the main floor, and the supply above the return). There is no ductwork, just huge holes (10′x10′) in the ceiling of the data center floor that bring the cool air down from the “penthouse”, and open ceilings above the “hot” side of the racks that move the hot air out. A lot of the air movement is passive/convective – hot air rises from the hot side of the racks through the ceiling to the second floor, and the cooled air drops down from the third floor onto the “cool” side of the server racks, where it’s pulled back through the servers. The air flow is certainly helped along by the fans in the servers and blowers up in the “penthouse”, but it’s clearly designed to take advantage of the fact that hot air rises and cold air sinks. They pull off a bit of the hot air to heat the offices, and split the rest between exhausting it outside and mixing it with outside air and recirculating it.


OK, enough with the talking, here are some pictures. Walking through the flow, we start at the “cool” side of the server racks:
  
Notice there are no faceplates to restrict the airflow. The motherboards, power supplies, processor heat sinks, and RAM are all completely exposed.

Then we move on to the “hot” side of the racks:
    
The plastic panels you can see on top of the racks and in the middle image guide the hot air coming out of the servers up through the open ceiling to the floor above. No ductwork needed. There are plastic doors at the ends of the rows to completely seal the hot side from the cold side. It was surprisingly quiet even here. The fans are larger than standard and run at low speed. While uncomfortably warm, it was not very loud at all. We could speak normally and be heard easily. Very unlike the almost-deafening roar of a usual data center.

The second “floor” is basically just a big open plenum that connects the exhaust (“hot”) side of the server racks to the top floor in a couple of places (recirculating and/or exhaust, depending on the temperature). It’s a sort of half-floor between the ground floor and the “penthouse” that isn’t walk-able, so we climbed straight up to the top floor – a series of rooms (30′ high and very long) that do several things:

First, outside air is pulled in (the louvers to the right):

The white block/wall on the left is the return air plenum bringing the hot air from the floor below. The louvers above it bring the outside air into the next room.

Mix the outside air with the return air and filter it:

The upper louvers on the right are outside air, lower are return air bringing the hot air up from the servers. The filters (on the left) look like standard disposable air filters. Behind them are much more expensive high-tech filters.

Humidify and cool the air with rows and rows of tiny atomizers (surprisingly little water, and it was weird walking through a building-sized swamp cooler):
    
The left image shows the back of the air filters. The middle image shows the other side of the room with the water jets. The right image is a closer shot of the water jets/atomizers.

Blowers pull the now-cooled air through the sponges (for lack of a better word) in front of the atomizers and pass it on to be sent down to the servers:

They were remarkably quiet. We could easily speak and be heard over them and it was hard to tell how many (if any) were actually running.

Finally the air is dumped back into the data center through giant holes in the floor:
    
The first image shows the back of the blowers (the holes in the floor are to the right). The middle image shows the openings down to the server floor (the blowers are off to the left). The third image is looking down through the opening to the server room floor. The orange devices are smoke detectors.

The last room on the top floor is where the unused hot return air is exhausted outside:

None of the exhaust fans were actually running, the passive airflow was sufficient without any assistance. The grates in the floor open down to the intermediate floor connecting to the hot side of the racks.

No refrigerant is used at all, just evaporative cooling (and even then, only when needed). The only electricity used in the cooling system is for the fans and the water pumps. All of it – the louvers, the water atomizers, and the fans – is automatically controlled to maintain a static temperature/humidity down on the data center floor. When we were there, none of the fans (neither intake nor exhaust) appeared to be running; it was cool enough outside that they were passively exhausting all of the air from the data center and pulling in 100% outside air on the supply. As best I could tell, the only fans that were actually running were the tiny 12V fans mounted on the servers.

This design makes great sense. It’s intuitive – hot air rises, cool air falls – and it obviously efficiently takes advantage of that fact. I kept thinking, “this is so simple! Why haven’t we been doing this all along?”

by Greg at April 19, 2011 09:04 PM

April 17, 2011

Lance Albertson

Facebook Prineville Datacenter

Along with the rest of the OSU Open Source Lab crew (including students), I was invited to the grand opening of Facebook's new datacenter yesterday in Prineville, Oregon. We were lucky enough to get a private tour by Facebook's Senior Open Source Manager, David Recordon. I was very impressed with the facility on many levels.

Triplet racks & UPS

I was glad I was able to get a close look at their Open Compute servers and racks in person. They were quite impressive. One triplet rack can hold ninety 1.5U servers, which can add up quickly. We're hoping to get one or two of these racks at the OSL. I hope they fit, as those triplet racks were rather tall!

Web & memcached servers

Here's a look at a bank of their web & memcached servers. You can find the memcached servers with the large banks of RAM in the front of them (72GB in each server). The web servers were running the Intel Open Compute boards while the memcached servers were using AMD. The blue LEDs on the servers cost Facebook an extra $0.05 per unit compared to green LEDs.

Hot aisle

The hot aisle is shown here and was amazingly quiet. Actually, the whole room was fairly quiet, which is strange compared to our datacenter. It's because of the design of the Open Compute servers and the fact that they are using negative/positive airflow in the whole facility to push cold/hot air.

Generators

They had a lot of generators behind the building, each easily the size of a bus. You can see their substation in the background. Also note the camera in the foreground; they were everywhere, not to mention the security, because of Greenpeace.

The whole trip was amazing and I was just blown away by the sheer scale. Facebook is planning on building another facility next to this one within the next year. I was really happy that all of the OSL students were able to attend the trip, as they rarely get a chance to see something like this.

We missed seeing Mark Zuckerberg by minutes, unfortunately. We had a three-hour drive back; it was around 8:10PM when we left and he showed up at 8:15PM. Damnit!

If you would like to see more of the pictures I took, please check out my album below.

Facebook Prineville Datacenter

Thanks David for inviting us!

by lance at April 17, 2011 01:38 AM