Justin Duggerhttps://www.pwnguin.net/2020-01-01T00:00:00-08:002019 in Review2020-01-01T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2020-01-01:2019-in-review.html<p>A year ago I posted some personal goals, and now seems like an opportune time to
review what actually happened. And publicly shame myself with self-evaluations.</p>
<p>To recap the goals:</p>
<ol>
<li>Make Anki a daily habit</li>
<li>Start and complete a useful machine learning project</li>
<li>Master Puppet</li>
<li>Start attending tech meetups in the bay area.</li>
<li>Consolidate retirement accounts</li>
</ol>
<h1>Make Anki a daily habit</h1>
<p>I need to consolidate my notes about this into a post of its own, but numerically, I
studied more than half of the days in 2019 (224), reviewing over 6000 cards. I added
800 cards, with an overall correct answer rate of 95 percent. Paradoxically, this
means I am likely doing too many easy reviews, or not mixing it up enough.</p>
<p>In 2020, I'll be focusing less on trivia like state capitals and times tables, and
hope to focus more on professional interests. And maybe a little bit of mental math
tricks -- just not hundreds of cards worth!</p>
<p>Rating 3.5/5</p>
<h1>Start and complete a useful machine learning project</h1>
<p>Never really made the time for this. I thought I'd finish up a hands-on book I started
last year, but the thing appeared to be written in a Python notebook with a ton of
gotchas that were errata'd. Kinda off-putting.</p>
<p>Putting this at a 1 out of 5 instead of 0 because I did learn some time series
prediction algorithms and apply them at work. They're more statistical in nature, but
they are technically online algorithms that learn from new data!</p>
<p>Rating 1/5</p>
<h1>Master Puppet</h1>
<p>I did add a few Anki cards in the first half of the year, and I did make a few PRs at
work, but it wasn't nearly enough to call myself a master. I think the reality here
is that as container orchestration becomes more central, Puppet itself is a slightly
flawed tool.</p>
<p>Need to think about whether to carry this over or bail out and focus on
container-ready replacements.</p>
<p>Rating 1/5</p>
<h1>Start attending tech meetups in the bay area</h1>
<p>Did none of that. A lot of these meetings start at 5 or 6, which is like, peak
commute time. And some are in SF rather than South bay, further complicating it.
SVLUG graciously starts at 7, but had 2 meetings in 2019, both of which
I missed. And many of these compete with game night. To add fuel to the chaos, Yahoo
Groups is shutting down or something, and Meetup.com wanted to introduce fees for
RSVPs. I did get to go to WWDC this year, which was sorta cool, but not particularly
good at building my local network.</p>
<p>I've joined a lot of virtual communities via Slack in 2019, but it's not quite the
same. One would imagine SV as a hub of this sort of thing, and maybe it is, but I'm
just looking in the wrong place.</p>
<p>Partly I think the challenge here is not knowing anyone in the target meetup. Maybe I
should start with a local morning coffee meetup associated with one of the Slack communities. Even though I don't like coffee. Or mornings. And we do have some corporate internal meetups that perhaps I should take better advantage of.</p>
<p>Rating 1/5</p>
<h1>Consolidate retirement accounts</h1>
<p>Started off and ended strong, with a lot of procrastination the other 10 months of
the year. I've finished one, initiated the process with two more, leaving two accounts
still to consolidate. I was hopeful I'd found a fully online way to deal with an
otherwise paper and phone process, but no, even when you upload forms they call you to
confirm the transaction with both parties.</p>
<p>The thing I hate about these sort of phone calls is people asking me questions I
don't know the answer to, or making a decision I would have preferred to think about.
This leads to a sort of analysis paralysis, and the best I can do is initiate the
phone call with as much data and research as possible.</p>
<p>The two remaining accounts are less pressing as they're not costing me much beyond
accounting time, but when you factor that in they should probably go. Will carry
those over to the backlog for 2020.</p>
<p>Rating 3/5</p>
<h1>Conclusion</h1>
<p>It seems I committed to more than I could reasonably accomplish in a year, alongside
normal daily life. Nothing wrong with that per se, but recognizing this state aids
one in setting future goals. Only a few of my goals really naturally aligned with one
another -- learning Puppet and using Anki to learn things.</p>
<p>Another holistic observation is that I did a lot of this during winter shutdown, so
maybe it's time to start planning vacations on the regular, now that I'm reaching the
accrual cap.</p>
<p>And finally, it's important to note these are stretch goals almost by default. I
didn't get much of the list done in May, but I did earn a promotion at work for the
period ending in May. It's not like I did nothing in 2019, some was work related,
some was impromptu new goals; it's just not public by default like the blog posts are.</p>Personal Goals for 20192019-01-12T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2019-01-12:personal-goals-for-2019.html<p>It's been <a href="//www.pwnguin.net/2011-goal-review-and-2012-goals.html">quite a long time</a> since I've publicly posted goals. One year into my next
job, it's time to self-motivate long-term self-improvement goals by discussing them in
public.</p>
<ol>
<li>Make Anki a daily habit</li>
<li>Start and complete a useful machine learning project</li>
<li>Master puppet</li>
<li>Start attending tech meetups in the bay area.</li>
<li>Consolidate retirement accounts</li>
</ol>
<p>I want to adopt <a href="https://apps.ankiweb.net/">Anki</a> as part of a lifetime learning framework. It's more
commonly used for learning languages, and cramming for med school, but it should be
good for other purposes. No more forgetting birthdays, or obscure core library
functions! I've got a decent start at it, but due to unfortunate computer repair
circumstances, I missed a few months. No more!</p>
<p>Given my current role, it makes sense to learn more about ML. There have been amazing
advances since I took a course in undergrad, from results, to techniques, and even just
ease of use libraries. Making a useful project should hopefully ease completing this
goal.</p>
<p>Similarly, Puppet is something I need to get better at. I spent a long time using Chef,
and not everything maps cleanly. I need to catch up since the puppet2 days, and start
absorbing modern Puppet practices. Also useful, attending more tech meetups in the area.
SVLUG hasn't really met since I started thinking about attending, but this may be a
case of being a victim of its own success -- people have moved on from 'using linux' to more niche
tools that assume Linux as a baseline.</p>
<p>Finally, job hopping has left me with a series of retirement accounts that is getting
tedious to track individually. It's probably pretty easy to call up and get this
sorted, so this is a reminder to pull off the social anxiety band-aid.</p>Advancing the state of the dotfiles repo art2018-08-22T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2018-08-22:advancing-the-state-of-the-dotfiles-repo-art.html<p>Tracking dotfiles with revision control is <a href="https://joeyh.name/svnhome/">old tech</a>, with people originally
promoting the practice <a href="https://joeyh.name/cvshome/">using CVS</a>. <strong>Customizing settings is a great productivity
booster</strong>, but trying to remember how you configured <code>nethack</code> at your last job is a
huge frustration and time sink. Fortunately humanity has progressed, and at this point
the following can be considered table stakes:</p>
<ul>
<li><a href="https://git-scm.com/book/en/v2">git</a></li>
<li><a href="https://www.youtube.com/watch?v=0WfDe51pUU0">.gitignore</a> files</li>
</ul>
<p>This can go a long way, but here are some simple principles I've organized around to
make things rational over time.</p>
<h2>1: Aggregate small files</h2>
<p>Dotfiles <a href="https://zachholman.com/2010/08/dotfiles-are-meant-to-be-forked/">may</a> or <a href="https://www.anishathalye.com/2014/08/03/managing-your-dotfiles/">may not</a> be meant to be forked, but done right they can
certainly be shared, studied and adopted. Customizing your settings is a good way
to learn the application deeply and boost productivity. The first time you lose your
configs, the easiest path is to give up and just accept defaults. We use revision
control to retain our precious programs, and thanks to the UNIX 'everything is a file'
philosophy, we can <strong>use standard software engineering practices</strong> to <strong>retain personal
configs.</strong></p>
<p>Small files make it easy to share snippets, bundle related settings, import outside
changes, and reduce merge conflicts. Well behaved programs <strong>support include
directives</strong>, allowing users to break monolithic configs into small files. So
naturally the challenge is that not all programs support include directives, and not
all that do support globbing. Globbing lets us specify a directory and include all
files within. This is the basis for bash-it, oh-my-zsh, and others, but can be
extended more generally than imagined. Three examples:</p>
<ol>
<li>Git<ul>
<li>supports including specific files, and ~, but not *. Not a deal breaker, but mildly annoying</li>
</ul>
</li>
<li>SSH<ul>
<li>supports includes on <a href="https://www.openssh.com/txt/release-7.3">7.3</a> and newer</li>
<li>supports globbing via *</li>
</ul>
</li>
<li>Bash<ul>
<li>supports includes, but not globbing</li>
<li>still achievable but <a href="https://github.com/jldugger/dotfiles/blob/master/.bashrc#L59-L65">more complicated</a> than you'd expect</li>
</ul>
</li>
</ol>
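<p>Since bash lacks a glob include directive, a short loop can emulate one. A minimal sketch, assuming a <code>~/.bash.d/</code> layout (the directory name and <code>.sh</code> suffix are illustrative conventions, not standards):</p>

```shell
#!/usr/bin/env bash
# Emulate a glob "include directive": source every readable *.sh file
# in a directory. The ~/.bash.d layout is an assumption of this sketch.
source_dir() {
  local dir="$1" snippet
  [ -d "$dir" ] || return 0
  for snippet in "$dir"/*.sh; do
    # With an empty directory the glob stays literal; -r skips that case.
    [ -r "$snippet" ] && . "$snippet"
  done
  return 0
}

# Typical use from the end of ~/.bashrc:
source_dir "$HOME/.bash.d"
```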
<h2>2: Keep it secret</h2>
<p>The unspoken risk of publishing dotfiles to a public repo is exposing personal
passwords, private keys, and workplace secrets for all to see and exploit. If you
don't keep privacy at the forefront, you'll pay for it later. The first level of
secrecy is simply using <strong>gitignore to exclude files</strong> and directories from tracking.
You probably already use gitignore to hush editor backups, build artifacts, and log
files, but you can also use it to ignore history files, package caches, and a litany of
common private key file names.</p>
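<p>For illustration, a few such patterns (a hedged sample of common file names, not an exhaustive or canonical list):</p>

```gitignore
# Shell and REPL history files
.bash_history
.*_history
# Common private key and credential file names
id_rsa
id_ed25519
*.pem
.netrc
# Package caches
.cache/
```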
<p>At a secondary level, you can extend this practice using principle #1. In the same way
I have <code>.bashrc</code> set up to include all files in <code>.bash.d/</code>, I have a script in
<code>.bash.d/</code> set up to include all files inside <code>.bash.d/private/</code>. Those license keys in
environment variables, and employer specific shell settings live outside the git repo,
but inside the aggregation system. And if you're so inclined, you can set up
<code>private/</code> as its own git repo somewhere more... <em>private</em>. Any system of
configuration that provides directives for including other configuration files supports
this method, although if wildcards are not supported you may need another layer of
indirection to avoid disclosing important data through file names.</p>
<h2>3: multiple repositories</h2>
<p>If you write a lot of software, you likely don't use a monolithic repo, and
tracking them all can be tricky. SVN gave us <code>svn:externals</code>, which greatly aided in
chaining together repositories and subrepos. In the grand UNIX tradition of doing one
thing well, git does this <em>other</em> thing poorly.</p>
<p>Fortunately, <a href="https://myrepos.branchable.com/">myrepos</a> provides a similar functionality, and improves upon
svn:externals with features like:</p>
<ul>
<li>multiple disparate revision control systems</li>
<li>recursive chains</li>
<li>custom commands</li>
</ul>
<p>It also has the benefit of working with the venerable <a href="https://github.com/RichiH/vcsh">vcsh</a>, which allows you to
mix and match dotfiles repos. For hackers with many hats, this is useful to track
common configurations and role-specific settings that may clash or at least not make
sense in a global perspective.</p>
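<p>A hypothetical <code>~/.mrconfig</code> sketch chaining a public dotfiles repo with a private subrepo; the repository names and URLs here are made up:</p>

```ini
# myrepos reads ~/.mrconfig; each section names a path relative to it.
[dotfiles]
checkout = git clone https://example.com/you/dotfiles.git dotfiles

# The private repo lands inside the aggregation directory from principle #2.
[dotfiles/.bash.d/private]
checkout = git clone git@example.com:you/private-dotfiles.git private
```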
<h2>4: Multi OS aware</h2>
<p>Some of us are cursed to roam the earth across multiple operating systems. You may find
yourself managing Linux servers from a CentOS shell server, writing code on a MacBook,
and tinkering with Ubuntu at home. In the face of <strong>wildly different interpreter
versions, package managers and kernels, use <a href="https://linux.die.net/man/1/env"><code>env</code></a></strong> to make things robust. You
may be familiar with its primary feature of printing out environment variables, but
it has a second purpose: executing scripts with a given interpreter.</p>
<p>It's pretty simple to set up. Instead of hardcoding an interpreter path like with
<code>#!/bin/bash</code>, you should use <code>#!/usr/bin/env bash</code>, which will utilize PATH to find
the appropriate interpreter. Now <strong>no matter where your preferred interpreter is
installed, it will run as normal</strong>. <code>env</code> is basically guaranteed to be omnipresent, so
you can use those 'modern' bash 4 features regardless of how old the shipping OS
interpreter is.</p>
<p>Still, you may from time to time encounter platforms that need special casing.
When you can't find a solution using tools common to your platforms,
you can <strong>special case using <code>uname</code></strong>; Linux platforms return 'Linux', while macOS
returns 'Darwin' for your scripting needs. If you happen to be running bash scripts on
Windows Subsystem for Linux, it also reports 'Linux', but maybe it's clean enough to
keep the illusion up for your scripts.</p>
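<p>A minimal sketch of such a special case; the <code>ls</code> color flags are just an example of something that differs between GNU and BSD userlands:</p>

```shell
#!/usr/bin/env bash
# Branch on the kernel name reported by uname. WSL reports Linux,
# so it falls through to the GNU branch, as noted above.
case "$(uname -s)" in
  Linux)  LS_COLOR_FLAG='--color=auto' ;;  # GNU coreutils ls
  Darwin) LS_COLOR_FLAG='-G' ;;            # BSD ls on macOS
  *)      LS_COLOR_FLAG='' ;;              # unknown platform: no colors
esac
alias ls="ls ${LS_COLOR_FLAG}"
```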
<h2>5: Testing</h2>
<p>Shell settings in particular are a toxic combination of Turing completeness and obscure
edge cases. And once your shell breaks, it can be painful to fix. As your repo
grows, <strong><a href="https://www.shellcheck.net/">shellcheck</a> can help catch bugs</strong> before they bite you. You'll learn far
more about shell scripting in the process than you bargained for. After scouring a
number of dotfile repos, I found a <code>test.sh</code> that runs shellcheck against all bash
scripts it finds and adopted that alongside Travis.</p>
<p>For actual testing, most scripting styles don't support the typical unit testing via
function calls. Fortunately <a href="https://testanything.org/">TAP</a> provides a protocol for black box testing, and
<a href="https://github.com/bats-core/bats-core">BATS</a> implements it for bash. I honestly haven't used it much, but it's on the todo list now I guess.</p>
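<p>One sketch of such a <code>test.sh</code>, assuming <code>shellcheck</code> is on the PATH (it degrades to a no-op when it isn't; the shebang heuristic is an assumption of this sketch):</p>

```shell
#!/usr/bin/env bash
# Lint every file with a sh/bash shebang under a directory tree.
# Returns nonzero if shellcheck flags anything.
lint_repo() {
  command -v shellcheck >/dev/null 2>&1 || {
    echo "shellcheck not installed; skipping" >&2
    return 0
  }
  local status=0 f
  while IFS= read -r -d '' f; do
    # Only lint files whose first line looks like a sh/bash shebang.
    if head -n1 "$f" 2>/dev/null | grep -qE '^#!.*\b(ba)?sh\b'; then
      shellcheck "$f" || status=1
    fi
  done < <(find "${1:-.}" -name .git -prune -o -type f -print0)
  return "$status"
}

# Typical CI entry point: lint_repo .
```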
<h3>Wrap-up</h3>
<p>These are my organizing principles. You're free to use them, or ignore them, but
hopefully explaining them here makes my <a href="https://github.com/jldugger/dotfiles/">own dotfiles repo</a> more navigable.</p>Dispelling the Git Illusion2018-08-19T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2018-08-19:dispelling-the-git-illusion.html<p>The DevOps world is wonderful. Amazing! Life changing, even! You practice <a href="https://devops.com/meet-infrastructure-code/">Infrastructure As Code</a>, so if you want
to make a change, you write some <a href="https://puppet.com/">Puppet</a> code, merge the commit into master, and within an hour your fleet of
servers has your change, and the world is a little bit better for it.</p>
<p>This is a wonderful, simple workflow with a dangerous illusion: the git repo <em>is</em> the system. This <strong>git illusion</strong>
can lead developers astray. An object wasn't in the git repo before, and now it is, so now it is in the system. So
logically, deleting code from the repo should be the same as deleting it from production, right? This theory sadly
does not hold, as you may discover after pushing out a bad Puppet module. <code>git revert</code> will <em>not</em> necessarily cure
your ills!</p>
<p>At the heart of this illusion is a false dichotomy. Server resources can be in <em>three</em> possible states, not two:</p>
<ol>
<li>Managed</li>
<li>Deleted</li>
<li><em>Unmanaged</em></li>
</ol>
<p>This hidden Unmanaged state is the default state of the system, and so when you git revert, you revert to the
Unmanaged state, not a deleted state. And as you might imagine, a long-running system can accumulate garbage
(residual configurations) over time. Here are three approaches to taking out the garbage:</p>
<h2>1: The "clean up after yourself" principle</h2>
<p>Any time a PR intends to remove a resource from code, <strong>explicitly delete the object from code.</strong></p>
<p>This is doable when you know when resources are being removed. But it has a couple of drawbacks. Firstly, it sucks to
add code to the system to handle things you no longer need. Secondly, over time your codebase is a record of dead
things. You'll want to shake out your code base from time to time, effectively shifting the cleanup problem to the
code base rather than solving it. But at least you have a single point of truth to audit, rather than thousands. </p>
<h2>2: The Cattle principle</h2>
<p>After any PR is merged, <strong>delete everything.</strong></p>
<p>When you have only a handful of servers, it's tempting to treat them like pets. Give them cute names, and give them
loving care and special attention when they get sick. As your operation grows, it's pretty common to start treating
servers like cattle. Serialized numbers, standardized configurations, and when they get sick, it's cheaper to take
them out back behind the barn and prepare them for the glue factory. Effectively, you're deleting everything and
starting over to maintain the illusion.</p>
<p>To pull this off, you'll need surplus capacity for a rolling upgrade, and need it for a longer duration than if you
simply upgraded in place. And if you're dealing with state backends like <a href="https://vimeo.com/21372341">SQL servers</a> or file systems, then you
need <em>very</em> careful planning to avoid data loss.</p>
<h2>3: The Garbage Collection principle</h2>
<p>Automatically <strong>reap unmanaged resources</strong> via code</p>
<p>In some cases, you pretty much know where the garbage churns. A web server is likely to collect sites in
<code>sites-available</code>. Scheduled tasks are likely to aggregate in <code>/etc/cron.d/</code>. We can bake this domain knowledge into
the code base, using widely available tools. The <code>file</code> resource supports a <a href="https://puppet.com/docs/puppet/5.3/types/file.html#file-attribute-purge"><code>purge</code></a> option which implements this
exact policy.</p>
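<p>Sticking with the cron example, a minimal sketch of that <code>purge</code> policy in Puppet (the path is from the example above; adapt it to your own manifests):</p>

```puppet
# Any file in /etc/cron.d that Puppet does not declare gets deleted on
# the next agent run -- reverting a cron resource now truly removes it.
file { '/etc/cron.d':
  ensure  => directory,
  recurse => true,
  purge   => true,
}
```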
<p>This policy has some caveats of course. You actually have to know what's safe to purge, and which locations are not
purged. You have to be prepared to delete things that weren't explicitly created via Puppet. Even if you're
confident nobody on your team will 'forget to add the config to Puppet', you'll have to deal with complex interactions
between the purge system and system packages that install things outside Puppet, which typically leads to relying on
purges less.</p>
<p>To summarize: <strong>Unmanaged is not the same as deleted</strong>.</p>Podcasts 20162016-12-25T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2016-12-25:podcasts-2016.html<p>Every so often, I see a podcast recommendation thread on <a href="https://news.ycombinator.com/item?id=13252162">HN</a> / Reddit / elsewhere. Rather than repeat
myself, here's an annotated list of podcasts I listen to. If you're needing a great podcast Android player, I recommend
<a href="https://play.google.com/store/apps/details?id=de.danoeh.antennapod">AntennaPod</a>, which has a thriving <a href="https://github.com/AntennaPod/AntennaPod">open source</a> community behind it. On to the list!</p>
<h2>Open Source</h2>
<ul>
<li><strong><a href="http://rfc.fm">Request for Commits</a></strong> <a href="https://changelog.com/rfc/feed">RSS</a> A good series focused on open source sustainability.</li>
<li><strong><a href="http://changelog.fm">The Changelog</a></strong> <a href="https://changelog.com/podcast/feed">RSS</a> Somewhat JS oriented show featuring interviews with open source movers and shakers.</li>
</ul>
<h2>Technology</h2>
<ul>
<li><strong><a href="http://codebreaker.codes">Codebreaker</a></strong> <a href="http://feeds.feedburner.com/CodebreakerByMarketplaceAndTechInsider">RSS</a> Explores the intersection of technology and society in a way accessible to the public.</li>
<li><strong><a href="http://www.youtube.com/channel/UCtXKDgv1AVoG88PLl8nGXmw">Google Tech Talks</a></strong> <a href="https://www.youtube.com/feeds/videos.xml?channel_id=UCtXKDgv1AVoG88PLl8nGXmw">RSS</a> Occasional academic presentations; probably part of hiring PhDs at Google?</li>
<li><strong><a href="https://www.arresteddevops.com">ArrestedDevops</a></strong> <a href="http://feeds.podtrac.com/VGAulpN7MY1U">RSS</a> Good cast for keeping up on new DevOps tech, tricks, and tools (and fads).</li>
<li><strong><a href="http://foodfight.libsyn.com">Food Fight</a></strong> <a href="http://foodfight.libsyn.com/rss">RSS</a> DevOps podcast focused on Chef.</li>
</ul>
<h2>Science</h2>
<ul>
<li><strong><a href="http://uh.edu/engines/">Engines of Our Ingenuity</a></strong> <a href="https://www.houstonpublicmedia.org/podcasts/engines-of-our-ingenuity/">RSS</a> Quick vignettes about the history of science and engineering.</li>
<li><strong><a href="https://academicminute.org">Academic Minute</a></strong> <a href="https://academicminute.org/feed/">RSS</a> Short summaries of recent peer reviewed publications.</li>
<li><strong><a href="http://www.scientificamerican.com">60 Second Science</a></strong> <a href="http://www.scientificamerican.com">RSS</a> More short summaries of recent peer-reviewed publications, focused on science.</li>
</ul>
<h2>Business / Finance</h2>
<ul>
<li><strong><a href="http://www.manager-tools.com/podcasts/manager-tools">Manager Tools</a></strong> <a href="http://www.manager-tools.com/podcasts/feed/rss2">RSS</a> / <strong><a href="http://www.manager-tools.com">Career Tools</a></strong> <a href="https://www.manager-tools.com/rss/rss.xml">RSS</a> A pair of podcasts focused on workplace behavior
and basics. No platitudes, and a great resource for first managers. Career Tools may be useful for novice employees?</li>
<li><strong><a href="http://www.marketplace.org/shows/marketplace-weekend">Marketplace Weekend</a></strong> <a href="http://www.marketplace.org/shows/143621/podcast.xml">RSS</a> / <strong><a href="http://www.marketplace.org/shows/marketplace">Marketplace</a></strong> <a href="http://www.marketplace.org/shows/85/podcast.xml">RSS</a> </li>
<li><strong><a href="http://www.EconTalk.org">EconTalk</a></strong> <a href="http://files.libertyfund.org/econtalk/EconTalk.xml">RSS</a> Hour long interviews on economics. Interviewer is heavily libertarian, so bring salt.</li>
<li><strong><a href="http://ft.com/alphachat">FT Alphachat</a></strong> <a href="http://rss.acast.com/ft-alphachat">RSS</a> Good series on worldwide finance topics.</li>
<li><strong><a href="http://www.npr.org/sections/money/">Planet Money</a></strong> <a href="https://www.npr.org/templates/rss/podlayer.php?id=93559255">RSS</a> A combo of timeless and timely shows make economics accessible to the public.</li>
<li><strong><a href="http://hbrideacast.org">HBR Ideacast</a></strong> <a href="http://feeds.harvardbusiness.org/harvardbusiness/ideacast">RSS</a> Interviews with authors, usually publishing in the Harvard Business Review.</li>
<li><strong><a href="http://ecorner.stanford.edu">Entrepreneurial Thought Leaders</a></strong> <a href="http://ecorner.stanford.edu/StanfordInnovationLab.xml">RSS</a> Entrepreneurs give invited talks to Stanford MBAs.</li>
<li><strong><a href="http://ecorner.stanford.edu">Stanford Innovation Lab</a></strong> <a href="http://web.stanford.edu/group/edcorner/uploads/podcast/EducatorsCorner.xml">RSS</a> A remix podcast creating a dialog of sorts between two business leaders.</li>
<li><strong><a href="http://www.economist.com/">The Economist Radio</a></strong> <a href="http://rss.acast.com/theeconomistallaudio">RSS</a> <em>The Economist</em> magazine's podcast.</li>
</ul>
<h2>Entertainment</h2>
<ul>
<li><strong><a href="http://www.npr.org/programs/ask-me-another/">Ask Me Another</a></strong> <a href="https://www.npr.org/rss/podcast.php?id=510299">RSS</a> NPR trivia show featuring music by JoCo.</li>
<li><strong><a href="http://intelligencesquaredus.org/">Intelligence Squared US</a></strong> <a href="http://feeds.feedburner.com/IQ2USDebates">RSS</a> Oxford style debates on political topics of the day.</li>
<li><strong><a href="http://www.npr.org/templates/story/story.php?storyId=4473090">NPR Sunday Puzzle</a></strong> <a href="https://www.npr.org/templates/rss/podlayer.php?&amp;id=4473090">RSS</a> Quick puzzle show followed by a weekly challenge puzzle.</li>
<li><strong><a href="http://www.ted.com/talks/list">TED Talks</a></strong> <a href="http://feeds.feedburner.com/tedtalks_video">RSS</a> Well rehearsed presentations, somewhat cliched these days.</li>
<li><strong><a href="http://thebuglepodcast.com/">The Bugle</a></strong> <a href="http://feeds.feedburner.com/thebuglefeed">RSS</a> Political satire, former hosts include John Oliver (<em>Daily Show</em>, now on HBO).</li>
</ul>
<h2>Coda: What makes a great podcast?</h2>
<p>Perhaps this is influenced by my short commute time, but I prefer single topic podcasts of 20 minutes or less. Long
enough to cover a subject in greater detail than you get on the 3 minute news segment shows, but not so long as to
fatigue the listener. We know that <a href="http://blog.edx.org/optimal-video-length-student-engagement">long lectures</a> do poorly on online learning platforms, but the lesson isn't to
never cover things in depth; it's to break deep topics into smaller segments.</p>
<p>Audio quality is really important; doing a podcast via cell phone as a repeat occurrence is a good way to earn my
unsubscription. Editing out most of the <a href="https://en.wikipedia.org/wiki/Speech_disfluency">speech disfluencies</a> is a good idea, not only out of respect for the
listener's time, but for the speaker's own self-image. On a similar note, pets / screaming children distract from the
message of professionalism the typical podcast wishes to communicate.</p>Rule Zero of FinOpsDev2016-03-16T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2016-03-16:rule-zero-of-finopsdev.html<p>I'm working on a personal finance project codenamed FinOpsDev (rebranding
suggestions welcome), aiming to reduce drudgery to near zero with automation,
and exploit the increased velocity to run automated tasks more often, etc. Like
DevOps for your checkbook. Or like Continuous Accounting.</p>
<p>As a base, I'm using GNUCash backed by PostgreSQL. GNUCash provides the
accounting principles and concepts, and I've used it for years.
Postgres makes the data available in a central location, with well understood
tools.</p>
<p>I'm not ready to announce any useful tools as a result of my tinkering quite
yet. Instead, I want to reflect upon an old quote:</p>
<blockquote>
<p>To err is human; to really foul things up requires a computer.</p>
</blockquote>
<p>Up till now I've been using those tools in a manual process, so it naturally
happens that my first foray ended up removing all data from the database,
forcing a restore from a backup I made last year. From this calamity,
a principle is born: whatever your first financial automation may be,
the zeroth should be backups. I still don't know how it happened,
which only underlines the importance of rule zero.</p>
<p>To commemorate the year of transactions I'm rebuilding, here's a clever little
logrotate script I found that gets the job done without any additional
dependencies:</p>
<div class="highlight"><pre>/var/backups/postgresql/postgresql-dump.sql {
daily
nomissingok
rotate 30
compress
delaycompress
ifempty
create 640 postgres postgres
dateext
postrotate
/usr/bin/sudo -u postgres /usr/bin/pg_dumpall --clean > /var/backups/postgresql/postgresql-dump.sql
endscript
}
</pre></div>
<p>Obviously tools like Barman and pgBackRest are great, but I like having a
quick, simple solution in place. Next on the plate is a cron job to exfiltrate
backups to another server for safe keeping.</p>LCA 2014 Videos of Note2014-01-12T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2014-01-12:lca-2014-videos-of-note.html<p><a href="http://linux.conf.au/programme/schedule">Linuxconf 2014</a> wrapped up last week, and the <a href="http://mirror.linux.org.au/linux.conf.au/2014/">videos are already online</a>!</p>
<p>I didn't get a chance to review all the video, but here's some of the sessions I
thought were interesting:</p>
<p><a href="http://mirror.linux.org.au/linux.conf.au/2014/Wednesday/24-VirtIO_1.0:_A_Standard_Emerges_-_Rusty_Russell.mp4">Rusty Russel discusses virtIO standardization</a>. I thought I knew what virtIO was but his
initial explanation leaves me more confused than I started out. Nevertheless, Rusty gives an
implementer's view of the standardization process, and shares how virtIO manages forward and
backward compatibility between hypervisor, guest OSes, and even hardware.</p>
<p><a href="http://mirror.linux.org.au/linux.conf.au/2014/Wednesday/27-Systems_Administration_in_the_Open_-_Elizabeth_Krumbach_Joseph.mp4">Elizabeth Krumbach Joseph explains how the OpenStack Core Infra team does their work in the open.</a> We've taken a similar approach, so it's nice to see other approaches and bits we
might steal =). Storing Jenkins jobs in YAML in config management sounds very nice, and I will
have to bring it up at my next meeting.</p>
<p><a href="http://mirror.linux.org.au/linux.conf.au/2014/Thursday/92-Disaster_Recovery_Lessons_I_Hoped_Id_Never_Have_to_Learn_-_Bdale_Garbee.mp4">Bdale Garbee shares his experience losing his home</a> to the <a href="http://en.wikipedia.org/wiki/Black_Forest_Fire">Black Forest Fire</a>. As a serial
renter / mover, I'm already well prepared to answer the question "What would you take if you had
five minutes to clean out your home?" So I would have liked a bit more in the way of disaster
recovery / offsite backups / tech stuff, but I happen to know he rescued his servers from
the fire and isn't storing them locally anymore. So perhaps there is no lesson to share yet =)</p>
<p><a href="http://mirror.linux.org.au/linux.conf.au/2014/Wednesday/65-Continuous_Integration_for_your_database_migrations_-_Michael_Still.mp4">Michael Still presents a third party CI approach for database migrations</a> in OpenStack. Looks
like a combo of gerrit for code reviews, Zuul, and <a href="http://git.openstack.org/cgit/stackforge/turbo-hipster/">some custom zuul gearman worker</a>.
Surprisingly little duplicate content from the other open stack infrastructure talk!</p>
<p><a href="http://mirror.linux.org.au/linux.conf.au/2014/Thursday/84-Is_it_safe_to_mosh_-_Jim_Cheetham.mp4">Jim Cheetham asks 'Is it safe to mosh?'</a> The answer appears to be yes, but the talk takes a hands-off
approach to the underlying crypto.</p>
<p>Lots of exciting talks, and maybe I need to sit down and think about writing my own proposal for
LCA 2015.</p>N900 Eulogy2012-11-23T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2012-11-23:n900-eulogy.html<p><a href="//www.pwnguin.net/n900-arrival-and-notes.html">3 years ago</a>, Nokia released an intriguing option into the smartphone
market. The N900 was a true Linux phone, and a simple, freely available app
was all you needed to get root access to your hardware. The N900 was not a
massive hit with carriers, and not a massive hit in the market by extension.
That subsidy is pretty important for the US market. Of course, they faced a
variety of other problems competing in this market. Time to market, developer
support, economies of scale, feature parity, etc.</p>
<p>My exit two months ago is due to the notoriously flaky USB / charger port finally
failing. But today I want to focus on the software flaws rather than financial
or hardware. I ended up relying on this phone for a number of purposes. It
was an alarm clock, camera, PDA, mp3 player, phone, flashlight, remote terminal,
wireless keyboard and mouse, calculator, watch, and USB drive. A veritable
electronic multitool. But this digital convergence comes at a price: it's
also a large Single Point of Failure. It's a bit awkward falling back to using
a spare Nintendo DS as your wake up alarm.</p>
<h1>Thorns</h1>
<p>Beyond the aggregate, there are some specific pain points that were never
adequately solved:</p>
<p><strong>Calendar Sync</strong>. In a bizarre twist, the only protocol supported by the
both N900 and Zimbra was Exchange. I had set up davical on the theory that
desktop and phone would support CalDAV, but apparently, despite being based
on evolution, the N900 calendar has disabled CalDAV support. But hey,
there's an Exchange for Maemo app. So you know, as long as it works for the
narrow circumstances in which their developers operate, no problem? Sigh.</p>
<p><strong>Google IMAP</strong>. For whatever reason, connecting to gmail via IMAP takes
ages. I think it's missing IDLE support? Or perhaps it's just not very
good at requesting data in chunks--gmail encourages leaving everything
in your inbox.</p>
<p><strong>Browser</strong>. Three browsers, none terribly good. Fennec is slow to load
and render, and the built-in browser is outdated enough that Gmail is
basically broken on it.</p>
<p><strong>Installable apps</strong>. This is a mixed blessing. Maemo never saw the
massive developer influx that Android has. It also doesn't have nearly
as much crapware or as many of the privacy-invasive free apps that Android seems
to implicitly encourage.</p>
<p>My own ideas for apps turned out to be infeasible; those barcode laser
scanners are tuned for specific frequencies, so you can't just display
or generate a picture. The flashlight eventually got implemented, but
is slightly annoying because opening the lens cover turns on the
camera app. The relatively rapid Meego announcement dramatically
terminated any interest in the platform, so few were surprised when
the "burning platform" memo was leaked.</p>
<h1>Roses</h1>
<p>One thing the Maemo platform did right, however, was their plugin system.
Rather than a Flickr app and a Picasa app and so on, almost all basic apps
relied on a Sharing API, so that one could install and register plugins
providing it. You'd take a picture and select from your providers where it
should be delivered. I had a plugin for the gallery2 webapp, for example.
Similarly, their Conversations app integrated SMS and the whole libpurple suite
into one consistent UI. Adding new protocols was simple, and you could merge
contacts into a consistent view.</p>
<p>The Linux OS does offer app devs a few useful improvements over the Android
interfaces; to <a href="http://android.stackexchange.com/q/4538/1698">the best of my knowledge</a>, there is no Android version
of <a href="http://www.valeriovalerio.org/?page_id=174">BlueMaemo</a>, a Bluetooth keyboard emulator that requires no custom
software on the other side. And it came with a suite of open source tools
relatively easy to build and ready to go.</p>
<p>Being available off contract was a huge boon; I was able to get many of
the benefits of a smartphone without being held to a pricey data plan. For
plenty of people, wifi is available in the home and the office, so it's not
as huge a sacrifice as it might seem. Combined with Skype and other VOIP
apps, it's not super hard to get by with a 100 dollar a year voice only plan.</p>
<h1>Moving on</h1>
<p>Clearly Maemo/Meego are dead and buried. Jolla has a huge market share
disadvantage; every minute they don't have a device in users' hands is a
moment someone is buying, and thus funding, a competitor. Case in point: I
bought a Galaxy Nexus as a replacement. It will be quite a few years
before Jolla will get a shot at converting me. And it will be a tough sell.
Google's Nexus line offers no contract, no carrier crapware phones, and the
pace of updates has been incredible.</p>PuppetConf 20122012-09-30T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2012-09-30:puppetconf-2012.html<p>Recovered from the post-con crash a while ago, so it's time to write up some thoughts.
Last week I attended PuppetConf with my coworkers at the OSL. The OSL attended
PuppetConf primarily as a pre-deployment information gathering exercise. We want
to avoid common pitfalls, and be able to plan for things coming down the pipeline.
Puppet 3.0 was targeted to be released on Friday and <a href="http://projects.puppetlabs.com/versions/271">clearly that slipped</a>.</p>
<p>The venue itself was nice, but space partitioned poorly. The two main tracks had
surplus space, but the three side tracks nearly always had people turned away for
space concerns. Supposedly, the recordings will be available shortly, so it may
not be the Worst Thing In The World, but only time will tell.</p>
<p>Content wise, one recurring theme is to start small and simple, and not worry about
scale or sharing until they become an issue. Designing a deployment for thousands of
nodes when you have perhaps a dozen gives new life to the term "architecture astronaut,"
and there's a certain amount of benefit to procrastinating on system design while
the tools and ecosystem mature. Basically, <a href="http://en.wikipedia.org/wiki/The_Mythical_Man-Month">build one to throw away</a>.</p>
<p>Another problem we've been worrying about at the OSL is updating 3rd party config
modules in their various forms. The hope is that by explicitly annotating in your
system where things came from, you can automate pulling in updates from original
sources. Pretty much the universal recommendation here is a condemnation: avoid git
submodules. Submodules <em>sound</em> like the right strategy, but they're for a different use
case. In our experience, they dramatically complicate the workflow. At least one person
mentioned <a href="http://librarian-puppet.com/">librarian-puppet</a>, which as far as I can tell isn't much different
than <a href="http://joeyh.name/code/mr/">mr</a> with some syntactic sugar for PuppetForge. This is great, because mr was
basically the strategy I was recommending prior to PuppetConf.</p>
<p>The <a href="http://puppetconf.com/speakers/?speaker=Jamie%20Wilkinson">Better Living Through Statistics</a> talk was less advanced than I'd hoped. Anyone
who's spent maybe 5 minutes tuning nagios check_disk realizes how inadequate it is, and
that the basic nagios framework is to blame. What you really want is an alert when the
<code>time to disk outage</code> approaches <code>time to free up more disk</code>, and no static threshold
can capture that. While Jamie did provide a vision for the future, I was really hoping
for some new statistical insight on the problem. It appears it's up to me to create and
provide said insight. Perhaps in another post.</p>
<p><a href="http://unethicalblogger.com/">R Tyler Croy</a> gave a useful talk on <a href="http://puppetconf.com/speakers/?speaker=R.%20Tyler%20Croy">behavior/test driven infrastructure</a>. I'd
looked into Cucumber before, but RSpec was only a word to me before this talk. It's
certainly something I'll need to take some time to integrate into the workflow and
introduce to students. One concern I had (that someone else aired) was that in the demo,
the puppet code and the code to test it were basically identical, such that software
could easily translate from code to test and back. Croy insisted this was not the case
in more complicated Puppet modules, but I'm reserving judgement until I see said modules.</p>
<p>Overall, I'd definitely recommend the conference to people preparing to deploy puppet.
There's plenty more sessions I didn't cover in here that are worth your time. You'd
probably get the most out of it by starting a trial implementation first, instead of
procrastinating until Wednesday night to read the basics like I did. Beyond simply
watching lectures, it's useful to get away from the office and sit down to learn about
this stuff. Plus, it's useful to build your professional network of people you can
direct questions to later.</p>What programmer things should every sysadmin know in 2012?2012-07-24T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2012-07-24:what-programmer-things-should-every-sysadmin-know-in-2012.html<p><em>The following is an update to a <a href="http://serverfault.com/a/11167/919">post originally made to ServerFault</a> in 2009.</em></p>
<p>System administration is more than clicking Next, <a href="http://www.reddit.com/r/sysadmin/comments/wzs8k/rack_bite_am_i_the_only_one/">cutting your fingers on cagenuts</a>,
and 3am pager alerts. Sysadmins can borrow many of the same skills and tools that
programmers use daily to make themselves more productive, and build a deeper
understanding of the systems they manage. The following five skills are perhaps the
most obvious overlaps between our two roles:</p>
<p><strong>Version Control.</strong> Be able to generate, read and apply patches. In 2012, it's vital that
your version control system present a repo wide version history. You should be able to
write descriptive changelogs and explain why you made each change. Whatever your technology (I recommend
<a href="http://www.amazon.com/gp/product/1430218339/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=1430218339&linkCode=as2&tag=jlduggesblog-20">git</a> by default), know how to search the repository's logs for keywords and time frames.</p>
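As a concrete sketch of searching history by keyword and time frame, git can do both in one command; the "ldap" keyword and the time frame here are just illustrations, not from any particular repo:

```shell
# Find commits whose message mentions "ldap" within the last year,
# one line per commit. Run inside any git repository; keyword and
# time frame are illustrative.
git log --grep="ldap" --since="1 year ago" --oneline
```

If you need to find when a string was added to or removed from the code itself rather than the message, `git log -S"string"` (the pickaxe) complements the message search.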
<p><strong>Scripting.</strong> Do something once and be on your way. If you'll do it twice or more, do it once, then
write a script. To paraphrase Tom Limoncelli, if you're not using <a href="http://www.amazon.com/gp/product/1935182137/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=1935182137&linkCode=as2&tag=jlduggesblog-20">Powershell</a> or bash,
you're working too hard.</p>
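The rule fits in a few lines of shell; this sketch (the function name and naming scheme are inventions for illustration) turns a repeated "copy before you edit" chore into something reusable:

```shell
#!/bin/sh
# A one-off chore turned script: keep a dated copy of a file before
# editing it. Function name and naming scheme are illustrative.
backup() {
  cp "$1" "$1.$(date +%F).bak"
}

demo=$(mktemp)      # stand-in for a real config file
backup "$demo"      # leaves e.g. /tmp/tmp.XXXXXX.2012-07-24.bak behind
```

The second time you type the same three commands by hand is the moment to paste them into a function like this.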
<p><strong>Debugging.</strong> Know how to read a stack trace and how to report relevant errors to your
software support contact. Documenting the error is easy and helpful, but knowing how to
fix it takes a lot of investment in reading a specific code base. Do the part that's easy
for you, and let the devs handle the part that's easy for them.</p>
<p><strong>Testing.</strong> Repurpose integration tests for <a href="http://auxesis.github.com/cucumber-nagios/">continuous integration testing</a>. Used in
conjunction with version control, you get a strong idea of what may have gone
wrong, when, and what changed at that time. </p>
<p><strong>Peer Review.</strong> Keep your system configuration in revision control, and turn on commit
mail. Change is core to system administration, not just an agenda item at the weekly staff
meeting. Do not let Change Management degrade into political battles or displays of
bureaucratic power.</p>
<p><strong>Study Cryptography.</strong> System administrators are in charge of networked resources; baking in
security as a final step is somewhere between impossible and a very expensive proposition.
Given how much of the sysadmin role involves acting as a trusted third party, understanding
public key cryptography, password handling practices, entropy, and encryption in general makes for
valuable skills in debugging, performance tuning and setting policy.</p>Open Source Bridge Wrapup2012-07-01T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2012-07-01:open-source-bridge-wrapup.html<p>Friday marked the end of Open Source Bridge. Just about the best introduction
to Portland culture as you can find. Vegan lunches, Voodoo Donut catering,
lunch truck friday, and rock and roll pipe organists in the Unitarian's
sanctuary. </p>
<p><a href="http://www.youtube.com/watch?v=0Cl4ImQpV94">The</a> <a href="http://www.youtube.com/watch?v=tJqZGRIwtxk">keynotes</a> <a href="http://www.youtube.com/watch?v=_OczqFEcUTA">were</a> pretty cool. I'd seen Fenwick's presentation
from LCA, and was surprised at how much had changed, hopefully because some of
his keystone evidence turned out to be bogus; it turns out there's strong
evidence that the only "priming" effect was in the grad students running the study.
I'm still not quite clear on what JScott wants people to run vbox for, but he
did have a really good idea about bringing your own recording equipment that I
wish I had taken to heart.</p>
<p>Probably the most useful talk I attended was Laura Thompson's presentation on
<a href="https://crash-stats.mozilla.com">Mozilla's Crash Reporting service</a>, powered by <a href="https://github.com/mozilla/socorro">Socorro</a>. A few of the
projects the OSL hosts are desktop apps and collecting crash data might be
a good engineering tool win for them. A lot of embedded hardware talks that
would have been interesting, but not directly relevant to the needs of the
OSL. Hopefully they'll be up as recordings soon. </p>
<p>The OSL was also well represented in the speakers' ranks: we ran five
sessions during the main conference, and two during the Friday unconference.
I think next year it would be a good idea to encourage our students to
participate as volunteers; getting them facetime with speakers and the
community at large can only do us a world of good. I gave a first run of a
talk on using GNUCash for personal finance; the turnout was pretty good,
given how many people were still at the food carts. I should have recorded
it to self-critique and improve.</p>
<p>The "after party" on Thursday was nice. Lance won the <a href="https://twitter.com/ramereth/status/218562778535952384/photo/1">2012 Outstanding Open
Source Citizen award</a>, which is great, because he deserves recognition for
handling the turmoil at the OSL over the past year. But now I've got to
figure out a plan to meet or beat that for next year. No small task.</p>
<p>Next up is catching up back at the Lab, and then OSCON!</p>In like a lion, out like a Wildcat!2012-03-30T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2012-03-30:in-like-a-lion-out-like-a-wildcat.html<p>March has been a month of great change for K-State. Frank Martin left for the east
coast, in a departure you might have heard about at the water cooler. What you probably
didn't hear, is that I left in the opposite direction for a new position with the Open
Source Lab. Today ends my first week with the Lab, and it's quite a rush. </p>
<p>Working with K-State was interesting and intellectually stimulating in many ways, and I
wish the Wildcats I've left behind in Manhattan all the luck in their endeavors. K-State
has given me a lot of knowledge and experience and it's time to share it with the
outside world.</p>
<p>The role I'm taking on with the <a href="http://osuosl.org/about">OSU Open Source Lab</a> is a System Administrator role,
managing systems and student workers. This is a unique and wonderful opportunity to
contribute back to the open source community that's benefited me personally and
professionally. The OSL provides managed and unmanaged systems to a number of projects,
and trains student workers as sysops. I'm not about to propose any major changes, but
there's always room for improvement, and I've got a few ideas that need to bake for a
while longer yet.</p>
<p>But for now, I think it's time to enjoy the weekend.</p>2011 Goal Review and 2012 Goals2012-01-02T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2012-01-02:2011-goal-review-and-2012-goals.html<p>In January I published five goals:</p>
<ol>
<li>Learn and practice SEO techniques.</li>
<li>Explore website revenue and analysis.</li>
<li>Reinvigorate K-SLUG student participation.</li>
<li>Impress folks by being great at my new job.</li>
<li>Write down tasks to improve productivity and quality of life.</li>
</ol>
<p>Arguably I accomplished 4 of them. Obviously this isn't 5 for 5, and the main
reason is that I spent more time <em>really</em> analyzing my personal finances. I've
been keeping personal books since 2007, but this year I had some unexpected
family obligations as motivation to really bring it up a notch. I dug into
the local library collection on personal finance in August, and I've basically
reached the point where books just repeat themselves. The mundane stuff has
been automated for quite a while, so I can just focus on planning, analyzing
and tweaking. I've used 2011's last paycheck to work up tax returns and found
a bug in my annual budget spreadsheet's tax calculations. </p>
<p>To put the knowledge to the test, I started asking and answering questions on
reddit and StackExchange's money subsites, where I found a couple of neat
options for university employees that I should write a full post on. The bad news
is there's so many small tricks to consider, it can soak up as much time as you
let it. But I did actually make some changes, so it's not total analysis
paralysis. I opened and funded a savings account, and tuned my health insurance
policies during open enrollment. I came up with a conservative estimate for
FSA allocation to save on taxes. I moved away from the semi-luxury apartments
for a cheaper yet comparable place, with garage parking to boot. I researched
how to sell used video games and moved half of my video game collection. </p>
<h1>Goals for 2012</h1>
<ol>
<li>Be awesome at my new role.</li>
<li>Finish catching up on my backlog of games and podcasts.</li>
<li>Book diet; no more than 12 books this year.</li>
<li>Establish a retirement savings goal and act on it.</li>
<li>Twelve informative blog posts; no mindless link propagation or blogging
about blogging!</li>
</ol>
<p>Some of this is lessons from last year. I started coming back from the library
with more books than I really have time to read, so #3 is an attempt to rate
limit myself. I also found out that Microsoft Research has something like two
thousand video lectures online, so I'll never actually catch up on <em>those</em>, but
I can catch up on the podcasts from the past few months, and finish selling off
video games to roll them back into the budget.</p>
<p>Goal #4 is really about finding a balance between saving and spending. I think I
have the unusual problem of saving too much. While it's been crucial for
helping out sick parents, my basic retirement strategy is "don't buy anything
and save everything you can," because I haven't really figured out how much I
can spend and still retire comfortably.</p>
<p>And #5 is about committing to putting the polishing touches on a few posts that
have been brewing. And cutting back on blogging about goals--this shall be the
last until 2013! </p>Never Ask For Passwords2011-10-25T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2011-10-25:never-ask-for-passwords.html<p>Once again, it's demonstrated that sitting around a table drafting
policy fails the common sense test. <a href="http://chronicle.com/blogs/wiredcampus/universitys-social-media-policy-draws-cries-of-censorship/33898">Chronicle of Higher Education</a>
reports that Sam Houston State University decided to ask all members
of their social media portal for username and passwords. Their
<a href="http://shsu.edu/campus_life/social-universe/pdf/50_08_SHSU_PolicyManual_R04.pdf">published policy</a> still does. My favorite part:</p>
<blockquote>
<p>Do not change any passwords issued with the accounts. If there is a problem
or compromise of the accounts security, contact the Marketing and Communications
Social Media Representatives. They will issue you a new password. <em>Do not share
login and password information with unauthorized individuals</em>.</p>
</blockquote>
<p>Most social media sites support <a href="http://oauth.net/">OAuth</a>, which allows apps to read and write
to your feed without sharing an underlying username/password. Moreover, if all
you're doing is mere aggregation, there's no need to ask for this information.
You automate censoring, you implement a blacklist, and you move on. There's no need
to edit posts directly, there's no need to spam thunderstorm warnings on every twitter
feed you can find.</p>Solving the Sunday Puzzle with coreutils & grep2011-08-14T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2011-08-14:solving-the-sunday-puzzle-with-coreutils-grep.html<p>Not too long ago, a puzzler admitted to solving the NPR Sunday Puzzle with a
computer. Since I hate word puzzles quite a bit, I'd taken similar steps in the
past. For example, <a href="http://www.npr.org/2011/05/15/136315586/as-a-matter-of-course">this recent challenge</a>: </p>
<p>Fill in the 4x4 square crossword with common uncapitalized words, using no
letter more than once:</p>
<table>
<tr><td>N</td><td>A</td><td>G</td><td>S</td></tr>
<tr><td>E</td><td></td><td></td><td></td></tr>
<tr><td>W</td><td></td><td></td><td></td></tr>
<tr><td>T</td><td></td><td></td><td></td></tr>
</table>
<p>My secret weapon? <a href="http://www.gnu.org/s/coreutils/">GNU coreutils</a>. Regex are a great tool, but I rarely have
to use some of the more obscure features, which hurts on the occasions where
they're called for. So the NPR puzzle can be a good way to practice and learn!</p>
<p><em>Edit</em>: Commenter hggdh points out that the heavy worker here is grep, which is not
part of coreutils. If your OS vendor doesn't provide grep, <a href="http://www.gnu.org/s/grep/">GNU grep</a> sounds like
a suitable replacement.</p>
<ol>
<li>
<p>I'm using the American English dictionary provided by Ubuntu
<code>/usr/share/dict/words</code>. The format of this file is one word per line. Every
form of a word, including contractions and possessives, gets its own line. We
use <code>|</code> (pipe) to chain the output of one command as the input of the next.
Cat simply outputs a file, and wc -l counts the lines in it.</p>
<p><code>laptop:~$ cat /usr/share/dict/words | wc -l</code></p>
<p><code>98569</code></p>
</li>
<li>
<p>I assume no apostrophes are in the puzzle. Grep reads input and outputs only
those lines that match a regular expression (regex). Using the -v option to
grep changes it to output only lines that don't match our pattern.</p>
<p><code>laptop:~$ cat /usr/share/dict/words | grep -v "'" | wc -l</code></p>
<p><code>74059</code></p>
</li>
<li>
<p>That's a lot of words to fuddle around with, so lets winnow this down.
Firstly, we only care about 4 letter words. We can use grep to give us only
these words, using the regular expression "^....$". Caret (^) represents
the start of a line, and $ represents the end of one. Each period is a single
free choice character for grep, matching exactly one character in the input. </p>
<p><code>laptop:~$ cat /usr/share/dict/words | grep -v "'" | grep "^....$" | wc -l</code></p>
<p><code>3174</code></p>
</li>
<li>
<p>Having cut the search space by 96 percent, we now turn to the clues for...
clues. Fortunately, nags and newts define which letters every word can start
with. Grep treats symbols within [] as alternatives, meaning any one
symbol within can match the input. Below alters the regex from step 3 to only match
words starting with a, g, s, e, w or t.</p>
<p><code>laptop:~$ cat /usr/share/dict/words | grep -v "'" | grep "^[agsewt]...$" | wc -l</code></p>
<p><code>777</code></p>
</li>
<li>
<p>Rules say no two letters repeat in the puzzle, so we'll exclude all words
with the letters from nags and newts anywhere other than the first letter. As
an alternative to -v, we can use carets inside brackets to indicate "not". </p>
<p><code>laptop:~$ cat /usr/share/dict/words | grep -v "'" | grep "^[agsewt][^nagsewt][^nagsewt][^nagsewt]$" | wc -l</code></p>
<p><code>131</code></p>
</li>
<li>
<p>Next, we can rule out words with repeated letters, like solo and wool. To do
this quickly, we'll need to use <a href="http://www.regular-expressions.info/brackets.html">backreferences</a>. Backreferences can be
<a href="http://swtch.com/~rsc/regexp/regexp1.html">slow</a>, but since our dataset is so tiny, it will be fine to add it to the
end of the pipeline.</p>
<p><code>cat /usr/share/dict/words | grep -v "'" | grep "^[agsewt][^nagsewt][^nagsewt][^nagsewt]$" | grep -vE "([a-z]).*(\1)" | wc -l</code></p>
<p><code>106</code></p>
</li>
<li>
<p>Starting to get close! From here on out, this plays a lot like sudoku. Our
goal is now to start constructing a regex for each word. We replace the leading
alternatives with a specific letter. To start off, we've only got 7 options for 2
across:</p>
<p><code>laptop:~$ cat /usr/share/dict/words | grep -v "'" | grep "^e[^nagsewt][^nagsewt][^nagsewt]$" | grep -vE "([a-z]).*(\1)"</code></p>
<p><code>echo</code></p>
<p><code>ecru</code></p>
<p><code>emir</code></p>
<p><code>epic</code></p>
<p><code>euro</code></p>
<p><code>evil</code></p>
<p><code>expo</code></p>
</li>
</ol>
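For anyone scripting along, the three recurring stages above (drop apostrophes, match a positional pattern, reject repeated letters) can be wrapped into one shell function; the function name is my own invention:

```shell
# Filter a dictionary on stdin down to apostrophe-free words matching
# a pattern, with no letter repeated anywhere in the word.
candidates() {
  grep -v "'" | grep -E "^$1$" | grep -vE '([a-z]).*\1'
}

# The 2 across query from step 7, run against a toy word list;
# prints "echo" and "epic":
printf 'echo\nepic\nnags\nsolo\n' | candidates 'e[cmpuvx][hipr][cloru]'
# With the real dictionary:
#   candidates 'e[cmpuvx][hipr][cloru]' < /usr/share/dict/words
```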
<p>We now write a different regex without negations to get the same list.</p>
<div class="highlight"><pre>laptop:~$ cat /usr/share/dict/words | grep "^e[cmpuvx][hipr][cloru]$" | grep -vE "([a-z]).*(\1)" | wc -l
7
</pre></div>
<p>Now we build a similar regex for 2 down. Adding in what we know about its
intersection with 2 across (cmpuvx) is the sudoku-like step:</p>
<div class="highlight"><pre>laptop:~$ cat /usr/share/dict/words | grep -v "'" | grep "^a[cmpuvx][^nagsewt][^nagsewt]$" | grep -vE "([a-z]).*(\1)"
</pre></div>
<p><code>achy</code></p>
<p><code>acid</code></p>
<p><code>amid</code></p>
<p><code>amok</code></p>
<p><code>avid</code></p>
<p>We rewrite this one as </p>
<p><code>laptop:~$ cat /usr/share/dict/words | grep -v "'" | grep "^a[cmv][hio][dky]$" | grep -vE "([a-z]).*(\1)" | wc -l</code></p>
<p><code>5</code></p>
<p>Applying the same logic to 3 down yields <code>"^g[ir][lriu][bdlmp]$"</code>, and 4 down
yields <code>"^s[lu][cilmoru][bdfhkmopr]$"</code>. </p>
<ol>
<li>The last positions in each down regex construct a new regex for 4 across:</li>
</ol>
<p><code>cat /usr/share/dict/words | grep -v "'" | grep "^t[dky][bdlmp][bdfhkmopr]$" | grep -vE "([a-z]).*(\1)"</code></p>
<p><code>typo</code></p>
<p>A unique solution to 4 across!</p>
<ol>
<li>Revisiting 2 down with this new fact also yields a unique answer. I leave
solving the rest of the puzzle from here as an exercise to the reader.</li>
</ol>Mid 2011 Update2011-07-31T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2011-07-31:mid-2011-update.html<p>Over halfway through 2011, so it's time to revisit my goals again. I really
should post more often.</p>
<h3>Personal Website</h3>
<p>pwnguin.net consistently gets more than a thousand hits a month. That's enough
that I can start adding A/B experiments on the most popular pages. I've added
some ads to blog posts to see if anything there gets unusually good revenue.
Still haven't published that bit on using grep to solve <a href="http://www.npr.org/series/4473090/sunday-puzzle">NPR's Sunday
puzzles</a>, but someone did admit to doing so on air! Need to add <code>awstats.js</code>
to the templates, to better complement Google Analytics.</p>
<p>I also set up an AdWords account just to try it out. I got a $100 coupon but
had to call them to redeem it and it's an awkward conversation since they want
to set up a campaign themselves when I don't really have anything in mind yet.
What they set up is quite expensive, but I think I can come up with a cheaper
campaign once I've got more clue.</p>
<p>In June I set up openLDAP and tied some of the easier systems in. For a while
I was worried, as my free <a href="https://www.startssl.com/?app=11&action=true">SSL cert provider</a> was compromised. But they came
back online shortly before my certs expired and I've got it rolling. I also set
up IPv6, but I'm not really using it for anything other than IRC at the moment.
KSU won't be IPv6 ready for quite a while it seems, and IPv4 allocations are
basically running on empty.</p>
<h3>K-SLUG</h3>
<p>Not much done here. Right now we mostly meet on Wednesdays for informal lunches
and sometimes discuss Linux. Ideally, I'd establish a Friday Night fight (for
open source gaming), and monthly tech presentations. But before I get that off
the ground, I need to set up limesurvey and gather opinions. A work in
progress, but I'm hardly the only slacker here.</p>
<h3>Work</h3>
<p>Strange goings on lately. We're having difficulty hiring and retaining system
administrators. As I chat with the existing ones, I learn more and more
peculiarities that drain their time. Like scripts to copy out of LDAP instead
of directly relying on it. Anyways, they're having trouble getting a big enough
applicant pool to convince Affirmative Action to allow them to hire two people,
and the recent turnover roughly matches the rate at which they can hire, which
leaves them constantly understaffed.</p>
<p>I did catch several <a href="http://www.google.com/events/io/2011/sessions.html">Google I/O videos</a> in June, though as you might expect,
most were more about using Google's platform than building your own. Lots of
Android and Chrome WebGL stuff--only <a href="http://www.google.com/events/io/2011/sessions/clientlogin-fail.html">two</a> <a href="http://www.google.com/events/io/2011/sessions/identity-and-data-access-openid-and-oauth.html">videos</a> related to Identity
Management. So I did spend some free time reading some papers on entropy and
passwords, especially during the move last week when I was without net access
at the new place. I'll probably start up a new category and see about getting
on Planet Identity once I've got a few posts up.</p>
<p>Beyond papers and presentations at home, I picked up several books on Java and
a book on Kerberos from the research uni, and helped out a bit with our
new password change page. Right now it takes users 3 tries to choose a valid
password, and some people up to ten. This is on par with published research
on usability, so when our system doesn't improve things, we'll know why.</p>
<h3>Personal finance</h3>
<p>Beyond those goals, in July I pretty much focused on finance. Nobody in the
department received raises that I know of. So I've located a new place with
a roommate. Took quite a bit of effort to organize all the movers and
cleaners, but I estimate I've saved about 3k annually.</p>
<p>I also get garage parking, a larger kitchen, a closet in the bathroom, and
access to Tivo / Netflix. The new place is much further away, and in the
destruction path of Tuttle Creek Lake. I located a <a href="http://www.nwk.usace.army.mil/tc/daily.cfm">site on lake levels</a>
that might make it worth setting up nagios for so I can write and deploy a
scraper for it.</p>
<p>In other news, I figured out why my student loan split in GNUCash is always
wrong. A simple daily interest formula is not possible to represent at the
moment, although I've filed a <a href="http://uservoice.com/a/mzAEn">uservoice request</a>. I'm also trying out the
future transactions and cleared / reconciled features. It's actually pretty
nice to schedule a bunch of recurring transactions and have them show up 30
days before they occur. Although creating transactions ahead of time doesn't
work so well for 403(b) contributions. I also converted from XML to the sqlite
backend, but it turns out the schema's a bit ugly. I was really hoping that
since it has a postgres option I might be able to build some python reporting
webpages, but it's not as trivial as I'd hoped.</p>
<h3>The future</h3>
<p>Next quarter, I'll continue integrating apps with LDAP, and maybe selfhost
openID. It turns out there's nothing in Debian yet for this... I also ought to
set up nagios or something to monitor items of interest. And, I should probably
fix my self-hosted Mozilla Sync server.</p>
<p>I've also got some posts in draft for the Identity tag that I can finish up
and post. There's been some commotion from Mozilla about solving authentication
I could reflect upon. On the web analytics side, I'll run through Conversion
University and try some A/B testing. </p>
<p>I'll also probably start up a subsite to sell my used games with. I've found
good sites to sell on already, but finders fees and shipping eats into the
better prices, so having a URL I can point locals at might save me and local
buyers hassle.</p>
<p>I also need to learn about HSAs and such before open enrollment. Our offering
looks pretty good, but I know a guy who was caught unaware by a new change
regarding OTC drugs and his FSA.</p>May Goals Update2011-05-31T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2011-05-31:may-goals-update.html<p>Been a few months since I set my goals for 2011, and the holiday's a good
time to reflect on progress and life.</p>
<h3>Goals for 2011</h3>
<p>The website's still doing okay. I did waste a bit of time drafting an
(unpublished) guide on using grep to solve NPR's Sunday word puzzles, because
I hate word puzzles with a passion and wanted to learn more about <code>grep</code>. I
replaced my stats tool with awstats, to complement GA. Revenue is still tiny,
but I still haven't added Adsense to the blog. And until traffic is high
enough for A/B experiments, there's no point in "optimizing."</p>
<p>I've also cleared out a great many of my todo items in the past few months.
So much so that there aren't many minor easy things left. Only some poorly
specified ideas that might be a good idea "some day" and the tasks that cause
me much <em>anxiety</em>. What can I say, I hate calling people, especially people I've
never met, especially when their fees can bankrupt me. But I'll pick out health
care providers eventually.</p>
<h3>May update</h3>
<p>Beyond those goals, in May I also upgraded to the latest Ubuntu release, read
a book and several papers, finished up the last of the LCA2011 videos I cared
about, and finished a few games and sold off about 100 dollars worth of old
games to the internet. I think I actually made a dollar or two on a couple,
but mostly, games are still an expense. Cheaper than renting though!</p>
<p>I also drove out to visit Mom before a cancer-treating surgery. She should be
returning to work today, and doing well physically. Financially I worry about
her; I read stories of people being fired over trivial stuff after cancer,
after a jump in employer paid premiums. They're already fouling up her FMLA
leave, and it just seems like a preamble to things to come. </p>
<h3>The future</h3>
<p>Next month I'm focusing my personal projects on dual use. Stuff that I can
apply at my current job @ KSU working on Identity Management. To that end, I'm
going to address authentication systems of pwnguin.net -- start using LDAP,
Kerberos, and such. I'll also stop delegating OpenID and self-host. Beyond that
I really need to step up my Java game. Digging into compilers and JVMs doesn't
really prepare you for the J2EE colossus. I'm gonna have to find some decent
books and tutorials to fill my missing gaps in understanding. </p>
<p>I also need to find a roommate, because rent around here could mortgage a
damn house. If any of my Manhattan KS area readers need a roommate, you know
how to get in touch.</p>Reducing Interruptions2011-05-13T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2011-05-13:reducing-interruptions.html<p>Been pondering lately how I'd design a desktop environment to improve
productivity. Many of the ideas are non-trivial changes, but there's a few
papercut sized tweaks I thought I'd document and share. These all revolve
around the notion of <a href="http://en.wikipedia.org/wiki/Flow_%28psychology%29">flow</a>, a state of concentration. The aim is to reduce
notification popups and, as a side effect, waste less time futzing with the
window manager by batch processing them.</p>
<ol>
<li>
<p><em>Set Liferea update frequency to once a day.</em> Many feeds specify a refresh
of two hours. For news junkies, this might be fine. But none of the feeds I
subscribe to are time sensitive. So I set the "Default Feed Refresh Interval"
to 1 day. You can override the default for 'important' feeds, if need be.</p>
</li>
<li>
<p><em>Set Evolution to check less frequently</em>. This one can be controversial. By
default Evolution checks and notifies every 10 minutes. Since the goal is
multiple hours of uninterrupted concentration, that's far too many chances of
interruption. The challenge is responding to urgent mail without letting the
less urgent stuff interrupt. My approach is to turn off general notifications
and set alerts for important senders, like the boss and the support ticket
system. </p>
</li>
<li>
<p><em>Set Gwibber to check less frequently.</em> If someone needs your immediate
attention for 140 characters or less, they probably know how to reach you by
SMS. You can tweak the notifications to only include mentions and replies,
which dramatically cuts down on the traffic. Unfortunately, Gwibber limits the
upper bound between refreshes to an arbitrary 240 minutes, so if you wanted a
once a day reminder to check everything, you're out of luck there.</p>
</li>
<li>
<p><em>Filter noisy RSS feeds.</em> Not all feeds are pure information. For example,
the LinkedIn feed throws in tweets from your connections. I'm a fan of feed
processing tools, whether it be Yahoo! Pipes or client side XSLT. If you're
extra lucky, the feed has a digest; Lifehacker was unique in offering a
highlights tag feed, which reduces a week's worth of posts to a single digest
post on Friday. Gawker seems to have killed these tag feeds and posts, after
the rousing success of driving users away with their recent redesign. Of
course, you can always unsubscribe if you find the signal just isn't worth the
noise...</p>
</li>
<li>
<p><em>Turn off banshee track notifications.</em> Music is a great way to mask ambient
noise in your home or office. Default Banshee however, pops up a notification
on every track change. As I'm reading a paper or writing code, this pop up
distracts the eye away from the monitor I <em>was</em> looking at. There doesn't seem
to be a configuration change one can make, but a clever user on AskUbuntu has
<a href="http://askubuntu.com/questions/33946/disable-notifications-on-track-change">found a makeshift solution</a>. </p>
</li>
</ol>Java Symposium 2011 Recap2011-03-20T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2011-03-20:java-symposium-2011-recap.html<p>Went out to Vegas to attend <a href="http://javasymposium.techtarget.com/">The Java Symposium 2011</a>. Not really fond of
the location, but you have to pick your battles. It's a three day conference,
but travel really knocks out at least a day. </p>
<p>The most interesting talks, I thought, were less about specific technology
releases like JEE 6 or Java 7 or Glassfish or Apache Camel, but performance
tuning techniques and war stories. In retrospect the one live demo I saw was
maybe a bit of smoke and mirrors, where they swapped out an optimistic
concurrency pattern using Exceptions that looks really ugly for a synchronized
version and magically halved throughput. They claimed it was antagonistic
performance from a 3rd party dependency, like Facebook or Twitter. The idea
being that if you remove roadblocks in your code, you might discover the
increased load on twitter scales even more poorly. Because apparently queues
are quadratic in Web 2.0 land.</p>
<p>But beyond that there were interesting talks on classloaders, static analysis
for performance anti-patterns, how the <a href="http://www.youtube.com/watch?v=uL2D3qzHtqY">JVM abstraction makes compliance
difficult to optimize</a>, and the virtues of Dtrace. Many insisted the first
place to look for dramatic fixes was not code, but configuration and hardware.
We wound up scheduling time with one of the vendor's freemium tools next week
to debug some terrible performance problems in testing environments not
present in production or development. I suspect the issue is going to be
insufficient hardware allocation or misconfiguration of the Oracle server; I
haven't seen any version control for config of that, and it seems like a
prime candidate for config and data drift. Unfortunately I don't have enough
access to the test env to debug the problem. Perhaps I'll ask for that to be
remedied.</p>
<p>The keynotes were mostly bland vendorspeak, although Oracle did claim they had
no evil plan, because if they did, the past year would have been less dramatic.
Because obviously they're only good at planning for evil, and fail at
benevolent planning. Otherwise, lots of emphasis on The Cloud, and Java 7, and
quite a bit of Open Source as a feature. </p>Good as new!2011-03-10T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2011-03-10:good-as-new.html<p>So much for pristine brand new car condition. Bumped into a car in a parking
lot last week. The other car barely had any damage, just some paint that will
probably buff right out.</p>
<p>Apparently modern bumpers are made of plastic, and can be repaired with a heat
gun. <a href="//www.pwnguin.net/albums/photologue/photo/subaru-before-after/">Witness</a>:</p>
<p><a href="//www.pwnguin.net/albums/photologue/photo/subaru-before-after/"><img alt="before and after" src="//www.pwnguin.net/media/photologue/photos/cache/subaru-before-after.jpg" /></a></p>New Homepage Design2011-03-06T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2011-03-06:new-homepage-design.html<p>New year, new website. The old design worked okay, but <a href="http://googleblog.blogspot.com/2011/02/finding-more-high-quality-sites-in.html">Google's recent
pagerank tweak</a> punished my RSS aggregation trick. So I've accelerated plans
to move the blog to self-hosting, which gives me more stuff to analyze and
might one day cover the cost of hosting. Tight deadline though, since I want this
ready in time for a conference I'm attending.</p>
<h3>The software</h3>
<p>The original RSS aggregation method has a number of benefits. It's
lightweight, and much harder to hack. So I've looked far and wide for
a similar setup for blogging, and the winner was hands down <a href="http://docs.notmyidea.org/alexis/pelican/">Pelican</a>.
Like Planet Venus, Pelican is Python powered, and offers a clean separation
of content and visual design to create static sites. </p>
<p>While Planet Venus used RSS as input, the new site uses static files written
in markup languages (I use Markdown). Pelican parses all the files and
constructs the various tag pages and feeds. To import all this was a bit
tricky; Livejournal's export format is unusual and handles only a month at a time.
Luckily, I keep a backup of my entire blog in Liferea, which is sqlite3 backed.
So I wrote a Python script to write each entry out into the Markdown format.
Stealing code from <a href="http://www.codefu.org/wiki/Main/Html2markdown">html2markdown</a> was very handy, though it choked on a few
of my more insane markup. Definitely check out the homepage, because the author's
got an even more twisted workflow.</p>
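<p>The Liferea-to-Markdown script boiled down to a SELECT and some file writes. A
stripped-down sketch is below; the table and column names are assumptions about
Liferea's schema (check with sqlite3's <code>.schema</code> first), not a faithful
copy of my script:</p>

```python
import sqlite3
from datetime import datetime, timezone

def export_entries(db_path, out_dir):
    """Dump each feed item into a Pelican-style Markdown file.

    'items', 'title', 'date', and 'description' are assumed names --
    verify them against the actual Liferea database before running.
    """
    conn = sqlite3.connect(db_path)
    for title, stamp, body in conn.execute(
            "SELECT title, date, description FROM items"):
        when = datetime.fromtimestamp(stamp, tz=timezone.utc)
        # Crude slug: lowercase, non-alphanumerics become dashes.
        slug = "".join(c if c.isalnum() else "-" for c in title.lower())
        with open("%s/%s.md" % (out_dir, slug), "w") as f:
            f.write("Title: %s\n" % title)
            f.write("Date: %s\n\n" % when.strftime("%Y-%m-%d %H:%M"))
            f.write(body)
    conn.close()
```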
<p>Pelican is available via <a href="https://github.com/ametaireau/pelicanx">git</a> or via pip. Deployment is very similar to before,
with a site configuration file and cronjob. I've also decided to place the
site in revision control, to ease authoring, deployment and automation. Perhaps
I'll set up a post commit hook for automated regeneration?</p>
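<p>That post-commit hook would only take a few lines. A sketch, with the paths
and the pelican invocation as placeholders for whatever the real deployment
uses:</p>

```python
"""Sketch of an SVN post-commit hook (hooks/post-commit) that rebuilds the
site after every commit. The paths here are hypothetical."""
import subprocess

WORKING_COPY = "/srv/blog"           # checkout of the content repository
OUTPUT_DIR = "/var/www/pwnguin.net"  # where the generated site is served

def regenerate():
    # Bring the working copy up to the revision just committed...
    subprocess.check_call(["svn", "update", WORKING_COPY])
    # ...then regenerate the static site from it.
    subprocess.check_call(
        ["pelican", WORKING_COPY + "/content", "-o", OUTPUT_DIR])

# The real hook script would simply call regenerate() at the bottom.
```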
<h3>Output templates</h3>
<p>Pelican's templates are implemented in <a href="http://jinja.pocoo.org/">Jinja2</a>, a very close relative
of Django's template system. Sandboxing really isn't a feature here, since
it's used once to generate the HTML. Jinja2 is unfortunately not as well
documented as Django, but that's a very high bar.</p>
<p>I haven't looked into changing the default templates much yet, but I'll do so
soon to tweak the front page. It also has many parameters for common
snippets like Google Analytics and Disqus.</p>
<p>Disqus is a very important part of the design. Given a static blog, you might
expect to not have comments, but Disqus provides a javascript interface to
their system. It's like a sidewiki that sites can opt into. The major advantages
are that I don't have to host comments, Disqus has a centralized profile to
detect spammers with, and the javascript nature means there's less incentive to
spam links in comments.</p>
<p>There's also a skribit widget, but I really have no idea what that's about.
Content suggestions I guess? Perhaps when I'm less busy.</p>
<h3>Web Design</h3>
<p>The default theme is pretty nice, but it's not a fluid layout.
I've placed it on my todo list to change that and to tweak the colors, but
for now the default is sufficient. </p>
<p>I do need to look into a template or something to protect email addresses
from scrapers. There was a nice rot13 trick I used previously that Markdown
alone won't offer.</p>64-bit Really Does Matter2011-02-04T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2011-02-04:64-bit-really-does-matter.html<p>In a stunning display of confidence, <a href="http://blog.pault.ag/post/3107062816/why-64-bit-computing-is-really-dumb-right-now">Paul Tagliamonte</a> stops just short of calling for dropping 64bit
Ubuntu desktops. Because PAE will save the day, naturally.</p>
<p>By day, I am a Java developer. This means I know the pains of memory constraints all too well. I recently
upgraded my desktop, as Paul suggests developers do, to 64bit in order to accommodate the various RAM
hungry IDEs and VirtualBox VMs we employ. And at my previous job, I helped guide the plan of action when
the Blackboard Angel app servers ran out of RAM. /3G (windows boxes) helped, but not enough. I told the
sysadmin that /PAE wouldn't help at all, and it didn't.</p>
<p>Why not? Because the truth is more complicated than a number of bits. At this moment, the highly popular
StarCraft 2 recommends 2GB of RAM. 4GB on OSX. World of Warcraft recommends the same. These are single
applications, geared towards users, reaching the upper limits of the <a href="http://kerneltrap.org/node/2450">default split</a> between RAM
storage and "other" uses. I can't say when commercial games will break 3 or 4 GB, but it's coming. That
milestone was a long way off in 2004, but certainly not outside hobbyist reaches shortly thereafter,
and has been a virtual requirement in the datacenter for some time now. Open source won't be far behind,
and in some ways, has been leading the charge.</p>
<p>The question is, what does PAE buy you? PAE gives you an extra level of indirection at the page table,
but you must specifically program the OS for it. In effect, you can expand individual page tables, at
a small CPU penalty. PAE is transparent to the application; as far as it's concerned there are no other
processes or pagetables. So 4GB remains the ceiling here. That's why PAE didn't help our IIS boxes —
once a single application wants more than that, the writing is on the wall. And that day is fast
approaching.</p>
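<p>The ceilings here fall straight out of address widths, which a couple lines of
Python make concrete:</p>

```python
GIB = 2 ** 30  # one GiB in bytes

# A 32-bit pointer can name 2**32 bytes: the 4 GiB per-process ceiling.
per_process_32 = 2 ** 32 // GIB

# PAE widens *physical* addresses to 36 bits (64 GiB of installable RAM),
# but each process still sees 32-bit pointers, so its ceiling is unchanged.
pae_physical = 2 ** 36 // GIB

# Current x86-64 chips expose 48 virtual bits: 256 TiB per process.
per_process_64 = 2 ** 48 // GIB
```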
<p>In effect, if your multitasking CPU is juggling processes, it can have multiple processes in the
air no bigger than 4GB. But, you ask, if the OS can use the extra space, can't we provide a system call
or workaround for the applications that want to grow beyond 4GB? Sure can. But nearly everything already
builds on 64bit, so users might as well leapfrog the whole PAE mess. 64bit is far better tested than PAE
deployments, and it's dramatically easier to build since compilers have a 64bit arch defined for you
already, and will help you spot bad code. There's no reason to "port" applications to PAE when we've
already got a full 64bit stack.</p>
<p>64bit mode has some downsides ("storing all those zeros"), but the simple fact is that we're going to
be growing out of it soon. PAE is simply insufficient and should not be relied upon.</p>
<p>Except on my laptop, which doesn't support the whole 64-bit fad.</p>SQLite + SchemaSpy2011-01-23T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2011-01-23:sqlite-schemaspy.html<p>A little over a year ago, I shared a neat technique to generate <a href="//www.pwnguin.net/generating-database-schema-with-sql-and-graphviz.html">DB diagrams
for databases,</a> with a focus on SQLite. At the time, SchemaSpy had much
better output but didn't have a good SQLite driver. Well, a recent commenter
asked if it was possible, and fortunately, that's recently changed. It's not
as easy as it could be, but it's at least more possible than before.</p>
<h3>Prep</h3>
<p>First, you'll need some compilers. Easy on Ubuntu: <code>apt-get install
openjdk-6-jdk gcc</code>.</p>
<p>Second, you'll need the development headers for SQLite. Again, on
Ubuntu: <code>apt-get install libsqlite3-dev</code>.</p>
<p>Next, download the <a href="http://www.ch-werner.de/javasqlite/">javasqlite.tar.gz</a> from Christian Werner. This package,
when built, provides a Java native interface for accessing SQLite
databases. It's a straightforward build assuming you've got all the dependencies.
I'm not clear on where to install this so java finds it or how to best set up
env vars to where SchemaSpy needs less help; I'm sure it's possible though.</p>
<p>Finally, download the <a href="http://sourceforge.net/projects/schemaspy/files/">latest SchemaSpy jar</a>.</p>
<h3>Usage</h3>
<p>The invocation is a little bit confusing. LD_LIBRARY_PATH seems required for
the JNI to work. The -dp is required for SchemaSpy to find the drivers.
There's probably a directory it could be installed to instead. -norows turns
off row counts, and -hq turns on high quality Graphviz output. I used the
following command line to diagram Banshee's database (YMMV):</p>
<blockquote>
<p>LD_LIBRARY_PATH=/usr/local/lib java -jar ~/tmp/schemaSpy_5.0.0.jar -t sqlite
-db ~/.config/banshee-1/banshee.db -o banshee -sso -dp
~/src/javasqlite-20110106/sqlite.jar -hq -norows</p>
</blockquote>
<p>The diagram output:</p>
<p><a href="//www.pwnguin.net/albums/photologue/photo/banshee-db-schema/"><img alt="schema diagram for Banshee database" src="//www.pwnguin.net/albums/media/photologue/photos/relationships_implied_compact.png" /></a></p>
<p>There's more to the output than just a diagram, like a clickmap to browse
individual table definitions, and warnings about <a href="http://www.sqlite.org/faq.html#q26">"Columns that are flagged as
both 'nullable' and 'must be unique'".</a></p>
<h3>Issues</h3>
<p>The SQLite driver isn't perfectly matched, and there's a number of warnings
generated. Firstly, there's rarely any documented key constraints, so
SchemaSpy has to infer relationships based on field and table names, and does
so poorly at times. Secondly, it fails to collect row counts, and gives you -1
instead if you ask it to try. Finally, it fails to determine autoincrement
status. Really, we should be thankful it works as well as it does, given the
SQLite design philosophy.</p>
<p>Well, hopefully you'll find a lot of opaque embedded databases just got a bit
easier to comprehend!</p>Goals for 20112011-01-17T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2011-01-17:goals-for-2011.html<p>I don't think I published or set many goals for 2010, but it was a pretty good
year. I landed a new job with KSU as a Java Developer, dramatically increased
my savings rate, improved my homepage and completed watching MIT's <a href="http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-002-circuits-and-electronics-spring-2007/">EE
introduction course</a>. Turns out Electrical Engineering is a lot simpler when
you don't have to do homework!</p>
<p>Since we're working on annual evaluations and goals at work, I thought it
might be a good idea to think about and publish personal not-work goals. My
todo-list has 36 items on it for 2011, so I'll share a few themes instead of
boring you (more than usual).</p>
<h3>Goals for 2011</h3>
<ol>
<li>
<p>Keep my website easily findable on the web, by remaining the top Google
result for my name.</p>
</li>
<li>
<p>Develop a second source of income to cover hosting expenses, by self
hosting my blog again. Analyze traffic and revenue to determine which topics I
should cover in greater detail, and develop them to where the majority of
google traffic comes from search terms other than #1.</p>
</li>
<li>
<p>Reinvigorate <a href="http://www.k-slug.org">K-SLUG</a> student involvement, by establishing events that
cater to student wants and needs.</p>
</li>
<li>
<p>Impress managers and developers by being great at my new job.</p>
</li>
<li>
<p>Overcome learned helplessness and start documenting and spending 15
minutes a day removing small annoyances in my life.</p>
</li>
</ol>
<p>Number 5 is off to a great start; I've started using CalDAV to document stuff and
schedule one thing a day to cross off the list. I've already fixed my SSHFS
automount, added google analytics to my photo gallery, fixed my old tmpfs
mount, and set up a cronjob to import ticker data into GNUCash.</p>
<p>I find that setting deadlines is much more effective than just sorting by
priorities. Partially because many high priority tasks also have starting
dates that Evolution won't let me filter by. But also because it keeps me
honest about how much work a line item really is, which helps me out a lot
because anxiety increases a lot on poorly specified work, which doesn't get
done.</p>
<p>There is a new kind of anxiety from never being "finished", but even though
there's more, smaller tasks this way, it's weaker and continues to wane as the
minor boosts in productivity add up. I suppose that's the trick to management:
imposing artificial deadlines in a world that's frequently missing them.</p>Limesurvey notes2010-12-19T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2010-12-19:limesurvey-notes.html<p>The local LUG group is small and shrinking, due to weak recruiting efforts and
a failure to strike a balance between student oriented and staff oriented
events. In fact, the website appears to be down today. I take some of the
blame here, but we can also blame Canonical for making Ubuntu too easy and too
popular -- the last installfest, the mainstay of LUG events, was basically
unattended save for some Wii gaming courtesy of the ACM student chapter.</p>
<p>In fact, pretty much the only thing that gets students' attention these days
is video games. So it's time to adapt. If students don't want or need
installfests, we should find out what they do want and work with that. With
that in mind, I decided I should prepare a survey for a games related event.
Coworkers have used SurveyMonkey in the past, but have been upset about the
bait-and-switch approach. In googling for SurveyMonkey alternatives, I came
across <a href="http://www.limesurvey.org/">Limesurvey</a> (formerly PHPSurvey). It appears to be a nice OSS app
that approaches the same features as SurveyMonkey.</p>
<p>Unfortunately, it's not yet packaged in Ubuntu, only <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=472802">Debian experimental</a>.
So the rest of this post shall be dedicated to a technique I've adopted for
coping with this. Particularly, the svn:externals property, which allows for
nested checkouts.</p>
<h3>svn:externals</h3>
<p>A few months ago, I finally took the plunge and set up <a href="http://joey.kitenet.net/svnhome/">svnhome</a>.
Technically, git is all the rage these days, but SVN should import into git
without loss of information at a later date, and it's professionally relevant
right now for me to dig into advanced SVN features. This does involve a lot of
svn:ignore properties, and at first I was really trigger happy with the
ignore. Originally, I ignored my src/ directory, but since they're mostly SVN
checkouts, I might as well use svn:externals, to document what I've got and
where it came from.</p>
<p>What makes this a bit complicated is that I want to combine the LimeSurvey SVN
checkout with the debian packaging that's also in an SVN collab-maint branch.
That, and they're both external to my homedir structure. Originally I had
chained the externals, marking LimeSurvey external in the homedir repo, and
the debian/ directory external to the limesurvey checkout. The problem is I
don't have commit rights to the LimeSurvey repo, so I can't commit an
svn:external in one computer and use svn update to duplicate the setup
elsewhere. Some might propose that I should just svn export LimeSurvey, and
merge it into my homedir repo (or not at all). I don't like it because it
makes it harder to send patches upstream and pull patches from them.</p>
<p>The solution I've settled on is to define the debian checkout from the src/
dir:</p>
<blockquote>
<p>limesurvey
https://limesurvey.svn.sourceforge.net/svnroot/limesurvey/source/limesurvey</p>
<p>limesurveydebian svn://svn.debian.org/svn/collab-maint/deb-maint/limesurvey/trunk/debian</p>
</blockquote>
<p>From the limesurvey checkout point of view, this does leave debian/ as
unversioned, so I'll have to think harder about it or live with the
question mark. Perhaps bzr has a clean way to integrate all this.</p>
<p>But for now, I move on to building and installing. One minor challenge is that
apt doesn't have a package to grab build depends from, so I have to do that
manually. And LimeSurvey pulls in a LOT of libraries, some of which are
packaged already, some of which are not. At least a few are bundled in other
apps like PHPMyAdmin, so that'll be something to look at for collaborative
improvement. And I should probably design and run that survey to start
rekindling the LUG!</p>The unexamined life2010-12-05T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2010-12-05:the-unexamined-life.html<p>Not too long ago, I <a href="//www.pwnguin.net/time-management-for-system-administrators-review.html">I read a book on Time Management</a>. Well, a month into
my new job, I think the routine has settled down enough to take another look.
The transition from part-time to full time only took 10 hours out of my week,
but feels like more. So this past week, I decided to keep track of where my
time was going.</p>
<p>There's a lot of potential sources for this. I could self track, or refer to
the many ways I generate timestamped data in life. <a href="https://services.mozilla.com/">Browser history</a>, for
example. The Wii also keeps a calendar of play time, though it discards some
useful information like timestamps. I also installed <a href="http://darcs.nomeata.de/arbtt/doc/users_guide/">arbtt</a>, but haven't
looked into how to use it much. But it's dutifully recording! There's also IRC
logs to parse and in the future I might also turn the phone into a location
logging device, but haven't yet. So for the most part, I planned out the week in
advance with Evolution's calendar and adjusted as the week went by to keep
track of time spent, in 15 minute increments.</p>
<h3>The results</h3>
<p>I won't be sharing the gory details of where and when I was doing things with
the internet (Sorry 4square!). But it's worth writing down a few things:</p>
<ul>
<li>
<p>Work: 40.25 hours. We have a separate time-tracking tool at work that
this analysis might help me populate, once I dig into projects.</p>
</li>
<li>
<p>Sleep. Nothing unusual or unhealthy here. A bit surprised because I
normally sleep quite late in the absence of an alarm. I think the morning road
noise and south facing bedroom window contribute a bit here.</p>
</li>
<li>
<p>Podcasts. I probably listen to too many money-related podcasts:
Marketplace, Marketplace Money, Planet Money, Freakonomics Radio, the
Economist, and Econtalk. In total, I listened to 10 hours of podcasts. Some of
that was multitasking but most wasn't.</p>
</li>
<li>
<p>TV. When planning, I decided to cut out The Daily Show/The Colbert Report,
in favor of finishing <a href="http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-002-circuits-and-electronics-spring-2007/">6.002</a> lectures. And since House was off until next
year, that was a nice hour gained. What's left is so spare it's almost worth
just cutting off cable altogether. I did use a DVR to record things and skip
commercials.</p>
</li>
<li>
<p>Video Games. Feels like <strong>a looooot</strong>, but the Wii records and Evolution
indicate it was only 15 hours. A lot of that was today, in a last ditch effort
to finish a Mario game.</p>
</li>
<li>
<p>Internet time. A lot of this was reading and writing AskMeFi
answers, which take way longer than I ever anticipate when I start them. But I
did get about a 40 percent "best answer" rate this week, so there's at least
that!</p>
</li>
</ul>
<p>The rest was mostly dining, and waste like commuting and such. And adjusting
the calendar and writing this post. In case you didn't think the above was
sufficiently neurotic, I did spend a bit of time on my monthly budget review.</p>
<h3>Conclusions</h3>
<p>With this baseline established, I can now look at tweaking things. I don't
spend much time multitasking podcasts with anything but web browsing, so
that's a possible time saver. I could save an hour by <a href="http://twelveblackcodemonkeys.com/2006/05/03/fastspokenword/">compressing
podcasts</a> 10 percent. Just as effective though, would be to ditch some of
the econ podcasts. Probably Marketplace Money, because it's long and contains
a lot of reruns. And maybe Freakonomics Radio, if its contents are going to be
embedded in another podcast.</p>
<p>The Stewart/Colbert combo is good but time intensive, and just planning alone
saved me 4 hours a week by cutting them out.</p>
<p>While I got arbtt running, I haven't taken the time to effectively query it,
and the default report is a bit confusing. And I oughta figure out how
Mozilla's history is stored so I can automatically chart long browsing
sessions.</p>
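<p>For the record, Mozilla's history lives in a SQLite file (places.sqlite) in
the profile directory, with visit timestamps in microseconds since the epoch. A
sketch of a per-day visit count, with the caveat that the schema details here
are from memory and vary by Firefox version:</p>

```python
import sqlite3

def visits_per_day(places_db):
    """Count page visits per day from Firefox's places.sqlite.

    Assumes moz_historyvisits.visit_date holds microseconds since the
    epoch; double-check against your Firefox version's schema.
    """
    conn = sqlite3.connect(places_db)
    rows = conn.execute(
        "SELECT date(visit_date / 1000000, 'unixepoch') AS day, COUNT(*) "
        "FROM moz_historyvisits GROUP BY day ORDER BY day")
    counts = dict(rows)
    conn.close()
    return counts
```

<p>From there, charting long sessions is a matter of bucketing gaps between
visits instead of days.</p>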
<p>And of course, I need to get out more. Maybe when it's warmer.</p>One Week In2010-11-02T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2010-11-02:one-week-in.html<p>On Monday I started my new job as a developer for KSU's <a href="http://ome.ksu.edu/">OME</a>, in
particular working on Identity Management. It's been a week now, long enough
to fill out the paperwork, set up my workspace and start reading
documentation, but not long enough to really start coding.</p>
<p>The technical details: The IDM group has two major systems, KEAS and PDB.
I haven't gotten through all the documentation, and I'll probably end up
having to write some of it, but the purpose seems to be to augment LDAP with
authorization, and to unify several systems with different identifiers (unique
names). There's also a lot of web frontends to these systems, primarily based
on the Spring framework, and some webservices like CAS. Also, the place is a
big JavaEE shop, and familiarizing myself with all the damn acronyms and
technologies and APIs is going to be a pain. The Director of OME sounded a bit
like he wanted to diversify; especially since the KSU CS dept moved from Java
to Python, there may be some strategic moves coming. Overall there seems to be a lot of
inertia to overcome for any change; even one as simple as moving from
<a href="http://www.nongnu.org/cvs/">CVS</a> to <a href="http://subversion.apache.org/">SVN</a> requires rewriting the integration tools with a custom
defect tracker.</p>
<p>I did have the opportunity to attend a meeting on the second day that was
unusually tense. Seems a late project is going to be later, and none of the
options are available. The writing on the wall says a lot of <a href="http://en.wikipedia.org/wiki/Not_Invented_Here">NIH era</a>
software is going to be replaced with open source stuff, which is nice.
There's also not much automated testing, but I probably need to observe the QA
team a bit before I could make an informed recommendation.</p>
<p>Finally, I found a place to live close to campus, in <a href="http://www.firstmanagementinc.com/properties/founders_hill/index.html">Founder's Hill</a>.
There was a bit of a ladybug problem but it seems to have cleared up now with
the frost and a maintenance request to properly weather seal the patio door.
The living room window overlooks a small pond, with frogs and fish and a flock
of birds. The birds are quite noisy long past sunset, thanks to some
artificial lighting.</p>New job; apartment hunting2010-10-04T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2010-10-04:new-job-apartment-hunting.html<p>Picked up a job with KSU starting next month. More details later. But first, I
need to line up a place to live. I'm pursuing a number of options, but let's
start with the tactic that landed me a sweet place to live last time I moved
out to Manhattan: anyone need a roommate? Or know someone cool who does?</p>Adios, Gamerfeed2010-08-16T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2010-08-16:adios-gamerfeed.html<p>I've been seeking out feeds to sync to my front page, in order to keep the
front page interesting, updated, and sync'd. One such interesting feed I tried
was my Xbox 360 gamertag. The idea was to update every time I had a new game.
Instead, it's been spamming every day I play, which is pretty damn annoying.
I've left it on, with the theory that I'd locate a better feed or fix it
somehow. Well, I'm declaring todo list bankruptcy on that one. Fixing it may
take quite a while, and the value is just too low to bother with.</p>
<p>Maybe once I finish my home inventory and valuation project I'll tweak it
to list new game purchases in XML. Until then, you'll just have to ask or
check <a href="http://live.xbox.com/en-US/profile/profile.aspx?pp=0&GamerTag=WildPwnguin">Microsoft's web site</a>.</p>The One True OpenID configuration2010-08-10T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2010-08-10:the-one-true-openid-configuration.html<p>I like the OpenID concept, but by now it's clear that there are far more Issuing
Parties than Relying Parties. In layman's terms, more sites want you to use
your account with them elsewhere than will let you use an account from
somewhere else, which defeats the whole point in dramatic fashion.
<a href="http://weis2010.econinfosec.org/papers/session3/weis2010_bonneau.pdf">The Password Thicket: Technical and Market Failures in Human Authentication
on the Web</a> [<a href="http://www.schneier.com/blog/archives/2010/07/website_passwor_1.html">via</a> Bruce Schneier] examines the marketplace and offers
the following conclusion:</p>
<blockquote>
<p>In the meantime, this perspective supports the claim [<a href="http://techcrunch.com/2008/03/24/is-openid-being-exploited-by-the-big-internet-companies/">15</a>] that
deployment of an open, federated identity protocol such as OpenID will be
opposed by current stakeholders on the web. Federated login not only removes
sites’ pretext for collecting personal data but their ability to establish a
trusted relationship with users.</p>
</blockquote>
<p>This is only the most recent accusation, as the citation indicates. And
there's plenty of economic incentive to refuse OpenID; you'll have no pretext
for asking for email addresses or Favorite Movie 'security questions' that
will help you sell targeted advertising. But there's another reason OpenID is
struggling in the marketplace that isn't mentioned: OpenID is <strong>hard</strong> for
users to deploy correctly. There is One True Way to configure your OpenID
safely, which I will document for myself and posterity:</p>
<ol>
<li>
<p>Buy a domain name. This domain is your openID.</p>
</li>
<li>
<p>Find some hosting for this domain. Preferably with exclusive access so
only you can modify it.</p>
</li>
<li>
<p>Purchase and install an SSL certificate for your domain.</p>
</li>
<li>
<p>Locate or install an authentication system that supports OpenID.</p>
</li>
<li>
<p>On the page accessible on your domain, place an OpenID delegate relation
link to that authentication system.</p>
</li>
<li>
<p>Make sure both your OpenID URL and the delegate use HTTPS and are
invulnerable to the cornucopia of web attacks.</p>
</li>
<li>
<p>Cry as you realize that few of the OpenID providers you could have
delegated to will accept your OpenID.</p>
</li>
</ol>
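To make step 5 concrete, the delegation markup is just a pair of link relations in the head of the page served at your domain. A sketch, with placeholder provider URLs; the openid2 variants apply only if your provider speaks OpenID 2.0:

```html
<!-- Served from https://example-yourdomain.net/ (placeholder domain).
     OpenID 1.1 delegation link relations: -->
<link rel="openid.server"   href="https://openid.example-provider.com/server">
<link rel="openid.delegate" href="https://openid.example-provider.com/user/you">
<!-- OpenID 2.0 equivalents, if your provider supports them: -->
<link rel="openid2.provider" href="https://openid.example-provider.com/server">
<link rel="openid2.local_id" href="https://example-yourdomain.net/">
```

Per step 6, every URL in those attributes should be HTTPS, and the page serving them must be under your exclusive control.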
<p>Skip any of these steps and there is a world of pain waiting for you. And
there are plenty of smart people who <a href="http://www.reddit.com/r/programming/comments/vder/how_to_actually_use_openid/cvgdn">miss important points</a>. If you don't
buy a domain and just use a site like LiveJournal or Gmail directly, you're at
the mercy of your provider's implementation and longevity. If you need a real
example, provider Vidoop <a href="http://factoryjoe.com/blog/2009/06/05/the-fall-of-vidoop/">gave people a real scare</a>.</p>
<p>Reviewing the steps above, it's obvious why Verisign was an early partner with
OpenID; they had a lot to gain up until their expert jumped ship for Facebook.</p>Arduino Update2010-07-09T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2010-07-09:arduino-update.html<p>An alert (and anonymous) reader commented on a <a href="//www.pwnguin.net/why-isnt-arduino-in-debian-ubuntu.html">previous post</a> about
<a href="http://arduino.cc">Arduino</a> packaging, letting me know that <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=577249">packaging has been accepted into
Debian</a>—both the Java IDE and the underlying compiler toolchain. A big
thanks to Phil Hands and Scott Howard for their efforts!</p>
<p>This is a very new package, so patches are still rolling in. The
DebianImportFreeze has come and gone, but Scott appears to be on top of
things, having just <a href="https://bugs.launchpad.net/ubuntu/+source/arduino/+bug/603357">filed a sync request</a> a few hours ago to pull in a few
more changes. If he keeps that pace up, he'll make MOTU in no time ;)</p>Bruce Schneier deserved to lose2010-06-19T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2010-06-19:bruce-schneier-deserved-to-lose.html<p>NPR held and broadcast <a href="http://www.npr.org/templates/story/story.php?storyId=127861446">a debate</a> on the resolution, "The cyber war
threat has been grossly exaggerated," and invited a sophisticated panel to
debate for and against it. The format of Intelligence2 debates is two panels
of speakers competing to change the audience's opinion. So even if the audience
comes in with a massive majority opinion, the merit of the debaters is judged
by the change in opinion after the debate concludes.</p>
<p>NPR's debate organizers do a good job of recruiting people with experience on
the subject rather than experience with policy debate and the more <a href="http://en.wikipedia.org/wiki/Kritik">esoteric
philosophical techniques</a> used to "win". The team arguing that cyberwar is a bogeyman
consists of <a href="http://www.schneier.com/">Bruce Schneier</a> and <a href="http://en.wikipedia.org/wiki/Marc_Rotenberg">Marc Rotenberg</a>. The team arguing that
cyberwar is real and dangerous is <a href="http://en.wikipedia.org/wiki/John_Michael_McConnell">John M. "Mike" McConnell</a> and <a href="http://futureoftheinternet.org/blog">Jon
Zittrain</a>. I've actually heard of two of these people, which makes me feel
just a bit smarter. As I subscribe to Schneier's blog (squids or not), I was
rooting for his side to win. Alas, <strong>rather than persuading the undecided
audience, his team lost a few supporters to the other side</strong>, and lost by the
rules of the debate.</p>
<p>But having listened to it, it's pretty clear Schneier's team made several key
mistakes. Firstly, they let the other team decide what statements were under
scrutiny. If all your opponent has to do is not make exaggerated claims during
the debate to win, you'll lose pretty easily. You can see this principle in
action as Schneier tries to quote McConnell, only to have McConnell dismiss it
as out of context and a misquotation. Instead, they should have gone after
public figures and decision makers not present — McConnell isn't the only
politician or bureaucrat talking up cyberwar. I'm not about to go through
CSPAN transcripts, but surely Lieberman, who introduced a bill that I
understand would authorize the president to use an internet kill switch (and
effectively censor people), made an exaggerated claim to support that
broad-reaching power.</p>
<p>Meanwhile, Zittrain and McConnell pretty much offered a no contest argument.
They admitted that newspapers wrote exaggerated headlines, that the "cyberwar"
attacks against Georgia may have been self inflicted, and that the main risk
was not to our military but to high profile financial targets. By carefully
avoiding any sensational or <em>exaggerated claims</em> they gave the other team
nothing credible to point at as evidence. McConnell's main cyberwar threat
example was catastrophic data loss at US moneycenter banks who handle
trillions of dollars daily, and Zittrain's was the Youtube-BGP screwup.</p>
<p>In closing statements, Schneier and Rotenberg attempted to argue against the
policy that would emerge from a loss, effectively an appeal to heads in sand.
<strong>Instead of focusing on the negative policy outcomes, they should have
addressed the likelihood of the oppositions's two threats</strong> in closing
statements. I was never on a debate team but that seems like an obvious thing
to do! The banking argument is a classic confusion of consequences for risk
that Schneier is renowned for pointing out. The Lieberman bill even makes <a href="http://hsgac.senate.gov/public/index.cfm?FuseAction=Files.View&FileStore_id=4ee63497-ca5b-4a4b-9bba-04b7f4cb0123">the
distinction</a>:</p>
<blockquote>
<p>18) RISK.—The term "risk" means the potential for an unwanted outcome
resulting from an incident, as determined by the likelihood of the occurrence
of the incident and the associated consequences, including potential for an
adverse outcome, vulnerabilities, and consequences associated with an incident.</p>
</blockquote>
<p>As determined by <em>likelihood</em> and <em>adverse outcomes</em>. Consider a common
example, one my own website faces and which I think McConnell alluded to with
his "millions of attacks daily" comment: brute force SSH login attempts. They
happen quite frequently, but because they are easily <a href="http://www.fail2ban.org/wiki/index.php/Main_Page">guarded</a>
<a href="http://www.debian-administration.org/article/SSH_with_authentication_key_instead_of_password">against</a>, I argue the threat is low. With the proper safeguards in place,
what is the remaining likelihood of a brute force attack succeeding? Slim to
none, because I have no guessable common system accounts and the <a href="http://en.wikipedia.org/wiki/Key_size#Asymmetric_algorithm_key_lengths">keysize is
massive enough</a> to make such attacks infeasible. The consequences are high
but the likelihood is minuscule, so the risk is low. I'm much more worried
about keeping all my webapps patched than this SSH spam attack.</p>
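For reference, the "proper safeguards" amount to a couple of standard sshd_config directives, plus something like fail2ban watching the logs. These are real OpenSSH options, though the exact policy is a matter of taste:

```
# /etc/ssh/sshd_config -- the relevant hardening lines
PasswordAuthentication no   # key-only logins; nothing to brute force
PermitRootLogin no          # remove the one account every scanner guesses
```

With passwords disabled entirely, the daily login spam becomes noise rather than threat.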
<p>So what is the likelihood of the threats presented? The odds of a major bank
losing all customer data are pretty slim, I'd say. They have incremental backups and
transaction logs and firewalls, and redundant file systems and offsite
backups, and <a href="http://en.wikipedia.org/wiki/Disaster_recovery">things I'm missing</a> because I haven't spent a lifetime
working for the financial sector. It would take more than the "two weeks"
Zittrain suggests for a crack tiger team to construct a plan to completely
wipe out customer records in seconds. Cyberwar could still do some damage,
but, importantly, no more than they experience and plan for daily. Computers
today are failure prone, even without script kiddies and trained military
strikes, so private firms have all kinds of insurance, countermeasures, and
recovery plans in place. The consequence might be 7 trillion dollars, but the
risk of complete loss of records is minuscule, so the threat is small and
therefore exaggerated. Unregulated credit default swap markets present a
greater risk to banks than this.</p>
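The arithmetic behind that claim is worth making explicit. A toy calculation, with likelihoods that are pure illustration rather than estimates from the debate:

```python
def risk(likelihood, consequence):
    """Risk as expected loss: the likelihood of an incident times its cost."""
    return likelihood * consequence

# Made-up numbers, purely for illustration.
bank_wipe = risk(likelihood=1e-12, consequence=7e12)   # total record loss
webapp_hole = risk(likelihood=1e-2, consequence=1e5)   # routine hacking/fraud

# A seven-trillion-dollar consequence still yields a tiny expected loss
# when the likelihood is vanishingly small; the mundane threat dominates.
print(webapp_hole > bank_wipe)
```

The point isn't the particular numbers; it's that multiplying by likelihood is what separates risk from consequence.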
<p>Meanwhile the Youtube-BGP attack was resolved <a href="http://www.ripe.net/news/study-youtube-hijacking.html">within two hours</a>, and
early warning systems (more like fast after-the-fact alerts, really) are in
place to watch for bogus announcements. So the outages from a network routing
attack can be resolved relatively quickly. When the beer-passing brigade fails
en masse, the companies that profit from it figure it out, and quickly. The
internet was built to be resilient to attack, and Zittrain even admitted that
ad hoc networks were a sensible approach to a destabilized internet.</p>
<p>In conclusion, I think the cyberwar threat is overstated, but the weak case
Schneier and Rotenberg presented at the debate was sufficient cause for them
to lose both the debate and the majority consensus. This doesn't mean we should
arm the president with a kill switch or ignore the dangers of fraud and
hacking, but we should prioritize based on risk rather than consequence, and
the risk is far greater elsewhere.</p>On synchronization and "the cloud"2010-06-09T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2010-06-09:on-synchronization-and-the-cloud.html<p>I've been meaning to do a followup post on N900/Maemo5, but every time I
summon the effort to start drafting the post, a new firmware comes out to
address problems motivating the post. Case in point: the most recent firmware
(PR1.2) finally added the USSD code support one needs to query T-Mobile about
money left on prepaid accounts (a fantastic money saver for me).</p>
<p>But one thing that has become readily apparent is why people are so excited
about "the cloud". Roughly speaking, the non-technical and <a href="https://sites.google.com/site/traininginthecloud/">semi-technical
people</a> who use that term mean something other than <em>elastic computing</em>. As
best I can tell, what they're after is an <em>online datastore / network API
that's on 24/7</em>. Which can be neat, but isn't the revolutionary thing you
might imagine based on the rhetoric.</p>
<p>Anyways, about that neatness: after a few months of smartphone ownership it
became apparent that I need some way to deal with the fact that all of my
devices create and store new data sources, but none of them are on and
accessible 24/7. Solutions like <a href="http://www.cis.upenn.edu/~bcpierce/unison/">unison</a> remain academic because they are
neither automated nor available 24/7. Essentially, it seems <strong>people are
rediscovering the value of mainframe computing</strong>, but with redundancy and
graceful degradation to offline mode. Maemo5 meets this concept to varying
degrees of success.</p>
<h3>Music</h3>
<p>Synchronizing music libraries is relatively simple, and even Banshee supports
it. Simple file sync tools like unison are also acceptable to me because,
generally speaking, I don't add much music very fast and changes aren't needed
immediately. But there's a side problem here: hidden data. Music ratings are a
great tool I use to create shuffle playlists of minimally acceptable music.
Because the MP3 format does not include this admittedly subjective rating
field, it's not sufficient to sync files to get this data. It's become clear
that <strong>the tight integration iTunes offers by sharing metadata between desktop
and device is pretty damn useful</strong>. Granted, they're impossible to get out,
and people sometimes accidentally wipe their ratings stored on iPods, so
perhaps the grass is just greener on the other side of the electric fence.
Perhaps I should see if something like Last.fm / Libre.fm can play a role in
collecting and syncing this data.</p>
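For the file half of the problem, a unison profile is only a few lines. The paths below are hypothetical, and note that this syncs the files only, not the ratings metadata discussed above:

```
# ~/.unison/music.prf -- hypothetical desktop-to-N900 music sync
root = /home/justin/Music
root = ssh://n900//home/user/MyDocs/Music
batch = true       # don't prompt; take the obvious action
prefer = newer     # on conflict, keep the most recent copy
```

Running "unison music" from cron would then cover the automation gap, so long as both machines happen to be on.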
<p>Maemo5's media player doesn't provide a rating concept, so there's nowhere for
this data to be put on the device for smart playlists. I hear that Meego took
a great step in the right direction by selecting Banshee as the media player,
so I hope the group takes time to address Banshee-to-Banshee settings sync, to
meet or beat iTunes.</p>
<h3>Photography</h3>
<p>Maemo does an outstanding job here with a great idea. The unit of software
deployment on smartphones is generally the app. Maemo's Sharing plugin system
does a superlative job of modularizing the upload process such that you don't
need a separate uploader app for each hosting provider. There is a gallery2
plugin, which works well for me, and there's no shortage of other plugins if
you prefer not to self host. This deserves an A++ and I hope Meego keeps this
tradition.</p>
<h3>Bookmarks / History</h3>
<p>Upon seeing that Mozilla was building their mobile Fennec browser for N900, I
decided to check out <a href="//www.pwnguin.net/weave-synchronization.html">Mozilla Weave</a>. It's a great replacement for an old
tool that was popular and canceled before the term "cloud" was cool: Google
Browser Sync. They've since rebranded it Mozilla Sync, which maybe indicates
more clearly what the product does. Many people use Xmarks, but Sync also
handles history, preferences, even tabs. Mozilla offers a central service, but
I prefer to use the minimal standalone system that I can host myself. This is
probably the ideal -- people who want privacy have a number of steps they can
take, while there's still an easy-to-use complement. Interestingly, Meego has
chosen Chrome as its default browser, and the state of the art in cross-browser
sync is bookmarks only (I can't imagine sharing preferences between browsers
being useful).</p>
<h3>Calendar & Organizer</h3>
<p>I've lately been experimenting with calendars and todo lists at work. For
example, I have "package weave-minimal for Ubuntu" as a todo item. At work we
have Exchange and Outlook. At home I have Evolution, which integrates with the
desktop in some neat ways I'd like to keep. Surprisingly, Maemo supports
Exchange very well. Calendars / alarms, todos, notes, etc. all come across
fine, and PR1.2 even generates responses to meeting invites. Ideally, I'd keep
work and personal data separate and let the phone unify them for presentation,
but I've yet to find a way to do that -- CalDAV support is not in Maemo. The
good news is that I've seen a few emails from Nokia developers that suggest
CalDAV might work in Meego. Guess it's time to set up a personal CalDAV server
and point Evolution at it.</p>
<p>As far as contacts go, I generally just centralize them on the SIM card and
leave them there, but I think Ovi has a system in place, and <a href="http://wiki.maemo.org/Sync">syncML</a> has
rough support.</p>
<h3>Conclusions</h3>
<p>I don't know why this form of cloud computing is popular now, when many of the
same problems and solutions have been around since laptops and wifi. Perhaps
the pocketability and utility of cellphones cancels out the "nerd factor"
associated with carrying around laptops, so that people now run into these
problems daily rather than just during offsite business meetings. Either way,
there's plenty of technology to support private cloud systems; I use gallery2,
weave and (soon) CalDAV privately to synchronize my computers.</p>
<p>Frankly, the greatest remaining challenge I have left is storage. In contrast
with <a href="http://jeremy.zawodny.com/blog/archives/007624.html">jzawodny</a>, S3 doesn't even come close to making economic sense at
personal scales of a couple TB, and Dropbox is even pricier for less storage.
For now I'll just accept the availability risk of residential networking
and save the money.</p>Time Management for System Administrators review2010-04-30T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2010-04-30:time-management-for-system-administrators-review.html<p>After <a href="//www.pwnguin.net/coders-at-work-book-review.html">Coders At Work</a>, I needed to decide what next to read in my
professional sphere. I've been slowly collecting rainy day ideas and ToDos in
a text file, and moved them into Evolution when that got long enough that I
needed summary views to hide notes for each item. Even then, it became
overwhelming again. A personal website to improve, video games to play, books
to read, software to install, things to blog about, programs to write,
investments to research. Meanwhile, the workload at the college is only growing,
so reading a book on time management seems apropos.</p>
<p>There's lots of books on time management, but the system administrator role
brings together a unique combination of managerial meetings, discretionary
time and on-demand customer service. We're also members of a rare group of
people for whom on-call pagers are a nearly universal term of employment. And
for reasons I'll never understand, IT professionals are rarely managed by more
experienced IT professionals, resulting in scenarios like <a href="http://ask.metafilter.com/151380/Youre-harshing-my-flow-manager">this</a>. When I
look around the office, there's few people I can look to for advice on the
subject of time management. Some people stick in the office till 7PM, and
others let unproductive committee meetings dominate their work day. But I
recall a mantra: "to change the world, first change yourself." So I
decided to pick up <a href="http://amzn.to/2eiqo3H">Time Management for System Administrators</a> and
give it a shot.</p>
<h3>About the Author</h3>
<p>The author, Tom Limoncelli, is an accomplished system administrator, having
worked for Lucent, Lumeta and now Google. He's published through the <a href="http://en.wikipedia.org/wiki/Large_Installation_System_Administration_Conference">LISA
conference</a> numerous times and serves on this year's program committee.
He's also a co-blogger on <a href="http://everythingsysadmin.com">Everything Sysadmin</a>, created in part to promote
books they've coauthored, such as "Practice of Network and System
Administration," a fairly comprehensive book on the subject. Clearly, the guy
is qualified as both a sysadmin and a busy one at that.</p>
<h3>Time Management for System Administrators</h3>
<p>Limoncelli knows your time is valuable, so TMfSA is a thin book, a svelte 194
pages. But do not be fooled. This is not a book you can read once and put on a
shelf to collect dust. You can get something out of it, of course, but it's no
different than a diet book, <strong>you have to be prepared to take action to truly
have gained</strong> from this book. Multiple times he admonishes the reader to start
practicing his approach right now, because <em>getting started</em> is the hardest,
most valuable advice. The only way to encourage readers to start now any
harder would be to include stationery to practice with, which Franklin Covey
already does annoyingly well.</p>
<p>The central system is what Limoncelli calls "The Cycle System." In a
nutshell, his advice is to record everything, and organize by day. Record
every task to be done, prioritize and estimate the time it takes. If your day
is overbooked, he offers coping mechanisms to make up the difference:
Delegate, shorten the task, break the task up into smaller subtasks, delay
meetings. Let people know when high priority tasks aren't getting done on time
and come up with contingency plans.</p>
<p>Like any modern time productivity book, TMfSA addresses the email time sink.
It advocates an inbox zero approach, heavy on automating the filing, filtering
and processing. It's not as dogmatic as <em>Getting Things Done</em> or <em>Inbox Zero</em>
but does name drop the former in the Epilogue. TMfSA's main advice appears to
be to touch all mail once and skip archiving.</p>
<p>The "Cycle System" is generic enough that I wouldn't recommend this book if
that's all it contained. But the details count for a lot. He points out that
programmers and sysadmins work best in a state of '<a href="http://en.wikipedia.org/wiki/Flow_%28psychology%29">flow</a>', and that a good
workplace is organized to maximize flow and minimize interruptions. He
suggests that teams can organize a support rotation for a few hours a day so
the rest of the team can utilize large blocks of uninterrupted thought. New
email checks can be done every 3 hours rather than every 3 minutes, and you
can have a scheduled 'on call' person to shield the rest of the team from
interruptions. Since it's easier to concentrate when it's quiet, don't
squander that opportunity on replying to email.</p>
<p>Finally, it gets into the stuff that's dramatically different for IT than
other kinds of employees. There's some advice on running effective meetings,
the eternal bane of techies. There's some advice on specialization. An example
gets the point across: a lawn service can justify expensive mowing equipment
that improves productivity because they'll use it a lot more over a season.
<strong>If 1 hour of your time is worth more than 20 minutes of theirs, hiring a
service becomes a no-brainer</strong>. TMfSA covers a few such no-brainers in the
sysadmin workplace. It also covers when to document, when to automate and when
to outsource, and presents a clever use of make to automate system deployment.</p>
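The book's actual recipe isn't reproduced here, but the general shape of the trick, treating deployed files as make targets that depend on master copies kept under version control, might look like this (file paths and service names are hypothetical):

```make
# Hypothetical sketch of config deployment via make, not the book's recipe.
# Each deployed file is reinstalled (and its service kicked) only when the
# master copy in the repository checkout changes.

all: /etc/ntp.conf /etc/exports

/etc/ntp.conf: masters/ntp.conf
	install -m 0644 $< $@
	service ntp restart

/etc/exports: masters/exports
	install -m 0644 $< $@
	exportfs -ra
```

The appeal is that make's timestamp logic gives you idempotent deployment for free: run it twice and the second run does nothing.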
<h3>Takeaways</h3>
<p>The book was interesting and certainly gave me ideas on how to approach time
management; however, there are a lot of places where I choose to deviate. The author
spends a lot of time addressing a paper oriented system, but smartphones have
gotten to the point where most sysadmins have a phone that performs as well as
or better than the old Palm PDAs. The author makes a point about a
<em>centralized calendar</em> but all that's really called for is a <em>unified
calendar</em>. For example, a unified view of your work calendar on Exchange and a
personal calendar on CalDAV or Google. My phone supports local calendars
and, oddly, Exchange but not CalDAV. I really hope the next firmware release of
Maemo adds CalDAV because it would solve this problem neatly. Failing that, I
really think it's something for Meego to look at.</p>
<p>I also dislike the idea of deleting email. There's a lot of valuable historic
information that saves my bacon on occasion, and anticipating this on the spot
is a hard task. Keeping your inbox empty adds a "keep or delete" decision that
I just skip. Limoncelli advocates the opposite approach primarily for a
technical reason: clients and servers cope poorly with huge mailboxes; my
experience with Gmail suggests the problem is solvable rather than inherent. I
did agree with his point about the price of email interruptions and I've tuned
mail popups to specific high importance mail only, and set up better
filters—now patchmail goes into a folder I review once a week while preparing
change tickets.</p>
<p>As I said before, reading this book is less fruitful if you don't practice the
advice within. I find The Cycle System performs fine at work. It does less
well at home, where there are rarely ultra-high-priority tasks and little
motivating distinction between "soon" and "eventually". I think a <a href="http://en.wikipedia.org/wiki/Least_slack_time_scheduling">slack time
based prioritization</a> would help with tasks that have deadlines, as would a
CalDAV app for my phone. For personal projects and hobbies, I've decided to
take a page from my university days and build a weekly schedule to fit
everything into. I've even blocked off some time every week to contribute more
to Ubuntu, now that my webserver is at a point where it's running the latest
stable and has a test disk image. No more patching Ubuntu packages locally!</p>
<p>The technical content has become less relevant over time. It may have been
best to not discuss PDAs and smartphones, as many things have changed since
2006. Palm is no more, and PalmOS is all but vanished. Wikis are popular
enough that pages are probably better spent on effective wiki permissions than
syntax--too many enterprise wiki systems deny by default and interfere with
the purpose of wikis. The make trick is clever, but cfengine would have been
more appropriate, if cfengine were less confusing for readers. Since
publication, <a href="http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software">Configuration Management Tools</a> have flourished into popular
and less confusing systems. I suppose make does feature a high
reward:investment ratio you can use as a reference point on your way to puppet
or chef or cfengine etc.</p>
<p>Overall, the book is a useful way to begin building your own time management
approach. Although the book's sysadmin voice feels forced at times with the
inclusion of UserFriendly clips, if you're a UNIX system administrator this
book is a great way to jump-start your own routine.</p>Coders at Work book review2010-03-28T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2010-03-28:coders-at-work-book-review.html<p>I finished up <a href="http://amzn.to/2ejUe9r">Coders at Work</a> today, having received it from the
<a href="http://www.jocolibrary.org/">local library</a> only recently (I need to figure out what magical data
source people use to put these books on reserve so early!)</p>
<p><em>Coders At Work</em> is a series of interviews with programmers, people with
successful backgrounds and some name recognition (to the extent people
recognize programmers). Each interview is roughly 40 pages, and asks a
standard set of questions in an attempt to document diversity and/or
consensus. Most of the coders interviewed are older, both for the obvious
reason that public acknowledgement of success and experience takes time, and
to cater to a celebrity history market.</p>
<p>The only guy who's close to my age is Brad Fitzpatrick, who shares a bit of
culture with me; writing TI Basic games on long road trips and losing them due
to battery failure. His story of rolling in banner-ad money is one I've
heard before. Probably the most interesting part of his entrepreneurship is
how open source allows him to sell a website/company and then build new sites
using the GPL'd assets he wrote when building the site up. Judging by his
anecdote about selling FreeVote on the cheap just to be done with it, I wonder
if something similar happened with LiveJournal.</p>
<p>The author has a bit of an obsession with Knuth and his series of books, <em>The
Art Of Computer Programming</em>. As best I can tell, Knuth doesn't invent
anything but instead compiles published research into the above book. Since we
keep graduating new PhDs but still have just the one Knuth, the series is
unfinished and <em>unfinishable</em>. Annoyingly, many algorithms are named after him
that he merely popularized, rather than invented. I suppose it's a worthy task
to cut the jargon out of conference papers, as they can be really quite
excruciating. The author's overweighting of Knuth comes in the form of asking
every interviewee whether they've read Knuth's books and done any <a href="http://en.wikipedia.org/wiki/Literate_programming">literate
programming</a>, leading up to a finale interview with Knuth himself.</p>
<p>Despite the author's inclinations, there are some good interviews in there.
The Erlang author is interviewed, and after reading it I think I need to
invest some time with Erlang. It's too bad I'm currently experimenting with
Python/Django. It might be neat to do something web based with Erlang, but I'm
wary of anything that decides it's easier to write a new httpd than implement
an Apache module.</p>
<p>Some observations about the group interviewed: lots of compiler and language
people, many had early access to research university computers, or later in
computing history, programming jobs out of high school. (I don't even know how
you find that kind of work as a kid). Most of the people I recognized are
mainly famous for their non-coding activities; I'd wager more people have
looked at TAOCP than have used TeX. Overall, I'd say these "coders" slant
academic.</p>
<p><em>Coders at Work</em> is a pretty good read. You can easily read just the
interviews that interest you and not miss anything for it. If you work in the
field, consider picking it up; it makes for good nighttime reading material.</p>(Old) Homepage Design2010-03-12T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2010-03-12:old-homepage-design.html<p>I've decided my old homepage was bad enough to revisit now that I've got a bit
more content hosted deep within it. I replaced my crappy hand written HTML
with tools written this decade, and threw in some amateur visual design.</p>
<h3>The software</h3>
<p>Firstly, in order to keep the webpage fresh with little effort, I've chosen
RSS aggregation as the method of content generation. Since I know Ubuntu and
Debian both use <a href="http://www.planetplanet.org/">Planet</a>, that's where I first looked. But it seems Planet
2.0 is aging, and the fork <a href="http://intertwingly.net/code/venus/">Planet Venus</a> brings some neat new options. It
expands the selection of templates, adds a configurable RSS filter step, and
makes the normalization step configurable.</p>
<p>It's also packaged in Ubuntu as planet-venus, making it fairly simple to set
up. Deployment was a little tricky, as the package leaves most of the site
<a href="http://intertwingly.net/code/venus/docs/config.html">configuration</a> to the admin. You'll need a config.ini (I used
/etc/planet/planet.ini), a template dir (/usr/local/share/planet-venus/theme),
a cache dir (/var/cache/planet) and an output dir (somewhere in /var/www
typically). Finally, you'll need to set up a cron job to run the static output
generation script regularly. The script reads all the feeds and parameters in
config.ini, caches the results to save bandwidth on subsequent runs, passes
them to the template engine, and places the final product in the output dir.</p>
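<p>To make the moving parts concrete, here's a sketch of a minimal config.ini. The section and option names follow the Venus configuration docs linked above, but every value (paths, feed URL, template name) is a hypothetical placeholder, and the cron line in the comment is equally a guess at a local setup. Python's configparser can at least sanity-check the file:</p>

```python
# Sketch of a minimal Planet Venus config.ini -- section and option names
# follow the Venus configuration docs, but every value below (paths,
# feed URL, template names) is a hypothetical placeholder.
import configparser

PLANET_INI = """\
[Planet]
name = pwnguin.net
link = http://www.pwnguin.net/
cache_directory = /var/cache/planet
output_dir = /var/www/planet
template_files = index.html.tmpl

[http://www.pwnguin.net/feeds/rss/]
name = Blog
"""

config = configparser.ConfigParser()
config.read_string(PLANET_INI)

# A quick sanity check before handing the file to the cron job, e.g.
# */30 * * * * planet /etc/planet/planet.ini   (command name may differ)
for option in ("cache_directory", "output_dir", "template_files"):
    assert config.has_option("Planet", option), option
print(config.get("Planet", "output_dir"))  # -> /var/www/planet
```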
<p>When building a lifestream style site, you have to be picky about the kinds of
feeds you put in or it gets Facebook / Twitter style spammy. This is where the
RSS filter step can help; Planet Venus comes with a few filters like
'notweets', and a few stripAds filters to cleanse ads before republishing.
It's the same design pattern I talked about before <a href="//www.pwnguin.net/fun-and-profit-with-liferea-conversion-filters.html">here</a> with Liferea. In
the future I could write one to add in comment feeds and then filter out
everything that fails to meet some strong quality criteria.</p>
<h3>Output templates</h3>
<p>Planet Venus's real selling point to me is using <a href="http://www.djangoproject.com/documentation/templates/">Django templates</a>. I've
been meaning to learn Django for a while now, and this is a pretty good way to
work with the templates portion of Django. And again, the filter pattern pops
up. Here, filters take python variables as input; in Planet Venus's setup you
have access to feed and item variables, as well as planetwide settings. One
example filter might be to simply <a href="http://docs.djangoproject.com/en/dev/ref/templates/builtins/#pluralize">pluralize</a> a word based on a variable
(yes, you can even handle 'y' pluralization). Another example is the
<a href="http://docs.djangoproject.com/en/dev/ref/templates/builtins/#urlize">urlize</a> filter that adds HTML anchor tags to likely URLs (not so great
when you already have anchor tags in the filter's input).</p>
<p>I also use templates to generate an RSS feed. Nothing difficult about it,
since the input to templates is basically an RSS feed to begin with. To reduce
the probability of bugs, I translated a provided example <a href="http://htmltmpl.sourceforge.net/">htmltmpl</a> RSS
template into Django, and it's much smaller and clearer to me. Unfortunately,
there's a bug in Planet Venus that prevents the use of multiple Django
templates. I've reported it upstream, and I'm sure I can fix it or work around
it.</p>
<h3>Web Design</h3>
<p>I also decided to take a look at CSS layout frameworks, to get up to speed on
the subject quickly. 960.gs is popular, but its 960 pixel width assumption
works poorly with the quirky resolutions found on massive monitors and
smartphones. Luckily, I found <a href="http://www.designinfluences.com/fluid960gs/">fluid960</a>, which is very similar, but
implements fluid layouts. It retains the CSS class names of 960.gs, so
tutorials and documentation on one translate fairly well to the other. Which
is good, because fluid960 pretty much relies on you already knowing regular
960 (I didn't). <a href="http://vimeo.com/7530607">This presentation</a> gives a good summary of things you
might want a CSS framework for, and this <a href="http://net.tutsplus.com/videos/screencasts/a-detailed-look-at-the-960-css-framework/"> 960 tutorial</a> covers what I
needed to know.</p>
<p>Color scheming is probably the hardest part for me. It's simple to pick a
color palette that goes together, but there is a higher level opportunity to
communicate something through visual design. I could choose a purple scheme to
reflect my collegiate experience, or an Ubuntu palette, but it seems
inappropriate for a personal site. I've got a bit of low level coding
experience, so I could go with a green on black terminal theme, but it's been
done to death ever since the Matrix, and it's basically impossible to beat
<a href="http://www.jwz.org/">jwz's</a> version.</p>
<p>Since I'm not really looking to break into web design, I went with a
relatively muted color scheme that organizes the content without distracting
from it. Truthfully it doesn't matter all that much, as experience shows the
majority of hits will come via RSS.</p>
<p>Well, that's basically all there is to my automated homepage system. On to
more important things, like setting up a calDAV server or a feed processing
tool.</p>Technorati re-enrollment2010-03-10T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2010-03-10:technorati-re-enrollment.html<p>W58GYGH7ENPK</p>Lucky!2010-01-08T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2010-01-08:lucky.html<p><a href="http://www.sparkfun.com/">SparkFun</a> held a "Free Day" on Jan 7th, giving away up to 100 dollars in
goods per order (to a max of $100,000). With a bit of planning, an employment
snow day and a lot of luck, I managed to snag one of the spots, and will
receive an Arduino and stuff. I'm in no hurry to receive it, but it should
make for a nice diversion. Well, <em>another</em> nice diversion; I've already got a
hackable phone, a website, and console games to finish. When the goods do
arrive, I suppose that will be a good time to investigate how easy this is out
of the box with Ubuntu.</p>
<p>Their <a href="http://www.sparkfun.com/commerce/news.php?id=322">postmortem</a> gave a few statistics on the event, but it's not clear
what exactly caused them to slow down so much. Personally, I figure they were
over-subscribed 100:1 and everyone knew it, causing people to flood the site
beyond an overly optimistic worst case scenario (f5 refresh galore!). Not to
mention the extra 75,000 visitors were likely all trying to do purchase
transactions simultaneously, which involves SSL overhead and substantial DB
writes. They didn't post the architecture or individual server stats you'd
need to determine whether it was CPU, IO or network bound. Some bitter parties
accuse them of intentionally crippling the site as a stunt to avoid an
anticlimactic instant sellout. Or that they failed to utilize any caching control
on server or client side.</p>
<p>Anyways, thanks to SparkFun for the toys, I'll be sure to put them to good
use.</p>n900 arrival and notes.2009-12-10T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2009-12-10:n900-arrival-and-notes.html<p>The phone arrived, and I'm recording a few notes on initial experiences,
covering mail, the display, the browser, available apps, battery life.</p>
<p><strong>Mail</strong>. "Mail for Exchange" worked fine, surprisingly. There are two popular
versions of Exchange, 2003 and 2007. Last I checked 2007 wasn't supported by
Ubuntu (this may have changed but I'm afraid to test it for fear of destroying
my inbox or the server itself). There's a 'peak hours' mode that would be
better called 'work hours' to control when it should poll for mail. There are a
lot of complaints about gMail support on the forums. I need to test that more; in
addition to IMAP there's a mobile version of the webapp and the full version
works to some degree.</p>
<p><strong>Brightness</strong>. Many displays are weak outdoors, due to the physics of LCDs.
Essentially, the two options available are to outshine the sun or reflect and
filter the incoming sunlight (transflective). Older Nintendo handhelds shipped
without any light source, leading some clever people to design LED lights that
drew power from the gamelink port, or even open the device to mod it with an
aftermarket frontlight screen. Wikipedia reports the N900's display is
transflective, and it performs admirably in daylight.</p>
<p><strong>Browser</strong>. I think I've tried this UI before on some other system images.
It's not bad, but the back button is form over function; when I want to go
back one click it has to first cook up a scrollable picture history. With
<a href="http://news.cnet.com/8301-30685_3-10411569-264.html?part=rss&subj=news&tag=2547-1_3-0-20">Fennec landing on the N900 first</a>, I'd be remiss not to give it a go, if
only for the Weave support.</p>
<p><strong>Installable apps</strong>. Some quirky stuff going on, like a facebook app
<em>installer</em> app and a facebook app. And to the best of my observation, the
installer app is larger than the app itself. I suppose at least it's not a
free trial. It looks like most of the fun stuff is sitting in satellite repos
like extras or garage?</p>
<p>On further inspection, there's a sort of <a href="http://wiki.maemo.org/Extras">rolling release system in place for
extras</a>, loosely modeled on Debian with perhaps clearer names: extras,
extras-testing and extras-devel. It looks like most packages reside in
-testing, and enabling -testing was similar to Ubuntu's software sources. On
the other hand, -devel has giant warning flags in the wiki.</p>
<p><strong>Battery life</strong>. My friend Tom insists smartphones are power leeches, so I've
been trying to keep it on leash. On the other hand, it seemed to survive a
workday yesterday as I fiddled. It charges via microB, which sounds handy
until you realize the only microB cable you've got is the one that it came
with. We've got countless USB A->B from monitors but no microB. I should break
out the Kill-a-watt and powertop, and see what's going on.</p>
<p><strong>Self reported data.</strong> I've captured some output from various Linux internal
documentation interfaces; I'll post them as comments later.</p>
<h3>App Ideas</h3>
<p>Just a few random ideas I've had:</p>
<ul>
<li>
<p><strong>Club Carder</strong>. Capture and display barcodes for customer loyalty and
library cards. Preliminary testing with displaying photographed barcodes
suggests that the screen is not transflective, and may defeat laser scanners
measuring reflectance. Most of the reviews of the Android app that does this
don't attempt retail scan testing, so I may need to borrow a friend's device
for comparison. If it does work, integrating with GPS may help recall the
correct card for the correct situation.</p>
</li>
<li>
<p><strong>FlashTorch</strong>. Apparently some LED flashes can be driven for longer. Just
finished a longish debate on the merits on #maemo, wherein it was determined
that the light can be driven in flash mode safely for .5s, and longer at 50mA
(less bright but supposedly not bad).</p>
</li>
<li>
<p><strong>GCStar scanner</strong>. Barcode scanner integrated with GCStar or some other
personal inventory app. There's already a barcode scanner work in progress,
probably just need to concoct a plugin to direct the data.</p>
</li>
<li>
<p><strong>Cellwriter</strong>. Should be possible to port it, the main problem is whether
it's actually faster, and whether one can override the onscreen keyboard with
it. It occurs to me that you could preload it with Graffiti or Graffiti 2, but
I doubt they're fast enough. In my experience, cursive is the way to go for
speed. There's certainly spare CPU to process it, but processing cursive
requires another error prone step we don't have yet.</p>
</li>
</ul>Weave synchronization2009-12-06T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2009-12-06:weave-synchronization.html<p>During the transition from Firefox 2.0 to 3.0, <a href="http://www.google.com/tools/firefox/browsersync/">Google Browser Sync</a> was
lost to the churn of time and API. This plugin encrypted and synchronized your
browser history, tabs and cache among browsers. Since its disappearance, many
people seem to recommend <a href="http://www.xmarks.com/">Xmarks</a>. While Xmarks is a fairly robust bookmark
sync tool, it neglects other aspects of a profile like cookies or history,
making it harder to recommend. Conceivably, Ubuntu One could handle Firefox
profiles the way it handles Tomboy and Evolution contacts, but for now, the
only plausible comprehensive option is Mozilla's own offering, <a href="https://mozillalabs.com/weave/">Weave</a>.</p>
<p>Weave is a browser sync engine. The FAQ gives a brief summary of features:
<a href="https://wiki.mozilla.org/Labs/Weave/FAQ#Firefox_Mobile">Weave Sync currently supports synchronization of bookmarks, browsing history,
open tabs, saved passwords, form entries, and selected preferences. Security
settings will be added in a future release.</a> Yes, even <em>tabs</em>. If you leave
a tab open on your work box you can copy it to another box (the remote tab
remains open). Password recall is advertised as useful for mobile phones; pull
complex passwords into your phone's <a href="http://www.mozilla.org/projects/fennec/1.0a1/releasenotes/">Fennec</a> and skip the tedious
caps/numlock/funky character disaster. All of these are configurable, so you
can prevent disclosure if you fear distributing passwords to multiple boxes,
or fear distributing to Mozilla's hosting service.</p>
<p>I tried giving Weave a spin around a year ago, but it wasn't quite open to
public use. There were hints to use WebDAV enabled servers on your own, but
the only server code published was more a code testing server (since retired I
think), and the plugin outright refused to work without a valid Mozilla labs
account, at a time where such accounts were invite only. In fact, I think I
couldn't even download it without someone with an existing account giving me
the URL; not exactly a participatory open source experience to say the least.</p>
<p>But that was then and this is now; they've made regular progress on Weave and
are ramping up for a 1.0 release. Since I've ordered an N900 smartphone, I
decided to spend the Thanksgiving holiday setting up a personal Weave server
to populate its browser with data quickly. I did more research, consulted the
web and documentation, even queried a few IRC channels for user's opinions. It
seems they've opened registration to the general public, and there's a new
<a href="http://tobyelliott.wordpress.com/2009/09/11/weave-minimal-server/">lightweight server</a> published that relies on PHP and SQLite. I've set it
up on my personal server to see whether things have improved; when my phone
comes in I'll also test integration with Fennec.</p>
<p>It's fairly young, but inherits many design decisions from prior Weave
servers, so it should catch up quick and the smaller scope is an advantage.
One minor problem is that it's only published via tarball and uses (part of) a
blog for a website, which does not instill confidence. But with only like five
files and no build, it should be trivial to package or even fork and maintain
if necessary. And I have it on good authority better publishing is <a href="http://tobyelliott.wordpress.com/2009/09/11/weave-minimal-server/#comment-136">under
consideration</a>.</p>
<p>Of course, it would be a bit silly to package a server <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=556135">without the
client</a>. If Mozilla announces it formally soon enough, 1.0 plugin and a
minimal server could plausibly be in Ubuntu 10.04. So far, the server seems to
be working in proper order, but it'll take more and wider testing to have
utmost confidence in it.</p>
<h3>Server Install Notes</h3>
<ul>
<li>
<p>Apache runs as www-data on Ubuntu, so make sure that user has write
permissions on the database and read/execute on the PHP code.</p>
</li>
<li>
<p>Dependencies are fairly obvious: php5-sqlite, php5 and php5-cli (to create
your user). It feels a bit goofy to use PHP for CLI scripting, but I guess in
some situations it reduces dependencies.</p>
</li>
<li>
<p>If you self-sign a certificate, you need to store an exception on every
browser used. Or replace it with a trusted SSL cert.</p>
</li>
<li>
<p>After a week of initial use, my database currently sits around 5MB; I'm
not sure how fast that will grow over time. Probably not much worse than a
local FF profile grows.</p>
</li>
</ul>
<h3>A note to Mozilla</h3>
<p>There's something like three versions of Weave documentation floating around
on your wiki, and Google search likes the oldest one best. Probably because
even your FAQ links to the oldest server documentation. I expect you don't
want this, and should probably take action to change it. Perhaps if
documentation had a "current version" alias, people could link to that instead
of a specific version destined to go out of date, and warning boxes could
appear when viewing old versions. If you need software online help to
reference versioned documentation, perhaps you could, you know, use the built-
in wiki version control?</p>Don't let the MSRP fool you2009-11-20T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2009-11-20:dont-let-the-msrp-fool-you.html<p>I ordered up a <a href="http://amzn.to/2e8xn0G">Nokia N900</a> the other day, and I have to say the
preorder market for it is strange. Or perhaps, I don't participate in
preorders often enough to know that this is common.</p>
<p>The price listed in the Nokia Store and press releases is around 650USD. But
the retailers seem to be undergoing a reverse auction. Newegg undercuts Amazon
by 30 dollars, resulting in a wave of cancellations at Amazon and new orders
at Newegg. Then Amazon cuts the price further. Then Amazon is the first to
post a 50 dollar manufacturer rebate earned by activating the Ovi Store (app
store), and others rapidly follow suit. Obviously some people will game the
system and preorder with every retailer and cancel all but the cheapest when
shipping time comes.</p>
<p>At this point the price is around 200 dollars away from the MSRP; the amazon
link above is currently listing at 479USD. It makes me wonder how big their
markup was, or if they're taking a loss accidentally (or intentionally).
Either way I have to say this recession has been great on my pocketbook.</p>
<p>I think when it arrives I'll use the winter break to make a location aware app
for customer loyalty cards, as I'm tired of carrying around a billion bar
codes in my wallet. Maybe I should bug a friend of mine with Android to show
me how that one works.</p>How to REALLY make money with Ubuntu2009-10-28T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-10-28:how-to-really-make-money-with-ubuntu.html<p>You're all thinking way too hard. If you want to see how people make money on
open source, why not check out <a href="http://shop.ebay.com/?_from=R40&_trksid=p3907.m38.l1313&_nkw=ubuntu&_sacat=See-All-Categories">eBay</a>?</p>An alternative interpretation2009-10-24T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-10-24:an-alternative-interpretation.html<p>Gerfried Fuchs (rhonda) is a <a href="http://rhonda.deb.at/blog/debian/debian-progress.en.html">prolific Debian Developer</a>, who maintains a
number of Debian packages, some of which I use daily. She also participates in
the <a href="http://wiki.debian.org/Games/Development">Debian/Ubuntu Games Team</a>, which has a mission to unite the two
projects when it comes to the common purpose of gaming. To that noble end
she's considering joining MOTU, but is reluctant to keysign the <a href="http://www.ubuntu.com/community/conduct">Code of
Conduct</a>, finding an aside within unsavory. She <a href="http://rhonda.deb.at/blog/2009/10/16#coc-joke.en">worries</a>:</p>
<blockquote>
<p>Anyway, there is this one part with the that CoC that itches me. It's not
that one has to sign it with their GnuPG key, but related to it. Making it a
requirement to sign it gives the document a much more official character,
actually gives it the feeling and impression of a contract and I expect it is
meant to carry that feeling. Though, there is this one part in it that I
consider off for such a document:</p>
<blockquote>
<p>Nobody knows everything, and nobody is expected to be perfect in the Ubuntu
community (except of course the SABDFL).</p>
</blockquote>
<p>Given that the acronym SABDFL refers to Mark Shuttleworth it means that one
has to expect him to be impeccable—which I am sorry but cannot sign.</p>
</blockquote>
<p>As best I can tell, her worry is that it declares leadership is infallible,
and that any lighthearted humor undermines the serious tone a contract must
bear. But jokes can place a lot of meaning into a few words; many legal briefs
use them to great effect. My personal interpretation of the aside in question
is that we hold SABDFL and leadership to higher standards than the rest of the
community, and that we expect the utmost behavior from SABDFL. This
interpretation is supported by the <a href="http://www.ubuntu.com/community/leadership-conduct">Ubuntu Leadership Code of Conduct.</a> I
think this particular document does not and has not received the attention the
normal CoC has enjoyed; I encourage people to read it now. The aside, I
believe, calls to attention the expectations the community has of its
leadership.</p>
<p>It also serves as a reminder that <a href="http://www.ubuntu.com/community/processes/governance#sabdfl">SABDFL</a> has considerable power,
financial, social and structural. Without communicating these expectations,
there would be great power without great responsibility. Being able to take
SABDFL to task for mistakes stabilizes the community when an error is made
by its public face, and places community checks on his power.</p>
<p>So Gerfried, I'd encourage you to read the Leadership CoC and the governance
documents, and then read the CoC again in their light. Perhaps you will then
find this aside reasonable, and perhaps you will find more to object to.
Either way, you'll discover more about the attitudes of the Ubuntu community
you work with, so I think it's worth your time. And if you change your mind,
I'd be glad to cheer you on your way to MOTU.</p>
<p><strong>UPDATE:</strong> shortly after writing this, the Ubuntu Community Council <a href="http://mako.cc/copyrighteous/20091020-00.comment">chose to
revise the document</a>. The statement in question is removed, but I still
encourage people to understand the community structure of Ubuntu and the
expectations we have of leadership.</p>Generating Database Schema With SQL and GraphViz2009-10-20T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-10-20:generating-database-schema-with-sql-and-graphviz.html<p>At work recently I've been asked to dig into a legacy app running queries
against Oracle. One of the challenges with projects like this is determining
the existing schema and how the tables are related. Typically you can read the
internal documentation or review the existing code, but in this case it seems
the documentation was never written, and nobody knows how to build the client
app or which copy of the source code is in use. The best option here is to
<strong>query the database itself for this information</strong>. For those playing at home,
Oracle is not required; in fact, even SQLite has sufficient instrumentation.
If you use Firefox or Liferea or Banshee you will have an SQLite database to
inspect.</p>
<p>There are popular DB admin tools in Ubuntu like TOra, but they lack a decent
diagram generator. If we can extract the schema from the DB, it should be
straightforward to pass this off to GraphViz. In Oracle, the Oracle Data
Dictionary tables provide all the <a href="http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm">details of an Oracle schema</a>. In
SQLite, schemas are stored in the table SQLITE_MASTER. You can easily retrieve
it with a command like:</p>
<p><code>echo ".schema" | sqlite3 ~/.liferea_1.4/liferea.db >> liferea.sql</code></p>
<p>Since this information is readily available on the web, it's inevitable
someone on the internet has done the obvious and written a translator from SQL
schema to <a href="http://en.wikipedia.org/wiki/DOT_language">DOT</a>. I'll share two that I've found that are open source and
useful: SchemaSpy and SQLFairy.</p>
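<p>The core idea is small enough to sketch before reaching for either tool: scrape table names out of a schema dump and emit GraphViz DOT. This toy version is my own simplification (the regex and node styling are invented for illustration); real translators also parse columns and foreign keys to draw labelled edges:</p>

```python
import re

def schema_to_dot(schema_sql):
    # Pull table names out of CREATE TABLE statements and emit one
    # GraphViz node apiece; real translators also parse columns and
    # foreign keys to connect the nodes with edges.
    tables = re.findall(r"CREATE\s+TABLE\s+(\w+)", schema_sql, re.IGNORECASE)
    lines = ["digraph schema {"]
    lines += ['    "{}" [shape=record];'.format(t) for t in tables]
    lines.append("}")
    return "\n".join(lines)

demo = "CREATE TABLE feeds (id INTEGER); CREATE TABLE items (id INTEGER);"
print(schema_to_dot(demo))
```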
<h3>SchemaSpy</h3>
<p><a href="http://schemaspy.sourceforge.net/">SchemaSpy</a> is a Java based tool to connect to a database and generate
documentation. It uses JDBC, and outputs a wealth of information in HTML and
PNG. In addition to the ER diagram basics, it captures constraints, table
sizes, and offers some rudimentary static analysis of the schema. For example,
if you have a table named students and a column named student_id, it will
suggest this as an implied primary key if it's in the same table, or a foreign
key if it's outside the student table. It generates ER diagrams with and
without these implied relations, and uses AJAX to switch between generated
images. Check out the <a href="http://schemaspy.sourceforge.net/sample/">sample output</a> to see exactly what it generates. The
Windows builds of GraphViz appear to do a poor job of rendering, but I like
the color coding of indexes and keys, and how the lines relate specific
fields:</p>
<p><a href="http://schemaspy.sourceforge.net/sample/relationships.html"><img alt="schema diagram for library database" src="http://schemaspy.sourceforge.net/sample/diagrams/summary/relationships.real.compact.png" /></a></p>
<p>The above diagram format is fairly handy (but rendered poorly; Ubuntu's
graphviz does a better job) when the schema is rational. In my case, there
were substantial missing relations and missing primary keys. It was bad enough
that even the implied relations are wrong. Fortunately, SchemaSpy leaves the
intermediary graphviz DOT files around, so I can go in and fix the output for
a few tables. It even provides a -meta parameter for cases like this, but it's
easy to fix one table diagram and then fix the actual problem in the schema.
Overall, it's a success for my Oracle DB reverse engineering task.</p>
<p>The output is fancy enough that I'm tempted to try it on the personal
databases I have. SchemaSpy relies on JDBC, and I've not been able to locate a
JDBC driver for SQLite that handles the metadata requests needed. Plus, SQLite
is special among DBs, in that its goal is to be the simplest thing that could
possibly work. This places the analysis steps somewhere between pointless and
impossible. In this case -meta might be handy for generating documentation
without having both triggers <strong>and</strong> foreign key constraints in the schema. If
you can find a driver to load it in the first place.</p>
<h3>SQL Fairy</h3>
<p>SQLite, though simple, is becoming more popular in desktop apps, so it'd be
productive to have a tool to document these on-disk formats. I know when I was
chasing enclosure handling in Liferea, documentation about the internal schema
would have saved time. But db.c was the second largest source file, behind
only a SWIG autogenerated header file to wrap scripting languages. Generating
readable-at-a-glance documentation would make it easier to see how enclosures
are handled within Liferea.</p>
<p>So while I can't convince SchemaSpy to hit up SQLite databases, I know what I
want is possible, and easy enough that surely someone's done it. This is the
<em>Internet</em>, after all. A fair amount of searching revealed <a href="http://sqlfairy.sourceforge.net/">SQLFairy</a>; it's
listed on the <a href="http://www.graphviz.org/Resources.php">Graphviz Resources</a>, to my chagrin. SQLFairy is a set of
Perl scripts with a main goal of manipulating schemas: translating, diffing,
and <strong>diagramming</strong>. It's not much, but it does do the bare minimum: generate
a diagram. It won't flag poorly built schemas or summarize table sizes, but it
does translate tables and relationships into DOT and hand them off to
GraphViz.</p>
<p>SQLite presents some <a href="http://www.sqlite.org/omitted.html">unique challenges</a> beyond lacking a JDBC driver; it
only started enforcing <a href="http://www.sqlite.org/foreignkeys.html">foreign keys</a> a <a href="http://www.sqlite.org/releaselog/3_6_19.html">few days ago</a>, so software
needing this today <a href="http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers">works around</a> that limitation with triggers.
Fortunately, SQLFairy also supports implied relationships, but they call them
"natural joins", overloading normal DB terminology. Using the liferea.sql
schema dump, we can use sqlt-graph to generate an SVG for high quality
printing:</p>
<p><code>sqlt-graph -c --natural-join --from=SQLite -t svg -o liferea_schema.svg liferea.sql</code></p>
<p><img alt="schema diagram for liferea database" src="//www.pwnguin.net/media/photologue/photos/liferea_schema.png" /></p>
<p>You can then load that up in Inkscape and target whatever paper you've got or
tweak the diagram. We have some 11x17 ledger paper at work that showcases
these diagrams very nicely. Since SVG output is also XML, you could run some
XSLT <a href="http://github.com/vidarh/diagram-tools">output processing</a> to style it, but without more metadata in the
XML, about all I can get is drop shadows on boxes. It also doesn't do
relations very well, drawing relations between tables rather than between
fields within tables, because it doesn't properly make use of Graphviz
<a href="http://www.graphviz.org/doc/info/shapes.html#record">records</a>. As bad as the above graph is, it's worse on Ubuntu 9.04 and
9.10, which carry versions from before SQLFairy upstream revisited the
graphviz output. There's no reason this can't be fixed to be close to
SchemaSpy quality diagrams, although the two projects use different (possibly
compatible) licensing.</p>
<h3>Conclusions</h3>
<p>Right now it seems like SchemaSpy is a great tool for documenting your server
oriented database; if you generate documentation for your project it's worth
having a look at adding it to the doc target. On the other hand, SQLite
support is not as robust, even as adoption is growing. The newly announced FK
support gives me hope that most apps can be easily changed to be more
documentation amenable. If anyone can get SchemaSpy to work with SQLite,
please let me know how!</p>Fun and Profit with Liferea Conversion Filters2009-09-13T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-09-13:fun-and-profit-with-liferea-conversion-filters.html<p><a href="http://liferea.sourceforge.net/">Liferea</a> is an RSS feed reader, with some hidden powers. I use it to
automatically download enclosures from the <a href="http://www.ted.com/">TED</a> feed, and aggregate unread
comic strips into a single scroll page in a custom view folder. But today, I'd
like to present <strong>conversion filters</strong>.</p>
<p>Conversion filters are scripts that function according to UNIX pipes semantics
to process feeds before display. Liferea performs the task of retrieving the
XML, starts the conversion filter, and passes the XML over stdin. The filter
prints a converted XML file back to Liferea over stdout, which happily
displays the new RSS feed. For the graphic minded:</p>
<p><img alt="liferea conversion filter diagram" src="//www.pwnguin.net/media/photologue/photos/conversionfilter.png" /></p>
<p>A <a href="http://kiza.kcore.de/software/snownews/snowscripts/extensions/">small repository of conversion filters</a> has been started, mainly driven
by a different, command line driven RSS reader, snownews. Most published
conversion filters are what you'd call a <em>source</em> in dataflow, rather than a
<em>filter</em>. Even more are site specific, making this more of a greasemonkey for
RSS feeds.</p>
<p>I'll share an example I concocted today. I like the <a href="http://freakonomics.blogs.nytimes.com/">Freakonomics blog</a>,
but lately they've added some unrelated content called <em>Quotes Uncovered</em>. I
have no idea how identifying the historical source of quotations relates to
microeconomics, but I do have an idea of how to use a conversion filter to
remove it. But first, we need to talk for a moment about <strong>XPath</strong>.</p>
<p>XML is tree structured, like a filesystem. I've seen at least two people take
this analogy to its conclusion, and implement an <a href="http://github.com/halhen/xmlfs/tree/master">XMLfs</a> via FUSE. XPath
also uses this analogy, for a different purpose, describing paths in
documents. Paths in UNIX filesystems can be singular, like
<strong>/etc/apt/sources.list</strong>, or plural, like <strong>/etc/apt/sources.list.d/<strong><em>
(technically your shell expands this, but play along). Concepts like </em>*../</strong>
and </strong>*** are supported by XPath. Instead of directories though, we're going
to use XPath to navigate nodes in XML.</p>
<p>Unfortunately, I'm interested in a specific node, based not on its name but
its contents. So I need some predicate to narrow the selection. In a UNIX shell, find is
commonly used for this purpose, but XPath integrates the concept and diverges
from UNIX paths. In the case of my Freakonomics cleaner, I want to delete the
item node that has <em>Quotes Uncovered</em> in a child title node:</p>
<p><code>//*/item/title[contains(text(),'Quotes Uncovered')]/..</code></p>
<p>Liferea doesn't understand XPath though, so we'll need a conversion
filter to handle it. The conversion filters I've seen thus far all script the
same three steps: parse the XML, process the structure, and print it. Only the middle
step is actually going to be unique to each conversion filter. Fortunately,
there's a program that adheres to the UNIX philosophy and handles this:
<a href="http://xmlstar.sourceforge.net/">XMLStarlet</a> (in Universe). XMLStarlet reads XML from stdin and writes XML
to stdout, just as our conversion filters must. On a technical level, it
converts the command line to an XSLT stylesheet and applies it to the input. In this case
I just need to tell it to delete the nodes matching that XPath from the
feed. That's accomplished with the <code>ed -d</code> option.</p>
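As an aside, if XMLStarlet isn't handy, the same cleanup is easy to sketch in Python with only the standard library. This is my own illustration, not anything Liferea or snownews ship, and it assumes a plain RSS 2.0 layout (rss/channel/item) with no XML namespaces:

```python
# A stdlib-only sketch of the same filter logic: drop every <item> whose
# <title> contains a phrase. Assumes a plain RSS 2.0 feed (rss/channel/item)
# with no namespaces; an illustration, not Liferea's own code.
import xml.etree.ElementTree as ET

def strip_items(xml_text, needle):
    """Return the feed with matching items removed."""
    root = ET.fromstring(xml_text)
    for channel in root.findall("channel"):
        for item in list(channel.findall("item")):  # copy: we mutate below
            if needle in (item.findtext("title") or ""):
                channel.remove(item)
    return ET.tostring(root, encoding="unicode")

# As a Liferea conversion filter this would be wrapped with:
#   sys.stdout.write(strip_items(sys.stdin.read(), "Quotes Uncovered"))
```

A short shell wrapper around a script like this (or the XMLStarlet one-liner) is what goes in the subscription's filter box.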
<p>So now I just create a new subscription, and put the command in the box:</p>
<p><img alt="liferea subscription properties dialog" src="//www.pwnguin.net/media/photologue/photos/Liferea-Subscription.png" /></p>
<p>The result is a one-liner that puts the Quotes Uncovered series out to pasture.
It's tempting to use UNIX pipes to build more complicated filters, but I'm told
Liferea isn't currently coded for chained pipes. So if you get more
complicated than the basic three-step Fetch-Filter-Display pipeline, you'll need
to write a short shell script for that, or write some really crazy XPath.</p>
<h3>Postscript</h3>
<p>The Snownews extension repo is interesting, but doesn't have a way to link
feeds with scripts. Earlier I compared conversion filters to Greasemonkey.
Well Greasemonkey has a partner extension called Greasefire and a website
backend userscripts.org for script discovery. It'd be handy to have conversion
filter discovery for RSS!</p>Adeona + UbuntuOne?2009-08-23T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-08-23:adeona-ubuntuone.html<p><a href="http://adeona.cs.washington.edu/">Adeona</a> is an OSS anti-theft program. The theory of operation is described
as a <em>privacy-preserving device-tracking system</em> in the <a href="http://adeona.cs.washington.edu/papers/adeona-usenixsecurity08.pdf">research
publication</a>:</p>
<blockquote>
<p>A device tracking system consists of: client hardware or software logic
installed on the device; (sometimes) cryptographic key material stored on the
device; (sometimes) cryptographic key material maintained separately by the
device owner; and a remote storage facility. The client sends location updates
over the Internet to the remote storage. Once a device goes missing, the owner
or authorized agent searches the remote storage for location updates
pertaining to the device’s current whereabouts.</p>
</blockquote>
<p>The paper goes on to describe specific goals for Adeona:</p>
<ul>
<li>
<p>network update anonymity</p>
</li>
<li>
<p>privacy from the thief</p>
</li>
<li>
<p>efficiency on par with existing tools</p>
</li>
</ul>
<p>I'll argue that most of this is post-hoc requirements from an early decision
to rely on OpenDHT (which turns out to be a not so great idea). That said,
it's not bad stuff to have; Joey Hess's <a href="http://kitenet.net/~joey/blog/entry/Palm_Pre_privacy">recent discovery</a> made headlines,
and rightly so, when the Palm Pre violated pretty much every principle the Adeona
authors wrote about.</p>
<p>However, it's rarely a wise idea for products to rely on academic research
projects. As a research project, it might have been acceptable for Adeona to
depend on OpenDHT. But this decision should have been revisited when Adeona
launched as a community product. And they've paid a price -- OpenDHT announced
it would be closing down on July 1st 2009. I'm not clear on whether the
OpenDHT infrastructure is too slow and unreliable, or if the software itself
is unreliable, but Adeona's own website currently starts with a disclaimer of
non-functionality.</p>
<p>So what I'm wondering out loud is, why not integrate Adeona and UbuntuOne? The
purpose of using OpenDHT appears to be to find cheap storage for tracking
data. Obviously some privacy would be sacrificed. Canonical would basically be
able to track some things, like IPs connected to UbuntuOne. While there are
already other reliable places for Canonical to snoop IP addresses (access logs
from Ubuntu archives), UbuntuOne takes the extra step of authenticating users.
I believe though, that the only thing they'd be able to associate is your
username with a set of IPs. I guess it comes down to how many ultra-paranoid
users object to the privacy concerns of UbuntuOne itself.</p>
<p>Ideally, I think Adeona would be generalized to allow a number of remote
storage interfaces. If I understood the system correctly, data is encrypted
before being stored. The privacy paranoid would be able to store location data
on their own servers, or encrypted email, etc.</p>
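To make that proposal concrete, here's a hypothetical sketch of such a storage interface. Every name here is invented for illustration (Adeona itself is not structured this way); the point is simply that backends only ever see opaque, already-encrypted blobs:

```python
# Hypothetical pluggable remote-storage interface for Adeona-style location
# updates. All names are invented for illustration. Because updates arrive
# already encrypted, a backend (OpenDHT, UbuntuOne, a private server...)
# never needs to understand what it stores.
class StorageBackend:
    def put(self, key, encrypted_blob):
        raise NotImplementedError

    def get(self, key):
        raise NotImplementedError

class InMemoryBackend(StorageBackend):
    """Toy stand-in for a real backend."""
    def __init__(self):
        self._store = {}

    def put(self, key, encrypted_blob):
        self._store[key] = encrypted_blob

    def get(self, key):
        return self._store.get(key)
```

Swapping OpenDHT for UbuntuOne, or for a privacy-paranoid user's own server, would then be a matter of writing one more backend.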
<p>So is there something I'm missing? A competing product maybe? Or perhaps the
viewpoint of law enforcement; the academic papers seem to neglect recovery
rates and concerns from police. Or perhaps something technical. I'm hardly a
crypto expert, and currently security researchers seem to be focusing on a
<a href="http://www.google.com/hostednews/ap/article/ALeqM5gDEcxr3CSkM0RlVSqVzNWlccf6XwD99P33N82">proprietary application.</a></p>Why isn't Arduino in Debian / Ubuntu?2009-08-12T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-08-12:why-isnt-arduino-in-debian-ubuntu.html<p>After <a href="//www.pwnguin.net/an-electronic-diversion.html">my review</a> of the Sparkle Labs <strong>Discover Electronics!</strong> kit I
thought I'd look for related software in Ubuntu. I did find <a href="http://packages.ubuntu.com/gresistor">gResistor</a>
(Hat tip: Debianized by <a href="http://www.tuxmaniac.com/blog/2008/07/26/gresistor-is-now-debianised/">tuxmaniac</a>), and some drafting tools. But I also
found a lot of stuff that wasn't in Debian or Ubuntu.</p>
<p>For example, <a href="http://www.arduino.cc/">Arduino</a> makes an interesting platform for hobbyists,
students and apparently artists. Its lineage is confusing, inheriting from
both <a href="http://www.processing.org">Processing</a> and <a href="http://www.wiring.org.co">Wiring</a>. I'm not yet clear whether Wiring is a
language, hardware or an IDE, but it in turn is based on Processing and adds
some microcontroller-y stuff as I understand it. Processing is a language and
IDE, and has been used for <a href="http://www.aiga.org/content.cfm/the-amazing-visual-language-of-processing">amazing things</a>. It's all open source, and
reportedly works on Ubuntu. People have built <a href="http://www.kellbot.com/2009/05/life-size-katamari-lives/">amazing</a> <a href="http://gizmodo.com/5028377/amazing-wii+like-3+d-controller-interface-built-with-foil-wiring-resistors-and-arduino">things</a> with
the Arduino platform as well.</p>
<p>Strangely, this hub of activity hasn't resulted in a .deb package, only a
<a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=433270">Request For Package</a> for Processing that has sat idle for two years, and nothing
on the subject of Arduino's IDE. Having packaging available for this stuff can
do a lot to remove trip-ups for budding developers and artists. I'd much rather
they spend their time fixing their own bugs than trying to remove bugs in
integrating the platform with Ubuntu. Does anyone know why or where these
efforts have stalled out? Is it legal issues, build problems, or is upstream's
own download just too convenient already?</p>
Software</a>:</p>
<blockquote>
<p>One irony (though not entertained in the chapter) has to do with the status
of Free Software in the academy: it is pretty weak among CS-ey types and yet
Free Software is often identified as a paragon example of the openness and
communitarian elements of how academic science is supposed to work. So.. what
is exactly going on?</p>
</blockquote>
<p>The post seems to confuse the issue of software generated by CS research, and
the software needs of education. Simply put: Moodle developers don't earn
tenure.</p>
<p>It's not the case, however, that Free Software "is weak among CS-ey types".
Looking back on my CS undergraduate studies, my fellow students and I used a
number of open source tools. Some are general purpose tools: Apache, OCaml and
gcc, CVS, ANTLR, lexx/yacc to name the ones I remember. We even studied source
code from a C stdlib. Our 3d graphics course focused on Linux, and OpenGL when
it came time for hardware rendering. Some software was open sourced
specifically for instruction: NachOS and Minix come to mind. One professor was
writing a textbook on formal language theory and the drafts are GFDL.</p>
<p>There are plenty of FOSS research projects. They're just things nobody's heard
of or forgot, because they're academic, boring or complicated. If we treat the
various Bell Labs as a private academic institution, they've put out stuff
like SPIN, graphviz, and so on. The University of Edinburgh released <a href="http://festvox.org/festival/">festival</a>,
and Carnegie Mellon developed <a href="http://www.speech.cs.cmu.edu/flite/">flite</a>. BSD came from Berkeley, an obvious
example. The rest of the stuff I know of is source code static analysis tools
that go further and take longer than lint, or Eclipse plugins. If anyone knows
of Moodle related research for publication in say ACM or IEEE, I'm unaware of
it and would enjoy a reference to the paper.</p>
<p>So CS students use and read open source software, and CS researchers create
and improve it. The remaining aspect of the question, as I understand it, is
why more student projects aren't started to solve the university's problems.
It's a bit naive, I think, to imagine a capstone project can replace incumbent
software in a semester or two. You could develop a project over years of
classes, but there's a problematic theme among educational source code:
projects for instructional improvement need to stay in need of improvement.
Hence, Minix spawned multiple, unincorporated VM projects, and eventually led
to Linus Torvalds starting his own OS. It's also a bit rude to charge students
tuition for the privilege of replacing software you need, and then demand the
rights to that labor. Even GRAs are paid tuition or a stipend!</p>
<p>A positive example: <a href="http://risujin.org/cellwriter/">Cellwriter</a>. <a href="http://www.urop.umn.edu/">UROP</a> paid the student an
Undergraduate Research grant, with exceptional results. But in the longer
term, who owns the responsibility for maintaining these programs?
Undergraduates tend to graduate and land jobs, rarely to continue their work
on a capstone course. Leaving the life-span of a project in their hands works
fine for the self-selected, but requiring CS students to work on someone
else's problem will only work until grades are assigned. I remember my own
capstone project, and while instructive, I don't think anyone in the team
wants to see that subject or code ever again.</p>
<p>Directing CS departments at open source's deficiencies is a bit like asking EE
professors to wire your campus, or ArchiE professors to design your buildings.
These people are employed to solve hard and mostly unsolved problems; campus
IT is mostly not that. (At this point, I must confess I am a CS academic-in-
training who's transitioned to college IT administration, and may have a
bias.) It's a tremendous mistake to take these people and build an open source
AutoCAD when they should be working on <a href="http://www.dgp.toronto.edu/~shbae/ilovesketch.htm">alternatives</a> to the CAD concept!
Of course it would be great if innovations from Computer Science departments
were Free Software, and many are.</p>
<p>It's therefore incumbent upon IT administration to handle the mundane tasks;
we operate in service of the academy, not the other way around. My alma mater
actually builds its own online education site, and employs many CS students,
but I see little need for CS research on the subject. It's not that IT lacks
funding or programmers, but that they have no desire (ability?) to release
that code. <a href="http://osuosl.org/about-osuosl">OSUOSL</a> is the closest positive example I have; they're an
organization that promotes the creation of FOSS, by students and staff. But
from what I see, their mission is more outreach than... "inreach".</p>
<p>So I guess my point is, it's the duty of IT departments to solve IT
department's needs. Hopefully I've convinced you that Computer Science isn't
quite so alien to Free Software, and that projects similar to OSUOSL are a
better form of FOSS adaptation than reallocating Computer Science faculty and
classes.</p>An electronic diversion2009-07-26T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-07-26:an-electronic-diversion.html<p>I've always felt a bit mystified with analog electronics. Sure, I had a few
chances to figure it out, but these sorts of things require an experienced
guide. When I was a child, I received a Christmas gift of an AM radio kit. It
didn't work; my father suggested the crystal diode was broken, but we never got
it working. He once brought back from a garage sale a kit that's now <a href="http://en.wikipedia.org/wiki/Gakken_EX-System">a
collectors item</a>, but it was missing a manual and a few parts. I suppose I
can't really fault a guy with an Accounting degree for inadequately explaining
what a transistor was without using the term "conductance band", but at the
time, the idea of electricity stopping the flow of electricity confused the
hell out of me.</p>
<p>So even as I was taking embedded programming courses in college, I was
uncomfortable with some of the stuff I would be working with. Sure, all CS
grads are required to take a digital logic class, but it didn't cover the
basics, like the purpose of ground (science education mostly talks about
positive and negative and circuits), or elementary parts like diodes,
transistors and resistors. Now that I'm at a point in life that I have a bit
of money to spend, I've been looking at intro electronic hobby kits, and
watching <a href="http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-002Spring-2007/VideoLectures/index.htm">MIT 6.002 lectures</a>.</p>
<h3>The parts</h3>
<p>Of the many kits available at local stores and online, one that caught my
eye on Maker's Shed was a <a href="http://kits.sparklelabs.com/">kit from Sparkle Labs</a>. Most kits are focused
on a single project like an AM radio. This kit, however, is described as a
"curated selection of parts". It comes with a breadboard and lot of just
normal parts, which is great.</p>
<p><a href="//www.pwnguin.net/media/photologue/photos/sparkle_labs_kit.jpg"><img alt="neat papercraft organizer" src="//www.pwnguin.net/media/photologue/photos/cache/sparkle_labs_kit_thumbnail.jpg" /></a></p>
<p>The picture does a better job explaining what's in it than I can. All of the
basics are present: resistors, capacitors, diodes, buttons, LEDs, wires. It
also comes with potentiometers, transistors and photoresistors. Finally, it
comes with a 556 chip (a pair of 555 timers), which turns out to be a real
treat. The 556 isn't the sort of thing I would have found in a textbook or in
online lectures, but appears to be extremely versatile. And apparently the 555
is a very high-selling IC.</p>
<h3>The manual</h3>
<p>Interestingly, Sparkle Labs is mainly a Design-with-a-capital-D outfit,
judging by their published <a href="http://www.sparklelabs.com/v2/work.php">portfolio</a>. This emphasis on visual design shows
in their manual, which features lots of stylized 3d renders of circuits
alongside traditional circuit diagrams. I'd be very interested to learn how
the renders were made, as they're very nice and nearly solve the challenge of
translating circuit diagrams to a breadboard.</p>
<p>The manual explains most of the parts included, and offers example circuits to
demonstrate their use. For example, when discussing resistors, it mentions
industrial color band labeling. The first circuit you build is a handy power
supply from a 9V battery down to 5V. Other circuits include a dark detector
and light detector. It comes with some transistors, but not enough to build
logic gates or simple adders.</p>
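Speaking of those color bands: the standard EIA scheme for four-band resistors is simple enough to write down as code. This little decoder is my own illustration, not something from the manual:

```python
# Decode a standard 4-band resistor: two significant-digit bands and a
# power-of-ten multiplier band (the 4th band is tolerance, ignored here).
# Colors follow the usual EIA code; my illustration, not the manual's chart.
EIA_DIGITS = ["black", "brown", "red", "orange", "yellow",
              "green", "blue", "violet", "grey", "white"]

def resistance_ohms(band1, band2, multiplier):
    """E.g. brown-black-red -> 10 * 100 = 1000 ohms (1k)."""
    value = EIA_DIGITS.index(band1) * 10 + EIA_DIGITS.index(band2)
    return value * 10 ** EIA_DIGITS.index(multiplier)
```

For instance, red-violet-brown decodes to 270 ohms, and mixing up red and brown in the first band turns a 1k resistor into a 2k one.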
<p>The manual ends with a circuit to build a signal generator out of a <a href="http://en.wikipedia.org/wiki/555_timer_IC">556
timer</a>. It's a variant on the <a href="http://www.jameco.com/Jameco/PressRoom/punk.html">Atari Punk Console</a> that uses photodiodes
for no reason I can discern. I replaced them with potentiometers (as hinted
at), and it works fairly well. I know I've seen other circuits out there and
I'm tempted to try them out.</p>
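For anyone tinkering with that circuit, the textbook first-order approximation for a 555 (or half a 556) in astable mode relates the two timing resistors and the capacitor to the output frequency. This is the standard formula, not something from the Sparkle Labs manual:

```python
# Textbook approximation for a 555/556 astable oscillator's frequency:
#   f = 1.44 / ((R1 + 2*R2) * C)
# with R1, R2 in ohms and C in farads. Turning a potentiometer changes the
# resistance and therefore the pitch. Standard formula, not the kit manual's.
def astable_hz(r1_ohms, r2_ohms, c_farads):
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)
```

With, say, R1 = 1k, R2 = 10k and C = 0.1 uF, that lands squarely in the audible range at roughly 686 Hz.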
<p>The manual does have some shortcomings though. A few parts included get no
mention in the manual, which is a bit puzzling. A couple of diodes are
included, presumably for radio, since turning AC current into DC current is a
recipe for death in household scenarios. And the section on the 556 comes with
a few impressive and fun circuits, but really doesn't explain the function and
pinouts in sufficient detail.</p>
<p>From a graphic design perspective, a few of the colors are a bit off. The
manual comes with a resistor decoding chart, but it doesn't quite match up with
the resistors provided. The unprepared may be expecting a far greater contrast
between red and brown, and confuse the two. After the manual introduces
resistor color coding, 3d renders are done with a generic resistor color
banding, which is just mean to lazy people like me.</p>
<h3>My final words</h3>
<p>Overall, this kit is not a bad deal for a hobbyist. The parts are a better
deal for the money than at places like Octopart and Digikey, and the manual
can easily be supplemented by the internet and libraries. If Sparkle Lab's
main goal is to corner an "educational" market, the manual might need some
revision, or a supplemental website. But my main purpose was to be more
comfortable with analog components, in preparation of building a <a href="http://ca.rroll.net/2008/03/22/custom-built-usb-sensor-bar/">USB powered
Sensor Bar</a> for my <a href="//www.pwnguin.net/a-cheap-media-remote.html">Wiimote+Ubuntu</a> setup. Mission accomplished.</p>Eclipse Update2009-07-20T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-07-20:eclipse-update.html<blockquote>
<p><doko> tjaalton, pwnguin: and I'm filing a bug report to
remove this package again, because it does include a handful or more third
party libs inside the eclipse package. if you want to keep eclipse, please fix
these bugs, hint, hint ;) </p>
</blockquote>
<p>Matthias Klose (doko) has <a href="https://launchpad.net/ubuntu/+source/eclipse/3.4.1-0ubuntu1">uploaded a new version of Eclipse</a> to Karmic.
However, things could be better. Firstly, there's still 3.5 to contend with.
Moreover, as the quote above hints at, the current package violates Debian
Policy, in particular, <a href="http://www.debian.org/doc/debian-policy/ch-source.html#s-embeddedfiles">4.13: Convenience copies of code</a>:</p>
<blockquote>
<p>Some software packages include in their distribution convenience copies of
code from other software packages, generally so that users compiling from
source don't have to download multiple packages. Debian packages should not
make use of these convenience copies unless the included package is explicitly
intended to be used in this way. If the included code is already in the Debian
archive in the form of a library, the Debian packaging should ensure that
binary packages reference the libraries already in Debian and the convenience
copy is not used. If the included code is not already in Debian, it should be
packaged separately as a prerequisite if possible.</p>
</blockquote>
<p>The Debian project prefers explicit copies of third party libs to be in a
systemwide package, for sanity's sake. The reasons are fairly good:</p>
<ul>
<li>
<p>Firstly, it's inefficient. Duplicated libraries can't be shared with
other executables, on disk or in RAM.</p>
</li>
<li>
<p>Most UNIX programs already do this, so it's an expected norm. If a Debian
Developer is looking for source code to a library to track down a bug, it's
easy to accidentally assume the existing package was used.</p>
</li>
<li>
<p>Without a package and corresponding metadata, there's no way to search
for all instances of a given library. If you do find a bug in a library, you'd
like it to be fixed everywhere at once.</p>
</li>
<li>
<p>One reason you might absolutely need to fix a bug everywhere at once is
security flaws. With the stated policy, it's much easier to verify that it's
fixed everywhere.</p>
</li>
</ul>
<p>Since doko's upload, <a href="https://edge.launchpad.net/ubuntu/+source/eclipse/3.4.1-0ubuntu2">another upload</a> took care of some build
dependencies, but nothing has addressed the library issue, likely because
nobody's been informed (directed comments on IRC don't count as notice!),
and, contrary to what doko's comment suggests, no bug has been filed. If anyone wants to
tackle this, #ubuntu-motu is the place to look for guidance.</p>The sorry state of Eclipse2009-06-26T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-06-26:the-sorry-state-of-eclipse.html<p>So Wednesday marked <a href="http://www.eclipse.org/org/press-release/20090624_galileo.php">the official release of Eclipse 3.5</a> (codename:
Galileo). Eclipse is an integrated source code editor and debugger. It's
roughly comparable to Visual Studio; it supports a number of languages through
plugins (PHP, for example), and tools (JUnit, for example). It's also one of
the few Open Source tools that goes beyond text editing and debugging and comes
close to supporting Software Engineering tools like UML and formal
specifications.</p>
<p>Eclipse proved an invaluable platform for a compilers class, where ANTLR plug-
ins gave excellent tools to review the grammar stripped of the handling code.
It's also handy for embedded platforms, where the alternatives are even worse.
I last wrote about eclipse approximately <a href="//www.pwnguin.net/eclipse-disaster.html">a year ago</a>. I haven't revisited
the subject of the CDT, but I hope it's improved with time.</p>
<p>The problem is, the Ubuntu package of Eclipse has not improved with time, but
stood still. 3.2.2-5ubuntu3 is the current package version, and it has been there
since Hardy. Importantly, when Hardy was released, Eclipse 3.2 was already 2
years old. The current proposal from <a href="https://bugs.launchpad.net/ubuntu/+source/eclipse/+bug/123064?comments=all">this bug report</a> is to remove it
entirely. And it seems like a reasonable suggestion. Why leave a package that
many revisions behind for people to find and suffer from? It's a sentiment
that <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=526489">Debian agrees</a> with.</p>
<p>A tragic question comes to mind: if Eclipse is a tool that programmers want or
need, why is Eclipse neglected in volunteer groups like Debian or MOTU? It's
not the case that Eclipse is too fast moving; they have a yearly release
cycle. It's not that there aren't dozens of interested people; the "upgrade
Eclipse" bug has countless subscribers. Is it that Debian and Ubuntu
developers don't need Eclipse, or maybe even dislike tools beyond vim/emacs?
Or maybe Eclipse's challenge is that building software with Eclipse is simple,
but building Eclipse itself is not. <strong>Is it possible to build Eclipse with
Eclipse?</strong></p>
<p>Given the apparent neglect, it's probably time to remove it from the archive,
acknowledge that users will be grabbing from upstream directly and <strong>start
dealing with the consequences</strong>. Eclipse forms an ecosystem of plug-ins, with
a rather sparse set of core features in their absence. Plug-ins don't appear
to make many assumptions about the host platform; one bug reporter notes that
<a href="https://bugs.launchpad.net/ubuntu/%2Bsource/subversion/%2Bbug/382048">Eclipse SVN plug-ins</a> use Subversion 1.6 libraries, which upgrade working
directories to 1.6. This breaks compatibility with the SVN shipped with
Ubuntu.</p>
<p>I assume it's much easier to backport SVN 1.6 than fix Eclipse packaging, so
perhaps the barrier would be lower if we acknowledged this external dependency
exists. <strong>So who should be consulted on removing Eclipse</strong>? MOTU? The Java
Team? The Eclipse Team?</p>Streamlined support tricks2009-06-12T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-06-12:streamlined-support-tricks.html<p>Ubuntu is a big project. Nobody really understands everything about Ubuntu;
even <a href="//www.pwnguin.net/training-materials.html">authoring a Training video on Ubuntu</a> doesn't guarantee sufficient
expertise. One great thing about the Internet is that it can bring people with
a common interest or problem together. If you have a problem with your car,
there are dozens of outlets you can explore to diagnose and repair it. Today
I'll explore some of the various ways we can diagnose and repair Ubuntu, and
how to moderate the flow from these firehoses.</p>
<h3>Official Channels</h3>
<p>Obviously Ubuntu was founded by people with a deep understanding of the
Internet, software, and collaboration. Over the past ten releases there have
been a number of avenues created for people to ask questions and get answers,
with varying degrees of noise and signal.</p>
<p>The most high profile of these is Launchpad's <a href="http://launchpad.net/ubuntu">bug tracker,</a> Malone (named
after Bugsy Malone?). If you have a problem that only changing a program can
fix, this is the place. It's modeled on BugZilla and Debian BTS, but brings a
lot of new stuff to the table. One such improvement over simple email based
tools is the dupe-finder. Given a description, Malone will suggest a set of
bugs similar in description, before asking you for a detailed write up of the
problem. This is a great way for software to help people with similar problems
find each other. But a big project like Ubuntu gets a lot of bugs; setting
yourself as a general bug contact is not a smart idea. If you want to
help out with bugs, <strong>pick a package or two and set yourself as a bug
contact</strong>.</p>
<p>A more recent Launchpad addition is the <a href="http://answers.launchpad.net/ubuntu">Answers tool</a>. You can ask a
question, and people can ask for more information or suggest an answer.
Answers manages the state of questions so you can safely ignore questions
with accepted solutions. Answers is intended for things that maybe aren't
exactly bugs, but simple questions about how to use a program. Like Malone, it
also features a dupe checker to help consolidate activity. It's not clear from
the website, but you can also set yourself as an answers contact for individual
packages. <strong>https://answers.launchpad.net/ubuntu/+source/<em>pkg-name</em>/+answer-
contact</strong> will point you in the right direction, for a given <em>pkg-name</em>. I'm
not aware of any RSS feeds for answers, in whole or in part, unfortunately.</p>
<p>We also run two support channels on Freenode, #ubuntu and #ubuntu+1. #ubuntu+1
is for development versions, so it usually focuses around testing packages and
undoing damage from package updates gone wrong. #ubuntu itself is the standard
support channel, but it can be very daunting. There are currently 1300 people
participating in the channel, and it requires a specific etiquette to scale.
Because simultaneous conversations are going on, it's best to address
questions to the channel, but address all replies to a specific recipient. IRC
clients will pick up when their name is used and highlight the reply above the
noise of other support conversations. There's also a set of <a href="https://wiki.ubuntu.com/UbuntuBots">IRC bots</a> to
aid in common questions, facts, and <a href="http://irclogs.ubuntu.com">channel logging</a>. One neat extra is
that if you mention a bug number or URL, the bots can provide the channel with
the bug summary. <strong>If you wish to selectively help people via IRC, join
#ubuntu, get familiar with your client, and set some hilights on packages
you're an expert on</strong>. If you want to get really sophisticated, there are
scripts out there for screen+irssi and libnotify that will generate a desktop
popup when an IRC hilight comes in.</p>
<p>The mailing lists serve a number of purposes within Ubuntu, but today I'll
focus on <a href="https://lists.ubuntu.com/#Community+Support">the support</a> lists. Unmoderated lists can get pretty prolific;
<a href="https://lists.ubuntu.com/archives/ubuntu-users/">ubuntu-users</a> seems to average about a megabyte of mail a month. This
translated to 4000 individual emails for the month of May. That's 130 messages
a day! Again, a good email client can help trim that down. Gmail will group
messages into coherent threads, but we're still looking at perhaps 20 threads
a day. Cutting down the volume further may require getting familiar with mail
filters that are built into your client. I believe Gmail can store a list to a
folder and then bring in an entire thread to the inbox when a message matches
a keyword; but I haven't tried this. At any rate, I understand <strong>procmail-like
tools are key to narrowing large volume lists to something more viable</strong>.</p>
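As a toy illustration of the procmail idea (keywords and folder names here are invented; real procmail matches headers with regex recipes in ~/.procmailrc), routing a message by its List-Id header looks like:

```python
# Toy procmail-style router: file a message into the first folder whose
# keyword appears in its List-Id header. The rules are invented examples;
# real procmail does this with regex recipes in ~/.procmailrc.
import email

def route(raw_message, rules, default="INBOX"):
    msg = email.message_from_string(raw_message)
    list_id = msg.get("List-Id", "")
    for keyword, folder in rules:
        if keyword in list_id:
            return folder
    return default
```

A second pass of rules keyed on Subject keywords would then pull the handful of threads you care about back out of the list folder.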
<p>Finally, there is <a href="http://ubuntuforums.org/">the forum</a>. Web forums have a reputation for being very
novice friendly, and very time wasting. Forums are also historically polling
oriented; if you want to know what's going on currently, you visit the website
and refresh the pages. I think UbuntuForums supports subscriptions, but only
at the granularity of boards and threads. That's potentially a lot of traffic,
so if you want something less demanding, you'll need to <strong>subscribe to an
UbuntuForum board and filter it yourself in your mail client</strong>, similar to the
mailing lists. There's no tracking of question status ("problem solved!");
that feature was deemed too hard on the database and disabled.</p>
<p>It seems like web-based systems have a higher potential to scale; I assume
that's mostly due to being backed by a database. But it's also important to be
able to customize the system for the support workflow and collaboration, and
that's an important distinction between Launchpad and a generic forum. Forums
are like the Swiss Army knife of social websites: you can use them to do lots
of things... poorly. I feel custom-tailored software will always have an edge
over a general forum: wikis for howtos, bugtrackers for bugs, revision control
for scripts and programs.</p>
<h3>Unofficial resources</h3>
<p>So far, we've only covered the things that people use who want to be involved
and associated with Ubuntu. The internet is a much wider place than ubuntu.com
and subdomains. I'll mention a few places I've found that some people may use
instead. Reasons for avoiding official avenues range from ignorance of what's
available, to having a established reputations and relations at another place,
to better turnaround times and wider points of view.</p>
<p>The first and obvious external support tool is Google websearch. It's a great
tool for surveying the hundreds of bugzillas, web forums and blogs on the net.
For many people, websearch is the first go-to tool when they encounter a
problem; any distribution that stops Google and search engines in general from
indexing their bug tracker is doing their users a disservice. Launchpad uses
an interesting URL scheme to make their database indexable; I hope to see how
they implemented it sometime in July.</p>
<p>The first actual website I'll mention is <a href="http://ask.metafilter.com">AskMetafilter</a>. Metafilter is
free to read, but requires a one-time five-dollar fee to post. As a result, they
have managed to build a community of <a href="http://www.thatsaspicymeatball.com/comments/">pleasantly eloquent posters</a>. <strong>And
they're full on board with web 2.0 features like tagging and <a href="http://ask.metafilter.com/tags/ubuntu/rss">RSS
feeds</a>.</strong> The volume is already quite low as a result of the five-dollar
hurdle, especially since each member can ask only one question per week.</p>
<p>But generally, you probably don't want to pay to help other people out. An
interesting new group of sites has been built with some very worthwhile design
insights to motivate quality responses and questions. There are countless
question-and-answer sites out there: Google Answers (now retired), Yahoo
Answers, etc. There are also specialized sites like Experts Exchange that do
some unsavory tricks and generally profit from crowdsourced labor that happens
spontaneously on the web. StackOverflow is a site designed to shift the rules
for programming questions on the web towards open access. But more interesting
for Ubuntu is its companion site, <a href="http://serverfault.com/">ServerFault</a>, which is geared towards
servers and system administration, a topic more relevant to Ubuntu than
general programming questions. They support RSS feeds, so
you can <strong>subscribe to all questions tagged <a href="http://serverfault.com/feeds/tag/ubuntu">Ubuntu</a></strong>. I hope people
working on LP answers observe what is and isn't working for ServerFault.</p>
<p>And obviously, if a specific program is having trouble, there's usually an IRC
channel, mailing list, bug tracker or forum organized by upstream. If you're
going upstream for help, just be careful: upstream developers find it
frustrating when user after user arrives to complain about a bug in Ubuntu
that is already fixed in upstream's latest release. If you're looking to help
people, getting involved upstream is
always a good pick. If you're still reading, hopefully this lengthy post has
given you some ideas on how to target specific support request topics, and
save yourself some time wading through noisy communication channels. Helping
out doesn't have to be an avalanche of data!</p>Developer time is rivalrous2009-06-07T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-06-07:developer-time-is-rivalrous.html<p><a href="http://doctormo.wordpress.com/2009/06/06/foss-can-work-in-the-free-market/">Martin Owens</a> writes on the subject of open source economics:</p>
<blockquote>
<p>What I would suggest is that we are looking at the problem the wrong way.
While software is not rivalrous or excludable, software development as a
service is excludable (although not quite rivalrous) and this is important.</p>
</blockquote>
<p>Fundamentally, I'm not sold on "source code isn't excludable". Or computer
data of any sort, for that matter. If I take a photo, I can exclude others
from it by not publishing it until I receive payment. Similarly, if I hire Martin to
patch the source code for a project, I can exclude others from that work
simply by not publishing the patch. It's tempting to apply the following logic
to excludable goods:</p>
<ol>
<li>
<p>HackerWare publishes an open source product, FizBuzz.</p>
</li>
<li>
<p>SuitSales Inc <a href="http://imranontech.com/2007/01/24/using-fizzbuzz-to-find-developers-who-grok-coding/">discovers FizBuzz has a flaw</a>.</p>
</li>
<li>
<p>SuitSales Inc, depending on FizBuzz, hires a Joe The Programmer to fix
the bug for them in house.</p>
</li>
<li>
<p><strong>(the "software isn't excludable" step/fallacy) </strong>Joe the Programmer
gives the patch away for free to people, maybe even HackerWare.</p>
</li>
</ol>
<p>Except Joe the Programmer doesn't have to give the patch away for free. He
could go around consulting with SuitSales's competitors and repeat the
transaction, at significantly lower costs (people complain about this practice
in the IT community). He might even negotiate an agreement with people not to
share his patch. But even if he doesn't, <strong>SuitSales Inc. has the same
incentives</strong>: share in exchange for cash. In any case, some action has to be
taken for a third party to enjoy the benefits of the patch.</p>
<p>In fact, the GPL isn't enough to make normal code a public good. You're under
no obligation to publish patches for private use, and no third party is
required to be involved. Instead, the GPL is sort of a compromise, to undo the
damage copyright has wrought on the process. Copyright provides massive
incentives to produce "intellectual property." Ever notice how "hit-driven"
markets seem to deal exclusively with intellectual property? Movies, music,
software, books; there's more stuff out there that I want to enjoy than I could
spend a lifetime consuming.</p>
<p>So if this is more of a club good, why do people offer code seemingly for
free? Joe might give the patch away in exchange for some peer review of his
code before he offers it to his clients for production use. SuitSales might
want ease of maintenance, <a href="https://wiki.ubuntu.com/MarkShuttleworth#Is%20Ubuntu%20a%20Debian%20fork?%20Or%20spoon?%20What%20sort%20of%20silverware%20are%20you,%20man?">because carrying a delta incurs a cost</a>. The GPL
greases the wheels for these exchanges.</p>
<p>And why would HackerWare <a href="http://en.wikipedia.org/wiki/Zope">release the code</a> in the first place? Because all
this consulting work is, and always has been, the gravy train, and HackerWare
has a major competitive advantage. They can advertise support contracts at the
same place you get the software from, and they know the heart of the code.</p>
<p>So I find Martin Owens's proposal interesting, but probably misguided. He
denigrates support contracts as somehow indirect and undesirable, when it's
really a good way to insure a group of users and fund development in the
process. The trouble with buying and selling developer time directly is one of
estimation. Generally speaking, you hire developers for their output. It's
widely believed that programmer productivity is unequal and hard to measure
beforehand, so you really have no idea how many "blocks" you'd need to spend
to prioritize a feature or bug. And how would you enforce hours worked?</p>
<p>Most importantly, what does escrow do with failed projects?</p>A case study in the FAIL metric: Soul-Fu2009-05-29T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-05-29:a-case-study-in-the-fail-metric-soul-fu.html<p>Tom Calloway posts <a href="http://spot.livejournal.com/308370.html?view=1403794#t1403794">a metric for evaluating open source projects</a>, which I
shall dub "FAIL, Assessing Inadequacy Lazily" (FAIL). To help calibrate the
scales, I'll provide a case study: <a href="http://www.soulfu.com/forums/index.php">Secret of Ultimate Legend Fantasy
Unleashed</a> (Soul-Fu).</p>
<h3>About Soul-Fu</h3>
<p>Soul-fu is the brainchild of Aaron Bishop, original author of <a href="http://en.wikipedia.org/wiki/Egoboo_(computer_game)">Egoboo</a>.
They are both dungeon crawler games, with an emphasis on action and influence
from Nethack. Its original platform was apparently Windows, but the author
justifies some code by claiming game consoles were also targeted. Like Egoboo
before it, Soul-Fu is largely abandoned by its creator, with the source code
provided to all takers under a custom license.</p>
<p>Technology-wise, the game includes OpenGL rendering and cel shading. Shadows
are cast from characters and enemies, and equipping armor and weapons changes
your appearance in game. Animations involve skeletal motion and are generally
decent.</p>
<p>In Aaron's absence, a modest community has organized around the official
message board, to mod and hack the code. Modding the game involves loading up
an in-game text editor, modeled after Emacs(!).</p>
<p>So how does this game fare on the FAIL scale? Let's find out.</p>
<p><em>Applied points of fail in bold</em>, <em>editorial comments in italics</em></p>
<h3>Size</h3>
<p><em>The community SVN helpfully checked in all the game data along with the code
to a single project, but it manages to still fit within 100MB</em></p>
<ul>
<li>
<p>The source code is more than 100 MB. [ +5 points of FAIL ]</p>
</li>
<li>
<p>If the source code also exceeds 100 MB when it is compressed [ +5 points
of FAIL ]</p>
</li>
</ul>
<h3>Source Control</h3>
<p><em>The Soul-Fu community runs their own SVN server, after it was discovered that
SourceForge didn't approve of the license.</em></p>
<ul>
<li>
<p>There is no publicly available source control (e.g. cvs, svn, bzr, git) [
+10 points of FAIL ]</p>
</li>
<li>
<p>There is publicly available source control, but:</p>
</li>
<li>
<p>There is no web viewer for it [ +5 points of FAIL ]</p>
</li>
<li>
<p>There is no documentation on how to use it for new users [ +5 points of
FAIL ]</p>
</li>
<li>
<p>You've written your own source control for this code [ +30 points of FAIL
]</p>
</li>
<li>
<p>You don't actually use the existing source control [ +50 points of FAIL ]</p>
</li>
</ul>
<h3>Building From Source</h3>
<p><em>Soul-Fu originally built via VC.NET; someone later wrote a simple Makefile to
get the job done on Linux. These Makefiles have been reverted in SVN, so it's
hard to judge this section. To promote clue, we'll say anything not in SVN
HEAD doesn't count.</em></p>
<ul>
<li>
<p>There is no documentation on how to build from source [ +20 points of FAIL
]</p>
</li>
<li>
<p>If documentation exists on how to build from source, but it doesn't work [
+10 points of FAIL ]</p>
</li>
<li>
<p>Your source is configured with a handwritten shell script [ +10 points of
FAIL ]</p>
</li>
<li>
<p>Your source is configured editing flat text config files [ +20 points of
FAIL]</p>
</li>
<li>
<p>Your source is configured by editing code header files manually [ +30
points of FAIL ]</p>
</li>
<li>
<p>Your source isn't configurable <strong>[ +50 points of FAIL ]</strong></p>
</li>
<li>
<p>Your source builds using something that isn't GNU Make <strong>[ +10 points of
FAIL ]</strong></p>
</li>
<li>
<p>Your source only builds with third-party proprietary build tools <strong>[ +50
points of FAIL ]</strong></p>
</li>
<li>
<p>You've written your own build tool for this code <strong>[ +100 points of FAIL
]</strong> <em>In-game scripting is done with a custom language, PXSS, and its own
compiler. There's also the conversion of game data to a single file, but I
might forgive that here.</em></p>
</li>
</ul>
<h3>Bundling</h3>
<p><em>Soul-Fu was originally published requiring a custom libjpeg. This has been
fixed in SVN.</em></p>
<ul>
<li>
<p>Your source only comes with other code projects that it depends on [ +20
points of FAIL ]</p>
</li>
<li>
<p>If your source code cannot be built without first building the bundled
code bits [ +10 points of FAIL ]</p>
</li>
<li>
<p>If you have modified those other bundled code bits [ +40 points of FAIL ]</p>
</li>
</ul>
<h3>Libraries</h3>
<p><em>Soul-Fu builds against a number of freely available libraries such as SDL,
Ogg, and OpenGL</em></p>
<ul>
<li>
<p>Your code only builds static libraries [ +20 points of FAIL ]</p>
</li>
<li>
<p>Your code can build shared libraries, but only unversioned ones [ +20
points of FAIL ]</p>
</li>
<li>
<p>Your source does not try to use system libraries if present [ +20 points
of FAIL ]</p>
</li>
</ul>
<h3>System Install</h3>
<p><em>The most Soul-Fu has for installing is a .nsi for Windows bundles, and even
that doesn't appear to be in SVN HEAD. I wrote Debian packaging for it, but
Soul-Fu itself has a snowball's chance in hell of distribution inclusion.</em></p>
<ul>
<li>
<p>Your code tries to install into /opt or /usr/local [ +10 points of FAIL ]</p>
</li>
<li>
<p>Your code has no "make install" <strong>[ +20 points of FAIL ]</strong></p>
</li>
<li>
<p>Your code doesn't work outside of the source directory [ +30 points of
FAIL ]</p>
</li>
</ul>
<h3>Code Oddities</h3>
<p><em>Honestly, the code itself has a host of oddities that might merit its own
categories, but we're trying to use simple and objective metrics as a stand-in
for intelligent analysis.</em></p>
<ul>
<li>
<p>Your code uses Windows line breaks ("DOS format" files) [ +5 points of
FAIL ]</p>
</li>
<li>
<p>Your code depends on specific compiler feature functionality [ +20 points
of FAIL ]</p>
</li>
<li>
<p>Your code depends on specific compiler bugs [ +50 points of FAIL ]</p>
</li>
<li>
<p>Your code depends on Microsoft Visual Anything <strong>[ +100 points of FAIL ]</strong></p>
</li>
</ul>
<h3>Communication</h3>
<p><em>Soul-Fu's hub of activity is a phpBB forum. Theoretically, the original
author has a mailing list, but I don't think it's available for community
use.</em></p>
<ul>
<li>
<p>Your project does not announce releases on a mailing list <strong>[ +5 points of
FAIL ]</strong></p>
</li>
<li>
<p>Your project does not have a mailing list <strong>[ +10 points of FAIL ]</strong></p>
</li>
<li>
<p>Your project does not have a bug tracker <strong>[ +20 points of FAIL ]</strong></p>
</li>
<li>
<p>Your project does not have a website [ +50 points of FAIL]</p>
</li>
<li>
<p>Your project is sourceforge vaporware [ +100 points of FAIL ]</p>
</li>
</ul>
<h3>Releases</h3>
<p><em>Per the original author's guidelines, official community releases must be
approved by a vote. No such vote has happened yet. We'll give it a no release
mark and move on.</em></p>
<ul>
<li>
<p>Your project does not do sanely versioned releases (Major, Minor) [ +10
points of FAIL ]</p>
</li>
<li>
<p>Your project does not do versioned releases [ +20 points of FAIL ]</p>
</li>
<li>
<p>Your project does not do releases <strong>[ +50 points of FAIL ]</strong></p>
</li>
<li>
<p>Your project only does releases as attachments in web forum posts [ +100
points of FAIL ]</p>
</li>
<li>
<p>Your releases are only in .zip format [ +5 points of FAIL ]</p>
</li>
<li>
<p>Your releases are only in OSX .zip format [ +10 points of FAIL ]</p>
</li>
<li>
<p>Your releases are only in .rar format [ +20 points of FAIL ]</p>
</li>
<li>
<p>Your releases are only in .arj format [ +50 points of FAIL ]</p>
</li>
<li>
<p>Your releases are only in an encapsulation format that you invented. [
+100 points of FAIL ]</p>
</li>
<li>
<p>Your release does not unpack into a versioned top-level directory (e.g.
glibc-2.4.2/ ) [ +10 points of FAIL ]</p>
</li>
<li>
<p>Your release does not unpack into a top-level directory (e.g. glibc/ ) [
+25 points of FAIL ]</p>
</li>
<li>
<p>Your release unpacks into an absurd number of directories (e.g.
home/johndoe/glibc-svn/tarball/glibc/src/) [ +50 points of FAIL ]</p>
</li>
</ul>
<h3>History</h3>
<p><em>Soul-Fu was originally a solo project, intended for commercialization. As
such you will encounter stupid code and <strong>intentional</strong> obfuscation of data
objects. The metric may need revision to accommodate the level of peer review
code got while proprietary. An additional "original authors abandoned the
project when open sourcing it" criterion may be in order; for now we'll call it
a fork since the original is still available.</em></p>
<ul>
<li>
<p>Your code is a fork of another project [ +10 points of FAIL ]</p>
</li>
<li>
<p>Your primary developers were not involved with the parent project <strong>[ +50
points of FAIL ]</strong></p>
</li>
<li>
<p>Until open sourcing it, your code was proprietary for:</p>
</li>
<li>
<p>1-2 years <strong>[ +10 points of FAIL ]</strong></p>
</li>
<li>
<p>3-5 years [ +20 points of FAIL ]</p>
</li>
<li>
<p>6-10 years [ +30 points of FAIL ]</p>
</li>
<li>
<p>10+ years [ +50 points of FAIL ]</p>
</li>
</ul>
<h3>Licensing</h3>
<p><em>The code is licensed under what can only be called a custom "Be Nice"
license. I actually asked the OSI to rule on it after the author claimed it
was okay and the OSI would approve it. The license itself is hidden within an
HTML manual, and generally states that it must remain nagware and
noncommercial. It also stipulates that official releases must see community
approval. On the bizarre licensing the author has not budged.</em></p>
<ul>
<li>
<p>Your code does not have per-file licensing <strong>[ +10 points of FAIL ]</strong></p>
</li>
<li>
<p>Your code contains inherent license incompatibilities [ +20 points of FAIL
]</p>
</li>
<li>
<p>Your code does not have any notice of licensing intent [ +30 points of
FAIL ]</p>
</li>
<li>
<p>Your code doesn't include a copy of the license text [ +50 points of FAIL
]</p>
</li>
<li>
<p>Your code doesn't have a license [ +100 points of FAIL ]</p>
</li>
</ul>
<h3>Documentation</h3>
<p><em>Soul-Fu code does not self-document, and what passes for documentation is
some notes the author made while writing Soul-Fu.</em></p>
<ul>
<li>
<p>Your code doesn't have a changelog <strong>[+10 points of FAIL]</strong></p>
</li>
<li>
<p>Your code doesn't have any documentation [ +20 points of FAIL ]</p>
</li>
<li>
<p>Your website doesn't have any documentation [ +30 points of FAIL ]</p>
</li>
</ul>
<h2>FAIL METER</h2>
<p>0 points of FAIL: Perfect! All signs point to success!</p>
<blockquote>
<p>5-25 points of FAIL: You're probably doing okay, but you could be better.</p>
<p>30-60 points of FAIL: Babies cry when your code is downloaded</p>
<p>65-90 points of FAIL: Kittens die when your code is downloaded</p>
<p>95-130 points of FAIL: HONK HONK. THE FAILBOAT HAS ARRIVED!</p>
<p>135+ points of FAIL: So much fail, your code should have its own reality TV
show.</p>
</blockquote>
<p><strong>495 points of FAIL: Soul-Fu</strong>.</p>
<p>This is why I have Soul-Fu patches and Ubuntu packaging, but am sitting on
them. It would be a drop in the bucket of failure.</p>mount sshfs at boot2009-05-25T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-05-25:mount-sshfs-at-boot.html<p>Currently, Upstart in Ubuntu does not generate network events. Instead it
calls traditional sysvinit. By default NetworkManager is installed and
running; rather than emit network events to Upstart, it contains a run-parts
dispatcher (/etc/NetworkManager/dispatcher.d/) which itself simply relies on
ifupdown's run-parts dispatcher (/etc/network/*.d/). In particular, you care
about /etc/network/if-up.d/ and /etc/network/if-down.d/.</p>
<p>First set up an unencrypted ssh keypair, so you can ssh and mount the point
without a prompt. Write a script, place it in /etc/network/if-up.d/, and make
it executable. The following was discovered on <a href="http://ubuntuforums.org/showthread.php?t=430312">UbuntuForums</a> and was
sufficient for me:</p>
<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre> 1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110</pre></div></td><td class="code"><div class="highlight"><pre><span class="c">#!/bin/sh</span>
<span class="c">## http://ubuntuforums.org/showthread.php?t=430312</span>
<span class="c">## The script will attempt to mount any fstab entry with an option</span>
<span class="c">## "...,comment=$SELECTED_STRING,..."</span>
<span class="c">## Use this to select specific sshfs mounts rather than all of them.</span>
<span class="nv">SELECTED_STRING</span><span class="o">=</span><span class="s2">"sshfs"</span>
<span class="c"># Not for loopback</span>
<span class="o">[</span> <span class="s2">"</span><span class="nv">$IFACE</span><span class="s2">"</span> !<span class="o">=</span> <span class="s2">"lo"</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">exit </span>0
<span class="c">## define a number of useful functions</span>
<span class="c">## returns true if input contains nothing but the digits 0-9, false otherwise</span>
<span class="c">## so realy, more like isa_positive_integer</span>
isa_number <span class="o">()</span> <span class="o">{</span>
! <span class="nb">echo</span> <span class="nv">$1</span> <span class="p">|</span> egrep -q <span class="s1">'[^0-9]'</span>
<span class="k">return</span> <span class="nv">$?</span>
<span class="o">}</span>
<span class="c">## returns true if the given uid or username is that of the current user</span>
am_i <span class="o">()</span> <span class="o">{</span>
<span class="o">[</span> <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> <span class="o">=</span> <span class="s2">"`id -u`"</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> <span class="o">=</span> <span class="s2">"`id -un`"</span> <span class="o">]</span>
<span class="o">}</span>
<span class="c">## takes a username or uid and finds it in /etc/passwd</span>
<span class="c">## echoes the name and returns true on success</span>
<span class="c">## echoes nothing and returns false on failure</span>
user_from_uid <span class="o">()</span> <span class="o">{</span>
<span class="k">if</span> isa_number <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span>
<span class="k">then</span>
<span class="c"># look for the corresponding name in /etc/passwd</span>
<span class="nb">local </span><span class="nv">IFS</span><span class="o">=</span><span class="s2">":"</span>
<span class="k">while</span> <span class="nb">read </span>name x uid the_rest
<span class="k">do</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> <span class="o">=</span> <span class="s2">"</span><span class="nv">$uid</span><span class="s2">"</span> <span class="o">]</span>
<span class="k">then</span>
<span class="nb">echo</span> <span class="s2">"</span><span class="nv">$name</span><span class="s2">"</span>
<span class="k">return</span> 0
<span class="k">fi</span>
<span class="k">done</span> </etc/passwd
<span class="k">else</span>
<span class="c"># look for the username in /etc/passwd</span>
<span class="k">if</span> grep -q <span class="s2">"^</span><span class="si">${</span><span class="nv">1</span><span class="si">}</span><span class="s2">:"</span> /etc/passwd
<span class="k">then</span>
<span class="nb">echo</span> <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span>
<span class="k">return</span> 0
<span class="k">fi</span>
<span class="k">fi</span>
<span class="c"># if nothing was found, return false</span>
<span class="k">return</span> 1
<span class="o">}</span>
<span class="c">## Parses a string of comma-separated fstab options and finds out the</span>
<span class="c">## username/uid assigned within them.</span>
<span class="c">## echoes the found username/uid and returns true if found</span>
<span class="c">## echoes "root" and returns false if none found</span>
uid_from_fs_opts <span class="o">()</span> <span class="o">{</span>
<span class="nb">local </span><span class="nv">uid</span><span class="o">=</span><span class="sb">`</span><span class="nb">echo</span> <span class="nv">$1</span> <span class="p">|</span> egrep -o <span class="s1">'uid=[^,]+'</span><span class="sb">`</span>
<span class="k">if</span> <span class="o">[</span> -z <span class="s2">"</span><span class="nv">$uid</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
<span class="c"># no uid was specified, so default is root</span>
<span class="nb">echo</span> <span class="s2">"root"</span>
<span class="k">return</span> 1
<span class="k">else</span>
<span class="c"># delete the "uid=" at the beginning</span>
<span class="nv">uid_length</span><span class="o">=</span><span class="sb">`</span>expr length <span class="nv">$uid</span> - 3<span class="sb">`</span>
<span class="nv">uid</span><span class="o">=</span><span class="sb">`</span>expr substr <span class="nv">$uid</span> <span class="m">5</span> <span class="nv">$uid_length</span><span class="sb">`</span>
<span class="nb">echo</span> <span class="nv">$uid</span>
<span class="k">return</span> 0
<span class="k">fi</span>
<span class="o">}</span>
<span class="c"># unmount all shares first</span>
sh <span class="s2">"/etc/network/if-down.d/umountsshfs"</span>
<span class="k">while</span> <span class="nb">read </span>fs mp <span class="nb">type </span>opts dump pass extra
<span class="k">do</span>
<span class="c"># check validity of line</span>
<span class="k">if</span> <span class="o">[</span> -z <span class="s2">"</span><span class="nv">$pass</span><span class="s2">"</span> -o -n <span class="s2">"</span><span class="nv">$extra</span><span class="s2">"</span> -o <span class="s2">"`expr substr </span><span class="si">${</span><span class="nv">fs</span><span class="si">}</span><span class="s2">x 1 1`"</span> <span class="o">=</span> <span class="s2">"#"</span> <span class="o">]</span><span class="p">;</span>
<span class="k">then</span>
<span class="c"># line is invalid or a comment, so skip it</span>
<span class="k">continue</span>
<span class="c"># check if the line is a selected line</span>
<span class="k">elif</span> <span class="nb">echo</span> <span class="nv">$opts</span> <span class="p">|</span> grep -q <span class="s2">"comment=</span><span class="nv">$SELECTED_STRING</span><span class="s2">"</span><span class="p">;</span> <span class="k">then</span>
<span class="c"># get the uid of the mount</span>
<span class="nv">mp_uid</span><span class="o">=</span><span class="sb">`</span>uid_from_fs_opts <span class="nv">$opts</span><span class="sb">`</span>
<span class="k">if</span> am_i <span class="s2">"</span><span class="nv">$mp_uid</span><span class="s2">"</span><span class="p">;</span> <span class="k">then</span>
<span class="c"># current user owns the mount, so mount it normally</span>
<span class="o">{</span> sh -c <span class="s2">"mount </span><span class="nv">$mp</span><span class="s2">"</span> <span class="o">&&</span>
<span class="nb">echo</span> <span class="s2">"</span><span class="nv">$mp</span><span class="s2"> mounted as current user (`id -un`)"</span> <span class="o">||</span>
<span class="nb">echo</span> <span class="s2">"</span><span class="nv">$mp</span><span class="s2"> failed to mount as current user (`id -un`)"</span><span class="p">;</span>
<span class="o">}</span> <span class="p">&</span>
<span class="k">elif</span> am_i root<span class="p">;</span> <span class="k">then</span>
<span class="c"># running as root, so sudo mount as user</span>
<span class="k">if</span> isa_number <span class="s2">"</span><span class="nv">$mp_uid</span><span class="s2">"</span><span class="p">;</span> <span class="k">then</span>
<span class="c"># sudo wants a "#" sign in front of a numeric uid</span>
<span class="nv">mp_uid</span><span class="o">=</span><span class="s2">"#</span><span class="nv">$mp_uid</span><span class="s2">"</span>
<span class="k">fi</span>
<span class="o">{</span> sudo -u <span class="s2">"</span><span class="nv">$mp_uid</span><span class="s2">"</span> sh -c <span class="s2">"mount </span><span class="nv">$mp</span><span class="s2">"</span> <span class="o">&&</span>
<span class="nb">echo</span> <span class="s2">"</span><span class="nv">$mp</span><span class="s2"> mounted as </span><span class="nv">$mp_uid</span><span class="s2">"</span> <span class="o">||</span>
<span class="nb">echo</span> <span class="s2">"</span><span class="nv">$mp</span><span class="s2"> failed to mount as </span><span class="nv">$mp_uid</span><span class="s2">"</span><span class="p">;</span>
<span class="o">}</span> <span class="p">&</span>
<span class="k">else</span>
<span class="c"># otherwise, don't try to mount another user's mount point</span>
<span class="nb">echo</span> <span class="s2">"Not attempting to mount </span><span class="nv">$mp</span><span class="s2"> as other user </span><span class="nv">$mp_uid</span><span class="s2">"</span>
<span class="k">fi</span>
<span class="k">fi</span>
<span class="c"># if not an sshfs line, do nothing</span>
<span class="k">done</span> </etc/fstab
<span class="nb">wait</span>
</pre></div>
</td></tr></table>
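<p>For reference, here is a hypothetical fstab entry the script above would match, together with the uid extraction it performs (the host, path, and uid are invented; the script itself uses egrep and expr where this sketch uses grep -Eo and cut):</p>

```shell
# Hypothetical fstab entry: comment=sshfs marks it for the if-up script,
# and uid=1000 names the user who should perform the mount.
fstab_line='user@host:/srv/share /mnt/share fuse.sshfs noauto,users,comment=sshfs,uid=1000 0 0'

# Pull out the options field, then the uid, as the script's
# uid_from_fs_opts function does.
opts=$(echo "$fstab_line" | awk '{ print $4 }')
uid=$(echo "$opts" | grep -Eo 'uid=[^,]+' | cut -d= -f2)
echo "$uid"    # prints 1000
```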
<p>If you have a wifi or otherwise unreliable connection, place the following in
/etc/network/if-down.d/:</p>
<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2
3
4
5
6
7
8
9</pre></div></td><td class="code"><div class="highlight"><pre><span class="c">#!/bin/bash</span>
<span class="c"># Not for loopback!</span>
<span class="o">[</span> <span class="s2">"</span><span class="nv">$IFACE</span><span class="s2">"</span> !<span class="o">=</span> <span class="s2">"lo"</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">exit </span>0
<span class="c"># comment this for testing</span>
<span class="nb">exec </span>1>/dev/null <span class="c"># squelch output for non-interactive</span>
<span class="c"># umount all sshfs mounts</span>
<span class="nv">mounted</span><span class="o">=</span><span class="sb">`</span>grep <span class="s1">'fuse.sshfs\|sshfs#'</span> /etc/mtab <span class="p">|</span> awk <span class="s1">'{ print $2 }'</span><span class="sb">`</span>
<span class="o">[</span> -n <span class="s2">"</span><span class="nv">$mounted</span><span class="s2">"</span> <span class="o">]</span> <span class="o">&&</span> <span class="o">{</span> <span class="k">for</span> mount in <span class="nv">$mounted</span><span class="p">;</span> <span class="k">do</span> umount -l <span class="nv">$mount</span><span class="p">;</span> <span class="k">done</span><span class="p">;</span> <span class="o">}</span>
</pre></div>
</td></tr></table>
<p>The final step is to make sure that <a href="https://help.ubuntu.com/community/NetworkManager0.7#Adding%20Wired%20connections">your connection starts at boot</a>, if
you want to start networking (and sshfs) before anyone logs in.</p>Open Source as an Enterprise Strategy2009-05-21T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-05-21:open-source-as-an-enterprise-strategy.html<p>After the dotcom crash, a new argument about open source arose: "What if the
company you bought software from goes under? Open source gives you a viable
route to find consultants, move development internally, and so on. You're not
dependent on any one group's success." You can find this logic behind the
Ubuntu Foundation pledge. The argument is slightly compelling, but I can't
think of any examples where a company with paying customers simply liquidated.</p>
<p>However, another argument springs to the mind of many after a recent event:
<strong>"What if the company you bought software from is itself purchased? What
protections do you have from aggressive monopolists? With open source, you
always have a viable route to avoid particular nasty players."</strong> Without a
strong competitor available, you have a far worse bargaining position when it
comes to things like support and additional features.</p>
<p>That recent event? <a href="https://www.blackboard.com/angel">Blackboard purchased yet another competitor</a>.
Blackboard is growing to represent everything IT people hate in the software
industry. They applied their trademark to stop what I assume was
typosquatting. They filed restraining orders against people who intended to
present software vulnerabilities at a conference. Most egregiously, Blackboard
was awarded a patent for online teaching websites ("learning systems"). I am
not a patent lawyer, but I do know that dozens of places, including my alma
mater, had been working on, and may have even deployed, similar systems at the
time the patent was filed.</p>
<p>What's most striking about this is that Blackboard is a Linux-based system.
It's completely a LAMP stack. That heritage might be why Blackboard offered <a href="http://www.blackboard.com/getdoc/ee803a3a-cf08-464c-8926-7268a5dcdb15/Patent-Pledge.aspx">a
small compromise</a> days after the SFLC announced the USPTO was re-examining
Blackboard's patents. The gist of the pledge is to not pursue patent claims
against open source projects, unless those projects integrate with proprietary
software. This gift isn't as generous as it seems; the number one question
from many colleges looking for software is "Does this integrate with Banner?"
Banner is basically the administrative lifeblood of these places. Enrollment,
accounting, performance metrics, etc. It is, of course, proprietary and the
most boring kind of software no open source hacker would write to scratch a
personal itch (I'm sorry Mark, but <strong>SchoolTool</strong> is nowhere near being
in the running for what colleges need).</p>
<p>So in the face of the announcement that Blackboard had bought up its largest
competitor using their earnings from that quarter (!), a number of people are
suggesting that <a href="http://chronicle.com/free/v55/i37/37a00102.htm">open source is strategic</a> to their enterprise:</p>
<blockquote>
<p>"It's going to probably push some people off the fence on the whole open-
source question," said Nicole Engelbert, lead analyst for education technology
at the consulting firm Datamonitor, who believes that it is possible for
Blackboard to improve as a result of its purchase of Angel. "Some people are
going to say, 'That's it, I'm going to definitely invest in Moodle or Sakai.'"</p>
</blockquote>
<p>Ubuntu ships Moodle (but not Sakai). We also ship restricted drivers. I wonder
what Blackboard, Canonical, and the Ubuntu community think of that.</p>Incentives2009-05-15T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-05-15:incentives.html<p>I had drafted an article on UbuntuOne, but clearly this has been beaten to
death. But has anyone else noticed how regressive Amazon S3 is? That's gotta
have some unsavory market implications, and likely means an open source
service available on S3 will be at a competitive disadvantage to a centralized
operator.</p>About time2009-05-12T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-05-12:about-time.html<p><a href="http://www.latimes.com/news/nationworld/nation/wire/sns-ap-us-car-warranty-calls,1,7232357.story">FTC</a> is gearing up to do something about Car Warranty robocalling scams.</p>
<p>Clearly these people are one step ahead technologically at the moment, to be
able to wardial like that without being caught or even blocked. Maybe someone
will write a phone app where you can just report these numbers to the relevant
authorities with a single button press. That's the sort of phone software
that might actually cause me to purchase a smartphone over my current $100
annual phone plan.</p>Garmin hosts a Linux User Group meeting2009-05-10T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-05-10:garmin-hosts-a-linux-user-group-meeting.html<p><a href="http://www.garmin.com">Garmin</a> is a sizable consumer electronics firm situated in the Kansas City
metro area, and yesterday they graciously hosted a <a href="http://www.kulua.org/">KULUA meeting</a> in their
conference space. Why would they do this? Because they use <a href="http://developer.garmin.com/linux">Linux in their
products</a> and are hiring.</p>
<p><a href="http://www.tallgrasstech.com/">Tallgrass Technologies</a> catered lunch, gave a presentation on OpenStorage
and held a business card raffle for Royals' tickets. A few local Linux
aficionados gave quick presentations on PostgreSQL and <a href="http://www.eclipse.org/mylyn/">Mylyn</a>. It has
been suggested that at some point slides and videos of these will be posted.
Garmin also gave a talk (<a href="http://groups.google.com/group/kulua-l/msg/6e1b4263a200151b">slides</a>), but there wasn't any invitation for
questions, so those of you hoping to hear about upstream involvement or more
transparency will be disappointed. Still, I'm glad they chose to distribute
the slides and talk at all. Oh, one neat trick the presenter showed off was
ssh'ing into a nuvi device. It's rare to find embedded platforms that leave
dropbear installed, so that's pretty sweet. Not sure what ant was running for
though; surely they don't build on the device itself.</p>
<p>The campus itself is much, <em>much</em> bigger than when I was last invited to
visit. The parking garage appears to have been custom engineered to move
Garmin products, as the place is an unnavigable labyrinth. The lobby
atmosphere was laden with an awkward senior year interview type tension: five
well dressed complete strangers sit in uncomfortable silence, perhaps even
secretly sizing one another up, and wait for a company employee to escort them
to the interview room user group meeting. I feel like I got the evil eye a
bit, dressed in my Ubuntu polo and jeans. Probably because Garmin is hiring,
and won't let you forget this fact. In fact, they're one of the few places in
town that <em>is</em> hiring, even though they just <a href="http://seekingalpha.com/article/135883-garmin-ltd-q1-2009-earnings-call-transcript?source=yahoo">announced</a> major <a href="http://finance.yahoo.com/news/Garmin-posts-steep-decline-in-apf-15156602.html?.v=10">profit
drops</a> in the last week.</p>
<p>Certainly, there was a large turnout, and it highlights demand for user group
meetings in Johnson County and Kansas City. If there are any Ubuntistas in the
area, you may want to get in contact with me.</p>Backups galore2009-05-02T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-05-02:backups-galore.html<p>I thought I'd do a quick survey and figure out what backup tools are readily
available in Ubuntu. I found 17 that might be worth mentioning. Many are
front-ends, and front-ends-to-the-front-ends, so to keep them straight I
cooked up a quick diagram with GraphViz:</p>
<p><a href="http://www.flickr.com/photos/jldugger/3492777989/" title="backups galore by jld5445, on Flickr"><img alt="backups galore" src="http://farm4.static.flickr.com/3325/3492777989_9ec4ca0e1a_b.jpg" /></a></p>
<p>So there's a lot, and I may have missed a few nodes or edges. So far, my
favorite is probably mrb:</p>
<blockquote>
<p>Package: mrb Description: Manage incremental data snapshots with make/rsync</p>
</blockquote>
<p>mrb is a single, self-documenting, executable makefile, which aims to
make trivial the task of maintaining a set of compact, incremental, rsync
mirrors of your important (and sometimes rapidly changing) data. It relies
only on the time-hardened industry tools GNU make and rsync. Snapshots may be
taken at any opportune interval. Multiple snapshot targets can be configured
in a modular fashion, so fast changing data can be separated from static bulk
data, with snapshots of each scheduled or triggered on demand, as may be
appropriate for each.</p>
<p>At first you think, "Perfect, how hard can a make frontend to rsync be?" 15
kilobytes of Makefile later, you realize this may not be as brilliant as it
sounds on paper. But it is well commented and relatively user friendly, which
may actually impede the code's readability (as if anything involving
whitespace syntax could be readable). Such gems as:</p>
<div class="highlight"><pre># If I have to explain this one, then I guess you are just reading this
# 'for the articles' -- but I hope you'll have enjoyed it anyway...
</pre></div>
<p>I haven't tried mrb yet, but I might put it on the TODO list. But first, I'll
need to give <a href="http://jwz.livejournal.com/801607.html">jwz's method</a> a full vetting.</p>Ubuntu is for ricers?2009-04-21T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-04-21:ubuntu-is-for-ricers.html<p>nixternal <a href="http://blog.nixternal.com/2009.04.20/stereotypes/">takes funroll-loops a little too literally</a>, and discovers that
there are more Ubuntu stickers than Gentoo ones.</p>
<p>Obviously, stickers are a passive tool to communicate. Conversation starters,
if you will. And certainly people can tie up identity with a popular brand. I
recall seeing a poor grad student who didn't have a MacBook, but had an Apple
logo glued over her Acer logo (poor Acer). She quickly scurried away as the next
lab got started, so I didn't get a chance to ask the story behind it. I have a
hard time putting myself in the shoes of a Gentoo user and wanting to explain
Gentoo to someone who isn't using it. It would just serve to highlight the
gulf between nerds and normal people. So I can imagine why there aren't many
Gentoo stickers in the wild.</p>
<p>nixternal also asks for laptop stickers, and I just happened to finish
photographing some I received in the mail today. Martin Owens <a href="http://doctormo.wordpress.com/2009/03/10/zareason-makers-of-swag/">chronicles how
ZaReason came to offer them</a>. Here's an amateur photo with a yellow lamp
bulb and white flash:</p>
<p><a href="http://www.flickr.com/photos/jldugger/3460878148/" title="img_0572.jpg by jld5445, on Flickr"><img alt="img_0572.jpg" src="http://farm4.static.flickr.com/3610/3460878148_d50735e66a.jpg" /></a></p>Goal Setting2009-04-20T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-04-20:goal-setting.html<p>Ubuntu 9.04 will mark the <a href="http://daniel.holba.ch/blog/?p=391">tenth release</a> of Ubuntu. Rather than party and
engage in self congratulation, I'd like to engage in a retrospective.</p>
<h2>The rocky trail traveled</h2>
<p>In November I helped facilitate a collaboration between a <a href="http://jspaleta.livejournal.com/">Fedora Board
member</a> and the organizers of Ubuntu Brainstorm, to test some social
network datamining. I'm assuming the conclusion was that the votes were too
sparse to draw any strong conclusions about niches interested in specific
subjects. Certainly, there's an element of privacy that restricts what can be
done here even if it is viable.</p>
<p><a href="http://www.metafilter.com/user/82435">I joined MetaFilter</a> and started following the <a href="http://ask.metafilter.com/tags/ubuntu/rss">ubuntu</a> and <a href="http://ask.metafilter.com/tags/linux/rss">linux</a>
tags on AskMeFi. So far, I've racked up four or five "best answers". It's a
very similar project to <a href="http://answers.launchpad.net/ubuntu">Answers</a>, except there's a five dollar sign up fee
that appears to boost the coherence of the questions and replies. And it has
RSS feeds, which is nice for turning down the rate of flow out of the
firehose.</p>
<p>In January, I tried to get ubuntu CDs in the local college library. They
refused a free donation, citing a budget crunch. They also feared a slippery
slope where once they started accepting some software, they'd have to start
buying other programs for circulation. Amusingly, the library already carries
Ubuntu books, and provides the CDs they come with. I believe the reason it was
refused was political rather than fiscal: officially accepting software means
budgeting for software, which would likely slant the budget in favor of the
technology librarians over other subject areas.</p>
<p>Where I specifically fell down over the past six months was in testing and
fixing Jaunty. Traditionally, I use my TabletPC to test tools in ubuntu+1 that
support the TabletPC hardware. Fingerprint readers, wacom, handwriting, and so
on. Unfortunately, I had to loan that device out to a family member whose
computer has gone out of commission. Add on top of that a new job as a system
administrator, with far less spare time, and triaging and testing fell
behind.</p>
<h2>Blazing a new way forward</h2>
<p>I don't feel like I've accomplished much in the past cycle. To better motivate
myself, I'm going to publicly commit to making some changes. For the future,
it's time to right-size my goals. I'll do some new stuff, and drop some of the
old.</p>
<h3>For sure:</h3>
<ul>
<li>
<p><strong>Promote Ubuntu</strong>. I've volunteered to present the Ubuntu netbook remix
at an upcoming LUG meeting. Hopefully I'll have enough CDs and stickers to go
around.</p>
</li>
<li>
<p><strong>Evaluate desktop backup tools.</strong> We have a lot of backup tools in
Ubuntu. Newly minted Ubuntu Member mterry even wrote and packaged one. We
always tell people to make backups before upgrading, but there isn't a lot of
attention paid to it by the community. I'm taking notes on 8 backup tools thus
far. Since I've got a desktop that's been running Ubuntu, and upgraded from
warty through the ages to jaunty, keeping around the accumulated changes
interests me.</p>
</li>
<li>
<p><strong>Share-alike.</strong> I'll revisit the library's policy on software and try the
public libraries instead if they still can't accept free software.</p>
</li>
</ul>
<h3>Maybe:</h3>
<ul>
<li>
<p><strong>Address bug #290159</strong>. There's a patch, I forwarded it upstream for
review, and it sorta stalled out. Unfortunately, the upstream author is also
the Debian maintainer, so there's no extra opportunity for collaboration and
peer review there.</p>
</li>
<li>
<p><strong>Test and triage wacom.</strong> Without 24/7 access to hardware, triaging
reports is hard and testing is even harder. If I get it back, I'll be in a
better place to dedicate some time to handle it.</p>
</li>
<li>
<p><strong>Pitch in for the Ubuntu education project</strong>. I have experience with
popular existing web courseware tools, but not Moodle. Helping out with that
may be interesting. I'm not sure where exactly this is being organized though.</p>
</li>
<li>
<p><strong>Package KeePass 2.0</strong> In a <a href="//www.pwnguin.net/group-password-management-suggestions.html">previous post</a> I asked about software for
managing team secrets. <a href="http://keepass.info/">Keepass 2.0</a> fills a niche we need at work, so it'd
be nice to have it available on Ubuntu workstations. Upstream is dicey though;
a single developer who doesn't publish a public source repo, just binaries
with corresponding source. Hopefully he'll publish a new version that fixes
one or two bugs we've encountered and reported in testing.</p>
</li>
</ul>
<h3>Step Down Considerately:</h3>
<ul>
<li><strong>Fingerprinting in Ubuntu.</strong> This is a bad idea whose time has come.
Unfortunately, there's too much bad to unwind. Thinkfinger in Ubuntu is an SVN
snapshot of a dead project, whose packaging I don't fully understand. fprint
is potentially the replacement but I haven't had time to read how PAM changed
since Hardy. But basically, now that I know that it can work, I'm not sure it
<em>should</em>.</li>
</ul>Group Password Management Suggestions?2009-04-08T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2009-04-08:group-password-management-suggestions.html<p>In IT, it's common to segregate responsibilities not to individuals, but
groups of people. This is handy for vacations, conferences, meetings or
promotions -- power isn't endowed to a specific person and the redundancy
means high throughput when necessary.</p>
<p>One problem is that not all systems we work with understand groups. In
particular, they don't support clustering identities into identical roles.
Active Directory and Unix have Organizational Units and groups, so for most of
our systems, this is not a problem. But for websites, it's often the case that
a single account is created, forcing us to "share" authentication amongst the
group. Moreover, these systems are proliferating, increasing the number of
shared secrets we need to keep.</p>
<p>Password proliferation is not just limited to groups; many people have dozens
of passwords for various websites. Web browsers are including tools to
remember these things, and there's several existing tools for individuals such
as <a href="http://passwordsafe.sourceforge.net/">PasswordSafe</a> or <a href="http://packages.ubuntu.com/jaunty/seahorse">Seahorse</a>.</p>
<p>However, you will quickly run into trouble extending this software to team
use. Publishing updates is tricky; the worst situation is having multiple
versions of an encrypted flat text file floating around (do I have the latest?
did Bob update his version before he published the new version?). If you
centralize the file, you have to think carefully about remote update or risk
overwriting the only place that anyone actually knew the password.</p>
<p>My question is: is there any password management tool published and ready for use,
which makes the leap from single users to groups? Bonus points if your team
actually uses it. If a clear victor emerges, I'll make sure to blog a more
detailed review.</p>I wholeheartedly agree2009-03-02T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2009-03-02:i-wholeheartedly-agree.html<p>Colin puts together a coherent rant about <a href="http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/03/02#2009-02-27-bug-triage-rants">bug triage with Ubuntu.</a> I'm
glad he was able to meditate on the subject and produce a document with
clarity and specific improvements; I wrote a private rant that included the
phrase "Bug Assassination Squad: We kill bugs in their infancy". Probably not
diplomatic enough to publish.</p>
<p>More diplomatically, one thing I'd like to add to Colin's commentary is about
<strong>bug duplicates</strong>. Some people seem to be in the habit of marking duplicate
bugs invalid. Yes, there are a lot of bugs, so going to the effort of finding
the dupe and marking it means you'll be slower at closing bugs. But you'll
have a better bug database as a result. And it will spare me the effort of
searching through all the package's bugs to discover there is no such
duplicate. Certainly, if you believe there's a duplicate report, you have a
better idea of where the dupe is than the person who submitted the bug, who
went through LP's own dupefinder to report the bug in the first place.</p>
<p>While I do feel vindicated that this practice is contrary to <a href="https://wiki.ubuntu.com/Bugs/HowToTriage/#Duplicates">guidelines</a>,
the fact that it <a href="https://bugs.launchpad.net/ubuntu/+bugs?field.searchtext=Thanks+for+the+bug+report.+This+particular+bug+has+already+been+reported+into+our+bug+tracking+system%2C+but+please+feel+free+to+report+any+further+bugs+you+find.&orderby=-datecreated&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.status%3Alist=INVALID&field.status%3Alist=WONTFIX&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=FIXRELEASED&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_supervisor=&field.bug_commenter=&field.subscriber=&field.component-empty-marker=1&field.status_upstream-empty-marker=1&field.omit_dupes.used=&field.omit_dupes=on&field.has_patch.used=&field.has_cve.used=&field.tag=&field.tags_combinator=ANY&field.has_no_package.used=&search=Search">still happens</a> is not encouraging.</p>One of these things is not like the other2009-02-03T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2009-02-03:one-of-these-things-is-not-like-the-other.html<p>The local tabloid, <em>Pitch</em>, recently ran <a href="http://www.pitch.com/2009-01-15/news/for-disgraced-former-joust-king-steve-sanders-there-s-life-after-the-arcade/1">a story</a> about competitive gaming
with an emphasis on classic arcade games. One star from a previous era makes a
bizarre nostalgic comment:</p>
<blockquote>
<p>"Back then, it was more of a brotherhood," he says. "Today, there's far more
tension among those people. Also, bad habits have popped up. People are
playing too many games, eating bad foods, not getting out in the sun. People
are up till 4 a.m. playing games without their parents knowing and eating
genetically modified foods."</p>
</blockquote>Why the video tag won't save us2009-01-28T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2009-01-28:why-the-video-tag-wont-save-us.html<p>I see Mozilla is forking over <a href="http://www.0xdeadbeef.com/weblog/?p=977">some cash to improve Ogg Theora</a>. It's a
nice symbolic gesture, certainly. But Mozilla is fighting an uphill battle, in
the snow, upside down and underwater. I've been sitting on this post for quite
a while, so this announcement is a good excuse to dust it off and finish it.
If you have a glimmer of hope that the &lt;video&gt; tag will somehow undo the
Flash dependencies that Google Video and Youtube hath wrought, I have news for
you. Flash video is the path of least resistance to what video hosting sites
want: <strong>revenue.</strong></p>
<p>In 2007 a large consortium of engineers representing companies discussed
adding <a href="http://www.whatwg.org/specs/web-apps/current-work/#video">a &lt;video&gt; tag</a> to HTML. A number of <a href="http://www.w3.org/2007/08/video/positions/">papers</a> were published
for the event, describing each company's position. Nokia wrote about how they need
hardware decoder support for whatever codecs are chosen. Mozilla wrote about
the need for an open and free "baseline" codec. Adobe's is surprisingly
insightful. Youtube seems to have the distinction of being the only company to
send a marketing rep to an engineering forum. An excerpt from their <a href="http://www.w3.org/2007/08/video/positions/Youtube.html">position
paper</a> :</p>
<blockquote>
<p>It is estimated that the total revenue for online video will expand to $6.3
billion by 2012. Right now, online video content is responsible for an
estimated $538 million in the US alone. That may seem like a small slice of
the $20+ billion spent on online advertising in 2007, but consider this: while
online ad revenues are growing 26% per year, online video ad revenues are
growing 55.5% per year. The potential for Video monetization is clearly there
– just a matter of when.</p>
</blockquote>
<p>I'm not misrepresenting Youtube here; their entire paper reads like this.
Between the lines of Youtube's otherwise incomprehensible position
lies the statement that delivering advertising is all they care about. Not
consumer preference, not <a href="http://mjg59.livejournal.com/80563.html">saving energy</a> or hardware support or bandwidth
consumption. Ogg and Javascript bring nothing to the Youtube table.</p>
<p>Youtube isn't alone; several video sites are banking not on banner ads or
customized channel pages, but on targeted per-viewer ads, similar to Google's
AdSense. A traditional broadcast video has markers for broadcasters to insert
ads, mixing them together in real time. These two ideas are fundamentally
incompatible. You can't mix in ads server side and still hand out targeted ads
to millions of people. Instead you can use Flash to overlay interactive ads on
top of videos and do the processing on client side, where there's a lot of
unused CPU. Some places have already started this, and it's quite annoying.
This practice is why Flash can't use Xv -- Xv takes YUV, and the non-video
parts of Flash are in RGB. Hence the high loads when watching Flash video; it
decodes the video into YUV, converts to RGB, then draws crap on top of it,
and sends it to be drawn.</p>
<p>Video hosting sites are increasingly closing their services off behind walled
gardens in preparation for "monetization". There are well known scripts that
can retrieve the underlying video from a Youtube URL (and others), sans Flash
advertising. Nobody should be surprised to find out that it's against the TOS
to do so. When Google Tech talks migrated from Google Video to Youtube, they
lost their .mp4 video enclosures, never to return. Simple programmatic access
to hosted video is approaching impossible, restrained only by the need for
Flash player to access the same video somehow. This would normally be a step
backwards in <strong>making the world's information searchable and universally
accessible and usable.</strong></p>
<h3>The good news</h3>
<p>For sites that do offer direct access to video files, within the &lt;video&gt; tag
lies some great functionality. You can use the attribute markup to set time
markers ("a cue"), leading surfers directly to an excerpt of a larger video. I
was going to share an example here, but I commented it out after seeing that
pretty much nothing supports time markers yet.</p>
<p>Anyways, a lot is to be gained by this. You can quote exactly what a person
said, or select a segment or scene from a larger video corpus to share with
your audience. And you don't have to recut a video to do it, just let the
browser handle that. Building these controls into the browser lets you
entirely skip the cut and re-encoding and just grab what you need.</p>
<p>Some sites offer this function independently via URL parameters, but making it
standard brings a benefit beyond your direct audience. You can apply a lot of
the same techniques Google uses for regular links. If everyone uses the same
published syntax to cite clips within a larger video, engines can index sites
as they're discovered and instantly apply knowledge of the citation to the
region of time quoted, rather than waiting for a programmer to translate
another video hosting site's parameters. Search engines can apply annotations
to a larger video and perhaps even generate a table of contents automatically.
Youtube is experimenting with annotations in Flash, and it's apparent they're
in dire need of something more pageranky to curb the abuse.</p>
<p>This all relies on a fast seek to make cues feasible. As far as I can tell,
none of the major vendors have even addressed this, even though the spec calls
for it. For example, can anyone explain what unit the start and end attributes
are measured in? Seconds would make sense, but it could be bytes if you want
to seek quickly.</p>
<p>So I applaud Mozilla's efforts and wish them success, but we're kinda ignoring
the elephant in the room here.</p>Ubuntu for Network Engineers2009-01-26T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2009-01-26:ubuntu-for-network-engineers.html<p>Keith Tokash posted a review of Ubuntu 8.04 <a href="http://www.ccieflyer.com/2009-Feb-KTokash-Ubuntu.php">from the perspective of a network
engineer</a> who switched from Windows XP to Ubuntu. It's been picked up <a href="http://linux.slashdot.org/article.pl?sid=09/01/25/037223">at
Slashdot</a> as well. Rather than post a comment there, I thought I'd share a
more in depth response.</p>
<p>A quick synopsis before hitting individual points: much of the review centers
around Linux as embodied within Ubuntu, rather than the things Ubuntu brings
over the other Linux distributions. Balancing the praise are a few thorns,
places where Ubuntu isn't sufficient for his needs. On to a few specific
points, then.</p>
<p>He mentions that Evolution and Exchange don't get along well. This is hardly
news, and a significant barrier where I work as well. And frankly, it gets
even worse than he reports. Evolution uses Outlook Web Access (OWA) to
connect to Exchange. This works okay in Exchange 2003, but not Exchange 2007.
For people working in a large organization, you're unlikely to see a move to a
mail server with wider compatibility, but there's hope: <a href="http://www.openchange.org/">OpenChange</a> is
working on a library to implement MAPI, so clients can access Exchange on the
same terms as Outlook. I can't find it now, but there was some discussion
about getting Debian packaging going and getting this done in the Jaunty
timeline. It's by no means final, and won't help people preferring LTS, but by
the next LTS, I would think Evolution will connect to Exchange 2007. Hopefully
Exchange doesn't change much in the meantime!</p>
<p>He also mentions Visio, a diagram program. Visio is certainly a well polished
tool, and I've yet to find anything comparable. I hate dia, OpenOffice renders
like crap, and Inkscape's diagram tool isn't usable. GraphViz and dotty are on
my list to try out, but the lack of a GUI is going to put off a lot of people,
and frankly the screenshots give me no faith in their power to make things
pleasing. Most diagramming tools in Ubuntu revolve around UML in one way or
another.</p>
<p>He also cites the lack of a password manager compatible with PasswordSafe.
I've been looking into this myself, and as it happens, he's wrong. <a href="http://fpx.de/fp/Software/Gorilla/">Password
Gorilla</a> is a compatible program, in package password-gorilla
(<a href="http://packages.ubuntu.com/search?keywords=password-gorilla">available</a> since hardy). Unfortunately, if you search for "passwordsafe"
in the repos, Gorilla doesn't come up, which explains why he might not have
found it. Personally, I like KeePass 2.0 over PasswordSafe. Keepass works
great in Jaunty, not so great in Hardy or Intrepid. No package yet, but I may
publish one in a repo.</p>
<p>Finally, there's also a few complaints that are far more mundane. For example,
turning off 2-page view in Evince (GNOME's PDF reader). Instead of looking in
the View menu, he <em>installed Adobe's Linux PDF client</em>! Worth a chuckle, I
guess. (If you're reading this, Keith: it's a checkbox under View->Dual.)</p>
<p>All in all it seems Ubuntu is doing a good job of putting together a coherent
desktop. Many of the rough edges are being worked on, but Visio remains a sore
spot for the foreseeable future.</p>A cheap media remote2008-12-19T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-12-19:a-cheap-media-remote.html<p>It's been a while since I blogged about something I pay attention to in
Ubuntu, get ready! Today I talk about using a Wiimote to watch videos.</p>
<p>If you already own a Wii and a <a href="http://www.amazon.com/gp/product/B0019SI266?ie=UTF8&tag=jlduggesblog-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B0019SI266">cheap Bluetooth adapter</a>, there's a
lot of fun stuff you can do in Linux with the Wiimote. The simplest and most
immediately useful is to use it as an <a href="https://help.ubuntu.com/community/CWiiD">alternative input device</a>.</p>
<p>A quick rundown: install the wminput package, modprobe uinput, and run sudo
wminput -d. wminput will listen on Bluetooth for the Wiimote, and translate
into mouse cursor movement and keystrokes according to its default config
file. Press 1+2 on the Wiimote to put it into discovery mode, and it should
connect in a few seconds.</p>
<p>I use the following as my wminput config file:</p>
<div class="highlight"><pre># remote - settings appropriate for totem/mplayer/mythTV
Wiimote.A = KEY_SPACE
Wiimote.B = KEY_ENTER
Wiimote.Up = KEY_VOLUMEUP
Wiimote.Down = KEY_VOLUMEDOWN
Wiimote.Left = KEY_LEFT
Wiimote.Right = KEY_RIGHT
Wiimote.Minus = KEY_BACK
Wiimote.Plus = KEY_FORWARD
Wiimote.Home = KEY_ESC
Wiimote.1 = KEY_F
Wiimote.2 = KEY_PROG2
</pre></div>
<p>Put it in ~/.cwiid/wminput/default and you won't have to specify it on the
command line! The idea is that D-pad left and right scan through the video, up
and down adjust the volume, A triggers pause/play, and 1 toggles
fullscreen. I don't know what KEY_PROG2 is good for, but it felt wrong to
leave a button unmapped. No tilt sensing or mousing here, since I want to be
able to set the wiimote down and enjoy the show.</p>
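One way to put the config in place (the path follows cwiid's convention mentioned above; only the first two mappings are repeated here for brevity, so paste the full list from the post):

```shell
# Create the default wminput config location and install the mappings;
# wminput reads ~/.cwiid/wminput/default when no file is specified.
mkdir -p "$HOME/.cwiid/wminput"
cat > "$HOME/.cwiid/wminput/default" <<'EOF'
Wiimote.A = KEY_SPACE
Wiimote.B = KEY_ENTER
EOF
```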
<p>Some might think that KEY_PLAYPAUSE would be more function-specific than
KEY_SPACE, but they are wrong; none of the players appear to support that. My
hope is that in the future I can find settings more universal; my USB keyboard
has buttons that trigger rhythmbox, so maybe that mechanism can be exposed.</p>
<p>There hasn't been a lot of activity upstream, aside from applying a patch
from Mario Limonciello (superm1). I'm not sure if it's a sign of maturity or
of project failure.</p>In the name of science and engineering2008-12-17T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-12-17:in-the-name-of-science-and-engineering.html<p>In the <a href="http://pthree.org/2008/12/14/lzma/">inaugural post</a> in Aaron Toponce's series on compression, two
critical errors are made and highlighted by a <a href="http://pthree.org/2008/12/14/lzma/#comment-109012">comment to his post</a>:</p>
<blockquote>
<p>"I wanted to see speed, and I figured what could be faster than a bunch of
identical data? I was way wrong."</p>
</blockquote>
<p>Aaron, you say you wanted to see speed, but didn't tell the programs that.
Instead you asked for maximal compression. It should come as no surprise that
-1 through -9 are not calibrated among compression tools. -9 means "I don't
care how much the CPU costs over -8, find me those extra bytes." You really
shouldn't be surprised when an algorithm can take gobs of time <strong>when asked to
do so</strong>.</p>
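<p>You can see the tradeoff on any machine with gzip installed: on compressible
input, -9 reliably produces smaller output than -1, and it pays for those bytes
with extra CPU time:</p>

```shell
# Same input, two effort levels: -9 spends extra CPU hunting for extra bytes.
seq 1 200000 > sample.txt
gzip -1c sample.txt | wc -c   # larger output, less CPU
gzip -9c sample.txt | wc -c   # smaller output, more CPU
```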
<p>Your other mistake is about datasets. Lossless compression on completely
random data is held to be impossible; certainly the developers of bzip2, gzip
and lzma aren't after that goal. Each of these algorithms is designed to work
on some datasets better than others. We shouldn't expect the <a href="http://en.wikipedia.org/wiki/Burrows-Wheeler_transform">Burrows-Wheeler
transform</a> to be a worthwhile investment on megabytes of zeros, for example
(though I still have a hard time believing it works <em>at all, on anything</em>).</p>
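<p>The dataset effect is easy to demonstrate with gzip alone: degenerate input
collapses to almost nothing, while truly random bytes don't shrink at all:</p>

```shell
# A megabyte of zeros compresses down to roughly a kilobyte;
# a megabyte of random bytes comes out slightly larger than it went in.
head -c 1000000 /dev/zero    | gzip -9 | wc -c
head -c 1000000 /dev/urandom | gzip -9 | wc -c
```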
<p>In fact, the bzip2 implementation may be saved by what the <a href="http://www.bzip.org/1.0.3/html/misc.html">bzip2 author calls
a mistake</a>:</p>
<blockquote>
<p>The run-length encoder, which is the first of the compression
transformations, is entirely irrelevant. The original purpose was to protect
the sorting algorithm from the very worst case input: <em>a string of repeated
symbols</em>. But algorithm steps Q6a and Q6b in the original <a href="http://www.cl.cam.ac.uk/teaching/2001/DSAlgs/SRC-124.pdf">Burrows-Wheeler
technical report (SRC-124)</a> show how repeats can be handled without
difficulty in block sorting.</p>
</blockquote>
<p>(emphasis and link mine)</p>
<p>Still, I applaud your efforts to investigate and document, even if I think
conclusions should be withheld pending further investigation. It's likely my
own investigation was flawed in some way, but I don't think it's worth peer
review at this point: <a href="http://goodmerge.sourceforge.net/About.php">GoodMerge</a> publishes its <a href="http://goodmerge.sourceforge.net/Statistics.php">own findings</a>.</p>
<p>It might be informative to:</p>
<ol>
<li>
<p>Publish your datasets. This may involve changing them slightly for
privacy and copyright ;)</p>
</li>
<li>
<p>Graph the relationship between CPU time, compression ratio and -# option,
to illustrate the size of tradeoffs available.</p>
</li>
</ol>New Ubuntu Store2008-11-20T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-11-20:new-ubuntu-store.html<p>Seemingly in response to an <a href="//www.pwnguin.net/ubuntu-gear.html">earlier post</a> of mine, Canonical has recently
<a href="http://www.ubuntu.com/news/us-based-shop">launched</a> a US based <a href="http://usshop.ubuntu.com/">store</a>. Since they've gone to the trouble to risk
taking my advice, I suppose it's only fair that I shill a bit for them.</p>
<p>Prices before were close to 40 dollars for a shirt after taxes and shipping.
They're now closer to 25, and I think I'll put in an order at that price.</p>
<p>The store's new, so it's not yet perfected -- they ship a <a href="https://usshop.ubuntu.com/product.php?catid=2&code=09%2095101">fun set of
stickers</a> via UPS Ground, a 7 dollar charge for something that could
probably be sent for 42 cents. That's okay, <a href="http://www.system76.com">system76</a> still offers
<a href="http://system76.com/article_info.php?articles_id=9">stickers</a> for the price of a SASE (<a href="http://en.wikipedia.org/wiki/Self-addressed_stamped_envelope">?</a>) as a draw (and sales lead) to
their store.</p>
<p>One product suggestion: <a href="https://wiki.ubuntu.com/MassachusettsTeam/Projects/AluminiumCaseBadges">Aluminum case badges.</a></p>New job2008-11-13T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-11-13:new-job.html<p>Today I started a new job as a "Systems Analyst" for a local community
college. There are a lot of silly practices going on there, but I think I can
fix at least a few of them before my time is up. Personal target number one is
probably going to be the stupid email confidentiality clause plastered to
every staff email. Target number two is to fix the character stripping done on
input to the issue tracker. A third might be to do something about Firefox 3
and SSL certs on campus. And I'm sure the people who hired me have some things
for me to do as well ^_^</p>
<p>The systems my group seems to be tasked with are educational and IT support.
Computer labs, Blackboard/WebCT, Renderman, and servers for student web
classes. There's a healthy mix of Linux, Windows and OSX on desktop, and Linux
and Windows on servers. And a brickton of VMware. There are a number of
transitions happening soon, so I expect a trial by fire during the learning
phase (apparently one happened today before I arrived and another a few hours
later). They're moving student email from locally hosted to GMail for
Institutions, rolling out IE7 (I think they're hoping Vista dies), and
switching to ANGEL LMS.</p>
<p>The pay is twice what I was making before (but maybe half of what I'm worth),
with lots of gadgets and no overtime or on-call duties. Oh, and a pager (only
for when I'm on the job I'm told). I mostly spent today touring the facilities
and setting up the vast array of equipment bequeathed to me: a dual monitor
PC, an iMac, an iPaq, wireless Plantronics headset, and a lot of random
surplus equipment. The college seems to be great at buying equipment at least.
And the whole building is impressive, if a bit impractical in my experience
(huge windows and glass everywhere seems good on paper but leads to heating,
cooling and projector problems).</p>
<p>I'm also allowed a free class every semester but my level of education
basically exceeds 95 percent of what a community college offers students.
Still, it might be nice to take a hands-on technology course in networking to
complement my studies on networking and proofs of protocol correctness.</p>Ubuntu on ARM2008-11-13T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-11-13:ubuntu-on-arm.html<p>Canonical has <a href="http://www.ubuntu.com/news/arm-linux">just announced</a> that they'll be bringing the ARM arch to
Ubuntu. This is quite exciting -- ARM devices have been getting more and more
features, and as LinuxDevices <a href="http://linuxdevices.com/news/NS9527593286.html">reports</a>, there's been unofficial builds by
Nokia for a while. Centralizing the builds should help reduce the effort
needed to set up Ubuntu on such platforms. For example, currently builds
happen after release and are announced when they're complete; we can now
expect regular builds throughout development and a release date people can
plan around.</p>
<p>There's some interesting hardware coming my way that I'd like to try this on,
but I won't share just what it is until I actually have one in my hands, since
they've sold out their first small batch and are having manufacturing problems.</p>
<p>In the meantime, I'll be hanging out in #ubuntu-arm observing.</p>Brainstorm reconsidered?2008-11-10T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-11-10:brainstorm-reconsidered.html<p>Jef Spaleta, Fedora contributor and member of the Fedora Board, offers a
<a href="http://jspaleta.livejournal.com/28663.html">dissenting view</a> on the value of websites like Dell Ideastorm for
community based projects:</p>
<blockquote>
<p>It makes some sense when you have dedicated engineering resources to spend
and are looking for ideas to spend it on. But if you want to grow new
volunteer involvement, I don't think the Idea Storm implementations we are
seeing make sense for that. The popularity of an idea simply is not enough. We
have to have a mechanism which helps individuals turn personal interest in to
personal action...instead of encouraging them to wait for someone else to take
their idea and run with it.</p>
</blockquote>
<p>Ouch. He's not the first person to recognize there's something wrong with
Brainstorm. My personal favorite is <a href="http://brainstorm.ubuntu.com/idea/11730/">#11730</a>: "Make developers pay
attention to ideas on this site". The rhetoric of Brainstorm, inherited from
Ideastorm, is that users submit ideas and developers implement them. The
disconnect between this and reality is causing frustration. The Dell Ideastorm
revolves around identifying items for Dell to spend money on to make their
product more valuable to customers; Dell managers have a profit motive to pay
close attention and allocate resources to make things happen.</p>
<p>In contrast, Canonical has a commitment to making Ubuntu available free of
charge, which affects how the profit motive is expressed. In Canonical's focus
to make development sustainable, their paying customers are OEMs like Dell,
not end users. Under those constraints, they should reasonably focus on the
set of items that both users and OEMs want (and might possibly pay for).
Automated hardware change recognition and Deviant Art contests don't seem to
quite fit that mold.</p>
<p>The rest of Ubuntu contribution is basically the fantastic result of an
appropriate social contract. Volunteers seek to improve Ubuntu for their needs
and those like them, and contribute that back to everyone. It's a classic
stone soup story; the soup tastes great and everyone gets more than they gave.
Such people don't need a website to figure out what they want to fix, they
need guidance on how to fix it.</p>
<p>Jef's alternative suggestion is to use the votes to find clusters of people
interested in a common set of goals, and give them a communication medium to
make their goals reality. That sounds nice but I think it's a lot harder than
it sounds to find a group of people who can get things done. MOTU has a series
of failed or failing focused teams: BitTorrent, Java, Games, Audio. Even the
ubuntu-laptop team seems to have fallen apart, with no apparent leadership or
goals. I don't think it's a matter of subscribing people who vote for a set of
features to a mailing list.</p>
<p>From the outside looking in, I'd say that Debian has had more success with
their focused team based efforts. Probably because in many cases, it was quite
simple to organize a cluster of people interested and already taking personal
action. There's probably a critical mass of core competence needed before team
approaches can be successful; recruiting is important, but you can't do it
without a group who really knows their stuff and can write down the important
parts for those that follow.</p>
<p>Within Ubuntu, the <a href="https://wiki.ubuntu.com/UbuntuOpenWeek">OpenWeek</a> is supposed to function like a cross between
a conference and a Freshman Activities Fair at the beginning of the semester,
and helps address the need for recruiting informed volunteers. However, I
think the IRC format, as is, is not appropriate. The first half of most
sessions runs like a file dump from a text file, and the latter half runs like
a Q&A. The main purpose is to engage the wider community in the development
projects within Ubuntu. The Q&A is really useful in getting people to ask
questions and maybe even participate in development, but the IRC format hurts
the presentation of information in the first half. If we want Ubuntu to
further evolve beyond the command line, I think it might be more appropriate
to record a 20 minute video presentation or tutorial as a prerequisite to an
IRC Q&A session. Perhaps this is something LoCos can get involved in
producing with developers?</p>
<p>Another approach may be to make sure high rated items on Brainstorm see
treatment at OpenWeek. I imagine fixing suspend and resume would be a
massively popular Q&A. Or a session on the <a href="http://brainstorm.ubuntu.com/idea/42/">highly popular</a> idea, suggested
for Jaunty: faster boot times.</p>OpenMoko or Android G1?2008-11-08T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-11-08:openmoko-or-android-g1.html<p>Having killed my cellphone with a good cycle in the washer, I'm in the new
phone market I guess. Besides simply replacing the model I damaged ("the cheap
option"), I'm also looking at OpenMoko and Android. Trouble is, they're both
the same price, and nobody really has published a direct comparison.</p>
<p>I like the idea of running Linux on a phone, and having a wide degree of
freedom to customize, but with other fun hardware on the way, I'm not sure how
much time I'd really devote to tinkering with a phone I'd rather have working
100 percent of the time. So I'm looking for three kinds of advice:</p>
<ul>
<li>
<p><strong>why I should get an OpenMoko</strong> from people who own one</p>
</li>
<li>
<p><strong>why I should get an HTC G1</strong> from people who own one</p>
</li>
<li>
<p><strong>a comparison</strong> from people who own either but have used both</p>
</li>
</ul>
<p>The last category strikes me as rare, but it can't hurt to ask.</p>Voting Is Hard2008-11-05T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-11-05:voting-is-hard.html<p>Brandon Holtsclaw, an acquaintance of mine, writes that <a href="https://www.imbrandon.com/2008.11.03/ive-never-been-shorted.html">ATMs are error free,
so why not voting machines</a>? He's not the only one I've seen making this
comparison, and it's a fundamentally misguided analogy.</p>
<h3>Sins of the past</h3>
<p>To describe why voting is hard, it's necessary to discuss the capacity for evil
in the system. For much of America's history, voting was done by voice (<em>viva
voce</em>), in public. You might be hassled by party members on your way to
vote, who would attempt to intimidate or bribe you. You then stood before a
judge and declared your vote, or later, turned in a paper ballot. You can
imagine the kind of pettiness that might ensue after you vote against the
interests of your employer or landlord. Eventually this behavior became
commonplace enough that a voting system from Australia was imported: secret
ballot. You voted in privacy, expecting to be shielded from the vice grips of
political machines.</p>
<p>A few problems arose. Ballot stuffing and repeat voting were easy to stop by
handing out a ballot only once to registered voters. <strong>Chain voting</strong>
represented a much harder threat to solve. In chain voting, one person needs
to smuggle out a single official ballot. Our villain then marks the ballot,
and offers it to people along with a promise to buy an unmarked ballot from
them. In this way you can buy up a series of votes, based on that one smuggled
ballot.</p>
<p>The solution here gets very tricky. You need to make sure that the ballot
turned in matches the ballot given to the voter, which requires inspection of
the ballot at some level. Paper ballots will have a tear away tab with a
ballot ID on it, and fold over, so that poll workers who verify ballots cannot
see the vote.</p>
<p>From the study of the history of fraud we find the need for three fundamental
properties of a voting system:</p>
<ul>
<li>
<p><strong>countability</strong>: the vote has to be countable in a transparent fashion.
This seems obvious, but is a central problem with voting machines</p>
</li>
<li>
<p><strong>integrity</strong>: the vote must be resistant to fraud, from voters,
candidates or election officials</p>
</li>
<li>
<p><strong>anonymity</strong>: no individual voter should be able to prove they voted a
certain way</p>
</li>
</ul>
<h3>An Example</h3>
<p>Debian elections publish all votes, using an MD5 hash of the ID and secret key
to protect a voter's identity. It seems like overkill, but for DDs it means
that their employer can't find out if they vote against the current interests
of the company. I haven't thought too much about it, but the Debian vote can't
scale, for the simple matter of md5sum collisions. The birthday paradox
suggests that as the voting base grows, the likelihood of collisions rises.
Anywhere two colliding hashes carry the same vote, there's an opportunity to
change one of the votes.</p>
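<p>The flavor of the scheme is easy to sketch; note that the token format below
is my own guess for illustration, not Debian's actual ballot format:</p>

```shell
# Publish md5(voter-id:secret) next to each vote: the voter can verify their
# own line, but outsiders can't link the hash back to a name.
# The "voter:secret" token format is an assumption for illustration only.
voter="jdoe"; secret="s3kr1t"
printf '%s:%s' "$voter" "$secret" | md5sum | awk '{print $1}'
```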
<p>There are more problems, but that isn't the point. The point of noting that
Debian elections may lack complete integrity is to illustrate how difficult it
is to engineer something secure, even ignoring implementation flaws and
considering theoretical systems alone. Intelligent critics of the American
election status quo have proposed systems later discovered to fall short of
those three basic demands, and not for lack of understanding or trying. It just
turns out that <strong>voting is hard</strong>.</p>
<h3>The better analogy</h3>
<p>Voting machines are like an ATM that has to keep accurate account balances
without verifying who deposited how much or giving the customer a receipt
proving the deposit ever happened.</p>Bootchart refresher course2008-10-30T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-10-30:bootchart-refresher-course.html<p>Boot times are becoming a major hassle again to everyday users. <a href="http://www.nytimes.com/2008/10/26/technology/26boot.html?_r=1&partner=rssnyt&emc=rss&oref=slogin">NYTimes is
reporting</a> that many laptop vendors are resorting to a technology called
<a href="http://www.splashtop.com/">SplashTop</a> to boot a small Linux environment from BIOS, rather than incur
a lengthy Vista boot penalty, documented in this <a href="http://video.nytimes.com/video/playlist/embedded/1194812888716/index.html#">New York Times video.</a>
This can save you a good deal of time if you need to write a quick email and
turn the computer back off.</p>
<p>Of course, if you want anything outside the provisions of SplashTop, you'll
have to boot into a different OS. SplashTop seems like the wrong solution to
the right problem; if the full OS boots too slow, why boot a crippled OS
rather than improve the full OS boot time? Thankfully, one of <a href="https://lists.ubuntu.com/archives/ubuntu-devel-announce/2008-September/000481.html">the release
goals</a> for Ubuntu 9.04 is faster boot time.</p>
<p><a href="http://www.bootchart.org/">Bootchart</a> is a great tool to record and chart relevant information about
what resources are in demand during boot. Originally <a href="http://www.redhat.com/archives/fedora-devel-list/2004-November/msg00447.html">dreamed up</a> and
implemented by Fedora, it's quickly become a tool used <a href="http://www.bootchart.org/samples.html">by many</a>.</p>
<h3>Installation And Use On Ubuntu</h3>
<p>The package name is bootchart; you can use your favorite command line or GUI
package manager to install the package from main. By default it saves
generated charts to /var/log/bootchart/, and will record one for every boot
(except when booting in "profile" mode, which monitors disk access during boot
to determine which files to read ahead).</p>
<p>The chart itself contains 3 sections:</p>
<ul>
<li>
<p>A description of the system</p>
</li>
<li>
<p>A pair of graphs describing CPU & <a href="http://utcc.utoronto.ca/~cks/space/blog/linux/LinuxIowait">iowait</a> and disk activity/throughput
during boot</p>
</li>
<li>
<p>A <a href="http://en.wikipedia.org/wiki/Gantt_chart">Gantt Chart</a> showing long lived processes and their status: waiting
for disk, running, sleeping or <a href="http://en.wikipedia.org/wiki/Zombie_process">zombie</a></p>
</li>
</ul>
<p>The most immediately useful purpose of bootchart is to objectively discover
the boot time, and where the bottlenecks are. For example, this
<a href="http://farm3.static.flickr.com/2402/2356106039_292a2723d4_b.jpg">bootchart</a> shows a length of 33 seconds, which isn't bad, but we can
quickly discover a 5 second dead zone, with a <a href="https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/203429">suspicious process called
"resume"</a>. We can also see that disk throughput peaks at 23MB/sec during
readahead, but isn't sustained. Daniel Stone documents a similar series of
diagnoses <a href="http://www.fooishbar.org/blog//tech/ubuntu/fastBootMiniBoF-2004-12-09-13-45.html">from 2004</a>.</p>
<p>I happen to have a very large archive of bootcharts starting from Dapper
forward on one machine, and from Feisty forward on another. And if you've
installed bootchart in the past, you might too. This can be useful to figure
out when a drastic jump occurred, and lead you to which changes to Ubuntu
might have triggered it. You do risk wasting a lot of disk space, of course.</p>
<h3>Secret features</h3>
<p>Ubuntu comes with scripts to upload charts to a website (disabled by default).
From /etc/init.d/stop-bootchart.d:</p>
<blockquote>
<p>wget -O /dev/null -q --post-file $CHARTS/$base-$count.png
<a href="http://bootchart.err.no/">http://bootchart.err.no/</a>upload/$(lsb_release -si)/$(lsb_release
-sc)/$(uname -r)/$(uname -m)/$boottype</p>
</blockquote>
<p>Make sure you get permission from the owner or change that URL to a server you
control before running it in this manner.</p>
<p>Bootchart can also record to SVGZ, a compressed vector image format. Bug
reporters feel SVGZ is <a href="https://bugs.launchpad.net/ubuntu/+source/bootchart/+bug/204186">a lot smaller in practice</a>, and reduces <a href="https://bugs.launchpad.net/ubuntu/+source/bootchart/+bug/218499">boot time
on already long boots</a> (Ironic!). SVGZ also has the advantage that with a
bit of thought (ditch the XML comments and use sensible ID attributes instead)
you could data mine a collection of charts. For example, you could graph the
boot time over the course of six months, or calculate the ratio of time spent
running readahead versus total boot time. A lazy survey of my October charts
for Intrepid shows boot time has declined personally from 47 seconds to 34.</p>
<p>One slightly dangerous feature is that bootchart <a href="http://www.redhat.com/archives/fedora-devel-list/2004-November/msg00561.html">doesn't show all running
processes</a>:</p>
<blockquote>
<p>Some processes were filtered out for clarity -- mostly sleepy kernel
processes and the ones that only live for the duration of a single top sample.
This skews the chart a bit but is definitely more comprehensible.</p>
</blockquote>
<h3>The dire future</h3>
<p>Bootchart hasn't seen a release since late 2005. A new developer unleashed a
fury of 26 commits one day in August, but hasn't touched it since and hasn't
announced any release. Some of the patches appear to fix bugs reported on
Sourceforge, but nobody has touched those reports since they were filed, even
to close them. There remain a number of open bug reports, but nobody seems to
be reviewing reports or patches at this time. This is worrisome because a
developer's ability to fix flaws is only as good as the tools available to
objectively see them.</p>
<p>Given the goal of faster boot time, it seems unfortunate that a fundamental
tool is currently without a functioning exchange for ideas, patches and bug
reports. I hope one of the first actions taken on this track is to reform the
bootchart project into a vibrant community of participants and observers from
a plethora of projects. I'm finding discussions going on about bootchart on
the <a href="http://jolexa.wordpress.com/2008/10/14/linux-fastboot-my-bootchart/">edges of the open source community</a>, but none at the theoretical
center of activity. <strong>Collaboration is part of the Code of Conduct</strong>, and I
hope our bright minds gathering at Mountain View in December take it to heart.</p>I owe someone a thank you2008-10-17T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-10-17:i-owe-someone-a-thank-you.html<p>Sometime during the last few weeks my Intel HDA based <a href="https://wiki.ubuntu.com/SergioZanchetta/Old/ToshibaTecraM7">Toshiba Tecra M7</a>
tablet finally supported headphone jack-sense. No longer will I feel like an
amateur plugging in headphones and fumbling about figuring out how to silence
the speakers but not the headphones. Yay!</p>Training materials2008-10-12T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-10-12:training-materials.html<p>I've had the opportunity recently to review some training materials as part of
my job. <a href="http://vtc.com">VTC</a> offers a lot of streaming video tutorials on various
software. I decided to search for Ubuntu and one series showed up. Charles
Griffin authored and published these video tutorials, and they're available
from a wide array of sources. Amazon retails a DVD version for
<a href="http://amzn.to/2dFHhW3">$99.95</a>.</p>
<p>To inform the broader Ubuntu community and to learn from the mistakes of
others, I've decided to publish a critique of <a href="http://www.vtc.com/products/Ubuntu-Linux-tutorials.htm">Charles Griffin's Ubuntu
Tutorials</a>:</p>
<h3>Information</h3>
<p>The content itself is basically a brief overview of the default Ubuntu Desktop
components, spread out over five and a half hours. The audience is expected
to be comfortable with Windows and computers in general, as he walks them
through burning and verifying a LiveCD. In addition to the basic overview
of the GNOME desktop software, he covers briefly how to use the command line,
WINE/Crossover/Cedega, and <strong>Automatix</strong>.</p>
<p>The tutorials are based on Ubuntu 6.10. This is very old and <a href="http://www.ubuntu.com/news/ubuntu610end-of-life">unsupported
software</a>, and the videos haven't been updated to reflect any changes in
behavior or software. If revisiting the video every six months takes too much
time, 6.06 LTS might have been a better choice for the longevity of material.</p>
<p>He spends five minutes on legal <a href="https://wiki.ubuntu.com/RestrictedFormats">restrictions on media formats</a>, but
doesn't communicate solutions or even the fundamentals of the problem of
patents. For example, DVD playback isn't illegal, if you've negotiated a
patent license, or if someone has negotiated one on your behalf, like Dell
does for their customers. Canonical even offers such things <a href="http://shop.canonical.com/product_info.php?products_id=244">for sale</a>, if
you wish to protect yourself from liability.</p>
<p>The videos completely neglect the <a href="http://wiki.ubuntu.com">Wiki</a> and <a href="http://launchpad.net/ubuntu">Launchpad</a> as an avenue
for support, instead suggesting the purchase of <a href="http://www.canonical.com/services/support">official support</a> from
Canonical or community support from forums. While speaking about the
community, he neglects the foundational Code of Conduct that lay out the
etiquette expected among developers and the community.</p>
<p>He advocates the use of EasyUbuntu and Automatix in his video. By the time
these videos were published, Automatix was known by developers, including
Technical Board member Matt Zimmerman, to be <a href="https://lists.ubuntu.com/archives/ubuntu-devel/2006-November/022185.html">fatally flawed</a>. After
Matthew Garrett published <a href="http://mjg59.livejournal.com/77440.html?thread=511616">a document reviewing the flaws,</a> I feel it's
irresponsible to continue publishing a recommendation of Automatix.</p>
<h3>Narration</h3>
<p>Charles's voice is smooth and comes across well over the recording. The pacing
is quick enough that the potential boredom of the subject isn't multiplied by
numbing slowness. He is well rehearsed and professional. Much of this seems to
come from his work on <a href="http://www.linuxreality.com/">LinuxReality</a>, a podcast oriented towards new Linux
users (now defunct).</p>
<p>Most of these tutorials come with subtitles. This is great for accessibility,
but there are several errors, some of which are substantial. The file browser
is universally misspelled "Nodilus" (it is "Nautilus"). More importantly, the
command line tutorial makes a specific point about forward slash versus
backslash, while the subtitles get it precisely backwards!</p>
<p>The diction is annoying. He uses the word "one" to refer to you, the viewer,
<strong>a lot</strong>. Your English teacher in high school might have marked you down for
informality, but nobody takes their advice seriously, and neither should you.
Otherwise, the tutorials remain relatively free of jargon and accessible to
the audience. Where important jargon is used, it is explained adequately.</p>
<h3>Presentation</h3>
<p>The video itself is based entirely off of screen capture streams, even when a
diagram or two would be far more illuminating. The VTC website offers
Quicktime or Flash playback, which may annoy advocates of open codecs.
Admittedly, it is a bit of a challenge today to host video that is universally
playable. (HTML5 offers hope, but that's a subject for another time.)</p>
<p>The VM image used for recording is in need of upkeep -- update-manager is
prompting for updates in the notification area, and the volume is muted or
broken. Busted audio might not be so bad except the tutorial covers some audio
applications... without working audio.</p>
<p>The video isn't high quality. The recording was done at maybe 5 or 10 frames
per second, and the resolution is too small to adequately display some
applications, making the whole experience look cramped. Still, it's high
enough quality that most text is readable, a problem I've encountered
personally when running desktop OSes at TV-quality resolutions. The low
resolution does have the advantage that it can play from a DVD to a regular
television and still be readable.</p>
<h3>Conclusion</h3>
<p>These tutorials are outdated, but convey a wealth of information about what to
cover and how. Personally, I was a bit disappointed that the tutorials didn't
teach me anything new about the Ubuntu Desktop, even about programs I haven't
used much. (Does anyone use Evolution seriously?)</p>
<p>I get the impression that making such a set of tutorials takes more effort in
planning, recording, and editing than a single person can muster. I may seem
harsh in writing this critique, so let me be clear: Griffin makes a valiant
solo effort, but the rapid pace of Ubuntu and Linux in general is eroding
the quality of his instruction. The <a href="https://wiki.ubuntu.com/ScreencastTeam">Screencast Team</a> brings a lot of
expertise and knowledge to the table, and if they decide to do an "Ubuntu
Introduction" project in the future, I hope they'll consider how to improve
upon the works of others, and find ways to cope with the high rate of change!</p>Is this spam new or only new to me?2008-10-09T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-10-09:is-this-spam-new-or-only-new-to-me.html<p>Recently I've been hit by a series of spam comments that are unusually
relevant to the conversation. This is tricky, but given the following
patterns, I believe they are indeed spam.</p>
<ol>
<li>
<p>The posts are spread out over history, not focused on recent entries or
comments.</p>
</li>
<li>
<p>Generally the comments are a single sentence, and no more than two.</p>
</li>
<li>
<p>Googling the comment contents reveals them all to be excerpts from much
larger comments elsewhere.</p>
</li>
<li>
<p>Each spammer has a single post of their own, a long piece with a weak
command of the English language. Googling sentences or phrases does not work
on this text. If I had to guess, it was written by a Markov chain trained on
blog posts, perhaps LJ tags.</p>
</li>
<li>
<p>At the end of a fairly long and poorly written blog post is a text url.
Not a link, like one would expect.</p>
</li>
<li>
<p>None of the users do repeat business.</p>
</li>
<li>
<p>There have been about 15 of these in the span of two hours.</p>
</li>
</ol>
<p>Item 5 deserves special consideration, because, dear Watson, it doesn't fit.
Why would someone go to such lengths to write a spam engine to register, blog
and comment, train it on advanced computing principles, and screw up whatever
circumlocutious goal they had in mind?</p>
<p>Either the spammer made a rank amateur mistake, or was afraid Google or
LiveJournal would notice links. Both employ automated systems to prevent spam,
and might not notice a URL without the HTML linking. Perhaps they serve up
some form of malware that would quickly get flagged if directly linked to?</p>
<p>So either LJ caught and stopped this flood, or the spammer caught and stopped
the mistake and will be returning shortly. Has anyone else noticed this
pattern before?</p>Is Gentoo dying?2008-09-28T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-09-28:is-gentoo-dying.html<p>A recent CNET article suggests that <a href="http://news.cnet.com/8301-13505_3-10047439-16.html?tag=mncol;title">Ubuntu is eating other distributions'
lunch</a>. In particular, one distribution is reported to be falling apart:
<a href="http://www.gentoo.org">Gentoo</a>. Gentoo was very popular among my friends at the time I adopted
Linux, but from what I've seen, the project fell apart as developers were
unable to come to consensus or resolve conflicts.</p>
<p>From what little I know of Gentoo users and the project, it's closer to say
that Gentoo is becoming an unofficial set of distributed overlays than a
centralized approved project with trusted developers and so on. Gentoo's core
appeal isn't under attack by Ubuntu -- building from source and customization
for performance are central and remain relatively unique. If Ubuntu's focus on
desktop usability or six month release cycle are appealing enough to Gentoo
users that they leave the project, then build from source and customization
were simply means to an end and Ubuntu has improved the Linux landscape for
the better.</p>
<p>The article cites Google Trends as evidence of Ubuntu rocketing off to outer
space while Gentoo stagnates. I'm reminded of the current 5-a-day discussion;
there's a certain amount of danger to blindly trusting metrics. This Google
Trend shows <a href="http://www.google.com/trends?q=gentoo%2C+ubuntu%2C+dell&ctab=0&geo=all&date=all&sort=0">"Ubuntu" approaching "Dell":</a></p>
<p><img alt="comparison of gentoo, ubuntu and dell over time" src="http://www.google.com/trends/viz?q=gentoo,+ubuntu,+dell&date=all&geo=all&graph=weekly_img&sort=0" /></p>
<blockquote>
<p>(Orange = Dell; Red = Ubuntu; Blue = Gentoo)</p>
</blockquote>
<p>It's a large leap to say that Ubuntu is as popular as Dell; certainly Ubuntu
is a small fraction of Dell sales. One thing I do know is that Ubuntu is very
googleable. All the <a href="http://lists.ubuntu.com">mailing lists</a> are archived, <a href="http://irclogs.ubuntu.com">IRC</a> is publicly
logged, the forums even have a special search engine mode for faster indexing
and engine retrieval, <a href="http://bugs.debian.org/robots.txt">bug pages aren't blocked in robots.txt</a>, and the
wiki is used extensively. This wide array of information is something a tool
like Google Search can aid in, and is far simpler than say searching the wiki
individually, then Launchpad and so on. Perhaps Ubuntu users are heavier users
of Google Search, compensating for what could be a smaller user base. We can't
really infer user base size from Trends, just growth patterns.</p>
<p>What does seem clear from the trend is that interest in Ubuntu isn't growing
as quickly today. Every new release causes a bump in search volume but there
isn't as much sticking around after the release. Maybe Ubuntu works better
today, so people aren't Googling their problems as often? I am, shall we say,
open to alternative explanations.</p>"Be collaborative"2008-09-18T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-09-18:be-collaborative.html<p>Greg K-H gave a talk at the Linux Plumber's Conference ostensibly about the
<a href="http://www.kroah.com/log/linux/lpc_2008_keynote.html">Linux Ecosystem</a>, but appears to have been primarily about Ubuntu's
leading commercial sponsor, Canonical. I wasn't in attendance; as a mere
amateur, I'm satisfied to read reports and watch videos from conferences like
LPC. It's a lot cheaper on the wallet, certainly.</p>
<p>But I do wonder about his accounting. It's simple enough to measure the rate
of change of the kernel, the speed of releases and so on, but it's much harder
to referee the score. I haven't seen any published scripts to automatically
attribute his tally, and I imagine publishing such a thing might be an
invasion of privacy. Still there are questions in my mind. If someone chooses
to contract a kernel hacker to write a network driver, who does Greg
attribute? Does his account of X.org include any work done by Daniel Stone?</p>
<p>Greg is right though, that it takes a lot of work just to tread water with the
kernel. The rate of change will bowl you over, which I'm told is one reason
Red Hat hires a small army of kernel developers to backport patches to RHEL,
and I'm sure their customers love them for it.</p>
<h3>The Good News</h3>
<p>There is, I think, an unprecedented opportunity coming up to collaborate.
<a href="http://fedoraproject.org/wiki/Releases/10/Alpha/ReleaseNotes#Kernel_2.6.27_development_version">Fedora</a>, <a href="http://en.opensuse.org/Roadmap/11.1">openSUSE</a> and <a href="http://packages.ubuntu.com/intrepid/linux-image-generic">Ubuntu</a> will all be shipping a .27 kernel
in a stable release. It occurs to me that this would be a perfect time for one
of those extended stable kernel cycles that Greg mentioned in that Google Tech
Talk some time ago. I hope this idea is brought up during a <a href="http://linuxplumbersconf.org/">conference about
solving problems and the "kernel ecosystem".</a></p>Beautiful2008-09-17T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-09-17:beautiful.html<p>I won't ruin the RSS with a huge image, but here's a <a href="http://jengelh.medozas.de/images/nf-packet-flow.svg">graphical flow</a> of
iptables/netfilter. Netfilter is an awesomely complex beast, as the image
shows.</p>
<p>Unfortunately, the author has some <a href="http://jengelh.medozas.de/2008/0609-ubuntu.php">scathing</a> <a href="http://jengelh.medozas.de/2008/0819-ubuntu.php">reviews</a> of Ubuntu 8.04,
concluding:</p>
<blockquote>
<p>An absolute fail distro. Really, I recommend Windows XP anytime over Ubuntu.
(But of course, SUSE/Fedora in the pole position still.)</p>
</blockquote>
<p>I appreciate Jan's contribution and dedication to kernel.org and elsewhere; I
will leave it to others to debunk or affirm his individual complaints. But
framing the complaints of a kernel hacker as applicable to Windows users is
disingenuous.</p>Why single out Apple?2008-09-14T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-09-14:why-single-out-apple.html<p>Apple has recently gotten some heat for <a href="http://almerica.blogspot.com/2008/09/podcaster-rejeceted-because-it.html">refusing to approve a competing
iPhone app</a>. From a business perspective, the only thing wrong is Apple
admitting this instead of finding a scapegoat excuse. It's a bit sad when
gatekeepers using their status to bludgeon a select few competitors is a step
forward from the status quo.</p>
<p>We've known that the iPhone is a walled garden for some time; hell, when Apple
announced the App store it was hailed as revolutionary, no longer did users
have to jailbreak their phones to fight the cell phone provider cartel (just
sign up for a two year agreement of epic proportions). Jailbreaks are a great
way to <strong>earn notoriety and a feeling of superiority, having somehow cheated
the system</strong>. I used to succumb to the seduction of cheating the system via
technology; long before the advent of iPods and iPhones, a classmate of mine
owned a <a href="http://en.wikipedia.org/wiki/Rio_PMP300">Diamond Rio</a> and I was quite jealous. I certainly had a CD burner
back in the days when Word Autosave could turn a CD-R into a coaster through
dreaded buffer underflows.</p>
<p>But over time I've come to realize that having to jailbreak something in the
first place is the source of that envy, and it's only enviable because open
access is socially beneficial. Locked platforms increase the cost of entry,
excluding programmers because they don't have cash to spare or an address
outside of a dorm room. I wonder how the Steve Jobs that sold <a href="http://en.wikipedia.org/wiki/Blue_box_(phreaking)">blue boxes</a>
feels about the Steve Jobs that locks out competition. He'd probably ask the
Steve Jobs that wrote an <a href="http://www.apple.com/hotnews/thoughtsonmusic/">open letter</a> to explain to us why the <a href="http://www.youtube.com/watch?v=9tqIluIi3_U">big
cartels</a> won't let them play that way.</p>
<p>So what's the status quo on mobile phones? If you want to know what the PC
world might look like post Microsoft, mobile is the place to look. It's a
massively splintered market with trails of middlemen between application
developers and their customers. The big platforms are:</p>
<ul>
<li>
<p><strong>J2ME (Java)</strong>: The broadest implemented <a href="http://en.wikipedia.org/wiki/Mobile_Information_Device_Profile">platform</a>, and also the most
constrained. Network access is restricted to HTTP only and the "improved" 2.0
API provides only tone and wav audio. Phones in the US usually refuse to run
applets not signed by the carrier, but you can at least test on your PC's JVM.</p>
</li>
<li>
<p><strong>Symbian</strong>: a popular firmware OS among high end Nokia phones, but one
that is introducing capabilities enforced via digital signatures to clamp down
on Bluetooth virus outbreaks.</p>
</li>
<li>
<p><strong>BREW</strong>: popular on Qualcomm hardware, is reportedly worse off than
Symbian, where your ability to deliver software to end users at all must be
blessed by a "content provider," typically a mobile carrier, and you still
need a 400 dollar license to digitally sign personal builds for testing.</p>
</li>
</ul>
<p>So the only thing that Apple has done is wrest away the stranglehold carriers
have on application consumers. Traditionally the best way to reach your
customers is through the carriers, on their websites, mailings and app stores.
Apple has taken this away from AT&T, but certainly isn't about to surrender it
to 3rd party developers. They've taken high margins away from the carriers,
but left for themselves the lucrative role of gatekeeper and market maker,
with nearly 30 percent margins!</p>
<p><strong>The hacker spirit isn't just about the ability to use, modify, and share
software, it's about equality</strong>. The equality to run, modify, and share
software with the same rights as the <em>vendor reserves for themselves</em>. Locked
platforms and gatekeepers are an invasion of that hacker spirit. I don't think
it's appropriate to single out Apple's practices here or to lavish any praise
on them in light of their ability to interfere and demonstrated willingness to
use that ability.</p>Warrior Rabbit totem2008-09-08T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-09-08:warrior-rabbit-totem.html<p>As seen on <a href="http://irclogs.ubuntu.com/2008/09/05/%23ubuntu+1.html">#ubuntu+1:</a></p>
<p><code>#ubuntu+1.log-14:23 < pwnguin> hmm. apparently next week is ubuntu+2 naming time</code></p>
<p><code>#ubuntu+1.log:14:24 * pwnguin votes for jackalope</code></p>
<p>I'm so very <a href="https://lists.ubuntu.com/archives/ubuntu-devel-announce/2008-September/000481.html">sorry</a>.</p>Chromium Build Instructions?2008-09-02T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-09-02:chromium-build-instructions.html<p>I have located the <a href="http://dev.chromium.org/developers/how-tos/build-instructions-linux">Linux build instructions</a> for Chromium, what you might
know as Google Chrome.</p>
<p>Interesting notes thus far:</p>
<ul>
<li>
<p>Instructions include a reference to "Ubuntu 8" for retrieving build
dependencies ;)</p>
</li>
<li>
<p>an SVN checkout that retrieves ANOTHER set of source code management tools
not in wide use</p>
</li>
<li>
<p>uses bison for grammars; I've become a big fan of ANTLR for describing a
grammar</p>
</li>
<li>
<p>"BSD licensed," so it should be license compatible with Mozilla (and IE
and Opera)</p>
</li>
<li>
<p>checkout of source code takes 16 minutes to grab 666MB and crashes:</p>
</li>
</ul>
<blockquote>
<p><code>Traceback (most recent call last):</code></p>
<p><code>File "home/jldugger/Desktop/chromium/depot_tools/release/gclient.py", line 925, in result = Main(sys.argv)</code></p>
<p><code>File "home/jldugger/Desktop/chromium/depot_tools/release/gclient.py", line 921, in Main return DispatchCommand(command, options, args)</code></p>
<p><code>File "home/jldugger/Desktop/chromium/depot_tools/release/gclient.py", line 885, in DispatchCommand return command_map[command](options, args)</code></p>
<p><code>File "home/jldugger/Desktop/chromium/depot_tools/release/gclient.py", line 837, in DoUpdate return update_all(client, options, args)</code></p>
<p><code>File "home/jldugger/Desktop/chromium/depot_tools/release/gclient.py", line 713, in UpdateAll deps = get_all_deps(client, entries)</code></p>
<p><code>File "home/jldugger/Desktop/chromium/depot_tools/release/gclient.py", line 576, in GetAllDeps solution_deps = get_default_solution_deps(client, solution["name"])</code></p>
<p><code>File "home/jldugger/Desktop/chromium/depot_tools/release/gclient.py", line 537, in GetDefaultSolutionDeps deps = scope["deps"]</code></p>
<p><code>KeyError: 'deps'</code></p>
</blockquote>
<ul>
<li><strong>Once built, Chromium will do nothing of value to Linux users today.</strong></li>
</ul>
<p>Currently, I'm downloading their tarball, hoping that it works better than the
checkout instructions did.</p>A recipe for smoother IRC2008-08-29T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-08-29:a-recipe-for-smoother-irc.html<p>IRC is a common method of communication within Ubuntu and many other projects.
I believe Ubuntu policy is that nothing is official unless it's on a mailing
list, but there's still some benefit to having a rendezvous communication like
IRC with others. But frequently, I see things like the following:</p>
<blockquote>
<p><code>* novice has joined #ubuntu-foo</code></p>
<p><code>< novice> Can anyone help me with bug #53632?</code></p>
<p><code>< novice> I applied the patch, but it's not building correctly</code></p>
<p><code>< novice> Am I doing something wrong?</code></p>
</blockquote>
<p><em>Five minutes of silence go by</em></p>
<blockquote>
<p><code>< novice> Is anybody here?!?!</code></p>
<p><code>* novice has left #ubuntu-foo</code></p>
</blockquote>
<p>The first question can really be anything relevant to -foo, but the end result
is all too common; the person leaves before anyone glances at the channel to
respond. You might imagine it's a simple problem to solve with more
peoplepower, but it's more a question of latency than throughput. So having
tools to handle that latency is nice.</p>
<h3>The ingredients</h3>
<ul>
<li>
<p>A remote server running sshd</p>
</li>
<li>
<p>An account on said server</p>
</li>
<li>
<p>Screen and irssi installed on said server</p>
</li>
<li>
<p>ssh client on your local machine</p>
</li>
</ul>
<h3>Steps to take</h3>
<ol>
<li>
<p>ssh into the remote server: ssh host.com</p>
</li>
<li>
<p>run screen: screen -S irc</p>
</li>
<li>
<p>run irc: irssi -C freenode.net</p>
</li>
<li>
<p>Hit Control-A followed by D to detach from the screen session. Irssi will
continue running until you reattach</p>
</li>
<li>
<p>set up a GNOME launcher with the following command:</p>
</li>
</ol>
<p><code>gnome-terminal -e "ssh -t host.com screen -d -r"</code></p>
<ol>
<li>Click the icon, authenticate with the ssh server and presto, <strong>IRC as if
you never left</strong></li>
</ol>
<h3>Tips for maximal enjoyment</h3>
<ul>
<li>
<p><a href="http://sial.org/howto/openssh/publickey-auth/">Public Key encryption</a> can make life a bit simpler</p>
</li>
<li>
<p>Control-A followed by ? (question mark) will bring up the help dialog for
screen</p>
</li>
<li>
<p>alt-3 switches to window 3 in irssi, and so on. This continues for the
qwerty row as well</p>
</li>
<li>
<p>screen -x will attach multiple connections to the same session instead of
disconnecting the others</p>
</li>
<li>
<p><a href="http://quadpoint.org/articles/irssi">This page</a> has a more detailed description of the above</p>
</li>
</ul>
<h3>Enjoy!</h3>
<p>Now you can just ask questions without having to leave, or monitor a channel
for relevant questions and messages without missing anything. Traditionally,
putting a nick in a message highlights the line on the intended recipient's
screen, or does <a href="http://code.google.com/p/irssi-libnotify/wiki/MainPage">other interesting things.</a> A common technique to establish
rendezvous over IRC is a simple ping:</p>
<blockquote>
<p>crweb: ping? I have a question about Qt on embedded</p>
</blockquote>
<p>If nobody gets back to you immediately, be prepared to wait, especially on
relatively small and slow channels. If you need to shut down your local
computer, just hit Control+A D to detach. On busy channels like #ubuntu, it's
wise to use nick: highlighting to sort out your conversation from the others.</p>First jobs suck2008-08-28T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-08-28:first-jobs-suck.html<p>While in High School, I took a part time job at the local AMC. In retrospect,
I should have left after the first day, which they seemed to have timed with
the release of Star Wars: Episode I. The lines were massive and paperwork was
set aside to have their ten new hires push popcorn. This sort of emergency
mode was constant and I now recognize it as a red flag of terrible jobs.</p>
<p>AMC is located in Kansas City, so it makes the local news when AMC
is fined for <a href="http://www.kansascity.com/382/story/769717.html">violating child labor law.</a> I'm not very surprised they got
hit; the theaters are mainly operated by kids and overseen by community
college students. The papers are citing trash compactors and late hours; I
know both such things happened during my tenure. It's a bit of a surprise to
me that compactors are age restricted -- it's almost impossible to get injured
with the one I used. Late hours for 14 and 15 year olds was more a problem of
enforcement than deliberate evil; management assumed they'd actually leave
when scheduled to, which didn't always happen. I recall two instances of
managers visibly frustrated when one of them forgot to clock out; but clearly
not frustrated enough to stop hiring them. I'm not sure what their solution
will be; fire any such person who doesn't clock out and leave in time?</p>
<p>The real WTFs though aren't published in the papers. Federal law exempts
<a href="http://www.dol.gov/compliance/guide/minwage.htm#who">movie theaters</a> from overtime and requires no meal breaks (though few theaters offer them).
Honestly, while it was a surprise to discover, overtime was never a big
problem (Kansas law places overtime at 46 hours and has no theater provision).
But the no dinner part always was. Working 8 or 9 hours on your feet without
food was full of suck, and probably motivated a lot of concession stand theft.
I find it a bit unfortunate that AMC has paid a fine rather than significantly
improve their workplace.</p>A local perspective2008-08-17T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-08-17:a-local-perspective.html<p>Well, since it's made its way to <a href="http://politics.slashdot.org/article.pl?sid=08/08/16/1748246">Slashdot</a>, I should probably blog about
<a href="http://seantevis.com/kansas/3000/running-for-office-xkcd-style/">Sean Tevis's</a> political campaign. Tevis is running for a House seat in the
State of Kansas legislature. His district is practically down the block from
where I live, so I'm kinda affected by who wins that district, even if I can't
vote.</p>
<p>Tevis created an homage to XKCD about what <em>he</em> thinks is the Matter With
Kansas. It briefly touches the absurdity of Kansas politics (go see how many
Republican seats go uncontested in Kansas), mentions a brief platform, and
introduces his internet based campaign donation system as something
newsworthy. Perhaps it's simply the nature of technical fiction, where
"writing it causes it to be," but it has indeed been featured in the LA times,
NPR, and now Slashdot. He's received numerous donations, many of which were
under 50 dollars.</p>
<p>But besides understanding young people, what's he going to do? What's his
platform? If you only read his website, you might not know, but he's a
Democratic candidate. His comic subtly presents his platform: real science
standards, progressive tax reform, and government efficiency. He's promoting
some sort of 'best schools evar by 2020' plan, but you don't need a house seat
to instill pride in Kansas schools. You might also wonder who exactly is
against good schools, although a few School Boards have earned Kansas a...
reputation.</p>
<p>Maybe his opponent is against good schools? The comic suggests it, but the
only platform available on Arlen Siegfreid's <a href="http://www.arlensiegfreid.com/">site</a> says "Quality
Education" which is vague enough that if Arlen loses he might consider a new
career in writing patents. Does Siegfreid's "quality education" endorse
creationism or fully funded schools?</p>
<p>Arlen's front page also contains the curious paragraph:</p>
<blockquote>
<p>Arlen Siegfreid's liberal, Mainstream Coalition-endorsed liberal Democratic
opponent has gained national attention raising tens of thousands online from
out-of-state donors. You can visit his report <a href="http://ethics.ks.gov/CFAScanned/House/2008ElecCycle/200807/H015ST_200807.pdf">here</a> and add up the out-of-
state numbers yourself -- plus an additional $67,000 of his donations are
unidentified as they are under the $50 limit required by the state.</p>
</blockquote>
<p>You're in a <strong>bad place</strong> when your platform is based on your opponent rather
than what you stand for. They are at least smart enough to know what really
helps in local elections: name recognition. This is why they never mention
their opponent's name. Here we see two forms of democratic action in
opposition: engaging the undecided and uninvested, and rallying the base
constituency. Arlen isn't writing to the greater public; he writes to the
people who recognize the "Mainstream Coalition", or at least think it's a bad
thing. Apparently these people have money to donate. I doubt it will work; the
average citizen here is far more angry about property tax appraisal than Roe
v. Wade.</p>
<p>Of course, we don't vote with money in Kansas. Campaigns have to translate
those dollars into people in voting booths. Those out of state donors can't
vote here. Hell, <em>I</em> can't vote in that race. His platform, at the end of the
day, has to appeal to the people voting. The good news is, as I alluded to
above, name recognition is half the battle in local elections. The other half
is party recognition. I look forward to discovering how one of the largest
campaign bankrolls in Kansas translates into votes.</p>
<p>One interesting thing about this system is that his major expenditure thus
far has been ~$4,000 to Paypal. Barack Obama says to look at how he runs his
campaign to tell how Barack would run his administration. Tevis's platform
includes government efficiency -- is 5 percent a good deal for an online
donation system?</p>Discrete Math in Practice2008-08-14T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-08-14:discrete-math-in-practice.html<p><a href="http://jucato.org/blog/some-short-updates/">Jucato</a> asks:</p>
<blockquote>
<p>“What are the practical applications of discrete math in actual software
development?”</p>
</blockquote>
<p><strong>Discrete math is everywhere in programming and Computer Science.</strong> It's
probably more applicable in general to programming than calculus is. There are
several simple but significant situations where discrete math suits software.
The simplest case is Big O notation in loops, where Riemann sums are used to
formalize programmer intuition. But almost nobody cares about intelligent
program analysis anymore. We measure programs in terms of lines of code and
cost to write, not cost to run, and God forbid we prove anything correct. I
think that says something damning about the state of the art.</p>
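<p>To make the loop analysis concrete, here's a tiny sketch of my own (the function and its name are made up for illustration, not from any textbook): count how many times the body of the classic triangular nested loop runs, and check it against the closed-form sum that discrete math hands you.</p>

```python
def count_inner_iterations(n):
    """Count executions of the body of a triangular nested loop."""
    count = 0
    for i in range(n):
        for j in range(i):  # the inner loop body runs i times
            count += 1
    return count

# The discrete sum 0 + 1 + ... + (n-1) has the closed form n*(n-1)/2,
# which is exactly why this loop nest is O(n^2) rather than O(n).
for n in (10, 100, 1000):
    assert count_inner_iterations(n) == n * (n - 1) // 2
```

<p>The same summation trick scales up to messier loop nests; the sums just get uglier.</p>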
<p>But that's just a start. Graphs in computation (specifically DAGs - directed
acyclic graph) is a subject I've <a href="//www.pwnguin.net/cooking-for-everybody.html">touched on</a> before, mainly as an
alternative model of computation. But we use graphs in general all over the
place in software, frequently as <strong>models</strong>. There's a famous graph of system
calls to serve a web page in Apache on Linux versus Windows; optimal layout of
such a monstrosity is a graph theory problem that programs like GraphViz were
built to solve. We model lots of things in programming with graphs, and rely
on graph theory to do interesting things with them. UML is a famous example of
graphs in software development. If you want to write <a href="http://en.wikipedia.org/wiki/Object_Constraint_Language">OCL</a> about UML and
have a program verify that the software matches, you better hope the guys who
wrote the <em>verifier</em> thought discrete math was practical. But then again,
nobody in the open source world uses UML. And OO itself is not without its
<a href="http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html">detractors</a>.</p>
<p>Okay, so how about something fundamental to UNIX in practice: <strong>regular
expressions</strong>. It is a central theorem that regular expressions can be
represented using any of a number of kinds of simple <strong>finite automata</strong> (and
that you can represent those diagrams as regular expressions). If your program
wishes to handle regular expressions, an NFA is suitable for the job. It's
important to note that a regular expression match runtime should be linear in
the length of the input, and not affected much by the length of the regular
expression. But this is yet another <a href="http://swtch.com/~rsc/regexp/regexp1.html">lesson lost</a> on today's practitioners
of software development.</p>
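<p>To see the automata claim in action, here's a toy matcher of my own devising (it handles only literal characters, '.', and single-character '*', so it's a sketch of the idea, not a real regex engine). It tracks the <em>set</em> of reachable NFA states one input character at a time, so the runtime is proportional to text length times pattern length even on patterns that send backtracking matchers into exponential fits.</p>

```python
def nfa_match(pattern, text):
    """Match a tiny regex dialect (literals, '.', 'x*') by simulating
    the set of reachable NFA states -- never backtracking."""
    # Parse the pattern into tokens: (char, starred).
    toks, i = [], 0
    while i < len(pattern):
        starred = i + 1 < len(pattern) and pattern[i + 1] == '*'
        toks.append((pattern[i], starred))
        i += 2 if starred else 1

    def closure(states):
        # Epsilon closure: a starred token may be skipped entirely.
        seen, stack = set(), list(states)
        while stack:
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            if s < len(toks) and toks[s][1]:
                stack.append(s + 1)
        return seen

    states = closure({0})
    for c in text:
        nxt = set()
        for s in states:
            if s < len(toks) and toks[s][0] in (c, '.'):
                # A starred token may consume again; others move on.
                nxt.add(s if toks[s][1] else s + 1)
        states = closure(nxt)
    return len(toks) in states  # did we reach the accepting state?

assert nfa_match("a*b", "aaab")
assert not nfa_match("a*b", "aba")
# A pattern that wrecks backtracking matchers is still linear here:
assert not nfa_match("a*" * 20 + "b", "a" * 200)
```

<p>The lesson-lost article linked above develops the full construction for real regular expressions.</p>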
<p>This is getting pretty dismal, no? There's hope yet. Imagine if you built a
<strong>dependency graph</strong> from the Debian archive, where every package was a node
and every dependency was a directed edge. We wish to find an order in which to
install these such that no installed package ever depends on something not
installed. A naive implementation would be slow and possibly never terminate.
An implementation informed by graph theory would reduce the problem to
topological sort. Every DAG has a topological ordering, placing nodes into
"layers," where each "layer" depends only on the "layer" before it. Thus you
can install the innermost layer first, and work your way out. But <em>only</em> if
you have a DAG; detecting and resolving cycles is the better half of solving
this problem in real life, and is no less a graph theory problem.</p>
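<p>Here's what that layered peeling looks like in code -- a sketch using Kahn's algorithm over a made-up miniature package universe (the package names and the <code>install_order</code> helper are mine for illustration, not apt's):</p>

```python
from collections import deque

def install_order(deps):
    """deps maps each package to the set of packages it depends on.
    Returns an install order in which every dependency precedes its
    dependents, or raises ValueError on a dependency cycle."""
    remaining = {pkg: set(needs) for pkg, needs in deps.items()}
    dependents = {pkg: [] for pkg in deps}
    for pkg, needs in deps.items():
        for d in needs:
            dependents[d].append(pkg)
    # The innermost "layer": packages with no dependencies at all.
    ready = deque(sorted(p for p, needs in remaining.items() if not needs))
    order = []
    while ready:
        pkg = ready.popleft()
        order.append(pkg)
        for dep in dependents[pkg]:
            remaining[dep].discard(pkg)
            if not remaining[dep]:  # all of dep's dependencies installed
                ready.append(dep)
    if len(order) != len(deps):  # leftovers can only mean a cycle
        raise ValueError("dependency cycle among: "
                         + ", ".join(sorted(set(deps) - set(order))))
    return order

# A made-up miniature archive:
deps = {"libc": set(), "zlib": {"libc"},
        "openssl": {"libc", "zlib"}, "curl": {"openssl", "zlib"}}
order = install_order(deps)
assert order.index("libc") < order.index("zlib") < order.index("curl")
```

<p>Real package managers also juggle versioned and virtual dependencies, but the peeling of layers is the same topological sort.</p>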
<p>This principle applies in many places. CPU instruction reordering.
Spreadsheets. Makefiles. Other examples from the Wikipedia article. But maybe
you just want to play a game inspired by fundamental discrete math. For your
consideration, <a href="http://web.mit.edu/xiphmont/Public/gPlanarity.html">gPlanarity</a> (package name: gplanarity).</p>Arora versus Aurora2008-08-09T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-08-09:arora-versus-aurora.html<blockquote>
<p>Why should I have to change my name? He's the one who sucks!</p>
<p>-- Michael Bolton, <em>Office Space</em></p>
</blockquote>
<p><a href="http://code.google.com/p/arora/">Arora</a> is a next generation browser project publicly released <a href="http://github.com/Arora/arora/commit/ecbfdbe31736065658267be5a78325f917a7ccb4">April
11th, 2008</a>, with a focus on bleeding edge <a href="http://code.google.com/p/arora/wiki/QtWebKit">UI elements.</a></p>
<p><a href="http://adaptivepath.com/aurora/">Aurora</a> is a next generation browser project publicly announced <a href="http://labs.mozilla.com/2008/08/introducing-the-concept-series-call-for-participation/">August
4th, 2008</a>, with a focus on bleeding edge <a href="http://www.adaptivepath.com/blog/2008/08/04/aurora-design-themes/">UI elements.</a></p>
<p>Besides the obvious differences, one has working code, and the other is a
Mozilla Labs prototyping project.</p>Ubuntu Gear2008-08-08T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-08-08:ubuntu-gear.html<p>One last post on <a href="//www.pwnguin.net/gnome-a11y-contest.html">the subject</a> of <a href="//www.pwnguin.net/introducing-money-into-open-source.html">money</a>.</p>
<p>I know that the dollar's been on a recent downward trend, but <a href="https://shop.canonical.com/index.php?currency=USD&cPath=14">this still
feels exorbitant</a>. But the old "strong dollar in name only" policy cuts
both ways -- maybe it's time to set up a shop in the land of cheaper exports?
Shipping itself adds nearly 50% to the price!</p>
<p>UPDATE: fixed the shop link!</p>GNOME a11y Contest2008-08-05T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-08-05:gnome-a11y-contest.html<p>Keeping with the theme of <a href="//www.pwnguin.net/introducing-money-into-open-source.html">money in Linux</a>, today I present you <a href="http://www.gnome.org/projects/outreach/a11y/">GNOME
Outreach Program: Accessibility</a>. The Program has raised $50,000 in pursuit
of three goals:</p>
<ol>
<li>
<p>Increase awareness of accessibility and its relation to computer
applications</p>
</li>
<li>
<p>Encourage and inspire people to work on accessibility</p>
</li>
<li>
<p>Help the free software community improve its accessibility</p>
</li>
</ol>
<p>These come from <a href="http://www.gnome.org/projects/outreach/a11y/rules/">the rules</a> the front page encourages everyone to read.
Divided into Long-Term and Short-Term Tasks, the rules read more like a
sweepstakes than an employment agreement or contract. Still, the fundamentals
are oddly close to a work for hire: you perform a set of tasks, and are
rewarded with money. $6000 for Long-Term tasks, $200 for Short-Term tasks.</p>
<h3>Short-Term Tasks</h3>
<p>On the subject of short-term tasks, <a href="http://www.gnome.org/projects/outreach/a11y/tasks/">the tasks list page</a> says:</p>
<blockquote>
<p>Are you a developer who wants to become more familiar with accessibility?
Are you an artist that can draw? Maybe you might also be interested in
becoming a module maintainer some day. A great way to get started is by fixing
bugs, and we're offering you a way to get paid to do it. :-)</p>
</blockquote>
<p>Short term tasks are two week affairs that pay out $200 each (in bundles of
five) a construction similar to Google's <a href="http://code.google.com/opensource/ghop/2007-8/">"Highly Open Participation
Contest"</a>. Unfortunately, I doubt any of the people completing them expect
to claim any prize money. It's a simple matter of math: if any task takes more
than 30 hours, flipping burgers pays better (where I live).</p>
<p>Bundling by fives is crazy. It discourages participants simply by the unknown
quantity of effort and massive investment before payout. But it gets worse. A
sufficiently clever person can nullify all monetary incentive for some tasks
and <strong>a sufficiently conscientious person will avoid completing tasks they
don't intend to claim a prize on.</strong> An example: there are only five "Create 10
Accessible Icons" tasks posted. Completing one task effectively reserves the
rest for you. Certainly, if you try one, finish it, and decide the other four
tasks are too much work to bother with, you've removed the intended monetary
incentive, albeit accidentally.</p>
<p>This assumes the flow of tasks is stagnant, and so far I've seen only
<a href="http://bugzilla.gnome.org/show_activity.cgi?id=519313">evidence</a> supporting that assumption. If you can't find five tasks you
think you can claim before anyone else does, you can either wait until new
tasks come along or start now and hope they do. Worse still, it seems many of
the small task bugs marked as completed were done without knowledge of the
Outreach Program. A total of six small tasks are marked completed:
<ul>
<li>
<p><a href="http://bugzilla.gnome.org/show_bug.cgi?id=519469">Option to switch OFF sound completly while keeping video
communications</a></p>
</li>
<li>
<p><a href="http://bugzilla.gnome.org/show_bug.cgi?id=519484">User choice of video codec compression algorithms</a></p>
</li>
<li>
<p><a href="http://bugzilla.gnome.org/show_bug.cgi?id=103223">Notification Area needs keynav</a></p>
</li>
<li>
<p><a href="http://bugzilla.gnome.org/show_bug.cgi?id=464468">Automatically scroll text caret to make it visible</a></p>
</li>
<li>
<p><a href="http://bugzilla.gnome.org/show_bug.cgi?id=506900">Dasher interface lacks HIG compliancy</a></p>
</li>
<li>
<p><a href="http://bugzilla.gnome.org/show_bug.cgi?id=519092">Add accessibility support to GtkVolumeButton</a></p>
</li>
</ul>
<p>The first two bugs were closed by an Ekiga developer as WONTFIX. Folks, it
doesn't get any more <strong>classic Bugzilla</strong> than that. The middle two were fixed
by a developer being paid for their work by Sun. Only the last two were
claimed tasks under the Outreach Program. So of the 30 total Short-Term task
claims, 2 have been fixed thus far via Outreach.</p>
<h3>Long-Term Tasks</h3>
<p>Long-term tasks are closer to Google Summer of Code in nature: six-month
projects netting $6000. Thus far, there have been two proposals accepted:
<a href="http://mousetrap.flaper87.org/trac/">"MouseTrap: Head Tracking via Lowcost Webcam"</a> and <a href="http://www.gnome.org/projects/outreach/a11y/tasks/magnification/">Magnification.</a></p>
<p>MouseTrap was actually working two months ago. You can see video of it in
action:</p>
<blockquote>
<p>(<a href="http://youtube.com/profile_videos?user=flaper87">Link if you don't
see the Flash Player above</a>)</p>
</blockquote>
<p>And the magnification task is being worked on by the developer of Compiz's
Enhanced Zoom. They're both talented people and I'm sure the money motivates
them to continue the work they started outside of GNOME.</p>
<h3>Will they meet their goals?</h3>
<p>With half a year yet to go, it seems unfair to make any doom and gloom
predictions. So instead, let's examine where they stand now in relation to
their goals.</p>
<p>Have they helped improve free software accessibility? <strong>Yes. Not as much as
they hoped</strong>, but there's time left to save it. They've fixed a few bugs and
enticed a few projects towards GNOME that could have a large impact.</p>
<p>Have they encouraged and inspired people to work on accessibility? They've
certainly encouraged the two Long-Term authors, though maybe not <em>inspired</em>
them. Another three people seemed inspired enough to pick up some short term
tasks, but none seem on pace to net five Completed Tasks.</p>
<p>Have they increased awareness of accessibility? I'd say not enough. <strong>This is
an "outreach program", but it almost seems GNOME needs "inreach" to convince
their own project members accessibility matters.</strong> Normally when I think of
outreach I think of an expert organization going to the public to share that
expertise, not an organization in search of experts. Closing bugs as WONTFIX
that the Foundation and sponsors are trying to pay someone to fix does not look
like an organization full of accessibility experts. If GNOME leadership
doesn't convince the rank and file that it needs accessibility, outside
contributors face a hurdle that could leave them bitter on the subject. This
sort of failure jeopardizes any success found in the previous two questions.</p>
<p>Still, $6,000 is a cheap price for getting some of the Compiz people thinking
about how GNOME might ever integrate their work. And a cheap price to get the
ball rolling on head tracking within GNOME. If we're lucky, in some distant
future we might see eye tracking as well!</p>Introducing Money Into Open Source2008-08-04T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-08-04:introducing-money-into-open-source.html<p>Jeff Atwood's <a href="http://www.codinghorror.com">Coding Horror</a> is a blog ostensibly about "Programming and
Human Factors". So then Jeff's <a href="http://www.codinghorror.com/blog/archives/000893.html">proposition</a> about donating money in OSS is
a little troublesome. Jeff is confused and needs help spending money, like the
protagonist of <a href="http://en.wikipedia.org/wiki/Brewster's_Millions">a
popular novel and film</a>. I stopped reading Coding Horror on a daily
basis long ago, but the goals behind this particular theme are well placed,
and it seems like a worthwhile initiative. He <a href="http://www.codinghorror.com/blog/archives/001158.html">writes about donating $5,000 to
an open source .NET project</a>:</p>
<blockquote>
<p>I had hoped that $5,000 grant money would be converted into something that
furthered an open source project -- perhaps something involving the community
and garnering more code contributions. But apparently that's more difficult
than anyone realized.</p>
<p>[...] I'm absolutely dumbfounded to learn that contributing money isn't
an effective way to advance an open source project. Surely money can't be
totally useless to open source projects... can it?</p>
</blockquote>
<p>What follows below is the kind of research Jeff Atwood should have done in
preparation of giving that money. What I think would have been most useful to
Jeff is researching what has already been done, though I suppose he's not got
<strong>time</strong> for such trivial things.</p>
<h3>Ubuntu</h3>
<p>A familiar infusion of cash into a project likely to be well known by my readers
is Mark Shuttleworth's founding of the Ubuntu project. On his wiki page, under
"Why are you funding Ubuntu, instead of giving the money to Debian?" he writes:</p>
<blockquote>
<p>And finally, it seems to me that the hard part is not making funds available,
its allocating them to people and projects. I could easily write a cheque to
SPI, Inc, for the same amount that I've invested in Ubuntu. But who would
decide how that money was spent? Have you actually read the financial
statements of SPI, Inc, over the past few years?</p>
</blockquote>
<p>Instead Mark chose to start Ubuntu as a paragon, demonstrating how changes might
improve Debian through example, probably expecting to remain a small unpopular
beacon on the hill. There were a number of things wrong with Debian at the time,
and I think Ubuntu's rapid popularity helped light a fire underneath them at a
time when some Debian people were thinking "So what if we're losing users to
Gentoo? Gentoo can have those idiots." XKCD has a tongue-in-cheek examination of
why that line of thought is wrong.</p>
<p>To that end, several people were hired and a company formed. Lots of <strong>time</strong>
was spent picking who would be hired, what would be done, and explaining what
was broken in Debian and how Ubuntu would work differently. </p>
<h3>Inkscape</h3>
<p>Bryce Harrington, intimate with the development of Inkscape, writes in <a href="http://bryceharrington.org/drupal/node/52">"Pay
in Time, not Money"</a>:</p>
<blockquote>
<p>I can't count the number of times people have offered $100 bounties for
implementing some feature or other in Inkscape. From what I've seen - and
I've seen MANY features come into Inkscape from folks who aren't developers
- $100 is the wrong way to do it.</p>
<p>Not that people are anti-money or anything like that. Certainly we've gotten
rabid success out of our Google Summer of Code projects, but these pay out
$4500. So maybe $100 just isn't the right price point to stir interest (even
a simple feature is going to require 10-20 hrs work, at which point you might
make more flipping burgers!)</p>
</blockquote>
<p>Bryce wrote this after Jeff's decision, so it might seem unfair to cite him, but
I think it's fair to say Bryce's thoughts reflect many of the developers Jeff
could have asked. Bryce offers advice reinforcing Jeff's perception that time is
more important than money. Bryce goes on to laud the advantages of documenting
failure in a wiki, for others to read and overcome. Perhaps Launchpad should
have <a href="http://www.stefanoforenza.com/launchpad-needs-a-wiki/">offered wikis instead of bounties</a>.</p>
<p>Bryce also cites Google's Summer of Code as a success. He rightly chastises these
chump-change bounties, but I wonder how Bryce feels about the apparent failure of
Jeff's larger donation that matches Google's SoC in scale.</p>
<h3>Google Summer of Code</h3>
<p>Google Summer of Code is probably the most well known project spending money on
open source. They've certainly paid for some kickass projects on behalf of
Ubuntu in the past. But unlike most bounties their model is a bit different --
they're targeting a specific group of people and asking for specific goals. The
specific group is important -- college students interested in working at Google.
$5000 motivates these students, since students don't have long term employment needs
or expectations, a job to quit or family to feed. Additionally, because Google puts
it out there as an "internship in open source", there's an implicit assumption that
a job with Google can be had via this project, and that's something most of us can't
rely on. </p>
<p>Asking for specific goals is another part of the equation. Google rightly recognizes
that many projects don't have a lot of management. By asking groups to deal with deadlines
and requiring all parties to come to them with ideas and choices, they embed
management principles often ignored in open source projects. They also sidestep
organizational problems by sending checks directly to the people doing work and doing
some of the footwork beyond cutting a check themselves. </p>
<p>Google asks organizations and people to plead their case. It's anti-democratic and <em>it
works</em>. That's important to note, because Debian's own attempts at funding have been met
with fierce democratic resistance.</p>
<h3>dunc-tank</h3>
<p>Debian is a rough and tumble society. If you disagree, look at Sam Hocevar's
<a href="http://sam.zoy.org/">homepage</a> some time. <strong>They elected a weapons grade troll as DPL</strong>. (A title Sam
might proudly wear, I think). Still, he's been an effective leader, and his <a href="http://www.debian.org/vote/2007/platforms/sho">platform</a>
was entirely reasonable after a highly contentious year. He writes on the subject of money:</p>
<blockquote>
<p>Debian has money. Last time someone wanted to use that money for something, we disagreed,
and he found money somewhere else. So we still have that money, and I would like to use
it at least to fix our broken hardware. I cannot believe this is in a DPL platform, but
escher has been down for ages, developers do not have access to an alpha machine, and we
have not even tried to fix that problem with money.</p>
<p>Speaking of money, one thing that requires lots of it is meetings. IRC meetings are not
enough for some tasks, and isolating people in a remote place to work together on nothing
else than their Debian project has proven to work very well. Though I am getting scared by
the escalating luxury of the DebConf accommodations, I believe we can and should afford even
more meetings to take place. There are local structures in many countries (Extremadura in
Spain, Cetril in France) that can take care of the logistics. </p>
</blockquote>
<p>Sam's DPL-ship followed a long and painful debate about raising funds to compensate key people
for their work. I don't want to open wounds here, but it's a risk that must be taken to review
what went wrong. <a href="http://www.dunc-tank.org/">dunc-tank</a> was (is?) a project to compensate the volunteer release managers
of Debian for their work. The proposal was received so poorly that a small minority of people
didn't just reduce involvement in the face of compensation from others, but aimed to see the goals
of dunc-tank fail, by <a href="http://dunc-bank.zoy.org/">filing as many RC bugs as possible</a>. In one sense, the introduction
of paid workers failed -- Etch released late. </p>
<p>But in another sense it succeeded wildly. Suddenly a group of people had been motivated to
test Debian and report RC bugs in far greater numbers than before, in <em>spite</em> of not being paid
for their efforts. Clearly something else was at work here, beyond compensating key players. I
suspect that Sam Hocevar's role in that contra group greatly contributed support to his bid for
DPL; it certainly demonstrated his ability to drive community contribution to release engineering.
Sven Muller gave an opinion in his <a href="http://blog.incase.de/index.php/2006/10/27/debian-and-dunc-tank/">summary of events</a>:</p>
<blockquote>
<p>The whole thing might have been a lot different if some random, mostly unknown DD (or even better:
non-DD) had started dunc-tank, collecting money and finally paying some DDs to work on some specific tasks.</p>
</blockquote>
<p>Much of the controversy appears to have been around whether the DPL could approve the use of
general Debian funds to pay some developers but not others. That's a tough question, especially
when a few developers appeared to despise money itself. Prescient Ben Mako Hill wrote something
<a href="http://mako.cc/writing/funding_volunteers/funding_volunteers.html">on mixing volunteers and money</a> that Dunc Tank might have neglected. It covers a few other
examples of how money complicated things, including the historic X Consortium's <a href="http://www.usenix.org/publications/library/proceedings/usenix2000/invitedtalks/gettys_html/img0.htm">collapse</a>
and the revival in the form of the Xfree86 and later the X.org Foundation.</p>
<h3>Nouveau</h3>
<p><a href="http://nouveau.freedesktop.org/">Nouveau</a> has a spectacular example of crowd-sourced funding. Rather than raising $10,000
from a monied few, David Nielsen offered the following pledge:</p>
<blockquote>
<p>"I will pledge at least $10 USD towards the development of the open source nouveau driver for the nvidia card series but only if 1,000 other people will do the same."</p>
</blockquote>
<p>David was astonished at the <a href="http://www.pledgebank.com/nouveaudriver/info">rate</a> of success of the pledge drive, and the nouveau developers
were as well. So did the developers choose to spend it on beer, cigarettes and hardware? Surprisingly,
no. Despite overwhelming support, none of the money raised has been collected or spent, for many
reasons, some unique to this drive. I asked Stephane Marchesin, prolific nouveau developer, what happened,
and he replied:</p>
<blockquote>
<p>My bank said the issue with small donations, especially international donations, is that the transfer
itself costs you money (up to $10, which is the amount for each donation). They also said under banking
regulations, 1400 $10 transactions could look suspicious and freeze my personal account. I don't know to
what extent this would have been taxed. This was not my major concern, as setting up a non-profit
organization would have solved that issue anyway. </p>
</blockquote>
<p>American readers may not realize it, but European <a href="http://en.wikipedia.org/wiki/Taxation_in_France">taxes</a> are quite high. The frozen-account part
is troublesome, but a separate non-profit legal entity insulates Stephane's bank account from both
these problems (though not from the transaction fees). We have many such entities, though as Mark Shuttleworth
alluded to above, sometimes they don't inspire confidence. Unfortunately, it appears not a single one of the
public entities supporting open source was ready to risk legal battles by accepting responsibility for
nouveau's pledge money. Stephane tells me that the Software Freedom Conservancy has since improved:</p>
<blockquote>
<p>We recently got an acknowledgment from the Software Freedom Conservancy that nouveau would have been a
suitable project for them to harbor (they are not afraid of being sued, for example they host the finances
for samba and wine). It's just unfortunate that at the time we looked for a helping organization they were
still in the process of setting up theirs.</p>
</blockquote>
<p>Additionally, there's concerns over who gets what. When I asked on #nouveau about the pledge drive,
it was suggested that simply putting money on the table is a recipe for <a href="http://youtube.com/watch?v=awskKWzjlhk">epic drama</a>, and their
current system works well enough: groups of people pitch in to buy hardware and ship it to a specific
developer. </p>
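<p>Stephane's fee worry is easy to quantify. A minimal sketch, using the worst-case $10 per-transfer fee from his quote and the roughly 1400 pledges from the drive; the intermediate fee value is an illustrative assumption:</p>

```python
# Net proceeds when a flat bank fee is deducted from each small
# incoming transfer (worst case up to $10, per the quote above).
DONORS = 1400
DONATION = 10.0  # dollars pledged per donor

def net_proceeds(fee_per_transfer):
    """Total raised after a flat fee is taken from every donation."""
    return DONORS * max(0.0, DONATION - fee_per_transfer)

for fee in (0.0, 2.5, 10.0):
    print(f"fee ${fee:.2f}: net ${net_proceeds(fee):,.2f}")
```

<p>At the worst-case fee the entire drive nets nothing, which is why a single aggregating entity, receiving one large transfer instead of 1400 small ones, changes the picture entirely.</p>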
<h3>Conclusions</h3>
<p>$5k or $10k is enough money to get attention, enough that people worry about doing it right. But not
enough that people are actually motivated to do it right. Projects won't waste time thinking about
spending money they don't anticipate receiving, so they generally don't have anything in place to take
your money. They might not even realize they need it; the ScrewTurn wiki author Dario self-nominated his
project for the donation, apparently not fully aware how much paperwork and red tape might be involved.
Some projects do indeed appear to use their money-agnostic traits as an insurgent model like Jon Galloway
suggests; not only do they not know what to do with it, having it makes them more legitimate targets.</p>
<p>If not every project can use money, it's important then to review what does work. Google's approach neatly
circumvents many of the problems above. They seek projects with the infrastructure to accept money. They
amortize the costs of organizing by handling a huge number of projects, and they invite projects to
participate in the stewardship of the resources Google's money has bought for them. Transaction costs are
minor because the numbers involved are large. </p>
<p>One thing that's clear from the evidence is that travel expenses are an uncontroversial way of spending money
on a project. It's heavily recommended by the comments to Jeff's own blog, Sam Hocevar, and Ben Mako Hill.
The comments suggest paying for PDC and using the other half for flight, food and hotel. ($2,500 for a
conference sounds expensive to me--does PDC stand for "Pretty Damn Costly?")</p>
<p>The obvious answer to Jeff's problem is to reduce the <a href="http://spreadsheets.google.com/pub?key=pKxDW35algYebfs8nssTjIQ">candidate list</a> to those projects capable of
spending his money. That seems like the number one lesson Jeff can impart to the growing .NET Open Source
community, and maybe open source at large.</p>Origami Ibex2008-08-02T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-08-02:origami-ibex.html<p>TED published a talk from Robert Lang titled <a href="http://www.ted.com/index.php/talks/robert_lang_folds_way_new_origami.html">"Idea+Square=Origami"</a> (18
mins):</p>
<p>He talks about art and its relationship to math and engineering; it's really
quite amazing where he goes with it. But even if you hate math or art
philosophy, there's something amusing embedded in the talk: an origami Ibex.
He has two examples, <a href="http://www.tansu-style.com/robert-j-lang/robert-j-lang-web-gallery/pages/20-P2010150.html">here</a> and <a href="http://www.langorigami.com/art/gallery/gallery.php4?name=sentinel">here</a>. He even adds details in the
horns.</p>
<p>But just as importantly, Linux source and builds are available for the
software he wrote to help him design origami. You can find it all on the
<a href="http://www.langorigami.com/science/treemaker/treemaker5.php4">TreeMaker homepage</a>. If you use it and like it, be sure to thank him! It'd
be really neat if someone came up with directions for making an Ibex to keep
people busy in the run-up to the Ibex release party :)</p>Backgrounds for people with no talent2008-07-24T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-24:backgrounds-for-people-with-no-talent.html<p>I love the official Hardy background. It serves many purposes well:</p>
<ul>
<li>
<p>Far above and beyond the restrained abstract backgrounds that came before
it</p>
</li>
<li>
<p>No point is so bright that it distracts your eye from using the desktop</p>
</li>
<li>
<p>The codename heron served as an appropriate inspiration for an LTS (but we
don't have to repeat that with intrepid)</p>
</li>
<li>
<p>It is art that goes out of its way to let you know it's art</p>
</li>
<li>
<p>By being so damn good, it makes the strongest case possible for the
current color palette</p>
</li>
<li>
<p>It demonstrates how a group of people can create something better than a
single person did</p>
</li>
</ul>
<p>There's even some neat things about it you probably don't notice. The accent
shapes surrounding the heron are actually taken from the heron itself. The
author who did that feels it's lazy, but I like it; musical pieces often
borrow from themselves to create a more nuanced whole. There's also a subtle
gradient that can be handy to tell how good the color depth of your display is
and how well your dithering algorithm is working. (Nouveau at one point
apparently was slightly busted there.)</p>
<p>Unfortunately, there's some drawbacks. The heron was awesome, but somewhat
tied to that release. If Ubuntu continues to modify it, we look lazy and it
looks out of place. This is probably why we should avoid codename inspired
animals in future releases.</p>
<h3>Fractals: The Ultimate Programmer Art</h3>
<p>I don't normally cite gentoo-wiki (did you know they aren't officially
affiliated with Gentoo?), but they have a fantastic article on using GIMP for
<a href="http://gentoo-wiki.com/TIP_GIMP_Fractal_Backgrounds">fractal backgrounds</a>. The page describes itself thusly:</p>
<blockquote>
<p>This short how-to describes how to make a simple and yet visually appealing
background for your computer in very little time with very little skill.</p>
</blockquote>
<p>The gallery has some nice results, but the real fun is making your own, of
course. It's a pretty simple process in GIMP:</p>
<ol>
<li>
<p>Pick a gradient / colormap to render the fractal in</p>
</li>
<li>
<p>Open <a href="http://docs.gimp.org/en/plug-in-flame.html">Filters->Render->Nature->Flame</a></p>
</li>
<li>
<p>Zoom way in under the Camera tab -- the fractal is your centerpiece</p>
</li>
<li>
<p>Click the Edit button and browse through the variations and
randomizations for a render you like</p>
</li>
<li>
<p>Once you have a fractal you like, <strong>make sure to save it!</strong> (Click OK to
exit the Edit Dialog, then Save) You might want to re-render it in a different
colormap later</p>
</li>
<li>
<p>Select a colormap (custom gradient most likely)</p>
</li>
<li>
<p>Render</p>
</li>
<li>
<p>Play around with the color tools, experiment with GIMP. There's lots of
stuff, and you can always hit undo if you don't like the results.</p>
</li>
</ol>
<p>You can see my results hosted <a href="http://jldugger.deviantart.com/">on deviantArt</a>. I like the blue and red one
the best. The other one is a bit too bright and grainy, and I don't think I'll
bother to fix it.</p>
<p>This process has disadvantages; you have no direct control over the results,
and they have almost no bearing on reality or historical artistic methods
unless you explicitly give it some. But there's a number of advantages: you
can generate variations rapidly, you don't need to study or practice art or
math to understand it, and you can end up with something you like fairly
quickly. I hope gentoo-wiki's quick tutorial helps you make a desktop that's
uniquely <em>yours</em>.</p>George Carlin's rolling in his grave2008-07-24T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-24:george-carlins-rolling-in-his-grave.html<p>Wikipedia offers <a href="http://en.wikipedia.org/wiki/Etiquette">the following definition</a> of etiquette (emphasis mine):</p>
<blockquote>
<p>Etiquette, one aspect of decorum, is a code that governs the expectations of
social behavior, according to the <strong>contemporary conventional norm</strong> within a
society, social class, or group. Usually unwritten, it may be codified in
written form.</p>
</blockquote>
<p><a href="http://www.ubuntu.com/community/conduct">The Ubuntu Code of Conduct</a> is our etiquette, codified in written form; it
is also universal. It covers:</p>
<blockquote>
<p>"behaviour as a member of the Ubuntu Community, in any forum, mailing list,
wiki, web site, IRC channel, install-fest, public meeting or private
correspondence"</p>
</blockquote>
<p>Written in the affirmative, it offers several adjectives relevant to how
Ubuntu development should be done. This universality means making rules and
interpretations for Planet Ubuntu based on the CoC might reasonably apply
elsewhere. Jono Bacon, member of the Ubuntu Community Council and Canonical's
appointed Ubuntu Community Manager once stated he felt excited to work at a
place where he didn't have to turn off who he was during work hours or
separate what he liked from what he did, and we may <a href="https://wiki.ubuntu.com/CommunityCouncilAgenda/">shortly discover how Jono
feels about diminishing that</a>. Despite the clear wording of universality,
Fabián Rodríguez has <a href="http://www.fabianrodriguez.com/blog/archives/2008/07/21/have-you-noticed-a-friendly-reminder/">suggested</a> that the Planet needs more formalized
rules than the ones inherited from Debian (English only and "don't be
annoying") and the CoC. His position appears to be that including
objectionable <a href="http://www.sourcecode.de/content/linux-haters-blog-windows-lover-blogs-wtf">words and phrases</a>, and objectionable ideas are not
respectful, and therefore <a href="https://wiki.ubuntu.com/CommunityCouncilAgenda/talk">violate the CoC</a>. <strong>I respectfully disagree.</strong></p>
<p>Fabián Rodríguez <a href="http://www.fabianrodriguez.com/blog/archives/2008/07/21/have-you-noticed-a-friendly-reminder/">writes</a>:</p>
<blockquote>
<p>I don’t expect anyone to change their “WTF” and “STFU” attitude, just leave
it outside this project. Setting up a category to carry only Planet Ubuntu
posts may help.</p>
</blockquote>
<p>And clarifies in <a href="http://www.fabianrodriguez.com/blog/archives/2008/07/21/have-you-noticed-a-friendly-reminder/#comment-91264">a comment</a>:</p>
<blockquote>
<p>Although I am brining up the CoC because we have one, I think it is such
common sense I am a bit surprised I even got comments on IRC asking what is
wrong with WTF’ing here and there or A**holing now and then. Nothing really.
But take it elsewhere. And I’ll gladly meet you there, but it won’t be under
my @ubuntu.com hat.</p>
</blockquote>
<p>It should be obvious that sending harassing, demeaning or confrontational
email to Ubuntu or Debian or any other developers isn't suddenly okay if you
didn't use @ubuntu.com as the From address. Similarly, it shouldn't matter
whether you tag a post with "ubuntu"; if you act or write from a position
within the community, and the audience associates you with Ubuntu, you should
follow the guidelines as an ambassador of Ubuntu to the larger Linux
community, or at least make a note that you are not acting or writing as a
member of the Ubuntu community in cases where it might not be clear. At any
rate, <strong>if you are an Ubuntu Member, then, you should be worried when someone
tries to redefine the Conduct you agreed to follow.</strong></p>
<p>The logical conclusion of Fabián's position is that to be respectful nothing
that sets off anyone's triggers may pass through Planet Ubuntu's gates. This
is a dangerous place to be: one can think of dozens of actions that might be
considered offensive to some. As Jordan Mantha eloquently <a href="http://laserjock.wordpress.com/2008/07/23/repurposing-planet/">put it</a>:</p>
<blockquote>
<p>trying to legislate morality is both undesirable and incredibly difficult
for the Community Council to do. They are trying to represent a community made
up of people from nations and cultures all over the world, and it’s
essentially impossible to satisfy both the moral sensibilities and personal
liberties of everybody at the same time. I’m also fairly sure it is neither
their right nor their charter to tell people what is and is not offensive.</p>
</blockquote>
<p>There exist a number of social, religious and political taboos that various
cultures may find offensive. It feels weird being an American (land of
assimilation) calling a Canadian (home of multiculturalism) on this. If
Ubuntu, "Linux for human beings," demands that writing obey one viewpoint, it
potentially offends another one as censorship. For example, most of us may see
a <a href="https://launchpad.net/~ubuntu-l10n-bo">Tibetan</a> language translation of Ubuntu as progress in bringing Free
Software to people who need it, but to a few the act may <a href="http://www.newleftreview.org/?page=article&view=2720" title="One of the biggest grievances is that the Chinese authorities equate any expression of Tibetan identity with separatism.">suggest</a> an
anti-China political statement, akin to adding a Confederate flag to the
distribution. The Code of Conduct is fortunate to say nothing about such dry
powderkegs. As long as we can hold beliefs, disagree, and still obey the Code
of Conduct's demands for consideration, respect, collaboration, and
consultation, there should be room in Ubuntu for all of us.</p>
<h3>Chilling effects</h3>
<p>Attitude is one thing; I think RTFM or STFU are rarely productive statements.
But what's appeared goes far beyond that. Fabián prefers that people who don't
agree with his flexible interpretation of "respect" go away. I know at least
one guy who does. He still contributes to the development of Ubuntu, but what
he has to say is less often heard because despite his <a href="http://www.angryfacts.com/facts.cgi?f=9">reputation</a>, he's
quite willing to implicitly comply with the infantilization of the Planet and
rarely tags posts our way. It's fortunate that it's easy to include individual
RSS feeds in Liferea directly, but if I don't first know that some self-censorship
is going on, I simply lose that insight, no matter how germane
it is to Ubuntu, Free Software or the communities that surround them.</p>
<p>But even if we were to agree that some topics are a bridge too far, specific
words and phrases found offensive by some have no clear relation with respect,
and already are within <strong>contemporary conventional norms</strong> when used in
moderation. In fact, <a href="http://undamped.blogspot.com/2008/07/be-dick-if-thats-what-you-are.html">this</a> entire enlightened discussion would have come
across as condescending rather than conversational if the language were to be
sanitized; by using that language the author communicates that the audience is
a peer, which is <em>central</em> to the point. <strong>Self-censored writing feels
inauthentic</strong>. In that thread, the author comments:</p>
<blockquote>
<p>I haven't been particularly active in the Ubuntu community (my first
introduction to the open source world), largely because everyone is so damned
polite all the time, and as a result the discussions seem fairly dry and
limited to technical topics.</p>
</blockquote>
<p>This is a disappointing failure to integrate, especially since Rhythmbox
<a href="https://bugs.edge.launchpad.net/ubuntu/+source/rhythmbox" title="My God, it's full of segfaults!">needs a lot</a> of lovin', and I'd be happy to see Ubuntu play a
foundational role in making that happen.</p>
<h3>Doing something constructive about it</h3>
<p>One thing that can be done is to <strong>offer editorial advice</strong> ahead of
publication. I've often wished to have a few trustworthy people preview my
work on this blog and offer suggestions the way <a href="http://kuro5hin.org">kuro5hin</a> does before
going public with a writing. It's a bit sad that the advent of blogging
software led to the downfall of community driven writing like k5. Stephan's <a href="http://www.sourcecode.de/content/you-will-never-fix-bug-01">
writing</a> comes across as a bit... "stream of thought," and perhaps a round
of editorial review can create something more effective at communicating his
ideas and getting people to agree with him. I suspect such offers will be
treated as censorship, though a good editor offers only advice, not orders.</p>
<p>Since people are seeking, among other remedies, the removal of Stephan's blog
from the Planet, I thought I'd do them a favor. As far as I can tell, the
current Planet software doesn't implement filtering (the Venus branch might,
but will it support queries?), but the entire Planet software is easy to
duplicate, and its output is easy to manipulate. Here's what the Planet looks
like without <a href="http://pipes.yahoo.com/pipes/pipe.run?_id=ekaR_FtY3RGsNNRdCB2yXQ&_render=rss&textinput1=Stephan+Hermann">Stephan Hermann</a>. And for comparison, without <a href="http://pipes.yahoo.com/pipes/pipe.run?_id=ekaR_FtY3RGsNNRdCB2yXQ&_render=rss&textinput1=Fabi%C3%A1n+Rodr%C3%ADguez">Fabián's
blog.</a> You can see how this relatively simple filter is
constructed <a href="http://pipes.yahoo.com/jldugger/ubuntuplanetfilter">here</a>. If that doesn't float your boat, I've also
constructed a simple <a href="http://pipes.yahoo.com/jldugger/ubuntuminusobsenity">dirty words filter</a>. Feel free to customize it; the
defaults come from <a href="http://en.wikipedia.org/wiki/Seven_dirty_words">the expert on the subject.</a> I've also considered
running an alternative, unofficial planet similar to <a href="http://planet.kernel.org/fedora">Dave Airlie's</a>, but
I'm not sure it's possible without coming across as arrogant or causing hurt
feelings.</p>
<p>Finally, the Community Council has this topic on its agenda, and if it
doesn't get tabled for lack of time, it will be heard at the next meeting. If you
can't attend, there are logs available for all such meetings on
<a href="http://irclogs.ubuntu.com">irclogs.ubuntu.com.</a> In the spirit of being collaborative, it seems
relevant to invite <a href="http://emmajane.net/">Emma Hogbin</a>'s opinion, as it seems it was the language of
<a href="http://www.archive.org/details/women_in_open_source">her lecture</a> that <a href="http://mjg59.livejournal.com/94420.html">started</a> the <a href="http://www.sourcecode.de/content/you-will-never-fix-bug-01">mess</a> Fabián and Stephan find
themselves in now. As an invited speaker at the now-canceled Ubuntu Live!
event, she is someone whose future participation in the Ubuntu community would
clearly be affected by any decision rendered.</p>
<p>If you take one thing away from this essay let it be this: <strong>Booting members
from the project is in no way, shape, or form "collaborative"</strong>, and should be
taken only when all reasonable measures have failed.</p>Ubuntu advantages2008-07-20T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-20:ubuntu-advantages.html<p>There are some advantages to running Ubuntu I can think of:</p>
<ul>
<li>
<p>apt-get</p>
</li>
<li>
<p>six month release cycles</p>
</li>
<li>
<p>a very low cost</p>
</li>
<li>
<p>deep access to the process that creates Ubuntu</p>
</li>
<li>
<p>a large community with an <a href="http://www.ubuntu.com/community/conduct">established etiquette</a></p>
</li>
</ul>
<p>But not among those is "never search the internet for drivers again". There is
a perception that this is special to Ubuntu or Linux. Oh, and also that it's
true. Which is why <a href="http://podcast.ubuntu-uk.org/2008/07/16/s01e10-easy-come-easy-go/">the Ubuntu UK podcast</a> mentions it.</p>
<p>The problem, as I've lead you to believe, is that it's <strong>not true</strong>. Linux
does support a lot of hardware. Probably more than Windows, certainly more
than Apple. Greg K-H says people want Linux kernel source / binary
compatibility for the driver support. What Linux doesn't do so great job of is
supporting <em>new</em> hardware. Lets say you run out and build a new computer which
includes a Gigafoo motherboard, with a pq43 chipset, which only hit the market
recently and you want to install Ubuntu 8.04. <strong>You can't.</strong> (Specifics
changed because my source is unavailable for details at the moment.)</p>
<p>Unsurprisingly, the hardware is only supported recently, and the 8.04 kernel
is based on an older kernel. 8.10 will probably support this out of the box,
but not 8.04. Technically, 8.04 LTS might be slightly different. But
otherwise, there's a six month gap in Ubuntu's process where you might fall
into unsupported territory (see? process <em>does</em> matter!). You certainly can't
go to the vendor's website and download some drivers to make it work.</p>
<p>You could build a new kernel with the new drivers you need, but good luck
finding where they are and whether they're ready yet. Someone came
into #ubuntu-kernel today <a href="http://irclogs.ubuntu.com/2008/07/19/%23ubuntu-kernel.html">asking why</a> his upstream kernel didn't build and
the short answer is "it doesn't." The long answer is "you need to update your
local copy, and if it's still broke get in touch with upstream about it." So
even though your hardware might not be supported, you can still download
something and make it work -- if you build your own kernel, know C well enough
to debug FTBFS, and don't mind losing support from Ubuntu kernel devs. So in a
malevolent sense, the Ubuntu UK guy was right -- not only do you (in possession
of newer-than-six-month hardware) need to download and build drivers from the
net, <strong>most of you straight up can't.</strong> Somehow I don't think they quite meant
it that way.</p>
<p>This isn't specific to Ubuntu or Linux. Microsoft carries a lot of drivers,
but recognizes that it has to support hardware released after the OS ships.
Hence in Windows XP you have a floppy disk to provide the SATA drivers that XP
didn't have the foresight to include. There appears to be <a href="http://wiki.debian.org/DebianInstaller/FAQ#head-2522460048fd92cb8a53c3c0f176ca741033be57"> a way to do
this</a> but it's not documented, and <a href="http://www.netsplit.com/2006/09/02/having-left-debian/">historical</a> <a href="http://people.debian.org/~mjr/irc/dpl-debate-2007/dpl-discuss.html">sentiment</a> (fun
drinking game: drink every time someone mentions Ubuntu in a Debian political
debate) tells me they're not interested in helping out some poor guinea pig
install Ubuntu.</p>
<p>Going forward, technologies like DKMS and practices like <a href="https://lists.ubuntu.com/archives/ubuntu-devel/2008-July/025726.html">stable release
updates</a> can breathe some life into this problem, but it's every bit as social
as it is technical. <strong>I think people have a bit of a right to be irate when
they're promised something that isn't true</strong>, and I hope Ubuntu Marketing and
everyone else understands this and agrees.</p>Where is Ubuntu Title used?2008-07-17T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-17:where-is-ubuntu-title-used.html<p><a href="https://lists.ubuntu.com/archives/ubuntu-art/2008-July/006857.html">The following</a> came up on the ArtTeam mailing list today:</p>
<blockquote>
<p>I always assumed it (the font) to be a convenience in order to display the
brand consistently. I've never really thought of using it for anything other
than "ubuntu". Are there any examples where it is used for other purposes?</p>
</blockquote>
<p>I came up with a few answers, but I'm sure there's more out there. So I'm
asking my readers: <strong>Where have you spotted the Ubuntu titling font?</strong></p>
<p>As an experiment in font usability, I think I'll switch all the system fonts
on my machine to it :-)</p>Ubuntu is not perfect2008-07-16T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-16:ubuntu-is-not-perfect.html<p><a href="https://wiki.ubuntu.com/NickAli">Nick Ali</a>, author of the Ubuntu Weekly Newsletter (among other things),
writes that Ubuntu <a href="http://boredandblogging.com/2008/07/15/wanted-accurate-headlines/">doesn't get a fair shake</a> in a <a href="http://royal.pingdom.com/?p=319">Pingdom report</a> on
update site availability. Microsoft had a measured 100% response to pings,
Apple a 99.9%. On the subject of Ubuntu it says:</p>
<blockquote>
<p>Ubuntu on the other hand came off worse, with only a 98.64% availability for
its main repository. That is a total of 1 day, 5 hours and 45 minutes in the
three months of this survey.</p>
</blockquote>
<p>This brought out an ugly side of Ubuntu Marketing: the self-appointed
Defenders of Ubuntu, ready to strike down all perceived slights against it in
forums, blogs, and wherever else informed discussion rears its ugly head. Nick
Ali (not a Defender, and out of character today) is apparently upset that the
headline is misleading (would "Microsoft, Apple trounce Ubuntu" be
sufficient?). 'Ed Vim' comments to the Pingdom post itself, saying:</p>
<blockquote>
<p>I find this article to be a bit misleading, not only in the title but also
the content. The title is just another headline grabber for other sites to use
in the typical MS FUD campaign. And let’s face it, a plain vanilla Ubuntu or
OS X system with no updates will still be safer and more functional to use
than a clean Windows install. This ‘alleged’ update situation is a very
minimal issue, As for content, just read the previous comments.</p>
</blockquote>
<p>Ubuntu lives in a glass house. You can see nearly everything we do (even the
nasty comments on general computing sites). So ask yourself this -- who reports
network availability to Ubuntu and why hasn't anyone cited them yet? I have no
doubt that Microsoft and Apple have someone looking at this and making sure
it's up to par.</p>
<p>People are making out like it's unfair that Ubuntu is penalized for downtime
on its archive because of a new release. I agree in one sense, but not
completely; the time period <em>over</em> emphasizes the release week and would have
been more relevant to Ubuntu users, current and potential, if a six month or
year long period were measured instead of the one month coinciding with a new
release. But we're only talking about doubling the length of the study.
Assuming that the entire outage was contained to release, we'd still be lucky
to get <strong>two</strong> nines.</p>
<p>Still, I don't mind the comparison. Rather, we should feel honored to be
chosen as a representative of Linux as a whole, and try to live up to such
expectations. One thing not mentioned thus far is that a separate mirror exists
for security updates, presumably so that exactly this problem doesn't spill
over into slower security updates. It would be interesting to see how that archive
matches up with the general archive.ubuntu.com availability, but currently
both seem to point to the same data center.</p>
<p>And we do have serious downtime problems during this time period -- a semi-annual
anomaly that no amount of amortizing can undo. Mirror admins jokingly
compete to see who's got the biggest pipe during release weeks. The main
archive is known to be unusable during this time, and the website itself
typically reverts to a low overhead, high information view to compensate for
the rush. Some clever people decide to upgrade a few days before release to
avoid the rush (if only they'd do it before freeze, we'd have more testers!).
If we seriously intend to fix <a href="https://bugs.launchpad.net/ubuntu/+bug/1">Bug Number One</a> we need to hold ourselves to
the same level of accountability that we do Microsoft or Apple. The existence
of unofficial mirrors isn't an excuse, and in fact may be a <a href="http://www.cs.arizona.edu/people/justin/packagemanagersecurity/">security
hole.</a> We have talented people and source code, but without the willingness
to accept how we look in the mirror as truth, we cannot truly improve.</p>
<p>So the next time you encounter criticism of your favorite software, I ask you
to please take a moment to ask yourself two questions: <strong>Is it true?</strong> and
<strong>How can I make it better?</strong></p>Undefined reference to personality?2008-07-15T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-15:undefined-reference-to-personality.html<p>When I was a TA for an undergraduate course in operating systems, I'd often
field simple questions as this is sometimes students' first exposure to GNU
tools. One or two students invariably came to me with a problem exemplified by
the following:</p>
<blockquote>
<p><strong>hello.cc:</strong></p>
</blockquote>
<div class="highlight"><pre><span class="cp">#include "hello.h" </span>
<span class="k">using</span> <span class="k">namespace</span> <span class="n">std</span><span class="p">;</span>
<span class="kt">void</span> <span class="n">Hello</span><span class="o">::</span><span class="n">print</span><span class="p">()</span> <span class="p">{</span>
<span class="n">cout</span> <span class="o"><<</span> <span class="s">"Hello World!"</span> <span class="o"><<</span> <span class="n">endl</span><span class="p">;</span>
<span class="k">return</span><span class="p">;</span>
<span class="p">}</span>
</pre></div>
<blockquote>
<p><strong>hello.h:</strong></p>
</blockquote>
<div class="highlight"><pre><span class="cp">#include <iostream></span>
<span class="k">class</span> <span class="nc">Hello</span> <span class="p">{</span>
<span class="k">public</span><span class="o">:</span>
<span class="kt">void</span> <span class="n">print</span><span class="p">();</span>
<span class="p">};</span>
</pre></div>
<blockquote>
<p><strong>main.cc:</strong></p>
</blockquote>
<div class="highlight"><pre><span class="cp">#include "hello.h"</span>
<span class="kt">int</span> <span class="nf">main</span><span class="p">()</span> <span class="p">{</span>
<span class="n">Hello</span><span class="o">*</span> <span class="n">h</span> <span class="o">=</span> <span class="n">new</span> <span class="n">Hello</span><span class="p">();</span>
<span class="n">h</span><span class="o">-></span><span class="n">print</span><span class="p">();</span>
<span class="k">return</span> <span class="mi">0</span><span class="p">;</span>
<span class="p">}</span>
</pre></div>
<p>This code is fine, even by pedantic standards. However, new students often get
tripped up:</p>
<p><code>jldugger@jldugger:~/test $ gcc hello.cc main.cc</code></p>
<p><em>[several lines worth of linker errors omitted]</em></p>
<p><code>main.cc:(.text+0x176): undefined reference to 'std::ios_base::Init::~Init()'</code></p>
<p><code>/tmp/cc2APvWV.o: In function 'main':</code></p>
<p><code>main.cc:(.text+0x195): undefined reference to 'operator new(unsigned int)'</code></p>
<p><code>/tmp/cc2APvWV.o:(.eh_frame+0x11): undefined reference to '__gxx_personality_v0'</code></p>
<p><code>collect2: ld returned 1 exit status</code></p>
<p>I stumble on this occasionally too, but it's not hard to diagnose if you know
much about C and C++ (for some students this is not the case!). Problems like
<code>operator new</code> being undefined and missing iostream symbols suggest the linker
isn't linking the C++ standard library. Google is not always <a href="http://www.google.com/search?q=%2Ftmp%2Fcc2APvWV.o%3A(.eh_frame%2B0x11)%3A+undefined+reference+to+%60__gxx_personality_v0%27">helpful</a> in diagnosing these
errors. <a href="http://www.delorie.com/djgpp/doc/ug/basics/compiling.html">Some documentation</a> is equally misleading; I recall a time in the
past when this was true, and the change has been confusing at times.</p>
<p>The <a href="http://jldugger.livejournal.com/10965.html?thread=13269#t13269">root of the problem</a> is that gcc correctly detects the language type
but does not inform the linker stage of compilation to use the standard C++
library. There are a number of possible fixes. One option is to compile with
<code>gcc -c</code> and explicitly link at a later stage. This makes sense for large
projects; a Makefile can just compile the changed source and link in the rest
of the previous build for faster build times.</p>
<p>An option more suitable for novices is to use g++ explicitly. <strong>If you're
using .cc files to indicate C++ source code, replace </strong>gcc<strong> with </strong>g++<strong> in
your commands and Makefiles.</strong> Hopefully readers and Google searchers now
understand the problem. And beginning nachOS authors can get on with
writing deadlocks and race conditions while their compiler chain happily
obeys.</p>The Ubuntu Canvas2008-07-12T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-12:the-ubuntu-canvas.html<p>Dear Lazyweb,</p>
<p>Can anyone give me statistics on the following Ubuntu userbase properties:</p>
<ul>
<li>
<p>distribution of screen size, resolution, and DPI</p>
</li>
<li>
<p>distribution of virtual and physical color bit depth</p>
</li>
<li>
<p>subpixel rendering usage</p>
</li>
</ul>
<p>A statistic from the Hardware Database, popcon, or something else
automatically reported would be handy, but inferred statistics like Ubuntu.com
or Launchpad visitor counts and stand-ins like PC users in general are also valuable
for validation. I think it's important to know where current users stand in
relation to users in general.</p>
<p>This information seems valuable for discussing things like legibility of
default fonts in Intrepid and so on. So please share any information!</p>yay Garmin2008-07-10T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-10:yay-garmin.html<p>I just saw a link on IRC that I should <a href="http://www.computerworld.com.au/index.php/id;1756347190;fp;16;fpid;1">share.</a> Garmin has decided to
publish <a href="http://developer.garmin.com/linux/">source code</a> to their Linux based devices.</p>
<p>You can see Garmin's new offices from my house; they're quite tall. It's good
to know that friends of mine working for Garmin are slowly changing how things
work. A surprising number of K-SLUG members have been hired over the past few
years and I hope they're just getting started. For example, they should
probably be working with legal to figure out how to <a href="http://developer.garmin.com/forum/viewtopic.php?t=453">forward patches
upstream</a>.</p>
<p>It's never easy to transition a company from closed processes to encouraging
or even allowing outside contributions. Nobody wants to open the firewall and
expose corporate intranet assets to the evil Internet, the development process
is confusing when some projects are open and others aren't, etc. There's also
liability floating in the ether, mostly in the form of "I damaged my product!"
but maybe also in the "your X server patch (the one you reverted but is still
published via VCS) broke my monitor."</p>
<p>So it's no wonder many embedded vendors stop at publishing tarballs and avoid
public review at all costs. I won't let them off that easy, however. They were
kind enough to publish in the Debian style .orig/.diff pairing, so I've been
browsing some of the changes. Here's an inexplicable example:</p>
<p><code>diff -Nur xrandr-1.2.0.orig/xrandr.c xrandr-1.2.0/xrandr.c</code>:</p>
<div class="highlight"><pre><span class="gd">--- xrandr-1.2.0.orig/xrandr.c 2008-06-23 11:48:28.000000000 -0500</span>
<span class="gi">+++ xrandr-1.2.0/xrandr.c 2008-06-23 11:48:56.000000000 -0500</span>
<span class="gu">@@ -163,7 +163,7 @@</span>
 #if HAS_RANDR_1_2
 typedef enum _policy {
<span class="gd">-    clone, extend</span>
<span class="gi">+    policy_clone, extend</span>
 } policy_t;

 typedef enum _relation {
<span class="gu">@@ -1398,7 +1398,7 @@</span>
 #if HAS_RANDR_1_2
     output_t *output = NULL;
     char *crtc;
<span class="gd">-    policy_t policy = clone;</span>
<span class="gi">+    policy_t policy = policy_clone;</span>
     Bool setit_1_2 = False;
     Bool query_1_2 = False;
     Bool query_1 = False;
<span class="gu">@@ -1634,7 +1634,7 @@</span>
         continue;
     }
     if (!strcmp ("--clone", argv[i])) {
<span class="gd">-        policy = clone;</span>
<span class="gi">+        policy = policy_clone;</span>
         setit_1_2 = True;
         continue;
     }
</pre></div>
<p>Of course, these devices are a bit outside my price range, but the appearance
of source code bodes well for the upcoming nuvifone. It'll be
interesting to see how it compares to the openMoko design.</p>Cellwriter - My package of the whenever I feel like it2008-07-09T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-09:cellwriter-my-package-of-the-whenever-i-feel-like-it.html<p><a href="//www.pwnguin.net/hello-planet-ubuntu.html">As promised</a>, I've put together a short screencast demoing Michael Levin's
handwriting recognition tool <a href="http://risujin.org/cellwriter/">CellWriter</a>:</p>
<p>(<a href="http://jldugger.blip.tv/file/1054074/">Link for those who don't see a video above</a>)</p>
<p>The premise is that you write in cells so the tool can do individual letter
analysis. You <em>must</em> train it on your handwriting for now -- no default
profile is provided. This means substantial work for Latin-based language
speakers, and an insurmountable challenge for users of Asian scripts.</p>
<h3>More useful than you'd think</h3>
<p>You can use it with a regular mouse, but Cellwriter is intended for use with
stylus inputs, like tablets or touchscreens. The software consults a number of
recognition engines to determine the input to a given cell: a stroke
preprocessor (to accommodate different stroke orders and digitizer aliasing),
average distance of the input from sample input, average angles, and frequency
weighted word context.</p>
<p>Tablet input might seem like a slow and silly idea, but it's useful in a
number of places:</p>
<ul>
<li>
<p>Devices without a real keyboard. Nokia internet tablets come to mind.</p>
</li>
<li>
<p>Settings where a laptop screen between you and another person forms an
unwanted barrier. Classrooms are a prime example, where an LCD display can be a
barrier between you and the lecturer and the chalkboard, and a distraction to
those around you.</p>
</li>
<li>
<p>Text input to drawing applications. Sometimes you want to add some nicely
rendered text to a drawing, but don't want the hassle of rotating the screen
to get access to the keyboard.</p>
</li>
<li>
<p>Airplanes. Coach is a crowded space, and laptops sometimes fit poorly.
Tablet input can reclaim some of the space the airlines keep taking from you
in the name of efficiency. I've yet to test this theory, however.</p>
</li>
</ul>
<h3>Patches Welcome</h3>
<p>CellWriter is a young project, and there are still a number of areas where
Cellwriter needs more help:</p>
<ul>
<li>
<p><strong>Default trained database</strong>. For some languages the sheer number of
characters is overwhelming, and attempting to train for an individual is
farcical. Creating and shipping a default data set would improve immediate
usability for both Latin and non-Latin languages.</p>
</li>
<li>
<p><strong>Internationalization</strong>. Levin is <a href="http://forum.risujin.org/index.php?topic=4.0">reportedly working on gettext
support</a>, which is an important first step. From there community
translation services like <a href="https://translations.launchpad.net/">Rosetta</a> can aid in bringing the tool to a wider
audience.</p>
</li>
<li>
<p><strong>Librarification</strong>. Currently CellWriter is a GTK application; there is
an open invitation for <a href="http://forum.risujin.org/index.php?topic=3.0">volunteers</a> to split the code into a reusable
library, as well as a Qt UI for KDE. This could be very useful if, say, you
wanted GNOME Sudoku to support handwriting input, or a KDE-native UI.</p>
</li>
<li>
<p><strong>Bug fixing</strong>. A number of bugs still exist in CellWriter; if you have
experience with GTK and struts, <a href="https://bugs.launchpad.net/cellwriter/+bug/179107">this bug</a> could use your expertise.
Additionally, the video above highlights a number of minor UI flaws, some of
which I'm told are going to be fixed in the next release. Any bugs, patches or
feature requests can be reported at <a href="https://bugs.launchpad.net/cellwriter">the Launchpad Project</a> page, or on the
impromptu <a href="http://forum.risujin.org/">forum serving as bug tracker at Risujin</a>.</p>
</li>
</ul>
<h3>Get it today</h3>
<p>You can easily install CellWriter in Hardy or Intrepid:</p>
<p><img alt="add/remove Application dialog" src="http://farm4.static.flickr.com/3241/2650792995_d41fe88e62_o.png" /></p>
<p>Or you can install it in Debian with</p>
<blockquote>
<p>apt-get install cellwriter</p>
</blockquote>
<p>Thanks to Michael Levin for his continuing work and to <a href="http://www.urop.umn.edu/">UROP</a> for making
it all possible!</p>Dear UbuntuStudio2008-07-03T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-03:dear-ubuntustudio.html<p>I know there are tons of applications of the ideas in the <a href="//www.pwnguin.net/cooking-for-everybody.html">last post</a>, but I thought there
wasn't much in the way of GPL'd implementations. Today, I saw this awesome
screenshot, <a href="http://clam-project.org/screenshots.html">from</a> the audio-visual program <a href="http://clam-project.org/wiki/Frequenly_Asked_Questions#What_is_CLAM.3F">CLAM</a>:</p>
<p><img alt="wiring diagram of audio processing network" src="http://clam-project.org/wiki/images/b/b7/NetEditQt4-SpectralNetworkWithControlSenders.png" /></p>
<p>The above comes from their Qt based <a href="http://clam-project.org/wiki/Network_Editor_tutorial">Network Editor</a> application. This
is hardly unique among audio applications -- I recall seeing graphEdit years
ago on Windows. But I think perhaps there's some great underlying
functionality ready to break out for wider uses. <a href="http://www.ohloh.net/projects/clam">Ohloh</a> suggests that it's
a mature application with lots of developers, but few comments. CLAM is one of
Google's 2008 SoC projects, and some interesting new functionality is on the
way thanks to that. One project in particular sounds just as useful outside the
audio domain: <a href="http://clam-project.org/wiki/GSoC/Network_scalability_and_Blender_integration">Network Scalability</a>, by Natanael Olaiz (mentor, Pau Arumí).
Basically, it's the infrastructure to allow sub-networks, sub-sub-networks,
and so on. <a href="http://dadaisonline.blogspot.com/2008/06/too-much.html">He appears to be making good progress,</a> and I hope he
succeeds!</p>
<p>(This is why I always say everything you've ever thought of has already
been done on the Internet.)</p>Cooking for Everybody2008-07-02T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-07-02:cooking-for-everybody.html<p>One of the reasons I love open source is its entrepreneurial spirit. If you
don't like the way a company is handling their print driver, <a href="http://www.gnu.org/gnu/gnu-history.html">go off and write
an operating system that meets your definition of liberation</a>. If a project
manager is refusing your favorite UI patches, you can fork the UI while still
sharing the underlying <a href="http://developer.pidgin.im/wiki/WhatIsLibpurple">networking libraries</a>. But critics of this right-
to-fork suggest that most people can't do this; they just don't have the
skills. In many cases, they're right. I don't think I'll ever be capable of
contributing directly to emacs, for example. Or to gcc. Sure, the right to
fork has benefited me by allowing smart people to write egcs; I can donate
money to help defray the costs of hosting and conferences, or hire
someone to write the code for me. But these challenges are amplified in small
or trivial projects; there's a learned helplessness born of decades of
dividing people into "software users" and "software developers". I think overcoming this state of
whatever it is <em>they</em> want.</p>
<h3>Cooking for Engineers</h3>
<p>Let me now introduce an analogy. Programs are, at their heart, a sequence of
instructions, with the goal of transforming a set of inputs into a relevant
output. Everyone and their dog makes the following tired analogy, but I
promise to make a meal out of this: cooking is, at its heart, a sequence of
instructions, with the goal of transforming a set of ingredients into a tasty dish.
And for every <em>Joy of Cooking</em> soldier who claims cooking is an art, there's
at least one chemical engineer making a science of it. If you've seen enough
engineers at work, you know when it comes to calculations they generally have
one hammer at their disposal, to be used on everything in sight: Spreadsheets.
They take their years of education and experience with formulas and systems
and cram them into a spreadsheet. Sure, many of them were required to take a
"C for Engineers" course (from the same train of thought that will some day
bring us "Partial Differential Equations for Elementary Educators"), but if
the subject's not on the FE exam, who cares? So it comes as no surprise to me
to see this particular abuse: <a href="http://www.cookingforengineers.com/">Cooking For Engineers</a>. I'll use <a href="http://www.cookingforengineers.com/recipe/181/Cheesecake-Cupcakes">one
particularly tasty example</a>.</p>
<p>I've upset more than one engineer who tried to show this "new" site to me,
expecting shocks of horror or awe, but getting a slight ribbing in return
instead. For extra credit, take a look at <a href="http://www.cookingforengineers.com/recipe/42/Traditional-Chicken-Pot-Pie/trn">this recipe for chicken pot
pie</a>. Here's the deal: the leftmost column holds ingredients, and each column
after that is an instruction whose inputs are the adjacent rows of the column to
the left. Essentially, this is taking a beautiful data structure and cramming
it into a table. This looks like an engineering tactic through and through,
but one minor flaw stands out: many recipes use only Imperial units, instead
of metric. This reads like cooking for web designers, and the <a href="http://www.cookingforengineers.com/article/138/About-Cooking-For-Engineers"> bio</a>
confirms it:</p>
<blockquote>
<p>He has worked as a network engineer, software programmer, PDA hardware
designer, computer vision researcher, and, most recently, notebook hardware
application engineer. Michael holds a Bachelor of Science from the College of
Engineering at University of California, Berkeley in Electrical Engineering
and Computer Science.</p>
</blockquote>
<h3>Cooking for Computer Scientists</h3>
<p>This is a guy who ought to know better. He did ask <a href="http://www.orthogonalthought.com/blog/index.php/2007/07/tufte/">an expert</a> for advice,
but one of those two misunderstood the other (or possibly both did). I won't
spend a lot of time trying to rescue the format, other than
to say that overlapping colors <em>might</em> make groupings more obvious. Instead
I'm going to propose we go back to the basics -- parse trees.</p>
<p><a href="http://www.flickr.com/photos/jldugger/2619872079/" title="cupcake by jld5445, on Flickr"><img alt="cheese cupcake recipe" src="http://farm4.static.flickr.com/3161/2619872079_25dd4c5036_o.png" /></a></p>
<p>Please forgive my poor lineart (I have no idea what to put for "place" or
"top"), and focus on the information being conveyed here. The relationships
between ingredients and actions are now explicit, and ingredients are visually
distinct from actions; since each carries different information, this will be
handy. Equally important is that the input and output have the same type. In
the extra credit example, this is done implicitly in order to fit the recipe
into a table without getting ridiculous. You can probably spend all day
dreaming up just the perfect way to diagram that recipe, but I only spent two
hours with Inkscape on it. You could add cooking instruments, balance the
spacing and alignment, redesign the horizontal layout to represent time, you
could make it interactive and double click to expand or collapse parts of the
diagram; lots of random stuff with just the visual layout without changing the
underlying instructions. But I'll leave the advanced transmission of
information to humans to the Edward Tuftes of the world.</p>
<p>But it's also important to notice that we have standard algorithms for
manipulating that data structure; there's all sorts of things you can do. You
can store the measurements in metric and translate units based on a would-be
chef's preference. You can scale up or down a recipe to N people. You can even
translate it into a standard numbered step recipe. Or you could translate it
into instructions for two people (parallelism!). You could calculate the
number of cupcakes you can make if you only have so much of each ingredient.</p>
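<p>To make the point concrete, here's a minimal sketch of one of those manipulations: scaling a recipe stored as a tree. The node layout and field names are my own invention for illustration, not any standard recipe format.</p>

```python
# Hypothetical recipe-tree nodes: leaves are ingredients, interior
# nodes are actions whose inputs are other nodes.
def scale(node, factor):
    """Recursively multiply every ingredient quantity by `factor`."""
    if node["kind"] == "ingredient":
        return {**node, "qty": node["qty"] * factor}
    # An action node: scale each of its inputs in turn.
    return {**node, "inputs": [scale(child, factor) for child in node["inputs"]]}

batter = {
    "kind": "action", "name": "mix",
    "inputs": [
        {"kind": "ingredient", "name": "flour", "qty": 2.0, "unit": "cup"},
        {"kind": "ingredient", "name": "egg", "qty": 1.0, "unit": ""},
    ],
}

doubled = scale(batter, 2)  # the same recipe, for twice as many cupcakes
```

Unit translation works the same way: one recursive walk over the tree, touching only ingredient leaves.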
<p>At its core, that picture models the flow of ingredients through a set of
simple processes. I haven't shown you any recipes where this matters, but we
don't need a tree structure, just a <a href="http://en.wikipedia.org/wiki/Directed_acyclic_graph">DAG.</a> The package diagrams from
<a href="//www.pwnguin.net/the-popular-emergence-of-apt-git.html">earlier</a> are also DAGs. The important thing about DAGs is that paths can
cross, but never loop. Languages designed to handle the flow of data through a
network are called <a href="http://en.wikipedia.org/wiki/Dataflow_programming#Languages">Dataflow Languages</a>, though many also handle full
graph structures. Restricting the model of computation to a DAG then means
it's technically Turing incomplete; this has both good and bad consequences. I
should note that it's been proven that spreadsheets, that hammer in every
engineer's pocket, are also fundamentally DAGs. Here's a visualization of a
spreadsheet I used to split the bills with my roommates:</p>
<p><a href="http://www.flickr.com/photos/jldugger/2619872075/" title="bills by jld5445, on Flickr"><img alt="bills" src="http://farm4.static.flickr.com/3284/2619872075_fed5a76a48_o.png" /></a></p>
<p>So why am I picking on poor engineers, if it's all the same? Because
spreadsheets are terrible for understandability. Spreadsheets marry
presentation with calculation, and I think it's a fundamental mistake. They
form a DAG, but via well hidden textual pointers. If you accidentally form a
cycle, it's probably unwanted, but good luck tracking down where things went
wrong. Moreover, they're oriented towards a single set of data points. I made
a new spreadsheet page every month based on a template, because Calc doesn't
work the way I'd like.</p>
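<p>To show what "well hidden textual pointers" buys you once the graph is explicit, here's a sketch (my own, with made-up cell names) of the kind of cycle check a spreadsheet's hidden DAG makes trivial:</p>

```python
# Walk each cell's references depth-first; a "grey" cell seen again
# while still on the stack is a back edge, i.e. a reference cycle.
def find_cycle(refs):
    """refs: dict mapping a cell name to the cells its formula reads."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}

    def visit(cell):
        color[cell] = GREY
        for dep in refs.get(cell, ()):
            state = color.get(dep, WHITE)
            if state == GREY:       # still being visited: cycle found
                return True
            if state == WHITE and visit(dep):
                return True
        color[cell] = BLACK         # fully explored, provably cycle-free
        return False

    return any(color.get(c, WHITE) == WHITE and visit(c) for c in refs)

bills = {"total": ["rent", "power"], "rent": [], "power": []}
```

With the structure exposed, "good luck tracking down where things went wrong" becomes a twenty-line traversal.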
<p>There's tons of other applications. I'd been taking notes about this and
brainstorming for a while now, but now that <a href="http://bryceharrington.org/drupal/docs_vs_streams">Bryce is making overtures</a>
toward a new project doing precisely this, I think I'll share my notes with
everyone, in hopes that I get the program I want without having to write a
line of code ^_^. His writing seems to be more profound than he realizes
(emphasis mine):</p>
<blockquote>
<p>I could no longer do without one of these silly cron jobs than I could have
done <strong>without Microsoft Excel</strong> a decade ago. [...] The realization sank in
at that point, that our computer UI paradigm really stinks for what we
actually use our computers for.</p>
</blockquote>
<p>Indeed, the internet has changed things dramatically and our computing hasn't
stopped to notice. He goes on to say that he used none of the desktop metaphor
to write his blog post. Bryce, you're doing it wrong. As I'm writing this, I
have several Firefox tabs viewing various research and reference pages, gedit
viewing some text notes I'd been keeping on this subject, Inkscape for drawing
the graphics, and an editor open to write this post! We do occasionally
originate documents, so don't throw the baby out with the bathwater. As reddit
user damienkats <a href="http://www.reddit.com/info/6owbv/comments/c04h2wb">notes</a>, "Saying documents vs. streams is like saying cars
vs. highway."</p>
<h3>Cooking for Everybody</h3>
<p>It is possible to expand the spreadsheet model into a stream processing model.
Each node repeatedly consumes the necessary inputs, runs an algorithm and
passes its output to the next process(es) in line. When the
inputs aren't available, the node blocks until they are. If one of the inputs
"runs out", the node halts. As an example, a processing node could read in RSS
XML one item node at a time, and split the output into items with enclosures
and those without. A more complicated example could be a node that takes RSS
XML items, and if the link property is a .torrent URL, moves that into an
enclosure field and sets the other enclosure properties accordingly.</p>
<p>This form of programming has some tremendously productive properties,
primarily to the benefit of new programmers. Firstly, it's easy to trace a change in one
part of the program to where it affects the rest. You don't need to maintain a
ctags database and query it to determine what semantics other functions that
call the function you just changed rely on. Secondly, as Sean Parent of Adobe
Systems <a href="http://www.youtube.com/watch?v=4moyKUHApq4">noted (at 17:45 in through 22:00)</a>, the acyclic property
eliminates the need for inductive reasoning about your program. Each node is
logically dependent only on those that came before it. Thirdly, it's simple to
see that as long as the input isn't infinite, these systems will halt.
Additionally, the looping structure makes common off-by-one loop errors
basically impossible.</p>
<p>Many of the fundamental software design techniques still work, even if the
training wheels are on. Firstly, I don't think you'll need UML, but you could
still model your data, and your programs! You could still do "write, compile,
test" cycles. Peer review might be easier, if changes can be succinctly
described. You could still have revision histories and merge strategies. You
could still do test plans, construct test cases and file bug reports. You
could do formal correctness proofs. You could load up a debugger and
investigate the output and input of any given node. You could still time the
whole program for performance, and even analyze individual nodes for
bottlenecks. You could modularize, reuse and rewrite.</p>
<p>Such a design is also sufficiently advanced to cater to experts as
well. It's very difficult to design a UNIX pipeline that uses more than one
stdin or stdout; it defies the very meaning of "std." Taking another step
towards UNIX is plausible; there's no reason we can't have processing nodes
that buffer input until EOF -- in fact, sort would likely require that kind of
bottlenecking. Equally important is what I mentioned earlier: dataflow
languages not only handle parallelism, many were built for it! One naive way
is to run each node as a process and use common IPC to pass data between them.
Pipes do this, but you could also use shared memory or threads, as long as the
system structure is adequately defined.</p>
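<p>The sort case is worth a sketch of its own (again my illustration, not an existing tool): unlike a streaming node, it can't emit anything until its input is exhausted, yet written as a generator it still plugs into the same pipelines.</p>

```python
def sort_node(items, key=None):
    """A buffer-until-EOF node: consume everything, then emit sorted."""
    buffered = list(items)   # drain the entire upstream: block to EOF
    buffered.sort(key=key)   # only now can output begin
    yield from buffered

sorted_out = list(sort_node(iter([3, 1, 2])))
```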
<p>And while the system itself can't form loops, experts should be able to write
new algorithms that do operate as a processing node and incorporate them into
designs or share them with others. They can also provide some
information about the nodes themselves for use during design -- Big O runtime
analysis, multithread support, blocks until EOF, input types and ranges,
descriptions, revision control URL, etc.</p>
<p>All of this adds up to a healthy environment where maybe we can undo some of
the damage the past twenty years have done to the relationship between people
and their computers.</p>The popular emergence of apt-git?2008-06-27T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-06-27:the-popular-emergence-of-apt-git.html<p>It's no secret that Canonical is <a href="http://www.markshuttleworth.com/archives/125">a large proponent</a> of <a href="http://bazaar-vcs.org/">Bazaar</a> (bzr)
and would like to use Ubuntu as a guinea pig for large scale deployments. At
UDS Prague, James Westby gave <a href="http://www.youtube.com/watch?v=qoFva4qzmGo">an interview</a> about using "distributed
version control systems" <a href="http://blueprints.launchpad.net/ubuntu/+spec/distributed-development-importer">(DVCS) for coordinating development</a>. The
interviewer is a bit confused about how the Ubuntu flavors interact, so I
think an explanation of DVCS and Ubuntu development is in order.</p>
<p>When talking about open source software in general, it's important to keep in
mind the concept of versions. Each patch applied to a project can be thought
of as a new version of the software. Generally projects release a new version
every few months containing a bundle of patches. Here's an example based on
<a href="http://xournal.sourceforge.net/">Xournal</a>:</p>
<p><a href="http://www.flickr.com/photos/jldugger/2609705486/" title="xournal_flow by jld5445, on Flickr"><img alt="xournal_flow" src="http://farm4.static.flickr.com/3293/2609705486_d00fac7dc7_o.png" /></a></p>
<p>Xournal turns out to be a pretty good scenario so I'll keep returning to it.
Time flows from left to right (but not necessarily to scale). Note the time
span between 0.4.2 and 0.4.2.1. There was a pretty critical crashing bug that
was caught, patched and released there. However, few users build
Xournal directly from source; most go through their distribution instead. For
example, Debian provides packages for Xournal. A picture is enlightening.
NOTE: This is not meant to encompass the entirety of Debian's release process:</p>
<p><a href="http://www.flickr.com/photos/jldugger/2609313067/" title="xournal-debian_flow by jld5445, on Flickr"><img alt="xournal-debian_flow" src="http://farm4.static.flickr.com/3257/2609313067_6e04964d81_o.png" /></a></p>
<p>Several things are going on here. I've organized the picture into three rows:
upstream in blue and Debian unstable and Debian testing in red. Debian
packagers take upstream releases, add control and build rules, and effectively
patch in a debian/ directory. New versions start out in unstable, and after a
duration in unstable without serious bugs, that version replaces the old
version in testing. The purpose of this is to catch the sort of bugs we saw in
Xournal upstream before they cause widespread grief.</p>
<p>Another thing to look at is the June 3rd version in Debian unstable. It has
two versions pointing at it. This is because it inherits both the changes from
upstream, and the changes that were made in packaging the program for Debian.
It is possible that some changes in one source conflict with another. This
might happen because Debian patched a bug fix one way, while upstream applied
a different fix, for example. In this case the solution is to drop the patch,
but sometimes the fix is more complicated. This is what I call the <strong>package
merge problem</strong> in distributions. This is essentially why Debian maintainers
hate source code with a debian/ directory provided, since it can cause a
conflict with their own packaging efforts.</p>
<p>If the above diagram was confusing, the one below may make you physically
dizzy (and it doesn't even contain SRU or Backports!):</p>
<p><a href="http://www.flickr.com/photos/jldugger/2609734912/" title="xournal-ubuntu_flow by jld5445, on Flickr"><img alt="xournal-ubuntu_flow" src="http://farm4.static.flickr.com/3042/2609734912_bb03e4703e_b.jpg" /></a></p>
<p>That's how Ubuntu works. Blue is still upstream, red is still Debian unstable,
and Ubuntu now is gold. Feisty, Gutsy, Hardy and Intrepid are all Ubuntu
stable releases; the *buntu flavors all put their packages in these same
repositories. They largely all share the same kernel, the same X11 server etc;
they just install different packages from those repos by default. This diagram
doesn't depict a Hardy version of Xournal because it was a <strong>sync</strong> from
Ubuntu itself--Ubuntu just copied the source from Gutsy into Hardy and rebuilt
the whole shebang without really looking at it. Many packages in the
"universe" categories do the same thing, except from Debian; there is a script
to do this automatically as long as Ubuntu has made no changes to the package
in the last release.</p>
<p>This diagram traces the Ubuntu Xournal package to its Debian origins, which
are a long ways back. Here, notice that only Feisty derives from Debian.
Effectively, Ubuntu has forked this package. This is where distributed version
control might make cherry picking patches simpler for Debian, Ubuntu and even
Xournal, whose author has reasonably lamented the long lead times. DVCS like
bzr or git allow everyone to easily share individual commits within a branch
with one another. Debian and Ubuntu source code often seems hidden behind
walls, smoke and mirrors; DVCS also provides a commonly understood method of
accessing it to upstream, who may want to investigate a heavily reported bug
that only Ubuntu users seem to report.</p>
<p>In addition, DVCS also makes it easier for new contributors to participate.
Current practice for people without upload access is to grab the source
package with apt-get, apply a fix and then generate a "debdiff", attach it to
a bug report in LP, and subscribe the appropriate teams. Then the maintainers
view the debdiff, apply it, and upload to the build system for deployment.
DVCS can make all that happen faster, potentially allowing groups like MOTU to
work their magic on more packages and bugs.</p>
<h3>So distributed version control is important; Is bzr the right pick?</h3>
<p>We might want to know what's popular right now, since DVCS exhibits network
effects. Romain Francoise has a <a href="http://blog.orebokech.com/2008/06/updated-debian-vcs-statistics.html">very timely graph of Debian package version
control system popularity</a>:</p>
<p><img alt="Hosted by Debian on Romain's account -- don't abuse too heavily ;)" src="http://people.debian.org/~rfrancoise/vcs-stats-080624.png" /></p>
<p>SVN is the runaway winner here, but for both Ubuntu and Debian's sake, I don't
expect this to last. SVN doesn't do enough to help developers with the
merge problem. Git is quickly on the move, while bzr is standing still when it
comes to adoption. The elephant in the room in this graphic is packages with
no version control whatsoever. I imagine that widespread DVCS in Ubuntu is
expected to lead to more adoption within Debian. The graph also doesn't
distinguish between packages that keep the whole source code in revision
control versus those that only keep the debian/ dir.</p>
<p>That report comes on the coattails of discussion at FUDCon on Fedora VCS
selection and <a href="http://0pointer.de/blog/projects/on-version-control-systems.html">a rather sad commentary</a> from a Fedora developer (Lennart
Poettering):</p>
<blockquote>
<p>Yes, with CVS, SVN and GIT I think I have learned enough VC systems for now.
My hunger for learning further ones is exactly zero. Let me just code, and
don't make it hard for me by asking me to learn your favorite one, please.</p>
</blockquote>
<p>Fedora <a href="http://jkeating.livejournal.com/61987.html">making a unilateral package VCS decision</a> could have consequences
downstream. A short <a href="http://aruiz.typepad.com/siliconisland/2008/06/re-on-version-c.html">conversation </a> can be seen on Planet GNOME about the
possibility of moving to Git for code hosting, causing some new edits to
<a href="http://live.gnome.org/DistributedSCM">DistributedSCM</a> in the meantime. The debate over VCS is <a href="http://keithp.com/blogs/Repository_Formats_Matter/">not new</a> but
Keith offered <a href="http://keithp.com/blogs/Tyrannical_SCM_selection/">an insightful gem</a>:</p>
<blockquote>
<p>[M]ost of the group will just not bother, and will end up choosing
essentially randomly, with a slight bias to whatever is most familiar</p>
</blockquote>
<p>This fits well with the Debian VCS-* graphic -- most packages choose nothing,
or SVN. Well, if formats matter to enlightened despots, how does bzr stack up
against DVCS champ git? To figure this out, I've asked my friend <a href="http://www.aeruder.net/">Andy
Rueder</a>, who spent a good deal of time digging into git and documenting
commands and options:</p>
<blockquote>
<p>< aeruder> i've always been a big fan of git due to the simplicity of the
repository and the fact that the kernel guys are using it</p>
<p>< aeruder> git is very simple as far as repository format goes... every
file has a sha1 from its content, a tree (directory) contains sha1's of files
and sha1's of other trees, and a commit simply contains a message, and the
sha1 of 0 or more parents and the sha1 of the tree associated with it</p>
</blockquote>
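<p>That description is easy to verify for yourself: a git blob's id really is the SHA-1 of a short header plus the file's content, matching what <code>git hash-object</code> prints for the same bytes. A few lines of Python are enough to reproduce it:</p>

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute a git blob id: SHA-1 over 'blob <len>\\0' + content."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The id git itself assigns to a file containing just "hello\n".
blob_id = git_blob_sha1(b"hello\n")
```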
<p>What I think this means is that there isn't a whole lot to change
about the storage format, and it shows. Daniel Stone, former Ubuntu X
maintainer and deadly code ninja, has <a href="http://ajaxxx.livejournal.com/58885.html?thread=162309#t162309">said of bzr</a>:</p>
<blockquote>
<p>bzr fails because I try to clone stuff, and it says, 'oh, you have 0.13, we
changed the revision yet again and you need 0.18, but good luck finding
anything newer than 0.15'.</p>
</blockquote>
<p>This is probably the greatest point I've seen against bzr so far. I haven't
personally used either bzr or git seriously, but my impression from those who
have is that bzr still struggles with version compatibility. It seems that,
if Bazaar is to be king, "stable" should shortly be added to the front page
list of bullet points, so distributions can get tools into the hands of
developer-users that aren't hopelessly outdated.</p>Hello Planet Ubuntu!2008-06-26T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-06-26:hello-planet-ubuntu.html<p>Allow me to introduce myself. My name is <a href="http://wiki.ubuntu.com/JustinDugger">Justin Dugger</a> and I've been
participating in Ubuntu for a while now. First as a tester and now as bug
contact and (some) development. Most of what I do is request information from
bug reporters and keep friendly relations with upstream. There's a thread on
-devel-discuss about making Ubuntu's contributions more visible, and at the
moment I think that's exactly backwards. We should be giving out care packages
to the heroes upstream that make the bulk of our releases!</p>
<p>Some recent upstream projects I'm currently interested in and would like to
thank for their efforts thus far:</p>
<ul>
<li>
<p><strong>Nouveau</strong>: For all the (what I shall generously call) activism about
closed binary modules I've been reading recently, there's been little fanfare
about this project. At the very least, I've seen dramatic improvement over the
"nv" driver here, though I imagine others may have different experiences.</p>
</li>
<li>
<p><a href="http://risujin.org/cellwriter/"><strong>CellWriter</strong></a>: There are a few bugs that need to be sorted out, but
ultimately, this is a fantastic step forward in Linux handwriting recognition.
I should make a screencast demo of this at some point.</p>
</li>
<li>
<p><strong>Compiz</strong>: Colorfilter is a great but obscure plugin that started life as
a Google SoC project mentored by Ubuntu and is now in Ubuntu and Compiz
upstream. At its core is a set of Cg filters for color transforms with a
number of uses. It has filters for high contrast, "inverting" colors, and
filters that can remove the colors that the colorblind can't perceive, so you
can test your software or presentation.</p>
</li>
<li>
<p><strong>Xournal</strong>: Another tablet oriented program, it lets you keep a high
resolution written journal to take notes, make diagrams, whatever. I've found
it useful for quick diagramming in technical conversations.</p>
</li>
<li>
<p><a href="http://wiki.debian.org/Games/Development"><strong>Debian Games Team</strong></a>: This team has been instrumental in keeping
Debian and Ubuntu on top of new games and keeping games on top of new build
environments. If you're an Ubuntu user looking to contribute upstream, this
seems like a good place to start. My particular favorite is <a href="http://packages.ubuntu.com/intrepid/gunroar">Gunroar</a>, and
it wouldn't have happened without serious effort from them. Thanks!</p>
</li>
</ul>
<p>If you benefit from any of these (or others), I encourage you to take a moment
to offer gratitude. Enthusiasm and praise are the grease on the wheels of Free
Software.</p>Aren't we forgetting someone important here?2008-06-23T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-06-23:arent-we-forgetting-someone-important-here.html<p>Greg K-H <a href="http://www.linuxfoundation.org/en/Kernel_Driver_Statement">leads another charge against the windmills of closed kernel
modules</a>. Just who are we supposed to expect to be convinced by a statement
that they're very bad?</p>apt-sync2008-06-21T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-06-21:apt-sync.html<p>As a followup to <a href="//www.pwnguin.net/a-rebuttal-to-apt-rsync.html">apt-rsync</a> and <a href="//www.pwnguin.net/revisiting-apt-rsync.html">analysis</a>, I should point out that the
people on IRC who suggested it was already done <a href="https://edge.launchpad.net/apt-sync">were clearly right</a>.
What's not clear, however, is how this might enter Ubuntu or Debian accepted
practice. There's been a lot of debate since apt-sync's work over other ways
of compressing packages. LZMA, <a href="//www.pwnguin.net/a-comparison-of-compression-schemes.html">already seen once</a> on this blog, is a
frequent contender. <a href="https://blueprints.edge.launchpad.net/ubuntu/+spec/dpkg-lzma">Hardy</a> allows LZMA packaging, and I've seen several
mailing list threads about it. Unfortunately LZMA and rsync are mortal
enemies, shall we say. I'm not sure, but I think that LZMA has won because
zsync makes the CD image size problem worse, not better.</p>
<p>An alternative compromise might be to investigate <a href="http://www.squashfs-lzma.org/">LZMA & squashfs.</a>
Currently this represents one of the interesting problems with the Linux
kernel. Almost everyone uses squashfs, and while Greg KH can go around
declaring that everyone should want their code in the tree and how there's
throngs of developers eager to make it happen, the LKML was fairly clear about
rejecting inclusion in 2005. There's been some <strong>very</strong> recent<a href="http://www.nabble.com/-patch-0-6--First-take-at-squashfs-mainlining-support-patches-to17825431.html"> efforts to
bring squashfs into mainline</a>, so there is hope. The trouble for squashfs-
lzma is that the squashfs maintainer refuses the patches because he's afraid
to risk being turned down by LKML again. So squashfs-lzma is distributed as a
patch to a patch, and most distros are wary, if they're even thinking about it
at all.</p>
<p>But assuming we could use LZMA on squashfs, that would leave Ubuntu more
free to ship packages built with a zsyncable package archive, but would still
leave a conflict between network bandwidth and disk space.</p>SVG vs PNG2008-06-20T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-06-20:svg-vs-png.html<p>I've been testing some <a href="https://www.launchpad.net/netbook-remix-launcher" title="Netbook Remix Launcher">software</a> recently that includes liberal use of
large program icons in Ubuntu. <a href="http://alioth.debian.org/docman/view.php/30046/2/menu-one-file.html#s3.7" title="Debian Menu icon policy">Historically</a>, Debian packages have used
<a href="http://en.wikipedia.org/wiki/X_PixMap" title="Wikipedia: XPM">XPM</a> for icons, because they diff easily and are compressed already via
packaging. More recently, <a href="http://en.wikipedia.org/wiki/Vector_graphics" title="Wikipedia: Vector Graphics">vector graphics</a> have become popular in many
places in the desktop and the web with today's bigger and higher DPI computer
displays, because vector graphics are resolution independent. This makes for
neat stuff, like quick launchers and desktop icons that scale cleanly with
size:</p>
<p>SVGs are nice, but there are still a few reasons you might use PNG.
They're far more plentiful -- even I'm guilty of converting a Windows .ico
into a PNG for an icon. And they take less disk space, since SVG is a verbose
XML format, while PNG is compressed. But when size really counts, it is also
possible to <a href="http://www.adobe.com/svg/illustrator/compressedsvg.html" title="Compressed SVG files are typically 50 to 80 percent smaller...">compress SVG with gzip</a>.</p>
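<p>The gzip point is easy to demonstrate (this is the .svgz convention): verbose, repetitive SVG markup shrinks dramatically once compressed. The sample markup below is made up for the demonstration.</p>

```python
import gzip

# A deliberately repetitive icon-sized SVG, the kind of markup gzip
# loves: twenty identical <rect> elements.
svg = ('<?xml version="1.0"?>\n'
       '<svg xmlns="http://www.w3.org/2000/svg" width="48" height="48">\n'
       + '  <rect x="1" y="1" width="46" height="46" fill="#3465a4"/>\n' * 20
       + '</svg>\n').encode("utf-8")

svgz = gzip.compress(svg)           # what an .svgz file would contain
ratio = len(svgz) / len(svg)        # well under half the original size
```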
<p>But times are changing. Linux, and Ubuntu in particular, is moving into
several different devices with different resolutions, aspect ratios and
display sizes. Lots of technology is needed to cope with the diversity, as
<a href="http://podcast.ubuntu-uk.org/2008/05/27/s01e06-flaming-star/" title="I would like to see it work really well on a small screen">Shuttleworth noted</a> (33:30 timestamp). SVG plays a role in coping
with resolution and size. This screencap demonstrates how stark the difference is
on my laptop:</p>
<p><img alt="Blown up comparison between PNG and SVG" src="http://farm4.static.flickr.com/3154/2591515491_afe99f8277.jpg?v=0" /></p>
<p>So what's holding widespread SVG on Ubuntu back? Disk space is a minor concern
on the most constrained devices that Ubuntu could run on, but I think that's a
wash when you start storing 5 sizes of icons versus one scalable graphic.
Render time is an interesting question that depends on a host of factors. If
we assume standard icon bitmap sizes, I would imagine it's a win for PNG. If
we remove that restriction, you have to throw in the cost of scaling the PNG.
<a href="http://www.bryceharrington.org/drupal/blog/1" title="Bryce's blog">Bryce Harrington,</a> Ubuntu X developer and Inkscape founder, offered the
following insight:</p>
<blockquote>
<p>< bryce> svg supports advanced functionality like filters, gradients, etc.
which can be processor intensive for some chipsets</p>
<p>< bryce> however a really trivial one - just some simple shapes of
solid colors - could actually render faster <br> < bryce> filesystem I/O tends
to be a major performance killer, so any optimization which allows you to
avoid file I/O at the expense of processor or memory, tends to work out pretty
well</p>
</blockquote>
<p>Bryce went on to suggest that tricks to rescale both PNG and SVG by raster
lines were occasionally used in performance sensitive code. Gzip compressed
SVG might help under specific situations, since it reduces I/O and you can
pipeline some of it. But we're talking peanuts for icons, since most will fit
within a single small contiguous extent.</p>
<p>The most interesting argument I saw was from Johan Brannlund:</p>
<blockquote>
<p>< johanbr> Bitmap icons tailored for small sizes look better than SVG.</p>
</blockquote>
<p>There's no easy way to dispute that. I don't think you'll find superb pixel
art coming out of SVG renders. I'm not sure it matters, but it helps. One
thing that can be done is see how many packages are actually customizing their
tiny PNG icons. Firefox <a href="http://iconpacks.mozdev.org" title="Firefox Icon packs">seems to</a>. Deluge might, I can't really tell the
difference between their PNG and the SVG. Pidgin is actually backwards here --
as far as I can tell, they generate SVG from PNG, yielding the worst of all
worlds. Carrier (aka "funpidgin" aka "that silly fork of pidgin") does the
same. So clearly all three practices are in use; it would be interesting to
see how common each practice is (and document new ones), but I'm not sure
there's a good way to do it. So if you see me filing bugs and tags in the next
few weeks, this is what's up.</p>A small victory2008-05-28T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-05-28:a-small-victory.html<p>During a search for some open source software, I came across a curiously
licensed project. A brief snippet:</p>
<blockquote>
<p>You may copy, distribute and use the Software according to the Gnu Public
License (GPL) as long as you agree to and comply with any and all conditions
in this license.</p>
<p>Limitations</p>
<ol>
<li>You may with this license only use the Software for non- commercial
purposes. [...]</li>
</ol>
</blockquote>
<p>A bit saddened by the missing freedom 0, I wrote the author merely asking
for clarification, and in response he's declared intent to relicense under
GPLv3. This is only a small victory, as it doesn't currently build for Linux,
but it has in the past and I think getting it to build again is a proper
reward for this positive step.</p>Revisiting apt-rsync2008-05-26T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-05-26:revisiting-apt-rsync.html<p>I found a <a href="http://samba.anu.edu.au/rsync/rsync-and-debian/rsync-and-debian.html">fairly thorough review</a> of apt-rsync and where it stands as of
2002. It's six years old but still relevant. An interesting point it makes is
that apt-get update could be handled with rsync cheaply.</p>
<p>From now on, I think I'll refer future advocates to that post and ask them
to come up with hard numbers for costs incurred. It's important to change the
status quo, not for change's sake alone, but for the improvements it brings.</p>Eclipse disaster2008-05-23T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-05-23:eclipse-disaster.html<p>As a student and developer fairly experienced with Linux, I'm often asked what
IDE or debugger to install. I'd love to answer Eclipse, as on paper it's very
nearly competitive with Visual Studio. If you're using Java, by all means,
it's perfect. <strong>If you're using C/C++, don't bother with Eclipse.</strong></p>
<h3>Why?</h3>
<p>The JDT has an interactive tutorial on using eclipse to start a project. CDT
does not. If you don't know how to use Eclipse, this is unfortunate. Not a
deal breaker, as there is documentation and perhaps videos somewhere. But
don't think that the awesomeness of the JDT translates to an awesome CDT.
Moreover, your ability to interact with the object model begins and ends at
starting projects and adding / removing objects. The rest of the Views can
parse and integrate your changes to classes, but you can't use them to stub
the code.</p>
<p>But the big one is <a href="http://publib.boulder.ibm.com/infocenter/rtnlhelp/v6r0m0/index.jsp?topic=/org.eclipse.cdt.doc.user/concepts/cdt_c_content_assist.htm">Content Assist.</a> Content Assist is a simple feature
intended to duplicate Visual Studio's <a href="http://en.wikipedia.org/wiki/IntelliSense">IntelliSense</a>. Sounds great,
terrible implementation. Even the CDT project lead <a href="http://cdtdoug.blogspot.com/">Doug Schaefer</a> is
<a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=205964#c1">frustrated</a>:</p>
<blockquote>
<p>It's bugs like this that make me wonder why CDT committers are working on
new parsers, when there are still a number of issues like this that need to be
addressed in the existing code base that none of the committers have signed up
to address.</p>
</blockquote>
<p>A fellow CDT committer <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=205964#c2">rebukes him</a>:</p>
<blockquote>
<p>We have requirements for multi-language support from the people that are
paying us to work on CDT, so that's where we're focusing ...Content assist is
important for us, but working on it has to be balanced with our other
priorities.</p>
</blockquote>
<p>It's clear that the CDT as an open source project is in a strange position of
having people paid to not make it better. Or rather, balancing priorities
mainly means completely ignoring content assist. At current count, there are
<a href="https://bugs.eclipse.org/bugs/buglist.cgi?query_format=specific&order=relevance+desc&bug_status=__open__&product=CDT&content=content+assist">144 open bugs about CDT Content Assist</a>. Now, some of those bugs in such a
basic search may not actually be correct, and at least one of those bugs is a
feature request. But many of the bugs I've looked at appear valid, yet
stagnate on bugzilla. Plus that feature request is for a cancel-able content
assist, because it takes too damn long, so you can basically call it a
"content assist sucks" bug report. The lack of any significant triage confirms
that the "balanced" approach is complete denial and/or ignorance.</p>
<p>So why would you use Eclipse? If you look at their sponsors, it's pretty easy.
Use Eclipse when putting up with the embedded systems toolchain your
vendor offers. <a href="http://dash.eclipse.org/dash/commits/web-app/summary.cgi?company=y&year=x&top=tools&project=tools.cdt">QNX, WindRiver, MontaVista and a ton of others</a> now base
their tools on Eclipse, and you can be damn sure they don't target Java on
their microcontrollers. It's a smart move if you're familiar with embedded
systems development from pre-2001. As terrible as I make Eclipse out to be,
it's quite a move forward in that area, where nobody sought to distinguish
themselves on the IDE front (embedded system designers are masochists almost
by definition, after all). I suspect developers who use Eclipse CDT turn off
content assist and get on with their lives in an otherwise fantastic IDE. I'm
tempted to file a bug that Content Assist should be off by default.</p>
<p>So if I can't recommend Eclipse, what should I recommend?</p>A rebuttal to apt-rsync2008-05-09T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-05-09:a-rebuttal-to-apt-rsync.html<p>Seen on #ubuntu-devel today:</p>
<blockquote>
<p>[hi5] Re:rysnc method driver for apt. The context of this can be anything
from people over sat connections, 3rd world countries (like iraq where there
is no fiber, or copper backbones/lines since the ground is so hard it's almost
impossible to run it so everything is expensive sat. based), or saving
bandwidth / speeding up updates 1000x for all users / saving money to run
repos. Are there no thoughts?</p>
</blockquote>
<p>A mess of counter opinion was offered, and some feelings were hurt. I see
this pretty often -- someone has an idea to help some people (usually
themselves included) and when people shoot the idea down and offer
alternatives they get outright angry. Sometimes people invest too much emotion
in the solution instead of the problem, I guess. Similar things happened with
Automatix, even though Automatix really did do stupid things that really did
cause some upgrades to break, and Ubuntu worked very hard to come up with (I
think) suitable alternatives. </p>
<blockquote>
<p>[ andrew___] hi5: I think you need to be reading a book on
statistics rather than programming, actually. If you can show that debiff will
reduce bandwidth by X% and increase CPU usage by Y%, we can have a proper
debate about whether it's worth it. Until then, we're all just hand-
waving.</p>
</blockquote>
<p>With that in mind, I'd like to suggest a quick demonstration of why rsync
fails at its proposed goal of saving bandwidth. <a
href="http://en.wikipedia.org/wiki/Rdiff">Rdiff</a> is basically a binary patching
system based on rsync. It should be a good way to measure how much data needs
to be transferred. In order to give rsync a fair shot, I'm going to pick two
versions of a package and rdiff them. I'll pick them to be close in version,
one version bump, if you will. Browsing my apt cache the first satisfactory
package to hit my eye was xulrunner (the package causing many people angst
with firefox randomly terrible disk performance):</p>
<blockquote>
<p>jldugger@jldugger:~/rdiff $ ls -lh</p>
<p>-rw-r--r-- 1 jldugger jldugger 7.4M 2008-05-08 22:44 xulrunner-1.9_1.9~b5+nobinonly-0ubuntu3_i386.deb<br>-rw-r--r-- 1 jldugger jldugger 7.4M 2008-05-08 22:44 xulrunner-1.9_1.9~b5+nobinonly-0ubuntu4~8.04.0mt1_i386.deb</p>
</blockquote>
<p>So it's a fairly big package, plenty of room for rdiff to identify any
easy bandwidth savings. Any network overhead should be recoverable on a
package this size. First, make a signature:</p>
<blockquote>
<p>jldugger@jldugger:~/rdiff $ rdiff signature xulrunner-1.9_1.9~b5+nobinonly-0ubuntu3_i386.deb xul.sig</p>
<p>jldugger@jldugger:~/rdiff $ ls -lh xul.sig</p>
<p>-rw-r--r-- 1 jldugger jldugger 44K 2008-05-08 22:46 xul.sig</p>
</blockquote>
<p>Then make the actual diff:</p>
<blockquote>
<p>jldugger@jldugger:~/rdiff $ rdiff delta xul.sig xulrunner-1.9_1.9~b5
+nobinonly-0ubuntu4~8.04.0mt1_i386.deb xul.diff</p>
<p>jldugger@jldugger:~/rdiff $ ls -lh xul.diff</p>
<p>-rw-r--r-- 1 jldugger jldugger 7.4M 2008-05-08 22:47 xul.diff</p>
</blockquote>
<p>And there you have it: <strong>rsync saves no bandwidth at all for .debs.</strong>
Moreover, these are the two CLOSEST versions. It's not hard to imagine why
this is: <a href="http://en.wikipedia.org/wiki/Deb_(file_format)">the .deb file format</a> is an ar archive of two compressed
tarballs. The tarballs are going to mostly look like random data, and change
size. Just in case minor changes didn't totally screw over rsync by making
seemingly random changes all over the tarball, gluing the two tarballs
together is going to shift most of the archive around and raise hell as well.</p>
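This failure mode is easy to reproduce with a toy block matcher. The sketch below is an illustration only: it uses fixed 512-byte blocks and MD5 in place of rsync's rolling checksum, and a synthetic text "package" instead of a real .deb.

```python
import hashlib
import zlib

def reusable_bytes(old: bytes, new: bytes, block: int = 512) -> int:
    """Count bytes of `new` that match some whole block of `old`,
    sliding one byte at a time like rsync's rolling-checksum search."""
    sigs = {hashlib.md5(old[i:i + block]).digest()
            for i in range(0, len(old) - block + 1, block)}
    reused = i = 0
    while i + block <= len(new):
        if hashlib.md5(new[i:i + block]).digest() in sigs:
            reused += block
            i += block  # whole block matched; jump past it
        else:
            i += 1      # no match; slide forward one byte
    return reused

# A synthetic "package": compressible text, new version has a small insertion.
old = b"".join(b"Package: demo-%d Version: 1.0.%d\n" % (i, i % 7)
               for i in range(8000))
new = b"Changelog: fixed a bug\n" + old

old_z, new_z = zlib.compress(old), zlib.compress(new)
plain = reusable_bytes(old, new)
packed = reusable_bytes(old_z, new_z)
print(f"uncompressed: {plain} of {len(new)} bytes reusable")
print(f"compressed: {packed} of {len(new_z)} bytes reusable")
```

On the plain files nearly everything is reusable despite the insertion; on the compressed pair, essentially nothing is, which is exactly what the xulrunner experiment shows.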
<p>At this point it's appropriate to introduce a <a href="http://olstrans.sourceforge.net/release/OLS2000-rsync/OLS2000-rsync.html">lecture</a> <a href="http://ftp.gnumonks.org/pub/congress-talks/ols2000/high/cd2/2000-07-21_15-02-49_C_64.mp3">(mp3
version)</a> by rsync's author. It's really quite enlightening, but I'll highlight
the important bits for the lazy:</p>
<blockquote>
<p>the remote update problem is basically: you have two computers
connected by a very high latency, very low bandwidth link... a typical
Internet link, at least if you're in Australia. So, a piece of wet string, a
really pathetic link... and you've got two files. [...] You've got two lumps
of data; one sitting on one of the computers and the other sitting on the
other computer, and you want to update one of the lumps of data to be the same
as the other one.</p>
<p>Basically the rsync program works fine on compressed files, the actual
binary works fine, but the rsync algorithm is not very efficient on compressed
files. [...] gzip uses dynamic Huffman encoding, which means if you change one
byte in the file, everything after that point in the file changes in the
compressed data. Problem; that means of course that rsync will be terrible,
unless, of course, the change is toward the end of the file. [58m, 31s]</p>
</blockquote>
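The quoted point about dynamic Huffman coding is easy to see with Python's zlib module (the same DEFLATE format gzip uses); a quick sketch, not a benchmark:

```python
import zlib

data = bytearray(b"The quick brown fox jumps over the lazy dog. " * 2000)
before = zlib.compress(bytes(data))

data[10] ^= 0xFF  # flip a single byte near the start of the input
after = zlib.compress(bytes(data))

# How many leading bytes do the two compressed streams still share?
shared = next((i for i, (a, b) in enumerate(zip(before, after)) if a != b),
              min(len(before), len(after)))
print(f"compressed sizes: {len(before)} and {len(after)}, "
      f"shared prefix: {shared} bytes")
```

One flipped input byte and the compressed streams diverge almost immediately, so a block matcher has nothing left to find.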
<p>Most importantly, near the end:</p>
<blockquote>
<p>yes, handling renames is something I do want to do, partly because of the
stuff Steven has been doing with his apt-proxy stuff, where he's actually done
an rsync-based apt-proxy system. And look for it; I think it's called apt-
proxy, isn't it Steven?</p>
</blockquote>
<p><a href="http://apt-proxy.sourceforge.net/">Apt-proxy</a> essentially concludes the same thing, and restates a common
hypothesis that rsync is CPU intensive. So unless you can convince gzip to
adopt an rsync-amenable compression scheme, apt-rsync is on life support. There
are other ways one might go about this, but rsync isn't it today. I leave
you with a bit of hope:</p>
<blockquote>
<p>There are always more efficient algorithms than rsync. If you
have structured data, and you know precisely the sorts of updates, the
constraints on the types of updates that can happen to the data, then you can
always craft a better algorithm than rsync.</p>
</blockquote>More raw events2008-05-07T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-05-07:more-raw-events.html<p>If you're interested in Linux and tired of shitty "top 10 commands" lists on Digg,
recycled articles, and news filters like Slashdot, here are two interesting
things I've found in the past few days:</p>
<p><a href="http://www.fosdem.org/2008/media/video">FOSDEM 2008 videos</a> -- a developer conference to present new software
and technologies. That page has the past four years' worth of videos.</p>
<p><a href="https://wiki.ubuntu.com/UbuntuOpenWeek">Ubuntu Open Week</a> was last week. There were some last-minute schedule
changes, so some of the sessions were short on good questions. Often they seem like
someone cutting and pasting a wiki page, which is unfortunate. It might be
time to revisit the idea and tweak some things.</p>You may call me Sir Pwnguin if you like2008-04-04T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2008-04-04:you-may-call-me-sir-pwnguin-if-you-like.html<p>If you <a href="http://madduck.net/blog/2008.04.04:nicknames/">don't like being called by a name you chose</a>, you have chosen <em>very
poorly</em>. Please try to exert better judgment in the future, as you are very
nearly <strong>failing at life.</strong></p>Ghosts2008-03-04T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-03-04:ghosts.html<p>So I grabbed the latest NiN experiment. Reports from frustrated users unable
to download their purchase suggest it works well enough to be popular, though
my experience with the torrent suggests piracy is still the distribution
method to beat ;)</p>
<p>As far as the music itself, I'm unimpressed mostly. The album presents
itself as an "instrumental," but unless Trent Reznor's dictionary defines it
as "unmelodic ambient music", I think that word misses the mark. Only a few
tracks stand out musically to me. Clearly this album wouldn't succeed in music
stores. It appears the purpose here is to exploit collectors or perhaps
promote something akin to selling Garageband files whenever that gets
announced. I've never understood the appeal of that stuff -- all you can
really do is sample the tracks. You won't be able to add in an awesome solo
from the bass noise machine or anything and expect it to fit in.</p>
<p>Although, for some reason 28 Ghosts IV reminds me of an acoustic version
of Dethklok's Go into the Sea. But we all know that acoustic
is lame and totally not metal.</p>Pipes -- A one year review2008-02-08T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-02-08:pipes-a-one-year-review.html<p><a href="http://pipes.yahoo.com/">Yahoo! Pipes</a> is celebrating their <a href="http://blog.pipes.yahoo.com/2008/02/07/our-one-year-anniversary/">one year anniversary</a>; plenty has
been written on what Pipes is and what it does, so I'll spare you mere
description; Pipes arrived with an incredible amount of fanfare from the tech media. <a href="http://radar.oreilly.com/archives/2007/02/pipes_and_filte.html/">O'Reilly
wrote:</a></p>
<blockquote>
<p>Yahoo!'s new Pipes service is a milestone in the history of the internet</p>
</blockquote>
<p>But not everyone agreed at the time. IT Week columnist Tim Anderson asked if
Pipes was all it was cracked up to be, mere days after the announcement. The
author attacks not the concept or implementation, <a href="http://www.itweek.co.uk/itweek/comment/2185589/yahoo-pipes-cracked">but the open internet
culture it needs to succeed:</a></p>
<blockquote>
<p>Participating in mashups works well for e-commerce sites like eBay or
Amazon, because it drives sales, but that model fails for other kinds of
services.</p>
</blockquote>
<p>Clearly at release, opinions clashed. So after a year in beta, has Pipes
succeeded? The answer has slowly become clear to me. They've certainly
jumpstarted a new class of web application, which I have been using ("beta
testing") for much of the past year. But what I've found is equal parts
frustrating and amazing.</p>
<h3>Milestone of internet programming?</h3>
<p>Firstly, in some senses, O'Reilly understates the accomplishment of Yahoo!
Pipes. UNIX pipes are fantastic, but textual representations lead to common
linear topologies. Even the canonical program tee merely dumps output to a
file, rather than duplicating the stream into a second pipe. UNIX pipes
essentially allow you to glue two text apps together, with some fun filters
like grep in between. Search all your source code for instances of blah and
display in a text editor, or create a diff of a programming project and mail
it to a grader. Anything more complex than this and you'll suddenly need to be
writing code in C to arrange the programs and pipes, which quickly loses
advantage and appeal. But there's a broad satisfaction among UNIX users with
pipes; you can quickly write a program to perform a specific task without even
thinking about programming -- it feels more like simply using the system than
actually programming.</p>
<p>Yahoo's graphical representation allows you to easily construct entire
assemblies, split pipelines apart, independently modify them and recombine
them into something far better. This concept is no milestone -- done
correctly, it's a monument. A monument to what, exactly? Things that a few
years ago I dismissed as junk: XML, Webservices, etc. Yahoo pipes correctly
uses XML as a programmer tool, able to analyze its own data input formats
<strong>while writing your program</strong>. If your feed has extra attributes, these
become targets for rules later on in the pipe. This has been the most
impressive use of XML I've seen to date. Pipes effectively lowers the Web
Services bar from people familiar with Greasemonkey or XML Transformations
down closer to sysadmin skills.</p>
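The difference is easy to mimic in miniature with Python generators, where itertools.tee supplies the fan-out that linear shell pipelines lack. This is a loose analogy for how a Pipes graph forks and rejoins a feed, not Pipes' actual engine, and the feed items are made up:

```python
from itertools import tee

# A toy "feed" flowing down a pipe: hypothetical items, not a real RSS source.
feed = ({"title": f"item {n}", "score": n} for n in range(10))

# Fork the stream (the step plain UNIX pipes make awkward)...
left, right = tee(feed)

# ...process each branch independently...
popular = (item["title"] for item in left if item["score"] >= 7)
shouted = (item["title"].upper() for item in right)

# ...and recombine the branches into a single output.
merged = list(popular) + list(shouted)
print(merged)
```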
<h3>No panacea</h3>
<p>So Pipes is great in that it maintains the simplicity of the UNIX pipes model,
while extending it in ways the old model could never have adapted to.
Unfortunately, a year later, a lot of bugs that would have been forgivable at
release continue to exist. For example, regular expressions were added as a
way to mangle strings. For advanced uses, this is priceless. Sadly, they're
not implemented correctly. I repeatedly run into a wall with their regexp
module. It's painful enough learning Perl-like regular expression syntax, but
Yahoo!'s bugs make it downright impossible for experts to get things done, let alone
novices. HTML isn't meant to be parsed in this manner, and many sites may
resort to fighting Pipes (which has an opt out system for content owners) and
knock-offs (which may not) by twiddling the code in ways that affect regular
expressions but not more advanced grammars used in actual rendering.</p>
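The fragility is simple to demonstrate with a made-up snippet of markup (the pages, pattern, and class name below are hypothetical):

```python
import re
from html.parser import HTMLParser

page_v1 = '<a class="story" href="/post/1">First post</a>'
page_v2 = '<a href="/post/1" class="story">First post</a>'  # attributes reordered

# A regexp-style scrape: it breaks the moment attribute order changes.
pattern = re.compile(r'<a class="story" href="([^"]+)">')
print(pattern.findall(page_v1))  # finds the link
print(pattern.findall(page_v2))  # finds nothing

# A real parser sees both pages identically.
class LinkGrabber(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a" and ("class", "story") in attrs:
            self.links.append(dict(attrs)["href"])

for page in (page_v1, page_v2):
    grabber = LinkGrabber()
    grabber.feed(page)
    print(grabber.links)
```

A site owner only has to shuffle attributes or whitespace to break every regexp scraper, while anything using an actual HTML grammar keeps working.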
<p>This means Pipes' best tool in the toolbox to counter IT Week's closed
internet claims is bent, if not broken. And he's right on the merits. Almost
nobody has a business incentive to publish RSS or other web services in an
advertising driven system. Google dropped their search API in favor of
Javascript widgets and Gears. Google Video might allow people to publish video
for download, but Youtube doesn't. Pipes introduced a Fetch Page module, in an
attempt to let users break up any given page into sub elements. Webscraping
with Pipes is difficult to do at best, and the broken regexp module makes it
almost meaningless in making feeds from sites without one already. For
example, I've been working on an RSS feed for the Greatest Site in the
Universe, but the author's handwritten HTML requires delicate touches that
Pipes' regexp system cannot handle.</p>
<p>But this is not the only problem that comes up. I've constructed a fairly
simple pipe that searches several RSS feeds for items, and formats them in a
way that I can automate <a href="http://en.wikipedia.org/wiki/Broadcatching">broadcatching</a> with. One pipe, run with several
different parameters. For reasons unknown to me, these pipes work fine in
debugging mode, but once published I get the same feed from two distinctly
different parameters. I can only imagine that something is terribly broken in
their caching scheme.</p>
<p>A third annoyance is that you can only use other modules as inputs. The beauty
of UNIX pipes was that so many programs were "fittable" from either end.
Yahoo! pipes cannot accept an anonymous standard input. Without this, every
workaround you create must be duplicated in every pipe you need it for, and
it's a typo-prone endeavor.</p>
<p>Finally, the Pipes team is not at all on top of their beta tester user base.
This is somewhat understandable when the comments are like "Pipes should have
an easy HTML to RSS module!", but when you provide a test Pipe that clearly
demonstrates simple bugs like regular expression case sensitivity, it sends a
confusing signal to me. Are they not interested in fixing Pipes? Is there
somehow a bigger problem I'm not seeing? Are the kinds of things I have in
mind for Pipes simply not on the slated future of Pipes? A fellow Pipes user
<a href="http://discuss.pipes.yahoo.com/Message_Boards_for_Pipes/threadview?m=tm&bn=pip-DeveloperHelp&tid=2912&mid=2914&tof=-1&rt=2&frt=2&off=1">responds to my question:</a></p>
<blockquote>
<p>I'm still not clear on how to alert the pipes development team about these
issues.</p>
</blockquote>
<p>If I was a Microsoft looking to evaluate the value Pipes adds to Yahoo!, these
sorts of failings are not a good sign.</p>
<h3>So now what?</h3>
<p>I'm forced to conclude that <strong>Pipes is mostly a pipedream</strong>. In order to fix
the many problems I'm encountering, I'll likely resort to abandoning Pipes in
favor of running scripts locally. Plagger requires perl and is apparently a
PITA to package. OCamlRSS has gone over a year since its last update. This
situation is slightly disappointing because with Pipes, the explanatory
diagrams were the code. But for now I have more faith in my ability to
translate hand drawn diagrams into code and perl regular expressions than
Yahoo!'s ability to deliver a working regExp system.</p>Duggers Third Law2008-01-18T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-01-18:duggers-third-law.html<p>The third law states that everything you've thought of has already been done
on the Internet.</p>
<p>Example: <a href="//www.pwnguin.net/a-comparison-of-compression-schemes.html">What I thought of</a> vs <a href="http://goodmerge.sourceforge.net/About.php">What's already been done.</a></p>A comparison of compression schemes2008-01-17T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-01-17:a-comparison-of-compression-schemes.html<h2>The Scenario</h2>
<p>I have about 10GB of SNES games sitting on a storage drive. As I've recently
experienced a scare with my storage disk, I've decided to put as much as I can
on DVDs. Unfortunately, even with Dual Layer DVDs, one can only put about 8GB
on a single disk. Since joda has recently switched to releasing his torrent
packs in 7z form, I thought I'd try switching my other archives. The
results are amazing.</p>
<p>In a bit more detail, the dataset we're working with is the SNES GoodSet, give
or take a few. The goodset includes nearly every revision of every game known
to the set's editors. So every language, every beta, every bug fix, every
translation and every mod. That's a lot. In fact, the size above is actually
from taking every game and its variants and zipping that group individually.
It's probably twice as big uncompressed.</p>
<p>For testing purposes, I took the first dozen or so groups to create a smaller
test set to work with. I then compressed them with various strategies and
algorithms.</p>
<h2>The compression algorithms</h2>
<p>I tested the following compressions where appropriate:</p>
<ul>
<li>
<p>.zip</p>
</li>
<li>
<p>.7z</p>
</li>
<li>
<p>.rar</p>
</li>
<li>
<p>.tar.gz</p>
</li>
<li>
<p>.tar.bz2</p>
</li>
</ul>
<h2>The Organization Strategies</h2>
<h3>Single Flat Directory, Single file</h3>
<p>In this strategy, take all 82 files and place them into a single directory, then
compress the directory. gzip and bz2 don't support this AFAIK, so they were
not tested.</p>
<h3>Grouped Directories, Single file</h3>
<p>In this strategy, place all files for a single game into a single directory,
and place all 14 directories into a top level directory to compress. Again,
gzip and bz2 commonly rely on tar to work across multiple files / directories,
so this doesn't really apply.</p>
<h3>Individual Groups compressed</h3>
<p>Compress each of the 14 group directories individually. For gzip and bz2,
tar them first.</p>
<h3>Tarred/Group Tarred</h3>
<p>These are two very similar tests with very similar results. Take the
organization strategies from the first two and tar them, then use the
compression. This is motivated by two things. Firstly, it helps put bz2 and
gzip on an even footing with rar/zip/7z. Secondly, the 7za man page advises:</p>
<blockquote>
<p>DO NOT USE the 7-zip format for backup purpose on Linux/Unix because :</p>
<ul>
<li>7-zip does not store the owner/group of the file.</li>
</ul>
<p>On Linux/Unix, in order to backup directories you must use tar :</p>
<ul>
<li>to backup a directory : tar cf - directory | 7zr a -si directory.tar.7z</li>
<li>to restore your backup : 7zr x -so directory.tar.7z | tar xf -</li>
</ul>
</blockquote>
<p>So if you want to automate backups, perhaps these tests offer useful insight
into 7zip's viability.</p>
<h3>The Data</h3>
<table>
<tr><th></th><th>Total Size (in MB)</th><th>Compression ratio</th></tr>
<tr><td>test set (baseline)</td><td>117.7</td><td>100.00%</td></tr>
<tr><th colspan="3"><em>Single Flat Directory, Single file (for supporting formats)</em></th></tr>
<tr><td>one big zip</td><td>62.1</td><td>52.76%</td></tr>
<tr><td>one big 7z</td><td>50.6</td><td>42.99%</td></tr>
<tr><td>one big rar</td><td>58.8</td><td>49.96%</td></tr>
<tr><th colspan="3"><em>Grouped Directories, Single file (for supporting formats)</em></th></tr>
<tr><td>one big zip</td><td>61.2</td><td>52.00%</td></tr>
<tr><td>one big 7z</td><td>12.2</td><td>10.37%</td></tr>
<tr><td>one big rar</td><td>58.5</td><td>49.70%</td></tr>
<tr><th colspan="3"><em>Individual Groups compressed</em></th></tr>
<tr><td>zipped groups</td><td>62.1</td><td>52.68%</td></tr>
<tr><td>7zipped groups</td><td>10.7</td><td>9.09%</td></tr>
<tr><td>tar.gz'd groups</td><td>62.1</td><td>52.68%</td></tr>
<tr><td>tar.bz2'd groups</td><td>65.6</td><td>55.23%</td></tr>
<tr><td>rar'd groups</td><td>68.5</td><td>58.20%</td></tr>
<tr><th colspan="3"><em>Tarred</em></th></tr>
<tr><td>tar.gz</td><td>62.1</td><td>52.76%</td></tr>
<tr><td>tar.bz2</td><td>65.7</td><td>55.82%</td></tr>
<tr><td>tar.zip</td><td>61.9</td><td>52.59%</td></tr>
<tr><td>tar.7z</td><td>29.8</td><td>25.32%</td></tr>
<tr><td>tar.rar</td><td>12.3</td><td>10.45%</td></tr>
<tr><th colspan="3"><em>Group Tarred</em></th></tr>
<tr><td>tar.gz</td><td>62.1</td><td>52.76%</td></tr>
<tr><td>tar.bz2</td><td>65.7</td><td>55.82%</td></tr>
<tr><td>tar.zip</td><td>61.9</td><td>52.59%</td></tr>
<tr><td>tar.7z</td><td>28.8</td><td>24.47%</td></tr>
<tr><td>tar.rar</td><td>11.1</td><td>9.43%</td></tr>
</table>
<h2>Analysis of results</h2>
<p>The ultimate victor is 7zip, when compressing each group of related files
separately. RAR does great in the other cases, but surprisingly fares worst
here. Additional information:</p>
<ul>
<li>
<p>Clearly, grouping provides 7zip with substantial benefit. It's not hard to
come up with possible reasons: since the games are very much the same with
only fonts and text changed in most cases, it would be simple to define very
large dictionary blocks with very short lines, effectively turning the
dictionary into the main file, and the actual file streams as diffs.</p>
</li>
<li>
<p>This theory also explains why the large single flat archive fares worse with 7z:
eventually the program decides to limit its dictionary size, causing it to use
smaller words.</p>
</li>
<li>
<p>The individual archives for each group approach theoretically allows one
to cherry pick the best compression for each group, but given how much better
7z is, I doubt any individual groups would actually benefit.</p>
</li>
<li>
<p>In fact, Paul Sladen speculates that <a href="http://www.paul.sladen.org/projects/compression/">7zip already cherry picks
compression algorithms</a>. He also suggests that order does matter, causing
substantial improvements in backups if you order the tarball by filetype.</p>
</li>
</ul>
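The dictionary-sharing theory is easy to test in miniature with Python's lzma module (LZMA is the algorithm behind 7z's default method). The "ROMs" below are synthetic stand-ins, one base image with a few hundred byte tweaks per variant, not the actual GoodSet:

```python
import lzma
import os
import random

base = os.urandom(200_000)  # one synthetic "game" image

# Variants of the same game: the base with scattered byte changes,
# standing in for translations, bug fixes, and regional revisions.
random.seed(42)
variants = []
for _ in range(4):
    rom = bytearray(base)
    for pos in random.sample(range(len(rom)), 200):
        rom[pos] ^= 0xFF
    variants.append(bytes(rom))

separate = sum(len(lzma.compress(v)) for v in variants)  # one archive per variant
grouped = len(lzma.compress(b"".join(variants)))         # one solid archive
print(f"separate archives: {separate} bytes, grouped: {grouped} bytes")
```

Compressed separately, each variant pays full price; compressed as one solid stream, every variant after the first costs only its differences, which is the same effect the grouped 7z numbers show.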
<h2>Conclusion</h2>
<p>Based on these results, I decided to recompress my 10GB archive with 7z,
leaving the grouping intact. The final size: 3.9GB. Fantastic. And in fact, it
appears that one can do even better, getting <a href="http://www.mininova.org/det/1102879">down to 1.6GB</a>. Handy. I just
used the default settings, so perhaps it's time to fiddle with the advanced 7z
options. <a href="//www.pwnguin.net/addendum.html">Addendum: Using smarter options leads to
comparable compression ratios.</a></p>Addendum2008-01-17T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-01-17:addendum.html<p>After reading a better document on p7zip (/usr/share/doc/p7zip/examples), I've
found the command options</p>
<blockquote>
<p>7za a -t7z -mx=9 -ms=on -mmt=on</p>
</blockquote>
<p>do a much better job of compressing, bringing the total in line with the
aforementioned torrent. Going from 2 DVDs down to a quarter of one is good stuff, I think. There
is a downside, though: ZSNES and snes9x don't support this archive format
natively. But they didn't support the grouping format with zip either. So no
huge loss I guess.</p>I love the internet2008-01-11T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-01-11:i-love-the-internet.html<p>They say that open source didn't really take off until the internet was
popular, and open source probably milks it for all it's worth. But sometimes
the Internet isn't enough, and several <a href="http://en.wikipedia.org/wiki/Linux_conference">conferences</a> have grown from it.
Ubuntu their Developer Summits, O'Reilly hosts OSCON, and some enterprising
guys in Austrailia host Linuxconf.</p>
<p><a href="http://linux.conf.au/programme/presentations">Linuxconf 2008</a> is coming up shortly, and one of the reasons I like
it is that unlike the other conferences, it takes the Internet to heart.
Nearly all lectures are recorded and made available for poor / lazy folk like
me to view at my leisure. It's a great way to kill a day or two. Several look
promising:</p>
<ul>
<li>Dave Jones will be giving a lecture on what he does, I guess</li>
<li>Matthew Garrett will be giving a talk on suspend to disk. I wish he'd give a
lecture on open source bioinformatics tools available some time</li>
<li>Dave Airlie has a talk scheduled about open source 3d drivers</li>
<li>A pair of Debianites are going to talk about why big companies don't like
Debian</li>
<li>A talk on setting up an apt repo</li>
<li>A security lecture on Debian Security</li>
<li>An open source animation tools demo by Bdale Garbee's daughter Elizabeth</li>
<li>MPX -- multi-pointer X, for those with tablets</li>
</ul>
<p>There's tons more. If you live in town, I envy you.</p>Deluge: After one month2008-01-02T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2008-01-02:deluge-after-one-month.html<p>A month ago, I wrote about a <a href="//www.pwnguin.net/a-plan-for-anime.html">plan to automate</a> some of my online
activities, and in particular, to replace Azureus with <a href="http://deluge-torrent.org/">Deluge</a>. Well, I've
been using it for the last month and here's the scoop:
<h3>Pros</h3>
<p><strong>Works</strong> Deluge delivers the goods. Encryption, DHT, uPNP etc. Pretty much
everything you need in the face of mean ISPs, throttling, and leeches.</p>
<p><strong>Actively developed</strong> Deluge is rapidly improving with quality and features.
The development community is responsive, if a bit hostile at times. The cons
below have a chance at being eliminated in the future, if this continues.</p>
<p><strong>Non-judgemental</strong> Deluge doesn't try its hardest to stop me from seeding, it
doesn't try to trick users etc. The system to stop seeding is extremely simple
and I hope in the future it's extended to shut the whole thing down when
there are no active torrents left.</p>
<h3>Cons</h3>
<p><strong>No good plan for end user distribution</strong> Traditionally, Debian users wait
for upstream releases to enter unstable, and Ubuntu users sometimes wait for
the next release. The Debian model works great when a DD is involved, and the
Ubuntu system works okay with -backports and PPAs. Deluge has managed to avoid
both of these, but also avoided providing the usual sources.list repo. They've
been grappling with how to better serve their users in light of the cost of
being popular and rapidly changing. Ubuntu represents 50 percent of the
bandwidth and 80 percent of the downloads -- if anyone (Jono maybe) reads
this, perhaps it's nearly time to form an outreach.</p>
<p><strong>No significant memory savings</strong> Azureus has a reputation for being fat, and
I was hoping that Deluge would be lighter. It was at first, but with new
versions came new features and new weight. My primary reason for switching was
to reduce memory usage. DHT alone seems to cost 5 MB of resident memory and 3
MB writable. I'm not sure how to best analyze memory usage. Comments welcome ;)</p>
<p>For me now, Deluge is the app to beat.</p>A Guerilla Slashdot User's Guide2007-12-22T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2007-12-22:a-guerilla-slashdot-users-guide.html<p><a href="http://slashdot.org/">Slashdot,</a> rightly or wrongly, has a bad reputation. They post duplicates
(guilty) and the comments are all idiots. I find slashdot comments, treated
properly, give an extra dimension to the article in question. Allow me to
give you some tips on finding Slashdot more productive, useful, or at least
entertaining.</p>
<p><a href="http://adblockplus.org/en/">Use AdBlock Plus</a>. This works for lots of sites, and Slashdot is no
exception. It cuts loading times, and animated annoyances.</p>
<p>Second, create an account. If you're worried about spam, you can try
services like <a href="http://mailinator.com/">Mailinator</a> but it seems like they're onto this stuff and
don't like it. They block Mailinator addresses as "invalid", and claimed I
somehow cut and double-pasted one email address so the two fields don't match.
So maybe that's a downside. It does suggest they intend to sell your email
address, or else they wouldn't care how easy it is to set up throwaway email
addys.</p>
<p>Once you've established an account, you can now set personal preferences
for the site. Moderation is the system that makes large boards feasible.
Slashdot has crowdsourced moderation, long before that word even existed.
Crowd moderation is better than no moderation, but it isn't all roses. You'll
need some jujitsu to make this thing work.</p>
<ul>
<li>
<p>First ensure you're using threaded mode, with the old comment style.
There's always some beta comment system, as confusing as it is omnipresent.
Preferences -> Comments -> Discussion Style: Classic mode. Below that
is Display mode, and you want Threaded.</p>
</li>
<li>
<p>Then sort by Highest Ratings (but not the ignore threads option). Then set
the Threshold to 1 or 2. Conversations are hard to read when half the thread
is sliced out by moderation. Below that, set the Highlight Threshold to 2.
This way anything moderated will be readable or gone.</p>
</li>
<li>
<p>Thirdly, muck with the Reason Modifier. This one deserves a bit more
explanation.</p>
</li>
</ul>
<p>One of the thorns in the rosebush of Slashdot moderation is the tyranny of the
majority opinion. Pointing out flaws in Linux, or otherwise deviating from
the world view of Slashdottery can result in instant flamebait moderation.
Flamebait is an interesting moderation -- it's essentially saying the post is
controversial, but not offensive or spam. In my book, that's <em>interesting</em>; in
slashdot's book the moderation is treated as a <em>negative.</em> The comment is
bumped down one point. My best trick is this: <strong>To best read Slashdot
comment threads, change flamebait from 0 to +4. </strong>Suddenly, the other halves
of debates become visible. Or at least far more hilarious.</p>
<p>Of course, this requires that moderators not change their views
significantly. If flamebait begins to be interpreted as Ron Paul spam, or
some other concept, the whole thing breaks down. So if you moderate slashdot,
ignore this whole post.</p>Power restored2007-12-18T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2007-12-18:power-restored.html<p>Nearly six days and two very cold nights after the ice storm demolished
Manhattan's power infrastructure, we finally regained electricity to the
house. I suppose I'll remember this as a lost week -- I was unable to really
accomplish anything but eat sleep and complain.</p>A plan For anime2007-11-30T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2007-11-30:a-plan-for-anime.html<p>I guess you could say watching fansubbed anime's a pastime of mine. The two
most common distribution methods these days is IRC and bittorrent. I've built
up a small ritual around it:</p>
<ol>
<li>Check a site like <a href="http://tokyotosho.com/?cat=1">TokyoTosho</a> for new shows. </li>
<li>Download the torrent, and pass it to Azureus.</li>
<li>Allow Azureus time to download, and maybe seed. </li>
<li>Watch.</li>
</ol>
<p>It's a process I've honed over the years. Recently, I've been trying to make
the ordeal smoother. Starting backwards, I guess. </p>
<p>gMPlayer doesn't integrate with desktops very well, or show subtitles by
default, so a few months ago I wrote a nautilus script to pass suitable
options to mplayer. It's a very small script based on another one I found
on <a href="http://g-scripts.sourceforge.net/">g-scripts</a>. Totem's getting better, but performance still tanks hard on
mkv for unknown reasons. </p>
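A wrapper like the one described above is easy to sketch. This is not the actual script from g-scripts, just a guess at the shape of it; the mplayer flags used here (fullscreen, preferring English subtitles, loading a matching .srt when one exists) are assumptions about what "suitable options" might look like:

```python
#!/usr/bin/env python
# Sketch of a Nautilus script wrapping mplayer. Nautilus passes selected
# files as command-line arguments, so we just loop over sys.argv.
import os
import subprocess
import sys

def build_command(path):
    """Build an mplayer invocation: fullscreen, prefer English subtitle
    tracks (as embedded in MKV), and load a matching .srt if present."""
    cmd = ["mplayer", "-fs", "-slang", "eng"]
    srt = os.path.splitext(path)[0] + ".srt"
    if os.path.exists(srt):
        cmd += ["-sub", srt]
    cmd.append(path)
    return cmd

if __name__ == "__main__":
    for f in sys.argv[1:]:
        subprocess.call(build_command(f))
```

Dropped into ~/.gnome2/nautilus-scripts and marked executable, it shows up in the right-click menu.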
<p>The next step is making BitTorrent itself a bit more automatable. <a href="http://azureus.sourceforge.net/">Azureus</a>,
like all Java programs, is a big-footprint kind of program. It takes somewhere
between 30 and 50 megs of RAM. And it tends to crash. Hard. So hard it crashes
on restart, until you remove, of all things, the log files. So I'm currently
looking for a replacement for Azureus. For all its weight, Azureus does a
good job of staying up to date and has lots of great tech. DHT, upload/download
limiting, etc. There's even a plugin to handle RSS feeds, which is good for
automation (more on this in a second). </p>
<p>Right now I'm looking at <a href="http://packages.ubuntu.com/deluge-torrent">Deluge</a> as a replacement for Az. It's got a bit
more control over when and why torrents are closed, and appears to take around
10 megs at idle. I should probably do smarter measurements of RAM use while
actually running a torrent, but it's a good sign at least. But what counts is
download time. Many people in the local LUG moan that their torrents never go
fast; I've never had that problem with Az, and I don't intend to go back to
such times. </p>
<p>The third step has a couple of options at the moment. Both Azureus and Deluge
support RSS subscriptions. I'd prefer not to have a bulky torrent running in
the background all day not doing anything, though. I do, however, use a tool
called <a href="http://liferea.sourceforge.net/">Liferea</a> in the background on my desktop. I've been moving a lot of
stuff over to Liferea, and this is also a candidate. Liferea has an option to
download enclosures and launch an application on them. In this case, I'd run
Deluge or Azureus. Not sure how to close them when they're idle again, as
bittorrent authors prefer that people always be running their program and
seeding. A big problem currently is that Liferea doesn't handle MIME types,
even though the enclosure protocol specifies their existence. It's mostly a
matter of laziness on my part and the author's. I've fixed the parsers to
handle MIME types, but that's just the first part of several changes.</p>
<p>The final step is to actually create an RSS feed for just the things I'm interested
in. US TV has <a href="http://tvrss.net">tvrss</a>, which does a good job of making RSS feeds from sites
like mininova. Anime doesn't have that, but it does have several dedicated
sites of the same sort. This is where <a href="http://pipes.yahoo.com/pipes/">Yahoo! Pipes</a> comes in. It's like a
flowchart for UNIX pipes. Most unix pipes aren't very complicated though. They
mainly just filter or transform things and pass them on to the next guy in
line. Ideally, Yahoo!'s system allows for splitting pipes in far more
intelligent ways than tee ever did. In practice it's a bit frustrating to
write a complicated Pipe, as their loop operator doesn't take operators
itself. You have to sort of build them out of string manipulation and regexps. Fun
times.</p>
<p>Here's an explanation of <a href="http://pipes.yahoo.com/jldugger/anime">the pipe</a>'s design. In order to get the very latest
torrents, RSS feeds from several sites (instead of just one) are pulled in and
merged. Then it filters based on user provided parameters. What's left should
be your series, in chronological order. Then we normalize the feed, as each
site offers different data. Some tools, like Liferea, only work on enclosures,
not links. So the pipe sets those values up, and bam, <a href="http://en.wikipedia.org/wiki/Broadcatching">broadcatching</a>
complete.</p>
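The merge, filter, sort, and normalize steps above translate to a few lines of Python. This is a rough sketch of the pipe's logic, not the pipe itself: plain dicts stand in for RSS items, feed fetching is omitted, and the example site names and fields are made up:

```python
# Sketch of the broadcatching pipe: merge several feeds, filter by series,
# sort chronologically, and make sure every item carries an enclosure.

def broadcatch(feeds, keyword):
    """Merge feeds, keep items whose title matches the series keyword,
    sort by date, and normalize each item to have an enclosure."""
    merged = [item for feed in feeds for item in feed]
    matches = [i for i in merged if keyword.lower() in i["title"].lower()]
    matches.sort(key=lambda i: i["date"])
    for item in matches:
        # Tools like Liferea act on enclosures, not links, so copy the
        # torrent URL into the enclosure field when a site only sets a link.
        item.setdefault("enclosure", item["link"])
    return matches

site_a = [{"title": "Some Show - 03", "link": "http://example.org/a3.torrent",
           "date": "2007-11-28"}]
site_b = [{"title": "Some Show - 04", "link": "http://example.org/a4.torrent",
           "date": "2007-11-30"},
          {"title": "Other Show - 12", "link": "http://example.org/o12.torrent",
           "date": "2007-11-29"}]
episodes = broadcatch([site_a, site_b], "some show")
print([e["title"] for e in episodes])  # ['Some Show - 03', 'Some Show - 04']
```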
<p>If you've made it this far, I'm open to questions about patching liferea,
suggestions on other feeds to aggregate, bug reports, and alternative
bittorrent clients.</p>Linux Fingerprint Developments2007-11-15T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2007-11-15:linux-fingerprint-developments.html<p>It's been a busy week for Fingerprint Authentication in Linux. <a href="http://sourceforge.net/mailarchive/forum.php?thread_name=1195126619.17548.65.camel%40zimtstern.suse.de&forum_name=thinkfinger-devel">Timo announced
</a> that pam_thinkfinger is being retired, and a new project will supposedly
arise from it. So it's effectively being abandoned. Normally I'd say it's a
bad thing, but it's a solid driver at the moment, and most of the problems
arise from the PAM module. Requiring uinput has been a source of pain as it's
partly considered insecure. The good news is that Thinkfinger is a whirlwind
of activity at the moment preparing for a final release, reviewing and
integrating patches that have been floating around. Hopefully upstream won't
die as much as branch. </p>
<p>Additionally, there's a new Ubuntero pushing on <a href="https://bugs.launchpad.net/bugs/54816">including BioAPI in Ubuntu</a>.
He seems a bit of a novice though, so don't hold your breath. I don't quite
see the purpose in pushing BioAPI exactly, but if it works, great for everyone. </p>
<p>Finally, <a href="http://www.reactivated.net/fprint/wiki/Main_Page">fprint</a> announced an initial release this week. They aim to create
a generic library with which they can support all kinds of fingerprint hardware.
Unsurprisingly, security is <a href="http://www.reactivated.net/fprint/wiki/Security_notes">"of interest, [but] not a primary objective".</a>
That's right. A security tool whose primary objective is not security. Mostly
this is an artifact of the contradiction of using fingerprints as security
devices -- they're not as secure as a good password, but they are slightly
better than auto login as root.</p>Proving them wrong once again2005-04-01T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2005-04-01:proving-them-wrong-once-again.html<p>I got my GRE scores back today. The breakdown is as follows:</p>
<p><strong>Verbal</strong> 590 (81% below)</p>
<p><strong>Quantitative</strong> 660 (59% below)</p>
<p><strong>Analytical</strong> 5.5 (86% below)</p>Jobless Recovery2004-03-31T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2004-03-31:jobless-recovery.html<p>Fixed a few new errors in the webpages; they're W3C compliant again! New CIS data as well.</p>
<p>Spring Break is over, and I blew most of it off. Mostly caught up with some TV shows (didn't know Dennis Miller
was now with CNBC) and played Yahoo! Spades -- I'm ranked at 1650 or so over 20 games. The scoring is Elo, like chess.
You start out at 1500, and 1800 is considered a very high ranking. But I was lucky enough to land an on-site
interview with General Dynamics in Arizona.</p>
<p>The part of Phoenix they put us up in is clearly the result of urban renovation. We noticed three strip clubs on
the way to the hotel, and several pawnbrokers and payday loan offices. I think it's safe to say that the Maserati
dealer and upscale apartment complexes we saw are relatively new. Every place we looked at offered plenty of
amenities like concierge services and gated access. Nobody mentioned the crime rate, but Scottsdale is certainly
an interesting estuary of poor and rich. Just a scant few blocks north of our hotel lies the nightclub area and
several upscale shopping stops, mostly art galleries. We saw one that just looked like unfinished canvases, until
Zack informed me they were "abstract art." I didn't realize that was a thing people bought. I have to wonder where
the money comes from to support some of these people's lifestyles -- probably drugs and hookers. Zack wanted to
visit Axis/Radius, especially after he saw the white curtains they'd put up since he was by last time. Of course
the lines were atrocious, and I didn't own anything worth wearing to a nightclub.</p>
<p>We also stopped by Tempe, and walked around Arizona Mills mall. I never understand why tourists go to malls but
it's clearly an attraction. It's mostly the same suburban stuff you can find anywhere, but there are also some stores that
directly cater to tourists, like Gameworks and IMAX. The price for 3-hour unlimited play has gone up since I last
went but there may be some day of week pricing factors at play. Had some fun with a Police Sim game. You basically
stand on a pad and get into gunfights with a Japanese drug trafficking gang, and shoot hundreds of people. The nifty
part is that the pad senses your weight shifting as you move and duck for cover. It's basically doing
squats and my calves still hurt from it.</p>
<p>It would definitely be relaxing to have a job waiting for me when I graduate, especially with how unstable the
economy is right now. Some say we're experiencing a jobless recovery; others say the growth indicators aren't
reliable anymore. Either way it does seem that real growth will be on the way soon. I'm hearing that some of
Sprint's executives may be leaving, perhaps because their outsourcing idea isn't going well. They may be rehiring
several of the people they let go. Garmin is building more in town. I hear they're building an eight-story
extension to their existing facility, which they're expecting up by September. That's a ton of people to hold. I'm
not sure the nearby roads can handle that kind of traffic, especially since it's right across the street from a junior
high, with plenty of foot traffic, and Olathe South High School, which releases a flood of teenage vehicle traffic.
But it's also a ton of people to hire, which is good news for Olathe, Kansas City, and my prospective profession
in general. It's crazy to see Garmin in the news and on TV, referred to as some sort of respected vendor. </p>Con Games2004-02-29T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2004-02-29:con-games.html<p>Doing well in classes. I'm now the team leader for EECE 733: Real Time Embedded System Design. I get to manage 4
graduate students on our semester project. Hopefully it will be more interesting than our Software Engineering Project.</p>
<p>I've been playing this stock market game as well. It was made by a junior in CS, and it's not implemented very well.
My roommate is also in the game, and we're pretty competitive. We found a small hole or two in the software logic,
and right now my portfolio is up 400 percent over two weeks. Originally there was only going to be trading allowed
during market open hours, but this was relaxed since world markets are open all the time somewhere, and market
hours coincide with class. He mentioned that you could trade after hours but the price wouldn't change. In reality,
you can trade after hours, and people do. Usually this happens when a company announces quarterly results. The
difference is that in reality, you also pay after hours prices, not close price. So company ABC announces they
beat expectations by 20 cents, and prices jump up 40 percent. We buy in at the close price and sell at market open
for nearly 40 percent. The other hole is that every price used by his system is delayed by twenty minutes. Yahoo
(the website the game uses for prices) also offers realtime quotes on stocks, so you have 20 minute futuresight.
Handy for avoiding bad trades.</p>
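The after-hours hole boils down to simple arithmetic. A toy illustration, with made-up numbers:

```python
# Toy illustration of the after-hours hole described above. The game
# charges yesterday's close even after earnings news; reality charges
# the after-hours price. Numbers here are invented for the example.

def after_hours_gain(close_price, open_price, shares):
    """Profit from buying at the stale close price and selling at the
    real opening price the next morning."""
    return shares * (open_price - close_price)

# Company "ABC" beats expectations and opens 40 percent higher.
close, opening = 100.00, 140.00
print(after_hours_gain(close, opening, shares=50))  # 2000.0
```

The 20-minute quote delay is the same idea in reverse: the realtime quote tells you where the game's delayed price is about to go, so you simply skip trades that are about to move against you.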
<p>Both of these are fixable. If the guy had done any real searching, he'd have found the [mock market][], a webapp
that does this game much cleaner. It's OSS and Linux/Apache/MySQL/PHP based, so the only real cost is time.
Instead it's running on pirated Microsoft products. It's no wonder that MS is pissed about GNU tools
like Linux: the customers who steal stuff from them will remain loyal to MS. It's the paying customers who will
look elsewhere.</p>
<p>Ikaruga finally came, and it's as insane as people say it is. Fortunately, it has several time-based unlockables.
Every hour played earns you another credit, up to nine. After that you get free play, which means I should now
be able to actually finish the game. There are some galleries to unlock, and a prototype mode. Right now Zack's
hooked on F-Zero GX. I think he's jealous that it's cooler than Wipeout. Which is fine by me, since he's unlocking
a lot of stuff that I was having problems with. I need to con(vince) him to play versus and thoroughly trash his
ego soon, though. </p>The beginning of the end2004-01-20T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2004-01-20:the-beginning-of-the-end.html<p>The end is nigh! The beginning of the end is near -- the spring semester begins on Thursday. I'm taking a fairly
light course load, but the classes I'm taking should be fairly intensive. (Note to self: update schedule.)
Graphics is gonna be cool; Cantrell has still been working on his spline-based GL screensaver. Now the splines are
all glowy and wide.</p>
<p>Seems like I only update this page when I'm nearing the end of a cycle of debugging my desktop. This time around
I've gotten SMB working, dug into the internals of iptables (the internet firewall/filter for Linux) and
upgraded to a 2.6 kernel. A lot of people have said the preemptive stuff in 2.6 really helped desktop performance,
but I haven't noticed any of that. I'll look at it more after I stop using Debian's provided kernel. I also moved
the video drivers up to the latest version (and started using Debian's provided package). Looks like the internet
ruminations were right on this one -- I get lower performance, hiccups and an annoying static on sound when
programs do something somewhat video intensive, like openGL or full screen scrolling. Time to look into pinning
the old drivers back. I also pointed out a minor bug in the Debian packages to the maintainer; he forgot to move a
base package up to the new version when he moved the rest forward.</p>
<h1>One and a half tons of stationary power</h1>
<p>My brother, Cole, has been borrowing my car a lot lately, since his RX-7 has forgotten how to stop. This is a
practice he'll be weaned from when I leave tomorrow for school. He's looking to replace the brake booster now. The
garage thought it was the master cylinder, but when he took it home to repair it, replacing it fixed nothing. Now
he's repairing the power booster. Not that my car isn't a piece of crap. I'm still on my first car, some 6 years
later. My roommate, Zack's been through several more than I, and my brother's already gone through one in his first
year driving. He likes sporty cars -- his first was a CRX, a compact he made even more so after running into a SUV
stopped in the middle of the road. Now he has an RX-7. The rotary idea is pretty cool, and I hear they're
considering bringing it back if the RX-8 does OK in the market. So far it's still on factory order. Myself, I'm
more interested in some other engineering marvels. Hybrid electric cars are pretty nifty. Solid mileage, and they
still move faster than my current car. Regenerative braking is cool, and there are all sorts of nifty things you can
do with two different engines. The electric engine in the Prius functions as part of a continuously variable
transmission, replaces the starter, is used to charge the batteries, and pitches in during high performance
situations. </p>
<p>Dad really likes the hydrogen idea, but hydrogen is really just a liquid battery. It's nice in a
modular energy design, but you still need a power source somewhere to create the fuel. And the distribution of
hydrogen power is sort of a chicken and egg situation. If nobody buys a hydrogen powered car, there's no reason
for stations to sell hydrogen. But very few would buy a hydrogen based car without a hydrogen source nearby. In
theory, you could have a hydrogen generator in your garage, but expect crazy energy bills, especially in
the summer, when the extra heat will seep into your air-conditioned house. Hybrid electrics, on the other hand,
offer energy savings and build on an existing infrastructure. Yay hybrid!</p>
<h1>Games</h1>
<p>Picked up F-Zero GX, a futuristic racer for the GameCube. Very beautiful game, for the brief period in the learning
curve where one looks for these things. I hope I can find an arcade cabinet with it soon; it's very fun, and I'd
love to see the arcade tracks. Someone has just released an initial beta of a gamecube Linux. Interesting stuff
that I'd look into further if I had the tools. I'd need a special screwdriver to open the case, the rare ethernet
adaptor to boot, and an image server. Let's just say Nintendo has their bases covered in preventing piracy.
Unfortunately, this also makes it harder to run anything else on the Cube. This is par for the course in console
systems, however. You pay big bucks for the privilege of having hardware that is more functional (I hear the
developer systems can read CD-Rs).</p>
<p>I also ordered Ikaruga from Best Buy for 20 dollars. It's gone from back-ordered in 5 to 9 days, to 30 days, to 90!
If they don't have it and don't expect it in, they should have listed it that way. Wow this has been a long entry. </p>Kernel Ninja2003-10-31T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2003-10-31:kernel-ninja.html<p>It's been a while, but I've gotten back into a tweaking mood. Since the last posting I've managed to recover to a
Debian build, build a working NAT to share the cable modem, and left for school in August. Linux has held up fairly
well; the dorms have started to get very nervous about Windows install insecurity. A friend of mine accidentally
left the network cable plugged in when he decided to replace XP with Win2k, and got Blaster as a result. They've had
him shut off for a week now, partially because the tech they sent out didn't actually remove Blaster. Why they
don't just block the port in the first place is beyond me. In contrast, Linux grabbed an IP lease early on, and
hasn't had any problems since (that weren't my own doing).</p>
<p>So earlier this week I decided to enroll in the graphics course next semester, and having an openGL stack the
professor likes lent me the motivation to try attacking the kernel again. For a moment I forgot that I ever built
my own kernel successfully, perhaps because QNX threw me for such loops when I decided to try it out on the second
disk (I was working with some developers to hex edit the boot sector -- eventually I decided that I could do
without QNX and its delicious GUI). Anyways, I ran into the same old problem with DHCP never getting a lease on my
own build; compiling the drivers into the kernel resulted in reversing the aliases, it seems. So what was eth0 is
eth1, and vice versa. To fix this, it's time to build modules. Then it's off to accelerated X and finally GL. Then
I can go online and brag about how I have the lowest and therefore least valuable glgears score ever!</p>
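With the drivers built as modules, the probe order can be pinned explicitly. On 2.4-era kernels that meant alias lines in /etc/modules.conf (2.6 moved this to /etc/modprobe.conf). The driver names below are made-up examples for a two-NIC setup, not my actual hardware:

```
# /etc/modules.conf (or /etc/modprobe.conf on 2.6 kernels)
# Pin each interface to its driver so eth0/eth1 can't swap on boot.
alias eth0 8139too   # hypothetical: Realtek card facing the cable modem
alias eth1 e100      # hypothetical: Intel card facing the LAN
```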
<p>Sadly, I was not able to use my picross idea as an AI project; instead I'm working with Thaddaeus Frogley's C++
Fight! framework. I swear, I didn't make up that name. He says his parents made it up, and I believe him, since
they are English, after all ;) So for now, the wx picross project is on the back burner. Interestingly enough,
Thaddaeus has some stuff with wxWindows on his page, mostly how to set it up in VS6. It's not his fault though,
he's spent the last few years working on Grand Theft Auto for XBox and other games, so he gets a get out of
DLL-Hell free card.</p>
<p>Oh, it's Halloween too. As usual for Kansas, the weather took a sharp cold turn just in time to ruin all the
children's day of candy feasting. I remember one time my brother went trick or treating in the snow as a ninja.
Well, at least, he was a ninja underneath the down overcoat. Stealth frozen ninja sneak attack! </p>Tweaking2003-07-24T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2003-07-24:tweaking.html<p>I once said that the end use of Linux is to customize Linux, after observing my roommate and a friend obsess over
little things like which screensavers should appear randomly and which shouldn't appear at all. I've tried to avoid
such vanity as tweaking options that only appear when I'm not using the computer, but I have been browsing around
lately at some of the various performance tools. I've been toying with hdparm, but it seems it's already maxed
out there.</p>
<p>I've also been trying to recompile my kernel, and I've been getting closer. I've now actually gotten a kernel to
boot, and most of the services to run. The hurdle right now is dhcp and dhclient. ifconfig reports the devices as
up and running, but the card connected to the cable modem never receives a lease. As you can imagine, debugging
and online support are difficult when you can't get online. To complicate matters it seems I've nuked the old
debian install; something about device drivers being overwritten I'm guessing.</p>
<p>Sometimes Debian pisses me off. GAIM has been acting up and crashing lately, so I thought I'd see if there's an
upgrade. I know that GAIM does update regularly, several of my friends actually run builds from CVS. The latest
version of GAIM in the unstable deb tree is about a month and two releases old. People bitch about Debian being
slow to package, this is why. So I grab the 20 meg tarball, and follow the instructions the FAQ suggests while
downloading. First problem: ./configure fails on a GLIB test. I've run into these sorts of problems before;
usually the hard part is figuring out what the package you need is named. The package search utility on the Debian
webpage is supposed to be helpful in this regard, but it's usually underpowered. None of the glib results I get
back are very helpful. I tell it to search descriptions and try again; the very last result is GAIM itself. From
this page I discover the true name: libglib2-dev. They've conveniently attached the lib prefix to all packages
regardless of redundancy. Whispering the true name of the devil into apt-get allows GAIM to configure, build, and
install. But now the computer beeps every 30 minutes or so, inexplicably. It takes me about a day to realize that
it's GAIM beeping about the online status of people in my list. After reading over the ./configure output again, it
seems I'm missing the Audio File Library. Again, the beauty of the Debian package search shines through; search
queries can only have a single word. I ask on #debian and someone suggests I should install libao. That doesn't
work. The name, it so happens, is libaudiofile-dev. Now this is somewhat aggravating, since searching for
"audiofile" returns No Results. Armed with this second library, I trudge onward. Everything appears to be in order
now, but I'm sure GAIM and Debian haven't let this victory go unnoticed. Even now, I'm sure the two of them are
independently plotting, awaiting the opportune time to strike...</p>
<p>And finally, Debian added some more themes to the base GNOME distribution. These actually make the system not look
like amateur crap! I think I've settled on a pretty combination of a few themes. </p>Summer of Defeat2003-06-27T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2003-06-27:summer-of-defeat.html<p>Looking back, that was a big entry last update. Since then I've tried out the QNX CD, and it's interesting but seems
a bit sluggish. It could be slow because it's running from the CD, but I have a feeling everything was in RAM, and
it was just the lack of hardware acceleration that made the GUI feel a bit unresponsive. Overall, QNX needs serious
work to overtake Windows or even Linux as my desktop of choice.</p>
<p>Downloaded the new Day of Defeat; it's pretty much better. Reminds me to update the games page... Done. They added
rocket launchers to the game. It was feared that it would break the gameplay much in the same way it's feared the
LAW will break CS. But for the most part the rocket is a useless weapon. You have to put it on your shoulder to
fire, which makes you move slower, and most of the explosion damage goes in a forward cone, so no quake 3 style
rocket jumping and other nonsense. Where it does come in handy is demolition. Most every map has been updated to
make the rocket worthwhile. You can blow up walls exposing sniper nests, open new routes, and demolish sandbag
bunkers that machine gunners might use. And sometimes you can use it to complete objectives like destroying tanks
and radio systems, since the rockets act as proxies for det paks.</p>
<p>Fun stuff, but its been distracting me a bit from other things. Like compiling a kernel. I've found a bit stronger
kernel compile in the apt-get archive, but I haven't yet found a way to compile my own =(. Or my small picross
game. I've decided to write a small game that runs in both Windows and Linux for fun. After much deliberation I've
chosen wxWindows. So far it's been great. I've finally managed to get it to compile on both platforms, and it works
as advertised. When I get a better version going, there might be binaries available for download.</p>
<p>Finally, school related stuff. I passed all my classes and even got As in all my CIS classes. This coming semester
looks to be pretty fun, and somewhat easier. In part I'm doing the wxWindows thing as practice, so I can have
something when I'm through with the semester and say, "I did this, and I am proud." Hopefully it will turn out
that way too =) </p>Finals Week2003-05-14T00:00:00-07:00Justin Duggertag:www.pwnguin.net,2003-05-14:finals-week.html<p>Halfway through finals week. Just finished my take home final, thought I should work on my webpage a bit for a
break. It's clearly been a while. In a few days school will be over for the year, and I haven't found a summer
job. All those companies with internships that said they'd know by May haven't gotten in touch with me, which is
business speak for "We have better things to do than inform you that we chose someone else (or decided not to hire
interns)." I interviewed with the CIS department last Tuesday, but they haven't called me either. I'll need to
email someone or call someone, to make sure I really didn't get the job before I leave for KC. Subletting a place
for the summer would be difficult to do if they actually do hire me. I guess it's another year of pretending to
look for a job while moping around the house. Sure, I can make a bunch of cool computer stuff over the summer, but
said cool stuff can't pay the rent.</p>
<p>In other arenas, Debian's been performing nicely, although I still can't compile a kernel to save my computer's
life. I've stumped lots of people with it. I guess this summer I'll run through it and try as generic of a kernel
as possible and work my way up. It will be time-consuming, but it seems that's the one thing I'll have plenty of. If
I come across some money it would be nice to try out QNX and see how it performs, but right now the ISO is just
sitting there because there isn't any space to install it to. Apparently the first entries to Deviant Art were QNX
themes, so I might see if they're compatible with the latest version.</p>
<p>E3 is going on and there's lots of interesting new games. Or rather, updated versions of old games. Star Fox,
F-Zero, 1080, and Sword of Mana all look pretty damn cool. Giftpia sounds sort of interesting, but it's starting to
sound too much like Animal Crossing (okay game, but overwhelming, really). Half-Life 2 is scheduled for release
around my birthday, and I'd imagine that Counter-Strike: Condition Zero and TF2 are being ported to HL2. It's going
to be interesting to see how the gaming community splits over this. There are still actively developed mods, like
Natural Selection (v1.1 should be out in about a week), for Half-Life. Unless Valve does something very impressive
regarding backwards compatibility, there's going to be a wide divide between people who want to remain with what
they have and those who want to ride the new release. Clearly, if you have a large following and a large amount of
material, moving over will be difficult. This is different from the Quakes in that Half-Life remains one of the most
popular games available. I suppose that Valve may push its Steam technology for mod developers who may want to
turn a dollar. If I get the time, it might be interesting to bring Science and Industry to HL2.</p>
<p>Well, it's been a blast of a semester; check out my CIS page if you wanna see some of the stuff I've done.
Highlights include a tomcat servlet system for a trivial university, and a cyclic static schedule generator just
begging to be made into a command-line tool. In the future I should consider labeling that stuff with a license
(BSD probably), but for now I'll just let implicit copyright handle things. </p>Birth of a Linux user2003-01-21T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2003-01-21:birth-of-a-linux-user.html<p>Back from break! Tried installing Gentoo after Win98 fried itself; now I'm
playing with Debian. So far it's going much smoother, although it feels odd not
knowing what's going on underneath the hood. I gotta say the Gentoo user docs
are very nice. Time to figure out how to customize and configure X. Maybe
sometime I'll try out different colorschemes for this website, like
<a href="http://www.kuro5hin.org/">kuro5hin</a>.</p>Winter Break Project2002-12-23T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2002-12-23:winter-break-project.html<p>Finished a simple project. The world now has a kuroneko wallpaper. Now that
crazy cat from Trigun can be yours to look at, day in day out! It appears I
should also work on the sidebar layout to keep it from moving further and
further down.</p>Winter Break Fun2002-12-13T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2002-12-13:winter-break-fun.html<p>Added a schedule for the coming spring. Time to take a break and play some DoD.</p>Work Complete!2002-12-12T00:00:00-08:00Justin Duggertag:www.pwnguin.net,2002-12-12:work-complete.html<p>Phase one of my website is complete. From here out I can focus on the more
aesthetic parts of things. Target number 1: the games page. Target number 2:
coloration.</p>