sitr.us

posts by Jesse Hallett

Kinesis Advantage with DSA keycaps

I now have a Kinesis Advantage keyboard for use at work. I have been feeling some wrist strain recently; and some of my coworkers were encouraging me to try one. So I borrowed a Kinesis for a week, and found that I really liked it. The contoured shape makes reaching for keys comfortable; I find the column layout to be nicer than the usual staggered key arrangement; and between the thumb-key clusters and the arrow keys, there are a lot of options for mapping modifier keys that are in easy reach.

Kinesis Advantage, before modification

But I really like the PBT keycaps on my Leopold. I would not enjoy going back to plain, old ABS. I also don’t want my keyboard to be just like every other Kinesis. So I decided to get replacement keycaps.

I did some research on buying PBT keycaps with the same profiles as the stock Kinesis keys. I assumed that I would end up getting blank keycaps - putting together a set with legends appropriate for a Kinesis seemed like it would be a painful undertaking, since there don’t seem to be any sets made specifically for the Kinesis.

Most keyboards - including the Kinesis Advantage - use what is called a DCS profile, where the keys in each row have different heights and angles. (That does not include laptop keyboards, or island-style keyboards such as the ones that Apple sells. Those are in their own categories.)

DCS family: medium profile, cylindrical top, sculptured. Image from Signature Plastics.
Input Nirvana on Geekhack has a post with a list of all necessary keycap sizes and profiles to reproduce the arrangement on a stock Kinesis. It is possible to order these individually from Signature Plastics; but their inventory for à la carte orders varies depending on what they have left over from production of large batches. When I checked, SP did not have any row 5 PBT keycaps available. I got the impression that building a custom DCS set would be somewhat difficult.

Then I saw prdlm2009 on Deskthority suggest that DSA profile keycaps work well on a Kinesis. DSA is a uniform profile - every key has the same height and angle. It makes everything much simpler when dealing with unusual keyboard layouts, or unusual keyboards.

DSA family: medium profile, spherical top, non-sculptured. Image from Signature Plastics.
DSA also features spherical tops. If you look at the keys on a typical keyboard, you can see that the top curves up on the left and right sides - as though someone had shaped them around a cylinder. The tops of DSA keys are spherical - as though shaped around a large marble. So the keys cup the fingertips from all sides.

Signature Plastics sells a variety of nice, blank DSA keycap sets. I did not order the optimal combination of keycap sets; but now I have a better idea of what that combination is. The key count on an Advantage is:

  • 56 – 1x keys (optionally including two homing keys)
  • 8 – 1.25x keys
  • 4 – 2x keys

1x, 2x, etc. refer to the widths of the keys - a 2x keycap is twice as wide as a standard key.

One can get everything except the 1.25x keys with one ErgoDox Base set and two Numpad sets. The Numpad sets seem to be the cheapest way to get all four 2x keycaps, along with additional 1x caps. The only set that includes 1.25x caps is the Standard Modifier set, which includes 7 of them. (So close!) I recommend ordering the 1.25x keycaps individually from the blank keycap inventory.

stock Kinesis keycaps (left), DSA keycaps (right)

The keycaps from SP are much thicker than the stock keycaps. And they are made from PBT plastic, which is denser than the more common keycap material, ABS. What I like most about PBT caps is their texture. The tops of the keys are usually slightly rough, somewhat pebbly. It gives a little bit of grippiness, and feels soothing on my fingers compared to featureless, flat plastic. I also think that the sound of PBT keys being pressed is nicer. It is slightly quieter, with a somewhat deeper tone.

All of the stock keycaps have been removed, revealing Cherry MX Brown switches.

The Kinesis Advantage comes with either Cherry MX Brown or Cherry MX Red switches.

For anyone wondering how to remove keycaps from a keyboard with Cherry MX switches, here is a video. What the video does not mention is that it is a good idea to wiggle the keycap puller while pulling up on the keycap. That helps to avoid pulling with too much force, which could break a switch.

I have another keyboard that also has Cherry MX Brown switches, and I really liked the change in key feel after installing o-rings. O-rings make typing a little quieter, and add some springiness to the bottom of the key travel. A tradeoff is that they reduce the length of key travel a bit.

installing o-rings in the new keycaps

I used 40A-R o-rings from WASD, which are relatively thick and soft. But when I tried these out with the DSA keycaps I could not discern any difference between a key with an o-ring and one without.

Comparing the underside of a DSA keycap to a typical DCS keycap reveals the issue:

blank DSA keycaps from SP (top), keycaps from a Leopold FC660M (bottom)

DCS keycaps have cross-shaped supports under the cap, which contact the top of the switch housing when the key is fully depressed. O-rings sit between those supports and the switch housing, absorbing some force from contact. But the DSA keycaps lack those supports. That means that the switch can reach the bottom of its travel before the underside of the keycap contacts the switch housing.

I found that doubling up o-rings pushed the rubber high enough to be effective. But I was concerned that two o-rings shortened key travel too much and introduced too much squishiness. In the end I left the o-rings out entirely. I may take another shot using either thinner, firmer o-rings, or with small washers in place of a second o-ring.

two o-rings installed in one keycap

When I ordered my keycaps I got one ErgoDox Base set and one ErgoDox Modifier set. I did not do enough checking - I assumed that the 1.5x keys in the Modifier set would fit in the Kinesis. But it turns out that the keys in the leftmost and rightmost columns of the Kinesis take 1.25x keycaps. The larger keycaps do not fit.

Whoops! That was supposed to be a 1.25x key, not a 1.5x.

I have ordered some appropriately sized keycaps. In the meantime, I am using 1x keycaps in the 1.25x positions.

stock 1.25x Tab key next to its intended, blank replacement

Even though the DSA keycaps are not the same shape as the stock caps, they fit quite well on the Kinesis. There are just two slightly problematic spots. The photo below shows the one key that comes into contact with the edge of the keyboard case when it is not depressed. Thankfully the operation of the key does not seem to be affected.

The fit is tight in this corner.

Due to small differences in switch positioning, the key in the same position in the other well has a little bit of clearance.

The other problem is that two of my 2x keys overlap very slightly. When I press the one on the right there is sometimes an extra click as it pushes past its neighbor.

There is not quite enough space between these two keys.

I am thinking of sanding down the corners of these keys a little bit to fix the problem.

This is another case where there is no problem with the keys in the same positions on the other side of the board. It seems that the switches in the left thumb cluster just happen to be a little too close together on my board.

All done!

Since I had to use 1x keycaps for the leftmost and rightmost columns, I ended up not having enough keycaps to replace the two keycaps at the top of each thumb cluster. But I think that having tall keycaps there makes them easier to press - those positions are a bit difficult to reach otherwise. So I may just keep the stock caps on those keys. Or I might try to get tall, DCS profile, PBT caps for those positions.

closeup of one of the wells to show key spacing

The other positions where the DSA profile does not work so well are the four keys in the bottom row of each well. I curl my fingers down to reach those; and I tend to either hit the edges of the keys, or to press them with my fingernail instead of with my finger. The stock keycaps for those positions are angled toward the center of the well, making it easier to reach the tops of the keys.

Those points aside, I am very pleased with how these new keycaps worked out! The DSA profile is quite comfortable. I love the texture of the PBT keycaps. And they make a more pleasant sound than the thinner ABS caps that came with the board.

shoe for science!

Category Theory proofs in Idris

Idris is a programming language with dependent types. It is similar to Agda, but hews more closely to Haskell. The goal of Idris is to bring dependent types to general-purpose programming. It supports multiple compilation targets, including C and JavaScript.

Dependent types provide an unprecedented level of type safety. A quick example is a type-safe printf implementation (video). They are also useful for theorem proving. According to the Curry-Howard correspondence, mathematical propositions can be represented in a program as types. An implementation that satisfies a given type serves as a proof of the corresponding proposition. In other words, inhabited types represent true propositions.

The Curry-Howard correspondence applies to every language with type checking. But the type systems in most languages are not expressive enough to build very interesting propositions. Dependent types, on the other hand, can express quantification - both universal (∀) and existential (∃). This makes it possible to translate a lot of interesting math into machine-verified code.
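To make the propositions-as-types idea concrete, here is a tiny proof sketched in Lean notation (the Idris version is similar in spirit): the type of andSwap is the proposition "A and B implies B and A", and the function inhabiting that type is the proof.

```lean
-- A ∧ B → B ∧ A, proved by swapping the components of the pair.
theorem andSwap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.2, h.1⟩
```

If no such function could be written, the type would be uninhabited and the proposition unproven.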

Functional data structures in JavaScript with Mori

I have a long-standing desire for a JavaScript library that provides good implementations of functional data structures. Recently I found Mori, and I think that it may be just the library that I have been looking for. Mori packages data structures from the Clojure standard library for use in JavaScript code.

Functional data structures

A functional data structure (also called a persistent data structure) has two important qualities: it is immutable and it can be updated by creating a copy with modifications (copy-on-write). Creating copies should be nearly as cheap as modifying a comparable mutable data structure in place. This is achieved with structural sharing: pointers to unchanged portions of a structure are shared between copies so that memory need only be allocated for changed portions of the data structure.

A simple example is a linked list. A linked list is an object - specifically a list node - with a value and a pointer to the next list node. (Eventually you get to the end of the list, where there is a node that points to the empty list.) Prepending an element to the front of such a list is a constant-time operation: you just create a new list node with a pointer to the start of the existing list. When lists are immutable there is no need to actually copy the original list. Removing an element from the front of a list is also a constant-time operation: you just return a pointer to the second element of the list. Here is a slightly more-detailed explanation.
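The constant-time prepend and the structural sharing described above can be sketched in a few lines of plain JavaScript (a toy illustration of the idea, not Mori's implementation):

```javascript
// A persistent singly linked list. Each node is frozen, so every
// "update" builds a new node and shares the rest of the list.
var EMPTY = null;

function cons(value, rest) {          // O(1) prepend
    return Object.freeze({ value: value, rest: rest });
}

function tail(list) {                 // O(1) removal from the front
    return list.rest;
}

var shared = cons(2, cons(3, EMPTY)); // the list (2, 3)
var a = cons(1, shared);              // (1, 2, 3)
var b = cons(0, shared);              // (0, 2, 3)
```

Here a and b are distinct lists, but both point at the same (2, 3) nodes - nothing was copied.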

Lists are just one example. There are functional implementations of maps, sets, and other types of structures.

Rich Hickey, the creator of Clojure, describes functional data structures as decoupling state and time. (Also available in video form.) The idea is that code that uses functional data structures is easier to reason about and to verify than code that uses mutable data structures.

Functional Reactive Programming in JavaScript

I had a great time at NodePDX last week. There were many talks packed into a short span of time and I saw many exciting ideas presented. One topic that seemed particularly useful to me was Chris Meiklejohn’s talk on Functional Reactive Programming (FRP).

I have talked and written about how useful promises are. See Promise Pipelines in JavaScript. Promises are useful when you want to represent the outcome of an action or a value that will be available at some future time. FRP is similar, except that it deals with streams of recurring events and dynamic values.

Here is an example of using FRP to subscribe to changes to a text input. This creates an event stream that could be used for a typeahead search feature:

var inputs = $('#search')
    .asEventStream('keyup change')
    .map(function(event) { return event.target.value; })
    .filter(function(value) { return value.length > 2; });

var throttled = inputs.throttle(500 /* ms */);

var distinct = throttled.skipDuplicates();

This creates an event stream from all keyup and change events on the given input. The stream is transformed into a stream of strings matching the value of the input when each event occurs. Then that stream is filtered so that subscribers to inputs will only receive events if the value of the input has a length greater than two.
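The example above uses Bacon.js. To see how little machinery such combinators need, here is a toy event stream with map and filter in plain JavaScript (an illustration only, not how Bacon.js is implemented):

```javascript
// A tiny push-based event stream. Each combinator wraps an output
// stream around a subscription to the input stream.
function Stream() {
    this.listeners = [];
}
Stream.prototype.push = function(value) {
    this.listeners.forEach(function(listener) { listener(value); });
};
Stream.prototype.subscribe = function(listener) {
    this.listeners.push(listener);
};
Stream.prototype.map = function(f) {
    var out = new Stream();
    this.subscribe(function(v) { out.push(f(v)); });
    return out;
};
Stream.prototype.filter = function(pred) {
    var out = new Stream();
    this.subscribe(function(v) { if (pred(v)) out.push(v); });
    return out;
};

// Mirror the input-stream example above with fake keyup events.
var keyups = new Stream();
var values = keyups
    .map(function(event) { return event.target.value; })
    .filter(function(value) { return value.length > 2; });

var received = [];
values.subscribe(function(v) { received.push(v); });

keyups.push({ target: { value: 'ab' } });   // filtered out
keyups.push({ target: { value: 'abc' } });  // delivered to subscribers
```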

Monkey patching document.write()

This is one of the crazier workarounds that I have implemented. I was working on a web page that embeds third-party widgets. The widgets are drawn in the page document - they do not get their own frames. And sometimes the widgets are redrawn after page load.

We had a problem with one widget invoking document.write(). In case you are not familiar with it, if that method is called while the page is rendering it inserts content into the DOM immediately after the script tag in which the call is made. But if document.write() is called after page rendering is complete it erases the entire DOM. When this widget was redrawn after page load it would kill the whole page.

The workaround we went with was to disable document.write() after page load by replacing it with a wrapper that checks whether the jQuery ready event has fired.

(function() {
    var originalWrite = document.write;
    document.write = function() {
        if (typeof jQuery !== 'undefined' && jQuery.isReady) {
            if (typeof console !== 'undefined' && console.warn) {
                console.warn("document.write called after page load");
            }
        }
        else {
            // In IE before version 8 `document.write()` does not
            // implement Function methods, like `apply()`.
            return Function.prototype.apply.call(
                originalWrite, document, arguments
            );
        }
    };
})();

The new implementation checks the value of jQuery.isReady and delegates to the original document.write() implementation if the page is not finished rendering yet. Otherwise it does nothing other than to output a warning message.

Promise Pipelines in JavaScript

Promises, also known as deferreds or futures, are a wonderful abstraction for manipulating asynchronous actions. Dojo has had Deferreds for some time. jQuery introduced its own Deferreds in version 1.5 based on the CommonJS Promises/A specification. I’m going to show you some recipes for working with jQuery Deferreds. Use these techniques to turn callback-based spaghetti code into elegant declarative code.

The basics of jQuery Deferreds

A Deferred is an object that represents some future outcome. Eventually it will either resolve with one or more values if that outcome was successful; or it will fail with one or more values if the outcome was not successful. You can get at those resolved or failed values by adding callbacks to the Deferred.

In jQuery’s terms a promise is a read-only view of a deferred.

Here is a simple example of creating and then resolving a promise:

function fooPromise() {
    var deferred = $.Deferred();

    setTimeout(function() {
        deferred.resolve("foo");
    }, 1000);

    return deferred.promise();
}

Callbacks can be added to a deferred or a promise using the .then() method. The first callback is called on success, the second on failure:
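For example, using the fooPromise function defined above (this assumes jQuery 1.5 or later is loaded on the page):

```javascript
// The first function runs if the deferred is resolved; the second
// runs if it is rejected.
fooPromise().then(
    function(value) {
        console.log("resolved with:", value); // logs "foo" after one second
    },
    function(error) {
        console.log("failed with:", error);
    }
);
```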

Installing a custom ROM on the Transformer Prime: A start-to-finish guide

This guide provides step-by-step instructions for installing the Virtuous Prime community ROM on your Asus Transformer Prime TF201 tablet. This guide will be useful to you if you do not have root access to your tablet.

Be aware that following the instructions here will void your warranty and will wipe all of the data on your tablet. There is also a danger that you might brick your tablet. Proceed at your own risk.

So, why would you want to install a custom ROM on your tablet? In my case I wanted to gain root access, which allows one to do all sorts of nifty things. Community-made ROMs are also often customized to make the Android experience more pleasant for power users. And choosing your own ROM means that you are no longer dependent on the company that sold you your device to distribute firmware updates in a timely fashion. But if you are reading this then you probably already know why you want to install a custom ROM - so let’s get on to the next step.

Cookies are bad for you: Improving web application security

Most web applications today use browser cookies to keep a user logged in while she is using the application. Cookies are a decades-old device and they do not stand up well to security threats that have emerged on the modern web. In particular, cookies are vulnerable to cross-site request forgery. Web applications can be made more secure by using OAuth for session authentication.

This post is based on a talk that I gave at Open Source Bridge this year. The slides for that talk are available here.

cookie authentication

When a user logs into a web application the application server sets a cookie value that is picked up by the user’s browser. The browser includes the same cookie value in every request sent to the same host until the cookie expires. When the application server receives a request it can check whether the cookies attached to it contain a value that identifies a specific user. If such a cookie value exists then the server can consider the request to be authenticated.
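A server-side sketch of that check in plain JavaScript (the cookie name and session store here are hypothetical; a real application would use a hardened framework):

```javascript
// Parse a Cookie header of the form "name=value; name2=value2".
function parseCookies(header) {
    var cookies = {};
    (header || '').split(';').forEach(function(pair) {
        var idx = pair.indexOf('=');
        if (idx > 0) {
            cookies[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
        }
    });
    return cookies;
}

// sessions maps opaque session ids to user ids (hypothetical store).
// Returns the authenticated user, or null for anonymous requests.
function authenticatedUser(req, sessions) {
    var cookies = parseCookies(req.headers.cookie);
    return sessions[cookies.session_id] || null;
}
```

If authenticatedUser returns null, the server treats the request as unauthenticated.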

attacks that target browser authentication

There are many types of attacks that can be performed against a web application. Three that specifically target authentication between the browser and the server are man-in-the-middle (MITM), cross-site request forgery (CSRF), and cross-site scripting (XSS). Plain cookie authentication is vulnerable to all three.

How Mobile Safari emulates mouse events

When you are adapting web apps to touchscreen devices particular challenges come up around events like mouseover and mouseout. Touchscreen devices like the iPad do not have a cursor, so the user cannot exactly move the mouse over an HTML element. However, Mobile Safari, the web browser that comes with the iPhone and iPad, has a fallback for websites that require hovering or cursor movement.

Usually when you tap on a link or other clickable element Mobile Safari translates that into a regular click event. The browser also produces some touch events that do not exist in a lot of browsers. But from the perspective of a web page that was not designed with a touchscreen in mind, what you get is a plain click. More specifically, the browser fires mousedown, mouseup, and click in that order. But if a clickable element also does something on mouseover then tapping on that element will trigger a mouseover event instead of a click. Tapping on the same element again will produce a click event. A random example of a page that exhibits this behavior is the schedule page from the Open Source Bridge website. Try tapping on session titles and see what happens.

Mobile Safari will only produce mouse events when the user taps on a clickable element, like a link. You can make an element clickable by adding an onClick event handler to it, even if that handler does nothing. On tap Mobile Safari fires the events mouseover, mousemove, mousedown, mouseup, and click in that order - with some caveats which are explained below. Those events all fire together after the user lifts her finger. You might expect the mousedown event to fire as soon as the user presses her finger to the screen - but it does not. When the user taps on another clickable element the browser fires a mouseout event on the first element in addition to firing the aforementioned events on the new element.
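The empty-handler trick can be this small (the element id here is hypothetical):

```javascript
// An empty click handler is enough for Mobile Safari to treat the
// element as clickable, so taps on it fire the mouse event sequence.
document.getElementById('session-title').addEventListener('click', function() {});
```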

CouchDB Notes

Recently I gave a talk at a Portland Ruby Brigade meeting on CouchDB, a document-oriented database. I thought I would share my notes from that talk. In some respects this was a followup to an earlier talk that Igal Koshevoy gave comparing various post-relational databases. Igal also wrote some additional notes on my talk.

In summary, some of the distinguishing features of CouchDB are:

  • Schema-less data store holds documents containing arbitrary JSON data.
  • Incrementally updated map-reduce views provide fast access to data, support powerful data processing, and eliminate lookup penalties for data in large or deeply nested documents.
  • Map-reduce views - which are, again, incrementally updated - provide fast access to aggregate data, such as sums or averages of document attributes.
  • Schema-less design means no schema migrations are ever required. And new map-reduce views can be installed with no downtime.
  • “Crash-only” design protects data integrity in almost every crash scenario. No recovery process is required when rebooting a crashed database server.
  • Lock-free design means that read requests never have to wait for other read or write requests to finish. Writes are only serialized at the point where data is actually written to the disk.
  • Integrated, robust master-master replication with automatic conflict handling.
  • MVCC, or “optimistic locking”, prevents data loss from multiple writes to the same document from different sources.
  • RESTful interface makes it easy to integrate CouchDB with any environment that speaks HTTP.
  • Documents can contain binary attachments. Attachment support combined with the HTTP interface means that CouchDB can serve HTML, JavaScript, images, and anything else required to host a web application directly from the database.

More detailed information on all of the above points can be found in CouchDB’s technical overview.
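To give a taste of what a view looks like, here is a sketch of a map/reduce pair (the field names are hypothetical; CouchDB runs the map function once per document, and each emit call adds a row to the view):

```javascript
// Map: emit one row per matching document, keyed by type.
function map(doc) {
    if (doc.type && typeof doc.amount === 'number') {
        emit(doc.type, doc.amount);
    }
}

// Reduce: sum the emitted values for each key. CouchDB also accepts
// the built-in "_sum" reduce in place of a hand-written one.
function reduce(keys, values, rereduce) {
    return values.reduce(function(a, b) { return a + b; }, 0);
}
```

Querying the view with group=true would then return one total per document type.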

Some of the downsides:

  • Writes and single-document lookups are slower than other databases due to HTTP overhead and frequent disk access.
  • CouchDB optimizes CPU and RAM use by using lots of disk space. The same data set will take up a lot more space in CouchDB than in other database systems.
  • You must create map-reduce views in advance for any queries you want to run. SQL users are used to processing data at query time; but this is not allowed by the CouchDB design (assuming you are not using temporary views in production, which you should not do.)
  • There is a serious learning curve when learning to think in terms of map-reduce views.
  • Map-reduce views, though very powerful, are not as flexible as SQL queries. There may be cases where it is necessary to push data processing to an asynchronous job or to the client.
  • CouchDB is a young project and its API is undergoing rapid changes.
  • Documentation can be sparse - especially when very new features are involved.