I'm getting married this June. (For the Debian folks, the Ghillie shirt and vest just arrived to go with the kilt. My thanks go out to the lunch table at Debconf that made that suggestion. Formal Scottish dress would not have fit, but I wanted something to go with the kilt.)
Music and dance have been an important part of my spiritual journey. Dance has also been an important part of the best weddings I attended. So I wanted dance to be a special part of our celebration. I put together a playlist for my 40th birthday; it was special and helped set the mood for the event. Unfortunately, as I started looking at what I wanted to play for the wedding, I realized I needed to do better. Some of the songs were too long. Some of them really felt like they needed a transition. I wanted a continuous mix, not a playlist.
I'm blind. I certainly could use two turntables and a mixer--or at least I could learn how to do so. However, I'm a kid of the electronic generation, and that's not my style. So, I started looking at DJ software. With one exception, everything I found was too visual for me to use.
I've used Nama before to put together a mashup. It seemed like Nama offered almost everything I needed. Unfortunately, there were a couple of problems. Nama would not be a great fit for a live mix: you cannot add tracks or effects into the chain without restarting the engine. I didn't strictly need live production for this project, but I wanted to look at that long-term. At the time of my analysis, I thought that Nama didn't support tempo-scaling tracks. For that reason, I decided I was going to have to write my own software. Later I learned that you can adjust the sample rate on a track import, which is more or less good enough for tempo scaling. By that point I already had working code.
I wanted a command line interface. I wanted BPM and key detection; it looked like Mixxx was open-source software with good support for that. Based on my previous work, I chose Csound as the realtime sound backend.

Where I got


I'm proud of what I came up with. I managed to stay focused on my art rather than falling into the trap of focusing too much on the software. I got something that allows me to quickly explore the music I want to mix, but also managed to accomplish my goal and come up with a mix that I'm really happy with. As a result, at the current time, my software is probably only useful to me. However, it is available under the GPL V3. If anyone else would be interested in hacking on it, I'd be happy to spend some time documenting and working with them.
Here's a basic description of the approach.

  • You are editing a timeline that stores the transformations necessary to turn the input tracks into the output mix.
  • There are 10 mixer stereo channels that will be mixed down into a master output.
  • There are an unlimited number of input tracks. Each track is associated with a given mixer channel. Tracks are placed on the timeline at a given start point (starting from a given cue point in the track) and run for a given length. During this time, the track is mixed into the mixer channel. Associated with each track is a gain (volume) that controls how the track is mixed into the mixer channel. Volumes are constant per track.
  • Between the mixer channel and the master is a volume fader and an effect chain.
  • Effects are written in Csound code. Being able to easily write Csound effects is one of the things that made me more interested in writing my own than in looking at adding better tempo scaling/BPM detection to Nama.
  • Associated with each effect are three sliders that give inputs to the effect. Changing the number of mixer channels and effect sliders is an easy code change. However it'd be somewhat tricky to be able to change that dynamically during a performance. Effects also get an arbitrary number of constant parameters.
  • Sliders and volume faders can be manipulated on the timeline. You can ask for a linear change from the current value to a target over a given duration starting at some point. So I can ask for the amplitude to move from 0 to 127 over, say, 2 seconds at the point where I want to mix in a track. You express slider manipulation in terms of the global timeline. However, it is stored relative to the start of the track. That is, if you have a track fade out at a particular musical phrase, the fade out will stay with that phrase even if you move the cue point of the track or move where the track starts on the timeline. This is not what you want all the time, but my experience with Nama (which does it using global time) suggests that I at least save a lot of time with this approach. (See the sketch after this list.)
  • There is a global effect chain between the output of the final mixer and the master output. This allows you to apply distortion, equalization or compression to the mix as a whole. The sliders for effects on this effect chain are against global time not a specific track.
  • There's a hard-coded compressor on the final output. I'm lazy and I needed it before I had effect chains.
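To make that more concrete, here is a small sketch of how the timeline pieces described above could hang together. This is not the actual project code; the class names and fields are mine, and the Csound rendering, the effect chains and the mixer channels themselves are left out.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SliderMove:
    # Stored relative to the start of the track so that a fade stays with
    # the musical phrase even if the track's cue point or timeline position
    # later moves.
    offset: float     # seconds after the track begins playing
    duration: float   # length of the linear ramp
    target: float     # slider value (0-127) reached at the end of the ramp

@dataclass
class Track:
    path: str
    channel: int      # which stereo mixer channel this track feeds
    start: float      # where on the global timeline the track begins
    cue: float        # offset into the source file where playback starts
    length: float     # how long the track plays
    gain: float       # constant per-track gain into the mixer channel
    fader_moves: List[SliderMove] = field(default_factory=list)

    def fade(self, timeline_time: float, duration: float, target: float) -> None:
        """Request a linear fader change, expressed in global timeline time
        but stored relative to the track start."""
        self.fader_moves.append(
            SliderMove(timeline_time - self.start, duration, target))

# Usage: fade a track in over 2 seconds at the point where it enters the mix.
intro = Track(path="opening.wav", channel=1, start=30.0, cue=12.5,
              length=95.0, gain=0.8)
intro.fade(timeline_time=30.0, duration=2.0, target=127)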

There's some preliminary support for a MIDI controller I was given, but I realized that coding that wasn't actually going to save me time, so I left it. This was a really fun project. I managed to tell a story for my wedding that is really important for me to tell. I learned a lot about what goes into putting together a mix. It's amazing how many decisions go into even simple things like a pan slider. It was also great that there is free software for me to build on top of. I got to focus on the part of the problem I wanted to solve. I was able to reuse components for the realtime sound work and for analysis like BPM detection.
A new programmer asked on a work chat room how timezones are handled in databases. He asked if it was a good idea to store things in UTC. The senior programmers all laughed as we told some of our horror stories with timezones. Yes, UTC is great; if only it were that simple.
About a week later I was designing the schema for a blue sky project I'm implementing. I had to confront time in all its Pythonic horror.
Let's start with the datetime.datetime class. Datetime objects optionally include a timezone. If no timezone is present, several methods, such as timestamp, treat the object as a local time in the system's timezone. The timestamp method returns a POSIX timestamp, which is always expressed in UTC, so knowing the input timezone is important. The now method constructs such an object from the current time.
However other methods act differently. The utcnow method constructs a datetime object that has the UTC time, but is not marked with a timezone. So, for example datetime.fromtimestamp(datetime.utcnow().timestamp()) produces the wrong result unless your system timezone happens to have the same offset as UTC.
It's also possible to construct a datetime object that includes a UTC time and is marked as having a UTC time. The utcnow method never does this, but you can pass the UTC timezone into the now method and get that effect. As you'd expect, the timestamp method returns the correct result on such a datetime.
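Here's a minimal illustration of the trap, using only the standard library (the number printed is your local UTC offset in seconds, zero only if your system timezone happens to be UTC):

from datetime import datetime, timezone

aware = datetime.now(timezone.utc)    # current instant, marked as UTC
naive = aware.replace(tzinfo=None)    # same wall-clock digits, marker dropped

# timestamp() treats the naive value as local time, so the two POSIX
# timestamps disagree by the local UTC offset:
print(aware.timestamp() - naive.timestamp())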
Now enter SQLAlchemy, one of the more popular Python ORMs. Its DATETIME type has an argument that tries to request a column capable of storing a timezone from the underlying database. You aren't guaranteed to get this though; some databases don't provide that functionality. With PostgreSQL, I do get such a column, although something in SQLAlchemy is not preserving the timezones (it is correctly adjusting the time, though). That is, I'll store a UTC time in an object, flush it to my session, and then read back the same time represented in my local timezone (marked as my local timezone). You'd think this would be safe.
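For concreteness, here's a minimal sketch of the kind of mapped class and session I'm talking about. The names (Event, date_col) are illustrative rather than my project's schema, but the snippet below uses obj, session and date_col in exactly this shape.

from sqlalchemy import create_engine, Column, Integer, DateTime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Event(Base):
    __tablename__ = 'events'
    id = Column(Integer, primary_key=True)
    # timezone=True asks for a timezone-capable column; PostgreSQL provides
    # one, SQLite does not.
    date_col = Column(DateTime(timezone=True))

engine = create_engine('sqlite://')   # swap in a PostgreSQL URL to compare
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
obj = Event()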
Enter SQLite. SQLite makes life hard for people wanting to store time; it seems to want to store things as strings. That's fairly incompatible with storing a timezone and doing any sort of comparisons on dates. SQLAlchemy does not try to store a timezone in SQLite. It just trims any timezone information from the datetime. So, if I do something like
d = datetime.now(timezone.utc)
obj.date_col = d
session.add(obj)
session.flush()
assert obj.date_col == d # fails
assert obj.date_col.timestamp() == d.timestamp() # fails
assert d == obj.date_col.replace(tzinfo=timezone.utc) # finally succeeds

There are some unfortunate consequences of this. If you mark your datetimes with timezone information (even if it is always the same timezone), whether two datetimes representing the same instant compare equal depends on whether the objects have been flushed to the session yet. If you don't mark your datetimes with timezones, then you give up storing timezone information on databases that do support it.
At least if you use only the methods we've discussed so far, you're reasonably safe if you use local time everywhere in your application and don't mark your datetimes with timezones. That's undesirable because as our new programmer correctly surmised, you really should be using UTC. This is particularly true if users of your database might span multiple timezones.
You can use UTC time and not mark your objects as UTC. This will give the wrong data with a database that actually does support timezones, but will sort of work with SQLite. You need to be careful never to convert your datetime objects into POSIX time as you'll get the wrong result.
It turns out that my life was even more complicated because parts of my project serialize data into JSON. For that serialization, I've chosen ISO 8601. You've probably seen that format: '2017-04-09T18:17:27.340410+00:00'. Datetime provides the convenient isoformat method to print timestamps in the ISO 8601 format. If the datetime has a timezone indication, it is included in the ISO formatted string. If not, then no timezone indication is included. You know how I mentioned that datetime treats objects without a timezone marker as local time? Yeah, well, that's not how the iso8601 module's parser sees it: UTC all the way, baby! Its results always include timezone markers, and strings without one are assumed to be UTC. So, if you use datetime to print a timestamp without a timezone marker and then read that back in to construct a new datetime on the deserialization side, then you'll get the wrong time. OK, so mark things with timezones then. Well, if you use local time, then the time you get depends on whether you print the ISO string before or after session flush (before or after SQLAlchemy trims the timezone information as it goes to SQLite).
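Here's a small demonstration of that round-trip problem, assuming the iso8601 module's default of treating unmarked strings as UTC:

from datetime import datetime
import iso8601

local_naive = datetime.now()    # local wall clock, no timezone attached
s = local_naive.isoformat()     # e.g. '2017-04-09T14:17:27.340410' -- no offset

parsed = iso8601.parse_date(s)  # comes back marked +00:00
# The wall-clock digits survived, but the instant they denote has shifted
# by the local UTC offset relative to what was serialized:
print(local_naive, parsed)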
It turns out that I had the additional complication of one side of my application using SQLite and one side using PostgreSQL. Remember how I mentioned that something between SQLAlchemy and PostgreSQL was recasting my times in the local timezone (although keeping the instant the same)? Well, consider how that's going to work. I serialize with the timezone marker on the PostgreSQL side. I get an ISO 8601 local time marked with the correct timezone marker. I deserialize on the SQLite side. Before session flush, I get a local time marked as local time. After session flush, I get a local time with no marking. That's bad. If I further serialize on the SQLite side, I'll get that local time incorrectly marked as UTC. Moreover, all the times being locally generated on the SQLite side are UTC, and as we explored, SQLite really only wants one timezone in play.
I eventually came up with the following approach:

  1. If I find myself manipulating a time without a timezone marking, assert that its timezone is UTC not localtime.

  2. Always use UTC for times coming into the system.

  3. If I'm generating an ISO 8601 time from a datetime that has a timezone marker in a timezone other than UTC, represent that time as a UTC-marked datetime adjusting the time for the change in timezone.
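In code, those rules boil down to a small helper that everything passes through. This is a sketch of the idea rather than the exact code in my project:

from datetime import datetime, timezone

def to_utc(dt):
    """Normalize a datetime to an aware UTC value (rules 1-3 above)."""
    if dt.tzinfo is None:
        # Rule 1: naive values are only allowed, by convention, to be UTC
        # (for example, something read back from SQLite after a flush).
        return dt.replace(tzinfo=timezone.utc)
    # Rule 3: anything marked with another zone is converted to UTC.
    return dt.astimezone(timezone.utc)

def to_iso(dt):
    """Serialize for JSON as ISO 8601, always with an explicit UTC offset."""
    return to_utc(dt).isoformat()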


This is way too complicated. I think that both datetime and SQLAlchemy's SQLite time handling have a lot to answer for. I think SQLAlchemy's core time handling may also have some to answer for, but I'm less sure of that.
Previously, I wrote about my project to create an audio depiction of network traffic. In this second post, I explore how I model aspects of the network that will be captured in the audio representation. Before getting started, I'll pass along a link. This is not the first time someone has tried to put sound to packets flying through the ether: I was pointed at Peep. I haven't looked at Peep, but will do so after I finish my own write-up. Not being an academic, I feel no obligation to compare and contrast my work to others. :-)
I started with an idea of what I'd like to hear. One of my motivations was to explore some automated updates we run at work. So, I was hoping to capture the initial DNS and ARP traffic as the update discovered the systems it would contact. Then I was hoping to capture the ssh and other traffic of the actual update.

To Packet or Stream


One of the simplest things to do would be to model individual network packets. For DNS I chose that approach. However, I was dubious that a packet-based model would capture the aspects of TCP streams I typically care about. I care about the source and destination (both address and port), of course. But I also care about how much traffic is being carried over the stream and about the condition of the stream. Are there retransmits? Are there a bunch of unanswered SYNs? What I don't care about is the actual distribution of packets. Also, a busy TCP stream can generate thousands of packets a second. I doubted my ability to distinguish thousands of sounds a second, especially while trying to convey enough information to carry stream characteristics like overall traffic volume.
So, for TCP, I decided to model some characteristics of streams rather than individual packets.
For DNS, I decided to represent individual requests/replies.
I came up with something clever for ARP. There, I model the request/reply as an outstanding request. A lot of unanswered ARPs can be a sign of a scan or a significant problem. The mournful sound of a TCP stream trailing off into an unanswered ARP as the cache times out on a broken network is certainly something I'd like to capture. So, I track when an ARP request is sent and when/if it is answered.
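The bookkeeping for that is simple. A sketch, with made-up names rather than the project's actual code, looks something like this:

import time

ARP_TIMEOUT = 1.0   # seconds before an unanswered request counts as a timeout
outstanding = {}    # (sender_ip, target_ip) -> time the request was seen

def saw_request(sender_ip, target_ip, now=None):
    outstanding[(sender_ip, target_ip)] = now if now is not None else time.time()

def saw_reply(sender_ip, target_ip):
    # A reply from B to A answers A's question about B, so the key is reversed.
    outstanding.pop((target_ip, sender_ip), None)

def expired(now=None):
    """Yield (sender, target) pairs whose requests have gone unanswered."""
    now = now if now is not None else time.time()
    for key, when in list(outstanding.items()):
        if now - when > ARP_TIMEOUT:
            del outstanding[key]
            yield key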

Sound or Music


I saw two approaches. First, I could use some sound to represent streams. As an example, a running diesel engine could make a great representation of a stream. The engine speed could represent overall traffic flow. There are many opportunities for detuning the engine to represent various problems that can happen with a stream. Perhaps using stereo separation and slightly different fundamental frequencies I could even represent a couple of streams and still be able to track them.
However, at least with me as a listener, that's not going to scale to a busy network. The other option I saw was to try and create melodic music with various musical phrases modified as conditions within the stream or network changed. That seemed a lot harder to do, but humans are good at listening to complicated music.
I ended up deciding that at least for the TCP streams, I was going to try and produce something more musical than sound. I was nervous: I kept having visions of a performance of "Peter and the Wolf" with different instruments representing all the characters that somehow went dreadfully wrong.
As an aside, the decision to approach music rather than sound depended heavily on what I was trying to capture. If I'm modeling more holistic properties of a system--for example, total network traffic without splitting into streams--I think parameterized sounds would be a better approach.
The decision to approach things musically affected the rest of the modeling. Somehow I was going to need to figure out notes to play. I'd already rejected the idea of modeling packets, so I wouldn't simply be able to play notes when a packet arrived.

Energy Decay


As I played with various options, I realized that the critical challenge would be figuring out how to focus the listener's attention on the important aspects of what was going on. Clutter was the great enemy. My job would be figuring out how to spend sound wisely. When something interesting happened, that part of the model should get more focus--more of the listener's energy.
Soon I found myself thinking a lot about managing the energy of network streams. I imagined streams getting energy when something happened, and spending that energy to convey that interesting event to the listener. Energy needed to accumulate fast enough that even low-traffic streams could be noticed. Energy needed to be spent fast enough that old events were not taking listener focus from new, interesting things going on. However, if the energy were spent slowly enough, then network events could be smoothed out to give a better picture of the stream rather than individual packets.
This concept of managing some decaying quantity and managing the rate of decay proved useful at multiple levels of the model.
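Here's a sketch of the decaying-energy idea, written as a simple discrete-time filter. The class and its half-life parameter are illustrative, not the project's actual code:

class Energy:
    """Energy deposited by events, decaying exponentially over time."""

    def __init__(self, half_life):
        self.half_life = half_life   # seconds for the level to drop by half
        self.level = 0.0
        self.last = 0.0              # timestamp of the last update

    def add(self, amount, now):
        """An interesting event deposits energy into the stream."""
        self._decay(now)
        self.level += amount

    def value(self, now):
        """Current energy; this decides how much attention the stream gets."""
        self._decay(now)
        return self.level

    def _decay(self, now):
        dt = max(now - self.last, 0.0)
        self.last = now
        self.level *= 0.5 ** (dt / self.half_life)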

Two Layer Model


I started with a Python script that parses tcpdump output. It associates a packet with a stream and batches packets together to avoid overloading other parts of the system.
The output of this script is stream events. Events include a source and destination address, a stream ID, traffic in each direction, and any special events on the stream.
For DNS, the script just outputs packet events. For ARP, the script outputs request start, reply, and timeout events. There's some initial support for UDP, but so far that doesn't make sound.
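For illustration, the events might look roughly like this (the field names are mine; the real script's output format may differ):

from dataclasses import dataclass

@dataclass
class StreamEvent:
    stream_id: int
    src: str             # source "address:port"
    dst: str             # destination "address:port"
    bytes_to_dst: int    # traffic in each direction since the last batch
    bytes_to_src: int
    fin: bool = False    # special conditions observed in this batch

@dataclass
class DnsEvent:          # one event per request or reply packet
    src: str
    dst: str
    name: str

@dataclass
class ArpEvent:
    kind: str            # "request", "reply", or "timeout"
    sender: str
    target: str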
Right now, FINs are modeled, but SYNs and the interesting TCP conditions aren't directly modeled. If you get retransmissions you'll notice because packet flow will decrease. However, I'd love to explicitly sound retransmissions. I also think a window filling as an application fails to read is important. I imagine either narrowing a band-pass filter to clamp the audio bandwidth available to a stream with a full window, or perhaps taking it the other direction and adding an echo.
The next layer down tracks the energy of each stream. But that, and how I map energy into music, is the topic of the next post.
I've been working on a fun holiday project in my spare time lately. It all started innocently enough. The office construction was nearing its end, and it was time for my workspace to be set up. Our deployment wizard and I were discussing what to install. Normally we stick two high-end monitors on a desk. I'm blind; that seemed silly. He wanted to do something similarly nice for me, so he replaced one of the monitors with excellent speakers. They are a joy to listen to, but I felt like I should actually do something with them. So, I wanted to play around with some sort of audio project.
I decided to take a crack at an audio representation of network traffic. The Solaris version of ping used to have an audio option, which would produce sound for successful pings. In the past I've used audio cues to monitor events like service health and build status.
It seemed like you could produce audio to give an overall feel for what was happening on the network. I was imagining a quick listen would be able to answer questions like:

  1. How busy is the network?

  2. How many sources are active?

  3. Is the traffic a lot of streams or just a few?

  4. Are there any interesting events such as packet loss or congestion collapse going on?

  5. What's the mix of services involved?


I divided the project into three segments, which I will write about in future entries:

  • What parts of the network to model

  • How to present the audio information

  • Tools and implementation


I'm fairly happy with what I have. It doesn't represent all the items above. As an example, it doesn't directly track packet loss or retransmissions, nor does it directly distinguish one service from another. Still, just because of the traffic flow, rsync sounds different from http. It models enough of what I'm looking for that I find it to be a useful tool. And I learned a lot about music and Linux audio. I also got to practice designing discrete-time control functions in ways that brought back the halls of MIT.
The last couple of days I've been debugging some things for a client involving Miredo, the Linux Teredo implementation. I spent a frustratingly long time only to discover that, while I wasn't looking, my Teredo address had changed, and so the reason my packets weren't quite working out right is that they went to the wrong place. Along the journey I was trying to debug the procedure Teredo uses to avoid address spoofing. This proved difficult. Miredo forks off a daemon process to monitor its child (and I think handle some privilege separation issues). Eventually the actual worker process gets started; that's a multi-threaded, complex process. Many events have timeouts, so if you spend too long in the debugger, peer entries or short-lived authentication checksums may time out. In addition, pthread_cancel is used in some cases where there is a timeout, so you may find that the thread you are debugging has been blasted from on high.

The trivial approach of setting a breakpoint sure didn't work. Somehow in all the forking around, GDB failed to remove the breakpoint in the child. So, I got the amusing message


Program terminated with signal SIGTRAP, Trace/breakpoint trap.
The program no longer exists.
With that auspicious start, I began my adventure. I'll skip the play-by-play, but I want to pass along some useful observations.
  • The best way to deal with forking is to run the program, find the right process through some other means, and attach to it. Life is very frustrating when this doesn't work--because you need an early breakpoint, because it's hard to find the right process, or the like.
  • When that fails, the catch fork command can be used to break after a fork succeeds.
  • Don't forget to set follow-fork-mode to indicate whether the debugger should stay with the parent or the child. (I really wanted the semi-mythical follow-fork-mode both.)
  • I found the amazing set scheduler-locking command. This allows you to disable execution of other threads while you're debugging.
  • Don't forget to turn off the scheduler lock from time to time: multi-threaded programs get into some fairly unusual deadlocks when only one of the threads is permitted to run.

That was not as hard as I thought. I now have an ALSA audio driver for flite. There is something not quite the same about the sound output, but it does seem to work and I have Orca, Emacspeak and Windows all playing sound. And I can listen to music while coding, which is something I've wanted for years. Hmm, should I try to do something really scary like getting all this working with a Bluetooth headset?

I was somewhat scared by the ALSA API until I realized that there is snd_pcm_set_params and that you can mostly ignore the configuration spaces.

From time to time, my optimism asserts itself and I decide to start off on some software upgrade project. "Software has evolved; this will go reasonably well and I'll be happy at the end," I think. It never works. A few weeks ago I wrote and said that I was moving to Virtualbox as a virtualization solution. That went well enough on my desktop that I decided to try and bring that to my laptop. As part of playing with Virtualbox, I've come to the conclusion that Orca, the Gnome screen reader, is either useful enough that I could get some value out of Gnome, or will be there in a small number of months. So, I decided to move from console land to X.

Most of my work is in Emacs or in a terminal buffer within Emacs. I use Emacspeak as a screen reader for Emacs; it is a collection of lisp that interprets calls to output text and makes sure speech is also output. That's not going to change all that much. I'll probably use some Gnome applications, but I expect most of what I do will still be in Emacs. In particular, I'm not planning to drop Gnus as my mail reader--although I'm considering using Evolution for some things.

So, the desired final result is Emacs, Gnome and Virtualbox running under X. There will be three screen readers: Orca for Gnome, Emacspeak for Emacs and Window Eyes for Windows. X can handle that, so what's the big deal? Well, the Linux sound drivers can't always handle that. It turns out that with modern kernels, libraries and hardware, ALSA can do fine too. However, only one OSS application can have the sound card open at a time, and if any ALSA applications have the sound card open, then no OSS applications can have the sound card open. I'm using Flite as a backend speech synthesizer for Emacspeak; unfortunately, it is built against OSS rather than ALSA. However, it is well behaved and keeps the sound card open only for very short times. Orca uses Espeak (or at least that seems the best option), which uses ALSA. However, Espeak also keeps the sound card open only for short periods of time. The astute reader may notice some missing information at this point.

Of course with any such project there are bound to be some pre-yaks to shave before you actually get down to work. In my case, that was dvorak on X. Gnome has this nice keyboard preferences applet that lets you choose your keyboard layout. It let me choose dvorak just fine. But the fine folks at x.org decided that the dvorak layout should not remap the control keys. So, if you hit the key to the right of 'a', you get 'o', but if you hold down control and hit the same key you get 'C-s'. That might actually be an improvement except that the Linux console map, Windows and the Mac all do it differently. So I learned dvorak with the control keys remapped and that's how my finger macros work. There seems to be no way to do this right short of mucking with the xkb symbol maps. For some reason I have two sets of xkb rules, one in /etc/X11 and one in /usr/share. Different tools seem to use different sets and they work differently. I did get it working, although not in a manner that will be preserved across upgrades. The dvorak yak had somewhat more hair than expected. The Orca laptop layout uses '7-9', 'u-o', 'j-l'. That is not actually improved by the dvorak translation. So I decided to remap the keys. Unfortunately, the keyboard layout dialogue for Orca sorts things by the key position on the keyboard! So, as you change keys, the display resorts and it is challenging to figure out what remains to be done.

Everything worked until I added Virtualbox to the mix. Well everything except for gdm, but I haven't figured that out yet. Virtualbox is an ALSA application. It holds the sound card open the entire time it's running. Oops.

So, there are two approaches I can think of. I can cause Flite to use ALSA or I can use something other than Flite. Now, in theory, Emacspeak also supports Espeak. I decided to try and go down that road. However, it seems to depend on some TCL shared library that is nowhere to be found. It's apparently in the Emacspeak subversion repository but not actually shipping in the releases. Hmm, let's go take a look at Flite and ALSA support. Ah, promising. What's this au_alsa.c? Ah, that's no good. It seems to be two or three ALSA APIs out of date and doesn't seem to work. Let's see if there is a new version of Flite. Oh, there is. And look! They removed ALSA support completely! So back to that Espeak server. Oh, hey, look. It doesn't compile when you check it out of the repository. In fact it can't possibly have ever compiled with these options. Who builds their software -ansi -pedantic anyway? It does sort of work, but it leaves a lot to be desired. The way it says punctuation is kind of unfortunate, and it's missing a bunch of little beeps that are an important part of how Emacspeak communicates. I'm not entirely sure whether I want to live with this or write ALSA support for Flite.

In other news, the state of Gnome phone sync support is really incredibly horrid.

Just as MIT announces a huge site license with Vmware, I've decided that for my personal needs, I can do better. I've been using Vmware Server to run a Windows instance alongside Linux. I use it for web browsing and for running Office. These sorts of applications are far easier to use with a screen reader on Windows than they are on Linux.

I've always felt dirty using Vmware. First, it's not open source. Second, it just really feels like a bad fit. The architecture is very complicated. Server wants me to trust its security model. Also, it depends heavily on my local libraries and precludes me from upgrading my Gnome or GTK installation. There are command line tools for manipulating the configurations, but they are hard to use and don't really seem like a very supported interface. Then there are the plans for the new Vmware Server, which relies exclusively on a Java component to interact with the virtual machine. It seems far more complicated, brittle and harder to understand. So, I started looking for an alternative.

I've settled on VirtualBox. The major downside is that it does not support 64-bit guests. It's much simpler than Vmware; it doesn't depend on a network IPC mechanism and it uses the OS's security model. It's mostly open source: there is an open source version and there is a version with a few more features that is not open source. On one machine I can use the open source version; on another I need the USB support from the closed source version. It has kind of a rugged user interface, but one that meets all my needs. I seem to be very happy. Thanks to Marc Horowitz for pointing it out.

I have a new professional blog. For a variety of reasons I need and want to have a place to discuss Kerberos, the IETF and related security technologies. While this is public, it contains more personal stuff than I want to subject purely professional contacts to. I need to get someone to help establish a color scheme, possibly update the theme and select atom feeds rather than RSS. However there is at least content, which seems like a good start for a blog.
I seem to have found a use for IPv6: better automated tunnel and NAT traversal management. It all started when my mail server ate its disk. I've decided to replace it and I think it likely I'll replace it with a xen box. All too often I've wished to set up a wiki or some other web service that is reasonably stable. Running it on the colo box would be fine except for the security issues. Also, xen would provide better staging for upgrades, etc. And besides, the less I have to let go with both hands for a colo box, the better. But the networking for xen is a bitch. The ideal solution would be to allocate global IPv4 addresses for each virtual machine. I might be able to get away with that provided that I did a bunch of ugly routing tricks and provided that I never had several people over to my apartment needing IP addresses. The obvious thing to do would be to assign private-use IP addresses to the xen domains. Thinking about this though, it makes managing them hard. I cannot ssh from outside the xen environment to one of the domains. I could of course play evil NAT tricks, but getting Kerberos and other things like that working would be challenging. I could tunnel that net 10 to my laptop wherever I am and back to my apartment. Hmm, what's a good way of setting up the tunnel? Ugh, that's going to be messy.

You know, if I were using IPv6 I'd have all the addresses I need, or so I'm told. You know, I actually could do that. I could have a 6-to-4 address on the colo box and assign a subnet to the xen domains. I could destination-NAT through ports for the IPv4 external services like web and mail. But for management I could connect using domain 0 as an IPv6 router for 6-to-4. Each VM would get its own IPv6 address on a 6-to-4 subnet behind domain 0. Using Teredo or 6-to-4 I could guarantee that my laptop always had a useful IPv6 address, even behind a NAT. I could similarly set up 6-to-4 at my apartment so that I could easily connect to management interfaces from there.

I've done most of the work to support this. In particular, I wrote scripts to easily manage 6-to-4 on Linux. I have two prefixes at home. The first is based on the inner tunnel address of my home router. That's nice and stable, but the problem with using that prefix is that traffic goes over the tunnel. So, I created another prefix based on my Comcast address. That's somewhat stable, as Comcast doesn't renumber often, but not stable enough that I want to put it in DNS. It does use moderately efficient routing, at least for talking to other 6-to-4 nodes. I mark the stable prefix as not preferred, so that the source selection algorithm will prefer other addresses, but it will still be available for inbound connections.

Last spring I wrote about my IPsec configuration. At that time, I was using a laptop as a router. Since then I've purchased a Soekris Net 4801 as a router. In some ways it is a bit overkill: it has seven ethernet ports. I need two. It was really easy to configure though: I dropped a CF card in my laptop, ran debootstrap, installed some packages and copied over config files. I put the CF card on the board and booted. Of course I had managed to get something wrong (failed to create /dev/console) so I had to repeat once or twice, but that was all because I didn't understand aspects of the etch installer and I was cutting corners. I did run into trouble trying to configure grub to boot off a drive that would end up being the first BIOS drive but was not the first BIOS drive on the system where it was installed. One problem: the 2.2-based Openswan (including the one now in sarge) does not work well. I'm using the Openswan 2.3.0-2 previously in unstable and it works well enough. Here are my router configuration files. Let me know if I left anything out that is needed. The ipsec-nat script should be run after IPsec comes up. Most traffic goes through IPsec except for ports 80, 443 and 22. There is also support for machines that are always NATed; I use that for Windows machines, particularly Windows machines that may generate a lot of game traffic. There are several improvements I'm considering. I'm considering dedicating an ethernet port to the wireless AP and bridging to that port. The bridge iptables support would allow me to have better control over what can be done on the wireless side. I would like to have better support for mobility of my laptop and maintaining the same address. I'm also considering adding a second tunnel to MIT.

Openswan

Apr. 23rd, 2005 11:53 pm
I have several entries to write: one flaming about Openswan, one about eventual conclusions for clothes matching and one on the general state of life/moving into the apartment. The Openswan entry requires the least new thought and I'm getting tired, so I'll write it now.


So, at new apartment, I need a tunnel so I can have static IPs. I have a colo box and it has a /29 for my use.
Requirements for a Solution


Linux and IPsec
I choose Openswan
Linux is sadly not really that far behind in the IPsec world. The only thing that is Linux-specific is that there are a lot of options to choose from. As far as I can tell all the operating systems I've tried to get IPsec working on—AIX, Cisco IOS, NetBSD—are about as complicated. My conclusion is that I'll keep using IPsec since I've got it working. However my needs would have been more easily met by an IPIP tunnel and some mechanism to deal with mobility. No, for my next project I will *not* get Mobile IP working.

X40 is old

Jan. 23rd, 2005 06:38 pm
Time to get a new laptop; Linux works on this one. (Not really; I'm quite happy with the laptop.) I just upgraded to 2.6.10 and found that it works. In particular suspend to memory seems to work. Also, a lot of the fiddly little IBM bits like control of bluetooth, control of various hotkeys, control of video are now understood by Linux.

G5 Status

Oct. 9th, 2004 12:02 am
I'm now using the Penguinppc 64 kernel. I feel dirty; it's one of the more sketchy Linux kernels I've run. Hard disk works. Sound does not. Fan controller does not (this is a big deal; a G5 without fan control revs up for takeoff about a minute after power on). Wireless does not (and never will) but I don't care. The machine is blazingly fast.
A new dual 2.5GHz Apple G5 arrived for me Friday. Installing Linux is proving challenging. It started with the stock Debian 2.6.8 power4 kernels being unable to initialize the CPU. That makes the machine less useful. Someone pointed out a backport of a ppc64 patch that will allow the kernel to see the CPU. Next up: I'd like to see my SATA disk. Really, having your cdrom be your only media is not as useful as it could be.
I've been busy and haven't updated recently. There's been a lot of travel: a trip to Microsoft to do some interop testing followed shortly by a trip to WWDC. Here are some status updates on previous entries.

In early June, I wrote about some problems Susan and I were having and how Susan wasn't sure whether she had broken up a while before that point. We are still dating. We are making good progress on the problems we are having, but we keep finding new issues. The only thing I can really say at this point is that signs point to complexity.

Back in February, I wrote about concerns that I am stagnating at MIT. Well, part of that entry is no longer true. Kerberos has become much more interesting than it was then; future design challenges promise to be even more interesting. It looks like we may get chances to solve some fairly long-standing design limitations in Unix credentials and in how cross-organization authentication is handled. I'm enjoying what I'm doing. But I still feel that sticking with this job long-term would be a mistake. I wouldn't mind doing the same job in a context where I could work on building the right team. However, I suspect I'll have to decide between a chance at an ideal work environment and continuing on what I consider to be a good project.

The new laptop is working better than when I last wrote about it. I have sound working using the ALSA driver. The i810_audio driver has better power management and seems to be a better driver, but it deals poorly with short audio input like a single key name being spoken at high speed. I have wireless working. Thanks to [livejournal.com profile] tytso, I have my speaker beeping again. ACPI sort of half works; I can suspend to memory and even resume. However, minutes after I do so, the sound driver gets confused and keeps repeating the DMA buffer. This seems true even if I unload the sound driver before suspending. More experiments are needed. Neither pmdisk nor the other module (swsusp) for suspending to disk in the 2.6.6 kernel actually seems to work. I really like the hardware though; IBM continues to do an impressive job of laptop construction.

The laptop won of course, but it was a reasonably close battle as these things go. Resizing down the NTFS partition didn't work quite as well as expected; old versions of Partition Magic and ntfsresize wouldn't reduce the partition below 17G even after defragmentation. Once the NTFS partition was dealt with, the new Debian installer just worked. X and ethernet worked with no significant effort. Sound worked with no effort, although it doesn't work with my speech software. ACPI is a complete lose, or rather, in the words of those who write web pages about such things, it works just fine. As you read the web pages, you realize that by just fine, they mean that everything besides suspend to memory or disk works. Bluetooth and possibly USB are broken. Wireless has not been attempted yet, although I have some faith in madwifi. In short, useless until sound improves.

Nokia 6600

Jun. 20th, 2004 06:50 pm
This is the month of new toys; a new work laptop is sitting in my boss's office and the new desktop is on order. Yesterday I got a Nokia 6600. This is thanks to [livejournal.com profile] obra who pointed me at Talks, a screen reader for Series 60 phones. It's so cool to be able to look at the address book, look at missed calls and do neat things like copy a number from recent calls to the address book. It's a bit bigger than the S55, but seems more solidly built. I guess I'll need to get a new Bluetooth headset; my current headset seems not to be compatible.

The voice command support seems to be kind of broken. It is very bad at recognizing commands. I seem to have gotten it into a mode where I cannot delete the voice tag on `normal profile' nor can I record it. Either operation says voice system error. Advice on how to recover appreciated. Oh, and whoever decided that you can only have a voice tag on at most one phone number per contact needs to get shot.

Next up: getting Bluetooth under Linux working better. I have GPRS, but would like to get Obex working.

Svk review

May. 8th, 2004 05:14 pm
There was a discussion of Svk on class SIPB. I ended up reviewing this version control system for use in maintaining the Debian PAM package. Based on participation in the discussion, others may find my review interesting. So here it is.
Last June I got a Zaurus PDA. It runs Linux, and the idea is that I'll be able to use it for contact management, etc. If I can just get it talking.


But the sound driver crashes in audio_sync sometimes when the SNDCTL_SYNC IOCTL is issued. The kernel oopses and no more sound comes out. All very sad. And it's useless until I can fix this.

I finally got a chance to look at pxa-i2s.c in the Zaurus kernel sources. What complete crap. It seems like there is basically no locking in the code at all. I'm not completely convinced this is wrong for a uniprocessor machine, but the coding style does not inspire confidence so it will take me a while to reason about locking correctness. Here's a typical example of the style:
 if (b->offset &= ~3)  
I think that statement actually happens to be more or less correct, although it will sometimes lose a partial sample, but it brings new meaning to the word side effect. Hopefully I can figure out a solution to this problem before going to Japan.