It doesn’t look like new scripts have been added lately, but I just ran across this nice little collection of DTrace scripts and one-liners. There’s a special section for Mac OS X 10.5-compatible scripts.
I’ve still not gotten around to reading the actual paper, but Slashdot today mentioned some additional analysis of the paper correlating increased beer drinking with reduced scientific success (measured in terms of publications and citations). My initial thought after reading the first of the many posts about this that appeared around the internet was as follows: correlation != causation.

This blog post points out some additional issues with the paper, which I’ll now have to check out, both because it’s short and so I can evaluate the other scientist’s analysis. Beyond correlation != causation, the author of the blog post points out that there are really only 34 data points, and that without 5 of them the correlation falls apart. Additionally, the R-squared for the correlation is 0.5. Beyond the comment pointing out that an equally plausible explanation for the data is that low-output scientists drink more, there’s the statistic itself: an R-squared of 0.5 suggests that only about 50 percent of the variation in output can be attributed to beer drinking, which still leaves a large share of other potential influences on top of the alternate causal relationships.

Ah well, I suppose the original article may have been aimed more at headlines or amusement, but it’s still fun to try to justify beer. I think the common sense on this one is probably close to the mark: heavy drinking certainly doesn’t help your output, but reasonable social drinking probably doesn’t correlate much with output levels either.
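(Aside, and just my own gloss rather than anything from the paper or the blog post: R-squared here is

\[ R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \]

so a value of 0.5 means the beer-consumption fit leaves about half of the variance in publication output unexplained by the model.)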
Wow. The new Safari build has significantly improved JavaScript performance in Apple’s SunSpider benchmark.

Older Safari 3: 8537.4 ms ± 0.3%
Safari 3.1: 3152.6 ms ± 0.2%

The new Firefox beta is no slouch either:

Firefox 3 beta 4: 5080.4 ms ± 5.7%
Firefox 2.0.0.12: 15463.6 ms ± 5.7%

Full results after the jump.
So, I’ve recently been experimenting with and loving ZFS on FreeBSD and OS X. Some of the initial instability (kernel panics) I was having with ZFS on OS X seems to have calmed down for the moment, and I’ve had zero problems on 64-bit FreeBSD. 32-bit FreeBSD has been mostly problem-free after a little tuning to make sure there was enough space for ZFS to grow during times of need.

Recently I ran across this old thread about ZFS being ported to OS X, and it reminded me of how I feel about Solaris and the related technologies that have been ported to other operating systems.

I love ZFS, and I’ve liked DTrace in my recent experiments with it on OS X, which has led me to play a bit with Solaris, whence these technologies come. After all, it has not only those technologies but a bunch of other neat solutions like zones (including branded zones to run native Linux applications and the like). But I don’t see how I could love this operating system without more easily installable and buildable software available for it. Sure, there’s Blastwave and a few other repositories out there with Solaris binaries, and Project Indiana is working on Linux-izing Solaris to provide a nice command-line package manager and whatnot, but I don’t see how one could already love the operating system as an experimenting “power user.” Sun, I’ve heard, does an excellent job engineering solid server products (both software and hardware), but I can’t imagine going back to compiling everything from makefiles to get decent pieces of software onto it. It’s fine when everything you want to build has a manageable dependency tree and, say, has been tested on Solaris, but trying to figure out how to get something to compile that requires modification is a waste of time, unless I’m going to be using that particular piece of software a lot and already know from other platforms that it’s essential.

Perhaps Indiana will change much of this, but I’m also wondering why it hasn’t happened already. FreeBSD and the variety of other BSD operating systems out there are also somewhat acquired tastes, but working with them is so much easier with the ports collection. I can build and install pretty much whatever I want and have a working desktop machine or server box up and running without too much pain. If I went over to Solaris, however, I know I would end up spending time trying to get things like netatalk to compile, and figuring out how to get and compile the relevant libraries, because I don’t see them in any of the standard repositories. If I were on Linux or BSD, even if I ended up compiling from source, the package system would let me get the right compiler and libraries without any trouble. I could also trust that what’s in ports or in an RPM or deb repository will be fairly recent. When I sifted around looking for things on Blastwave, not only was I unsure whether they would run happily on something other than the long-since-released Solaris 10 (the current OpenSolaris builds are called Nevada and, I believe, will become Solaris 11), but quite a few things were more than a few versions old.

I’m not blaming the Solaris community, or Blastwave, or anyone else for this; I think they’ve all done quite a bit of work to make available what is there. But I don’t think I’ll end up playing with it much until there’s at least a healthy set of packages or ports that don’t require too much messing around to get running.
When that’s there, I’ll give it a go again.
While much of the MATLAB documentation is pretty dry in terms of the data chosen to demonstrate the use of functions, I found it interesting today when I noticed an example on excluding outliers that uses data from the 2000 presidential election in Florida. While it might not have much to do with the controversy over who won that election, it does bring back the old “butterfly ballot” issue, since, surprise, surprise, the only county that used those ballots had the greatest residual (degree of distance from a fit that includes the data for the other counties).

MATLAB Curve Fitting Toolbox – Excluding Data: http://www.mathworks.com/access/helpdesk/help/toolbox/curvefit/bqxox7w.html
So, I know there have been numerous blog posts gushing about ZFS.
This is another one of those posts.

I’ve got “experimental” versions of the filesystem going on a number of Mac hosts, and now on my FreeBSD NAS device. Aside from having been able to make the OS X version kernel panic a number of times by doing a few out-of-the-ordinary things, it has been an excellent experience so far, and I haven’t lost any data. One of the reasons those kernel panics didn’t bother me at all is that there is no fsck, no filesystem checking tool: the filesystem is designed to repair itself on the fly. It’s also equipped to deal efficiently with power failures, as journaled filesystems do, without a long fsck, except that ZFS doesn’t use journals.
Despite a number of comments about ZFS “really needing” a 64-bit CPU and gobs of memory, the requirements appear to scale down quite well. For a home NAS device where most of the time only one client is being served, it seems to do fine even on an old VIA C3 at 800 MHz with 512 MB of RAM. It certainly won’t break any speed records for data transmission, but the data that makes it on there is in a raidz1, so if one drive dies, it’s no big deal. In addition, if something happens to a disk between times when ZFS is working with it, or if there’s an error that the drive doesn’t deal with appropriately, ZFS will notice that the checksums don’t match on the reads it is doing and will repair the damage on the erring drive. Unless you disable it, every last block of data you write out to disk in ZFS is checksummed. So not only can you be relatively sure that your data are safe, you also know when they have become corrupted, and in most cases with redundancy the filesystem can fix it.

Did I mention that it also supports transparent data compression? Enable the option and all data written to the drive after that point is compressed, with no need to wait 8 hours to recompress the data already on disk (though I believe some functions are being considered to allow forced recompression). While on my little NAS device this might actually slow things down a bit, Sun has indicated that in many workloads compression actually speeds up data access, because for many of the lightweight compression algorithms the CPU hit is small and less data needs to be pulled from the device on reads.
Honestly, aside from some ironing out that needs doing on some of the fresher ports of this filesystem, I see no downsides to it. It may not work well on a 200 MHz machine, but something closer to 1 GHz is enough, even a VIA C3, which at 800 MHz is probably comparable to a 500-600 MHz Pentium/Pentium II. Did I mention that creating new filesystems on a pool costs essentially nothing, and that you can take snapshots and clones that are quite efficient and only need to store the blocks that have changed, rather than keeping a whole duplicate image of a filesystem?
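For anyone who hasn’t played with it yet, here’s roughly the handful of commands all of that boils down to. This isn’t a transcript from my NAS; the pool, filesystem, and device names are just placeholders for illustration, so substitute your own disks:

```
# Build a raidz1 pool across three disks (device names are examples)
zpool create tank raidz1 ad1 ad2 ad3

# Filesystems are cheap; make one per use
zfs create tank/media

# Turn on transparent compression for everything written from now on
zfs set compression=on tank/media

# Snapshots and clones only store blocks that change afterwards
zfs snapshot tank/media@2008-03-01
zfs clone tank/media@2008-03-01 tank/media-experiment

# Have ZFS read back and verify every block's checksum
zpool scrub tank
zpool status tank
```

The scrub at the end is the same checksum-and-repair machinery described above, just run across the whole pool on demand instead of waiting for a bad block to be read.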
In another week or so I’ll likely post up some comments about tuning usage on the NAS if any tuning is really necessary.
People, in large groups, can behave like idiots.

I would suggest, perhaps, that you don’t take the first, and most obvious, interpretation. That suggestion could be a nice mantra for life; however, I would add that over-interpretation can be problematic as well.
So, in a recent post I mentioned that I started brewing beer and that I made my own temperature sensor (not for the wort, just for the air temperature where things were fermenting/brewing/conditioning). To accomplish this, I took a miniPOV3 kit that I had constructed some number of months ago and added a temperature sensor to it. I had originally thought: why not just use a thermistor and maybe some voltage dividers with A/D? The binary display would make for a decent enough way to communicate the current temperature. Unfortunately, it turns out that the ATtiny2313 in the kit doesn’t do A/D. Luckily I had a DS1631 temperature sensor lying around that lets you read the chip’s current temperature over an I2C bus.

The hack itself is rather simple, and I’ll post the Eagle CAD file in a bit if I can figure out how to modify the original diagram provided by ladyada over at adafruit industries (who sells the kit). The general gist of the hack was connecting the SCL & SDA lines on the DS1631 to the PD0 and PD1 lines (with 4.7k pull-ups) and using Peter Fleury’s excellent I2C master library, which implements I2C in software. The ATtiny2313 does have ways of doing I2C in hardware, but the pins for that were taken up with connections for some of the LEDs that give the miniPOV its ability to be both a persistence-of-vision toy and a really cheap 8-bit display. Since this was already a hack, and I wanted to minimize the degree to which it was going to make a mess of the nice clean PCB, I instead used the software I2C implementation on some unused pins, which has been plenty fast and reliable (it ran happily for about a month before killing its batteries). The DS1631 also runs fine off the lower voltage provided by two AAs in series.

I may post some photos later. You’ll find a schematic, too (if someone can tell me how to fix the description box title, and how to properly make a device, I would appreciate it; this one won’t move after placing it 🙂).
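For the curious, here’s a rough sketch of what the firmware side can look like with Fleury’s i2cmaster API. To be clear, this isn’t my exact miniPOV code: the clock value, the assumption that the DS1631’s address pins are tied low, and the idea of dumping the integer temperature straight onto the PORTB LEDs are illustrative choices, but the start-convert/read-temperature command sequence is the standard DS1631 one.

```c
/*
 * Minimal sketch: read a DS1631 over software I2C on an ATtiny2313
 * using Peter Fleury's i2cmaster API. Assumes the bit-banged SCL/SDA
 * pins (PD0/PD1 here) are set in the library's build options, and
 * that the DS1631's A2..A0 pins are tied low (8-bit base address 0x90).
 */
#define F_CPU 8000000UL        /* adjust to the miniPOV's actual clock */

#include <avr/io.h>
#include <util/delay.h>
#include "i2cmaster.h"

#define DS1631_ADDR       0x90  /* 1001 000x with A2..A0 = 0      */
#define DS1631_START_CONV 0x51  /* "Start Convert T" command      */
#define DS1631_READ_TEMP  0xAA  /* "Read Temperature" command     */

/* Returns the temperature in 1/16 degree C units (12-bit reading). */
static int16_t ds1631_read_temp(void)
{
    uint8_t msb, lsb;

    i2c_start_wait(DS1631_ADDR + I2C_WRITE);
    i2c_write(DS1631_READ_TEMP);
    i2c_rep_start(DS1631_ADDR + I2C_READ);
    msb = i2c_readAck();
    lsb = i2c_readNak();
    i2c_stop();

    /* MSB is the signed integer part; the top nibble of the LSB
     * holds the fractional bits at 12-bit resolution. */
    return (int16_t)(((uint16_t)msb << 8) | lsb) >> 4;
}

int main(void)
{
    DDRB = 0xFF;               /* miniPOV LED column as outputs */
    i2c_init();

    /* Kick off temperature conversion; whether it free-runs depends
     * on the chip's configured 1SHOT mode. */
    i2c_start_wait(DS1631_ADDR + I2C_WRITE);
    i2c_write(DS1631_START_CONV);
    i2c_stop();

    for (;;) {
        int16_t t = ds1631_read_temp();
        uint8_t whole = (uint8_t)(t >> 4);   /* whole degrees C */

        /* Illustrative only: show the integer part in binary on the
         * eight miniPOV LEDs. */
        PORTB = whole;

        _delay_ms(1000);
    }
    return 0;
}
```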
I must implement something like this for the cats, courtesy of the Curious Inventor Blog.

Although, perhaps in conjunction with something like this, but functional, so I don’t have to listen to a car horn every time the cats do something stupid 🙂
In answer to the title, beer, actually. At the beginning of this month I acquired my first beer brewing kit as a gift from my fiancée Annie. We picked it up from the Wine & Hop Shop in Madison while we were visiting her sister. There are a number of places in the Chicago area as well, but most are not quite as close as I’d like. In the future, I’ll likely get supplies from the Brew & Grow in Chicago, which someone else recommended to me.

I started with a malt extract kit for a California golden ale, prepping the ingredients per the instructions that came with the kit. It was pretty easy starting from this stage, since you don’t have to do the extraction of the sugars from the grain yourself, which meant that only one boil was really necessary to get the wort ready. An aside for this portion of the prep process: get a pot that’s large enough. When they say you need a 3-gallon pot minimum, that really is the bare minimum; get a larger one. I had no troubles with boil-overs, but if I hadn’t kept a close eye on it, I’m quite sure my stove would have been covered with hot, sticky wort, and the batch’s strength or yield would have suffered.

Getting the temperatures right at different stages was made fairly easy by using a digital thermometer (a thermistor attached to a Fluke meter). I’ll post some details shortly about hacking the miniPOV into a binary-display digital thermometer for the fermentation phase.

I did the primary fermentation in an ale pail (haha). If you do the same and therefore can’t see what’s going on inside, don’t worry if it takes a while for air to start bubbling through the airlock. It took a number of hours for mine to get started fermenting, and then it went pretty fast for a day or two (air bubbling out every several seconds). Racking the beer into the secondary fermentation vessel, a glass carboy, was also fairly painless. I would recommend getting an auto-siphon or racking cane to get the siphon started. They’re very cheap, and though this stage isn’t too hard, it’ll keep your beer a bit cleaner than doing a mouth siphon. On that subject, I’ve heard from a number of people who are horrified by the prospect of mouth siphoning, and also from a number of people who guiltily admitted that they did it themselves. I’d have to guess that the concerns there are perhaps a little overblown, but still, you don’t want all the happy microbes in your mouth colonizing your beer. You certainly don’t want any liquid you’re going to be conditioning and later bottling coming into contact with your mouth.

Now on to bottling. Use your dishwasher to sanitize the bottles. It’s damned easy: just put some One Step or whatever sanitizer/cleaner you use in the spot for detergent (I wouldn’t use an actual detergent on them, though) and run a cycle. Also, get a bottle filler or bottling wand and a clamp. They are cheap and make filling much more consistent and easy, with less spillage. I spilled at least a bottle or two worth of beer on the door of the dishwasher (which happens to be a great spot to bottle, since when you close the door, all your spilled beer will slosh into the dishwasher and not onto your floor). Many thanks to Seth for the dishwasher recommendation.

All that said, while I was paranoid along the way about whether or not I’d ruined the batch, it seems to have turned out fairly well. It’s quite a nice golden ale, and tastes similar to what you might expect from a Sierra Nevada Pale Ale. In addition to the above recommendations, I would suggest getting a kit with multiple fermentation containers.
Even if you don’t plan on doing multiple-stage fermentation, it makes it much easier to mix the extra sugar for carbonation into the whole batch rather than into individual bottles. Who in their right mind wants to add a measured amount of sugar to four dozen bottles, or deal with trying not to suck up sediment while bottling?

The following wiki is also useful for explaining different aspects of the brewing process: Home Brewing Wiki.