Unifi controller on Debian Buster with Java 11

Updated: Mon Jul 6 10:35:09 2020

Edit: These instructions were written for the v5 controller. I've since updated to v6 and that has worked fine, but I've had a report that these instructions may not work with a fresh v6 installation.

I was moving my Unifi controller to a new VM and wanted to use Debian Buster which has been the current release for a year now. It seems Ubiquiti's own instructions only mention Jessie and Stretch as of July 2020, which is unfortunate as Jessie LTS is now past the end of its support and is no longer getting any security updates.

It also doesn't help that Debian have dropped MongoDB in recent releases for licensing reasons, and that the Unifi controller requires older versions of MongoDB and Java.

Installing the latest MongoDB version 4 doesn't work, as the unifi .deb requires MongoDB 3. Here's how I got it working without installing truly ancient software.

MongoDB 3

There are some guides that suggest installing 3.4 from Jessie which needs some old libraries, but there's no need - the controller works fine with 3.6. It's not packaged for Buster, but the Stretch version works without any bodging.

echo "deb stretch/mongodb-org/3.6 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list

wget -qO - | sudo apt-key add -

apt update
apt install mongodb-org
systemctl start mongod
systemctl enable mongod


As per the unifi instructions.

apt install ca-certificates apt-transport-https
echo 'deb stable ubiquiti' | sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list
sudo wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg 
apt update
apt install unifi

This will install a load of dependencies including Java 11, which initially doesn't work. Thankfully there are a couple of simple fixes - although this isn't supported, my controller seems to be working fine with this.

Check to see why it's not starting:

systemctl start unifi
journalctl -u unifi

Fix Java

The first issue to fix is the JAVA_HOME variable, otherwise you get an error saying "Cannot locate Java Home". This should be relatively safe for any future upgrades:

echo 'JAVA_HOME="$( readlink -f "$( which java )" | sed "s:bin/.*$::" )"' | sudo tee /etc/default/unifi
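To see what that one-liner actually does, here's a sketch against a throwaway directory tree (the paths are invented for illustration): readlink -f follows the symlink chain to the real java binary, and the sed strips everything from bin/ onwards, leaving the JVM root.

```shell
# Build a fake JVM layout (paths invented for illustration) and a
# symlink to its java binary, mimicking /usr/bin/java.
tmp="$(readlink -f "$(mktemp -d)")"
mkdir -p "$tmp/jvm/java-11/bin"
touch "$tmp/jvm/java-11/bin/java"
ln -s "$tmp/jvm/java-11/bin/java" "$tmp/java"

# The same pipeline as above, pointed at the fake symlink: readlink -f
# resolves to .../jvm/java-11/bin/java, and sed drops "bin/java".
JAVA_HOME="$( readlink -f "$tmp/java" | sed "s:bin/.*$::" )"
echo "$JAVA_HOME"    # e.g. /tmp/tmp.Xyz123/jvm/java-11/
```

Because it derives the path at runtime rather than hard-coding it, the same line should survive point upgrades of the JDK package.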

The Unifi controller still doesn't work though; the log reports: "Cannot find any VM in Java Home". A bit of testing with strace suggests it's looking for its libraries under an amd64 subdirectory that doesn't exist in the Java 11 layout. Sticking this symlink in resolves it:

ln -s /usr/lib/jvm/java-11-openjdk-amd64/lib/ /usr/lib/jvm/java-11-openjdk-amd64/lib/amd64

It's alive!

Finally, start the controller:

systemctl start unifi
systemctl status unifi
systemctl enable unifi

You should then be able to access the controller on https://whatever:8443 as usual.

I put mine behind nginx with a Let's Encrypt certificate to get rid of the TLS warnings, but that's beyond the scope of this guide.
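For reference, a minimal sketch of such a proxy (the hostname and certificate paths here are illustrative, not from my actual setup):

```nginx
server {
    listen 443 ssl;
    server_name unifi.example.com;

    ssl_certificate     /etc/letsencrypt/live/unifi.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/unifi.example.com/privkey.pem;

    location / {
        # The controller speaks HTTPS itself, with a self-signed cert,
        # so proxy over TLS without verifying it.
        proxy_pass https://127.0.0.1:8443;
        proxy_ssl_verify off;
        # WebSocket support, needed for the live updates in the UI.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```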

Enabling TRIM Support on a Via VL817 USB 3.1 SATA Adaptor

Updated: Thu Feb 13 19:17:22 2020

Mainly here as a reminder for myself, but hopefully others may find this useful.

The Via VL817 USB 3.1 SATA adaptor supports UASP mode, and can be convinced to enable TRIM support in Linux. This means you can also run blkdiscard to erase the drive, which was my primary use case.

This probably works with other adaptors too, but this one is a "Sabrent USB 3.1 (Type-A) to SSD / 2.5-Inch SATA Hard Drive Adapter [Optimized For SSD, Support UASP SATA III] (EC-SS31)" I bought from Amazon. You can find it here

First, check your device is showing up with 'Driver=uas':

# lsusb -t
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
    |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M

Then verify that unmap is supported - you're looking for "Unmap command supported (LBPU): 1". On Debian Buster I needed to apt install sg3-utils to get sg_vpd.

# sg_vpd -a /dev/sda
Logical block provisioning VPD page (SBC):
  Unmap command supported (LBPU): 1
  Write same (16) with unmap bit supported (LBPWS): 0
  Write same (10) with unmap bit supported (LBPWS10): 0

Confirm the device ID for your adaptor using lsusb:

# lsusb
Bus 002 Device 002: ID 2109:0715 VIA Labs, Inc. 

The bus and device should match those from the 'lsusb -t' command you ran earlier.

Create /etc/udev/rules.d/50-uasp-usb.rules with the following content:

ACTION=="add|change", ATTRS{idVendor}=="2109", ATTRS{idProduct}=="0715", SUBSYSTEM=="scsi_disk", ATTR{provisioning_mode}="unmap"

Amend the vendor and product IDs if you need to.

Then reload udev:

# udevadm control --reload-rules && udevadm trigger

You should now find that TRIM works on that device. In my case, I wanted to quickly erase an SSD using the blkdiscard command which now works without returning an error and does indeed zero out the entire SSD. DO NOT RUN THIS UNLESS YOU WISH TO LOSE ALL YOUR DATA...

# blkdiscard /dev/sda

The version of smartctl on Debian Buster doesn't seem to know about this device yet, so if you get the following error running smartctl:

/dev/sda: Unknown USB bridge [0x2109:0x0715 (0xa000)]
Please specify device type with the -d option.

You can fix this by passing in the '-d sat' argument:

# smartctl -d sat -a /dev/sda

Why emailing passwords is a bad idea.

Updated: Sat Feb 7 20:12:36 2015

You may be reading this because somebody has got in touch and complained about you sending them their password as part of your signup process or 'forgotten password' function. I hope I can explain why this is not a good idea.

Times Change

Like everything else to do with computers, the world of internet security is constantly evolving. Ideas that seemed great just a few years ago turn out to be not such a good idea, and unfortunately this is one of them.

Best Practice

There are a number of reasons why emailing passwords to people is now considered an unwise idea.

  1. Email was not originally designed to be secure. Emails are often delivered in clear text over the internet and stored as easily readable files on servers. Messages can also bounce to unexpected places when something goes wrong and people sometimes share email accounts. If you email a password, consider that an unauthorised person might gain access to your service. That is probably not something that you want, especially if that can result in reputation damage or credit card chargebacks.

  2. Humans aren't very good at remembering good passwords, so people often use the same password for many services. This means that if the password is revealed other services may be accessed. If this includes internet banking, social media or email account information, the consequences could be significant. When you accept a password and other personal information from a customer, you are taking responsibility for keeping that information safe.

  3. If you are able to email a password, it likely means that you are storing the password in plain text. This means that if your site is compromised attackers can potentially make off with the email addresses and passwords belonging to your users. This annoys customers and leads to bad publicity, which will be something you want to avoid. If your site is storing passwords insecurely, there is an increased likelihood that it has other security issues. You may believe that your site is secure, but with successful attacks against massive names like Adobe, Snapchat and Yahoo leaking customer passwords, it is best not to take the risk.

What should I do?

As of early 2015, you should consider the following at minimum:

  1. Don't store passwords in plain text. You (or the product you choose) should use a one-way hash with key stretching, such as bcrypt or PBKDF2. This may sound complex, but it is a way of turning a password into a form where you can verify that the right password has been used, but you can't tell what the original password is. A bonus side effect of these 'hash functions' is that they permit passwords of any length.

  2. You should also not encrypt the password in a manner that means it can be decrypted later on, as this is likely to be inadequate - think of it like using a padlock, but keeping the key next to it.

  3. Don't email a copy of the password when somebody signs up.

  4. The best way to handle forgotten passwords is to send the customer a link that will allow them to set a new password. It should be valid only for a short period (say 24 hours), and must stop working after the password has been changed.

  5. Passwords should always be transmitted securely - this means your site uses HTTPS and the little padlock appears in the browser.
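As a concrete illustration of points 1 and 4 - a sketch only; openssl's sha512-crypt mode is a salted one-way scheme rather than the bcrypt/PBKDF2 recommended above, but it demonstrates the same properties, and the reset token is just a long random value you would store with an expiry time:

```shell
# One-way, salted hashing (openssl 1.1.1+): the same password and salt
# always produce the same hash, but the password can't be recovered.
hash1="$(openssl passwd -6 -salt examplesalt 'correct horse')"
hash2="$(openssl passwd -6 -salt examplesalt 'correct horse')"
hash3="$(openssl passwd -6 -salt examplesalt 'wrong password')"
# hash1 and hash2 match; hash3 does not.

# A password-reset token: a long random value to email as a link,
# stored server-side with an expiry, instead of emailing the password.
token="$(openssl rand -hex 32)"
echo "$token"
```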

I don't understand this. I just sell things through my website.

If your site does any of the following, then it is likely that it has aspects that are not adequately secure:

Hopefully you can ask the people responsible for your site, or the vendor of the product that you use, to help you out. Alternatively, please consider engaging the services of somebody who does understand the detail on this page.

Two-factor time based (TOTP) SSH authentication with pam_oath and Google Authenticator

Updated: Thu Sep 19 21:34:14 2013

Two-factor authentication (2FA) is becoming an increasingly useful way of providing an extra layer of security to services above and beyond passwords.

OATH is an open mechanism for generating either event-based or time-based One Time Passwords and there are a number of hardware tokens and software implementations available, which makes it ideal for a small scale implementation without requiring lots of infrastructure or expense.

Setting up a simple trial to add 2FA to a remote access server using Google Authenticator as a software token, I thought it would be useful to document the bits that I glued together.

These instructions are for RHEL/CentOS 6, and you'll need the EPEL repo for the oath packages (or install the packages and their dependencies directly). pam_oath (and its documentation) is available directly or it might be provided by your OS distribution, if you're not using RHEL/CentOS.

You should leave a session logged in as root while you test this, in case you break anything and need to undo it.

Install the relevant packages, and symlink the pam_oath module into the right place:

# yum install pam_oath oathtool
# ln -s /usr/lib64/security/ /lib64/security/

Enable ChallengeResponse auth in /etc/ssh/sshd_config:

ChallengeResponseAuthentication yes
PasswordAuthentication no
UsePAM yes

and restart sshd:

# service sshd restart

If you're using a software token, you'll want to generate a random seed. A good way to generate a random string of an appropriate size and format is:

# head -10 /dev/urandom | md5sum | cut -b 1-30

Set up your oath seed in /etc/users.oath:

HOTP/T30/6  yourusername    -   15ad027b56c81672214f4659ffb432

You can add as many users as you need, one line at a time. You should also secure that file appropriately, as these strings are effectively a password:

# chmod 600 /etc/users.oath
# chown root /etc/users.oath

You can generate an OTP using oathtool. Run this with the -v option and your chosen key. The Base32 version of the secret is the one that you will need for the Google Authenticator smartphone app. You can type this in, or generate a QR code later...

#  oathtool --totp -v 15ad027b56c81672214f4659ffb432
Hex secret: 15ad027b56c81672214f4659ffb432
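If you'd rather not rely on oathtool's -v output, the Base32 form can also be derived with GNU coreutils (a sketch; basenc expects uppercase hex for Base16 decoding, hence the tr):

```shell
# Re-encode the example hex seed from above as Base32 for the
# Google Authenticator app: hex -> raw bytes -> Base32.
hex=15ad027b56c81672214f4659ffb432
printf '%s' "$hex" | tr 'a-f' 'A-F' | basenc --base16 -d | base32
# -> CWWQE62WZALHEIKPIZM77NBS
```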

Since you probably don't want OTP enabled all the time for all users, create /etc/security/access-local.conf - you can set differing options depending on your requirements.

This configuration would allow access without requiring an OTP from a trusted network:

+ : ALL :
- : ALL : ALL

This configuration only requires an OTP for members of the 'otpusers' unix group. This might be useful for enabling 2FA on accounts selectively as part of a gradual rollout, or you might decide to only require 2FA for users who have permission to su to root.

- : (otpusers) : ALL
+ : ALL : ALL

You can be quite creative with these rules - they follow the standard pam_access syntax, so check the documentation for that.

Finally, I added the following lines to /etc/pam.d/system-auth-ac and /etc/pam.d/password-auth-ac (This is a RHEL/CentOS-ism) - where you put them will depend on your pam configuration and OS. The pam_access entry is optional, but it does make the above choices possible.

auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access-local.conf
auth required pam_oath.so usersfile=/etc/users.oath window=30

Now you can ssh into your server (don't close the root session you currently have open in case you've broken something!). You can generate your OTP using oathtool:

# oathtool --totp 15ad027b56c81672214f4659ffb432

Log in quickly (before that token expires), and you should find it lets you in:

username@host:~$ ssh securehost
One-time password (OATH) for `username':
Last login: Wed Jul 10 22:38:53 2013 from
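Out of curiosity, the whole TOTP calculation can be reproduced from first principles. This bash sketch (assuming GNU coreutils and OpenSSL are available) implements the RFC 6238 algorithm that oathtool computes for you, checked against the RFC's published test vector rather than the seed above:

```shell
# TOTP from first principles (bash + coreutils + openssl).
# Uses the RFC 6238 test key, not the example seed from this article.
secret_hex=3132333435363738393031323334353637383930  # ASCII "12345678901234567890"
t=59                # a fixed timestamp from the RFC test vectors
step=30             # time step in seconds, matching HOTP/T30 above
digits=6

counter=$(( t / step ))
# HMAC-SHA1 over the 8-byte big-endian counter, keyed with the secret.
mac=$(printf '%016X' "$counter" | basenc --base16 -d \
      | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$secret_hex" -r \
      | cut -d' ' -f1)
# Dynamic truncation (RFC 4226): the low nibble of the last byte picks
# an offset; take 4 bytes there, mask the sign bit, mod 10^digits.
offset=$(( 0x${mac:39:1} ))
code=$(( ( 0x${mac:$(( offset * 2 )):8} & 0x7fffffff ) % 10**digits ))
printf '%06d\n' "$code"   # -> 287082 for this test vector
```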

To set up the Google Authenticator smartphone app, you can take your Base32 formatted secret, and either enter it manually or generate a QR code. To make a QR code, you need a URL formatted string, as below. The example of 'username@securehost' is a simple description, so it can be anything you like.

otpauth://totp/username@securehost?secret=CWWQE62WZALHEIKPIZM77NBS

Feed this into a QR code generator that you trust (remember, this is effectively a password), and scan the code using the app.

With the secret saved into your smartphone app you should now be able to log in using the codes that it generates.

Extra things

The /etc/users.oath file gets updated every time you log in, which can make this a challenge to manage centrally across multiple hosts. It is possible to update this with a custom augeas lens if you're using puppet. I've also got an ANSI escape commandline QR code/seed generator. These are a bit of a bodge, but do seem to work. If there's demand I'll see about sticking a copy of them and the relevant puppet manifest up somewhere.

I've also got a script to decrypt Gemalto PSKC v1 files for the IDProve 100 / Easy OTP v3 tokens.

Update, 2013-09-19

Fixed typo in symlink, thanks to Andreas Ott for spotting it.

Picasa.ini files not properly updated

Updated: Thu May 17 00:00:00 2012

Like many people I've been using the wonderful (and free) Picasa to manage my photos. One of the huge benefits of Picasa aside from its fast and friendly user interface is that it doesn't write changes to your images. Instead it stores a record of changes that are made to each original image in a Picasa.ini file in each directory. This means you can make changes to your images in Picasa, such as adjusting the contrast and brightness or cropping (or marking with a star), and you don't need to worry about it overwriting your original images.

In order to keep performance reasonable it stores a cache of these adjusted thumbnails and the changes in your Local Settings directory too.

I discovered a problem with Picasa's method of updating these Picasa.ini files though - if you make changes to a file and then move it to a different folder within Picasa, it updates neither the original nor the new .ini file. This leaves a record of the changes in the old directory but not the new one, which doesn't seem to matter at first because Picasa tracks this information in the Local Settings database.

The problem comes if you lose the database, or (as I did) intentionally delete it. Picasa will happily then trawl back through your pictures directory and rebuild most of the information from these .ini files - unless you've moved the images to a different folder after editing, in which case your changes will be lost.

The frustrating thing is that this information is still available - Picasa just doesn't know where it is.

To fix this with my photos, I wrote a short perl script to trawl all the Picasa.ini files, pull the data out of them, and write the relevant entries to the folders where this information is missing. There are a couple of caveats with this though:

  1. If your camera doesn't keep track of file names (they return to 0001.JPG after emptying your memory card) this almost certainly won't work properly.
  2. If you've made changes to the image in the new folder too, they won't be updated or merged. It will warn you if there's filename duplication though.
  3. I ran this on a Linux computer. It should work on Windows too, using something like ActivePerl, but I've not tested it.

If you do find it useful, please do let me know, and remember, back up your files before using this script. It works for me, but I make no guarantees that it won't mess up your images.

You can get the script here

To run it simply give it the full path to the directory that contains your photos. e.g.:

./ /home/bcc/photos

After running it, you'll need to clear out Picasa's database for it to pick up the changes. Hold down ctrl-alt-shift as Picasa starts and it will ask if you want to do this. You will lose any labels you've applied to images, but if you need this script then that's probably already happened...

Dev8D 2010

Updated: Mon Mar 8 21:31:32 2010

The Event

I hadn't expected to get to go to Dev8D 2010. After the success of our entry in 2009, it was agreed that other people in the department should get the opportunity to go. It came as a pleasant surprise to be invited to join the DevCSI Developer Focus group - intended to help foster a development community based around UK HE, and carrying on the work started at Dev8D 2009. Among other responsibilities, this meant helping with some of the preparation and running of the dev8D 2010 event.

I arrived earlyish, and set up in Base Camp where I started putting together a handful of slides for my lightning talk on list8D. Matt Spence and I had prepared a demo the day before, but I wanted to give a bit of a talk about how the dev8D prototype from last year had turned into a proper funded project, how our management had supported the development, and how agile development had helped us maintain realistic expectations. Most excitingly this would include the first demo of the shiny new theme thanks to some amazing last minute work by Matt.

I had also been roped into taking photos for the event and as more people started to turn up I wandered around getting some pictures.

Lunch was excellent, and with the Linked Data event running at the same time on the first day there were around 500 people in the ULU building.

In the afternoon I had an interesting conversation with a couple of other folks about where cloud computing is heading. Consensus seemed to be that it's a useful tool where appropriate, but not always the right answer. Services such as content delivery and compute-on-demand are definitely of value, but it's not mature enough for core service provision yet. It feels a bit like virtualisation did 5 years ago - useful but not quite there yet.

I wandered through to the expert zone to prepare for my talk on list8D which for the most part went well. Minor networking issues meant I couldn't completely demo the addition of new items, but it was nice to show off the new theme and the brilliant work put in by Matt and Simon in getting list8D ready for real use.

I also watched the excellent lightning talks by Joss Winn on Wordpress, the Eprints guys talking about their challenge, and a demo of OpenGL development on android.

Wednesday evening was set aside as 'Games Night'. In addition to a collection of the usual and not-so-usual board games, we played Developer Bingo where you had to find other developers who could sign off a specific item on your sheet. These were things like "has been slashdotted", "coded in fortran" or "is a GNU maintainer" -- based off the signup details and a number of 'likely' other suggestions. This was a brilliant icebreaker, and the prizes of lego boardgames were similarly well received, with winners playing with their prizes alongside people they'd only met that evening. Once again, the food was excellent, although the ULU bar could have done with some proper beer.

On Thursday morning (having stayed up finishing my slides later than I probably should have) I gave another lightning talk on Web Security which seemed to go down well. It's a lot of material to cover in 15 minutes and not really in any depth, but the major aim was to give people enough information to go and do some further research themselves. Judging from a couple of conversations I had later on in the day, it seems that at least a couple of dev8Ders will do just that so I consider that a success.

I also watched a brilliant lightning talk by Stephen Johnston on using the Microsoft Azure cloud computing platform to calculate satellite collision probabilities. Very cool stuff, and well suited to the 'compute power on demand' model.

After this I went off to see the RepRap 3D printer which had been set up and was busy printing a coathook. The buzz around this device was amazing - nobody could quite believe this thing was printing physical objects. Adrian Bowyer gave a great talk back in the expert zone on how RepRap came to be, why it was open source and how he hoped it would revolutionise the ability to make things. What's really impressive is that the RepRap device can print about 50% of its own parts, and they're constantly working to improve that percentage. They also encourage the improvement of the design of individual bits and the contribution of those changes back to the central project.

I really can't describe how cool RepRap is, and how much excitement there was at the event - you really got the feeling that RepRap is a game changer in the same way that the internet allowed anyone to publish - this gives people the ability to manufacture. Best of all, it only costs about £300 to build one from scratch, which puts it well within the reach of individuals and communities.

Thursday afternoon meant the Cloud Computing workshop which had Dave Tarrant covering Amazon EC2 and myself talking about Linode. The workshop room was pretty much full for this which only added to the pressure. Dave did a brilliant job going through the basics of EC2 and most people in the room had a working EC2 instance running Apache and MySQL. The Linode demos went pretty well, and I was happy to show off the recovery console and the new StackScripts, and a number of attendees signed up for some of the free instances that Linode had generously provided for the event.

In the evening (entertain yourself evening), despite the horrible rain a few of us went to Ciao Bella for some tasty italian food, then on to the Jeremy Bentham pub for the Shambrarian meetup which was excellent. Good to find another pub in London that has decent beer on tap and a good whiskey selection.

Friday was finally a day where I could relax a bit, so I spent one session in the genetic algorithms workshop by Richard Jones. This is a novel approach to using multiple generations of virtual creatures to solve problems that are non-trivial to work out through conventional means. Using a set of simple rules and a fitness function, you test each set of 'DNA' against the fitness function, pick the best ones, breed them, then run them again. Over a number of generations, you should end up with a pool of creatures that get better and better at solving the problem.

A great visual example of this is the evolution of Mona Lisa demo.

This was a great introduction to an area I knew nothing about, and although I missed part of the workshop due to helping sort out the nominations for the awards dinner, I really enjoyed getting the chance to play with this alternative approach to solving complicated problems.

I also spent a bit of time on Friday putting together a simple list8D API to LTI bridge, for our entry for the LTI challenge which Steve had noticed would be a perfect fit.

Friday evening was the awards dinner which was a lot of fun - we got to give away some cool awards (best newcomer, best leap-of-faith and best t-shirt were my favourites) and the meal was brilliant. I was taking photos of the presentation of the certificates and while there was a convenient balcony, my flash wasn't really strong enough to reach comfortably which was a shame, since the photos taken from the side of the stage weren't as good as I'd hoped.

On Saturday morning (feeling rather blurry from the very late night) I gave my web security lightning talk again, as it had been asked for. Again, a good number of questions and another chat with someone after the talk suggested it was worthwhile.

I spent the rest of Saturday helping to judge some of the entries for the challenges. I was amazed at the number and quality of the submissions. Clearly a lot of work had gone into many of them, even only over a few days.

Finally with the close of Dev8D came the awarding of the bounty/challenge prizes (again, as photographer-monkey, but the light was rather better this time), then heading home, exhausted.

The Good

The Bad

The Shiny


Well done, if you've read this far. Here's some stuff that may be of interest:

You may also be interested in joining the DevCSI Developer Contact group.


Thanks to Mahendra, David F, the UKOLN events team, and anyone else involved in running Dev8D. It was an amazing event and I had a brilliant time.

Fake Drugs being sold from sites

Updated: Mon Mar 8 14:47:39 2010

At the end of last week, BBC News reported that a number of sites were being used to sell counterfeit drugs. I wish I could say this surprised me, but knowing how complicated the issues are in sorting out web security at the university where I work, I can't say it's come as a massive shock.

At a university it is often the case that a department may be responsible for their own web presence - usually someone for whom it is not a priority, and they may know nothing about the technical issues involved. Sometimes a department will have had a third party company supply a site or content management system without realising it needs to be kept up to date. Even where there is a good level of centralised support for web publishing, some departments may do their own thing for historical reasons.

We've been fairly proactive at working with departments and getting our own house in order, but it's certainly been a challenge to have security taken seriously across the institution. While incidents like this are unfortunate, they do have the positive side-effect of raising the profile of these issues, and longer term this can only be a good thing.

Finally I'll share a tip for anyone working in academia. Set up some google site alerts for the following:

These will alert you to any new pages that appear on your site with those terms. It's not perfect, but it will alert you to some compromised pages, or even comment spam on wiki pages/blog posts that should be dealt with.

Driving 8x8 LED Displays with an Arduino

Updated: Sun Feb 7 22:13:00 2010

After playing around a bit, I moved on to connecting the 8x8 displays. I spent a bit of time thinking about how best to do it, and had come to the conclusion that using a 595 shift register to drive the anodes of each display was the way forward. I'd ordered some ULN2803A darlington transistor arrays, which can sink up to 500mA of current. This is more than I was planning to draw through the 24 LEDs that make up a row, so the plan was to connect the cathodes of all the LED matrices to this chip. Again, a 595 shift register controls the ULN2803, so it means I can directly address each row and column in the same way.

Once I'd got one 8x8 display working, it seemed like checking it all worked properly was a plan.

It did, so I moved the current limiting resistors over to the 'y-axis' board, and started building the other 2 display boards.

Each board joins directly onto the next, so the wiring isn't ridiculous on any of them, but there's still an awful lot of extremely fiddly connections to make, and it's used up pretty much all of my 150 bits of wire. I wouldn't plan on doing this again in a hurry...

Once I'd connected the other 2 displays, I changed the display code to push out 2 additional sets of bytes on the X-axis with some slightly different display patterns and we were in business:

Here you can see (from the top) one of the 595 shift registers, the ULN2803 darlington array and 8 current limiting resistors. These collectively make up the Y-axis board, which controls the cathodes of all the displays. Each of these is operated in turn very quickly, lighting up an entire row. These are scanned quickly enough that the image on the display seems to be complete, thanks to the persistence-of-vision effect.

I had considered driving this 595 off separate pins, but decided not to. This is the first one that is connected, so it keeps the last byte of 4 that is sent out. This has the advantage that the latch of all 4 shift registers is operated at the same time, ensuring there's no lag between changing the column and row data. This would probably not be noticeable, but it would annoy me knowing there was a slight lag :)

Putting all this work together, I still had to make the software driving the display useful, rather than just pushing out hard-coded bitmaps.

I wrote some code to turn a 2d boolean array into a series of bytes for direct output. There's an intermediate stage which updates the cached bytes from the boolean array for performance, so the continuous display scanning/multiplexing isn't slowed down by excessive data shuffling. A quick demo later, and we have something that actually shows off the displays as one single screen:

Finally, I needed to run some of the LEDs at a different brightness level. I modified my code to maintain 2 arrays and 2 byte caches. One contains the 'bright' LEDs, and one the 'dim', and these are lit alternately for different periods to create this effect. Again, the refresh rate needs to be fast enough that it's not obvious to the human eye, and that's where I started to run into problems. Switching between the 2 display layers for the 24x8 display, multiplexing the rows, and varying the duty cycle of different LEDs seemed to be getting too much. I couldn't do all that fast enough to keep the refresh rate sufficiently high - the dim LEDs were showing horrible signs of flickering.

It turns out the shiftOut and digitalWrite functions provided by the Arduino software are pretty slow, and this becomes a problem when you're pushing a lot of data. My clever byte caching wasn't actually making a difference, since it seems the shiftOut function turns that back into individual bits for output - which I could have done myself without the intermediate layer.

Fortunately it seems I'm not the only person who's had this problem, and thanks to the extremely clever MartinFick on this forum post, I replaced the shiftOut and digitalWrite calls with shiftRaw and fastWrite. The difference in performance is staggering - I have much more control over the duty cycle again, and there's no sign of flicker.

I think it's fair to say this has been a successful weekend. I've got a reasonably sane bit of code for driving the display with both dim and bright LEDs 'simultaneously', and it's run off a data structure that should be dead easy to implement the game of life on top of. All I'm missing now is an RTC to keep time, and actually porting the code over...

More Simple Arduino Goodness

Updated: Sat Feb 6 22:13:00 2010

After a little more fiddling on Friday night, I had a bunch of LEDs connected to one of the 595 shift registers following one of the Oomlout example circuits.

directly driving LEDs

I added a second shift register chained off the second to run another 8 LEDs, again following an example circuit, but this time from the Earthshine Arduino Guide.

Driving with a shift register

This naturally meant cool lighting effects.

Then I had a go at driving the LEDs at different duty cycles to vary their brightness. This is something I'll need to do with the 8x8 displays, so it seemed like a sensible plan to have a go with a simple circuit. It turns out it's not that hard to do:

Duty cycle demo

Arduino Goodness

Updated: Thu Feb 4 22:13:00 2010

So, my plans to build the Game of Life Clock took a step closer to reality today with the arrival of my order of stuff from Oomlout following the recommendation of a couple of people. Everything turned up within 24 hours of placing the order. Very impressed.

Arduino bits

In addition to the 8x8 LED matrices I needed, I bought a new Arduino Duemilanove, since my old NG only has an ATmega8, with 8k of flash. This has been fine for tinkering, but was looking a bit tight for running the game, RTC and matrix driver chips. The Duemilanove has 32k of flash, which is tonnes more than I need.

Old and New

It also gave me a chance to order the ARDX starter kit, which in addition to the Duemilanove has a bunch of extra stuff to play with. Given the Arduino-heavy nature of some of the dev8D workshops this year, it seemed like it'd be worth having some extra bits to play with.

ARDX kit

New arduino and breadboard

I've not really done much this evening other than have a play with the first starter kit circuit, and get the latest arduino software up and running. It is worth noting that the 10mm LED that ships as part of the starter kit is a bit "argh, my eyes".

One thing that did come as a bit of a surprise was not having to hit the reset button to upload a new sketch. That'll take some getting used to.


Next up is driving an 8x8 LED matrix off a pair of 595 shift registers. I'm still torn between using shift registers or the much more sophisticated MAX7219 LED driver. Both have their advantages and disadvantages, so I think the best bet is to have a play and see...

Go to Archive