PiFinder Perils
I’m building a PiFinder! It uses a camera to take pictures of the sky, connected to a Raspberry Pi that uses a database of stars to tell you where in the sky your telescope is pointing. But a PiFinder is $550 new. A stock PiFinder uses the newest and most expensive options for Pis and cameras, and when I looked at the parts list, I thought: I can build something similar for a fifth of the price! And so began the quest to create the Sliced PiFinder: a DIY PiFinder made from a used Raspberry Pi for a small slice of the cost. Previously, I bought a $33 camera and 3D printed a custom enclosure for it.
On the one hand, if I can successfully modify this project so I don’t need expensive parts, I save $400! On the other hand, my hubris has led to consequences.
This is a longpost about those consequences and the shenanigans I found to work around them.
(None of this would be possible if the PiFinder wasn’t open source, so thanks to the creator brickbots, who has given me tons of help!)
Problem 1: Camera FoV
We live in a world where I use software written by a space agency across the ocean for free. What a world we live in.
Once the PiFinder takes a picture of the sky, it needs to use a database of stars to figure out where in the sky the camera was looking when the picture was taken. That process is called “plate solving” because we used to take star pictures on photographic plates.
How do you plate solve? Amazingly, that’s the easy part! The European Space Agency released an open source Python library called “tetra3” that does it all for you! All you need to know is… hmmmm… the exact position of every single star in the sky your camera can see.
Amazingly, you can just download a database of every single bright star in the sky! Astronomers have been making star catalogs for thousands of years, including launching space telescopes like Gaia devoted to doing nothing but star cataloging. The PiFinder software downloads a database of stars automatically!
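Here’s roughly what the happy path looks like, as a sketch based on tetra3’s documented API (the PiFinder software wraps all of this for you, so this isn’t its actual code):

```python
from PIL import Image
from tetra3 import Tetra3

# Load tetra3's default star pattern database and solve a single image.
t3 = Tetra3()
img = Image.open("sky.png")  # hypothetical path to a picture of the night sky

# fov_estimate is the camera's horizontal field of view in degrees --
# more on why that matters in a moment.
result = t3.solve_from_image(img, fov_estimate=10.2)
print(result)  # RA, Dec, roll, solved FoV, number of matched stars, etc.
```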
However, for tetra3 to plate solve, it needs to know your camera’s field of view (FoV), so it knows how far apart various stars will look to the camera. Different lenses change FoV. The PiFinder software expects you to have a Raspberry Pi HQ Camera ($50) and 25mm lens ($20), which has a FoV of 10.2 degrees. I’m using an IMX462 camera ($33) and a 12mm lens from a different camera ($10), so how do I figure out my FoV?
One option is math. I tried pointing the camera at Orion, taking a picture of the screen, measuring the distance in pixels between stars in Orion, and comparing that to the known angular distance. That gave me an estimate of 15 degrees of FoV. Unfortunately, I didn’t know at the time that the picture was wider than the screen.
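The idea, with made-up numbers, looks something like this:

```python
# Two stars a known angular distance apart, measured in the image itself.
# All of these numbers are placeholders, not my actual measurements.
angular_sep_deg = 7.5   # roughly the Betelgeuse-to-Bellatrix separation
pixel_sep = 480         # how many pixels apart they land in the image
image_width_px = 1280   # width of the IMAGE in pixels (not the screen!)

deg_per_pixel = angular_sep_deg / pixel_sep
fov_estimate_deg = deg_per_pixel * image_width_px
print(f"Estimated horizontal FoV: {fov_estimate_deg:.1f} degrees")
```

(This also quietly assumes the lens maps angle to pixels linearly, which is only roughly true.)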
Then I realized I was using a Raspberry Pi and could access the raw hardware. I SSHed in, used the command line to take a picture, then downloaded the raw .png. Now I knew what the software was looking at. I uploaded it to nova.astrometry.net, which plate solved my image and told me not just the exact coordinates of where I was looking (and the constellation!) but also the image’s FoV.
(Also, just for fun, I used it to take my first ever long exposure shot of 30s, and was blown away by the number of stars visible. Image above!)
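(For anyone who wants to try the same thing in Python instead of at the shell, the capture looks roughly like this with picamera2. Treat it as a sketch: the exact controls and exposure limits depend on your camera and its driver.)

```python
import time
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())

exposure_us = 30_000_000  # 30 seconds, in microseconds
picam2.set_controls({
    "ExposureTime": exposure_us,
    "AnalogueGain": 8.0,
    # long exposures usually also need the frame duration raised to match:
    "FrameDurationLimits": (exposure_us, exposure_us),
})
picam2.start()
time.sleep(2)          # give the controls a moment to take effect
picam2.capture_file("sky.png")
picam2.stop()
```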
Another complication: images are 2D. Does tetra3’s FoV input mean horizontal FoV or vertical FoV or diagonal FoV? I didn’t see it anywhere in the documentation. (It was horizontal)
In the end, my new IMX462 camera with the 12mm lens had about 26 degrees of horizontal FoV. I could buy a new lens to reduce that down to the PiFinder software’s expected 10.2 degrees… but instead brickbots helped me write software to crop the image down to 10.2 degrees. Now plate solving works!
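The crop itself is conceptually simple. Here’s a sketch of the idea (not the code that actually went into PiFinder), which assumes angle maps roughly linearly to pixels:

```python
import numpy as np

def crop_to_fov(image: np.ndarray, full_fov_deg: float, target_fov_deg: float) -> np.ndarray:
    """Keep only the center of the frame so its horizontal FoV shrinks from
    full_fov_deg to target_fov_deg."""
    h, w = image.shape[:2]
    scale = target_fov_deg / full_fov_deg   # e.g. 10.2 / 26 is roughly 0.39
    new_w, new_h = int(w * scale), int(h * scale)
    x0, y0 = (w - new_w) // 2, (h - new_h) // 2
    return image[y0:y0 + new_h, x0:x0 + new_w]
```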
Problem 2: Alt/Az
Let’s say you want to see the Andromeda galaxy. How do you know whether it’s visible in your night sky right now? Should you look north or east or south to find it?
Since the earth is a sphere, “down” changes from place to place. Since north is perpendicular to down, that means north/south/east/west look at different parts of the sky based on your place on earth.
Thankfully, if you know your location on Earth and the time, you can do some math (Earth rotates once every 24 hours) to figure out where to look relative to your local north and down directions. Those coordinates are called “altitude” and “azimuth”.
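If you’re curious what that math looks like, here’s a sketch using astropy (this isn’t what the PiFinder runs internally; the location and time below are made up):

```python
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

andromeda = SkyCoord.from_name("M31")   # looks up the galaxy's RA/Dec online
here = EarthLocation(lat=40.0 * u.deg, lon=-75.0 * u.deg)  # made-up observer location
now = Time("2024-10-01 03:00:00")       # UTC

# Convert the fixed sky coordinates into "where do I point from here, right now?"
altaz = andromeda.transform_to(AltAz(obstime=now, location=here))
print(f"altitude {altaz.alt.deg:.1f} deg, azimuth {altaz.az.deg:.1f} deg")
```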
The PiFinder gets your location on Earth and the time from a $50 GPS USB dongle. But that’s expensive and my phone already has a GPS. If I can modify the software so I can enter my coordinates and the time, I save $50.
It turns out parts of the software don’t work unless it knows your coordinates, but instead of showing an error message some features simply do nothing. I edited the software’s config file to add my GPS coordinates… but coordinates in the config file are never used, since all previous PiFinders have had GPSes and got their coordinates from there. So I had to modify the existing “GPS_fake.py” (which previously did nothing) to actually send fake GPS and time messages.
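The shape of the fix looks something like this. To be clear, the names and message format here are hypothetical stand-ins, not PiFinder’s actual internals; the real GPS_fake.py feeds fixes into whatever channel the hardware GPS code normally uses:

```python
import datetime
import time
from multiprocessing import Queue

# Made-up coordinates; in my version these come from the config file.
MY_LAT, MY_LON, MY_ALT_M = 40.0, -75.0, 100

def fake_gps_loop(out_queue: Queue) -> None:
    """Pretend to be a GPS: periodically emit a position fix and the current
    system time, exactly as if a real receiver had reported them."""
    while True:
        out_queue.put(("fix", {"lat": MY_LAT, "lon": MY_LON, "altitude": MY_ALT_M}))
        out_queue.put(("time", datetime.datetime.now(datetime.timezone.utc)))
        time.sleep(1)
```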
And because computing altitude and azimuth isn’t complicated enough: the Raspberry Pi has no battery-backed clock, so every time it starts up the time is wrong (unless it can connect to WiFi and fetch the time using NTP). This works for me at home, but if I want to take this to dark places, I eventually need to program a screen that lets you enter the coordinates and time on the device itself!
Problem 3: Gyroscope Fusion Failures
If you move a telescope with a PiFinder left or right, changing your azimuth, the star display on its screen should shift left or right quickly too. That’s hard for a few reasons.
First, the PiFinder needs long exposures to capture dim stars. If it moves during a long exposure, the image it captures will have stars that look like smeared lines instead of dots, and it can’t plate solve. That means if you bump the device, there are a few seconds of delay before the camera can lock on to the sky again.
To get quicker feedback during unsuccessful pictures, the PiFinder uses an inertial measurement unit (IMU) chip, which combines a gyroscope (measures changes in angle), an accelerometer (measures acceleration, including gravity), and a magnetometer (measures Earth’s magnetic field). The PiFinder uses a fancy $30 IMU chip called the BNO055, which has a tiny processor that computes the chip’s current orientation (and therefore your altitude and azimuth) 100+ times a second.
However, I scavenged an LSM6DS3TR-C + LIS3MDL IMU from a different project with the exact same sensors: magnetometer, gyroscope, accelerometer. Surely, I thought, I could write some code to compute altitude/azimuth from those sensors and save $30!
Combining data from the 3 sensors is called “sensor fusion”. It’s incredibly hard. Thankfully, I’m not the first to study sensor fusion (drone builders want it too), and there are two main sensor fusion algorithms that already exist, named Mahony and Madgwick. Adafruit has an AHRS library which implements both… in C++, but I was using Python. Eventually I found a Python implementation of both, downloaded it, loaded in my sensor readings… and spinning the device 90 degrees didn’t change the output by 90 degrees. Why?
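For reference, the per-sample loop I was aiming for looks something like this. I’m using the ahrs package here purely as an example (not necessarily the implementation I used), and the units each library expects are exactly the kind of thing you should double-check:

```python
import numpy as np
from ahrs.filters import Madgwick

madgwick = Madgwick(frequency=104.0)   # how often samples arrive; the "beta" gain is tunable too
q = np.array([1.0, 0.0, 0.0, 0.0])     # start from the identity quaternion

for _ in range(104):                   # one second of pretend samples
    # Replace these constants with real sensor reads. Units matter (see Problem #2):
    gyr = np.array([0.0, 0.0, 0.01])   # rad/s
    acc = np.array([0.0, 0.0, 9.81])   # m/s^2 (gets normalized inside the filter)
    mag = np.array([20.0, 0.0, -40.0]) # also normalized inside the filter
    q = madgwick.updateMARG(q, gyr=gyr, acc=acc, mag=mag)

print(q)  # orientation quaternion; altitude/azimuth come from converting this
```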
Problem #1: Axis remapping!
In my PiFinder, the IMU is oriented so the sensor’s +Y direction is the one the camera is pointing towards, but I had a hard time figuring that out because the chip was buried inside a circuit board and the code has many options to switch coordinates based on how a PiFinder is mounted on a telescope.
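“Remapping” just means permuting (and possibly flipping) the sensor axes before they go into the filter. A sketch with an example permutation, not the one my mounting actually needed:

```python
import numpy as np

def remap_axes(reading: np.ndarray) -> np.ndarray:
    """Swap/flip axes so the filter's idea of 'forward' matches the camera.
    The correct permutation and signs depend entirely on how the IMU is
    mounted, which is exactly the part that took me forever to figure out."""
    x, y, z = reading
    return np.array([y, -x, z])
```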
Problem #2: Units
The Madgwick filter expected the gyro’s inputs to be in degrees/second when they were in radians per second. No problem, I can just multiply them by 180/pi. Then I took a look at the Madgwick code, and it requests degrees because it multiplies the numbers by pi/180 to turn them back into radians. Aaargh.
Problem #3: Calibration!
Gyroscopes and magnetometers both have offsets: if a sensor outputs a range, say, 4 units wide, then instead of readings from -2 to 2 you might get readings shifted to 0.5 to 3.5. Gyroscopes drift this way on their own; magnetometer readings shift for a different reason, because Earth’s magnetic field is different everywhere (and, as we’ll see, because of nearby magnets). The fix is to take many, many readings, average them to find the true zero point, then subtract that offset from all future readings so zero really is the middle.
For a gyroscope, those calibration readings need to be taken while the gyroscope isn’t moving. I realized I could use the accelerometer for that: if the accelerometer’s gravity direction isn’t changing, the device isn’t moving, and it’s safe to grab some gyroscope readings. Once I integrated the calibrated gyroscope readings over time, the result looked pretty stable, with only around 0.3 degrees of error.
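In code, the idea is something like this (read_accel and read_gyro are stand-ins for however you read your sensors, and the stillness threshold is arbitrary):

```python
import numpy as np

def estimate_gyro_bias(read_accel, read_gyro, n_samples: int = 200) -> np.ndarray:
    """Collect gyro readings only while the accelerometer's gravity vector is
    holding still, then average them to get the gyro's zero offset."""
    samples = []
    last_acc = read_accel()
    while len(samples) < n_samples:
        acc = read_accel()
        if np.linalg.norm(acc - last_acc) < 0.05:  # barely moved since last check
            samples.append(read_gyro())
        last_acc = acc
    return np.mean(samples, axis=0)  # subtract this from every future gyro reading
```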
For a magnetometer, spin it around as much as possible, and the unchanging 3D vector of Earth’s magnetic field, as measured by the magnetometer’s 3 axes, should trace out a sphere! Then you can use the center of that sphere as your reference zero point.
Engineers did what engineers do in overly specific fields and made up magnetometer calibration number jargon. The values for the center of the sphere (which should be at (0,0,0) but usually isn’t) are called the “hard-iron offsets” - that’s the constant shift you get from magnets or magnetized metal mounted near the sensor. But you can get fancier: other nearby material can bend the field and squash the sphere into an ellipsoid, and correcting that is the “soft-iron” calibration. Either way, they boil down to an offset vector and a calibration matrix!
The simplest way to calibrate a magnetometer is to ignore soft iron entirely: keep track of the maximum and minimum values on all 3 magnetometer axes as you spin the device around every which way, assume the hard-iron offset is the average of the max and min, and subtract that offset every time you read from the magnetometer in the future.
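Which looks roughly like this:

```python
import numpy as np

def estimate_hard_iron(mag_samples) -> np.ndarray:
    """Simplest hard-iron calibration: given magnetometer readings collected
    while spinning the device every which way, take the midpoint of the min
    and max on each axis as the offset. Soft iron is ignored entirely."""
    samples = np.asarray(mag_samples)                       # shape (N, 3)
    return (samples.max(axis=0) + samples.min(axis=0)) / 2.0

# Afterwards, every raw reading gets corrected:
#   calibrated = raw_mag - offset
```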
I had to do all that calibration myself. The output still didn’t work very well: the sensor fusion algorithm reported altitude and azimuth values that didn’t line up with the actual down direction. Why wasn’t it working?
Problem #4: EVERY SINGLE MADGWICK FILTER EVER WAS WRONG 3 YEARS AGO
According to research by someone named Mark Uckermann from a British bike GPS startup, almost every Madgwick library has a subtle bug compared to the original paper, invisible if you use a small enough “beta” parameter. Oops. That was 3 years ago, and I think my library has fixed it? But I can’t tell, since the variable names are slightly different.
Problem #5: Speed
The sensor fusion algorithms involve lots of math. Doing these calculations myself in Python (along with the camera and plate solving and everything else the PiFinder was doing) took up so much time that on my Pi 3, the IMU code could only read values from the sensors around 12 times a second. The sensors were updating 104 times a second. That meant I was losing tons of info, including the gyroscope’s “how much the angle shifted since the last time you checked” data. I could configure the sensors to run slower… but the gyroscope+accelerometer only supports discrete rates like 12.5 Hz, while the magnetometer supports a different set of discrete rates, like 10 Hz. Aargh.
After all that… turning my device 90 degrees still wouldn’t make the azimuth output change by 90 degrees. Aaaaargh.
But wait. If my sensor fusion wasn’t working… maybe the problem was in the sensor fusion algorithm code, and I could skip it entirely. The accelerometer tells me the direction of gravity - from that, maybe I can compute altitude. The magnetometer can tell me where magnetic north is - that should get me azimuth. If I measure both… maybe there’s a way to compute altitude and azimuth directly? Maybe, just maybe, if I sat down and did a ton of galaxy brain math, I’d be able to use my own scavenged IMU instead of buying the PiFinder’s recommended BNO055 for $30!
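For the record, the direct math itself isn’t the hard part. A sketch, assuming the accelerometer reads +1g pointing away from Earth when still, the camera points along the IMU’s +Y axis (my mounting), and with no filtering at all, so it’s noisy:

```python
import numpy as np

def alt_az_from_accel_mag(acc, mag, camera_axis=np.array([0.0, 1.0, 0.0])):
    """Compute altitude/azimuth from one accelerometer and one magnetometer
    reading, no fusion filter involved. Azimuth comes out relative to
    magnetic north, so true north still needs a declination correction."""
    down = -np.asarray(acc, dtype=float)
    down /= np.linalg.norm(down)               # unit vector toward the Earth
    east = np.cross(down, mag)                 # magnetic east, in device coordinates
    east /= np.linalg.norm(east)
    north = np.cross(east, down)               # magnetic north, already unit length

    p = camera_axis / np.linalg.norm(camera_axis)
    azimuth = np.degrees(np.arctan2(np.dot(p, east), np.dot(p, north)))
    altitude = np.degrees(np.arcsin(-np.dot(p, down)))
    return altitude, azimuth % 360.0
```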
Oh yeah. The BNO055 chip is $30. It does calibration for you.
I gave up, desoldered my old IMU, bought a BNO055, and soldered it in.
It better work.