Converting dash cam videos into Panoramax images
Posted by FeetAndInches on 20 February 2026 in English.

I’ve recently begun contributing street-level imagery on Mapillary and Panoramax in my local area. I figured that my dash cam was already recording anyway, so if it could be of use to anyone, why not share it?
Contributing to Mapillary was very easy; since my dash cam has an integrated GPS that encoded its data into the video file, I could just upload the video to Mapillary and their website would turn it into an image sequence. Panoramax requires you to preprocess the video into geotagged images yourself, which made it hard to contribute to. Some cameras can be configured to save periodic images instead of videos, but that didn’t work for me because I still needed the dash cam to work normally as a dash cam first and Panoramax instrument second. It took me a while to figure it out, so I’m writing this blog post to hopefully help out the next guy in the same situation.
The task involves four basic steps. I scripted a solution that works specifically for my dash cam model (Garmin 47) and operating system (Linux). If Panoramax continues to grow, I imagine that separate scripts could be written for each step to mix and match for different camera types and computing environments. The steps are:
- Extract the raw GPS data from the dash cam video clip(s)
- Along the GPS trace, create a set of evenly-spaced points
- Extract images from the video occurring at the evenly-spaced points, and
- Add the GPS and time data to the image files
One could go even further and automatically upload the images to Panoramax straight from the terminal, but that’s beyond my coding abilities.
Let’s take a look at each step in detail:
Step 1 - Getting GPS data from the video
Thankfully, Garmin makes this relatively easy to do with exiftool. Open a terminal in the directory containing the video clips and run:
exiftool GRMN<number>.MP4
The output will contain a warning:
Warning : [minor] The ExtractEmbedded option may find more tags in the media data
So we take the hint and add the -ee3 option:
exiftool -ee3 GRMN<number>.MP4
Now exiftool will output all the same information as before, plus a block like the following for each second of video:
Sample Time : 0:00:58
Sample Duration : 1.00 s
GPS Latitude : XX deg YY' ZZ.ZZ" N
GPS Longitude : UU deg VV' WW.WW" W
GPS Speed : 11.2654
GPS Date/Time : 2026:02:13 22:24:45.000Z
Jackpot! Now we can redirect the output to a file and get our GPS coordinates. We also need a format file in the working directory to tell exiftool how to lay out the data, so I saved the following as gps_format.fmt:
#[IF] $gpslatitude $gpslongitude
#[BODY]$gpslatitude#,$gpslongitude#,${gpsdatetime#;DateFmt("%Y-%m-%dT%H:%M:%S%f")}
Now we pass that to exiftool to only print the metadata we’re interested in. We’ll also put > gps.tmp to save the output to a file:
exiftool -p gps_format.fmt -ee3 GRMN<number>.MP4 > gps.tmp
And we’re done! Now we have the raw GPS information out of the video and into plain text.
Step 2 - Turn the GPS data into evenly spaced points
To do this, I use Python to linearly interpolate between GPS points approximately 3 meters apart. And I do mean very approximately: instead of doing a proper distance calculation, I just eyeball how many meters are in a degree. One meter is very roughly 0.000009° of latitude. Since one meter of east–west distance is a larger fraction of a degree of longitude near the poles, the longitude scale needs to be adjusted based on the latitude. I blindly use the latitude of the first point of the sequence and assume it doesn’t change enough over the clip to matter.
from math import cos, radians
cosd = lambda x: cos(radians(x))  # cosine of an angle given in degrees
# lat0 = latitude of the first GPS point in the trace
scale_lat = 1 / 9e-6                 # degrees of latitude -> meters
scale_lon = (1 / 9e-6) * cosd(lat0)  # degrees of longitude -> meters
Now it is easy to use the Pythagorean Theorem to estimate the distance between two points:
dx = scale_lon * (lon1 - lon0)
dy = scale_lat * (lat1 - lat0)
dist_between_points = (dx**2 + dy**2)**0.5
Compute this distance for each consecutive pair of points along the GPS trace, and keep a running tally of the total distance traveled. For example, consider the following data after you stop at a red light, sit for a while, and then keep going:
Pt | Dist | Tot
A | -- | 0
B | 10 | 10
C | 6 | 16
D | 2 | 18
E | 0 | 18
(sit at the red light...)
Q | 0 | 18
R | 1 | 19
S | 3 | 22
T | 7 | 29
U | 11 | 40
V | 14 | 54
(and so on)
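The running tally above is straightforward to compute in code. Here’s a sketch using the same flat-earth scale factors as before (cumulative_distances is my own helper name, not from any library):

```python
from math import cos, radians

def cumulative_distances(points, lat0):
    """Running total of approximate distance (meters) along a GPS trace.

    points: list of (lat, lon) in decimal degrees.
    lat0: latitude used for the longitude scale (first point of the trace).
    Uses the same rough conversion as above: 1 degree of latitude ~ 1/9e-6 m.
    """
    scale_lat = 1 / 9e-6
    scale_lon = (1 / 9e-6) * cos(radians(lat0))
    totals = [0.0]
    for (lat_a, lon_a), (lat_b, lon_b) in zip(points, points[1:]):
        dx = scale_lon * (lon_b - lon_a)
        dy = scale_lat * (lat_b - lat_a)
        totals.append(totals[-1] + (dx**2 + dy**2) ** 0.5)
    return totals
```

The result is one cumulative total per GPS point, exactly like the "Tot" column in the table.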
Suppose you want image spacing of about 3 meters (about 10 feet or half a car length). So you want images at 0, 3, 6, 9, 12, 15, …, and so on. We can take point A as our first point, but we need to interpolate between GPS points to find evenly-spaced points. I’ll use the notation X -> Y N% to mean “interpolate N% from X to Y.” Then to find our desired points, we need:
Pt | Formula
0 | A
3 | A -> B 30%
6 | A -> B 60%
9 | A -> B 90%
12 | B -> C 33%
15 | B -> C 83%
18 | D
21 | R -> S 67%
24 | S -> T 29%
27 | S -> T 71%
30 | T -> U 9%
etc...
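Finding which segment each evenly-spaced target falls in, and how far along it, can be sketched like this. spaced_points is my name for it; it works on the cumulative totals from the table above, and it lands exactly on a GPS point (fraction 0) when the target distance matches one, which skips the stationary red-light points the same way the table does:

```python
from bisect import bisect_left

def spaced_points(totals, spacing=3.0):
    """Map evenly spaced target distances onto GPS trace segments.

    totals: cumulative distances, one per GPS point (the "Tot" column).
    Returns a list of (start_index, fraction): each target lies `fraction`
    of the way from point start_index to the next point. fraction == 0.0
    means the target coincides with a GPS point (e.g. 18 m -> point D).
    """
    out = []
    target = 0.0
    while target <= totals[-1]:
        j = bisect_left(totals, target)  # first point at or past the target
        if totals[j] == target:
            out.append((j, 0.0))         # exactly on a GPS point
        else:
            frac = (target - totals[j - 1]) / (totals[j] - totals[j - 1])
            out.append((j - 1, frac))
        target += spacing
    return out
```

Running it on the table’s totals reproduces the interpolation percentages shown above.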
Since the Garmin takes GPS measurements once per second, this also tells us exactly when each new point occurred: for the point 60% of the way from A to B, it’s just the GPS timestamp of A plus 0.60 seconds. For the latitude and longitude of the interpolated point, we can interpolate the two coordinates separately; 3 meters is nowhere near far enough for great-circle effects to matter. So e.g.
lerp = lambda a, b, x: (1 - x) * a + x * b
lat_interp = lerp(latA, latB, 0.6)
lon_interp = lerp(lonA, lonB, 0.6)
# And so on for each interpolated point
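The timestamp arithmetic from the paragraph above looks like this in code. A sketch only: interp_time is a hypothetical helper, and it assumes the 1 Hz sample rate of this camera.

```python
from datetime import datetime, timedelta

def interp_time(t_a, frac, sample_s=1.0):
    """Timestamp of a point `frac` of the way from fix A to the next fix,
    assuming fixes are sample_s seconds apart (1 Hz on this Garmin)."""
    return t_a + timedelta(seconds=frac * sample_s)

# 60% of the way from a fix at 22:24:45 lands at 22:24:45.6
t = interp_time(datetime(2026, 2, 13, 22, 24, 45), 0.6)
```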
Save this output to a file (I call mine processed_points.csv), and you’re done with step 2!
Step 3 - Extract images from the video
It is possible to extract a single frame of a video using ffmpeg. The time is given as a decimal number of seconds after the start of the video; I format it to exactly three decimal places.
ffmpeg -ss <time> -i <video>.MP4 -frames:v 1 output.jpg
By default, ffmpeg compresses the images quite a bit. It was enough that I could notice a quality difference when I put a paused frame of the video side-by-side with an extracted image. We can force ffmpeg to improve the quality with the -q:v <number> option. A smaller number produces a higher-quality image at the expense of file size and processing time. I’ve settled on a value of 3, but feel free to play around with this to get the quality or file sizes you want.
ffmpeg -ss <time> -i <video>.MP4 -q:v 3 -frames:v 1 output.jpg
ffmpeg will print a bunch of text to the console that we don’t care about. To avoid flooding the screen, use the -hide_banner and -loglevel options to reduce (but not completely shut up) the amount it outputs to the console:
ffmpeg -ss <time> -i <video>.MP4 -q:v 3 -frames:v 1 -hide_banner -loglevel fatal output.jpg
Since you are going to extract many images, you’ll have to use this command in a loop with a bunch of variables that change from iteration to iteration, e.g.
ffmpeg -ss "$(printf "%.3f" "$time")" -i "$input_dir/DCIM/105UNSVD/GRMN$num.MP4" -q:v "$jpeg_quality" -frames:v 1 -hide_banner -loglevel fatal "$output_dir/$num-$(printf "%04d" "$img_num").jpg"
My naming convention produces file names of the format video number-image number.jpg. So for example, the 25th image extracted from GRMN4567.MP4 would be named 4567-0025.jpg.
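For what it’s worth, both the three-decimal seek time and the file name come down to ordinary string formatting. A Python sketch of my conventions (the helper names are mine):

```python
def ffmpeg_time(seconds):
    """Format a seek time to exactly three decimal places,
    like printf "%.3f" in the shell loop above."""
    return f"{seconds:.3f}"

def image_name(video_num, img_num):
    """Build the <video number>-<image number>.jpg name described above,
    zero-padding the image number to four digits."""
    return f"{video_num}-{img_num:04d}.jpg"
```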
And we’re almost there! Now we just need to put the metadata from step 2 into the images we just generated.
Step 4 - Add the GPS and time metadata to the images
You can write tags to files using exiftool using the format:
exiftool -<key>=<value> <file name>.jpg
You can add multiple tags in a single command:
exiftool -<key1>=<value1> -<key2>=<value2> <file name>.jpg
Note that exiftool only supports specific keys and will refuse to write a tag it doesn’t recognize. It also keeps a backup copy of each file by default (renamed with an _original suffix), so to avoid duplicating each image, add -overwrite_original:
exiftool -overwrite_original -<key1>=<value1> -<key2>=<value2> <file name>.jpg
exiftool will print a confirmation line to the terminal after every single image. To avoid that, redirect its standard output to /dev/null. This tells the terminal to throw the output into a black hole, or the wardrobe to Narnia, or anywhere else besides the terminal (errors still come through on standard error):
exiftool -overwrite_original -<key1>=<value1> -<key2>=<value2> <file name>.jpg > /dev/null
For Panoramax to accept your images, you need all of the following tags:
-gpslatitude=45.6789
-gpslongitude=-123.456789
-gpslatituderef=N
-gpslongituderef=W
-datetimeoriginal=2000-01-02T03:04:05
If you are missing these, Panoramax will reject your image. Note that the latitude and longitude ref tags are necessary because exiftool doesn’t understand negative coordinates as being in the southern or western hemispheres. You have to provide them separately for the GPS data to be read correctly. If you forget to add them, Panoramax may accept the image but put it in the wrong place. The date and time should be given in ISO 8601 format. If you don’t specify a time zone, Panoramax will assume local time and automatically convert it to UTC on their site.
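Since the GPS trace uses signed decimal coordinates, the ref tags have to be derived from the signs. A small sketch (gps_refs is my name for it, not part of any library):

```python
def gps_refs(lat, lon):
    """Hemisphere reference tags from signed decimal coordinates.

    EXIF stores GPS coordinates as unsigned values plus N/S and E/W
    reference tags, so the sign has to be translated explicitly.
    """
    return ("N" if lat >= 0 else "S", "E" if lon >= 0 else "W")
```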
You can theoretically add any tag in the EXIF specification. Some that I like for Panoramax are:
-subsectimeoriginal=067
-author=FeetAndInches
-make=Garmin
-model="Garmin 47 Dash Cam"
The SubSecTimeOriginal field is important for getting Panoramax to put your sequence in the right order. Since the images come from a dash cam, speeds of 10–20 m/s are common, so multiple images are taken per second of video. The DateTimeOriginal tag does not preserve fractional seconds (even if you provide them when writing the tag), so several pictures would be recorded at the same time and Panoramax would have to guess their order. Note that the value is the string of digits after the decimal point: for a time of 51.328 seconds, you would write -subsectimeoriginal=328. For a time of 51.1 seconds, you would just write -subsectimeoriginal=1. For a time of 51.001 seconds, you would need to include the leading zeroes as -subsectimeoriginal=001.
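Turning a fractional timestamp into that string is easy to get wrong (leading zeros matter, trailing zeros don’t change the value), so here’s a sketch of the rule as described above (subsec_digits is a hypothetical helper; it assumes three decimal places of precision, matching the ffmpeg seek times):

```python
def subsec_digits(seconds):
    """SubSecTimeOriginal value for a time in seconds.

    Formats to three decimal places, takes the digits after the decimal
    point, and drops trailing zeros (which don't change the value).
    """
    text = f"{seconds:.3f}"            # e.g. 51.1 -> "51.100"
    return text.split(".")[1].rstrip("0") or "0"
```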
If you don’t use the SubSecTimeOriginal tag, you can still get Panoramax to show your images in order if you use a suitable file naming convention. You can open the sequence on the website and select the option to sort by file name.
The author tag is a nice way to keep attribution with the image even if it gets shared outside Panoramax. The make and model tags help fill in some of the camera information on Panoramax and help determine your GPS accuracy, which is used to determine the image’s quality score.
You can do step 4 in the same loop as step 3. Since the coordinates and time change for each image, the command ends up looking messy:
exiftool -overwrite_original -gpslongitude=$lon -gpslatitude=$lat -gpslatituderef=$ns -gpslongituderef=$ew -datetimeoriginal=$timestamp -author="$exif_author" -subsectimeoriginal="$subsec" -make="$exif_make" -model="$exif_model" -usercomment="$exif_comment" "$output_dir"/"$num""-""$(printf "%04d" $img_num)"".jpg" > /dev/null
Closing Notes
This post explains the basic principles of how to turn a video into usable images on Panoramax. I plan to write a second post going into the 201 level - things like how to deal with a missing GPS measurement, duplicated measurements, getting sent to Null Island, detecting erroneous data, using the videos immediately before and after to interpolate better at the edges, looping over multiple video clips, and so on. But for now, I hope this has been useful to you.
If anyone is interested, I can share the entire scripts that I use right now. They’re a little buggy, only partially commented, and occasionally require some babysitting to make sure they work properly. But if something is better than nothing and you are willing to try and deal with someone else’s amateur code, please let me know.
Thanks for reading,
FeetAndInches