.shr file format specification
andrewclegg (Participant):
Hi,
I wrote some Python code to read the older .bbr file format, and would now like to extend that code to also read the newer .shr format. Can you point me to where I can find the latest Spike source code so I can see what the new format is, or where I can find the file format specification document?
Thanks.
Andy
Andrew (Moderator):
Hi Andy,
I am attaching a folder with the example project we will be shipping for SHR parsing. The format is very similar to the BBR format, except the header (still a single struct) was extended to include all the new settings introduced with the SHR file format. Also, each sweep is prefixed with a bit more information than just the timestamp. The general parsing process should feel very familiar.
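In rough Python, the general shape is something like the sketch below. To be clear, every field name, type, and ordering in it is a placeholder for illustration only; use the actual struct definitions from the attached example project.

    # Sketch of the general pattern only: one fixed header struct, then repeated
    # sweep records, each prefixed with a timestamp plus the new per-sweep metadata.
    # CAUTION: all field names, types, and orderings here are hypothetical
    # placeholders; the real layouts are in the attached example project.
    import struct

    HEADER_FMT = "<IIqidd"      # hypothetical: signature, version, sweep_count,
                                #   sweep_length, first_bin_freq_hz, bin_size_hz
    SWEEP_PREFIX_FMT = "<qddd"  # hypothetical: timestamp_ms, latitude, longitude, altitude

    def read_shr(path):
        """Yield (prefix_fields, amplitudes) for each sweep in the file."""
        with open(path, "rb") as f:
            header = struct.unpack(HEADER_FMT, f.read(struct.calcsize(HEADER_FMT)))
            _sig, _ver, sweep_count, sweep_length, _first_freq, _bin_size = header
            prefix_size = struct.calcsize(SWEEP_PREFIX_FMT)
            for _ in range(sweep_count):
                prefix = struct.unpack(SWEEP_PREFIX_FMT, f.read(prefix_size))
                amps = struct.unpack(f"<{sweep_length}f", f.read(4 * sweep_length))
                yield prefix, amps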
In the future this will be available in the standard API/SDK download; sorry, I hadn't gotten around to uploading it yet.
Regards,
Andrew

Attachments: example SHR parsing project
andrewclegg (Participant):
Thanks Andrew! I was already poking around this afternoon and deducing some of the new header structure and where the lat/lon/height is stored with each sweep.
BTW, the GPS feature is awesome. I have wanted that for a long time. I have been running a separate script and having to post-process to line up the logged GPS coordinates with each Spike sweep. My only request is that you might consider adding some visual indicator that GPS is working, right on or near the spectrum display. Perhaps a small “GPS” button that lights up green when GPS status is good, red when not good, and grey when GPS is not connected. That could be placed, for example, within the grey border above the spectrum or waterfall plot, next to the intensity slider. Just a thought. (If there already is a visual indicator, other than leaving the GPS Control Panel open, let me know — maybe I’m blind!)
Also, the decimation option is GREAT too. That will really help with long-term spectrum monitoring. My hard disk will be very happy.
One quick question: is the sweep time still taken from the system clock, or is it grabbed from GPS when available? I presume the former, since the serial GPS output only updates every second or so and isn't very continuous.
Thanks again.
Andy
Andrew (Moderator):
Hi Andy,
Yes, the timestamps still come from the system clock.
I will look into a GPS indicator for a future release. This is a good idea.
I'm glad you like the recent changes to the recording/GPS. Both sweep decimation and GPS coordinate tagging were highly requested features.
Thanks for the feedback.
Regards,
Andrew
Mihai Armand Enea (Participant):
Hi andrewclegg,
I am also looking for a way to parse the .bbr and .shr files from Signal Hound using Python, but as a Python beginner it is not so easy for me, and I need to parse these files as soon as possible because I am coming up on a deadline for my project.
Is it possible for you to share your parsing method for these files? Or maybe we can exchange email addresses and talk about it there? Any help would be greatly appreciated.
Regards,
Mihai
ykhaled (Participant):
Hello Andrew, in line with this thread: are there restrictions on the maximum SHR file size?
Also, is it a good idea to automatically split and save SHR files, say every 1 GB?
andrewclegg (Participant):
I find that Signal Hound files quickly exceed the memory limits of Python, especially on a PC where most Python installations are still limited to 32-bit. You most likely won't even be able to load a 1 GB SHR file completely into Python on a PC.
What I do when I need to process large files is process each spectrum sweep as it is read in. For example, if I need total power across the spectrum, I sum the power across the bins, and save just that number. Then I read the next sweep, etc. I don’t try to read the whole file into memory.
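In rough Python, that loop looks something like this (a simplified sketch, assuming a read_shr()-style generator like the one sketched earlier in the thread, and amplitudes in dBm, which have to be converted out of dB before summing):

    # Keep one number per sweep instead of the whole file in memory.
    # Assumes read_shr() yields (prefix, amplitudes) and that amplitudes are in
    # dBm (an assumption), so convert to linear power before summing.
    import math

    def total_power_per_sweep(path):
        totals = []
        for _prefix, amplitudes in read_shr(path):
            linear_mw = sum(10 ** (a / 10.0) for a in amplitudes)  # dBm -> mW
            totals.append(10.0 * math.log10(linear_mw))            # mW -> dBm
        return totals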
If you aren't able to do that because you actually need to operate on the full 3D array of frequency/time/amplitude, then at a minimum I recommend doing that on Linux or Mac with a large amount of memory, and still auto-saving reasonable-sized SHR files (1 GB is probably a good start). I don't recommend doing this on a PC with a 32-bit installation of Python. To keep things even more manageable for later processing, you could decimate in frequency and/or time as the files are read, unless you really need the full time/frequency resolution that's in the SHR file. Also, keep in mind that the latest versions of Spike support decimation when saving files. It can, for example, save the average spectrum over 1-second intervals, or the max spectrum over each 100 sweeps, or various combinations. That's a really useful feature for avoiding huge file sizes when acquiring data over long time spans.
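Decimating in time as the file is read can look something like this (again a sketch assuming the read_shr()-style generator above; averaging in linear power is one reasonable choice, max-hold is another):

    # Average every n sweeps into one as the file is read, shrinking the array
    # kept in memory by a factor of n. Any partial final block is dropped here
    # for simplicity.
    import numpy as np

    def decimate_in_time(path, n=100):
        averaged, block = [], []
        for _prefix, amplitudes in read_shr(path):
            block.append(10 ** (np.asarray(amplitudes) / 10.0))  # dB -> linear
            if len(block) == n:
                averaged.append(10 * np.log10(np.mean(block, axis=0)))
                block = []
        return np.array(averaged)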
BTW, I tried the save-channelized-data feature in the new versions of Spike (same options screen as decimation), but it didn't work for me. I'm not sure what happened. I haven't written any scripts that read SHR files recorded with channelization, because I never managed to produce valid sample files to work from. Probably my error somewhere.
BTW, I made a version of the python script that can deal with corrupted SHR files when Signal Hound crashes during data acquisition. It’s very simple — the code just avoids reading the last 2 or 3 sweeps, which are the only ones that are corrupted/incomplete when there is a crash. The rest of the data are still fine.
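The fix amounts to something like this, assuming fixed-size sweep records as in the sketches above (header_size and record_size would come from the struct sizes used when parsing):

    # Derive how many complete sweeps the file actually contains from its size,
    # then skip the last few, which are the only ones damaged by a crash.
    import os

    def safe_sweep_count(path, header_size, record_size, skip_last=3):
        data_bytes = os.path.getsize(path) - header_size
        complete = data_bytes // record_size  # any trailing partial record is ignored
        return max(0, complete - skip_last)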
ykhaled (Participant):
Thanks for sharing.
My concern is not about memory or sampling technique.
I need to know whether Spike automatically stops recording due to file-size restrictions, if any.
andrewclegg (Participant):
OK, but since this thread is about parsing SHR files, my assumption that you were asking about max file size in relation to reading the files with Python scripts seemed logical!
Max file size is an option in Spike, but I suggest you start a thread under that topic if you want more details.
Andrew (Moderator):
You can change the maximum allowed file size in the preferences menu. It is limited to 1 GB when running the 32-bit version of Spike.
If you need even more flexibility than Spike provides, you could consider programming against the API directly, which gives you full control over the acquisition and recording process.
Regards,