Bug 2977 - dataset unusable after call of ft_definetrial
Status: CLOSED INVALID
Reported: 2015-10-05 16:14:00 +0200
Modified: 2019-08-10 12:31:27 +0200
Product: FieldTrip
Component: fileio
Version: unspecified
Hardware: PC
Operating System: Linux
Importance: P5 major
Assigned to:
URL:
Tags:
Depends on:
Blocks:
See also:
Andreas Wollbrink - 2015-10-05 16:14:42 +0200
After calling ft_definetrial (see below), the dataset used seems to be modified and is no longer usable afterwards.

Program snippet used:

############## BEGIN of code #########################################
% split raw data into epochs
cfg = [];
cfg.dataset = 'Test_tDCScharge_20151005_01.ds';
cfg.continuous = 'yes';
cfg.trialdef.eventtype = 'refDipPeak';
cfg.trialdef.prestim = 0.0;
cfg.trialdef.poststim = 6.0;
cfg = ft_definetrial(cfg);
############# END of code ############################################

When one reuses the same snippet (or any other FieldTrip function), the error is as follows:

Error using ft_read_header (line 2056)
unsupported header format (unknown)

Checking the status of the dataset with the CTF program dshead gives the error message 'dshead: unhandled exception.'. The dataset seems to be out of order.

When one runs the program snippet above, the following warning message appears:

Warning: '/data/biomag01/Bachelorarbeit-WillemMueller/tDCS-charge/Honigmelone2015Oct05/Test_tDCScharge_20151005_01.ds' is a directory. Use rmdir to delete directories.
> In ft_read_header (line 2142)
  In ft_read_event (line 498)
  In ft_trialfun_general (line 87)
  In ft_definetrial (line 174)

Having a closer look at the functions involved, I realized there seems to be a malfunction in ft_read_header: even when the data file is not compressed, ft_read_header assumes it is, inflates it, and then tries to delete the original file.
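To make the symptom easier to check, here is a minimal sketch (not part of the original report) that reads the header again after ft_definetrial and reports whether the .ds dataset is still readable; the dataset name is the one used in the snippet above.

% sketch: verify the dataset can still be read after ft_definetrial
try
  hdr = ft_read_header('Test_tDCScharge_20151005_01.ds');
  fprintf('dataset still readable: %d channels, %d samples\n', hdr.nChans, hdr.nSamples);
catch readErr
  warning('dataset appears damaged: %s', readErr.message);
end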
Andreas Wollbrink - 2015-10-05 16:15:50 +0200
I forget to mention the Fieldtrip version used: version r10622
Eelke Spaak - 2015-10-05 16:21:39 +0200
(In reply to Andreas Wollbrink from comment #1)

Hi Andreas,

I believe this bug was fixed in revision 10623 (so you are *just* one revision behind the fix, unfortunately):

r10623 | roboos | 2015-08-24 22:13:31 +0100 (Mon, 24 Aug 2015) | 2 lines

bugfix - fixed serious bug that I introduced in the previous revision 10622. The bug caused certain (multi-file format) datasets to have the data or header file deleted, e.g. the meg4 file in a ds directory would be deleted due to an incorrect detection of the dataset being unzipped to a temporary directory. Another affected format is brainvision (with the vhdr/vmrk/dat files). I noticed since a lot of the test files suddenly went missing.

Hopefully this did not cause you to lose any data!
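The commit message describes the root cause: the code wrongly decided that a plain .ds dataset was a copy unzipped to a temporary directory and therefore deleted it. The following is a hypothetical sketch of the kind of guard implied by that description; it is not the actual FieldTrip code, and the variable filename is an assumption standing in for the dataset path.

% hypothetical guard: only remove a decompressed copy that really lives in
% the temporary directory, never the original multi-file dataset
% (e.g. the .meg4 inside a CTF .ds directory)
inflated = strncmp(filename, tempdir, length(tempdir));
if inflated && exist(filename, 'file') == 2
  delete(filename);       % safe: only the temporary unzipped copy is removed
elseif inflated && exist(filename, 'dir') == 7
  rmdir(filename, 's');   % a directory needs rmdir, as the warning message pointed out
end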
Robert Oostenveld - 2015-10-05 21:55:15 +0200
(In reply to Eelke Spaak from comment #2)

Hi Andreas,

Sorry for the really stupid bug. It was my fault and the bug only existed for ~12 hours. Is there a specific reason why you might be stuck on this particular revision of fieldtrip?

I can think of a potential reason for you (still) having this problem: I think it was the last revision that made it to googlecode (i.e. svn on google) before google discontinued the googlecode service. If you had been doing svn update from googlecode and are still doing it, it might be that you are getting an error (which you might not notice if it is in a cron job), or it might be that there is no clear error at all, just no update any more.

best
Robert
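One way to check where an existing working copy is still pulling updates from is to inspect its repository URL; this is a sketch not taken from the thread, and the checkout path is hypothetical.

% check which repository URL the local FieldTrip working copy tracks;
% '/home/user/fieldtrip' is a hypothetical path, adjust to your checkout
[status, out] = system('svn info /home/user/fieldtrip');
if status == 0
  disp(out)   % look at the "URL:" line: googlecode vs. github
else
  warning('svn info failed; the directory may not be an svn working copy');
end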
Andreas Wollbrink - 2015-10-06 17:13:20 +0200
Hi Robert,

Indeed, I was still using the googlecode svn update feature. After switching to github I was able to download a new version of fieldtrip (version 4c881c0e054ce2e22f072302463b58a2334cb32e). The bug seems to be fixed.

Fortunately I was working on a copy of my dataset, so I did not lose any data.

Thanks for your support.

Cheers,
Andreas