Whisker tracking assessment
Input video quality
For each assessed video, characterize:
- Frame rate
- Pixel size
Frame rate and pixel size
Report the range of values over the set of assessed videos.
In a representative video
- Compute a 255-bin histogram of the intensity along the whisker backbone.
- Compute a 255-bin histogram of the intensity in the background.
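The histogram step can be sketched with NumPy; this assumes the backbone and background intensities have already been extracted as 8-bit arrays (the example arrays here are hypothetical):

```python
import numpy as np

def intensity_histogram(pixels, n_bins=255):
    """255-bin histogram of 8-bit intensities over the 0-255 range."""
    pixels = np.asarray(pixels).ravel()
    counts, edges = np.histogram(pixels, bins=n_bins, range=(0, 255))
    return counts, edges

# Hypothetical example: backbone pixels are darker than the background.
backbone = np.array([20, 25, 30, 22, 28], dtype=np.uint8)
background = np.array([200, 210, 190, 205], dtype=np.uint8)

bb_counts, _ = intensity_histogram(backbone)
bg_counts, _ = intensity_histogram(background)
```

Comparing the two histograms gives a rough sense of backbone/background contrast across the video.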
In single frames from ~10 representative videos
- Sample the intensity along a straight line crossing a particularly thick whisker.
- The steepness of the intensity profile will be used as a qualitative measure of image sharpness. This motivates particular choices for the size of the line detector used for tracing.
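One way to sketch this measurement in NumPy, assuming an 8-bit grayscale frame; the synthetic frame and line endpoints are hypothetical, and nearest-neighbor sampling stands in for whatever interpolation is actually used:

```python
import numpy as np

def line_profile(image, p0, p1, n_samples=50):
    """Sample image intensity along the straight line from p0 to p1
    ((row, col) coordinates) using nearest-neighbor lookup."""
    rows = np.linspace(p0[0], p1[0], n_samples)
    cols = np.linspace(p0[1], p1[1], n_samples)
    return image[np.round(rows).astype(int), np.round(cols).astype(int)]

def edge_steepness(profile):
    """Maximum absolute intensity change per sample: a rough sharpness score."""
    return np.abs(np.diff(profile.astype(float))).max()

# Hypothetical synthetic frame: dark vertical whisker on a bright background.
frame = np.full((10, 10), 200, dtype=np.uint8)
frame[:, 4:6] = 30  # the "whisker", 2 pixels thick

profile = line_profile(frame, (5, 0), (5, 9), n_samples=10)
steep = edge_steepness(profile)  # large values indicate a sharp edge
```

A blurrier image spreads the whisker edge over more samples, lowering the per-sample steepness.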
If a video was compressed before analysis:
- What parameters were used for the compression?
- Compression utility (e.g. ffmpeg, ImageJ, MATLAB, ...)
- Relevant parameters (e.g. bitrate, quality setting, ...)
Tracing accuracy
- Was the entire whisker traced?
- Were too many pixels traced?
- How close to the true backbone was the trace?
Tracing accuracy will vary across scenarios:
- Normal whiskers
- Crossing whiskers
- Whiskers in contact with post
Over a set of (perhaps 10) whiskers in a category, for each whisker:
- By hand, make multiple (at least 2) independent tracings by clicking on points along the whisker backbone. Points should be estimated to sub-pixel precision, with neighboring points 1-2 pixels apart. Ideally, a different individual would perform each tracing. The reliability of these traces as an accuracy standard will be assessed from the redundant traces.
- Render the average hand tracing as a polyline at 1/5th pixel resolution (the precision limit of the automated tracing).
- Similarly render the automated tracing result.
- Compute the % agreement (the percentage of rendered pixels where the two traces overlap).
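The rendering and agreement steps might be sketched as below, assuming "% agreement" is taken over the union of rendered pixels (one possible reading) and that 1/5-pixel resolution is realized by rasterizing onto a 5x upsampled grid; the example polylines are hypothetical:

```python
import numpy as np

def rasterize(polyline, shape, scale=5):
    """Mark every pixel a polyline passes through on a grid upsampled
    by `scale` (scale=5 corresponds to 1/5-pixel resolution)."""
    mask = np.zeros((shape[0] * scale, shape[1] * scale), dtype=bool)
    pts = np.asarray(polyline, dtype=float) * scale
    for (r0, c0), (r1, c1) in zip(pts[:-1], pts[1:]):
        # Sample each segment densely enough to touch every crossed pixel.
        n = int(max(abs(r1 - r0), abs(c1 - c0))) + 1
        rr = np.round(np.linspace(r0, r1, n)).astype(int)
        cc = np.round(np.linspace(c0, c1, n)).astype(int)
        mask[rr.clip(0, mask.shape[0] - 1), cc.clip(0, mask.shape[1] - 1)] = True
    return mask

def percent_agreement(mask_a, mask_b):
    """% of rendered pixels (union of both traces) where the traces overlap."""
    union = mask_a | mask_b
    if not union.any():
        return 100.0
    return 100.0 * (mask_a & mask_b).sum() / union.sum()

# Hypothetical example: identical hand and automated traces agree perfectly.
hand = [(2, 1), (2, 8)]
auto = [(2, 1), (2, 8)]
pa = percent_agreement(rasterize(hand, (10, 10)), rasterize(auto, (10, 10)))
```

An alternative denominator (e.g. pixels of the hand trace only) would change the score; the choice should be stated alongside the reported numbers.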
Analyze a large, broad set of applicable data to characterize the number of whisker identification errors. These errors directly impact the number of edits a user needs to make to fully correct a dataset using the tracing GUI.
- Trials drawn at random from sessions. Sessions should be chosen to cover variability in:
- whisker count
Sessions should be consistent with respect to:
- image quality
- (roughly) grooming: mice with split whiskers, untrimmed whiskers from other rows, etc. should be excluded.
- Intervals of time containing odd events that made tracking irrelevant or impossible should be excluded, for example:
- Air puff administered as negative reward
- Foot in the field of view obscuring whiskers
- Error types:
Errors are determined relative to manually proofread tracing results.
- Detection false negative - the whisker was not traced.
- A whisker (not a hair or microvibrissae) is clearly in the field of view, but there is no associated trace.
- An untraced whisker hidden in the facial fur or out of the field of view does not count as an error.
- Identity false negative - a whisker was traced, but was not assigned a whisker label.
- Wrong identity - a whisker was traced and labeled, but the label is wrong.
- For each trial, record the following:
- Tracking software revision (e.g. svn r589)
- Mouse id
- Session date
- Session notes
- Trial number
- Trial notes
- Whisker row (e.g. C or D)
- Number of whiskers tracked
- Total number of frames
- Number of frames with 1 error
- Number of frames with 2 errors
- Number of frames with N errors
(I just insert these columns on a spreadsheet when I need them)
- Number of errors of type (a) Detection false negative
- Number of errors of type (b) Identity false negative
- Number of errors of type (c) Wrong identity
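The per-frame error counts and per-type totals could be tallied as below, assuming per-frame error codes have already been produced by comparing the automated output against the proofread traces (the codes and example data are hypothetical):

```python
from collections import Counter

# Hypothetical error codes for the three types defined above:
#   'a' = detection false negative, 'b' = identity false negative,
#   'c' = wrong identity.
def tally_errors(frame_errors):
    """frame_errors: dict mapping frame index -> list of error codes in
    that frame. Returns (frames_with_n_errors, errors_by_type), matching
    the per-trial columns recorded above."""
    frames_with_n = Counter(len(errs) for errs in frame_errors.values() if errs)
    by_type = Counter(code for errs in frame_errors.values() for code in errs)
    return frames_with_n, by_type

# Hypothetical example: 4 frames; two clean, one with one error, one with two.
frame_errors = {0: [], 1: ['a'], 2: ['a', 'c'], 3: []}
frames_with_n, by_type = tally_errors(frame_errors)
```

`frames_with_n` gives the "Number of frames with N errors" columns directly, so new columns appear only when a trial actually contains frames with that error count.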
Session and trial notes should include information about:
- Use of non-default parameters
- Excluded time intervals
- Use of non-default "classify" step
I need access to the *.measurements files from proofread videos.