Add a column to show the size of each single file in a duplicate group.
Basically, the Size column, but without the values being totaled up in the group headers.
Status changed to: Released
Status changed to: In progress
The feature is planned for V8.5. It did not receive enough votes to make it into V8.4.
Sounds good. Thanks for the update!
Just curious if there is a planned timeline for this feature? I have a bunch (30+) of drives I plan to scan and clean up. Along with an app I am writing, this is something I will use to do that effectively and hopefully only once. It's no rush (I've had the drives for years), but it helps me target when to finish my app too :)
Status changed to: Planned
Status changed to: Under review
I'm sorry, but we don't understand your request. The individual size of each file is already displayed in the "Size" column. You can additionally enable the "Allocated" column; see the attached screenshot.
If I have two 1000 MB files and thirty 100 MB files as two sets of duplicates, I want to be able to sort so that the 1000 MB files appear on top, because they have the largest individual file size.
The current Size column sums the sizes, so the group of 30 files would show as the largest because it totals 3000 MB.
This is a little similar to how the file name column shows one of the file names rather than changing to "multiple" when there are actually several.
I basically want the size of ONE of the duplicate files in a group, not just the total size of the group.
I could mock up a quick screenshot showing what I mean if that's helpful - just let me know.
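To make the sorting difference concrete, here is a small illustrative sketch (not TreeSize code; the group names and numbers are just the example above): sorting by the group total puts the thirty 100 MB files first, while sorting by the size of a single file in the group puts the two 1000 MB files first.

```python
# Illustrative only: duplicate groups from the example above,
# each group holding the sizes (in MB) of its member files.
groups = {
    "group_a": [1000, 1000],   # two 1000 MB duplicates
    "group_b": [100] * 30,     # thirty 100 MB duplicates
}

# Current behaviour: the Size column totals the group,
# so group_b (3000 MB) sorts above group_a (2000 MB).
by_total = sorted(groups, key=lambda g: sum(groups[g]), reverse=True)

# Requested behaviour: sort by the size of a single file in the group,
# so group_a (1000 MB per file) comes out on top.
by_single_file = sorted(groups, key=lambda g: groups[g][0], reverse=True)

print(by_total)        # ['group_b', 'group_a']
print(by_single_file)  # ['group_a', 'group_b']
```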
I went ahead and created / attached a simple mock-up showing what I mean by a file size column.
Let me know if this is helpful.
Thank you for the mock-up. The TreeSize main application already has an "Average File Size" column that we could introduce in the File Search as well. It would cover your use case, and it would be clear what to display in case not all files in a group have the same size (which happens e.g. when searching for duplicates by filename).
Would this fit your needs?
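As a rough illustration of what such a column could show (a sketch only, with made-up sizes, not the actual TreeSize implementation): when duplicates are matched by content hash, all sizes in a group are equal and the average simply equals the single-file size; when they are matched by filename the sizes can differ, and the average still gives one well-defined value to display.

```python
# Illustrative sketch of an "Average File Size" value per duplicate group.
# Sizes are in bytes; the group contents are invented example data.
def average_file_size(sizes: list[int]) -> float:
    """Average size of the files in one duplicate group."""
    return sum(sizes) / len(sizes)

# Duplicates found by content hash: all sizes identical,
# so the average is just the individual file size.
print(average_file_size([1_048_576, 1_048_576]))            # 1048576.0

# Duplicates found by filename only: sizes may differ,
# and the average still yields a single value for the column.
print(average_file_size([900_000, 1_100_000, 1_000_000]))   # 1000000.0
```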
Yes, that would work.
I believe that when checking for duplicate content by hash (which is what I am doing most of the time), the file size would always be the same anyway.