
Inconsistent Number of Files Written Based on Filename Order #1081

Open
pasvensson opened this issue Mar 10, 2025 · 1 comment

Comments

@pasvensson

Description

I have encountered an issue with LittleFS where the number of equally sized files that can be written before the filesystem is full varies depending on whether the filenames are written in alphabetical order or in a random order. I have created two test cases that demonstrate this behavior, each outputting the number of files written before the filesystem is full.

Steps to Reproduce

  1. Run the test case that writes files with filenames in alphabetical order until the filesystem is full.
  2. Run the second test case that writes files with filenames in random order until the filesystem is full.
  3. Compare the number of files written in each case.
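For reference, the write loop in each test case looks roughly like the following littlefs usage. This is an illustrative sketch, not the exact test code: the filename scheme, the 100-byte file size, and the assumption of an already-mounted `lfs_t` are mine.

```c
#include "lfs.h"
#include <stdio.h>
#include <stdint.h>

// Write equally sized files until the filesystem reports LFS_ERR_NOSPC,
// returning how many files fit. `lfs` must already be mounted.
static int count_files_until_full(lfs_t *lfs) {
    char name[32];
    uint8_t data[100] = {0};
    int count = 0;

    for (;;) {
        snprintf(name, sizeof(name), "file%08d", count);

        lfs_file_t file;
        int err = lfs_file_open(lfs, &file, name,
                                LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL);
        if (err == LFS_ERR_NOSPC) {
            break;
        }

        lfs_ssize_t res = lfs_file_write(lfs, &file, data, sizeof(data));
        lfs_file_close(lfs, &file);
        if (res == LFS_ERR_NOSPC) {
            // the file entry fit but its data didn't; drop it and stop
            lfs_remove(lfs, name);
            break;
        }

        count++;
    }
    return count;
}
```

The alphabetical test would generate `name` from an incrementing counter as above; the random-order test would shuffle a pregenerated list of names before the loop.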

Expected Behavior

The number of files written before the filesystem is full should be similar regardless of the order of filenames.

Actual Behavior

The number of files written before the filesystem is full varies depending on the order of filenames.

Environment:

Command to run: runners/test_runner test_exhaustion_filenames_alphabetical_order test_exhaustion_filenames_random_order

When filenames are written in alphabetical order, fewer files can be written before the filesystem is full.
When filenames are written in random order, more files can be written before the filesystem is full.

Log

Here are the last few lines of output from each test case:
Alphabetical filename order

...
tests/test_exhaustion_filename_order.toml:88:warn: file size: 100  number of files:367

finished test_exhaustion_filenames_alphabetical_order:0g21g22ggg8346gg2k1k6

Random filename order

...
tests/test_exhaustion_filename_order.toml:186:warn: file size: 100  number of files:536

finished test_exhaustion_filenames_random_order:0g21g22ggg8346gg2k1k6

References

The littlefs fork & branch: https://github.com/pasvensson/littlefs/tree/exhaustion_filename_order
The file with the test cases: tests/test_exhaustion_filename_order.toml
The alphabetical order test case: test_exhaustion_filenames_alphabetical_order
The random order test case: test_exhaustion_filenames_random_order

@geky
Member

geky commented Mar 13, 2025

Hi @pasvensson,

It's interesting to discover this behavior, but it's not too surprising.

It's due to how LittleFS distributes metadata across multiple metadata logs. In each log, LittleFS keeps adding files until the log is full, at which point it tries to compact the log and, if that fails, allocates an additional pair of blocks and splits the log in two. This results in an even distribution of files over all of the metadata logs in the filesystem.

The problem is that LittleFS compacts/splits lazily. When creating files in alphabetical order, the new files always go into the last metadata log, which forces it to compact/split as soon as it's full. When creating files in a random order, you likely end up with many metadata logs that are compactable, but not yet full.

It's worth noting that if you continue writing to the randomly created files over time, the filesystem will eventually end up with roughly the same space utilization as if you had created the files alphabetically.
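This effect can be illustrated with a toy simulation that is not littlefs itself: sorted "logs" with a fixed entry capacity that split in half when full, and a fixed budget of logs standing in for disk space. The capacity of 16 entries and budget of 64 logs are arbitrary assumptions. Sorted insertion always overflows the last log, leaving a trail of half-full logs behind each split, while random insertion spreads entries out and keeps logs fuller on average, so more files fit before the budget is exhausted:

```python
import random

CAPACITY = 16  # entries a log holds before it must split (arbitrary)
MAX_LOGS = 64  # budget of logs, standing in for disk size (arbitrary)

def files_until_full(names):
    """Insert names into sorted logs, splitting a full log in half;
    stop when a split would exceed the log budget."""
    logs = [[]]
    for name in names:
        # pick the last log whose smallest entry is <= name
        i = 0
        while i + 1 < len(logs) and logs[i + 1][0] <= name:
            i += 1
        logs[i].append(name)
        logs[i].sort()
        if len(logs[i]) > CAPACITY:
            if len(logs) == MAX_LOGS:
                # out of space: don't count the entry that didn't fit
                return sum(len(log) for log in logs) - 1
            mid = len(logs[i]) // 2
            logs[i:i + 1] = [logs[i][:mid], logs[i][mid:]]
    return sum(len(log) for log in logs)

names = [f"file{i:06d}" for i in range(5000)]
alphabetical = files_until_full(names)

random.seed(0)
shuffled = names.copy()
random.shuffle(shuffled)
randomized = files_until_full(shuffled)

print(alphabetical, randomized)  # sorted order fits noticeably fewer files
```

In the sorted case every split strands a log at 50% fill, while random insertion leaves logs at fills between 50% and 100%, which is the same shape as the ~1.5x gap between the two test results above (367 vs 536 files).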


You could mitigate this by calling lfs_fs_gc periodically, and lowering compact_thresh to block_size/2. But compaction/splitting has a high cost, so you may not want to.
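Concretely, that mitigation maps onto the `compact_thresh` config field and the `lfs_fs_gc` API (available in littlefs v2.9+). This is a configuration sketch, not a complete config: the block size is illustrative and the required callbacks and size fields are omitted.

```c
#include "lfs.h"

// Lower the compaction threshold so lfs_fs_gc considers half-full
// metadata logs worth compacting (other required fields -- the
// read/prog/erase/sync callbacks and size fields -- omitted here).
const struct lfs_config cfg = {
    // ...
    .block_size     = 4096,
    .compact_thresh = 4096 / 2,  // compact metadata logs at >= block_size/2
    // ...
};

// Then call garbage collection periodically, e.g. from an idle loop:
//     int err = lfs_fs_gc(&lfs);
```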
