Various littlefs operations are quite slow #1079
I'm also using littlefs on an nRF52840 + W25Q64; the SDK is nRF Connect SDK v2.6.1. Writing 50 bytes of data also takes 300–400 ms, and when I try to write data non-stop, it also blocks for more than 10 seconds every once in a while! I open the file every time -> write data -> close the file. Since I often modify the same file over and over again, I suspect littlefs is garbage collecting during those 10-second blocks (just a guess, I can't verify).
We see a very similar problem. It seems to have something to do with the lookahead buffer on our side. @geky Could it be that littlefs is unsuitable for several megabytes of data (eMMC)? During the initial write operation on our filesystem after startup, the entire filesystem is read first (~60 MB, ~50000 read operations -> 6 seconds) before the write is performed.
I have some additional information worth sharing @PiKa20 @Viatorus
Much of this might be explained by the architecture of littlefs, and it would be good to have someone who knows it chime in to clarify my understanding. Lastly, and this might be beyond the scope of this conversation: my device has a battery, and thus "sudden power withdrawal" is much less of a concern for me — are there configuration settings that take advantage of that?
So, the TLDR: there are two significant bottlenecks in littlefs at the moment:

1. The block allocator, which has to traverse the entire filesystem to find free blocks.
2. Metadata compaction, which spikes when a metadata log fills up.
These bottlenecks often end up at odds with each other, since one way to reduce the traversal cost is to increase the logical block size, which makes compaction worse. That being said, these are both being worked on, but fixing these problems requires some pretty fundamental changes to the filesystem, so it's a slow process. It's also worth mentioning that these are both spike costs that can't be avoided entirely, but they can be moved out of the critical write path.
Currently, the only way for littlefs to know how many blocks are free/in-use is to traverse the entire filesystem. Eventually, this will hopefully be improved with the same work that improves block allocation (an on-disk block map), but in the meantime that's how it is. You can mitigate this by increasing the logical block size to some multiple of the actual block size — larger blocks == fewer blocks to traverse — but this will probably have other impacts, such as making compaction times worse.
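As a concrete sketch of that mitigation (the geometry numbers and block device callbacks here are placeholders for your board, not a drop-in config), you can present a logical `block_size` that is a multiple of the physical erase size in `lfs_config`:

```c
#include "lfs.h"

// Sketch only: geometry values and callbacks are placeholders.
// Physical erase size is 4 KiB, but we present 32 KiB "logical" blocks to
// littlefs, so an allocator scan touches 8x fewer blocks.
extern int bd_read(const struct lfs_config *c, lfs_block_t block,
                   lfs_off_t off, void *buffer, lfs_size_t size);
extern int bd_prog(const struct lfs_config *c, lfs_block_t block,
                   lfs_off_t off, const void *buffer, lfs_size_t size);
extern int bd_erase(const struct lfs_config *c, lfs_block_t block);
extern int bd_sync(const struct lfs_config *c);

const struct lfs_config cfg = {
    .read  = bd_read,
    .prog  = bd_prog,
    .erase = bd_erase,  // must erase a whole 32 KiB logical block (8 x 4 KiB)
    .sync  = bd_sync,

    .read_size      = 16,
    .prog_size      = 16,
    .block_size     = 32768,  // 8x the 4 KiB physical erase size
    .block_count    = 2048,   // e.g. 64 MiB / 32 KiB
    .cache_size     = 512,
    .lookahead_size = 256,    // 256 bytes = 2048 bits, one bit per block
    .block_cycles   = 500,
};
```

Note the trade-off mentioned above: a 32 KiB logical block also means larger metadata blocks, so compaction spikes get worse even as allocator scans get cheaper.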
This is probably metadata compaction, but it could be the block allocator if your lookahead buffer is really small (it's probably metadata compaction). You can tinker with the configuration to mitigate this.
I would like to make littlefs cache the "size-of-filesystem", but it would still need to scan after most write operations. It's low priority.
It makes sense this is avoiding compaction. Each dir gets its own metadata block, and if you never fill up a metadata block, there is no compaction. The tradeoff is storage consumption, but depending on your use case you may not care. The increasing operation time is curious — I wonder if it's because path-lookup time increases as the parent directory fills up? Path-lookup is currently O(n) in the number of entries in the directory.
This is expected. Metadata logs can always contain outdated metadata, so littlefs doesn't know if a metadata log fits or needs to be split until after trying to compact the block. By separating files into multiple directories you're sort of precomputing the will-it-fit logic.
The size of files does not matter for metadata compaction*, but it does affect the allocator scans. *Big asterisk: if a file is small enough to be inlined (smaller than the inline threshold), it is stored in the metadata log itself and does count toward compaction.
Yes, littlefs currently doesn't persist erase state information, so it has to always pessimistically erase blocks on demand. This is also being worked on as part of the block map work.
Explained above hopefully.
I don't think I've been asked this before — it's a very interesting question. I think the short answer is that this is out of scope. Actually... I'm struggling to think of any case where ignoring powerlosses would lead to any advantage... Most of the costs come from dealing with the awkwardness of flash. Even if you used RMW operations to avoid the ...

Oh wait, sorry — you asked about configuration settings and I started thinking about theoretical invasive changes. No, there are no tricks in this case. Having a safe shutdown just means fewer things to worry about, but littlefs has already worried about those things. You might be able to save on block device cache flushes, but that would be below littlefs (in Zephyr?).
Hi all:

I am using littlefs on an MX25R32 QSPI flash, driven by an nRF5340 running Zephyr.
When I have approximately 750 files (many/most of them 50-byte binary files), the fs_stat operation takes almost 5.6 seconds.
Furthermore, when I am writing the (50-byte) files, I note that a write typically takes ~300 ms, but every ~25th attempt takes 12 seconds.

A few questions
Thanks a lot!