I have often needed to create large file systems, and I end up using ext4 almost every time. But on those occasions where the file system is larger than, say, 4 TB, I noticed that with the default values its performance was awfully low for the first couple of hours after creation. After digging a little, I found out that the culprit was a lazy initialization feature of the file system, which was very inconvenient when I was in a hurry and needed to copy a big chunk of files in a short period of time.
So, for me, the most convenient solution was to do the initialization immediately and be done with it. For that, the following options are needed:
# "-m 0": to not reserve any space for the super user (0%)
# "-E": extended options
# "lazy_itable_init=0": when enabled (the default), mkfs skips initializing the inode tables, which speeds up file system creation, in particular for large file systems. The kernel then finishes the initialization in the background after the first mount, with the drawback of noticeably reducing performance until it completes (the work shows up as the [ext4lazyinit] kernel thread). Setting it to zero disables this and initializes everything up front.
# "lazy_journal_init=0": when disabled, the journal inode is fully zeroed out during creation; when enabled it speeds up file system creation noticeably, at a small risk of corruption if the system crashes before the journal has been fully initialized.
$ mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdc1
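If you want to try these options without a spare partition, mkfs.ext4 also works on a regular file, no root required (the fs.img path below is just an example):

```shell
# Create a sparse 1 GiB image and format it with the same options;
# -F is needed because the target is not a block device.
truncate -s 1G fs.img
mkfs.ext4 -F -m 0 -E lazy_itable_init=0,lazy_journal_init=0 fs.img
```

This is handy for checking how long the full (non-lazy) initialization takes before committing to it on a real disk.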
Well, for clarification, the -m 0 option is mainly there because I’m using the file system for storage, not for the operating system: the reserved blocks exist so that root and system daemons can keep working when the disk fills up, which matters little on a pure data volume.
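If the file system already exists with the default 5% reservation, the reserved space can also be dropped afterwards with tune2fs, no reformatting needed (again, /dev/sdc1 is just the device from the example above):

```shell
# Drop the reserved blocks on an existing ext4 file system
tune2fs -m 0 /dev/sdc1
# Verify the result
tune2fs -l /dev/sdc1 | grep -i 'reserved block count'
```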