How to work with a large number of small files

Background

The main file systems on the ARC cluster, as on most computational clusters, are network-shared, distributed file systems.

The network-shared part means that the file systems are mounted by all the nodes in the cluster, so that any compute node can process data stored on ARC.

The distributed part means that more than one file server node serves the files to the cluster nodes.

Such file systems handle workloads that use small numbers of very large files very well. However, handling large numbers of small files on such file systems poses several challenges.


These are the main negative aspects:

  • Metadata Overhead:
    Each file, regardless of size, requires metadata storage. Managing millions or billions of small files can lead to significant metadata overhead, impacting the performance and scalability of the file system.
    Metadata operations (creation, modification, deletion) can become a bottleneck, as the metadata server(s) may struggle to keep up with the high volume of requests.
  • Inefficient Disk Space Utilization:
    File systems typically allocate disk space in fixed-size blocks. Small files often do not fully utilize these blocks, leading to wasted space and reduced storage efficiency.
  • Increased I/O Operations:
    Accessing a large number of small files results in a high number of I/O operations, which can degrade performance due to increased seek times and latency.
    The overhead of opening and closing many small files can be substantial, especially in distributed systems where network latency adds to the access time.
  • Network Congestion:
    In distributed file systems, accessing small files can generate significant network traffic, particularly if the files are scattered across multiple nodes.
    High metadata traffic, due to frequent file access and updates, can congest the network, leading to performance degradation.
  • File System Limitations:
    Many file systems have a limit on the number of files they can handle, and reaching this limit can cause system instability or failure.
    Directories containing a large number of small files can also become unwieldy and slow to access.
  • Backup and Restore Challenges:
    Backing up and restoring a large number of small files is time-consuming and resource-intensive, often taking significantly longer than dealing with fewer, larger files.
    Managing these processes efficiently and ensuring data consistency during backup and restore become more complex.
  • Application Performance:
    Applications that process or analyze data spread across numerous small files may face performance issues due to high file access overhead and inefficient data retrieval.
    Sorting, searching, and aggregating data from many small files can be computationally expensive and slow.
  • Data Management Complexity:
    Keeping track of a vast number of small files is administratively complex, complicating tasks like data migration, replication, and deletion.
    Ensuring data integrity and consistency across distributed nodes adds another layer of complexity.

Local file systems on the node
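
Besides the shared file systems, each compute node has file systems that are local to that node. They are not served over the network, so creating and reading many small files there does not load the shared file system. A common pattern is to stage the data to local storage as a single archive at the start of a job, process it there, and copy the results back to the shared file system at the end. Note that node-local storage is not visible from other nodes and is generally cleaned up when the job finishes; check the ARC documentation for the exact policy and available sizes.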

/tmp
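
/tmp is typically backed by the node's local disk. It behaves like an ordinary directory, so many small files can be created, read, and deleted there without generating traffic on the shared file system.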

/dev/shm
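
/dev/shm is a tmpfs file system backed by the node's RAM. It is very fast, but everything placed there consumes the node's memory, which may count against your job's memory allocation, so it is only suitable for data sets that comfortably fit in RAM.

Below is a minimal Python sketch of the staging pattern, assuming the small files were packed into a single tar archive on the shared file system beforehand (all paths are hypothetical):

 import shutil
 import tarfile

 # Hypothetical paths: a packed archive on the shared file system,
 # and a node-local scratch location (/dev/shm here; /tmp works the same way).
 shared_tar = "/work/myproject/small_files.tar"
 scratch_dir = "/dev/shm/small_files"

 # One large, sequential copy from the shared file system ...
 local_tar = shutil.copy(shared_tar, "/dev/shm/")

 # ... then all the small-file operations stay on the node.
 with tarfile.open(local_tar) as tar:
     tar.extractall(scratch_dir)

Use /tmp instead of /dev/shm in the same sketch when the data is too large to hold in memory.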

From Python

Working with files inside a TAR archive

  • tarfile module:
    https://www.askpython.com/python-modules/tarfile-module
    https://stackoverflow.com/questions/27220376/python-read-file-within-tar-archive
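
The tarfile module from the Python standard library can read individual members of an archive without unpacking it to disk, so thousands of logical files can be kept inside one physical file on the shared file system. A minimal sketch (the archive name is hypothetical):

 import tarfile

 # Open the archive once, then stream its members one by one.
 with tarfile.open("small_files.tar", "r") as tar:
     for member in tar.getmembers():
         if member.isfile():
             f = tar.extractfile(member)  # file-like object for this member
             data = f.read()              # contents of one small file, as bytes
             # ... process data here ...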

Storing files inside containers

Using Apptainer
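
A data set consisting of many small files can be built into the container image itself: the image is a single large SIF file on the shared file system, and the files inside it are read from that one file rather than as individual files over the network. As a sketch, a definition file could copy the data in at build time (the base image and paths are hypothetical):

 Bootstrap: docker
 From: ubuntu:22.04

 %files
     small_files /data/small_files

The image would then be built and used along these lines (available options may differ on ARC; check the Apptainer documentation):

 apptainer build data.sif data.def
 apptainer exec data.sif ls /data/small_files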

Using Apptainer Overlays

Persistent overlay directories allow you to overlay a writable file system on an immutable read-only container for the illusion of read-write access. You can run a container and make changes, and these changes are kept separately from the base container image.
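
A persistent overlay can also be a single EXT3 image file; because that image is one file on the shared file system, the many small files created inside it do not exist as individual files on the network file system. As an illustration, a plausible sequence with Apptainer's overlay commands (the image and overlay names are hypothetical; --size is in MiB):

 apptainer overlay create --size 1024 my_overlay.img
 apptainer shell --overlay my_overlay.img my_container.sif

Files written inside the container session are persisted in my_overlay.img and are visible again the next time the overlay is attached.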

Using HDF5 (Hierarchical Data Format)
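
HDF5 stores many named datasets inside one container file, which makes it a good fit for packing a large collection of small files into a single file on the shared file system. A minimal sketch using h5py, a common Python interface to HDF5 (assuming h5py and numpy are installed; the paths and names are hypothetical):

 import pathlib

 import h5py
 import numpy as np

 src_dir = pathlib.Path("small_files")  # hypothetical directory of small files

 # Pack: store each small file as a byte dataset inside one HDF5 file.
 with h5py.File("packed.h5", "w") as h5:
     for path in src_dir.iterdir():
         if path.is_file():
             data = np.frombuffer(path.read_bytes(), dtype=np.uint8)
             h5.create_dataset(path.name, data=data)

 # Read one file back: a single open of a single file on the shared file system.
 with h5py.File("packed.h5", "r") as h5:
     raw = h5["example.txt"][()].tobytes()  # "example.txt" is a hypothetical member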