How does cache-fs work with Artifactory?

Artifactory: What is cache-fs and how to troubleshoot issues related to the cache-fs provider

Author: Pranav Hegde
Article number: 000006073
Source: Salesforce
First published: 2024-04-18T11:28:50Z
Last modified: 2024-04-18
Version: 1
Cache-fs works as a binary cache that uses LRU (Least Recently Used) as its eviction policy. It can improve Artifactory's performance, since frequent requests are served from the cache rather than from the underlying filestore. The cache-fs binary provider is the filestore layer closest to Artifactory, so when the filestore is mounted remotely, the cache-fs should be located on the Artifactory server itself (if the filestore is already local, cache-fs adds no value).

It is good practice to allocate a generous amount of storage to cache-fs. If the filestore is mounted over NFS or stored in S3 cloud storage, a local cache on each node helps serve frequent download requests to Artifactory without going back to the remote filestore each time.
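
As an illustration, the sketch below layers cache-fs on top of a file-system filestore whose directory is an NFS mount. It uses the built-in cache-fs chain template; the cache size and the NFS path are example values (assumptions, not defaults) and should be adjusted to your environment.

<config version="2">
    <chain template="cache-fs"/>
    <provider id="cache-fs" type="cache-fs">
        <maxCacheSize>50000000000</maxCacheSize>
        <cacheProviderDir>cache</cacheProviderDir>
    </provider>
    <provider id="file-system" type="file-system">
        <fileStoreDir>/mnt/nfs/artifactory/filestore</fileStoreDir>
    </provider>
</config>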

For example, each time you download an artifact, it is automatically saved in the cache. When the cache fills up, the least recently used artifacts are deleted first, in line with the LRU eviction policy.

Generally, it is recommended to allocate cache-fs storage equal to at least 10 percent of the binaries size in Artifactory, that is, the amount of physical storage occupied by the binaries in your system. For example, if your binaries occupy 1 TB, you need to provide at least 100 GB as the cache-fs storage size.
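
In configuration terms, that sizing goes into the maxCacheSize value of the cache-fs provider, which is expressed in bytes. The snippet below is only a sketch of the 1 TB example above; adjust the number to your own binaries size.

<provider id="cache-fs" type="cache-fs">
    <!-- roughly 10% of 1 TB of binaries: 100 GB, expressed in bytes -->
    <maxCacheSize>100000000000</maxCacheSize>
    <cacheProviderDir>cache</cacheProviderDir>
</provider>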


As a rule of thumb, monitor the cache-fs for 24 hours. If it fills up in less time than that, the maximum cache size is not enough and can be increased to improve Artifactory's performance. For example, if the maximum cache size is 500 GB and after 24 hours only 200 GB of the cache is used, it is safe to say that you do not need more than that.

When you are using S3 as a filestore with the eventual mechanism, the default value of maxCacheSize for the cache-fs provider is 5 GB. It can be overridden in binarystore.xml, for example:
<provider id="cache-fs" type="cache-fs">
    <maxCacheSize>50000000000</maxCacheSize>
    <cacheProviderDir>cache</cacheProviderDir>
</provider>
Note: maxCacheSize is the maximum storage allocated for the cache, in bytes (50000000000 bytes is roughly 50 GB in the example above).
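
For context, the cache-fs provider block does not stand alone; it sits inside binarystore.xml next to the chain template and the other providers in the chain. The sketch below assumes the s3-storage-v3 chain template; the endpoint, bucket name, and region are placeholder values, and the S3 authentication settings (credentials or IAM role) depend on your environment.

<config version="2">
    <chain template="s3-storage-v3"/>
    <provider id="cache-fs" type="cache-fs">
        <maxCacheSize>50000000000</maxCacheSize>
        <cacheProviderDir>cache</cacheProviderDir>
    </provider>
    <provider id="s3-storage-v3" type="s3-storage-v3">
        <endpoint>s3.amazonaws.com</endpoint>
        <bucketName>example-artifactory-bucket</bucketName>
        <region>us-east-1</region>
        <!-- credentials or IAM role settings go here -->
    </provider>
</config>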