<chain template="s3"/>
The "s3" chain stands for the following configuration:
<chain template="s3">
    <provider id="cache-fs" type="cache-fs">
        <provider id="eventual" type="eventual">
            <provider id="retry" type="retry">
                <provider id="s3" type="s3"/>
            </provider>
        </provider>
    </provider>
</chain>
As mentioned above, when the non-cluster template is used in an HA cluster, a shared mount is required between all HA nodes to share the Artifactory “data” directory.
The shared “data” directory has to be configured in the $ARTIFACTORY_HOME/etc/ha-node.properties:
artifactory.ha.data.dir=/mnt/shared/artifactory/ha-data
The template above configures 4 layers of storage providers.
- Cache-FS - Should not be shared, to ensure good performance for frequently accessed artifacts.
- Eventual - Created in the configured “artifactory.ha.data.dir” and serves as the shared layer between all cluster nodes. The “eventual” directory contains 3 subdirectories: “_pre”, “_add”, and “_delete”. These folders are effectively the queues of events to be transmitted to the next provider, the cloud storage provider.
- Retry - Responsible for “retry” events in case of failures.
- S3 - The cloud storage provider.
Only the primary node uploads files from the “eventual” folder to S3, so its uptime is important to make sure the “eventual” folder does not grow out of control.
Upon Downloading
- The node which received the download request will first check if the file is available in the Cache-FS layer.
- If it is not available there, it will check whether the file exists in the Eventual folder.
- Otherwise, it will download the file from the cloud provider and then serve it to the client.
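The lookup order above can be sketched as a simple fall-through check. This is an illustration only, not Artifactory's actual implementation; the directory paths and function names are hypothetical:

```python
import os

# Hypothetical paths mirroring the providers described above.
CACHE_DIR = "/var/opt/jfrog/artifactory/data/cache"
EVENTUAL_ADD_DIR = "/mnt/shared/artifactory/ha-data/eventual/_add"

def resolve_download(checksum, cache_dir=CACHE_DIR,
                     eventual_dir=EVENTUAL_ADD_DIR, fetch_from_s3=None):
    """Return the layer a download would be served from:
    cache-fs first, then the shared eventual '_add' folder, then S3."""
    if os.path.exists(os.path.join(cache_dir, checksum)):
        return "cache-fs"
    if os.path.exists(os.path.join(eventual_dir, checksum)):
        return "eventual"
    # Fall through to the cloud storage provider.
    if fetch_from_s3 is not None:
        fetch_from_s3(checksum)
    return "s3"
```

Note that only the first two checks touch local or shared disk; the S3 fetch is the slow path, which is why a generous cache-fs size pays off.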
Upon Uploading
- The node which received the upload request will stream the file to the eventual “_pre” folder.
- Once the node has fully received the file, it will move the file from “_pre” to “_add”, and only then close the upload request. As the file reaches the “_add” folder, it is available to download for all other HA members.
- The primary node checks whether there are any new files to be handled in the “eventual” folder and eventually uploads them to S3.
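The “_pre” to “_add” handoff described above can be sketched as follows. This is a simplified illustration under assumed directory names from this article, not Artifactory's code:

```python
import os
import shutil

def receive_upload(stream, checksum, eventual_dir):
    """Sketch of the upload path: stream the file into '_pre', then
    move it to '_add' before the upload request is closed."""
    pre = os.path.join(eventual_dir, "_pre", checksum)
    add = os.path.join(eventual_dir, "_add", checksum)
    os.makedirs(os.path.dirname(pre), exist_ok=True)
    os.makedirs(os.path.dirname(add), exist_ok=True)
    with open(pre, "wb") as f:
        for chunk in stream:
            f.write(chunk)
    # Only after the full file landed in '_pre' is it moved to '_add',
    # at which point it becomes visible to all other HA members.
    shutil.move(pre, add)
    return add
```

The move happens within the same shared mount, so it is a cheap rename rather than a second copy of the data.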
Best Practices:
- A large local Cache-FS partition is important to assure frequent artifacts are served as quickly as possible.
- Primary node uptime is required for artifacts to be uploaded to S3.
- Monitoring the number of files in the “eventual” subfolders is important, so that in case of a failure you, as an administrator, are notified in time.
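A minimal monitoring helper for the last point could look like this; the subfolder names are taken from this article, and the threshold is an arbitrary example you would tune for your environment:

```python
import os

def eventual_queue_sizes(eventual_dir):
    """Count pending files in each eventual sub-queue, so an alert can
    fire before the shared mount fills up."""
    return {sub: len(os.listdir(os.path.join(eventual_dir, sub)))
            for sub in ("_pre", "_add", "_delete")
            if os.path.isdir(os.path.join(eventual_dir, sub))}

def should_alert(eventual_dir, threshold=1000):
    """True if any queue exceeds the threshold (example value)."""
    return any(n > threshold
               for n in eventual_queue_sizes(eventual_dir).values())
```

A growing “_add” count typically means the primary node is down or cannot reach S3.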
binarystore.xml example:
<config version="v1">
    <chain template="s3">
        <provider id="cache-fs" type="cache-fs">
            <provider id="eventual" type="eventual">
                <provider id="retry" type="retry">
                    <provider id="s3" type="s3"/>
                </provider>
            </provider>
        </provider>
    </chain>
    <provider id="cache-fs" type="cache-fs">
        <cacheProviderDir>/var/opt/jfrog/artifactory/data/cache</cacheProviderDir>
        <maxCacheSize>100000000000</maxCacheSize>
    </provider>
    <provider id="eventual" type="eventual">
        <numberOfThreads>10</numberOfThreads>
        <timeout>180000</timeout>
        <dispatcherInterval>5000</dispatcherInterval>
    </provider>
    <provider id="retry" type="retry">
        <maxTrys>10</maxTrys>
        <interval>1000</interval>
    </provider>
    <provider id="s3" type="s3">
        <endpoint>http://s3.amazonaws.com</endpoint>
        <identity>[ENTER IDENTITY HERE]</identity>
        <credential>[ENTER CREDENTIALS HERE]</credential>
        <path>[ENTER PATH HERE]</path>
        <bucketName>[ENTER BUCKET NAME HERE]</bucketName>
    </provider>
</config>
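With the retry provider configured above, each failed S3 operation is attempted up to maxTrys times with an interval (in milliseconds) between attempts, i.e. up to roughly 10 × 1000 ms = 10 seconds of retrying per operation. A minimal sketch of that retry loop, purely for illustration:

```python
import time

def with_retry(op, max_trys=10, interval_ms=1000):
    """Retry an operation up to max_trys times, sleeping interval_ms
    between attempts -- mirroring the retry provider settings above."""
    for attempt in range(1, max_trys + 1):
        try:
            return op()
        except Exception:
            if attempt == max_trys:
                raise  # give up after the last attempt
            time.sleep(interval_ms / 1000.0)
```

If all attempts fail, the real provider leaves the file queued in the “eventual” folder, which is why monitoring those subfolders matters.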