ARTIFACTORY: How to migrate Artifactory to a Kubernetes cluster

In this article, we describe the steps to migrate an Artifactory instance running on a virtual machine to a Kubernetes cluster using a Helm-based installation. Since Artifactory can perform a Full System Export and Import, the recommended approach for this migration is to install a new Artifactory server backed by a new database.

Throughout the migration process, both the old and the new Artifactory servers must run the same version.


Use Case 1: Migration steps when using S3, Azure Blob Storage, or a GCP bucket as the filestore


Step 1

First, install a new Artifactory server on the Kubernetes cluster by following the instructions in the Quick Start Guide. Since this example uses the Full System Export/Import process, we will use the PostgreSQL database bundled with the Helm chart.
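For reference, a minimal installation following the Quick Start Guide typically looks like the following (the namespace name is an assumption; adjust it to your environment):

$ helm repo add jfrog https://charts.jfrog.io
$ helm repo update
$ helm upgrade --install artifactory jfrog/artifactory --namespace artifactory --create-namespace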


Step 2

When using an external bucket for storing the checksum-based binaries, the new Artifactory instance must be configured with the same provider type in its binarystore.xml.

For example, when the old server uses an S3 bucket, the new server can either reuse the same S3 bucket, or use a new S3 bucket populated by copying the contents of the old bucket (see the sketch below; for more information, refer to the external documentation from AWS). If you prefer to reuse the same bucket on the new installation, it is suggested to back up the S3 data first.
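A one-time bucket-to-bucket copy can be done with the AWS CLI; a sketch, where both bucket names are placeholders:

$ aws s3 sync s3://old-artifactory-bucket s3://new-artifactory-bucket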


Step 3

To configure and use the S3 configuration in the binarystore, there are two options.

Option 1: Edit the values.yaml directly.
databaseUpgradeReady: true
unifiedUpgradeAllowed: true
postgresql:
  enabled: true
  postgresqlPassword: Password
artifactory:
  replicaCount: 1
  masterKeySecretName: my-masterkey-secret
  joinKeySecretName: my-joinkey-secret
  license:
    secret: artifactory-cluster-license
    dataKey: art.lic
  copyOnEveryStartup:
    # Absolute path
    - source: /artifactory_bootstrap/binarystore.xml
      # Relative to ARTIFACTORY_HOME/
      target: etc/artifactory/
  persistence:
    enabled: true
    type: aws-s3-v3
    binarystoreXml: |
      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <config version="2">
        <chain>
          <provider type="cache-fs" id="cache-fs">
            <provider type="s3-storage-v3" id="s3-storage-v3"/>
          </provider>
        </chain>
        <provider type="s3-storage-v3" id="s3-storage-v3">
          <testConnection>true</testConnection>
          <bucketName>bucketname</bucketName>
          <path>artifactory/filestore</path>
          <region>ap-southeast-2</region>
          <endpoint>s3.ap-southeast-2.amazonaws.com</endpoint>
          <signatureExpirySeconds>300</signatureExpirySeconds>
          <usePresigning>false</usePresigning>
          <useInstanceCredentials>true</useInstanceCredentials>
          <maxConnections>50</maxConnections>
        </provider>
      </config>
nginx:
  enabled: true
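The values above reference pre-created secrets for the master key, join key, and license. A sketch of creating them, assuming the key material is generated on the spot and the license file sits in the current directory:

$ export MASTER_KEY=$(openssl rand -hex 32)
$ kubectl create secret generic my-masterkey-secret -n artifactory --from-literal=master-key=${MASTER_KEY}
$ export JOIN_KEY=$(openssl rand -hex 32)
$ kubectl create secret generic my-joinkey-secret -n artifactory --from-literal=join-key=${JOIN_KEY}
$ kubectl create secret generic artifactory-cluster-license -n artifactory --from-file=art.lic=./art.lic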


Option 2: Create your own secret and pass it to your helm install command.

# Prepare your custom Secret file (custom-binarystore.yaml)

kind: Secret
apiVersion: v1
metadata:
  name: custom-binarystore
  labels:
    app: artifactory
    chart: artifactory
stringData:
  binarystoreXml: |
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <config version="2">
      <chain>
        <provider type="cache-fs" id="cache-fs">
          <provider type="s3-storage-v3" id="s3-storage-v3"/>
        </provider>
      </chain>
      <provider type="s3-storage-v3" id="s3-storage-v3">
        <testConnection>true</testConnection>
        <bucketName>bucketname</bucketName>
        <path>artifactory/filestore</path>
        <region>ap-southeast-2</region>
        <endpoint>s3.ap-southeast-2.amazonaws.com</endpoint>
        <signatureExpirySeconds>300</signatureExpirySeconds>
        <usePresigning>false</usePresigning>
        <useInstanceCredentials>true</useInstanceCredentials>
        <maxConnections>50</maxConnections>
      </provider>
    </config>

Next, when using a custom secret, create the secret from the file:

$ kubectl apply -n artifactory -f ./custom-binarystore.yaml

Then pass the secret to your helm install command:

$ helm upgrade --install artifactory --namespace artifactory --set artifactory.persistence.customBinarystoreXmlSecret=custom-binarystore jfrog/artifactory
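To verify that the custom binarystore.xml was picked up, you can inspect it inside the pod once it is running; a sketch, assuming the default pod name artifactory-0 and the standard JFrog home layout (the exact path may vary by version):

$ kubectl exec -n artifactory artifactory-0 -- cat /opt/jfrog/artifactory/var/etc/artifactory/binarystore.xml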

Instead of passing customBinarystoreXmlSecret on the helm command line, it is also possible to set it in the values file directly, as shown below:
databaseUpgradeReady: true
unifiedUpgradeAllowed: true
postgresql:
  enabled: true
  postgresqlPassword: Password
artifactory:
  replicaCount: 1
  masterKeySecretName: my-masterkey-secret
  joinKeySecretName: my-joinkey-secret
  license:
    secret: artifactory-cluster-license
    dataKey: art.lic
  copyOnEveryStartup:
    # Absolute path
    - source: /artifactory_bootstrap/binarystore.xml
      # Relative to ARTIFACTORY_HOME/
      target: etc/artifactory/
  persistence:
    enabled: true
    customBinarystoreXmlSecret: custom-binarystore
nginx:
  enabled: true
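With this approach, install the chart by passing the values file; a sketch, assuming the file is saved as values.yaml:

$ helm upgrade --install artifactory -f values.yaml --namespace artifactory jfrog/artifactory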

Step 4

Once the new server has been installed, disable garbage collection on both the old and the new instance.


Step 5

Take the old server off the network so that it stops receiving new requests.


Step 6

Perform a Full System Export from the old server with the "Exclude Content" option enabled. Since the binaries already reside in the external bucket, the export only needs to include the configuration and metadata.
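The export can also be triggered through the REST API instead of the UI; a sketch, assuming admin credentials and a hypothetical export path on the old server:

$ curl -u admin:<password> -X POST "http://old-artifactory:8081/artifactory/api/export/system" \
  -H "Content-Type: application/json" \
  -d '{"exportPath":"/backup/export","excludeContent":true,"createArchive":false}'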


Step 7

On the new server, since the S3 bucket configuration is already in place, perform a Full System Import using the data exported in Step 6 (do NOT select the Exclude Content option).
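One way to make the export available to the new server is to copy it into the Artifactory pod and then trigger the import via the REST API; a sketch, assuming the default pod name artifactory-0 and hypothetical paths:

$ kubectl cp /backup/export artifactory/artifactory-0:/tmp/export
$ curl -u admin:<password> -X POST "http://new-artifactory/artifactory/api/import/system" \
  -H "Content-Type: application/json" \
  -d '{"importPath":"/tmp/export"}'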

Note

When performing the Full System Export and Import, make sure that Artifactory is running with only one node on both the old and the new server.


Step 8

Switch the DNS to the new server.


Step 9

Validate the migration by uploading a few files and confirming that they appear in the S3 bucket used by the new server. Also confirm that existing files can still be downloaded without issues.
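A quick validation sketch, assuming a hypothetical repository named generic-local and the bucket from the earlier example (filestore objects are named by checksum, so expect new checksum-named objects rather than the original file name):

$ curl -u admin:<password> -T ./test.txt "http://new-artifactory/artifactory/generic-local/test.txt"
$ aws s3 ls s3://bucketname/artifactory/filestore/ --recursive | tail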

Note

Although this example focuses on an S3 bucket, the process remains the same when using Azure Blob Storage or a GCP bucket; the only difference is that binarystore.xml must be updated to use the correct provider type when installing the new Artifactory.
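For illustration, an Azure-based binarystoreXml would swap the S3 provider for the Azure Blob Storage provider; a sketch, assuming the azure-blob-storage provider type and placeholder account details:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<config version="2">
  <chain>
    <provider type="cache-fs" id="cache-fs">
      <provider type="azure-blob-storage" id="azure-blob-storage"/>
    </provider>
  </chain>
  <provider type="azure-blob-storage" id="azure-blob-storage">
    <accountName>accountname</accountName>
    <accountKey>accountkey</accountKey>
    <endpoint>https://accountname.blob.core.windows.net/</endpoint>
    <containerName>containername</containerName>
  </provider>
</config>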

Additional migration use cases will be added to this article shortly. Stay tuned!