


> Please specify the versions of following systems.

Hi, sorry for the delay; I was busy with testing, as I sometimes got inconsistent results. I think I have now managed to sort all of them out, and this is what I have to share.

Setup

- I tested replication on three different nodes (t3.large).
- S3 was used as the storage backend for the registry.
- All three setups had the same replication config (a rough sketch of what such a policy can look like is shown right after this list).
- case_2: the project wasn't specified, so replication was done as-is (to the docker project).
- I collected LA and resource consumption by the Harbor processes (top five); a sketch of how such sampling could be scripted is at the end of this comment.
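To give some context on the setup, here is a rough sketch of how a replication policy of this kind can be created through the Harbor REST API. This is not the exact config from my tests; the host, credentials, registry id, destination project, filter pattern, and the field names themselves are illustrative assumptions and should be checked against the API docs of your Harbor version.

```python
# Hypothetical illustration only: NOT the exact replication policy used in the tests.
# Field names follow the Harbor v2.0 REST API as I understand it; double-check them
# against your Harbor version's swagger before relying on this.
import requests

HARBOR_URL = "https://harbor.example.com"   # placeholder Harbor host
AUTH = ("admin", "CHANGE_ME")               # placeholder credentials

policy = {
    "name": "gitlab-to-harbor",                         # arbitrary example name
    "src_registry": {"id": 1},                          # id of the GitLab registry endpoint configured in Harbor
    "dest_namespace": "docker-demo",                    # destination project (case_1-style setup)
    "filters": [{"type": "name", "value": "docker/**"}],# replicate repos under the docker/ path
    "trigger": {"type": "manual"},
    "override": True,
    "enabled": True,
}

resp = requests.post(
    f"{HARBOR_URL}/api/v2.0/replication/policies",
    json=policy,
    auth=AUTH,
)
resp.raise_for_status()
# Harbor usually returns the new policy's URL in the Location header.
print("created replication policy:", resp.headers.get("Location"))
```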
Results

1. I wasn't able to reproduce the Harbor error /jwt/auth?scope=repository%!A(MISSING)%!F(MISSING)%!F(MISSING) that I shared in the original message.
2. I observed that replication on v2.1.x runs much faster than on v2.0.3, but (at least in my cases) it then finishes later (the reason is described in p.5).
3. In case_1, GitLab's Docker image /docker/jenkins/slave from the GitLab project docker/jenkins was replicated to Harbor as /docker-demo-docker/slave. So the Docker image name changes not only in the registry name part.
4. In case_2, GitLab's Docker image /docker/jenkins/slave from the GitLab project docker/jenkins was replicated to Harbor as /docker-demo-docker/jenkins/slave. So the Docker image name changes only in the registry name part.
5. Both nodes running v2.1.x Harbor versions consume all the available RAM. Even in those cases when replication was able to complete successfully, the harbor_core process consumed more than 80% of RAM and 15% of CPU. This leads to a huge LA and, in most of the cases, to DoS. Two hours after a replication had successfully finished (quite a rare case), the harbor_core process was still consuming 71% of RAM and 1.5% of CPU. For comparison, the highest load observed on the v2.0.3 node was harbor_core at 2.3% of RAM and 16.4% of CPU, with the overall LA never higher than 5.

So I assume that, starting from v2.1.0, harbor_core has a memory leak. Please let me know if you still want me to share the logs with you.


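For reference, below is a minimal sketch of how the LA / per-process resource sampling mentioned in the setup could be automated. This is not the tool I actually ran to get the numbers above; it is only an illustrative sketch, assuming psutil is available on the node.

```python
# Sketch only: samples the load average and the five processes using the most RAM,
# once per minute. Assumes Linux and the psutil package; not the actual collection
# method used for the numbers quoted above.
import os
import time
import psutil


def sample(n=5):
    """Return (load averages, the n processes with the highest RAM usage)."""
    # ad_value=0.0 substitutes 0.0 for fields psutil cannot read (AccessDenied).
    procs = [
        p.info
        for p in psutil.process_iter(
            attrs=["name", "memory_percent", "cpu_percent"], ad_value=0.0
        )
    ]
    # One could additionally filter here for Harbor-related process names.
    procs.sort(key=lambda info: info["memory_percent"], reverse=True)
    return os.getloadavg(), procs[:n]


if __name__ == "__main__":
    # Per-process CPU% is measured between successive iterations,
    # so the values from the very first sample are not meaningful.
    while True:
        load, top = sample()
        print("LA (1/5/15 min):", load)
        for info in top:
            print(
                f'{info["name"]:<20} '
                f'RAM {info["memory_percent"]:5.1f}%  '
                f'CPU {info["cpu_percent"]:5.1f}%'
            )
        time.sleep(60)  # one sample per minute while replication is running
```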