Installing Harbor with Docker
1. Overview
# Harbor server
Assume the host IP is 192.168.186.128, the HTTP port is 8080, and the HTTPS port is 8443.
# Test machine
Host IP: any machine that can reach the Harbor host will do.
1.1. Installation requirements
1.2. Harbor components
2. Configure HTTP access to Harbor (only needed if you access Harbor over HTTP)
# See: https://goharbor.io/docs/2.13.0/install-config/run-installer-script/#connect-http
Notes:
Two machines are needed; for a simple test, virtual machines are fine.
One machine runs Harbor with HTTPS disabled, i.e. the https section in harbor.yml is commented out.
The other machine is used for testing and accesses Harbor over HTTP; to push images to Harbor it needs the configuration below.
If you also want to test docker login on the Harbor machine itself, apply the same configuration there.
Edit /etc/docker/daemon.json:
{
  "registry-mirrors": ["https://docker.m.daocloud.io"],
  "insecure-registries": ["192.168.186.128:8080"]
}
# insecure-registries entries take the form ip:port or domain:port, e.g. 192.168.186.128:8080
Restart Docker:
systemctl restart docker
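As an optional sanity check, confirm that the daemon picked up the insecure registry entry; 192.168.186.128:8080 is the example address used above:
# Should list 192.168.186.128:8080 under "Insecure Registries"
docker info | grep -A 2 "Insecure Registries"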
3. Configure HTTPS access to Harbor (only needed if you access Harbor over HTTPS)
# See: https://goharbor.io/docs/2.13.0/install-config/configure-https/
The following steps assume that the Harbor registry's hostname is yourdomain.com and that its DNS record points to the host running Harbor.
Since no domain name is used here, yourdomain.com stands for the Harbor machine's IP.
Perform the following operations on the Harbor machine.
3.1. Generate a CA certificate
Generate the CA certificate's private key:
openssl genrsa -out ca.key 4096
Generate the CA certificate:
openssl req -x509 -new -nodes -sha512 -days 3650 \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=MyPersonal Root CA" \
  -key ca.key \
  -out ca.crt
# See: https://goharbor.io/docs/2.13.0/install-config/customize-token-service/
# Subject fields
C=CN, country
ST=Beijing, state/province
L=Beijing, city/locality
O=example, organization/company
OU=Personal, organizational unit
CN=MyPersonal Root CA, common name (domain name or IP)
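As an optional check, print the generated CA certificate and confirm the subject and validity period match the values passed via -subj:
# Inspect the CA certificate (subject, issuer, validity)
openssl x509 -in ca.crt -noout -subject -issuer -dates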
3.2. Generate a server certificate
Generate the private key:
openssl genrsa -out yourdomain.com.key 4096
Generate a certificate signing request (CSR):
# Adjust the values in the -subj option to reflect your organization. If you use an FQDN to connect to the Harbor host,
# you must use it as the common name (CN) attribute and in the key and CSR file names.
openssl req -sha512 -new \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=yourdomain.com" \
  -key yourdomain.com.key \
  -out yourdomain.com.csr
# Note:
Replace yourdomain.com here with the Harbor machine's IP, e.g. 192.168.186.128.
# The adjusted command:
openssl req -sha512 -new \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=192.168.186.128" \
  -key yourdomain.com.key \
  -out yourdomain.com.csr
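Optionally verify that the CSR carries the intended common name before signing it:
# The subject's CN should be 192.168.186.128
openssl req -in yourdomain.com.csr -noout -subject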
Generate an x509 v3 extension file:
# Whether you use an FQDN or an IP address to connect to the Harbor host, you must create this file
# so that the certificate generated for the Harbor host meets the Subject Alternative Name (SAN) and x509 v3 extension requirements.
# Replace the DNS entries with your domain name.
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=yourdomain.com
DNS.2=yourdomain
DNS.3=hostname
EOF
# Note: since an IP address is used here, change the part of v3.ext from [alt_names] to EOF as follows:
[alt_names]
IP.1 = 192.168.186.128
EOF

# Resulting v3.ext file:
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
IP.1 = 192.168.186.128
Use the v3.ext file to generate a certificate for the Harbor host:
# Replace yourdomain.com in the CSR and CRT file names with the Harbor hostname.
openssl x509 -req -sha512 -days 3650 \
  -extfile v3.ext \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in yourdomain.com.csr \
  -out yourdomain.com.crt
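Before handing the certificate to Harbor, it is worth confirming that the SAN extension really contains the IP, since Docker and modern TLS clients validate the SAN rather than the CN:
# The output should include "IP Address:192.168.186.128"
openssl x509 -in yourdomain.com.crt -noout -text | grep -A 1 "Subject Alternative Name"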
3.3. Provide the certificates to Harbor and Docker
After generating ca.crt, yourdomain.com.crt, and yourdomain.com.key,
provide them to Harbor and Docker, and reconfigure Harbor to use them.
Copy the server certificate and key into the certificates folder on the Harbor host:
mkdir -p /data/cert/
\cp yourdomain.com.crt /data/cert/
\cp yourdomain.com.key /data/cert/
# Note: in harbor.yml, modify the following:
......
https:
  port: 8443                                      # changed port
  certificate: /data/cert/yourdomain.com.crt      # certificate path
  private_key: /data/cert/yourdomain.com.key      # key path
......
The /data/cert path here corresponds to the https certificate and private_key fields in harbor.yml;
we assume the certificate and key are placed under /data/cert.
# Create the directory:
mkdir -p /data/cert/
# If certificate is /data/cert/yourdomain.com.crt, run:
\cp yourdomain.com.crt /data/cert/
# If private_key is /data/cert/yourdomain.com.key, run:
\cp yourdomain.com.key /data/cert/
Convert yourdomain.com.crt to yourdomain.com.cert for use by Docker:
# The Docker daemon interprets .crt files as CA certificates and .cert files as client certificates.
openssl x509 -inform PEM -in yourdomain.com.crt -out yourdomain.com.cert
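The converted file is the same PEM data with a different extension; if you like, confirm it still parses:
# Optional: the subject should match the server certificate
openssl x509 -in yourdomain.com.cert -noout -subject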
Copy the server certificate, key, and CA file into Docker's certificate folder on the Harbor host. The folders must be created first:
Notes:
Perform the following steps on the test machine.
If you also want to test docker login on the Harbor machine, do the same there.
mkdir -p /etc/docker/certs.d/yourdomain.com/
\cp yourdomain.com.cert /etc/docker/certs.d/yourdomain.com/
\cp yourdomain.com.key /etc/docker/certs.d/yourdomain.com/
\cp ca.crt /etc/docker/certs.d/yourdomain.com/

# If you mapped the default nginx port 443 to a different port, create the folder
# /etc/docker/certs.d/yourdomain.com:port or
# /etc/docker/certs.d/harbor_IP:port instead.
# Note:
Since an IP is used and the HTTPS port is 8443, create the directory /etc/docker/certs.d/192.168.186.128:8443/.
# The /etc/docker/certs.d/192.168.186.128:8443/ directory should contain the following files:
yourdomain.com.cert
yourdomain.com.key
ca.crt

# Commands:
mkdir -p /etc/docker/certs.d/192.168.186.128:8443/

# Option 1: copy yourdomain.com.cert, yourdomain.com.key, and ca.crt
\cp yourdomain.com.cert /etc/docker/certs.d/192.168.186.128:8443/
\cp yourdomain.com.key /etc/docker/certs.d/192.168.186.128:8443/
\cp ca.crt /etc/docker/certs.d/192.168.186.128:8443/

# Option 2: alternatively, copy yourdomain.com.crt
# \cp yourdomain.com.crt /etc/docker/certs.d/192.168.186.128:8443/

# Option 3: or download the certificate 192.168.186.128.crt through a browser
# \cp 192.168.186.128.crt /etc/docker/certs.d/192.168.186.128:8443/
With custom certificates, the layout looks like this:
/etc/docker/certs.d/
└── yourdomain.com:port
    ├── yourdomain.com.cert  <-- Server certificate signed by CA
    ├── yourdomain.com.key   <-- Server key signed by CA
    └── ca.crt               <-- Certificate authority that signed the registry certificate
Restart the Docker engine:
systemctl restart docker
Deploy or reconfigure Harbor (if Harbor is already running)
# Edit the configuration file
Enable the https section in harbor.yml and set the port, certificate, and private_key fields.
# In harbor.yml, modify the following:
......
https:
  port: 8443                                      # changed port
  certificate: /data/cert/yourdomain.com.crt      # certificate path
  private_key: /data/cert/yourdomain.com.key      # key path
......

# Run the prepare script
./prepare
# Stop and remove the containers
docker-compose down
# Create and start the containers
docker-compose up -d
Verify the HTTPS connection (if Harbor is already running):
Open Harbor in a browser; a warning such as "Your connection is not private" may appear.
Remove the insecure-registries entry for this host from /etc/docker/daemon.json.
# Log in to Harbor:
docker login yourdomain.com
docker login yourdomain.com:port
# Note:
# Log in using port 8080:
# docker login 192.168.186.128:8080
# Log in using port 8443:
# docker login 192.168.186.128:8443
3.4. Deploy or reconfigure Harbor
If Harbor is not yet deployed, see "Configure the Harbor YML File" for how to configure Harbor to use the certificates by setting the hostname and https attributes in harbor.yml.
If Harbor is already deployed over HTTP and you want to reconfigure it to use HTTPS, perform the following steps.
Run the prepare script to enable HTTPS:
# Harbor uses an nginx instance as the reverse proxy for all services.
# The prepare script can be used to configure nginx to use HTTPS.
# The prepare script is in the Harbor installer package, at the same level as install.sh.
./prepare
If Harbor is running, stop and remove the existing instance:
# The image data remains on the filesystem, so no data is lost.
docker compose down -v
Restart Harbor:
docker compose up -d
3.5. Verify the HTTPS connection
Access via browser:
Open a browser and enter https://yourdomain.com. The Harbor UI should appear.
Some browsers may show a warning that the certificate authority (CA) is unknown;
this is because the self-signed CA is not issued by a trusted third-party CA.
You can import the CA into the browser to remove the warning.
Edit the Docker configuration file:
On the machine running the Docker daemon, check /etc/docker/daemon.json
and make sure the insecure-registry option is not set for https://yourdomain.com.
Log in to Harbor:
# Log in to Harbor from a Docker client.
docker login yourdomain.com
# If you have mapped nginx's port 443 to a different port, add the port to the login command.
docker login yourdomain.com:port
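To verify the certificate chain outside of Docker, you can also call Harbor's health endpoint over HTTPS with curl against the CA file (a sketch assuming the IP/port used in this guide and that ca.crt is in the current directory):
# Should return a JSON document with "status": "healthy"
curl --cacert ca.crt https://192.168.186.128:8443/api/v2.0/health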
4. Configure internal TLS communication between Harbor components (optional)
See: https://goharbor.io/docs/2.13.0/install-config/configure-internal-tls/
Edit harbor.yml:
# Uncomment internal_tls, enabled, and dir.
# In harbor.yml, modify the following:
https:
......
internal_tls:
  enabled: true
  dir: /etc/harbor/tls/internal    # TLS directory
......
Run:
./prepare
Generate the certificates:
# Command:
# docker run -v /:/hostfs goharbor/prepare:<current_harbor_version> gencert -p /path/to/internal/tls/cert
# Example:
# Assuming current_harbor_version is v2.13.0
docker run -v /:/hostfs goharbor/prepare:v2.13.0 gencert -p /data/internal/tls/cert
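If the command succeeds, the internal certificates are written to the host path given with -p (the host filesystem is mounted at /hostfs). A quick way to confirm, assuming the /data/internal/tls/cert path used above (file names may vary by version; expect per-component certificate/key pairs plus an internal CA):
ls -l /data/internal/tls/cert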
5. Install Harbor
5.1. Install the online version
Download:
wget https://github.com/goharbor/harbor/releases/download/v2.13.0/harbor-online-installer-v2.13.0.tgz
Extract:
tar -xf harbor-online-installer-v2.13.0.tgz
Directory layout:
harbor
├── common.sh
├── harbor.yml.tmpl
├── install.sh
├── LICENSE
└── prepare

0 directories, 5 files
Change into the directory:
cd harbor
View harbor.yml.tmpl:
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: reg.mydomain.com

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /your/certificate/path
  private_key: /your/private/key/path
  # enable strong ssl ciphers (default: false)
  # strong_ssl_ciphers: false

# # Harbor will set ipv4 enabled only by default if this block is not configured
# # Otherwise, please uncomment this block to configure your own ip_family stacks
# ip_family:
# # ipv6Enabled set to true if ipv6 is enabled in docker network, currently it affected the nginx related component
# ipv6:
# enabled: false
# # ipv4Enabled set to true by default, currently it affected the nginx related component
# ipv4:
# enabled: true# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
# # set enabled to true means internal tls is enabled
# enabled: true
# # put your cert and key files on dir
# dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345

# Harbor DB configuration
database:
  # The password for the user('postgres' by default) of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900
  # The maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's age.
  # The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  conn_max_lifetime: 5m
  # The maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's idle time.
  # The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  conn_max_idle_time: 0

# The default data volume
data_volume: /data

# Harbor Storage settings by default is using /data dir on local filesystem
# Uncomment storage_service setting If you want to using external storage
# storage_service:
# # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
# # of registry's containers. This is usually needed when the user hosts a internal storage with self signed certificate.
# ca_bundle:

# # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
# # for more info about this configuration please refer https://distribution.github.io/distribution/about/configuration/
# # and https://distribution.github.io/distribution/storage-drivers/
# filesystem:
# maxthreads: 100
# # set disable to true when you want to disable registry redirect
# redirect:
# disable: false

# Trivy configuration
#
# Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
# It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
# in the local file system. In addition, the database contains the update timestamp so Trivy can detect whether it
# should download a newer version from the Internet or use the cached one. Currently, the database is updated every
# 12 hours and published as a new release to GitHub.
trivy:
  # ignoreUnfixed The flag to display only fixed vulnerabilities
  ignore_unfixed: false
  # skipUpdate The flag to enable or disable Trivy DB downloads from GitHub
  #
  # You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the flag is enabled you have to download the `trivy-offline.tar.gz` archive manually, extract `trivy.db` and
  # `metadata.json` files and mount them in the `/home/scanner/.cache/trivy/db` path.
  skip_update: false
  #
  # skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the
  # `/home/scanner/.cache/trivy/java-db/trivy-java.db` path
  skip_java_db_update: false
  #
  # The offline_scan option prevents Trivy from sending API requests to identify dependencies.
  # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
  # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
  # exist in the local repositories. It means a number of detected vulnerabilities might be fewer in offline mode.
  # It would work if all the dependencies are in local.
  # This option doesn't affect DB download. You need to specify "skip-update" as well as "offline-scan" in an air-gapped environment.
  offline_scan: false
  #
  # Comma-separated list of what security issues to detect. Possible values are `vuln`, `config` and `secret`. Defaults to `vuln`.
  security_check: vuln
  #
  # insecure The flag to skip verifying registry certificate
  insecure: false
  #
  # timeout The duration to wait for scan completion.
  # There is upper bound of 30 minutes defined in scan job. So if this `timeout` is larger than 30m0s, it will also timeout at 30m0s.
  timeout: 5m0s
  #
  # github_token The GitHub access token to download Trivy DB
  #
  # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  # https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  #
  # github_token: xxx

jobservice:
  # Maximum number of job workers in job service
  max_job_workers: 10
  # Maximum hours of task duration in job service, default 24
  max_job_duration_hours: 24
  # The jobLoggers backend name, only support "STD_OUTPUT", "FILE" and/or "DB"
  job_loggers:
    - STD_OUTPUT
    - FILE
    # - DB
  # The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
  logger_sweeper_duration: 1 #days

notification:
  # Maximum retry count for webhook job
  webhook_job_max_retry: 3
  # HTTP client timeout for webhook job
  webhook_job_http_client_timeout: 3 #seconds

# Log configurations
log:
  # options are debug, info, warning, error, fatal
  level: info
  # configs for logs in local storage
  local:
    # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
    rotate_count: 50
    # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
    # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
    # are all valid.
    rotate_size: 200M
    # The directory on your host that store log
    location: /var/log/harbor

  # Uncomment following lines to enable external syslog endpoint.
  # external_endpoint:
  #   # protocol used to transmit log to external endpoint, options is tcp or udp
  #   protocol: tcp
  #   # The host of external endpoint
  #   host: localhost
  #   # Port of external endpoint
  #   port: 5140

#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.13.0

# Uncomment external_database if using external database.
# external_database:
# harbor:
# host: harbor_db_host
# port: harbor_db_port
# db_name: harbor_db_name
# username: harbor_db_username
# password: harbor_db_password
# ssl_mode: disable
# max_idle_conns: 2
# max_open_conns: 0

# Uncomment redis if need to customize redis db
# redis:
# # db_index 0 is for core, it's unchangeable
# # registry_db_index: 1
# # jobservice_db_index: 2
# # trivy_db_index: 5
# # it's optional, the db for harbor business misc, by default is 0, uncomment it if you want to change it.
# # harbor_db_index: 6
# # it's optional, the db for harbor cache layer, by default is 0, uncomment it if you want to change it.
# # cache_layer_db_index: 7

# Uncomment external_redis if using external Redis server
# external_redis:
# # support redis, redis+sentinel
# # host for redis: <host_redis>:<port_redis>
# # host for redis+sentinel:
# # <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
# host: redis:6379
# password:
# # Redis AUTH command was extended in Redis 6, it is possible to use it in the two-arguments AUTH <username> <password> form.
# # there's a known issue when using external redis username ref:https://github.com/goharbor/harbor/issues/18892
# # if you care about the image pull/push performance, please refer to this https://github.com/goharbor/harbor/wiki/Harbor-FAQs#external-redis-username-password-usage
# # username:
# # sentinel_master_set must be set to support redis+sentinel
# #sentinel_master_set:
# # tls configuration for redis connection
# # only server-authentication is supported
# # mtls for redis connection is not supported
# # tls connection will be disable by default
# tlsOptions:
# enable: false
# # if it is a self-signed ca, please set the ca path specifically.
# rootCA:
# # db_index 0 is for core, it's unchangeable
# registry_db_index: 1
# jobservice_db_index: 2
# trivy_db_index: 5
# idle_timeout_seconds: 30
# # it's optional, the db for harbor business misc, by default is 0, uncomment it if you want to change it.
# # harbor_db_index: 6
# # it's optional, the db for harbor cache layer, by default is 0, uncomment it if you want to change it.
# # cache_layer_db_index: 7

# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
# uaa:
# ca_file: /path/to/ca

# Global proxy
# Config http proxy for components, e.g. http://my.proxy.com:3128
# Components doesn't need to connect to each others via http proxy.
# Remove component from `components` array if want disable proxy
# for it. If you want use proxy for replication, MUST enable proxy
# for core and jobservice, and set `http_proxy` and `https_proxy`.
# Add domain to the `no_proxy` field, when you want disable proxy
# for some special registry.
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy

# metric:
# enabled: false
# port: 9090
# path: /metrics

# Trace related config
# only can enable one trace provider(jaeger or otel) at the same time,
# and when using jaeger as provider, can only enable it with agent mode or collector mode.
# if using jaeger collector mode, uncomment endpoint and uncomment username, password if needed
# if using jaeger agetn mode uncomment agent_host and agent_port
# trace:
# enabled: true
# # set sample_rate to 1 if you wanna sampling 100% of trace data; set 0.5 if you wanna sampling 50% of trace data, and so forth
# sample_rate: 1
# # # namespace used to differentiate different harbor services
# # namespace:
# # # attributes is a key value dict contains user defined attributes used to initialize trace provider
# # attributes:
# # application: harbor
# # # jaeger should be 1.26 or newer.
# # jaeger:
# # endpoint: http://hostname:14268/api/traces
# # username:
# # password:
# # agent_host: hostname
# # # export trace data by jaeger.thrift in compact mode
# # agent_port: 6831
# # otel:
# # endpoint: hostname:4318
# # url_path: /v1/traces
# # compression: false
# # insecure: true
# # # timeout is in seconds
# # timeout: 10

# Enable purge _upload directories
upload_purging:
  enabled: true
  # remove files in _upload directories which exist for a period of time, default is one week.
  age: 168h
  # the interval of the purge operations
  interval: 24h
  dryrun: false

# Cache layer configurations
# If this feature enabled, harbor will cache the resource
# `project/project_metadata/repository/artifact/manifest` in the redis
# which can especially help to improve the performance of high concurrent
# manifest pulling.
# NOTICE
# If you are deploying Harbor in HA mode, make sure that all the harbor
# instances have the same behaviour, all with caching enabled or disabled,
# otherwise it can lead to potential data inconsistency.
cache:
  # not enabled by default
  enabled: false
  # keep cache for one day by default
  expire_hours: 24

# Harbor core configurations
# Uncomment to enable the following harbor core related configuration items.
# core:
# # The provider for updating project quota(usage), there are 2 options, redis or db,
# # by default is implemented by db but you can switch the updation via redis which
# # can improve the performance of high concurrent pushing to the same project,
# # and reduce the database connections spike and occupies.
# # By redis will bring up some delay for quota usage updation for display, so only
# # suggest switch provider to redis if you were ran into the db connections spike around
# # the scenario of high concurrent pushing to same project, no improvement for other scenes.
# quota_update_provider: redis # Or db
Copy the configuration file:
# Copy
cp harbor.yml.tmpl harbor.yml
# View it without comments and blank lines
cat harbor.yml | grep -v '#' | grep -v '^$'
Edit the configuration file:
Note:
See section "3. Configure HTTPS access to Harbor" above for generating the SSL certificates.
hostname: 192.168.186.128                          # change to your IP or domain name
http:
  port: 8080                                       # changed HTTP port
https:
  port: 8443                                       # changed HTTPS port
  certificate: /data/cert/yourdomain.com.crt       # certificate path
  private_key: /data/cert/yourdomain.com.key       # key path
internal_tls:
  enabled: true
  dir: /etc/harbor/tls/internal                    # TLS directory
harbor_admin_password: Harbor12345
database:
  password: root123
  max_idle_conns: 100
  max_open_conns: 900
  conn_max_lifetime: 5m
  conn_max_idle_time: 0
data_volume: /data                                 # data directory on the host
trivy:
  ignore_unfixed: false
  skip_update: false
  skip_java_db_update: false
  offline_scan: false
  security_check: vuln
  insecure: false
  timeout: 5m0s
jobservice:
  max_job_workers: 10
  max_job_duration_hours: 24
  job_loggers:
    - STD_OUTPUT
    - FILE
notification:
  webhook_job_max_retry: 3
log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 200M
    location: /var/log/harbor                      # log directory
_version: 2.13.0
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy
upload_purging:
  enabled: true
  age: 168h
  interval: 24h
  dryrun: false
cache:
  enabled: false
  expire_hours: 24
Install:
# ./install.sh --with-trivy
./install.sh
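After install.sh finishes, all containers should be up. As a quick check (run from the harbor directory; the health URL assumes the HTTPS port 8443 used in this guide):
# All services should show a healthy / Up state
docker-compose ps
# Harbor health API (use -k or --cacert ca.crt because of the self-signed certificate)
curl -k https://192.168.186.128:8443/api/v2.0/health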
Directory layout after installation:
# Command:
tree harbor
# Output:
harbor
|-- common
| `-- config
| |-- core
| | |-- app.conf
| | |-- certificates
| | `-- env
| |-- db
| | `-- env
| |-- jobservice
| | |-- config.yml
| | `-- env
| |-- log
| | |-- logrotate.conf
| | `-- rsyslog_docker.conf
| |-- nginx
| | |-- conf.d
| | `-- nginx.conf
| |-- portal
| | `-- nginx.conf
| |-- registry
| | |-- config.yml
| | |-- passwd
| | `-- root.crt
| |-- registryctl
| | |-- config.yml
| | `-- env
| `-- shared
| `-- trust-certificates
|-- common.sh
|-- docker-compose.yml
|-- harbor.yml
|-- harbor.yml.tmpl
|-- install.sh
|-- LICENSE
|-- prepare
`-- ssl
    |-- ca.crt
    |-- ca.key
    |-- ca.srl
    |-- v3.ext
    |-- yourdomain.com.cert
    |-- yourdomain.com.crt
    |-- yourdomain.com.csr
    `-- yourdomain.com.key

15 directories, 29 files
View/start/stop the containers:
# List running containers
docker ps

# ----------------------------
# Create and start the containers
docker-compose up -d
# Stop and remove the containers
docker-compose down

# ----------------------------
# Start a container
docker start <container ID/name>
# Stop a container
docker stop <container ID/name>
# Restart a container
docker restart <container ID/name>
# Remove a stopped container
docker rm <container ID/name>
5.2. Install the offline version
Download:
wget https://github.com/goharbor/harbor/releases/download/v2.13.0/harbor-offline-installer-v2.13.0.tgz
Extract:
tar -xf harbor-offline-installer-v2.13.0.tgz
Directory layout:
harbor
├── common.sh
├── harbor.v2.13.0.tar.gz
├── harbor.yml.tmpl
├── install.sh
├── LICENSE
└── prepare

0 directories, 6 files
Change into the directory:
cd harbor
Copy the configuration file:
# Copy
cp harbor.yml.tmpl harbor.yml
# View it without comments and blank lines
cat harbor.yml | grep -v '#' | grep -v '^$'
Edit the configuration file:
Note:
See section "3. Configure HTTPS access to Harbor" above for generating the SSL certificates.
hostname: 192.168.186.128                          # change to your IP or domain name
http:
  port: 8080                                       # changed HTTP port
https:
  port: 8443                                       # changed HTTPS port
  certificate: /data/cert/yourdomain.com.crt       # certificate path
  private_key: /data/cert/yourdomain.com.key       # key path
internal_tls:
  enabled: true
  dir: /etc/harbor/tls/internal                    # TLS directory
harbor_admin_password: Harbor12345
database:
  password: root123
  max_idle_conns: 100
  max_open_conns: 900
  conn_max_lifetime: 5m
  conn_max_idle_time: 0
data_volume: /data                                 # data directory on the host
trivy:
  ignore_unfixed: false
  skip_update: false
  skip_java_db_update: false
  offline_scan: false
  security_check: vuln
  insecure: false
  timeout: 5m0s
jobservice:
  max_job_workers: 10
  max_job_duration_hours: 24
  job_loggers:
    - STD_OUTPUT
    - FILE
notification:
  webhook_job_max_retry: 3
log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 200M
    location: /var/log/harbor                      # log directory
_version: 2.13.0
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy
upload_purging:
  enabled: true
  age: 168h
  interval: 24h
  dryrun: false
cache:
  enabled: false
  expire_hours: 24
Install:
# ./install.sh --with-trivy
./install.sh
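The offline installer first loads the bundled images from harbor.v2.13.0.tar.gz and then starts the services. To confirm the images were loaded, a simple check is:
# The goharbor/* images for v2.13.0 should be listed
docker images | grep goharbor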
Directory layout after installation:
# Command:
tree harbor
# Output:
harbor
├── common
│ └── config
│ ├── core
│ │ ├── app.conf
│ │ ├── certificates
│ │ └── env
│ ├── db
│ │ └── env
│ ├── jobservice
│ │ ├── config.yml
│ │ └── env
│ ├── log
│ │ ├── logrotate.conf
│ │ └── rsyslog_docker.conf
│ ├── nginx
│ │ ├── conf.d
│ │ └── nginx.conf
│ ├── portal
│ │ └── nginx.conf
│ ├── registry
│ │ ├── config.yml
│ │ ├── passwd
│ │ └── root.crt
│ ├── registryctl
│ │ ├── config.yml
│ │ └── env
│ └── shared
│ └── trust-certificates
├── common.sh
├── docker-compose.yml
├── harbor.v2.13.0.tar.gz
├── harbor.yml
├── harbor.yml.tmpl
├── install.sh
├── LICENSE
├── prepare
└── ssl
    ├── ca.crt
    ├── ca.key
    ├── ca.srl
    ├── v3.ext
    ├── yourdomain.com.cert
    ├── yourdomain.com.crt
    ├── yourdomain.com.csr
    └── yourdomain.com.key

15 directories, 30 files
View/start/stop the containers:
# List running containers
docker ps

# ----------------------------
# Create and start the containers
docker-compose up -d
# Stop and remove the containers
docker-compose down

# ----------------------------
# Start a container
docker start <container ID/name>
# Stop a container
docker stop <container ID/name>
# Restart a container
docker restart <container ID/name>
# Remove a stopped container
docker rm <container ID/name>
6. Access via browser
Assume the host IP is 192.168.186.128.
Open http://192.168.186.128:8080 or https://192.168.186.128:8443 in a browser.
The default administrator credentials are admin / Harbor12345.
Enter the username and password:
After a successful login:
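Besides the UI, the API can be used to confirm the instance is healthy and that the admin credentials work (a sketch using the example IP/ports and default password from above):
# Overall health of all components
curl -k https://192.168.186.128:8443/api/v2.0/health
# List projects as admin (basic auth)
curl -k -u admin:Harbor12345 https://192.168.186.128:8443/api/v2.0/projects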
7. Testing
7.1. Push an image as a regular user
User management:
Create a user:
Projects:
Create a new project:
Open the newly created project and go to the Members tab:
Click User and add the new user as a member:
Write a Dockerfile:
# https://docs.docker.com/reference/dockerfile/
FROM alpine:latest
WORKDIR /root
RUN echo hello > halo.txt
Build the image:
docker build -t 192.168.186.128:8080/hello/halo:v1 .
# docker build -t 192.168.186.128:8443/hello/halo:v1 .
Log in to / out of Harbor:
# Log in
docker login 192.168.186.128:8080
# docker login 192.168.186.128:8443

# Log out
docker logout 192.168.186.128:8080
# docker logout 192.168.186.128:8443
Push the image:
docker push 192.168.186.128:8080/hello/halo:v1
# docker push 192.168.186.128:8443/hello/halo:v1
View the image repository:
Click "hello/halo" to view the Artifacts:
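To confirm the push worked end to end, you can also remove the local copy and pull it back from Harbor (same tag as pushed above):
# Remove the local image, then pull it from the registry
docker rmi 192.168.186.128:8080/hello/halo:v1
docker pull 192.168.186.128:8080/hello/halo:v1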
7.2. Push an image with a robot account
Robot accounts:
Add a robot account:
Select the permissions as needed (all permissions are selected here):
Copy the secret or export it to a file (if you lose it, you can obtain a new one later by refreshing the secret):
View the file:
# File name: robot$hello+hello.json
# File content:
{"creation_time":"yyyy-MM-ddTHH:mm:ss.000Z","expires_at":-1,"id":3,"name":"robot$hello+robot","secret":"9mcCmzhNevyViAX3t940HIQ0A36wETxf"}
# Field descriptions:
# creation_time: creation time
# expires_at: expiration time
# id: robot account ID
# name: robot account name
# secret: secret/password
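If the secret was exported to a file, you can log in without typing it by piping it into docker login. This is only a sketch; it assumes the exported file is named robot$hello+hello.json (as above) and that jq is installed:
# Read the secret from the exported JSON and pass it on stdin
jq -r .secret 'robot$hello+hello.json' | docker login -u 'robot$hello+robot' --password-stdin 192.168.186.128:8080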
Write a Dockerfile:
# https://docs.docker.com/reference/dockerfile/
FROM alpine:latest
WORKDIR /root
Build the image:
docker build -t 192.168.186.128:8080/hello/halo:v2 .
# docker build -t 192.168.186.128:8443/hello/halo:v2 .
Log in to / out of Harbor:
# Username: robot$hello+robot
# Password: 9mcCmzhNevyViAX3t940HIQ0A36wETxf

# Log in
docker login 192.168.186.128:8080
# docker login 192.168.186.128:8443

# Log out
docker logout 192.168.186.128:8080
# docker logout 192.168.186.128:8443
Push the image:
docker push 192.168.186.128:8080/hello/halo:v2
# docker push 192.168.186.128:8443/hello/halo:v2
View the image repository:
Click "hello/halo" to view the Artifacts:
8. References
https://goharbor.io/
https://mp.weixin.qq.com/s/sW6ZWuN840cRCcCd0gxwZA