
Flink 2.0.0 Installation Tutorial, 2025 Edition (Step-by-Step)

Flink supports several deployment modes.

local — local mode

standalone — standalone mode; Flink's own built-in cluster, used for development and testing

standalone HA — standalone high-availability mode; Flink's own built-in cluster, used for development and testing

yarn — compute resources are managed entirely by Hadoop YARN; used in production

Differences between Flink 1.13 (2021) and Flink 2.0.0 (2025): the two that matter for installation are that Flink 2.0 requires JDK 11 or newer (JDK 8 is no longer supported), and that the main configuration file is now conf/config.yaml instead of conf/flink-conf.yaml.

Download link:

https://dlcdn.apache.org/flink/flink-2.0.0/flink-2.0.0-bin-scala_2.12.tgz
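For example, to download it directly on node01 (assuming the node has internet access):

cd /opt/modules
wget https://dlcdn.apache.org/flink/flink-2.0.0/flink-2.0.0-bin-scala_2.12.tgz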

Upgrading the JDK

If you do not upgrade the JDK, startup fails with a Java version error (on JDK 8 this typically surfaces as an UnsupportedClassVersionError, since Flink 2.0 class files target Java 11).

Upgrade JDK 1.8 to JDK 11.

Upload the JDK 11 archive to node01.

Extract it:

[root@node01 ~]# cd /opt/modules/
[root@node01 modules]# tar -zxvf jdk-11.0.25_linux-x64_bin.tar.gz -C /opt/installs

Rename it:

cd /opt/installs
mv jdk-11.0.25 jdk11

If a JDK 8 was installed previously, rename it first:

cd /opt/installs
mv jdk jdk8

Create a symbolic link pointing at the new JDK, so anything that references /opt/installs/jdk (such as an existing JAVA_HOME) keeps working:

ln -s jdk11 jdk
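You can verify the link with ls; the arrow should point at jdk11:

ls -ld /opt/installs/jdk
# expected output shape: lrwxrwxrwx ... /opt/installs/jdk -> jdk11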

Sync jdk11 to node02 and node03:

xsync.sh /opt/installs/jdk11
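Note that xsync.sh is a custom distribution script, not a standard tool. If you don't have one, here is a minimal sketch based on rsync (assuming passwordless SSH is set up and node02/node03 resolve via /etc/hosts; adjust the host list to your cluster):

#!/bin/bash
# xsync.sh -- copy the given absolute paths to the other nodes with rsync.
# -a preserves permissions and copies symlinks as symlinks; -v is verbose.
if [ $# -lt 1 ]; then
  echo "Usage: $0 <absolute-path> [more paths...]"
  exit 1
fi
for host in node02 node03; do
  for path in "$@"; do
    # sync each path into the same parent directory on the remote host
    rsync -av "$path" "${host}:$(dirname "$path")/"
  done
done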

On node02 and node03, rename the old jdk directory to jdk8:

cd /opt/installs
mv jdk jdk8

Then, from node01, sync the symlink as well:

xsync.sh /opt/installs/jdk

Verify the Java version on all three machines:

java -version
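All three machines should now report version 11, with output roughly like the following (vendor and build strings will vary):

java version "11.0.25" 2024-10-15 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.25+9-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.25+9-LTS, mixed mode)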

Upload the Flink package, extract it, and configure environment variables

cd /opt/modules
tar -zxf flink-2.0.0-bin-scala_2.12.tgz -C /opt/installs/
cd /opt/installs
mv flink-2.0.0 flink
Configure the environment variables:

vim /etc/profile.d/custom_env.sh

export FLINK_HOME=/opt/installs/flink
export PATH=$PATH:$FLINK_HOME/bin
export HADOOP_CONF_DIR=/opt/installs/hadoop/etc/hadoop

Remember to source /etc/profile afterwards.
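A quick sanity check after sourcing (if flink is not found or an old version prints, re-check the PATH line):

source /etc/profile
echo $FLINK_HOME      # should print /opt/installs/flink
flink --version       # should report Version: 2.0.0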

Editing the configuration files

① /opt/installs/flink/conf/config.yaml (previously named flink-conf.yaml)

YAML format checker: any online YAML editor/validator will do, for example the ad-free one from WGCLOUD.

For editing, IDEA works well; since YAML is very strict about indentation, using Alt + column (block) selection to change several lines at once makes it harder to introduce formatting errors.
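Besides the online tool, you can syntax-check the file locally; a minimal sketch, assuming Python 3 with PyYAML is installed on the node:

python3 -c "import yaml; yaml.safe_load(open('/opt/installs/flink/conf/config.yaml')); print('YAML OK')"

Note this only validates YAML syntax, not whether the keys are valid Flink options.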

The new configuration file:

Reference: "Deploying a fully distributed Flink 1.19.2 cluster on openEuler" (CSDN blog).

Step 1: delete or comment out the JDK 17 compatibility settings at lines 21-24 of the file.

Step 2: modify the remaining configuration items:

jobmanager:
  bind-host: 0.0.0.0
  rpc:
    address: node01
    # The RPC port where the JobManager is reachable.
    port: 6123
  memory:
    process:
      # The total process memory size for the JobManager.
      # Note this accounts for all memory usage within the JobManager process, including JVM metaspace and other overhead.
      size: 1600m
  execution:
    # The failover strategy, i.e., how the job computation recovers from task failures.
    # Only restart tasks that may have been affected by the task failure, which typically includes
    # downstream tasks and potentially upstream tasks if their produced data is no longer available for consumption.
    failover-strategy: region
  archive:
    fs:
      # Directory to upload completed jobs to. Add this directory to the list of
      # monitored directories of the HistoryServer as well (see below).
      dir: hdfs://node01:9820/flink/completed-jobs/

taskmanager:
  bind-host: 0.0.0.0
  host: node01
  # The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.
  numberOfTaskSlots: 2
  memory:
    process:
      size: 1728m

parallelism:
  # The parallelism used for programs that did not specify and other parallelism.
  default: 1

rest:
  # The address to which the REST client will connect to
  address: node01
  bind-address: 0.0.0.0

web:
  submit:
    # Flag to specify whether job submission is enabled from the web-based
    # runtime monitor. Uncomment to disable.
    enable: true
  cancel:
    # Flag to specify whether job cancellation is enabled from the web-based
    # runtime monitor. Uncomment to disable.
    enable: true

historyserver:
  web:
    # The address under which the web-based HistoryServer listens.
    address: node01
    # The port under which the web-based HistoryServer listens.
    port: 8082
  archive:
    fs:
      # Comma separated list of directories to monitor for completed jobs.
      dir: hdfs://node01:9820/flink/completed-jobs/
    # Interval in milliseconds for refreshing the monitored directories.
    fs.refresh-interval: 10000

A complete version of the file, for reference only:

################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
################################################################################

# These parameters are required for Java 17 support.
# They can be safely removed when using Java 8/11.
#  # log4j 2 configuration
#  log:
#    level: TRACE
#    max: 5
#    dir: /tmp/flink_logs

#==============================================================================
# Common
#==============================================================================

jobmanager:
  # The host interface the JobManager will bind to. By default, this is localhost, and will prevent
  # the JobManager from communicating outside the machine/container it is running on.
  # On YARN this setting will be ignored if it is set to 'localhost', defaulting to 0.0.0.0.
  # On Kubernetes this setting will be ignored, defaulting to 0.0.0.0.
  #
  # To enable this, set the bind-host address to one that has access to an outside facing network
  # interface, such as 0.0.0.0.
  bind-host: 0.0.0.0
  rpc:
    # The external address of the host on which the JobManager runs and can be
    # reached by the TaskManagers and any clients which want to connect. This setting
    # is only used in Standalone mode and may be overwritten on the JobManager side
    # by specifying the --host <hostname> parameter of the bin/jobmanager.sh executable.
    # In high availability mode, if you use the bin/start-cluster.sh script and setup
    # the conf/masters file, this will be taken care of automatically. Yarn
    # automatically configure the host name based on the hostname of the node where the
    # JobManager runs.
    address: node01
    # The RPC port where the JobManager is reachable.
    port: 6123
  memory:
    process:
      # The total process memory size for the JobManager.
      # Note this accounts for all memory usage within the JobManager process, including JVM metaspace and other overhead.
      size: 1600m
  execution:
    # The failover strategy, i.e., how the job computation recovers from task failures.
    # Only restart tasks that may have been affected by the task failure, which typically includes
    # downstream tasks and potentially upstream tasks if their produced data is no longer available for consumption.
    failover-strategy: region
  archive:
    fs:
      # Directory to upload completed jobs to. Add this directory to the list of
      # monitored directories of the HistoryServer as well (see below).
      dir: hdfs://node01:9820/flink/completed-jobs/

taskmanager:
  # The host interface the TaskManager will bind to. By default, this is localhost, and will prevent
  # the TaskManager from communicating outside the machine/container it is running on.
  # On YARN this setting will be ignored if it is set to 'localhost', defaulting to 0.0.0.0.
  # On Kubernetes this setting will be ignored, defaulting to 0.0.0.0.
  #
  # To enable this, set the bind-host address to one that has access to an outside facing network
  # interface, such as 0.0.0.0.
  bind-host: 0.0.0.0
  # The address of the host on which the TaskManager runs and can be reached by the JobManager and
  # other TaskManagers. If not specified, the TaskManager will try different strategies to identify
  # the address.
  #
  # Note this address needs to be reachable by the JobManager and forward traffic to one of
  # the interfaces the TaskManager is bound to (see 'taskmanager.bind-host').
  #
  # Note also that unless all TaskManagers are running on the same machine, this address needs to be
  # configured separately for each TaskManager.
  host: node01
  # The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.
  numberOfTaskSlots: 2
  memory:
    process:
      # The total process memory size for the TaskManager.
      #
      # Note this accounts for all memory usage within the TaskManager process, including JVM metaspace and other overhead.
      # To exclude JVM metaspace and overhead, please, use total Flink memory size instead of 'taskmanager.memory.process.size'.
      # It is not recommended to set both 'taskmanager.memory.process.size' and Flink memory.
      size: 1728m

parallelism:
  # The parallelism used for programs that did not specify and other parallelism.
  default: 1

# # The default file system scheme and authority.
# # By default file paths without scheme are interpreted relative to the local
# # root file system 'file:///'. Use this to override the default and interpret
# # relative paths relative to a different file system,
# # for example 'hdfs://mynamenode:12345'
# fs:
#   default-scheme: hdfs://mynamenode:12345

#==============================================================================
# High Availability
#==============================================================================

# high-availability:
#   # The high-availability mode. Possible options are 'NONE' or 'zookeeper'.
#   type: zookeeper
#   # The path where metadata for master recovery is persisted. While ZooKeeper stores
#   # the small ground truth for checkpoint and leader election, this location stores
#   # the larger objects, like persisted dataflow graphs.
#   #
#   # Must be a durable file system that is accessible from all nodes
#   # (like HDFS, S3, Ceph, nfs, ...)
#   storageDir: hdfs:///flink/ha/
#   zookeeper:
#     # The list of ZooKeeper quorum peers that coordinate the high-availability
#     # setup. This must be a list of the form:
#     # "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
#     quorum: localhost:2181
#     client:
#       # ACL options are based on https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_BuiltinACLSchemes
#       # It can be either "creator" (ZOO_CREATE_ALL_ACL) or "open" (ZOO_OPEN_ACL_UNSAFE)
#       # The default value is "open" and it can be changed to "creator" if ZK security is enabled
#       acl: open

#==============================================================================
# Fault tolerance and checkpointing
#==============================================================================

# The backend that will be used to store operator state checkpoints if
# checkpointing is enabled. Checkpointing is enabled when execution.checkpointing.interval > 0.

# # Execution checkpointing related parameters. Please refer to CheckpointConfig and CheckpointingOptions for more details.
# execution:
#   checkpointing:
#     interval: 3min
#     externalized-checkpoint-retention: [DELETE_ON_CANCELLATION, RETAIN_ON_CANCELLATION]
#     max-concurrent-checkpoints: 1
#     min-pause: 0
#     mode: [EXACTLY_ONCE, AT_LEAST_ONCE]
#     timeout: 10min
#     tolerable-failed-checkpoints: 0
#     unaligned: false
#     # Flag to enable/disable incremental checkpoints for backends that
#     # support incremental checkpoints (like the RocksDB state backend).
#     incremental: false
#     # Directory for checkpoints filesystem, when using any of the default bundled
#     # state backends.
#     dir: hdfs://namenode-host:port/flink-checkpoints
#     # Default target directory for savepoints, optional.
#     savepoint-dir: hdfs://namenode-host:port/flink-savepoints

# state:
#   backend:
#     # Supported backends are 'hashmap', 'rocksdb', or the
#     # <class-name-of-factory>.
#     type: hashmap

#==============================================================================
# Rest & web frontend
#==============================================================================

rest:
  # The address to which the REST client will connect to
  address: node01
  # The address that the REST & web server binds to
  # By default, this is localhost, which prevents the REST & web server from
  # being able to communicate outside of the machine/container it is running on.
  #
  # To enable this, set the bind address to one that has access to outside-facing
  # network interface, such as 0.0.0.0.
  bind-address: 0.0.0.0
  # # The port to which the REST client connects to. If rest.bind-port has
  # # not been specified, then the server will bind to this port as well.
  # port: 8081
  # # Port range for the REST and web server to bind to.
  # bind-port: 8080-8090

web:
  submit:
    # Flag to specify whether job submission is enabled from the web-based
    # runtime monitor. Uncomment to disable.
    enable: true
  cancel:
    # Flag to specify whether job cancellation is enabled from the web-based
    # runtime monitor. Uncomment to disable.
    enable: true

#==============================================================================
# Advanced
#==============================================================================

# io:
#   tmp:
#     # Override the directories for temporary files. If not specified, the
#     # system-specific Java temporary directory (java.io.tmpdir property) is taken.
#     #
#     # For framework setups on Yarn, Flink will automatically pick up the
#     # containers' temp directories without any need for configuration.
#     #
#     # Add a delimited list for multiple directories, using the system directory
#     # delimiter (colon ':' on unix) or a comma, e.g.:
#     # /data1/tmp:/data2/tmp:/data3/tmp
#     #
#     # Note: Each directory entry is read from and written to by a different I/O
#     # thread. You can include the same directory multiple times in order to create
#     # multiple I/O threads against that directory. This is for example relevant for
#     # high-throughput RAIDs.
#     dirs: /tmp

# classloader:
#   resolve:
#     # The classloading resolve order. Possible values are 'child-first' (Flink's default)
#     # and 'parent-first' (Java's default).
#     #
#     # Child first classloading allows users to use different dependency/library
#     # versions in their application than those in the classpath. Switching back
#     # to 'parent-first' may help with debugging dependency issues.
#     order: child-first

# The amount of memory going to the network stack. These numbers usually need
# no tuning. Adjusting them may be necessary in case of an "Insufficient number
# of network buffers" error. The default min is 64MB, the default max is 1GB.
#
# taskmanager:
#   memory:
#     network:
#       fraction: 0.1
#       min: 64mb
#       max: 1gb

#==============================================================================
# Flink Cluster Security Configuration
#==============================================================================

# Kerberos authentication for various components - Hadoop, ZooKeeper, and connectors -
# may be enabled in four steps:
# 1. configure the local krb5.conf file
# 2. provide Kerberos credentials (either a keytab or a ticket cache w/ kinit)
# 3. make the credentials available to various JAAS login contexts
# 4. configure the connector to use JAAS/SASL

# # The below configure how Kerberos credentials are provided. A keytab will be used instead of
# # a ticket cache if the keytab path and principal are set.
# security:
#   kerberos:
#     login:
#       use-ticket-cache: true
#       keytab: /path/to/kerberos/keytab
#       principal: flink-user
#       # The configuration below defines which JAAS login contexts
#       contexts: Client,KafkaClient

#==============================================================================
# ZK Security Configuration
#==============================================================================

# zookeeper:
#   sasl:
#     # Below configurations are applicable if ZK ensemble is configured for security
#     #
#     # Override below configuration to provide custom ZK service name if configured
#     # zookeeper.sasl.service-name: zookeeper
#     #
#     # The configuration below must match one of the values set in "security.kerberos.login.contexts"
#     login-context-name: Client

#==============================================================================
# HistoryServer
#==============================================================================

# The HistoryServer is started and stopped via bin/historyserver.sh (start|stop)
#
# jobmanager:
#   archive:
#     fs:
#       # Directory to upload completed jobs to. Add this directory to the list of
#       # monitored directories of the HistoryServer as well (see below).
#       dir: hdfs:///completed-jobs/

historyserver:
  web:
    # The address under which the web-based HistoryServer listens.
    address: node01
    # The port under which the web-based HistoryServer listens.
    port: 8082
  archive:
    fs:
      # Comma separated list of directories to monitor for completed jobs.
      dir: hdfs://node01:9820/flink/completed-jobs/
    # Interval in milliseconds for refreshing the monitored directories.
    fs.refresh-interval: 10000

② /opt/installs/flink/conf/masters

node01:8081

③ /opt/installs/flink/conf/workers

node01
node02
node03

Uploading the Hadoop jar

Place flink-shaded-hadoop-2-uber-2.7.5-10.0.jar (provided with the course materials) into Flink's lib directory.
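If you don't have that shaded jar at hand, Flink also officially supports picking up Hadoop from the environment instead; assuming the hadoop command is on the PATH, add this to the same custom_env.sh on every node:

export HADOOP_CLASSPATH=$(hadoop classpath)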

Distributing to the other nodes

xsync.sh /opt/installs/flink
xsync.sh /etc/profile.d/custom_env.sh

Then fix one hostname on node02 and node03:

change the taskmanager host entry in /opt/installs/flink/conf/config.yaml from node01 to node02 and node03 respectively, as shown below.
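For example with sed (this assumes the line reads exactly "  host: node01" with two leading spaces, as in the file above):

# on node02:
sed -i 's/^  host: node01$/  host: node02/' /opt/installs/flink/conf/config.yaml
# on node03:
sed -i 's/^  host: node01$/  host: node03/' /opt/installs/flink/conf/config.yaml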

Starting everything

# start HDFS
start-dfs.sh
# start the Flink cluster
start-cluster.sh
# start the history server
historyserver.sh start
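To double-check, run jps on each node; with this standalone setup you should see roughly the following (exact Hadoop daemons depend on your HDFS layout):

jps
# node01: NameNode, StandaloneSessionClusterEntrypoint (the JobManager),
#         TaskManagerRunner (the TaskManager), HistoryServer
# node02 / node03: DataNode, TaskManagerRunner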

If the history server fails to start (and the 8082 web UI is therefore unreachable), the most likely cause is that the Hadoop jar was never placed in the lib directory.

Checking the web UIs

http://node01:8081   -- the Flink cluster web UI; only valid while the cluster is up, and the jobs shown there disappear after a restart
Being able to reach 8081 means your cluster is running.
http://node01:8082   -- the Flink history server web UI; even after a restart, previously run jobs are still listed
Being able to reach 8082 means your history server is running.
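To get a job into both UIs, you can submit the streaming WordCount example bundled with the distribution (path as shipped in the Flink 2.0.0 tarball):

flink run $FLINK_HOME/examples/streaming/WordCount.jar

Once it finishes, it shows up under completed jobs at 8081 and, after the refresh interval, in the history server at 8082.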
