Flume Environment Deployment and Configuration in Detail, with Examples

Flume is a distributed, reliable, and highly available system for collecting, aggregating, and transporting large volumes of log data. It supports plugging custom data senders into a logging pipeline to collect data, and it can perform simple processing on the data before writing it to a variety of consumers (such as text files, HDFS, or HBase).

1. What is Flume?

Flume started as a real-time log collection system developed by Cloudera and has been widely recognized and adopted in the industry. Its early releases are collectively known as Flume OG (original generation) and belonged to Cloudera. As Flume's feature set grew, Flume OG's weaknesses became apparent: a bloated code base, poorly designed core components, and non-standard core configuration. In the last OG release, 0.94.0, unstable log delivery was a particularly serious problem. To fix this, on October 22, 2011 Cloudera completed FLUME-728, a milestone change that refactored the core components, the core configuration, and the code architecture; the refactored releases are collectively known as Flume NG (next generation). The other reason for the change was moving Flume under the Apache umbrella, so Cloudera Flume was renamed Apache Flume.

Flume's characteristics: Flume is a distributed, reliable, and highly available system for collecting, aggregating, and transporting large volumes of log data; it can collect from custom data senders and, after simple processing, write to many kinds of consumers (text files, HDFS, HBase, and so on). Flume's data flow is built around events. An Event is Flume's basic unit of data: it carries the log payload (as a byte array) together with header information. Events are created by a Source from the data it receives; once a Source captures an event it applies any configured formatting and pushes the event into one or more Channels. You can think of a Channel as a buffer that holds the event until a Sink has finished processing it. The Sink then persists the data or forwards the event on to another agent's source.

Flume's reliability: when a node fails, logs can still be delivered to other nodes without being lost. Flume provides three levels of delivery guarantee, from strongest to weakest: end-to-end (the agent that receives the data first writes the event to disk and deletes it only after delivery has succeeded; if delivery fails, the event can be re-sent), store-on-failure (the strategy Scribe also uses: when the receiver crashes, data is written locally and sent again once the receiver recovers), and best-effort (data is sent to the receiver without any acknowledgement).

Flume's recoverability: this again relies on the Channel. The FileChannel is recommended, which persists events in the local file system (at the cost of lower performance).

Some core Flume concepts: an Agent runs Flume inside a JVM; each machine runs one agent, but a single agent can contain multiple sources and sinks. A Client produces data and runs in its own thread. A Source collects data from the client and hands it to a Channel. A Sink collects data from the Channel and runs in its own thread. A Channel connects sources and sinks, a bit like a queue. Events can be log records, Avro objects, and so on. The agent is Flume's smallest independent unit of execution: one agent is one JVM, and a single agent is made up of the three major components Source, Channel, and Sink.

  Notably, Flume ships with a large number of built-in Source, Channel, and Sink types, and different types can be combined freely. How they are combined is driven entirely by the user's configuration file, which makes Flume very flexible. For example, a Channel can buffer events in memory or persist them to local disk, and a Sink can write logs to HDFS or HBase, or even forward them to another agent's source. Flume also lets users build multi-hop flows: multiple agents can work together, with support for fan-in, fan-out, contextual routing, and backup routes. This is where Flume really shines. A minimal fan-out sketch follows.
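To make fan-out concrete, here is a minimal sketch of a replicating configuration; the agent name a2, the component names, the host/port, and the HDFS path are illustrative assumptions and not part of the examples that follow. One source copies every event into two channels, one feeding a console logger and the other an HDFS sink.

# a2 is a separate, illustrative agent, not used in the examples below
a2.sources = r1
a2.channels = c1 c2
a2.sinks = k1 k2
# the replicating selector (the default) copies each event into every listed channel
a2.sources.r1.type = syslogtcp
a2.sources.r1.host = localhost
a2.sources.r1.port = 5140
a2.sources.r1.selector.type = replicating
a2.sources.r1.channels = c1 c2
a2.channels.c1.type = memory
a2.channels.c2.type = memory
# one copy goes to the console, the other to HDFS
a2.sinks.k1.type = logger
a2.sinks.k1.channel = c1
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://m1:9000/user/flume/fanout
a2.sinks.k2.channel = c2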

 2. Where is Flume's official website?  http://flume.apache.org/

  3. Where to download it?

  http://www.apache.org/dyn/closer.cgi/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz

  4. How to install it? 1) Extract the downloaded Flume package into the /home/hadoop directory and rename the extracted folder to flume-1.6.0-bin (the paths below assume this); that alone is 50% of the job :) Easy, right?

tar -zxvf apache-flume-1.6.0-bin.tar.gz -C /home/hadoop
mv /home/hadoop/apache-flume-1.6.0-bin /home/hadoop/flume-1.6.0-bin

2) Edit the flume-env.sh configuration file; the main thing to set is the JAVA_HOME variable (adjust it to your own Java installation)

root@m1:/home/hadoop/flume-1.6.0-bin# cp conf/flume-env.sh.template conf/flume-env.sh
root@m1:/home/hadoop/flume-1.6.0-bin# vi conf/flume-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
  
# If this file is placed at FLUME_CONF_DIR/flume-env.sh, it will be sourced
# during Flume startup.
  
# Enviroment variables can be set here.
  
JAVA_HOME=/usr/lib/jvm/java-7-oracle
  
# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
#JAVA_OPTS="-Xms100m -Xmx200m -Dcom.sun.management.jmxremote"
  
# Note that the Flume conf directory is always included in the classpath.
#FLUME_CLASSPATH=""

3) Verify that the installation succeeded

root@m1:/home/hadoop# /home/hadoop/flume-1.6.0-bin/bin/flume-ng version
Flume 1.6.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 8633220df808c4cd0c13d1cf0320454a94f1ea97
Compiled by hshreedharan on Wed May 7 14:49:18 PDT 2014
From source with checksum a01fe726e4380ba0c9f7a7d222db961f
root@m1:/home/hadoop#
 If you see the output above, the installation was successful.
 
 
5. Flume examples
  1) Example 1: Avro
  The avro-client tool can send a given file to Flume; the Avro source receives it over the Avro RPC mechanism.
   a) Create the agent configuration file
root@m1:/home/hadoop#vi /home/hadoop/flume-1.6.0-bin/conf/avro.conf
  
a1.sources = r1
a1.sinks = k1
a1.channels = c1
  
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141
  
# Describe the sink
a1.sinks.k1.type = logger
  
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
  
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.6.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.6.0-bin/conf/avro.conf -n a1 -Dflume.root.logger=INFO,console
c) Create the file to send
root@m1:/home/hadoop# echo "hello world" > /home/hadoop/flume-1.6.0-bin/log.00
d) Send the file with avro-client (an alternative invocation is sketched after the output below)

root@m1:/home/hadoop# /home/hadoop/flume-1.6.0-bin/bin/flume-ng avro-client -c . -H m1 -p 4141 -F /home/hadoop/flume-1.6.0-bin/log.00
e) On m1's console you should see the following output; note the last line:
root@m1:/home/hadoop/flume-1.6.0-bin/conf# /home/hadoop/flume-1.6.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.6.0-bin/conf/avro.conf -n a1 -Dflume.root.logger=INFO,console
Info: Sourcing environment configuration script /home/hadoop/flume-1.5.0-bin/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/hadoop/hadoop-2.2.0/bin/hadoop) for HDFS access
Info: Excluding /home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar from classpath
...
2014-08-10 10:43:25,112 (New I/O worker #1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x92464c4f, /192.168.1.50:59850 :> /192.168.1.50:4141] UNBOUND
2014-08-10 10:43:25,112 (New I/O worker #1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x92464c4f, /192.168.1.50:59850 :> /192.168.1.50:4141] CLOSED
2014-08-10 10:43:25,112 (New I/O worker #1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.channelClosed(NettyServer.java:209)] Connection to /192.168.1.50:59850 disconnected.
2014-08-10 10:43:26,718 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 68 65 6C 6C 6F 20 77 6F 72 6C 64        hello world }
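Besides streaming a file with -F as in step d), the avro-client tool can also read events from standard input (one event per line) and, according to its option list, attach headers from a key=value file via --headerFile. A hedged sketch reusing the source above; the headers.txt file and its contents are hypothetical:

# headers.txt is a hypothetical file with one key=value header per line
root@m1:/home/hadoop# echo "host=m1" > /home/hadoop/flume-1.6.0-bin/headers.txt
# with no -F option, avro-client reads events from stdin
root@m1:/home/hadoop# echo "hello from stdin" | /home/hadoop/flume-1.6.0-bin/bin/flume-ng avro-client -c . -H m1 -p 4141 --headerFile /home/hadoop/flume-1.6.0-bin/headers.txt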

2) Example 2: Spool
    The Spooling Directory source watches the configured directory for new files and reads the data out of them. Two caveats:
    1) Files copied into the spool directory must not be opened or edited afterwards (a copy pattern that respects this is sketched at the end of this example).
    2) The spool directory must not contain subdirectories.
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.6.0-bin/conf/spool.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.channels = c1
a1.sources.r1.spoolDir = /home/hadoop/flume-1.6.0-bin/logs
a1.sources.r1.fileHeader = true
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
b) Start Flume agent a1
root@m1:/home/hadoop# /home/hadoop/flume-1.6.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.6.0-bin/conf/spool.conf -n a1 -Dflume.root.logger=INFO,console
c) Add a file to the /home/hadoop/flume-1.6.0-bin/logs directory

root@m1:/home/hadoop# echo "spool test1" > /home/hadoop/flume-1.6.0-bin/logs/spool_text.log
d) On m1's console you should see output like the following:
14/08/10 11:37:13 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:13 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:14 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/hadoop/flume-1.5.0-bin/logs/spool_text.log to /home/hadoop/flume-1.5.0-bin/logs/spool_text.log.COMPLETED
14/08/10 11:37:14 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:14 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:14 INFO sink.LoggerSink: Event: { headers:{file=/home/hadoop/flume-1.5.0-bin/logs/spool_text.log} body: 73 70 6F 6F 6C 20 74 65 73 74 31        spool test1 }
14/08/10 11:37:15 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:15 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:16 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:16 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:17 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
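Because of caveat 1) above (files must not be modified once they are in the spool directory), a common pattern is to write the file somewhere else first and then move it in; a rename on the same filesystem is atomic, so the source never sees a half-written file. A minimal sketch, assuming the spoolDir configured above and a hypothetical staging directory:

# the staging directory is hypothetical; any directory outside the spool dir on the same filesystem works
root@m1:/home/hadoop# mkdir -p /home/hadoop/flume-1.6.0-bin/staging
root@m1:/home/hadoop# echo "spool test2" > /home/hadoop/flume-1.6.0-bin/staging/spool_text2.log
# mv within one filesystem is atomic, so the file appears in the spool dir fully written
root@m1:/home/hadoop# mv /home/hadoop/flume-1.6.0-bin/staging/spool_text2.log /home/hadoop/flume-1.6.0-bin/logs/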
3) Example 3: Exec
    The Exec source runs a given command and uses its output as the data source. If you use the tail command, the file must receive enough content for any output to show up (a note on reliability is sketched at the end of this example).
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.6.0-bin/conf/exec_tail.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -F /home/hadoop/flume-1.6.0-bin/log_exec_tail
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
b) Start Flume agent a1
root@m1:/home/hadoop# /home/hadoop/flume-1.6.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.6.0-bin/conf/exec_tail.conf -n a1 -Dflume.root.logger=INFO,console
c) Generate enough content in the file

root@m1:/home/hadoop# for i in {1..100};do echo "exec tail$i" >> /home/hadoop/flume-1.6.0-bin/log_exec_tail;echo $i;sleep 0.1;done
d) On m1's console you should see:
2014-08-10 10:59:25,513 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 20 74 65 73 74    exec tail test }
2014-08-10 10:59:34,535 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 20 74 65 73 74    exec tail test }
2014-08-10 11:01:40,557 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 31          exec tail1 }
2014-08-10 11:01:41,180 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 32          exec tail2 }
2014-08-10 11:01:41,180 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 33          exec tail3 }
2014-08-10 11:01:41,181 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 34          exec tail4 }
2014-08-10 11:01:41,181 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 35          exec tail5 }
2014-08-10 11:01:41,181 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 36          exec tail6 }
....
....
....
2014-08-10 11:01:51,550 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 39 36        exec tail96 }
2014-08-10 11:01:51,550 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 39 37        exec tail97 }
2014-08-10 11:01:51,551 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 39 38        exec tail98 }
2014-08-10 11:01:51,551 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 39 39        exec tail99 }
2014-08-10 11:01:51,551 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 31 30 30       exec tail100 }
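As noted above, the Exec source gives no delivery guarantee: if the tail process or the agent dies, events can be lost. The source does expose options to at least restart the command automatically; a sketch of extra lines that could be added to exec_tail.conf (the values are illustrative assumptions):

# restart the command if it exits, waiting 10 seconds between attempts
a1.sources.r1.restart = true
a1.sources.r1.restartThrottle = 10000
# also route the command's stderr through the agent's log
a1.sources.r1.logStdErr = true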
4) Example 4: Syslogtcp
    The Syslogtcp source listens on a TCP port and uses incoming syslog messages as the data source.
      a) Create the agent configuration file
root@m1:/home/hadoop# vi /home/hadoop/flume-1.6.0-bin/conf/syslog_tcp.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
b) Start Flume agent a1
root@m1:/home/hadoop# /home/hadoop/flume-1.6.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.6.0-bin/conf/syslog_tcp.conf -n a1 -Dflume.root.logger=INFO,console
c) Generate a test syslog message

root@m1:/home/hadoop# echo "hello idoall.org syslog" | nc localhost 5140
d) On m1's console you should see:
14/08/10 11:41:45 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/home/hadoop/flume-1.5.0-bin/conf/syslog_tcp.conf
14/08/10 11:41:45 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
14/08/10 11:41:45 INFO conf.FlumeConfiguration: Processing:k1
14/08/10 11:41:45 INFO conf.FlumeConfiguration: Processing:k1
14/08/10 11:41:45 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
14/08/10 11:41:45 INFO node.AbstractConfigurationProvider: Creating channels
14/08/10 11:41:45 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type memory
14/08/10 11:41:45 INFO node.AbstractConfigurationProvider: Created channel c1
14/08/10 11:41:45 INFO source.DefaultSourceFactory: Creating instance of source r1, type syslogtcp
14/08/10 11:41:45 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: logger
14/08/10 11:41:45 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r1, k1]
14/08/10 11:41:45 INFO node.Application: Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6538b14 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
14/08/10 11:41:45 INFO node.Application: Starting Channel c1
14/08/10 11:41:45 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
14/08/10 11:41:45 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
14/08/10 11:41:45 INFO node.Application: Starting Sink k1
14/08/10 11:41:45 INFO node.Application: Starting Source r1
14/08/10 11:41:45 INFO source.SyslogTcpSource: Syslog TCP Source starting...
14/08/10 11:42:15 WARN source.SyslogUtils: Event created from Invalid Syslog data.
14/08/10 11:42:15 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 68 65 6C 6C 6F 20 69 64 6F 61 6C 6C 2E 6F 72 67 hello idoall.org }
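The "Event created from Invalid Syslog data" warning appears because the plain echo text carries no syslog priority or timestamp header, so the source keeps the raw bytes and marks the event invalid. A line in RFC 3164 style should parse cleanly; a hedged sketch (the priority value <13> and the hostname m1 are illustrative):

# <13> = facility 1 (user) * 8 + severity 5 (notice); hostname m1 is illustrative
root@m1:/home/hadoop# echo "<13>$(date '+%b %e %H:%M:%S') m1 hello idoall.org syslog" | nc localhost 5140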

5) Example 5: JSONHandler
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.6.0-bin/conf/post_json.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.http.HTTPSource
a1.sources.r1.port = 8888
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.6.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.6.0-bin/conf/post_json.conf -n a1 -Dflume.root.logger=INFO,console
c) Send a JSON-formatted POST request (a multi-event variant is sketched after the output below)

root@m1:/home/hadoop# curl -X POST -d '[{ "headers" :{"a" : "a1","b" : "b1"},"body" : "idoall.org_body"}]' http://localhost:8888
d) On m1's console you should see:
14/08/10 11:49:59 INFO node.Application: Starting Channel c1
14/08/10 11:49:59 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
14/08/10 11:49:59 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
14/08/10 11:49:59 INFO node.Application: Starting Sink k1
14/08/10 11:49:59 INFO node.Application: Starting Source r1
14/08/10 11:49:59 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
14/08/10 11:49:59 INFO mortbay.log: jetty-6.1.26
14/08/10 11:50:00 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:8888
14/08/10 11:50:00 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
14/08/10 11:50:00 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
14/08/10 12:14:32 INFO sink.LoggerSink: Event: { headers:{b=b1, a=a1} body: 69 64 6F 61 6C 6C 2E 6F 72 67 5F 62 6F 64 79  idoall.org_body }
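The default JSONHandler expects the POST body to be a JSON array of events, each with a headers map and a body string, so several events can be delivered in a single request. A sketch reusing the source above (the header/body values are illustrative):

root@m1:/home/hadoop# curl -X POST -d '[{"headers":{"a":"a1"},"body":"event one"},{"headers":{"a":"a2"},"body":"event two"}]' http://localhost:8888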
6) Example 6: Hadoop (HDFS) sink
   For installing and deploying hadoop 2.2.0, see the earlier article on deploying a distributed environment with ubuntu 12.04 + hadoop 2.2.0 + zookeeper 3.4.5 + hbase 0.96.2 + hive 0.13.1.
      a) Create the agent configuration file
root@m1:/home/hadoop# vi /home/hadoop/flume-1.6.0-bin/conf/hdfs_sink.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://m1:9000/user/flume/syslogtcp
a1.sinks.k1.hdfs.filePrefix = Syslog
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.6.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.6.0-bin/conf/hdfs_sink.conf -n a1 -Dflume.root.logger=INFO,console
c) Generate a test syslog message

root@m1:/home/hadoop# echo "hello idoall flume -> hadoop testing one" | nc localhost 5140
d) On m1's console you should see:

14/08/10 12:20:39 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
14/08/10 12:20:39 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
14/08/10 12:20:39 INFO node.Application: Starting Sink k1
14/08/10 12:20:39 INFO node.Application: Starting Source r1
14/08/10 12:20:39 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
14/08/10 12:20:39 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
14/08/10 12:20:39 INFO source.SyslogTcpSource: Syslog TCP Source starting...
14/08/10 12:21:46 WARN source.SyslogUtils: Event created from Invalid Syslog data.
14/08/10 12:21:49 INFO hdfs.HDFSSequenceFile: writeFormat = Writable, UseRawLocalFileSystem = false
14/08/10 12:21:49 INFO hdfs.BucketWriter: Creating hdfs://m1:9000/user/flume/syslogtcp//Syslog.1407644509504.tmp
14/08/10 12:22:20 INFO hdfs.BucketWriter: Closing hdfs://m1:9000/user/flume/syslogtcp//Syslog.1407644509504.tmp
14/08/10 12:22:20 INFO hdfs.BucketWriter: Close tries incremented
14/08/10 12:22:20 INFO hdfs.BucketWriter: Renaming hdfs://m1:9000/user/flume/syslogtcp/Syslog.1407644509504.tmp to hdfs://m1:9000/user/flume/syslogtcp/Syslog.1407644509504
14/08/10 12:22:20 INFO hdfs.HDFSEventSink: Writer callback called.
e) Open another window on m1 and check whether the file was created in hadoop

root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hadoop fs -ls /user/flume/syslogtcp
Found 1 items
-rw-r--r--  3 root supergroup    155 2014-08-10 12:22 /user/flume/syslogtcp/Syslog.1407644509504
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hadoop fs -cat /user/flume/syslogtcp/Syslog.1407644509504
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable^;>Gv$hello idoall flume -> hadoop testing one
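The round/roundValue/roundUnit settings above only take effect when hdfs.path contains time escape sequences; with a fixed path they have no visible effect. A sketch of a time-bucketed variant with explicit roll settings (the path layout and roll values are illustrative assumptions, not the configuration used above):

a1.sinks.k1.hdfs.path = hdfs://m1:9000/user/flume/syslogtcp/%Y-%m-%d/%H%M
# take the timestamp from the agent host instead of requiring a timestamp header on every event
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# write plain text files instead of SequenceFiles
a1.sinks.k1.hdfs.fileType = DataStream
# roll every 10 minutes or every 128 MB, whichever comes first; never roll by event count
a1.sinks.k1.hdfs.rollInterval = 600
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0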
7) Example 7: File Roll Sink
      a) Create the agent configuration file
root@m1:/home/hadoop# vi /home/hadoop/flume-1.6.0-bin/conf/file_roll.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5555
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /home/hadoop/flume-1.6.0-bin/logs
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.6.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.6.0-bin/conf/file_roll.conf -n a1 -Dflume.root.logger=INFO,console
c) Generate test log messages
root@m1:/home/hadoop# echo "hello idoall.org syslog" | nc localhost 5555
root@m1:/home/hadoop# echo "hello idoall.org syslog 2" | nc localhost 5555
d) Check whether files have been created under /home/hadoop/flume-1.6.0-bin/logs; by default a new file is rolled every 30 seconds (see the rollInterval note at the end of this example)

root@m1:/home/hadoop# ll /home/hadoop/flume-1.6.0-bin/logs
total 272
drwxr-xr-x 3 root root  4096 Aug 10 12:50 ./
drwxr-xr-x 9 root root  4096 Aug 10 10:59 ../
-rw-r--r-- 1 root root   50 Aug 10 12:49 1407646164782-1
-rw-r--r-- 1 root root   0 Aug 10 12:49 1407646164782-2
-rw-r--r-- 1 root root   0 Aug 10 12:50 1407646164782-3
root@m1:/home/hadoop# cat /home/hadoop/flume-1.6.0-bin/logs/1407646164782-1 /home/hadoop/flume-1.6.0-bin/logs/1407646164782-2
hello idoall.org syslog
hello idoall.org syslog 2
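The 30-second roll seen above is the File Roll sink's default; it can be changed (or disabled with 0) through sink.rollInterval. A sketch of the extra line for file_roll.conf, with an illustrative value:

# roll a new file every 5 minutes instead of every 30 seconds (0 would disable time-based rolling)
a1.sinks.k1.sink.rollInterval = 300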












