Duplicate column names in a Phoenix CREATE TABLE statement left a table that could not be dropped, altered, or written to.

Recovery plan:
Step 1: delete the table's metadata from the Phoenix system tables, chiefly SYSTEM.CATALOG. Step 2: drop the corresponding table in HBase.

Steps:
(1) List the Phoenix system tables
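From sqlline, the built-in !tables command lists every table, including the SYSTEM ones:

0: jdbc:phoenix:localhost:2181> !tables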
SYSTEM.CATALOG holds the metadata for every table, both system tables and user-created ones.

SYSTEM.FUNCTION holds the metadata for every function, both built-in and user-defined.

SYSTEM.SEQUENCE holds the state of sequences created with CREATE SEQUENCE.

SYSTEM.STATS holds the statistics (guideposts) that Phoenix collects per table.

(2) Inspect the structure of SYSTEM.CATALOG
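sqlline's !describe command prints the column layout, and a SELECT over the key columns shows exactly which catalog rows belong to the broken table (TABLE_SCHEM, TABLE_NAME, and COLUMN_NAME are standard SYSTEM.CATALOG columns):

0: jdbc:phoenix:localhost:2181> !describe SYSTEM.CATALOG
0: jdbc:phoenix:localhost:2181> SELECT TABLE_SCHEM, TABLE_NAME, COLUMN_NAME FROM SYSTEM.CATALOG WHERE TABLE_NAME = 'USER_DETAILS';

With duplicate column names, the same COLUMN_NAME should appear more than once in the result.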

(3) Delete the broken table's rows from SYSTEM.CATALOG
DELETE FROM SYSTEM.CATALOG WHERE TABLE_NAME = 'USER_DETAILS';
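If the table had been created under a schema, TABLE_SCHEM would need to be part of the predicate as well, so that same-named tables in other schemas are not touched; a variant for a table in the default (empty) schema:

DELETE FROM SYSTEM.CATALOG WHERE TABLE_NAME = 'USER_DETAILS' AND TABLE_SCHEM IS NULL;

Note that Phoenix caches table metadata on both the client and the server, so it is safest to reconnect sqlline after editing SYSTEM.CATALOG by hand.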

(4) Inspect the structure of SYSTEM.STATS
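SYSTEM.STATS keys its rows by the physical HBase table name, so stale statistics for the table can be removed the same way (a sketch, assuming PHYSICAL_NAME as the leading key column and no schema prefix):

DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME = 'USER_DETAILS';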

(5) Drop the table in HBase
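In the HBase shell a table must be disabled before it can be dropped; assuming the HBase table name matches the Phoenix table name:

hbase(main):001:0> disable 'USER_DETAILS'
hbase(main):002:0> drop 'USER_DETAILS'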

(6) Recreate the table and verify with a query
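The original CREATE TABLE statement is not shown; a minimal sketch with hypothetical columns, this time with every column name unique:

CREATE TABLE USER_DETAILS (
    ID BIGINT NOT NULL PRIMARY KEY,
    NAME VARCHAR,
    EMAIL VARCHAR
);
SELECT * FROM USER_DETAILS;

For reference, this is the error that a direct DROP TABLE produced before the manual cleanup. The ArrayIndexOutOfBoundsException is thrown while the Phoenix coprocessor rebuilds the table definition (PTableImpl.init) from the corrupted catalog rows, which is why DROP TABLE itself could not succeed: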

0: jdbc:phoenix:localhost:2181> drop table USER_DETAILS;
16/07/21 14:38:15 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: USER_DETAILS: 36
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1469)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11725)
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7796)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 36
    at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:403)
    at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315)
    at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1489)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1442)
    ... 10 more

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:327)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1624)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
    at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.dropTable(MetaDataProtos.java:11925)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1480)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1466)
    at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1797)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: USER_DETAILS: 36
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1469)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11725)
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7796)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 36
    at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:403)
    at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315)
    at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1489)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1442)
    ... 10 more

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1268)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:34118)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1620)
    ... 13 more
16/07/21 14:38:15 WARN client.HTable: Error calling coprocessor service org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for row \x00\x00USER_DETAILS
java.util.concurrent.ExecutionException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: USER_DETAILS: 36
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1469)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11725)
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7796)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 36
    at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:403)
    at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315)
    at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1489)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1442)
    ... 10 more

    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1809)
    at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1765)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1196)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1176)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1465)
    at org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2304)
    at org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2226)
    at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:860)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
    at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1345)
    at sqlline.Commands.execute(Commands.java:822)
    at sqlline.Commands.sql(Commands.java:732)
    at sqlline.SqlLine.dispatch(SqlLine.java:808)
    at sqlline.SqlLine.begin(SqlLine.java:681)
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: USER_DETAILS: 36
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1469)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11725)
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7796)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 36
    at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:403)
    at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315)
    at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1489)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1442)
    ... 10 more

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:327)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1624)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
    at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.dropTable(MetaDataProtos.java:11925)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1480)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1466)
    at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1797)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: USER_DETAILS: 36
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1469)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11725)
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7796)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 36
    at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:403)
    at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315)
    at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1489)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1442)
    ... 10 more

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1268)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient
