tsuna / gohbase
Pure-Go HBase client
License: Apache License 2.0
It also doesn't work with BinaryComparator, SubstringComparator, LongComparator, BinaryPrefixComparator, BitComparator, or RegexStringComparator.
flr := filter.NewCompareFilter(filter.Equal, filter.NewBinaryComparator(filter.NewByteArrayComparable([]byte("row1"))))
scanRequest, _ := hrpc.NewScanRange(context.Background(), []byte("test"), []byte("row1"), []byte("row4"), hrpc.Filters(flr))
scanner := client.Scan(scanRequest)
ERRO[0000] unexpected error receiving rpc response client="RegionClient{Addr: ericlam.c.elite-nuance-751.internal:16201}" err="HBase Java exception org.apache.hadoop.hbase.DoNotRetryIOException:\norg.apache.hadoop.hbase.DoNotRetryIOException: java.lang.reflect.InvocationTargetException\n\tat org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1479)\n\tat org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:994)\n\tat org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2413)\n\tat org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)\n\tat org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)\n\tat org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)\n\tat org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)\n\tat org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)\n\tat java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.reflect.InvocationTargetException\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1475)\n\t... 8 more\nCaused by: org.apache.hadoop.hbase.exceptions.DeserializationException: parseFrom called on base Filter, but should be called on derived type\n\tat org.apache.hadoop.hbase.filter.Filter.parseFrom(Filter.java:270)\n\t... 13 more\n"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x401cf0]
time="2017-04-03T17:50:22+08:00" level=error msg="failed looking up region" backoff=16ms err="deadline exceeded" key=1 table=test
time="2017-04-03T17:50:52+08:00" level=error msg="failed looking up region" backoff=32ms err="deadline exceeded" key=1 table=test
time="2017-04-03T17:51:22+08:00" level=error msg="failed looking up region" backoff=64ms err="deadline exceeded" key=1 table=test
time="2017-04-03T17:51:52+08:00" level=error msg="failed looking up region" backoff=128ms err="deadline exceeded" key=1 table=test
time="2017-04-03T17:52:22+08:00" level=error msg="failed looking up region" backoff=256ms err="deadline exceeded" key=1 table=test
time="2017-04-03T17:52:52+08:00" level=error msg="failed looking up region" backoff=512ms err="deadline exceeded" key=1 table=test
time="2017-04-03T17:53:23+08:00" level=error msg="failed looking up region" backoff=1.024s err="deadline exceeded" key=1 table=test
time="2017-04-03T17:53:54+08:00" level=error msg="failed looking up region" backoff=2.048s err="deadline exceeded" key=1 table=test
time="2017-04-03T17:54:26+08:00" level=error msg="failed looking up region" backoff=4.096s err="deadline exceeded" key=1 table=test
time="2017-04-03T17:55:00+08:00" level=error msg="failed looking up region" backoff=8.192s err="deadline exceeded" key=1 table=test
time="2017-04-03T17:55:33+08:00" level=info msg="added new region client" client=RegionClient{Host: yakkety, Port: 40285}
time="2017-04-03T17:55:33+08:00" level=info msg="added new region" overlaps=[] region=RegionInfo{Name: test,,1491097549520.89c4b68edcedd43f93accb5eb87574b7., ID: 1491097549520, Namespace: , Table: test, StartKey: , StopKey: } replaced=true
res: &{[0xc04223c820 0xc04223c8c0] <nil> 0xc042034ce0}
My test code:
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/tsuna/gohbase"
    "github.com/tsuna/gohbase/hrpc"
)

func main() {
    client := gohbase.NewClient("192.168.0.106")
    getRequest, err := hrpc.NewGetStr(context.Background(), "test", "1")
    if err != nil {
        log.Println(err.Error())
        return
    }
    getRsp, err := client.Get(getRequest)
    if err != nil {
        log.Println(err.Error())
        return
    }
    fmt.Println("res: ", getRsp)
}
Hi,
When I create a table, I can set a namespace. But when I use Put() or Get(), how can I set a namespace?
There is no protocol command to do this, so it would need to be implemented by combining disable + drop + create. I have implemented this myself in a wrapper, and it works, but there are two issues that would be best if directly integrated into gohbase:
It would be nice if gohbase provided this integration directly. At the very least, support for Describe Table would be great.
It seems to me gohbase doesn't support delete row actions with a specified timestamp:
Line 241 in bfd7afa
Is that correct?
We've fixed it on our fork here - cloudflare#8. What is the current way to submit PRs?
If my HBase has a custom coprocessor, can the client communicate with it, similar to how SCANNER works? Or does the project already have such a function?
I have a requirement to limit the number of columns returned from very wide rows.
I have forked this project and I am able to make this work by exposing the StoreLimit property which already exists in the Scan and Get structs.
Before I continue on with this, is there some reason why StoreLimit and StoreOffset aren't already accessible?
I am doing a series (read: thousands) of PUT operations on a table that does exist (basically, I am scanning one table, and for each row, doing a small transform and putting into another table). Sometimes, the PUT operation fails and I get the error "table not found" from the client. The rows that this operation fails on are different every time the task is run.
I have narrowed down the error to the following function:
// metaLookup checks meta table for the region in which the given row key for the given table is.
func (c *client) metaLookup(ctx context.Context,
table, key []byte) (hrpc.RegionInfo, string, uint16, error) {
metaKey := createRegionSearchKey(table, key)
rpc, err := hrpc.NewGetBefore(ctx, metaTableName, metaKey, hrpc.Families(infoFamily))
if err != nil {
return nil, "", 0, err
}
resp, err := c.Get(rpc)
if err != nil {
return nil, "", 0, err
}
if len(resp.Cells) == 0 {
return nil, "", 0, TableNotFound
}
reg, host, port, err := region.ParseRegionInfo(resp)
if err != nil {
return nil, "", 0, err
}
if !bytes.Equal(table, fullyQualifiedTable(reg)) {
// This would indicate a bug in HBase.
return nil, "", 0, fmt.Errorf("wtf: meta returned an entry for the wrong table!"+
" Looked up table=%q key=%q got region=%s", table, key, reg)
} else if len(reg.StopKey()) != 0 &&
bytes.Compare(key, reg.StopKey()) >= 0 {
// This would indicate a hole in the meta table.
return nil, "", 0, fmt.Errorf("wtf: meta returned an entry for the wrong region!"+
" Looked up table=%q key=%q got region=%s", table, key, reg)
}
return reg, host, port, nil
}
Specifically, the block:
if len(resp.Cells) == 0 {
return nil, "", 0, TableNotFound
}
It looks like, for some reason, no results are being returned when trying to retrieve the metadata for a region. What situations might cause this issue? I'm beginning to think this is another environment related issue, but wanted to be sure before I go yelling at our ops team. :)
What's left before this loses the prototype status and is considered production ready?
I want to delete a row from a table.
I found a function that deletes a specified column of a column family, but I did not find a function to delete a whole row from a table.
Is there any way? Please tell me.
Please give me some ideas.
Hey guys, this project's default user is root. I notice it's defined in type client struct{}, and this is a private struct, so how can we specify the HBase user?
I don't have root permission, so I get these exceptions:
time="2017-04-12T09:42:46+08:00" level=error msg="failed looking up region" backoff=13m23.192s err="deadline exceeded" key=1 table="GRIH:TEST"
time="2017-04-12T09:56:29+08:00" level=info msg="added new region client" client=RegionClient{Host: Slave01, Port: 60020}
time="2017-04-12T09:56:29+08:00" level=info msg="added new region" overlaps=[] region=RegionInfo{Name: GRIH:TEST,,1491892060219.ff2bf261855ebdd27e70e26a5cee6a81., ID: 1491892060219, Namespace: GRIH, Table: TEST, StartKey: , StopKey: } replaced=true
time="2017-04-12T11:39:21+08:00" level=info msg="added new region client" client=RegionClient{Host: Slave09, Port: 60020}
2017/04/12 11:39:21 HBase Java exception org.apache.hadoop.hbase.security.AccessDeniedException:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=root, scope=GRIH:TEST, family=colFam:age, params=[table=GRIH:TEST,family=colFam:age],action=WRITE)
at org.apache.hadoop.hbase.security.access.AccessController.prePut(AccessController.java:1650)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:918)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.prePut(RegionCoprocessorHost.java:914)
at org.apache.hadoop.hbase.regionserver.HRegion.doPreMutationHook(HRegion.java:2896)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2871)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2817)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2821)
at org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:3548)
at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:2694)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2256)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33646)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
I got the scan demo and tried it, but got this log:
ERRO[0060] failed looking up region backoff=32ms err="deadline exceeded" key= table=QueryCompHtableName
Can you show a detailed scan demo?
Thank you.
When scanner.Next() is called, the client crashes with the error: index out of range.
The reason is that when I scan the table, the count of CellsPerResult is n (n > 0) while the count of PartialFlagPerResult is 0. But why is the count of PartialFlagPerResult 0?
I am using the new scanner api like so:
func ScanCh(scanner hrpc.Scanner, errCallbacks ...func(row *hrpc.Result, err error)) <-chan *hrpc.Result {
    ch := make(chan *hrpc.Result)
    go func() {
        defer close(ch)
        defer scanner.Close()
        var row *hrpc.Result
        var err error
        for err != io.EOF {
            if row, err = scanner.Next(); err == nil {
                ch <- row
            } else if err != io.EOF {
                for _, f := range errCallbacks {
                    f(row, err)
                }
            }
        }
    }()
    return ch
}
After some minutes and several hundred rows being scanned, I am seeing the following error from HBase:
ERRO[0249] failed to close scanner err="HBase Java exception org.apache.hadoop.hbase.UnknownScannerException:
org.apache.hadoop.hbase.UnknownScannerException: Name: 1282974, already closed?
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2128)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
" scannerID=1282974
It should also be noted that this error occurs well before all of the rows are scanned, so I never get to scan all rows.
The NumberOfRows option on a scan has no effect. It is ignored internally.
Hi,
I want to get part of the table; how can I set a limit on the number of rows?
Hi,
I want to set a startRow and then get 5 rows starting from startRow. When I set DefaultNumberOfRows to 5, it doesn't work. What can I do?
How can I create table like this query in gohbase?
create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
@timoha We have come across a pretty major issue. When using Client.Get, if no result is found in HBase, HBase will return an RPC with no value and no error. The gohbase client will take this RPC response and return an empty *hrpc.Result and no error. A workaround is to check the resulting *hrpc.Result and make sure it's not the zero value, working under the assumption that a zero-value hrpc.Result is never valid.
I am fairly certain that the resulting code is here:
// ToLocalResult takes a protobuf Result type and converts it to our own
// Result type in constant time.
func ToLocalResult(pbr *pb.Result) *Result {
// We should return nil here, not an empty struct.
if pbr == nil {
return &Result{}
}
// ...
}
I can submit a PR for this if you wish, but I wanted to check in here to ensure that there wasn't some specific reason that an empty struct is returned here.
How do I use a regex scan filter?
I have an auditing use case where I'm trying to determine key ranges - really I only want to know the start and end keys between a given key range.
The reverse flag exists in the protobuf definition. I've made a quick proof of concept change to expose it and superficially it seems to do exactly what I need.
Would anyone see a problem in the library with reverse scanning - especially with respect to region switches? Is it perhaps already implemented but I haven't found it?
When I try to perform a Scan or a Get operation against a system table (e.g. hbase:namespace or hbase:meta), I get back the following error:
WTF: Meta returned an entry for the wrong table! Looked up table="hbase:namespace" key="default" got region=*region.info{Table: "namespace", Name: "hbase:namespace,,1479305903740.2ef03b3d1421dd63dd7774cd115039cb.", StopKey: ""}
It seems that the table name returned by HBase doesn't include the "hbase:" prefix, so it fails this check.
Not sure how to go about fixing this though.
Hello, back again. =)
Almost everything is working great for me so far, however I currently have an issue with the admin client.
I created a new AdminClient using gohbase.NewAdminClient and tried to use it to drop a table using AdminClient.DeleteTable, which (as I sort of expected it would) complained about the table not being disabled. That's fine; I first disable the table, which works, then I immediately call DeleteTable, and this time it fails with an entirely different error that I cannot seem to figure out:
HBase Java exception org.apache.hadoop.hbase.DoNotRetryIOException:
<IP redacted> is unable to read call parameter from client <IP redacted>; java.lang.UnsupportedOperationException: getProcedureResult
We are using Cloudera HBase 5.6.x.
Any ideas what is going on here? So far, this is the only function that is failing.
@jonbonazza Looks like there's a bug introduced by #32, where setting zkRoot in AdminClient was forgotten: https://github.com/tsuna/gohbase/blob/master/admin_client.go#L36. This broke operations on the HBase master.
Hi there,
I'm really new to using HBase and I think this library would be of great help. Unfortunately I can't get any response from its methods. Every time I run something, the console gets stuck and stops printing. For instance, this is my code below:
package main

import (
    "fmt"

    "github.com/tsuna/gohbase"
    "github.com/tsuna/gohbase/hrpc"

    "golang.org/x/net/context"
)

func main() {
    client := gohbase.NewClient("localhost:8080")
    fmt.Printf("1\n")
    getRequest, err := hrpc.NewGetStr(context.Background(), "mytable", "row1")
    if err != nil {
        fmt.Printf("Error: %s", err)
    }
    fmt.Printf("2\n")
    getRsp, err := client.Get(getRequest)
    fmt.Printf("3\n")
    fmt.Println(getRsp)
    scanRequest, err := hrpc.NewScanStr(context.Background(), "mytable")
    if err != nil {
        fmt.Printf("Error: %s", err)
    }
    rsp, err := client.Scan(scanRequest)
    if err != nil {
        fmt.Printf("error: %v", err)
    }
    if len(rsp) != 1 {
        fmt.Printf("%d rows", len(rsp))
    }
    fmt.Printf("-- ENDED -- ")
}
A few notes:
Does anyone have any idea what is going on?
We are using Cloudera's HBase distro version 5.6.1. This is essentially just packaging around hbase version 1.0.0. The gohbase README says that it supports hbase version 1.0.0, but I am not able to connect. Is there a zookeeper version requirement?
I am using the following code:
client := gohbase.NewClient("ommited")
ctx, _ := context.WithTimeout(context.Background(), 5*time.Second)
scan, err := hrpc.NewScan(ctx, []byte("omitted"))
if err != nil {
    fmt.Println(err)
}
res, err := client.Scan(scan)
if err != nil {
    fmt.Println(err)
}
The final println always prints the error "deadline exceeded." I have tried to up the timeout in the context to 10s, but this doesn't help and I can't see why a connection wouldn't happen within 5 seconds anyway.
Any ideas what is going on here?
The error is listed below.
ERRO[0035] error occured, closing region client client="RegionClient{Addr: xxxxx:xxx}" err="client is dead"
Is this normal? After several seconds, the process exited successfully.
Any function like getRegionsInRange() in HTable?
This is most probably not an issue with tsuna / gohbase. Hopefully, one of you can help me with a definitive answer. The following is my code snippet. HBase is in standalone mode.
package main

import (
    "context"
    "log"

    "github.com/tsuna/gohbase"
    "github.com/tsuna/gohbase/hrpc"
)

const metaTableName = "hbase:meta"

var infoFamily = map[string][]string{
    "info": nil,
}

var cFamilies = map[string]map[string]string{
    "cf":  nil,
    "cf2": nil,
}

func main() {
    testTableName := "test1_"
    ac := gohbase.NewAdminClient("127.0.0.1")
    crt := hrpc.NewCreateTable(context.Background(), []byte(testTableName), cFamilies)
    if err := ac.CreateTable(crt); err != nil {
        log.Println(err)
    }
}
When I run the client locally as the server (same server), things are fine. The table "test1_" is created.
root@project-computing-lab-farhad-2:~/hbase-1.2.5# bin/hbase shell
2017-05-11 17:46:40,273 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar 1 00:34:48 CST 2017
hbase(main):001:0> list
TABLE
test1_
1 row(s) in 0.2950 seconds
=> ["test1_"]
hbase(main):002:0> disable "test1_"
0 row(s) in 2.4520 seconds
hbase(main):003:0> drop "test1_"
0 row(s) in 1.2760 seconds
If I run the client from a remote machine (the code is exactly the same as above, except the server IP address is now changed as follows):
ac := gohbase.NewAdminClient("10.145.211.213")
I get a lot of these ZooKeeper failure messages:
2017-05-11 17:59:15,628 INFO [SyncThread:0] server.ZooKeeperServer: Established session 0x15bf8a69357000f with negotiated timeout 30000 for client /10.145.211.185:52456
2017-05-11 17:59:15,630 INFO [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x15bf8a69357000f
2017-05-11 17:59:15,631 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Closed socket connection for client /10.145.211.185:52456 which had sessionid 0x15bf8a69357000f
2017-05-11 17:59:16,146 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /10.145.211.185:52460
2017-05-11 17:59:16,146 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: C
This is a very commonly reported issue on the web. Some folks have suggested reordering the listings in /etc/hosts as follows:
http://stackoverflow.com/questions/7791788/hbase-client-do-not-able-to-connect-with-remote-hbase-server
I have tried all these suggestions but still keep getting the same error. Can anyone give a definitive answer as to whether HBase in standalone mode can support remote clients?
thanks,
Hello,
I'm currently using this library to write Kafka events into HBase. After skimming the code, there are a few questions that I hope you can help clarify for me.
Line 559 in 51c9c64
rpcQueueSize and flushInterval seem to be there to batch up requests. Are there any tips for tweaking these numbers for high throughput? (I'm currently looking at 10k events/second from Kafka, but this number will increase to roughly 200k/second.)
Thanks for your time!
Hi there, just following up on the previous issue that I was having (memory leak). I was lucky enough this time to be able to take a pprof dump with --inuse_objects before the program crashed.
Here's the output of top5 -cum:
$ go tool pprof --inuse_objects aardvark_35de5ef_linux_amd64/event-persister $HOME/pprof/pprof.event-persister.__omitted__:6060.inuse_objects.inuse_space.024.pb.gz 130 ↵
Entering interactive mode (type "help" for commands)
(pprof) top5 -cum
0 of 1932512 total ( 0%)
Dropped 248 nodes (cum <= 9662)
Showing top 5 nodes out of 30 (cum >= 1389497)
flat flat% sum% cum cum%
0 0% 0% 1929781 99.86% runtime.goexit
0 0% 0% 1453748 75.23% <...>/vendor/github.com/tsuna/gohbase.(*client).establishRegion
0 0% 0% 1453748 75.23% <...>/vendor/github.com/tsuna/gohbase.(*client).reestablishRegion
0 0% 0% 1389497 71.90% <...>/vendor/github.com/golang/protobuf/proto.(*Buffer).Unmarshal
0 0% 0% 1389497 71.90% <...>/vendor/github.com/golang/protobuf/proto.(*Buffer).unmarshalType
(pprof)
Here's the output from tracing down the object allocs: https://gist.github.com/Taik/0788d61dc48868c1aa98
If there's anything else you need, please let me know. I'd be willing to spend some cycles fixing this the upcoming few weeks if you don't have capacity.
Thanks!
It looks like this library automatically bundles up multiple simultaneous requests into a single multi-action. Is it possible for a consumer of the library to do batch requests more easily than making multiple requests in many different goroutines?
Hello,
I've been using this library for a while and I've just updated to the latest version. I am having problems with an infinite loop looking up regions. I've tried this with HBase 1.2.4 and 1.3.1, both on a large cluster and on my local machine, with the same results. It works fine simply by recompiling with the older version. Any ideas?
go version go1.8 linux/amd64
hbase
create 'doug','c'
put 'doug','boo','c:hoo',':-('
package main

import (
    "context"
    "log"

    "github.com/tsuna/gohbase"
    "github.com/tsuna/gohbase/hrpc"
)

func main() {
    client := gohbase.NewClient("localhost")
    getRequest, err := hrpc.NewGetStr(context.Background(), "doug", "boo")
    if err != nil {
        log.Printf("%s", err.Error())
    }
    getRsp, err := client.Get(getRequest)
    log.Printf("found results %v", getRsp)
}
On issue #56:
I'm not clear on this point, regarding the "engine" and "charset" args. Should "engine" and "charset" be set to something referencing a Java class, e.g.:
engine_scan = "org.apache.hadoop.hbase.filter.RegexStringComparator.EngineType.JAVA" // "JAVA"
charset = "io.netty.util.CharsetUtil.ISO_8859_1" // "ISO_8859_1" // "ISO-8859-1"
I tried setting all of them, but they return errors:
resp.Next error : HBase Java exception org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.reflect.InvocationTargetException at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1537) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:1052) at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2475) at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2753) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1533) ... 8 more Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: parseFrom called on base Filter, but should be called on derived type at org.apache.hadoop.hbase.filter.Filter.parseFrom(Filter.java:270) ... 12 more
My simple scan with regex:
regex := fmt.Sprintf("^[0-9-a-f]{64}%s", suffix)
comp := filter.NewRegexStringComparator(regex, filter.CASE_INSENSITIVE, charset, engine_scan)
hl := filter.NewCompareFilter(filter.Equal, comp)
opts = append(opts, hrpc.Filters(hl))
sc, err := hrpc.NewScan(ctx, table, opts...)
scanner := client.Scan(sc)
...
What's wrong, and how do I fix it?
My code:
package main

import (
    "context"

    "github.com/tsuna/gohbase"
    "github.com/tsuna/gohbase/hrpc"
)

func main() {
    client := gohbase.NewClient("192.168.1.129")
    get, _ := hrpc.NewGet(context.Background(), []byte("tweet_stream"), []byte("1696956982016-05-30_13:57:03737160875270774784"))
    client.Get(get)
}

It works on Mac, but it shows these messages on my Windows machine:
Connected to 192.168.1.129:2181
Authenticated: id=95989244501426398, timeout=30000
Recv loop terminated: err=EOF
Send loop terminated: err=<nil>
Connected to 192.168.1.129:2181
Authenticated: id=95989244501426399, timeout=30000
Recv loop terminated: err=EOF
Send loop terminated: err=<nil>
Connected to 192.168.1.129:2181
Authenticated: id=95989244501426400, timeout=30000
Recv loop terminated: err=EOF
Send loop terminated: err=<nil>
....
It outputs this info continuously. Can you help me? Thanks!
I'd like to add Admin.proto to get access to the CompactRegion function but that requires several other files.
It looks like HBase is preparing for some changes internally. There is no longer a Master.proto file in hbase-protocol, and there is now a hbase-protocol-shaded directory with overlapping proto files.
Given the changes and possible conflicts, how should I handle adding or updating proto files?
I only see the Put method; can I put multiple rows together in one request?
Hi, I'm trying to use this client, but I'm having some issues. As soon as I try to use it I get the following error:
# command-line-arguments
/home/..../go-1.9.2/pkg/tool/linux_amd64/link: cannot open file /home/..../go-1.9.2/pkg/linux_amd64/github.com/tsuna/gohbase.a: open /home/..../go-1.9.2/pkg/linux_amd64/github.com/tsuna/gohbase.a: no such file or directory
Real paths were obfuscated by .....
My import looks like this:
import (
"github.com/tsuna/gohbase"
)
The package is located directly inside the folder which is set by GOPATH, e.g.
GOPATH: "/home/example/workspaces/go"
Location of package: "/home/example/workspaces/go/github.com/tsuna/gohbase"
Any idea where the problem could be?
Hi, I know how to use Scan(), but I do not understand some things in this function, like this:
for {
    // Make a new Scan RPC for this region
    if rpc != nil {
        // If it's not the first region, we want to start at whatever the
        // last region's StopKey was
        startRow = rpc.GetRegionStop()
    }
    // TODO: would be nicer to clone it in some way
    rpc, err := hrpc.NewScanRange(ctx, table, startRow, stopRow,
        hrpc.Families(families), hrpc.Filters(filters),
        hrpc.TimeRangeUint64(fromTs, toTs),
        hrpc.MaxVersions(maxVerions),
        hrpc.NumberOfRows(numberOfRows))
    if err != nil {
        return nil, err
    }
    res, err := c.sendRPC(rpc)
    if err != nil {
        return nil, err
    }
    scanres = res.(*pb.ScanResponse)
    results = append(results, scanres.Results...)
    localResults := make([]*hrpc.Result, len(results))
    for idx, result := range results {
        localResults[idx] = hrpc.ToLocalResult(result)
    }
    return localResults, nil
    // TODO: The more_results field of the ScanResponse object was always
    // true, so we should figure out if there's a better way to know when
    // to move on to the next region than making an extra request and
    // seeing if there were no results
    for len(scanres.Results) != 0 {
        rpc = hrpc.NewScanFromID(ctx, table, *scanres.ScannerId, rpc.Key())
        res, err = c.sendRPC(rpc)
        if err != nil {
            return nil, err
        }
        scanres = res.(*pb.ScanResponse)
        results = append(results, scanres.Results...)
    }
    rpc = hrpc.NewCloseFromID(ctx, table, *scanres.ScannerId, rpc.Key())
    if err != nil {
        return nil, err
    }
    res, err = c.sendRPC(rpc)
    // Check to see if this region is the last we should scan (either
    // because (1) it's the last region or (3) because its stop_key is
    // greater than or equal to the stop_key of this scanner provided
    // that (2) we're not trying to scan until the end of the table).
    // (1)
    if len(rpc.GetRegionStop()) == 0 ||
        // (2)              (3)
        len(stopRow) != 0 && bytes.Compare(stopRow, rpc.GetRegionStop()) <= 0 {
        // Do we want to be returning a slice of Result objects or should we just
        // put all the Cells into the same Result object?
        localResults := make([]*hrpc.Result, len(results))
        for idx, result := range results {
            localResults[idx] = hrpc.ToLocalResult(result)
        }
        return localResults, nil
    }
}
Code:
package main

import (
    "log"

    "github.com/tsuna/gohbase"
    "github.com/tsuna/gohbase/hrpc"

    "golang.org/x/net/context"
)

func main() {
    admin := gohbase.NewAdminClient("master")
    columns := []string{"phone"}
    table := "table"
    createReq := hrpc.NewCreateTable(context.Background(), []byte(table), columns)
    result, err := admin.CreateTable(createReq)
    if err != nil {
        log.Println("err:", err)
    }
    log.Println("result:", result)
}
I use the above code to create a table in HBase, but it gives me a Java exception:
ym@master:~/gopath/src/test$ go run hbase_createtable.go
2016/05/06 11:15:50 Connected to 192.168.0.116:2181
2016/05/06 11:15:50 Authenticated: id=95836126668718128, timeout=30000
2016/05/06 11:15:50 Recv loop terminated: err=EOF
2016/05/06 11:15:50 Send loop terminated: err=
2016/05/06 11:15:50 err: HBase Java exception java.io.IOException:
java.io.IOException: null
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2159)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NumberFormatException: null
at java.lang.Integer.parseInt(Integer.java:542)
at java.lang.Integer.parseInt(Integer.java:615)
at org.apache.hadoop.hbase.HColumnDescriptor.getMaxVersions(HColumnDescriptor.java:617)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1582)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1452)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:429)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:52195)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117)
... 4 more
2016/05/06 11:15:50 result:
Is it a bug in gohbase or my usage error? Please help me fix this problem.
I am sure the HBase server is working normally, because I can create a table in the shell command line.
Allow a user to send multiple actions in one call.
This requires grouping actions by region (rowKey) and using the MultiRequest, RegionAction, and Action protobuf message types.
I was just running the example code and always get a timeout. I checked the code and it seems to suspend on the code below.
// The region was in the cache, check
// if the region is marked as available
if reg.IsUnavailable() {
    return c.waitOnRegion(rpc, reg)
}
Also pasting my example code here:
log.Printf("host: [%s]\n", *host)
admin := gohbase.NewAdminClient(*host)
log.Printf("Got admin: %+v", admin)
// columns := []string{"cf, cf2"}
var cFamilies = map[string]map[string]string{
    "cf":  nil,
    "cf2": nil,
}
createReq := hrpc.NewCreateTable(context.Background(), []byte("goTest"), cFamilies)
log.Printf("Before Create Table")
err := admin.CreateTable(createReq)
log.Printf("After Create Table")
if err != nil {
    log.Println("err:", err)
}
return
HBase is running on localhost with v.1.0.0. Any ideas?
How can I work with an HBase cluster that uses Kerberos?