[VOL-5486] Upgrade library versions
Change-Id: I8b4e88699e03f44ee13e467867f45ae3f0a63c4b
Signed-off-by: Abhay Kumar <abhay.kumar@radisys.com>
diff --git a/vendor/github.com/go-redis/redis/v8/.golangci.yml b/vendor/github.com/go-redis/redis/v8/.golangci.yml
index 1e8d238..de51455 100644
--- a/vendor/github.com/go-redis/redis/v8/.golangci.yml
+++ b/vendor/github.com/go-redis/redis/v8/.golangci.yml
@@ -2,20 +2,3 @@
concurrency: 8
deadline: 5m
tests: false
-linters:
- enable-all: true
- disable:
- - funlen
- - gochecknoglobals
- - gochecknoinits
- - gocognit
- - goconst
- - godox
- - gosec
- - maligned
- - wsl
- - gomnd
- - goerr113
- - exhaustive
- - nestif
- - nlreturn
diff --git a/vendor/github.com/go-redis/redis/v8/.prettierrc b/vendor/github.com/go-redis/redis/v8/.prettierrc.yml
similarity index 100%
rename from vendor/github.com/go-redis/redis/v8/.prettierrc
rename to vendor/github.com/go-redis/redis/v8/.prettierrc.yml
diff --git a/vendor/github.com/go-redis/redis/v8/.travis.yml b/vendor/github.com/go-redis/redis/v8/.travis.yml
deleted file mode 100644
index 1bf578d..0000000
--- a/vendor/github.com/go-redis/redis/v8/.travis.yml
+++ /dev/null
@@ -1,20 +0,0 @@
-dist: xenial
-language: go
-
-services:
- - redis-server
-
-go:
- - 1.14.x
- - 1.15.x
- - tip
-
-matrix:
- allow_failures:
- - go: tip
-
-go_import_path: github.com/go-redis/redis
-
-before_install:
- - curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s --
- -b $(go env GOPATH)/bin v1.31.0
diff --git a/vendor/github.com/go-redis/redis/v8/CHANGELOG.md b/vendor/github.com/go-redis/redis/v8/CHANGELOG.md
index 8392d54..195e519 100644
--- a/vendor/github.com/go-redis/redis/v8/CHANGELOG.md
+++ b/vendor/github.com/go-redis/redis/v8/CHANGELOG.md
@@ -1,5 +1,177 @@
-# Changelog
+## [8.11.5](https://github.com/go-redis/redis/compare/v8.11.4...v8.11.5) (2022-03-17)
-> :heart: [**Uptrace.dev** - distributed traces, logs, and errors in one place](https://uptrace.dev)
-See https://redis.uptrace.dev/changelog/
+### Bug Fixes
+
+* add missing Expire methods to Cmdable ([17e3b43](https://github.com/go-redis/redis/commit/17e3b43879d516437ada71cf9c0deac6a382ed9a))
+* add whitespace to avoid unlikely collisions ([7f7c181](https://github.com/go-redis/redis/commit/7f7c1817617cfec909efb13d14ad22ef05a6ad4c))
+* example/otel compile error ([#2028](https://github.com/go-redis/redis/issues/2028)) ([187c07c](https://github.com/go-redis/redis/commit/187c07c41bf68dc3ab280bc3a925e960bbef6475))
+* **extra/redisotel:** set span.kind attribute to client ([065b200](https://github.com/go-redis/redis/commit/065b200070b41e6e949710b4f9e01b50ccc60ab2))
+* format ([96f53a0](https://github.com/go-redis/redis/commit/96f53a0159a28affa94beec1543a62234e7f8b32))
+* invalid type assert in stringArg ([de6c131](https://github.com/go-redis/redis/commit/de6c131865b8263400c8491777b295035f2408e4))
+* rename Golang to Go ([#2030](https://github.com/go-redis/redis/issues/2030)) ([b82a2d9](https://github.com/go-redis/redis/commit/b82a2d9d4d2de7b7cbe8fcd4895be62dbcacacbc))
+* set timeout for WAIT command. Fixes [#1963](https://github.com/go-redis/redis/issues/1963) ([333fee1](https://github.com/go-redis/redis/commit/333fee1a8fd98a2fbff1ab187c1b03246a7eb01f))
+* update some argument counts in pre-allocs ([f6974eb](https://github.com/go-redis/redis/commit/f6974ebb5c40a8adf90d2cacab6dc297f4eba4c2))
+
+
+### Features
+
+* Add redis v7's NX, XX, GT, LT expire variants ([e19bbb2](https://github.com/go-redis/redis/commit/e19bbb26e2e395c6e077b48d80d79e99f729a8b8))
+* add support for acl sentinel auth in universal client ([ab0ccc4](https://github.com/go-redis/redis/commit/ab0ccc47413f9b2a6eabc852fed5005a3ee1af6e))
+* add support for COPY command ([#2016](https://github.com/go-redis/redis/issues/2016)) ([730afbc](https://github.com/go-redis/redis/commit/730afbcffb93760e8a36cc06cfe55ab102b693a7))
+* add support for passing extra attributes added to spans ([39faaa1](https://github.com/go-redis/redis/commit/39faaa171523834ba527c9789710c4fde87f5a2e))
+* add support for time.Duration write and scan ([2f1b74e](https://github.com/go-redis/redis/commit/2f1b74e20cdd7719b2aecf0768d3e3ae7c3e781b))
+* **redisotel:** ability to override TracerProvider ([#1998](https://github.com/go-redis/redis/issues/1998)) ([bf8d4aa](https://github.com/go-redis/redis/commit/bf8d4aa60c00366cda2e98c3ddddc8cf68507417))
+* set net.peer.name and net.peer.port in otel example ([69bf454](https://github.com/go-redis/redis/commit/69bf454f706204211cd34835f76b2e8192d3766d))
+
+
+
+## [8.11.4](https://github.com/go-redis/redis/compare/v8.11.3...v8.11.4) (2021-10-04)
+
+
+### Features
+
+* add acl auth support for sentinels ([f66582f](https://github.com/go-redis/redis/commit/f66582f44f3dc3a4705a5260f982043fde4aa634))
+* add Cmd.{String,Int,Float,Bool}Slice helpers and an example ([5d3d293](https://github.com/go-redis/redis/commit/5d3d293cc9c60b90871e2420602001463708ce24))
+* add SetVal method for each command ([168981d](https://github.com/go-redis/redis/commit/168981da2d84ee9e07d15d3e74d738c162e264c4))
+
+
+
+## v8.11
+
+- Removed OpenTelemetry metrics.
+- Added support for more Redis commands and options.
+
+## v8.10
+
+- Removed extra OpenTelemetry spans from go-redis core. Now go-redis instrumentation only adds a
+ single span with a Redis command (instead of 4 spans). There are multiple reasons behind this
+ decision:
+
+ - Traces become smaller and less noisy.
+ - It may be costly to process those 3 extra spans for each query.
+ - go-redis no longer depends on OpenTelemetry.
+
+ Eventually we hope to replace the information that we no longer collect with OpenTelemetry
+ Metrics.
+
+## v8.9
+
+- Changed `PubSub.Channel` to only rely on `Ping` result. You can now use `WithChannelSize`,
+ `WithChannelHealthCheckInterval`, and `WithChannelSendTimeout` to override default settings.
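+
+A minimal sketch of overriding those defaults (assuming an existing `rdb` client and a hypothetical
+`"mychannel"` channel; imports of `time` and `fmt` omitted):
+
+```go
+pubsub := rdb.Subscribe(ctx, "mychannel")
+defer pubsub.Close()
+
+// Override the message buffer size, health check interval, and send timeout.
+ch := pubsub.Channel(
+	redis.WithChannelSize(1000),
+	redis.WithChannelHealthCheckInterval(3*time.Second),
+	redis.WithChannelSendTimeout(time.Minute),
+)
+
+for msg := range ch {
+	fmt.Println(msg.Channel, msg.Payload)
+}
+```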
+
+## v8.8
+
+- To make updating easier, extra modules now have the same version as go-redis does. That means that
+ you need to update your imports:
+
+```
+github.com/go-redis/redis/extra/redisotel -> github.com/go-redis/redis/extra/redisotel/v8
+github.com/go-redis/redis/extra/rediscensus -> github.com/go-redis/redis/extra/rediscensus/v8
+```
+
+## v8.5
+
+- [knadh](https://github.com/knadh) contributed long-awaited ability to scan Redis Hash into a
+ struct:
+
+```go
+err := rdb.HGetAll(ctx, "hash").Scan(&data)
+
+err := rdb.MGet(ctx, "key1", "key2").Scan(&data)
+```
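+
+For example, a minimal sketch with a hypothetical hash and struct (the field and key names are
+assumptions; matching is done via the `redis:"..."` tags):
+
+```go
+type Model struct {
+	Str string `redis:"str"`
+	Int int    `redis:"int"`
+}
+
+// Populate a hash, then scan it back into the struct.
+if err := rdb.HSet(ctx, "hash", "str", "hello", "int", 123).Err(); err != nil {
+	panic(err)
+}
+
+var data Model
+if err := rdb.HGetAll(ctx, "hash").Scan(&data); err != nil {
+	panic(err)
+}
+fmt.Println(data.Str, data.Int) // hello 123
+```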
+
+- Please check [redismock](https://github.com/go-redis/redismock) by
+  [monkey92t](https://github.com/monkey92t) if you are looking for a way to mock the Redis client.
+
+## v8
+
+- All commands require `context.Context` as a first argument, e.g. `rdb.Ping(ctx)`. If you are not
+ using `context.Context` yet, the simplest option is to define global package variable
+ `var ctx = context.TODO()` and use it when `ctx` is required.
+
+- Full support for `context.Context` canceling.
+
+- Added `redis.NewFailoverClusterClient` that supports routing read-only commands to a slave node.
+
+- Added `redisext.OpenTemetryHook` that adds
+ [Redis OpenTelemetry instrumentation](https://redis.uptrace.dev/tracing/).
+
+- Redis slow log support.
+
+- Ring uses Rendezvous Hashing by default, which provides better distribution. You need to move
+  existing keys to a new location or keys will be inaccessible / lost. To use the old hashing scheme:
+
+```go
+import "github.com/golang/groupcache/consistenthash"
+
+ring := redis.NewRing(&redis.RingOptions{
+ NewConsistentHash: func() {
+ return consistenthash.New(100, crc32.ChecksumIEEE)
+ },
+})
+```
+
+- `ClusterOptions.MaxRedirects` default value is changed from 8 to 3.
+- `Options.MaxRetries` default value is changed from 0 to 3.
+
+- `Cluster.ForEachNode` is renamed to `ForEachShard` for consistency with `Ring`.
+
+## v7.3
+
+- New option `Options.Username`, which causes the client to use `AuthACL`. Be aware of this if your
+  connection URL contains a username.
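+
+A minimal sketch of the new option (address and credentials are placeholders):
+
+```go
+rdb := redis.NewClient(&redis.Options{
+	Addr:     "localhost:6379",
+	Username: "myuser", // sent as AUTH username password (Redis 6 ACL)
+	Password: "mypass",
+})
+```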
+
+## v7.2
+
+- Existing `HMSet` is renamed to `HSet` and old deprecated `HMSet` is restored for Redis 3 users.
+
+## v7.1
+
+- Existing `Cmd.String` is renamed to `Cmd.Text`. New `Cmd.String` implements `fmt.Stringer`
+ interface.
+
+## v7
+
+- _Important_. Tx.Pipeline now returns a non-transactional pipeline. Use Tx.TxPipeline for a
+ transactional pipeline.
+- WrapProcess is replaced with more convenient AddHook that has access to context.Context.
+- WithContext can no longer be used to create a shallow copy of the client.
+- New methods ProcessContext, DoContext, and ExecContext.
+- Client respects Context.Deadline when setting net.Conn deadline.
+- Client listens on Context.Done while waiting for a connection from the pool and returns an error
+  when the context is cancelled.
+- Add PubSub.ChannelWithSubscriptions that sends `*Subscription` in addition to `*Message` to allow
+ detecting reconnections.
+- `time.Time` is now marshalled in RFC3339 format. The `rdb.Get("foo").Time()` helper is added to
+  parse the time (see the sketch after this list).
+- `SetLimiter` is removed; use `Options.Limiter` instead.
+- `HMSet` is deprecated as of Redis v4.
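+
+A minimal sketch of the RFC3339 time handling mentioned in the list above (the `"login"` key is a
+placeholder; imports omitted):
+
+```go
+if err := rdb.Set(ctx, "login", time.Now(), 0).Err(); err != nil {
+	panic(err)
+}
+
+// Time() parses the stored RFC3339 value back into a time.Time.
+loginAt, err := rdb.Get(ctx, "login").Time()
+if err != nil {
+	panic(err)
+}
+fmt.Println(loginAt)
+```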
+
+## v6.15
+
+- Cluster and Ring pipelines process commands for each node in its own goroutine.
+
+## 6.14
+
+- Added Options.MinIdleConns.
+- Added Options.MaxConnAge.
+- PoolStats.FreeConns is renamed to PoolStats.IdleConns.
+- Add Client.Do to simplify creating custom commands.
+- Add Cmd.String, Cmd.Int, Cmd.Int64, Cmd.Uint64, Cmd.Float64, and Cmd.Bool helpers.
+- Lower memory usage.
+
+## v6.13
+
+- Ring got new options called `HashReplicas` and `Hash`. It is recommended to set
+  `HashReplicas = 1000` for better key distribution between shards.
+- Cluster client was optimized to use much less memory when reloading cluster state.
+- PubSub.ReceiveMessage is re-worked to not use ReceiveTimeout so it does not lose data when a timeout
+  occurs. In most cases it is recommended to use PubSub.Channel instead.
+- Dialer.KeepAlive is set to 5 minutes by default.
+
+## v6.12
+
+- ClusterClient got a new option called `ClusterSlots` which allows building a cluster of normal Redis
+  servers that don't have cluster mode enabled. See
+ https://godoc.org/github.com/go-redis/redis#example-NewClusterClient--ManualSetup
diff --git a/vendor/github.com/go-redis/redis/v8/Makefile b/vendor/github.com/go-redis/redis/v8/Makefile
index 49e4c96..a4cfe05 100644
--- a/vendor/github.com/go-redis/redis/v8/Makefile
+++ b/vendor/github.com/go-redis/redis/v8/Makefile
@@ -1,10 +1,11 @@
-all: testdeps
+PACKAGE_DIRS := $(shell find . -mindepth 2 -type f -name 'go.mod' -exec dirname {} \; | sort)
+
+test: testdeps
go test ./...
go test ./... -short -race
go test ./... -run=NONE -bench=. -benchmem
env GOOS=linux GOARCH=386 go test ./...
go vet
- golangci-lint run
testdeps: testdata/redis/src/redis-server
@@ -15,7 +16,20 @@
testdata/redis:
mkdir -p $@
- wget -qO- http://download.redis.io/redis-stable.tar.gz | tar xvz --strip-components=1 -C $@
+ wget -qO- https://download.redis.io/releases/redis-6.2.5.tar.gz | tar xvz --strip-components=1 -C $@
testdata/redis/src/redis-server: testdata/redis
cd $< && make all
+
+fmt:
+ gofmt -w -s ./
+ goimports -w -local github.com/go-redis/redis ./
+
+go_mod_tidy:
+ go get -u && go mod tidy
+ set -e; for dir in $(PACKAGE_DIRS); do \
+ echo "go mod tidy in $${dir}"; \
+ (cd "$${dir}" && \
+ go get -u && \
+ go mod tidy); \
+ done
diff --git a/vendor/github.com/go-redis/redis/v8/README.md b/vendor/github.com/go-redis/redis/v8/README.md
index da5d0fb..f3b6a01 100644
--- a/vendor/github.com/go-redis/redis/v8/README.md
+++ b/vendor/github.com/go-redis/redis/v8/README.md
@@ -1,23 +1,32 @@
-# Redis client for Golang
+# Redis client for Go
-[](https://travis-ci.org/go-redis/redis)
+
[](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc)
[](https://redis.uptrace.dev/)
-[](https://discord.gg/rWtp5Aj)
-> :heart: [**Uptrace.dev** - distributed traces, logs, and errors in one place](https://uptrace.dev)
+go-redis is brought to you by :star: [**uptrace/uptrace**](https://github.com/uptrace/uptrace).
+Uptrace is an open-source, blazingly fast **distributed tracing** backend powered by
+OpenTelemetry and ClickHouse. Give it a star as well!
-- Join [Discord](https://discord.gg/rWtp5Aj) to ask questions.
+## Resources
+
+- [Discussions](https://github.com/go-redis/redis/discussions)
- [Documentation](https://redis.uptrace.dev)
- [Reference](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc)
- [Examples](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#pkg-examples)
- [RealWorld example app](https://github.com/uptrace/go-treemux-realworld-example-app)
+Other projects you may like:
+
+- [Bun](https://bun.uptrace.dev) - fast and simple SQL client for PostgreSQL, MySQL, and SQLite.
+- [BunRouter](https://bunrouter.uptrace.dev/) - fast and flexible HTTP router for Go.
+
## Ecosystem
-- [Distributed Locks](https://github.com/bsm/redislock).
-- [Redis Cache](https://github.com/go-redis/cache).
-- [Rate limiting](https://github.com/go-redis/redis_rate).
+- [Redis Mock](https://github.com/go-redis/redismock)
+- [Distributed Locks](https://github.com/bsm/redislock)
+- [Redis Cache](https://github.com/go-redis/cache)
+- [Rate limiting](https://github.com/go-redis/redis_rate)
## Features
@@ -26,16 +35,16 @@
[circuit breaker](https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern) support.
- [Pub/Sub](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#PubSub).
- [Transactions](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#example-Client-TxPipeline).
-- [Pipeline](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#example-Client-Pipeline) and
- [TxPipeline](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#example-Client-TxPipeline).
+- [Pipeline](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#example-Client.Pipeline) and
+ [TxPipeline](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#example-Client.TxPipeline).
- [Scripting](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#Script).
- [Timeouts](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#Options).
- [Redis Sentinel](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#NewFailoverClient).
- [Redis Cluster](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#NewClusterClient).
-- [Cluster of Redis Servers](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#example-NewClusterClient--ManualSetup)
+- [Cluster of Redis Servers](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#example-NewClusterClient-ManualSetup)
without using cluster mode and Redis Sentinel.
- [Ring](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#NewRing).
-- [Instrumentation](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#ex-package--Instrumentation).
+- [Instrumentation](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc#example-package-Instrumentation).
## Installation
@@ -47,7 +56,7 @@
go mod init github.com/my/repo
```
-And then install go-redis (note _v8_ in the import; omitting it is a popular mistake):
+And then install go-redis/v8 (note _v8_ in the import; omitting it is a popular mistake):
```shell
go get github.com/go-redis/redis/v8
@@ -59,6 +68,7 @@
import (
"context"
"github.com/go-redis/redis/v8"
+ "fmt"
)
var ctx = context.Background()
@@ -129,9 +139,37 @@
res, err := rdb.Do(ctx, "set", "key", "value").Result()
```
-## See also
+## Run the test
-- [Fast and flexible HTTP router](https://github.com/vmihailenco/treemux)
-- [Golang PostgreSQL ORM](https://github.com/go-pg/pg)
-- [Golang msgpack](https://github.com/vmihailenco/msgpack)
-- [Golang message task queue](https://github.com/vmihailenco/taskq)
+go-redis will start a redis-server and run the test cases.
+
+The paths of the redis-server binary and the redis config file are defined in `main_test.go`:
+
+```
+var (
+ redisServerBin, _ = filepath.Abs(filepath.Join("testdata", "redis", "src", "redis-server"))
+ redisServerConf, _ = filepath.Abs(filepath.Join("testdata", "redis", "redis.conf"))
+)
+```
+
+For local testing, you can change these variables to point at your local files, or create a soft link
+to your local redis-server in the expected folder and copy the config file to `testdata/redis/`:
+
+```
+ln -s /usr/bin/redis-server ./go-redis/testdata/redis/src
+cp ./go-redis/testdata/redis.conf ./go-redis/testdata/redis/
+```
+
+Lastly, run:
+
+```
+go test
+```
+
+## Contributors
+
+Thanks to all the people who already contributed!
+
+<a href="https://github.com/go-redis/redis/graphs/contributors">
+ <img src="https://contributors-img.web.app/image?repo=go-redis/redis" />
+</a>
diff --git a/vendor/github.com/go-redis/redis/v8/RELEASING.md b/vendor/github.com/go-redis/redis/v8/RELEASING.md
new file mode 100644
index 0000000..1115db4
--- /dev/null
+++ b/vendor/github.com/go-redis/redis/v8/RELEASING.md
@@ -0,0 +1,15 @@
+# Releasing
+
+1. Run the `release.sh` script, which updates versions in go.mod files and pushes a new branch to GitHub:
+
+```shell
+TAG=v1.0.0 ./scripts/release.sh
+```
+
+2. Open a pull request and wait for the build to finish.
+
+3. Merge the pull request and run `tag.sh` to create tags for packages:
+
+```shell
+TAG=v1.0.0 ./scripts/tag.sh
+```
diff --git a/vendor/github.com/go-redis/redis/v8/cluster.go b/vendor/github.com/go-redis/redis/v8/cluster.go
index a6ce5c5..a54f2f3 100644
--- a/vendor/github.com/go-redis/redis/v8/cluster.go
+++ b/vendor/github.com/go-redis/redis/v8/cluster.go
@@ -68,6 +68,9 @@
ReadTimeout time.Duration
WriteTimeout time.Duration
+ // PoolFIFO uses FIFO mode for each node connection pool GET/PUT (default LIFO).
+ PoolFIFO bool
+
// PoolSize applies per cluster node and not for the whole cluster.
PoolSize int
MinIdleConns int
@@ -86,12 +89,12 @@
opt.MaxRedirects = 3
}
- if (opt.RouteByLatency || opt.RouteRandomly) && opt.ClusterSlots == nil {
+ if opt.RouteByLatency || opt.RouteRandomly {
opt.ReadOnly = true
}
if opt.PoolSize == 0 {
- opt.PoolSize = 5 * runtime.NumCPU()
+ opt.PoolSize = 5 * runtime.GOMAXPROCS(0)
}
switch opt.ReadTimeout {
@@ -146,6 +149,7 @@
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
+ PoolFIFO: opt.PoolFIFO,
PoolSize: opt.PoolSize,
MinIdleConns: opt.MinIdleConns,
MaxConnAge: opt.MaxConnAge,
@@ -153,9 +157,13 @@
IdleTimeout: opt.IdleTimeout,
IdleCheckFrequency: disableIdleCheck,
- readOnly: opt.ReadOnly,
-
TLSConfig: opt.TLSConfig,
+ // If ClusterSlots is populated, then we probably have an artificial
+ // cluster whose nodes are not in clustering mode (otherwise there isn't
+ // much use for ClusterSlots config). This means we cannot execute the
+ // READONLY command against that node -- setting readOnly to false in such
+ // situations in the options below will prevent that from happening.
+ readOnly: opt.ReadOnly && opt.ClusterSlots == nil,
}
}
@@ -291,8 +299,9 @@
func (c *clusterNodes) Addrs() ([]string, error) {
var addrs []string
+
c.mu.RLock()
- closed := c.closed
+ closed := c.closed //nolint:ifshort
if !closed {
if len(c.activeAddrs) > 0 {
addrs = c.activeAddrs
@@ -343,7 +352,7 @@
}
}
-func (c *clusterNodes) Get(addr string) (*clusterNode, error) {
+func (c *clusterNodes) GetOrCreate(addr string) (*clusterNode, error) {
node, err := c.get(addr)
if err != nil {
return nil, err
@@ -407,7 +416,7 @@
}
n := rand.Intn(len(addrs))
- return c.Get(addrs[n])
+ return c.GetOrCreate(addrs[n])
}
//------------------------------------------------------------------------------
@@ -465,7 +474,7 @@
addr = replaceLoopbackHost(addr, originHost)
}
- node, err := c.nodes.Get(addr)
+ node, err := c.nodes.GetOrCreate(addr)
if err != nil {
return nil, err
}
@@ -586,8 +595,16 @@
if len(nodes) == 0 {
return c.nodes.Random()
}
- n := rand.Intn(len(nodes))
- return nodes[n], nil
+ if len(nodes) == 1 {
+ return nodes[0], nil
+ }
+ randomNodes := rand.Perm(len(nodes))
+ for _, idx := range randomNodes {
+ if node := nodes[idx]; !node.Failing() {
+ return node, nil
+ }
+ }
+ return nodes[randomNodes[0]], nil
}
func (c *clusterState) slotNodes(slot int) []*clusterNode {
@@ -628,14 +645,14 @@
return state, nil
}
-func (c *clusterStateHolder) LazyReload(ctx context.Context) {
+func (c *clusterStateHolder) LazyReload() {
if !atomic.CompareAndSwapUint32(&c.reloading, 0, 1) {
return
}
go func() {
defer atomic.StoreUint32(&c.reloading, 0)
- _, err := c.Reload(ctx)
+ _, err := c.Reload(context.Background())
if err != nil {
return
}
@@ -645,14 +662,15 @@
func (c *clusterStateHolder) Get(ctx context.Context) (*clusterState, error) {
v := c.state.Load()
- if v != nil {
- state := v.(*clusterState)
- if time.Since(state.createdAt) > 10*time.Second {
- c.LazyReload(ctx)
- }
- return state, nil
+ if v == nil {
+ return c.Reload(ctx)
}
- return c.Reload(ctx)
+
+ state := v.(*clusterState)
+ if time.Since(state.createdAt) > 10*time.Second {
+ c.LazyReload()
+ }
+ return state, nil
}
func (c *clusterStateHolder) ReloadOrGet(ctx context.Context) (*clusterState, error) {
@@ -728,7 +746,7 @@
// ReloadState reloads cluster state. If available it calls ClusterSlots func
// to get cluster slots information.
func (c *ClusterClient) ReloadState(ctx context.Context) {
- c.state.LazyReload(ctx)
+ c.state.LazyReload()
}
// Close closes the cluster client, releasing any open resources.
@@ -789,7 +807,7 @@
}
if isReadOnly := isReadOnlyError(lastErr); isReadOnly || lastErr == pool.ErrClosed {
if isReadOnly {
- c.state.LazyReload(ctx)
+ c.state.LazyReload()
}
node = nil
continue
@@ -806,8 +824,10 @@
var addr string
moved, ask, addr = isMovedError(lastErr)
if moved || ask {
+ c.state.LazyReload()
+
var err error
- node, err = c.nodes.Get(addr)
+ node, err = c.nodes.GetOrCreate(addr)
if err != nil {
return err
}
@@ -1004,7 +1024,7 @@
for _, idx := range rand.Perm(len(addrs)) {
addr := addrs[idx]
- node, err := c.nodes.Get(addr)
+ node, err := c.nodes.GetOrCreate(addr)
if err != nil {
if firstErr == nil {
firstErr = err
@@ -1218,13 +1238,13 @@
return false
}
- node, err := c.nodes.Get(addr)
+ node, err := c.nodes.GetOrCreate(addr)
if err != nil {
return false
}
if moved {
- c.state.LazyReload(ctx)
+ c.state.LazyReload()
failedCmds.Add(node, cmd)
return true
}
@@ -1252,10 +1272,13 @@
}
func (c *ClusterClient) processTxPipeline(ctx context.Context, cmds []Cmder) error {
- return c.hooks.processPipeline(ctx, cmds, c._processTxPipeline)
+ return c.hooks.processTxPipeline(ctx, cmds, c._processTxPipeline)
}
func (c *ClusterClient) _processTxPipeline(ctx context.Context, cmds []Cmder) error {
+ // Trim multi .. exec.
+ cmds = cmds[1 : len(cmds)-1]
+
state, err := c.state.Get(ctx)
if err != nil {
setCmdsErr(cmds, err)
@@ -1291,6 +1314,7 @@
if err == nil {
return
}
+
if attempt < c.opt.MaxRedirects {
if err := c.mapCmdsByNode(ctx, failedCmds, cmds); err != nil {
setCmdsErr(cmds, err)
@@ -1400,13 +1424,13 @@
addr string,
failedCmds *cmdsMap,
) error {
- node, err := c.nodes.Get(addr)
+ node, err := c.nodes.GetOrCreate(addr)
if err != nil {
return err
}
if moved {
- c.state.LazyReload(ctx)
+ c.state.LazyReload()
for _, cmd := range cmds {
failedCmds.Add(node, cmd)
}
@@ -1455,7 +1479,7 @@
moved, ask, addr := isMovedError(err)
if moved || ask {
- node, err = c.nodes.Get(addr)
+ node, err = c.nodes.GetOrCreate(addr)
if err != nil {
return err
}
@@ -1464,7 +1488,7 @@
if isReadOnly := isReadOnlyError(err); isReadOnly || err == pool.ErrClosed {
if isReadOnly {
- c.state.LazyReload(ctx)
+ c.state.LazyReload()
}
node, err = c.slotMasterNode(ctx, slot)
if err != nil {
@@ -1567,7 +1591,7 @@
for _, idx := range perm {
addr := addrs[idx]
- node, err := c.nodes.Get(addr)
+ node, err := c.nodes.GetOrCreate(addr)
if err != nil {
if firstErr == nil {
firstErr = err
@@ -1631,7 +1655,7 @@
return nil, err
}
- if (c.opt.RouteByLatency || c.opt.RouteRandomly) && cmdInfo != nil && cmdInfo.ReadOnly {
+ if c.opt.ReadOnly && cmdInfo != nil && cmdInfo.ReadOnly {
return c.slotReadOnlyNode(state, slot)
}
return state.slotMasterNode(slot)
@@ -1655,6 +1679,35 @@
return state.slotMasterNode(slot)
}
+// SlaveForKey gets a client for a replica node to run any command on it.
+// This is especially useful when you want to run a particular Lua script that
+// contains only read-only commands on a replica.
+// Regular read-only Redis commands carry a flag that marks them as read only and
+// are automatically routed to replica nodes when the ClusterOptions.ReadOnly flag
+// is set to true; scripts are not routed that way, hence this helper.
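+//
+// A minimal usage sketch (key and script are placeholders; error handling elided):
+//
+//	replica, err := clusterClient.SlaveForKey(ctx, "mykey")
+//	if err != nil {
+//		return err
+//	}
+//	res, err := replica.Eval(ctx, "return redis.call('GET', KEYS[1])", []string{"mykey"}).Result()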
+func (c *ClusterClient) SlaveForKey(ctx context.Context, key string) (*Client, error) {
+ state, err := c.state.Get(ctx)
+ if err != nil {
+ return nil, err
+ }
+ slot := hashtag.Slot(key)
+ node, err := c.slotReadOnlyNode(state, slot)
+ if err != nil {
+ return nil, err
+ }
+ return node.Client, err
+}
+
+// MasterForKey returns a client for the master node that owns a particular key.
+func (c *ClusterClient) MasterForKey(ctx context.Context, key string) (*Client, error) {
+ slot := hashtag.Slot(key)
+ node, err := c.slotMasterNode(ctx, slot)
+ if err != nil {
+ return nil, err
+ }
+ return node.Client, err
+}
+
func appendUniqueNode(nodes []*clusterNode, node *clusterNode) []*clusterNode {
for _, n := range nodes {
if n == node {
diff --git a/vendor/github.com/go-redis/redis/v8/cluster_commands.go b/vendor/github.com/go-redis/redis/v8/cluster_commands.go
index 1f0bae0..085bce8 100644
--- a/vendor/github.com/go-redis/redis/v8/cluster_commands.go
+++ b/vendor/github.com/go-redis/redis/v8/cluster_commands.go
@@ -2,24 +2,108 @@
import (
"context"
+ "sync"
"sync/atomic"
)
func (c *ClusterClient) DBSize(ctx context.Context) *IntCmd {
cmd := NewIntCmd(ctx, "dbsize")
- var size int64
- err := c.ForEachMaster(ctx, func(ctx context.Context, master *Client) error {
- n, err := master.DBSize(ctx).Result()
+ _ = c.hooks.process(ctx, cmd, func(ctx context.Context, _ Cmder) error {
+ var size int64
+ err := c.ForEachMaster(ctx, func(ctx context.Context, master *Client) error {
+ n, err := master.DBSize(ctx).Result()
+ if err != nil {
+ return err
+ }
+ atomic.AddInt64(&size, n)
+ return nil
+ })
if err != nil {
- return err
+ cmd.SetErr(err)
+ } else {
+ cmd.val = size
}
- atomic.AddInt64(&size, n)
return nil
})
- if err != nil {
- cmd.SetErr(err)
- return cmd
+ return cmd
+}
+
+func (c *ClusterClient) ScriptLoad(ctx context.Context, script string) *StringCmd {
+ cmd := NewStringCmd(ctx, "script", "load", script)
+ _ = c.hooks.process(ctx, cmd, func(ctx context.Context, _ Cmder) error {
+ mu := &sync.Mutex{}
+ err := c.ForEachShard(ctx, func(ctx context.Context, shard *Client) error {
+ val, err := shard.ScriptLoad(ctx, script).Result()
+ if err != nil {
+ return err
+ }
+
+ mu.Lock()
+ if cmd.Val() == "" {
+ cmd.val = val
+ }
+ mu.Unlock()
+
+ return nil
+ })
+ if err != nil {
+ cmd.SetErr(err)
+ }
+ return nil
+ })
+ return cmd
+}
+
+func (c *ClusterClient) ScriptFlush(ctx context.Context) *StatusCmd {
+ cmd := NewStatusCmd(ctx, "script", "flush")
+ _ = c.hooks.process(ctx, cmd, func(ctx context.Context, _ Cmder) error {
+ err := c.ForEachShard(ctx, func(ctx context.Context, shard *Client) error {
+ return shard.ScriptFlush(ctx).Err()
+ })
+ if err != nil {
+ cmd.SetErr(err)
+ }
+ return nil
+ })
+ return cmd
+}
+
+func (c *ClusterClient) ScriptExists(ctx context.Context, hashes ...string) *BoolSliceCmd {
+ args := make([]interface{}, 2+len(hashes))
+ args[0] = "script"
+ args[1] = "exists"
+ for i, hash := range hashes {
+ args[2+i] = hash
}
- cmd.val = size
+ cmd := NewBoolSliceCmd(ctx, args...)
+
+ result := make([]bool, len(hashes))
+ for i := range result {
+ result[i] = true
+ }
+
+ _ = c.hooks.process(ctx, cmd, func(ctx context.Context, _ Cmder) error {
+ mu := &sync.Mutex{}
+ err := c.ForEachShard(ctx, func(ctx context.Context, shard *Client) error {
+ val, err := shard.ScriptExists(ctx, hashes...).Result()
+ if err != nil {
+ return err
+ }
+
+ mu.Lock()
+ for i, v := range val {
+ result[i] = result[i] && v
+ }
+ mu.Unlock()
+
+ return nil
+ })
+ if err != nil {
+ cmd.SetErr(err)
+ } else {
+ cmd.val = result
+ }
+ return nil
+ })
return cmd
}
diff --git a/vendor/github.com/go-redis/redis/v8/command.go b/vendor/github.com/go-redis/redis/v8/command.go
index 5dd5533..4bb12a8 100644
--- a/vendor/github.com/go-redis/redis/v8/command.go
+++ b/vendor/github.com/go-redis/redis/v8/command.go
@@ -8,6 +8,7 @@
"time"
"github.com/go-redis/redis/v8/internal"
+ "github.com/go-redis/redis/v8/internal/hscan"
"github.com/go-redis/redis/v8/internal/proto"
"github.com/go-redis/redis/v8/internal/util"
)
@@ -19,7 +20,7 @@
String() string
stringArg(int) string
firstKeyPos() int8
- setFirstKeyPos(int8)
+ SetFirstKeyPos(int8)
readTimeout() *time.Duration
readReply(rd *proto.Reader) error
@@ -150,15 +151,21 @@
if pos < 0 || pos >= len(cmd.args) {
return ""
}
- s, _ := cmd.args[pos].(string)
- return s
+ arg := cmd.args[pos]
+ switch v := arg.(type) {
+ case string:
+ return v
+ default:
+ // TODO: consider using appendArg
+ return fmt.Sprint(v)
+ }
}
func (cmd *baseCmd) firstKeyPos() int8 {
return cmd.keyPos
}
-func (cmd *baseCmd) setFirstKeyPos(keyPos int8) {
+func (cmd *baseCmd) SetFirstKeyPos(keyPos int8) {
cmd.keyPos = keyPos
}
@@ -199,6 +206,10 @@
return cmdString(cmd, cmd.val)
}
+func (cmd *Cmd) SetVal(val interface{}) {
+ cmd.val = val
+}
+
func (cmd *Cmd) Val() interface{} {
return cmd.val
}
@@ -211,7 +222,11 @@
if cmd.err != nil {
return "", cmd.err
}
- switch val := cmd.val.(type) {
+ return toString(cmd.val)
+}
+
+func toString(val interface{}) (string, error) {
+ switch val := val.(type) {
case string:
return val, nil
default:
@@ -239,7 +254,11 @@
if cmd.err != nil {
return 0, cmd.err
}
- switch val := cmd.val.(type) {
+ return toInt64(cmd.val)
+}
+
+func toInt64(val interface{}) (int64, error) {
+ switch val := val.(type) {
case int64:
return val, nil
case string:
@@ -254,7 +273,11 @@
if cmd.err != nil {
return 0, cmd.err
}
- switch val := cmd.val.(type) {
+ return toUint64(cmd.val)
+}
+
+func toUint64(val interface{}) (uint64, error) {
+ switch val := val.(type) {
case int64:
return uint64(val), nil
case string:
@@ -269,7 +292,11 @@
if cmd.err != nil {
return 0, cmd.err
}
- switch val := cmd.val.(type) {
+ return toFloat32(cmd.val)
+}
+
+func toFloat32(val interface{}) (float32, error) {
+ switch val := val.(type) {
case int64:
return float32(val), nil
case string:
@@ -288,7 +315,11 @@
if cmd.err != nil {
return 0, cmd.err
}
- switch val := cmd.val.(type) {
+ return toFloat64(cmd.val)
+}
+
+func toFloat64(val interface{}) (float64, error) {
+ switch val := val.(type) {
case int64:
return float64(val), nil
case string:
@@ -303,7 +334,11 @@
if cmd.err != nil {
return false, cmd.err
}
- switch val := cmd.val.(type) {
+ return toBool(cmd.val)
+}
+
+func toBool(val interface{}) (bool, error) {
+ switch val := val.(type) {
case int64:
return val != 0, nil
case string:
@@ -314,6 +349,120 @@
}
}
+func (cmd *Cmd) Slice() ([]interface{}, error) {
+ if cmd.err != nil {
+ return nil, cmd.err
+ }
+ switch val := cmd.val.(type) {
+ case []interface{}:
+ return val, nil
+ default:
+ return nil, fmt.Errorf("redis: unexpected type=%T for Slice", val)
+ }
+}
+
+func (cmd *Cmd) StringSlice() ([]string, error) {
+ slice, err := cmd.Slice()
+ if err != nil {
+ return nil, err
+ }
+
+ ss := make([]string, len(slice))
+ for i, iface := range slice {
+ val, err := toString(iface)
+ if err != nil {
+ return nil, err
+ }
+ ss[i] = val
+ }
+ return ss, nil
+}
+
+func (cmd *Cmd) Int64Slice() ([]int64, error) {
+ slice, err := cmd.Slice()
+ if err != nil {
+ return nil, err
+ }
+
+ nums := make([]int64, len(slice))
+ for i, iface := range slice {
+ val, err := toInt64(iface)
+ if err != nil {
+ return nil, err
+ }
+ nums[i] = val
+ }
+ return nums, nil
+}
+
+func (cmd *Cmd) Uint64Slice() ([]uint64, error) {
+ slice, err := cmd.Slice()
+ if err != nil {
+ return nil, err
+ }
+
+ nums := make([]uint64, len(slice))
+ for i, iface := range slice {
+ val, err := toUint64(iface)
+ if err != nil {
+ return nil, err
+ }
+ nums[i] = val
+ }
+ return nums, nil
+}
+
+func (cmd *Cmd) Float32Slice() ([]float32, error) {
+ slice, err := cmd.Slice()
+ if err != nil {
+ return nil, err
+ }
+
+ floats := make([]float32, len(slice))
+ for i, iface := range slice {
+ val, err := toFloat32(iface)
+ if err != nil {
+ return nil, err
+ }
+ floats[i] = val
+ }
+ return floats, nil
+}
+
+func (cmd *Cmd) Float64Slice() ([]float64, error) {
+ slice, err := cmd.Slice()
+ if err != nil {
+ return nil, err
+ }
+
+ floats := make([]float64, len(slice))
+ for i, iface := range slice {
+ val, err := toFloat64(iface)
+ if err != nil {
+ return nil, err
+ }
+ floats[i] = val
+ }
+ return floats, nil
+}
+
+func (cmd *Cmd) BoolSlice() ([]bool, error) {
+ slice, err := cmd.Slice()
+ if err != nil {
+ return nil, err
+ }
+
+ bools := make([]bool, len(slice))
+ for i, iface := range slice {
+ val, err := toBool(iface)
+ if err != nil {
+ return nil, err
+ }
+ bools[i] = val
+ }
+ return bools, nil
+}
+
func (cmd *Cmd) readReply(rd *proto.Reader) (err error) {
cmd.val, err = rd.ReadReply(sliceParser)
return err
@@ -359,6 +508,10 @@
}
}
+func (cmd *SliceCmd) SetVal(val []interface{}) {
+ cmd.val = val
+}
+
func (cmd *SliceCmd) Val() []interface{} {
return cmd.val
}
@@ -371,6 +524,26 @@
return cmdString(cmd, cmd.val)
}
+// Scan scans the results from the command into a destination struct. The keys
+// are matched to the struct fields by the `redis:"field"` tag.
+func (cmd *SliceCmd) Scan(dst interface{}) error {
+ if cmd.err != nil {
+ return cmd.err
+ }
+
+ // Pass the list of keys and values.
+ // Skip the first two args for: HMGET key
+ var args []interface{}
+ if cmd.args[0] == "hmget" {
+ args = cmd.args[2:]
+ } else {
+ // Otherwise, it's: MGET field field ...
+ args = cmd.args[1:]
+ }
+
+ return hscan.Scan(dst, args, cmd.val)
+}
+
func (cmd *SliceCmd) readReply(rd *proto.Reader) error {
v, err := rd.ReadArrayReply(sliceParser)
if err != nil {
@@ -399,6 +572,10 @@
}
}
+func (cmd *StatusCmd) SetVal(val string) {
+ cmd.val = val
+}
+
func (cmd *StatusCmd) Val() string {
return cmd.val
}
@@ -435,6 +612,10 @@
}
}
+func (cmd *IntCmd) SetVal(val int64) {
+ cmd.val = val
+}
+
func (cmd *IntCmd) Val() int64 {
return cmd.val
}
@@ -475,6 +656,10 @@
}
}
+func (cmd *IntSliceCmd) SetVal(val []int64) {
+ cmd.val = val
+}
+
func (cmd *IntSliceCmd) Val() []int64 {
return cmd.val
}
@@ -523,6 +708,10 @@
}
}
+func (cmd *DurationCmd) SetVal(val time.Duration) {
+ cmd.val = val
+}
+
func (cmd *DurationCmd) Val() time.Duration {
return cmd.val
}
@@ -570,6 +759,10 @@
}
}
+func (cmd *TimeCmd) SetVal(val time.Time) {
+ cmd.val = val
+}
+
func (cmd *TimeCmd) Val() time.Time {
return cmd.val
}
@@ -623,6 +816,10 @@
}
}
+func (cmd *BoolCmd) SetVal(val bool) {
+ cmd.val = val
+}
+
func (cmd *BoolCmd) Val() bool {
return cmd.val
}
@@ -677,6 +874,10 @@
}
}
+func (cmd *StringCmd) SetVal(val string) {
+ cmd.val = val
+}
+
func (cmd *StringCmd) Val() string {
return cmd.val
}
@@ -689,6 +890,13 @@
return util.StringToBytes(cmd.val), cmd.err
}
+func (cmd *StringCmd) Bool() (bool, error) {
+ if cmd.err != nil {
+ return false, cmd.err
+ }
+ return strconv.ParseBool(cmd.val)
+}
+
func (cmd *StringCmd) Int() (int, error) {
if cmd.err != nil {
return 0, cmd.err
@@ -770,6 +978,10 @@
}
}
+func (cmd *FloatCmd) SetVal(val float64) {
+ cmd.val = val
+}
+
func (cmd *FloatCmd) Val() float64 {
return cmd.val
}
@@ -789,6 +1001,59 @@
//------------------------------------------------------------------------------
+type FloatSliceCmd struct {
+ baseCmd
+
+ val []float64
+}
+
+var _ Cmder = (*FloatSliceCmd)(nil)
+
+func NewFloatSliceCmd(ctx context.Context, args ...interface{}) *FloatSliceCmd {
+ return &FloatSliceCmd{
+ baseCmd: baseCmd{
+ ctx: ctx,
+ args: args,
+ },
+ }
+}
+
+func (cmd *FloatSliceCmd) SetVal(val []float64) {
+ cmd.val = val
+}
+
+func (cmd *FloatSliceCmd) Val() []float64 {
+ return cmd.val
+}
+
+func (cmd *FloatSliceCmd) Result() ([]float64, error) {
+ return cmd.val, cmd.err
+}
+
+func (cmd *FloatSliceCmd) String() string {
+ return cmdString(cmd, cmd.val)
+}
+
+func (cmd *FloatSliceCmd) readReply(rd *proto.Reader) error {
+ _, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
+ cmd.val = make([]float64, n)
+ for i := 0; i < len(cmd.val); i++ {
+ switch num, err := rd.ReadFloatReply(); {
+ case err == Nil:
+ cmd.val[i] = 0
+ case err != nil:
+ return nil, err
+ default:
+ cmd.val[i] = num
+ }
+ }
+ return nil, nil
+ })
+ return err
+}
+
+//------------------------------------------------------------------------------
+
type StringSliceCmd struct {
baseCmd
@@ -806,6 +1071,10 @@
}
}
+func (cmd *StringSliceCmd) SetVal(val []string) {
+ cmd.val = val
+}
+
func (cmd *StringSliceCmd) Val() []string {
return cmd.val
}
@@ -859,6 +1128,10 @@
}
}
+func (cmd *BoolSliceCmd) SetVal(val []bool) {
+ cmd.val = val
+}
+
func (cmd *BoolSliceCmd) Val() []bool {
return cmd.val
}
@@ -905,6 +1178,10 @@
}
}
+func (cmd *StringStringMapCmd) SetVal(val map[string]string) {
+ cmd.val = val
+}
+
func (cmd *StringStringMapCmd) Val() map[string]string {
return cmd.val
}
@@ -917,6 +1194,27 @@
return cmdString(cmd, cmd.val)
}
+// Scan scans the results from the map into a destination struct. The map keys
+// are matched to the struct fields by the `redis:"field"` tag.
+func (cmd *StringStringMapCmd) Scan(dest interface{}) error {
+ if cmd.err != nil {
+ return cmd.err
+ }
+
+ strct, err := hscan.Struct(dest)
+ if err != nil {
+ return err
+ }
+
+ for k, v := range cmd.val {
+ if err := strct.Scan(k, v); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
func (cmd *StringStringMapCmd) readReply(rd *proto.Reader) error {
_, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make(map[string]string, n/2)
@@ -957,6 +1255,10 @@
}
}
+func (cmd *StringIntMapCmd) SetVal(val map[string]int64) {
+ cmd.val = val
+}
+
func (cmd *StringIntMapCmd) Val() map[string]int64 {
return cmd.val
}
@@ -1009,6 +1311,10 @@
}
}
+func (cmd *StringStructMapCmd) SetVal(val map[string]struct{}) {
+ cmd.val = val
+}
+
func (cmd *StringStructMapCmd) Val() map[string]struct{} {
return cmd.val
}
@@ -1060,6 +1366,10 @@
}
}
+func (cmd *XMessageSliceCmd) SetVal(val []XMessage) {
+ cmd.val = val
+}
+
func (cmd *XMessageSliceCmd) Val() []XMessage {
return cmd.val
}
@@ -1169,6 +1479,10 @@
}
}
+func (cmd *XStreamSliceCmd) SetVal(val []XStream) {
+ cmd.val = val
+}
+
func (cmd *XStreamSliceCmd) Val() []XStream {
return cmd.val
}
@@ -1241,6 +1555,10 @@
}
}
+func (cmd *XPendingCmd) SetVal(val *XPending) {
+ cmd.val = val
+}
+
func (cmd *XPendingCmd) Val() *XPending {
return cmd.val
}
@@ -1343,6 +1661,10 @@
}
}
+func (cmd *XPendingExtCmd) SetVal(val []XPendingExt) {
+ cmd.val = val
+}
+
func (cmd *XPendingExtCmd) Val() []XPendingExt {
return cmd.val
}
@@ -1403,6 +1725,233 @@
//------------------------------------------------------------------------------
+type XAutoClaimCmd struct {
+ baseCmd
+
+ start string
+ val []XMessage
+}
+
+var _ Cmder = (*XAutoClaimCmd)(nil)
+
+func NewXAutoClaimCmd(ctx context.Context, args ...interface{}) *XAutoClaimCmd {
+ return &XAutoClaimCmd{
+ baseCmd: baseCmd{
+ ctx: ctx,
+ args: args,
+ },
+ }
+}
+
+func (cmd *XAutoClaimCmd) SetVal(val []XMessage, start string) {
+ cmd.val = val
+ cmd.start = start
+}
+
+func (cmd *XAutoClaimCmd) Val() (messages []XMessage, start string) {
+ return cmd.val, cmd.start
+}
+
+func (cmd *XAutoClaimCmd) Result() (messages []XMessage, start string, err error) {
+ return cmd.val, cmd.start, cmd.err
+}
+
+func (cmd *XAutoClaimCmd) String() string {
+ return cmdString(cmd, cmd.val)
+}
+
+func (cmd *XAutoClaimCmd) readReply(rd *proto.Reader) error {
+ _, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
+ if n != 2 {
+ return nil, fmt.Errorf("got %d, wanted 2", n)
+ }
+ var err error
+
+ cmd.start, err = rd.ReadString()
+ if err != nil {
+ return nil, err
+ }
+
+ cmd.val, err = readXMessageSlice(rd)
+ if err != nil {
+ return nil, err
+ }
+
+ return nil, nil
+ })
+ return err
+}
+
+//------------------------------------------------------------------------------
+
+type XAutoClaimJustIDCmd struct {
+ baseCmd
+
+ start string
+ val []string
+}
+
+var _ Cmder = (*XAutoClaimJustIDCmd)(nil)
+
+func NewXAutoClaimJustIDCmd(ctx context.Context, args ...interface{}) *XAutoClaimJustIDCmd {
+ return &XAutoClaimJustIDCmd{
+ baseCmd: baseCmd{
+ ctx: ctx,
+ args: args,
+ },
+ }
+}
+
+func (cmd *XAutoClaimJustIDCmd) SetVal(val []string, start string) {
+ cmd.val = val
+ cmd.start = start
+}
+
+func (cmd *XAutoClaimJustIDCmd) Val() (ids []string, start string) {
+ return cmd.val, cmd.start
+}
+
+func (cmd *XAutoClaimJustIDCmd) Result() (ids []string, start string, err error) {
+ return cmd.val, cmd.start, cmd.err
+}
+
+func (cmd *XAutoClaimJustIDCmd) String() string {
+ return cmdString(cmd, cmd.val)
+}
+
+func (cmd *XAutoClaimJustIDCmd) readReply(rd *proto.Reader) error {
+ _, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
+ if n != 2 {
+ return nil, fmt.Errorf("got %d, wanted 2", n)
+ }
+ var err error
+
+ cmd.start, err = rd.ReadString()
+ if err != nil {
+ return nil, err
+ }
+
+ nn, err := rd.ReadArrayLen()
+ if err != nil {
+ return nil, err
+ }
+
+ cmd.val = make([]string, nn)
+ for i := 0; i < nn; i++ {
+ cmd.val[i], err = rd.ReadString()
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ return nil, nil
+ })
+ return err
+}
+
+//------------------------------------------------------------------------------
+
+type XInfoConsumersCmd struct {
+ baseCmd
+ val []XInfoConsumer
+}
+
+type XInfoConsumer struct {
+ Name string
+ Pending int64
+ Idle int64
+}
+
+var _ Cmder = (*XInfoConsumersCmd)(nil)
+
+func NewXInfoConsumersCmd(ctx context.Context, stream string, group string) *XInfoConsumersCmd {
+ return &XInfoConsumersCmd{
+ baseCmd: baseCmd{
+ ctx: ctx,
+ args: []interface{}{"xinfo", "consumers", stream, group},
+ },
+ }
+}
+
+func (cmd *XInfoConsumersCmd) SetVal(val []XInfoConsumer) {
+ cmd.val = val
+}
+
+func (cmd *XInfoConsumersCmd) Val() []XInfoConsumer {
+ return cmd.val
+}
+
+func (cmd *XInfoConsumersCmd) Result() ([]XInfoConsumer, error) {
+ return cmd.val, cmd.err
+}
+
+func (cmd *XInfoConsumersCmd) String() string {
+ return cmdString(cmd, cmd.val)
+}
+
+func (cmd *XInfoConsumersCmd) readReply(rd *proto.Reader) error {
+ n, err := rd.ReadArrayLen()
+ if err != nil {
+ return err
+ }
+
+ cmd.val = make([]XInfoConsumer, n)
+
+ for i := 0; i < n; i++ {
+ cmd.val[i], err = readXConsumerInfo(rd)
+ if err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func readXConsumerInfo(rd *proto.Reader) (XInfoConsumer, error) {
+ var consumer XInfoConsumer
+
+ n, err := rd.ReadArrayLen()
+ if err != nil {
+ return consumer, err
+ }
+ if n != 6 {
+ return consumer, fmt.Errorf("redis: got %d elements in XINFO CONSUMERS reply, wanted 6", n)
+ }
+
+ for i := 0; i < 3; i++ {
+ key, err := rd.ReadString()
+ if err != nil {
+ return consumer, err
+ }
+
+ val, err := rd.ReadString()
+ if err != nil {
+ return consumer, err
+ }
+
+ switch key {
+ case "name":
+ consumer.Name = val
+ case "pending":
+ consumer.Pending, err = strconv.ParseInt(val, 0, 64)
+ if err != nil {
+ return consumer, err
+ }
+ case "idle":
+ consumer.Idle, err = strconv.ParseInt(val, 0, 64)
+ if err != nil {
+ return consumer, err
+ }
+ default:
+ return consumer, fmt.Errorf("redis: unexpected content %s in XINFO CONSUMERS reply", key)
+ }
+ }
+
+ return consumer, nil
+}
+
+//------------------------------------------------------------------------------
+
type XInfoGroupsCmd struct {
baseCmd
val []XInfoGroup
@@ -1426,6 +1975,10 @@
}
}
+func (cmd *XInfoGroupsCmd) SetVal(val []XInfoGroup) {
+ cmd.val = val
+}
+
func (cmd *XInfoGroupsCmd) Val() []XInfoGroup {
return cmd.val
}
@@ -1529,6 +2082,10 @@
}
}
+func (cmd *XInfoStreamCmd) SetVal(val *XInfoStream) {
+ cmd.val = val
+}
+
func (cmd *XInfoStreamCmd) Val() *XInfoStream {
return cmd.val
}
@@ -1574,8 +2131,14 @@
info.LastGeneratedID, err = rd.ReadString()
case "first-entry":
info.FirstEntry, err = readXMessage(rd)
+ if err == Nil {
+ err = nil
+ }
case "last-entry":
info.LastEntry, err = readXMessage(rd)
+ if err == Nil {
+ err = nil
+ }
default:
return nil, fmt.Errorf("redis: unexpected content %s "+
"in XINFO STREAM reply", key)
@@ -1589,6 +2152,306 @@
//------------------------------------------------------------------------------
+type XInfoStreamFullCmd struct {
+ baseCmd
+ val *XInfoStreamFull
+}
+
+type XInfoStreamFull struct {
+ Length int64
+ RadixTreeKeys int64
+ RadixTreeNodes int64
+ LastGeneratedID string
+ Entries []XMessage
+ Groups []XInfoStreamGroup
+}
+
+type XInfoStreamGroup struct {
+ Name string
+ LastDeliveredID string
+ PelCount int64
+ Pending []XInfoStreamGroupPending
+ Consumers []XInfoStreamConsumer
+}
+
+type XInfoStreamGroupPending struct {
+ ID string
+ Consumer string
+ DeliveryTime time.Time
+ DeliveryCount int64
+}
+
+type XInfoStreamConsumer struct {
+ Name string
+ SeenTime time.Time
+ PelCount int64
+ Pending []XInfoStreamConsumerPending
+}
+
+type XInfoStreamConsumerPending struct {
+ ID string
+ DeliveryTime time.Time
+ DeliveryCount int64
+}
+
+var _ Cmder = (*XInfoStreamFullCmd)(nil)
+
+func NewXInfoStreamFullCmd(ctx context.Context, args ...interface{}) *XInfoStreamFullCmd {
+ return &XInfoStreamFullCmd{
+ baseCmd: baseCmd{
+ ctx: ctx,
+ args: args,
+ },
+ }
+}
+
+func (cmd *XInfoStreamFullCmd) SetVal(val *XInfoStreamFull) {
+ cmd.val = val
+}
+
+func (cmd *XInfoStreamFullCmd) Val() *XInfoStreamFull {
+ return cmd.val
+}
+
+func (cmd *XInfoStreamFullCmd) Result() (*XInfoStreamFull, error) {
+ return cmd.val, cmd.err
+}
+
+func (cmd *XInfoStreamFullCmd) String() string {
+ return cmdString(cmd, cmd.val)
+}
+
+func (cmd *XInfoStreamFullCmd) readReply(rd *proto.Reader) error {
+ n, err := rd.ReadArrayLen()
+ if err != nil {
+ return err
+ }
+ if n != 12 {
+ return fmt.Errorf("redis: got %d elements in XINFO STREAM FULL reply,"+
+ "wanted 12", n)
+ }
+
+ cmd.val = &XInfoStreamFull{}
+
+ for i := 0; i < 6; i++ {
+ key, err := rd.ReadString()
+ if err != nil {
+ return err
+ }
+
+ switch key {
+ case "length":
+ cmd.val.Length, err = rd.ReadIntReply()
+ case "radix-tree-keys":
+ cmd.val.RadixTreeKeys, err = rd.ReadIntReply()
+ case "radix-tree-nodes":
+ cmd.val.RadixTreeNodes, err = rd.ReadIntReply()
+ case "last-generated-id":
+ cmd.val.LastGeneratedID, err = rd.ReadString()
+ case "entries":
+ cmd.val.Entries, err = readXMessageSlice(rd)
+ case "groups":
+ cmd.val.Groups, err = readStreamGroups(rd)
+ default:
+ return fmt.Errorf("redis: unexpected content %s "+
+ "in XINFO STREAM reply", key)
+ }
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func readStreamGroups(rd *proto.Reader) ([]XInfoStreamGroup, error) {
+ n, err := rd.ReadArrayLen()
+ if err != nil {
+ return nil, err
+ }
+ groups := make([]XInfoStreamGroup, 0, n)
+ for i := 0; i < n; i++ {
+ nn, err := rd.ReadArrayLen()
+ if err != nil {
+ return nil, err
+ }
+ if nn != 10 {
+ return nil, fmt.Errorf("redis: got %d elements in XINFO STREAM FULL reply,"+
+ "wanted 10", nn)
+ }
+
+ group := XInfoStreamGroup{}
+
+ for f := 0; f < 5; f++ {
+ key, err := rd.ReadString()
+ if err != nil {
+ return nil, err
+ }
+
+ switch key {
+ case "name":
+ group.Name, err = rd.ReadString()
+ case "last-delivered-id":
+ group.LastDeliveredID, err = rd.ReadString()
+ case "pel-count":
+ group.PelCount, err = rd.ReadIntReply()
+ case "pending":
+ group.Pending, err = readXInfoStreamGroupPending(rd)
+ case "consumers":
+ group.Consumers, err = readXInfoStreamConsumers(rd)
+ default:
+ return nil, fmt.Errorf("redis: unexpected content %s "+
+ "in XINFO STREAM reply", key)
+ }
+
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ groups = append(groups, group)
+ }
+
+ return groups, nil
+}
+
+func readXInfoStreamGroupPending(rd *proto.Reader) ([]XInfoStreamGroupPending, error) {
+ n, err := rd.ReadArrayLen()
+ if err != nil {
+ return nil, err
+ }
+
+ pending := make([]XInfoStreamGroupPending, 0, n)
+
+ for i := 0; i < n; i++ {
+ nn, err := rd.ReadArrayLen()
+ if err != nil {
+ return nil, err
+ }
+ if nn != 4 {
+ return nil, fmt.Errorf("redis: got %d elements in XINFO STREAM FULL reply,"+
+ "wanted 4", nn)
+ }
+
+ p := XInfoStreamGroupPending{}
+
+ p.ID, err = rd.ReadString()
+ if err != nil {
+ return nil, err
+ }
+
+ p.Consumer, err = rd.ReadString()
+ if err != nil {
+ return nil, err
+ }
+
+ delivery, err := rd.ReadIntReply()
+ if err != nil {
+ return nil, err
+ }
+ p.DeliveryTime = time.Unix(delivery/1000, delivery%1000*int64(time.Millisecond))
+
+ p.DeliveryCount, err = rd.ReadIntReply()
+ if err != nil {
+ return nil, err
+ }
+
+ pending = append(pending, p)
+ }
+
+ return pending, nil
+}
+
+func readXInfoStreamConsumers(rd *proto.Reader) ([]XInfoStreamConsumer, error) {
+ n, err := rd.ReadArrayLen()
+ if err != nil {
+ return nil, err
+ }
+
+ consumers := make([]XInfoStreamConsumer, 0, n)
+
+ for i := 0; i < n; i++ {
+ nn, err := rd.ReadArrayLen()
+ if err != nil {
+ return nil, err
+ }
+ if nn != 8 {
+ return nil, fmt.Errorf("redis: got %d elements in XINFO STREAM FULL reply,"+
+ "wanted 8", nn)
+ }
+
+ c := XInfoStreamConsumer{}
+
+ for f := 0; f < 4; f++ {
+ cKey, err := rd.ReadString()
+ if err != nil {
+ return nil, err
+ }
+
+ switch cKey {
+ case "name":
+ c.Name, err = rd.ReadString()
+ case "seen-time":
+ seen, err := rd.ReadIntReply()
+ if err != nil {
+ return nil, err
+ }
+ c.SeenTime = time.Unix(seen/1000, seen%1000*int64(time.Millisecond))
+ case "pel-count":
+ c.PelCount, err = rd.ReadIntReply()
+ case "pending":
+ pendingNumber, err := rd.ReadArrayLen()
+ if err != nil {
+ return nil, err
+ }
+
+ c.Pending = make([]XInfoStreamConsumerPending, 0, pendingNumber)
+
+ for pn := 0; pn < pendingNumber; pn++ {
+ nn, err := rd.ReadArrayLen()
+ if err != nil {
+ return nil, err
+ }
+ if nn != 3 {
+ return nil, fmt.Errorf("redis: got %d elements in XINFO STREAM reply,"+
+ "wanted 3", nn)
+ }
+
+ p := XInfoStreamConsumerPending{}
+
+ p.ID, err = rd.ReadString()
+ if err != nil {
+ return nil, err
+ }
+
+ delivery, err := rd.ReadIntReply()
+ if err != nil {
+ return nil, err
+ }
+ p.DeliveryTime = time.Unix(delivery/1000, delivery%1000*int64(time.Millisecond))
+
+ p.DeliveryCount, err = rd.ReadIntReply()
+ if err != nil {
+ return nil, err
+ }
+
+ c.Pending = append(c.Pending, p)
+ }
+ default:
+ return nil, fmt.Errorf("redis: unexpected content %s "+
+ "in XINFO STREAM reply", cKey)
+ }
+ if err != nil {
+ return nil, err
+ }
+ }
+ consumers = append(consumers, c)
+ }
+
+ return consumers, nil
+}
+
+//------------------------------------------------------------------------------
+
type ZSliceCmd struct {
baseCmd
@@ -1606,6 +2469,10 @@
}
}
+func (cmd *ZSliceCmd) SetVal(val []Z) {
+ cmd.val = val
+}
+
func (cmd *ZSliceCmd) Val() []Z {
return cmd.val
}
@@ -1661,6 +2528,10 @@
}
}
+func (cmd *ZWithKeyCmd) SetVal(val *ZWithKey) {
+ cmd.val = val
+}
+
func (cmd *ZWithKeyCmd) Val() *ZWithKey {
return cmd.val
}
@@ -1725,6 +2596,11 @@
}
}
+func (cmd *ScanCmd) SetVal(page []string, cursor uint64) {
+ cmd.page = page
+ cmd.cursor = cursor
+}
+
func (cmd *ScanCmd) Val() (keys []string, cursor uint64) {
return cmd.page, cmd.cursor
}
@@ -1779,6 +2655,10 @@
}
}
+func (cmd *ClusterSlotsCmd) SetVal(val []ClusterSlot) {
+ cmd.val = val
+}
+
func (cmd *ClusterSlotsCmd) Val() []ClusterSlot {
return cmd.val
}
@@ -1933,6 +2813,10 @@
return args
}
+func (cmd *GeoLocationCmd) SetVal(locations []GeoLocation) {
+ cmd.locations = locations
+}
+
func (cmd *GeoLocationCmd) Val() []GeoLocation {
return cmd.locations
}
@@ -2024,6 +2908,192 @@
//------------------------------------------------------------------------------
+// GeoSearchQuery is used for GEOSearch/GEOSearchStore command query.
+type GeoSearchQuery struct {
+ Member string
+
+ // Latitude and Longitude when using FromLonLat option.
+ Longitude float64
+ Latitude float64
+
+ // Distance and unit when using ByRadius option.
+ // Can use m, km, ft, or mi. Default is km.
+ Radius float64
+ RadiusUnit string
+
+ // Height, width and unit when using ByBox option.
+ // Can be m, km, ft, or mi. Default is km.
+ BoxWidth float64
+ BoxHeight float64
+ BoxUnit string
+
+ // Can be ASC or DESC. Default is no sort order.
+ Sort string
+ Count int
+ CountAny bool
+}
+
+type GeoSearchLocationQuery struct {
+ GeoSearchQuery
+
+ WithCoord bool
+ WithDist bool
+ WithHash bool
+}
+
+type GeoSearchStoreQuery struct {
+ GeoSearchQuery
+
+ // When using the StoreDist option, the command stores the items in a
+ // sorted set populated with their distance from the center of the circle or box,
+ // as a floating-point number, in the same unit specified for that shape.
+ StoreDist bool
+}
+
+func geoSearchLocationArgs(q *GeoSearchLocationQuery, args []interface{}) []interface{} {
+ args = geoSearchArgs(&q.GeoSearchQuery, args)
+
+ if q.WithCoord {
+ args = append(args, "withcoord")
+ }
+ if q.WithDist {
+ args = append(args, "withdist")
+ }
+ if q.WithHash {
+ args = append(args, "withhash")
+ }
+
+ return args
+}
+
+func geoSearchArgs(q *GeoSearchQuery, args []interface{}) []interface{} {
+ if q.Member != "" {
+ args = append(args, "frommember", q.Member)
+ } else {
+ args = append(args, "fromlonlat", q.Longitude, q.Latitude)
+ }
+
+ if q.Radius > 0 {
+ if q.RadiusUnit == "" {
+ q.RadiusUnit = "km"
+ }
+ args = append(args, "byradius", q.Radius, q.RadiusUnit)
+ } else {
+ if q.BoxUnit == "" {
+ q.BoxUnit = "km"
+ }
+ args = append(args, "bybox", q.BoxWidth, q.BoxHeight, q.BoxUnit)
+ }
+
+ if q.Sort != "" {
+ args = append(args, q.Sort)
+ }
+
+ if q.Count > 0 {
+ args = append(args, "count", q.Count)
+ if q.CountAny {
+ args = append(args, "any")
+ }
+ }
+
+ return args
+}
+
+type GeoSearchLocationCmd struct {
+ baseCmd
+
+ opt *GeoSearchLocationQuery
+ val []GeoLocation
+}
+
+var _ Cmder = (*GeoSearchLocationCmd)(nil)
+
+func NewGeoSearchLocationCmd(
+ ctx context.Context, opt *GeoSearchLocationQuery, args ...interface{},
+) *GeoSearchLocationCmd {
+ return &GeoSearchLocationCmd{
+ baseCmd: baseCmd{
+ ctx: ctx,
+ args: args,
+ },
+ opt: opt,
+ }
+}
+
+func (cmd *GeoSearchLocationCmd) SetVal(val []GeoLocation) {
+ cmd.val = val
+}
+
+func (cmd *GeoSearchLocationCmd) Val() []GeoLocation {
+ return cmd.val
+}
+
+func (cmd *GeoSearchLocationCmd) Result() ([]GeoLocation, error) {
+ return cmd.val, cmd.err
+}
+
+func (cmd *GeoSearchLocationCmd) String() string {
+ return cmdString(cmd, cmd.val)
+}
+
+func (cmd *GeoSearchLocationCmd) readReply(rd *proto.Reader) error {
+ n, err := rd.ReadArrayLen()
+ if err != nil {
+ return err
+ }
+
+ cmd.val = make([]GeoLocation, n)
+ for i := 0; i < n; i++ {
+ _, err = rd.ReadArrayLen()
+ if err != nil {
+ return err
+ }
+
+ var loc GeoLocation
+
+ loc.Name, err = rd.ReadString()
+ if err != nil {
+ return err
+ }
+ if cmd.opt.WithDist {
+ loc.Dist, err = rd.ReadFloatReply()
+ if err != nil {
+ return err
+ }
+ }
+ if cmd.opt.WithHash {
+ loc.GeoHash, err = rd.ReadIntReply()
+ if err != nil {
+ return err
+ }
+ }
+ if cmd.opt.WithCoord {
+ nn, err := rd.ReadArrayLen()
+ if err != nil {
+ return err
+ }
+ if nn != 2 {
+ return fmt.Errorf("got %d coordinates, expected 2", nn)
+ }
+
+ loc.Longitude, err = rd.ReadFloatReply()
+ if err != nil {
+ return err
+ }
+ loc.Latitude, err = rd.ReadFloatReply()
+ if err != nil {
+ return err
+ }
+ }
+
+ cmd.val[i] = loc
+ }
+
+ return nil
+}
+
+//------------------------------------------------------------------------------
+
type GeoPos struct {
Longitude, Latitude float64
}
@@ -2045,6 +3115,10 @@
}
}
+func (cmd *GeoPosCmd) SetVal(val []*GeoPos) {
+ cmd.val = val
+}
+
func (cmd *GeoPosCmd) Val() []*GeoPos {
return cmd.val
}
@@ -2122,6 +3196,10 @@
}
}
+func (cmd *CommandsInfoCmd) SetVal(val map[string]*CommandInfo) {
+ cmd.val = val
+}
+
func (cmd *CommandsInfoCmd) Val() map[string]*CommandInfo {
return cmd.val
}
@@ -2309,6 +3387,10 @@
}
}
+func (cmd *SlowLogCmd) SetVal(val []SlowLog) {
+ cmd.val = val
+}
+
func (cmd *SlowLogCmd) Val() []SlowLog {
return cmd.val
}
diff --git a/vendor/github.com/go-redis/redis/v8/commands.go b/vendor/github.com/go-redis/redis/v8/commands.go
index 79698ba..bbfe089 100644
--- a/vendor/github.com/go-redis/redis/v8/commands.go
+++ b/vendor/github.com/go-redis/redis/v8/commands.go
@@ -9,7 +9,8 @@
"github.com/go-redis/redis/v8/internal"
)
-// KeepTTL is an option for Set command to keep key's existing TTL.
+// KeepTTL is a Redis KEEPTTL option to keep the existing TTL. It requires redis-server version >= 6.0;
+// otherwise you will receive an error: (error) ERR syntax error.
// For example:
//
// rdb.Set(ctx, key, value, redis.KeepTTL)
@@ -67,6 +68,11 @@
dst = append(dst, k, v)
}
return dst
+ case map[string]string:
+ for k, v := range arg {
+ dst = append(dst, k, v)
+ }
+ return dst
default:
return append(dst, arg)
}
@@ -90,6 +96,10 @@
Exists(ctx context.Context, keys ...string) *IntCmd
Expire(ctx context.Context, key string, expiration time.Duration) *BoolCmd
ExpireAt(ctx context.Context, key string, tm time.Time) *BoolCmd
+ ExpireNX(ctx context.Context, key string, expiration time.Duration) *BoolCmd
+ ExpireXX(ctx context.Context, key string, expiration time.Duration) *BoolCmd
+ ExpireGT(ctx context.Context, key string, expiration time.Duration) *BoolCmd
+ ExpireLT(ctx context.Context, key string, expiration time.Duration) *BoolCmd
Keys(ctx context.Context, pattern string) *StringSliceCmd
Migrate(ctx context.Context, host, port, key string, db int, timeout time.Duration) *StatusCmd
Move(ctx context.Context, key string, db int) *BoolCmd
@@ -117,6 +127,8 @@
Get(ctx context.Context, key string) *StringCmd
GetRange(ctx context.Context, key string, start, end int64) *StringCmd
GetSet(ctx context.Context, key string, value interface{}) *StringCmd
+ GetEx(ctx context.Context, key string, expiration time.Duration) *StringCmd
+ GetDel(ctx context.Context, key string) *StringCmd
Incr(ctx context.Context, key string) *IntCmd
IncrBy(ctx context.Context, key string, value int64) *IntCmd
IncrByFloat(ctx context.Context, key string, value float64) *FloatCmd
@@ -124,11 +136,14 @@
MSet(ctx context.Context, values ...interface{}) *StatusCmd
MSetNX(ctx context.Context, values ...interface{}) *BoolCmd
Set(ctx context.Context, key string, value interface{}, expiration time.Duration) *StatusCmd
+ SetArgs(ctx context.Context, key string, value interface{}, a SetArgs) *StatusCmd
+ // TODO: rename to SetEx
SetEX(ctx context.Context, key string, value interface{}, expiration time.Duration) *StatusCmd
SetNX(ctx context.Context, key string, value interface{}, expiration time.Duration) *BoolCmd
SetXX(ctx context.Context, key string, value interface{}, expiration time.Duration) *BoolCmd
SetRange(ctx context.Context, key string, offset int64, value string) *IntCmd
StrLen(ctx context.Context, key string) *IntCmd
+ Copy(ctx context.Context, sourceKey string, destKey string, db int, replace bool) *IntCmd
GetBit(ctx context.Context, key string, offset int64) *IntCmd
SetBit(ctx context.Context, key string, offset int64, value int) *IntCmd
@@ -141,6 +156,7 @@
BitField(ctx context.Context, key string, args ...interface{}) *IntSliceCmd
Scan(ctx context.Context, cursor uint64, match string, count int64) *ScanCmd
+ ScanType(ctx context.Context, cursor uint64, match string, count int64, keyType string) *ScanCmd
SScan(ctx context.Context, key string, cursor uint64, match string, count int64) *ScanCmd
HScan(ctx context.Context, key string, cursor uint64, match string, count int64) *ScanCmd
ZScan(ctx context.Context, key string, cursor uint64, match string, count int64) *ScanCmd
@@ -158,6 +174,7 @@
HMSet(ctx context.Context, key string, values ...interface{}) *BoolCmd
HSetNX(ctx context.Context, key, field string, value interface{}) *BoolCmd
HVals(ctx context.Context, key string) *StringSliceCmd
+ HRandField(ctx context.Context, key string, count int, withValues bool) *StringSliceCmd
BLPop(ctx context.Context, timeout time.Duration, keys ...string) *StringSliceCmd
BRPop(ctx context.Context, timeout time.Duration, keys ...string) *StringSliceCmd
@@ -168,6 +185,7 @@
LInsertAfter(ctx context.Context, key string, pivot, value interface{}) *IntCmd
LLen(ctx context.Context, key string) *IntCmd
LPop(ctx context.Context, key string) *StringCmd
+ LPopCount(ctx context.Context, key string, count int) *StringSliceCmd
LPos(ctx context.Context, key string, value string, args LPosArgs) *IntCmd
LPosCount(ctx context.Context, key string, value string, count int64, args LPosArgs) *IntSliceCmd
LPush(ctx context.Context, key string, values ...interface{}) *IntCmd
@@ -177,9 +195,12 @@
LSet(ctx context.Context, key string, index int64, value interface{}) *StatusCmd
LTrim(ctx context.Context, key string, start, stop int64) *StatusCmd
RPop(ctx context.Context, key string) *StringCmd
+ RPopCount(ctx context.Context, key string, count int) *StringSliceCmd
RPopLPush(ctx context.Context, source, destination string) *StringCmd
RPush(ctx context.Context, key string, values ...interface{}) *IntCmd
RPushX(ctx context.Context, key string, values ...interface{}) *IntCmd
+ LMove(ctx context.Context, source, destination, srcpos, destpos string) *StringCmd
+ BLMove(ctx context.Context, source, destination, srcpos, destpos string, timeout time.Duration) *StringCmd
SAdd(ctx context.Context, key string, members ...interface{}) *IntCmd
SCard(ctx context.Context, key string) *IntCmd
@@ -188,6 +209,7 @@
SInter(ctx context.Context, keys ...string) *StringSliceCmd
SInterStore(ctx context.Context, destination string, keys ...string) *IntCmd
SIsMember(ctx context.Context, key string, member interface{}) *BoolCmd
+ SMIsMember(ctx context.Context, key string, members ...interface{}) *BoolSliceCmd
SMembers(ctx context.Context, key string) *StringSliceCmd
SMembersMap(ctx context.Context, key string) *StringStructMapCmd
SMove(ctx context.Context, source, destination string, member interface{}) *BoolCmd
@@ -212,6 +234,7 @@
XGroupCreateMkStream(ctx context.Context, stream, group, start string) *StatusCmd
XGroupSetID(ctx context.Context, stream, group, start string) *StatusCmd
XGroupDestroy(ctx context.Context, stream, group string) *IntCmd
+ XGroupCreateConsumer(ctx context.Context, stream, group, consumer string) *IntCmd
XGroupDelConsumer(ctx context.Context, stream, group, consumer string) *IntCmd
XReadGroup(ctx context.Context, a *XReadGroupArgs) *XStreamSliceCmd
XAck(ctx context.Context, stream, group string, ids ...string) *IntCmd
@@ -219,19 +242,42 @@
XPendingExt(ctx context.Context, a *XPendingExtArgs) *XPendingExtCmd
XClaim(ctx context.Context, a *XClaimArgs) *XMessageSliceCmd
XClaimJustID(ctx context.Context, a *XClaimArgs) *StringSliceCmd
+ XAutoClaim(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimCmd
+ XAutoClaimJustID(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimJustIDCmd
+
+ // TODO: XTrim and XTrimApprox remove in v9.
XTrim(ctx context.Context, key string, maxLen int64) *IntCmd
XTrimApprox(ctx context.Context, key string, maxLen int64) *IntCmd
+ XTrimMaxLen(ctx context.Context, key string, maxLen int64) *IntCmd
+ XTrimMaxLenApprox(ctx context.Context, key string, maxLen, limit int64) *IntCmd
+ XTrimMinID(ctx context.Context, key string, minID string) *IntCmd
+ XTrimMinIDApprox(ctx context.Context, key string, minID string, limit int64) *IntCmd
XInfoGroups(ctx context.Context, key string) *XInfoGroupsCmd
XInfoStream(ctx context.Context, key string) *XInfoStreamCmd
+ XInfoStreamFull(ctx context.Context, key string, count int) *XInfoStreamFullCmd
+ XInfoConsumers(ctx context.Context, key string, group string) *XInfoConsumersCmd
BZPopMax(ctx context.Context, timeout time.Duration, keys ...string) *ZWithKeyCmd
BZPopMin(ctx context.Context, timeout time.Duration, keys ...string) *ZWithKeyCmd
+
+ // TODO: remove
+ // ZAddCh
+ // ZIncr
+ // ZAddNXCh
+ // ZAddXXCh
+ // ZIncrNX
+ // ZIncrXX
+ // in v9.
+ // use ZAddArgs and ZAddArgsIncr.
+
ZAdd(ctx context.Context, key string, members ...*Z) *IntCmd
ZAddNX(ctx context.Context, key string, members ...*Z) *IntCmd
ZAddXX(ctx context.Context, key string, members ...*Z) *IntCmd
ZAddCh(ctx context.Context, key string, members ...*Z) *IntCmd
ZAddNXCh(ctx context.Context, key string, members ...*Z) *IntCmd
ZAddXXCh(ctx context.Context, key string, members ...*Z) *IntCmd
+ ZAddArgs(ctx context.Context, key string, args ZAddArgs) *IntCmd
+ ZAddArgsIncr(ctx context.Context, key string, args ZAddArgs) *FloatCmd
ZIncr(ctx context.Context, key string, member *Z) *FloatCmd
ZIncrNX(ctx context.Context, key string, member *Z) *FloatCmd
ZIncrXX(ctx context.Context, key string, member *Z) *FloatCmd
@@ -239,7 +285,10 @@
ZCount(ctx context.Context, key, min, max string) *IntCmd
ZLexCount(ctx context.Context, key, min, max string) *IntCmd
ZIncrBy(ctx context.Context, key string, increment float64, member string) *FloatCmd
+ ZInter(ctx context.Context, store *ZStore) *StringSliceCmd
+ ZInterWithScores(ctx context.Context, store *ZStore) *ZSliceCmd
ZInterStore(ctx context.Context, destination string, store *ZStore) *IntCmd
+ ZMScore(ctx context.Context, key string, members ...string) *FloatSliceCmd
ZPopMax(ctx context.Context, key string, count ...int64) *ZSliceCmd
ZPopMin(ctx context.Context, key string, count ...int64) *ZSliceCmd
ZRange(ctx context.Context, key string, start, stop int64) *StringSliceCmd
@@ -247,6 +296,9 @@
ZRangeByScore(ctx context.Context, key string, opt *ZRangeBy) *StringSliceCmd
ZRangeByLex(ctx context.Context, key string, opt *ZRangeBy) *StringSliceCmd
ZRangeByScoreWithScores(ctx context.Context, key string, opt *ZRangeBy) *ZSliceCmd
+ ZRangeArgs(ctx context.Context, z ZRangeArgs) *StringSliceCmd
+ ZRangeArgsWithScores(ctx context.Context, z ZRangeArgs) *ZSliceCmd
+ ZRangeStore(ctx context.Context, dst string, z ZRangeArgs) *IntCmd
ZRank(ctx context.Context, key, member string) *IntCmd
ZRem(ctx context.Context, key string, members ...interface{}) *IntCmd
ZRemRangeByRank(ctx context.Context, key string, start, stop int64) *IntCmd
@@ -260,6 +312,12 @@
ZRevRank(ctx context.Context, key, member string) *IntCmd
ZScore(ctx context.Context, key, member string) *FloatCmd
ZUnionStore(ctx context.Context, dest string, store *ZStore) *IntCmd
+ ZUnion(ctx context.Context, store ZStore) *StringSliceCmd
+ ZUnionWithScores(ctx context.Context, store ZStore) *ZSliceCmd
+ ZRandMember(ctx context.Context, key string, count int, withScores bool) *StringSliceCmd
+ ZDiff(ctx context.Context, keys ...string) *StringSliceCmd
+ ZDiffWithScores(ctx context.Context, keys ...string) *ZSliceCmd
+ ZDiffStore(ctx context.Context, destination string, keys ...string) *IntCmd
PFAdd(ctx context.Context, key string, els ...interface{}) *IntCmd
PFCount(ctx context.Context, keys ...string) *IntCmd
@@ -332,6 +390,9 @@
GeoRadiusStore(ctx context.Context, key string, longitude, latitude float64, query *GeoRadiusQuery) *IntCmd
GeoRadiusByMember(ctx context.Context, key, member string, query *GeoRadiusQuery) *GeoLocationCmd
GeoRadiusByMemberStore(ctx context.Context, key, member string, query *GeoRadiusQuery) *IntCmd
+ GeoSearch(ctx context.Context, key string, q *GeoSearchQuery) *StringSliceCmd
+ GeoSearchLocation(ctx context.Context, key string, q *GeoSearchLocationQuery) *GeoSearchLocationCmd
+ GeoSearchStore(ctx context.Context, key, store string, q *GeoSearchStoreQuery) *IntCmd
GeoDist(ctx context.Context, key string, member1, member2, unit string) *FloatCmd
GeoHash(ctx context.Context, key string, members ...string) *StringSliceCmd
}
@@ -364,7 +425,7 @@
return cmd
}
-// Perform an AUTH command, using the given user and pass.
+// AuthACL performs an AUTH command, using the given user and pass.
// Should be used to authenticate the current connection with one of the connections defined in the ACL list
// when connecting to a Redis 6.0 instance, or greater, that is using the Redis ACL system.
func (c statefulCmdable) AuthACL(ctx context.Context, username, password string) *StatusCmd {
@@ -375,6 +436,7 @@
func (c cmdable) Wait(ctx context.Context, numSlaves int, timeout time.Duration) *IntCmd {
cmd := NewIntCmd(ctx, "wait", numSlaves, int(timeout/time.Millisecond))
+ cmd.setReadTimeout(timeout)
_ = c(ctx, cmd)
return cmd
}
@@ -425,7 +487,7 @@
return cmd
}
-func (c cmdable) Quit(ctx context.Context) *StatusCmd {
+func (c cmdable) Quit(_ context.Context) *StatusCmd {
panic("not implemented")
}
@@ -469,7 +531,37 @@
}
func (c cmdable) Expire(ctx context.Context, key string, expiration time.Duration) *BoolCmd {
- cmd := NewBoolCmd(ctx, "expire", key, formatSec(ctx, expiration))
+ return c.expire(ctx, key, expiration, "")
+}
+
+func (c cmdable) ExpireNX(ctx context.Context, key string, expiration time.Duration) *BoolCmd {
+ return c.expire(ctx, key, expiration, "NX")
+}
+
+func (c cmdable) ExpireXX(ctx context.Context, key string, expiration time.Duration) *BoolCmd {
+ return c.expire(ctx, key, expiration, "XX")
+}
+
+func (c cmdable) ExpireGT(ctx context.Context, key string, expiration time.Duration) *BoolCmd {
+ return c.expire(ctx, key, expiration, "GT")
+}
+
+func (c cmdable) ExpireLT(ctx context.Context, key string, expiration time.Duration) *BoolCmd {
+ return c.expire(ctx, key, expiration, "LT")
+}
+
+func (c cmdable) expire(
+ ctx context.Context, key string, expiration time.Duration, mode string,
+) *BoolCmd {
+ args := make([]interface{}, 3, 4)
+ args[0] = "expire"
+ args[1] = key
+ args[2] = formatSec(ctx, expiration)
+ if mode != "" {
+ args = append(args, mode)
+ }
+
+ cmd := NewBoolCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
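A minimal usage sketch for the NX/XX/GT/LT expire variants (assuming an already-initialized *redis.Client named rdb, a context.Context named ctx, and redis-server >= 7.0, which introduced these EXPIRE options):

	// EXPIRE session:42 600 NX - set a TTL only if the key has none yet.
	ok, err := rdb.ExpireNX(ctx, "session:42", 10*time.Minute).Result()
	if err == nil && !ok {
		// EXPIRE session:42 1800 GT - extend the TTL only if the new one is longer.
		rdb.ExpireGT(ctx, "session:42", 30*time.Minute)
	}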
@@ -688,7 +780,7 @@
return cmd
}
-// Redis `GET key` command. It returns redis.Nil error when key does not exist.
+// Get Redis `GET key` command. It returns redis.Nil error when key does not exist.
func (c cmdable) Get(ctx context.Context, key string) *StringCmd {
cmd := NewStringCmd(ctx, "get", key)
_ = c(ctx, cmd)
@@ -707,6 +799,33 @@
return cmd
}
+// GetEx gets the value of key and optionally sets its expiration. An expiration of zero removes the TTL associated with the key (i.e. GETEX key persist).
+// Requires redis-server version >= 6.2.0.
+func (c cmdable) GetEx(ctx context.Context, key string, expiration time.Duration) *StringCmd {
+ args := make([]interface{}, 0, 4)
+ args = append(args, "getex", key)
+ if expiration > 0 {
+ if usePrecise(expiration) {
+ args = append(args, "px", formatMs(ctx, expiration))
+ } else {
+ args = append(args, "ex", formatSec(ctx, expiration))
+ }
+ } else if expiration == 0 {
+ args = append(args, "persist")
+ }
+
+ cmd := NewStringCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+// GetDel requires redis-server version >= 6.2.0.
+func (c cmdable) GetDel(ctx context.Context, key string) *StringCmd {
+ cmd := NewStringCmd(ctx, "getdel", key)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
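A quick sketch of the new GETEX/GETDEL helpers (illustrative only; the key name is invented, rdb and ctx as above, redis-server >= 6.2.0):

	// GETEX token EX 60 - read the value and refresh its TTL in one round trip.
	val, err := rdb.GetEx(ctx, "token", time.Minute).Result()
	// GETEX token PERSIST - a zero expiration removes any existing TTL.
	rdb.GetEx(ctx, "token", 0)
	// GETDEL token - read and delete the key atomically.
	old, _ := rdb.GetDel(ctx, "token").Result()
	_, _, _ = val, err, old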
func (c cmdable) Incr(ctx context.Context, key string) *IntCmd {
cmd := NewIntCmd(ctx, "incr", key)
_ = c(ctx, cmd)
@@ -762,11 +881,12 @@
return cmd
}
-// Redis `SET key value [expiration]` command.
+// Set Redis `SET key value [expiration]` command.
// Use expiration for `SETEX`-like behavior.
//
// Zero expiration means the key has no expiration time.
-// KeepTTL(-1) expiration is a Redis KEEPTTL option to keep existing TTL.
+// KeepTTL is a Redis KEEPTTL option to keep the key's existing TTL. It requires redis-server version >= 6.0;
+// otherwise you will receive an error: (error) ERR syntax error.
func (c cmdable) Set(ctx context.Context, key string, value interface{}, expiration time.Duration) *StatusCmd {
args := make([]interface{}, 3, 5)
args[0] = "set"
@@ -787,17 +907,69 @@
return cmd
}
-// Redis `SETEX key expiration value` command.
+// SetArgs provides arguments for the SetArgs function.
+type SetArgs struct {
+ // Mode can be `NX` or `XX` or empty.
+ Mode string
+
+ // Zero `TTL` or `Expiration` means that the key has no expiration time.
+ TTL time.Duration
+ ExpireAt time.Time
+
+ // When Get is true, the command returns the old value stored at key, or nil when key did not exist.
+ Get bool
+
+ // KeepTTL is a Redis KEEPTTL option to keep the key's existing TTL. It requires redis-server version >= 6.0;
+ // otherwise you will receive an error: (error) ERR syntax error.
+ KeepTTL bool
+}
+
+// SetArgs supports all the options that the SET command supports.
+// It is the alternative to the Set function when you want
+// to have more control over the options.
+func (c cmdable) SetArgs(ctx context.Context, key string, value interface{}, a SetArgs) *StatusCmd {
+ args := []interface{}{"set", key, value}
+
+ if a.KeepTTL {
+ args = append(args, "keepttl")
+ }
+
+ if !a.ExpireAt.IsZero() {
+ args = append(args, "exat", a.ExpireAt.Unix())
+ }
+ if a.TTL > 0 {
+ if usePrecise(a.TTL) {
+ args = append(args, "px", formatMs(ctx, a.TTL))
+ } else {
+ args = append(args, "ex", formatSec(ctx, a.TTL))
+ }
+ }
+
+ if a.Mode != "" {
+ args = append(args, a.Mode)
+ }
+
+ if a.Get {
+ args = append(args, "get")
+ }
+
+ cmd := NewStatusCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
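A hedged usage sketch for SetArgs, mirroring `SET greeting hello EX 60 GET` (key and value are made up; rdb and ctx as above):

	prev, err := rdb.SetArgs(ctx, "greeting", "hello", redis.SetArgs{
		TTL: time.Minute, // EX 60
		Get: true,        // return the previous value; redis.Nil if the key was absent
	}).Result()
	if err == redis.Nil {
		// the key did not exist before this SET
	}
	_ = prev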
+// SetEX Redis `SETEX key expiration value` command.
func (c cmdable) SetEX(ctx context.Context, key string, value interface{}, expiration time.Duration) *StatusCmd {
cmd := NewStatusCmd(ctx, "setex", key, formatSec(ctx, expiration), value)
_ = c(ctx, cmd)
return cmd
}
-// Redis `SET key value [expiration] NX` command.
+// SetNX Redis `SET key value [expiration] NX` command.
//
// Zero expiration means the key has no expiration time.
-// KeepTTL(-1) expiration is a Redis KEEPTTL option to keep existing TTL.
+// KeepTTL is a Redis KEEPTTL option to keep the key's existing TTL. It requires redis-server version >= 6.0;
+// otherwise you will receive an error: (error) ERR syntax error.
func (c cmdable) SetNX(ctx context.Context, key string, value interface{}, expiration time.Duration) *BoolCmd {
var cmd *BoolCmd
switch expiration {
@@ -818,10 +990,11 @@
return cmd
}
-// Redis `SET key value [expiration] XX` command.
+// SetXX Redis `SET key value [expiration] XX` command.
//
// Zero expiration means the key has no expiration time.
-// KeepTTL(-1) expiration is a Redis KEEPTTL option to keep existing TTL.
+// KeepTTL is a Redis KEEPTTL option to keep the key's existing TTL. It requires redis-server version >= 6.0;
+// otherwise you will receive an error: (error) ERR syntax error.
func (c cmdable) SetXX(ctx context.Context, key string, value interface{}, expiration time.Duration) *BoolCmd {
var cmd *BoolCmd
switch expiration {
@@ -853,6 +1026,16 @@
return cmd
}
+func (c cmdable) Copy(ctx context.Context, sourceKey, destKey string, db int, replace bool) *IntCmd {
+ args := []interface{}{"copy", sourceKey, destKey, "DB", db}
+ if replace {
+ args = append(args, "REPLACE")
+ }
+ cmd := NewIntCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
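A small sketch of COPY (redis-server >= 6.2.0); the key names are hypothetical:

	// COPY config:v1 config:v2 DB 0 REPLACE - returns 1 if the value was copied, 0 otherwise.
	copied, err := rdb.Copy(ctx, "config:v1", "config:v2", 0, true).Result()
	if err == nil && copied == 1 {
		// config:v2 now holds a copy of config:v1
	}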
//------------------------------------------------------------------------------
func (c cmdable) GetBit(ctx context.Context, key string, offset int64) *IntCmd {
@@ -965,6 +1148,22 @@
return cmd
}
+func (c cmdable) ScanType(ctx context.Context, cursor uint64, match string, count int64, keyType string) *ScanCmd {
+ args := []interface{}{"scan", cursor}
+ if match != "" {
+ args = append(args, "match", match)
+ }
+ if count > 0 {
+ args = append(args, "count", count)
+ }
+ if keyType != "" {
+ args = append(args, "type", keyType)
+ }
+ cmd := NewScanCmd(ctx, c, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
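ScanType adds the TYPE filter introduced in redis-server 6.0 to SCAN; a minimal cursor loop might look like this (rdb, ctx, and the match pattern assumed):

	var cursor uint64
	for {
		keys, next, err := rdb.ScanType(ctx, cursor, "user:*", 100, "hash").Result()
		if err != nil {
			break
		}
		for _, k := range keys {
			_ = k // only keys holding hashes are returned
		}
		if next == 0 {
			break // a zero cursor means the scan is complete
		}
		cursor = next
	}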
func (c cmdable) SScan(ctx context.Context, key string, cursor uint64, match string, count int64) *ScanCmd {
args := []interface{}{"sscan", key, cursor}
if match != "" {
@@ -1113,6 +1312,21 @@
return cmd
}
+// HRandField requires redis-server version >= 6.2.0.
+func (c cmdable) HRandField(ctx context.Context, key string, count int, withValues bool) *StringSliceCmd {
+ args := make([]interface{}, 0, 4)
+
+ // Although count=0 is meaningless, redis accepts count=0.
+ args = append(args, "hrandfield", key, count)
+ if withValues {
+ args = append(args, "withvalues")
+ }
+
+ cmd := NewStringSliceCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
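A sketch of HRANDFIELD with values (redis-server >= 6.2.0); with withValues=true the reply alternates field, value, field, value (key name invented):

	pairs, err := rdb.HRandField(ctx, "user:42", 2, true).Result()
	if err == nil {
		for i := 0; i+1 < len(pairs); i += 2 {
			field, value := pairs[i], pairs[i+1]
			_, _ = field, value
		}
	}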
//------------------------------------------------------------------------------
func (c cmdable) BLPop(ctx context.Context, timeout time.Duration, keys ...string) *StringSliceCmd {
@@ -1190,6 +1404,12 @@
return cmd
}
+func (c cmdable) LPopCount(ctx context.Context, key string, count int) *StringSliceCmd {
+ cmd := NewStringSliceCmd(ctx, "lpop", key, count)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
type LPosArgs struct {
Rank, MaxLen int64
}
@@ -1283,6 +1503,12 @@
return cmd
}
+func (c cmdable) RPopCount(ctx context.Context, key string, count int) *StringSliceCmd {
+ cmd := NewStringSliceCmd(ctx, "rpop", key, count)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
func (c cmdable) RPopLPush(ctx context.Context, source, destination string) *StringCmd {
cmd := NewStringCmd(ctx, "rpoplpush", source, destination)
_ = c(ctx, cmd)
@@ -1309,6 +1535,21 @@
return cmd
}
+func (c cmdable) LMove(ctx context.Context, source, destination, srcpos, destpos string) *StringCmd {
+ cmd := NewStringCmd(ctx, "lmove", source, destination, srcpos, destpos)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+func (c cmdable) BLMove(
+ ctx context.Context, source, destination, srcpos, destpos string, timeout time.Duration,
+) *StringCmd {
+ cmd := NewStringCmd(ctx, "blmove", source, destination, srcpos, destpos, formatSec(ctx, timeout))
+ cmd.setReadTimeout(timeout)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
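LMOVE/BLMOVE (redis-server >= 6.2.0) generalize RPOPLPUSH; a reliable-queue style sketch with made-up list names:

	// Atomically move the oldest pending job to the processing list.
	job, err := rdb.LMove(ctx, "jobs:pending", "jobs:processing", "RIGHT", "LEFT").Result()
	// Blocking variant: wait up to five seconds for a job to appear.
	job, err = rdb.BLMove(ctx, "jobs:pending", "jobs:processing", "RIGHT", "LEFT", 5*time.Second).Result()
	_, _ = job, err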
//------------------------------------------------------------------------------
func (c cmdable) SAdd(ctx context.Context, key string, members ...interface{}) *IntCmd {
@@ -1379,14 +1620,25 @@
return cmd
}
-// Redis `SMEMBERS key` command output as a slice.
+// SMIsMember Redis `SMISMEMBER key member [member ...]` command.
+func (c cmdable) SMIsMember(ctx context.Context, key string, members ...interface{}) *BoolSliceCmd {
+ args := make([]interface{}, 2, 2+len(members))
+ args[0] = "smismember"
+ args[1] = key
+ args = appendArgs(args, members)
+ cmd := NewBoolSliceCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+// SMembers Redis `SMEMBERS key` command output as a slice.
func (c cmdable) SMembers(ctx context.Context, key string) *StringSliceCmd {
cmd := NewStringSliceCmd(ctx, "smembers", key)
_ = c(ctx, cmd)
return cmd
}
-// Redis `SMEMBERS key` command output as a map.
+// SMembersMap Redis `SMEMBERS key` command output as a map.
func (c cmdable) SMembersMap(ctx context.Context, key string) *StringStructMapCmd {
cmd := NewStringStructMapCmd(ctx, "smembers", key)
_ = c(ctx, cmd)
@@ -1399,28 +1651,28 @@
return cmd
}
-// Redis `SPOP key` command.
+// SPop Redis `SPOP key` command.
func (c cmdable) SPop(ctx context.Context, key string) *StringCmd {
cmd := NewStringCmd(ctx, "spop", key)
_ = c(ctx, cmd)
return cmd
}
-// Redis `SPOP key count` command.
+// SPopN Redis `SPOP key count` command.
func (c cmdable) SPopN(ctx context.Context, key string, count int64) *StringSliceCmd {
cmd := NewStringSliceCmd(ctx, "spop", key, count)
_ = c(ctx, cmd)
return cmd
}
-// Redis `SRANDMEMBER key` command.
+// SRandMember Redis `SRANDMEMBER key` command.
func (c cmdable) SRandMember(ctx context.Context, key string) *StringCmd {
cmd := NewStringCmd(ctx, "srandmember", key)
_ = c(ctx, cmd)
return cmd
}
-// Redis `SRANDMEMBER key count` command.
+// SRandMemberN Redis `SRANDMEMBER key count` command.
func (c cmdable) SRandMemberN(ctx context.Context, key string, count int64) *StringSliceCmd {
cmd := NewStringSliceCmd(ctx, "srandmember", key, count)
_ = c(ctx, cmd)
@@ -1468,22 +1720,50 @@
// - XAddArgs.Values = map[string]interface{}{"key1": "value1", "key2": "value2"}
//
// Note that map will not preserve the order of key-value pairs.
+// MaxLen/MaxLenApprox and MinID are mutually exclusive; only one of them can be used.
type XAddArgs struct {
- Stream string
- MaxLen int64 // MAXLEN N
+ Stream string
+ NoMkStream bool
+ MaxLen int64 // MAXLEN N
+
+ // Deprecated: use MaxLen+Approx, remove in v9.
MaxLenApprox int64 // MAXLEN ~ N
- ID string
- Values interface{}
+
+ MinID string
+ // Approx causes MaxLen and MinID to use "~" matcher (instead of "=").
+ Approx bool
+ Limit int64
+ ID string
+ Values interface{}
}
+// XAdd adds an entry to the stream. Note: the LIMIT option (a.Limit) is affected by a redis-server bug; confirm the fix before relying on it.
+// issue: https://github.com/redis/redis/issues/9046
func (c cmdable) XAdd(ctx context.Context, a *XAddArgs) *StringCmd {
- args := make([]interface{}, 0, 8)
- args = append(args, "xadd")
- args = append(args, a.Stream)
- if a.MaxLen > 0 {
- args = append(args, "maxlen", a.MaxLen)
- } else if a.MaxLenApprox > 0 {
+ args := make([]interface{}, 0, 11)
+ args = append(args, "xadd", a.Stream)
+ if a.NoMkStream {
+ args = append(args, "nomkstream")
+ }
+ switch {
+ case a.MaxLen > 0:
+ if a.Approx {
+ args = append(args, "maxlen", "~", a.MaxLen)
+ } else {
+ args = append(args, "maxlen", a.MaxLen)
+ }
+ case a.MaxLenApprox > 0:
+ // TODO remove in v9.
args = append(args, "maxlen", "~", a.MaxLenApprox)
+ case a.MinID != "":
+ if a.Approx {
+ args = append(args, "minid", "~", a.MinID)
+ } else {
+ args = append(args, "minid", a.MinID)
+ }
+ }
+ if a.Limit > 0 {
+ args = append(args, "limit", a.Limit)
}
if a.ID != "" {
args = append(args, a.ID)
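A sketch of the extended XADD arguments (stream name and fields are invented; NOMKSTREAM and MINID need redis-server >= 6.2):

	// XADD events NOMKSTREAM MAXLEN ~ 10000 * type click user 42
	id, err := rdb.XAdd(ctx, &redis.XAddArgs{
		Stream:     "events",
		NoMkStream: true, // fail instead of creating the stream
		MaxLen:     10000,
		Approx:     true, // use the "~" (approximate) trimming form
		ID:         "*",  // let the server assign the entry ID
		Values:     map[string]interface{}{"type": "click", "user": "42"},
	}).Result()
	_, _ = id, err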
@@ -1544,7 +1824,7 @@
}
func (c cmdable) XRead(ctx context.Context, a *XReadArgs) *XStreamSliceCmd {
- args := make([]interface{}, 0, 5+len(a.Streams))
+ args := make([]interface{}, 0, 6+len(a.Streams))
args = append(args, "xread")
keyPos := int8(1)
@@ -1568,7 +1848,7 @@
if a.Block >= 0 {
cmd.setReadTimeout(a.Block)
}
- cmd.setFirstKeyPos(keyPos)
+ cmd.SetFirstKeyPos(keyPos)
_ = c(ctx, cmd)
return cmd
}
@@ -1604,6 +1884,12 @@
return cmd
}
+func (c cmdable) XGroupCreateConsumer(ctx context.Context, stream, group, consumer string) *IntCmd {
+ cmd := NewIntCmd(ctx, "xgroup", "createconsumer", stream, group, consumer)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
func (c cmdable) XGroupDelConsumer(ctx context.Context, stream, group, consumer string) *IntCmd {
cmd := NewIntCmd(ctx, "xgroup", "delconsumer", stream, group, consumer)
_ = c(ctx, cmd)
@@ -1620,10 +1906,10 @@
}
func (c cmdable) XReadGroup(ctx context.Context, a *XReadGroupArgs) *XStreamSliceCmd {
- args := make([]interface{}, 0, 8+len(a.Streams))
+ args := make([]interface{}, 0, 10+len(a.Streams))
args = append(args, "xreadgroup", "group", a.Group, a.Consumer)
- keyPos := int8(1)
+ keyPos := int8(4)
if a.Count > 0 {
args = append(args, "count", a.Count)
keyPos += 2
@@ -1646,7 +1932,7 @@
if a.Block >= 0 {
cmd.setReadTimeout(a.Block)
}
- cmd.setFirstKeyPos(keyPos)
+ cmd.SetFirstKeyPos(keyPos)
_ = c(ctx, cmd)
return cmd
}
@@ -1670,6 +1956,7 @@
type XPendingExtArgs struct {
Stream string
Group string
+ Idle time.Duration
Start string
End string
Count int64
@@ -1677,8 +1964,12 @@
}
func (c cmdable) XPendingExt(ctx context.Context, a *XPendingExtArgs) *XPendingExtCmd {
- args := make([]interface{}, 0, 7)
- args = append(args, "xpending", a.Stream, a.Group, a.Start, a.End, a.Count)
+ args := make([]interface{}, 0, 9)
+ args = append(args, "xpending", a.Stream, a.Group)
+ if a.Idle != 0 {
+ args = append(args, "idle", formatMs(ctx, a.Idle))
+ }
+ args = append(args, a.Start, a.End, a.Count)
if a.Consumer != "" {
args = append(args, a.Consumer)
}
@@ -1687,6 +1978,39 @@
return cmd
}
+type XAutoClaimArgs struct {
+ Stream string
+ Group string
+ MinIdle time.Duration
+ Start string
+ Count int64
+ Consumer string
+}
+
+func (c cmdable) XAutoClaim(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimCmd {
+ args := xAutoClaimArgs(ctx, a)
+ cmd := NewXAutoClaimCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+func (c cmdable) XAutoClaimJustID(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimJustIDCmd {
+ args := xAutoClaimArgs(ctx, a)
+ args = append(args, "justid")
+ cmd := NewXAutoClaimJustIDCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+func xAutoClaimArgs(ctx context.Context, a *XAutoClaimArgs) []interface{} {
+ args := make([]interface{}, 0, 8)
+ args = append(args, "xautoclaim", a.Stream, a.Group, a.Consumer, formatMs(ctx, a.MinIdle), a.Start)
+ if a.Count > 0 {
+ args = append(args, "count", a.Count)
+ }
+ return args
+}
+
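A sketch of XAUTOCLAIM (redis-server >= 6.2), assuming an existing stream and consumer group with the names shown:

	// Claim up to 10 entries that have been pending for at least a minute.
	msgs, nextStart, err := rdb.XAutoClaim(ctx, &redis.XAutoClaimArgs{
		Stream:   "events",
		Group:    "workers",
		Consumer: "worker-1",
		MinIdle:  time.Minute,
		Start:    "0-0",
		Count:    10,
	}).Result()
	// Pass nextStart back as Start on the next call to continue scanning.
	_, _, _ = msgs, nextStart, err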
type XClaimArgs struct {
Stream string
Group string
@@ -1711,7 +2035,7 @@
}
func xClaimArgs(a *XClaimArgs) []interface{} {
- args := make([]interface{}, 0, 4+len(a.Messages))
+ args := make([]interface{}, 0, 5+len(a.Messages))
args = append(args,
"xclaim",
a.Stream,
@@ -1723,14 +2047,67 @@
return args
}
-func (c cmdable) XTrim(ctx context.Context, key string, maxLen int64) *IntCmd {
- cmd := NewIntCmd(ctx, "xtrim", key, "maxlen", maxLen)
+// xTrim If approx is true, the "~" modifier is added; otherwise the default exact ("=") trimming is used.
+// example:
+// XTRIM key MAXLEN/MINID threshold LIMIT limit.
+// XTRIM key MAXLEN/MINID ~ threshold LIMIT limit.
+// If the redis-server version is lower than 6.2, set limit to 0.
+func (c cmdable) xTrim(
+ ctx context.Context, key, strategy string,
+ approx bool, threshold interface{}, limit int64,
+) *IntCmd {
+ args := make([]interface{}, 0, 7)
+ args = append(args, "xtrim", key, strategy)
+ if approx {
+ args = append(args, "~")
+ }
+ args = append(args, threshold)
+ if limit > 0 {
+ args = append(args, "limit", limit)
+ }
+ cmd := NewIntCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
+// Deprecated: use XTrimMaxLen, remove in v9.
+func (c cmdable) XTrim(ctx context.Context, key string, maxLen int64) *IntCmd {
+ return c.xTrim(ctx, key, "maxlen", false, maxLen, 0)
+}
+
+// Deprecated: use XTrimMaxLenApprox, remove in v9.
func (c cmdable) XTrimApprox(ctx context.Context, key string, maxLen int64) *IntCmd {
- cmd := NewIntCmd(ctx, "xtrim", key, "maxlen", "~", maxLen)
+ return c.xTrim(ctx, key, "maxlen", true, maxLen, 0)
+}
+
+// XTrimMaxLen No `~` rules are used; `limit` cannot be used.
+// cmd: XTRIM key MAXLEN maxLen
+func (c cmdable) XTrimMaxLen(ctx context.Context, key string, maxLen int64) *IntCmd {
+ return c.xTrim(ctx, key, "maxlen", false, maxLen, 0)
+}
+
+// XTrimMaxLenApprox LIMIT is affected by a redis-server bug; confirm the fix before relying on it.
+// issue: https://github.com/redis/redis/issues/9046
+// cmd: XTRIM key MAXLEN ~ maxLen LIMIT limit
+func (c cmdable) XTrimMaxLenApprox(ctx context.Context, key string, maxLen, limit int64) *IntCmd {
+ return c.xTrim(ctx, key, "maxlen", true, maxLen, limit)
+}
+
+// XTrimMinID No `~` rules are used; `limit` cannot be used.
+// cmd: XTRIM key MINID minID
+func (c cmdable) XTrimMinID(ctx context.Context, key string, minID string) *IntCmd {
+ return c.xTrim(ctx, key, "minid", false, minID, 0)
+}
+
+// XTrimMinIDApprox LIMIT is affected by a redis-server bug; confirm the fix before relying on it.
+// issue: https://github.com/redis/redis/issues/9046
+// cmd: XTRIM key MINID ~ minID LIMIT limit
+func (c cmdable) XTrimMinIDApprox(ctx context.Context, key string, minID string, limit int64) *IntCmd {
+ return c.xTrim(ctx, key, "minid", true, minID, limit)
+}
+
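A short sketch contrasting the trimming helpers (stream name invented; MINID and LIMIT need redis-server >= 6.2):

	// XTRIM events MAXLEN 1000 - exact trim.
	rdb.XTrimMaxLen(ctx, "events", 1000)
	// XTRIM events MAXLEN ~ 1000 - approximate trim; limit 0 leaves the server default.
	rdb.XTrimMaxLenApprox(ctx, "events", 1000, 0)
	// XTRIM events MINID ~ 1650000000000-0 - drop entries older than the given ID.
	rdb.XTrimMinIDApprox(ctx, "events", "1650000000000-0", 0)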
+func (c cmdable) XInfoConsumers(ctx context.Context, key string, group string) *XInfoConsumersCmd {
+ cmd := NewXInfoConsumersCmd(ctx, key, group)
_ = c(ctx, cmd)
return cmd
}
@@ -1747,6 +2124,19 @@
return cmd
}
+// XInfoStreamFull XINFO STREAM FULL [COUNT count]
+// redis-server >= 6.0.
+func (c cmdable) XInfoStreamFull(ctx context.Context, key string, count int) *XInfoStreamFullCmd {
+ args := make([]interface{}, 0, 6)
+ args = append(args, "xinfo", "stream", key, "full")
+ if count > 0 {
+ args = append(args, "count", count)
+ }
+ cmd := NewXInfoStreamFullCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
//------------------------------------------------------------------------------
// Z represents sorted set member.
@@ -1761,7 +2151,7 @@
Key string
}
-// ZStore is used as an arg to ZInterStore and ZUnionStore.
+// ZStore is used as an arg to ZInter/ZInterStore and ZUnion/ZUnionStore.
type ZStore struct {
Keys []string
Weights []float64
@@ -1769,7 +2159,34 @@
Aggregate string
}
-// Redis `BZPOPMAX key [key ...] timeout` command.
+func (z ZStore) len() (n int) {
+ n = len(z.Keys)
+ if len(z.Weights) > 0 {
+ n += 1 + len(z.Weights)
+ }
+ if z.Aggregate != "" {
+ n += 2
+ }
+ return n
+}
+
+func (z ZStore) appendArgs(args []interface{}) []interface{} {
+ for _, key := range z.Keys {
+ args = append(args, key)
+ }
+ if len(z.Weights) > 0 {
+ args = append(args, "weights")
+ for _, weights := range z.Weights {
+ args = append(args, weights)
+ }
+ }
+ if z.Aggregate != "" {
+ args = append(args, "aggregate", z.Aggregate)
+ }
+ return args
+}
+
+// BZPopMax Redis `BZPOPMAX key [key ...] timeout` command.
func (c cmdable) BZPopMax(ctx context.Context, timeout time.Duration, keys ...string) *ZWithKeyCmd {
args := make([]interface{}, 1+len(keys)+1)
args[0] = "bzpopmax"
@@ -1783,7 +2200,7 @@
return cmd
}
-// Redis `BZPOPMIN key [key ...] timeout` command.
+// BZPopMin Redis `BZPOPMIN key [key ...] timeout` command.
func (c cmdable) BZPopMin(ctx context.Context, timeout time.Duration, keys ...string) *ZWithKeyCmd {
args := make([]interface{}, 1+len(keys)+1)
args[0] = "bzpopmin"
@@ -1797,96 +2214,169 @@
return cmd
}
-func (c cmdable) zAdd(ctx context.Context, a []interface{}, n int, members ...*Z) *IntCmd {
- for i, m := range members {
- a[n+2*i] = m.Score
- a[n+2*i+1] = m.Member
+// ZAddArgs WARN: The GT, LT and NX options are mutually exclusive.
+type ZAddArgs struct {
+ NX bool
+ XX bool
+ LT bool
+ GT bool
+ Ch bool
+ Members []Z
+}
+
+func (c cmdable) zAddArgs(key string, args ZAddArgs, incr bool) []interface{} {
+ a := make([]interface{}, 0, 6+2*len(args.Members))
+ a = append(a, "zadd", key)
+
+ // The GT, LT and NX options are mutually exclusive.
+ if args.NX {
+ a = append(a, "nx")
+ } else {
+ if args.XX {
+ a = append(a, "xx")
+ }
+ if args.GT {
+ a = append(a, "gt")
+ } else if args.LT {
+ a = append(a, "lt")
+ }
}
- cmd := NewIntCmd(ctx, a...)
+ if args.Ch {
+ a = append(a, "ch")
+ }
+ if incr {
+ a = append(a, "incr")
+ }
+ for _, m := range args.Members {
+ a = append(a, m.Score)
+ a = append(a, m.Member)
+ }
+ return a
+}
+
+func (c cmdable) ZAddArgs(ctx context.Context, key string, args ZAddArgs) *IntCmd {
+ cmd := NewIntCmd(ctx, c.zAddArgs(key, args, false)...)
_ = c(ctx, cmd)
return cmd
}
-// Redis `ZADD key score member [score member ...]` command.
+func (c cmdable) ZAddArgsIncr(ctx context.Context, key string, args ZAddArgs) *FloatCmd {
+ cmd := NewFloatCmd(ctx, c.zAddArgs(key, args, true)...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
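A sketch of ZAddArgs, which exposes the GT/LT (redis-server >= 6.2) and CH modifiers directly; key and member names are made up:

	// ZADD leaderboard GT CH 100 player:1 - only raise the score, count changed members.
	changed, err := rdb.ZAddArgs(ctx, "leaderboard", redis.ZAddArgs{
		GT:      true,
		Ch:      true,
		Members: []redis.Z{{Score: 100, Member: "player:1"}},
	}).Result()
	_, _ = changed, err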
+// TODO: kept for compatibility with the v8 API; will be removed in v9.
+func (c cmdable) zAdd(ctx context.Context, key string, args ZAddArgs, members ...*Z) *IntCmd {
+ args.Members = make([]Z, len(members))
+ for i, m := range members {
+ args.Members[i] = *m
+ }
+ cmd := NewIntCmd(ctx, c.zAddArgs(key, args, false)...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+// ZAdd Redis `ZADD key score member [score member ...]` command.
func (c cmdable) ZAdd(ctx context.Context, key string, members ...*Z) *IntCmd {
- const n = 2
- a := make([]interface{}, n+2*len(members))
- a[0], a[1] = "zadd", key
- return c.zAdd(ctx, a, n, members...)
+ return c.zAdd(ctx, key, ZAddArgs{}, members...)
}
-// Redis `ZADD key NX score member [score member ...]` command.
+// ZAddNX Redis `ZADD key NX score member [score member ...]` command.
func (c cmdable) ZAddNX(ctx context.Context, key string, members ...*Z) *IntCmd {
- const n = 3
- a := make([]interface{}, n+2*len(members))
- a[0], a[1], a[2] = "zadd", key, "nx"
- return c.zAdd(ctx, a, n, members...)
+ return c.zAdd(ctx, key, ZAddArgs{
+ NX: true,
+ }, members...)
}
-// Redis `ZADD key XX score member [score member ...]` command.
+// ZAddXX Redis `ZADD key XX score member [score member ...]` command.
func (c cmdable) ZAddXX(ctx context.Context, key string, members ...*Z) *IntCmd {
- const n = 3
- a := make([]interface{}, n+2*len(members))
- a[0], a[1], a[2] = "zadd", key, "xx"
- return c.zAdd(ctx, a, n, members...)
+ return c.zAdd(ctx, key, ZAddArgs{
+ XX: true,
+ }, members...)
}
-// Redis `ZADD key CH score member [score member ...]` command.
+// ZAddCh Redis `ZADD key CH score member [score member ...]` command.
+// Deprecated: Use
+// client.ZAddArgs(ctx, ZAddArgs{
+// Ch: true,
+// Members: []Z,
+// })
+// remove in v9.
func (c cmdable) ZAddCh(ctx context.Context, key string, members ...*Z) *IntCmd {
- const n = 3
- a := make([]interface{}, n+2*len(members))
- a[0], a[1], a[2] = "zadd", key, "ch"
- return c.zAdd(ctx, a, n, members...)
+ return c.zAdd(ctx, key, ZAddArgs{
+ Ch: true,
+ }, members...)
}
-// Redis `ZADD key NX CH score member [score member ...]` command.
+// ZAddNXCh Redis `ZADD key NX CH score member [score member ...]` command.
+// Deprecated: Use
+// client.ZAddArgs(ctx, ZAddArgs{
+// NX: true,
+// Ch: true,
+// Members: []Z,
+// })
+// remove in v9.
func (c cmdable) ZAddNXCh(ctx context.Context, key string, members ...*Z) *IntCmd {
- const n = 4
- a := make([]interface{}, n+2*len(members))
- a[0], a[1], a[2], a[3] = "zadd", key, "nx", "ch"
- return c.zAdd(ctx, a, n, members...)
+ return c.zAdd(ctx, key, ZAddArgs{
+ NX: true,
+ Ch: true,
+ }, members...)
}
-// Redis `ZADD key XX CH score member [score member ...]` command.
+// ZAddXXCh Redis `ZADD key XX CH score member [score member ...]` command.
+// Deprecated: Use
+// client.ZAddArgs(ctx, ZAddArgs{
+// XX: true,
+// Ch: true,
+// Members: []Z,
+// })
+// remove in v9.
func (c cmdable) ZAddXXCh(ctx context.Context, key string, members ...*Z) *IntCmd {
- const n = 4
- a := make([]interface{}, n+2*len(members))
- a[0], a[1], a[2], a[3] = "zadd", key, "xx", "ch"
- return c.zAdd(ctx, a, n, members...)
+ return c.zAdd(ctx, key, ZAddArgs{
+ XX: true,
+ Ch: true,
+ }, members...)
}
-func (c cmdable) zIncr(ctx context.Context, a []interface{}, n int, members ...*Z) *FloatCmd {
- for i, m := range members {
- a[n+2*i] = m.Score
- a[n+2*i+1] = m.Member
- }
- cmd := NewFloatCmd(ctx, a...)
- _ = c(ctx, cmd)
- return cmd
-}
-
-// Redis `ZADD key INCR score member` command.
+// ZIncr Redis `ZADD key INCR score member` command.
+// Deprecated: Use
+// client.ZAddArgsIncr(ctx, ZAddArgs{
+// Members: []Z,
+// })
+// remove in v9.
func (c cmdable) ZIncr(ctx context.Context, key string, member *Z) *FloatCmd {
- const n = 3
- a := make([]interface{}, n+2)
- a[0], a[1], a[2] = "zadd", key, "incr"
- return c.zIncr(ctx, a, n, member)
+ return c.ZAddArgsIncr(ctx, key, ZAddArgs{
+ Members: []Z{*member},
+ })
}
-// Redis `ZADD key NX INCR score member` command.
+// ZIncrNX Redis `ZADD key NX INCR score member` command.
+// Deprecated: Use
+// client.ZAddArgsIncr(ctx, ZAddArgs{
+// NX: true,
+// Members: []Z,
+// })
+// remove in v9.
func (c cmdable) ZIncrNX(ctx context.Context, key string, member *Z) *FloatCmd {
- const n = 4
- a := make([]interface{}, n+2)
- a[0], a[1], a[2], a[3] = "zadd", key, "incr", "nx"
- return c.zIncr(ctx, a, n, member)
+ return c.ZAddArgsIncr(ctx, key, ZAddArgs{
+ NX: true,
+ Members: []Z{*member},
+ })
}
-// Redis `ZADD key XX INCR score member` command.
+// ZIncrXX Redis `ZADD key XX INCR score member` command.
+// Deprecated: Use
+// client.ZAddArgsIncr(ctx, ZAddArgs{
+// XX: true,
+// Members: []Z,
+// })
+// remove in v9.
func (c cmdable) ZIncrXX(ctx context.Context, key string, member *Z) *FloatCmd {
- const n = 4
- a := make([]interface{}, n+2)
- a[0], a[1], a[2], a[3] = "zadd", key, "incr", "xx"
- return c.zIncr(ctx, a, n, member)
+ return c.ZAddArgsIncr(ctx, key, ZAddArgs{
+ XX: true,
+ Members: []Z{*member},
+ })
}
func (c cmdable) ZCard(ctx context.Context, key string) *IntCmd {
@@ -1914,24 +2404,44 @@
}
func (c cmdable) ZInterStore(ctx context.Context, destination string, store *ZStore) *IntCmd {
- args := make([]interface{}, 3+len(store.Keys))
- args[0] = "zinterstore"
- args[1] = destination
- args[2] = len(store.Keys)
- for i, key := range store.Keys {
- args[3+i] = key
- }
- if len(store.Weights) > 0 {
- args = append(args, "weights")
- for _, weight := range store.Weights {
- args = append(args, weight)
- }
- }
- if store.Aggregate != "" {
- args = append(args, "aggregate", store.Aggregate)
- }
+ args := make([]interface{}, 0, 3+store.len())
+ args = append(args, "zinterstore", destination, len(store.Keys))
+ args = store.appendArgs(args)
cmd := NewIntCmd(ctx, args...)
- cmd.setFirstKeyPos(3)
+ cmd.SetFirstKeyPos(3)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+func (c cmdable) ZInter(ctx context.Context, store *ZStore) *StringSliceCmd {
+ args := make([]interface{}, 0, 2+store.len())
+ args = append(args, "zinter", len(store.Keys))
+ args = store.appendArgs(args)
+ cmd := NewStringSliceCmd(ctx, args...)
+ cmd.SetFirstKeyPos(2)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+func (c cmdable) ZInterWithScores(ctx context.Context, store *ZStore) *ZSliceCmd {
+ args := make([]interface{}, 0, 3+store.len())
+ args = append(args, "zinter", len(store.Keys))
+ args = store.appendArgs(args)
+ args = append(args, "withscores")
+ cmd := NewZSliceCmd(ctx, args...)
+ cmd.SetFirstKeyPos(2)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+func (c cmdable) ZMScore(ctx context.Context, key string, members ...string) *FloatSliceCmd {
+ args := make([]interface{}, 2+len(members))
+ args[0] = "zmscore"
+ args[1] = key
+ for i, member := range members {
+ args[2+i] = member
+ }
+ cmd := NewFloatSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
@@ -1976,29 +2486,112 @@
return cmd
}
-func (c cmdable) zRange(ctx context.Context, key string, start, stop int64, withScores bool) *StringSliceCmd {
- args := []interface{}{
- "zrange",
- key,
- start,
- stop,
+// ZRangeArgs holds all the options of the ZRange command.
+// With redis-server > 6.2.0, ZRANGE can replace the following commands:
+// ZREVRANGE,
+// ZRANGEBYSCORE,
+// ZREVRANGEBYSCORE,
+// ZRANGEBYLEX,
+// ZREVRANGEBYLEX.
+// Please pay attention to your redis-server version.
+//
+// Rev, ByScore, ByLex and Offset+Count options require redis-server 6.2.0 and higher.
+type ZRangeArgs struct {
+ Key string
+
+ // When the ByScore option is provided, the open interval(exclusive) can be set.
+ // By default, the score intervals specified by <Start> and <Stop> are closed (inclusive).
+ // It is similar to the deprecated(6.2.0+) ZRangeByScore command.
+ // For example:
+ // ZRangeArgs{
+ // Key: "example-key",
+ // Start: "(3",
+ // Stop: 8,
+ // ByScore: true,
+ // }
+ // cmd: "ZRange example-key (3 8 ByScore" (3 < score <= 8).
+ //
+ // For the ByLex option, it is similar to the deprecated(6.2.0+) ZRangeByLex command.
+ // You can set the <Start> and <Stop> options as follows:
+ // ZRangeArgs{
+ // Key: "example-key",
+ // Start: "[abc",
+ // Stop: "(def",
+ // ByLex: true,
+ // }
+ // cmd: "ZRange example-key [abc (def ByLex"
+ //
+ // For normal cases (ByScore==false && ByLex==false), <Start> and <Stop> should be set to the index range (int).
+ // You can read the documentation for more information: https://redis.io/commands/zrange
+ Start interface{}
+ Stop interface{}
+
+ // The ByScore and ByLex options are mutually exclusive.
+ ByScore bool
+ ByLex bool
+
+ Rev bool
+
+ // limit offset count.
+ Offset int64
+ Count int64
+}
+
+func (z ZRangeArgs) appendArgs(args []interface{}) []interface{} {
+ // For Rev+ByScore/ByLex, we need to adjust the position of <Start> and <Stop>.
+ if z.Rev && (z.ByScore || z.ByLex) {
+ args = append(args, z.Key, z.Stop, z.Start)
+ } else {
+ args = append(args, z.Key, z.Start, z.Stop)
}
- if withScores {
- args = append(args, "withscores")
+
+ if z.ByScore {
+ args = append(args, "byscore")
+ } else if z.ByLex {
+ args = append(args, "bylex")
}
+ if z.Rev {
+ args = append(args, "rev")
+ }
+ if z.Offset != 0 || z.Count != 0 {
+ args = append(args, "limit", z.Offset, z.Count)
+ }
+ return args
+}
+
+func (c cmdable) ZRangeArgs(ctx context.Context, z ZRangeArgs) *StringSliceCmd {
+ args := make([]interface{}, 0, 9)
+ args = append(args, "zrange")
+ args = z.appendArgs(args)
cmd := NewStringSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
+func (c cmdable) ZRangeArgsWithScores(ctx context.Context, z ZRangeArgs) *ZSliceCmd {
+ args := make([]interface{}, 0, 10)
+ args = append(args, "zrange")
+ args = z.appendArgs(args)
+ args = append(args, "withscores")
+ cmd := NewZSliceCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
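A sketch of ZRangeArgs in ByScore mode, corresponding to `ZRANGE scores (3 8 BYSCORE LIMIT 0 5` (redis-server >= 6.2; the key name is hypothetical):

	members, err := rdb.ZRangeArgs(ctx, redis.ZRangeArgs{
		Key:     "scores",
		Start:   "(3", // exclusive lower bound
		Stop:    8,    // inclusive upper bound
		ByScore: true,
		Offset:  0,
		Count:   5,
	}).Result()
	_, _ = members, err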
func (c cmdable) ZRange(ctx context.Context, key string, start, stop int64) *StringSliceCmd {
- return c.zRange(ctx, key, start, stop, false)
+ return c.ZRangeArgs(ctx, ZRangeArgs{
+ Key: key,
+ Start: start,
+ Stop: stop,
+ })
}
func (c cmdable) ZRangeWithScores(ctx context.Context, key string, start, stop int64) *ZSliceCmd {
- cmd := NewZSliceCmd(ctx, "zrange", key, start, stop, "withscores")
- _ = c(ctx, cmd)
- return cmd
+ return c.ZRangeArgsWithScores(ctx, ZRangeArgs{
+ Key: key,
+ Start: start,
+ Stop: stop,
+ })
}
type ZRangeBy struct {
@@ -2047,6 +2640,15 @@
return cmd
}
+func (c cmdable) ZRangeStore(ctx context.Context, dst string, z ZRangeArgs) *IntCmd {
+ args := make([]interface{}, 0, 10)
+ args = append(args, "zrangestore", dst)
+ args = z.appendArgs(args)
+ cmd := NewIntCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
func (c cmdable) ZRank(ctx context.Context, key, member string) *IntCmd {
cmd := NewIntCmd(ctx, "zrank", key, member)
_ = c(ctx, cmd)
@@ -2149,26 +2751,91 @@
return cmd
}
+func (c cmdable) ZUnion(ctx context.Context, store ZStore) *StringSliceCmd {
+ args := make([]interface{}, 0, 2+store.len())
+ args = append(args, "zunion", len(store.Keys))
+ args = store.appendArgs(args)
+ cmd := NewStringSliceCmd(ctx, args...)
+ cmd.SetFirstKeyPos(2)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+func (c cmdable) ZUnionWithScores(ctx context.Context, store ZStore) *ZSliceCmd {
+ args := make([]interface{}, 0, 3+store.len())
+ args = append(args, "zunion", len(store.Keys))
+ args = store.appendArgs(args)
+ args = append(args, "withscores")
+ cmd := NewZSliceCmd(ctx, args...)
+ cmd.SetFirstKeyPos(2)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
func (c cmdable) ZUnionStore(ctx context.Context, dest string, store *ZStore) *IntCmd {
- args := make([]interface{}, 3+len(store.Keys))
- args[0] = "zunionstore"
- args[1] = dest
- args[2] = len(store.Keys)
- for i, key := range store.Keys {
- args[3+i] = key
- }
- if len(store.Weights) > 0 {
- args = append(args, "weights")
- for _, weight := range store.Weights {
- args = append(args, weight)
- }
- }
- if store.Aggregate != "" {
- args = append(args, "aggregate", store.Aggregate)
+ args := make([]interface{}, 0, 3+store.len())
+ args = append(args, "zunionstore", dest, len(store.Keys))
+ args = store.appendArgs(args)
+ cmd := NewIntCmd(ctx, args...)
+ cmd.SetFirstKeyPos(3)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+// ZRandMember requires redis-server version >= 6.2.0.
+func (c cmdable) ZRandMember(ctx context.Context, key string, count int, withScores bool) *StringSliceCmd {
+ args := make([]interface{}, 0, 4)
+
+ // Although count=0 is meaningless, redis accepts count=0.
+ args = append(args, "zrandmember", key, count)
+ if withScores {
+ args = append(args, "withscores")
}
+ cmd := NewStringSliceCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+// ZDiff requires redis-server version >= 6.2.0.
+func (c cmdable) ZDiff(ctx context.Context, keys ...string) *StringSliceCmd {
+ args := make([]interface{}, 2+len(keys))
+ args[0] = "zdiff"
+ args[1] = len(keys)
+ for i, key := range keys {
+ args[i+2] = key
+ }
+
+ cmd := NewStringSliceCmd(ctx, args...)
+ cmd.SetFirstKeyPos(2)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+// ZDiffWithScores requires redis-server version >= 6.2.0.
+func (c cmdable) ZDiffWithScores(ctx context.Context, keys ...string) *ZSliceCmd {
+ args := make([]interface{}, 3+len(keys))
+ args[0] = "zdiff"
+ args[1] = len(keys)
+ for i, key := range keys {
+ args[i+2] = key
+ }
+ args[len(keys)+2] = "withscores"
+
+ cmd := NewZSliceCmd(ctx, args...)
+ cmd.SetFirstKeyPos(2)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+// ZDiffStore requires redis-server version >= 6.2.0.
+func (c cmdable) ZDiffStore(ctx context.Context, destination string, keys ...string) *IntCmd {
+ args := make([]interface{}, 0, 3+len(keys))
+ args = append(args, "zdiffstore", destination, len(keys))
+ for _, key := range keys {
+ args = append(args, key)
+ }
cmd := NewIntCmd(ctx, args...)
- cmd.setFirstKeyPos(3)
_ = c(ctx, cmd)
return cmd
}
@@ -2395,7 +3062,7 @@
return cmd
}
-func (c cmdable) Sync(ctx context.Context) {
+func (c cmdable) Sync(_ context.Context) {
panic("not implemented")
}
@@ -2432,6 +3099,7 @@
args = append(args, "SAMPLES", samples[0])
}
cmd := NewIntCmd(ctx, args...)
+ cmd.SetFirstKeyPos(2)
_ = c(ctx, cmd)
return cmd
}
@@ -2448,6 +3116,7 @@
}
cmdArgs = appendArgs(cmdArgs, args)
cmd := NewCmd(ctx, cmdArgs...)
+ cmd.SetFirstKeyPos(3)
_ = c(ctx, cmd)
return cmd
}
@@ -2462,6 +3131,7 @@
}
cmdArgs = appendArgs(cmdArgs, args)
cmd := NewCmd(ctx, cmdArgs...)
+ cmd.SetFirstKeyPos(3)
_ = c(ctx, cmd)
return cmd
}
@@ -2710,7 +3380,7 @@
return cmd
}
-// GeoRadius is a read-only GEORADIUSBYMEMBER_RO command.
+// GeoRadiusByMember is a read-only GEORADIUSBYMEMBER_RO command.
func (c cmdable) GeoRadiusByMember(
ctx context.Context, key, member string, query *GeoRadiusQuery,
) *GeoLocationCmd {
@@ -2737,6 +3407,38 @@
return cmd
}
+func (c cmdable) GeoSearch(ctx context.Context, key string, q *GeoSearchQuery) *StringSliceCmd {
+ args := make([]interface{}, 0, 13)
+ args = append(args, "geosearch", key)
+ args = geoSearchArgs(q, args)
+ cmd := NewStringSliceCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+func (c cmdable) GeoSearchLocation(
+ ctx context.Context, key string, q *GeoSearchLocationQuery,
+) *GeoSearchLocationCmd {
+ args := make([]interface{}, 0, 16)
+ args = append(args, "geosearch", key)
+ args = geoSearchLocationArgs(q, args)
+ cmd := NewGeoSearchLocationCmd(ctx, q, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
+func (c cmdable) GeoSearchStore(ctx context.Context, key, store string, q *GeoSearchStoreQuery) *IntCmd {
+ args := make([]interface{}, 0, 15)
+ args = append(args, "geosearchstore", store, key)
+ args = geoSearchArgs(&q.GeoSearchQuery, args)
+ if q.StoreDist {
+ args = append(args, "storedist")
+ }
+ cmd := NewIntCmd(ctx, args...)
+ _ = c(ctx, cmd)
+ return cmd
+}
+
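A sketch of the GEOSEARCH family (redis-server >= 6.2), assuming a geo key populated elsewhere; the fields shown come from GeoSearchQuery/GeoSearchLocationQuery defined earlier in this file:

	// Members within 200 km of the point, closest first, with distance and coordinates.
	locs, err := rdb.GeoSearchLocation(ctx, "cities", &redis.GeoSearchLocationQuery{
		GeoSearchQuery: redis.GeoSearchQuery{
			Longitude:  15.0,
			Latitude:   37.0,
			Radius:     200,
			RadiusUnit: "km",
			Sort:       "ASC",
		},
		WithCoord: true,
		WithDist:  true,
	}).Result()
	_, _ = locs, err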
func (c cmdable) GeoDist(
ctx context.Context, key string, member1, member2, unit string,
) *FloatCmd {
diff --git a/vendor/github.com/go-redis/redis/v8/error.go b/vendor/github.com/go-redis/redis/v8/error.go
index 9fe1376..521594b 100644
--- a/vendor/github.com/go-redis/redis/v8/error.go
+++ b/vendor/github.com/go-redis/redis/v8/error.go
@@ -10,6 +10,7 @@
"github.com/go-redis/redis/v8/internal/proto"
)
+// ErrClosed is returned when any operation is performed on a client that has been closed.
var ErrClosed = pool.ErrClosed
type Error interface {
@@ -64,15 +65,28 @@
return ok
}
-func isBadConn(err error, allowTimeout bool) bool {
- if err == nil {
+func isBadConn(err error, allowTimeout bool, addr string) bool {
+ switch err {
+ case nil:
return false
+ case context.Canceled, context.DeadlineExceeded:
+ return true
}
if isRedisError(err) {
- // Close connections in read only state in case domain addr is used
- // and domain resolves to a different Redis Server. See #790.
- return isReadOnlyError(err)
+ switch {
+ case isReadOnlyError(err):
+ // Close connections in read only state in case domain addr is used
+ // and domain resolves to a different Redis Server. See #790.
+ return true
+ case isMovedSameConnAddr(err, addr):
+ // Close connections when we are asked to move to the same addr
+ // of the connection. Force a DNS resolution when all connections
+ // of the pool are recycled
+ return true
+ default:
+ return false
+ }
}
if allowTimeout {
@@ -115,6 +129,14 @@
return strings.HasPrefix(err.Error(), "READONLY ")
}
+func isMovedSameConnAddr(err error, addr string) bool {
+ redisError := err.Error()
+ if !strings.HasPrefix(redisError, "MOVED ") {
+ return false
+ }
+ return strings.HasSuffix(redisError, " "+addr)
+}
+
//------------------------------------------------------------------------------
type timeoutError interface {
diff --git a/vendor/github.com/go-redis/redis/v8/internal/hashtag/hashtag.go b/vendor/github.com/go-redis/redis/v8/internal/hashtag/hashtag.go
index 2fc74ad..b3a4f21 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/hashtag/hashtag.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/hashtag/hashtag.go
@@ -60,7 +60,7 @@
return rand.Intn(slotNumber)
}
-// hashSlot returns a consistent slot number between 0 and 16383
+// Slot returns a consistent slot number between 0 and 16383
// for any given string key.
func Slot(key string) int {
if key == "" {
diff --git a/vendor/github.com/go-redis/redis/v8/internal/hscan/hscan.go b/vendor/github.com/go-redis/redis/v8/internal/hscan/hscan.go
new file mode 100644
index 0000000..852c8bd
--- /dev/null
+++ b/vendor/github.com/go-redis/redis/v8/internal/hscan/hscan.go
@@ -0,0 +1,201 @@
+package hscan
+
+import (
+ "errors"
+ "fmt"
+ "reflect"
+ "strconv"
+)
+
+// decoderFunc represents decoding functions for default built-in types.
+type decoderFunc func(reflect.Value, string) error
+
+var (
+ // List of built-in decoders indexed by their numeric constant values (eg: reflect.Bool = 1).
+ decoders = []decoderFunc{
+ reflect.Bool: decodeBool,
+ reflect.Int: decodeInt,
+ reflect.Int8: decodeInt8,
+ reflect.Int16: decodeInt16,
+ reflect.Int32: decodeInt32,
+ reflect.Int64: decodeInt64,
+ reflect.Uint: decodeUint,
+ reflect.Uint8: decodeUint8,
+ reflect.Uint16: decodeUint16,
+ reflect.Uint32: decodeUint32,
+ reflect.Uint64: decodeUint64,
+ reflect.Float32: decodeFloat32,
+ reflect.Float64: decodeFloat64,
+ reflect.Complex64: decodeUnsupported,
+ reflect.Complex128: decodeUnsupported,
+ reflect.Array: decodeUnsupported,
+ reflect.Chan: decodeUnsupported,
+ reflect.Func: decodeUnsupported,
+ reflect.Interface: decodeUnsupported,
+ reflect.Map: decodeUnsupported,
+ reflect.Ptr: decodeUnsupported,
+ reflect.Slice: decodeSlice,
+ reflect.String: decodeString,
+ reflect.Struct: decodeUnsupported,
+ reflect.UnsafePointer: decodeUnsupported,
+ }
+
+ // Global map of struct field specs that is populated once for every new
+ // struct type that is scanned. This caches the field types and the corresponding
+ // decoder functions to avoid iterating through struct fields on subsequent scans.
+ globalStructMap = newStructMap()
+)
+
+func Struct(dst interface{}) (StructValue, error) {
+ v := reflect.ValueOf(dst)
+
+ // The destination to scan into should be a struct pointer.
+ if v.Kind() != reflect.Ptr || v.IsNil() {
+ return StructValue{}, fmt.Errorf("redis.Scan(non-pointer %T)", dst)
+ }
+
+ v = v.Elem()
+ if v.Kind() != reflect.Struct {
+ return StructValue{}, fmt.Errorf("redis.Scan(non-struct %T)", dst)
+ }
+
+ return StructValue{
+ spec: globalStructMap.get(v.Type()),
+ value: v,
+ }, nil
+}
+
+// Scan scans the results from a key-value Redis map result set to a destination struct.
+// The Redis keys are matched to the struct's field with the `redis` tag.
+func Scan(dst interface{}, keys []interface{}, vals []interface{}) error {
+ if len(keys) != len(vals) {
+ return errors.New("args should have the same number of keys and vals")
+ }
+
+ strct, err := Struct(dst)
+ if err != nil {
+ return err
+ }
+
+ // Iterate through the (key, value) sequence.
+ for i := 0; i < len(vals); i++ {
+ key, ok := keys[i].(string)
+ if !ok {
+ continue
+ }
+
+ val, ok := vals[i].(string)
+ if !ok {
+ continue
+ }
+
+ if err := strct.Scan(key, val); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
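Within this package, Scan pairs reply keys with `redis`-tagged struct fields; a minimal sketch (the struct is hypothetical, and the key/value slices stand in for a hash reply):

	type user struct {
		Name string `redis:"name"`
		Age  int    `redis:"age"`
	}

	var u user
	err := Scan(&u, []interface{}{"name", "age"}, []interface{}{"alice", "42"})
	// u.Name == "alice", u.Age == 42; keys without a matching tag are skipped.
	_ = err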
+func decodeBool(f reflect.Value, s string) error {
+ b, err := strconv.ParseBool(s)
+ if err != nil {
+ return err
+ }
+ f.SetBool(b)
+ return nil
+}
+
+func decodeInt8(f reflect.Value, s string) error {
+ return decodeNumber(f, s, 8)
+}
+
+func decodeInt16(f reflect.Value, s string) error {
+ return decodeNumber(f, s, 16)
+}
+
+func decodeInt32(f reflect.Value, s string) error {
+ return decodeNumber(f, s, 32)
+}
+
+func decodeInt64(f reflect.Value, s string) error {
+ return decodeNumber(f, s, 64)
+}
+
+func decodeInt(f reflect.Value, s string) error {
+ return decodeNumber(f, s, 0)
+}
+
+func decodeNumber(f reflect.Value, s string, bitSize int) error {
+ v, err := strconv.ParseInt(s, 10, bitSize)
+ if err != nil {
+ return err
+ }
+ f.SetInt(v)
+ return nil
+}
+
+func decodeUint8(f reflect.Value, s string) error {
+ return decodeUnsignedNumber(f, s, 8)
+}
+
+func decodeUint16(f reflect.Value, s string) error {
+ return decodeUnsignedNumber(f, s, 16)
+}
+
+func decodeUint32(f reflect.Value, s string) error {
+ return decodeUnsignedNumber(f, s, 32)
+}
+
+func decodeUint64(f reflect.Value, s string) error {
+ return decodeUnsignedNumber(f, s, 64)
+}
+
+func decodeUint(f reflect.Value, s string) error {
+ return decodeUnsignedNumber(f, s, 0)
+}
+
+func decodeUnsignedNumber(f reflect.Value, s string, bitSize int) error {
+ v, err := strconv.ParseUint(s, 10, bitSize)
+ if err != nil {
+ return err
+ }
+ f.SetUint(v)
+ return nil
+}
+
+func decodeFloat32(f reflect.Value, s string) error {
+ v, err := strconv.ParseFloat(s, 32)
+ if err != nil {
+ return err
+ }
+ f.SetFloat(v)
+ return nil
+}
+
+// Although 64 is the default bit size for ParseFloat, we specify it explicitly.
+func decodeFloat64(f reflect.Value, s string) error {
+ v, err := strconv.ParseFloat(s, 64)
+ if err != nil {
+ return err
+ }
+ f.SetFloat(v)
+ return nil
+}
+
+func decodeString(f reflect.Value, s string) error {
+ f.SetString(s)
+ return nil
+}
+
+func decodeSlice(f reflect.Value, s string) error {
+ // []byte slice ([]uint8).
+ if f.Type().Elem().Kind() == reflect.Uint8 {
+ f.SetBytes([]byte(s))
+ }
+ return nil
+}
+
+func decodeUnsupported(v reflect.Value, s string) error {
+ return fmt.Errorf("redis.Scan(unsupported %s)", v.Type())
+}
diff --git a/vendor/github.com/go-redis/redis/v8/internal/hscan/structmap.go b/vendor/github.com/go-redis/redis/v8/internal/hscan/structmap.go
new file mode 100644
index 0000000..6839412
--- /dev/null
+++ b/vendor/github.com/go-redis/redis/v8/internal/hscan/structmap.go
@@ -0,0 +1,93 @@
+package hscan
+
+import (
+ "fmt"
+ "reflect"
+ "strings"
+ "sync"
+)
+
+// structMap contains the map of struct fields for target structs
+// indexed by the struct type.
+type structMap struct {
+ m sync.Map
+}
+
+func newStructMap() *structMap {
+ return new(structMap)
+}
+
+func (s *structMap) get(t reflect.Type) *structSpec {
+ if v, ok := s.m.Load(t); ok {
+ return v.(*structSpec)
+ }
+
+ spec := newStructSpec(t, "redis")
+ s.m.Store(t, spec)
+ return spec
+}
+
+//------------------------------------------------------------------------------
+
+// structSpec contains the list of all fields in a target struct.
+type structSpec struct {
+ m map[string]*structField
+}
+
+func (s *structSpec) set(tag string, sf *structField) {
+ s.m[tag] = sf
+}
+
+func newStructSpec(t reflect.Type, fieldTag string) *structSpec {
+ numField := t.NumField()
+ out := &structSpec{
+ m: make(map[string]*structField, numField),
+ }
+
+ for i := 0; i < numField; i++ {
+ f := t.Field(i)
+
+ tag := f.Tag.Get(fieldTag)
+ if tag == "" || tag == "-" {
+ continue
+ }
+
+ tag = strings.Split(tag, ",")[0]
+ if tag == "" {
+ continue
+ }
+
+ // Use the built-in decoder.
+ out.set(tag, &structField{index: i, fn: decoders[f.Type.Kind()]})
+ }
+
+ return out
+}
+
+//------------------------------------------------------------------------------
+
+// structField represents a single field in a target struct.
+type structField struct {
+ index int
+ fn decoderFunc
+}
+
+//------------------------------------------------------------------------------
+
+type StructValue struct {
+ spec *structSpec
+ value reflect.Value
+}
+
+func (s StructValue) Scan(key string, value string) error {
+ field, ok := s.spec.m[key]
+ if !ok {
+ return nil
+ }
+ if err := field.fn(s.value.Field(field.index), value); err != nil {
+ t := s.value.Type()
+ return fmt.Errorf("cannot scan redis.result %s into struct field %s.%s of type %s, error-%s",
+ value, t.Name(), t.Field(field.index).Name, t.Field(field.index).Type, err.Error())
+ }
+ return nil
+}
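For orientation, a minimal sketch of how this hscan machinery maps a hash reply onto a struct via `redis` field tags; the struct and values are invented for illustration, and since the package is internal, application code would normally reach it through the exported Scan helpers on command results rather than calling it directly:

    // Hypothetical target struct; field names and tags are illustrative.
    type person struct {
        Name string `redis:"name"`
        Age  int    `redis:"age"`
    }

    func exampleScan() error {
        var p person
        keys := []interface{}{"name", "age"}
        vals := []interface{}{"alice", "42"}
        // Scan builds (or reuses) the structSpec for person, looks up each key
        // in the tag map, and decodes the string value into the matching field.
        return Scan(&p, keys, vals)
    }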
diff --git a/vendor/github.com/go-redis/redis/v8/internal/instruments.go b/vendor/github.com/go-redis/redis/v8/internal/instruments.go
deleted file mode 100644
index e837526..0000000
--- a/vendor/github.com/go-redis/redis/v8/internal/instruments.go
+++ /dev/null
@@ -1,33 +0,0 @@
-package internal
-
-import (
- "context"
-
- "go.opentelemetry.io/otel/api/global"
- "go.opentelemetry.io/otel/api/metric"
-)
-
-var (
- // WritesCounter is a count of write commands performed.
- WritesCounter metric.Int64Counter
- // NewConnectionsCounter is a count of new connections.
- NewConnectionsCounter metric.Int64Counter
-)
-
-func init() {
- defer func() {
- if r := recover(); r != nil {
- Logger.Printf(context.Background(), "Error creating meter github.com/go-redis/redis for Instruments", r)
- }
- }()
-
- meter := metric.Must(global.Meter("github.com/go-redis/redis"))
-
- WritesCounter = meter.NewInt64Counter("redis.writes",
- metric.WithDescription("the number of writes initiated"),
- )
-
- NewConnectionsCounter = meter.NewInt64Counter("redis.new_connections",
- metric.WithDescription("the number of connections created"),
- )
-}
diff --git a/vendor/github.com/go-redis/redis/v8/internal/log.go b/vendor/github.com/go-redis/redis/v8/internal/log.go
index 3810f9e..c8b9213 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/log.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/log.go
@@ -19,6 +19,8 @@
_ = l.log.Output(2, fmt.Sprintf(format, v...))
}
+// Logger calls Output to print to stderr.
+// Arguments are handled in the manner of fmt.Printf.
var Logger Logging = &logger{
log: log.New(os.Stderr, "redis: ", log.LstdFlags|log.Lshortfile),
}
diff --git a/vendor/github.com/go-redis/redis/v8/internal/pool/conn.go b/vendor/github.com/go-redis/redis/v8/internal/pool/conn.go
index 08a2071..5661659 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/pool/conn.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/pool/conn.go
@@ -7,9 +7,7 @@
"sync/atomic"
"time"
- "github.com/go-redis/redis/v8/internal"
"github.com/go-redis/redis/v8/internal/proto"
- "go.opentelemetry.io/otel/api/trace"
)
var noDeadline = time.Time{}
@@ -66,41 +64,28 @@
}
func (cn *Conn) WithReader(ctx context.Context, timeout time.Duration, fn func(rd *proto.Reader) error) error {
- return internal.WithSpan(ctx, "redis.with_reader", func(ctx context.Context, span trace.Span) error {
- if err := cn.netConn.SetReadDeadline(cn.deadline(ctx, timeout)); err != nil {
- return internal.RecordError(ctx, err)
- }
- if err := fn(cn.rd); err != nil {
- return internal.RecordError(ctx, err)
- }
- return nil
- })
+ if err := cn.netConn.SetReadDeadline(cn.deadline(ctx, timeout)); err != nil {
+ return err
+ }
+ return fn(cn.rd)
}
func (cn *Conn) WithWriter(
ctx context.Context, timeout time.Duration, fn func(wr *proto.Writer) error,
) error {
- return internal.WithSpan(ctx, "redis.with_writer", func(ctx context.Context, span trace.Span) error {
- if err := cn.netConn.SetWriteDeadline(cn.deadline(ctx, timeout)); err != nil {
- return internal.RecordError(ctx, err)
- }
+ if err := cn.netConn.SetWriteDeadline(cn.deadline(ctx, timeout)); err != nil {
+ return err
+ }
- if cn.bw.Buffered() > 0 {
- cn.bw.Reset(cn.netConn)
- }
+ if cn.bw.Buffered() > 0 {
+ cn.bw.Reset(cn.netConn)
+ }
- if err := fn(cn.wr); err != nil {
- return internal.RecordError(ctx, err)
- }
+ if err := fn(cn.wr); err != nil {
+ return err
+ }
- if err := cn.bw.Flush(); err != nil {
- return internal.RecordError(ctx, err)
- }
-
- internal.WritesCounter.Add(ctx, 1)
-
- return nil
- })
+ return cn.bw.Flush()
}
func (cn *Conn) Close() error {
diff --git a/vendor/github.com/go-redis/redis/v8/internal/pool/pool.go b/vendor/github.com/go-redis/redis/v8/internal/pool/pool.go
index 355742b..44a4e77 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/pool/pool.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/pool/pool.go
@@ -12,7 +12,10 @@
)
var (
- ErrClosed = errors.New("redis: client is closed")
+ // ErrClosed is returned when any operation is performed on a closed client.
+ ErrClosed = errors.New("redis: client is closed")
+
+ // ErrPoolTimeout is returned when the wait for a connection from the pool times out.
ErrPoolTimeout = errors.New("redis: connection pool timeout")
)
@@ -54,6 +57,7 @@
Dialer func(context.Context) (net.Conn, error)
OnClose func(*Conn) error
+ PoolFIFO bool
PoolSize int
MinIdleConns int
MaxConnAge time.Duration
@@ -117,9 +121,10 @@
for p.poolSize < p.opt.PoolSize && p.idleConnsLen < p.opt.MinIdleConns {
p.poolSize++
p.idleConnsLen++
+
go func() {
err := p.addIdleConn()
- if err != nil {
+ if err != nil && err != ErrClosed {
p.connsMu.Lock()
p.poolSize--
p.idleConnsLen--
@@ -136,9 +141,16 @@
}
p.connsMu.Lock()
+ defer p.connsMu.Unlock()
+
+ // New connections must not be added to a closed connection pool.
+ if p.closed() {
+ _ = cn.Close()
+ return ErrClosed
+ }
+
p.conns = append(p.conns, cn)
p.idleConns = append(p.idleConns, cn)
- p.connsMu.Unlock()
return nil
}
@@ -153,6 +165,14 @@
}
p.connsMu.Lock()
+ defer p.connsMu.Unlock()
+
+ // New connections must not be added to a closed connection pool.
+ if p.closed() {
+ _ = cn.Close()
+ return nil, ErrClosed
+ }
+
p.conns = append(p.conns, cn)
if pooled {
// If pool is full remove the cn on next Put.
@@ -162,7 +182,6 @@
p.poolSize++
}
}
- p.connsMu.Unlock()
return cn, nil
}
@@ -185,7 +204,6 @@
return nil, err
}
- internal.NewConnectionsCounter.Add(ctx, 1)
cn := NewConn(netConn)
cn.pooled = pooled
return cn, nil
@@ -228,16 +246,19 @@
return nil, ErrClosed
}
- err := p.waitTurn(ctx)
- if err != nil {
+ if err := p.waitTurn(ctx); err != nil {
return nil, err
}
for {
p.connsMu.Lock()
- cn := p.popIdle()
+ cn, err := p.popIdle()
p.connsMu.Unlock()
+ if err != nil {
+ return nil, err
+ }
+
if cn == nil {
break
}
@@ -306,17 +327,28 @@
<-p.queue
}
-func (p *ConnPool) popIdle() *Conn {
- if len(p.idleConns) == 0 {
- return nil
+func (p *ConnPool) popIdle() (*Conn, error) {
+ if p.closed() {
+ return nil, ErrClosed
+ }
+ n := len(p.idleConns)
+ if n == 0 {
+ return nil, nil
}
- idx := len(p.idleConns) - 1
- cn := p.idleConns[idx]
- p.idleConns = p.idleConns[:idx]
+ var cn *Conn
+ if p.opt.PoolFIFO {
+ cn = p.idleConns[0]
+ copy(p.idleConns, p.idleConns[1:])
+ p.idleConns = p.idleConns[:n-1]
+ } else {
+ idx := n - 1
+ cn = p.idleConns[idx]
+ p.idleConns = p.idleConns[:idx]
+ }
p.idleConnsLen--
p.checkMinIdleConns()
- return cn
+ return cn, nil
}
func (p *ConnPool) Put(ctx context.Context, cn *Conn) {
@@ -477,6 +509,7 @@
p.connsMu.Lock()
cn := p.reapStaleConn()
p.connsMu.Unlock()
+
p.freeTurn()
if cn != nil {
diff --git a/vendor/github.com/go-redis/redis/v8/internal/pool/pool_sticky.go b/vendor/github.com/go-redis/redis/v8/internal/pool/pool_sticky.go
index c3e7e7c..3adb99b 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/pool/pool_sticky.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/pool/pool_sticky.go
@@ -172,8 +172,7 @@
func (p *StickyConnPool) badConnError() error {
if v := p._badConnError.Load(); v != nil {
- err := v.(BadConnError)
- if err.wrapped != nil {
+ if err := v.(BadConnError); err.wrapped != nil {
return err
}
}
diff --git a/vendor/github.com/go-redis/redis/v8/internal/proto/reader.go b/vendor/github.com/go-redis/redis/v8/internal/proto/reader.go
index 0fbc51e..0e6ca77 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/proto/reader.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/proto/reader.go
@@ -8,6 +8,7 @@
"github.com/go-redis/redis/v8/internal/util"
)
+// Redis RESP protocol data types.
const (
ErrorReply = '-'
StatusReply = '+'
@@ -18,7 +19,7 @@
//------------------------------------------------------------------------------
-const Nil = RedisError("redis: nil")
+const Nil = RedisError("redis: nil") // nolint:errname
type RedisError string
@@ -83,7 +84,7 @@
return nil, err
}
- full = append(full, b...)
+ full = append(full, b...) //nolint:makezero
b = full
}
if len(b) <= 2 || b[len(b)-1] != '\n' || b[len(b)-2] != '\r' {
diff --git a/vendor/github.com/go-redis/redis/v8/internal/proto/scan.go b/vendor/github.com/go-redis/redis/v8/internal/proto/scan.go
index 08d18d3..0e99476 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/proto/scan.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/proto/scan.go
@@ -10,7 +10,7 @@
)
// Scan parses bytes `b` to `v` with appropriate type.
-// nolint: gocyclo
+//nolint:gocyclo
func Scan(b []byte, v interface{}) error {
switch v := v.(type) {
case nil:
@@ -106,6 +106,13 @@
var err error
*v, err = time.Parse(time.RFC3339Nano, util.BytesToString(b))
return err
+ case *time.Duration:
+ n, err := util.ParseInt(b, 10, 64)
+ if err != nil {
+ return err
+ }
+ *v = time.Duration(n)
+ return nil
case encoding.BinaryUnmarshaler:
return v.UnmarshalBinary(b)
default:
@@ -131,7 +138,7 @@
for i, s := range data {
elem := next()
if err := Scan([]byte(s), elem.Addr().Interface()); err != nil {
- err = fmt.Errorf("redis: ScanSlice index=%d value=%q failed: %s", i, s, err)
+ err = fmt.Errorf("redis: ScanSlice index=%d value=%q failed: %w", i, s, err)
return err
}
}
diff --git a/vendor/github.com/go-redis/redis/v8/internal/proto/writer.go b/vendor/github.com/go-redis/redis/v8/internal/proto/writer.go
index 81b09b8..c426098 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/proto/writer.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/proto/writer.go
@@ -98,6 +98,8 @@
case time.Time:
w.numBuf = v.AppendFormat(w.numBuf[:0], time.RFC3339Nano)
return w.bytes(w.numBuf)
+ case time.Duration:
+ return w.int(v.Nanoseconds())
case encoding.BinaryMarshaler:
b, err := v.MarshalBinary()
if err != nil {
diff --git a/vendor/github.com/go-redis/redis/v8/internal/rand/rand.go b/vendor/github.com/go-redis/redis/v8/internal/rand/rand.go
index 40676f3..2edccba 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/rand/rand.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/rand/rand.go
@@ -43,3 +43,8 @@
s.src.Seed(seed)
s.mu.Unlock()
}
+
+// Shuffle pseudo-randomizes the order of elements.
+// n is the number of elements.
+// swap swaps the elements with indexes i and j.
+func Shuffle(n int, swap func(i, j int)) { pseudo.Shuffle(n, swap) }
diff --git a/vendor/github.com/go-redis/redis/v8/internal/safe.go b/vendor/github.com/go-redis/redis/v8/internal/safe.go
index 862ff0e..fd2f434 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/safe.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/safe.go
@@ -1,3 +1,4 @@
+//go:build appengine
// +build appengine
package internal
diff --git a/vendor/github.com/go-redis/redis/v8/internal/unsafe.go b/vendor/github.com/go-redis/redis/v8/internal/unsafe.go
index 4bc7970..9f2e418 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/unsafe.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/unsafe.go
@@ -1,3 +1,4 @@
+//go:build !appengine
// +build !appengine
package internal
diff --git a/vendor/github.com/go-redis/redis/v8/internal/util.go b/vendor/github.com/go-redis/redis/v8/internal/util.go
index 894382b..e34a7f0 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/util.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/util.go
@@ -4,24 +4,19 @@
"context"
"time"
- "github.com/go-redis/redis/v8/internal/proto"
"github.com/go-redis/redis/v8/internal/util"
- "go.opentelemetry.io/otel/api/global"
- "go.opentelemetry.io/otel/api/trace"
)
func Sleep(ctx context.Context, dur time.Duration) error {
- return WithSpan(ctx, "time.Sleep", func(ctx context.Context, span trace.Span) error {
- t := time.NewTimer(dur)
- defer t.Stop()
+ t := time.NewTimer(dur)
+ defer t.Stop()
- select {
- case <-t.C:
- return nil
- case <-ctx.Done():
- return ctx.Err()
- }
- })
+ select {
+ case <-t.C:
+ return nil
+ case <-ctx.Done():
+ return ctx.Err()
+ }
}
func ToLower(s string) string {
@@ -49,33 +44,3 @@
}
return true
}
-
-func Unwrap(err error) error {
- u, ok := err.(interface {
- Unwrap() error
- })
- if !ok {
- return nil
- }
- return u.Unwrap()
-}
-
-//------------------------------------------------------------------------------
-
-func WithSpan(ctx context.Context, name string, fn func(context.Context, trace.Span) error) error {
- if span := trace.SpanFromContext(ctx); !span.IsRecording() {
- return fn(ctx, span)
- }
-
- ctx, span := global.Tracer("github.com/go-redis/redis").Start(ctx, name)
- defer span.End()
-
- return fn(ctx, span)
-}
-
-func RecordError(ctx context.Context, err error) error {
- if err != proto.Nil {
- trace.SpanFromContext(ctx).RecordError(ctx, err)
- }
- return err
-}
diff --git a/vendor/github.com/go-redis/redis/v8/internal/util/safe.go b/vendor/github.com/go-redis/redis/v8/internal/util/safe.go
index 1b3060e..2130711 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/util/safe.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/util/safe.go
@@ -1,3 +1,4 @@
+//go:build appengine
// +build appengine
package util
diff --git a/vendor/github.com/go-redis/redis/v8/internal/util/unsafe.go b/vendor/github.com/go-redis/redis/v8/internal/util/unsafe.go
index c9868aa..daa8d76 100644
--- a/vendor/github.com/go-redis/redis/v8/internal/util/unsafe.go
+++ b/vendor/github.com/go-redis/redis/v8/internal/util/unsafe.go
@@ -1,3 +1,4 @@
+//go:build !appengine
// +build !appengine
package util
diff --git a/vendor/github.com/go-redis/redis/v8/options.go b/vendor/github.com/go-redis/redis/v8/options.go
index f2c16c5..a4abe32 100644
--- a/vendor/github.com/go-redis/redis/v8/options.go
+++ b/vendor/github.com/go-redis/redis/v8/options.go
@@ -8,14 +8,12 @@
"net"
"net/url"
"runtime"
+ "sort"
"strconv"
"strings"
"time"
- "github.com/go-redis/redis/v8/internal"
"github.com/go-redis/redis/v8/internal/pool"
- "go.opentelemetry.io/otel/api/trace"
- "go.opentelemetry.io/otel/label"
)
// Limiter is the interface of a rate limiter or a circuit breaker.
@@ -58,7 +56,7 @@
DB int
// Maximum number of retries before giving up.
- // Default is 3 retries.
+ // Default is 3 retries; -1 (not 0) disables retries.
MaxRetries int
// Minimum backoff between each retry.
// Default is 8 milliseconds; -1 disables backoff.
@@ -79,8 +77,12 @@
// Default is ReadTimeout.
WriteTimeout time.Duration
+ // Type of connection pool.
+ // true for FIFO pool, false for LIFO pool.
+ // Note that FIFO has higher overhead compared to LIFO.
+ PoolFIFO bool
// Maximum number of socket connections.
- // Default is 10 connections per every CPU as reported by runtime.NumCPU.
+ // Default is 10 connections per available CPU as reported by runtime.GOMAXPROCS.
PoolSize int
// Minimum number of idle connections which is useful when establishing
// new connection is slow.
@@ -139,7 +141,7 @@
}
}
if opt.PoolSize == 0 {
- opt.PoolSize = 10 * runtime.NumCPU()
+ opt.PoolSize = 10 * runtime.GOMAXPROCS(0)
}
switch opt.ReadTimeout {
case -1:
@@ -191,9 +193,32 @@
// Scheme is required.
// There are two connection types: by tcp socket and by unix socket.
// Tcp connection:
-// redis://<user>:<password>@<host>:<port>/<db_number>
+// redis://<user>:<password>@<host>:<port>/<db_number>
// Unix connection:
// unix://<user>:<password>@</path/to/redis.sock>?db=<db_number>
+// Most Option fields can be set using query parameters, with the following restrictions:
+// - field names are mapped using snake-case conversion: to set MaxRetries, use max_retries
+// - only scalar type fields are supported (bool, int, time.Duration)
+// - for time.Duration fields, values must be a valid input for time.ParseDuration();
+// additionally a plain integer as the value (i.e. without unit) is interpreted as seconds
+// - to disable a duration field, use a value less than or equal to 0; to use the default
+// value, leave the value blank or remove the parameter
+// - only the last value is interpreted if a parameter is given multiple times
+// - fields "network", "addr", "username" and "password" can only be set using other
+// URL attributes (scheme, host, userinfo, resp.), query parameters using these
+// names will be treated as unknown parameters
+// - unknown parameter names will result in an error
+// Examples:
+// redis://user:password@localhost:6789/3?dial_timeout=3&db=1&read_timeout=6s&max_retries=2
+// is equivalent to:
+// &Options{
+// Network: "tcp",
+// Addr: "localhost:6789",
+// DB: 1, // path "/3" was overridden by "&db=1"
+// DialTimeout: 3 * time.Second, // no time unit = seconds
+// ReadTimeout: 6 * time.Second,
+// MaxRetries: 2,
+// }
func ParseURL(redisURL string) (*Options, error) {
u, err := url.Parse(redisURL)
if err != nil {
@@ -215,10 +240,6 @@
o.Username, o.Password = getUserPassword(u)
- if len(u.Query()) > 0 {
- return nil, errors.New("redis: no options supported")
- }
-
h, p, err := net.SplitHostPort(u.Host)
if err != nil {
h = u.Host
@@ -249,7 +270,7 @@
o.TLSConfig = &tls.Config{ServerName: h}
}
- return o, nil
+ return setupConnParams(u, o)
}
func setupUnixConn(u *url.URL) (*Options, error) {
@@ -261,19 +282,122 @@
return nil, errors.New("redis: empty unix socket path")
}
o.Addr = u.Path
-
o.Username, o.Password = getUserPassword(u)
+ return setupConnParams(u, o)
+}
- dbStr := u.Query().Get("db")
- if dbStr == "" {
- return o, nil // if database is not set, connect to 0 db.
+type queryOptions struct {
+ q url.Values
+ err error
+}
+
+func (o *queryOptions) string(name string) string {
+ vs := o.q[name]
+ if len(vs) == 0 {
+ return ""
+ }
+ delete(o.q, name) // enable detection of unknown parameters
+ return vs[len(vs)-1]
+}
+
+func (o *queryOptions) int(name string) int {
+ s := o.string(name)
+ if s == "" {
+ return 0
+ }
+ i, err := strconv.Atoi(s)
+ if err == nil {
+ return i
+ }
+ if o.err == nil {
+ o.err = fmt.Errorf("redis: invalid %s number: %s", name, err)
+ }
+ return 0
+}
+
+func (o *queryOptions) duration(name string) time.Duration {
+ s := o.string(name)
+ if s == "" {
+ return 0
+ }
+ // try plain number first
+ if i, err := strconv.Atoi(s); err == nil {
+ if i <= 0 {
+ // disable timeouts
+ return -1
+ }
+ return time.Duration(i) * time.Second
+ }
+ dur, err := time.ParseDuration(s)
+ if err == nil {
+ return dur
+ }
+ if o.err == nil {
+ o.err = fmt.Errorf("redis: invalid %s duration: %w", name, err)
+ }
+ return 0
+}
+
+func (o *queryOptions) bool(name string) bool {
+ switch s := o.string(name); s {
+ case "true", "1":
+ return true
+ case "false", "0", "":
+ return false
+ default:
+ if o.err == nil {
+ o.err = fmt.Errorf("redis: invalid %s boolean: expected true/false/1/0 or an empty string, got %q", name, s)
+ }
+ return false
+ }
+}
+
+func (o *queryOptions) remaining() []string {
+ if len(o.q) == 0 {
+ return nil
+ }
+ keys := make([]string, 0, len(o.q))
+ for k := range o.q {
+ keys = append(keys, k)
+ }
+ sort.Strings(keys)
+ return keys
+}
+
+// setupConnParams converts query parameters in u to option values in o.
+func setupConnParams(u *url.URL, o *Options) (*Options, error) {
+ q := queryOptions{q: u.Query()}
+
+ // compat: a future major release may use q.int("db")
+ if tmp := q.string("db"); tmp != "" {
+ db, err := strconv.Atoi(tmp)
+ if err != nil {
+ return nil, fmt.Errorf("redis: invalid database number: %w", err)
+ }
+ o.DB = db
}
- db, err := strconv.Atoi(dbStr)
- if err != nil {
- return nil, fmt.Errorf("redis: invalid database number: %s", err)
+ o.MaxRetries = q.int("max_retries")
+ o.MinRetryBackoff = q.duration("min_retry_backoff")
+ o.MaxRetryBackoff = q.duration("max_retry_backoff")
+ o.DialTimeout = q.duration("dial_timeout")
+ o.ReadTimeout = q.duration("read_timeout")
+ o.WriteTimeout = q.duration("write_timeout")
+ o.PoolFIFO = q.bool("pool_fifo")
+ o.PoolSize = q.int("pool_size")
+ o.MinIdleConns = q.int("min_idle_conns")
+ o.MaxConnAge = q.duration("max_conn_age")
+ o.PoolTimeout = q.duration("pool_timeout")
+ o.IdleTimeout = q.duration("idle_timeout")
+ o.IdleCheckFrequency = q.duration("idle_check_frequency")
+ if q.err != nil {
+ return nil, q.err
}
- o.DB = db
+
+ // any parameters left?
+ if r := q.remaining(); len(r) > 0 {
+ return nil, fmt.Errorf("redis: unexpected option: %s", strings.Join(r, ", "))
+ }
return o, nil
}
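A small usage sketch of the new query-parameter parsing, assuming the standard redis/v8 import path; the URL values follow the ParseURL comment above and are otherwise arbitrary:

    import "github.com/go-redis/redis/v8"

    func exampleParseURL() (*redis.Client, error) {
        // A plain integer (dial_timeout=3) is read as seconds; duration strings
        // (read_timeout=6s) go through time.ParseDuration; pool_fifo is a bool.
        opt, err := redis.ParseURL(
            "redis://user:password@localhost:6789/3?dial_timeout=3&read_timeout=6s&max_retries=2&pool_fifo=true")
        if err != nil {
            return nil, err
        }
        return redis.NewClient(opt), nil
    }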
@@ -292,21 +416,9 @@
func newConnPool(opt *Options) *pool.ConnPool {
return pool.NewConnPool(&pool.Options{
Dialer: func(ctx context.Context) (net.Conn, error) {
- var conn net.Conn
- err := internal.WithSpan(ctx, "redis.dial", func(ctx context.Context, span trace.Span) error {
- span.SetAttributes(
- label.String("db.connection_string", opt.Addr),
- )
-
- var err error
- conn, err = opt.Dialer(ctx, opt.Network, opt.Addr)
- if err != nil {
- _ = internal.RecordError(ctx, err)
- }
- return err
- })
- return conn, err
+ return opt.Dialer(ctx, opt.Network, opt.Addr)
},
+ PoolFIFO: opt.PoolFIFO,
PoolSize: opt.PoolSize,
MinIdleConns: opt.MinIdleConns,
MaxConnAge: opt.MaxConnAge,
diff --git a/vendor/github.com/go-redis/redis/v8/package.json b/vendor/github.com/go-redis/redis/v8/package.json
new file mode 100644
index 0000000..e4ea4bb
--- /dev/null
+++ b/vendor/github.com/go-redis/redis/v8/package.json
@@ -0,0 +1,8 @@
+{
+ "name": "redis",
+ "version": "8.11.5",
+ "main": "index.js",
+ "repository": "git@github.com:go-redis/redis.git",
+ "author": "Vladimir Mihailenco <vladimir.webdev@gmail.com>",
+ "license": "BSD-2-clause"
+}
diff --git a/vendor/github.com/go-redis/redis/v8/pipeline.go b/vendor/github.com/go-redis/redis/v8/pipeline.go
index c6ec340..31bab97 100644
--- a/vendor/github.com/go-redis/redis/v8/pipeline.go
+++ b/vendor/github.com/go-redis/redis/v8/pipeline.go
@@ -24,6 +24,7 @@
// depends of your batch size and/or use TxPipeline.
type Pipeliner interface {
StatefulCmdable
+ Len() int
Do(ctx context.Context, args ...interface{}) *Cmd
Process(ctx context.Context, cmd Cmder) error
Close() error
@@ -53,6 +54,15 @@
c.statefulCmdable = c.Process
}
+// Len returns the number of queued commands.
+func (c *Pipeline) Len() int {
+ c.mu.Lock()
+ ln := len(c.cmds)
+ c.mu.Unlock()
+ return ln
+}
+
+// Do queues the custom command for later execution.
func (c *Pipeline) Do(ctx context.Context, args ...interface{}) *Cmd {
cmd := NewCmd(ctx, args...)
_ = c.Process(ctx, cmd)
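A brief sketch of the new Len accessor on a pipeline; rdb and ctx are assumed to exist and the keys are placeholders:

    pipe := rdb.Pipeline()
    pipe.Set(ctx, "key1", "value1", 0)
    pipe.Incr(ctx, "counter")
    queued := pipe.Len() // 2 commands queued, none executed yet
    _, err := pipe.Exec(ctx)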
diff --git a/vendor/github.com/go-redis/redis/v8/pubsub.go b/vendor/github.com/go-redis/redis/v8/pubsub.go
index c56270b..efc2354 100644
--- a/vendor/github.com/go-redis/redis/v8/pubsub.go
+++ b/vendor/github.com/go-redis/redis/v8/pubsub.go
@@ -2,7 +2,6 @@
import (
"context"
- "errors"
"fmt"
"strings"
"sync"
@@ -13,13 +12,6 @@
"github.com/go-redis/redis/v8/internal/proto"
)
-const (
- pingTimeout = time.Second
- chanSendTimeout = time.Minute
-)
-
-var errPingTimeout = errors.New("redis: ping timeout")
-
// PubSub implements Pub/Sub commands as described in
// http://redis.io/topics/pubsub. Message receiving is NOT safe
// for concurrent use by multiple goroutines.
@@ -43,9 +35,12 @@
cmd *Cmd
chOnce sync.Once
- msgCh chan *Message
- allCh chan interface{}
- ping chan struct{}
+ msgCh *channel
+ allCh *channel
+}
+
+func (c *PubSub) init() {
+ c.exit = make(chan struct{})
}
func (c *PubSub) String() string {
@@ -54,10 +49,6 @@
return fmt.Sprintf("PubSub(%s)", strings.Join(channels, ", "))
}
-func (c *PubSub) init() {
- c.exit = make(chan struct{})
-}
-
func (c *PubSub) connWithLock(ctx context.Context) (*pool.Conn, error) {
c.mu.Lock()
cn, err := c.conn(ctx, nil)
@@ -150,7 +141,7 @@
if c.cn != cn {
return
}
- if isBadConn(err, allowTimeout) {
+ if isBadConn(err, allowTimeout, c.opt.Addr) {
c.reconnect(ctx, err)
}
}
@@ -261,13 +252,16 @@
}
cmd := NewCmd(ctx, args...)
- cn, err := c.connWithLock(ctx)
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ cn, err := c.conn(ctx, nil)
if err != nil {
return err
}
err = c.writeCmd(ctx, cn, cmd)
- c.releaseConnWithLock(ctx, cn, err, false)
+ c.releaseConn(ctx, cn, err, false)
return err
}
@@ -370,6 +364,8 @@
c.cmd = NewCmd(ctx)
}
+ // Don't hold the lock to allow subscriptions and pings.
+
cn, err := c.connWithLock(ctx)
if err != nil {
return nil, err
@@ -380,6 +376,7 @@
})
c.releaseConnWithLock(ctx, cn, err, timeout > 0)
+
if err != nil {
return nil, err
}
@@ -418,56 +415,6 @@
}
}
-// Channel returns a Go channel for concurrently receiving messages.
-// The channel is closed together with the PubSub. If the Go channel
-// is blocked full for 30 seconds the message is dropped.
-// Receive* APIs can not be used after channel is created.
-//
-// go-redis periodically sends ping messages to test connection health
-// and re-subscribes if ping can not not received for 30 seconds.
-func (c *PubSub) Channel() <-chan *Message {
- return c.ChannelSize(100)
-}
-
-// ChannelSize is like Channel, but creates a Go channel
-// with specified buffer size.
-func (c *PubSub) ChannelSize(size int) <-chan *Message {
- c.chOnce.Do(func() {
- c.initPing()
- c.initMsgChan(size)
- })
- if c.msgCh == nil {
- err := fmt.Errorf("redis: Channel can't be called after ChannelWithSubscriptions")
- panic(err)
- }
- if cap(c.msgCh) != size {
- err := fmt.Errorf("redis: PubSub.Channel size can not be changed once created")
- panic(err)
- }
- return c.msgCh
-}
-
-// ChannelWithSubscriptions is like Channel, but message type can be either
-// *Subscription or *Message. Subscription messages can be used to detect
-// reconnections.
-//
-// ChannelWithSubscriptions can not be used together with Channel or ChannelSize.
-func (c *PubSub) ChannelWithSubscriptions(ctx context.Context, size int) <-chan interface{} {
- c.chOnce.Do(func() {
- c.initPing()
- c.initAllChan(size)
- })
- if c.allCh == nil {
- err := fmt.Errorf("redis: ChannelWithSubscriptions can't be called after Channel")
- panic(err)
- }
- if cap(c.allCh) != size {
- err := fmt.Errorf("redis: PubSub.Channel size can not be changed once created")
- panic(err)
- }
- return c.allCh
-}
-
func (c *PubSub) getContext() context.Context {
if c.cmd != nil {
return c.cmd.ctx
@@ -475,36 +422,135 @@
return context.Background()
}
-func (c *PubSub) initPing() {
+//------------------------------------------------------------------------------
+
+// Channel returns a Go channel for concurrently receiving messages.
+// The channel is closed together with the PubSub. If the Go channel
+// is blocked (full) for 30 seconds, the message is dropped.
+// Receive* APIs cannot be used after the channel is created.
+//
+// go-redis periodically sends ping messages to test connection health
+// and re-subscribes if a ping is not received for 30 seconds.
+func (c *PubSub) Channel(opts ...ChannelOption) <-chan *Message {
+ c.chOnce.Do(func() {
+ c.msgCh = newChannel(c, opts...)
+ c.msgCh.initMsgChan()
+ })
+ if c.msgCh == nil {
+ err := fmt.Errorf("redis: Channel can't be called after ChannelWithSubscriptions")
+ panic(err)
+ }
+ return c.msgCh.msgCh
+}
+
+// ChannelSize is like Channel, but creates a Go channel
+// with specified buffer size.
+//
+// Deprecated: use Channel(WithChannelSize(size)), remove in v9.
+func (c *PubSub) ChannelSize(size int) <-chan *Message {
+ return c.Channel(WithChannelSize(size))
+}
+
+// ChannelWithSubscriptions is like Channel, but message type can be either
+// *Subscription or *Message. Subscription messages can be used to detect
+// reconnections.
+//
+// ChannelWithSubscriptions can not be used together with Channel or ChannelSize.
+func (c *PubSub) ChannelWithSubscriptions(_ context.Context, size int) <-chan interface{} {
+ c.chOnce.Do(func() {
+ c.allCh = newChannel(c, WithChannelSize(size))
+ c.allCh.initAllChan()
+ })
+ if c.allCh == nil {
+ err := fmt.Errorf("redis: ChannelWithSubscriptions can't be called after Channel")
+ panic(err)
+ }
+ return c.allCh.allCh
+}
+
+type ChannelOption func(c *channel)
+
+// WithChannelSize specifies the Go chan size that is used to buffer incoming messages.
+//
+// The default is 100 messages.
+func WithChannelSize(size int) ChannelOption {
+ return func(c *channel) {
+ c.chanSize = size
+ }
+}
+
+// WithChannelHealthCheckInterval specifies the health check interval.
+// PubSub will ping the Redis server if it does not receive any messages within the interval.
+// To disable health check, use zero interval.
+//
+// The default is 3 seconds.
+func WithChannelHealthCheckInterval(d time.Duration) ChannelOption {
+ return func(c *channel) {
+ c.checkInterval = d
+ }
+}
+
+// WithChannelSendTimeout specifies the channel send timeout after which
+// the message is dropped.
+//
+// The default is 60 seconds.
+func WithChannelSendTimeout(d time.Duration) ChannelOption {
+ return func(c *channel) {
+ c.chanSendTimeout = d
+ }
+}
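A sketch of how these options are passed to Channel; rdb and ctx are assumed, and the channel name and values are placeholders:

    sub := rdb.Subscribe(ctx, "mychannel")
    defer sub.Close()

    ch := sub.Channel(
        redis.WithChannelSize(500),                           // buffer up to 500 messages
        redis.WithChannelHealthCheckInterval(10*time.Second), // ping when idle
        redis.WithChannelSendTimeout(30*time.Second),         // drop messages blocked this long
    )
    for msg := range ch {
        fmt.Println(msg.Channel, msg.Payload)
    }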
+
+type channel struct {
+ pubSub *PubSub
+
+ msgCh chan *Message
+ allCh chan interface{}
+ ping chan struct{}
+
+ chanSize int
+ chanSendTimeout time.Duration
+ checkInterval time.Duration
+}
+
+func newChannel(pubSub *PubSub, opts ...ChannelOption) *channel {
+ c := &channel{
+ pubSub: pubSub,
+
+ chanSize: 100,
+ chanSendTimeout: time.Minute,
+ checkInterval: 3 * time.Second,
+ }
+ for _, opt := range opts {
+ opt(c)
+ }
+ if c.checkInterval > 0 {
+ c.initHealthCheck()
+ }
+ return c
+}
+
+func (c *channel) initHealthCheck() {
ctx := context.TODO()
c.ping = make(chan struct{}, 1)
+
go func() {
timer := time.NewTimer(time.Minute)
timer.Stop()
- healthy := true
for {
- timer.Reset(pingTimeout)
+ timer.Reset(c.checkInterval)
select {
case <-c.ping:
- healthy = true
if !timer.Stop() {
<-timer.C
}
case <-timer.C:
- pingErr := c.Ping(ctx)
- if healthy {
- healthy = false
- } else {
- if pingErr == nil {
- pingErr = errPingTimeout
- }
- c.mu.Lock()
- c.reconnect(ctx, pingErr)
- healthy = true
- c.mu.Unlock()
+ if pingErr := c.pubSub.Ping(ctx); pingErr != nil {
+ c.pubSub.mu.Lock()
+ c.pubSub.reconnect(ctx, pingErr)
+ c.pubSub.mu.Unlock()
}
- case <-c.exit:
+ case <-c.pubSub.exit:
return
}
}
@@ -512,16 +558,17 @@
}
// initMsgChan must be in sync with initAllChan.
-func (c *PubSub) initMsgChan(size int) {
+func (c *channel) initMsgChan() {
ctx := context.TODO()
- c.msgCh = make(chan *Message, size)
+ c.msgCh = make(chan *Message, c.chanSize)
+
go func() {
timer := time.NewTimer(time.Minute)
timer.Stop()
var errCount int
for {
- msg, err := c.Receive(ctx)
+ msg, err := c.pubSub.Receive(ctx)
if err != nil {
if err == pool.ErrClosed {
close(c.msgCh)
@@ -548,7 +595,7 @@
case *Pong:
// Ignore.
case *Message:
- timer.Reset(chanSendTimeout)
+ timer.Reset(c.chanSendTimeout)
select {
case c.msgCh <- msg:
if !timer.Stop() {
@@ -556,30 +603,28 @@
}
case <-timer.C:
internal.Logger.Printf(
- c.getContext(),
- "redis: %s channel is full for %s (message is dropped)",
- c,
- chanSendTimeout,
- )
+ ctx, "redis: %s channel is full for %s (message is dropped)",
+ c, c.chanSendTimeout)
}
default:
- internal.Logger.Printf(c.getContext(), "redis: unknown message type: %T", msg)
+ internal.Logger.Printf(ctx, "redis: unknown message type: %T", msg)
}
}
}()
}
// initAllChan must be in sync with initMsgChan.
-func (c *PubSub) initAllChan(size int) {
+func (c *channel) initAllChan() {
ctx := context.TODO()
- c.allCh = make(chan interface{}, size)
+ c.allCh = make(chan interface{}, c.chanSize)
+
go func() {
- timer := time.NewTimer(pingTimeout)
+ timer := time.NewTimer(time.Minute)
timer.Stop()
var errCount int
for {
- msg, err := c.Receive(ctx)
+ msg, err := c.pubSub.Receive(ctx)
if err != nil {
if err == pool.ErrClosed {
close(c.allCh)
@@ -601,29 +646,23 @@
}
switch msg := msg.(type) {
- case *Subscription:
- c.sendMessage(msg, timer)
case *Pong:
// Ignore.
- case *Message:
- c.sendMessage(msg, timer)
+ case *Subscription, *Message:
+ timer.Reset(c.chanSendTimeout)
+ select {
+ case c.allCh <- msg:
+ if !timer.Stop() {
+ <-timer.C
+ }
+ case <-timer.C:
+ internal.Logger.Printf(
+ ctx, "redis: %s channel is full for %s (message is dropped)",
+ c, c.chanSendTimeout)
+ }
default:
- internal.Logger.Printf(c.getContext(), "redis: unknown message type: %T", msg)
+ internal.Logger.Printf(ctx, "redis: unknown message type: %T", msg)
}
}
}()
}
-
-func (c *PubSub) sendMessage(msg interface{}, timer *time.Timer) {
- timer.Reset(pingTimeout)
- select {
- case c.allCh <- msg:
- if !timer.Stop() {
- <-timer.C
- }
- case <-timer.C:
- internal.Logger.Printf(
- c.getContext(),
- "redis: %s channel is full for %s (message is dropped)", c, pingTimeout)
- }
-}
diff --git a/vendor/github.com/go-redis/redis/v8/redis.go b/vendor/github.com/go-redis/redis/v8/redis.go
index efad7f1..bcf8a2a 100644
--- a/vendor/github.com/go-redis/redis/v8/redis.go
+++ b/vendor/github.com/go-redis/redis/v8/redis.go
@@ -2,14 +2,14 @@
import (
"context"
+ "errors"
"fmt"
+ "sync/atomic"
"time"
"github.com/go-redis/redis/v8/internal"
"github.com/go-redis/redis/v8/internal/pool"
"github.com/go-redis/redis/v8/internal/proto"
- "go.opentelemetry.io/otel/api/trace"
- "go.opentelemetry.io/otel/label"
)
// Nil reply returned by Redis when key does not exist.
@@ -51,9 +51,7 @@
ctx context.Context, cmd Cmder, fn func(context.Context, Cmder) error,
) error {
if len(hs.hooks) == 0 {
- err := hs.withContext(ctx, func() error {
- return fn(ctx, cmd)
- })
+ err := fn(ctx, cmd)
cmd.SetErr(err)
return err
}
@@ -69,9 +67,7 @@
}
if retErr == nil {
- retErr = hs.withContext(ctx, func() error {
- return fn(ctx, cmd)
- })
+ retErr = fn(ctx, cmd)
cmd.SetErr(retErr)
}
@@ -89,9 +85,7 @@
ctx context.Context, cmds []Cmder, fn func(context.Context, []Cmder) error,
) error {
if len(hs.hooks) == 0 {
- err := hs.withContext(ctx, func() error {
- return fn(ctx, cmds)
- })
+ err := fn(ctx, cmds)
return err
}
@@ -106,9 +100,7 @@
}
if retErr == nil {
- retErr = hs.withContext(ctx, func() error {
- return fn(ctx, cmds)
- })
+ retErr = fn(ctx, cmds)
}
for hookIndex--; hookIndex >= 0; hookIndex-- {
@@ -128,23 +120,6 @@
return hs.processPipeline(ctx, cmds, fn)
}
-func (hs hooks) withContext(ctx context.Context, fn func() error) error {
- done := ctx.Done()
- if done == nil {
- return fn()
- }
-
- errc := make(chan error, 1)
- go func() { errc <- fn() }()
-
- select {
- case <-done:
- return ctx.Err()
- case err := <-errc:
- return err
- }
-}
-
//------------------------------------------------------------------------------
type baseClient struct {
@@ -225,12 +200,9 @@
return cn, nil
}
- err = internal.WithSpan(ctx, "redis.init_conn", func(ctx context.Context, span trace.Span) error {
- return c.initConn(ctx, cn)
- })
- if err != nil {
+ if err := c.initConn(ctx, cn); err != nil {
c.connPool.Remove(ctx, cn, err)
- if err := internal.Unwrap(err); err != nil {
+ if err := errors.Unwrap(err); err != nil {
return nil, err
}
return nil, err
@@ -289,7 +261,7 @@
c.opt.Limiter.ReportResult(err)
}
- if isBadConn(err, false) {
+ if isBadConn(err, false, c.opt.Addr) {
c.connPool.Remove(ctx, cn, err)
} else {
c.connPool.Put(ctx, cn)
@@ -299,25 +271,36 @@
func (c *baseClient) withConn(
ctx context.Context, fn func(context.Context, *pool.Conn) error,
) error {
- return internal.WithSpan(ctx, "redis.with_conn", func(ctx context.Context, span trace.Span) error {
- cn, err := c.getConn(ctx)
- if err != nil {
- return err
- }
+ cn, err := c.getConn(ctx)
+ if err != nil {
+ return err
+ }
- if span.IsRecording() {
- if remoteAddr := cn.RemoteAddr(); remoteAddr != nil {
- span.SetAttributes(label.String("net.peer.ip", remoteAddr.String()))
- }
- }
+ defer func() {
+ c.releaseConn(ctx, cn, err)
+ }()
- defer func() {
- c.releaseConn(ctx, cn, err)
- }()
+ done := ctx.Done() //nolint:ifshort
+ if done == nil {
err = fn(ctx, cn)
return err
- })
+ }
+
+ errc := make(chan error, 1)
+ go func() { errc <- fn(ctx, cn) }()
+
+ select {
+ case <-done:
+ _ = cn.Close()
+ // Wait for the goroutine to finish and send something.
+ <-errc
+
+ err = ctx.Err()
+ return err
+ case err = <-errc:
+ return err
+ }
}
func (c *baseClient) process(ctx context.Context, cmd Cmder) error {
@@ -325,45 +308,50 @@
for attempt := 0; attempt <= c.opt.MaxRetries; attempt++ {
attempt := attempt
- var retry bool
- err := internal.WithSpan(ctx, "redis.process", func(ctx context.Context, span trace.Span) error {
- if attempt > 0 {
- if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
- return err
- }
- }
-
- retryTimeout := true
- err := c.withConn(ctx, func(ctx context.Context, cn *pool.Conn) error {
- err := cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
- return writeCmd(wr, cmd)
- })
- if err != nil {
- return err
- }
-
- err = cn.WithReader(ctx, c.cmdTimeout(cmd), cmd.readReply)
- if err != nil {
- retryTimeout = cmd.readTimeout() == nil
- return err
- }
-
- return nil
- })
- if err == nil {
- return nil
- }
- retry = shouldRetry(err, retryTimeout)
- return err
- })
+ retry, err := c._process(ctx, cmd, attempt)
if err == nil || !retry {
return err
}
+
lastErr = err
}
return lastErr
}
+func (c *baseClient) _process(ctx context.Context, cmd Cmder, attempt int) (bool, error) {
+ if attempt > 0 {
+ if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
+ return false, err
+ }
+ }
+
+ retryTimeout := uint32(1)
+ err := c.withConn(ctx, func(ctx context.Context, cn *pool.Conn) error {
+ err := cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
+ return writeCmd(wr, cmd)
+ })
+ if err != nil {
+ return err
+ }
+
+ err = cn.WithReader(ctx, c.cmdTimeout(cmd), cmd.readReply)
+ if err != nil {
+ if cmd.readTimeout() == nil {
+ atomic.StoreUint32(&retryTimeout, 1)
+ }
+ return err
+ }
+
+ return nil
+ })
+ if err == nil {
+ return false, nil
+ }
+
+ retry := shouldRetry(err, atomic.LoadUint32(&retryTimeout) == 1)
+ return retry, err
+}
+
func (c *baseClient) retryBackoff(attempt int) time.Duration {
return internal.RetryBackoff(attempt, c.opt.MinRetryBackoff, c.opt.MaxRetryBackoff)
}
@@ -722,7 +710,9 @@
hooks // TODO: inherit hooks
}
-// Conn is like Client, but its pool contains single connection.
+// Conn represents a single Redis connection rather than a pool of connections.
+// Prefer running commands from Client unless there is a specific need
+// for a continuous single Redis connection.
type Conn struct {
*conn
ctx context.Context
diff --git a/vendor/github.com/go-redis/redis/v8/renovate.json b/vendor/github.com/go-redis/redis/v8/renovate.json
deleted file mode 100644
index f45d8f1..0000000
--- a/vendor/github.com/go-redis/redis/v8/renovate.json
+++ /dev/null
@@ -1,5 +0,0 @@
-{
- "extends": [
- "config:base"
- ]
-}
diff --git a/vendor/github.com/go-redis/redis/v8/ring.go b/vendor/github.com/go-redis/redis/v8/ring.go
index 34d05f3..4df00fc 100644
--- a/vendor/github.com/go-redis/redis/v8/ring.go
+++ b/vendor/github.com/go-redis/redis/v8/ring.go
@@ -12,7 +12,8 @@
"time"
"github.com/cespare/xxhash/v2"
- "github.com/dgryski/go-rendezvous"
+ rendezvous "github.com/dgryski/go-rendezvous" //nolint
+
"github.com/go-redis/redis/v8/internal"
"github.com/go-redis/redis/v8/internal/hashtag"
"github.com/go-redis/redis/v8/internal/pool"
@@ -78,6 +79,9 @@
ReadTimeout time.Duration
WriteTimeout time.Duration
+ // PoolFIFO uses FIFO mode for each node connection pool GET/PUT (default LIFO).
+ PoolFIFO bool
+
PoolSize int
MinIdleConns int
MaxConnAge time.Duration
@@ -138,6 +142,7 @@
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
+ PoolFIFO: opt.PoolFIFO,
PoolSize: opt.PoolSize,
MinIdleConns: opt.MinIdleConns,
MaxConnAge: opt.MaxConnAge,
@@ -571,7 +576,7 @@
}
info := cmdsInfo[name]
if info == nil {
- internal.Logger.Printf(c.Context(), "info for cmd=%s not found", name)
+ internal.Logger.Printf(ctx, "info for cmd=%s not found", name)
}
return info
}
diff --git a/vendor/github.com/go-redis/redis/v8/script.go b/vendor/github.com/go-redis/redis/v8/script.go
index 07ed482..5cab18d 100644
--- a/vendor/github.com/go-redis/redis/v8/script.go
+++ b/vendor/github.com/go-redis/redis/v8/script.go
@@ -8,7 +8,7 @@
"strings"
)
-type scripter interface {
+type Scripter interface {
Eval(ctx context.Context, script string, keys []string, args ...interface{}) *Cmd
EvalSha(ctx context.Context, sha1 string, keys []string, args ...interface{}) *Cmd
ScriptExists(ctx context.Context, hashes ...string) *BoolSliceCmd
@@ -16,9 +16,9 @@
}
var (
- _ scripter = (*Client)(nil)
- _ scripter = (*Ring)(nil)
- _ scripter = (*ClusterClient)(nil)
+ _ Scripter = (*Client)(nil)
+ _ Scripter = (*Ring)(nil)
+ _ Scripter = (*ClusterClient)(nil)
)
type Script struct {
@@ -38,25 +38,25 @@
return s.hash
}
-func (s *Script) Load(ctx context.Context, c scripter) *StringCmd {
+func (s *Script) Load(ctx context.Context, c Scripter) *StringCmd {
return c.ScriptLoad(ctx, s.src)
}
-func (s *Script) Exists(ctx context.Context, c scripter) *BoolSliceCmd {
+func (s *Script) Exists(ctx context.Context, c Scripter) *BoolSliceCmd {
return c.ScriptExists(ctx, s.hash)
}
-func (s *Script) Eval(ctx context.Context, c scripter, keys []string, args ...interface{}) *Cmd {
+func (s *Script) Eval(ctx context.Context, c Scripter, keys []string, args ...interface{}) *Cmd {
return c.Eval(ctx, s.src, keys, args...)
}
-func (s *Script) EvalSha(ctx context.Context, c scripter, keys []string, args ...interface{}) *Cmd {
+func (s *Script) EvalSha(ctx context.Context, c Scripter, keys []string, args ...interface{}) *Cmd {
return c.EvalSha(ctx, s.hash, keys, args...)
}
// Run optimistically uses EVALSHA to run the script. If script does not exist
// it is retried using EVAL.
-func (s *Script) Run(ctx context.Context, c scripter, keys []string, args ...interface{}) *Cmd {
+func (s *Script) Run(ctx context.Context, c Scripter, keys []string, args ...interface{}) *Cmd {
r := s.EvalSha(ctx, c, keys, args...)
if err := r.Err(); err != nil && strings.HasPrefix(err.Error(), "NOSCRIPT ") {
return s.Eval(ctx, c, keys, args...)
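With scripter now exported as Scripter, helper code can accept any client type that satisfies it; a minimal sketch (the key name and Lua script are illustrative):

    var incrBy = redis.NewScript(`return redis.call("INCRBY", KEYS[1], ARGV[1])`)

    // Works with *redis.Client, *redis.Ring, and *redis.ClusterClient alike.
    func incrementCounter(ctx context.Context, c redis.Scripter) (int64, error) {
        // Run uses EVALSHA first and falls back to EVAL when the script is not cached.
        return incrBy.Run(ctx, c, []string{"counter"}, 2).Int64()
    }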
diff --git a/vendor/github.com/go-redis/redis/v8/sentinel.go b/vendor/github.com/go-redis/redis/v8/sentinel.go
index 7db9843..ec6221d 100644
--- a/vendor/github.com/go-redis/redis/v8/sentinel.go
+++ b/vendor/github.com/go-redis/redis/v8/sentinel.go
@@ -23,7 +23,13 @@
MasterName string
// A seed list of host:port addresses of sentinel nodes.
SentinelAddrs []string
- // Sentinel password from "requirepass <password>" (if enabled) in Sentinel configuration
+
+ // If specified with SentinelPassword, enables ACL-based authentication (via
+ // AUTH <user> <pass>).
+ SentinelUsername string
+ // Sentinel password from "requirepass <password>" (if enabled) in Sentinel
+ // configuration, or, if SentinelUsername is also supplied, used for ACL-based
+ // authentication.
SentinelPassword string
// Allows routing read-only commands to the closest master or slave node.
@@ -36,6 +42,10 @@
// Route all commands to slave read-only nodes.
SlaveOnly bool
+ // Use slaves that are disconnected from the master when no connected slaves are available.
+ // Currently this option only takes effect in the RandomSlaveAddr function.
+ UseDisconnectedSlaves bool
+
// Following options are copied from Options struct.
Dialer func(ctx context.Context, network, addr string) (net.Conn, error)
@@ -53,6 +63,9 @@
ReadTimeout time.Duration
WriteTimeout time.Duration
+ // PoolFIFO uses FIFO mode for each node connection pool GET/PUT (default LIFO).
+ PoolFIFO bool
+
PoolSize int
MinIdleConns int
MaxConnAge time.Duration
@@ -82,6 +95,7 @@
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
+ PoolFIFO: opt.PoolFIFO,
PoolSize: opt.PoolSize,
PoolTimeout: opt.PoolTimeout,
IdleTimeout: opt.IdleTimeout,
@@ -101,6 +115,7 @@
OnConnect: opt.OnConnect,
DB: 0,
+ Username: opt.SentinelUsername,
Password: opt.SentinelPassword,
MaxRetries: opt.MaxRetries,
@@ -111,6 +126,7 @@
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
+ PoolFIFO: opt.PoolFIFO,
PoolSize: opt.PoolSize,
PoolTimeout: opt.PoolTimeout,
IdleTimeout: opt.IdleTimeout,
@@ -142,6 +158,7 @@
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
+ PoolFIFO: opt.PoolFIFO,
PoolSize: opt.PoolSize,
PoolTimeout: opt.PoolTimeout,
IdleTimeout: opt.IdleTimeout,
@@ -167,6 +184,10 @@
sentinelAddrs := make([]string, len(failoverOpt.SentinelAddrs))
copy(sentinelAddrs, failoverOpt.SentinelAddrs)
+ rand.Shuffle(len(sentinelAddrs), func(i, j int) {
+ sentinelAddrs[i], sentinelAddrs[j] = sentinelAddrs[j], sentinelAddrs[i]
+ })
+
failover := &sentinelFailover{
opt: failoverOpt,
sentinelAddrs: sentinelAddrs,
@@ -177,11 +198,14 @@
opt.init()
connPool := newConnPool(opt)
+
+ failover.mu.Lock()
failover.onFailover = func(ctx context.Context, addr string) {
_ = connPool.Filter(func(cn *pool.Conn) bool {
return cn.RemoteAddr().String() != addr
})
}
+ failover.mu.Unlock()
c := Client{
baseClient: newBaseClient(opt, connPool),
@@ -208,14 +232,21 @@
failover.trySwitchMaster(ctx, addr)
}
}
-
if err != nil {
return nil, err
}
if failover.opt.Dialer != nil {
return failover.opt.Dialer(ctx, network, addr)
}
- return net.DialTimeout("tcp", addr, failover.opt.DialTimeout)
+
+ netDialer := &net.Dialer{
+ Timeout: failover.opt.DialTimeout,
+ KeepAlive: 5 * time.Minute,
+ }
+ if failover.opt.TLSConfig == nil {
+ return netDialer.DialContext(ctx, network, addr)
+ }
+ return tls.DialWithDialer(netDialer, network, addr, failover.opt.TLSConfig)
}
}
@@ -430,10 +461,22 @@
}
func (c *sentinelFailover) RandomSlaveAddr(ctx context.Context) (string, error) {
- addresses, err := c.slaveAddrs(ctx)
+ if c.opt == nil {
+ return "", errors.New("opt is nil")
+ }
+
+ addresses, err := c.slaveAddrs(ctx, false)
if err != nil {
return "", err
}
+
+ if len(addresses) == 0 && c.opt.UseDisconnectedSlaves {
+ addresses, err = c.slaveAddrs(ctx, true)
+ if err != nil {
+ return "", err
+ }
+ }
+
if len(addresses) == 0 {
return c.MasterAddr(ctx)
}
@@ -482,10 +525,10 @@
return addr, nil
}
- return "", errors.New("redis: all sentinels are unreachable")
+ return "", errors.New("redis: all sentinels specified in configuration are unreachable")
}
-func (c *sentinelFailover) slaveAddrs(ctx context.Context) ([]string, error) {
+func (c *sentinelFailover) slaveAddrs(ctx context.Context, useDisconnected bool) ([]string, error) {
c.mu.RLock()
sentinel := c.sentinel
c.mu.RUnlock()
@@ -508,6 +551,8 @@
_ = c.closeSentinel()
}
+ var sentinelReachable bool
+
for i, sentinelAddr := range c.sentinelAddrs {
sentinel := NewSentinelClient(c.opt.sentinelOptions(sentinelAddr))
@@ -518,16 +563,22 @@
_ = sentinel.Close()
continue
}
-
+ sentinelReachable = true
+ addrs := parseSlaveAddrs(slaves, useDisconnected)
+ if len(addrs) == 0 {
+ continue
+ }
// Push working sentinel to the top.
c.sentinelAddrs[0], c.sentinelAddrs[i] = c.sentinelAddrs[i], c.sentinelAddrs[0]
c.setSentinel(ctx, sentinel)
- addrs := parseSlaveAddrs(slaves)
return addrs, nil
}
- return []string{}, errors.New("redis: all sentinels are unreachable")
+ if sentinelReachable {
+ return []string{}, nil
+ }
+ return []string{}, errors.New("redis: all sentinels specified in configuration are unreachable")
}
func (c *sentinelFailover) getMasterAddr(ctx context.Context, sentinel *SentinelClient) string {
@@ -547,12 +598,11 @@
c.opt.MasterName, err)
return []string{}
}
- return parseSlaveAddrs(addrs)
+ return parseSlaveAddrs(addrs, false)
}
-func parseSlaveAddrs(addrs []interface{}) []string {
+func parseSlaveAddrs(addrs []interface{}, keepDisconnected bool) []string {
nodes := make([]string, 0, len(addrs))
-
for _, node := range addrs {
ip := ""
port := ""
@@ -574,8 +624,12 @@
for _, flag := range flags {
switch flag {
- case "s_down", "o_down", "disconnected":
+ case "s_down", "o_down":
isDown = true
+ case "disconnected":
+ if !keepDisconnected {
+ isDown = true
+ }
}
}
@@ -589,7 +643,7 @@
func (c *sentinelFailover) trySwitchMaster(ctx context.Context, addr string) {
c.mu.RLock()
- currentAddr := c._masterAddr
+ currentAddr := c._masterAddr //nolint:ifshort
c.mu.RUnlock()
if addr == currentAddr {
@@ -630,15 +684,22 @@
}
for _, sentinel := range sentinels {
vals := sentinel.([]interface{})
+ var ip, port string
for i := 0; i < len(vals); i += 2 {
key := vals[i].(string)
- if key == "name" {
- sentinelAddr := vals[i+1].(string)
- if !contains(c.sentinelAddrs, sentinelAddr) {
- internal.Logger.Printf(ctx, "sentinel: discovered new sentinel=%q for master=%q",
- sentinelAddr, c.opt.MasterName)
- c.sentinelAddrs = append(c.sentinelAddrs, sentinelAddr)
- }
+ switch key {
+ case "ip":
+ ip = vals[i+1].(string)
+ case "port":
+ port = vals[i+1].(string)
+ }
+ }
+ if ip != "" && port != "" {
+ sentinelAddr := net.JoinHostPort(ip, port)
+ if !contains(c.sentinelAddrs, sentinelAddr) {
+ internal.Logger.Printf(ctx, "sentinel: discovered new sentinel=%q for master=%q",
+ sentinelAddr, c.opt.MasterName)
+ c.sentinelAddrs = append(c.sentinelAddrs, sentinelAddr)
}
}
}
@@ -646,6 +707,7 @@
func (c *sentinelFailover) listen(pubsub *PubSub) {
ctx := context.TODO()
+
if c.onUpdate != nil {
c.onUpdate(ctx)
}
@@ -701,7 +763,7 @@
Addr: masterAddr,
}}
- slaveAddrs, err := failover.slaveAddrs(ctx)
+ slaveAddrs, err := failover.slaveAddrs(ctx, false)
if err != nil {
return nil, err
}
@@ -723,9 +785,12 @@
}
c := NewClusterClient(opt)
+
+ failover.mu.Lock()
failover.onUpdate = func(ctx context.Context) {
c.ReloadState(ctx)
}
+ failover.mu.Unlock()
return c
}
diff --git a/vendor/github.com/go-redis/redis/v8/tx.go b/vendor/github.com/go-redis/redis/v8/tx.go
index ad825c6..8c9d872 100644
--- a/vendor/github.com/go-redis/redis/v8/tx.go
+++ b/vendor/github.com/go-redis/redis/v8/tx.go
@@ -13,7 +13,8 @@
// Tx implements Redis transactions as described in
// http://redis.io/topics/transactions. It's NOT safe for concurrent use
// by multiple goroutines, because Exec resets list of watched keys.
-// If you don't need WATCH it is better to use Pipeline.
+//
+// If you don't need WATCH, use Pipeline instead.
type Tx struct {
baseClient
cmdable
@@ -65,16 +66,13 @@
// The transaction is automatically closed when fn exits.
func (c *Client) Watch(ctx context.Context, fn func(*Tx) error, keys ...string) error {
tx := c.newTx(ctx)
+ defer tx.Close(ctx)
if len(keys) > 0 {
if err := tx.Watch(ctx, keys...).Err(); err != nil {
- _ = tx.Close(ctx)
return err
}
}
-
- err := fn(tx)
- _ = tx.Close(ctx)
- return err
+ return fn(tx)
}
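The caller-facing Watch pattern is unchanged by the refactor above; a sketch, with the key name and increment logic invented for illustration:

    err := rdb.Watch(ctx, func(tx *redis.Tx) error {
        n, err := tx.Get(ctx, "counter").Int()
        if err != nil && err != redis.Nil {
            return err
        }
        // The MULTI/EXEC block fails if "counter" changed since WATCH.
        _, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
            pipe.Set(ctx, "counter", n+1, 0)
            return nil
        })
        return err
    }, "counter")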
// Close closes the transaction, releasing any open resources.
diff --git a/vendor/github.com/go-redis/redis/v8/universal.go b/vendor/github.com/go-redis/redis/v8/universal.go
index 5f0e1e3..c89b3e5 100644
--- a/vendor/github.com/go-redis/redis/v8/universal.go
+++ b/vendor/github.com/go-redis/redis/v8/universal.go
@@ -25,6 +25,7 @@
Username string
Password string
+ SentinelUsername string
SentinelPassword string
MaxRetries int
@@ -35,6 +36,9 @@
ReadTimeout time.Duration
WriteTimeout time.Duration
+ // PoolFIFO uses FIFO mode for each node connection pool GET/PUT (default LIFO).
+ PoolFIFO bool
+
PoolSize int
MinIdleConns int
MaxConnAge time.Duration
@@ -53,6 +57,7 @@
// The sentinel master name.
// Only failover clients.
+
MasterName string
}
@@ -82,6 +87,7 @@
DialTimeout: o.DialTimeout,
ReadTimeout: o.ReadTimeout,
WriteTimeout: o.WriteTimeout,
+ PoolFIFO: o.PoolFIFO,
PoolSize: o.PoolSize,
MinIdleConns: o.MinIdleConns,
MaxConnAge: o.MaxConnAge,
@@ -109,6 +115,7 @@
DB: o.DB,
Username: o.Username,
Password: o.Password,
+ SentinelUsername: o.SentinelUsername,
SentinelPassword: o.SentinelPassword,
MaxRetries: o.MaxRetries,
@@ -119,6 +126,7 @@
ReadTimeout: o.ReadTimeout,
WriteTimeout: o.WriteTimeout,
+ PoolFIFO: o.PoolFIFO,
PoolSize: o.PoolSize,
MinIdleConns: o.MinIdleConns,
MaxConnAge: o.MaxConnAge,
@@ -154,6 +162,7 @@
ReadTimeout: o.ReadTimeout,
WriteTimeout: o.WriteTimeout,
+ PoolFIFO: o.PoolFIFO,
PoolSize: o.PoolSize,
MinIdleConns: o.MinIdleConns,
MaxConnAge: o.MaxConnAge,
@@ -168,9 +177,9 @@
// --------------------------------------------------------------------
// UniversalClient is an abstract client which - based on the provided options -
-// can connect to either clusters, or sentinel-backed failover instances
-// or simple single-instance servers. This can be useful for testing
-// cluster-specific applications locally.
+// represents either a ClusterClient, a FailoverClient, or a single-node Client.
+// This can be useful for testing cluster-specific applications locally or for using
+// different clients in different environments.
type UniversalClient interface {
Cmdable
Context() context.Context
@@ -190,12 +199,12 @@
_ UniversalClient = (*Ring)(nil)
)
-// NewUniversalClient returns a new multi client. The type of client returned depends
-// on the following three conditions:
+// NewUniversalClient returns a new multi client. The type of the returned client depends
+// on the following conditions:
//
-// 1. if a MasterName is passed a sentinel-backed FailoverClient will be returned
-// 2. if the number of Addrs is two or more, a ClusterClient will be returned
-// 3. otherwise, a single-node redis Client will be returned.
+// 1. If the MasterName option is specified, a sentinel-backed FailoverClient is returned.
+// 2. If the number of Addrs is two or more, a ClusterClient is returned.
+// 3. Otherwise, a single-node Client is returned.
func NewUniversalClient(opts *UniversalOptions) UniversalClient {
if opts.MasterName != "" {
return NewFailoverClient(opts.Failover())
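A sketch of the selection rules above; addresses and the master name are placeholders:

    // One address and no MasterName: a single-node *redis.Client is returned.
    single := redis.NewUniversalClient(&redis.UniversalOptions{
        Addrs: []string{"localhost:6379"},
    })

    // MasterName set: a sentinel-backed failover client is returned.
    failover := redis.NewUniversalClient(&redis.UniversalOptions{
        MasterName: "mymaster",
        Addrs:      []string{"sentinel1:26379", "sentinel2:26379"},
        PoolFIFO:   true,
    })
    _, _ = single, failover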
diff --git a/vendor/github.com/go-redis/redis/v8/version.go b/vendor/github.com/go-redis/redis/v8/version.go
new file mode 100644
index 0000000..112c9a2
--- /dev/null
+++ b/vendor/github.com/go-redis/redis/v8/version.go
@@ -0,0 +1,6 @@
+package redis
+
+// Version returns the current release version.
+func Version() string {
+ return "8.11.5"
+}