Pipelined data routing/transformation engine
The library provides a pipeline engine for stream processing of data, based on the concept of Enterprise Integration Patterns (EIP).
The motivation for this project is simple: to have an easy and clear way of writing ETL-like programs for parallel processing of data. In my case it was a BFS crawler tuned for extraction of specific metadata (see a basic version in the examples folder).
The library provides the following primitives:
All of the primitives are accessible through a DSL.
Method | Signature | Description
---|---|---
Source | `f func(n *Node)` | Function `f`, used to generate exchanges
Filter | `f func(e Exchange, n *Node)` | Function `f`, intercepts exchanges
Process | `workers int` | Number of workers used to process exchanges
To | `route string` | Name of the route to which an exchange is redirected for request-reply execution
WireTap | `route string` | Name of the route to which an exchange is copied
Sink | `f func(e Exchange) error` | Function `f`, used to consume exchanges
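To illustrate the source → filter → sink flow that these primitives describe, here is a minimal self-contained sketch using plain Go channels. It is not the library's API: the `Exchange` struct and the stage functions below are hypothetical stand-ins for the real `Source`, `Filter`, and `Sink` primitives.

```go
package main

import "fmt"

// Exchange is a hypothetical stand-in for the library's message unit.
type Exchange struct{ Body int }

// source emits exchanges, playing the role of the Source primitive.
func source(out chan<- Exchange) {
	for i := 1; i <= 5; i++ {
		out <- Exchange{Body: i}
	}
	close(out)
}

// filter intercepts exchanges, playing the role of the Filter
// primitive: here it passes through only even bodies.
func filter(in <-chan Exchange, out chan<- Exchange) {
	for e := range in {
		if e.Body%2 == 0 {
			out <- e
		}
	}
	close(out)
}

// sink consumes exchanges, playing the role of the Sink primitive.
func sink(in <-chan Exchange) []int {
	var got []int
	for e := range in {
		got = append(got, e.Body)
	}
	return got
}

func main() {
	c1 := make(chan Exchange)
	c2 := make(chan Exchange)
	go source(c1)
	go filter(c1, c2)
	fmt.Println(sink(c2)) // prints [2 4]
}
```

Each stage runs concurrently and communicates over channels, which is the same shape a route built from the DSL primitives takes, with `Process(workers)` fanning a stage out over several goroutines.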
To run the crawler example:

```shell
go run examples/crawler/main.go
```
To collect test coverage, run:

```shell
make code-coverage
```

and see `coverage.html` for details.
```
github.com/Zensey/go-data-routing/node.go            (100.0%)
github.com/Zensey/go-data-routing/nodetype_string.go  (75.0%)
github.com/Zensey/go-data-routing/pool.go            (100.0%)
github.com/Zensey/go-data-routing/route_builder.go    (71.8%)
github.com/Zensey/go-data-routing/router_context.go  (100.0%)
github.com/Zensey/go-data-routing/worker.go           (81.8%)
```