Model-Driven Telemetry and Pipeline

In the last two videos, we went over the basic concepts of model-driven telemetry and presented a demo using pipeline and other tools.

🤖 Here’s what we used: XRv (IOS-XR), the BIRD routing daemon, pipeline, InfluxDB, and Grafana.

XRv IOS-XR

This is the XRv configuration. Notice that I made a point of configuring NTP: I want to make sure the telemetry data goes out with the correct UTC timestamps, which will make our lives easier when working with InfluxDB and Grafana.
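A minimal sketch of the relevant pieces looks like this (the NTP server, collector address, port, and group names are placeholders, not the exact values from the demo):

    ntp
     server 10.0.0.1 prefer
    !
    telemetry model-driven
     destination-group DGROUP1
      address-family ipv4 10.0.0.2 port 5958
       encoding self-describing-gpb
       protocol tcp
      !
     !
     sensor-group SENSORS
      sensor-path Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters
     !
     sensor-group EVENT-SENSORS
      sensor-path Cisco-IOS-XR-infra-syslog-oper:syslog/messages/message
     !
     subscription SUB1
      sensor-group-id SENSORS sample-interval 5000
      sensor-group-id EVENT-SENSORS sample-interval 0
      destination-id DGROUP1
     !
    !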

Also, notice that the sensor-group called EVENT-SENSORS is configured with a sample-interval of 0 where it’s referenced under the subscription. This makes it behave like an “event-driven” sensor, which is why we’re using it for our syslog. In the videos referenced above, we had our event sensors inside an additional subscription that referenced the same destination-group, which achieves the same outcome; as you can see, there’s some good flexibility within the [telemetry model-driven] hierarchy.

Note: You can quickly see the YANG models available on your IOS-XR device by looking in the /pkg/yang directory from the router shell:
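For example (the hostname is a placeholder, and the file list will vary by release):

    RP/0/RP0/CPU0:XRv#run ls /pkg/yang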

BIRD Routing Daemon

The strategy for BIRD will be to import all the routes under [protocol static] and then use filters to split the ones we’re going to advertise via BGP from the ones advertised via OSPF. For BIRD to accept our static routes, they have to be in the format “route prefix/prefix-length via next-hop”, so we can use a small Python3 script to generate the route files.
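Something along these lines does the job (the next-hop address is a placeholder; the file names match the ones used below):

    #!/usr/bin/env python3
    # Generate BIRD static route statements: "route <prefix> via <next-hop>;"

    def gen_routes(filename, base, count, next_hop):
        # Write one /32 static route per line in BIRD's syntax.
        with open(filename, "w") as f:
            for i in range(count):
                third, fourth = divmod(i, 256)
                f.write(f"route {base}.{third}.{fourth}/32 via {next_hop};\n")

    if __name__ == "__main__":
        gen_routes("bird.bgp", "10.44", 20000, "192.168.122.1")   # advertised via BGP
        gen_routes("bird.ospf", "10.88", 15000, "192.168.122.1")  # advertised via OSPF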

And then:
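Assuming the script above is saved as make_routes.py (a hypothetical name):

    $ python3 make_routes.py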

The above will generate 20K 10.44.X.X routes and 15K 10.88.X.X routes in files called bird.bgp and bird.ospf, respectively.

And this is the configuration file we’re using for BIRD:
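In sketch form, assuming BIRD 1.6 syntax (the router ID, AS numbers, neighbor address, and interface name are placeholders):

    # bird.conf sketch
    router id 10.0.0.5;

    protocol kernel {
        scan time 60;
        import none;
    }

    protocol device {
        scan time 60;
    }

    # Pull the generated route files in under a single static protocol
    protocol static {
        include "bird.bgp";
        include "bird.ospf";
    }

    # Split the routes: 10.44.0.0/16 more-specifics go to BGP,
    # 10.88.0.0/16 more-specifics go to OSPF
    filter to_bgp {
        if net ~ [ 10.44.0.0/16+ ] then accept;
        reject;
    }

    filter to_ospf {
        if net ~ [ 10.88.0.0/16+ ] then accept;
        reject;
    }

    protocol bgp uplink {
        local as 65000;
        neighbor 10.0.0.10 as 65001;
        import none;
        export filter to_bgp;
    }

    protocol ospf {
        export filter to_ospf;
        area 0 {
            interface "eth1" {
            };
        };
    }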

The official site has good documentation that will help you get started if this is your first time working with BIRD.

Pipeline

I found the telemetry tutorials on xrdocs to be the most helpful when it comes to getting started with pipeline; the links to the specific articles are listed at the end of this post.

This is the pipeline configuration file; the xport_input section called [fromRouters] matches the settings of the destination-group we configured previously on the XRv router:
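A trimmed-down sketch of the two relevant sections (the output section name, file path, and InfluxDB address are placeholders; the listen port has to match the port in the router’s destination-group):

    [fromRouters]
    stage = xport_input
    type = tcp
    encap = st
    listen = :5958

    [toInflux]
    stage = xport_output
    type = metrics
    file = /path/to/metrics.json
    output = influx
    influx = http://10.0.0.3:8086
    database = mdt
    workers = 2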

As for the metrics.json file, pipeline comes with a few sensors already defined, so I’ve excluded those from this gist for brevity:
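For illustration, an entry for the interface counters sensor-path might look like this (the field selection is just an example):

    [
        {
            "basepath" : "Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters",
            "spec" : {
                "fields" : [
                    {"name" : "interface-name", "tag" : true},
                    {"name" : "bytes-received"},
                    {"name" : "bytes-sent"},
                    {"name" : "packets-received"},
                    {"name" : "packets-sent"}
                ]
            }
        }
    ]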

InfluxDB and Grafana

Pipeline comes with a shell script to quickly spin up Grafana and Prometheus containers, so I got rid of the Prometheus line and added one for InfluxDB instead. The shell script is called run.sh and can be found under bigmuddy-network-telemetry-pipeline/tools/monitor:
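After the swap, the container lines look something like this (the image names and options are approximate):

    #!/bin/bash
    # Run both containers on the host network, no port mapping needed
    docker run -d --net=host --name=influxdb influxdb
    docker run -d --net=host --name=grafana grafana/grafana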

Notice we are not doing any port publishing with -p, because the network type --net is set to host; the ports exposed in the respective container images are automatically exposed on the host’s network (my 10.0.0.0/24 LAN).

InfluxDB

The shell script will start the container exposing port 8086 and will name it “influxdb”. Then we can go inside the container and do what we need to do using the Influx CLI:
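For example, to create and inspect the database pipeline writes to (assumed to be called mdt here, matching the pipeline sketch above):

    $ docker exec -it influxdb influx
    > CREATE DATABASE mdt
    > SHOW DATABASES
    > USE mdt
    > SHOW MEASUREMENTS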

This is an example query from Postman; querying with the timestamp like this comes in handy, especially when trying to figure out whether the telemetry data is making it into the database:
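A hypothetical version of such a query against InfluxDB’s HTTP API, asking for data points from the last five minutes (the host, database, and measurement name are placeholders; pipeline names measurements after the sensor-path):

    GET http://10.0.0.3:8086/query
      db: mdt
      q:  SELECT * FROM "Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters"
          WHERE time > now() - 5m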

Grafana

Grafana by default will be exposed on port 3000, so you can simply browse to it after spinning up the container. Make sure you configure the data source as an InfluxDB type and be mindful of the HTTP access type (direct/proxy) if you’re reaching the Grafana URL from an external machine and not from localhost.

Here are a few query strings I’m using for the dashboard that I showed in the demo:
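They follow this general shape (the interface name is a placeholder, interface-name is assumed to be a tag per the metrics.json sketch above, and $timeFilter and $__interval are Grafana’s built-in template variables):

    SELECT derivative(mean("bytes-received"), 1s)
      FROM "Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters"
      WHERE "interface-name" = 'GigabitEthernet0/0/0/0' AND $timeFilter
      GROUP BY time($__interval)

The other panels use the same pattern with different counter fields.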

References and Resources

Everything you need to know about Pipeline

Filtering in Telemetry. Where to apply and why?

Using Pipeline: Integrating with InfluxDB

Configuring Model-Driven Telemetry (MDT)

BIRD 1.6.4 User’s Guide

YangModels repository