Developing and testing integrations for Datadog

Author: Remi Hakim

Published: March 14, 2017

Hi everyone, my name is Remi Hakim. As Ilan said, I’m a lead engineer at Datadog working on the Agent and integrations, and I’m here today to talk about integrations and more specifically, to talk about developing and testing integrations.

So, let’s start with integration numbers.

As of today, we have almost 200 different integrations.

It’s a number that almost doubled over the past two years.

As Alexis mentioned earlier, it’s something that we want to keep growing, we want to keep adding more and more integrations to avoid the blind spots in your monitoring.

Out of those 200 integrations, 82 of them are Agent-based.

What does it mean? It means that they are run as Agent checks.

So, you’re all familiar with the Agent, you know that it supports plugins.

And so, each of these 82 integrations runs as an Agent check.

This means that they’re open source.

You can go on our GitHub public repositories and inspect the source code.

You can contribute today and you can help us in fixing bugs, you can help us add new features, you can help us add new integrations.

And, as a matter of fact, that’s what happened.

Over the past six years, we merged more than 2,000 pull requests from more than 250 contributors.

So, I know we have a few contributors in the room, and I'd really like to thank you very much if you've ever contributed to the Agent, if you've fixed something, or if you've added a new feature to the Agent, to an integration, or to one of our libraries.

Thanks a lot. We wouldn’t have that many integrations without you.

Adding more Datadog integrations

So, what’s next? We want to keep on adding more integrations.

We have 200 integrations now.

So, we want to keep increasing the number because integrations are a core feature of Datadog.

Their simplicity makes it easy to monitor every component of your stack.

You install the Agent, you enable the MySQL integration, the Postgres integration, the Redis integration, the SQL Server integration.

And it just takes a few minutes.

And we want to keep adding integrations, to make sure that we’re not missing something in your stack that would cause you an outage.

So, how do we do that?

How do we double the number of integrations?

Well, we need your help.

We need contributors and experts to help us add new integrations and make the current integrations even better, and for that, we (Datadog) need to do a better job of including your contributions.

So, as Alexis mentioned earlier, in the past we sometimes took too long to get a pull request reviewed or merged, and even when it got merged, there was some friction to release it.

It was taking too long to release your contributions.

And the reason why is that, until recently, the Agent and integrations were just one package.

We were not packaging each and every integration and that’s something that we’re changing as we speak.

So, as of today we’re now starting to package integrations individually.

Which means that you’ll be able to install and update your integrations.

You’ll be able to really update your integrations as soon as the features are ready.

New integrations repositories

So, to do that we created two new repositories.

So, we moved all the integrations code from the Agent’s repository to these two new repositories: integrations core and integrations extras.

So, what's the difference between these two repositories?

So integrations core is the repository where all the current integrations live.

So all the integrations that were living in the Agent repository now live in this one repository.

Those integrations, as usual, will be fully supported by Datadog.

We'll be maintaining them with your help.

But we’re also introducing a new repository which is integrations extras.

The goal of this repository is to make it easy for you to share your own integrations.

We do have many customers who developed their own custom checks for some components for which we didn’t have an integration yet.

But they didn't feel confident sharing them with the community because they were not ready…not consumer-ready.

There was no easy tool to do that.

So, we’re introducing integrations extras to make it easy for you to share your integrations.

The main difference between the core integrations and the extras integrations is that the core integrations will still be bundled with the Agent.

So, we wanted to keep the simplicity that we have today.

When you install the Agent you have one package and out of the box you’ll have all the integrations.

You can just install the Agent and enable the MySQL integration—that will still work.

That will still be the case, but on top of that, you'll be able to individually install and upgrade your integrations.

So, you'll be able to install an integration coming from integrations extras and you'll be able to use it.

You'll be able to upgrade one of the core integrations, and you'll be able to pin the version of an integration.

So, the goal was really to keep the same simplicity.

So, one package that comes with all integrations out of the box, but it also gives you more flexibility.

So, as a user, it won’t change much.

It will just give you more flexibility on what version of the integrations you’ll be using.

For contributors, we developed what we call the "integrations SDK," which is a framework of tools to help you build integrations, test your Agent checks, and package and distribute those integrations.

Demo: How to contribute a new integration

So, let's do a quick demo.

So, I’m going to show you first the integrations core repository.

So, this is the repository that contains all the existing integrations.

As you can see, we have all the integrations.

Each folder…there’s one folder per integration.

So, let’s have a look. Let’s pause a little bit.

That’s better.

Let’s have a look at the Redis integration for example.

So, in this folder you can see there’s a bunch of files.

So, there is a Readme file that describes what the integration does, how to install it, how to configure it.

There's a changelog here that lists all the new features and bug fixes of the integration.

Then we have the actual code of the integration.

So, the Agent check: it's the same one that was living in the dd-agent repository in the past.

It's the same code.

We have an example configuration file. We have a manifest file.

It specifies the version of the integration, the name of the integration, a short description, etc.

Then, we have the requirements file.

So, this is the file that's used to declare the Python dependencies that this integration requires to run.

So, if you’re developing a new integration and it requires a new library that’s not yet shipped with the Agent, you’ll be able to declare it in this file so that the Python dependency will be packaged with the integration and with the Agent.

And then we have the test file and the CI folder.

So the test file is where you define all your tests: you define what you want to assert.

You want to assert that you're collecting the right metrics, etc.

And then, the CI folder mainly contains a rake task that you can use to declare how you're going to set up, in the CI, the component that you want to monitor.

For our CI, we use Travis, and AppVeyor for Windows.

And so, in this Rakefile we'll be able to say, "Okay, I want to run Redis on this port," and we can leverage Docker to do that.

So, it seems like there's a lot of boilerplate here, a lot of code, but basically all it does is use Docker to set up a Redis cluster on Travis.

There are many, many files, and at first glance it can look pretty complicated to do all that.

So, that's why we're providing tools to generate all this boilerplate, so that you can focus on writing code and writing tests, and I'm going to show you an example of how to do that.

Creating a new integration

Here I'm on a different machine, and I have a service running, and I want to monitor this one component.

This component exposes metrics over a web interface.

So, let's have a look. It's returning some JSON and there is a value, and that's the value that I want to collect and send to Datadog to be able to monitor this component.

So, I’m going to create an integration for this one component.

So, this component is called “Hello world.”
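For reference, here is a minimal stand-in for that kind of service. It is purely illustrative: the talk does not show the real demo service, so the port and the {"value": 42} payload below are assumptions, chosen to match the check sketched later.

```python
# Hypothetical stand-in for the demo's "Hello world" service.
# The real service, its port, and its exact JSON shape are not shown
# in the talk; this just serves a single numeric value as JSON.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HelloWorldHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"value": 42}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Serve the JSON payload on port 8080 until interrupted.
    HTTPServer(("0.0.0.0", 8080), HelloWorldHandler).serve_forever()
```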

So, I’m going to create a new integration in integrations extras.

So, to do that I just cloned the integrations extras repository, which has the exact same layout as the core integrations repository I was showing you earlier.

So, I can fork this repository.

And I’m going to create a new integration for that.

So, I can use some of the rake tasks that we wrote for you to generate all of the boilerplate.

So, I can do rake generate:skeleton and then the name of the integration.

So, let's call it "Hello world."

And so, what it does is create all the boilerplate for the files I was telling you about.

It creates the changelog, it creates the requirements file, it creates the integration check code, a test file.

So, let’s have a look, for example, at the test file.

It gets populated with some boilerplate code so that you can actually run your Agent check, write your own tests, etc.

Let’s have a look at the actual code for the check.

Same thing, so most of it is pre-written. You just have to add your logic in the check method.

So, if you have already written an Agent check, you're already familiar with the AgentCheck class and how to implement an Agent check; it's the same thing here.

Nothing changes here.

So, you can add your logic here.
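Roughly, the generated stub follows the usual Agent check pattern: a class that subclasses AgentCheck with a check() method where your collection logic goes. Here is a sketch of that shape, not the exact output of the generator:

```python
# Sketch of the shape of a generated check (names are illustrative).
from checks import AgentCheck  # base class shipped with the Agent


class HelloWorldCheck(AgentCheck):
    def check(self, instance):
        # Your collection logic goes here: query the service you want to
        # monitor, then report metrics with self.gauge(), self.rate(), etc.
        pass
```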

And so, let's write an integration for the component that I was telling you about.

So, I actually did it already, so I’m going to show you what it looks like once it’s done, once I’ve added the logic.

So, let's look at the "Hello world" folder now.

So, here I have all the same files, just in this branch.

I just added the actual logic, so let's have a look at it here: what it does is use requests to query the URL, then parse the JSON, and then send the metric as a gauge to Datadog.

So it sends the value as a gauge to Datadog, with some tags.
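To make that concrete, here's a minimal sketch of what that logic could look like. It is not the exact code from the demo: the URL, metric name, and configuration keys are illustrative, and it assumes the service returns JSON like {"value": 42}.

```python
import requests

from checks import AgentCheck


class HelloWorldCheck(AgentCheck):
    def check(self, instance):
        # The endpoint and tags come from the instance configuration in the
        # integration's YAML file; these key names are assumptions.
        url = instance.get('url', 'http://localhost:8080')
        tags = instance.get('tags', [])

        # Query the component's web interface and parse the JSON payload.
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        value = response.json()['value']

        # Send the value to Datadog as a gauge, with the configured tags.
        self.gauge('helloworld.value', value, tags=tags)
```

Reading the settings from the instance dictionary means the same check code can monitor several endpoints, each configured as its own instance in the integration's YAML file.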

Writing tests for your new integration

And I wrote tests for this.

So, the tests will just…test against a local version of the application I want to monitor.

And so, it means that in order for the tests to pass, I need to actually run this component locally.

And so, the tests will then run the check and assert that the metric is properly collected, with the right tags.
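As a sketch, a test written against the Agent's AgentCheckTest helper could look something like this; the check name, metric name, and tags are placeholders, and it assumes the hello world service is reachable locally as described above.

```python
# Placeholder names throughout; assumes the service runs on localhost:8080.
from nose.plugins.attrib import attr
from tests.checks.common import AgentCheckTest


@attr(requires='helloworld')  # only run when the helloworld CI flavor is up
class TestHelloWorld(AgentCheckTest):
    CHECK_NAME = 'helloworld'

    def test_metric_is_collected(self):
        config = {
            'instances': [{
                'url': 'http://localhost:8080',
                'tags': ['env:test'],
            }]
        }
        # Run the check once against the locally running service...
        self.run_check(config)
        # ...and assert the gauge was collected once, with the right tags.
        self.assertMetric('helloworld.value', count=1, tags=['env:test'])
```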

So, my component can be packaged as a Docker container, so I actually created a Docker container that runs my hello world application.

I can leverage that in my CI to actually start this container, and so, that's what I did.

In the CI folder, there's a rake task, and what it does is just start the helloworld container.

And that’s it.

So, all the boilerplate of this file was automatically generated.

I just had to edit a few things to tell the CI to use this Docker image and to add my code, to add my test, and to list the dependencies that I need to use.

So, here I need the requests Python library for this integration.

So then I can run the CI rake task for helloworld, and what it does is this: in my virtual environment, it will make sure that it has all the dependencies it needs to test the code, meaning the Python nose test libraries.

It will install the requests library because it knows that’s required by the Agent check.

It will then start the "helloworld" container, and then it will run the actual tests.

The test passed.

Then it will clean up after itself by stopping and removing the container.

And my test passed.

So now, at this point, I have an Agent check that collects the metric.

I have a test that makes sure that it works.

I have a CI setup to make sure that we'll be able to run those tests in our environment.

So, it’s time to commit.

So, I can commit that.

So, I already did it, so I won’t do it again, but I committed that and I can push that and create a pull request against the integrations extras repository.

Creating a pull request

So, I have a pull request here, and I have my code, and so, yeah.

And, what happens when I open a PR is that we have Travis that will test the code, and AppVeyor as well.

So, Travis would run the CI, test the code, and then once the CI passes, we give a quick review to the code.

If it looks okay then we’ll merge it.

And then…so, once you open a pull request and the CI passes, we’ll review the pull request.

If there are some changes to be made, we'll ask you to make the required changes, and then we'll merge the pull request. As soon as the pull request is merged, we'll build an integration package, send that package through the CI, and if it passes the CI, we publish it to our APT and Yum repositories. We'll have the same thing with MSI installers on Windows, and so everyone will be able to use the latest version of this integration.

You'll be able to use this integration.

Collecting feedback

It’s still a work in progress.

There are still a few things that need to be worked on.

So, feedback is more than welcome.

We also want to add more features in the future: we want you to be able to share not only your Agent checks, but also your dashboards and your monitors, in the same way.

So, it’s something that’s still in progress.

So, you should try it.

We’ll have a training session this afternoon.

So, feel free to come and ping me to discuss that.