1 change: 1 addition & 0 deletions .config/ansible-lint.yml
@@ -7,6 +7,7 @@ warn_list:
- key-order[task] # Ensure specific order of keys in mappings.
- name[casing]
- 'risky-shell-pipe'
- no-handler # backup of old certificates
skip_list:
- '106'
- 'command-instead-of-module'
226 changes: 226 additions & 0 deletions docs/logstash-pipelines.md
@@ -0,0 +1,226 @@
# Pipelines #

## Git managed ##

If you have pipeline code managed in (and available via) Git repositories, you can use this role to check them out and integrate them into `pipelines.yml`.

```
logstash_pipelines:
syslog:
name: syslog
source: https://github.com/widhalmt/syslog-logstash-pipeline.git
```

You can add a `version` attribute to your pipeline. It defaults to `main`. You can use any string that the [Ansible git](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/git_module.html) module accepts.
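
For example, to pin the pipeline to a specific branch or tag (the tag name below is purely illustrative):

```
logstash_pipelines:
  syslog:
    name: syslog
    source: https://github.com/widhalmt/syslog-logstash-pipeline.git
    version: v1.0.0
```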

## Input and Output ##

### Basic configuration ###

To add a single Redis input and output to your pipeline, use the following configuration.

```
logstash_pipelines:
syslog:
name: syslog
source: https://github.com/netways/syslog-logstash-pipeline.git
exclusive: false
input:
- name: default
key: syslog-input
output:
- name: default
key: syslog-output
```

This will check out the pipeline configuration from GitHub and add the two extra snippets shown below. They are written to separate files, but are shown in one place here to save space.

```
input {
redis {
host => "localhost"
data_type => "list"
key => "syslog-input"
}
}
output {
redis {
host => "localhost"
data_type => "list"
key => "syslog-output"
}
}
```

### Multiple inputs ###

Just add more entries with `name` and `key` to the `input` list. Every key will be read.
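
A sketch of the basic example with a second input added, assuming a hypothetical extra Redis key named `syslog-input-extra`:

```
logstash_pipelines:
  syslog:
    name: syslog
    source: https://github.com/netways/syslog-logstash-pipeline.git
    exclusive: false
    input:
      - name: default
        key: syslog-input
      - name: extra
        key: syslog-input-extra
    output:
      - name: default
        key: syslog-output
```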

### More complex configuration ###

If you want a bit more control over which outputs are used, the role offers more sophisticated configuration.

This is useful when you have several outputs that each have a condition, for example sending only some messages to a development system or only alerts to a monitoring system.

```
logstash_pipelines:
syslog:
name: syslog
source: https://github.com/netways/syslog-logstash-pipeline.git
exclusive: false
input:
- name: default
key: input
output:
- name: special
key: myspecial
condition: '[program] == "special"'
- name: special2
key: myspecial2
condition: '[program] == "special2"'
- name: default
key: forwarder
```

This will give you the following configuration:

```
input {

  # default input
redis {
host => "localhost"
data_type => "list"
key => "input"
}

}

output {

# special output
  if [program] == "special" {
redis {
host => "localhost"
data_type => "list"
key => "myspecial"
}
}

# special2 output
  if [program] == "special2" {
redis {
host => "localhost"
data_type => "list"
key => "myspecial2"
}
}

# default output
redis {
host => "localhost"
data_type => "list"
key => "forwarder"
}

}
```

Note that the `default` output gets **every** event, while the other two outputs only get those events that match their condition.

You can combine several outputs with `else`. That's helpful when you want to split events, e.g. route syslog messages depending on which program logged them. Just change `exclusive` to `true`.

```
logstash_pipelines:
syslog:
name: syslog
source: https://github.com/netways/syslog-logstash-pipeline.git
exclusive: true
input:
- name: default
key: input
output:
- name: special
key: myspecial
condition: '[program] == "special"'
- name: special2
key: myspecial2
condition: '[program] == "special2"'
- name: default
key: forwarder
```

This will give you the following Logstash configuration.

```
input {

  # default input
redis {
host => "localhost"
data_type => "list"
key => "input"
}

}

output {

# special output
if [program] == "special" {
redis {
host => "localhost"
data_type => "list"
key => "myspecial"
}
}
# special2 output
else if [program] == "special2" {
redis {
host => "localhost"
data_type => "list"
key => "myspecial2"
}
}

# default output
else {
redis {
host => "localhost"
data_type => "list"
key => "forwarder"
}

}

}
```

Here the `default` output only receives the events that haven't already been sent to one of the others.

## Extra configuration ##

### Congestion threshold ###

Every output can have a `congestion` option with a numerical value. If the Redis key already holds more items than this value, the output will stop.
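
A minimal sketch of the option (the threshold of `10000` is just an illustration):

```
logstash_pipelines:
  syslog:
    name: syslog
    output:
      - name: default
        key: forwarder
        congestion: 10000
```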

## Caveats ##

There are still some minor issues you need to keep in mind:

* The default output in an `exclusive: true` setup must be the last in the YAML configuration. There's no sorting, the role simply expects the default to be the last one.
* The configuration *should* work but will make no sense if you have `exclusive: true` but two or more outputs without `condition`.

## Custom pipelines ##

If you have other ways of putting pipeline code into the correct directories, you can just skip the `source` option.

```
logstash_pipelines:
syslog:
name: syslog
```

**You have to make sure the code is available or Logstash will constantly log errors!**

This will create the directories and integrate all `*.conf` files within via `pipelines.yml`.
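
The resulting entry in `pipelines.yml` could look roughly like this sketch; the exact `path.config` value depends on the role's defaults and is only an assumption here:

```
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog/*.conf"
```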

**If you add a source later, the role will delete the directory and recreate it with its own code. So make sure you have a backup!**
4 changes: 2 additions & 2 deletions docs/role-elasticsearch.md
@@ -17,10 +17,10 @@ Role Variables
* *elasticsearch_ca*: Set to the inventory hostname of the host that should house the CA for certificates for inter-node communication. (default: First node in the `elasticsearch` host group)
* *elastic_ca_pass*: Password for Elasticsearch CA (default: `PleaseChangeMe`)
* *elastic_ca_expiration_buffer*: Ansible will renew the CA if its validity is shorter than this value, which should be number of days. (default: 30)
* *elastic_ca_will_expire_soon*: Set it to true to renew the CA and the certificate of all Elastic Stack components (default: `fasle`), Or run the playbook with `--tags renew_ca` to do that.
* *elastic_ca_will_expire_soon*: Set it to true to renew the CA and the certificate of all Elastic Stack components (default: `false`), Or run the playbook with `--tags renew_ca` to do that.
* *elasticsearch_tls_key_passphrase*: Passphrase for elasticsearch certificates (default: `PleaseChangeMeIndividually`)
* *elasticsearch_cert_expiration_buffer*: Ansible will renew the elasticsearch certificate if its validity is shorter than this value, which should be number of days. (default: 30)
* *elasticsearch_cert_will_expire_soon*: Set it to true to renew elasticsearch certificate (default: `fasle`), Or run the playbook with `--tags renew_elasticsearch_cert` to do that.
* *elasticsearch_cert_will_expire_soon*: Set it to true to renew elasticsearch certificate (default: `false`), Or run the playbook with `--tags renew_elasticsearch_cert` to do that.
* *elasticsearch_datapath*: Path where Elasticsearch will store it's data. (default: `/var/lib/elasticsearch` - the packages default)
* *elasticsearch_create_datapath*: Create the path for data to store if it doesn't exist. (default: `false` - only useful if you change `elasticsearch_datapath`)
* *elasticsearch_fs_repo*: List of paths that should be registered as repository for snapshots (only filesystem supported so far). (default: none) Remember, that every node needs access to the same share under the same path.
2 changes: 1 addition & 1 deletion docs/role-kibana.md
@@ -24,7 +24,7 @@ These variables are identical over all our elastic related roles, hence the diff
* *elastic_elasticsearch_http_port*: Port of Elasticsearch http (Default: `9200`)
* *kibana_tls_key_passphrase*: Passphrase for kibana certificates (default: `PleaseChangeMe`)
* *kibana_cert_expiration_buffer*: Ansible will renew the kibana certificate if its validity is shorter than this value, which should be number of days. (default: 30)
* *kibana_cert_will_expire_soon*: Set it to true to renew kibana certificate (default: `fasle`), Or run the playbook with `--tags renew_kibana_cert` to do that.
* *kibana_cert_will_expire_soon*: Set it to true to renew kibana certificate (default: `false`), Or run the playbook with `--tags renew_kibana_cert` to do that.
* *elastic_kibana_host*: Hostname users use to connect to Kibana (default: FQDN of the host the role is executed on)
* *elastic_kibana_port*: Port Kibana webinterface is listening on (default: `5601`)
* *elasticsearch_ca*: Set to the inventory hostname of the host that should house the CA for certificates for inter-node communication. (default: First node in the `elasticsearch` host group)
4 changes: 3 additions & 1 deletion docs/role-logstash.md
@@ -9,6 +9,8 @@ It can optionally configure two types of Logstash pipelines:
* Pipeline configuration managed in an external git repository
* A default pipeline which will read from different Redis keys and write into Elasticsearch

For details on how to configure pipelines please refer to our [docs about pipelines](./logstash-pipelines.md).

Details about configured pipelines will be written into `pipelines.yml` as comments. Same goes for logging configuration in `log4j.options`.

It will work with the standard Elastic Stack packages and Elastics OSS variant.
@@ -58,7 +60,7 @@ Aside from `logstash.yml` we can manage Logstashs pipelines.
* *logstash_tls_key_passphrase*: Passphrase for Logstash certificates (default: `LogstashChangeMe`)
* *elastic_ca_pass*: Password for Elasticsearch CA (default: `PleaseChangeMe`)
* *logstash_cert_expiration_buffer*: Ansible will renew the Logstash certificate if its validity is shorter than this value, which should be number of days. (default: 30)
* *logstash_cert_will_expire_soon*: Set it to true to renew logstash certificate (default: `fasle`), Or run the playbook with `--tags renew_logstash_cert` to do that.
* *logstash_cert_will_expire_soon*: Set it to true to renew logstash certificate (default: `false`), Or run the playbook with `--tags renew_logstash_cert` to do that.
* *logstash_elasticsearch*: Address of Elasticsearch instance for default output (default: list of Elasticsearch nodes from `elasticsearch` role or `localhost` when used standalone)
* *logstash_security*: Enable X-Security (No default set, but will be activated when in full stack mode)
* *logstash_user*: Name of the user to connect to Elasticsearch (Default: `logstash_writer`)