Alertmanager config to Slack is not working

I deployed prom-stack in the cluster and set up a PrometheusRule and an AlertmanagerConfig (receiver: Slack). The rule appears to be firing, but the alert never shows up in Slack.

My config:

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: slack-config
  namespace: monitoring
  labels:
    release: prometheus
spec:
  receivers:
    - name: slack-receiver
      slackConfigs:
        - channel: '#critical-alerts'
          sendResolved: true
          apiURL:
            name: slack-webhook-secret
            key: slack_api_url
  route:
    groupBy: ['alertname', 'job']
    groupWait: 10s
    groupInterval: 5m
    repeatInterval: 12h
    continue: true
    receiver: slack-receiver
    routes:
      - receiver: slack-receiver
        matchers:
          - name: severity
            value: critical
            matchType: =~


apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: test-slack-alert
  namespace: monitoring
  labels:
    release: prometheus
spec:
  groups:
    - name: blackbox.alerts
      rules:
        - alert: TestSlackAlert
          expr: vector(1)
          for: 1m
          labels:
            severity: critical
          annotations:
            summary: "This is a test alert to Slack"
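
For what it's worth, my understanding is that the operator only merges an AlertmanagerConfig into the running Alertmanager when the Alertmanager resource selects it by label (release: prometheus here). A minimal sketch of that wiring, assuming an operator-managed Alertmanager; the resource name below is hypothetical:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: prometheus-alertmanager  # hypothetical name, not taken from my cluster
  namespace: monitoring
spec:
  replicas: 1
  # If this selector does not match the AlertmanagerConfig's labels,
  # the config is silently ignored and no Slack route is generated.
  alertmanagerConfigSelector:
    matchLabels:
      release: prometheus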

This looks to be something you’re doing with OpenShift, perhaps? I’m not particularly familiar with this, TBH. To find help here:

  1. Please link to a tutorial or blog post you’re working from so we have some context.
  2. Please insert your YAML using codeblocks so it is not corrupted by the Discourse web software, as your YAML currently is.

Hey Rob,
Please find the formatted code below:

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: slack-config
  namespace: monitoring
  labels:
    release: prometheus
spec:
  receivers:
    - name: slack-receiver
      slackConfigs:
        - channel: '#critical-alerts'
          sendResolved: true
          apiURL:
            name: slack-webhook-secret
            key: slack_api_url            
  route:
    groupBy: ['alertname', 'job']
    groupWait: 10s
    groupInterval: 5m
    repeatInterval: 12h
    continue: true
    receiver: slack-receiver
    routes:
      - receiver: slack-receiver
        matchers:
          - name: severity
            value: critical
            matchType: =~

---


apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: test-slack-alert
  namespace: monitoring
  labels:
    release: prometheus  
spec:
  groups:
    - name: blackbox.alerts
      rules:
        - alert: TestSlackAlert
          expr: vector(1)
          for: 1m
          labels:
            severity: critical
          annotations:
            summary: "This is a test alert to Slack"
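
The apiURL above references a Secret in the same namespace; for completeness, here is a sketch of how I created it. The webhook URL is a placeholder, and as far as I know the Secret has to live in the same namespace as the AlertmanagerConfig:

apiVersion: v1
kind: Secret
metadata:
  name: slack-webhook-secret
  namespace: monitoring
type: Opaque
stringData:
  # Placeholder value; the real entry is the incoming-webhook URL issued by Slack.
  slack_api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ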

I see that this won’t apply directly to my MicroShift cluster, so I’d definitely need to work from a tutorial to even try this. Please link to whatever you’re working from, and I can at least try it. I’m not set up to test something like this against Slack, though; you’d need to find someone who is.

OK, I installed the Prometheus Operator and your YAML loads. I’m not really set up to test this, as I said in my PM earlier. You’ll need to find someone who’s a bit more familiar with the Prometheus ecosystem than I am.