weixin_39739234
2020-12-28 03:01

Documentation for opentelemetry-agent

In what area(s)?

/area observability

Related issue

dapr/dapr#969

Describe the feature

As we improve tracing (#969), it would be good to provide monitoring images or Helm charts with monitoring configuration, so that users can easily enable a monitoring system.

  • traces: Zipkin, opentelemetry-collector (for Application Insights); a rough sketch of the Zipkin exporter component follows below
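
For the Zipkin side, a chart or sample manifest could ship something like the following. This is a rough sketch assuming the Dapr 0.x `exporters.zipkin` component type; the component name and the Zipkin service address are placeholders:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: zipkin
spec:
  type: exporters.zipkin
  metadata:
  - name: enabled
    value: "true"
  - name: exporterAddress
    value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"  # placeholder Zipkin endpoint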

This question originates from the open source project: dapr/docs

7 replies

  • weixin_39739234 4 months ago

    The Local Forwarder is being deprecated. Switching to the OpenTelemetry agent makes sense for Application Insights traces.

    https://github.com/open-telemetry/opentelemetry-collector https://github.com/open-telemetry/opentelemetry-collector-contrib

  • weixin_39739234 4 months ago

    The demo at https://github.com/open-telemetry/opentelemetry-collector/tree/master/examples/demo works with the config.yaml below:

    
    receivers:
      opencensus:
        endpoint: 0.0.0.0:55678
    
    exporters:
      prometheus:
        endpoint: "0.0.0.0:8889"
        namespace: promexample
        const_labels:
          label1: value1
      logging:
    
      zipkin:
        url: "http://zipkin-all-in-one:9411/api/v2/spans"
        format: proto
    
      jaeger_grpc:
        endpoint: jaeger-all-in-one:14250
    
      azuremonitor:
      azuremonitor/2:
        # endpoint is the uri used to communicate with Azure Monitor
        endpoint: "https://dc.services.visualstudio.com/v2/track"
        # instrumentation_key is the unique identifier for your Application Insights resource
        instrumentation_key: ""
        # maxbatchsize is the maximum number of items that can be queued before calling to the configured endpoint
        maxbatchsize: 100
        # maxbatchinterval is the maximum time to wait before calling the configured endpoint.
        maxbatchinterval: 10s
    
    # Alternatively, use jaeger_thrift_http with the settings below. In this case
    # update the list of exporters on the traces pipeline.
    #
    #  jaeger_thrift_http:
    #    url: http://jaeger-all-in-one:14268/api/traces
    
    processors:
      batch:
      queued_retry:
    
    extensions:
      health_check:
      pprof:
        endpoint: :1888
      zpages:
        endpoint: :55679
    
    service:
      extensions: [pprof, zpages, health_check]
      pipelines:
        traces:
          receivers: [opencensus]
          exporters: [logging, zipkin, jaeger_grpc, azuremonitor/2]
          processors: [batch, queued_retry]
        metrics:
          receivers: [opencensus]
          exporters: [logging, prometheus, azuremonitor/2]
    
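    To try this outside Kubernetes, something along these lines could be used with the config above. This is a rough compose sketch, not the repo's actual demo file; image tags and port mappings are assumptions, and the Zipkin/Jaeger service names match the hostnames referenced by the exporters:

    version: "3"
    services:
      otel-collector:
        image: omnition/opentelemetry-collector-contrib:latest
        # same collector invocation as the Kubernetes manifest later in this thread
        entrypoint: ["/otelcontribcol", "--config=/conf/otel-collector-config.yaml"]
        volumes:
          - ./config.yaml:/conf/otel-collector-config.yaml
        ports:
          - "55678:55678"   # OpenCensus receiver
          - "8889:8889"     # Prometheus exporter
          - "55679:55679"   # zPages extension
      zipkin-all-in-one:
        image: openzipkin/zipkin
        ports:
          - "9411:9411"
      jaeger-all-in-one:
        image: jaegertracing/all-in-one
        ports:
          - "16686:16686"   # Jaeger UI
          - "14250:14250"   # gRPC ingest used by the jaeger_grpc exporter
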
  • weixin_39576149 4 months ago

    Might want to redact the instrumentation key from the above config

  • weixin_39739234 4 months ago

    > Might want to redact the instrumentation key from the above config

    Oh, nice catch! I will delete the App Insights resource and create a new one. :(

  • weixin_39739234 4 months ago
    1. Put your App Insights instrumentation key into opentelemetry_appinsight.yaml
    2. Apply the YAMLs; a sketch of the app-side annotation that references the `tracing` Configuration follows the manifests below

    dapr_tracing_opentelemetry.yaml

    
    ---
    apiVersion: dapr.io/v1alpha1
    kind: Configuration
    metadata:
      name: tracing
    spec:
      tracing:
        enabled: true
        expandParams: true
        includeBody: true
    ---
    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: native
    spec:
      type: exporters.native
      metadata:
      - name: enabled
        value: "true"
      - name: agentEndpoint
        value: "otel-collector.monitoring.svc.cluster.local:55678"
    ---
    

    opentelemetry_appinsight.yaml

    
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: otel-collector-conf
      namespace: dapr-monitoring
      labels:
        app: opentelemetry
        component: otel-collector-conf
    data:
      otel-collector-config: |
        receivers:
          opencensus:
            endpoint: 0.0.0.0:55678
        processors:
          queued_retry:
          batch:
        extensions:
          health_check:
          pprof:
            endpoint: :1888
          zpages:
            endpoint: :55679
        exporters:
          azuremonitor:
          azuremonitor/2:
            endpoint: "https://dc.services.visualstudio.com/v2/track"
            instrumentation_key: "<replace: appinsight ikey>"
            # maxbatchsize is the maximum number of items that can be queued before calling to the configured endpoint
            maxbatchsize: 100
            # maxbatchinterval is the maximum time to wait before calling the configured endpoint.
            maxbatchinterval: 10s
        service:
          extensions: [pprof, zpages, health_check]
          pipelines:
            traces:
              receivers: [opencensus]
              exporters: [azuremonitor/2]
              processors: [batch, queued_retry]
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: otel-collector
      namespace: dapr-monitoring  # keep the Service in the same namespace as the collector Deployment
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      ports:
      - name: opencensus # Default endpoint for Opencensus receiver.
        port: 55678
        protocol: TCP
        targetPort: 55678
      selector:
        component: otel-collector
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: otel-collector
      namespace: dapr-monitoring
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      replicas: 3  # scale out based on your usage
      selector:
        matchLabels:
          app: opentelemetry
          component: otel-collector
      template:
        metadata:
          labels:
            app: opentelemetry
            component: otel-collector
        spec:
          containers:
          - name: otel-collector
            image: omnition/opentelemetry-collector-contrib:latest
            command:
              - "/otelcontribcol"
              - "--config=/conf/otel-collector-config.yaml"
            resources:
              limits:
                cpu: 1
                memory: 2Gi
              requests:
                cpu: 200m
                memory: 400Mi
            ports:
              - containerPort: 55678 # Default endpoint for Opencensus receiver.
            volumeMounts:
              - name: otel-collector-config-vol
                mountPath: /conf
              #- name: otel-collector-secrets
              #  mountPath: /secrets
            livenessProbe:
              httpGet:
                path: /
                port: 13133
            readinessProbe:
              httpGet:
                path: /
                port: 13133
          volumes:
            - configMap:
                name: otel-collector-conf
                items:
                  - key: otel-collector-config
                    path: otel-collector-config.yaml
              name: otel-collector-config-vol
    #       - secret:
    #            name: otel-collector-secrets
    #            items:
    #              - key: cert.pem
    #                path: cert.pem
    #              - key: key.pem
    #                path: key.pem
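
    For the Dapr sidecar to pick up the `tracing` Configuration above, the application deployment has to reference it by name. Below is a hypothetical app deployment; the annotation keys follow Dapr 0.x conventions, and the app name, image, and port are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
          annotations:
            dapr.io/enabled: "true"
            dapr.io/id: "myapp"        # dapr.io/app-id in newer releases
            dapr.io/port: "3000"       # dapr.io/app-port in newer releases
            dapr.io/config: "tracing"  # name of the Configuration defined in dapr_tracing_opentelemetry.yaml
        spec:
          containers:
          - name: myapp
            image: myapp:latest        # placeholder image
            ports:
            - containerPort: 3000
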
  • weixin_39739234 4 months ago

    App Insights tracing shows a weird application map when I use the OpenTelemetry agent. Needs more investigation.

  • weixin_39616547 4 months ago

    I was following the docs on using the Local Forwarder with App Insights and haven't been able to get it to work. I'm on Dapr 0.8.0 with an environment I spun up with docker-compose. It doesn't seem like any data is being captured.

    Even with the Local Forwarder being deprecated, should this still work today?

    Updated: there also seem to be two different ways to configure tracing in the docs, and I'm not sure which one to follow: "Diagnose with tracing" vs. "Tracing".

