<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Blog - Rico Berger</title><link>https://ricoberger.de/blog/</link><description>Personal Blog about Site Reliability Engineering, Platform Engineering, Cloud Native, Kubernetes and more</description><language>en-us</language><copyright>Rico Berger</copyright><pubDate>Thu, 12 Feb 2026 13:40:15 +0000</pubDate><lastBuildDate>Thu, 12 Feb 2026 13:40:15 +0000</lastBuildDate><image><url>https://ricoberger.de/assets/img/icons/icon.png</url><title>Rico Berger</title><link>https://ricoberger.de</link><width>1024</width><height>1024</height></image><item><title>Kubernetes Logging with ClickHouse and OpenTelemetry</title><link>https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/</link><description><![CDATA[<p>In today&#39;s blog post, we will deploy ClickHouse and the OpenTelemetry Collector
in a Kubernetes cluster to collect the cluster&#39;s logs. Finally, we will deploy
Grafana to explore the logs stored in ClickHouse.</p>
<p><a href="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana.png"><img src="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana.png" alt="Grafana"/></a></p>
<h2>ClickHouse</h2>
<p>In the first step, we need to create a new ClickHouse cluster. In this blog
post, we will use the
<a href="https://github.com/Altinity/clickhouse-operator">ClickHouse Operator</a> for this
purpose. The operator can be deployed using the following commands:</p>
<pre><code class="language-sh">kubectl create namespace clickhouse-operator
kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/6b942ebe1df3b121b09042274f6598485d159826/kubernetes/kubernetes-logging-with-clickhouse-and-opentelemetry/clickhouse/clickhouse-operator.yaml
</code></pre>
<pre><code class="language-plaintext">NAME                                   READY   STATUS    RESTARTS   AGE
clickhouse-operator-6bcc776b68-77xf6   2/2     Running   0          14s
</code></pre>
<p>Once the ClickHouse operator is running, we can create a ClickHouse cluster.
Since we want a cluster with three shards, we also need to deploy ClickHouse
Keeper. To deploy ClickHouse Keeper and create the ClickHouse cluster afterward,
we can use the following commands:</p>
<pre><code class="language-sh">kubectl create namespace otel
kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/6b942ebe1df3b121b09042274f6598485d159826/kubernetes/kubernetes-logging-with-clickhouse-and-opentelemetry/clickhouse/clickhouse-keeper.yaml
kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/6b942ebe1df3b121b09042274f6598485d159826/kubernetes/kubernetes-logging-with-clickhouse-and-opentelemetry/clickhouse/clickhouse.yaml
</code></pre>
<pre><code class="language-plaintext">NAME                               READY   STATUS    RESTARTS   AGE
chi-clickhouse-otel-0-0-0          1/1     Running   0          18s
chi-clickhouse-otel-1-0-0          1/1     Running   0          19s
chi-clickhouse-otel-2-0-0          1/1     Running   0          19s
chk-clickhouse-keeper-otel-0-0-0   1/1     Running   0          70s
chk-clickhouse-keeper-otel-0-1-0   1/1     Running   0          63s
chk-clickhouse-keeper-otel-0-2-0   1/1     Running   0          56s
</code></pre>
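<p>For reference, the linked <code>clickhouse.yaml</code> defines a
<code>ClickHouseInstallation</code> resource for the operator. The following is a
trimmed, illustrative sketch; the cluster name and shard count follow from the
Pod names shown above, while the ClickHouse Keeper host is an assumption:</p>
<pre><code class="language-yaml">apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: clickhouse
  namespace: otel
spec:
  configuration:
    zookeeper:
      # ClickHouse Keeper speaks the ZooKeeper protocol, which is why it is
      # referenced in the zookeeper section.
      nodes:
        - host: keeper-clickhouse-keeper.otel.svc.cluster.local
          port: 2181
    clusters:
      - name: otel
        layout:
          shardsCount: 3
          replicasCount: 1
</code></pre>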
<h2>OpenTelemetry Collector</h2>
<p>In the next step, we need to deploy the
<a href="https://opentelemetry.io/docs/collector/">OpenTelemetry Collector</a>. The
OpenTelemetry project provides pre-built distributions of the collector.
However, since it is recommended to build a custom distribution that contains
only the components you actually need, we will build our own image instead of
using one of the pre-built distributions.</p>
<p>We will use the following
<a href="https://raw.githubusercontent.com/ricoberger/playground/6b942ebe1df3b121b09042274f6598485d159826/kubernetes/kubernetes-logging-with-clickhouse-and-opentelemetry/otel-collector/Dockerfile">Dockerfile</a>
and
<a href="https://raw.githubusercontent.com/ricoberger/playground/6b942ebe1df3b121b09042274f6598485d159826/kubernetes/kubernetes-logging-with-clickhouse-and-opentelemetry/otel-collector/builder-config.yaml">builder manifest</a>
to build our custom Docker image
<code>registry.homelab.ricoberger.dev/otel-collector:v0.138.0</code> using the following
command:</p>
<pre><code class="language-sh">docker buildx build --platform=linux/arm64,linux/amd64 -f ./Dockerfile -t registry.homelab.ricoberger.dev/otel-collector:v0.138.0 --push .
</code></pre>
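<p>The builder manifest declares exactly the components that appear in the
collector configuration below. A trimmed sketch might look like the following;
the module versions are illustrative and must match the collector release you
are building:</p>
<pre><code class="language-yaml">dist:
  name: otel-collector
  output_path: ./otel-collector

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.138.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/filelogreceiver v0.138.0

processors:
  - gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.138.0
  - gomod: go.opentelemetry.io/collector/processor/memorylimiterprocessor v0.138.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/k8sattributesprocessor v0.138.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/transformprocessor v0.138.0

exporters:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter v0.138.0

extensions:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/storage/filestorage v0.138.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/healthcheckextension v0.138.0
  - gomod: go.opentelemetry.io/collector/extension/zpagesextension v0.138.0
</code></pre>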
<p>Our OpenTelemetry Collector will use the following configuration:</p>
<pre><code class="language-yaml">receivers:
  # Configure the OTLP receiver so that the OpenTelemetry Collector can
  # receive logs via gRPC or HTTP in the OTLP format.
  otlp:
    protocols:
      grpc:
        endpoint: ${env:MY_POD_IP}:4317
      http:
        endpoint: ${env:MY_POD_IP}:4318
  # Configure the Filelog receiver, which is responsible for collecting the logs
  # from all Pods. Because we are using the Filelog receiver, we also have to
  # deploy the OpenTelemetry Collector as a DaemonSet in our Kubernetes cluster.
  filelog:
    include: [/var/log/pods/*/*/*.log]
    start_at: beginning
    storage: file_storage
    include_file_path: true
    include_file_name: false
    # The following operators perform some simple tasks on the collected log
    # lines:
    # - The container operator parses logs in the docker, cri-o and containerd
    #   formats and is responsible for parsing the timestamp and attributes.
    # - The json_parser operator parses the string-type field selected by
    #   parse_from as JSON. This means it parses the body of each log line
    #   into attributes and sets the severity for each log line from the
    #   parsed result.
    # - The trace_parser operator sets the trace on an entry by parsing a value
    #   from the body.
    operators:
      - id: container_parser
        type: container
      - id: json_parser
        type: json_parser
        on_error: send_quiet
        severity:
          parse_from: attributes.level
      - id: trace_parser
        type: trace_parser
        trace_id:
          parse_from: attributes.trace_id
        span_id:
          parse_from: attributes.span_id
        trace_flags:
          parse_from: attributes.trace_flags
        on_error: send_quiet

exporters:
  # Configure the ClickHouse exporter. We are mostly using the default
  # configuration; the only exception is that we disable the creation of the
  # schema. This is required because we want to use multiple shards, which is
  # not supported in the default schema.
  clickhouse:
    endpoint: clickhouse://clickhouse-clickhouse.otel.svc.cluster.local:9000?dial_timeout=10s&amp;compress=lz4
    database: otel
    username: &#34;${env:CLICKHOUSE_USERNAME}&#34;
    password: &#34;${env:CLICKHOUSE_PASSWORD}&#34;
    create_schema: false
    async_insert: true
    logs_table_name: otel_logs
    traces_table_name: otel_traces
    metrics_table_name: otel_metrics
    timeout: 5s
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s

processors:
  # To improve ingestion performance, we batch the log lines before sending
  # them over to ClickHouse.
  batch:
    send_batch_size: 10000
    timeout: 10s
  memory_limiter:
    check_interval: 5s
    limit_mib: 400
    spike_limit_mib: 100
  # The Kubernetes attributes processor will automatically expand the resource
  # attributes of our logs with Kubernetes metadata, such as the Pod name,
  # container name, etc.
  k8sattributes:
    passthrough: false
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
      - sources:
          - from: connection
    extract:
      metadata:
        - &#34;k8s.namespace.name&#34;
        - &#34;k8s.deployment.name&#34;
        - &#34;k8s.statefulset.name&#34;
        - &#34;k8s.daemonset.name&#34;
        - &#34;k8s.cronjob.name&#34;
        - &#34;k8s.job.name&#34;
        - &#34;k8s.node.name&#34;
        - &#34;k8s.pod.name&#34;
        - &#34;k8s.pod.uid&#34;
        - &#34;k8s.pod.start_time&#34;
      labels:
        - tag_name: $$1
          key_regex: (.*)
          from: pod
      annotations:
        - tag_name: $$1
          key_regex: (.*)
          from: pod
  # The last processor is the transform processor, which adds a service name to
  # each log line, based on the container name added by the Kubernetes
  # attributes processor.
  transform/logs:
    error_mode: silent
    log_statements:
      - context: log
        conditions:
          - IsMap(resource.attributes) and
            resource.attributes[&#34;k8s.container.name&#34;] != nil
        statements:
          - set(resource.attributes[&#34;service.name&#34;],
            resource.attributes[&#34;k8s.container.name&#34;])

# Add extensions for health checks, zPages and file storage. The file storage
# extension is used by the Filelog receiver to store the offset for each log
# file, so the receiver can pick up where it left off in the case of a
# collector restart.
extensions:
  file_storage:
    directory: /var/lib/otel-collector
  health_check:
    endpoint: ${env:MY_POD_IP}:13133
  zpages:
    endpoint: ${env:MY_POD_IP}:55679

service:
  extensions:
    - file_storage
    - health_check
    - zpages
  pipelines:
    logs:
      receivers:
        - otlp
        - filelog
      processors:
        # Order matters here: the memory_limiter should run first, so that
        # backpressure can be applied before any other processing happens, and
        # the batch processor should run last.
        - memory_limiter
        - k8sattributes
        - transform/logs
        - batch
      exporters:
        - clickhouse
</code></pre>
<p>Now that we have defined our OpenTelemetry Collector configuration, we can
deploy the OpenTelemetry Collector as a DaemonSet. Since we have disabled schema
creation in the ClickHouse exporter, we will also create and run a CronJob to
establish the schema in ClickHouse. The following commands can be used to deploy
the CronJob and DaemonSet:</p>
<pre><code class="language-sh">kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/6b942ebe1df3b121b09042274f6598485d159826/kubernetes/kubernetes-logging-with-clickhouse-and-opentelemetry/otel-collector/otel-collector-create-schema.yaml
kubectl create job --from=cronjob/otel-collector-create-schema otel-collector-create-schema-manual

kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/6b942ebe1df3b121b09042274f6598485d159826/kubernetes/kubernetes-logging-with-clickhouse-and-opentelemetry/otel-collector/otel-collector.yaml
</code></pre>
<pre><code class="language-plaintext">chi-clickhouse-otel-0-0-0                   1/1     Running     0          3h16m
chi-clickhouse-otel-1-0-0                   1/1     Running     0          3h16m
chi-clickhouse-otel-2-0-0                   1/1     Running     0          3h16m
chk-clickhouse-keeper-otel-0-0-0            1/1     Running     0          3h31m
chk-clickhouse-keeper-otel-0-1-0            1/1     Running     0          3h31m
chk-clickhouse-keeper-otel-0-2-0            1/1     Running     0          3h31m
otel-collector-create-schema-manual-twxtq   0/1     Completed   0          3h15m
otel-collector-ggt92                        1/1     Running     0          3h14m
otel-collector-wsj6n                        1/1     Running     0          3h14m
</code></pre>
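<p>The schema CronJob runs DDL statements against ClickHouse. The exact statements
are part of the linked manifest; conceptually, a sharded setup needs a local
<code>MergeTree</code> table on every shard plus a <code>Distributed</code> table
on top of it. A heavily trimmed, illustrative sketch with only a subset of the
columns could look like this:</p>
<pre><code class="language-sql">CREATE TABLE IF NOT EXISTS otel.otel_logs_local ON CLUSTER &#39;otel&#39;
(
    Timestamp     DateTime64(9),
    TraceId       String,
    SpanId        String,
    SeverityText  LowCardinality(String),
    ServiceName   LowCardinality(String),
    Body          String,
    LogAttributes Map(LowCardinality(String), String)
)
ENGINE = MergeTree
PARTITION BY toDate(Timestamp)
ORDER BY (ServiceName, Timestamp);

-- The Distributed table fans reads and writes out to the local tables on all
-- three shards, which is why the exporter writes to otel_logs.
CREATE TABLE IF NOT EXISTS otel.otel_logs ON CLUSTER &#39;otel&#39;
AS otel.otel_logs_local
ENGINE = Distributed(&#39;otel&#39;, &#39;otel&#39;, &#39;otel_logs_local&#39;, rand());
</code></pre>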
<h2>Demo Application</h2>
<p>We now have a functioning OpenTelemetry Collector that gathers all the logs from
our Kubernetes cluster using the Filelog receiver and stores them in our
ClickHouse cluster via the ClickHouse exporter.</p>
<p>In the next step, we will deploy the
<a href="https://github.com/ricoberger/echoserver">echoserver</a>, which is configured to
send logs to the OpenTelemetry Collector using the OTLP format. You can use the
following commands to install the echoserver.</p>
<pre><code class="language-sh">kubectl create namespace echoserver
helm upgrade --install echoserver oci://ghcr.io/ricoberger/charts/echoserver --version 1.1.0 -f https://raw.githubusercontent.com/ricoberger/playground/6b942ebe1df3b121b09042274f6598485d159826/kubernetes/kubernetes-logging-with-clickhouse-and-opentelemetry/echoserver/values-otlp.yaml --namespace echoserver
</code></pre>
<pre><code class="language-plaintext">NAME                          READY   STATUS    RESTARTS   AGE
echoserver-57d88d558f-hzmvw   1/1     Running   0          14m
</code></pre>
<p>We can now use the echoserver to generate logs for testing the filtering of logs
later.</p>
<pre><code class="language-sh">kubectl port-forward -n echoserver svc/echoserver 8080

curl -vvv &#34;http://localhost:8080/&#34;
curl -vvv &#34;http://localhost:8080/panic&#34;
curl -vvv &#34;http://localhost:8080/status&#34;
curl -vvv &#34;http://localhost:8080/status?status=400&#34;
curl -vvv &#34;http://localhost:8080/timeout?timeout=10s&#34;
curl -vvv &#34;http://localhost:8080/headersize?size=100&#34;
curl -vvv -X POST -d &#39;{&#34;method&#34;: &#34;POST&#34;, &#34;url&#34;: &#34;http://localhost:8080/&#34;, &#34;body&#34;: &#34;test&#34;, &#34;headers&#34;: {&#34;x-test&#34;: &#34;test&#34;}}&#39; http://localhost:8080/request
curl -vvv &#34;http://localhost:8080/fibonacci?n=100&#34;
</code></pre>
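<p>For the json_parser and trace_parser operators from our collector configuration
to work, the echoserver writes structured JSON log lines. An illustrative
example of such a line, with the <code>level</code>, <code>trace_id</code> and
<code>span_id</code> fields matching the <code>parse_from</code> settings, could
look like this (field values are made up):</p>
<pre><code class="language-json">{&#34;level&#34;:&#34;info&#34;,&#34;message&#34;:&#34;Request received.&#34;,&#34;method&#34;:&#34;GET&#34;,&#34;path&#34;:&#34;/status&#34;,&#34;status&#34;:400,&#34;trace_id&#34;:&#34;0af7651916cd43dd8448eb211c80319c&#34;,&#34;span_id&#34;:&#34;b7ad6b7169203331&#34;}
</code></pre>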
<h2>Grafana</h2>
<p>In the final section of this blog post, we will deploy Grafana with a
pre-configured
<a href="https://grafana.com/grafana/plugins/grafana-clickhouse-datasource/">ClickHouse datasource</a>
to view our logs.</p>
<pre><code class="language-sh">kubectl create namespace grafana
helm upgrade --install grafana oci://ghcr.io/grafana/helm-charts/grafana --version 10.1.4 -f https://raw.githubusercontent.com/ricoberger/playground/6b942ebe1df3b121b09042274f6598485d159826/kubernetes/kubernetes-logging-with-clickhouse-and-opentelemetry/grafana/values.yaml --namespace grafana
</code></pre>
<pre><code class="language-plaintext">NAME                       READY   STATUS    RESTARTS   AGE
grafana-55b587fdc8-4cg6t   1/1     Running   0          13m
</code></pre>
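<p>The linked <code>values.yaml</code> provisions the ClickHouse datasource through
the chart&#39;s <code>datasources</code> value. A trimmed sketch could look like
the following; the field names follow the ClickHouse datasource plugin, and the
credentials are placeholders:</p>
<pre><code class="language-yaml">datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: ClickHouse
        type: grafana-clickhouse-datasource
        jsonData:
          host: clickhouse-clickhouse.otel.svc.cluster.local
          port: 9000
          protocol: native
          defaultDatabase: otel
          username: admin
        secureJsonData:
          password: admin
</code></pre>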
<p>Afterward, we can access our Grafana instance using the following port-forward
command with the username <code>admin</code> and the password <code>admin</code>.</p>
<pre><code class="language-sh">kubectl port-forward -n grafana svc/grafana 3000:80
</code></pre>
<p>When we navigate to the <strong>Explore</strong> section and select the <strong>ClickHouse</strong>
datasource, we can view the logs. For instance, we can filter the logs of our
ClickHouse cluster by using <strong>ServiceName</strong> with the value <strong>clickhouse</strong>.</p>
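<p>Behind the scenes, such a filter is turned into an SQL query against the
<code>otel_logs</code> table, roughly equivalent to the following illustrative
query:</p>
<pre><code class="language-sql">SELECT Timestamp, SeverityText, Body
FROM otel.otel_logs
WHERE ServiceName = &#39;clickhouse&#39;
  AND Timestamp &gt;= now() - INTERVAL 1 HOUR
ORDER BY Timestamp DESC
LIMIT 1000
</code></pre>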
<div class="grid grid-cols-2 md:grid-cols-2 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana-clickhouse-query.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana-clickhouse-query.png" alt="Query"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana-clickhouse-result.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana-clickhouse-result.png" alt="Result"/>
    </a>
  </div>
</div>
<p>We can also filter the logs using the <strong>LogAttributes</strong>. In the following
example, we will view all <strong>GET</strong> requests for the <strong>echoserver</strong>.</p>
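<p>Since <code>LogAttributes</code> is a <code>Map</code> column, the generated
query accesses individual attributes with bracket syntax, for example like this
(assuming the echoserver logs the request method in a <code>method</code>
attribute):</p>
<pre><code class="language-sql">SELECT Timestamp, Body
FROM otel.otel_logs
WHERE ServiceName = &#39;echoserver&#39;
  AND LogAttributes[&#39;method&#39;] = &#39;GET&#39;
ORDER BY Timestamp DESC
</code></pre>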
<div class="grid grid-cols-2 md:grid-cols-2 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana-echoserver-query.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana-echoserver-query.png" alt="Query"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana-echoserver-result.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana-echoserver-result.png" alt="Result"/>
    </a>
  </div>
</div>
<h2>Wrapping Up</h2>
<p>That&#39;s it for today&#39;s blog post. We have successfully deployed the OpenTelemetry
Collector to gather logs in our Kubernetes cluster and store them in ClickHouse.
Finally, we have deployed and configured Grafana to explore our stored logs. If
you enjoyed the blog post, feel free to follow me on my social media channels.</p>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/assets/grafana.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/kubernetes-logging-with-clickhouse-and-opentelemetry/</guid><pubDate>Mon, 03 Nov 2025 17:00:00 +0000</pubDate></item><item><title>Fixing the YAML Language Server</title><link>https://ricoberger.de/blog/posts/fixing-the-yaml-language-server/</link><description><![CDATA[<p>First, I want to apologize for the clickbait title. The
<a href="https://github.com/redhat-developer/yaml-language-server">YAML Language Server</a>
is a great project, and I don&#39;t want to disrespect the maintainers. However,
while
<a href="https://ricoberger.de/blog/posts/reworking-my-neovim-configuration/">reworking my Neovim configuration</a>,
I encountered some issues with the YAML Language Server that I would like to
share with you, along with how I resolved them.</p>
<p><a href="https://ricoberger.de/blog/posts/fixing-the-yaml-language-server/assets/yamlls.png"><img src="https://ricoberger.de/blog/posts/fixing-the-yaml-language-server/assets/yamlls.png" alt="YAML Language Server"/></a></p>
<h2>Motivation</h2>
<p>While reworking my Neovim configuration, I also decided to adjust the YAML
Language Server settings to support completion and validation for Kubernetes
manifests. I began with the following configuration:</p>
<pre><code class="language-lua">yaml = {
  format = {
    enable = false,
  },
  completion = true,
  hover = true,
  validate = true,
  schemas = {
    kubernetes = {
      &#34;/kubernetes/**/*.yml&#34;,
      &#34;/kubernetes/**/*.yaml&#34;,
    },
  },
  schemaStore = {
    enable = false,
    url = &#34;&#34;,
  },
},
</code></pre>
<p>This worked well for all standard Kubernetes manifests. The problems began when
I started working with CustomResourceDefinitions (CRDs), as I consistently
encountered the following error:</p>
<pre><code class="language-plaintext">kubernetes/namespaces/default/example-vs.yaml|2 col 7-21 error   1| Value is not accepted. Valid values: &#34;ValidatingAdmissionPolicy&#34;, &#34;ValidatingAdmissionPolicyBinding&#34;, &#34;MutatingAdmissionPolicy&#34;, &#34;MutatingAdmissionPolicyBinding&#34;, &#34;StorageVersion&#34;, &#34;DaemonSet&#34;, &#34;Deployment&#34;, &#34;ReplicaSet&#34;, &#34;StatefulSet&#34;, &#34;TokenRequest&#34;, &#34;TokenReview&#34;, &#34;LocalSubjectAccessReview&#34;, &#34;SelfSubjectAccessReview&#34;, &#34;SelfSubjectRulesReview&#34;, &#34;SubjectAccessReview&#34;, &#34;HorizontalPodAutoscaler&#34;, &#34;Scale&#34;, &#34;CronJob&#34;, &#34;Job&#34;, &#34;CertificateSigningRequest&#34;, &#34;ClusterTrustBundle&#34;, &#34;Lease&#34;, &#34;LeaseCandidate&#34;, &#34;LimitRange&#34;, &#34;Namespace&#34;, &#34;Node&#34;, &#34;PersistentVolume&#34;, &#34;PersistentVolumeClaim&#34;, &#34;Pod&#34;, &#34;ReplicationController&#34;, &#34;ResourceQuota&#34;, &#34;Service&#34;, &#34;FlowSchema&#34;, &#34;PriorityLevelConfiguration&#34;, &#34;Ingress&#34;, &#34;IngressClass&#34;, &#34;NetworkPolicy&#34;, &#34;IPAddress&#34;, &#34;ServiceCIDR&#34;, &#34;PodDisruptionBudget&#34;, &#34;DeviceClass&#34;, &#34;ResourceClaim&#34;, &#34;ResourceClaimTemplate&#34;, &#34;ResourceSlice&#34;, &#34;CSIDriver&#34;, &#34;CSINode&#34;, &#34;VolumeAttachment&#34;, &#34;StorageVersionMigration&#34;, &#34;CustomResourceDefinition&#34;, &#34;APIService&#34;.
</code></pre>
<p>This was somewhat expected, as the YAML Language Server is unaware of the
CustomResourceDefinitions I am using. To resolve this, there are two options:</p>
<ol>
<li>Add the schema for each CustomResourceDefinition to the YAML Language Server
configuration. The issue with this approach is that I would need to define a
unique pattern for each file, which is not feasible.</li>
</ol>
<pre><code class="language-lua">schemas = {
  kubernetes = {
    &#34;*-deploy.yml&#34;,
    &#34;*-deploy.yaml&#34;,
    ...
  },
  [&#34;https://raw.githubusercontent.com/datreeio/CRDs-catalog/refs/heads/main/networking.istio.io/virtualservice_v1.json&#34;] = {
    &#34;*-vs.yml&#34;,
    &#34;*-vs.yaml&#34;,
  }
}
</code></pre>
<ol start="2">
<li>Add a schema comment to every manifest that uses a CustomResourceDefinition,
which was also not a feasible solution for me.</li>
</ol>
<pre><code class="language-yaml"># yaml-language-server: $schema=https://raw.githubusercontent.com/datreeio/CRDs-catalog/refs/heads/main/networking.istio.io/virtualservice_v1.json
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: example
  namespace: default
spec: ...
</code></pre>
<p>After conducting some research, I decided to create
<a href="https://github.com/ricoberger/kubernetes-json-schema">my own JSON schema for Kubernetes</a>,
which includes the Kubernetes JSON schema and the schemas for all
CustomResourceDefinitions I am using. The configuration for the YAML Language
Server to utilize the new schema is as follows:</p>
<pre><code class="language-lua">schemas = {
  [&#34;https://raw.githubusercontent.com/ricoberger/kubernetes-json-schema/refs/heads/main/schemas/all.json&#34;] = {
    &#34;/kubernetes/**/*.yml&#34;,
    &#34;/kubernetes/**/*.yaml&#34;,
  }
}
</code></pre>
<p>The problem with this approach is that the YAML Language Server has extensive
logic related to the default Kubernetes schema, which prevents it from
functioning as expected. With the configuration mentioned above, I consistently
received the following error:</p>
<pre><code class="language-plaintext">kubernetes/namespaces/default/example-deploy.yaml|2 col 1-2 error| Matches multiple schemas when only one must validate.
</code></pre>
<p>This is a
<a href="https://github.com/redhat-developer/yaml-language-server/issues/998">known issue</a>
with the YAML Language Server. At that point, I was so frustrated that I decided
<a href="https://github.com/ricoberger/yaml-language-server">to fork the YAML Language Server</a>
to add an option to overwrite the default Kubernetes schema.</p>
<p>With this small change, everything is now functioning as expected. I receive
completion and validation for all Kubernetes manifests, including the
CustomResourceDefinitions I am using.</p>
<h2>Usage</h2>
<p>In the following section, we will explore how to use the forked version of the
YAML Language Server and the custom Kubernetes JSON schema with Neovim.</p>
<p>First, we need to clone the repository, check out the branch with the modified
version, and build the YAML Language Server.</p>
<pre><code class="language-sh">git clone git@github.com:ricoberger/yaml-language-server.git
cd yaml-language-server
git checkout add-option-to-overwrite-kubernetes-schema

npm install
npm run build
</code></pre>
<p>In the next step, we will create a bash script
<a href="https://github.com/ricoberger/dotfiles/blob/acb3f643129799906a33bf72a290ba21f1270190/.bin/yamlls"><code>yamlls</code></a>
in our <code>PATH</code> to set the <code>YAMLLS_KUBERNETES_SCHEMA_URL</code> environment variable and
to start our custom YAML Language Server.</p>
<pre><code class="language-sh">#!/usr/bin/env bash

export YAMLLS_KUBERNETES_SCHEMA_URL=&#34;https://raw.githubusercontent.com/ricoberger/kubernetes-json-schema/refs/heads/main/schemas/all.json&#34;

node /Users/ricoberger/Documents/GitHub/ricoberger/yaml-language-server/out/server/src/server.js &#34;$@&#34;
</code></pre>
<p>Lastly, we configure the YAML Language Server in Neovim
(<a href="https://github.com/ricoberger/dotfiles/blob/acb3f643129799906a33bf72a290ba21f1270190/.config/nvim/lsp/yamlls.lua"><code>yamlls.lua</code></a>)
to use our <code>yamlls</code> script for starting the server.</p>
<pre><code class="language-lua">return {
  cmd = { &#34;yamlls&#34;, &#34;--stdio&#34; },
  filetypes = {
    &#34;yaml&#34;,
    &#34;yaml.docker-compose&#34;,
    &#34;yaml.gitlab&#34;,
    &#34;yaml.helm-values&#34;,
  },
  single_file_support = true,
  settings = {
    redhat = {
      telemetry = {
        enabled = false,
      },
    },
    yaml = {
      format = {
        enable = false,
      },
      completion = true,
      hover = true,
      validate = true,
      schemas = {
        kubernetes = {
          &#34;/kubernetes/**/*.yml&#34;,
          &#34;/kubernetes/**/*.yaml&#34;,
        },
      },
      schemaStore = {
        enable = false,
        url = &#34;&#34;,
      },
    },
  },
  on_init = function(client)
    client.server_capabilities.documentFormattingProvider = nil
    client.server_capabilities.documentRangeFormattingProvider = nil
  end,
}
</code></pre>
<p>If you are using the <a href="https://github.com/mrjosh/helm-ls">Helm Language Server</a>,
you can adjust the configuration in
<a href="https://github.com/ricoberger/dotfiles/blob/acb3f643129799906a33bf72a290ba21f1270190/.config/nvim/lsp/helm_ls.lua"><code>helm_ls.lua</code></a>
to also utilize the <code>yamlls</code> script for starting the YAML Language Server.</p>
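<p>In my case, this boils down to pointing the Helm Language Server at the
<code>yamlls</code> script. A minimal sketch of the relevant settings could look
like this (see the linked <code>helm_ls.lua</code> for the full configuration):</p>
<pre><code class="language-lua">return {
  settings = {
    [&#34;helm-ls&#34;] = {
      yamlls = {
        enabled = true,
        path = &#34;yamlls&#34;,
      },
    },
  },
}
</code></pre>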
<h2>Schema Generation</h2>
<p>In the last section of this blog post, we will examine the
<a href="https://github.com/ricoberger/kubernetes-json-schema">ricoberger/kubernetes-json-schema</a>
repository and explain how the generation of the JSON schema works.</p>
<p>The JSON schema is generated by the
<a href="https://github.com/ricoberger/kubernetes-json-schema/blob/04a5c1e66245ca459ab518049cd822aa6c9985bd/utilities/generate.sh"><code>generate.sh</code></a>
script. First, we create a new <a href="https://kind.sigs.k8s.io/"><code>kind</code></a> cluster and
apply all CustomResourceDefinitions (CRDs) from the
<a href="https://github.com/ricoberger/kubernetes-json-schema/tree/04a5c1e66245ca459ab518049cd822aa6c9985bd/crds"><code>crds</code></a>
directory. Next, we start a <code>kubectl proxy</code> to access the Kubernetes API of the
<code>kind</code> cluster.</p>
<pre><code class="language-sh">kind create cluster --image=kindest/node:v1.34.0
kubectl apply --server-side -f crds/
kubectl proxy --port=5555 --accept-hosts=&#39;^.*&#39;
</code></pre>
<p>The
<a href="https://github.com/ricoberger/kubernetes-json-schema/blob/04a5c1e66245ca459ab518049cd822aa6c9985bd/utilities/openapi2jsonschema.py"><code>openapi2jsonschema.py</code></a>
script is used to fetch the OpenAPI v2 definition from the <code>kind</code> cluster and
convert it to JSON schema. The generated JSON schema files are stored in the
<a href="https://github.com/ricoberger/kubernetes-json-schema/tree/04a5c1e66245ca459ab518049cd822aa6c9985bd/schemas"><code>schemas</code></a>
directory.</p>
<pre><code class="language-sh">openapi2jsonschema.py &#34;schemas&#34; &#34;http://127.0.0.1:5555/openapi/v2&#34;
</code></pre>
<h2>Wrapping Up</h2>
<p>I&#39;m still unsure whether I&#39;m missing something obvious or if the YAML Language
Server is simply not designed to handle this use case. The latter suggests that
there are several issues related to the handling of CustomResourceDefinitions
(<a href="https://github.com/redhat-developer/yaml-language-server/pull/824">824</a>,
<a href="https://github.com/redhat-developer/yaml-language-server/pull/841">#841</a>,
<a href="https://github.com/redhat-developer/yaml-language-server/pull/962">#962</a>, and
<a href="https://github.com/redhat-developer/yaml-language-server/pull/1050">#1050</a>). If
you have any suggestions or improvements, please let me know. I would be happy
to hear your thoughts.</p>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/fixing-the-yaml-language-server/assets/yamlls.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/fixing-the-yaml-language-server/</guid><pubDate>Sun, 14 Sep 2025 12:00:00 +0000</pubDate></item><item><title>Reworking My Neovim Configuration</title><link>https://ricoberger.de/blog/posts/reworking-my-neovim-configuration/</link><description><![CDATA[<p>It has been a while since I last shared my Neovim configuration. Over time, I&#39;ve
made several adjustments to enhance my workflow. Recently, I decided to
streamline my setup by removing some plugins and replacing them with built-in
alternatives. In this post, I&#39;ll walk you through the changes I&#39;ve made and my
current Neovim configuration.</p>
<p><a href="https://ricoberger.de/blog/posts/reworking-my-neovim-configuration/assets/neovim.png"><img src="https://ricoberger.de/blog/posts/reworking-my-neovim-configuration/assets/neovim.png" alt="Neovim"/></a></p>
<h2>Managing Plugins</h2>
<p>Until now, I have used <code>lazy.nvim</code> to manage my plugins. While <code>lazy.nvim</code> is an
excellent plugin manager, I wanted to simplify my setup by using the
<a href="https://neovim.io/doc/user/pack.html#_plugin-manager">built-in plugin manager</a>.</p>
<p>Adding plugins with the built-in plugin manager is straightforward. We can do
this by calling <code>vim.pack.add()</code>. For example, to add my favorite color scheme,
<code>catppuccin</code>, we can simply include the following lines in our <code>init.lua</code> file:</p>
<pre><code class="language-lua">vim.pack.add({
  { src = &#34;https://github.com/catppuccin/nvim&#34;, name = &#34;catppuccin&#34; },
}, { load = true })

require(&#34;catppuccin&#34;).setup({...})
vim.cmd.colorscheme(&#34;catppuccin&#34;)
</code></pre>
<p>While exploring the built-in plugin manager, I found this
<a href="https://www.reddit.com/r/neovim/comments/1mc5taa/experimenting_with_lazy_loading_in_neovims_new/">Reddit post</a>
about implementing lazy loading. For instance, I&#39;m using <code>CopilotChat.nvim</code> for
AI assistance. To load this plugin only when needed, we can use the following
configuration:</p>
<pre><code class="language-lua">vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;co&#34;, &#34;&lt;cmd&gt;CopilotChatOpen&lt;cr&gt;&#34;)

vim.api.nvim_create_autocmd(&#34;CmdUndefined&#34;, {
  group = vim.api.nvim_create_augroup( &#34;lazy-load-copilotchat&#34;, { clear = true }),
  pattern = { &#34;CopilotChat*&#34; },
  callback = function()
    vim.pack.add({
      { src = &#34;https://github.com/nvim-lua/plenary.nvim&#34;, },
      { src = &#34;https://github.com/CopilotC-Nvim/CopilotChat.nvim&#34;, name = &#34;CopilotChat&#34;, },
    }, { load = true })

    require(&#34;CopilotChat&#34;).setup({...})
  end,
  once = true,
})
</code></pre>
<p>This will load the plugin only when we use the <code>:CopilotChatOpen</code> command or any
other command provided by the plugin.</p>
<h2>File Explorer</h2>
<p>Since its release, I&#39;ve been using the
<a href="https://github.com/folke/snacks.nvim/blob/main/docs/explorer.md"><code>snacks.nvim</code> explorer</a>.
I primarily use it to switch between files in the same directory and to copy,
move, rename, and delete files. I accomplish these tasks with the following
keymaps:</p>
<pre><code class="language-lua">vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;ee&#34;, function() local dir = vim.fn.expand(&#34;%:.:h&#34;) if dir == &#34;.&#34; or dir == &#34;&#34; then return &#34;:edit &#34; end return &#34;:edit &#34; .. vim.fn.expand(&#34;%:.:h&#34;) .. &#34;/&#34; end, { expr = true })
vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;ec&#34;, function() return &#34;:!cp &#34; .. vim.fn.expand(&#34;%:.&#34;) .. &#34; &#34; .. vim.fn.expand(&#34;%:.&#34;) end, { expr = true })
vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;em&#34;, function() return &#34;:!mv &#34; .. vim.fn.expand(&#34;%:.&#34;) .. &#34; &#34; .. vim.fn.expand(&#34;%:.&#34;) end, { expr = true })
vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;er&#34;, function() return &#34;:!rm &#34; .. vim.fn.expand(&#34;%:.&#34;) end, { expr = true })
vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;ey&#34;, function() vim.fn.setreg(&#34;+&#34;, vim.fn.expand(&#34;%:.&#34;)) end)
</code></pre>
<h2>Finding Files</h2>
<p>My go-to plugin for opening files has been the
<a href="https://github.com/folke/snacks.nvim/blob/main/docs/picker.md"><code>snacks.nvim</code> picker</a>.
However, after reading this
<a href="https://www.reddit.com/r/neovim/comments/1n10xdb/an_experiment_around_a_fuzzy_finder_without/">Reddit post</a>,
I wanted to explore the built-in capabilities of Neovim for this purpose.</p>
<p>With the Neovim nightly version, we can use the <code>findfunc</code> option to define a
custom function for finding files, the <code>wildtrigger()</code> function to initiate
wildcard expansion in the command line, and the <code>matchfuzzy()</code> function to
enable fuzzy matching.</p>
<p>To use <a href="https://github.com/sharkdp/fd"><code>fd</code></a> for finding files with the <code>:find</code>
command and to fuzzy match these files with the provided argument, we can add
the following code to our <code>init.lua</code> file:</p>
<pre><code class="language-lua">if vim.fn.executable(&#34;fd&#34;) == 1 then
  function _G.fd_find_files(cmdarg, _)
    local fnames = vim.fn.systemlist(&#34;fd --full-path --hidden --color never --type f --exclude .git&#34;)

    if #cmdarg == 0 then
      return fnames
    else
      return vim.fn.matchfuzzy(fnames, cmdarg)
    end
  end

  vim.opt.findfunc = &#34;v:lua.fd_find_files&#34;
end
</code></pre>
<p>To automatically enable autocompletion when typing the <code>:find</code> command, we can
add the following <code>autocmd</code>:</p>
<pre><code class="language-lua">vim.api.nvim_create_autocmd({ &#34;CmdlineChanged&#34;, &#34;CmdlineLeave&#34; }, {
  pattern = { &#34;*&#34; },
  group = vim.api.nvim_create_augroup( &#34;cmdline-autocompletion&#34;, { clear = true }),
  callback = function(ev)
    local function should_enable_autocomplete()
      local cmdline_cmd = vim.fn.split(vim.fn.getcmdline(), &#34; &#34;)[1]
      return cmdline_cmd == &#34;find&#34;
    end

    if ev.event == &#34;CmdlineChanged&#34; and should_enable_autocomplete() then
      vim.opt.wildmode = &#34;noselect:lastused,full&#34;
      vim.fn.wildtrigger()
    end

    if ev.event == &#34;CmdlineLeave&#34; then
      vim.opt.wildmode = &#34;full&#34;
    end
  end,
})
</code></pre>
<p>Other features I utilized from the <code>snacks.nvim</code> picker include finding recently
used files, sending found files to the quickfix list, locating buffers, and
identifying marks. I achieved these functionalities with custom functions and
keymaps, which can be found in my
<a href="https://github.com/ricoberger/dotfiles/blob/8ccaa830538c5c2eb6f02b8c2d795fb5f7025220/.config/nvim/init.lua#L424-L489">dotfiles repository</a>.</p>
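<p>As an illustration, one of these features fits in a few lines. The following
is a minimal sketch, not the exact code from my dotfiles, that sends recently
used files to the quickfix list using <code>vim.v.oldfiles</code>; the
<code>&lt;leader&gt;fr</code> keymap is an assumption:</p>
<pre><code class="language-lua">-- Sketch: populate the quickfix list with recently used files, similar to
-- the snacks.nvim &#34;recent&#34; picker. The keymap is hypothetical.
vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;fr&#34;, function()
  local items = {}
  for _, fname in ipairs(vim.v.oldfiles) do
    if vim.fn.filereadable(fname) == 1 then
      table.insert(items, { filename = fname, lnum = 1, text = fname })
    end
  end
  vim.fn.setqflist({}, &#34; &#34;, { title = &#34;Recent Files&#34;, items = items })
  vim.cmd(&#34;copen&#34;)
end)
</code></pre>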
<h2>Searching Through Files</h2>
<p>The last feature I used from <code>snacks.nvim</code> was searching through files. To
replace this functionality, I decided to use the built-in <code>:grep</code> command along
with <a href="https://github.com/BurntSushi/ripgrep"><code>rg</code></a>. To set this up, we can add
the following configuration to our <code>init.lua</code> file:</p>
<pre><code class="language-lua">if vim.fn.executable(&#34;rg&#34;) == 1 then
  vim.opt.grepprg = &#34;rg --vimgrep --smart-case --hidden --color=never --glob=&#39;!.git&#39;&#34;
  vim.opt.grepformat = &#34;%f:%l:%c:%m&#34;
end
</code></pre>
<p>Together with the following keymaps, I replicated the search functionality I had
with <code>snacks.nvim</code> and added some useful search commands. These include
searching in the directory of the current file, searching in the current file,
searching for the word under the cursor, searching for visually selected text,
and searching for todo comments like <code>TODO</code>, <code>FIXME</code>, and <code>BUG</code>. The last keymap
also replaces the
<a href="https://github.com/folke/todo-comments.nvim"><code>todo-comments.nvim</code></a> plugin for
me.</p>
<pre><code class="language-lua">vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;ss&#34;, &#34;:silent grep!&lt;space&gt;&#34;)
vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;sc&#34;, function() return &#34;:silent grep! --glob=&#39;&#34; .. vim.fn.expand(&#34;%:.:h&#34;) .. &#34;/**&#39; &#34; end, { expr = true })
vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;s/&#34;, function() return &#34;:silent grep! --glob=&#39;&#34; .. vim.fn.expand(&#34;%:.&#34;) .. &#34;&#39; &#34; end, { expr = true })
vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;sw&#34;, &#34;:silent grep!&lt;space&gt;&lt;c-r&gt;&lt;c-w&gt;&#34;)
vim.keymap.set(&#34;v&#34;, &#34;&lt;leader&gt;sv&#34;, &#39;y:silent grep!&lt;space&gt;&lt;c-r&gt;&#34;&#39;)
vim.keymap.set(&#34;n&#34;, &#34;&lt;leader&gt;st&#34;, &#34;:silent grep! -e=&#39;todo:&#39; -e=&#39;warn:&#39; -e=&#39;info:&#39; -e=&#39;xxx:&#39; -e=&#39;bug:&#39; -e=&#39;fixme:&#39; -e=&#39;fixit:&#39; -e=&#39;issue:&#39;&lt;cr&gt;&#34;)
</code></pre>
<h2>LSP, Linting and Formatting</h2>
<p>With the recently added support for
<a href="https://github.com/neovim/neovim/pull/33972">inline completion</a> in Neovim&#39;s
built-in LSP client, I decided to remove the
<a href="https://github.com/zbirenbaum/copilot.lua"><code>copilot.lua</code></a> plugin and use the
<a href="https://www.npmjs.com/package/@github/copilot-language-server"><code>@github/copilot-language-server</code></a>
instead. Along with the following <code>autocmd</code>, I can now utilize the inline
completion feature provided by GitHub Copilot. I also set up a keymap to accept
suggestions with <code>&lt;c-cr&gt;</code>.</p>
<pre><code class="language-lua">vim.api.nvim_create_autocmd(&#34;LspAttach&#34;, {
  callback = function(event)
    local client = vim.lsp.get_client_by_id(event.data.client_id)
    local buffer = event.buf

    if client then
      if client:supports_method( vim.lsp.protocol.Methods.textDocument_inlineCompletion) then
        vim.lsp.inline_completion.enable(true)
        vim.keymap.set(&#34;i&#34;, &#34;&lt;c-cr&gt;&#34;, function()
          if not vim.lsp.inline_completion.get() then
            return &#34;&lt;c-cr&gt;&#34;
          end
        end, {
          expr = true,
          replace_keycodes = true,
        })
      end
    end
  end,
})
</code></pre>
<p>I also added the <a href="https://github.com/mattn/efm-langserver">efm-langserver</a> to
replace <a href="https://github.com/mfussenegger/nvim-lint"><code>nvim-lint</code></a> and
<a href="https://github.com/stevearc/conform.nvim"><code>conform.nvim</code></a> for linting and
formatting. You can find the configuration for <code>efm-langserver</code> in my
<a href="https://github.com/ricoberger/dotfiles/blob/8ccaa830538c5c2eb6f02b8c2d795fb5f7025220/.config/nvim/lsp/efm.lua">dotfiles</a>.</p>
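<p>As a rough illustration, a minimal <code>lsp/efm.lua</code> could look like the
following sketch, which only wires up <code>stylua</code> for formatting Lua
files; the actual configuration in my dotfiles covers more linters and
formatters:</p>
<pre><code class="language-lua">-- Minimal efm-langserver configuration (a sketch, not the dotfiles version):
-- format Lua buffers by piping them through stylua via stdin.
return {
  cmd = { &#34;efm-langserver&#34; },
  filetypes = { &#34;lua&#34; },
  init_options = { documentFormatting = true },
  settings = {
    languages = {
      lua = {
        { formatCommand = &#34;stylua -&#34;, formatStdin = true },
      },
    },
  },
}
</code></pre>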
<details>
<summary>Formatting Using Automatic Commands</summary>
<p>While replacing <code>conform.nvim</code>, I also came across this
<a href="https://blog.erikwastaken.dev/posts/2023-05-06-a-case-for-neovim-without-plugins.html">blog post</a>
on implementing formatting using <code>autocmd</code>. I tried this for <code>golangci-lint</code>,
but then decided to use <code>efm-langserver</code> and the <code>golangci_lint_ls</code>
language server instead.</p>
<pre><code class="language-lua">local function run_golangcilint()
  vim.fn.jobstart({
    &#34;golangci-lint&#34;,
    &#34;run&#34;,
    &#34;--output.json.path=stdout&#34;,
    &#34;--output.text.path=&#34;,
    &#34;--output.tab.path=&#34;,
    &#34;--output.html.path=&#34;,
    &#34;--output.checkstyle.path=&#34;,
    &#34;--output.code-climate.path=&#34;,
    &#34;--output.junit-xml.path=&#34;,
    &#34;--output.teamcity.path=&#34;,
    &#34;--output.sarif.path=&#34;,
    &#34;--issues-exit-code=0&#34;,
    &#34;--show-stats=false&#34;,
  }, {
    stdout_buffered = true,
    on_stdout = function(_, data)
      local output = vim.trim(table.concat(data, &#34;\n&#34;))
      local ns = vim.api.nvim_create_namespace(&#34;golangcilint&#34;)
      vim.diagnostic.reset(ns)

      if output == &#34;&#34; then
        return
      end
      local decoded = vim.json.decode(output)
      if decoded[&#34;Issues&#34;] == nil or type(decoded[&#34;Issues&#34;]) == &#34;userdata&#34; then
        return
      end

      local severities = {
        error = vim.diagnostic.severity.ERROR,
        warning = vim.diagnostic.severity.WARN,
        refactor = vim.diagnostic.severity.INFO,
        convention = vim.diagnostic.severity.HINT,
      }

      local diagnostics = {}
      for _, item in ipairs(decoded[&#34;Issues&#34;]) do
        if vim.fn.expand(&#34;%&#34;) == item.Pos.Filename then
          local sv = severities[item.Severity] or severities.warning
          table.insert(diagnostics, {
            lnum = item.Pos.Line &gt; 0 and item.Pos.Line - 1 or 0,
            col = item.Pos.Column &gt; 0 and item.Pos.Column - 1 or 0,
            end_lnum = item.Pos.Line &gt; 0 and item.Pos.Line - 1 or 0,
            end_col = item.Pos.Column &gt; 0 and item.Pos.Column - 1 or 0,
            severity = sv,
            source = item.FromLinter,
            message = item.Text,
          })
        end
      end

      vim.diagnostic.set(ns, 0, diagnostics, {})
      vim.diagnostic.show()
    end,
  })
end

vim.api.nvim_create_autocmd({ &#34;BufEnter&#34;, &#34;BufWritePost&#34; }, {
  group = vim.api.nvim_create_augroup(&#34;linting-go&#34;, { clear = true }),
  callback = function(opts)
    if vim.bo[opts.buf].filetype == &#34;go&#34; then
      run_golangcilint()
    end
  end,
})
</code></pre>
</details>
<h2>Completion</h2>
<p>Until now, I&#39;ve been using <a href="https://github.com/Saghen/blink.cmp"><code>blink.cmp</code></a>.
While it is a great plugin, I wanted to explore Neovim&#39;s
<a href="https://github.com/neovim/neovim/pull/27339">built-in completion</a> capabilities.
To enable the built-in completion, we can add the following configuration to our
<code>LspAttach</code> autocmd:</p>
<pre><code class="language-lua">vim.api.nvim_create_autocmd(&#34;LspAttach&#34;, {
  callback = function(event)
    local client = vim.lsp.get_client_by_id(event.data.client_id)
    local buffer = event.buf

    if client then
      if client:supports_method(vim.lsp.protocol.Methods.textDocument_completion) then
        vim.lsp.completion.enable(true, client.id, buffer, { autotrigger = true })
        vim.keymap.set(&#34;i&#34;, &#34;&lt;c-space&gt;&#34;, function() vim.lsp.completion.get() end)
      end
    end
  end,
})
</code></pre>
<p>This will request completions when a trigger character is typed or when <code>Ctrl</code> +
<code>Space</code> is pressed. For me, this is more than enough since I primarily use
autocompletion results from the LSP, rather than the other sources I had with
<code>blink.cmp</code>, such as snippets, buffer, or path completions.</p>
<h2>Reviewing Pull Requests</h2>
<p>For reviewing pull requests, I used to rely on
<a href="https://github.com/sindrets/diffview.nvim"><code>diffview.nvim</code></a>. However, since I
already had the ability to view changes in the current file with
<a href="https://github.com/lewis6991/gitsigns.nvim"><code>gitsigns.nvim</code></a>, using another
plugin solely for pull request reviews seemed unnecessary. Therefore, I created
my own command, <code>:GitDiff</code>, to utilize <code>gitsigns.nvim</code> for this purpose.</p>
<pre><code class="language-lua">vim.api.nvim_create_user_command(&#34;GitDiff&#34;, function(opts)
  if #vim.fn.split(opts.args, &#34; &#34;) ~= 2 then
    return
  end

  local base = vim.fn.split(opts.args, &#34; &#34;)[1]
  local head = vim.fn.split(opts.args, &#34; &#34;)[2]

  local result = vim.system({ &#34;git&#34;, &#34;merge-base&#34;, base, head }):wait()
  if result.code ~= 0 then
    return
  end

  local commit = vim.fn.trim(result.stdout)

  local gitsigns = require(&#34;gitsigns&#34;)
  gitsigns.change_base(commit, true)
  gitsigns.setqflist(&#34;all&#34;)
end, { nargs = &#34;*&#34; })
</code></pre>
<p>The <code>:GitDiff</code> command requires two arguments: the base branch and the pull
request branch. It identifies the common ancestor commit of both branches and
uses this commit as the base for <code>gitsigns.nvim</code>. The command then populates the
quickfix list with all changes between the two branches.</p>
<p>I&#39;m actively using <a href="https://github.com/dlvhdr/gh-dash">gh-dash</a> and have added a
custom keymap to automatically trigger the command when opening a pull request.</p>
<pre><code class="language-yaml">keybindings:
  prs:
    - name: diff (nvim)
      key: D
      command: |
        cd {{.RepoPath}} &amp;&amp; git checkout {{.BaseRefName}} &amp;&amp; git pull &amp;&amp; gh pr checkout {{.PrNumber}} &amp;&amp; nvim -c &#34;:GitDiff {{.BaseRefName}} {{.HeadRefName}}&#34;
</code></pre>
<h2>Wrapping Up</h2>
<p>With the current Neovim nightly version, I have successfully replaced several
plugins with built-in alternatives. My setup is now more minimal, and I can
accomplish everything I need without relying on as many plugins. The plugins I
am still using are:</p>
<ul>
<li><a href="https://github.com/catppuccin/nvim"><code>catppuccin</code></a></li>
<li><a href="https://github.com/nvim-treesitter/nvim-treesitter"><code>nvim-treesitter</code></a></li>
<li><a href="https://github.com/qvalentin/helm-ls.nvim"><code>helm-ls.nvim</code></a></li>
<li><a href="https://github.com/nvim-lualine/lualine.nvim"><code>lualine.nvim</code></a></li>
<li><a href="https://github.com/lewis6991/gitsigns.nvim"><code>gitsigns.nvim</code></a></li>
<li><a href="https://github.com/jake-stewart/multicursor.nvim"><code>multicursor.nvim</code></a></li>
<li><a href="https://github.com/CopilotC-Nvim/CopilotChat.nvim"><code>CopilotChat.nvim</code></a></li>
</ul>
<p>I hope you enjoyed the blog post. You can find my complete Neovim configuration
in my
<a href="https://github.com/ricoberger/dotfiles/blob/8ccaa830538c5c2eb6f02b8c2d795fb5f7025220/.config/nvim">dotfiles</a>
repository and the updated Vim cheatsheet is also available
<a href="https://ricoberger.de/cheat-sheets/vim/">here</a> 🙂.</p>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/reworking-my-neovim-configuration/assets/neovim.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/reworking-my-neovim-configuration/</guid><pubDate>Thu, 11 Sep 2025 17:00:00 +0000</pubDate></item><item><title>Continuous Profiling Using Parca</title><link>https://ricoberger.de/blog/posts/continuous-profiling-using-parca/</link><description><![CDATA[<p>In today&#39;s blog post, we will examine continuous profiling using Parca. We will
set up Parca in a Kubernetes cluster, explore its architecture, and collect
profiles from an example application. Finally, we will review the Parca UI to
analyze the collected profiles. If you are not familiar with continuous
profiling, I recommend reading the
<a href="https://www.parca.dev/docs/overview#what-is-profiling">&#34;What is profiling?&#34;</a>
section in the Parca documentation.</p>
<p>Parca has two main components: the Parca Server and the Parca Agent. The Parca
Server stores profiling data and enables querying and analysis over time. The
Parca Agent is an eBPF-based whole-system profiler.</p>
<p>The diagram below illustrates the architecture of Parca and the Parca Agent.</p>
<p><a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/architecture.svg"><img src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/architecture.svg" alt="Parca Architecture"/></a></p>
<p>Parca can source profiles by retrieving them from targets via HTTP or by using
the Parca Agent, which pushes them to the Parca Server. The Parca Server then
stores these profiles. Different series of profiles are identified by their
unique label combinations and can be visualized through the Parca UI, which is
provided by the Parca Server, using icicle-graphs.</p>
<h2>Installation</h2>
<p>In the following, we will deploy three components to a Kubernetes cluster: an
example application that exposes pprof endpoints over HTTP, the Parca Server,
and the Parca Agent. Let&#39;s start by creating a new namespace called <code>parca</code> and
deploying the <a href="https://github.com/ricoberger/echoserver">echoserver</a>
as our example application.</p>
<pre><code class="language-sh">kubectl create namespace parca
helm upgrade --install echoserver --namespace parca oci://ghcr.io/ricoberger/charts/echoserver --version 1.0.3 --set-json=&#39;podLabels={&#34;app&#34;: &#34;echoserver&#34;}&#39;
</code></pre>
<p>In the next step, we deploy the Parca Server by applying the
<code>1-1-parca-server.yaml</code> file from the
<a href="https://github.com/ricoberger/playground/tree/d278b2c9dd149d2aea7bfda036a764ae51e0e6a1/kubernetes/parca">ricoberger/playground</a>
repository:</p>
<pre><code class="language-sh">kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/d278b2c9dd149d2aea7bfda036a764ae51e0e6a1/kubernetes/parca/1-1-parca-server.yaml
</code></pre>
<p>This will create a Service, StatefulSet, and ConfigMap for the Parca Server. The
ConfigMap contains the configuration for the Parca Server. We define the storage
location (the local filesystem<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>) that Parca will use to store the profiles
and add a scrape configuration for our example application.</p>
<pre><code class="language-yaml">object_storage:
  bucket:
    config:
      directory: /data
    type: FILESYSTEM
scrape_configs:
  - job_name: echoserver
    scrape_interval: 45s
    scrape_timeout: 60s
    static_configs:
      - targets:
          - echoserver.parca.svc.cluster.local:8080
    profiling_config:
      pprof_config:
        fgprof:
          enabled: true
          path: /debug/pprof/fgprof
</code></pre>
<p>Last but not least, we deploy the Parca Agent using the <code>1-2-parca-agent.yaml</code>
file:</p>
<pre><code class="language-sh">kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/d278b2c9dd149d2aea7bfda036a764ae51e0e6a1/kubernetes/parca/1-2-parca-agent.yaml
</code></pre>
<p>The Parca Agent is deployed as a DaemonSet and requires a ClusterRole,
ClusterRoleBinding, and ServiceAccount to watch all Nodes and Pods within the
cluster. The deployed ConfigMap contains the configuration for the Parca Agent,
including the relabel configuration, ensuring that the pushed profiles have all
the necessary labels for later differentiation.</p>
<p>At this point, our example application, the Parca Server, and the Parca Agent
should be up and running:</p>
<pre><code class="language-plaintext">NAME                              READY   STATUS    RESTARTS   AGE
echoserver-664ccd6944-hjxpb       1/1     Running   0          4m15s
parca-agent-lq8h5                 1/1     Running   0          3m7s
parca-server-0                    1/1     Running   0          3m37s
</code></pre>
<p>We can also access the Parca UI via
<code>kubectl port-forward -n parca svc/parca-server 7070</code>, where we should see our
example application in the targets list:
<a href="http://localhost:7070/targets">http://localhost:7070/targets</a>.</p>
<p><a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/parca-server-targets.png"><img src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/parca-server-targets.png" alt="Parca Server - Targets"/></a></p>
<h3>Parca Operator</h3>
<p>Before we continue exploring the Parca UI, I want to share a small side project
of mine. The <a href="https://github.com/ricoberger/parca-operator">Parca Operator</a>
allows us to dynamically populate the <code>scrape_configs</code> of the Parca Server using
a <code>ParcaScrapeConfig</code> Custom Resource.</p>
<p>The operator uses the Kubernetes Service Discovery feature of Parca to
dynamically add, update, and remove scrape configurations, similar to the
Prometheus Operator. This allows us to replace our static configuration for the
echoserver, which would not function correctly if we run more than one replica
of the service.</p>
<p>Let&#39;s first update our configuration for the Parca Server by removing the
<code>scrape_configs</code>:</p>
<pre><code class="language-sh">kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/d278b2c9dd149d2aea7bfda036a764ae51e0e6a1/kubernetes/parca/2-1-parca-server.yaml
</code></pre>
<p>Now we can install the Parca Operator using the corresponding Helm chart and the
<a href="https://raw.githubusercontent.com/ricoberger/playground/d278b2c9dd149d2aea7bfda036a764ae51e0e6a1/kubernetes/parca/3-1-parca-operator.yaml"><code>3-1-parca-operator.yaml</code></a>
values file. The operator will mount the Parca Server&#39;s configuration file and
create a new configuration file as a Secret in the <code>parca</code> namespace, named
<code>parca-server-generated</code>.</p>
<pre><code class="language-sh">helm upgrade --install parca-operator --namespace parca oci://ghcr.io/ricoberger/charts/parca-operator --version 1.2.0 -f https://raw.githubusercontent.com/ricoberger/playground/d278b2c9dd149d2aea7bfda036a764ae51e0e6a1/kubernetes/parca/3-1-parca-operator.yaml
</code></pre>
<p>In the next step, we will update our Parca Server setup to use the Secret
instead of the ConfigMap for its configuration. We will also add a ClusterRole,
ClusterRoleBinding, and ServiceAccount for the Parca Server, enabling it to list
and watch all Pods in the cluster.</p>
<pre><code class="language-sh">kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/d278b2c9dd149d2aea7bfda036a764ae51e0e6a1/kubernetes/parca/3-2-parca-server.yaml
</code></pre>
<pre><code class="language-plaintext">NAME                              READY   STATUS    RESTARTS   AGE
echoserver-664ccd6944-hjxpb       1/1     Running   0          8m2s
parca-agent-lq8h5                 1/1     Running   0          6m54s
parca-operator-6789868cdd-7txdf   1/1     Running   0          103s
parca-server-0                    1/1     Running   0          49s
</code></pre>
<p>Last but not least, we will create a <code>ParcaScrapeConfig</code> for our example
application:</p>
<pre><code class="language-sh">kubectl apply -f https://raw.githubusercontent.com/ricoberger/playground/d278b2c9dd149d2aea7bfda036a764ae51e0e6a1/kubernetes/parca/3-3-echoserver-parcascrapeconfig.yaml
</code></pre>
<p>In the <code>ParcaScrapeConfig</code>, we select all Pods with the <code>app</code> label set to
<code>echoserver</code>. We also specify the port (<code>http</code>) used to serve the pprof
endpoints. The remaining configuration is similar to the static configuration we
used previously.</p>
<pre><code class="language-yaml">apiVersion: parca.ricoberger.de/v1alpha1
kind: ParcaScrapeConfig
metadata:
  name: echoserver
  namespace: parca
spec:
  selector:
    matchLabels:
      app: echoserver
  scrapeConfig:
    port: http
    interval: 45s
    timeout: 60s
    profilingConfig:
      pprofConfig:
        fgprof:
          enabled: true
          path: /debug/pprof/fgprof
</code></pre>
<p>If we access the <a href="http://localhost:7070/targets">targets list</a> in the Parca UI
now, we should see a new job for the echoserver with a different set of labels
than before.</p>
<p><a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/parca-server-targets-with-parca-operator.png"><img src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/parca-server-targets-with-parca-operator.png" alt="Parca Server - Targets with Parca Operator"/></a></p>
<h2>Explore Profiles</h2>
<p>In the last section of this blog post, we will explore the profiles ingested
into Parca. Let&#39;s open the Parca UI by creating a port-forward to the Parca
Server using the command <code>kubectl port-forward -n parca svc/parca-server 7070</code>
and then opening <a href="http://localhost:7070/">http://localhost:7070/</a> in our
browser.</p>
<p>In the first step we explore the profiles captured by the Parca Agent. To do
this, we select <code>On-CPU</code> from the <code>Profile Type</code> dropdown menu. For now, we are
only interested in the profiles of the echoserver. To filter the profiles, we
select the <code>container</code> label and the <code>echoserver</code> value. Lastly, we select the
<code>pod</code> label from the <code>Sum By</code> dropdown menu. After that, we should see the
profiles captured by the Parca Agent for the echoserver.</p>
<p><a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/on-cpu-profile.png"><img src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/on-cpu-profile.png" alt="On-CPU Profiles"/></a></p>
<p>To simulate a CPU-intensive task, we create a port-forward to the echoserver
using the command <code>kubectl port-forward -n parca svc/echoserver 8080</code>.
Afterwards we run the following cURL command:</p>
<pre><code class="language-sh">curl -vvv &#34;http://localhost:8080/fibonacci?n=100000000&#34;
</code></pre>
<p>Once the request is complete, we can refresh the profiles displayed in the Parca
UI by clicking the <code>Search</code> button. We should now observe a spike in the CPU
usage of the echoserver. By hovering over one of the data points, we can see how
many cores the echoserver used per second and the total CPU usage time. Clicking
on the data point will display the captured sample below the graph, allowing us
to check where the most time was spent. As expected, most of the time should be
attributed to the <code>main.fibonacciHandler</code> function.</p>
<div class="grid grid-cols-2 md:grid-cols-2 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/on-cpu-profile-1.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/on-cpu-profile-1.png" alt="On-CPU Profile"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/on-cpu-profile-2.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/on-cpu-profile-2.png" alt="On-CPU Profile"/>
    </a>
  </div>
</div>
<p>When using the Parca Agent, only the <code>On-CPU</code> profiles are available. However,
since we added a scrape configuration for the echoserver, the other profiles
should also be available. These include:</p>
<ul>
<li><strong>Fgprof Samples Total</strong>: CPU profile samples observed regardless of their
current On/Off CPU scheduling status.</li>
<li><strong>Fgprof Samples Time Total</strong>: CPU profile measured regardless of their
current On/Off CPU scheduling status, in nanoseconds.</li>
<li><strong>Goroutine Created Total</strong>: Stack traces that created all current goroutines.</li>
<li><strong>Memory Allocated Objects Total</strong>: A sampling of all past memory allocations
by objects.</li>
<li><strong>Memory Allocated Bytes Total</strong>: A sampling of all past memory allocations in
bytes.</li>
<li><strong>Memory In-Use Objects</strong>: A sampling of memory allocations of live objects by
objects.</li>
<li><strong>Memory In-Use Bytes</strong>: A sampling of memory allocations of live objects by
bytes.</li>
<li><strong>Process CPU Nanoseconds</strong>: CPU profile measured by the process itself in
nanoseconds.</li>
<li><strong>Process CPU Samples</strong>: CPU profile samples observed by the process itself.</li>
</ul>
<p>If we select <code>Process CPU Samples</code> from the <code>Profile Types</code> dropdown, we
should see a pattern similar to the <code>On-CPU</code> profiles.</p>
<div class="grid grid-cols-2 md:grid-cols-2 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/process-cpu-samples-1.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/process-cpu-samples-1.png" alt="Process CPU Samples"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/process-cpu-samples-2.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/process-cpu-samples-2.png" alt="Process CPU Samples"/>
    </a>
  </div>
</div>
<p>Lastly, we will select the <code>Memory In-Use Bytes</code> item from the <code>Profile Types</code>
dropdown. When we hover over a data point in the graph, we can see the number of
bytes used by the echoserver during this time. If we click on the data point, we
can view the sample below the graph to analyze where most of our memory was
spent.</p>
<div class="grid grid-cols-2 md:grid-cols-2 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/memory-in-use-bytes-1.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/memory-in-use-bytes-1.png" alt="Memory In-Use Bytes"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/memory-in-use-bytes-2.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/memory-in-use-bytes-2.png" alt="Memory In-Use Bytes"/>
    </a>
  </div>
</div>
<div class="footnotes" role="doc-endnotes">
<hr/>
<ol>
<li id="fn:1">
<p>Parca supports the same storage configuration options as
<a href="https://thanos.io/tip/thanos/storage.md/#supported-clients">Thanos</a>. This
means it can also be used together with an S3 compatible object storage or
an Azure Storage Account. This would also be the recommended storage option
for a production deployment, because Parca doesn&#39;t handle the retention for
stored data and it is recommended to configure this via a lifecycle rule in
the object storage. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</div>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/continuous-profiling-using-parca/assets/preview.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/continuous-profiling-using-parca/</guid><pubDate>Sat, 28 Jun 2025 12:00:00 +0000</pubDate></item><item><title>Deploy YugabyteDB on a Multi-Zone AKS Cluster</title><link>https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/</link><description><![CDATA[<p>After getting started with YugabyteDB in the
<a href="http://ricoberger.de/blog/posts/getting-started-with-yugabytedb/">last blog post</a>,
I wanted to explore how to set up a zone-aware YugabyteDB cluster. To do this,
we will create a multi-zone AKS cluster and use the standard single-zone
YugabyteDB Helm chart to deploy one third of the database cluster&#39;s nodes in
each of the three zones.</p>
<p>With the <code>commerce</code> schema from the last post, the final architecture will
appear as shown in the following graphic. We will have three StatefulSets for
the YB-Master and three StatefulSets for the YB-TServer. Each StatefulSet will
contain one Pod running in the specified AKS zone. The tablets for each table in
the <code>commerce</code> schema will be distributed across all Pods in the cluster. All
the configuration files we are using can be found in the
<a href="https://github.com/ricoberger/playground/tree/d07d42dfa25386737dd84edeee2f9b1054101edd/applications/deploy-yugabytedb-on-a-multi-zone-aks-cluster">ricoberger/playground</a>
GitHub repository.</p>
<p><a href="https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/assets/architecture.png"><img src="https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/assets/architecture.png" alt="Architecture"/></a></p>
<h2>Create an AKS Cluster</h2>
<p>Create an AKS cluster, if you have not already done so, by running the following
commands. Note that if you do not explicitly specify 3 zones via the <code>--zones</code>
parameter, AKS may place the 3 nodes in only 2 zones.</p>
<pre><code class="language-sh">az group create --name yugabyte --location germanywestcentral
az aks create --resource-group yugabyte --name yugabyte --node-count 3 --zones 1 2 3
</code></pre>
<p>Next, we create a
<a href="https://github.com/ricoberger/playground/tree/d07d42dfa25386737dd84edeee2f9b1054101edd/applications/deploy-yugabytedb-on-a-multi-zone-aks-cluster/storageclass.yaml">StorageClass</a>.
We set the <code>volumeBindingMode</code> to <code>WaitForFirstConsumer</code> so that
volumes are provisioned according to the pods&#39; zone affinities.</p>
<pre><code class="language-sh">kubectl apply --server-side -f storageclass.yaml
</code></pre>
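<p>For reference, here is a minimal sketch of what such a StorageClass could look
like on AKS. The disk SKU and the use of the Azure Disk CSI provisioner are
assumptions for illustration and may differ from the exact file in the
repository:</p>
<pre><code class="language-yaml">apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: yb-storage
# Azure Disk CSI driver (assumed provisioner)
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
# Delay volume binding until a pod is scheduled, so the disk is
# created in the same zone as the pod that claims it
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
</code></pre>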
<h2>Create a YugabyteDB Cluster</h2>
<p>Add the Helm chart repository and make sure that we have the latest updates to
the repository by running the following commands.</p>
<pre><code class="language-sh">helm repo add yugabytedb https://charts.yugabyte.com
helm repo update
</code></pre>
<p>Before we can install the Helm charts, we have to create the 3 namespaces first.</p>
<pre><code class="language-sh">kubectl create namespace yb-germanywestcentral-1
kubectl create namespace yb-germanywestcentral-2
kubectl create namespace yb-germanywestcentral-3
</code></pre>
<p>Now we create the overall YugabyteDB cluster in such a way that one third of the
nodes are hosted in each zone.</p>
<pre><code class="language-sh">helm upgrade --install yb-germanywestcentral-1 yugabytedb/yugabyte --version 2.25.1 --namespace yb-germanywestcentral-1 --wait -f values-yb-germanywestcentral-1.yaml
helm upgrade --install yb-germanywestcentral-2 yugabytedb/yugabyte --version 2.25.1 --namespace yb-germanywestcentral-2 --wait -f values-yb-germanywestcentral-2.yaml
helm upgrade --install yb-germanywestcentral-3 yugabytedb/yugabyte --version 2.25.1 --namespace yb-germanywestcentral-3 --wait -f values-yb-germanywestcentral-3.yaml
</code></pre>
<h2>Check the Cluster Status</h2>
<p>We can check the status of the cluster using various commands noted below.</p>
<p>Check the pods.</p>
<pre><code class="language-sh">kubectl get pods -A | grep yb-germanywestcentral
</code></pre>
<pre><code class="language-plaintext">yb-germanywestcentral-1                     yb-master-0                                                      3/3     Running            0                7m18s
yb-germanywestcentral-1                     yb-tserver-0                                                     3/3     Running            0                7m18s
yb-germanywestcentral-2                     yb-master-0                                                      3/3     Running            0                5m55s
yb-germanywestcentral-2                     yb-tserver-0                                                     3/3     Running            0                5m55s
yb-germanywestcentral-3                     yb-master-0                                                      3/3     Running            0                4m53s
yb-germanywestcentral-3                     yb-tserver-0                                                     3/3     Running            0                4m53s
</code></pre>
<p>Check the services.</p>
<pre><code class="language-sh">kubectl get service -A | grep yb-germanywestcentral
</code></pre>
<pre><code class="language-plaintext">yb-germanywestcentral-1                     yb-masters                                      ClusterIP      None           &lt;none&gt;         7000/TCP,7100/TCP,15433/TCP                                                            8m6s
yb-germanywestcentral-1                     yb-tservers                                     ClusterIP      None           &lt;none&gt;         9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP,15433/TCP   8m6s
yb-germanywestcentral-2                     yb-masters                                      ClusterIP      None           &lt;none&gt;         7000/TCP,7100/TCP,15433/TCP                                                            6m42s
yb-germanywestcentral-2                     yb-tservers                                     ClusterIP      None           &lt;none&gt;         9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP,15433/TCP   6m42s
yb-germanywestcentral-3                     yb-masters                                      ClusterIP      None           &lt;none&gt;         7000/TCP,7100/TCP,15433/TCP                                                            5m40s
yb-germanywestcentral-3                     yb-tservers                                     ClusterIP      None           &lt;none&gt;         9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP,15433/TCP   5m40s
</code></pre>
<p>We can also access the YB-Master Admin UI for the cluster at
<code>http://localhost:7000</code>. Note that we can use any of the above three services
for this purpose as all of them will show the same cluster metadata.</p>
<pre><code class="language-sh">kubectl port-forward --namespace yb-germanywestcentral-1 svc/yb-masters 7000
</code></pre>
<p><a href="https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/assets/yb-master-admin-ui.png"><img src="https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/assets/yb-master-admin-ui.png" alt="YB-Master Admin UI"/></a></p>
<h2>Configure Zone-Aware Replica Placement</h2>
<p>The default replica placement policy treats every yb-tserver as equal
irrespective of its <code>placement_*</code> setting. We go to
<code>http://localhost:7000/cluster-config</code> to confirm that the default configuration
is still in effect.</p>
<p><a href="https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/assets/cluster-configuration-1.png"><img src="https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/assets/cluster-configuration-1.png" alt="Cluster Configuration"/></a></p>
<p>To make the replica placement zone-aware, so that one replica is placed in each
zone, we run the following command:</p>
<pre><code class="language-sh">kubectl exec -it --namespace yb-germanywestcentral-1 yb-master-0 -- bash -c &#34;/home/yugabyte/master/bin/yb-admin --master_addresses yb-master-0.yb-masters.yb-germanywestcentral-1.svc.cluster.local:7100,yb-master-0.yb-masters.yb-germanywestcentral-2.svc.cluster.local:7100,yb-master-0.yb-masters.yb-germanywestcentral-3.svc.cluster.local:7100 modify_placement_info azure.germanywestcentral.germanywestcentral-1,azure.germanywestcentral.germanywestcentral-2,azure.germanywestcentral.germanywestcentral-3 3&#34;
</code></pre>
<p>To see the new configuration, we go to <code>http://localhost:7000/cluster-config</code>.</p>
<p><a href="https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/assets/cluster-configuration-2.png"><img src="https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/assets/cluster-configuration-2.png" alt="Cluster Configuration"/></a></p>
<h2>Single Namespace</h2>
<blockquote>
<p>The following section was added after the blog post was first published on
April 14, 2025.</p>
</blockquote>
<p>After publishing the first version of the blog post, I wondered if it was
possible to create a zone-aware YugabyteDB cluster within a single namespace,
allowing PodDisruptionBudgets to apply to the entire cluster. It turns out this
is achievable by setting the <code>oldNamingStyle</code> value to <code>false</code> in the Helm
chart. You can find the updated values files in the
<a href="https://github.com/ricoberger/playground/tree/3457e1927526849001fbc838d229cc7450f18afa/applications/deploy-yugabytedb-on-a-multi-zone-aks-cluster">ricoberger/playground</a>
repository.</p>
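<p>As a rough sketch, each per-zone values file could combine the new naming style
with the zone placement, roughly like the following. Only <code>oldNamingStyle</code>
is confirmed above; the remaining keys are illustrative and may differ from the
actual values files in the repository:</p>
<pre><code class="language-yaml"># values-single-namespace-yb-germanywestcentral-1.yaml (sketch)
# Prefix resources with the release name, so that three releases
# can share the yugabytedb namespace
oldNamingStyle: false
# Zone placement (assumed keys for a multi-AZ deployment)
isMultiAz: true
AZ: germanywestcentral-1
replicas:
  master: 1
  tserver: 1
</code></pre>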
<pre><code class="language-sh">kubectl create namespace yugabytedb
</code></pre>
<pre><code class="language-sh">helm upgrade --install yb-germanywestcentral-1 yugabytedb/yugabyte --version 2.25.1 --namespace yugabytedb --wait -f values-single-namespace-yb-germanywestcentral-1.yaml
helm upgrade --install yb-germanywestcentral-2 yugabytedb/yugabyte --version 2.25.1 --namespace yugabytedb --wait -f values-single-namespace-yb-germanywestcentral-2.yaml
helm upgrade --install yb-germanywestcentral-3 yugabytedb/yugabyte --version 2.25.1 --namespace yugabytedb --wait -f values-single-namespace-yb-germanywestcentral-3.yaml
</code></pre>
<pre><code class="language-sh">kubectl exec -it --namespace yugabytedb yb-germanywestcentral-1-yb-master-0 -- bash -c &#34;/home/yugabyte/master/bin/yb-admin --master_addresses yb-germanywestcentral-1-yb-master-0.yb-germanywestcentral-1-yb-masters.yugabytedb.svc.cluster.local:7100,yb-germanywestcentral-2-yb-master-0.yb-germanywestcentral-2-yb-masters.yugabytedb.svc.cluster.local:7100,yb-germanywestcentral-3-yb-master-0.yb-germanywestcentral-3-yb-masters.yugabytedb.svc.cluster.local:7100 modify_placement_info azure.germanywestcentral.germanywestcentral-1,azure.germanywestcentral.germanywestcentral-2,azure.germanywestcentral.germanywestcentral-3 3&#34;
</code></pre>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/assets/architecture.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/deploy-yugabytedb-on-a-multi-zone-aks-cluster/</guid><pubDate>Sun, 13 Apr 2025 19:00:00 +0000</pubDate></item><item><title>Getting Started with YugabyteDB</title><link>https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/</link><description><![CDATA[<p>Hi and welcome to another database blog post. Last time, we explored Vitess;
this time, we will look at Yugabyte. We will set up a simple Yugabyte cluster on
Kubernetes. Once the cluster is running, we will create the <code>commerce</code> schema
from the
&#34;<a href="http://ricoberger.de/blog/posts/getting-started-with-vitess/">Getting Started with Vitess</a>&#34;
blog post and experiment with the cluster. Finally, we will examine how to
monitor a Yugabyte cluster using Prometheus. Please note that I am not an expert
in YugabyteDB; I simply want to experiment with it in this blog post.</p>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/yugabytedb.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/yugabytedb.png" alt="YugabyteDB"/></a></p>
<p>Before we begin setting up YugabyteDB, we need a running Kubernetes cluster. If
you want to try this on your local machine, you can use a tool like
<a href="https://kind.sigs.k8s.io/">kind</a> to create a local cluster.</p>
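<p>If you go the kind route, a cluster configuration with three worker nodes lets
the three YB-Master and YB-TServer replicas spread across nodes. This is a
sketch of a possible kind configuration, not a file from the repository:</p>
<pre><code class="language-yaml"># kind-config.yaml (sketch)
# Create the cluster via: kind create cluster --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
</code></pre>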
<p>If you are not familiar with the
<a href="https://docs.yugabyte.com/preview/architecture/">architecture</a> and
<a href="https://docs.yugabyte.com/preview/architecture/key-concepts/">key concepts</a> of
YugabyteDB, I recommend reviewing the documentation before proceeding. All the
configuration files we are using can be found in the
<a href="https://github.com/ricoberger/playground/tree/84a9d0dcf439bce2394d70f51d9088ec35ed3392/applications/getting-started-with-yugabytedb">ricoberger/playground</a>
GitHub repository.</p>
<h2>Installation</h2>
<p>We will create our YugabyteDB cluster using the
<a href="https://github.com/yugabyte/charts">Helm chart</a>. To install the Helm chart, we
will create a new namespace called <code>yugabytedb</code> and set up a new cluster using
the
<a href="https://github.com/ricoberger/playground/tree/84a9d0dcf439bce2394d70f51d9088ec35ed3392/applications/getting-started-with-yugabytedb/002_values.yaml"><code>002_values.yaml</code></a>
values file. Afterward, we can verify that the cluster is running by executing
<code>kubectl get pods</code>.</p>
<pre><code class="language-sh">kubectl apply --server-side -f 001_namespace.yaml

helm repo add yugabytedb https://charts.yugabyte.com
helm repo update

helm upgrade --install yugabytedb yugabytedb/yugabyte --version 2.25.1 --namespace yugabytedb --wait -f 002_values.yaml
</code></pre>
<pre><code class="language-plaintext">NAME           READY   STATUS    RESTARTS   AGE
yb-master-0    3/3     Running   0          2m4s
yb-master-1    3/3     Running   0          2m4s
yb-master-2    3/3     Running   0          2m4s
yb-tserver-0   3/3     Running   0          2m4s
yb-tserver-1   3/3     Running   0          2m4s
yb-tserver-2   3/3     Running   0          2m4s
</code></pre>
<p>After a few minutes, all pods should be in the <code>Running</code> status. At this
point, we can also check the state of the cluster using the <code>yb_servers()</code>
database function:</p>
<pre><code class="language-sh"># Password: mypassword
kubectl exec -it -n yugabytedb yb-tserver-0 -- ysqlsh --dbname yugabyte --username yugabyte --password -c &#34;select * from yb_servers();&#34;
</code></pre>
<pre><code class="language-plaintext">                         host                          | port | num_connections | node_type | cloud  |   region    | zone  |                       public_ip                       |               uuid
-------------------------------------------------------+------+-----------------+-----------+--------+-------------+-------+-------------------------------------------------------+----------------------------------
 yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local | 5433 |               0 | primary   | cloud1 | datacenter1 | rack1 | yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local | 416a684d83e74d96962b95b2128b9870
 yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local | 5433 |               0 | primary   | cloud1 | datacenter1 | rack1 | yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local | 579810f2f1cc46adab7ed05ce00dde64
 yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local | 5433 |               0 | primary   | cloud1 | datacenter1 | rack1 | yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local | 7119b812c43448bc8d72e6b5580cdff8
(3 rows)
</code></pre>
<h2>Create a User, Database and Schema</h2>
<p>Now that we have a running YugabyteDB cluster, we can create a user, a database,
and the schema for our <code>commerce</code> application. We create a database named
<code>commerce</code> and a user named <code>commerceuser</code>. The user is granted all privileges
to access the database and schema.</p>
<pre><code class="language-sh"># Password: mypassword
kubectl exec -it -n yugabytedb yb-tserver-0 -- ysqlsh --dbname yugabyte --username yugabyte --password
</code></pre>
<pre><code class="language-sql">-- Create database
create database commerce;

-- Create user
create role commerceuser login password &#39;commercepassword&#39;;

-- Grant privileges to user on database
grant all on database commerce to commerceuser;

-- Connect to database
\c commerce

-- Grant privileges to user on schema in database
grant all on schema public to commerceuser;
</code></pre>
<p>Now we can log in to the <code>commerce</code> database using the newly created user.</p>
<pre><code class="language-sh"># Password: commercepassword
kubectl exec -it -n yugabytedb yb-tserver-0 -- ysqlsh --dbname commerce --username commerceuser --password
</code></pre>
<p>Once we connect to the <code>commerce</code> database, we can create the schema and insert
data into the tables we create.</p>
<pre><code class="language-sql">-- Create schema
create table if not exists product(
  sku varchar(128),
  description varchar(128),
  price bigint,
  primary key(sku)
);

create table if not exists customer(
  customer_id bigserial,
  email varchar(128),
  primary key(customer_id)
);

create table if not exists corder(
  order_id bigserial,
  customer_id bigint,
  sku varchar(128),
  price bigint,
  primary key(order_id)
);

-- Load data
insert into product(sku, description, price) values(&#39;SKU-1001&#39;, &#39;Monitor&#39;, 100);
insert into product(sku, description, price) values(&#39;SKU-1002&#39;, &#39;Keyboard&#39;, 30);

-- Show relations
\d

-- Close database session
\q
</code></pre>
<p>At this point, we have our initial schema, and it&#39;s time to examine the created
pods, tables, and tablets. As shown in the graphic below, we created two
StatefulSets during the installation of the YugabyteDB cluster. The <code>yb-master</code>
StatefulSet contains three YB-Master pods. We can check which of the three is
the current leader using the <code>yb-admin list_all_masters</code> command. In our case,
the leader is the <code>yb-master-0</code> pod.</p>
<p>The <code>yb-tserver</code> StatefulSet contains three YB-TServer pods. Using the
<code>yb-admin list_tablets &lt;keyspace-type&gt;.&lt;keyspace-name&gt; &lt;table&gt;</code> and
<code>yb-admin list_tablet_servers &lt;tablet-id&gt;</code> commands, we can see how our
tables are split into tablets and how the tablets are distributed. For example,
one tablet was created for the <code>product</code> table, which is
<a href="https://docs.yugabyte.com/preview/architecture/docdb-replication/replication/">replicated three times</a>.
The leader is running on <code>yb-tserver-2</code>, while the other two pods are
followers.</p>
<blockquote>
<p><strong>Note:</strong> Your architecture may differ slightly:</p>
<ul>
<li>The YB-Master leader may be on a different pod</li>
<li>The leader for each tablet may be on a different YB-TServer pod</li>
</ul>
</blockquote>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/architecture-initial.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/architecture-initial.png" alt="Architecture - Initial"/></a></p>
<details>
<summary>Details</summary>
<p>To generate the architecture diagram above, the following commands were used:</p>
<pre><code class="language-sh">kubectl exec -it -n yugabytedb yb-tserver-0 -- bash
</code></pre>
<pre><code class="language-plaintext">$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_all_masters
Master UUID                             RPC Host/Port           State           Role    Broadcast Host/Port
39151c52cc5140d1bfb560a9bd079d6f        yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100        ALIVE           LEADER  yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100
259542fbc93e4c8694f78ce0e5d65085        yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100        ALIVE           FOLLOWER        yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100
f04da599708f46eba331d10ff2732125        yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100        ALIVE           FOLLOWER        yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce product
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
9a57c185f18448e1902d961068f4c763        partition_key_start: &#34;&#34; partition_key_end: &#34;&#34;                   yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      579810f2f1cc46adab7ed05ce00dde64

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 9a57c185f18448e1902d961068f4c763
Server UUID                             RPC Host/Port           Role
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce customer
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
251ba609612948f085bf149e430e607f        partition_key_start: &#34;&#34; partition_key_end: &#34;&#34;                   yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      7119b812c43448bc8d72e6b5580cdff8

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 251ba609612948f085bf149e430e607f
Server UUID                             RPC Host/Port           Role
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce corder
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
5692cde127c9421aaa7aea8b8eed67c1        partition_key_start: &#34;&#34; partition_key_end: &#34;&#34;                   yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      579810f2f1cc46adab7ed05ce00dde64

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 5692cde127c9421aaa7aea8b8eed67c1
Server UUID                             RPC Host/Port           Role
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
</code></pre>
<p>We can also use the <code>yb_table_properties</code> function to determine the number of
tablets per table and the <code>yb_local_tablets</code> view to get metadata for
YSQL/YCQL/system tablets on a server.</p>
<pre><code class="language-sh"># Password: mypassword
kubectl exec -it -n yugabytedb yb-tserver-0 -- ysqlsh --dbname yugabyte --username yugabyte --password
</code></pre>
<pre><code class="language-plaintext">yugabyte=# \c commerce
Password:
You are now connected to database &#34;commerce&#34; as user &#34;yugabyte&#34;.
commerce=# select * from yb_table_properties(&#39;product&#39;::regclass);
 num_tablets | num_hash_key_columns | is_colocated | tablegroup_oid | colocation_id
-------------+----------------------+--------------+----------------+---------------
           1 |                    1 | f            |                |
(1 row)

commerce=# select * from yb_table_properties(&#39;customer&#39;::regclass);
 num_tablets | num_hash_key_columns | is_colocated | tablegroup_oid | colocation_id
-------------+----------------------+--------------+----------------+---------------
           1 |                    1 | f            |                |
(1 row)

commerce=# select * from yb_table_properties(&#39;corder&#39;::regclass);
 num_tablets | num_hash_key_columns | is_colocated | tablegroup_oid | colocation_id
-------------+----------------------+--------------+----------------+---------------
           1 |                    1 | f            |                |
(1 row)

commerce=# select * from yb_local_tablets where namespace_name = &#39;commerce&#39;;
            tablet_id             |             table_id             | table_type | namespace_name | ysql_schema_name | table_name | partition_key_start | partition_key_end |       state
----------------------------------+----------------------------------+------------+----------------+------------------+------------+---------------------+-------------------+-------------------
 5692cde127c9421aaa7aea8b8eed67c1 | 0000400000003000800000000000400d | YSQL       | commerce       | public           | corder     |                     |                   | TABLET_DATA_READY
 251ba609612948f085bf149e430e607f | 00004000000030008000000000004006 | YSQL       | commerce       | public           | customer   |                     |                   | TABLET_DATA_READY
 9a57c185f18448e1902d961068f4c763 | 00004000000030008000000000004000 | YSQL       | commerce       | public           | product    |                     |                   | TABLET_DATA_READY
(3 rows)

commerce=#
</code></pre>
</details>
<h2>Monitoring</h2>
<p>To monitor our YugabyteDB cluster, we will use Prometheus and Grafana. We will
not go through their setup in this post and assume that running Prometheus and
Grafana instances are already available. We will create a scrape configuration
for Prometheus and import the following dashboards<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> into Grafana.</p>
<pre><code class="language-sh">kubectl apply --server-side -f 101_monitoring.yaml
</code></pre>
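<p>The scrape configuration boils down to pointing Prometheus at the metrics
endpoints of the YB-Master and YB-TServer pods, which expose Prometheus metrics
under <code>/prometheus-metrics</code>. A hedged sketch, not the content of
<code>101_monitoring.yaml</code> (job names and targets are illustrative):</p>
<pre><code class="language-yaml">scrape_configs:
  - job_name: yb-master
    metrics_path: /prometheus-metrics
    static_configs:
      # YB-Master webserver/metrics port
      - targets: ["yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7000"]
  - job_name: yb-tserver
    metrics_path: /prometheus-metrics
    static_configs:
      # YB-TServer webserver/metrics port
      - targets: ["yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9000"]
</code></pre>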
<ul>
<li><a href="https://github.com/ricoberger/playground/tree/84a9d0dcf439bce2394d70f51d9088ec35ed3392/applications/getting-started-with-yugabytedb/102_dashboard_yugabytedb.json">YugabyteDB</a></li>
<li><a href="https://github.com/ricoberger/playground/tree/84a9d0dcf439bce2394d70f51d9088ec35ed3392/applications/getting-started-with-yugabytedb/103_dashboard_resources.json">Kubernetes / Compute Resources / Pod</a></li>
</ul>
<p>We can also monitor the YugabyteDB cluster by port-forwarding to the YB-Master
service and opening <code>http://localhost:7000</code> in our browser.</p>
<div class="grid grid-cols-2 md:grid-cols-2 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/dashboard-yugabytedb-1.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/dashboard-yugabytedb-1.png" alt="Grafana Dashboard"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/yb-master-dashboard.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/yb-master-dashboard.png" alt="YB-Master Dashboard"/>
    </a>
  </div>
</div>
<h2>Insert Data</h2>
<p>In the next step, it&#39;s time to insert data into our <code>commerce</code> database. We can
do this by running the following commands:</p>
<pre><code class="language-sh">kubectl port-forward -n yugabytedb svc/yb-tservers 5433

go run . -create-customers -goroutines=20
go run . -create-orders -goroutines=100
</code></pre>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/dashboard-resource-usage.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/dashboard-resource-usage.png" alt="Dashboard - Resource Usage"/></a></p>
<p>When we insert a lot of data, the CPU usage increases heavily. We can raise the
CPU requests and limits by applying the <code>201_values.yaml</code> file and monitor
the database behaviour via the YugabyteDB dashboard during the rollout.</p>
<pre><code class="language-sh">helm upgrade --install yugabytedb yugabytedb/yugabyte --version 2.25.1 --namespace yugabytedb --wait -f 201_values.yaml
</code></pre>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/dashboard-yugabytedb-2.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/dashboard-yugabytedb-2.png" alt="Dashboard - YugabyteDB"/></a></p>
<p>Once we have inserted enough data, we can see that YugabyteDB splits our
<code>corder</code> table into two tablets:</p>
<ul>
<li>The partition for the first tablet starts at <code>&#34;&#34;</code> and ends at <code>&#34;\270:&#34;</code>. The
leader for this tablet is <code>yb-tserver-1</code></li>
<li>The partition for the second tablet starts at <code>&#34;\270:&#34;</code> and ends at <code>&#34;&#34;</code>. The
leader for this tablet is <code>yb-tserver-0</code></li>
</ul>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/architecture-insert-data-2-tablets.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/architecture-insert-data-2-tablets.png" alt="Architecture - Insert Data - 2 Tablets"/></a></p>
<details>
<summary>Details</summary>
<p>To generate the architecture diagram above, the following commands were used:</p>
<pre><code class="language-sh">kubectl exec -it -n yugabytedb yb-tserver-0 -- bash
</code></pre>
<pre><code class="language-plaintext">$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce product
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
9a57c185f18448e1902d961068f4c763        partition_key_start: &#34;&#34; partition_key_end: &#34;&#34;                   yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      579810f2f1cc46adab7ed05ce00dde64

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 9a57c185f18448e1902d961068f4c763
Server UUID                             RPC Host/Port           Role
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce customer
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
251ba609612948f085bf149e430e607f        partition_key_start: &#34;&#34; partition_key_end: &#34;&#34;                   yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      7119b812c43448bc8d72e6b5580cdff8

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 251ba609612948f085bf149e430e607f
Server UUID                             RPC Host/Port           Role
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce corder
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
e62b819a818b451dbd3305e2daf74832        partition_key_start: &#34;&#34; partition_key_end: &#34;\270:&#34;              yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      7119b812c43448bc8d72e6b5580cdff8
f6988d5b198f44a4826200c436c69260        partition_key_start: &#34;\270:&#34; partition_key_end: &#34;&#34;              yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      416a684d83e74d96962b95b2128b9870

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers e62b819a818b451dbd3305e2daf74832
Server UUID                             RPC Host/Port           Role
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers f6988d5b198f44a4826200c436c69260
Server UUID                             RPC Host/Port           Role
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
</code></pre>
</details>
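The `partition_key_start` and `partition_key_end` values in the output above are boundaries in YugabyteDB's 16-bit hash space (0x0000–0xFFFF): each tablet owns a contiguous range of that space, and a row lands in the tablet whose range contains the hash of its partition key. The following is an illustrative model of that lookup, not YugabyteDB's actual implementation; note that the split point `"\270:"` decodes to the bytes 0xB8 (octal 270) and 0x3A (`:`), i.e. hash value 0xB83A:

```python
# Illustrative model of hash-range tablet lookup (an assumption sketching
# YugabyteDB's behavior, not its actual code): each tablet owns a contiguous
# half-open range [start, end) of the 16-bit hash space; the last tablet's
# range is open-ended up to 0xFFFF.
def tablet_for_hash(h: int, split_points: list[int]) -> int:
    """Return the index of the tablet owning hash value h, given the
    ascending list of split points between tablets."""
    for i, end in enumerate(split_points):
        if h < end:
            return i
    return len(split_points)

# The corder table above was split at partition key "\270:" -- octal \270 is
# byte 0xB8 and ":" is 0x3A, so the boundary is hash 0xB83A.
split_points = [0xB83A]
assert tablet_for_hash(0x1234, split_points) == 0  # first tablet
assert tablet_for_hash(0xB83A, split_points) == 1  # boundary starts tablet two
assert tablet_for_hash(0xFFFF, split_points) == 1
```

This also explains the later three-way split: the additional boundary `"ry"` decodes to 0x7279, which lies below 0xB83A, so the first range was split while the second stayed intact.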
<p>If we insert even more data, the <code>corder</code> table might be split again. If
we then add more YB-TServer instances, the tablets are redistributed as shown in
the two graphics below.</p>
<pre><code class="language-sh">helm upgrade --install yugabytedb yugabytedb/yugabyte --version 2.25.1 --namespace yugabytedb --wait -f 202_values.yaml
</code></pre>
<div class="grid grid-cols-2 md:grid-cols-2 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/architecture-insert-data-3-tablets.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/architecture-insert-data-3-tablets.png" alt="Architecture - Insert Data - 3 Tablets"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/architecture-insert-data-3-tablets-5-pods.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/architecture-insert-data-3-tablets-5-pods.png" alt="Architecture - Insert Data - 3 Tablets - 5 Pods"/>
    </a>
  </div>
</div>
<details>
<summary>Details</summary>
<p>To generate the architecture diagrams above, the following commands were used:</p>
<pre><code class="language-sh">kubectl exec -it -n yugabytedb yb-tserver-0 -- bash
</code></pre>
<pre><code class="language-plaintext">$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce product
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
9a57c185f18448e1902d961068f4c763        partition_key_start: &#34;&#34; partition_key_end: &#34;&#34;                   yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      579810f2f1cc46adab7ed05ce00dde64

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 9a57c185f18448e1902d961068f4c763
Server UUID                             RPC Host/Port           Role
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce customer
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
251ba609612948f085bf149e430e607f        partition_key_start: &#34;&#34; partition_key_end: &#34;&#34;                   yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      7119b812c43448bc8d72e6b5580cdff8

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 251ba609612948f085bf149e430e607f
Server UUID                             RPC Host/Port           Role
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce corder
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
81c479b9629f4d988ced1f31d85d54fb        partition_key_start: &#34;&#34; partition_key_end: &#34;ry&#34;                 yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      579810f2f1cc46adab7ed05ce00dde64
bfb33782498d41259b80cebdb3c64414        partition_key_start: &#34;ry&#34; partition_key_end: &#34;\270:&#34;            yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      7119b812c43448bc8d72e6b5580cdff8
f6988d5b198f44a4826200c436c69260        partition_key_start: &#34;\270:&#34; partition_key_end: &#34;&#34;              yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      416a684d83e74d96962b95b2128b9870

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 81c479b9629f4d988ced1f31d85d54fb
Server UUID                             RPC Host/Port           Role
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers bfb33782498d41259b80cebdb3c64414
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers f6988d5b198f44a4826200c436c69260
Server UUID                             RPC Host/Port           Role
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
</code></pre>
<pre><code class="language-plaintext">$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce product
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
9a57c185f18448e1902d961068f4c763        partition_key_start: &#34;&#34; partition_key_end: &#34;&#34;                   yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      579810f2f1cc46adab7ed05ce00dde64

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 9a57c185f18448e1902d961068f4c763
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce customer
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
251ba609612948f085bf149e430e607f        partition_key_start: &#34;&#34; partition_key_end: &#34;&#34;                   yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      7119b812c43448bc8d72e6b5580cdff8

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 251ba609612948f085bf149e430e607f
Server UUID                             RPC Host/Port           Role
b9d541b6befe450d985c990b8710ec94        yb-tserver-3.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
a22363a123624ab2ab57ed1e0ec0a555        yb-tserver-4.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablets ysql.commerce corder
Tablet-UUID                             Range                                                           Leader-IP               Leader-UUID
81c479b9629f4d988ced1f31d85d54fb        partition_key_start: &#34;&#34; partition_key_end: &#34;ry&#34;                 yb-tserver-3.yb-tservers.yugabytedb.svc.cluster.local:9100      b9d541b6befe450d985c990b8710ec94
bfb33782498d41259b80cebdb3c64414        partition_key_start: &#34;ry&#34; partition_key_end: &#34;\270:&#34;            yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      7119b812c43448bc8d72e6b5580cdff8
f6988d5b198f44a4826200c436c69260        partition_key_start: &#34;\270:&#34; partition_key_end: &#34;&#34;              yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      416a684d83e74d96962b95b2128b9870

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers 81c479b9629f4d988ced1f31d85d54fb
Server UUID                             RPC Host/Port           Role
b9d541b6befe450d985c990b8710ec94        yb-tserver-3.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
a22363a123624ab2ab57ed1e0ec0a555        yb-tserver-4.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers bfb33782498d41259b80cebdb3c64414
Server UUID                             RPC Host/Port           Role
b9d541b6befe450d985c990b8710ec94        yb-tserver-3.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
7119b812c43448bc8d72e6b5580cdff8        yb-tserver-1.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER

$ yb-admin -master_addresses yb-master-0.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-1.yb-masters.yugabytedb.svc.cluster.local:7100,yb-master-2.yb-masters.yugabytedb.svc.cluster.local:7100 list_tablet_servers f6988d5b198f44a4826200c436c69260
Server UUID                             RPC Host/Port           Role
a22363a123624ab2ab57ed1e0ec0a555        yb-tserver-4.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
579810f2f1cc46adab7ed05ce00dde64        yb-tserver-2.yb-tservers.yugabytedb.svc.cluster.local:9100      FOLLOWER
416a684d83e74d96962b95b2128b9870        yb-tserver-0.yb-tservers.yugabytedb.svc.cluster.local:9100      LEADER
</code></pre>
</details>
<p>That&#39;s it for today&#39;s post. I had a lot of fun exploring YugabyteDB and
hopefully gained a better understanding of how it works. Compared to Vitess, the
sharding feels like magic, and I&#39;m not sure if I like it 😅. I hope you enjoyed
the post as well, and I&#39;ll see you next time.</p>
<div class="footnotes" role="doc-endnotes">
<hr/>
<ol>
<li id="fn:1">
<p>We will use the dashboard from the following
<a href="https://github.com/yugabyte/yugabyte-db/tree/master/cloud/grafana">GitHub repository</a> <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</div>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/assets/yugabytedb.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/getting-started-with-yugabytedb/</guid><pubDate>Sun, 13 Apr 2025 13:00:00 +0000</pubDate></item><item><title>Getting Started with Vitess</title><link>https://ricoberger.de/blog/posts/getting-started-with-vitess/</link><description><![CDATA[<p>Hi and welcome to another blog post. Today, we will explore
<a href="https://vitess.io/">Vitess</a>. We will set up a simple Vitess cluster on
Kubernetes. Once the cluster is running, we will discuss how to move tables and
reshard the cluster. Finally, we will look at how to monitor a Vitess cluster
using Prometheus. Please note that I am not an expert in Vitess; I simply want
to experiment with it in this blog post.</p>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/vitess.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/vitess.png" alt="Vitess"/></a></p>
<p>Before we begin setting up Vitess, we need a running Kubernetes cluster. If you
want to try this on your local machine, you can use a tool like
<a href="https://kind.sigs.k8s.io/">kind</a> to create a local cluster. Additionally, we
need to install the
<a href="https://dev.mysql.com/doc/mysql-getting-started/en/">MySQL Client</a> and
<a href="https://vitess.io/docs/get-started/local/#install-vitess">vtctldclient</a>
locally:</p>
<pre><code class="language-sh">go install vitess.io/vitess/go/cmd/vtctldclient@v0.21.3
brew install mysql@8.4
brew link mysql@8.4
</code></pre>
<p>If you are not familiar with the
<a href="https://vitess.io/docs/21.0/overview/architecture/">architecture</a> and
<a href="https://vitess.io/docs/21.0/concepts/">concepts</a> of Vitess, I recommend
reviewing the documentation before proceeding. All the configuration files we
are using can be found in the
<a href="https://github.com/ricoberger/playground/tree/ed7b9386c87abe65d5c0147f6cd8675f35ab3cef/applications/getting-started-with-vitess">ricoberger/playground</a>
GitHub repository.</p>
<h2>Installation</h2>
<p>We will create our Vitess cluster using the
<a href="https://vitess.io/docs/21.0/get-started/operator/">Vitess Operator</a>. To install
the operator, we will create a new namespace called <code>vitess</code>, install the CRDs,
and set up the operator using the following commands. Afterward, we can verify
that the operator is running by executing <code>kubectl get pods</code>.</p>
<pre><code class="language-sh">kubectl apply --server-side -f 001_namespace.yaml
kubectl apply --server-side -f 002_crds.yaml
kubectl apply --server-side -f 003_operator.yaml
</code></pre>
<pre><code class="language-plaintext">NAME                               READY   STATUS    RESTARTS   AGE
vitess-operator-7cc877ccc5-vdndl   1/1     Running   0          21s
</code></pre>
<p>Once the operator is running, we can launch our first Vitess cluster. The
cluster will use one cell (<code>zone1</code>) that includes all the control plane
components (VTAdmin, vtctld, Topology Store) and one keyspace named <code>commerce</code>,
which will contain one primary tablet and one replica tablet.</p>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/architecture-initial.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/architecture-initial.png" alt="Architecture - Initial"/></a></p>
<p>To bring up the cluster, we can apply the <code>101_initial_cluster.yaml</code> manifest.
Afterwards, we can check the state of the cluster using <code>kubectl get pods</code>. After
a few minutes, all pods should be in the <code>Running</code> status.</p>
<pre><code class="language-sh">kubectl apply --server-side -f 101_initial_cluster.yaml
</code></pre>
<pre><code class="language-plaintext">NAME                                                         READY   STATUS    RESTARTS      AGE
example-commerce-x-x-zone1-vtorc-c13ef6ff-86bd96dfb4-kp8w5   1/1     Running   2 (59s ago)   71s
example-etcd-faf13de3-1                                      1/1     Running   0             72s
example-etcd-faf13de3-2                                      1/1     Running   0             72s
example-etcd-faf13de3-3                                      1/1     Running   0             72s
example-vttablet-zone1-2469782763-bfadd780                   3/3     Running   2 (46s ago)   71s
example-vttablet-zone1-2548885007-46a852d0                   3/3     Running   1 (46s ago)   71s
example-zone1-vtadmin-c03d7eae-68d845dbfd-wnlk9              2/2     Running   0             72s
example-zone1-vtctld-1d4dcad0-75f6fb7c6b-78rpv               1/1     Running   1 (51s ago)   72s
example-zone1-vtgate-bc6cde92-57fdc84bb6-cdj75               1/1     Running   2 (45s ago)   72s
vitess-operator-7cc877ccc5-vdndl                             1/1     Running   0             2m29s
</code></pre>
<p>For ease of use, Vitess provides a script to port-forward from Kubernetes to our
local machine. This script also recommends setting up aliases for <code>mysql</code> and
<code>vtctldclient</code>. Once the port-forward is running, the VTAdmin UI will be
available at <code>http://localhost:14000/</code>.</p>
<pre><code class="language-sh">alias vtctldclient=&#34;vtctldclient --server=localhost:15999&#34;
alias mysql=&#34;mysql -h 127.0.0.1 -P 15306 -u user&#34;
./pf.sh &amp;
</code></pre>
<p>In the last step of the initial installation, we will create our initial schema,
which will deploy a single unsharded keyspace named <code>commerce</code> with the
following tables:</p>
<ul>
<li>The <code>product</code> table contains the product information for all of the products.</li>
<li>The <code>customer</code> table has a <code>customer_id</code> that has an <code>auto_increment</code>. A
typical customer table would have a lot more columns, and sometimes additional
detail tables.</li>
<li>The <code>corder</code> table (named so because <code>order</code> is an SQL reserved word) has an
<code>order_id</code> auto-increment column. It also has foreign keys into
<code>customer(customer_id)</code> and <code>product(sku)</code>.</li>
</ul>
<pre><code class="language-sh">vtctldclient ApplySchema --sql-file=&#34;102_create_commerce_schema.sql&#34; commerce
vtctldclient ApplyVSchema --vschema-file=&#34;103_vschema_commerce_initial.json&#34; commerce
</code></pre>
<p>We should now be able to connect to the VTGate Server in our cluster by running
the <code>mysql</code> command.</p>
<pre><code class="language-plaintext">mysql&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| commerce           |
| information_schema |
| mysql              |
| sys                |
| performance_schema |
+--------------------+
5 rows in set (0.02 sec)
</code></pre>
<h2>Move Tables</h2>
<p>In the next step we will create a new keyspace named <code>customer</code> and move the
<code>customer</code> and <code>corder</code> tables to the newly created keyspace. This is the
recommended approach before splitting a single table across multiple servers
(sharding).</p>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/architecture-move-tables.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/architecture-move-tables.png" alt="Architecture - Move Tables"/></a></p>
<p>Let&#39;s start by loading some data into the tables we created and looking at the
data we inserted. Notice that all of our tables are currently in the <code>commerce</code>
schema/keyspace.</p>
<pre><code class="language-sh">mysql &lt; 201_insert_commerce_data.sql
mysql --table &lt; 202_select_commerce_data.sql
</code></pre>
<pre><code class="language-plaintext">Using commerce
Customer
+-------------+--------------------+
| customer_id | email              |
+-------------+--------------------+
|           1 | alice@domain.com   |
|           2 | bob@domain.com     |
|           3 | charlie@domain.com |
|           4 | dan@domain.com     |
|           5 | eve@domain.com     |
+-------------+--------------------+
Product
+----------+-------------+-------+
| sku      | description | price |
+----------+-------------+-------+
| SKU-1001 | Monitor     |   100 |
| SKU-1002 | Keyboard    |    30 |
+----------+-------------+-------+
COrder
+----------+-------------+----------+-------+
| order_id | customer_id | sku      | price |
+----------+-------------+----------+-------+
|        1 |           1 | SKU-1001 |   100 |
|        2 |           2 | SKU-1002 |    30 |
|        3 |           3 | SKU-1002 |    30 |
|        4 |           4 | SKU-1002 |    30 |
|        5 |           5 | SKU-1002 |    30 |
+----------+-------------+----------+-------+
</code></pre>
<p>When we list our tablets using the following command, we can see that we have
two tablets running: one primary and one replica.</p>
<pre><code class="language-sh">mysql -e &#34;show vitess_tablets&#34;
</code></pre>
<pre><code class="language-plaintext">+-------+----------+-------+------------+---------+------------------+--------------+----------------------+
| Cell  | Keyspace | Shard | TabletType | State   | Alias            | Hostname     | PrimaryTermStartTime |
+-------+----------+-------+------------+---------+------------------+--------------+----------------------+
| zone1 | commerce | -     | PRIMARY    | SERVING | zone1-2469782763 | 10.244.5.244 | 2025-04-10T06:08:22Z |
| zone1 | commerce | -     | REPLICA    | SERVING | zone1-2548885007 | 10.244.14.73 |                      |
+-------+----------+-------+------------+---------+------------------+--------------+----------------------+
</code></pre>
<p>Now it is time to deploy new tablets for our <code>customer</code> keyspace by applying the
<code>203_customer_tablets.yaml</code> manifest. After a few minutes, we should see the newly
created tablets in a running state. We should also see that a new VTOrc instance
was created for the <code>customer</code> keyspace.</p>
<pre><code class="language-sh">kubectl apply --server-side -f 203_customer_tablets.yaml
</code></pre>
<pre><code class="language-plaintext">NAME                                                         READY   STATUS    RESTARTS        AGE
example-commerce-x-x-zone1-vtorc-c13ef6ff-86bd96dfb4-kp8w5   1/1     Running   2 (5m52s ago)   6m4s
example-customer-x-x-zone1-vtorc-53d270f6-7754f557c-bb87n    1/1     Running   0               72s
example-etcd-faf13de3-1                                      1/1     Running   0               6m5s
example-etcd-faf13de3-2                                      1/1     Running   0               6m5s
example-etcd-faf13de3-3                                      1/1     Running   0               6m5s
example-vttablet-zone1-1250593518-17c58396                   3/3     Running   0               72s
example-vttablet-zone1-2469782763-bfadd780                   3/3     Running   2 (5m39s ago)   6m4s
example-vttablet-zone1-2548885007-46a852d0                   3/3     Running   1 (5m39s ago)   6m4s
example-vttablet-zone1-3778123133-6f4ed5fc                   3/3     Running   2 (35s ago)     72s
example-zone1-vtadmin-c03d7eae-68d845dbfd-wnlk9              2/2     Running   0               6m5s
example-zone1-vtctld-1d4dcad0-75f6fb7c6b-78rpv               1/1     Running   1 (5m44s ago)   6m5s
example-zone1-vtgate-bc6cde92-57fdc84bb6-cdj75               1/1     Running   2 (5m38s ago)   6m5s
vitess-operator-7cc877ccc5-vdndl                             1/1     Running   0               7m22s
</code></pre>
<p>Before we continue, we restart the port-forward once all of the pods are up and
running. Afterwards, we can list our tablets again. We can see that we now have
four tablets: the two existing ones for the <code>commerce</code> keyspace and two new ones
for the <code>customer</code> keyspace.</p>
<pre><code class="language-sh">killall kubectl
./pf.sh &amp;
</code></pre>
<pre><code class="language-sh">mysql -e &#34;show vitess_tablets&#34;
</code></pre>
<pre><code class="language-plaintext">+-------+----------+-------+------------+---------+------------------+--------------+----------------------+
| Cell  | Keyspace | Shard | TabletType | State   | Alias            | Hostname     | PrimaryTermStartTime |
+-------+----------+-------+------------+---------+------------------+--------------+----------------------+
| zone1 | commerce | -     | PRIMARY    | SERVING | zone1-2469782763 | 10.244.5.244 | 2025-04-10T06:08:22Z |
| zone1 | commerce | -     | REPLICA    | SERVING | zone1-2548885007 | 10.244.14.73 |                      |
| zone1 | customer | -     | PRIMARY    | SERVING | zone1-1250593518 | 10.244.11.55 | 2025-04-10T06:13:10Z |
| zone1 | customer | -     | REPLICA    | SERVING | zone1-3778123133 | 10.244.8.170 |                      |
+-------+----------+-------+------------+---------+------------------+--------------+----------------------+
</code></pre>
<p>In the next step we will create a <code>MoveTables</code> workflow, which copies the tables
from the <code>commerce</code> keyspace into the <code>customer</code> keyspace. This operation does
not block any database activity.</p>
<pre><code class="language-sh">vtctldclient MoveTables --target-keyspace customer --workflow commerce2customer create --source-keyspace commerce --tables &#39;customer,corder&#39;
</code></pre>
<p>To see what happens under the covers, let&#39;s look at the
<a href="https://github.com/ricoberger/playground/blob/ed7b9386c87abe65d5c0147f6cd8675f35ab3cef/applications/getting-started-with-vitess/204_routing_rules_after_move_table.json">routing rules</a>
that the MoveTables operation created. These are instructions used by a VTGate
to determine which backend keyspace to send requests to for a given table.</p>
<pre><code class="language-sh">vtctldclient GetRoutingRules
</code></pre>
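Conceptually, each routing rule maps a table name to a fully qualified table in some keyspace, and VTGate consults these rules before falling back to the keyspace a query targets. The following is a minimal illustrative model of that resolution step (the rule contents are hypothetical examples of the state right after <code>MoveTables create</code>, not Vitess's actual data structures):

```python
# Minimal model of routing-rule resolution (illustrative sketch, not
# Vitess's implementation; rule contents are hypothetical).
def resolve(table: str, rules: dict[str, str], default_keyspace: str) -> str:
    """Return the fully qualified table a query for `table` is routed to:
    the rule's target if one exists, else the default keyspace's own table."""
    return rules.get(table, f"{default_keyspace}.{table}")

# Right after `MoveTables create`, traffic for the moved tables still routes
# to the source keyspace:
rules = {"customer": "commerce.customer", "corder": "commerce.corder"}
assert resolve("customer", rules, "customer") == "commerce.customer"
# Tables that were not moved are unaffected by the rules:
assert resolve("product", rules, "commerce") == "commerce.product"
```

After the later `SwitchTraffic` step, the rules are rewritten so the same lookup returns targets in the `customer` keyspace instead.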
<p>We can monitor the progress of the <code>MoveTables</code> operation using the <code>status</code>
action. We can also validate its correctness by performing a logical diff
between the source and target to confirm that they are fully synced with the
<code>VDiff</code> command.</p>
<pre><code class="language-sh"># Monitoring Progress
vtctldclient MoveTables --target-keyspace customer --workflow commerce2customer status --format=json

{
  &#34;table_copy_state&#34;: {},
  &#34;shard_streams&#34;: {
    &#34;customer/-&#34;: {
      &#34;streams&#34;: [
        {
          &#34;id&#34;: 1,
          &#34;tablet&#34;: {
            &#34;cell&#34;: &#34;zone1&#34;,
            &#34;uid&#34;: 1250593518
          },
          &#34;source_shard&#34;: &#34;commerce/-&#34;,
          &#34;position&#34;: &#34;2a5c9b12-15d2-11f0-a7a3-a2742b031576:1-40&#34;,
          &#34;status&#34;: &#34;Running&#34;,
          &#34;info&#34;: &#34;VStream Lag: 0s&#34;
        }
      ]
    }
  },
  &#34;traffic_state&#34;: &#34;Reads Not Switched. Writes Not Switched&#34;
}

# Validate Correctness
vtctldclient VDiff --target-keyspace customer --workflow commerce2customer create
vtctldclient VDiff --format=json --target-keyspace customer --workflow commerce2customer show last --verbose

VDiff 1163c387-4284-4214-9896-74b3af6a0cef scheduled on target shards, use show to view progress
{
  &#34;Workflow&#34;: &#34;commerce2customer&#34;,
  &#34;Keyspace&#34;: &#34;customer&#34;,
  &#34;State&#34;: &#34;started&#34;,
  &#34;UUID&#34;: &#34;1163c387-4284-4214-9896-74b3af6a0cef&#34;,
  &#34;RowsCompared&#34;: 0,
  &#34;HasMismatch&#34;: false,
  &#34;Shards&#34;: &#34;-&#34;,
  &#34;StartedAt&#34;: &#34;2025-04-10 06:19:56&#34;,
  &#34;TableSummary&#34;: {
    &#34;corder&#34;: {
      &#34;TableName&#34;: &#34;corder&#34;,
      &#34;State&#34;: &#34;started&#34;,
      &#34;RowsCompared&#34;: 0,
      &#34;MatchingRows&#34;: 0,
      &#34;MismatchedRows&#34;: 0,
      &#34;ExtraRowsSource&#34;: 0,
      &#34;ExtraRowsTarget&#34;: 0
    },
    &#34;customer&#34;: {
      &#34;TableName&#34;: &#34;customer&#34;,
      &#34;State&#34;: &#34;pending&#34;,
      &#34;RowsCompared&#34;: 0,
      &#34;MatchingRows&#34;: 0,
      &#34;MismatchedRows&#34;: 0,
      &#34;ExtraRowsSource&#34;: 0,
      &#34;ExtraRowsTarget&#34;: 0
    }
  },
  &#34;Reports&#34;: {
    &#34;corder&#34;: {
      &#34;-&#34;: {
        &#34;TableName&#34;: &#34;&#34;,
        &#34;ProcessedRows&#34;: 0,
        &#34;MatchingRows&#34;: 0,
        &#34;MismatchedRows&#34;: 0,
        &#34;ExtraRowsSource&#34;: 0,
        &#34;ExtraRowsTarget&#34;: 0
      }
    },
    &#34;customer&#34;: {
      &#34;-&#34;: {
        &#34;TableName&#34;: &#34;&#34;,
        &#34;ProcessedRows&#34;: 0,
        &#34;MatchingRows&#34;: 0,
        &#34;MismatchedRows&#34;: 0,
        &#34;ExtraRowsSource&#34;: 0,
        &#34;ExtraRowsTarget&#34;: 0
      }
    }
  },
  &#34;Progress&#34;: {
    &#34;Percentage&#34;: 0
  }
}
</code></pre>
<p>Once the <code>MoveTables</code> operation is complete, the first step in making the
changes live is to switch all query serving traffic from the old <code>commerce</code>
keyspace to the <code>customer</code> keyspace for the tables we moved. Queries against the
other tables will continue to route to the <code>commerce</code> keyspace.</p>
<pre><code class="language-sh">vtctldclient MoveTables --target-keyspace customer --workflow commerce2customer SwitchTraffic
</code></pre>
<p>If we now look at the
<a href="https://github.com/ricoberger/playground/blob/ed7b9386c87abe65d5c0147f6cd8675f35ab3cef/applications/getting-started-with-vitess/205_routing_rules_after_traffic_switch.json">routing rules</a>
after the <code>SwitchTraffic</code> step, we will see that all queries against the
<code>customer</code> and <code>corder</code> tables will get routed to the <code>customer</code> keyspace.</p>
<pre><code class="language-sh">vtctldclient GetRoutingRules
</code></pre>
<p>The final step is to complete the migration using the <code>Complete</code> action. This
will (by default) get rid of the routing rules that were created and <code>DROP</code> the
original tables in the source keyspace (<code>commerce</code>). Along with freeing up space
on the original tablets, this is an important step to eliminate potential future
confusion.</p>
<pre><code class="language-sh">vtctldclient MoveTables --target-keyspace customer --workflow commerce2customer complete
</code></pre>
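<p>If we are not yet ready to drop the source tables, the <code>complete</code>
action can keep them and the routing rules. As a sketch (these flags exist in
recent vtctldclient versions, but check <code>--help</code> for yours):</p>
<pre><code class="language-sh">vtctldclient MoveTables --target-keyspace customer --workflow commerce2customer complete --keep-data --keep-routing-rules
</code></pre>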
<h2>Resharding</h2>
<p>In this step, we will divide our <code>customer</code> keyspace into two shards. The final
architecture will appear as shown in the graphic below.</p>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/architecture-resharding.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/architecture-resharding.png" alt="Architecture - Resharding"/></a></p>
<p>Before we can start, we have to create a sequence table for our
auto-increment columns and decide on sharding keys, or Primary Vindexes, within
a VSchema. More information regarding these two topics can be found in the
Vitess documentation<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. To create the sequence tables and the VSchema, we can
run the following commands:</p>
<pre><code class="language-sh">vtctldclient ApplySchema --sql=&#34;$(cat 301_create_commerce_seq.sql)&#34; commerce
vtctldclient ApplyVSchema --vschema=&#34;$(cat 302_vschema_commerce_seq.json)&#34; commerce
vtctldclient ApplyVSchema --vschema=&#34;$(cat 303_vschema_customer_sharded.json)&#34; customer
vtctldclient ApplySchema --sql=&#34;$(cat 304_create_customer_sharded.sql)&#34; customer
</code></pre>
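<p>For reference, here is a minimal sketch of what the sharded VSchema in
<code>303_vschema_customer_sharded.json</code> typically looks like in the
standard Vitess example: a <code>hash</code> Vindex on <code>customer_id</code>
as the sharding key, and sequences backing the auto-increment columns. This is
an illustrative assumption; the actual file is linked in the playground
repository.</p>
<pre><code class="language-json">{
  &#34;sharded&#34;: true,
  &#34;vindexes&#34;: {
    &#34;hash&#34;: { &#34;type&#34;: &#34;hash&#34; }
  },
  &#34;tables&#34;: {
    &#34;customer&#34;: {
      &#34;column_vindexes&#34;: [{ &#34;column&#34;: &#34;customer_id&#34;, &#34;name&#34;: &#34;hash&#34; }],
      &#34;auto_increment&#34;: { &#34;column&#34;: &#34;customer_id&#34;, &#34;sequence&#34;: &#34;customer_seq&#34; }
    },
    &#34;corder&#34;: {
      &#34;column_vindexes&#34;: [{ &#34;column&#34;: &#34;customer_id&#34;, &#34;name&#34;: &#34;hash&#34; }],
      &#34;auto_increment&#34;: { &#34;column&#34;: &#34;order_id&#34;, &#34;sequence&#34;: &#34;order_seq&#34; }
    }
  }
}
</code></pre>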
<p>At this point, we have finalized our sharded VSchema and vetted all the
queries to make sure they still work. Now, it’s time to reshard. To do this, we
will create the target shards by applying the <code>305_new_shards.yaml</code>
manifest. After a few minutes, we should see four new tablets and two new vtorc
pods (via <code>kubectl get pods</code>) for the created target shards.</p>
<pre><code class="language-sh">kubectl apply --server-side -f 305_new_shards.yaml

# Restart the port-forward afterwards:
killall kubectl
./pf.sh &amp;
</code></pre>
<pre><code class="language-plaintext">NAME                                                          READY   STATUS    RESTARTS      AGE
example-commerce-x-x-zone1-vtorc-c13ef6ff-86bd96dfb4-kp8w5    1/1     Running   2 (19m ago)   19m
example-customer-80-x-zone1-vtorc-836adff9-b67657589-ndpxq    1/1     Running   0             76s
example-customer-x-80-zone1-vtorc-2bf8b95e-86b8b56fbc-q69h4   1/1     Running   0             76s
example-customer-x-x-zone1-vtorc-53d270f6-7754f557c-bb87n     1/1     Running   0             14m
example-etcd-faf13de3-1                                       1/1     Running   0             19m
example-etcd-faf13de3-2                                       1/1     Running   0             19m
example-etcd-faf13de3-3                                       1/1     Running   0             19m
example-vttablet-zone1-0118374573-10d08e80                    3/3     Running   2 (35s ago)   76s
example-vttablet-zone1-0120139806-fed29577                    3/3     Running   0             76s
example-vttablet-zone1-1250593518-17c58396                    3/3     Running   0             14m
example-vttablet-zone1-2289928654-7de47379                    3/3     Running   0             76s
example-vttablet-zone1-2469782763-bfadd780                    3/3     Running   2 (19m ago)   19m
example-vttablet-zone1-2548885007-46a852d0                    3/3     Running   1 (19m ago)   19m
example-vttablet-zone1-3778123133-6f4ed5fc                    3/3     Running   2 (13m ago)   14m
example-vttablet-zone1-4277914223-0f04a9a6                    3/3     Running   0             76s
example-zone1-vtadmin-c03d7eae-68d845dbfd-wnlk9               2/2     Running   0             19m
example-zone1-vtctld-1d4dcad0-75f6fb7c6b-78rpv                1/1     Running   1 (19m ago)   19m
example-zone1-vtgate-bc6cde92-57fdc84bb6-cdj75                1/1     Running   2 (19m ago)   19m
vitess-operator-7cc877ccc5-vdndl                              1/1     Running   0             20m
</code></pre>
<p>Now we can start the <code>Reshard</code> operation. It occurs online, and will not block
any read or write operations to your database:</p>
<pre><code class="language-sh">vtctldclient Reshard --target-keyspace customer --workflow cust2cust create --source-shards &#39;-&#39; --target-shards &#39;-80,80-&#39;
</code></pre>
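<p>While the copy phase is running, we can inspect the workflow’s state. As a
sketch (the <code>show</code> and <code>status</code> actions mirror those of
<code>MoveTables</code>; check <code>--help</code> for your vtctldclient
version):</p>
<pre><code class="language-sh">vtctldclient Reshard --target-keyspace customer --workflow cust2cust show
vtctldclient Reshard --target-keyspace customer --workflow cust2cust status
</code></pre>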
<p>After the reshard is complete, we can use VDiff to check data integrity and
ensure our source and target shards are consistent:</p>
<pre><code class="language-sh">vtctldclient VDiff --target-keyspace customer --workflow cust2cust create
vtctldclient VDiff --format=json --target-keyspace customer --workflow cust2cust show last

VDiff da8b1af8-eaf0-415b-9b63-4d3606798435 scheduled on target shards, use show to view progress
{
  &#34;Workflow&#34;: &#34;cust2cust&#34;,
  &#34;Keyspace&#34;: &#34;customer&#34;,
  &#34;State&#34;: &#34;started&#34;,
  &#34;UUID&#34;: &#34;da8b1af8-eaf0-415b-9b63-4d3606798435&#34;,
  &#34;RowsCompared&#34;: 4,
  &#34;HasMismatch&#34;: false,
  &#34;Shards&#34;: &#34;-80,80-&#34;,
  &#34;StartedAt&#34;: &#34;2025-04-10 06:29:57&#34;,
  &#34;Progress&#34;: {
    &#34;Percentage&#34;: 100,
    &#34;ETA&#34;: &#34;2025-04-10 06:29:57&#34;
  }
}
</code></pre>
<p>After validating for correctness, the next step is to switch all traffic from
the source shards to the target shards:</p>
<pre><code class="language-sh">vtctldclient Reshard --target-keyspace customer --workflow cust2cust SwitchTraffic
</code></pre>
<p>We should now be able to see the data that has been copied over to the new
shards:</p>
<pre><code class="language-sh">mysql --table &lt; 306_select_customer-80_data.sql
mysql --table &lt; 307_select_customer80-_data.sql
</code></pre>
<pre><code class="language-plaintext">Using customer/-80
Customer
+-------------+--------------------+
| customer_id | email              |
+-------------+--------------------+
|           1 | alice@domain.com   |
|           2 | bob@domain.com     |
|           3 | charlie@domain.com |
|           5 | eve@domain.com     |
+-------------+--------------------+
COrder
+----------+-------------+----------+-------+
| order_id | customer_id | sku      | price |
+----------+-------------+----------+-------+
|        1 |           1 | SKU-1001 |   100 |
|        2 |           2 | SKU-1002 |    30 |
|        3 |           3 | SKU-1002 |    30 |
|        5 |           5 | SKU-1002 |    30 |
+----------+-------------+----------+-------+

Using customer/80-
Customer
+-------------+----------------+
| customer_id | email          |
+-------------+----------------+
|           4 | dan@domain.com |
+-------------+----------------+
COrder
+----------+-------------+----------+-------+
| order_id | customer_id | sku      | price |
+----------+-------------+----------+-------+
|        4 |           4 | SKU-1002 |    30 |
+----------+-------------+----------+-------+
</code></pre>
<p>We can now complete the created <code>Reshard</code> workflow and remove the shard that is
no longer required:</p>
<pre><code class="language-sh">vtctldclient Reshard --target-keyspace customer --workflow cust2cust complete
kubectl apply --server-side -f 308_down_shard_-.yaml
kubectl delete vitessshards.planetscale.com example-customer-x-x-dc880356
</code></pre>
<p>Afterwards, the list of running pods should look as follows. As we can see,
the two tablets for the old shard, as well as its vtorc pod, were removed.</p>
<pre><code class="language-plaintext">NAME                                                          READY   STATUS    RESTARTS      AGE
example-commerce-x-x-zone1-vtorc-c13ef6ff-86bd96dfb4-kp8w5    1/1     Running   2 (33m ago)   33m
example-customer-80-x-zone1-vtorc-836adff9-b67657589-ndpxq    1/1     Running   0             15m
example-customer-x-80-zone1-vtorc-2bf8b95e-86b8b56fbc-q69h4   1/1     Running   0             15m
example-etcd-faf13de3-1                                       1/1     Running   0             33m
example-etcd-faf13de3-2                                       1/1     Running   0             33m
example-etcd-faf13de3-3                                       1/1     Running   0             33m
example-vttablet-zone1-0118374573-10d08e80                    3/3     Running   2 (14m ago)   15m
example-vttablet-zone1-0120139806-fed29577                    3/3     Running   0             15m
example-vttablet-zone1-2289928654-7de47379                    3/3     Running   0             15m
example-vttablet-zone1-2469782763-bfadd780                    3/3     Running   2 (33m ago)   33m
example-vttablet-zone1-2548885007-46a852d0                    3/3     Running   1 (33m ago)   33m
example-vttablet-zone1-4277914223-0f04a9a6                    3/3     Running   0             15m
example-zone1-vtadmin-c03d7eae-68d845dbfd-wnlk9               2/2     Running   0             33m
example-zone1-vtctld-1d4dcad0-75f6fb7c6b-78rpv                1/1     Running   1 (33m ago)   33m
example-zone1-vtgate-bc6cde92-57fdc84bb6-cdj75                1/1     Running   2 (33m ago)   33m
vitess-operator-7cc877ccc5-vdndl                              1/1     Running   0             34m
</code></pre>
<h2>Monitoring</h2>
<p>To monitor our Vitess cluster we will use Prometheus and Grafana. We will not go
through the setup of Prometheus and Grafana within this post and assume that we
already have a running Prometheus and Grafana instance. To monitor our Vitess
cluster with Prometheus and Grafana we will create a scrape configuration for
Prometheus and import some dashboards<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> into Grafana.</p>
<pre><code class="language-sh">kubectl apply --server-side -f 401_monitoring.yaml
</code></pre>
<ul>
<li><a href="https://github.com/ricoberger/playground/blob/ed7b9386c87abe65d5c0147f6cd8675f35ab3cef/applications/getting-started-with-vitess/402_dashboard_misc.json">Vitess Misc</a></li>
<li><a href="https://github.com/ricoberger/playground/blob/ed7b9386c87abe65d5c0147f6cd8675f35ab3cef/applications/getting-started-with-vitess/403_dashboard_queries.json">Vitess Queries</a></li>
<li><a href="https://github.com/ricoberger/playground/blob/ed7b9386c87abe65d5c0147f6cd8675f35ab3cef/applications/getting-started-with-vitess/404_dashboard_query_details.json">Vitess Query Details</a></li>
<li><a href="https://github.com/ricoberger/playground/blob/ed7b9386c87abe65d5c0147f6cd8675f35ab3cef/applications/getting-started-with-vitess/405_dashboard_transactions.json">Vitess Transactions</a></li>
</ul>
<p>To get some data into the dashboards, we can generate load by running the
example application. The following command will create customers in our
<code>customer</code> table. After some time, we can increase the load by
restarting the command with the <code>-goroutines</code> flag.</p>
<pre><code class="language-sh">go run . -create-customers
go run . -create-customers -goroutines=20
</code></pre>
<div class="grid grid-cols-2 md:grid-cols-4 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-customers-1.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-customers-1.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-customers-2.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-customers-2.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-customers-3.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-customers-3.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-customers-4.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-customers-4.png" alt="About"/>
    </a>
  </div>
</div>
<p>The following commands will create some orders in our <code>corder</code>
table. To create a new order, we select a random customer and all products, then
create a new order for the selected customer and one of the products.</p>
<pre><code class="language-sh">go run . -create-orders -goroutines=10
go run . -create-orders -goroutines=100
</code></pre>
<div class="grid grid-cols-2 md:grid-cols-4 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-orders-1.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-orders-1.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-orders-2.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-orders-2.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-orders-3.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-orders-3.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-orders-4.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-create-orders-4.png" alt="About"/>
    </a>
  </div>
</div>
<p>Now we can also restart some tablets and monitor the behaviour of Vitess via
the created dashboards. In the following, we test these scenarios:</p>
<ul>
<li>Delete the primary tablet for the <code>commerce</code> keyspace</li>
<li>Delete the replica for shard <code>80-</code> in the <code>customer</code> keyspace</li>
<li>Delete the primary for shard <code>80-</code> in the <code>customer</code> keyspace</li>
</ul>
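<p>To run these scenarios, we can look up the tablet with the wanted type for a
shard and delete the corresponding pod. A hedged sketch (the pod name is taken
from the listing above and will differ in your cluster):</p>
<pre><code class="language-sh"># Find the current primary (or replica) tablet for a shard
vtctldclient GetTablets --keyspace customer --shard 80-

# Delete the pod that backs the tablet to simulate a failure
kubectl delete pod example-vttablet-zone1-0118374573-10d08e80
</code></pre>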
<div class="grid grid-cols-2 md:grid-cols-4 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-restart-1.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-restart-1.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-restart-2.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-restart-2.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-restart-3.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-restart-3.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-restart-4.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/dashboard-restart-4.png" alt="About"/>
    </a>
  </div>
</div>
<p>Last but not least, we can monitor a single tablet by creating a port forward
and opening <code>http://localhost:15005</code> in our browser. The dashboard displays the
number of queries per second, the current query and transaction log, real-time
queries, and much more.</p>
<pre><code class="language-sh">kubectl port-forward example-vttablet-zone1-4277914223-0f04a9a6 15005:15000
</code></pre>
<p><a href="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/vttablet.png"><img src="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/vttablet.png" alt="VTTablet"/></a></p>
<p>That&#39;s it for today&#39;s post. I had a lot of fun playing around with Vitess and
hopefully gained a better understanding of how it works. I hope you also enjoyed
the post, and I&#39;ll see you next time.</p>
<div class="footnotes" role="doc-endnotes">
<hr/>
<ol>
<li id="fn:1">
<p>Documentation for
<a href="https://vitess.io/docs/21.0/user-guides/configuration-advanced/resharding/#sequences">Sequences</a>
and
<a href="https://vitess.io/docs/21.0/user-guides/configuration-advanced/resharding/#vindexes">Vindexes</a> <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2">
<p>We will use the dashboards from the following
<a href="https://gist.github.com/sougou/09d53531c5aa6baeb417cf50476dbe89">Gist</a> <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</div>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/getting-started-with-vitess/assets/vitess.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/getting-started-with-vitess/</guid><pubDate>Thu, 10 Apr 2025 13:00:00 +0000</pubDate></item><item><title>Use OpenTelemetry Traces in React Applications</title><link>https://ricoberger.de/blog/posts/use-opentelemetry-traces-in-react-applications/</link><description><![CDATA[<p>Some years ago, I watched the KubeCon talk
<a href="https://www.youtube.com/watch?v=8Ldp9w8wm-U">&#34;Tracing is For Everyone: Tracing User Events with GraphQL and OpenTelemetry&#34;</a>
by Nina Stawski. Since then, I have wanted to experiment with instrumenting
frontend apps using OpenTelemetry. Recently, the topic came up at work, so I
finally created a small proof of concept to explore this further.</p>
<p>For this proof of concept, we are creating a simple task management app using
React and an <a href="https://expressjs.com/">Express</a> API. The app allows users to
create tasks, delete tasks, and mark tasks as completed. The tasks are stored in
memory on the Express server. The source code for the complete app is available
in the
<a href="https://github.com/ricoberger/playground/tree/ae2828edd288557eacfba73bd08a8999a3bebe1d/applications/use-opentelemetry-traces-in-react-applications">ricoberger/playground</a>
repository. A screenshot of the final app is provided below.</p>
<p><a href="https://ricoberger.de/blog/posts/use-opentelemetry-traces-in-react-applications/assets/app.png"><img src="https://ricoberger.de/blog/posts/use-opentelemetry-traces-in-react-applications/assets/app.png" alt="Task App"/></a></p>
<p>To instrument our React application with OpenTelemetry, we create a
<a href="https://github.com/ricoberger/playground/blob/ae2828edd288557eacfba73bd08a8999a3bebe1d/applications/use-opentelemetry-traces-in-react-applications/frontend/src/components/TraceProvider.jsx"><code>TraceProvider.jsx</code></a>
file in our project. In the first step, we import the <code>WebTracerProvider</code> class
from <code>@opentelemetry/sdk-trace-web</code> to create a new trace provider that allows
us to automatically trace activities in the browser. Within the trace provider,
we can add resources to the spans, such as the service name. We also need to set
the span processors:</p>
<ul>
<li>The <code>ConsoleSpanExporter</code> will print all spans to the console. We use it
solely for testing purposes. In a production setup, this exporter should not
be used.</li>
<li>The <code>OTLPTraceExporter</code> exports traces to the OpenTelemetry collector and
requires an endpoint that ends with <code>/v1/traces</code>. In our case, we will use the
same address as our frontend and proxy all requests to the endpoint for the
OpenTelemetry collector. We will examine this in more detail later.</li>
</ul>
<pre><code class="language-jsx">const provider = new WebTracerProvider({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: &#34;react-app&#34;,
  }),
  spanProcessors: [
    new SimpleSpanProcessor(new ConsoleSpanExporter()),
    new SimpleSpanProcessor(
      new OTLPTraceExporter({
        url: &#34;http://localhost:3000/v1/traces&#34;,
      }),
    ),
  ],
});
</code></pre>
<p>In the next step, we register the trace provider for use with the OpenTelemetry
API. We utilize the <code>ZoneContextManager</code> to trace asynchronous operations.
Additionally, we set the appropriate propagators to ensure the trace context is
passed between services. In our case, we use the <code>B3Propagator</code> and the
<code>W3CTraceContextPropagator</code>.</p>
<pre><code class="language-jsx">provider.register({
  contextManager: new ZoneContextManager(),
  propagator: new CompositePropagator({
    propagators: [new B3Propagator(), new W3CTraceContextPropagator()],
  }),
});
</code></pre>
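<p>To illustrate what the <code>W3CTraceContextPropagator</code> puts on the
wire: the trace context travels as a single <code>traceparent</code> HTTP header
of the form <code>version-traceid-spanid-flags</code>. A small sketch (the IDs
are illustrative, taken from the example in the W3C specification):</p>
<pre><code class="language-sh"># Example W3C trace context header: version - trace-id - parent-span-id - flags
traceparent="00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"

# Split the header on "-" to recover its parts
trace_id=$(echo "$traceparent" | cut -d- -f2)
span_id=$(echo "$traceparent" | cut -d- -f3)
echo "trace_id=$trace_id span_id=$span_id"
</code></pre>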
<p>Afterward, we register the instrumentations for the OpenTelemetry SDK. We use
<code>DocumentLoadInstrumentation</code> to automatically create a span for the page load
and all loaded assets, and <code>FetchInstrumentation</code> to trace all API requests. We
also set the URLs for which we want to propagate the trace headers and clear
the timing resources.</p>
<pre><code class="language-jsx">registerInstrumentations({
  instrumentations: [
    new DocumentLoadInstrumentation(),
    new FetchInstrumentation({
      propagateTraceHeaderCorsUrls: [&#34;http://localhost:3000&#34;],
      clearTimingResources: true,
    }),
  ],
});
</code></pre>
<p>Lastly, we export a <code>TraceProvider</code> component. This component will wrap our
<a href="https://github.com/ricoberger/playground/blob/ae2828edd288557eacfba73bd08a8999a3bebe1d/applications/use-opentelemetry-traces-in-react-applications/frontend/src/main.jsx">root component</a>
to ensure that the tracing setup is properly initialized.</p>
<pre><code class="language-jsx">export function TraceProvider(props) {
  return &lt;&gt;{props.children}&lt;/&gt;;
}
</code></pre>
<p>At this point, we have automatically instrumented all document loads and API
requests. To create spans manually, we need to create a new tracer using our
provider.</p>
<pre><code class="language-jsx">export const webTracer = provider.getTracer(&#34;web-tracer&#34;);
</code></pre>
<p>Now we can use the <code>webTracer</code> to create a new span. We can then execute a
function, in this case, an API call, within the context of the span and add
events to it.</p>
<pre><code class="language-jsx">function toggleTaskCompleted(id) {
  const span = webTracer.startSpan(&#34;toggleTaskCompleted&#34;);
  context.with(trace.setSpan(context.active(), span), () =&gt; {
    fetch(&#34;/api/tasks/&#34; + id, {
      method: &#34;PUT&#34;,
      headers: {
        &#34;Content-Type&#34;: &#34;application/json&#34;,
        Accept: &#34;application/json&#34;,
      },
    })
      .then((res) =&gt; {
        trace.getSpan(context.active()).addEvent(&#34;parseJson&#34;);
        return res.json();
      })
      .then((data) =&gt; {
        trace.getSpan(context.active()).addEvent(&#34;updateTask&#34;);
        const updatedTasks = tasks.map((task) =&gt; {
          if (id === task.id) {
            return data;
          }
          return task;
        });

        trace.getSpan(context.active()).addEvent(&#34;setTasks&#34;);
        span.end();
        setTasks(updatedTasks);
      });
  });
}
</code></pre>
<p>An example trace for when a user marks a task as done in our project can be
found below. It includes our manually created span
<code>react-app: toggleTaskCompleted</code> and the automatically generated span
<code>react-app: HTTP PUT</code> from the <code>FetchInstrumentation</code>. Since we propagate the
trace headers, the spans are also connected to those from our Express backend.</p>
<p><a href="https://ricoberger.de/blog/posts/use-opentelemetry-traces-in-react-applications/assets/trace.png"><img src="https://ricoberger.de/blog/posts/use-opentelemetry-traces-in-react-applications/assets/trace.png" alt="Trace"/></a></p>
<p>As promised, I would like to review the project setup and explain why we are
using <code>http://localhost:3000/v1/traces</code> for the <code>OTLPTraceExporter</code>. We are
running the project using a
<a href="https://github.com/ricoberger/playground/blob/ae2828edd288557eacfba73bd08a8999a3bebe1d/applications/use-opentelemetry-traces-in-react-applications/docker-compose.yaml">Docker Compose file</a>,
which starts four containers:</p>
<ul>
<li>A <code>jaeger</code> container that runs <a href="https://www.jaegertracing.io/">Jaeger</a>. Jaeger
stores our traces and provides a user-friendly interface to view them.</li>
<li>A <code>otel-collector</code> container that runs the
<a href="https://opentelemetry.io/docs/collector/">OpenTelemetry Collector</a>. The
collector receives all spans from our Express backend and React frontend, then
forwards them to Jaeger. You can find the configuration for the collector
<a href="https://github.com/ricoberger/playground/blob/ae2828edd288557eacfba73bd08a8999a3bebe1d/applications/use-opentelemetry-traces-in-react-applications/otel-collector-config.yaml">here</a>.</li>
<li>A <code>backend</code> container that runs our Express app.</li>
<li>A <code>frontend</code> container that runs NGINX to serve our frontend app. In the
<a href="https://github.com/ricoberger/playground/blob/ae2828edd288557eacfba73bd08a8999a3bebe1d/applications/use-opentelemetry-traces-in-react-applications/frontend/nginx.conf#L19">NGINX configuration</a>,
we define that all requests starting with <code>/api</code> are routed to the <code>backend</code>
container, while requests starting with <code>/v1/traces</code> are routed to the
<code>otel-collector</code> container.</li>
</ul>
<p>That&#39;s it! If you want to try this on your own, you can check out the
<a href="https://github.com/ricoberger/playground/tree/ae2828edd288557eacfba73bd08a8999a3bebe1d/applications/use-opentelemetry-traces-in-react-applications">example project</a>
and start it with
<code>docker compose -f docker-compose.yaml up --force-recreate --build</code>. Afterward,
you can open <code>http://localhost:3000</code> in your browser to create, delete, and mark
tasks as completed. If you access the Jaeger frontend at
<code>http://localhost:16686</code>, you should see some traces from our React frontend and
Express backend.</p>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/use-opentelemetry-traces-in-react-applications/assets/trace.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/use-opentelemetry-traces-in-react-applications/</guid><pubDate>Sat, 29 Mar 2025 11:00:00 +0000</pubDate></item><item><title>Use a VPN to Access your Homelab</title><link>https://ricoberger.de/blog/posts/use-a-vpn-to-access-your-homelab/</link><description><![CDATA[<p>After my
<a href="https://ricoberger.de/blog/posts/use-cloudflare-tunnels-to-access-your-homelab/">last blog post</a>,
a colleague (hi Falk 👋) shared insights about his homelab setup. Since I&#39;m
always open to new ideas, today&#39;s post will explore how to access our homelab
via VPN and how to use Traefik, Cloudflare, and Let&#39;s Encrypt to obtain a free
certificate for accessing our services via HTTPS.</p>
<p><a href="https://ricoberger.de/blog/posts/use-a-vpn-to-access-your-homelab/assets/diagram.png"><img src="https://ricoberger.de/blog/posts/use-a-vpn-to-access-your-homelab/assets/diagram.png" alt="Diagram"/></a></p>
<h2>Setup a VPN</h2>
<p>Fortunately, the router we are using in this blog post supports access to the
home network via a VPN using WireGuard in the background<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. We can simply
activate the function and download the configuration file.</p>
<p>When we examine the downloaded configuration file, we see that we will use
the public IP address of our router to connect to the VPN from outside. However,
since our router gets a dynamically assigned public IP address, we would have to
note it down every day before leaving the house. As this is very inconvenient,
we can register a dynamic host record
(<a href="https://en.wikipedia.org/wiki/Dynamic_DNS">DynDNS</a>) instead. There
are many DynDNS providers available; however, we will build our own using
Cloudflare.</p>
<p>In the first step, we need to create an API token at Cloudflare with the
permissions to create and update DNS records.</p>
<table>
<thead>
<tr>
<th>Type</th>
<th>Item</th>
<th>Permission</th>
</tr>
</thead>
<tbody>
<tr>
<td>Zone</td>
<td>DNS</td>
<td>Edit</td>
</tr>
</tbody>
</table>
<p>Now we will create a new DNS record at Cloudflare for our VPN that points to the
public IP address of our router.</p>
<pre><code class="language-sh">export CLOUDFLARE_ZONE_ID=
export CLOUDFLARE_API_TOKEN=

curl https://api.cloudflare.com/client/v4/zones/${CLOUDFLARE_ZONE_ID}/dns_records \
  -H &#34;Authorization: Bearer ${CLOUDFLARE_API_TOKEN}&#34; \
  -H &#34;Content-Type: application/json&#34; \
  -d &#39;{
    &#34;content&#34;: &#34;&lt;PUBLIC-IP-ADDRESS&gt;&#34;,
    &#34;name&#34;: &#34;vpn.homelab.ricoberger.dev&#34;,
    &#34;proxied&#34;: false,
    &#34;ttl&#34;: 300,
    &#34;type&#34;: &#34;A&#34;
  }&#39;
</code></pre>
<p>To update the DNS record whenever our public IP address changes, we can create a
small script<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>. This script will retrieve our public IP address and the ID of
the DNS record, then update the DNS record by ID with the new IP address.</p>
<pre><code class="language-sh">#!/usr/bin/env bash

ipaddress=$(curl &#39;https://api.ipify.org?format=json&#39; | jq -r .ip)

dnsrecord=$(curl -s -X GET &#34;https://api.cloudflare.com/client/v4/zones/${CLOUDFLARE_ZONE_ID}/dns_records?type=A&amp;name=vpn.homelab.ricoberger.dev&#34; \
  -H &#34;Authorization: Bearer ${CLOUDFLARE_API_TOKEN}&#34; \
  -H &#34;Content-Type: application/json&#34;)


dnsrecordid=$(echo &#34;${dnsrecord}&#34; | jq -r &#39;.result[0].id&#39;)
dnsrecordipaddress=$(echo &#34;${dnsrecord}&#34; | jq -r &#39;.result[0].content&#39;)

if [ &#34;${ipaddress}&#34; == &#34;${dnsrecordipaddress}&#34; ]; then
  echo &#34;DNS record is already up to date&#34;
  exit 0
fi

curl -X PUT https://api.cloudflare.com/client/v4/zones/${CLOUDFLARE_ZONE_ID}/dns_records/${dnsrecordid} \
  -H &#34;Authorization: Bearer ${CLOUDFLARE_API_TOKEN}&#34; \
  -H &#34;Content-Type: application/json&#34; \
  -d &#39;{
    &#34;content&#34;: &#34;&#39;${ipaddress}&#39;&#34;,
    &#34;name&#34;: &#34;vpn.homelab.ricoberger.dev&#34;,
    &#34;proxied&#34;: false,
    &#34;ttl&#34;: 300,
    &#34;type&#34;: &#34;A&#34;
  }&#39;
</code></pre>
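<p>To illustrate what the two jq filters in the script extract, here is a mocked
and heavily abbreviated response from the DNS records endpoint, using
placeholder values and an equivalent, slightly simplified filter syntax:</p>
<pre><code class="language-sh">response='{"result": [{"id": "record-id-placeholder", "content": "203.0.113.7"}]}'

echo "${response}" | jq -r '.result[0].id'      # prints record-id-placeholder
echo "${response}" | jq -r '.result[0].content' # prints 203.0.113.7
</code></pre>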
<p>Last but not least, we create a new cron job to run the script every five
minutes. To add it, we run <code>crontab -e</code> and add the following entry:</p>
<pre><code class="language-plaintext">*/5 * * * * update-dns.sh
</code></pre>
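<p>Note that cron runs jobs with a minimal environment, so the script will not see
the <code>CLOUDFLARE_ZONE_ID</code> and <code>CLOUDFLARE_API_TOKEN</code> variables exported in our
shell. One way to handle this (a sketch; the script path is an assumption) is to
define the variables at the top of the crontab and reference the script by its
absolute path:</p>
<pre><code class="language-plaintext">CLOUDFLARE_ZONE_ID=your-zone-id
CLOUDFLARE_API_TOKEN=your-api-token

*/5 * * * * /usr/local/bin/update-dns.sh
</code></pre>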
<p>At this point, we have a DNS record (<code>vpn.homelab.ricoberger.dev</code>) to access our
VPN and have ensured that it automatically updates when our router&#39;s public IP
address changes. In the next step, we will set up Traefik with a Let&#39;s Encrypt
certificate to access our services via HTTPS.</p>
<h2>Setup Traefik</h2>
<p>We will deploy Traefik using <a href="https://docs.docker.com/">Docker</a> and
<a href="https://docs.docker.com/compose/">Docker Compose</a>. For that, we create a
<code>docker-compose.yaml</code> file with the following content:</p>
<pre><code class="language-yaml">services:
  traefik:
    container_name: traefik
    image: traefik:v3.3.4
    restart: always
    security_opt:
      - no-new-privileges:true
    # Use the &#34;proxy&#34; network for the Traefik container
    networks:
      - proxy
    # Map the http and https ports to the host system
    ports:
      - 80:80
      - 443:443
    # Mount the &#34;.env&#34; file which contains our secrets
    env_file: .env
    # - The &#34;docker.sock&#34; file is required to use the Docker provider within
    #   Traefik, see https://doc.traefik.io/traefik/providers/docker/
    # - The &#34;traefik.yaml&#34; file contains the Traefik configuration
    # - The &#34;acme.json&#34; file is used to store the certificate which is issued by
    #   Let&#39;s Encrypt
    # - The &#34;config.yaml&#34; file holds the configuration for the File provider,
    #   see https://doc.traefik.io/traefik/providers/file/
    volumes:
      - /etc/localtime:/etc/localtime
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data/traefik.yaml:/traefik.yaml
      - ./data/acme.json:/acme.json
      - ./data/config.yaml:/config.yaml
    # To be able to redirect traffic from Traefik to a service, which is running
    # on the host system we have to add the following extra hosts
    extra_hosts:
      - host.docker.internal:host-gateway
    # Add labels to the Traefik container to configure the entrypoint, the host
    # which can be used to access the Traefik dashboard, the domains for which
    # the certificate should be issued and the basic auth and http to https
    # redirect middleware
    labels:
      - &#34;traefik.enable=true&#34;
      - &#34;traefik.http.routers.traefik.entrypoints=http&#34;
      - &#34;traefik.http.routers.traefik.rule=Host(`traefik.homelab.ricoberger.dev`)&#34;
      - &#34;traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_DASHBOARD_CREDENTIALS}&#34;
      - &#34;traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https&#34;
      - &#34;traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https&#34;
      - &#34;traefik.http.routers.traefik.middlewares=traefik-https-redirect&#34;
      - &#34;traefik.http.routers.traefik-secure.entrypoints=https&#34;
      - &#34;traefik.http.routers.traefik-secure.rule=Host(`traefik.homelab.ricoberger.dev`)&#34;
      - &#34;traefik.http.routers.traefik-secure.middlewares=traefik-auth&#34;
      - &#34;traefik.http.routers.traefik-secure.tls=true&#34;
      - &#34;traefik.http.routers.traefik-secure.tls.certresolver=cloudflare&#34;
      - &#34;traefik.http.routers.traefik-secure.tls.domains[0].main=homelab.ricoberger.dev&#34;
      - &#34;traefik.http.routers.traefik-secure.tls.domains[0].sans=*.homelab.ricoberger.dev&#34;
      - &#34;traefik.http.routers.traefik-secure.service=api@internal&#34;

networks:
  proxy:
    external: true
</code></pre>
<p>In the <code>docker-compose.yaml</code> file above, we defined our intention to use the
<code>proxy</code> Docker network for Traefik. To create the Docker network, we can run the
following command:</p>
<pre><code class="language-sh">docker network create proxy
</code></pre>
<p>The <code>.env</code> file containing our secrets should include the Cloudflare API token,
which we used to create the DNS record for the VPN. It should also have the
email address used to issue the certificate and the credentials for the Traefik
dashboard. You can create the credentials by running
<code>echo $(htpasswd -nB admin) | sed -e s/\\$/\\$\\$/g</code>.</p>
<pre><code class="language-plaintext">CF_DNS_API_TOKEN=
TRAEFIK_CERTIFICATESRESOLVERS_CLOUDFLARE_ACME_EMAIL=
TRAEFIK_DASHBOARD_CREDENTIALS=
</code></pre>
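<p>The <code>sed</code> part of the <code>htpasswd</code> command above is needed because Docker Compose
treats <code>$</code> as the start of a variable reference and <code>$$</code> as an escaped literal
<code>$</code>. Doubling every <code>$</code> therefore lets the bcrypt hash survive interpolation in
the container labels. A small illustration with a hypothetical, shortened hash:</p>
<pre><code class="language-sh">hash='admin:$2y$05$examplehash' # hypothetical htpasswd output
escaped=$(echo "${hash}" | sed -e 's/\$/\$\$/g')
echo "${escaped}"               # prints admin:$$2y$$05$$examplehash
</code></pre>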
<p>To create the <code>acme.json</code> file, which will store the certificate issued by Let&#39;s
Encrypt, we will run the following commands:</p>
<pre><code class="language-sh">touch data/acme.json
chmod 600 data/acme.json
</code></pre>
<p>In the <code>traefik.yaml</code> file, we define the configuration for Traefik as follows:</p>
<pre><code class="language-yaml">log:
  level: DEBUG
api:
  # Enable the Traefik dashboard, see
  # https://doc.traefik.io/traefik/operations/dashboard/
  dashboard: true
  debug: true
# Define the http and https entrypoints and configure the http entrypoint to
# redirect all traffic to the https entrypoint
entryPoints:
  http:
    address: &#34;:80&#34;
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: &#34;:443&#34;
serversTransport:
  insecureSkipVerify: true
# Configure the Docker provider, by setting the Docker endpoint to the mounted
# &#34;docker.sock&#34; file and the file provider by setting the name of the
# configuration file, which should be used
providers:
  docker:
    endpoint: &#34;unix:///var/run/docker.sock&#34;
    exposedByDefault: false
  file:
    filename: /config.yaml
# Configure the DNS provider (in our case Cloudflare), which should be used for
# creating the TXT records, required by the DNS-01 challenge to issue the
# certificates
#
# For more information regarding the DNS-01 challenge, see
# https://letsencrypt.org/docs/challenge-types/#dns-01-challenge
certificatesResolvers:
  cloudflare:
    acme:
      storage: acme.json
      caServer: https://acme-v02.api.letsencrypt.org/directory
      # caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      dnsChallenge:
        provider: cloudflare
        resolvers:
          - &#34;1.1.1.1:53&#34;
          - &#34;1.0.0.1:53&#34;
</code></pre>
<p>Last but not least, we can create the <code>config.yaml</code> file by running
<code>touch data/config.yaml</code>. Then, we can start Traefik with the following command:</p>
<pre><code class="language-sh">docker-compose -f docker-compose.yaml up -d --force-recreate
</code></pre>
<p>If we now take a look at the <code>acme.json</code> file (<code>cat data/acme.json</code>), we should
see the certificate issued by Let&#39;s Encrypt for <code>*.homelab.ricoberger.dev</code>:</p>
<pre><code class="language-json">{
  &#34;cloudflare&#34;: {
    &#34;Account&#34;: {
      &#34;Email&#34;: &#34;&#34;,
      &#34;Registration&#34;: {
        &#34;body&#34;: {
          &#34;status&#34;: &#34;valid&#34;
        },
        &#34;uri&#34;: &#34;https://acme-v02.api.letsencrypt.org/acme/acct/&lt;REDACTED&gt;&#34;
      },
      &#34;PrivateKey&#34;: &#34;&lt;REDACTED&gt;&#34;,
      &#34;KeyType&#34;: &#34;4096&#34;
    },
    &#34;Certificates&#34;: [
      {
        &#34;domain&#34;: {
          &#34;main&#34;: &#34;homelab.ricoberger.dev&#34;,
          &#34;sans&#34;: [&#34;*.homelab.ricoberger.dev&#34;]
        },
        &#34;certificate&#34;: &#34;&lt;REDACTED&gt;&#34;,
        &#34;key&#34;: &#34;&lt;REDACTED&gt;&#34;,
        &#34;Store&#34;: &#34;default&#34;
      }
    ]
  }
}
</code></pre>
<p>If we create a new DNS record similar to the one for the VPN, but using the
local IP address instead of the public one
(<code>ipconfig getifaddr en0 || ipconfig getifaddr en1</code>) for
<code>traefik.homelab.ricoberger.dev</code>, we should be able to access the Traefik
dashboard afterwards via our browser.</p>
<pre><code class="language-sh">export CLOUDFLARE_ZONE_ID=
export CLOUDFLARE_API_TOKEN=

curl https://api.cloudflare.com/client/v4/zones/${CLOUDFLARE_ZONE_ID}/dns_records \
  -H &#34;Authorization: Bearer ${CLOUDFLARE_API_TOKEN}&#34; \
  -H &#34;Content-Type: application/json&#34; \
  -d &#39;{
    &#34;content&#34;: &#34;&lt;LOCAL-IP-ADDRESS&gt;&#34;,
    &#34;name&#34;: &#34;traefik.homelab.ricoberger.dev&#34;,
    &#34;proxied&#34;: false,
    &#34;ttl&#34;: 300,
    &#34;type&#34;: &#34;A&#34;
  }&#39;
</code></pre>
<p>At this point, we have created a wildcard certificate through Let&#39;s Encrypt,
allowing us to use it for our services, such as the Traefik Dashboard, which can
be accessed via HTTPS. In the final step, we will examine how to expose the
Ollama API and Open WebUI from the blog post
<a href="https://ricoberger.de/blog/posts/mac-mini-as-ai-server/">Mac mini as AI Server</a>.</p>
<p>For the Ollama API, we are utilizing Traefik&#39;s file provider, since the API runs
on our host system rather than within a Docker container. To make the Ollama API
accessible via <code>ollama.homelab.ricoberger.dev</code>, we need to add the following
configuration to the <code>data/config.yaml</code> file:</p>
<pre><code class="language-yaml">http:
  # Create a new &#34;ollama&#34; router, which uses the &#34;https&#34; entrypoint and
  # handles requests for the &#34;ollama.homelab.ricoberger.dev&#34; domain
  routers:
    ollama:
      entryPoints:
        - &#34;https&#34;
      rule: &#34;Host(`ollama.homelab.ricoberger.dev`)&#34;
      middlewares:
        - default-headers
        - https-redirectscheme
      tls: {}
      service: ollama

  # Create an &#34;ollama&#34; service for the &#34;ollama&#34; router from above, where we set
  # the url of the Ollama API. Since the Ollama API is running on port 11434 on
  # the host system, we have to use the &#34;host.docker.internal&#34; address
  services:
    ollama:
      loadBalancer:
        servers:
          - url: &#34;http://host.docker.internal:11434&#34;
        passHostHeader: true
</code></pre>
<p>For the Open WebUI, we can create the following Docker Compose file and run it
via <code>docker-compose -f docker-compose.yaml up -d --force-recreate</code>:</p>
<pre><code class="language-yaml">name: open-webui

services:
  open-webui:
    container_name: open-webui
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    # Use the &#34;proxy&#34; network, so that the Open WebUI container is using the
    # same network as the Traefik container
    networks:
      - proxy
    volumes:
      - ./data:/app/backend/data
    # Add the extra hosts, so that the Open WebUI is able to access the Ollama
    # API
    extra_hosts:
      - host.docker.internal:host-gateway
    # Add the following labels, so that Traefik knows the port of the Open
    # WebUI, the entrypoint and the domain which should be used
    labels:
      - &#34;traefik.enable=true&#34;
      - &#34;traefik.http.routers.open-webui.rule=Host(`open-webui.homelab.ricoberger.dev`)&#34;
      - &#34;traefik.http.routers.open-webui.entrypoints=https&#34;
      - &#34;traefik.http.routers.open-webui.tls=true&#34;
      - &#34;traefik.http.services.open-webui.loadbalancer.server.port=8080&#34;

networks:
  proxy:
    external: true
</code></pre>
<p>If we add DNS records for <code>ollama.homelab.ricoberger.dev</code> and
<code>open-webui.homelab.ricoberger.dev</code> to Cloudflare as before, we can access the
Ollama API and the Open WebUI through their domains<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>.</p>
<p>That&#39;s it for today&#39;s post. You can find the complete setup in the
<a href="https://github.com/ricoberger/playground/tree/95b9f245ebe9ef840a8d642bc4c6f530f3452edf/homelab">ricoberger/playground</a>
repository.</p>
<div class="footnotes" role="doc-endnotes">
<hr/>
<ol>
<li id="fn:1">
<p>If your router doesn&#39;t support this function, you can create the WireGuard
setup manually by following one of the many available tutorials. The easiest
one for me was the one from the Pi-hole documentation:
<a href="https://docs.pi-hole.net/guides/vpn/wireguard/server/">Installing the WireGuard server</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2">
<p>An improved version of the script can be found in the
<a href="https://github.com/ricoberger/playground/blob/95b9f245ebe9ef840a8d642bc4c6f530f3452edf/homelab/dns/dns.sh">ricoberger/playground</a>
repository. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:3">
<p>If you prefer not to add DNS records for your homelab services to your DNS
provider, you can run your own DNS server. I might write a follow-up blog
post about this. <a href="#fnref:3" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</div>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/use-a-vpn-to-access-your-homelab/assets/diagram.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/use-a-vpn-to-access-your-homelab/</guid><pubDate>Mon, 24 Mar 2025 20:00:00 +0000</pubDate></item><item><title>Use Cloudflare Tunnels to Access your Homelab</title><link>https://ricoberger.de/blog/posts/use-cloudflare-tunnels-to-access-your-homelab/</link><description><![CDATA[<p>In our
<a href="https://ricoberger.de/blog/posts/mac-mini-as-ai-server/">last blog post</a>, we
looked at how to set up an AI server on a Mac mini and how to access the server
in our homelab. In today&#39;s post, we will make the server available through
Cloudflare Tunnels, allowing us to access it from anywhere.</p>
<p>Cloudflare Tunnel provides us with a secure way to connect our resources to
Cloudflare without a publicly routable IP address. With Tunnel, we do not send
traffic to an external IP — instead, a lightweight daemon in our infrastructure
(<code>cloudflared</code>) creates outbound-only connections to Cloudflare&#39;s global
network. Cloudflare Tunnel can connect HTTP web servers, SSH servers, remote
desktops, and other protocols safely to Cloudflare. This way, our origins can
serve traffic through Cloudflare without being vulnerable to attacks that bypass
Cloudflare.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<h2>How It Works</h2>
<p><code>cloudflared</code> establishes outbound connections (tunnels) between our resources
and Cloudflare&#39;s global network. Tunnels are persistent objects that route
traffic to DNS records. Within the same tunnel, we can run as many <code>cloudflared</code>
processes (connectors) as needed. These processes will establish connections to
Cloudflare and send traffic to the nearest Cloudflare data center.</p>
<p><a href="https://ricoberger.de/blog/posts/use-cloudflare-tunnels-to-access-your-homelab/assets/cloudflare-tunnel.webp"><img src="https://ricoberger.de/blog/posts/use-cloudflare-tunnels-to-access-your-homelab/assets/cloudflare-tunnel.webp" alt="Cloudflare Tunnel"/></a></p>
<h2>Setup a Tunnel</h2>
<p>In the following, we will set up a Cloudflare Tunnel to expose the Open WebUI
from the
<a href="https://ricoberger.de/blog/posts/mac-mini-as-ai-server/">last blog post</a> via
<code>openwebui-homelab.ricoberger.dev</code>. We will create a new tunnel and DNS entry,
connecting the Open WebUI through the tunnel with Cloudflare. This will route
traffic from <code>openwebui-homelab.ricoberger.dev</code> to the <code>localhost:3000</code> address,
where the Open WebUI is running.</p>
<p>To proceed, we need to
<a href="https://developers.cloudflare.com/fundamentals/setup/manage-domains/add-site/">add a website to Cloudflare</a>
or
<a href="https://developers.cloudflare.com/dns/zone-setups/full-setup/setup/">change the domain nameservers</a>
for <code>ricoberger.dev</code> to Cloudflare. Once this is done, we can follow these steps
to create the Cloudflare Tunnel and make our Open WebUI accessible via the
specified domain:</p>
<ol>
<li>
<p><a href="https://developers.cloudflare.com/fundamentals/api/get-started/create-token/">Create an API token</a>
with the following permissions:</p>
<table>
<thead>
<tr>
<th>Type</th>
<th>Item</th>
<th>Permission</th>
</tr>
</thead>
<tbody>
<tr>
<td>Account</td>
<td>Cloudflare Tunnel</td>
<td>Edit</td>
</tr>
<tr>
<td>Zone</td>
<td>DNS</td>
<td>Edit</td>
</tr>
</tbody>
</table>
</li>
<li>
<p>Create a tunnel by making a <code>POST</code> request to the
<a href="https://developers.cloudflare.com/api/resources/zero_trust/subresources/access/subresources/applications/methods/create/">Cloudflare Tunnel</a>
endpoint:</p>
<pre><code class="language-sh">export CLOUDFLARE_ACCOUNT_ID=
export CLOUDFLARE_API_TOKEN=

curl &#34;https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/cfd_tunnel&#34; \
  --header &#34;Content-Type: application/json&#34; \
  --header &#34;Authorization: Bearer $CLOUDFLARE_API_TOKEN&#34; \
  --data &#39;{
    &#34;name&#34;: &#34;homelab&#34;,
    &#34;config_src&#34;: &#34;cloudflare&#34;
  }&#39;
</code></pre>
<pre><code class="language-json">{
  &#34;success&#34;: true,
  &#34;errors&#34;: [],
  &#34;messages&#34;: [],
  &#34;result&#34;: {
    &#34;id&#34;: &#34;&lt;tunnel-id&gt;&#34;,
    &#34;account_tag&#34;: &#34;&lt;REDACTED&gt;&#34;,
    &#34;created_at&#34;: &#34;2025-03-15T12:46:52.095257Z&#34;,
    &#34;deleted_at&#34;: null,
    &#34;name&#34;: &#34;homelab&#34;,
    &#34;connections&#34;: [],
    &#34;conns_active_at&#34;: null,
    &#34;conns_inactive_at&#34;: &#34;2025-03-15T12:46:52.095257Z&#34;,
    &#34;tun_type&#34;: &#34;cfd_tunnel&#34;,
    &#34;metadata&#34;: {},
    &#34;status&#34;: &#34;inactive&#34;,
    &#34;remote_config&#34;: true,
    &#34;credentials_file&#34;: {
      &#34;AccountTag&#34;: &#34;&lt;REDACTED&gt;&#34;,
      &#34;TunnelID&#34;: &#34;&lt;REDACTED&gt;&#34;,
      &#34;TunnelName&#34;: &#34;homelab&#34;,
      &#34;TunnelSecret&#34;: &#34;&lt;REDACTED&gt;&#34;
    },
    &#34;token&#34;: &#34;&lt;tunnel-token&gt;&#34;
  }
}
</code></pre>
</li>
<li>
<p>Copy the <code>id</code> and <code>token</code> values shown in the output. We will need these
values to configure and run the tunnel.</p>
<pre><code class="language-sh">export CLOUDFLARE_TUNNEL_ID=
export CLOUDFLARE_TUNNEL_TOKEN=
</code></pre>
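<p>Instead of copying the values by hand, they can also be extracted with jq. A
sketch, assuming the response from the previous step was saved to a
<code>tunnel.json</code> file (mocked here with placeholder values):</p>
<pre><code class="language-sh"># Mocked create-tunnel response; the real values are issued by the API
echo '{"result": {"id": "tunnel-id-placeholder", "token": "tunnel-token-placeholder"}}' > tunnel.json

export CLOUDFLARE_TUNNEL_ID=$(jq -r '.result.id' tunnel.json)
export CLOUDFLARE_TUNNEL_TOKEN=$(jq -r '.result.token' tunnel.json)
</code></pre>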
</li>
<li>
<p>Make a
<a href="https://developers.cloudflare.com/api/resources/zero_trust/subresources/tunnels/subresources/cloudflared/subresources/configurations/methods/update/"><code>PUT</code> request</a>
to route our local service URL to a public hostname. Our ingress rules must
include a catch-all rule at the end. In the following example, <code>cloudflared</code>
will respond with a 404 status code when the request does not match any of
the previous hostnames.</p>
<pre><code class="language-sh">curl --request PUT \
  &#34;https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/cfd_tunnel/$CLOUDFLARE_TUNNEL_ID/configurations&#34; \
  --header &#34;Content-Type: application/json&#34; \
  --header &#34;Authorization: Bearer $CLOUDFLARE_API_TOKEN&#34; \
  --data &#39;{
    &#34;config&#34;: {
      &#34;ingress&#34;: [
        {
          &#34;hostname&#34;: &#34;openwebui-homelab.ricoberger.dev&#34;,
          &#34;service&#34;: &#34;http://localhost:3000&#34;,
          &#34;originRequest&#34;: {}
        },
        {
          &#34;service&#34;: &#34;http_status:404&#34;
        }
      ]
    }
  }&#39;
</code></pre>
<pre><code class="language-json">{
  &#34;success&#34;: true,
  &#34;errors&#34;: [],
  &#34;messages&#34;: [],
  &#34;result&#34;: {
    &#34;tunnel_id&#34;: &#34;&lt;tunnel-id&gt;&#34;,
    &#34;version&#34;: 1,
    &#34;config&#34;: {
      &#34;ingress&#34;: [
        {
          &#34;service&#34;: &#34;http://localhost:3000&#34;,
          &#34;hostname&#34;: &#34;openwebui-homelab.ricoberger.dev&#34;,
          &#34;originRequest&#34;: {}
        },
        {
          &#34;service&#34;: &#34;http_status:404&#34;
        }
      ],
      &#34;warp-routing&#34;: {
        &#34;enabled&#34;: false
      }
    },
    &#34;source&#34;: &#34;cloudflare&#34;,
    &#34;created_at&#34;: &#34;2025-03-15T13:08:36.465147Z&#34;
  }
}
</code></pre>
</li>
<li>
<p>Create a DNS record for our application. This DNS record allows Cloudflare to
proxy <code>openwebui-homelab.ricoberger.dev</code> traffic to our Cloudflare Tunnel
(<code>&lt;tunnel-id&gt;.cfargotunnel.com</code>).</p>
<pre><code class="language-sh">export CLOUDFLARE_ZONE_ID=

curl &#34;https://api.cloudflare.com/client/v4/zones/$CLOUDFLARE_ZONE_ID/dns_records&#34; \
  --header &#34;Content-Type: application/json&#34; \
  --header &#34;Authorization: Bearer $CLOUDFLARE_API_TOKEN&#34; \
  --data &#39;{
    &#34;type&#34;: &#34;CNAME&#34;,
    &#34;proxied&#34;: true,
    &#34;name&#34;: &#34;openwebui-homelab.ricoberger.dev&#34;,
    &#34;content&#34;: &#34;&lt;tunnel-id&gt;.cfargotunnel.com&#34;
  }&#39;
</code></pre>
<pre><code class="language-json">{
  &#34;result&#34;: {
    &#34;id&#34;: &#34;&lt;REDACTED&gt;&#34;,
    &#34;name&#34;: &#34;openwebui-homelab.ricoberger.dev&#34;,
    &#34;type&#34;: &#34;CNAME&#34;,
    &#34;content&#34;: &#34;&lt;tunnel-id&gt;.cfargotunnel.com&#34;,
    &#34;proxiable&#34;: true,
    &#34;proxied&#34;: true,
    &#34;ttl&#34;: 1,
    &#34;settings&#34;: {
      &#34;flatten_cname&#34;: false
    },
    &#34;meta&#34;: {},
    &#34;comment&#34;: null,
    &#34;tags&#34;: [],
    &#34;created_on&#34;: &#34;2025-03-15T13:17:18.56921Z&#34;,
    &#34;modified_on&#34;: &#34;2025-03-15T13:17:18.56921Z&#34;
  },
  &#34;success&#34;: true,
  &#34;errors&#34;: [],
  &#34;messages&#34;: []
}
</code></pre>
</li>
<li>
<p>Install and run the tunnel</p>
<pre><code class="language-sh">brew install cloudflared
sudo cloudflared service install $CLOUDFLARE_TUNNEL_TOKEN
</code></pre>
</li>
<li>
<p>Verify tunnel status</p>
<pre><code class="language-sh">curl &#34;https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/cfd_tunnel/$CLOUDFLARE_TUNNEL_ID&#34; \
  --header &#34;Content-Type: application/json&#34; \
  --header &#34;Authorization: Bearer $CLOUDFLARE_API_TOKEN&#34;
</code></pre>
<pre><code class="language-json">{
  &#34;success&#34;: true,
  &#34;errors&#34;: [],
  &#34;messages&#34;: [],
  &#34;result&#34;: {
    &#34;id&#34;: &#34;&lt;REDACTED&gt;&#34;,
    &#34;account_tag&#34;: &#34;&lt;REDACTED&gt;&#34;,
    &#34;created_at&#34;: &#34;2025-03-15T12:46:52.095257Z&#34;,
    &#34;deleted_at&#34;: null,
    &#34;name&#34;: &#34;homelab&#34;,
    &#34;connections&#34;: [...],
    &#34;conns_active_at&#34;: &#34;2025-03-15T13:20:53.912617Z&#34;,
    &#34;conns_inactive_at&#34;: null,
    &#34;tun_type&#34;: &#34;cfd_tunnel&#34;,
    &#34;metadata&#34;: {},
    &#34;status&#34;: &#34;healthy&#34;,
    &#34;remote_config&#34;: true
  }
}
</code></pre>
</li>
</ol>
<p>That&#39;s it! Now we can open <code>openwebui-homelab.ricoberger.dev</code> in our browser to
access the Open WebUI from anywhere.</p>
<div class="footnotes" role="doc-endnotes">
<hr/>
<ol>
<li id="fn:1">
<p><a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/">Cloudflare Documentation</a> <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</div>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/use-cloudflare-tunnels-to-access-your-homelab/assets/cloudflare-tunnel.webp" type="image/webp"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/use-cloudflare-tunnels-to-access-your-homelab/</guid><pubDate>Sun, 16 Mar 2025 09:00:00 +0000</pubDate></item><item><title>Mac mini as AI Server</title><link>https://ricoberger.de/blog/posts/mac-mini-as-ai-server/</link><description><![CDATA[<p>In today&#39;s blog post, we will explore how to run a local AI server on a Mac
mini. We will use Ollama to run a large language model locally and Open WebUI as
the web interface to access Ollama.</p>
<p><a href="https://ricoberger.de/blog/posts/mac-mini-as-ai-server/assets/ollama.png"><img src="https://ricoberger.de/blog/posts/mac-mini-as-ai-server/assets/ollama.png" alt="Ollama"/></a></p>
<h2>Ollama</h2>
<p><a href="https://ollama.com/">Ollama</a><sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> is a lightweight, extensible framework
designed for running large language models locally. It enables users to easily
set up and run models such as Llama 3.3, facilitating tasks like code generation
and natural language processing. To install and run Ollama via Homebrew on macOS
we can use the following commands:</p>
<pre><code class="language-sh">brew install ollama
brew services start ollama
</code></pre>
<p>Once Ollama is running, we can pull a model via the <code>ollama pull</code> command. In
the following, we are using <code>Llama 3</code>, but any other model should also work. A
list of all available models can be found on the
<a href="https://ollama.com/search">Ollama website</a>.</p>
<pre><code class="language-sh">ollama pull llama3
</code></pre>
<pre><code class="language-plaintext">pulling manifest
pulling 6a0746a1ec1a... 100% ▕████████████████████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕████████████████████████████████▏  12 KB
pulling 8ab4849b038c... 100% ▕████████████████████████████████▏  254 B
pulling 577073ffcc6c... 100% ▕████████████████████████████████▏  110 B
pulling 3f8eb4da87fa... 100% ▕████████████████████████████████▏  485 B
verifying sha256 digest
writing manifest
success
</code></pre>
<p>When the model is pulled, we can begin chatting with it using the <code>ollama run</code>
command:</p>
<pre><code class="language-sh">ollama run llama3
</code></pre>
<pre><code class="language-plaintext">&gt;&gt;&gt; What are large language models?
Large language models (LLMs) are a type of artificial intelligence (AI) that have revolutionized the field of natural language processing (NLP). They&#39;re incredibly powerful AI systems that can process, understand, and generate human-like language at unprecedented scales.

Here&#39;s what makes them so remarkable:

1. **Scale**: LLMs are trained on massive datasets, often consisting of billions or even trillions of tokens (words, phrases, sentences) from various sources, such as books, articles, conversations, and more.
2. **Complexity**: These models use complex algorithms, such as transformer architectures, to process and analyze the vast amounts of text data. This enables them to capture subtle patterns, relationships, and nuances in language.
3. **Generative capabilities**: LLMs can generate new text based on a given prompt or context, which is known as natural language generation (NLG). They can produce coherent, context-specific responses, such as chatbot conversations, summaries, or even entire stories.

Some key features of large language models include:

1. **Attention mechanism**: This allows the model to focus on specific parts of the input text that are relevant to the current task.
2. **Self-supervised learning**: LLMs can learn from unlabeled data, which enables them to develop a deep understanding of language without explicit human feedback.
3. **Multi-turn dialogue generation**: They can engage in conversations that span multiple turns, responding to context and previous utterances.

Large language models have many applications, such as:

1. **Chatbots**: Conversational AI for customer service, entertainment, or education.
2. **Language translation**: Machine translation systems that can translate text from one language to another.
3. **Summarization**: Automatic summarization of long texts into concise summaries.
4. **Text generation**: Generating text based on a prompt or context for various purposes (e.g., creative writing, content creation).
5. **Question answering**: Answering complex questions by analyzing large amounts of text data.

Some notable examples of large language models include:

1. BERT (Bidirectional Encoder Representations from Transformers)
2. RoBERTa (Robustly Optimized BERT Pretraining Approach)
3. transformers
4. XLNet

These models have the potential to transform various industries, such as education, healthcare, marketing, and more, by enabling more accurate language understanding and generation capabilities.

Would you like to know more about a specific aspect of large language models or their applications?
</code></pre>
<h2>Open WebUI</h2>
<p>Until now, we have only been able to access our large language model via the
command line. In the next step, we will install
<a href="https://openwebui.com/">Open WebUI</a><sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> to enable access through our browser.</p>
<p>Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI
platform designed to operate entirely offline. It supports various LLM runners
like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG,
making it a powerful AI deployment solution. To run Open WebUI via Docker the
following command can be used:</p>
<pre><code class="language-sh">docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
</code></pre>
<p>Alternatively, we can start Open WebUI using the following Docker Compose file
by running <code>docker-compose up</code>:</p>
<pre><code class="language-yaml">name: open-webui

services:
  open-webui:
    ports:
      - 3000:8080
    extra_hosts:
      - host.docker.internal:host-gateway
    volumes:
      - open-webui:/app/backend/data
    container_name: open-webui
    restart: always
    image: ghcr.io/open-webui/open-webui:main

volumes:
  open-webui:
    external: true
    name: open-webui
</code></pre>
<p>Now we can open our browser to access Open WebUI. In my case, the Mac mini is
reachable via <code>ricos-mac-mini.local</code>, so Open WebUI can be accessed via
<code>http://ricos-mac-mini.local:3000</code>. The first time Open WebUI is opened, we
have to create an admin user; afterwards, we can start using our large language
model through the web interface provided by Open WebUI.</p>
<p><a href="https://ricoberger.de/blog/posts/mac-mini-as-ai-server/assets/open-webui.png"><img src="https://ricoberger.de/blog/posts/mac-mini-as-ai-server/assets/open-webui.png" alt="Open WebUI"/></a></p>
<p>That&#39;s it! We now have our local AI server running, accessible from any device
on our home network. In the next blog post, we will explore how to access our
local AI server from anywhere, even when we are outside our home network.</p>
<div class="footnotes" role="doc-endnotes">
<hr/>
<ol>
<li id="fn:1">
<p><a href="https://github.com/ollama/ollama">Ollama GitHub Repository</a> <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2">
<p><a href="https://github.com/open-webui/open-webui">Open WebUI GitHub Repository</a> <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</div>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/mac-mini-as-ai-server/assets/ollama.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/mac-mini-as-ai-server/</guid><pubDate>Fri, 14 Mar 2025 09:00:00 +0000</pubDate></item><item><title>Mac mini as Home Server</title><link>https://ricoberger.de/blog/posts/mac-mini-as-home-server/</link><description><![CDATA[<p>Until now, my home server setup was powered by a Raspberry Pi, which had its
limits since I also used it for watching TV in my office. I decided to replace
it with a Mac mini, and I was fortunate to find the current M4 base model at a
low price. In the following post, I will discuss the basic setup to enable
remote access to the Mac mini and prepare it for the various server workloads we
want to run.</p>
<p><a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/mac-mini.jpg"><img src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/mac-mini.jpg" alt="Mac mini"/></a></p>
<h2>Why a Mac mini</h2>
<p>I chose the Mac mini M4 base model, which features 10 CPU cores, 10 GPU
cores, 16 GB of memory, and a 256 GB SSD. There are a few reasons why a Mac mini
makes a good server:</p>
<ul>
<li><strong>Energy Efficiency</strong> - The M series chip&#39;s power
consumption falls between that of a Raspberry Pi and a 15W x86 chip.</li>
<li><strong>Powerful Performance</strong> - The M series chip delivers powerful performance,
easily handling various server tasks. It is also well-suited for running
smaller AI tasks.</li>
<li><strong>Silence</strong> - Silence is important because our home server is located in our
office.</li>
<li><strong>Content Caching</strong> - With multiple Macs, iPads, iPhones, and an Apple TV in
the house, we have several devices that can utilize local caching.</li>
</ul>
<h2>Setup</h2>
<p>In the following, we will examine the settings needed to reduce the power
consumption of the Mac mini, enable remote access, and provide the basic tools
for initial hacking. We will use the following system settings:</p>
<ul>
<li>System Settings -&gt; Bluetooth -&gt; <strong>Disabled</strong></li>
<li>System Settings -&gt; Firewall -&gt; <strong>Enabled</strong></li>
<li>System Settings -&gt; Energy -&gt; Prevent automatic sleeping when the display is off
(<strong>Enabled</strong>) / Wake for network access (<strong>Enabled</strong>) / Start up automatically
after power failure (<strong>Enabled</strong>)</li>
<li>System Settings -&gt; Apple Intelligence &amp; Siri -&gt; <strong>Disabled</strong></li>
<li>System Settings -&gt; Spotlight -&gt; <strong>Disable all Options</strong></li>
<li>System Settings -&gt; Notifications -&gt; Show previews (<strong>Never</strong>) / Allow
notifications when the display is sleeping (<strong>Disabled</strong>) / Allow
notifications when the screen is locked (<strong>Disabled</strong>) / Allow notifications
when mirroring or sharing the display (<strong>Disabled</strong>)</li>
<li>System Settings -&gt; Sound -&gt; Play sound on startup (<strong>Disabled</strong>) / Play user
interface sound effects (<strong>Disabled</strong>) / Play feedback when volume is changed
(<strong>Disabled</strong>)</li>
<li>System Settings -&gt; Lock Screen -&gt; Start screen saver when inactive (<strong>Never</strong>)
/ Turn display off when inactive (<strong>10 Minutes</strong>) / Require password after
screen saver begins or display is turned off (<strong>Never</strong>) / Show large clock
(<strong>Never</strong>)</li>
<li>System Settings -&gt; Privacy &amp; Security -&gt; Location Services -&gt; <strong>Disabled</strong></li>
<li>System Settings -&gt; Login Password -&gt; Automatically log in after a restart -&gt;
<strong>Enabled</strong></li>
<li>System Settings -&gt; Users &amp; Groups -&gt; Automatically log in as user</li>
<li>System Settings -&gt; Game Center -&gt; <strong>Disabled</strong></li>
<li>System Settings -&gt; General -&gt; Sharing -&gt; Advanced -&gt; Remote Management
(<strong>Enabled</strong>) / Remote Login (<strong>Enabled</strong>) / Remote Application Scripting
(<strong>Enabled</strong>) / Local hostname (<strong>ricos-mac-mini.local</strong>)
<ul>
<li>Remote Management: Always show remote management status in the menu bar
(<strong>Enabled</strong>) / Anyone may request permission to control screen
(<strong>Enabled</strong>) / Allow access for (<strong>All users</strong>) / Options (<strong>Enable all</strong>)</li>
<li>Remote Login: Allow full disk access for remote users (<strong>Enabled</strong>) / Allow
access for (<strong>All users</strong>)</li>
<li>Remote Application Scripting: Allow access for (<strong>All users</strong>)</li>
</ul>
</li>
</ul>
<div class="grid grid-cols-2 md:grid-cols-4 gap-4">
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-1-about.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-1-about.png" alt="About"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-2-bluetooth.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-2-bluetooth.png" alt="Bluetooth"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-3-firewall.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-3-firewall.png" alt="Firewall"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-4-energy.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-4-energy.png" alt="Energy"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-5-apple-intelligence-and-siri.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-5-apple-intelligence-and-siri.png" alt="Apple Intelligence and Siri"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-6-spotlight.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-6-spotlight.png" alt="Spotlight"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-7-notifications.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-7-notifications.png" alt="Notifications"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-8-sound.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-8-sound.png" alt="Sound"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-9-lock-screen.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-9-lock-screen.png" alt="Lock Screen"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-10-privacy-and-security.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-10-privacy-and-security.png" alt="Privacy and Security"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-11-login-password.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-11-login-password.png" alt="Login Password"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-12-users-and-groups.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-12-users-and-groups.png" alt="Users and Groups"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-13-game-center.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-13-game-center.png" alt="Game Center"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-14-sharing-1.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-14-sharing-1.png" alt="Sharing"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-15-sharing-2.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-15-sharing-2.png" alt="Sharing"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-16-sharing-3.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-16-sharing-3.png" alt="Sharing"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-17-sharing-4.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-17-sharing-4.png" alt="Sharing"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-18-sharing-5.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-18-sharing-5.png" alt="Sharing"/>
    </a>
  </div>
  <div>
    <a href="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-19-screen-sharing.png">
      <img class="h-auto max-w-full" src="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/screenshot-19-screen-sharing.png" alt="Screen Sharing"/>
    </a>
  </div>
</div>
<p>At this point, we can log in to our Mac mini via the <strong>Screen Sharing</strong>
application and SSH. Before we continue, we will copy Ghostty&#39;s terminfo entry
and our SSH key to the Mac mini:</p>
<pre><code class="language-sh">infocmp -x | ssh ricos-mac-mini.local -- tic -x -

ssh-copy-id -i /Users/ricoberger/.ssh/id_rsa.pub ricoberger@ricos-mac-mini.local
scp /Users/ricoberger/.ssh/id_rsa ricoberger@ricos-mac-mini.local:/Users/ricoberger/.ssh/id_rsa
scp /Users/ricoberger/.ssh/id_rsa.pub ricoberger@ricos-mac-mini.local:/Users/ricoberger/.ssh/id_rsa.pub
</code></pre>
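<p>If no SSH key exists on the client yet, one has to be generated first. The
following sketch creates a throwaway ed25519 key pair in a temporary directory
to stay self-contained; in practice, the default location
<code>~/.ssh/id_ed25519</code> would be used and the public key passed to
<code>ssh-copy-id</code> as shown above:</p>
<pre><code class="language-sh"># Generate a new ed25519 key pair without a passphrase.
# KEYDIR is a temporary directory here; in practice you would write the
# key to ~/.ssh instead.
KEYDIR=&#34;$(mktemp -d)&#34;
ssh-keygen -q -t ed25519 -N &#34;&#34; -C &#34;ricos-mac-mini&#34; -f &#34;$KEYDIR/id_ed25519&#34;
ls &#34;$KEYDIR&#34;
</code></pre>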
<p>Now we can log in to the Mac mini (<code>ssh ricoberger@ricos-mac-mini.local</code>) and
adjust the SSH configuration to allow logins only via the copied SSH key. For
this, we add the following lines to the configuration file at
<code>/etc/ssh/sshd_config</code>:</p>
<pre><code class="language-plaintext">PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
</code></pre>
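<p>Before logging out, it is worth verifying that the hardened configuration is
actually in place, since a typo in <code>sshd_config</code> can lock us out of
the machine. The following sketch counts the missing directives; it runs
against a temporary sample file to stay self-contained, while on the Mac mini
<code>CONFIG</code> would point at <code>/etc/ssh/sshd_config</code>:</p>
<pre><code class="language-sh"># Sanity check: count how many of the hardening directives are missing.
# CONFIG points at a temporary sample file here; on the Mac mini it
# would be /etc/ssh/sshd_config.
CONFIG=&#34;$(mktemp)&#34;
printf &#39;%s\n&#39; &#39;PermitRootLogin no&#39; &#39;PasswordAuthentication no&#39; &#39;PermitEmptyPasswords no&#39; &#39;ChallengeResponseAuthentication no&#39; &gt; &#34;$CONFIG&#34;

MISSING=0
for opt in PermitRootLogin PasswordAuthentication PermitEmptyPasswords ChallengeResponseAuthentication; do
  grep -q &#34;^${opt} no&#34; &#34;$CONFIG&#34; || MISSING=$((MISSING + 1))
done
echo &#34;missing directives: ${MISSING}&#34;
</code></pre>
<p>On macOS, sshd is typically spawned per connection by launchd, so new SSH
sessions should pick up the changed configuration without a manual restart.</p>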
<p>To match the configuration on my MacBook Pro, we will install the Xcode Command
Line Tools, <a href="https://brew.sh/">Homebrew</a>, and our dotfiles from
<a href="https://github.com/ricoberger/dotfiles">ricoberger/dotfiles</a>.</p>
<pre><code class="language-sh">xcode-select --install
/bin/bash -c &#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&#34;

mkdir -p /Users/ricoberger/Documents/GitHub/ricoberger
cd /Users/ricoberger/Documents/GitHub/ricoberger
git clone git@github.com:ricoberger/dotfiles.git

brew bundle install --file=Brewfile

sudo sh -c &#34;echo $(which zsh) &gt;&gt; /etc/shells&#34;
chsh -s $(which zsh)

./install.sh &amp;&amp; source ~/.zshrc
</code></pre>
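<p>The <code>echo $(which zsh) &gt;&gt; /etc/shells</code> line above appends
unconditionally, so running the setup a second time would leave a duplicate
entry. A guarded variant only appends when the shell is not yet listed; the
sketch below demonstrates the idempotent pattern against a temporary file, and
the path <code>/opt/homebrew/bin/zsh</code> is an assumption for the Apple
Silicon Homebrew prefix:</p>
<pre><code class="language-sh"># Idempotent append: add the shell path only if it is not already listed.
# SHELLS is a temporary sample file here; on the real machine it would be
# /etc/shells, and /opt/homebrew/bin/zsh is the assumed Homebrew zsh path.
SHELLS=&#34;$(mktemp)&#34;
printf &#39;/bin/zsh\n&#39; &gt; &#34;$SHELLS&#34;
ZSH_PATH=&#34;/opt/homebrew/bin/zsh&#34;

grep -qx &#34;$ZSH_PATH&#34; &#34;$SHELLS&#34; || echo &#34;$ZSH_PATH&#34; &gt;&gt; &#34;$SHELLS&#34;
grep -qx &#34;$ZSH_PATH&#34; &#34;$SHELLS&#34; || echo &#34;$ZSH_PATH&#34; &gt;&gt; &#34;$SHELLS&#34;

# The second guarded run is a no-op, so the file still has two entries.
cat &#34;$SHELLS&#34;
</code></pre>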
<p>We installed <a href="https://github.com/abiosoft/colima">Colima</a> as our container
runtime through the Brewfile because we want to deploy most of our services
using Docker. To autostart Colima and our services, use
<code>brew services start colima</code>. We can also increase the resources of the VM
created by Colima by adjusting the following values in the Colima configuration
file at <code>~/.colima/default/colima.yaml</code>:</p>
<pre><code class="language-plaintext"># Number of CPUs to be allocated to the virtual machine.
cpu: 8

# Size of the disk in GiB to be allocated to the virtual machine.
disk: 100

# Size of the memory in GiB to be allocated to the virtual machine.
memory: 12
</code></pre>
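<p>These values can also be changed non-interactively, which is handy when the
whole setup is scripted. A small sketch using <code>sed</code>, shown against a
temporary sample file so it is self-contained; on the Mac mini the real file
lives at <code>~/.colima/default/colima.yaml</code>, and Colima has to be
restarted for the new resources to take effect:</p>
<pre><code class="language-sh"># Bump the Colima VM resources in place with sed.
# CFG is a temporary sample here; the real file lives at
# ~/.colima/default/colima.yaml.
CFG=&#34;$(mktemp)&#34;
printf &#39;cpu: 2\ndisk: 60\nmemory: 2\n&#39; &gt; &#34;$CFG&#34;

sed -i.bak \
  -e &#39;s/^cpu: .*/cpu: 8/&#39; \
  -e &#39;s/^disk: .*/disk: 100/&#39; \
  -e &#39;s/^memory: .*/memory: 12/&#39; &#34;$CFG&#34;

cat &#34;$CFG&#34;
</code></pre>
<p>The <code>-i.bak</code> flag works with both the BSD sed shipped with macOS
and GNU sed, leaving a <code>.bak</code> backup of the original file.</p>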
<p>That&#39;s it! Now we are ready to experiment with our new Mac mini home server.</p>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/mac-mini-as-home-server/assets/mac-mini.jpg" type="image/jpeg"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/mac-mini-as-home-server/</guid><pubDate>Sun, 09 Mar 2025 12:00:00 +0000</pubDate></item><item><title>Convert HTML to PDF / PNG with Puppeteer</title><link>https://ricoberger.de/blog/posts/convert-html-to-pdf-png-with-puppeteer/</link><description><![CDATA[<p>Today, I automated one of the final annoying tasks related to my new website. I
wanted to provide downloadable versions of the cheat sheets in PNG or PDF
format. Until now, I had to create the PNGs manually using the FireShot Safari
extension, which was quite frustrating. This process is now automated with
<a href="https://pptr.dev/">Puppeteer</a>. In the following sections, we will explore how
Puppeteer can be used to generate PDFs and PNGs from HTML sites.</p>
<p><a href="https://ricoberger.de/blog/posts/convert-html-to-pdf-png-with-puppeteer/assets/cheat-sheets-puppeteer.png"><img src="https://ricoberger.de/blog/posts/convert-html-to-pdf-png-with-puppeteer/assets/cheat-sheets-puppeteer.png" alt="Cheat Sheets and Puppeteer"/></a></p>
<h2>Create a New Node.js Project</h2>
<p>Create a new folder for your project, navigate to the directory, and initialize
a new Node.js project:</p>
<pre><code class="language-sh">mkdir html-to-pdf-png
cd html-to-pdf-png
npm init
</code></pre>
<h2>Install Puppeteer</h2>
<p>Install Puppeteer as a dependency:</p>
<pre><code class="language-sh">npm install puppeteer --save
</code></pre>
<p>This will create a <code>node_modules</code> directory in your project folder and add
Puppeteer as a dependency to your <code>package.json</code> file.</p>
<h2>Create a New File</h2>
<p>In the same project, create an <code>index.js</code> file. This is where we will write our
code to convert HTML into PDF and PNG.</p>
<pre><code class="language-sh">touch index.js
</code></pre>
<h2>Create a Browser Instance and a New Page</h2>
<p>Inside the <code>index.js</code> file, we have to import <code>puppeteer</code>
first. Afterwards, we can create a new browser instance and a new page. Note
that the <code>await</code> calls below must run inside an async function, as
shown in the complete code at the end of this post:</p>
<pre><code class="language-js">const puppeteer = require(&#34;puppeteer&#34;);

const browser = await puppeteer.launch({
  headless: true,
  args: [&#34;--no-sandbox&#34;, &#34;--disable-setuid-sandbox&#34;],
});

const page = await browser.newPage();
</code></pre>
<p>The <code>headless</code> and <code>args</code> options are important for using it within a GitHub
Action; otherwise, we will encounter the following error:</p>
<pre><code class="language-plaintext">Error: Failed to launch the browser process!
[2178:2178:0304/190806.915940:FATAL:zygote_host_impl_linux.cc(127)] No usable sandbox! If you are running on Ubuntu 23.10+ or another Linux distro that has disabled unprivileged user namespaces with AppArmor, see https://chromium.googlesource.com/chromium/src/+/main/docs/security/apparmor-userns-restrictions.md. Otherwise see https://chromium.googlesource.com/chromium/src/+/main/docs/linux/suid_sandbox_development.md for more information on developing with the (older) SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.
[0304/190806.925145:ERROR:file_io_posix.cc(145)] open /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq: No such file or directory (2)
[0304/190806.925188:ERROR:file_io_posix.cc(145)] open /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq: No such file or directory (2)


TROUBLESHOOTING: https://pptr.dev/troubleshooting

    at ChildProcess.onClose (/home/runner/work/ricoberger/ricoberger/node_modules/@puppeteer/browsers/lib/cjs/launch.js:318:24)
    at ChildProcess.emit (node:events:530:35)
    at ChildProcess._handle.onexit (node:internal/child_process:293:12)
</code></pre>
<h2>Resize the Page and Navigate to a URL</h2>
<p>In the next step, we will set the page size for our PDFs and PNGs, and then
navigate to the URL:</p>
<pre><code class="language-js">await page.setViewport({ width: 1920, height: 1080, deviceScaleFactor: 1 });

await page.goto(&#34;https://ricoberger.de/cheat-sheets/gh/&#34;, {
  waitUntil: &#34;networkidle0&#34;,
});
</code></pre>
<p>The value of <code>waitUntil</code> determines whether the navigation is considered
successful. The default value is <code>load</code>, which means navigation is deemed
complete when the <code>load</code> event fires. However, by using the value
<code>networkidle0</code>, we wait until there have been no network connections
for at least 500 ms.</p>
<h2>Configure the Output</h2>
<p>By default, <code>page.pdf()</code> generates a PDF of the page using print CSS media. To
create a PDF that resembles what we see on the screen, we will use the screen
media. Add <code>page.emulateMediaType(&#39;screen&#39;)</code> before downloading the PDF:</p>
<pre><code class="language-js">await page.evaluate((sel) =&gt; {
  var elements = document.querySelectorAll(sel);
  for (var i = 0; i &lt; elements.length; i++) {
    elements[i].parentNode.removeChild(elements[i]);
  }
}, &#34;#header&#34;);

await page.emulateMediaType(&#34;screen&#34;);

const pageHeight = await page.evaluate(
  () =&gt; document.documentElement.offsetHeight,
);
</code></pre>
<p>We remove the page header with the ID <code>#header</code> because it is unnecessary in the
downloadable version of the cheat sheet. Additionally, we need the page height
to use it as the height of the generated PDF page.</p>
<h2>Download the PDF</h2>
<p>Next, call <a href="https://pptr.dev/api/puppeteer.page.pdf"><code>page.pdf()</code></a> to download
the PDF with the following options passed to the method:</p>
<pre><code class="language-js">await page.pdf({
  path: &#34;cheat-sheet.pdf&#34;,
  printBackground: true,
  width: &#34;1920px&#34;,
  height: pageHeight + &#34;px&#34;,
});
</code></pre>
<ul>
<li><code>path</code>: The file path where the PDF will be saved. If we do not
specify a path, the file will not be saved to disk, and we will receive a
buffer instead.</li>
<li><code>printBackground</code>: This parameter controls whether the background graphics of
the web page are printed. The default value is <code>false</code>. You may want to set
this to <code>true</code>, as some images will be missing in the PDF if it remains
<code>false</code>.</li>
<li><code>width</code>: Sets the paper width. You can pass in a number or a
string with a unit.</li>
<li><code>height</code>: Sets the paper height. You can pass in a number or a
string with a unit.</li>
</ul>
<h2>Download the PNG</h2>
<p>Next, we call
<a href="https://pptr.dev/api/puppeteer.page.screenshot"><code>page.screenshot()</code></a> to
download the PNG with the following options passed to the method:</p>
<pre><code class="language-js">await page.screenshot({
  path: &#34;cheat-sheet.png&#34;,
  fullPage: true,
  type: &#34;png&#34;,
});
</code></pre>
<ul>
<li><code>path</code>: The file path to save the image to. The screenshot type will be
inferred from the file extension. If the path is relative, it is resolved
relative to the current working directory. If no path is provided, the image won&#39;t
be saved to the disk.</li>
<li><code>fullPage</code>: When <code>true</code>, takes a screenshot of the full page.</li>
</ul>
<h2>Close the Browser</h2>
<p>Finally, close the browser instance after downloading the PDF and PNG:</p>
<pre><code class="language-js">await browser.close();
</code></pre>
<p>Our final code appears as follows:</p>
<pre><code class="language-js">const puppeteer = require(&#34;puppeteer&#34;);

(async () =&gt; {
  const browser = await puppeteer.launch({
    headless: true,
    args: [&#34;--no-sandbox&#34;, &#34;--disable-setuid-sandbox&#34;],
  });

  const page = await browser.newPage();

  await page.setViewport({ width: 1920, height: 1080, deviceScaleFactor: 1 });

  await page.goto(&#34;https://ricoberger.de/cheat-sheets/gh/&#34;, {
    waitUntil: &#34;networkidle0&#34;,
  });

  await page.evaluate((sel) =&gt; {
    var elements = document.querySelectorAll(sel);
    for (var i = 0; i &lt; elements.length; i++) {
      elements[i].parentNode.removeChild(elements[i]);
    }
  }, &#34;#header&#34;);

  await page.emulateMediaType(&#34;screen&#34;);

  const pageHeight = await page.evaluate(
    () =&gt; document.documentElement.offsetHeight,
  );

  await page.pdf({
    path: &#34;cheat-sheet.pdf&#34;,
    printBackground: true,
    width: &#34;1920px&#34;,
    height: pageHeight + &#34;px&#34;,
  });

  await page.screenshot({
    path: &#34;cheat-sheet.png&#34;,
    fullPage: true,
    type: &#34;png&#34;,
  });

  await browser.close();
})();
</code></pre>
<p>The adjusted code for our cheat sheets is available in the
<a href="https://github.com/ricoberger/ricoberger/blob/58e0e4f8dc04d989a7d32336543080835075ef16/templates/utils/build-cheat-sheets-assets.js"><code>build-cheat-sheets-assets.js</code></a>
file. You can find the usage within the GitHub Action in the
<a href="https://github.com/ricoberger/ricoberger/blob/58e0e4f8dc04d989a7d32336543080835075ef16/.github/workflows/deploy.yml#L51"><code>deploy.yml</code></a>
file.</p>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/convert-html-to-pdf-png-with-puppeteer/assets/cheat-sheets-puppeteer.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/convert-html-to-pdf-png-with-puppeteer/</guid><pubDate>Tue, 04 Mar 2025 20:00:00 +0000</pubDate></item><item><title>Neovim: Custom snacks.nvim Picker</title><link>https://ricoberger.de/blog/posts/neovim-custom-snacks-nvim-picker/</link><description><![CDATA[<p>Welcome to another blog post about
<a href="https://ricoberger.de/blog/posts/my-dotfiles/">my dotfiles</a>. Today, I will
briefly discuss how to implement a custom picker using
<a href="https://github.com/folke/snacks.nvim">snacks.nvim</a>. Inspired by a Reddit post,
I wanted to create my own command palette, similar to the one in Visual Studio
Code. Below, I will share the result.</p>
<p><a href="https://ricoberger.de/blog/posts/neovim-custom-snacks-nvim-picker/assets/command-palette.png"><img src="https://ricoberger.de/blog/posts/neovim-custom-snacks-nvim-picker/assets/command-palette.png" alt="Command Palette"/></a></p>
<p>One feature I have always liked about Visual Studio Code is the command palette,
which allows you to quickly change a file&#39;s type, run code formatting, and more.
Inspired by a
<a href="https://www.reddit.com/r/neovim/comments/1ircbgt/handy_toolbox_using_snacks_custom_picker/">Reddit post</a>,
I wanted to create my own command palette in Neovim, which allows me to run
commands without creating a keymap, as I don&#39;t use them frequently enough.</p>
<p>The commands for the command palette are defined in a table, where each entry
has a name and an action. The action can be a command (a string starting with
<code>:</code>), a keymap (a string not starting with <code>:</code>), or a
function, similar to the actions in the
<a href="https://github.com/folke/snacks.nvim/blob/main/docs/dashboard.md#section-actions">snacks.nvim dashboard</a>.</p>
<p>The commands are passed as <code>items</code> to the <code>Snacks.picker</code>. When a command is
selected, the <code>confirm</code> function is executed, which runs the corresponding
action in a similar way to how it is done in the
<a href="https://github.com/folke/snacks.nvim/blob/acedb16ad76ba0b5d4761372ca71057aa9486adb/lua/snacks/dashboard.lua#L292">snacks.nvim dashboard</a>.</p>
<p>The command palette is exported as a module and registered in the keys section
of the snacks.nvim plugin, so that the command palette can be opened via
<code>leader</code> + <code>p</code>.</p>
<pre><code class="language-lua">return {
  {
    &#34;folke/snacks.nvim&#34;,
    priority = 1000,
    lazy = false,
    keys = {
      {
        &#34;&lt;leader&gt;p&#34;,
        function()
          require(&#34;command-palette&#34;).show_commands()
        end,
        desc = &#34;Command Palette&#34;,
      },
    },
  },
}
</code></pre>
<pre><code class="language-lua">local M = {}

M.commands = {
  {
    name = &#34;Copilot: Actions&#34;,
    action = &#34;&lt;leader&gt;ca&#34;,
  },
  {
    name = &#34;Dim: Toggle&#34;,
    action = function()
      local snacks_dim = require(&#34;snacks&#34;).dim
      if snacks_dim.enabled then
        snacks_dim.disable()
      else
        snacks_dim.enable()
      end
    end,
  },
  {
    name = &#34;Tab: Close&#34;,
    action = &#34;:tabclose&#34;,
  },
  {
    name = &#34;Tab: New&#34;,
    action = &#34;:tabnew&#34;,
  },
  {
    name = &#34;Todo Comments: Quickfix List&#34;,
    action = &#34;:TodoQuickFix&#34;,
  },
  {
    name = &#34;Todo Comments: Location List&#34;,
    action = &#34;:TodoLocList&#34;,
  },
}

function M.show_commands()
  local items = {}

  for idx, command in ipairs(M.commands) do
    local item = {
      idx = idx,
      name = command.name,
      text = command.name,
      action = command.action,
    }
    table.insert(items, item)
  end

  Snacks.picker({
    title = &#34;Command Palette&#34;,
    layout = {
      preset = &#34;default&#34;,
      preview = false,
    },
    items = items,
    format = function(item, _)
      return {
        { item.text, item.text_hl },
      }
    end,
    confirm = function(picker, item)
      if type(item.action) == &#34;string&#34; then
        if item.action:find(&#34;^:&#34;) then
          picker:close()
          return picker:norm(function()
            picker:close()
            vim.cmd(item.action:sub(2))
          end)
        else
          return picker:norm(function()
            picker:close()
            local keys = vim.api.nvim_replace_termcodes(item.action, true, true, true)
            vim.api.nvim_input(keys)
          end)
        end
      end

      return picker:norm(function()
        picker:close()
        item.action()
      end)
    end,
  })
end

return M
</code></pre>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/neovim-custom-snacks-nvim-picker/assets/command-palette.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/neovim-custom-snacks-nvim-picker/</guid><pubDate>Mon, 03 Mar 2025 19:00:00 +0000</pubDate></item><item><title>Neovim: Extend snacks.nvim Explorer</title><link>https://ricoberger.de/blog/posts/neovim-extend-snacks-nvim-explorer/</link><description><![CDATA[<p>In today&#39;s blog post, I want to take a quick look at the
<a href="https://github.com/folke/snacks.nvim">snacks.nvim</a> explorer and how I extended
it with some useful actions, so that I can search within a directory, diff
selected files, and provide multiple copy options.</p>
<p><a href="https://ricoberger.de/blog/posts/neovim-extend-snacks-nvim-explorer/assets/explorer.png"><img src="https://ricoberger.de/blog/posts/neovim-extend-snacks-nvim-explorer/assets/explorer.png" alt="Explorer"/></a></p>
<p>The first action <code>copy_file_path</code> is used to provide multiple copy options for
the file under the cursor. The function returns a selection menu from which I
can select whether I want to copy the basename, extension, filename, path, the
relative path from the working directory or home directory, or the file URI.</p>
<p>The second action <code>search_in_directory</code> is used to search within the directory
under the cursor. It opens a new snacks.nvim picker where the working directory
is set to the selected directory from the explorer and allows me to search all
files for a specific term.</p>
<p>The third action <code>diff</code> is used to compare two selected files. In the explorer,
two files can be selected with <code>Tab</code> or <code>Shift</code> + <code>Tab</code>, and afterwards, a diff
between these two files is opened in a new tab.</p>
<p>The code for the mentioned actions can be found in the following code snippet.
If you want to know more about my Neovim configuration, you can have a look at
my <a href="https://github.com/ricoberger/dotfiles">dotfiles repository</a> or
<a href="https://ricoberger.de/blog/posts/my-dotfiles/">my last blog post</a>.
If you have some useful additions to the snacks.nvim explorer, please let me
know.</p>
<pre><code class="language-lua">return {
  {
    &#34;folke/snacks.nvim&#34;,
    priority = 1000,
    lazy = false,
    opts = {
      picker = {
        enabled = true,
        sources = {
          explorer = {
            auto_close = true,
            hidden = true,
            layout = {
              preset = &#34;default&#34;,
              preview = false,
            },
            actions = {
              copy_file_path = {
                action = function(_, item)
                  if not item then
                    return
                  end

                  local vals = {
                    [&#34;BASENAME&#34;] = vim.fn.fnamemodify(item.file, &#34;:t:r&#34;),
                    [&#34;EXTENSION&#34;] = vim.fn.fnamemodify(item.file, &#34;:t:e&#34;),
                    [&#34;FILENAME&#34;] = vim.fn.fnamemodify(item.file, &#34;:t&#34;),
                    [&#34;PATH&#34;] = item.file,
                    [&#34;PATH (CWD)&#34;] = vim.fn.fnamemodify(item.file, &#34;:.&#34;),
                    [&#34;PATH (HOME)&#34;] = vim.fn.fnamemodify(item.file, &#34;:~&#34;),
                    [&#34;URI&#34;] = vim.uri_from_fname(item.file),
                  }

                  local options = vim.tbl_filter(function(val)
                    return vals[val] ~= &#34;&#34;
                  end, vim.tbl_keys(vals))
                  if vim.tbl_isempty(options) then
                    vim.notify(&#34;No values to copy&#34;, vim.log.levels.WARN)
                    return
                  end
                  table.sort(options)
                  vim.ui.select(options, {
                    prompt = &#34;Choose to copy to clipboard:&#34;,
                    format_item = function(list_item)
                      return (&#34;%s: %s&#34;):format(list_item, vals[list_item])
                    end,
                  }, function(choice)
                    local result = vals[choice]
                    if result then
                      vim.fn.setreg(&#34;+&#34;, result)
                      Snacks.notify.info(&#34;Yanked `&#34; .. result .. &#34;`&#34;)
                    end
                  end)
                end,
              },
              search_in_directory = {
                action = function(_, item)
                  if not item then
                    return
                  end
                  local dir = vim.fn.fnamemodify(item.file, &#34;:p:h&#34;)
                  Snacks.picker.grep({
                    cwd = dir,
                    cmd = &#34;rg&#34;,
                    args = {
                      &#34;-g&#34;,
                      &#34;!.git&#34;,
                      &#34;-g&#34;,
                      &#34;!node_modules&#34;,
                      &#34;-g&#34;,
                      &#34;!dist&#34;,
                      &#34;-g&#34;,
                      &#34;!build&#34;,
                      &#34;-g&#34;,
                      &#34;!coverage&#34;,
                      &#34;-g&#34;,
                      &#34;!.DS_Store&#34;,
                      &#34;-g&#34;,
                      &#34;!.docusaurus&#34;,
                      &#34;-g&#34;,
                      &#34;!.dart_tool&#34;,
                    },
                    show_empty = true,
                    hidden = true,
                    ignored = true,
                    follow = false,
                    supports_live = true,
                  })
                end,
              },
              diff = {
                action = function(picker)
                  picker:close()
                  local sel = picker:selected()
                  if sel and #sel == 2 then
                    Snacks.notify.info(sel[1].file)
                    vim.cmd(&#34;tabnew &#34; .. sel[1].file)
                    vim.cmd(&#34;vert diffs &#34; .. sel[2].file)
                    Snacks.notify.info(&#34;Diffing &#34; .. sel[1].file .. &#34; against &#34; .. sel[2].file)
                    return
                  end

                  Snacks.notify.info(&#34;Select two entries for the diff&#34;)
                end,
              },
            },
            win = {
              list = {
                keys = {
                  [&#34;y&#34;] = &#34;copy_file_path&#34;,
                  [&#34;s&#34;] = &#34;search_in_directory&#34;,
                  [&#34;D&#34;] = &#34;diff&#34;,
                },
              },
            },
          },
        },
      },
    },
  },
}
</code></pre>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/neovim-extend-snacks-nvim-explorer/assets/explorer.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/neovim-extend-snacks-nvim-explorer/</guid><pubDate>Sun, 02 Mar 2025 19:00:00 +0000</pubDate></item><item><title>My Dotfiles</title><link>https://ricoberger.de/blog/posts/my-dotfiles/</link><description><![CDATA[<p>In this blog post, I&#39;ll take you through my complete
<a href="https://github.com/ricoberger/dotfiles">dotfiles</a>, detailing the tools and
configurations that empower my development process. From my terminal choice to
the editor I rely on, and even the GitHub CLI commands that simplify my
workflow, I&#39;ll share insights and tips that might just inspire you to optimize
your own environment. Whether you&#39;re a seasoned developer or just starting,
there&#39;s something here for everyone looking to enhance their macOS experience.
Let&#39;s dive in!</p>
<p><a href="https://ricoberger.de/blog/posts/my-dotfiles/assets/terminal.png"><img src="https://ricoberger.de/blog/posts/my-dotfiles/assets/terminal.png" alt="Terminal"/></a></p>
<h2>OS Setup</h2>
<p>I&#39;m using macOS as my daily development environment, and together with
<a href="https://www.raycast.com/">Raycast</a>, I couldn&#39;t be happier with it. Raycast
allows me to quickly create notes and reminders, view my calendar entries,
GitHub pull requests and issues, and Jira tickets. I&#39;m also using it as a
replacement for the built-in Spotlight search and for managing spaces<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> and
windows.</p>
<h2>Terminal</h2>
<p>By the end of last year, I switched from <a href="https://alacritty.org/">Alacritty</a> to
<a href="https://ghostty.org/">Ghostty</a> as my go-to terminal emulator (yes, the hype
caught me 😅). Ghostty provides most of the features out of the box that I used
tmux for in the past, such as multiple windows, tabs, and panes. It is super
fast, and the available configuration options are on point (not too much, not
too little). The only thing I would wish for is an API so that windows, tabs,
and panes can be created programmatically, which is currently only possible via
AppleScript. In my Ghostty configuration<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>, I set my preferred
<a href="https://catppuccin.com">color scheme</a>,
<a href="https://github.com/microsoft/cascadia-code">font</a>, and some key bindings.</p>
<p>As my shell<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>, I&#39;m using <a href="https://www.zsh.org/">Zsh</a> with
<a href="https://github.com/zdharma-continuum/zinit">Zinit</a> to manage the following
plugins:</p>
<ul>
<li><a href="https://github.com/zsh-users/zsh-completions">zsh-users/zsh-completions</a>:
Additional completion definitions for Zsh</li>
<li><a href="https://github.com/zsh-users/zsh-autosuggestions">zsh-users/zsh-autosuggestions</a>:
Fish-like fast/unobtrusive autosuggestions for Zsh</li>
<li><a href="https://github.com/Aloxaf/fzf-tab">Aloxaf/fzf-tab</a>: Replace Zsh&#39;s default
completion selection menu with fzf</li>
</ul>
<p>For the customization of my prompt<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, I&#39;m using
<a href="https://starship.rs/">Starship</a> to show the current OS and user, the directory
I&#39;m working in, the Git branch and status, the exit status of the last command,
the Kubernetes context and namespace, and the current time.</p>
<p>Other important tools I&#39;m using are:</p>
<ul>
<li><a href="http://tmux.github.io/">tmux</a><sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>: tmux is a terminal multiplexer, and while
I replaced most of its functionality with Ghostty, I&#39;m still using it on
servers where the benefit of persistent sessions is unmatched.</li>
<li><a href="https://github.com/junegunn/fzf">fzf</a>: fzf is a command-line fuzzy finder.</li>
<li><a href="https://github.com/BurntSushi/ripgrep">ripgrep</a>: ripgrep is a line-oriented
search tool that recursively searches the current directory for a regex
pattern.</li>
<li><a href="https://yazi-rs.github.io/">Yazi</a><sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>: Yazi is a blazing fast terminal file
manager written in Rust, based on async I/O. It is fairly new in my workflow,
but I want to try it out more to stay within the terminal when doing file
operations instead of running <code>open .</code> and doing these operations via the
Finder.</li>
</ul>
<h2>Editor</h2>
<p><a href="https://neovim.io/">Neovim</a> has become my daily editor of choice due to its
powerful features and flexibility. As an extensible text editor based on Vim, it
allows me to tailor it to my specific workflow, whether I&#39;m coding, writing, or
debugging issues. Spending most of my time within the terminal, it is also
faster and feels more natural to stay within the terminal when editing files
instead of opening Visual Studio Code.</p>
<p><a href="https://ricoberger.de/blog/posts/my-dotfiles/assets/neovim.png"><img src="https://ricoberger.de/blog/posts/my-dotfiles/assets/neovim.png" alt="Neovim Dashboard"/></a></p>
<p>Currently, I&#39;m using the following plugins for tasks such as fuzzy searching
files, Git integrations, code completion, formatting, linting, and some AI
features:</p>
<ul>
<li><a href="https://github.com/folke/lazy.nvim">lazy.nvim</a>: A modern plugin manager for
Neovim</li>
<li><a href="https://github.com/catppuccin/nvim">catppuccin</a>: Soothing pastel theme for
Neovim</li>
<li><a href="https://github.com/nvim-lualine/lualine.nvim">lualine.nvim</a>: A blazing fast
and easy to configure Neovim statusline plugin written in pure lua</li>
<li><a href="https://github.com/folke/snacks.nvim">snacks.nvim</a>: A collection of QoL
plugins for Neovim</li>
<li><a href="https://github.com/lewis6991/gitsigns.nvim">gitsigns.nvim</a>: Git integration
for buffers</li>
<li><a href="https://github.com/sindrets/diffview.nvim">diffview.nvim</a>: Single tabpage
interface for easily cycling through diffs for all modified files for any git
rev</li>
<li><a href="https://github.com/nvim-treesitter/nvim-treesitter">nvim-treesitter</a>: Nvim
Treesitter configurations and abstraction layer</li>
<li><a href="https://github.com/nvim-treesitter/nvim-treesitter-context">nvim-treesitter-context</a>:
Show code context</li>
<li><a href="https://github.com/nvim-treesitter/nvim-treesitter-textobjects">nvim-treesitter-textobjects</a>:
Syntax aware text-objects, select, move, swap, and peek support</li>
<li><a href="https://github.com/neovim/nvim-lspconfig">nvim-lspconfig</a>: Quickstart configs
for Nvim LSP</li>
<li><a href="https://github.com/towolf/vim-helm">vim-helm</a>: Vim syntax for helm templates
(yaml + gotmpl + sprig + custom)</li>
<li><a href="https://github.com/stevearc/conform.nvim">conform.nvim</a>: Lightweight yet
powerful formatter plugin for Neovim</li>
<li><a href="https://github.com/mfussenegger/nvim-lint">nvim-lint</a>: An asynchronous linter
plugin for Neovim complementary to the built-in Language Server Protocol
support</li>
<li><a href="https://github.com/Saghen/blink.cmp">blink.cmp</a>: Performant,
batteries-included completion plugin for Neovim</li>
<li><a href="https://github.com/fang2hou/blink-copilot">blink-copilot</a>: Configurable
GitHub Copilot blink.cmp source for Neovim</li>
<li><a href="https://github.com/rafamadriz/friendly-snippets">friendly-snippets</a>: Set of
preconfigured snippets for different languages</li>
<li><a href="https://github.com/jake-stewart/multicursor.nvim">multicursor.nvim</a>: Multiple
cursors in Neovim</li>
<li><a href="https://github.com/folke/todo-comments.nvim">todo-comments.nvim</a>: Highlight,
list and search todo comments in your projects.</li>
<li><a href="https://github.com/zbirenbaum/copilot.lua">copilot.lua</a>: Fully featured &amp;
enhanced replacement for copilot.vim complete with API for interacting with
Github Copilot</li>
<li><a href="https://github.com/CopilotC-Nvim/CopilotChat.nvim">CopilotChat.nvim</a>: Chat
with GitHub Copilot in Neovim</li>
</ul>
<p>While all the plugins mentioned above are awesome and fulfill a specific need, I
want to give a special shoutout to
<a href="https://github.com/jake-stewart/multicursor.nvim">multicursor.nvim</a>, which was
the last missing plugin for me to fully abandon Visual Studio Code.</p>
<h2>GitHub CLI</h2>
<p>Last but not least, I also want to take a look at my workflow with GitHub
because it is an important part of my daily work, and I&#39;m a bit proud of it. I&#39;m
a heavy user of the <a href="https://cli.github.com/">GitHub CLI</a> (<code>gh</code>) and the
<a href="https://github.com/dlvhdr/gh-dash">gh-dash</a> extension, because it allows
me to do most of my work without leaving the terminal.</p>
<p>I use the GitHub CLI to create pull requests via the <code>gh pr create</code> command, to
add / remove labels, to view the workflow runs for a pull request and to merge
them. All with the help of some nice helper functions which can be found in the
<a href="https://github.com/ricoberger/dotfiles/tree/main/.bin"><code>.bin</code></a> directory in my
dotfiles.</p>
<pre><code class="language-sh"># Add a label to the pull request, by selecting the label via fzf from a list of
# all available labels in the repository
gh pr edit $1 --add-label &#34;$(gh label list --json name --jq &#34;.[].name&#34; | fzf)&#34;

# Delete a label from the pull request, by selecting the label via fzf from a
# list of all labels on the pull request
gh pr edit $1 --remove-label &#34;$(gh pr view $1 --json labels --jq &#34;.labels[].name&#34; | fzf)&#34;

# View the logs of a workflow run, by selecting the workflow run via fzf
branch=$(git rev-parse --abbrev-ref HEAD)
workflow=$(gh run list --branch $branch --json databaseId,workflowName,createdAt,status --template &#39;{{range .}}{{printf &#34;%.0f&#34; .databaseId}}{{&#34;\t&#34;}}{{.status}}{{&#34;\t&#34;}}{{.createdAt}}{{&#34;\t&#34;}}{{.workflowName}}{{&#34;\n&#34;}}{{end}}&#39; | fzf)
gh run view &#34;$(echo $workflow | awk &#39;{print $1}&#39;)&#34; --log

# Squash the commits into one commit and merge it into the base branch, also
# delete the local and remote branch after merge
gh pr merge $1 --squash --delete-branch --admin
</code></pre>
<p>These commands are also integrated into my gh-dash configuration<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>. gh-dash is
an extension for the GitHub CLI to display a dashboard with pull requests and
issues. It integrates beautifully with tmux and Neovim:</p>
<ul>
<li>Open the diff of a pull request in Neovim
<a href="https://github.com/sindrets/diffview.nvim">diffview.nvim</a> via
<code>cd {{.RepoPath}} &amp;&amp; gh pr checkout {{.PrNumber}} &amp;&amp; nvim -c &#34;:DiffviewOpen origin/HEAD...HEAD --imply-local&#34;</code>
or with tmux via
<code>tmux new-window -n &#34;{{.RepoName}}/{{.PrNumber}}&#34; -c {{.RepoPath}} &#39;gh pr checkout {{.PrNumber}} &amp;&amp; nvim -c &#34;:DiffviewOpen origin/HEAD...HEAD --imply-local&#34;&#39;</code></li>
<li>Open a pull request in Neovim
<a href="https://github.com/pwntester/octo.nvim">octo.nvim</a> via
<code>cd {{.RepoPath}} &amp;&amp; nvim -c &#34;:Octo pr edit {{.PrNumber}}&#34;</code> or with tmux via
<code>tmux new-window -n &#34;{{.RepoName}}/{{.PrNumber}}&#34; -c {{.RepoPath}} &#39;nvim -c &#34;:Octo pr edit {{.PrNumber}}&#34;&#39;</code></li>
<li>Open an issue in Neovim <a href="https://github.com/pwntester/octo.nvim">octo.nvim</a>
via <code>cd {{.RepoPath}} &amp;&amp; nvim -c &#34;:Octo issue edit {{.IssueNumber}}&#34;</code> or with
tmux via
<code>tmux new-window -n &#34;{{.RepoName}}/{{.IssueNumber}}&#34; -c {{.RepoPath}} &#39;nvim -c &#34;:Octo issue edit {{.IssueNumber}}&#34;&#39;</code></li>
</ul>
<p><a href="https://ricoberger.de/blog/posts/my-dotfiles/assets/gh-dash-neovim-diffview.png"><img src="https://ricoberger.de/blog/posts/my-dotfiles/assets/gh-dash-neovim-diffview.png" alt="gh-dash and Neovim Diffview"/></a></p>
<div class="footnotes" role="doc-endnotes">
<hr/>
<ol>
<li id="fn:1">
<p><a href="https://github.com/ricoberger/dotfiles/blob/main/.bin/raycast/create-new-space.applescript">Create new Spaces via AppleScript</a> <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2">
<p><a href="https://github.com/ricoberger/dotfiles/blob/main/.config/ghostty/config">Ghostty Configuration</a> <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:3">
<p><a href="https://github.com/ricoberger/dotfiles/blob/main/.zshrc">Zsh Configuration</a> <a href="#fnref:3" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:4">
<p><a href="https://github.com/ricoberger/dotfiles/blob/main/.config/starship.toml">Starship Configuration</a> <a href="#fnref:4" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:5">
<p><a href="https://github.com/ricoberger/dotfiles/blob/main/.tmux.conf">tmux Configuration</a> <a href="#fnref:5" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:6">
<p><a href="https://github.com/ricoberger/dotfiles/tree/main/.config/yazi">Yazi Configuration</a> <a href="#fnref:6" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:7">
<p><a href="https://github.com/ricoberger/dotfiles/blob/main/.config/gh-dash/config.yml">gh-dash Configuration</a> <a href="#fnref:7" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</div>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/my-dotfiles/assets/terminal.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/my-dotfiles/</guid><pubDate>Sat, 01 Mar 2025 19:00:00 +0000</pubDate></item><item><title>Welcome to My New Website</title><link>https://ricoberger.de/blog/posts/welcome-to-my-wew-website/</link><description><![CDATA[<p>I spent the past few days creating a new website for my domain
<a href="https://ricoberger.de">ricoberger.de</a>. Previously, I only used the domain as a
landing page with links to my social media profiles. This time, I wanted to add
my cheat sheets, which were previously hosted in my
<a href="https://github.com/ricoberger/cheat-sheets">ricoberger/cheat-sheets</a> GitHub
repository. I also aimed to include a small blog where I can write about topics
I&#39;m interested in. In the following post, we will explore the technologies used
to create the website and the features it offers.</p>
<p><a href="https://ricoberger.de/blog/posts/welcome-to-my-wew-website/assets/landing-page.png"><img src="https://ricoberger.de/blog/posts/welcome-to-my-wew-website/assets/landing-page.png" alt="Landing Page"/></a></p>
<p>To include my cheat sheets on the website, I decided to create my own site
generator in <a href="https://go.dev/">Go</a> instead of using an existing static site
generator like <a href="https://gohugo.io/">Hugo</a>. The site generator is located in the
<a href="https://github.com/ricoberger/ricoberger/blob/main/main.go"><code>main.go</code></a> file and
utilizes the <a href="https://pkg.go.dev/html/template"><code>html/template</code></a> package to
generate HTML files for the website based on various templates available in the
<a href="https://github.com/ricoberger/ricoberger/tree/main/templates"><code>templates</code></a>
directory.</p>
<p>Every site uses the
<a href="https://github.com/ricoberger/ricoberger/blob/main/templates/base.html"><code>base.html</code></a>
template, which provides the basic HTML layout structure, including the <code>&lt;head&gt;</code>
tag and site navigation. We then select a specific template for each site to
generate the final HTML layout using the <code>buildTemplate</code> function. We also
provide a destination path to the function, indicating where the site will be
available and where the <code>index.html</code> file will be created. Finally, we can pass
a <code>Data</code> struct to the template, which includes the <code>Metadata</code> for each site and
custom data specific to each site.</p>
<pre><code class="language-html">&lt;!doctype html&gt;
&lt;html lang=&#34;en&#34;&gt;
  &lt;head&gt;
    &lt;title&gt;{{ .Metadata.Title }}&lt;/title&gt;
  &lt;/head&gt;

  &lt;body&gt;
    &lt;div&gt;&lt;!-- Site Navigation --&gt;&lt;/div&gt;

    {{ template &#34;content&#34; . }}
  &lt;/body&gt;
&lt;/html&gt;
</code></pre>
<pre><code class="language-html">{{ define &#34;content&#34; }}
&lt;div&gt;&lt;!-- Site Content --&gt;&lt;/div&gt;
{{ end }}
</code></pre>
<pre><code class="language-go">type Data struct {
	Metadata Metadata
	Content  any
}

type Metadata struct {
	Title       string
	Description string
	Author      string
	Keywords    []string
	BaseUrl     string
	Url         string
	Image       string
	Prism       bool
}

func buildTemplate(tmpl string, distPath string, data Data) error {
	if err := os.MkdirAll(distPath, os.ModePerm); err != nil {
		return err
	}

	templates, err := template.New(&#34;base.html&#34;).Funcs(template.FuncMap{
		&#34;formatMarkdown&#34;: func(s string) template.HTML {
			md := goldmark.New(
				goldmark.WithExtensions(
					extension.Table,
					extension.Strikethrough,
				),
				goldmark.WithRendererOptions(
					html.WithUnsafe(),
				),
			)

			var buf bytes.Buffer
			if err := md.Convert([]byte(s), &amp;buf); err != nil {
				slog.Error(&#34;Failed to convert markdown&#34;, slog.Any(&#34;error&#34;, err))
			}
			return template.HTML(buf.String())
		},
	}).ParseFiles(&#34;templates/base.html&#34;, fmt.Sprintf(&#34;templates/%s.html&#34;, tmpl))
	if err != nil {
		return err
	}

	f, err := os.Create(fmt.Sprintf(&#34;%s/index.html&#34;, distPath))
	if err != nil {
		return err
	}
	defer f.Close()

	if err := templates.Execute(f, data); err != nil {
		return err
	}

	return nil
}
</code></pre>
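<p>As a minimal, self-contained sketch of the base/content pattern described above (with the template strings inlined instead of being read from the <code>templates</code> directory, and the structs trimmed down, so the names here are illustrative rather than the generator's actual code):</p>

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// Trimmed-down versions of the generator's structs.
type Metadata struct{ Title string }

type Data struct {
	Metadata Metadata
	Content  any
}

// base.html provides the layout; each page defines a "content" template that
// is injected into it via {{ template "content" . }}.
const baseTmpl = `<!doctype html><html><head><title>{{ .Metadata.Title }}</title></head><body>{{ template "content" . }}</body></html>`
const contentTmpl = `{{ define "content" }}<p>{{ .Content }}</p>{{ end }}`

// render combines the base layout with a page-specific "content" template,
// the same pattern buildTemplate uses with ParseFiles.
func render(data Data) (string, error) {
	t, err := template.New("base").Parse(baseTmpl)
	if err != nil {
		return "", err
	}
	// Parsing a second source that only contains a define block adds the
	// "content" template without replacing the base layout.
	if _, err := t.Parse(contentTmpl); err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := render(Data{Metadata: Metadata{Title: "Home"}, Content: "Hello"})
	fmt.Println(out)
}
```
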
<p>For the website&#39;s styling, we use <a href="https://tailwindcss.com/">Tailwind CSS</a>. All
our styles are defined in the
<a href="https://github.com/ricoberger/ricoberger/blob/main/templates/assets/css/input.css"><code>input.css</code></a>
file, which is used to generate the final CSS file (<code>output.css</code>) using
<code>@tailwindcss/cli</code>.</p>
<p>In the <code>input.css</code> file, we specify the location of the source files so that
Tailwind can detect all the used classes. We also define some theme variables
and the styling for each HTML tag used.</p>
<pre><code class="language-css">@import &#34;tailwindcss&#34; source(none);

@source &#34;../../**/*.html&#34;;

/* The used colors are based on the awesome Catppuccin theme: https://catppuccin.com/ */
@theme {
  --color-base: #24273a;
  --color-mantle: #1e2030;
  --color-crust: #181926;
  --color-surface: #5b6078;
  --color-text: #cad3f5;
  --color-primary: #8aadf4;
  --color-red: #ed8796;
  --color-yellow: #eed49f;
  --color-green: #a6da95;
  --color-blue: #8aadf4;
}

@layer base {
  body {
    @apply bg-base text-text;
  }

  /* ... */
}
</code></pre>
<p>Last but not least, we are using <a href="https://alpinejs.dev/">Alpine.js</a> and Tailwind
CSS to create a user-friendly dropdown menu for small screens.</p>
<pre><code class="language-html">&lt;div
  x-data=&#34;{ mobileMenuIsOpen: false }&#34;
  x-on:click.away=&#34;mobileMenuIsOpen = false&#34;
&gt;
  &lt;!-- Site navigation for large screen --&gt;
  &lt;div class=&#34;hidden md:flex&#34;&gt;
    &lt;div&gt;
      &lt;a href=&#34;/&#34;&gt;Home&lt;/a&gt;
    &lt;/div&gt;
    &lt;div&gt;
      &lt;a href=&#34;/about/&#34;&gt;About&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;

  &lt;!-- Open / close button for the site navigation on small screens --&gt;
  &lt;button x-on:click=&#34;mobileMenuIsOpen = !mobileMenuIsOpen&#34; class=&#34;md:hidden&#34;&gt;
    &lt;div x-cloak x-show=&#34;!mobileMenuIsOpen&#34;&gt;Open&lt;/div&gt;
    &lt;div x-cloak x-show=&#34;mobileMenuIsOpen&#34;&gt;Close&lt;/div&gt;
  &lt;/button&gt;

  &lt;!-- Site navigation for small screens --&gt;
  &lt;div x-cloak x-show=&#34;mobileMenuIsOpen&#34; id=&#34;mobileMenu&#34; class=&#34;md:hidden&#34;&gt;
    &lt;div class=&#34;py-4&#34;&gt;
      &lt;a href=&#34;/&#34;&gt;Home&lt;/a&gt;
    &lt;/div&gt;
    &lt;div class=&#34;py-4&#34;&gt;
      &lt;a href=&#34;/about/&#34;&gt;About&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;
</code></pre>
<h2>Cheat Sheets</h2>
<p>As mentioned at the beginning of the post, an important aspect for me was the
ability to include my cheat sheets on the website. The cheat sheets are written
as YAML files and have the following structure:</p>
<pre><code class="language-yaml">---
# The title, description, author and keywords for the cheat sheet
title: Vim
description: Vim Cheat Sheet
author: Rico Berger
keywords:
  - Vim
  - Neovim
# Each cheat sheet can have multiple pages with a title and a defined number of
# columns
pages:
  - title: Vim
    columns: 5
    # Each page of a cheat sheet can have multiple sections with a title, which
    # are rendered dynamically, within the defined number of columns
    sections:
      - title: Registers
        # Each section can have multiple items, which are written in Markdown
        items:
          - &#34;`:register` - Show registers content&#34;
          - ...
        # Each section can also have a tip, which is rendered as a box below the
        # defined items. Besides the actual description, each tip can also have
        # a list of items
        tip:
          description: |
            &#34;**Tip:** Registers are being stored in ~/.viminfo, and will be
            loaded again on next restart of vim. Special registers:&#34;
          items:
            - &#34;`0` - Last yank&#34;
            - ...
</code></pre>
<p>The YAML files are decoded using the <code>github.com/goccy/go-yaml</code> package. Then we
are using the
<a href="https://github.com/ricoberger/ricoberger/blob/main/templates/cheat-sheet.html"><code>cheat-sheet.html</code></a>
template to render the cheat sheet via the <code>buildTemplate</code> function. The decoded
cheat sheet is passed to the function within the <code>Data</code> struct. The rendered
cheat sheet then looks as follows
(<a href="https://github.com/ricoberger/ricoberger/blob/main/cheat-sheets/vim/vim.yaml">Vim</a>):</p>
<p><a href="https://ricoberger.de/cheat-sheets/vim/assets/vim-cheat-sheet.png"><img src="https://ricoberger.de/cheat-sheets/vim/assets/vim-cheat-sheet.png" alt="Vim Cheat Sheet"/></a></p>
<h2>Blog Posts</h2>
<p>Blog posts are written as markdown files and rendered via the
<a href="https://github.com/ricoberger/ricoberger/blob/main/templates/blog-post.html"><code>blog-post.html</code></a>
template. The markdown files are parsed and rendered to HTML via the
<code>github.com/yuin/goldmark</code> package. Each markdown file also contains a metadata
section with the following information:</p>
<pre><code class="language-yaml">---
Title: Welcome to My New Website
Description: |
  I spent the past few days creating a new website for my domain ricoberger.de.
  Previously, I only used the domain as a landing page with links to my social
  media profiles. This time, I wanted to add my cheat sheets, which were
  previously hosted in my ricoberger/cheat-sheets GitHub repository. I also
  aimed to include a small blog where I can write about topics I&#39;m interested
  in. In the following post, we will explore the technologies used to create the
  website and the features it offers.
AuthorName: Rico Berger
AuthorTitle: Site Reliability Engineer
AuthorImage: /assets/img/authors/ricoberger.webp
PublishedAt: 2025-02-23 15:00:00
Tags:
  - alpinejs
  - blog
  - cheat-sheets
  - go
  - projects
  - tailwindcss
Image: /blog/posts/welcome-to-my-wew-website/assets/landing-page.png
---
</code></pre>
<p>The document metadata is parsed using the <code>github.com/yuin/goldmark-meta</code>
extension for <code>goldmark</code> and is used to render the header of each blog post and
the <code>meta</code> tags in the HTML file. We include the metadata for the
<a href="https://ogp.me/">Open Graph protocol</a> and
<a href="https://developer.x.com/en/docs/x-for-websites/cards/overview/abouts-cards">X Cards</a>
in every blog post, to make them look great when they are shared:</p>
<pre><code class="language-html">&lt;meta property=&#34;og:type&#34; content=&#34;website&#34; /&gt;
&lt;meta property=&#34;og:title&#34; content=&#34;{{ .Metadata.Title }}&#34; /&gt;
&lt;meta property=&#34;og:description&#34; content=&#34;{{ .Metadata.Description }}&#34; /&gt;
&lt;meta property=&#34;og:url&#34; content=&#34;{{ .Metadata.BaseUrl }}{{ .Metadata.Url }}&#34; /&gt;
&lt;meta
  property=&#34;og:image&#34;
  content=&#34;{{ .Metadata.BaseUrl }}{{ .Metadata.Image }}&#34;
/&gt;

&lt;meta name=&#34;twitter:card&#34; content=&#34;summary_large_image&#34; /&gt;
&lt;meta name=&#34;twitter:site&#34; content=&#34;@rico_berger&#34; /&gt;
&lt;meta name=&#34;twitter:title&#34; content=&#34;{{ .Metadata.Title }}&#34; /&gt;
&lt;meta name=&#34;twitter:description&#34; content=&#34;{{ .Metadata.Description }}&#34; /&gt;
&lt;meta
  name=&#34;twitter:image&#34;
  content=&#34;{{ .Metadata.BaseUrl }}{{ .Metadata.Image }}&#34;
/&gt;
</code></pre>
<p>Since I&#39;m a big fan of RSS feeds (you might want to have a look at
<a href="https://feeddeck.app/">FeedDeck</a> 😉), we also include an RSS feed for the blog
and an RSS feed for each tag, which can be specified for a blog post. The RSS
feed is generated using the <code>buildRssFeed</code> function. While generating the feed
and creating the <a href="https://ricoberger.de/blog/feed.xml"><code>feed.xml</code></a> file, we go through the parsed
HTML of each post to replace all relative links with absolute ones via the
<code>github.com/PuerkitoBio/goquery</code> package.</p>
<h2>Hosting via GitHub Pages</h2>
<p>To host the new website, we are using <a href="https://pages.github.com/">GitHub Pages</a>
like for the old website. GitHub Pages is perfect for hosting static sites and
integrates very well with <a href="https://github.com/features/actions">GitHub Actions</a>.
Within the
<a href="https://github.com/ricoberger/ricoberger/blob/main/.github/workflows/deploy.yml"><code>deploy.yml</code></a>
GitHub Action, we are building and deploying our website:</p>
<pre><code class="language-yaml">---
name: Deploy

on:
  push:
    branches:
      - main

jobs:
  build-website:
    name: Build Website
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pages: write
      id-token: write

    steps:
      # Checkout the repository, setup Go and Node.js, install the Go and
      # Node.js dependencies and build the &#34;generator&#34; binary
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
          cache: true
          cache-dependency-path: go.sum

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: &#34;20&#34;
          cache: npm
          cache-dependency-path: package-lock.json

      - name: Setup Pages
        uses: actions/configure-pages@v5

      - name: Install Dependencies / Build Binary
        run: |
          go mod download
          go build -o generator .
          npm install

      # Run the generator to create the static files for our website in the
      # &#34;dist&#34; directory
      #
      # To be able to use a custom domain for our GitHub page we also create a
      # file named &#34;CNAME&#34; within the &#34;dist&#34; directory, which contains our
      # custom domain (see https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site#configuring-a-subdomain)
      - name: Generate Website
        run: |
          ./generator
          npm run build
          echo &#34;ricoberger.de&#34; &gt; ./dist/CNAME

      # In the last build step we upload the &#34;dist&#34; directory as artifact, so
      # that it can be deployed to a GitHub Page
      - name: Upload Artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: ./dist

  deploy-website:
    name: Deploy Website
    runs-on: ubuntu-latest
    needs: build-website
    permissions:
      contents: read
      pages: write
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}

    # Deploy the uploaded artifact from the build job to GitHub Pages
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
</code></pre>
<p>Since we are using GitHub Pages to host our site, we can also create a
<a href="https://docs.github.com/en/pages/getting-started-with-github-pages/creating-a-custom-404-page-for-your-github-pages-site">custom 404 page</a>
by placing a file named <code>404.html</code> (generated via the
<a href="https://github.com/ricoberger/ricoberger/blob/main/templates/404.html"><code>404.html</code></a>
template) in the root directory of our website. This page is served whenever
someone tries to access a nonexistent URL on our site.</p>
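<p>For illustration, a minimal <code>404.html</code> could look like the
following. Note that this is only a sketch; the wording and markup below are
made up for this example and are not the actual template used on this site:</p>
<pre><code class="language-html">&lt;!doctype html&gt;
&lt;html lang=&#34;en&#34;&gt;
  &lt;head&gt;
    &lt;meta charset=&#34;utf-8&#34; /&gt;
    &lt;title&gt;404 - Page Not Found&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;h1&gt;404&lt;/h1&gt;
    &lt;p&gt;The page you are looking for does not exist.&lt;/p&gt;
    &lt;p&gt;&lt;a href=&#34;/&#34;&gt;Back to the home page&lt;/a&gt;&lt;/p&gt;
  &lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>GitHub Pages only picks up <code>404.html</code> from the root of the
published directory, so the file has to end up at the top level of the uploaded
artifact, next to <code>index.html</code>.</p>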
<h2>Final Words</h2>
<p>As we conclude this blog post, I hope you gained a better understanding of the
internals behind <a href="https://ricoberger.de/">ricoberger.de</a> and perhaps learned
something new. I aim to enhance my writing skills in future posts. If you don&#39;t
want to miss them, feel free to subscribe to the
<a href="https://ricoberger.de/blog/feed.xml">RSS feed</a> or follow me on social media:</p>
<p class="flex flex-row flex-wrap gap-2">
  <a href="https://github.com/ricoberger" target="_blank"><img alt="Github" src="https://img.shields.io/badge/GitHub-181717.svg?&amp;style=for-the-badge&amp;logo=GitHub&amp;logoColor=white"/></a>
  <a href="https://www.linkedin.com/in/ricoberger/" target="_blank"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-%230077B5.svg?&amp;style=for-the-badge&amp;logo=LinkedIn&amp;logoColor=white"/></a>
  <a href="https://www.xing.com/profile/Rico_Berger5" target="_blank"><img alt="Xing" src="https://img.shields.io/badge/Xing-006567.svg?&amp;style=for-the-badge&amp;logo=Xing&amp;logoColor=white"/></a>
  <a href="https://hachyderm.io/@ricoberger" target="_blank"><img alt="Mastodon" src="https://img.shields.io/badge/Mastodon-6364FF.svg?&amp;style=for-the-badge&amp;logo=Mastodon&amp;logoColor=white"/></a>
  <a href="https://bsky.app/profile/ricoberger.bsky.social" target="_blank"><img alt="Bluesky" src="https://img.shields.io/badge/Bluesky-0285FF.svg?&amp;style=for-the-badge&amp;logo=Bluesky&amp;logoColor=white"/></a>
  <a href="https://twitter.com/rico_berger" target="_blank"><img alt="X" src="https://img.shields.io/badge/X-000000.svg?&amp;style=for-the-badge&amp;logo=X&amp;logoColor=white"/></a>
  <a href="https://medium.com/@ricoberger" target="_blank"><img alt="Medium" src="https://img.shields.io/badge/Medium-000000.svg?&amp;style=for-the-badge&amp;logo=Medium&amp;logoColor=white"/></a>
</p>
<p>If you have any suggestions for future cheat sheets or blog posts, or if you
come across interesting articles on Site Reliability Engineering, Platform
Engineering, Cloud Native, or Kubernetes, feel free to contact me via social
media or create an issue in my
<a href="https://github.com/ricoberger/ricoberger/issues">GitHub repository</a>.</p>
]]></description><author>Rico Berger</author><enclosure url="https://ricoberger.de/blog/posts/welcome-to-my-wew-website/assets/landing-page.png" type="image/png"></enclosure><guid isPermaLink="true">https://ricoberger.de/blog/posts/welcome-to-my-wew-website/</guid><pubDate>Sun, 23 Feb 2025 15:00:00 +0000</pubDate></item></channel></rss>