Kubernetes on GKE with Cloud SQL proxy, and CronJobs

Thursday, February 22, 2018
If you are running Kubernetes on Google Cloud, you have probably run into the issue that it's not straightforward to connect to a Cloud SQL instance from within the cluster. The official best practice is to run the Cloud SQL proxy as a sidecar container in your pods and connect through that. There are two problems with this approach, and I will address both.
So the two problems are:
- It creates a sidecar for everything that needs a database connection: CI jobs, one-off tasks, and the application itself all pay the extra resource cost, which adds up quickly on small clusters
- You will not be able to run CronJobs
CronJobs landed in Kubernetes 1.8, a much-needed feature. However, one big problem is that a CronJob only considers itself done when all the containers in its pod finish, either by exiting successfully or by failing. This is incompatible with the Cloud SQL proxy sidecar: the proxy stays open indefinitely, so even though your actual job finishes, the pod is never unscheduled. Since every invocation of the CronJob creates a new pod, these lingering pods will very quickly clog up your nodes until you run out of resources.
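To make the failure mode concrete, here is a sketch of the sidecar pattern applied to a CronJob (the `job` container, its image, and the schedule are placeholders, not from any real deployment); the proxy container never exits, so the Job never completes:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: broken-example
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: job
              image: your-job-image   # placeholder: does the actual work, then exits
            - name: cloudsql
              image: gcr.io/cloudsql-docker/gce-proxy:1.09
              command: ["/cloud_sql_proxy", "-instances=sql-instance-here"]
              # the proxy runs forever, so the pod never reaches a completed state
```

Every five minutes this schedules a fresh pod that never terminates.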
Instead of using the advised method of deploying the Cloud SQL proxy as a sidecar container, deploy it as a standalone Deployment, fronted by a Service:
```yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cloudsql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cloudsql
    spec:
      containers:
        - name: cloudsql
          image: gcr.io/cloudsql-docker/gce-proxy:1.09
          command:
            - "/cloud_sql_proxy"
            - "-dir=/cloudsql"
            - "-credential_file=/secret/gcp-key/keyfile.json"
            - "-instances=sql-instance-here"
          volumeMounts:
            - name: gcp-key
              mountPath: /secret/gcp-key
          ports:
            - containerPort: 5432
              name: sql
      volumes:
        - name: gcp-key
          secret:
            secretName: gcp-keyfile
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql
spec:
  type: ClusterIP
  selector:
    app: cloudsql
  ports:
    - port: 5432
      name: sql
      targetPort: sql
```
Then use the Service name, cloudsql, as the hostname when connecting your clients. As long as your clients don't cache DNS responses, this works even if the cloudsql pod cycles, since the DNS name will automatically resolve to the new pod's IP.
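With the proxy running as its own Deployment, a CronJob becomes a plain single-container pod that exits when its work is done. A minimal sketch, where the image, command, schedule, and env variable names are placeholder assumptions you would replace with your own:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 3 * * *"       # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: your-app-image    # placeholder
              command: ["run-report"]  # placeholder
              env:
                - name: DB_HOST
                  value: cloudsql      # the Service name, resolved via cluster DNS
                - name: DB_PORT
                  value: "5432"
```

When the report container exits, the pod completes and is cleaned up normally, because there is no sidecar keeping it alive.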