So the two problems are:

  1. It creates a sidecar for everything that needs a database connection (CI runs, one-off jobs, the actual application), and all of those sidecars take up resources, which is a problem for small clusters
  2. You will not be able to run cronjobs

Cronjobs landed in k8s 1.8, a much-needed feature. However, one big problem is that a cronjob only considers itself done when all of the containers in its pod finish, either by exiting successfully or by failing. This is a problem with the cloudsql-proxy, because it stays open persistently: even though your actual cronjob finishes, the pod is never unscheduled, and since every invocation of the cronjob creates a new pod, they will very quickly clog up your nodes until you run out of resources.
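To illustrate, a cronjob spec along these lines (the job name, image, and instance name are placeholders) will never reach completion: the report container exits, but the proxy sidecar keeps running, so every scheduled run leaves a pod behind.

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report                 # placeholder name
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: report
              image: my-report-image   # placeholder: your actual job
            - name: cloudsql-proxy     # this sidecar never exits, so the Job never completes
              image: gcr.io/cloudsql-docker/gce-proxy:1.09
              command:
                - "/cloud_sql_proxy"
                - "-instances=sql-instance-here=tcp:127.0.0.1:5432"
```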

There is already a ticket on cloudsql, and another, more general one here, about better support for sidecar containers. Until then, one manageable workaround is the following:

Instead of using the advised method of deploying the cloudsql-proxy as a sidecar container, you deploy it as a standalone deployment and a service:

```yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
    name: cloudsql
spec:
    replicas: 1
    template:
        metadata:
            labels:
                app: cloudsql
        spec:
            containers:
                - name: cloudsql
                  image: gcr.io/cloudsql-docker/gce-proxy:1.09
                  command:
                      - "/cloud_sql_proxy"
                      - "-dir=/cloudsql"
                      - "-credential_file=/secret/gcp-key/keyfile.json"
                      - "-instances=sql-instance-here=tcp:0.0.0.0:5432" # accrording to Jose M. you need the last "=tcp" bit for it to listen on all interfaces
                  volumeMounts:
                    - name: gcp-key
                      mountPath: /secret/gcp-key
                  ports:
                    - containerPort: 5432
                      name: sql
            volumes:
              - name: gcp-key
                secret:
                    secretName: gcp-keyfile
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql
spec:
  type: ClusterIP
  selector:
    app: cloudsql
  ports:
    - port: 5432
      name: sql
      targetPort: sql
```

And then use the service name, cloudsql, as the hostname when connecting your clients. As long as your clients don't cache DNS lookups, this shouldn't be a problem even if the cloudsql pod cycles, since the DNS name will automatically point to the new pod's IP.
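
For example, a cronjob (the names and environment variables below are placeholders; use whatever your client actually reads its configuration from) can now simply point at the cloudsql service, and its pod completes as soon as the single job container exits:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report                  # placeholder name
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: report
              image: my-report-image    # placeholder: your actual job
              env:
                - name: DATABASE_HOST   # or however your client is configured
                  value: cloudsql       # the Service name, resolved by cluster DNS
                - name: DATABASE_PORT
                  value: "5432"
```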