This changelog provides an overview of user-visible updates, including new features, improvements, bug fixes, and deprecations, for the tensorkube CLI.

As the CLI is still in its alpha development stage, breaking changes may occur. We strive to minimize these disruptions and provide the tensorkube upgrade command to help you move to the latest version.

We thank you for your understanding and patience as we progress towards a stable release of the CLI.


0.0.7 (June 26, 2024)

  • Added support for T4 and L4 GPU types.
  • Added support for ignoring files and directories specified in a .dockerignore file during the image build process.
  • Added the tensorkube upgrade command to bring your tensorfuse runtime to the latest version.
  • Added flags --min-scale and --max-scale to control the number of pods running in your tensorfuse app. By default, the minimum scale is 0 and the maximum scale is 3.
  • Non-GPU pods will no longer be scheduled on GPU machines.
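Combining the new flags, a deployment pinned to an L4 GPU with autoscaling bounds might look like this (a sketch; the exact value accepted by --gpu-type, e.g. l4, is an assumption):

```bash
# Deploy with one L4 GPU and allow between 0 and 3 pods
tensorkube deploy --gpu-type l4 --gpus 1 --min-scale 0 --max-scale 3

# Bring the tensorfuse runtime to the latest version
tensorkube upgrade
```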

0.0.5 (June 24, 2024)

  • The CLI now works seamlessly on Linux machines. Earlier, the CLI was supported only on Macs.
  • Added support for Network File System within your Tensorkube cluster. Network File System enables faster cold starts and also reduces image build times.
  • Upgraded the image build engine to use fewer resources during the build process. The new build engine consumes less RAM and can therefore run on cheaper instances.
  • You can now use tensorkube install-prerequisites to install and check all the prerequisite packages required to run tensorkube before configuring it with tensorkube configure.
  • tensorkube configure now resumes from the last installation checkpoint. You no longer have to remove and reset the cluster if your configuration runs into errors.
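A typical first-time setup using the commands above might look like this (a sketch based only on the commands named in this release):

```bash
# Verify and install the prerequisite packages tensorkube needs
tensorkube install-prerequisites

# Configure the cluster; re-running resumes from the last checkpoint on failure
tensorkube configure
```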

0.0.4 (June 17, 2024)

  • Added logging support for streaming logs during the container image build process and while a service is starting. Every intermediate step, from building the image to submitting a nodeclaim to actually starting a pod, can now be understood and debugged using logs.
  • Support for versioned deployments is now live. Every time you run tensorkube deploy, a new version of your service is created and the older version is retired.
  • tensorkube deploy now accepts optional --cpu and --memory parameters, which let you specify the number of CPU millicores and the amount of RAM your servers run on.
  • Your pods now remain active for up to 5 minutes after they receive their last HTTP request.
  • Your pods now have a hard upscale limit of 3 pods.
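As a sketch of the new resource flags (the values and the memory unit are illustrative assumptions, not documented defaults):

```bash
# Request 500 CPU millicores and 512 MB of RAM per pod
tensorkube deploy --cpu 500 --memory 512
```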

0.0.3 (June 11, 2024)

  • The Tensorkube CLI is now available on PyPI and can be installed using pip install tensorkube.
  • tensorkube deploy now supports --gpus and --gpu-type parameters. --gpus defines the number of GPUs each pod requires, and --gpu-type defines the type of GPU you want your system to run on.
  • You can now list all your service deployments using the tensorkube list deployments command.
  • Your file uploads now show progress bars, making it easier to gauge upload sizes and optimise your images during deployment.
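Putting the 0.0.3 additions together, a minimal workflow might look like this (the --gpu-type value shown is an illustrative assumption):

```bash
# Install the CLI from PyPI
pip install tensorkube

# Deploy a service with one GPU of a chosen type
tensorkube deploy --gpus 1 --gpu-type t4

# List all service deployments
tensorkube list deployments
```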