Jupyter- and container-centric data science workflows for building AI applications.
Why does AI need a framework?
Frameworks have long been the friend of web developers, increasing productivity by easing the process of getting started, building, and deploying applications. Loosely modeled after Laravel, Carme is designed to help data scientists create and deploy data applications.
Here are just some of the features of Carme. Check out the docs and tutorials for more.
Easily use Jupyter with Docker containers both locally and in the cloud.
Control cloud resources with helpers for GCP, Azure, or AWS.
Use basic git and GitHub functionality as part of any data science workflow.
Easily create and scale complex jobs with Directed Acyclic Graphs (DAGs) and Airflow (see the sketch after this list).
Quickly implement and share deep learning models using shared packages.
Build chatbots using preconfigured samples and deep learning models.
Easily share and leverage others' code through a simple package system.
Easily simulate data to be used outside the enterprise or when working with consultants.
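As a concrete illustration of the DAG item above, here is a minimal sketch of a plain Airflow 2.x pipeline; the DAG id, task names, and commands are illustrative placeholders, not part of Carme itself.

```python
# Minimal Airflow 2.x sketch of a three-step pipeline (placeholder commands,
# not Carme-specific). Each task could instead wrap a containerized notebook run.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_training_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo 'extracting data'")
    train = BashOperator(task_id="train", bash_command="echo 'training model'")
    report = BashOperator(task_id="report", bash_command="echo 'building report'")

    # The >> operator declares the DAG's edges: extract -> train -> report.
    extract >> train >> report
```

Dropping a file like this into Airflow's dags/ folder is enough for the scheduler to pick it up and run the three tasks in order.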
Working with containers allows your infrastructure to become code (this is often referred to as infrastructure as code, or IaC). This ensures that others will be able to reproduce your work, and the Carme package system makes sharing your work a snap. Containers are also useful when you want to scale an analysis, perhaps moving from your local machine to a GPU in the cloud, or from a single GPU server to a Kubernetes cluster backing a classroom JupyterHub instance.
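For instance, a Jupyter server can be launched in a container programmatically, which is one small way of treating infrastructure as code. The sketch below assumes the docker Python SDK and the public jupyter/scipy-notebook image; it is an illustration, not a Carme command.

```python
# A minimal sketch using the docker Python SDK (pip install docker); the image
# and port are assumptions for illustration, not Carme defaults.
import docker

client = docker.from_env()

# Start a Jupyter server in a container, exposing it on localhost:8888.
container = client.containers.run(
    "jupyter/scipy-notebook:latest",
    ports={"8888/tcp": 8888},
    detach=True,
)

# The startup log eventually contains the URL and access token for the server.
print(container.logs().decode())
```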
Right now we are developing targeted solutions for data science communities as we build out the Carme command line interface, documentation, and tutorials. Have a data science problem you think CarmeLabs might be able to solve? Let us know!
"How has carme made your workflow easier? Seeking testimonials for this section! "
You Data Scientist