Tabnine is an AI pair-programming tool. At its heart, Tabnine is meant to be a pal, a buddy, a mentor that lives inside your IDE and gives helpful suggestions in real time as you write code. Like anybody, Tabnine isn’t always correct, but there are ways to make it more helpful and contextually aware. Feel free to browse our FAQs or our blog to get a sense of the real-world applications and limitations.

Let’s get back to the overview of Tabnine and how it works. Tabnine is powered by an LLM (Large Language Model): a transformer neural network that consumes ordered data and generates responses based on the underlying data it has been trained on. As you type in your IDE, your comments and code are used to predict what is most likely to come next, and that prediction is offered to you as a suggestion you can accept. The round-trip journey looks roughly like this:
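To make "predict what is most likely to come next" concrete, here is a deliberately tiny sketch: a bigram frequency model that suggests the next token based on the previous one. This is purely illustrative and not how Tabnine works internally; a real LLM uses a transformer over a much larger context, but the core task, next-token prediction, is the same.

```python
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """Count which token tends to follow which (a toy 'training' step)."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent continuation seen in training, if any."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# A miniature 'training corpus' of code tokens.
corpus = "for i in range ( n ) : print ( i )".split()
model = train_bigram(corpus)
print(predict_next(model, "range"))  # -> "("
```

A production model would rank many candidate continuations with learned probabilities rather than raw counts, but the interface is the same: context in, likely next tokens out.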
- As you write code and comments, the characters (both behind and in front of the cursor, if they exist) are tokenized and encrypted.
- These encrypted packages are then sent to the inference server on Tabnine.com (SaaS) or on-prem (Enterprise), where they are decrypted.
- The inference server runs the model over the tokens, following the most likely continuation, and generates predicted code.
- That predicted code is encrypted and sent back to your client IDE plugin, where it is decrypted and suggested to you at the cursor.
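The steps above can be sketched end to end. Everything here is hypothetical: the function names are invented, the XOR "cipher" is a toy stand-in for the real encryption (and the shared key stands in for a proper key exchange), and the server's "prediction" is a stub. The point is only the shape of the flow: tokenize, encrypt, send, decrypt, infer, encrypt, return, suggest.

```python
import secrets

def tokenize(text: str) -> list[str]:
    # Toy tokenizer: real clients use subword tokenization.
    return text.split()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Symmetric toy cipher: applying it twice with the same key decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def inference_server(encrypted: bytes, key: bytes) -> bytes:
    # Server side: decrypt, "predict" a continuation, encrypt the reply.
    tokens = xor_crypt(encrypted, key).decode().split()
    prediction = "pass  # TODO" if tokens and tokens[-1] == ":" else "..."
    return xor_crypt(prediction.encode(), key)

# Client side: gather the context around the cursor, tokenize, encrypt, send.
key = secrets.token_bytes(16)
context = "def add(a, b) :"
payload = xor_crypt(" ".join(tokenize(context)).encode(), key)

# Round trip: the server decrypts, predicts, and encrypts its answer.
reply = inference_server(payload, key)

# Client side again: decrypt and surface the suggestion at the cursor.
suggestion = xor_crypt(reply, key).decode()
print(suggestion)
```

In the real product the request travels over the network to Tabnine.com or an on-prem server, and the plugin renders the decrypted suggestion inline so you can accept or dismiss it.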
Data sent to the inference server is never stored or read by any other person or Tabnine employee. The privacy and security of our customers always come first. You can check our Security Portal for more information.