
CodeCompanion.nvim

Currently supports: Anthropic, Ollama and OpenAI adapters

Important

This plugin is provided as-is and is primarily developed for my own workflows. As such, I offer no guarantees of regular updates or support and I expect the plugin's API to change regularly. Bug fixes and feature enhancements will be implemented at my discretion, and only if they align with my personal use-case. Feel free to fork the project and customize it to your needs, but please understand my involvement in further development will be intermittent. To be notified of breaking changes in the plugin, please subscribe to this issue.


✨ Features

  • 💬 A Copilot Chat experience from within Neovim
  • 🔌 Adapter support for many generative AI services
  • 🚀 Inline code creation and modification
  • ✨ Built-in actions for specific language prompts, LSP error fixes and code advice
  • 🏗️ Create your own custom actions for Neovim
  • 💾 Save and restore your chats
  • 💪 Async execution for improved performance

📸 Screenshots

Chat.Buffer.mp4
Inline.Code.mp4

⚡ Requirements

  • The curl library installed
  • Neovim 0.9.2 or greater
  • (Optional) An API key for your chosen generative AI service

📦 Installation

Install the plugin with your package manager of choice:

-- Lazy.nvim
{
  "olimorris/codecompanion.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
    "nvim-telescope/telescope.nvim", -- Optional
    {
      "stevearc/dressing.nvim", -- Optional: Improves the default Neovim UI
      opts = {},
    },
  },
  config = true
}

-- Packer.nvim
use({
  "olimorris/codecompanion.nvim",
  config = function()
    require("codecompanion").setup()
  end,
  requires = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
    "nvim-telescope/telescope.nvim", -- Optional
    "stevearc/dressing.nvim" -- Optional: Improves the default Neovim UI
  }
})

🔧 Configuration

You only need to call the setup function if you wish to change any of the defaults:

The default configuration:
require("codecompanion").setup({
  adapters = { -- anthropic|ollama|openai
    chat = "openai",
    inline = "openai",
  },
  saved_chats = {
    save_dir = vim.fn.stdpath("data") .. "/codecompanion/saved_chats", -- Path to save chats to
  },
  display = {
    action_palette = {
      width = 95,
      height = 10,
    },
    chat = { -- Options for the chat strategy
      type = "float", -- float|buffer
      show_settings = true, -- Show the model settings in the chat buffer?
      show_token_count = true, -- Show the token count for the current chat in the buffer?
      buf_options = { -- Buffer options for the chat buffer
        buflisted = false,
      },
      float_options = { -- Float window options if the type is "float"
        border = "single",
        buflisted = false,
        max_height = 0,
        max_width = 0,
        padding = 1,
      },
      win_options = { -- Window options for the chat buffer
        cursorcolumn = false,
        cursorline = false,
        foldcolumn = "0",
        linebreak = true,
        list = false,
        signcolumn = "no",
        spell = false,
        wrap = true,
      },
    },
  },
  keymaps = {
    ["<C-s>"] = "keymaps.save", -- Save the chat buffer and trigger the API
    ["<C-c>"] = "keymaps.close", -- Close the chat buffer
    ["q"] = "keymaps.cancel_request", -- Cancel the currently streaming request
    ["gc"] = "keymaps.clear", -- Clear the contents of the chat
    ["ga"] = "keymaps.codeblock", -- Insert a codeblock into the chat
    ["gs"] = "keymaps.save_chat", -- Save the current chat
    ["]"] = "keymaps.next", -- Move to the next header in the chat
    ["["] = "keymaps.previous", -- Move to the previous header in the chat
  },
  log_level = "ERROR", -- TRACE|DEBUG|ERROR
  send_code = true, -- Send code context to the generative AI service? Disable to prevent leaking code outside of Neovim
  silence_notifications = false, -- Silence notifications for actions like saving chats?
  use_default_actions = true, -- Use the default actions in the action palette?
})

Adapters

Warning

Depending on your chosen adapter, you may need to set an API key.

Adapters act as the bridge between the plugin and generative AI services. Currently the plugin supports:

  • Anthropic (anthropic) - Requires an API key
  • Ollama (ollama)
  • OpenAI (openai) - Requires an API key

You can specify an adapter for each of the strategies in the plugin:

require("codecompanion").setup({
  adapters = {
    chat = "anthropic",
    inline = "openai"
  },
})

You may need to modify certain parameters of an adapter. In the example below, we're changing the name of the API key that the OpenAI adapter uses by passing in a table to the use method:

require("codecompanion").setup({
  adapters = {
    chat = require("codecompanion.adapters").use("openai", {
      env = {
        api_key = "DIFFERENT_OPENAI_KEY",
      },
    }),
  },
})

Tip

To create your own adapter please refer to the ADAPTERS guide.

Additional API Key Options

Having API keys in plain text in your shell is not always safe. Thanks to this PR, you can run commands from within the plugin:

require("codecompanion").setup({
  adapters = {
    chat = require("codecompanion.adapters").use("openai", {
      env = {
        api_key = "cmd:gpg --decrypt ~/.openai-api-key.gpg 2>/dev/null",
      },
    }),
  },
})

In this example, we're using gpg to decrypt a file to obtain an API key.

Edgy.nvim Configuration

The author recommends pairing with edgy.nvim for an experience similar to that of GitHub's Copilot Chat:

{
  "folke/edgy.nvim",
  event = "VeryLazy",
  init = function()
    vim.opt.laststatus = 3
    vim.opt.splitkeep = "screen"
  end,
  opts = {
    right = {
      { ft = "codecompanion", title = "Code Companion Chat", size = { width = 0.45 } },
    }
  }
}

Highlight Groups

The plugin sets the following highlight groups during setup:

  • CodeCompanionTokens - Virtual text showing the token count when in a chat buffer
  • CodeCompanionVirtualText - All other virtual text in the chat buffer
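
If you'd like to restyle them, you can override the groups with Neovim's standard highlight API after calling setup(). A minimal sketch (linking to Comment and NonText is an illustrative choice, not the plugin's default):

-- Run after require("codecompanion").setup() so the overrides take effect
vim.api.nvim_set_hl(0, "CodeCompanionTokens", { link = "Comment" })
vim.api.nvim_set_hl(0, "CodeCompanionVirtualText", { link = "NonText" })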

🚀 Usage

The plugin has a number of commands:

  • :CodeCompanion - Inline code writing and refactoring
  • :CodeCompanionChat - To open up a new chat buffer
  • :CodeCompanionToggle - Toggle a chat buffer
  • :CodeCompanionActions - To open up the action palette window

For an optimum workflow, the plugin author recommends the following keymaps:

vim.api.nvim_set_keymap("n", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("n", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })

Note

For some actions, visual mode allows your selection to be sent directly to the chat buffer or the API itself (in the case of inline code actions).

The Action Palette


Note

Please see the RECIPES guide in order to add your own actions to the palette.

The Action Palette, opened via :CodeCompanionActions, contains all of the plugin's actions and their associated strategies. It's the fastest way to start leveraging CodeCompanion. The options available in the palette depend on whether you're in normal or visual mode.

Tip

If you wish to turn off the default actions, set use_default_actions = false in your config.
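
As a minimal sketch, that looks like:

require("codecompanion").setup({
  use_default_actions = false, -- hide the plugin's built-in actions from the palette
})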

The Chat Buffer


The chat buffer is where you converse with the generative AI service, directly from Neovim. It behaves as a regular markdown buffer with some clever additions. When the buffer is written (or "saved"), autocmds trigger the sending of its contents to the generative AI service in the form of prompts. These prompts are segmented by H1 headers: user, system and assistant. When a response is received, it is streamed back into the buffer, giving you the feel of conversing with your generative AI service from within Neovim.

Keymaps

When in the chat buffer, there are a number of keymaps available to you:

  • <C-s> - Save the buffer and trigger a response from the generative AI service
  • <C-c> - Close the buffer
  • q - Cancel the stream from the API
  • gc - Clear the buffer's contents
  • ga - Add a codeblock
  • gs - Save the chat to disk
  • ] - Move to the next header
  • [ - Move to the previous header

Saved Chats

Chat buffers are not saved to disk by default, but can be by pressing gs in the buffer. Saved chats can then be restored via the Action Palette and the Load saved chats action.

Settings

If display.chat.show_settings is set to true, the adapter's model parameters are shown at the very top of the chat buffer and can be changed to tweak the response. Move the cursor over a setting to see more detail about it.

Inline Code


You can use the plugin to write code directly into a Neovim buffer. This can be invoked via the Action Palette (as above) or from the command line with :CodeCompanion. For example:

:CodeCompanion create a table of 5 fruits
:'<,'>CodeCompanion refactor the code to make it more concise

Note

The command can detect if you've made a visual selection and send any code as context to the API alongside the filetype of the buffer.

One of the challenges with inline editing is determining how the generative AI's response should be handled in the buffer. If you've prompted the API to "create a table of 5 fruits" then you may wish for the response to be placed after the cursor's current position in the buffer. However, if you asked the API to "refactor this function" then you'd expect the response to overwrite a visual selection. If the placement isn't specified in your prompt, the plugin will use generative AI itself to determine which of the placements below the response should follow:

  • after - after the visual selection
  • before - before the visual selection
  • cursor - one column after the cursor position
  • new - in a new buffer
  • replace - replacing the visual selection

As a final example, specifying a prompt like "create a test for this code in a new buffer" would result in a new Neovim buffer being created.

In-Built Actions

The plugin comes with a number of in-built actions which aim to improve your Neovim workflow. Actions make use of either a chat or an inline strategy. The chat strategy opens up a chat buffer whilst an inline strategy will write output from the generative AI service into the Neovim buffer.

Chat and Chat as

Both of these actions utilise the chat strategy. The Chat action opens up a fresh chat buffer. The Chat as action lets you set persona-based context in the chat buffer, enabling better and more detailed responses from the generative AI service.

Tip

Both of these actions allow for visually selected code to be sent to the chat buffer as code blocks.

Open chats

This action enables users to easily navigate between their open chat buffers. A chat buffer can be deleted (and removed from memory) by pressing <C-c>.

Inline code

These actions utilise the inline strategy. They can be useful for writing inline code in a buffer or even refactoring a visual selection, all based on a user's prompt. The actions are designed to write code for the filetype of the buffer they are initiated in or, if run from a terminal prompt, to write commands.

The strategy comes with a number of helpers which the user can type in the prompt, similar to GitHub Copilot Chat:

  • /doc to add a documentation comment
  • /optimize to analyze and improve the running time of the selected code
  • /tests to create unit tests for the selected code

Note

The options available to the user in the Action Palette will depend on the Vim mode.

Code advisor

As the name suggests, this action provides advice on a visual selection of code and utilises the chat strategy. The response from the API is streamed into a chat buffer which follows the display.chat settings in your configuration.

LSP assistant

Taken from the fantastic Wtf.nvim plugin, this action provides advice on how to correct any LSP diagnostics which are present on the visually selected lines. Again, the send_code = false value can be set in your config to prevent the code itself being sent to the generative AI service.
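
A minimal sketch of that setting:

require("codecompanion").setup({
  send_code = false, -- prevent code from being sent to the generative AI service
})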

🌈 Helpers

Hooks / User events

The plugin fires the following events during its lifecycle:

  • CodeCompanionRequest - Fired during the API request. Outputs data.status with a value of started or finished
  • CodeCompanionChatSaved - Fired after a chat has been saved to disk
  • CodeCompanionChat - Fired at various points in the chat buffer's lifecycle. Comes with the following attributes:
    • data.action = close_buffer - For when a chat buffer has been permanently closed
    • data.action = hide_buffer - For when a chat buffer is hidden
    • data.action = show_buffer - For when a chat buffer is visible after being hidden
  • CodeCompanionInline - Fired during the inline API request alongside CodeCompanionRequest. Outputs data.status with a value of started or finished

Events can be hooked into as follows:

local group = vim.api.nvim_create_augroup("CodeCompanionHooks", {})

vim.api.nvim_create_autocmd({ "User" }, {
  pattern = "CodeCompanionInline",
  group = group,
  callback = function(request)
    print(request.data.status) -- outputs "started" or "finished"
  end,
})

Tip

A possible use case is for formatting the buffer after an inline code request
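
As a minimal sketch of that idea, assuming the User event's buf field points at the buffer the inline request was made in and that you want an LSP-driven format:

vim.api.nvim_create_autocmd("User", {
  pattern = "CodeCompanionInline",
  group = vim.api.nvim_create_augroup("CodeCompanionInlineFormat", { clear = true }),
  callback = function(request)
    if request.data.status == "finished" then
      -- Format the buffer once the inline request has finished streaming
      vim.lsp.buf.format({ bufnr = request.buf, async = true })
    end
  end,
})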

Heirline.nvim

If you're using the fantastic Heirline.nvim plugin, consider the following snippet to display an icon in the statusline whilst CodeCompanion is conversing with a generative AI service:

local CodeCompanion = {
  static = {
    processing = false,
  },
  update = {
    "User",
    pattern = "CodeCompanionRequest",
    callback = function(self, args)
      self.processing = (args.data.status == "started")
      vim.cmd("redrawstatus")
    end,
  },
  {
    condition = function(self)
      return self.processing
    end,
    provider = "",
    hl = { fg = "yellow" },
  },
}

🎁 Contributing

I am open to contributions but they will be implemented at my discretion. Feel free to open up a discussion before embarking on a big PR.

👏 Acknowledgements

Thanks to the project's contributors: olimorris, rebel1324, mrjones2014, abayomi185 and nuvic.
