VSCode: Autocomplete with AWS Bedrock triggers an error toast #5770

Open
maximelebastard opened this issue May 21, 2025 · 4 comments
Labels: area:autocomplete, ide:vscode, kind:bug

Comments

@maximelebastard


Relevant environment info

- OS: macOS
- Continue version: 1.1.32
- IDE version: 1.100.2
- Model: any
- config: 
  
%YAML 1.1
---
version: 1.0.16
schema: v1

bedrock: &bedrock
  provider: bedrock
  env:
    region: eu-central-1
    profile: bedrock

models:
  - name: (aws) Anthropic Claude 3.5 Sonnet
    <<: *bedrock
    model: anthropic.claude-3-5-sonnet-20240620-v1:0
    roles:
      - chat
      - edit
      - apply
  - name: (aws) Anthropic Claude 3.7 Sonnet
    <<: *bedrock
    model: eu.anthropic.claude-3-7-sonnet-20250219-v1:0
    roles:
      - chat
      - edit
  - name: (aws) Anthropic Claude 3 Haiku
    <<: *bedrock
    model: anthropic.claude-3-haiku-20240307-v1:0
    roles:
      - chat
      - autocomplete

  - name: (aws) Cohere Rerank
    <<: *bedrock
    model: cohere.rerank-v3-5:0
    roles:
      - rerank

  - name: (aws) Cohere Embed Multilingual
    <<: *bedrock
    model: cohere.embed-multilingual-v3
    roles:
      - embed
      
  - name: (local) Qwen 2.5 autocomplete
    provider: ollama
    model: qwen2.5-coder:3b
    roles:
      - chat

context:
  - provider: file
  - provider: code
  - provider: open
  - provider: folder
  - provider: clipboard
  - provider: diff
  - provider: terminal
  - provider: docs
  - provider: web
    params:
      n: 5
  - provider: codebase
  - provider: tree
  - provider: url

prompts:
  - name: commit
    description: Create a commit message from staged files
    prompt: "{{input}}"

docs:
  - name: Symfony
    startUrl: https://symfony.com/doc/6.4/index.html

defaultCompletionOptions:
  maxTokens: 4096

Description

When an AWS Bedrock model is used as the autocomplete model, typing in a file triggers frequent error toasts about the previous request having been canceled.
This is most likely because each new keystroke cancels the in-flight autocomplete request in order to start a new one.
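The cancel-on-keystroke behavior described here can be sketched as follows. This is illustrative only and not Continue's actual code; it assumes the extension holds one AbortController per completion request, and all names (`AutocompleteSession`, `fakeBedrockCompletion`) are hypothetical:

```typescript
// Illustrative sketch: each keystroke aborts the in-flight completion request
// via an AbortController before issuing a new one, so the superseded request's
// promise rejects with "Request aborted".

class AutocompleteSession {
  private inflight: AbortController | null = null;

  async request(
    prefix: string,
    complete: (prefix: string, signal: AbortSignal) => Promise<string>,
  ): Promise<string> {
    this.inflight?.abort(); // a newer keystroke supersedes the old request
    const controller = new AbortController();
    this.inflight = controller;
    try {
      return await complete(prefix, controller.signal);
    } catch (err) {
      // An abort we triggered ourselves is expected, not a user-facing error.
      if (controller.signal.aborted) return "(aborted)";
      throw err;
    }
  }
}

// Stand-in for the Bedrock call: resolves after a delay unless aborted first.
function fakeBedrockCompletion(prefix: string, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve(prefix + "_completed"), 50);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("Request aborted"));
    });
  });
}
```

If the catch branch fails to check `signal.aborted` and instead reports every rejection, each keystroke produces exactly the kind of "Request aborted" toast described in this issue.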

To reproduce

  1. Set the Bedrock Claude 3 Haiku model as the autocomplete model
  2. Start typing in a file
  3. Wait a second
  4. Start typing again
  5. The toast "Failed to communicate with Bedrock API: Request aborted" appears
  6. The toast "AWS Bedrock stream error (ECONNRESET): aborted" from the Continue extension appears on the right-hand side of VSCode

Log output

@dosubot added the area:autocomplete, ide:vscode, and kind:bug labels on May 21, 2025
@chezsmithy
Contributor

Autocomplete requires a FIM-enabled model to work well. Currently Bedrock doesn't have any FIM models in its catalog that perform well. We've imported CodeLlama for this purpose, but it's not the greatest. I'm holding out for Mistral to add Codestral to Bedrock.

@jmoughon
Contributor

Reviewing your config:

Change (local) Qwen 2.5 autocomplete to have:

  roles:
    - autocomplete

Remove the autocomplete role from (aws) Anthropic Claude 3 Haiku.

@maximelebastard
Author

@chezsmithy FIM models may be more accurate, but Haiku mostly works. The error toast is useless and relates only to canceled HTTP requests, probably due to poor error catching. My ticket is not about completion quality; it reports an HTTP / AWS SDK usage issue.

@jmoughon Thanks, I'll give it a try, but since there is a model selector (the small box icon above the prompt area), I thought the model used would be the one selected there, and it is Haiku, not the Ollama Qwen model.

@jmoughon
Contributor

jmoughon commented May 29, 2025

@maximelebastard Understood. You need to update your config to allow Qwen:

- name: (local) Qwen 2.5 autocomplete
  provider: ollama
  model: qwen2.5-coder:3b
  roles:
    - autocomplete
