# Dispatch::Adapter::Copilot

A Ruby gem that provides a provider-agnostic LLM adapter interface with a concrete GitHub Copilot implementation. It calls the Copilot API directly over HTTP using Ruby's `net/http`: no SDK, no CLI, no external dependencies.

## What It Does

- Defines a **canonical adapter interface** (`Dispatch::Adapter::Base`) that any LLM provider can implement
- Provides a **complete GitHub Copilot adapter** (`Dispatch::Adapter::Copilot`) supporting:
  - Chat completions (text responses, tool calls, mixed responses)
  - Streaming via Server-Sent Events (SSE)
  - Tool/function calling with structured input/output
  - Thinking/reasoning effort control for reasoning models (o1, o3, o4-mini, etc.)
  - Automatic GitHub device OAuth flow for authentication
  - Copilot token management with automatic refresh
- Uses **canonical structs** (`Message`, `Response`, `ToolUseBlock`, etc.) so your application code stays provider-agnostic

## Installation

Add to your Gemfile:

```ruby
gem "dispatch-adapter-copilot"
```

Then run `bundle install`.

## Authentication

The adapter authenticates via a GitHub OAuth token. You have three options:

### Option 1: Pass a token directly

```ruby
adapter = Dispatch::Adapter::Copilot.new(github_token: "gho_your_token_here")
```

### Option 2: Interactive device flow

Omit the token and the adapter will trigger a GitHub device authorization flow on first use:

```ruby
adapter = Dispatch::Adapter::Copilot.new
adapter.chat(messages) # Prints a URL and code to stderr, waits for authorization
```

The token is persisted to `~/.config/dispatch/copilot_github_token` and reused on subsequent runs.

### Option 3: Custom token path

To store and read the token somewhere other than the default location:

```ruby
adapter = Dispatch::Adapter::Copilot.new(token_path: "/path/to/my/token")
```
## Usage

### Basic Chat

```ruby
require "dispatch/adapter/copilot"

adapter = Dispatch::Adapter::Copilot.new(
  model: "gpt-4.1",  # Model to use (default: "gpt-4.1")
  max_tokens: 8192   # Max output tokens (default: 8192)
)

messages = [
  Dispatch::Adapter::Message.new(role: "user", content: "What is Ruby?")
]

response = adapter.chat(messages, system: "You are a helpful programming assistant.")
puts response.content             # => "Ruby is a dynamic, open source..."
puts response.model               # => "gpt-4.1"
puts response.stop_reason         # => :end_turn
puts response.usage.input_tokens  # => 15
puts response.usage.output_tokens # => 120
```

### Streaming

```ruby
adapter.chat(messages, stream: true) do |delta|
  case delta.type
  when :text_delta
    print delta.text
  when :tool_use_start
    puts "\nCalling tool: #{delta.tool_name}"
  when :tool_use_delta
    # Partial JSON arguments being streamed
  end
end
# Returns a Response after streaming completes
```

### Tool Calling

Tools can be passed as `ToolDefinition` structs or plain hashes with `name`, `description`, and `parameters` keys (symbol or string). This makes it easy to integrate with tool registries that return plain hashes.

```ruby
# Define a tool using a struct
weather_tool = Dispatch::Adapter::ToolDefinition.new(
  name: "get_weather",
  description: "Get the current weather for a city",
  parameters: {
    "type" => "object",
    "properties" => {
      "city" => { "type" => "string", "description" => "City name" }
    },
    "required" => ["city"]
  }
)

# Send a message with tools available
messages = [Dispatch::Adapter::Message.new(role: "user", content: "What's the weather in Tokyo?")]
response = adapter.chat(messages, tools: [weather_tool])

if response.stop_reason == :tool_use
  # The model wants to call a tool
  tool_call = response.tool_calls.first
  puts tool_call.name      # => "get_weather"
  puts tool_call.arguments # => {"city" => "Tokyo"}
  puts tool_call.id        # => "call_abc123"

  # Execute the tool, then send the result back
  tool_result = Dispatch::Adapter::ToolResultBlock.new(
    tool_use_id: tool_call.id,
    content: "72F and sunny"
  )

  followup = [
    *messages,
    Dispatch::Adapter::Message.new(role: "assistant", content: [tool_call]),
    Dispatch::Adapter::Message.new(role: "user", content: [tool_result])
  ]

  final_response = adapter.chat(followup, tools: [weather_tool])
  puts final_response.content # => "The weather in Tokyo is 72F and sunny!"
end
```

You can also pass plain hashes instead of `ToolDefinition` structs:

```ruby
# Plain hash (e.g. from a tool registry)
tools = [{
  name: "get_weather",
  description: "Get weather",
  parameters: { "type" => "object", "properties" => { "city" => { "type" => "string" } } }
}]
response = adapter.chat(messages, tools: tools)
```
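
The request/execute/respond round-trip above generalizes to a small driver loop. Below is a sketch of one; the `run_with_tools` helper and its `handlers` registry are hypothetical conveniences, not part of the gem, while the `Message` and `ToolResultBlock` structs are the canonical ones shown above:

```ruby
# Hypothetical helper: run a conversation, executing tool calls until the
# model returns a final answer. `handlers` maps tool names to callables.
def run_with_tools(adapter, messages, tools, handlers, max_rounds: 5)
  max_rounds.times do
    response = adapter.chat(messages, tools: tools)
    return response unless response.stop_reason == :tool_use

    # Execute every requested tool and wrap each output in a result block
    results = response.tool_calls.map do |call|
      output = handlers.fetch(call.name).call(call.arguments)
      Dispatch::Adapter::ToolResultBlock.new(tool_use_id: call.id, content: output.to_s)
    end

    # Echo the assistant's tool calls back, followed by the results
    messages += [
      Dispatch::Adapter::Message.new(role: "assistant", content: response.tool_calls),
      Dispatch::Adapter::Message.new(role: "user", content: results)
    ]
  end
  raise "gave up after #{max_rounds} tool rounds"
end
```

With the weather example, `handlers` might be `{ "get_weather" => ->(args) { fetch_weather(args["city"]) } }` for some `fetch_weather` of your own.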

### Thinking / Reasoning Models

For reasoning models like `o1`, `o3`, `o3-mini`, and `o4-mini`, you can control the thinking effort:

```ruby
# Set as default
adapter = Dispatch::Adapter::Copilot.new(model: "o3-mini", thinking: "high")

# Or override per-call
response = adapter.chat(messages, thinking: "low")

# Disable for a specific call (even with a constructor default)
response = adapter.chat(messages, thinking: nil)
```

Valid values: `"low"`, `"medium"`, `"high"`, or `nil` (disabled).

### Per-Call Max Tokens

```ruby
# Override the constructor default for a single call
response = adapter.chat(messages, max_tokens: 100)
```

### List Available Models

```ruby
models = adapter.list_models
models.each do |m|
  puts "#{m.id} (context: #{m.max_context_tokens} tokens)"
end
```

### Adapter Metadata

```ruby
adapter.model_name             # => "gpt-4.1"
adapter.provider_name          # => "GitHub Copilot"
adapter.max_context_tokens     # => 1047576
adapter.count_tokens(messages) # => -1 (not supported by Copilot)
```

## Canonical Types

All communication uses these structs (under `Dispatch::Adapter`):

| Struct | Purpose |
|---|---|
| `Message` | Chat message with `role` and `content` |
| `TextBlock` | Text content block |
| `ImageBlock` | Image content block (not yet supported) |
| `ToolUseBlock` | Tool call from the model |
| `ToolResultBlock` | Result you send back after executing a tool |
| `ToolDefinition` | Tool schema (name, description, JSON Schema parameters) |
| `Response` | Complete response with content, tool_calls, usage, stop_reason |
| `Usage` | Token counts (input, output, cache) |
| `StreamDelta` | Incremental streaming chunk |
| `ModelInfo` | Model metadata |

## Error Handling

All errors inherit from `Dispatch::Adapter::Error` (which inherits from `StandardError`):

```ruby
begin
  adapter.chat(messages)
rescue Dispatch::Adapter::AuthenticationError => e
  puts "Auth failed (#{e.status_code}): #{e.message}"
rescue Dispatch::Adapter::RateLimitError => e
  puts "Rate limited, retry after #{e.retry_after} seconds"
rescue Dispatch::Adapter::RequestError => e
  puts "Bad request (#{e.status_code}): #{e.message}"
rescue Dispatch::Adapter::ServerError => e
  puts "Server error (#{e.status_code}): #{e.message}"
rescue Dispatch::Adapter::ConnectionError => e
  puts "Network error: #{e.message}"
end
```
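
Because `RateLimitError` carries `retry_after`, a simple retry policy falls out naturally. Below is a sketch; the `chat_with_retry` helper and its backoff constants are hypothetical, and minimal stand-ins for the error classes are included so the snippet stands alone:

```ruby
# Stand-ins for the gem's error hierarchy (illustrative only).
module Dispatch
  module Adapter
    class Error < StandardError; end
    class RateLimitError < Error
      attr_reader :retry_after
      def initialize(message = "rate limited", retry_after: 1)
        super(message)
        @retry_after = retry_after
      end
    end
    class ServerError < Error; end
  end
end

# Hypothetical helper: retry rate limits and server errors, re-raise the rest.
def chat_with_retry(adapter, messages, attempts: 3)
  tries = 0
  begin
    adapter.chat(messages)
  rescue Dispatch::Adapter::RateLimitError => e
    raise if (tries += 1) >= attempts
    sleep(e.retry_after)  # the server told us how long to wait
    retry
  rescue Dispatch::Adapter::ServerError
    raise if (tries += 1) >= attempts
    sleep(0.1 * 2**tries) # exponential backoff for transient 5xx errors
    retry
  end
end
```

`AuthenticationError` and `RequestError` are deliberately not rescued here: retrying a bad token or a malformed request will never succeed.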

## Adapter Interface

All adapters subclass `Dispatch::Adapter::Base` and implement:

| Method | Returns | Required? |
|---|---|---|
| `chat(messages, system:, tools:, stream:, max_tokens:, thinking:, &block)` | `Response` | Yes |
| `model_name` | `String` | Yes |
| `count_tokens(messages, system:, tools:)` | `Integer` | No (default: `-1`) |
| `list_models` | `Array<ModelInfo>` | No |
| `provider_name` | `String` | No (default: class name) |
| `max_context_tokens` | `Integer` or `nil` | No (default: `nil`) |

## Supported Models

Any model available through the GitHub Copilot API, including:

- `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`
- `gpt-4o`, `gpt-4o-mini`
- `o1`, `o1-mini`, `o3`, `o3-mini`, `o4-mini`
- `claude-3.5-sonnet`, `claude-3.7-sonnet`
- `gemini-2.0-flash-001`

## Development

```bash
bundle install
bundle exec rspec   # Run tests (84 examples)
bundle exec rubocop # Run linter
```

## License