# Gemini CLI Observability Guide
Telemetry provides crucial data about the Gemini CLI's performance, health, and usage. By enabling it, you can monitor operations, debug issues, and optimize tool usage through traces, metrics, and structured logs.
This entire system is built on the **[OpenTelemetry] (OTEL)** standard, allowing you to send data to any compatible backend, from your local terminal to a cloud service.
[OpenTelemetry]: https://opentelemetry.io/
## Quick Start: Enabling Telemetry
You can enable telemetry in multiple ways. [Configuration](configuration.md) is primarily managed via the `.gemini/settings.json` file and environment variables, but CLI flags can override these settings for a specific session.
> **A Note on Sandbox Mode:** Telemetry is not compatible with sandbox mode at this time. Turn off sandbox mode before enabling telemetry. Tracked in #894.
**Order of Precedence:**
1. **CLI Flag (`--telemetry`):** This overrides all other settings for the current session.
2. **Workspace Settings File (`.gemini/settings.json`):** If no CLI flag is used, the `telemetry` value from this project-specific file is used.
3. **User Settings File (`~/.gemini/settings.json`):** If not set by a flag or workspace settings, the value from this global user file is used.
4. **Default:** If telemetry is not configured by a flag or in any settings file, it is disabled.
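The resolution order above can be sketched as a small shell function. This is illustrative only — `resolve_telemetry` and its use of `jq` are assumptions, not the CLI's actual implementation:

```shell
# Sketch of the precedence rules: flag > workspace > user > default.
# Hypothetical helper; the real CLI resolves this internally.
resolve_telemetry() {
  # 1. CLI flag overrides everything for this session.
  if [ "$1" = "--telemetry" ]; then echo true; return; fi
  # 2. Workspace settings file.
  ws=$(jq -r '.telemetry // empty' .gemini/settings.json 2>/dev/null)
  if [ -n "$ws" ]; then echo "$ws"; return; fi
  # 3. User settings file.
  us=$(jq -r '.telemetry // empty' "$HOME/.gemini/settings.json" 2>/dev/null)
  if [ -n "$us" ]; then echo "$us"; return; fi
  # 4. Default: disabled.
  echo false
}
```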
Add these lines to your workspace (`.gemini/settings.json`) or user (`~/.gemini/settings.json`) settings to enable telemetry:
```json
{
"telemetry": true,
"sandbox": false
}
```
#### Mode 1: Console Output (Default)
If you only set `"telemetry": true` and do nothing else, the CLI will output all telemetry data directly to your console. This is the simplest way to inspect events, metrics, and traces without any external tools.
#### Mode 2: Sending to a Collector
To send data to a local or remote OpenTelemetry collector, set the following environment variable:
```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```
The CLI sends data using the OTLP/gRPC protocol.
Learn more about OTEL exporter standard configuration in [documentation][otel-config-docs].
[otel-config-docs]: https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/
## Running an OTEL Collector
An OTEL Collector is a service that receives, processes, and exports telemetry data. Below are common setups.
### Configurations
Create a folder for the OTEL configurations:
```bash
mkdir -p .gemini/otel
```
### Local
This setup prints all telemetry from the Gemini CLI to your terminal using a local Docker container.
**1. Create a Configuration File**
Create the file `.gemini/otel/collector-local.yaml` with the following:
```bash
cat <<EOF > .gemini/otel/collector-local.yaml
receivers:
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
processors:
batch:
timeout: 1s
exporters:
debug:
verbosity: detailed
service:
telemetry:
logs:
level: "debug"
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [debug]
logs:
receivers: [otlp]
processors: [batch]
exporters: [debug]
EOF
```
**2. Run the Collector**
In your terminal, run this Docker command:
```bash
docker run --rm --name otel-collector-local \
-p 4317:4317 \
-v "$(pwd)/.gemini/otel/collector-local.yaml":/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:latest
```
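Once the container is up, you can check that something is listening on the OTLP port. This is a minimal sketch assuming `nc` (netcat) is installed; it is not part of the CLI or the collector:

```shell
# Probe a TCP port; prints "open" or "closed".
probe() {
  if nc -z "$1" "$2" 2>/dev/null; then echo open; else echo closed; fi
}
probe localhost 4317
```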
**3. Stop the Collector**
```bash
docker stop otel-collector-local
```
### Google Cloud
This setup sends all telemetry to Google Cloud for robust, long-term analysis.
**1. Prerequisites**
- A Google Cloud Project ID.
- **APIs Enabled**: Cloud Trace, Cloud Monitoring, Cloud Logging.
- **Authentication**: A Service Account with the roles `Cloud Trace Agent`, `Monitoring Metric Writer`, and `Logs Writer`. Ensure your environment is authenticated (e.g., via `gcloud auth application-default login` or a service account key file).
**2. Set environment variables**
Set the `GOOGLE_CLOUD_PROJECT`, `GOOGLE_CLOUD_LOCATION`, and `GOOGLE_GENAI_USE_VERTEXAI` environment variables:
```bash
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION" # e.g., us-central1
export GOOGLE_GENAI_USE_VERTEXAI=true
```
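Because the heredoc in the next step expands `${GOOGLE_CLOUD_PROJECT}` when the config file is created, it is worth failing fast if the variables are missing. A small sketch — the `check_env` helper is an assumption, not part of the CLI:

```shell
# Hypothetical pre-flight check: confirm required variables are non-empty.
check_env() {
  for v in GOOGLE_CLOUD_PROJECT GOOGLE_CLOUD_LOCATION; do
    if [ -z "$(eval echo "\$$v")" ]; then
      echo "missing: $v"
      return 1
    fi
  done
  echo "environment ok"
}
```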
**3. Create a Configuration File**
Create `.gemini/otel/collector-gcp.yaml`:
```bash
cat <<EOF > .gemini/otel/collector-gcp.yaml
receivers:
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
processors:
batch:
timeout: 1s
exporters:
googlecloud:
project: "${GOOGLE_CLOUD_PROJECT}"
metric:
prefix: "custom.googleapis.com/gemini_code"
log:
default_log_name: "gemini_code"
debug:
verbosity: detailed
service:
pipelines:
traces:
receivers: [otlp]
exporters: [googlecloud]
metrics:
receivers: [otlp]
exporters: [googlecloud]
logs:
receivers: [otlp]
exporters: [googlecloud]
EOF
```
**4. Run the Collector**
This command mounts your Google Cloud credentials into the container.
If using application default credentials:
```bash
docker run --rm --name otel-collector-gcp \
-p 4317:4317 \
--user "$(id -u):$(id -g)" \
-v "$HOME/.config/gcloud/application_default_credentials.json":/etc/gcp/credentials.json \
-e "GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp/credentials.json" \
-v "$(pwd)/.gemini/otel/collector-gcp.yaml":/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:latest --config /etc/otelcol-contrib/config.yaml
```
If using a service account key:
```bash
docker run --rm --name otel-collector-gcp \
-p 4317:4317 \
-v "/path/to/your/sa-key.json":/etc/gcp/sa-key.json:ro \
-e "GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp/sa-key.json" \
-v "$(pwd)/.gemini/otel/collector-gcp.yaml":/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:latest --config /etc/otelcol-contrib/config.yaml
```
Your telemetry data will now appear in Cloud Trace, Monitoring, and Logging.
**5. Stop the Collector**
```bash
docker stop otel-collector-gcp
```
---
## Data Reference: Logs & Metrics
### Logs
These are timestamped records of specific events.
- `gemini_code.config`: Fired once at startup with the CLI's configuration.
- **Attributes**:
- `model` (string)
- `sandbox_enabled` (boolean)
- `core_tools_enabled` (string)
- `approval_mode` (string)
- `vertex_ai_enabled` (boolean)
- `log_user_prompts_enabled` (boolean)
- `file_filtering_respect_git_ignore` (boolean)
- `file_filtering_allow_build_artifacts` (boolean)
- `gemini_code.user_prompt`: Fired when a user submits a prompt.
- **Attributes**:
- `prompt_char_count`
- `prompt` (omitted when `log_user_prompts_enabled` is false)
- `gemini_code.tool_call`: Fired for every function call.
- **Attributes**:
- `function_name`
- `function_args`
- `duration_ms`
- `success` (boolean)
- `error` (optional)
- `error_type` (optional)
- `gemini_code.api_request`: Fired when making a request to the Gemini API.
- **Attributes**:
- `model`
- `duration_ms`
- `prompt_token_count`
- `gemini_code.api_error`: Fired if the API request fails.
- **Attributes**:
- `model`
- `error`
- `error_type`
- `status_code`
- `duration_ms`
- `attempt`
- `gemini_code.api_response`: Fired upon receiving a response from the Gemini API.
- **Attributes**:
- `model`
- `status_code`
- `duration_ms`
- `error` (optional)
- `attempt`
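As an illustration, a `gemini_code.tool_call` event carrying the attributes above might render to JSON roughly like this. The envelope and every value here are hypothetical — the exact shape depends on your exporter:

```json
{
  "name": "gemini_code.tool_call",
  "attributes": {
    "function_name": "read_file",
    "function_args": "{\"path\": \"src/index.ts\"}",
    "duration_ms": 42,
    "success": true
  }
}
```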
### Metrics
These are numerical measurements of behavior over time.
- `gemini_code.session.count` (Counter, Int): Incremented once per CLI startup.
- `gemini_code.tool.call.count` (Counter, Int): Counts tool calls.
- **Attributes**:
- `function_name`
- `success` (boolean)
- `gemini_code.tool.call.latency` (Histogram, ms): Measures tool call latency.
- **Attributes**:
- `function_name`
- `gemini_code.api.request.count` (Counter, Int): Counts all API requests.
- **Attributes**:
- `model`
- `status_code`
- `error_type` (optional)
- `gemini_code.api.request.latency` (Histogram, ms): Measures API request latency.
- **Attributes**:
- `model`
- `gemini_code.token.input.count` (Counter, Int): Counts the total number of input tokens sent to the API.
- **Attributes**:
- `model`