path: root/packages/server/src/utils/nextSpeakerChecker.test.ts
Age    Commit message    Author
2025-05-30    Rename server->core (#638)    Tommaso Sciortino
2025-05-27    feat: Allow cancellation of in-progress Gemini requests and pre-execution checks    Taylor Mullen
- Implements cancellation for Gemini requests while they are actively being processed by the model.
- Extends cancellation support to the logic within tools. This allows users to cancel operations during the phase where the system is determining whether a tool execution requires user confirmation, which can include potentially long-running pre-flight checks or LLM-based corrections.
- Underlying LLM calls for edit corrections and next speaker checks can now also be cancelled.
- Previously, cancellation of the main request was not possible until text started streaming, and pre-execution checks were not cancellable.
- This change leverages the updated SDK's ability to accept an abort token and threads abort signals throughout the request, tool execution, and pre-execution check lifecycle.
Fixes https://github.com/google-gemini/gemini-cli/issues/531
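A minimal sketch of the threading idea described in this commit, not the actual gemini-cli API: the function names (`callModel`, `runPreExecutionChecks`, `sendMessage`) are illustrative stand-ins, and the only real point is that one `AbortSignal` flows through the model call and the pre-execution checks so a single abort cancels the whole chain.

```typescript
// Hypothetical sketch only: one AbortSignal is passed to the model call and the
// pre-execution checks, so a single abort cancels the whole chain.

async function callModel(prompt: string, signal: AbortSignal): Promise<string> {
  // Stand-in for the real SDK call, which would accept the signal directly.
  if (signal.aborted) throw new DOMException('Aborted', 'AbortError');
  return `model response to: ${prompt}`;
}

async function runPreExecutionChecks(signal: AbortSignal): Promise<void> {
  // Stand-in for tool-confirmation / pre-flight checks that may call an LLM.
  if (signal.aborted) throw new DOMException('Aborted', 'AbortError');
}

async function sendMessage(prompt: string, signal: AbortSignal): Promise<string> {
  const response = await callModel(prompt, signal);
  await runPreExecutionChecks(signal);
  return response;
}

// Usage: an AbortController owned by the caller can cancel at any point.
const controller = new AbortController();
const pending = sendMessage('refactor this file', controller.signal);
// controller.abort(); // rejects `pending` the next time the signal is checked
void pending;
```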
2025-05-26    Fix(chat): Finalize next speaker detection logic    Taylor Mullen
- Enhance `checkNextSpeaker` to handle cases where the last message is a function response or an empty model message.
- If the last message is a function response, the model should speak next.
- If the last message is an empty model message, the model should speak next.
- This ensures more robust and accurate determination of the next speaker in the conversation, completing the fix for the issue.
- Updated tests.
Fixes https://github.com/google-gemini/gemini-cli/issues/551
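A hedged sketch of the two edge cases this commit describes; the real `checkNextSpeaker` also consults the model and uses types from `@google/genai`, so the simplified types and function name below are assumptions for illustration only.

```typescript
// Simplified stand-ins for the @google/genai content types.
type Part = { text?: string; functionResponse?: Record<string, unknown> };
type Content = { role: 'user' | 'model'; parts: Part[] };

function nextSpeakerFromLastMessage(history: Content[]): 'model' | undefined {
  const last = history[history.length - 1];
  if (!last) return undefined;

  // Case 1: the last message carries a function (tool) response, so the model
  // still has to react to the tool output.
  if (last.parts.some((p) => p.functionResponse !== undefined)) {
    return 'model';
  }

  // Case 2: the last message is an empty model message, so the model has not
  // actually produced any text yet and should continue.
  if (last.role === 'model' && last.parts.every((p) => !p.text?.trim())) {
    return 'model';
  }

  // Anything else falls through to the LLM-based next-speaker check.
  return undefined;
}
```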
2025-05-26    Refactor(chat): Introduce custom Chat class for future modifications    Taylor Mullen
- Copied the `Chat` class from `@google/genai` into `packages/server/src/core/geminiChat.ts`.
- This change is in preparation for future modifications to the chat handling logic.
- Updated relevant files to use the new `GeminiChat` class.
Part of https://github.com/google-gemini/gemini-cli/issues/551
2025-05-10    Don't prematurely end convo w/ Gemini.    Taylor Mullen
- There seems to be a root model bug where the model will preemptively bail on conversations without trying harder. Typically the stops are VERY obvious and bug-looking, where you need to prompt the model to "continue".
- This PR attempts to fix the above by running a 2.0-flash request (don't need something more powerful) at the end of every full interaction to see who should speak (user or model).
- Add tests for nextSpeakerChecker
Fixes https://b.corp.google.com/issues/416826051
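A hedged sketch of the idea behind this commit, not the real implementation in `nextSpeakerChecker.ts`: after each full interaction, a cheap flash-model call classifies who should speak next. The `generate` parameter, the prompt wording, and the `NextSpeakerResult` shape are assumptions for illustration.

```typescript
interface NextSpeakerResult {
  reasoning: string;
  next_speaker: 'user' | 'model';
}

async function checkNextSpeakerSketch(
  transcript: string,
  generate: (prompt: string) => Promise<string>, // stand-in for the flash-model client
): Promise<NextSpeakerResult | null> {
  const prompt =
    'Given the conversation below, decide whether the "user" or the "model" should ' +
    'speak next. Respond with JSON of the form ' +
    '{"reasoning": "...", "next_speaker": "user" | "model"}.\n\n' +
    transcript;
  try {
    return JSON.parse(await generate(prompt)) as NextSpeakerResult;
  } catch {
    // If the cheap classification call fails or returns malformed JSON, fall back
    // to handing the turn to the user rather than looping the model.
    return null;
  }
}
```

If the result says the model should speak next, the caller can issue a "continue" turn automatically instead of stopping and waiting for the user.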
- There seems to be a root model bug where the model will preemptively bail on conversations without trying harder. Typically the stops are VERY obvious and bug-looking where you need to prmopt the model to "continue". - This PR attempts to fix the above by running a 2.0-flash request (don't need somethign more powerful) at the end of every full interaction to see who should speak (user or model). - Add tests for nextSpeakerChecker Fixes https://b.corp.google.com/issues/416826051