Files
claude-hub/cli/__tests__/commands/list.test.ts
Cheffromspace f0edb5695f feat: Add CLI for managing autonomous Claude Code container sessions (#166)
* feat: Add CLI for managing autonomous Claude Code container sessions

This commit implements a new CLI tool 'claude-hub' for managing autonomous Claude Code container sessions. The CLI provides commands for:

- Starting autonomous sessions (start)
- Listing active/completed sessions (list)
- Viewing session logs (logs)
- Continuing sessions with new commands (continue)
- Stopping sessions (stop)

Each session runs in an isolated Docker container and maintains its state across interactions. The implementation includes session management, Docker container operations, and a comprehensive command-line interface.

Resolves #133

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: Complete autonomous CLI feature implementation

This commit adds the following enhancements to the autonomous Claude CLI:

- Add --issue flag to start command for GitHub issue context
- Implement start-batch command with tasks.yaml support
- Enhance PR flag functionality for better context integration
- Implement session recovery mechanism with recover and sync commands
- Add comprehensive documentation for all CLI commands

Resolves all remaining requirements from issue #133

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* test: Add comprehensive test coverage for CLI

- Add unit tests for SessionManager utility
- Add simplified unit tests for DockerUtils utility
- Add integration tests for start and start-batch commands
- Configure Jest with TypeScript support
- Add test mocks for Docker API and filesystem
- Add test fixtures for batch processing
- Document testing approach in README
- Add code coverage reporting

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* ci: Add CLI tests workflow and configure stable test suite

- Create dedicated GitHub workflow for CLI tests
- Update CLI test script to run only stable tests
- Add test:all script for running all tests locally

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: Improve CLI with TypeScript fixes and CI enhancements

- Fix TypeScript Promise handling in list.ts and stop.ts
- Update CI workflow to add build step and run all tests
- Move ora dependency from devDependencies to dependencies
- Update Docker build path to use repository root
- Improve CLI script organization in package.json

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: Skip Docker-dependent tests in CI

- Update test scripts to exclude dockerUtils tests
- Add SKIP_DOCKER_TESTS environment variable to CI workflow
- Remove dockerUtils.simple.test.ts from specific tests

This prevents timeouts in CI caused by Docker tests.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: Refine test patterns to exclude only full Docker tests

- Replace testPathIgnorePatterns with more precise glob patterns
- Ensure dockerUtils.simple.test.ts is still included in the test runs
- Keep specific tests command with all relevant tests

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: Update Jest test patterns to correctly match test files

The previous glob pattern '__tests__/\!(utils/dockerUtils.test).ts' was not finding any tests because it was looking for .ts files directly in the __tests__ folder, but all test files are in subdirectories. Fixed by using Jest's testPathIgnorePatterns option instead.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* test: Add tests for CLI list and continue commands

Added comprehensive test coverage for the CLI list and continue commands:
- Added list.test.ts with tests for all filtering options and edge cases
- Added continue.test.ts with tests for successful continuation and error cases
- Both files achieve full coverage of their respective commands

These new tests help improve the overall test coverage for the CLI commands module.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* test: Add comprehensive tests for CLI logs, recover, and stop commands

Added test coverage for remaining CLI commands:
- logs.test.ts - tests for logs command functionality (94.54% coverage)
- recover.test.ts - tests for recover and sync commands (100% coverage)
- stop.test.ts - tests for stop command with single and all sessions (95.71% coverage)

These tests dramatically improve the overall commands module coverage from 56% to 97%.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: Align PR review prompt header with test expectations

The PR review prompt header in githubController.ts now matches what the test expects in
githubController-check-suite.test.js, fixing the failing test.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-06-02 12:03:20 -05:00

195 lines
6.0 KiB
TypeScript

import { Command } from 'commander';
import { registerListCommand } from '../../src/commands/list';
import { SessionManager } from '../../src/utils/sessionManager';
import { DockerUtils } from '../../src/utils/dockerUtils';
import { SessionConfig } from '../../src/types/session';
// Mock dependencies
jest.mock('../../src/utils/sessionManager');
jest.mock('../../src/utils/dockerUtils');
jest.mock('cli-table3', () => {
return jest.fn().mockImplementation(() => {
return {
push: jest.fn(),
toString: jest.fn().mockReturnValue('mocked-table')
};
});
});
// Mock console methods
const mockConsoleLog = jest.spyOn(console, 'log').mockImplementation();
const mockConsoleError = jest.spyOn(console, 'error').mockImplementation();
describe('List Command', () => {
let program: Command;
let mockListSessions: jest.Mock;
beforeEach(() => {
// Clear all mocks
jest.clearAllMocks();
// Setup program
program = new Command();
// Setup SessionManager mock
mockListSessions = jest.fn();
(SessionManager as jest.Mock).mockImplementation(() => ({
listSessions: mockListSessions
}));
// Register the command
registerListCommand(program);
});
afterEach(() => {
mockConsoleLog.mockClear();
mockConsoleError.mockClear();
});
const mockSessions: SessionConfig[] = [
{
id: 'session1',
repoFullName: 'user/repo1',
containerId: 'container1',
command: 'help me with this code',
status: 'running',
createdAt: '2025-06-01T10:00:00Z',
updatedAt: '2025-06-01T10:05:00Z'
},
{
id: 'session2',
repoFullName: 'user/repo2',
containerId: 'container2',
command: 'explain this function',
status: 'completed',
createdAt: '2025-05-31T09:00:00Z',
updatedAt: '2025-05-31T09:10:00Z'
}
];
it('should list sessions with default options', async () => {
// Setup mock to return sessions
mockListSessions.mockResolvedValue(mockSessions);
// Execute the command
await program.parseAsync(['node', 'test', 'list']);
// Check if listSessions was called with correct options
expect(mockListSessions).toHaveBeenCalledWith({
status: undefined,
repo: undefined,
limit: 10
});
// Verify output
expect(mockConsoleLog).toHaveBeenCalledWith('mocked-table');
expect(mockConsoleLog).toHaveBeenCalledWith(expect.stringContaining('Use'));
});
it('should list sessions with status filter', async () => {
// Setup mock to return filtered sessions
mockListSessions.mockResolvedValue([mockSessions[0]]);
// Execute the command
await program.parseAsync(['node', 'test', 'list', '--status', 'running']);
// Check if listSessions was called with correct options
expect(mockListSessions).toHaveBeenCalledWith({
status: 'running',
repo: undefined,
limit: 10
});
});
it('should list sessions with repo filter', async () => {
// Setup mock to return filtered sessions
mockListSessions.mockResolvedValue([mockSessions[0]]);
// Execute the command
await program.parseAsync(['node', 'test', 'list', '--repo', 'user/repo1']);
// Check if listSessions was called with correct options
expect(mockListSessions).toHaveBeenCalledWith({
status: undefined,
repo: 'user/repo1',
limit: 10
});
});
it('should list sessions with limit', async () => {
// Setup mock to return sessions
mockListSessions.mockResolvedValue([mockSessions[0]]);
// Execute the command
await program.parseAsync(['node', 'test', 'list', '--limit', '1']);
// Check if listSessions was called with correct options
expect(mockListSessions).toHaveBeenCalledWith({
status: undefined,
repo: undefined,
limit: 1
});
});
it('should output as JSON when --json flag is used', async () => {
// Setup mock to return sessions
mockListSessions.mockResolvedValue(mockSessions);
// Execute the command
await program.parseAsync(['node', 'test', 'list', '--json']);
// Verify JSON output
expect(mockConsoleLog).toHaveBeenCalledWith(JSON.stringify(mockSessions, null, 2));
});
it('should show message when no sessions found', async () => {
// Setup mock to return empty array
mockListSessions.mockResolvedValue([]);
// Execute the command
await program.parseAsync(['node', 'test', 'list']);
// Verify output
expect(mockConsoleLog).toHaveBeenCalledWith('No sessions found matching the criteria.');
});
it('should show empty JSON array when no sessions found with --json flag', async () => {
// Setup mock to return empty array
mockListSessions.mockResolvedValue([]);
// Execute the command
await program.parseAsync(['node', 'test', 'list', '--json']);
// Verify output
expect(mockConsoleLog).toHaveBeenCalledWith('[]');
});
it('should reject invalid status values', async () => {
// Execute the command with invalid status
await program.parseAsync(['node', 'test', 'list', '--status', 'invalid']);
// Verify error message
expect(mockConsoleError).toHaveBeenCalledWith(expect.stringContaining('Invalid status'));
expect(mockListSessions).not.toHaveBeenCalled();
});
it('should reject invalid limit values', async () => {
// Execute the command with invalid limit
await program.parseAsync(['node', 'test', 'list', '--limit', '-1']);
// Verify error message
expect(mockConsoleError).toHaveBeenCalledWith('Limit must be a positive number');
expect(mockListSessions).not.toHaveBeenCalled();
});
it('should handle errors from sessionManager', async () => {
// Setup mock to throw error
mockListSessions.mockRejectedValue(new Error('Database error'));
// Execute the command
await program.parseAsync(['node', 'test', 'list']);
// Verify error message
expect(mockConsoleError).toHaveBeenCalledWith('Error listing sessions: Database error');
});
});