---
title: Benchmark Environment Server
emoji: πŸ•ΉοΈ
colorFrom: purple
colorTo: blue
sdk: docker
pinned: false
app_port: 8000
base_path: /web
tags:
  - openenv
---

# Benchmark Environment

A test environment for benchmarking infrastructure and concurrency. Actions specify how many seconds to wait (sleep), making it ideal for testing parallel execution and server scaling. Returns server identity information to verify which instance handled each request.

## Quick Start

The simplest way to use the Benchmark environment is through the `BenchmarkEnv` class:

```python
from benchmark import BenchmarkAction, BenchmarkEnv

# Create environment from Docker image
benchmarkenv = BenchmarkEnv.from_docker_image("benchmark-env:latest")

try:
    # Reset - get server identity
    result = benchmarkenv.reset()
    print(f"Host URL: {result.observation.host_url}")
    print(f"PID: {result.observation.pid}")
    print(f"Session Hash: {result.observation.session_hash}")

    # Test concurrency with different wait times
    wait_times = [0.5, 1.0, 2.0]

    for seconds in wait_times:
        result = benchmarkenv.step(BenchmarkAction(wait_seconds=seconds))
        print(f"Waited: {result.observation.waited_seconds}s")
        print(f"  β†’ Timestamp: {result.observation.timestamp}")
        print(f"  β†’ Reward: {result.reward}")
        print(f"  β†’ Server PID: {result.observation.pid}")

finally:
    # Always clean up
    benchmarkenv.close()
```

That's it! The `BenchmarkEnv.from_docker_image()` method handles:
- Starting the Docker container
- Waiting for the server to be ready
- Connecting to the environment
- Container cleanup when you call `close()`

## Testing Concurrency

The benchmark environment is designed to test concurrent execution:

```python
import concurrent.futures

from benchmark import BenchmarkAction, BenchmarkEnv

def parallel_requests():
    # Connect to multiple servers (or several clients against the same server)
    clients = [
        BenchmarkEnv(base_url="http://localhost:8000"),
        BenchmarkEnv(base_url="http://localhost:8001"),
        BenchmarkEnv(base_url="http://localhost:8002"),
    ]

    # Reset all clients
    for client in clients:
        result = client.reset()
        print(f"Server {result.observation.session_hash}: PID {result.observation.pid}")

    # Send concurrent requests with different wait times
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        futures = []
        for i, client in enumerate(clients):
            future = executor.submit(
                client.step,
                BenchmarkAction(wait_seconds=i + 1)
            )
            futures.append((client, future))

        for client, future in futures:
            result = future.result()
            print(f"Server {result.observation.session_hash} waited {result.observation.waited_seconds}s")

    # Clean up
    for client in clients:
        client.close()

parallel_requests()
```

## Building the Docker Image

Before using the environment, you need to build the Docker image:

```bash
# From project root
docker build -t benchmark-env:latest -f server/Dockerfile .
```

## Deploying to Hugging Face Spaces

You can easily deploy your OpenEnv environment to Hugging Face Spaces using the `openenv push` command:

```bash
# From the environment directory (where openenv.yaml is located)
openenv push

# Or specify options
openenv push --namespace my-org --private
```

The `openenv push` command will:
1. Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
2. Prepare a custom build for a Hugging Face Docker Space (enables the web interface)
3. Upload to Hugging Face (ensuring you're logged in)

### Prerequisites

- Authenticate with Hugging Face: The command will prompt for login if not already authenticated

### Options

- `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to current directory)
- `--repo-id`, `-r`: Repository ID in format 'username/repo-name' (defaults to 'username/env-name' from openenv.yaml)
- `--base-image`, `-b`: Base Docker image to use (overrides Dockerfile FROM)
- `--private`: Deploy the space as private (default: public)

### Examples

```bash
# Push to your personal namespace (defaults to username/env-name from openenv.yaml)
openenv push

# Push to a specific repository
openenv push --repo-id my-org/my-env

# Push with a custom base image
openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest

# Push as a private space
openenv push --private

# Combine options
openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
```

After deployment, your space will be available at:
`https://huggingface.co/spaces/<repo-id>`

The deployed space includes:
- **Web Interface** at `/web` - Interactive UI for exploring the environment
- **API Documentation** at `/docs` - Full OpenAPI/Swagger interface
- **Health Check** at `/health` - Container health monitoring
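
For a quick programmatic check that a deployed container is up, you can hit the `/health` endpoint. Below is a minimal sketch using `requests`; the Space URL is a placeholder for your own deployment's direct URL.

```python
import requests

# Placeholder -- substitute the direct URL of your deployed Space
space_url = "https://<owner>-<env-name>.hf.space"

# Query the container's health-check endpoint
resp = requests.get(f"{space_url}/health", timeout=10)
print(resp.status_code)  # 200 means the container reports healthy
```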

## Environment Details

### Action
**BenchmarkAction**: Contains a single field
- `wait_seconds` (float) - Seconds to wait/sleep before returning (default: 0.0)

### Observation
**BenchmarkObservation**: Contains server identity and timing information
- `host_url` (str) - The URL of the server that handled the request
- `pid` (int) - Process ID of the server
- `session_hash` (str) - Unique 16-character hash identifying this server session
- `waited_seconds` (float) - Actual seconds waited
- `timestamp` (float) - Unix timestamp when observation was created
- `reward` (float) - Reward based on wait time
- `done` (bool) - Always False for benchmark environment
- `metadata` (dict) - Additional info
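
To make the field lists above concrete, here is a rough dataclass-style sketch of the two models. This is illustrative only; the actual definitions live in `models.py` and may differ (for example, by subclassing OpenEnv's base `Action`/`Observation` types).

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- not the exact contents of models.py
@dataclass
class BenchmarkAction:
    wait_seconds: float = 0.0     # seconds to sleep before returning

@dataclass
class BenchmarkObservation:
    host_url: str = ""            # URL of the server that handled the request
    pid: int = 0                  # process ID of the server
    session_hash: str = ""        # 16-character hash for this server session
    waited_seconds: float = 0.0   # actual seconds waited
    timestamp: float = 0.0        # Unix timestamp when the observation was created
    reward: float = 0.0           # 1.0 / (1.0 + wait_seconds)
    done: bool = False            # always False for this environment
    metadata: dict = field(default_factory=dict)  # additional info
```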

### Reward
The reward is calculated as: `1.0 / (1.0 + wait_seconds)`
- 0 seconds β†’ reward: 1.0
- 1 second β†’ reward: 0.5
- 2 seconds β†’ reward: ~0.33
- Encourages faster responses
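
As a tiny sketch (the helper name `reward_for` is just for illustration), the formula and the values above can be checked like this:

```python
def reward_for(wait_seconds: float) -> float:
    # Reward decays with wait time: 0s -> 1.0, 1s -> 0.5, 2s -> ~0.33
    return 1.0 / (1.0 + wait_seconds)

for s in (0.0, 1.0, 2.0):
    print(s, reward_for(s))
```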

## Advanced Usage

### Connecting to an Existing Server

If you already have a Benchmark environment server running, you can connect directly:

```python
from benchmark import BenchmarkEnv, BenchmarkAction

# Connect to existing server
benchmarkenv = BenchmarkEnv(base_url="<ENV_HTTP_URL_HERE>")

# Use as normal
result = benchmarkenv.reset()
print(f"Connected to server: {result.observation.host_url}")
print(f"Session: {result.observation.session_hash}")

result = benchmarkenv.step(BenchmarkAction(wait_seconds=1.5))
print(f"Waited {result.observation.waited_seconds}s")
```

Note: When connecting to an existing server, `benchmarkenv.close()` will NOT stop the server.

## Development & Testing

### Direct Environment Testing

Test the environment logic directly without starting the HTTP server:

```bash
# From the environment directory
python3 server/benchmark_environment.py
```

This verifies that:
- Environment resets correctly
- Step executes actions properly
- State tracking works
- Server identity is returned correctly
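
If you prefer to script a similar smoke test yourself, a sketch along the following lines should work. The import path and the `BenchmarkEnvironment` class name are assumptions about `server/benchmark_environment.py`, not a documented API.

```python
# Hypothetical sketch -- assumes server/benchmark_environment.py exposes a
# BenchmarkEnvironment class with reset() and step(action) methods.
from benchmark.server.benchmark_environment import BenchmarkEnvironment
from benchmark import BenchmarkAction

env = BenchmarkEnvironment()

obs = env.reset()
print(obs)  # should carry pid and session_hash

obs = env.step(BenchmarkAction(wait_seconds=0.1))
print(obs)  # waited_seconds should be roughly 0.1
```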

### Running Locally

Run the server locally for development:

```bash
uvicorn server.app:app --reload
```
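
With the dev server running (uvicorn defaults to port 8000, matching `app_port` in the Space config), you can point the client at it directly:

```python
from benchmark import BenchmarkAction, BenchmarkEnv

# Assumes the uvicorn dev server above is listening on localhost:8000
benchmarkenv = BenchmarkEnv(base_url="http://localhost:8000")

result = benchmarkenv.reset()
print(f"Dev server PID: {result.observation.pid}")

result = benchmarkenv.step(BenchmarkAction(wait_seconds=0.2))
print(f"Waited {result.observation.waited_seconds}s, reward {result.reward}")

benchmarkenv.close()  # does not stop the uvicorn process
```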

## Project Structure

```
benchmark/
β”œβ”€β”€ .dockerignore          # Docker build exclusions
β”œβ”€β”€ __init__.py            # Module exports
β”œβ”€β”€ README.md              # This file
β”œβ”€β”€ openenv.yaml           # OpenEnv manifest
β”œβ”€β”€ pyproject.toml         # Project metadata and dependencies
β”œβ”€β”€ uv.lock                # Locked dependencies (generated)
β”œβ”€β”€ client.py              # BenchmarkEnv client implementation
β”œβ”€β”€ models.py              # Action and Observation models
└── server/
    β”œβ”€β”€ __init__.py        # Server module exports
    β”œβ”€β”€ benchmark_environment.py  # Core environment logic
    β”œβ”€β”€ app.py             # FastAPI application
    └── Dockerfile         # Container image definition
```