# Silent Mode (`no_output`)
Run Flock quietly when embedded in other applications or deployed as a service.
When using Flock as part of a larger application, you may want to suppress the decorative terminal output (banners, Rich tables, streaming displays) while keeping logs available for debugging.
## Quick Start
```python
from flock import Flock

# Suppress all decorative terminal output
flock = Flock("openai/gpt-4.1", no_output=True)
```
That's it! The banner, Rich tables, and streaming displays are now suppressed. Only programmatic output (your own print() statements) and configured logging will appear.
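For example, your own output channels keep working exactly as before (a minimal sketch; `logging.basicConfig` is just one generic way to attach a handler, not a Flock-specific helper):

```python
import logging

from flock import Flock

logging.basicConfig(level=logging.INFO)

# Decorative output off, programmatic output untouched
flock = Flock("openai/gpt-4.1", no_output=True)

print("pipeline starting")           # still printed to stdout
logging.info("Flock initialized")    # still emitted through your logging config
```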
## What Gets Suppressed
| Output Type | Default | With `no_output=True` |
|---|---|---|
| Startup banner | ✅ Shown | ❌ Hidden |
| Rich streaming display | ✅ Shown | ❌ Hidden |
| Agent output tables | ✅ Shown | ❌ Hidden |
| Your `print()` statements | ✅ Shown | ✅ Shown |
| Logging output | ✅ Shown | ✅ Shown |
## Use Cases
### Running as a Service
When Flock is embedded in a web service, the decorative output clutters logs:
```python
from fastapi import FastAPI
from flock import Flock, flock_type
from pydantic import BaseModel

app = FastAPI()

# Silent mode for service deployment
flock = Flock("openai/gpt-4.1", no_output=True)


@flock_type
class Query(BaseModel):
    text: str


@flock_type
class Response(BaseModel):
    answer: str


agent = flock.agent("responder").consumes(Query).publishes(Response)


@app.post("/ask")
async def ask(query: Query):
    await flock.publish(query)
    await flock.run_until_idle()
    responses = await flock.get_artifacts(Response)
    return {"answer": responses[0].answer}
```
### Batch Processing
When running many jobs, the output would be overwhelming:
```python
from flock import Flock

flock = Flock("openai/gpt-4.1", no_output=True)

# Process 1000 items without terminal spam
for item in items:  # items: your iterable of @flock_type artifacts
    await flock.publish(item)
    await flock.run_until_idle()

# Your summary output only
print(f"Processed {len(items)} items")
```
### Testing
Keep test output clean:
```python
import pytest

from flock import Flock


@pytest.fixture
def silent_flock():
    return Flock("openai/gpt-4.1", no_output=True)


def test_agent_workflow(silent_flock):
    # Tests run without decorative output
    ...
```
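A fuller test might drive a small workflow end to end. The sketch below reuses the `Query`/`Response` models and the publish/run APIs from the service example above, and assumes `pytest-asyncio` for the async test function; adapt it to your own fixtures and types:

```python
import pytest


@pytest.mark.asyncio  # assumes pytest-asyncio is installed
async def test_query_gets_answered(silent_flock):
    silent_flock.agent("responder").consumes(Query).publishes(Response)

    await silent_flock.publish(Query(text="What is silent mode?"))
    await silent_flock.run_until_idle()

    responses = await silent_flock.get_artifacts(Response)
    assert len(responses) == 1  # test output stays clean: no banners or tables
```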
### CI/CD Pipelines
Avoid cluttering CI logs:
```python
import os

from flock import Flock

# Auto-enable silent mode in CI
no_output = os.getenv("CI", "false").lower() == "true"

flock = Flock("openai/gpt-4.1", no_output=no_output)
```
## Propagation to Components
The no_output flag automatically propagates to all engines and components:
```python
from flock import Flock, DSPyEngine

flock = Flock("openai/gpt-4.1", no_output=True)

# Custom engines automatically inherit no_output
custom_engine = DSPyEngine(model="openai/gpt-4.1")

# Input and Output are @flock_type models defined elsewhere
agent = (
    flock.agent("processor")
    .consumes(Input)
    .publishes(Output)
    .with_engines(custom_engine)  # no_output=True is propagated
)
```
This works for:
- ✅ Default `DSPyEngine`
- ✅ Custom engines added via `.with_engines()`
- ✅ Default `OutputUtilityComponent`
- ✅ Custom utilities added via `.with_utilities()`
## Combining with Logging
Silent mode suppresses decorative output but not logging. Configure logging for production visibility:
```python
import logging

from flock import Flock
from flock.logging.logging import configure_logging

# Enable structured logging
configure_logging(level=logging.INFO)

# Suppress decorative output
flock = Flock("openai/gpt-4.1", no_output=True)

# Logs still appear:
# INFO:flock.core.orchestrator:Agent 'processor' executed successfully
```
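If a specific Flock logger is noisier than you want, you can raise its threshold with the standard library API (a small sketch; the logger name is taken from the sample log line above):

```python
import logging

# Keep your own INFO logs, but only show warnings and above from this logger
logging.getLogger("flock.core.orchestrator").setLevel(logging.WARNING)
```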
### Recommended Production Configuration
```python
import logging
import os

from flock import Flock
from flock.logging.logging import configure_logging

# Production settings
configure_logging(
    level=logging.INFO if os.getenv("DEBUG") else logging.WARNING
)

flock = Flock(
    os.getenv("DEFAULT_MODEL", "openai/gpt-4.1"),
    no_output=True,  # Always silent in production
)
```
## Dashboard Compatibility
Silent mode works with the dashboard. The dashboard receives data via WebSocket, not terminal output:
```python
flock = Flock("openai/gpt-4.1", no_output=True)

# Dashboard still works!
await flock.serve(dashboard=True)  # Terminal is quiet, dashboard shows everything
```
## Environment Variable Alternative
You can also control silent mode with an environment variable read in your code:
```python
import os

from flock import Flock

flock = Flock(
    "openai/gpt-4.1",
    no_output=os.getenv("FLOCK_NO_OUTPUT", "false").lower() == "true",
)
```
Then set `FLOCK_NO_OUTPUT=true` in your deployment environment.
## See Also
- Configuration Reference - All environment variables
- Distributed Tracing - Production observability
- Testing Strategies - Test configuration best practices