REST API Reference
Parquet-first edge gateway for bulk financial data. All endpoints stream ZSTD-compressed Parquet files — load directly into DuckDB, Pandas, or Polars.
https://data.valuein.biz

Authentication
Authenticated endpoints require a Bearer token in the Authorization header. Tokens are provisioned automatically when you subscribe via Stripe. The /v1/sample/* endpoints are always public — no token required.
Unauthenticated (sample tier)
$ curl https://data.valuein.biz/v1/sample/entity \
  --output entity.parquet

Authenticated (sp500 / full tier)
$ curl https://data.valuein.biz/v1/sp500/fact \
-H "Authorization: Bearer YOUR_TOKEN" \
  --output fact.parquet

Check your token plan
$ curl https://data.valuein.biz/v1/me \
  -H "Authorization: Bearer YOUR_TOKEN"

{
  "plan": "sp500",
  "status": "active",
  "email": "[email protected]"
}

Response Format
Data endpoints return raw Parquet bytes. The Content-Type is application/octet-stream. Files are ZSTD-compressed — DuckDB, Pandas, and Polars decompress automatically via read_parquet(). Non-data endpoints return application/json.
| Response Type | Content-Type | Endpoints |
|---|---|---|
| Parquet stream | application/octet-stream | /v1/sample/*, /v1/sp500/*, /v1/full/* |
| JSON | application/json | /health, /v1/me, /v1/manifest, /v1/usage |
Plans
Your token's plan determines which bucket you can access. A higher plan grants access to all lower tiers as well.
| Plan | Auth Required | Bucket | Coverage |
|---|---|---|---|
| sample | No | R2_SAMPLE | Public 5-year S&P 500 slice |
| sp500 | Yes | R2_SP500 | Full S&P 500 history 1994–present |
| full | Yes | R2_FULL | Full 12,000+ ticker universe including delisted |
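The tier hierarchy above reduces to a rank comparison. A sketch with hypothetical helpers (`PLAN_RANK` and `can_access` are illustrative, not part of the API):

```python
# Higher plans grant access to all lower tiers.
PLAN_RANK = {"sample": 0, "sp500": 1, "full": 2}

def can_access(token_plan: str, endpoint_tier: str) -> bool:
    """True if a token on token_plan may hit an endpoint_tier bucket."""
    return PLAN_RANK[token_plan] >= PLAN_RANK[endpoint_tier]

assert can_access("full", "sp500")       # full covers sp500
assert not can_access("sample", "full")  # sample cannot read the full bucket
```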
Endpoints
/health
Liveness check. Returns 200 OK with service status.

{ "status": "ok", "ts": "2026-04-11T00:00:00.000Z" }

/v1/me
Returns token metadata: plan, status, email, and token prefix.

{ "plan": "sp500", "status": "active", "email": "[email protected]" }

/v1/manifest
Returns available tables and the last snapshot timestamp for your plan tier.

{ "snapshot": "snapshot_20260411", "last_updated": "2026-04-11T00:00:00Z", "tables": [...] }

/v1/sample/manifest
Public sample tier manifest — no token required. Includes upgrade CTA.

{ "snapshot": "snapshot_20260411", "tables": [...], "upgrade_url": "/pricing" }

/v1/sample/{table}
Parquet stream from the public sample bucket (5-year S&P 500 slice). No token required. Valid tables: entity, security, filing, fact, valuation, taxonomy_guide, index_membership, references.

application/octet-stream — raw Parquet bytes

/v1/sp500/{table}
Parquet stream from the S&P 500 bucket. Requires an sp500 or full plan token. Full history, 500+ tickers.

application/octet-stream — raw Parquet bytes

/v1/full/{table}
Parquet stream from the full universe bucket. Requires a full plan token. 12,000+ tickers including delisted companies.

application/octet-stream — raw Parquet bytes

/v1/usage
Returns daily API call counts, error rates, and per-table breakdowns for the last N days (default 7, max 30).
Query Parameters
days
Number of days to return (1–30). Defaults to 7.

{ "period_days": 7, "total_calls": 1420, "error_rate": 0.012, "daily": [...], "table_breakdown": {...} }

Python Example
Download a Parquet table and query it locally with DuckDB in under 10 lines.
import duckdb
import requests
token = "YOUR_TOKEN"
url = "https://data.valuein.biz/v1/sp500/fact"
r = requests.get(url, headers={"Authorization": f"Bearer {token}"}, stream=True)
r.raise_for_status()
with open("fact.parquet", "wb") as f:
    for chunk in r.iter_content(chunk_size=8192):
        f.write(chunk)
conn = duckdb.connect()
df = conn.execute(
    "SELECT * FROM read_parquet('fact.parquet') LIMIT 5"
).df()
print(df)

Available Tables
Eight tables cover the full schema. Pass any table name as the {table} path segment. See Parquet Schema Reference for full field definitions.
| Table | Description |
|---|---|
| entity | Company profiles: name, sector, SIC code, location, CEO, founding year, description. One row per CIK. |
| security | Exchange listings: ticker, exchange, FIGI, valid date range (SCD Type 2). Multiple rows per company. |
| filing | SEC EDGAR filing index: accession ID, form type, filing date, acceptance timestamp. Links entity to facts. |
| fact | 105M+ financial data points: XBRL concept values with knowledge_at timestamps for PIT accuracy. |
| valuation | Pipeline-computed DCF and DDM intrinsic values with WACC and growth rate assumptions. |
| taxonomy_guide | Mapping of ~150 standard_concept labels to raw XBRL tags and human-readable descriptions. |
| index_membership | Historical index constituents: S&P 500 membership with start/end dates for PIT universe construction. |
| references | Derived flat join of entity + security + index_membership. One row per security. Start here for most queries. |
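Since an unknown table name produces a 400, a client can validate before requesting. A sketch with a hypothetical `table_url` helper (not part of any official client):

```python
# The eight valid table names, usable in any {table} path segment.
VALID_TABLES = {
    "entity", "security", "filing", "fact", "valuation",
    "taxonomy_guide", "index_membership", "references",
}

def table_url(tier: str, table: str) -> str:
    """Build a download URL; raise early on a table the API would reject."""
    if table not in VALID_TABLES:
        raise ValueError(f"unknown table: {table}")
    return f"https://data.valuein.biz/v1/{tier}/{table}"

assert table_url("sample", "entity") == "https://data.valuein.biz/v1/sample/entity"
```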
Manifest Response
Call GET /v1/manifest to discover available tables and the current snapshot timestamp for your plan. Check this before downloading tables to detect updates.
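The snapshot id makes update detection a string comparison. A minimal sketch with a hypothetical `needs_refresh` helper (the manifest fetch itself is elided; the dict shape follows the sample response in this section):

```python
# Hypothetical helper: compare the snapshot id from /v1/manifest against the
# one recorded at the last download to decide whether to re-fetch tables.
def needs_refresh(local_snapshot, manifest: dict) -> bool:
    return manifest["snapshot"] != local_snapshot

manifest = {"snapshot": "snapshot_20260411", "tables": ["entity", "fact"]}
assert needs_refresh("snapshot_20260404", manifest)      # stale: re-download
assert not needs_refresh("snapshot_20260411", manifest)  # up to date
```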
{
"snapshot": "snapshot_20260411",
"last_updated": "2026-04-11T00:00:00Z",
"tables": ["entity", "security", "filing", "fact",
"valuation", "taxonomy_guide",
"index_membership", "references"]
}

Error Codes
| Status | Meaning | Common Cause |
|---|---|---|
| 200 OK | Success | Request succeeded. Parquet bytes or JSON body in the response. |
| 400 Bad Request | Invalid table | The table name in the path is not in the valid tables list. Check spelling and trailing slashes. |
| 401 Unauthorized | Missing or invalid token | No Authorization header, malformed Bearer token, or token not found in KV store. |
| 403 Forbidden | Plan too low | Your token exists but its plan does not grant access to this bucket (e.g. sample token accessing /v1/sp500/). |
| 429 Too Many Requests | Rate limit exceeded | You have exceeded your daily request quota. Resets at UTC midnight. Upgrade to a higher plan for higher limits. |
| 503 Service Unavailable | Snapshot loading | The R2 snapshot is being refreshed. Retry after 30–60 seconds. This is rare and brief. |
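These statuses split into permanent failures (fix the request or token) and transient ones (retry). A hypothetical client-side sketch (`should_retry` and `describe` are illustrative names, not part of any client library):

```python
def should_retry(status: int) -> bool:
    """Retry only transient failures: 429 resets at UTC midnight, 503 in ~60s."""
    return status in (429, 503)

def describe(status: int) -> str:
    """Short label for each documented status code."""
    return {
        400: "bad table name",
        401: "missing or invalid token",
        403: "plan too low for this bucket",
        429: "rate limited",
        503: "snapshot refreshing",
    }.get(status, "ok" if status == 200 else "unexpected")

assert should_retry(503) and not should_retry(401)
```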
Get your API token
Subscribe to the S&P 500 or Full plan to receive a Bearer token instantly. The sample tier is always free — no credit card required.