Accessing a Private API
Learn how to connect any Private API onchain for consumption with SEDA.
Data proxies allow developers to expose an entire suite of private data to Oracle Programs on the SEDA network with a single integration. Data providers can set a fee that is earned for every data request that uses their Data Proxy and private data, or simply consume it themselves in a whitelisted environment.
⚠ Important: Data providers that deploy or operate a data proxy connected to the SEDA Network do so independently and at their sole risk. SEDA provides its infrastructure and services strictly “AS IS”, “AS AVAILABLE”, and “WITH ALL FAULTS”, without any warranties of any kind, whether express or implied. SEDA expressly disclaims all liability, including but not limited to, any reliance on the content, information, or materials available on its sites, documentation, or services, and any loss, error, outage, security incident, inaccuracy, or unavailability arising from or related to its infrastructure, services, or data. Data providers are solely responsible for their own infrastructure, data accuracy and legality, security, regulatory compliance, and for any claims, damages, or liabilities (including those brought by users or third parties) arising from or related to their proxies or data. Nothing herein constitutes legal, financial, investment, or other professional advice, nor any form of contract, solicitation, recommendation, commitment, or offer to buy or sell. Access to and use of any part of SEDA is subject to the most current SEDA Terms of Service.
Introduction to the Data Proxy
A Data Proxy is a simple middleware component that acts as a pipeline, or connector, between any API and the SEDA network, holding an API key that grants access to the data provider's services.
Data Proxies also verify conditions on inbound HTTP requests, checking that the requester is part of the SEDA network and, if applicable, that the provider's fees have been paid. In addition, providers can add a SECP private key to the proxy, which is used to sign the data before it is forwarded back to the requester, adding further security and verifiability.
In short, proxy nodes allow data providers to expose their entire suite of APIs through a single, one-time deployment and, where applicable, earn value based on onchain demand with minimal technical overhead or maintenance.
Request Flow

Response Flow

System Requirements
The Data Proxy implementation is lightweight, allowing it to run with minimal infrastructure requirements:
Compute: 2 vCPU
Memory: 1 GB RAM
Storage: 100 MB for the binary (an approx. 3 GB GNU/Linux instance is sufficient)
The Data Proxy efficiently handles hundreds of requests per second, providing latency and throughput that are well-suited to meet current and near-future SEDA protocol demands.
Scaling Up
If performance needs to be boosted beyond vertical scaling, horizontal scaling is straightforward due to its minimal resource footprint. Depending on your existing infrastructure, consider the following strategies:
Load Balancing: Use tools like AWS ELB, NGINX, or HAProxy to distribute traffic across multiple proxy nodes.
Auto-Scaling: Implement cloud-based auto-scaling groups (e.g., Amazon EC2 Auto Scaling) to dynamically adjust instance counts based on traffic.
Container Orchestration: Utilize Kubernetes or managed services such as Amazon EKS/Fargate for efficient management and scaling of containerized instances.
These approaches ensure that your Data Proxy deployment remains scalable and resilient, adapting seamlessly to varying workloads.
Operating and Running a Data Proxy
Below is a step-by-step guide to initializing a Data Proxy and connecting it to the SEDA Network.
Prerequisites
Install Bun:
Clone the Data Proxy directory:
Install the project dependencies:
Checkout the CLI commands:
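The steps above might look like the following. The repository URL and the CLI entry point are assumptions; check the official SEDA documentation for the exact commands.

```sh
# Install Bun (official installer)
curl -fsSL https://bun.sh/install | bash

# Clone the Data Proxy repository and enter it (URL assumed)
git clone https://github.com/sedaprotocol/seda-data-proxy.git
cd seda-data-proxy

# Install the project dependencies
bun install

# List the available CLI commands
bun start --help
```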
Preparing a Data Proxy Node
Note: The SEDA network won't be able to access your proxy node unless you expose it on a public IP or domain.
Start by generating the config.json (mandatory) and data-proxy-private-key.json. The private key is used for signing data. It can also be provided via the SEDA_DATA_PROXY_PRIVATE_KEY environment variable. The environment variable takes precedence over the default key file, but an explicitly specified key file takes precedence over the environment variable.
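A minimal initialization might look like this; the init subcommand name is an assumption, so consult the CLI help for the exact command.

```sh
# Generate config.json and data-proxy-private-key.json
bun start init

# Alternatively, supply the signing key via the environment
export SEDA_DATA_PROXY_PRIVATE_KEY=<your_private_key_hex>
```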
Registering a Node
Next you'll need to register your data proxy on the SEDA network. This can be done through a CLI command which will create a transaction for you which you can submit through the SEDA explorer.
- payout_address: Required. The SEDA address on which you would like to receive the payout for provided services.
- <fee>: Required. The amount of SEDA you want to charge per request.
- --memo: Optional. Free-form text you can attach to your data proxy registration.
Clicking on the link will take you to the SEDA explorer where you can double check the attributes before connecting a wallet and signing the transaction. The address of the wallet which signed the transaction will be the admin of the data proxy: they're able to change the payout_address, memo, and submit fee updates. Admin ownership can be changed through the explorer by submitting a transaction.
Example invocation and output:
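A sketch of what the registration command might look like; the register subcommand name and argument order are assumptions, so check the CLI help before running it.

```sh
# Register the data proxy with a payout address and per-request fee
bun start register <payout_address> <fee> --memo "my-data-proxy"
# The command prints a SEDA explorer link for reviewing and signing the transaction
```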
Verifying Data Proxy Configuration
Before starting the data proxy, you can run the validation command to make sure that it has been set up properly. Run the command with the --silent flag to get summary information.
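For example (the validate subcommand name is an assumption; the --silent flag is described above):

```sh
# Full validation report
bun start validate

# Summary information only
bun start validate --silent
```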
Running a Data Proxy
To start your proxy execute the following command:
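(The run subcommand name is an assumption; check the CLI help for the exact entry point.)

```sh
bun start run
```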
Your proxy is now running and ready to receive requests from the SEDA network.
Debugging
By default the data proxy node validates that incoming requests originate from eligible entities on the SEDA network. If you want to verify everything is set up correctly you can pass the --disable-proof flag when starting the binary; this disables all incoming request verification on the proxy.
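For example, again assuming a run subcommand:

```sh
# WARNING: disables all incoming request verification; never use in production
bun start run --disable-proof
```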
Configuration
Route Group
All proxy routes are grouped under a single path prefix, by default this is "proxy". You can change this by specifying the routeGroup attribute in the config.json:
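For example, a config.json that serves all routes under /custom instead of /proxy (values are illustrative):

```json
{
  "routeGroup": "custom",
  "routes": []
}
```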
Base URL
If you want to run software in front of the data proxy to handle requests (such as another proxy or an API management solution), the public URL of the data proxy may differ from the URL the data proxy itself serves. This is a problem for the tamper-proofing mechanism, since the data proxy signs the request URL in order to prove that the overlay node did not change it. To prevent this, specify the baseURL option in the config.json:
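For example (the host is illustrative):

```json
{
  "baseURL": "https://data.example.com",
  "routeGroup": "proxy",
  "routes": []
}
```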
Just the protocol and host should be enough, no trailing slash.
Should you do additional path rewriting in the proxy layer you can add that to the baseURL option, but this is not recommended.
Multiple Routes
A single data proxy can expose different data sources through a simple mapping in the config.json. The routes attribute takes an array of proxy route objects which can each have their own configuration.
The two required attributes are path and upstreamUrl. These specify how the proxy should be called and how the proxy should call the upstream. By default a route is configured as GET, but optionally you can specify which methods the route should support with the method attribute.
The OPTIONS method is reserved and cannot be used for a route.
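A sketch of a config.json with two routes, assuming the method attribute takes an array of HTTP verbs (the upstream URLs are illustrative):

```json
{
  "routes": [
    {
      "path": "/eth-usd",
      "upstreamUrl": "https://api.example.com/v1/prices/eth-usd"
    },
    {
      "path": "/orders",
      "upstreamUrl": "https://api.example.com/v1/orders",
      "method": ["GET", "POST"]
    }
  ]
}
```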
Base URL per route
In addition to specifying the baseURL at the root level you can also specify it per route. The baseURL at the route level will take precedence over one at the root level.
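For example, a route-level baseURL overriding the root-level one (hosts are illustrative):

```json
{
  "baseURL": "https://data.example.com",
  "routes": [
    {
      "path": "/price",
      "upstreamUrl": "https://api.example.com/price",
      "baseURL": "https://prices.example.com"
    }
  ]
}
```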
Upstream Request Headers
Should your upstream require certain request headers you can configure those in the routes object. All headers specified in the headers attribute will be sent to the upstream in addition to headers specified by the original request.
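For example, attaching an API key header to every upstream request (the header names and values are illustrative):

```json
{
  "routes": [
    {
      "path": "/price",
      "upstreamUrl": "https://api.example.com/price",
      "headers": {
        "x-api-key": "my-secret-key",
        "accept": "application/json"
      }
    }
  ]
}
```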
Environment Variable Injection
Sometimes you don't want to expose your API key in a config file, or you have multiple environments running. The data proxy node has support for injecting environment variables through the {$MY_ENV_VARIABLE} syntax to the headers or the upstream URL.
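For example, injecting an API key from the environment into a header (the variable and header names are illustrative):

```json
{
  "routes": [
    {
      "path": "/price",
      "upstreamUrl": "https://api.example.com/price",
      "headers": {
        "x-api-key": "{$PROVIDER_API_KEY}"
      }
    }
  ]
}
```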
Environment variables are evaluated during startup of the data proxy. If it detects variables in the config which aren't present in the environment the process will exit with an error message detailing which environment variable it was unable to find.
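The substitution and startup check described above can be sketched roughly as follows. This is a minimal illustration of the documented behavior, not the data proxy's actual implementation; the function name is invented.

```typescript
// Minimal sketch of {$MY_ENV_VARIABLE} substitution with a startup check.
// NOT the actual data proxy implementation; names are invented.
function injectEnvVariables(
  template: string,
  env: Record<string, string | undefined>,
): string {
  return template.replace(/\{\$([A-Za-z0-9_]+)\}/g, (_match, name: string) => {
    const value = env[name];
    if (value === undefined) {
      // The real node exits at startup, naming the variable it could not find.
      throw new Error(`Missing environment variable: ${name}`);
    }
    return value;
  });
}

// Resolving a header value at startup:
const header = injectEnvVariables("{$PROVIDER_API_KEY}", {
  PROVIDER_API_KEY: "my-secret-key",
});
```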
Path Parameters
The routes objects have support for path parameter variables and forwarding those to the upstream. Simply declare a variable in your path with the :varName syntax and reference them in the upstreamUrl with the {:varName} syntax. See below for an example:
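For example (the upstream URL is illustrative):

```json
{
  "routes": [
    {
      "path": "/price/:pair",
      "upstreamUrl": "https://api.example.com/v1/prices/{:pair}"
    }
  ]
}
```

With the default route group, a request to /proxy/price/eth-usd would be forwarded to https://api.example.com/v1/prices/eth-usd.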
Forwarding Response Headers
By default the data proxy node will only forward the content-type header from the upstream response. If required you can specify which other headers the proxy should forward to the requesting client:
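A sketch, assuming the attribute is named forwardResponseHeaders; both the attribute name and the header list are assumptions, so check the reference configuration for the exact key.

```json
{
  "routes": [
    {
      "path": "/price",
      "upstreamUrl": "https://api.example.com/price",
      "forwardResponseHeaders": ["x-ratelimit-remaining", "cache-control"]
    }
  ]
}
```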
Wildcard routes
The Data Proxy node has support for wildcard routes, which allows you to quickly expose all your APIs:
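A sketch, assuming a * wildcard in both the path and the upstream URL (the exact wildcard syntax is an assumption):

```json
{
  "routes": [
    {
      "path": "/*",
      "upstreamUrl": "https://api.example.com/*"
    }
  ]
}
```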
JSON path
If you don't want to expose all API info you can use jsonPath to return a subset of the response:
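For example, returning only a nested field from the upstream response (the JSONPath expression and upstream URL are illustrative):

```json
{
  "routes": [
    {
      "path": "/price",
      "upstreamUrl": "https://api.example.com/price",
      "jsonPath": "$.data.price"
    }
  ]
}
```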
Status Endpoint
The Data Proxy node has support for exposing status information through some endpoints. This can be used to monitor the health of the node and the number of requests it has processed.
The status endpoint has two routes:
- /status/health: Returns a JSON object with the following structure:
- /status/pubkey: Returns the public key of the node.
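An illustrative health response; the exact fields are an assumption, so inspect your node's /status/health output for the real structure.

```json
{
  "status": "healthy",
  "uptime": 123456,
  "requests": 42
}
```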
Status Configuration
The status endpoints can be configured in the config file under the statusEndpoints attribute:
- root: Root path for the status endpoints. Defaults to status.
- apiKey: Optionally secure the status endpoints with an API key. The header attribute is the header key that needs to be set, and secret is the value it needs to be set to. The statusEndpoints.apiKey.secret attribute supports the {$MY_ENV_VARIABLE} syntax for injecting a value from the environment during startup.
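For example, securing the status endpoints with a secret injected from the environment (the header name and variable name are illustrative):

```json
{
  "statusEndpoints": {
    "root": "status",
    "apiKey": {
      "header": "x-api-key",
      "secret": "{$STATUS_API_SECRET}"
    }
  }
}
```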