This repository was archived by the owner on Jan 6, 2026. It is now read-only.
IIUC the current probe of HTTP retrieval does not require HTTPS; both http:// and https:// providers are marked as successful if the response is valid:
Problem

Direct retrieval from an unencrypted http:// URL is impossible in web browser contexts due to the Secure Context limitation.
Attempting an http:// request from a website loaded via an https:// URL is blocked by the browser and produces a mixed-content error in the console.
Cross-origin fetch via the Web API requires https:// with a valid TLS certificate (signed by a trusted CA).
Note: these TLS certificates are not used for data integrity – they are there only to satisfy the Secure Context requirement in web browsers, and they also allow use of HTTP/2 for additional performance thanks to request multiplexing.
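To illustrate the constraint above with a minimal sketch (the provider URLs are made up): a page served over https:// can only fetch from https:// origins, so the URL scheme alone predicts which providers a browser client could reach at all:

```javascript
// Hypothetical provider URLs, for illustration only.
const providers = [
  'http://203.0.113.7:8080/ipfs/',  // blocked as mixed content on an https:// page
  'https://provider.example/ipfs/'  // usable from a secure context
]

// A secure-context web client can only use https:// providers, so filtering
// by scheme tells us which requests the browser would not block.
const usable = providers.filter((url) => new URL(url).protocol === 'https:')

console.log(usable.length) // 1 – only the https:// provider remains
```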
Suggested fix
Spark seems to have all the information already – maybe just log the scheme used for the HTTP request and keep a separate success metric for https:// URLs only?
This way no additional request needs to be made; just surface how many of the successes were https:// ones.
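A minimal sketch of the suggested fix, under the assumption of a simple counter object (names like `recordRetrieval` and the `stats` shape are hypothetical, not spark-checker's actual API): log the scheme of each probed URL and keep an https-only success count alongside the overall one:

```javascript
// Hypothetical scheme-aware success counters; not spark-checker's real code.
const stats = { total: 0, success: 0, successHttps: 0 }

function recordRetrieval(url, ok) {
  const scheme = new URL(url).protocol.replace(':', '') // 'http' or 'https'
  stats.total += 1
  if (ok) {
    stats.success += 1
    if (scheme === 'https') stats.successHttps += 1
  }
  return scheme // could be logged with the measurement; no extra request needed
}

recordRetrieval('https://provider.example/ipfs/', true)
recordRetrieval('http://203.0.113.7:8080/ipfs/', true)
console.log(stats) // { total: 2, success: 2, successHttps: 1 }
```

This surfaces the scheme from data the checker already has, so the dashboard can report https-only success rates without probing twice.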
Why HTTPS matters

The biggest gateway implementation (boxo/gateway, which ships in Rainbow and Kubo, powers ipfs.io and dweb.link, and is used by many gateway operators) also requires https://.
If the ecosystem does not correctly probe for HTTPS retrieval, we don't see how many SPs are actually usable by IPFS Mainnet clients.
If direct retrieval from SPs over https:// is not possible, data will have to go through centralized proxies and relays that terminate HTTPS, increasing cost and latency.
https://dashboard.filspark.com/ makes no distinction between http:// and https:// providers.

The relevant probe code:

spark-checker/lib/spark.js, lines 143 to 147 in 799cc5e
spark-checker/lib/multiaddr.js, lines 22 to 23 in 799cc5e
More reasons why HTTPS matters:

https:// impacts IPFS Foundation-driven work: Native HTTP across the IPFS Stack, to enable Filecoin direct retrieval — IPFS/2025 (ipshipyard/roadmaps#9) and Reliable, decentralized, and trustless browser fetching of IPFS content — IPFS/2025 (ipshipyard/roadmaps#4).
Without https://, an SP is effectively dead (unusable) for all web clients that would like to perform direct retrieval (examples: https://inbrowser.link/, @helia/verified-fetch – https://blog.ipfs.tech/verified-fetch/).

cc ipshipyard/roadmaps#9 @bajtos @mishmosh @hsanjuan