chore: perform various maintenance tasks #112
Conversation
not tested at all, does not even compile yet
I'm very interested in dumping axios in favor of the built-in fetch. I run a lot of things in Cloudflare Workers and this would be quite a helpful change. Any update on when this might be ready?
I'm ready to continue with this; I was just hoping for pointers regarding #111 (comment) before I go ahead. Shipping a custom implementation of multipart/form-data clearly gives us the best library, but I didn't feel like deciding alone whether or not we want to accept that in this library.
Sorry for the delay. We should go with a custom multipart/form-data implementation then, as @KnorpelSenf suggested. I think we need to keep the optional …
Alright, I will get started on this soon, but my time for OSS is a bit limited right now, so please don't expect an immediate implementation :)
What if somebody passes a read stream, or an …
Unfortunately, S3 strictly requires the … In the mentioned example, the server should return a …
Alright!
Yeah, I'll add this to the doc string.
Hi @KnorpelSenf - I see you've mentioned you won't be able to dedicate much time to this PR - but regardless I'm going to ask! Do you have a rough timeframe here?
Yep, I'm gonna give a talk at https://events.geekle.us/typescript24/ on Tuesday, so there are a number of things to prepare until then. The rest of that week will be spent catching up with life in general, and the week after that I'm mostly free and looking forward to getting back to this (but no promises, life can be surprising and I don't want to set myself deadlines).
If you feel like contributing, you can check out the base implementation of what I'll do here in this file: https://github.com/grammyjs/grammY/blob/main/src/core/payload.ts
I will get back to this after my summer vacation, likely in September. Sorry for the delays; there were a few other important tasks on my agenda :)
That was it! Thanks, it's fixed now. We need to wait for the rate limit to reset before we can see if all tests pass.
I did some research and this is actually not true. If you use the multipart/form-data API of S3 (which this implementation does now), then you only have to specify the content length for each chunk (done automatically by the runtime). You do not need to know the total number of bytes to be uploaded. My implementation now makes use of that, so we can stream the entire data without querying its size upfront. This enables support for uploading arbitrary data streams and removes the need for a …
@josiasmontag this is ready for review now. If you have a project that uses this SDK, it may be reasonable to drop in these changes and try them out. The diff ended up being fairly large (perhaps I should do maintenance more frequently in the future), so we'd better test them well.
@kingmesal @dncnbuck @elawad you might be interested in doing this, too.
Unfortunately, this does not seem to work for native S3. Some background about the infra: currently and by default, CloudConvert uses Ceph as object storage, which has an S3-compatible API. Ceph seems to be less strict than the "real" S3 and accepts uploads without … Also, I adjusted the rate limits for the sandbox. You should not get the rate limit errors any more.

Hmmmm, sad. But alright. I'll reintroduce the size argument and precompute the length whenever possible. Thanks for adjusting the limits.
@josiasmontag actually … does that mean that we cannot use the multipart/form-data API at all? The …

Or maybe specify a … (cloudconvert-node/lib/CloudConvert.ts, lines 56 to 58 in e5e0b99)
Content-Length headers for the respective parts are not enough as far as I know; it needs a … I think it would be best to keep the current signature of the upload function:

upload(
    task: Task | JobTask,
    stream: Stream,
    filename: string | null = null,
    size: number | null = null
)

If no filename is passed, try to guess the filename.
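For illustration, guessing a fallback filename from a file system read stream could look roughly like this; the helper name and the exact checks are assumptions, not the library's actual code:

```ts
import { ReadStream } from 'node:fs';
import { basename } from 'node:path';

// Hypothetical helper: fall back to the stream's underlying path when the
// caller did not pass a filename explicitly.
function guessFilename(stream: unknown, filename: string | null): string | null {
    if (filename !== null) return filename;
    if (stream instanceof ReadStream && typeof stream.path === 'string') {
        return basename(stream.path);
    }
    return null;
}
```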
How is the backend implemented then? If I tell you that the total body size is 500 bytes, and then I first send 350 bytes of file data followed by 100 bytes of metadata, and the remaining 50 bytes are the interspersed boundaries of the protocol … then the server will not be able to determine the file size until the entire request has been read. On the other hand, if the server reads the entire request anyway, then what's the purpose of the content length?
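To make the arithmetic in this example concrete, here is a small sketch; the numbers are the made-up ones from the comment above, not real measurements:

```ts
// Illustrative only: the hypothetical request from the comment above.
const contentLength = 500; // declared size of the whole multipart body
const fileData = 350;      // bytes of actual file content
const metadata = 100;      // bytes of other form fields
const boundaries = 50;     // boundary lines and part headers

// The server can only recover the file size from Content-Length if it
// already knows how large the metadata and boundary framing are:
const inferredFileSize = contentLength - metadata - boundaries;
console.log(inferredFileSize === fileData); // true
```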
Unfortunately, I have no idea how AWS S3 has implemented this on their end 🤷♂️. The docs only say it requires …
Very strange :D It's an easy fix either way. We can restore the old signature of …
Oh. The link you've shared actually contains the implementation details I was asking for.
They simply force you to pass all metadata upfront. That way, they can compute the file size based on the content length. The example I described above is simply disallowed. That clears it up, thanks for sharing!
@josiasmontag done! I'm running into rate limits again, will check again later.
Awesome. Would you like to do anything else before merging?
Looks good 👍 Will do some final review and then release this as 3.0.0.
This might get the size of a different file and lead to unexpected results.
This is a collection of various tasks that are overdue and somewhat related to each other. Here is a breakdown of the changes; the PR is best reviewed on a per-commit basis. The diff is very large, but the actual changes are mostly self-contained and limited in complexity.
Getting rid of axios and form-data

Node has had global fetch for a long time now. It is no longer helpful to use dependencies like axios that introduce security vulnerabilities like #111. Consequently, this PR removes both axios and form-data and replaces them with global fetch.

Unfortunately, streaming uploads are a problem: the form-data package does not work with the built-in fetch, and the built-in FormData does not support ReadableStreams. Hence, this PR introduces a tiny implementation of the multipart/form-data protocol for our use case. As a side effect, we now also support a few more ways to input files.
This leaves us with socket.io as the only remaining dependency.
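A rough sketch of the idea, not this PR's actual payload implementation: a hand-rolled multipart/form-data body can be streamed through global fetch like this. The boundary, field name, and URL handling are illustrative only:

```ts
import { createReadStream } from 'node:fs';
import { basename } from 'node:path';
import { Readable } from 'node:stream';

async function uploadViaFetch(url: string, filePath: string): Promise<Response> {
    const boundary = '----sketch-' + Math.random().toString(36).slice(2);

    const head =
        `--${boundary}\r\n` +
        `Content-Disposition: form-data; name="file"; filename="${basename(filePath)}"\r\n` +
        `Content-Type: application/octet-stream\r\n\r\n`;
    const tail = `\r\n--${boundary}--\r\n`;

    // Stitch the textual framing and the file contents into one body stream.
    async function* body(): AsyncGenerator<Buffer> {
        yield Buffer.from(head);
        for await (const chunk of createReadStream(filePath)) {
            yield chunk as Buffer;
        }
        yield Buffer.from(tail);
    }

    return await fetch(url, {
        method: 'POST',
        headers: { 'Content-Type': `multipart/form-data; boundary=${boundary}` },
        // Node's fetch needs a web ReadableStream plus `duplex: 'half'`
        // when the request body is streamed.
        body: Readable.toWeb(Readable.from(body())) as unknown as BodyInit,
        duplex: 'half',
    } as RequestInit);
}
```

The real implementation has to handle more input types and error cases; this sketch only illustrates the boundary framing and the streamed request body.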
Updating supported Node versions
This PR introduces an engines field in package.json. The package currently supports all active Node versions, and this is now specified in the package.json file.

We also align the Node versions used in CI with this. They are currently set to 20 and 22 only, because 24 is not available on GitHub yet.
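For illustration, the engines entry in package.json could look roughly like this; the exact version range here is an assumption, not necessarily what the PR specifies:

```json
{
    "engines": {
        "node": ">=20"
    }
}
```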
Updating Prettier and ESLint
The tooling was updated, config files were migrated, and the source code was reformatted slightly.
Migrating all tests to TypeScript
Previously, the test suite was written in JS. This PR migrates it to TS and employs tsx to avoid a compile step.
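As a hedged illustration of what "no compile step" means in practice (the file name, import path, and assertions below are assumptions, not the actual suite), tsx can execute a TypeScript file directly, e.g. with `npx tsx tests/smoke.test.ts`:

```ts
// tests/smoke.test.ts: hypothetical example, not part of the real suite.
// Run directly with: npx tsx tests/smoke.test.ts
import assert from 'node:assert';

import CloudConvert from '../lib/CloudConvert';

// Construct the client in sandbox mode and check that the jobs API exists.
const cloudConvert = new CloudConvert('api-key', true);
assert.strictEqual(typeof cloudConvert.jobs.create, 'function');

console.log('smoke test passed');
```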