Update @Ericbla encoder fork #243
Conversation
Any word on this getting merged? Seems like a lot of people were looking forward to this feature...
They were looking for new maintainers, after all... I wouldn't pressure the maintainer lest we end up with the same thing as xz, lmao, but I'm going to quietly use this fork instead. @stereokai, not sure if this is too much to ask, but can you write examples of how to use it? (the encoding part)
@Kreijstal check out the encoder tests in this PR, which were written by the original encoder developer, @Ericbla. It's very simple and straightforward: you define a parser as usual and then just call `encode()`.
Ahh, thank you, that was so simple...
This looks REALLY nice. I'm wanting to use this fork in a project but am having some issues related to bit fields. Is there a clean way to mark the "end" of a bit field in the encoder? I've dug around the source and couldn't quite find a way to do so.

There are some cases where data may be tightly packed in a way where many separate bit fields are directly next to each other, which causes issues, since at the moment these would be treated as a single bit field and only 32 sequential bits are supported. In my case this data is Mii data from the Wii U/3DS, which makes heavy use of bit fields. This can be worked around by using
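To make the packing issue above concrete, here is an illustrative bit reader (not code from the library or the fork; the helper name is made up) showing how tightly packed fields of arbitrary width have to be read across byte boundaries:

```javascript
// Illustrative only: a minimal MSB-first bit reader for tightly packed
// fields like the Mii data described above. Not the library's implementation.
function makeBitReader(buf) {
  let bitPos = 0; // absolute bit position across the whole buffer
  return function read(width) {
    let value = 0;
    for (let i = 0; i < width; i++) {
      const byte = buf[bitPos >> 3];          // which byte we are in
      const bit = (byte >> (7 - (bitPos & 7))) & 1; // MSB-first bit
      value = (value << 1) | bit;
      bitPos++;
    }
    return value;
  };
}

const read = makeBitReader(Buffer.from([0b10110100, 0b11000000]));
console.log(read(3)); // 5  (bits 101)
console.log(read(5)); // 20 (bits 10100)
console.log(read(2)); // 3  (bits 11, crossing into the second byte)
```

Three adjacent fields (3 + 5 + 2 bits) are consumed back to back with no byte-alignment marker in between, which is exactly the situation where an encoder needs to know where one bit field ends and the next begins.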
```ts
private generate_encodeSeek(ctx: Context) {
  const length = ctx.generateOption(this.options.length!);
  ctx.pushCode(`smartBuffer.writeBuffer(Buffer.alloc(${length}));`);
}
```
This seems not to be compatible with parsers which use negative seek values, which is a supported feature: https://github.com/keichi/binary-parser#seekreloffset
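A small repro of the incompatibility flagged above (the helper name is hypothetical; only the `Buffer.alloc(length)` call mirrors the generated code in the diff):

```javascript
// Hypothetical sketch: the generated encoder pads every seek() with
// Buffer.alloc(length), and Buffer.alloc rejects negative sizes, so
// backward seeks cannot be encoded this way.
function encodeSeek(length) {
  return Buffer.alloc(length); // zero-filled padding; forward seeks only
}

console.log(encodeSeek(4).length); // 4: a forward seek of 4 bytes works

let failed = false;
try {
  encodeSeek(-2); // a backward seek, which the parser itself supports
} catch (err) {
  failed = true; // Buffer.alloc throws on a negative size
}
console.log(failed); // true
```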
Do you have plans to keep maintaining it? If so, you could create a hard fork of it if the original author does not want it in his repo. This way your project will come up independently in Google search, since the default behaviour seems to be that only the main project appears in Google search and forks do not. I have even seen a case where such a move revived a dead project, because all of a sudden the author felt the need to save his project from being stolen.
Not a fork, but I started a rewrite of this library a while ago to address both the lack of encoding (we didn't want to rely on forks like this, since forks can go away at any time) and some other issues we ran into with this library (such as supporting bit fields larger than 32 bits, and needing to encode non-UTF8 strings): https://github.com/PretendoNetwork/binary-parser (the dev branch is more up to date)
I would advise against merging https://github.com/PretendoNetwork/binary-parser. It has this code:

```ts
if (data.buffer instanceof SharedArrayBuffer) {
  throw new Error("SharedArrayBuffer is not supported");
}
```

which breaks when the script is loaded over file:// instead of http://. The error you get is

Also, it stringifies a zero-terminated string as "HIJK\u0000\u0000\u0000\u0000"; the current repository correctly stringifies a zero-terminated string as "HIJK".
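A hedged sketch of one possible guard (not code from either repository): in a page that is not cross-origin isolated, such as one served over file://, the global `SharedArrayBuffer` may not be defined, and referencing the bare identifier in an `instanceof` check then throws a ReferenceError. A `typeof` check sidesteps that:

```javascript
// Assumed helper, not from the library: guard the instanceof check so it is
// safe in contexts where SharedArrayBuffer is not exposed at all.
function isSharedArrayBuffer(buf) {
  return typeof SharedArrayBuffer !== 'undefined' &&
    buf instanceof SharedArrayBuffer;
}

console.log(isSharedArrayBuffer(new ArrayBuffer(8))); // false
```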
I now tested https://github.com/stereokai/binary-parser also. It works when used from local HTML files, but it also breaks parsing of zero-terminated strings, giving "HIJK\u0000\u0000\u0000\u0000" just like https://github.com/PretendoNetwork/binary-parser. If anyone is interested in playing around with it, here are the steps needed to compile it:

```ts
import { Parser } from './lib/binary_parser.ts';
window.Parser = Parser;
```
Here is the demo script I used for testing:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Binary Parser Parent–Child Demo with Serialization</title>
  <script src="parser.global.js"></script> <!-- your patched version -->
</head>
<body>
  <pre id="output"></pre>
  <script>
    // Child structure: 1-byte ID + 4-byte value
    const childParser = new Parser()
      .uint8('id')
      .uint32le('value');

    // Parent structure
    const parentParser = new Parser()
      .uint8('childCount')
      .array('children', {
        type: childParser,
        length: 'childCount'
      })
      .string('fixedStr', { length: 8, stripNull: false })
      .string('zeroTermStr', { length: 8, zeroTerminated: true, stripNull: true });

    // Hardcoded test data
    const testBytes = new Uint8Array([
      0x02,                   // childCount = 2
      0x01,                   // child[0].id = 1
      0x2A, 0x00, 0x00, 0x00, // child[0].value = 42
      0x02,                   // child[1].id = 2
      0x63, 0x00, 0x00, 0x00, // child[1].value = 99
      // fixedStr = "ABCDEFGH"
      0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48,
      // zeroTermStr = "HIJK" + null padding
      0x48, 0x49, 0x4A, 0x4B, 0x00, 0x00, 0x00, 0x00
    ]);

    // Parse
    const result = parentParser.parse(testBytes);
    document.getElementById('output').textContent = JSON.stringify(result, null, 2);
    console.log("Original bytes:", testBytes);
    //console.log("Serialized bytes:", serialized);
  </script>
</body>
</html>
```

Conclusion:
No one, including myself, was suggesting merging our fork, hence why we have not made a pull request. It will always be a "downgrade" in some respects, such as

What I said is that it is an early-stages rewrite intended to address some domain-specific issues we were encountering, but that it supports most trivial cases that people would use this library for right now, and it is something we're already using as a replacement for this library in our services.
I'm also not sure what you're talking about with regard to the terminated strings? What you're expecting doesn't seem to make sense:

```ts
import { Parser } from '@pretendonetwork/binary-parser';

const bytes = Buffer.from([0x48, 0x49, 0x4A, 0x4B, 0x00, 0x00, 0x00, 0x00]);

const parser = new Parser()
  .string('zeroTermStr', {
    length: 8,
    zeroTerminated: true,
    stripNull: true
  });

const decoded = parser.parse(bytes);
const encoded = parser.encode(decoded);

console.log(decoded);
console.log(encoded);
```

This seems to be the expected output? You're telling the parser that the string is 8 bytes long, is zero-terminated, and to strip null bytes, which is exactly what it's doing. If you're expecting it to encode the data with stripped null bytes, such as:

then that is what seems incorrect to me, since the encoded data would not match the input data.

That being said, I'd rather not further discuss issues like this here, as it just clogs up this pull request. If you have any issues with our library, take it to an issue on that repository.
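The round-trip point being argued above can be sketched with plain Buffers, without either library (the variable names here are illustrative, not any repository's API):

```javascript
// Sketch: stripping nulls is a decode-time convenience; re-encoding must
// restore the declared 8-byte length or the output no longer matches the input.
const input = Buffer.from([0x48, 0x49, 0x4A, 0x4B, 0x00, 0x00, 0x00, 0x00]);

// Decode: read all 8 bytes, then strip the trailing nulls (stripNull behaviour).
const decoded = input.toString('latin1').replace(/\0+$/, ''); // "HIJK"

// Re-encode: write the string back into a zero-filled 8-byte buffer.
const reencoded = Buffer.alloc(8);
reencoded.write(decoded, 0, 'latin1');

console.log(decoded);                    // "HIJK"
console.log(input.equals(reencoded));    // true: the null padding is restored
```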
I never intended to discuss it here, just to warn the original repo author against merging it there.
There was no real need to "warn against merging", as there is no merge request, nor a suggestion of one. I left the link with a clear outline of its use cases and functionality status.
The example you provided does not target my library, |
Created 2 bug reports in your repository. Sorry to mention it here, but you seem to get notifications from this thread faster than from your repository. |
lmao |
Hi @keichi, I forked @Ericbla's binary-parser-encoder from #73 and updated it by cherry-picking all the commits from this repo that were missing there. I also got all current tests passing, both for the parser and the encoder.
Your binary-parser has the most developer-friendly syntax out there, in my opinion, and it deserves an encoder to go along with it; many users have also been showing interest in one. I'll be happy to commit for a little while to ensure this feature is implemented according to what you think is best for you and for the project.
Please let me know if you'd like to collaborate on this :)
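For anyone unfamiliar with what an encoder adds, it is the inverse of `parse()`. Here is a self-contained illustration using plain Buffers for the 1-byte-ID plus 4-byte-value record from the demo earlier in this thread; the helper functions are hypothetical, not the fork's API:

```javascript
// Hypothetical helpers illustrating parse/encode symmetry; the real fork
// generates equivalent logic from a Parser definition.
function parseChild(buf) {
  return { id: buf.readUInt8(0), value: buf.readUInt32LE(1) };
}

function encodeChild(obj) {
  const out = Buffer.alloc(5);
  out.writeUInt8(obj.id, 0);
  out.writeUInt32LE(obj.value, 1);
  return out;
}

const original = Buffer.from([0x01, 0x2A, 0x00, 0x00, 0x00]); // id=1, value=42
const decoded = parseChild(original);
const reencoded = encodeChild(decoded);

console.log(decoded);                    // { id: 1, value: 42 }
console.log(original.equals(reencoded)); // true: lossless round trip
```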