Conversation
This patch adds library support for the rexec() function by:

1. Conditionally including headers based on build configuration
   - Use kmalloc.h for kernel builds, stdlib.h otherwise
   - Include fs/fs.h only when CONFIG_FILE_STREAM is enabled
2. Adding stream support for rexec functionality
3. Enhancing netdb functions (getaddrinfo, gethostentbynamer) to support remote execution scenarios

Signed-off-by: hongfengchen <hongfengchen@xiaomi.com>
Please do not include the affected files section; it is already visible from the diff.
@linguini1 I think @txy-21 and others add this just because you never comment on the code change in your reviews, but frequently ask people to explain why they made a change. The description helps you understand the code change and avoids those simple questions.
Okay. It doesn't help. I don't care what files are modified, and it doesn't help the review. The summary section is much more valuable, and this PR has a great summary! I don't understand the hostility or why you think I don't read any code in my reviews. The first thing I look at is the PR description to get an idea of what the change is doing. If there are no test logs, I don't bother reading the code. It could be untested for all I know, and that is a waste of review time. If we're playing the blame game, I don't think you ever read the PR descriptions in any of your reviews, seeing as you constantly approve AI-generated PRs with no comments.

To clarify @txy-21, your PR is great! I am just suggesting you don't include the files changed section anymore since it does not add information. Trying to save time for you and the reviewers. It was not meant to be an attack at all!
@linguini1 Could you point out some PRs where you have added comments on the real code change, like I do in the following examples?
Let's summarize how Xiaomi does testing internally before we upstream a patch to the community:
No test log doesn't mean the patch wasn't tested or verified. Most patches Xiaomi upstreams were made in the last year and have already passed the above tests. Many bugs only happen in a special timing or scenario like #18266; when the error happens there is no log at all, so how can @Otpvondoiatsdo provide you a log? Do you print a log after each statement? It's stupid to ask for the log for each patch without looking at the change!
All patches from the Xiaomi team are already reviewed by me over many rounds before they can be merged into our internal repo; that's why I just approve them directly. Here is an internal patch from @szafonimateusz-mi. @Fix-Point is one of my team members; he developed the hrtimer patches on GitHub directly (#18224 and #18131), and I made many comments before approving. @wangchdo made significant changes to signal and hrtimer recently (#17642 and #17642); most comments came from me. Please do constructive review, like @acassis and @jerpelea do (e.g. improve the PR organization, fix the commit message, or add the documentation). BTW, since the recent contribution rule change, the Xiaomi team has already decided that ALL new features and components which are not tightly coupled with NuttX will never be upstreamed in the future. If you still do reviews like this, my team will leave the NuttX community and never upstream anymore. @acassis.
@xiaoxiang781216 thanks for explaining the details of how your internal testing works, but Matteo's main point is that the usage of AI is creating exaggerated text that doesn't help much during the CI. Unfortunately, communication is not easy, even between two people who know each other and speak the same language; imagine in our case, where we have "strange" persons from different places of the world. If @linguini1 asks for more testing, it is in good will, because although your team is testing it more than 10k times internally, your testing environment is different from NuttX mainline. This is so true that sometimes code that passed your rigorous tests cannot even compile when merged into mainline (it happened a few days ago). So, until we get the automatic hardware testing working with the mainline, it is important to have proof that the PR was tested using the mainline. And I know sometimes it can sound like a burden to your team, but this is the only way (before having the automated hardware testing in place) to guarantee that a PR is not breaking the mainline.
If so, we will stop contributing now! Sorry, bye.
This is great. I understand that it's probably frustrating to have to show more testing when you upstream to NuttX with such a thorough internal review. However, NuttX has its own review requirements. We don't know what test cases you ran, in what environments, etc. I would assume a lot of test cases are run on a Vela version of NuttX. If your review process is so strict, maybe you could just incorporate saving some log output from these 10000+ manual tests so that you could include them in the NuttX PRs?
I don't understand why it's stupid. If you discover a deadlock issue, there must be some scenario in which you encountered it. If it's not reproducible, that can be stated in the testing information. In this particular PR, the author told me that logs did exist, just that they were too old and lost. From my perspective, that means you have a way to obtain them, so that's what I requested. How am I supposed to read your mind about not being able to reproduce something? Yes, usually when I make a change, I have some kind of output that lets me verify that it worked and solved the problem. Sorry if that's not part of your process.
Cool, then you are not following the review process from the contributing guidelines. It doesn't matter how many times you've reviewed them, they must follow the contributing guidelines.
I am! I politely ask for test logs as per the contributing guidelines to make sure that all changes are appropriately tested!
Okay? The NuttX community came together and decided on contributing guidelines as a community. We made these changes in a public forum of which you are a part, and voted in a capacity where you have a binding vote as a PMC member. That is the avenue by which you were able to voice these concerns. Now these guidelines are here, and it is up to contributors to follow them. Otherwise, reviewers are able to request changes.
Okay, that is your decision. I will not pretend like Xiaomi doesn't contribute lots of valuable work to NuttX, but we have to hold Xiaomi PRs to the same standard as everyone else. NuttX is not a Xiaomi product. It's purposely set up so that companies with vested interest cannot just test "internally" and merge their own PRs without demonstrating to the entire NuttX community that their change does what it says it does. Really, I can understand the frustration of having to show more testing if you've already automated it internally. @acassis is right that I am requesting test logs for the NuttX community, not because of some personal vendetta against you or Xiaomi. But these logs are necessary. Please see it from our perspective:
We don't have great CI infrastructure, we have limited ability and time to review and test changes ourselves, we have limited CI resources and we don't have perfect, easy-to-run test cases that we can just automate to verify every PR with a change. Most of all, none of us are salaried full-time workers for NuttX! How are we meant to keep up with reviewing constant changes from a company with a development team? Please try to facilitate the reviews by following the contributing guidelines and giving us the information we need to make an informed review. We cannot just "trust" Xiaomi contributors; it's not fair to other contributors or the community members who voted for the contributing guidelines. It is also hard to trust these contributors when they are now regularly submitting false information in their PRs. I hope you understand.
Yes, you are right: every contributor should provide details in the PR of what was modified and why. But requesting ONLY the test log without reviewing the real code changes is "stupid"; in lots of situations, logs say nothing. For example, all of the following backtraces come from gdb with a coredump after a deadlock!
Thank you. As I have said for years, you are going too fast and too big for NuttX. It will be better if NuttX picks interesting contributions from openvela, not the reverse. This will allow going forward at a much more reasonable pace. I thank you for your past useful contributions, and I hope NuttX can continue to benefit from your open source work in a way that is more sustainable.






Summary
Add library support for the rexec() function with conditional header inclusion based on build configurations.
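For context, rexec() traditionally follows the BSD prototype int rexec(char **ahost, int inport, const char *user, const char *passwd, const char *cmd, int *fd2p). The sketch below shows typical usage against a hypothetical host running an rexecd service; the host address and credentials are placeholders, and the exact NuttX prototype and behavior may differ from this illustration.

```c
#include <arpa/inet.h>   /* htons() */
#include <netdb.h>       /* rexec() prototype (traditionally) */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
  /* Hypothetical target and credentials, for illustration only */

  char *host = "192.168.1.10";
  char buf[128];
  ssize_t n;
  int fd;

  /* 512/tcp is the conventional "exec" service port */

  fd = rexec(&host, htons(512), "user", "password", "ls /", NULL);
  if (fd < 0)
    {
      fprintf(stderr, "rexec() failed\n");
      return 1;
    }

  /* Stream the remote command's output to stdout */

  while ((n = read(fd, buf, sizeof(buf))) > 0)
    {
      write(STDOUT_FILENO, buf, n);
    }

  close(fd);
  return 0;
}
```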
Why:
The rexec() function requires different header files depending on the build configuration (kernel vs. flat builds) and feature enablement (file streams). Without proper conditional compilation guards, the code fails to build in certain configurations.
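The fix is essentially a set of preprocessor guards. Below is a minimal sketch of the pattern, assuming the guards are keyed on __KERNEL__ and CONFIG_FILE_STREAM as described above; the exact contents of include/nuttx/lib/lib.h may differ.

```c
/* Illustrative only: the real include/nuttx/lib/lib.h may organize
 * its guards differently.
 */

#include <nuttx/config.h>

#ifdef __KERNEL__
#  include <nuttx/kmalloc.h>   /* Kernel builds: kernel memory allocator */
#else
#  include <stdlib.h>          /* Flat/user builds: standard C library */
#endif

#ifdef CONFIG_FILE_STREAM
#  include <nuttx/fs/fs.h>     /* File stream support only when enabled */
#endif
```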
What changed:
- include/nuttx/lib/lib.h to conditionally include headers:
  - nuttx/kmalloc.h for kernel builds (__KERNEL__ defined)
  - stdlib.h for flat builds
  - nuttx/fs/fs.h inclusion guarded by CONFIG_FILE_STREAM
- include/nuttx/streams.h to add stream support for rexec
- netdb functions (lib_getaddrinfo.c, lib_gethostentbynamer.c) to support remote execution scenarios
- libs/libc/netdb/lib_netdb.h

Impact
Scope:
- CONFIG_BUILD_FLAT - Uses standard library allocation
- CONFIG_BUILD_KERNEL - Uses kernel memory allocator (allocation selection is sketched after this section)
- CONFIG_FILE_STREAM - Conditionally includes file stream support

Affected files:
- include/nuttx/lib/lib.h - Header inclusion logic
- include/nuttx/streams.h - Stream definitions
- libs/libc/netdb/lib_getaddrinfo.c - Address info resolution
- libs/libc/netdb/lib_gethostentbynamer.c - Host name resolution
- libs/libc/netdb/lib_netdb.h - Network database declarations

Breaking changes: None
Backward compatibility: Fully maintained - existing code continues to work
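As a hedged illustration of the scope above, library allocation is typically redirected per build mode. The lib_malloc/lib_free macro names below follow a common NuttX convention but are shown only as a sketch, not the exact contents of the patched header.

```c
/* Sketch of per-build allocation selection; the actual macro names and
 * definitions in include/nuttx/lib/lib.h may differ.
 */

#ifdef __KERNEL__
#  include <nuttx/kmalloc.h>
#  define lib_malloc(s)  kmm_malloc(s)   /* Kernel heap */
#  define lib_free(p)    kmm_free(p)
#else
#  include <stdlib.h>
#  define lib_malloc(s)  malloc(s)       /* Standard C library heap */
#  define lib_free(p)    free(p)
#endif
```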
Testing
Build testing:
Verified compilation across multiple configurations: