
Conversation


@weemve commented Jan 2, 2026

Problem

Currently, the .target platform supports generating an OSGi manifest when a Maven artifact is missing one, using the option generate. However, there is no way to modify or regenerate an existing manifest for Maven-based libraries.

When migrating third-party libraries to Maven locations in .target files, many artifacts already include a manifest that does not meet our requirements (e.g., incomplete OSGi metadata). We need a mechanism to rewrite these manifests without manually editing the JARs.
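
For reference, a minimal sketch of how such a Maven target location looks today (the com.example:example-lib coordinates are placeholders; the exact attribute set depends on the m2e/PDE version in use):

    <!-- "generate" only creates a manifest when the artifact ships without one -->
    <location includeDependencyDepth="none" includeSource="true"
              missingManifest="generate" type="Maven">
        <dependencies>
            <dependency>
                <groupId>com.example</groupId>
                <artifactId>example-lib</artifactId>
                <version>1.2.3</version>
                <type>jar</type>
            </dependency>
        </dependencies>
    </location>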

Proposed Solution

Introduce a new option to rewrite a manifest file in the Maven Artifact Target Entry Dialog:

(Screenshot: the Maven Artifact Target Entry Dialog showing the new rewrite option)

The new option generate_rewrite behaves like the current generate mode, but applies even when a manifest already exists: the manifest is regenerated using the same logic as for missing manifests. This allows consistent handling of Maven artifacts during migration without manual intervention.
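
As a sketch of the proposed usage (the attribute value is the option proposed here, not existing behaviour), the location shown above would only need to switch the option:

    <!-- proposed: regenerate the manifest even if the JAR already contains one -->
    <location includeDependencyDepth="none" includeSource="true"
              missingManifest="generate_rewrite" type="Maven">
        ...
    </location>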

Benefits

  • Simplifies migration of third-party libraries to Maven-based .target definitions.
  • Ensures uniform OSGi metadata across all artifacts.
  • Reduces manual work and risk of errors when modifying manifests.

This Pull Request is related to eclipse-pde/eclipse.pde#2164


@laeubi (Member) left a comment


Thank you for taking the time to create this pull request and for the detailed explanation of the use case! I appreciate the effort you've put into this solution.

However, after careful consideration, I don't think we can accept this change. Let me explain the reasoning behind this decision:

The Challenge with Modifying Existing OSGi Manifests

While I understand the convenience this feature would provide, allowing modifications to existing OSGi manifests goes against some core principles we try to maintain:

1. Encouraging Upstream Fixes

When an OSGi manifest contains incorrect or incomplete metadata, the right solution is to fix it at the source. Providing an easy workaround risks making these temporary fixes permanent, meaning the upstream library never gets properly corrected. This ultimately affects the entire ecosystem, not just individual projects.

2. Artifact Identity and Consistency

A fundamental principle in dependency management is that a given GAV (Group, Artifact, Version) coordinate should represent the same artifact everywhere. When we allow manifest modifications, the same GAV could exist "in the wild" with different OSGi metadata depending on where and how it was processed. This leads to:

  • Hard-to-diagnose dependency conflicts
  • Reproducibility issues across different environments
  • Confusion when sharing artifacts or debugging issues with others

3. Hidden Technical Debt

Making it easy to modify manifests can mask underlying problems rather than solving them. What appears to be a quick fix today can become a maintenance burden that's difficult to track and understand months or years later.

What We Recommend Instead

For cases where you absolutely need to work with problematic artifacts:

  • Short-term/prototyping: The current approach of wrapping artifacts without existing OSGi manifests can serve as a temporary solution while working toward proper upstream support
  • Long-term solution: Consider helping the library maintainers adopt proper OSGi metadata, or if necessary, create a properly forked/wrapped version under your own GAV using tools like tycho-wrap

This way, you have full control and transparency about what's been modified, and you're not creating confusion about what the original GAV represents.

Final Thoughts

I realize this might not be the answer you were hoping for, and I genuinely appreciate the thought you put into this feature. The goal isn't to make things harder, but to maintain principles that help keep the OSGi ecosystem healthy and predictable in the long run.

If you'd like to discuss alternative approaches or have questions about the recommended solutions, I'm happy to continue the conversation!

@HeikoKlare

Thank you for the detailed response, Christoph!

To put this proposal into some more context: we are working on replacing our outdated custom Orbit with the direct usage of Maven dependencies via targlets in our target platform. As is common in large projects, there are some dependencies we cannot (yet) migrate to the newest versions, some with "conflicting" transitive dependencies (such as the good old Guava dependencies), and some we do not want to have at all for various reasons. I fully agree with everything you mentioned regarding what would be the "right" solution; it's just that those approaches are not universally feasible for every single dependency in a large-scale project. We are of course not talking about rewriting the manifests of a large number of dependencies, only a handful that are problematic for us. And we of course properly evaluate whether such a rewrite is appropriate and assess the risk of doing so.
For us, we currently see these options:

  • Update all our dependencies and contribute to them upstream so that we can properly consume all dependency chains as they are: this is something we will not achieve in the short or mid term, and honestly we will not be able to do it for every single case at all
  • Stick with the outdated Orbit technology, at least for the dependencies that are not up-to-date and require rewrites: that splits our dependency management across different technologies, and the Orbit scripts we use are really something we would like to get rid of.
  • Maintain and deploy an m2e and maybe PDE fork with the necessary functionality for our tooling: definitely better than the option above, as we avoid mixing different means of dependency management; but instead of this, we would of course prefer to provide that functionality upstream and consume the tooling as-is :-)

The problem you mention regarding artifact identity and consistency is definitely a relevant one: without any further adaptations, you would install Maven artifacts with modified metadata (but the same version) into your local cache and maybe even deploy them to some repository, producing a version "conflicting" with what is, e.g., available at Central. To avoid that, our goal was to give those Maven artifacts with rewritten manifests a different version by adding a timestamp. If I am not mistaken, that was the underlying goal of this PR eclipse-pde/eclipse.pde#2164, which just seems to have an unfitting description.

So the overall conceptual goal was: in case you want to rewrite the manifest (at your own risk, knowing about the problems it may produce), you can use an according m2e extension as provided here, and in that case the version of the generated Maven artifact will differ from the original one by an added/modified timestamp.
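
To illustrate what that would mean for the generated metadata (a sketch under the assumption that a rewrite would reuse the bnd-style instructions and macros already available for wrapping manifest-less artifacts; the concrete names are illustrative, not a description of the current implementation):

    <!-- hypothetical instructions for a rewritten artifact: the version gets a
         timestamp qualifier so it can no longer be mistaken for the original 1.2.3 -->
    <instructions><![CDATA[
        Bundle-SymbolicName: wrapped.example-lib
        Bundle-Version:      1.2.3.${tstamp;yyyyMMddHHmm}
        Import-Package:      *
        Export-Package:      *;version="1.2.3"
    ]]></instructions>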

Do you see a chance to bring this in a direction where we can integrate it into m2e/PDE and avoid having to patch our tooling locally? Maybe by enforcing a modification of the artifact version when doing a rewrite (to address your second concern) and by somehow making it more obvious that this is a risky feature? We are of course open to any ideas to move forward in a way that is appropriate for upstream m2e/PDE and capable of solving our current issues.


@laeubi commented Jan 7, 2026

We are of course not talking about rewriting the manifests of a large number of dependencies, only a handful that are problematic for us.

I've occasionally encountered similar problems and initially thought adding this option would be helpful. However, in the long run, I've found that alternative solutions work better:

  • Sticking with the legacy approach temporarily
  • Reconsidering whether the dependency is truly needed and migrating away from it
  • Creating a forked variant under a new GAV
  • Manually re-bundling the dependency and deploying it to a corporate Nexus or similar
  • Copying a single JAR into the target project and using a Directory target location

Since you mentioned this affects only a subset of dependencies, these migrations don't need to happen as a big-bang change.

Maintain and deploy an m2e and maybe PDE fork with the necessary functionality for our tooling

Just a heads-up: making it work in the IDE alone typically isn't sufficient. Whatever direction we take, Tycho will need corresponding changes as well.

The problem you mention regarding artifact identity and consistency...

These items are never deployed to Maven repositories (even local ones), and attempts to do so would fail anyway, so I'm not particularly concerned about that aspect.

What we're actually dealing with is the P2/OSGi ecosystem:

  • In P2, the id+version combination is a globally unique identifier. Once you have a problematic artifact there, it persists, which can create challenging issues.
  • When a version range exists in a package, P2 might select any compatible provider, not necessarily the one you expect.

This needs careful handling. The existing wrapping feature is already susceptible to these problems. While I trust you'd handle it carefully, others might not, so I'm somewhat cautious here.

Do you see a chance that we can bring this into a direction so that we may integrate it into m2e/PDE...

I think we should avoid introducing a new type that duplicates existing functionality—as evidenced by the extensive test changes. For a feature like this, I'd expect only new tests to be added, with no modifications to existing tests. Also, the PR should minimize changes; I notice many formatting-only changes with no actual code modifications.

What if we defined these constraints for this feature (see the sketch after this list):

  1. Add a new attribute manifestOverride
  2. When enabled, custom instructions must be provided and the resulting BSN must differ from the original (validated during generation, with an error reported if they match)
  3. The dependency mode must be none
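
A rough sketch of how that could look in a .target file (everything shown is hypothetical and only illustrates the constraints above; the manifestOverride attribute and the BSN validation do not exist today):

    <!-- hypothetical: manifestOverride triggers the rewrite, dependencies are not
         resolved transitively, and the instructions must yield a BSN different from
         the original bundle, otherwise generation reports an error -->
    <location includeDependencyDepth="none" manifestOverride="true" type="Maven">
        <dependencies>
            <dependency>
                <groupId>com.example</groupId>
                <artifactId>example-lib</artifactId>
                <version>1.2.3</version>
                <type>jar</type>
            </dependency>
        </dependencies>
        <instructions><![CDATA[
            Bundle-SymbolicName: acme.rebundled.example-lib
            Bundle-Version:      1.2.3.${tstamp}
            Import-Package:      *
            Export-Package:      *;version="1.2.3"
        ]]></instructions>
    </location>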

As mentioned, a similar implementation will be needed in Tycho as well; also, I don't think it would require any changes to PDE at all.
