crawling errors needed #26

@prasnune

Description

Hello Team,

Problem:
When using the Dropbox Crawler, I get an error after it has been running for a long time. The message in the CloudWatch logs indicates that a NullPointerException occurred while fetching documents from Dropbox in full crawl mode. This suggests there is an issue either with the data being retrieved from Dropbox or with how the crawler processes it.

Possible Solution:
How am I supposed to check thousands and thousands of files for "inconsistencies"? Wouldn't it make more sense for the crawler to report an error that names the file it cannot process?
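To illustrate the requested behavior, here is a minimal, hypothetical sketch (not the crawler's actual code) of a crawl loop that catches a per-file failure, logs the offending path, and continues instead of aborting the whole full crawl. The `process_entry` function and the entry dictionaries are stand-ins for whatever the real connector does per document.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("crawler")

def process_entry(entry):
    # Stand-in for the real per-document logic; raises on bad data,
    # much like the NullPointerException described above.
    if entry.get("metadata") is None:
        raise ValueError(f"missing metadata for {entry['path']}")
    return entry["path"]

def crawl(entries):
    succeeded, failed = [], []
    for entry in entries:
        try:
            succeeded.append(process_entry(entry))
        except Exception:
            # Log the failing file's path so the user knows which
            # document to inspect, then move on to the next one.
            log.exception("failed to process %s", entry.get("path"))
            failed.append(entry.get("path"))
    return succeeded, failed
```

With this pattern, `crawl([{"path": "/a.txt", "metadata": {}}, {"path": "/b.txt", "metadata": None}])` would return `(["/a.txt"], ["/b.txt"])` and emit a log line identifying `/b.txt` as the file it could not process.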
