
Conversation


@Salehin-07 Salehin-07 commented Dec 4, 2025

Description

Added a new M3U link scanner that can tell whether an M3U link is broken or not.

Fixes #3270

Type of change

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines (Clean Code) of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have created a helpful and easy-to-understand README.md
  • My documentation follows the Template for README.md
  • I have added the project metadata in the PR template.
  • I have created the requirements.txt file if needed.

Category:

  • Scrappers
  • Others

Title: <M3U link scanner>

Folder: <M3U_link_scanner>

Requirements: <M3U_link_scanner/requirements.txt>

Script: <M3U_link_scanner/iptv_tester.py>

Arguments: <none>
Contributor: <Salehin-07>

Description: <Made a script that thoroughly checks an IPTV .m3u link and reports the results. Takes approximately 1 minute per link>

Summary by Sourcery

Add a new script for comprehensively testing IPTV M3U links and classifying them as working or broken based on multiple connectivity and streaming checks.

New Features:

  • Introduce an IPTV link tester script that validates M3U stream URLs using multiple HTTP, socket, and ffmpeg-based checks and writes results to separate output files.

Documentation:

  • Add a README explaining installation, configuration, usage, and behavior of the IPTV M3U link tester tool.

This script tests IPTV links for functionality using various methods including HTTP requests and socket connections. It logs working and broken links to separate files.
Added comprehensive documentation for the IPTV Link Tester, detailing features, installation, usage, configuration, output files, testing process, troubleshooting, and licensing.
Added the IPTV link scanner script in Python.

sourcery-ai bot commented Dec 4, 2025

Reviewer's Guide

Adds a new IPTV M3U link scanner script that reads IPTV URLs from a file, performs multiple network- and ffmpeg-based health checks per link, and writes working vs broken links with success rates to separate output files, documented by a dedicated README and backed by a minimal requirements file.

Sequence diagram for comprehensive testing of a single IPTV link

sequenceDiagram
    actor User
    participant CLI as main
    participant Tester as IPTVLinkTester
    participant FileSystem
    participant HTTP as requests
    participant Net as socket
    participant FFprobe as ffprobe_subprocess

    User->>CLI: run iptv_tester.py
    CLI->>Tester: instantiate IPTVLinkTester()
    CLI->>Tester: process_links()
    Tester->>FileSystem: open input_file
    FileSystem-->>Tester: list of links

    loop for each link
        Tester->>Tester: test_link_comprehensive(url, index, total)

        loop HTTP HEAD attempts
            Tester->>HTTP: head(url, headers, timeout)
            HTTP-->>Tester: status_code
        end

        loop HTTP GET partial attempts
            Tester->>HTTP: get(url, range 0-1024, stream=True)
            HTTP-->>Tester: response
            Tester->>Tester: read small chunk
        end

        loop HTTP streaming attempts
            Tester->>HTTP: get(url, stream=True)
            HTTP-->>Tester: response
            Tester->>Tester: read multiple chunks
        end

        loop socket attempts
            Tester->>Net: connect_ex(host, port)
            Net-->>Tester: result
        end

        loop ffprobe attempts
            Tester->>FFprobe: subprocess.run(ffprobe, url, timeout)
            FFprobe-->>Tester: returncode
        end

        Tester->>Tester: aggregate results
        alt any test passed
            Tester->>Tester: mark link as working
        else all tests failed
            Tester->>Tester: mark link as broken
        end
    end

    Tester->>FileSystem: write working_links.txt
    Tester->>FileSystem: write broken_links.txt
    Tester-->>CLI: summary printed
    CLI-->>User: display results

Class diagram for the IPTV M3U link scanner

classDiagram
    class IPTVLinkTester {
        +str input_file
        +str working_file
        +str broken_file
        +int timeout
        +int attempts
        +dict headers
        +__init__(input_file, working_file, broken_file)
        +test_http_head(url)
        +test_http_get_partial(url)
        +test_http_streaming(url)
        +test_socket_connection(url)
        +test_with_ffmpeg(url)
        +test_link_comprehensive(url, link_number, total_links)
        +process_links()
    }

    class ModuleLevel {
        +main()
    }

    ModuleLevel ..> IPTVLinkTester : creates

Flow diagram for IPTV link processing and classification

flowchart TD
    A_start([Start]) --> B_init[Create IPTVLinkTester instance]
    B_init --> C_read[Read iptv_links.txt]
    C_read -->|file not found or empty| Z_end_error([Exit with error message])
    C_read -->|links loaded| D_iterate[[For each link]]

    D_iterate --> E_test["test_link_comprehensive(url, link_number, total_links)"]

    subgraph Comprehensive_testing_per_link
        direction TB
        E_test --> F1[HTTP HEAD tests<br/>1..attempts]
        F1 --> F2[HTTP GET partial tests<br/>1..attempts]
        F2 --> F3[HTTP streaming tests<br/>1..attempts]
        F3 --> F4[Socket connection tests<br/>1..attempts]
        F4 --> F5[FFmpeg probe tests<br/>1..attempts]
        F5 --> G_agg[Aggregate all test results<br/>compute success_percentage]
        G_agg --> H_decision{Any test passed?}
        H_decision -->|yes| I_work[Append link with success rate<br/>to working_links]
        H_decision -->|no| J_broken[Append link with note<br/>to broken_links]
    end

    I_work --> K_next[Optional delay before<br/>next link]
    J_broken --> K_next

    K_next -->|more links| D_iterate
    K_next -->|no more links| L_write[Write working_links.txt<br/>and broken_links.txt]
    L_write --> M_summary[Print testing summary]
    M_summary --> N_end([End])

    Z_end_error --> N_end

File-Level Changes

Change: Introduce IPTVLinkTester script that performs comprehensive multi-method health checks on IPTV/M3U links read from an input file and writes categorized results.
  • Define IPTVLinkTester class with configurable input, working, and broken link file paths plus timeout, attempts, and HTTP headers.
  • Implement multiple test methods (HTTP HEAD, partial HTTP GET, streaming GET, raw socket connect, optional ffprobe), each returning boolean availability.
  • Add comprehensive per-link test orchestration that runs all methods for multiple attempts, tracks success metrics, prints detailed progress, and classifies links as working or broken.
  • Implement process_links to load links from a text file, iterate and test each link with delays, and write annotated results to working_links.txt and broken_links.txt, with a console summary.
  • Provide a main entry point that instantiates IPTVLinkTester with defaults and starts processing when executed as a script.
Files: M3U_link_scanner/iptv_tester.py

Change: Document the IPTV link tester usage, behavior, configuration, and output for end users.
  • Describe the tool's purpose, feature set, and testing methods, including retry behavior and success-rate computation.
  • Provide installation steps, including optional ffmpeg setup guidance for different platforms.
  • Explain how to prepare the iptv_links.txt input, run the script, interpret working/broken output files, and tweak configuration parameters.
  • Detail the testing process, estimated runtime, troubleshooting tips, and caveats/limitations, along with license and disclaimer notes.
Files: M3U_link_scanner/README.md

Change: Declare Python dependencies required by the IPTV link tester script.
  • Add requests with a modern minimum version for HTTP operations.
  • Add urllib3 with a modern minimum version, aligning with requests' HTTP stack.
Files: M3U_link_scanner/requirements.txt
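
To make the guide concrete, here is a minimal sketch of the script's shape, reconstructed from the class diagram and file-level changes above. The default values and abbreviated method bodies are assumptions, and only two of the five test methods are shown; the PR's actual code may differ.

```python
# Minimal sketch reconstructed from the reviewer's guide; defaults and
# bodies are illustrative, not the PR's exact code.
import socket
import time
from urllib.parse import urlparse

import requests


class IPTVLinkTester:
    def __init__(self, input_file="iptv_links.txt",
                 working_file="working_links.txt",
                 broken_file="broken_links.txt"):
        self.input_file = input_file
        self.working_file = working_file
        self.broken_file = broken_file
        self.timeout = 10      # seconds per request (assumed default)
        self.attempts = 3      # attempts per test method (assumed default)
        self.headers = {"User-Agent": "Mozilla/5.0"}

    def test_http_head(self, url):
        """HEAD request: 2xx/3xx status codes count as reachable."""
        try:
            response = requests.head(url, headers=self.headers,
                                     timeout=self.timeout, allow_redirects=True)
            return response.status_code in {200, 206, 301, 302, 307, 308}
        except requests.RequestException:
            return False

    def test_socket_connection(self, url):
        """Raw TCP connect to the URL's host and port."""
        try:
            parsed = urlparse(url)
            if not parsed.hostname:
                return False
            port = parsed.port or (443 if parsed.scheme == "https" else 80)
            with socket.create_connection((parsed.hostname, port),
                                          timeout=self.timeout):
                return True
        except (OSError, ValueError):
            return False

    def process_links(self):
        """Read links, test each one, and write categorized results."""
        with open(self.input_file) as f:
            links = [line.strip() for line in f if line.strip()]
        working, broken = [], []
        for url in links:
            ok = self.test_http_head(url) or self.test_socket_connection(url)
            (working if ok else broken).append(url)
            time.sleep(1)  # polite delay between links
        with open(self.working_file, "w") as f:
            f.write("\n".join(working))
        with open(self.broken_file, "w") as f:
            f.write("\n".join(broken))


if __name__ == "__main__":
    IPTVLinkTester().process_links()
```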


@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • Several modules are imported but never used (e.g., concurrent.futures, os); consider removing them to keep the script minimal and easier to maintain.
  • Most network and subprocess calls use bare except: blocks, which can mask real issues; catching specific exceptions (e.g., requests.RequestException, socket.error, subprocess.TimeoutExpired) would make failures clearer and safer.
  • The repeated manual socket lifecycle in test_socket_connection can be simplified and made safer by using a context manager (e.g., with socket.create_connection(...) as sock:) to ensure resources are always cleaned up.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Several modules are imported but never used (e.g., `concurrent.futures`, `os`); consider removing them to keep the script minimal and easier to maintain.
- Most network and subprocess calls use bare `except:` blocks, which can mask real issues; catching specific exceptions (e.g., `requests.RequestException`, `socket.error`, `subprocess.TimeoutExpired`) would make failures clearer and safer.
- The repeated manual `socket` lifecycle in `test_socket_connection` can be simplified and made safer by using a context manager (e.g., `with socket.create_connection(...) as sock:`) to ensure resources are always cleaned up.

## Individual Comments

### Comment 1
<location> `M3U_link_scanner/iptv_tester.py:28` </location>
<code_context>
+        try:
+            response = requests.head(url, headers=self.headers, timeout=self.timeout, allow_redirects=True)
+            return response.status_code in [200, 206, 301, 302, 307, 308]
+        except:
+            return False
+
</code_context>

<issue_to_address>
**issue (bug_risk):** Avoid bare `except:` to prevent masking unexpected errors and interruptions.

A bare `except:` here will hide unexpected errors (including `KeyboardInterrupt`), making real failures hard to detect. Please catch the specific exceptions you expect (e.g., `requests.RequestException`) and let others propagate. The same adjustment applies to `test_http_get_partial`, `test_http_streaming`, and `test_socket_connection`.
</issue_to_address>
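
A minimal sketch of that adjustment, assuming the `test_http_head` body quoted above; the same shape applies to the other methods named:

```python
# Sketch only: catch the requests errors we actually expect and let
# everything else (including KeyboardInterrupt) propagate.
def test_http_head(self, url):
    """Test using HTTP HEAD request"""
    try:
        response = requests.head(url, headers=self.headers,
                                 timeout=self.timeout, allow_redirects=True)
        return response.status_code in {200, 206, 301, 302, 307, 308}
    except requests.RequestException:
        return False
```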

### Comment 2
<location> `M3U_link_scanner/iptv_tester.py:79-90` </location>
<code_context>
+                text=True
+            )
+            return result.returncode == 0
+        except (subprocess.TimeoutExpired, FileNotFoundError, Exception):
+            return False
+
</code_context>

<issue_to_address>
**suggestion (bug_risk):** Catching both specific exceptions and a broad `Exception` makes the specific entries redundant.

This also obscures intent: it looks like we’re handling those specific errors, but in reality everything is treated the same. Either narrow the `except` to just the exceptions you truly expect, or use a single bare `Exception` only if you intentionally want to catch and ignore all failures here.

```suggestion
        try:
            # Check if ffprobe is available
            result = subprocess.run(
                ['ffprobe', '-v', 'error', '-show_entries', 
                 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', url],
                capture_output=True,
                timeout=self.timeout,
                text=True
            )
            return result.returncode == 0
        except (subprocess.TimeoutExpired, FileNotFoundError):
            return False
```
</issue_to_address>

### Comment 3
<location> `M3U_link_scanner/iptv_tester.py:37-40` </location>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like conditionals, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>

### Comment 4
<location> `M3U_link_scanner/iptv_tester.py:49-57` </location>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests)) See Comment 3 for the full explanation.
</issue_to_address>

### Comment 5
<location> `M3U_link_scanner/iptv_tester.py:52-56` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests)) See Comment 3 for the full explanation.
</issue_to_address>

### Comment 6
<location> `M3U_link_scanner/iptv_tester.py:53-56` </location>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests)) See Comment 3 for the full explanation.
</issue_to_address>

### Comment 7
<location> `M3U_link_scanner/iptv_tester.py:55-56` </location>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests)) See Comment 3 for the full explanation.
</issue_to_address>

### Comment 8
<location> `M3U_link_scanner/iptv_tester.py:109-132` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests)) See Comment 3 for the full explanation.
</issue_to_address>

### Comment 9
<location> `M3U_link_scanner/iptv_tester.py:113-128` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests)) See Comment 3 for the full explanation.
</issue_to_address>

### Comment 10
<location> `M3U_link_scanner/iptv_tester.py:120-123` </location>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests)) See Comment 3 for the full explanation.
</issue_to_address>

### Comment 11
<location> `M3U_link_scanner/iptv_tester.py:145-148` </location>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests)) See Comment 3 for the full explanation.
</issue_to_address>

### Comment 12
<location> `M3U_link_scanner/iptv_tester.py:27-28` </location>
<code_context>
    def test_http_head(self, url):
        """Test using HTTP HEAD request"""
        try:
            response = requests.head(url, headers=self.headers, timeout=self.timeout, allow_redirects=True)
            return response.status_code in [200, 206, 301, 302, 307, 308]
        except:
            return False

</code_context>

<issue_to_address>
**suggestion (code-quality):** We've found these issues:

- Use set when checking membership of a collection of literals ([`collection-into-set`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/collection-into-set/))
- Use `except Exception:` rather than bare `except:` ([`do-not-use-bare-except`](https://docs.sourcery.ai/Reference/Default-Rules/suggestions/do-not-use-bare-except/))

```suggestion
            return response.status_code in {200, 206, 301, 302, 307, 308}
        except Exception:
```
</issue_to_address>

### Comment 13
<location> `M3U_link_scanner/iptv_tester.py:33-43` </location>
<code_context>
    def test_http_get_partial(self, url):
        """Test using HTTP GET with range request"""
        try:
            headers = self.headers.copy()
            headers['Range'] = 'bytes=0-1024'
            response = requests.get(url, headers=headers, timeout=self.timeout, stream=True, allow_redirects=True)
            if response.status_code in [200, 206]:
                # Try to read a small chunk
                chunk = next(response.iter_content(1024), None)
                return chunk is not None and len(chunk) > 0
            return False
        except:
            return False

</code_context>

<issue_to_address>
**issue (code-quality):** We've found these issues:

- Use set when checking membership of a collection of literals ([`collection-into-set`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/collection-into-set/))
- Use `except Exception:` rather than bare `except:` ([`do-not-use-bare-except`](https://docs.sourcery.ai/Reference/Default-Rules/suggestions/do-not-use-bare-except/))
</issue_to_address>
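
A sketch of the method with both fixes applied, mirroring the pattern suggested in Comment 12; using `requests.RequestException` rather than the broader `Exception` follows the overall feedback:

```python
def test_http_get_partial(self, url):
    """Test using HTTP GET with range request"""
    try:
        headers = self.headers.copy()
        headers['Range'] = 'bytes=0-1024'
        response = requests.get(url, headers=headers, timeout=self.timeout,
                                stream=True, allow_redirects=True)
        if response.status_code in {200, 206}:  # set literal for membership test
            # Try to read a small chunk
            chunk = next(response.iter_content(1024), None)
            return chunk is not None and len(chunk) > 0
        return False
    except requests.RequestException:  # specific instead of bare except
        return False
```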

### Comment 14
<location> `M3U_link_scanner/iptv_tester.py:47` </location>
<code_context>
    def test_http_streaming(self, url):
        """Test streaming capability"""
        try:
            response = requests.get(url, headers=self.headers, timeout=self.timeout, stream=True, allow_redirects=True)
            if response.status_code in [200, 206]:
                # Try to read multiple chunks
                chunk_count = 0
                for chunk in response.iter_content(chunk_size=8192):
                    if chunk:
                        chunk_count += 1
                        if chunk_count >= 3:  # Successfully read 3 chunks
                            return True
                return chunk_count > 0
            return False
        except:
            return False

</code_context>

<issue_to_address>
**issue (code-quality):** We've found these issues:

- Use set when checking membership of a collection of literals ([`collection-into-set`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/collection-into-set/))
- Use `except Exception:` rather than bare `except:` ([`do-not-use-bare-except`](https://docs.sourcery.ai/Reference/Default-Rules/suggestions/do-not-use-bare-except/))
</issue_to_address>

### Comment 15
<location> `M3U_link_scanner/iptv_tester.py:64` </location>
<code_context>
    def test_socket_connection(self, url):
        """Test basic socket connection"""
        try:
            parsed = urlparse(url)
            host = parsed.hostname
            port = parsed.port or (443 if parsed.scheme == 'https' else 80)

            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(self.timeout)
            result = sock.connect_ex((host, port))
            sock.close()
            return result == 0
        except:
            return False

</code_context>

<issue_to_address>
**issue (code-quality):** Use `except Exception:` rather than bare `except:` ([`do-not-use-bare-except`](https://docs.sourcery.ai/Reference/Default-Rules/suggestions/do-not-use-bare-except/))
</issue_to_address>
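
Folding in the context-manager suggestion from the overall comments, a rewritten sketch (assuming the method shown in the diff) might look like:

```python
def test_socket_connection(self, url):
    """Test basic socket connection"""
    try:
        parsed = urlparse(url)
        if not parsed.hostname:
            return False
        port = parsed.port or (443 if parsed.scheme == 'https' else 80)
        # create_connection resolves the host and connects; the context
        # manager closes the socket even if an error occurs mid-test.
        with socket.create_connection((parsed.hostname, port),
                                      timeout=self.timeout):
            return True
    except (OSError, ValueError):  # socket.error is an alias for OSError
        return False
```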

### Comment 16
<location> `M3U_link_scanner/iptv_tester.py:89` </location>
<code_context>
    def test_with_ffmpeg(self, url):
        """Test using ffmpeg/ffprobe if available"""
        try:
            # Check if ffprobe is available
            result = subprocess.run(
                ['ffprobe', '-v', 'error', '-show_entries', 
                 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', url],
                capture_output=True,
                timeout=self.timeout,
                text=True
            )
            return result.returncode == 0
        except (subprocess.TimeoutExpired, FileNotFoundError, Exception):
            return False

</code_context>

<issue_to_address>
**suggestion (code-quality):** Remove redundant exceptions from an except clause ([`remove-redundant-exception`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/remove-redundant-exception/))

```suggestion
        except (subprocess.TimeoutExpired, Exception):
```
</issue_to_address>

### Comment 17
<location> `M3U_link_scanner/iptv_tester.py:148` </location>
<code_context>
    def test_link_comprehensive(self, url, link_number, total_links):
        """Perform comprehensive testing on a single link"""
        print(f"\n{'='*70}")
        print(f"Testing link {link_number}/{total_links}")
        print(f"URL: {url[:80]}{'...' if len(url) > 80 else ''}")
        print(f"{'='*70}")

        results = []
        test_methods = [
            ("HTTP HEAD Request", self.test_http_head),
            ("HTTP GET Partial", self.test_http_get_partial),
            ("HTTP Streaming", self.test_http_streaming),
            ("Socket Connection", self.test_socket_connection),
            ("FFmpeg Probe", self.test_with_ffmpeg)
        ]

        # Perform multiple attempts for each test method
        for method_name, test_func in test_methods:
            print(f"\n{method_name}:")
            method_results = []

            for attempt in range(self.attempts):
                print(f"  Attempt {attempt + 1}/{self.attempts}...", end=' ', flush=True)
                try:
                    result = test_func(url)
                    method_results.append(result)
                    print(f"{'✓ PASS' if result else '✗ FAIL'}")

                    if result:
                        time.sleep(2)  # Short delay on success
                    else:
                        time.sleep(3)  # Slightly longer delay on failure

                except Exception as e:
                    print(f"✗ ERROR: {str(e)[:50]}")
                    method_results.append(False)
                    time.sleep(3)

            success_rate = sum(method_results) / len(method_results) * 100
            print(f"  Success rate: {success_rate:.1f}%")
            results.extend(method_results)

        # Determine if link is working
        total_tests = len(results)
        successful_tests = sum(results)
        success_percentage = (successful_tests / total_tests) * 100

        print(f"\n{'─'*70}")
        print(f"OVERALL RESULT: {successful_tests}/{total_tests} tests passed ({success_percentage:.1f}%)")

        # If ANY test passed even once, consider it potentially working
        is_working = successful_tests > 0

        if is_working:
            print(f"Status: ✓ WORKING (at least {successful_tests} test(s) succeeded)")
        else:
            print(f"Status: ✗ BROKEN (all tests failed)")

        print(f"{'='*70}\n")

        return is_working, success_percentage

</code_context>

<issue_to_address>
**issue (code-quality):** Replace f-string with no interpolated values with string ([`remove-redundant-fstring`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/remove-redundant-fstring/))
</issue_to_address>
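
The flagged lines are f-strings with nothing interpolated, e.g. the broken-status message; the fix is simply dropping the `f` prefix:

```python
# Before: f-string with no interpolated values
print(f"Status: ✗ BROKEN (all tests failed)")
# After: plain string literal
print("Status: ✗ BROKEN (all tests failed)")
```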


Refactor IPTV link testing methods for improved error handling and resource management. Consolidate repeated code into helper functions and enhance overall readability. PR avinashkranjan#3271
Refactor exception handling in various methods to improve clarity and maintainability. Added specific exception handling for requests and socket operations, while logging unexpected errors.
Refactor IPTV link tester for improved readability and structure. Added helper methods for reading and writing files, and enhanced error handling.

@sourcery-ai sourcery-ai bot left a comment


New security issues found
