Compare commits

...

50 Commits
v2.9.8...main

Author SHA1 Message Date
Lovi
2fdcb34665 Merge branch 'main' of https://github.com/Arrowar/StreamingCommunity 2025-06-06 12:30:23 +02:00
Lovi
49e038a2c8 Core: Add arm64 version. 2025-06-06 12:30:20 +02:00
github-actions[bot]
ccc2478067 Automatic domain update [skip ci] 2025-06-05 11:18:33 +00:00
None
f4529e5f05 Update schedule 2025-06-03 17:30:27 +02:00
github-actions[bot]
dcfd22bc2b Automatic domain update [skip ci] 2025-06-03 15:27:02 +00:00
Lovi
3cbabfb98b core: Fix requirements 2025-06-02 18:14:36 +02:00
None
6efeb96201 Update update_domain.yml 2025-06-02 12:58:38 +02:00
Lovi
d0207b3669 Fix wrong version pip 2025-06-02 11:08:46 +02:00
Lovi
6713de4ecc Bump v3.0.9 2025-06-01 16:31:24 +02:00
github-actions[bot]
b8e28a30c0 Automatic domain update [skip ci] 2025-06-01 01:02:20 +00:00
Alessandro Perazzetta
a45fd0d37e
Dns check (#332)
* refactor: streamline proxy checking in search function

* refactor: update DNS check method, try a real dns resolution instead of checking dns provider

* refactor: enhance DNS resolution check to support multiple domains across platforms

* refactor: replace os.socket with socket for DNS resolution consistency

---------

Co-authored-by: None <62809003+Arrowar@users.noreply.github.com>
2025-05-31 20:07:30 +02:00
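The cross-platform check described in this PR, resolving a few well-known domains instead of querying a DNS provider directly, might look roughly like the sketch below (the helper name and domain list are illustrative, not the PR's actual code):

```python
import socket

def dns_resolution_works(test_domains=("github.com", "cloudflare.com")) -> bool:
    """Return True if at least one well-known domain resolves.

    socket.getaddrinfo behaves the same on Windows, Linux, and macOS,
    which is what makes the check portable.
    """
    for domain in test_domains:
        try:
            socket.getaddrinfo(domain, 443)
            return True
        except socket.gaierror:
            continue
    return False
```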
github-actions[bot]
4b40b8ce22 Automatic domain update [skip ci] 2025-05-31 12:17:33 +00:00
Alessandro Perazzetta
73cc2662b8
Dns check refactor (#328)
* refactor: streamline proxy checking in search function

* refactor: update DNS check method, try a real dns resolution instead of checking dns provider

* refactor: enhance DNS resolution check to support multiple domains across platforms

* refactor: replace os.socket with socket for DNS resolution consistency

---------

Co-authored-by: None <62809003+Arrowar@users.noreply.github.com>
2025-05-31 11:30:59 +02:00
Lovi
1776538c6c github: Update domains 2025-05-31 11:28:38 +02:00
None
884bcf656c Create update_domain.yml 2025-05-31 10:59:11 +02:00
Lovi
71e97c2c65 Site: Update endpoint 2025-05-31 10:58:12 +02:00
Lovi
ded66f446e Remove database of domain 2025-05-31 10:52:16 +02:00
Lovi
86c7293779 Bump v3.0.8 2025-05-25 16:59:29 +02:00
Lovi
ef6c8c9cb3 api: Fix tipo raiplay 2025-05-25 15:37:53 +02:00
Alessandro Perazzetta
c01945fdbc
refactor: streamline proxy checking in search function (#326) 2025-05-22 08:36:44 +02:00
Lovi
4f0c58f14d api: fix actual_search_query 2025-05-18 16:31:15 +02:00
Lovi
b3db6aa8c1 Bump v3.0.7 2025-05-18 14:36:55 +02:00
None
1c89398054
Fix telegram and proxy (#322)
* Add ENABLE_VIDEO

* Fix proxy

* Add error proxy

* Update config.json

* Fix telegram_bot (#312)

* Update config.json

* Fix telegram_bot

* fix bug

* Fix StreamingCommunity site

* Delete console.log

* fix doppio string_to_search

* Update __init__.py

* Update site.py

* Update config.json

* Update site.py

* Update config.json

* Update __init__.py

* Update __init__.py

* Fix proxy (#319)

* Add ENABLE_VIDEO

* Fix proxy

* Add error proxy

* Update config.json

* Refactor user input handling and improve messaging in __init__.py

---------

Co-authored-by: None <62809003+Arrowar@users.noreply.github.com>
Co-authored-by: l1n00 <>

* Fix proxy __init__

* Update os.py

---------

Co-authored-by: l1n00 <delmolinonicola@gmail.com>
2025-05-18 14:16:44 +02:00
None
dfcc29078f
Fix proxy (#319)
* Add ENABLE_VIDEO

* Fix proxy

* Add error proxy

* Update config.json
2025-05-17 09:54:41 +02:00
None
c0f3d8619b Bump v3.0.6 2025-05-14 09:36:08 +02:00
None
8e323e83f9
Dev (#318)
* Fix telegram bot (issues #305 bug) (#316)

* fix create config.json

* fix messagge telegram_bot option 0 (Streamingcommunity)

* Update README.md

* Update domain

---------

Co-authored-by: GiuPic <47813665+GiuPic@users.noreply.github.com>
2025-05-14 09:34:30 +02:00
None
e75d8185f9 Site: Fix color map 2025-05-13 12:33:51 +02:00
None
a071d0d2c4 Bump v3.0.5 2025-05-13 11:37:56 +02:00
None
bfed63bd41 Site: add _deprecate 2025-05-13 11:04:42 +02:00
None
fab21e572c Fix cert path 2025-05-12 17:12:37 +02:00
None
67a5e6e1cb Create build-dev.yml 2025-05-12 16:44:33 +02:00
None
d51665f5ac Delete .site 2025-05-12 16:41:23 +02:00
Lovi
c59502c1fd Bump v3.0.4 2025-05-10 09:47:17 +02:00
Lovi
22ce91d38b Bump v3.0.3 2025-05-10 09:24:15 +02:00
Lovi
f45ec0d773 Update .site 2025-05-10 09:22:26 +02:00
None
11cb44f6ef Create static.yml 2025-05-10 09:18:55 +02:00
None
faf83765d0
Dev (#311)
* Update build.yml

* Update site.py

* Update requirements.txt

* Update os.py

* Update run.py

* Update global_search.py

* Update hdplayer.py

* Create index.html

* Create script.js

* Create style.css

* Create pages.yml

* Some fix
2025-05-10 09:17:37 +02:00
GitHub Actions
32197a3c5d Update lines of code badge 2025-05-01 12:26:49 +00:00
None
782b03d248
Bump v3.0.2
* Add api "StreamingWatch"

* Add hdplayer

---------

Co-authored-by: Lovi <62809003+Lovi-0@users.noreply.github.com>
2025-05-01 14:22:47 +02:00
Lovi
bd922afde2 Bump v3.0.1 2025-04-27 19:33:13 +02:00
Lovi
33436ec2fe api: Fix episode parsing au 2025-04-23 09:39:59 +02:00
None
353a23d169
Versione 3.0.0 (#301)
* Update ScrapeSerie.py

* Update site.py

* Update util.py

* Update ffmpeg_installer.py

* Update os.py

* Update ffmpeg_installer.py

* Update setup.py

* Update version.py

* Update util.py
2025-04-22 15:57:43 +02:00
None
0a03be0fae api: Add search verify=False 2025-04-13 10:26:06 +02:00
None
57cc01b6bc Bump v2.9.9 2025-04-12 18:31:48 +02:00
None
fda1bd6e4e api: Fix error f-string: unmatched in python 3.9 2025-04-12 15:29:03 +02:00
Prova45
c3a0be0d85 api: Add raiplay 2025-04-11 16:30:42 +02:00
Lovi
64efc67e6a core: HLS fix custom resolution parsing like 854x480 2025-03-29 09:02:26 +01:00
None
4789c147e4 core: Fix hls error "self.video_res == None" 2025-03-25 15:04:54 +01:00
Lovi
74218e3101 core: MP4 add sanitize path 2025-03-23 09:59:23 +01:00
Lovi
a376556f60 feat: add HLS and MP4 download tests with unittest 2025-03-23 09:55:22 +01:00
96 changed files with 5203 additions and 2028 deletions

.github/.domain/domain_update.py vendored Normal file (+360 lines)

@ -0,0 +1,360 @@
# 20.04.2024
import os
import json
from datetime import datetime
from urllib.parse import urlparse, unquote

# External libraries
import httpx
import tldextract
import ua_generator
import dns.resolver

# Variables
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
JSON_FILE_PATH = os.path.join(SCRIPT_DIR, "domains.json")

ua = ua_generator.generate(device='desktop', browser=('chrome', 'edge'))
def get_headers():
return ua.headers.get()
def get_tld(url_str):
try:
parsed = urlparse(unquote(url_str))
domain = parsed.netloc.lower().lstrip('www.')
parts = domain.split('.')
return parts[-1] if len(parts) >= 2 else None
except Exception:
return None
def get_base_domain(url_str):
try:
parsed = urlparse(url_str)
domain = parsed.netloc.lower().lstrip('www.')
parts = domain.split('.')
return '.'.join(parts[:-1]) if len(parts) > 2 else parts[0]
except Exception:
return None
def get_base_url(url_str):
try:
parsed = urlparse(url_str)
return f"{parsed.scheme}://{parsed.netloc}"
except Exception:
return None
def log(msg, level='INFO'):
levels = {
'INFO': '[ ]',
'SUCCESS': '[+]',
'WARNING': '[!]',
'ERROR': '[-]'
}
entry = f"{levels.get(level, '[?]')} {msg}"
print(entry)
def load_json_data(file_path):
if not os.path.exists(file_path):
log(f"Error: The file {file_path} was not found.", "ERROR")
return None
try:
with open(file_path, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception as e:
log(f"Error reading the file {file_path}: {e}", "ERROR")
return None
def save_json_data(file_path, data):
try:
with open(file_path, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
log(f"Data successfully saved to {file_path}", "SUCCESS")
except Exception as e:
log(f"Error saving the file {file_path}: {e}", "ERROR")
def parse_url(url):
if not url.startswith(('http://', 'https://')):
url = 'https://' + url
try:
extracted = tldextract.extract(url)
parsed = urlparse(url)
clean_url = f"{parsed.scheme}://{parsed.netloc}/"
full_domain = f"{extracted.domain}.{extracted.suffix}" if extracted.domain else extracted.suffix
domain_tld = extracted.suffix
result = {
'url': clean_url,
'full_domain': full_domain,
'domain': domain_tld,
'suffix': extracted.suffix,
'subdomain': extracted.subdomain or None
}
return result
except Exception as e:
log(f"Error parsing URL: {e}", "ERROR")
return None
def check_dns_resolution(domain):
try:
resolver = dns.resolver.Resolver()
resolver.timeout = 2
resolver.lifetime = 2
try:
answers = resolver.resolve(domain, 'A')
return str(answers[0])
except:
try:
answers = resolver.resolve(domain, 'AAAA')
return str(answers[0])
except:
pass
return None
except:
return None
def find_new_domain(input_url, output_file=None, verbose=True, json_output=False):
log_buffer = []
original_info = parse_url(input_url)
if not original_info:
log(f"Could not parse original URL: {input_url}", "ERROR")
if json_output:
return {'full_url': input_url, 'domain': None}
return None
log(f"Starting analysis for: {original_info['full_domain']}")
orig_ip = check_dns_resolution(original_info['full_domain'])
if orig_ip:
log(f"Original domain resolves to: {orig_ip}", "SUCCESS")
else:
log(f"Original domain does not resolve to an IP address", "WARNING")
headers = get_headers()
new_domains = []
redirects = []
final_url = None
final_domain_info = None
url_to_test_in_loop = None
for protocol in ['https://', 'http://']:
try:
url_to_test_in_loop = f"{protocol}{original_info['full_domain']}"
log(f"Testing connectivity to {url_to_test_in_loop}")
redirect_chain = []
current_url = url_to_test_in_loop
max_redirects = 10
redirect_count = 0
while redirect_count < max_redirects:
with httpx.Client(verify=False, follow_redirects=False, timeout=5) as client:
response = client.get(current_url, headers=headers)
redirect_info = {'url': current_url, 'status_code': response.status_code}
redirect_chain.append(redirect_info)
log(f"Request to {current_url} - Status: {response.status_code}")
if response.status_code in (301, 302, 303, 307, 308):
if 'location' in response.headers:
next_url = response.headers['location']
if next_url.startswith('/'):
parsed_current = urlparse(current_url)
next_url = f"{parsed_current.scheme}://{parsed_current.netloc}{next_url}"
log(f"Redirect found: {next_url} (Status: {response.status_code})")
current_url = next_url
redirect_count += 1
redirect_domain_info_val = parse_url(next_url)
if redirect_domain_info_val and redirect_domain_info_val['full_domain'] != original_info['full_domain']:
new_domains.append({'domain': redirect_domain_info_val['full_domain'], 'url': next_url, 'source': 'redirect'})
else:
log(f"Redirect status code but no Location header", "WARNING")
break
else:
break
if redirect_chain:
final_url = redirect_chain[-1]['url']
final_domain_info = parse_url(final_url)
redirects.extend(redirect_chain)
log(f"Final URL after redirects: {final_url}", "SUCCESS")
if final_domain_info and final_domain_info['full_domain'] != original_info['full_domain']:
new_domains.append({'domain': final_domain_info['full_domain'], 'url': final_url, 'source': 'final_url'})
final_status = redirect_chain[-1]['status_code'] if redirect_chain else None
if final_status and final_status < 400 and final_status != 403:
break
if final_status == 403 and redirect_chain and len(redirect_chain) > 1:
log(f"Got 403 Forbidden, but captured {len(redirect_chain)-1} redirects before that", "SUCCESS")
break
except httpx.RequestError as e:
log(f"Error connecting to {protocol}{original_info['full_domain']}: {str(e)}", "ERROR")
url_for_auto_redirect = input_url
if url_to_test_in_loop:
url_for_auto_redirect = url_to_test_in_loop
elif original_info and original_info.get('url'):
url_for_auto_redirect = original_info['url']
if not redirects or not new_domains:
log("Trying alternate method with automatic redirect following")
try:
with httpx.Client(verify=False, follow_redirects=True, timeout=5) as client:
response_auto = client.get(url_for_auto_redirect, headers=headers)
log(f"Connected with auto-redirects: Status {response_auto.status_code}")
if response_auto.history:
log(f"Found {len(response_auto.history)} redirects with auto-following", "SUCCESS")
for r_hist in response_auto.history:
redirect_info_auto = {'url': str(r_hist.url), 'status_code': r_hist.status_code}
redirects.append(redirect_info_auto)
log(f"Auto-redirect: {r_hist.url} (Status: {r_hist.status_code})")
final_url = str(response_auto.url)
final_domain_info = parse_url(final_url)
for redirect_hist_item in response_auto.history:
redirect_domain_val = parse_url(str(redirect_hist_item.url))
if redirect_domain_val and original_info and redirect_domain_val['full_domain'] != original_info['full_domain']:
new_domains.append({'domain': redirect_domain_val['full_domain'], 'url': str(redirect_hist_item.url), 'source': 'auto-redirect'})
current_final_url_info = parse_url(str(response_auto.url))
if current_final_url_info and original_info and current_final_url_info['full_domain'] != original_info['full_domain']:
is_already_added = any(d['domain'] == current_final_url_info['full_domain'] and d['source'] == 'auto-redirect' for d in new_domains)
if not is_already_added:
new_domains.append({'domain': current_final_url_info['full_domain'], 'url': str(response_auto.url), 'source': 'final_url_auto'})
final_url = str(response_auto.url)
final_domain_info = current_final_url_info
log(f"Final URL from auto-redirect: {final_url}", "SUCCESS")
except httpx.RequestError as e:
log(f"Error with auto-redirect attempt: {str(e)}", "ERROR")
except NameError:
log(f"Error: URL for auto-redirect attempt was not defined.", "ERROR")
unique_domains = []
seen_domains = set()
for domain_info_item in new_domains:
if domain_info_item['domain'] not in seen_domains:
seen_domains.add(domain_info_item['domain'])
unique_domains.append(domain_info_item)
if not final_url:
final_url = input_url
if not final_domain_info:
final_domain_info = original_info
if final_domain_info:
parsed_final_url_info = parse_url(final_url)
if parsed_final_url_info:
final_url = parsed_final_url_info['url']
final_domain_info = parsed_final_url_info
else:
final_domain_info = original_info
final_url = original_info['url'] if original_info else input_url
results_original_domain = original_info['full_domain'] if original_info else None
results_final_domain_tld = final_domain_info['domain'] if final_domain_info and 'domain' in final_domain_info else None
results = {
'timestamp': datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
'original_url': input_url,
'original_domain': results_original_domain,
'original_ip': orig_ip,
'new_domains': unique_domains,
'redirects': redirects,
'log': log_buffer
}
simplified_json_output = {'full_url': final_url, 'domain': results_final_domain_tld}
if verbose:
log(f"DEBUG - Simplified output: {simplified_json_output}", "INFO")
if output_file:
try:
with open(output_file, 'w', encoding='utf-8') as f:
json.dump(results, f, indent=2, ensure_ascii=False)
log(f"Results saved to {output_file}", "SUCCESS")
except Exception as e:
log(f"Error writing to output file: {str(e)}", "ERROR")
if json_output:
return simplified_json_output
else:
return results
def update_site_entry(site_name: str, all_domains_data: dict):
site_config = all_domains_data.get(site_name, {})
log(f"Processing site: {site_name}", "INFO")
if not site_config.get('full_url'):
log(f"Site {site_name} has no full_url in config. Skipping.", "WARNING")
return False
current_full_url = site_config.get('full_url')
current_domain_tld = site_config.get('domain')
found_domain_info = find_new_domain(current_full_url, verbose=False, json_output=True)
if found_domain_info and found_domain_info.get('full_url') and found_domain_info.get('domain'):
new_full_url = found_domain_info['full_url']
new_domain_tld = found_domain_info['domain']
if new_full_url != current_full_url or new_domain_tld != current_domain_tld:
log(f"Update found for {site_name}: URL '{current_full_url}' -> '{new_full_url}', TLD '{current_domain_tld}' -> '{new_domain_tld}'", "SUCCESS")
updated_entry = site_config.copy()
updated_entry['full_url'] = new_full_url
updated_entry['domain'] = new_domain_tld
if new_domain_tld != current_domain_tld :
updated_entry['old_domain'] = current_domain_tld if current_domain_tld else ""
updated_entry['time_change'] = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
all_domains_data[site_name] = updated_entry
return True
else:
log(f"No changes detected for {site_name}.", "INFO")
return False
else:
log(f"Could not reliably find new domain info for {site_name} from URL: {current_full_url}. No search fallback.", "WARNING")
return False
def main():
log("Starting domain update script...")
all_domains_data = load_json_data(JSON_FILE_PATH)
if not all_domains_data:
log("Cannot proceed: Domain data is missing or could not be loaded.", "ERROR")
log("Script finished.")
return
any_updates_made = False
for site_name_key in list(all_domains_data.keys()):
if update_site_entry(site_name_key, all_domains_data):
any_updates_made = True
print("\n")
if any_updates_made:
save_json_data(JSON_FILE_PATH, all_domains_data)
log("Update complete. Some entries were modified.", "SUCCESS")
else:
log("Update complete. No domains were modified.", "INFO")
log("Script finished.")
if __name__ == "__main__":
main()
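Run as a script, `main()` walks every entry in `domains.json`; the core helper can also be exercised on its own. A minimal sketch, assuming the file is importable and using an illustrative URL:

```python
# Hypothetical usage; domain_update.py must be on the import path.
from domain_update import find_new_domain

result = find_new_domain("https://example.com/", verbose=False, json_output=True)
# json_output=True returns the simplified form: {'full_url': ..., 'domain': ...}
print(result["full_url"], result["domain"])
```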

.github/.domain/domains.json vendored Normal file (+62 lines)

@ -0,0 +1,62 @@
{
"1337xx": {
"domain": "to",
"full_url": "https://www.1337xx.to/",
"old_domain": "to",
"time_change": "2025-03-19 12:20:19"
},
"cb01new": {
"domain": "life",
"full_url": "https://cb01net.life/",
"old_domain": "download",
"time_change": "2025-06-01 01:02:16"
},
"animeunity": {
"domain": "so",
"full_url": "https://www.animeunity.so/",
"old_domain": "so",
"time_change": "2025-03-19 12:20:23"
},
"animeworld": {
"domain": "ac",
"full_url": "https://www.animeworld.ac/",
"old_domain": "ac",
"time_change": "2025-03-21 12:20:27"
},
"guardaserie": {
"domain": "meme",
"full_url": "https://guardaserie.meme/",
"old_domain": "meme",
"time_change": "2025-03-19 12:20:24"
},
"ddlstreamitaly": {
"domain": "co",
"full_url": "https://ddlstreamitaly.co/",
"old_domain": "co",
"time_change": "2025-03-19 12:20:26"
},
"streamingwatch": {
"domain": "org",
"full_url": "https://www.streamingwatch.org/",
"old_domain": "org",
"time_change": "2025-04-29 12:30:30"
},
"altadefinizione": {
"domain": "spa",
"full_url": "https://altadefinizione.spa/",
"old_domain": "locker",
"time_change": "2025-05-26 23:22:45"
},
"streamingcommunity": {
"domain": "art",
"full_url": "https://streamingunity.art/",
"old_domain": "bid",
"time_change": "2025-06-05 11:18:33"
},
"altadefinizionegratis": {
"domain": "cc",
"full_url": "https://altadefinizionegratis.cc/",
"old_domain": "icu",
"time_change": "2025-06-02 10:35:25"
}
}

.github/.domain/loc-badge.json vendored Normal file (+1 line)

@ -0,0 +1 @@
{"schemaVersion": 1, "label": "Lines of Code", "message": "9110", "color": "green"}

.github/.site/css/style.css vendored Normal file (+560 lines)

@ -0,0 +1,560 @@
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap');
:root {
--primary-color: #8c52ff;
--secondary-color: #6930c3;
--accent-color: #00e5ff;
--background-color: #121212;
--card-background: #1e1e1e;
--text-color: #f8f9fa;
--shadow-color: rgba(0, 0, 0, 0.25);
--card-hover: #2a2a2a;
--border-color: #333333;
}
[data-theme="light"] {
--background-color: #ffffff;
--card-background: #f8f9fa;
--text-color: #212529;
--shadow-color: rgba(0, 0, 0, 0.1);
--card-hover: #e9ecef;
--border-color: #dee2e6;
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
transition: all 0.2s ease;
}
body {
font-family: 'Inter', 'Segoe UI', sans-serif;
background-color: var(--background-color);
color: var(--text-color);
line-height: 1.6;
min-height: 100vh;
display: flex;
flex-direction: column;
}
.container {
max-width: 1400px;
margin: 0 auto;
padding: 20px;
flex: 1;
}
.header-container {
display: flex;
justify-content: space-between;
align-items: center;
padding: 15px 20px;
background: var(--card-background);
border-radius: 12px;
border: 1px solid var(--border-color);
margin-bottom: 20px;
}
.sites-stats {
display: flex;
gap: 20px;
align-items: center;
}
.total-sites, .last-update-global {
display: flex;
align-items: center;
gap: 8px;
color: var(--text-color);
font-size: 0.95rem;
background: var(--background-color);
padding: 8px 16px;
border-radius: 8px;
border: 1px solid var(--border-color);
transition: all 0.3s ease;
}
.total-sites:hover, .last-update-global:hover {
border-color: var(--primary-color);
transform: translateY(-2px);
}
.total-sites i, .last-update-global i {
color: var(--primary-color);
font-size: 1.1rem;
}
.site-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
gap: 24px;
padding: 2rem 0;
}
.site-item {
min-height: 220px;
background-color: var(--card-background);
border-radius: 16px;
padding: 30px;
box-shadow: 0 6px 20px var(--shadow-color);
transition: all 0.3s ease;
display: flex;
flex-direction: column;
align-items: center;
border: 1px solid var(--border-color);
position: relative;
overflow: hidden;
cursor: pointer;
}
.site-item::before {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 4px;
background: linear-gradient(90deg, var(--primary-color), var(--accent-color));
transition: height 0.3s ease;
}
.site-item:hover {
transform: translateY(-5px);
box-shadow: 0 12px 30px var(--shadow-color);
border-color: var(--primary-color);
}
.site-item:hover::before {
height: 6px;
}
.site-item img {
width: 80px;
height: 80px;
margin-bottom: 1.5rem;
border-radius: 16px;
object-fit: cover;
border: 2px solid var(--border-color);
transition: transform 0.3s ease;
}
.site-item:hover img {
transform: scale(1.05);
}
.site-item h3 {
font-size: 1.4rem;
font-weight: 600;
margin-bottom: 0.5rem;
color: var(--primary-color);
text-align: center;
transition: color 0.3s ease;
}
.site-item:hover h3 {
color: var(--accent-color);
}
.site-info {
display: flex;
flex-direction: column;
align-items: center;
gap: 8px;
margin-top: 10px;
text-align: center;
font-size: 0.85rem;
color: var(--text-color);
opacity: 0.8;
}
.last-update, .old-domain {
display: flex;
align-items: center;
gap: 6px;
}
.last-update i, .old-domain i {
color: var(--primary-color);
}
.site-item:hover .site-info {
opacity: 1;
}
.site-status {
position: absolute;
top: 10px;
right: 10px;
width: 12px;
height: 12px;
border-radius: 50%;
background: #4CAF50;
}
.site-status.offline {
background: #f44336;
}
.status-indicator {
position: fixed;
top: 20px;
right: 20px;
background: var(--card-background);
border: 1px solid var(--border-color);
border-radius: 12px;
padding: 15px 20px;
box-shadow: 0 4px 20px var(--shadow-color);
z-index: 1001;
min-width: 280px;
max-width: 400px;
transition: all 0.3s ease;
}
.status-indicator.hidden {
opacity: 0;
transform: translateY(-20px);
pointer-events: none;
}
.status-header {
display: flex;
align-items: center;
gap: 10px;
margin-bottom: 15px;
font-weight: 600;
color: var(--primary-color);
}
.status-icon {
width: 20px;
height: 20px;
border: 2px solid var(--primary-color);
border-radius: 50%;
border-top-color: transparent;
animation: spin 1s linear infinite;
}
.status-icon.ready {
border: none;
background: #4CAF50;
animation: none;
position: relative;
}
.status-icon.ready::after {
content: '✓';
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
color: white;
font-size: 12px;
font-weight: bold;
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
.status-text {
color: var(--text-color);
font-size: 0.9rem;
margin-bottom: 10px;
}
.checking-sites {
max-height: 200px;
overflow-y: auto;
background: var(--background-color);
border-radius: 8px;
padding: 10px;
border: 1px solid var(--border-color);
}
.checking-site {
display: flex;
align-items: center;
justify-content: space-between;
gap: 10px;
padding: 6px 8px;
margin-bottom: 4px;
border-radius: 6px;
background: var(--card-background);
font-size: 0.8rem;
color: var(--text-color);
transition: all 0.2s ease;
}
.checking-site.completed {
opacity: 0.6;
background: var(--card-hover);
}
.checking-site.online {
border-left: 3px solid #4CAF50;
}
.checking-site.offline {
border-left: 3px solid #f44336;
}
.checking-site .site-name {
flex: 1;
font-weight: 500;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.checking-site .site-status-icon {
width: 12px;
height: 12px;
border-radius: 50%;
flex-shrink: 0;
}
.checking-site .site-status-icon.checking {
background: var(--primary-color);
animation: pulse 1s infinite;
}
.checking-site .site-status-icon.online {
background: #4CAF50;
}
.checking-site .site-status-icon.offline {
background: #f44336;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.progress-bar {
width: 100%;
height: 6px;
background: var(--background-color);
border-radius: 3px;
overflow: hidden;
margin-top: 10px;
}
.progress-fill {
height: 100%;
background: linear-gradient(90deg, var(--primary-color), var(--accent-color));
width: 0%;
transition: width 0.3s ease;
border-radius: 3px;
}
.loader {
width: 48px;
height: 48px;
border: 3px solid var(--primary-color);
border-bottom-color: transparent;
border-radius: 50%;
display: inline-block;
position: relative;
box-sizing: border-box;
animation: rotation 1s linear infinite;
}
.loader::after {
content: '';
position: absolute;
box-sizing: border-box;
left: 0;
top: 0;
width: 48px;
height: 48px;
border-radius: 50%;
border: 3px solid transparent;
border-bottom-color: var(--accent-color);
animation: rotationBack 0.5s linear infinite;
transform: rotate(45deg);
}
@keyframes rotation {
0% { transform: rotate(0deg) }
100% { transform: rotate(360deg) }
}
@keyframes rotationBack {
0% { transform: rotate(0deg) }
100% { transform: rotate(-360deg) }
}
footer {
background: var(--card-background);
border-top: 1px solid var(--border-color);
margin-top: auto;
padding: 40px 20px;
position: relative;
}
.footer-content {
max-width: 1200px;
margin: 0 auto;
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 30px;
position: relative;
padding: 20px;
}
.footer-section {
padding: 20px;
border-radius: 12px;
transition: transform 0.3s ease, background-color 0.3s ease;
background-color: var(--card-background);
border: 1px solid var(--border-color);
}
.footer-section:hover {
transform: translateY(-5px);
background-color: var(--card-hover);
}
.footer-title {
color: var(--accent-color);
font-size: 1.3rem;
margin-bottom: 1.5rem;
padding-bottom: 0.5rem;
position: relative;
letter-spacing: 0.5px;
}
.footer-title::after {
content: '';
position: absolute;
bottom: 0;
left: 0;
width: 60px;
height: 3px;
border-radius: 2px;
background: linear-gradient(90deg, var(--primary-color), var(--accent-color));
}
.footer-links {
list-style: none;
}
.footer-links li {
margin-bottom: 0.8rem;
}
.footer-links a {
color: var(--text-color);
text-decoration: none;
display: flex;
align-items: center;
gap: 8px;
opacity: 0.8;
transition: all 0.3s ease;
padding: 8px 12px;
border-radius: 8px;
background-color: transparent;
}
.footer-links a:hover {
opacity: 1;
color: var(--accent-color);
transform: translateX(8px);
background-color: rgba(140, 82, 255, 0.1);
}
.footer-links i {
width: 20px;
text-align: center;
font-size: 1.2rem;
color: var(--primary-color);
transition: transform 0.3s ease;
}
.footer-links a:hover i {
transform: scale(1.2);
}
.footer-description {
margin-top: 15px;
font-size: 0.9rem;
color: var(--text-color);
opacity: 0.8;
line-height: 1.5;
}
.update-note {
color: var(--accent-color);
font-size: 0.9rem;
opacity: 0.9;
}
/* Responsiveness */
@media (max-width: 768px) {
.site-grid {
grid-template-columns: repeat(auto-fill, minmax(250px, 1fr));
gap: 15px;
padding: 1rem;
}
.site-item {
min-height: 250px;
padding: 20px;
}
.footer-content {
grid-template-columns: 1fr;
gap: 20px;
padding: 15px;
text-align: center;
}
.header-container {
flex-direction: column;
gap: 15px;
}
.sites-stats {
flex-direction: column;
width: 100%;
}
.total-sites, .last-update-global {
width: 100%;
justify-content: center;
}
.footer-title::after {
left: 50%;
transform: translateX(-50%);
}
.footer-links a {
justify-content: center;
}
.footer-links a:hover {
transform: translateY(-5px);
}
.footer-section {
margin-bottom: 20px;
}
}
@media (max-width: 480px) {
.site-grid {
grid-template-columns: 1fr;
}
.site-item {
min-height: 220px;
}
.container {
padding: 10px;
}
}

.github/.site/index.html vendored Normal file (+83 lines)

@ -0,0 +1,83 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Streaming Directory</title>
<link rel="stylesheet" href="css/style.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0-beta3/css/all.min.css">
</head>
<body> <main>
<section class="container"> <div class="header-container">
<div class="sites-stats">
<span class="total-sites">
<i class="fas fa-globe"></i>
Total Sites: <span id="sites-count">0</span>
</span>
<span class="last-update-global">
<i class="fas fa-clock"></i>
Last Update: <span id="last-update-time">-</span>
</span>
</div>
</div>
<div class="sites-container">
<div id="site-list" class="site-grid">
<div class="loader"></div>
</div>
</div>
</section>
</main>
<footer>
<div class="footer-content"> <div class="footer-section">
<h3 class="footer-title">Repository</h3>
<ul class="footer-links">
<li>
<a href="https://github.com/Arrowar/StreamingCommunity" target="_blank" rel="noopener noreferrer">
<i class="fab fa-github"></i>
Project GitHub
</a>
</li>
</ul>
<p class="footer-description">
An open-source script for downloading movies, TV shows, and anime from various websites.
</p>
</div>
<div class="footer-section">
<h3 class="footer-title">Support</h3>
<ul class="footer-links">
<li>
<a href="https://www.paypal.com/donate/?hosted_button_id=UXTWMT8P6HE2C" target="_blank" rel="noopener noreferrer">
<i class="fab fa-paypal"></i>
Donate with PayPal
</a>
</li>
</ul>
<p class="footer-description">
Support the development of this project through donations.
</p>
</div>
<div class="footer-section">
<h3 class="footer-title">Info</h3>
<ul class="footer-links">
<li>
<span class="update-note">
<i class="fas fa-sync-alt"></i>
Domains updated once every hour
</span>
</li>
</ul>
<p class="footer-description">
All domains are automatically updated once every hour.
</p>
</div>
</div>
</footer>
<script src="js/script.js"></script>
</body>
</html>

.github/.site/js/script.js vendored Normal file (+245 lines)

@ -0,0 +1,245 @@
document.documentElement.setAttribute('data-theme', 'dark');
let statusIndicator = null;
let checkingSites = new Map();
let totalSites = 0;
let completedSites = 0;
function createStatusIndicator() {
statusIndicator = document.createElement('div');
statusIndicator.className = 'status-indicator';
statusIndicator.innerHTML = `
<div class="status-header">
<div class="status-icon"></div>
<span class="status-title">Loading Sites...</span>
</div>
<div class="status-text">Initializing site checks...</div>
<div class="progress-bar">
<div class="progress-fill"></div>
</div>
<div class="checking-sites"></div>
`;
document.body.appendChild(statusIndicator);
return statusIndicator;
}
function updateStatusIndicator(status, text, progress = 0) {
if (!statusIndicator) return;
const statusIcon = statusIndicator.querySelector('.status-icon');
const statusTitle = statusIndicator.querySelector('.status-title');
const statusText = statusIndicator.querySelector('.status-text');
const progressFill = statusIndicator.querySelector('.progress-fill');
statusTitle.textContent = status;
statusText.textContent = text;
progressFill.style.width = `${progress}%`;
if (status === 'Ready') {
statusIcon.classList.add('ready');
setTimeout(() => {
statusIndicator.classList.add('hidden');
setTimeout(() => statusIndicator.remove(), 300);
}, 2000);
}
}
function addSiteToCheck(siteName, siteUrl) {
if (!statusIndicator) return;
const checkingSitesContainer = statusIndicator.querySelector('.checking-sites');
const siteElement = document.createElement('div');
siteElement.className = 'checking-site';
siteElement.innerHTML = `
<span class="site-name">${siteName}</span>
<div class="site-status-icon checking"></div>
`;
checkingSitesContainer.appendChild(siteElement);
checkingSites.set(siteName, siteElement);
}
function updateSiteStatus(siteName, isOnline) {
const siteElement = checkingSites.get(siteName);
if (!siteElement) return;
const statusIcon = siteElement.querySelector('.site-status-icon');
statusIcon.classList.remove('checking');
statusIcon.classList.add(isOnline ? 'online' : 'offline');
siteElement.classList.add('completed', isOnline ? 'online' : 'offline');
completedSites++;
const progress = (completedSites / totalSites) * 100;
updateStatusIndicator(
'Checking Sites...',
`Checked ${completedSites}/${totalSites} sites`,
progress
);
}
async function checkSiteStatus(url, siteName) {
try {
console.log(`Checking status for: ${url}`);
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 3000);
const response = await fetch(url, {
method: 'HEAD',
mode: 'no-cors',
signal: controller.signal,
headers: {
'Accept': 'text/html',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/133.0.0.0'
}
});
clearTimeout(timeoutId);
const isOnline = response.type === 'opaque';
console.log(`Site ${url} is ${isOnline ? 'online' : 'offline'} (Type: ${response.type})`);
if (siteName) {
updateSiteStatus(siteName, isOnline);
}
return isOnline;
} catch (error) {
console.log(`Error checking ${url}:`, error.message);
if (siteName) {
updateSiteStatus(siteName, false);
}
return false;
}
}
const domainsJsonUrl = 'https://raw.githubusercontent.com/Arrowar/StreamingCommunity/refs/heads/main/.github/.domain/domains.json';
async function loadSiteData() {
try {
console.log('Starting to load site data from GitHub...');
createStatusIndicator();
updateStatusIndicator('Loading...', 'Fetching site data from GitHub repository...', 0);
const siteList = document.getElementById('site-list');
console.log(`Fetching from GitHub: ${domainsJsonUrl}`);
const response = await fetch(domainsJsonUrl);
if (!response.ok) throw new Error(`HTTP error! Status: ${response.status}`);
const configSite = await response.json(); // Directly get the site data object
siteList.innerHTML = '';
if (configSite && Object.keys(configSite).length > 0) { // Check if configSite is a non-empty object
totalSites = Object.keys(configSite).length;
completedSites = 0;
let latestUpdate = new Date(0);
document.getElementById('sites-count').textContent = totalSites;
updateStatusIndicator('Checking Sites...', `Starting checks for ${totalSites} sites...`, 0);
Object.entries(configSite).forEach(([siteName, site]) => {
addSiteToCheck(siteName, site.full_url);
});
const statusChecks = Object.entries(configSite).map(async ([siteName, site]) => {
const isOnline = await checkSiteStatus(site.full_url, siteName);
return { siteName, site, isOnline };
});
const results = await Promise.all(statusChecks);
updateStatusIndicator('Ready', 'All sites checked successfully!', 100);
results.forEach(({ siteName, site, isOnline }) => {
const siteItem = document.createElement('div');
siteItem.className = 'site-item';
siteItem.style.cursor = 'pointer';
const statusDot = document.createElement('div');
statusDot.className = 'site-status';
if (!isOnline) statusDot.classList.add('offline');
siteItem.appendChild(statusDot);
const updateTime = new Date(site.time_change);
if (updateTime > latestUpdate) {
latestUpdate = updateTime;
}
const siteInfo = document.createElement('div');
siteInfo.className = 'site-info';
if (site.time_change) {
const updateDate = new Date(site.time_change);
const formattedDate = updateDate.toLocaleDateString('it-IT', {
year: 'numeric',
month: '2-digit',
day: '2-digit',
hour: '2-digit',
minute: '2-digit'
});
const lastUpdate = document.createElement('span');
lastUpdate.className = 'last-update';
lastUpdate.innerHTML = `<i class="fas fa-clock"></i> ${formattedDate}`;
siteInfo.appendChild(lastUpdate);
}
if (site.old_domain) {
const oldDomain = document.createElement('span');
oldDomain.className = 'old-domain';
oldDomain.innerHTML = `<i class="fas fa-history"></i> ${site.old_domain}`;
siteInfo.appendChild(oldDomain);
}
siteItem.addEventListener('click', function() {
window.open(site.full_url, '_blank', 'noopener,noreferrer');
});
const siteIcon = document.createElement('img');
siteIcon.src = `https://t2.gstatic.com/faviconV2?client=SOCIAL&type=FAVICON&fallback_opts=TYPE,SIZE,URL&url=${site.full_url}&size=128`;
siteIcon.alt = `${siteName} icon`;
siteIcon.onerror = function() {
this.src = 'data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100" viewBox="0 0 24 24" fill="none" stroke="%238c52ff" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><path d="M21 12.79A9 9 0 1 1 11.21 3 7 7 0 0 0 21 12.79z"></path></svg>';
};
const siteTitle = document.createElement('h3');
siteTitle.textContent = siteName;
siteItem.appendChild(siteIcon);
siteItem.appendChild(siteTitle);
siteItem.appendChild(siteInfo);
siteList.appendChild(siteItem);
});
const formattedDate = latestUpdate.toLocaleDateString('it-IT', {
year: 'numeric',
month: '2-digit',
day: '2-digit',
hour: '2-digit',
minute: '2-digit'
});
document.getElementById('last-update-time').textContent = formattedDate;
} else {
siteList.innerHTML = '<div class="no-sites">No sites available</div>';
updateStatusIndicator('Ready', 'No sites found in the JSON file.', 100);
}
} catch (error) {
console.error('Errore:', error);
const siteList = document.getElementById('site-list'); // re-select here: the const declared inside the try block is out of scope in this catch
siteList.innerHTML = `
<div class="error-message">
<p>Errore nel caricamento</p>
<button onclick="loadSiteData()" class="retry-button">Riprova</button>
</div>
`;
if (statusIndicator) {
updateStatusIndicator('Error', `Failed to load: ${error.message}`, 0);
statusIndicator.querySelector('.status-icon').style.background = '#f44336';
}
}
}
document.addEventListener('DOMContentLoaded', () => {
loadSiteData();
});

.github/media/loc-badge.json vendored Deleted file

@ -1 +0,0 @@
{"schemaVersion": 1, "label": "Lines of Code", "message": "7133", "color": "green"}

.github/media/logo.ico vendored Binary file (not shown; before: 4.4 KiB)

.github/workflows/build.yml vendored

@ -51,7 +51,7 @@ jobs:
build:
if: startsWith(github.ref_name, 'v') || (github.event_name == 'workflow_dispatch' && github.event.inputs.publish_pypi == 'false')
strategy:
matrix:
include:
@ -59,25 +59,40 @@ jobs:
artifact_name: StreamingCommunity_win
executable: StreamingCommunity_win.exe
separator: ';'
- os: macos-latest
artifact_name: StreamingCommunity_mac
executable: StreamingCommunity_mac
separator: ':'
- os: ubuntu-latest
artifact_name: StreamingCommunity_linux_latest
executable: StreamingCommunity_linux_latest
separator: ':'
- os: ubuntu-20.04
- os: ubuntu-22.04
artifact_name: StreamingCommunity_linux_previous
executable: StreamingCommunity_linux_previous
separator: ':'
# ARM64 build
- os: ubuntu-latest
artifact_name: StreamingCommunity_linux_arm64
executable: StreamingCommunity_linux_arm64
separator: ':'
architecture: arm64
runs-on: ${{ matrix.os }}
# For ARM64, set architecture if present
defaults:
run:
shell: bash
steps:
- name: Set up QEMU (for ARM64)
if: ${{ matrix.architecture == 'arm64' }}
uses: docker/setup-qemu-action@v3
- name: Checkout repository
uses: actions/checkout@v4
with:
@ -94,10 +109,12 @@ jobs:
uses: actions/setup-python@v4
with:
python-version: '3.12'
architecture: ${{ matrix.architecture || 'x64' }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install --upgrade certifi
python -m pip install -r requirements.txt
python -m pip install pyinstaller
@ -121,6 +138,8 @@ jobs:
--hidden-import=Cryptodome.Util --hidden-import=Cryptodome.Util.Padding \
--hidden-import=Cryptodome.Random \
--hidden-import=telebot \
--hidden-import=curl_cffi --hidden-import=_cffi_backend \
--collect-all curl_cffi \
--additional-hooks-dir=pyinstaller/hooks \
--add-data "StreamingCommunity${{ matrix.separator }}StreamingCommunity" \
--name=${{ matrix.artifact_name }} test_run.py

.github/workflows/pages.yml vendored Normal file (+45 lines)

@ -0,0 +1,45 @@
on:
push:
branches: ["main"]
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
concurrency:
group: "pages"
cancel-in-progress: false
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Pages
uses: actions/configure-pages@v5
- name: Copy site files
run: |
mkdir -p _site
cp -r .github/.site/* _site/
ls -la _site/
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: _site
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4

View File

@ -48,4 +48,46 @@ jobs:
- name: Run osPath test
run: |
PYTHONPATH=$PYTHONPATH:$(pwd) python -m Test.Util.osPath
PYTHONPATH=$PYTHONPATH:$(pwd) python -m Test.Util.osPath
test-hls-download:
name: Test HLS Download
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Run HLS download test
run: |
PYTHONPATH=$PYTHONPATH:$(pwd) python -m unittest Test.Download.HLS
test-mp4-download:
name: Test MP4 Download
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Run MP4 download test
run: |
PYTHONPATH=$PYTHONPATH:$(pwd) python -m unittest Test.Download.MP4

View File

@ -16,12 +16,12 @@ jobs:
- name: Count Lines of Code
run: |
LOC=$(cloc . --json | jq '.SUM.code')
echo "{\"schemaVersion\": 1, \"label\": \"Lines of Code\", \"message\": \"$LOC\", \"color\": \"green\"}" > .github/media/loc-badge.json
echo "{\"schemaVersion\": 1, \"label\": \"Lines of Code\", \"message\": \"$LOC\", \"color\": \"green\"}" > .github/.domain/loc-badge.json
- name: Commit and Push LOC Badge
run: |
git config --local user.name "GitHub Actions"
git config --local user.email "actions@github.com"
git add .github/media/loc-badge.json
git add .github/.domain/loc-badge.json
git commit -m "Update lines of code badge" || echo "No changes to commit"
git push

.github/workflows/update_domain.yml vendored Normal file (+50 lines)

@ -0,0 +1,50 @@
name: Update domains
on:
schedule:
- cron: "0 7-21 * * *"
workflow_dispatch:
jobs:
update-domains:
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install dependencies
run: |
pip install httpx tldextract ua-generator dnspython
pip install --upgrade pip setuptools wheel
- name: Configure DNS
run: |
sudo sh -c 'echo "nameserver 9.9.9.9" > /etc/resolv.conf'
cat /etc/resolv.conf
- name: Execute domain update script
run: python .github/.domain/domain_update.py
- name: Commit and push changes (if any)
run: |
git config --global user.name 'github-actions[bot]'
git config --global user.email 'github-actions[bot]@users.noreply.github.com'
# Check if domains.json was modified
if ! git diff --quiet .github/.domain/domains.json; then
git add .github/.domain/domains.json
git commit -m "Automatic domain update [skip ci]"
echo "Changes committed. Attempting to push..."
git push
else
echo "No changes to .github/.domain/domains.json to commit."
fi

.gitignore vendored (2 lines changed)

@ -52,4 +52,4 @@ cmd.txt
bot_config.json
scripts.json
active_requests.json
domains.json
working_proxies.json

Makefile

@ -2,4 +2,4 @@ build-container:
docker build -t streaming-community-api .
run-container:
docker run --rm -it -p 8000:8000 -v ${LOCAL_DIR}:/app/Video -v ./config.json:/app/config.json streaming-community-api
docker run --rm -it --dns 9.9.9.9 -p 8000:8000 -v ${LOCAL_DIR}:/app/Video -v ./config.json:/app/config.json streaming-community-api

README.md (759 lines changed)

@ -1,5 +1,5 @@
<p align="center">
<img src="https://i.ibb.co/v6RnT0wY/s2.jpg" alt="Project Logo" width="600"/>
<img src="https://i.ibb.co/v6RnT0wY/s2.jpg" alt="Project Logo" width="450"/>
</p>
<p align="center">
@ -25,14 +25,17 @@
<img src="https://img.shields.io/pypi/dm/streamingcommunity?style=for-the-badge" alt="PyPI Downloads"/>
</a>
<a href="https://github.com/Arrowar/StreamingCommunity">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/Arrowar/StreamingCommunity/main/.github/media/loc-badge.json&style=for-the-badge" alt="Lines of Code"/>
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/Arrowar/StreamingCommunity/main/.github/.domain/loc-badge.json&style=for-the-badge" alt="Lines of Code"/>
</a>
</p>
# 📋 Table of Contents
<details>
<summary>📦 Installation</summary>
- 🔄 [Update Domains](#update-domains)
- 🌐 [Available Sites](https://arrowar.github.io/StreamingDirectory/)
- 🌐 [Available Sites](https://arrowar.github.io/StreamingCommunity/)
- 🛠️ [Installation](#installation)
- 📦 [PyPI Installation](#1-pypi-installation)
- 🔄 [Automatic Installation](#2-automatic-installation)
@ -40,6 +43,11 @@
- 📝 [Manual Installation](#3-manual-installation)
- 💻 [Win 7](https://github.com/Ghost6446/StreamingCommunity_api/wiki/Installation#win-7)
- 📱 [Termux](https://github.com/Ghost6446/StreamingCommunity_api/wiki/Termux)
</details>
<details>
<summary>⚙️ Configuration & Usage</summary>
- ⚙️ [Configuration](#configuration)
- 🔧 [Default](#default-settings)
- 📩 [Request](#requests-settings)
@ -48,15 +56,23 @@
- 📝 [Command](#command)
- 🔍 [Global search](#global-search)
- 💻 [Examples of terminal](#examples-of-terminal-usage)
</details>
<details>
<summary>🔧 Advanced Features</summary>
- 🔧 [Manual domain configuration](#update-domains)
- 🐳 [Docker](#docker)
- 📝 [Telegram Usage](#telegram-usage)
</details>
<details>
<summary> Help & Support</summary>
- 🎓 [Tutorial](#tutorials)
- 📝 [To do](#to-do)
- 💬 [Support](#support)
- 🤝 [Contribute](#contributing)
- ⚠️ [Disclaimer](#disclaimer)
- ⚡ [Contributors](#contributors)
</details>
# Installation
@ -111,7 +127,8 @@ python run_streaming.py
## Modules
### HLS Downloader
<details>
<summary>📥 HLS Downloader</summary>
Download HTTP Live Streaming (HLS) content from m3u8 URLs.
@ -129,8 +146,10 @@ downloader.download()
```
See [HLS example](./Test/Download/HLS.py) for complete usage.
</details>
### MP4 Downloader
<details>
<summary>📽️ MP4 Downloader</summary>
Direct MP4 file downloader with support for custom headers and referrer.
@ -159,8 +178,10 @@ downloader.download()
```
See [MP4 example](./Test/Download/MP4.py) for complete usage.
</details>
### Torrent Client
<details>
<summary>🧲 Torrent Client</summary>
Download content via torrent magnet links.
@ -178,67 +199,21 @@ client.start_download()
```
See [Torrent example](./Test/Download/TOR.py) for complete usage.
## 2. Automatic Installation
### Supported Operating Systems 💿
| OS | Automatic Installation Support |
|:----------------|:------------------------------:|
| Windows 10/11 | ✔️ |
| Windows 7 | ❌ |
| Debian Linux | ✔️ |
| Arch Linux | ✔️ |
| CentOS Stream 9 | ✔️ |
| FreeBSD | ⏳ |
| MacOS | ✔️ |
| Termux | ❌ |
### Installation Steps
#### On Windows:
```powershell
.\Installer\win_install.bat
```
#### On Linux/MacOS/BSD:
```bash
sudo chmod +x Installer/unix_install.sh && ./Installer/unix_install.sh
```
### Usage
#### On Windows:
```powershell
python .\test_run.py
```
or
```powershell
source .venv/bin/activate && python test_run.py && deactivate
```
#### On Linux/MacOS/BSD:
```bash
./test_run.py
```
</details>
## Binary Location
### Default Locations
<details>
<summary>📂 Default Locations</summary>
- **Windows**: `C:\binary`
- **MacOS**: `~/Applications/binary`
- **Linux**: `~/.local/bin/binary`
</details>
You can customize these locations by following these steps for your operating system:
<details>
<summary>🪟 Windows Configuration</summary>
#### Windows
1. Move the binary folder from `C:\binary` to your desired location
2. Add the new path to Windows environment variables:
- Open Start menu and search for "Environment Variables"
@ -250,8 +225,11 @@ You can customize these locations by following these steps for your operating sy
- Click "OK" to save changes
For detailed Windows PATH instructions, see the [Windows PATH guide](https://www.eukhost.com/kb/how-to-add-to-the-path-on-windows-10-and-windows-11/).
</details>
<details>
<summary>🍎 MacOS Configuration</summary>
#### MacOS
1. Move the binary folder from `~/Applications/binary` to your desired location
2. Add the new path to your shell's configuration file:
```bash
@ -269,8 +247,11 @@ For detailed Windows PATH instructions, see the [Windows PATH guide](https://www
# For zsh
source ~/.zshrc
```
</details>
<details>
<summary>🐧 Linux Configuration</summary>
#### Linux
1. Move the binary folder from `~/.local/bin/binary` to your desired location
2. Add the new path to your shell's configuration file:
```bash
@ -286,6 +267,7 @@ For detailed Windows PATH instructions, see the [Windows PATH guide](https://www
# or
source ~/.zshrc # for zsh
```
</details>
> [!IMPORTANT]
> After moving the binary folder, ensure that all executables (ffmpeg, ffprobe, ffplay) are present in the new location and have the correct permissions:
@ -294,19 +276,24 @@ For detailed Windows PATH instructions, see the [Windows PATH guide](https://www
## 3. Manual Installation
### Requirements 📋
<details>
<summary>📋 Requirements</summary>
Prerequisites:
* [Python](https://www.python.org/downloads/) > 3.8
* [FFmpeg](https://www.gyan.dev/ffmpeg/builds/)
</details>
### Install Python Dependencies
<details>
<summary>⚙️ Python Dependencies</summary>
```bash
pip install -r requirements.txt
```
</details>
### Usage
<details>
<summary>🚀 Usage</summary>
#### On Windows:
@ -319,6 +306,7 @@ python test_run.py
```bash
python3 test_run.py
```
</details>
## Update
@ -338,278 +326,11 @@ python3 update.py
<br>
# Configuration
You can change some behaviors by tweaking the configuration file.
The configuration file is divided into several main sections:
## DEFAULT Settings
```json
{
"DEFAULT": {
"debug": false,
"show_message": true,
"clean_console": true,
"show_trending": true,
"use_api": true,
"not_close": false,
"telegram_bot": false,
"download_site_data": false,
"validate_github_config": false
}
}
```
- `debug`: Enables debug logging
- `show_message`: Displays informational messages
- `clean_console`: Clears the console between operations
- `show_trending`: Shows trending content
- `use_api`: Uses API for domain updates instead of local configuration
- `not_close`: If set to true, keeps the program running after download is complete
* Can be changed from terminal with `--not_close true/false`
- `telegram_bot`: Enables Telegram bot integration
- `download_site_data`: If set to false, disables automatic site data download
- `validate_github_config`: If set to false, disables validation and updating of configuration from GitHub
## OUT_FOLDER Settings
```json
{
"OUT_FOLDER": {
"root_path": "Video",
"movie_folder_name": "Movie",
"serie_folder_name": "Serie",
"anime_folder_name": "Anime",
"map_episode_name": "E%(episode)_%(episode_name)",
"add_siteName": false
}
}
```
- `root_path`: Directory where all videos will be saved
### Path examples:
* Windows: `C:\\MyLibrary\\Folder` or `\\\\MyServer\\MyLibrary` (if you want to use a network folder)
* Linux/MacOS: `Desktop/MyLibrary/Folder`
<br/><br/>
- `movie_folder_name`: The name of the subdirectory where movies will be stored
* Can be changed from terminal with `--movie_folder_name`
<br/><br/>
- `serie_folder_name`: The name of the subdirectory where TV series will be stored
* Can be changed from terminal with `--serie_folder_name`
<br/><br/>
- `anime_folder_name`: The name of the subdirectory where anime will be stored
* Can be changed from terminal with `--anime_folder_name`
<br/><br/>
- `map_episode_name`: Template for episode filenames
### Episode name usage:
The following variables are available:
* `%(tv_name)`: the name of the TV show
* `%(season)`: the season number
* `%(episode)`: the episode number
* `%(episode_name)`: the name of the episode
* Can be changed from terminal with `--map_episode_name`
<br><br>
- `add_siteName`: If set to true, appends the site_name to the root path before the movie and serie folders
* Can be changed from terminal with `--add_siteName true/false`
<br/><br/>
## QBIT_CONFIG Settings
```json
{
"QBIT_CONFIG": {
"host": "192.168.1.51",
"port": "6666",
"user": "admin",
"pass": "adminadmin"
}
}
```
To enable qBittorrent integration, follow the setup guide [here](https://github.com/lgallard/qBittorrent-Controller/wiki/How-to-enable-the-qBittorrent-Web-UI).
## REQUESTS Settings
```json
{
"REQUESTS": {
"verify": false,
"timeout": 20,
"max_retry": 8
}
}
```
- `verify`: Verifies SSL certificates
- `timeout`: Maximum timeout (in seconds) for each request
- `max_retry`: Number of retry attempts per segment during M3U8 index download
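
Taken together, these values might drive a request loop along the lines of the sketch below (illustrative code, not the project's actual downloader):

```python
import httpx

REQUESTS = {"verify": False, "timeout": 20, "max_retry": 8}

def fetch_with_retry(url: str) -> httpx.Response:
    # Retry up to max_retry times, honoring verify and timeout on each attempt
    last_error = None
    for _ in range(REQUESTS["max_retry"]):
        try:
            with httpx.Client(verify=REQUESTS["verify"], timeout=REQUESTS["timeout"]) as client:
                response = client.get(url)
                response.raise_for_status()
                return response
        except httpx.HTTPError as error:
            last_error = error
    raise last_error
```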
## M3U8_DOWNLOAD Settings
```json
{
"M3U8_DOWNLOAD": {
"tqdm_delay": 0.01,
"default_video_workser": 12,
"default_audio_workser": 12,
"segment_timeout": 8,
"download_audio": true,
"merge_audio": true,
"specific_list_audio": [
"ita"
],
"download_subtitle": true,
"merge_subs": true,
"specific_list_subtitles": [
"ita",
"eng"
],
"cleanup_tmp_folder": true
}
}
```
- `tqdm_delay`: Delay between progress bar updates
- `default_video_workser`: Number of threads for video download
* Can be changed from terminal with `--default_video_worker <number>`
<br/><br/>
- `default_audio_workser`: Number of threads for audio download
* Can be changed from terminal with `--default_audio_worker <number>`
<br/><br/>
- `segment_timeout`: Timeout for downloading individual segments
- `download_audio`: Whether to download audio tracks
- `merge_audio`: Whether to merge audio with video
- `specific_list_audio`: List of audio languages to download
* Can be changed from terminal with `--specific_list_audio ita,eng`
<br/><br/>
- `download_subtitle`: Whether to download subtitles
- `merge_subs`: Whether to merge subtitles with video
- `specific_list_subtitles`: List of subtitle languages to download
* Can be changed from terminal with `--specific_list_subtitles ita,eng`
<br/><br/>
- `cleanup_tmp_folder`: Remove temporary .ts files after download
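
The two worker settings map naturally onto a thread pool with one task per segment, each bounded by `segment_timeout`; a sketch of the idea (function names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

import httpx

def download_segment(url: str, timeout: int = 8) -> bytes:
    # segment_timeout bounds each individual segment request
    return httpx.get(url, timeout=timeout).content

def download_all(segment_urls: list[str], workers: int = 12) -> list[bytes]:
    # workers comes from default_video_workser / default_audio_workser
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(download_segment, segment_urls))
```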
## Available Language Codes
| European | Asian | Middle Eastern | Others |
|-----------------|-----------------|-----------------|-----------------|
| ita - Italian | chi - Chinese | ara - Arabic | eng - English |
| spa - Spanish | jpn - Japanese | heb - Hebrew | por - Portuguese|
| fre - French | kor - Korean | tur - Turkish | fil - Filipino |
| ger - German | hin - Hindi | | ind - Indonesian|
| rus - Russian | mal - Malayalam | | may - Malay |
| swe - Swedish | tam - Tamil | | vie - Vietnamese|
| pol - Polish | tel - Telugu | | |
| ukr - Ukrainian | tha - Thai | | |
## M3U8_CONVERSION Settings
```json
{
"M3U8_CONVERSION": {
"use_codec": false,
"use_vcodec": true,
"use_acodec": true,
"use_bitrate": true,
"use_gpu": false,
"default_preset": "ultrafast"
}
}
```
- `use_codec`: Use specific codec settings
- `use_vcodec`: Use specific video codec
- `use_acodec`: Use specific audio codec
- `use_bitrate`: Apply bitrate settings
- `use_gpu`: Enable GPU acceleration (if available)
- `default_preset`: FFmpeg encoding preset (ultrafast, fast, medium, slow, etc.)
### Advanced M3U8 Conversion Options
The software supports various advanced encoding options via FFmpeg:
#### Encoding Presets
The `default_preset` configuration can be set to one of the following values:
- `ultrafast`: Extremely fast conversion but larger file size
- `superfast`: Very fast with good quality/size ratio
- `veryfast`: Fast with good compression
- `faster`: Optimal balance for most users
- `fast`: Good compression, moderate time
- `medium`: FFmpeg default setting
- `slow`: High quality, slower process
- `slower`: Very high quality, slow process
- `veryslow`: Maximum quality, very slow process
#### GPU Acceleration
When `use_gpu` is enabled, the system will use available hardware acceleration:
- NVIDIA: NVENC
- AMD: AMF
- Intel: QSV
You need to have updated drivers and FFmpeg compiled with hardware acceleration support.
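
As an illustration, the preset and GPU settings roughly translate into an FFmpeg command like the one built below (a sketch assuming NVIDIA/NVENC; the project's actual arguments may differ, and NVENC uses its own preset names on recent FFmpeg builds):

```python
import subprocess

def build_ffmpeg_cmd(src: str, dst: str, use_gpu: bool, preset: str = "ultrafast") -> list[str]:
    video_codec = "h264_nvenc" if use_gpu else "libx264"  # NVENC when use_gpu is true
    return [
        "ffmpeg", "-i", src,
        "-c:v", video_codec,
        "-preset", preset,  # default_preset from M3U8_CONVERSION
        "-c:a", "copy",
        dst,
    ]

subprocess.run(build_ffmpeg_cmd("input.ts", "output.mp4", use_gpu=False), check=True)
```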
## M3U8_PARSER Settings
```json
{
"M3U8_PARSER": {
"force_resolution": "Best",
"get_only_link": false
}
}
```
- `force_resolution`: Choose the video resolution for downloading:
* `"Best"`: Highest available resolution
* `"Worst"`: Lowest available resolution
* `"720p"`: Force 720p resolution
* Or specify one of these resolutions:
- 1080p (1920x1080)
- 720p (1280x720)
- 480p (640x480)
- 360p (640x360)
- 320p (480x320)
- 240p (426x240)
- 240p (320x240)
- 144p (256x144)
- `get_only_link`: Return M3U8 playlist/index URL instead of downloading
## SITE_EXTRA Settings
```json
{
"SITE_EXTRA": {
"ddlstreamitaly": {
"ips4_device_key": "",
"ips4_member_id": "",
"ips4_login_key": ""
}
}
}
```
- Site-specific configuration for `ddlstreamitaly`:
- `ips4_device_key`: Device key for authentication
- `ips4_member_id`: Member ID for authentication
- `ips4_login_key`: Login key for authentication
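These keys correspond to IPS4 (Invision Community) session cookies for the site. As a hedged illustration of how such values could be attached to an authenticated request (the target URL is a placeholder, and whether the project wires them exactly this way is an assumption):

```python
# Sketch: send the ddlstreamitaly IPS4 session values as cookies with httpx.
# The cookie names mirror the config keys; the URL is a placeholder.
import httpx

ips4_cookies = {
    "ips4_device_key": "<your-device-key>",
    "ips4_member_id": "<your-member-id>",
    "ips4_login_key": "<your-login-key>",
}
response = httpx.get("https://example.org/", cookies=ips4_cookies, timeout=20)
print(response.status_code)
```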
## Update Domains
<details>
<summary>🌐 Domain Configuration Methods</summary>
There are two ways to update the domains for the supported websites:
### 1. Using Local Configuration
@ -645,23 +366,303 @@ Note: If `use_api` is set to `false` and no `domains.json` file is found, the sc
#### 💡 Adding a New Site to the Legacy API
If you want to add a new site to the legacy API, just message me on the Discord server, and I'll add it!
</details>
# Configuration
<details>
<summary>⚙️ Overview</summary>
You can change some behaviors by tweaking the configuration file. The configuration file is divided into several main sections.
</details>
<details>
<summary>🔧 DEFAULT Settings</summary>
```json
{
"DEFAULT": {
"debug": false,
"show_message": true,
"clean_console": true,
"show_trending": true,
"use_api": true,
"not_close": false,
"telegram_bot": false,
"download_site_data": false,
"validate_github_config": false
}
}
```
- `debug`: Enables debug logging
- `show_message`: Displays informational messages
- `clean_console`: Clears the console between operations
- `show_trending`: Shows trending content
- `use_api`: Uses API for domain updates instead of local configuration
- `not_close`: If set to true, keeps the program running after download is complete
* Can be changed from terminal with `--not_close true/false`
- `telegram_bot`: Enables Telegram bot integration
- `download_site_data`: If set to false, disables automatic site data download
- `validate_github_config`: If set to false, disables validation and updating of configuration from GitHub
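Since the file is plain JSON, these flags can also be inspected with the standard library. A minimal sketch (the project itself reads configuration through its own `config_manager`):

```python
# Sketch: read the DEFAULT section directly from config.json.
import json

with open("config.json", encoding="utf-8") as f:
    config = json.load(f)

default = config["DEFAULT"]
if default.get("debug"):
    print("Debug logging enabled")
if not default.get("not_close"):
    print("The program will exit once the download completes")
```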
</details>
<details>
<summary>📁 OUT_FOLDER Settings</summary>
```json
{
"OUT_FOLDER": {
"root_path": "Video",
"movie_folder_name": "Movie",
"serie_folder_name": "Serie",
"anime_folder_name": "Anime",
"map_episode_name": "E%(episode)_%(episode_name)",
"add_siteName": false
}
}
```
#### Directory Configuration
- `root_path`: Directory where all videos will be saved
* Windows: `C:\\MyLibrary\\Folder` or `\\\\MyServer\\MyLibrary` (network folder)
* Linux/MacOS: `Desktop/MyLibrary/Folder`
#### Folder Names
- `movie_folder_name`: Subdirectory for movies (can be changed with `--movie_folder_name`)
- `serie_folder_name`: Subdirectory for TV series (can be changed with `--serie_folder_name`)
- `anime_folder_name`: Subdirectory for anime (can be changed with `--anime_folder_name`)
#### Episode Naming
- `map_episode_name`: Template for episode filenames
* `%(tv_name)`: Name of TV Show
* `%(season)`: Season number
* `%(episode)`: Episode number
* `%(episode_name)`: Episode name
* Can be changed with `--map_episode_name`
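To illustrate how the `%(...)` tokens expand, here is a small sketch; the regex-based substitution is illustrative only, not the project's actual formatting code:

```python
# Sketch: expand a map_episode_name template for one episode.
import re

template = "E%(episode)_%(episode_name)"
values = {
    "tv_name": "My Show",
    "season": "01",
    "episode": "05",
    "episode_name": "Pilot",
}
# Replace each %(token) with the matching value.
filename = re.sub(r"%\((\w+)\)", lambda m: values.get(m.group(1), ""), template)
print(filename)  # -> E05_Pilot
```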
#### Additional Options
- `add_siteName`: Appends site_name to root path (can be changed with `--add_siteName true/false`)
</details>
<details>
<summary>🔄 QBIT_CONFIG Settings</summary>
```json
{
"QBIT_CONFIG": {
"host": "192.168.1.51",
"port": "6666",
"user": "admin",
"pass": "adminadmin"
}
}
```
To enable qBittorrent integration, follow the setup guide [here](https://github.com/lgallard/qBittorrent-Controller/wiki/How-to-enable-the-qBittorrent-Web-UI).
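Once the Web UI is enabled, you can verify the credentials against qBittorrent's documented Web API. A sketch using the values above (whether the project authenticates exactly this way is an assumption):

```python
# Sketch: log in to the qBittorrent Web UI with the QBIT_CONFIG values.
# /api/v2/auth/login is qBittorrent's documented Web API login endpoint.
import httpx

qbit = {"host": "192.168.1.51", "port": "6666", "user": "admin", "pass": "adminadmin"}
response = httpx.post(
    f"http://{qbit['host']}:{qbit['port']}/api/v2/auth/login",
    data={"username": qbit["user"], "password": qbit["pass"]},
)
print(response.text)  # "Ok." on success, "Fails." on bad credentials
```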
</details>
<details>
<summary>📡 REQUESTS Settings</summary>
```json
{
"REQUESTS": {
"verify": false,
"timeout": 20,
"max_retry": 8,
"proxy": {
"http": "http://username:password@host:port",
"https": "https://username:password@host:port"
}
}
}
```
- `verify`: Verifies SSL certificates
- `timeout`: Maximum timeout (in seconds) for each request
- `max_retry`: Number of retry attempts per segment during M3U8 index download
- `proxy`: Proxy configuration for HTTP/HTTPS requests
* Set to empty string `""` to disable proxies (default)
* Example with authentication:
```json
"proxy": {
"http": "http://username:password@host:port",
"https": "https://username:password@host:port"
}
```
* Example without authentication:
```json
"proxy": {
"http": "http://host:port",
"https": "https://host:port"
}
```
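For reference, such an entry maps naturally onto httpx, which the project uses for its requests; a sketch with placeholder credentials (the exact wiring inside the project is an assumption):

```python
# Sketch: route a request through the configured proxy with httpx.
import httpx

proxy_url = "http://username:password@host:port"  # placeholder values
client = httpx.Client(proxy=proxy_url, timeout=20)
response = client.get("https://httpbin.org/ip")
print(response.json())
client.close()
```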
</details>
<details>
<summary>📥 M3U8_DOWNLOAD Settings</summary>
```json
{
"M3U8_DOWNLOAD": {
"tqdm_delay": 0.01,
"default_video_workser": 12,
"default_audio_workser": 12,
"segment_timeout": 8,
"download_audio": true,
"merge_audio": true,
"specific_list_audio": [
"ita"
],
"download_subtitle": true,
"merge_subs": true,
"specific_list_subtitles": [
"ita", // Specify language codes or use ["*"] to download all available subtitles
"eng"
],
"cleanup_tmp_folder": true
}
}
```
#### Performance Settings
- `tqdm_delay`: Delay between progress bar updates
- `default_video_workser`: Number of threads for video download
* Can be changed with `--default_video_worker <number>`
- `default_audio_workser`: Number of threads for audio download
* Can be changed with `--default_audio_worker <number>`
- `segment_timeout`: Timeout for downloading individual segments
#### Audio Settings
- `download_audio`: Whether to download audio tracks
- `merge_audio`: Whether to merge audio with video
- `specific_list_audio`: List of audio languages to download
* Can be changed with `--specific_list_audio ita,eng`
#### Subtitle Settings
- `download_subtitle`: Whether to download subtitles
- `merge_subs`: Whether to merge subtitles with video
- `specific_list_subtitles`: List of subtitle languages to download
* Use `["*"]` to download all available subtitles
* Or specify individual languages like `["ita", "eng"]`
* Can be changed with `--specific_list_subtitles ita,eng`
#### Cleanup
- `cleanup_tmp_folder`: Remove temporary .ts files after download
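To make the worker and timeout settings concrete, here is a simplified sketch of a parallel segment fetch; the real downloader (retries, ordering, merging) is more involved, and the segment URLs are placeholders:

```python
# Sketch: fetch HLS segments in parallel, mirroring default_video_workser (12)
# and segment_timeout (8 seconds). Placeholder URLs; no retry logic shown.
from concurrent.futures import ThreadPoolExecutor

import httpx

segment_urls = [f"https://example.org/seg_{i}.ts" for i in range(10)]

def fetch(url: str) -> bytes:
    response = httpx.get(url, timeout=8)   # segment_timeout
    response.raise_for_status()
    return response.content

with ThreadPoolExecutor(max_workers=12) as pool:  # default_video_workser
    segments = list(pool.map(fetch, segment_urls))
print(f"Downloaded {len(segments)} segments")
```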
</details>
<details>
<summary>🌍 Available Language Codes</summary>
| European | Asian | Middle Eastern | Others |
|-----------------|-----------------|-----------------|-----------------|
| ita - Italian | chi - Chinese | ara - Arabic | eng - English |
| spa - Spanish | jpn - Japanese | heb - Hebrew | por - Portuguese|
| fre - French | kor - Korean | tur - Turkish | fil - Filipino |
| ger - German | hin - Hindi | | ind - Indonesian|
| rus - Russian | mal - Malayalam | | may - Malay |
| swe - Swedish | tam - Tamil | | vie - Vietnamese|
| pol - Polish | tel - Telugu | | |
| ukr - Ukrainian | tha - Thai | | |
</details>
<details>
<summary>🎥 M3U8_CONVERSION Settings</summary>
```json
{
"M3U8_CONVERSION": {
"use_codec": false,
"use_vcodec": true,
"use_acodec": true,
"use_bitrate": true,
"use_gpu": false,
"default_preset": "ultrafast"
}
}
```
#### Basic Settings
- `use_codec`: Use specific codec settings
- `use_vcodec`: Use specific video codec
- `use_acodec`: Use specific audio codec
- `use_bitrate`: Apply bitrate settings
- `use_gpu`: Enable GPU acceleration (if available)
- `default_preset`: FFmpeg encoding preset
#### Encoding Presets
The `default_preset` configuration can be set to:
- `ultrafast`: Extremely fast conversion but larger file size
- `superfast`: Very fast with good quality/size ratio
- `veryfast`: Fast with good compression
- `faster`: Optimal balance for most users
- `fast`: Good compression, moderate time
- `medium`: FFmpeg default setting
- `slow`: High quality, slower process
- `slower`: Very high quality, slow process
- `veryslow`: Maximum quality, very slow process
#### GPU Acceleration
When `use_gpu` is enabled, supports:
- NVIDIA: NVENC
- AMD: AMF
- Intel: QSV
Note: Requires updated drivers and FFmpeg with hardware acceleration support.
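As a rough illustration of how these flags could translate into an FFmpeg invocation (the project's exact argument mapping is an assumption; note that preset names like `ultrafast` apply to libx264, while hardware encoders use their own preset scales):

```python
# Sketch: map M3U8_CONVERSION flags onto an FFmpeg command line.
settings = {
    "use_vcodec": True,
    "use_acodec": True,
    "use_gpu": False,
    "default_preset": "ultrafast",
}

cmd = ["ffmpeg", "-i", "input.ts"]
if settings["use_vcodec"]:
    # h264_nvenc shown for the NVIDIA path; AMD would be h264_amf, Intel h264_qsv.
    cmd += ["-c:v", "h264_nvenc" if settings["use_gpu"] else "libx264"]
if settings["use_acodec"]:
    cmd += ["-c:a", "aac"]
if not settings["use_gpu"]:
    cmd += ["-preset", settings["default_preset"]]
cmd.append("output.mp4")
print(" ".join(cmd))
```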
</details>
<details>
<summary>🔍 M3U8_PARSER Settings</summary>
```json
{
"M3U8_PARSER": {
"force_resolution": "Best",
"get_only_link": false
}
}
```
#### Resolution Options
- `force_resolution`: Choose video resolution:
* `"Best"`: Highest available resolution
* `"Worst"`: Lowest available resolution
* `"720p"`: Force 720p resolution
* Specific resolutions:
- 1080p (1920x1080)
- 720p (1280x720)
- 480p (640x480)
- 360p (640x360)
- 320p (480x320)
- 240p (426x240)
- 240p (320x240)
- 144p (256x144)
#### Link Options
- `get_only_link`: Return M3U8 playlist/index URL instead of downloading
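The selection rule `force_resolution` implies can be sketched as follows, given variant streams parsed from the playlist's `#EXT-X-STREAM-INF` tags (the fallback to the closest height is an assumption, not necessarily the project's behavior):

```python
# Sketch: choose a variant playlist according to force_resolution.
variants = [(1080, "v1080.m3u8"), (720, "v720.m3u8"), (480, "v480.m3u8")]
force_resolution = "Best"  # or "Worst", or e.g. "720p"

if force_resolution == "Best":
    chosen = max(variants)
elif force_resolution == "Worst":
    chosen = min(variants)
else:
    target = int(force_resolution.rstrip("p"))
    # Fall back to the closest available height if the exact one is missing.
    chosen = min(variants, key=lambda v: abs(v[0] - target))
print(f"Selected {chosen[0]}p -> {chosen[1]}")
```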
</details>
# Global Search
<details>
<summary>🔍 Feature Overview</summary>
You can now search across multiple streaming sites at once using the Global Search feature. This allows you to find content more efficiently without having to search each site individually.
</details>
## Using Global Search
The Global Search feature provides a unified interface to search across all supported sites.
## Search Options
<details>
<summary>🎯 Search Options</summary>
When using Global Search, you have three ways to select which sites to search:
1. **Search all sites** - Searches across all available streaming sites
2. **Search by category** - Group sites by their categories (movies, series, anime, etc.)
3. **Select specific sites** - Choose individual sites to include in your search
</details>
## Navigation and Selection
<details>
<summary>📝 Navigation and Selection</summary>
After performing a search:
@ -673,13 +674,16 @@ After performing a search:
2. Select an item by number to view details or download
3. The system will automatically use the appropriate site's API to handle the download
</details>
## Command Line Arguments
<details>
<summary>⌨️ Command Line Arguments</summary>
The Global Search can be configured from the command line:
- `--global` - Perform a global search across multiple sites.
- `-s`, `--search` - Specify the search terms.
</details>
# Examples of terminal usage
@ -699,25 +703,32 @@ python test_run.py --global -s "cars"
# Docker
You can run the script in a Docker container.
<details>
<summary>🐳 Basic Setup</summary>
Build the image:
```
docker build -t streaming-community-api .
```
Run the container with Cloudflare DNS for better connectivity:
```
docker run -it --dns 1.1.1.1 -p 8000:8000 streaming-community-api
```
</details>
<details>
<summary>💾 Custom Storage Location</summary>
By default the videos will be saved in `/app/Video` inside the container. To save them on your machine:
```
docker run -it --dns 9.9.9.9 -p 8000:8000 -v /path/to/download:/app/Video streaming-community-api
```
</details>
### Docker quick setup with Make
<details>
<summary>🛠️ Quick Setup with Make</summary>
The Makefile (you need `make` installed) already provides two commands to build and run the container:
@ -729,10 +740,12 @@ make LOCAL_DIR=/path/to/download run-container
```
The `run-container` command also mounts the `config.json` file, so any change to the configuration is reflected immediately without rebuilding the image.
</details>
# Telegram Usage
## Configuration
<details>
<summary>⚙️ Basic Configuration</summary>
The bot was created to replace terminal commands and allow interaction via Telegram. Each download runs within a screen session, enabling multiple downloads to run simultaneously.
@ -761,20 +774,21 @@ TOKEN_TELEGRAM=IlTuo2131TOKEN$12D3Telegram
AUTHORIZED_USER_ID=12345678
DEBUG=False
```
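A sketch of loading these values from the `.env` file; the `python-dotenv` package is an assumption here, as the bot may read them differently:

```python
# Sketch: load the Telegram bot settings from .env.
# Assumes the python-dotenv package (pip install python-dotenv).
import os

from dotenv import load_dotenv

load_dotenv()
token = os.environ["TOKEN_TELEGRAM"]
authorized_user = int(os.environ["AUTHORIZED_USER_ID"])
debug = os.environ.get("DEBUG", "False") == "True"
print(f"Bot configured for user {authorized_user} (debug={debug})")
```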
</details>
## Install Python Dependencies
<details>
<summary>📥 Dependencies & Launch</summary>
Install dependencies:
```bash
pip install -r requirements.txt
```
Start the bot (from /StreamingCommunity/TelegramHelp):
```bash
python3 telegram_bot.py
```
</details>
# Tutorials
@ -788,22 +802,21 @@ python3 telegram_bot.py
- To finish: [website API](https://github.com/Arrowar/StreamingCommunity/tree/test_gui_1)
- To finish: [website API 2](https://github.com/hydrosh/StreamingCommunity/tree/test_gui_1)
# Contributing
Contributions are welcome! Steps:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## Useful Projects
### 🎯 [Unit3Dup](https://github.com/31December99/Unit3Dup)
Python bot for automatically generating and uploading torrents to Unit3D-based trackers.
### 🇮🇹 [MammaMia](https://github.com/UrloMythus/MammaMia)
Stremio addon that enables HTTPS streaming of Italian-language films, series, anime, and live TV.
### 🧩 [streamingcommunity-unofficialapi](https://github.com/Blu-Tiger/streamingcommunity-unofficialapi)
Unofficial API for accessing content from the Italian site StreamingCommunity.
### 🎥 [stream-buddy](https://github.com/Bbalduzz/stream-buddy)
Tool for watching or downloading films from the StreamingCommunity platform.
# Disclaimer
This software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
## Contributors
<a href="https://github.com/Arrowar/StreamingCommunity/graphs/contributors" alt="View Contributors">
<img src="https://contrib.rocks/image?repo=Arrowar/StreamingCommunity&max=1000&columns=10" alt="Contributors" />
</a>

View File

@ -18,21 +18,13 @@ max_timeout = config_manager.get_int("REQUESTS", "timeout")
class VideoSource:
def __init__(self, url, cookie) -> None:
"""
Initializes the VideoSource object with the provided URL and cookie.
"""
self.headers = {'user-agent': get_userAgent()}
self.url = url
self.cookie = cookie
def make_request(self, url: str) -> str:
"""

View File

@ -0,0 +1,65 @@
# 29.04.25
import re
# External library
import httpx
from bs4 import BeautifulSoup
# Internal utilities
from StreamingCommunity.Util.headers import get_userAgent
from StreamingCommunity.Util.config_json import config_manager
# Variable
MAX_TIMEOUT = config_manager.get_int("REQUESTS", "timeout")
class VideoSource:
def __init__(self, proxy=None):
self.client = httpx.Client(headers={'user-agent': get_userAgent()}, timeout=MAX_TIMEOUT, proxy=proxy)
def extractLinkHdPlayer(self, response):
"""Extract iframe source from the page."""
soup = BeautifulSoup(response.content, 'html.parser')
iframes = soup.find_all("iframe")
if iframes:
return iframes[0].get('data-lazy-src')
return None
def get_m3u8_url(self, page_url):
"""
Extract m3u8 URL from hdPlayer page.
"""
try:
base_domain = re.match(r'https?://(?:www\.)?([^/]+)', page_url).group(0)
self.client.headers.update({'referer': base_domain})
# Get the page content
response = self.client.get(page_url)
# Extract HDPlayer iframe URL
iframe_url = self.extractLinkHdPlayer(response)
if not iframe_url:
return None
# Get HDPlayer page content
response_hdplayer = self.client.get(iframe_url)
if response_hdplayer.status_code != 200:
return None
sources_pattern = r'file:"([^"]+)"'
match = re.search(sources_pattern, response_hdplayer.text)
if match:
return match.group(1)
return None
except Exception as e:
print(f"Error in HDPlayer: {str(e)}")
return None
finally:
self.client.close()

View File

@ -1,4 +1,5 @@
# 05.07.24
# NOTE: NOT USED
import re
import logging

View File

@ -0,0 +1,64 @@
# 11.04.25
# External libraries
import httpx
# Internal utilities
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.headers import get_headers
# Variable
MAX_TIMEOUT = config_manager.get_int("REQUESTS", "timeout")
class VideoSource:
@staticmethod
def extract_m3u8_url(video_url: str) -> str:
"""Extract the m3u8 streaming URL from a RaiPlay video URL."""
if not video_url.endswith('.json'):
if '/video/' in video_url:
video_id = video_url.split('/')[-1].split('.')[0]
video_path = '/'.join(video_url.split('/')[:-1])
video_url = f"{video_path}/{video_id}.json"
else:
return "Error: Unable to determine video JSON URL"
try:
response = httpx.get(video_url, headers=get_headers(), timeout=MAX_TIMEOUT)
if response.status_code != 200:
return f"Error: Failed to fetch video data (Status: {response.status_code})"
video_data = response.json()
content_url = video_data.get("video").get("content_url")
if not content_url:
return "Error: No content URL found in video data"
# Extract the element key
if "=" in content_url:
element_key = content_url.split("=")[1]
else:
return "Error: Unable to extract element key"
# Request the stream URL
params = {
'cont': element_key,
'output': '62',
}
stream_response = httpx.get('https://mediapolisvod.rai.it/relinker/relinkerServlet.htm', params=params, headers=get_headers(), timeout=MAX_TIMEOUT)
if stream_response.status_code != 200:
return f"Error: Failed to fetch stream URL (Status: {stream_response.status_code})"
# Extract the m3u8 URL
stream_data = stream_response.json()
m3u8_url = stream_data.get("video")[0] if "video" in stream_data else None
return m3u8_url
except Exception as e:
return f"Error: {str(e)}"

View File

@ -0,0 +1,145 @@
# 05.07.24
import re
import logging
# External libraries
import httpx
import jsbeautifier
from bs4 import BeautifulSoup
# Internal utilities
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.headers import get_userAgent
# Variable
MAX_TIMEOUT = config_manager.get_int("REQUESTS", "timeout")
class VideoSource:
STAYONLINE_BASE_URL = "https://stayonline.pro"
MIXDROP_BASE_URL = "https://mixdrop.sb"
def __init__(self, url: str):
self.url = url
self.redirect_url: str | None = None
self._init_headers()
def _init_headers(self) -> None:
"""Initialize the base headers used for requests."""
self.headers = {
'origin': self.STAYONLINE_BASE_URL,
'user-agent': get_userAgent(),
}
def _get_mixdrop_headers(self) -> dict:
"""Get headers specifically for MixDrop requests."""
return {
'referer': 'https://mixdrop.club/',
'user-agent': get_userAgent()
}
def get_redirect_url(self) -> str:
"""Extract the stayonline redirect URL from the initial page."""
try:
response = httpx.get(self.url, headers=self.headers, follow_redirects=True, timeout=MAX_TIMEOUT)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
for link in soup.find_all('a'):
href = link.get('href')
if href and 'stayonline' in href:
self.redirect_url = href
logging.info(f"Redirect URL: {self.redirect_url}")
return self.redirect_url
raise ValueError("Stayonline URL not found")
except Exception as e:
logging.error(f"Error getting redirect URL: {e}")
raise
def get_link_id(self) -> str:
"""Extract the link ID from the redirect page."""
if not self.redirect_url:
raise ValueError("Redirect URL not set. Call get_redirect_url first.")
try:
response = httpx.get(self.redirect_url, headers=self.headers, follow_redirects=True, timeout=MAX_TIMEOUT)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
for script in soup.find_all('script'):
match = re.search(r'var\s+linkId\s*=\s*"([^"]+)"', script.text)
if match:
return match.group(1)
raise ValueError("LinkId not found")
except Exception as e:
logging.error(f"Error getting link ID: {e}")
raise
def get_final_url(self, link_id: str) -> str:
"""Get the final URL using the link ID."""
try:
self.headers['referer'] = f'{self.STAYONLINE_BASE_URL}/l/{link_id}/'
data = {'id': link_id, 'ref': ''}
response = httpx.post(f'{self.STAYONLINE_BASE_URL}/ajax/linkView.php', headers=self.headers, data=data, timeout=MAX_TIMEOUT)
response.raise_for_status()
return response.json()['data']['value']
except Exception as e:
logging.error(f"Error getting final URL: {e}")
raise
def _extract_video_id(self, final_url: str) -> str:
"""Extract video ID from the final URL."""
parts = final_url.split('/')
if len(parts) < 5:
raise ValueError("Invalid final URL format")
return parts[4]
def _extract_delivery_url(self, script_text: str) -> str:
"""Extract delivery URL from beautified JavaScript."""
beautified = jsbeautifier.beautify(script_text)
for line in beautified.splitlines():
if 'MDCore.wurl' in line:
url = line.split('= ')[1].strip('"').strip(';')
return f"https:{url}"
raise ValueError("Delivery URL not found in script")
def get_playlist(self) -> str:
"""
Execute the entire flow to obtain the final video URL.
Returns:
str: The final video delivery URL
"""
self.get_redirect_url()
link_id = self.get_link_id()
final_url = self.get_final_url(link_id)
video_id = self._extract_video_id(final_url)
response = httpx.get(
f'{self.MIXDROP_BASE_URL}/e/{video_id}',
headers=self._get_mixdrop_headers(),
timeout=MAX_TIMEOUT
)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
script_text = next(
(script.text for script in soup.find_all('script')
if "eval" in str(script.text)),
None
)
if not script_text:
raise ValueError("Required script not found")
return self._extract_delivery_url(script_text).replace('"', '')

View File

@ -5,9 +5,9 @@ import logging
# External libraries
import jsbeautifier
from bs4 import BeautifulSoup
from curl_cffi import requests
# Internal utilities
@ -28,7 +28,6 @@ class VideoSource:
- url (str): The URL of the video source.
"""
self.headers = get_headers()
self.url = url
def make_request(self, url: str) -> str:
@ -42,8 +41,10 @@ class VideoSource:
- str: The response content if successful, None otherwise.
"""
try:
response = requests.get(url, headers=self.headers, timeout=MAX_TIMEOUT, impersonate="chrome110")
if response.status_code >= 400:
logging.error(f"Request failed with status code: {response.status_code}")
return None
return response.text
except Exception as e:

View File

@ -16,9 +16,9 @@ from StreamingCommunity.Util.headers import get_userAgent
MAX_TIMEOUT = config_manager.get_int("REQUESTS", "timeout")
class VideoSource:
def __init__(self, full_url, episode_data, session_id, csrf_token):
"""Initialize the AnimeWorldPlayer with session details, episode data, and URL."""
"""Initialize the VideoSource with session details, episode data, and URL."""
self.session_id = session_id
self.csrf_token = csrf_token
self.episode_data = episode_data
@ -33,7 +33,7 @@ class AnimeWorldPlayer:
timeout=MAX_TIMEOUT
)
def get_playlist(self):
"""Fetch the download link from AnimeWorld using the episode link."""
try:
# Make a POST request to the episode link and follow any redirects

View File

@ -1,6 +1,6 @@
# 01.03.24
import time
import logging
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse
@ -24,26 +24,22 @@ console = Console()
class VideoSource:
def __init__(self, url: str, is_series: bool, media_id: int = None, proxy: str = None):
"""
Initialize video source for streaming site.
Args:
- url (str): The URL of the streaming site.
- is_series (bool): Flag for series or movie content
- media_id (int, optional): Unique identifier for media item
"""
self.headers = {'user-agent': get_userAgent()}
self.url = url
self.proxy = proxy
self.is_series = is_series
self.media_id = media_id
self.iframe_src = None
self.window_parameter = None
def get_iframe(self, episode_id: int) -> None:
"""
@ -61,7 +57,7 @@ class VideoSource:
}
try:
response = httpx.get(f"{self.url}/iframe/{self.media_id}", params=params, timeout=MAX_TIMEOUT)
response = httpx.get(f"{self.url}/iframe/{self.media_id}", headers=self.headers, params=params, timeout=MAX_TIMEOUT, proxy=self.proxy)
response.raise_for_status()
# Parse response with BeautifulSoup to get iframe source
@ -87,6 +83,7 @@ class VideoSource:
self.window_video = WindowVideo(converter.get('video'))
self.window_streams = StreamsCollection(converter.get('streams'))
self.window_parameter = WindowParameter(converter.get('masterPlaylist'))
time.sleep(0.5)
except Exception as e:
logging.error(f"Error parsing script: {e}")
@ -113,41 +110,45 @@ class VideoSource:
# Parse script to get video information
self.parse_script(script_text=script)
except httpx.HTTPStatusError as e:
if e.response.status_code == 404:
console.print("[yellow]This content will be available soon![/yellow]")
return
logging.error(f"Error getting content: {e}")
raise
except Exception as e:
logging.error(f"Error getting content: {e}")
raise
def get_playlist(self) -> str | None:
"""
Generate authenticated playlist URL.
Returns:
str | None: Fully constructed playlist URL with authentication parameters, or None if content unavailable
"""
if not self.window_parameter:
return None
params = {}
# Add 'h' parameter if video quality is 1080p
if self.canPlayFHD:
params['h'] = 1
# Parse the original URL
parsed_url = urlparse(self.window_parameter.url)
query_params = parse_qs(parsed_url.query)
# Check specifically for 'b=1' in the query parameters
if 'b' in query_params and query_params['b'] == ['1']:
params['b'] = 1
# Add authentication parameters (token and expiration)
params.update({
"token": self.window_parameter.token,
"expires": self.window_parameter.expires
})
# Build the updated query string
query_string = urlencode(params)
# Construct the new URL with updated query parameters
return urlunparse(parsed_url._replace(query=query_string))
@ -164,6 +165,7 @@ class VideoSourceAnime(VideoSource):
self.headers = {'user-agent': get_userAgent()}
self.url = url
self.src_mp4 = None
self.iframe_src = None
def get_embed(self, episode_id: int):
"""

View File

@ -21,10 +21,10 @@ from .title import download_title
# Variable
indice = 3
_useFor = "film_serie"
_deprecate = False
_priority = 2
_engineDownload = "tor"
_useFor = "Torrent"
_priority = 0
_engineDownload = "Torrent"
_deprecate = True
console = Console()
msg = Prompt()
@ -39,7 +39,7 @@ def process_search_result(select_title):
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None):
"""
Main function of the application for search.
Parameters:
string_to_search (str, optional): String to search for
@ -62,7 +62,7 @@ def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_
return media_search_manager
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager, len_database)
download_title(select_title)
else:

View File

@ -43,17 +43,22 @@ def title_search(query: str) -> int:
console.print(f"[cyan]Search url: [yellow]{search_url}")
try:
response = httpx.get(
search_url,
headers={'user-agent': get_userAgent()},
timeout=max_timeout,
follow_redirects=True
)
response.raise_for_status()
except Exception as e:
console.print(f"Site: {site_constant.SITE_NAME}, request search error: {e}")
console.print(f"[red]Site: {site_constant.SITE_NAME}, request search error: {e}")
return 0
# Create soup and find table
soup = BeautifulSoup(response.text, "html.parser")
for i, tr in enumerate(soup.find_all('tr')):
try:
title_info = {
@ -67,6 +72,9 @@ def title_search(query: str) -> int:
}
media_search_manager.add_media(title_info)
if i == 20:
break
except Exception as e:
print(f"Error parsing a film entry: {e}")

View File

@ -24,10 +24,10 @@ from .series import download_series
# Variable
indice = 2
_useFor = "film_serie"
_deprecate = False
_priority = 1
_useFor = "Film_&_Serie"
_priority = 0
_engineDownload = "hls"
_deprecate = False
msg = Prompt()
console = Console()
@ -57,27 +57,43 @@ def get_user_input(string_to_search: str = None):
return string_to_search
def process_search_result(select_title, selections=None):
"""
Handles the search result and initiates the download for either a film or series.
Parameters:
select_title (MediaItem): The selected media item
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if select_title.type == 'tv':
season_selection = None
episode_selection = None
if selections:
season_selection = selections.get('season')
episode_selection = selections.get('episode')
download_series(select_title, season_selection, episode_selection)
else:
download_film(select_title)
# search("Game of Thrones", selections={"season": "1", "episode": "1-3"})
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None, selections: dict = None):
"""
Main function of the application for search.
Parameters:
string_to_search (str, optional): String to search for
get_onlyDatabase (bool, optional): If True, return only the database object
direct_item (dict, optional): Direct item to process (bypass search)
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if direct_item:
select_title = MediaItem(**direct_item)
process_search_result(select_title)
process_search_result(select_title, selections)
return
# Get the user input for the search term
@ -94,8 +110,8 @@ def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_
bot = get_bot_instance()
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager, len_database)
process_search_result(select_title, selections)
else:
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")
@ -105,4 +121,4 @@ def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_
# If no results are found, ask again
string_to_search = get_user_input()
search(string_to_search, get_onlyDatabase, None, selections)

View File

@ -1,6 +1,7 @@
# 16.03.25
import os
import re
# External library
@ -42,7 +43,6 @@ def download_film(select_title: MediaItem) -> str:
Return:
- str: output path if successful, otherwise None
"""
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
bot.send_message(f"Download in corso:\n{select_title.name}", None)
@ -57,51 +57,38 @@ def download_film(select_title: MediaItem) -> str:
start_message()
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [cyan]{select_title.name}[/cyan] \n")
# Extract mostraguarda URL
try:
    response = httpx.get(select_title.url, headers=get_headers(), timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, 'html.parser')
    iframes = soup.find_all('iframe')
    mostraguarda = iframes[0]['src']
except Exception as e:
    console.print(f"[red]Site: {site_constant.SITE_NAME}, request error: {e}, get mostraguarda")
    return None

# Extract supervideo URL
supervideo_url = None
try:
    response = httpx.get(mostraguarda, headers=get_headers(), timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, 'html.parser')
    pattern = r'//supervideo\.[^/]+/[a-z]/[a-zA-Z0-9]+'
    supervideo_match = re.search(pattern, response.text)
    supervideo_url = 'https:' + supervideo_match.group(0)
except Exception as e:
    console.print(f"[red]Site: {site_constant.SITE_NAME}, request error: {e}, get supervideo URL")
    console.print("[yellow]This content will be available soon![/yellow]")
    return None

if not supervideo_url:
    return None

# Init class
video_source = VideoSource(supervideo_url)
master_playlist = video_source.get_playlist()
# Define the filename and path for the downloaded film

View File

@ -19,8 +19,7 @@ from StreamingCommunity.TelegramHelp.telegram_bot import get_bot_instance, Teleg
from .util.ScrapeSerie import GetSerieInfo
from StreamingCommunity.Api.Template.Util import (
manage_selection,
map_episode_title,
validate_selection,
validate_episode_selection,
display_episodes_list
@ -40,23 +39,24 @@ console = Console()
def download_video(index_season_selected: int, index_episode_selected: int, scrape_serie: GetSerieInfo) -> Tuple[str,bool]:
"""
Downloads a specific episode from a specified season.
Parameters:
- index_season_selected (int): Season number
- index_episode_selected (int): Episode index
- scrape_serie (GetSerieInfo): Scraper object with series information
Returns:
- str: Path to downloaded file
- bool: Whether download was stopped
"""
start_message()
# Get episode information
obj_episode = scrape_serie.selectEpisode(index_season_selected, index_episode_selected-1)
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [bold magenta]{obj_episode.name}[/bold magenta] ([cyan]S{index_season_selected}E{index_episode_selected}[/cyan]) \n")
# Telegram integration
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
@ -93,21 +93,21 @@ def download_video(index_season_selected: int, index_episode_selected: int, scra
return r_proc['path'], r_proc['stopped']
def download_episode(index_season_selected: int, scrape_serie: GetSerieInfo, download_all: bool = False, episode_selection: str = None) -> None:
"""
Handle downloading episodes for a specific season.
Parameters:
- index_season_selected (int): Season number
- scrape_serie (GetSerieInfo): Scraper object with series information
- download_all (bool): Whether to download all episodes
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
"""
start_message()
# Get episodes for the selected season
episodes = scrape_serie.getEpisodeSeasons(index_season_selected)
episodes_count = len(episodes)
if download_all:
# Download all episodes without asking
for i_episode in range(1, episodes_count + 1):
path, stopped = download_video(index_season_selected, i_episode, scrape_serie)
@ -117,16 +117,16 @@ def download_episode(index_season_selected: int, scrape_serie: GetSerieInfo, dow
console.print(f"\n[red]End downloaded [yellow]season: [red]{index_season_selected}.")
else:
if episode_selection is not None:
last_command = episode_selection
console.print(f"\n[cyan]Using provided episode selection: [yellow]{episode_selection}")
else:
last_command = display_episodes_list(episodes)
# Prompt user for episode selection
list_episode_select = manage_selection(last_command, episodes_count)
list_episode_select = validate_episode_selection(list_episode_select, episodes_count)
# Download selected episodes if not stopped
for i_episode in list_episode_select:
@ -135,69 +135,65 @@ def download_episode(index_season_selected: int, scrape_serie: GetSerieInfo, dow
if stopped:
break
def download_series(select_season: MediaItem, season_selection: str = None, episode_selection: str = None) -> None:
"""
Handle downloading a complete series.
Parameters:
- select_season (MediaItem): Series metadata from search
- season_selection (str, optional): Pre-defined season selection that bypasses manual input
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
"""
start_message()
# Init class
scrape_serie = GetSerieInfo(select_season.url)
# Get total number of seasons
seasons_count = scrape_serie.getNumberSeason()
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
# Prompt user for season selection and download episodes
console.print(f"\n[green]Seasons found: [red]{seasons_count}")
# If season_selection is provided, use it instead of asking for input
if season_selection is None:
    if site_constant.TELEGRAM_BOT:
        console.print("\n[cyan]Insert season number [yellow](e.g., 1), [red]* [cyan]to download all seasons, "
                      "[yellow](e.g., 1-2) [cyan]for a range of seasons, or [yellow](e.g., 3-*) [cyan]to download from a specific season to the end")
        bot.send_message(f"Stagioni trovate: {seasons_count}", None)

        index_season_selected = bot.ask(
            "select_title_episode",
            "Menu di selezione delle stagioni\n\n"
            "- Inserisci il numero della stagione (ad esempio, 1)\n"
            "- Inserisci * per scaricare tutte le stagioni\n"
            "- Inserisci un intervallo di stagioni (ad esempio, 1-2) per scaricare da una stagione all'altra\n"
            "- Inserisci (ad esempio, 3-*) per scaricare dalla stagione specificata fino alla fine della serie",
            None
        )
    else:
        index_season_selected = msg.ask(
            "\n[cyan]Insert season number [yellow](e.g., 1), [red]* [cyan]to download all seasons, "
            "[yellow](e.g., 1-2) [cyan]for a range of seasons, or [yellow](e.g., 3-*) [cyan]to download from a specific season to the end"
        )
else:
    index_season_selected = season_selection
    console.print(f"\n[cyan]Using provided season selection: [yellow]{season_selection}")
# Validate the selection
list_season_select = manage_selection(index_season_selected, seasons_count)
list_season_select = validate_selection(list_season_select, seasons_count)
# Loop through the selected seasons and download episodes
for i_season in list_season_select:
if len(list_season_select) > 1 or index_season_selected == "*":
# Download all episodes if multiple seasons are selected or if '*' is used
download_episode(i_season, scrape_serie, download_all=True)
else:
# Otherwise, let the user select specific episodes for the single season
download_episode(i_season, scrape_serie, download_all=False, episode_selection=episode_selection)
if site_constant.TELEGRAM_BOT:
bot.send_message(f"Finito di scaricare tutte le serie e episodi", None)
@ -205,4 +201,4 @@ def download_series(select_season: MediaItem) -> None:
# Get script_id
script_id = TelegramSession.get_session()
if script_id != "unknown":
TelegramSession.deleteScriptId(script_id)

View File

@ -46,11 +46,16 @@ def title_search(query: str) -> int:
console.print(f"[cyan]Search url: [yellow]{search_url}")
try:
response = httpx.post(
search_url,
headers={'user-agent': get_userAgent()},
timeout=max_timeout,
follow_redirects=True
)
response.raise_for_status()
except Exception as e:
console.print(f"Site: {site_constant.SITE_NAME}, request search error: {e}")
console.print(f"[red]Site: {site_constant.SITE_NAME}, request search error: {e}")
if site_constant.TELEGRAM_BOT:
bot.send_message(f"ERRORE\n\nErrore nella richiesta di ricerca:\n\n{e}", None)
return 0
@ -78,7 +83,8 @@ def title_search(query: str) -> int:
media_search_manager.add_media({
'url': url,
'name': title,
'type': tipo,
'image': f"{site_constant.FULL_URL}{movie_div.find('img', class_='layer-image').get('data-src')}"
})
if site_constant.TELEGRAM_BOT:

View File

@ -1,5 +1,8 @@
# 16.03.25
import logging
# External libraries
import httpx
from bs4 import BeautifulSoup
@ -15,7 +18,6 @@ from StreamingCommunity.Api.Player.Helper.Vixcloud.util import SeasonManager
max_timeout = config_manager.get_int("REQUESTS", "timeout")
class GetSerieInfo:
def __init__(self, url):
"""
@ -36,37 +38,84 @@ class GetSerieInfo:
soup = BeautifulSoup(response.text, "html.parser")
self.series_name = soup.find("title").get_text(strip=True).split(" - ")[0]
# Find all season dropdowns
seasons_dropdown = soup.find('div', class_='dropdown seasons')
if not seasons_dropdown:
    return

# Get all season items
season_items = seasons_dropdown.find_all('span', {'data-season': True})

for season_item in season_items:
    season_num = int(season_item['data-season'])
    season_name = season_item.get_text(strip=True)

    # Create a new season
    current_season = self.seasons_manager.add_season({
        'number': season_num,
        'name': season_name
    })

    # Find all episodes for this season
    episodes_container = soup.find('div', {'class': 'dropdown mirrors', 'data-season': str(season_num)})
    if not episodes_container:
        continue

    # Get all episode mirrors for this season
    episode_mirrors = soup.find_all('div', {'class': 'dropdown mirrors',
                                            'data-season': str(season_num)})

    for mirror in episode_mirrors:
        episode_data = mirror.get('data-episode', '').split('-')
        if len(episode_data) != 2:
            continue

        ep_num = int(episode_data[1])

        # Find supervideo link
        supervideo_span = mirror.find('span', {'data-id': 'supervideo'})
        if not supervideo_span:
            continue

        episode_url = supervideo_span.get('data-link', '')

        # Add episode to the season
        if current_season:
            current_season.episodes.add({
                'number': ep_num,
                'name': f"Episodio {ep_num}",
                'url': episode_url
            })
# ------------- FOR GUI -------------
def getNumberSeason(self) -> int:
"""
Get the total number of seasons available for the series.
"""
if not self.seasons_manager.seasons:
self.collect_season()
return len(self.seasons_manager.seasons)
def getEpisodeSeasons(self, season_number: int) -> list:
"""
Get all episodes for a specific season.
"""
if not self.seasons_manager.seasons:
self.collect_season()
# Get season directly by its number
season = self.seasons_manager.get_season_by_number(season_number)
return season.episodes.episodes if season else []
def selectEpisode(self, season_number: int, episode_index: int) -> dict:
"""
Get information for a specific episode in a specific season.
"""
episodes = self.getEpisodeSeasons(season_number)
if not episodes or episode_index < 0 or episode_index >= len(episodes):
logging.error(f"Episode index {episode_index} is out of range for season {season_number}")
return None
return episodes[episode_index]

View File

@ -18,15 +18,16 @@ from StreamingCommunity.TelegramHelp.telegram_bot import get_bot_instance
# Logic class
from .site import title_search, media_search_manager, table_show_manager
from .film import download_film
from .serie import download_series
# Variable
indice = 1
_useFor = "anime"
_deprecate = False
_priority = 2
_useFor = "Anime"
_priority = 0
_engineDownload = "mp4"
_deprecate = False
msg = Prompt()
console = Console()
@ -56,24 +57,42 @@ def get_user_input(string_to_search: str = None):
return string_to_search
def process_search_result(select_title, selections=None):
"""
Handles the search result and initiates the download for either a film or series.
Parameters:
select_title (MediaItem): The selected media item
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if select_title.type == 'Movie' or select_title.type == 'OVA':
download_film(select_title)
else:
season_selection = None
episode_selection = None
if selections:
season_selection = selections.get('season')
episode_selection = selections.get('episode')
download_series(select_title, season_selection, episode_selection)
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None, selections: dict = None):
"""
Main function of the application for search.
Parameters:
string_to_search (str, optional): String to search for
get_onlyDatabase (bool, optional): If True, return only the database object
direct_item (dict, optional): Direct item to process (bypass search)
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if direct_item:
select_title = MediaItem(**direct_item)
process_search_result(select_title, selections)
return
# Get the user input for the search term
@ -82,7 +101,7 @@ def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_
# Perform the database search
len_database = title_search(string_to_search)
# If only the database is needed, return the manager
if get_onlyDatabase:
return media_search_manager
@ -90,9 +109,9 @@ def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_
bot = get_bot_instance()
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager, len_database)
process_search_result(select_title, selections)
else:
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")
@ -101,4 +120,4 @@ def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_
# If no results are found, ask again
string_to_search = get_user_input()
search(string_to_search, get_onlyDatabase, None, selections)

View File

@ -0,0 +1,40 @@
# 11.03.24
# External library
from rich.console import Console
# Logic class
from .serie import download_episode
from .util.ScrapeSerie import ScrapeSerieAnime
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Player
from StreamingCommunity.Api.Player.vixcloud import VideoSourceAnime
# Variable
console = Console()
def download_film(select_title: MediaItem):
"""
Function to download a film.
Parameters:
- select_title (MediaItem): The selected media item.
"""
# Init class
scrape_serie = ScrapeSerieAnime(site_constant.FULL_URL)
video_source = VideoSourceAnime(site_constant.FULL_URL)
# Set up video source (only configure scrape_serie now)
scrape_serie.setup(None, select_title.id, select_title.slug)
scrape_serie.is_series = False
# Start download
download_episode(0, scrape_serie, video_source)

View File

@ -1,181 +0,0 @@
# 11.03.24
import os
import logging
from typing import Tuple
# External library
from rich.console import Console
from rich.prompt import Prompt
# Internal utilities
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Lib.Downloader import MP4_downloader
from StreamingCommunity.TelegramHelp.telegram_bot import TelegramSession, get_bot_instance
# Logic class
from .util.ScrapeSerie import ScrapeSerieAnime
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Util import manage_selection, dynamic_format_number
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Player
from StreamingCommunity.Api.Player.vixcloud import VideoSourceAnime
# Variable
console = Console()
msg = Prompt()
KILL_HANDLER = bool(False)
def download_episode(index_select: int, scrape_serie: ScrapeSerieAnime, video_source: VideoSourceAnime) -> Tuple[str,bool]:
"""
Downloads the selected episode.
Parameters:
- index_select (int): Index of the episode to download.
Return:
- str: output path
- bool: kill handler status
"""
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
# Get information about the selected episode
obj_episode = scrape_serie.get_info_episode(index_select)
if obj_episode is not None:
start_message()
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] ([cyan]E{obj_episode.number}[/cyan]) \n")
if site_constant.TELEGRAM_BOT:
bot.send_message(f"Download in corso:\nTitolo:{scrape_serie.series_name}\nEpisodio: {obj_episode.number}", None)
# Get script_id
script_id = TelegramSession.get_session()
if script_id != "unknown":
TelegramSession.updateScriptId(script_id, f"{scrape_serie.series_name} - E{obj_episode.number}")
# Collect mp4 url
video_source.get_embed(obj_episode.id)
# Create output path
mp4_name = f"{scrape_serie.series_name}_EP_{dynamic_format_number(str(obj_episode.number))}.mp4"
if scrape_serie.is_series:
mp4_path = os_manager.get_sanitize_path(os.path.join(site_constant.ANIME_FOLDER, scrape_serie.series_name))
else:
mp4_path = os_manager.get_sanitize_path(os.path.join(site_constant.MOVIE_FOLDER, scrape_serie.series_name))
# Create output folder
os_manager.create_path(mp4_path)
# Start downloading
path, kill_handler = MP4_downloader(
url=str(video_source.src_mp4).strip(),
path=os.path.join(mp4_path, mp4_name)
)
return path, kill_handler
else:
logging.error(f"Skip index: {index_select} cant find info with api.")
return None, True
def download_series(select_title: MediaItem):
"""
Function to download episodes of a TV series.
Parameters:
- tv_id (int): The ID of the TV series.
- tv_name (str): The name of the TV series.
"""
start_message()
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
scrape_serie = ScrapeSerieAnime(site_constant.FULL_URL)
video_source = VideoSourceAnime(site_constant.FULL_URL)
# Set up video source
scrape_serie.setup(None, select_title.id, select_title.slug)
# Get the count of episodes for the TV series
episoded_count = scrape_serie.get_count_episodes()
console.print(f"[cyan]Episodes find: [red]{episoded_count}")
if site_constant.TELEGRAM_BOT:
console.print(f"\n[cyan]Insert media [red]index [yellow]or [red]* [cyan]to download all media [yellow]or [red]1-2 [cyan]or [red]3-* [cyan]for a range of media")
bot.send_message(f"Episodi trovati: {episoded_count}", None)
last_command = bot.ask(
"select_title",
"Menu di selezione degli episodi: \n\n"
"- Inserisci il numero dell'episodio (ad esempio, 1)\n"
"- Inserisci * per scaricare tutti gli episodi\n"
"- Inserisci un intervallo di episodi (ad esempio, 1-2) per scaricare da un episodio all'altro\n"
"- Inserisci (ad esempio, 3-*) per scaricare dall'episodio specificato fino alla fine della serie",
None
)
else:
# Prompt user to select an episode index
last_command = msg.ask("\n[cyan]Insert media [red]index [yellow]or [red]* [cyan]to download all media [yellow]or [red]1-2 [cyan]or [red]3-* [cyan]for a range of media")
# Manage user selection
list_episode_select = manage_selection(last_command, episoded_count)
# Download selected episodes
if len(list_episode_select) == 1 and last_command != "*":
path, _ = download_episode(list_episode_select[0]-1, scrape_serie, video_source)
return path
# Download all other selected episodes
else:
kill_handler = False
for i_episode in list_episode_select:
if kill_handler:
break
_, kill_handler = download_episode(i_episode-1, scrape_serie, video_source)
if site_constant.TELEGRAM_BOT:
bot.send_message(f"Finito di scaricare tutte le serie e episodi", None)
# Get script_id
script_id = TelegramSession.get_session()
if script_id != "unknown":
TelegramSession.deleteScriptId(script_id)
def download_film(select_title: MediaItem):
"""
Function to download a film.
Parameters:
- id_film (int): The ID of the film.
- title_name (str): The title of the film.
"""
# Init class
scrape_serie = ScrapeSerieAnime(site_constant.FULL_URL)
video_source = VideoSourceAnime(site_constant.FULL_URL)
# Set up video source
scrape_serie.setup(None, select_title.id, select_title.slug)
scrape_serie.is_series = False
# Start download
download_episode(0, scrape_serie, video_source)

View File

@ -0,0 +1,153 @@
# 11.03.24
import os
from typing import Tuple
# External library
from rich.console import Console
from rich.prompt import Prompt
# Internal utilities
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Lib.Downloader import MP4_downloader
from StreamingCommunity.TelegramHelp.telegram_bot import TelegramSession, get_bot_instance
# Logic class
from .util.ScrapeSerie import ScrapeSerieAnime
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Util import manage_selection, dynamic_format_number
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Player
from StreamingCommunity.Api.Player.vixcloud import VideoSourceAnime
# Variable
console = Console()
msg = Prompt()
KILL_HANDLER = False
def download_episode(index_select: int, scrape_serie: ScrapeSerieAnime, video_source: VideoSourceAnime) -> Tuple[str,bool]:
"""
Downloads the selected episode.
Parameters:
- index_select (int): Index of the episode to download.
- scrape_serie (ScrapeSerieAnime): Scraper object with series information.
- video_source (VideoSourceAnime): Source object used to resolve the episode's mp4 URL.
Return:
- str: output path
- bool: kill handler status
"""
start_message()
# Get episode information
obj_episode = scrape_serie.selectEpisode(1, index_select)
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] ([cyan]E{obj_episode.number}[/cyan]) \n")
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
bot.send_message(f"Download in corso\nAnime: {scrape_serie.series_name}\nEpisodio: {obj_episode.number}", None)
# Get script_id and update it
script_id = TelegramSession.get_session()
if script_id != "unknown":
TelegramSession.updateScriptId(script_id, f"{scrape_serie.series_name} - E{obj_episode.number}")
# Collect mp4 url
video_source.get_embed(obj_episode.id)
# Create output path
mp4_name = f"{scrape_serie.series_name}_EP_{dynamic_format_number(str(obj_episode.number))}.mp4"
if scrape_serie.is_series:
mp4_path = os_manager.get_sanitize_path(os.path.join(site_constant.ANIME_FOLDER, scrape_serie.series_name))
else:
mp4_path = os_manager.get_sanitize_path(os.path.join(site_constant.MOVIE_FOLDER, scrape_serie.series_name))
# Create output folder
os_manager.create_path(mp4_path)
# Start downloading
path, kill_handler = MP4_downloader(
url=str(video_source.src_mp4).strip(),
path=os.path.join(mp4_path, mp4_name)
)
return path, kill_handler
def download_series(select_title: MediaItem, season_selection: str = None, episode_selection: str = None):
"""
Function to download episodes of a TV series.
Parameters:
- select_title (MediaItem): The selected media item
- season_selection (str, optional): Season selection input that bypasses manual input (usually '1' for anime)
- episode_selection (str, optional): Episode selection input that bypasses manual input
"""
start_message()
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
scrape_serie = ScrapeSerieAnime(site_constant.FULL_URL)
video_source = VideoSourceAnime(site_constant.FULL_URL)
# Set up video source (only configure scrape_serie now)
scrape_serie.setup(None, select_title.id, select_title.slug)
# Get episode information
episoded_count = scrape_serie.get_count_episodes()
console.print(f"[green]Episodes count:[/green] [red]{episoded_count}[/red]")
# Telegram bot integration
if episode_selection is None:
if site_constant.TELEGRAM_BOT:
console.print(f"\n[cyan]Insert media [red]index [yellow]or [red]* [cyan]to download all media [yellow]or [red]1-2 [cyan]or [red]3-* [cyan]for a range of media")
bot.send_message(f"Episodi trovati: {episoded_count}", None)
last_command = bot.ask(
"select_title",
"Menu di selezione degli episodi: \n\n"
"- Inserisci il numero dell'episodio (ad esempio, 1)\n"
"- Inserisci * per scaricare tutti gli episodi\n"
"- Inserisci un intervallo di episodi (ad esempio, 1-2) per scaricare da un episodio all'altro\n"
"- Inserisci (ad esempio, 3-*) per scaricare dall'episodio specificato fino alla fine della serie",
None
)
else:
# Prompt user to select an episode index
last_command = msg.ask("\n[cyan]Insert media [red]index [yellow]or [red]* [cyan]to download all media [yellow]or [red]1-2 [cyan]or [red]3-* [cyan]for a range of media")
else:
last_command = episode_selection
console.print(f"\n[cyan]Using provided episode selection: [yellow]{episode_selection}")
# Manage user selection
list_episode_select = manage_selection(last_command, episoded_count)
# Download selected episodes
if len(list_episode_select) == 1 and last_command != "*":
path, _ = download_episode(list_episode_select[0]-1, scrape_serie, video_source)
return path
# Download all other episodes selected
else:
kill_handler = False
for i_episode in list_episode_select:
if kill_handler:
break
_, kill_handler = download_episode(i_episode-1, scrape_serie, video_source)
if site_constant.TELEGRAM_BOT:
bot.send_message(f"Finito di scaricare tutte le serie e episodi", None)
# Get script_id
script_id = TelegramSession.get_session()
if script_id != "unknown":
TelegramSession.deleteScriptId(script_id)
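# Illustrative non-interactive call (hypothetical values): when
# episode_selection is provided, download_series() skips the prompt and feeds
# the string straight into manage_selection(), so an automation layer can run:
#
#   download_series(select_title, season_selection="1", episode_selection="1-3")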

View File

@ -1,6 +1,5 @@
# 10.12.23
import sys
import logging
@ -52,10 +51,8 @@ def get_token() -> dict:
for html_meta in soup.find_all("meta"):
if html_meta.get('name') == "csrf-token":
find_csrf_token = html_meta.get('content')
logging.info(f"Extract: ('animeunity_session': {response.cookies['animeunity_session']}, 'csrf_token': {find_csrf_token})")
return {
'animeunity_session': response.cookies['animeunity_session'],
'csrf_token': find_csrf_token
@ -65,9 +62,6 @@ def get_token() -> dict:
def get_real_title(record):
"""
Get the real title from a record.
This function takes a record, which is assumed to be a dictionary representing a row of JSON data.
It looks for a title in the record, prioritizing English over Italian titles if available.
Parameters:
- record (dict): A dictionary representing a row of JSON data.
@ -85,7 +79,7 @@ def get_real_title(record):
def title_search(query: str) -> int:
"""
Function to perform an anime search using a provided query.
Function to perform an anime search using both APIs, combining the results.
Parameters:
- query (str): The query to search for.
@ -98,63 +92,97 @@ def title_search(query: str) -> int:
media_search_manager.clear()
table_show_manager.clear()
seen_titles = set()
choices = [] if site_constant.TELEGRAM_BOT else None
# Create parameter for request
data = get_token()
cookies = {'animeunity_session': data.get('animeunity_session')}
cookies = {
'animeunity_session': data.get('animeunity_session')
}
headers = {
'user-agent': get_userAgent(),
'x-csrf-token': data.get('csrf_token')
}
json_data = {'title': query}
# Send a POST request to the API endpoint for live search
# First API call - livesearch
try:
response = httpx.post(
f'{site_constant.FULL_URL}/livesearch',
cookies=cookies,
headers=headers,
response1 = httpx.post(
f'{site_constant.FULL_URL}/livesearch',
cookies=cookies,
headers=headers,
json={'title': query},
timeout=max_timeout
)
response1.raise_for_status()
process_results(response1.json()['records'], seen_titles, media_search_manager, choices)
except Exception as e:
console.print(f"[red]Site: {site_constant.SITE_NAME}, request search error: {e}")
return 0
# Second API call - archivio
try:
json_data = {
'title': query,
'type': False,
'year': False,
'order': 'Lista A-Z',
'status': False,
'genres': False,
'offset': 0,
'dubbed': False,
'season': False
}
response2 = httpx.post(
f'{site_constant.FULL_URL}/archivio/get-animes',
cookies=cookies,
headers=headers,
json=json_data,
timeout=max_timeout
)
response.raise_for_status()
response2.raise_for_status()
process_results(response2.json()['records'], seen_titles, media_search_manager, choices)
except Exception as e:
console.print(f"Site: {site_constant.SITE_NAME}, request search error: {e}")
return 0
console.print(f"Site: {site_constant.SITE_NAME}, archivio search error: {e}")
# Initialize the list of choices
if site_constant.TELEGRAM_BOT:
choices = []
if site_constant.TELEGRAM_BOT and choices and len(choices) > 0:
bot.send_message(f"Lista dei risultati:", choices)
result_count = media_search_manager.get_length()
if result_count == 0:
console.print(f"Nothing matching was found for: {query}")
return result_count
for dict_title in response.json()['records']:
def process_results(records: list, seen_titles: set, media_manager: MediaManager, choices: list = None) -> None:
"""Helper function to process search results and add unique entries."""
for dict_title in records:
try:
# Rename keys for consistency
title_id = dict_title.get('id')
if title_id in seen_titles:
continue
seen_titles.add(title_id)
dict_title['name'] = get_real_title(dict_title)
media_search_manager.add_media({
'id': dict_title.get('id'),
media_manager.add_media({
'id': title_id,
'slug': dict_title.get('slug'),
'name': dict_title.get('name'),
'type': dict_title.get('type'),
'status': dict_title.get('status'),
'episodes_count': dict_title.get('episodes_count'),
'plot': ' '.join((words := str(dict_title.get('plot', '')).split())[:10]) + ('...' if len(words) > 10 else '')
'image': dict_title.get('imageurl')
})
if site_constant.TELEGRAM_BOT:
# Build a numbered, formatted string for each choice
if choices is not None:
choice_text = f"{len(choices)} - {dict_title.get('name')} ({dict_title.get('type')}) - Episodi: {dict_title.get('episodes_count')}"
choices.append(choice_text)
except Exception as e:
print(f"Error parsing a film entry: {e}")
if site_constant.TELEGRAM_BOT:
if choices:
bot.send_message(f"Lista dei risultati:", choices)
# Return the length of media search manager
return media_search_manager.get_length()
print(f"Error parsing a title entry: {e}")

View File

@ -29,6 +29,7 @@ class ScrapeSerieAnime:
self.is_series = False
self.headers = {'user-agent': get_userAgent()}
self.url = url
self.episodes_cache = None
def setup(self, version: str = None, media_id: int = None, series_name: str = None):
self.version = version
@ -42,55 +43,81 @@ class ScrapeSerieAnime:
def get_count_episodes(self):
"""
Retrieve total number of episodes for the selected media.
This includes partial episodes (like episode 6.5).
Returns:
int: Total episode count
int: Total episode count including partial episodes
"""
if self.episodes_cache is None:
self._fetch_all_episodes()
if self.episodes_cache:
return len(self.episodes_cache)
return None
def _fetch_all_episodes(self):
"""
Fetch all episode data at once and cache it.
"""
try:
# Get initial episode count
response = httpx.get(
url=f"{self.url}/info_api/{self.media_id}/",
headers=self.headers,
url=f"{self.url}/info_api/{self.media_id}/",
headers=self.headers,
timeout=max_timeout
)
response.raise_for_status()
# Parse JSON response and return episode count
return response.json()["episodes_count"]
initial_count = response.json()["episodes_count"]
all_episodes = []
start_range = 1
# Fetch episodes in chunks
while start_range <= initial_count:
end_range = min(start_range + 119, initial_count)
response = httpx.get(
url=f"{self.url}/info_api/{self.media_id}/1",
params={
"start_range": start_range,
"end_range": end_range
},
headers=self.headers,
timeout=max_timeout
)
response.raise_for_status()
chunk_episodes = response.json().get("episodes", [])
all_episodes.extend(chunk_episodes)
start_range = end_range + 1
self.episodes_cache = all_episodes
except Exception as e:
logging.error(f"Error fetching episode count: {e}")
return None
logging.error(f"Error fetching all episodes: {e}")
self.episodes_cache = None
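# Illustrative only: the chunk bounds produced by the loop above, with its
# 120-episode window (start_range + 119), can be reproduced standalone:
def chunk_ranges_sketch(initial_count: int) -> list:
    ranges, start = [], 1
    while start <= initial_count:
        end = min(start + 119, initial_count)
        ranges.append((start, end))
        start = end + 1
    return ranges
# e.g. chunk_ranges_sketch(250) -> [(1, 120), (121, 240), (241, 250)]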
def get_info_episode(self, index_ep: int) -> Episode:
"""
Fetch detailed information for a specific episode.
Args:
index_ep (int): Zero-based index of the target episode
Returns:
Episode: Detailed episode information
Get episode info from cache
"""
try:
if self.episodes_cache is None:
self._fetch_all_episodes()
if self.episodes_cache and 0 <= index_ep < len(self.episodes_cache):
return Episode(self.episodes_cache[index_ep])
return None
params = {
"start_range": index_ep,
"end_range": index_ep + 1
}
response = httpx.get(
url=f"{self.url}/info_api/{self.media_id}/{index_ep}",
headers=self.headers,
params=params,
timeout=max_timeout
)
response.raise_for_status()
# Return information about the episode
json_data = response.json()["episodes"][-1]
return Episode(json_data)
# ------------- FOR GUI -------------
def getNumberSeason(self) -> int:
"""
Get the total number of seasons available for the anime.
Note: AnimeUnity typically doesn't have seasons, so returns 1.
"""
return 1
except Exception as e:
logging.error(f"Error fetching episode information: {e}")
return None
def selectEpisode(self, season_number: int = 1, episode_index: int = 0) -> Episode:
"""
Get information for a specific episode.
"""
return self.get_info_episode(episode_index)
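# Illustrative call sequence for the cached API (hypothetical ids): the first
# count triggers _fetch_all_episodes(); selectEpisode() then reads the cache
# with a zero-based index, so episode 1 is index 0.
#
#   scrape = ScrapeSerieAnime(site_constant.FULL_URL)
#   scrape.setup(None, 1234, "example-anime")
#   total = scrape.get_count_episodes()   # fetches and caches every episode
#   first = scrape.selectEpisode(1, 0)    # served from the cache, no request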

View File

@ -14,58 +14,71 @@ from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Logic class
from .site import title_search, media_search_manager, table_show_manager
from .serie import download_series
from .film import download_film
# Variable
indice = 8
_useFor = "anime"
_deprecate = False
_priority = 2
indice = 6
_useFor = "Anime"
_priority = 0
_engineDownload = "mp4"
_deprecate = False
msg = Prompt()
console = Console()
def process_search_result(select_title):
def process_search_result(select_title, selections=None):
"""
Handles the search result and initiates the download for either a film or series.
Parameters:
select_title (MediaItem): The selected media item
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if select_title.type == "TV":
download_series(select_title)
episode_selection = None
if selections:
episode_selection = selections.get('episode')
download_series(select_title, episode_selection)
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None):
else:
download_film(select_title)
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None, selections: dict = None):
"""
Main function of the application for search film, series and anime.
Main function of the application for search.
Parameters:
string_to_search (str, optional): String to search for
get_onlyDatabase (bool, optional): If True, return only the database object
direct_item (dict, optional): Direct item to process (bypass search)
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if direct_item:
select_title = MediaItem(**direct_item)
process_search_result(select_title)
process_search_result(select_title, selections)
return
# Get the user input for the search term
string_to_search = msg.ask(f"\n[purple]Insert a word to search in [green]{site_constant.SITE_NAME}").strip()
if string_to_search is None:
string_to_search = msg.ask(f"\n[purple]Insert a word to search in [green]{site_constant.SITE_NAME}").strip()
# Perform the database search
len_database = title_search(string_to_search)
##If only the database is needed, return the manager
# If only the database is needed, return the manager
if get_onlyDatabase:
return media_search_manager
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager)
process_search_result(select_title)
select_title = get_select_title(table_show_manager, media_search_manager, len_database)
process_search_result(select_title, selections)
else:
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")
# If no results are found, ask again
string_to_search = msg.ask(f"\n[purple]Insert a word to search in [green]{site_constant.SITE_NAME}").strip()
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")
search()
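# Illustrative non-interactive invocation (hypothetical item): direct_item
# bypasses the search prompt and selections bypasses the episode prompt.
#
#   search(
#       direct_item={'id': 1234, 'slug': 'example-anime', 'name': 'Example Anime', 'type': 'TV'},
#       selections={'season': '1', 'episode': '1-2'}
#   )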

View File

@ -0,0 +1,63 @@
# 11.03.24
import os
# External library
from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Lib.Downloader import MP4_downloader
# Logic class
from .util.ScrapeSerie import ScrapSerie
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Player
from StreamingCommunity.Api.Player.sweetpixel import VideoSource
# Variable
console = Console()
def download_film(select_title: MediaItem):
"""
Function to download a film.
Parameters:
- select_title (MediaItem): The selected media item.
"""
start_message()
scrape_serie = ScrapSerie(select_title.url, site_constant.FULL_URL)
episodes = scrape_serie.get_episodes()
# Get episode information
episode_data = episodes[0]
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] ([cyan]{scrape_serie.get_name()}[/cyan]) \n")
# Define filename and path for the downloaded video
mp4_name = f"{scrape_serie.get_name()}.mp4"
mp4_path = os.path.join(site_constant.ANIME_FOLDER, scrape_serie.get_name())
# Create output folder
os_manager.create_path(mp4_path)
# Get video source for the episode
video_source = VideoSource(site_constant.FULL_URL, episode_data, scrape_serie.session_id, scrape_serie.csrf_token)
mp4_link = video_source.get_playlist()
# Start downloading
path, kill_handler = MP4_downloader(
url=str(mp4_link).strip(),
path=os.path.join(mp4_path, mp4_name)
)
return path, kill_handler
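# On this site a film is modelled as a single-episode series, so the flow
# above mirrors an episode download with index 0. Hypothetical usage:
#
#   item = MediaItem(name="Example Film", url="https://example.org/play/example-film", type="Movie")
#   path, kill_handler = download_film(item)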

View File

@ -19,12 +19,12 @@ from StreamingCommunity.Lib.Downloader import MP4_downloader
# Logic class
from .util.ScrapeSerie import ScrapSerie
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Util import manage_selection, dynamic_format_number, map_episode_title
from StreamingCommunity.Api.Template.Util import manage_selection, dynamic_format_number
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Player
from StreamingCommunity.Api.Player.sweetpixel import AnimeWorldPlayer
from StreamingCommunity.Api.Player.sweetpixel import VideoSource
# Variable
@ -33,8 +33,7 @@ msg = Prompt()
KILL_HANDLER = bool(False)
def download_episode(index_select: int, scrape_serie: ScrapSerie, episodes) -> Tuple[str,bool]:
def download_episode(index_select: int, scrape_serie: ScrapSerie) -> Tuple[str,bool]:
"""
Downloads the selected episode.
@ -47,7 +46,8 @@ def download_episode(index_select: int, scrape_serie: ScrapSerie, episodes) -> T
"""
start_message()
# Get information about the selected episode
# Get episode information
episode_data = scrape_serie.selectEpisode(1, index_select)
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] ([cyan]E{index_select+1}[/cyan]) \n")
# Define filename and path for the downloaded video
@ -57,9 +57,9 @@ def download_episode(index_select: int, scrape_serie: ScrapSerie, episodes) -> T
# Create output folder
os_manager.create_path(mp4_path)
# Collect mp4 link
video_source = AnimeWorldPlayer(site_constant.FULL_URL, episodes[index_select], scrape_serie.session_id, scrape_serie.csrf_token)
mp4_link = video_source.get_download_link()
# Get video source for the episode
video_source = VideoSource(site_constant.FULL_URL, episode_data, scrape_serie.session_id, scrape_serie.csrf_token)
mp4_link = video_source.get_playlist()
# Start downloading
path, kill_handler = MP4_downloader(
@ -70,38 +70,41 @@ def download_episode(index_select: int, scrape_serie: ScrapSerie, episodes) -> T
return path, kill_handler
def download_series(select_title: MediaItem):
def download_series(select_title: MediaItem, episode_selection: str = None):
"""
Function to download episodes of a TV series.
Parameters:
- tv_id (int): The ID of the TV series.
- tv_name (str): The name of the TV series.
- select_title (MediaItem): The selected media item
- episode_selection (str, optional): Episode selection input that bypasses manual input
"""
start_message()
# Create scrap instance
scrape_serie = ScrapSerie(select_title.url, site_constant.FULL_URL)
episodes = scrape_serie.get_episodes()
# Get the count of episodes for the TV series
episodes = scrape_serie.get_episodes()
episoded_count = len(episodes)
console.print(f"[cyan]Episodes find: [red]{episoded_count}")
# Get episode count
console.print(f"[green]Episodes found:[/green] [red]{len(episodes)}[/red]")
# Prompt user to select an episode index
last_command = msg.ask("\n[cyan]Insert media [red]index [yellow]or [red]* [cyan]to download all media [yellow]or [red]1-2 [cyan]or [red]3-* [cyan]for a range of media")
# Display episodes list and get user selection
if episode_selection is None:
last_command = msg.ask("\n[cyan]Insert media [red]index [yellow]or [red]* [cyan]to download all media [yellow]or [red]1-2 [cyan]or [red]3-* [cyan]for a range of media")
else:
last_command = episode_selection
console.print(f"\n[cyan]Using provided episode selection: [yellow]{episode_selection}")
# Manage user selection
list_episode_select = manage_selection(last_command, episoded_count)
list_episode_select = manage_selection(last_command, len(episodes))
# Download selected episodes
if len(list_episode_select) == 1 and last_command != "*":
path, _ = download_episode(list_episode_select[0]-1, scrape_serie, episodes)
path, _ = download_episode(list_episode_select[0]-1, scrape_serie)
return path
# Download all other episodes selecter
# Download all selected episodes
else:
kill_handler = False
for i_episode in list_episode_select:
if kill_handler:
break
_, kill_handler = download_episode(i_episode-1, scrape_serie, episodes)
_, kill_handler = download_episode(i_episode-1, scrape_serie)

View File

@ -31,7 +31,11 @@ def get_session_and_csrf() -> dict:
Get the session ID and CSRF token from the website's cookies and HTML meta data.
"""
# Send an initial GET request to the website
response = httpx.get(site_constant.FULL_URL, headers=get_headers())
response = httpx.get(
site_constant.FULL_URL,
headers=get_headers(),
verify=False
)
# Extract the sessionId from the cookies
session_id = response.cookies.get('sessionId')
@ -70,10 +74,15 @@ def title_search(query: str) -> int:
# Make the GET request
try:
response = httpx.get(search_url, headers={'User-Agent': get_userAgent()})
response = httpx.get(
search_url,
headers={'User-Agent': get_userAgent()},
timeout=max_timeout,
verify=False
)
except Exception as e:
console.print(f"Site: {site_constant.SITE_NAME}, request search error: {e}")
console.print(f"[red]Site: {site_constant.SITE_NAME}, request search error: {e}")
return 0
# Create soup instance
@ -101,11 +110,12 @@ def title_search(query: str) -> int:
'name': title,
'type': anime_type,
'DUB': is_dubbed,
'url': url
'url': url,
'image': element.find('img').get('src')
})
except Exception as e:
print(f"Error parsing a film entry: {e}")
# Return the length of media search manager
return media_search_manager.get_length()
return media_search_manager.get_length()

View File

@ -1,5 +1,6 @@
# 21.03.25
import logging
# External libraries
import httpx
@ -14,7 +15,7 @@ from StreamingCommunity.Util.os import os_manager
# Player
from ..site import get_session_and_csrf
from StreamingCommunity.Api.Player.sweetpixel import AnimeWorldPlayer
from StreamingCommunity.Api.Player.sweetpixel import VideoSource
# Variable
@ -30,7 +31,8 @@ class ScrapSerie:
self.client = httpx.Client(
cookies={"sessionId": self.session_id},
headers={"User-Agent": get_userAgent(), "csrf-token": self.csrf_token},
base_url=full_url
base_url=full_url,
verify=False
)
try:
@ -40,7 +42,6 @@ class ScrapSerie:
except:
raise Exception(f"Failed to retrieve anime page.")
def get_name(self):
"""Extract and return the name of the anime series."""
soup = BeautifulSoup(self.response.content, "html.parser")
@ -68,12 +69,39 @@ class ScrapSerie:
return episodes
def get_episode(self, index):
"""Fetch a specific episode based on the index, and return an AnimeWorldPlayer instance."""
"""Fetch a specific episode based on the index, and return an VideoSource instance."""
episodes = self.get_episodes()
if 0 <= index < len(episodes):
episode_data = episodes[index]
return AnimeWorldPlayer(episode_data, self.session_id, self.csrf_token)
return VideoSource(episode_data, self.session_id, self.csrf_token)
else:
raise IndexError("Episode index out of range")
raise IndexError("Episode index out of range")
# ------------- FOR GUI -------------
def getNumberSeason(self) -> int:
"""
Get the total number of seasons available for the anime.
Note: AnimeWorld typically doesn't have seasons, so returns 1.
"""
return 1
def getEpisodeSeasons(self, season_number: int = 1) -> list:
"""
Get all episodes for a specific season.
Note: For AnimeWorld, this returns all episodes as they're typically in one season.
"""
return self.get_episodes()
def selectEpisode(self, season_number: int = 1, episode_index: int = 0) -> dict:
"""
Get information for a specific episode.
"""
episodes = self.get_episodes()
if not episodes or episode_index < 0 or episode_index >= len(episodes):
logging.error(f"Episode index {episode_index} is out of range")
return None
return episodes[episode_index]

View File

@ -20,11 +20,11 @@ from .film import download_film
# Variable
indice = 4
_useFor = "film"
_deprecate = False
_priority = 2
indice = -1
_useFor = "Film"
_priority = 0
_engineDownload = "mp4"
_deprecate = True
msg = Prompt()
console = Console()
@ -39,7 +39,7 @@ def process_search_result(select_title):
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None):
"""
Main function of the application for search film, series and anime.
Main function of the application for search.
Parameters:
string_to_search (str, optional): String to search for
@ -62,7 +62,7 @@ def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_
return media_search_manager
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager)
select_title = get_select_title(table_show_manager, media_search_manager, len_database)
process_search_result(select_title)
else:

View File

@ -1,8 +1,5 @@
# 03.07.24
import sys
# External libraries
import httpx
from bs4 import BeautifulSoup
@ -44,7 +41,13 @@ def title_search(query: str) -> int:
console.print(f"[cyan]Search url: [yellow]{search_url}")
try:
response = httpx.get(url=search_url, headers={'user-agent': get_userAgent()}, timeout=max_timeout, follow_redirects=True)
response = httpx.get(
search_url,
headers={'user-agent': get_userAgent()},
timeout=max_timeout,
follow_redirects=True,
verify=False
)
response.raise_for_status()
except Exception as e:

View File

@ -1,75 +0,0 @@
# 09.06.24
import logging
from urllib.parse import quote_plus
# External library
from rich.console import Console
from rich.prompt import Prompt
# Internal utilities
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Logic class
from .site import title_search, media_search_manager, table_show_manager
from .series import download_thread
# Variable
indice = 6
_useFor = "serie"
_deprecate = False
_priority = 2
_engineDownload = "mp4"
msg = Prompt()
console = Console()
def process_search_result(select_title):
"""
Handles the search result and initiates the download for either a film or series.
"""
if "Serie TV" in str(select_title.type):
download_thread(select_title)
else:
logging.error(f"Not supported: {select_title.type}")
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None):
"""
Main function of the application for search film, series and anime.
Parameters:
string_to_search (str, optional): String to search for
get_onlyDatabase (bool, optional): If True, return only the database object
direct_item (dict, optional): Direct item to process (bypass search)
"""
if direct_item:
select_title = MediaItem(**direct_item)
process_search_result(select_title)
return
if string_to_search is None:
string_to_search = msg.ask(f"\n[purple]Insert word to search in [green]{site_constant.SITE_NAME}").strip()
# Search on database
len_database = title_search(quote_plus(string_to_search))
# If only the database is needed, return the manager
if get_onlyDatabase:
return media_search_manager
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager)
process_search_result(select_title)
else:
# If no results are found, ask again
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")
search()

View File

@ -1,119 +0,0 @@
# 13.06.24
import os
from urllib.parse import urlparse
from typing import Tuple
# External library
from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Lib.Downloader import MP4_downloader
# Logic class
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
from StreamingCommunity.Api.Template.Util import (
manage_selection,
map_episode_title,
validate_episode_selection,
display_episodes_list
)
from StreamingCommunity.Api.Template.config_loader import site_constant
# Player
from .util.ScrapeSerie import GetSerieInfo
from StreamingCommunity.Api.Player.ddl import VideoSource
# Variable
console = Console()
def download_video(index_episode_selected: int, scape_info_serie: GetSerieInfo, video_source: VideoSource) -> Tuple[str,bool]:
"""
Download a single episode video.
Parameters:
- tv_name (str): Name of the TV series.
- index_episode_selected (int): Index of the selected episode.
Return:
- str: output path
- bool: kill handler status
"""
start_message()
# Get info about episode
obj_episode = scape_info_serie.list_episodes[index_episode_selected - 1]
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [bold magenta]{obj_episode.get('name')}[/bold magenta] ([cyan]E{index_episode_selected}[/cyan]) \n")
# Define filename and path for the downloaded video
title_name = os_manager.get_sanitize_file(
f"{map_episode_title(scape_info_serie.tv_name, None, index_episode_selected, obj_episode.get('name'))}.mp4"
)
mp4_path = os.path.join(site_constant.SERIES_FOLDER, scape_info_serie.tv_name)
# Create output folder
os_manager.create_path(mp4_path)
# Setup video source
video_source.setup(obj_episode.get('url'))
# Get m3u8 master playlist
master_playlist = video_source.get_playlist()
# Parse start page url
parsed_url = urlparse(obj_episode.get('url'))
# Start download
r_proc = MP4_downloader(
url=master_playlist,
path=os.path.join(mp4_path, title_name),
referer=f"{parsed_url.scheme}://{parsed_url.netloc}/",
)
if r_proc != None:
console.print("[green]Result: ")
console.print(r_proc)
return os.path.join(mp4_path, title_name)
def download_thread(dict_serie: MediaItem):
"""
Download all episodes of a thread
"""
start_message()
# Init class
scape_info_serie = GetSerieInfo(dict_serie, site_constant.COOKIE)
video_source = VideoSource(site_constant.COOKIE)
# Collect information about thread
list_dict_episode = scape_info_serie.get_episode_number()
episodes_count = len(list_dict_episode)
# Display episodes list and manage user selection
last_command = display_episodes_list(scape_info_serie.list_episodes)
list_episode_select = manage_selection(last_command, episodes_count)
try:
list_episode_select = validate_episode_selection(list_episode_select, episodes_count)
except ValueError as e:
console.print(f"[red]{str(e)}")
return
# Download selected episodes
kill_handler = bool(False)
for i_episode in list_episode_select:
if kill_handler:
break
kill_handler = download_video(i_episode, scape_info_serie, video_source)[1]

View File

@ -1,82 +0,0 @@
# 09.06.24
import sys
import logging
# External libraries
import httpx
from bs4 import BeautifulSoup
from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.headers import get_userAgent
from StreamingCommunity.Util.table import TVShowManager
# Logic class
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager
# Variable
console = Console()
media_search_manager = MediaManager()
table_show_manager = TVShowManager()
max_timeout = config_manager.get_int("REQUESTS", "timeout")
def title_search(query: str) -> int:
"""
Search for titles based on a search query.
Parameters:
- query (str): The query to search for.
Returns:
- int: The number of titles found.
"""
media_search_manager.clear()
table_show_manager.clear()
search_url = f"{site_constant.FULL_URL}/search/?&q={query}&quick=1&type=videobox_video&nodes=11"
console.print(f"[cyan]Search url: [yellow]{search_url}")
try:
response = httpx.get(search_url, headers={'user-agent': get_userAgent()}, timeout=max_timeout, follow_redirects=True)
response.raise_for_status()
except Exception as e:
console.print(f"Site: {site_constant.SITE_NAME}, request search error: {e}")
return 0
# Create soup and find table
soup = BeautifulSoup(response.text, "html.parser")
table_content = soup.find('ol', class_="ipsStream")
if table_content:
for title_div in table_content.find_all('li', class_='ipsStreamItem'):
try:
title_type = title_div.find("p", class_="ipsType_reset").find_all("a")[-1].get_text(strip=True)
name = title_div.find("span", class_="ipsContained").find("a").get_text(strip=True)
link = title_div.find("span", class_="ipsContained").find("a").get("href")
title_info = {
'name': name,
'url': link,
'type': title_type
}
media_search_manager.add_media(title_info)
except Exception as e:
print(f"Error parsing a film entry: {e}")
return media_search_manager.get_length()
else:
logging.error("No table content found.")
return -999

View File

@ -1,84 +0,0 @@
# 13.06.24
import sys
import logging
from typing import List, Dict
# External libraries
import httpx
from bs4 import BeautifulSoup
# Internal utilities
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.headers import get_userAgent
# Logic class
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Variable
max_timeout = config_manager.get_int("REQUESTS", "timeout")
class GetSerieInfo:
def __init__(self, dict_serie: MediaItem, cookies) -> None:
"""
Initializes the GetSerieInfo object with default values.
Parameters:
- dict_serie (MediaItem): Dictionary containing series information (optional).
"""
self.headers = {'user-agent': get_userAgent()}
self.cookies = cookies
self.url = dict_serie.url
self.tv_name = None
self.list_episodes = None
def get_episode_number(self) -> List[Dict[str, str]]:
"""
Retrieves the number of episodes for a specific season.
Parameters:
n_season (int): The season number.
Returns:
List[Dict[str, str]]: List of dictionaries containing episode information.
"""
try:
response = httpx.get(f"{self.url}?area=online", cookies=self.cookies, headers=self.headers, timeout=max_timeout)
response.raise_for_status()
except Exception as e:
logging.error(f"Insert value for [ips4_device_key, ips4_member_id, ips4_login_key] in config.json file SITE \\ ddlstreamitaly \\ cookie. Use browser debug and cookie request with a valid account, filter by DOC. Error: {e}")
sys.exit(0)
# Parse HTML content of the page
soup = BeautifulSoup(response.text, "html.parser")
# Get tv name
self.tv_name = soup.find("span", class_= "ipsType_break").get_text(strip=True)
# Find the container of episodes for the specified season
table_content = soup.find('div', class_='ipsMargin_bottom:half')
list_dict_episode = []
for episode_div in table_content.find_all('a', href=True):
# Get text of episode
part_name = episode_div.get_text(strip=True)
if part_name:
obj_episode = {
'name': part_name,
'url': episode_div['href']
}
list_dict_episode.append(obj_episode)
self.list_episodes = list_dict_episode
return list_dict_episode

View File

@ -20,34 +20,48 @@ from .series import download_series
# Variable
indice = 5
_useFor = "serie"
_deprecate = False
_priority = 2
indice = 4
_useFor = "Serie"
_priority = 0
_engineDownload = "hls"
_deprecate = False
msg = Prompt()
console = Console()
def process_search_result(select_title):
def process_search_result(select_title, selections=None):
"""
Handles the search result and initiates the download for either a film or series.
Parameters:
select_title (MediaItem): The selected media item
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
download_series(select_title)
season_selection = None
episode_selection = None
if selections:
season_selection = selections.get('season')
episode_selection = selections.get('episode')
download_series(select_title, season_selection, episode_selection)
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None):
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None, selections: dict = None):
"""
Main function of the application for search film, series and anime.
Main function of the application for search.
Parameters:
string_to_search (str, optional): String to search for
get_onylDatabase (bool, optional): If True, return only the database object
get_onlyDatabase (bool, optional): If True, return only the database object
direct_item (dict, optional): Direct item to process (bypass search)
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if direct_item:
select_title = MediaItem(**direct_item)
process_search_result(select_title)
process_search_result(select_title, selections)
return
if string_to_search is None:
@ -61,8 +75,8 @@ def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_
return media_search_manager
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager)
process_search_result(select_title)
select_title = get_select_title(table_show_manager, media_search_manager, len_database)
process_search_result(select_title, selections)
else:

View File

@ -1,6 +1,7 @@
# 13.06.24
import os
import logging
from typing import Tuple
@ -39,22 +40,22 @@ console = Console()
def download_video(index_season_selected: int, index_episode_selected: int, scape_info_serie: GetSerieInfo) -> Tuple[str,bool]:
"""
Download a single episode video.
Downloads a specific episode from a specified season.
Parameters:
- tv_name (str): Name of the TV series.
- index_season_selected (int): Index of the selected season.
- index_episode_selected (int): Index of the selected episode.
- index_season_selected (int): Season number
- index_episode_selected (int): Episode index
- scape_info_serie (GetSerieInfo): Scraper object with series information
Return:
- str: output path
- bool: kill handler status
Returns:
- str: Path to downloaded file
- bool: Whether download was stopped
"""
start_message()
index_season_selected = dynamic_format_number(str(index_season_selected))
# Get info about episode
obj_episode = scape_info_serie.list_episodes[index_episode_selected - 1]
# Get episode information
obj_episode = scape_info_serie.selectEpisode(index_season_selected, index_episode_selected-1)
index_season_selected = dynamic_format_number(str(index_season_selected))
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [bold magenta]{obj_episode.get('name')}[/bold magenta] ([cyan]S{index_season_selected}E{index_episode_selected}[/cyan]) \n")
# Define filename and path for the downloaded video
@ -80,24 +81,23 @@ def download_video(index_season_selected: int, index_episode_selected: int, scap
return r_proc['path'], r_proc['stopped']
def download_episode(scape_info_serie: GetSerieInfo, index_season_selected: int, download_all: bool = False) -> None:
def download_episode(scape_info_serie: GetSerieInfo, index_season_selected: int, download_all: bool = False, episode_selection: str = None) -> None:
"""
Download all episodes of a season.
Handle downloading episodes for a specific season.
Parameters:
- tv_name (str): Name of the TV series.
- index_season_selected (int): Index of the selected season.
- download_all (bool): Download all seasons episodes
- scape_info_serie (GetSerieInfo): Scraper object with series information
- index_season_selected (int): Season number
- download_all (bool): Whether to download all episodes
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
"""
# Start message and collect information about episodes
start_message()
list_dict_episode = scape_info_serie.get_episode_number(index_season_selected)
episodes_count = len(list_dict_episode)
# Get episodes for the selected season
episodes = scape_info_serie.get_episode_number(index_season_selected)
episodes_count = len(episodes)
if download_all:
# Download all episodes without asking
# Download all episodes in the season
for i_episode in range(1, episodes_count + 1):
path, stopped = download_video(index_season_selected, i_episode, scape_info_serie)
@ -109,14 +109,15 @@ def download_episode(scape_info_serie: GetSerieInfo, index_season_selected: int,
else:
# Display episodes list and manage user selection
last_command = display_episodes_list(scape_info_serie.list_episodes)
if episode_selection is None:
last_command = display_episodes_list(scape_info_serie.list_episodes)
else:
last_command = episode_selection
console.print(f"\n[cyan]Using provided episode selection: [yellow]{episode_selection}")
# Validate the selection
list_episode_select = manage_selection(last_command, episodes_count)
try:
list_episode_select = validate_episode_selection(list_episode_select, episodes_count)
except ValueError as e:
console.print(f"[red]{str(e)}")
return
list_episode_select = validate_episode_selection(list_episode_select, episodes_count)
# Download selected episodes
for i_episode in list_episode_select:
@ -126,46 +127,47 @@ def download_episode(scape_info_serie: GetSerieInfo, index_season_selected: int,
break
def download_series(dict_serie: MediaItem) -> None:
def download_series(dict_serie: MediaItem, season_selection: str = None, episode_selection: str = None) -> None:
"""
Download all episodes of a TV series.
Handle downloading a complete series.
Parameters:
- dict_serie (MediaItem): obj with url name type and score
- dict_serie (MediaItem): Series metadata from search
- season_selection (str, optional): Pre-defined season selection that bypasses manual input
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
"""
# Start message and set up video source
start_message()
# Init class
scape_info_serie = GetSerieInfo(dict_serie)
# Collect information about seasons
seasons_count = scape_info_serie.get_seasons_number()
# Create class
scrape_serie = GetSerieInfo(dict_serie)
# Get season count
seasons_count = scrape_serie.get_seasons_number()
# Prompt user for season selection and download episodes
console.print(f"\n[green]Seasons found: [red]{seasons_count}")
index_season_selected = msg.ask(
"\n[cyan]Insert season number [yellow](e.g., 1), [red]* [cyan]to download all seasons, "
"[yellow](e.g., 1-2) [cyan]for a range of seasons, or [yellow](e.g., 3-*) [cyan]to download from a specific season to the end"
)
# Manage and validate the selection
list_season_select = manage_selection(index_season_selected, seasons_count)
try:
list_season_select = validate_selection(list_season_select, seasons_count)
except ValueError as e:
console.print(f"[red]{str(e)}")
return
# If season_selection is provided, use it instead of asking for input
if season_selection is None:
index_season_selected = msg.ask(
"\n[cyan]Insert season number [yellow](e.g., 1), [red]* [cyan]to download all seasons, "
"[yellow](e.g., 1-2) [cyan]for a range of seasons, or [yellow](e.g., 3-*) [cyan]to download from a specific season to the end"
)
else:
index_season_selected = season_selection
console.print(f"\n[cyan]Using provided season selection: [yellow]{season_selection}")
# Validate the selection
list_season_select = manage_selection(index_season_selected, seasons_count)
list_season_select = validate_selection(list_season_select, seasons_count)
# Loop through the selected seasons and download episodes
for i_season in list_season_select:
if len(list_season_select) > 1 or index_season_selected == "*":
# Download all episodes if multiple seasons are selected or if '*' is used
download_episode(scape_info_serie, i_season, download_all=True)
download_episode(scrape_serie, i_season, download_all=True)
else:
# Otherwise, let the user select specific episodes for the single season
download_episode(scape_info_serie, i_season, download_all=False)
download_episode(scrape_serie, i_season, download_all=False, episode_selection=episode_selection)

View File

@ -44,11 +44,17 @@ def title_search(query: str) -> int:
console.print(f"[cyan]Search url: [yellow]{search_url}")
try:
response = httpx.get(search_url, headers={'user-agent': get_userAgent()}, timeout=max_timeout, follow_redirects=True)
response = httpx.get(
search_url,
headers={'user-agent': get_userAgent()},
timeout=max_timeout,
follow_redirects=True,
verify=False
)
response.raise_for_status()
except Exception as e:
console.print(f"Site: {site_constant.SITE_NAME}, request search error: {e}")
console.print(f"[red]Site: {site_constant.SITE_NAME}, request search error: {e}")
return 0
# Create soup and find table
@ -62,9 +68,10 @@ def title_search(query: str) -> int:
link = serie_div.find('a').get("href")
serie_info = {
'name': title,
'name': title.replace("streaming guardaserie", ""),
'url': link,
'type': 'tv'
'type': 'tv',
'image': f"{site_constant.FULL_URL}/{serie_div.find('img').get('src')}",
}
media_search_manager.add_media(serie_info)

View File

@ -104,4 +104,30 @@ class GetSerieInfo:
except Exception as e:
logging.error(f"Error parsing HTML page: {e}")
return []
return []
# ------------- FOR GUI -------------
def getNumberSeason(self) -> int:
"""
Get the total number of seasons available for the series.
"""
return self.get_seasons_number()
def getEpisodeSeasons(self, season_number: int) -> list:
"""
Get all episodes for a specific season.
"""
episodes = self.get_episode_number(season_number)
return episodes
def selectEpisode(self, season_number: int, episode_index: int) -> dict:
"""
Get information for a specific episode in a specific season.
"""
episodes = self.getEpisodeSeasons(season_number)
if not episodes or episode_index < 0 or episode_index >= len(episodes):
logging.error(f"Episode index {episode_index} is out of range for season {season_number}")
return None
return episodes[episode_index]
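# Illustrative GUI-oriented traversal with the helpers above: seasons are
# 1-based while selectEpisode() takes a zero-based episode index.
#
#   info = GetSerieInfo(dict_serie)
#   for season in range(1, info.getNumberSeason() + 1):
#       for idx, _ in enumerate(info.getEpisodeSeasons(season)):
#           episode = info.selectEpisode(season, idx)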

View File

@ -1,73 +0,0 @@
# 26.05.24
from urllib.parse import quote_plus
# External library
from rich.console import Console
from rich.prompt import Prompt
# Internal utilities
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
from StreamingCommunity.Lib.TMBD import tmdb, Json_film
# Logic class
from .film import download_film
# Variable
indice = 7
_useFor = "film"
_deprecate = False
_priority = 2
_engineDownload = "hls"
msg = Prompt()
console = Console()
def process_search_result(select_title):
"""
Handles the search result and initiates the download for either a film or series.
"""
download_film(select_title)
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None):
"""
Main function of the application for search film, series and anime.
Parameters:
string_to_search (str, optional): String to search for
get_onlyDatabase (bool, optional): If True, return only the database object
direct_item (dict, optional): Direct item to process (bypass search)
"""
if direct_item:
select_title = MediaItem(**direct_item)
process_search_result(select_title)
return
if string_to_search is None:
string_to_search = msg.ask(f"\n[purple]Insert word to search in [green]{site_constant.SITE_NAME}").strip()
# Not available for the moment
if get_onlyDatabase:
return 0
# Search on database
movie_id = tmdb.search_movie(quote_plus(string_to_search))
if movie_id is not None:
movie_details: Json_film = tmdb.get_movie_details(tmdb_id=movie_id)
# Download only film
download_film(movie_details)
else:
# If no results are found, ask again
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")
search()

View File

@ -1,93 +0,0 @@
# 17.09.24
import os
import logging
# External libraries
import httpx
from bs4 import BeautifulSoup
from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.os import os_manager, get_call_stack
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.headers import get_userAgent
from StreamingCommunity.Util.table import TVShowManager
from StreamingCommunity.Lib.Downloader import HLS_Downloader
# Player
from StreamingCommunity.Api.Player.supervideo import VideoSource
# Logic class
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Lib.TMBD import Json_film
# Variable
console = Console()
def download_film(movie_details: Json_film) -> str:
"""
Downloads a film using the provided tmbd id.
Parameters:
- movie_details (Json_film): Class with info about film title.
Return:
- str: output path
"""
# Start message and display film information
start_message()
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [cyan]{movie_details.title}[/cyan] \n")
# Make request to main site
try:
url = f"{site_constant.FULL_URL}/set-movie-a/{movie_details.imdb_id}"
response = httpx.get(url, headers={'User-Agent': get_userAgent()})
response.raise_for_status()
except:
logging.error(f"Not found in the server. Dict: {movie_details}")
raise
if "not found" in str(response.text):
logging.error(f"Cant find in the server: {movie_details.title}.")
research_func = next((
f for f in get_call_stack()
if f['function'] == 'search' and f['script'] == '__init__.py'
), None)
TVShowManager.run_back_command(research_func)
# Extract supervideo url
soup = BeautifulSoup(response.text, "html.parser")
player_links = soup.find("ul", class_ = "_player-mirrors").find_all("li")
supervideo_url = "https:" + player_links[0].get("data-link")
# Set domain and media ID for the video source
video_source = VideoSource(url=supervideo_url)
# Define output path
title_name = os_manager.get_sanitize_file(movie_details.title) + ".mp4"
mp4_path = os.path.join(site_constant.MOVIE_FOLDER, title_name.replace(".mp4", ""))
# Get m3u8 master playlist
master_playlist = video_source.get_playlist()
# Download the film using the m3u8 playlist, and output filename
r_proc = HLS_Downloader(
m3u8_url=master_playlist,
output_path=os.path.join(mp4_path, title_name)
).start()
if r_proc['error'] is not None:
try: os.remove(r_proc['path'])
except: pass
return r_proc['path']

View File

@ -0,0 +1,93 @@
# 21.05.24
# External library
from rich.console import Console
from rich.prompt import Prompt
# Internal utilities
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Logic class
from .site import title_search, table_show_manager, media_search_manager
from .series import download_series
from .film import download_film
# Variable
indice = 5
_useFor = "Film_&_Serie"
_priority = 0
_engineDownload = "hls"
_deprecate = False
msg = Prompt()
console = Console()
def get_user_input(string_to_search: str = None):
"""
Asks the user to input a search term.
"""
return msg.ask(f"\n[purple]Insert a word to search in [green]{site_constant.SITE_NAME}").strip()
def process_search_result(select_title, selections=None):
"""
Handles the search result and initiates the download for either a film or series.
Parameters:
select_title (MediaItem): The selected media item
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if select_title.type == 'tv':
season_selection = None
episode_selection = None
if selections:
season_selection = selections.get('season')
episode_selection = selections.get('episode')
download_series(select_title, season_selection, episode_selection)
else:
download_film(select_title)
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None, selections: dict = None):
"""
Main function of the application for search.
Parameters:
string_to_search (str, optional): String to search for
get_onlyDatabase (bool, optional): If True, return only the database object
direct_item (dict, optional): Direct item to process (bypass search)
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if direct_item:
select_title = MediaItem(**direct_item)
process_search_result(select_title, selections)
return
if string_to_search is None:
string_to_search = msg.ask(f"\n[purple]Insert a word to search in [green]{site_constant.SITE_NAME}").strip()
# Search on database
len_database = title_search(string_to_search)
# If only the database is needed, return the manager
if get_onlyDatabase:
return media_search_manager
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager, len_database)
process_search_result(select_title, selections)
else:
# If no results are found, ask again
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")
search()

View File

@ -0,0 +1,65 @@
# 21.05.24
import os
from typing import Tuple
# External library
import httpx
from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Lib.Downloader import HLS_Downloader
from StreamingCommunity.Util.headers import get_headers
# Logic class
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Player
from StreamingCommunity.Api.Player.mediapolisvod import VideoSource
# Variable
console = Console()
def download_film(select_title: MediaItem) -> Tuple[str, bool]:
"""
Downloads a film using the provided MediaItem information.
Parameters:
- select_title (MediaItem): The media item containing film information
Return:
- str: Path to downloaded file
- bool: Whether download was stopped
"""
start_message()
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [cyan]{select_title.name}[/cyan] \n")
# Extract m3u8 URL from the film's URL
response = httpx.get(select_title.url + ".json", headers=get_headers(), timeout=10)
first_item_path = "https://www.raiplay.it" + response.json().get("first_item_path")
master_playlist = VideoSource.extract_m3u8_url(first_item_path)
# Define the filename and path for the downloaded film
title_name = os_manager.get_sanitize_file(select_title.name) + ".mp4"
mp4_path = os.path.join(site_constant.MOVIE_FOLDER, title_name.replace(".mp4", ""))
# Download the film using the m3u8 playlist, and output filename
r_proc = HLS_Downloader(
m3u8_url=master_playlist,
output_path=os.path.join(mp4_path, title_name)
).start()
if r_proc['error'] is not None:
try: os.remove(r_proc['path'])
except: pass
return r_proc['path'], r_proc['stopped']
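# The result contract used by these modules: HLS_Downloader(...).start()
# returns a dict with 'path', 'error' and 'stopped'; callers remove the
# partial file on error and propagate the stop flag, e.g.:
#
#   r_proc = HLS_Downloader(m3u8_url=master_playlist, output_path=out_path).start()
#   if r_proc['error'] is not None:
#       os.remove(r_proc['path'])
#   return r_proc['path'], r_proc['stopped']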

View File

@ -0,0 +1,162 @@
# 21.05.24
import os
from typing import Tuple
# External library
from rich.console import Console
from rich.prompt import Prompt
# Internal utilities
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Lib.Downloader import HLS_Downloader
# Logic class
from .util.ScrapeSerie import GetSerieInfo
from StreamingCommunity.Api.Template.Util import (
manage_selection,
map_episode_title,
validate_selection,
validate_episode_selection,
display_episodes_list
)
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Player
from StreamingCommunity.Api.Player.mediapolisvod import VideoSource
# Variable
msg = Prompt()
console = Console()
def download_video(index_season_selected: int, index_episode_selected: int, scrape_serie: GetSerieInfo) -> Tuple[str,bool]:
"""
Downloads a specific episode from the specified season.
Parameters:
- index_season_selected (int): Season number
- index_episode_selected (int): Episode index
- scrape_serie (GetSerieInfo): Scraper object with series information
Returns:
- str: Path to downloaded file
- bool: Whether download was stopped
"""
start_message()
# Get episode information
obj_episode = scrape_serie.selectEpisode(index_season_selected, index_episode_selected-1)
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [bold magenta]{obj_episode.name}[/bold magenta] ([cyan]S{index_season_selected}E{index_episode_selected}[/cyan]) \n")
# Get streaming URL
master_playlist = VideoSource.extract_m3u8_url(obj_episode.url)
# Define filename and path
mp4_name = f"{map_episode_title(scrape_serie.series_name, index_season_selected, index_episode_selected, obj_episode.name)}.mp4"
mp4_path = os.path.join(site_constant.SERIES_FOLDER, scrape_serie.series_name, f"S{index_season_selected}")
# Download the episode
r_proc = HLS_Downloader(
m3u8_url=master_playlist,
output_path=os.path.join(mp4_path, mp4_name)
).start()
if r_proc['error'] is not None:
try: os.remove(r_proc['path'])
except: pass
return r_proc['path'], r_proc['stopped']
def download_episode(index_season_selected: int, scrape_serie: GetSerieInfo, download_all: bool = False, episode_selection: str = None) -> None:
"""
Handle downloading episodes for a specific season.
Parameters:
- index_season_selected (int): Season number
- scrape_serie (GetSerieInfo): Scraper object with series information
- download_all (bool): Whether to download all episodes
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
"""
# Get episodes for the selected season
episodes = scrape_serie.getEpisodeSeasons(index_season_selected)
episodes_count = len(episodes)
if download_all:
for i_episode in range(1, episodes_count + 1):
path, stopped = download_video(index_season_selected, i_episode, scrape_serie)
if stopped:
break
console.print(f"\n[red]End downloaded [yellow]season: [red]{index_season_selected}.")
else:
# Display episodes list and manage user selection
if episode_selection is None:
last_command = display_episodes_list(episodes)
else:
last_command = episode_selection
console.print(f"\n[cyan]Using provided episode selection: [yellow]{episode_selection}")
# Validate the selection
list_episode_select = manage_selection(last_command, episodes_count)
list_episode_select = validate_episode_selection(list_episode_select, episodes_count)
# Download selected episodes if not stopped
for i_episode in list_episode_select:
path, stopped = download_video(index_season_selected, i_episode, scrape_serie)
if stopped:
break
def download_series(select_season: MediaItem, season_selection: str = None, episode_selection: str = None) -> None:
"""
Handle downloading a complete series.
Parameters:
- select_season (MediaItem): Series metadata from search
- season_selection (str, optional): Pre-defined season selection that bypasses manual input
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
"""
start_message()
# Extract program name from path_id
program_name = None
if select_season.path_id:
parts = select_season.path_id.strip('/').split('/')
if len(parts) >= 2:
program_name = parts[-1].split('.')[0]
# Init scraper
scrape_serie = GetSerieInfo(program_name)
# Get seasons info
scrape_serie.collect_info_title()
seasons_count = len(scrape_serie.seasons_manager)
console.print(f"\n[green]Seasons found: [red]{seasons_count}")
# If season_selection is provided, use it instead of asking for input
if season_selection is None:
index_season_selected = msg.ask(
"\n[cyan]Insert season number [yellow](e.g., 1), [red]* [cyan]to download all seasons, "
"[yellow](e.g., 1-2) [cyan]for a range of seasons, or [yellow](e.g., 3-*) [cyan]to download from a specific season to the end"
)
else:
index_season_selected = season_selection
console.print(f"\n[cyan]Using provided season selection: [yellow]{season_selection}")
# Validate the selection
list_season_select = manage_selection(index_season_selected, seasons_count)
list_season_select = validate_selection(list_season_select, seasons_count)
# Loop through the selected seasons and download episodes
for season_number in list_season_select:
if len(list_season_select) > 1 or index_season_selected == "*":
download_episode(season_number, scrape_serie, download_all=True)
else:
download_episode(season_number, scrape_serie, download_all=False, episode_selection=episode_selection)
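The season prompt above accepts tokens like "1", "*", "1-2" and "3-*", which manage_selection expands into a list of season numbers. A rough sketch of that expansion under the stated grammar (the real helper in Api.Template.Util may behave differently at the edges):

    def expand_selection(token: str, count: int) -> list:
        # "*" selects everything; "a-b" a range; "a-*" from a to the end; "n" one item
        if token == "*":
            return list(range(1, count + 1))
        if "-" in token:
            start, end = token.split("-", 1)
            stop = count if end == "*" else int(end)
            return list(range(int(start), stop + 1))
        return [int(token)]

    assert expand_selection("3-*", 5) == [3, 4, 5]
    assert expand_selection("1-2", 5) == [1, 2]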

View File

@ -0,0 +1,105 @@
# 21.05.24
# External libraries
import httpx
from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.headers import get_userAgent
from StreamingCommunity.Util.table import TVShowManager
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager
from .util.ScrapeSerie import GetSerieInfo
# Variable
console = Console()
media_search_manager = MediaManager()
table_show_manager = TVShowManager()
max_timeout = config_manager.get_int("REQUESTS", "timeout")
def determine_media_type(item):
"""
Determine if the item is a film or TV series by checking actual seasons count
using GetSerieInfo.
"""
try:
# Extract program name from path_id
program_name = None
if item.get('path_id'):
parts = item['path_id'].strip('/').split('/')
if len(parts) >= 2:
program_name = parts[-1].split('.')[0]
if not program_name:
return "film"
scraper = GetSerieInfo(program_name)
scraper.collect_info_title()
return "tv" if scraper.getNumberSeason() > 0 else "film"
except Exception as e:
console.print(f"[red]Error determining media type: {e}[/red]")
return "film"
def title_search(query: str) -> int:
"""
Search for titles based on a search query.
Parameters:
- query (str): The query to search for.
Returns:
int: The number of titles found.
"""
media_search_manager.clear()
table_show_manager.clear()
search_url = f"https://www.raiplay.it/atomatic/raiplay-search-service/api/v1/msearch"
console.print(f"[cyan]Search url: [yellow]{search_url}")
json_data = {
'templateIn': '6470a982e4e0301afe1f81f1',
'templateOut': '6516ac5d40da6c377b151642',
'params': {
'param': query,
'from': None,
'sort': 'relevance',
'onlyVideoQuery': False,
},
}
try:
response = httpx.post(
search_url,
headers={'user-agent': get_userAgent()},
json=json_data,
timeout=max_timeout,
follow_redirects=True
)
response.raise_for_status()
except Exception as e:
console.print(f"[red]Site: {site_constant.SITE_NAME}, request search error: {e}")
return 0
# Keep only the first 15 results; guard the nested lookup so a changed
# response shape returns an empty list instead of raising
data = response.json().get('agg', {}).get('titoli', {}).get('cards', [])
data = data[:15]
# Process each item and add to media manager
for item in data:
media_search_manager.add_media({
'id': item.get('id', ''),
'name': item.get('titolo', ''),
'type': determine_media_type(item),
'path_id': item.get('path_id', ''),
'url': f"https://www.raiplay.it{item.get('url', '')}",
'image': f"https://www.raiplay.it{item.get('immagine', '')}",
})
return media_search_manager.get_length()
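The same msearch call is easy to test outside the app; the template IDs below are copied from the module above and the query string is just an example, so expect the payload shape to change if RaiPlay revises the endpoint:

    import httpx

    payload = {
        'templateIn': '6470a982e4e0301afe1f81f1',
        'templateOut': '6516ac5d40da6c377b151642',
        'params': {'param': 'montalbano', 'from': None, 'sort': 'relevance', 'onlyVideoQuery': False},
    }
    response = httpx.post(
        "https://www.raiplay.it/atomatic/raiplay-search-service/api/v1/msearch",
        json=payload, timeout=10
    )
    # Same nested path the module reads: agg -> titoli -> cards
    for card in response.json().get('agg', {}).get('titoli', {}).get('cards', [])[:5]:
        print(card.get('titolo'), card.get('path_id'))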

View File

@ -0,0 +1,147 @@
# 21.05.24
import logging
# External libraries
import httpx
# Internal utilities
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Api.Player.Helper.Vixcloud.util import SeasonManager
# Variable
max_timeout = config_manager.get_int("REQUESTS", "timeout")
class GetSerieInfo:
def __init__(self, program_name: str):
"""Initialize the GetSerieInfo class."""
self.base_url = "https://www.raiplay.it"
self.program_name = program_name
self.series_name = program_name
self.seasons_manager = SeasonManager()
def collect_info_title(self) -> None:
"""Get series info including seasons."""
try:
program_url = f"{self.base_url}/programmi/{self.program_name}.json"
response = httpx.get(url=program_url, headers=get_headers(), timeout=max_timeout)
# If 404, content is not yet available
if response.status_code == 404:
logging.info(f"Content not yet available: {self.program_name}")
return
response.raise_for_status()
json_data = response.json()
# Look for seasons in the 'blocks' property
for block in json_data.get('blocks', []):
# Check if block is a season block or episodi block
if block.get('type') == 'RaiPlay Multimedia Block':
if block.get('name', '').lower() == 'episodi':
self.publishing_block_id = block.get('id')
# Extract seasons from sets array
for season_set in block.get('sets', []):
if 'stagione' in season_set.get('name', '').lower():
self._add_season(season_set, block.get('id'))
elif 'stagione' in block.get('name', '').lower():
self.publishing_block_id = block.get('id')
# Extract season directly from block's sets
for season_set in block.get('sets', []):
self._add_season(season_set, block.get('id'))
except httpx.HTTPError as e:
logging.error(f"Error collecting series info: {e}")
except Exception as e:
logging.error(f"Unexpected error collecting series info: {e}")
def _add_season(self, season_set: dict, block_id: str):
self.seasons_manager.add_season({
'id': season_set.get('id', ''),
'number': len(self.seasons_manager.seasons) + 1,
'name': season_set.get('name', ''),
'path': season_set.get('path_id', ''),
'episodes_count': season_set.get('episode_size', {}).get('number', 0)
})
def collect_info_season(self, number_season: int) -> None:
"""Get episodes for a specific season."""
try:
season = self.seasons_manager.get_season_by_number(number_season)
url = f"{self.base_url}/programmi/{self.program_name}/{self.publishing_block_id}/{season.id}/episodes.json"
response = httpx.get(url=url, headers=get_headers(), timeout=max_timeout)
response.raise_for_status()
episodes_data = response.json()
cards = []
# Extract episodes from different possible structures
if 'seasons' in episodes_data:
for season_data in episodes_data.get('seasons', []):
for episode_set in season_data.get('episodes', []):
cards.extend(episode_set.get('cards', []))
if not cards:
cards = episodes_data.get('cards', [])
# Add episodes to season
for ep in cards:
episode = {
'id': ep.get('id', ''),
'number': ep.get('episode', ''),
'name': ep.get('episode_title', '') or ep.get('toptitle', ''),
'duration': ep.get('duration', ''),
'url': f"{self.base_url}{ep.get('weblink', '')}" if 'weblink' in ep else f"{self.base_url}{ep.get('url', '')}"
}
season.episodes.add(episode)
except Exception as e:
logging.error(f"Error collecting episodes for season {number_season}: {e}")
raise
# ------------- FOR GUI -------------
def getNumberSeason(self) -> int:
"""
Get the total number of seasons available for the series.
"""
if not self.seasons_manager.seasons:
self.collect_info_title()
return len(self.seasons_manager.seasons)
def getEpisodeSeasons(self, season_number: int) -> list:
"""
Get all episodes for a specific season.
"""
season = self.seasons_manager.get_season_by_number(season_number)
if not season:
logging.error(f"Season {season_number} not found")
return []
if not season.episodes.episodes:
self.collect_info_season(season_number)
return season.episodes.episodes
def selectEpisode(self, season_number: int, episode_index: int) -> dict:
"""
Get information for a specific episode in a specific season.
"""
episodes = self.getEpisodeSeasons(season_number)
if not episodes or episode_index < 0 or episode_index >= len(episodes):
logging.error(f"Episode index {episode_index} is out of range for season {season_number}")
return None
return episodes[episode_index]
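Hypothetical usage of the class above ("un-posto-al-sole" is an illustrative program slug, and episode entries are assumed to expose the name/url attributes that download_video reads):

    scraper = GetSerieInfo("un-posto-al-sole")
    scraper.collect_info_title()
    print("Seasons found:", scraper.getNumberSeason())
    for ep in scraper.getEpisodeSeasons(1)[:3]:
        print(ep.number, ep.name, ep.url)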

View File

@ -12,6 +12,7 @@ from rich.prompt import Prompt
# Internal utilities
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Lib.Proxies.proxy import ProxyFinder
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
from StreamingCommunity.TelegramHelp.telegram_bot import get_bot_instance
@ -25,85 +26,141 @@ from .series import download_series
# Variable
indice = 0
_useFor = "film_serie"
_deprecate = False
_priority = 1
_useFor = "Film_&_Serie" # "Movies_&_Series"
_priority = 0
_engineDownload = "hls"
_deprecate = False
msg = Prompt()
console = Console()
proxy = None
def get_user_input(string_to_search: str = None):
"""
Asks the user to input a search term.
Handles both Telegram bot input and direct input.
If string_to_search is provided, it's returned directly (after stripping).
"""
if string_to_search is not None:
return string_to_search.strip()
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
user_response = bot.ask(
"key_search", # Request type
"Enter the search term\nor type 'back' to return to the menu: ",
None
)
if user_response is None:
bot.send_message("Timeout: No search term entered.", None)
return None
if user_response.lower() == 'back':
bot.send_message("Returning to the main menu...", None)
try:
# Restart the script
subprocess.Popen([sys.executable] + sys.argv)
sys.exit()
except Exception as e:
bot.send_message(f"Error during restart attempt: {e}", None)
return None # Return None if restart fails
return user_response.strip()
else:
return msg.ask(f"\n[purple]Insert a word to search in [green]{site_constant.SITE_NAME}").strip()
def process_search_result(select_title, selections=None, proxy=None):
"""
Handles the search result and initiates the download for either a film or series.
Parameters:
select_title (MediaItem): The selected media item. Can be None if selection fails.
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
e.g., {'season': season_selection, 'episode': episode_selection}
proxy (str, optional): The proxy to use for downloads.
"""
if not select_title:
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
bot.send_message("No title selected or selection cancelled.", None)
else:
console.print("[yellow]No title selected or selection cancelled.")
return
if select_title.type == 'tv':
season_selection = None
episode_selection = None
if selections:
season_selection = selections.get('season')
episode_selection = selections.get('episode')
download_series(select_title, season_selection, episode_selection, proxy)
else:
download_film(select_title, proxy)
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None, selections: dict = None):
"""
Main function of the application for search.
Parameters:
string_to_search (str, optional): String to search for. Can be passed from run.py.
If 'back', special handling might occur in get_user_input.
get_onlyDatabase (bool, optional): If True, return only the database search manager object.
direct_item (dict, optional): Direct item to process (bypasses search).
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
for series (season/episode).
"""
bot = None
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
# Check proxy if not already set
finder = ProxyFinder(site_constant.FULL_URL)
proxy = finder.find_fast_proxy()
if direct_item:
select_title_obj = MediaItem(**direct_item)
process_search_result(select_title_obj, selections, proxy)
return
actual_search_query = get_user_input(string_to_search)
# Handle cases where user input is empty, or 'back' was handled (sys.exit or None return)
if not actual_search_query:
if bot:
if actual_search_query is None: # Specifically for timeout from bot.ask or failed restart
bot.send_message("Search term not provided or operation cancelled. Returning.", None)
return
# Perform search on the database using the obtained query
len_database = title_search(actual_search_query, proxy)
# If only the database object (media_search_manager populated by title_search) is needed
if get_onlyDatabase:
return media_search_manager
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager)
process_search_result(select_title)
select_title = get_select_title(table_show_manager, media_search_manager, len_database)
process_search_result(select_title, selections, proxy)
else:
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")
if site_constant.TELEGRAM_BOT:
bot.send_message(f"No results found, please try again", None)
# If no results are found, ask again
string_to_search = get_user_input()
search()
no_results_message = f"No results found for: '{actual_search_query}'"
if bot:
bot.send_message(no_results_message, None)
else:
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{actual_search_query}")
# Do not call search() recursively here to avoid infinite loops on no results.
# The flow should return to the caller (e.g., main menu in run.py).
return
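Taken together, the new selections parameter lets callers drive this module without prompts; a hedged example, assuming the module is imported the way run.py loads site packages:

    # e.g. from the site package exposing this module's search()
    search("dune", selections={'season': '1', 'episode': '1-3'})   # bypasses both prompts
    search("dune", get_onlyDatabase=True)                          # returns the populated MediaManager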

View File

@ -27,7 +27,7 @@ from StreamingCommunity.Api.Player.vixcloud import VideoSource
console = Console()
def download_film(select_title: MediaItem, proxy: str = None) -> str:
"""
Downloads a film using the provided film ID, title name, and domain.
@ -55,14 +55,17 @@ def download_film(select_title: MediaItem) -> str:
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [cyan]{select_title.name}[/cyan] \n")
# Init class
video_source = VideoSource(f"{site_constant.FULL_URL}/it", False, select_title.id, proxy)
# Retrieve scws and if available master playlist
video_source.get_iframe(select_title.id)
video_source.get_content()
master_playlist = video_source.get_playlist()
if master_playlist is None:
console.print(f"[red]Site: {site_constant.SITE_NAME}, error: No master playlist found[/red]")
return None
# Define the filename and path for the downloaded film
title_name = os_manager.get_sanitize_file(select_title.name) + ".mp4"
mp4_path = os.path.join(site_constant.MOVIE_FOLDER, title_name.replace(".mp4", ""))

View File

@ -39,28 +39,22 @@ console = Console()
def download_video(index_season_selected: int, index_episode_selected: int, scrape_serie: GetSerieInfo, video_source: VideoSource) -> Tuple[str,bool]:
"""
Downloads a specific episode from the specified season.
Parameters:
- index_season_selected (int): Season number
- index_episode_selected (int): Episode index
- scrape_serie (GetSerieInfo): Scraper object with series information
- video_source (VideoSource): Video source handler
Returns:
- str: Path to downloaded file
- bool: Whether download was stopped
"""
start_message()
# Get episode information
obj_episode = scrape_serie.selectEpisode(index_season_selected, index_episode_selected-1)
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [bold magenta]{obj_episode.name}[/bold magenta] ([cyan]S{index_season_selected}E{index_episode_selected}[/cyan]) \n")
if site_constant.TELEGRAM_BOT:
@ -98,28 +92,28 @@ def download_video(index_season_selected: int, index_episode_selected: int, scra
return r_proc['path'], r_proc['stopped']
def download_episode(index_season_selected: int, scrape_serie: GetSerieInfo, video_source: VideoSource, download_all: bool = False, episode_selection: str = None) -> None:
"""
Handle downloading episodes for a specific season.
Parameters:
- index_season_selected (int): Season number
- scrape_serie (GetSerieInfo): Scraper object with series information
- video_source (VideoSource): Video source object
- download_all (bool): Whether to download all episodes
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
"""
start_message()
# Get episodes for the selected season
episodes = scrape_serie.getEpisodeSeasons(index_season_selected)
episodes_count = len(episodes)
if episodes_count == 0:
console.print(f"[red]No episodes found for season {index_season_selected}")
return
if download_all:
# Download all episodes in the season
for i_episode in range(1, episodes_count + 1):
path, stopped = download_video(index_season_selected, i_episode, scrape_serie, video_source)
@ -129,16 +123,16 @@ def download_episode(index_season_selected: int, scrape_serie: GetSerieInfo, vid
console.print(f"\n[red]End downloaded [yellow]season: [red]{index_season_selected}.")
else:
# Display episodes list and manage user selection
if episode_selection is None:
last_command = display_episodes_list(episodes)
else:
last_command = episode_selection
console.print(f"\n[cyan]Using provided episode selection: [yellow]{episode_selection}")
# Validate the selection
list_episode_select = manage_selection(last_command, episodes_count)
list_episode_select = validate_episode_selection(list_episode_select, episodes_count)
# Download selected episodes if not stopped
for i_episode in list_episode_select:
@ -147,70 +141,65 @@ def download_episode(index_season_selected: int, scrape_serie: GetSerieInfo, vid
if stopped:
break
def download_series(select_season: MediaItem, season_selection: str = None, episode_selection: str = None, proxy = None) -> None:
"""
Handle downloading a complete series.
Parameters:
- select_season (MediaItem): Series metadata from search
- season_selection (str, optional): Pre-defined season selection that bypasses manual input
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
"""
start_message()
# Init class
video_source = VideoSource(f"{site_constant.FULL_URL}/it", True, select_season.id, proxy)
scrape_serie = GetSerieInfo(f"{site_constant.FULL_URL}/it", select_season.id, select_season.slug, proxy)
# Collect information about season
scrape_serie.getNumberSeason()
seasons_count = len(scrape_serie.seasons_manager)
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
# Prompt user for season selection and download episodes
console.print(f"\n[green]Seasons found: [red]{seasons_count}")
# If season_selection is provided, use it instead of asking for input
if season_selection is None:
if site_constant.TELEGRAM_BOT:
console.print("\n[cyan]Insert season number [yellow](e.g., 1), [red]* [cyan]to download all seasons, "
"[yellow](e.g., 1-2) [cyan]for a range of seasons, or [yellow](e.g., 3-*) [cyan]to download from a specific season to the end")
bot.send_message(f"Stagioni trovate: {seasons_count}", None)
bot.send_message(f"Stagioni trovate: {seasons_count}", None)
index_season_selected = bot.ask(
"select_title_episode",
"Menu di selezione delle stagioni\n\n"
"- Inserisci il numero della stagione (ad esempio, 1)\n"
"- Inserisci * per scaricare tutte le stagioni\n"
"- Inserisci un intervallo di stagioni (ad esempio, 1-2) per scaricare da una stagione all'altra\n"
"- Inserisci (ad esempio, 3-*) per scaricare dalla stagione specificata fino alla fine della serie",
None
)
else:
index_season_selected = msg.ask(
"\n[cyan]Insert season number [yellow](e.g., 1), [red]* [cyan]to download all seasons, "
"[yellow](e.g., 1-2) [cyan]for a range of seasons, or [yellow](e.g., 3-*) [cyan]to download from a specific season to the end"
)
index_season_selected = season_selection
console.print(f"\n[cyan]Using provided season selection: [yellow]{season_selection}")
# Validate the selection
list_season_select = manage_selection(index_season_selected, seasons_count)
list_season_select = validate_selection(list_season_select, seasons_count)
# Loop through the selected seasons and download episodes
for i_season in list_season_select:
# SPECIAL: Get season number
season = None
for s in scrape_serie.seasons_manager.seasons:
if s.number == i_season:
@ -219,13 +208,10 @@ def download_series(select_season: MediaItem) -> None:
season_number = season.number
if len(list_season_select) > 1 or index_season_selected == "*":
download_episode(season_number, scrape_serie, video_source, download_all=True)
else:
download_episode(season_number, scrape_serie, video_source, download_all=False, episode_selection=episode_selection)
if site_constant.TELEGRAM_BOT:
bot.send_message(f"Finito di scaricare tutte le serie e episodi", None)

View File

@ -1,10 +1,11 @@
# 10.12.23
import sys
import json
# External libraries
import httpx
from bs4 import BeautifulSoup
from rich.console import Console
@ -27,7 +28,7 @@ table_show_manager = TVShowManager()
max_timeout = config_manager.get_int("REQUESTS", "timeout")
def title_search(query: str, proxy: str) -> int:
"""
Search for titles based on a search query.
@ -43,15 +44,42 @@ def title_search(query: str) -> int:
media_search_manager.clear()
table_show_manager.clear()
search_url = f"{site_constant.FULL_URL}/api/search?q={query}"
try:
response = httpx.get(
f"{site_constant.FULL_URL}/it",
headers={'user-agent': get_userAgent()},
timeout=max_timeout,
proxy=proxy
)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
version = json.loads(soup.find('div', {'id': "app"}).get("data-page"))['version']
except Exception as e:
if "WinError" in str(e) or "Errno" in str(e): console.print("\n[bold yellow]Please make sure you have enabled and configured a valid proxy.[/bold yellow]")
console.print(f"[red]Site: {site_constant.SITE_NAME} version, request error: {e}")
return 0
search_url = f"{site_constant.FULL_URL}/it/search?q={query}"
console.print(f"[cyan]Search url: [yellow]{search_url}")
try:
response = httpx.get(
search_url,
headers = {
'referer': site_constant.FULL_URL,
'user-agent': get_userAgent(),
'x-inertia': 'true',
'x-inertia-version': version
},
timeout=max_timeout,
proxy=proxy
)
response.raise_for_status()
except Exception as e:
console.print(f"Site: {site_constant.SITE_NAME}, request search error: {e}")
console.print(f"[red]Site: {site_constant.SITE_NAME}, request search error: {e}")
if site_constant.TELEGRAM_BOT:
bot.send_message(f"ERRORE\n\nErrore nella richiesta di ricerca:\n\n{e}", None)
return 0
@ -62,7 +90,7 @@ def title_search(query: str) -> int:
# Collect json data
try:
data = response.json().get('props').get('titles')
except Exception as e:
console.log(f"Error parsing JSON response: {e}")
return 0
@ -75,7 +103,7 @@ def title_search(query: str) -> int:
'name': dict_title.get('name'),
'type': dict_title.get('type'),
'date': dict_title.get('last_air_date'),
'image': f"{site_constant.FULL_URL.replace('stream', 'cdn.stream')}/images/{dict_title.get('images')[0].get('filename')}"
})
if site_constant.TELEGRAM_BOT:
@ -92,4 +120,4 @@ def title_search(query: str) -> int:
bot.send_message(f"Lista dei risultati:", choices)
# Return the number of titles found
return media_search_manager.get_length()

View File

@ -20,31 +20,22 @@ max_timeout = config_manager.get_int("REQUESTS", "timeout")
class GetSerieInfo:
def __init__(self, url, media_id: int = None, series_name: str = None, proxy = None):
"""
Initialize the GetSerieInfo class for scraping TV series information.
Args:
- url (str): The URL of the streaming site.
- media_id (int, optional): Unique identifier for the media
- series_name (str, optional): Name of the TV series
"""
self.is_series = False
self.headers = {'user-agent': get_userAgent()}
self.url = url
self.proxy = proxy
self.media_id = media_id
self.seasons_manager = SeasonManager()
# If series name is provided, initialize series-specific properties
if series_name is not None:
self.is_series = True
self.series_name = series_name
@ -60,7 +51,8 @@ class GetSerieInfo:
response = httpx.get(
url=f"{self.url}/titles/{self.media_id}-{self.series_name}",
headers=self.headers,
timeout=max_timeout,
proxy=self.proxy
)
response.raise_for_status()
@ -106,17 +98,17 @@ class GetSerieInfo:
if not season:
logging.error(f"Season {number_season} not found")
return
response = httpx.get(
url=f'{self.url}/titles/{self.media_id}-{self.series_name}/season-{number_season}',
headers={
'User-Agent': self.headers['user-agent'],
'x-inertia': 'true',
'x-inertia-version': self.version,
},
timeout=max_timeout,
proxy=self.proxy
)
response.raise_for_status()
# Extract episodes from JSON response
json_response = response.json().get('props', {}).get('loadedSeason', {}).get('episodes', [])
@ -127,4 +119,40 @@ class GetSerieInfo:
except Exception as e:
logging.error(f"Error collecting episodes for season {number_season}: {e}")
raise
# ------------- FOR GUI -------------
def getNumberSeason(self) -> int:
"""
Get the total number of seasons available for the series.
"""
if not self.seasons_manager.seasons:
self.collect_info_title()
return len(self.seasons_manager.seasons)
def getEpisodeSeasons(self, season_number: int) -> list:
"""
Get all episodes for a specific season.
"""
season = self.seasons_manager.get_season_by_number(season_number)
if not season:
logging.error(f"Season {season_number} not found")
return []
if not season.episodes.episodes:
self.collect_info_season(season_number)
return season.episodes.episodes
def selectEpisode(self, season_number: int, episode_index: int) -> dict:
"""
Get information for a specific episode in a specific season.
"""
episodes = self.getEpisodeSeasons(season_number)
if not episodes or episode_index < 0 or episode_index >= len(episodes):
logging.error(f"Episode index {episode_index} is out of range for season {season_number}")
return None
return episodes[episode_index]
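The class above leans on the site's Inertia.js protocol: first read the version from the #app element's data-page attribute, then request pages with x-inertia headers to get JSON instead of HTML. A self-contained sketch against an illustrative base URL:

    import json

    import httpx
    from bs4 import BeautifulSoup

    base_url = "https://streamingcommunity.example/it"  # illustrative, not the real domain
    html = httpx.get(base_url, headers={'user-agent': 'Mozilla/5.0'}, timeout=15).text
    app_div = BeautifulSoup(html, 'html.parser').find('div', {'id': 'app'})
    version = json.loads(app_div.get('data-page'))['version']
    # With the matching version header, the server answers with the Inertia page JSON
    page = httpx.get(
        f"{base_url}/titles/123-example-show",
        headers={'x-inertia': 'true', 'x-inertia-version': version},
        timeout=15
    )
    print(page.json().get('props', {}).keys())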

View File

@ -0,0 +1,102 @@
# 29.04.25
# External library
from rich.console import Console
from rich.prompt import Prompt
# Internal utilities
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Lib.Proxies.proxy import ProxyFinder
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Logic class
from .site import title_search, table_show_manager, media_search_manager
from .film import download_film
from .series import download_series
# Variable
indice = 7
_useFor = "Film_&_Serie"
_priority = 0
_engineDownload = "hls"
_deprecate = False
msg = Prompt()
console = Console()
proxy = None
def get_user_input(string_to_search: str = None):
"""
Asks the user to input a search term via the console prompt.
"""
if string_to_search is None:
string_to_search = msg.ask(f"\n[purple]Insert a word to search in [green]{site_constant.SITE_NAME}").strip()
return string_to_search
def process_search_result(select_title, selections=None, proxy=None):
"""
Handles the search result and initiates the download for either a film or series.
Parameters:
select_title (MediaItem): The selected media item
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
proxy (str, optional): The proxy to use for downloads.
"""
if select_title.type == 'tv':
season_selection = None
episode_selection = None
if selections:
season_selection = selections.get('season')
episode_selection = selections.get('episode')
download_series(select_title, season_selection, episode_selection, proxy)
else:
download_film(select_title, proxy)
def search(string_to_search: str = None, get_onlyDatabase: bool = False, direct_item: dict = None, selections: dict = None):
"""
Main function of the application for search.
Parameters:
string_to_search (str, optional): String to search for
get_onlyDatabase (bool, optional): If True, return only the database object
direct_item (dict, optional): Direct item to process (bypass search)
selections (dict, optional): Dictionary containing selection inputs that bypass manual input
{'season': season_selection, 'episode': episode_selection}
"""
if direct_item:
select_title = MediaItem(**direct_item)
process_search_result(select_title, selections)  # proxy not supported for direct items yet
return
if string_to_search is None:
string_to_search = msg.ask(f"\n[purple]Insert a word to search in [green]{site_constant.SITE_NAME}").strip()
# Perform search on the database using the obtained query
finder = ProxyFinder(url=f"{site_constant.FULL_URL}/serie/euphoria/")
proxy = finder.find_fast_proxy()
len_database = title_search(string_to_search, proxy)
# If only the database is needed, return the manager
if get_onlyDatabase:
return media_search_manager
if len_database > 0:
select_title = get_select_title(table_show_manager, media_search_manager,len_database)
process_search_result(select_title, selections, proxy)
else:
# If no results are found, ask again
console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")
search()

View File

@ -0,0 +1,61 @@
# 29.04.25
import os
# External library
from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Lib.Downloader import HLS_Downloader
# Logic class
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Player
from StreamingCommunity.Api.Player.hdplayer import VideoSource
# Variable
console = Console()
def download_film(select_title: MediaItem, proxy) -> str:
"""
Downloads a film using the provided media item and proxy.
Parameters:
- select_title (MediaItem): The media item containing film information
- proxy (str, optional): Proxy to route requests through
Return:
- str: output path
"""
start_message()
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [cyan]{select_title.name}[/cyan] \n")
# Get master playlists
video_source = VideoSource(proxy)
master_playlist = video_source.get_m3u8_url(select_title.url)
# Define the filename and path for the downloaded film
title_name = os_manager.get_sanitize_file(select_title.name) + ".mp4"
mp4_path = os.path.join(site_constant.MOVIE_FOLDER, title_name.replace(".mp4", ""))
# Download the film using the m3u8 playlist, and output filename
r_proc = HLS_Downloader(
m3u8_url=master_playlist,
output_path=os.path.join(mp4_path, title_name)
).start()
if r_proc['error'] is not None:
try: os.remove(r_proc['path'])
except: pass
return r_proc['path']
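Every download helper in these modules repeats the same contract around HLS_Downloader.start(): a dict with 'path', 'error' and 'stopped', with the partial file removed on error. A small wrapper capturing that pattern (a sketch, not part of the library):

    import os

    from StreamingCommunity.Lib.Downloader import HLS_Downloader

    def run_hls_download(m3u8_url: str, output_path: str):
        r_proc = HLS_Downloader(m3u8_url=m3u8_url, output_path=output_path).start()
        if r_proc['error'] is not None:
            # Remove the partial file left behind by a failed download
            try:
                os.remove(r_proc['path'])
            except OSError:
                pass
        return r_proc['path'], r_proc['stopped']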

View File

@ -0,0 +1,160 @@
# 29.04.25
import os
from typing import Tuple
# External library
from rich.console import Console
from rich.prompt import Prompt
# Internal utilities
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Lib.Downloader import HLS_Downloader
# Logic class
from .util.ScrapeSerie import GetSerieInfo
from StreamingCommunity.Api.Template.Util import (
manage_selection,
map_episode_title,
validate_selection,
validate_episode_selection,
display_episodes_list
)
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
# Player
from StreamingCommunity.Api.Player.hdplayer import VideoSource
# Variable
msg = Prompt()
console = Console()
def download_video(index_season_selected: int, index_episode_selected: int, scrape_serie: GetSerieInfo, proxy=None) -> Tuple[str,bool]:
"""
Downloads a specific episode from a specified season.
Parameters:
- index_season_selected (int): Season number
- index_episode_selected (int): Episode index
- scrape_serie (GetSerieInfo): Scraper object with series information
- proxy (str, optional): Proxy to route requests through
Returns:
- str: Path to downloaded file
- bool: Whether download was stopped
"""
start_message()
# Get episode information
obj_episode = scrape_serie.selectEpisode(index_season_selected, index_episode_selected-1)
console.print(f"[bold yellow]Download:[/bold yellow] [red]{site_constant.SITE_NAME}[/red] → [bold magenta]{obj_episode.name}[/bold magenta] ([cyan]S{index_season_selected}E{index_episode_selected}[/cyan]) \n")
# Define filename and path for the downloaded video
mp4_name = f"{map_episode_title(scrape_serie.series_name, index_season_selected, index_episode_selected, obj_episode.name)}.mp4"
mp4_path = os.path.join(site_constant.SERIES_FOLDER, scrape_serie.series_name, f"S{index_season_selected}")
# Retrieve scws and if available master playlist
video_source = VideoSource(proxy)
master_playlist = video_source.get_m3u8_url(obj_episode.url)
# Download the episode
r_proc = HLS_Downloader(
m3u8_url=master_playlist,
output_path=os.path.join(mp4_path, mp4_name)
).start()
if r_proc['error'] is not None:
try: os.remove(r_proc['path'])
except: pass
return r_proc['path'], r_proc['stopped']
def download_episode(index_season_selected: int, scrape_serie: GetSerieInfo, download_all: bool = False, episode_selection: str = None, proxy = None) -> None:
"""
Handle downloading episodes for a specific season.
Parameters:
- index_season_selected (int): Season number
- scrape_serie (GetSerieInfo): Scraper object with series information
- download_all (bool): Whether to download all episodes
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
- proxy (str, optional): Proxy passed through to each episode download
"""
# Get episodes for the selected season
episodes = scrape_serie.getEpisodeSeasons(index_season_selected)
episodes_count = len(episodes)
if download_all:
for i_episode in range(1, episodes_count + 1):
path, stopped = download_video(index_season_selected, i_episode, scrape_serie, proxy)
if stopped:
break
console.print(f"\n[red]End downloaded [yellow]season: [red]{index_season_selected}.")
else:
if episode_selection is not None:
last_command = episode_selection
console.print(f"\n[cyan]Using provided episode selection: [yellow]{episode_selection}")
else:
last_command = display_episodes_list(episodes)
# Prompt user for episode selection
list_episode_select = manage_selection(last_command, episodes_count)
list_episode_select = validate_episode_selection(list_episode_select, episodes_count)
# Download selected episodes if not stopped
for i_episode in list_episode_select:
path, stopped = download_video(index_season_selected, i_episode, scrape_serie, proxy)
if stopped:
break
def download_series(select_season: MediaItem, season_selection: str = None, episode_selection: str = None, proxy = None) -> None:
"""
Handle downloading a complete series.
Parameters:
- select_season (MediaItem): Series metadata from search
- season_selection (str, optional): Pre-defined season selection that bypasses manual input
- episode_selection (str, optional): Pre-defined episode selection that bypasses manual input
- proxy (str, optional): Proxy passed through to the scraper and downloads
"""
scrape_serie = GetSerieInfo(select_season.url, proxy)
# Get total number of seasons
seasons_count = scrape_serie.getNumberSeason()
# Prompt user for season selection and download episodes
console.print(f"\n[green]Seasons found: [red]{seasons_count}")
# If season_selection is provided, use it instead of asking for input
if season_selection is None:
index_season_selected = msg.ask(
"\n[cyan]Insert season number [yellow](e.g., 1), [red]* [cyan]to download all seasons, "
"[yellow](e.g., 1-2) [cyan]for a range of seasons, or [yellow](e.g., 3-*) [cyan]to download from a specific season to the end"
)
else:
index_season_selected = season_selection
console.print(f"\n[cyan]Using provided season selection: [yellow]{season_selection}")
# Validate the selection
list_season_select = manage_selection(index_season_selected, seasons_count)
list_season_select = validate_selection(list_season_select, seasons_count)
# Loop through the selected seasons and download episodes
for i_season in list_season_select:
if len(list_season_select) > 1 or index_season_selected == "*":
# Download all episodes if multiple seasons are selected or if '*' is used
download_episode(i_season, scrape_serie, download_all=True, proxy=proxy)
else:
# Otherwise, let the user select specific episodes for the single season
download_episode(i_season, scrape_serie, download_all=False, episode_selection=episode_selection, proxy=proxy)

View File

@ -0,0 +1,118 @@
# 29.04.25
import re
# External libraries
import httpx
from bs4 import BeautifulSoup
from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.headers import get_userAgent
from StreamingCommunity.Util.table import TVShowManager
# Logic class
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager
# Variable
console = Console()
media_search_manager = MediaManager()
table_show_manager = TVShowManager()
max_timeout = config_manager.get_int("REQUESTS", "timeout")
def extract_nonce(proxy) -> str:
"""Extract nonce value from the page script"""
response = httpx.get(
site_constant.FULL_URL,
headers={'user-agent': get_userAgent()},
timeout=max_timeout,
proxy=proxy
)
soup = BeautifulSoup(response.content, 'html.parser')
script = soup.find('script', id='live-search-js-extra')
if script:
match = re.search(r'"admin_ajax_nonce":"([^"]+)"', script.text)
if match:
return match.group(1)
return ""
def title_search(query: str, proxy: str) -> int:
"""
Search for titles based on a search query.
Parameters:
- query (str): The query to search for.
- proxy (str): Proxy used for the requests.
Returns:
int: The number of titles found.
"""
media_search_manager.clear()
table_show_manager.clear()
search_url = f"{site_constant.FULL_URL}/wp-admin/admin-ajax.php"
console.print(f"[cyan]Search url: [yellow]{search_url}")
try:
_wpnonce = extract_nonce(proxy)
if not _wpnonce:
console.print("[red]Error: Failed to extract nonce")
return 0
data = {
'action': 'data_fetch',
'keyword': query,
'_wpnonce': _wpnonce
}
response = httpx.post(
search_url,
headers={
'origin': site_constant.FULL_URL,
'user-agent': get_userAgent()
},
data=data,
timeout=max_timeout,
proxy=proxy
)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
except Exception as e:
if "WinError" in str(e) or "Errno" in str(e): console.print("\n[bold yellow]Please make sure you have enabled and configured a valid proxy.[/bold yellow]")
console.print(f"[red]Site: {site_constant.SITE_NAME}, request search error: {e}")
return 0
for item in soup.find_all('div', class_='searchelement'):
try:
title = item.find_all("a")[-1].get_text(strip=True) if item.find_all("a") else 'N/A'
url = item.find('a').get('href', '')
year = item.find('div', id='search-cat-year')
year = year.get_text(strip=True) if year else 'N/A'
if any(keyword in year.lower() for keyword in ['stagione', 'episodio', 'ep.', 'season', 'episode']):
continue
media_search_manager.add_media({
'name': title,
'type': 'tv' if '/serie/' in url else 'film',
'date': year,
'image': item.find('img').get('src', ''),
'url': url
})
except Exception as e:
print(f"Error parsing a film entry: {e}")
# Return the number of titles found
return media_search_manager.get_length()
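The nonce extraction above depends on WordPress inlining an admin_ajax_nonce in the live-search script; a quick offline check of the regex against a made-up sample:

    import re

    sample = 'var live_search = {"ajax_url":"/wp-admin/admin-ajax.php","admin_ajax_nonce":"a1b2c3d4e5"};'
    match = re.search(r'"admin_ajax_nonce":"([^"]+)"', sample)
    print(match.group(1) if match else "")  # -> a1b2c3d4e5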

View File

@ -0,0 +1,118 @@
# 29.04.25
import re
import logging
# External libraries
import httpx
from bs4 import BeautifulSoup
# Internal utilities
from StreamingCommunity.Util.headers import get_userAgent
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Api.Player.Helper.Vixcloud.util import SeasonManager, Episode
# Variable
max_timeout = config_manager.get_int("REQUESTS", "timeout")
class GetSerieInfo:
def __init__(self, url, proxy: str = None):
self.headers = {'user-agent': get_userAgent()}
self.url = url
self.seasons_manager = SeasonManager()
self.series_name = None
self.client = httpx.Client(headers=self.headers, proxy=proxy, timeout=max_timeout)
def collect_info_season(self) -> None:
"""
Retrieve all series information including episodes and seasons.
"""
try:
response = self.client.get(self.url)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
if not self.series_name:
title_tag = soup.find('h1', class_='title-border')
self.series_name = title_tag.get_text(strip=True) if title_tag else 'N/A'
# Extract episodes and organize by season
episodes = {}
for ep in soup.find_all('div', class_='bolumust'):
a_tag = ep.find('a')
if not a_tag:
continue
ep_url = a_tag.get('href', '')
episode_title = a_tag.get_text(strip=True)
# Clean up episode title by removing season info and date
clean_title = re.sub(r'Stagione \d+ Episodio \d+\s*\(?([^)]+)\)?\s*\d+\s*\w+\s*\d+', r'\1', episode_title)
season_match = re.search(r'stagione-(\d+)', ep_url)
if season_match:
season_num = int(season_match.group(1))
if season_num not in episodes:
episodes[season_num] = []
episodes[season_num].append({
'id': len(episodes[season_num]) + 1,
'number': len(episodes[season_num]) + 1,
'name': clean_title.strip(),
'url': ep_url
})
# Add seasons to SeasonManager
for season_num, eps in episodes.items():
season = self.seasons_manager.add_season({
'id': season_num,
'number': season_num,
'name': f'Stagione {season_num}'
})
# Add episodes to season's EpisodeManager
for ep in eps:
season.episodes.add(ep)
except Exception as e:
logging.error(f"Error collecting series info: {str(e)}")
raise
# ------------- FOR GUI -------------
def getNumberSeason(self) -> int:
"""
Get the total number of seasons available for the series.
"""
if not self.seasons_manager.seasons:
self.collect_info_season()
return len(self.seasons_manager.seasons)
def getEpisodeSeasons(self, season_number: int) -> list:
"""
Get all episodes for a specific season.
"""
if not self.seasons_manager.seasons:
self.collect_info_season()
season = self.seasons_manager.get_season_by_number(season_number)
if not season:
logging.error(f"Season {season_number} not found")
return []
return season.episodes.episodes
def selectEpisode(self, season_number: int, episode_index: int) -> Episode:
"""
Get information for a specific episode in a specific season.
"""
episodes = self.getEpisodeSeasons(season_number)
if not episodes or episode_index < 0 or episode_index >= len(episodes):
logging.error(f"Episode index {episode_index} is out of range for season {season_number}")
return None
return episodes[episode_index]
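The title cleanup in collect_info_season is easiest to see on a concrete entry; the sample string below is invented but follows the "Stagione N Episodio M (title) date" shape the regex targets:

    import re

    raw = "Stagione 2 Episodio 5 (La resa dei conti) 12 Maggio 2025"
    clean = re.sub(r'Stagione \d+ Episodio \d+\s*\(?([^)]+)\)?\s*\d+\s*\w+\s*\d+', r'\1', raw)
    print(clean.strip())  # -> La resa dei conti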

View File

@ -7,78 +7,123 @@ import sys
from rich.console import Console
# Internal utilities
from StreamingCommunity.Api.Template.config_loader import site_constant
from StreamingCommunity.TelegramHelp.telegram_bot import get_bot_instance
# Variable
console = Console()
available_colors = ['red', 'magenta', 'yellow', 'cyan', 'green', 'blue', 'white']
column_to_hide = ['Slug', 'Sub_ita', 'Last_air_date', 'Seasons_count', 'Url', 'Image', 'Path_id']
def get_select_title(table_show_manager, media_search_manager, num_results_available):
"""
Display a selection of titles and prompt the user to choose one.
Handles both console and Telegram bot input.
Parameters:
table_show_manager: Manager for console table display.
media_search_manager: Manager holding the list of media items.
num_results_available (int): The number of media items available for selection.
Returns:
MediaItem: The selected media item, or None if no selection is made or an error occurs.
"""
if not media_search_manager.media_list:
console.print("\n[red]No media items available.")
# console.print("\n[red]No media items available.")
return None
if site_constant.TELEGRAM_BOT:
bot = get_bot_instance()
prompt_message = f"Inserisci il numero del titolo che vuoi selezionare (da 0 a {num_results_available - 1}):"
user_input_str = bot.ask(
"select_title_from_list_number",
prompt_message,
None
)
if user_input_str is None:
bot.send_message("Timeout: nessuna selezione ricevuta.", None)
return None
try:
chosen_index = int(user_input_str)
if 0 <= chosen_index < num_results_available:
selected_item = media_search_manager.get(chosen_index)
if selected_item:
return selected_item
else:
bot.send_message(f"Errore interno: Impossibile recuperare il titolo con indice {chosen_index}.", None)
return None
else:
bot.send_message(f"Selezione '{chosen_index}' non valida. Inserisci un numero compreso tra 0 e {num_results_available - 1}.", None)
return None
except ValueError:
bot.send_message(f"Input '{user_input_str}' non valido. Devi inserire un numero.", None)
return None
except Exception as e:
bot.send_message(f"Si è verificato un errore durante la selezione: {e}", None)
return None
else:
# Original console selection logic
if not media_search_manager.media_list:
console.print("\n[red]No media items available.")
return None
first_media_item = media_search_manager.media_list[0]
column_info = {"Index": {'color': available_colors[0]}}
color_index = 1
for key in first_media_item.__dict__.keys():
if key.capitalize() in column_to_hide:
continue
if key in ('id', 'type', 'name', 'score'):
if key == 'type': column_info["Type"] = {'color': 'yellow'}
elif key == 'name': column_info["Name"] = {'color': 'magenta'}
elif key == 'score': column_info["Score"] = {'color': 'cyan'}
else:
column_info[key.capitalize()] = {'color': available_colors[color_index % len(available_colors)]}
color_index += 1
table_show_manager.clear()
table_show_manager.add_column(column_info)
for i, media in enumerate(media_search_manager.media_list):
media_dict = {'Index': str(i)}
for key in first_media_item.__dict__.keys():
if key.capitalize() in column_to_hide:
continue
media_dict[key.capitalize()] = str(getattr(media, key))
table_show_manager.add_tv_show(media_dict)
# Run the table and handle user input
last_command_str = table_show_manager.run(force_int_input=True, max_int_input=len(media_search_manager.media_list))
table_show_manager.clear()
if last_command_str is None or last_command_str.lower() in ["q", "quit"]:
console.print("\n[red]Selezione annullata o uscita.")
return None
try:
selected_index = int(last_command_str)
if 0 <= selected_index < len(media_search_manager.media_list):
return media_search_manager.get(selected_index)
else:
console.print("\n[red]Indice errato o non valido.")
# sys.exit(0)
return None
except ValueError:
console.print("\n[red]Input non numerico ricevuto dalla tabella.")
# sys.exit(0)
return None

View File

@ -180,10 +180,14 @@ class M3U8Manager:
self.sub_streams = []
if ENABLE_SUBTITLE:
if "*" in DOWNLOAD_SPECIFIC_SUBTITLE:
self.sub_streams = self.parser._subtitle.get_all_uris_and_names() or []
else:
self.sub_streams = [
s for s in (self.parser._subtitle.get_all_uris_and_names() or [])
if s.get('language') in DOWNLOAD_SPECIFIC_SUBTITLE
]
def log_selection(self):
tuple_available_resolution = self.parser._video.get_list_resolution()
@ -209,9 +213,13 @@ class M3U8Manager:
f"[red]Set:[/red] {set_codec_info}"
)
# Get available subtitles and their languages
available_subtitles = self.parser._subtitle.get_all_uris_and_names() or []
available_sub_languages = [sub.get('language') for sub in available_subtitles]
# If "*" is in DOWNLOAD_SPECIFIC_SUBTITLE, all languages are downloadable
downloadable_sub_languages = available_sub_languages if "*" in DOWNLOAD_SPECIFIC_SUBTITLE else list(set(available_sub_languages) & set(DOWNLOAD_SPECIFIC_SUBTITLE))
if available_sub_languages:
console.print(
f"[cyan bold]Subtitle [/cyan bold] [green]Available:[/green] [purple]{', '.join(available_sub_languages)}[/purple] | "
@ -514,7 +522,7 @@ class HLS_Downloader:
for item in self.download_manager.missing_segments:
if int(item['nFailed']) >= 1:
missing_ts = True
missing_info += f"[red]TS Failed: {item['nFailed']} {item['type']} tracks[/red]\n"
missing_info += f"[red]TS Failed: {item['nFailed']} {item['type']} tracks[/red]"
file_size = internet_manager.format_file_size(os.path.getsize(self.path_manager.output_path))
duration = print_duration_table(self.path_manager.output_path, description=False, return_string=True)

View File

@ -23,7 +23,7 @@ from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.color import Colors
from StreamingCommunity.Util.headers import get_userAgent
from StreamingCommunity.Util.config_json import config_manager
# Logic class
@ -41,10 +41,9 @@ REQUEST_VERIFY = config_manager.get_bool('REQUESTS', 'verify')
DEFAULT_VIDEO_WORKERS = config_manager.get_int('M3U8_DOWNLOAD', 'default_video_workser')
DEFAULT_AUDIO_WORKERS = config_manager.get_int('M3U8_DOWNLOAD', 'default_audio_workser')
MAX_TIMEOOUT = config_manager.get_int("REQUESTS", "timeout")
MAX_INTERRUPT_COUNT = 3
SEGMENT_MAX_TIMEOUT = config_manager.get_int("M3U8_DOWNLOAD", "segment_timeout")
TELEGRAM_BOT = config_manager.get_bool('DEFAULT', 'telegram_bot')
# Variable
console = Console()
@ -160,7 +159,7 @@ class M3U8_Segments:
if self.is_index_url:
try:
client_params = {'headers': {'User-Agent': get_userAgent()}, 'timeout': MAX_TIMEOOUT}
response = httpx.get(self.url, **client_params, follow_redirects=True)
response.raise_for_status()
self.parse_data(response.text)
@ -408,20 +407,12 @@ class M3U8_Segments:
"""
Generate platform-appropriate progress bar format.
"""
return (
f"{Colors.YELLOW}[HLS] {Colors.WHITE}({Colors.CYAN}{description}{Colors.WHITE}): "
f"{Colors.RED}{{percentage:.2f}}% "
f"{Colors.MAGENTA}{{bar}} "
f"{Colors.YELLOW}{{elapsed}}{Colors.WHITE} < {Colors.CYAN}{{remaining}}{Colors.WHITE}{{postfix}}{Colors.WHITE}"
)
def _get_worker_count(self, stream_type: str) -> int:
"""

View File

@ -21,7 +21,7 @@ from rich.panel import Panel
from StreamingCommunity.Util.headers import get_userAgent
from StreamingCommunity.Util.color import Colors
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.os import internet_manager, os_manager
from StreamingCommunity.TelegramHelp.telegram_bot import get_bot_instance
@ -80,6 +80,7 @@ def MP4_downloader(url: str, path: str, referer: str = None, headers_: dict = No
bot = get_bot_instance()
console.log("####")
path = os_manager.get_sanitize_path(path)
if os.path.exists(path):
console.log("[red]Output file already exists.")
if TELEGRAM_BOT:

View File

@ -18,7 +18,7 @@ import qbittorrentapi
# Internal utilities
from StreamingCommunity.Util.color import Colors
from StreamingCommunity.Util.os import internet_manager
from StreamingCommunity.Util.config_json import config_manager
# Configuration
@ -316,19 +316,12 @@ class TOR_downloader:
# Ensure the torrent is started
self.qb.torrents_resume(torrent_hashes=self.latest_torrent_hash)
# Configure progress bar display format
bar_format = (
f"{Colors.YELLOW}[TOR] {Colors.WHITE}({Colors.CYAN}video{Colors.WHITE}): "
f"{Colors.RED}{{percentage:.2f}}% {Colors.MAGENTA}{{bar}} {Colors.WHITE}[ "
f"{Colors.YELLOW}{{elapsed}} {Colors.WHITE}< {Colors.CYAN}{{remaining}}{{postfix}} {Colors.WHITE}]"
)
# Initialize progress bar
with tqdm(

View File

@ -36,11 +36,7 @@ def capture_output(process: subprocess.Popen, description: str) -> None:
if not line:
continue
logging.info(f"FFMPEG line: {line}")
# Check if termination is requested
if terminate_flag.is_set():

View File

@ -132,31 +132,66 @@ def print_duration_table(file_path: str, description: str = "Duration", return_s
def get_ffprobe_info(file_path):
"""
Get format and codec information for a media file using ffprobe.
Parameters:
- file_path (str): Path to the media file.
Returns:
dict: A dictionary containing the format name and a list of codec names.
Returns None if file does not exist or ffprobe crashes.
"""
if not os.path.exists(file_path):
logging.error(f"File not found: {file_path}")
return None
# Get ffprobe path and verify it exists
ffprobe_path = get_ffprobe_path()
if not ffprobe_path or not os.path.exists(ffprobe_path):
logging.error(f"FFprobe not found at path: {ffprobe_path}")
return None
# Verify file permissions
try:
file_stat = os.stat(file_path)
logging.info(f"File permissions: {oct(file_stat.st_mode)}")
if not os.access(file_path, os.R_OK):
logging.error(f"No read permission for file: {file_path}")
return None
except OSError as e:
logging.error(f"Cannot access file {file_path}: {e}")
return None
try:
cmd = [ffprobe_path, '-v', 'error', '-show_format', '-show_streams', '-print_format', 'json', file_path]
logging.info(f"Running FFprobe command: {' '.join(cmd)}")
# Use subprocess.run instead of Popen for better error handling
result = subprocess.run(
cmd,
capture_output=True,
text=True,
check=False # Don't raise exception on non-zero exit
)
if result.returncode != 0:
logging.error(f"FFprobe failed with return code {result.returncode}")
logging.error(f"FFprobe stderr: {result.stderr}")
logging.error(f"FFprobe stdout: {result.stdout}")
logging.error(f"Command: {' '.join(cmd)}")
logging.error(f"FFprobe path permissions: {oct(os.stat(ffprobe_path).st_mode)}")
return None
# Parse JSON output
try:
info = json.loads(result.stdout)
return {
'format_name': info.get('format', {}).get('format_name'),
'codec_names': [stream.get('codec_name') for stream in info.get('streams', [])]
}
except json.JSONDecodeError as e:
logging.error(f"Failed to parse FFprobe output: {e}")
return None
except Exception as e:
logging.error(f"Failed to parse JSON output from ffprobe for file {file_path}: {e}")
logging.error(f"FFprobe execution failed: {e}")
return None
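The probing flow above can be exercised standalone. A hedged sketch, assuming an ffprobe binary on PATH and a local file named video.mp4 (both assumptions):

import json
import subprocess

cmd = ['ffprobe', '-v', 'error', '-show_format', '-show_streams',
       '-print_format', 'json', 'video.mp4']
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
if result.returncode == 0:
    info = json.loads(result.stdout)
    print(info.get('format', {}).get('format_name'))
    print([s.get('codec_name') for s in info.get('streams', [])])
else:
    print("ffprobe failed:", result.stderr)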
@ -173,8 +208,11 @@ def is_png_format_or_codec(file_info):
if not file_info:
return False
#console.print(f"[yellow][FFmpeg] [cyan]Avaiable codec[white]: [red]{file_info['codec_names']}")
# Handle None values in format_name gracefully
format_name = file_info.get('format_name')
codec_names = file_info.get('codec_names', [])
return format_name == 'png_pipe' or 'png' in codec_names
def need_to_force_to_ts(file_path):
@ -225,4 +263,4 @@ def check_duration_v_a(video_path, audio_path, tolerance=1.0):
if duration_difference <= tolerance:
return True, duration_difference
else:
return False, duration_difference

View File

@ -1,6 +1,5 @@
# 21.04.25
import time
import logging
import threading
@ -14,7 +13,6 @@ from tqdm import tqdm
# Internal utilities
from StreamingCommunity.Util.color import Colors
from StreamingCommunity.Util.os import internet_manager
@ -31,16 +29,16 @@ class M3U8_Ts_Estimator:
self.segments_instance = segments_instance
self.lock = threading.Lock()
self.speed = {"upload": "N/A", "download": "N/A"}
self._running = True
self.speed_thread = threading.Thread(target=self.capture_speed)
self.speed_thread.daemon = True
self.speed_thread.start()
def __del__(self):
"""Ensure thread is properly stopped when the object is destroyed."""
self._running = False
def add_ts_file(self, size: int):
"""Add a file size to the list of file sizes."""
if size <= 0:
@ -50,32 +48,44 @@ class M3U8_Ts_Estimator:
self.ts_file_sizes.append(size)
def capture_speed(self, interval: float = 1.5):
"""Capture the internet speed periodically."""
"""Capture the internet speed periodically with improved efficiency."""
last_upload, last_download = 0, 0
speed_buffer = deque(maxlen=3)
while self._running:
try:
# Get IO counters only once per loop to reduce function calls
io_counters = psutil.net_io_counters()
if not io_counters:
raise ValueError("No IO counters available")
current_upload, current_download = io_counters.bytes_sent, io_counters.bytes_recv
if last_upload and last_download:
upload_speed = (current_upload - last_upload) / interval
download_speed = (current_download - last_download) / interval
# Only update buffer when we have valid data
if download_speed > 0:
speed_buffer.append(download_speed)
# Use a more efficient approach for thread synchronization
avg_speed = sum(speed_buffer) / len(speed_buffer) if speed_buffer else 0
formatted_upload = internet_manager.format_transfer_speed(max(0, upload_speed))
formatted_download = internet_manager.format_transfer_speed(avg_speed)
# Minimize lock time by preparing data outside the lock
with self.lock:
self.speed = {
"upload": internet_manager.format_transfer_speed(max(0, upload_speed)),
"download": internet_manager.format_transfer_speed(sum(speed_buffer) / len(speed_buffer))
"upload": formatted_upload,
"download": formatted_download
}
logging.debug(f"Updated speeds - Upload: {self.speed['upload']}, Download: {self.speed['download']}")
last_upload, last_download = current_upload, current_download
except Exception as e:
logging.error(f"Error in speed capture: {str(e)}")
if self._running: # Only log if we're still supposed to be running
logging.error(f"Error in speed capture: {str(e)}")
self.speed = {"upload": "N/A", "download": "N/A"}
time.sleep(interval)
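The loop above boils down to sampling psutil's cumulative IO counters and dividing the delta by the interval. A minimal one-shot sketch of that measurement (the interval value is an assumption):

import time
import psutil

interval = 1.5
prev = psutil.net_io_counters()
time.sleep(interval)
cur = psutil.net_io_counters()
download_bps = (cur.bytes_recv - prev.bytes_recv) / interval
upload_bps = (cur.bytes_sent - prev.bytes_sent) / interval
print(f"down {download_bps / 1024:.1f} KB/s, up {upload_bps / 1024:.1f} KB/s")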
@ -88,6 +98,10 @@ class M3U8_Ts_Estimator:
str: The mean size of the files in a human-readable format.
"""
try:
# Only do calculations if we have data
if not self.ts_file_sizes:
return "0 B"
total_size = sum(self.ts_file_sizes)
mean_size = total_size / len(self.ts_file_sizes)
return internet_manager.format_file_size(mean_size)
@ -101,32 +115,34 @@ class M3U8_Ts_Estimator:
self.add_ts_file(total_downloaded * self.total_segments)
file_total_size = self.calculate_total_size()
if file_total_size == "Error":
return
number_file_total_size = file_total_size.split(' ')[0]
units_file_total_size = file_total_size.split(' ')[1]
# Reduce lock contention by acquiring data with minimal synchronization
retry_count = 0
if self.segments_instance:
with self.segments_instance.active_retries_lock:
retry_count = self.segments_instance.active_retries
# Get speed data outside of any locks
speed_data = ["N/A", ""]
with self.lock:
download_speed = self.speed['download']
if download_speed != "N/A":
speed_data = download_speed.split(" ")
average_internet_speed = speed_data[0] if len(speed_data) >= 1 else "N/A"
average_internet_unit = speed_data[1] if len(speed_data) >= 2 else ""
progress_str = (
f"{Colors.GREEN}{number_file_total_size} {Colors.RED}{units_file_total_size}"
f"{Colors.WHITE}, {Colors.CYAN}{average_internet_speed} {Colors.RED}{average_internet_unit} "
#f"{Colors.WHITE}, {Colors.GREEN}CRR {Colors.RED}{retry_count} "
)
progress_counter.set_postfix_str(progress_str)

View File

@ -1,6 +1,6 @@
# 20.04.25
import re
import logging
@ -418,18 +418,38 @@ class M3U8_Parser:
- uri (str): The URI containing video information.
Returns:
tuple: The video resolution (width, height) if found, otherwise (0, 0).
"""
# Log
logging.info(f"Try extract resolution from: {uri}")
# First try: Check for known resolutions
for resolution in RESOLUTIONS:
if "http" in str(uri):
if str(resolution[1]) in uri:
return resolution
# Pattern to match common resolution formats like 854x480, 1280x720, etc.
resolution_patterns = [
r'(\d+)x(\d+)', # Match format: 854x480
r'(\d+)p', # Match format: 480p, 720p, etc.
r'_(\d+)x(\d+)' # Match format: _854x480
]
for pattern in resolution_patterns:
matches = re.findall(pattern, uri)
if matches:
if isinstance(matches[0], tuple):  # Format like 854x480: two captured groups
width, height = int(matches[0][0]), int(matches[0][1])
return (width, height)
else:  # Format like 480p: a single captured group (a string)
height = int(matches[0])
# Estimate width based on common aspect ratios (16:9)
width = int(height * 16 / 9)
return (width, height)
logging.warning("No resolution found with custom parsing.")
return (0, 0)
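A quick check of the patterns above against two made-up URIs exercises both branches, including the 16:9 width estimate for bare "Np" names:

import re

for uri in ("https://cdn.example/video_1280x720.m3u8",
            "https://cdn.example/video_480p.m3u8"):
    m = re.findall(r'(\d+)x(\d+)', uri)
    if m:
        print(uri, '->', (int(m[0][0]), int(m[0][1])))
        continue
    m = re.findall(r'(\d+)p', uri)
    if m:
        height = int(m[0])
        print(uri, '->', (int(height * 16 / 9), height))  # estimated width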

View File

@ -0,0 +1,72 @@
# 29.04.25
import sys
import time
import signal
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
# External library
import httpx
from rich import print
# Internal utilities
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.headers import get_headers
# Variable
MAX_TIMEOUT = config_manager.get_int("REQUESTS", "timeout")
class ProxyFinder:
def __init__(self, url, timeout_threshold: float = 7.0):
self.url = url
self.timeout_threshold = timeout_threshold
self.shutdown_flag = False
signal.signal(signal.SIGINT, self._handle_interrupt)
def _test_single_request(self, proxy_info: tuple) -> tuple:
proxy, source = proxy_info
try:
start = time.time()
print(f"[yellow]Testing proxy for URL: {self.url}...")
with httpx.Client(proxy=proxy, timeout=self.timeout_threshold) as client:
response = client.get(self.url, headers=get_headers())
if response.status_code == 200:
return (True, time.time() - start, response, source)
except Exception:
pass
return (False, self.timeout_threshold + 1, None, source)
def test_proxy(self, proxy_info: tuple) -> tuple:
proxy, source = proxy_info
if self.shutdown_flag:
return (proxy, False, 0, None, source)
success1, time1, text1, source = self._test_single_request(proxy_info)
if not success1 or time1 > self.timeout_threshold:
return (proxy, False, time1, None, source)
success2, time2, _, source = self._test_single_request(proxy_info)
avg_time = (time1 + time2) / 2
return (proxy, success2 and time2 <= self.timeout_threshold, avg_time, text1, source)
def _handle_interrupt(self, sig, frame):
print("\n[red]Received keyboard interrupt. Terminating...")
self.shutdown_flag = True
sys.exit(0)
def find_fast_proxy(self) -> str:
try:
proxy_config = config_manager.get("REQUESTS", "proxy")
if proxy_config and isinstance(proxy_config, dict) and 'http' in proxy_config:
print("[cyan]Using configured proxy from config.json...[/cyan]")
return proxy_config['http']
except Exception as e:
print(f"[red]Error getting configured proxy: {str(e)}[/red]")
return None
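Using the class above is a two-step affair: construct it with the target URL, then feed it (proxy, source) tuples. A hedged sketch with a made-up local proxy address:

finder = ProxyFinder("https://example.com")
proxy, ok, avg_time, response, source = finder.test_proxy(("http://127.0.0.1:8080", "config"))
print(ok, f"{avg_time:.2f}s", source)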

View File

@ -0,0 +1,62 @@
{
"DEFAULT": {
"debug": false,
"show_message": true,
"clean_console": true,
"show_trending": true,
"use_api": true,
"not_close": false,
"telegram_bot": true,
"download_site_data": true,
"validate_github_config": true
},
"OUT_FOLDER": {
"root_path": "/mnt/data/media/",
"movie_folder_name": "films",
"serie_folder_name": "serie_tv",
"anime_folder_name": "Anime",
"map_episode_name": "E%(episode)_%(episode_name)",
"add_siteName": false
},
"QBIT_CONFIG": {
"host": "192.168.1.51",
"port": "6666",
"user": "admin",
"pass": "adminadmin"
},
"M3U8_DOWNLOAD": {
"tqdm_delay": 0.01,
"default_video_workser": 12,
"default_audio_workser": 12,
"segment_timeout": 8,
"download_audio": true,
"merge_audio": true,
"specific_list_audio": [
"ita"
],
"download_subtitle": true,
"merge_subs": true,
"specific_list_subtitles": [
"ita",
"eng"
],
"cleanup_tmp_folder": true
},
"M3U8_CONVERSION": {
"use_codec": false,
"use_vcodec": true,
"use_acodec": true,
"use_bitrate": true,
"use_gpu": false,
"default_preset": "ultrafast"
},
"M3U8_PARSER": {
"force_resolution": "Best",
"get_only_link": false
},
"REQUESTS": {
"verify": false,
"timeout": 20,
"max_retry": 8
}
}
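Consumers read this file through the project's config_manager; as a minimal sketch, the same keys can be pulled from a local copy with the standard library (the file name and path are assumptions):

import json

with open("config.json", "r", encoding="utf-8") as f:
    cfg = json.load(f)
print(cfg["OUT_FOLDER"]["root_path"])                   # /mnt/data/media/
print(cfg["M3U8_DOWNLOAD"]["specific_list_subtitles"])  # ['ita', 'eng']
print(cfg["REQUESTS"]["timeout"])                       # 20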

View File

@ -575,6 +575,10 @@ class TelegramBot:
cleaned_output = cleaned_output.replace(
"\n\n", "\n"
) # Remove multiple newlines
# Initialize the variables
cleaned_output_0 = None # or ""
cleaned_output_1 = None # or ""
# cleaned_output holds a string; extract whatever sits between ## ##
download_section = re.search(r"##(.*?)##", cleaned_output, re.DOTALL)

View File

@ -3,7 +3,8 @@
import os
import sys
import time
import asyncio
import importlib.metadata
# External library
import httpx
@ -11,7 +12,7 @@ from rich.console import Console
# Internal utilities
from .version import __version__ as source_code_version, __author__, __title__
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.headers import get_userAgent
@ -24,25 +25,33 @@ else:
base_path = os.path.dirname(__file__)
console = Console()
async def fetch_github_data(client, url):
"""Helper function to fetch data from GitHub API"""
response = await client.get(
url=url,
headers={'user-agent': get_userAgent()},
timeout=config_manager.get_int("REQUESTS", "timeout"),
follow_redirects=True
)
return response.json()
async def async_github_requests():
"""Make concurrent GitHub API requests"""
async with httpx.AsyncClient() as client:
tasks = [
fetch_github_data(client, f"https://api.github.com/repos/{__author__}/{__title__}"),
fetch_github_data(client, f"https://api.github.com/repos/{__author__}/{__title__}/releases"),
fetch_github_data(client, f"https://api.github.com/repos/{__author__}/{__title__}/commits")
]
return await asyncio.gather(*tasks)
def update():
"""
Check for updates on GitHub and display relevant information.
"""
try:
# Run async requests concurrently
response_reposity, response_releases, response_commits = asyncio.run(async_github_requests())
except Exception as e:
console.print(f"[red]Error accessing GitHub API: {e}")
@ -66,11 +75,27 @@ def update():
else:
percentual_stars = 0
# Get the current version (installed version)
try:
current_version = importlib.metadata.version(__title__)
except importlib.metadata.PackageNotFoundError:
#console.print(f"[yellow]Warning: Could not determine installed version for '{__title__}' via importlib.metadata. Falling back to source version.[/yellow]")
current_version = source_code_version
# Get commit details
latest_commit = response_commits[0] if response_commits else None
if latest_commit:
latest_commit_message = latest_commit.get('commit', {}).get('message', 'No commit message')
else:
latest_commit_message = 'No commit history available'
console.print(f"\n[cyan]Current installed version: [yellow]{current_version}")
console.print(f"[cyan]Last commit: [yellow]{latest_commit_message}")
if str(current_version).replace('v', '') != str(last_version).replace('v', ''):
console.print(f"\n[cyan]New version available: [yellow]{last_version}")
console.print(f"\n[red]{__title__} has been downloaded [yellow]{total_download_count} [red]times, but only [yellow]{percentual_stars}% [red]of users have starred it.\n\
[cyan]Help the repository grow today by leaving a [yellow]star [cyan]and [yellow]sharing [cyan]it with others online!")
time.sleep(4)
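The version lookup above prefers the installed package metadata and only falls back to the source tree. A standalone sketch of the same pattern (the package name and fallback string are assumptions):

import importlib.metadata

try:
    current_version = importlib.metadata.version("StreamingCommunity")
except importlib.metadata.PackageNotFoundError:
    current_version = "3.0.9"  # source-tree fallback, mirroring source_code_version
print(current_version.replace("v", ""))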

View File

@ -1,5 +1,5 @@
__title__ = 'StreamingCommunity'
__version__ = '3.0.9'
__author__ = 'Arrowar'
__description__ = 'A command-line program to download film'
__copyright__ = 'Copyright 2025'

View File

@ -12,6 +12,10 @@ from typing import Any, List
from rich.console import Console
# Internal utilities
from StreamingCommunity.Util.headers import get_userAgent
# Variable
console = Console()
download_site_data = True
@ -32,8 +36,10 @@ class ConfigManager:
base_path = os.path.dirname(sys.executable)
else:
# Get the actual path of the module file
current_file_path = os.path.abspath(__file__)
base_path = os.path.dirname(os.path.dirname(os.path.dirname(current_file_path)))
# Initialize file paths
self.file_path = os.path.join(base_path, file_name)
@ -134,7 +140,7 @@ class ConfigManager:
console.print(f"[bold cyan]Downloading reference configuration:[/bold cyan] [green]{self.reference_config_url}[/green]")
try:
response = requests.get(self.reference_config_url, timeout=8, headers={'User-Agent': get_userAgent()})
if response.status_code == 200:
with open(self.file_path, 'wb') as f:
@ -156,13 +162,12 @@ class ConfigManager:
try:
# Download the reference configuration
console.print(f"[bold cyan]Validating configuration with GitHub...[/bold cyan]")
response = requests.get(self.reference_config_url, timeout=8, headers={'User-Agent': get_userAgent()})
if not response.ok:
raise Exception(f"Error downloading reference configuration. Code: {response.status_code}")
reference_config = response.json()
console.print(f"[bold cyan]Reference configuration downloaded:[/bold cyan] [green]{len(reference_config)} keys available[/green]")
# Compare and update missing keys
merged_config = self._deep_merge_configs(self.config, reference_config)
@ -263,40 +268,32 @@ class ConfigManager:
self._load_site_data_from_file()
def _load_site_data_from_api(self) -> None:
"""Load site data from API."""
"""Load site data from GitHub."""
domains_github_url = "https://raw.githubusercontent.com/Arrowar/StreamingCommunity/refs/heads/main/.github/.domain/domains.json"
headers = {
"apikey": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Inp2Zm5ncG94d3Jnc3duenl0YWRoIiwicm9sZSI6ImFub24iLCJpYXQiOjE3NDAxNTIxNjMsImV4cCI6MjA1NTcyODE2M30.FNTCCMwi0QaKjOu8gtZsT5yQttUW8QiDDGXmzkn89QE",
"Authorization": f"Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Inp2Zm5ncG94d3Jnc3duenl0YWRoIiwicm9sZSI6ImFub24iLCJpYXQiOjE3NDAxNTIxNjMsImV4cCI6MjA1NTcyODE2M30.FNTCCMwi0QaKjOu8gtZsT5yQttUW8QiDDGXmzkn89QE",
"Content-Type": "application/json"
"User-Agent": get_userAgent()
}
try:
console.print("[bold cyan]Retrieving site data from API...[/bold cyan]")
response = requests.get("https://zvfngpoxwrgswnzytadh.supabase.co/rest/v1/public", headers=headers, timeout=10)
console.print(f"[bold cyan]Retrieving site data from GitHub:[/bold cyan] [green]{domains_github_url}[/green]")
response = requests.get(domains_github_url, timeout=8, headers=headers)
if response.ok:
self.configSite = response.json()
site_count = len(self.configSite) if isinstance(self.configSite, dict) else 0
console.print(f"[bold green]Site data loaded from GitHub:[/bold green] {site_count} streaming services found.")
else:
console.print(f"[bold red]API request failed:[/bold red] HTTP {response.status_code}, {response.text[:100]}")
console.print(f"[bold red]GitHub request failed:[/bold red] HTTP {response.status_code}, {response.text[:100]}")
self._handle_site_data_fallback()
except json.JSONDecodeError as e:
console.print(f"[bold red]Error parsing JSON from GitHub:[/bold red] {str(e)}")
self._handle_site_data_fallback()
except Exception as e:
console.print(f"[bold red]API connection error:[/bold red] {str(e)}")
console.print(f"[bold red]GitHub connection error:[/bold red] {str(e)}")
self._handle_site_data_fallback()
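The GitHub fetch above reduces to one requests call against the raw domains.json URL. A hedged standalone sketch (the User-Agent value is a placeholder for what get_userAgent() returns):

import requests

url = "https://raw.githubusercontent.com/Arrowar/StreamingCommunity/refs/heads/main/.github/.domain/domains.json"
resp = requests.get(url, timeout=8, headers={"User-Agent": "Mozilla/5.0"})
if resp.ok:
    domains = resp.json()
    print(len(domains) if isinstance(domains, dict) else 0, "streaming services")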
def _load_site_data_from_file(self) -> None:
@ -347,7 +344,7 @@ class ConfigManager:
try:
logging.info(f"Downloading {filename} from {url}...")
console.print(f"[bold cyan]File download:[/bold cyan] {os.path.basename(filename)}")
response = requests.get(url, timeout=8, headers={'User-Agent': get_userAgent()})
if response.status_code == 200:
with open(filename, 'wb') as f:
@ -561,7 +558,6 @@ class ConfigManager:
return section in config_source
def get_use_large_bar():
"""
Determine if the large bar feature should be enabled.

View File

@ -238,6 +238,31 @@ class FFMPEGDownloader:
Returns:
Tuple[Optional[str], Optional[str], Optional[str]]: Paths to ffmpeg, ffprobe, and ffplay executables.
"""
if self.os_name == 'linux':
try:
# Attempt to install FFmpeg using apt
console.print("[bold blue]Trying to install FFmpeg using 'sudo apt install ffmpeg'[/]")
result = subprocess.run(
['sudo', 'apt', 'install', '-y', 'ffmpeg'],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
)
if result.returncode == 0:
ffmpeg_path = shutil.which('ffmpeg')
ffprobe_path = shutil.which('ffprobe')
if ffmpeg_path and ffprobe_path:
console.print("[bold green]FFmpeg successfully installed via apt[/]")
return ffmpeg_path, ffprobe_path, None
else:
console.print("[bold yellow]Failed to install FFmpeg via apt. Proceeding with static download.[/]")
except Exception as e:
logging.error(f"Error during 'sudo apt install ffmpeg': {e}")
console.print("[bold red]Error during 'sudo apt install ffmpeg'. Proceeding with static download.[/]")
# Proceed with static download if apt installation fails or is not applicable
config = FFMPEG_CONFIGURATION[self.os_name]
executables = [exe.format(arch=self.arch) for exe in config['executables']]
successful_extractions = []
@ -346,4 +371,4 @@ def check_ffmpeg() -> Tuple[Optional[str], Optional[str], Optional[str]]:
except Exception as e:
logging.error(f"Error checking or downloading FFmpeg executables: {e}")
return None, None, None
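The apt path above only helps when the binaries end up on PATH, and that is the same check a caller can run up front. A small sketch using shutil.which:

import shutil
import subprocess

ffmpeg_path = shutil.which("ffmpeg")
ffprobe_path = shutil.which("ffprobe")
if ffmpeg_path and ffprobe_path:
    out = subprocess.run([ffmpeg_path, "-version"], capture_output=True, text=True)
    print(out.stdout.splitlines()[0])
else:
    print("ffmpeg/ffprobe not on PATH; a static download would be needed")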

View File

@ -4,7 +4,6 @@ import io
import os
import glob
import sys
import shutil
import hashlib
import logging
@ -13,15 +12,14 @@ import inspect
import subprocess
import contextlib
import importlib.metadata
import socket
# External library
import httpx
from unidecode import unidecode
from rich.console import Console
from rich.prompt import Prompt
from pathvalidate import sanitize_filename, sanitize_filepath
# Internal utilities
@ -107,16 +105,14 @@ class OsManager:
if not path:
return path
# Decode unicode characters and perform basic sanitization
decoded = unidecode(path)
sanitized = sanitize_filepath(decoded)
if self.system == 'windows':
# Handle network paths (UNC or IP-based)
if sanitized.startswith('\\\\') or sanitized.startswith('//'):
parts = sanitized.replace('/', '\\').split('\\')
# Keep server/IP and share name as is
sanitized_parts = parts[:4]
# Sanitize remaining parts
@ -129,9 +125,9 @@ class OsManager:
return '\\'.join(sanitized_parts)
# Handle drive letters
elif len(sanitized) >= 2 and sanitized[1] == ':':
drive = sanitized[:2]
rest = sanitized[2:].lstrip('\\').lstrip('/')
path_parts = [drive] + [
self.get_sanitize_file(part)
for part in rest.replace('/', '\\').split('\\')
@ -141,12 +137,12 @@ class OsManager:
# Regular path
else:
parts = sanitized.replace('/', '\\').split('\\')
return '\\'.join(p for p in parts if p)
else:
# Handle Unix-like paths (Linux and macOS)
is_absolute = sanitized.startswith('/')
parts = sanitized.replace('\\', '/').split('/')
sanitized_parts = [
self.get_sanitize_file(part)
for part in parts
@ -287,6 +283,61 @@ class InternManager():
else:
return f"{bytes / (1024 * 1024):.2f} MB/s"
# def check_dns_provider(self):
# """
# Check if the system's current DNS server matches any known DNS providers.
# Returns:
# bool: True if the current DNS server matches a known provider,
# False if no match is found or in case of errors
# """
# dns_providers = {
# "Cloudflare": ["1.1.1.1", "1.0.0.1"],
# "Google": ["8.8.8.8", "8.8.4.4"],
# "OpenDNS": ["208.67.222.222", "208.67.220.220"],
# "Quad9": ["9.9.9.9", "149.112.112.112"],
# "AdGuard": ["94.140.14.14", "94.140.15.15"],
# "Comodo": ["8.26.56.26", "8.20.247.20"],
# "Level3": ["209.244.0.3", "209.244.0.4"],
# "Norton": ["199.85.126.10", "199.85.127.10"],
# "CleanBrowsing": ["185.228.168.9", "185.228.169.9"],
# "Yandex": ["77.88.8.8", "77.88.8.1"]
# }
# try:
# resolver = dns.resolver.Resolver()
# nameservers = resolver.nameservers
# if not nameservers:
# return False
# for server in nameservers:
# for provider, ips in dns_providers.items():
# if server in ips:
# return True
# return False
# except Exception:
# return False
def check_dns_resolve(self):
"""
Check if the system's current DNS server can resolve a domain name.
Works on both Windows and Unix-like systems.
Returns:
bool: True if the current DNS server can resolve a domain name,
False if it can't resolve or in case of errors
"""
test_domains = ["github.com", "google.com", "microsoft.com", "amazon.com"]
try:
for domain in test_domains:
# socket.gethostbyname() works consistently across all platforms
socket.gethostbyname(domain)
return True
except (socket.gaierror, socket.error):
return False
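check_dns_resolve succeeds only when every test domain resolves, which is what makes it usable as a health check before any site lookup. A standalone version of the same idea:

import socket

def dns_works(domains=("github.com", "google.com")):
    try:
        for domain in domains:
            socket.gethostbyname(domain)  # raises socket.gaierror on failure
        return True
    except (socket.gaierror, socket.error):
        return False

print(dns_works())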
class OsSummary:
def __init__(self):
@ -357,12 +408,15 @@ class OsSummary:
Exits with a message if not the official version.
"""
python_implementation = platform.python_implementation()
python_version = platform.python_version()
if python_implementation != "CPython":
console.print(f"[bold red]Warning: You are using a non-official Python distribution: {python_implementation}.[/bold red]")
console.print("Please install the official Python from [bold blue]https://www.python.org[/bold blue] and try again.", style="bold yellow")
sys.exit(0)
console.print(f"[cyan]Python version: [bold red]{python_version}[/bold red]")
def get_system_summary(self):
self.check_python_version()
@ -454,4 +508,4 @@ def get_ffmpeg_path():
def get_ffprobe_path():
"""Returns the path of FFprobe."""
return os_summary.ffprobe_path

View File

@ -158,7 +158,8 @@ class TVShowManager:
else:
key = Prompt.ask(prompt_msg)
else:
# Include empty string in choices to allow pagination with Enter key
choices = [""] + [str(i) for i in range(max_int_input + 1)] + ["q", "quit", "b", "back"]
prompt_msg = "[cyan]Insert media [red]index"
telegram_msg = "Scegli il contenuto da scaricare:\n Serie TV - Film - Anime\noppure `back` per tornare indietro"
@ -199,7 +200,8 @@ class TVShowManager:
else:
key = Prompt.ask(prompt_msg)
else:
# Include empty string in choices to allow pagination with Enter key
choices = [""] + [str(i) for i in range(max_int_input + 1)] + ["q", "quit", "b", "back"]
prompt_msg = "[cyan]Insert media [red]index"
telegram_msg = "Scegli il contenuto da scaricare:\n Serie TV - Film - Anime\noppure `back` per tornare indietro"
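The empty-string choice is the whole trick: rich's Prompt validates responses against the choices list, so listing "" lets a bare Enter through instead of re-asking, and the caller can treat it as "next page". A minimal sketch (a max_int_input of 5 is an assumption):

from rich.prompt import Prompt

choices = [""] + [str(i) for i in range(5 + 1)] + ["q", "quit", "b", "back"]
key = Prompt.ask("[cyan]Insert media [red]index", choices=choices, show_choices=False)
if key == "":
    print("next page")  # plain Enter -> paginate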

View File

@ -57,11 +57,12 @@ def load_search_functions():
# Get 'indice' from the module
indice = getattr(mod, 'indice', 0)
use_for = getattr(mod, '_useFor', 'other')
priority = getattr(mod, '_priority', 0)
if priority == 0:
if not getattr(mod, '_deprecate'):
modules.append((module_name, indice, use_for))
except Exception as e:
console.print(f"[red]Failed to import module {module_name}: {str(e)}")
@ -156,7 +157,7 @@ def global_search(search_terms: str = None, selected_sites: list = None):
# Display progress information
console.print(f"\n[bold green]Searching for:[/bold green] [yellow]{search_terms}[/yellow]")
console.print(f"[bold green]Searching across:[/bold green] {len(selected_sites)} sites")
console.print(f"[bold green]Searching across:[/bold green] {len(selected_sites)} sites \n")
with Progress() as progress:
search_task = progress.add_task("[cyan]Searching...", total=len(selected_sites))
@ -187,7 +188,7 @@ def global_search(search_terms: str = None, selected_sites: list = None):
item_dict['source_alias'] = alias
all_results[alias].append(item_dict)
console.print(f"[green]Found {len(database.media_list)} results from {site_name}")
console.print(f"\n[green]Found {len(database.media_list)} results from {site_name}")
except Exception as e:
console.print(f"[bold red]Error searching {site_name}:[/bold red] {str(e)}")
@ -299,17 +300,26 @@ def process_selected_item(selected_item, search_functions):
console.print(f"\n[bold green]Processing selection from:[/bold green] {selected_item.get('source')}")
# Extract necessary information to pass to the site's search function
item_id = None
for id_field in ['id', 'media_id', 'ID', 'item_id', 'url']:
item_id = selected_item.get(id_field)
if item_id:
break
item_type = selected_item.get('type', selected_item.get('media_type', 'unknown'))
item_title = selected_item.get('title', selected_item.get('name', 'Unknown'))
if item_id:
console.print(f"[bold green]Selected item:[/bold green] {item_title} (ID: {item_id}, Type: {item_type})")
# Call the site's search function with direct_item parameter to process download
try:
func(direct_item=selected_item)
except Exception as e:
console.print(f"[bold red]Error processing download:[/bold red] {str(e)}")
logging.exception("Download processing error")
else:
console.print("[bold red]Error: Item ID not found.[/bold red]")
console.print("[bold red]Error: Item ID not found. Available fields:[/bold red]")
for key in selected_item.keys():
console.print(f"[yellow]- {key}: {selected_item[key]}[/yellow]")

View File

@ -21,7 +21,7 @@ from rich.prompt import Prompt
from .global_search import global_search
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.config_json import config_manager
from StreamingCommunity.Util.os import os_summary
from StreamingCommunity.Util.os import os_summary, internet_manager
from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Upload.update import update as git_update
from StreamingCommunity.Lib.TMBD import tmdb
@ -30,7 +30,7 @@ from StreamingCommunity.TelegramHelp.telegram_bot import get_bot_instance, Teleg
# Config
SHOW_TRENDING = config_manager.get_bool('DEFAULT', 'show_trending')
NOT_CLOSE_CONSOLE = config_manager.get_bool('DEFAULT', 'not_close')
TELEGRAM_BOT = config_manager.get_bool('DEFAULT', 'telegram_bot')
@ -61,7 +61,7 @@ def load_search_functions():
loaded_functions = {}
# Sites to exclude when TELEGRAM_BOT is enabled
excluded_sites = {"cb01new", "ddlstreamitaly", "guardaserie", "ilcorsaronero", "mostraguarda"} if TELEGRAM_BOT else set()
excluded_sites = {"cb01new", "guardaserie", "ilcorsaronero", "mostraguarda"} if TELEGRAM_BOT else set()
# Find api home directory
if getattr(sys, 'frozen', False): # PyInstaller mode
@ -89,11 +89,10 @@ def load_search_functions():
mod = importlib.import_module(f'StreamingCommunity.Api.Site.{module_name}')
# Get 'indice' from the module
indice = getattr(mod, 'indice')
use_for = getattr(mod, '_useFor')
if not getattr(mod, '_deprecate'):
modules.append((module_name, indice, use_for))
except Exception as e:
@ -194,6 +193,13 @@ def force_exit():
def main(script_id = 0):
color_map = {
"anime": "red",
"film_&_serie": "yellow",
"serie": "blue",
"torrent": "white"
}
if TELEGRAM_BOT:
bot = get_bot_instance()
bot.send_message(f"Avviato script {script_id}", None)
@ -203,6 +209,29 @@ def main(script_id = 0):
# Create logger
log_not = Logger()
initialize()
# if not internet_manager.check_dns_provider():
# print()
# console.print("[red]❌ ERROR: DNS configuration is required!")
# console.print("[red]The program cannot function correctly without proper DNS settings.")
# console.print("[yellow]Please configure one of these DNS servers:")
# console.print("[blue]• Cloudflare (1.1.1.1) 'https://developers.cloudflare.com/1.1.1.1/setup/windows/'")
# console.print("[blue]• Quad9 (9.9.9.9) 'https://docs.quad9.net/Setup_Guides/Windows/Windows_10/'")
# console.print("\n[yellow]⚠️ The program will not work until you configure your DNS settings.")
# time.sleep(2)
# msg.ask("[yellow]Press Enter to continue ...")
if not internet_manager.check_dns_resolve():
print()
console.print("[red]❌ ERROR: DNS configuration is required!")
console.print("[red]The program cannot function correctly without proper DNS settings.")
console.print("[yellow]Please configure one of these DNS servers:")
console.print("[blue]• Cloudflare (1.1.1.1) 'https://developers.cloudflare.com/1.1.1.1/setup/windows/'")
console.print("[blue]• Quad9 (9.9.9.9) 'https://docs.quad9.net/Setup_Guides/Windows/Windows_10/'")
console.print("\n[yellow]⚠️ The program will not work until you configure your DNS settings.")
os._exit(0)
# Load search functions
search_functions = load_search_functions()
@ -245,30 +274,6 @@ def main(script_id = 0):
)
# Add arguments for search functions
parser.add_argument('-s', '--search', default=None, help='Search terms')
# Parse command-line arguments
@ -303,54 +308,45 @@ def main(script_id = 0):
global_search(search_terms)
return
# Create mappings using module indice
input_to_function = {}
choice_labels = {}
for alias, (func, use_for) in search_functions.items():
module_name = alias.split("_")[0]
try:
mod = importlib.import_module(f'StreamingCommunity.Api.Site.{module_name}')
site_index = str(getattr(mod, 'indice'))
input_to_function[site_index] = func
choice_labels[site_index] = (module_name.capitalize(), use_for.lower())
except Exception as e:
console.print(f"[red]Error mapping module {module_name}: {str(e)}")
# Add global search option to the menu
#global_search_key = str(len(choice_labels))
#choice_labels[global_search_key] = ("Global Search", "all")
#input_to_function[global_search_key] = global_search
# Display the category legend
legend_text = " | ".join([f"[{color}]{category.capitalize()}[/{color}]" for category, color in color_map.items()])
console.print(f"\n[bold green]Category Legend:[/bold green] {legend_text}")
# Construct prompt with proper color mapping
prompt_message = "[green]Insert category [white](" + ", ".join(
[f"{key}: [{color_map.get(label[1], 'white')}]{label[0]}[/{color_map.get(label[1], 'white')}]" for key, label in choice_labels.items()]
[f"[{color_map.get(label[1], 'white')}]{key}: {label[0]}[/{color_map.get(label[1], 'white')}]"
for key, label in choice_labels.items()]
) + "[white])"
if TELEGRAM_BOT:
# Display the category legend in a single line
category_legend_str = "Categorie: \n" + " | ".join([
f"{category.capitalize()}" for category in color_map.keys()
])
# Build the message without emoji
prompt_message = "Inserisci il sito:\n" + "\n".join(
[f"{key}: {label[0]}" for key, label in choice_labels.items()]
)
console.print(f"\n{prompt_message}")
# Ask the user for their choice via the Telegram bot
category = bot.ask(
"select_provider",
f"{category_legend_str}\n\n{prompt_message}",
None
)
else:
@ -358,13 +354,6 @@ def main(script_id = 0):
# Run the corresponding function based on user input
if category in input_to_function:
"""if category == global_search_key:
# Run global search
run_function(input_to_function[category], search_terms=search_terms)
else:"""
# Run normal site-specific search
run_function(input_to_function[category], search_terms=search_terms)
else:
@ -373,10 +362,11 @@ def main(script_id = 0):
console.print("[red]Invalid category.")
if NOT_CLOSE_CONSOLE:
restart_script()
else:
force_exit()
if TELEGRAM_BOT:
bot.send_message(f"Chiusura in corso", None)

View File

@ -1,5 +1,7 @@
# 23.06.24
import unittest
# Fix import
import sys
import os
@ -16,10 +18,31 @@ from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Lib.Downloader import HLS_Downloader
# Test
"""start_message()
logger = Logger()
print("Return: ", HLS_Downloader(
output_path="test.mp4",
result = HLS_Downloader(
output_path=".\\Video\\test.mp4",
m3u8_url="https://acdn.ak-stream-videoplatform.sky.it/hls/2024/11/21/968275/master.m3u8"
).start()
thereIsError = result['error'] is not None
print(thereIsError)"""
class TestHLSDownloader(unittest.TestCase):
def setUp(self):
os_summary.get_system_summary()
start_message()
self.logger = Logger()
def test_hls_download(self):
result = HLS_Downloader(
output_path=".\\Video\\test.mp4",
m3u8_url="https://acdn.ak-stream-videoplatform.sky.it/hls/2024/11/21/968275/master.m3u8"
).start()
thereIsError = result['error'] is not None
self.assertFalse(thereIsError, "HLS download resulted in an error")
if __name__ == '__main__':
unittest.main()

View File

@ -1,5 +1,7 @@
# 23.06.24
import unittest
# Fix import
import sys
import os
@ -16,10 +18,30 @@ from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Lib.Downloader import MP4_downloader
# Test
"""start_message()
logger = Logger()
print("Return: ", MP4_downloader(
path, kill_handler = MP4_downloader(
url="https://148-251-75-109.top/Getintopc.com/IDA_Pro_2020.mp4",
path=r".\Video\undefined.mp4"
))
path=r".\\Video\\undefined.mp4"
)
thereIsError = path is None
print(thereIsError)"""
class TestMP4Downloader(unittest.TestCase):
def setUp(self):
os_summary.get_system_summary()
start_message()
self.logger = Logger()
def test_mp4_download(self):
path, kill_handler = MP4_downloader(
url="https://148-251-75-109.top/Getintopc.com/IDA_Pro_2020.mp4",
path=r".\\Video\\undefined.mp4"
)
thereIsError = path is None
self.assertFalse(thereIsError, "MP4 download resulted in an error")
if __name__ == '__main__':
unittest.main()

View File

@ -1,22 +1,22 @@
# 23.11.24
# Fix import
import sys
import os
src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
sys.path.append(src_path)
# Import
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Api.Player.mixdrop import VideoSource
# Test
start_message()
logger = Logger()
video_source = VideoSource("https://cb01net.uno/pino-daniele-nero-a-meta-hd-2024/")
master_playlist = video_source.get_playlist()
print(master_playlist)

View File

@ -24,11 +24,6 @@
"user": "admin",
"pass": "adminadmin"
},
"REQUESTS": {
"verify": false,
"timeout": 20,
"max_retry": 8
},
"M3U8_DOWNLOAD": {
"tqdm_delay": 0.01,
"default_video_workser": 12,
@ -59,11 +54,10 @@
"force_resolution": "Best",
"get_only_link": false
},
"SITE_EXTRA": {
"ddlstreamitaly": {
"ips4_device_key": "",
"ips4_member_id": "",
"ips4_login_key": ""
}
"REQUESTS": {
"verify": false,
"timeout": 20,
"max_retry": 8,
"proxy": ""
}
}

View File

@ -1,20 +1,19 @@
FROM python:3.11-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
ffmpeg \
build-essential \
libssl-dev \
libffi-dev \
python3-dev \
libxml2-dev \
libxslt1-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "test_run.py"]

View File

@ -6,9 +6,12 @@ m3u8
certifi
psutil
unidecode
curl_cffi
dnspython
jsbeautifier
pathvalidate
pycryptodomex
ua-generator
qbittorrent-api
pyTelegramBotAPI
beautifulsoup4

View File

@ -1,4 +1,5 @@
import os
import re
from setuptools import setup, find_packages
def read_readme():
@ -8,9 +9,21 @@ def read_readme():
with open(os.path.join(os.path.dirname(__file__), "requirements.txt"), "r", encoding="utf-8-sig") as f:
required_packages = f.read().splitlines()
def get_version():
try:
import pkg_resources
return pkg_resources.get_distribution('StreamingCommunity').version
except Exception:
version_file_path = os.path.join(os.path.dirname(__file__), "StreamingCommunity", "Upload", "version.py")
with open(version_file_path, "r", encoding="utf-8") as f:
version_match = re.search(r"^__version__\s*=\s*['\"]([^'\"]*)['\"]", f.read(), re.M)
if version_match:
return version_match.group(1)
raise RuntimeError("Unable to find version string in StreamingCommunity/Upload/version.py.")
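pkg_resources is deprecated in current setuptools; a hedged alternative for the installed-version probe above is importlib.metadata, which ships with Python 3.8+:

import importlib.metadata

def get_installed_version(name="StreamingCommunity"):
    try:
        return importlib.metadata.version(name)
    except importlib.metadata.PackageNotFoundError:
        return None  # fall back to parsing version.py, as above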
setup(
name="StreamingCommunity",
version="2.9.8",
version=get_version(),
long_description=read_readme(),
long_description_content_type="text/markdown",
author="Lovi-0",
@ -29,4 +42,4 @@ setup(
"Bug Reports": "https://github.com/Lovi-0/StreamingCommunity/issues",
"Source": "https://github.com/Lovi-0/StreamingCommunity",
}
)